IRAP, Université de Toulouse, CNRS, UPS; 9 Av. colonel Roche, BP 44346, F-31028 Toulouse cedex 4, France
UMET, UMR 8207, Université Lille 1, CNRS, F-59655 Villeneuve d'Ascq, France
Ligne AILES - Synchrotron SOLEIL, L'Orme des Merisiers, F-91192 Gif-sur-Yvette, France
LPCNO, Université de Toulouse, CNRS, INSA, UPS, 135 avenue de Rangueil, 31077 Toulouse, France

To model the cold dust emission observed in the diffuse interstellar medium, in dense molecular clouds or in cold clumps that could eventually form new stars, it is mandatory to know the physical and spectroscopic properties of this dust and to understand its emission. This work is a continuation of previous studies aiming at providing astronomers with spectroscopic data of realistic cosmic dust analogues for the interpretation of observations. The aim of the present work is to extend the range of studied analogues to iron-rich silicate dust analogues. Ferromagnesium amorphous silicate dust analogues were produced by a sol-gel method with a mean composition close to Mg_1-xFe_xSiO_3 with x = 0.1, 0.2, 0.3, 0.4. Part of each sample was annealed at 500^∘C for two hours in a reducing atmosphere to modify the oxidation state of iron. We have measured the mass absorption coefficient (MAC) of these eight ferromagnesium amorphous silicate dust analogues in the spectral domain 30 - 1000 μm for grain temperatures in the range 10 - 300 K and at room temperature in the 5 - 40 μm range. The MAC of the ferromagnesium samples behaves in the same way as the MAC of pure Mg-rich amorphous silicate samples. In the 30 - 300 K range, the MAC increases with increasing grain temperature whereas in the range 10 - 30 K we do not see any change of the MAC. The MAC cannot be described by a single power law in λ^-β. The MAC of the samples does not show any clear trend with the iron content. However, the annealing process has, on average, an effect on the MAC that we explain by the evolution of the structure of the samples induced by the processing. The MAC of all the samples is much higher than the MAC calculated by dust models. The complex behavior of the MAC of amorphous silicates with wavelength and temperature is observed whatever the exact silicate composition (Mg vs. Fe amount). It is a universal characteristic of amorphous materials, and therefore of amorphous cosmic silicates, that should be taken into account in astronomical modeling. The enhanced MAC of the measured samples compared to the MAC calculated for cosmic dust models implies that dust masses are overestimated by the models.

Low temperature MIR/submm mass absorption coefficient of Fe-rich sol-gel silicates
K. Demyk et al.
Low-temperature MIR to submillimeter mass absorption coefficient of interstellar dust analogues II: Mg and Fe-rich amorphous silicates
K. Demyk^1, C. Meny^1, H. Leroux^2, C. Depecker^2, J.-B. Brubach^3, P. Roy^3, C. Nayral^4, W.-S. Ojo^4, F. Delpech^4
Received ***; accepted ***
=======================================================================================================================================

§ INTRODUCTION

The Herschel and Planck satellites have opened up the far infrared (FIR) and submillimeter (submm) spectral domain and we now have in hand a huge amount of observational data in the 250 μm - 1 mm (850 - 300 GHz) domain. This is the domain where cold dust grains (10 - 30 K) emit and dominate the continuum emission.
This FIR/submm emission traces cold astrophysical environments such as interstellar dense and diffuse clouds, cold clumps, and pre-stellar cores in our Galaxy as well as in external galaxies. It is used, for example, to derive the dust and cloud masses, which constitute important information for star formation studies. Accurate knowledge and understanding of the dust emission is also important for cosmological studies requiring the subtraction of the foreground emission from our Galaxy. However, a proper modeling of the FIR/submm dust emission is mandatory for making reliable interpretations of the observations. Dust emission is usually modeled using the modified blackbody model (MBB) and depends on the dust temperature and on the dust mass absorption coefficient (MAC), expressed as κ_λ = κ_λ_0 (λ/λ_0)^-β and characterized by its value at a reference wavelength, κ_λ_0, and by the spectral index, β, usually set to a value between 1 and 2, with no dependence on the temperature or wavelength. However, a great number of studies show that our understanding of cold dust emission is not complete. We refer the reader to <cit.> for a detailed description and discussion of recent observational results. Briefly, it appears that dust-emission models are not able to fit the recent FIR/submm observations from the Herschel and Planck missions, independently of the level of noise in the observations, of the methods used to fit the data <cit.>, and of temperature mixing along the line of sight <cit.>. These studies show that (i) the spectral index, β, is anti-correlated with the dust temperature <cit.>, (ii) the β value derived from the observations varies with the astrophysical environment <cit.> and (iii) β varies with the wavelength <cit.>. These observational results may be understood in terms of variations of the dust nature (composition and structure) in various environments <cit.> or in terms of the interaction of the dust with the electromagnetic radiation depending on the intrinsic dust physical properties <cit.>.

With this study, our group continues the effort to investigate the optical properties of cosmic dust analogues in the mid infrared (MIR) to the millimeter domain as a function of temperature, undertaken 20 years ago by different groups <cit.>. Briefly, <cit.> were the first to study relevant interstellar silicate dust analogues in the temperature range from 1.2 K to 30 K and in the wavelength range from 700 μm to 2.9 mm. <cit.> studied amorphous carbon samples and silicate samples in the 24 μm - 2 mm spectral domain and 24 - 300 K temperature range. <cit.> investigated silica and silicate samples in the 10 - 300 K temperature range and in the spectral region 100 - 1000 μm. The work by <cit.> was focused on pure sol-gel Mg-rich silicates, of composition close to enstatite (MgSiO_3) and forsterite (Mg_2SiO_4), amorphous and crystalline, whose spectra were recorded in the 100 - 1000 μm spectral range and from 10 K to 300 K. These studies have brought important results about the spectroscopic characteristics and behavior of interstellar dust analogues in the FIR at varying temperature. They show that the MAC of the amorphous analogues (silicates and carbonaceous matter) increases with the grain temperature and that its spectral shape cannot be approximated with a single power law of the form λ^-β. The dependence of the MAC on the temperature, which is not observed in crystalline samples, is related to the amorphous nature and to the amount of defects in the structure of the material <cit.>.
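As an aside, the MBB description recalled above is straightforward to evaluate numerically. The following minimal sketch (in Python; the parameter values κ_λ_0 = 1 cm^2 g^-1 at λ_0 = 250 μm, β = 1.8 and T = 17 K are illustrative assumptions, not values taken from this work) computes an optically thin MBB spectrum proportional to κ_λ B_λ(T):

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_lambda(lam, T):
    """Planck function B_lambda(T) for wavelength lam [m] and temperature T [K]."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def mbb_sed(lam, T, kappa0, lam0, beta):
    """Optically thin modified blackbody: kappa_lambda * B_lambda(T),
    with kappa_lambda = kappa0 * (lam/lam0)^-beta (arbitrary normalisation)."""
    kappa = kappa0 * (lam / lam0) ** (-beta)
    return kappa * planck_lambda(lam, T)

lam = np.logspace(-4.3, -3.0, 50)   # ~50 um to 1 mm, in metres
sed = mbb_sed(lam, T=17.0, kappa0=1.0, lam0=250e-6, beta=1.8)
```

Fitting such a curve to FIR/submm photometry, with β either fixed or left free, is precisely the procedure whose limitations are discussed here; it assumes a MAC that is independent of temperature, in contrast to the laboratory behavior described above.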
A physical model was proposed by <cit.> to explain these experimental results. This model, named the TLS model, is based on a description of the amorphous structure of the material in terms of a temperature-independent disordered charge distribution and of a collection of atomic configurations modeled as two-level systems and sensitive to the temperature. We extend this experimental work with the aim of delivering to the community a comprehensive and coherent data set measured on a common spectral domain (5 - 1000 μm) and temperature range (10 K - 300 K), on samples spanning a range of compositions, each set of which was synthesized using the same method. In <cit.>, we investigated Mg-rich amorphous silicates synthesized with a sol-gel method. <cit.> investigated four samples of amorphous Mg-rich glassy silicates with compositions close to enstatite, forsterite and one intermediate composition between forsterite and enstatite. The present study is focused on ferromagnesium amorphous silicate dust analogues. <cit.> derived, from transmission measurements, the MAC of one Fe-rich amorphous silicate sample (Mg_0.18Fe_1.82SiO_4) in the MIR-mm domain (20 - 2000 μm) at low temperature (24 - 300 K). More recently, <cit.> measured the transmission and reflection spectra of highly disordered, "chaotic", iron silicates (FeSiO) in the wavelength range 2 - 300 μm for grain temperatures in the range 5 - 300 K. <cit.> presented a preliminary study of a series of Mg/Fe-rich glassy pyroxene-like silicates with up to 50% iron, in the spectral range 50 μm - 1.2 mm and from 300 K down to 10 K, but they neither show nor discuss the low-temperature spectra in their article.

Iron is highly depleted from the gas phase in the ISM. It is most probably incorporated into cosmic dust grains, although the form it takes in the grains remains poorly constrained. Iron could be present in the dust grains in the form of metallic iron or FeS inclusions, as can be observed in the presolar glass with embedded metal and sulfides (GEMS, <cit.>) in porous interplanetary dust particles (IDPs). Based on elemental depletion observations and on the modeling of dust formation, <cit.> proposed that iron exists in metallic form, either as inclusions or as separate iron grains. Iron has also been proposed to be present as a population of iron oxide grains (magnetite, Fe_3O_4, or maghemite, γ-Fe_2O_3) <cit.> or as iron oxide inclusions in the silicate grains. However, the amount of iron sulfides and iron oxides must be sufficiently low to be compatible with the non-detection of the vibrational bands of these species. Iron could also be present in the amorphous silicate network, either in the form of ferric iron (Fe^3+) and/or in the form of ferrous iron (Fe^2+). Interestingly, <cit.>, and references therein, showed that a number of presolar silicates are ferromagnesian. Such silicates thus constitute relevant analogues of interstellar silicates. In this article, we present the study of Mg-Fe-rich silicates produced by a sol-gel method, of mean pyroxene composition close to Mg_1-xFe_xSiO_3 and containing from 10 to 40% iron (x = 0.1, 0.2, 0.3, 0.4). The paper is organized as follows: Sect. <ref> introduces the experimental procedures for the synthesis of the dust analogues, their characterization, and the spectroscopic measurements, Sect. <ref> presents the MAC of the studied samples measured in the 5 - 1000 μm range and in the 10 - 300 K temperature range, and Sect.
<ref> discusses these results and their implications for astrophysical studies.

§ EXPERIMENTS

§.§ Sample synthesis

Amorphous samples of composition Mg_1-xFe_xSiO_3, with x = 0.1, 0.2, 0.3, 0.4, were synthesized with a sol-gel method using nitrates as precursors. The method is described in detail in <cit.>. In the absence of iron, a clear and transparent gel is formed. The gel becomes increasingly brown as the Fe concentration of the final product increases. When the iron nitrate precursor is used, it is necessary to adjust the pH (∼1.7) with ammonium hydroxide in order to allow a reasonable gelation time and to avoid the precipitation of iron hydroxide. Once gelation has been reached, the gel is aged at ambient temperature for fifteen minutes before being dried at 110^∘C for 48 hours in an oven under primary vacuum. At this stage, the translucent gel, completely expanded in the liquid, shrinks dramatically, losing around 50-75% of its volume. Finally, the dried gel, called a xerogel, is ground in an agate mortar before the purification stage at 500^∘C, in air, for two hours. Under these synthesis conditions, ferric iron (Fe^3+ or FeIII) is dominant. These samples are named E10 to E40 for samples containing 10 to 40% iron. The ferric iron was partially reduced to ferrous iron (Fe^2+ or FeII) by loading the dehydrated gel into an oven with a gas stream (Ar + 10% H_2) at 500^∘C for 3 hours. These samples, in which part of the iron has been reduced, are named E10R to E40R for samples containing 10 to 40% iron; we also refer to these later in the article as "processed" samples.

§.§ Sample characterization

The sol-gel synthesis products were characterized by transmission electron microscopy (TEM) in order to infer their morphology at the nanometer scale and their local chemical composition. The TEM examination requires thin samples (typically less than 100 nm thick). To prepare the samples, pieces of sol-gel blocks were crushed in alcohol. A drop of alcohol, containing a large number of small fragments in suspension, was deposited on a carbon film supported by a TEM copper grid. The TEM characterization was performed using a FEI Tecnai G2-20 twin at the electron microscopy facility of the University of Lille. The sample morphology and size distribution were studied by conventional bright-field imaging. They are similar to those of the Mg-rich sol-gel samples from <cit.>. Whatever their composition (x = 0, 0.1, 0.2, 0.3, 0.4), the samples consist of clusters of matter bonded to one another (Fig. <ref>). In all samples, the clusters are homogeneous in size, centred around 11 nm with a Gaussian size distribution between 5 and 20 nm (Fig. <ref>). The size of the porosity is of the same order of magnitude (∼ 10 nm). The amorphous state of the clusters is confirmed by electron diffraction patterns, which show diffuse rings characteristic of amorphous matter. Compositions were measured by EDS by selecting volumes typically of the order of 10^-3 μm^3, thus including a large number of clusters in each analysis. The use of smaller volumes was avoided because it led to significant damage of the samples under the electron beam and a preferential loss of Mg with respect to Si and Fe. The measured compositions are found to be relatively homogeneous and reasonably close to the target compositions (Fig. <ref>), although a slight deficit of Mg was systematically observed, likely due to degradation of the samples under the electron beam.
The E10 and E20 samples appear to have very similar measured compositions, as do the samples E30 and E40 (Fig. <ref>).

We investigated the oxidation state of the iron contained in the samples with Mössbauer spectroscopy. Mössbauer spectra were collected on a constant-acceleration conventional spectrometer with a 1.85 GBq source of ^57Co (Rh matrix) at 293 K. The absorber was a sample of ca. 50 - 100 mg of powder enclosed in a 20-mm-diameter cylindrical plastic sample holder, the size of which was chosen to optimize the absorption. We observe doublets and sextets in the Mössbauer spectra of the samples, which are characteristic of quadrupole and magnetic dipole interactions, respectively, the doublets being associated with the presence of iron in the silicate network and the sextets with oxide phases having a magnetic signature <cit.>. The Mössbauer spectra of samples E10 to E40 show a single doublet characteristic of FeIII in silicates whereas the spectra of the E10R to E40R samples show two doublets characteristic of FeIII and FeII. This indicates that the annealing of the samples under reducing conditions (Ar + 10% H_2) led to incomplete iron reduction. This is likely due to the low annealing temperature (500^∘C). Sol-gel processing at higher temperature is precluded by the strong propensity of the samples to crystallize. We also observed sextets in the spectra that reveal the presence of oxides. In the unprocessed samples, two sextets are observed (only one sextet for the E10 sample) and their isomer shifts point to disordered (and/or small nanocrystals of) hematite (Fe_2O_3, FeIII) as a carrier <cit.>. Two sextets are also observed for the E30R and E40R samples, with a relative intensity of ∼ 2:1, but their isomer shifts differ from those of the sextets observed in the spectra of the unprocessed samples, suggesting that the hematite has been reduced to magnetite (Fe_3O_4, FeII:FeIII = 1:2) <cit.>. The measured isomer shift domain was not wide enough to measure the sextet for the E10R and E20R samples, for which we therefore have no information. From the Mössbauer spectra, we estimate that the samples contain about 5 - 10% iron oxide, which means that the main fraction of iron is present within the silicate sol-gel.

§.§ Spectroscopic measurements

The spectroscopic measurements were performed on the ESPOIRS setup at IRAP in the spectral range 5 - 1000 μm and on the AILES beam line at the synchrotron SOLEIL in the spectral range 250 - 1000/1200 μm. The ESPOIRS setup is dedicated to the characterization of the spectroscopic properties of interstellar dust analogues. Thanks to a set of detectors, beamsplitters, and sources, it covers the spectral domain from 0.7 μm to ∼ 1000 μm. In the FIR (λ ≥ 30 μm), we use a TES Si bolometer detector from QMC Instruments (operating at 8 K and cooled with a He pulse tube), a silicon beamsplitter, and a mercury lamp. In the MIR, we use a CsI beamsplitter, a Globar source and a DLaTGS detector. The samples are cooled down to 10 K with a pulse-tube-cooled cryostat. The AILES beam line is equipped with a similar experimental setup, described in detail by <cit.>. In the MIR range (5 - 40 μm), the transmission spectra were measured at room temperature whereas in the FIR/submm range (λ ≥ 30 μm) they were measured at 10, 30, 100, 200 and 300 K.

The samples are prepared for transmission measurements in the form of pellets of 13 mm diameter. In the MIR, KBr (Aldrich) pellets are pressed at room temperature under ten tons for several minutes.
In the FIR, polyethylene (PE, Thermo Fisher Scientific) pellets are pressed under ten tons after annealing of the PE+sample mixture at 130^∘C for five minutes. To obtain the MAC of the samples over the full wavelength range from 5 μm to 1 mm, several pellets are made with increasing masses of sample to compensate for the decrease of the MAC of the sample with increasing wavelength. Typically, 0.5 mg of sample is enough to measure the MIR spectrum whereas more than 100 mg of sample is required for measurements around 1 mm <cit.>. The final MAC curves are constructed from the MAC curves of the pellets containing different masses of sample. The effect of the KBr and PE matrix is taken into account following the procedure described in <cit.>. The error on the measured MAC is the quadratic sum of the uncertainty on the thermal stability of the spectrometer and of the uncertainty on the mass of sample in the pellet. The analysis of the spectral data to reconstruct the MAC of the samples and the details of the error determination are explained in <cit.>.

§ MIR AND FIR/SUBMM MASS ABSORPTION COEFFICIENT AS A FUNCTION OF TEMPERATURE

The MAC of all samples, measured at room temperature in the range 5 - 40 μm, is shown in Fig. <ref>, together with the MAC of the MgSiO_3 sample from <cit.> (named E in <cit.> and hereafter named E00). This sample, which was synthesized with the same method as the samples studied here, is considered in this study because it represents the iron-free counterpart of the Fe-rich samples. Figure <ref> shows the stretching vibration of the Si-O bond of the SiO_4 tetrahedra at ∼ 9.6 μm and the bending vibration of the Si-O-Si bond at ∼ 21.6 μm. The peak position and shape of the vibrational bands of silicates are influenced by the structure of the material, which can be described by the relative amount of non-bridging oxygen atoms per tetrahedron (NBO/T; non-bridging oxygen atoms are oxygen atoms that belong to a single tetrahedron). They are also influenced by the presence and nature of the cations within the silicates. For Mg- and Fe-rich silicates, it is known that the position of the stretching band shifts toward shorter wavelengths when the Fe content increases <cit.>. As seen from Table <ref>, this is verified by our two sets of samples. The peak position of the stretching vibration is 9.61 μm for sample E10 and decreases to 9.38 μm for sample E40. The peak position is 9.56 and 9.37 μm for samples E10R and E40R, respectively. For comparison, it is 9.76 μm for sample E00. For a given iron content, the stretching modes of the unprocessed samples (Exx) and of the processed samples (ExxR) do not present strong differences in terms of peak position or width. The slight asymmetry of the stretching mode, which exhibits a shoulder at ∼ 8.3 μm, together with the weak band at 12.5 μm, might indicate the presence of some silica (SiO_2) within the samples. However, it is not possible from the MIR spectra to be more specific regarding the amount of silica, its structure and the size of the inclusions within the silicate matrix. The presence of silica-rich material is likely associated with the formation of iron oxide. The E00 sample does not present the same spectral feature and should not contain silica. The peak position of the bending band does not seem to follow a trend with the iron content. It varies only slightly, in the ranges 21.56 - 21.69 μm and 21.56 - 21.60 μm for the Exx and ExxR samples, respectively (22.03 μm for sample E00).
The band exhibits a shoulder peaking at ∼ 17 μm for all samples (unprocessed and processed), which becomes more pronounced as the iron fraction increases. It is therefore probably related to an iron phase which is not affected by the processing applied to the samples. We note that the position of the shoulder is close to the position of the vibrational stretching band calculated for small spherical grains of Fe_xMg_1-xO oxides <cit.>. The main difference between the bending modes of the unprocessed and processed samples is the presence of a band at ∼ 32.3 μm in the spectra of the unprocessed samples but not in those of the processed samples. The intensity of this band increases with the amount of iron; it is therefore probably related to iron in a form that is altered by the processing. It could be iron oxides, but it is difficult to identify which ones, since they are most likely amorphous and of very small size. The peak position of this band is in agreement with the transmission spectrum of fine particles of hematite measured by <cit.>, the other features characteristic of hematite being hidden by the silicate bands. The fact that this band is not seen in the spectra of the processed samples indicates that the reduction process has operated, even though the Mössbauer spectra show that it is not complete (see Sect. <ref>). The processed samples might contain some iron oxides but in small amounts, since no IR spectral feature may be assigned to them. To summarize, the analysis of the MIR spectra suggests that the samples are not chemically homogeneous at small scales (≤ 100 nm, see Sect. <ref>). They might contain some three-dimensional structures compatible with SiO_2 and some iron oxide phases which should be different in the unprocessed and processed samples.

The MAC of all the samples, measured from room temperature down to 10 K (in the range 30 - 1000 μm), is shown in Fig. <ref> in the 5 - 1000 μm spectral domain. In the FIR domain, the MAC of the eight samples exhibits the same dependence on temperature: above 30 K, the MAC increases with the grain temperature. We do not observe any change of the MAC from 10 K to 30 K for any of the samples. The wavelength above which the variation of the MAC with temperature is detectable depends on the sample and on the grain temperature. For a given sample, it appears at a shorter wavelength at high temperature than at low temperature. Typically, the MAC variations with temperature are visible for wavelengths longer than ∼ 100/200 μm depending on the sample studied. This variation of the MAC intensity is accompanied by a change of its spectral shape, which cannot be reproduced by a single power law in λ^-β, where the spectral index, β, is defined as the slope of the MAC in the FIR/submm in the log-log representation. To derive β at each wavelength, we have fitted the MAC of each sample, at each temperature, with a sixth-order polynomial in the wavelength range 30 - 1000 μm and calculated the derivative (Fig. <ref>). This emphasizes that, at a given temperature, β changes with wavelength. In addition, at a given wavelength in the FIR/submm, β increases when the grain temperature decreases, and the amplitude of the variation of β with temperature is higher at long wavelengths. In the red wing of the bending vibrational band, around ∼ 30 - 80 μm, the value of β decreases below 1 whereas, above ∼ 100 μm, it increases up to values greater than 2.
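The derivation of a wavelength-dependent β described above amounts to measuring the local slope of the MAC in log-log space. A minimal sketch of this procedure (in Python; the synthetic MAC curve stands in for the measured data, which are not reproduced here):

```python
import numpy as np

def local_beta(lam, mac, order=6):
    """Local spectral index beta(lambda) = -d(log MAC)/d(log lambda),
    from a sixth-order polynomial fit in log-log space, as described above."""
    x, y = np.log10(lam), np.log10(mac)
    coeffs = np.polyfit(x, y, order)            # polynomial fit of the MAC
    return -np.polyval(np.polyder(coeffs), x)   # minus the local slope

lam = np.logspace(np.log10(30.0), 3.0, 200)     # 30 - 1000 um
# Synthetic curve: a power law plus a broad "bump" near 100 um.
mac = 10.0 * (lam / 100.0) ** -1.5 + 2.0 * np.exp(-np.log(lam / 100.0) ** 2)
beta = local_beta(lam, mac)
```

Applied to the measured curves, this is the computation that yields β values below 1 in the red wing of the bending band and above 2 at longer wavelengths.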
This behavior reflects the presence of the "bump" observed in the MAC spectra, with a peak maximum around 100 μm, and is further discussed in Sect. <ref>. The values of the MAC of all samples are reported in Table <ref> for a selection of wavelengths. At 100 μm, the MAC varies little with temperature, by no more than 10 to 17% from 10 K to 300 K depending on the sample. As the wavelength increases, the variation of the MAC gets stronger and, at 1 mm, it varies by a factor in the range 1.7 to 6 from 10 K to 300 K depending on the sample.

§ DISCUSSION

§.§ Influence of the composition and processing of the analogues on the MAC

Figure <ref> compares, for a given temperature (10 K and 300 K), the spectra of the samples with different iron contents, for each series of samples, Exx and ExxR. The MAC of the pure Mg-rich sample, E00, is also added for comparison. The MACs of the unprocessed samples E00, E10, E20, E30 and E40 look remarkably similar below ∼ 700/800 μm whereas above ∼ 700/800 μm the MACs of the four samples are different. The MAC of samples E10 and E20 gets steeper at long wavelengths whereas the MAC of samples E30 and E40 flattens. The MAC of the E00 sample is closer to that of samples E30 and E40 than to that of samples E10 and E20. As for the unprocessed samples, the MACs of E10R, E20R, E30R and E40R are very similar at short wavelengths whereas above 300/400 μm, depending on the temperature, the MACs of the four processed samples differ. The MAC of samples E10R and E20R flattens at long wavelengths whereas the MAC of samples E30R and E40R steepens. Hence, no clear trend emerges from these measurements about the influence of the iron content on the MAC of the samples, for either of the two series. However, we know from the results of the MIR and Mössbauer spectroscopic measurements that the samples of each series contain a small amount of iron oxides (less than 5 - 10%, see Sect. <ref>). This certainly complicates the interpretation of the FIR spectroscopic data since, in this domain, the MAC depends on the structure of the materials at microscopic scale, a structure which is influenced by the form in which the iron is present in the samples (within the silicate network or as iron oxides). In the FIR, the difference between the MACs of the samples with various iron contents is more important for the processed samples than for the unprocessed samples. This should reflect the effect of the processing applied to the ExxR samples. This processing induces changes in the chemical and structural nature of the samples, as indicated by the Mössbauer and MIR spectroscopic results (Sects. <ref> and <ref>). However, the modifications of the MAC induced by the processing do not show a common behavior in terms of spectral shape and intensity; they differ from one sample to another. For example, the MACs of the unprocessed samples E10 and E20 are weaker than the MACs of the processed samples E10R and E20R whereas the opposite holds for samples E30/E30R and E40/E40R. The similarity of the MACs of the samples containing 10% and 20% iron on one side, and 30% and 40% iron on the other side, both for the normal and processed series, is probably related to the fact that, as indicated in Sect. <ref>, the samples E10 and E20 have comparable compositions, as do samples E30 and E40. In addition, the two pairs of samples E10/E20 and E30/E40 were synthesized in separate runs.
Consequently, the samples of each pair should be very similar in terms of structure or chemical and/or structural homogeneity at microscopic scale, whereas it is possible that the two pairs differ slightly. Although the processing of the samples does not show any clear trend, it has a non-negligible effect, which is emphasized when comparing the MAC averaged over the four processed samples, <MAC_ExxR>, with the MAC averaged over the four unprocessed samples, <MAC_Exx>, and the MAC averaged over the eight samples, <MAC_all> (Fig. <ref> and Table <ref>). In addition, it is clear from Fig. <ref> that averaging the MAC over different combinations of samples does not suppress the temperature variation of the MAC in the FIR domain. Moreover, even though the averaged MACs are smoother than the MAC of each sample, they retain a complex spectral shape incompatible with a single power law (Fig. <ref>).

§.§ Comparison with previous experimental data

The observed behavior of the MAC with temperature and its complex spectral shape are similar to those already observed for the MAC of amorphous silicate materials by <cit.>. All these studies show that the MAC of amorphous silicate analogues is temperature dependent, that it increases when the grain temperature increases, and that the spectral shape of the MAC cannot be fitted with a single power law in λ^-β. Most of the samples from these studies are iron-free and the overall qualitative agreement of all these samples thus reflects that, more than the composition (Mg vs. Fe content), the structure at nanometer and atomic scales (e.g., degree of polymerization of the Si tetrahedra, presence of defects, presence of Si atoms not tetrahedrally linked to oxygen in a chaotic structure) governs the FIR/submm absorption. <cit.> present a study of an Fe-rich amorphous silicate at low temperature in the FIR/submm range (20 μm - 2 mm). They synthesized amorphous Fe-rich olivine-like grains by laser vaporization of the natural crystalline target mineral (fayalite), in an oxygen atmosphere, ending up with a sample of composition Mg_0.18Fe_1.82SiO_4 in the form of small spherical particles with diameters in the range 13 - 35 nm, the smallest particles being aggregated in chains. The MAC of this sample, named FAYA, is similar to our measurements. The MAC values are in the same range as those of the Exx and ExxR samples in the MIR and in the FIR. At 1 mm, the MAC of the FAYA sample is ∼ 5.0 cm^2.g^-1 and ∼ 0.86 cm^2.g^-1 at 300 K and 24 K, respectively. The main difference between this sample and the analogues studied here lies in the shape of the MAC curves. The absence of a band at ∼ 33 μm in the FAYA spectrum suggests that the iron contained in the FAYA sample is not in the same iron oxide phase as in the Exx samples. The prominent bump in the range 100 - 300 μm, present in our samples, is not observed in Mennella's study. Such a bump was also observed in the Mg-rich glassy silicates from <cit.> and explained in terms of a Boson peak (BP). Such a BP appears to be a universal feature of the solid state of various amorphous materials, although there is no clear and widely shared understanding of the underlying physical process. The analytical model from <cit.> describes the BP as an excess in the vibrational density of states, g_BP(ω), over the usual Debye vibrational density of states, g_Debye(ω) ∝ ω^2.
Although it gives good fits of the MAC of the Mg-rich glassy silicates from <cit.> in the range 30 - 700 μm, it fails to reproduce the MAC of the ferromagnesium amorphous samples Exx and ExxR. The capacity of this model to produce a reasonable fit is restored if Gurevich's BP profile is overlapped with an absorption process ∝ ω^n, with n in the range 0 to 2 depending on the sample, but with no clear correlation with any sample characteristics. Clearly, these experimental MAC curves cannot be reproduced by any current physical model, and should be taken as experimental data to be considered by theoretical solid-state physicists.

<cit.> measured the transmission and reflection spectra of FeSiO silicates produced in a condensation flow apparatus at low temperature (5 - 300 K) in the 15.4 - 330 μm domain. Dust produced in such an apparatus consists of small nanograins (2 - 30 nm) aggregated in the form of fluffy, open clusters containing hundreds to thousands of grains. From these spectra, <cit.> calculated the optical constants of the FeSiO sample and discussed their results in terms of n and k, the refractive index and the absorption coefficient, respectively, rather than in terms of the MAC as in this study. They observe a small temperature variation of the absorption coefficient, k, in the vibrational band at 21.5 μm and a variation of ∼ 5% in k, above 100 μm, in the temperature range 100 - 300 K. In the 30 - 100 μm range, these results on the absorption coefficient of the FeSiO sample are compatible with our results on the MAC of the Exx and ExxR samples, which show little or no variation. These various measurements show that the MAC dependence on temperature appears above 100 μm, with only very small variations in the MIR domain. This is true whatever the structure of the material, from the most highly disordered silicates from <cit.> to the disordered, but less "chaotic", silicates from <cit.> and the present study.

§.§ Comparison with cosmic dust models and astrophysical implications

Cosmic dust models are built to interpret astronomical observations, either to study dust itself or to use dust as a probe of the astrophysical environment. Most dust models consider two main dust components, carbonaceous dust and silicate dust, which may themselves be divided into several dust populations. The carbonaceous dust component usually includes the very small grains (VSG, a ≤ 15 nm) and the polycyclic aromatic hydrocarbons (PAHs). Some models also include a population of large carbonaceous grains (a ≥ 50 nm) composed of graphite <cit.> or of more or less aliphatic amorphous carbon grains <cit.>. All cosmic dust models include a population of large grains made of silicates, which are generally pure magnesian silicates. The "astrosilicates" from <cit.> are based on experimental data of Mg-rich silicates in the UV/VIS domain, on astronomical observations of the ISM dust in the MIR, and on the extrapolation of these data to the FIR/submm <cit.>. The "Themis" model <cit.> considers two silicate dust components which are based on experimental data on Mg-rich amorphous silicates of composition MgSiO_3 and Mg_2SiO_4 in the MIR and in the UV/VIS <cit.>, and which are extrapolated to the FIR/submm. Following the classical description of absorption in the amorphous state, such as the Debye model <cit.>, the extrapolation of both models in the FIR/submm assumes that the MAC has an asymptotic behavior in λ^-2 for λ ≥ 20 μm.
To take into account the fact that iron is highly depleted from the gas phase <cit.>, the silicate component of the "Themis" model contains inclusions of metallic iron (Fe) and of iron sulfide (FeS) <cit.>.

Figure <ref> shows the comparison of the averaged experimental MAC with those calculated from the "astrosilicates" model. The comparison of each sample with the "Themis" dust model and the "astrosilicates" model is shown in Figs. <ref> to <ref>. We have calculated the MAC of a spherical particle of 100 nm in size, the MAC of two populations of particles having a log-normal size distribution centred at 1 μm (such a size distribution should reflect the size of the grains within the pellets), and the MAC of a continuous distribution of ellipsoids (CDE model, which assumes that all ellipsoidal shapes are equiprobable, <cit.>). The first population of particles consists of spherical grains and the second one of spheroidal grains (prolate grains with an axis ratio of 2). The calculations are performed using Mie theory <cit.> for spherical particles and using the DDA code DDSCAT 7.3 developed by <cit.> for spheroidal particles.

The modeled and measured MACs are very different. In the MIR, where the vibrational bands occur, the discrepancy in the band shape is linked to the composition of the dust analogues and to their structure at microscopic scale. As expected, the stretching mode of the Si-O bond of the iron-free samples used in the "Themis" model peaks at longer wavelengths (9.7 μm and 10.3 μm for MgSiO_3 and Mg_2SiO_4, respectively) than that of the iron-rich Exx and ExxR samples (see Table <ref>). For the "astrosilicates", which are derived, in the MIR, from astronomical observations, the stretching mode peaks at 9.5 μm, which is close to the peak position of samples E20 and E10R. This suggests that the "astrosilicates" are compatible with silicates containing ∼ 10 - 20% iron. The bending mode of the Exx and ExxR samples peaks at longer wavelengths than the bending mode of the models, reflecting the fact that the structure of the material is different. Related to these structural differences, we note that the Exx and ExxR samples are less absorbing than the dust analogues from cosmic dust models. Interstellar dust is most likely diverse in terms of composition and structure in the various environments in which it is observed. Therefore, these discrepancies do not rule out the studied samples as relevant dust analogues. Although in the MIR the dust models are more absorbing than the measured samples, the opposite is true in the FIR. Table <ref> gives the values of the MAC of the measured samples and of the modeled MAC at selected wavelengths in the 100 μm - 1 mm range. In this range, the experimental MAC at 300 K is more than five times greater than the modeled MAC for all samples. At 10 K, the experimental MAC is lower than at 300 K and the enhancement factor compared to the models is smaller, depending on the sample and on the wavelength; however, it is always higher than two and usually of the order of four to five. The MAC values of the ferromagnesium silicate analogues in this study are very close to the MAC of the pure Mg-rich samples from <cit.> and from <cit.>. This shows that the enhancement of the measured MAC compared to the modeled MAC is not related to the iron content of the grains, that is, to differences in composition between the studied samples and the analogues used in the cosmic dust models.
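For grains much smaller than the wavelength, the Mie calculation reduces to the Rayleigh limit, in which the MAC follows directly from the complex refractive index m and the material density ρ. A minimal sketch (in Python; the constant refractive index m = 2.0 + 0.1i and the density of 2.7 g cm^-3 are placeholder assumptions, whereas actual model calculations use tabulated, wavelength-dependent optical constants):

```python
import numpy as np

def mac_rayleigh_sphere(lam_um, m, rho):
    """MAC [cm^2/g] of a homogeneous sphere in the Rayleigh limit:
    kappa = (6*pi / (lam * rho)) * Im[(m^2 - 1) / (m^2 + 2)],
    with lam in micrometres (converted to cm) and rho in g/cm^3."""
    lam_cm = lam_um * 1e-4
    eps = m ** 2                      # dielectric function
    return (6.0 * np.pi / (lam_cm * rho)) * ((eps - 1.0) / (eps + 2.0)).imag

lam = np.logspace(2.0, 3.0, 40)       # 100 - 1000 um
kappa = mac_rayleigh_sphere(lam, 2.0 + 0.1j, 2.7)
```

With wavelength-independent optical constants this expression scales as λ^-1; the λ^-2 asymptote assumed by the dust models arises from the frequency dependence of Im(ε) in the Debye-like description, a behavior that the measured samples evidently do not follow.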
As discussed in detail by <cit.>, an enhancement factor greater than two cannot be explained by the effect of grain size, grain shape, or by grain coagulation within the pellets. Grain size and grain shape effects are illustrated in Fig. <ref> and the increase of the MAC due to large (micron-sized) spherical grains is negligible in the FIR. Coagulation might happen during the fabrication of the pellets; it is taken into account during the analysis of the experimental data following the method explained in <cit.> and based on the Bruggeman theory. More detailed treatments of dust coagulation by methods such as DDA have shown that it may increase the MAC by a factor of two at most <cit.>, that is, not enough to fully account for the discrepancy between the cosmic dust models and the experimental data. The enhancement of the emissivity of the iron-rich analogues compared to the MAC of cosmic dust models commonly used for interpreting FIR/submm dust emission observations is related to the disordered nature of the samples, to the number, distribution, and nature of the defects in their structure at microscopic scale, and to the existence of absorption processes in addition to the classical Debye model <cit.>. These additional absorption processes are more or less important depending on the structural state of the material at the microscopic scale.

This study shows that the MAC of ferromagnesium silicates has the same qualitative behavior as that of pure Mg-rich silicates in terms of dependence on temperature and wavelength. The presence of iron oxides in the samples does not suppress this behavior, probably because the iron oxide phases are amorphous and not very abundant. This emphasizes the universality of this behavior in amorphous solids, whatever their composition, and the fact that it has to be taken into account in astronomical modeling. Indeed, any cosmic dust composed of amorphous silicates and/or oxides will be characterized by a MAC (i.e., an emissivity) that is lower at low temperature than at high temperature and that deviates from a single power law such as λ^-β. As in the previous studies from <cit.> and <cit.>, the variation of the MAC of the Fe-rich analogues is observed for temperatures greater than 30 K. The MAC of the samples is identical in the 10 - 30 K range and then increases at 100, 200, and 300 K. In astronomical modeling, it is thus important to have a first guess of the dust temperature in order to use the MAC of the dust analogues measured at a temperature as close as possible to the dust temperature. In addition, considering the MAC averaged over all the samples and adopting the pessimistic assumption that coagulation effects are not properly taken into account (thus dividing the MAC by a factor of two), the value of <MAC_all> at 10 K is two to three times higher than the modeled MAC at 1 mm and three to four times higher at 500 μm. The direct consequence is that cosmic dust models overestimate the dust mass compared to the use of an experimental MAC.

§ CONCLUSIONS

The MACs of eight ferromagnesium amorphous silicate analogues were measured in the MIR (5 - 40 μm) at room temperature and in the FIR/submm (30 μm - 1 mm) at various temperatures (10, 30, 100, 200 and 300 K). The analogues are amorphous silicates of mean pyroxene composition with varying amounts of magnesium and iron: Mg_1-xFe_xSiO_3 with x = 0.1, 0.2, 0.3 and 0.4.
Four samples were processed to modify the iron oxidation state within the material, leading to four additional samples having a modified structure and chemical homogeneity at microscopic scale compared to the unprocessed samples. We find that the MAC of ferromagnesium amorphous silicates exhibits the same characteristics as that of other Mg-rich amorphous silicates. In the FIR, the MAC of the samples increases with the grain temperature as absorption processes are thermally activated. The wavelength above which the MAC changes depends on the sample and on the temperature and lies in the range 100 - 200 μm. These thermal effects are observed above 30 K and we find that the MACs at 10 and 30 K are identical for all samples. The overall spectral shape of the MAC differs from a power law in λ^-β, which is the extrapolation usually adopted in astronomical models. The value of β, defined as the local slope of the MAC, varies with the wavelength. For a given sample, and at a given wavelength, the value of β is anti-correlated with the grain temperature. The qualitative agreement of the MAC of Fe-rich and Mg-rich amorphous silicates shows that, more than the composition (Mg vs. Fe content), the structure at nanometer scale governs the FIR/submm absorption. Hence, any amorphous silicate grain, whatever its composition, should present this complex behavior with wavelength and temperature. The modifications of the MAC induced by the processing do not show a common behavior in terms of shape and intensity. However, the comparison of the averaged MAC of the unprocessed samples with that of the processed samples shows that they are different. We attribute these differences to an evolution of the amorphous silicate network of the samples during the processing. The MAC of the Fe-rich samples is much higher than the MAC in cosmic dust models. This is not due to compositional differences. We attribute this enhancement to absorption processes that add to the Debye model. These absorption processes are characteristic of the amorphous nature of the dust and of the nature and distribution of the defects of the disordered structure. This has important astronomical implications in terms of mass determination and elemental abundance constraints for cosmic dust models.

This work was supported by the French Agence Nationale pour la Recherche project ANR-CIMMES and by the Programme National PCMI of CNRS/INSU with INC/INP, co-funded by CEA and CNES. We thank the referee, J. Nuth, for his comments that have helped to improve this manuscript. We thank A. Marra for providing us with the optical constants of hematite and V. Mennella for sharing his experimental data of the MAC of amorphous fayalite. We thank M. Roskosz for fruitful discussions about Mössbauer spectroscopy.

§ COMPARISON WITH ASTRONOMICAL MODELS

We present here the comparison of the MAC of each sample with the MAC calculated for the "astrosilicates" model <cit.> and for the "Themis" model <cit.>. The calculations are performed using Mie theory <cit.> for a spherical particle of size 100 nm, for spherical grain populations with a log-normal size distribution centred at 1 μm, and for a continuous distribution of ellipsoids (CDE). For the spheroidal (prolate grains with an axis ratio of 2) grain population (with a log-normal size distribution centred at 1 μm), we have used the DDA code DDSCAT 7.3 developed by <cit.> to calculate the MAC.
"authors": [
"K. Demyk",
"C. Meny",
"H. Leroux",
"C/ Depecker",
"J. -B. Brubach",
"P. Roy",
"C. Nayral",
"W. -S. Ojo"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170627063135",
"title": "Low-temperature MIR to submillimeter mass absorption coefficient of interstellar dust analogues II: Mg and Fe-rich amorphous silicates"
} |
We study several extensions of linear-time and computation-tree temporal logics with quantifiers that allow for counting how often certain properties hold. For most of these extensions, the model-checking problem is undecidable, but we show that decidability can be recovered by considering flat Kripke structures where each state belongs to at most one simple loop. Most decision procedures are based on results on (flat) counter systems where counters are used to implement the evaluation of counting operators.

§ INTRODUCTION

Model checking <cit.> is a method to verify automatically the correct behaviour of systems. It takes as input a model of the system to be verified and a logical formula encoding the specification, and checks whether the behaviour of the model satisfies the formula. One key aspect of this method is to find the appropriate balance between the expressiveness of models and logical formalisms and the efficiency of the model-checking algorithms. If the model is too expressive, e.g. Turing machines, then the model-checking problem, even with very simple logical formalisms, becomes undecidable. On the other hand, some expressive logics have been proposed in order to reason about the temporal executions of simple models such as Kripke structures. This is the case for the linear temporal logic LTL <cit.> and the branching-time temporal logics CTL <cit.> and CTL* <cit.>, for which the model-checking problem has been shown to be PSpace-complete, contained in P and PSpace-complete, respectively (see, e.g., <cit.>). Even though these logical formalisms allow for stating classical properties like safety or liveness over executions of Kripke structures, their expressiveness is limited. In particular, they cannot describe quantitative aspects, as for instance the fact that a property has been true twice as often as another along an execution. One approach to solve this issue is to extend the logic with some ability to count the positions of an execution satisfying some property and to check constraints over such numbers at some positions. Such a counting extension of CTL is proposed in <cit.>. This formalism can state properties such as "an event p will eventually occur and, before that, the number of events q is larger than two". The authors further propose an extension that admits diagonal comparisons (i.e., negative and positive coefficients) to state, for instance, that the number of events b is greater than the number of events c. It is shown that the model-checking problem for the former logic is decidable in polynomial time and that the satisfiability problem for the latter is undecidable. A similar extension of LTL is considered in <cit.>, where it is proven that model checking the diagonal-free extension is ExpSpace-complete while model checking the extension with diagonal comparisons is undecidable. Following the same motivation, regular availability expressions were introduced in <cit.>, extending regular expressions by a mechanism to express that on a (sub-)word matching an expression specific letters occur with a given relative frequency. Unfortunately, emptiness of the intersection of two such expressions was shown undecidable. Even for single expressions, only a non-elementary procedure is known for verification (inclusion in regular languages) and for deciding emptiness <cit.>.
The case is similar for the frequency logic of <cit.>, a variant of LTL that features an until operator extended by a frequency constraint. The operator is intended to relax the classical semantics, where φ U ψ requires φ to hold at all positions before ψ. For example, the formula p U^1/3 q states that q holds eventually and before that the proportion of positions satisfying p should be at least one third. The concept of relative frequencies embeds naturally into the context of counting logics as it can be understood as a restricted form of counting. In fact, this frequency logic can be considered as a fragment of the counting extensions above and still has an undecidable satisfiability problem <cit.>, implying the same for model checking Kripke structures. Moreover, most techniques employed for obtaining results on these formalisms involve variants of counter systems.

Looking at the model-checking problem from the model point of view, recent work has shown that restrictions can be imposed on Kripke structures to obtain better complexity bounds. As a matter of fact, if the structure is flat (or weak), which means that every state belongs to at most one simple cycle in the graph underlying the structure, then the model-checking problem for LTL becomes NP-complete <cit.>. Such a restriction has as well been successfully applied to more complex classes of models. It is well known that the reachability problem for two-counter systems is undecidable <cit.> whereas for flat systems the problem is decidable for any number of counters <cit.>; even more, model checking LTL is NP-complete <cit.>. Flat structures are not only interesting because of their algorithmic properties, but also because they can be used as a way to under-approximate the behaviour of non-flat systems. For instance, for counter systems one gets a semi-decision procedure for the reachability problem which consists in enumerating flat sub-systems and testing them for reachability. In simple words, flat structures can be understood as an extension of the paths typically used in bounded model checking, and we expect that bounded model checking using flat structures rather than paths improves practical model-checking approaches.

Contributions. We consider the model-checking problem for a counting logic, called CCTL* here, where we use variables to mark positions on a run from where we begin to count the number of times a subformula is satisfied. Such a way of counting was also introduced in <cit.>, see <ref> for a comparison. We study as well its fragments fLTL, fCTL and fCTL*, where the explicit counting mechanism is replaced by a generalized version of the until operator capable of expressing frequency constraints.

First we prove that model checking fCTL is at most exponential in the formula size and polynomial in the structure size, by using an algorithm similar to the one for CTL model checking. To deal with frequency constraints, a counter is employed for tracking the number of times a subformula is satisfied in a run of a Kripke structure. We then show that for flat Kripke structures the model-checking problems of fCTL* and CCTL* are decidable. For the former, our method is a guess-and-check procedure based on the existence of a flat counter system as a witness of a run of the Kripke structure satisfying the formula. For the latter, we use a technique which consists in encoding the runs of a flat Kripke structure into a Presburger arithmetic formula, and we then show that model checking CCTL* can be translated into the satisfiability problem of a decidable extension of Presburger arithmetic featuring a counting quantifier known as the Härtig quantifier.
We hence provide new decidability results for counting and frequency logics which, in practice, could be used in an under-approximation approach to the general model-checking problem. We furthermore relate an extension of Presburger arithmetic, for which the complexity of the satisfiability problem is open, to a concrete model-checking problem. In summary, for model checking different fragments of CCTL* on Kripke structures (KS) or flat Kripke structures (FKS), we obtain the picture shown in <ref>, where bold entries are our novel results.

§ DEFINITIONS

§.§ Preliminaries

We write ℕ and ℤ to denote the sets of natural numbers (including zero) and integers, respectively, and [i,j] for {k ∈ ℤ | i ≤ k ≤ j}. We consider integers encoded in binary representation. For a finite alphabet Σ, Σ^* represents the set of finite words over Σ, Σ^+ the set of finite non-empty words over Σ and Σ^ω the set of infinite words over Σ. For a finite set E of elements, |E| represents its cardinality. For (finite or infinite) words and general sequences u = a_0 a_1 … a_k … of length at least k+1 > 0, we denote by u(k) = a_k the (k+1)-th element and refer to the indices 0, 1, … as positions on u. If u is finite then |u| denotes its length. For arbitrary functions f: A → B and elements a ∈ A, b ∈ B, we denote by f[a ↦ b] the function f' that is equal to f except that f'(a) = b. We write 0 and 1 for the constant functions f_0: A → {0} and f_1: A → {1}, respectively, if the domain A is understood. By B^A, for sets A and B, we denote the set of all functions from A to B.

Kripke structures. Let AP be a finite set of atomic propositions. A Kripke structure is a tuple 𝒦 = (S, s_I, E, λ) where S is a finite set of control states, s_I ∈ S the initial control state, E ⊆ S × S the set of edges and λ: S → 2^AP the labelling function. A finite path in 𝒦 is a sequence u = s_0 s_1 … s_k ∈ S^+ with (s_i, s_i+1) ∈ E for all i ∈ [0, k-1]. Infinite paths are defined analogously. A run ρ of 𝒦 is an infinite path with ρ(0) = s_I. We denote by Runs(𝒦) the set of runs of 𝒦. Due to the single initial state, we assume without loss of generality that the graph of 𝒦 is connected, i.e. all states are reachable. A simple loop in 𝒦 is a finite path u = s_0 s_1 … s_k such that i ≠ j implies s_i ≠ s_j for all i, j ∈ [0, k] and (s_k, s_0) ∈ E. A Kripke structure 𝒦 is called flat if for each state s ∈ S there is at most one simple loop u in 𝒦 with u(0) = s. See <ref> for an example. The classes of all Kripke structures and of all flat Kripke structures are denoted KS and FKS, respectively.

Counter systems. Our proofs use systems with integer counters and simple guards. A counter system is a tuple 𝒮 = (S, s_I, C, Δ) where S is a finite set of control states, s_I ∈ S is the initial state, C is a finite set of counter names and Δ ⊆ S × ℤ^C × 2^G_C × S is the transition relation, where G_C = {(c<0), (c≥0) | c ∈ C} is the set of guards. An infinite sequence s_0 s_1 … ∈ S^ω of states starting in s_0 = s_I is called a run of 𝒮 if there is a sequence θ_0 θ_1 … ∈ (ℤ^C)^ω of valuation functions θ_i: C → ℤ with θ_0 = 0 and a transition (s_i, u⃗_i, G_i, s_i+1) ∈ Δ for every i ∈ ℕ such that θ_i+1 = θ_i + u⃗_i (defined point-wise as usual), θ_i+1(c) < 0 if (c<0) ∈ G_i and θ_i+1(c) ≥ 0 if (c≥0) ∈ G_i for all c ∈ C. Again, we denote by Runs(𝒮) the set of all such runs and assume that the graph of control states underlying 𝒮 is connected.

§.§ Temporal Logics with Counting

We now introduce the different formalisms we use in this work as specification languages. The most general one is the branching-time logic CCTL*, which extends the branching-time logic CTL* (see e.g.
<cit.>) with the following features: it has operators that allow for counting, along a run, the number of times a formula is satisfied, and for storing the result in a variable. The counting starts when the associated variable is "placed" on the run. These variables may be shadowed by nested quantification, similar to the semantics of the freeze quantifier in linear temporal logic <cit.>.

Let V be a set of variables and AP a set of atomic propositions. The syntax of CCTL* formulae φ over V and AP is given by the grammar rules

φ ::= p | φ ∧ φ | ¬φ | Xφ | φ U φ | Eφ | x.φ | τ ≤ τ
τ ::= a | a·#_x(φ) | τ + τ

for p ∈ AP, x ∈ V and a ∈ ℤ. Common abbreviations such as ⊤ ≡ p ∨ ¬p, ⊥ ≡ ¬⊤, Fφ ≡ ⊤ U φ, Gφ ≡ ¬F¬φ and Aφ ≡ ¬E¬φ may also be used. The set of all subformulae of a formula φ (including φ itself) is denoted sub(φ), and |φ| denotes the length of φ, with binary encoding of numbers.

Semantics. Intuitively, a variable x is used to mark some position on the concerned run. Within the scope of x, a term #_x(φ) refers to the number of times the formula φ holds between the position marked by x and the current position. The semantics of CCTL* is hence defined with respect to a Kripke structure 𝒦 = (S, s_I, E, λ), a run ρ ∈ Runs(𝒦), a position i ∈ ℕ on ρ and a valuation function θ: V → ℕ assigning a position (index) on ρ to each variable. The satisfaction relation ⊨ is defined inductively for p ∈ AP, formulae φ, ψ and terms τ_1, τ_2 by

(ρ,i,θ) ⊨ p ⟺ p ∈ λ(ρ(i)),
(ρ,i,θ) ⊨ Xφ ⟺ (ρ,i+1,θ) ⊨ φ,
(ρ,i,θ) ⊨ φ U ψ ⟺ ∃ k ≥ i: (ρ,k,θ) ⊨ ψ and ∀ j ∈ [i,k-1]: (ρ,j,θ) ⊨ φ,
(ρ,i,θ) ⊨ Eφ ⟺ ∃ ρ' ∈ Runs(𝒦): ∀ j ∈ [0,i]: ρ'(j) = ρ(j) and (ρ',i,θ) ⊨ φ,
(ρ,i,θ) ⊨ x.φ ⟺ (ρ,i,θ[x ↦ i]) ⊨ φ,
(ρ,i,θ) ⊨ τ_1 ≤ τ_2 ⟺ τ_1(ρ,i,θ) ≤ τ_2(ρ,i,θ),

where the Boolean cases are omitted and the semantics of terms is given, for a ∈ ℤ, by

a(ρ,i,θ) = a,
(τ_1 + τ_2)(ρ,i,θ) = τ_1(ρ,i,θ) + τ_2(ρ,i,θ),
(a·#_x(φ))(ρ,i,θ) = a·|{ j ∈ ℕ | θ(x) ≤ j ≤ i, (ρ,j,θ) ⊨ φ }|.

We abbreviate (ρ,i,0) ⊨ φ by (ρ,i) ⊨ φ and (ρ,0) ⊨ φ by ρ ⊨ φ, and say that ρ satisfies φ (at position i) in these cases. Moreover, we say a state s ∈ S satisfies φ, denoted s ⊨ φ, if there are ρ_s ∈ Runs(𝒦) and i ∈ ℕ such that ρ_s(i) = s and (ρ_s,i) ⊨ φ. The Kripke structure 𝒦 satisfies φ, denoted by 𝒦 ⊨ φ, if s_I ⊨ φ. Note that we choose to define the model-checking relation existentially, but since the formalism is closed under negation this does not have major consequences for our results.

Fragments. We define the following fragments of CCTL* in analogy to the classical logics LTL and CTL. The linear-time fragment CLTL consists of those formulae that do not use the path quantifiers E and A. The branching-time logic CCTL restricts the use of the temporal operators X and U such that each occurrence must be preceded immediately by either E or A. Similar branching-time logics have been considered in <cit.>.

Frequency logics. A major subject of our investigation are frequency constraints. This concept embeds naturally into the context of counting logics as it can be understood as a restricted form of counting. We therefore define in the following the frequency temporal logics fCTL*, fCTL and fLTL as fragments of CCTL*. Consider the following grammar defining the syntax of formulae φ, for natural numbers n, m ∈ ℕ with n ≤ m, m > 0, and p ∈ AP:

φ ::= p | φ ∧ φ | ¬φ | α
β ::= Xφ | φ U^n/m φ

With the additional rule α ::= Eφ | β it defines precisely the set of fCTL* formulae, while it defines fCTL for α ::= Eβ | Aβ and fLTL for α ::= β. The semantics is defined by interpreting formulae as CCTL* formulae, with the additional equivalence

φ U^n/m ψ ≡ ψ ∨ x.F((Xψ) ∧ m·#_x(φ) ≥ n·#_x(⊤))

for formulae φ and ψ and a variable x ∈ V not used in either φ or ψ.
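For atomic subformulae, the value of a counting term can be computed directly from the labelled run. A minimal sketch (in Python; the five-position word and the proposition names are an arbitrary illustration of ours, not part of the formal development):

```python
# A finite run prefix, represented by the sets of propositions at positions 0..4.
word = [{"p"}, set(), {"r"}, {"r"}, {"q"}]

def count_term(word, x_pos, i, prop):
    """Value of #_x(prop) at position i with the variable x placed at x_pos:
    the number of positions j with x_pos <= j <= i where prop holds."""
    return sum(1 for j in range(x_pos, i + 1) if prop in word[j])

# Evaluate the constraint #_x(p) <= #_x(r) at position 4, with x placed at 0:
ok = count_term(word, 0, 4, "p") <= count_term(word, 0, 4, "r")  # True: 1 <= 2
```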
It basically states that on every path reaching s_5 there must be a position where the states s_2 and s_4 (satisfying r) together have been visited at least as often as the state s_0. A different, yet similar statement can be formulated using only frequency constraints: φ_1' = A(rU^1/2 q) states that s_5 must always be reached while visiting s_2 and s_4 together at least as often as s_0, s_1 and s_3. Both φ_1 and φ_1' are violated, e.g. by the path s_0^3s_1s_2s_4s_5^ω. The Kripke structure however satisfies φ_2 = E z.G(q → #_z(p) < #_z(r)) because from every state except s_5 the number of positions that satisfy r can be increased arbitrarily without increasing the number of those satisfying p. Notice that this would not be the case, e.g., if s_4 were labelled by p.

While the positional variables in CCTL^* are a very flexible way of defining the scope of a constraint, frequency constraints in fLTL are always bound to the scope of an until operator. The same applies to the counting constraints of the counting logic defined in <cit.>. For example, the formula φU_[a_1#(φ_1) + ⋯ + a_n#(φ_n) ≥ k]ψ is equivalent to the formula z.(φU(ψ ∧ a_1·#_z(φ_1) + ⋯ + a_n·#_z(φ_n) ≥ k)). Admitting only natural coefficients, that logic can be encoded even in CTL, making it thus strictly less expressive than fCTL. On the other hand, it admits arbitrary integer coefficients, which is more general than the frequency until operator of fCTL. For example, pU^a/bq can be expressed as ⊤U_[b#(p) - a#(⊤) ≥ 0]q. The relation between fLTL and CLTL, as well as fCTL^* and CCTL^*, is analogous.

Model-checking problem. We now present the problem on which we focus our attention. The model-checking problem for a class 𝔎 ⊆ KS of Kripke structures and a specification language ℒ (in our case all the specification languages are fragments of CCTL^*) is denoted by MC(𝔎,ℒ) and defined as the following decision problem.

Input: A Kripke structure 𝒦∈𝔎 and a formula φ∈ℒ.
Decide: Does 𝒦 ⊨ φ hold?

For temporal logics without counting variables, the model-checking problem over Kripke structures has been studied intensively and is known to be PSpace-complete for LTL and CTL^* and in P for CTL (see e.g. <cit.>). It has recently been shown that when restricting to flat (or weak) structures the complexity of the model-checking problem for LTL is lower than in the general case <cit.>: it drops from PSpace to NP. As we show later, in the case of CCTL^*, flatness of the structures allows us to regain decidability of the model-checking problem, which is in general undecidable. In this paper, we propose various ways to solve the model-checking problem of fragments of CCTL^* over flat structures. For some of them we provide a direct algorithm, for others we reduce our problem to the satisfiability problem of a decidable extension of Presburger arithmetic.

§ MODEL-CHECKING FREQUENCY CTL

Satisfiability of fLTL is undecidable <cit.>, implying the same for model-checking fLTL, fCTL^* and CCTL^* over Kripke structures. This applies moreover to the logic of <cit.>. In contrast, we show in the following that MC(KS, fCTL) is decidable using an extension of the well-known labelling algorithm for CTL (see e.g. <cit.>).

Let 𝒦=(S,s_I,E,λ) be a Kripke structure and Φ an fCTL formula. We compute recursively subsets S_φ⊆ S of the states of 𝒦 for every subformula φ∈ sub(Φ) of Φ such that for all s∈ S we have s∈ S_φ iff s ⊨ φ. Checking whether the initial state s_I is contained in S_Φ then solves the problem. Propositions (p∈ AP), negation (¬φ), conjunction (φ∧ψ) and temporal next (EXφ, AXφ) are handled as usual, e.g.
S_p = {q∈ S | p∈λ(q)} and S_EXφ = {q∈ S | ∃ q'∈ S_φ: (q,q')∈ E}.

To compute if a state s∈ S satisfies a formula of the form A(φU^rψ) or E(φU^rψ), assume that S_φ and S_ψ are given inductively. If s∈ S_ψ we immediately have s∈ S_A(φU^rψ) and s∈ S_E(φU^rψ). For the remaining cases, the problem of deciding whether s∈ S_A(φU^rψ) or s∈ S_E(φU^rψ), respectively, can be reduced in linear time to the repeated control-state reachability problem in systems with one integer counter. The idea is to count the ratio along paths ρ∈ S^ω in 𝒦 as follows, in direct analogy to the semantics defined in <ref>. Assume r=n/m for n,m∈ℕ and n≤ m. For passing any position on ρ we pay a fee of n and for those positions that satisfy φ we gain a reward of m. Thus, we obtain a non-negative balance of rewards and fees at some position on ρ if, on average, among every m positions there are at least n positions that satisfy φ, meaning the ratio constraint is satisfied. In 𝒦, this balance along a path can be tracked using an integer counter that is increased by m-n when leaving a state s'∈ S_φ and decreased by adding -n whenever leaving a state s'∉ S_φ. Thus, let 𝒦̂_s=(S,s,{c},Δ) be the counter system with

Δ = {(t,𝐮,∅,t') | (t,t')∈ E, t∉ S_φ ⇒ 𝐮(c)=-n, t∈ S_φ ⇒ 𝐮(c)=m-n}.

The state s satisfies the formula A(φU^rψ) if there is no path starting in state s violating the formula φU^rψ. The latter is the case if at every position where ψ holds, the balance computed up to this position is negative. Therefore, consider an extension ℛ_s of 𝒦̂_s where every edge leading into a state s'∈ S_ψ is guarded by the constraint (c<0). Every (infinite) run of ℛ_s is now a counterexample for the property holding at s. To decide whether s∈ S_A(φU^rψ) it suffices to check that in ℛ_s no state is repeatedly reachable from s. A formula E(φU^rψ) is satisfied by s if there is some state s'∈ S_ψ reachable from s with a non-negative balance. Hence, consider the counter system 𝒰_s=(S⊎{𝚝}, s, {c}, Δ') obtained from 𝒦̂_s featuring a new sink state 𝚝∉ S. The transition relation

Δ' = Δ ∪ {(s',0,{(c≥0)},𝚝) | s'∈ S_ψ} ∪ {(𝚝,0,∅,𝚝)}

extends Δ such that precisely the paths starting in s and reaching a state s'∈ S_ψ with non-negative counter value (i.e. sufficient ratio) can be extended to reach 𝚝. Checking if s is supposed to be contained in S_E(φU^rψ) then amounts to deciding whether 𝚝 is (repeatedly) reachable from s in 𝒰_s. Finally, repeated reachability is easily translated to the accepting run problem of Büchi pushdown systems (BPDS) and the latter is in P <cit.>. A counter value n≥0 can be encoded into a stack of the form ⊕^n while ⊖^n encodes -n≤0, and for evaluating the guards (c≥0) and (c<0) only the top symbol is relevant. Simulating an update of the counter by a number a∈ℤ requires performing |a| push or pop actions. The size of the system is therefore linear in the largest absolute update value and hence exponential in its binary representation. Since the updates of the constructed counter systems originate from the ratios in Φ, the corresponding BPDS are of up to exponential size in |Φ|. During the labelling procedure this step must be performed at most a polynomial number of times, giving an exponential-time algorithm.

Theorem. MC(KS, fCTL) is in Exp.

It is worth noting that for a fixed formula (program complexity) or a unary encoding of numbers in frequency constraints, the size of the constructed Büchi pushdown systems and thus the runtime of the algorithm remains polynomial.

Corollary. MC(KS, fCTL) with unary number encoding is in P.
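To make the fee/reward bookkeeping concrete, the following sketch (ours, not part of the paper; Python and all names are chosen for illustration only) evaluates φU^{n/m}ψ along a given finite path prefix, with each position represented by the set of subformulae holding there. It mirrors the counter construction above: reward m-n when φ holds, fee -n otherwise, and a witness is a ψ-position reached with non-negative balance. Infinite runs are of course not covered; the sketch only illustrates the arithmetic.

```python
def holds_frequency_until(path_labels, phi, psi, n, m):
    """Check phi U^{n/m} psi at position 0 of a finite path prefix.

    path_labels[i] is the set of (sub)formulae holding at position i.
    Mirrors the counter system: reward m-n for phi-positions, fee -n
    otherwise; a witness is a psi-position entered with balance >= 0.
    """
    if psi in path_labels[0]:          # trivial witness at the start
        return True
    balance = 0
    for i, labels in enumerate(path_labels[:-1]):
        # pay the fee / collect the reward for position i
        balance += (m - n) if phi in labels else -n
        # position i+1 is a witness if psi holds there and the
        # balance over positions 0..i is non-negative
        if psi in path_labels[i + 1] and balance >= 0:
            return True
    return False

# The path s0 s0 s0 s1 s2 s4 s5 with r at s2,s4 and q at s5:
labels = [set(), set(), set(), set(), {"r"}, {"r"}, {"q"}]
print(holds_frequency_until(labels, "r", "q", 1, 2))  # False: ratio 2/6 < 1/2
```

The printed result matches the discussion of φ_1' above: the path s_0^3s_1s_2s_4s_5^ω reaches q with only two r-positions out of six.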
§ MODEL-CHECKING FREQUENCY LTL OVER FLAT KRIPKE STRUCTURES

We show in this section that model-checking fLTL is decidable over flat Kripke structures. As decision procedure we employ a guess and check approach: given a flat Kripke structure 𝒦 and an fLTL formula Φ, we choose non-deterministically a set of satisfying runs to witness 𝒦 ⊨ Φ. As representation for such sets we introduce augmented path schemas that extend the concept of path schemas <cit.> and provide for each of their runs a labelling by formulae. We show that if an augmented path schema features a syntactic property that we call consistency then the associated runs actually satisfy the formulae they are labelled with. Moreover, we show that every run of 𝒦 is in fact represented by some consistent schema of size at most exponential in |𝒦|+|Φ|. This gives rise to the following non-deterministic procedure.

* Read as input an FKS 𝒦 and an fLTL formula Φ.
* Guess an augmented path schema 𝒫 in 𝒦 of at most exponential size.
* Terminate successfully if 𝒫 is consistent and accepts a run that is initially labelled by Φ.

We fix for this section a flat Kripke structure 𝒦=(S,s_I,E,λ) and an fLTL formula Φ. For convenience we assume that AP ⊆ sub(Φ). Omitted technical details can be found in <ref>.

§.§ Augmented Path Schemas

The set of runs of 𝒦 can be represented as a finite number of so-called path schemas that consist of a sequence of paths and simple loops consecutive in 𝒦 <cit.>. A path schema represents all runs that follow the given shape while repeating each loop arbitrarily often. For our purposes we extend this idea with additional labellings and introduce integer counters, updates and guards that can restrict the admitted runs.

An augmented state of 𝒦 is a tuple a = (s, L, G, u⃗, t) ∈ S × 2^sub(Φ) × 2^G_C × ℤ^C × {𝙻,𝚁} comprised of a state s of 𝒦, a set of formula labels L, guards G and an update u⃗ over a set of counter names C, and a type indicating whether the state is part of a loop (𝙻) or not (𝚁). We denote by st(a)=s, lab(a)=L, grd(a)=G, upd(a)=u⃗ and typ(a)=t the respective components of a. An augmented path in 𝒦 is a sequence u=a_0…a_n of augmented states a_i such that (st(a_i),st(a_i+1))∈ E for i∈[0,n-1]. If typ(a_i)=𝚁 for all i∈[0,n] then u is called a row. It is called an augmented simple loop (or simply loop) if it is non-empty and (st(a_n),st(a_0))∈ E and st(a_i)≠st(a_j) for i≠ j and typ(a_i)=𝙻 for all i∈[0,n].

An augmented path schema (APS) in 𝒦 is a tuple 𝒫=(P_0,…,P_n) where each component P_k is a row or a loop, P_n is a loop and their concatenation P_0P_1…P_n is an augmented path. Thanks to counters we can, for example, restrict to those runs satisfying a specific frequency constraint at some position by tracking it as discussed in <ref>. <Ref> shows an example of an APS with edges indicating the possible state progressions. It features a single counter that tracks the frequency constraint of a formula rU^2/3q from state 1. We denote by |𝒫|=|P_0…P_n| the size of 𝒫 and use global indices ℓ∈[0,|𝒫|-1] to address the (ℓ+1)-th augmented state in P_0…P_n, denoted 𝒫[ℓ]. To distinguish these global indices from positions in arbitrary sequences, we refer to them as locations of 𝒫. Moreover, locs_𝒫(k) = {ℓ | |P_0P_1…P_k-1| ≤ ℓ < |P_0P_1…P_k|} denotes for 0≤ k≤ n the set of locations belonging to component P_k and for all locations ℓ∈ locs_𝒫(k) we denote the corresponding component index in 𝒫 by comp_𝒫(ℓ)=k. For example, in <ref> we have locs_𝒫(3)={3,4} and comp_𝒫(6)=5 because the seventh state of 𝒫 belongs to P_5.
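As a data-structure view of these definitions, augmented states and schemas might be represented as follows (a minimal sketch with our own field and method names; the paper treats the representation abstractly):

```python
from dataclasses import dataclass, field

@dataclass
class AugState:
    state: int                  # control state of the Kripke structure
    labels: frozenset           # subformulae attached to this state
    guards: frozenset = frozenset()              # e.g. {("c", ">=0")}
    update: dict = field(default_factory=dict)   # counter -> integer update
    typ: str = "R"              # "R" (row) or "L" (loop)

@dataclass
class APS:
    components: list            # rows/loops, each a list of AugState

    def size(self):
        return sum(len(c) for c in self.components)

    def locs(self, k):
        """Global locations belonging to component k (locs_P(k))."""
        start = sum(len(c) for c in self.components[:k])
        return range(start, start + len(self.components[k]))

    def comp(self, loc):
        """Component index of a global location (comp_P(loc))."""
        for k in range(len(self.components)):
            if loc in self.locs(k):
                return k
        raise IndexError(loc)
```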
We extend the component projections for augmented states to (sequences of) locations of 𝒫 and write, e.g., st_𝒫(ℓ_1ℓ_2) for st(𝒫[ℓ_1])st(𝒫[ℓ_2]) and lab_𝒫(ℓ) for lab(𝒫[ℓ]).

An APS 𝒫 gives rise to a counter system cs(𝒫) = (Q, 0, C, Δ) where Q={0,…,|𝒫|-1}, C are the counters used in the augmented states of 𝒫 and Δ consists of those transitions (ℓ, upd_𝒫(ℓ), grd_𝒫(ℓ'), ℓ') such that 0≤ℓ'=ℓ+1<|𝒫| or ℓ'<ℓ and {ℓ',ℓ'+1,…,ℓ}=locs_𝒫(k) for some loop P_k. Notice that the APS in <ref> is presented as its corresponding counter system. Let succ_𝒫(ℓ) denote the set {ℓ'∈ Q | ∃𝐮,G: (ℓ,𝐮,G,ℓ')∈Δ} of successors of ℓ in cs(𝒫). A run of 𝒫 is a run of cs(𝒫) that visits each location ℓ∈ Q at least once. The set of all runs of 𝒫 is denoted runs(𝒫). As a consequence, a run visits the last loop infinitely often. We say that an APS 𝒫 is non-empty iff runs(𝒫) ≠ ∅. Since every run σ∈ runs(𝒫) corresponds, by construction of 𝒫, to a path st_𝒫(σ)∈ S^ω in 𝒦 we define the satisfaction of an fLTL formula φ at position i by (σ,i) ⊨_𝒫 φ iff (st_𝒫(σ),i) ⊨ φ.

Finally, notice that cs(𝒫) is in fact a flat counter system. It is shown in <cit.> that LTL properties can be verified over flat counter systems in non-deterministic polynomial time. Since LTL can express that each location of cs(𝒫) is visited we obtain the following result.

[<cit.>] Deciding non-emptiness of APS is in NP.

§.§ Labellings of Consistent APS are Correct

An APS 𝒫 assigns to every position i on each of its runs σ the labelling L_i = lab_𝒫(σ(i)). We are interested in this labelling being correct with respect to some formula Φ in the sense that Φ∈ L_i if and only if (σ,i) ⊨_𝒫 Φ. The notion of consistency introduced in the following provides a sufficient criterion for correctness of the labelling of all runs of an APS.

An augmented path u=a_0…a_n is said to be good, neutral or bad for an fLTL formula Ψ=φU^x/yψ if the number d=|{0≤ i<|u| | φ∈ lab(u(i))}| of positions labelled with φ is larger than (d>x/y·|u|), equal to (d=x/y·|u|) or smaller than (d<x/y·|u|), respectively, the fraction x/y of all positions of u. A tuple (P_0,…,P_n) of rows and loops (not necessarily an APS) is called L-periodic for a set L⊆ sub(Φ) of labels if all augmented paths P_k share the same labelling with respect to L, that is for all 0≤ k<n we have |P_k|=|P_k+1| and lab(P_k(i))∩ L = lab(P_k+1(i))∩ L for all 0≤ i<|P_k|.

[Consistency] Let 𝒫=(P_0,…,P_n) be an APS in 𝒦, k∈[0,n] and ℓ∈ locs_𝒫(k) a location on component P_k. The location ℓ is consistent with respect to an fLTL formula Ψ if all locations of 𝒫 are consistent with respect to all strict subformulae of Ψ and one of the following conditions applies.

* Ψ∈ AP and Ψ∈ lab_𝒫(ℓ) ⇔ Ψ∈λ(st_𝒫(ℓ)), or Ψ=φ∧ψ and Ψ∈ lab_𝒫(ℓ) ⇔ φ,ψ∈ lab_𝒫(ℓ), or Ψ=¬φ and Ψ∈ lab_𝒫(ℓ) ⇔ φ∉ lab_𝒫(ℓ).
* Ψ=Xφ and ∀ℓ'∈ succ_𝒫(ℓ): Ψ∈ lab_𝒫(ℓ) ⇔ φ∈ lab_𝒫(ℓ').
* Ψ=φU^x/yψ and one of the following holds:
  * Ψ,ψ∈ lab_𝒫(ℓ)
  * Ψ∈ lab_𝒫(ℓ) and P_n is good for Ψ and ∃ℓ'∈ locs_𝒫(n): ψ∈ lab_𝒫(ℓ')
  * typ_𝒫(ℓ)=𝚁 and there is a counter c∈ C such that ∀ℓ'<ℓ: upd_𝒫(ℓ')(c) = 0 and ∀ℓ'≥ℓ: φ∈ lab_𝒫(ℓ') ⇒ upd_𝒫(ℓ')(c) = y-x and ∀ℓ'≥ℓ: φ∉ lab_𝒫(ℓ') ⇒ upd_𝒫(ℓ')(c) = -x and
    * if Ψ∉ lab_𝒫(ℓ) then ψ∉ lab_𝒫(ℓ) and ∀ℓ'>ℓ: ψ∈ lab_𝒫(ℓ') ⇒ (c<0)∈ grd_𝒫(ℓ') and
    * if Ψ∈ lab_𝒫(ℓ) then ∃ℓ'>ℓ: ψ∈ lab_𝒫(ℓ') ∧ (c≥0)∈ grd_𝒫(ℓ').
  * There is k'∈[0,n] such that all locations ℓ'∈ locs_𝒫(k') are consistent wrt. Ψ and
    * if k=n then k'<k and (P_k',P_k'+1,…,P_k) is {φ,ψ,Ψ}-periodic,
    * if k<n and P_k is good or neutral for Ψ and Ψ∉ lab_𝒫(ℓ), or P_k is bad for Ψ and Ψ∈ lab_𝒫(ℓ), then k'<k<n and (P_k',P_k'+1,…,P_k+1) is {φ,ψ,Ψ}-periodic, and
    * if k<n and P_k is good or neutral for Ψ and Ψ∈ lab_𝒫(ℓ), or P_k is bad for Ψ and Ψ∉ lab_𝒫(ℓ), then k<k'<n and (P_k,P_k+1,…,P_k'+1) is {φ,ψ,Ψ}-periodic.

The APS 𝒫 is consistent with respect to Ψ if this is the case for all its locations.
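The good/neutral/bad classification is a simple arithmetic check; a sketch (ours, continuing the illustrative Python above) using exact rational arithmetic to avoid rounding issues:

```python
from fractions import Fraction

def classify(aug_path_labels, phi, x, y):
    """Classify an augmented path as 'good', 'neutral' or 'bad' for
    phi U^{x/y} psi: compare the number d of phi-labelled positions
    with the fraction x/y of all positions."""
    d = sum(1 for labels in aug_path_labels if phi in labels)
    threshold = Fraction(x, y) * len(aug_path_labels)
    if d > threshold:
        return "good"
    if d == threshold:
        return "neutral"
    return "bad"

# A loop with r at one of three positions is neutral for r U^{1/3} q
# and bad for r U^{2/3} q.
print(classify([{"r"}, set(), set()], "r", 1, 3))  # neutral
print(classify([{"r"}, set(), set()], "r", 2, 3))  # bad
```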
The cases <ref> and <ref> reflect the semantics syntactically. For instance, location 0 in <ref> can be labelled consistently with Xp since all its successors (0 and 1) are labelled with p. Case <ref>, concerning the (frequency) until operator, is more involved.

Assume that Φ=φU^x/yψ is an until formula and that the labelling of 𝒦 by φ and ψ is consistent. In some cases, it is obvious that Φ holds, namely at positions labelled by ψ (case <ref>) or if the final loop already guarantees that Φ always holds (case <ref>). If neither is the case we can apply the idea discussed in <ref> and use a counter to check explicitly if at some point the formula Φ holds (case <ref>). Recall that to validate (or invalidate) the labelling of a location by the formula Φ a specific counter tracks the frequency constraint in terms of the balance between fees and rewards along a run. For the starting point to be unique this case only applies to locations that are not part of a loop. For those labelled with Φ there should exist a location in the future where ψ holds and the balance counter is non-negative. For those not labelled with Φ all locations in the future where ψ holds must be entered with negative balance. Finally, case <ref> can apply (not only) to loops and is based on the following reasoning: if a loop is good (bad) and Φ is supposed to hold at some of its locations then it suffices to verify that this is the case during any of its future (past) iterations, e.g. the last (first), and vice versa if Φ is supposed not to hold. This is the reason why this case allows for delegating consistency along a periodic pattern.

For instance, consider the formula Ψ=rU^2/3q and the APS shown in <ref>. It is consistent to not label location 1 by Ψ because the counter c tracks the balance and locations 7 and 8 are guarded as required. If a run takes, e.g., the loop P_5 seven times, it has to take P_3 at least twice to satisfy all guards. This ensures that the ratio for the proposition r is strictly less than 2/3 upon reaching the first (and thus any) occurrence of q. Note that to also make location 2 consistent, an additional counter needs to be added. Consistency with respect to Ψ is then inherited by location 0 from location 1 according to case <ref> of the definition. Intuitively, additional iterations of the bad loop P_0 can only diminish the ratio.

The definition of consistency guarantees that if an APS is consistent with respect to Φ then for every run of the APS, each time the formula Φ is encountered as a label, it holds at the current position (see <ref> for complete details). Hence we obtain the following lemma that guarantees correctness of our decision procedure.

Lemma (Correctness). If there is an APS 𝒫 in 𝒦 such that 𝒫 is consistent wrt. Φ and Φ∈ lab_𝒫(0) and runs(𝒫)≠∅ then 𝒦 ⊨ Φ.

§.§ Constructing Consistent APS

Assuming that our flat Kripke structure 𝒦 admits a run ρ such that ρ ⊨ Φ, we show how to construct a non-empty APS that is initially labelled by Φ and consistent with respect to it. It will be of at most exponential size in |𝒦|+|Φ| and is built recursively over the structure of Φ.

Concerning the base case where Φ∈ AP, all paths in a flat structure can be represented by a path schema of linear size <cit.>. Intuitively, since 𝒦 is flat, every subpath s_is_i+1…s_i'…s_i'' of ρ where a state s_i=s_i'=s_i'' occurs more than twice is equal to (s_is_i+1…s_i'-1)^ks_i'' for some k ∈ ℕ.
Hence, there are simple subpaths u_0,…,u_m∈ S^+ of ρ and positive numbers of iterations n_0,…,n_m-1∈ℕ such that ρ=u_0^n_0u_1^n_1…u_m-1^n_m-1u_m^ω and |u_0u_1…u_m|≤2|S|. From this decomposition, we build an APS being consistent with respect to all propositions. Henceforth, we assume by induction an APS 𝒫 being consistent with respect to all strict subformulae of Φ and a run σ∈ runs(𝒫) with st_𝒫(σ)=ρ. If Φ=φ∧ψ or Φ=¬φ, <ref> determines for each augmented state of 𝒫 whether it is supposed to be labelled by Φ or not. It remains hence to deal with the next and frequency until operators.

Labelling 𝒫 by Xφ. If Φ=Xφ the labelling at some location ℓ is extended according to the labelling of its successors. These may disagree upon φ (only) if ℓ has more than one successor, i.e., being the last location on a loop P_k of 𝒫=(P_0,…,P_m). In that case we consult the run σ: if it takes P_k only once, this loop can be cut and replaced by P_k' that we define to be an exact copy except that all augmented states have type 𝚁 instead of 𝙻. If otherwise σ takes P_k at least twice, the loop can be unfolded by inserting P_k' between P_k and P_k+1, i.e. letting 𝒫'=(P_0,…,P_k,P_k',P_k+1,…,P_m). Either way, σ remains a run of the obtained APS, up to shifting the locations ℓ'>ℓ if the extra component was inserted (recall that locations are indices). Importantly, cutting or unfolding any loop, even any number of times, in 𝒫 preserves consistency.

Labelling 𝒫 by φU^rψ. The most involved case is to label a location ℓ by Φ=φU^rψ. First, assume that ℓ is part of a row. Whether it must be labelled by Φ is uniquely determined by σ. This is consistent if case <ref> or <ref> of <ref> applies. The conditions of case <ref> are also realised easily in most situations. A difficulty arises only if Φ holds at ℓ but every location ℓ' witnessing this (by being reachable with sufficient frequency and labelled by ψ) is part of some loop P'. Adding the required guard directly to ℓ' may be too strict if σ traverses P' more than once. However, the first iteration (if P' is bad for Φ) or the last iteration (if P' is good) on σ contains a position (labelled with ψ) witnessing that Φ holds if any iteration does. Thus it suffices to unfold the loop once in the respective direction. For example, consider in <ref> location 5 and a formula φ=rU^2/5q. Location 8 could witness that φ holds but a corresponding guard would be violated eventually since P_7 is bad for φ. The first iteration is thus the optimal choice. The unfolding P_6 separates it such that location 7 can be guarded instead without imposing unnecessary constraints. Now assume that location ℓ, to be labelled or not with Φ, is part of a loop P which is stable in the sense that Φ holds either at all positions i with σ(i)=ℓ or at none of them. With two unfoldings of P, made consistent as above, case <ref> applies. However, σ may go through ℓ several, say n>1, times where Φ holds at some but not all of the corresponding positions. If n is small we can replace P by precisely n unfoldings, thus reducing to the previous case without increasing the size of the structure too much. We can moreover show that if n is not small then it is possible to decompose such a problematic loop into a constant number of unfoldings and two stable copies based on the following observation.

Lemma (Decomposition). Let P=𝒫[ℓ_0]…𝒫[ℓ_|P|-1] be a non-terminal loop in 𝒫 with corresponding location sequence v=ℓ_0…ℓ_|P|-1 and n̂=|P|·y for some y>0.
For every run σ=uv^nw∈ runs(𝒫) where n≥n̂+2 there are n_1 and n_2 such that σ=uv^n_1v^n̂v^n_2w and for all positions i on σ with |u| ≤ i < |uv^n_1-1| or |uv^n_1v^n̂| ≤ i < |uv^n_1v^n̂v^n_2-2| we have (σ,i) ⊨_𝒫 Φ iff (σ,i+|P|) ⊨_𝒫 Φ.

Consider again the APS 𝒫 in <ref>, a run σ∈ runs(𝒫) and the location 3. Whether or not φ=rU^2/3q holds at some position i with σ(i)=3 depends on how often σ traverses the good loop P_5 (the more the better) and how often it repeats P_3 after position i (the more the worse). Assume σ traverses P_5 exactly five times and P_3 sufficiently often, say 10 times. Then, during the last three iterations of P_3, φ holds when visiting location 3, and also location 4. In the two iterations before, the formula holds exclusively at location 4 and in any preceding iteration, it does not hold at all. Thus any uniform labelling of P_3 would necessarily be incorrect. However, we can replace P_3 by four copies of it that are labelled as indicated in <ref> and σ can easily be mapped onto this modified structure.

The presented procedure for constructing an APS from the run ρ in 𝒦 performs only linearly many steps in |Φ|, namely one step for each subformula. It starts with a structure of size at most 2|𝒦| and all modifications required to label an APS increase its size by a constant factor. Hence, we obtain an APS 𝒫_Φ of size at most exponential in the length of Φ and polynomial in the number of states of 𝒦. This consistent APS still contains a run corresponding to ρ and hence its first location must be labelled by Φ because (ρ,0) ⊨ Φ and we have seen that consistency implies correctness.

Lemma (Completeness). If 𝒦 ⊨ Φ then there is a consistent APS 𝒫 in 𝒦 of at most exponential size in 𝒦 and Φ where Φ∈ lab(𝒫[0]) and 𝒫 is non-empty.

We have seen in this section that the decision procedure presented in the beginning is sound and complete due to <ref> and <ref>, respectively. The guessed APS is of exponential size in |Φ| and of polynomial size in |𝒦|. Since both checking consistency and non-emptiness (cf. <ref>) require polynomial time (in the size of the APS) the procedure requires at most exponential time.

Theorem. MC(FKS, fLTL) is in NExp.

This result immediately extends to fCTL^*. For a state q of a flat Kripke structure 𝒦 and an arbitrary fLTL formula φ, the procedure allows us to decide in NExp whether q ⊨ Eφ holds. It allows us further to decide if q ⊨ Aφ holds in ExpSpace by the dual formulation q ⊭ E¬φ and Savitch's theorem. Following otherwise the standard labelling procedure for CTL^* (cf. <ref>) requires invoking the procedure a polynomial number of times in |𝒦|+|Φ|.

Corollary. MC(FKS, fCTL^*) is in ExpSpace.

§ ON MODEL-CHECKING CCTL^* OVER FLAT KRIPKE STRUCTURES

In this section, we prove decidability of MC(FKS, CCTL^*). We provide a polynomial encoding into the satisfiability problem of a decidable extension of Presburger arithmetic featuring a quantifier for counting the solutions of a formula. For the reverse direction an exponential reduction provides a corresponding hardness result for MC(FKS, CLTL) and MC(FKS, CCTL).

Presburger arithmetic with Härtig quantifier. First-order logic over the natural numbers with addition was shown to be decidable by M. Presburger <cit.>. It has been extended with the so-called Härtig quantifier <cit.> that allows for referring to the number of values for a specific variable that satisfy a formula. We denote this extension by PH. The syntax of PH formulae φ and terms τ over a set of variables V is defined by the grammar

φ ::= τ≤τ | ¬φ | φ∧φ | ∃x.φ | ∃^=xy.φ
τ ::= a | a·x | τ+τ

for natural constants a∈ℕ and variables x,y ∈ V.
Since the structure (ℕ,+) is fixed, the semantics is defined over valuations η:V→ℕ that are extended to terms as expected, e.g., η(3·x+1) = 3·η(x)+1. We define the satisfaction relation ⊨ as usual for first-order logics and, for the Härtig quantifier, by η ⊨ ∃^=xy.φ if and only if the set {b∈ℕ | η[y↦b] ⊨ φ} is finite and its cardinality is equal to η(x). Notice that the solution set has to be finite.

The satisfiability problem of PH consists in determining whether for a formula φ there exists a valuation η such that η ⊨ φ. It is decidable <cit.> via eliminating the Härtig quantifier, but its complexity is not known. For what concerns classic Presburger arithmetic, the complexity of its satisfiability problem lies between 2Exp and 2ExpSpace <cit.>.

Lower bound for MC(FKS, CLTL). Let 𝒦 be the flat Kripke structure over AP=∅ that consists of a single loop of length one. We can encode satisfiability of a PH formula Φ into the question whether the (unique) run ρ of 𝒦 satisfies a CLTL formula Φ̂. Assume without loss of generality that Φ has no free variables. Let V_Φ be the variables used in Φ and z_1,z_2,…∉V_Φ additional variables. Recall that ρ ⊨ Φ̂ if (ρ,0,θ) ⊨ Φ̂ for some valuation θ of the positional variables in Φ̂.

The idea is essentially to encode the value given to a variable x∈ V_Φ of Φ into the distance between the positions assigned to two variables of Φ̂. Technically, a mapping Z∈ℕ^V_Φ associates with each variable x∈ V_Φ an index j=Z(x) and the constraints that Φ imposes on x are translated to constraints on positional variables z_j and z_j-1 (more precisely, the distance θ(z_j) - θ(z_j-1) between the assigned positions). The following transformation 𝚝: PH × ℕ^V_Φ × ℕ → CLTL constructs the formula from Φ. When a variable is encountered, the mapping Z is updated by assigning to it the next free index (third parameter). Let

𝚝(φ_1 ⊙ φ_2, Z, i) = 𝚝(φ_1, Z, i) ⊙ 𝚝(φ_2, Z, i)
𝚝(¬φ, Z, i) = ¬𝚝(φ, Z, i)
𝚝(a·x, Z, i) = a·#_z_Z(x)-1(⊤) - a·#_z_Z(x)(⊤)
𝚝(a, Z, i) = a
𝚝(∃x.φ, Z, i) = F(z_i.𝚝(φ, Z[x↦i], i+1))
𝚝(∃^=xy.φ, Z, i) = (𝚝(x, Z, i) = #_z_i-1(z_i.𝚝(φ, Z[y↦i], i+1)))

for x,y∈ V_Φ, a,i∈ℕ and ⊙∈{∧,≤,+}. Then, we obtain Φ̂ = z_0.𝚝(Φ,1,1), initialising Z and the first free index with 1. Notice that the translation of the Härtig quantifier instantiates the scope effectively twice when substituting the equality and thus the size of Φ̂ may at worst double with each nesting. Finally, we can equivalently add path quantifiers to all temporal operators in Φ̂ and obtain, syntactically, a CCTL formula.

The satisfiability problem of PH is reducible in exponential time to both MC(FKS, CLTL) and MC(FKS, CCTL).

Deciding MC(FKS, CCTL^*). We provide a polynomial reduction to the satisfiability problem of PH. Given a flat Kripke structure 𝒦 we can represent each run ρ by a fixed number of naturals. We use a predicate Conf that allows for accessing the i-th state on ρ given its encoding and a predicate Run characterising all (encodings of) runs in runs(𝒦). Such predicates were shown to be definable by Presburger arithmetic formulae of polynomial size and were used to encode MC(FKS, LTL) <cit.>. We adopt this idea for CCTL^* and PH. Let 𝒦=(S,s_I,E,λ) and assume S ⊆ ℕ without loss of generality. For N∈ℕ let V_N={r_1,…,r_N,i,s} be a set of variables that we use to encode a run, a position and a state, respectively.

[<cit.>] There is a number N∈ℕ, a mapping enc:ℕ^N→ S^ω and predicates Conf(r_1,…,r_N,i,s) and Run(r_1,…,r_N) such that for all valuations η:V_N→ℕ we have

* η ⊨ Run(r_1,…,r_N) ⇔ enc(η(r_1),…,η(r_N))∈ runs(𝒦) and
* if η ⊨ Run(r_1,…,r_N) then η ⊨ Conf(r_1,…,r_N,i,s) ⇔ enc(η(r_1),…,η(r_N))(η(i)) = η(s).

Both predicates are definable by formulae over variables V⊇ V_N of polynomial size in |𝒦|.
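The Härtig quantifier simply counts the solutions of its scope. The following brute-force sketch (ours; not the decision procedure, which eliminates the quantifier symbolically) illustrates this counting semantics by representing formulas as Python predicates over valuations and searching candidate values up to an assumed cut-off BOUND, below which all solutions are presumed to lie:

```python
# Illustrative only: real PH semantics ranges over all of N; the
# cut-off BOUND is an assumption made for the sake of the sketch.
BOUND = 10_000

def exists_eq(x, y, phi):
    """Semantics of ∃^=x y. φ: the number of values b for y that
    satisfy φ must be finite and equal to the value of x."""
    def sem(eta):
        count = sum(1 for b in range(BOUND) if phi({**eta, y: b}))
        return count == eta[x]
    return sem

# ∃^=x y. (y + y ≤ 6): exactly the values y ∈ {0,1,2,3} qualify,
# so the formula holds iff x = 4.
phi = exists_eq("x", "y", lambda eta: eta["y"] + eta["y"] <= 6)
print(phi({"x": 4}))  # True
print(phi({"x": 3}))  # False
```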
Now, let Φ be a CCTL^* formula to be verified on 𝒦. Without loss of generality we assume that all comparisons φ_≤ ∈ sub(Φ) of the form τ_1≤τ_2 have the shape φ_≤ = ∑_ℓ=1^k a_ℓ·#_x_ℓ(φ_ℓ) + b ≤ ∑_ℓ=k+1^m a_ℓ·#_x_ℓ(φ_ℓ) + c for some k,m,b,c∈ℕ, coefficients a_ℓ∈ℕ and subformulae φ_ℓ. As it is done in <cit.> for LTL, using the predicates Conf and Run, we construct a PH formula that is satisfiable if and only if 𝒦 ⊨ Φ. Given the encoding of relevant runs into natural numbers we can express path quantifiers with quantification over the variables r_1,…,r_N. Temporal operators can be expressed by using Conf to access specific positions. Storing of positions is done explicitly by assigning them as value to specific variables x. Variables z are introduced to hold the number of positions satisfying a formula and can then be used in constraints. For example, to translate a term #_x(φ) we specify a variable, e.g., z_1 holding this value by ∃z_1.∃^=z_1i'. x≤i'≤i ∧ φ̂ where i holds the current position and φ̂ expresses that φ holds at position i' of the current run. Constraints like #_x(φ)+1 ≤ #_x(ψ) can now directly be translated to, e.g., z_1+1 ≤ z_2. We use a syntactic translation function 𝚃 that takes the formula φ to be translated, the names of N variables encoding the current run and the name of the variable holding the current position. Let

𝚃(p, r_1,…,r_N, i) = ∃s. Conf(r_1,…,r_N,i,s) ∧ ⋁_a | p∈λ(a) s = a
𝚃(φ∧ψ, r_1,…,r_N, i) = 𝚃(φ, r_1,…,r_N, i) ∧ 𝚃(ψ, r_1,…,r_N, i)
𝚃(¬φ, r_1,…,r_N, i) = ¬𝚃(φ, r_1,…,r_N, i)
𝚃(Xφ, r_1,…,r_N, i) = ∃i'. i' = i+1 ∧ 𝚃(φ, r_1,…,r_N, i')
𝚃(φUψ, r_1,…,r_N, i) = ∃i''. i≤i'' ∧ 𝚃(ψ, r_1,…,r_N, i'') ∧ ∀i'. (i≤i' ∧ i'<i'') → 𝚃(φ, r_1,…,r_N, i')
𝚃(Eφ, r_1,…,r_N, i) = ∃r'_1…∃r'_N. Run(r'_1,…,r'_N) ∧ 𝚃(φ, r'_1,…,r'_N, i) ∧ ∀i'. (i'≤i) → ∃s. Conf(r_1,…,r_N,i',s) ∧ Conf(r'_1,…,r'_N,i',s)
𝚃(x.φ, r_1,…,r_N, i) = ∃x. x=i ∧ 𝚃(φ, r_1,…,r_N, i)
𝚃(φ_≤, r_1,…,r_N, i) = ∃z_1…∃z_m. (⋀_ℓ=1^m ∃^=z_ℓi'. x_ℓ≤i' ∧ i'≤i ∧ 𝚃(φ_ℓ, r_1,…,r_N, i')) ∧ a_1·z_1 + … + a_k·z_k + b ≤ a_k+1·z_k+1 + … + a_m·z_m + c

for φ_≤ = ∑_ℓ=1^k a_ℓ·#_x_ℓ(φ_ℓ) + b ≤ ∑_ℓ=k+1^m a_ℓ·#_x_ℓ(φ_ℓ) + c. Primed variables denote fresh copies of the corresponding input variables, e.g. i' becomes (i')'=i'' and i'' becomes i'''. Now, 𝒦 ⊨ Φ if and only if ∃r_1…∃r_N.∃i. Run(r_1,…,r_N) ∧ i = 0 ∧ 𝚃(Φ,r_1,…,r_N,i) is satisfiable.

MC(FKS, CCTL^*) is reducible to PH satisfiability in polynomial time.

§ CONCLUSION

In this paper, we have seen that model checking flat Kripke structures with some expressive counting temporal logics is possible whereas this is not the case for general, finite Kripke structures. However, our results provide an under-approximation approach to this latter problem that consists in constructing flat sub-systems of the considered Kripke structure. We furthermore believe our method works as well for flat counter systems. We leave as open problem the precise complexity of model checking CCTL^*, fLTL and fCTL^* over flat Kripke structures. It follows from <cit.> that the latter two problems are NP-hard while we obtain exponential upper bounds. However, we believe that if we fix the nesting depth of the frequency until operator in the logic, the complexity could be improved.

This work has shown, as one could have expected, a strong connection between counting temporal logics and counter systems. As future work we plan to study automata-based formalisms in this spirit, where we will equip our automata with counters whose role will be to evaluate the relative frequency of particular events.

§ CONSISTENCY IMPLIES CORRECTNESS

This section is dedicated to proving <ref>. Recall that 𝒦=(S,s_I,E,λ) is a Kripke structure and Φ an fLTL formula.
Let us first formally define the notion of correctness.

A location ℓ of 𝒫 is correct wrt. a formula ξ if and only if

∀σ∈ runs(𝒫): ∀i∈ℕ: σ(i)=ℓ ⇒ (ξ∈ lab_𝒫(ℓ) ⇔ (σ,i) ⊨_𝒫 ξ).

An APS 𝒫 is correct wrt. ξ if that is the case for all locations of 𝒫. Notice that ℓ can only be correct if ξ holds at all positions where ℓ occurs or at none of them.

If 𝒦 now contains an APS 𝒫 that is correct wrt. Φ and that path schema contains a run σ∈ runs(𝒫) then the correct labelling of the initial location 0 by Φ implies that (σ,0) ⊨_𝒫 Φ and thus 𝒦 ⊨ Φ. Therefore, <ref> is implied by the following, that we prove in the remainder of this section.

If an APS 𝒫 is consistent wrt. Φ then it is correct wrt. Φ.

Let in the following 𝒫=(P_0,P_1,…,P_m) be an APS fixed and consistent wrt. Φ. Let further σ∈ runs(𝒫) be any run of 𝒫. We use an induction over the structure of Φ to show that for all locations ℓ of 𝒫 if Φ∈ lab_𝒫(ℓ) then Φ holds at every occurrence of ℓ on σ and if Φ∉ lab_𝒫(ℓ) then Φ does not hold at any position where ℓ occurs on σ.

For easier reading, we use some abbreviations in the following. Let L_i = lab_𝒫(σ(i)) be the labelling of the location at position i on σ for i∈ℕ. We also denote the set of occurrences of a location ℓ on σ by σ^-1(ℓ) = {i∈ℕ | σ(i)=ℓ}.

§.§ Propositions, Boolean Combinations and Temporal Next

Let i∈ℕ be any position on σ and ℓ_i=σ(i) be the corresponding location in 𝒫 with labelling L_i. Consider the following cases for the structure of Φ, the first being the induction base case.

(Φ=p∈ AP) By consistency p∈ L_i ⇔ p∈λ(st_𝒫(ℓ_i)) and by semantics p∈λ(st_𝒫(ℓ_i)) ⇔ (σ,i) ⊨_𝒫 p.

(Φ=¬φ) ¬φ∈ L_i ⇔ (consistency) φ∉ L_i ⇔ (induction) (σ,i) ⊭_𝒫 φ ⇔ (semantics) (σ,i) ⊨_𝒫 ¬φ.

(Φ=φ∧ψ) φ∧ψ∈ L_i ⇔ (consistency) φ,ψ∈ L_i ⇔ (induction) (σ,i) ⊨_𝒫 φ and (σ,i) ⊨_𝒫 ψ ⇔ (semantics) (σ,i) ⊨_𝒫 φ∧ψ.

(Φ=Xφ) By the definition of a run we have σ(i+1)∈ succ_𝒫(σ(i)) and thus Xφ∈ L_i ⇔ (consistency) φ∈ L_i+1 ⇔ (induction) (σ,i+1) ⊨_𝒫 φ ⇔ (semantics) (σ,i) ⊨_𝒫 Xφ.

§.§ Temporal Until

Assume finally Φ=φU^x/yψ. By consistency and induction 𝒫 is correct wrt. φ and ψ, thus (σ,i) ⊨_𝒫 φ ⇔ φ∈ L_i and (σ,i) ⊨_𝒫 ψ ⇔ ψ∈ L_i for all i∈ℕ. Let k=comp_𝒫(ℓ_i). Hence P_k is the component that ℓ_i=σ(i) belongs to. Further, for finite augmented paths a_0…a_n let

bal(a_0…a_n) = y·|{j∈[0,n] | φ∈ lab(a_j)}| - x·(n+1)

denote the balance between those positions that are labelled by φ and those that are not, weighted according to the ratio required by Φ. That is, as discussed earlier, a "good" position contributes a reward of y-x while a "bad" position causes a fee of -x. Then, bal(a_0…a_n)≥0 is equivalent to the ratio condition |{j∈[0,n] | φ∈ lab(a_j)}| ≥ x/y·(n+1) specified by Φ but allows us to reason on discrete integer numbers. For convenience we apply this notation likewise for sequences v=v_1…v_k of locations of 𝒫 and write bal(v) := bal(𝒫[v_1]…𝒫[v_k]).

We will now treat the different cases of the notion of consistency.

Case 3a. Assume Φ,ψ∈ L_i; then, by induction, (σ,i) ⊨_𝒫 ψ which implies that (σ,i) ⊨_𝒫 Φ.

Case 3b. Some location of the final loop P_m of 𝒫 is labelled by ψ. Hence there is a smallest position j>i where σ(j) is part of the final loop P=P_m of 𝒫 and ψ∈ L_j (and thus ψ holds there). Hence, also ψ∈ L_j+n|P| for every number n>0. Since P is good for Φ the balance bal(P)>0 is positive and thus

bal(ℓ_i…ℓ_j) < bal(ℓ_i…ℓ_j+|P|) < … < bal(ℓ_i…ℓ_j+n|P|),

where we abbreviate ℓ_h=σ(h). For sufficiently large n (i.e., sufficiently many iterations of P) we obtain necessarily a non-negative balance bal(ℓ_i…ℓ_j+n|P|)≥0 on the corresponding subpath of σ and thus (σ,i) ⊨_𝒫 Φ.

Case 3c. This is the case where we have a counter tracking the balance. For this case to apply, ℓ_i must be part of a row and thus i is the only position where ℓ_i occurs. The condition requires that there is a counter c of 𝒫 that tracks the balance wrt.
Φ, starting at the occurrence of ℓ_i. Since σ is a run, there is a corresponding sequence of valuations θ_0θ_1… and for all j≥ i

θ_j(c) = bal(σ(i)σ(i+1)…σ(j-1)).

If Φ∈ lab_𝒫(σ(i)), the definition provides that there is a location ℓ'>ℓ_i such that ψ∈ lab_𝒫(ℓ') and it is guarded by (c≥0)∈ grd_𝒫(ℓ'). Thus, there is a position j>i on σ such that σ(j) = ℓ' and bal(σ(i)…σ(j-1)) = θ_j(c) ≥ 0. It follows that (σ,i) ⊨_𝒫 Φ. Similarly, if Φ∉ lab_𝒫(σ(i)), then there is no position j on σ where ψ holds and the balance between i and j is non-negative. This is guaranteed because every such position carries a location ℓ'>ℓ_i (since ℓ_i is not on a loop) and either ψ∉ lab_𝒫(ℓ') or (c<0)∈ grd_𝒫(ℓ') and thus bal(σ(i)…σ(j-1))=θ_j(c)<0.

Case 3d. It remains to consider the cases requiring a periodic sequence. Let us start by establishing a lemma that provides a convenient argument for correctness and motivates the periodicity requirement imposed by the definition.

Recall that 𝒫=(P_0,P_1,…,P_m) and thus σ has the form

σ = v_0^n_0v_1^n_1…v_m-1^n_m-1v_m^ω

where v_j, for 0≤ j≤ m, is the sequence of locations corresponding to the j-th component, i.e. 𝒫[v_j]=P_j, n_j=1 if P_j is a row and n_j≥1 if P_j is a loop.
Hence,Φ∈_𝒫(σ(i)) ⇔Φ∈_𝒫(σ(i')) ⇔ (σ,i')Φ⇔ (σ,i)Φ.P_k is good or neutral for Φ and Φ∈_𝒫(σ(i)): k'>k and there is i'=i + n |P_k| for some (unique) n such that σ(i')∈_𝒫(k'). Due to Φ-periodicity, we have that Φ∈_𝒫(σ(i')) and thus by correctness of that labelling and <ref> we have (σ,i)Φ.P_k is good or neutral for Φ and Φ∉_𝒫(σ(i)): k'<k and there is i'=i - n |P_k| for the unique n such that σ(i')∈_𝒫(k'). Due to Φ-periodicity, we have that Φ∉_𝒫(σ(i')) and thus (σ,i')Φ which implies by <ref> that (σ,i)Φ.P_k is bad for Φ and Φ∈_𝒫(σ(i)): k'<k and there is i'=i - n |P_k| for the unique n such that σ(i')∈_𝒫(k'). We have that also Φ∈_𝒫(σ(i')). Since i is a position in an iteration of P_k at least one iteration of P_k+1 follows this position on σ, which still belongs to the periodic sequence. Therefore we can apply <ref> and conclude from (σ,i')Φ that (σ,i)Φ.P_k is bad for Φ and Φ∉_𝒫(σ(i)): k'>k and there is i'=i + n |P_k| for some (unique) n such that σ(i')∈_𝒫(k'). Again, periodicity and the guaranteed additional iteration of P_k'+1 after i' on σ allows for applying <ref> and to conclude from (σ,i')Φ that (σ,i)Φ. § CONSTRUCTING PATH SCHEMAS FROM SATISFYING RUNSThis section is dedicated to proving <ref>. We will use the notationand σ^-1 for a run σ of an APS introduced in <ref>. Recall that 𝒦 is a flat Kripke structure that admits a run ρ∈(𝒦) and Φ is an formula. Assume for this section that ρΦ. From ρ we construct a non-empty APS 𝒫_Φ that is consistent wrt. Φ and of which the first location 𝒫[0] is labelled by Φ. In fact, it admits a run σ representing ρ, i.e. such that _𝒫(σ)=ρ. The construction provides an exponential bound on the size of 𝒫_ρ and thereby proves the lemma.*Every run ρ∈(𝒦) can be represented by a small path schema in 𝒦, labelled only by propositions. The labelling is then extended stepwise to include larger and larger subformulae of Φ until all subformulae and finally Φ itself are consistently annotated. Every step needs to ensure that the new annotation is consistent, which may require the modification of the structure, namely unfolding and duplication of loops. Thus, we also need to argue that after each modification there is still a valid run that represents ρ and the obtained schema grew only linearly in size.We use induction over the structure of Φ starting by its base case of Φ∈ AP being atomic.Base case. Since 𝒦 is flat, any subpath ρ(i)ρ(i+1)…ρ(i')…ρ(i”) of ρ where a state ρ(i)=ρ(i')=ρ(i”) occurs more than twice is equal to (ρ(i)(i+1)…ρ(i'-1))^2ρ(i”). Hence, there are simple subpaths u_0,…,u_m∈ S^+ of ρ and positive numbers of iterations n_0,…,n_m-1∈ℕ such thatρ=u_0^n_0u_1^n_1…u_m-1^n_m-1u_m^ωand |u_0u_1…u_m|≤2|S|. They naturally induce the augmented path schema 𝒫_Φ=(P_0,…,P_m) where the augmented paths P_k correspond directly to the paths u_k. Formally, for u_k=s_0…s_n we letP_k=(s_0,λ(s_0),∅,0,t_0)…(s_k,λ(s_k),∅,0,t_n)where, for 0≤ i≤ n, the type of each augmented state is t_i=𝚁 if it is iterated n_i=1 times and t_i=𝙻 if it is iterated n_i>1 times on ρ. By construction 𝒫_Φ is consistent wrt. any proposition from AP and we have a run σ∈(𝒫) such that _𝒫(σ)=ρ.The path schema does not use any counter so we can consider the set of counters C to be empty. During the following constructions we may introduce new counters. Technically that means we would have to adjust all augmented states, simply because the signature of updates changes. 
For convenience, we therefore implicitly extend update functions and assigning zero to counter names if not explicitly stated otherwise.Now, building on this base case, we show how to construct 𝒫_Φ assuming by induction that there is an APS 𝒫 that contains a run σ∈(𝒫) with _𝒫(σ)=ρ and is consistent with respect to all strict subformulae of Φ.Boolean combinations.If Φ is a boolean combination then every augmented state a=(s,L,G,𝐮,t) in 𝒫 is easily adjusted to obey <ref>. For Φ=φ we add Φ to L if and only if φ∉L. For Φ=φψ we add Φ to L if and only if φ,ψ∈ L. These changes do not modify the set of runs and so σ remains a run in the obtained structure 𝒫_Φ. §.§ Temporal Next For Φ=φ the labelling at some location ℓ is extended according to the labelling of its successors. If φ∈_𝒫(ℓ') for all ℓ'∈_𝒫(ℓ) then we modify _𝒫(ℓ) such that it contains Φ and if φ∉_𝒫(ℓ') for all successors ℓ' then the labelling remains untouched, not including Φ.The labellings, however, may disagree upon φ if ℓ is the last location in a loop P_k of 𝒫. In that case the loop P_k needs to be removed or unfolded, in order to make 𝒫 consistent. For augmented states let 𝚛𝚘𝚠((s,L,G,𝐮,t)):=(s,L,G,𝐮,𝚁) denote the same state but with type 𝚁 and for sequences let 𝚛𝚘𝚠(a_0…a_n):=𝚛𝚘𝚠(a_0)…𝚛𝚘𝚠(a_n).Now, if the run σ takes P_k only once it can be cut by replacing it with P'_k=𝚛𝚘𝚠(P_k). This eliminates runs that take P_k more than once but σ remains. If otherwise σ takes P_k at least twice, the loop can be unfolded by inserting P'_k between P_k and P_k+1, i.e. letting𝒫'=(P_0,…,P_k,P_k',P_k+1,…,P_m).The run σ representing ρ persists, up to adjusting it according to the new shifted indices due to the insertion of P'_k. Formally, σ has the formσ = v_0^n_0…v_m-1^n_m-1v_m^ω.where v_i= ℓ_i (ℓ_i+1) … (ℓ_i+|P_i|-1) for 0≤ i≤ m and ℓ_i=|P_0…P_i-1|. Hence, there is a run σ'∈(𝒫') withσ'(i) = σ(i)if i<|v_0^n_0…v_k^n_k-1| σ(i)+|P_k|otherwise.and thus _𝒫'(σ')=_𝒫(σ)=ρ.Importantly, cutting or unfolding any loop, even any number of times, in 𝒫 preserves consistency. Let 𝒫=(P_0,…,P_k,…,P_m) be an APS, P_k a loop in 𝒫 that is consistent with respect to an formula ξ andP'_k=𝚛𝚘𝚠(P_k) an unfolding. The component P'_k and all components P_h that are consistent with respect to ξ in 𝒫 are also consistent with respect to ξ in all of the following APS: * 𝒫_=(P_0,…,P_k-1,P'_k,P_k+1,…,P_m)* 𝒫_=(P_0,…,P_k-1,P'_k,P_k,…,P_m)* 𝒫_=(P_0,…,P_k,P'_k,P_k+1,…,P_m)* 𝒫_=(P_0,…,P_k,P_k,P_k+1,…,P_m) The cases for propositions and Boolean combinations are straightforward. Considering formulae ξ=φ, an easy case analysis reveals that all combinations of locations and their successors correspond to a similar combination that occurs in 𝒫. Considering until formulae, a location on P_k in 𝒫 can only be consistent because of a consistent component P_k' (as detailed in Case 3d). This condition applies equally if P_k is made a row. If copies (loops or rows) of P_k are inserted, the share the same labelling by ξ and its subformulae and therefore smoothly integrate in any relevant repeating sequence. For example if ξ∈_𝒫(ℓ) for some location on P_k and P_k is bad for ξ, then there is a repeating sequence starting in some consistent component and ending in P_k. A copy of P_k to the right extends this sequence which then provides the reason for the copy to be also consistent and a copy to the left does not break the sequence because it is labelled the same as P_k. The same applies for ξ∉_𝒫(ℓ) and similarly if P_k is good or neutral for ξ.§.§ Until Assume now that Φ=φ^x/yψ is an until formula. 
In order to construct 𝒫_Φ given 𝒫 and σ we iterate through the components of 𝒫, beginning at the last and transforming them one by one until the first. The invariant is that the number of components that are yet to be considered becomes smaller by one in each step (although the overall number of components may increase) and that there is always a run representing ρ. The following lemma formalises one such step.

Let 𝒫=(P_0,…,P_m) be an augmented path schema, k∈[0,m] and ℓ=|P_0…P_k-1| such that

* 𝒫 is consistent wrt. φ and ψ,
* every location ℓ'∈[ℓ+|P_k|,|𝒫|-1] is consistent wrt. Φ and
* there is a run σ∈ runs(𝒫) with st_𝒫(σ)=ρ.

There is an augmented path schema 𝒫'=(P_0,…,P_k-1,P'_k,…,P'_m') such that

* 𝒫' is consistent wrt. φ and ψ,
* every location ℓ'∈[ℓ,|𝒫'|-1] is consistent wrt. Φ,
* there is a run σ'∈ runs(𝒫') with st_𝒫'(σ')=ρ and
* |𝒫'| ≤ |𝒫| + 17y|𝒦|^3.

For 0≤ k≤ m let ℓ_k=|P_0…P_k-1| be the first location in 𝒫 corresponding to component P_k. We proceed by a case analysis.

Final loop. Assume that k=m, thus P_k=P_m is the final loop in 𝒫. If case 3b of <ref> applies, P_m is to be entirely labelled by Φ. Otherwise, we consider the first iteration of P_m, starting at position i_m := min σ^-1(ℓ_m) (where ℓ_m is the first location of P_m) and have P_m(j) labelled by Φ if and only if (σ,i_m+j) ⊨ Φ for 0≤ j<|P_m|. [Notice that for proving the statement it is not necessary to be constructive. It suffices to observe that such a labelling exists.] Hence, those states labelled by ψ are labelled by Φ which is consistent. If there are other states that we label by Φ and that are not labelled by ψ, we unfold P_m twice and hence let 𝒫'=(P_0,…,P_m-1,P'_m,P'_m+1,P'_m+2) for

P'_m = P'_m+1 = 𝚛𝚘𝚠(P_m) and P'_m+2 = P_m.

The locations ℓ_m,…,ℓ_m+|P_m|-1, now associated with P'_m, can be made consistent (case 3c). For every location ℓ∈[ℓ_m,ℓ_m+|P_m|-1] we introduce a fresh counter c that is updated on the locations succeeding ℓ as required by the definition. If Φ∉ lab_𝒫'(ℓ) we only need to add the guard (c<0) to the states at those locations ℓ<ℓ'<|𝒫'| that are labelled by ψ. If Φ∈ lab_𝒫'(ℓ) then this is because Φ holds at the first occurrence j of ℓ on σ. In that case, if ψ∉ lab_𝒫'(ℓ), there must be a position j'>j on σ where ψ holds and that is reached with non-negative balance. Second, P_m must be bad for Φ, because otherwise the case above would already have applied, and thus we can assume without loss of generality that j'<j+|P_m| and hence location ℓ'=σ(j') carries a state from P'_m or P''_m := P'_m+1. They are both rows and therefore ℓ' can serve as witness location to be guarded by (c≥0).

The locations ℓ∈[ℓ_m+|P_m|,ℓ_m+2|P_m|-1] of P''_m where Φ∉ lab_𝒫'(ℓ) can also be made consistent by adding a fresh counter that is updated and guarded as required. Again, the case that Φ∈ lab_𝒫'(ℓ) only occurs if P_m is bad. In that case ℓ is consistent already because P''_m is part of the {φ,ψ,Φ}-periodic sequence (P'_m,P''_m,P_m) where P'_m is consistent (case 3d). The final loop P_m is consistent for the same reason.

The size of the final loop is bounded by |P_m|≤|𝒦| and at most two new copies of it are added to obtain 𝒫'.

Rows. Assume k∈[0,m-1] and P_k=a_0…a_n is a row in 𝒫 starting at location ℓ_k=|P_0…P_k-1|. Let h∈ℕ be the position on σ with σ(h)=ℓ_k. We first adjust the labelling of P_k such that Φ∈ lab(a_i) if and only if (σ,h+i) ⊨ Φ for i∈[0,n]. Now, with every location ℓ_i∈[ℓ_k,ℓ_k+n] that is not already consistent with respect to Φ (because of case 3a in <ref>) we proceed as follows.
A fresh counter c is introduced and updated at all locations ℓ'≥ℓ_i to count the balance as required in the definition. If Φ∉ lab(a_i) then the additional guard (c<0) is added to the augmented states at those locations ℓ'∈[ℓ_i+1,|𝒫'|-1] that are labelled by ψ. This makes ℓ_i consistent and moreover, since Φ does not hold at position h+i on σ, these constraints are not violated by the run.

If Φ∈ lab(a_i) (although ψ∉ lab(a_i)) there is a position h'>h+i such that (σ,h') ⊨ ψ and thus ψ∈ lab_𝒫(ℓ') for ℓ'=σ(h'). If ℓ' is on a row we add the constraint (c≥0) to the augmented state at ℓ'. If ℓ' is on a loop P_k' of 𝒫 but σ takes it only once we can replace P_k' by 𝚛𝚘𝚠(P_k') in 𝒫 and then add the constraint. In case σ takes P_k' at least twice either the last or the first iteration of P_k' can serve as a witness: if P_k' is good for Φ, more iterations of it between h and h' can only improve the balance, so that the balance between h and the last occurrence h''=max σ^-1(ℓ') of ℓ' is sufficient for Φ. Hence, we unfold P_k' by adding a copy 𝚛𝚘𝚠(P_k') right after P_k' in 𝒫. Notice that we can assume that k'<m because if P_k' were the final loop and good for Φ the case above would already have applied and we would not need to unfold the loop. Similarly, if P_k' is bad (or neutral) for Φ then we let h''=min σ^-1(ℓ') be the first position where ℓ' occurs. Since bal(σ(h)…σ(h')) ≤ bal(σ(h)…σ(h'')) in this case, h'' also can serve as witness and we unfold the loop by inserting 𝚛𝚘𝚠(P_k') immediately before P_k' in 𝒫.

As argued earlier, these transformations do not make any consistent location inconsistent with respect to any formula and there is still a run σ' representing st_𝒫(σ)=ρ. However, the location ℓ'' (at position h'' on σ') is not part of a loop and can safely be guarded by (c≥0) while preserving the run.

During this procedure we introduce at most one unfolding of some loop for each position on P_k and the size of 𝒫 increases thus by at most |𝒦|^2 because |𝒦| bounds the length of each loop.

Non-final loops. It remains to consider the case that P_k is a non-final loop. The run σ has the form σ=uv^nw where v=ℓ_k(ℓ_k+1)…(ℓ_k+|P_k|-1) is the sequence of locations corresponding to P_k in 𝒫 and n∈ℕ is maximal, that is u and w do not intersect with v. We assume in the following that n is not small as otherwise we may simply replace P_k by n copies of 𝚛𝚘𝚠(P_k) and proceed as above. More precisely, let n̂=y·|P_k| and assume that n≥n̂+2. This constant n̂ essentially bounds the effect of frequency variations within a single loop iteration. Its specific choice will become apparent in the later construction. For now it suffices to observe that if n≤n̂+1=y|P_k|+1≤ y|𝒦|+1 and we replace P_k by n unfoldings the size of 𝒫 increases by (n-1)·|P_k|. Applying the procedure for rows above to each component may force us to unfold other loops. As a (rough) estimate, we will have to introduce no more than one further unfolding of some loop for each new location originating from the unfoldings of P_k. Hence, after making all n copies of P_k consistent the size of 𝒫 did not grow by more than

(n-1)·|P_k| + n·|P_k|·|𝒦| ≤ (y|𝒦|+1-1)·|𝒦| + (y|𝒦|+1)·|𝒦|·|𝒦| ≤ 3y|𝒦|^3.

Given that n≥n̂+2 we distinguish two situations of σ determining a labelling for P_k.
Either, for all positions |u|≤ i<|uv^n-1| we have (σ,i) ⊨ Φ ⇔ (σ,i+|v|) ⊨ Φ, meaning that the labelling of the augmented state 𝒫[ℓ] at location ℓ on v is unambiguously determined by σ (we say that the loop is stable), or there is a location on v such that at some of its occurrences on σ the formula Φ holds while at another it does not (in that case the loop is unstable). We consider first the former case and how it can be made consistent. Afterwards we show that in the latter case it is possible to modify 𝒫 such that the former case applies.

Stable loops. If the pattern of positions where Φ holds is stable along the iterations of P_k on σ we apply it to the labelling of P_k. That is, we adjust P_k such that Φ∈ lab(P_k(i)) if and only if (σ,|u|+i) ⊨ Φ. Possibly, at least some of the locations ℓ∈[ℓ_k,ℓ_k+|P_k|-1] are still not consistent with respect to Φ. If n≤4 we replace P_k in 𝒫 by n unfoldings P'_k=𝚛𝚘𝚠(P_k) that can be made consistent as above. Otherwise, let R_1=R_2=R_3=R_4=P'_k and insert (R_1,R_2) before and (R_3,R_4) after P_k in 𝒫. The two last unfoldings R_3 and R_4 can be made consistent as above. For R_1 we proceed the same way except that if P_k is to be unfolded again (for instance to find a location labelled with ψ) R_2 or R_3 are considered instead. Now, R_2 and P_k are also consistent because the surrounding components R_1,P_k,R_3,R_4 cover every possible case. Overall no more than 4 additional copies of P'_k are added and for the locations of at most three of them other loops needed to be unfolded, giving a total of no more than

4·|P_k| + 3·|P_k|·|𝒦| ≤ 7|𝒦|^2

new locations being added to 𝒫.

Unstable loops. In general, σ does not uniquely determine whether the state at some location ℓ∈[ℓ_k,ℓ_k+|P_k|-1] in 𝒫 is supposed to be labelled by Φ because that may vary between corresponding positions on σ, that is, the iterations of P_k. However, we observe that along any run the validity of Φ at some specific location can change at most once. We have argued earlier that as soon as Φ holds somewhere, more iterations of a good loop inserted between the position in question and a witness position do not affect validity. Similarly, introducing additional iterations of a bad loop does not change the fact that Φ does not hold at some specific position.

It follows, for example, that if Φ does hold in the last iteration of a bad loop but not in the first, there is a unique iteration for each location on the loop where validity swaps. The diagram presented in <ref> shows an example of how the balance between a position i on the loop and a witness position i' may evolve on σ. Observe that there are three parts of the run iterating through the first loop. In part one Φ holds nowhere because the balance (and hence the ratio) on the path to the (only) witness is insufficient. It covers too many iterations of the bad loop. In the last part, Φ holds everywhere because the ratio is sufficient. In between it depends on local differences whether the ratio condition is satisfied or not. The first and last part can be uniformly labelled and thus represented each by a copy of the original loop. On the other hand, the intermediate part is short: its length depends only on the length of the loop and the ratio, more precisely, on the size of the denominator (3 in the example) as a measure of how sensitive the property is to changes in the frequency on an arbitrarily long path. <ref> formalises this observation.

Assume first that P is good for Φ and thus bal(P)>0. Consider the first (smallest) position i≥|u| on σ=uv^nw where (σ,i) ⊭ Φ.
If i does not exist or i≥|uv^n-n̂| we can choose n_1=n-n̂-1 and n_2=n-n̂-n_1=1. Otherwise let n_1 be the last iteration of P entirely satisfying Φ, that is such that |uv^n_1| ≤ i < |uv^n_1+1|, and h=|uv^n_1|. Consequently we let n_2=n-n̂-n_1. Consider now any position i'∈[h,h+|P|-1] in the (n_1+1)-th iteration where Φ still holds. If there is none, then Φ does not hold in later iterations either and the statement of the lemma holds.

Since (σ,i') ⊨ Φ there is some position j>i' with (σ,j) ⊨ ψ and

bal(σ(i')σ(i'+1)…σ(j-1)) ≥ 0.

Observe that we can assume that j≥|uv^n-1| because otherwise j+|P| would serve as witness since in that case

bal(σ(i')σ(i'+1)…σ(j-1+|P|)) = bal(σ(i')σ(i'+1)…σ(j-1)) + bal(P) ≥ bal(σ(i')σ(i'+1)…σ(j-1)) ≥ 0

while σ(j)=σ(j+|P|) and thus ψ∈ lab_𝒫(σ(j+|P|)). However, the balance cannot be too large; more precisely, bal(σ(i')…σ(j-1)) ≤ |P|·y. Depending on whether i'<i or i<i' we have

bal(σ(i')…σ(j-1)) = bal(σ(i)…σ(j-1)) - bal(σ(i)…σ(i'-1)) if i<i', and
bal(σ(i')…σ(j-1)) = bal(σ(i)…σ(j-1)) + bal(σ(i')…σ(i-1)) if i'<i.

Considering the first case, we can bound the difference by the maximal gain

bal(σ(i)…σ(i'-1)) ≤ (|P|-1)·(y-x) ≤ y|P|

on a path of length at most |P|-1. In the second case, the lower bound on the balance

bal(σ(i')…σ(i-1)) ≥ (|P|-1)·(-x) ≥ -y|P|

is of interest because we conclude that in any case

bal(σ(i')…σ(j-1)) ≤ bal(σ(i)…σ(j-1)) + y|P| < y|P|.

Since bal(P)≥1 we have that

bal(σ(i'+n̂|P|)…σ(j-1)) = bal(σ(i')…σ(j-1)) - n̂·bal(P) < y|P| - y|P|·bal(P) ≤ 0,

meaning that after at most n̂ further iterations Φ cannot hold any more.

Assuming now that P is bad for Φ allows for similar reasoning. Consider i≥|uv| to be the first position on σ where (σ,i) ⊨ Φ while (σ,i-|P|) ⊭ Φ. If i does not exist or i≥|uv^n-n̂| we can again choose n_1=n-n̂-1 and n_2=1. Otherwise we choose n_1 such that |uv^n_1| ≤ i < |uv^n_1+1| and let h=|uv^n_1|. There is a position j>i such that ψ∈ lab_𝒫(σ(j)) and

bal(σ(i)…σ(j-1)) ≥ 0.

Observe that j≥|uv^n| because otherwise bal(σ(i-|P|)…σ(j-1-|P|)) ≥ 0 and ψ∈ lab_𝒫(σ(j-|P|)), contradicting that (σ,i-|P|) ⊭ Φ. Consider now any position i'∈[h,h+|P|-1] where (σ,i') ⊭ Φ, if any. We have

bal(σ(i')…σ(j-1)) = bal(σ(i)…σ(j-1)) - bal(σ(i)…σ(i'-1)) if i<i', and
bal(σ(i')…σ(j-1)) = bal(σ(i)…σ(j-1)) + bal(σ(i')…σ(i-1)) if i'<i,

and obtain the bounds

bal(σ(i)…σ(i'-1)) ≤ (|P|-1)·(y-x) ≤ y|P| (if i<i');
bal(σ(i')…σ(i-1)) ≥ (|P|-1)·(-x) ≥ -y|P| (if i'<i).

Hence

bal(σ(i')…σ(j-1)) ≥ bal(σ(i)…σ(j-1)) - y|P| ≥ -y|P|.

Now, since bal(P)<0 we have that

bal(σ(i'+n̂|P|)…σ(j-1)) = bal(σ(i')…σ(j-1)) - n̂·bal(P) ≥ -y|P| - (y|P|·bal(P)) ≥ 0,

providing that after n̂ more iterations, Φ holds at every position on the loop.

If P is neutral for Φ then an iteration of P more or less does not change whether there is a witness or not, and (σ,i) ⊨ Φ if and only if (σ,i+|P|) ⊨ Φ for all |u|≤ i<|uv^n-1|.

<ref> provides a bound on how often we need to unfold P_k at most in order to guarantee that σ determines a unique labelling. Recall we assumed that σ repeats P_k for n≥n̂+2 times. In 𝒫, we may hence replace P_k by (P_k,P'_k,…,P'_k,P_k), introducing two copies and a sequence of exactly n̂ unfoldings of it. The decomposition σ=uv^n_1v^n̂v^n_2w given by <ref> provides a corresponding run σ' of the obtained path schema and a unique labelling for all of the new components. Now, we are only left with cases discussed earlier: two stable loops and n̂ rows. For each of the stable loops, we can estimate that establishing consistency requires no more than 7|𝒦|^2 additional locations. For each of the n̂ new rows it is no more than |𝒦|^2 additional locations. We can conclude that 𝒫' can be constructed with in total no more than

|P_k| + n̂|P_k| + 2·7|𝒦|^2 + n̂·|𝒦|^2 ≤ 17y|𝒦|^3

additional locations.
§.§ The Size of 𝒫_Φ The induction provides the construction of 𝒫_Φ from 𝒦 requiring (at most) one step for each subformula of Φ. Let 𝒫_0 be the APS provided by the base case that covers all propositions occurring in Φ. As argued earlier, its size is bounded by 2|𝒦| and the length of every loop is bounded by |𝒦|. Applying the induction step now recursively for Φ, i.e., augmenting 𝒫_0 consistently with more and more subformulae of Φ, we obtain a sequence of possibly growing path schemas until 𝒫_Φ is obtained after at most |(Φ)|≤|Φ| steps. We have seen that in the case of a next formula, constructing the consistent schema 𝒫' from 𝒫 requires at most one unfolding of some loop for each location in 𝒫 and thus |𝒫'|≤|𝒫|·|𝒦|. In the case of an until formula, <ref> provides that for each component of 𝒫 no more than 17y|𝒦|^3 locations are added and thus |𝒫'|≤|𝒫|·17y|𝒦|^3. Counting the bits for representing y towards the length of Φ, and hence estimating y≤2^|Φ|, it follows that after |Φ| steps the resulting path schema is of size |𝒫_Φ| ≤ |𝒫_0| · (17· 2^|Φ| |𝒦|^3)^|Φ| ∈ 𝒪(2^f(|Φ|+|𝒦|)) for some polynomial f and thus at most exponential in the size of the input. By construction 𝒫_Φ is correct and there is a run σ∈(𝒫_Φ) with _𝒫_Φ(σ)=ρ⊨Φ and hence (𝒫(0))=_𝒫(σ(0))∋Φ. This completes the proof for <ref>.
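To make the polynomial f explicit (a hedged reading of the bound, assuming |𝒫_0| ≤ 2|𝒦| as above), taking logarithms shows a quadratic choice suffices:

```latex
\log_2 |\mathcal{P}_\Phi|
  \le \log_2\bigl(2|\mathcal{K}|\bigr)
    + |\Phi|\,\bigl(\log_2 17 + |\Phi| + 3\log_2|\mathcal{K}|\bigr)
  \le c\,\bigl(|\Phi| + |\mathcal{K}|\bigr)^2
```

for a suitable constant c, so f can be chosen quadratic in |Φ|+|𝒦|.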
"authors": [
"Normann Decker",
"Peter Habermehl",
"Martin Leucker",
"Arnaud Sangnier",
"Daniel Thoma"
],
"categories": [
"cs.LO"
],
"primary_category": "cs.LO",
"published": "20170626213216",
"title": "Model-checking Counting Temporal Logics on Flat Structures"
} |
In <cit.>, Shelah develops the theory of _I(A) without the assumption that |A|<min (A), going so far as to get generators for every λ∈_I(A) under some assumptions on I. Our main theorem is that we can also generalize Shelah's trichotomy theorem to the same setting. Using this, we present a different proof of the existence of generators for _I(A) which is more in line with the modern exposition. Finally, we discuss some obstacles to further generalizing the classical theory.

An Extension of Shelah's Trichotomy Theorem
===========================================

§ INTRODUCTION

I would like to thank Todd Eisworth for his assistance with the organization of the manuscript, and an anonymous referee for their helpful remarks which enhanced the clarity of the paper. The pcf theory as presented in <cit.> has proven to be a powerful tool for analyzing the combinatorial structure at singular cardinals as well as their successors. Perhaps the most well-known consequence of the pcf-theoretic machinery is the following theorem due to Shelah: ℵ_ω^ℵ_0 < max{ℵ_ω_4,(2^ℵ_0)^+}. This contrasts greatly with the situation for regular cardinals, and tells us that we can get meaningful results about the power of singular cardinals in ZFC. On the other hand, we know that some of this machinery can only work for singular cardinals which are not fixed points of the ℵ-function. Given suitable large cardinal hypotheses, one can use Prikry-type forcings to blow up the power of some ℵ-fixed points to be arbitrarily large (see <cit.> for an overview). So if the pcf machinery can be generalized to ℵ-fixed points, this can only be done in a restricted manner. In <cit.>, Shelah does precisely this. The pcf machinery is relativized to particular ideals over some set A which need not satisfy |A|<min (A). In particular, Shelah is able to obtain generators for every λ∈_I(A). The usual proof of the existence of generators requires obtaining universal cofinal sequences for each λ∈(A), and then showing that exact upper bounds for such sequences yield generators. In the classical case, one can make use of Shelah's trichotomy theorem <cit.>: Suppose that λ is a regular cardinal with λ>|A|^+, I is an ideal over A, and f⃗=⟨ f_α : α<λ⟩ is a <_I-increasing sequence of functions from A to ON. Then f⃗ satisfies at least one of the following conditions: * Good: f⃗ has an exact upper bound f∈^AON such that cf(f(a))>|A| for all a∈ A. * Bad: There are sets S(a) for each a∈ A such that |S(a)|≤|A| and an ultrafilter D over A disjoint from I such that, for all ξ<λ, there exists some h_ξ∈∏_a∈ AS(a) and some η<λ such that f_ξ<_D h_ξ<_D f_η. * Ugly: There is a function g: A→ON such that, letting t_ξ={a∈ A : f_ξ(a)>g(a)}, the sequence t⃗=⟨ t_ξ : ξ<λ⟩ (which is ⊆_I-increasing) does not stabilize modulo I. That is, for every ξ<λ, there is some ξ<η<λ such that t_η∖ t_ξ∉ I. In our desired applications, the functions ⟨ f_ξ : ξ<λ⟩ will belong to ∏ A, where A is a collection of regular cardinals. So if f⃗ does have an exact upper bound f, it would be bounded above by the function a↦ a.
This means that if f is Good as above, the requirement that cf(f(a)) > |A| for each a∈ A will force that |A|<min(A). So this version of trichotomy will not work in the more general setting of <cit.>. While Shelah pursues a different route, it is natural to ask whether or not one can generalize the trichotomy theorem. Of course, even if we obtain this more general trichotomy theorem, we still have to show that we can find sequences that are neither bad nor ugly. Our main theorem is that one can do precisely that. This paper is organized as follows: In Section 2, we extend <ref>, and show that one can still construct sequences that are neither bad nor ugly. In Section 3, we use this to provide a streamlined proof of the fact that generators exist for _I(A). Finally, we show that the no holes conclusion must fail in general, and that the standard techniques for obtaining transitive generators cannot be generalized. § THE TRICHOTOMY THEOREM Our goal in this section is to generalize <ref> by replacing the assumption that |A|^+<λ with assumptions about the ideal I we are asking about. First, we fix some notation. Suppose that I is an ideal over a set A of ordinals. Then * We denote the dual filter by I^*. * We say that property P holds for I-almost every α∈ A if the set of α∈ A such that P holds is in the dual filter I^*. * If f, g are functions from A to the ordinals, and R is a relation on the ordinals, then we say f R_I g if and only if {α∈ A : ¬(f(α) R g(α))}∈ I. * Dually, if D is a filter on A, f, g are functions from A to the ordinals, and R is a relation on the ordinals, then we say f R_D g if and only if {α∈ A : f(α) R g(α)}∈ D. * We say a set B⊆ A is I-positive if B∉ I. We denote the collection of I-positive sets by I^+. We now isolate and discuss several properties of ideals that we will be working with. Suppose that I is an ideal on some set A. For a cardinal θ, we say that I is weakly θ-saturated if there is no partition of A into θ-many I-positive sets. Note that if I is weakly θ-saturated, and θ_1 is a cardinal above θ, then I is also weakly θ_1-saturated. Further, note that I is always weakly |A|^+-saturated for trivial reasons. If I is an ideal on a set A, then let (I) denote the least cardinal θ such that I is weakly θ-saturated. Another property that we will need indirectly is a weakening of θ-completeness. Suppose that I is an ideal on some set A. For a regular cardinal θ, we say that I is θ-indecomposable if I is closed under ⊆-increasing unions of length θ. One thing to note above is that, unlike weak saturation, indecomposability is neither upwards nor downwards hereditary. While we will be making use of weak saturation directly in the next section, our use of indecomposability comes by way of combining it with weak saturation. In particular, we will make frequent use of the following result. Let I be an ideal on a set A. The following are equivalent for a regular cardinal θ. * I is weakly θ-saturated and θ-indecomposable. * Whenever ⟨ B_i : i<θ⟩ is a ⊆-increasing θ-sequence of subsets of A, there is some j^*<θ such that for all j with j^*≤ j<θ we have B_j=_I ⋃_i<θ B_i. * Whenever ⟨ A_i : i<θ⟩ is a sequence of I-positive sets, there is some H∈[θ]^θ such that ⋂_i∈ H A_i≠∅. At this point we can isolate one of the properties needed to push the generalized trichotomy theorem through. Let I be an ideal on a set A. For a regular cardinal θ, we say that I is θ-regular if it satisfies one of the equivalent conditions in <ref>. Note that I will automatically be |A|^+-regular.
To see this, fix a sequence ⟨ A_i : i<|A|^+⟩ of non-empty subsets of A. Then define a function f:|A|^+→ A by setting f(i) to be the least a∈ A such that a∈ A_i. Then there must be some H⊆ |A|^+ of cardinality |A|^+ and a∈ A such that f(i)=a for every i∈ H. In particular, if I is an ideal on a set of ordinals A and |A|<min(A), then I will be θ-regular for every θ∈(|A|, min(A)]∩ REG. Suppose that I is an ideal on a set A. Let (I) denote the least regular θ such that I is θ-regular. For a set A of ordinals, an ideal I on A, and a regular cardinal θ, recall that we are concerned with the following properties: Let F be a collection of functions from A to ON. We say that f:A→ ON is an I-upper bound for F if g≤_I f for every g∈ F. We say that f is an I-least upper bound for F if additionally f≤_I f' for every upper bound f' of F. Finally, f is an I-exact upper bound for F if f is a least upper bound of F, and F is <_I-cofinal in {g∈^AON: g<_I f}. If the ideal I is clear from the context, then it may be omitted. Let f⃗=⟨ f_α : α<λ⟩ be a <_I-increasing sequence of functions from A to ON. For a cardinal θ, we define the following properties of f⃗: * Good_θ: f⃗ has an exact upper bound f∈^AON with {a∈ A : cf(f(a))<θ}∈ I. * Bad_θ: There are sets S(a) for each a∈ A such that |S(a)|<θ and an ultrafilter D over A disjoint from I such that, for all ξ<λ, there exists some h_ξ∈∏_a∈ AS(a) and some η<λ such that f_ξ<_D h_ξ<_D f_η. * Ugly: There is a function g: A→ON such that, letting t_ξ={a∈ A : f_ξ(a)>g(a)}, the sequence t⃗=⟨ t_ξ : ξ<λ⟩ (which is ⊆_I-increasing) does not stabilize modulo I. That is, for every ξ<λ, there is some ξ<η<λ such that t_η∖ t_ξ∉ I. We begin by noting that the following two lemmas do not require any hypotheses on I or A. The first of the two lemmas appears as Claim A.2 of <cit.>. Suppose that A is a set of ordinals, and I is an ideal on A. Let λ be regular and f⃗=⟨ f_ξ : ξ<λ⟩ be <_I-increasing. If f⃗ is not Ugly, then every least upper bound of f⃗ is an exact upper bound. The next lemma appears in the middle of the proof of Theorem 2.15 of <cit.>, but we prove it for the sake of completeness. Suppose that A is a set of ordinals, and I is an ideal on A. Further, suppose that λ and θ≤λ are regular, and that f⃗=⟨ f_ξ : ξ<λ⟩ is a <_I-increasing sequence of functions from A to ON. If f⃗ has an exact upper bound f such that {a∈ A : cf(f(a))< θ}∉ I then f⃗ satisfies Bad_θ. Let f be an upper bound for f⃗ with B={a∈ A : cf(f(a))< θ}∉ I, and let D be an ultrafilter over A disjoint from I such that B∈ D. Next for each a∈ B, let S(a) be cofinal in f(a) with |S(a)|=cf(f(a))<θ, and let S(a)={0} for each a∉ B. For each ξ<λ, let f_ξ^+ be defined by f_ξ^+(a)=min(S(a)∖ f_ξ(a)) for a∈ B and f_ξ^+(a)=0 otherwise. Now for any ξ<λ we have that f_ξ<_D f_ξ+1^+ where f_ξ+1^+∈∏_a∈ AS(a). On the other hand, f is exact and since S(a) is cofinal in f(a) for D-almost every a, it follows that there is some η<λ such that f_ξ+1^+<_D f_η, and so f⃗ satisfies Bad_θ as witnessed by ⟨ S(a) : a∈ A ⟩ and D. With these two lemmas in hand, we move to the statement and proof of the trichotomy theorem. Suppose that A is a set of ordinals, and I is an ideal on A. Let λ>(I) and θ∈[(I),λ] be regular. If f⃗=⟨ f_ξ : ξ <λ⟩ is a <_I-increasing sequence of functions from A to ON, then at least one of Good_θ, Bad_θ, or Ugly must hold. One thing to note is that the classical Trichotomy Theorem requires that λ>|A|^+, whereas we simply require that λ>(I).
By work of Kojman and Shelah in <cit.>, the requirement that λ>|A|^+ cannot be weakened to λ>|A|. We will show that, assuming f⃗ is not Ugly, we can either find a witness to Bad_θ or find a least upper bound f for f⃗. By <ref>, this least upper bound is actually exact and so by <ref> we can either find a witness to Bad_θ or f witnesses Good_θ. We proceed by induction on α<(I), and at each stage create a candidate for a least upper bound. We will terminate at successor stages if we have found a least upper bound, and at limit stages if we can construct a witness to Bad_θ. At the end, we will show that we must have terminated at some α<(I), else we will be able to derive a contradiction. At each stage α<(I), we will define: * Functions g_α:A→ON which are I-upper bounds for f⃗ such that, for each β<α, we have g_α≤_I g_β but g_α≠_I g_β. * Sets S^α(a)={g_β(a) : β<α}. * Functions h^α_ξ: A→ON for ξ<λ defined by h^α_ξ(a)=min(S^α(a)∖ f_ξ(a)). Note here that 2) and 3) depend on how we define 1). Further, the sequence ⟨ h_ξ^α : ξ<λ⟩ is ≤_I-increasing in ∏_a∈ A S^α(a). Stage α=0: Here we let g_0 be any ≤-upper bound of f⃗; for example, g_0(a)=sup{f_ξ(a) : ξ<λ}+1 works. Requiring that g_0 dominates f⃗ everywhere ensures that the functions h^α_ξ are defined everywhere. Stage α+1: Assume that g_α has been defined. If g_α is an I-least upper bound for f⃗, then we can terminate the induction. Otherwise g_α is not a least upper bound, so there is some I-upper bound g_α+1 such that g_α+1≤_I g_α but g_α+1≠_I g_α. Stage γ limit: Suppose that γ<(I) is a limit ordinal and that g_α has been defined for each α<γ. Now consider the functions h^γ_ξ and the sets t^η_ξ:={a∈ A : h^γ_ξ(a) <f_η(a)} for η, ξ<λ. Fixing the ξ coordinate, the function h^γ_ξ is fixed while we run through ⟨ f_η : η<λ⟩ and so the sequence t⃗_ξ=⟨ t^η_ξ : η<λ⟩ is ⊆_I-increasing since the sequence f⃗ is <_I-increasing. Fixing the η coordinate on the other hand, we fix f_η and run through ⟨ h^γ_ξ : ξ<λ⟩ and so the sequence t⃗^η=⟨ t^η_ξ : ξ<λ⟩ is ⊆_I-decreasing. Since f⃗ is not Ugly, it follows that each sequence t⃗_ξ stabilizes modulo I at some ordinal η(ξ). That is, for all η>η(ξ), we have that t^η(ξ)_ξ=_I t^η_ξ. We have two cases to consider: either t^η(ξ)_ξ∉ I for each ξ< λ, or for all sufficiently large ξ<λ, we have t^η(ξ)_ξ∈ I. To see that these are indeed all of our cases, first note that whenever ξ<ξ', t^η(ξ')_ξ'=_I t^η_ξ'⊆ _I t^η_ξ=_I t^η(ξ)_ξ for η>max{η(ξ),η(ξ')} since ⟨ t^η_ξ : ξ<λ⟩ is ⊆_I-decreasing. So if there is some ξ<λ for which t^η(ξ)_ξ∈ I, then it follows that t^η(ξ')_ξ'∈ I for each ξ'>ξ since t^η(ξ)_ξ∈ I and t^η(ξ')_ξ'⊆_I t^η(ξ)_ξ. Assume that the former happens (i.e. t^η(ξ)_ξ∉ I for each ξ< λ), and consider the sequence ⟨ t^η(ξ)_ξ:ξ<λ⟩. Note that this sequence is ⊆_I-decreasing, and so I^*∪{t^η(ξ)_ξ:ξ<λ} has the finite intersection property. So let D be an ultrafilter over A extending I^*∪{t^η(ξ)_ξ:ξ<λ}, and note that Bad_θ is witnessed by D and ⟨ S^γ(a) : a∈ A⟩. By construction we know that f_ξ<_D h^γ_ξ+1 for each ξ<λ. On the other hand, we have that h^γ_ξ+1<_D f_η(ξ+1) since t^η(ξ+1)_ξ+1∈ D. If this happens, we can terminate the induction. Otherwise, suppose that t^η(ξ)_ξ∈ I for each sufficiently large ξ<λ. Let ξ(γ) be the least ξ for which this occurs, and define g_γ:=h^γ_ξ(γ). Note then that g_γ is an I-upper bound of f⃗ by construction, and so we only need to verify that g_γ≤_I g_α while g_γ≠_I g_α for each α<γ. Recall that S^γ(a)={g_α (a) : α<γ} while g_γ(a)=min(S^γ(a)∖ f_ξ(γ)(a)) and ⟨ g_α : α<γ⟩ is ≤_I-decreasing.
If α<γ, then { a∈ A : g_γ(a)> g_α(a)}∈ I since g_α is an I-upper bound for f⃗, and thus {a∈ A : g_α(a)∈ S^γ(a)∖ f_ξ(γ)(a)}∈ I^*. Additionally, it follows that g_γ≠_I g_α. Otherwise, for α<β<γ, we get that g_α=_I g_γ≤_I g_β≤_I g_α, and so g_α=_I g_β. This contradicts condition (1) of the induction, and so g_γ≠_I g_α. It is worth noting that, in this case, ⟨ h^γ_ξ: ξ<λ⟩ stabilizes modulo I again by definition. We claim that this induction must have terminated. Otherwise, for each α∈((I)), we have defined: * Functions g_α:A→ON which are upper bounds for f⃗ such that, for each β∈((I)) with β<α, we have g_α≤_I g_β but g_α≠_I g_β. * Ordinals ξ(α) such that g_α=h^α_ξ(α)=_I h^α_ξ for each ξ≥ξ(α). Since (I)<λ with λ regular, we can see that ξ(*)=sup{ξ(α): α∈((I))} is still below λ. Note that g_α=_I h^α_ξ(*) for each α∈((I)), so letting H_α=h^α_ξ(*) we have that H_α enjoys the same properties as g_α. Now for each α∈((I)), let α' be the successor of α in ((I)), i.e. α'=min(((I))∖(α+1))=α+ω. Define the sets B_α:={a∈ A : H_α'(a)<H_α(a)} for each α∈((I)). Now, for all α<β∈((I)), we have S^α(a)⊆ S^β(a) and hence H_β≤ H_α. On the other hand, by construction each B_α∉ I, and so the sequence ⟨ B_α :α <((I))⟩ has the property that, for some H⊆((I)) with |H|=(I), the intersection ⋂_α∈ H B_α is non-empty. Letting a be in this intersection, we see that for all α<β∈ H: H_β(a)≤ H_α'(a)< H_α(a). Thus, we have an infinite descending sequence of ordinals, which is a contradiction. Therefore the induction must have terminated and the theorem follows. One thing to note is that the trichotomy theorem above is indeed a generalization of the classical trichotomy theorem. This follows from the discussion above showing that any ideal I on a set A is automatically |A|^+-regular. Our next goal is to show that, under the same assumptions as the above trichotomy theorem, one can actually build Good exact upper bounds. In other words, we need to show that it is still possible to produce sequences f⃗ in ∏ A/I which are neither Bad nor Ugly. Implicit in <cit.> is the fact that one can manufacture sequences with a property that is referred to as (*)_θ in <cit.>, and furthermore this property is equivalent to Good_θ. The property (*)_θ and this equivalence are used to show that any Good eub will have a stationary set of good points. Unfortunately for us, it is not clear whether or not sequences satisfy (*)_θ if and only if they satisfy Good_θ. Fortunately for us, we can show that if a sequence satisfies (*)_θ, then it satisfies Good_θ. Further, we can produce sequences which satisfy (*)_θ directly. Throughout, we will fix a set of ordinals A, and an ideal I on A such that (I)<min (A). Let X be a set of ordinals, and let f⃗=⟨ f_ξ : ξ∈ X⟩ be a <_I-increasing sequence of functions from A to ON. We say that f⃗ is strongly increasing if there are sets Z_ξ∈ I for each ξ∈ X such that, for any η<ξ∈ X, we have that f_η(a)<f_ξ(a) for all a∈ A∖( Z_η∪ Z_ξ). The idea behind strongly increasing sequences is that the sets Z_ξ serve as canonical witnesses that the sequence is <_I-increasing. Let λ be a regular cardinal, and let f⃗=⟨ f_ξ : ξ<λ⟩ be a <_I-increasing sequence of functions from A to ON. Letting θ≤λ be a regular cardinal, we say that f⃗ satisfies (*)_θ if for every X⊆λ unbounded in λ, there exists a set X_0⊆λ of size θ such that ⟨ f_ξ : ξ∈ X_0⟩ is strongly increasing.
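As a toy illustration (ours, not from the source): a sequence that is increasing pointwise everywhere is strongly increasing with trivial witnesses.

```latex
% Toy example: pointwise increasing sequences are strongly increasing.
\text{If } f_\eta(a) < f_\xi(a) \text{ for all } a\in A \text{ whenever }
\eta<\xi\in X, \text{ then } Z_\xi := \emptyset\ (\xi\in X) \text{ works,}
\text{ since } A\setminus(Z_\eta\cup Z_\xi) = A .
```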
We should note that satisfying (*)_θ for a sequence of functions is somewhat analogous to satisfying (I)≤θ for an ideal I.Let λ>(I) be a regular cardinal, f⃗=⟨ f_ξ : ξ<λ⟩ be a <_I-increasing sequence of functions from A to ON, and θ∈[(I),λ]∩REG. If f⃗ satisfies (*)_θ, then f⃗ is not Ugly.Suppose otherwise, and let g:A→ON witness that f⃗ is Ugly. That is, letting t_ξ={a∈ A : f_ξ(a)> g(a)} for each ξ<λ, the sequence t⃗=⟨ t_ξ : ξ<λ⟩ does not stabilize modulo I. So for each ξ<λ, there is some η>ξ such that t_η∖ t_ξ∉ I. Using this, we can find an unbounded X⊆λ such that, for all ξ,η∈ X with ξ<η, we have t_η∖ t_ξ∉ I. Next, we use the fact that f⃗ satisfies (*)_θ to fix a set X_0⊆ X of size θ such that ⟨ f_ξ : ξ∈ X_0⟩ is strongly increasing as witnessed by Z_ξ for each ξ∈ X_0. For each ξ∈ X_0, let ξ'=min(X_0∖(ξ+1)) be the successor of ξ in X_0, and let A_ξ=(t_ξ'∖ t_ξ)∩ (A∖(Z_ξ'∪ Z_ξ)).Note that A_ξ∉ I for each ξ∈ X_0, and so we can find some H⊆ X_0 of size (I)≤θ such that ⋂_ξ∈ HA_ξ≠∅. Let a be in this intersection, and let ξ,η∈ H with ξ<η. Then we have that g(a)≥ f_η(a)≥ f_ξ'(a)>g(a).Note that we get the first inequality from the fact that a∉ t_η, while the second inequality comes from the fact that a∉ Z_η∪ Z_ξ' with η≥ξ'>ξ, and the final inequality comes from the fact that a∈ t_ξ'. This gives us that g(a)>g(a), which is of course a contradiction.Let λ>(I) be a regular cardinal, f⃗=⟨ f_ξ : ξ<λ⟩ be a <_I-increasing sequence of functions from A to ON, and let θ≤λ be regular such that I is θ-regular. If f⃗ satisfies (*)_θ, then f⃗ is not Bad_θ.Suppose otherwise, and let S=⟨ S(a): a∈ A⟩ and D witness that Bad_θ holds. Let X⊆λ be unbounded such that for all ξ,η∈ X with ξ<η, there is a function h_ξ∈∏_a∈ AS(a) such that f_ξ<_D h_ξ<_D f_η. Using the fact that f⃗ satisfies (*)_θ, let X_0⊆ X be of size θ such that ⟨ f_ξ : ξ∈ X_0⟩ is strongly increasing as witnessed by Z_ξ∈ I for each ξ∈ X_0.As before, for each ξ∈ X_0, we let ξ'=min(X_0∖(ξ+1)) be the successor of ξ in X_0. For each ξ∈ X_0, let B_ξ={a ∈ A: f_ξ(a)<h_ξ(a) < f_ξ'(a)} and defineA_ξ= B_ξ∩(A∖(Z_ξ'∪ Z_ξ)).Note that B_ξ∈ D, and so each A_ξ is I-positive. So we can find some H⊆ X_0 of size θ such that ⋂_ξ∈ HA_ξ≠∅, so let a be in this intersection. Then for every ξ, η∈ X with ξ<η, we have thath_ξ(a)<f_ξ'(a)≤ f_η(a)<h_η(a).The first inequality follows from a∈ B_ξ, while the third follows from the fact that a∈ B_η. The second inequality comes from the fact that ξ'≤η and a∉ Z_ξ'∪ Z_η. But then the sequence ⟨ h_ξ(a) : ξ∈ X_0⟩ is strictly increasing along S(a) while |S(a)|<θ=|X_0| which is absurd. So for our purposes, it suffices to be able to construct sequences satisfying (*)_θ for appropriate θ. We now quote Lemma 2.19 from <cit.>, which gives us conditions for constructing such sequences. Suppose that* I is an ideal over A;* θ and λ are regular cardinals such that θ^++<λ;* f⃗=⟨ f_ξ : ξ<λ⟩ is a <_I-increasing sequence of functions from A to ON such that for every δ∈ S^λ_θ^++, there is a club E_δ⊆δ such that for some δ≤δ'<λ, we havesup{f_α : α∈ E_δ}<_I f_δ';Then (*)_θ holds for f⃗.It turns out that, while the above lemma looks technical, constructing sequences with the above properties is itself easy. The proof of the following theorem is identical to the proof of Theorem 2.21 of <cit.>, but we include it for the sake of completeness. Suppose that A is a set of regular cardinals. Let λ>(I) be a regular cardinal such that ∏ A/I is λ-directed, and let f⃗=⟨ f_ξ : ξ<λ⟩ be any <_I-increasing sequence of functions in ∏ A. 
Then there exists a sequence g⃗=⟨ g_ξ : ξ<λ⟩ such that: * g⃗ is <_I-increasing; * for each ξ<λ, we have f_ξ<g_ξ+1; * for every θ<λ regular such that θ^++<λ, {a∈ A : a≤θ^++}∈ I, and I is θ-regular, we have that g⃗ is Good_θ. By <ref> and <ref>, it suffices to produce a sequence which satisfies (*)_θ for every appropriate θ. In other words, we only need to produce a sequence satisfying the last condition in <ref>. We proceed by induction on ξ<λ. At stage 0, we simply let g_0 be any function in ∏ A/I. At successor stages, suppose that g_ξ has been defined and let g_ξ+1 be defined by g_ξ+1(a)=max{g_ξ(a), f_ξ(a)}+1. At limit stages δ, we have two cases to deal with. In the first case, we suppose that cf(δ)=θ^++ for θ as in condition (3), and let E_δ⊆δ be club of order type θ^++. Define g_δ=sup{g_ξ : ξ∈ E_δ}, and note that g_δ(a)<a whenever a>θ^++ and so g_δ∈∏ A/I. In the other case, simply let g_δ' be a ≤_I-upper bound of {g_ξ : ξ<δ} and set g_δ=g_δ'+1. By construction, the sequence g⃗ satisfies the hypotheses of <ref>, and so we are finished. § GENERATORS FOR Λ∈_I(A) In the classical pcf theory, the trichotomy theorem is used to produce generators for every λ∈(A). We would like to do exactly that for each λ∈_I(A) when A is a set of regular cardinals, and I is an ideal on A satisfying (I)<min(A). As noted earlier, Shelah does obtain generators for _I(A) in <cit.>. The benefit of our approach is that the exposition has been streamlined to mimic the modern development of pcf theory as found in <cit.>. The results in this section are due to Shelah unless otherwise noted. Suppose that (P, <) is a partial order. We say that P has true cofinality λ if there is a <-linearly ordered family F⊆ P of cofinality λ which is itself cofinal in (P,<). In this case, we write (P,<)=λ, though the ordering < may be omitted if it is clear from the context. We should note that (P,<) may not always be defined, as (P,<) could very well not have a linearly ordered cofinal subset. When it is defined, it is always a regular cardinal and equals the cofinality of (P,<). Suppose A is a collection of ordinals and I is a fixed ideal over A. Define _I(A)={(∏ A/D) : D is an ultrafilter over A disjoint from I}. As ∏ A/D is linearly ordered, it follows that every element of _I(A) is a regular cardinal. Suppose A is a collection of ordinals and I is a fixed ideal over A. The ideal J_<λ^I[A] is defined to be the collection of sets B⊆ A satisfying: * B∈ I or; * for every ultrafilter D over A disjoint from I, if B∈ D, then (∏ A/D)<λ. We will denote these ideals by J_<λ^I if the set A is clear from context. We now highlight a number of simple properties of _I(A) and J_<λ^I[A]. Suppose A is a collection of ordinals and I is a fixed ideal over A. * If λ∈_I(A), then J_<λ^I is proper. * If λ∉_I(A), then J_<λ^I=J_<λ^+^I. * If F is a filter disjoint from I with λ=(∏ A/F), then λ∈_I(A). * For B⊆ A which is I-positive, if we set I_B:=𝒫(B)∩ I, then _I_B(B)⊆_I(A). * If I and J are ideals over A such that I⊆ J, then _J(A)⊆_I(A). * If B⊆ A is such that B=_I A, then _I_B(B)=_I(A). As the proofs of these results are routine, we will content ourselves with only proving (6), and leaving the rest for the reader. (6): We already know that _I_B(B)⊆_I(A) from (4), so it suffices to show the other direction. With that in mind, let λ∈_I(A) and let D be an ultrafilter over A disjoint from I such that (∏ A/D)=λ. Note that B∈ D so we can define D_B=𝒫(B)∩ D which is an ultrafilter over B extending I^*_B.
Now if we let f⃗=⟨ f_ξ : ξ<λ⟩ be cofinal in ∏ A/D, then we see that f⃗↾ B=⟨ f_ξ↾ B: ξ<λ⟩ is cofinal in ∏ B/D_B. Thus, λ∈_I_B(B). With (4) and (6) above in mind, we set aside some notation Suppose A is a collection of ordinals, and I is some ideal over A. For any I-positive B⊆ A, we define _I(B):=_I_B(B)We first show that only assuming (I)<min(A) is enough to get λ-directedness of J_<λ^I[A] whenever it is proper. The following appears as Lemma 1.9 in <cit.>, but we include a proof for the sake of completeness. Suppose that A is a collection of regular cardinals with no maximum, and that I is an ideal over A such that min(A)>(I). If λ≥ (I) is a cardinal with J_<λ^I[A] proper, then ∏ A/J_<λ^I is λ-directed.We will show by induction on λ_0 <λ that ∏ A/J_<λ^I is λ_0^+ directed. If F⊆∏ A is such that |F|≤(I)<min(A), then we let g be defined by g(a)=sup{f(a) : f∈ F}. Then since each a∈ A is regular, it follows that g∈∏ A and f≤ g everywhere.By way of induction, assume we have shown for some cardinal λ_0 with (I)<λ_0<λ, that ∏ A/J_<λ^I is λ_0-directed, and let F⊆∏ A of size λ_0 be given. We first assume that λ_0 is singular. In this case, we can write F=⋃_α<cf(λ_0)F_α such that |F_α|<λ_0. Then by assumption, we can bound each F_α by some g_α, and then bound the set { g_α : α<cf(λ_0)} by some g∈∏ A. We then have that f≤ g modulo J_<λ^I for each f∈ F.So assume that λ_0 is regular. We begin by replacing F={h_i : i<λ_0} with a ≤_J_<λ^I-increasing sequence f⃗=⟨ f_i : i<λ_0⟩. We just let f_i be a ≤_J_<λ^I-upper bound for {h_j : j≤ i}∪{f_j : j<i}. By construction, if we can find a g∈∏ A such that f_i ≤ g modulo J_<λ^I for each i<λ_0, then we will be done. At this point, we will proceed by induction on α <(I) and attempt to construct a ≤_J_<λ^I-increasing sequence of candidates for bounds of f⃗. As usual, we will show that this construction must terminate at some point, or we will be able to generate a contradiction.By induction on α <(I), we will define functions g_α, ordinals ξ(α), and sequences ⟨ B^α_ξ : ξ <λ_0⟩ with the following properties:* g_α∈∏ A and for all β<α, we have that g_β≤ g_α;* B_ξ^α:={a∈ A : f_ξ(a)>g_α (a)};* For each α <(I), and every ξ∈[ξ(α+1),λ_0), we have that B^α_ξ≠ B^α+1_ξ modulo J_<λ^I. The construction proceeds as follows. At stage α=0, we simply let g_0=f_0, and set ξ(α)=0 (note that ξ(α) only matters when α=β+1 for some ordinal β<(I)). At limit stages, assume that g_β has been defined for each β<α, and define g_α by setting g_α(a)=sup_β<α g_β(a). Note since α<(I)<min(A) and each a∈ A is regular, that g_α∈∏ A.At successor stages, let α=β+1, and suppose that g_β has been defined. If g_β is a ≤_J_<λ^I-upper bound for f⃗, then we're done and we can terminate the induction. Otherwise, note that the sequence ⟨ B^β_ξ : ξ <λ_0⟩ is ⊆_J_<λ^I-increasing and so there is a minimum ξ(α) for which every ξ∈[ξ(α),λ_0) has the property that B_ξ^β∉ J_<λ^I (else if there is no such ξ(α), then g_β was indeed the desired bound). By definition, that means we can find some ultrafilter D, disjoint from J_<λ^I such that B_ξ(α)^β∈ D and (∏ A/D)≥λ. Thus it follows that f⃗ must have a <_D-upper bound in ∏ A, say ĝ_α. We then define g_α∈∏ A by g_α (a)=max{g_β(a), ĝ_α(a)}.Note that for each ξ∈[ξ(β+1),λ_0), we have that B_ξ^β∈ D. On the other hand, our definition of g_α gives us that B_ξ^β+1∉ D since g_α is at least ĝ_α everywhere. Thus, condition 3) is satisfied, as are 1) and 2) trivially by construction.We claim that this process must have terminated at some stage. 
Otherwise, we let ξ(*)=sup{ξ(α) : α<(I)}, and note that each B^α_ξ(*)∉ J_<λ^I since the induction never terminated. Next, we note that conditions 1) and 3) give us that for α≤β, we have B_ξ(*)^β⊆ B_ξ(*)^α and so B_ξ(*)^α∖ B_ξ(*)^α+1∉ J_<λ^I. Therefore, for α<β, we have that B_ξ(*)^β⊆ B_ξ(*)^α+1 and so the sets B_ξ(*)^α∖ B_ξ(*)^α+1 and B_ξ(*)^β∖ B_ξ(*)^β+1 are disjoint I-positive sets (since J_<λ^I extends I). But then the sets B_ξ(*)^α∖ B_ξ(*)^α+1 for α<(I) are (I)-many pairwise disjoint I-positive subsets of A; merging the remainder of A into one of them yields a partition of A into (I)-many I-positive sets, which is a contradiction. Therefore the process terminated at some point and f⃗ (hence F) has a ≤_J_<λ^I-upper bound. This completes the induction and the proof. Throughout the remainder of this section, we fix a set of regular cardinals A with no maximum, and an ideal I. In line with the notation of <cit.>, we will isolate the additional hypothesis of <ref>. We say that A is weakly progressive over I if (I)<min(A). We say that A is progressive over I if additionally, (I)<min (A). From λ-directedness, we immediately recover the following facts. Note that the proofs of the following two corollaries only utilize λ-directedness, and can be found in <cit.> as Corollary 3.5 and Corollary 3.7 respectively. If A is weakly progressive over I, then for every ultrafilter D over A disjoint from I, (∏ A/D)≥λ if and only if J_<λ^I[A]∩ D=∅. If A is weakly progressive over I, then max_I(A) exists. As we are aiming to obtain generators using the trichotomy theorem, our next natural step is to show that we can get universal cofinal sequences. Suppose that λ∈_I(A). A sequence f⃗=⟨ f_ξ : ξ<λ⟩ of functions in ∏ A is a universal cofinal sequence for λ if and only if * f⃗ is <_J_<λ^I-increasing. * For every ultrafilter D over A disjoint from I such that λ=(∏ A/D), f⃗ is cofinal in ∏ A/D. If A is weakly progressive over I, then every λ∈_I(A) has a universal cofinal sequence. The proof of this will be very similar to the proof of <ref>, insofar as we will proceed by induction on α<(I), and suppose that we fail to get a universal cofinal sequence at each stage. From this we will be able to produce a contradiction to weak saturation. We will proceed by induction on α<(I), and construct candidate universal sequences f⃗^α=⟨ f_ξ^α : ξ<λ⟩. Now, we want to come up with sets B^α_ξ that are ⊆-increasing in the α coordinate but differ from each other modulo J_<λ^I (and hence I). So we will ask that not only is the collection ⟨ f^α_ξ : α<(I), ξ<λ⟩ strictly increasing modulo J_<λ^I in the ξ coordinate, but that it is ≤-increasing in the α coordinate. With that in mind, we will use λ-directedness to inductively construct these sequences. At stage α=0, we let f⃗^0=⟨ f^0_ξ : ξ<λ⟩ be any <_J_<λ^I-increasing sequence in ∏ A. We can create such a sequence inductively as follows: let f^0_0 be arbitrary, and then assume that f^0_η has been defined for η<ξ. By λ-directedness, we can find g∈∏ A such that f_η^0≤_J_<λ^I g for all η<ξ, and let f^0_ξ=g+1. At limit stages, let γ<(I) be a limit ordinal and assume that f⃗^α has been defined for each α<γ. We inductively define f⃗^γ=⟨ f^γ_ξ : ξ<λ⟩ as follows: let f^γ_0=sup{f^α _0 : α<γ}, which is in ∏ A since γ<(I)<min(A). Now suppose that f^γ_η has been defined for each η<ξ, and let g=sup{f^α_ξ : α<γ}. Again g∈∏ A, and let h be such that f^γ_η≤_J_<λ^I h for all η<ξ by λ-directedness. Then define f^γ_ξ by f^γ_ξ(a)=max{g(a),h(a)}+1, which is as desired. At successor stages suppose that f⃗^α has been defined. If f⃗^α is a universal sequence, then we can terminate the induction.
If not, we inductively define f⃗^α+1=⟨ f^α+1_ξ: ξ<λ⟩ as follows: Since f⃗^α is not universal, we can find an ultrafilter D_α over A with the property that (∏ A/D_α)=λ, but f⃗^α is <_D_α-dominated by some h∈∏ A/D_α (note that D_α is disjoint from J_<λ^I). Let g⃗=⟨ g_ξ : ξ<λ⟩ be a <_D_α-increasing, cofinal sequence in ∏ A/D_α. We define f^α+1_0 by setting f^α+1_0(a)=max{h(a), f^α_0(a), g_0(a)}. Now suppose that f^α+1_η has been defined for each η<ξ, and let f̂ be such that f^α+1_η≤_J_<λ^I f̂ for all η<ξ by λ-directedness. Then define f^α+1_ξ by f^α+1_ξ(a)=max{f^α_ξ(a),f̂(a), g_ξ(a)}+1, which is as desired. Note that f⃗^α+1 is cofinal in ∏ A/D_α. We claim that we must have terminated the induction at some stage. Otherwise, we will have defined for each α<(I) the following: * Sequences f⃗^α=⟨ f^α_ξ : ξ<λ⟩ which are J_<λ^I-increasing in the ξ coordinate, and ≤-increasing in the α coordinate. * Ultrafilters D_α disjoint from J_<λ^I such that f⃗^α is <_D_α dominated by f^α+1_0, and f⃗^α+1 is cofinal in ∏ A/D_α. We will use this to derive a contradiction. We begin by letting h∈∏ A be defined by setting h(a)=sup{f_0^α (a) : α<(I)} (recall that (I)<min(A)). By condition 2) above, for every α<(I), there exists an index ξ(α)<λ such that h <_D_α f_ξ(α)^α+1. Since (I)<min(A)≤λ for λ regular, it follows that ξ(*)=sup{ξ(α) : α<(I)} is below λ. So, for each α<(I), we have that h<_D_α f_ξ(*)^α+1. Now define the sets B_α ={a∈ A : h(a) ≤ f^α_ξ(*)(a)}. By construction, we have that B_α∉ D_α since f_ξ(*)^α <_D_α f_0^α+1≤ h. On the other hand, B_α+1∈ D_α since h<_D_α f^α+1_ξ(*). So, it follows that B_α≠ B_α+1 modulo J_<λ^I (hence modulo I). But since f^α_ξ(*)≤ f^α+1_ξ(*), we have that B_α⊆ B_α+1 (in fact β<α implies that B_β⊆ B_α) and so we are in the same position as the proof of <ref>. That is, ⟨ B_α+1∖ B_α : α<(I)⟩ is a collection of I-positive sets which are disjoint, contradicting weak saturation. Therefore, the induction must have halted at some stage and we are done. Now that we have universal cofinal sequences, we can recover the following corollary by repeating the standard arguments (Theorem 4.4 from <cit.>). If A is weakly progressive over I, then cf(∏ A/I)=max_I(A). Let λ∈_I(A). We say that B is a generator of J_<λ^+^I[A] (written J_<λ^+^I[A]=J_<λ^I[A]+B) if the ideal J_<λ^+^I[A] is generated by J_<λ^I[A]∪{B}. The pcf theorem (in the classical theory) is the statement that, for every λ∈(A), we can find a generator. We now show how to extract generators from universal cofinal sequences under the appropriate conditions. As the proof of the following lemma can be recovered by repeating the standard arguments (as found in the beginning of the proof of e.g. Theorem 4.8 of <cit.>), we omit said proof. Suppose that λ∈_I(A) has a universal cofinal sequence f⃗=⟨ f_ξ : ξ<λ⟩ with an exact upper bound f. Then the set B={a∈ A : f(a)=a} is a generator for J_<λ^+^I[A]. The following is immediate. If A is progressive over I, then for every λ∈ pcf_I(A), there exists a B_λ⊆ A such that J_<λ^+^I[A]=J_<λ^I[A]+B_λ. Fix λ∈_I(A). Note that if λ∈_I(A) and λ≤(I)^++, then {λ}∉ I and the desired generator is simply {λ}. So, assume that (I)^++<λ and apply <ref> to obtain a universal cofinal sequence f⃗ for λ. As J:=J_<λ^I[A] is λ-directed and (I)^++<λ, we may apply <ref> to obtain a <_J-increasing sequence g⃗ which pointwise dominates f⃗ such that g⃗ has an eub f. It is easily seen that any <_J-increasing sequence which <_J-dominates f⃗ is also universal for λ.
Thus, we may apply <ref> to g⃗ to obtain the desired generator B={a∈ A : f(a)=a}. As an easy corollary, we can obtain the compactness theorem modulo I. Suppose that A is progressive over I, and let ⟨ B_λ : λ∈_I(A)⟩ be a sequence of generators. For any I-positive X⊆ A, we can find n<ω and {λ_i : i≤ n}⊆_I(X) such that X⊆_I ⋃_i≤ nB_λ_i. Recall that I_X:=𝒫(X)∩ I. As X is still progressive over I_X, it follows that λ_0:=max_I(X) exists, so let X_1:=X∖ B_λ_0. If X_1∈ I, then we have that X=_I B_λ_0. Otherwise, we note that X_1 is progressive over I_X_1 and so max_I(X_1) exists and is equal to some λ_1<λ_0. Continuing on in this manner, we will reach some finite stage n<ω such that max_I(X_n)=λ_n and X_n∖ B_λ_n∈ I. At this point, we are done since X⊆_I ⋃_i≤ nB_λ_i. § OBSTACLES AND QUESTIONS With generators in hand, the natural thing to ask is whether or not one can obtain something like the no holes conclusion. That is, can we show that if A is an interval of regular cardinals, then so is _I(A)? We should expect not, as _I(A) only depends on A modulo I, and in general we cannot expect I^* to only concentrate on intervals of regular cardinals. It is consistent that A is an interval of regular cardinals, while _I(A) fails to be. For this, we work in a model where ℵ_ω is strong limit while 2^ℵ_ω=ℵ_ω+4, and let A=[ℵ_2,ℵ_ω)∩REG. Note in this case that any ideal I over A will be θ-regular for every regular θ≥ℵ_1. Further, we have that (A)=[ℵ_2,ℵ_ω+4]∩REG by the classical pcf theory, and in particular we have generators for each λ∈(A). So we let B_i=B_ℵ_ω+i for each 1≤ i≤ 4, noting that we may assume that the sets B_i are disjoint since B_i is only unique modulo J_<ℵ_ω+i[A]. Finally, let B=B_1∪ B_3 and let I be the ideal over A defined by X∈ I if and only if |X∖ B|<ℵ_0. We claim that _I(A)={ℵ_ω+2,ℵ_ω+4}. We begin by noting that each B_i is unbounded in A, and so I extends the ideal of bounded sets. For each 1<n<ω, we have that J_<ℵ_n[A]=𝒫({ℵ_2,…, ℵ_n-1}) and so {ℵ_n} is a generator for J_<ℵ_n+1[A]. Therefore, B_λ∈ I for each λ∈(A)∖{ℵ_ω+2,ℵ_ω+4}. For such a λ, let f⃗=⟨ f_ξ : ξ<λ⟩ be a universal cofinal sequence for λ with exact upper bound f∈^AON such that B_λ={a∈ A : f(a)=a} (recall that this is how we obtain generators in the first place). In this case, f+1∈∏ A/I and so f⃗ has a <_I-upper bound. As f⃗ is universal, it follows that there is no ultrafilter D over A extending the dual of I with cf(∏ A/D)=λ. Now we only need to show that ℵ_ω+2,ℵ_ω+4∈_I(A) and we are done. For this, simply note that B_i is I-positive for i=2 or 4 and so we can find an ultrafilter D over A disjoint from I containing B_i. Let f⃗=⟨ f_ξ : ξ<ℵ_ω+i⟩ be a universal cofinal sequence for ℵ_ω+i with exact upper bound f∈^AON such that B_i={a∈ A : f(a)=a}. Then f⃗ is cofinal in ∏ A/D, which means that ℵ_ω+i∈_I(A) for i=2 or 4. The next thing to note is that, even though the proofs may be different, we obtained generators for λ∈_I(A) by generalizing the standard techniques. So, we might ask if it is possible to employ this strategy to obtain transitive generators. Suppose that A is a set of regular cardinals, and A⊆ N⊆(A) is such that N carries a generating sequence B=⟨ B_λ : λ∈ N⟩. We say that B is transitive if for every λ∈ N, if θ∈ B_λ, then B_θ⊆ B_λ. Unfortunately, there are a number of obstacles to obtaining transitive generators through this route. In order to explain precisely what these obstacles are, we need several tools involving elementary submodels.
Following standard abuse of notation, we will use H(χ) to refer to the structure (H(χ),∈,<_χ) where <_χ well orders H(χ). Suppose κ<χ are regular cardinals. We say N≺ H(χ) is a κ-presentable substructure if N=⋃_α<κN_α where * N_α≺ H(χ) for each α<κ; * ⟨ N_α : α<κ⟩ is ⊆-increasing and continuous; * N_α∈ N_α+1 for each α<κ; * κ+1⊆ N_0; * |N_α|=κ for each α<κ. For any structure N, we let Ch_N denote the characteristic function of N, defined by setting Ch_N(μ)=sup N∩μ, where μ is a regular cardinal. Note that if |N|<μ, then Ch_N(μ)∈μ. Let A be a set of regular cardinals with |A|^+<min(A), and fix a sufficiently large and regular χ. Let N≺ H(χ) be a κ-presentable structure with N=⋃_α<κN_α, where |A|<κ<min(A). The arguments for producing transitive generators (Claim 6.7 and 6.7A of <cit.> and section 6 of <cit.>) rely on the fact that, for every λ∈(A)∩ N, we can code a generator B_λ for λ by way of N. More precisely, the key observation is Lemma 5.8 of <cit.>, which we quote in a simplified form. Suppose that A, χ, N are as above with A∈ N_0. For λ∈(A)∩ N, let f⃗=⟨ f_α : α<λ⟩ be a universal cofinal sequence for λ, and let γ=sup N∩λ. Then the set b_λ={a∈ A : Ch_N(a)=f_γ(a)} is a generator for λ. The generators obtained in this way are subsets of the ordinal closure of N, which has cardinality κ. So suppose we only ask that (A)<κ<min(A), and we somehow manage to obtain transitive generators for some N⊆_I(A) using κ-presentable structures. In doing so, we will have obtained generators b_λ above, which must have size κ. But then, we can employ <ref> to see that there are λ_i∈_I(A) for 1≤ i≤ n<ω such that A⊆_I ⋃_1≤ i≤ n b_λ_i. So then I^* concentrates on a set of size κ. As _I(A) will remain the same if we replace A with an I-equivalent set, this amounts to doing pcf theory in the classical case. So in order to have any hope of obtaining transitive generators for more than just the classical case, we would need to use different techniques and perhaps utilize stronger assumptions on A, I, or even _I(A) than just (I)<min(A).
"authors": [
"Shehzad Ahmed"
],
"categories": [
"math.LO",
"03E04"
],
"primary_category": "math.LO",
"published": "20170627191414",
"title": "An Extension of Shelah's Trichotomy Theorem"
} |
| http://arxiv.org/abs/1706.08360v2 | {
"authors": [
"Long Bai"
],
"categories": [
"math.PR",
"60G15"
],
"primary_category": "math.PR",
"published": "20170626132734",
"title": "Extremes of $L^p$-norm of Vector-valued Gaussian processes with Trend"
} |
[email protected] ^1School of Electrical & Electronic Engineering, University of Manchester, Manchester M13 9PL, UK ^2National Graphene Institute, University of Manchester, Manchester M13 9PL, UK ^3Photon Science Institute, University of Manchester, Manchester M13 9PL, UK ^4Manchester Centre For Mesoscience and Nanotechnology, University of Manchester, Manchester M13 9PL, UK ^5School of Physics and Astronomy, University of Manchester, Manchester M13 9PL, UK Graphene-Silicon Schottky diode photodetectors possess beneficial properties such as high responsivities and detectivities, broad spectral wavelength operation and high operating speeds. Various routes and architectures have been employed in the past to fabricate devices. Devices are commonly based on the removal of the silicon-oxide layer on the surface of silicon by wet-etching before deposition of graphene on top of silicon to form the graphene-silicon Schottky junction. In this work, we systematically investigate the influence of the interfacial oxide layer, the fabrication technique employed and the silicon substrate on the light detection capabilities of graphene-silicon Schottky diode photodetectors. The properties of devices are investigated over a broad wavelength range from near-UV to short-/mid-infrared radiation, radiation intensities covering over five orders of magnitude, as well as the suitability of devices for high speed operation. Results show that the interfacial layer, depending on the required application, is in fact beneficial to enhance the photodetection properties of such devices. Further, we demonstrate the influence of the silicon substrate on the spectral response and operating speed. Fabricated devices operate over a broad spectral wavelength range from the near-UV to the short-/mid-infrared (thermal) wavelength regime, exhibit high photovoltage responses approaching 10^6 V/W and short rise- and fall-times of tens of nanoseconds. Towards Substrate Engineering of Graphene-Silicon Schottky Diode Photodetectors T.J. Echtermeyer^1,2,3 =============================================================================== Graphene is an appealing material for ultrafast and broadband photodetection applications due to its high charge carrier mobility <cit.> and ultra-wide spectral absorption range <cit.>. The initial examples of graphene photodetectors are mostly based on metal-graphene (MG) junctions <cit.> and graphene p-n junction architectures <cit.>. Despite their broadband operation at ultrafast speeds <cit.>, they generally exhibit low responsivities (limited to a few mAW^-1) due to the intrinsically low optical absorption of monolayer graphene (2.3%) <cit.>. Further, the small photoactive area limits their use for real-world applications <cit.>. Recently, there has been a surge of interest in using graphene to replace the metal electrode on semiconductor surfaces to realize Schottky diodes <cit.>. Unlike conventional bulk metals, graphene's high optical transmittance <cit.>, tuneable Fermi level <cit.>, high mobility of charge carriers <cit.> and atomically thin structure <cit.> potentially bring additional functionalities to Schottky diode platforms. They can be used in a variety of applications such as photodetection <cit.>, solar energy harvesting <cit.>, chemical gas sensing <cit.> and current rectification <cit.>.
A further benefit of the graphene-silicon Schottky platform is its simple architecture, suitability for large scale fabrication and the potential for integration into the back end-of-line (BEOL) CMOS processing <cit.>. Particularly in photodetection applications, the G-Si diode provides an efficient hybrid platform where both graphene and silicon can be used as absorbing materials for different wavelength ranges <cit.>. Devices show high responsivities comparable to commercial silicon photodiodes for wavelength ranges with photon energies above the silicon bandgap, which is enabled by the high optical transmittance of graphene of more than 97% <cit.>. Detection of radiation with energies below the silicon band gap is enabled by the broadband absorption of graphene <cit.>. Responsivities reduce to a few mAW^-1 <cit.> and less for longer wavelengths <cit.>, which is comparable to the responsivities of planar MG junction PDs <cit.>. However, compared to MG junction PDs the G-Si Schottky diode offers the advantage of a large photoactive area, formed by the whole lateral junction surface. Optical and electrical characteristics of planar G-Si Schottky diodes have been comprehensively investigated in earlier reports. Differences in the device characteristics such as ideality factor, level of dark current and spectrally dependent photo-response are mostly attributed to profound effects of interface properties, choice of materials and fabrication route in the past literature <cit.>. In this regard, the effect of the interfacial oxide layer on rectification characteristics <cit.> and solar cell efficiency <cit.>, as well as the influence of graphene doping on spectral response <cit.> and on the efficiency of solar cells <cit.>, were investigated in more specific reports. However, the influence of substrate properties on the optoelectronic characteristics of the diode has not been discussed in detail before. In this work, we characterised G-Si diodes by conducting current-voltage (I-V) and high speed optical measurements over broad wavelength and light intensity ranges to investigate the effects of interface and substrate properties on the photoresponse and response time characteristics of graphene-silicon Schottky diodes. Two different sets of devices were fabricated, Graphene/Silicon (GS) and Graphene/Insulator/Silicon (GIS) Schottky diodes. The process flow is depicted in fig.<ref>. Bare n-doped silicon wafers (specification ρ = 1-10 Ohm cm, extracted doping level N_d ≈ 3.5×10^14 cm^-3) with native silicon oxide layer (SiO_2) have been used as substrate. Conventionally, devices are fabricated from wafers with grown or deposited silicon oxide of greater thickness, in the order of tens to hundreds of nanometers, followed by wet-etching of this oxide layer. The fabrication route employed here provides, especially in the case of the GIS devices, an initially homogeneous natural oxide layer of well-defined thickness of ≈ 2nm <cit.>. An insulating spacer, serving later as base for the contact metallization to graphene, is fabricated from negative photoresist, defined by optical lithography and subsequently hard-baked to withstand further processing. The spacer has a thickness of 800nm to prevent dielectric breakdown for the range of bias voltages applied. Further, the exposure dose for optical lithography has been optimized to yield rounded edges of the spacer, which facilitates transfer of the graphene layer on top of this structure in a later step and avoids tearing of the graphene layer.
A split is then performed to fabricate GS and GIS devices. For the GS devices, a hydrofluoric acid (HF) dip is carried out to remove the native oxide layer while GIS devices are left untreated. Graphene grown by chemical vapour deposition (CVD) on copper foil is then transferred to both device types employing a process based on poly-methyl-methacrylate (PMMA). This transfer step is carried out immediately after the HF-dip for the GS device to minimize the re-growth of an interfacial oxide layer. A further lithography step is then carried out to define contacts to the graphene layer using gold/chromium (Au/Cr). Aluminium (Al) serves as a low-work-function metallization to the silicon back side to form an ohmic contact. An additional layer of photoresist is spun onto the sample surface of the GS device to delay re-growth of an interfacial oxide layer by reducing exposure of the graphene-silicon junction to the ambient atmosphere. Fig.<ref>e shows an image of a fabricated device. It is noteworthy that our simple process for the GIS device allows rapid fabrication of devices with high throughput. In particular, the absence of HF-etching simplifies the processing and further eliminates the need for depositing graphene on the silicon surface shortly after HF-etching. Devices have intentionally been designed to have a large junction area in the order of ≈ 60mm^2. The large lateral dimensions of the junction, in the order of mm, compared to the vertical depletion region length in silicon, in the order of μm, result in a high lateral-to-vertical aspect ratio. This high ratio reduces fringe and edge effects of the electric field and promotes a 1-dimensional (vertical) electric field in the junction region, which allows treating the devices as a simple parallel-plate capacitor. The contact to graphene has been designed as a finger-like structure (inset) with the aim of increasing the perimeter of the contact and reducing contact resistance <cit.>. The dark current-voltage (IV) characteristics of both device types have been recorded immediately after device fabrication and repeatedly re-measured over the following five weeks. Fig.<ref> shows that the GIS device exhibits an on- to off-current ratio (I_on/I_off) of 10^5, defined by the ratio of currents at voltages of + and - 1.5V, respectively. In comparison, the GS device initially exhibits an I_on/I_off ratio of 10^4. The Schottky-barrier-heights (SBH) determined by temperature dependent measurements are 0.62eV and 0.45eV for the GIS and GS device, respectively (see supplementary information). The built-in potential derived from CV measurements is ϕ_bi = 0.58V for the GIS and ϕ_bi = 0.31V for the GS device (see supplementary information). Both the GIS and GS device have a series resistance R_S of ≈ 470 Ω and ideality factors η of 2.8 and 4.8, respectively (see supplementary information). In the forward biased regime for voltages in the range of ≈ -0.25..0V, the GS device exhibits a much sharper increase in current with bias voltage than the GIS device. Both the higher I_on/I_off ratio and the less sharp onset in current of the GIS device can be attributed to the presence of the interfacial layer <cit.>. The natural oxide layer acts as a tunneling barrier and suppresses current flow for low voltage drops across the oxide layer <cit.>.
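As a rough plausibility check (our own illustration, not a calculation from the paper), the ideal thermionic-emission saturation current I_0 = A A* T^2 exp(-qΦ_B/kT) can be evaluated for the quoted barrier heights, assuming the ≈ 60mm^2 junction area and a Richardson constant of A* ≈ 112 A cm^-2 K^-2 for n-type silicon:

```python
import math

def schottky_saturation_current(phi_b_eV, area_cm2=0.6, T=300.0, A_star=112.0):
    """Ideal thermionic-emission saturation current I_0 = A*A_star*T^2*exp(-phi_B/kT).

    phi_b_eV : Schottky barrier height in eV (from the text)
    area_cm2 : junction area in cm^2 (0.6 cm^2 = 60 mm^2, from the text)
    A_star   : effective Richardson constant in A cm^-2 K^-2 (assumed for n-Si)
    """
    kT_eV = 8.617e-5 * T  # Boltzmann constant (eV/K) times temperature
    return area_cm2 * A_star * T**2 * math.exp(-phi_b_eV / kT_eV)

for label, phi_b in [("GIS (0.62 eV)", 0.62), ("GS (0.45 eV)", 0.45)]:
    print(f"{label}: I_0 ~ {schottky_saturation_current(phi_b):.1e} A")
```

These numbers are the bare thermionic limit; the interfacial tunnel barrier suppresses the measured dark current below this limit, consistent with the lower reverse current observed for the GIS device.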
After five weeks, both forward and reverse currents are approximately reduced two-fold in the GIS device. This aging effect is much more pronounced in the GS device. The reverse current decreases by almost an order of magnitude and the I_on/I_off ratio increases. Correspondingly, the SBH increases to 0.61 eV and the built-in potential to ϕ_bi = 0.44V after five weeks. Further, it is visible that the GS device starts to follow the characteristics of the GIS device in the low voltage forward biased region: the slope of the forward current increase is reduced. These changes in the GS device can be attributed to the re-growth of the interfacial layer with increasing lifetime <cit.>. The diffusion of oxygen and water through cracks and grain boundaries in CVD graphene leads to an oxidation of the silicon surface underneath <cit.>. A similar effect, even though less pronounced, occurs for the GIS device. This is counter-intuitive as the initially present oxide layer thickness is self-limiting, has been grown over months of wafer storage and should have reached its maximum thickness. The exact reason for this re-growth is unknown, but it is highly likely that molecular species such as oxygen and water are trapped underneath the graphene layer, which promotes the growth of SiO_2 <cit.>. Subsequently, we characterized both GIS and GS devices under illumination at different wavelengths and light intensities. Graphene-Silicon Schottky diode photodetectors exhibit two different operating regimes. In the wavelength region below λ≈ 1.1μm, silicon is the main light absorber due to optical excitation of charge carriers above its band gap of E_g = 1.1eV. For longer wavelengths λ ≳ 1.1μm, optical absorption takes place in graphene and silicon is optically transparent at the employed low doping level N_d ≈ 3.5×10^14 cm^-3. Optically excited charge carriers in graphene gain sufficient energy to overcome the Schottky barrier formed at the graphene-silicon interface and lead to a photoresponse <cit.>. For light detection in the visible (VIS) and near-infrared (NIR) wavelength region (λ < 1.1μm) a large SBH is beneficial. It leads to reduced reverse current density, as demonstrated for the GIS diode (Fig.<ref>), and improves the signal-to-noise ratio (SNR). On the contrary, for the generation of a photocurrent due to longer wavelength light a reduced SBH as in the GS device is required, as it facilitates the transmission of optically excited charge carriers in graphene over the Schottky barrier. For opto-electronic characterization in the VIS and NIR wavelength regime, continuous wave (CW) laser sources with wavelengths of 532, 650 and 980nm at varying intensities as well as illumination from a broad band white light source filtered with a monochromator have been used. The photoresponse of the GIS device was determined by recording the IV characteristics under illumination and is shown in fig.<ref> for wavelengths of 532, 650 and 980nm. The reverse current increases with increasing light intensity for all wavelengths. With increasing light intensity, the open circuit voltage V_oc, the voltage when the diode forward current and photocurrent compensate each other (I_forward + I_photo = 0, minimum in the IV curves), shifts towards the forward biased region of the diode. It is notable that, independent of light intensity and wavelength, the current in the reverse-biased region saturates sharply and does not increase with increased reverse bias.
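Regarding the two operating regimes introduced above, the ≈1.1 μm boundary follows directly from the silicon band gap; a one-line check (illustrative):

```python
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
q = 1.602e-19   # elementary charge, C

E_g = 1.1  # silicon band gap, eV
lambda_cutoff = h * c / (E_g * q)  # longest wavelength silicon can absorb
print(f"lambda_cutoff = {lambda_cutoff*1e6:.2f} um")  # ~1.13 um
```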
Further, higher light intensities lead to a stretch-out of the IV curves; the voltage that needs to be applied beyond V_oc in the reverse biased regime to drive the device into saturation increases with higher light powers. To understand the observed opto-electronic properties of graphene-silicon Schottky diode photodetectors, it is instructive to review the processes involved in the photocurrent generation in such devices. Fig.<ref>a shows the band diagram and cross-section through the silicon substrate as a function of depth. When graphene is in contact with silicon, band bending occurs in silicon and a depletion region of length x_d forms <cit.>. The built-in potential ϕ_bi drops across the depletion region <cit.>. The depletion length in a Schottky diode depends on the applied reverse bias and can be calculated using the full depletion approximation as <cit.> x_d = √(2 ϵ_0 ϵ_Si (ϕ_bi + V_b)/(q N_d)) with ϵ_0 free space permittivity, ϵ_Si relative permittivity of silicon, q electron charge and V_b applied reverse bias. Light absorption in silicon follows the Beer-Lambert law, describing the exponential decay of light intensity I as a function of depth x <cit.>. Taking into account optical reflection at the device surface it can be described by <cit.> I(x) = I_0(1-R)e^-α x Here, I_0 is the incident light intensity, α the wavelength dependent absorption coefficient, and R the wavelength dependent optical reflection coefficient at the device surface. This leads to a depth-dependent generation rate of electron-hole pairs G_e,h in the silicon substrate <cit.> G_e,h(x) = α λ/hc I_0(1-R)e^-α x with Planck constant h and c speed of light. A quantum efficiency of one is assumed in eq.<ref>, i.e. every photon generates an electron-hole pair. Optically excited charge carriers generate a photoresponse in the device due to two processes. Band bending and corresponding electric fields within the depletion region separate electron-hole pairs and lead to a drift current density J_dr (fig.<ref>a). Photo-generated holes (minority carriers) in the non-depleted bulk of the silicon substrate exhibit a concentration gradient as a function of depth (fig.<ref>a). This concentration gradient leads to a diffusion current density J_diff. The total photocurrent of a graphene-silicon Schottky diode can be described in a similar way to a conventional pn-junction photodiode, yet with the p-doped region omitted <cit.>. The total current density J_tot is given by <cit.> J_tot = J_dr + J_diff with drift current density J_dr = qI_0 (1-R)λ/hc (1-e^-α x_d) and diffusion current density J_diff = qI_0 (1-R)λ/hc · α L_p/(α^2 L_p^2-1) · e^-α x_d{α L_p - [S_p L_p/D_p (cosh(H'/L_p)-e^-α H') + sinh(H'/L_p) + α L_p e^-α H']/[S_p L_p/D_p sinh(H'/L_p) + cosh(H'/L_p)]} Here, q is the electron charge, h Planck's constant, c the speed of light, L_p the diffusion length of holes in the n-doped substrate, S_p the recombination velocity of carriers at the back-side of the substrate, and D_p the diffusion coefficient. H' = H - x_d, with H the physical thickness of the silicon substrate, denotes the length of the intrinsic substrate region. We explicitly consider the substrate thickness in our calculation of the diffusion current. Hole diffusion lengths L_p in low doped silicon are in the order of hundreds of μm <cit.> and comparable to the silicon substrate thickness of typically H = 500 μm.
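A minimal numerical sketch of this drift-plus-diffusion model is given below (our own illustration; the absorption coefficients are rough literature-order values for silicon, not the optical constants used in the paper, and surface reflection R is neglected):

```python
import numpy as np

q, h, c = 1.602e-19, 6.626e-34, 2.998e8  # SI constants
eps0, eps_si = 8.854e-12, 11.7           # permittivities
Nd = 3.5e14 * 1e6                        # doping, m^-3 (3.5e14 cm^-3, from text)
phi_bi, Vb = 0.58, 1.5                   # built-in potential, reverse bias (V)
mu_p, tau_p = 400e-4, 2e-4               # hole mobility (m^2/Vs), lifetime (s)
Dp = mu_p * 0.02585                      # Einstein relation at 300 K, m^2/s
Lp = np.sqrt(Dp * tau_p)                 # diffusion length, ~450 um
H = 500e-6                               # substrate thickness, m

# depletion width from the full depletion approximation
xd = np.sqrt(2 * eps0 * eps_si * (phi_bi + Vb) / (q * Nd))
Hp = H - xd                              # quasi-neutral region length H'

# illustrative absorption coefficients for silicon (1/m)
alphas = {532e-9: 7.8e5, 650e-9: 2.8e5, 980e-9: 1.0e4}

for lam, alpha in alphas.items():
    prefac = q * lam / (h * c)                   # responsivity prefactor, A/W
    J_dr = prefac * (1 - np.exp(-alpha * xd))    # drift contribution
    a = alpha * Lp
    # S_p -> infinity limit of the diffusion bracket:
    # a - (cosh(H'/Lp) - exp(-alpha*H')) / sinh(H'/Lp)
    bracket = a - (np.cosh(Hp / Lp) - np.exp(-alpha * Hp)) / np.sinh(Hp / Lp)
    J_diff = prefac * a / (a**2 - 1) * np.exp(-alpha * xd) * bracket
    print(f"{lam*1e9:.0f} nm: responsivity ~ {J_dr + J_diff:.2f} A/W")
```

The S_p = ∞ simplification mirrors the choice made in the text; it makes the back surface a perfect recombination sink, which is the conservative assumption for the diffusion contribution.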
Especially for longer wavelengths and the corresponding low absorption coefficient α, light is able to penetrate deep into the silicon substrate and leads to non-negligible diffusion currents. Fig.<ref>b shows the calculated wavelength-dependent responsivity ℛ based on equations <ref> and <ref> as a function of depletion region length. The diffusion coefficient has been derived from the Einstein relation as D_p = μ_p kT/q, based on a hole mobility of μ_p = 400cm^2/Vs for low-doped silicon substrates at room temperature <cit.>. The diffusion length has been derived from the recombination time τ_p = 2×10^-4s for n-doped silicon with a doping level of N_d = 3.5×10^14 cm^-3 <cit.> and is calculated as L_p = √(D_pτ_p) = 450μm. S_p = ∞ has been used, and the reflection R and absorption coefficient α have been calculated from the complex refractive indices of silicon <cit.>. No parameter fitting has been used. Both drift and diffusion currents contribute equally in magnitude to the total photoresponse. Light of shorter wavelengths is absorbed closer to the silicon surface and the drift current contribution dominates in this wavelength range. Longer wavelength light penetrates deeper into the substrate and leads to diffusion currents exceeding the drift currents. With increasing depletion region length, the spectral response of the drift current contribution is shifted towards longer wavelengths and its absolute contribution increases. Similarly, the diffusion current contribution shifts towards longer wavelengths with increasing depletion region length. The magnitude of the total photoresponse of the device is largely independent of depletion region length, since drift and diffusion current contributions compensate each other. However, the ratio of drift to diffusion currents is strongly dependent on the depletion region length. Especially for high operating speeds, it is desirable to suppress diffusion currents due to the long recombination times of the charge carriers involved in this process, which limit response times. For comparison, the experimentally determined photocurrent responsivity ℛ_c at V_b = 1.5V has been included, calculated as ℛ_c = I_Photo / P_Light with I_Photo = I_Illuminated - I_dark and P_Light the incident, external light power. The experimental and theoretical spectral dependence of the responsivities are in good agreement. The internal quantum efficiency (IQE), determined as the ratio of the experimentally determined responsivity to the theoretical responsivity based on the ideal number of photons absorbed, reaches values greater than 100% for wavelengths up to 950nm. It should be noted that the IQE for λ = 1100nm has been omitted due to the uncertainty in the absorption coefficient α close to the absorption edge of silicon, which would lead to IQE values greater than 100%. The external quantum efficiency EQE is calculated from the experimental photocurrent responsivity ℛ_c as EQE = (ℛ_c/q)(hc/λ) The EQE reaches values of 60-70% in the wavelength range from 400-950nm, after which it decays for longer wavelengths, approaching photon energies close to the band gap of silicon, where silicon is no longer light absorbing. Remarkably, the EQE is almost constant in the wavelength range from 400-950nm. It does not decrease for shorter wavelengths, since the Schottky junction formed in this device is located just beneath the silicon surface, as opposed to conventional pn-junction photodiodes based on a buried junction <cit.>.
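The responsivity-to-quantum-efficiency conversion used here reduces to a one-liner; in the minimal sketch below (Python), the example responsivity is taken from the ranges quoted above and the reflectance value is an assumption:

```python
h, c, q = 6.626e-34, 3.0e8, 1.602e-19

def eqe_from_responsivity(R_c, lam):
    """EQE = (R_c / q) * (h c / lambda), with R_c in A/W and lambda in m."""
    return (R_c / q) * (h * c / lam)

def iqe_simple(R_c, lam, reflectance):
    """Simplified IQE: EQE rescaled by the photons lost to reflection."""
    return eqe_from_responsivity(R_c, lam) / (1.0 - reflectance)

print(eqe_from_responsivity(0.35, 650e-9))   # ~0.67, within the 60-70% range
print(iqe_simple(0.35, 650e-9, 0.33))        # ~1.0 with an assumed R = 0.33
```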
The location of the Schottky junction just below the surface promotes the conversion of shorter wavelength light, which is strongly absorbed just below the surface, into a photocurrent. The high optical transparency of graphene enhances the efficient transduction of short wavelength light into a photoresponse compared to metals employed as electrodes in Schottky photodiodes. For example, gold becomes strongly light absorbing for wavelengths shorter than ≈ 500nm <cit.>. Further, the observed strong saturation of the photocurrent with increasing reverse bias (fig.<ref>) is reflected by equations <ref> and <ref>. Fig.<ref>c shows the theoretical and experimental percentage change in photocurrent with increasing depletion region length for wavelengths of λ = 532, 650 and 980nm. The small overall increase of the photocurrent (less than one percent) and its wavelength dependence agree between theory and experiment. Therefore we argue that the observed saturation of the photocurrent, i.e. its independence of reverse bias voltage, is caused by the silicon substrate and is not limited by the available states within graphene that can be occupied by photo-generated charge carriers, as claimed in reference <cit.>. A comprehensive overview of the fundamental properties of the GIS device for various light wavelengths, obtained with laser and white light sources at different optical intensities, is shown in fig.<ref>. The four linked panels of a) absolute photocurrent, b) photocurrent responsivity ℛ_c, c) photovoltage responsivity ℛ_v and d) open circuit voltage V_oc give a coherent overview of the fundamental properties of the device and allow determination of the optimum operating conditions for a desired application. This is indicated by the arrows and blue letters. For example, incident light with an intensity on the order of I_0 = 20μW/cm^2 (fig.<ref>c, point A) will lead to an absolute photocurrent on the order of I_pc = 100nA (panel a, point D). Panel a) further shows the linear trend of the generated absolute photocurrent as a function of light intensity. The dark current noise floor of the GIS device has been included; its 3σ value of 10^-10A, two orders of magnitude below the photocurrents recorded in our experiments, indicates further potential of GIS devices for low light intensity detection. The corresponding photocurrent responsivity ℛ_c in A/W at a reverse bias voltage of V_b = 1.5V can be derived from fig.<ref>b, point C, with the corresponding band diagram shown in panel e). As described in Fig.<ref>b) and according to eqs.<ref> and <ref>, the photocurrent responsivity ℛ_c is wavelength dependent. Further, the photocurrent responsivity ℛ_c decreases with decreasing light power, which is not straightforwardly visible in the log-log plot in panel a). The photovoltage responsivity ℛ_v = V_oc / P_Light in V/W for a given light intensity can be determined from fig.<ref>c, with the band diagram shown in fig.<ref>f. While the photocurrent responsivity ℛ_c decreases for lower light intensities but remains of the same order of magnitude in the range of light intensities used in the experiments, the photovoltage responsivity ℛ_v exhibits a more pronounced dependence on light intensity. It increases by four orders of magnitude, approaching 10^6 V/W, when light intensities are decreased to the μW/cm^2 range. Fig.<ref>d shows the absolute open-circuit photovoltage for the GIS device. For comparison, the open-circuit photovoltage for the GS device has been included.
Forward dark IV curves are plotted additionally for both devices, as the open-circuit voltage is determined by the point where the dark forward current and the photocurrent compensate each other. In the GIS device, the open-circuit voltage follows the dark IV curve for low light intensities up to ≈ 0.22V before it deviates and approaches saturation, determined by the built-in voltage ϕ_bi. In contrast, the GS device exhibits a reduced open-circuit voltage compared to the GIS device for an identical photocurrent. The reduced photocurrent responsivity ℛ_c (fig.<ref>b) and increased photovoltage responsivity ℛ_v (fig.<ref>c) for low light intensities can be qualitatively explained by the presence of the interfacial oxide layer in the GIS device, following the suggestion in <cit.>. Photogenerated holes build up at the interface due to the presence of the interfacial oxide layer <cit.>. The oxide layer presents a tunneling barrier for charge carriers, and an electric field facilitates the tunneling of charge carriers through this barrier. This electric field across the oxide layer can only partially be provided by the externally applied reverse bias. The voltage drop within a GIS, or more generally any metal-insulator-semiconductor (MIS) Schottky diode, under reverse bias consists of the voltage drop across the interfacial oxide, V_oxide, and across the depletion region in the silicon substrate, V_depletion. It can be described by <cit.> ϕ_bi + V_b = q N_d x_d^2/(2 ϵ_0 ϵ_Si) + q N_d x_d t_ox/(ϵ_0 ϵ_Ox), where the first term is V_depletion, the second term is V_oxide, and t_ox is the interfacial oxide layer thickness. Solving eqn.<ref> for x_d allows determination of V_oxide. Due to the low doping of the silicon substrate, most of the voltage drop occurs across the depletion region, and V_oxide is less than 10mV across an oxide layer of thickness t_ox = 2nm at a reverse bias voltage of V_b = 1.5V in the dark, equating to an electric field of ≈ 5 × 10^4 V/cm. Under illumination, the photogenerated holes will move towards the interfacial oxide layer, where they accumulate before they can tunnel through the interfacial layer <cit.>. The build-up of these charges leads to an additional electric field across the interfacial oxide layer that drives the tunneling process of charge carriers through the oxide layer. The tunneling current depends exponentially on the voltage drop across the interfacial oxide layer <cit.> created by the photogenerated charges. Qualitatively, lower light intensities lead to a smaller number of photogenerated carriers and a reduced additional voltage drop V_oxide, resulting in a smaller photogenerated tunneling current compared to higher light intensities. Once a threshold of hole density, and thus electric field, is overcome, hole tunneling through the oxide layer is enhanced. The corresponding current flow leads to a reduction of accumulated holes, and a balance between the accumulated holes at the interface and the tunneling current is established. This build-up of holes at the interface explains the observed reduced photocurrent responsivity ℛ_c (fig.<ref>b) and increased photovoltage responsivity ℛ_v (fig.<ref>c) for low light intensities. We would like to point out the influence of the light wavelength on the reduction of photocurrent responsivity ℛ_c: light of longer wavelength shows a more pronounced reduction in ℛ_c for low light intensities (fig.<ref>b).
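Before turning to the origin of this wavelength dependence, the electrostatics of eqn.<ref> can be checked with a short numerical sketch (Python; doping, oxide thickness and bias as quoted above, with the quadratic in x_d solved explicitly; the SiO_2 permittivity is a standard assumed value):

```python
import numpy as np

q, eps0 = 1.602e-19, 8.854e-12
eps_si, eps_ox = 11.7, 3.9     # eps_ox = 3.9 for SiO2 is an assumed value
N_d, t_ox = 3.5e20, 2e-9       # doping (m^-3) and oxide thickness (m)

def split_voltage_drop(phi_bi=0.44, V_b=1.5):
    """Solve a*x_d^2 + b*x_d = phi_bi + V_b for x_d (eqn. above), then
    return the depletion length, V_oxide and the oxide electric field."""
    a = q * N_d / (2 * eps0 * eps_si)
    b = q * N_d * t_ox / (eps0 * eps_ox)
    x_d = (-b + np.sqrt(b**2 + 4 * a * (phi_bi + V_b))) / (2 * a)
    V_ox = b * x_d
    return x_d, V_ox, V_ox / t_ox

x_d, V_ox, E_ox = split_voltage_drop()
print(f"x_d = {x_d*1e6:.2f} um, V_oxide = {V_ox*1e3:.1f} mV, "
      f"E_ox = {E_ox*1e-2:.1e} V/cm")   # V_oxide < 10 mV, E_ox ~ 5e4 V/cm
```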
While this wavelength dependence is not fully understood, we speculate that the reduction originates from the lower excess energy of photogenerated electron-hole pairs above the valence and conduction band edges in silicon for longer-wavelength compared to shorter-wavelength light. This reduced excess energy translates into an effectively increased tunneling barrier height for charge carriers tunneling through the interfacial oxide layer and hence reduced current flow. The interfacial oxide layer further influences the light detection capabilities of graphene-silicon Schottky diodes in the infrared wavelength regime beyond 1.1μm, where graphene is the main light-absorbing material. Fig.<ref>a) shows the time-dependent photocurrents of the GIS and GS devices under periodic illumination with light of wavelength λ = 1.55μm. In its pristine state, the GS device exhibits photocurrents of 35nA (R = 0.3 mA/W). The photocurrent decreases more than 50-fold after four weeks of aging and the corresponding re-growth of the interfacial oxide layer in the GS device and increase in SBH, comparable to the GIS device. However, despite the reduced photocurrent response in the presence of an interfacial oxide layer, the full current-voltage curve in fig.<ref>b) shows that the GIS device exhibits an increased photovoltage response V_oc compared to the aged GS device. Further, the GIS device exhibits a photovoltage response under illumination with the thermal radiation of a hot body heated to a temperature T = 550K, incident on the device through a λ = 1.5μm long-pass filter (fig.<ref>c). The interfacial oxide layer and the correspondingly increased SBH are as such not detrimental to light detection in either the visible or the infrared wavelength range. Instead, depending on the required application and the utilized read-out mechanism (current vs. voltage), the interfacial oxide layer can enhance the photodetection properties of graphene-silicon Schottky photodiodes. The high-speed performance of the GIS device has been investigated using a λ = 405nm laser source with 80ps pulse width and variable repetition rate. A wavelength of λ = 405nm was chosen to reduce response-speed-limiting diffusion currents in the device. Fig.<ref>a) shows that the GIS device is faster than a commercial photodetector (Thorlabs DET210/M) of comparable active device area. The recorded time-dependent response of the GIS device further exhibits ringing when the signal decays, which we attribute to the bandwidth-limited (50MHz) amplifier in our experimental setup. The rise- and fall-times (10-90%) of the GIS device are 12 and 20ns, respectively. Fig.<ref>b) shows the frequency-dependent response at different illumination wavelengths. The cut-off frequency f_c can be determined from the drop in signal amplitude by 3dB. The GIS device exhibits a cut-off frequency f_c = 2-5MHz at a wavelength λ = 405nm that can be increased with increasing reverse bias voltage, due to an increase of the depletion region length and the corresponding reduced capacitance of the junction. For longer wavelengths, f_c drops to ≈ 27-35kHz. Further, the cut-off frequency f_c exhibits a wavelength dependence: it reduces with increasing wavelength from f_c = 35kHz at λ = 468nm to f_c = 27kHz at λ = 1050nm. This decrease in cut-off frequency f_c by two orders of magnitude can be explained by a strongly increased contribution of diffusion currents to the overall photoresponse of the device for wavelengths longer than 400-500nm, depending on the depletion region length (fig.<ref>b).
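A note on the conversions used for these speed figures: rise times and -3 dB cut-off frequencies can be interconverted with the first-order relation adopted in the text, as in this minimal sketch (Python):

```python
def f_c_from_rise_time(t_rise):
    """-3 dB cut-off frequency estimated from the 10-90% rise time."""
    return 0.34 / t_rise

def rise_time_from_f_c(f_c):
    """Inverse relation, useful to compare measured cut-off frequencies."""
    return 0.34 / f_c

print(f_c_from_rise_time(12e-9))  # ~28 MHz upper bound from the 12 ns rise time
print(rise_time_from_f_c(35e3))   # ~10 us effective rise time at f_c = 35 kHz
```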
Especially in low-doped silicon substrates, long carrier lifetimes on the order of tens to hundreds of μs that dominate diffusion currents limit the high-speed photoresponse of devices. As a reference, the cut-off frequencies of various devices reported in the literature have been included. The cut-off frequency f_c for these reference devices has been determined from the reported rise-time t_rise as f_c = 0.34/t_rise <cit.>. To the best of our knowledge, the reported rise-/fall-times and f_c of our devices are the fastest reported for a graphene-silicon Schottky photodiode operating in the visible wavelength range to date. The cut-off frequency of the device can potentially be increased further by reducing the area of the device to decrease its capacitance; e.g. the area of the shown device of 60mm^2 can be straightforwardly reduced by several orders of magnitude. Reducing the thickness of the substrate and engineering the doping profile of the silicon substrate to achieve e.g. an n^--n^+ doping profile can reduce response-speed-limiting diffusion currents in devices. It should be noted that in the wavelength region beyond 1.1μm, where silicon is not light absorbing, devices are able to operate at high speeds due to reduced diffusion currents in the silicon substrate. In conclusion, we demonstrated the influence of the silicon substrate and its interfacial oxide layer on the properties of graphene-silicon Schottky photodiodes. Consideration of drift and diffusion currents in the substrate upon light absorption and their dependence on the depletion region length allows modeling the spectral sensitivity of the photodiode and optimizing devices by engineering the doping level and profile as well as the thickness of the substrate. We showed that the interfacial oxide layer leads to an increase in SBH and reduces leakage currents under reverse voltage bias of the diode. The interfacial layer re-grows with increasing lifetime of devices, a factor that needs to be addressed through e.g. appropriate passivation in future devices that require a low SBH. While the interfacial oxide layer increases the SBH and leads to a reduction in photocurrent responsivity, we demonstrated that this interfacial layer leads to an increase in photovoltage responsivity, particularly for low light intensities. Both the broad spectral photodetection range from the near-UV to the short-/mid-IR wavelength regime and the high-speed light detection capability, with rise- and fall-times on the order of tens of ns despite the large junction area, demonstrate the potential of graphene-silicon Schottky photodiodes. Further optimization of the substrate, e.g. dopant profile engineering and tailoring of substrate and interfacial layer thickness, will pave the way for graphene-silicon Schottky photodiodes towards real-world applications.

§ ACKNOWLEDGEMENTS

H.S. acknowledges funding from the Turkish government (MEB-YLSY) and N.U. acknowledges funding from The Royal Thai Army. P.P. acknowledges funding from the Royal Society (RG140411). This work was partially funded by EPSRC contract EP/P015581/1. T.J.E would like to thank K.S. Novoselov for providing experimental equipment.

zhang2005experimental Zhang, Y., Tan, Y.W., Stormer, H.L. and Kim, P., Nature 438, 201-204 (2005)
geim2009graphene Geim, A.K., Science 324, 1530-1534 (2009)
mak2012optical Mak, K.F., Ju, L., Wang, F. and Heinz, T.F., Solid State Communications 152, 1341-1349 (2012)
li2008dirac Li, Z.Q., Henriksen, E.A., Jiang, Z., Hao, Z., Martin, M.C., Kim, P., Stormer, H.L.
and Basov, D.N., Nature Physics 4, 532-535 (2008)
mueller2010graphene Mueller, T., Xia, F. and Avouris, P., Nature Photonics 4, 297-301 (2010)
xia2009ultrafast Xia, F., Mueller, T., Lin, Y.M., Valdes-Garcia, A. and Avouris, P., Nature Nanotechnology 4, 839-843 (2009)
lee2008contact Lee, E.J., Balasubramanian, K., Weitz, R.T., Burghard, M. and Kern, K., Nature Nanotechnology 3, 486-490 (2008)
xu2009photo Xu, X., Gabor, N.M., Alden, J.S., van der Zande, A.M. and McEuen, P.L., Nano Letters 10, 562-566 (2009)
gabor2011hot Gabor, N.M., Song, J.C., Ma, Q., Nair, N.L., Taychatanapat, T., Watanabe, K., Taniguchi, T., Levitov, L.S. and Jarillo-Herrero, P., Science 334, 648-652 (2011)
lemme2011gate Lemme, M.C., Koppens, F.H., Falk, A.L., Rudner, M.S., Park, H., Levitov, L.S. and Marcus, C.M., Nano Letters 11, 4134-4137 (2011)
pospischil2013cmos Pospischil, A., Humer, M., Furchi, M.M., Bachmann, D., Guider, R., Fromherz, T. and Mueller, T., Nature Photonics 7, 892-896 (2013)
nair2008fine Nair, R.R., Blake, P., Grigorenko, A.N., Novoselov, K.S., Booth, T.J., Stauber, T., Peres, N.M. and Geim, A.K., Science 320, 1308-1308 (2008)
mueller2009role Mueller, T., Xia, F., Freitag, M., Tsang, J. and Avouris, P., Physical Review B 79, 245430 (2009)
di2016graphene Di Bartolomeo, A., Physics Reports 606, 1-58 (2016)
li2016graphene Li, X. and Zhu, H., Physics Today 69, 46-51 (2016)
tongay2009graphite Tongay, S., Schumann, T. and Hebard, A.F., Applied Physics Letters 95, 222103 (2009)
dragoman2010graphene Dragoman, D., Dragoman, M. and Plana, R., Journal of Applied Physics 108, 084316 (2010)
chen2011graphene Chen, C.C., Aykol, M., Chang, C.C., Levi, A.F.J. and Cronin, S.B., Nano Letters 11, 1863-1867 (2011)
tongay2012rectification Tongay, S., Lemaitre, M., Miao, X., Gila, B., Appleton, B.R. and Hebard, A.F., Physical Review X 2, 011002 (2012)
chen2012gate Chen, C.C., Chang, C.C., Li, Z., Levi, A.F.J. and Cronin, S.B., Applied Physics Letters 101, 223113 (2012)
yang2012graphene Yang, H., Heo, J., Park, S., Song, H.J., Seo, D.H., Byun, K.E., Kim, P., Yoo, I., Chung, H.J. and Kim, K., Science 336, 1140-1143 (2012)
yim2013characterization Yim, C., McEvoy, N. and Duesberg, G.S., Applied Physics Letters 103, 193106 (2013)
sinha2014ideal Sinha, D. and Lee, J.U., Nano Letters 14, 4660-4664 (2014)
parui2014temperature Parui, S., Ruiter, R., Zomer, P.J., Wojtaszek, M., Van Wees, B.J. and Banerjee, T., Journal of Applied Physics 116, 244505 (2014)
amirmazlaghani2013graphene Amirmazlaghani, M., Raissi, F., Habibpour, O., Vukusic, J. and Stake, J., IEEE Journal of Quantum Electronics 49, 589-594 (2013)
wang2013high Wang, X., Cheng, Z., Xu, K., Tsang, H.K. and Xu, J.B., Nature Photonics 7, 888-891 (2013)
an2013tunable An, X., Liu, F., Jung, Y.J. and Kar, S., Nano Letters 13, 909-916 (2013)
lv2013high Lv, P., Zhang, X., Zhang, X., Deng, W. and Jie, J., IEEE Electron Device Letters 34, 1337-1339 (2013)
an2013metal An, Y., Behnam, A., Pop, E. and Ural, A., Applied Physics Letters 102, 013110 (2013)
liu2014quantum Liu, F. and Kar, S., ACS Nano 8, 10270-10279 (2014)
chen2015high Chen, Z., Cheng, Z., Wang, J., Wan, X., Shu, C., Tsang, H.K., Ho, H.P. and Xu, J.B., Advanced Optical Materials 3, 1207-1214 (2015)
goykhman2016chip Goykhman, I., Sassi, U., Desiatov, B., Mazurski, N., Milana, S., De Fazio, D., Eiden, A., Khurgin, J., Shappir, J., Levy, U. and Ferrari, A.C., Nano Letters 16, 3005-3013 (2016)
li2016high Li, X., Zhu, M., Du, M., Lv, Z., Zhang, L., Li, Y., Yang, Y., Yang, T., Li, X., Wang, K.
and Zhu, H., Small 12, 595-601 (2016)
srisonphan2016hybrid Srisonphan, S., ACS Photonics 3, 1799-1808 (2016)
di2016tunable Di Bartolomeo, A., Giubileo, F., Luongo, G., Iemmo, L., Martucciello, N., Niu, G., Fraschke, M., Skibitzki, O., Schroeder, T. and Lupina, G., 2D Materials 4, 015024 (2016)
riazimehr2016spectral Riazimehr, S., Bablich, A., Schneider, D., Kataria, S., Passi, V., Yim, C., Duesberg, G.S. and Lemme, M.C., Solid-State Electronics 115, 207-212 (2016)
wan2017self Wan, X., Xu, Y., Guo, H., Shehzad, K., Ali, A., Liu, Y., Yang, J., Dai, D., Lin, C.T., Liu, L. and Cheng, H.C., npj 2D Materials and Applications 4 (2017)
shen2017high Shen, J., Liu, X., Song, X., Li, X., Wang, J., Zhou, Q., Luo, S., Feng, W., Wei, X., Lu, S. and Feng, S., Nanoscale 9, 6020-6025 (2017)
riazimehr2017graphene Riazimehr, S., Kataria, S., Bornemann, R., Bolivar, P.H., Ruiz, F.J.G., Godoy, A. and Lemme, M.C., arXiv:1702.01272 (2017)
di2017hybrid Di Bartolomeo, A., Luongo, G., Giubileo, F., Funicello, N., Niu, G., Schroeder, T., Lisker, M. and Lupina, G., 2D Materials 4, 025075 (2017)
tao2017hybrid Tao, L., Chen, Z., Li, X., Yan, K. and Xu, J.B., arXiv:1705.07696 (2017)
li2015carbon Li, X., Lv, Z. and Zhu, H., Advanced Materials 27, 6549-6574 (2015)
li2010graphene Li, X., Zhu, H., Wang, K., Cao, A., Wei, J., Li, C., Jia, Y., Li, Z., Li, X. and Wu, D., Advanced Materials 22, 2743-2748 (2010)
miao2012high Miao, X., Tongay, S., Petterson, M.K., Berke, K., Rinzler, A.G., Appleton, B.R. and Hebard, A.F., Nano Letters 12, 2745-2750 (2012)
an2013optimizing An, X., Liu, F. and Kar, S., Carbon 57, 329-337 (2013)
lin2013graphene Lin, Y., Li, X., Xie, D., Feng, T., Chen, Y., Song, R., Tian, H., Ren, T., Zhong, M., Wang, K. and Zhu, H., Energy & Environmental Science 6, 108-115 (2013)
song2015role Song, Y., Li, X., Mackin, C., Zhang, X., Fang, W., Palacios, T., Zhu, H. and Kong, J., Nano Letters 15, 2104-2110 (2015)
kim2013chemically Kim, H.Y., Lee, K., McEvoy, N., Yim, C. and Duesberg, G.S., Nano Letters 13, 2182-2188 (2013)
fattah2014graphene Fattah, A., Khatami, S., Mayorga-Martinez, C.C., Medina-Sánchez, M., Baptista-Pires, L. and Merkoçi, A., Small 10, 4193-4199 (2014)
singh2014tunable Singh, A., Uddin, M., Sudarshan, T. and Koley, G., Small 10, 1555-1565 (2014)
uddin2014functionalized Uddin, M.A., Singh, A.K., Sudarshan, T.S. and Koley, G., Nanotechnology 25, 125501 (2014)
zhu2015photo Zhu, M., Li, X., Chung, S., Zhao, L., Li, X., Zang, X., Wang, K., Wei, J., Zhong, M., Zhou, K. and Xie, D., Carbon 84, 138-145 (2015)
yu2009tuning Yu, Y.J., Zhao, Y., Ryu, S., Brus, L.E., Kim, K.S. and Kim, P., Nano Letters 9, 3430-3434 (2009)
novoselov2005two Novoselov, K.S., Jiang, D., Schedin, F., Booth, T.J., Khotkevich, V.V., Morozov, S.V. and Geim, A.K., Proceedings of the National Academy of Sciences of the United States of America 102, 10451-10453 (2005)
pasternak2016graphene Pasternak, I., Wesolowski, M., Jozwik, I., Lukosius, M., Lupina, G., Dabrowski, P., Baranowski, J.M. and Strupinski, W., Scientific Reports 6 (2016)
echtermeyer2014photothermoelectric Echtermeyer, T.J., Nene, P.S., Trushin, M., Gorbachev, R.V., Eiden, A.L., Milana, S., Sun, Z., Schliemann, J., Lidorikis, E., Novoselov, K.S. and Ferrari, A.C., Nano Letters 14, 3733-3742 (2014)
morita1990growth Morita, M., Ohmi, T., Hasegawa, E., Kawakami, M. and Ohwada, M., Journal of Applied Physics 68, 1272-1281 (1990)
Smithcontact Smith, J.T., Franklin, A.D., Farmer, D.B. and Dimitrakopoulos, C.D., ACS Nano 7, 3661 (2013)
sze2007physics Sze, S.M. and Ng, K.K.
Physics of Semiconductor Devices, John Wiley & Sons, 2007
morita1990native Morita, M., Ohmi, T., Hasegawa, E. and Teramoto, A., Japanese Journal of Applied Physics 29, L2392 (1990)
Vasupressure Vasu, K.S., Prestat, E., Abraham, J., Dix, J., Kashtiban, R.J., Beheshtian, J., Sloan, J., Carbone, P., Neek-Amal, M., Haigh, S.J., Geim, A.K. and Nair, R.R., Van der Waals pressure and its effect on trapped interlayer molecules, Nature Communications 7, 12168 (2016)
tyagi1984physics Tyagi, M.S., Physics of Schottky barrier junctions, in Metal-Semiconductor Schottky Barrier Junctions and Their Applications, Springer US, 1-60 (1984)
neamen2003semiconductor Neamen, D.A., Semiconductor Physics and Devices, McGraw-Hill Higher Education, 2003
hecht Hecht, E., Optics (4th Ed.), Addison Wesley, 2002
palik Palik, E.D., Handbook of Optical Constants of Solids, Academic, San Diego, 1998
princsemi Van Zeghbroeck, B.V., Principles of Semiconductor Devices and Heterojunctions, Prentice Hall, 2013
ng1980asymmetry Ng, K.K. and Card, H.C., Journal of Applied Physics 51, 2153-2157 (1980)
liuphotonic Liu, J., Photonic Devices, Cambridge University Press, 2009
"authors": [
"H. Selvi",
"N. Unsuree",
"E. Whittaker",
"M. P. Halsall",
"E. W. Hill",
"P. Parkinson",
"T. J. Echtermeyer"
],
"categories": [
"cond-mat.mes-hall",
"physics.app-ph"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170627205734",
"title": "Towards Substrate Engineering of Graphene-Silicon Schottky Diode Photodetectors"
} |
Giuseppe Pica^1 ([email protected]), Eugenio Piasini^1, Daniel Chicharro^1,2 and Stefano Panzeri^1 ([email protected])
^1 Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto (TN) 38068, Italy
^2 Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA

Invariant components of synergy, redundancy, and unique information among three variables
=========================================================================================

In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependencies among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependencies in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that are sufficient to construct any PID lattice. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and relates to the synergistic information the sources carry about the target. The invariant structure is also sufficient to construct the system's entropy, hence it characterizes completely all the interdependencies in the system.

§ INTRODUCTION

Shannon's mutual information <cit.> provides a well-established, widely applicable tool to characterize the statistical relationship between two stochastic variables. Larger values of mutual information correspond to a stronger relationship between the instantiations of the two variables in each single trial. Whenever we study a system with more than two variables, the mutual information between any two subsets of the variables still quantifies the statistical dependencies between these two subsets; however, many scientific questions in the analysis of complex systems require a finer characterization of how all variables simultaneously interact <cit.>. For example, two of the variables, A and B, may carry either redundant or synergistic information about a third variable C <cit.>, but considering the value of the mutual information I((A,B):C) alone is not enough to distinguish these qualitatively different information-carrying modes. To achieve this finer level of understanding, recent theoretical efforts have focused on decomposing the mutual information between two subsets of variables into more specific information components (see e.g. <cit.>).
Nonetheless, a complete framework for the information-theoretic analysis of multivariate systems is still lacking. Here we consider the analysis of trivariate systems, which is complex enough to present the fundamental challenges of going beyond bivariate analyses, and yet simple enough to provide hints about how these challenges might be addressed. Characterizing the fine structure of the interactions among three stochastic variables can improve the understanding of many interesting problems across different disciplines <cit.>. For example, in the study of neural information processing many important questions can be cast as trivariate information analyses. Determining quantitatively how two neurons encode information about an external sensory stimulus <cit.> requires describing the dependencies between the stimulus and the activity of the two neurons. Determining how the stimulus information carried by a neural response relates to the animal's behaviour <cit.> requires the analysis of the simultaneous three-wise dependencies among the stimulus, the neural activity and the subject's behavioral report. More generally, a thorough understanding of even the simplest information-processing systems would require the quantitative description of all the different ways two inputs carry information about one output <cit.>. In systems where legitimate assumptions can be made about which variables act as sources of information and which variable acts as the target of information transmission, the partial information decomposition (PID) <cit.> provides an elegant framework to decompose the mutual information that one or two (source) variables carry about the third (target) variable into a finer lattice of redundant, unique and synergistic information atoms. However, in many systems the a priori classification of variables into sources and target is arbitrary, and limits the description of the distribution of information within the system <cit.>. Furthermore, even when one classification is adopted, the PID atoms do not characterize completely all the possible modes of information sharing between the sources and the target. For example, two sources can carry redundant information about the target irrespective of the strength of the correlations between them and, as a consequence, the PID redundancy atom can be larger than zero even if the sources have no mutual information <cit.>. Hence, the value of the PID redundancy measure cannot distinguish how the correlations between two variables contribute to the information that they share about a third variable. In this paper, we address these limitations by extending the PID framework without introducing further axioms or assumptions about the trivariate structure to analyze. We compare the atoms from the three possible PID lattices that are induced by the three possible choices for the target variable in the system. By tracking how the PID information modes change across different lattices, we move beyond the partial perspective intrinsic to a single PID lattice and unveil the finer structure common to all PID lattices. We find that all lattices can be fully described in terms of a unique minimal set of seven information-theoretic quantities, which is invariant to different classifications of the variables. The first result of this approach is the identification of two nonnegative subatomic components of the redundant information that any pair of variables carries about the third variable.
The first component, which we name source redundancy (SR), quantifies the part of the redundancy which arises from the correlations between the sources. The second component, which we name non-source redundancy (NSR), quantifies the part of the redundancy which is not related to the source correlations. Interestingly, we find that whenever the non-source redundancy is larger than zero, the synergy is also larger than zero. The second result is that the minimal set induces a unique nonnegative decomposition of the full joint entropy H(X,Y,Z) of the system. This allows us to dissect completely the distribution of information of any trivariate system in a general way that is invariant with respect to the source/target classification of the variables. To illustrate the additional insights of this new approach, we finally apply our framework to paradigmatic examples, including discrete and continuous probability distributions. These applications confirm our intuitions and clarify the practical usefulness of the finer PID structure.

§ PRELIMINARIES AND STATE OF THE ART

Williams and Beer proposed an influential axiomatic decomposition of the mutual information I(X: (Y,Z)) that two stochastic variables Y, Z (the sources) carry about a third variable X (the target) into the sum of four nonnegative atoms <cit.>:
* SI(X: {Y;Z}), which is the information about the target that is shared between the two sources (the redundancy);
* UI(X: {Y \ Z}) and UI(X: {Z \ Y}), which are the separate pieces of information about the target that can be extracted from one of the sources, but not from the other;
* CI(X: {Y;Z}), which is the complementary information about the target that is only available when both of the sources are jointly observed (the synergy).
This construction is commonly known as the Partial Information Decomposition (PID). Sums of subsets of the four PID atoms provide the classical mutual information quantities between each of the sources and the target, I(X:Y) and I(X:Z), and the conditional mutual information quantities whereby one of the sources is the conditioned variable, I(X:Z|Y) and I(X:Y|Z). Such relationships are displayed with a color code in Fig. <ref>. The PID decomposition of Ref. <cit.> is based upon a number of axioms that do not univocally determine the value of the four PID atoms. The specific measure proposed in Ref. <cit.> to calculate such values has been questioned as it can lead to unintuitive results <cit.>, and thus many attempts have been devoted to finding alternative measures <cit.> compatible with an extended set of axioms, such as the identity axiom proposed in <cit.>. Other work has studied in more detail the lattice structure that underpins the PID, indicating the duality between information gain and information loss lattices <cit.>. Even though there is no consensus on how to build partial information decompositions in systems with more than two sources, for trivariate systems the measures of redundancy, synergy and unique information defined in Ref. <cit.> have not yet been questioned and have found wide acceptance (in this paper, we will in fact make use of these measures when a concrete implementation of the PID is required). However, even in the trivariate case there are open problems regarding the understanding of the PID atoms in relation to the interdependencies within the system. First, Harder et al.
<cit.> pointed out that the redundant information shared between the sources about the target can intuitively arise from the following two qualitatively different modes of three-wise interdependence:* the source redundancy, which is redundancy which 'must already manifest itself in the mutual information between the sources'[Note that Ref. <cit.> interchangeably refers to the sources as 'inputs': we will discuss this further in Section <ref> when addressing the characterization of source redundancy.];* the mechanistic redundancy, which can be larger than zero even if there is no mutual information between the sources. As pointed out by Harder and colleagues <cit.>, a more precise conceptual and formal separation of these two kinds of redundancy still needs to be achieved, and presents fundamental challenges. The very notion that two statistically independent sources can nonetheless share information about a target was not captured by some earlier definitions of redundancy <cit.>. Further, several studies <cit.> described the property that the PID measures of redundancy can be positive even when there are no correlations between the sources as undesired. On a different note, other authors <cit.> pointed out that the two different notions of redundancy can define qualitatively different modes of information processing in (neural) input-output networks. Other issues were recently pointed out by James and Crutchfield <cit.>, who indicated that the very definition of the PID lattice prevents its use as a general tool for assessing the full structure of trivariate (let alone multi-variate) statistical dependencies. In particular, Ref. <cit.> considered dyadic and triadic systems, which underlie quite interesting and common modes of multivariate interdependencies. They showed that, even though the PID atoms are among the very few measures that can distinguish between the two kinds of systems, a PID lattice cannot allot the full joint entropy H(X,Y,Z) of either system. The decomposition of the joint entropy in terms of information components that reflect qualitatively different interactions within the system has also been subject of recent research <cit.>. In summary, the PID framework, in its current form, does not yet provide a satisfactorily fine and complete description of the distribution of information in trivariate systems. The PID atoms do assess trivariate dependencies better than Shannon's measures, but they cannot quantify interesting finer interdependencies within the system, such as the source redundancy that the sources share about the target. In addition, they are limited to describing the dependencies between the chosen sources and target, thus enforcing a certain perspective on the system that does not naturally suit all systems.§ MORE PID DIAGRAMS UNVEIL FINER STRUCTURE IN THE PID FRAMEWORK To address the open problems described above, we begin by pointing out the feature of the PID lattice that underlies all the issues in the characterization of trivariate systems outlined in Sec. <ref>. As we illustrate in Fig. <ref>, while a single PID diagram involves the mutual information quantities that one or both of the sources (in the figure, Y and Z) carry about the target X, it does not contain the mutual information between the sources I(Y:Z) and their conditional mutual information I(Y:Z|X). This precludes the characterization of source redundancy with a single PID diagram, as it prevents any comparison between the elements of the PID and I(Y:Z). 
Moreover, it also signals that a single PID lattice cannot account for the total entropy H(X,Y,Z) of the system. These considerations suggest that the inability of the PID framework to provide a complete information-theoretic description of trivariate systems is not a consequence of the axiomatic construction underlying the PID lattice. Instead, it follows from restricting the analysis to the limited perspective on the system that is enforced by classifying the variables into sources and target when defining a PID lattice. We thus argue that significant progress can be achieved, without the addition of further axioms or assumptions to the PID framework, if one just considers, alongside the PID diagram in Fig. <ref>, the other two PID diagrams that are induced respectively by labeling Y or Z as the target in the system. When considering the resulting three PID diagrams (Fig. <ref>), the previously missing mutual information I(Y:Z) between the original sources of the left-most diagram is now decomposed into PID atoms of the middle and the right-most diagrams in Fig. <ref>, and the same happens with I(Y:Z|X). In the following we take advantage of this shift in perspective to resolve the finer structure of the PID diagrams and, at the same time, to generalize their descriptive power beyond the current limited framework, where only the information that two (source) variables carry about the third (target) variable is decomposed. More specifically, even though the PID relies on setting a partial point of view about the system, we will show that describing how the PID atoms change when we systematically rotate the choice of the PID target variable effectively overcomes the limitations intrinsic to one PID alone.

§.§ The relationship between PID diagrams with different target selections

To identify the finer structure underlying all the PID diagrams in Fig. <ref>, we first focus on the relationships between the PID atoms of two different diagrams, with the goal of understanding how to move from one perspective on the system to another. The key observation here is that, for each pair of variables in the system, their mutual information and their conditional mutual information given the third variable appear in two of the PID diagrams. This imposes some constraints relating the PID atoms in two different diagrams. For example, if we consider I(X:Y) and I(X:Y|Z), we find that:

I(X:Y) = SI(X:{Y;Z}) + UI(X:{Y\ Z}) = SI(Y:{X;Z}) + UI(Y:{X\ Z}),
I(X:Y|Z) = CI(X:{Y;Z}) + UI(X:{Y\ Z}) = CI(Y:{X;Z}) + UI(Y:{X\ Z}),

where the first and second equality in each equation result from the decomposition of I(X:(Y,Z)) (left-most diagram in Fig. <ref>) and of I(Y:(X,Z)) (middle diagram in Fig. <ref>), respectively. From Eq. <ref> we see that, when the roles of a target and a source are reversed (here, the roles of X and Y), the difference in redundancy is the opposite of the difference in unique information with respect to the other source (here, Z). Similarly, Eq. <ref> shows that the difference in synergy is the opposite of the difference in unique information with respect to the other source. Combining these two equalities, we also see that the difference in redundancy is equal to the difference in synergy. Therefore, the equalities impose relationships across some PID atoms appearing in two different diagrams. These relationships are depicted in Figure <ref>. The eight PID atoms appearing in the two diagrams can be expressed in terms of only six subatoms, due to the constraints of the form of Eqs. <ref> and <ref>.
In particular, to select the smallest nonnegative pieces of information resulting from the constraints, we define:

RSI(X Z↔ Y) ≡ min[SI(X: {Y;Z}), SI(Y: {X;Z})],
RCI(X Z↔ Y) ≡ min[CI(X: {Y;Z}), CI(Y: {X;Z})],
RUI(X Z↔ Y) ≡ min[UI(X:{Y\ Z}), UI(Y:{X\ Z})].

The above terms are called the Reversible Shared Information of X and Y considering Z (RSI(X Z↔ Y); the orange block in Fig. <ref>), the Reversible Complementary Information of X and Y considering Z (RCI(X Z↔ Y); the gray block in Fig. <ref>), and the Reversible Unique Information of X and Y considering Z (RUI(X Z↔ Y); the magenta block in Fig. <ref>). The attribute reversible highlights that, when we reverse the roles of target and source between the two variables at the endpoints of the arrow in RSI, RCI, or RUI (here, X and Y), the reversible pieces of information are still included in the same type of PID atom (redundancy, synergy, or unique information with respect to the third variable). For example, the orange block in Fig. <ref> indicates a common amount of redundancy in both PID diagrams: as such, RSI(X Z↔ Y) contributes both to the redundant information that Y and Z share about X, and to the redundant information that X and Z share about Y. By construction, these reversible components are symmetric in the reversed variables. Note that, when we reverse the roles of two variables, the third variable (here, Z) remains a source and is thus put in the middle of our notation in Eqs. <ref>, <ref> and <ref>. We also define the Irreversible Shared Information IRSI(X Z← Y) between X and Y considering Z (the light blue block in Fig. <ref>) as follows:

IRSI(X Z← Y) ≡ SI(X: {Y;Z}) - RSI(X Z↔ Y).

The attribute irreversible in the above definition indicates that this piece of redundancy is specific to one of the two PIDs alone. More precisely, the uni-directional arrow in IRSI(X Z← Y) indicates that this piece of information is a part of the redundancy with X as a target, but it is not a part of the redundancy with Y as a target[In this paper, directional arrows never represent any kind of causal directionality: the PID framework is only capable of quantifying statistical (correlational) dependencies.]. Correspondingly, at least one of IRSI(X Z← Y) and IRSI(Y Z← X) is always zero. More generally, IRSI quantifies asymmetries between two different PIDs: for example, when moving from the left to the right PID in Fig. <ref>, the light blue block IRSI(X Z← Y) indicates an equivalent amount of information that is lost from the redundancy SI(X:{Y;Z}) and synergy CI(X:{Y;Z}) atoms, and is instead counted as a part of the unique information UI(Y: {X \ Z}) atom. In other words, assuming that the two redundancies are ranked as in Fig. <ref>, we find that:

IRSI(X Z← Y) = SI(X:{Y;Z}) - SI(Y:{X;Z}) = CI(X:{Y;Z}) - CI(Y:{X;Z}) = UI(Y:{X\ Z}) - UI(X:{Y \ Z}).

While the coarser Shannon information quantities that are decomposed in both diagrams in Fig. <ref>, namely I(X : Y) and I(X : Y|Z), are symmetric under the swap X ↔ Y, their PID decompositions (see Eqs. <ref> and <ref>) are not: Eq. <ref> shows that IRSI quantifies the amount of this asymmetry. More precisely, the PID decompositions of I(X : Y) and I(X : Y|Z) will preserve the X ↔ Y symmetry if and only if IRSI(X Z← Y)=IRSI(Y Z← X)=0. Note that, in general, the differences of redundancies, of synergies, and of unique information terms are always constrained by equations such as Eq. <ref>.
Hence, unlike for the reversible measures, we do not need to consider independent notions of irreversible synergy or irreversible unique information. In summary, the four subatoms in Eqs. <ref>, <ref>, <ref> and <ref>, together with the two remaining unique information terms (the black dots in Fig. <ref>), allow us to characterize both PIDs in Fig. <ref> and to understand how the PID atoms change when moving from one PID to another. Note that in all cases the blocks indicate amounts of information, but the interpretation of this information depends on the classification of the variables as target and sources within each diagram.

§.§ Unveiling the finer structure of the PID framework

So far we have examined the relationships among the PID atoms corresponding to two different perspectives we hold about the system, whereby we reverse the roles of target and source between two variables in the system. We have seen that the PID atoms of different diagrams are not independent, as they are constrained by equations of the type of Eqs. <ref> and <ref>. More specifically, the eight PID atoms of two diagrams can be expressed in terms of only six independent quantities, including reversible and irreversible pieces of information. The next question is how many subatoms we need to describe all the three possible PIDs (see Fig. <ref>). Since there are six constraints, three equations of the type of Eq. <ref> and three equations of the type of Eq. <ref>, one may be tempted to think that the twelve PID atoms of all three PID diagrams can be expressed in terms of only six independent quantities. However, the six constraints are not independent: this is most easily seen from the symmetry of the co-information measure <cit.>, which is defined as the mutual information of two variables minus their conditional mutual information given the third (e.g., Eq. <ref> minus Eq. <ref>). The co-information is invariant to any permutation of the variables, and this property highlights that only five of the six constraints are linearly independent. Accordingly, we will now detail how seven subatoms are sufficient to describe the whole set of PIDs: we call these subatoms the minimal subatoms' set of the PID diagrams. In Fig. <ref> we see how the minimal subatoms' set builds the PID diagrams. We assume, without loss of generality, that SI(Y: {X;Z}) ≤ SI(X: {Y;Z}) ≤ SI(Z: {X;Y}). Then, we consider the three possible instances of Eq. <ref> for the three possible choices of the target variable, and we find that the same ordering also holds for the synergy atoms: CI(Y: {X;Z}) ≤ CI(X: {Y;Z}) ≤ CI(Z: {X;Y}). This property is related to the invariance of the co-information: indeed, Ref. <cit.> indicated that the co-information can be expressed as the difference between the redundancy and the synergy within each PID diagram, i.e.

coI(X;Y;Z) = SI(i: {j;k}) - CI(i: {j;k}),

for any assignment of X, Y, Z to i, j, k. These ordering relations are enough to understand the nature of the minimal subatoms' set: we start with the construction of the three redundancies, which can all be expressed in terms of the smallest RSI and two subsequent increments. In Fig. <ref>, these correspond respectively to RSI(X Z↔ Y) = SI(Y: {X;Z}) (orange block), IRSI(X Z← Y) (light blue block) and IRSI(Z Y← X) (yellow block). In parallel, we can construct the three synergies with the smallest RCI and the same increments used for the redundancies. In Fig. <ref>, these correspond respectively to RCI(X Z↔ Y) = CI(Y: {X;Z}) (gray block) and the same two IRSI used before.
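A compact numerical sketch of this construction of the redundancy and synergy atoms is given below (Python; the input atom values are hypothetical placeholders and would in practice be computed with a concrete PID measure, such as the one adopted in this paper):

```python
def redundancy_synergy_from_minimal_set(SI, CI, tol=1e-12):
    """Decompose ordered redundancy/synergy atoms into minimal subatoms.

    SI, CI map each target label to its redundancy/synergy atom and are
    assumed ordered as SI['Y'] <= SI['X'] <= SI['Z'] (relabel otherwise).
    """
    rsi = SI['Y']                 # orange block: smallest RSI
    rci = CI['Y']                 # gray block: smallest RCI
    irsi_xy = SI['X'] - SI['Y']   # light blue block: IRSI(X Z<- Y)
    irsi_zx = SI['Z'] - SI['X']   # yellow block: IRSI(Z Y<- X)
    # The synergies must be built from the *same* increments; this only
    # holds if the inputs respect the co-information constraints.
    assert abs(CI['X'] - (rci + irsi_xy)) < tol
    assert abs(CI['Z'] - (rci + irsi_xy + irsi_zx)) < tol
    return {'RSI': rsi, 'RCI': rci,
            'IRSI(X Z<- Y)': irsi_xy, 'IRSI(Z Y<- X)': irsi_zx}

# Hypothetical atom values satisfying the ordering and the constraints:
print(redundancy_synergy_from_minimal_set(
    SI={'Y': 0.1, 'X': 0.3, 'Z': 0.6},
    CI={'Y': 0.2, 'X': 0.4, 'Z': 0.7}))
```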
To construct the unique information atoms, it is sufficient to further consider the three independent RUI defined by taking all possible permutations of X, Y and Z in Eq. <ref>. In Fig. <ref>, these correspond to RUI(X Z↔ Y) = UI(X:{Y\ Z}) (magenta block), RUI(X Y↔ Z) = UI(Z:{X\ Y}) (brown block), and RUI(Y X↔ Z) = UI(Z:{Y\ X}) (blue block). We thus see that, in total, seven minimal subatoms are enough to build the three PID diagrams of any system. Among these seven building blocks, five are reversible pieces of information, i.e. they contribute to the same kind of PID atom across different PID diagrams; the other two are irreversible pieces of information, which contribute to different kinds of PID atom across different diagrams. The complete minimal set can only be determined when all three PIDs are jointly considered and compared: as shown in Fig.<ref>, pairwise PID comparisons can at most distinguish two subatoms in any redundancy (or synergy), while the three-wise PID comparison discussed above allowed us to discern three subatoms in SI(Z:{X;Y}) (and CI(Z:{X;Y}); see also Fig.<ref>). Importantly, while the definition of a single PID lattice relies on the specific perspective adopted on the system, which labels two variables as the sources and one variable as the target, the decomposition in Fig. <ref> is invariant with respect to the classification of the variables. As described above, it only relies on computing all three PID diagrams and then using the ordering relations of the atoms, without any need to classify the variables a priori. As illustrated in Fig. <ref>, the decomposition of the mutual information and conditional mutual information quantities in terms of the subatoms is also independent of the PID adopted. Our invariant minimal set in Fig. <ref> thus extends the descriptive power of the PID framework beyond the limitations that were intrinsic to considering an individual PID diagram. In the next sections, we will show how the invariant minimal set can be used to identify the part of the redundant information about a target that specifically arises from the mutual information between the sources (the source redundancy), and to decompose the total entropy of any trivariate system. Remarkably, the decomposition in Fig. <ref> does not rely on any extension of Williams and Beer's axioms. We unveiled finer structure underlying the PID lattices just by considering more PID lattices at a time and comparing PID atoms across different lattices. We further remark that the decomposition in Fig. <ref> does not rely in any respect on the specific definition of the PID measures that is used to calculate the PID atoms: it only relies on the axiomatic PID construction presented in Ref. <cit.>.

§ QUANTIFYING SOURCE REDUNDANCY

The structure of the three PID diagrams that was unveiled with the construction in Fig. <ref> enables a finer characterization of the modes of information distribution among three variables than has previously been possible. In particular, we will now address the open problem of quantifying the source redundancy, i.e. the part of the redundancy that 'must already manifest itself in the mutual information between the sources' <cit.>. Consider for example the redundancy SI(X:{Y;Z}) in Fig. <ref>: it is composed of RSI(X Z↔ Y) (orange block) and IRSI(X Z← Y) (light blue block). We can check which of these subatoms are shared with the mutual information of the sources I(Y:Z). To do this, we have to move from the middle PID diagram in Fig.
<ref>, that contains SI(X:{Y;Z}), to any of the other two diagrams, that both contain I(Y:Z). Consistently, in these other two diagrams I(Y:Z) is composed by the same four subatoms (the orange, the light blue, the yellow and the blue block), and the only difference across diagrams is that these subatoms are differently distributed between unique information and redundancy PID atoms. In particular, we can see that both the orange and the light blue block which make up SI(X:{Y;Z}) are contained in I(Y:Z). Thus, whenever any of them is nonzero, we know at the same time that Y and Z share some information about X (i.e., SI(X:{Y;Z})>0) and that there are correlations between Y and Z (i.e., I(Y:Z)>0). Accordingly, in the scenario of Fig. <ref> the entire redundancy SI(X:{Y;Z}) is explained by the mutual information of the sources: all the redundant information that Y and Z share about X arises from the correlations between Y and Z.If we then consider the redundancy SI(Y:{X;Z}), that coincides with the orange block, we also find that it is totally explained in terms of the mutual information between the corresponding sources I(X:Z), which indeed contains an orange block. However, if we consider the third redundancy SI(Z:{X;Y}), that is composed by an orange, a light blue and a yellow block, we find that only the orange and the light blue block contribute to I(X:Y), while the yellow does not. This means that if IRSI(Z Y← X)>0 (yellow block), then X and Y can share information about Z (i.e., SI(Z:{X;Y})>0) even if there is no mutual information between the sources X and Y (i.e., I(X:Y)=0). Following this reasoning, we define the source redundancy that two sources S_1 and S_2 share about a target T as:SR(T:{S_1;S_2})max{RSI(T S_2↔ S_1), RSI(T S_1↔ S_2) }=max{min[SI(T:{S_1;S_2}),SI(S_1:{S_2;T})],min[SI(T:{S_1;S_2}),SI(S_2:{S_1;T})] }.One can easily verify that Eq. <ref> identifies the blocks that belong to both SI(T,{S_1,S_2}) and I(S_1:S_2) in Fig. <ref>, for any choice of sources and target (for instance, T=Z, S_1=X and S_2=Y). This definition can be justified as follows: each RSI measure in Eq. <ref> compares SI(T:{S_1;S_2}) with one of the other two redundancies that are contained in the mutual information between the sources I(S_1:S_2) (namely, SI(S_1:{S_2;T}) and SI(S_2:{S_1;T})). Some of the subatoms included in I(S_1:S_2) are contained in one of these two redundancies, but not in the other, as they move to the unique information mode when we change PID. Therefore, by taking the maximum in Eq. <ref> we ensure that SR(T:{S_1;S_2}) captures all the common subatoms of I(S_1:S_2) and SI(T:{S_1;S_2}). In a complementary way, we can define the non-source redundancy that two sources share about a target:NSR(T:{S_1;S_2})SI(T:{S_1;S_2})-SR(T:{S_1;S_2}).§.§ The difference between source and non-source redundancy Eqs. <ref> and <ref> show how we can split the redundant information that two sources share about a target into two nonnegative information components: when the source-redundancy SR is larger than zero there are also correlations between the sources, while the non-source redundancy NSR can be larger than zero even when the sources are independent. SR is thus seen to quantify the pairwise correlations between the sources that also produce redundant information about the target: this discussion is pictorially summarized in Fig. <ref>. 
In particular, the source redundancy SR is clearly upper-bounded by the mutual information between the sources, i.e.

SR(T:{S_1;S_2}) ≤ I(S_1:S_2).

On the other hand, NSR does not arise from the pairwise correlations between the sources: let us calculate NSR in a paradigmatic example that was proposed in Ref. <cit.> to highlight the subtle possibility that two statistically independent variables can share information about a third variable. Suppose that Y and Z are independent uniform binary random variables, and X is deterministically fixed by the relationship X = Y ∧ Z. Here, SI(X:{Y;Z}) ≈ 0.311 bit (according to different measures of redundancy <cit.>) even if I(Y:Z)=0. Indeed, from our definitions in Eqs. <ref> and <ref> we find that, since SI(Y:{X;Z}), SI(Z:{X;Y}) ≤ I(Y:Z)=0, here NSR(X:{Y;Z})=SI(X:{Y;Z})>0 even though I(Y:Z)=0. We will comment more extensively on this instructive example in Section <ref>. Interestingly, the non-source redundancy NSR is a part of the redundancy that is related to the synergy of the same PID diagram. Indeed, two of the three possible NSR defined in Eq. <ref> are always zero, and the third can be larger than zero if and only if the yellow block in Fig. <ref> is larger than zero. From Fig. <ref> and Fig. <ref> we can thus see that, whenever we find positive non-source redundancy in a PID diagram, the same amount of information (the yellow block) is also present in the synergy of that diagram. Thus, while there is source redundancy if and only if there is mutual information between the sources, the existence of non-source redundancy is a sufficient (though not necessary) condition for the existence of synergy. We can thus interpret NSR as redundant information about the target that implies that the sources carry synergistic information about the target: we give a graphical characterization of NSR in Fig. <ref>. In the specific examples considered in <cit.>, where the underlying causal structure of the system is such that the sources always generate the target, the non-source redundancy can indeed be associated with the notion of 'mechanistic redundancy' that was introduced in that work: the causal mechanisms connecting the target with the sources induce a non-zero NSR that contributes to the redundancy independently of the correlations between the sources. In general, since the causal structure of the analyzed system is unknown, it is impossible to quantify 'mechanistic redundancy' with statistical measures, while it is always possible to quantify and interpret the non-source redundancy as described in Section <ref>. In Section <ref> we will examine concrete examples to show how our definitions of source and non-source redundancy refine the information-theoretic description of trivariate systems, as they quantify qualitatively different ways that two variables can share information about a third. We conclude this section with more general comments about our quantification of source redundancy. We note that the arguments used to define the source redundancy in Sec. <ref> can be equally used to study common or exclusive information components of other PID terms. For example, we can identify the magenta subatom as the component of the mutual information between the sources X and Y that cannot be related to their redundant information about Z. Similarly, we could consider which part of a synergy is related to the conditional mutual information between the sources.
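The definitions of SR and NSR translate directly into code; the sketch below (Python) reproduces the AND-gate example just discussed, taking the quoted redundancy values as given rather than recomputing them with a specific PID measure:

```python
def source_redundancy(SI_T, SI_S1, SI_S2):
    """SR(T:{S1;S2}) per Eq. above, with SI_T = SI(T:{S1;S2}),
    SI_S1 = SI(S1:{S2;T}) and SI_S2 = SI(S2:{S1;T})."""
    return max(min(SI_T, SI_S1), min(SI_T, SI_S2))

def non_source_redundancy(SI_T, SI_S1, SI_S2):
    return SI_T - source_redundancy(SI_T, SI_S1, SI_S2)

# AND gate, X = Y AND Z with independent uniform binary Y and Z:
# SI(X:{Y;Z}) ~ 0.311 bit, while SI(Y:{X;Z}) = SI(Z:{X;Y}) = 0 since both
# are bounded by I(Y:Z) = 0; hence the whole redundancy is non-source.
print(source_redundancy(0.311, 0.0, 0.0))      # -> 0.0
print(non_source_redundancy(0.311, 0.0, 0.0))  # -> 0.311
```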
§ DECOMPOSING THE JOINT ENTROPY OF A TRIVARIATE SYSTEM Understanding how information is distributed in trivariate systems should also provide a descriptive allotment of all parts of the joint entropy H(X,Y,Z) <cit.>. For comparison, Shannon's mutual information enables a semantic decomposition of the bivariate entropy H(X,Y) in terms of univariate conditional entropies and I(X:Y), which quantifies shared fluctuations (or covariations) between the two variables <cit.>:H(X,Y)=I(X:Y)+H(X|Y)+H(Y|X).However, in spite of recent efforts <cit.>, a univocal descriptive decomposition of the trivariate entropy H(X,Y,Z) is still missing to date. Since the PID axioms in Ref. <cit.> decompose mutual information quantities, one might hope that the PID atoms could also provide a descriptive entropy decomposition. Yet, at the beginning of Section <ref>, we pointed out that a single PID lattice does not include the mutual information between the sources and their conditional mutual information given the target: this suggests that a single PID lattice cannot in general contain the full H(X,Y,Z). More concretely, Ref. <cit.> has recently suggested precise examples of trivariate dependencies where a single PID lattice cannot account for, and thus describe the parts of, the full H(X,Y,Z). These examples compare the dyadic and triadic dependencies described in Fig. <ref>. Both kinds of dependencies underlie common modes of information sharing among three and more variables <cit.>. Ref. <cit.> remarked that the atoms of a single PID diagram are indeed able to distinguish between dyadic and triadic dependencies, but that such atoms only sum up to two of the three bits of the full H(X,Y,Z). We will now show how the missing allotment of the third bit of entropy in those systems is not due to intrinsic limitations of the PID axioms, but just to the limitations of considering a single PID diagram at a time — the common practice in the literature so far. More generally, we will be able to allot and describe the full entropy H(X,Y,Z) of any trivariate system in terms of the novel finer structure unveiled in Section <ref>.§.§ The finer structure of the entropy H(X,Y,Z)The minimal subatoms' set that we illustrated in Fig. <ref> allowed us to decompose all three PID lattices of a generic system. However, to fully describe the distribution of information in trivariate systems, we also wish to find a generalization of Eq. <ref> to the trivariate case, i.e. to decompose the full trivariate entropy H(X,Y,Z) in terms of univariate conditional entropies and PID quantities. With this goal in mind, we first subtract from H(X,Y,Z) the terms which describe statistical fluctuations of only one variable (conditioned on the other two). The sum of these terms was indicated as H_(1) in Ref. <cit.>, and there quantified asH_(1)=H(X|Y,Z)+H(Y|X,Z)+H(Z|Y,X).This subtraction is useful because H_(1) is a part of the total entropy which does not overlap with any of the 12 PID atoms in Fig. <ref>. The remaining entropy H(X,Y,Z)-H_(1) was defined as the dual total correlation in Ref. <cit.> and recently considered in Ref. <cit.>:DTC≡H(X,Y,Z)-H_(1).DTC quantifies joint statistical fluctuations of more than one variable in the system. A simple calculation yieldsDTC=I(X:Y|Z)+I(Y:Z|X)+I(X:Z|Y)+ coI(X;Y;Z),which is manifestly invariant under permutations of X, Y and Z, and shows that DTC can be written as a sum of some of the 12 PID atoms. 
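Both H_(1) and DTC are straightforward to evaluate from any tabulated joint distribution, since each univariate conditional entropy is a difference of joint entropies. A minimal Python sketch (ours, for illustration), applied here to the AND-gate distribution of the previous section:

import math
from itertools import product

def H(p, idx):
    m = {}
    for k, pk in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + pk
    return -sum(q * math.log2(q) for q in m.values() if q > 0)

def H1_and_DTC(p):
    Hxyz = H(p, (0, 1, 2))
    # H_(1) = H(X|Y,Z) + H(Y|X,Z) + H(Z|X,Y), each written as the
    # difference between the full joint entropy and a pair entropy.
    H1 = sum(Hxyz - H(p, pair) for pair in ((1, 2), (0, 2), (0, 1)))
    return H1, Hxyz - H1

# Example: X = Y AND Z with independent uniform inputs
# gives H_(1) = 1 bit and DTC = 1 bit.
p = {(y & z, y, z): 0.25 for y, z in product((0, 1), repeat=2)}
print(H1_and_DTC(p))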
For example, expressing the co-information as the difference I(X:Z) - I(X:Z|Y), we can arbitrarily use the four atoms from the left-most diagram in Fig. <ref> to decompose the sum I(X:Y|Z)+I(X:Z) and then add UI(Y:{Z\ X})+CI(Y:{X;Z}) from the middle diagram to decompose I(Y:Z|X). If we then plug this expression of DTC in Eq. <ref>, we achieve a decomposition of the full entropy of the system in terms of H_(1) and PID quantities: H(X,Y,Z) = H_(1)+SI(X: {Y;Z} )+UI(X: {Y \ Z} ) + UI(X: {Z \ Y} )++ CI(X: {Y;Z} )+ UI(Y:{Z\ X})+CI(Y:{X;Z}),which provides a nonnegative decomposition of the total entropy of any trivariate system. However, this decomposition is not unique, since the co-information can be expressed in terms of different pairs of conditional and unconditional mutual informations, according to Eq. <ref>. This arbitrariness strongly limits the descriptive power of this kind of entropy decompositions, because the PID atoms on the RHS can only be interpreted within individual PID perspectives. To address this issue, we construct a less arbitrary entropy decomposition by using the invariant minimal subatoms' set that was presented in Fig. <ref>: importantly, that set can be interpreted without specifying an individual, and thus partial, PID point of view that we hold about the system. Thus, after we name the variables of the system such that SI(Y:{X;Z})≤ SI(X:{Y;Z}) ≤ SI(Z:{X;Y}), we express the coarser PID atoms in Eq. <ref> in terms of the minimal set to obtain:H(X,Y,Z) -H_(1)= RSI(Y Z↔ X)+ 2RCI(Y Z↔ X)++ RUI(X Z↔ Y) + RUI(Y X↔ Z) + RUI(X Y↔ Z) ++ 2IRSI(Z Y← X) + 3IRSI(X Z← Y).Unlike Eq. <ref>, the entropy decomposition expressed in Eq. <ref> and illustrated in Fig. <ref> fully describes the distribution of information in trivariate systems without the need of a specific perspective about the system. Importantly, this decomposition is unique: even though the co-information can be expressed in different ways in terms of conditional and unconditional mutual informations, in terms of the subatoms it is uniquely represented as the orange block minus the gray block (see Fig. <ref>). Similarly, the conditional mutual information terms of Eq. <ref> are composed of the same blocks independently of the PID, as highlighted in Fig. <ref>.§.§ Describing H(X,Y,Z) for dyadic and triadic systems To test the usefulness of the finer entropy decomposition in Eq. <ref>, we now compute its terms for the dyadic and the triadic dependencies considered in Ref. <cit.> and defined in Figure <ref>. In both cases H_(1)=0. For the dyadic system, there are only three positive quantities in the minimal set: the three reversible unique informations RUI(X Y↔ Z)=RUI(Y X↔ Z)=RUI(X Z↔ Y)=1 bit. For the triadic system, there are only two positive quantities in the minimal set: RSI(X Z↔ Y)=RCI(X Z↔ Y)=1 bit, but RCI(X Z↔ Y) is counted twice in the DTC. We illustrate the resulting entropy decompositions, according to Eq. <ref>, in Fig. <ref>. The decompositions in Fig. <ref> enable a clear interpretation of how information is finely distributed within dyadic and triadic dependencies. The three bits of the total H(X,Y,Z) in the dyadic system are seen to be distributed equally among unique information modes: each variable contains 1 bit of unique information with respect to the second variable about the third variable. Further, these unique information terms are all reversible, which reflects the symmetry of the system under pairwise swapping of the variables. 
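The entropic bookkeeping for these two systems can be reproduced numerically. The sketch below is ours and assumes the standard construction of these systems (three shared fair bits for the dyadic case; one shared bit plus a three-wise XOR constraint for the triadic case), which to our understanding matches the dependencies of Fig. <ref>; both yield H(X,Y,Z)=3 bits, H_(1)=0 and DTC=3 bits.

import math
from itertools import product

def H(p, idx):
    m = {}
    for k, pk in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + pk
    return -sum(q * math.log2(q) for q in m.values() if q > 0)

def summary(p):
    Hxyz = H(p, (0, 1, 2))
    H1 = sum(Hxyz - H(p, pair) for pair in ((1, 2), (0, 2), (0, 1)))
    return Hxyz, H1, Hxyz - H1        # H(X,Y,Z), H_(1), DTC

# Dyadic: X=(r1,r2), Y=(r2,r3), Z=(r3,r1) from three fair bits,
# so each pair of variables shares exactly one bit.
dyadic = {}
for r1, r2, r3 in product((0, 1), repeat=3):
    dyadic[((r1, r2), (r2, r3), (r3, r1))] = 1 / 8

# Triadic: a shared bit d plus an XOR constraint a ^ b ^ c = 0.
triadic = {}
for a, b, d in product((0, 1), repeat=3):
    triadic[((a, d), (b, d), (a ^ b, d))] = 1 / 8

print(summary(dyadic))     # (3.0, 0.0, 3.0)
print(summary(triadic))    # (3.0, 0.0, 3.0)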
This description provides a simple and accurate summary of the total entropy of the dyadic system, which matches the dependency structure illustrated in Fig. <ref>. The three bits of the total H(X,Y,Z) in the triadic system consist of one bit of the smallest reversible redundancy and two bits of the smallest reversible synergy, since the latter appears twice in H(X,Y,Z). Again, the reversible nature of these pieces of information reflects the symmetry of the system under pairwise swapping of the variables. Further, the bit of reversible redundancy represents the bit of information that is redundantly available to all three variables, while the two bits of reversible synergy are due to the three-wise XOR structure (see Fig. <ref>). Why does the XOR structure provide two bits of synergistic information? Because if X=Y ⊕ Z then the only positive quantity in the set of subatoms in Fig. <ref>b is the smallest RCI, which however appears twice in the entropy H(X,Y,Z). Importantly, these two bits of synergy do not come from the same PID diagram: our entropy decomposition in Eq. <ref> could account for both bits only because it fundamentally relies on cross-comparisons between different PID diagrams, as illustrated in Fig. <ref>. § APPLICATIONS OF THE FINER STRUCTURE OF THE PID FRAMEWORK The aim of this Section is to show the additional insights that the finer structure of the PID framework, unveiled in Section <ref> and Fig. <ref>, can bring to the analysis of trivariate systems. We examine paradigmatic examples of trivariate systems and calculate the novel PID quantities of source and non-source redundancy that we described in Sec. <ref>. Most of these examples have been considered in the literature <cit.> to validate the definitions, or to suggest interpretations, of the PID atoms. We also discuss how SR matches the notion of source redundancy introduced in Ref. <cit.> and discussed in Ref. <cit.>. Finally, we suggest and motivate a practical interpretation of the reversible redundancy subatom RSI. Even though our definitions of the minimal subatoms' set in Fig. <ref> only rely on Williams and Beer's axioms, some of the examples below will require a specific definition of the PID atoms that goes beyond those axioms. In those cases, our computations rely on the definitions of PID that were proposed in Ref. <cit.>, which are the most widely accepted in the literature for trivariate systems. Where numerical computations of the PID atoms are involved, they have been performed with a software package that will be publicly released upon publication of the present work. §.§ Computing source and non-source redundancy§.§.§ Copying — the redundancy arises entirely from source correlationsConsider a system where Y and Z are random binary variables that are correlated according to a control parameter λ <cit.>. For example, consider a uniform binary random variable W that 'drives' both Y and Z with the same strength λ. More precisely, p(y|w)=λ/2+(1-λ) δ_yw and p(z|w)=λ/2+(1-λ) δ_zw <cit.>. This system is completed by taking X=(Y,Z), i.e. a two-bit random variable that reproduces faithfully the joint outcomes of the generating variables Y, Z. We consider the inputs Y and Z as the PID sources and the output X as the PID target, thus selecting the left-most PID diagram in Fig. <ref>. Fig. <ref> shows our calculations of the full redundancy SI(X: {Y;Z}), the source redundancy SR(X: {Y;Z}) and the non-source redundancy NSR(X: {Y;Z}), based on the definitions in Ref. <cit.>. 
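In the copying example the target is a faithful copy of the joint inputs, so (as noted again in the dice-sum section below) the full redundancy reduces to I(Y:Z), and the redundancy curve can be traced from the input correlations alone. A short sketch (ours) builds p(y,z) from the hidden driver W and tabulates I(Y:Z) against λ:

import math
from itertools import product

def joint_inputs(lam):
    """p(y,z) from a hidden uniform bit W driving Y and Z:
    p(y|w) = lam/2 + (1-lam)*delta_{yw}, and likewise for z."""
    cond = lambda a, w: lam / 2 + (1 - lam) * (a == w)
    p = {}
    for y, z in product((0, 1), repeat=2):
        p[(y, z)] = sum(0.5 * cond(y, w) * cond(z, w) for w in (0, 1))
    return p

def I_YZ(p):
    py = {a: p[(a, 0)] + p[(a, 1)] for a in (0, 1)}
    pz = {a: p[(0, a)] + p[(1, a)] for a in (0, 1)}
    return sum(q * math.log2(q / (py[y] * pz[z]))
               for (y, z), q in p.items() if q > 0)

for lam in (0.0, 0.5, 1.0):
    print(lam, I_YZ(joint_inputs(lam)))   # 1 bit at lam=0, 0 at lam=1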
The parameter λ is varied between λ=0, corresponding to Y=Z, and λ=1, corresponding to Y ⊥ Z. Since NSR(X: {Y;Z})=0 for any 0≤λ≤ 1, we interpret that all the redundancy SI(X: {Y;Z}) arises from the correlations between Y and Z (which are tuned with λ). This is indeed compatible with the discussion in Ref. <cit.>, where the authors argued that in the 'copying' example the entire redundancy should be already apparent in the sources. §.§.§ AND gate: the redundancy is not entirely related to source correlations Consider a system where the correlations between two binary random variables, the inputs Y and Z, are described by the control parameter λ as in <ref>, but the output X is determined by the AND function as X=Y ∧ Z <cit.>. As the causal structure of the system would suggest, we consider the inputs Y and Z as the PID sources and the output X as the PID target, thus selecting the left-most PID diagram in Fig. <ref>. Fig. <ref> shows our calculations of the full redundancy SI(X: {Y;Z}), the source redundancy SR(X: {Y;Z}) and the non-source redundancy NSR(X: {Y;Z}), based on the definitions in Ref. <cit.>. SR and NSR now show a non-trivial behavior as a function of the Y-Z correlation parameter λ. If λ=0 and thus Y=Z, the full redundancy is made up entirely of source redundancy — trivially, both SI(X: {Y;Z}) and SR(X: {Y;Z}) equal the mutual information I(Y:Z)=H(Y)=1 bit. When λ increases, the full redundancy decreases monotonically to its minimum value of ≈ 0.311 bit for λ=1 (when Y ⊥ Z). Importantly, SR(X: {Y;Z}) decreases monotonically as a function of λ to its minimum value of zero when λ=1: this behavior is indeed expected from a measure that quantifies correlations between the sources that also produce redundant information about the target (see Section <ref>). On the other hand, NSR(X: {Y;Z})>0 for λ>0, and it increases as a function of λ. When λ=1, i.e. when Y ⊥ Z, NSR corresponds to the full redundancy. This is compatible with our description of non-source redundancy (see Section <ref>) as redundancy that is not related to the source correlations; indeed, NSR>0 implies that the sources also carry synergistic information about the target (here, due to the relationship X=Y ∧ Z). NSR thus also quantifies the notion of mechanistic redundancy that was introduced in Ref. <cit.> with reference to this scenario.§.§.§ Dice sum: tuning irreversible redundancy Consider a system where Y and Z are two uniform random variables, each representing the outcome of a die throw <cit.>. A parameter λ controls the correlations between the two dice: p(y,z)=λ/36 + (1-λ)/6 δ_yz. Thus, for λ=0 the dice throws always match, while for λ=1 the outcomes are completely independent. Further, the output X combines each pair (y,z) of input outcomes as x=y + α z, where α∈{1,2,3,4,5,6}. This example was suggested in Ref. <cit.> specifically to point out the conceptual difficulties in quantifying the interplay between the redundancy and the source correlations, which we addressed with the identification of SR and NSR (see Section <ref>). Harder et al. calculated redundancy by using their own proposed measure I_red, which also abides by the PID axioms in Ref. <cit.> but is different from the measure SI(X:{Y;Z}) introduced in Ref. <cit.>. However, Ref. 
<cit.> also calculated SI(X:{Y;Z}) for this example: they showed that the differences between SI(X:{Y;Z}) and I_red in this example are only quantitative (and there is no difference at all for some values of α), while the qualitative behavior of both measures as a function of λ is very similar. Fig. <ref> shows our calculations of the full redundancy SI(X: {Y;Z}), the source redundancy SR(X: {Y;Z}) and the non-source redundancy NSR(X: {Y;Z}), based on the definitions in Ref. <cit.>. We display the two 'extreme' cases α=1, α=6 for illustration. With α=6, X is isomorphic to the joint variable (Y,Z): for any λ, this implies on one hand SI(X:{Y;Z})=SI((Y,Z):{Y;Z})=I(Y:Z), and on the other UI(Y:{Z\ X})=0 and thus SI(Y:{X;Z})=I(Y:Z) (see Eq. <ref>). According to Eq. <ref>, we thus find that the source redundancy SR(X:{Y;Z}) saturates, for any λ, the general inequality in Eq. <ref>: all correlations between the inputs Y, Z also produce redundant information about X (see Section <ref>). Further, according to Eq. <ref>, we find NSR(X:{Y;Z})=0 for any λ (see Fig. <ref>): we thus interpret that all the redundancy SI(X:{Y;Z}) arises from correlations between the inputs. Instead, if we fix λ and decrease α, the two inputs Y, Z are more and more symmetrically combined in the output X: with α=1, the pieces of information respectively carried by each input about the output overlap maximally. Correspondingly, the full redundancy SI(X:{Y;Z}) increases <cit.>. However, keeping λ fixed does not change the inputs' correlations. Thus, we expect that the relative contribution of the inputs' correlations to the full redundancy SI(X:{Y;Z}) should decrease proportionally. Indeed, in Fig. <ref> we find NSR(X:{Y;Z})>0 for λ>0, which signals that a part of SI(X:{Y;Z}) is not related to the inputs' correlations. We finally note that also in this paradigmatic example the splitting of the redundancy into SR and NSR addresses the challenge of separating the two kinds of redundancy outlined in Ref. <cit.>.§.§.§ Trivariate jointly Gaussian systems Barrett considered in detail the application of the PID to trivariate jointly Gaussian systems (X,Y,Z) in which the target is univariate <cit.>: he showed that several specific proposals for calculating the PID atoms all converge, in this case, to the following measure of redundancy:SI(X: {Y;Z})=min[I(X:Y), I(X:Z)].We note that Eq. <ref> highlights the interesting property that, in trivariate Gaussian systems with a univariate target, the redundancy is as large as it can be, since it saturates the general inequalities SI(X: {Y;Z})≤ I(X:Y), I(X:Z). Direct application of our definitions in Eqs. <ref> and <ref> to such systems yields:SR(X: {Y;Z}) = min[I(X:Y), I(X:Z), I(Y:Z)], NSR(X: {Y;Z}) = min[I(X:Y), I(X:Z)] - min[I(X:Y), I(X:Z), I(Y:Z)].Thus, we find that in these systems the source redundancy is also as large as it can be, since it also saturates the general inequalities SR(X: {Y;Z}) ≤ I(X:Y), I(X:Z), I(Y:Z) (which follow immediately from its definition in Eq. <ref>). Further, combining Eq. <ref> with Eq. <ref> gives: SR(X: {Y;Z}) = SI(X: {Y;Z}) if I(Y:Z) ≥ SI(X: {Y;Z}), and SR(X: {Y;Z}) = I(Y:Z) otherwise; correspondingly, NSR(X: {Y;Z}) = 0 if I(Y:Z) ≥ SI(X: {Y;Z}), and NSR(X: {Y;Z}) = SI(X: {Y;Z}) - I(Y:Z) otherwise. This identification of source redundancy, which quantifies pairwise correlations between the sources that also produce redundant information about the target (see Section <ref>), provides more insight into the distribution of information in Gaussian systems. 
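For jointly Gaussian variables these quantities are directly computable from the pairwise correlation coefficients through the standard identity I = -(1/2) log2(1-ρ²). The sketch below is ours; the correlation values are illustrative and chosen to form a valid covariance matrix.

import math

def gauss_mi(rho):
    """Mutual information (bits) of two jointly Gaussian variables
    with correlation coefficient rho."""
    return -0.5 * math.log2(1 - rho ** 2)

def gaussian_SR_NSR(rho_xy, rho_xz, rho_yz):
    """SR and NSR for a jointly Gaussian system with univariate target X
    and sources {Y; Z}, per the formulas in the text."""
    I_xy, I_xz, I_yz = map(gauss_mi, (rho_xy, rho_xz, rho_yz))
    SI = min(I_xy, I_xz)              # Barrett's redundancy
    SR = min(I_xy, I_xz, I_yz)
    return SR, SI - SR                # (SR, NSR)

# Weakly correlated sources: a sizable part of the redundancy is
# non-source redundancy, which implies synergy per the discussion above.
print(gaussian_SR_NSR(0.8, 0.7, 0.2))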
Indeed, the property that source redundancy is maximal indicates that the correlations between any pair of source variables (for example, {Y; Z} as considered above) produce as much redundant information as possible about the corresponding target (X as considered above). Accordingly, when I(Y:Z)< SI(X: {Y;Z}), the redundancy also includes some non-source redundancy NSR(X: {Y;Z})>0 that implies the existence of synergy, i.e. CI(X:{Y;Z})>0.§.§ RSI quantifies information between two variables that also passes monotonically through the third In this section we discuss a practical interpretation of the reversible redundancy subatom RSI defined in Eq. <ref>. We note that RSI(X Z↔ Y) appears both in SI(X:{Y;Z}) and in SI(Y:{X;Z}), i.e. it quantifies a common amount of information that Z shares with each of the two other variables about the third variable. We shorten this description into the statement that RSI(X Z↔ Y) quantifies 'information between X and Y that also passes through Z'. Reversible redundancy is further characterized by the following proposition: RSI(X (Z,Z')↔ Y) ≥ RSI(X Z↔ Y). This holds because both SI(X:{Y;(Z,Z')})≥ SI(X:{Y;Z}) and SI(Y:{X;(Z,Z')})≥ SI(Y:{X;Z}) <cit.>. Thus, RSI(X Z↔ Y)=min[ SI(X:{Y;Z}), SI(Y:{X;Z})] ≤min [SI(X:{Y;(Z,Z')}), SI(Y:{X;(Z,Z')})]= RSI(X (Z,Z')↔ Y). Indeed, the fact that RSI always increases whenever we expand the middle variable corresponds to the increased capacity of the entropy of the middle variable to host information between the endpoint variables. We discuss this interpretation of RSI by examining several examples, where we can motivate a priori our expectations about this novel mode of information sharing. §.§.§ Markov chains Consider the most generic Markov chain X → Z → Y, defined by the property that p(x,y|z)=p(x|z)p(y|z), i.e. that X and Y are conditionally independent given Z. The Markov structure allows us to formulate a clear a priori expectation about the amount of information between the endpoints of the chain, X and Y, that also 'passes through the middle variable Z': this information should clearly equal I(X:Y), because whatever information is established between X and Y must pass through Z. Indeed, the Markov property I(X:Y | Z)=0 implies, as we see immediately from Fig. <ref>, that SI(X:{Y;Z})=SI(Y:{X;Z})=I(X:Y). By virtue of Eqs. <ref> and <ref>, in accordance with our expectation, we find RSI(X Z↔ Y)=I(X:Y), which holds true independently of the marginal distributions of X, Y and Z. Notably, the symmetry of RSI(X Z↔ Y) under swap of the endpoint variables X ↔ Y is also compatible with the property that X → Z → Y is a Markov chain if and only if Y → Z → X is a Markov chain. In words, whatever information flows from X through Z to Y equals the information flowing from Y through Z to X. More generally, the Markov property I(X:Y | Z)=0 implies that RCI(Y Z↔ X)=IRSI(X Z← Y)=RUI(X Z↔ Y)=0: thus, only four of the seven subatoms of the minimal set in Fig. <ref> can be larger than zero. The three PIDs of the system, decomposed with the minimal set, are shown in Fig. <ref>. In particular, we see that RSI(X Y↔ Z)=I(X:Y) as well, thus matching our expectation that in the Markov chain X → Z → Y the information between X and Z that also passes through Y still equals I(X:Y). We finally note that none of the results regarding Markov chains depends on specific definitions of the PID atoms: they were derived only on the basis of the PID axioms in Ref. 
<cit.>.§.§.§ Two parallel communication channels Consider five binary uniform random variables X_1, X_2, Y_1, Y_2, Z with three parameters 0 ≤λ_1, λ_2, λ_3 ≤ 1 controlling the correlations X_1λ_1↔ Z λ_2↔ Y_1, X_2λ_3↔ Y_2 (in the same way λ controls the Y λ↔ Z correlations in <ref>). We consider the trivariate system (X, Y, Z) with X=(X_1,X_2) and Y=(Y_1,Y_2) (see Fig. <ref>). We intuitively expect that the information between X and Y that also passes through Z, in this case, should equal I(X_1:Y_1), which in general will be smaller than I(X:Y)=I(X_1:Y_1)+I(X_2:Y_2). Indeed, we computed the PID atoms with the definitions in Ref. <cit.> over a fine discretization of the (λ_1, λ_2, λ_3)-space and we always found that RSI(X Y↔ Z)=I(X_1:Y_1).§.§.§ Other examples To further describe the interpretation of RSI(X Z↔ Y) as information between X and Y that also passes through Z, we here reconsider the examples, among those discussed before Section <ref>, where we can formulate intuitive expectations about this information sharing mode. In the dyadic system described in Fig. <ref>, our expectation is that there should be no information between two variables that also passes through the third: indeed, RSI(X Z↔ Y) = 0 in this case. Instead, in the triadic system described in Fig. <ref>, we expect that the information between two variables that also passes through the third should equal the information that is shared among all three variables, which amounts to 1 bit. Indeed, we find RSI(X Z↔ Y) = 1 bit. In the 'copying' example in Section <ref> X=(Y,Z), thus we expect that I(Y:Z) corresponds to information between Y, Z that also passes through X, but also to information between X and Z that also passes through Y. Indeed, RSI(X Z↔ Y) = RSI(X Y↔ Z) = I(Y:Z). We remark that all the values of RSI in these examples depend on the specific definitions of the PID atoms in Ref. <cit.>. § DISCUSSION The Partial Information Decomposition (PID) pioneered by Williams and Beer has provided an elegant construction to quantify, with information theory, different ways two stochastic variables can carry information about a third variable <cit.>. In particular, it has enabled consistent definitions of synergy, redundancy and unique information components among three stochastic variables <cit.>. More generally, it has generated considerable interest as it addresses the difficult yet practically important problem of extending information theory beyond the classic bivariate measures of Shannon to fully characterize multivariate dependencies. However, the axiomatic PID construction, as originally formulated by Williams and Beer, fundamentally relied on the possibly arbitrary classification of the three variables into sources and target, and this partial perspective prevented a complete and general information-theoretic description of trivariate systems. More specifically, the original PID framework could not quantify some important modes of information sharing, such as source redundancy <cit.>, and could not allot the full joint entropy of some important trivariate systems <cit.>. The work presented here addresses these issues by extending the original PID framework in two respects. First, we decomposed the original PID atoms in terms of finer information subatoms with a well-defined interpretation that is invariant to different variables' classifications. Then, we constructed an extended framework to completely decompose the distribution of information within any trivariate system. 
Importantly, our formulation did not require the addition of further axioms to the original PID construction. We proposed that distinct PIDs for the same system, corresponding to different target selections, should be evaluated and then compared to identify how the decomposition of information changes across different perspectives. More specifically, we identified reversible pieces of information (RSI, RUI, RCI) that contribute to the same kind of PID atom if we reverse the roles of target and source between two variables. The complementary subatomic components of the PID lattices are the irreversible pieces of information (IRSI), which contribute to different kinds of PID atom for different target selections. These subatoms thus measure asymmetries between different decompositions of Shannon quantities pertaining to two different PIDs of the same system, and such asymmetries reveal the additional detail with which the PID atoms assess trivariate dependencies as compared to the coarser and more symmetric Shannon information quantities. The crucial result of this approach was unveiling the finer structure underlying the PID lattices: we showed that an invariant minimal set of seven information subatoms is sufficient to decompose the three PIDs of any trivariate system. In the remainder of this section, possible uses of these subatoms and their implications for the understanding of systems of three variables are discussed.§.§ Use of the subatoms to map the distribution of information in trivariate systemsOur minimal subatoms' set was first used to characterize more finely the distribution of information among three variables. We clarified the interplay between the redundant information shared between two variables A and B about a third variable C, on one side, and the correlations between A and B, on the other. We decomposed the redundancy into the sum of source redundancy SR, which quantifies the part of the redundancy which arises from the pairwise correlations between the sources A and B, and non-source redundancy NSR, which can be larger than zero even if the sources A and B are statistically independent. Interestingly, we found that NSR quantifies the part of the redundancy which implies that A and B also carry synergistic information about C. The separation of these qualitatively different components of redundancy promises to be useful in the analysis of any complex system where several inputs are combined to produce an output <cit.>. Then, we used our minimal subatoms' set to extend the descriptive power of the PID framework in the analysis of any trivariate system. We constructed a general, unique, and nonnegative decomposition of the joint entropy H(X,Y,Z) in terms of information-theoretic components that can be clearly interpreted without arbitrary variable classifications. This construction parallels the decomposition of the bivariate entropy H(X,Y) in terms of Shannon's mutual information I(X:Y). We demonstrated the descriptive power of this approach by decomposing the complex distribution of information in dyadic and triadic systems, which was shown not to be possible within the original PID framework <cit.>. We gave practical examples of how the finer structure underlying the PID atoms provides more insight into the distribution of information within important and well-studied trivariate systems. 
In this spirit, we put forward a practical interpretation of the reversible redundancy RSI, and future work will address additional interpretations of the components of the minimal subatoms' set.§.§ Possible extensions of the formalism to multivariate systems with many sources The insights that derive from our extension of the PID framework suggest that the PID lattices could also be useful to characterize the statistical dependencies in multivariate systems with more than two sources. Our approach does not rely on the adoption of specific PID measures, but only on the axiomatic construction of the PID lattice. Thus, it can be immediately extended to the multivariate case by embedding trivariate lattices within larger systems' lattices <cit.>: a further breakdown of the minimal subatoms' set could be obtained if the current system were embedded as part of a bigger system. Further, when systems with more than two sources are considered, the definition of source redundancy might be extended so as to determine which subatoms of a redundancy can be explained by dependencies among the sources (for example, by replacing the mutual information between two sources with a measure of the overall dependencies among all sources, such as the total correlation introduced in Ref. <cit.>). More generally, the idea of comparing different PID diagrams that partially decompose the same information can also be generalized to identify finer structure underlying higher-order PID lattices with different numbers of variables. These identifications might also help address specific questions about the distribution of information in complex multivariate systems. §.§ Potential implications for systems biology and systems neuroscience A common problem in systems biology is to characterize how the function of the whole biological system is shaped by the dependencies among its many constituent biological variables. In many cases, ranging from gene regulatory networks <cit.> to metabolic pathways <cit.> and to systems neuroscience <cit.>, an information-theoretic decomposition of how information is distributed and processed within different parts of the system would allow a model-free characterization of these dependencies. The work discussed here can be used to shed more light on these issues by making it possible to tease apart qualitatively different modes of interaction, as a first necessary step to understanding the causal structure of the observed phenomena <cit.>. In systems neuroscience, the decomposition introduced here may be important for studying specific and timely questions about neural information processing. This work can contribute to the study of neural population coding, that is, the study of how the concerted activity of many neurons encodes information about ecologically relevant variables such as sensory stimuli <cit.>. In particular, a key characteristic of a neural population code is the degree to which pairwise or higher-order cross-neuron statistical dependencies are used by the brain to encode and process information, in a potentially redundant or synergistic way, across neurons <cit.> and across time <cit.>. Our work is also of potential relevance to the study of another hot issue in neuroscience, that is, the relevance of the information about sensory variables carried by neural activity for perception <cit.>. This is a crucial issue to resolve the debate about the nature of the neural code, that is, the set of symbols used by neurons to encode information and produce brain function <cit.>. 
Addressing this problem requires mapping the information in the multivariate distribution of variables such as the stimuli presented to the subject, the neural activity elicited by the presentation of such stimuli, and the behavioral reports of the subject's perception. More specifically, it requires characterizing the information between the presented stimulus and the behavioral report of the perceived stimulus that can be extracted from neural activity <cit.>. It is apparent that developing general decompositions of the information exchanged in multivariate systems, as we did here, is key to succeeding in rigorously addressing these fundamental systems-level questions.§ ACKNOWLEDGMENTS We are grateful to members of Panzeri's Laboratory for useful feedback, and to P. E. Latham, A. Brovelli and C. de Mulatier for useful discussions. This research was supported by the Fondation Bertarelli. § AUTHOR CONTRIBUTIONS All authors conceived the research; G.P., E.P. and D.C. performed the research; G.P., E.P. and D.C. wrote a first draft of the manuscript; all authors edited and approved the final manuscript; S.P. supervised the research. | http://arxiv.org/abs/1706.08921v1 | {
"authors": [
"Giuseppe Pica",
"Eugenio Piasini",
"Daniel Chicharro",
"Stefano Panzeri"
],
"categories": [
"cs.IT",
"cs.NE",
"math.IT",
"math.ST",
"physics.bio-ph",
"physics.data-an",
"stat.TH"
],
"primary_category": "cs.IT",
"published": "20170627160937",
"title": "Invariant components of synergy, redundancy, and unique information among three variables"
} |
http://arxiv.org/abs/1706.09015v1 | {
"authors": [
"J. Ruiz de Elvira",
"U. -G. Meißner",
"A. Rusetsky",
"G. Schierholz"
],
"categories": [
"hep-lat",
"hep-ph",
"nucl-th"
],
"primary_category": "hep-lat",
"published": "20170627190959",
"title": "Feynman-Hellmann theorem for resonances and the quest for QCD exotica"
} |
|
^(a) Universidad Nacional de Colombia, Sede Bogotá, Facultad de Ciencias, Departamento de Física. Ciudad Universitaria 111321, Bogotá, Colombia ^(b)Departamento de Ciencias Básicas. Universidad Manuela Beltrán. Bogotá, Colombia ^(*) On leave at Fermilab-Theory Division, Pine Street & Kirk Road, Batavia, IL 60510 41.20.Cv, 02.10.Yn, 01.40.Fk, 01.40.gb, 02.30.Tb We compute the magnetic dipole moment (MDM) for massive flavor neutrinos using the neutrino self-energy in a magnetized medium. The framework to incorporate neutrino masses is a minimal extension of the Standard Model in which neutrinos are Dirac particles and their masses come from tiny Yukawa couplings to a second Higgs doublet with a small vacuum expectation value. The computations are carried out by using the proper time formalism in the weak field approximation eB<<m_e^2, assuming normal hierarchy for the neutrino masses and sweeping the charged Higgs mass. For ν_τ, analyses in the neutrino specific scenario indicate magnetic dipole moments greater than the values obtained for the MDM in the SM (with and without magnetic fields) and in other flavor-conserving models. This fact leads to a closer proximity to the experimental bounds, and thus it is possible to obtain stronger exclusion limits over the new physics parameter space.Neutrino self-energy with new physics effects in an external magnetic field John Morales^(a) December 30, 2023 ============================================================================ § INTRODUCTION Electromagnetic properties of neutrinos have recently attracted much attention since they can be used to address many open questions in neutrino physics. From the theoretical point of view, electromagnetic form factors could contain information on new physics effects, which additionally solve other intriguing phenomena. One example is the oscillation mechanism and the hierarchical structure of fermions, both explained with massive neutrinos and additional feasible beyond-Standard-Model fields. Another benchmark for the electromagnetic behavior of neutrinos comes from experimental and observational (astrophysical) effects that can be studied in several factories and detectors for neutrinos of all sources (atmospheric, solar, cosmological, and geoneutrinos) <cit.>. By analyzing these phenomena it is possible to extract features of the dynamically relevant interactions with detectors. In those observational scenarios, neutrinos and other particles can interact with external magnetic fields, which considerably affect the dynamics and electromagnetic properties. Also, the study of generation, propagation, and diffusion in media of neutrinos in a magnetic field is important in several astrophysical contexts <cit.> (e.g. in supernova explosions, neutron star formation, magnetars, and pulsars) and in early cosmology (e.g. in the production of primordial magnetic fields and high neutrino density).In all these cases the scale of magnetic field strength is greater than B_e=m_e^2/e≈ 4.41× 10^13 G where m_e is the electron mass and e the elementary charge.That is a critical scale where magnetic fields have a strong impact on quantum processes, as is shown by the Schwinger treatment of particle theories with external electromagnetic fields[The Schwinger mechanism described in the proper time formalism is a non-perturbative phenomenon by which particle-antiparticle pairs are produced by a static external field. In this frame, a tunneling process takes place between negative and positive energy states induced by the external potential. 
Since the tunneling probability is exponential in 1/B, the probability of particle production is of order one only for fields that are comparable to the critical field B∼ B_e=m_e^2/e <cit.>.] Our aim is to search for the link between the theoretical and phenomenological consequences of the electromagnetic properties of neutrinos, through the determination of the relationship between form factors such as magnetic dipole moments and well-motivated new physics particles. To describe such related processes it is necessary to extend the traditional neutrino quantum field theory to a new regime involving magnetic fields, which affect the manifest covariance of the former scenario.These effects of magnetic fields on the electromagnetic properties and quantum fields of the neutrino have been studied comprehensively in <cit.>. In particular, a method to compute the changes produced in the neutrino propagation is based on the modification of the neutrino self-energy operator by a medium with a magnetic field <cit.>. When we take into account a medium with an external magnetic field, the neutrino electromagnetic form factors can be calculated from the self-energy using the proper-time (Schwinger) formalism <cit.>. In our study, we describe the initial effects of new physics on the dispersion relations and on the neutrino MDM, computed with the self-energy method in the weak field approximation. The main goal is to determine the additional contribution to the electromagnetic properties when a massive flavor active neutrino, emerging in a new physics scenario with charged Higgs bosons (H^±), goes through a magnetized medium. To that end, this paper is organized as follows: In section <ref> we review the MDM contribution in the SM framework. A survey of the 2HDM-ν fundamentals relevant to our computations is given in section <ref>. Also in section <ref>, we present the extrapolation to obtain the MDM of new physics fields from the 2HDM-ν. A discussion of the parameter space constrained by experimental data is given in section <ref>. § MAGNETIC DIPOLE MOMENTS IN THE PROPER TIME FORMALISM In this section, we review the general structure of the MDM with an external magnetic field by using the effective Lagrangian approach and propagators obtained from the proper time formalism. In the effective approach, the change of the neutrino energy in a magnetic field is due to the presence of a magnetic moment μ_ν for the neutrino. The effective Lagrangian for the interaction between the neutrino field (ψ) and an external electromagnetic field reads ℒ^(μ)=-iμ_ν_l^B/2ψ̅σ_αβψ F^αβ, where F_αβ=∂_αA_β-∂_βA_α is the electromagnetic field tensor and σ_αβ=(γ_αγ_β-γ_βγ_α)/2. Obtaining an explicit form of the MDM demands the computation of the neutrino self-energy and the use of proper-time-formalism expressions for the propagator structures altered by the magnetic field. We start by considering the Feynman diagrams contributing to the neutrino self-energy in the SM, which are depicted in Fig. <ref>. In the weak field approximation eB<< m_e^2, the new structures of the gauge boson G_B( p), Goldstone boson D( p), charged Higgs D_H( p) and fermion S_B^F( p) propagators due to the presence of an external magnetic field are respectively described by G_B( p)=-ig_αβ/p^2-m_W^2- 2βφ/( p^2-m_W^2) ^2+𝒪( β ^2) ,D( p)=i/p^2-m_W^2+𝒪( β ^2) ,D_H( p)=i/p^2-m_H^±^2+𝒪( β ^2) , S_B^F( p)=i( m-p) /p^2-m^2+β( m-p_∥) /2( p^2-m^2) ^2( γφγ) +𝒪( β ^2),where β =eB and φ ^αη=F^αη/B is the dimensionless electromagnetic field tensor normalized to the B-field; the Lorentz indices of tensors are contracted as γφγ =γ _αφ ^αβγ _β, and the dual tensor is φ̃^αη=1/2ϵ ^αηζϑφ _ζϑ. 
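To gauge the size of the 𝒪(β) corrections in these propagators: β = eB equals m_e^2 at the critical field B_e, so relative to the W-boson mass scale the expansion parameter stays tiny even for fields well above B_e. A minimal numeric check (ours; masses are assumed PDG-like values in GeV):

# beta = eB equals m_e^2 at B = B_e, so the O(beta) propagator terms are
# suppressed by roughly (B/B_e) * m_e^2 / m_W^2 at the W-mass scale.
m_e, m_W = 0.000511, 80.379          # GeV, assumed values
for B_over_Be in (1.0, 10.0, 100.0):
    beta = B_over_Be * m_e ** 2      # eB in GeV^2
    print(B_over_Be, beta / m_W ** 2)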
The decomposition of a four-vector in ∥ (parallel component) and ⊥ (perpendicular component) can be defined by p^μ=p_∥^μ+p_⊥^μ=( p^0,0,0,p^3) +( 0,p^1,p^2,0), where p_∥ is the spatial part of momentum which is parallel to the external B-field. In its general form, the self-energy operator in an external B-field has the following Lorentz structure <cit.>∑( p) =[ 𝒜_Lp+ℬ_Lp_∥+𝒞_L( pφ̃γ) ] P_L+[ 𝒜_Rp+ℬ_Rp_∥+𝒞_R( pφ̃γ) ] P_R+m_ν[ 𝒦_1+i𝒦_2( γφγ) ].The coefficients 𝒜_R, ℬ_R, 𝒞_R and 𝒦_1,2 only come from the Feynman diagram with the scalar Φ of Fig. <ref>. The remaining coefficients, i.e., 𝒜_L, ℬ_L, 𝒞_L, are due to both Feynman diagrams in Fig. <ref>. The coefficients 𝒜_L, 𝒜_R, and 𝒦_1 can be absorbed by the neutrino wave function and mass renormalization. The ℬ and 𝒞 terms are crucial for neutrino dispersion but, to lowest order, the dispersion relation depends only on ℬ_L. The two coefficients ℬ_R and 𝒞_R are suppressed by a factor of ϵ_ν relative to the ℬ_L and 𝒞_L couplings. The factors 𝒦_2, 𝒞_L and 𝒞_R are needed to construct the contribution from the self-energy expansion to the neutrino MDM <cit.>, i.e.,μ _ν _l^B=m_ν/2B( 𝒞_L-𝒞_R+4𝒦_2).The result of this contribution, calculated taking into account the charged W-boson and the charged scalar Φ-boson, is μ _ν _l^B=μ _ν _l1/( 1-λ _l) ^3( 1-7/2λ _l+3λ _l^2-λ _l^2lnλ _l-1/2λ _l^3),where μ _ν _l=3.2× 10^-19μ _B( m_ν _α/1eV) is the neutrino magnetic moment value in vacuum <cit.>, μ _B=e/2m_e is the Bohr magneton and λ _l=m_l^2/m_W^2. The values of the MDM in vacuum and in the presence of a magnetic field for the SM are reported in Tab. <ref>.§ CONTRIBUTION TO MDM IN NEUTRINO SPECIFIC-2HDM Before introducing new physics effects in MDMs with magnetic fields, we survey the 2HDM-ν, highlighting the flavor properties and mass-generation mechanisms that are relevant for interpreting the electromagnetic properties of neutrinos.The neutrino specific-2HDM (2HDM-ν) is implemented by introducing a new scalar doublet Φ _2 with the same quantum numbers as the SM Higgs doublet Φ _1, together with three right-handed neutrinos, which are singlets under the SM gauge group <cit.>.The addition of the doublet Φ _2 introduces five Higgs bosons in the scalar spectrum of the neutrino-specific structure. In general, the 2HDM-ν is composed of two CP-even Higgs bosons ( H^0,h^0), one CP-odd scalar ( A^0) and two charged Higgs bosons ( H^±) <cit.>. In this model, the tiny neutrino masses could arise from a small vacuum expectation value (VEV) of the doublet Φ _2, which directly implies small masses or small Yukawa couplings. The most general Lagrangian in the 2HDM-ν is ℒ=( 𝒟_μΦ _1) ^†(𝒟^μΦ _1) +( 𝒟_μΦ _2) ^†( 𝒟^μΦ _2) -V_H+ ℒ_Y. V_H denotes the Higgs potential, which has the following form with a softly broken U( 1) symmetry, V_H=m̅_11^2Φ _1^†Φ _1+m̅_22Φ _2^†Φ _2-( m̅_12^2Φ _1^†Φ _2+h.c.) +1/2λ _1( Φ _1^†Φ _1) ^2+ 1/2λ _2( Φ _2^†Φ _2) ^2+λ _3( Φ _1^†Φ _1) ( Φ _2^†Φ _2) +λ _4( Φ _1^†Φ _2) ( Φ _2^†Φ _1) . On the other hand, ℒ_Y in Eq. (<ref>) is the Yukawa Lagrangian (central to our study), which in the 2HDM-ν scenario reads-ℒ_Y=ξ _ij^νL̅_LiΦ̃_2ν _Rj+η _ij^EL̅_LiΦ _1E_Rj+η _ij^DQ̅_LiΦ _1D_Rj+η _ij^UQ̅_LiΦ̃_1U_Rj+h.c. Φ̃_1=iσ _2Φ _1^* is the conjugate of the Higgs doublet. Fermion doublets are defined by Q_L≡( u_L,d_L) ^T and L_L≡( ν _L,e_L) ^T. Here E_R ( D_R) refers to the three down-type weak-isospin lepton (quark) singlets and ν _R ( U_R) refers to the three up-type weak-isospin neutrino (quark) singlets. The first part in (<ref>) comes from right-handed neutrinos, while the second part is SM-like. 
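The scaling just described is easy to quantify: with a coupling of the form ξ = √2 m_ν/v_2 (as in the lepton sector written out below), an eV-scale v_2 accommodates sub-eV masses with order-one couplings, while the SM-like VEV of 246 GeV would force ξ ~ 10^-13. A short check (ours; the ν_τ mass value is an assumption consistent with normal hierarchy):

import math

# Neutrino Yukawa coupling xi = sqrt(2) m_nu / v2 in the 2HDM-nu.
m_nu_eV = 0.05                       # assumed nu_tau mass, eV
for v2_eV in (0.1, 1.0, 2.46e11):    # 2.46e11 eV = 246 GeV (SM-like VEV)
    print(v2_eV, math.sqrt(2) * m_nu_eV / v2_eV)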
The lepton sector of the Yukawa Lagrangian, relevant for our computations, is given by-ℒ_Y( 2HDM-ν)=√(2)m_ν _i/v_2ν̅_l( U_PMNSP_L) lH^++h.c.§.§ Effective MDM in 2HDM-ν Guided by the scalar contributions to the self-energy in the SM, we compute an explicit form for the MDM contribution of the new physics introduced by the additional scalar fields in the 2HDM-ν. Within the framework of the MDM in the 2HDM-ν, we should add one new type of contribution to the self-energy operator of the neutrino due to the charged Higgs bosons H^± (see diagram in Fig. <ref>), i.e.,∑( p) =∑_W^±( p) +∑_Φ( p) +∑_H^±( p).The new physics effects, which separate the contributions of the SM and the 2HDM-ν, come from the contribution ∑_H^±(p):∑_H^±( p) =i∫d^4k/( 2π) ^4( aP_L+bP_R) S_B^F( p-k) ( cP_R+dP_L) D_B( k) , where S_B^F and D_B are the propagators of the charged lepton and the charged Higgs boson, the constants a,b,c and d are associated with the Feynman rules of the particular ν-2HDM in the vertex, and, finally, P_L=( 1+γ _5) /2 and P_R=( 1-γ _5) /2 are the left-handed and right-handed chiral operators.The new physics self-energy can be expanded as[The expansion shown in (<ref>) is model independent with respect to the Yukawa coupling structure. Other models with charged Higgs bosons or charged scalars can be tested with this self-energy form by varying the a and b couplings. These effects have been treated in <cit.>.] ∑_H^±( p) = -i∫d^4k/( 2π) ^4[ a^2( p-k) P_R/( ( p-k) ^2-m_l^2) ( k^2-m_H^±^2) - 2a^2β( pφ̃γ) P_R-2a^2β( k φ̃γ) P_R/( ( p-k) ^2-m_l^2) ^2( k^2-m_H^±^2) ..+ abm_lP_R/( ( p-k) ^2-m_l^2) ( k^2-m_H^±^2) -iβ abm_l( γφγ) /( ( p-k) ^2-m_l^2) ^2( k^2-m_H^±^2) ..+abm_lP_L/( ( p-k) ^2-m_l^2) ( k^2-m_H^±^2) +b^2( p-k) P_L/( ( p-k) ^2-m_l^2) ( k^2-m_H^±^2) -2β b^2( pφ̃γ) P_L-2β b^2( kφ̃γ) P_L/( ( p-k) ^2-m_l^2) ^2( k^2-m_H^±^2) ], where m_l and m_H^± are the masses of the charged lepton and the H-boson respectively. Here the coefficients a = c and b = d in the vertices are equal in the Feynman rules for the ν-2HDM. Extrapolating the scalar contribution to the effective MDM as in Eq. (<ref>), we factorize from (<ref>) the new physics coefficients 𝒞̅_L, 𝒞̅_R and 𝒦̅_2 belonging to the MDM contribution in the self-energy for the charged Higgs boson 𝒞̅_L( pφ̃γ) P_L = i∫d^4k/( 2π) ^42β b^2( pφ̃γ) P_L-2β b^2( kφ̃γ) P_L/( ( p-k) ^2-m_l^2) ^2( k^2-m_H^±^2) , 𝒞̅_R( pφ̃γ) P_R = -i∫d^4k/( 2π) ^42a^2β( pφ̃γ) P_R-2a^2β( kφ̃γ) P_R/( ( p-k) ^2-m_l^2) ^2( k^2-m_H^±^2) ,m_νi𝒦̅_2( γφγ) = i∫d^4k/( 2π) ^4iβ abm_l( γφγ) /( ( p-k) ^2-m_l^2) ^2( k^2-m_H^±^2) .Substituting the expressions (<ref>), (<ref>) and (<ref>) into the MDM contribution of Eq. <ref>, in analogy with the scalar case, we obtain ( μ _ν _l^B) _H^± = m_ν/2B( 𝒞̅_L-𝒞̅_R+4𝒦̅_2) = √(2)/3G_Fμ _ν _l∫_0^1dx2( x^2-3x+2) ( b^2+a^2) +( 1-x) m_l/ m_νab/m_ν^2x^2+( m_l^2-m_ν^2-m_H^±^2) x+m_H^±^2, where we have used m_ν/2Bβ/4π ^2=√(2)/3G_Fμ _ν _l. The coefficients a and b coming from Eq. (<ref>) are a=-√(2)/v_2m_ν _iU_k,i and b=√(2)/v_2m_ν _iU_i,k. In our numerical analyses, we shall use the accurate value for Fermi's constant:G_F=√(2)g^2/8 M_W^2=1.1663787(6)× 10^-5GeV^-2. § RESULTS AND ANALYSIS FOR MDM IN 2HDM-Ν Our analyses are based on constraints on the charged Higgs masses and the VEV of the second doublet (v_2). The effect of the Yukawa structure of the Dirac neutrino in the 2HDM-ν, which involves the m_ν/v_2 ratio, makes the MDM strongly sensitive to the VEV and to the type of hierarchy assumed for the neutrino mass ordering. To see those effects and the relevance of the parameters for the MDMs, we consider scans (Figs. 
<ref>-<ref>) showing the evolution of the MDM for flavor neutrinos in the (m_H^±,v_2) space. In the case of the electron neutrino (Fig. <ref>) and the muon neutrino (Fig. <ref>), we see that the contribution surpasses the SM-MDM (μ_ν) by several orders of magnitude (see Tab. <ref>), approaching the experimental limits only for charged Higgs masses close to m_H^±=100 GeV. Nevertheless, in these analyses it is not possible to get strong enough constraints over (m_H^±,v_2).For ν_τ neutrinos (see Fig. <ref>), we have one of the most promising scenarios, since the contribution surpasses the experimental thresholds, leading to constraints over the parameter space of the 2HDM-ν scenario. However, as we approach the experimental threshold, the Yukawa couplings also approach the perturbativity limit. Indeed, these limits are established to avoid divergences in the renormalization-group evolution of the couplings with energy. The reason is that the squared Yukawa coupling cannot exceed the bound of 8π, since beyond this limit the chance of encountering a Landau pole in the energy evolution of the couplings is significantly greater <cit.>. This value translates into a lower bound on v_2 for the respective neutrino mass present in the normal hierarchy. In the case of ν_τ, for instance, v_2min=0.015 eV. Since perturbation theory is reliable for values of v_2>v_2min, we choose v_2=0.1 GeV as the starting point of all scans. We consider that for lower values of v_2, beyond the threshold corrections, the contributions to the MDM are non-perturbative, and the results can depend strongly on this starting value. Furthermore, in particular cases, the contributions to the MDMs can be constrained by the experimental threshold limits; for instance, from the ν_τ contribution, only values v_2>0.65 eV are allowed at 90% C.L. for all charged Higgs masses.At 90% C.L., values v_2>0.1 GeV are allowed for charged Higgs masses m_H^±>650 GeV. By allowing smaller values of v_2, without reaching the perturbativity limit, the constraints also exclude several values of the charged Higgs masses. One sees that for the 2HDM-ν, the corrections involving magnetic fields are larger than those obtained in vacuum. This result is model dependent, as can be established for other 2HDMs without feasible natural terms for neutrino masses. More specifically, for 2HDMs of type I, II and III the contributions with external magnetic fields interfere destructively with the vacuum contributions <cit.>.§ CONCLUDING REMARKS We have calculated the self-energy contribution to the neutrino MDM in a magnetized medium using a non-minimal extension of the SM with two Higgs doublets. Even though our computations are model independent, the most important contribution in the 2HDMs studied so far comes from the neutrino specific scenario. Also, the 2HDM-ν could be a plausible scenario in which to search for the MDMs, by means of the intimate relation with the neutrino mass and the tuning of the VEVs. It can be seen that all contributions of the 2HDM-ν scenario are above those obtained from the SM for all neutrino flavors. This is due to the structure of the Yukawa couplings, the mass of the charged Higgs boson, and the charged lepton in the vertex, which has the same flavor as the corresponding neutrino. 
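Two of the numbers above can be reproduced directly. First, the perturbativity bound ξ² = 2m_ν²/v_2² ≤ 8π gives v_2min = m_ν/(2√π) ≈ 0.014 eV for an assumed m_ν ≈ 0.05 eV, matching the quoted v_2min ≈ 0.015 eV. Second, the scans amount to evaluating the Feynman-parameter integral of Eq. (<ref>) over (m_H^±, v_2). The Python sketch below (ours) does both, taking the formula at face value with |U|=1, masses in GeV, and the overall normalization √2 G_F μ_ν_l/3 from the text; the absolute numbers are illustrative only.

import math

G_F   = 1.1663787e-5           # GeV^-2
m_tau = 1.77686                # GeV (assumed)
m_nu  = 0.05e-9                # GeV, ~0.05 eV assumed nu_tau mass

def v2_min(m_nu_eV=0.05):
    # xi^2 = 2 m_nu^2 / v2^2 <= 8*pi  =>  v2 >= m_nu / (2 sqrt(pi))
    return m_nu_eV / (2 * math.sqrt(math.pi))

def mdm_over_mu_nu(m_H, v2, U=1.0, n=2000):
    """x-integral of Eq. times sqrt(2) G_F / 3, i.e. (mu^B)_{H+-} in units
    of mu_{nu_l}; a = -sqrt(2) m_nu U / v2 and b = +sqrt(2) m_nu U / v2."""
    y = math.sqrt(2) * m_nu * U / v2
    a, b = -y, y
    def f(x):
        num = 2 * (x * x - 3 * x + 2) * (a * a + b * b) \
              + (1 - x) * (m_tau / m_nu) * a * b
        den = m_nu ** 2 * x * x + (m_tau ** 2 - m_nu ** 2 - m_H ** 2) * x \
              + m_H ** 2
        return num / den
    h = 1.0 / n                 # Simpson's rule on [0, 1]
    s = f(0) + f(1) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
    return math.sqrt(2) / 3 * G_F * s * h / 3

print(v2_min())                     # ~0.014 eV
for m_H in (100.0, 300.0, 650.0):   # charged Higgs masses, GeV
    print(m_H, mdm_over_mu_nu(m_H, v2=0.1))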
Moreover, for this model, the introduction of magnetic effects increases the contribution to the neutrino MDM compared with the value of the MDM obtained in vacuum. In all effective flavor magnetic dipole moments, the largest contributions to the neutrino MDM are obtained for values of the VEV of the second doublet tending to zero, as well as for charged Higgs masses close to 100 GeV. In the case of ν_τ, those values are constrained by Borexino experimental analyses at 90% C.L. It is worthwhile to point out that the MDM effects may be slightly different due to changes in the hierarchy used for the neutrino masses, although it is important to emphasize that the mass of the charged lepton plays a relevant role in the effective MDM. These facts lead to a framework wherein the third family has the most significant values for the MDMs. From global analyses, the ν_τ-MDM with magnetic fields could become a plausible reference point to study new physics scenarios contributing to the electromagnetic form factors, particularly in the light of models with feasible mass terms for neutrinos. Avoiding regions where perturbativity could be in conflict with lower values of the natural VEV v_2 and the charged Higgs masses, the neutrino specific model would be a significant benchmark to introduce these effects of new physics, at the same time that the neutrino masses remain at a viable scale. This opens an additional window to study these electromagnetic effects in more elaborate models where the mass terms for neutrinos appear naturally via an appropriate see-saw mechanism. Our computations are relevant in those scenarios in the limit where the additional gauge bosons and fermions become decoupled from the theory. § ACKNOWLEDGMENTS We acknowledge financial support from Colciencias and DIB (Universidad Nacional de Colombia). Carlos G. Tarazona also thanks Universidad Manuela Beltran as well as the support from the DIB project with Q-number 110165843163 (Universidad Nacional de Colombia). A. Castillo, J. Morales, and R. Diaz are also indebted to the Programa Nacional Doctoral de Colciencias for its academic and financial aid. A. Castillo and Carlos G. Tarazona are also indebted to the Summer Visitor Program of the Theory Department at Fermilab. Giunti2015 Carlo Giunti, Konstantin A. Kouzakov, Yu-Feng Li, Alexey V. Lokhov, Alexander I. Studenikin, and Shun Zhou. Electromagnetic neutrinos in laboratory experiments and astrophysics. Annalen der Physik 528, pages 198-215 (2016). [http://arxiv.org/abs/arXiv:hep-ph/ 1506.05387].RaffeltG. G. Raffelt, Stars as laboratories for fundamental physics.University of Chicago Press, Chicago, 1996. Gelis François Gelis and Naoto Tanji, Schwinger mechanism revisited, Progress in Particle and Nuclear Physics 87, pages 1-49 (2016).[http://arxiv.org/abs/arXiv:hep-ph/1510.05451]. zhukovsky V. CH. Zhukovsky, P.A. Eminov, and A.E. Grigoruk, Radiative decay of a massive neutrino in the Weinberg–Salam model with mixing in a constant uniform magnetic field,Mod. Phys.Lett. B11 (1996) 3119. Ioannisian Ara N. Ioannisian and George G. Raffelt, Cherenkov radiation by massless neutrinos in a magnetic field. Phys. Rev. D 55 (1997) 7038-7043. [http://arxiv.org/abs/arXiv:hep-ph/9612285].Kuznetsov A.V. Kuznetsov,N.V. Mikheev,G. G. Raffelt,L. A., Neutrino dispersion in external magnetic fields, Phys. Rev. D 73, 023001 (2006).[http://arxiv.org/abs/arXiv:hep-ph/0505092]. Dobrynina A. A. Dobrynina, N. V. Mikheev, and E. N. 
Narynskaya, Neutrino self-energy operator and neutrino magnetic moment, Physics of Atomic Nuclei, 2012, Vol. 76, No. 11, pp. 1352–1355. Dobrynina2 A. A. Dobrynina and N. V. Mikheev, Self-energy operator of a massive neutrino in an external magnetic field, Journal of Experimental and Theoretical Physics, 2014, Vol. 118, No. 1, pp. 54-64.Erdas A. Erdas, Neutrino self-energy in external magnetic field, Phys. Rev. D 80, 113004 (2009).[http://arxiv.org/abs/arXiv:0908.4297]. Kuznetsov:2015uca A. V. Kuznetsov, A. A. Okrugin and A. M. Shitova,Propagators of charged particles in an external magnetic field, expanded over Landau levels, Int. J. Mod. Phys. A 30 (2015) no.24,1550140, [http://arxiv.org/abs/arXiv:1506.08999 [hep-ph]]. Mohapatra N. Mohapatra, P. Bal, Massive neutrinos in physics and astrophysics, World Scientific Lecture Notes in Physics Vol. 72. ISBN 981-238-070-1, 981-238-070-X (pbk) (2004).Giunti C. Giunti, C. W. Kim, Fundamentals of neutrino physics and astrophysics, Oxford university press, 2007.broggini Broggini, C., C. Giunti, and A. Studenikin, Electromagnetic Properties of Neutrinos, Adv. High Energy Phys. 2012, 459526,[http://arxiv.org/abs/arXiv:1207.3980 [hep-ph]].Fukugita M. Fukugita, T. Yanagita, Physics of Neutrinos and Applications to Astrophysics, Springer-Verlag, Berlin Heidelberg. 2003.Forero D. V. Forero, M. Tortola, and J.W. F. Valle, Neutrino oscillations refitted. Phys. Rev. D 90, 093006 (2014).[http://arxiv.org/abs/arXiv:1405.7540[hep-ph]].Abazajian K.N. Abazajian et al., Cosmological and astrophysical neutrino mass measurements, Astroparticle Physics. Volume 35, Issue 4 (2011).PAde P. A. R. Ade et al. Planck 2015 results. XIII. Cosmological parameters, A&A 594, A13 (2016). [http://arxiv.org/abs/arXiv:1502.01589v2 [hep-ph]].PDG Particle data group collaboration, K.A. Olive et al. Chin. Phys. C, 38, 090001 (2014).logan1 Shainen M. Davidson and Heather E. Logan, Dirac neutrinos from a second Higgs doublet, Phys. Rev. D 80, 095008 (2009). [http://arxiv.org/abs/arXiv:0906.3335 [hep-ph]].logan2 Shainen M. Davidson and Heather E. Logan, LHC phenomenology of a two-Higgs-doublet neutrino mass model, Phys. Rev. D 82, 115031 (2010). [http://arxiv.org/abs/arXiv:1009.4413[hep-ph]]HHaber J. Gunion, H. Haber, G. Kane, S. Dawson. The Higgs Hunter's Guide (Addison Wesley, 1990).Proceedings Carlos G. Tarazona, Rodolfo A. Diaz, John Moralesa, Andrés Castillo, Contribution to the neutrino magnetic moment coming from 2HDM in presence of magnetic fields, PoS ICHEP2016 (2017) 1070, [http://arxiv.org/abs/arXiv:1611.01135[hep-ph]]IJMPA Carlos G. Tarazona, Rodolfo A. Diaz, John Morales, Andres Castillo, Phenomenology of the new physics coming from 2HDMs to the neutrino magnetic dipole moment, Int. Jour. of Mod. Phys. A 32, 10 (2017).Wong Texono Collaboration, H. Wong, et al.,A search of neutrino magnetic moments with a high-purity germanium detector at the kuo-sheng nuclear power station, Phys. Rev. D75 (2007) 012001.Gemma A. Beda, V. Brudanin, V. Egorov, D. Medvedev, V. Pogosov, et al., Gemma experiment: The results of neutrino magnetic moment search, Phys. Part. Nucl. Lett. 10 (2013) 139-143.Auerbach L. B. Auerbach, R. L. Burman, D. O. Caldwell et al., Measurement of electron-neutrino electron elastic scattering, Phys. Rev. D 63, 112001 (2001).[http://arxiv.org/abs/arXiv:0101039 [hep-ex]]Montanino D. Montanino, M. Picariello, and J. Pulido, Probing neutrino magnetic moment and unparticle interactions with Borexino, Phys. Rev. 
| http://arxiv.org/abs/1706.08614v1 | {
"authors": [
"Carlos G. Tarazona",
"Andrés Castillo",
"Rodolfo A. Diaz",
"John Morales"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170626215937",
"title": "Neutrino self-energy with new physics effects in an external magnetic field"
} |
Department of Mathematics, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2. [email protected]. The author was partially supported by an NSERC Discovery Grant. We study restriction and extension properties for states on C^*-algebras with an eye towards hyperrigidity of operator systems. We use these ideas to provide supporting evidence for Arveson's hyperrigidity conjecture. Prompted by various characterizations of hyperrigidity in terms of states, we examine unperforated pairs of self-adjoint subspaces in a C^*-algebra. The configuration of the subspaces forming an unperforated pair is in some sense compatible with the order structure of the ambient C^*-algebra. We prove that commuting pairs are unperforated, and obtain consequences for hyperrigidity. Finally, by exploiting recent advances in the tensor theory of operator systems, we show how the weak expectation property can serve as a flexible relaxation of the notion of unperforated pairs. 2010 Mathematics Subject Classification: 46L07, 46L30 (46L52). Unperforated pairs of operator spaces and hyperrigidity of operator systems Raphaël Clouâtre ====================== § INTRODUCTION The study of uniform algebras (i.e. closed unital subalgebras of commutative C^*-algebras) combines concrete function theoretic ideas with abstract algebraic tools <cit.>. It is a classical topic that has proven to be useful in operator theory. Indeed, a prototypical instance of a uniform algebra is the disc algebra of continuous functions on the closed unit disc which are holomorphic on the interior. Through a basic norm inequality of von Neumann, one can bring the analytic properties of the disc algebra to bear on the theory of contractions on Hilbert space. The seminal work of Sz.-Nagy and Foias on operator theory aptly illustrates the depth of this interplay <cit.>, and to this day the link is still being exploited. In light of this highly successful symbiosis between operator theory and function theory, it is natural to look for further analogies. One may wish to transplant the sophisticated machinery available for uniform algebras to the setting of general operator algebras. This ambitious vision was pioneered by Arveson, who instigated an influential line of inquiry in his landmark paper <cit.>. Therein, he introduced the notion of boundary representations for an operator system 𝒮, and proposed that these should be the non-commutative analogue of the so-called Choquet boundary of a uniform algebra. Furthermore, he noticed that these boundary representations could be used to construct a non-commutative analogue of the Shilov boundary as well. Although Arveson himself was not able to fully realize this program at the time, via the hard work of many hands <cit.>,<cit.>,<cit.>,<cit.> the C^*-envelope of an operator system was constructed by analogy with the classical situation. Nowadays, this circle of ideas is regarded as the appropriate non-commutative version of the Shilov boundary of a uniform algebra and it has emerged as a ubiquitous invariant in non-commutative functional analysis <cit.>,<cit.>,<cit.>,<cit.>. Arveson also recognized that the non-commutative Choquet boundary was a rich source of intriguing questions. For instance, in <cit.> he proposed a tantalizing connection with approximation theory by recasting a classical phenomenon in operator algebraic terms. The classical setting is a result of Korovkin <cit.>, which goes as follows. For each n ∈ ℕ, let Φ_n: C[0,1] → C[0,1] be a positive linear map and assume that lim_n→∞ ‖Φ_n(f) − f‖ = 0 for every f ∈ {1, x, x^2}.
Then, it must be the case that lim_n→∞ ‖Φ_n(f) − f‖ = 0 for every f ∈ C[0,1]. In other words, the asymptotic behaviour of the sequence (Φ_n)_n on the C^*-algebra C[0,1] is uniquely determined by the operator system 𝒮 spanned by 1, x, x^2. This striking phenomenon was elucidated by several researchers (see for instance <cit.> for a recent survey), but the perspective most relevant for our purpose here was offered by Šaškin <cit.>, who observed that the key property of 𝒮 is that its Choquet boundary coincides with [0,1]. A natural non-commutative analogue of Korovkin-type rigidity would be an operator system 𝒮 ⊂ B(ℋ) with the property that for any sequence of unital completely positive linear maps Φ_n: C^*(𝒮) → C^*(𝒮), n ∈ ℕ, such that lim_n→∞ ‖Φ_n(s) − s‖ = 0, s ∈ 𝒮, we must have lim_n→∞ ‖Φ_n(a) − a‖ = 0, a ∈ C^*(𝒮). In fact, Arveson introduced even more non-commutativity in this picture, and defined the operator system 𝒮 to be hyperrigid if for any injective *-representation π: C^*(𝒮) → B(ℋ_π) and for any sequence of unital completely positive linear maps Φ_n: B(ℋ_π) → B(ℋ_π), n ∈ ℕ, such that lim_n→∞ ‖Φ_n(π(s)) − π(s)‖ = 0, s ∈ 𝒮, we must have lim_n→∞ ‖Φ_n(π(a)) − π(a)‖ = 0, a ∈ C^*(𝒮). Note that even in the case where C^*(𝒮) is commutative, a priori this phenomenon is stronger than the one observed by Korovkin, as we allow the maps Φ_n to take values outside of C^*(𝒮). Nevertheless, in accordance with Šaškin's insightful observation, Arveson conjectured <cit.> that hyperrigidity is equivalent to the non-commutative Choquet boundary of 𝒮 being as large as possible, in the sense that every irreducible *-representation of C^*(𝒮) should be a boundary representation for 𝒮. This is now known as Arveson's hyperrigidity conjecture and it has garnered significant interest in recent years <cit.>,<cit.>,<cit.>,<cit.>. Arveson himself showed in <cit.> that the conjecture is valid whenever C^*(𝒮) has countable spectrum. Recently, it was verified in <cit.> in the case where C^*(𝒮) is commutative. The hyperrigidity conjecture is the main motivation behind our work here. Technically speaking however, the paper is centred around extensions and restrictions of states on C^*-algebras, and these issues occupy us for the majority of the article. We feel this approach to hyperrigidity is very natural, but as far as we know it has not been carefully investigated beyond the early connection realized in <cit.>. In the final section of the paper, we introduce what we call "unperforated pairs" of subspaces in a C^*-algebra. As we show, they constitute a device that can be leveraged to gain information about states, and ultimately to detect hyperrigidity. They also highlight a novel angle of approach to the hyperrigidity conjecture. We now describe the organization of the paper more precisely. In Section <ref> we gather the necessary background material. In particular, we recall that hyperrigidity of an operator system 𝒮 is equivalent to the following unique extension property: for every unital *-representation π: C^*(𝒮) → B(ℋ) and every unital completely positive linear map Π: C^*(𝒮) → B(ℋ) which agrees with π on 𝒮, we have π = Π. In Section <ref>, we explore the link between hyperrigidity and two properties of states, namely the unique extension property and the pure restriction property. The first main result of that section establishes these properties as a tool to detect hyperrigidity. We summarize our findings (Theorem <ref>, Corollary <ref> and Theorem <ref>) in the following. Let 𝒮 be an operator system and let 𝒜 = C^*(𝒮). Assume that every irreducible *-representation of 𝒜 is a boundary representation for 𝒮.
Let π: 𝒜 → B(ℋ) be a unital *-representation and let Π: 𝒜 → B(ℋ) be a unital completely positive extension of π|_𝒮. The following statements are equivalent. (i) We have π = Π. (ii) We have Π(𝒜) ⊂ π(𝒜). (iii) Every pure state on C^*(Π(𝒜)) restricts to a pure state on π(𝒜). (iv) There is a family of states on C^*(Π(𝒜)) which separate (Π − π)(𝒜) and restrict to pure states on π(𝒜). (v) Every pure state on C^*(Π(𝒜)) has the unique extension property with respect to π(𝒜). (vi) There is a family of pure states on C^*(Π(𝒜)) which separate (Π − π)(𝒜) and have the unique extension property with respect to π(𝒜). The other main result of Section <ref> provides evidence supporting Arveson's conjecture (Theorem <ref>). Let 𝒮 be an operator system and let 𝒜 = C^*(𝒮). Assume that every irreducible *-representation of 𝒜 is a boundary representation for 𝒮.
Likewise, we will always assume that^*-algebras are concretely represented on a Hilbert space. For each positive integern, we denote by_n()the complex vector space ofn×nmatrices with entries in, and regard it as a unital self-adjoint subspace ofB(^̋(n)). A linear map ϕ:→ B(_̋ϕ)induces a linear map ϕ^(n):_n()→ B(_̋ϕ^(n))defined asϕ^(n)([s_ij]_i,j)=[ϕ(s_ij)]_i,jfor each[s_ij]_i,j∈_n().The mapϕis said to be completely positive ifϕ^(n)is positive for every positive integern.For most of the paper, we will be dealing with unital completely positive maps with one-dimensional range. Such a mapψ:→is called a state. The set of states onis denote by(). It is a weak-*closed convex subset of the closed unit ball of the dual space^*, and so in particular it is compact in the weak-*topology.The structure of unital completely positive maps on^*-algebras is elucidated by the Stinespring construction, a generalization of the classical Gelfand-Naimark-Segal (GNS) construction associated to a state. More precisely, given a unital^*-algebraand a unital completely positive mapϕ:→B()̋, there is a Hilbert space_ϕ, an isometryV_ϕ:→̋_ϕand a unital*-representationσ_ϕ:→B(_ϕ)satisfyingϕ(a)=V_ϕ^* σ_ϕ(a)V_ϕ,a∈and_ϕ=[σ_ϕ()V_ϕ]̋.Here and throughout, given a subset⊂$̋ we denote by [] the smallest closed subspace of $̋ containing. The triple(σ_ϕ, _ϕ,V_ϕ)is called the Stinespring representation ofϕ, and it is unique up to unitary equivalence. The following fact is standard. Letbe a unital ^*-algebra and let ⊂ be a unital ^*-subalgebra. Let ψ:→ B()̋ be a unital completely positive map and let ϕ=ψ|_. Then, there is an isometry W:_ϕ→_ψ such that WV_ϕ=V_ψ andσ_ψ(b) W=Wσ_ϕ(b),b∈.We first note that if b_1,…,b_n∈ and ξ_1,…,ξ_n∈$̋ then∑_j=1^n σ_ψ(b_j)V_ψξ_j^2 =∑_j,k=1^n⟨ V_ψ^* σ_ψ(b_k^*b_j)V_ψξ_j,ξ_k⟩ =∑_j,k=1^n⟨ψ(b_k^*b_j)ξ_j,ξ_k⟩=∑_j,k=1^n⟨ϕ(b_k^*b_j)ξ_j,ξ_k⟩=∑_j,k=1^n⟨ V_ϕ^* σ_ϕ(b_k^*b_j)V_ϕξ_j,ξ_k⟩=∑_j=1^n σ_ϕ(b_j)V_ϕξ_j^2.Using that_ϕ=[σ_ϕ()V_ϕ]̋, a routine argument shows that there is an isometryW:_ϕ→_ψsuch thatW( ∑_j=1^n σ_ϕ(b_j)V_ϕξ_j) =∑_j=1^n σ_ψ(b_j)V_ψξ_jfor everyb_1,…,b_n∈andξ_1,…,ξ_n∈$̋. It follows readily that WV_ϕ=V_ψ andWσ_ϕ(b)=σ_ψ(b)W,b∈.§.§ Purity, extreme points and Choquet integral representation Letbe an operator system. A completely positive mapψ:→B()̋is said to be pure if wheneverϕ:→B()̋is a completely positive map with the property thatψ-ϕis also completely positive, we must have thatϕ=tψfor some0≤t≤1. It is known that the pure unital completely positive maps on a^*-algebra are precisely those for which the associated Stinespring representations are irreducible <cit.>.We let_p()denote the collection of pure states. It is a standard fact that a state is pure if and only if it is an extreme point of()(see for instance <cit.>, the proof of which is easily adapted to the setting of an operator system). A subtlety arises for unital completely positive maps with higher dimensional ranges: it follows from <cit.> and <cit.> thata matrix stateψ:→B(^n)is pure if and only if it is a so-called matrix extreme point. The following tool will be important for us. It follows from <cit.> (see also <cit.>). Recall that ifis separable, then the weak-*topology on()is compact and metrizable. Letbe a separable operator system and let ψ be a state on . 
Then, there is a regular Borel probability measure μ on S(𝒮) concentrated on S_p(𝒮) and with the property that ψ(s) = ∫_{S_p(𝒮)} ω(s) dμ(ω), s ∈ 𝒮. §.§ Unique extension property, boundary representations and hyperrigidity One important property of completely positive maps on operator systems is that they satisfy a generalization of the Hahn-Banach extension theorem. Indeed, let 𝒮 ⊂ B(ℋ) be an operator system and let ϕ: 𝒮 → B(ℋ_ϕ) be a completely positive map. Then, by Arveson's extension theorem <cit.>, there is another completely positive map ψ: B(ℋ) → B(ℋ_ϕ) with the property that ψ|_𝒮 = ϕ. In particular, a completely positive map on 𝒮 always admits at least one completely positive extension to any operator system 𝒯 ⊂ B(ℋ) containing 𝒮. We denote the set of such extensions by Ext(ϕ, 𝒯). This notation will be used consistently throughout the paper. In general, the set of extensions may contain more than one element, and this possibility is one of the main themes of the paper. The following fact quantifies the freedom in choosing an extension, and it follows from a verbatim adaptation of the proof of <cit.>. Let 𝒮 ⊂ 𝒯 be operator systems and let ϕ be a state on 𝒮. Then, max_{ψ ∈ Ext(ϕ,𝒯)} ψ(t) = inf{ϕ(s): s ∈ 𝒮, s ≥ t} and min_{ψ ∈ Ext(ϕ,𝒯)} ψ(t) = sup{ϕ(s): s ∈ 𝒮, s ≤ t} whenever t ∈ 𝒯 is self-adjoint. Let 𝒮 ⊂ 𝒯 be operator systems. We say that a completely positive map ψ: 𝒯 → B(ℋ_ψ) has the unique extension property with respect to 𝒮 if the restriction ψ|_𝒮 admits only one completely positive extension to 𝒯, namely ψ itself. An irreducible *-representation π: C^*(𝒮) → B(ℋ_π) is said to be a boundary representation for 𝒮 if it has the unique extension property with respect to 𝒮. We advise the reader to exercise some care: in other works (such as <cit.>) the use of the terminology "unique extension property" is reserved for *-representations on C^*(𝒮). Our definition is more lenient as we do not restrict our attention to *-representations and no further relation is assumed between 𝒮 and 𝒯 beyond mere containment. We will recall this discrepancy in terminology whenever there is any risk of confusion. These notions can be used to reformulate the property of hyperrigidity considered in the introduction. The following is <cit.>; therein some special attention is paid to separability conditions, but a quick look at the proof reveals that the next result holds with no cardinality assumptions. Let 𝒮 be an operator system. Then, 𝒮 is hyperrigid if and only if every unital *-representation of C^*(𝒮) has the unique extension property with respect to 𝒮. The driving force behind our work is the following conjecture of Arveson <cit.>, which claims that it is sufficient to focus on irreducible *-representations to detect hyperrigidity. Arveson's hyperrigidity conjecture. An operator system 𝒮 is hyperrigid if every irreducible *-representation of C^*(𝒮) is a boundary representation for 𝒮. To be precise, we should point out that Arveson was more cautious and restricted the operator system in his conjecture to be separable. We explain why this conjecture is especially sensible in that case. We may think of an arbitrary *-representation as some kind of integral of a family of irreducible *-representations against some measure. Since the irreducible *-representations are all assumed to have the unique extension property with respect to 𝒮, the question then becomes whether this property is preserved by the integration procedure. This rough sketch can be made precise, and in fact this was the philosophy used by Arveson in <cit.>.
One of the main contributions therein <cit.> establishes that if the result of the integration procedure has the unique extension property with respect to, then the integrand must have it almost everywhere. Arveson's hyperrigidity conjecture essentially asserts the converse. Note that in the “atomic" situation where the integral is in fact a direct sum, this converse does indeed hold <cit.>.Finally, we note that we choose not to make separability of our operator systems a blanket assumption, although such conditions will occasionally make an appearance for technical reasons throughout.§ CHARACTERIZING HYPERRIGIDITY VIA STATES In this section, we make partial progress towards verifying the hyperrigidity conjecture and provide several different characterizations of hyperrigidity using states. Before proceeding, we make an observation that will be used numerous times throughout. Letbe an operator system and let=^*(). Letπ:→B()̋be a unital*-representation and letΠ:→B()̋be a unital completely positive extension ofπ|_. Then, we haveπ()=^*(π())=^*(Π())⊂^*(Π()).The basic tool of this section is the following.Letbe an operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. Then, we have that ψ∘Π=ψ∘π whenever ψ is a unital completely positive map on ^*(Π()) with the property that ψ|_π() is pure.Recall that π()⊂^*(Π()) by (<ref>). Let ϕ=ψ|_π() which is pure by assumption. Let (σ_ψ,_ψ, V_ψ) and (σ_ϕ,_ϕ,V_ϕ) denote the Stinespring representations for ψ and ϕ respectively. By Lemma <ref>, we see that there is an isometry W:_ϕ→_ψ with the property that WV_ϕ=V_ψ andW^*σ_ψ(π(a))W=σ_ϕ(π(a))for every a∈.Since π and Π agree on , we see that the mapa↦ W^*σ_ψ(Π(a))W, a∈is a unital completely positive extension of σ_ϕ∘π|_. Because ϕ is pure, we infer that σ_ϕ is irreducible. In particular, σ_ϕ∘π is an irreducible *-representation of , and thus is a boundary representation for .We conclude that W^*σ_ψ(Π(a))W=σ_ϕ(π(a))for every a∈. Hence, using that WV_ϕ=V_ψ we obtainψ(Π(a)) =V^*_ψσ_ψ(Π(a))V_ψ=V^*_ϕ W^*σ_ψ(Π(a))WV_ϕ=V_ϕ^*σ_ϕ(π(a))V_ϕ=ϕ(π(a))=ψ(π(a))for every a∈, and therefore ψ∘Π=ψ∘π. Our next task is to reformulate Lemma <ref> in a language that is conveniently applicable to our purposes in the paper. Letbe a unital^*-algebra and let⊂be a self-adjoint subspace. Letbe a collection of states on. We say that the states inseparateif for every non-zero self-adjoint elements∈we have thatsup_ψ∈|ψ(s)|>0.Letbe an operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. The following statements are equivalent.(i) We have π=Π. (ii) Every pure state on ^*(Π()) restricts to a pure state on π(). (iii) There is a family of states on ^*(Π()) which separate(Π-π)() andrestrict to pure states on π().If π=Π, then ^*(Π())=π() so that (i) implies (ii). It is trivial that (ii) implies (iii) since (Π-π)()⊂^*(Π()) by (<ref>). Finally, assume that there is a familyof states on ^*(Π()) which separate (Π-π)() andrestrict to pure states on π(). To establish Π=π, it suffices to show that sup_ψ∈|ψ(Π(a)-π(a))|=0for every self-adjoint element a∈. This follows from an application of Lemma <ref>. We conclude that (iii) implies (i). 
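As a concrete aside (our own toy computation, not part of the original argument), the extension bounds recorded in Subsection 2.3, which are used repeatedly in this section, can be verified numerically in the simplest non-trivial case: the diagonal subalgebra of the 2×2 matrices. The Python sketch below checks that the maximal value of a state extension at the self-adjoint element t = E_12 + E_21 agrees with the infimum over dominating diagonal elements; the weight p of the state on the first diagonal entry is a placeholder.

import numpy as np

# Toy check of max_{psi in Ext(phi, M_2)} psi(t) = inf{ phi(s) : s diagonal, s >= t },
# for the diagonal subalgebra of the 2x2 matrices.
p = 0.3                                   # phi(diag(x, y)) = p*x + (1-p)*y; p is a placeholder
t = np.array([[0.0, 1.0], [1.0, 0.0]])    # self-adjoint element of M_2

# Right-hand side: diag(x, y) >= t holds iff x >= 0, y >= 0 and x*y >= 1,
# so it suffices to minimise p*x + (1-p)/x over x > 0 on the boundary y = 1/x.
xs = np.linspace(0.05, 20.0, 200001)
rhs = np.min(p * xs + (1.0 - p) / xs)

# Left-hand side: extensions of phi are psi(a) = tr(a rho) with rho = [[p, c], [conj(c), 1-p]]
# positive, i.e. |c| <= sqrt(p(1-p)); the maximiser of psi(t) = 2 Re(c) is the rank-one rho below.
rho = np.array([[p, np.sqrt(p * (1 - p))], [np.sqrt(p * (1 - p)), 1.0 - p]])
lhs = np.trace(t @ rho)

print(lhs, rhs)                           # both ~ 0.9165 = 2*sqrt(p*(1-p))

Both quantities come out to 2√(p(1−p)), in accordance with the proposition, and the gap between this value and its negative counterpart illustrates just how non-unique state extensions can be.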
In view of the previous statement, we note in passing that it is generally not true that if every state on a unital^*-algebrarestricts to be pure on a unital^*-subalgebra, then=. Indeed, simply consider the trivial case of=I.We extract an easy consequence related to hyperrigidity. Letbe an operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. Then, π=Π if and only if Π()⊂π().Assume that Π()⊂π(). Then, we have π()=Π()⊂Π()⊂π()which implies that π()=^*(π())=^*(Π()).Thus, the pure states on π() coincide with those on ^*(Π()), and Theorem <ref> implies that π=Π. The converse is trivial.In light of Theorem <ref>, it behooves us to understand the states on a unital^*-algebrawhich restrict to be pure ona unital^*-subalgebra. Fixing a stateψonand allowingto vary (while still being non-trivial), it is sometimes possible to arrange for the restrictionψ|_to be pure as well; see <cit.> and references therein. Typically however, one does not expect purity of the restriction, as easy examples show. Let _2 be the complex 2× 2 matrices and let {e_1,e_2} be the canonical orthonormal basis of ^2. Choose non-zero complex numbers γ_1,γ_2 such that |γ_1|^2+|γ_2|^2=1 and putξ=γ_1 e_1+γ_2 e_2 .Define a state ω on _2 asω(a)=⟨ aξ,ξ⟩,a∈_2.The GNS representation of ω is seen to be unitarily equivalent to the identity representation on _2, which is irreducible. Thus, ω is pure. Note however that the restriction of ω to the commutative ^*-subalgebra ⊕⊂_2 is not multiplicative since both γ_1 and γ_2 are non-zero, and therefore the restriction is not pure.Nevertheless, the insight provided by Theorem <ref> will guide us throughout the paper, and it already contains non-trivial information regarding the hyperrigidity conjecture as we proceed to show next. First, we need a technical tool.Letbe an operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. Fix a state ϕ on π() and an element a∈ such that π(a)-Π(a) is self-adjoint. Then, we have thatsup{ϕ(π(c)):c∈, π(c)≤π(a)-Π(a)}≤ 0.By (<ref>), we have π()⊂^*(Π()). Put x=π(a)-Π(a) ∈^*(Π()). We infer from Lemma <ref> that ψ(x)=0 for every state ψ on ^*(Π()) such that ψ|_π() is pure. In particular, if c∈ satisfies π(c)≤ x and ω is a pure state on π(), then we see that ω(π(c))=ψ(π(c))≤ψ(x)=0for every state ψ on ^*(Π()) such that ψ|_π()=ω. By the Krein-Milman theorem, the state ϕ lies in the weak-* closure of the convex hull of _p(π()), and thussup{ϕ(π(c)):c∈,π(c)≤ x}≤ 0. Letbe an operator system and let=^*(). We assume that every irreducible*-representation ofis a boundary representation for. Further, letπ:→B()̋be a unital*-representation and letΠ:→B()̋be a unital completely positive map which agrees withπon. If the hyperrigidity conjecture holds, then we would haveπ=Π. In other words, the self-adjoint subspace(π-Π)()would be trivial. The next development, which is one of the main result of this section, establishes that this subspace cannot contain any strictly positive element, thus supporting Arveson's conjecture.Letbe an operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . 
Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_.Then, the subspace (π-Π)() contains no strictly positive element.Let a∈ and assume that the element x=π(a)-Π(a) is strictly positive, so that x≥δ I for some δ>0. We infer thatsup{ϕ(π(c)):c∈,π(c)≤ x}≥ϕ(π(δ I))=δfor every state ϕ on , which contradicts Lemma <ref>.Until now, the underlying theme of this section has been the purity of restrictions of states to^*-subalgebras. The dual process of extending states from a^*-algebra to a larger one is also relevant for hyperrigidity, and we explore this idea next. We start by clarifying the relation between the unique extension property for states and the corresponding property for*-representations.Letbe a unital ^*-algebra and let ⊂ be an operator system. The following statements hold.(1) Assume that every pure state onhas the unique extension property with respect to . Then, every irreducible *-representation ofhas the unique extension property with respect to . (2) Assume that every state onhas the unique extension property with respect to . Then, every unital *-representation ofhas the unique extension property with respect to . In particular, in the case where =^*() we conclude thatis hyperrigid.The two statements are established via near identical arguments, so we intertwine their proofs.Let π:→ B()̋ be a unital *-representation (irreducible in the case of (1)) and let Π:→ B()̋ be a unital completely positive map which agrees with π on . We must show that π=Π on , for which it is sufficient to establish that ⟨π(a) ζ,ζ⟩=⟨Π(a) ζ,ζ⟩for every unit vector ζ∈$̋ and everya∈. Indeed, in that case, for eacha∈the numerical radius ofπ(a)-Π(a)∈ B()̋is0, and thusπ(a)=Π(a). Fix henceforth a unit vectorζ∈$̋ and consider the state χ ondefined asχ(a)=⟨π(a)ζ,ζ⟩,a∈.If π is irreducible, it is routine to verify that the GNS representation of χ is unitarily equivalent to π, whence χ must be pure in this case. Consider now another state ψ ondefined asψ(a)=⟨Π(a)ζ,ζ⟩,a∈.We see that ψ and χ agree on , whence they agree onby assumption. In other words,⟨π(a) ζ,ζ⟩=⟨Π(a) ζ,ζ⟩,a∈and the proof is complete. In the classical case where^*()is commutative, the pure states coincide with the irreducible*-representations, whence the converse of part (1) of Theorem <ref> holds. For general operator systemshowever, it can happen thatis hyperrigid while there are some pure states on^*()which do not have the unique extension property with respect to. We provide an elementary example. Let {e_1,e_2} denote the canonical orthonormal basis of ^2. Consider the associated standard matrix units E_12,E_21∈_2. Let ⊂_2 be the operator system generated by I,E_12,E_21. Then, _2=^*().For 1≤ k ≤ 2, we let χ_k be the vector state on _2 defined asχ_k(a)=⟨ ae_k, e_k⟩,a∈_2.We see that the GNS representations of χ_1 and χ_2 are unitarily equivalent to the identity representation on _2, which is irreducible. Thus, χ_1 and χ_2 are both pure. Moreover, every element s∈ has the forms=c_0 I+c_12E_12+c_21E_21for some c_0,c_12,c_21∈. We note thatχ_k(c_0 I+c_12E_12+c_21E_21)=c_0,1≤ k ≤ 2so that χ_1|_=χ_2|_ while χ_1≠χ_2. Thus, χ_1 does not have the unique extension property with respect to .On the other hand, it is well-known that up to unitary equivalence the only unital *-representations of _2 are multiples of the identity representation. The identity representation is a boundary representation forby Arveson's boundary theorem <cit.>. 
Since the unique extension property is preserved under direct sums <cit.>, we conclude that every unital *-representation of _2 has the unique extension property with respect to . Our next task is to relate the unique extension property to the pure restriction property.For this purpose, we recall some well-known facts which follow easily from standard convexity arguments (see Subsection <ref> about the notation used here). Let ⊂ be operator systems. (1)Let χ be a pure state onwhich has the unique extension property with respect to . Then,the restriction χ|_ is pure. (2)Let ω be a pure state on . Then, (ω,) is a weak-* closed convex subset of () whose extreme points are pure states on . In particular, ω admits a unique state extension toif and only if it admits a unique pure state extension to .In view of this interplay between the unique extension property for states and the pure restriction property, we give another characterization of hyperrigidity.Letbe an operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. The following statements are equivalent.(i) We have π=Π. (ii) Every pure state on ^*(Π()) has the unique extension property with respect to π(). (iii) There is a family of pure states on ^*(Π()) which separate (Π-π)() and have the unique extension property with respect to π(). If π=Π, then ^*(Π())=π() so that (i) implies (ii). It is trivial that (ii) implies (iii) since (Π-π)()⊂^*(Π()) by (<ref>). Assume that there is a family of pure states on ^*(Π()) which separate (Π-π)() and have the unique extension property with respect to π(). By Lemma <ref>, this family consists of states which restrict to be pure on π(). Thus, π=Π by virtue of Theorem <ref>, and (iii) implies (i).§ THE UNIQUE EXTENSION PROPERTY FOR STATES In the previous section, we gave several different characterizations of hyperrigidity in terms of states. In particular, Theorem <ref> provides motivation to examine the unique extension property for states in greater detail. This is the task we undertake in this section. First, we remark that these uniqueness considerations for pure stateson a maximal abelian self-adjoint subalgebra ofB()̋were at the heart of the famous Kadison-Singer problem <cit.> which was solved in <cit.>. The case of general subalgebras has also been studied extensively; see <cit.> and references therein.We now turn to examining the unique extension property for states which are not pure. In the separable setting, those are exactly the states which are given as the integration of a collection of pure states with respect to some probability measure (see Theorem <ref>). More generally, we have the following. Letbe a unital ^*-algebra and let ⊂ be an operator system.Let ψ be a state onsuch thatψ=∫_()χ dμ(χ)for some Borel probability measure μ on (). Assume that ψ has the unique extension property with respect to . If χ is a state onsatisfying μ({χ})>0, then χ has the unique extension property with respect to .Fix a state χ_0 onwith the property that μ({χ_0})>0. Choose a state _0 onwhich agrees with χ_0 on . Next, define a state ϕ onasϕ(a)=μ({χ_0})_0(a)+∫_()∖{χ_0}χ(a)dμ(χ),a∈.Then, we see that ϕ and ψ agree on , and thus by assumption we must have that ϕ=ψ. 
Therefore, for each a∈ we find thatμ({χ_0})_0(a) =ϕ(a)-∫_()∖{χ_0}χ(a)dμ(χ)=ψ(a)-∫_()∖{χ_0}χ(a)dμ(χ)=μ({χ_0})χ_0(a).Since μ({χ_0})>0, we conclude that _0=χ_0 and the proof is complete. In particular, the previous proposition implies that if a state with the unique extension property is given as some finite convex combination of states, then all of those have the unique extension property as well. The question asking whether the condition onχbeing an “atom" ofμcan be removed from the statement appears to be difficult. A related fact is known to hold at least in the case whereis separable; see <cit.>. In the setting of that paper however, states are replaced by*-representations and it is systematically assumed that=^*(). Under these conditions, the unique extension property is known to be equivalent to a dilation theoretic maximality property <cit.>. This important characterization was first discovered in <cit.> and exploited with great success in <cit.>. It plays a crucial role in Arveson's proof of <cit.>, and the lack of an analogue in our context is a major obstacle to adapting his ideas.In the other direction, we exhibit an example which shows that integrating a collection of pure states with the unique extension property against some probability measure does not necessarily preserve the unique extension property. Let _2⊂^2 denote the open unit ball and let _2 be its topological boundary, the sphere. Let A(_2) denote the ball algebra, that is the algebra of continuous functions on _2 which are holomorphic on _2. Endow this algebra with the supremum norm over _2. By means of the maximum modulus principle, we may regard A(_2) as a unital closed subalgebra of C(_2). Let =A(_2)+A(_2)^*⊂ C(_2)be the operator system generated by A(_2) inside of C(_2). For every λ∈_2, denote by _λthe state onuniquely determined by _λ(f)=f(λ), f∈ A(_2).It is a classical fact <cit.> that _2 is the Choquet boundary of A(_2). Hence, for each ζ∈_2 the state _ζ onhas a unique extension to a state χ_ζ on C(_2). This state χ_ζ is in fact the character on C(_2) of evaluation at ζ, and in particular it is pure.Consider now the unique rotation invariant regular Borel probability measureσ on _2, and let ψ_σ denote the state on C(_2) of integration against σ. By virtue of Cauchy's formula <cit.>, we have that ψ_σ(f)=f(0) for every f∈ A(_2). On the other hand, let μ denote Lebesgue measure on the circle {(ζ_1,ζ_2)∈_2: |ζ_1|=1,ζ_2=0} and let ψ_μ denote the state on C(_2) of integration against μ. Then, ψ_μ≠ψ_σ. The one-variable version of Cauchy's formula shows that ψ_μ(f)=f(0) for every f∈ A(_2). In particular, we conclude that ψ_σ does not have the unique extension property with respect to . Finally, note thatψ_σ=∫__2χ_ζdσ(ζ)and we saw in the previous paragraph that each χ_ζ is pure and has the unique extension property with respect to . From the point of view of hyperrigidity, we see that Theorem <ref> offers some flexibility, in the sense that it only requires that there be sufficiently many states with the unique extension property. Accordingly, we next aim to identify a class of natural examples where the unique extension property is satisfied by a separating family of states. We start with a general result.Recall that ifis a closed two-sided ideal of a^*-algebra, thenadmits a contractive approximate identity. 
In other words, there is a net(e_λ)_λ∈Λof positive elementse_λ∈such thate_λ≤1for everyλ∈Λand with the property thatlim_λbe_λ-b=lim_λe_λ b-b=0for everyb∈.Letbe a unital ^*-algebra and let ⊂ be a closed two-sided ideal with contractive approximate identity (e_λ)_λ∈Λ. Let χ be a state onsuch that lim_λχ(ae_λ)=χ(a)for every a∈. Then, χ has the unique extension property with respect to + I.Let ψ be a state onwhich agrees with χ on + I. Let (σ_ψ,_ψ,ξ_ψ) be the associated GNS representation, where ξ_ψ∈_ψ is a unit cyclic vector.Put _=[σ_ψ()_ψ], which is an invariant subspace for σ_ψ(). We can decompose _ψ as_ψ=_⊕_'and accordingly we haveσ_ψ(a)=σ_ψ(a)|__⊕σ_ψ(a)|__',a∈.If we let π'_:→ B(_') be the unital *-representation defined asπ'_(a)=σ_ψ(a)|__',a∈then it is readily verified that π'_()={0}. Hence, if we decomposeξ_ψ=ξ_⊕ξ'_∈_⊕_'then we observe that χ(b)=ψ(b)=⟨σ_ψ(b)ξ_ψ,ξ_ψ⟩=⟨σ_ψ(b)ξ_,ξ_⟩for every b∈. Note however that1=χ(I)=lim_λχ(e_λ)whence χ|_=1. We conclude that ξ_=1 and ξ'_=0, whence ξ_ψ=ξ_∈_.A standard verification then yieldslim_λσ_ψ(e_λ)ξ_ψ=ξ_ψin the norm topology of _ψ. On the other hand, we have that ae_λ∈ for each a∈and for each λ∈Λ, and thusχ(a) =lim_λχ(ae_λ)=lim_λψ(ae_λ)=lim_λ⟨σ_ψ(ae_λ)ξ_ψ,ξ_ψ⟩=lim_λ⟨σ_ψ(a)σ_ψ(e_λ)ξ_ψ,ξ_ψ⟩=⟨σ_ψ(a)ξ_ψ,ξ_ψ⟩=ψ(a).This completes the proof.We can now identify natural examples where many states have the unique extension property. Recall that a subset⊂B()̋is said to be non-degenerate if=$̋. Let ⊂ B()̋ be a unital ^*-algebra and let ⊂ be a closed two-sided ideal which is non-degenerate. Let X∈ B()̋ be a positive trace class operator with X=1, and let τ_X be the state ondefined asτ_X(a)=(aX),a∈.Then, τ_X has the unique extension property with respect to + I. Let (e_λ)_λ be a contractive approximate identity for . By assumption, we know that =̋. A standard calculation then shows that (e_λ)_λ∈Λ converges to the identity operator in the strong operator topology of B()̋. Since τ_X is weak-* continuous, we conclude thatlim_λτ_X(ae_λ)=τ_X(a), a∈.By virtue of Theorem <ref>, we conclude that τ_X has the unique extension property with respect to + I. Note that Theorem <ref> implies in particular that ^*(Π())=π() if there is a family of pure states on ^*(Π()) which separate (Π-π)() and have the unique extension property with respect to π(), assuming that every irreducible *-representation ofis a boundary representation for . We point out here that it is notgenerally the case that two unital ^*-algebras ⊂ coincide whenever there is a family of pure states onwhich separateand have the unique extension property with respect to . The next example illustrates this phenomenon, along with the various properties of states considered thus far. Let $̋ be an infinite dimensional Hilbert space. Letbe the^*-algebra generated by the identityIand the ideal of compact operators()̋. Clearly,≠B()̋. Recall that any non-degenerate*-representation of()̋is unitarily equivalent to some multiple of the identity representation. Standard facts about the representation theory of^*-algebras (see the discussion preceding <cit.>) then imply that any unital*-representation ofB()̋is unitarily equivalent to 𝕀^(γ)⊕π_Qwhereγis some cardinal number andπ_Qis a*-representation ofB()̋which annihilates()̋. In light of the GNS construction, this shows that a pure state onB()̋is either a vector state or it annihilates()̋. For a general stateψonB()̋, we have the decompositionψ=τ+ψ_Qwhereτis a positive weak-*continuous linear functional onB()̋, andψ_Qis a positive linear functional onB()̋which annihilates()̋. 
Furthermore, there is a positive trace class operatorX_ψ∈B()̋such that τ(a)=(aX_ψ), a∈ B()̋.We nowcarefully analyze the states onB()̋using this description.First, note that ifψ=τ+ψ_Qwhere bothτandψ_Qare non-zero, then the restrictionρ=ψ|_is not pure. For thenρ-τ|_andρ-ψ_Q|_are positive linear functionals. However,τ|_andψ_Q|_cannot be linearly dependent as they are both non-zero, andψ_Qannihilates()̋whileτdoes not. Second, assume thatψ=τ+ψ_Qwhereψ_Qis non-zero. We claim thatψdoes not have the unique extension property with respect to. Indeed, sinceB()̋/()̋is not merely one-dimensional andψ_Q(I)≠0, there exists a positive linear functionalχonB()̋which annihilates()̋and satisfiesχ(I)=ψ_Q(I)whileχ≠ψ_Q. Then, the stateτ+χagrees withψon, yet it is distinct fromψ.Third, assume thatψ=ψ_Q. We claim thatψrestricts to be pure on. To see this, putρ=ψ|_and suppose that there are statesϕ_1,ϕ_2onwith the property thatρ=1/2(ϕ_1+ϕ_2).Then, we haveϕ_1(K)=-ϕ_2(K)for everyK∈()̋. Sinceϕ_1andϕ_2are positive, we conclude thatϕ_1(K)=ϕ_2(K)=0wheneverK∈()̋is positive. Using the Schwarz inequality for states <cit.>, we see that|ϕ_1(K)|^2≤ϕ_1(K^*K)=0, K∈()̋.Henceϕ_1annihilates()̋, and so doesϕ_2by the same argument. Sinceϕ_1(I)=ϕ_2(I)=1we must haveϕ_1=ϕ_2=ρ. Therefore,ρis pure.Finally, assumethatψ=τ. Then,ψhas the unique extension property with respect toby virtue of Corollary <ref>. Also, it is readily seen fromLemma <ref> thatψrestricts to be pure onif and only ifX_ψhas rank one (i.e.ψis a vector state). § UNPERFORATED PAIRS OF SUBSPACES IN A ^*-ALGEBRA In the previous section, we focused on the unique extension property for states, partly because it provides a means to produce a family of states on a^*-algebra with the pure restriction property (see Theorem <ref> and its proof). In this section, we explore a different path and introduce a concept, which, under appropriate conditions, also leads to the identification of an abundance of states that restrict to be pure. Letbe a unital^*-algebra. Letandbe self-adjoint subspaces of. We say that the pair(,)is unperforated if for every pair of self-adjoint elementsa∈,b∈such thata≤b, we can find another self-adjoint elementb'∈with the property thatb'≤aanda≤b'≤b. Clearly, the pair(,)is automatically unperforated if⊂. We provide now an example of an unperforated pair(,)for which there are self-adjoint elementsa∈,b∈witha≤bsuch that no elementb'∈can be chosen to satisfya≤b'≤bandb'=a.Let _3 denote the 3× 3 complex matrices. Considers=[ -200;0 -10;00 -1 ]∈_3andt=[100;0 -20;001 ]∈_3.Let = s and = t, which are both self-adjoint subspaces of _3. Let a=α s and b=β t for some α∈ and β∈. Assume that a≤ b, so that[ -2α 0 0; 0-α 0; 0 0-α ]≤[β00;0 -2 β0;00β ].This is equivalent to the inequalities-2α≤β,-α≤β≤α /2.In particular, we see that α≥ 0 and |β|≤α. Thus, b=2|β|≤ 2α=a.We conclude that the pair (,) is unperforated.In fact, it has an additional noteworthy property. Choose α=1 and β=1/2. Then, we trivially have that-2α≤β,-α≤β≤α /2so the corresponding elements a∈ and b∈ satisfy a≤ b as seen above. If λ∈ satisfies a≤λ t≤ b, then [ -200;0 -10;00 -1 ]≤[λ00;0 -2 λ0;00λ ]≤[ 1/2 0 0; 0-1 0; 0 0 1/2 ]which forces λ=1/2. We infer that λ t=1<a.We will give furtherexamples of unperforated pairs below (see Proposition <ref>).In the meantime, we illustrate their usefulness for our purposesby leveraging their defining property. Letbe a unital ^*-algebra, let ⊂ be a self-adjoint subspace and let ⊂ be a separable operator system. Assume that the pair (,) is unperforated. 
Then, for every self-adjoint element s∈, there is a state ψ onwhich restrict to be pure onand such that |ψ(s)|=s. In particular, there is a family of states onwhich separateand restrict to be pure on .Fix a self-adjoint element s∈. It is no loss of generality to assume that s=1. Upon replacing s with -s, we can find a state θ onwith the property that θ(s)=1. Sinceis assumed to be separable we may invoke Theorem <ref> to find a Borel probability measure μ concentrated on _p() with the property that θ(t)=∫__p()ω(t) dμ(ω),t∈.Assume on the contrary that for each pure state χ on , we have thatmax_ψ∈(χ,)|ψ(s)|<1.We will derive a contradiction by showing that μ(_p())=0. To see this, first use Lemma <ref>. We infer that for every pure state χ onwe have thatinf{χ(t):t∈, t≥ s}<1,and thus there is a self-adjoint element t_χ∈ such that t_χ≥ s and χ(t_χ)<1. Since the pair (, ) is unperforated, there is t'_χ∈ such that t'_χ≤ 1 and s≤ t'_χ≤ t_χ. In particular, we note that χ(t'_χ)<1. Consider now the weak-* open setA_χ={ω∈_p(): ω(t'_χ)<1}.Then, χ∈ A_χ and we see that_p()=∪_χ∈_p()A_χ.Moreover, since t'_χ≤ 1 and1=θ(s)≤θ(t'_χ)= ∫__p()ω(t'_χ) dμ(ω)we find μ(A_χ)=0. By assumption,is separable and thus so is the subsetQ={t'_χ:χ∈_p()}.Accordingly let {χ_n}_n∈ be a countable subset of _p() such that {t'_χ_n}_n∈ is dense in Q. Let χ∈_p() and ω∈ A_χ, so thatω(t'_χ)=1-for some >0. There is N∈ such that t'_χ_N-t'_χ</2 whence ω(t'_χ_N)<ω(t'_χ)+/2=1-/2and ω∈ A_χ_N. This shows that_p()=∪_χ∈_p()A_χ= ∪_n∈A_χ_n.Since μ(A_χ_n)=0 for every n∈, we conclude that μ(_p())=0. This contradicts the fact that μ has total mass 1.Based on Theorem <ref>, we can now relate unperforated pairs and hyperrigidity. Letbe a separable operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. Then,the pair ((Π-π)(), π()) is unperforated if and only if Π=π.If Π=π, then (Π-π)()={0}⊂π() so thatthe pair ((Π-π)(), π()) is trivially unperforated.Conversely, assume that the pair ((Π-π)(), π()) is unperforated. By Theorem <ref>, there is a family of states on ^*(Π()) which separate (Π-π)() and restrict to be pure on π(). Then Π=π by virtue of Theorem <ref>. Next, weexhibit a non-trivial condition which ensures that a pair(,)is unperforated. Letbe a unital ^*-algebra. Letandbe self-adjoint subspaces ofsuch thatis unital. Assume thatcommutes with . Then, the pair (,^*()) is unperforated.Let a∈,b∈^*() be self-adjoint elements such that a≤ b. Define a continuous function f:→ as f(t)= t if|t|≤a aift>a-aift<-a .Observe that f(a)=a and that f(b)≤a by choice of f. Now, since a≤ b we must have -a I≤ b and thus the spectrum of b is contained in [-a,b]. Furthermore, we have that f(t)≤ t for every t≥ -a. These two observations together show that f(b)≤ b. We claim that a ≤ f(b).To see this, we note that ^*() commutes with ^*() sinceandare self-adjoint, whence the unital ^*-algebra ^*(a,b,I) is commutative. Therefore, there is a compact Hausdorff space Ω and a unital *-isomorphism Φ:^*(a,b,I)→ C(Ω). Put ϕ_a=Φ(a), ϕ_b=Φ(b). Recalling that a=f(a), the claim is equivalent to the fact that f∘ϕ_a≤ f∘ϕ_b on Ω.Since we have that a≤ b, it follows that ϕ_a≤ϕ_b.The function f is non-decreasing, whence f∘ϕ_a≤ f∘ϕ_b on Ω and the claim is established. Finally, the proof is completed by choosing b'=f(b). In particular, we single out the following noteworthy consequence. 
Letbe a separable operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation and let Π:→ B()̋ be a unital completely positive extension of π|_. Assume that (Π-π)() and π() commute. Then, Π=π.Simply combine Proposition <ref> with Corollary <ref>. In trying to verify that a general pair(,^*())is unperforated,one may hope to proceed as in the proof of Proposition <ref> and use the functional calculus to “truncate"binside of^*()to have norm at mosta. However, in general it is not clear that this truncation should still dominatea. Indeed, the non-decreasing functionfdefined in the proof is not operator monotone. In fact, there are many simple instances of non-unperforated pairs. Let =⊕ and let ⊂_2 be the self-adjoint subspace generated by the matrix[ 0 1; 1 0 ].Then, the pair (,) is not unperforated. Indeed, consider a=[ 0 2; 2 0 ]∈,b=[ 1 0; 0 5 ]∈and note thatb-a=[1 -2; -25 ]is positive, whence a≤ b. Letb'=[ x 0; 0 y ]∈be self-adjoint such that a≤ b' and b'≤a=2.Then,b'-a=[x -2; -2y ]≥ 0.In particular, we see that x≥ 0, y≥ 0 and xy≥ 4. Since b'≤ 2, we conclude that max{x,y}≤ 2. Hence, x=y=2 so thatb'=[ 2 0; 0 2 ].But thenb-b'=[ -10;03 ]is not positive.In view of this difficulty, a pressing question emerges: how common are unperforated pairs? We saw in Proposition <ref> that they can be found easily in the presence of some form of commutativity, but Example <ref> indicates the situationmay be bleak in general. Accordingly we aim to introduce flexibility in the defining condition for a pair to be unperforated. The key property we require is the following. A^*-algebrais said to have the weak expectation property<cit.> if for every injective*-representationπ:→B(_̋π), there is a unital completely positive map E_π:B(_̋π)→π()”satisfyingEπ(a)=π(a)for everya∈(see for instance <cit.> for details).The next development shows that if⊂are unital^*-algebras, then the weak expectation property formay be viewed as a variation on the fact that the pair(,)is unperforated. Interestingly, this fact uses (albeit indirectly) some recent technology from the theory of tensor products of operator systems. Letbe a unital ^*-algebra and let ⊂ be a unital separable ^*-subalgebra with the weak expectation property. Let a∈ be a self-adjoint element and let >0. Then, there is a sequence (β_n)_n of self-adjoint elements inwith the following properties.(1) We haveβ_n≤ (1+)a for every n∈ andlim sup_n→∞β_n≤a.(2) We havelim sup_n→∞ψ(β_n)≤inf{ψ(b):b∈, b≥ a}and sup{ψ(c): c∈,c≤ a}≤lim inf_n→∞ψ(β_n)for every state ψ on .Assume that ⊂⊂ B()̋. Consider the sets_a={b∈:b≥ a}, Ł_a={c∈:c≤ a}. Sinceis separable, so are _a and Ł_a. Thus, there are countable dense subsets {u_n}_n∈⊂_a, {ℓ_n}_n∈⊂Ł_a. Becausehas the weak expectation property, it follows from <cit.> that it has the so-called tight Riesz interpolation property in B()̋. Noting thatis unital and that-(1+ n^-1)aI< a<(1+ n^-1)aI,n∈this interpolation property guarantees that for each n∈ we can find a self-adjoint element β_n∈ satisfying -(1+ n^-1)aI< β_n<(1+ n^-1)aIandℓ_j- n^-1I<β_n<u_k+n^-1 Ifor every 1≤ j,k≤ n. In particular, we note that β_n≤ (1+)a,n∈andlim sup_n→∞β_n≤a.Moreover it follows from the construction of the sequence (β_n)_n that if ψ is a state onthensup_m∈ψ(ℓ_m)≤lim inf_n→∞ψ(β_n)≤lim sup_n→∞ψ(β_n)≤inf_m∈ψ(u_m).On the other hand, we have thatinf_m∈ψ(u_m)= inf{ψ(b):b∈, b≥ a}andsup_m∈ψ(ℓ_m)=sup{ψ(c): c∈,c≤ a}by density. 
Hence,lim sup_n→∞ψ(β_n)≤inf{ψ(b):b∈, b≥ a}andsup{ψ(c): c∈,c≤ a}≤lim inf_n→∞ψ(β_n). Of course, the weak expectation property arises naturally without the need for any kind of commutativity, so that Properties (1) and (2) from Theorem <ref> constitute a flexible substitute for the fact that the pair(,)is unperforated. We substantiate this claim in what follows. We start with a concrete observation. Let $̋ be an infinite dimensional separable Hilbert space. The unital separable^*-algebra=()̋+ Iis nuclear since()̋is nuclear<cit.>. In particular, it has the weak expectation property <cit.>. Next, letθbe a state onB()̋which has the unique extension property with respect to. By Example <ref> we conclude that there is a positive trace class operatorX_θ∈ B()̋with(X_θ)=1and such thatθ(a)=(aX_θ),a∈ B()̋.Upon applying the spectral theorem toX_θ, we may find a sequence of positive numbers(t_n)_n∈and a sequence of orthonormal vectors(ξ_n)_n∈such thatθ(a)=(aX_θ)=∑_n=1^∞ t_n ⟨ aξ_n,ξ_n⟩for everya∈ B()̋. In particular, we see that∑_n=1^∞ t_n=1. Fix now a self-adjoint elementa∈ B()̋. A moment's thought reveals that there must beξ∈{ξ_n}_n∈with the property that|⟨ aξ,ξ⟩|≥ |θ(a)|.Furthermore, if we denote byχthe vector state onB()̋corresponding toξ, we see from Example <ref> thatχrestricts to be pure on. This is a manifestation of a general phenomenon, as we show next. Letbe a unital ^*-algebra and let ⊂ be a unital separable ^*-subalgebra with the weak expectation property.Let θ be a state onwhich has the unique extension property with respect to . Then, for every self-adjoint element a∈ there is a state ψ onwhich restricts to be pure onand such that |ψ(a)|≥ |θ(a)|.Fix a self-adjoint element a∈, which we may assume is non-zero without loss of generality. The desired conclusion is unchanged if we replace a by -a, so we may assume that θ(a)≥ 0. We argue by contradiction. Assume on the contrary that for each pure state ω onwe havemax_ψ∈(ω,)|ψ(a)|<θ(a).Then, we infer from Lemma <ref> that inf{ω(b):b∈, b≥ a}<θ(a).Now, by Theorem <ref> there is a sequence (β_n)_n∈ of self-adjoint elements inwith β_n≤ 2a for every n∈ and such that sup{θ(c): c∈,c≤ a}≤lim inf_n→∞θ(β_n)andlim sup_n→∞ω(β_n)≤inf{ω(b):b∈, b≥ a}<θ(a)for every pure state ω on . Since θ is assumed to have the unique extension property with respect to , by Lemma <ref> we findθ(a)≤lim inf_n→∞θ(β_n).On the other hand, sinceis assumed to be separable we may invoke Theorem <ref> to find a Borel probability measure μ concentrated on _p() with the property that θ(b)=∫__p()ω(b) dμ(ω),b∈.Upon applying Fatou's lemma to the sequence of non-negative continuous functions ω↦ 2a-ω(β_n), ω∈_p()a simple calculation yieldslim sup_n→∞(∫__p()ω(β_n) dμ(ω))≤∫__p()(lim sup_n→∞ω(β_n) )dμ(ω).Consequentlylim sup_n→∞θ(β_n) =lim sup_n→∞(∫__p()ω(β_n) dμ(ω))≤∫__p()(lim sup_n→∞ω(β_n) )dμ(ω)<θ(a).But this implies thatθ(a)≤lim inf_n→∞θ(β_n)≤lim sup_n→∞θ(β_n)<θ(a)which is absurd. We mention a noteworthy consequence of Theorem <ref> which is related to hyperrigidity.Letbe a separable operator system and let =^*(). Assume that every irreducible *-representation ofis a boundary representation for . Let π:→ B()̋ be a unital *-representation such that π() has the weak expectation property, and let Π:→ B()̋ be a unital completely positive extension of π|_. Then, π=Π if and only if there is a family of states on ^*(Π()) which separate (Π-π)() and have the unique extension property with respect to π(). 
Assume that there is a family of states on C^*(Π(𝒜)) which separate (Π − π)(𝒜) and have the unique extension property with respect to π(𝒜). We may apply Theorem <ref> to the inclusion π(𝒜) ⊂ C^*(Π(𝒜)) (see (<ref>)) to find a (potentially different) family of states on C^*(Π(𝒜)) which separate (Π − π)(𝒜) and restrict to be pure on π(𝒜). Consequently, π = Π by virtue of Theorem <ref>. The converse is trivial. We draw the reader's attention to the main point of Corollary <ref>: unlike in Theorem <ref>, the separating family is not assumed to consist of pure states. We finish by mentioning that it would be of interest to obtain a version of Theorem <ref> or Corollary <ref> based on Theorem <ref>. It is not clear to us how this can be achieved at present. The promise of such an application of Theorem <ref> is the reason why we chose to state it in the context of ℬ being separable. The reader will notice that this condition can be removed at the cost of obtaining a net rather than a sequence. We opted for the current version as sequences seem more appropriate for arguments relying on integration techniques. | http://arxiv.org/abs/1706.08411v3 | {
"authors": [
"Raphaël Clouâtre"
],
"categories": [
"math.OA",
"math.FA"
],
"primary_category": "math.OA",
"published": "20170626143908",
"title": "Unperforated pairs of operator spaces and hyperrigidity of operator systems"
} |
Following the success of type Ia supernovae in constraining cosmologies at lower redshift (z≲2), effort has been spent determining if a similarly useful standardisable candle can be found at higher redshift. In this work we determine the largest possible magnitude discrepancy between a constant dark energy ΛCDM cosmology and a cosmology in which the equation of state w(z) of dark energy is a function of redshift for high redshift standard candles (z≳2). We discuss a number of popular parametrisations of w(z) with two free parameters, w_zCDM cosmologies, including the Chevallier-Polarski-Linder (CPL) parametrisation and a generalisation thereof, nCPL, as well as the Jassal-Bagla-Padmanabhan parametrisation. For each of these parametrisations we calculate and find extrema of Δμ, the difference between the distance modulus of a w_zCDM cosmology and a fiducial ΛCDM cosmology as a function of redshift, given 68% likelihood constraints on the parameters P=(Ω_m,0, w_0, w_a). The parameters are constrained using cosmic microwave background, baryon acoustic oscillations, and type Ia supernovae data using CosmoMC. We find that none of the tested cosmologies can deviate more than 0.05 mag from the fiducial ΛCDM cosmology at high redshift, implying that high redshift standard candles will not aid in discerning between a w_zCDM cosmology and the fiducial ΛCDM cosmology. Conversely, this implies that if high redshift standard candles are found to be in disagreement with ΛCDM at high redshift, then this is a problem not only for ΛCDM but for the entire family of w_zCDM cosmologies. cosmology: large-scale structure of Universe – cosmology: observations – cosmology: theory – cosmology: dark energy § INTRODUCTION The concordance ΛCDM model, containing a dark energy (Λ) component with constant equation of state and a cold dark matter (CDM) component, has been successful in explaining observations of a large number of cosmological probes, including supernovae (SNe), baryon acoustic oscillations (BAO) and the power spectrum of the cosmic microwave background (CMB). Anomalies do however exist <cit.>, and a number of alternatives to ΛCDM have been proposed. One proposed modification is allowing the equation of state to vary with redshift. We call this family of models w_zCDM models. A number of other alternative modifications include modified gravity, such as f(R) models <cit.> where the Ricci scalar R is replaced with a function of R, or redshift remapping <cit.> where the assumption of an a = (1+z)^-1 relation between the scale factor a and redshift z is broken in favour of a relation that is a function of redshift. In this work we focus exclusively on the first kind of models, the w_zCDM cosmologies. Type Ia supernovae (SNe) are used as standard candles at lower redshifts. At redshifts of z ≳ 2 they are less useful, in part due to the decreasing SN Ia rate <cit.>. Recently a range of high redshift standard candles have been proposed, including active galactic nuclei (AGN) <cit.>, gamma ray bursts (GRB) <cit.>, gamma ray burst supernovae (GRB-SNe) <cit.>, superluminous supernovae (SLSNe) <cit.>, quasars <cit.> and high redshift HII galaxies <cit.>. In this work we investigate the usefulness of such high redshift standard candles for constraining dark energy models. We choose neither to use a Fisher matrix approach, where ΛCDM parameters are assumed, nor to assume a ΛCDM cosmology to generate mock datasets.
Rather, we introduce the quantity Δμ(z), which is defined as the difference between the distance modulus of a w_zCDM cosmology and that of the fiducial ΛCDM cosmology as a function of redshift. The fiducial ΛCDM cosmology is in this work defined as the best fitting ΛCDM cosmology. Previous work has applied a similar approach <cit.> to argue that discerning a non-constant dark energy component from a constant dark energy will require high precision measurements. Since then the amount of data available to constrain proposed cosmologies has grown, not only in quantity but also in reach, extending to increasingly higher redshifts. This allows us to revisit this approach, specifically asking whether any w_zCDM cosmology can deviate significantly in predicted distance modulus from fiducial ΛCDM at high redshift, given current cosmological constraints. If w_zCDM cosmologies are indistinguishable from the fiducial ΛCDM cosmology at high redshifts, this would imply that high redshift (z≳2) data points will have limited usefulness over low redshift (z≲2) equivalents when discerning between the two types of cosmologies. Additionally it would imply that if precise measurements of standard candles at high redshift are shown to be in disagreement with ΛCDM cosmology, this would challenge not only ΛCDM cosmology but also the entire family of w_zCDM cosmologies. Quintessence is a proposed form of dark energy <cit.> which introduces a scalar field, minimally coupled to gravity, to explain the apparent accelerated expansion observed at low redshifts. In this work we use theoretical concepts from quintessence to guide us in determining which w_zCDM models to test. In quintessence there exist two subgroups of dark energy models, the thawing <cit.> and the freezing <cit.> models. In thawing dark energy models the scalar field is nearly frozen, the potential being dampened by Hubble friction in the early matter dominated universe, and the scalar field only starts to evolve once the field mass drops below the Hubble expansion rate. In freezing dark energy models the potential is steep enough in the early universe that a kinetic term develops. At later stages the evolution of the field, and therefore also the evolution of the equation of state, steadily slows down as the potential tends towards being shallow. These two families of models produce distinct behaviours for the evolution of the equation of state. The equation of state functions w(z) of thawing dark energy models are generically convex decreasing functions of redshift, while freezing dark energy models produce w(z) functions that are convex increasing functions of redshift. Phenomenological parametrisations of w(z) with two free parameters can produce either convex increasing or decreasing behaviour, but not both, making them more suited to fit either an underlying freezing or thawing dark energy model <cit.>. The w(z) parametrisations investigated in this work include some generalisations of ΛCDM, limited to two parameters w_0 and w_a, where w_0 is a constant term and the magnitude of w_a determines the strength of the evolution with redshift. Specifically we investigate the Chevallier-Polarski-Linder (CPL) <cit.> and Jassal-Bagla-Padmanabhan (JBP) <cit.> parametrisations as well as the nCPL generalisation of <cit.>. In section <ref> we introduce and motivate the parameter Δμ, and discuss previous work that has used a similar approach.
Then in section <ref> we discuss parametrisations of w(z) and the reasoning behind choosing the subset of parametrisations adopted in this paper. In section <ref> we describe the method used to derive extrema of Δμ for the chosen parametrisations, and in section <ref> we present the results. Finally in section <ref> we discuss the implications of our results for using high redshift standard candles to constrain dark energy models. § THE Δμ PARAMETER Previous works have discussed the utility of high redshift standard candles in constraining cosmological parameters <cit.> by using either Fisher matrix formalism or simulated data from a high redshift standard candle. While these approaches are appropriate for forecasting the constraining power of a survey, they are less suited for our purpose. Therefore we introduce the Δμ parameter, given as Δμ(z,P) = μ_w_zCDM(z,P) - μ_ΛCDM(z). Here μ_w_zCDM(z,P) is the distance modulus of a w_zCDM cosmology given a set of parameters P=(Ω_m,0, w_0, w_a), and μ_ΛCDM(z) the distance modulus of the best fitting ΛCDM cosmology, for a given redshift z. Looking at the differences in magnitude as a function of redshift predicted by different cosmologies has been done before in the literature. <cit.> plot the magnitude difference between a (Ω_m,0, Ω_Λ,0)=(0.3, 0.7) ΛCDM cosmology and a wCDM cosmology where the equation of state of dark energy is allowed to vary between w=-1.5 and w=-0.5, out to a redshift of z=1.7. With this parameter range they find a maximum magnitude difference of approximately 0.3 mag. <cit.> determine the magnitude difference between a ΛCDM cosmology with w=-1 and wCDM cosmologies with w=+1, w=0, and w=-∞, respectively, in the redshift range [1.7,3]. The maximum magnitude difference they find is less than 0.04 mag. In this work we define a method that for a chosen w_zCDM cosmology selects a range of parameter values P=(Ω_m,0, w_0, w_a) which reflects how strongly the parameters are constrained by the data. This is done by selecting all parameter values that lie within the 68% likelihood contour spanning the Ω_m,0, w_0, and w_a parameter space, given observational constraints. The method is described in detail in section <ref> and Appendix <ref>. § DARK ENERGY EQUATION OF STATE PARAMETRISATIONS Testing all proposed models for w(z) is not possible, so a sample thereof must be chosen. We use concepts from quintessence dark energy models as a guide. Specifically we apply the predicted evolution of w(z) from the two subclasses of quintessence, the thawing and freezing models. In quintessence, thawing models produce convex decreasing w(z) functions of redshift whereas freezing models produce convex increasing w(z) functions of redshift. When going beyond thawing and freezing quintessence, phenomenological models can produce concave behaviour of w(z) and also enter the phantom regime where w(z)<-1. This is illustrated in Fig. <ref>. In this work we do not exclude solutions where the w(z) function enters the phantom regime. However, we clearly indicate any results, in both figures and discussion, that arise from such phantom models. Regardless of whether the phantom regime is excluded, models better suited for fitting an underlying thawing cosmology are unable to deviate significantly from the fiducial w=-1 at high redshifts unless they also do so at low redshifts. The bulk of the current cosmological constraints exists at low redshifts.
We therefore expect that models better suited for fitting an underlying freezing dark energy are able to deviate from the fiducial ΛCDM cosmology at higher redshifts more than models better suited for fitting an underlying thawing dark energy model. This motivates dividing phenomenological models into two categories: those better suited for fitting an underlying thawing dark energy and those better suited for fitting an underlying freezing dark energy. <cit.> show that CPL, JBP, and n=3 nCPL (n3CPL) are better suited for fitting an underlying thawing cosmology and that an n=7 nCPL (n7CPL) model is better suited for fitting an underlying freezing cosmology. A phenomenological model better suited for fitting an underlying thawing dark energy can reproduce observables for a cosmology with a freezing dark energy. <cit.> show that the CPL model reproduces distances of freezing dark energy models, such as the supergravity inspired SUGRA model, to within 0.1%. However, as shown by <cit.>, fitting a freezing dark energy with a model better suited for fitting an underlying thawing dark energy, and vice versa, can lead to incorrect values of w(z=0) or phantom behaviour which are not present in the underlying cosmology. We therefore include the CPL model <cit.> w_CPL = w_0 + w_a(1-a) = w_0 + w_a z/(1+z), the JBP model <cit.> w_JBP = w_0 + w_a(1-a)a = w_0 + w_a z/(1+z)^2, and the nCPL model <cit.> w_nCPL = w_0 + w_a(1-a)^n = w_0 + w_a (z/(1+z))^n, where we, guided by Fig. 16 of <cit.>, choose an n3CPL and an n7CPL cosmology. Since models appropriate for fitting an underlying freezing dark energy are much less constrained, especially in the w_a parameter, running CosmoMC until convergence takes much longer for this type of model than for the models better suited for fitting an underlying thawing dark energy. Therefore only one model of the first type is included, namely the n7CPL model. § METHOD To derive 68% likelihood constraints on the models described in the previous section, the CosmoMC <cit.> MCMC tool was utilised. All cosmologies were fit with the same data, including observations of type Ia supernovae from the Joint Light-Curve Analysis, baryon acoustic oscillations from SDSS-III and 6dF, and the cosmic microwave background from the Planck Collaboration. For more details and references to the data sets used see Appendix <ref>. To allow maximum flexibility in the fitting routines all parameters for the datasets used were kept free for CosmoMC to fit, e.g. the stretch and colour parameters α and β for the JLA sample. Since our focus is to investigate the effects of different dark energy models, and to limit computation time, we assume flatness and neglect radiation. For each of the models described above (CPL, nCPL, and JBP) we derive the 3-dimensional likelihood contours of Ω_m,0, w_0, and w_a. When searching for extrema of Δμ(P,z) as a function of redshift, only the parameter values P=(Ω_m,0, w_0, w_a) that lie on the surface of the 68% likelihood volume are used. Appendix <ref> discusses how this is possible by applying the extreme value theorem. For a Hubble constant in units of km s^-1 Mpc^-1, the distance modulus is given by μ(P,z) = 5log_10[D_L'(P,z)] + 5log_10[c/H_0] + 25 + σ_m, where D_L'(P,z) is the unitless luminosity distance (D_L' = D_L H_0 c^-1), D_L is the luminosity distance, and σ_m is a constant representing how far this magnitude is from the correct intrinsic absolute magnitude of the observed supernovae.
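For concreteness, the following is a minimal sketch (our illustration, not the actual pipeline of this work, which uses CosmoMC) of how the distance modulus above can be evaluated for a flat CPL cosmology, using the closed-form dark-energy density derived in Appendix <ref> and numerical integration for the comoving distance. All function names are ours, and radiation is neglected as described in the text.

```python
# Sketch: distance modulus mu(P, z) for a flat CPL cosmology (illustrative only).
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def E(z, om, w0, wa):
    """Dimensionless Hubble rate E(z) = H(z)/H0 for a flat CPL cosmology."""
    a = 1.0 / (1.0 + z)
    # Closed-form CPL dark-energy density (the n = 1 case of Appendix A):
    ol = (1.0 - om) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(om * (1.0 + z) ** 3 + ol)

def mu(z, om, w0, wa, H0=70.0, sigma_m=0.0):
    """Distance modulus mu(P, z) for z > 0; H0 in km/s/Mpc."""
    chi, _ = quad(lambda zp: 1.0 / E(zp, om, w0, wa), 0.0, z)
    dl_prime = (1.0 + z) * chi  # dimensionless luminosity distance (flat case)
    return 5.0 * np.log10(dl_prime) + 5.0 * np.log10(C_KMS / H0) + 25.0 + sigma_m

# Example: mu at z = 2 for Omega_m0 = 0.3, (w0, wa) = (-1, 0), i.e. LCDM.
print(mu(2.0, 0.3, -1.0, 0.0))
```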
Returning to the equation above: σ_m and 5log_10[c/H_0] are both additive terms, and they are combined into a parameter 𝐾 to be marginalised over, K = 5log_10[c/H_0] + 25 + σ_m. The marginalisation is performed by minimising the sum χ^2(𝐾) = ∑_i ( (μ_w_zCDM,i(P,z) - μ_JLA,i)/σ_i,JLA )^2 = ∑_i ( (5log_10[D_L'(P,z)_w_zCDM,i] + 𝐾 - μ_JLA,i)/σ_i,JLA )^2 by varying 𝐾. The sum runs over the JLA type Ia SNe <cit.> with distance moduli μ_JLA,i and associated uncertainties σ_i,JLA. The scheme explained above to recover 𝐾 is similar in purpose to the process described in Appendix A.1 of <cit.>. The process of finding the value of 𝐾 that minimises Eq. <ref> is carried out for all parameter values P=(Ω_m,0, w_0, w_a) that lie on the surface of the 68% likelihood contour. Having determined the value 𝐾 that minimises Eq. <ref>, we can calculate Δμ as a function of redshift for parameters P of, e.g., the CPL model: Δμ_CPL(P,z) = μ_CPL(P,z) + K_CPL(P) - ( μ_ΛCDM,bf(z) + K_ΛCDM,bf). In summary, the process to derive Δμ values for a given dark energy model is * Run CosmoMC for the given dark energy model. * Derive from the CosmoMC output 68% likelihood contours for the parameters P=(Ω_m,0, w_0, w_a). * Calculate 𝐾 for all parameter values P=(Ω_m,0, w_0, w_a) on the 68% likelihood contour. * Calculate Δμ(P,z) for all parameter values P=(Ω_m,0, w_0, w_a) on the 68% likelihood contour. The above process would not be necessary if H_0 and the absolute magnitude of the standard candle were known exactly. If they were known, their values could be substituted directly into Eq. <ref>. § RESULTS The results are shown in Fig. <ref>. Before discussing them we address two important topics. Firstly, the effects of the constant K (Eq. <ref>). Secondly, how large a part of Δμ originates from uncertainty in Ω_m,0 and how large a part comes from the choice of dark energy model and uncertainty in w_0 and w_a. In discussing these topics we only investigate the CPL model in detail and summarise results for the JBP, n3CPL, and n7CPL models; an in-depth discussion for the latter three models can be found in Appendix <ref>. §.§ Effects of K Δμ values for a representative sample of the parameters P=(Ω_m,0, w_0, w_a) lying on the 68% likelihood surface of the Ω_m,0, w_0, w_a parameter space for the CPL cosmology are shown in panel (a) of Fig. <ref>. Importantly, in this figure the marginalisation constant K from Eq. <ref> has been ignored. As one would then expect, Δμ → 0 for z → 0. However, ignoring the marginalisation constant K gives an incomplete picture. Neither the Hubble constant nor the intrinsic absolute magnitude of the type Ia SNe is known precisely. If they were, then panel (a) would be appropriate, but since they are not we must include the marginalisation parameter K. The Δμ values for the CPL cosmology, including the marginalisation constant K, are shown in panel (c) of Fig. <ref>, with the extrema of Δμ shown as dashed lines. Including the marginalisation constant introduces scatter around z ≈ 0, which is due to the fact that the values of K differ for different parameter values. Furthermore, the extrema of the Δμ values become smaller. This is to be expected, as the marginalisation process finds the value of K that minimises the difference between the distance moduli of the cosmological models and the observed SNe Ia magnitudes, for both the w_zCDM and the ΛCDM cosmology. This decreases any disagreement that might exist between the predicted distance moduli of the w_zCDM and ΛCDM cosmologies. This result also holds true for the JBP, n3CPL, and n7CPL cosmologies, i.e.
including K causes scatter around z=0 and a narrower distribution of Δμ values. §.§ Δμ contribution from Ω_m,0 versus w_0 and w_a When looking at the extrema of Δμ as a function of redshift it is not straightforward to disentangle how large a part of the magnitude difference is caused by uncertainty in Ω_m,0 and how much stems from the choice of CPL w_zCDM cosmology and the associated uncertainty in w_0 and w_a. Therefore we produce panel (b) of Fig. <ref>. The dark grey shaded area in panel (b) of Fig. <ref> is the range of possible Δμ values when all parameters, Ω_m,0, w_0, and w_a, have been allowed to vary. The light grey hatched area is where only Ω_m,0 varies and w_0 and w_a are fixed at the ΛCDM values of w_0=-1 and w_a=0 respectively. Panel (b) of Fig. <ref> illustrates that most of the magnitude discrepancy with the best fitting ΛCDM comes from the uncertainty in Ω_m,0, rather than the choice of dark energy model and uncertainty in w_0 and w_a. Figures analogous to panel (b) of Fig. <ref>, but for the JBP, n3CPL, and n7CPL cosmologies, can be found in Appendix <ref>. They likewise show that the majority of the discrepancy with the best fitting ΛCDM cosmology comes from the uncertainty in Ω_m,0, rather than the choice of w_zCDM model and uncertainty in w_0 and w_a. §.§ Thawing Models In this subsection the results for the models better suited for fitting an underlying thawing model are presented, namely the results of the CPL, JBP, and n3CPL models. In panels (c), (d), and (e) of Fig. <ref>, Δμ values are plotted for a representative sample of the parameter values P=(Ω_m,0, w_0, w_a) on the 68% likelihood contours of the CPL, JBP, and n3CPL cosmologies. The overall agreement between the models is remarkable. The extrema of Δμ for all three models are found at redshift z ∼ 2, which aligns with the prediction of ΛCDM that dark energy has negligible influence at larger redshifts. Our results are consistent with the findings of <cit.>, who use Fisher matrix analysis to show that a long redshift baseline is important to achieve tight constraints on w_0 and w_a for the CPL cosmology, but that any measurements beyond a redshift of z ∼ 2 provide negligible additional constraints compared to a lower redshift equivalent. The maximum absolute value of Δμ when placing no restrictions on the evolution of the equation of state of dark energy is approximately 0.05 mag for all models discussed in this section. When discarding results where the equation of state enters the phantom regime the lower limits are not strongly affected, but the upper extremum reduces to approximately 0.03 mag. These results imply that the additional power of high redshift standard candles to discern between ΛCDM and CPL, JBP, and n3CPL cosmology is limited when compared to low redshift standard candles. It also indicates that the CPL, JBP, and n3CPL cosmologies can only deviate slightly from ΛCDM at large redshifts. §.§ Freezing Model In this subsection the results for the model better suited for fitting an underlying freezing cosmology, namely the n7CPL model, are presented. In panel (f) of Fig. <ref> a representative sample of Δμ values, as well as the extrema as dashed lines, are plotted for the n7CPL cosmology. Overall the similarity to the corresponding plots for the CPL, JBP and n3CPL models is strong. In Fig. <ref> the Δμ extrema for all tested cosmologies are overplotted for comparison. It is apparent that the CPL, JBP, and n3CPL models give similar results at both low and high redshift.
As expected the n7CPL model has the largest extrema at high redshift, but only slightly larger values. This analysis suggests that the conclusion for the n7CPL cosmology is similar to that of the CPL, JBP and n3CPL cosmologies. Given current constraints they are all unable to deviate significantly from fiducial ΛCDM at high redshifts. For the n7CPL model most of the disagreement with the fiducial ΛCDM comes from the uncertainty in Ω_m,0, rather than the choice of dark energy model or uncertainty in w_0 and w_a, just as was the case with the CPL, JBP, and n3CPL models. § DISCUSSION Our analysis shows that none of the tested cosmologies can deviate significantly from the fiducial ΛCDM cosmology at high redshift, given current cosmological constraints. As a consequence, high redshift (z≳ 2) standard candles will not add significant additional constraints over those of equivalent low redshift standard candles when discerning between ΛCDM cosmology and CPL, JBP, n3CPL, or n7CPL cosmology. Since our analysis further shows that the bulk of the disagreement with the fiducial ΛCDM cosmology for all tested models primarily comes from the uncertainty in Ω_m,0, rather than the choice of dark energy model and uncertainty in w_0 and w_a, we conjecture that if the analysis were carried out for any other w_zCDM cosmology, it would arrive at results very similar to those of our analysis. The choice of dark energy model seems to have negligible impact on high redshift behaviour, regardless of whether we consider the models better suited for fitting an underlying freezing or thawing cosmology. From our analysis alone it is not possible to conclude that in general no other model better suited for fitting an underlying freezing dark energy could significantly deviate from fiducial ΛCDM at high redshifts. <cit.> investigated a number of freezing dark energy models. Specifically, <cit.> produced two-parameter functions to calculate observables, such as the dark energy equation of state, for a number of models including the inverse power law (IPL) and supergravity (SUGRA) models. Combined, these models span a wide range of possible behaviours for the dark energy equation of state as a function of redshift. <cit.> show that the observables of these complicated models can be approximated with a phenomenological function of two parameters. Varying just these two parameters is sufficient to reproduce the behaviour of these more complicated models with many parameters to within 2.4% accuracy in the equation of state. The findings of <cit.> are not directly transferable to our analysis, but they indicate that the complicated behaviour of these dark energy models can be reproduced reliably with just two free parameters. This suggests that even more complicated dark energy models may have very similar behaviour in the equation of state of dark energy, and that our test of the n7CPL model may therefore be representative of the much larger family of possible w_zCDM models better suited for fitting an underlying freezing dark energy. This indicates that no w_zCDM cosmology with only two free parameters for the dark energy redshift behaviour, either thawing or freezing, can deviate significantly from the best fitting ΛCDM at high redshift. At high redshift, additional observational effects besides the precision of measurements can impact the usefulness of standard candles.
<cit.> discuss the effects of gravitational lensing, noting that since the amplification probability distribution is skewed towards deamplification, gravitational lensing will introduce a bias in the measured magnitudes. <cit.> find that for a standard candle with intrinsic dispersion of 0.1 mag at redshift 2, gravitational lensing introduces an offset with a mode of 0.03 mag. This effect is of similar size to the maximum offset of 0.05 mag we find for the models tested in this work. This further strengthens the argument that if high redshift standard candles are found to be in disagreement with ΛCDM cosmology, no w_zCDM cosmology will be able to resolve that disagreement. Current or future surveys investing time into measuring high redshift (z ≳ 2) standard candles could do so for a number of reasons. If the goal is to discern any w_zCDM model of the types tested here from a standard ΛCDM cosmology, then the time is best spent observing at redshifts of z ∼ 2 or lower. However, if the goal is to determine whether the predictions of ΛCDM hold true at high redshift, then going to z ≳ 2 or higher is recommended, since in this regime there is the added bonus that if ΛCDM is found to give incorrect predictions, then so will any w_zCDM cosmology of the kinds tested in this work. § ACKNOWLEDGEMENTS The authors would like to thank Eric Linder for useful comments and Tamara Davis for many discussions on the content as well as useful comments. The Dark Cosmology Centre was funded by the Danish National Research Foundation. § DARK ENERGY EQUATION OF STATE PARAMETRISATIONS §.§ nCPL When working with an nCPL cosmology (w(a) = w_0 + w_a (1-a)^n), the solution to the continuity equation Ω_Λ(a) = Ω_Λ,0 exp[ -3 ∫_a_0^a (1+w_Λ(a'))/a' da' ] = Ω_Λ,0 f(a) for the chosen value of n determines how dark energy evolves with time in the chosen cosmology. Here f(a) is shorthand notation for the function describing the evolution of dark energy with scale factor or redshift. Ω_Λ is the energy density of dark energy normalised by the critical energy density, the subscript 0 indicating the value at the present day. The integral in the exponent of Eq. <ref> is of particular interest, as deriving a general solution would enable directly observing the behaviour of dark energy for the chosen value of n of the nCPL cosmology. Using that in the nCPL cosmology w_nCPL(a) = w_0 + w_a (1-a)^n and substituting gives ln f(a) = ∫_a_0^a -3(1+w_0 + w_a (1-a')^n)/a' da', which has the solution ln f(a) = -3[ w_a(1-a)^n ((a-1)/a)^-n F(n,a)/n + (w_0+1)log(a) ], with F(n,a) being a power series from the hypergeometric function _2F_1(-n,-n;1-n;1/a) and using that a_0=1. Although the general solution is rather unpleasant to work with, the specific cases n=1,2,3 can be easily solved. The solution for n=1 is Ω_Λ(a)_n=1 = Ω_Λ,0 a^-3(1+w_0+w_a) exp(-3w_a(1-a)). For n=2, Ω_Λ(a)_n=2 = Ω_Λ,0 a^-3(1+w_0+w_a) exp(-3/2 w_a(1-a)(3-a)). For n=3, Ω_Λ(a)_n=3 = Ω_Λ,0 a^-3(1+w_0+w_a) exp(-1/2 w_a(1-a)(2a^2-7a+11)). Inspired by <cit.>, the solution for the n=7 case is also derived: Ω_Λ(a)_n=7 = Ω_Λ,0 a^-3(1+w_0+w_a) exp[-1/140 w_a(1-a)(60a^6 - 430a^5 + 1334a^4 - 2341a^3 + 2559a^2 - 1851a + 1089)]. §.§ JBP Similarly to the approach in the previous section, the continuity equation for the JBP parametrisation, w(a)_JBP = w_0 + w_a(1-a)a, is solved, yielding Ω_Λ(a)_JBP = Ω_Λ,0 a^-3(1+w_0) exp(3/2 w_a(1-a)^2).
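As a sanity check, the closed-form solutions above can be compared against a direct numerical integration of the continuity equation. The following sketch (ours, not part of the paper) does this for the n=1 (CPL) and JBP cases; the function names are our own.

```python
# Sketch: cross-check closed-form dark-energy densities f(a) against
# numerical integration of ln f(a) = -3 * int_1^a (1 + w(a'))/a' da'.
import numpy as np
from scipy.integrate import quad

def f_numeric(a, w_of_a):
    """Numerical f(a) for an arbitrary equation of state w(a)."""
    integral, _ = quad(lambda ap: (1.0 + w_of_a(ap)) / ap, 1.0, a)
    return np.exp(-3.0 * integral)

def f_cpl_closed(a, w0, wa):
    """Closed form for n = 1 (CPL)."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

def f_jbp_closed(a, w0, wa):
    """Closed form for JBP."""
    return a ** (-3.0 * (1.0 + w0)) * np.exp(1.5 * wa * (1.0 - a) ** 2)

w0, wa = -0.9, 0.3
for a in (0.2, 0.5, 0.8):
    num = f_numeric(a, lambda ap: w0 + wa * (1.0 - ap))       # CPL: w0 + wa(1-a)
    assert np.isclose(num, f_cpl_closed(a, w0, wa), rtol=1e-6)
    num = f_numeric(a, lambda ap: w0 + wa * (1.0 - ap) * ap)  # JBP: w0 + wa(1-a)a
    assert np.isclose(num, f_jbp_closed(a, w0, wa), rtol=1e-6)
```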
§ APPLICATION OF EXTREME VALUE THEOREM When determining the extrema of Δμ in the space of 68% likelihood (Ω_m,0, w_0, w_a) values it is helpful to apply the extreme value theorem to state that extrema do exist, and that they exist only on the boundary. First, in order for this argument to hold true, we need to prove that ∂Δμ/∂λ_i ≠ 0 for all allowed values of one or more of the parameters λ_i ∈ (Ω_m,0, w_0, w_a). For the following mathematics to be as simple as possible, we focus on the case where λ_i = w_0. First, we define the needed equations. We simplify ∂Δμ/∂w_0 by noting that ∂Δμ/∂w_0 = ∂(μ_w_zCDM - μ_ΛCDM)/∂w_0 = ∂μ_w_zCDM/∂w_0 - ∂μ_ΛCDM/∂w_0 = ∂μ_w_zCDM/∂w_0 - 0 = ∂μ/∂w_0, where for simplicity we defined μ = μ_w_zCDM in the final step. Following <cit.>, the equations needed now are ∂μ/∂w_0 = 5/(D^'_M ln(10)) ∂D^'_M/∂w_0, where D^'_M is the dimensionless tangential comoving distance D^'_M = (H_0/c)D_M. In the limit where the curvature term Ω_k approaches zero, i.e. a cosmology that is close to flat, this gives lim_Ω_k → 0 ∂D^'_M/∂w_0 = ∂χ^'/∂w_0, ∂χ^'/∂w_0 = -1/2 ∫_0^z 1/E(z^')^3 ∂ℰ(z^')/∂w_0 dz^', ∂ℰ(z)/∂w_0 = Ω_x ∂f(z)/∂w_0 = 3Ω_x f(z) ln(1+z), and finally ℰ(z) = H(z)^2/H_0^2 = E(z)^2, E(z)^2 = Ω_m,0(1+z)^3 + Ω_k(1+z)^2 + Ω_Λ,0 exp(3∫_0^z [1+w(z^')] dz^'/(1+z^')). If we show that all terms added in the above equations have the same sign, that is, they are either positive or negative for all values of z, it follows that they never cancel out and will therefore never sum to zero. Noting that * E(z) is for flat cosmologies only ever positive, * f(z) > 0 for the nCPL and JBP parametrisations, * and z > 0, it follows that ∂Δμ/∂w_0 is only ever negative, if we restrict ourselves to working with flat cosmologies. Applying the extreme value theorem, it is apparent that there are no critical points, so the extreme values of Δμ exist on the boundary. This argument reduces the dimensionality of the problem by one, from three to two, saving a large amount of computation time in searching for the extreme values of Δμ. § DATA ANALYSIS In this work CosmoMC <cit.>, utilising CAMB <cit.>, PICO <cit.>, and CMBFAST <cit.>, was used extensively. The datasets used include observations of the cosmic microwave background from the Planck Collaboration <cit.> and BICEP-Planck <cit.>, observations of baryon acoustic oscillations from SDSS-III <cit.>, 6dF <cit.>, and MGS <cit.>, as well as supernova data from the Joint Light-Curve Analysis <cit.>. For analysis of the output chains of CosmoMC this work made use of Astropy, a community-developed core Python package for Astronomy <cit.>. For details on how the output chains of CosmoMC were analysed and the plots of the results were produced, see the GitHub repository at <https://github.com/per-andersen/Deltamu>. § ADDITIONAL PLOTS In Fig. <ref> plots analogous to panels (a) and (b) of Fig. <ref> are shown for the JBP, n3CPL, and n7CPL models. First, in panels (a), (c), and (e) it is shown that the largest contribution to the extrema of Δμ comes from the uncertainty in Ω_m,0, rather than the choice of dark energy model and uncertainty in w_0 and w_a, just like for the CPL model. In panels (b), (d), and (f) the effect of K on the JBP, n3CPL, and n7CPL models is shown to be similar to that on the Δμ of the CPL model; not including K removes the scatter around z=0 and widens the distribution of Δμ somewhat, increasing the extrema of Δμ at high redshifts.
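To make the marginalisation over 𝐾 of Section <ref> concrete: since χ^2(𝐾) is quadratic in 𝐾, its minimiser has the closed form of an inverse-variance-weighted mean of the residuals. The sketch below (our illustration; the array inputs stand in for the JLA moduli and the model predictions at the JLA redshifts) implements this and the resulting Δμ.

```python
# Sketch: analytic minimisation of chi^2(K) and evaluation of Delta-mu.
import numpy as np

def best_K(mu_model, mu_obs, sigma_obs):
    """Minimise chi^2(K) = sum(((mu_model + K - mu_obs)/sigma_obs)^2) over K.
    chi^2 is quadratic in K, so the minimum is the weighted mean of residuals."""
    w = 1.0 / np.asarray(sigma_obs) ** 2
    r = np.asarray(mu_obs) - np.asarray(mu_model)
    return np.sum(w * r) / np.sum(w)

def delta_mu(mu_wz, K_wz, mu_lcdm_bf, K_lcdm_bf):
    """Delta-mu(P, z) = (mu_wzCDM + K_wzCDM) - (mu_LCDM,bf + K_LCDM,bf)."""
    return (mu_wz + K_wz) - (mu_lcdm_bf + K_lcdm_bf)
```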
"authors": [
"Per Andersen",
"Jens Hjorth"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170626180003",
"title": "Discerning Dark Energy Models with High-Redshift Standard Candles"
} |
On tree-decompositions of one-ended graphs Johannes Carmesin Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom, Florian Lehner Mathematics Institute, University of Warwick, Zeeman Building, Coventry CV4 7AL, United Kingdom Florian Lehner was supported by the Austrian Science Fund (FWF), grant J 3850-N32, and Rögnvaldur G. Möller Science Institute, University of Iceland, IS-107 Reykjavík, Iceland Rögnvaldur G. Möller acknowledges support from the University of Iceland Research Fund =========================================================================================================================================== A graph is one-ended if it contains a ray (a one-way infinite path) and whenever we remove a finite number of vertices from the graph then what remains has only one component which contains rays. A vertex v dominates a ray in the end if there are infinitely many paths connecting v to the ray such that any two of these paths have only the vertex v in common. We prove that if a one-ended graph contains no ray which is dominated by a vertex and no infinite family of pairwise disjoint rays, then it has a tree-decomposition such that the decomposition tree is one-ended and the tree-decomposition is invariant under the group of automorphisms. This can be applied to prove a conjecture of Halin from 2000 that the automorphism group of such a graph cannot be countably infinite, and solves a recent problem of Boutin and Imrich. Furthermore, it implies that every transitive one-ended graph contains an infinite family of pairwise disjoint rays. § INTRODUCTION The ends of a graph G are defined as equivalence classes of rays (one-sided infinite paths). Two rays are said to belong to the same end if for every finite set F of vertices the same component of G∖F contains infinitely many vertices from both rays. The ends of a graph are a tool to capture the large-scale structure of an infinite graph. In particular, if a graph has more than one end then the graph can be said to be tree-like.
In <cit.>, Dunwoody and Krön constructed so-called structure trees to describe this tree-likeness of a graph with more than one end. A structure tree is constructed from a nested Aut(G)-invariant family of separations of the graph such that the action of Aut(G) on the family of separations gives an action of Aut(G) on the tree. The work of Dunwoody and Krön builds on the book <cit.> by Dicks and Dunwoody; see also a recent account of this theory in <cit.>. Tree-decompositions (defined in Section <ref>) are very similar to structure trees; they play a central role in Graph Minor Theory and are a standard tool in Graph Theory to describe tree-structure <cit.>. If the nested set of separations of a tree-decomposition is Aut(G)-invariant then the tree-decomposition is also Aut(G)-invariant and Aut(G) acts on the decomposition tree. Dunwoody and Krön apply their construction to obtain a combinatorial proof of a generalization of Stallings' theorem on groups with at least two ends. This method has multifarious other applications, as demonstrated by Hamann in <cit.> and Hamann and Hundertmark in <cit.>. However, for graphs with only a single end, such as the 2-dimensional grid, these structure trees and the related tree-decompositions may be trivial. Hence such a structural understanding of this class of graphs remains elusive. If the end of a one-ended graph has finite vertex degree, that is, there is no infinite set of pairwise vertex-disjoint rays belonging to that end, then Halin showed in 1965 <cit.> that there are tree-decompositions displaying the end. A precise definition can be found towards the end of Section <ref>, but essentially this means that the tree also has only one end and that end corresponds to the end of the graph in a precise way. Nevertheless, for these tree-decompositions to be of any use for applications as above, one needs them to have the additional property that they are invariant under the group of automorphisms. Unfortunately such tree-decompositions do not exist for all graphs in question, see Example <ref> below. Note that in this example there is a vertex v dominating the end, that is, for every ray in the end there are infinitely many paths connecting v to the ray such that any two of these paths have only the vertex v in common. In this paper we construct such tree-decompositions if the end is not dominated. Every one-ended graph whose end is undominated and has finite vertex degree has a tree-decomposition that displays its end and that is invariant under the group of automorphisms. A very simple example is shown in Figure <ref>. This better structural understanding leads to applications similar to those for graphs with more than one end. Indeed, below we deduce from Theorem <ref> a conjecture of Halin from 2000, and answer a recent question of Boutin and Imrich. A further application was pointed out by Hamann. Applications. In <cit.> Halin showed that one-ended graphs with vertex degree equal to one cannot have a countably infinite automorphism group. Not completely satisfied with his result, he conjectured that this extends to one-ended graphs with finite vertex degree. Theorem <ref> implies this conjecture. Given a graph with one end which has finite vertex degree, its automorphism group is either finite or has at least 2^ℵ_0 many elements. Theorem <ref> can be further applied to answer a question posed by Boutin and Imrich, who asked in <cit.> whether there is a locally finite graph with only one end and linear growth and countably infinite automorphism group.
Theorem <ref> implies a negative answer to this question as well as strengthenings of further results of Boutin and Imrich; see Section <ref> for details. Finally, Matthias Hamann[personal communication] pointed out the following consequence of Theorem <ref>. The end of a transitive one-ended graph must have infinite vertex degree. This extends a result of Thomassen for locally finite graphs, see <cit.>. We actually prove a stronger version of Theorem <ref>, see Theorem <ref>, with `quasi-transitive'[Here a graph is quasi-transitive if there are only finitely many orbits of vertices under the automorphism group.] in place of `transitive'. The rest of this paper is structured as follows: in Section <ref> we set up all necessary notations and definitions. As explained in <cit.>, there is a close relation between tree-decompositions and nested sets of separations. In this paper we work mainly with nested sets of separations. In Section <ref> we prove Theorems <ref> and <ref> that imply Theorem <ref>, and Section <ref> is devoted to the proof of Theorem <ref> and its implications for the work of Boutin and Imrich. Finally, in Section <ref> we prove Theorem <ref> that implies Theorem <ref>. Many of the lemmas we apply in this work were first proved by Halin. Since in some cases we need slight variants of the original results and also since Halin's original papers might not be easily accessible, proofs of some of these results are included in appendices. § PRELIMINARIES Throughout this paper V(G) and E(G) denote the sets of vertices and edges of a graph G, respectively. We refer to <cit.> for all graph theoretic notions which are not explicitly defined. A separator in a graph G is a subset S ⊆ V(G) such that G-S is not connected. We say that a separator S separates vertices u and v if u and v are in different components of G-S. Given two vertices u and v, a separator S separates u and v minimally if it separates u and v and the components of G-S containing u and v both have the whole of S in their neighbourhood. The following lemma can be found in Halin's 1965 paper <cit.>, and also in his later paper <cit.>, there with a different proof. Given vertices u and v and k∈ℕ, there are only finitely many distinct separators of size at most k separating u and v minimally. A separation is a pair (A,B) of subsets of V(G) such that A ∪ B = V(G) and there is no edge connecting A∖B to B∖A. This immediately implies that if u and v are adjacent vertices in G then u and v are both contained in either A or B. The sets A and B are called the sides of the separation (A,B). A separation (A,B) is said to be proper if both A∖B and B∖A are non-empty, and then A∩B is a separator. A separation (A,B) is tight if every vertex in A∩B has neighbours in both A∖B and B∖A. The order of a separation is the number of vertices in A∩B. Throughout this paper we will only consider separations of finite order. The following is well-known. (See <cit.>) Given any two separations (A,B) and (C,D) of G, the sum of the orders of the separations (A∩C, B∪D) and (B∩D, A∪C) is equal to the sum of the orders of the separations (A,B) and (C,D). In particular, if the orders of (A,B) and (C,D) are both equal to k then the sum of the orders of (A∩C, B∪D) and (B∩D, A∪C) is equal to 2k. The separations (A,B) and (C,D) are strongly nested if A⊆C and D⊆B. They are nested if they are strongly nested after possibly exchanging `(A,B)' by `(B,A)' or `(C,D)' by `(D,C)'.
That is, (A,B) and (C,D) are nested if one of the following holds: * A⊆C and D⊆B, * A⊆D and C⊆B, * B⊆C and D⊆A, * B⊆D and C⊆A. We say a set 𝒮 of separations is nested if any two separations in it are nested. A ray in a graph G is a one-sided infinite path v_0, v_1, … in G. The sub-rays of a ray are called its tails. Given a finite separator S of G, there is for every ray γ a unique component of G-S that contains all but finitely many vertices of γ. We say that γ lies in that component of G-S. Given a separation (A,B) of finite order one can similarly say that γ lies in one of the sides of the separation. Two rays are in the same end if they lie in the same component of G-S for every finite separator S of G. Clearly, this is an equivalence relation. An equivalence class is called a (vertex) end[A notion related to `vertex ends' is that of `topological ends'. In this paper we are mostly interested in graphs where no vertex dominates a vertex end. In this context the two notions of end agree.]. An alternative way to define ends is to say that two rays R_1 and R_2 are in the same end if there are infinitely many pairwise disjoint R_1-R_2 paths. (Given subsets X and Y of the vertex set, an X-Y path is a path that has its initial vertex in X and terminal vertex in Y and every other vertex in neither X nor Y. In the case where X={x} we speak of x-Y paths instead of X-Y paths, and if Y={y} we speak of x-y paths.) An end ω lies in a component C of G-S if every ray that belongs to ω lies in C. Clearly, every end lies in a unique component of G-S for every finite separator S, and if (A,B) is a separation of finite order then an end either lies in A or in B. A vertex v ∈ V(G) dominates an end ω of G if there is no separation (A,B) of finite order such that v ∈ A∖B and ω lies in B. Equivalently, v dominates ω if for every ray R in ω there are infinitely many paths connecting v to R such that any two of them only intersect in v. The vertex degree of an end ω is equal to a natural number k if the maximal cardinality of a family of pairwise disjoint rays belonging to the end is k. If no such number k exists then we say that the vertex degree of the end is infinite. Halin <cit.> (see also <cit.>) proved that if the vertex degree of an end is infinite then there is an infinite family of pairwise disjoint rays belonging to the end. Ends with finite vertex degree are sometimes called thin and those with infinite vertex degree are called thick. The following lemma is well-known. A proof can be found in Appendix A. (Cf. <cit.>) Let G be a connected graph and ω an end of G having finite vertex degree. Then there are only finitely many vertices in G that dominate the end ω. In this paper we are focusing on 1-ended graphs where the end ω has vertex degree k. In the following definition we pick out a class of separations that are relevant in this case. Let G be an arbitrary graph. If ω is an end of G that has vertex degree k then we say that a separation (A,B) is ω-relevant if it has the following properties: * the order of (A,B) is exactly k, * A∖B is connected, * every vertex in A∩B has a neighbour in A∖B, * ω lies in B, and * there is no separation (C,D) of order <k such that A ⊆ C and ω lies in D. Define S_ω as the set of all ω-relevant separations. The following characterization of ω-relevant separations is a Menger-type result. A proof based on <cit.> and <cit.> is contained in Appendix A. Let G be an arbitrary graph. Suppose ω is an end of G with vertex degree k.
* If (A,B) is an ω-relevant separation then there is a family of k pairwise disjoint rays in ω such that each of them has its initial vertex in A∩B. * Conversely, if (A,B) is a separation of order k such that A∖B is connected, every vertex in A∩B has a neighbour in A∖B, the end ω lies in B and there is a family of k disjoint rays in ω such that each of these rays has its initial vertex in A∩B, then the separation (A,B) is ω-relevant. In particular, for (A,B) ∈ S_ω the component of G-(A∩B) in which ω lies has the whole of A∩B in its neighbourhood, and hence every separation in S_ω is tight. Note that the set A∖B completely determines the ω-relevant separation (A,B). The relation (A,B) ≤ (C,D) :⟺ A⊆C and B⊇D defines a partial order on the set of all separations, so in particular on the set S_ω. Since (C,D) is a tight separation, the condition A⊆C implies that D⊆B. This is shown in <cit.> and the argument goes as follows: Suppose that D⊄B and x∈D∖B. Then x∈A⊆C, so x∈(C∩D)∖B. Because (C,D) is a tight separation, x has a neighbour y∈D∖C. But x∈A∖B and hence y must also be in A. But y∉C, contradicting the assumption that A⊆C. Hence D⊆B and (A,B)≤(C,D) ⟺ A⊆C. The next result follows from results of Halin in <cit.>. These results are in turn proved by using Menger's Theorem. For the convenience of the reader a detailed proof is provided in Appendix A. Let G be a connected 1-ended graph such that the end ω is undominated and has finite vertex degree k. Then there is a sequence {(A_n, B_n)}_n≥0 of ω-relevant separations such that the sequence of sets B_n is strictly decreasing and for every finite set of vertices F there is a number n such that F⊆ A_n∖B_n. We will not use the following in our proof. Theorem <ref> is also true if we leave out the assumption that G is one-ended (and replace `the end ω' by `there exists an end ω that'). §.§ Automorphism groups An automorphism of a graph G = (V,E) is a bijective function γ: V → V that preserves adjacency and whose inverse also preserves adjacency. Clearly an automorphism γ also induces a bijection E → E which, by abuse of notation, we will also call γ. The automorphism group of G, i.e. the group of all automorphisms of G, will be denoted by Aut(G). Let Γ be a subgroup of Aut(G). For a set D⊆ V(G) we define the setwise stabiliser of D as the subgroup Γ_{D} = {γ∈Γ | γ(D)=D} and the pointwise stabiliser of D is defined as Γ_(D) = {γ∈Γ | γ(d)=d for all d∈D}. The setwise stabiliser is the subgroup of all elements in Γ that leave the set D invariant and the pointwise stabiliser is the subgroup of all those elements in Γ that fix every vertex in D. If D⊆ V(G) is invariant under Γ then we use Γ^D to denote the permutation group on D induced by Γ, i.e. Γ^D is the group of all permutations σ of D such that there is some element γ∈Γ whose restriction to D is equal to σ. Note that Γ_(D) is a normal subgroup of Γ_{D} and the index of Γ_(D) in Γ_{D} is equal to the number of elements in (Γ_{D})^D. The full automorphism group of a graph has a special property relating to separations. Suppose γ is an automorphism of a graph G and that γ leaves both sides of a separation (A,B) invariant and fixes every vertex in the separator A∩B. Then the full automorphism group contains automorphisms σ_A and σ_B such that σ_A acts like γ on A and fixes every vertex in B, and vice versa for σ_B. Informally one can describe this property by saying that the pointwise stabiliser (in the full automorphism group) of a set D of vertices acts independently on the components of G-D.
We will refer to this property as the independence property. There is a natural topology on Aut(G), called the permutation topology: endow the vertex set with the discrete topology and consider the topology of pointwise convergence on Aut(G). Clearly, the permutation topology also makes sense for any group of permutations of a set. The following lemma is a special case of a result in <cit.>. In particular it tells us that the limit of a sequence of automorphisms is again an automorphism. This fact will be central to the proof of Theorem <ref>. The automorphism group of a graph is closed in the set of all permutations of the vertex set endowed with the topology of pointwise convergence. The next result is also a special case of a result from Cameron's book referred to above. This time we look at <cit.>. The automorphism group of a countable graph is finite, countably infinite or has at least 2^ℵ_0 elements. § INVARIANT NESTED SETS In this section we will prove Theorems <ref> and <ref>. Theorem <ref> follows from Theorem <ref>. The following two facts about sequences of nested separations will be useful at several points in the proofs. Let G be a connected graph. Assume that (A_i,B_i)_i∈ℕ is a sequence of proper separations of order at most some fixed natural number k. Assume also that A_i ⊊ A_i-1, every A_i∖B_i is connected, and every vertex in A_i∩B_i has a neighbour in A_i∖B_i. Define X as the set of vertices contained in infinitely many A_i. Then * X ⊆ B_i for all but finitely many i, * there is a unique end μ which lies in every A_i, and * x ∈ X if and only if x dominates μ. First observe that X = ⋂_i∈ℕ A_i because the sequence A_i is decreasing. Let X' be the set of vertices in X with a neighbour outside of X. For every x ∈ X' we can find a neighbour y of x and i_0∈ℕ such that y ∉ A_i for every i ≥ i_0. Since the edge xy must be contained in either A_i or B_i we conclude that x ∈ B_i and thus x ∈ A_i∩B_i for i ≥ i_0. Hence there is i_1∈ℕ such that X' ⊆ A_i∩B_i for every i ≥ i_1. The order of each separation is at most k, so X' contains at most k vertices. Now for i ≥ i_1 every path from X∖B_i to A_i∖(X∪B_i) must pass through X' and thus through B_i. Since A_i∖B_i is connected this means that one of the two sets must be empty, i.e., either X∖B_i = ∅ or X∖B_i = A_i∖B_i. Assume that the latter is the case. Then A_i contains at most k vertices which are not contained in X and the same is clearly true for every A_j for j>i. This contradicts the fact that the sequence A_i was assumed to be infinite and strictly decreasing. We conclude that X ⊆ B_i for i ≥ i_1. Note that this implies that X=X', because if i≥ i_1 then X⊆ A_i∩B_i and every vertex in A_i∩B_i has a neighbour in A_i∖B_i. To see that there is an end μ which lies in every A_i we construct a ray which has a tail in each A_i. For this purpose pick for i≥ i_1 a vertex v_i ∈ A_i∖X and a path P_i connecting v_i to v_i+1 in A_i∖X. This is possible because A_i∖X contains A_i∖B_i and is connected (A_i∖B_i is connected and every vertex in B_i∩A_i has a neighbour in A_i∖B_i). No vertex lies on infinitely many paths P_i because no vertex is contained in infinitely many sets A_i∖X. Hence the union of the paths P_i is an infinite, locally finite graph and thus contains a ray. This ray belongs to an end μ which lies in every A_i. Finally we need to show that every vertex in X dominates the end μ. Without loss of generality we can assume that X ⊆ B_i for all i. So, let R be a ray in μ and x ∈ X. We will inductively construct infinitely many paths from x to R which only intersect in x.
Assume that we have already constructed some finite number of such paths. Since all of them have finite length, there is an index i such that A_i∖B_i doesn't contain any vertex in their union. The ray R has a tail contained in A_i∖B_i and since x ∈ A_i∩B_i we know that x has a neighbour in A_i∖B_i. Finally, A_i∖B_i is connected, so we can find a path connecting x to the tail of R which intersects the previously constructed paths only in x. Proceeding inductively we obtain infinitely many paths connecting x to R which pairwise only intersect in x, completing the proof of the Lemma. We would now like to construct a subset of the set S_ω of ω-relevant separations that is both nested and invariant under all automorphisms, and from that set we construct a tree. The following two lemmas give us important properties of nestedness when we restrict to ω-relevant separations. Two separations (A,B), (C,D) in S_ω are nested if and only if they are either comparable with respect to ≤, or A⊆D. First assume that the two separations are nested. It is impossible that B⊆C or D⊆A since the end ω lies in B and D, but not in C or A. Hence, if the two separations are not comparable, then we know that A⊆D and C⊆B. For the converse implication first consider the case that A⊆D. We want to show that C⊆B. Assume for a contradiction that there is a vertex x in C∖B. This vertex must be contained in A⊆D and hence in the separator C∩D. By the definition of S_ω the vertex x must have a neighbour y in C∖D. Then y ∉ A and x ∉ B, contradicting the fact that the edge xy must lie in either A or B, as (A,B) is a separation. Finally, note that any two separations in S_ω that are comparable with respect to ≤ are obviously nested. (Analogous to <cit.>) For each (A,B)∈ S_ω there are only finitely many (C,D)∈ S_ω not nested with (A,B). The first step is to show that if (C,D) is not nested with (A,B) then (C,D) separates some vertices v and w in A∩B. Then we show that it separates them minimally. Since A∩B is finite there are only finitely many possibilities for the pair v, w and we can apply Lemma <ref> to deduce the result. First suppose for a contradiction that (C∖D)∩(A∩B) is empty. Since C∖D is connected, it must be a subset of A∖B or of B∖A. As every vertex in C∩D has a neighbour in C∖D it follows that C⊆A in the first case, whilst C⊆B in the second. In both cases (A,B) and (C,D) are nested by Lemma <ref>, contrary to our assumption. Hence there exists a vertex v∈(C∖D)∩(A∩B). Note that by letting the separations (A,B) and (C,D) switch roles we see that (A∖B)∩(C∩D) is also non-empty. Since the separation (C,D) is in S_ω, there is by Lemma <ref> a family of k disjoint rays that all have their initial vertices in C∩D. Because ω lies in D, all vertices in these rays, except their initial vertices, are contained in the component of D∖C that contains ω. Pick a vertex v' from (A∖B)∩(C∩D). This vertex v' is the initial vertex of one of the rays mentioned above. Since ω lies in B these rays must contain a vertex w from A∩B, and as mentioned above w is contained in the component of D∖C that contains ω.
Now we have shown that (C,D) separates the two vertices v and w. This separation is minimal because v is in C∖D, and C∖D is connected and has C∩D as its neighbourhood, while w is contained in the component of G-(C∩D) that contains ω, and that component has the whole of C∩D as its neighbourhood. Let G be a one-ended graph whose end ω is undominated and has finite vertex degree k. Recall that by Lemma <ref> there are no infinite decreasing chains in S_ω — such a chain would define an end μ≠ω, contradicting the assumption that G has only one end. In particular, S_ω has minimal elements. Assign recursively an ordinal α(A,B) to each (A,B)∈ S_ω by the following method: if (A,B) is minimal (with respect to ≤ in S_ω) then set α(A,B)=0; otherwise define α(A,B) as the smallest ordinal β such that α(C,D)<β for all separations (C,D)∈ S_ω such that (C,D)<(A,B). For v∈ V(G), let S_ω(v) be the set of those separations (A,B) in S_ω with v∈ A∩B. Now set α(v) = sup{α(A,B) | (A,B)∈ S_ω(v)}. If it so happens that S_ω(v) is empty then α(v)=0. For a vertex set S, we let α(S) be the supremum over all α(v) with v∈ S. Note that the functions α(A,B) and α(v) are both invariant under the action of the automorphism group of G. Below is a construction of a graph where α takes ordinal values that are not natural numbers. However, it is not difficult to show that for a locally finite connected graph the α-values are always natural numbers.
Thus α(C,D)≤α(A,B).By the previous Lemma there are at most finitely many vertices v in C such thatα(v)>α(C,D).Suppose for a contradiction that v is such a vertex and there is novalue of n such that α(v)< α(A_n, B_n).Then we can find a sequence {(C_n,D_n)}_n≥ 0 of separations in (v) such that α(C_1, D_1)<α(C_2, D_2)<⋯and for every n there is a number r_n such α(A_n, B_n)<α(C_n_r, B_n_r). ByLemma <ref> we may assume that for all values of n and m the separations(C_n, D_n) and (C_m, B_m) are nested.Say that a pair of separations {(C_n, D_n), (C_m,D_m)} is blue if the separations are comparable with respect to ≤ and red otherwise.ByRamsey's Theorem, see e.g. <cit.>, there is an infinite set ofseparations such that all pairs from that set have the same colour.If all pairs from that set wereblue then we could find an infinite increasing or a decreasing chain. ByLemma <ref>(2) there cannot be an infinite descending chain of separations and if therewas an infinite increasing chain in (v) then, by Lemma <ref>(3) with the roles of the A_i's and the B_i's reversed,v would be a dominating vertex for the endω, contrary to assumptions.Hence all pairs from that infinite set must be red and we canconclude that there is an infinite set of separations in the family {(C_n, D_n)}_n≥ 0 suchthat no two of them are comparable with respect to ordering.We may assume that if n and m aredistinct then (C_n, D_n) and (C_m, D_m) are not comparable and then C_n D_n and C_mD_m are disjoint.Start by choosing n such that v∈ A_n B_n and then choose m such thatnone of the vertices in A_n∩ B_n is in C_m D_m.There must be some vertex u thatbelongs both to B_n and C_m D_m.The set (C_m D_m)∪{v} is connected and thus itcontains a v-u path P. But v∈ A_n B_n and u∈ B_n A_n and the path P containsno vertices from A_n∩ B_n. We have reached a contradiction.Hence our original assumptionmust be wrong. Let X be a connected set of vertices which cannot be separated from the end ω by aseparation of order less than k. A separation (A,B)∈ is called X-nice, if forevery v ∈ A∩ B we have α(v) > α(X) and there is some φ∈(G) such that φ(X) A (then we must have φ(X)⊆ A B). Let N(X) be the set of allX-nice separations inwhich are minimal with respect to ≤, i.e. 𝒩 (X)contains all X-nice separations (A,B) ∈such that A is minimal with respect toinclusion.Let G be a graph with only one end ω.Assume that ω is undominated and has vertexdegree k.Suppose (X,Y)∈.Then N(X) is non-empty.For each automorphism φ of G there is a unique element (A,B) in N(X) such that φ(X)⊆ A. If (A,B) and (C,D) are not equal and in N(X), then A D and C B. Furthermore, any two elements of N(X) can be mapped onto each other byan automorphism.The existence of an X-nice separation follows from Lemma <ref>.Minimal such separations exist because by Lemma <ref> an infinite descending chainwould imply that G had another end μ≠ω.Let (A,B) and (C,D) be elements of N(X). Suppose φ(X)A and ψ(X) C, where φ, ψ∈(G). 
Note that φ(X) is disjoint from C∩D because α(φ(X))=α(X), which is strictly less than α(v) for any v∈ C∩D. Hence it is a subset of either C∖D or D∖C. We next prove that if (A,B) and (C,D) are not equal, then A⊆D and C⊆B.

First we consider the case that φ(X) is a subset of C∖D. Our aim is to show that (A,B) and (C,D) are equal. This also implies that (A,B) is the unique element in N(X) such that φ(X)⊆ A. Our strategy will be to construct an X-nice separation that is ≤ both of them, and by minimality of (A,B) and (C,D) we will conclude that it must be equal to both of them. Note that φ(X) is included in (C∖D)∩(A∖B). Let A' be the connected component of (C∖D)∩(A∖B) that contains the connected set φ(X), together with the separator of (A∩C, B∪D). Let B' be the union of B∪D with the other components of (C∖D)∩(A∖B). Next we show that the separation (A',B') is X-nice. Since the end ω lies in B∩D, this vertex set is infinite. Because (A,B) is in S_ω, the separation (A∪C, B∩D) has order at least k. Hence by Lemma <ref>, the separation (A∩C, B∪D) has order at most k. The property that X cannot be separated from ω by fewer than k vertices implies that the separation (A',B') has order precisely k. Also, every vertex of the separator of (A',B') has a neighbour in A'∖B' and in B'∖A'. Clearly ω lies in B', and there is no separation (C',D') of order less than k such that A'⊆C' and ω lies in D', as (X,Y)∈ S_ω. Hence (A',B') is in S_ω and thus it is X-nice, as A'⊆A. Since A'⊆A, it must be that A'=A by the minimality of (A,B). Similarly, A'=C. Thus A=C and so (A,B)=(C,D). This completes the case when φ(X) is a subset of C∖D.

So we may assume that φ(X)⊆D∖C, and by symmetry that ψ(X)⊆B∖A. Consider the separations (A∩D, B∪C) and (B∩C, A∪D). They must have order at least k because φ(X)⊆ A∩D, ω lies in B∪C, and ψ(X)⊆ B∩C, ω lies in A∪D. So they must have order precisely k by Lemma <ref>. Let A' be the component of G-(B∪C) that contains φ(X), together with the separator of (A∩D, B∪C). Let B' be the union of B∪C with the other components. Similarly to the last case we show that (A',B') is X-nice. By the minimality of (A,B) it must be that A⊆D. The above argument with the separation (B∩C, A∪D) in place of (A∩D, B∪C) yields that C⊆B. This completes the proof that if (A,B) and (C,D) are not equal and in N(X), then (A,B) and (C,D) are nested.

By the above there is for each φ∈ Aut(G) a unique separation (A_φ,B_φ)∈ N(X) such that φ(X)⊆A_φ. If we apply φ^-1 to this separation we must obtain the unique separation (A,B)∈ N(X) such that X⊆A. Hence any separation of N(X) can be mapped by an automorphism to every other separation in N(X).

Let G be a connected graph with only one end ω, which is undominated and has finite vertex degree k. Then there is a nested set 𝒮 of ω-relevant separations of G that is Aut(G)-invariant. And there is a one-ended tree T and a bijection between the edge set of T and 𝒮 such that the natural action of Aut(G) on 𝒮 induces an action on T by automorphisms.

Pick some ω-relevant separation (A_0, B_0). Define a sequence (A_n, B_n) of separations as follows. For n∈ℕ_>0 pick (A_n, B_n)∈ N(A_n-1) such that A_n-1⊊A_n, which is possible by Lemma <ref>. Observe that the sequence of separations (A_n, B_n) has the same properties as the sequence in Theorem <ref>. Now let 𝒮 = {(φ(A_n), φ(B_n)) | n∈ℕ_>0, φ∈ Aut(G)}. Note that (A_0,B_0) is not an element of 𝒮.
First we prove that 𝒮 is nested. Let (φ(A_n), φ(B_n)) and (ψ(A_m), ψ(B_m)) be two different elements of 𝒮 (here φ and ψ are automorphisms of G). If m=n then they are nested by Lemma <ref>, since they both are elements of N(A_n-1). Hence assume without loss of generality that n<m. If φ(A_m)=ψ(A_m) then φ(A_n)⊆φ(A_m)=ψ(A_m), which implies that the two separations are nested. Otherwise by Lemma <ref> we have φ(A_n)⊆φ(A_m)⊆ψ(B_m), also showing nestedness, by Lemma <ref>.

Next we construct a directed graph T_+. We define T_+ as follows. Its vertex set is 𝒮. We add a directed edge from (φ(A_n),φ(B_n)) to (ψ(A_n+1), ψ(B_n+1)) if φ(A_n) is a subset of ψ(A_n+1). By Lemma <ref>, each vertex has outdegree at most one. And by the construction of 𝒮 it has outdegree at least one. The next step is to show that the graph is connected. Let (C,D)=(φ(A_n), φ(B_n)) be a vertex in T_+. Find an m≥ n such that C⊆ A_m∖B_m. Suppose for a contradiction that (φ(A_m), φ(B_m))≠(A_m, B_m). Both (φ(A_m), φ(B_m)) and (A_m, B_m) are in N(A_m-1). By Lemma <ref>, φ(A_m)⊆ B_m; but then C=φ(A_n)⊆φ(A_m)⊆ B_m, which is impossible since C⊆ A_m∖B_m and C is non-empty, (C,D) being a proper separation. Now we see that (A_m, B_m)=(φ(A_m), φ(B_m)), (φ(A_m-1),φ(B_m-1)), …, (φ(A_n), φ(B_n))=(C,D) is a path in T_+ from (A_m, B_m) to (C,D). Thus every vertex in T_+ is in the same connected component as some vertex (A_m, B_m), and since these all belong to the same component we deduce that T_+ is connected. Hence the corresponding undirected graph T is a tree. The map that sends (φ(A_n), φ(B_n)) to the edge with endvertices (φ(A_n), φ(B_n)) and (ψ(A_n+1), ψ(B_n+1)) is clearly a bijection. If the ray (A_1,B_1), (A_2, B_2), … is removed from T then what remains of T is clearly rayless, and thus the tree T is one-ended. The statement about the action of Aut(G) on T follows easily, since the properties used to define T are invariant under Aut(G).

A tree-decomposition of a graph G consists of a tree T and a family (P_t)_t∈ V(T) of subsets of V(G), one for each vertex of T, such that:

* V(G) = ⋃_t∈ V(T) P_t,
* for every edge e∈ E(G) there is t∈ V(T) such that both endpoints of e lie in P_t, and
* P_t_1∩ P_t_3⊆ P_t_2 whenever t_2 lies on the unique path connecting t_1 and t_3 in T.

The tree T is called the decomposition tree, and the sets P_t are called the parts of the tree-decomposition. We associate to an edge e=st of the decomposition tree a separation of G as follows. Removing e from T yields two components T_s and T_t. Let X_s=⋃_u∈ T_s P_u and X_t=⋃_u∈ T_t P_u. If X_s∖X_t and X_t∖X_s are non-empty (this will be the case for all tree-decompositions considered in this paper), then (X_s,X_t) is a proper separation of G. Clearly, the set of all separations associated to edges of a decomposition tree is nested. The separators A∩B of the separations associated to edges of a decomposition tree are called adhesion sets. The supremum of the sizes of adhesion sets is called the adhesion of the tree-decomposition. The tree-decompositions constructed in this paper all have finite adhesion.

Given a graph G with only one end ω and a tree-decomposition (T,P_t | t∈ V(T)) of G of finite adhesion, we say that (T,P_t | t∈ V(T)) displays ω if, firstly, the decomposition tree T has only one end, call it μ, and, secondly, for any edge st of T with μ in T_t, the associated separation (X_s,X_t) has the property that ω lies in X_t. A tree-decomposition is Aut(G)-invariant if the set S of separations associated to it is closed under the natural action of Aut(G) on S.
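The three axioms of a tree-decomposition are easy to verify mechanically on finite instances. As an illustration only (not part of any proof in this paper), the following minimal Python sketch checks the three conditions for a finite graph G, a tree T and a family of parts; the function name and the use of the networkx library are our own choices.

```python
import itertools
import networkx as nx  # assumed available; any graph library would do

def is_tree_decomposition(G, T, parts):
    """Check the three tree-decomposition axioms for a finite graph G,
    a tree T, and a dict `parts` mapping each vertex of T to a set of
    vertices of G."""
    # (1) every vertex of G is covered by some part
    if set(G.nodes()) != set().union(*parts.values()):
        return False
    # (2) every edge of G lies inside some part
    for u, v in G.edges():
        if not any({u, v} <= parts[t] for t in T.nodes()):
            return False
    # (3) P_{t1} ∩ P_{t3} ⊆ P_{t2} whenever t2 lies on the t1-t3 path in T
    for t1, t3 in itertools.combinations(T.nodes(), 2):
        for t2 in nx.shortest_path(T, t1, t3)[1:-1]:
            if not parts[t1] & parts[t3] <= parts[t2]:
                return False
    return True

# A path 0-1-2 decomposed along a two-vertex tree, with adhesion set {1}.
G = nx.path_graph(3)
T = nx.path_graph(2)
parts = {0: {0, 1}, 1: {1, 2}}
assert is_tree_decomposition(G, T, parts)
```

For the infinite graphs considered in this paper the axioms must of course be established by structural arguments, as in the proof above.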
The following implies Theorem <ref>.

Let G be a connected graph with only one end ω, which is undominated and has finite vertex degree k. Then G has a tree-decomposition (T,P_t | t∈ V(T)) of adhesion k that displays ω and is Aut(G)-invariant.

We follow the notation of the proof of Theorem <ref>. Given a vertex t of T_+, the inward neighbourhood of t, denoted by N_+(t), is the set of vertices u of T_+ such that there is a directed edge from u to t in T_+. Recall that the vertices of T_+ are (in bijection with) separations; we refer to the separation associated to the vertex t by (A_t,B_t). Given a vertex t, we let P_t=A_t∖⋃_u∈ N_+(t)(A_u∖B_u). It is straightforward that (T,P_t | t∈ V(T)) is a tree-decomposition of adhesion k (whose set of associated separations is 𝒮∪{(B,A) | (A,B)∈𝒮}). It is not hard to see that (T,P_t | t∈ V(T)) displays ω and is Aut(G)-invariant.

In this example we construct a one-ended graph G whose end is dominated and has vertex degree 1, but the graph G has no tree-decomposition of finite adhesion that is invariant under the group of automorphisms and whose decomposition tree is one-ended. We obtain G from the canopy tree by adding a new vertex adjacent to all the leaves of the canopy tree. Then we add infinitely many vertices of degree one only incident to that new vertex, see Figure <ref>. Suppose for a contradiction that G has a tree-decomposition (T,P_t | t∈ V(T)) of finite adhesion that is invariant under the group of automorphisms and such that T is one-ended. There cannot be a single part P_t that contains a ray of the canopy tree. To see that, first note that there cannot be two such parts, by the assumption of finite adhesion. Hence any such part would contain all vertices of the canopy tree from a certain level onwards. This is not possible by finite adhesion. Having shown that there cannot be a single part P_t that contains a ray of the canopy tree, it must be that every part P_t with t near enough to the end of T contains a vertex of the canopy tree. Our aim is to show that any vertex u of degree 1 is in all parts. Suppose not, for a contradiction. Then, since T is one-ended, there is a vertex t of T such that t separates in T all vertices s with u∈ P_s from the end of T. We pick t high enough in T such that there is a vertex v of the canopy tree in P_t. If P_t contained all vertices of the orbit of v, then P_t together with all parts P_s, where s has some fixed bounded distance from t in T, would contain a ray. This is impossible; the proof is similar to the proof that no single part P_t can contain a ray. Hence there is a vertex v' in the orbit of v that is not in P_t. Take an automorphism of G that fixes u and moves v to v'. As the tree-decomposition is Aut(G)-invariant, T has a vertex s such that u,v'∈ P_s but v∉ P_s. Since T is Aut(G)-invariant and one-ended, t does not separate s from the end of T. This is a contradiction, as u∈ P_s. Hence u must be in all parts. As u was arbitrary, every vertex of degree one must be in every part. So the tree-decomposition does not have finite adhesion. This is the desired contradiction. Hence such a tree-decomposition does not exist.

§ A DICHOTOMY RESULT FOR AUTOMORPHISM GROUPS

Before we turn to a proof of Theorem <ref>, we state a few helpful auxiliary results. The following lemma can be seen as a consequence of <cit.>, but for completeness a direct proof is provided in Appendix B.

If T is a one-ended tree and R is a ray in T, then every automorphism of T fixes some tail of R pointwise.

The next result is Lemma 3 in <cit.>.
For completeness a proof is included in Appendix C.

The pointwise (and hence also the setwise) stabiliser of a finite set of vertices in the automorphism group of a rayless graph is either finite or contains at least 2^ℵ_0 many elements.

The next result is an extension of Lemma <ref> to one-ended graphs where the end has finite vertex degree.

Let G be a graph with only one end ω. Assume that ω has finite vertex degree k. Let X be a finite set of vertices in G that contains all the vertices that dominate the end. If the graph G-X is connected then the pointwise stabiliser of X in Aut(G) is either finite or contains at least 2^ℵ_0 many elements.

Denote by Γ the pointwise stabiliser of X in Aut(G). If Γ is finite, then there is nothing to show, hence assume that Γ is infinite. Consider a nested Aut(G-X)-invariant set S of ω-relevant separations of G-X as in Theorem <ref> and a tree T built from this set in the way described. Clearly Γ gives rise to a subgroup of Aut(G-X), whence this nested set is Γ-invariant. Adding X to both sides of every separation in S gives rise to a new Γ-invariant set S' of nested separations such that each separation has order k+|X|. The tree we get from S' is the same as T. From now on we will work with S'.

Every element γ∈Γ induces an automorphism of T. Note that this canonical action of Γ on T is in general not faithful, i.e. it is possible that different elements of Γ induce the same automorphism of T. Let R be a ray in T and let (e_n)_n∈ℕ be the family of edges of R (in the order in which they appear on R). Let (A_n,B_n) be the separation of G corresponding to e_n. Denote by Γ_n the stabiliser of e_n in Γ. By Lemma <ref> every automorphism of T (and hence also every γ∈Γ) fixes some tail of R, so Γ_n is non-trivial for large enough n. Furthermore, Γ_n is a subgroup of Γ_m whenever n≤ m.

We claim that for all but finitely many n, we have at least one non-trivial γ in the pointwise stabiliser of B_n. To see this, let γ_1, …, γ_(k+|X|)!+1 be a set of (k+|X|)!+1 different non-trivial automorphisms in Γ. Choose n large enough such that they all are contained in Γ_n and act differently on A_n. By a simple pigeonhole argument, at least two of them, γ_1 and γ_2 say, have the same action on A_n∩ B_n. Then γ_1∘γ_2^-1 is an automorphism which fixes A_n∩ B_n pointwise, and fixes A_n setwise but not pointwise. Now, using the independence property from Section <ref> we can define an automorphism

γ(x) = γ_1∘γ_2^-1(x) if x∈ A_n∖B_n, and γ(x) = x if x∈ B_n,

with the desired properties. Note that the subgroup leaving A_n invariant in the pointwise stabiliser of B_n in Γ induces the same permutation group on the rayless graph induced by A_n in G as does the subgroup leaving A_n invariant in the pointwise stabiliser of A_n∩ B_n. Hence, if there is n∈ℕ such that the pointwise stabiliser of B_n in Γ is infinite, then this stabiliser contains at least 2^ℵ_0 many elements by Lemma <ref>. So (by passing to a tail of R) we may assume that the pointwise stabiliser of B_n is a finite but non-trivial subgroup of Γ for every n∈ℕ.

Next we claim that for every n there is a non-trivial automorphism in the pointwise stabiliser of A_n. If not, then Γ_n is finite and we choose σ∈Γ∖Γ_n. For an edge e of T, denote by T_e the component of T-e which does not contain the end of T. Clearly σ(T_e)=T_σ(e) for every edge e. In particular, if e=e_m is the last edge of R which is not fixed by σ, then clearly σ(T_e)⊆ T-T_e. Furthermore n<m, so A_n⊆A_m and B_m⊆B_n. Hence σ(A_n)⊆σ(A_m)⊆ B_m⊆B_n. Now let γ be a nontrivial automorphism in the pointwise stabiliser of B_n.
Then σ^-1∘γ∘σ is easily seen to be a nontrivial element of the pointwise stabiliser of A_n: for a∈ A_n we have σ^-1∘γ∘σ(a)=σ^-1∘σ(a)=a, since σ(a)∈ B_n is fixed by γ.

Now define an infinite sequence (γ_k)_k∈ℕ of elements of Γ as follows. Pick a nontrivial γ_1 in the pointwise stabiliser of A_1. Assume that γ_i has been defined for i<k; then let n_k be such that γ_i acts non-trivially on A_n_k for all i<k, and pick a nontrivial element γ_k in the pointwise stabiliser of A_n_k. For an infinite 0-1-sequence (r_j)_j≥ 1, define ψ_i=γ_i^r_i∘γ_i-1^r_i-1∘⋯∘γ_1^r_1; in other words, ψ_n is the composition of all γ_j with j≤ n and r_j=1. Finally define ψ to be the limit of the ψ_n in the topology of pointwise convergence. This limit exists because for j>i the restrictions of ψ_i and ψ_j to A_n_i coincide, and the A_n_i exhaust V(G). By Lemma <ref>, ψ is contained in Aut(G), and it is also in Γ because every ψ_i stabilises X pointwise. Finally assume that we have two different 0-1-sequences (r_j)_j≥ 1 and (r'_j)_j≥ 1, and let (ψ_j)_j≥ 1 and (ψ'_j)_j≥ 1 be the corresponding sequences of automorphisms. If l is the first index such that r_l≠ r'_l then the restrictions of ψ_l and ψ'_l (and hence also of ψ_i and ψ'_i for i>l) to A_n_l differ. Hence different 0-1-sequences give different elements of Γ, and Γ contains at least 2^ℵ_0 many elements.

Let G be a graph with one end which has finite vertex degree. Then Aut(G) is either finite or has at least 2^ℵ_0 many elements.

Let X be the set of vertices which dominate ω. This set is possibly empty, and by Lemma <ref> it is finite. Every automorphism stabilises X setwise. Therefore the pointwise stabiliser of X is a normal subgroup of Aut(G) with finite index. So it suffices to show that the conclusion of Theorem <ref> holds for the pointwise stabiliser Γ of X. For every component C of G-X let Γ_C be the pointwise stabiliser of X in Aut(C∪X). Then Γ_C is either finite or contains at least 2^ℵ_0 many elements by Lemma <ref> and Lemma <ref>. If |Γ_C|=2^ℵ_0 for some component C then we need do no more. So assume that all the groups Γ_C are finite. The same argument as used towards the end of the proof of Lemma <ref> (see Appendix C) now shows that either Γ is finite or has cardinality at least 2^ℵ_0.

As a corollary we can answer a question posed by Boutin and Imrich in <cit.>. In order to state this question, we first need some notation. For a vertex v in a graph G we define B_v(n), the ball of radius n centered at v, as the set of all vertices in G at distance at most n from v. We also define S_v(n), the sphere of radius n centered at v, as the set of all vertices in G at distance exactly n from v. A connected locally finite graph is said to have linear growth if there is a constant c such that |B_v(n)|≤ cn for all n=1, 2, …. It is an easy exercise to show that the property of having linear growth does not depend on the choice of the vertex v. In relation to their work on the distinguishing cost of graphs, Boutin and Imrich <cit.> ask whether there exist one-ended locally finite graphs that have linear growth and countably infinite automorphism group. If G is a locally finite graph with linear growth and v is a vertex in G, then there is a constant k such that |S_v(n)|=k for infinitely many values of n. (This is observed by Boutin and Imrich in their paper <cit.>.) From this we deduce that the vertex degree of an end of G is at most equal to k, since each ray in G must pass through all but finitely many of the spheres S_v(n). Using Theorem <ref> one can now give a negative answer to the above question.
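Before stating the corollary, the notions of balls, spheres and linear growth may be illustrated computationally. The following minimal Python sketch (our own illustration; the helper name is hypothetical) computes sphere and ball sizes by breadth-first search on a finite truncation of the double ray, a graph with linear growth in which |S_v(n)|=2 for all n≥ 1.

```python
from collections import deque

def sphere_sizes(adj, v, n_max):
    """Sizes |S_v(0)|, ..., |S_v(n_max)| via breadth-first search.
    `adj` maps each vertex to a list of its neighbours."""
    dist = {v: 0}
    queue = deque([v])
    sizes = [0] * (n_max + 1)
    sizes[0] = 1
    while queue:
        u = queue.popleft()
        if dist[u] == n_max:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                sizes[dist[w]] += 1
                queue.append(w)
    return sizes

# The double ray ..., -2, -1, 0, 1, 2, ..., truncated to {-50, ..., 50}:
adj = {i: [j for j in (i - 1, i + 1) if -50 <= j <= 50] for i in range(-50, 51)}
s = sphere_sizes(adj, 0, 20)
balls = [sum(s[: n + 1]) for n in range(21)]
print(s[:5])      # [1, 2, 2, 2, 2]  -- |S_0(n)| = 2 for n >= 1
print(balls[:5])  # [1, 3, 5, 7, 9]  -- |B_0(n)| = 2n + 1 <= 3n for n >= 1
```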
If G is a connected locally finite graph with one end and linear growth, then the automorphism group of G is either finite or contains exactly 2^ℵ_0 many elements.

Since G is locally finite and connected, the graph G is countable. Hence the automorphism group cannot contain more than 2^ℵ_0 many elements. Furthermore, linear growth implies that all ends must have finite vertex degree, hence we can apply Theorem <ref>.

In particular, a connected graph with linear growth and a countably infinite automorphism group cannot have one end. Thus one can strengthen <cit.> and get:

(Cf. <cit.>) Every locally finite connected graph with linear growth and countably infinite automorphism group has 2 ends.

Furthermore, one can in <cit.> remove the assumption that the graph is 2-ended, since it is implied by the other assumptions.

§ ENDS OF QUASI-TRANSITIVE GRAPHS

Finally, another application was pointed out to the authors by Matthias Hamann. Recall that a graph is called transitive if all vertices lie in the same orbit under the automorphism group, and quasi-transitive (or almost-transitive) if there are only finitely many orbits on the vertices. The groundwork for the study of automorphisms of infinite graphs was laid in the 1973 paper of Halin <cit.>. Among the results there is a classification of automorphisms of a connected infinite graph, see <cit.>. Type 1 automorphisms, to use Halin's terminology, leave a finite set of vertices invariant. An automorphism is said to be of type 2 if it is not of type 1. Type 2 automorphisms are of two kinds: the first kind fixes precisely one end, which is then thick (i.e. has infinite vertex degree), and the second kind fixes precisely two ends, which are then both thin (i.e. have finite vertex degrees). In Halin's paper these results are stated with the additional assumption that the graph is locally finite, but the classification remains true without this assumption. It is a well-known fact that a connected, transitive graph has either 1, 2, or infinitely many ends (this follows for locally finite graphs from Halin's paper <cit.>; for the general case see <cit.>). It is a consequence of a result of Jung <cit.> that if such a graph has more than one end then there is a type 2 automorphism that fixes precisely two ends, and thus the graph has at least two thin ends. In particular, in the two-ended case both of the ends must be thin. Contrary to this, we deduce from Theorem <ref> that the end of a one-ended transitive graph is always thick. This even holds in the more general case of quasi-transitive graphs. This was proved for locally finite graphs by Thomassen <cit.>. A variant of this result for metric ends was proved by Krön and Möller in <cit.>.

If G is a one-ended, quasi-transitive graph, then the unique end is thick.

For the proof we need the following auxiliary result.

There is no one-ended quasi-transitive tree.

Assume that T is a quasi-transitive tree and that R is a ray in T. Then there is an edge-orbit under Aut(T) containing infinitely many edges of R. Contract all edges not in this orbit to obtain a tree T' whose automorphism group acts transitively on edges. Clearly, every end of T' corresponds to an end of T (there may be more ends of T, which we contracted). But edge-transitive trees must be either regular or bi-regular. Hence T', and thus also T, has at least 2 ends.

Assume for a contradiction that G is a quasi-transitive, one-ended graph whose end is thin. If the end ω is dominated, then remove all vertices which dominate it and only keep the component C in which ω lies.
The resulting graph is still quasi-transitive, since C must be stabilised setwise by every automorphism. Furthermore, the degree of ω does not increase by deleting parts of the graph. Hence we can without loss of generality assume that the end of the counterexample G is undominated. Now apply Theorem <ref> to G. This gives a nested set 𝒮 of separations which is invariant under automorphisms—in particular, there are only finitely many orbits of 𝒮 under the action of Aut(G). Theorem <ref> further tells us that there is a bijection between 𝒮 and the edges of a one-ended tree T such that the action of Aut(G) on 𝒮 induces an action on T by automorphisms. Hence T is a quasi-transitive one-ended tree, which contradicts Proposition <ref>.

§ APPENDIX

We say that a vertex v dominates a ray L if there are infinitely many v-L paths, any two only having v as a common vertex. It follows from the definition of an end that if a vertex dominates one ray belonging to an end then it dominates every ray belonging to that end, and we say it dominates the end.

Assume that the set X of dominating vertices is infinite. By the above we can assume that there is a ray R and infinitely many vertices x_1, x_2, … that dominate R in G. We show that G must then contain a subdivision of the complete graph on x_1, x_2, …. Start by taking vertices v_1 and v_2 on R such that there are disjoint x_1-v_1 and x_2-v_2 paths. Then we find vertices w_1 and w_2 further along the ray R such that there are disjoint x_1-w_1 and x_3-w_2 paths, and still further along we find vertices u_2 and u_3 such that there are disjoint x_2-u_2 and x_3-u_3 paths. Adding the relevant segments of R we find x_1-x_2, x_1-x_3 and x_2-x_3 paths having at most their endvertices in common. The subgraph of G consisting of these three paths is thus a subdivision of the complete graph on three vertices. Using induction we can find an increasing sequence of subgraphs H_n of G that contain the vertices x_1, x_2, …, x_n and also paths P_ij linking x_i and x_j such that any two such paths have at most their end vertices in common. The subgraph H_n is a subdivision of the complete graph on n vertices. The subgraph H=⋃_i=1^∞ H_i is a subdivision of the complete graph on a (countably) infinite set of vertices and contains an infinite family of pairwise disjoint rays that all belong to the end ω. This contradicts our assumptions, and we conclude that X must be finite.

A ray decomposition (Halin used the German term `schwach m-fachkettenförmig') of adhesion m of a graph G consists of subgraphs G_1, G_2, … such that:

* G=⋃_i=1^∞ G_i;
* if T_n+1=(⋃_i=1^n G_i)∩ G_n+1 then |T_n+1|=m and T_n+1⊆ G_n∖(⋃_i=1^n-1 G_i) for n=1, 2, …;
* for each value of n=1, 2, … there are m pairwise disjoint paths in G_n+1 that have their initial vertices in T_n+1 and terminal vertices in T_n+2;
* none of the subgraphs G_i contains a ray.

The following Menger-type result is used by Halin in his proof of <cit.>. In the proof we also use ideas from another one of Halin's papers <cit.>.

Let G be a locally finite connected graph with the property that G contains a family of m pairwise disjoint rays but there is no such family of m+1 pairwise disjoint rays. Then there is in G a family of pairwise disjoint separators T_1, T_2, … such that each contains precisely m vertices, and every ray in G must, for some n_0, intersect all the sets T_n for n≥ n_0.

Fix a reference vertex v_0 in G. Let E_j denote the set of vertices at distance precisely j from v_0. Define also B_i as the set of vertices at distance at most i from v_0.
For numbers i and j such that i+1<j we construct a new graph H_ij as follows: we start with the subgraph of G induced by B_j, then we remove B_i but add a new vertex a that has as its neighbourhood the set ∂ B_i (for a set C of vertices, ∂ C denotes the set of vertices that are not in C but are adjacent to some vertex in C), and we also add a new vertex b that has every vertex in ∂(G∖ B_j) as its neighbour. Since G is assumed to be locally finite, the graph H_ij is finite. (By abuse of notation we do not distinguish the additional vertices a and b in different graphs H_ij.) Suppose that, for a fixed value of i, there are always, for j big enough, at least k distinct a-b paths in H_ij such that any two of them intersect only in the vertices a and b. Then one can use the same argument as in the proof of König's Infinity Lemma to show that then G contains a family of k pairwise disjoint rays. Because G does not contain a family of m+1 pairwise disjoint rays, there is for each i a number j_i such that for every j≥ j_i there are at most m disjoint a-b paths in H_ij. Since a and b are not adjacent in H_ij_i, Menger's Theorem says that the minimum number of vertices in an a-b separator is equal to the maximal number of a-b paths such that any two of the paths have no inner vertices in common. Whence there is in H_ij_i∖{a,b} a set T, an a-b separator, with precisely m vertices. This set is also a separator in G, and every ray in G that has its initial vertex in B_i must intersect T. From this information we can easily construct our sequence of separators T_1, T_2, …. We can also clearly assume that if i_j is the smallest number such that T_j is in B_i_j, then T_k∩ B_i_j=∅ for all k>j.

Let G be a connected locally finite graph. Suppose ω is an end of G and ω has finite vertex degree m. Then there is a sequence T_1, T_2, … of separators, each containing precisely m vertices, such that if C_i denotes the component of G-T_i that ω belongs to, then C_1⊇ C_2⊇… and ⋂_i=1^∞ C_i=∅.

We use exactly the same argument as above, except that when we construct the H_ij we only put in edges from b to those vertices in E_j that are in the boundary of the component of G∖ B_j that ω lies in.

The first part of the Lemma, about the existence of a family of k pairwise disjoint rays in ω with their initial vertices in A∩ B, follows directly from the above. For the second part, the only thing we need to show is that there cannot exist a separation (C,D) of order <k such that A⊆ C and ω lies in D. Such a separation cannot exist because the k pairwise disjoint rays that have their initial vertices in A∩ B and belong to ω would all have to pass through C∩ D.

(<cit.>) Let G be a graph with the property that it contains a family of m pairwise disjoint rays but no family of m+1 pairwise disjoint rays. Let X denote the set of vertices in G that dominate some ray. Then the set X is finite and the graph G-X has a ray decomposition of adhesion m.

Let R_1, …, R_m denote a family of pairwise disjoint rays. Set R=R_1∪⋯∪ R_m. Any ray in G must intersect the set R in infinitely many vertices and thus intersects one of the rays R_1, …, R_m in infinitely many vertices. From this we conclude that every ray in G is in the same end as one of the rays R_1, …, R_m. Thus a vertex that dominates some ray in G must dominate one of the rays R_1, …, R_m. In Lemma <ref> we have already shown that the set of vertices dominating an end of finite vertex degree is finite.
Note also that if a vertex x in R is in infinitely many distinct sets of the type ∂ C, where C is a component of G∖ R, then x would be a dominating vertex of some ray R_i. Thus there can only be finitely many vertices in R with this property. We will now show that G-X has a ray decomposition of adhesion m. To simplify the notation, we will in the rest of the proof assume that X is empty.

Assume now that there is a component C of G-R such that ∂ C is infinite. Take a spanning tree of C and then adjoin the vertices in ∂ C to this tree using edges in G. Now we have a tree with infinitely many leaves. It is now apparent that either the tree contains a ray that does not intersect R or there is a vertex in C that dominates a ray in G. Both possibilities are contrary to our assumptions, and we can conclude that ∂ C is finite for every component C of G∖ R. For every set S in R such that S=∂ C for some component C of G∖ R, we find a locally finite connected subgraph C_S of C∪ S containing S. The graph G' that is the union of R and all the subgraphs C_S is a locally finite graph. The original graph G has a ray decomposition of adhesion m if and only if G' has a ray decomposition of adhesion m. At this point we apply Theorem <ref>. From Theorem <ref> we have the sequence T_2, T_3, … of separators. We choose T_2 such that all the rays R_1, …, R_m intersect T_2. We start by defining G_i for i≥ 2 as the union of T_i and all those components of G-T_i that contain the tail of some ray R_j. Finally, set G_1=G∖(G_2∖ T_2). Note that none of the subgraphs G_i can contain a ray, and our family of rays provides a family of m pairwise disjoint T_i-T_i+1 paths. Now we have shown that G has a ray decomposition of adhesion m.

Finally, we are now ready to show how Halin's result above implies Theorem <ref>, which concerns ω-relevant separations. We continue with the notation in the proof of Theorem <ref>. Recall that there are infinitely many pairwise disjoint paths connecting a ray R_i to a ray R_j. Thus we may assume that the initial vertices of the rays R_1, …, R_k all belong to the same component of G-T_2. We set A_n as the union of the component of G-T_n+1 that contains these initial vertices with T_n+1. Then set B_n=(G∖A_n)∪ T_n+1. Now it is trivial to check that the sequence (A_n, B_n) of separations satisfies the conditions.

§ APPENDIX

Let σ be an automorphism of T. In <cit.> Tits proved that there are three types of automorphisms of a tree: (i) those that fix some vertex, (ii) those that fix no vertex but leave an edge invariant, and (iii) those that leave some double ray …, v_-1, v_0, v_1, v_2, … invariant and act as non-trivial translations on that double ray. (Similar results were proved independently by Halin in <cit.>.) Since T is one-ended it contains no double ray and thus (iii) is impossible. Suppose now that σ fixes no vertex in T but leaves the edge e invariant. The end of T lies in one of the components of T-e and σ swaps the two components of T-e. This is impossible, because T has only one end and this end must belong to one of the components of T-e. Hence σ must fix some vertex v. There is a unique ray R' in T with v as an initial vertex, and this ray is fixed pointwise by σ. The two rays R and R' intersect in a ray that is a tail of R, and this tail of R is fixed pointwise by σ.

§ APPENDIX

In this Appendix we prove Lemma <ref>, which is a slightly sharpened version of Lemma 3 from Halin's paper <cit.>. The change is that `uncountable' in Halin's results is replaced by `at least 2^ℵ_0 many elements'. First there is an auxiliary result that corresponds to Lemma 2 in <cit.>.
Let G be a connected graph and Γ=Aut(G). Suppose D is a subset of the vertex set of G. Let {C_i}_i∈ I denote the family of components of G-D. Define G_i as the subgraph spanned by C_i∪∂ C_i. Set Γ_i=Aut(G_i)_(∂ C_i), the pointwise stabiliser of ∂ C_i in Aut(G_i). Suppose that Γ_i is either finite or has at least 2^ℵ_0 elements for all i. Then Γ_(D), the pointwise stabiliser of D in Γ, is either finite or has at least 2^ℵ_0 elements.

If one of the groups Γ_i has at least 2^ℵ_0 elements then there is nothing more to do. So, we assume that all these groups are finite. Now there are two situations where it is possible that Γ_(D) is infinite. The first is when infinitely many of the groups Γ_i are non-trivial. For any family {σ_i}_i∈ I such that σ_i∈Γ_i we can find an automorphism σ∈Γ_(D) such that the restriction of σ to C_i equals σ_i for all i. If infinitely many of the groups Γ_i are nontrivial, then there are at least 2^ℵ_0 such families {σ_i}_i∈ I, and Γ_(D) must have at least 2^ℵ_0 elements. We say that two components C_i and C_j are equivalent if ∂ C_i=∂ C_j and there is an isomorphism φ_ij from the subgraph G_i to the subgraph G_j fixing every vertex in ∂ C_i=∂ C_j. Clearly there is an automorphism σ_ij of G that fixes every vertex that is neither in C_i nor C_j, such that σ_ij(v)=φ_ij(v) for v∈ C_i and σ_ij(v)=φ_ij^-1(v) for v∈ C_j. If there are infinitely many disjoint ordered pairs of equivalent components, we can for any subset of these pairs find an automorphism σ∈Γ_(D) such that if (C_i, C_j) is in our subset then the restriction of σ to C_i∪ C_j is equal to the restriction of σ_ij. There are at least 2^ℵ_0 such sets, and thus Γ_(D) has at least 2^ℵ_0 elements. If neither of the two cases above occurs then Γ_(D) is clearly finite.

Following Schmidt <cit.> (see also Halin's paper <cit.>) we define, using induction, for each ordinal λ a class of graphs A(λ). The class A(0) is the class of finite graphs. Suppose λ>0 and A(μ) has already been defined for all μ<λ. A graph G is in the class A(λ) if and only if it contains a finite set F of vertices such that each component of G-F is in A(μ) for some μ<λ. It is shown in the papers referred to above that if G belongs to A(λ) for some ordinal λ then G is rayless and, conversely, every rayless graph belongs to A(λ) for some ordinal λ. For a rayless graph G we define o(G) as the smallest ordinal λ such that G is in A(λ).

The Lemma is proved by induction over o(G). If o(G)=0 then the graph G is finite and the automorphism group is also finite. Assume that the result is true for all rayless graphs H such that o(H)<o(G). Find a finite set F of vertices such that each of the components of G-F has a smaller order than G. Denote the family of components of G-F by {C_i}_i∈ I. Denote by G_i the subgraph induced by C_i∪∂ C_i. By the induction hypothesis, the pointwise stabiliser of ∂ C_i in Aut(G_i) is either finite or has at least 2^ℵ_0 elements. Lemma <ref> above then implies that Aut(G)_(F) is either finite or has at least 2^ℵ_0 elements.

A graph is one-ended if it contains a ray and whenever we remove a finite set of vertices from the graph the rest has only one infinite component. We prove that one-ended graphs whose end is undominated and has finite vertex degree have tree-decompositions that display the end and that are invariant under the group of automorphisms. This can be applied to prove a conjecture of Halin from 2000 and solves a recent problem of Boutin and Imrich. Furthermore, it implies for every transitive one-ended graph that its end must have infinite vertex degree.
§ INTRODUCTION

In <cit.>, Dunwoody and Krön constructed tree-decompositions invariant under the group of automorphisms that are non-trivial for graphs with at least two ends. In the same paper, they applied them to obtain a combinatorial proof of a generalization of Stallings' theorem on groups with at least two ends. This tree-decomposition method has multifarious applications, as demonstrated by Hamann in <cit.> and Hamann and Hundertmark in <cit.>. For graphs with only a single end, however, these tree-decompositions may be trivial. Hence such a structural understanding of this class of graphs remains elusive. For many one-ended graphs, such as the 2-dimensional grid, such tree-decompositions cannot exist. Indeed, it is necessary for existence that the end has finite vertex degree; that is, there is no infinite set of pairwise vertex-disjoint rays belonging to that end. Already in 1965 Halin <cit.> knew that one-ended graphs whose end has finite vertex degree have tree-decompositions displaying the end (a precise definition can be found towards the end of Section <ref>). Nevertheless, for these tree-decompositions to be of any use for applications as above, one needs them to have the additional property that they are invariant under the group of automorphisms. Unfortunately such tree-decompositions do not exist for all graphs in question, see Example <ref> below, but in the example there is a vertex dominating the end. In this paper we construct such tree-decompositions if the end is not dominated.

Every one-ended graph whose end is undominated and has finite vertex degree has a tree-decomposition that displays its end and that is invariant under the group of automorphisms.

This better structural understanding leads to applications similar to those for graphs with more than one end. Indeed, below we deduce from Theorem <ref> a conjecture of Halin from 2000, and answer a recent question of Boutin and Imrich. A further application was pointed out by Hamann. For graphs like the one in Figure <ref>, the tree-decompositions of Theorem <ref> can be constructed using the methods of Dunwoody and Krön, namely, if the graphs in question contain `highly connected tangles' aside from the end. In general such tangles need not exist; for an example see Figure <ref>. It is the essence of Theorem <ref> to provide a construction, invariant under the group of automorphisms, that decomposes graphs such as those in Figure <ref> in a tree-like way.

Applications. In <cit.> Halin showed that one-ended graphs whose end has vertex degree equal to one cannot have a countably infinite automorphism group. Not completely satisfied with his result, he conjectured that this extends to one-ended graphs with finite vertex degree. Theorem <ref> implies this conjecture.

Given a graph with one end which has finite vertex degree, its automorphism group is either finite or has at least 2^ℵ_0 many elements.

Theorem <ref> can be further applied to answer a question posed by Boutin and Imrich, who asked in <cit.> whether there is a graph with linear growth and countably infinite automorphism group. Theorem <ref> implies a negative answer to this question, as well as strengthenings of further results of Boutin and Imrich; see Section <ref> for details. Finally, Matthias Hamann (personal communication) pointed out the following consequence of Theorem <ref>.

Ends of transitive one-ended graphs must have infinite vertex degree.
We actually prove a stronger version of Theorem <ref> with `quasi-transitive' in place of `transitive' (here a graph is quasi-transitive if there are only finitely many orbits of vertices under the automorphism group). The rest of this paper is structured as follows: in Section <ref> we set up all necessary notation and definitions. As explained in <cit.>, there is a close relation between tree-decompositions and nested sets of separations. In this paper we work mainly with nested sets of separations. In Section <ref> we prove Theorem <ref>, Section <ref> is devoted to the proof of Theorem <ref> and its implications for the work of Boutin and Imrich, and finally, in Section <ref> we prove Theorem <ref>. Many of the lemmas we apply in this work were first proved by Halin. Since in some cases we need slight variants of the original results, and also since Halin's original papers might not be easily accessible, proofs of some of these results are included in appendices. | http://arxiv.org/abs/1706.08330v3 | {
"authors": [
"Johannes Carmesin",
"Florian Lehner",
"Rögnvaldur G. Möller"
],
"categories": [
"math.CO",
"05C63, 05E18, 20B27"
],
"primary_category": "math.CO",
"published": "20170626113446",
"title": "On tree-decompositions of one-ended graphs"
} |
[email protected] Research Group of Geometry, Dynamical Systems and Cosmology, Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi 83200, Samos, Greece [email protected] Central Greece University of Applied Sciences, Department of Electrical Engineering, 35100 Lamia, Greece Nazarbayev University, School of Engineering, Astana, Republic of Kazakhstan, 010000

A novel idea is proposed for a natural solution of the dark energy and its cosmic coincidence problem. The existence of local antigravity sources, associated with astrophysical matter configurations distributed throughout the universe, can lead to a recent cosmic acceleration effect. Various physical theories can be compatible with this idea, but here, in order to test our proposal, we focus on quantum originated spherically symmetric metrics matched with the cosmological evolution through the simplest Swiss cheese model. In the context of asymptotically safe gravity, we have explained the observed amount of dark energy using Newton's constant, the galaxy or cluster length scales, and dimensionless order one parameters predicted by the theory, without fine-tuning or extra unproven energy scales. The interior modified Schwarzschild-de Sitter metric allows us to approximately interpret this result as indicating that the standard cosmological constant is a composite quantity made of the above parameters, rather than a fundamental one.

A solution of the dark energy and its coincidence problem based on local antigravity sources without fine-tuning or new scales

Vasilios Zarikas

June 19, 2017
================================================================================================================================

§ INTRODUCTION

The so-called cosmological constant problem is nothing more than the simple observation, due to Zeldovich, that the quantum vacuum energy density should unavoidably contribute to the energy-momentum of the Einstein equations in the form of a cosmological constant. The recent discovery of a Higgs-like particle at the CERN Large Hadron Collider provides the experimental verification of the existence of the electroweak vacuum energy, and thus of the reality of the cosmological constant problem. The obvious absence of such a huge vacuum energy in the universe has led to theoretical and phenomenological attempts to cancel out this vacuum energy, which are all so far subject to fine-tuning problems <cit.>.

On the other hand, the observational evidence of the accelerated expansion of the universe <cit.> has introduced the notion of dark energy. One option is that the dark energy is due to a cosmological constant Λ in the ΛCDM model, in which case the explanation of the huge discrepancy between this observed Λ and the expected quantum vacuum energy is rather more pertinent. Even if the dark energy has nothing to do with the quantum vacuum energy, but is due to quite different phenomena, the cosmological constant problem remains a hard problem in physics. In any case, the existence of dark energy is associated with another cosmological puzzle, the so-called cosmic coincidence problem. The latter refers to the need for an explanation of the recent passage from a deceleration era to the present cosmic acceleration phase.

The aim of the present work is twofold: first, to propose a novel idea for a natural solution to the dark energy issue and its associated cosmic coincidence problem of recent acceleration.
Second, to implement this idea through an interesting and concrete scenario, among other possible ones, which explains the correct amount of dark energy without the introduction of new and arbitrary scales or fine-tuning.

The proposed solution is based on the simple idea that the acceleration/dark energy can be due to infrared modifications of gravity at intermediate astrophysical scales which effectively generate local antigravity effects. The cosmological consequence of all these homogeneously distributed local antigravity sources is an overall cosmic acceleration through the matching between the local and the cosmic patches. Before the appearance of astrophysical structures (galaxies, clusters of galaxies), such antigravity effects do not exist, and therefore, the recent emergence of dark energy is not a coincidence but an outcome of the recent formation of structure. Before the appearance of structure and the emergence of sufficient repulsive effects, the conventional deceleration scenario is expected.

Various physical theories (alternative gravities, extra-dimensional gravities, quantum gravities, etc.) can be implemented and be compatible with the previous general idea, providing intermediate distance infrared modifications which act as local antigravity sources. It is worth noticing that when the same physical theories are applied directly at the far infrared cosmic scales, they do not necessarily give comparable or significant cosmological effects. So, a dark energy of local origin in the universe is not an equivalent or alternative description, but can be a necessity in order to reveal the relevant phenomena at intermediate scales. Quantum theories of gravity, in particular, provide types of models where local repulsive effects are naturally expected (for example, negative pressure of quantum gravity origin can be formed in the interior of astrophysical black holes). Asymptotically safe (AS) gravity <cit.> is one of the promising quantum gravity frameworks, and we will elaborate on it more thoroughly in the following sections in relation to the previous ideas. We will show in our most successful scenario that the observed dark energy can be explained from Newton's constant, the galaxy or cluster length scales, and dimensionless order one parameters predicted by AS theory, without fine-tuning or the introduction of new scales. This can approximately be interpreted as meaning that the observed cosmological constant Λ is not a fundamental parameter, but is composite and naturally arises from other fundamental quantities.

In order to study the effect of all local sources of antigravity on the cosmic evolution, we adopt in the present work a simple Swiss cheese model by matching a homogeneous and isotropic spacetime with the appropriate local spherically symmetric metrics <cit.>, and this formulation is presented in Section II. As an introductory step to set up the Swiss cheese evolution equations, we work out in Section III the classical Schwarzschild metric. In Section IV we provide some general thoughts on the relation between a possible locally originated dark energy and the coincidence problem. In Section V, the Schwarzschild-de Sitter black hole is discussed with respect to the above scenario. In Section VI quantum improved Schwarzschild-de Sitter metrics are considered; quantum gravity effects may indeed introduce an explicit or effective cosmological constant which arises from ultraviolet or infrared modifications of gravity <cit.>.
Finally, Section VII is the largest and most important one, where the AS theory is applied in the context of our ideas. The first subsection VIIA discusses the running of the cosmological constant close to the Gaussian fixed point of the AS evolution; the resulting cosmology is practically indistinguishable from the ΛCDM scenario, with the same fine-tuning problems. The last subsection VIIB discusses in detail, for the running of the cosmological constant close to the infrared (IR) fixed point of the AS evolution, the quite interesting emergence of the dark energy out of known physical scales and parameters predicted by the theory, and provides a natural explanation of the recent cosmic acceleration without obvious observational conflicts with the internal dynamics of galaxies or clusters. We finish with the conclusions in Section VIII.

It is worth mentioning that attempts to explain acceleration without a dark energy component, or to produce dark energy, in both cases due to structure formation, have already appeared in the literature (e.g. <cit.>, <cit.>, <cit.> and references therein). The existence of structure formation in the universe implies a non-linear local evolution, while the distribution of the non-linear regions is homogeneous and isotropic above a present-day homogeneity scale of the order of 100 Mpc. The apparent recent cosmic acceleration could be the effect of inhomogeneities and/or anisotropies on the average expansion rate, broadly referred to as back-reaction. This approach can potentially solve the coincidence problem too. However, it is fair to say that our viewpoint in the present work is different. In the averaging procedure, the matter is treated as a usual pressureless ideal fluid in the context of General Relativity, gravity has the standard attractive behaviour inside the structure, and cosmic acceleration arises due to the non-trivial complexity of the considered solution; there are no explicit antigravity forces, and repulsive effects come only through averaging. Here, on the contrary, the acceleration and the dark energy come from the existence of antigravity sources related to the astrophysical structures in as simple a spacetime as possible, and no averaging is performed; in the present work, these repulsive forces are basically of quantum origin as AS suggests, although in general they can be of some other geometric nature generated by some modified gravity theory with IR gravity modifications. In our approach, this simple spacetime is described as a first step by the homogeneous Swiss cheese model with its known Schucking matching surface, although a better approximation would be to use inhomogeneous Swiss cheese models (e.g. some analogues of Lemaitre-Tolman-Bondi or Szekeres); averaging processes in this case are expected to enhance the cosmic acceleration found here. A different scenario, where structure is responsible for acceleration, was presented in <cit.>; in a five-dimensional setup, a brane-bulk energy exchange in the interior of galactic core black holes produces a sufficient negative dark pressure to play the role of dark energy.

§ SWISS CHEESE MODELS

The Swiss cheese cosmological model, first introduced by Einstein and Strauss <cit.>, is a solution of General Relativity that globally respects homogeneity and isotropy, while it locally describes a spherically symmetric solution. Other more general Swiss cheese models refer to inhomogeneous solutions.
A Swiss cheese model with spherical symmetry overcomes the difficulty of how to glue a static solution of the theory at hand within a larger time-dependent homogeneous and isotropic spacetime. The idea is to assume a very large number of local objects homogeneously and isotropically distributed in the universe. The matching of a spatially homogeneous metric as the exterior spacetime to a local interior solution has to be realized across a spherical boundary that stays at a fixed coordinate radius in the cosmological frame while it evolves in the interior frame.

Let us consider a four-dimensional manifold M with metric g_μν and a timelike hypersurface Σ which splits the spacetime M into two parts. The spacetime coordinates are denoted by x^μ (μ,ν,... are four-dimensional coordinate indices) and can be different between the two regions. The coordinates on Σ are denoted by χ^i (i,j,... are three-dimensional coordinate indices on Σ). The embedding of Σ in M is given by some functions x^μ(χ^i). The unit normal vector n^μ to Σ points inwards to each of the two regions. The first relevant quantity characterizing Σ is the induced metric h_μν=g_μν-n_μn_ν coming from the spacetime in which it is embedded. The second quantity is the extrinsic curvature K_μν=h_μ^κ h_ν^λ n_κ;λ, where a ; denotes covariant differentiation with respect to g_μν. In the adapted frame where x^μ, say x̅^μ, contains x̅^i with x̅^i|_Σ=χ^i and some extra transverse coordinate, it is h_ij=g_ij. However, this quantity can be expressed in terms of arbitrary spacetime coordinates x^μ as h_ij=g_μν (∂x^μ/∂χ^i)(∂x^ν/∂χ^j). Similarly, for the extrinsic curvature it is K_ij=n_i;j, and it can be expressed as K_ij = (∂n_μ/∂χ^j - Γ^λ_ μν n_λ ∂x^ν/∂χ^j)∂x^μ/∂χ^i = -n_λ(∂^2x^λ/∂χ^i∂χ^j + Γ^λ_ μν (∂x^μ/∂χ^i)(∂x^ν/∂χ^j)), where Γ^κ_ μν are the Christoffel symbols of g_μν.

Continuity of the spacetime across the hypersurface Σ implies that h_ij is continuous on Σ, which means that h_ij is the same when computed on either side of Σ. If we consider Einstein gravity with a regular spacetime matter content and vanishing distributional energy-momentum tensor on Σ, then the Israel-Darmois matching conditions <cit.> imply that the sum of the two extrinsic curvatures computed on the two sides of Σ is zero.

The model of Einstein-Strauss refers to the embedding of a Schwarzschild mass into FRW cosmology. Here, we shall assume a general static spherically symmetric metric which matches smoothly to a homogeneous and isotropic cosmological metric. In spherical coordinates the cosmological metric takes the form ds^2=-dt^2+a^2(t)[dr^2/(1-κ r^2)+r^2(dθ^2+sin^2θ dφ^2)], where a(t) is the scale factor and κ=0, ± 1 characterizes the spatial curvature. In these coordinates, a “spherical” boundary is defined to have a fixed coordinate radius r=r_Σ, with r_Σ constant. Of course, this boundary is seen by a cosmological observer to expand, following the universal expansion. If x̅^μ=(t,r,θ,φ) are the coordinates of the metric (<ref>), then the hypersurface Σ is determined by the function f̅(x̅^μ)=r-r_Σ=0, and the cosmological metric occurs for r≥ r_Σ. From the coordinates x̅^μ one can parametrize Σ by the coordinates χ^i=x̅^i|_Σ=(t,θ,φ), and therefore on Σ it is x̅^μ(χ^i)=(t,r_Σ,θ,φ). The unit normal vector can be calculated from n̅_μ=f̅_,μ/√(|g̅^κλf̅_,κf̅_,λ|), where a comma means differentiation with respect to x̅^μ. Obviously, the plus sign in (<ref>) makes certain that n̅^μ points inward to the cosmological region (in the direction of increasing r).
Thus, n̅_μ=(0 , a/√(1-κ r_Σ^2) , 0 , 0). Note that n̅^μ is spacelike, n̅^μn̅_μ=1, as expected. The interior region r≤ r_Σ is replaced by another metric which has the following form ds^2=-J(R)F(R)dT^2+dR^2/F(R)+R^2(dθ^2+sin^2θ dφ^2), where J,F>0. This metric represents a static spherically symmetric spacetime in Schwarzschild-like coordinates. The functions F(R) and J(R) are given by the specific metric in use. Since the two-dimensional sphere (θ,φ) is the common fiber for both metrics (<ref>), (<ref>), the position of Σ in the spacetime described by (<ref>) does not depend on θ,φ and is given by the functions T=T_S(t), R=R_S(t). The subscript S refers to Schucking; R_S is called the Schucking radius and it is time-dependent. Therefore, the spherical boundary does not remain at a constant radial coordinate distance in the Schwarzschild-like patch as the universe expands. The coordinates x̂^μ=(T,R,θ,φ) of the metric (<ref>) take on Σ the form x̂^μ(χ^i)=(T_S(t),R_S(t),θ,φ). The unit normal vector n̂^μ cannot now be calculated directly from a formula like (<ref>), since the function f̂(x̂^μ) of the matching surface is now unknown. However, due to the symmetry it is expected that n̂_θ=n̂_φ=0; therefore the orthonormality of n̂^μ will provide two conditions for n̂_T,n̂_R. Indeed, since the three vectors ∂x̂^μ/∂χ^i are tangent to Σ, the condition n̂_μ ∂x̂^μ/∂χ^i=0 implies n̂_θ=n̂_φ=0 and (dT_S/dt)n̂_T+(dR_S/dt)n̂_R=0. Furthermore, from n̂^μn̂_μ=1 one obtains (1/JF)n̂_T^2-Fn̂_R^2=-1, where J,F are evaluated at R_S.

So far, we have established the geometrical setting on the two sides of the boundary hypersurface. The junction of the two regions on Σ demands h̅_ij=ĥ_ij, which provides through (<ref>) the conditions JF(dT_S/dt)^2-(1/F)(dR_S/dt)^2=1 and R_S=a r_Σ. The two equations (<ref>), (<ref>) can also arise more easily from the two expressions for the induced metric on Σ coming from (<ref>), (<ref>): ds_Σ^2 = -dt^2+a^2r_Σ^2(dθ^2+sin^2θ dφ^2) = -[JF(dT_S/dt)^2-(1/F)(dR_S/dt)^2]dt^2 +R_S^2(dθ^2+sin^2θ dφ^2). Solving (<ref>), (<ref>) for n̂_T,n̂_R and using (<ref>), we find the normal vector n̂^μ from n̂_μ=(ϵ√(J) dR_S/dt , -ϵ√(J) dT_S/dt , 0 , 0), where ϵ=± 1. The demand that n̂^μ points inward to the central region (in the direction of decreasing R) implies n̂_R<0. Additionally, the forms of n̅_μ,n̂_μ show that the directions defined by the coordinate axes T,R are different from those of t,r; however, the centers of the two coordinate systems coincide.

What remains is the matching of the two extrinsic curvatures on Σ, i.e. the demand K̅_ij+K̂_ij=0. Due to the simple form of n̅_μ, the extrinsic curvature K̅_ij in the cosmological region can be easily computed from either equation (<ref>) or (<ref>) as K̅_ij=-n̅_rΓ̅^r_ ij=(1/2)n̅_r g̅^rr g̅_ij,r and finally (K̅_tt,K̅_θθ,K̅_φφ)= √(1-κ r_Σ^2) a r_Σ (0, 1, sin^2θ). In the interior region the computation is more involved and it is slightly more convenient to use the expression (<ref>) to compute K̂_ij. The corresponding non-vanishing Christoffel symbols are Γ̂^R_ TT=JF^2 Γ̂^T_ TR=(F/2)(JF'+FJ'), Γ̂^R_ RR=-F'/2F, Γ̂^R_ φφ =sin^2θ Γ̂^R_ θθ=-RF sin^2θ, Γ̂^θ_ Rθ=Γ̂^φ_ Rφ=1/R, Γ̂^θ_ φφ=-sinθcosθ, Γ̂^φ_ θφ=cotθ, where a prime denotes differentiation with respect to R.
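As a quick symbolic cross-check of the Christoffel symbols listed above (our own verification aid, not part of the original derivation), one may use Python's sympy package:

```python
import sympy as sp

T, R, th, ph = sp.symbols('T R theta phi')
J, F = sp.Function('J')(R), sp.Function('F')(R)
x = [T, R, th, ph]
# Interior metric: ds^2 = -J F dT^2 + dR^2/F + R^2 dOmega^2
g = sp.diag(-J*F, 1/F, R**2, R**2*sp.sin(th)**2)
ginv = g.inv()

def christoffel(lam, mu, nu):
    # Gamma^lam_{mu nu} = (1/2) g^{lam s} (d_mu g_{s nu} + d_nu g_{s mu} - d_s g_{mu nu})
    return sp.simplify(sum(sp.Rational(1, 2)*ginv[lam, s]*
                           (sp.diff(g[s, nu], x[mu]) + sp.diff(g[s, mu], x[nu])
                            - sp.diff(g[mu, nu], x[s])) for s in range(4)))

print(christoffel(1, 0, 0))  # expect (F/2)(J F' + F J')
print(christoffel(0, 0, 1))  # expect (J F' + F J')/(2 J F), i.e. Gamma^R_TT = J F^2 Gamma^T_TR
print(christoffel(1, 1, 1))  # expect -F'/(2F)
print(christoffel(1, 2, 2))  # expect -R F
```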
Then, it arises that all K̂_ij=0 for i≠ j, while K̂_φφ=sin^2θ K̂_θθ, K̂_θθ=-n̂_RΓ̂^R_ θθ=R_S F n̂_R, and K̂_tt=-n̂_T d^2T_S/dt^2 - n̂_R d^2R_S/dt^2 - n̂_RΓ̂^R_ TT(dT_S/dt)^2 - n̂_RΓ̂^R_ RR(dR_S/dt)^2 - 2n̂_TΓ̂^T_ TR (dT_S/dt)(dR_S/dt). Finally, the condition K̅_θθ+K̂_θθ=0 (or equivalently for φ) gives the consistency equation dT_S/dt=ϵ√(1-κ r_Σ^2) a r_Σ/(R_S F√(J)), which, with the use of (<ref>), takes the form dT_S/dt=ϵ√(1-κ r_Σ^2)/(F√(J)). It then follows from (<ref>) that (dR_S/dt)^2=1-κ r_Σ^2-F(R_S). Therefore, equations (<ref>), (<ref>) determine the position of Σ in the space (T,R). From (<ref>) it is obvious that indeed n̂_R<0.

The final task is the examination of the matching condition K̅_tt+K̂_tt=0, i.e. K̂_tt=0. This equation contains the second time derivatives of T_S,R_S that we need to calculate. From equations (<ref>), (<ref>) it arises that d^2T_S/dt^2=-ϵ√(1-κ r_Σ^2) [(F√(J))'/(F^2J)] dR_S/dt and d^2R_S/dt^2=-F'/2. Using all the previous expressions in (<ref>), it turns out that J'=0, which means J'(R_S)=0. This relation, due to (<ref>), implies in general an algebraic equation for a(t), which will be inconsistent with equation (<ref>). There are however various functions J(R) which satisfy this equation. For example, a consistent choice is that J(R) is constant throughout (as happens in the Schwarzschild metric), and in this case without loss of generality we can rescale T so that this constant is one. Another consistent case would be for J(R) to be a power series of the form J(R)=J(R_S)+c_1(R-R_S)^2+... . The successful matching has proved that the choice of the matching surface Σ was the appropriate one.

§ GENERAL RELATIVITY BLACK HOLES AND THE ENSUING FRW COSMOLOGY

If the black hole is described by the classical Schwarzschild solution, i.e. F(R)=1-2G_N M/R, J(R)=1, equation (<ref>) provides through (<ref>) the cosmic evolution of the scale factor a. Namely, we obtain H^2=ȧ^2/a^2=2G_N M/(r_Σ^3 a^3)-κ/a^2, where a dot denotes differentiation with respect to cosmic time t. This equation is qualitatively similar to the standard FRW evolution with dust (zero pressure) as its cosmic fluid and a possible curvature term. Of course, in order for this solution to be physically realistic and represent a spatially homogeneous universe, not just a single sphere of comoving radius r_Σ should be present; rather, a number of such spheres should be uniformly distributed throughout space. Otherwise, there would exist a preferred position in the universe. Each such sphere can be physically realized by an astrophysical object, such as a galaxy (with its extended spherical halo) or a cluster of galaxies, which we assume has a typical mean mass M. It will be seen that the Schucking radius lies outside the real border of the astrophysical object; therefore, F(R_S) in (<ref>) is provided by the value of the expression (<ref>) (otherwise we would meet the inconvenient situation of having to consider an interior Oppenheimer-Volkoff type of solution or some other more realistic matter profile). This means that for the value F(R_S), which is our only interest in order to make the matching and derive the cosmological metric, it is as if all the mass M were gathered at the center of the spherical symmetry. The same is true for other spherically symmetric metrics, modifications of the Schwarzschild solution, to be discussed later.
Furthermore, equation (<ref>) gives ä/a=-G_NM/r_Σ^3a^3 , which indicates a decelerated expansion.

In order for equations (<ref>), (<ref>) to describe precisely a standard matter dominated universe, the matter dilution term in (<ref>) should be (8π G_N/3)ρ, where ρ is the cosmic matter energy density, and the term in (<ref>) should be -(4π G_N/3)ρ. Therefore, we make the standard assumption of Swiss cheese models that the matching radius r_Σ is such that, when its interior region is filled with energy density equal to the cosmic matter density ρ, the interior energy equals M. Namely, we set ρ=M/(4π/3 R_S^3)=3M/(4π r_Σ^3a^3) . This condition can equivalently be interpreted as the mass M of the object being uniformly stretched out to the radius r_Σ. Since in any case the mass M can be considered to be located at the center of spherical symmetry, the above definition of r_Σ offers a simple way to determine the spheres where the matching with the cosmological metric occurs. Although this definition is certainly ad hoc and uses the mass M of the object and the cosmic density ρ, it has the merit that it avoids using other details of the structure, such as the size of the object and the distance between similar structures. Nevertheless, in the Swiss cheese model the cosmic evolution remains exactly the same as in the standard cosmological picture.

It is now clear that equations (<ref>), (<ref>) become H^2 = 8π G_N/3 ρ-κ/a^2 , ä/a = -4π G_N/3 ρ . If we define the matter density parameter in the conventional way, Ω_m=8π G_Nρ/3H^2 , it is found that r_Σ=(2G_NM/Ω_m0a_0^3H_0^2)^(1/3) , where a subscript 0 denotes the today value.

Let us suppose we want to model a universe consisting of two types of dust with different densities, ρ_1 and ρ_2 (e.g. dark matter and stars or black holes). In order to avoid unnecessary technical complexity arising from an inhomogeneous placement of the dusts, it is a fair approximation to describe the cosmic evolution assuming that the universe is filled with a homogeneous distribution of spherical configurations, each consisting of two spherical objects with different masses M_1 and M_2 inside the Schucking radius R_S. The quantities ρ_1,M_1 satisfy equation (<ref>), and similarly for the other ingredient. Since the total cosmic energy density ρ is the sum of the two energy densities ρ_1,ρ_2, it follows that the matching is performed as before, with the difference that the mass M of the central object is now the sum of the two masses, i.e. M=M_1+M_2. In this case, the same equations (<ref>), (<ref>) apply, with M this total mass.

Let us close with a few numerical estimates. The present value of the Hubble parameter will be taken as H_0=0.72× 10^-10yr^-1. In the present work we are not going to perform fittings to real data, where H_0 could also be considered a fitted parameter. We will use as a typical mass M=10^11M_⊙ for a galaxy and M=10^15M_⊙ for a cluster of galaxies (with M_⊙ the solar mass). If we ignore the spatial curvature term in (<ref>), then Ω_m=1, and equation (<ref>) gives r_Σ=0.56Mpc for the galaxy and r_Σ=12Mpc for the cluster (for a realistic Ω_m0 these distances become larger). Since the typical radius of a spiral galaxy (including its dark matter halo) is R_b≈ 0.15Mpc, it is obvious that r_Σ is a few times larger than the galactic radius. Moreover, the mean distance between galaxies is a few Mpc; thus, the Schucking radii of two neighboring galaxies do not overlap.
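These estimates are easy to reproduce. The short Python sketch below evaluates r_Σ=(2G_NM/Ω_m0a_0^3H_0^2)^(1/3) for the two typical masses, with a_0=1 and standard values of the physical constants; the Ω_m0=0.3 entries anticipate the values quoted later in the text.

```python
# Reproducing the estimates above: r_Sigma = (2 G M / (Omega_m0 a0^3 H0^2))^(1/3).
G    = 6.674e-11             # m^3 kg^-1 s^-2
Msun = 1.989e30              # kg
Mpc  = 3.086e22              # m
H0   = 0.72e-10 / 3.156e7    # yr^-1 -> s^-1

def r_sigma(M, Omega_m0=1.0):
    return (2*G*M/(Omega_m0*H0**2))**(1/3) / Mpc   # in Mpc, a0 = 1

print(r_sigma(1e11*Msun))            # ~0.56 Mpc (galaxy,  Omega_m0 = 1)
print(r_sigma(1e15*Msun))            # ~12   Mpc (cluster, Omega_m0 = 1)
print(r_sigma(1e11*Msun, 0.30))      # ~0.83 Mpc (used later, Omega_m0 = 0.3)
print(r_sigma(1e15*Msun, 0.30))      # ~18   Mpc
```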
As for clusters, they have radii from R_b≈ 0.5Mpc to R_b≈ 5Mpc, and therefore r_Σ is again outside the cluster. If the mean distance between the borders of two adjacent clusters is something like 20Mpc, the two Schucking radii again do not intersect.

§ A PERSPECTIVE FOR THE COINCIDENCE PROBLEM

In the ΛCDM model, Λ has been found observationally to be of the order H_0^2 (more precisely, Λ=3Ω_Λ 0H_0^2, Ω_Λ 0≈ 0.7). This means that the energy scale defined by √(Λ) is extremely small compared to the Planck mass scale M_Pl (which is a typical scale of gravity), and also that its energy density ρ_Λ=Λ/8π G_N≈ 2.8× 10^-11eV^4∼ (10^-3eV)^4 is many orders of magnitude smaller than the theoretical vacuum energy value ρ_vac estimated from the quantum corrections of Quantum Field Theory with any sensible cut-off. This discrepancy is called the cosmological constant problem, and it is the most severe hierarchy problem in modern physics. It is also called a fine-tuning problem since, if a bare cosmological constant of opposite sign is added to the action to cancel ρ_vac, it has to be tuned to extreme accuracy in order to give the effective value of 10^-3eV quoted above (even if supersymmetry is restored at high energies, extreme and unnatural fine-tuning is still needed). Even if the present dark energy in the universe has nothing to do with a cosmological constant, the question of understanding why the estimated quantum vacuum energy cancels out and does not contribute still remains, and may need quantum gravity or other yet-to-be-discovered physics.

Beyond the previous problem of why Λ is so extraordinarily small, there is an extra question, named the coincidence problem, related to the specific value Λ∼ H_0^2. Since the energy density ρ falls like ρ∼ a^-3 starting from a huge (if not infinite) value, why does it happen today that 8π G_Nρ_0∼Λ (actually 8π G_Nρ_0≈ 0.4Λ), rather than with a very large or a very small proportionality factor? Why are dark matter and dark energy of the same order today, ρ_0∼ρ_Λ? Moreover, since √(Λ)∼ H_0, the time scale t_Λ∼ 1/√(Λ) is of the same order as the age of the universe H_0^-1, something that did not necessarily need to be the case. There are three unrelated quantities ρ_0,Λ,G_N, and there is no obvious reason why they should be related in this way. To be more precise, the relation 8π G_Nρ∼Λ holds recently, for 0⩽ z≲𝒪(1), which means for a few billion years (taking this time span into account, one may argue that the problem is not quite so sharp). On the contrary, such a relation between G_Nρ and Λ could have occurred in the far past, at much larger redshifts (a scenario probably precluded by anthropic arguments), which would imply that today we would have a universe full of cosmological constant and a negligible matter contribution. Or, finally, such a relation could occur in the far future, and today we would observe matter domination with negligible Λ. Equivalently, although the Hubble parameter started in the past from huge values (if not infinite), recently it is H^2∼Λ∼ 8π G_Nρ, and not H^2≈Λ/3 or H^2≈ 8π G_Nρ/3. To better appreciate the sensitivity of the recent coincidence on the value of Λ, let us assume that the cosmological constant were just one hundred times larger than the observed one. Then its coincidence with the matter would have occurred at a redshift of almost 5, and today the dark matter would be less than one percent of the dark energy.
At the other end, if Λ were one hundred times smaller than the observed value, its coincidence with the matter would occur at a redshift of almost -0.7, and today the dark energy would be only about two percent of the dark matter. Therefore, the coincidence problem appears because Λ takes a value inside a very narrow range of the ρ values. In terms of the flatness parameters, the coincidence problem is stated by a relation of the form Ω_m0∼Ω_Λ 0, and not Ω_m0≪Ω_Λ 0 or Ω_m0≫Ω_Λ 0. Ignoring κ, it holds that Ω_m≈ 1, Ω_Λ≈ 0 for a broad range of redshifts in the past until recently, where Ω_m∼Ω_Λ. In the far future it will be Ω_m≈ 0, Ω_Λ≈ 1. Acceleration exists as long as 2Ω_Λ>Ω_m.

In general dark energy models, the coincidence problem is formulated through the observed present acceleration together with the relation Ω_m0∼Ω_DE,0, while in the past, due to structure formation arguments, it is strongly believed that Ω_m≈ 1, Ω_DE≈ 0. The dark energy density ρ_DE defined by 8π G_Nρ_DE=3Ω_DEH^2 obeys ρ_0∼ρ_DE,0. Depending on the particular dark energy model, the quantity ρ_DE,0 contains integration constants reflecting initial conditions of the possible fields involved (e.g. ϕ_0,ϕ̇_0 for a scalar field, or ρ_0 itself in a geometrical modification of gravity), dimensionful or dimensionless couplings/parameters of the theory, and possibly other quantities (e.g. of astrophysical nature). The previous coincidence relation between the energy densities provides an equation between all these quantities, which then have to be appropriately adjusted. Usually, in dark energy models the scale defined by Λ is exchanged for another scale of the same order describing some new physics. It appears that the coincidence problem is a question of naturalness between integration constants and other parameters, and naturalness is not an issue easily quantified.

A proposed solution of the coincidence problem will consist of one more dark energy model, which may be more or less natural, may or may not introduce new physics, may or may not introduce new scales, but whose verification will come not from the concept itself but from experimental evidence for the particular model. And finally, when the origin of dark energy has been understood, it will become obvious what the independent scales and initial conditions created by Nature are. Even a model which contains new scales that are at present unrelated to the rest of physics may be very close to reality. This is why, when analyzing a dark energy model, the values of Ω_m0,Ω_DE,0, as well as the rest of the parameters/initial conditions, are in general extracted from fits to the observational data, and there is no special concern about a deeper understanding, which would mean expressing these values in terms of other, more fundamental ones. Of course, if such an explanation for the recent emergence of dark energy through the coincidence relation ρ∼ρ_DE can be provided in terms of quantities that already play a role in Nature or/and other quantities theoretically predicted in the context of a theory, this would possess extra naturalness and might render the particular dark energy model promising (this is the case for one of the models to be presented in the present work).
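As a quick numerical check of the sensitivity estimates given above: a cosmological constant rescaled by a factor s shifts the matter-Λ equality redshift through (1+z_eq)^3=sΩ_Λ0/Ω_m0. The following sketch, with the standard density parameters, reproduces the quoted redshifts and density ratios.

```python
# Matter-Lambda equality redshift for a cosmological constant rescaled by s,
# and the resulting today ratios of the two energy densities.
Om, OL = 0.3, 0.7

def z_eq(s):
    # rho_m0 (1+z)^3 = s * rho_Lambda  =>  (1+z)^3 = s * OL/Om
    return (s*OL/Om)**(1/3) - 1

for s in (100.0, 1.0, 0.01):
    print(f"s = {s:>6}: z_eq = {z_eq(s):+.2f}, "
          f"rho_m0/rho_L = {Om/(s*OL):.4f}, rho_L/rho_m0 = {s*OL/Om:.4f}")
# s=100  -> z_eq = +5.2, matter today < 1% of dark energy
# s=0.01 -> z_eq = -0.7, dark energy today ~ 2% of matter
```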
Another idea that alleviates the coincidence problem comes through the realization of the current state of the universe as being close to a global fixed point (saddle or preferably attractor) of the cosmic evolution, since then the coincidence problem becomes an issue of only the parameters of the model and not also of the initial conditions <cit.>. In order for this to be possible with the present acceleration and a scaling behaviour between Ω_m,Ω_DE, the violation of the standard energy-momentum conservation of the matter is necessary <cit.>.

The proposal introduced in the present paper is that the dark energy observed recently in the universe may be the result of local gravity effects occurring in the interior of astrophysical objects, such as massive structures (galaxies, clusters) or even black holes, and these effects directly determine the cosmic evolution. Such local effects can arise from an arbitrary gravitational theory (alternative/modified gravity, extra-dimensional gravity, quantum gravity, etc.). The main point is that the specific gravitational theory is not applied directly to cosmology in the conventional way, with the matter described as a usual perfect fluid, in order to obtain time-dependent differential equations for the geometry (e.g. the scale factor) and the other ingredients; the reason is that it is not clear how the cosmic effective energy-momentum tensor can be quantified taking into account the extra contributions of local origin. So, even if the theory can be applied directly to cosmology, the result will in general be different from the one arising from the process described here because, depending on the scales of the theory, the dark energy can be suppressed in one of the two derived cosmologies and be considerable in the other. Thus, if a gravity effect becomes substantial only at an intermediate infrared scale (an astrophysical one), it cannot be revealed at the far-infrared cosmological scale. In addition, integration constants emanating from possible integrations (due to extra fields or geometrical effects) in the local metric will be of quite different nature than the cosmological ones; these constants might be specified, or at least estimated, from quantities characterizing the astrophysical object itself or from regularity arguments at the center of the structure, contrary to the specification of the integration constants of a cosmological quantity, which requires a quite different treatment. Applying the gravitational theory first inside the structure means finding the gravitational and the other possible fields in the interior of the object. Because the astrophysical structures are not point-like but extended (galaxies have a luminous profile surrounded by the dark matter halo, and clusters contain a distribution of galaxies and dark matter), or may be described by a collapsing phase, this task can be complicated. However, depending on the ansatz for the local and the cosmological metrics and how these are interrelated, it may be enough to find the static spherically symmetric solution of the theory (as happens in the Swiss cheese model described previously), or other more complete solutions may be needed to describe an inhomogeneous universe with more realistic structures.
In the present work the Swiss cheese model will be adopted as the simplest, though not necessarily the most realistic, construction; therefore, the ensuing cosmologies will arise through the matching of the interior metric with the exterior FRW metric on the Schucking surface. It is obvious that such a local gravity effect should not contradict observations at the relevant astrophysical scale.

In the context where the dark energy owes its origin to the presence of structure, either for the reasons elaborated here or due to an averaging process (as e.g. in <cit.>, <cit.>, <cit.>), dark energy appears recently not as a coincidence but as an emerging effect of the structure, since the various structures are formed during the cosmic evolution recently, at small redshifts. With this in mind, namely that the coincidence problem can be a guiding line for studying cosmology, it becomes tempting to see what new scales and integration constants are introduced by a given gravitational theory in the above context, so that the corresponding cosmology confronts (if possible) the acceleration and other data, and especially the relation ρ∼ρ_DE. In the following sections, we will implement the previous ideas to derive a few dark energy models; our most promising model will appear in the last subsection.

§ BLACK HOLES WITH COSMOLOGICAL CONSTANT

When a constant cosmological term is added to the Schwarzschild metric, the well-known Schwarzschild-de Sitter metric arises: ds^2=-(1-2G_N M/R-1/3Λ R^2)dT^2 +dR^2/(1-2G_N M/R-1/3Λ R^2) +R^2(dθ^2+sin^2θ dφ^2). It is apparent that J(R)=1. Equation (<ref>) provides through (<ref>) the cosmic evolution of the scale factor: ȧ^2/a^2+κ/a^2=2G_N M/r_Σ^3 a^3 +Λ/3 . Furthermore, equation (<ref>) gives ä/a=-G_NM/r_Σ^3a^3+Λ/3 , which indicates the well-known late-time accelerated expansion when the two terms on the r.h.s. of (<ref>) become comparable. Using the Swiss cheese condition (<ref>), equations (<ref>), (<ref>) can also be written as H^2+κ/a^2 = 8π G_N/3 ρ+Λ/3 , ä/a = -4π G_N/3 ρ+Λ/3 . Since Ω_m0 is close to 0.30 according to the most recent constraints <cit.>, equation (<ref>) gives r_Σ=0.83Mpc for a galaxy with M=10^11M_⊙ and r_Σ=18Mpc for a cluster with M=10^15M_⊙.

Equations (<ref>), (<ref>) are the standard cosmological equations of the ΛCDM model. In that model, the cosmological constant Λ is considered a universal constant related to the vacuum energy. However, in the context of the present work, Λ arises from the interior black hole solution (<ref>) and has a quite different origin and meaning. This Λ is of astrophysical origin and is the total cosmological constant coming from the sum of all antigravity sources inside the Schucking radius of the galaxy or cluster. It is expected that, through some concrete quantum gravity theory, matter is related to the generation of an explicit or effective cosmological constant. For example, in the centres of astrophysical black holes, the avoidance of the singularity could be achieved by the presence of a repulsive pressure of quantum origin balancing the attraction of gravity. Another important difference is that, according to our proposal, the antigravity sources are connected to either massive structures (galaxies, clusters) or astrophysical black holes; therefore, before the appearance of all these objects the total Λ is zero. As a result, this Λ becomes a function of cosmic time, suppressed at larger redshifts where the antigravity effect is weaker.
A constant Λ is thus expected to be only an approximation at late times.

The metric (<ref>) contains the Newtonian term 2G_NM/R and the cosmological constant term 1/3Λ R^2. For distances R close to the border, of coordinate distance R_b, the matter can be considered as gathered at the origin, as mentioned above. We will give an estimate of the corresponding values of the potential and of the force due to the cosmological constant. In the weak field limit the force corresponding to the Newtonian term is -G_NM/R^2, while the cosmological constant force is 1/3Λ R and is repulsive. The ratio of the magnitude of the cosmological constant force to the Newtonian force is 2Ω_Λ 0/Ω_m0 (R/r_Σ)^3; therefore, the significance of the cosmological constant increases with distance. At the border of a typical galaxy with mass 10^11M_⊙ and radius 0.15Mpc, the Newtonian term has a value of approximately 6× 10^-8, while the cosmological constant term is almost 9× 10^-10, i.e. two orders of magnitude smaller. Of course, the two potentials at the Schucking radius are of the same order, since dark matter and dark energy today are of the same order. Similarly, the repulsive force at the border is also almost two orders of magnitude smaller than the Newtonian force. Therefore, the Λ term is negligible at the galaxy level, and the galaxy dynamics is not disturbed by this antigravity effect.

For clusters there is a larger variability in the range of their radii and the corresponding masses. At the Schucking radius the two potentials are still of the same order. For a mass 10^15M_⊙ and radius 0.5Mpc, the Newtonian term is 2× 10^-4 at the border, while the cosmological constant term is 10^-8. As a result, the repulsive force is four orders of magnitude smaller than the attractive force. For a radius of 5Mpc, the Λ force is still smaller than the Newtonian force, but only by one order of magnitude. For a mass 10^14M_⊙ and radius 5Mpc, the two forces become equal in magnitude. If the cosmological constant is indeed generated at cluster scales, then this constant should be present in all clusters. Therefore, more investigation is needed of particular clusters that might exhibit some abnormal dynamics, and of whether this can be explained through a constant Λ.

The previous discussion shows that Λ could be generated inside astrophysical objects, either without affecting their dynamics or signaling some observable deviations in this dynamics, and at the same time create the standard ΛCDM cosmology. Of course, this constant Λ does not offer any alleviation of the coincidence puzzle.

§ BLACK HOLES WITH VARYING COSMOLOGICAL CONSTANT OF QUANTUM ORIGIN

The Schwarzschild-de Sitter metric can be promoted to a quantum improved Schwarzschild-de Sitter metric describing the astrophysical object. This metric has the form ds^2=-(1-2G_k M/R-1/3Λ_k R^2)dT^2 +dR^2/(1-2G_k M/R-1/3Λ_k R^2) +R^2(dθ^2+sin^2θ dφ^2) , where the quantities G_k,Λ_k are functions of a characteristic energy scale k and F(R)=1-2G_kM/R-1/3Λ_kR^2. The functional behaviour of G_k,Λ_k is determined by the underlying quantum theory of gravity. The energy scale k is related to the distance from the center of the object, with the exact dependence arising from the particular quantum corrections. Therefore, G_k,Λ_k are also related to the distance. In the Swiss cheese analysis, however, only the front value R_S of the distance at the matching surface influences the cosmic evolution, and thus only the corresponding energy value k_S will be relevant.
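Before proceeding, the weak-field estimates of the previous section can be cross-checked numerically. The sketch below evaluates the two dimensionless potentials 2G_NM/(c^2R) and ΛR^2/3 (with Λ=3Ω_Λ0H_0^2/c^2) at the borders quoted above; the constants are standard, and the helper name potentials is ours.

```python
# Cross-check of the weak-field potential estimates at the object border.
G, c = 6.674e-11, 2.998e8     # SI units
Msun = 1.989e30               # kg
Mpc  = 3.086e22               # m
H0   = 0.72e-10 / 3.156e7     # s^-1
Lam  = 3*0.7*H0**2/c**2       # m^-2, Lambda = 3*Omega_L0*H0^2/c^2

def potentials(M, Rb_Mpc):
    R = Rb_Mpc*Mpc
    return 2*G*M/(c**2*R), Lam*R**2/3

print(potentials(1e11*Msun, 0.15))   # galaxy:  ~6e-8  vs ~9e-10
print(potentials(1e15*Msun, 0.5))    # cluster: ~2e-4  vs ~1e-8
```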
As mentioned before, in a real galaxy or cluster the total mass consists of stars, dark matter, and black holes (classical or quantum modified), which we collectively denote by M. Although the various objects are distributed throughout the structure, in our approach it is sufficient to consider these materials as gathered together at the center of spherical symmetry.

As is known, the Israel matching conditions are only applicable in Einstein gravity with some regular energy-momentum tensor. In an alternative/modified gravity, whether containing extra fields or not, the corresponding matching conditions are in general modified. One might wonder if the Israel conditions are still applicable in our case, with a metric of the form (<ref>). The answer is positive, and we explain this in the following. A quantum-originated spherically symmetric metric, such as the one described above, does not in general arise as a solution of some classical field equations for the metric, but is obtained by considering quantum corrections beyond the classical Einstein term. For example, in AS gravity, the solution of the Renormalization Group (RG) flow equations gives G_k,Λ_k. Therefore, for our Swiss cheese approach, it is quite reasonable and necessary to interpret a metric such as (<ref>) as a solution of a coupled gravity-matter system satisfying the Einstein equations G_μν=8π G_NT_μν^(tot). This T_μν^(tot)=T_μν+T_μν^(eff) contains, apart from a possible real matter energy-momentum tensor T_μν (which for us is zero, since the mass is just an integration constant), an effective energy-momentum tensor T_μν^(eff) of gravitational origin which takes into account the quantum corrections (for an interpretation of such a T_μν^(eff) in terms of fluid variables see <cit.>). Since (<ref>) expresses the quantum corrections of the classical Schwarzschild metric, the tensor T_μν^(eff) appears as the correction beyond the Einstein equations of motion and not beyond some other modified classical equations of motion. To find this T_μν^(eff), we need to compute the Einstein tensor G_μν of the metric (<ref>). This could lead to a non-trivial situation, where the Israel matching conditions are satisfied or not, depending on the form of this effective energy-momentum tensor. However, for the whole analysis of the present paper, the standard Israel conditions arise. Indeed, the Einstein tensor G^μ_ ν constructed from the metric (<ref>) has the following non-vanishing components: G^T_T=G^R_R=1/R^2(RF'+F-1), G^θ_θ=G^φ_φ=1/2R(RF”+2F') . Since the Schwarzschild metric with F_Sch=1-2G_NM/R satisfies the vacuum Einstein equations, we get G^T_T=G^R_R=1/R^2(R𝒬'+𝒬), G^θ_θ=G^φ_φ=1/2R(R𝒬”+2𝒬') , where the quantity 𝒬=2(G_N-G_k)M/R-1/3Λ_kR^2 is defined by 𝒬≡ F-(1-2G_NM/R), i.e. it is the deviation of the metric component F from 1-2G_NM/R. The quantity 𝒬 will be seen to be well-defined, depending on the assumptions of the quantum theory. Therefore, in the interior regime with the metric (<ref>), the Einstein equations G^μ_ ν=8π G_NT^μ(eff)_ ν acquire a well-defined T^μ(eff)_ ν, which is given by the right hand sides of (<ref>) and parametrized by the quantity 𝒬: T^T(eff)_ T=T^R(eff)_ R =1/8π G_NR^2(R𝒬'+𝒬), T^θ(eff)_ θ=T^φ(eff)_ φ= 1/16π G_NR(R𝒬”+2𝒬') . Moreover, equating the right hand sides of the expressions (<ref>), (<ref>), we find the ordinary differential equations for F: RF'+F-1=R𝒬'+𝒬, RF”+2F'=R𝒬”+2𝒬' . Differentiating the first equation of (<ref>), we get the second equation.
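The diagonal components quoted for G^μ_ν can be verified symbolically. The following Python/sympy sketch, with our own helper names, builds the Einstein tensor of the metric -F(R)dT^2+dR^2/F(R)+R^2dΩ^2 from scratch and recovers G^T_T=(RF'+F-1)/R^2 and G^θ_θ=(RF''+2F')/2R.

```python
import sympy as sp

T, ph = sp.symbols('T phi')
R, th = sp.symbols('R theta', positive=True)
F = sp.Function('F')(R)
x = [T, R, th, ph]
g = sp.diag(-F, 1/F, R**2, R**2*sp.sin(th)**2)
gi = g.inv()
n = 4

# Christoffel symbols Gam[a][b][c] = Gamma^a_{bc}
Gam = [[[sp.simplify(sum(gi[a, d]*(sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                                   - sp.diff(g[b, c], x[d]))/2 for d in range(n)))
         for c in range(n)] for b in range(n)] for a in range(n)]

def Ricci(b, c):
    # R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma^a_{ad} Gamma^d_{bc}
    #          - Gamma^a_{cd} Gamma^d_{ba}
    return sp.simplify(sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
                           + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
                                 for d in range(n)) for a in range(n)))

Rs = sp.simplify(sum(gi[b, c]*Ricci(b, c) for b in range(n) for c in range(n)))

def G_mixed(a, b):
    # G^a_b = g^{ac} R_{cb} - (1/2) delta^a_b R
    return sp.simplify(sum(gi[a, c]*Ricci(c, b) for c in range(n))
                       - sp.Rational(1, 2)*sp.eye(n)[a, b]*Rs)

print(sp.factor(G_mixed(0, 0)))   # -> (R*F' + F - 1)/R^2
print(sp.factor(G_mixed(2, 2)))   # -> (R*F'' + 2*F')/(2*R)
```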
We are not particularly interested in the precise function F(R), since for cosmology what matters is only the value of F at the Schucking radius. We now distinguish the following cases.

—If G_k,Λ_k are functions of R in an explicit algebraic form (as happens in subsection VII-B-1), then the metric component F, the quantity 𝒬 of (<ref>) and T^μ(eff)_ ν of (<ref>) also become such functions of R. The second (as well as the first) derivatives of the metric component F in the Einstein equations (<ref>) are contained only on their left hand sides, which come from G^μ_ ν, while the 𝒬 terms on the right hand sides, which come from T^μ(eff)_ ν, act as a sort of given potentials that modify F(R) away from the Schwarzschild metric. Although this picture is rather trivial, because the evaluation is performed on-shell, on the explicit metric component F(R), it is still meaningful, since the function F(R) still satisfies the differential equations (<ref>). As is well known, the matching conditions are extracted from the Einstein equations G^μ_ ν=8π G_NT^μ(eff)_ ν by focusing only on the second derivatives of the metric components. Since these derivatives appear only inside the Einstein tensor G^μ_ ν and not inside T^μ(eff)_ ν, the Israel matching conditions naturally arise in this case.

—In subsections VII-A, VII-B-2, we will discuss another situation, which is non-trivial in the sense that F is not a known function of R. At the same time, this situation is actually more promising in its cosmological results and also more favorable theoretically. Here, Λ_k is not an explicit function of R, but contains an integral of F(R). Since F(R) is not known, being the metric component to be found by solving (<ref>), this integral cannot be performed. This means that the quantity 𝒬 contains an independent geometrical field D(R) with its own equation of motion. The important point is that this equation of motion is only of first order, i.e. it contains D' and not D”, and therefore there is no discontinuity of D' at the matching surface. As a result, T^μ(eff)_ ν in (<ref>) becomes a function of R,D,F,F', with no F” present in T^μ(eff)_ ν. Therefore, F” is again contained only in the tensor G^μ_ ν of the Einstein equations G^μ_ ν=8π G_NT^μ(eff)_ ν, and the Israel conditions remain intact. We will provide more precise explanations of this issue at the appropriate point later. The metric component F(R) can be found by solving for F,D the coupled system of the first equation in (<ref>), i.e. the equation RF'+F-1=R𝒬'(R,D,F)+𝒬(R,D), together with the first order differential equation for D.

—Finally, if we assume, merely as a mathematical extension of the above, the more complicated situation where G_k,Λ_k contain double integrations of F,F', the equation of motion of the D field becomes of second order. Then, a discontinuity of D' could be present at the matching surface, and the Israel matching conditions might be modified. We only mention such a possibility, without including it in our analysis, to show how the Israel conditions could be violated in extreme and artificial situations, where no clear physical motivation justifies such constructions.

To summarize, we have shown that the Israel matching conditions are still applicable for the metric (<ref>) in the context of our analysis.
Therefore, in the Swiss cheese approach, equations (<ref>), (<ref>) can be used to derive the cosmological evolution.

If G_k has the constant observed value G_N and the underlying theory provides only ultraviolet (UV) corrections to Λ, it is not possible to get cosmic acceleration, since Λ is suppressed at large distances and Λ(R_S) almost vanishes. Therefore, infrared corrections to Λ are necessary. For example, a well-known phenomenological description of a quantum-corrected non-singular black hole is provided by the Hayward metric <cit.>, where F(R)=1-2G_NMR^2/(R^3+2G_NML^2)=1-2G_NM/R -2G_NM(1/(R^3+2G_NML^2)-1/R^3)R^2 . The length scale L controls the ultraviolet correction close to the origin and smoothes out the singularity. In this metric the effective cosmological constant becomes a function of R, namely Λ(R)= 6G_NM(1/(R^3+2G_NML^2)-1/R^3). At distances where the Newtonian potential is very weak and R≳ L, the potential of this cosmological “constant” is negligible compared to the Newtonian potential, and its corresponding weak-field force (which is repulsive) is well suppressed compared to the Newtonian force. Equation (<ref>) for the acceleration gives ä/a=G_NM(4G_NML^2-r_Σ^3a^3)/(2G_NML^2+r_Σ^3a^3)^2 . This metric does not provide a recent cosmic acceleration, since at late times deceleration emerges. Similarly, the solution presented in <cit.> fails for the same reason.

§ THE CASE OF ASYMPTOTICALLY SAFE GRAVITY

A concrete realization of the functions G_k,Λ_k is provided by the asymptotically safe scenario of quantum gravity. In Appendix A, we present a few basic elements of the theory and of its application in cosmology. According to the AS program, both Newton's constant G_k and the cosmological constant Λ_k are energy dependent, G_k=G(k)=g_kk^-2 , Λ_k=Λ(k)=λ_kk^2, where k is an energy measure of the system and g_k,λ_k are the dimensionless running couplings governed by RG flow equations. The exact RG flow of the couplings from the Planck regime down to the present epoch is not yet known. So, it is not clear what the real trajectory in the space g_k,λ_k followed by the universe is, and how the classical General Relativity regime with a constant G_N and negligible Λ can be obtained. However, even if it were possible to predict the flow describing the decrease of Λ_k all the way down to the current cold cosmic energy scale, and a very small value of Λ could be realized, the coincidence problem, which has to do with the precise order of magnitude of Λ, would still be manifest and profound. Furthermore, in the picture with a constant G and a Λ_k monotonically decreasing with time, a recent passage from a deceleration to an acceleration phase would not be possible. The far-infrared limit of g_k,λ_k certainly covers the late-time cosmological scales. However, even if a late-time behaviour with an increasing G does exist, it will describe the future rather than the present universe, which exhibits an antiscreening instead of a screening behaviour. Additionally, there is the possibility that the far-infrared corrections of the cosmological constant at cosmic scales are too small to affect the present universe evolution and drive it into acceleration.

In any case, the cosmic scale corrections of G,Λ are not of interest in our approach. Our interest is focused on the intermediate infrared corrections occurring at the scales of astrophysical structures.
The hope is that these intermediate-scale quantum corrections will be significant enough to have a direct influence on the current cosmology and on the observed dark energy component, while at the same time not conflicting in an obvious way with the local dynamics. Indeed, we will show in the remainder of the paper that the recent cosmic acceleration can be the result of such quantum corrections of the cosmological constant at the galactic or cluster-of-galaxies scale.

As before, the universe will be described by the Swiss cheese model, and the matching will take place on the surface between a cosmological metric and a quantum-modified spherically symmetric metric. Several interesting approaches to RG-improved black hole metrics appear in the literature. In our analysis we will work with the quantum improved Schwarzschild-de Sitter metric (<ref>), as e.g. presented in <cit.>. Of course, the precise form of the metric will arise from the forms adopted for G_k,Λ_k, as these are predicted or motivated by AS. A different treatment would be to substitute the functions G_k,Λ_k inside some consistent quantum-corrected gravitational equations of motion, or inside some consistent action, and first derive the quantum-corrected spherically symmetric solution. As mentioned, here we will follow for simplicity the method of obtaining a quantum-corrected metric by starting from a classical solution and promoting G_N,Λ to energy-dependent quantities according to the AS program.

In order to proceed further, the energy measure k has to be connected with a length scale L, i.e. k=ξ/L, where ξ is a dimensionless parameter which is expected to be of order one. As we approach the center of spherical symmetry, the mean energy increases, and k is a measure of the energy scale that is encoded in Renormalisation Group approaches to Quantum Gravity. A simple option is to take as L the coordinate distance R of the spherically symmetric metric. Then, the value of the cosmological constant Λ_k, which provides a value for the local vacuum energy density, depends explicitly on the distance from the center, Λ_k=Λ(R). The same is true for G_k=G(R), and so the metric component F of the metric (<ref>) becomes an explicit function of R, F=F(R). Let us recall that the function F(R) is precise when the mass M is located at the origin. In our approach, where we want to describe real astrophysical objects with a mass profile, F(R) in the interior should be treated with caution.

A more natural option is to take as L the proper distance D>0 <cit.>. This case is more involved, both physically and technically. Following a radial curve defined by dT=dθ=dφ=0 to reach a point with coordinate R, we have D(R)=∫_R_1^Rdℛ/√(F(ℛ)) . This is a formal expression until one realizes what its meaning is, and what the meaning of R_1 is. Now, the function F is not an explicit function of R, since D is also contained in F by construction, so F(ℛ) in (<ref>) really means F(ℛ,D(ℛ)). Thus, (<ref>) is not a simple integral but an integral equation, which can be converted into the more useful differential equation D'(R)=1/√(F(R)) . This is a well-defined, though complicated, differential equation for D, since F again contains D(R) (in other words, F is given by (<ref>)). Integration of equation (<ref>) provides an integration constant σ, so the solution of (<ref>) is D(R;σ). Plugging this D into F provides F(R;σ). For a specific σ, a value R_1(σ) should exist that makes the equation (<ref>) meaningful for R>R_1. The positiveness of D may also provide some restrictions on R.
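To make the role of the differential equation for D concrete, here is a minimal numerical sketch that integrates D'(R)=1/√(F(R,D)) outward from some R_1 with D(R_1)=σ. The form of F below, with Λ_k=γ(ξ/D)^b, anticipates the ansatz used later in the text; all parameter values are arbitrary toy choices in dimensionless units, not fits to any real object.

```python
# A minimal sketch: solving D'(R) = 1/sqrt(F(R, D)) as a first-order ODE.
import numpy as np
from scipy.integrate import solve_ivp

GM, gamma, xi, b = 1.0, 1e-3, 1.0, 2.1    # hypothetical toy parameters

def F(R, D):
    Lam = gamma*(xi/D)**b                  # running cosmological "constant"
    return 1.0 - 2.0*GM/R - Lam*R**2/3.0

def rhs(R, y):
    return [1.0/np.sqrt(F(R, y[0]))]

R1, RS = 3.0, 50.0        # start outside the horizon, stop at the Schucking radius
sigma  = 3.0              # integration constant: D(R1) = sigma
sol = solve_ivp(rhs, (R1, RS), [sigma], dense_output=True, rtol=1e-8)
print("D(R_S) =", sol.y[0, -1])   # D stays of the order of the radii involved
```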
Even if a minimum horizon distance R_H exists where F(R_H;σ)=0, R_1 does not necessarily coincide with R_H, since after D(R;σ) has been found, it is possible that the arising function 1/√(F(R;σ)) is non-integrable around R_H. So, the variable D(R) is a proper distance, but not in the conventional sense of an integration in a prefixed background. It can rather be considered as a new dynamical field of geometrical nature with its own equation of motion (<ref>), where the spacetime metric is determined through D(R;σ). The role of the function D(R) will be crucial in our analysis. The integration constant σ could be determined by some assumption; for example, if R_H exists, one could set D_H=0 or D_H=R_H. In our case of Swiss cheese models, σ will be determined from the demand of having the correct amount of dark energy today. However, the interesting thing, especially in relation to the coincidence problem, is that the corresponding D(R) will have throughout natural values of the order of the length of the astrophysical object, while at the same time providing the correct estimate of dark energy.

Let us now complete the discussion started in section VI about the validity of the Israel matching conditions when the choice (<ref>) is made. The cosmological constant becomes a function of D, i.e. Λ_k=Λ_k(D). If G_k=G_N (as we will assume in our analysis), the quantity 𝒬 in (<ref>) becomes 𝒬=-1/3Λ_kR^2, so it is a function of R,D, i.e. 𝒬=𝒬(R,D). Making use of (<ref>), we can easily compute 𝒬'=-2/3Λ_kR+ξ R^2/3D^2√(F)∂Λ_k/∂ k=𝒬'(R,D,F), 𝒬”=-2/3Λ_k+ξ R/6D^3F^3/2 (8DF-4R√(F)-RDF')∂Λ_k/∂ k- ξ^2R^2/3D^4F∂^2Λ_k/∂ k^2=𝒬”(R,D,F,F'). It is now clear, as also mentioned above, that the right hand sides of the Einstein equations (<ref>), i.e. the components of T^μ(eff)_ ν, are functions only of R,D,F,F' and not of F”. Since F” is contained only in G^μ_ ν on the left hand sides of (<ref>), and the evolution of D is governed by the first order differential equation (<ref>), the Israel matching conditions arise.

In our Swiss cheese approach to cosmology, the matching between the interior and the exterior metric occurs at the Schucking radius, which alone enters the cosmological evolution. The front value k_S at the Schucking radius is inversely proportional to a characteristic length of the metric. For the choice L=R we get k_S=ξ/R_S . For L=D it is k_S=ξ/D_S , where D_S=∫_R_1^R_SdR/√(F(R)) is the proper distance of the Schucking radius. Therefore, the front values Λ(R_S),G(R_S) at the Schucking radius are the ones where the matching occurs, and they determine the cosmological evolution from (<ref>), (<ref>) as ȧ^2/a^2+κ/a^2=2G(R_S)M/r_Σ^3a^3 +1/3Λ(R_S) .

We finish with a side comment. The dependence of Λ, G on the distance inside the object does not seem to affect our cosmology. However, there is an indirect influence on the parameters. Indeed, the total cosmic energy of the cosmological portion that is excised from the Swiss cheese should be equal to the energy provided by the various masses inside the astrophysical object plus the vacuum energy due to the cosmological constant. Since the vacuum energy density of a cosmological constant is Λ/8π G, we have approximately the equation 4π/3R_S^3ρ_tot=M+∫_0^R_SΛ(R)/8π G(R) 4π R^2dR , where ρ_tot=ρ+ρ_DE is the total cosmic energy density (dark matter plus dark energy) <cit.>. Equation (<ref>) evaluated at the today values, according to (<ref>), sets a restriction among the various parameters.
Unfortunately, however, we cannot make use of this equation, since we do not know the precise functions Λ(k),G(k) from the UV at k=∞ down to k_S∼ R_S^-1. The difficulty is basically due to Λ(k), since G(k) rapidly evolves to its constant value G_N. So, one more parameter will be left free in our analysis.

§.§ First RG flow behaviour: close to the Gaussian fixed point

There is a fixed point of the RG flow equations, the Gaussian fixed point (GFP) <cit.>, which is a saddle and is located at g=λ=0. An appropriate class of trajectories in the Einstein-Hilbert truncation of the RG flow can be linearized about the GFP, where the dimensionless couplings are pretty small. These trajectories possess interesting qualitative properties, such as a long classical regime (long ln k “time” due to the vanishing of the beta functions) and a small positive cosmological constant in the infrared, features that seem relevant to the description of gravitational phenomena in the real universe. The analysis is fairly clear, and in the vicinity of the GFP it arises that Λ has a k^4 running while G has an approximately constant value, which is interpreted as G_N. Therefore, G_k=G_N, Λ_k=α+β k^4 , where α,β are positive constants. Moreover, β=ν G_N, where ν=𝒪(1). In terms of the dimensionless couplings, λ_k=α k^-2+β k^2, g_k=G_Nk^2. These equations are valid if λ_k≪ 1, g_k≪ 1. While the above segment, which lies inside the linear regime of the GFP, can be continued with the flow equation into the UV fixed point, this approximation breaks down in the IR, where λ_k approaches the value 1/2. Therefore, as our first choice, we will assume that within a certain range of k-values encountered in an astrophysical object, the RG trajectory is approximated by (<ref>).

For the choice (<ref>), equation (<ref>) provides the cosmic evolution of the scale factor a as ȧ^2/a^2+κ/a^2=2G_NM/r_Σ^3a^3 +α/3+βξ^4/3D_S^4 . The choice (<ref>) is not interesting, since the last term in equation (<ref>) would then be a radiation term a^-4, and (<ref>) would reduce to the ΛCDM model with a radiation term. From (<ref>) it can easily be found that the time evolution of D_S is given by the equation Ḋ_S=r_ΣaH/√(1-2G_NM/r_Σa-α/3 r_Σ^2 a^2-βξ^4r_Σ^2a^2/3D_S^4) . Equations (<ref>), (<ref>) form a system of two coupled differential equations for a,D_S. We can bring this system into a more standard form by defining χ=α/3+βξ^4/3D_S^4 , and then ȧ^2/a^2+κ/a^2 = 2G_NM/r_Σ^3a^3+χ , χ̇ = -4· 3^1/4r_ΣaH(χ-α/3)^(5/4)/ξβ^(1/4)√(1-2G_NM/r_Σa-r_Σ^2a^2χ) . Note from (<ref>) that χ-α/3>0. From (<ref>) it is seen that the quantity χ plays the role of dark energy. Namely, H^2+κ/a^2=8π G_N/3(ρ+ρ_DE) , where ρ is given by (<ref>) and ρ_DE=3χ/(8π G_N).

Note in passing that equation (<ref>), combined with equation (<ref>), gives 4π/3R_S^3ρ_tot=M+Λ(R_S)R_S^3/6G_N . Comparing this equation with (<ref>), it arises that the total energy due to the cosmological constant is given by two expressions: first by the integral in (<ref>) and second by the last term in (<ref>).
There is no contradiction in that, since the equality of these two expressions at the today values simply provides the additional constraint on the parameters mentioned above.

The density parameters are defined in the standard way: Ω_m=8π G_Nρ/3H^2 , Ω_DE=8π G_Nρ_DE/3H^2 . From the first of equations (<ref>), using the today values of the variables, a relation between the parameters M and r_Σ can be found: r_Σ=(2G_NM/Ω_m0a_0^3H_0^2)^(1/3) . For the typical masses we use, it was found above that r_Σ=0.83Mpc for a galaxy and r_Σ=18Mpc for a cluster of galaxies.

It is more convenient to work with the redshift z=a_0/a-1, where the today value a_0 of the scale factor can be set to unity. From (<ref>), (<ref>) the evolution of χ(z) is found from dχ/dz=4· 3^1/4(χ-α/3)^(5/4)/ξβ^(1/4)(1+z)^2√(1/r_Σ^2a_0^2- Ω_m0H_0^2(1+z)-χ/(1+z)^2) . After having solved (<ref>), the evolution of the Hubble parameter as a function of z is given by the expression H^2=Ω_m0H_0^2(1+z)^3+χ-κ/a_0^2(1+z)^2 . Using (<ref>), (<ref>), the quantity Ω_m can also be found as a function of the redshift: Ω_m=[1+1/Ω_m0H_0^2 χ/(1+z)^3- κ/a_0^2Ω_m0H_0^2 1/1+z]^-1 .

For the numerical investigation of the system we will need the today value χ_0 of χ. From (<ref>) or (<ref>) we find χ_0=Ω_DE,0H_0^2 . The differential equation (<ref>) contains the parameters ξ,β=ν G_N,r_Σ,α. The parameters ξ and ν are of order unity. The parameter r_Σ was found from (<ref>). Finally, the parameter α is left free in order to try to achieve the correct phenomenology. With these parameters and χ_0 given in (<ref>), we can solve (<ref>) numerically and find χ(z). Then we can plot Ω_m(z) from (<ref>).

From (<ref>), (<ref>) one can obtain a Raychaudhuri type of equation. Differentiating (<ref>), and using (<ref>) and (<ref>) itself, we get ä/a=-1/2Ω_m0H_0^2(1+z)^3+χ-2· 3^1/4(χ-α/3)^(5/4)/ξβ^(1/4)(1+z)√(1/r_Σ^2a_0^2- Ω_m0H_0^2(1+z)-χ/(1+z)^2) . The same equation also arises from (<ref>). The deceleration parameter is given by q=-H^-2ä/a. From (<ref>), for κ=0, and using (<ref>), the condition for acceleration today, ä|_0>0, is written as 1/r_Σ^2a_0^2H_0^2>4·√(3)/ξ^2√(ν) (1-Ω_m0-α/3H_0^2)^(5/2)/(1-3/2Ω_m0)^2 1/(H_0√(G_N))+1 , which is approximated by 0<1-Ω_m0-α/3H_0^2< (ξ^2√(ν)/4√(3))^(2/5)(1-3/2Ω_m0)^(4/5)(H_0√(G_N)/r_Σ^2a_0^2H_0^2)^(2/5) . It is found from (<ref>) that 1-Ω_m0-α/3H_0^2≲ 10^-22, from which it is obvious that α>0. For such values of α, the conditions λ_k≪ 1, g_k≪ 1 are easily satisfied, as seen from (<ref>), (<ref>). Of course, such values represent an extreme fine-tuning of α, since α has to be extremely close to the value 3(1-Ω_m0)H_0^2, which is also the value of the cosmological constant. Additionally, for such values of α it can be seen that Ω̇_m|_0<0, which assures a local increase of Ω_m in the past. There is also the option that 1-Ω_m0-α/3H_0^2 is of order unity, but then (<ref>) requires ν>10^106, and such values are not predicted by the quantum theory.

Finally, since the pressure of the matter component vanishes, p=0, equation (<ref>) can be brought into the form 2Ḣ+3H^2+κ/a^2=-8π G_N(p+p_DE) , where the dark energy pressure is given by -8π G_Np_DE=3χ-4· 3^1/4(χ-α/3)^(5/4)/ξβ^(1/4)(1+z)√(1/r_Σ^2a_0^2- Ω_m0H_0^2(1+z)-χ/(1+z)^2) . A significant parameter for the study of the late-time cosmology is the equation-of-state parameter of the effective dark energy sector, w_DE=p_DE/ρ_DE, which is found from (<ref>) to be w_DE=-1+4· 3^-3/4(χ-α/3)^(5/4)/ξβ^(1/4)(1+z)χ√(1/r_Σ^2a_0^2- Ω_m0H_0^2(1+z)-χ/(1+z)^2) . Therefore, w_DE cannot take phantom values smaller than -1.
Its today value becomes, for κ=0, w_DE,0≈ -1+4· 3^-3/4/ξν^(1/4)(1-Ω_m0)(1-Ω_m0-α/3H_0^2)^(5/4)(r_Σ^2a_0^2H_0^2/H_0√(G_N))^(1/2) and, according to (<ref>), it can take any value -1<w_DE,0<-1+(2/3-Ω_m0) (1-Ω_m0)^-1≈ -0.48.

Because of the above extreme fine-tuning of α, a numerical study of the equations is not possible, and we need to perform an analytical study in order to understand the behaviour of the system. This analysis is presented in Appendix B for κ=0; we summarize here its results. Concerning the evolution of the Hubble parameter and the parameter Ω_m, the ΛCDM behaviour arises practically in the entire regime of applicability of the model, all the way down to the present epoch, with an exception in the very first steps of the AS evolution. As for the deceleration parameter and the dark energy equation-of-state parameter, there are intervals where these parameters behave differently from those of ΛCDM. This discrepancy between the behaviour of the parameters H(z),Ω_m(z) and the behaviour of the parameters ä(z),w_DE(z) is peculiar and is due to the presence of the higher time derivatives contained in the acceleration, which can lead to significant contributions from terms which are negligible in H, Ω_m. However, for the numerical values of the various constants of astrophysical and cosmological origin describing our model, the final result is that in all physically interesting cases the acceleration properties of the model cannot be practically discerned from the ΛCDM scenario; therefore, our model is indistinguishable from ΛCDM. There is always the possibility, as in any cosmological model, that the perturbations evolve in a way distinct from ΛCDM; however, the previous analysis of the background, with the particular values encountered, leaves little hope for observational evidence of deviations from ΛCDM.

§.§ Second RG flow behaviour: close to the infrared

There are encouraging indications that for k→ 0 the cosmological constant runs proportionally to k^2, so Λ_k=λ_∗^IRk^2, where λ_∗^IR>0 is the infrared fixed point of the λ-evolution. At the same time, it seems that G_k increases significantly; for example, g_k could converge to an IR value g_∗^IR>0 or even diverge. This postulated fixed point <cit.> can be considered as the IR counterpart of the UV Non-Gaussian fixed point (NGFP) <cit.>. This non-trivial IR running can be assumed to be due to quantum fluctuations with very small momenta, corresponding to distances larger than the largest localized structures in the universe. Since we are interested in generating the dark energy at astrophysical scales, the exact realization of the above IR fixed point at cosmological scales is not our purpose. There are works, not concerning cosmology, which consider the possibility that such a fixed point can act already at astrophysical scales <cit.>, <cit.>. Our approach will be different. We will consider, as is pretty reasonable, that the above IR fixed point has not yet been reached at the intermediate astrophysical scales, and therefore some deviations from the above functions G_k,Λ_k should be present inside the objects of our interest.

Concerning the gravitational constant G_k, since at moderate scales, well beyond the NGFP, G_k is approximately constant, we will assume that it has the constant value G_N over a broad range of scales, ranging from the submillimeter up to the galaxy or cluster scale. Actually, we will not make use of the submillimeter lower bound, so we can just restrict ourselves to distances above some observable macroscopic ones.
There is the possibility that at large astrophysical scales G acquires an IR correction beyond G_N; however, we will not be concerned with this in our introductory treatment here, since we do not want to assume arbitrary functional forms. Already the relatively successful explanation of galactic or cluster dynamics using the standard Newton's law makes our adoption of G_k=G_N legitimate enough.

We will restrict ourselves to the single effect of the variability of the cosmological constant Λ_k. Since the functional form of the deviation of Λ_k from its IR form k^2 is not known, we will assume that the running of Λ_k is described by a power-law dependence on the energy, Λ_k∼ k^b. Of course, this is a simple ad hoc parametrization, and the analysis will show how far one can go with the single parameter b. However, if Λ_k differs slightly from its IR form k^2, which is actually not unexpected since astrophysical structures are already “large” enough, then b will be close to the value 2 and the power law k^b will be a very good approximation of the running behaviour. Moreover, moving with the deformed law k^b from the IR cosmological law k^2 down to the astrophysical scales, it seems more probable for b to be slightly larger than 2, rather than smaller than 2. The reason is that, since k increases at smaller length scales, the parameter b should also increase in order to obtain a significant astrophysical Λ_k; otherwise k^b would be neutralized and Λ_k would be suppressed.

We summarize by writing our ansatz: G_k=G_N, Λ_k=γ k^b , where γ>0, b are constants. The parameter γ has dimensions of mass to the power 2-b and will be parametrized as γ=γ̃ G_N^(b/2-1) with γ̃ dimensionless. If a value of order 𝒪(1) for γ̃ arises, this will mean that no new mass scale is needed (no new physics is needed for the explanation of dark energy other than AS gravity and the knowledge of structure), and the coincidence problem might be resolved from already controllable physics, without fine-tuning or new scales. Indeed, it will be shown that in the most faithful case, discussed in the last subsection VII-B-2, b turns out to be a little higher than 2 and γ̃=𝒪(1), which means that our model, with the dark energy originating from IR quantum corrections of the cosmological constant at the astrophysical level, can be quite successful. With the assumption (<ref>), the metric (<ref>) contains the Newtonian term 2G_NM/R and the non-trivial cosmological constant term 1/3Λ_k R^2, which becomes ξ^bγ̃/3 (R/L)^b(√(G_N)/R)^(b-2).

For L=R we will see that current acceleration requires b<1.57 (actually b should be smaller than 1 in order to have a reasonable w_DE today); therefore, we cannot be in the most interesting IR range of b a little larger than 2, and the law k^b loses its theoretical significance. It will then arise that γ̃ has to be many orders of magnitude smaller than unity in order to arrange the amount of dark energy, which means that new scales are introduced through γ̃. All this situation, which will be examined in the next subsection VII-B-1, although not an obvious failure, certainly cannot be considered a big success in relation to the coincidence problem. If such values are acceptable, then the internal dynamics presents some measurable deviations from the standard Newtonian dynamics, which need more investigation to check their consistency with observations. As for the force corresponding to the new cosmological constant term, it is γ̃ξ^b(2-b)/(6√(G_N)) (√(G_N)/R)^(b-1), and it is therefore repulsive.
The ratio of the new cosmological constant term to the Newtonian term turns out to be Ω_DE,0/Ω_m0 (R/r_Σ)^(3-b), while the corresponding ratio of the forces acquires an extra factor b-2. Thus, for large clusters these ratios at the border are larger than 0.1 for any b, and they can reach up to 0.35 for the largest b. For small clusters the ratios are suppressed to less than one percent. Finally, for galaxies the ratios are almost 0.2 for the largest b and decrease considerably for smaller b. Certainly, inside the structures the matter profiles have to be taken into account in order to obtain accurate results.

What will turn out to be the most exciting case is L=D, which will be examined in the last subsection VII-B-2. In this case, the dark energy, along with its acceleration, will be explained with b a little larger than 2 and γ̃ a number of order unity. More precisely, the quantity D at the Schucking radius today, D_S,0, which can be considered as the integration constant of the differential equation for D(R), has to be arranged to provide the correct amount of dark energy. This implies that D_S,0 is of the order of r_Σ, and more generally it will be shown numerically that the whole function D(R) turns out to be of the order of r_Σ. This is an extra element of naturalness for our model, since D, which has the meaning of a proper distance, is of the order of the length of the astrophysical object, and D_S,0 does not introduce a new scale. The new cosmological constant term therefore becomes 1/3Λ_kR^2=ξ^bγ̃/3 [1/G_N(√(G_N)/r_Σ)^b](r_Σ/D)^bR^2, where we can set D∼ r_Σ in order to make an estimate. Of course, the deviations of the new cosmological constant term from the R^2 law and the precise functional form of D(R) are very important and will provide a cosmology distinguishable from ΛCDM. However, we already observe that the quantity 1/G_N(√(G_N)/r_Σ)^b has the dimensions of a cosmological constant, and for b close to 2.1 it is very close to the order of magnitude of the standard cosmological constant Λ=4.7×10^-84GeV^2 of ΛCDM. The factor (r_Σ/D)^b offers a small distance-dependent deformation of the constant, which however, as mentioned, is important for the derived cosmology. Therefore, our model will provide a natural deformation of the ΛCDM model, without introducing arbitrary scales or fine-tuning. Similarly, the constant prefactor ξ^bγ̃ can contribute one order of magnitude. As a result, beyond the general idea, motivated by the coincidence problem, of generating the dark energy from local antigravity sources, the most important thing to emerge from the present work is the introduction of the quantity 1/G_N(√(G_N)/r_Σ)^b , which arises in the context of infrared AS gravity and plays a role similar to that of the standard cosmological constant Λ. This quantity, with b a little larger (for galaxies) or a little smaller (for clusters) than 2.1, has the quite interesting property of having the same order of magnitude as the standard Λ. For example, for galaxies with b=2.13 the quantity (<ref>) is 2.1×10^-84GeV^2, while for clusters with b=2.08 it becomes 2.6×10^-84GeV^2. It remains for AS gravity to confirm whether such values of b are predicted at the astrophysical scales.
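These orders of magnitude are straightforward to verify in natural units (ħ=c=1). The sketch below, with standard values of the constants, evaluates the quantity (<ref>) for the two quoted (b,r_Σ) pairs and compares it with the standard Λ=3Ω_Λ0H_0^2.

```python
# Numerical check of (1/G_N)(sqrt(G_N)/r_Sigma)^b against the standard
# cosmological constant, in natural units (hbar = c = 1).
GN  = 6.709e-39                         # GeV^-2
Mpc = 3.086e24 / 1.973e-14              # cm -> GeV^-1
H0  = (0.72e-10/3.156e7) * 6.582e-25    # s^-1 -> GeV

def Lambda_like(r_sigma_Mpc, b):
    r = r_sigma_Mpc*Mpc
    return (1/GN)*(GN**0.5/r)**b        # GeV^2

print(Lambda_like(0.83, 2.13))          # galaxy:  ~2e-84 GeV^2
print(Lambda_like(18.0, 2.08))          # cluster: ~3e-84 GeV^2
print(3*0.7*H0**2)                      # standard Lambda: ~4.7e-84 GeV^2
```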
Therefore, in a loose rephrasing, it can be said that the standard cosmological constant is no longer an arbitrary independent quantity, as in ΛCDM, but is constructed out of G_N,r_Σ,b.

As a direct consequence of the above discussion for L=D, the new cosmological constant term itself, 1/3Λ_kR^2, becomes at the matching surface R=r_Σ of the order of (√(G_N)/r_Σ)^(b-2). For galaxies with b=2.13 and typical mass M=10^11M_⊙, this term is 4× 10^-8, while the Newtonian term is 10^-8. For clusters with b=2.08 and mass M=10^15M_⊙, this term is 2× 10^-5, while the Newtonian term is 5× 10^-6. Therefore, the two gravitational potentials are comparable at the Schucking radius. We can see that, approximately, a change in b of less than five percent around the above values gives a change in this cosmological constant term of less than one order of magnitude. This shows clearly a sensitivity on the value of b, which however cannot be considered fine-tuning, or at least not extreme fine-tuning. Additionally, the fact that b acquires smaller values at cluster scales relative to galaxy scales is consistent with the IR value b=2 at cosmological scales or beyond. At the extreme limit of the solar scale, an even larger value of b is expected. For a solar distance R=1A.U. the quantity √(G_N)/R takes the value 10^-46, while the Newtonian potential of the sun at this distance is 10^-8. Already for b=2.25 the quantity (√(G_N)/R)^(b-2), which gives an estimate of the new term, becomes 3 orders of magnitude smaller than the Newtonian potential, and this discrepancy becomes huge for larger b. Therefore, it is reasonable to assume that the stringent solar-system tests, as well as the laboratory tests of gravity, remain unaffected by the new term. Finally, as for the force of the new term, this is ξ^bγ̃/6 (2-bR/(D√(F))) [1/G_N(√(G_N)/r_Σ)^b] (r_Σ/D)^bR, where F(R)=1-2G_NM/R-1/3Λ_kR^2. The study of the new potential and its force at the border and inside the object is more complicated due to the presence of D(R), so in order to get relatively reliable results we will need a more detailed treatment. A preliminary analysis is given in the last subsection, where the result is that there is no obvious inconsistency with the internal dynamics.

We continue with the implications for cosmology of the previous new cosmological constant term for L=D. While the Newtonian term is certainly responsible for the dust term Ω_m0H_0^2/a^3 on the right hand side of (<ref>), the above running cosmological constant term is responsible for the dark energy term ξ^bγ̃/3 1/G_N(√(G_N)/D_S)^b. Therefore, Ω_DE,0 is equal to ξ^bγ̃/3(r_Σ/D_S,0)^b 1/G_NH_0^2(√(G_N)/r_Σ)^b. Since D_S,0∼ r_Σ and γ̃ is taken to be of order one, it is implied from the discussion on the value of the quantity (<ref>) that Ω_DE,0∼ 1. This can be considered a natural explanation of the coincidence problem. The hard coincidence of the value of the standard cosmological constant Λ∼ H_0^2 is exchanged for the mild coincidence of the index b being close to 2.1, which happens to satisfy 1/G_N(√(G_N)/r_Σ)^b∼ H_0^2. Strictly speaking, it is the today value Λ(R_S,0) of the time-dependent function Λ(R_S) that has a value of the order of H_0^2.

As discussed in Sec. <ref>, the form (<ref>) of the running cosmological constant can also be applied directly to conventional cosmologies, without the Swiss cheese description, to define a different cosmological evolution. In that cosmology, the dark energy term is ξ^bγ̃/3 1/G_N(√(G_N)/L)^b.
The quantity L is now a cosmological distance scale instead of the astrophysical one used in the previous paragraph, and this creates a huge difference. It is a common and sensible approach to assume L=H^-1 in this case, and the today dark energy term becomes (γ̃ξ^b/3)(√(G_N)H_0)^(b-2)H_0^2. This quantity acquires the correct order of magnitude, H_0^2, if |b-2|<0.01, which means that b should be very close to the IR value 2. It is not that the two approaches, the one defined by (<ref>) and the one discussed here, differ only quantitatively. They can also differ qualitatively, in the sense that one approach can be relevant for the explanation of dark energy and the other not. For example, if AS predicts for the index b at the current cosmological scales the value 2.04, this automatically means that the cosmological interpretation of (<ref>) is irrelevant today, and the IR value of b, approximately 2, will be relevant for the future evolution of the universe. In our work we consider the values of b which imply the satisfaction of equation (<ref>).

§.§.§ Energy-coordinate distance scale relation

We first choose the simpler case where the energy scale k is inversely proportional to the coordinate radius R; therefore, for the value of k at the Schucking radius we have the relation (<ref>). From equation (<ref>) the Hubble equation becomes

ȧ^2/a^2 + κ/a^2 = 2G_NM/(r_Σ^3a^3) + γξ^b/(3r_Σ^ba^b) .

This is a simple equation for the evolution of the scale factor, which can also be written as

H^2 + κ/a^2 = (8π G_N/3)(ρ+ρ_DE) ,

where ρ is given by (<ref>) and ρ_DE = γξ^b/(8π G_N r_Σ^b a^b). The density parameters are defined as in (<ref>) and the value of r_Σ for a galaxy or a cluster is found from (<ref>) as before. The evolution of the Hubble parameter as a function of z is

H^2 = Ω_m0H_0^2(1+z)^3 + [γξ^b/(3r_Σ^ba_0^b)](1+z)^b − (κ/a_0^2)(1+z)^2 ,

where a_0 is set to unity. Similarly, the matter density abundance Ω_m becomes

Ω_m = [1 + (γξ^b/(3r_Σ^ba_0^bΩ_m0H_0^2))(1+z)^(b-3) − (κ/(a_0^2Ω_m0H_0^2))(1+z)^(-1)]^(-1) .

These equations contain the parameters ξ (which is of order one), r_Σ (which is known from (<ref>) for the typical masses we use), and γ, b. For κ=0, in order to have Ω_m an increasing function of z, in agreement with observations, it should be b<3. Now, from (<ref>) we obtain the condition for the correct amount of today dark energy,

ξ^bγ̃ = 3Ω_DE,0 G_NH_0^2 (r_Σa_0/√(G_N))^b .

Equation (<ref>) is analogous to the relation Λ=3Ω_Λ0H_0^2. As was also explained above, here it is the varying cosmological constant that creates the dark energy, based on the parameters γ̃, b characterizing Λ_k and on the astrophysical scale r_Σ. For arbitrary values of b, the parameter γ̃ will be very large or very small, introducing in this way a new massive scale γ=γ̃G_N^(b/2-1). The cosmological constant Λ_k=γ k^b should be generated inside the structure due to quantum corrections, so the parameters γ̃, b should be given by the AS theory at the astrophysical scales. Therefore, equation (<ref>), as an equation between orders of magnitude, forms a coincidence similar to that of Λ (actually it can be considered an increased coincidence) among the two parameters γ̃, b, the astrophysical value r_Σ and the cosmological parameter H_0. What we find particularly interesting in relation to the coincidence problem is the situation with γ̃ ∼ 1, because then no new scale is introduced for the explanation of dark energy, other than the astrophysical scale.
In this favorite case of ours, the relation (<ref>) is valid with the index b having values close to 2.1, as explained before, and the hard coincidence of (<ref>) is reduced to the mild coincidence of the value of b. Unfortunately, we will see below that such b are not allowed here for reasons of acceleration. We should note additionally that the situation with equation (<ref>) is different from the picture where the dark energy is due to some extra field having its own equation of motion. In that case, the analogue of equation (<ref>) would form a coincidence relation between dark matter and dark energy in which some integration constants of the extra field would be involved, while the parameters of the theory would remain free to accommodate some other observation. This will actually be the case in the next subsection. The acceleration is found to be

ä/a = −(1/2)Ω_m0H_0^2(1+z)^3 + (2−b)[γξ^b/(6r_Σ^ba_0^b)](1+z)^b .

The condition for acceleration today, ä|_0>0, becomes

(2−b)γξ^b/(3r_Σ^ba_0^bΩ_m0H_0^2) > 1 ,

which implies that a necessary condition is b<2. Equation (<ref>) implies that we have the correct behaviour, with a past deceleration and an acceleration today. Due to (<ref>), the inequality (<ref>) takes the form

b < 2 − Ω_m0/Ω_DE,0 ,

which means that b<1.57. Then, it arises from (<ref>) that γ̃ ≲ 10^-30 (and even smaller for galaxies), so γ̃ is many orders of magnitude smaller than unity, and finally no alleviation is offered to the coincidence problem. We continue in the following with a little more investigation of the present case, but it has already become obvious that the new massive scale γ=γ̃G_N^(b/2-1), very different from the one provided by G_N^(b/2-1), is necessarily introduced. And this is due to the values of b implied by the subtle issue of acceleration. Note that for the standard Λ it is Λ=3.1×10^-122 G_N^-1. We also have to ensure that the transition point from deceleration to acceleration is recent. Combining equations (<ref>), (<ref>), we find that this transition occurs at a redshift z_t which satisfies the equation

(1+z_t)^(3-b)/(2−b) = 1/Ω_m0 − 1 + κ/(a_0^2Ω_m0H_0^2) .

So, z_t is basically determined by the parameter b (for example, for the value z_t≈0.5 it is b≈1). Ignoring κ, it is found from (<ref>) that z_t ≲ 0.67. The condition (<ref>) is equivalent to z_t>0. The deceleration parameter today becomes

q_0 = (1/2)Ω_m0 − [(2−b)/2]Ω_DE,0 .

So, depending on b, it is −0.55<q_0<0. The lowest value q_0=-0.55 characterizes the ΛCDM model and is attained here for b→0 with z_t=0.67. The upper value q_0=0 is associated with the maximum value b=1.57 with z_t→0 (for the intermediate example with b≈1 it is q_0≈-0.2). If we insist on being close to the familiar value q_0=-0.55, then b should be close to zero and the model is just a slight variation of the ΛCDM model, since there are no extra parameters to create some degeneracy. For such b close to zero, the dark energy term has a slow evolution instead of the constancy of the standard Λ, and it is something like γ̃ ≲ 10^-100. We finish by writing the pressure of dark energy for a general b,

8π G_N p_DE = −(3−b)[γξ^b/(3r_Σ^ba_0^b)](1+z)^b ,

from which the equation of state of dark energy is

w_DE = −1 + b/3 ,

which is constant and is not compatible with a seemingly varying w_DE. We present in Fig. <ref> the cosmological evolution as a function of z for a spatially flat universe, with the quantum cosmological constant originating at the cluster level with M=10^15M_⊙, Ω_m0=0.3, and for the parameter choice b=1.06, ξ=1, γ̃=2.8×10^-60, which provides z_t=0.5.
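For the flat case, z_t and q_0 above follow in closed form, and the normalization (<ref>) fixes γ̃; a sketch under our assumed fiducials (the cluster value of r_Σ/√G_N is the one computed in the first snippet):

```python
Om0, ODE0 = 0.3, 0.7
GNH02 = 1.5e-122        # dimensionless G_N H0^2 for H0 ~ 70 km/s/Mpc (assumed)
rS_over_lP = 3.4e58     # r_Sigma/sqrt(G_N) for a 1e15 Msun cluster (assumed)

def z_t(b):             # flat case: (1+z_t)^(3-b) = (2-b)(1/Om0 - 1)
    return ((2 - b) * (1/Om0 - 1))**(1/(3 - b)) - 1

def q0(b):              # q_0 = Om0/2 - (2-b) ODE0/2
    return Om0/2 - (2 - b) * ODE0/2

for b in (0.0, 1.0, 1.06, 1.57):
    print(b, z_t(b), q0(b))   # b->0: (0.67, -0.55); b=1.06: z_t=0.5; b->1.57: (0, 0)

# Normalization xi^b gamma~ = 3 ODE0 G_N H0^2 (r_S/sqrt(G_N))^b, here for xi=1, b=1.06:
print(3 * ODE0 * GNH02 * rS_over_lP**1.06)   # a few 1e-60, cf. the quoted 2.8e-60
```

With these fiducials the last line gives roughly 3×10^-60, matching the quoted γ̃=2.8×10^-60 at the order-of-magnitude level (the residual difference comes from the assumed H_0 and Ω_m0).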
We depict in the left graph the evolution of the dark energy density parameter Ω_DE from equation (<ref>), where a typical decreasing behaviour for larger redshifts appears. In the middle graph the evolution of the deceleration parameter q is shown, where a passage from deceleration to acceleration at late times can be seen. The dark energy equation-of-state parameter w_DE remains constant in the right graph. It is obvious that in order to have w_DE,0 close to −1, in accordance with observations, the value of the parameter b has to be close to zero, and thus the model is finally very close to the ΛCDM model. Although the present model is not to be taken seriously, since it is not theoretically favorable in relation to the coincidence problem and it should also be very close to the ΛCDM model (by choosing b close to zero) in order to have some compatibility with simple observational tests, we find it interesting to leave it a little room to breathe by noticing two points. First, the parameter b is not exactly constant; more accurately, it is z-dependent, since at different cosmic epochs the running couplings move to different points of the AS renormalization group phase portrait. Therefore, in general, w_DE becomes time-dependent. This point is also related to the specific model of structure growth one has to assume for a typical galaxy or cluster in order to make a more realistic implementation of the model. Since, during the collapsing phase of a structure (larger length scales), the index b is expected to be smaller than the today value of the formed object, the mean value of b is smaller, and thus the real w_DE is reduced. Second, in view of the inhomogeneous/anisotropic models discussed in the Introduction (<cit.>, <cit.>, <cit.>), the description of an inhomogeneous universe with more realistic structures would provide, through averaging processes, an enhancement of the cosmic acceleration, and therefore w_DE would become even smaller. In any case, the scenario discussed here will be improved in various aspects in the next subsection, where the energy scale will be determined from the length scale L=D.

§.§.§ Energy-proper distance scale relation

Here we use the proper distance as a measure of the energy scale, which seems to be more realistic, so we assume the law (<ref>). From equation (<ref>) the Hubble evolution is given by

ȧ^2/a^2 + κ/a^2 = 2G_NM/(r_Σ^3a^3) + γξ^b/(3D_S^b) ,

where

Ḋ_S = r_ΣaH / √(1 − 2G_NM/(r_Σa) − γξ^br_Σ^2a^2/(3D_S^b)) .

The quantity D_S, which expresses the proper distance of the matching surface, acts as a new cosmological field of geometrical nature with its own equation of motion (<ref>). Equations (<ref>), (<ref>) form a system of two coupled differential equations for a, D_S. Defining

ψ = γξ^b/(3D_S^b) ,

which should be positive, we bring the system to the more standard form

ȧ^2/a^2 + κ/a^2 = 2G_NM/(r_Σ^3a^3) + ψ
ψ̇ = −3^(1/b) b r_ΣaH ψ^(1+1/b) / [ξγ^(1/b) √(1 − 2G_NM/(r_Σa) − r_Σ^2a^2ψ)] ,

where ψ plays the role of dark energy. We also have

H^2 + κ/a^2 = (8π G_N/3)(ρ+ρ_DE) ,

where ρ is given by (<ref>) and ρ_DE = 3ψ/(8π G_N). The density parameters are defined as in (<ref>) and the value of r_Σ for a galaxy or a cluster is found from (<ref>) as before. It is more convenient to work with the redshift z, and the evolution of ψ(z) is given by

dψ/dz = 3^(1/b) b ψ^(1+1/b) / [ξγ^(1/b) (1+z)^2 √(1/(r_Σ^2a_0^2) − Ω_m0H_0^2(1+z) − ψ/(1+z)^2)] .

After having solved (<ref>), the evolution of the Hubble parameter as a function of z is

H^2 = Ω_m0H_0^2(1+z)^3 + ψ − (κ/a_0^2)(1+z)^2 ,

while the matter density abundance Ω_m becomes

Ω_m = [1 + ψ/(Ω_m0H_0^2(1+z)^3) − κ/(a_0^2Ω_m0H_0^2(1+z))]^(-1) .
For the numerical investigation of the system we will need the today value ψ_0 of ψ. In terms of the other parameters it is

ψ_0 = Ω_DE,0H_0^2 .

The differential equation (<ref>) contains the parameters ξ (which is of order one), r_Σ (which is known from (<ref>) for the typical masses we use), and γ, b. With these parameters and ψ_0 given in (<ref>), we can solve (<ref>) numerically and find ψ(z). Then we can plot Ω_m(z) from (<ref>). It is illuminating to define the variable ψ̃=ψ/H_0^2; then equations (<ref>), (<ref>), (<ref>) become respectively

dψ̃/dz = 3^(1/b) b (G_NH_0^2)^(1/b-1/2) ψ̃^(1+1/b) / [ξγ̃^(1/b) (1+z)^2 √(1/(r_Σ^2a_0^2H_0^2) − Ω_m0(1+z) − ψ̃/(1+z)^2)]

H^2/H_0^2 = Ω_m0(1+z)^3 + ψ̃ + Ω_κ0(1+z)^2

Ω_m = [1 + ψ̃/(Ω_m0(1+z)^3) + Ω_κ0/(Ω_m0(1+z))]^(-1) ,

where ψ̃_0=Ω_DE,0 and Ω_κ0=−κ/(a_0^2H_0^2). Since Ω_κ0 ≪ 1, Ω_m0 ≈ 0.3 and Ω_m should basically increase in the past, it arises from (<ref>) that for recent redshifts, where our scenario makes sense, it should be ψ̃/(1+z)^3 ≲ 1; otherwise Ω_m would drop to unacceptably small values in the past. As a result, ψ̃/(1+z)^2 ≲ 10, and actually it is even smaller. On the other hand, the quantity 1/(r_Σ^2H_0^2) in the square root of (<ref>) is approximately 2×10^7 for a galaxy and 5×10^4 for a cluster; thus only this term remains in the square root to very high accuracy (better than 0.02% for clusters and better than 0.00005% for galaxies). Therefore, the differential equation (<ref>) is for all practical purposes approximated by the simple equation

dψ̃/dz = [3^(1/b) b/(ξγ̃^(1/b))] (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) ψ̃^(1+1/b)/(1+z)^2 .

Integration of (<ref>) gives

ψ̃ = [(3^(1/b)/(ξγ̃^(1/b))) (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) (1/(1+z)) + c]^(-b) ,

where c is an integration constant. From the value of ψ̃_0 we find c, and finally we have for the evolution of dark energy

ψ̃ = [Ω_DE,0^(-1/b) − (3^(1/b)/(ξγ̃^(1/b))) (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) z/(1+z)]^(-b) .

The positivity of ψ implies that

z^(-1) > (3Ω_DE,0/(ξ^bγ̃))^(1/b) (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) − 1 .

As a result of the inequality (<ref>) we get that

ξ^bγ̃ > [3Ω_DE,0/(1+z_max^(-1))^b] G_NH_0^2 (r_Σa_0/√(G_N))^b ,

where z_max=𝒪(1) is a redshift such that in the interval (0,z_max) the model should definitely make sense. For concreteness we write explicitly the Hubble evolution H(z),

H^2/H_0^2 = Ω_m0(1+z)^3 + [Ω_DE,0^(-1/b) − (3^(1/b)/(ξγ̃^(1/b))) (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) z/(1+z)]^(-b) + Ω_κ0(1+z)^2 ,

which is extremely accurate for all the relevant recent redshifts for which our model is applicable. The only unknown quantities in equation (<ref>) are ξ^bγ̃ and b. Equation (<ref>) is a new Hubble evolution which can be tested against observations at the background level. We note that the dark energy term in (<ref>) is quite different from the one of equation (<ref>). It is obvious from (<ref>) that Ω_m0+Ω_DE,0+Ω_κ0=1. When ξ^bγ̃ is much larger than the right hand side of (<ref>), the dark energy of equation (<ref>) is approximately a cosmological constant. The most interesting case is certainly when ξ^bγ̃ ∼ G_NH_0^2(r_Σ/√(G_N))^b, and then all the terms in (<ref>) - with the exception of the spatial curvature Ω_κ - are equally important and give a non-trivial dark energy evolution. In this case, various combinations of γ̃, b are allowed such that (<ref>) is satisfied, introducing in general new scales. Moreover, due to the presence of the integration constant c which arranges Ω_DE,0, the terse relation (<ref>) has now been replaced by the loose inequality (<ref>), and therefore the precise value of the quantity ξ^bγ̃ can be used in order to accommodate some other observation, e.g. the acceleration.
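The accuracy of the approximation leading to the closed form above is easy to check by integrating the simplified equation numerically and comparing with the analytic solution. A sketch, using our assumed cluster fiducial for r_Σ/√G_N and, for illustration, the parameters b=2.06, ξ=5, γ̃=5 employed later for Fig. <ref>:

```python
from scipy.integrate import solve_ivp

Om0, ODE0 = 0.3, 0.7
GNH02 = 1.5e-122          # assumed fiducial G_N H0^2 (dimensionless)
rS_over_lP = 3.4e58       # assumed r_Sigma/sqrt(G_N), 1e15 Msun cluster

def A(b, xi, gt):
    """Coefficient of the approximate equation dpsi~/dz = A psi~^(1+1/b)/(1+z)^2."""
    return 3**(1/b) / (xi * gt**(1/b)) * GNH02**(1/b) * rS_over_lP

def psi_analytic(z, b, xi, gt):
    return (ODE0**(-1/b) - A(b, xi, gt) * z / (1 + z))**(-b)

b, xi, gt = 2.06, 5.0, 5.0
sol = solve_ivp(lambda z, y: [A(b, xi, gt) * y[0]**(1 + 1/b) / (1 + z)**2],
                (0.0, 2.0), [ODE0], dense_output=True, rtol=1e-10, atol=1e-12)
for z in (0.5, 1.0, 2.0):
    print(z, sol.sol(z)[0], psi_analytic(z, b, xi, gt))  # numeric vs closed form

# Hubble evolution for the flat case: H^2/H0^2 = Om0 (1+z)^3 + psi~(z)
print(Om0 * 1.5**3 + psi_analytic(0.5, b, xi, gt))
```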
For a general b, although the dark energy may sufficiently be attributed to the varying cosmological constant, the coincidence problem is not particularly alleviated. However, what we find particularly interesting for the explanation of the coincidence problem, as was discussed above, is the situation with γ̃ ∼ 1, because then no new mass scale is introduced for the explanation of the dark energy, other than the astrophysical scales. This is our favorite case, where the relation (<ref>) is valid with the index b having values close to 2.1, as explained. Such values of b are also theoretically interesting since they are close to the IR fixed-point value b=2 of AS theory, and it remains for AS to determine whether such values are predicted at the astrophysical scales. For example, for a galaxy with b=2.13, inequality (<ref>) provides that ξ^bγ̃ > 2.2(1+z_max^(-1))^(-b), while for a cluster with b=2.08 it is ξ^bγ̃ > 1.8(1+z_max^(-1))^(-b), relations which can easily be satisfied with suitable γ̃ ∼ 1. Combining equations (<ref>), (<ref>) we obtain

G_NH_0^2 (D_S,0/√(G_N))^b = ξ^bγ̃/(3Ω_DE,0) .

For γ̃ such that inequality (<ref>) is saturated, which means that the dark energy in (<ref>) is non-trivial (different from a cosmological constant), we conclude from (<ref>) that the value of D at the today Schucking surface is D_S,0 ∼ r_Σ. This is true either for our favorite b or more generally. Note that D_S,0 is of the order of r_Σ and not approximately equal to r_Σ, as might be guessed initially. It is obvious that the relation (<ref>) between the parameters has now disappeared in (<ref>) due to the freedom introduced by the presence of D_S,0. The precise value of D_S,0 is given by (<ref>) and depends on Ω_DE,0. The quantity D_S acts as an extra independent cosmological field in the system (<ref>), (<ref>), whose initial condition D_S,0 is set today in agreement with the amount of measured dark energy. This initial condition measures the proper distance of the current matching surface of the Swiss cheese model, and the extra interesting thing, which also sheds light on the coincidence problem, is that it is of the order of the radius of the astrophysical structure. If it were not, then it would be just the integration constant of an extra field that should be selected appropriately to create the measured dark energy, and would therefore introduce another scale. We can also find from (<ref>), (<ref>) the expression of D_S(z) to high accuracy,

D_S/(r_Σa_0) = [ξγ̃^(1/b)/(3^(1/b)Ω_DE,0^(1/b))] (G_NH_0^2)^(-1/b) √(G_N)/(r_Σa_0) − z/(1+z) .

Equation (<ref>) reduces to (<ref>) for z=0. For a non-trivial dark energy evolution in (<ref>), the function D_S remains, for all relevant z, of the order of r_Σ. In the interior of the object, the proper distance D is a function of the position R, i.e. it is D(R), and the equation governing this evolution is (<ref>). Since D_S,0 is known, the initial condition of equation (<ref>) is set at R=R_S,0=r_Σ as D(r_Σ)=D_S,0. In terms of the normalized variables D̂=D/r_Σ and R̂=R/r_Σ, we have

dD̂/dR̂ = {1 − (ξ^bγ̃/3)(√(G_N)/r_Σ)^(b-2) (1/D̂_S,0^b) [(a_0^3Ω_m0/Ω_DE,0)(1/R̂) + (D̂_S,0/D̂)^bR̂^2]}^(-1/2)

with the initial condition set at R̂=1 as D̂ = D̂_S,0 = [ξ^bγ̃/(3Ω_DE,0)]^(1/b)(G_NH_0^2)^(-1/b)√(G_N)/r_Σ. We note that the fact that D_S changes in time does not mean that the interior solution is time-dependent. The interior solution is static, and D̂_S,0 is simply used as the initial condition for the integration of (<ref>), since it is known from the today amount of dark energy.
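The interior equation above is straightforward to integrate numerically. A sketch, using for illustration the cluster parameters b=2.08, ξ=1, γ̃=6 employed in the following paragraphs, together with our assumed fiducials and a_0=1; it reproduces both D̂_S,0 = O(1) and the near-constancy of D̂−R̂ quoted later in the text:

```python
from scipy.integrate import solve_ivp

Om0, ODE0 = 0.3, 0.7
GNH02 = 1.5e-122                 # assumed fiducial G_N H0^2 (dimensionless)
lP_over_rS = 1 / 3.4e58          # sqrt(G_N)/r_Sigma, 1e15 Msun cluster (assumed)

b, xi, gt = 2.08, 1.0, 6.0
D_S0 = (xi**b * gt / (3 * ODE0))**(1/b) * GNH02**(-1/b) * lP_over_rS
pref = (xi**b * gt / 3) * lP_over_rS**(b - 2) / D_S0**b   # = ODE0 (H0 r_S)^2, tiny

def rhs(R, D):
    F = 1 - pref * ((Om0 / ODE0) / R + (D_S0 / D[0])**b * R**2)
    return [F**(-0.5)]

sol = solve_ivp(rhs, (1.0, 0.1), [D_S0], dense_output=True, rtol=1e-12)
print(D_S0)                          # ~1.8: D_S,0 is of order r_Sigma
for R in (1.0, 0.5, 0.1):
    print(R, sol.sol(R)[0] - R)      # ~0.8 throughout: D(R) ~ R + const
```

Since the prefactor is of order 10^-5, the right hand side is one to high accuracy and D̂−R̂ stays essentially frozen, consistent with the numerical fit D̂ ≈ R̂+0.79 reported below (the small offset from 0.79 reflects our particular fiducials).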
Since, for our favorite values of γ̃, b which validate equation (<ref>), the factor (√(G_N)/r_Σ)^(b-2) is many orders of magnitude smaller than one and D̂_S,0 ∼ 1, the initial value of dD̂/dR̂ at the Schucking surface is one to high accuracy. Therefore, close to r_Σ, the function D(R)−R remains constant. We will see numerically that this is approximately true more generally, and we will find the integration constant. Concerning the acceleration we find

ä/a = −(1/2)Ω_m0H_0^2(1+z)^3 + ψ − 3^(1/b) b ψ^(1+1/b) / [2ξγ^(1/b)(1+z)√(1/(r_Σ^2a_0^2) − Ω_m0H_0^2(1+z) − ψ/(1+z)^2)] .

In terms of ψ̃ we have

ä/(H_0^2a) = −(1/2)Ω_m0(1+z)^3 + ψ̃ − 3^(1/b) b (G_NH_0^2)^(1/b-1/2) ψ̃^(1+1/b) / [2ξγ̃^(1/b)(1+z)√(1/(r_Σ^2a_0^2H_0^2) − Ω_m0(1+z) − ψ̃/(1+z)^2)] ,

which is very well approximated, as before, by

ä/(H_0^2a) = −(1/2)Ω_m0(1+z)^3 + ψ̃ − [3^(1/b) b/(2ξγ̃^(1/b))] (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) ψ̃^(1+1/b)/(1+z) ,

or explicitly in terms of z as

ä/(H_0^2a) = −(1/2)Ω_m0(1+z)^3 + [Ω_DE,0^(-1/b) − (3^(1/b)/(2ξγ̃^(1/b)))(G_NH_0^2)^(1/b)(r_Σa_0/√(G_N))(b+2z)/(1+z)] [Ω_DE,0^(-1/b) − (3^(1/b)/(ξγ̃^(1/b)))(G_NH_0^2)^(1/b)(r_Σa_0/√(G_N)) z/(1+z)]^(-1-b) .

The transition redshift z_t from deceleration to acceleration is found from (<ref>) by setting ä=0. The today deceleration parameter takes the following very accurate expression if we use equation (<ref>):

q_0 = (1/2)Ω_m0 − Ω_DE,0 + [3^(1/b) b/(2ξγ̃^(1/b))] Ω_DE,0^(1+1/b) (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) ,

and therefore q_0 takes values larger than the ΛCDM value −0.55. From (<ref>) or (<ref>), the condition ä|_0>0 is written as

ξ^bγ > [3b^b Ω_DE,0^(1+b)/(2Ω_DE,0−Ω_m0)^b] H_0^(2-b) (1/(r_Σ^2a_0^2H_0^2) − 1 + Ω_κ0)^(-b/2)

and is very well approximated by

ξ^bγ̃ > [3b^b Ω_DE,0^(1+b)/(2Ω_DE,0−Ω_m0)^b] G_NH_0^2 (r_Σa_0/√(G_N))^b ,

which also arises from (<ref>). The inequality (<ref>) can be seen to be sufficient in order to have the significant condition Ω̇_m|_0<0. For our favorite values of γ̃, b which validate equation (<ref>), the right hand side of (<ref>) is of order one, and ξ^bγ̃ should be chosen accordingly, still being of order unity. For example, for galaxies with b=2.13 we obtain from (<ref>) that ξ^bγ̃>4.2, while for clusters with b=2.08 it should be ξ^bγ̃>3.2. These conditions are stronger than those implied by (<ref>) for any z_max, so (<ref>) can be forgotten. For these values of b we now provide some indicative numerical results for q_0 from equation (<ref>) and for z_t. For galaxies with b=2.13 we have: for ξ=9, γ̃=5 it is z_t=0.68 and q_0=-0.49; for ξ=3, γ̃=2 it is z_t=0.69 and q_0=-0.29; for ξ=1, γ̃=6 it is z_t=0.51 and q_0=-0.09. For clusters with b=2.08 we have: for ξ=9, γ̃=5 it is z_t=0.68 and q_0=-0.50; for ξ=3, γ̃=2 it is z_t=0.69 and q_0=-0.32; for ξ=1, γ̃=6 it is z_t=0.60 and q_0=-0.14. In all these cases, the inequality (<ref>) does not provide any restriction on z.
If b increases by more than 2% relative to the above indicative values, then the quantity ξ^bγ̃ moves to higher orders of magnitude to assure acceleration, while if b takes values smaller than the above indicative ones, the inequality (<ref>) is easily satisfied since its right hand side becomes suppressed. Finally, the dark energy pressure and its equation-of-state parameter are

8π G_N p_DE = −3ψ + 3^(1/b) b ψ^(1+1/b) / [ξγ^(1/b)(1+z)√(1/(r_Σ^2a_0^2) − Ω_m0H_0^2(1+z) − ψ/(1+z)^2)]

w_DE = −1 + 3^(1/b-1) b ψ^(1/b) / [ξγ^(1/b)(1+z)√(1/(r_Σ^2a_0^2) − Ω_m0H_0^2(1+z) − ψ/(1+z)^2)] .

In terms of ψ̃ we have

(8π G_N/H_0^2) p_DE = −3ψ̃ + 3^(1/b) b (G_NH_0^2)^(1/b-1/2) ψ̃^(1+1/b) / [ξγ̃^(1/b)(1+z)√(1/(r_Σ^2a_0^2H_0^2) − Ω_m0(1+z) − ψ̃/(1+z)^2)]

w_DE = −1 + 3^(1/b-1) b (G_NH_0^2)^(1/b-1/2) ψ̃^(1/b) / [ξγ̃^(1/b)(1+z)√(1/(r_Σ^2a_0^2H_0^2) − Ω_m0(1+z) − ψ̃/(1+z)^2)] ,

which are very well approximated, as before, by

(8π G_N/H_0^2) p_DE = −3ψ̃ + [3^(1/b) b/(ξγ̃^(1/b))] (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) ψ̃^(1+1/b)/(1+z)

w_DE = −1 + [3^(1/b-1) b/(ξγ̃^(1/b))] (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) ψ̃^(1/b)/(1+z) ,

or explicitly in terms of z as

(8π G_N/H_0^2) p_DE = [(3^(1/b)/(ξγ̃^(1/b)))(G_NH_0^2)^(1/b)(r_Σa_0/√(G_N))(b+3z)/(1+z) − 3Ω_DE,0^(-1/b)] [Ω_DE,0^(-1/b) − (3^(1/b)/(ξγ̃^(1/b)))(G_NH_0^2)^(1/b)(r_Σa_0/√(G_N)) z/(1+z)]^(-1-b)

w_DE = [(3^(1/b-1)/(ξγ̃^(1/b)))(G_NH_0^2)^(1/b)(r_Σa_0/√(G_N))(b+3z)/(1+z) − Ω_DE,0^(-1/b)] [Ω_DE,0^(-1/b) − (3^(1/b)/(ξγ̃^(1/b)))(G_NH_0^2)^(1/b)(r_Σa_0/√(G_N)) z/(1+z)]^(-1) .

The present-day dark energy equation-of-state parameter takes the following very accurate expression if we use equation (<ref>):

w_DE,0 = −1 + [3^(1/b-1) b/(ξγ̃^(1/b))] Ω_DE,0^(1/b) (G_NH_0^2)^(1/b) (r_Σa_0/√(G_N)) ,

and therefore w_DE,0 takes values larger than the ΛCDM value −1 (phantom values smaller than −1 can only be obtained if b<0). According to (<ref>), it can take any value w_DE,0 < −(1/3)(1+Ω_m0/Ω_DE,0) ≈ −0.48. In our typical example, using b=2.13 at a galaxy structure, for ξ=9, γ̃=5 it is w_DE,0=-0.95; for ξ=3, γ̃=2 it is w_DE,0=-0.75; while for ξ=1, γ̃=6 it is w_DE,0=-0.56. Similarly, using b=2.08 at a cluster structure, for ξ=9, γ̃=5 it is w_DE,0=-0.95; for ξ=3, γ̃=2 it is w_DE,0=-0.78; while for ξ=1, γ̃=6 it is w_DE,0=-0.61. To capture the results of this case, we present in Fig. <ref> the cosmological evolution as a function of z for a spatially flat universe, with the quantum cosmological constant originating at the cluster level with M=10^15M_⊙, Ω_m0=0.3, and for the parameter choice b=2.06, ξ=5, γ̃=5, which provides z_t=0.67, q_0=-0.52, w_DE,0=-0.978. Of course, these are just indicative values of the parameters, which however look successful at least at first glance (in a forthcoming paper <cit.>, we will perform a numerical analysis at the background level with data from SNIa, H(z) measurements, BAO, etc., where we can say in advance that the fits show excellent accuracy). We depict in the left graph the evolution of the dark energy density parameter Ω_DE from equation (<ref>), where a typical decreasing behaviour for larger redshifts is shown. In the middle graph the evolution of the deceleration parameter q is shown, where the passage from deceleration to acceleration at late times can be seen. The dark energy equation-of-state parameter w_DE in the right graph shows a non-constant evolution with a today value w_DE,0 close to −1. We also note that if we solve numerically the differential equation (<ref>), instead of using the analytical expression (<ref>), the results are exactly the same due to the high degree of accuracy of our analytical expressions. Notice that the two reasons mentioned at the end of subsection VII-B-1, about the time variability of b and the further decrease of w_DE, are also valid here.
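The indicative values of z_t, q_0 and w_DE,0 quoted here and in the previous paragraph follow directly from the approximate expressions above. A sketch that reproduces them, using our assumed fiducial values of r_Σ/√G_N for a 10^11 M_⊙ galaxy and a 10^15 M_⊙ cluster:

```python
from scipy.optimize import brentq

Om0, ODE0 = 0.3, 0.7
GNH02 = 1.5e-122                                   # assumed fiducial G_N H0^2
rS_over_lP = {'galaxy': 1.6e57, 'cluster': 3.4e58} # assumed fiducial r_S/sqrt(G_N)

def observables(kind, b, xi, gt):
    A = 3**(1/b) / (xi * gt**(1/b)) * GNH02**(1/b) * rS_over_lP[kind]
    psi = lambda z: (ODE0**(-1/b) - A * z / (1 + z))**(-b)
    acc = lambda z: -0.5*Om0*(1+z)**3 + psi(z) - 0.5*A*b*psi(z)**(1+1/b)/(1+z)
    z_t  = brentq(acc, 1e-6, 3.0)                  # transition redshift, acc(z_t)=0
    q0   = Om0/2 - ODE0 + 0.5*A*b*ODE0**(1 + 1/b)
    wDE0 = -1 + (A*b/3) * ODE0**(1/b)
    return z_t, q0, wDE0

print(observables('galaxy', 2.13, 9, 5))   # ~(0.68, -0.49, -0.95)
print(observables('galaxy', 2.13, 1, 6))   # ~(0.51, -0.09, -0.56)
print(observables('cluster', 2.08, 1, 6))  # ~(0.60, -0.14, -0.61)
print(observables('cluster', 2.06, 5, 5))  # ~(0.67, -0.52, -0.98), cf. Fig. <ref>
```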
Namely, a more realistic study of the RG flow inside the structure, together with a model of structure formation on one side, and the study of more realistic inhomogeneous/anisotropic cosmological models through averaging processes on the other side, would reduce w_DE, possibly even to phantom values.Because we are taking our scenario seriously, we would like to finish with an estimate of the potentials and the forces inside and at the border of an astrophysical object. As for the border, the situation is clear and precise. Since the mass of the structure can be considered as being gathered at the origin, the Schwarzschild term is exact and provides the Newtonian force at the border. If this force and its potential are dominant compared to the new cosmological constant term at the border (which will indeed be the case), this will already be an indication that inside the object the interior dynamics will also not be severely disturbed. More precisely, in the interior of the object, either galaxy or cluster, there is a profile of the matter distribution (luminous, gas, dark matter) which will give some deviations from the central 1/R potential. We will first study what is the situation in Milky Way, which is a well studied galaxy. A very good fit of dark matter halo profile (which is the dominant matter component) to available data for Milky Way is performed <cit.> using the Universal Rotation Curve (Burkert) profile with matter density ϱ(R)=ϱ_c(1+R/R_c)^-1[1+(R/R_c)^2]^-1, where the density scale ϱ_c∼ 4× 10^7M_⊙/kpc^3 and the radius scale R_c∼ 10kpc. The use of a matter profile is necessary in order to get precise values of the Newtonian forces and velocities inside the galaxy (R<R_b). It can be easily seen that in the inner regions of the galaxy (R≲ 0.4 R_b), the real amount of matter is much larger than the matter predicted by a constant energy density profile, so this constant profile gives a poor underestimation of the Newtonian force. On the other hand, an interesting result is that away from the very center of the galaxy (R≳ 0.1 R_b) the real Newtonian force is of the same order as the Newtonian force due to the idealized picture with all the galaxy mass gathered at the origin. This result of Milky Way indicates that, although we will not enter the complicated discussion to study the precise Newtonian force inside other galaxies or clusters, this force will be estimated by the central 1/R^2 force. This way, we will be in position to compare the varying cosmological constant force relatively to the Newtonian force, and see how long the new force does not give obvious inconsistencies with internal dynamics. Of course, a more detailed study and comparison to existing or upcoming data is necessary.With the explanations of the previous paragraph, we assume that the gravitational field given by the modified Schwarzschild metric (<ref>) in the interior of the object (far from its very center, e.g. for R≳ 0.1 R_b) provides a sufficient estimate to anticipate what are the interior Newtonian and cosmological constant forces. We use the indicative parameters b=2.08, ξ=1, γ̃=6 to numerically integrate equation (<ref>). The solution of this equation gives the function D(R), which turns out to be an increasing function of R, as it should be. The first observation is that the function D(R) in all the relevant distances, throughout the interior of the astrophysical object up to the today Schucking radius R_S,0=r_Σ, has values of the order of r_Σ. 
Therefore, not only does D_S(z) remain of order r_Σ, but the whole D(R) does as well, providing a variable with values natural to the dimensions of the object, without acquiring unnaturally large or small values. In general, at distances up to a few tens of r_Σ, a reliable approximation for the solution, as seen numerically, is D̂ ≈ R̂+0.79; thus D has an approximately constant difference from R. Of course, this expression for D̂ can be used to obtain some intuition, but the detailed structure of D̂ could also be significant elsewhere. The next important thing is to study the ratio of the potential term due to the varying cosmological constant to the Newtonian potential, as they arise from equation (<ref>). For small clusters, this ratio is less than 1‰ either at the border or inside. For the largest possible clusters, the ratio becomes 15% at the border and less inside. Therefore, for the largest clusters, we have a non-negligible contribution to the pure Newtonian potential with possible observable signatures, still without an obvious inconsistency. Since the Newtonian potential is negligible relative to unity for R ≳ 0.1R_b up to the border, both potentials are very weak and F is approximately one to high accuracy. For all clusters, with any diameter, the two potentials become of the same order at the Schucking radius, and this was the reason above for the successful explanation of dark energy. At even larger distances (R ≳ 3r_Σ) the cosmological constant becomes the dominant term (although in the Swiss-cheese model, at such distances the cosmological patch is present instead of the static one). Finally, the ratio of the cosmological constant force to the Newtonian force is [Ω_DE,0/(a_0^3Ω_m0)](bR̂/(D̂√(F))−2)(D̂_S,0/D̂)^bR̂^3. It turns out that this ratio is negative, which means that the new force is repulsive, as expected. For small clusters, this ratio is again less than 1‰ either at the border or inside. For the largest possible clusters, the ratio becomes 20% at the border and less inside. Again, for such clusters, a non-negligible contribution to the Newtonian force arises, which should be studied more thoroughly in comparison with real data. Similar results to all the above occur for clusters with different values of the parameters ξ, γ̃, where most usually the extra force and potential are further suppressed. As for galaxies, it can be seen that the force or the potential of the extra term is always restricted to a contribution of a few percent, independently of the parameters. Some remarks are in order about what may happen with the formation of the large scale structures of the universe in the context of our model. It is well known that modifications of gravity that are almost ΛCDM at the background level may have a very different evolution of structures. In our present understanding of the theory, we cannot analyse the whole structure evolution because the scenario starts somewhat suddenly with the appearance of the structures. The present work provides a study only at small redshifts. If in the future we manage to understand better the behavior of the RG flow at different energies and scales, we will be able to quantify more accurately the antigravity effects during the formation and the evolution of structures. This knowledge will require from AS theory the correct RG flow of gravity with matter at the infrared energy scales, and also the exact relation of the amount of energy/mass that is associated with the value of the varying cosmological constant.
However, we foresee that General Relativity will be preserved fairly accurately during structure formation because, in the initial clouds that collapse, the antigravity quantum effects are not expected to be very important; they become significant recently, when structures are denser and smaller. But this, of course, must be studied with an exact RG flow.

§ DISCUSSION AND CONCLUSIONS

We have proposed that the dark energy and the recent cosmic acceleration can be the result of the existence of local antigravity sources associated with astrophysical matter configurations distributed throughout the universe. This is a tempting proposal in relation to the coincidence problem, since in that case the dark energy naturally emanates from the recent formation of structure. The cosmic evolution can arise through some interrelation between the local and the cosmic patches. In the present work we have assumed a Swiss cheese model to derive the cosmological equations, where the interior spherically symmetric metric matches smoothly to an FRW exterior across a spherical boundary. This Schucking surface has a fixed coordinate radius in the cosmic frame, but expands with time in the local frame. Various gravitational theories can be implemented in the above context, to see if the corresponding intermediate-distance infrared phenomena can provide the necessary cosmic acceleration. This is not always an easy task, since the appropriate spherically symmetric solutions should be used along with the correct matching conditions. Our main concern in this work, in order to test our proposal, is to consider quantum modified spherically symmetric metrics, and more precisely quantum improved Schwarzschild-de Sitter metrics, which are used for modeling the metric of galaxies or clusters of galaxies. Asymptotically safe (AS) gravity provides specific forms for the quantum corrections of the cosmological and Newton's constants depending on the energy scale. In the far infrared (IR) regime of the AS evolution, which certainly corresponds to the cosmological scales, there are encouraging indications for the existence of a fixed point of a specific form. At the intermediate infrared scales of our astrophysical objects, it is therefore quite reasonable that some small deviations from this IR law occur, and on this behaviour our most successful model was built, containing the appropriate antigravity effect. This model uses dimensionless order-one parameters of AS, Newton's constant and the astrophysical length scale (which enters through the Schucking radius of matching), and provides a recent dark energy comparable to dark matter. At the same time, sufficient cosmic acceleration emerges at small redshifts, while the freedom of the order-one parameters has to be constrained by observational data in the future. To the best of our knowledge, this is the first solution of the dark energy problem without using fine-tuning or introducing ad hoc energy scales.
Although this cosmology, given by equation (<ref>), has a quite different functional form from the ΛCDM cosmology, the modified Schwarzschild-de Sitter interior metric allows us to interpret the encountered quantity (<ref>) as giving approximately the correct order of magnitude of the standard cosmological constant Λ, which can thus be considered as a composite quantity. As a more technical point concerning the above cosmology, from the relation of the AS energy scale with a length scale there appears a coupled geometrical field with its own equation of motion, which is identified with the proper distance. The integration constant of this field, which is the proper distance at the today Schucking radius, arranges the precise amount of dark energy, and since its value is of the order of the size of the astrophysical object, again no new scale is introduced at this stage. Finally, we have presented a crude estimate of the antigravity effects in the interior of the structure, and it appears that they remain at sufficiently small values, so as not to create obvious conflicts with the local dynamics of the object. This is an interesting issue that deserves a more thorough investigation. As future work, it is worth investigating the same scenario with inhomogeneous/anisotropic Swiss cheese models. The present work uses the simplest Swiss cheese model as a first simple approach. It is necessary to begin with it in order to isolate the magnitude of the effects of the antigravity sources. Inhomogeneous Swiss cheese type models are certainly more realistic, and we expect that they will enhance the produced effective amount of acceleration. Moreover, the evolution of structure will add a more refined picture of the passage from the deceleration to the acceleration regime. We wish to thank E.V. Linder for useful discussions. V.Z. acknowledges the hospitality of Nazarbayev university and S.P.G.

§ 

§.§ Asymptotically safe gravity: Theory

The elusive theory of quantum gravity is associated not only with mathematical challenges but also with many conceptual problems, including a measurement scheme and several epistemological issues. This is to be expected, since quantum gravity will most probably provide the Theory of Everything, a model that will most certainly include revolutionary new mathematical and physical concepts. Nevertheless, part of the scientific community has focused in recent decades, as a first step, on attempts to propose solutions to the well-known result that the quantization of the Einstein-Hilbert action leads to a quantum field theory which is perturbatively non-renormalizable <cit.>. In general, the mathematical modeling of a system is greatly simplified if one allows for more parameters, more dimensions or more symmetries. Remarkably, there is one serious attempt at quantum gravity that works in four dimensions using only the symmetries of conventional quantum field theory and of General Relativity. An effective quantum field theory of General Relativity can give answers about the calculation of amplitudes at energy scales below the Planck scale. This is a result of the fact that higher-derivative terms are suppressed by powers of the Planck mass. However, for energies close to or larger than the Planckian scale, the effective theory requires the fixing of an infinite number of free coupling constants from experimental input.
This effectively means that at every loop order more experiments must be performed, and this finally leads to a loss of predictability. Asymptotic Safety (AS) exists in the space of theories that includes the corresponding effective field theory. The AS program recovers “predictivity” by imposing the demand/principle that the physically accepted quantum theory is located within the ultraviolet (UV) critical hypersurface of a Renormalization Group (RG) fixed point called the Non-Gaussian fixed point (NGFP). The existence of the latter point guarantees that in the UV description of the theory all dimensionless coupling constants remain finite. Now, determining the trajectory uniquely (which means picking a specific universe RG flow) requires a number of experimental input parameters equal to the dimensionality of the hypersurface. It has been shown, under some simplifying approximations, that this is indeed possible, and that there is an NGFP where a trajectory begins and generates General Relativity at low energy <cit.>. Approximations of the gravitational RG flow can be carried out with the help of the functional renormalization group equation (FRGE) <cit.>,

∂_k Γ_k[g,g̅] = (1/2) Tr[(Γ_k^(2)+ℛ_k)^(-1) ∂_k ℛ_k] ,

for the effective average gravity action Γ_k, where Γ_k^(2), g̅_μν and ℛ_k are defined in the context of the background field formalism. This methodology splits the metric g_μν into a fixed background g̅_μν and fluctuations h_μν. The quantity Γ_k^(2) is the second-order functional derivative of Γ_k with respect to the fluctuation field h_μν, and ℛ_k provides a scale-dependent mass term for fluctuations with momenta p^2 ≪ k^2, where the RG scale k is constructed from the background metric. This RG equation implements Wilson's idea of integrating out the momenta p^2 ≳ k^2, i.e. the short-wavelength fluctuations. In this way, Γ_k provides an effective description of the system at the scale k^2. Remarkably, this is a background independent method <cit.>. The simplest estimation of the RG flow of the gravitational field arises after projecting the FRGE onto the following approximation of the gravity action Γ_k,

Γ_k = [1/(16π G_k)] ∫ d^4x √(|g|) (-R+2Λ_k) ,

where gauge-fixing and ghost terms are of course included. This approximation includes two energy-dependent couplings, Newton's constant G_k and the cosmological constant Λ_k. For convenience we define their dimensionless counterparts

g_k ≡ k^2 G_k ,  λ_k ≡ k^(-2) Λ_k ,

whose running is governed by the corresponding beta functions. In the absence of knowledge of the true functional scale dependence of g_k, λ_k, it is not clear which trajectory in the space of g_k, λ_k was followed by the universe. In other words, we do not yet really know the detailed path along which the classical General Relativity regime of the present epoch, with a constant G_N and negligible Λ, is obtained. In the transplanckian regime the NGFP is present <cit.>, and the behaviour of the couplings near this point is given by constant values, g_k=g_∗, λ_k=λ_∗, so in the deep ultraviolet (k→∞) G approaches zero and Λ diverges. There is another fixed point, the Gaussian fixed point (GFP) <cit.>, which is a saddle point located at g=λ=0. In the linear regime of the GFP, where the dimensionless couplings are pretty small, the analysis predicts that G is approximately constant, while Λ displays a running proportional to k^4.
At the other edge, the far infrared limit (k→0) <cit.>, the behaviour of the RG flow trajectories with positive G, Λ is not so well understood, since the approximation breaks down (divergence of the beta functions) when λ_k approaches 1/2 at a non-zero k, where an unbounded growth of G appears together with a vanishingly small Λ (interestingly enough, this happens near k=H_0). The exact value of the current Λ is unknown due to this breakdown.

§.§ Asymptotically safe gravity: Cosmology

The framework of AS in principle describes a modified gravitational force at all length scales, something that makes cosmological model building feasible <cit.>, <cit.>, <cit.>-<cit.> (see also the review <cit.>). Consequently, phenomenological studies that use cosmological data can constrain the various free parameters appearing in AS, including the values of the cosmological constant and Newton's constant. The various research studies that appear in the literature incorporate the AS property of energy-dependent couplings in two ways. In the first approach, the scaling laws of the couplings are taken from the rigorous RG computation of AS close to the NGFP or the GFP, or in some infrared range. Then, either these laws are incorporated in General Relativity solutions, or they are included in properly modified Einstein equations that respect the Bianchi identities. In this approach, there is the advantage of using rigorous and trusted results from RG studies of AS. However, this approach typically concentrates on the study of a relatively restricted range of energy scales, i.e. the big bang regime or the infrared regime, and is not used for the description of the whole cosmological evolution. In the second approach, RG-improved techniques are used either in the equations of motion or in the machinery of the effective average action. These models are not implemented at the same level of rigor as the full RG flow studies forming the core of AS. However, they allow for the construction of interesting cosmological scenarios with extended cosmological evolution. It is common in the AS literature to set G and Λ as functions of the energy k in the existing solutions of the Einstein equations in order to improve their behaviour. The simple input of G(k) and Λ(k) into the classical vacuum equations results in a violation of the Bianchi identities, while the same input into a classical solution creates a metric which is not a solution of a well-defined theory. In <cit.>, the formalism for obtaining RG-improved solutions that respect the Bianchi identities was presented at the action level. In <cit.>, an alternative and mathematically more tractable approach, consistent with the Bianchi identities, was developed at the level of the equations of motion, where the appropriate covariant kinetic terms that support an arbitrary source field Λ(k) were included without any symmetry assumption. Many AS cosmological studies have analyzed the early cosmological evolution or the dark energy problem, and it has even been possible to propose solutions to the cosmic entropy issue <cit.>. Of particular interest are studies where “RG improved” cosmologies admit exponential or power-law inflationary solutions <cit.>. The initial vacuum state of the cosmos is characterized by an energy-dependent cosmological constant, and subsequently the Einstein equations, modified according to AS, include a non-zero matter energy-momentum tensor with an energy-dependent Newton's constant (matter is expected to appear due to energy transfer from the vacuum to the matter fields).
Both this Λ and this G respect the energy dependence that is predicted in the context of AS at the NGFP. In <cit.>, extending the formalism presented in <cit.> beyond the vacuum case to also include matter, quantum gravity inspired modified Einstein equations were realized, capable of describing both the absence of matter and configurations with matter contributions. There are also studies discussing the singularity problem <cit.> or the assumption that the universe had a quantum vacuum birth <cit.>. An important question, in order for the model to be reasonable, is the association of the RG scale parameter k with the cosmological time or a proper length. Early works chose the RG scale to be inversely proportional to the cosmological time <cit.>, while later the more popular connection with the Hubble scale was developed. In some other works, the RG scale is linked with the plasma temperature or with the fourth root of the energy density <cit.>, the cosmological event/particle horizons <cit.>, or curvature invariants like the Ricci scalar <cit.>-<cit.>.

§ 

Here we perform an analytical study of section <ref> concerning the Gaussian fixed point. Because of the extreme fine-tuning in α, a numerical study of the equations of section <ref> is not possible. For κ=0 we define the quantities

χ̃ = [χ−(1−Ω_m0)H_0^2] / [(1−Ω_m0)H_0^2−α/3] ,  α̃ = α/H_0^2 > 0 .

Note that χ̃+1>0. From (<ref>), the dimensionless quantity α̃/3 is fine-tuned very close to 1−Ω_m0, i.e. 0 < 1−Ω_m0−α̃/3 ≲ 10^-22. From (<ref>) it is χ̃_0=0, while equation (<ref>) becomes

dχ̃/dz = 4ζ(χ̃+1)^(5/4) / [(1+z)^2 √(1 − [α̃/3+Ω_m0(1+z)^3+(1−Ω_m0−α̃/3)(χ̃+1)] r_Σ^2a_0^2H_0^2/(1+z)^2)] ,

where

ζ ≡ [3^(1/4)/(ξν^(1/4))] (r_Σ^2a_0^2H_0^2/(H_0√(G_N)))^(1/2) (1−Ω_m0−α̃/3)^(1/4) .

Due to (<ref>) it is ζ ≲ 10^23. Equation (<ref>) takes the form

Ω_m = Ω_m0(1+z)^3 / [α̃/3+Ω_m0(1+z)^3+(1−Ω_m0−α̃/3)(χ̃+1)] ,

where the first α̃/3 in the denominator can in practice be replaced by 1−Ω_m0. Then, since χ̃_0=0, equation (<ref>) is consistent today. For a recent range of redshifts z it is (1−Ω_m0−α̃/3)(χ̃+1) ≪ 10^5. This condition actually defines this recent range of z. It is a very weak condition which is also physically reasonable. Indeed, in the opposite case, Ω_m from (<ref>) would become extremely small for recent z, which is unacceptable, since the universe would be practically empty of matter. Therefore, this condition is expected to be valid for all relevant recent z. Thus, it arises that (1−Ω_m0−α̃/3) r_Σ^2a_0^2H_0^2 (χ̃+1) ≪ 1 and (<ref>) is well approximated by

dχ̃/dz = 4ζ(χ̃+1)^(5/4)/(1+z)^2 ,

with general solution

χ̃ = (c̃+ζ/(1+z))^(-4) − 1 ,

where c̃ is an integration constant. From the integration of (<ref>) it arises that it should be c̃+ζ/(1+z)>0. Since χ̃_0=0, the solution takes the form

χ̃ = [(1+z)/(1−(ζ−1)z)]^4 − 1 ,

under the condition (ζ−1)z<1. For ζ≤1 the condition (ζ−1)z<1 is satisfied for any z. From (<ref>) it arises that χ̃+1=𝒪(1), thus (1−Ω_m0−α̃/3)(χ̃+1) ≲ 10^-22. Therefore, the previous inequality (1−Ω_m0−α̃/3)(χ̃+1) ≪ 10^5 is indeed satisfied, and moreover (<ref>) takes the form

Ω_m = Ω_m0(1+z)^3 / [1−Ω_m0+Ω_m0(1+z)^3] ,

which is the ΛCDM behaviour. Therefore, in this case the behaviour of Ω_m(z) cannot be distinguished from the ΛCDM behaviour. For ζ>1, the condition (ζ−1)z<1 is satisfied for z<z_ζ, where z_ζ=(ζ−1)^(-1). Therefore, for ζ>1 the model is valid only for z<z_ζ. It is obvious that as ζ increases, z_ζ decreases, and z is only meaningful in a short range around z=0. Thus, physically the only reasonable values of ζ are those of order one.
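The closed form above solves the approximate equation exactly, which is easy to verify by direct numerical integration; a small sketch (the value ζ=2, giving z_ζ=1, is an arbitrary illustration):

```python
from scipy.integrate import solve_ivp

def chi_analytic(z, zeta):            # chi~ = [(1+z)/(1-(zeta-1)z)]^4 - 1
    return ((1 + z) / (1 - (zeta - 1) * z))**4 - 1

zeta = 2.0                            # arbitrary illustration; z_zeta = 1/(zeta-1) = 1
sol = solve_ivp(lambda z, y: [4 * zeta * (y[0] + 1)**1.25 / (1 + z)**2],
                (0.0, 0.9), [0.0], dense_output=True, rtol=1e-10, atol=1e-12)
for z in (0.3, 0.6, 0.9):
    print(z, sol.sol(z)[0], chi_analytic(z, zeta))   # numeric and closed form agree
```

As z approaches z_ζ the solution blows up, which is the numerical counterpart of the condition (ζ−1)z<1 discussed above.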
Furthermore, for any ζ>1, the redshift z should not be extremely close to z_ζ; otherwise χ̃+1 would become very large and, as stated above, Ω_m would become extremely suppressed, which is not acceptable. To be more precise, let us define the quantity

ε = (1−Ω_m0−α̃/3)^(1/4) ,

where ε ≲ 10^-5.5. Thus, it should be z ≲ (1−ε)z_ζ, which is the regime of applicability of the model, and then (1−Ω_m0−α̃/3)(χ̃+1) ≲ 1. This means that very close to the highest value of z, i.e. very close to (1−ε)z_ζ, there is a deviation from ΛCDM, while shortly later, as z decreases, the Ω_m behaviour cannot be distinguished from that of ΛCDM. By shortly later we mean that for z ≲ (1−5ε)z_ζ the ΛCDM behaviour is established. In the initial era (1−5ε)z_ζ ≲ z ≲ (1−ε)z_ζ the full equation (<ref>) is valid. The evolution of the Hubble parameter is found from (<ref>) to be

H^2/H_0^2 = α̃/3 + Ω_m0(1+z)^3 + (1−Ω_m0−α̃/3)(χ̃+1) .

The first term α̃/3 on the r.h.s. of (<ref>) can in practice be replaced by 1−Ω_m0. As above, for ζ≤1 the ΛCDM expression arises,

H^2/H_0^2 = 1−Ω_m0 + Ω_m0(1+z)^3 ,

while for ζ>1 the full expression (<ref>) is kept, which however reduces to (<ref>) when z ≲ (1−5ε)z_ζ. In order to study the acceleration properties of the model, equation (<ref>) is written as

ä/(H_0^2a) = α̃/3 − (Ω_m0/2)(1+z)^3 + (1−Ω_m0−α̃/3)(χ̃+1) [1 − 2ζ(χ̃+1)^(1/4) / ((1+z)√(1 − [α̃/3+Ω_m0(1+z)^3+(1−Ω_m0−α̃/3)(χ̃+1)] r_Σ^2a_0^2H_0^2/(1+z)^2))] ,

while equation (<ref>) becomes

w_DE = −1 + 4ζ(1−Ω_m0−α̃/3)(χ̃+1)^(5/4) / {3(1+z)[α̃/3+(1−Ω_m0−α̃/3)(χ̃+1)] √(1 − [α̃/3+Ω_m0(1+z)^3+(1−Ω_m0−α̃/3)(χ̃+1)] r_Σ^2a_0^2H_0^2/(1+z)^2)} .

According to the previous results, the long square root in (<ref>) can be set to unity, while the term α̃/3 at the beginning of the r.h.s. can be replaced by 1−Ω_m0. Similar simplifications occur also in (<ref>). If we define for convenience the quantity

μ = [3^(1/4)/(ξν^(1/4))] (r_Σ^2a_0^2H_0^2/(H_0√(G_N)))^(1/2) ,

then ζ=με. For galaxies it is μ ∼ 10^27, while for clusters μ ∼ 10^28. For ζ≤1 ⇔ ε<10^-28, equation (<ref>) attains the ΛCDM behaviour

ä/(H_0^2a) = 1−Ω_m0 − (Ω_m0/2)(1+z)^3 .

Additionally, equation (<ref>) gives w_DE=-1. Therefore, for ζ≤1 the acceleration properties of the model coincide with those of ΛCDM. For ζ>1 ⇔ ε>10^-28, equations (<ref>), (<ref>) become

ä/(H_0^2a) = 1−Ω_m0 − (Ω_m0/2)(1+z)^3 + ε^4(χ̃+1)[1 − 2με(χ̃+1)^(1/4)/(1+z)] ,
w_DE = −1 + 4με^5(χ̃+1)^(5/4) / {3(1+z)[1−Ω_m0+ε^4(χ̃+1)]} .

In these equations there are some characteristic intervals of z with the following hierarchy: (1−μ^(1/5)ε)z_ζ < (1−0.5μ^(1/5)ε)z_ζ < (1−5ε)z_ζ < (1−ε)z_ζ. In the initial regime (1−0.5μ^(1/5)ε)z_ζ ≲ z ≲ (1−ε)z_ζ, equations (<ref>), (<ref>) can well be approximated by only their very last terms, which means that in this regime a deceleration is present. Especially for (1−5ε)z_ζ ≲ z ≲ (1−ε)z_ζ this deceleration is large and is basically controlled by the parameter μ. Progressively, as z decreases towards the value (1−μ^(1/5)ε)z_ζ, these last terms become smaller, towards some values of order one, and thus these terms are comparable to the conventional ΛCDM terms of (<ref>), (<ref>). Therefore, in the redshift interval around z=(1−μ^(1/5)ε)z_ζ, w_DE gets a positive, order one correction to the ΛCDM value −1. As we can see, for the astrophysical and cosmological values encountered in our model, the term with the unit 1 in the bracket of (<ref>) can always be omitted, and also the quantity ε^4(χ̃+1) in the denominator of (<ref>) is only significant for z ∼ (1−ε)z_ζ. Finally, for z=0 we get ä_0/(H_0^2a_0) = 1−(3/2)Ω_m0−2με^5 and w_DE,0 = −1+4με^5/[3(1−Ω_m0)]. Depending on the numerical value of the quantity με^5, the values ä_0, w_DE,0 coincide or not with the ΛCDM ones.
Of course, in order to have acceleration today it should be 4με^5/[3(1−Ω_m0)]<1. Therefore, from the beginning of the AS effect, the functions ä(z), w_DE(z) evolve in a non-ΛCDM way up to z ∼ (1−μ^(1/5)ε)z_ζ or up to z=0, while on the contrary, as seen above, the functions H(z), Ω_m(z) have already passed into the ΛCDM behaviour for z ≲ (1−5ε)z_ζ. This peculiar phenomenon is due to the presence of the higher time derivatives contained in the acceleration, which can lead to a significant contribution from terms which are negligible in H, Ω_m. For the most interesting case with ζ∼1, the today values ä_0, w_DE,0 are the same as the ΛCDM ones, which means that after passing the era with z ∼ (1−μ^(1/5)ε)z_ζ the model reduces to ΛCDM. Therefore, the behaviour of the model around the passage from deceleration to acceleration differs from the ΛCDM one and could in principle be discerned using precise observational data. However, this is not the case, since μ^(1/5)ε = ζμ^(-4/5) ≪ 1 for the numerical values of the astrophysical and cosmological quantities we are interested in. Only in the case that the quantity ν in (<ref>) is substantially enlarged, which is not predicted by the theory, could μ be essentially reduced. Therefore, the previous eras of deviation from ΛCDM cannot be observed, and the model is practically identical to ΛCDM in all the range of its validity. For 1≪ζ≪10^22, it is still μ^(1/5)ε = ζμ^(-4/5) ≪ 1, which means that the model is indistinguishable from ΛCDM, beyond the fact that z_ζ is already unphysically small. Finally, for ζ∼10^22, which means ε∼10^-5, it is μ^(1/5)ε = ζμ^(-4/5) ∼ 1. Therefore, in this case the today value of w_DE is different from −1 and the model is always different from ΛCDM. However, the model in this case is meaningless, since z_ζ is extraordinarily small. As a result, we can summarize by saying that in the physically meaningful case with ζ∼1, the acceleration properties of the model cannot be distinguished from ΛCDM.

Weinberg:1988cp S. Weinberg, Rev. Mod. Phys. 61, 1 (1989) doi:10.1103/RevModPhys.61.1. Sahni:1999gb V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D 9, 373 (2000) doi:10.1142/S0218271800000542 [astro-ph/9904398]; S. M. Carroll, Living Rev. Rel. 4, 1 (2001) doi:10.12942/lrr-2001-1 [astro-ph/0004075]; T. Padmanabhan, Phys. Rept. 380, 235 (2003) doi:10.1016/S0370-1573(03)00120-0 [hep-th/0212290]; S. Nojiri and S. D. Odintsov, eConf C 0602061, 06 (2006) [Int. J. Geom. Meth. Mod. Phys. 4, 115 (2007)] doi:10.1142/S0219887807001928 [hep-th/0601213]; M. Li, X. D. Li, S. Wang and Y. Wang, Commun. Theor. Phys. 56, 525 (2011) doi:10.1088/0253-6102/56/3/24 [arXiv:1103.5870 [astro-ph.CO]]; J. Martin, Comptes Rendus Physique 13, 566 (2012) doi:10.1016/j.crhy.2012.04.008 [arXiv:1205.3365 [astro-ph.CO]]. Peebles:2002gy P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003) doi:10.1103/RevModPhys.75.559 [astro-ph/0207347]; E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006) doi:10.1142/S021827180600942X [hep-th/0603057]. Sola:2013gha J. Sola, J. Phys. Conf. Ser. 453, 012015 (2013) doi:10.1088/1742-6596/453/1/012015 [arXiv:1306.1527 [gr-qc]]. Riess:1998cb A. G. Riess et al. [Supernova Search Team], Astron. J. 116, 1009 (1998) doi:10.1086/300499 [astro-ph/9805201]. Perlmutter:1998np S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517, 565 (1999) doi:10.1086/307221 [astro-ph/9812133]. Knop:2003iy R. A. Knop et al. [Supernova Cosmology Project Collaboration], Astrophys.
| http://arxiv.org/abs/1706.08779v2 | {
"authors": [
"Georgios Kofinas",
"Vasilios Zarikas"
],
"categories": [
"gr-qc",
"astro-ph.CO",
"hep-th"
],
"primary_category": "gr-qc",
"published": "20170627110746",
"title": "A solution of the dark energy and its coincidence problem based on local antigravity sources without fine-tuning or new scales"
} |
The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, I-34151 Trieste, Italy
We propose a realization of a mechanically tunable Ruderman-Kittel-Kasuya-Yosida interaction in a double quantum dot nanoelectromechanical device. The coupling between the spins of two quantum dots suspended above a metallic plate is mediated by conduction electrons. We show that the spin-mechanical interaction can be driven by a slow modulation of the charge density in the metallic plate. We propose to use Stückelberg oscillations as a sensitive tool for the detection of the spin and charge states of the coupled quantum dots. A theory of the mechanical back action induced by a dynamical spin-spin interaction is discussed.
Tunable RKKY interaction in a double quantum dot nanoelectromechanical device
M. N. Kiselev
December 30, 2023
=============================================================================
§ INTRODUCTION
Recent progress in theory and experiment on nanometer-sized devices sheds light on the significant role of the spin degree of freedom (see Refs. <cit.>-<cit.>) in opto- and electromechanical systems. Nanomechanical quantum-classical hybrid systems <cit.>-<cit.> are important for both fundamental research and applications <cit.>. The range of problems addressed by nanomechanics varies from the design of new tools for quantum information processing to the development of highly sensitive methods of, for example, mass, force, and current detection in metrology <cit.>,<cit.>. While nano-optomechanics deals with the coupling of light to mechanical degrees of freedom <cit.>, nanoelectromechanics (NEM) works with mechanically nanomachined electrons <cit.>. Several new directions of NEM, in particular those focused on the investigation of mechanical systems coupled to spins (spintromechanics <cit.> and optomagnonics <cit.>), emerged recently thanks to progress in experiment <cit.>, theory <cit.>, and material science <cit.>. The ability to manipulate nanoelectromechanical systems via the electron's spin leads to a variety of new phenomena <cit.>-<cit.>. Since typical mechanical displacements of NEM devices are in the range from angstroms to nanometers, their detection requires very sensitive methods. Quantum interferometry <cit.> provides one such sensitive tool <cit.>,<cit.>. In most cases, measurement of the back action induced by the quantum electron spin <cit.> and charge system <cit.>,<cit.> onto a mechanical resonator gives yet another sensitive tool for measuring the out-of-equilibrium properties of a quantum system operating, in many cases, in a regime of strong electron-electron interaction and/or resonance scattering; see examples in Ref. <cit.>. One of the most intriguing examples of spin-related physics in NEMS is the Kondo effect in shuttling devices <cit.>,<cit.>. Kondo physics in quantum dots (QD) <cit.> manifests itself as a many-body effect associated with the creation of a cooperative singlet state composed of conduction electrons in the leads and a localized QD spin S=1/2. Complete screening of a spin impurity in the QD occurs at temperatures well below the Kondo temperature T_ K, the typical energy scale of the interaction. Formation of a Kondo singlet is accompanied by the saturation of the nanodevice's electric conductance at the unitary limit 2e^2/h. Mechanical motion of the QD results in a time dependence of T_ K, which allows one to employ this effect as a dynamical probe of the Kondo cloud <cit.>.
Quantum engineering of NEM-QD devices opens the possibility of investigating competing interactions and emergent symmetries in the presence of resonance electron scattering, for example: the two-channel Kondo effect in a side-coupled QD <cit.>, the two-impurity Kondo effect in a parallel- or serially-coupled double quantum dot (DQD) <cit.>,<cit.>, or the SU(4) Kondo effect in a single-wall carbon-nanotube-based QD <cit.>, <cit.>. To demonstrate back action based on spin exchange in a mechanical resonator we concentrate on studying a parallel DQD system with spin-spin coupling controlled by its nanomechanical motion. In the regime of resonant scattering of mobile electrons on localized spins, the two-impurity Kondo model arises <cit.>. It is well known (see, e.g., Refs. <cit.>,<cit.>) that at temperatures T<T_ K, mobile electrons "screen" each impurity independently. This happens, however, only if the spin-spin interaction mediated by conduction electrons, aka the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction <cit.>, is sufficiently small. The system behaves as two independent Kondo impurities under the condition |I_ RKKY|≲2.2 T_ K <cit.>; otherwise the two impurities' spins are effectively locked into a singlet or triplet state depending on the sign of the RKKY interaction I_ RKKY. In recent experiments the RKKY interaction carried by the electrons in a metallic grain <cit.> or a nanowire <cit.> was used as an additional tool for manipulating the Kondo effect in DQD systems, see, e.g., Ref. <cit.>. However, as shown in theory <cit.>, <cit.>, the controllability of these systems, e.g., the tunability of the RKKY interaction, is very much limited by the device fabrication process. In this paper we show theoretically that in a suspended DQD NEM hybrid device placed near a metallic plate (back gate), the coupling between the mechanics and the electron spins 1/2 localized in the QDs can be mediated by the RKKY interaction. We investigate two NEM subsystems suspended above a two-dimensional electron gas (2DEG) <cit.>, see Fig. <ref>(a). Each subsystem consists of a source and a drain bridged by a vibrating QD. A molecule attached to the leads by the van der Waals interaction, or another vibrating nanowire with strong size quantization, e.g., a carbon nanotube with a length of less than 1 μm, serves as a prototype QD device. We assume that the number of electrons on each QD is odd and consider the QDs as mobile spin quantum impurities. The possibility for electrons to tunnel from the QDs to the 2DEG and back leads to an effective exchange (RKKY) interaction between the QDs with spins S_1 and S_2,
H_ eff=I_ RKKY(R) S_1 S_2.
The RKKY interaction mediated by conduction electrons is an oscillating function of the distance R between the localized spins, I_ RKKY∝cos(2k_ F R-π/2(D+1))/R^D (see Ref. <cit.>), where k_F is the electron momentum at the Fermi surface and D is the dimension of the metallic reservoir. This interaction provides an implicit coupling between the mechanical and spin degrees of freedom originating from the time-dependent deflection of the vibrating QDs, see Fig. <ref>(b). The sign of the exchange interaction determines the ground state of the DQD spin configuration. Namely, for a distance R(t) such that I_ RKKY(R(t))>0, the interaction between the spins is antiferromagnetic and the corresponding ground state of the system is a singlet (total spin S=0). The ferromagnetic RKKY interaction I_ RKKY(R(t))<0 facilitates formation of the triplet S=1 ground state.
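The sign alternation that selects between these two ground states is easy to visualize. The following short Python sketch (all parameter values are illustrative assumptions, not taken from a specific device) evaluates the asymptotic form I_ RKKY∝cos(2k_ F R-π(D+1)/2)/R^D for a two-dimensional reservoir and locates the distances at which the coupling switches between antiferromagnetic (singlet) and ferromagnetic (triplet):

import numpy as np

k_F = 1.0e6                              # Fermi wave vector in cm^-1 (illustrative)
D = 2                                    # dimension of the metallic reservoir (2DEG)
R = np.linspace(0.5e-6, 5.0e-6, 50001)   # interdot distances in cm

# Asymptotic RKKY coupling, up to a positive overall prefactor
I_rkky = np.cos(2.0 * k_F * R - np.pi * (D + 1) / 2.0) / R**D

# Zero crossings separate singlet (I_rkky > 0) and triplet (I_rkky < 0) regions
flips = R[np.where(np.diff(np.sign(I_rkky)) != 0)[0]]
print("first zero crossings [cm]:", flips[:4])
print("expected spacing pi/(2 k_F) [cm]:", np.pi / (2.0 * k_F))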
As a result, the potential energy associated with the RKKY exchange interaction gives rise to an additional displacement-dependent force. This force acts on the mechanical resonator and is sensitive to the spin configuration of the DQD NEM device. The paper is organized as follows: Section II is devoted to the formulation of the model describing a driven double-quantum-dot nanoelectromechanical device. We present a short derivation of the mechanically nanomachined effective two-impurity Kondo model. The RKKY-induced back action in the mechanical subsystem and the dynamics of the QD oscillations are discussed in Sec. III. The analysis of the Stückelberg interference pattern is presented in Sec. IV. The summary and discussion are given in Sec. V.
§ MODEL
The Hamiltonian of the mobile DQD system reads: H=H_ DQD+H_ leads+H_ tun+H_ vib, where the first term characterizes the DQD,
H_ DQD=∑_i=1,2,σ,σ'[ε_i n̂_i,σ+U n̂_i,↑n̂_i,↓]+U_12 n̂_1,σn̂_2,σ'.
Here ε_i is an electron energy level on the ith QD, n̂_i,σ=d̂_i,σ^†d̂_i,σ is the density operator of electrons with spin projection σ=↑,↓ and d̂_i,σ (d̂_i,σ^†) is the corresponding annihilation (creation) operator, U is the Coulomb interaction between electrons in each QD, and U_12 is a capacitive coupling between the QDs. We assume that each QD is singly occupied and the system is in the particle-hole symmetric regime, ε_i=-U/2. H_ leads in Eq. (<ref>) describes the electrons in the leads and the metallic plate,
H_ leads=∑_α∑_ k, i,σξ^i_ kαĉ_ kσα^i†ĉ_ kσα^i+∑_ kσϵ(t)â^†_ kσâ_ kσ,
where the operators ĉ_ kσα^i (ĉ_ kσα^i†) and â_ kσ (â_ kσ^†) denote electron annihilation (creation) in the (i, α)th electrode and the 2DEG, respectively (here α stands for source/drain). Their excitation energies ξ^i_ kα=ε^i_ kα-μ^i_α and ϵ(t)=ϵ_ k-μ(t) are counted from the chemical potentials μ^i_α, μ(t). Note that we consider the case of a time-dependent chemical potential and density of the electrons in the 2DEG, μ(t)=ϵ_F+eVsin(Ω t) (periodic modulation with frequency Ω), while the amplitude of the modulation is limited by the condition |eV|<U/2 to avoid electron exchange between the QDs and the 2DEG. In addition, we assume an adiabatic drive (see the discussion below). The electron tunneling between the leads and the QDs is described by a standard tunnel Hamiltonian
H_ tun=∑_k,α,σ,i(t_ki,αĉ^i†_kασd̂_i,σ+h.c.) +∑_ k,i,σ(γ_ ki e^i k R_i(t)â^†_ kσd̂_i,σ+h.c.),
where t_ki,α, γ_ ki are the corresponding tunnel matrix elements and R_i denotes the momentary position of the ith QD. Oscillations of the ith "impurity" with frequency ω_i are described by a harmonic oscillator Hamiltonian
H_ vib=∑_i(p̂^2_i/2M+M ω_i^2 x̂_i^2/2),
where p̂_i and x̂_i are the momentum and displacement of the ith QD. We assume equal masses M of the oscillators. We point out that the QD vibrations take place along the x direction in the plane parallel to the 2DEG, see Fig. <ref>. In the following, we assume that the QD displacements are large compared to the zero-point motion amplitude x_0, x̂_i≫x_0=√(ħ/Mω_i), which allows us to consider x̂_i, p̂_i as classical variables. We map the above model, Eqs. (<ref>)-(<ref>), onto a two-impurity Kondo problem <cit.> by using a time-dependent Schrieffer-Wolff transformation <cit.>. This procedure is legitimate as long as the number of electrons occupying each QD is odd, which requires fulfillment of the condition |U|/2≫Γ^i_γ+∑_αΓ^i_α.
Here Γ^i_α=2πν |t_ki,α|^2 and Γ^i_γ=2π N_ 2D |γ_ ki|^2 are the tunneling rates for the (i,α)th electrode and the 2DEG, respectively, while ν and N_ 2D are the corresponding densities of states at the Fermi level. The effective Kondo Hamiltonian in the adiabatic approximation, {ħΩ, ħ k_F |Ṙ_i|}≪|U|/2 <cit.>, reads
H_ K=∑_ kσϵ(t)â^†_ kσâ_ kσ+∑_ k, k';i J(t)e^i( k'- k) R_i(t) s· S_i+ ∑_ kσα;iξ^i_ kαĉ_ kσα^i†ĉ_ kσα^i+1/2∑_ k k', αα'∑_σσ'; i J_αα'^i S_i·ĉ^i†_ k'σ'α'σ^σ'σĉ^i_ kσα.
Here s=â_ k'σ'^†σ^σ'σâ_ kσ/2 and S_i are the spin operators of the electrons in the 2DEG and of the ith QD, σ^σ'σ are the Pauli matrices, and J(t)=J_0+J_1[eVsin(Ω t)-( k+ k')Ṙ_i/2] is the exchange interaction between the ith "impurity" and the 2DEG, with
J_0=2|γ_ ki|^2U/[E_12(U-E_12)], J_1= J_0^2(U-2E_12)/(2|γ_ ki|^2 U).
Here E_12≡U/2-U_12 and J_αα'^i=2t_k'α'^i t^i∗_kα U/[E_12(U-E_12)] is the exchange constant between the ith localized spin and the electrons in the leads. In Eq. (<ref>) we omit irrelevant terms of two sorts: (i) terms responsible for electron scattering without spin flip and (ii) terms accounting for spin-flip processes during electron scattering from the source/drain electrodes to the 2DEG and vice versa. Below we restrict ourselves to the question of how the RKKY interaction in the DQD-NEM system affects the vibrational degree of freedom. For simplicity we assume | J_0|≫T_ K^i and ignore Kondo physics (two different Kondo temperatures <cit.>,<cit.> generically arise in the four-terminal setup, Fig. <ref>: two pairs of contacts provide two independent Fermi seas for the quantum impurities). We also neglect all effects associated with current flow through the system by considering zero source-drain bias. The out-of-equilibrium effects associated with a finite current and the effects of thermal noise due to equilibrium current fluctuations will be considered elsewhere <cit.>.
§ BACK-ACTION
We analyze the dynamics of the ith QD characterized by the amplitude of its fundamental vibrational mode x_i(t), whose time evolution is described by Newton's equation
ẍ_i+γẋ_i + ω_i^2 x_i=-1/M ∂/∂ x_i V_eff(| R_1- R_2|, t),
where γ is a phenomenological damping and V_eff is an effective exchange interaction which, in the adiabatic approximation (ħΩ≪ϵ_F), can be obtained from linear response theory <cit.>. To derive an effective potential for the RKKY interaction between spins located at different wires we integrate out all states of the 2DEG at the conducting plate (see Fig. <ref>). As a result, the effective RKKY potential is given by the real part of the density-density correlation function <cit.> of the 2DEG <cit.>. The imaginary part of this correlation function contributes to the damping of the mechanical subsystem. Performing a Fourier transform of V_eff and treating the second term in Eq. (<ref>) as a perturbation in |γ_ ki|^2/(Uϵ_F)≪ 1, we obtain
V_eff=∫ dω e^-i ω t V_eff( R, ω)(1+ J_1eV/2i J_0∑_κ=±κδ(ω+κΩ)),
V_eff(| R|, ω)= J_0^2/2∑_ k, q;i≠ j⟨ S_i S_j⟩e^-i q R(f_ k-f_ k+ q)/(ω̃_i+ϵ_ k-ϵ_ k+ q+i0^+),
where R=R_1-R_2, δ(ω) is the Dirac delta function, ω̃_i=ω- qṘ_i, and f_ k is the Fermi distribution function. The real part of the effective potential is the RKKY interaction between the two "impurities", while its imaginary part is the spectral function of electron-hole excitations in the 2DEG. The imaginary part of V_eff is in general related to damping mechanisms associated with, e.g., the creation of collective plasmon and/or magnon excitations or tunneling into the leads. We neglect such contributions by assuming the adiabaticity condition. The real part of V_ eff(| R|,0) at zero temperature (see, e.g., Refs.
<cit.>,<cit.>) is given by:
V_ eff(R)= J_0^2/2⟨ S_1 S_2⟩ A^2 N_ 2D k_F^∗ 2 · [J_0(k_F^∗R)N_0(k_F^∗R)+J_1(k_F^∗R)N_1(k_F^∗R)],
where A is the confinement area of the 2DEG (back gate), k_F^∗≈k_F+(eV/2ħ v_F)sin(Ω t) is a time-dependent Fermi wave vector, and J_m(z), N_m(z) are the Bessel functions of the first and second kind. For simplicity, we consider only the long-distance limit of Eq. (<ref>) at large k_F R, which corresponds to a power-law -sin(2k_F^∗R)/R^2 RKKY asymptote <cit.>. Here the distance between the "impurities" is R=√((r_x+x_1-x_2)^2+r_y^2) and r_x, r_y are the x-y projections of R at the equilibrium position, see Fig. <ref>(a). Let us assume that the average distance between the vibrating QDs, R_0=√(r_x^2+r_y^2), is close to a distance at which the RKKY interaction changes sign, 2k_F R_0≈π n (where n is an integer). Then, by expanding the oscillating RKKY function with respect to the small parameters (eV/ϵ_F, r_x(x_1-x_2)/R_0^2≪1) and substituting it into Eq. (<ref>), one obtains the system of coupled equations of motion for the displacement of the ith QD,
ẍ_i+γẋ_i+ω^2_i x_i=±α_0 (1+2r_y^2/R_0^2 (x_1-x_2)/r_x)× (1+eV(1/2ϵ_F+ J_1/J_0)sin(Ω t)),
where "+" is for i=1 and "-" for i=2, and α_0=J_0^2⟨ S_1 S_2⟩ N_ 2D(A^2k_Fr_x/2π M R_0^3). The RKKY interaction results in three different forces on the r.h.s. of Eq. (<ref>), which (i) lead to a renormalization of the ith QD equilibrium position, (ii) create a time-dependent force proportional to sin(Ω t), and (iii) result in energy transfer between the two oscillating QDs (beating). In particular, the RKKY interaction between the spins provides the coupling between the QDs and the mechanical subsystems. Such a spin-mechanical coupling can easily be extended to the quantum limit. Expansion of sin(2k_FR) around the equilibrium interdot distance π n/2k_F and quantization of the QD displacement field lead to the spin-mechanical interaction Hamiltonian H_ int∼λ S_1 S_2 (b̂+b̂^†)/√(2), where λ=(-1)^n J_0^2N_ 2Dk_FA^2r_xx_0/R_0^3 is the spin-phonon coupling and b̂, b̂^† are boson operators of the vibrational quanta. We rewrite the EOM (<ref>) in terms of the normal (i) in-phase, x_1+x_2, and (ii) out-of-phase, x_1-x_2, modes. The first term on the r.h.s. of Eq. (<ref>) is eliminated by a redefinition of the "impurities'" initial deflections, x_i∓α_0/ω_i^2→x_i. We assume that the QD eigenfrequencies ω_1,2=ω_0±δω differ by a small value δω≪ω_0. In addition, we introduce a dimensionless time τ=ω_0 t and dimensionless normal-mode displacements in units of the length of the nanowires l_0: φ=(x_1+x_2)/l_0, ϕ=(x_1-x_2)/l_0. We denote Δ=2δω/ω_0, ω_d=Ω/ω_0, α̃_0=2α_0/ω_0^2, and introduce the dimensionless force and frequency shifts
F=α̃_0 eV/l_0 · J_1/ J_0, α_1=2α̃_0/r_x (r_y/R_0)^2, α_2=l_0 F/r_x (r_y/R_0)^2.
The coupled mechanical equations of motion (<ref>) in dimensionless notation are given by
φ̈+ φ̇/Q+ φ= -Δ·ϕ,
ϕ̈+ϕ̇/Q+[1-α_1-α_2sin(ω_d τ)]ϕ =-Δ·φ +Fsin(ω_d τ),
where Q is a quality factor. As a result, the equation of motion for the out-of-phase mode ϕ describes a parametric oscillator subject to an external time-dependent driving force, coupled to a non-driven oscillator associated with the in-phase mode φ. Notice that the frequency shift for the in-phase mode is negligible compared to the shift for the out-of-phase mode, which consists of a time-independent part α_1 and a part ∝α_2 that is periodic in time. The coupling constant α_2 in (<ref>) can be estimated considering realistic parameters for a typical 2DEG: ϵ_F∼10 meV, k_F∼10^6 cm^-1.
Besides, without loss of generality, we assume J_0∼10 K and eV/ϵ_F∼0.1, and consider a carbon nanotube as an example of a vibrating nanowire (ω_0∼100 MHz is a fundamental frequency of the carbon nanotube's bending modes and x_0∼10^-9 cm is the amplitude of zero-point oscillations). Furthermore, taking R_0∼r_y∼10^-6 cm and considering QD charging energies in the range from 1 K to 10 K, we obtain:
α_2∼ (J_0/ϵ_F)(J_0/ħω_0)(1/k_FR_0)(x_0/R_0)^2(eV/U)(r_y/R_0)^2∼10^-3÷ 10^-2.
§ STÜCKELBERG INTERFERENCE IN CLASSICAL TWO-LEVEL SYSTEM
Finally, we propose an experimental realization of the spin-mechanical coupling based on the investigation of the envelope function of the vibrating QDs' displacements <cit.>. The idea is based on the observation that the slow dynamics of the NEM system associated with the drive ω_d mimics the dynamics of a driven quantum two-level system. The slowly varying amplitudes of the in- and out-of-phase modes play the same role as the spinor in the time-dependent Schrödinger equation of a two-level system <cit.>,<cit.>. To demonstrate the similarity between classical and quantum driven systems we start with the ansatz <cit.>,<cit.>: φ(τ)=C· Re{Φ_+(τ) e^iτ-τ/(2Q)}, ϕ(τ)=C· Re{Φ_-(τ) e^iτ-τ/(2Q)}. The complex amplitudes Φ_± are equivalent to the spinor "wave functions". Here the constant C accounts for the normalization condition |Φ_+|^2+|Φ_-|^2≈1 [to achieve the normalization condition we account for the time-dependent external force F in the particular solution of Eq. (<ref>)]. Substituting Eq. (<ref>) into Eqs. (<ref>) and performing a unitary transformation with the operator W=e^iα_1τ/4exp[-i(α_2/4ω_d)cos(ω_d τ)], we map Eqs. (<ref>) onto a Schrödinger-like equation
i d/dτ ( [ Ψ_+(τ); Ψ_-(τ) ]) = H_ TLS(τ)( [ Ψ_+(τ); Ψ_-(τ) ]),
where H_ TLS= -σ_x Δ/2-σ_z (α_1 + α_2sin(ω_d τ))/4, and Ψ_± are linked to Φ_± as Ψ_±=WΦ_±. The instantaneous adiabatic eigenvalues of the Hamiltonian (<ref>) depend on time τ as E=±(1/2)√(Δ^2+(α_1+α_2sin(ω_d τ))^2/4). In the vicinity of the avoided-crossing points, where the distance between the two levels is minimal, the linearized model describes Landau-Zener transitions <cit.> with the effective Hamiltonian H_ LZ=-(Δ/2)σ_x±(v τ/2)σ_z, where v=α_2ω_d/2 is the driving velocity. The Landau-Zener transition occurs between the diabatic states associated with the in-phase and out-of-phase modes. As a result, adiabatic states representing the true normal modes of the coupled classical oscillators are formed. The probability to stay in the same diabatic state after a single passage through the crossing point is given in the semiclassical approximation by the textbook equation <cit.>: P_ LZ=exp(-πΔ^2/2v). In the case of a multipassage process, the transition probability accounts for both diabatic and adiabatic transitions and contains the phase responsible for the interference between two passes <cit.>. The interference pattern is visualized by a fan-type diagram, see Fig. <ref>. The density plot (see Fig. <ref>) shows the time-averaged probability to populate the in-phase mode as a function of the dimensionless "energy offset" (time-independent frequency shift) α_1 and the driving amplitude α_2. The minima and maxima of the time-averaged probability correspond to destructive and constructive interference between consecutive energy-level crossings <cit.>. Usually, in the absence of any dissipation, the maximum value of the averaged probability to populate the initially unoccupied state is equal to 0.5.
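As an illustration, the following minimal sketch (with assumed, purely illustrative parameter values) integrates the mapped Schrödinger-like equation for H_ TLS above with a fourth-order Runge-Kutta scheme, starting from the unoccupied in-phase mode; since the dissipative 1/Q terms are omitted here, the time-averaged in-phase population stays bounded by the ideal value of 0.5:

import numpy as np

# Illustrative dimensionless parameters; alpha1 < alpha2 so that the avoided
# crossings are reached during the drive (fast-passage regime here).
Delta, alpha1, alpha2, wd = 0.02, 0.1, 0.2, 0.05
dt, n_steps = 0.05, 200000

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(tau):
    # H_TLS = -sigma_x * Delta/2 - sigma_z * (alpha1 + alpha2*sin(wd*tau))/4
    return -sx * Delta / 2 - sz * (alpha1 + alpha2 * np.sin(wd * tau)) / 4

psi = np.array([0.0, 1.0], dtype=complex)   # start in the out-of-phase mode
P_in = np.empty(n_steps)
for n in range(n_steps):
    tau = n * dt
    # RK4 step for i dpsi/dtau = H(tau) psi
    k1 = -1j * H(tau) @ psi
    k2 = -1j * H(tau + dt / 2) @ (psi + dt / 2 * k1)
    k3 = -1j * H(tau + dt / 2) @ (psi + dt / 2 * k2)
    k4 = -1j * H(tau + dt) @ (psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    P_in[n] = abs(psi[0])**2                # population of the in-phase mode

v = alpha2 * wd / 2                         # driving velocity at the crossing
print("single-passage P_LZ      :", np.exp(-np.pi * Delta**2 / (2 * v)))
print("time-averaged population :", P_in.mean())   # bounded by 0.5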
However, the maximum value plotted in Fig. <ref> is below this limit. To explain the probability deficit we point out that the effects of dissipation in a driven nanomechanical system are twofold: on the one hand, these effects invalidate at very large times the correspondence between the full-fledged mechanical equations of motion for the in- and out-of-phase modes and their quantum-mechanical equivalent for slowly oscillating amplitudes (envelope curves as "wave functions"); on the other hand, the evolution of the "analogous" two-level system becomes nonunitary. As a result, the maximum value of the averaged probability depends on the number of adiabatic periods used for computing the average value (see Fig. <ref>). The specific shape of the fan diagram in Fig. <ref> indicates the crossover from the slow-passage regime α_2ω_d≲Δ^2 (bottom part of the main panel) to the fast-passage regime α_2ω_d≳Δ^2 (top part of Fig. <ref>). The interference pattern (main panel) demonstrates pronounced arcs similar to Ref. <cit.>. The decrease of the total probability P_ av with increasing α_2 is qualitatively reminiscent of the similar effect in a quantum two-level system associated with the presence of two typical time scales of the same order of magnitude responsible for relaxation and dephasing, see, e.g., <cit.>. The standard Stückelberg fan diagram <cit.> is constructed assuming mutual independence of the parameters α_1 and α_2. In contrast to this, the RKKY-mediated level crossing imposes a certain constraint on α_1∼α̃_0 and α_2∼α̃_0, making the line α_1=0, α_2≠0 inaccessible. The Stückelberg interference is pronounced inside the cone α_1<α_2. This condition is achieved by fine tuning the gate voltage and the interdot capacitance controlling U_12. Finally, we comment on the relation between the two localized-spin configurations (singlet/triplet), determined by initial conditions given by the RKKY interaction, and the resulting Stückelberg interference pattern. First, the spin configuration affects only the magnitude of the coupling constants α_1,2. The expectation value of ⟨ S_1 S_2⟩ at zero temperature is equal to -3ħ^2/4 for a singlet and ħ^2/4 for a triplet state, respectively. Therefore, for the same values of the external parameters {eV, U, U_12}, the relation α^ Sing_1,2=-3α^ Trip_1,2 holds. In constructing the interference diagram we assumed very long singlet-triplet relaxation times. Thus, the system, being prepared in a certain (singlet or triplet) two-spin initial configuration, is locked in the same state during the evolution. Furthermore, as one can see from Fig. <ref>, the initial two-spin configuration uniquely defines the Stückelberg pattern. Therefore, classical Stückelberg interferometry can be used for the identification of quantum spin states <cit.>.
§ SUMMARY AND DISCUSSION
In summary, we propose a hybrid system coupling two spin impurities embedded in adjacent NEM beams using the RKKY interaction. We showed that a nanodevice based on two suspended quantum dots, nanomachined in the vicinity of a metallic back gate characterized by a slowly modulated charge density, allows us to control independently both local and nonlocal spin correlations. The role of the mechanical system is twofold. On one hand, it provides access to RKKY-mediated dynamics. On the other hand, it provides a very sensitive tool for quantum measurements of nanomechanical back action. The interference between two diabatic states of the mechanical system can be measured with high accuracy through Stückelberg oscillations.
The mechanical system of two coupled QD oscillators, while being itself deeply in the classical regime, mimics the dynamics of a quantum two-level system. The slowly varying displacement envelope functions play the same role as the two-level system's wave functions <cit.>,<cit.>. We have demonstrated that the interference between two classical modes (in-phase and out-of-phase) of a mechanical resonator is sensitive to the quantum spin configuration of the double quantum dot. As a result, the Stückelberg fan diagram provides very accurate information about the spin-spin correlation function. In particular, in the presence of competing interactions, such as, for example, resonance Kondo scattering, the mechanical back action becomes an important tool for sensing the Kondo screening. The interference pattern, being pronounced for both the singlet and the triplet two-spin configurations, disappears completely when two Kondo clouds are formed in the DQD system to screen the electrons' spins. Moreover, mechanical back action can be used to probe the quantum criticality associated with the antagonism between the magnetic (RKKY) interaction and Kondo scattering. The applications of the mobile DQD NEM system, in addition to sensing the spin-spin correlation function, include but are not limited to the following problems, to list a few: competition between resonance on-site Kondo scattering and spin-spin correlations out of equilibrium, nanomechanically induced singlet-triplet transitions in a double-dot device <cit.>, mechanically induced drag, classical vs. quantum synchronization, etc. Possible experimental realizations of mechanically tuned RKKY can be engineered with coupled suspended carbon nanotube/metallic quantum wire resonators or in silicon metal-oxide-semiconductor based junctions, in which the mechanics is modeled by driving the barrier gates with an ac voltage <cit.>.
§ ACKNOWLEDGEMENTS
We are grateful to B. Lorenz and S. Ludwig for fruitful discussions on Stückelberg interference in systems of coupled mechanical resonators, S. Ilani for many valuable suggestions on possible experimental realizations of quantum nanodevices, F. Pistolesi for critical comments on suspended CNTs, R. Fazio and F. Ludovico for careful reading of the manuscript and valuable comments, Leonid Levitov for drawing our attention to Ref. <cit.>, and R. Shekhter and L. Gorelik for inspiring discussions on the RKKY interaction. This work was finalized at the Aspen Center for Physics, which was supported by National Science Foundation Grant No. PHY-1607611, and was partially supported (M.K.) by a grant from the Simons Foundation.
Rugar2004 D. Rugar, R. Budakian, H. J. Mamin, and B. W. Chui, Nature (London) 430, 329 (2004). Kolkowitz2012 S. Kolkowitz, A. C. Bleszynski Jayich, Q. P. Unterreithmeier, S. D. Bennett, P. Rabl, J. G. E. Harris, M. D. Lukin, Science 335, 1603 (2012). Gustaffson2014 M. V. Gustafsson, T. Aref, A. F. Kockum, M. K. Ekström, G. Johansson, P. Delsing, Science 346, 207 (2014). NVcenters P.-B. Li, Z.-L. Xiang, P. Rabl, and F. Nori, Phys. Rev. Lett. 117, 015502 (2016). optomagnons S. Viola Kusminskiy, H. X. Tang, F. Marquardt, Phys. Rev. A 94, 033821 (2016). material A. A. Serga, A. V. Chumak, and B. Hillenbrands, J. Phys. D 43, 264002 (2010). cleland A. Cleland, Foundations of Nanomechanics (Springer-Verlag, Berlin Heidelberg New York, 2003). ekinci K. L. Ekinci and M. L. Roukes, Rev. Sci. Instrum. 76, 061101 (2005). optomech M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Rev. Mod. Phys. 86, 1391 (2014). vdzant M. Poot and H. S. J.
van der Zant, Phys. Rep. 511, 273 (2012). rev R. I. Shekhter, L. Y. Gorelik, I. V. Krive, M. N. Kiselev, A. V. Parafilo, and M. Jonson, Nanoelectromechanical Sys. 1, 1 (2013); R. I. Shekhter, L. Y. Gorelik, I. V. Krive, M. N. Kiselev, S. I. Kulinich, A. V. Parafilo, K. Kikoin, and M. Jonson, Low Temp. Phys. 40, 600 (2014). flens K. Flensberg and C. M. Marcus, Phys. Rev. B 81, 195418 (2010). cool P. Stadler, W. Belzig, and G. Rastelli, Phys. Rev. Lett. 113, 047201 (2014). pulkin R. I. Shekhter, A. Pulkin, and M. Jonson, Phys. Rev. B 86, 100404 (2012). par A. V. Parafilo, S. I. Kulinich, L. Y. Gorelik, M. N. Kiselev, R. I. Shekhter, and M. Jonson, Phys. Rev. Lett. 117, 057202 (2016). low S. I. Kulinich, L. Y. Gorelik, A. V. Parafilo, R. I. Shekhter, Y. W. Park, and M. Jonson, Low Temp. Phys. 40, 907 (2014). schevch S. N. Shevchenko, S. Ashhab, F. Nori, Phys. Rep. 492, 1 (2010). eva T. Faust, J. Rieger, M. J. Seitner, J. P. Kotthaus, and E. M. Weig, Nature Physics 9, 485 (2013); M. J. Seitner, H. Ribeiro, J. Kölbl, T. Faust, J. P. Kotthaus, E. M. Weig, Phys. Rev. B 94, 245406 (2016). ch H. Fu, Z.-C. Gong, T.-H. Mao, C.-P. Sun, S. Yi, Y. Li, G.-Y. Cao, Phys. Rev. A 94, 043855 (2016). nori1 Ya. S. Greenberg, E. Il'ichev, and F. Nori, Phys. Rev. B 80, 214423 (2009). okazaki16 Y. Okazaki, I. Mahboob, K. Onomitsu, S. Sasaki, and H. Yamaguchi, Nat. Commun. 7, 11132 (2016). nori4 S. N. Shevchenko, D. G. Rubanov, and F. Nori, Phys. Rev. B 91, 165422 (2015). kiselev1 M. N. Kiselev, K. Kikoin, R. I. Shekhter, and V. M. Vinokur, Phys. Rev. B 74, 233403 (2006). kiselev2 M. N. Kiselev, K. A. Kikoin, L. Y. Gorelik, and R. I. Shekhter, Phys. Rev. Lett. 110, 066804 (2013). glazman L. Kouwenhoven and L. I. Glazman, Physics World 14, 33 (2001). potok R. M. Potok, I. G. Rau, H. Shtrikman, Y. Oreg, D. Goldhaber-Gordon, Nature (London) 446, 167 (2007). craig N. J. Craig, J. M. Taylor, E. A. Lester, C. M. Markus, M. P. Hanson, A. C. Gossard, Science 304, 565 (2004). rew A. M. Chang, J. C. Chen, Rep. Prog. Phys. 72, 096501 (2009). su4 P. Jarillo-Herrero, J. Kong, H. S. van der Zant, C. Dekker, L. P. Kouwenhoven, and S. D. Francheschi, Nature (London) 434, 484 (2005). su42 A. Makarovski, J. Liu, and G. Finkelstein, Phys. Rev. Lett. 99, 066801 (2007). krishna C. Jayaprakash, H. R. Krishna-murthy, J. W. Wilkins, Phys. Rev. Lett. 47, 737 (1981). jones B. A. Jones, C. M. Varma, Phys. Rev. Lett. 58, 843 (1987). RKKY M. A. Ruderman, C. Kittel, Phys. Rev. 96, 99 (1954); T. Kasuya, Prog. Theor. Phys. 16, 45 (1956); K. Yosida, Phys. Rev. 106, 893 (1957). crit B. A. Jones, C. M. Varma, J. W. Wilkins, Phys. Rev. Lett. 61, 125 (1988); B. A. Jones, C. M. Varma, Phys. Rev. B 40, 324 (1989). sasaki S. Sasaki, S. Kang, K. Kitagawa, M. Yamaguchi, S. Miyashita, T. Maruyama, H. Tamura, T. Akazaki, Y. Hirayama, and H. Takayanagi, Phys. Rev. B 73, 161303(R) (2006). twodot P. Simon, Phys. Rev. B 71, 155319 (2005). glva M. G. Vavilov, L. I. Glazman, Phys. Rev. Lett. 94, 086805 (2005). rosa P. Simon, R. Lopez, and Y. Oreg, Phys. Rev. Lett. 94, 086602 (2005). foot There is a natural 1D generalization of the coupling interface mediating the interaction between two impurities <cit.>. ilani A. Hamo, A. Benyamini, I. Shapir, I. Khivrich, J. Waissman, K. Kaasbjerg, Y. Oreg, F. von Oppen, S. Ilani, Nature (London) 535, 395 (2016). aristov D. N. Aristov, Phys. Rev. B 55, 8064 (1997). kaminski A. Kaminski, Y. V. Nazarov, and L. I. Glazman, Phys. Rev. B 62, 8154 (2000). tbp1 A. V. Parafilo and M. N. Kiselev (unpublished). vignale G. F. Giuliani and G.
Vignale, Quantum Theory of the Electron Liquid (Cambridge University Press, New York, 2005). kittel C. Kittel, Quantum Theory of Solids (John Wiley & Sons Inc., New York, London, 1964). footnote1 Assuming that the 2DEG is in a paramagnetic state, we utilize the equivalence of the density-density and spin-spin correlation functions. klein B. Fisher, M. W. Klein, Phys. Rev. B 11, 2025 (1975); I. Ya. Korenblit and E. F. Shender, Sov. Phys. JETP 42, 566 (1975). com1 We assume that the direct Coulomb interaction between charges at QD_1 and QD_2 is screened by the metallic plate. RKKY is the only long-range ∝ 1/R^2 interaction mediating the spin-mechanical coupling. Mechanically assisted Friedel oscillations at low electron density in the plate (back gate) will be considered elsewhere. dykhne A. M. Dykhne, Sov. Phys. JETP 11, 411 (1960). novotny M. Frimmer and L. Novotny, Am. J. Phys. 82, 947 (2014). LZ L. Landau, Phys. Z. Sowjetunion 2, 46 (1932); C. Zener, Proc. R. Soc. (Lond.) A 137, 696 (1932). footnote3 A minimal Mach-Zehnder interferometer consists of two consecutive Landau-Zener crossings. The phase (Stückelberg phase Φ_ St) accumulated inside the interference loop consists of two major contributions, namely, one associated with the evolution along the adiabatic arm of the interferometer and another related to the passage through the diabatic arm. As a result, the double-passage Landau-Zener probability is given by the equation: P=4P_ LZ(1-P_ LZ)sin^2Φ_ St. Constructive interference occurs when the phase Φ_ St=π (2n+1)/2, while destructive interference corresponds to Φ_ St=π n, where n is an integer. footnote4 The proposed method for detecting the DQD quantum spin state can be generalized to the case |J_0|∼T_K^i, when the competition between the Kondo effect and spin-spin correlations has to be taken into account. It brings the system into the proximity of a quantum critical regime triggered by two phases: (i) Kondo screened and (ii) singlet/triplet locked. One can expect that the Kondo effect manifests itself in the screening of the spin impurities by conduction electrons and in the associated suppression of the mechanical back-action induced by the RKKY interaction. As a consequence, a decrease of the fan-diagram's intensity indicates the level of Kondo screening when approaching the quantum critical region. petta J. R. Petta, J. M. Taylor, A. C. Johnson, A. Yacoby, M. D. Lukin, C. M. Marcus, M. P. Hanson, and A. C. Gossard, Phys. Rev. Lett. 100, 067601 (2008). drive K. W. Chan, M. Möttönen, A. Kemppinen, N. S. Lai, K. Y. Tan, W. H. Lim, and A. S. Dzurak, Appl. Phys. Lett. 98, 212103 (2011). | http://arxiv.org/abs/1706.08389v4 | {
"authors": [
"A. V. Parafilo",
"M. N. Kiselev"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170626141305",
"title": "Tunable RKKY interaction in a double quantum dot nanoelectromechanical device"
} |
| http://arxiv.org/abs/1706.08642v3 | {
"authors": [
"Yu Cai",
"Saugata Ghose",
"Erich F. Haratsch",
"Yixin Luo",
"Onur Mutlu"
],
"categories": [
"cs.AR"
],
"primary_category": "cs.AR",
"published": "20170627015406",
"title": "Error Characterization, Mitigation, and Recovery in Flash Memory Based Solid-State Drives"
} |
| http://arxiv.org/abs/1706.08999v4 | {
"authors": [
"Michael Efroimsky"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170627182809",
"title": "Dissipation in a tidally perturbed body librating in longitude"
} |
Institute of Complex Systems 2, Forschungszentrum Jülich, Jülich, Germany B.Sabass@fz-juelich.de
In a description of physical systems with Langevin equations, interacting degrees of freedom are usually coupled through symmetric parameter matrices. This coupling symmetry is a consequence of the time-reversal symmetry of the involved conservative forces. If coupling parameters fluctuate randomly, the resulting noise is called multiplicative. For example, mechanical oscillators can be coupled through a fluctuating, symmetric matrix of spring “constants”. Such systems exhibit well-studied instabilities. In this note, we study the complementary case of antisymmetric, time-reversal symmetry breaking coupling that can be realized with Lorentz forces or various gyrators. We consider the case that these antisymmetric couplings fluctuate. This type of multiplicative noise does not lead to instabilities in the stationary state but renormalizes the effective non-equilibrium friction. Fluctuating Lorentz-force-like couplings also allow one to control and rectify heat transfer. A noteworthy property of this mechanism of producing an asymmetric heat flux is that the controlling couplings do not exchange energy with the system.
Fluctuating, Lorentz-force-like coupling of Langevin equations and heat flux rectification
B. Sabass
December 30, 2023
==========================================================================================
§ INTRODUCTION
Continuous stochastic processes can be modeled through differential equations with added noise processes. If a noise process appears in a product with a function of the system variables, the noise is referred to as multiplicative. The study of multiplicative noise has a long history since it can cause rather dramatic phenomena <cit.>. For example, even arbitrarily weak stochastic fluctuations of the eigenfrequency in harmonic oscillator models lead to instabilities in higher moments of the system variables <cit.>. Similarly, fluctuating friction parameters can prohibit stable stationary solutions <cit.>. Such “energetic instabilities” <cit.> occur since forces resulting from fluctuating potentials or friction parameters pump energy in and out of the system. So far, multiplicative noise processes have been studied either for one-dimensional systems or for forces that couple different degrees of freedom symmetrically. In this note, we consider couplings that are antisymmetric under time reversal and thus lead to antisymmetric coupling matrices. Stochastic changes of these “Lorentz-force-like” couplings produce multiplicative noise. Lorentz forces cannot perform work or change the internal energy since they always act normal to the velocities. Therefore, fluctuating Lorentz-force-like couplings yield a special type of multiplicative noise that is energetically neutral. Below, we derive generic differential equations governing the first and second moments of linear systems with fluctuating Lorentz-force-like couplings. It is shown that this type of multiplicative noise does not lead to instabilities but increases the effective friction that damps the first moment when external forces are applied. Fluctuations in Lorentz-force-like couplings do not affect equilibrium correlations between different degrees of freedom but modify the non-equilibrium correlations. Next, the energetics of our systems are studied within the framework of stochastic thermodynamics <cit.>. On the level of Langevin equations, the first law of thermodynamics naturally leads to a definition of heat.
Assuming that different degrees of freedom are exposed to separate thermal environments with different temperatures, we can calculate heat transfer through the system. This heat transfer can be controlled by fluctuating Lorentz-force-like couplings because they modify the non-equilibrium correlations. As an example, we analyze heat transfer in a two-component system. Both components are in contact with their own heat bath, which fixes the additive noise strengths to different values. Random motion of one component is transmitted via Lorentz-force-like coupling to the other; thus, heat is transmitted. Finally, the model is augmented by the assumption that the fluctuation strength of the multiplicative noise in the coupling is also determined by one of the baths. For this system, heat transfer is no longer symmetric under reversal of the temperature difference and the system acts as a rectifier for heat. This mechanism of rectifying heat transfer is notable for the fact that the Lorentz-force-like couplings producing the heat transfer asymmetry do not exchange any energy with the system.
§ FLUCTUATING, ANTISYMMETRIC COUPLING OF LANGEVIN EQUATIONS
§.§ The Langevin equations
In the following, all quantities are assumed to be non-dimensional and the Boltzmann constant k_ B is set to unity. We study a system of coupled, time-dependent, real variables x_j(t) that could, e.g., represent the positions of microscopic particles or the charge of electric oscillators. In such systems, the time derivatives ẋ_j, i.e., the velocities or currents, can be coupled through Lorentz forces or Coriolis forces that break time-reversal symmetry. A general form of the Langevin equations governing the x_j is
ẍ_j = -∑_l[κ_jl x_l + (b_jl+ζ̃_jl) ẋ_l + γ_jl ẋ_l]+ξ_j + f_j.
The symmetric matrix κ=κ^T represents, e.g., spring constants in a mechanical system or capacitance in an electric network. κ is to be positive definite for stability <cit.>. We thereby also exclude the marginally stable case where one eigenvalue of κ is zero. The antisymmetric matrix 𝐛 = -𝐛^T represents Lorentz-force-like couplings, which are, e.g., realizable through a magnetic field. Fluctuations in the antisymmetric couplings are modeled by the noise matrix ζ̃ = -ζ̃^T. The multiplicative noise ∼ζ̃_jl ẋ_l is interpreted in the Stratonovich sense. Finally, we also have a positive definite, symmetric “friction matrix” γ=γ^T. The two last quantities on the right side of Eq. (<ref>) are the thermal noise ξ_j and a time-dependent force f_j. The statistical average is written as ⟨…⟩. Both types of fluctuations have zero average, ⟨ξ_j(t)⟩=0 and ⟨ζ̃_jl(t)⟩=0. Different types of fluctuations are to be independent, thus ⟨ζ̃_jl(t) ξ_k(t')⟩=0. For many physical systems, the fluctuation autocorrelations decay exponentially. Such Ornstein-Uhlenbeck-type correlations with inverse relaxation times λ and λ̃ read for t ≥ 0
⟨ξ_j(0)ξ_j'(t)⟩ = λ/2 e^-λ t K_j,j',
⟨ζ̃_jl(0)ζ̃_j'l'(t)⟩ = λ̃/2 e^-λ̃ t B_jl(δ_j,j'δ_l,l'-δ_j,l'δ_l,j').
The symmetric, positive matrix 𝐊 in Eq. (<ref>) determines the strength of the additive noise. Analogously, 𝐁 in Eq. (<ref>) determines the strength of the multiplicative noise. This matrix is symmetric, 𝐁=𝐁^T, has only positive entries B_ij≥ 0, and zeros on the diagonal, B_ii =0. For simplicity, we will focus in the following on the white noise limit of Eqns. (<ref>,<ref>) where λ→∞, λ̃→∞. In this limit we have lim_λ→∞(λ e^-λ t/2) = lim_λ̃→∞(λ̃ e^-λ̃ t/2) = δ(t). For Gaussian noise, cumulants of order higher than two vanish and we can express noise correlations through products of pairwise correlations.
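The moment equations derived below can also be checked against a direct simulation of the Langevin equations above. The following minimal Python sketch (all parameter values are illustrative assumptions) integrates the model for two degrees of freedom in the white noise limit using a Heun predictor-corrector step, which is consistent with the Stratonovich interpretation adopted here; with equal bath strengths it reproduces the equilibrium result ⟨ẋ_j^2⟩ = T_ eq obtained in the heat-exchange section, independently of the multiplicative noise strength:

import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative parameters for two degrees of freedom (assumed values)
kappa = np.array([[2.0, -1.0], [-1.0, 2.0]])  # symmetric, positive definite
gamma = np.diag([0.5, 0.5])                   # friction matrix
b = np.array([[0.0, 0.3], [-0.3, 0.0]])       # mean antisymmetric coupling
K = np.diag([1.0, 1.0])                       # additive noise strengths K_jj
B12 = 0.4                                     # multiplicative noise strength B_12
dt, n_steps = 1.0e-3, 200000

def accel(x, v, zeta, xi):
    # Right-hand side of the Langevin equation with f_j = 0
    Z = np.array([[0.0, zeta], [-zeta, 0.0]])  # antisymmetric noise matrix
    return -kappa @ x - (b + Z + gamma) @ v + xi

x, v = np.zeros(2), np.array([1.0, 0.0])
v2_sum, n_samples = np.zeros(2), 0
for step in range(n_steps):
    # White-noise limit: discretized noises have variance (strength)/dt
    xi = rng.normal(0.0, np.sqrt(np.diag(K) / dt))
    zeta = rng.normal(0.0, np.sqrt(B12 / dt))
    # Heun step (predictor-corrector), consistent with Stratonovich calculus
    a1 = accel(x, v, zeta, xi)
    x_p, v_p = x + dt * v, v + dt * a1
    a2 = accel(x_p, v_p, zeta, xi)
    x, v = x + 0.5 * dt * (v + v_p), v + 0.5 * dt * (a1 + a2)
    if step > n_steps // 2:          # discard the transient before sampling
        v2_sum += v * v
        n_samples += 1

# With K_jj = 2 gamma_j T and equal bath temperatures, <xdot_j^2> -> T_eq = 1
print("stationary <xdot_j^2>:", v2_sum / n_samples)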
§.§ General solution for the first moment
Taking the average ⟨…⟩ of Eq. (<ref>) yields
⟨ẍ_j⟩ = - ∑_l [κ_jl⟨ x_l⟩ + (b_jl+γ_jl)⟨ẋ_l⟩] - ∑_l ⟨ζ̃_jl ẋ_l⟩ + f_j,
which leaves us with the problem of calculating the expectation value of correlations of the noise with the system variables of the form ⟨ζ̃_jl ẋ_l⟩. For the case of exponentially decaying, Gaussian noise-noise correlations, solutions exist in the form of a systematic expansion for short correlation times <cit.>. Here, we consider the white noise limit λ̃→∞ and, using the noise-splitting formulas detailed in Ref. <cit.>, we obtain ⟨ζ̃_jl ẋ_l⟩ = B_jl⟨ẋ_j⟩/2 (see Appendix). Thus, the first moments obey
⟨ẍ_j⟩ = - ∑_l [κ_jl⟨ x_l⟩ + (b_jl+γ_jl)⟨ẋ_l⟩] - ∑_m B_jm/2 ⟨ẋ_j⟩ + f_j.
Here, the term -∑_m B_jm⟨ẋ_j⟩/2 increases the “friction” on the average trajectories because 𝐁 is positive <cit.>. The renormalization of the friction constants can possibly be interpreted as a geometric effect since a Lorentz force produces “curved trajectories”. Positivity of the effective friction in Eq. (<ref>) is a consequence of the antisymmetry of the Lorentz-force-like couplings ζ̃=-ζ̃^T, which appears in the Kronecker-delta expression in Eq. (<ref>) as antisymmetry under the index exchange j ↔ l. Note that fluctuations in the friction parameters γ produce the opposite effect, namely a reduced effective friction <cit.>, which can lead to unstable stationary solutions when the effective friction becomes negative.
§.§ General solution for the second moment
The equations governing the second moments result from multiplying Eq. (<ref>) with derivatives of x_k and subsequent averaging. A lengthy calculation yields
d/dt ⟨ x_m x_k⟩ = ⟨ẋ_k x_m⟩ + ⟨ x_k ẋ_m⟩,
d/dt ⟨ x_m ẋ_k⟩ = ⟨ẋ_m ẋ_k⟩ - ∑_j B_kj/2 ⟨ x_m ẋ_k⟩ - ∑_j(κ_kj⟨ x_m x_j⟩ + [b_kj + γ_kj]⟨ x_m ẋ_j⟩) + ⟨ x_m⟩ f_k,
d/dt ⟨ẋ_m ẋ_k⟩ = K_mk + ∑_j δ_mk B_kj⟨ẋ_j^2⟩ - ∑_j (B_kj/2 + B_mj/2)⟨ẋ_m ẋ_k⟩ - B_km⟨ẋ_m ẋ_k⟩ - ∑_j(κ_kj⟨ẋ_m x_j⟩ + [b_kj+γ_kj]⟨ẋ_m ẋ_j⟩) + ⟨ẋ_m⟩ f_k - ∑_i(κ_mi⟨ẋ_k x_i⟩ + [b_mi+γ_mi]⟨ẋ_k ẋ_i⟩) + ⟨ẋ_k⟩ f_m.
The equations (<ref>,<ref>-<ref>) for the first and second moments form a closed system that can readily be solved. The following provides an example involving the calculation of heat exchange.
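Before turning to heat exchange, a compact consistency check of the friction renormalization in the averaged dynamics is possible: casting the first-moment equations above in first-order form, the decay rates of ⟨x⟩ and ⟨ẋ⟩ can be read off from matrix eigenvalues. The following short sketch does this for two degrees of freedom with assumed, purely illustrative parameters.

import numpy as np

kappa = np.array([[2.0, -1.0], [-1.0, 2.0]])  # symmetric, positive definite
b = np.array([[0.0, 0.3], [-0.3, 0.0]])       # mean antisymmetric coupling
gamma = np.diag([0.5, 0.5])                   # bare friction

for B12 in (0.0, 0.5, 1.0):
    B = np.array([[0.0, B12], [B12, 0.0]])
    # The first-moment equation replaces gamma_j by gamma_j + sum_m B_jm / 2
    g_eff = gamma + np.diag(B.sum(axis=1)) / 2
    # First-order form d/dt (<x>, <xdot>) = M (<x>, <xdot>) for f_j = 0
    M = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-kappa, -(b + g_eff)]])
    ev = np.linalg.eigvals(M)
    print(f"B12 = {B12}: max Re(eigenvalue) = {ev.real.max():.4f}")

# All real parts stay negative: the multiplicative noise only adds damping to
# the averaged dynamics and cannot destabilize the first moments.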
§ HEAT EXCHANGE
From now on, the friction matrix in Eq. (<ref>) is assumed to be diagonal, γ_jl=δ_jlγ_j. Furthermore, we assume that the noise processes ξ_j in the Langevin equations result from thermal equilibrium fluctuations of large baths that surround the individual degrees of freedom x_j. The coupling between the x_j and the baths should not depend on the system state and the bath fluctuations are independent of the system. Each x_j is connected to its own bath with temperature T_j. Therefore, the correlations of the noise variables obey <cit.>
K_jl=δ_jl 2γ_j T_j.
§.§ Definition of heat
The Langevin dynamics can be endowed straightforwardly with a thermodynamical interpretation as follows. We multiply Eq. (<ref>) by ẋ_j and subsequently sum over j. The antisymmetric coupling matrices do not appear in this balance equation since ∑_jl ẋ_j(b_jl+ζ̃_jl) ẋ_l=0. Therefore, these coupling forces do not affect the energetics. After averaging, we obtain the following energy balance
∑_j l d/dt[δ_jl⟨ẋ^2_j/2⟩ +⟨ x_j κ_jl x_l/2⟩] =∑_j ⟨ f_j ẋ_j ⟩ -(γ_j⟨ẋ^2_j ⟩ - K_jj/2),
where the expression on the left hand side is the change of internal energy. The first term on the right hand side is the work done by the forces 𝐟. The second term on the right hand side is the average rate of energy exchange with the temperature baths. Accordingly, the heat exchange of each element j with its thermal environment is defined as <cit.>
Q̇_j≡γ_j⟨ẋ_j^2⟩ - 1/2 K_jj.
Note that this definition leads to heat fluxes that are linear combinations of the temperatures, Q̇_j=α_1T_1 + α_2 T_2 + …, where the coefficients satisfy ∑_kα_k =0. The latter constraint reduces the number of variables by one, such that Q̇_j can always be written as a function of the temperature differences. To describe an equilibrium situation we set f_j=0 and require that all fluctuations are determined by a single temperature T_ eq, such that the noise correlations for all j, l are given by K^ eq_jl = δ_jl 2γ_j T_ eq. For long times, the stationary correlation functions result from Eqns. (<ref>, <ref>) as ⟨ẋ_k x_m⟩_eq = ⟨ x_k ẋ_m⟩_eq = 0 and ⟨ẋ_m ẋ_k⟩_eq = δ_mk T_ eq. Thus, the multiplicative noise strength 𝐁 becomes irrelevant in equilibrium. Although fluctuations in the antisymmetric coupling matrices do not change the internal energy or produce work, they do affect the transfer of energy between different degrees of freedom in non-equilibrium.
§.§ A toy model for heat flux control
We next consider an example of how the multiplicative noise 𝐁 can allow one to control heat transfer. The general Langevin equation (<ref>) is specialized to the case of two elements. Furthermore, the system is simplified by assuming a stationary state with f_j=0 and by assuming that the magnetic coupling is on average zero (𝐛=0). The governing equations are
[ ẍ_1; ẍ_2 ] = 2[ -2κ κ; κ -2κ ][ x_1; x_2 ] - [ γ ζ̃_12; -ζ̃_12 γ ][ ẋ_1; ẋ_2 ] + [ ξ_1; ξ_2 ].
The two oscillators are to be connected with different heat baths at temperatures T_1 and T_2. Thus, the strength of the additive noise ξ_{1,2} is determined by
K_11 = 2γ T_1, K_22 = 2γ T_2, K_12 = K_21 = 0.
The stationary heat exchange can now be calculated straightforwardly from Eqns. (<ref>,<ref>,<ref>). The result is
Q̇_1 = γ (2κ + B_12 (2γ + B_12)) (T_2 - T_1)/[4κ + 2 (γ + B_12) (2γ + B_12)],
Q̇_2 =-Q̇_1.
Clearly, the heat flux is a non-linear function of the multiplicative noise strength B_12. However, the multiplicative noise cannot change the direction of heat transfer in Eq. (<ref>) since, by definition, B_12≥0. Thus, spontaneous currents from the colder to the hotter heat bath cannot occur and thermodynamic consistency is retained. This result is a consequence of using antisymmetric couplings as the fluctuating quantity, since no energy is injected or removed during fluctuations. The non-monotonic dependence of Q̇ on the coupling constants in Eq. (<ref>) allows one to control heat transfer through the strength of the multiplicative noise. The two extreme limits of vanishing and very strong multiplicative noise yield a heat transfer of Q̇_1|_B_12=0 = γκ (T_2 - T_1)/(2γ^2 + 2κ) and Q̇_1|_B_12→∞≈γ (T_2 - T_1)/2. In between these limits, a minimum occurs at the fluctuation strength B_12=√(2κ) - 2γ≥ 0 with a heat transfer of
Q̇_1^min = γ (√(2κ)/γ - 1)/(2√(2κ)/γ - 1) (T_2 - T_1) ≥ γ/3 (T_2 - T_1).
For large friction constants, when γ≥√(κ/2), the minimum heat transfer occurs at B_12=0 and Q̇_1 increases monotonically with the multiplicative noise intensity.
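This behavior is easy to verify numerically. The short sketch below (parameter values are illustrative assumptions) evaluates the heat flux expression just derived as a function of B_12 and checks the position of the minimum against the analytical result √(2κ)-2γ:

import numpy as np

# Illustrative parameters (units with k_B = 1); gamma < sqrt(kappa/2), so the
# minimum of the heat flux lies at a finite multiplicative noise strength.
gamma, kappa, T1, T2 = 0.2, 2.0, 1.0, 2.0

def Q1(B12):
    # Stationary heat flux into element 1 (expression derived above)
    num = gamma * (2 * kappa + B12 * (2 * gamma + B12)) * (T2 - T1)
    den = 4 * kappa + 2 * (gamma + B12) * (2 * gamma + B12)
    return num / den

B = np.linspace(0.0, 10.0, 100001)
Q = Q1(B)
print("Q1 at B12 = 0          :", Q1(0.0))
print("large-B limit g*dT/2   :", gamma * (T2 - T1) / 2)
print("numerical minimum at   :", B[np.argmin(Q)])
print("predicted sqrt(2k)-2g  :", np.sqrt(2 * kappa) - 2 * gamma)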
(<ref>) yields a nonlinear dependence of Q̇_{1,2} on T_1. Moreover, the magnitude of heat exchange depends asymmetrically on the direction of heat transfer T_1 ↔ T_2. Fig. <ref>a) shows plots of Eq. (<ref>) for a symmetric temperature difference T_{1,2} = 1 ± Δ. As demonstrated in the figure, heat transfer becomes a quadratic function Q̇_1 ≈ -(Δ + Δ^2)ν when γ ≫ ν and also γ ≫ κ. Then, the magnitude of heat transfer in the direction T_1 → T_2 (Δ > 0) is stronger than in the reverse direction. To quantify the asymmetry of heat transfer we consider the quantity Q̇_1(Δ)/Q̇_1(-Δ) in Fig. <ref>b). The asymmetry becomes large when Δ ≃ 1, i.e., when the temperature difference is comparable to the mean temperature (T_1+T_2)/2. The plot also demonstrates that increasing the coupling constant κ generally leads to a less pronounced asymmetry in the heat transfer.

§ CONCLUDING REMARKS

Lorentz-force-like couplings can be physically realized in different ways. Devices that couple fluxes in a non-reciprocal way are known in electrical engineering as gyrators <cit.>, and early designs were based on a rectangular Hall element with separate ports at all four sides <cit.>. Recent developments include gyrators based on magnetoelectric materials <cit.> and Hall-effect gyrators with significantly reduced electrical resistance <cit.>. It is also possible to build non-reciprocal microwave waveguides by exploiting the Faraday effect <cit.>. While these devices rely on the time-reversal symmetry-breaking properties of magnetic fields, one can in principle imagine replacing the Lorentz forces by Coriolis forces in a rotating frame of reference. Fluctuations in the resulting antisymmetric couplings could then be caused, e.g., by noise in the magnetic field or in the angular velocity. While a detailed study of the resulting phenomena would require solving Maxwell's equations or mechanical force-balance equations, we focus in this note on generic second-order stochastic differential equations with fluctuating, antisymmetric coupling of the velocity variables. For these systems, we derive general equations governing the first and second moments under the assumption of Gaussian white noise. It is demonstrated that the new type of multiplicative noise only affects the out-of-equilibrium correlations and does not lead to energetic instabilities. As an application of our formulas we discuss heat transport through a system with fluctuating Lorentz-force-like couplings.

Any heat transport process must satisfy the second law of thermodynamics, i.e., heat does not flow spontaneously from a cooler reservoir to a hotter reservoir <cit.>. Fourier's law of heat conduction is linear in temperature differences and therefore satisfies this requirement naturally. However, non-linear and asymmetric heat conduction laws are also possible. Using our framework for multiplicative noise in Lorentz-force-like couplings, we study heat transfer between two reservoirs. Noise processes in the Langevin equations can be interpreted as thermal equilibrium fluctuations in a temperature bath. This assumption leads to a natural microscopic identification of heat and work in the stochastic system, whereby the nonequilibrium heat exchanged between the bath and the system is related to the velocity autocorrelations. Consequently, heat flow can be controlled through fluctuations of the Lorentz-force-like coupling. This way of controlling heat flow automatically conserves the energy balance due to the energetic neutrality of antisymmetric coupling matrices.
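Before turning to the next point, the stationary result above lends itself to a direct numerical sanity check. The following is a minimal sketch (ours, not from the paper; all parameter values are invented for illustration) that integrates the two-oscillator toy model with the fluctuating antisymmetric coupling realized as an Ornstein-Uhlenbeck process of short correlation time τ — approximating the white-noise limit used above — and compares the measured stationary flux Q̇_1 = γ⟨ẋ_1²⟩ − K_11/2 against the quoted formula:

```python
import numpy as np

# Hypothetical parameter values, chosen only for illustration.
kappa, gamma = 1.0, 0.2
T1, T2 = 1.5, 0.5
B12 = 0.8                 # strength of the fluctuating antisymmetric coupling
tau = 1e-2                # correlation time of zeta; small tau ~ white noise
dt, nburn, nsteps = 1e-3, 10**5, 2 * 10**6

rng = np.random.default_rng(1)
x = np.zeros(2)           # positions x_1, x_2
v = np.zeros(2)           # velocities
zeta = 0.0                # fluctuating antisymmetric coupling zeta(t)
K = np.array([2 * gamma * T1, 2 * gamma * T2])   # additive noise strengths
v1sq, nsamp = 0.0, 0

for step in range(nburn + nsteps):
    # Ornstein-Uhlenbeck coupling: <zeta(t)zeta(t')> = (B12/2tau) e^{-|t-t'|/tau},
    # whose time integral equals B12, i.e. white noise of strength B12 as tau -> 0.
    zeta += -(zeta / tau) * dt + (np.sqrt(B12) / tau) * np.sqrt(dt) * rng.standard_normal()
    # Accelerations: harmonic couplings, friction, and the antisymmetric zeta term.
    a = np.array([-2 * kappa * x[0] + kappa * x[1] - gamma * v[0] - zeta * v[1],
                  kappa * x[0] - 2 * kappa * x[1] + zeta * v[0] - gamma * v[1]])
    v += a * dt + np.sqrt(K * dt) * rng.standard_normal(2)
    x += v * dt
    if step >= nburn:
        v1sq += v[0] ** 2
        nsamp += 1

Q1_sim = gamma * v1sq / nsamp - K[0] / 2
Q1_th = (gamma * (2 * kappa + B12 * (2 * gamma + B12)) * (T2 - T1)
         / (4 * kappa + 2 * (gamma + B12) * (2 * gamma + B12)))
print(f"Q1 simulated: {Q1_sim:+.4f}   Q1 formula: {Q1_th:+.4f}")
```

With τ much smaller than all deterministic timescales, the two numbers should agree up to sampling and discretization error; the residual τ-dependence is the short-correlation-time correction mentioned in the first-moment discussion.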
Such systems can therefore be studied consistently without explicitly modeling the origin of the multiplicative noise by additional equations, which presents an advantage for theoretical work.

Concepts for the rectification of heat flow have received considerable scientific attention during recent years. In particular, studies of low-dimensional nanoscale systems have yielded various principles that allow control of the heat flux and produce asymmetry under exchange of the heat-flow direction <cit.>. Studying different instances of heat-flux rectification is not only important for an understanding of general principles, but may also have immediate applications, for example in nanotechnology.

This article was written during a postdoctoral stay at Princeton University. The author cordially thanks H.A. Stone and Z. Gitai for support and acknowledges a postdoctoral fellowship from the German Academic Exchange Service (DAAD).

§ APPENDIX

In the following, we demonstrate the use of standard procedures to calculate the noise correlators employed above. For a system with N space coordinates we define 𝐮 ≡ {x_1 … x_N, ẋ_1 … ẋ_N} to write Eq. (<ref>) as a system of first-order differential equations

du_j/dt = ∑_l [A_jl u_l + Z_jl u_l] + Ξ_j + F_j.

Here, the constant parameters κ, 𝐛, and γ in Eq. (<ref>) are absorbed in A_jl. The additive noise is Ξ_j = 0 for j ≤ N and Ξ_j = ξ_{j-N} for j > N. External forces acting on the system are contained in F_j. The matrix Z_jl contains the multiplicative noise and its entries are

Z_jl = -ζ̃_{(j-N),(l-N)} for j > N and l > N,  Z_jl = 0 otherwise.

Next, we consider an arbitrary function h(t) that depends on the zero-mean, stationary Gaussian noises Z_jl. We seek to calculate equal-time correlations of the form ⟨Z_..(t) h(t)⟩. Following Ref. <cit.>, we expand h(t) in time-ordered products of the noise variables through a functional Taylor series. As a slight generalization of results given in Ref. <cit.> we find

d/dt ⟨Z_jl h⟩ = -λ̃⟨Z_jl h⟩ + ⟨Z_jl dh/dt⟩,

d/dt ⟨Z_mn Z_jl h⟩ = -2λ̃⟨Z_mn Z_jl h⟩ + ⟨Z_mn Z_jl dh/dt⟩ + 2λ̃⟨Z_mn Z_jl⟩⟨h⟩,

d/dt ⟨Z_rs Z_mn Z_jl h⟩ = -3λ̃⟨Z_rs Z_mn Z_jl h⟩ + ⟨Z_rs Z_mn Z_jl dh/dt⟩ + 2λ̃[⟨Z_rs Z_mn⟩⟨Z_jl h⟩ + ⟨Z_rs Z_jl⟩⟨Z_nm h⟩ + ⟨Z_mn Z_jl⟩⟨Z_rs h⟩].

These formulas hold for both of our noise sources, with exchanged variables Z_kl → Ξ_j, λ̃ → λ. We can now set h = u_j in Eqns. (<ref>-<ref>) and use the Langevin equation (<ref>) to obtain a hierarchy of equations where every correlation is connected to correlations of the next higher order in Z_... Integration of Eqns. (<ref>,<ref>) yields

⟨Z_jl u_k⟩ = ∫_0^t e^{-λ̃(t-t')} ⟨Z_jl du_k/dt⟩|_{t'} dt',

⟨Z_jl Z_mn u_k⟩ = ∫_0^{t'} e^{-2λ̃(t'-t″)} [2⟨Z_jl Z_mn⟩⟨u_k⟩ + ⟨Z_jl Z_mn du_k/dt⟩]_{t″} dt″,

where we assumed that the contribution of initial values of the correlations vanishes. Together, these equations yield

⟨Z_jl u_k⟩ = ∑_m ∫_0^t e^{-λ̃(t-t')} [⟨Z_jl (A_km u_m + Ξ_k + F_k)⟩|_{t'} + ∫_0^{t'} e^{-2λ̃(t'-t″)} (2⟨Z_jl Z_km⟩⟨u_m⟩ + ⟨Z_jl Z_km du_m/dt⟩)_{t″} dt″] dt'.

We next consider the limit of very short noise correlations λ̃ → ∞, where of course t > 0. Assuming that the sought-for correlations are finite, the terms in the first line on the right side of Eq. (<ref>) yield a contribution of vanishing measure, since the factor e^{-λ̃(t-t')} is zero for (t-t') > 0 and only finite at the point (t-t') = 0. For the first summand on the second line of Eq. (<ref>), we can employ the correlation relations (<ref>), yielding a contribution that can be at most ∼λ̃^2. This term survives the limit λ̃ → ∞, since then λ̃ e^{-λ̃(t-t')} → 2δ(t-t'). To evaluate the last term in Eq.
(<ref>), we could again replace du_j/dt by Eq. (<ref>) and use the integral of Eq. (<ref>). However, the last three summands on the right-hand side of Eq. (<ref>) are at most ∼λ̃^2 and are therefore suppressed by the exponential integral factors in the limit λ̃ → ∞. The second term on the right side of Eq. (<ref>) can only yield a non-zero, finite contribution in the case that ⟨Z_..Z_..Z_..Z_.. u_i⟩ ∼ λ̃^3. However, this case is rejected on physical grounds, since for Gaussian noise ⟨Z_..^4⟩ is at most ∼λ̃^2 and u_i varies on a much longer timescale than the noise variable. Thus, the only non-vanishing contribution to the integral in Eq. (<ref>) comes from the first summand in the second line. The result is

⟨Z_jl u_k⟩ = ∑_m ∫_0^t e^{-λ̃(t-t')} ∫_0^{t'} e^{-2λ̃(t'-t″)} 2⟨Z_jl Z_km⟩⟨u_m⟩|_{t″} dt″ dt'.

Next, we revert back to our original variables. Since the matrix elements {Z_..} contain the multiplicative noise components {-ζ̃_..}, we employ Eq. (<ref>) and take the limit of large λ̃ to obtain

⟨ζ̃_jl x_k⟩ = 0,  ⟨ζ̃_jl ẋ_k⟩ = (B_jl/2)(δ_lk - δ_jk)⟨ẋ_k⟩.

These are the correlations that were employed in the derivation of Eq. (<ref>). Through an analogous calculation for the additive noise ξ, obtained by simply exchanging the noise variables in the above derivation, we find

⟨ξ_j x_k⟩ = 0,  ⟨ξ_j ẋ_k⟩ = K_jk/2.

For the calculation of the second moments of the system variables we need the correlation between the noise and two system variables, ⟨Z_jl u_k u_i⟩. This expression can be evaluated by setting h = u_k u_i in Eqns. (<ref>,<ref>) and then following through with the same procedure as above. The final result, in original variables after evaluation of the Kronecker-delta expressions of Eq. (<ref>), reads

⟨ζ̃_jl x_k x_i⟩ = 0,

⟨ζ̃_jl ẋ_k x_i⟩ = -(B_jl/2)[δ_jk⟨ẋ_l x_i⟩ - δ_lk⟨ẋ_j x_i⟩],

⟨ζ̃_jl ẋ_k ẋ_i⟩ = -(B_jl/2)[δ_ji⟨ẋ_k ẋ_l⟩ - δ_li⟨ẋ_k ẋ_j⟩] - (B_jl/2)[δ_jk⟨ẋ_l ẋ_i⟩ - δ_lk⟨ẋ_j ẋ_i⟩].

§ REFERENCES

[1] N. G. Van Kampen, Phys. Rep. 24, 171 (1976).
[2] R. Bourret, Physica 54, 623 (1971).
[3] K. Lindenberg and B. J. West, Physica A 128, 25 (1984).
[4] J. Łuczka, P. Talkner, and P. Hänggi, Physica A 278, 18 (2000).
[5] K. Mallick and P. Marcq, Phys. Rev. E 66, 041113 (2002).
[6] M. Gitterman, Physica A 352, 309 (2005).
[7] V. Méndez, W. Horsthemke, P. Mestres, and D. Campos, Phys. Rev. E 84, 041137 (2011).
[8] M. Gitterman and D. A. Kessler, Phys. Rev. E 87, 022137 (2013).
[9] K. Sekimoto, Prog. Theor. Phys. Suppl. 130, 17 (1998).
[10] U. Seifert, Rep. Prog. Phys. 75, 126001 (2012).
[11] S. H. Strogatz, Nonlinear Dynamics and Chaos (Perseus Publishing, 1994).
[12] N. G. Van Kampen, Physica 74, 215 (1974).
[13] V. E. Shapiro and V. M. Loginov, Physica A 91, 563 (1978).
[14] B. Sabass, Europhys. Lett. 110, 20002 (2015).
[15] N. Hashitsume, M. Toda, R. Kubo, and N. Saitō, Statistical Physics II (Springer, 1992).
[16] T. Harada and S. Sasa, Phys. Rev. E 73, 026131 (2006).
[17] B. D. H. Tellegen, Philips Res. Rep. 3, 81 (1948).
[18] R. F. Wick, J. Appl. Phys. 25, 741 (1954).
[19] J. Zhai, J. Li, S. Dong, D. Viehland, and M. I. Bichurin, J. Appl. Phys. 100, 124509 (2006).
[20] G. Viola and D. P. DiVincenzo, Phys. Rev. X 4, 021019 (2014).
[21] S. Bosco and D. P. DiVincenzo, Phys. Rev. B 95, 195317 (2017).
[22] C. L. Hogan, Bell Labs Tech. J. 31, 1 (1952).
[23] R. Clausius, Ann. Phys. 169, 481 (1854).
[24] M. Terraneo, M. Peyrard, and G. Casati, Phys. Rev. Lett. 88, 094302 (2002).
[25] B. Li, L. Wang, and G. Casati, Phys. Rev. Lett. 93, 184301 (2004).
[26] B. Hu, L. Yang, and Y. Zhang, Phys. Rev. Lett. 97, 124302 (2006).
[27] A. Dhar, Adv. Phys. 57, 457 (2008).
[28] N. Li, J. Ren, L. Wang, G. Zhang, P. Hänggi, and B. Li, Rev. Mod. Phys. 84, 1045 (2012).
[29] D. G. Cahill, P. V. Braun, G. Chen, D. R. Clarke, S. Fan, K. E. Goodson, P. Keblinski, W. P. King, G. D. Mahan, A. Majumdar, et al., Appl. Phys. Rev. 1, 011305 (2014).
[30] B. Li, L. Wang, and G. Casati, Appl. Phys. Lett. 88, 143501 (2006).
[31] L. Wang and B. Li, Phys. Rev. Lett. 99, 177208 (2007).
[32] W. Chung Lo, L. Wang, and B. Li, J. Phys. Soc. Japan 77 (2008).
[33] D. Segal, Phys. Rev. Lett. 100, 105901 (2008).
[34] D. Segal, Phys. Rev. E 77, 021103 (2008).
[35] K. Joulain, J. Drevillon, Y. Ezzahri, and J. Ordonez-Miranda, Phys. Rev. Lett. 116, 200601 (2016).
| http://arxiv.org/abs/1706.08367v3 | {
"authors": [
"Benedikt Sabass"
],
"categories": [
"cond-mat.stat-mech"
],
"primary_category": "cond-mat.stat-mech",
"published": "20170626133613",
"title": "Fluctuating, Lorentz-force-like coupling of Langevin equations and heat flux rectification"
} |
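As an editorial aside on the entry above: the limiting values and the minimum of the quoted heat-flux expression Q̇_1(B_12) can be verified in a few lines. The snippet below is ours, not from the paper; κ, γ, and the temperature difference are arbitrary illustration values.

```python
import numpy as np

kappa, gamma, dT = 1.0, 0.2, -1.0        # dT = T2 - T1 (illustration values)

def Q1(B):
    # Stationary heat flux quoted in the paper, as a function of B = B_12.
    return (gamma * (2 * kappa + B * (2 * gamma + B)) * dT
            / (4 * kappa + 2 * (gamma + B) * (2 * gamma + B)))

print(Q1(0.0), "vs", gamma * kappa * dT / (2 * gamma**2 + 2 * kappa))  # B = 0 limit
print(Q1(1e8), "vs", gamma * dT / 2)                                   # B -> infinity
B = np.linspace(0.0, 10.0, 10**6 + 1)
i = int(np.argmin(np.abs(Q1(B))))        # |Q1| is smallest at the predicted minimum
print("min |Q1| at B =", B[i], "vs sqrt(2*kappa) - 2*gamma =",
      np.sqrt(2 * kappa) - 2 * gamma)
print("Q1 there =", Q1(B[i]), "; bound (gamma/3)*dT =", gamma * dT / 3)
```

For these values (γ < √(κ/2)) the interior minimum exists, and the script reproduces both limits and the minimum location to grid precision.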
IPMU 17-0093 [[email protected]][[email protected]]ICRR, The University of Tokyo, Kashiwa, 277-8582, Japan Kavli IPMU (WPI), TODIAS, The University of Tokyo, Kashiwa, 277-8583, Japan

We study the dynamics of the Affleck-Dine field after inflation in more detail. After inflation, the Affleck-Dine field inevitably oscillates around the potential minimum. This oscillation is slow to decay and can cause an accidental suppression of the resulting baryon asymmetry. The suppression is most effective for the model with non-renormalizable superpotential W_AD ∼ Φ^4 (Φ: Affleck-Dine field). It is found that Affleck-Dine leptogenesis in high-scale inflation, which suffers from serious gravitino overproduction, becomes workable owing to this effect.

Oscillating Affleck-Dine condensate and its cosmological implications

Masahiro Kawasaki

December 30, 2023

=====================================================================

§ INTRODUCTION

Big Bang nucleosynthesis (BBN) successfully explains the abundances of light elements in our universe if we adopt the baryon density determined by observations of the cosmic microwave background (CMB), which indicates that the baryon asymmetry of the universe is n_B/s ∼ 10^-10 at the beginning of the BBN. On the other hand, before the BBN, an epoch of accelerated expansion of the early universe, called inflation, is considered to have occurred in order to explain the horizon problem, the flatness problem, and the origin of the tiny inhomogeneities of the universe. However, inflation must wash out any pre-existing baryon asymmetry of the early universe. Therefore, we need a mechanism that generates an adequate baryon asymmetry after inflation and before the BBN. There exist various types of mechanisms to generate the baryon number density in the early universe. In particular, Affleck-Dine baryo/leptogenesis <cit.> is a promising candidate in supersymmetric theories (SUSY) because it is realized in the minimal supersymmetric standard model (MSSM). In the MSSM, there are many flat directions with a non-zero B–L charge; such a direction is called the Affleck-Dine (AD) field. During and after inflation, the AD field can have a large vacuum expectation value (VEV) due to a negative Hubble induced mass. As the energy of the universe decreases in the matter-dominated era after inflation, the soft SUSY breaking mass for the AD field overcomes the negative Hubble induced mass and the AD field starts to oscillate coherently around the origin. At the same time, the phase-dependent part of the potential (the A-term) for the AD field becomes effective and "kicks" the AD field in the phase direction. As a result the AD field rotates in the complex field plane. Since the B–L number density is determined by the "angular momentum" in the field plane, a B–L asymmetry of the universe is produced by this mechanism. Finally, the AD field decays into quarks, leptons, and their anti-particles and generates the B–L asymmetry in the thermal plasma <cit.>. The B–L asymmetry is further converted to the baryon asymmetry through the sphaleron effect <cit.>. To estimate the produced baryon asymmetry, we have to follow the dynamics of the AD field in the cosmological background. Conventionally, it is assumed that the AD field adiabatically follows the time-dependent potential minimum. Actually, however, the AD field inevitably oscillates around its vacuum. Its amplitude and period are characterized by the Kähler mixing between the inflation sector and the AD field.
We discuss the dynamics in detail and find that this oscillation causes an accidental suppression of the B–L number density, depending on the model parameters. This effect is most efficient when the dynamics of the AD field Φ is governed by the non-renormalizable superpotential W_AD ∼ Φ^n with n = 4. In this paper, we apply this suppression mechanism to the minimal Affleck-Dine leptogenesis scenario <cit.>, where the LH_u flat direction is used (n = 4). Consequently, we show that we can avoid the gravitino problem in minimal Affleck-Dine leptogenesis after high-scale inflation by an appropriate choice of model parameters. The remaining parts of this paper are as follows. First, in Sec. 2, we briefly review Affleck-Dine baryogenesis and derive the conventional evaluation of the B–L number density. Next, we discuss the oscillation dynamics of the AD field just after inflation and estimate its contribution to the B–L number density numerically. In Sec. 4, we apply this effect to the minimal Affleck-Dine leptogenesis scenario. Finally, we conclude in Sec. 5.

§ AFFLECK-DINE BARYOGENESIS

Let us review the conventional Affleck-Dine baryogenesis scenario. First, we discuss the scalar potential for the AD field. The AD field is exactly flat at the renormalizable level unless SUSY is broken. However, non-renormalizable terms and the SUSY breaking effect lift its potential. The non-renormalizable superpotential for the AD superfield Φ is cast as

W_AD = λΦ^n/(n M_p^{n-3}),

where λ is a coupling constant and n (≥ 4) is a certain integer which is determined by specifying a flat direction. Here we take the Planck mass M_p as the cutoff scale of the non-renormalizable terms. Then, the scalar potential for the AD field, including a soft SUSY breaking term and a Hubble induced mass term, is given by

V_AD(ϕ,ϕ^*) = (m_ϕ^2 - cH^2)|ϕ|^2 + (a_m λ m_3/2 ϕ^n/(n M_p^{n-3}) + h.c.) + λ^2 |ϕ|^{2n-2}/M_p^{2n-6},

where ϕ = φ e^{iθ} is the scalar component of the AD superfield Φ, H is the Hubble parameter, m_3/2 is the gravitino mass, m_ϕ is the soft SUSY breaking mass for the AD field, and a_m, c are 𝒪(1) parameters. In particular, the value of c reflects the Kähler mixing between the AD superfield and the inflation sector. For example, if we assume the inflation sector consists of two superfields I and S, which are the inflaton and the so-called "stabilizer", the quartic Kähler mixing with the AD field is, in general, written as K_mix = (c_1/M_p^2)|Φ|^2|I|^2 + (c_2/M_p^2)|Φ|^2|S|^2. Here c_1, c_2 are 𝒪(1) constants. In this case, the constant c is evaluated in terms of c_1, c_2 as <cit.>

c = c_I ≡ 3(c_2 - 1) (during inflation),
c = c_M ≡ (3/2)(c_1 + c_2 - 1) (after inflation).

Note that the value of c is, in general, different during and after inflation.[If inflation is driven by a single superfield with a Kähler mixing c'|Φ|^2|I|^2, take c_1 = c_2 = c'.] Hereafter we consider the case c > 0. Let us follow the cosmological evolution of the AD field. At first, during inflation where cH > m_ϕ, the AD field ϕ develops a large VEV due to the negative Hubble induced mass ∼ -cH^2, as

φ_0(t)|_{t<t_e} ≃ [(√(c_I/(n-1))/λ) H_I M_p^{n-3}]^{1/(n-2)},

where t_e is the time when inflation ends. After inflation, the energy density of the universe is dominated by the coherent oscillation of the inflaton and the Hubble parameter starts to decrease.
Therefore, the minimum of the AD field is also time-dependent and approaches zero as the universe expands:

φ_0(t)|_{t>t_e} ≃ [(√(c_M/(n-1))/λ) H(t) M_p^{n-3}]^{1/(n-2)}.

For simplicity, we assume that the potential energy of the inflaton is converted into its oscillation energy instantaneously at t = t_e. Once H(t) crosses H_osc ≃ m_ϕ/√(c_M), however, the mass of the AD field becomes positive and ϕ starts to oscillate around the origin. At the same time, the phase direction of the AD field θ, which stays at a certain direction θ_0 due to the Hubble friction, receives a "kick" from the A-term potential, so that ϕ starts to rotate in the complex plane. This dynamics generates the baryon asymmetry of the universe, since the baryon number density is represented as

n_B = -2b Im[ϕ^* ϕ̇] = -2b φ^2 θ̇,

where b is the baryon charge of ϕ.

§.§ Baryon asymmetry

Let us estimate the resultant baryon asymmetry of the universe. Using the E.O.M. for ϕ, the evolution of the baryon number density, eq. (<ref>), is determined by

ṅ_B + 3H n_B = 2b Im[ϕ^* ∂V/∂ϕ^*].

Integrating this differential equation, we get

a(t)^3 n_B(t) = 2b a_m ∫_{t_e}^{t} dt' a(t')^3 λ m_3/2 φ(t')^n sin(nθ) ≃ 2b a_m ∫_{t_e}^{t_osc} dt' a(t')^3 λ m_3/2 φ(t')^n sin(nθ_0),

where we used eq. (<ref>). Since θ starts to oscillate at t_osc, the integrand for t > t_osc also oscillates and gives little contribution. To calculate eq. (<ref>), one usually assumes that ϕ is always at the vacuum, i.e. φ(t) = φ_0(t). Then we obtain the well-known result:

n_B(t_osc) = ϵ m_3/2 φ_0(t_osc)^2,  ϵ = [4b a_m √(c_M) sin(nθ_0)/(3√(n-1))] ((n-4)/(n-2) + 1).

In this paper, we point out that this conventional result can overestimate the baryon asymmetry by a factor of 𝒪(1-100) over most of the parameter region. This is because the deviation of φ(t) from its minimum φ_0(t) cannot be neglected. Representing the deviation as

φ(t) = χ(t) φ_0(t),

we can take the effect into account and obtain the more precise expression

n_B(t_osc) ≃ ϵ m_3/2 φ_0(t_osc)^2 χ^n,  χ^n = (1/t_osc) ∫_{t_e}^{t_osc} χ(t)^n dt.

In the next section, we discuss the actual value of the "efficiency factor" χ^n.

§ ESTIMATION OF EFFICIENCY FACTOR

As we mentioned in the previous section, φ(t) = φ_0(t) does not satisfy the E.O.M. for t > t_e.[For t < t_e, the deviation |χ(t)-1| is exponentially suppressed due to inflation.] In fact, the E.O.M. for χ for t > t_e becomes

χ″(z) + [(n-4)/(n-2)] χ′(z) = [(4/9)c_M + (n-3)/(n-2)^2] χ(z) - (4/9)c_M χ(z)^{2n-3},

where we take z = ln(t/t_e) as the differential variable (a prime denotes d/dz). We can see that there is a fixed point of χ near unity,

χ_0 = [1 + 9(n-3)/(4 c_M (n-2)^2)]^{1/(2(n-2))}.

However, the static solution χ(z) = χ_0 does not correspond to the realistic situation. As we saw in the previous section, until the end of inflation φ sits at the minimum given by eq. (<ref>) due to the Hubble induced mass. From the matching of φ and φ′ at z = z_e (= 0), χ(z) must satisfy the following initial conditions:

χ(z_e) = c_r^{1/(2(n-2))},  χ′(z_e) = χ(z_e)/(n-2),

where we define c_r ≡ c_I/c_M. Therefore, χ(t) has a non-zero "velocity" and inevitably oscillates around χ_0. In particular, let us consider the case of n = 4. For simplicity, we set c_M = 1. As one can see from eq. (<ref>), the friction term for χ is absent for n = 4, so that the oscillation lasts until H(t) ∼ H_osc. We numerically solved the dynamics of χ(z), as shown in Fig. <ref>. Note that this oscillation dynamics has two distinct features. First, the oscillation is periodic only with respect to z = ln t, i.e., χ(z) = χ(z+T). This means that, in terms of the cosmic time t, the period of the oscillation increases exponentially. Therefore, the average χ^n is characterized by the value of χ(t)^n just before t_osc.
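The behaviour just described is easy to reproduce numerically. The following is a minimal sketch (ours, not the authors' code; the values of c_r and the integration range are arbitrary illustrations) that integrates the χ(z) equation for n = 4, c_M = 1 with the stated initial conditions and evaluates the time-averaged efficiency factor:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, cM = 4, 1.0      # quartic case; c_M set to unity as in the text
cr = 2.5            # example c_r < 3; cr -> 3 approaches the unstable separatrix

def rhs(z, y):
    chi, dchi = y
    d2 = (-(n - 4) / (n - 2) * dchi
          + (4.0 / 9.0 * cM + (n - 3) / (n - 2) ** 2) * chi
          - 4.0 / 9.0 * cM * chi ** (2 * n - 3))
    return [dchi, d2]

chi_e = cr ** (1.0 / (2 * (n - 2)))       # chi(z_e) = c_r^{1/(2(n-2))}
z_osc = np.log(1e6)                       # z_osc = ln(H_I/H_osc) in matter domination
sol = solve_ivp(rhs, [0.0, z_osc], [chi_e, chi_e / (n - 2)],
                dense_output=True, rtol=1e-10, atol=1e-12)

z = np.linspace(0.0, z_osc, 200001)
chi = sol.sol(z)[0]
t = np.exp(z)                             # cosmic time in units of t_e
avg = np.trapz(chi ** n * t, z) / t[-1]   # (1/t_osc) * int chi^n dt, with dt = t dz
chi0 = (1 + 9 * (n - 3) / (4 * cM * (n - 2) ** 2)) ** (1 / (2 * (n - 2)))
print(f"fixed point chi_0 = {chi0:.4f};  averaged efficiency factor = {avg:.4f}")
```

Because dt = t dz weights late times exponentially, the average is dominated by the oscillation phase reached just before t_osc; varying z_osc (i.e., H_I/H_osc) then reproduces the oscillatory dependence described next, and pushing c_r toward 3 drives χ_min toward zero and deepens the suppression.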
For example, if a wave crest appears just before t_osc, χ^n takes a value of order χ_max^n. On the contrary, when a wave hollow appears, χ^n can take a more suppressed value of order χ_min^n. Second, a branching point of the dynamics exists at χ(z_e) = 3^{1/4}, i.e. c_r = 3. For c_r < 3, χ oscillates around χ_0, staying in the region χ > 0. On the other hand, for c_r > 3, χ crosses the origin and oscillates around it. The choice c_r = 3 is nothing but the unstable solution where χ approaches the origin in infinite time. Therefore, if we take c_r ≃ 3, χ approaches the origin closely and χ_min can be much less than unity. From these facts, we can understand the behavior of the numerical result for χ^n shown in Fig. <ref>. χ^n oscillates with respect to H_I/H_osc, i.e., the duration of the χ oscillation, because of the first feature mentioned above. Consequently, depending on the choice of c_r and H_I/H_osc, the baryon number density can receive an accidental suppression in the n = 4 AD baryogenesis scenario. We stress that this suppression mechanism is effective only when the inflation sector consists of at least two superfields. In single-superfield inflation, the values of c_M and c_I are not independent (see footnote <ref>) and the value of c_r is written as c_r = (c'-1)/(c'-1/2), which is smaller than unity for c' > 1. Therefore we cannot take c_r ≃ 3, which is a necessary condition for the large suppression. Our discussion above is based on the semi-analytical estimation of the baryon number density, eq. (<ref>). In fact, this is confirmed by the fully numerical simulation including the phase direction of the AD field θ, as shown in Figs. <ref> and <ref>. In the case of n > 4 AD baryogenesis, such a suppression mechanism also occurs. However, the friction term in eq. (<ref>) becomes effective and the oscillation amplitude decreases with z = ln(t/t_e). Therefore, χ^n approaches unity for log_10(H_I/H_osc) ≳ 𝒪(1) and the suppression is less important.

§ MINIMAL AFFLECK-DINE LEPTOGENESIS IN HIGH-SCALE INFLATION

We have discussed the possible suppression of the baryon number density. In general, we want to avoid such a suppression in order to generate a sufficient amount of the baryon asymmetry of the universe we observe today. On the other hand, Affleck-Dine baryogenesis has a problem in high-scale inflation, where the inflation scale takes a large value H_I ≳ 10^13 GeV. During inflation the phase of the AD field obtains fluctuations ≃ H_I/2π, which result in baryonic isocurvature perturbations. In high-scale inflation the isocurvature perturbations are too large unless the field value during inflation is nearly the Planck scale, which brings difficulties to some AD baryogenesis scenarios, including the minimal Affleck-Dine leptogenesis. In this section, we show that the suppression effect can make the minimal Affleck-Dine leptogenesis possible even in high-scale inflation. In the minimal Affleck-Dine leptogenesis, the LH_u direction <cit.>, which possesses lepton number, is used as the AD field. This direction is lifted by a non-renormalizable term which gives mass to the neutrinos:

W_AD = [m_ν_i/(2⟨H_u⟩^2)](L_iH_u)^2 ≡ (λ/4M_p)Φ^4,  for Φ^2/2 = LH_u,

where ⟨H_u⟩ = (174 GeV) sinβ (tanβ = ⟨H_u⟩/⟨H_d⟩), and here we take the basis where the neutrino mass matrix is diagonal. Therefore, the LH_u direction corresponds to an n = 4 AD field.
Then, the lightest neutrino mass is related to λ as

m_ν1 = λ⟨H_u⟩^2/M_p ≃ 5.3×10^{-9} eV (λ/4.2×10^{-4}).

Note that the value of λ has an upper bound of order 10^{-4} due to the baryonic isocurvature constraint in high-scale inflation <cit.>. When we take the inflation scale to be H_I ≃ 10^13 GeV, the upper bound is evaluated as λ ≲ 4.2×10^{-4}. The model therefore predicts a very tiny lightest neutrino mass. In this scenario, the Affleck-Dine mechanism produces the L asymmetry of the universe. Since the sphaleron process is in thermal equilibrium, the produced L asymmetry is converted to the baryon asymmetry as

n_B ≃ -(8/23) n_L.

Consequently, the present baryon-to-entropy ratio is estimated as <cit.>

n_B/s ≃ -(8/23) T_R n_L(t_osc)/(4 M_p^2 H_osc^2) = ϵ (8/23) T_R m_3/2/(4√3 λ M_p H_osc).

Here we have to mention the finite-temperature effects. In particular, for n = 4 Affleck-Dine baryogenesis, the thermal log potential <cit.>

V_T(ϕ) ≃ c_T α_s^2 T^4 log(|ϕ|^2/T^2)

could change the dynamics of the AD field, where T is the temperature of the background plasma and c_T = 45/32. This thermal potential behaves as a positive mass term for ϕ, which modifies the time when the AD field starts to oscillate to

H_osc ≃ Max[m_ϕ, 0.6 α_s √λ T_R].

We can see that in high-scale inflation the thermal mass easily overcomes the soft mass for the AD field. Consequently, we obtain the resulting baryon asymmetry by substituting H_osc = 0.6 α_s √λ T_R into eq. (<ref>):

n_B/s ≃ 4.1×10^{-11} ϵ (m_3/2/1 TeV)(λ/4.2×10^{-4})^{-3/2},

where we assume α_s ≃ 0.1. Surprisingly, the result does not depend on the reheating temperature as long as T_R ≳ m_ϕ λ^{-1/2}/α_s is satisfied <cit.>. To realize the observed baryon asymmetry, the gravitino mass m_3/2 should be related to λ as

m_3/2 = 2.1 TeV (λ/4.2×10^{-4})^{3/2} ≲ 2.1 TeV.

Unfortunately, however, gravitinos with such a mass decay during the BBN era and destroy the light elements <cit.>. To avoid this problem, gravitinos must decay before the BBN. For example, if reheating occurred via gravitational interactions, where the reheating temperature is typically T_R ∼ 10^9 GeV, the gravitino mass has a lower bound of m_3/2 ≳ 10^4 GeV. Therefore, the minimal Affleck-Dine leptogenesis does not work in high-scale inflation due to the gravitino problem. One may consider fine-tuning the initial angle θ_0 ≪ 1, which makes ϵ ≪ 1. However, such a tiny θ_0 makes the baryonic isocurvature constraint stronger, and the upper bound on m_3/2 becomes lower by 𝒪(θ_0). However, it is possible for the scenario to work if we take the oscillation of the AD field into account. As we discussed in the previous sections, the oscillation of the AD field leads to an accidental suppression of the produced baryon asymmetry:

ñ_B/s ≃ χ^4(c_r, H_I/H_osc) · n_B/s.

Consequently, the gravitino mass required to realize the observed baryon number density becomes heavier:

m_3/2 ≃ 2.1 TeV (χ^4(c_r, H_I/H_osc))^{-1}.

We numerically calculate the gravitino mass which yields n_B/s ≃ 8.7×10^{-11} <cit.> and plot it on the (c_r, λ) plane in Fig. <ref>. From the figure, it is seen that we can take heavier gravitino masses and hence evade the BBN constraint.

§ SUMMARY AND DISCUSSIONS

In this paper, we have performed a more precise estimation of the produced baryon asymmetry in n = 4 AD baryogenesis. Due to the fact that the AD condensate oscillates around its minimum after inflation, the efficiency of the generation of the baryon asymmetry decreases, depending on the choice of the dimensionless 𝒪(1) parameter c_r and the quantity H_I/H_osc.
We found that this suppression mechanism makes the minimal AD leptogenesis scenario viable even in high-scale inflation. In our analysis, we assume that inflation ends suddenly and the universe switches to the matter-dominated era. Although this simplification does not change the suppression mechanism, the dependence of χ^4 on c_I and H_I/H_osc could be slightly modified. We note that the choice of c_M, which we set to unity in this paper, also does not change the result. Finally, let us comment on the evolution of the fluctuation δϕ. When ϕ approaches the origin, the effective mass of δϕ becomes negative due to the negative Hubble induced mass. Therefore tachyonic resonance <cit.> would take place and the fluctuations δϕ would grow exponentially. However, the resonance is not effective, because the oscillation time scale (= period) of the AD field is of the order of the Hubble time, and hence the AD field oscillates only several times before producing the baryon asymmetry. After the soft SUSY breaking mass dominates the dynamics, the fluctuations of the AD field generally grow and form non-topological solitons called Q-balls <cit.>. Large Q-balls may decay after the electroweak phase transition, which makes lepton-to-baryon number conversion difficult, since the sphaleron process is then ineffective. In the case of the LH_u direction, however, the existence of the supersymmetric μ term can prevent the AD field from forming Q-balls.

§ ACKNOWLEDGEMENT

F.H. would like to thank Jeong-Pyong Hong and Yutaro Shoji for helpful comments. This work is supported by MEXT KAKENHI Grant Number 15H05889 (M. K.), JSPS KAKENHI Grant Number 17K05434 (M. K.) and also by the World Premier International Research Center Initiative (WPI), MEXT, Japan. F. H. is supported by JSPS Research Fellowship for Young Scientists Grant Number 17J07391. | http://arxiv.org/abs/1706.08659v1 | {
"authors": [
"Fuminori Hasegawa",
"Masahiro Kawasaki"
],
"categories": [
"hep-ph",
"astro-ph.CO"
],
"primary_category": "hep-ph",
"published": "20170627032157",
"title": "Oscillating Affleck-Dine condensate and its cosmological implications"
} |
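As a brief editorial check on the entry that just closed: the quoted neutrino-mass and gravitino-mass numbers follow from simple arithmetic. The snippet below is ours, not the authors'; the reduced Planck mass value and the choices sin β ≈ 1 and ϵ ≈ 1 are assumptions we supply.

```python
# Quick arithmetic check of the numbers quoted in the entry above.
Mp = 2.4e18            # reduced Planck mass [GeV] (assumption)
vu = 174.0             # <H_u> = 174 GeV * sin(beta), taking sin(beta) ~ 1
lam = 4.2e-4           # coupling at its quoted isocurvature upper bound

m_nu1_eV = lam * vu**2 / Mp * 1e9        # m_nu1 = lam <H_u>^2 / M_p, GeV -> eV
print(f"m_nu1 = {m_nu1_eV:.2e} eV")      # ~5.3e-9 eV, as quoted

nBs_obs, eps = 8.7e-11, 1.0
# Invert n_B/s ~ 4.1e-11 * eps * (m_32/TeV) * (lam/4.2e-4)^(-3/2) for m_32:
m32_TeV = nBs_obs / (4.1e-11 * eps) * (lam / 4.2e-4) ** 1.5
print(f"m_3/2 = {m32_TeV:.2f} TeV")      # ~2.1 TeV, matching the text
```

Both printed values come out as stated in the paper's eqs. for m_ν1 and m_3/2.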
Ghost Reduction in Echo-Planar Imaging by Joint Reconstruction of Images and Line-to-Line Delays and Phase Errors

Julianna D Ianni^1,2, E Brian Welch^1,3, William A Grissom^1,2,3,4

=================================================================================================================

^1Vanderbilt University Institute of Imaging Science, ^2Department of Biomedical Engineering, ^3Department of Radiology, ^4Department of Electrical Engineering, Vanderbilt University, Nashville, TN, United States

Submitted to Magnetic Resonance in Medicine

§ ABSTRACT

Purpose: To correct line-to-line delays and phase errors in echo-planar imaging (EPI).

Theory and Methods: EPI-trajectory auto-corrected image reconstruction (EPI-TrACR) is an iterative maximum-likelihood technique that exploits the data redundancy provided by multiple receive coils between nearby lines of k-space to determine and correct line-to-line trajectory delays and phase errors that cause ghosting artifacts. EPI-TrACR was applied to in vivo data acquired at 7 Tesla across acceleration and multishot factors, and in a dynamic time series. The method was efficiently implemented using a segmented FFT and compared to a conventional calibrated reconstruction.

Results: Compared to conventional calibrated reconstructions, EPI-TrACR reduced ghosting up to moderate acceleration factors and across multishot factors. It also maintained low ghosting in a dynamic time series. Averaged over all cases, EPI-TrACR reduced root-mean-square ghosted signal outside the brain by 27% compared to calibrated reconstruction.

Conclusion: EPI-TrACR is effective in automatically correcting line-to-line delays and phase errors in multishot, accelerated, and dynamic EPI.

Key words: image reconstruction; EPI; parallel imaging; phase correction; eddy currents; ghosting

§ INTRODUCTION

Echo-planar imaging (EPI) is a fast imaging technique in which multiple Cartesian lines of k-space are measured per excitation. It is widely used in functional magnetic resonance imaging (fMRI) and diffusion-weighted imaging (DWI). However, EPI images contain ghosting artifacts due to trajectory delays and phase errors between adjacent k-space lines that result from eddy currents created by rapidly switched readout gradients. The most common methods to correct EPI ghosting artifacts are based on the collection of calibration data from which delays and phase shifts can be estimated and applied in image reconstruction <cit.>. Usually this data comes from a separate acquisition without phase-encoding gradient blips, acquired before the imaging scan. Corrections can also be made by re-acquiring EPI k-space data that is offset by one k-space line, so that odd k-space lines become even and vice versa <cit.>. The gradient impulse response function can also be measured and applied to predict errors <cit.>. However, these methods cannot correct dynamic errors caused by effects such as gradient coil heating. Dynamic errors can be compensated by measuring calibration data within the imaging sequence itself, for example by reacquiring the center line of k-space within a single acquisition <cit.>. However, these approaches result in a loss of temporal resolution.
Alternatively, dynamic errors can be measured during a scan without modifying the sequence using field-probe measurements <cit.>. However, the hardware required to make those measurements can take up valuable space in the scanner bore and is not widely available at the time of writing. As an alternative to separate calibration measurements, many retrospective methods attempt to correct ghosting based on the EPI data or images themselves. The image-based methods <cit.> rely on the assumption that some part of the initial image contains no ghosted signal. Another group of methods makes corrections based on finding phased-array combinations that cancel ghosts <cit.>. Several methods use parallel imaging to separately reconstruct images from odd and even lines and then combine them, and these have further been combined with a dynamically alternating phase-encode shift or direction <cit.>. However, relying on undersampled data for calibration weights may make these approaches unstable, and some methods reduce temporal resolution. Importantly, almost all these retrospective methods are either incompatible or have not been validated with multi-shot EPI, and most are either incompatible with parallel imaging acceleration or have only been implemented and validated with small acceleration factors.

In this work, a flexible EPI-trajectory auto-corrected image reconstruction (EPI-TrACR) is proposed that alleviates ghosting artifacts by exploiting data redundancy between adjacent k-space lines in multicoil EPI data. It is an extension of a previously-described method for automatic non-Cartesian trajectory error correction (TrACR-SENSE) <cit.> to the joint estimation of images and line-to-line delays and phase errors in EPI. In the following we describe the method, including an efficient segmented FFT algorithm for delayed EPI k-space trajectories. The method is then validated in vivo at 7 Tesla, at multiple acceleration and multishot factors and in a dynamic time series. It is demonstrated that EPI-TrACR reduces dynamic ghosting and is compatible with multishot EPI and acceleration. Furthermore, the method benefits from initialization with calibration data but does not require it at moderate acceleration and multishot factors.

§ THEORY

§.§ Problem Formulation

EPI-TrACR jointly estimates images, delays, and phase shifts by fitting an extension of the SENSE MR signal model <cit.> to EPI k-space data:

y_c[m,n] = ∑_{i=1}^{N_s} e^{-i2π((k^x_m + Δk^x_n) x_i + k^y_n y_i)} e^{iΔϕ_n} s_{ci} f_i,

where y_c[m,n] is the signal measured in coil c at the mth time point of the nth phase-encoded echo, k^x_m is the k-space coordinate in the readout/frequency-encoded dimension and Δk^x_n is the trajectory delay in that dimension for the nth echo (out of N echoes), k^y_n is the nth echo's k-space coordinate in the phase-encoded dimension, Δϕ_n is the phase shift of the nth echo resulting from zeroth-order eddy currents, s_{ci} is coil c's measured sensitivity at (x_i, y_i), f_i is the image at (x_i, y_i), and N_s is the number of pixels in the image. The variables in this model are the image f and the delays and phase shifts {(Δk^x_n, Δϕ_n)}_{n=1}^{N}, and it is fit to the measured data ỹ_c[m,n] by minimizing the sum of squared errors between the two. This is done while constraining the delays and phase shifts so that a single delay and phase shift pair applies to all of a shot's odd echoes and another pair applies to all of its even echoes, with separate parameters for each shot.
The first shot's odd echoes serve as a reference and are constrained to have zero delay and phase shift. Overall, a total of 2(2N_shot - 1) delay and phase shift parameters are fit to the data along with the image.

§.§ Algorithm

The EPI-TrACR algorithm minimizes the data-model error by alternately updating the estimated image f, the k-space delays {Δk^x_n}_{n=1}^{N}, and the phase shifts {Δϕ_n}_{n=1}^{N}. The image is updated with a conjugate-gradient (CG) SENSE reconstruction <cit.>. The delay and phase shift updates are both performed using a nonlinear Polak-Ribière CG algorithm <cit.>, which requires computation of the derivative of the squared data-model error with respect to those parameters. This CG algorithm was chosen for its efficiency in minimizing the data-model error in similar problems; other optimization algorithms, such as gradient descent, could alternatively be applied. Denoting the sum-of-squared errors as the function Ψ, the derivative with respect to each delay Δk^x_n is:

∂Ψ/∂Δk^x_n = ∑_{c=1}^{N_c} ∑_{m=1}^{M} ∑_{i=1}^{N_s} ℜ{ -i2π x_i e^{-iΔϕ_n} e^{i2π((k^x_m + Δk^x_n) x_i + k^y_n y_i)} s_{ci}^* f_i^* r_{cmn} },

and the derivative with respect to each phase shift Δϕ_n is:

∂Ψ/∂Δϕ_n = ∑_{c=1}^{N_c} ∑_{m=1}^{M} ∑_{i=1}^{N_s} ℜ{ i e^{-iΔϕ_n} e^{i2π((k^x_m + Δk^x_n) x_i + k^y_n y_i)} s_{ci}^* f_i^* r_{cmn} },

where ℜ denotes the real part, ^* is complex conjugation, and r_{cmn} is the residual error between the measured data and the model given the current parameter estimates f̂, Δk̂^x_n, and Δϕ̂_n:

r_{cmn} = ỹ_c[m,n] - ∑_{i=1}^{N_s} e^{-i2π((k^x_m + Δk̂^x_n) x_i + k^y_n y_i)} e^{iΔϕ̂_n} s_{ci} f̂_i.

To constrain the delays and phase shifts to be the same for the set of odd or even echoes of each shot, the derivatives above are summed across the echoes in that set, and a single delay and shift pair is determined for the set each CG iteration. The updates are alternated until the data-model error stops changing significantly.

§.§.§ Segmented FFTs

Since a delayed EPI trajectory is non-Cartesian, the model in Equation <ref> corresponds to a non-uniform discrete Fourier transform (DFT) of the image. Non-uniform fast Fourier transform (FFT) algorithms (e.g., Ref. <cit.>) are typically used to efficiently evaluate non-uniform DFTs, but they use gridding, which would result in long compute times in EPI-TrACR, since Equation <ref> is repeatedly evaluated by the algorithm. Figure <ref> illustrates a segmented FFT algorithm that applies the delays as phase ramps in the image domain, instead of gridding the delayed data in the frequency domain. In addition to eliminating gridding, this also enables the data to be FFT'd in the frequency-encoded dimension before starting EPI-TrACR, so that the algorithm only needs to compute 1D FFTs in the phase-encoded dimension. The figure shows an inverse segmented FFT (k-space to image space) for a 2-shot dataset with delays and phase shifts, which comprises the following steps:

* The data in each set of odd or even echoes of each shot are collected into 2N_shot submatrices of size M × (N/(2 × N_shot)), and the 1D inverse FFT of each submatrix is computed in the phase-encoded dimension.

* The estimated phase shifts are applied to each submatrix.

* A phase ramp is applied in the phase-encoded spatial dimension of each submatrix to account for that set's relative position in the phase-encoded k-space dimension. This is necessary since the inverse FFTs assume all the submatrices are centered in k-space.

* The phase ramp corresponding to each set's estimated delay is applied to its submatrix in the frequency-encoded spatial dimension.
* For each submatrix entry, the inverse DFT across submatrices is computed to obtain 2N_shot subimages of size M × (N/(2 × N_shot)), which are concatenated in the column dimension to form the final M × N image.

For efficiency, the phase shifts of steps 2 through 4 are combined into a single precomputed matrix that is applied to each submatrix by elementwise multiplication. To perform the forward segmented FFT (image space to k-space), the steps are reversed, with the phase ramps and shifts negated. Steps 1 and 5 dominate the computational cost, and respectively require O(M N N_shot) and O(M N log(N/(2N_shot))) operations. (A toy numerical illustration of this delay and phase handling is given below, after the experimental details.)

§ METHODS

§.§ Algorithm Implementation

The EPI-TrACR algorithm was implemented in MATLAB 2016a (The Mathworks, Natick, MA, USA) on a workstation with dual 6-core 2.8 GHz X5660 Intel Xeon CPUs (Intel Corporation, Santa Clara, CA) and 96 GB of RAM. For each iteration of the algorithm's outer loop, image updates were initialized with zeros to prevent noise amplification, and were performed using a built-in MATLAB solver function with a fixed tolerance of 10^-1, capped at 25 iterations. CG delay and phase updates were each fixed to a maximum of 5 iterations per outer-loop iteration, and terminated early if all steps were less than 10^-6 cm^-1 (for delays) or 10^-6 radians (for phase shifts). The maximum permitted delay in a single iteration was limited to 1/FOV, and the maximum permitted phase step in a single iteration was limited to π/10 radians. Outer-loop iterations stopped when the change in squared error was less than the previous iteration's error times 10^-6. Code and example data for EPI-TrACR can be downloaded at <https://bitbucket.org/wgrissom/tracr>.

§.§ Experiments

A healthy volunteer was scanned on a 7T Philips Achieva scanner (Philips Healthcare, Best, Netherlands) with the approval of the Institutional Review Board at Vanderbilt University. A birdcage coil was used for excitation and a 32-channel head coil for reception (Nova Medical Inc., Wilmington, MA, USA). EPI scans were acquired with 24 × 24 cm FOV, 1.5 × 1.5 × 3 mm^3 voxels, TR 3000 ms, TE 56 ms, and flip angle 60°. They were repeated for 1 to 4 shots and acceleration factors of 1x to 4x, and a single scan (2-shot, 1x) was performed with 20 repetitions. The TE of 56 ms was chosen to maintain the same contrast between images, and was the shortest possible for the single-shot, 1x acquisition, which had a readout duration of 102 ms. A calibration scan with phase encodes turned off was acquired in each configuration, and delays and phase shifts were estimated from it using cross-correlation followed by an optimization transfer-based refinement <cit.>. SENSE maps were also collected using the vendor's mapping scan. Images were reconstructed to 160 × 160 matrices with: no corrections; conventional calibration (using the phase and delay estimates from the calibration scan); EPI-TrACR initialized with the delays and phase shifts from the conventional calibration; and EPI-TrACR initialized with zeros. For comparison of the EPI-TrACR corrections on the time-series data with another dynamic method, PAGE <cit.> was also implemented.
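As promised above, here is a small self-contained NumPy sketch — ours, not the authors' MATLAB release — of the core idea behind the segmented FFT: per-interleave trajectory delays act as linear phase ramps in the frequency-encoded image dimension, and per-interleave phase errors as constant phases, so no gridding is needed. For readability it uses full-size FFTs rather than the paper's per-submatrix factorization; the interleave ordering and all parameter values are invented for illustration.

```python
import numpy as np

M = N = 64
nshot = 2
G = 2 * nshot                            # odd/even echo sets per shot
rng = np.random.default_rng(0)
img = np.zeros((M, N)); img[20:44, 16:48] = 1.0      # toy object

x = (np.arange(M) - M // 2) / M          # normalized readout-dim coordinate
dk = rng.uniform(-0.5, 0.5, G)           # per-set delays, in units of 1/FOV
dphi = rng.uniform(-0.3, 0.3, G)         # per-set constant phase errors
dk[0] = dphi[0] = 0.0                    # first shot's odd echoes = reference
# Hypothetical interleave: which delay/phase set each phase-encode line belongs to.
group = (np.arange(N) % nshot) * 2 + (np.arange(N) // nshot) % 2

def forward(f):
    """Simulate delayed EPI k-space: ramp the image per set, FFT, keep that set's lines."""
    y = np.zeros((M, N), complex)
    for g in range(G):
        ramp = np.exp(-2j * np.pi * dk[g] * x)[:, None]
        Fg = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f * ramp)))
        y[:, group == g] = np.exp(1j * dphi[g]) * Fg[:, group == g]
    return y

def adjoint(y):
    """Matched inverse: undo phases, zero-fill each set, inverse FFT, un-ramp, sum."""
    f = np.zeros((M, N), complex)
    for g in range(G):
        yg = np.zeros((M, N), complex)
        yg[:, group == g] = np.exp(-1j * dphi[g]) * y[:, group == g]
        fg = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(yg)))
        f += np.exp(2j * np.pi * dk[g] * x)[:, None] * fg
    return f

recon = adjoint(forward(img))
print("relative residual:", np.linalg.norm(recon - img) / np.linalg.norm(img))
```

Because the per-set phase ramps act only along the readout dimension, they commute with the phase-encode-dimension convolution induced by the line masks, and the matched adjoint recovers the test object to machine precision; the paper's segmented form computes the same quantities with per-submatrix 1D FFTs and a small DFT across submatrices for speed.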
To characterize the amount of data necessary for the EPI-TrACR reconstruction, the algorithm was repeated after truncating the 2-shot, 1x in vivo data in both k-space dimensions across a range of truncation factors. The reconstructed image resolution within EPI-TrACR was correspondingly reduced in each case, so that the image matrix size matched the data matrix size. The final estimated delays and phase shifts were then applied in a full-resolution reconstruction. Except where indicated, displayed images are windowed down to 20% of their maximum amplitude for clear display of ghosting, and ghosted signals were measured in all images as the root-mean-square (RMS) signal outside an elliptical region-of-interest that excluded the brain and skull.

§ RESULTS

Figure <ref> shows reconstructed images across multishot factors. Ghosting was lowest with EPI-TrACR in all cases, and the differences between zero-initialization and calibrated-initialization results are negligible: averaged across multishot factors, the RMS difference between estimated delays and phase shifts with and without calibrated initialization was 0.014%. Compared to conventional calibration-based correction, EPI-TrACR RMS ghosted signals were on average 37% lower. In addition, the conventional 4-shot reconstruction contained a visible aliased edge inside the brain (indicated by the yellow arrow), which did not appear in the EPI-TrACR reconstructions. All of the single-shot reconstructions contain a visible off-resonance artifact at the back of the brain (indicated in the conventional reconstruction by the green arrow). Figure <ref> shows reconstructed 2-shot EPI images with 1-4× acceleration. Compared to conventional calibration, EPI-TrACR with calibrated initialization again reduced ghosting up to 4× acceleration, and RMS ghosted signals were 18% lower on average. Furthermore, EPI-TrACR estimates matched with and without calibrated initialization up to 3× acceleration: averaged across factors of 1-3×, the RMS difference between estimated delays and phase shifts with and without calibrated initialization was 0.024%. Figure <ref>a plots RMS ghosted signal across repetitions of the 2-shot/1× scan for conventional calibrated reconstruction, PAGE, and EPI-TrACR. The signal levels are normalized to that of the first repetition's EPI-TrACR reconstruction. Figure <ref>b shows a Weisskoff plot <cit.> for all three reconstructions compared to the theoretical ideal; the coefficient of variation over repetitions is plotted for an ROI of increasing size. Figure <ref>c shows conventional calibration, PAGE, and EPI-TrACR (with zero initialization) images at the 14th repetition. The conventional, PAGE, and EPI-TrACR images at the 14th repetition respectively have 190%, 35%, and 16% higher RMS ghosted signal compared to the first-repetition EPI-TrACR reconstruction. EPI-TrACR maintained consistently low ghosting across repetitions, and a much higher radius of decorrelation than the conventional calibrated and PAGE reconstructions. A video of the full time series is provided as Supporting Information. The truncated 2-shot EPI-TrACR results are shown in Figure <ref>. Figure <ref>a shows that delay and phase shift estimation errors relative to full-data EPI-TrACR estimates are low up to very high truncation factors, and Figure <ref>b shows that compute time can be reduced up to 90% by truncating the data by 90%.
Figures <ref>c and d show that images reconstructed with full-data and 90%-truncated-data delay and phase estimates are indistinguishable: RMS ghosted signal was 8% higher in the truncated EPI-TrACR image versus the full-data reconstruction, but still 40% lower than the conventional calibrated reconstruction (which appears in Figure <ref>). For greater than 90% truncation, though, the compute time starts to increase again due to an increasing number of iterations. For full data, reconstruction times ranged from one minute (for 1 shot, 1× acceleration, and calibrated initialization) to 88 minutes (for 2 shots, 4× acceleration, and zero initialization). Reconstructions using NUFFTs <cit.> in place of the segmented FFTs in EPI-TrACR ranged from 8 minutes (for 1 shot, 1× acceleration, and calibrated initialization) to 269 minutes (for 2 shots, 4× acceleration, and zero initialization).

§ DISCUSSION

EPI-TrACR is an iterative algorithm that jointly estimates EPI echo delays and phase shifts, along with images that are compensated for them. Compared to conventional calibrated corrections, EPI-TrACR consistently reduced image ghosting across multishot factors, acceleration factors, and a dynamic time series. In most cases it was able to do so without being initialized with calibrated delays and phase shifts. A further characterization of the convergence of EPI-TrACR with varying initialization is included as Figure <ref> in the Supporting Information. An additional validation experiment comparing EPI-TrACR
Second, the method could be extended to jointly estimate a single set of delays and phase shifts over a whole stack of slices simultaneously, which would increase the effective signal-to-noise ratio for estimation.This could in particular help for highly accelerated acquisitions where the method is currently more sensitive to poor initialization.§ CONCLUSIONSThe EPI-TrACR method alleviates ghosting artifacts by exploiting data redundancy between adjacent k-space lines in multicoil EPI data.It benefits from initialization with calibration data but does not require it at moderate acceleration and multishot factors. EPI-TrACR reduced dynamic ghosting without sacrificing temporal resolution, is compatible with multishot and accelerated acquisitions,and unlike previous data-based approaches, it does not rely on a ghost-free image region.It was validated in vivo at 7T,at multiple acceleration and multishot factors and in a dynamic time series.§ ACKNOWLEDGMENTS This work was supported by NIH grants R25 CA136440, R01 EB016695, and R01 DA019912. The authors would like to thank Dr. Manus Donahue for help with experiments.§ SUPPORTING INFORMATION FIGURES AND CAPTIONS Figure S2: A separate experiment was performed in a phantom at 3T (Philips Achieva), using a volume coil for excitation and a 32-channel coil for reception (Nova Medical Inc., Wilmington, MA, USA).Data were collected for a single off-axis slice (5/20/30) using a single-shot EPI scan with 60 dynamics; scan parameters were: 23 × 23 cm FOV, 1.8 × 1.8 × 4 mm voxels, TR 2000 ms, TE 43 ms, flip angle 90.The trajectory was measured for a single dynamic using a modified Duyn method <cit.>.A SENSE map and a calibration scan with phase encodes turned off were also collected as for the in vivo data.From the measured trajectory,trajectory delays were estimated as the average shift between each pair of odd and even lines over the middle quarter of the readout dimension.EPI-TrACR was used to reconstruct the phantom data in the same manner as for the in vivo data described above.Residual ghosted signal was calculated for all images as the root-mean-square (RMS) signal outside an elliptical region-of-interest masking out the phantom.Shown in this figure are boxplots of the measured line-to-line trajectory delays in the readout dimension (a) and DC phase errors (b), with lines superimposed to mark the conventional (dashed black) and EPI-TrACR (solid green) estimates. (c) Corresponding images shown are conjugate-gradient (CG) reconstructions of the first dynamic of phantom data along the uncorrected trajectory, that corrected by conventional estimates, that estimated with EPI-TrACR (with calibrated initialization), and the measured EPI trajectory. Images are shown at full magnitude (top) and windowed to 20% (bottom). RMS image ghosting is 19% lower in the TrACR image than in the measured image. The bulk even/odd line shift estimated was approximately 13% different between the two trajectories. The conventional reconstruction method was unable to adequately correct for the large amount of ghosting in the uncorrected image. Both EPI-TrACR and measured-trajectory reconstructions reduced ghosting over the conventional method. This provides additional confidence in the EPI-TrACR estimates beyond that provided by the conventional reconstruction. Residual ghosting apparent in both measured-trajectory and EPI-TrACR reconstructions may be attributed in part to the off-axis slice, which yielded particularly large trajectory and line-to-line phase shifts. 
The measured trajectory accounted for additional errors (e.g., a small readout-dimension shrinkage of the trajectory extent) that are not encompassed by the EPI-TrACR basis functions implemented herein; however, accounting for these measured errors does not appear to greatly improve the image over the EPI-TrACR reconstruction. | http://arxiv.org/abs/1706.08416v1 | {
"authors": [
"Julianna D. Ianni",
"E. Brian Welch",
"William A. Grissom"
],
"categories": [
"physics.med-ph"
],
"primary_category": "physics.med-ph",
"published": "20170626144556",
"title": "Ghost Reduction in Echo-Planar Imaging by Joint Reconstruction of Images and Line-to-Line Delays and Phase Errors"
} |
We studied the properties of the gas of the extended narrow line region (ENLR) of two Seyfert 2 galaxies: IC 5063 and NGC 7212. We analysed high resolution spectra to investigate how the main properties of this region depend on the gas velocity. We divided the emission lines into velocity bins and we calculated several line ratios. Diagnostic diagrams and SUMA composite models (photo-ionization + shocks) show that in both galaxies there might be evidence of shocks significantly contributing to the gas ionization at high | V |, even though photo-ionization from the active nucleus remains the main ionization mechanism. In IC 5063 the ionization parameter depends on V and its trend might be explained assuming a hollow bi-conical shape for the ENLR, with one of the edges aligned with the galaxy disk. On the other hand, NGC 7212 does not show any kind of dependence. The models show that solar O/H relative abundances reproduce the observed spectra in all the analysed regions. They also revealed a high fragmentation of the gas clouds, suggesting that the complex kinematics observed in these two objects might be caused by interaction between the ISM and high velocity components, such as jets.

Galaxies: individual: IC 5063, NGC 7212 – galaxies: Seyfert – line: profiles

§ INTRODUCTION

Active galactic nuclei (AGN) are among the most luminous objects of the Universe and they have been intensively studied in the last decades, because they are characterized by many important astrophysical processes. Some nearby AGN, typically Seyfert 2 galaxies <cit.>, are characterized by conical or bi-conical structures of highly ionized gas whose apexes point toward the galaxy nucleus: the ionization cones. The extension of the optical emission (which is usually traced with the [OIII]λ5007 line) is of the order of kiloparsecs, and in the most extended ionization cones it can be traced up to 15–20 kpc <cit.>. The presence of these structures was already predicted by the Unified Model <cit.>, which states that the core of the AGN is surrounded by a dusty torus that absorbs part of the radiation coming from the nucleus. However, the radiation emitted along the torus axis can escape and ionize the surrounding gas, forming the narrow-line region (NLR). When the host galaxy contains enough gas and the NLR is not absorbing completely the ionizing photons, it is also possible to observe the ionization cones <cit.>. Due to its extension, this region of ionized gas is also called extended narrow-line region (ENLR)[We consider the ENLR as a natural extension of the NLR beyond 0.8–1 kpc].

Both NLR and ENLR are characterized by spectra with narrow permitted and forbidden emission lines, with a typical full width at half maximum (FWHM) between 300 and 800 km s^-1. The presence of several forbidden lines indicates that the electron density of the regions is low, typically n_e ∼ 10^2–10^4 cm^-3. High resolution images of nearby galaxies with ionization cones <cit.> reveal the presence of substructures, such as gaseous clouds and filaments. Medium and high resolution spectra show line profiles characterized by asymmetries, bumps and multiple peaks <cit.>, indicating very complex kinematics. These internal gas motions are expected to produce shock-waves, which can further influence the properties of the NLR/ENLR <cit.>. There are several possible causes of the complex kinematics of these structures.
For example, the interaction between jets and the interstellar medium (ISM) of the galaxy, as suggested by the fact that the cone axis and the radio-jet axis, if present, are often aligned <cit.>. Another possibility is that the gas of the ENLR is the result of an episode of merging which brought new material toward the center of the host galaxy <cit.>. The third possibility is the presence of fast outflows (400–600 km s^-1) which involve different kinds of gas, from the cold molecular one to the warm ionized one <cit.>. It is worth noting that the radio-jet–ISM interaction could cause these outflows <cit.>. Trying to distinguish among these causes is clearly not an easy task.

Data indicate that most AGN host a NLR while, up to z ∼ 0.05, very few of them show an ENLR <cit.>. The morphology of Seyfert galaxies up to z ∼ 1 is almost unperturbed, suggesting that the gravitational interactions, when they occur, must be minor merger events <cit.>. Besides, most Seyfert galaxies are classified as radio-quiet <cit.> with compact radio emission, even though recent studies found that kiloparsec-scale emission might be more common than expected <cit.>. In spite of that, the kinematics of both NLR and ENLR is often characterized by radial motions <cit.>, suggesting that the ionized gas is not simply driven by the gravitational potential of the host galaxy.

Within this picture, we chose to carry out a pilot study of the physical properties of the NLR/ENLR gas as a function of velocity by comparing the intensities and profiles of emission lines analysed in medium and/or high resolution spectra. We are well aware that this is a relatively unexplored field, due to the need of observing nearby AGN with large diameter telescopes in order to have simultaneously good spectral and spatial resolution and high signal-to-noise ratio (SNR). Nevertheless, our first tests showed us that with a resolution R ∼ 10000 it is possible to highlight structures in the line profiles, such as multiple peaks and asymmetries, which are usually missed in low resolution spectra. Moreover, the lines seem to be intrinsically broad, therefore a further increase of the resolution will probably not improve the separation between the gaseous components.

In this work we improved the method developed by <cit.>, who obtained and analysed high resolution spectra of the NLR of NGC 1068, and we applied it to medium resolution echelle spectra of two nearby Seyfert 2 galaxies with ENLR: IC 5063 and NGC 7212. Both of them show extended ionization cones and are observable from the southern hemisphere. We used the strongest optical lines to directly measure the physical properties of the gas, such as density, extinction and ionization degree. Then, we calculated the gas physical conditions and element abundances through detailed modelling of the line ratios. We adopted the code SUMA <cit.>, which accounts for the coupled effect of photo-ionization and shocks (see Sec. <ref>).

In Sec. <ref> we present the observations and the sample, in Sec. <ref> we describe the data reduction, in Sec. <ref> we show how we analysed the data and the results obtained, and in Sec. <ref> we summarize our work. In this paper we adopt the following cosmological parameters: H_0 = 70 km s^-1 Mpc^-1, Ω_m,0 = 0.3 and Ω_Λ,0 = 0.7 <cit.>.

§ SAMPLE AND OBSERVATIONS

In this work we studied two nearby Seyfert galaxies with known ENLR: IC 5063 and NGC 7212.
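Given the adopted cosmology, angular sizes convert directly to physical scales at the two targets' redshifts (quoted in the next subsections). A short sketch using the astropy package — our choice for illustration, not necessarily the tool used for the published numbers:

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)
for name, z in [("IC 5063", 0.01135), ("NGC 7212", 0.02663)]:
    # Proper transverse scale per unit angle at redshift z.
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    print(name, "1 arcsec corresponds to", scale)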
They were observed with the MagE (Magellan Echellette) spectrograph mounted at the Nasmyth focus of the Clay telescope, one of the two 6.5 m Magellan telescopes of the Las Campanas Observatory (Chile). MagE is able to cover simultaneously the complete visible spectrum (3100–10000 Å) with a maximum resolution R = 8000 <cit.>. Both galaxies were observed with multiple exposures of 1200 s each. Additional images were also acquired to perform the data reduction (night-sky and calibration lamp spectra). The spectrophotometric standard HR5501 was observed each night to carry out the flux calibration. The details of our observations are reported in Table <ref>.

§.§ IC 5063

IC 5063 (Fig. <ref>) (z = 0.01135) is a lenticular galaxy classified as a Seyfert 2 <cit.>. Table <ref> lists some of the principal properties of the object as reported by the NASA/IPAC Extragalactic Database (NED). It is one of the most radio-loud Seyfert 2 galaxies known: its radio luminosity is two orders of magnitude larger than that of typical Seyfert galaxies <cit.>. IC 5063 is characterized by a complex system of dust lanes aligned with the major axis of the galaxy. It also shows a strong IR emission <cit.> and a very broad component of Hα in polarized light <cit.>, likely a sign of an obscured broad line region (BLR).

The ENLR was discovered by <cit.>, who found a very extended region of ionized gas (∼22 kpc, H_0 = 50 km s^-1 Mpc^-1) at position angle PA = 123°, with a peculiar X shape and an opening angle of about 50°. They supposed that the ENLR is the result of a quite recent merging between an elliptical galaxy and a small gas-rich spiral galaxy. <cit.> traced the [OIII]λλ4959,5007 and Balmer emission lines up to a distance of about 15–16 arcsec (3.5–3.8 kpc) from the nucleus, in a direction very close to that of the two resolved radio jets, PA = 117°. The authors found signs of fast outflows, with velocities of about 600 km s^-1, in all the gas phases. Similar velocities were found several years earlier by <cit.>, who discovered fast outflows in the neutral gas of the galaxy studying the HI line at 21 cm. These outflows were the first of their kind discovered in a galaxy and they were deeply studied in the following years <cit.>. These most recent papers explained them as gas accelerated by fast shocks caused by the radio-jet plasma expanding in the ISM.

§.§ NGC 7212

NGC 7212 (Fig. <ref>) is a nearby spiral galaxy (z = 0.02663) belonging to a system composed of three interacting objects. Some of its characteristics are reported in Table <ref>. It was classified as Seyfert 2 by <cit.>. Later <cit.>, who studied some Seyfert 2 galaxies in polarized light, found an elongated NLR in NGC 7212, with the axis at PA = 170°, aligned with a structure of highly ionized gas. Then, <cit.> discovered the presence of a diffuse and extended NLR also in non-polarized light, in a direction compatible with the previously found elongated structure, but without the well defined shape typical of ionization cones. They also discovered a radio-jet aligned with this ENLR and they suggested that the two structures are interacting. The existence of the ionization cones was finally confirmed by <cit.> through integral field spectroscopic observations. This structure extends from the nucleus to about 3–4 kpc in both directions and it is almost aligned with the optical minor axis of the galaxy (PA = 150°).
<cit.> discovered clear asymmetries in the emission lines of the ENLR, a sign of the presence of radial motions of the gas. This property, together with a possible sub-solar metallicity, induced these authors to suggest that the ENLR in NGC 7212 formed through the interaction of the active galaxy with the other members of the system. In addition, <cit.> found quite high [NII]/Hα and [SII]/Hα ratios, especially in the region of interaction between NGC 7212 and one of the other galaxies of the triplet. This suggested that the ionization mechanism of the gas is a combination of photo-ionization from a non-thermal continuum and shocks <cit.>.

§ DATA REDUCTION

We performed the data reduction using IRAF 2.15[http://iraf.noao.edu] (Image Reduction and Analysis Facility). After subtracting the bias from all the images, we extracted the spectrum of each dispersion order with the IRAF echelle extraction task, and we processed each extracted spectrum (from now on also called apertures) following a standard long-slit reduction. We used a Th–Ar lamp for wavelength calibration and the standard star HR5501 for flux calibration and telluric correction.

The final goal of the data reduction was to obtain one-dimensional (1D) spectra of different regions of the galaxies. In our analysis it is essential to have straight and aligned spectra, to extract the emission of the same region at all wavelengths. Therefore, we first rectified the two-dimensional spectra, using dedicated IRAF tasks to trace the central peak of each spectrum along the spatial direction and to apply the correction to the images; we then aligned all the apertures. After that, we divided the 2D spectra of the galaxies in regions, looking for the best trade-off between a good SNR in all spectra and a good spatial sampling of the extended emission, using the brightest emission line as a reference (Fig. <ref> and Fig. <ref>, right panels). Then, we summed up the flux of each region to obtain a 1D spectrum for each one of them. The names and the positions of the extracted regions are shown in Table <ref>.

§.§ Subtraction of the stellar continuum

The galaxy continuum is a combination of several components: the stellar continuum, the AGN continuum and the nebular continuum, which is mainly due to hydrogen free-free and free-bound emission. AGN emission is negligible in Seyfert 2 galaxies because the AGN is obscured by the dusty torus <cit.>, and the nebular continuum is estimated to be at least one order of magnitude fainter than the observed stellar emission <cit.>. Therefore, in our case the main component of the galaxy continuum is the stellar continuum. To obtain reliable line flux measurements it is important to subtract this component from the spectra, mainly because of the typical stellar absorption features (e.g. hydrogen Balmer lines), which can fall at the same wavelengths as emission lines, modifying their observed flux.

One of the best methods to subtract this component is to use STARLIGHT <cit.>, a code developed to fit the stellar component of galaxy spectra using a linear combination of simple stellar population spectra, taking into account both extinction and velocity dispersion. To work properly, STARLIGHT needs spectra with fixed characteristics: they have to be corrected for Galactic extinction, they must be shifted to the rest frame and they must also have a dispersion of 1 Å px^-1.
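As an illustration of these three preprocessing steps, a minimal Python sketch; the 'extinction' package implementing the CCM curve is our assumption for exposition, and any equivalent routine works:

import numpy as np
import extinction

def prepare_for_starlight(wave, flux, z, a_v, r_v=3.1):
    # 1) Remove Galactic extinction using the CCM law (wave in Angstrom).
    flux = extinction.remove(extinction.ccm89(wave, a_v, r_v), flux)
    # 2) Shift the wavelengths to the rest frame.
    wave = wave / (1.0 + z)
    # 3) Resample onto the uniform 1 Angstrom per pixel grid STARLIGHT expects.
    wave_out = np.arange(np.ceil(wave[0]), np.floor(wave[-1]) + 1.0, 1.0)
    flux_out = np.interp(wave_out, wave, flux)
    return wave_out, flux_out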
We corrected for Galactic extinction with A(V) = 0.165 and A(V) = 0.195 (Table <ref>) for IC 5063 and NGC 7212, respectively. We then shifted the spectra to rest-frame wavelengths. It is not easy to perform accurate redshift measurements on these objects, because the emission lines are broad and they sometimes show multiple peaks. For this reason we averaged the values obtained from some of the deepest stellar absorptions: the MgI triplet (λ5167, λ5177, λ5184 Å) and the NaI doublet (λ5889, λ5895 Å). In the most external regions of the galaxies, where the SNR of the continuum was too low to detect those lines, we used the strongest peak of some emission lines, such as Hα or [OIII]λ5007. The obtained recessional velocities are shown in Table <ref>. To estimate the errors on the velocity measurements, we calculated the accuracy of the wavelength calibration by comparing the position of the sky lines with the expected one. We obtained σ = 0.2 Å, which corresponds to σ_V = 10 km s^-1. However, STARLIGHT can independently measure small velocity shifts from the rest-frame spectra (< 500 km s^-1) when it fits the galaxy continuum with the template spectra. This allows one to fine-tune the velocity correction, especially where emission lines are used. The fine-tuning is very important because we needed to remove every single velocity component due to galaxy rotation to study the kinematics of the gas.

Once the listed corrections were applied, we ran STARLIGHT using 150 spectra of stellar populations with 25 different ages (from 10^6 to 1.8×10^8 yr) and 6 different metallicities (from Z = 10^-4 to Z = 5×10^-2) to obtain the spectrum of the stellar component and the velocity correction. In the regions where the SNR was too low to obtain a good fit of the continuum, we used the results of the software to correct the velocity measurements, but we subtracted the stellar continuum by fitting it with a simple function with the IRAF continuum-fitting task.

§.§ Deblending

Because of their intrinsic width, some prominent spectral lines are partially overlapped (e.g. Hα and [NII]λλ6548,6584; [SII]λλ6716,6731). These lines are essential to study the gas properties: Hα is used in the diagnostic diagrams <cit.> and to measure the extinction coefficient, while the ratio between the [SII]λλ6716,6731 doublet lines is used to estimate the electron density. Therefore, to measure their fluxes, especially on the line wings, we had to deblend them.

To perform the deblending we adapted to our case the method developed by <cit.>. They assumed that, if Hα and Hβ are produced by the very same gas, they must have the same profile, except for extinction effects. Therefore, they multiplied Hβ by a constant factor in order to match the Hα peak intensity, and they subtracted the multiplied Hβ from the Hα + [NII] group, obtaining only the [NII] doublet emission. To recover the final Hα line they subtracted the doublet from the original spectrum, after removing the residuals of the previous subtraction, due to intrinsic noise and to the assumption of constant extinction in the whole line.

To deblend Hα from [NII] we wrote a Python 2.7 script, modifying this process in the following way. We first shifted Hβ to the wavelength of Hα and we fitted it with 2 or 3 Gaussians, trying to reproduce the line profile and selecting the result showing the minimum residual. Then, we fitted Hα with the same number of functions, keeping the same FWHM and the same relative positions but varying their intensities. In this way, we considered that the extinction could change in each kinematic component.
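The constrained fit just described can be sketched with scipy; synthetic arrays stand in for the observed, continuum-subtracted spectra, two Gaussian components are used for brevity, and the [NII] step is omitted:

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Synthetic stand-ins for the observed spectra around Hbeta and Halpha.
wave_hb = np.linspace(4840, 4880, 200)
flux_hb = gauss(wave_hb, 1.0, 4860.0, 2.0) + gauss(wave_hb, 0.4, 4863.5, 4.0)
wave_ha = np.linspace(6540, 6590, 250)
flux_ha = gauss(wave_ha, 3.0, 6561.0, 2.7) + gauss(wave_ha, 0.8, 6565.7, 5.4)

# 1) Fit Hbeta with two Gaussians: p = (a1, mu1, s1, a2, mu2, s2).
def hbeta_model(x, a1, mu1, s1, a2, mu2, s2):
    return gauss(x, a1, mu1, s1) + gauss(x, a2, mu2, s2)

p, _ = curve_fit(hbeta_model, wave_hb, flux_hb, p0=(1, 4860, 2, 0.5, 4864, 4))

# 2) Refit only the amplitudes at Halpha, keeping the relative positions
#    and the widths fixed in velocity (hence the wavelength scaling).
s = 6562.8 / 4861.3
def halpha_model(x, a1, a2):
    return (gauss(x, a1, 6562.8 + (p[1] - 4861.3) * s, p[2] * s) +
            gauss(x, a2, 6562.8 + (p[4] - 4861.3) * s, p[5] * s))

pa, _ = curve_fit(halpha_model, wave_ha, flux_ha, p0=(p[0], p[3]))

Letting only the amplitudes vary at Hα is what allows the extinction, and hence the Hα/Hβ ratio, to differ from one kinematic component to the other.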
To get a better result we also fitted the [NII] doublet, using 2 Gaussians for each line. Then, we subtracted the Hα fit from the spectrum, obtaining the [NII] doublet, and we removed the residuals (Fig. <ref>). Finally, we subtracted again the [NII] doublet from the original spectrum, obtaining the final Hα line. We used the same process to remove [SIII]λ6312 from [OI]λ6300 in the regions where the [SIII] line was strong enough to be detected.

To deblend the [SII] doublet and the [OII]λλ3726,3729 doublet we further modified the method. We started deblending the [SII] doublet, because the separation of its lines in wavelength is ∼14 Å and they usually are partially resolved. To fit these lines we assumed that they have the same profile, except for the intensity, because they are emitted by the same gas but their ratio is influenced by the electron density. We reproduced each line with 2 or 3 Gaussian functions, depending on the quality of the spectrum. To reduce the number of free parameters, we fitted the [SII]λ6716 line letting the Gaussians free to vary, and we fixed the position and the width of the Gaussians for the [SII]λ6731 line with respect to the corresponding quantities for the [SII]λ6716 line, according to the theoretical properties of the doublet (Δλ = 14.3 Å and same FWHM). Fig. <ref> (top) shows the result of the deblending of these lines for the CS region of IC 5063.

The [OII] lines are much closer to each other (∼2.5 Å) and it is not possible to deblend them without using any reference line. We adopted the [SII] doublet because, considering only the ions showing strong emission lines in our spectra, it has the closest ionization potential to that of OII (Table <ref>). Therefore, we can suppose that their emission comes from spatially close regions and we can assume for them a similar profile. The two doublets also depend on the electron density in a similar way <cit.>. Therefore, we measured the ratio between the central intensity of each component in the two lines of the [SII] fit and we used those ratios to put constraints on the Gaussians used to fit the [OII] doublet. After that, we proceeded as before, fitting one line and constraining the parameters of the second one. However, since the [OII] lines are almost totally overlapped, in several spectra it was not possible to use the same number of Gaussians used to fit the [SII] doublet. In these cases we applied the process using only 2 Gaussians and refitting the [SII] lines. Fig. <ref> (bottom) shows the final result after the deblending of the [OII] doublet in the CS region of IC 5063.

§ DATA ANALYSIS AND RESULTS

We divided all the line profiles in velocity bins (Fig. <ref>) and we measured the fluxes inside them, as done by <cit.>. While this author took into account the morphology of the profile of the brightest emission line ([OIII]λ5007) for each region, we preferred to adopt a less arbitrary method and to use a fixed width of 100 km s^-1, which is about three times the theoretical instrumental velocity resolution (c/R ∼ 37.5 km s^-1).
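The binning itself is straightforward; a NumPy sketch equivalent to the measurement (the actual measurements were made with an IRAF script, as noted below, and integration is done in velocity space for simplicity):

import numpy as np

C = 299792.458  # speed of light, km/s

def bin_line_flux(wave, flux, line_rest, bin_width=100.0, n_bins=21):
    # Integrate a line profile in fixed velocity bins centred on zero
    # velocity, mirroring the procedure described in the text.
    vel = C * (wave - line_rest) / line_rest
    edges = (np.arange(n_bins + 1) - n_bins / 2.0) * bin_width
    fluxes = np.empty(n_bins)
    for i in range(n_bins):
        sel = (vel >= edges[i]) & (vel < edges[i + 1])
        fluxes[i] = np.trapz(flux[sel], vel[sel])
    return 0.5 * (edges[:-1] + edges[1:]), fluxes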
We performed the measurement using an IRAF script. To take into account the different line widths in different regions, we assumed the velocity bins of Hβ as reference for the other emission lines. This choice was based on the need of calculating flux ratios, deriving physical parameters and plotting diagnostic diagrams. To this aim it is essential to have good measurements of the Hβ flux in each bin. Being Hβ one of the weakest lines we are interested in, we are obviously losing high velocity bins detectable only in the strongest lines. We also measured the total flux of each line, to compare the results obtained dividing the lines in velocity bins to the average properties of the gas.

Fig. <ref> (bottom) shows the behaviour of the relative error on the flux as a function of velocity in two lines (Hβ and [OIII]λ5007) of the CS region of IC 5063. To estimate the error we used the following equation:

ΔI/I = rms/I_0,

where I is the bin flux and ΔI its error, rms is the root mean square of the continuum measured near the line and I_0 is the central peak of the bin. The error increases, as expected, in the line wings, both in a weak line (Hβ) and in a strong line ([OIII]λ5007). In Hβ we have a maximum error of ∼25 per cent of the flux in the line wings, while in [OIII]λ5007 the error is much lower (∼4 per cent). A similar behaviour is expected for all the measured lines.

We first estimated the local extinction by using the Balmer decrement and assuming a theoretical Hα/Hβ ratio of 2.86. To recover the visual extinction A(V), we applied the CCM (Cardelli–Clayton–Mathis) extinction law <cit.>. We measured A(V) both for each velocity bin and for the total line flux, and we corrected the measured fluxes of all the lines. Then, we studied the mechanisms responsible for heating and ionizing the gas in the different regions of both galaxies directly from diagnostic diagrams <cit.>. The empirical relation of <cit.> was adopted to determine the ionization parameter, while the physical parameters, such as the temperature and the density of the emitting gas, were obtained from the characteristic line ratios. Finally, to have a more complete picture of the physical parameters and abundances of the elements throughout the regions, we carried out a detailed modelling of the spectra, accounting for both the photo-ionization from the active centre and for shocks.

§.§ Diagnostic Diagrams

With the extinction-corrected fluxes we proceeded plotting the diagnostic diagrams. We used four different diagrams: three are the traditional BPT diagrams from <cit.> and <cit.>, which compare the [OIII]λ5007/Hβ ratio to [OI]λ6300/Hα, [NII]λ6584/Hα and [SII]λλ6716,6731/Hα. The fourth one is a less used diagram developed by <cit.>, which involves the [OII]λλ3726,3729/[OIII]λ5007 ratio to evaluate the ionization degree of the gas and a ΔE parameter which combines the previous ratios to discern between different ionization mechanisms (Eq. <ref>, <ref>, <ref>, <ref>, where x = ([OII]λ3726 + [OII]λ3729)/[OIII]λ5007):

ΔE_1 = log([OIII]λ5007/Hβ) + log(0.32 + x) - 0.44;

ΔE_2 = 1/2 [log([NII]λ6584/Hα) - log(x/(x + 1.93)) + 0.37];

ΔE_3 = 1/5 [log([OI]λ6300/Hα) + 2.23];

ΔE = 1/3 [ΔE_1 + ΔE_2 + ΔE_3].

In the first three diagrams, we used the functions defined by <cit.> to discern between HII regions and power-law ionized regions, and to divide the latter in Seyfert-like emission regions and LINER-like emission regions. We used the ΔE vs. [OII]λλ3726,3729/[OIII]λ5007 diagram to investigate the influence of shock-waves in ionizing the NLR/ENLR gas. Fig. <ref> shows an example of diagnostic diagrams for IC 5063 and NGC 7212.
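For reference, a sketch of demarcation curves commonly adopted in these diagrams; the exact functions used in the paper are those of the cited works, so the Kewley et al. (2001, 2006) forms below are an assumption for illustration:

import numpy as np

x = np.linspace(-2.0, 0.3, 200)          # log([NII]/Halpha)
kewley_nii = 0.61 / (x - 0.47) + 1.19    # HII vs AGN, [NII] diagram

xs = np.linspace(-2.0, 0.25, 200)        # log([SII]/Halpha)
kewley_sii = 0.72 / (xs - 0.32) + 1.30   # HII vs AGN, [SII] diagram
seyfert_liner_sii = 1.89 * xs + 0.76     # Seyfert vs LINER, [SII] diagram

xo = np.linspace(-2.5, -0.7, 200)        # log([OI]/Halpha)
kewley_oi = 0.73 / (xo + 0.59) + 1.33    # HII vs AGN, [OI] diagram
seyfert_liner_oi = 1.18 * xo + 1.30      # Seyfert vs LINER, [OI] diagram

Each array gives log([OIII]/Hβ) along the corresponding dividing line, so a bin is classified by checking on which side of these curves its measured ratios fall.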
The diagrams of the other regions are shown in Appendix <ref>. Fig. <ref> also shows the same diagrams calculated using the total flux of the lines for each region. From these plots it is possible to see that all the points lie in the AGN region. Moreover, the majority of the points in the log([OIII]λ5007/Hβ) vs. log([OI]λ6300/Hα) and in the log([OIII]λ5007/Hβ) vs. log([SII]λλ6716,6731/Hα) diagrams are located to the left of the diagram, in the area usually occupied by Seyfert galaxies <cit.>. In the ΔE vs. log([OII]λλ3726,3729/[OIII]λ5007) diagram the points are located in the side of the diagram occupied by clouds ionized by a power-law continuum. This means that the main ionization mechanism in the NLR/ENLR of both galaxies is the photo-ionization by the active nucleus. However, it must be noticed that in most diagrams the points are not randomly distributed, but they follow a well defined trend. The points corresponding to high values of | V | tend to have a lower [OIII]λ5007/Hβ ratio but a higher ratio between the low ionization lines and Hα. For this reason some points are located close to the LINER region (e.g. Fig. <ref>). This behaviour might be explained as a consequence of shocks, which affect the spectrum by lowering the ionization degree of the gas (low [OIII]λ5007/Hβ ratio) and increasing the amount of ions in a low ionization state (high [OI]λ6300/Hα, [NII]λ6584/Hα and [SII]λλ6716,6731/Hα). This is in agreement with the ΔE vs. [OII]λλ3726,3729/[OIII]λ5007 diagram, where the points corresponding to those bins are close to the shocks side of the plot. Moreover, gas moving at relatively low speed does not show any sign of shocks. Most of the line flux is contained within these bins, therefore, when the diagnostic ratios are calculated using the whole flux of the lines, almost no sign of shock-ionized gas is observed. In fact, analysing the diagnostic diagrams in Fig. <ref>, it is possible to see that both in IC 5063 and in NGC 7212 all the points are clustered in the Seyfert region of the plots, without any peculiar behaviour. This does not mean that there is no contribution by shocks at low V, but that it is negligible with respect to photo-ionization.

§.§ Ionization parameter and extinction

The ionization parameter, defined as the number of ionizing photons reaching a cloud per number of electrons in the gas, can be used to estimate the ionization degree of the gas. For an isotropic source it is defined as:

U = Q/(4π r^2 c n_e),

where Q is the total number of ionizing photons emitted by the source, r is the distance of the cloud from the ionizing source, c is the speed of light and n_e is the electron density. To estimate this parameter we used the empirical relation from <cit.>:

log U = -2.74 - log([OII]λλ3726,3729/[OIII]λ5007).

The main problem of this relation is that the [OII]λλ3726,3729/[OIII]λ5007 ratio strongly depends on the extinction correction. To check the reliability of the obtained values we compared the log U vs. velocity (V) and log U vs. distance from the nucleus (R) plots with similar plots involving the [OIII]λ5007/Hβ ratio. Even if this ratio is not a real ionization parameter, it is sensitive to the ionization degree of the gas and it has the advantage of being independent from the extinction correction, because the two lines are close in wavelength. When the two diagrams are compatible, the extinction correction applied to the fluxes is reliable.

Fig. <ref> and Fig. <ref> show the plots of log U, log([OIII]λ5007/Hβ), A(V) and n_e as a function of velocity. Similar plots obtained with the total flux of the lines are shown in Fig.
<ref>. The behaviours of log U and log([OIII]/Hβ) are often similar. This confirms the overall reliability of the extinction correction, even though the errorbars on A(V) increase rapidly with | V |. Almost all the differences between the two plots can be explained as an effect of errors in measuring A(V) at high | V |, due to a low SNR in the Hβ wings. An example is the region N2 of IC 5063 (Fig. <ref> left, blue solid line): the behaviour of log U reflects perfectly that of the extinction.

In IC 5063, A(V) usually shows a peak at low velocities, then it decreases and the errorbars are compatible with a flat trend. The A(V) vs. distance (R) plot is similar: we found a maximum extinction of 2.4 mag in the CN region and then A(V) progressively decreases at increasing R. In NGC 7212 we observed the same behaviour of the A(V) vs. R plot, but the extinction is generally 1 mag lower. The high A(V) values of IC 5063 are expected, because the galaxy shows a large dust lane which crosses the whole galaxy and is aligned with the ENLR. The A(V) vs. V plots show that the extinction is almost constant at all velocities. However, the errorbars are compatible with a constant behaviour also in those velocity bins.

In principle, U (eq. <ref>) is not expected to be dependent on velocity, because there is no direct link between these two quantities. However, if the clouds are accelerated by radiation pressure and ionized by a continuum which is attenuated by an absorbing medium with varying column density, log U is expected to depend on the gas velocity <cit.>. This kind of behaviour is observed in some regions of our galaxies. In Fig. <ref> it is possible to see that in the CN, N0 and N1 regions of IC 5063 the ionization parameter decreases by about one order of magnitude between V = -600 km s^-1 and V = 500 km s^-1. On the basis of the results from <cit.> and <cit.>, this might mean that the blueshifted gas is irradiated directly by the AGN, while the redshifted gas is ionized by an attenuated continuum. In the S0, S1 and S2 regions log U increases in the opposite direction, from V = -400 km s^-1 to V = 200 km s^-1. The velocity range is quite narrow because at higher | V | the effect of the extinction correction dominates with respect to the real behaviour of the ionization parameter. However, the fact that in these regions the ionization increases with V is clearly confirmed by the log([OIII]/Hβ) vs. V plot.

<cit.> linked the ionization parameter dependence on velocity to the geometry of the ionization cones. If the ionization cones have a hollow bi-conical shape and one of their edges is aligned with the galaxy disk, such as in NGC 1068 <cit.>, this edge should be irradiated by a continuum attenuated by the dust present in the disk, while the rest of the cone should be excited by the non-absorbed continuum. Therefore, if the line of sight intercepts the ionization cone at the proper angle, it is possible to see the velocity dependence of the ionization parameter. This explanation seems to be in contrast with what was said in the previous section. However, it is possible for the two phenomena to coexist. The attenuation of the continuum contributes in decreasing the ionization degree of the gas, as also seen in the diagnostic diagrams, while the presence of shocks causes the points in the ΔE diagram to get close, and often to cross, the shock–power law threshold (Fig.
<ref>). The geometrical shape of the ionization cones of IC 5063 is not known but, combining our results with Ozaki's conclusions, the behaviour of the ionization parameter is likely the result of a hollow bi-conical shape. This shape could be explained as the effect of the radio-jet expanding in a clumpy medium and forming a cocoon around it, which drives the gas away from the jet axis <cit.>.

In NGC 7212, U can be considered independent of the velocity in every region. All the small deviations from a constant value are related to variations of the extinction. This might mean that the three-dimensional shape of the ionization cones is different from that of IC 5063, or that our line of sight intercepts the object at a different angle. The only region with a peculiar behaviour is N2 (Fig. <ref>). In this region log U and log([OIII]/Hβ) are quite different: U slightly decreases from V = -300 km s^-1 to V = 400 km s^-1, while log([OIII]/Hβ) has a parabolic shape. A possible reason might be the low SNR of the Hβ wings, which causes an overestimation of its flux at high velocities and an underestimation of the log([OIII]/Hβ) ratio. Our results are well in agreement with <cit.>, who found log([OIII]/Hβ) ∼ 1 in the whole ENLR. The observed scatter of the ionization parameter is compatible with an almost constant trend.

§.§ Temperature and densities

We were able to obtain, in most regions, the average temperature and density of the gas in two different ionization regimes. In particular, we used [OIII]λλ4959,5007, [OIII]λ4363 and [ArIV]λλ4711,4740 for the gas with a medium ionization degree, and [OII]λλ3726,3729, [OII]λ7320 and [SII]λλ6716,6731 for the gas with a low ionization degree. Unfortunately, some of the lines necessary to estimate temperatures and densities were so faint that they could not be divided in velocity bins (e.g. [OIII]λ4363, [ArIV]λλ4711,4740). Therefore, we were forced to use the total flux of these lines and to calculate the averaged temperature and density. Only with the [SII]λλ6716,6731 lines was it possible to estimate the density as a function of velocity. We proceeded using iteratively the appropriate IRAF task, until we obtained stable values for both temperature and density. Once the final results were reached, we used the average temperature of the low ionization gas to attempt to calculate n_e from the [SII] lines in the velocity bins. All the obtained results are shown in Fig. <ref>, <ref> and <ref>. Table <ref> and Table <ref> report the values obtained with the total flux of the lines. Due to the errors on the line ratios, what we really are interested in is the order of magnitude of these quantities. For this reason we rounded off the measured quantities to the nearest hundred K for the temperature and to the nearest multiple of 50 cm^-3 for the density. The observed temperature is of the order of 10000 K, a typical temperature in photo-ionized regions <cit.>. In IC 5063 both temperature and density of the medium ionization gas are slightly higher than those of the low ionization gas, while for NGC 7212 the situation is less clear. NGC 7212 spectra had a lower SNR than the IC 5063 ones, especially in the external regions, therefore it was not possible to measure the weak lines needed for the calculations.
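The same iterative temperature–density determination can be sketched with the PyNeb package, our substitution for the IRAF task used here; the input ratios below are illustrative, not measured values:

import pyneb as pn

S2 = pn.Atom('S', 2)
O3 = pn.Atom('O', 3)

te, ne = 10000.0, 100.0   # starting guesses (K, cm^-3)
r_sii = 1.15              # illustrative [SII] 6716/6731 ratio
r_oiii = 80.0             # illustrative [OIII] (4959+5007)/4363 ratio
for _ in range(5):        # iterate until T_e and n_e stabilize
    ne = S2.getTemDen(r_sii, tem=te, wave1=6716, wave2=6731)
    te = O3.getTemDen(r_oiii, den=ne,
                      to_eval='(L(4959) + L(5007)) / L(4363)')
print(te, ne)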
In this galaxy it is worth noticing that the temperature of the low ionization gas measured in the central region of the galaxy is close to 15000 K. Following <cit.>, high temperatures of the low ionization gas might be a hint of jet–ISM interaction. To calculate the density in the regions where we could not directly measure the temperature, we used the typical value T = 10000 K for photo-ionized gas <cit.>. The density measurement depends weakly on the temperature <cit.>, and this value is close enough to the mean T_[OII] measured from Table <ref> and <ref> (∼11800 K) that the difference is negligible for our purposes. After that, we used the temperatures obtained with the [OII] lines (Table <ref> and Table <ref>) to measure the electron density with the [SII] lines as a function of velocity. The results are shown in Fig. <ref> and Fig. <ref>. Typical values are log n_e([SII]) = 2.0–3.0 (n_e in cm^-3), but in some cases we measured larger and smaller values, especially at high | V |, where the SNR is low and the effects of the deblending can affect the results. In some regions the situation is not clear (e.g. N1 and N2 regions in IC 5063, Fig. <ref>) and these properties will be further investigated with SUMA simulations.

§.§ Line profiles

The analysis of the line profiles, on the basis of the emitting gas conditions estimated in the previous sections, can provide detailed information on the kinematics and the distribution of the clouds. For each region we compared the profiles of some of the brightest lines, normalized to their peak emission, to study how they change. Fig. <ref>–<ref> and Fig. <ref>–<ref> show, from left to right, the comparison between: [OIII]λ5007 and Hβ; [OI]λ6300, [OII]λ3726 and [OIII]λ5007; [OII]λ3726, [OII]λ3729 and [OIII]λ5007; [SII]λ6716, [SII]λ6731 and [OIII]λ5007, for IC 5063 and NGC 7212 respectively. The A(V) profile measured in each region is reported under the corresponding panel, to evaluate whether possible differences between the line profiles could be caused by extinction. We also used these plots to check the results of the deblending process, comparing the lines of the deblended doublets to [OIII]λ5007 to highlight potential differences.

The line profiles can significantly change from region to region. In the external regions they are quite narrow and relatively smooth, but they often show a prominent blue or red wing (Fig. <ref>, <ref>). NGC 7212 shows broader and more disturbed profiles than IC 5063 in the external regions: the FWHM of the N2 region of NGC 7212 and IC 5063 are 300 and 170 km s^-1, respectively. The spectra were all shifted to rest frame using the stellar kinematics but, outside the nucleus, the main peak of each line is often shifted toward longer or shorter wavelengths by ∼100 km s^-1, probably because of bulk motions of gas. In general, the complexity of the line profiles might be related to the interaction of jets with the ISM, as better explained at the end of Sec. <ref>, considering the results of the composite modelling.

§.§.§ IC 5063

IC 5063 shows emission outside the nucleus (regions S0 and S1, Fig. <ref>, bottom panel) characterized by two well resolved peaks at ΔV ∼ 200 km s^-1, which can be observed in all the emission lines (Fig. <ref> and <ref>). A further analysis of the emission lines of the southern regions (CS, S0, S1, S2) shows a connection between their profiles. In the CS region, which represents the southern part of the nucleus (Fig. <ref>), the emission lines are quite narrow and smooth, but there is a small asymmetry at V ∼ 180 km s^-1. In the next region (S0, Fig.
<ref>) there is a secondary peak at V ∼ 140 km s^-1, weaker than the main peak. In the S1 region (Fig. <ref>) the red peak becomes the strongest one, while the blue peak starts to weaken, becoming a blue wing in the S2 region (Fig. <ref>). This evolution indicates the presence of two distinct kinematic components. The blue one could be associated with an outflow which dominates the nuclear emission and then becomes weaker at larger distances from the nucleus. These two peaks correspond to the two narrow components found by <cit.> near the south-east hotspot. In each region the line profiles are very similar for all the considered lines, except in S1. Here, all the lines show a secondary peak which is 10 to 20 per cent stronger with respect to the same feature visible in [OIII]λ5007 (Fig. <ref>), with the exception of [OI]λ6300, which is very similar to the [OIII] line. On the other side of the galaxy the line profiles do not show any secondary peak, but in the CN and N0 regions we observe a very broad component (FWHM ∼ 900 km s^-1), which can be identified with the well known gas outflow of the galaxy west hotspot <cit.>. In the CN region each line has a different shape (Fig. <ref>). All the lines clearly show a narrow component and the broad asymmetric component of the outflow. The relative peak intensity of the two changes as a function of the line: [OIII]λ5007 is characterized by the weakest broad component, while [SII]λ6716 has the strongest one. Finally, the N0 region (Fig. <ref>) shows very similar profiles of Hβ and [OIII]λ5007, but broader profiles of the low ionization oxygen and sulphur lines.

§.§.§ NGC 7212

NGC 7212 is characterized by a more complex gas kinematics. The emission lines are more asymmetric and disturbed. In the northern regions there is a good agreement between the shapes of all the studied lines, except for the N0 region (Fig. <ref>), where [OI]λ6300 is significantly narrower than the other oxygen lines. All the lines show a blue component which becomes weaker towards the external regions of the galaxy. The southern regions deserve a more detailed analysis. The lines have a complex profile characterized by multiple kinematic components. The CS region (Fig. <ref>) has a double peak, separated by ΔV ∼ 150 km s^-1, which is visible in all lines except in the [OII] doublet, where it becomes an asymmetry. It is not clear whether this asymmetry is real or an effect of the deblending process, which is not able to recover the secondary peak starting from the blended lines. Such a feature could be caused by a strong extinction of the weaker peak, but this is not confirmed by the A(V) profile. In the S0 region (Fig. <ref>) the lines show very different profiles. Hβ has a double-peaked shape (ΔV ∼ 120 km s^-1) that is not visible in [OIII]λ5007, which shows only a wing. The secondary peak is observed in the [SII] doublet but not in the oxygen lines. They have the blue bump, but its relative intensity changes: [OII]λ3726 has the strongest bump (∼80 per cent of the peak intensity), followed by [OI]λ6300 (∼60 per cent) and by [OIII]λ5007 (∼40 per cent). Finally, in the S1 region (Fig. <ref>) Hβ and all the other low ionization lines (except [OII]λ3729) show an asymmetric profile, with a blue-shifted peak (V ∼ -150 km s^-1) and two bumps in the red part of the lines. On the other hand, [OIII]λ5007 shows a peak at V ∼ -50 km s^-1 and a relatively strong bump (>80 per cent of the peak intensity) at the velocity where the other lines have their peak.
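In practice, the profile comparisons shown in the figures reduce to interpolating each line onto a common velocity grid and normalizing to the peak; a minimal sketch (array names are illustrative):

import numpy as np

def normalized_profile(wave, flux, line_rest, v_grid):
    # Convert wavelengths to velocities around the line rest wavelength,
    # interpolate onto the common grid and normalize to the peak.
    c = 299792.458  # km/s
    vel = c * (wave - line_rest) / line_rest
    prof = np.interp(v_grid, vel, flux)
    return prof / prof.max()

v_grid = np.arange(-1000.0, 1000.0, 25.0)
# prof_oiii = normalized_profile(wave, flux_oiii, 5006.84, v_grid)
# prof_hb   = normalized_profile(wave, flux_hb, 4861.33, v_grid)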
§.§ Detailed modelling of the observed spectra

Modelling the spectra with pure photo-ionization models gives satisfactory results for intermediate ionization level lines. However, collisional phenomena can be critical in the calculation of the spectra emitted by high velocity gas. The physical properties of the galaxies, such as the complex structure of the emission lines, the presence of merging and the possible interaction between jets and ISM, among others, suggested the use of a code which takes into account the effects of both photo-ionization and shocks. The SUMA code <cit.> simulates the physical conditions in an emitting gaseous cloud under the coupled effect of photo-ionization from the radiation source and shocks. The line and continuum emission from the gas are calculated consistently with dust-reprocessed radiation in a plane-parallel geometry. The main physical properties of the emitting gas and the element abundances are accounted for.

§.§.§ Input parameters

The parameters which characterise the shock are roughly suggested by the data: e.g. the shock velocity V_s by the velocity bin, and the pre-shock density n_0 by the characteristic line ratios, together with the pre-shock magnetic field B_0. We adopt B_0 = 10^-4 G, which is suitable for the NLR of AGN <cit.>. Changes in B_0 are compensated by opposite changes in n_0.

The ionizing radiation from a source, external to the emitting cloud, is characterized by its spectrum and by the flux intensity. The flux is calculated at 440 energy bands, ranging between a few eV and keV. If the photo-ionization source is an active nucleus, the input parameter that refers to the radiation field is the power-law flux from the active center, F, in number of photons cm^-2 s^-1 eV^-1 at the Lyman limit, with spectral indices α_UV = -1.5 and α_X = -0.7. It was found by modelling the spectra of many different AGN that these indices were, in general, the most suitable <cit.>. For UV line ratios of a sample of galaxies at z > 1.7, <cit.> found that α_UV = -1 provided a very good fit. The power-law in the X-ray domain was found to be flatter by the observations of local galaxies <cit.>. Nevertheless, for all the models presented in the following, we will adopt α_UV = -1.5 and α_X = -0.7, recalling that the shocked zone also contributes to the emission-line intensities; therefore our results are less dependent on the shape of the ionizing radiation. F is combined with the ionization parameter U by:

U = (F / (n c (α_UV - 1))) (E_H^(-α_UV + 1) - E_C^(-α_UV + 1)),

where E_H is the H ionization potential, E_C is the high energy cutoff, n the density, α_UV the spectral index and c the speed of light.

In addition to the radiation from the primary source, the diffuse secondary radiation created by the hot gas is also calculated, using 240 energy bands for the spectrum. The primary radiation source is independent but it affects the surrounding gas. In contrast, the secondary diffuse radiation is emitted from the slabs of gas heated by the radiation flux reaching the gas and, in particular, by the shock. Primary and secondary radiations are calculated by radiation transfer downstream. In our model the gas region surrounding the radiation source is not considered as a unique cloud, but as an ensemble of filaments. The geometrical thickness of these filaments is an input parameter of the code (D), which is calculated consistently with the physical conditions and element abundances of the emitting gas. D determines whether the model is radiation-bounded or matter-bounded.
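The relation between U and F quoted above can be evaluated directly; a small sketch with the quantities in the units given in the text (the value of the high-energy cutoff used below is an illustrative assumption):

C_CM = 2.998e10   # speed of light, cm s^-1

def ionization_parameter(F, n, alpha_uv=-1.5, E_H=13.6, E_C=1000.0):
    # U from the expression in the text: F in photons cm^-2 s^-1 eV^-1
    # at the Lyman limit, n in cm^-3; E_H and E_C must be expressed
    # consistently with the per-eV normalization of F.
    return (F / (n * C_CM * (alpha_uv - 1.0))) * \
           (E_H ** (-alpha_uv + 1.0) - E_C ** (-alpha_uv + 1.0))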
The abundances of He, C, N, O, Ne, Mg, Si, S, Ar and Fe, relative to H, are accounted for as input parameters. Previous results lead to O/H close to solar for most AGN and for most HII regions <cit.>. We will conventionally define "solar" relative abundances as (O/H)_⊙ = 6.6–6.7 × 10^-4 and (N/H)_⊙ = 9 × 10^-5 <cit.>, which were found suitable for local galaxy nebulae. Moreover, these values are included between those of <cit.> (8.5×10^-4 and 1.12×10^-4, respectively) and <cit.> (4.9×10^-4 and 6.76×10^-5, respectively). The fractional abundances of all the ions in all ionization levels are calculated in each slab by resolving the ionization equations. The dust-to-gas ratio (d/g) is an input parameter. Mutual heating and cooling between dust and gas affect the temperatures of gas in the emitting region of the cloud. Fig. <ref> and Fig. <ref> show the input parameters, as a function of V, used to model the observed spectra.

§.§.§ Calculation process

The code accounts for the direction of the cloud motion relative to the external photo-ionizing source. A parameter switches between inflow (the radiation flux from the source reaches the shock front edge of the cloud) and outflow (the flux reaches the edge opposite to the shock front). The calculations start at the shock front, where the gas is compressed and thermalized adiabatically, reaching the maximum temperature in the immediate post-shock region (eq. <ref>):

T ∼ 1.5 × 10^5 (V_s / 100 km s^-1)^2 K.

T decreases downstream following the cooling rate and the gas recombines. The downstream region is cut into a maximum of 300 plane-parallel slabs with different geometrical widths, calculated automatically in order to account for the temperature gradient. In each slab, the compression (n/n_0) is calculated by combining the Rankine–Hugoniot equations for the conservation of mass, momentum and energy throughout the shock front <cit.>. Compression ranges between 4 (the adiabatic jump) and > 100, depending on V_s and B_0. The stronger the magnetic field, the lower the compression downstream, while a higher shock velocity corresponds to a higher compression. In pure photo-ionization models, the density n is constant throughout the nebula. In models accounting for the shocks, both the electron temperature T_e and density n_e show a characteristic profile throughout each cloud (see for instance Fig. <ref>, left and right panels). After the shock, the temperature reaches its upper limit at a certain distance from the shock front and remains nearly constant, while n_e decreases following recombination.
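The jump condition quoted above is easy to evaluate for the velocities relevant here; a one-line sketch:

def postshock_temperature(v_s):
    # Immediate post-shock temperature from the adiabatic jump condition
    # quoted in the text: T ~ 1.5e5 (V_s / 100 km/s)^2 K.
    return 1.5e5 * (v_s / 100.0) ** 2

for v_s in (100.0, 300.0, 600.0):   # shock velocities in km/s
    print(v_s, "km/s ->", postshock_temperature(v_s), "K")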
The cooling rate is calculated in each slab by free-free (bremsstrahlung), free-bound and line emission. Therefore, the most significant lines must be calculated in each slab, even if only a few of them are observed, because they contribute to the temperature slope downstream. The primary and secondary radiation spectra change throughout the downstream slabs, each of them contributing to the optical depth. In each slab of gas the fractional abundance of the ions of each chemical element is obtained by solving the ionization equations, which account for photo-ionization (by the primary and diffuse secondary radiations and collisional ionization) and for recombination (radiative, dielectronic), as well as for charge transfer effects, etc. The ionization equations are coupled to the energy equation when collision processes dominate <cit.> and to the thermal balance if radiative processes dominate. The latter balances the heating of the gas, due to the primary and diffuse radiations reaching the slab, with the cooling, due to line emission, dust collisional ionization and thermal bremsstrahlung. The line intensity contributions from all the slabs are integrated throughout the cloud. In particular, the absolute line fluxes referring to the ionization level i of element K are calculated by the term n_K(i), which represents the density of the ion X(i). We consider that n_K(i) = X(i) [K/H] n_H, where X(i) is the fractional abundance of the ion i calculated by the ionization equations, [K/H] is the relative abundance of the element K to H and n_H is the density of H (by number). In models including shocks, n_H is calculated by the compression equation in each slab downstream. So the element abundances relative to H appear as input parameters.

To obtain the N/H relative abundance for each galaxy, we consider the charge exchange reaction N^+ + H ⇌ N + H^+. Charge exchange reactions occur between ions with similar ionization potentials (I(H^+) = 13.54 eV, I(N^+) = 14.49 eV and I(O^+) = 13.56 eV). It was found that the N ionization equilibrium in the ISM is strongly affected by charge exchange. This process, as well as O^+ + H ⇌ O + H^+, is included in the SUMA code. The N^+/N ion fractional abundance follows the behaviour of O^+/O, so, comparing the [NII]/Hβ and the [OII]/Hβ line ratios with the data, the N/H relative abundances can be easily determined <cit.>.

Dust grains are coupled to the gas across the shock front by the magnetic field. They are heated radiatively by photo-ionization and collisionally by the gas, up to the evaporation temperature (T_dust ≥ 1500 K). The distribution of the grain radii in each of the downstream slabs is determined by sputtering, which depends on the shock velocity and on the gas density. Throughout the shock fronts and downstream, the grains might be completely destroyed by sputtering. The calculations proceed until the gas cools down to a temperature below 10^3 K (the model is radiation bounded), or they are interrupted when all the lines reproduce the observed line ratios (the model is matter bounded). In case photo-ionization and shocks act on opposite edges, i.e. when the cloud propagates outwards from the radiation source, the calculations require some iterations, until the results converge. In this case the cloud geometrical thickness plays an important role. Actually, if the cloud is very thin, the cool gas region may disappear, leading to low or negligible low ionization level lines.

Summarizing, the calculations start in the first slab downstream, adopting the input parameters given by the model.
Then, the code calculates the density, the fractional abundances of the ions at each level for each element, and the free-free, free-bound and line emission fluxes. It calculates T_e by thermal balancing or by the enthalpy equation, and the optical depth of the slab, in order to obtain the primary and secondary fluxes by radiation transfer for the next slab. Finally, the parameters calculated in slab i are adopted as initial conditions for slab i+1. Integrating the line intensities from each slab, the absolute fluxes of the lines and of the bremsstrahlung are obtained at the nebula. The line ratios to a certain line (generally Hβ for the optical-UV spectrum) are then calculated and compared with the observed data, in order to avoid problems of distances, absorption, etc. The number of lines calculated by the code (over 300) does not depend on the number of the observed lines, nor does it depend on the number of input parameters, but rather on the elements composing the gas.

§.§.§ Grids of models

The physical parameters are combined throughout the calculation of the forbidden and permitted lines emitted from a shocked nebula. The ranges of the physical conditions in the gas are deduced, as a first guess, from the observed line ratios, because they are more constraining than the continuum SED. Grids of models are calculated for each spectrum, modifying the input parameters gradually, in order to reproduce as closely as possible all the observed line ratios. At each stage of the modelling process, if a satisfactory fit is not found for all the lines, a new iteration is initiated with a different set of input parameters. When one of the line ratios is not reproduced, we check how it depends on the physical parameters and decide accordingly how to change them, considering that each ratio has a different weight. The input parameters are therefore refined by the detailed modelling of the spectra. The spectra of NGC 7212 and IC 5063 are rich in number of lines, therefore the calculated spectra are strongly constrained by the observed lines. They are different in each of the observed spectra, revealing different physical conditions from region to region.

The models selected by the fit of the line spectrum are cross-checked by fitting the continuum SED. In the UV range the bremsstrahlung from the nebula is blended with the black body emission from the star population background. The maximum frequency of the bremsstrahlung peak in the UV – X-ray domain depends on the shock velocity. In the IR range dust-reprocessed radiation is generally seen. In the radio range the synchrotron radiation, created by the Fermi mechanism at the shock front, is easily recognized by the slope of the SED.

We generally consider that the observed spectrum is satisfactorily fitted by a model when the strongest lines are reproduced by the calculation within 20 per cent and the weak ones within 50 per cent. The final gap between observed and calculated line ratios is due to observational errors, both random and systematic, as well as to the uncertainties of the atomic parameters adopted by the code (such as recombination coefficients, collision strengths etc., which are continuously updated) and to the choice of the model itself. The set of input parameters which leads to the best fit of the observed line ratios and continuum SED determines the physical and chemical properties of the emitting gas. They are considered as the "results" of the modelling (Table <ref>).
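The acceptance criterion can be expressed compactly; a schematic Python sketch of the model-selection loop over a pre-computed grid (SUMA itself is not called here: dictionaries stand in for its output, and the strong/weak line split is illustrative):

import numpy as np

def best_model(observed, grid):
    # Pick from a pre-computed grid the model whose line ratios best
    # match the observations, with the tolerance criterion of the text
    # (20 per cent on strong lines, 50 per cent on weak ones).
    # `observed` and each grid entry map line names to ratios to Hbeta;
    # `grid` maps parameter tuples (V_s, n_0, F, D) to such dictionaries.
    strong = {'[OIII]5007', 'Halpha', '[NII]6584'}   # illustrative split
    best, best_chi = None, np.inf
    for params, model in grid.items():
        ok, chi = True, 0.0
        for line, obs in observed.items():
            rel = abs(model[line] - obs) / obs
            tol = 0.2 if line in strong else 0.5
            ok &= rel <= tol
            chi += rel ** 2
        if ok and chi < best_chi:
            best, best_chi = params, chi
    return best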
§.§.§ Choice of NGC 7212 and IC 5063 parameters

We start the modelling by trying to reproduce the observed [OIII]λλ5007,4959/Hβ line ratio (λλ5007,4959, hereafter 5007+; the + indicates that the doublet 5007,4959 is summed up), which is in general the highest ratio, by readjusting F and V_s (V_s, however, is constrained within a small range by the observed V). The higher F, the higher the [OIII]/Hβ and the [OIII]/[OII] line ratios, as well as HeII/Hβ. Moreover, a high F maintains the gas ionized far from the source, yielding enhanced [OI] and [SII] lines. These lines behave similarly, because the first ionization potential of S (10.36 eV) is lower than that of O (13.61 eV).

Then, we consider the [OII]λλ3726,3729 doublet (hereafter 3726+). If the flux from the active centre is low (F ≤ 10^9 ph. cm^-2 s^-1 eV^-1), a shock-dominated regime is found, which is characterised by relatively high [OII]/[OIII] (≥ 1). [OII] can be drastically reduced by collisional de-excitation at high electron densities (n_e > 3000 cm^-3).

The gas density is a crucial parameter. In each cloud, it reaches its upper limit downstream and remains nearly constant, while the electron density decreases following recombination. A high density, increasing the cooling rate, speeds up the recombination process of the gas, enhancing the low-ionization lines. Indeed, each line is produced in a region of gas at a different n_e and T_e, depending on the ionization level and on the atomic parameters characteristic of the ion. The density n, which can be roughly inferred from the [SII]6716/6731 doublet ratio, is related to n_0 by the compression downstream (n/n_0), which ranges between 4 and ∼100, depending on V_s and B_0. The [SII] lines are also characterised by a relatively low critical density for collisional de-excitation. In some cases the [SII]6716/6731 line ratio varies from > 1 to < 1 throughout a relatively small region, since the [SII] line ratios depend on both the temperature and the electron density of the emitting gas <cit.>, which in models accounting for the shock are far from constant throughout the clouds. Thus, even sophisticated calculations, which approximately reproduce the highly inhomogeneous conditions of the gas, lead to some discrepancies between the calculated and observed line ratios. Unfortunately, there are no data for S lines from higher ionization levels, which could indicate whether the choice of the model is misleading or different relative abundances should be adopted. We recall that sulphur can be easily depleted from the gaseous phase and trapped into dust grains and molecules.

Finally, the results of the modelling are shown in Tables C1–C32 of the on-line material. We show here Table <ref> as an example of a typical table. We can conclude that composite models are able to reproduce the observed spectra accurately; in particular, they well reproduce the high [OI]/Hβ line ratios. For instance, the spectrum in Table C9 of the on-line material, corresponding to the emitting cloud in the CS region of IC 5063 (bin 4), shows a V_s of ∼400 km s^-1. The cloud moves outwards from the active centre, therefore the photo-ionizing radiation reaches the edge opposite to the shock front. The profiles of T_e, n_e and of the O^++/O, O^+/O and O^0/O fractional abundances throughout the cloud are shown in Fig. <ref>. Reducing the shock velocity to V_s = 50 km s^-1 and even lower, we can simulate the case of pure photo-ionization. With these models we can still obtain a good fit of [OIII]/Hβ and [OII]/Hβ by increasing the pre-shock density and the photo-ionization flux.
Reducing the shock velocity to V_s = 50 km s^-1 and even lower, we can simulate the case of pure photo-ionization. With these models we can still obtain a good fit of [OIII]/Hβ and [OII]/Hβ by increasing the pre-shock density and the photo-ionization flux. However, [OI]/Hβ will then be lower than observed by a factor of ∼10, indicating that the shock velocity constrains the spectra, particularly at high V_s. Another interesting result of the modelling is the high fragmentation of the matter, which is revealed by the large range of geometrical thicknesses (D) used to model the spectra. This might be explained by the interaction between the jets and the ISM. The interaction causes shocks and creates turbulence at the shock front, producing fragmentation of the matter. These clouds move in a turbulent regime which can cause the complex line profiles described in Sec. <ref>.

§.§ The spectral energy distribution of the continuum

To cross-check the results obtained by the detailed modelling of the line spectra, we have gathered from the NED the data corresponding to the continuum spectral energy distribution (SED) of IC 5063 and NGC 7212. Fig. <ref> shows the SED obtained from these data. Error bars are not shown, for the sake of clarity, at frequencies < 10^17 Hz. We have selected some models which, on average, best reproduce the line ratios at different V_s (100, 300 and 600 km s^-1) and we have compared them with the data in Fig. <ref>. At ν > 10^17 Hz the data are fitted by the power-law flux from the AGN. The flux is reprocessed by gas and dust within the clouds and is emitted as free-free and free-bound (and line) radiation at lower frequencies. The radiation reprocessed by dust grains appears in the IR. In the specific case of IC 5063, the model corresponding to low V_s and low F reproduces the lower limit of the data in the UV-optical range. Most of the data are nested inside the black-body radiation flux corresponding to a temperature of T = 5000 K, which represents the background contribution of relatively old stars such as red giants and Mira variables. The near-IR side of the IR bump is fitted by the black-body re-radiation flux corresponding to T = 200 K. It represents emission from a large amount of warm dust produced in the stellar winds of red giant stars. Dust is heated collisionally and radiatively in the clouds. The grains are destroyed by sputtering at high V_s (> 200 km s^-1) throughout the shock front. Dust is heated to a maximum of T = 66 K in the V_s = 300 km s^-1 cloud and to a maximum of 90 K in the V_s = 600 km s^-1 cloud, so the reprocessed radiation peaks at lower ν. In the radio range, the data at ν ≥ 10^10 Hz are fitted by the bremsstrahlung and by the re-radiation from dust, while the data at lower ν follow a power-law flux with spectral index 0.75. It represents the synchrotron radiation flux created by the Fermi mechanism at the shock front. Interestingly, the same models are used to reproduce the continuum SED of both galaxies. In the far radio range of NGC 7212, self-absorption of the flux is evident. Unfortunately, the corresponding data for IC 5063 are lacking.
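The decomposition just described can be sketched numerically. The toy script below combines the two black bodies at T = 5000 K and T = 200 K with a radio power law of spectral index 0.75; only the component shapes come from the text, while the normalisations are arbitrary assumptions of ours:

```python
# Toy decomposition of the continuum SED described above.
# Normalisations are arbitrary; only the functional forms follow the text.
import numpy as np

H = 6.626e-27; K = 1.381e-16; C = 2.998e10   # cgs constants

def blackbody(nu, T):
    """Planck function B_nu (erg s^-1 cm^-2 Hz^-1 sr^-1)."""
    return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

def synchrotron(nu, alpha=0.75, norm=1.0):
    """Radio power law F_nu ~ nu^-alpha (Fermi-mechanism synchrotron)."""
    return norm * nu**(-alpha)

nu = np.logspace(8, 15, 200)                 # 10^8 - 10^15 Hz
old_stars = blackbody(nu, 5000.0)            # background stellar population
warm_dust = blackbody(nu, 200.0)             # near-IR side of the IR bump
radio     = synchrotron(nu, norm=1e-10)      # arbitrary normalisation
sed = old_stars + warm_dust + radio          # schematic total
print(f"nu*F_nu peaks near {nu[np.argmax(sed * nu)]:.2e} Hz")
```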
§ CONCLUDING REMARKS

We studied the NLR/ENLR gas of two nearby Seyfert 2 galaxies: IC 5063 and NGC 7212. We analysed high-resolution spectra to highlight the different kinematic components of the emission lines and to study the properties of the gas as a function of velocity. We produced diagnostic diagrams, we studied the ionization parameter and the physical conditions of the gas resulting from the line ratios, and we compared the observations to detailed models of the spectra, obtaining the following results:

* The diagnostic diagrams show that the main ionization mechanism of the gas is photo-ionization by the power-law continuum produced by the AGN. However, high-velocity gas seems to lie closer to the LINER/shock region of the diagrams than low-velocity gas. This could suggest that there is some contribution of shocks to the ionization of the high-velocity gas.
* In the CN, N0 and N1 regions of IC 5063 the ionization parameter decreases by about one order of magnitude between V = -600 and V = 500 km s^-1, which means that the blueshifted gas is irradiated directly by the AGN, while the redshifted gas is ionized by an attenuated continuum. In the S0, S1 and S2 regions, U increases in the opposite direction, from V = -400 to V = 200 km s^-1. The velocity range is quite narrow because at higher |V| the effects of the extinction correction dominate over the real behaviour of the ionization parameter. The increase of the ionization with V is clearly confirmed by the log([OIII]/Hβ) vs V plot. This behaviour of the ionization parameter might be explained by assuming a hollow bi-conical shape of the ENLR with one of the edges aligned with the galaxy disk.
* NGC 7212 shows an ionization parameter which does not depend on velocity. Therefore, it is not possible to say anything about the real geometrical shape of the ENLR.
* The electron temperature and density are obtained from the measured line ratios where possible (Tables <ref> and <ref>). A value of the density was also calculated for each bin by the detailed modelling of the spectra, which also accounts for the shocks (Table <ref> and on-line material). The two results are typically in agreement and are consistent with the properties of photo-ionized gas.
* The SUMA composite-model results show that the O/H relative abundances are close to solar <cit.>.
* Analysing the SED, we identified, although the multiwavelength dataset is not complete, the power-law flux in the radio range created by the Fermi mechanism at the shock front, the bremsstrahlung emitted by the gas downstream, and the black-body fluxes corresponding to the dust-reprocessed radiation and to the old stellar population. The Fermi mechanism and the bremsstrahlung radiation therefore confirm the presence of shocks in both galaxies.
* The analysis of the line profiles shows that the kinematics of both galaxies is quite complex. The profiles change significantly from region to region. In the nuclei of the galaxies they are often relatively broad and characterized by multiple peaks and bumps. In the external regions they become narrower, but they usually show a red or blue wing. The NGC 7212 lines are broader and more disturbed than the IC 5063 lines, but they are characterized by less prominent wings. These are all signs of gas in a turbulent regime.
* The high fragmentation of the clouds derived by SUMA is an index of interaction between the jets and the ISM, and it might also explain the complex gas kinematics.
* The high temperatures that seem to characterize the low-ionization gas of NGC 7212 might be explained by the jet-ISM interaction <cit.>.
* The main peak of the lines outside the nucleus is shifted towards longer or shorter wavelengths, depending on the observed region. It is a sign of consistent bulk motions of the gas with respect to the galaxy stellar component.
* The profile of the lines within each region does not change, with some exceptions due to variations in the ionization degree of the gas or to difficulties in recovering the original shape of the lines during the deblending process.

Finally, we confirmed that this kind of analysis of the line profiles can be a powerful tool to investigate the properties of gas in such complex conditions.
It can show gas properties that a standard analysis would miss, for example the peculiar behaviour of the ionization profile in IC 5063 and the shift of the points corresponding to high |V| in the diagnostic diagrams, which can be associated with the presence of shocks. In particular, the latter cannot be easily observed when the diagnostic diagrams are produced with the whole line flux, because the contribution of the high-|V| gas is negligible with respect to the total line flux.

§ ACKNOWLEDGEMENTS

The authors would like to thank Prof. Raffaella Morganti for her useful comments, and the referee, Dr. Andrew Humphrey, who helped to increase the quality of the paper with his review. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This paper includes data gathered with the 6.5-m Magellan Telescopes located at Las Campanas Observatory, Chile. The STARLIGHT project is supported by the Brazilian agencies CNPq, CAPES and FAPESP and by the France–Brazil CAPES/Cofecub program. Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).

§ LINE PROFILES
§.§ IC 5063
§.§ NGC 7212
§ DIAGNOSTIC DIAGRAMS
Assortment and Price Optimization Under the Two-Stage Luce model

Alvaro Flores[College of Engineering & Computer Science, Australian National University, Australia.] Gerardo Berbeglia[Melbourne Business School, The University of Melbourne, Australia.] Pascal Van Hentenryck[H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, USA.]

December 30, 2023

This paper studies assortment and pricing optimization problems under the Two-Stage Luce model (2SLM), a discrete choice model introduced by <cit.> that generalizes the multinomial logit model (MNL). The model employs a utility function as in the MNL, and a dominance relation between products. When consumers are offered an assortment S, they first discard all dominated products in S and then select one of the remaining products using the standard MNL. This model may violate the regularity condition, which states that the probability of choosing a product cannot increase if the offer set is enlarged. Therefore, the 2SLM falls outside the large family of discrete choice models based on random utility, which contains almost all choice models studied in revenue management. We prove that the assortment problem under the 2SLM is polynomial-time solvable. Moreover, we show that the capacitated assortment optimization problem is NP-hard, but that it admits polynomial-time algorithms for the relevant special cases where (1) the dominance relation is attractiveness-correlated and (2) its transitive reduction is a forest. The proofs exploit a strong connection between assortments under the 2SLM and independent sets in comparability graphs. Finally, we study the associated joint pricing and assortment problem under this model. First, we show that the well-known optimal pricing policy for the MNL can be arbitrarily bad. Our main result in this section is the development of an efficient algorithm for this pricing problem. The resulting optimal pricing strategy is simple to describe: it assigns the same price to all products, except for the one with the highest attractiveness as well as the one with the lowest attractiveness.

§ INTRODUCTION

Revenue Management (RM) is the managerial practice of modifying the availability and the prices of products in order to maximise revenue or profit. The origin of this discipline dates back to the 1970s, following the deregulation of the US airline market.
A large volume of research has been devoted to this area over the last 45 years, with successful results in many industries, ranging from airlines to hospitality and retailing <cit.>. Two main problems lie at the core of RM theory and practice: the optimal assortment problem and the pricing problem. The optimal assortment problem consists of selecting a subset of products to offer customers in order to maximize revenue. Consider, for example, a retailer with limited space allocated to mobile phones. If the store has more than 500 mobile phones that can be acquired through its distributors (in various combinations of brands and sizes) and the mobile phone aisle has the capacity to fit 50 phones on the shelves, the store manager has to decide which subset of products to offer given the product costs and the customer preferences. In order to solve the assortment problem, we need a model to predict how customers select products when they are presented with a set of alternatives. Most models of discrete choice theory postulate that consumers assign a utility to each alternative and, given an offer set, choose the alternative with maximum utility. Different assumptions on the distribution of the utilities lead to different discrete choice models: celebrated examples include the multinomial logit (MNL) <cit.>, the mixed multinomial logit (MMNL) <cit.>, and the nested multinomial logit (NMNL) <cit.>. The multinomial logit model (MNL), also known as the Luce model, is widely used in discrete choice theory. Since the model was introduced by <cit.>, it has been applied to a wide variety of demand estimation problems arising in transportation <cit.>, marketing <cit.>, and revenue management <cit.>. One of the reasons for its success stems from its small number of parameters (one for each product): this allows for simple estimation procedures that generally avoid overfitting problems even when there is limited historical data <cit.>. However, one of the flaws of the MNL is the property known as the Independence of Irrelevant Alternatives (IIA), which states that the ratio between the probabilities of choosing elements x and y is constant regardless of the offered subset. This property does not hold when products cannibalize each other or are perfect substitutes <cit.>. Several extensions of the MNL model have been introduced to overcome the IIA property and some of its other weaknesses; they include the nested multinomial logit and the latent-class MNL model. These models, however, do not handle zero-probability choices well. Consider two products a and b: the MNL model states that the probability of selecting a over b depends on the attractiveness of a relative to the attractiveness of b. Consider the case in which b is never selected when a is offered. Under the MNL model, this means that b must have zero attractiveness. But this would prevent b from being selected even when a is not offered in an assortment. The pricing problem, on the other hand, amounts to determining the prices that a company should offer in order to best meet its objectives (profit maximization, revenue maximization, market-share maximization, etc.), while taking into consideration how customers respond to different prices and the interaction between the price and the intrinsic features that each product possesses. This paper considers both problems for the case in which customers follow the Two-Stage Luce model (2SLM).
The 2SLM was recently introduced by <cit.> and, unlike the MNL, it allows for violations of the IIA property and of regularity <cit.>. The Two-Stage Luce model generalizes the MNL by incorporating a dominance (anti-symmetric and transitive) relation among the alternatives. Under such a relation, the presence of an alternative x may prevent another alternative y from being chosen despite the fact that both are present in the offered assortment. In this case, alternative x is said to dominate alternative y. However, when x is not present, y might be chosen with positive probability if it is not dominated by any other product z. An important application of the 2SLM can be found in assortment problems where there exists a direct way to compare the products over a set of features. For illustration, consider a telecommunication company offering phone plans to consumers. A plan is characterized by a set of features such as the price per month, free minutes in peak hours, free minutes on weekends, free data, the price for additional data, and the price per minute to foreign countries. Given two plans x and y, we say that plan x dominates plan y if the price per month of x is less than that of y and x is at least as good as y in every single feature. In the past, the company offered consumers a certain set of plans S_t each month t such that no plan in S_t is dominated by another plan in S_t. The offered plans, however, were different each month. Using historical data and assuming that consumer preferences can be approximated using a multinomial logit, it is possible to perform a robust estimation procedure to obtain the parameters of such an MNL model. Once the parameters are obtained, the assortment problem consists in finding the best assortment of phone plans S^* that maximizes the expected revenue. A natural constraint in this problem is that no phone plan offered in S^* can be dominated by any other. Section <ref> shows that the problem discussed here can be modelled using the 2SLM, and thus solving this problem reduces to solving an assortment problem under the 2SLM.
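To make the plan-dominance rule concrete, here is a minimal Python sketch; the feature names and values are invented for illustration, and higher values are assumed to be better for every feature other than the monthly price:

```python
# Minimal sketch of the phone-plan dominance rule described above.
# Feature names/values are hypothetical; higher is better except the price.

def dominates(x, y):
    """Plan x dominates plan y: strictly cheaper per month and at least
    as good on every other feature."""
    return (x["price_per_month"] < y["price_per_month"] and
            all(x[f] >= y[f] for f in x if f != "price_per_month"))

plan_a = {"price_per_month": 30, "peak_minutes": 300, "weekend_minutes": 500, "data_gb": 10}
plan_b = {"price_per_month": 35, "peak_minutes": 250, "weekend_minutes": 500, "data_gb": 8}
print(dominates(plan_a, plan_b))   # True: a is cheaper and at least as good everywhere
```

Since the price comparison is strict, the resulting relation is irreflexive, antisymmetric, and transitive, as the 2SLM requires.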
§ CONTRIBUTIONS

The first key contribution is to show that the assortment problem can be solved in polynomial time under the 2SLM. The proof is built upon two unrelated results in optimization: the polynomial-time solvability of the maximum weighted independent set problem in a comparability graph <cit.> and a seminal result by <cit.> that provides an algorithm to solve a class of combinatorial optimization problems with rational objective functions in polynomial time. This is particularly appealing since the 2SLM is one of the very few choice models that goes beyond the random utility model: it allows violations of the property known as regularity, which states that the probability of choosing an alternative cannot increase if the offer set is enlarged. Lab experiments in which the regularity property is violated have been well documented for many decades <cit.>.

The second key contribution is to show that the capacitated assortment problem under the 2SLM is NP-hard, which contrasts with the results on the MNL. We then propose polynomial algorithms for two interesting subcases of the capacitated assortment problem: (1) when the dominance relation is attractiveness-correlated, and (2) when the transitive reduction of the dominance relation can be represented as a forest. The proofs use a strong connection between assortments under the 2SLM and independent sets.

The third and final contribution is an in-depth study of the pricing problem under the 2SLM. We first note that changes in prices should be reflected in the dominance relation if the differences between the resulting attractiveness values are large enough. This is formalized by solving the joint assortment and pricing problem under the Threshold Luce model, where one product dominates another if the ratio between their attractiveness values exceeds a fixed threshold. Under this setting, we show that this problem can be solved in polynomial time. The proof relies on the following interesting facts: (1) an intrinsic-utility-ordered assortment is optimal; (2) the optimal prices can be obtained in polynomial time; and (3) the optimal policy assigns the same price to all products, except for two of them, the ones with the highest and lowest attractiveness. Many of these results extend to the following cases: (1) capacity-constrained problems, where the number of products that can be offered is restricted, and (2) position bias, where products are assigned to positions, altering their perceived attractiveness.

The rest of the paper is organized as follows. Section <ref> presents a review of the literature concerning assortment optimization and pricing under variations of the multinomial logit. Section <ref> formalizes the 2SLM and some of its properties. Section <ref> proves that assortment optimization under the 2SLM is polynomial-time solvable. Section <ref> presents the results on the capacitated version, in particular the NP-hardness of the problem, and also provides polynomial-time solutions for two special cases. Section <ref> presents the results for pricing optimisation under the Threshold Luce model. Section <ref> concludes the paper and provides future research directions. All proofs missing from the main text are provided in Appendix <ref>.

§ LITERATURE REVIEW

Since the assortment problem and the joint assortment and pricing problem are very active research topics, we focus on recent results closely related to this paper, in particular results on the multinomial logit model (MNL) <cit.> and its variants. Despite the IIA property, the MNL is widely used. Indeed, for many applications, the mean utility of a product can be modeled as a linear combination of its features. If the features capture the mean utility associated with each product, then the error between the utilities and their means may be considered as independent noise, and the MNL emerges as a natural candidate for modeling customer choice. In addition, the MNL parameters can be estimated from customer choice data, even with limited data <cit.>, because the associated estimation problem has a concave log-likelihood function <cit.>, and it is possible to measure how well the fitted MNL approximates the data <cit.>. Moreover, it is possible to improve the model estimation when the IIA property is likely to be satisfied <cit.>. One of the first positive results on the assortment problem under the multinomial logit model was obtained by <cit.>, where the authors showed that the optimal assortment can be found greedily by adding products to the offered assortment in order of decreasing revenues, thus evaluating at most a linear number of subsets. <cit.> studied the assortment problem under the MNL with a capacity constraint limiting the products that can be offered.
Under these conditions, the optimal solution is not necessarily a revenue-ordered assortment, but it can still be found in polynomial time. <cit.> proposed a more general attraction model where the probabilities of choosing a product depend on all the products (not only on the offered subset, as in the MNL). This involves a shadow attraction value associated with each product that influences the choice probabilities when the product is not offered. <cit.> showed that a slight transformation of the MNL model allows solving the assortment problem when the choice probabilities follow this more sophisticated attraction model. This continues to hold when the assortments must satisfy a set of totally unimodular constraints. The mixed multinomial logit <cit.> is an extension of the MNL model where different sets of customers follow different MNL models. Under this setting, the problem becomes NP-hard <cit.>, and it remains NP-hard even for two customer types <cit.>. A branch-and-cut algorithm was proposed by <cit.>. <cit.> proposed methods to obtain good upper bounds on the optimal revenue. <cit.> considered a model where customers follow an MNL model and the parameters belong to a compact uncertainty set. The firm wants to hedge against the worst-case scenario, and the problem amounts to finding an optimal assortment under these uncertainty conditions. Surprisingly, when there is no capacity constraint, the revenue-ordered strategy is optimal in this setting. <cit.> proposed a local-search heuristic for the assortment problem under an arbitrary discrete choice model. <cit.> and <cit.> proposed polynomial-time algorithms to solve the assortment problem under the MNL model with a capacity constraint and position bias, where position bias means that customer choices are affected by the positioning of the products in the assortment. Recently, <cit.> proposed a partial-order model to estimate individual preferences, where the preferences over products are modeled using forests. They cluster the customers into classes, each class being represented by a forest. When facing an assortment S, customers select, following an MNL model, among the products that are roots of the forest projected on S. This approach outperformed state-of-the-art methods when measuring the accuracy of individual predictions. Attention has also been devoted to discrete choice models that represent customer choices in more realistic ways, including models that violate the IIA property <cit.>. This property does not always hold in practice <cit.>, including when products cannibalize each other <cit.>. <cit.> identify these violations as perception priorities and adjust the probabilities to take their effects into account. <cit.> provide an axiomatic generalization of the MNL model to address the case where the products share features. <cit.> propose an axiomatic generalization of a discounted logit model incorporating a parameter to model the influence of the assortment size. Customers tend to use rules to simplify decisions: before making a purchase decision, they often narrow down the set of alternatives to choose from, using different heuristics to make the decision process simpler.
Several consider-then-choose models have been proposed in the literature, based on attention filters, search costs, feature filters, and other screening rules. Another reasonable way to discard options arises when the difference in attractiveness is so evident that the less attractive alternative, even when it is offered, is never picked (as in the Threshold Luce model, <cit.>). Any of the heuristics mentioned above allows the consumer to restrict her attention to a smaller set, usually referred to in the literature as the consideration set. As a consequence, offered products may end up having zero choice probability. Several models have been proposed to address the issue of zero-probability choices. <cit.> propose a theoretical foundation for maximizing a single preference under limited attention, i.e., when customers select among the alternatives that they pay attention to. <cit.> incorporate the role of attention into stochastic choice, proposing a model in which customers consider each offered alternative with some probability and choose the alternative maximizing a preference relation within the considered alternatives. This was axiomatized and generalized in <cit.> by introducing the concept of a random conditional choice set rule, which captures correlations in the availability of alternatives. This concept also provides a natural way to model substitutability and complementarity. <cit.> showed that a considerable portion of the subjects in his experimental setting use a decision process involving a consideration set. Numerous studies in marketing have also validated the consider-then-choose decision process. In his seminal work, <cit.> observed that most of the heterogeneity in consumer choice can be explained by consideration sets. He shows that nearly 80% of the heterogeneity in choice is captured by a richer model based on the combination of consideration sets and logit-based rankings. The rationale behind this observation is that first-stage filters eliminate a large fraction of the alternatives; thus, the resulting consideration sets are composed of only a few products in most of the studied categories <cit.>. <cit.> and <cit.> empirically showed that consumers form their consideration sets by a conjunction of elimination rules. Furthermore, there are empirical results showing that a two-stage model including consideration sets fits consumer search patterns better than sequential models <cit.>. From a customer standpoint, consider-then-choose models alleviate the cognitive burden of deciding when facing too many alternatives <cit.>. When dealing with a decision under limited time and knowledge, customers often resort to screening heuristics, as shown in <cit.>. Psychologically speaking, customers as decision makers need to carefully balance search efforts and opportunity costs against potential gains, and consideration sets help to achieve that goal <cit.>. Recently, <cit.> proposed a two-stage model where customers consider only the products contained within a certain range of their willingness to pay. <cit.> explored consider-then-choose models where each customer has a consideration set and a ranking of the products within it. The customer then selects the highest-ranked product offered. The authors studied the assortment problem under several consideration-set and ranking structures, and provide a dynamic programming approach capable of returning the optimal assortment in polynomial time for families of consideration-set functions originated by screening rules <cit.>.
<cit.> considered a revenue management model where an upcoming customer might discard an offered itinerary alternative due to individual restrictions, such as the time of departure. <cit.> studied a choice model that incorporates product search costs, so the set that a customer considers might differ from what is being offered. Multi-product price optimisation under the MNL and the NL has been studied since the models were introduced in the literature. One of the first results on the structure of the problem is due to <cit.>, who showed that the profit function of a company selling substitutable products when customers follow the MNL model is not jointly concave in prices. To overcome this issue, in <cit.> and later in <cit.>, the authors show that even though the profit function is not concave in prices, it is concave in the market shares, and there is a one-to-one correspondence between prices and market shares. Multiple studies have shown that, under the MNL where all products share the same price-sensitivity parameter, the mark-up, which is simply the difference between price and cost, remains constant for all products at optimality <cit.>. Furthermore, the profit function is uni-modal in this constant quantity and has a unique optimal solution, which can be determined by studying the first-order conditions. <cit.> showed the same result for the NL model. Up to that point, all previous results assumed an identical price-sensitivity parameter for all products. Under the MNL, there is empirical evidence showing the importance of allowing different price-sensitivity parameters for each product <cit.>. There is also evidence in <cit.> that restricting the nest-specific parameters to the unit interval results in a rejection of the NL model when fitting the data, thus recommending to relax this assumption. The problem when relaxing this condition is that the profit function is no longer concave in the market shares, which complicates the optimization task. In <cit.>, the authors considered an NL model with differentiated price sensitivities and found that the adjusted mark-up, defined as the price minus the cost minus the reciprocal of the price sensitivity, is constant for all products within a nest at optimality. Furthermore, each nest also has an adjusted nest-level mark-up which is invariant across nests, which reduces the original problem to a single-variable optimization problem. Additional theoretical developments can be found in <cit.>, but these are restricted to the two-stage nested logit model. In <cit.>, some of the results were extended to a multi-stage nested logit model for specific settings, but the authors also show that the equal mark-up property fails to hold in general for products that do not share the same immediate parent node in the nested choice structure, even when considering identical price-sensitivity parameters. <cit.> and <cit.> extend these results to the multi-stage NL model and show that an optimal pricing solution can still be found by maximizing a scalar function. There are some interesting results for other models that share similarities with the MNL and are therefore closely related to the model that we study. In <cit.>, the authors incorporate search costs into the consumer choice model. The results in that paper for the joint assortment and pricing problem are similar to the ones that we obtain in Section <ref>, in that many structural results that hold at optimality for their model are also satisfied in our case.
They show that the quasi-same price policy (which charges the same price for all products but one, the least attractive one) is optimal for this model. Interestingly, the joint assortment and pricing result under the Threshold Luce model is slightly different: the optimal pricing charges a common price for all products except the most attractive and the least attractive ones, which leads to more than two distinct prices. Recently, <cit.> has studied in depth a model originally due to <cit.> that assumes a negatively skewed distribution of consumer utilities. The resulting choice probabilities have an interesting consequence for the optimal pricing policy: they allow for variable mark-ups in optimal prices that increase with the expected utilities. The model considered in this paper is a variant of the MNL proposed by <cit.> and called the Two-Stage Luce model; it handles zero-probability choices by introducing the concept of dominance: if a product x dominates a product y, then y is never selected in the presence of x. The consideration set is therefore formed by keeping only the non-dominated products in the offered assortment, which allows flexibility in the consideration-set formation due to the nature of the dominance relation. Once the consideration set is formed, the customer chooses according to an MNL over the remaining alternatives. In the following section we describe this model in detail and show some examples that highlight its practical applications.

§ THE TWO-STAGE LUCE MODEL

The 2SLM <cit.> overcomes a key limitation of the MNL: the fact that a product must have zero attractiveness if it has zero probability of being chosen in a particular assortment. This limitation means that the product cannot be chosen with positive probability in any other assortment. The 2SLM eliminates this pathological situation through the concept of a consideration function which, given a set of products S, returns a subset of S in which each product has a positive probability of being selected. Let X denote the set of all products and let a(x) > 0 be the attractiveness of product x ∈ X. For notational convenience, we use a_x to denote the attractiveness of product x, i.e., a_x = a(x). We extend the attractiveness function to the outside option, with index 0 and a_0 = a(0) ≥ 0, to model the fact that customers may not select any product. As a result, the attractiveness function has signature a: X∪{0} → ℝ^+. Given an assortment A ⊆ X, a stochastic choice function ρ returns a probability distribution over A, i.e., ρ(x,A) is the probability of picking x in the assortment A. The 2SLM is a special case of the general Luce model presented in <cit.>, and independently discovered in <cit.>, which is defined below.

Definition: A stochastic choice function ρ is called a general Luce function if there exists an attractiveness function a: X∪{0} → ℝ^+ and a function c: 2^X∖∅ → 2^X∖∅ with c(A) ⊆ A for all A ⊆ X such that

ρ(x,A) = a_x / (∑_{y ∈ c(A)} a_y + a_0) if x ∈ c(A), and ρ(x,A) = 0 otherwise,

for all A ⊆ X. We call the pair (a,c) a general Luce model.

The function c (which is arbitrary) provides a way to capture the support of the stochastic choice function ρ. As observed in <cit.>, there are two interesting cases worth mentioning: * if c(S) is a singleton for all S ⊆ X, then ρ(·,S) is a deterministic choice; * if c(S) = S for all S ⊆ X, then the 2SLM coincides with the MNL. Two special cases of this model were provided in <cit.>. The first is the two-stage Luce model.
This model restricts c such that c(A) is the set of all undominated alternatives in A.

Definition: A general Luce model (a,c) is called a 2SLM if there exists a strict partial order (i.e., a transitive, antisymmetric and irreflexive binary relation) ≻ such that c(A) = {x ∈ A | ∄ y ∈ A: y ≻ x}. We call ≻ the dominance relation.

As a result, any 2SLM can be described by an irreflexive, transitive, and antisymmetric relation ≻ that fully captures the relation between products. The second model presented in <cit.>, which is a particular case of the 2SLM, is the Threshold Luce model (TLM), where dominance is explained in terms of how large the attractiveness values are relative to each other, so c is strongly tied to a. More specifically, for a given threshold t > 0, the consideration set c(S) of a set S ⊆ X is defined as c(S) = {y ∈ S | ∄ x ∈ S: a_x > (1+t) a_y}. In other words, x ≻ y if and only if a_x/a_y > 1+t. Intuitively, an attractiveness ratio of more than 1+t means that the less-preferred alternative is dominated by the more-preferred alternative. Observe that the relation ≻ is clearly irreflexive, transitive, and antisymmetric. The dominance relation ≻ can thus be represented as a Directed Acyclic Graph (DAG), where the nodes represent the products and there is a directed edge (x,y) if and only if x ≻ y. Sets satisfying c(S) = S are anti-chains in the DAG, meaning that there are no arcs connecting them. For instance, consider the Threshold Luce model defined over X = {1,2,3,4,5} with attractiveness values a_1 = 12, a_2 = 8, a_3 = 6, a_4 = 3 and a_5 = 2, and threshold t = 0.4. We have that i ≻ j iff a_i > 1.4 a_j. The DAG representing this dominance relation is depicted in Figure <ref>. In the following example, we show that the 2SLM admits regularity violations, meaning that the probability of choosing a product can increase when we enlarge the offered set. Since regularity is satisfied by any choice model based on random utility (RUM), this shows that the 2SLM is not contained in the RUM class [Observe that this implies that the 2SLM is not contained in the Markov chain model proposed by <cit.>, since the latter belongs to the RUM class <cit.>.].

Example: Consider the following instance of the Threshold Luce model (which is a special case of the 2SLM). Let X = {1,2,3,4} with attractiveness a_1 = 5, a_2 = 4, a_3 = 3 and a_4 = 3. Consider t = 0.4 and the attractiveness of the outside option a_0 = 1. For the offer set {2,3,4}, the probability of selecting product 2 is 4/11, since no product dominates another. However, if we add product 1 to the offer set, i.e., if we offer all four products, then the probability of selecting product 2 increases to 4/10, because products 3 and 4 are now dominated by product 1.
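The regularity violation in this example is easy to verify computationally. The following minimal Python sketch (our own illustration, not code from the paper) implements the Threshold Luce consideration set and choice probabilities and reproduces the two figures above:

```python
# Minimal sketch of the Threshold Luce choice rule, reproducing the
# regularity violation of the example above (a special case of the 2SLM).

def consideration_set(S, a, t):
    """Keep the products of S not dominated at threshold t:
    x dominates y iff a[x] > (1 + t) * a[y]."""
    return [y for y in S if not any(a[x] > (1 + t) * a[y] for x in S)]

def choice_prob(x, S, a, t, a0):
    c = consideration_set(S, a, t)
    if x not in c:
        return 0.0
    return a[x] / (sum(a[y] for y in c) + a0)

a = {1: 5, 2: 4, 3: 3, 4: 3}        # attractiveness values from the example
print(choice_prob(2, [2, 3, 4], a, t=0.4, a0=1))     # 4/11 ~ 0.3636
print(choice_prob(2, [1, 2, 3, 4], a, t=0.4, a0=1))  # 4/10 = 0.4 (regularity violated)
```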
The Two-Stage Luce model can accommodate different decision heuristics and market scenarios by specifying the dominance relation according to a specific set of rules. Two such cases are provided below.

Feature Difference Threshold: Assume that each product has a set of features ℱ = {1,…,m}. A product x can then be represented by an m-dimensional vector x ∈ ℝ^m. Assume that the perceived relevance of each feature k is measured by a weight ν_k, so that the utility perceived by the customers can be expressed as a weighted combination of the features, u(x) = ∑_{k=1}^m ν_k · x_k. The dominance relation can then be defined as x ≻ y if and only if u(x) − u(y) = ∑_{k=1}^m ν_k (x_k − y_k) ≥ T, where T > 0 is a tolerance parameter that represents how much difference a customer allows before considering that an alternative dominates another. This dominance relation is irreflexive, transitive, and antisymmetric, and hence it can be used to define an instance of the 2SLM. One can easily show that this model is a special case of the TLM.

Price levels: Suppose we have N products, and each product i has k_i price levels. Let x_il denote product i offered at price p_il, with corresponding attractiveness a_il; we assume that, for each product i, the prices satisfy p_i1 < p_i2 < … < p_ik_i. Naturally, x_i1 ≻ x_i2 ≻ … ≻ x_ik_i, because for the same product the customer selects the lowest available price. Each price level of each product can still dominate or be dominated by other products as well, as long as the dominance relation remains irreflexive, transitive and antisymmetric. This setting can be modelled by the Two-Stage Luce model in a natural way.

§ ASSORTMENT PROBLEMS UNDER THE TWO-STAGE LUCE MODEL

This section studies the assortment problem for the 2SLM using the definitions and notations presented earlier. Let r: X∪{0} → ℝ^+ be a revenue function associated with each product and satisfying r(0) = 0. The expected revenue of a set S ⊆ X is given by R(S) = ∑_{i ∈ c(S)} ρ(i,S) r(i). The assortment problem amounts to finding a set S^* ∈ arg max_{S ⊆ X} R(S), yielding an optimal revenue of R^* = max_{S ⊆ X} R(S). Observe that every subset S ⊆ X can be uniquely represented by a binary vector x ∈ {0,1}^n such that i ∈ S if and only if x_i = 1. Using this bijection, the search space for S^* can be restricted to 𝒟 = {x ∈ {0,1}^n | ∀ s ≻ t: x_s + x_t ≤ 1}, where 𝒟 represents all the subsets S satisfying S = c(S), which means that no product in S dominates another product in S. There is always an optimal solution S^* that belongs to 𝒟 because R(S) = R(c(S)) and c(S) ∈ 𝒟 for all sets S in X. As a result, the assortment problem under the 2SLM can be formulated as

maximize_x ∑_{i=1}^n r_i a_i x_i / (∑_{i=1}^n a_i x_i + a_0) subject to x ∈ 𝒟,

where r_i and a_i stand for r(i) and a(i) for simplicity. An effective strategy for solving many assortment problems consists in considering revenue-ordered assortments, which are obtained by choosing a threshold ρ and selecting all the products with revenue at least ρ. This strategy leads to an optimal algorithm for the assortment problem under the MNL. Unfortunately, it fails under the 2SLM, because adding a highly attractive product may remove many dominated products whose revenues and utilities would lead to a higher revenue.

Example [Sub-Optimality of Revenue-Ordered Assortments]: Consider a Threshold Luce model with X = {1,2,3}, revenues r_1 = 88, r_2 = 47, r_3 = 46, attractiveness a_0 = 55, a_1 = 13, a_2 = 26, a_3 = 15, and t = 0.6. Then x ≻ y iff a_x > 1.6 a_y, which gives 2 ≻ 1 and 2 ≻ 3. Consider the sets S ⊆ X satisfying S = c(S): the optimal revenue is given by the assortment {1,3}, while the best revenue-ordered assortment under the 2SLM is S = {1}, yielding almost 24% less revenue.
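These figures are easy to verify numerically; the short, self-contained sketch below recomputes the expected revenues of the three relevant assortments:

```python
# Numerical check of the revenue-ordered counterexample above (t = 0.6, a0 = 55).

def expected_revenue(S, r, a, t, a0):
    c = [y for y in S if not any(a[x] > (1 + t) * a[y] for x in S)]
    denom = sum(a[y] for y in c) + a0
    return sum(r[y] * a[y] for y in c) / denom

r = {1: 88, 2: 47, 3: 46}
a = {1: 13, 2: 26, 3: 15}
print(expected_revenue([1, 3], r, a, 0.6, 55))     # ~22.10, optimal
print(expected_revenue([1], r, a, 0.6, 55))        # ~16.82, best revenue-ordered
print(expected_revenue([1, 2, 3], r, a, 0.6, 55))  # ~15.09, 2 dominates 1 and 3
```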
To solve Problem (<ref>), consider first the maximum-attractiveness problem defined over the same set of constraints. Given weights c_i ∈ ℝ (1 ≤ i ≤ n), the maximum-attractiveness problem is defined as follows:

maximize_x ∑_{i=1}^n c_i x_i subject to x ∈ 𝒟.

We now show that (<ref>) can be reduced to the maximum weighted independent set problem in a directed acyclic graph with positive vertex weights. An independent set is a set of vertices I such that there is no edge connecting any two vertices in I. The Maximum Weighted Independent Set problem (MWIS) can be stated as follows: given a graph G = (V,E) with a weight function w: V → ℝ, find an independent set I^* ∈ arg max_{I ∈ ℐ} ∑_{i ∈ I} w(i), where ℐ is the set of all independent sets. Recall that the dominance relation can be represented as a DAG G which includes an arc (u,v) whenever u ≻ v. As a result, the condition x ∈ 𝒟 implies that any feasible solution to (<ref>) represents an independent set in G, and maximizing ∑_{i=1}^n c_i x_i amounts to finding the independent set maximizing the sum of the weights. Since the dominance relation is a partial order, the DAG representing the dominance relation is a comparability graph. The following result is particularly useful.

Theorem: The maximum weighted independent set problem is polynomially solvable for comparability graphs with positive weights.

We are ready to present our first result.

Lemma: Problem (<ref>) is polynomial-time solvable.

Proof: We first show that we can ignore the products with a negative weight. Let X̂ = {i ∈ X | c_i > 0} and 𝒟̂ = {x ∈ {0,1}^n | ∀ s, t ∈ X̂, s ≻ t: x_s + x_t ≤ 1}. Solving (<ref>) is equivalent to solving maximize_x ∑_{i ∈ X̂} c_i x_i subject to x ∈ 𝒟̂. Indeed, consider an optimal solution x^* to Problem <ref> and assume that there exists i ∈ X such that c_i < 0 and x_i^* = 1. Define x̂ like x^* but with x̂_i = 0. Then x̂ has a strictly greater objective value in <ref> than x^*, and it is feasible since setting a component to zero cannot violate any constraint (i.e., x̂ ∈ 𝒟). This contradicts the optimality of x^*. Now the maximum-attractiveness problem can be reduced to an instance of MWIS in a DAG with positive weights that corresponds to the dominance relation. This DAG is a comparability graph, and the result follows from Theorem <ref>.

The next step in solving the assortment problem under the 2SLM relies on a result by Megiddo <cit.>. Let D be a domain defined by some set of constraints and consider Problem <ref>: maximize_x ∑_{i=1}^n c_i x_i subject to x ∈ D, and its associated Problem <ref>: maximize_x (a_0 + ∑_{i=1}^n a_i x_i)/(b_0 + ∑_{i=1}^n b_i x_i) subject to x ∈ D. Using this notation, Megiddo's theorem can be stated as follows.

Theorem: If Problem <ref> is solvable within O(p(n)) comparisons and O(q(n)) additions, then Problem <ref> is solvable in O(p(n)(q(n) + p(n))) time.

We are now in position to state the main theorem of this section.

Theorem: The assortment problem under the Two-Stage Luce model is polynomial-time solvable.

Proof: Recall that the assortment problem under the 2SLM can be formulated as maximize_x ∑_{i=1}^n r_i a_i x_i / (∑_{i=1}^n a_i x_i + a_0) subject to x ∈ 𝒟, where 𝒟 = {x ∈ {0,1}^n | ∀ s ≻ t: x_s + x_t ≤ 1}. The problem of maximizing the numerator in (<ref>) is exactly the maximum-attractiveness problem. By Lemma <ref>, this is polynomial-time solvable. Now observe that (<ref>) (i.e., Problem <ref>) can be seen as an instance of Problem <ref>. Therefore, by Theorem <ref>, the assortment problem under the 2SLM is solvable in polynomial time.
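Megiddo's construction yields the strongly polynomial bound stated above. For intuition, the fractional objective can also be attacked with a simple Dinkelbach-style iteration, sketched below. This is our own illustration rather than the paper's algorithm, and for clarity the inner linear subproblem is solved by brute-force enumeration of antichains on a toy instance; the comparability-graph result above provides a polynomial-time inner solver instead:

```python
# Dinkelbach-style sketch for max sum(r_i a_i x_i) / (sum(a_i x_i) + a0)
# over D = { antichains of the dominance DAG }. Our own illustration; the
# inner maximization is brute force here, but polynomial via MWIS in the paper.
from itertools import combinations

def antichains(n, dominates):
    for k in range(n + 1):
        for S in combinations(range(n), k):
            if not any(dominates(u, v) or dominates(v, u)
                       for u in S for v in S if u < v):
                yield S

def best_assortment(r, a, a0, dominates, iters=50):
    n, lam = len(r), 0.0
    for _ in range(iters):                       # fixed-point iteration on lambda
        best = max(antichains(n, dominates),
                   key=lambda S: sum(a[i] * (r[i] - lam) for i in S))
        val = sum(a[i] * (r[i] - lam) for i in best)
        if abs(val - lam * a0) < 1e-9:           # lambda equals the optimal revenue
            return best, lam
        lam = sum(r[i] * a[i] for i in best) / (sum(a[i] for i in best) + a0)
    return best, lam

# Toy instance: the revenue-ordered counterexample (0-based product indices).
r = [88, 47, 46]; a = [13, 26, 15]; a0 = 55; t = 0.6
dom = lambda u, v: a[u] > (1 + t) * a[v]
print(best_assortment(r, a, a0, dom))            # ((0, 2), ~22.10): products 1 and 3
```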
In addition to solving the assortment problem under the 2SLM, Theorem <ref> is interesting in that it solves the assortment problem under a multinomial logit with a specific class of constraints. It can be contrasted with the results by <cit.>, where the feasible assortments satisfy a set of totally unimodular constraints; they show that the resulting problem can be solved as a linear program. However, the 2SLM introduces constraints that are not necessarily totally unimodular, as we now show.

Example: Consider X = {1,2,3,4} and 1 ≻ 3, 1 ≻ 4, 2 ≻ 3, 2 ≻ 4, and 3 ≻ 4. The constraint matrix that defines the feasible space 𝒟 for this instance is

M = [ 1 0 1 0; 1 0 0 1; 0 1 1 0; 0 1 0 1; 0 0 1 1 ],

where each row represents a constraint x_u + x_v ≤ 1, meaning that only one endpoint of the corresponding edge can be selected at a time. <cit.> proved that M is totally unimodular if and only if, for every (square) Eulerian submatrix A of M, ∑_{i,j} a_ij ≡ 0 (mod 4). Consider the submatrix corresponding to the first, second, and fifth rows and the first, third, and fourth columns:

N = [ 1 1 0; 1 0 1; 0 1 1 ].

Matrix N is Eulerian (the sum of the elements in every row and in every column is a multiple of 2), but the sum of all the elements of N is 6 ≢ 0 (mod 4), and hence M is not totally unimodular.

We close this section by explaining how our results can be extended to a more general setting. <cit.> proposed the general attraction model (GAM) to describe customer behaviour; it alleviates some deficiencies of the MNL. More specifically, the intuition behind this choice model is that, whenever a product is not offered, its absence can potentially increase the probability of the no-purchase alternative, as consumers can look for the product elsewhere, or at a later time. To achieve this effect, for each product j the model considers two different weights, v_j and w_j, usually with 0 ≤ w_j ≤ v_j. If product j is offered, then its preference weight is v_j. If j is not offered, then the preference weight of the outside option is increased by w_j. For all j ∈ X, let ṽ_j = v_j − w_j and ṽ_0 = v_0 + ∑_{k ∈ X} w_k. Using this notation, the choice probabilities of the GAM can be recovered by means of the following equation: ρ(j,S) = v_j / (∑_{i ∈ S} ṽ_i + ṽ_0) if j ∈ S, and ρ(j,S) = 0 if j ∉ S. Observe that the resulting assortment problem has the same functional form as Problem <ref>, with a slight modification of the coefficients in the denominator. Thus, we can apply the solution technique described in Theorem <ref> to find the optimal assortment for the GAM.

§ THE CAPACITATED ASSORTMENT PROBLEM

In many applications, the number of products in an assortment is limited, giving rise to capacitated assortment problems. Let C (1 ≤ C ≤ n) be the maximum number of products allowed in an assortment. The capacitated assortment problem under the Two-Stage Luce model is given by

maximize_x ∑_{i=1}^n r_i a_i x_i / (∑_{i=1}^n a_i x_i + a_0) subject to x ∈ 𝒟_C,

where 𝒟_C = {x ∈ {0,1}^n | ∀ s ≻ t: x_s + x_t ≤ 1 and ∑_{i=1}^n x_i ≤ C}. As before, it is useful to define its capacitated maximum-attractiveness counterpart: maximize_x ∑_{i=1}^n c_i x_i subject to x ∈ 𝒟_C. This section first proves that the capacitated assortment problem under the 2SLM is NP-hard. The reduction uses the Maximum Weighted Budgeted Independent Set (MWBIS) problem proposed by <cit.>, which amounts to finding a maximum weighted independent set of size not greater than C. <cit.> showed that MWBIS is NP-hard for bipartite graphs.

Theorem: Problem (<ref>) is NP-hard (under Turing reductions).

It is interesting to mention that Problem (<ref>) is equivalent to finding an anti-chain of maximum weight among those of cardinality at most C. This problem was proposed by <cit.> and its complexity was left open; the above result shows that it is also NP-hard. <cit.> studied the problem for various types of graphs (e.g., trees and forests), but the dominance relation of the 2SLM can never be a tree, since it is transitive (unless we consider a graph with a single vertex). In light of this NP-hardness result, the rest of this section presents polynomial-time algorithms for two special cases of the dominance relation.

§.§ The Two-Stage Luce model over Tree-Induced Dominance Relations

Let ℛ_≻ be the transitive reduction of the irreflexive, antisymmetric, and transitive relation ≻. This section considers the capacitated assortment problem when ℛ_≻ can be represented as a tree. Without loss of generality, we can assume that the tree contains all products.
Otherwise, we can add another product with zero weight that dominates all the original products. This new product becomes the root of the tree, and the products not in the original tree become children of the root. The same transformation applies to the case when ℛ_≻ is a forest: all the trees in the forest become children of the new product. We now show how to solve Problem (<ref>); the result follows again by applying Megiddo's theorem. The first step of the algorithm simply removes all products with negative weight: their children can be added to the parent of the deleted vertex. The main step then solves (<ref>) bottom-up using dynamic programming from the leaves. For simplicity, we present the recurrence relations that compute the weight of the optimal assortment; it is easy to recover the optimal assortment itself. The recurrence relations compute two functions: * 𝒜(k,c), which returns the weight of an optimal assortment using product k and its descendants in the tree representation of ℛ_≻ for a capacity c; * 𝒜^+(S,c), which, given a set S of vertices that are children of a vertex k, returns the weight of an optimal assortment using the products in S and their descendants for a capacity c. The key intuition behind the recurrence is as follows: if v is a vertex and v_1 and v_2 are two of its children, then v_1 does not dominate v_2 or any of its descendants. Hence, it suffices to compute the best assortments producing 𝒜(v_1,0), …, 𝒜(v_1,C) and 𝒜(v_2,0), …, 𝒜(v_2,C) and to combine them optimally. The recurrence relations are defined as follows (v ∈ X and 1 ≤ c ≤ C):

𝒜(v,0) = 0;
𝒜(v,c) = max(c_v, 𝒜^+(children(v), c));

and

𝒜^+(∅,c) = 0;
𝒜^+(S,c) = max_{n_1, n_2 ≥ 0, n_1 + n_2 = c} [𝒜^+(S ∖ {e}, n_1) + 𝒜(e, n_2)] with e = arg max_{i ∈ S} c_i,

where children(p) denotes the children of product p in the tree. Note that 𝒜^+(S,c) is computed recursively to obtain the best assortment from the products in S and their descendants. Using these recurrence relations, the following theorem follows.

Theorem: Let ≻ be a dominance relation whose transitive reduction ℛ_≻ is a tree containing all products. The capacitated assortment problem under the 2SLM and ≻ is polynomial-time solvable.
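The recurrences translate directly into code. The following Python sketch is our own memoized implementation of the recurrences above; the tree is given as children lists, and the weights are assumed to be positive after the preprocessing step:

```python
# Sketch of the tree dynamic program above. A(v, c): best weight using
# product v and its descendants with capacity c; Aplus(S, c): best weight
# using the products in the tuple S (children of one vertex) and their
# descendants. Weights are assumed positive after preprocessing.
from functools import lru_cache

def solve_tree(children, weight, root, C):
    @lru_cache(maxsize=None)
    def A(v, c):
        if c == 0:
            return 0
        # Either take v alone (v dominates all its descendants),
        # or combine the best assortments of its children's subtrees.
        return max(weight[v], Aplus(tuple(children[v]), c))

    @lru_cache(maxsize=None)
    def Aplus(S, c):
        if not S:
            return 0
        e = max(S, key=lambda i: weight[i])       # e = argmax_{i in S} c_i
        rest = tuple(i for i in S if i != e)
        return max(Aplus(rest, c - n) + A(e, n) for n in range(c + 1))

    return A(root, C)

# Toy instance: 0 is the root; dominance follows the tree edges.
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
weight = {0: 1, 1: 2, 2: 4, 3: 3, 4: 5}
print(solve_tree(children, weight, 0, 2))   # 9: pick products 2 and 4
```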
§.§ The Attractiveness-Correlated Two-Stage Luce model

The second special case considers a dominance relation that is correlated with attractiveness.

Definition: A Two-Stage Luce model is attractiveness-correlated if the dominance relation satisfies the following two conditions: * if x ≻ y, then a_x > a_y; * if x ≻ y and a_z > a_x, then z ≻ y.

The first condition simply expresses that product x can only dominate product y if the attractiveness of x is greater than the attractiveness of y. The second condition ensures that, if x dominates y, then any product whose attractiveness is greater than that of x also dominates y. The induced dominance relation is irreflexive, anti-symmetric, and transitive. A particular case of this model is the Threshold Luce model. When customers follow the Threshold Luce model, they form their consideration sets based on the attractiveness of the products. Without loss of generality, we can assume a_1 ≥ a_2 ≥ … ≥ a_n, unless stated otherwise. For a set S, the associated consideration set c(S) may be a proper subset of S, but for the purpose of assortment optimization there is no incentive to offer sets including products that are not even considered by customers, so we can restrict our search for optimal solutions to sets where c(S) = S. A necessary and sufficient condition for this to happen is max_{i ∈ S} a_i / min_{i ∈ S} a_i ≤ 1+t, meaning that the largest ratio between attractiveness values is not greater than 1+t, so that no dominance relation appears. The firm then needs to carefully balance the inclusion of high-attractiveness products and their prices to maximize the revenue. In the following example we show that revenue-ordered assortments are not optimal under the Threshold Luce model; in fact, this strategy can be arbitrarily bad.

Example [Revenue-ordered assortments are not optimal]: Consider N+1 products, with price p_1 for the first product and α p_1 for the rest of them, with α < 1. The attractiveness is a_1 for the first product and γ a_1 for all the others, such that, in the presence of product 1, all the other products are ignored. To complete the setup, let a_0 be the attractiveness of the outside option. The best revenue-ordered assortment is to offer product 1 alone, yielding a revenue of R' = R({1}) = p_1 a_1/(a_1 + a_0). But, if N is big enough (at least bigger than 1/(αγ)), it is more profitable to offer S_N = X ∖ {1}, resulting in a revenue of R^* = R(S_N) = N α p_1 γ a_1 / (N αγ a_1 + a_0). Taking the ratio of these two values and letting N tend to infinity, we obtain

R'/R^* = lim_{N→∞} [p_1 a_1/(a_1 + a_0)] · [(N αγ a_1 + a_0)/(N α p_1 γ a_1)] = a_1/(a_1 + a_0).

Observe that this last expression is the market share obtained by offering just product 1, which can be made arbitrarily bad by either making a_1 as small as desired or making the outside option more attractive.

Theorem: The capacitated assortment optimization problem can be solved in polynomial time under the attractiveness-correlated Two-Stage Luce model.

Consider an assortment whose product with the largest attractiveness is k. This assortment cannot contain any product dominated by k. Moreover, if k_1 and k_2 are two other products in this assortment, then k_1 cannot dominate k_2, since k would also dominate k_2. As a result, consider the set X_k = {i ∈ X | a_i ≤ a_k and k ⊁ i}. No product in X_k dominates any other product in X_k, and hence the capacitated problem reduces to a traditional assortment problem under the MNL. This idea is formalized in Algorithm <ref>, whose inner step is a traditional capacitated assortment algorithm for the MNL. The algorithm considers each product k in turn, together with the products that k does not dominate, and applies a traditional capacitated assortment optimization under the MNL; the best such assortment is the solution to the capacitated assortment problem under the attractiveness-correlated 2SLM.

Theorem: The capacitated assortment problem can be solved in polynomial time for attractiveness-correlated instances.

Proof: To show correctness, it suffices to show that the optimal assortment must be a subset of one of the X_k (1 ≤ k ≤ n). Let A be the optimal assortment and assume that k is its product with the largest attractiveness (breaking ties randomly). A must be included in X_k, since otherwise it would contain a product x such that k ≻ x (contradicting feasibility) or such that a(x) > a(k) (contradicting our hypothesis). The correctness then follows since there is no dominance relationship between any two elements of each X_k. The claim of polynomial-time solvability follows from the availability of polynomial-time algorithms for the capacitated assortment problem under the MNL and the fact that there are exactly n calls to such an algorithm.
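Algorithm <ref> is straightforward to sketch. In the illustration below (our own code, not the paper's), the inner capacitated-MNL subproblem is solved by brute force over subsets of X_k for clarity; in practice one would plug in a polynomial-time capacitated MNL solver, as the proof assumes:

```python
# Sketch of the algorithm above for attractiveness-correlated instances.
# The inner capacitated-MNL step is brute force here for clarity; the paper
# assumes a polynomial-time capacitated MNL solver instead.
from itertools import combinations

def revenue(S, r, a, a0):
    return sum(r[i] * a[i] for i in S) / (sum(a[i] for i in S) + a0) if S else 0.0

def capacitated_mnl_brute(X, r, a, a0, C):
    """Best subset of X of size <= C under a plain MNL (no dominance inside X)."""
    cands = (S for k in range(C + 1) for S in combinations(X, k))
    return max(cands, key=lambda S: revenue(S, r, a, a0))

def capacitated_2slm(n, r, a, a0, C, dominates):
    best = ()
    for k in range(n):                  # k = most attractive offered product
        Xk = [i for i in range(n) if a[i] <= a[k] and not dominates(k, i)]
        S = capacitated_mnl_brute(Xk, r, a, a0, C)
        if revenue(S, r, a, a0) > revenue(best, r, a, a0):
            best = S
    return best
```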
The correctness then follows since there is no dominance relationship between any two elements of each X_k. The claim of polynomial-time solvability follows from the availability of polynomial-time algorithms for the assortment problem under the MNL and from the fact that there are exactly n calls to such an algorithm.

§ JOINT ASSORTMENT AND PRICING UNDER THE THRESHOLD LUCE MODEL

The previous sections provide solutions to the Assortment Optimization problem under the Two-Stage Luce model. This section aims at determining how to assign prices to products in order to maximise the expected revenue. It studies the Joint Assortment and Pricing Problem under the Threshold Luce model, by making the attractiveness of each product depend upon its price. Let p=(p_1,…,p_n) be the price vector, where p_i ∈ ℝ_+∪{∞} represents the price of product i. Since the price affects the attractiveness a_i of product i, the presentation makes this dependency explicit by writing a_i(p_i), whose form in this paper is specified by

a_i(p_i)=exp(u_i-p_i)

where u_i is the intrinsic utility of product i and the value v_i=u_i-p_i is called the net utility of product i. Assigning an infinite price to a product is equivalent to not offering the product, as the attractiveness, and therefore the probability of selecting the product, becomes 0. Without loss of generality, products are indexed in decreasing order of intrinsic utility. The following definition extends the definition of a consideration set given an assortment S to the case where each product i has a price p_i.

Given an assortment S, a price vector p=(p_1,p_2,…,p_n) and a threshold t, the consideration set c(S,p) for the Threshold Luce model is defined as:

c(S,p)={j ∈ S | ∄ i∈ S: a_i(p_i)>(1+t)a_j(p_j)}.

The influence of the price vector over the dominance relations is illustrated by the following example:

[Price effect on the dominance relation] Consider the Threshold Luce model defined over X={1,2,3,4} with utilities u_1=ln(10), u_2=ln(8), u_3=ln(6) and u_4=ln(3), and consider first a scenario where all products have the same price, p_i=ln(3) ∀ i=1,…,4. Consider also a second scenario with prices p_1'=ln(4), p_2'=ln(4), p_3'=ln(3) and p_4'=ln(2). For a threshold t=0.5, we have that i ≻ j iff a_i(p_i) > 1.5 a_j(p_j). A table summarizing the utilities, prices, and attractiveness for both scenarios is given in Table <ref>, and the DAGs depicting the dominance relations for the two scenarios are given in Figures <ref> and <ref>.

It is also necessary to update the definition of ρ in Definition <ref>, since it now depends on the price of all products in the assortment. The definition of ρ: X∪{0} × 2^X × (ℝ_+∪{∞})^n → [0,1] becomes:

ρ(i,S,p) = a_i(p_i)/(∑_{j∈ c(S,p)} a_j(p_j) + a_0) if i ∈ c(S,p), and 0 if i ∉ c(S,p),

where a_0 is the attractiveness of the outside option. The expected revenue of an assortment S⊆X and a price vector p∈ℝ_+^n is given by

R(S,p)=∑_{i∈ c(S,p)} ρ(i,S,p)·p_i.

A pair (S,p) with S⊆X and p∈(ℝ_+∪{∞})^n is valid if S={i : p_i<∞} and c(S,p)=S. Let 𝒱 be the set of all valid pairs (S,p). Observe that one can always restrict the search for optimal solutions to 𝒱. Indeed, all dominated products can be given an infinite price, and removing them from the original assortment yields the exact same revenue. The Joint Assortment and Pricing problem aims at finding a set S^* and a price vector p^* satisfying

(S^*,p^*) ∈ argmax_{(S,p)∈𝒱} R(S,p)

and yielding an optimal revenue of R^*=R(S^*,p^*).
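As a sanity check of these definitions, a small Python sketch (our own, with assumed variable names; `u` and `p` are dictionaries indexed by product) computing the consideration set and the expected revenue reads:

```python
import math

def consideration_set(S, p, u, t):
    """c(S, p): products of S not dominated under prices p."""
    a = {i: math.exp(u[i] - p[i]) for i in S}
    a_max = max(a.values())
    return {j for j in S if a_max <= (1 + t) * a[j]}

def expected_revenue(S, p, u, t, a0):
    """R(S, p) with attractiveness a_i(p_i) = exp(u_i - p_i)."""
    c = consideration_set(S, p, u, t)
    denom = sum(math.exp(u[j] - p[j]) for j in c) + a0
    return sum(p[i] * math.exp(u[i] - p[i]) for i in c) / denom
```

In this notation, a pair (S,p) is valid precisely when `consideration_set(S, p, u, t)` returns S itself.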
First observe that the strategy used to solve this problem under the multinomial logit does not carry over to the Threshold Luce model. Under the multinomial logit, the optimal solution for the joint assortment and pricing problem is a fixed adjusted-margin policy <cit.> which, for equal price sensitivities and normalised costs, translates to a fixed-price policy. As shown in <cit.>, the optimal solution for the pricing problem under the multinomial logit can be expressed in closed form using the Lambert function W(x):[0,∞)→[0,∞), which is defined as the unique function satisfying:

x=W(x)e^{W(x)} ∀ x∈[0,∞).

Using this function, the optimal revenue can be expressed as:

R^*=W(∑_{i∈X} exp(u_i-1)/a_0)

The prices are all equal and satisfy p_i=1+R^* ∀ i∈X. The following example shows that a fixed-price policy is not optimal under the Threshold Luce model.

[Fixed-Price policy is not optimal] Consider 11 products with product 1 having utility u=2 and all remaining 10 products having utility u'=1. Consider a_0=1 and t=1. Observe that, for any fixed price, product 1 always dominates the other 10 products having lower utility, as exp(u-u')=exp(1)=e > (1+t)=2. Therefore, the optimal revenue for a fixed-price strategy is:

R_fixed=W(exp(u-1)/a_0) = W(e) = 1.

As a result, the 10 lower-utility products are completely ignored and only product 1 contributes to the revenue. Consider now the following price scheme: let the price for product 1 be p=1.8 and let the price be p'=1.4 for the remaining products. Product 1 does not dominate any other product now. Indeed, for any 1<k≤11, a_1/a_k=exp((u-p)-(u'-p')) = exp((2-1.8)-(1-1.4)) ≈ 1.822 < 1+t = 2, which yields a revenue of:

R' = (p·exp(u-p) + 10·p'·exp(u'-p')) / (exp(u-p) + 10·exp(u'-p') + a_0) = (1.8·exp(2-1.8) + 10·1.4·exp(1-1.4)) / (exp(2-1.8) + 10·exp(1-1.4) + 1) ≈ 1.298.

This pricing scheme improves upon the fixed-price policy, yielding a revenue almost 30% higher.

The intuition behind this example is as follows: for a fixed-price strategy, the only factor affecting dominance is the intrinsic utilities, because the prices vanish when calculating the ratio between two attractiveness values. This means that the solution can potentially miss the benefits of low-attractiveness products which are dominated by the most attractive product. It is thus important to understand the structure of an optimal solution for the Joint Assortment and Pricing problem under the Threshold Luce model. The first result states that, for any optimal solution (S^*,p^*), all product prices are greater than or equal to R^*, where R^* denotes the revenue achieved at optimality.

In any optimal solution (S^*,p^*), for all i∈S^*, p^*_i ≥ R^*.

The proof is by contradiction: removing a product with a price lower than R^* yields a greater revenue. The next proposition characterises the optimal assortment of products of any optimal solution to the Joint Assortment and Pricing problem. Recall that the products are indexed by decreasing utility u_i. Thus, the set of products [k] := {1,…,k} (with 0<k≤n) is said to be an intrinsic utility ordered set. The following proposition holds:

Let (S^*,p^*) denote an optimal solution. Then S^*=[k] for some k≤n.
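Before turning to the structural results, the numbers in the example above can be checked numerically; the short script below is our own sketch relying on scipy's Lambert function and reproduces R_fixed = W(e) = 1 and R' ≈ 1.298:

```python
import math
from scipy.special import lambertw

u, u_low, t, a0 = 2.0, 1.0, 1.0, 1.0           # Example: 1 + 10 products
R_fixed = lambertw(math.exp(u - 1) / a0).real  # W(e) = 1

p, p_low = 1.8, 1.4                            # adjusted prices
num = p * math.exp(u - p) + 10 * p_low * math.exp(u_low - p_low)
den = math.exp(u - p) + 10 * math.exp(u_low - p_low) + a0
print(R_fixed, num / den)                      # 1.0  and  ~1.298
```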
The following Lemma, due to <cit.>, is useful to prove some of the upcoming propositions. For completeness, its proof is also given in Appendix <ref>.

Let H(p_i,p_j) := p_i·exp(u_i-p_i) + p_j·exp(u_j-p_j), where exp(u_i-p_i) + exp(u_j-p_j) = T. Then, H(p_i,p_j) is strictly unimodal with respect to p_i or p_j, and it achieves its maximum at the following point:

p_i^*=p_j^*=ln((exp(u_i)+exp(u_j))/T)

Observe that setting the price of a product to ∞ is equivalent to not showing it to consumers. By Proposition <ref>, one can always find an optimal solution that is intrinsic utility ordered. Given a price vector p∈ℝ^n, let γ(p) be defined as γ(p) := max{i∈[n] : p_i<∞}; intuitively, this is the index of the last non-infinite price. Proposition <ref> shows that, at optimality, the finite prices are non-increasing in i, meaning that lower prices are assigned to lower-utility products.

The prices at an optimal solution (S^*,p^*) satisfy p_i^* ≥ p_{i+1}^* ∀ i∈[γ(p)-1]. Moreover, if i,j∈S^* satisfy u_i=u_j, then p_i^*=p^*_j.

Recall that the net utility of product i was defined as v_i=u_i-p_i. The following proposition shows that, at optimality, the net utilities follow the same order as the intrinsic utilities.

Let p^* be the price vector of an optimal solution of the Joint Assortment and Pricing Problem. The following condition holds:

u_i-p_i^* ≥ u_{i+1}-p_{i+1}^* ∀ i∈[γ(p)-1].

The above propositions make it possible to filter out non-efficient assortments and prices by restricting the search space to intrinsic utility ordered assortments, and they provide insights on how the optimal solution behaves regarding prices and their relation with utilities. Based on these propositions, the joint assortment and pricing optimisation problem for the TLM can be written in a more succinct way. From Proposition <ref>, the solution is an intrinsic utility ordered set S_k=[k] for some k≤n. Suppose there exists an optimal solution of the form (S_k,p) for a fixed value k. In that case, recall that it is sufficient to restrict attention to valid pairs (S_k,p), meaning that c(S_k,p)=S_k. Consider a fixed k≤n. By Proposition <ref>, at optimality, u_i-p_i ≥ u_j-p_j ∀ 1≤i<j≤k. Therefore, the condition c(S_k,p)=S_k can be written as

g_ij(p) := exp(u_i-p_i)-(1+t)·exp(u_j-p_j) ≤ 0, ∀ 1≤i<j≤k

As a result, the joint k-assortment and pricing optimisation problem for the TLM, which aims at finding an optimal assortment S_k of size k with k≤n, can be written as:

maximize over p: R^(k)(p) := ∑_{i∈S_k} p_i·exp(u_i-p_i) / (∑_{i∈S_k} exp(u_i-p_i)+a_0)
subject to: g_ij(p) ≤ 0, ∀ 1≤i<j≤k

Note that, if exp(u_1-u_k) ≤ (1+t), then the solution is the same as in the unconstrained case, because any fixed price can be assigned without creating dominances. Hence, the optimal revenue R^(k) can be calculated using equation (<ref>), and all prices are equal to 1+R^(k). On the other hand, if exp(u_1-u_k)>1+t, as in Example <ref>, the prices need to be adjusted in order to avoid dominances. The next theorem is the main result of this section.

Problem <ref> can be solved in polynomial time.

The intuition behind the proof is based on Proposition <ref> and the study of the Lagrangean relaxation of problem (<ref>). Observe that, since u_i-p_i ≥ u_j-p_j (i≤j) at optimality, the largest ratio between attractiveness values is obtained for products 1 and k. This ratio can also occur for more products, but only if they have the same net utility as product 1 or product k. Thus, it must be the case that there are non-negative integers k_1 and k_2 with k_1+k_2 ≤ k such that, letting I_1=[k_1] and I_2={k-k_2+1,k-k_2+2,…,k}, the set of constraints C(k_1,k_2)={g_ij(p) | i∈I_1, j∈I_2} is satisfied at equality by the optimal solution (see the proof in Appendix <ref> for details). Since it is only necessary to study a polynomial number of combinations of constraints satisfied at equality and, for each one of those combinations, a closed-form solution is available, the result follows.
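Independently of the closed form, problem (<ref>) can also be handed to a generic constrained optimiser, which is useful as a numerical cross-check. The sketch below is ours (a local method, so only indicative), using scipy's SLSQP solver; it assumes the utilities `u` are already sorted in decreasing order:

```python
import numpy as np
from scipy.optimize import minimize

def solve_k_pricing(u, t, a0):
    """Numerically solve the k-assortment pricing problem for S_k = [k]."""
    k = len(u)

    def neg_revenue(p):
        a = np.exp(u - p)
        return -(p @ a) / (a.sum() + a0)

    # g_ij(p) <= 0 rewritten as (1+t) exp(u_j - p_j) - exp(u_i - p_i) >= 0
    cons = [{'type': 'ineq',
             'fun': (lambda p, i=i, j=j:
                     (1 + t) * np.exp(u[j] - p[j]) - np.exp(u[i] - p[i]))}
            for i in range(k) for j in range(i + 1, k)]

    res = minimize(neg_revenue, x0=np.ones(k), method='SLSQP',
                   constraints=cons)
    return res.x, -res.fun
```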
For the non-trivial case with exp(u_1-u_k)>1+t, where a fixed price fails to be optimal, the prices need to be adjusted in order to avoid the dominances. Let R^(k) and p^(k) be the optimal revenue and price vector. The following Lemma characterizes the structure of the optimal solution for problem <ref>.

The optimal solution to problem (<ref>) is either the same as in the unconstrained case (i.e., a fixed price, when exp(u_1-u_k) ≤ (1+t)) or the following holds at optimality:

a_1(p_1)/a_k(p_k)=1+t.

Moreover, there are non-negative integers k_1^*, k_2^* with k_1^*+k_2^* ≤ k such that:

R^(k)=W( [ (k_1^*+k_2^*/(1+t)) · exp( ((1+t)∑_{i∈I_1}u_i + ∑_{i∈I_2}u_i + k_2^*·ln(1+t))/(k_1^*(1+t)+k_2^*) - 1 ) + ∑_{i∈I̅_k} exp(u_i-1) ] / a_0 ),

where I_1=[k_1^*], I_2={k-k_2^*+1,k-k_2^*+2,…,k} and I̅_k=[k]∖(I_1∪I_2). The optimal prices can be obtained as follows:

p^(k)_i = 1+R^(k)+u_i - ((1+t)∑_{i∈I_1}u_i + ∑_{i∈I_2}u_i + k_2^*·ln(1+t))/(k_1^*(1+t)+k_2^*) if i∈I_1,
p^(k)_i = 1+R^(k)+u_i - ((1+t)∑_{i∈I_1}u_i + ∑_{i∈I_2}u_i + k_2^*·ln(1+t))/(k_1^*(1+t)+k_2^*) + ln(1+t) if i∈I_2,
p^(k)_i = 1+R^(k) if i∈I̅_k.

The Lemma yields a procedure for obtaining the optimal solution of problem (<ref>): enumerate the pairs (k_1,k_2), evaluate the corresponding closed form, and keep the best. Running this procedure at most n times (once for each k≤n) and retaining the assortment and prices yielding the highest R^(k), one can find the optimal assortment and price vector for any given instance. Its intuition is to mimic the optimal strategy for the regular MNL (the Fixed-Price policy) as much as possible. However, since it needs to adjust prices in order to avoid dominances, the procedure raises the prices of the higher intrinsic-utility products (making them less attractive) and reduces the prices of the lower intrinsic-utility ones, making them more attractive to customers and preventing them from being dominated. This allows the optimal strategy to have an edge over strategies ignoring the threshold-induced dominances, such as the Fixed-Price policy and, to a lesser extent, the Quasi-Same Price policy <cit.>. The Quasi-Same Price policy only adjusts the price of the lowest-attractiveness product, instead of adjusting both extremes of the attractiveness spectrum and potentially multiple products.
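The enumeration implied by the Lemma is easy to transcribe. The following sketch is our own code and naming (it assumes utilities sorted decreasingly and uses scipy's Lambert function); it evaluates the closed form for every admissible pair (k_1,k_2) and every k:

```python
import numpy as np
from scipy.special import lambertw

def revenue_closed_form(u, t, a0, k1, k2):
    """R^(k)(k1, k2) for u[0] >= ... >= u[k-1] (closed form of the Lemma)."""
    k = len(u)
    I1, I2, Ibar = u[:k1], u[k - k2:], u[k1:k - k2]
    C1 = ((1 + t) * I1.sum() + I2.sum() + k2 * np.log(1 + t)) \
         / (k1 * (1 + t) + k2) - 1.0
    arg = ((k1 + k2 / (1 + t)) * np.exp(C1) + np.exp(Ibar - 1).sum()) / a0
    return lambertw(arg).real

def optimal_pricing(u, t, a0):
    """Best (k, k1, k2) and revenue over all intrinsic-utility-ordered sets."""
    u = np.sort(np.asarray(u, dtype=float))[::-1]
    best = ((0, 0, 0), -np.inf)
    for k in range(1, len(u) + 1):
        if np.exp(u[0] - u[k - 1]) <= 1 + t:   # unconstrained: fixed price
            R = lambertw(np.exp(u[:k] - 1).sum() / a0).real
            cand = ((k, 0, 0), R)
        else:
            cand = max((((k, k1, k2),
                         revenue_closed_form(u[:k], t, a0, k1, k2))
                        for k1 in range(1, k)
                        for k2 in range(1, k - k1 + 1)),
                       key=lambda c: c[1])
        best = max(best, cand, key=lambda c: c[1])
    return best
```

On the instance of Example <ref> (u=2 plus ten products with u'=1, t=1, a_0=1), this enumeration returns a revenue strictly above the R' ≈ 1.298 achieved by the hand-picked prices of that example, as expected from optimality.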
§ CONCLUSION AND FUTURE WORK

This paper studies the assortment optimization problem under the Two-Stage Luce model (2SLM), a discrete choice model introduced by <cit.> that generalizes the standard multinomial logit model (MNL) with a dominance relation and may violate regularity. The paper proved that the assortment problem under the 2SLM can be solved in polynomial time. The paper also considered the capacitated assortment problem under the 2SLM and proved that the problem becomes NP-hard in this setting. We also provided polynomial-time algorithms for special cases of the capacitated problem when (1) the dominance relation is attractiveness-correlated and when (2) its transitive reduction is a forest. We also provide an Appendix with numerical experiments that highlight the performance of the proposed algorithms against classical strategies used in the literature.

There are at least five interesting avenues for future research. First, one may wish to study how to generalize the 2SLM further while still keeping the assortment problem solvable in polynomial time. For example, one can try to check whether there exists a model that unifies the 2SLM and the elegant work in <cit.> where the assortment problem is still solvable in polynomial time. Second, given that the capacitated version of the 2SLM is NP-hard under Turing reductions (Theorem <ref>), it is interesting to see whether there exist good approximation algorithms for this problem. Third, one can explore different forms of dominance. For example, one may consider dominances specified by a discrete relation or a continuous functional form between products. Fourth, one can try to generalise our results for the Joint Assortment and Pricing Problem under the Threshold Luce model to a more general setting where price sensitivities depend on each product. Finally, one can try to mix attention models with dominance relations, meaning that a customer first perceives a subset of the products, dictated by an attention filter, and then filters the products even more using dominance relations.

§ ACKNOWLEDGEMENTS

We thank Yuval Filmus for his helpful insights leading us to find useful literature on this topic. Thanks are also due to Guillermo Gallego for suggesting extending our assortment results to the GAM model, and to Flavia Bonomo for relevant discussions.

§ PROOFS

In this section we provide the proofs missing from the main text.

Proof of Theorem <ref>. The proof considers four problems:

* Problem (P1): maximum weighted independent set of size at most C for bipartite graphs.

* Problem (P2): maximum weighted independent set of size equal to C for bipartite graphs.

* Problem (P3): optimal assortment under the General Luce model of size C.

* Problem (P4): optimal capacitated assortment under the Two-Stage Luce model of size at most C.

The proof shows that Problems (P2), (P3), and (P4) are NP-hard, using the NP-hardness of Problem (P1) <cit.> as a starting point. First observe that Problem (P2) is NP-hard under Turing reductions. Indeed, Problem (P1) can be reduced to solving C instances of Problem (P2) with budget c (1 ≤ c ≤ C). We now show that Problem (P3) is NP-hard. Consider Problem (P2) over a bipartite graph G=(V=V_1∪V_2, E), where V_1∩V_2 = ∅, every edge (v_1,v_2)∈E satisfies v_1∈V_1 and v_2∈V_2, w_v is the weight of vertex v, and C is the budget. We show that Problem (P2) over this bipartite graph can be polynomially reduced to Problem (P3). The reduction assigns each vertex v to a product with a(v) = 1 and r_v = w_v, sets a_0 = 0, and uses a capacity C. Moreover, the reduction uses the following dominance relation: v_1 ≻ v_2 iff (v_1,v_2) ∈ E. This dominance relation is irreflexive, anti-symmetric, and transitive, since the graph is bipartite. A solution to Problem (P2) is a feasible solution to Problem (P3), since the independent set cannot contain two vertices v_1, v_2 with v_1 ≻ v_2 by construction. Similarly, a feasible assortment is an independent set, since the assortment cannot select two vertices v_1∈V_1 and v_2∈V_2 with (v_1,v_2)∈E, since v_1 ≻ v_2. The objective function of Problem (P3) reduces to maximizing (1/C)∑_{v∈V} r_v x_v, which is equivalent to maximizing ∑_{v∈V} r_v x_v since exactly C products are selected by every feasible assortment. The result follows by the NP-hardness of Problem (P2). Finally, Problem (P4) is NP-hard under Turing reductions. Indeed, Problem (P3) can be reduced to solving C instances of Problem (P4) with capacity c (1 ≤ c ≤ C).
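For concreteness, the reduction in the proof above can be written out as follows. This is a schematic sketch with our own naming, and the brute-force checker is only meant for tiny instances:

```python
from itertools import combinations

def reduction(V1, V2, edges, w, C):
    """Map a bipartite MWIS instance to a 2SLM assortment instance."""
    products = list(V1) + list(V2)
    attractiveness = {v: 1.0 for v in products}   # a(v) = 1
    revenue = dict(w)                             # r_v = w_v
    dominates = set(edges)                        # v1 > v2 iff (v1, v2) in E
    return products, attractiveness, revenue, dominates, 0.0, C  # a0 = 0

def brute_force_assortment(products, revenue, dominates, C):
    """Best feasible size-C assortment; its revenue is the average weight."""
    best = (float('-inf'), ())
    for S in combinations(products, C):
        if any((x, y) in dominates for x in S for y in S):
            continue                              # S is not an independent set
        best = max(best, (sum(revenue[v] for v in S) / C, S))
    return best
```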
Proof of Theorem <ref>. By Theorem <ref>, it suffices to show that the recurrences solve Problem (<ref>) in polynomial time. The correctness of recurrence 𝒜(v,c) comes from the fact that vertex v dominates all its descendants and cannot be present in any assortment featuring any of them. The correctness of recurrence 𝒜^+(S,c) follows from the fact that e is not dominated by, and does not dominate, any element in S, since they are all children of the same node. This also holds for the descendants of e and the descendants of the elements in S. Hence, the optimal assortment is obtained by splitting the capacity c into n_1 and n_2 and merging the best assortments for 𝒜^+(S∖{e},n_1) and 𝒜(e,n_2) for some n_1, n_2 ≥ 0 summing to c. The recurrences can be solved in polynomial time since the computation for each vertex v and capacity c takes O(nC) time, giving an overall time complexity of O(n^2C^2).

Proof of Proposition <ref>. We prove this by contradiction. Suppose p_i^* < R^* for some i∈S^*. Then, keeping the same prices, Ŝ=S^*∖{i} has a better revenue than the optimal solution. Indeed, let us calculate R(Ŝ):

R(Ŝ) = ∑_{j∈Ŝ} e^{u_j-p_j^*}·p_j^* / (∑_{j∈Ŝ} e^{u_j-p_j^*}+a_0)
= (∑_{j∈S^*} e^{u_j-p_j^*}·p_j^* - e^{u_i-p_i^*}·p_i^*) / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)
= [∑_{j∈S^*} e^{u_j-p_j^*}·p_j^* / (∑_{j∈S^*} e^{u_j-p_j^*}+a_0)] · [(∑_{j∈S^*} e^{u_j-p_j^*}+a_0) / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)] - e^{u_i-p_i^*}·p_i^* / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)
= [∑_{j∈S^*} e^{u_j-p_j^*}·p_j^* / (∑_{j∈S^*} e^{u_j-p_j^*}+a_0)] · [1 + e^{u_i-p_i^*} / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)] - e^{u_i-p_i^*}·p_i^* / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)
= R^*·[1 + e^{u_i-p_i^*} / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)] - e^{u_i-p_i^*}·p_i^* / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)
= R^* + [e^{u_i-p_i^*} / (∑_{j∈S^*} e^{u_j-p_j^*} - e^{u_i-p_i^*}+a_0)] · [R^*-p_i^*]_Γ

Now Γ is positive because p_i^* < R^*, but this implies R(Ŝ) > R^*, contradicting the optimality of R^*.

Proof of Proposition <ref>. Let (S^*,p^*) be an optimal solution; we can assume that (S^*,p^*)∈𝒱. We proceed by contradiction. Suppose that there is a product i not included in the optimal solution and another product j with smaller intrinsic utility included in S^*. We show that we can include product i, remove j, and get a greater revenue. Let Ŝ=(S^*∖{j})∪{i} be the set obtained by removing product j and including product i. Let p̂_i=u_i-u_j+p_j^*; this means that the total attractiveness remains unchanged, and no new domination relations appear, given that product j already had the same level of attractiveness that product i now has. Observe that, since u_i ≥ u_j, we have p̂_i ≥ p_j^*. Let us calculate R(Ŝ,p̂), where p̂ is the same as p^* except for the proposed change in price:

R(Ŝ,p̂) = ∑_{k∈Ŝ} e^{u_k-p̂_k}·p̂_k / (∑_{k∈Ŝ} e^{u_k-p̂_k}+a_0)
= (∑_{k∈S^*} e^{u_k-p_k^*}·p_k^* - e^{u_j-p_j^*}·p_j^* + e^{u_i-p̂_i}·p̂_i) / (∑_{k∈Ŝ} e^{u_k-p̂_k}+a_0)
= [∑_{k∈S^*} e^{u_k-p_k^*}·p_k^* / (∑_{k∈Ŝ} e^{u_k-p̂_k}+a_0)]_{R^*} + (e^{u_i-p̂_i}·p̂_i - e^{u_j-p_j^*}·p_j^*) / (∑_{k∈Ŝ} e^{u_k-p̂_k}+a_0)
= R^* + [e^{u_j-p_j^*} / (∑_{k∈Ŝ} e^{u_k-p̂_k}+a_0)]_{≥0} · [p̂_i-p_j^*]_{>0}
> R^*,

where we first rewrite R(Ŝ,p̂) using (S^*,p^*) — we just swapped product i for product j and the total attractiveness remains the same, so the denominator does not change — then identify R^*, and finally use u_i-p̂_i=u_j-p_j^* to factorize the remaining terms. So we found a pair (Ŝ,p̂) yielding strictly more revenue than (S^*,p^*) while including product i, which contradicts the optimality of (S^*,p^*).
Proof of Lemma <ref>. The proof (due to <cit.>) is useful because it provides intuition on how the optimal price varies when constrained to a fixed additive market share among any two products. By the equality constraint, we have p_j=u_j-ln(T-exp(u_i-p_i)), so H(p_i,p_j) can be rewritten purely as a function of p_i as:

H(p_i)=p_i·exp(u_i-p_i) + (u_j-ln(T-exp(u_i-p_i)))·(T-exp(u_i-p_i)).

Now, let us calculate the first derivative of H(p_i) with respect to p_i:

∂H(p_i)/∂p_i = (-p_i + u_j - ln(T-exp(u_i-p_i)))·exp(u_i-p_i)

Clearly, the first factor is monotonically decreasing from positive to negative values as p_i increases from 0 to ∞. Therefore H(p_i) is strictly unimodal and reaches its maximum value at:

p_i^*=p_j^*=ln((exp(u_i)+exp(u_j))/T).

Proof of Proposition <ref>. We prove this result by contradiction. Let i be the first index where the condition does not hold, that is, p_i^* < p_{i+1}^*. Using Lemma <ref>, we find p̂ satisfying p_i^* < p̂ < p_{i+1}^*. Does this new price alter the consideration set? We show that this is not the case. Indeed, the effect is two-fold: the price of product i increases, and the price of product i+1 decreases. We analyse these two consequences:

* Increase of the price of product i: this means that the attractiveness of i decreases. Note that u_i-p̂ ≥ u_{i+1}-p_{i+1}^*, so neither i ≻ i+1 nor i+1 ≻ i, because their attractiveness values are now even closer than before. Can i be dominated now by another product? No, because u_i ≥ u_{i+1} implies u_i-p̂ ≥ u_{i+1}-p̂ ≥ u_{i+1}-p_{i+1}^*. Therefore the new attractiveness of i is still larger than the new attractiveness of i+1; the last inequality implies that the new attractiveness of i is larger than the old attractiveness of i+1, and i+1 was not previously dominated by any other product.

* Decrease of the price of product i+1: previously, i+1 was not dominated by any product. Can i+1 be dominated now? No, because if i+1 was not dominated before, with the smaller price p̂ its attractiveness is larger, so it cannot be dominated now either (the only other product that changed attractiveness is i, and it now has a smaller attractiveness). Can i+1 dominate another product with its new higher attractiveness? No, because u_i ≥ u_{i+1} implies u_i-p_i^* ≥ u_{i+1}-p_i^* ≥ u_{i+1}-p̂, so the old attractiveness of product i is larger than the new attractiveness of product i+1, and given that i did not dominate any other product before, the new price does not make i+1 dominate another product either.

So, letting p^fix be exactly the same as p^* but replacing both p_i^* and p_{i+1}^* with p̂, the pair (S^*,p^fix) yields strictly more revenue than (S^*,p^*) (by Lemma <ref>), contradicting the optimality assumption. The fact that equal intrinsic utilities imply equal prices at optimality can easily be demonstrated as follows: if two products with equal intrinsic utility have different prices, then using Lemma <ref> we obtain strictly more revenue by assigning them the same price, and no new domination occurs because the new price is confined between the previous prices.

Proof of Proposition <ref>. We prove this by contradiction. Let p^* be the optimal solution and i be the first index where the condition does not hold. This means that u_i-p_i^* < u_{i+1}-p_{i+1}^*. We can extrapolate this inequality further and say:

u_{i+1}-p_i^* < u_i-p_i^* < u_{i+1}-p_{i+1}^* < u_i-p_{i+1}^*,

because u_i ≥ u_{i+1} and p_i^* ≥ p_{i+1}^* by Propositions <ref> and <ref>, respectively. We now do the following: define p_i' and p_{i+1}' such that exp(u_i-p_i')+exp(u_{i+1}-p_{i+1}')=exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*) and exp(u_i-p_i')=exp(u_{i+1}-p_{i+1}').
This means that:

p_i' = u_i - ln((exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*))/2)
p_{i+1}' = u_{i+1} - ln((exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*))/2)

Consider H(p_i,p_{i+1})=p_i·exp(u_i-p_i) + p_{i+1}·exp(u_{i+1}-p_{i+1}), where exp(u_i-p_i) + exp(u_{i+1}-p_{i+1}) = exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*). By Lemma <ref>, H(p_i,p_{i+1}) is strictly increasing in p_i for p_i ≤ p̂ and strictly decreasing for p_i ≥ p̂, with

p̂ = ln((exp(u_i)+exp(u_{i+1}))/(exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*)))

the solution of the corresponding maximization problem of Lemma <ref>. We can verify that p̂ < p_i' < p_i^*. The first inequality is straightforward. Indeed:

p_i' = u_i - ln((exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*))/2)
= ln(2·exp(u_i)/(exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*)))
> ln((exp(u_i)+exp(u_{i+1}))/(exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*))) = p̂,

proving the desired inequality. Now, for the second one:

p_i' = ln(2·exp(u_i)/(exp(u_i-p_i^*)+exp(u_{i+1}-p_{i+1}^*)))
≤ ln(2·exp(u_i)/(exp(u_i-p_i^*)+exp(u_i-p_{i+1}^*)))
= ln(2·exp(u_i)/(exp(u_i)·(exp(-p_i^*)+exp(-p_{i+1}^*))))
< ln(2/(2·exp(-p_i^*)))
= p_i^*.

Thus we have:

p_i'·exp(u_i-p_i')+p_{i+1}'·exp(u_{i+1}-p_{i+1}') > p_i^*·exp(u_i-p_i^*)+p_{i+1}^*·exp(u_{i+1}-p_{i+1}^*),

meaning that we have the same assortment but with prices p_i' and p_{i+1}' generating strictly more revenue than the optimal prices, which is a contradiction. It remains to show that with these new prices we stay in the same consideration set. It is enough to show that the new net utilities are bounded by previous values of the net utilities. Indeed, we can verify that p_{i+1}^* ≤ p_{i+1}' ≤ p_i' ≤ p_i^* by simply using the definitions. We also know, by construction, that u_i-p_i'=u_{i+1}-p_{i+1}', hence u_i-p_i'=u_{i+1}-p_{i+1}' ≤ u_{i+1}-p_{i+1}^*. So even though the price of product i decreased, its new attractiveness is bounded above by a pre-existing attractiveness, thus not changing the consideration set. By the same reasoning, u_{i+1}-p_{i+1}'=u_i-p_i' ≥ u_i-p_i^*, meaning that the new attractiveness is bounded below by a pre-existing one, so i+1 is not dominated with these new prices either. So the consideration set stays the same, concluding the proof.

Proof of Theorem <ref>. We first write problem (<ref>) in minimization form to directly apply the Karush-Kuhn-Tucker (KKT) conditions <cit.>:

minimize over p: -R^(k)(p)
subject to: g_ij(p) ≤ 0, ∀ 1≤i<j≤k

The associated Lagrangean function is:

ℒ_k(p,μ) = -R^(k)(p) + ∑_{1≤i<j≤k} μ_ij·g_ij(p),

where μ_ij ≥ 0 are the associated Lagrange multipliers. Recall that, if exp(u_1-u_k) ≤ (1+t), the optimal revenue R^(k) can be calculated using equation (<ref>), and the solution corresponds to a fixed-price policy as for the regular multinomial logit. On the other hand, if exp(u_1-u_k)>(1+t), any fixed price causes product k to be dominated by product 1. Thus, to include product k in the assortment we need to adjust the prices. Let p=(p_1,…,p_k) be the optimal price vector for problem (<ref>). Observe that it cannot happen that a_1(p_1)/a_k(p_k)<1+t: by Proposition <ref>, this would also imply a_1(p_1)/a_2(p_2)<1+t, and using Lemma <ref> we could find p̂ such that assigning p̂ to products 1 and 2 yields a larger revenue (and no dominance relation appears, since the attractiveness of product 1 is reduced and the attractiveness of product 2 is increased but still less than that of product 1), which contradicts optimality.
Therefore, g_1k must be satisfied with equality, meaning a_1(p_1)/a_k(p_k)=1+t. Furthermore, at optimality, u_i-p_i ≥ u_j-p_j ∀ i≤j (by Proposition <ref>), and thus the biggest ratio between attractiveness values is observed for products 1 and k, and is exactly equal to 1+t. This ratio can be replicated for other pairs of products, but only if they share the same net utility (and thus attractiveness) as product 1 or product k. Therefore, it must be the case that there are integers k_1 and k_2 with k_1+k_2 ≤ k, such that all products in I_1=[k_1] share the same attractiveness (a_1(p_1)) and all products in I_2={k-k_2+1,k-k_2+2,…,k} share the same attractiveness as well (a_k(p_k)). This means that the set of constraints C(k_1,k_2)={g_ij(p) | i∈I_1, j∈I_2} is satisfied with equality at optimality. We now study the derivative of equation (<ref>) with respect to each price p_i to obtain the KKT conditions. We assume that the first k_1 products share the same net utility, u_s=u_1-p_1=u_i-p_i ∀ i∈I_1, and that the last k_2 products also share a common net utility, which we call u_f, that is, u_f=u_k-p_k=u_i-p_i ∀ i∈I_2, where these two quantities satisfy:

u_s-u_f=ln(1+t).

Let us write the derivatives of the Lagrangean depending on where the index i belongs. If i∈I_1, then:

dℒ_k/dp_i = [exp(u_i-p_i)/(∑_{j∈S_k} exp(u_j-p_j)+a_0)]·[p_i-1-R^(k)(p)] - exp(u_i-p_i)·∑_{j∈I_2} μ_ij;

if i∈I_2, we have:

dℒ_k/dp_i = [exp(u_i-p_i)/(∑_{j∈S_k} exp(u_j-p_j)+a_0)]·[p_i-1-R^(k)(p)] + (1+t)·exp(u_i-p_i)·∑_{j∈I_1} μ_ji;

and finally, if i∈I̅_k=[k]∖(I_1∪I_2), the derivative takes the following form:

dℒ_k/dp_i = [exp(u_i-p_i)/(∑_{j∈S_k} exp(u_j-p_j)+a_0)]·[p_i-1-R^(k)(p)].

Observe that, ∀ i∈I̅_k, dℒ_k/dp_i=0 ⇔ p_i=1+R^(k)(p), and the right-hand side does not depend on i, so all products in I̅_k share the same price, which we denote p̅. We can rewrite all prices and the revenue in terms of u_s and p̅, using the following relations:

* ∀ i∈I_1: u_1-p_1=u_i-p_i ⇒ p_i=u_i-u_s;

* ∀ i∈I_2: u_1-p_1=u_i-p_i+ln(1+t) ⇒ p_i=u_i-u_s+ln(1+t).

Note now that, at optimality, for a fixed k, the prices are determined by k_1 and k_2. Thus, the optimal revenue can be written explicitly as a function of k, k_1 and k_2, taking the following form:

R^(k)(k_1,k_2) = [∑_{i∈I_1}(u_i-u_s)·exp(u_s) + p̅·exp(-p̅)·∑_{i∈I̅_k}exp(u_i) + ∑_{i∈I_2}(u_i-u_s+ln(1+t))·exp(u_s-ln(1+t))] / [∑_{i∈I_1}exp(u_s) + exp(-p̅)·∑_{i∈I̅_k}exp(u_i) + ∑_{i∈I_2}exp(u_s-ln(1+t)) + a_0]

Note that p̅=1+R^(k)(k_1,k_2) (Equation (<ref>)) and let E(k_1,k_2)=∑_{i∈I̅_k}exp(u_i).
Using these two relations, we can rewrite the optimal revenue as:

R^(k)(k_1,k_2) = [e^{u_s}·∑_{i∈I_1}(u_i-u_s) + (e^{u_s}/(1+t))·∑_{i∈I_2}(u_i-u_s+ln(1+t)) + E(k_1,k_2)·(1+R^(k)(k_1,k_2))·e^{-(1+R^(k)(k_1,k_2))}] / [e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0]

Up to this point, we have an equation relating the optimal revenue R^(k)(k_1,k_2) and u_s. From equation (<ref>), after reordering terms we have, ∀ i∈I_1:

(p_i-1-R^(k)(k_1,k_2)) / [e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0] = ∑_{j∈I_2} μ_ij
(u_i-u_s-1-R^(k)(k_1,k_2)) / [e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0] = ∑_{j∈I_2} μ_ij

Analogously, from equation (<ref>), after reordering terms we have, ∀ i∈I_2:

(p_i-1-R^(k)(k_1,k_2)) / [e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0] = -(1+t)·∑_{j∈I_1} μ_ji
(1/(1+t))·(u_i-u_s+ln(1+t)-1-R^(k)(k_1,k_2)) / [e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0] = -∑_{j∈I_1} μ_ji

Now, if we add equations (<ref>) over all i∈I_1, add equations (<ref>) over all i∈I_2, and sum those two results, we can derive the value of R^(k)(k_1,k_2) as follows:

0 = ∑_{i∈I_1,j∈I_2} μ_ij - ∑_{i∈I_1,j∈I_2} μ_ij = [∑_{i∈I_1}(u_i-u_s-1-R^(k)(k_1,k_2)) + (1/(1+t))·∑_{i∈I_2}(u_i-u_s+ln(1+t)-1-R^(k)(k_1,k_2))] / [e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0]

R^(k)(k_1,k_2)·(k_1+k_2/(1+t)) = ∑_{i∈I_1}u_i + (1/(1+t))·∑_{i∈I_2}u_i - (1+u_s)·(k_1+k_2/(1+t)) + k_2·ln(1+t)/(1+t)

R^(k)(k_1,k_2) = [(1+t)·∑_{i∈I_1}u_i + ∑_{i∈I_2}u_i + k_2·ln(1+t)] / [k_1(1+t)+k_2] - 1 - u_s

We now have two equations relating R^(k)(k_1,k_2) and u_s, namely (<ref>) and (<ref>). Using these equations, we can find the values of the optimal revenues and all the pricing structure while varying k_1 and k_2. If we define the constant:

C_1(k_1,k_2) = [(1+t)·∑_{i∈I_1}u_i + ∑_{i∈I_2}u_i + k_2·ln(1+t)] / [k_1(1+t)+k_2] - 1,

Equation (<ref>) becomes:

R^(k)(k_1,k_2) = C_1(k_1,k_2) - u_s,

and from Equation (<ref>) we can deduce the following relations:

1+R^(k)(k_1,k_2) = C_1(k_1,k_2) - u_s + 1, and e^{-(1+R^(k)(k_1,k_2))} = e^{u_s-C_1(k_1,k_2)-1}.

We will use these relations in Equation (<ref>).
Let us first multiply both sides by the denominator of the right-hand side:

R^(k)(k_1,k_2)·(e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-(1+R^(k)(k_1,k_2))} + a_0) = e^{u_s}·∑_{i∈I_1}(u_i-u_s) + (e^{u_s}/(1+t))·∑_{i∈I_2}(u_i-u_s+ln(1+t)) + E(k_1,k_2)·(1+R^(k)(k_1,k_2))·e^{-(1+R^(k)(k_1,k_2))}

Using equations (<ref>) to replace the value of R^(k)(k_1,k_2) and writing everything as a function of u_s, we have:

(C_1(k_1,k_2)-u_s)·(e^{u_s}·(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{u_s-C_1(k_1,k_2)-1} + a_0) = e^{u_s}·∑_{i∈I_1}(u_i-u_s) + (e^{u_s}/(1+t))·∑_{i∈I_2}(u_i-u_s+ln(1+t)) + E(k_1,k_2)·(C_1(k_1,k_2)-u_s+1)·e^{u_s-C_1(k_1,k_2)-1}

We focus first on the left-hand side (LHS) of Equation (<ref>):

LHS = (C_1(k_1,k_2)-u_s)·(e^{u_s}·[(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-C_1(k_1,k_2)-1}] + a_0)

For ease of notation, define C_2(k_1,k_2) as:

C_2(k_1,k_2) = (k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-C_1(k_1,k_2)-1}

Rewriting the LHS using C_2(k_1,k_2):

LHS = (C_1(k_1,k_2)-u_s)·(e^{u_s}·C_2(k_1,k_2) + a_0)

We now focus on the right-hand side (RHS) of Equation (<ref>):

RHS = e^{u_s}·∑_{i∈I_1}(u_i-u_s) + (e^{u_s}/(1+t))·∑_{i∈I_2}(u_i-u_s+ln(1+t)) + E(k_1,k_2)·(C_1(k_1,k_2)-u_s+1)·e^{u_s-C_1(k_1,k_2)-1}
= e^{u_s}·[∑_{i∈I_1}u_i + (1/(1+t))·∑_{i∈I_2}u_i + k_2·ln(1+t)/(1+t) - u_s·(k_1+k_2/(1+t))] + e^{u_s}·e^{-C_1(k_1,k_2)-1}·E(k_1,k_2)·(C_1(k_1,k_2)-u_s+1)
= e^{u_s}·(k_1+k_2/(1+t))·[C_1(k_1,k_2)-u_s+1] + e^{u_s}·e^{-C_1(k_1,k_2)-1}·E(k_1,k_2)·(C_1(k_1,k_2)-u_s+1)
= e^{u_s}·(C_1(k_1,k_2)-u_s+1)·[(k_1+k_2/(1+t)) + E(k_1,k_2)·e^{-C_1(k_1,k_2)-1}]
= e^{u_s}·(C_1(k_1,k_2)-u_s+1)·C_2(k_1,k_2)

Putting together equations (<ref>) and (<ref>), we have:

(C_1(k_1,k_2)-u_s)·(e^{u_s}·C_2(k_1,k_2) + a_0) = e^{u_s}·(C_1(k_1,k_2)-u_s+1)·C_2(k_1,k_2)
(C_1(k_1,k_2)-u_s)·e^{u_s}·C_2(k_1,k_2) + (C_1(k_1,k_2)-u_s)·a_0 = (C_1(k_1,k_2)-u_s)·e^{u_s}·C_2(k_1,k_2) + e^{u_s}·C_2(k_1,k_2)
(C_1(k_1,k_2)-u_s)·a_0 = e^{u_s}·C_2(k_1,k_2)
e^{u_s} = -(a_0/C_2(k_1,k_2))·(u_s-C_1(k_1,k_2))

Equation (<ref>) has a known explicit closed-form solution, which can be found using the following Lemma:

Let a,b ≠ 0 and c be real numbers, and let W(·) be the Lambert function <cit.>. The solution of the transcendental algebraic equation in x,

e^{-ax}=b(x-c),

is:

x=c+(1/a)·W(a·e^{-ac}/b).

Proof of Lemma <ref>. Let us start from equation (<ref>) and find an explicit solution to it:

e^{-ax} = b(x-c)
e^{-ax+ac-ac} = b(x-c)     (adding and subtracting ac in the exponent)
a·e^{-ac}/b = a·(x-c)·e^{a(x-c)}     (multiplying both sides by (a/b)·e^{a(x-c)})
W(a·e^{-ac}/b) = a·(x-c)     (using the definition of W(·) as in Eq. (<ref>))
x = c+(1/a)·W(a·e^{-ac}/b)     (reorganising and isolating x),

completing the proof.

Identifying terms in equation (<ref>), the solution for u_s is:

u_s = C_1(k_1,k_2) - W((C_2(k_1,k_2)/a_0)·e^{C_1(k_1,k_2)})

Let us call this value u_s(k,k_1,k_2), making explicit that it is a function of the integers k, k_1 and k_2. To get the revenue for this specific combination of parameters, we can simply use equation (<ref>), giving us:

R^(k)(k_1,k_2) = C_1(k_1,k_2) - u_s(k,k_1,k_2)

Thus, the optimal revenue R^(k) for a given integer k can be obtained by:

R^(k) = max_{k_1,k_2 ≥ 1, k_1+k_2 ≤ k} R^(k)(k_1,k_2)

Noting that there are O(k^2) pairs (k_1,k_2) to evaluate, the proof follows.

Proof of Lemma <ref>. The optimal revenue is already calculated in Equation (<ref>). The proof follows by first obtaining u_s^*(k) from Equation (<ref>). Then, for the products in I_1, the prices can be obtained directly since their net utility equals u_s^*(k). For the products in I_2, since g_1k is satisfied with equality, all products share the same net utility, equal to u_s^*(k)-ln(1+t).
Finally, for the products in I̅_k, we can use the relation provided in equation (<ref>) to obtain the prices. More explicitly, let (k_1^*,k_2^*) be the integers satisfying R^(k)=R^(k)(k_1^*,k_2^*). To obtain the optimal prices, let u_s^*(k)=u_s(k,k_1^*,k_2^*). By Equation (<ref>), u_s^*(k) can be written as:

u_s^*(k) = [(1+t)·∑_{i∈I_1}u_i + ∑_{i∈I_2}u_i + k_2^*·ln(1+t)] / [k_1^*(1+t)+k_2^*] - 1 - R^(k)

Therefore, the optimal prices are given by:

p^(k)_i = u_i - u_s^*(k) if i∈I_1,
p^(k)_i = u_i - u_s^*(k) + ln(1+t) if i∈I_2,
p^(k)_i = 1+R^(k) if i∈I̅_k.

The Fixed-Price policy can be arbitrarily bad for the Joint Assortment and Pricing problem under the Threshold Luce model.

Proof of Lemma <ref>. Consider N+1 products, with product 1 having utility u>0, and a_0=1. Let the utility of all the remaining N products be αu, with α<1 such that, in the presence of product 1, all the other products are ignored for threshold t. The optimal revenue under a fixed-price strategy is <cit.>:

R'=W(exp(u-1)),

because, no matter what fixed price we select, the N lower-utility products are completely ignored and the first product is the only one contributing to the revenue, and this is the best revenue achievable in that case. Now, let us consider the optimal revenue obtained with the strategy described in Theorem (<ref>):

R^*=W([((1+t)+N)/(1+t)]·exp([(1+t)(u-1) + N(αu-1) + N·ln(1+t)]/[(1+t)+N]))

Let us find an explicit relation between R' and R^*. Starting from equation (<ref>):

R^* = W([((1+t)+N)/(1+t)]·exp([(1+t)(u-1) + N(αu-1) + N·ln(1+t)]/[(1+t)+N]))
= W([((1+t)+N)/(1+t)]·exp((u-1)·[(1+t)+Nα]/[(1+t)+N] + N·ln(1+t)/[(1+t)+N]))
= W([((1+t)+N)/(1+t)]·exp((u-1) + (N/[(1+t)+N])·(ln(1+t)-(u-1)·(1-α))_Γ))

We know that the Lambert function is concave, increasing and unbounded <cit.>. With this in mind, let u be such that Γ is greater than or equal to zero (for example, setting u=1.9, α=0.5 and t=0.5 makes Γ>0 while product 1 dominates the rest of the products), that is:

ln(1+t)/(1-α) + 1 ≥ u.

Using this, we have:

R^* ≥ W([((1+t)+N)/(1+t)]·exp(u-1)),

where the argument of the Lambert function is exactly the same as in R', but multiplied by a constant factor larger than one and growing with N. Putting everything together, we have:

R^* ≥ W([((1+t)+N)/(1+t)]·exp(u-1)) ≥ R'

The expression in the middle can be made arbitrarily larger than R' by letting N tend to infinity, and so can R^*. Thus, the Fixed-Price policy can be arbitrarily bad under the Threshold Luce model.

§ NUMERICAL EXPERIMENTS

This section presents numerical results on the performance of the algorithms developed in Sections <ref> and <ref>, compared against classical algorithms from the literature: revenue-ordered assortments for the assortment problem, and pricing policies like Fixed-Price <cit.>, which is optimal for the conventional MNL, and Quasi-Same Price <cit.>, which is optimal for the variant of the MNL proposed in that paper that takes search costs into consideration. Quasi-Same Price amounts to setting a fixed price for all products but one (the one having the smallest utility). We also provide some insights on the factors that influence the nature of the solution and some explanation of the difference in performance between the different strategies.

§.§ Assortment Optimisation

This section presents numerical results on the performance of revenue-ordered assortments (RO) against our proposed optimal strategy detailed in Section <ref>.
In order to do this, we vary the number of products n, the attractiveness of the outside option a_0, and the density d of the graph, which we use as the probability that a dominance relation is active for each pair of products (i.e., the probability that an edge in the dominance graph occurs). Theoretically, as shown in Example <ref>, the optimality gap can be as large as desired; in practice, we found gaps as large as 95.40%. Each tested family or class of instances is defined by essentially three numbers: the number of products n, the attractiveness of the outside option a_0, and the density d, which controls the probability that a dominance edge exists; we then compute the transitive closure over the resulting graph. It is worth noticing that we did not consider the case a_0=0 because, in that case, the optimal solution is simply to select the highest-revenue product, and therefore both strategies coincide. In total, we experimented with 48 classes or families of instances, each containing 250 instances. In each specific instance, revenues and utilities are drawn from a uniform distribution between 0 and 10. We ran both strategies (RO and the optimal strategy) and report the average and worst optimality gap for the RO strategy. We do not report running times because, as expected, the optimal strategy takes more time than RO, but all instances can be solved very fast in practice (less than half a second). Table <ref> presents the results, which can be summarized as follows:

* The average gap tends to increase with the number of products, reaching about 14% for 30 products. The worst gap is more instance-dependent (as it strongly depends on the dominance structure and on how revenues are matched with attractiveness), so it can be large both in smaller and larger instances. However, it tends to increase with the density of the dominance graph, as it is more likely for RO to choose a product that dominates potential contributors whose inclusion could be more profitable than keeping the higher-attractiveness one.

* The average gap generally widens as the outside-option attractiveness increases. With a high outside option, we typically expect to select more products to counterbalance the effect of the no-choice alternative. This can amplify the difference between the optimal strategy and RO, as the likelihood that the optimal solution turns out to be revenue-ordered decreases, given the randomness of the dominance relation.

* With higher densities, RO is more likely to make a mistake and include a product that dominates many potential contributors which, considered together, might be more profitable. Thus, both the average and the worst gap generally widen as the density increases. The exception occurs at the higher end of densities, where not many products can be included without provoking dominances. Here the solutions of both strategies tend to be similar and select a few high-revenue products. This is also interesting from a managerial standpoint: when customers have more clarity on which products are clearly superior in comparison, the offered assortment may drift towards being smaller, compared to when customers do not have a clear hierarchy among products.

§.§ Joint Assortment and Pricing Optimisation

This section presents numerical results related to solving the Joint Assortment and Pricing Problem discussed in Section <ref>.
We analyse the performance of our algorithm compared against the Fixed-Price strategy, which is optimal for the MNL, and the Quasi-Same Price strategy <cit.>, which is optimal for the MNL variant considered in that paper that takes search costs into consideration. The latter is basically a fixed price for all products but one, and it shares some similarities with our proposed pricing policy, which is a fixed price in general except at the higher and lower ends of the utility spectrum. Each tested family or class of instances is characterized by three numbers: the number of products n; the threshold t, which controls how tolerant customers are with respect to differences in attractiveness; and the attractiveness of the outside option a_0, which controls how likely it is that customers review all products without purchasing. In total, we experimented with 48 classes or families of instances, each containing 250 instances. In each specific instance, revenues and utilities are drawn from a uniform distribution between 0 and 10. We ran the three strategies — Fixed Price, Quasi-Same Price, and our optimal algorithm — and report the average and worst optimality gaps for the Fixed-Price and Quasi-Same Price strategies, as well as the cardinality of the offered set for each strategy. These numerical experiments were conducted in Python 3.6 on a computer with 8 processors (each with 3.6 GHz CPU) and 16 GB of RAM. Table <ref> presents the results, which can be summarized as follows:

* As expected, the optimal algorithm outperformed the other two algorithms in terms of revenue, while being quite fast to execute (less than half a second for all the instances simulated).

* The Fixed-Price policy performs the worst across the board, which is expected given that it has the fewest degrees of freedom, as shown in Example <ref>. Although its average gap is quite low, it can be as high as 43.027%. In fact, the Fixed-Price policy can be arbitrarily bad; a proof of this fact is provided in Appendix <ref>, Lemma <ref>.

* The Quasi-Same Price policy also performs well on average, and the worst gap obtained was 29.964%, which is significantly better than the worst gap for the Fixed-Price policy.

* The cardinality of the optimal solution is always at least that of the Fixed-Price policy. This can be observed empirically or deduced analytically. The intuition behind it is that, given the functional form of the revenue for Fixed-Price and the fact that the Lambert function is strictly increasing, that strategy always tries to show as many products as possible. This, and the fact that under a common price the dominance relation only depends on intrinsic utilities, imply that there is a limit on the number of products that the fixed-price policy can offer (the last product k where a_1(p_1)/a_k(p_k)=exp(u_1-u_k) ≤ (1+t)) without causing any domination of low intrinsic-utility products. On the other hand, under the optimal algorithm (or Quasi-Same Price) we can go further and add products in such a way that the dominance relations are not triggered, and therefore we can include more products.

* The main difference stems from the fact that our strategy leverages both ends of the utility spectrum, and it reveals the following interesting insight: sometimes, in order to avoid low-attractiveness products being dominated, we want to increase the price of the higher-utility products (to make them less attractive) and, at the same time, reduce the price of lower-utility products, in order to make them more attractive and visible to the consumer.
"authors": [
"Alvaro Flores",
"Gerardo Berbeglia",
"Pascal Van Hentenryck"
],
"categories": [
"cs.DM"
],
"primary_category": "cs.DM",
"published": "20170626211102",
"title": "Assortment and Price Optimization Under the Two-Stage Luce model"
} |
http://arxiv.org/abs/1706.08927v1 | {
"authors": [
"Angelina Pesevski",
"Brian C. Franczak",
"Paul D. McNicholas"
],
"categories": [
"stat.ME"
],
"primary_category": "stat.ME",
"published": "20170627161903",
"title": "Subspace Clustering with the Multivariate-t Distribution"
} |
|
Neutral hydrogen (HI) will soon be the dark matter tracer observed over the largest volumes of the Universe thanks to the 21 cm intensity mapping technique. To unveil cosmological information, it is indispensable to understand the HI distribution with respect to dark matter. Using a full one-loop derivation of the power spectrum of HI, we show that higher-order corrections change the amplitude and shape of the power spectrum on typical cosmological (linear) scales. These effects go beyond the expected dark matter non-linear corrections and include non-linearities in the way the HI signal traces dark matter. We show that, on linear scales at z = 1, the HI bias drops by up to 15% in both real and redshift space, which results in underpredicting the mass of the halos in which HI lies. Non-linear corrections give rise to a significant scale dependence when redshift space distortions arise, in particular on the scale range of the baryonic acoustic oscillations (BAO). There is a factor of 5 difference between the linear and full HI power spectra over the whole BAO scale range, which modifies the ratios between the peaks. This effect will also be seen in other types of survey, and it will be essential to take it into account in future experiments in order to match the expectations of precision cosmology.

Cosmology: Large Scale Structure of the Universe - Cosmology: theory

§ INTRODUCTION

Future neutral hydrogen (HI) experiments such as the Square Kilometer Array <cit.>, its pathfinder MeerKAT, the Canadian Hydrogen Intensity Mapping Experiment <cit.>, the Hydrogen Intensity and Real-time Analysis eXperiment <cit.>, and the Baryon acoustic oscillations In Neutral Gas Observations <cit.> will map the cosmological neutral hydrogen within unprecedented volumes of the Universe thanks to the line intensity mapping (IM) technique. This technique relies on the measurement of the HI integrated intensity from hundreds of galaxies in one single large voxel (3D pixel) instead of detecting individual HI galaxies. The observed volumes will allow unrivalled constraints on cosmology <cit.>. Nevertheless, to achieve the expected levels of accuracy, one needs to understand how HI relates to the underlying dark matter distribution.

Current HI observations are quite sparse. At z∼0, HI is observed in emission at 21 cm, but current detections weaken quickly and vanish at z>0.1. At intermediate and higher redshifts, the main tracer of HI is Damped Ly-α systems (DLAs), objects with N_HI>10^20.3 cm^-2 displaying a damped Ly-α line in absorption in the spectrum of a distant quasar. As they are optically thick, the hydrogen in their midst is self-shielded and remains neutral. DLAs are actually thought to host most of the neutral gas within 0<z<5 <cit.> and hence to contain a significant reservoir of neutral gas for star formation at high redshift. Combining emission and absorption measurements, the redshift evolution of the fractional density of HI has been shown to decrease slightly from high to low redshift <cit.>. Such a mild but somewhat steady evolution leads to the picture of a balance between consumption and replenishment of the gas reservoir. Notwithstanding, even though these measurements are used altogether, emission and absorption surveys might not target the same population of objects. This is a crucial point when it comes to clarifying what the HI bias is.
DLAs at high redshift might not belong to the same population as HI galaxies at low redshift. The properties of DLA hosts remain largely unknown, either because the background quasar is several magnitudes brighter or because they are too faint to be detected by current spectrographs <cit.>. When it comes to the mass of their host dark matter halos, there seems to be a tension between 21 cm low-redshift galaxies and DLAs. There are only a handful of measurements of HI and DLA biases. <cit.> measured b_HI∼0.8 at z∼0 in the ALFALFA survey, while <cit.> and <cit.> measured the product of Ω_HI and the HI bias in IM data taken with the Green Bank Telescope at z∼0.8. The latter used the IM data in auto-correlation, while the former cross-correlated them with galaxy surveys to circumvent the contamination by foreground residuals. <cit.> measured b_DLA = 2.17±0.2 at z∼2.3 in the Baryon Oscillation Spectroscopic Survey. Such a value points to host dark matter halos of 10^11.5 M_⊙, as compared to the 10^9-11 M_⊙ found with 21 cm measurements as well as in simulations <cit.>. To reconcile the bias measurements, <cit.> argued that there must be a significant change in the properties of HI-bearing systems.

Knowledge of the HI bias requires understanding how HI populates dark matter halos. Even though it is widely accepted that HI is within galaxies at z<5, today a simple relation between dark matter halo and HI masses (MHIMh) is used, together with a certain gas profile when necessary (which is not needed for the bias). The MHIMh relation is measured in hydrodynamical simulations often assuming a simple power law <cit.>, inspired from observations <cit.>, or parametrised and fitted on data <cit.>. Lately, <cit.> derived the MHIMh relation using the abundance matching technique, where the halo mass function is matched to the HI mass function. Even if these different schemes can strongly differ, they all lead to similar values of the linear HI bias.

On non-linear scales, the HI bias has barely been investigated yet, while it contains a wealth of information on cosmology and, above all, on the MHIMh relation. To date, the scale dependence of the HI bias has been measured in two ways: in hydrodynamical simulations <cit.> and in N-body simulations where halos are populated with HI through an empirical relation <cit.>. However, these methods suffer from a few limitations. In hydrodynamical simulations, the bias is sensitive to the physical processes that are included in the simulation and, first and foremost, to the resolution. For instance, several zoom-in simulations will not lead to the same value of the bias at the same common scale. In addition, hydrodynamic simulations can hardly access linear scales, and both approaches are computationally heavy. The investigation of the influence of the MHIMh scheme on the HI bias over the full scale range requires a more flexible approach.

We use the full one-loop calculation of <cit.> and <cit.> to compute the non-linear power spectrum of HI. It relies on high-order HI biases that are computed with the halo model for each MHIMh prescription. We compute the power spectrum and the bias of HI in both real and redshift space and show that non-linear terms have a significant contribution on linear scales. We limit our analysis to z=1, which is one of the most targeted redshifts for BAO measurements. This paper is organised as follows. We begin by reviewing the theoretical framework of the HI power spectrum and listing the MHIMh relations we use in Sect. <ref>.
Second, we compute the power spectrum and bias in real space, with which we examine the mass of the halos in which HI lies, in Sect. <ref>. Third, we carry out a similar analysis in redshift space and discuss its cosmological implications in Sect. <ref>. We conclude in Sect. <ref>. Throughout the article, we use the Planck 2014 cosmology <cit.>.

§ MODELLING THE HI POWER SPECTRUM

§.§ The power spectrum

The average HI brightness temperature is given by <cit.>

T̄(z) = 566 h (H_0/H(z)) (Ω_HI(z)/0.003) (1+z)^2 μK

where the HI density fraction is defined as Ω_HI = ρ_HI/ρ_c,0, with ρ_c,0 the critical density of the Universe today. The fluctuating part is T(z,𝐱) = T̄(z)(1 + δ_HI(𝐱)), with δ_HI(𝐱) the HI density fluctuation at position 𝐱; hence, in Fourier space,

⟨T(z,𝐤) T^⋆(z,𝐤')⟩ = (2π)^3 P_HI(k,z) δ^3(𝐤-𝐤')

Carrying out a full one-loop derivation of the HI brightness temperature in Perturbation Theory <cit.>, the power spectrum of HI in real space at redshift z is

P_HI(z,k) = P_HI^11(z,k) + P_HI^22(z,k) + P_HI^13(z,k)

where P_HI^11(z,k) is the linear power spectrum (tree level) while P_HI^22(z,k) and P_HI^13(z,k) are the non-linear corrections. For clarity purposes, we will not specify the redshift dependence in the following. Following <cit.> and <cit.>, the three terms of P_HI(k) are

P_HI^11(k) = T̄^2 b_1^2 P_m^11(k)

P_HI^22(k) = (T̄^2/2) ∫ d^3k_1/(2π)^3 [b_1 F_2(𝐤_1,𝐤_2) + b_2]^2 P_m^11(k_1) P_m^11(k_2)

P_HI^13(k) = T̄^2 b_1 { (b_3 + (68/21) b_2) σ_Λ^2 P_m^11(k) + b_1 P_m^13(k) }

where k_2 = |𝐤 - 𝐤_1|. b_1, b_2, and b_3 are the linear, second- and third-order HI biases, respectively. The latter are the higher-order terms of the bias expanded in a Taylor series, which amounts to assuming that the HI bias is local. F_2 is the non-linear density kernel defined in Appendix <ref>. Finally, σ_Λ, the variance of the dark matter field, is

σ_Λ^2 = ∫_{k_min}^{k_max} d^3k/(2π)^3 P_m^11(k)

For simplicity, we set k_max to the non-linear dispersion scale, k_NL = 0.2 (1+z)^{2/(2+n_s)} h Mpc^-1, with n_s the spectral index. In redshift space, the 3D power spectrum of HI on linear and quasi-linear scales, at scale k and μ, the cosine of the angle between the line of sight and the separation vector 𝐤, writes

P_HI(k,μ) = P_HI^11(k,μ) + P_HI^22(k,μ) + P_HI^13(k,μ)

Following <cit.>, the linear term in redshift space is

P_HI^11(k,μ) = T̄^2 [b_1 + f μ^2]^2 P_m^11(k)

with μ = k_∥/k, P_m^11(k) the linear power spectrum of matter, and f the linear growth rate. We compute the former using the transfer function of <cit.> and assume f(z) = Ω_m(z)^γ with γ = 0.55 for ΛCDM <cit.>. Following <cit.> and <cit.>, the one-loop corrections are

P_HI^22(k,μ) = (T̄^2/2) ∫ d^3k_1/(2π)^3 [b_1 F_2(𝐤_1,𝐤_2) + μ^2 G_2(𝐤_1,𝐤_2) + b_2 + K_R(𝐤_1,𝐤_2)]^2 P_m^11(k_1) P_m^11(k_2)

P_HI^13(k,μ) = T̄^2 (b_1 + μ^2 f) { [(b_3 + (68/21) b_2) σ_Λ^2 + I_R(k,μ)] P_m^11(k) + [b_1 P_m^13(k) + μ^2 f P_θ^13(k)] }

where P_m^13(k) and P_θ^13(k) are the third-order matter power spectrum and velocity-field power spectrum, respectively. Their expressions, along with that of I_R(k,μ), are listed in Appendix <ref>. Finally, several kernels are involved in the computation of the P_HI^22(k,μ) term: G_2, induced by peculiar velocities at second order, and K_R, which arises from non-linear mode coupling <cit.>. Their expressions are also given in Appendix <ref>.
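To illustrate the structure of these corrections, the following Python sketch evaluates the real-space P_HI^22 convolution numerically. It is a schematic discretisation of the equation above only: Plin below is a toy linear spectrum standing in for the actual ΛCDM one, and the angular grid is truncated slightly below |μ_1| = 1 to avoid the k_2 → 0 region.

```python
import numpy as np

def F2(k1, k2, mu12):
    """Standard second-order density kernel; mu12 = cos(angle between k1, k2)."""
    return 5/7 + 0.5 * mu12 * (k1/k2 + k2/k1) + (2/7) * mu12**2

def Plin(k):
    """Toy linear power spectrum (placeholder, arbitrary normalisation)."""
    return k / (1 + (k / 0.02)**2)**1.45

def P22_real(k, b1, b2, n=600, kmin=1e-4, kmax=20.0):
    """(1/2) * int d3k1/(2pi)^3 [b1 F2 + b2]^2 P(k1) P(|k - k1|)."""
    k1 = np.logspace(np.log10(kmin), np.log10(kmax), n)
    mu1 = np.linspace(-0.999, 0.999, n)            # cos(angle between k, k1)
    K1, MU1 = np.meshgrid(k1, mu1, indexing='ij')
    K2 = np.sqrt(k**2 + K1**2 - 2 * k * K1 * MU1)  # |k - k1|
    MU12 = (k * K1 * MU1 - K1**2) / (K1 * K2)      # cos(angle between k1, k2)
    kern = (b1 * F2(K1, K2, MU12) + b2)**2
    integrand = K1**2 * kern * Plin(K1) * Plin(K2)
    # d3k1/(2pi)^3 = k1^2 dk1 dmu1 / (4 pi^2); the extra 1/2 gives 1/(8 pi^2)
    return np.trapz(np.trapz(integrand, mu1, axis=1), k1) / (8 * np.pi**2)
```

Note how, for k much smaller than the peak of Plin, the result is dominated by the b_2^2 term and becomes nearly independent of k — the constant large-scale contribution discussed in the next sections.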
§.§ HI quantities

HI-related quantities such as densities and biases are computed using the halo model, which provides a description of the clustering of dark matter halos on both linear and non-linear scales <cit.>. It relies on the halo mass function dn/dM and the associated n-th order halo biases b_n^h(M) measured in N-body simulations. We use the prescriptions of <cit.>. The comoving density of HI writes

ρ_HI = ∫ dM (dn/dM) M_HI(M)

The n-th order HI biases are

b_n^HI = (1/ρ_HI) ∫ dM (dn/dM) b_n^h(M) M_HI(M)

where M_HI(M) is the relation between the HI mass and the halo mass (see Sect. <ref>). Fig. <ref> shows an example of a set of biases. Note that only the first-order bias is always positive, while the two others change sign. All of them increase at high halo masses. We will use the terms linear and first-order bias interchangeably.

§.§ The HI mass - halo mass relation

The distribution of HI within the Large Scale Structure is rather unclear today. It is believed that, in the post-reionization era, most of the HI lies within galaxies while only a negligible fraction is diffuse <cit.>. It is often simply parametrised by relating the mass of HI to the mass of its host dark matter halo through a simple power law including, or not, a cut-off at small and high halo masses. We compile here several MHIMh relations that have been used or estimated using both hydrodynamical simulations and parametrised models fitted on data measurements. We also consider a DLA model.

* Bagla10: One relation that has been widely used is that of <cit.>. It has been inspired from quasar observations and assumes that there is no HI in high-mass halos:

M_HI(M) = f_3 M/(1+M/M_max) for M ≥ M_min

where f_3 comes from the normalisation to the measured Ω_HI. This prescription is commonly used for studies of 21 cm intensity mapping <cit.>. M_min and M_max are the limits for a dark matter halo to host HI. They assume that only halos with 30 km/s < v_circ < 200 km/s host HI, which translates into the lower and upper bounds M_min and M_max through

v_circ = 30 √(1+z) (M/10^10 M_⊙)^{1/3} km/s

* AGN: Nevertheless, <cit.> measured the MHIMh relation in hydrodynamical simulations including AGN feedback and showed that there is HI in halos that have v_circ>200 km/s. They measured M_HI(M) = e^α M^γ and fit α and γ up to redshift 2.

* DLA50: A prescription adapted from DLA studies <cit.> by <cit.>,

M_HI(M) = α f_H,c M exp[-(v_c,0/v_c(M))^3] exp[-(v_c(M)/v_c,1)^3]

where α is the ratio of the HI within halos to the cosmic HI, f_H,c = (1-Y_p)Ω_b/Ω_m is the cosmic hydrogen fraction, with Y_p the cosmological helium fraction by mass, and v_c(M) is the virial velocity of a halo <cit.>:

v_c(M) = 96.6 (Δ_v Ω_m h^2/24.4)^{1/6} ((1+z)/3.3)^{1/2} (M/10^11 M_⊙)^{1/3} km/s

with Δ_v the mean overdensity of the halo, which we take to be 200. For DLAs, <cit.> considered v_c,0 = 50 km/s and an infinite v_c,1. They fitted α to measurements between redshift 0 and 4 (column density distributions, biases, and the incidence rate).

* 21cm: <cit.> adapted Eq. <ref> to 21 cm IM observations using ad-hoc velocity cuts v_c,0=30 km/s and v_c,1=200 km/s. Similarly to the DLA50 model, <cit.> fitted α on the same measurements. Note that in both latter cases the slope is fixed and equal to unity, which is higher than what is measured in hydro-simulations.

* HOD A: <cit.> improved Eq. <ref> by introducing a flexible slope, β, as well as the velocity cut-offs:

M_HI(M) = α f_H,c M (M/10^11 h^-1 M_⊙)^β exp[-(v_c,0/v_c(M))^3] exp[-(v_c(M)/v_c,1)^3]

where α, β, v_c,0, and v_c,1 are free parameters fitted on data measurements.

* HOD B: Lastly, <cit.> fitted an updated version of Eq. <ref>,

M_HI(M) = α f_H,c M (M/10^11 h^-1 M_⊙)^β exp[-(v_c,0/v_c(M))^3],

on all the available measurements, including galaxy clustering. Their free parameters are β and α.

All these prescriptions are shown in Fig. <ref> at z=1. They vary in shape, amplitude, and slope. Clearly, the DLA50 scheme favours high halo masses as compared to the other models.
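These integrals are straightforward to evaluate once a mass function, halo biases and an M_HI(M) prescription are tabulated on a common mass grid. The sketch below (ours, with placeholder inputs to be supplied by one's favourite prescriptions) shows the bookkeeping, using the Bagla10 form as an example:

```python
import numpy as np

def M_HI_bagla(M, f3, Mmin, Mmax):
    """Bagla10 prescription: M_HI = f3 M / (1 + M/Mmax) for M >= Mmin."""
    return np.where(M >= Mmin, f3 * M / (1.0 + M / Mmax), 0.0)

def rho_and_bias(M, dndM, bh, MHI):
    """rho_HI and an n-th order HI bias for tabulated dn/dM and b_n^h(M)."""
    w = dndM * MHI                  # integrand weight (dn/dM) * M_HI(M)
    rho_HI = np.trapz(w, M)
    b_HI = np.trapz(w * bh, M) / rho_HI
    return rho_HI, b_HI
```

Switching between the prescriptions of the list only changes the MHI array; the integrals themselves are untouched, which is what makes this approach flexible compared to re-running simulations.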
We limit our analysis to z=1; the values of the free parameters are given in Table <ref>.

§ THE HI POWER SPECTRUM IN REAL SPACE

In this section, we compute the non-linear HI power spectrum in real space using all the above M_HI-M_h models. We first describe the non-linear contributions to the power spectrum and show that the bias is neither constant nor linear on so-called linear scales. We discuss the implications of this effective bias for our understanding of the distribution of HI and compare this modelling approach to others.

§.§ A non-linear bias on linear scales

The total HI power spectrum, along with the non-linear contributions, is shown in Fig. <ref> for the HOD A model. Contrary to our expectations, both the P_HI^22 and P_HI^13 terms have significant contributions on linear scales. These contributions arise from the coupling of short- and long-wavelength modes. The P_HI^13 term is negative and proportional to the matter power spectrum. Therefore, on linear scales, it lowers the amplitude of the HI power spectrum by ∼25% as compared to a standard biased power spectrum. Hence, the actual HI bias is lower than the linear HI bias. The P_HI^22 term is constant on linear scales, which induces a scale dependence of the HI bias on the largest scales. The flat contribution to the P_HI^22 term (the dot-dashed line) is simply proportional to b_2^HI, while, on those scales, the P_HI^13 term is a function of b_1^HI, b_2^HI, and b_3^HI. The latter depend on the M_HI-M_h prescriptions and are listed in Table <ref>. While the b_1^HI values vary by ∼13% amongst the different prescriptions, the variation strongly increases at higher orders. Indeed, the b_2^HI and b_3^HI values differ by 35% and 74%, respectively. Hence, the shape of the HI bias depends on the M_HI-M_h prescription, as shown in Fig. <ref>. Hereafter, we will call the HI effective bias the quantity

b_HI^eff(k) = (1/T̄_HI) √(P_HI(k)/P_m^11(k)).

The right panel of Fig. <ref> shows the ratio (1/T̄_HI)√(P_HI(k)/P_m^NL(k)), where P_m^NL(k) is the non-linear matter power spectrum computed in the same perturbation-theory framework (see Appendix <ref>). The normalisation by the mean HI temperature is chosen to focus only on the bias and avoid additional amplitude variations. Indeed, the HI temperature is a function of Ω_HI, and therefore of the M_HI-M_h relation (see Eq. <ref> and Table <ref>). On the largest scales, there is only a few percent difference between the different prescriptions. We will not discuss these here, as General Relativity corrections must be taken into account on ultra-large scales. On non-linear scales, regardless of the M_HI-M_h relation, we recover a bias well below 1, meaning that HI galaxies are highly anti-biased there, while they are only slightly anti-biased on linear scales <cit.>. At k>0.2 h Mpc^-1 our biases are of the same order of magnitude as those of <cit.>, as shown in the right panel of Fig. <ref>. Their dip is deeper because their bias on linear scales is higher than ours. Nevertheless, the two are consistent, as explained in the following and in Sect. <ref>. On linear scales, there is, at most, a difference of 10% in the amplitude of the models, and 15% at k=1 h Mpc^-1. Regardless of the M_HI-M_h prescription, the effective bias b_HI^eff is lower than the linear one. Their values at k = 0.01 h Mpc^-1 are listed in Table <ref> along with the associated linear biases. On linear scales, effective biases are always 10-15% lower than their linear counterparts. They can be approximated, in real space, by

b_1^HI → b_HI^eff ≈ b_1^HI + (1/2)(b_3^HI + (68/21) b_2^HI) σ_Λ^2.

This has consequences for our understanding of HI within the Large Scale Structure.
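The real-space approximation above is trivial to evaluate once the biases and σ_Λ^2 are known. A one-line sketch follows; the numerical values in the usage line are placeholders chosen only to illustrate the sign and size of the effect, not the paper's Table <ref> entries.

def b_eff_real(b1, b2, b3, sigma_lambda2):
    """Real-space effective HI bias on linear scales:
    b_eff ~ b1 + (1/2)(b3 + 68/21 b2) sigma_Lambda^2."""
    return b1 + 0.5 * (b3 + 68.0 / 21.0 * b2) * sigma_lambda2

# With negative b2, b3 (placeholder values) the correction lowers the bias
# below b1 by ~10%, as found in the text:
print(b_eff_real(0.94, -0.05, -0.05, 0.9))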
The assumption that the measured HI bias on large scales is linear leads to an underestimation of that linear bias, and therefore of the mass of the halos hosting HI. Hence, HI lies in slightly more massive halos than previously thought.

§.§ In which halos does HI lie?

Currently, there is a tension between the halo masses of HI-bearing systems, in particular between observations of HI galaxies at low redshift and those of DLAs at higher redshift, as highlighted by <cit.>. They fitted all available HI measurements (DLA incidence rates, column densities, biases, HI fractional densities and biases) at several redshifts with both DLA- and 21 cm-based models (our DLA50 and 21cm schemes, amongst others). They showed that the 21 cm-based model fitting all measurements systematically underpredicts the DLA bias. Similarly, DLA models, which are tuned to reproduce the DLA bias, always overpredict Ω_HI. Their two DLA models have low velocity cut-offs of 50 and 90 km/s, which implies that there is no, or only a small amount of, neutral hydrogen in low-mass halos. While <cit.> suggested that this could be caused by strong stellar feedback, it remains inconsistent with HI observations at low redshift. In addition, the discrepancy holds when the HI concentration is varied. It is important to note that the discrepancy appears only at the level of the biases; hence, the tension is between the host halos of low- and high-redshift HI-bearing systems. <cit.> argued that there must be a dramatic change in the properties of these systems over 0<z<3 to put these halo masses on the same evolutionary path. This idea is strengthened by <cit.>, who introduced a halo occupation model inspired by both the DLA and 21 cm emission frameworks (our HOD A model), using the same dataset as the former together with the HI mass function at z∼0. Again, most of the observables are relatively well fitted, but the DLA bias is, again, underpredicted, while the high-mass part of the HI mass function is overpredicted. Lately, <cit.> carried out a similar analysis with an updated version of the M_HI-M_h relation (our HOD B model), adding the 2-point correlation function (2PCF) of HI galaxies at small scales. By freeing some degrees of freedom in the HI concentration, they did improve the overall quality of the fit, but at the cost of an overpredicted 2PCF on large scales, a too-high mass tail of the HI mass function, and a too-low DLA bias. It is clear that both models predict too many objects in high-mass halos and an HI bias that is too high. Indeed, the fits are driven towards high halo masses by the DLA bias measurement, which might be flawed, as it is inconsistent with most observations and simulations. The latter statement seems inconsistent with our previous argument that HI lies in more massive halos than we think. However, the non-linear corrections to the bias are of the order of 15% at most, while the discrepancies between the predicted and measured HI biases are of at least 50%. Therefore, the systematic error due to the assumption of a linear bias on large scales is concealed by the error induced by the DLA bias. We also consider an M_HI-M_h relation adapted to DLAs, the DLA50 model. It is obvious from Fig. <ref> that it favours higher-mass halos compared to any other prescription. This translates into a higher linear bias and into a change of sign of b_2^HI and b_3^HI (see Table <ref>). P_HI^13 becomes positive on large scales, as shown in Fig. <ref>, which adds power to the HI power spectrum. Hence, the effective bias is higher than its linear counterpart in the case of DLAs.
The amplitude of the power spectrum rises by 13%, which translates into 7% on the effective bias for the DLA50 model. Of course, the additional power increases when going towards even higher-mass halos. For instance, using v_c,0 = 90 km/s instead of v_c,0 = 50 km/s, which translates into minimum halo masses of 10^10.04 and 10^10.80 h^-1 M_⊙ at z=1, leads to an increase of power of 24% and 11% at the power spectrum and bias levels, respectively. Thus, DLA models overpredict the HI bias even more than previously thought, which enhances the tension between the DLA and 21 cm biases, preventing any reconciliation.

§.§ Consistency with other modelling approaches and clustering analyses

It is the coupling between small- and large-scale modes that gives rise to an effective bias different from the linear one. Therefore, the mismatch exists for any tracer of dark matter. Hence, one can wonder why it is not predicted by other modelling approaches and why it has never been noticed in any clustering analysis. The answer to the first question is straightforward: models are constructed to predict a linear bias on linear scales. The usual procedure for modelling the clustering of any tracer is to take a distribution of halos, coming either from the halo model or from dark matter simulations, and to fill the halos with the tracer. Therefore, only the non-linearities coming from the evolution of the distribution of dark matter are present, and not the ones coming from the distribution of the tracer, which is not the case in our approach. Indeed, it is the distribution of the tracer, the HI brightness temperature precisely, that has been perturbed. Therefore, our biases in Fig. <ref> are consistent with those of <cit.>. They used a dark matter simulation in which they defined halos that were assigned an HI mass through the Bagla10 model. On the largest scales, they measure an HI bias of 0.92, fully consistent with the linear bias of 0.94 computed through Eq. <ref>. Lastly, why has the mismatch not been noticed in clustering analyses yet, given that these non-linearities are missing in current models? On linear scales, it is an offset of at most 15% on the bias for HI, and the scale dependence is only of a few percent, so it can easily be concealed in the error bars. For instance, when fitting the parameters of a halo occupation distribution to a correlation function on both linear and non-linear scales, the halo mass thresholds for hosting a central galaxy and one satellite galaxy would be found to be lower than they are in reality. Notwithstanding, this systematic error is lower than the statistical error on the fitted parameters in current clustering analyses. This will no longer be the case with stage-IV experiments such as Euclid and the SKA. In addition, in the era of precision cosmology, ignoring these corrections will lead to flawed estimations of cosmological parameters.

§ THE HI POWER SPECTRUM IN REDSHIFT SPACE

In this section, we extend the previous analysis to the anisotropic power spectrum of HI in redshift space. We first adopt a theoretical point of view, investigating the power spectrum as a function of k and μ to understand the effects of RSDs; second, we compare the expected linear power spectrum to the full one in the transverse and radial directions.

§.§ The HI effective bias on linear scales

We begin by investigating the different contributions to the HI power spectrum in the two extreme directions: the transverse one, where μ = 0, meaning that RSDs vanish, and the radial direction, where μ = 1 and, hence, RSDs are maximal. Fig.
<ref> shows the different terms contributing to the HI power spectrum for the HOD A prescription. We recover behaviours similar to those in real space. The P_HI^22 term is constant on large scales (k<0.02 h Mpc^-1) and rises towards small scales. The amplitude of the rise increases with μ as RSDs come in; they are contained in the G_2 and K_R terms, as shown in the lower panel. Therefore, on small scales (k>0.2 h Mpc^-1) the non-linear contributions are maximal for μ=1, where the fingers of God are recovered. Again, the P_HI^13 term is negative and thus removes power from the HI power spectrum on linear scales. The amplitude of the removal decreases with μ: at the power spectrum level it goes from ∼25% at μ = 0 down to ∼13% at μ = 1. The former is similar to the real-space case. The effective bias in redshift space is

b_1^HI → b_HI^eff(k,μ) ≈ b_1^HI + (1/2)[(b_3^HI + (68/21) b_2^HI) σ_Λ^2 + I_R(k,μ)].

The expression of I_R is given in Appendix <ref>, and it is worth noticing that the effective bias is also a function of the growth rate. We will explore this in more detail in future work. For μ≠0 the effective bias cannot be computed directly because of RSD effects, so we compare the full HI power spectrum to the linear Kaiser prediction

P_HI^11(k,μ) = T̄^2 [b_1 + fμ^2]^2 P_m^11(k)

in Fig. <ref> for all M_HI-M_h models. Regardless of the scale, they lead to ratios that agree within 10%, and these differences decrease with μ. On linear scales, the effective bias gets closer to the linear one as μ increases. We can also notice in the lower panel that RSD effects impact the power spectrum only for μ>0.2. On smaller scales, the rise is due to non-linear effects only at μ=0, and also to RSDs for μ>0. The slope of the rise scales with μ, and it falls exactly over the BAO scale range.

§.§ A scale-dependent HI bias on BAO scales

We carry on the analysis using only the HOD A model. The bottom panel of Fig. <ref> shows a scale dependence of the HI bias that is enhanced by RSD effects over the BAO scale range: the bias rises from 10% at μ=0 to a factor of 2 at μ=1. To adopt an observational point of view, we change coordinates to the transverse and radial directions in Fig. <ref>. The two top panels show the linear and total HI power spectra. At first glance, the non-linear terms shift the turnover of the power spectrum towards higher k_⊥ (>0.1 h Mpc^-1). They slightly enhance the signal along the k_⊥ direction, while the signal is boosted along k_∥ by RSD effects. The lower panel of Fig. <ref> shows the ratio between the linear and total HI power spectra. On large scales, k_∥, k_⊥ < 0.01 h Mpc^-1, we recover a maximum ratio of 25%. At small k_∥ and towards large k_⊥, non-linearities increase the amplitude of the HI power spectrum by a factor of 2, while at large k_∥ RSD effects dominate the non-linear ones and erase any k_⊥ dependence. Over the BAO scale range, the HI power spectrum increases by factors of 5 and 2 in the radial and transverse directions, respectively. Therefore, both non-linearities and RSD effects modify the ratios between the BAO peaks. It is therefore necessary to take non-linearities into account when estimating cosmological parameters. To circumvent the contamination by non-linear effects, one would preferentially measure the BAO peaks in the transverse direction, but <cit.> showed that, in single-dish mode, beyond a certain dish size the beam of the instrument smears the wiggles out in the transverse direction, so that the BAOs can only be detected in the radial direction.
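For reference, the Kaiser baseline against which the full spectrum is compared, and the mapping between (k, μ) and the observational (k_⊥, k_∥) coordinates used above, can be sketched as follows (our own illustrative code; P_lin is an assumed callable for the linear matter power spectrum):

import numpy as np

def p_hi_kaiser(k, mu, b1, f, T_bar, P_lin):
    """Linear Kaiser prediction P_HI^11(k,mu) = T^2 (b1 + f mu^2)^2 P_m^11(k)."""
    return T_bar ** 2 * (b1 + f * mu ** 2) ** 2 * P_lin(k)

def growth_rate(Omega_m_z, gamma=0.55):
    """f(z) = Omega_m(z)^gamma with gamma = 0.55 for LCDM."""
    return Omega_m_z ** gamma

def k_mu_from_perp_par(k_perp, k_par):
    """Convert transverse/radial wavenumbers to (k, mu = k_par/k)."""
    k = np.hypot(k_perp, k_par)
    return k, k_par / k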
Such beam smearing is a limitation for the SKA and MeerKAT, but not for BINGO, CHIME, or HIRAX, as they will have a higher angular resolution.

§ CONCLUSION

Radio telescopes are about to open a new observational window on the Universe, in particular through 21 cm intensity mapping. We investigate the non-linear power spectrum of HI in both real and redshift space, in the light of the relation between the halo mass and the HI mass. Our main result is that on linear scales the HI bias is not constant but scale-dependent. Using a full one-loop development of the HI power spectrum in perturbation theory, we show that non-linear contributions remove power from the HI power spectrum on linear scales, in both real and redshift space, at z=1. This result is contrary to our expectations and is not found in other modelling approaches. Commonly, a distribution of dark matter halos is 'painted' with a baryonic tracer, so that only the non-linearities coming from the distribution of dark matter are taken into account, and not the ones coming from the evolution of the distribution of the tracer. In real space, the effective bias of HI is 10-15% lower than the linear one, depending on the M_HI-M_h relation. The assumption that the observed HI bias is linear underpredicts the actual linear bias, and hence the mass of the halos hosting HI. In redshift space, the effective bias is also lower than its linear counterpart, by up to 15%, and its scale dependence is highly sensitive to RSD effects. Over the BAO scale range, the HI bias rises with a slope that steepens with μ. Regardless of the M_HI-M_h prescription, the difference between the linear and the full HI power spectra reaches a factor of 5, which can lead to a modification of the ratios between BAO peaks. Therefore, it will be crucial to take non-linearities into account when estimating cosmological parameters. The different M_HI-M_h relations lead to variations of at most 15% in the HI bias. This is within the error bars of any of the current HI bias measurements, so it is not an issue at the moment. Nevertheless, accounting for it will be indispensable for the upcoming HI surveys. It is worth noting that the observable is the product T̄ b_HI, where the HI temperature is also a function of the M_HI-M_h relation through Ω_HI. This product differs by up to a factor of 7 between the different prescriptions. Thorough forecasts of the effect of non-linearities on the estimation of the BAO peaks and, more broadly, of cosmological parameters are required, including the redshift evolution: HI is positively biased at higher redshift, so that non-linearities then add power to the HI power spectrum. Lastly, this effect is present not only in HI intensity mapping surveys but in any galaxy survey.

§ ACKNOWLEDGMENTS

The authors would like to thank Roy Maartens, Chris Clarkson, Jose Fonseca, and Vincent Desjacques for useful discussions. They would also like to thank Debanjan Sarkar for providing them with the data points of their paper.
AP, OU and MGS acknowledge support from the South African Square Kilometre Array Project as well as from the National Research Foundation.

§ PERTURBATION THEORY FORMULAS

The necessary kernels are F_2, the non-linear density kernel, G_2, induced by peculiar velocities at second order, and K_R, which arises from non-linear mode coupling <cit.>:

F_2(k_1,k_2) = 5/7 + (1/2)(k_1·k_2)/(k_1k_2)[k_1/k_2 + k_2/k_1] + (2/7)[(k_1·k_2)/(k_1k_2)]^2,

G_2(k_1,k_2) = 3/7 + (1/2)(k_1·k_2)/(k_1k_2)[k_1/k_2 + k_2/k_1] + (4/7)[(k_1·k_2)/(k_1k_2)]^2,

K_R(k_1,k_2) = f b_1 μ_1^2 + f b_1 μ_2^2 + μ_1μ_2[f b_1 k_1/k_2 + f b_1 k_2/k_1] + f^2[2μ_1^2μ_2^2 + μ_1μ_2(μ_1^2 k_1/k_2 + μ_2^2 k_2/k_1)].

The matter and velocity-divergence power spectra at third order are

P_m^13(k) = (1/252) k^3/(4π^2) P_m^11(k) ∫_0^∞ dr P_m^11(kr) [12/r^2 − 158 + 100r^2 − 42r^4 + (3/r^3)(r^2 − 1)^3 (7r^2 + 2) log|(1+r)/(1−r)|],

P_θ^13(k) = (1/84) k^3/(4π^2) P_m^11(k) ∫_0^∞ dr P_m^11(kr) [12/r^2 − 82 + 4r^2 − 6r^4 + (3/r^3)(r^2 − 1)^3 (r^2 + 2) log|(1+r)/(1−r)|].

The last component of P_HI^13(k,μ) is

I_R(k,μ) = k^3/(2π)^2 ∫ dr P_m^11(kr) μ^2 f ([b_2 B_1(r) + b_1 B_2(r)] + μ^2 f^2 [b_1 B_3(r) + B_4(r) + μ^2 (b_1^2 B_5(r) + f B_6(r))]),

with

B_1(r) = 1/6,
B_2(r) = (1/84)[−2(9r^4 − 24r^2 + 19) + (9/r)(r^2 − 1) log((1+r)/|1−r|)],
B_3(r) = −1/3,
B_4(r) = −(1/(336 r^3))[2(−9r^7 + 33r^5 + 33r^3 − 9r) + 9(r^2 − 1) log((1+r)/|1−r|)],
B_5(r) = (1/(336 r^3))[2r(−27r^6 + 63r^4 − 109r^2 + 9) + 9(3r^2 + 1)(r^2 − 1) log((1+r)/|1−r|)].

Lastly, in this framework, the full one-loop matter power spectrum in real space is

P_m^NL(k) = P_m^11(k) + P_m^22(k) + P_m^13(k),

where the second-order term is

P_m^22(k) = (1/2) ∫ d^3k_1/(2π)^3 F_2^2(k_1,k_2) P_m^11(k_2) P_m^11(k_1).

In redshift space, the full one-loop matter power spectrum is

P_m^NL(k,μ) = P_m^11(k,μ) + P_m^22(k,μ) + P_m^13(k,μ),

P_m^11(k,μ) = [1 + f μ^2]^2 P_m^11(k),

P_m^22(k,μ) = (1/2) ∫ d^3k_1/(2π)^3 [F_2(k_1,k_2) + μ^2 G_2(k_1,k_2) + K_R(k_1,k_2)]^2 P_m^11(k_2) P_m^11(k_1),

P_m^13(k,μ) = (1 + μ^2 f) {I_R(k,μ) P_m^11(k) + [P_m^13(k) + μ^2 f P_θ^13(k)]}.

| http://arxiv.org/abs/1706.08763v2 | {
"authors": [
"Aurélie Pénin",
"Obinna Umeh",
"Mario Santos"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170627101419",
"title": "A scale dependent bias on linear scales: the case for HI intensity mapping at z=1"
} |
^1 Department of Physics, Kenyon College, 201 N College Rd, Gambier, OH 43022
^2 CERCA/ISO, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106
^3 Michigan Center for Theoretical Physics, University of Michigan, Ann Arbor, MI 48109
^4 Department of Physics, Syracuse University, Syracuse, NY 13244, USA

Maybe not. String theory approaches to both beyond-the-Standard-Model and inflationary model building generically predict the existence of scalars (moduli) that are light compared to the scale of quantum gravity. These moduli become displaced from their low-energy minima in the early universe and lead to a prolonged matter-dominated epoch prior to BBN. In this paper, we examine whether non-perturbative effects such as parametric resonance or tachyonic instabilities can shorten, or even eliminate, the moduli condensate and matter-dominated epoch. Such effects depend crucially on the strength of the couplings, and we find that unless the moduli become strongly coupled the matter-dominated epoch is unavoidable. In particular, we find that in string and M-theory compactifications where the lightest moduli are near the TeV scale, a matter-dominated epoch will persist until the time of Big Bang Nucleosynthesis.

Was the Universe Actually Radiation Dominated Prior to Nucleosynthesis?
John T. Giblin, Jr., Gordon Kane, Eva Nesbit, Scott Watson, and Yue Zhao^3
=======================================================================

Moduli are a generic prediction of string theoretic approaches to beyond-the-Standard-Model <cit.> and inflationary model building <cit.>. It was noted long ago that these moduli can be displaced from their low-energy minima in the early universe, and their coherent oscillations lead to a period of matter domination <cit.>. This matter phase has important differences from a strictly thermal universe and is a rich source of dark matter phenomenology - for a review see <cit.>. The matter phase can also lead to enhanced growth of structure <cit.>, changes in inflationary predictions for the cosmic microwave background <cit.>, and also the formation of primordial black holes <cit.>. These cosmological and phenomenological predictions depend on the duration of the matter phase, which is determined by the moduli mass and couplings to other fields. It is expected that moduli couple gravitationally, and the matter phase will persist until the perturbative decay of the modulus completes, which, for 50 TeV moduli, will be around the time of Big Bang Nucleosynthesis (BBN) <cit.>. In this paper, we revisit these assumptions and determine whether effects such as parametric enhancement <cit.> or tachyonic instabilities <cit.> can lead to an enhanced decay of the moduli.
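Before proceeding, a rough numerical orientation (our own estimate, not from the paper): with the gravitationally suppressed decay rate Γ ∼ m_σ^3/m_p^2 used in the next section and the standard sudden-decay approximation, a 50 TeV modulus reheats the universe to a few MeV, i.e., around the BBN scale. The value g_* ≈ 10.75 assumed here is the usual Standard Model count at MeV temperatures.

import math

M_P = 2.4e18          # reduced Planck mass [GeV]

def reheat_temperature(m_sigma_gev, g_star=10.75):
    """Sudden-decay estimate: Gamma = m^3 / m_p^2 and
    T_RH ~ (90/(pi^2 g_*))^(1/4) * sqrt(Gamma * m_p)."""
    gamma = m_sigma_gev ** 3 / M_P ** 2
    return (90.0 / (math.pi ** 2 * g_star)) ** 0.25 * math.sqrt(gamma * M_P)

# ~7e-3 GeV for a 50 TeV modulus, i.e. a few MeV, consistent with the
# ~5 MeV quoted below once O(1) factors in Gamma are included.
print(reheat_temperature(5e4))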
In the case of parametric enhancement, as the field oscillates, particles are produced, and Bose-Einstein statistics can lead to a significant enhancement of the decay compared to the perturbative decay rate <cit.> (for a review see <cit.>). In tachyonic resonance, if the mass squared of the field becomes negative due to the time and/or field dependence of the couplings, this can lead to the efficient decay of the field in less than a single oscillation <cit.>. It has also been argued that the dynamics and backreaction of the produced particles could be used to 'trap' moduli <cit.>. If these types of instabilities are present, they can significantly enhance the moduli decay rate, resulting in less of a matter phase, or even prevent the formation of the moduli condensate altogether. For very light moduli - those that would decay after BBN - this enhanced decay may lead to a new way to address the cosmological moduli problem <cit.>.

§ MODULI DECAY THROUGH PARAMETRIC AND TACHYONIC RESONANCE

The moduli will typically couple to other fields with gravitationally suppressed couplings. This is the case in examples like KKLT <cit.>, as well as in Large Volume Compactifications of Type IIB <cit.> and G2 compactifications of M-theory <cit.>. The perturbative decay rate of the modulus is then Γ ∼ m_σ^3/Λ^2, where m_σ is the mass of the modulus and Λ the suppression scale. Taking Λ ∼ m_p (we work with sign convention (−,+,+,+) and with the reduced Planck mass m_p = 1/(8πG)^{1/2} = 2.4 × 10^18 GeV; Greek indices denote space-time, μ = 0,1,2,3, whereas Latin indices, k = 1,2,3, imply spatial directions only), the corresponding reheat temperature for an m_σ = 50 TeV scalar is around 5 MeV <cit.>. Here we would like to determine whether parametric or tachyonic instabilities in the moduli sector can result in a faster decay and hence a higher reheat temperature. We are motivated by recent work on preheating and the production of gauge fields at the end of inflation <cit.>. In those papers it was found that a tachyonic instability in the production of massless gauge fields from inflaton couplings σF_μνF̃^μν/Λ <cit.> or σF_μνF^μν/Λ <cit.> can lead to explosive particle production and drain the energy completely before the inflaton can complete a full oscillation. If this result were also true for moduli, it could prevent the formation of the condensate and of the matter-dominated phase.

§.§ Moduli Coupling to Gauge Fields

In all of the string constructions mentioned above there are moduli with masses generated by gravitationally mediated Supersymmetry (SUSY) breaking. The corresponding moduli mass is determined by the gravitino mass m_3/2 as m_σ = c m_3/2, where c is a constant determined by the particular string theory realization, e.g., in the G2 MSSM c ≃ 2. We now consider the coupling of the moduli to a hidden sector gauge field,

S = ∫ d^4x √(−g) ( −(1/4) F_μν F^μν − (c/(4Λ)) σ F_μν F^μν ),

where c is an order-one constant (computable in a given string model) and consistency of the effective theory requires σ < Λ. The corresponding equations of motion are

∇_μ F^μν + (c/Λ) ∇_μ (σ F^μν) = 0,
□σ = ∂V/∂σ + (c/(4Λ)) F_μν F^μν.

Working in Coulomb gauge, A^0 = 0, ∂_i A^i = 0, neglecting the expansion of the background, and introducing the field redefinition Ã_k = [a(t)(1 + cσ/Λ)]^{1/2} A_k, the resulting equations of motion are

σ̈ + 3H σ̇ + m_σ^2 σ = (c/(2Λ)) [ Ȧ_μ Ȧ^μ/a^2 + ϵ_μνλ ϵ_αβ^λ ∇^μ A^ν ∇^α A^β ],

Ã̈_k + [ k^2 + (1/2)(1/(1 + cσ/Λ))^2 ( (1/2) c^2 σ̇^2/Λ^2 − c^2 σσ̈/Λ^2 − c σ̈/Λ ) ] Ã_k = 0.
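Anticipating the oscillating background σ(t) = σ_0 cos(m_σ t) introduced just below, one can integrate the gauge-mode equation above numerically and check for exponential growth; to leading order in σ_0/Λ it reduces to the Mathieu form derived in the next subsection. A minimal sketch follows (our own illustrative code; fixed background, no expansion, parameter values chosen only for illustration):

import numpy as np
from scipy.integrate import solve_ivp

def mode_growth(A_k, q, n_osc=20):
    """Integrate d^2u/dz^2 + [A_k + 2 q cos(2z)] u = 0 (Mathieu form, with
    A_k = 4(k/m_sigma)^2 and q = c sigma_0/Lambda) over n_osc oscillations,
    from u=1, u'=0, and return max|u|. Growth >> 1 signals resonance."""
    def rhs(z, y):
        u, up = y
        return [up, -(A_k + 2.0 * q * np.cos(2.0 * z)) * u]
    sol = solve_ivp(rhs, (0.0, np.pi * n_osc), [1.0, 0.0],
                    max_step=0.01, rtol=1e-8)
    return np.abs(sol.y[0]).max()

# EFT validity forces q = c sigma_0/Lambda < 1, so e.g.:
print(mode_growth(A_k=0.5, q=0.3))  # A_k < 2q, yet q < 1: at most mild growth
print(mode_growth(A_k=1.0, q=0.3))  # first narrow-resonance band: slow exponential growth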
The moduli will remain frozen at their displaced values until H ≃ m_σ, at which time they begin oscillating, σ(t) = σ_0 cos(m_σ t), where the initial amplitude is typically σ_0 ∼ m_p. The gauge field equation can be put in the form of a Mathieu equation by introducing the time variable z = m_σ t/2. Noting that consistency of the effective theory requires σ_0 < Λ and keeping only the leading terms, we have

d^2Ã_k/dz^2 + [ 4(k/m_σ)^2 + 2c(σ_0/Λ) cos(2z) ] Ã_k = 0,

where we have dropped terms further suppressed by powers of σ_0/Λ, and we note that the leading time-dependent mass term corresponds to the term ∼ σ̈/Λ in (<ref>). Comparing (<ref>) to the usual Mathieu equation,

d^2u/dz^2 + [A_k + 2q cos(2z)] u = 0,

suggests the identifications

A_k ≡ 4(k/m_σ)^2, q ≡ c(σ_0/Λ).

Tachyonic instability corresponds to the condition A_k < 2q, broad resonance occurs for q ≫ 1, and narrow resonance occurs for q ≲ 1. We can immediately see that broad resonance is forbidden, since validity of the effective theory requires σ_0 < Λ, i.e., q < 1. Moreover, although narrow resonance could play a role, it may not lead to a significant enhancement of the production <cit.>. Thus, we focus on the case of tachyonic resonance.

§.§ Tachyonic Resonance - Analytic Treatment

The modes that undergo tachyonic resonance correspond to A_k < 2q in (<ref>), which for the identification (<ref>) implies

k < (1/√2)(σ_0/Λ)^{1/2} m_σ.

However, post-inflation we are interested in sub-Hubble modes (this is required by causality if the gauge modes begin in their vacuum state following inflation), so we also require k/H > 1, implying that the modes of interest lie in the band

1 < k/H < (1/√2)(σ_0/Λ)^{1/2}(m_σ/H).

Thus, for tachyonic production of modes we require

(1/√2)(σ_0/Λ)^{1/2}(m_σ/H) ≫ 1,

so at the onset of the moduli phase, when H ≃ m_σ, perturbativity of the effective theory again limits the level of enhancement of gauge field production, since we require σ_0 < Λ. However, although the initial moduli displacement is typically expected to be an order of magnitude or so below the cutoff, as the oscillations continue the Hubble parameter keeps decreasing, H < m_σ, and tachyonic resonance becomes possible. There is a competing effect: the amplitude of the moduli oscillations also decreases compared to its initial value σ_0. How important tachyonic resonance is for the moduli decay and the duration of the epoch is therefore a quantitative question. Moreover, during the oscillations, the creation of moduli (moduli particles, meaning k ≠ 0 modes), particle scattering, and the backreaction of both moduli and gauge fields can play an important role, as can the expansion of the universe. To account for these complexities and non-linearities we perform a lattice treatment and present the results in the next section.

§.§ Tachyonic Resonance - Lattice Results

To determine whether tachyonic (or parametric) instabilities occur in the system (<ref>) and (<ref>), we perform fully non-linear lattice simulations. We build our simulations using the software GABE <cit.>, which has been used previously to study the interactions of scalar fields and U(1) Abelian gauge fields <cit.>. Our simulations allow us to account not only for gauge field production, but also for the effects of scalar particle production, rescattering, backreaction, and the expansion of the universe. There are several restrictions on the allowed values of the fields and parameters of our model.
For example, although we perform a lattice simulation, validity of the effective Supergravity description requires that the non-renormalizable operator in (<ref>) remain subdominant to the leading kinetic term. Since c is a dimensionless O(1) Wilson coefficient, this requires that σ not exceed the UV cutoff Λ (which is typically of order the Planck or string scale). We note that our simulations are similar to those of <cit.>, where the role played there by the inflaton is here played by the moduli. As we will see, a key difference in our results compared to those of <cit.> is that there the authors considered a toy model with a dilatonic-type coupling that could enter a 'strong coupling' regime. In this paper, we are limited by the validity of the effective theory, σ < Λ, and we will see that this limits our ability to establish strong resonance behavior (the result that the validity of an effective field theory approach can limit the importance of parametric resonance was noted recently in <cit.>). In order to establish as large a resonance as possible, we take the initial amplitude of the moduli to be near the Planck scale, σ_0 ≃ m_p (we take σ_0 = 0.2 m_pl as a fiducial value). Given that validity of the effective theory then requires Λ ∼ m_p, and since the field can change sign, this also ensures that the kinetic term of (<ref>) retains the correct sign. This limits us to a maximum coupling c/(4Λ) ≈ 6.9 m_pl^-1. Throughout this section we use this maximum value, so as to make the potential tachyonic window as large as possible (we have checked that for lower values of the cutoff the resonance is even weaker than in the results we present here). We are left with only one free parameter, m_σ, which also sets the Hubble scale at the beginning of coherent oscillations. Using GABE, we discretize space onto a grid of 128^3 points on a homogeneously expanding box. The box has initial size L = 4 m_σ^-1 ≈ 2 H_0^-1. The simulations solve (<ref>) and (<ref>) along with the Friedmann equations. For numerical simplicity, we employ the standard unitless conformal time, dτ = a(t) m_σ dt. We use an adaptive time step, Δτ = 0.005/a(τ), so that we resolve the comoving modes throughout the simulation. We initialize the modulus field consistent with the expectations for a field that carries the 'freeze-out' power as modes re-enter the horizon (we start our simulations at the beginning of moduli oscillations and take adiabatic initial conditions, so that the inflaton fluctuations will have been transferred to the moduli that come to dominate the energy density; we assume no isocurvature, however see <cit.>, and take Δ_s^2 ≈ 10^-10),

⟨δσ(k) δσ(k')⟩ = (π^2/2)(Δ_s^2 σ_0^2 / H_0^3) δ(k − k'),

assuming that most modes have not grown much since horizon re-entry and have recently re-entered (k ≈ H_0); prior to moduli domination we take the universe to be radiation dominated following inflationary reheating, so that sub-Hubble modes of the moduli undergo very little growth (their perturbations grow only logarithmically with the scale factor, ∼ log(a)). For the gauge fields we set the initial conditions consistent with the Bunch-Davies vacuum <cit.>,

⟨|A_i(k) A_j(k')|^2⟩ = δ_ij δ(k − k') / [2a(1 + cσ/Λ)],

with zero homogeneous mode (we comment on the robustness of this assumption shortly).
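As an aside on how such initial conditions are realized on a grid, the following is a schematic (not GABE itself) for drawing a Gaussian random field with a prescribed isotropic spectrum by filtering white noise in Fourier space; discrete normalization conventions vary between codes, and the one used here is only one common choice.

import numpy as np

def gaussian_random_field(n, L, power, rng=np.random.default_rng(1)):
    """Field on an n^3 grid of comoving size L with <|delta_k|^2> = (N^2/V) P(k)
    in the discrete-FFT convention delta_k = sum_x delta_x e^{-ik.x};
    `power` is a callable P(k)."""
    kf = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    kmag = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    w_k = np.fft.fftn(rng.normal(size=(n, n, n)))   # white noise, <|w_k|^2> = N
    amp = np.zeros_like(kmag)
    nz = kmag > 0                                    # zero homogeneous mode
    N, V = float(n ** 3), L ** 3
    amp[nz] = np.sqrt(N * power(kmag[nz]) / V)
    # taking the real part enforces a real field (up to an O(1) normalization)
    return np.real(np.fft.ifftn(w_k * amp))

# A hard cut mimicking the k ~ 90 m_sigma window can be built into `power`,
# e.g. power = lambda k: P0 * np.exp(-(k / k_cut) ** 8), with P0 and k_cut
# illustrative placeholders.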
We take the initial surface in Coulomb gauge, but the rest of the simulation is carried out in Lorenz gauge, ∂^μ A_μ = 0, where Gauss's constraint is treated as a dynamical degree of freedom (as the equation of motion for A_0); we check that the gauge constraint is maintained throughout our simulations. As we increase the mass of the modulus field, we shrink the physical size of the Hubble patch at the beginning of the simulation. This is the best approach to resolving shorter-wavelength modes of the gauge fields and, hence, a larger fraction of the energy in the gauge sector. When setting the initial conditions, we impose a window function (as in <cit.>) that cuts off power for modes k ≳ 90 m_σ, for numerical stability. This scale is, however, well above the scale at which we would expect to see tachyonic instabilities. Following <cit.>, we take the ratio of the gauge field energy density (ρ_EM) to the total energy density (ρ_tot) as a figure of merit for the amplification of the gauge field and the effectiveness of the tachyonic (and parametric) instabilities. Figure <ref> shows the evolution of this parameter as a function of time for a large range of moduli masses. We find the robust result that, regardless of the (relative) amplitude of the initial fluctuations of the gauge fields, tachyonic (and parametric) instabilities are absent and do not lead to significant amplification of the gauge fields. The variation in the initial value of ρ_EM reflects the fact that we allow for different values of the moduli mass, as discussed above. Considering a pre-existing density of gauge modes (e.g., non-Bunch-Davies initial conditions with modes that were classically or quantum mechanically excited during inflation; model-independent bounds on the level of gauge field production during inflation were recently established in <cit.>, where it was shown that requiring successful inflation limits the amplification of gauge fields, which here limits the initial amplitude one may take for the gauge fields, i.e., it cannot be arbitrarily large) would have a similar effect, amplifying the initial spectrum of the gauge field and, hence, raising ρ_EM/ρ_tot on the initial surface. An additional place to look for instabilities is in the spectra of the coupled fields. In Figure <ref>, we see that there is very little change in the power spectra of the fields. In cases where instabilities exist, we can generally see them in the power spectra of the fields. In none of the cases we studied did we see any indication of tachyonic or parametric instabilities. Although we have not found significant evidence for an enhanced decay of the moduli, this does not by itself establish a matter-dominated epoch. Indeed, it was recently shown that the non-linear dynamics of the fields can have an important influence on the equation of state <cit.>. Thus, we must lastly ensure that the expansion mimics that of a matter-dominated, single-component universe. To do this, we track the equation-of-state parameter, w = p/ρ, the usual ratio of the isotropic pressure to the energy density. Figure <ref> shows this for the fiducial case, m_σ = 50 TeV: w oscillates, as expected, between ±1, as for a massive scalar field dominated by its homogeneous value.

§ COMMENTS AND CONCLUSIONS

In this paper, we have considered the coupling of moduli to hidden sector gauge fields for a range of masses and initial values of the gauge fields.
We found that even as we approach modestly strong coupling, tachyonic and parametric instabilities have no effect on the moduli decay rate. Moreover, we have seen that the equation of state during the moduli oscillations averages to the previously anticipated result of a matter-dominated universe. As gauge field production relies on the moduli dynamics breaking the conformal invariance of the gauge field sector <cit.>, and in these string-motivated models the source of this breaking comes from non-renormalizable operators, it may not be that surprising that this effect turned out to be negligible. One reason for considering these operators was that such couplings generically appear in string theories and are model independent, in the sense that they arise strictly in the moduli sector and are typically independent of how one embeds the visible sector. This is indeed the case in examples like KKLT <cit.>, as well as in Large Volume Compactifications of Type IIB <cit.> and G2 compactifications of M-theory <cit.>. One may wonder whether more model-dependent couplings (arising from embedding the visible sector in a particular string construction) could alter our conclusions. For example, moduli couplings to the Higgs (∼ σH†H) are relevant operators, and the moduli might undergo enhanced decay to Higgs bosons. However, such couplings were already considered some time ago by Brandenberger and Shuhmaher in <cit.>. They considered relevant operators arising from SUSY breaking for a range of moduli masses. Their results are similar to our findings for non-renormalizable operators. That is, if one requires a perturbative theory and consistency of the effective field theory, then both parametric and tachyonic resonance fail to significantly alter the moduli decay rate. Our results, as well as those of <cit.>, suggest that if one is to eliminate the moduli-dominated epoch, one is going to have to consider moduli that are strongly coupled. There is some motivation for this in string theory <cit.> (for more recent work see <cit.>); however, there must typically be at least one light modulus if we are to realize the perturbative Standard Model in a string construction <cit.>. For this reason, we take our results as a robust prediction that string theories lead to the expectation of a prolonged, matter-dominated epoch prior to BBN.

§ ACKNOWLEDGEMENTS

We are grateful to Peter Adshead, Bhaskar Dutta, Adrienne Erickcek and Matt Reece for useful discussions. J.T.G. is supported by the National Science Foundation, PHY-1414479. S.W. thanks the Michigan Center for Theoretical Physics for hospitality. S.W. is supported in part by NASA Astrophysics Theory Grant NNH12ZDA001N and DOE grant DE-FG02-85ER40237. Y.Z. is supported by DOE grant DE-SC0007859. This work was completed at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1066293.

References

[1] G. Kane, K. Sinha, and S. Watson, Int. J. Mod. Phys. D24, 1530022 (2015), arXiv:1502.07746 [hep-th].
[2] D. Baumann and L. McAllister, Inflation and String Theory (Cambridge University Press, 2015), arXiv:1404.2601 [hep-th].
[3] T. Banks, D. B. Kaplan, and A. E. Nelson, Phys. Rev. D49, 779 (1994), arXiv:hep-ph/9308292.
[4] B. de Carlos, J. A. Casas, F. Quevedo, and E. Roulet, Phys. Lett. B318, 447 (1993), arXiv:hep-ph/9308325.
[5] G. D. Coughlan, W. Fischler, E. W. Kolb, S. Raby, and G. G. Ross, Phys. Lett. B131, 59 (1983).
[6] T. Banks, M. Berkooz, and P. J. Steinhardt, Phys. Rev. D52, 705 (1995), arXiv:hep-th/9501053.
[7] T. Banks, M. Berkooz, S. H. Shenker, G. W. Moore, and P. J. Steinhardt, Phys. Rev. D52, 3548 (1995), arXiv:hep-th/9503114.
[8] A. L. Erickcek and K. Sigurdson, Phys. Rev. D84, 083503 (2011), arXiv:1106.0536 [astro-ph.CO].
[9] J. Fan, O. Ozsoy, and S. Watson, Phys. Rev. D90, 043536 (2014), arXiv:1405.7373 [hep-ph].
[10] A. L. Erickcek, K. Sinha, and S. Watson, Phys. Rev. D94, 063502 (2016), arXiv:1510.04291 [hep-ph].
[11] R. Easther, R. Galvez, O. Ozsoy, and S. Watson, Phys. Rev. D89, 023522 (2014), arXiv:1307.2453 [hep-ph].
[12] J. Georg, G. Sengor, and S. Watson, Phys. Rev. D93, 123523 (2016), arXiv:1603.00023 [hep-ph].
[13] J. Georg and S. Watson, (2017), arXiv:1703.04825 [astro-ph.CO].
[14] J. H. Traschen and R. H. Brandenberger, Phys. Rev. D42, 2491 (1990).
[15] L. Kofman, A. D. Linde, and A. A. Starobinsky, Phys. Rev. D56, 3258 (1997), arXiv:hep-ph/9704452.
[16] R. Allahverdi, R. Brandenberger, F.-Y. Cyr-Racine, and A. Mazumdar, Ann. Rev. Nucl. Part. Sci. 60, 27 (2010), arXiv:1001.2600 [hep-th].
[17] M. A. Amin, M. P. Hertzberg, D. I. Kaiser, and J. Karouby, Int. J. Mod. Phys. D24, 1530003 (2014), arXiv:1410.3808 [hep-ph].
[18] G. N. Felder, J. Garcia-Bellido, P. B. Greene, L. Kofman, A. D. Linde, and I. Tkachev, Phys. Rev. Lett. 87, 011601 (2001), arXiv:hep-ph/0012142.
[19] L. Kofman, A. D. Linde, X. Liu, A. Maloney, L. McAllister, et al., JHEP 0405, 030 (2004), arXiv:hep-th/0403001.
[20] S. Watson, Phys. Rev. D70, 066005 (2004), arXiv:hep-th/0404177.
[21] B. Greene, S. Judes, J. Levin, S. Watson, and A. Weltman, JHEP 0707, 060 (2007), arXiv:hep-th/0702220.
[22] S. Cremonini and S. Watson, Phys. Rev. D73, 086007 (2006), arXiv:hep-th/0601082.
[23] S. Kachru, R. Kallosh, A. D. Linde, and S. P. Trivedi, Phys. Rev. D68, 046005 (2003), arXiv:hep-th/0301240.
[24] J. P. Conlon, F. Quevedo, and K. Suruliz, JHEP 08, 007 (2005), arXiv:hep-th/0505076.
[25] B. S. Acharya, K. Bobkov, G. L. Kane, J. Shao, and P. Kumar, Phys. Rev. D78, 065038 (2008), arXiv:0801.0478 [hep-ph].
[26] J. T. Deskins, J. T. Giblin, and R. R. Caldwell, Phys. Rev. D88, 063530 (2013), arXiv:1305.7226 [astro-ph.CO].
[27] P. Adshead, J. T. Giblin, T. R. Scully, and E. I. Sfakianakis, JCAP 1512, 034 (2015), arXiv:1502.06506 [astro-ph.CO].
[28] P. Adshead, J. T. Giblin, T. R. Scully, and E. I. Sfakianakis, JCAP 1610, 039 (2016), arXiv:1606.08474 [astro-ph.CO].
[29] D. Green and T. Kobayashi, JCAP 1603, 010 (2016), arXiv:1511.08793 [astro-ph.CO].
[30] H. L. Child, J. T. Giblin, Jr, R. H. Ribeiro, and D. Seery, Phys. Rev. Lett. 111, 051301 (2013), arXiv:1305.0561 [astro-ph.CO].
[31] J. T. Giblin, E. Nesbit, O. Ozsoy, G. Sengor, and S. Watson, (2017), arXiv:1701.01455 [hep-th].
[32] L. Iliesiu, D. J. E. Marsh, K. Moodley, and S. Watson, Phys. Rev. D89, 103513 (2014), arXiv:1312.3636 [astro-ph.CO].
[33] K. D. Lozanov and M. A. Amin, (2016), arXiv:1608.01213 [astro-ph.CO].
[34] V. Demozzi, V. Mukhanov, and H. Rubinstein, JCAP 0908, 025 (2009), arXiv:0907.1030 [astro-ph.CO].
[35] N. Shuhmaher, JHEP 12, 094 (2008), arXiv:hep-ph/0703319.
[36] N. Shuhmaher and R. Brandenberger, Phys. Rev. D73, 043519 (2006), arXiv:hep-th/0507103.
[37] T. Banks and M. Dine, Phys. Rev. D50, 7454 (1994), arXiv:hep-th/9406132.
[38] M. Del Zotto, J. J. Heckman, P. Kumar, A. Malekian, and B. Wecht, (2016), arXiv:1608.06635 [hep-ph].

| http://arxiv.org/abs/1706.08536v1 | {
"authors": [
"John T. Giblin, Jr.",
"Gordon Kane",
"Eva Nesbit",
"Scott Watson",
"Yue Zhao"
],
"categories": [
"hep-th",
"astro-ph.CO",
"hep-ph"
],
"primary_category": "hep-th",
"published": "20170626180005",
"title": "Was the Universe Actually Radiation Dominated Prior to Nucleosynthesis?"
} |
email: [email protected] ^1Department of Applied Physics, Xi'an Jiaotong University, Xi'an 710049, China^2Key Laboratory of Quantum Information and Quantum Optoelectronic Devices, Shaanxi Province, China Preparing the ground state of a system is an important task in physics. We propose a quantum algorithm for preparing the ground state of a physical system that can be simulated on a quantum computer. The system is coupled to an ancillary qubit, by introducing a resonance mechanism between the ancilla qubit and the system, and combined with measurements performed on the ancilla qubit, the system can be evolved to monotonically converge to its ground state through an iterative procedure. We have simulated the application of this algorithm for the Afflect-Kennedy-Lieb-Tasaki model, whose ground state can be used as resource state in one-way quantum computation. Quantum algorithm for preparing the ground state of a system via resonance transition Hefeng Wang^1,2 December 30, 2023 ===================================================================================== Purification of quantum states is the key for many quantum applications, e.g., highly-purified quantum states are required in improving the signal to noise ratio in spectroscopy <cit.> and the resolution in metrology and quantum sensing <cit.>. It is also essential in quantum information science, such as initializing a set of qubits to a known state in many quantum algorithms, preparing resource state in one-way quantum computation <cit.>, and supplying fresh ancillary qubits in fault-tolerant quantum computing and quantum error correction <cit.>. To purify a quantum state, in addition to physical cooling, algorithmic cooling can be used to reduce the entropy of the system. In quantum computation, algorithmic cooling can be used for preparing the ground state of a quantum computer by means of the computer itself <cit.>.There are a few quantum cooling algorithms have been proposed. In Ref. <cit.>, a heat-bath algorithmic cooling (HBAC) approach was proposed and demonstrated experimentally. In this approach, the entropy of qubits is reduced by distributing more entropy to one of the qubits, which can release the excess entropy to a heat bath through thermalization. HBAC is not a universal cooling algorithm, it is mainly used for preparing polarized spins as initial states for quantum computation. Another approach <cit.> for quantum cooling is to engineer dissipative open-system dynamics to drive quantum states to the ground state of a simulated system. This approach is based on simulation of the Lindblad master equations, and is restricted to the frustration-free Hamiltonians. In Ref. <cit.>, a universal quantum cooling approach was proposed, this approach can be applied to enhance the probability of the ground-state projection through non-unitary operations introduced through measurements. In this approach, it cannot be verified directly that the system is cooled to its ground state. In Ref. <cit.>, a method is proposed for purifying a quantum system A to a pure state through Zeno-like measurements on another quantum system B that is coupled to A. The effect of performing a series of frequent measurements on system B introduces non-unitary operations on system A and drives it to a pure state.In this work, we propose a quantum algorithm for preparing the ground state of a system via resonance transition, provided that the ground state energy of the system is known. 
The system is coupled to an ancillary qubit and a resonance transition is introduced between them; through an energy exchange with the ancilla qubit, the system can be driven to converge monotonically to its ground state in an iterative way. In this algorithm, the system is prepared in an initial state and the amplitude of the ground state is amplified through a resonance mechanism; by performing measurements on the ancilla qubit, the system is purified and converges quickly to its ground state. The algorithm can be applied to any system whose Hamiltonian can be simulated on a quantum computer. We have simulated the application of this algorithm to the Affleck-Kennedy-Lieb-Tasaki (AKLT) model, whose ground state can be used as a resource state in one-way quantum computation.

Results

The Algorithm. For a qubit coupled to a physical system, when the qubit resonates with a transition in the system, it exhibits a dynamical response and an energy exchange occurs between the qubit and the system. By performing measurements on the qubit, the system can be purified to its ground state. Based on this, we propose a quantum algorithm for preparing the ground state of a system, provided that its ground-state energy is known. For some systems, such as some oracle-based problems in quantum computation, the ground-state energies are already known, or they can be obtained through the algorithm we introduce here and in Ref. <cit.>. In this algorithm, the system is coupled to an ancillary qubit and is prepared in an initial state that can be expanded in the eigenbasis of the Hamiltonian of the system; the amplitude of the ground state is amplified through a resonance mechanism. By performing measurements on the ancilla qubit, which introduce non-unitary operations on the system <cit.>, the system can be purified and driven to converge monotonically to its ground state. The details of the algorithm are as follows.

The algorithm requires (n+2) qubits: two ancillary qubits and n qubits that represent the system. The first ancilla qubit is coupled to a quantum register R of (n+1) qubits, consisting of the second ancilla qubit and an n-qubit quantum register representing a system of dimension N = 2^n. The Hamiltonian of the algorithm is constructed as

H = −(1/2)σ_z ⊗ I_2^⊗(n+1) + I_2 ⊗ H_R + c σ_x ⊗ σ_x ⊗ I_N,

where

H_R = ε_0 |0⟩⟨0| ⊗ I_N + |1⟩⟨1| ⊗ H_S,

and H_S is the Hamiltonian of the physical system. The eigenstates of the system are |χ_j⟩, with H_S|χ_j⟩ = E_j|χ_j⟩ (j = 1, 2, …, N), where the E_j are the eigenvalues of H_S. I_2 is the two-dimensional identity operator; σ_x and σ_z are the Pauli matrices. The first term in Eq. (1) is the Hamiltonian of the first ancilla qubit, the second term is the Hamiltonian of the register R, and the third term describes the interaction between the ancilla qubit and R. Here, ε_0 is a reference parameter and c is the coupling strength between the first ancilla qubit and R.

In the algorithm, a guess state |φ^(0)⟩ for the ground state of the system is prepared as the initial input state. The register R of the circuit is prepared in the state |0⟩|φ^(0)⟩, which is an eigenstate of the Hamiltonian H_R with eigenvalue ε_0. We set the parameter ε_0 such that ε_0 − E_1 = 1, where E_1 is the ground-state energy of the system. The procedure of the algorithm is as follows:

For k = 1:

(i) Prepare the first ancilla qubit in its ground state |0⟩ and the register R in the state |0⟩|φ^(k−1)⟩.

(ii) Construct the Hamiltonian of the algorithm as shown in Eq.
(1), and implement the time evolution operator U(τ) = exp(−iHτ) with τ = π/(2c).

(iii) Perform a measurement on the first ancilla qubit in the computational basis. If the measurement result is the excited state |1⟩, go to the next step; otherwise, set k = 1 and run the algorithm from step (i) again.

(iv) Take the state of the last n qubits obtained from step (iii) as the input state |φ^(k)⟩ for the system. Set k = k+1 and run the procedure from step (i) again.

Here, k is the iteration number of the algorithm. The state |φ^(m)⟩ obtained on the last n qubits is close to the ground state of the system. The time evolution operator U(τ) = exp(−iHτ) can be implemented efficiently through the Trotter formula <cit.>.

The initial state of the system |φ^(0)⟩ can be expanded in the complete set of eigenstates of the system Hamiltonian {|χ_j⟩, j = 1, 2, ⋯, N} as |φ^(0)⟩ = ∑_{j=1}^N d_j|χ_j⟩, where d_j = ⟨φ^(0)|χ_j⟩ and ∑_{j=1}^N |d_j|^2 = 1. In the basis {|Ψ_j⟩ = |0⟩|0⟩|χ_j⟩, |Ψ_{N+j}⟩ = |1⟩|1⟩|χ_j⟩, j = 1, 2, …, N}, the Hamiltonian of the algorithm decomposes as a direct sum of N two-dimensional matrices, H = ⊕_{j=1}^N H_j. With ε_0 − E_1 = ω = 1, the H_j take the form

H_j = ( 1/2 + E_1, c ; c, 1/2 + E_j ).

In the first iteration of the algorithm, the initial state of the circuit is |Ψ_0⟩ = |0⟩|0⟩|φ^(0)⟩, and the unitary evolution of the circuit gives U|Ψ_0⟩ = ∑_{j=1}^N d_j U_j|0⟩|0⟩|χ_j⟩, where U_j = exp(−iH_jτ). Then we have

U_1|0⟩|0⟩|χ_1⟩ = c_1|1⟩|1⟩|χ_1⟩,

where c_1 = e^{−i(α + π/2)} and α = (2E_1 + 1)π/(4c). And

U_j|0⟩|0⟩|χ_j⟩ = c_{j0}|0⟩|0⟩|χ_j⟩ + c_{j1}|1⟩|1⟩|χ_j⟩, j > 1,

where

c_{j0} = [ 1/2 − δ_j/(2√(4c^2 + δ_j^2)) + (1/2 + δ_j/(2√(4c^2 + δ_j^2))) e^{iπ√(4c^2 + δ_j^2)/(2c)} ] e^{−iπ(κ_j + √(4c^2 + δ_j^2))/(4c)},

c_{j1} = (c/√(4c^2 + δ_j^2)) e^{−iπ(κ_j + √(4c^2 + δ_j^2))/(4c)} ( 1 − e^{iπ√(4c^2 + δ_j^2)/(2c)} ),

with δ_j = E_j − E_1 and κ_j = E_1 + E_j + 1. Then

U|Ψ_0⟩ = U|0⟩|0⟩∑_{j=1}^N d_j|χ_j⟩ = ∑_{j=1}^N d_j U_j|0⟩|0⟩|χ_j⟩ = |1⟩( d_1 c_1|1⟩|χ_1⟩ + ∑_{j=2}^N d_j c_{j1}|1⟩|χ_j⟩ ) + |0⟩ ∑_{j=2}^N d_j c_{j0}|0⟩|χ_j⟩.

From Eq. (7), we see that

|c_{j1}|^2 = c^2/(4c^2 + δ_j^2) [ 2 − 2cos( π√(4c^2 + δ_j^2)/(2c) ) ].

The probability of the measurement on the ancilla qubit finding the excited state |1⟩ depends on the coupling coefficient c and on the energy gaps δ_j between the ground state and the excited states. In the case where δ_j ≫ c and c ≪ 1, |c_{j1}| ≪ 1 and |c_{j0}| ≈ 1. Therefore, in the first iteration of the algorithm, the probability of the measurement on the ancilla qubit finding the excited state is ≈ |d_1|^2. The system is evolved into the excited states of the system when the measurement on the ancilla qubit yields |0⟩, while the system is purified to a state close to its ground state when the measurement yields the excited state |1⟩. When the measurement on the first ancilla qubit yields |1⟩, the circuit collapses to the state |Ψ^(1)⟩ = (1/√N_r)|1⟩( d_1 c_1|1⟩|χ_1⟩ + ∑_{j=2}^N d_j c_{j1}|1⟩|χ_j⟩ ), where N_r = |d_1 c_1|^2 + ∑_{j=2}^N |d_j c_{j1}|^2 is the renormalization factor. Because |c_1| = 1 and |c_{j1}| ≈ c ≪ 1 provided that δ_j ≫ c, the state |1⟩|1⟩|χ_1⟩, which encodes the ground state of H_S, contributes the most to the state |Ψ^(1)⟩, as long as d_1 is polynomially large. Ignoring phase factors, the state of the last n qubits of the circuit can be written as (1/√(1+(a_0c)^2))( |χ_1⟩ + a_0c |χ̄_1⟩ ), where a_0 = √(∑_{j=2}^N |d_j c_{j1}/(d_1 c)|^2) and |χ̄_1⟩ = (1/(a_0c)) ∑_{j=2}^N (d_j/|d_1|) c_{j1}|χ_j⟩ is a state that does not contain the ground state |χ_1⟩. The amplitude of the ground state of the system is thus amplified, relative to the rest, at a ratio of about 1 : a_0c through the resonance mechanism.
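To make the procedure concrete, here is a small dense-matrix sketch of one iteration (our own illustrative NumPy code; on a real device U(τ) would instead be Trotterized, as described below). The system Hamiltonian H_S, its ground energy E_1, and the trial state phi are assumed given.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
P0 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
P1 = np.diag([0.0, 1.0]).astype(complex)   # |1><1|

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def algorithm_hamiltonian(H_S, eps0, c):
    """Eq. (1): H = -1/2 sz x I_2^{n+1} + I_2 x H_R + c sx x sx x I_N,
    with H_R = eps0 |0><0| x I_N + |1><1| x H_S."""
    N = H_S.shape[0]
    I_N, I2 = np.eye(N, dtype=complex), np.eye(2, dtype=complex)
    H_R = np.kron(P0, eps0 * I_N) + np.kron(P1, H_S)
    return -0.5 * kron(sz, I2, I_N) + np.kron(I2, H_R) + c * kron(sx, sx, I_N)

def one_iteration(H_S, phi, E1, c=0.05, rng=np.random.default_rng(0)):
    """Evolve |0>|0>|phi> under exp(-iH tau), tau = pi/(2c), measure the
    first ancilla; on outcome |1> return the purified system state."""
    N = H_S.shape[0]
    H = algorithm_hamiltonian(H_S, E1 + 1.0, c)     # resonance: eps0 - E1 = 1
    vals, vecs = np.linalg.eigh(H)
    tau = np.pi / (2.0 * c)
    U = vecs @ np.diag(np.exp(-1j * vals * tau)) @ vecs.conj().T
    ket0 = np.array([[1.0], [0.0]], dtype=complex)
    psi = (U @ kron(ket0, ket0, phi.reshape(-1, 1)).ravel())
    blocks = psi.reshape(2, 2 * N)                  # first index: ancilla 1
    p1 = np.linalg.norm(blocks[1]) ** 2             # excitation probability
    if rng.random() < p1:                           # outcome |1>: success
        sys_block = blocks[1].reshape(2, N)[1]      # second ancilla also |1>
        return sys_block / np.linalg.norm(sys_block), True
    return phi, False                               # outcome |0>: restart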
After m consecutive measurements of the first ancilla qubit in its excited state |1⟩, the system is purified to the state (1/√(1+(a_0c)^2m)) [ |χ_1⟩ + (a_0c)^m |χ̄_1⟩ ], and the amplitude of the component that does not contain the ground state of H_S is compressed to be exponentially small in m.

The success probability for (m+1) consecutive measurements of the first ancilla qubit in its excited state |1⟩ is

P_succ = |d_1|^2 ∏_k=1^m 1/(1+(a_0c)^2k) > |d_1|^2 [ 1/(1+(a_0c)^2) ]^m ≈ |d_1|^2 [ 1-(a_0c)^2 ]^m.     (9)

If the coupling coefficient c is set such that a_0c < 1/√(m) and δ_j ≫ c, then the success probability of the algorithm satisfies P_succ > |d_1|^2/e in the asymptotic limit of large m. The system is purified to a state that has fidelity (1 - m(a_0c)^2m) with the ground state of the Hamiltonian H_S. The number of times the algorithm has to be run is proportional to 1/P_succ. The evolution time of the algorithm is τ = π/(2c). If the energy gap between the ground state and the excited states δ_j is polynomially large, then the coupling coefficient c can be set polynomially small while keeping δ_j ≫ c; as long as |d_1|^2 is not exponentially small, the algorithm can then be run in polynomial time with finite success probability.

Since the amplitude of the ground state of the system is amplified at a rate of about 1 : a_0c, where a_0c ≪ 1, the system evolves very quickly to a state that is very close to the ground state when the measurement on the first ancilla qubit gives its excited state. The success probability of the algorithm mainly depends on the overlap between the initial state and the ground state of the system. Whenever a measurement on the ancilla qubit fails to give the excited state, we simply restart the algorithm. In practice, the system is purified to a state that is very close to the ground state within a few iterations, as we can see in the example below.

We also have to implement the time evolution operator U(τ) = exp(-iHτ). In the algorithm Hamiltonian H, as shown in Eq. (1), the first two terms commute with each other, but they do not commute with the third term. The operator U(τ) can be implemented through the Trotter formula <cit.>:

U(τ) = [ e^-i( -1/2 σ_z ⊗ I_2^⊗(n+1) + I_2 ⊗ H_R )τ/L e^-i( cσ_x ⊗ σ_x ⊗ I_N )τ/L ]^L + O(1/L).     (10)

The number of Trotter steps L can be made sufficiently large that the error is bounded by any desired threshold. With the Trotter approximation, the evolution operator implemented on the circuit is U(τ) - O(1/L), which introduces a slight deviation from the exact unitary evolution in each iteration of the algorithm. The probability of (m+1) consecutive measurements of the first ancilla qubit in its excited state |1⟩ then becomes P_succ' ≈ |d_1|^2 [ 1-(a_0c)^2 ]^m [ 1 - O(1/L) ]^(m+1).
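As a sanity check on the Trotter step, the following sketch, again assuming NumPy/SciPy and using a toy two-level system (N = 2; all numbers illustrative), splits H of Eq. (1) into its commuting part A and the coupling term B, and compares the first-order product formula against the exact propagator for increasing L; the error shrinks roughly as 1/L once L is large enough.

```python
import numpy as np
from scipy.linalg import expm

# Toy instance of Eq. (1) with a 2-level system (N = 2).
N, c, eps0 = 2, 0.05, 1.0
H_S = np.diag([0.0, 1.0]).astype(complex)
I2, IN = np.eye(2), np.eye(N)
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
sx, sz = np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])

H_R = eps0 * np.kron(P0, IN) + np.kron(P1, H_S)
A = -0.5 * np.kron(sz, np.kron(I2, IN)) + np.kron(I2, H_R)  # commuting terms
B = c * np.kron(sx, np.kron(sx, IN))                        # coupling term

tau = np.pi / (2 * c)
U_exact = expm(-1j * (A + B) * tau)
for L in (10, 100, 1000):
    step = expm(-1j * A * tau / L) @ expm(-1j * B * tau / L)
    U_trot = np.linalg.matrix_power(step, L)
    err = np.linalg.norm(U_trot - U_exact, 2)   # spectral norm
    print(f"L = {L:5d}   ||U_trot - U_exact|| = {err:.2e}")
```

For small L the error can saturate near its trivial bound, but for larger L the printed norms display the O(1/L) decay claimed above.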
A slight modification of the algorithm can be used to obtain the ground state energy of a system. When the transition frequency between the reference state and an eigenstate of the system matches the frequency of the first ancilla qubit, that eigenstate contributes the most to the excitation of the qubit, and a peak in the excitation probability of the qubit, estimated through repeated measurements, will be observed. Therefore, by varying the eigenvalue ε_0 of the reference state and running the algorithm, we can locate the transition frequency between the reference state and the ground state, and thus obtain the ground state energy of the system. The procedure is the same as in Ref. <cit.>.

Simulating the algorithm for the AKLT model. In the following, we simulate the algorithm for the one-dimensional Affleck-Kennedy-Lieb-Tasaki model. The ground state of the two-dimensional AKLT model can be used as a resource state for universal one-way quantum computation <cit.>. The AKLT model is gapped; it can therefore be cooled to its ground state, and one-way quantum computation can then be implemented on it through single-qubit operations. In Ref. <cit.>, the authors simulated one-way quantum computation on the one-dimensional AKLT model by preparing the valence-bond solid state of the model in a photonic system, but they did not use a cooling method <cit.>. Here, by applying the algorithm proposed above, we can evolve the simulated AKLT model on a qubit system to its ground state.

The one-dimensional AKLT model consists of a linear chain of N spin-1 particles in the bulk and two spin-1/2 particles on the boundary. The spin-1 operators are denoted S⃗_k and the spin-1/2 operators s⃗_j, with j = 0, N+1. The Hamiltonian of the system is <cit.>

H_AKLT = ∑_k=1^N-1 [ S⃗_k·S⃗_k+1 + 1/3( S⃗_k·S⃗_k+1 )^2 ] + θ_0,1 + θ_N,N+1.     (11)

The boundary terms θ describe the interaction between a spin-1/2 and a spin-1:

θ_0,1 = 2/3( 1 + s⃗_0·S⃗_1 ),  θ_N,N+1 = 2/3( 1 + s⃗_N+1·S⃗_N ).     (12)

Using the 3-spin 1d-AKLT model as an example, we simulate the algorithm for preparing the ground state of this system. The Hamiltonian of the system is

H_S = 2/3( 1 + s⃗_0·S⃗_1 ) + 2/3( 1 + s⃗_2·S⃗_1 ).     (13)

The ground state of the system Hamiltonian H_S is unique and its eigenvalue is zero. We use four qubits to simulate this system. The states |1,1⟩, |1,0⟩, |1,-1⟩ of the spin-1 particle are represented in the symmetric subspace of two qubits:

|1,1⟩ = |00⟩,  |1,0⟩ = 1/√(2)( |01⟩ + |10⟩ ),  |1,-1⟩ = |11⟩.

We set the eigenvalue of the reference state to ε_0 = 1. In the basis above, the ground state of the simulated AKLT model on the qubit system is 1/√(12)( |0011⟩ + |1100⟩ + |0101⟩ + |1010⟩ ) - 1/√(3)( |0110⟩ + |1001⟩ ). The initial state of the system is prepared as |1100⟩, which has fidelity 1/12 with the ground state of the system.

In the following, we show that the ground state energy of the AKLT model above can be obtained using the algorithm proposed here. The coupling coefficient is set to c = 0.05. We vary the eigenvalue ε_0 of the reference state in the range [0.8, 1.2], discretized into 100 equal elements, and run the algorithm a number of times at each value in order to estimate the excitation probability of the first ancilla qubit. The excitation probability of the ancilla qubit at the different eigenvalues of the reference state is shown in Fig. 1. From the figure we can see that the excitation probability reaches its maximum at ε_0 = 1; the ground state energy of the AKLT model is therefore obtained as E_1 = 0.

Setting ε_0 = 1 and c = 0.05 and running the algorithm, in the first iteration, when the measurement on the first ancilla qubit gives its excited state, the state obtained on the last n qubits of the circuit is |φ^(1)⟩ = 0.321(|0011⟩ + |0101⟩ + |1010⟩) - 0.573(|0110⟩ + |1001⟩) + 0.186|1100⟩, which has fidelity 0.99 with the ground state of the system. After running the algorithm for two iterations, the state obtained is |φ^(2)⟩ = 0.288(|0011⟩ + |0101⟩ + |1010⟩) - 0.577(|0110⟩ + |1001⟩) + 0.292|1100⟩, whose fidelity with the ground state of the AKLT model is approximately one.
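The 3-spin example can be reproduced numerically. The sketch below, assuming NumPy/SciPy, represents the spin-1 site directly by its three levels (rather than by the two-qubit encoding used in the text), verifies that the ground energy of Eq. (13) is zero, and scans ε_0 as in Fig. 1. The initial product state |↓, m=0, ↑⟩, the spin-space counterpart of the qubit state |1100⟩, is one convenient choice among many with non-negligible ground-state overlap.

```python
import numpy as np
from scipy.linalg import expm

# Spin-1/2 and spin-1 operators (hbar = 1).
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
r = 1 / np.sqrt(2)
Sx = r * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = r * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def kron3(x, y, z):
    return np.kron(x, np.kron(y, z))

# 3-spin AKLT chain of Eq. (13): spin-1/2 -- spin-1 -- spin-1/2 (dim 12).
I2, I12 = np.eye(2), np.eye(12)
dot01 = sum(kron3(a, b, I2) for a, b in [(sx, Sx), (sy, Sy), (sz, Sz)])
dot12 = sum(kron3(I2, b, a) for a, b in [(sx, Sx), (sy, Sy), (sz, Sz)])
H_S = (2 / 3) * (I12 + dot01) + (2 / 3) * (I12 + dot12)
evals, _ = np.linalg.eigh(H_S)
print("ground state energy:", round(float(evals[0]), 10))   # expected: 0

# Initial guess: basis index = i0*6 + i1*2 + i2 for (site0, site1, site2);
# |down, m=0, up> sits at index 8 (an assumption: any state with
# non-negligible ground-state overlap would do).
phi0 = np.zeros(12)
phi0[8] = 1.0

sig_x, sig_z = np.array([[0.0, 1.0], [1.0, 0.0]]), np.diag([1.0, -1.0])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
c = 0.05

def p_excite(eps0):
    """Excitation probability of the first ancilla after one run."""
    H_R = eps0 * np.kron(P0, I12) + np.kron(P1, H_S)
    H = (-0.5 * np.kron(sig_z, np.eye(24)) + np.kron(np.eye(2), H_R)
         + c * kron3(sig_x, sig_x, I12))
    psi = np.kron([1.0, 0.0], np.kron([1.0, 0.0], phi0)).astype(complex)
    psi = expm(-1j * H * np.pi / (2 * c)) @ psi
    return np.linalg.norm(psi[24:]) ** 2

# Scan the reference energy as in Fig. 1; the peak sits near eps0 = 1.
for eps0 in np.linspace(0.8, 1.2, 9):
    print(f"eps0 = {eps0:.2f}   P(|1>) = {p_excite(eps0):.3f}")
```

The printed scan peaks at ε_0 = 1, reproducing the location (though, given the different initial state, not the exact height) of the maximum in Fig. 1.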
Here we have used only the simplest 1d-AKLT model as an example to illustrate the application of the algorithm. In Refs. <cit.>, it was shown that the ground state of the two-dimensional AKLT model of spin-3/2 particles is a universal resource for one-way quantum computation. In this model, the spin states of a spin-3/2 particle are represented by three qubits as shown in <cit.>. The 2d-AKLT model of spin-3/2 particles is gapped, so the algorithm proposed here can be applied to prepare its ground state: we can first use the algorithm to obtain the ground state energy, and then run it to evolve the system to converge to its ground state.

Discussion

We have proposed a quantum algorithm for preparing the ground state of a system, provided that the ground state energy is known. A resonance mechanism is introduced to amplify the amplitude of the ground state contained in an initial input state of the system. A measurement performed on the ancilla qubit indicates whether or not the system has been driven to its ground state. The procedures of the algorithm can be run iteratively to drive the system to converge monotonically to its ground state. The algorithm can be applied to any system whose Hamiltonian can be simulated on a quantum computer.

The efficiency of the algorithm depends on the overlap between the initial input state and the ground state of the system, as shown in Eq. (9), when the energy gap between the ground and excited states of the system is much larger than the coupling coefficient c. It is limited by the energy gap between the ground state and the excited states, as shown in Eqs. (6)-(8): when the energy gap is small, the rate at which the amplitude of the ground state in the input state is amplified is limited by the gap. If the overlap between the initial input state and the ground state of the system is finite, and the energy gap between the ground state and the first excited state is polynomially large, so that the coupling coefficient c in the algorithm can be set much smaller than the gap, then the system can be evolved to its ground state in polynomial time with finite success probability.

We now compare this algorithm with other cooling algorithms for preparing the ground state of a system. Firstly, our algorithm requires the ground state energy of the system. For some systems, the ground state energy is already known, as in the example we used and in some oracle-based problems in quantum computation. For a system whose ground state energy is unknown, an approximate ground state energy can be obtained using the algorithm in Ref. <cit.> or the algorithm in this work, as shown in the example. Obtaining the ground state energy of a system requires some extra cost, but it also provides advantages. With this algorithm, whether a system has been prepared in its ground state can be determined from the measurement result on the first ancilla qubit: we can be certain that the system has evolved to a state close to its ground state when an excitation is observed on the first ancilla qubit.
For other algorithms, by contrast, there is no direct way of verifying that the system has been cooled to its ground state.

Secondly, the algorithm proposed here can be run iteratively. With the ground state energy known, the system can be driven to converge monotonically to its ground state through the iterative procedures of the algorithm, which thus provide a systematic way of purifying a state to the ground state of a system. Other cooling algorithms do not share this property.

Thirdly, our algorithm uses a total of (n+2) qubits, whereas other algorithms use (n+1) qubits. By adding the second ancilla qubit, we can introduce a reference state that does not depend on the energy spectrum of the system, which gives us the flexibility of setting suitable resonance transitions to evolve the system to any of its eigenstates. The algorithms in our previous work <cit.>, which use (n+1) qubits, can also be used for preparing the ground state of a system, but they can only be run once, and the resonance transition they induce depends on the energy spectrum of the system. In the present algorithm, we can instead purify the ground state iteratively, so that the purity of the ground state is increased in a systematic way.

In Ref. <cit.>, a cooling algorithm was proposed that is also based on a resonance transition: a qubit representing the bath is coupled to the system, and the frequency of the qubit is adjusted to match a transition frequency of the system, inducing a resonance transition that cools the system. In our algorithm, by contrast, the second ancilla qubit introduces a reference state that is independent of the energy spectrum of the system. Its eigenvalue can be adjusted so that the transition frequency between the reference state and the ground state of the system matches the frequency of the first ancilla qubit, thereby inducing a resonance. This provides the flexibility of obtaining any eigenstate of the system. Moreover, our algorithm can be run iteratively to drive the system to converge monotonically to its ground state, whereas the algorithm in Ref. <cit.> does not have this property.

Adiabatic evolution can also be used for preparing the ground state of a system. In the adiabatic state preparation approach, one starts with an initial Hamiltonian and evolves it adiabatically to the target Hamiltonian, so that the system is carried from the ground state of the initial Hamiltonian to the ground state of the target Hamiltonian. Our algorithm instead starts with the target Hamiltonian directly and obtains its ground state. It requires only information about the spectrum of the Hamiltonian of the system, and not about any intermediate Hamiltonians of an adiabatic evolution. The implementation of our algorithm is simpler, and whether the system has evolved to its ground state can be determined from the measurement on the first ancilla qubit.

In Ref. <cit.>, the author introduced controlled quantum adiabatic evolutions of single- and two-qubit operations for performing universal quantum computation. Compared with the algorithm proposed here, the two algorithms share a similarity in the controlled Hamiltonian: the Hamiltonian H_R in our algorithm can be thought of as a controlled Hamiltonian evolution, and the initial states of both algorithms are eigenstates of the respective controlled Hamiltonians.
The difference is that our algorithm is based on a resonance mechanism for obtaining the ground state of a system and starts from the target Hamiltonian directly, whereas in the algorithm of <cit.> the desired quantum state is reached through a controlled adiabatic evolution.

References

[snr1] Ardenkjær-Larsen, J. H., et al. Increase in signal-to-noise ratio of >10,000 times in liquid-state NMR. Proc. Natl. Acad. Sci., 100, 10158 (2003).
[snr2] Hall, D. A., et al. Polarization-enhanced NMR spectroscopy of biomolecules in frozen solution. Science, 276, 930 (1997).
[met1] Mosley, P. J., et al. Heralded generation of ultrafast single photons in pure quantum states. Phys. Rev. Lett., 100, 133601 (2008).
[met2] Riedel, M. F., et al. Atom-chip-based generation of entanglement for quantum metrology. Nature, 464, 1170 (2010).
[met3] Thomas-Peter, N., et al. Real-world quantum sensors: evaluating resources for precision measurement. Phys. Rev. Lett., 107, 113603 (2011).
[met4] Ye, J., Kimble, H. & Katori, H. Quantum state engineering and precision metrology using state-insensitive light traps. Science, 320, 1734 (2008).
[raussendorf] Raussendorf, R. & Briegel, H. J. A one-way quantum computer. Phys. Rev. Lett., 86, 5188 (2001).
[ft1] Knill, E., Laflamme, R. & Zurek, W. H. Resilient quantum computation: error models and thresholds. Proc. Roy. Soc. London Ser. A, 454, 365 (1998).
[ft2] Aharonov, D. & Ben-Or, M. Fault-tolerant quantum computation with constant error. In Proceedings of the 29th ACM Symposium on the Theory of Computing (ACM, El Paso, TX, 1997), pp. 176-188.
[baugh] Baugh, J., et al. Experimental implementation of heat-bath algorithmic cooling using solid-state nuclear magnetic resonance. Nature, 438, 470 (2005).
[brassard] Brassard, G., et al. Experimental heat-bath cooling of spins. E-print arXiv:quant-ph/0511156 (2005).
[HBAC] Boykin, P. O., et al. Algorithmic cooling and scalable NMR quantum computers. Proc. Natl. Acad. Sci., 99, 3388 (2002).
[DOS1] Kraus, B., et al. Preparation of entangled states by quantum Markov processes. Phys. Rev. A, 78, 042307 (2008).
[DOS2] Verstraete, F., Wolf, M. M. & Cirac, J. I. Quantum computation and quantum-state engineering driven by dissipation. Nat. Phys., 5, 633 (2009).
[DLAC] Xu, J.-S., et al. Demon-like algorithmic quantum cooling and its realization with quantum optics. Nat. Photon., 8, 113 (2014).
[dynamics] Wang, H., Ashhab, S. & Nori, F. Quantum algorithm for simulating the dynamics of an open quantum system. Phys. Rev. A, 83, 062317 (2011).
[wan] Wang, H., Ashhab, S. & Nori, F. Quantum algorithm for obtaining the energy spectrum of a physical system. Phys. Rev. A, 85, 062304 (2012).
[AKLT] Wei, T.-C. & Raussendorf, R. Universal measurement-based quantum computation with spin-2 Affleck-Kennedy-Lieb-Tasaki states. Phys. Rev. A, 92, 012310 (2015).
[nc] Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information (Cambridge Univ. Press, Cambridge, England, 2000).
[nakazato] Nakazato, H., Takazawa, T. & Yuasa, K. Purification through Zeno-like measurements. Phys. Rev. Lett., 90, 060401 (2003).
[lavoie] Kaltenbaek, R., et al. Optical one-way quantum computing with a simulated valence-bond solid. Nat. Phys., 6, 850 (2010).
[rauss] Raussendorf, R. Quantum computing: shaking up ground states. Nat. Phys., 6, 840 (2010).
[fan] Fan, H., Korepin, V. & Roychowdhury, V. Entanglement in a valence-bond solid state. Phys. Rev. Lett., 93, 227203 (2004).
[wei] Wei, T.-C., Affleck, I. & Raussendorf, R. Affleck-Kennedy-Lieb-Tasaki state on a honeycomb lattice is a universal quantum computational resource. Phys. Rev. Lett., 106, 070501 (2011).
[cai] Cai, J.-M., et al. Universal quantum computer from a quantum magnet. Phys. Rev. A, 82, 052309 (2010).
[taylor] Kafri, D. & Taylor, J. M. Algorithmic cooling of a quantum simulator. E-print arXiv:1207.7111.
[Hen] Hen, I. Quantum gates with controlled adiabatic evolutions. Phys. Rev. A, 91, 022309 (2015).

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 11275145).

Additional information

Competing financial interests: The authors declare no competing financial interests.

Correspondence and requests for materials should be addressed to Hefeng Wang ([email protected]).
"authors": [
"Hefeng Wang"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20170627015724",
"title": "Quantum algorithm for preparing the ground state of a system via resonance transition"
} |
[pages=1-last]the_real_article.pdf | http://arxiv.org/abs/1706.08732v1 | {
"authors": [
"Xudong Li",
"Defeng Sun",
"Kim-Chuan Toh"
],
"categories": [
"math.OC",
"90C06, 90C20, 90C22, 90C25"
],
"primary_category": "math.OC",
"published": "20170627085857",
"title": "On efficiently solving the subproblems of a level-set method for fused lasso problems"
} |
iterative Random Forests to discover predictive and stable high-order interactions

Sumanta Basu, Karl Kumbier, James B. Brown, and Bin Yu

Genomics has revolutionized biology, enabling the interrogation of whole transcriptomes, genome-wide binding sites for proteins, and many other molecular processes. However, individual genomic assays measure elements that interact in vivo as components of larger molecular machines. Understanding how these high-order interactions drive gene expression presents a substantial statistical challenge. Building on Random Forests (RF) and Random Intersection Trees (RIT), and through extensive, biologically inspired simulations, we developed the iterative Random Forest algorithm (iRF). iRF trains a feature-weighted ensemble of decision trees to detect stable, high-order interactions with the same order of computational cost as RF. We demonstrate the utility of iRF for high-order interaction discovery in two prediction problems: enhancer activity in the early Drosophila embryo and alternative splicing of primary transcripts in human-derived cell lines. In Drosophila, among the 20 pairwise transcription factor interactions iRF identifies as stable (returned in more than half of bootstrap replicates), 80% have been previously reported as physical interactions. Moreover, novel third-order interactions, e.g. between Zelda (Zld), Giant (Gt), and Twist (Twi), suggest high-order relationships that are candidates for follow-up experiments. In human-derived cells, iRF re-discovered a central role of H3K36me3 in chromatin-mediated splicing regulation, and identified novel 5th- and 6th-order interactions, indicative of multi-valent nucleosomes with specific roles in splicing regulation. By decoupling the order of interactions from the computational cost of identification, iRF opens new avenues of inquiry into the molecular mechanisms underlying genome biology.

§ INTRODUCTION

High-throughput, genome-wide measurements of protein-DNA and protein-RNA interactions are driving new insights into the principles of functional regulation. For instance, databases generated by the Berkeley Drosophila Transcriptional Network Project (BDTNP; http://bdtnp.lbl.gov/Fly-Net/) and the ENCODE consortium (http://encodeproject.org) provide maps of transcription factor (TF) binding events and chromatin marks for substantial fractions of the regulatory factors active in the model organism Drosophila melanogaster and in human-derived cell lines, respectively <cit.>. A central challenge with these data lies in the fact that ChIP-seq, the principal tool used to measure DNA-protein interactions, assays a single protein target at a time. In well-studied systems, regulatory factors such as TFs act in concert with other chromatin-associated and RNA-associated proteins, often through stereospecific interactions <cit.>; for a review see <cit.>.
While several methods have been developed to identify interactions in large genomics datasets, for example <cit.>, these approaches either focus on pairwise relationships or require explicit enumeration of higher-order interactions, which becomes computationally infeasible for even moderate-sized datasets. In this paper, we present a computationally efficient tool for directly identifying high-order interactions in a supervised learning framework. We note that the interactions we identify do not necessarily correspond to biomolecular complexes or physical interactions. However, among the pairwise Drosophila TF interactions identified as stable, 80% have been previously reported (SI S4). The empirical success of our approach, combined with its computational efficiency, stability, and interpretability, makes it uniquely positioned to guide inquiry into the high-order mechanisms underlying functional regulation.

Popular statistical and machine learning methods for detecting interactions among features include decision trees and their ensembles: CART <cit.>, Random Forests (RFs) <cit.>, Node Harvest <cit.>, Forest Garrote <cit.>, and RuleFit3 <cit.>, as well as methods more specific to gene-gene interactions with categorical features: logic regression <cit.>, Multifactor Dimensionality Reduction <cit.>, and Bayesian Epistasis mapping <cit.>. With the exception of RFs, the above tree-based procedures grow shallow trees to prevent overfitting, which excludes the possibility of detecting high-order interactions without sacrificing predictive accuracy. RFs are an attractive alternative, leveraging high-order interactions to obtain state-of-the-art prediction accuracy. However, interpreting interactions in the resulting tree ensemble remains a challenge.

We take a step towards overcoming these issues by proposing a fast algorithm built on RFs that searches for stable, high-order interactions. Our method, the iterative Random Forest algorithm (iRF), sequentially grows feature-weighted RFs to perform soft dimension reduction of the feature space and to stabilize decision paths. We decode the fitted RFs using a generalization of the Random Intersection Trees algorithm (RIT) <cit.>. This procedure identifies high-order feature combinations that are prevalent on the RF decision paths. In addition to the high predictive accuracy of RFs, the decision tree base learner captures the underlying biology of local, combinatorial interactions <cit.>, an important feature for biological data, where a single molecule often performs many roles in various cellular contexts. Moreover, the invariance of decision trees to monotone transformations <cit.> to a large extent mitigates normalization issues, which are a major concern in the analysis of genomics data, where signal-to-noise ratios vary widely even between biological replicates <cit.>. Using empirical and numerical examples, we show that iRF is competitive with RF in terms of predictive accuracy, and extracts both known and compelling novel interactions in two motivating biological problems in epigenomics and transcriptomics. An open-source R implementation of iRF is available through CRAN (https://cran.r-project.org/web/packages/iRF/index.html) <cit.>.

§ OUR METHOD: ITERATIVE RANDOM FORESTS

The iRF algorithm searches for high-order feature interactions in three steps. First, iterative feature re-weighting adaptively regularizes RF fitting.
Second, decision rules extracted from a feature-weighted RF provide a mapping from continuous or categorical features to binary features. This mapping allows us to identify prevalent interactions in an RF through a generalization of RIT, a computationally efficient algorithm that searches for high-order interactions in binary data <cit.>. Finally, a bagging step assesses the stability of recovered interactions with respect to bootstrap perturbations of the data. We briefly review the feature-weighted RF and RIT before presenting iRF.

§.§ Preliminaries: Feature-weighted RF and RIT

To reduce the dimensionality of the feature space without removing marginally unimportant features that may participate in high-order interactions, we use a feature-weighted version of RF. Specifically, for a set of non-negative weights w = (w_1, …, w_p), where p is the number of features, let RF(w) denote a feature-weighted RF constructed with w. In RF(w), instead of taking a uniform random sample of features at each split, one chooses the j-th feature with probability proportional to w_j. Weighted tree ensembles have been proposed in <cit.> under the name “enriched random forests” and used for feature selection in genomic data analysis. Note that with this notation, Breiman's original RF amounts to RF(1/p, …, 1/p).

iRF builds upon a generalization of RIT, an algorithm that performs a randomized search for high-order interactions among binary features in a deterministic setting. More precisely, RIT searches for co-occurring collections of s binary features, or order-s interactions, that appear with greater frequency in a given class. The algorithm recovers such interactions with high probability (relative to the randomness it introduces) at a computational cost substantially lower than O(p^s), provided the interaction pattern is sufficiently prevalent in the data and individual features are sparse.

We briefly present the basic RIT algorithm and refer readers to the original paper <cit.> for a complete description. Consider a binary classification problem with n observations and p binary features. Suppose we are given data in the form (ℐ_i, Z_i), i=1, …, n. Here, each Z_i ∈ {0,1} is a binary label and ℐ_i ⊆ {1, 2, …, p} is a feature-index subset indicating the indices of “active” features associated with observation i. In the context of gene transcription, ℐ_i can be thought of as a collection of TFs and histone modifications with abnormally high or low enrichments near the i-th gene's promoter region, and Z_i can indicate whether gene i is transcribed or not. With these notations, the prevalence of an interaction S ⊆ {1, …, p} in class C ∈ {0, 1} is defined as

ℙ_n(S | Z=C) := ∑_i: Z_i=C 1(S ⊆ ℐ_i) / ∑_i=1^n 1(Z_i = C),

where ℙ_n denotes the empirical probability distribution and 1(·) the indicator function. For given thresholds 0 ≤ θ_0 < θ_1 ≤ 1, RIT performs a randomized search for interactions S satisfying

ℙ_n(S | Z=1) ≥ θ_1,  ℙ_n(S | Z=0) ≤ θ_0.     (<ref>)

For each class C ∈ {0, 1} and a pre-specified integer D, let j_1, …, j_D be randomly chosen indices from the set of observations {i : Z_i = C}. To search for interactions S satisfying condition (<ref>), RIT takes D-fold intersections ℐ_j_1 ∩ ℐ_j_2 ∩ … ∩ ℐ_j_D of the randomly selected observations in class C. To reduce computational complexity, these intersections are performed in a tree-like fashion (SI S1 Algorithm 1), where each non-leaf node has n_child children. This process is repeated M times for a given class C, resulting in a collection of surviving interactions 𝒮 = ⋃_m=1^M 𝒮_m, where each 𝒮_m is the set of interactions that remains after the D-fold intersection process in tree m = 1, …, M. The prevalences of the surviving interactions in the two classes are subsequently compared using condition (<ref>). The main intuition is that if an interaction S is highly prevalent in a particular class, it will survive the D-fold intersections with high probability.
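The following sketch, in Python (an assumption; the paper's reference implementation is in R), mimics the basic intersection scheme of SI S1 Algorithm 1 on synthetic binary data: M random trees of depth D whose nodes intersect the active sets of randomly drawn class-C observations. The prevalence-threshold comparison across classes, condition (<ref>), is omitted for brevity, and the planted interaction {0, 1, 2} is purely illustrative.

```python
import random

def rit_survivors(active_sets, M=100, D=5, n_child=2, seed=0):
    """Basic RIT over the active index sets of one class C."""
    rng = random.Random(seed)
    survivors = set()
    for _ in range(M):
        level = [rng.choice(active_sets)]     # root: one random observation
        for _ in range(D - 1):                # D-1 further intersection levels
            nxt = []
            for node in level:
                for _ in range(n_child):
                    child = node & rng.choice(active_sets)
                    if child:                 # prune empty intersections
                        nxt.append(child)
            level = nxt
        survivors.update(frozenset(s) for s in level)
    return survivors

# Synthetic class-1 data: the interaction {0, 1, 2} co-occurs in ~40% of
# observations; all 50 features are otherwise sparsely, independently active.
rng = random.Random(1)
active = []
for _ in range(200):
    s = {j for j in range(50) if rng.random() < 0.05}
    if rng.random() < 0.4:
        s |= {0, 1, 2}
    active.append(s)

hits = rit_survivors(active)
print(sorted(map(sorted, hits), key=len, reverse=True)[:5])
```

With these (illustrative) prevalences, the planted set {0, 1, 2} survives in a non-negligible fraction of leaves, while spurious noise features are rapidly pruned by the intersections, which is the intuition stated above.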
§.§ iterative Random Forests

The iRF algorithm places interaction discovery in a supervised learning framework to identify the class-specific, active index sets required for RIT. This framing allows us to recover high-order interactions that are associated with accurate prediction in feature-weighted RFs. We consider the binary classification setting with training data 𝒟 in the form {(𝐱_i, y_i)}_i=1^n, with continuous or categorical features 𝐱 = (x_1, …, x_p) and a binary label y ∈ {0, 1}. Our goal is to find subsets S ⊆ {1, …, p} of features, or interactions, that are both highly prevalent within a class C ∈ {0,1} and provide good differentiation between the two classes. To encourage generalizability of our results, we search for interactions in ensembles of decision trees fitted on bootstrap samples of 𝒟. This allows us to identify interactions that are robust to small perturbations in the data. Before describing iRF, we present a generalized RIT that uses any RF, weighted or not, to generate active index sets from continuous or categorical features. Our generalized RIT is independent of the other iRF components in the sense that other approaches could be used to generate the input for RIT. We remark on our particular choices in SI S2.

Generalized RIT (through an RF): For each tree t = 1, …, T in the output tree ensemble of an RF, we collect all leaf nodes and index them by j_t = 1, …, J(t). Each feature-response pair (𝐱_i, y_i) is represented with respect to a tree t by (ℐ_i_t, Z_i_t), where ℐ_i_t is the set of unique feature indices falling on the path of the leaf node containing (𝐱_i, y_i) in the t-th tree. Hence, each (𝐱_i, y_i) produces T such index set and label pairs, corresponding to the T trees. We aggregate these pairs across observations and trees as

ℛ = {(ℐ_i_t, Z_i_t) : 𝐱_i falls in leaf node i_t of tree t}

and apply RIT on this transformed dataset ℛ to obtain a set of interactions.

We now describe the three components of iRF; a code sketch of the path-extraction step follows the list. A depiction is shown in Fig. <ref> and the complete workflow is presented in SI S1 Algorithm 2. We remark on the algorithm further in SI S2.

1. Iteratively re-weighted RF: Given an iteration number K, iRF iteratively grows K feature-weighted RFs RF(w^(k)), k = 1, …, K, on the data 𝒟. The first iteration of iRF (k=1) starts with w^(1) := (1/p, …, 1/p) and stores the importances (mean decrease in Gini impurity) of the p features as v^(1) = (v^(1)_1, …, v^(1)_p). For iterations k = 2, …, K, we set w^(k) = v^(k-1) and grow a weighted RF with weights set equal to the RF feature importances from the previous iteration. Iterative approaches for fitting RFs have been previously proposed in <cit.> and combined with hard thresholding to select features in microarray data.

2. Generalized RIT (through RF(w^(K))): We apply generalized RIT to the last feature-weighted RF grown in iteration K. That is, decision rules generated in the process of fitting RF(w^(K)) provide the mapping from continuous or categorical to binary features required for RIT. This process produces a collection of interactions 𝒮.
3. Bagged stability scores: In addition to the bootstrap sampling in the weighted RF, we use an “outer layer” of bootstrapping to assess the stability of recovered interactions. We generate bootstrap samples of the data 𝒟_(b), b = 1, …, B, fit RF(w^(K)) on each bootstrap sample 𝒟_(b), and use generalized RIT to identify interactions 𝒮_(b) across each bootstrap sample. We define the stability score of an interaction S ∈ ⋃_b=1^B 𝒮_(b) as

sta(S) = 1/B · ∑_b=1^B 1{S ∈ 𝒮_(b)},

representing the proportion of times (out of B bootstrap samples) an interaction appears as an output of RIT. This averaging step is exactly the bagging idea of Breiman <cit.>.
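To illustrate the generalized RIT input referred to in step 2, the following sketch, assuming scikit-learn in Python (the paper's own implementation is the R package iRF; feature weighting and the RIT step itself are omitted here), fits a plain RF and builds the set ℛ of (index set, predicted label) pairs by reading each tree's decision paths.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def leaf_feature_sets(tree):
    """Map each leaf id to the set of feature indices on its decision path."""
    t = tree.tree_
    out = {}
    def walk(node, path):
        if t.children_left[node] == -1:          # leaf node
            out[node] = frozenset(path)
            return
        path = path | {int(t.feature[node])}
        walk(t.children_left[node], path)
        walk(t.children_right[node], path)
    walk(0, frozenset())
    return out

# One (index set, predicted class) pair per training point and per tree;
# these pairs form the input R to generalized RIT.
R = []
for est in rf.estimators_:
    feats = leaf_feature_sets(est)
    leaves = est.apply(X)                        # leaf id of each observation
    preds = est.predict(X)
    R += [(feats[l], int(z)) for l, z in zip(leaves, preds)]
print(len(R), R[0])
```

Replicating each leaf's pair once per observation it contains reproduces the node-sampling convention discussed in SI S2: rule-response pairs effectively enter RIT with probability proportional to leaf size.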
§.§ iRF tuning parameters

The iRF algorithm inherits tuning parameters from its two base algorithms, RF and RIT. The predictive performance of RF is known to be highly resistant to the choice of parameters <cit.>, so we use the default parameters in the package. Specifically, we set the number of trees to 500, sample √(p) variables at each node, and grow trees to purity. For the RIT algorithm, we use the basic version (Algorithm 1 of <cit.>) and grow M = 500 intersection trees of depth D = 5 with n_child = 2, which empirically leads to a good balance between computation time and the quality of recovered interactions. We find that both the prediction accuracy and the interaction recovery of iRF are fairly robust to these parameter choices (SI S2.5).

In addition to the tuning parameters of RF and RIT, the iRF workflow introduces two additional tuning parameters: (i) the number of bootstrap samples B and (ii) the number of iterations K. Larger values of B provide a more precise description of the uncertainty associated with each interaction, at the expense of increased computational cost. In our simulations and case studies we set B ∈ (10, 30) and find that results are qualitatively similar in this range. The number of iterations controls the degree of regularization on the fitted RF. We find that the quality of recovered interactions can improve dramatically for K > 1 (SI S5). In sections <ref> and <ref>, we report interactions with K selected by 5-fold cross-validation.

§ SIMULATION EXPERIMENTS

We developed and tested iRF through extensive simulation studies based on biologically inspired generative models using both synthetic and real data (SI S5). In particular, we generated responses using Boolean rules intended to reflect the stereospecific nature of interactions among biomolecules <cit.>. In total, we considered 7 generative models built from AND, OR, and XOR rules, with numbers of observations and features ranging from 100 to 5000 and from 50 to 2500, respectively. We introduced noise into our models both by randomly swapping response labels for up to 30% of observations and through RF-derived rules learned on held-out data.

We find that the predictive performance of iRF (K > 1) is generally comparable with RF (K = 1). However, iRF recovers the full data-generating rule, up to an order-8 interaction in our simulations, as the most stable interaction in many settings where RF rarely recovers interactions of order > 2. The computational complexity of recovering these interactions is substantially lower than that of competing methods that search for interactions incrementally (SI S6; SI Fig. S18).

Our experiments suggest that iterative re-weighting encourages iRF to use a stable set of features on decision paths (SI Fig. S9). Specifically, features that are identified as important in early iterations tend to be selected among the first several splits in later iterations (SI Fig. S10). This allows iRF to generate partitions of the feature space where marginally unimportant, active features become conditionally important, and thus more likely to be selected on decision paths. For a full description of the simulations and results, see SI S5.

§ CASE STUDY I: ENHANCER ELEMENTS IN DROSOPHILA

Development and function in multicellular organisms rely on precisely regulated spatio-temporal gene expression. Enhancers play a critical role in this process by coordinating combinatorial TF binding, whose integrated activity leads to patterned gene expression during embryogenesis <cit.>. In the early Drosophila embryo, a small cohort of ∼40 TFs drives patterning (for a review see <cit.>), providing a well-studied, simplified model system in which to investigate the relationship between TF binding and enhancer activities. Extensive work has resulted in genome-wide, quantitative maps of DNA occupancy for 23 TFs <cit.> and 13 histone modifications <cit.>, as well as labels of enhancer status for 7809 genomic sequences in blastoderm (stage 5) Drosophila embryos <cit.>. See SI S3 for descriptions of data collection and preprocessing.

To investigate the relationship between enhancers, TF binding, and chromatin state, we used iRF to predict enhancer status for each of the genomic sequences (3912 training, 3897 test). We achieved an area under the precision-recall curve (AUC-PR) on the held-out test data of 0.5 for K=5 (Fig. <ref>A). This corresponds to a Matthews correlation coefficient (MCC) of 0.43 (positive predictive value (PPV) of 0.71) when predicted probabilities are thresholded to maximize MCC in the training data.

Fig. <ref>B reports stability scores of recovered interactions for K=5. We note that the data analyzed are whole-embryo, and interactions found by iRF do not necessarily represent physical complexes. However, for the well-studied case of pairwise TF interactions, 80% of our findings with stability score > 0.5 have been previously reported as physical (Table S1). For instance, interactions among the gap proteins Giant (Gt), Krüppel (Kr), and Hunchback (Hb), some of the most well-characterized interactions in the early Drosophila embryo <cit.>, are all highly stable (sta(Gt-Kr)=1.0, sta(Gt-Hb)=0.93, sta(Hb-Kr)=0.73). Physical evidence supporting high-order mechanisms is a frontier of experimental research and hence limited, but our excellent pairwise results give us hope that the high-order interactions we identify as stable have a good chance of being confirmed by follow-up work.

iRF also identified several high-order interactions surrounding the early regulatory factor Zelda (Zld) (sta(Zld-Gt-Twi)=1.0, sta(Zld-Gt-Kr)=0.7). Zld has been previously shown to play an essential role during the maternal-zygotic transition <cit.>, and there is evidence to suggest that Zld facilitates binding to regulatory elements <cit.>. We find that Zld binding in isolation rarely drives enhancer activity, but in the presence of other TFs, particularly the anterior-posterior (AP) patterning factors Gt and Kr, it is highly likely to induce transcription.
This generalizes the dependence of Bicoid-induced transcription on Zld binding to several of the AP factors <cit.>, and is broadly consistent with the idea that Zld is a potentiating, rather than an activating, factor <cit.>.

More broadly, response surfaces associated with stable high-order interactions indicate AND-like rules (Fig. <ref>C). In other words, the proportion of active enhancers is substantially higher for sequences where all TFs are sufficiently bound, compared with sequences where only some of the TFs exhibit high levels of occupancy. Fig. <ref>C demonstrates a putative third-order interaction found by iRF (sta(Kr-Gt-Zld)=0.7). On the left, the Gt-Zld response surface is plotted using only sequences for which Kr occupancy is lower than the median Kr level, and the proportion of active enhancers is uniformly low (< 10%). The response surface on the right is plotted using only sequences where Kr occupancy is higher than the median Kr level, and shows that the proportion of active elements is as high as 60% when both Zld and Gt are sufficiently bound. This points to an order-3 AND rule, where all three proteins are required for enhancer activation in a subset of sequences. In Fig. <ref>D, we show the subset of sequences that correspond to this AND rule (highlighted in red) using a superheat map <cit.>, which juxtaposes two separately clustered heatmaps corresponding to active and inactive elements. Note that the response surfaces are drawn using held-out test data to illustrate the generalizability of interactions detected by iRF. While overlapping patterns of TF binding have been previously reported <cit.>, to the best of our knowledge this is the first report of an AND-like response surface for enhancer activation. Third-order interactions have been studied in only a handful of enhancer elements, most notably eve stripe 2 (for a review see <cit.>), and our results indicate that they are broadly important for the establishment of early zygotic transcription, and therefore body patterning.

§ CASE STUDY II: ALTERNATIVE SPLICING IN A HUMAN-DERIVED CELL LINE

In eukaryotes, alternative splicing of primary mRNA transcripts is a highly regulated process in which multiple distinct mRNAs are produced by the same gene. In the case of messenger RNAs (mRNAs), the result of this process is the diversification of the proteome, and hence of the library of functional molecules in cells. The activity of the spliceosome, the ribonucleoprotein responsible for most splicing in eukaryotic genomes, is driven by complex, cell-type-specific interactions with cohorts of RNA binding proteins (RBPs) <cit.>, suggesting that high-order interactions play an important role in the regulation of alternative splicing. However, our understanding of this system derives from decades of study in genetics, biochemistry, and structural biology. Learning interactions directly from genomics data has the potential to accelerate our pace of discovery in the study of co- and post-transcriptional gene regulation.

Studies, initially in model organisms, have revealed that the chromatin mark H3K36me3, the DNA binding protein CTCF, and a few other factors all play splice-enhancing roles <cit.>.
However, the extent to which chromatin state and DNA binding factors interact en masse to modulate co-transcriptional splicing remains unknown <cit.>.

To identify interactions that form the basis of chromatin-mediated splicing, we used iRF to predict thresholded splicing rates for 23823 exons (RNA-seq percent-spliced-in (PSI) values <cit.>; 11911 train, 11912 test) from ChIP-seq assays measuring enrichment of chromatin marks and TF binding events (253 ChIP assays on 107 unique transcription factors and 11 histone modifications). Preprocessing methods are described in SI S3.

In this prediction problem, we achieved an AUC-PR on the held-out test data of 0.51 for K=2 (Fig. <ref>A). This corresponds to an MCC of 0.47 (PPV 0.72) on held-out test data when predicted probabilities are thresholded to maximize MCC in the training data. Fig. <ref>B reports stability scores of recovered interactions for K=2. We find interactions involving H3K36me3, a number of novel interactions involving other chromatin marks, and post-translationally modified states of RNA Pol II. In particular, we find that the impact of serine 2 phosphorylation of Pol II appears highly dependent on local chromatin state. Remarkably, iRF identified an order-6 interaction surrounding H3K36me3 and S2 phospho-Pol II (stability score 0.5, Fig. <ref>B,C), along with two highly stable order-5 subsets of this interaction (stability scores 1.0). A subset of highly spliced exons, highlighted in red, is enriched for all 6 of these elements, indicating a potential AND-type rule related to splicing events (Fig. <ref>C). This observation is consistent with, and offers a quantitative model for, the previously reported predominance of co-transcriptional splicing in this cell line <cit.>. We note that the search space of order-6 interactions is > 10^11, and that this interaction is discovered with an order-zero increase over the computational cost of finding important features using RF. Recovering such interactions without exponential speed penalties represents a substantial advantage over previous methods and positions our approach uniquely for the discovery of complex, nonlinear interactions.

§ DISCUSSION

Systems governed by nonlinear interactions are ubiquitous in biology. We developed a predictive and stable method, iRF, for learning such feature interactions. iRF identified known and novel interactions in early zygotic enhancer activation in the Drosophila embryo, and posited new high-order interactions in splicing regulation for a human-derived system.

Validation and assessment of complex interactions in biological systems is necessary and challenging, but new wet-lab tools are becoming available for targeted genome and epigenome engineering. For instance, the CRISPR system has been adapted for targeted manipulation of post-translational modifications to histones <cit.>. This may allow for tests to determine whether modifications to distinct residues at multivalent nucleosomes function in a non-additive fashion in splicing regulation. Several of the histone marks that appear in the interactions we report, including H3K36me3 and H4K20me1, have been previously identified <cit.> as essential for establishing splicing patterns in the early embryo. Our findings point to direct interactions between these two distinct marks. This observation generates interesting questions: What proteins, if any, mediate these dependencies? What is the role of phospho-S2 Pol II in the interaction?
Proteomics on ChIP samples may help reveal the complete set of factors involved in these processes, and new assays such as Co-ChIP may enable the mapping of multiple histone marks at single-nucleosome resolution <cit.>.

We have offered evidence that iRF constitutes a useful tool for generating new hypotheses from the study of high-throughput genomics data, but many challenges await. iRF currently handles data heterogeneity only implicitly, and the order of detectable interactions depends directly on the depth of the tree, which is on the order of log_2(n). We are currently investigating local importance measures to explicitly relate discovered interactions to specific observations. This strategy has the potential to further localize feature selection and improve the interpretability of discovered rules. Additionally, iRF does not distinguish between interaction forms, for instance additive versus non-additive. We are exploring tests of rule structure to provide better insights into the precise form of rule-response relationships.

To date, machine learning has been driven largely by the need for accurate prediction. Leveraging machine learning algorithms for scientific insights into the mechanics that underlie natural and artificial systems will require an understanding of why prediction is possible. The Stability Principle, which asserts that statistical results should at a minimum be reproducible across reasonable data and model perturbations, has been advocated in <cit.> as a second consideration for working towards understanding and interpretability in science. Iterative and data-adaptive regularization procedures such as iRF are based on prediction and stability, and have the potential to be widely adaptable to diverse algorithmic and computational architectures, improving interpretability and informativeness by increasing the stability of learners.

§ ACKNOWLEDGMENTS

This research was supported in part by grants NHGRI U01HG007031, ARO W911NF1710005, ONR N00014-16-1-2664, DOE DE-AC02-05CH11231, NHGRI R00 HG006698, DOE (SBIR/STTR) Award DE-SC0017069, DOE DE-AC02-05CH11231, and NSF DMS-1613002. We thank the Center for Science of Information (CSoI), a US NSF Science and Technology Center, under grant agreement CCF-0939370. Research reported in this publication was supported by the National Library of Medicine of the NIH under Award Number T32LM012417. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. BY acknowledges support from the Miller Institute for her Miller Professorship in 2016-2017. SB acknowledges the support of UC Berkeley and LBNL, where he conducted most of his work on this paper as a postdoc. We thank P. Bickel and S. Shrotriya for helpful discussions and comments, T. Arbel for preparing the Drosophila dataset, and S. Celniker for help vetting the Drosophila data and for consultation on TF interactions.

iterative Random Forests to discover predictive and stable high-order interactions: Supporting Information Appendix

Sumanta Basu, Karl Kumbier, James B. Brown, and Bin Yu

§ ALGORITHMS

The basic versions of the Random Intersection Trees (RIT) and iterative Random Forests (iRF) algorithms are presented below. For a complete description of RIT, including analysis of computational complexity and theoretical guarantees, we refer readers to the original paper <cit.>.
For a full description of iRF, we refer readers to Section 2.

§ REMARKS ON IRF

§.§ Iterative re-weighting

Generalized RIT can be used with any Random Forest (RF) method, weighted or not. We find that iterative re-weighting acts as a soft dimension reduction step by encouraging RF to select a stable set of features on decision paths. This leads to improved recovery of high-order interactions in our numerical simulations and in real data settings. For instance, without feature re-weighting (k = 1) iRF rarely recovers interactions of order > 2 in our simulations. Feature re-weighting (k > 1) allows iRF to identify order-8 data-generating rules as highly stable interactions for comparable parameter settings. In the enhancer case study, iRF (k=5) recovers 9 order-3 interactions with stability score > 0.5. Without iterative re-weighting, iRF (k=1) does not recover any order-3 interactions with stability score > 0.5. The fourth iteration of iRF also recovers many additional order-3, order-4, and order-5 interactions with lower stability scores that are not recovered in the first iteration. Although it is unclear without experimental follow-up which of these high-order interactions represent true biological mechanisms, our simulation based on the enhancer data suggests that the overall quality of recovered interactions improves with iteration (Figure <ref>).

Iterative re-weighting can be viewed as a form of regularization on the base RF learner, since it restricts, in a probabilistic manner, the form of functions RF is allowed to fit. In particular, we find that iterative re-weighting reduces the dimensionality of the feature space without removing marginally unimportant features that participate in high-order interactions (Figure <ref>). Moreover, we find that iteratively re-weighted and unweighted RFs achieve similar predictive accuracy on held-out test data. We note that other forms of regularization, such as <cit.>, may also lead to improved interaction recovery, though we do not explore them in this paper.

§.§ Generalized RIT

The RIT algorithm could be generalized through any approach that selects active features from continuous or categorical data. However, the feature selection procedure affects the recovered interactions and is thus an important consideration in generalizing RIT to continuous or categorical features. There are several reasons we use an RF-based approach. First, RFs are empirically successful predictive algorithms that provide a principled, data-driven procedure to select active features specific to each observation. Second, the randomness inherent to tree ensembles offers a natural way to generate multiple active index sets for each observation 𝐱_i, making the representations more robust to small data perturbations. Finally, our approach allows us to interpret, in a computationally efficient manner given by RIT, the complex, high-order relationships that drive the impressive predictive accuracy of RFs, granting new insights into this widely used class of algorithms.

§.§ Node sampling

In the generalized RIT step of iRF, we represent each observation i = 1, …, n by T rule-response pairs, determined by the leaf nodes containing observation i in each tree t = 1, …, T of an RF. We accomplish this by replicating each rule-response pair (ℐ_j_t, Z_j_t) in tree t according to the number of observations in the corresponding leaf node. We view this as a natural representation of the observations in 𝒟, made more robust to sampling perturbations through rules derived from bootstrap samples of 𝒟.
Our representation is equivalent to sampling rule-response pairs (ℐ_j_t, Z_j_t) in RIT with probability proportional to the number of observations that fall in the leaf node. However, one could sample or select a subset of leaf nodes based on other properties, such as homogeneity and/or predictive accuracy. We are exploring how different sampling strategies impact recovered interactions in our ongoing work.

§.§ Bagged stability scores

iRF uses two layers of bootstrap sampling. The “inner” layer takes place when growing the weighted RF. By drawing a separate bootstrap sample from the input data before growing each tree, we can learn multiple binary representations of each observation 𝐱_i that are more robust to small data perturbations. The “outer” layer of bootstrap sampling is used in the final iteration of iRF. Growing RF(w^(K)) on different bootstrap samples allows us to assess the stability, or uncertainty, associated with the recovered interactions.

§.§ Relation to AdaBoost

In his original paper on RF <cit.>, Breiman conjectured that in the later stages of iteration, AdaBoost <cit.> emulates RF. iRF inherits this property and, in addition, shrinks the feature space towards more informative features. As pointed out by a reviewer, there is an interesting connection between AdaBoost and iRF. Namely, AdaBoost improves on the least reliable part of the data space, while iRF zooms in on the most reliable part of the feature space. This is primarily motivated by the goals of the two learners: AdaBoost's primary goal is prediction, whereas iRF's primary goal is to select features or combinations of features while retaining predictive power. We envision that zooming in on both the data and feature space simultaneously may harness the strengths of both learners. As mentioned in the conclusion, we are exploring this direction through local feature importance.

§.§ Sensitivity to tuning parameters

The predictive performance of RF is known to be highly robust to the choice of tuning parameters <cit.>. To test iRF's sensitivity to tuning parameters, we investigated the stability of both prediction accuracy (AUC-PR) and interaction recovery across a range of parameter settings. Results are reported for both the enhancer and splicing datasets presented in our case studies.

The prediction accuracy of iRF is controlled through both the RF parameters and the number of iterations. Figures <ref> and <ref> report 5-fold cross-validation prediction accuracy as a function of the number of iterations (k), the number of trees in the RF ensemble (ntree), and the number of variables considered at each split (mtry). We do not consider tree depth as a tuning parameter, since deep decision trees (e.g. grown to purity) are precisely what allows iRF to identify high-order interactions. Aside from iteration k=1 in the splicing data, prediction accuracy is highly consistent across parameter choices. For the first iteration in the splicing data, prediction accuracy increases as a function of mtry. We hypothesize that this is the result of many extraneous features, which make it less likely for important features to be among the mtry features sampled at each split. Our hypothesis is consistent with the improvement in prediction accuracy that we observe for iterations k > 1, where re-weighting allows iRF to sample important features with higher probability. This finding also suggests a potential relationship between iterative re-weighting and the RF tuning parameters.
The extent to which RF tuning parameters can be used to stabilize decision paths and allow for the recovery of high-order interactions is an interesting question for further exploration.

The interactions recovered by iRF are controlled through the RIT parameters and the number of iterations. Our simulations in Sections <ref>-<ref> extensively examine the relationship between the number of iterations and recovered interactions. Figures <ref> and <ref> report the stability scores of interactions recovered in the enhancer and splicing data as a function of the RIT parameters. In general, the stability scores of recovered interactions are highly correlated between different RIT parameter settings, indicating that our results are robust over the reported range of tuning parameters. The greatest differences in stability scores occur for low values of the depth (D) and the number of children (n_child). In particular, a subset of interactions that are highly stable for larger values of n_child are less stable with n_child = 1. In contrast, a subset of interactions that are highly stable for D = 3 are considered less stable for larger values of D. We note that the findings in our case studies are qualitatively unchanged as tuning parameters are varied. Interactions we identified as most stable under the default parameter choices remain the most stable under different parameter choices.

§.§ Regression and multiclass classification

We presented iRF in the binary classification setting, but our algorithm can be naturally extended to multiclass or continuous responses. The requirement that responses are binary is only used to select a subset of leaf nodes as input to generalized RIT. In particular, for a given class C ∈ {0,1}, iRF runs RIT over decision paths whose corresponding leaf node predictions are equal to C. In the multiclass setting, we select leaf nodes whose predicted class or classes are of interest as inputs to RIT. In the regression setting, we consider leaf nodes whose predictions fall within a range of interest as inputs to generalized RIT. This range could be determined in a domain-specific manner or by grouping responses through clustering techniques.

§.§ Grouped features and replicate assays

In many classification and regression problems with omics data, one faces the problem of drawing conclusions at an aggregated level of the features at hand. The simplest example is the presence of multiple replicates of a single assay, when there is neither a standard protocol to choose one assay over the others, nor a known strategy to aggregate the assays after normalizing them individually. Similar situations arise when the feature set contains multiple genes from a single pathway and one is only interested in learning interactions among the pathways, not among the individual genes. In linear-regression-based feature selection methods like the Lasso, grouping information among features is usually incorporated by devising suitable grouped penalties, which requires solving new optimization problems. The invariance of RF to monotone transformations of the features and the nature of the intersection operation used by RIT provide iRF with a simple and computationally efficient workaround: one uses all the unnormalized assays in the tree-growing procedure and collapses the grouped features or replicates into a “super feature” before taking random intersections, as in the sketch below. iRF then provides interaction information among these super features, which can be used to achieve further dimension reduction of the interaction search space.
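A minimal illustration of the super-feature construction, in Python, with hypothetical group labels (two replicate Zld assays, two replicate Gt assays, and so on) standing in for real replicate ChIP tracks:

```python
# group[j] gives the "super feature" (e.g. assay target) of raw feature j;
# this mapping is illustrative, not taken from the case studies.
group = {0: "Zld", 1: "Zld", 2: "Gt", 3: "Gt", 4: "Kr"}

def collapse(index_set):
    """Replace raw feature indices by their groups before intersecting."""
    return frozenset(group[j] for j in index_set)

paths = [{0, 2, 4}, {1, 3}, {0, 1, 4}]   # raw decision-path index sets
print([collapse(s) for s in paths])
# -> [{'Zld', 'Gt', 'Kr'}, {'Zld', 'Gt'}, {'Zld', 'Kr'}]
```

Because the collapse happens before the intersections, two trees that split on different replicates of the same assay still contribute to the same super-feature interaction.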
§.§ Interaction evaluation through prediction We view the task of identifying candidate high-order interactions as a step towards hypothesis generation in complex systems. An important next step will be evaluating the interactions recovered by iRF to determine whether they represent domain-relevant hypotheses. This is an interesting and challenging problem that will require subject matter knowledge of the anticipated forms of interactions. For instance, biomolecules are believed to interact in stereospecific groups <cit.> that can be represented through Boolean-type rules. Thus, tests of non-additivity may provide insight into which iRF-recovered interactions warrant further examination in biological systems. We do not consider domain-specific evaluation in this paper, but instead assess interactions through broadly applicable metrics based on both stability and predictability. We incorporated the Stability Principle <cit.> through both iterative re-weighting, which encourages iRF to use a consistent set of features along decision paths, and through bagged stability scores, which provide a metric to evaluate how consistently decision rules are used throughout an RF. Here we propose two additional validation metrics based on predictive accuracy. Conditional prediction: Our first metric evaluates a recovered interaction S⊆{1,…,p} based on the predictive accuracy of an RF that makes predictions using only leaf nodes for which all features in S fall on the decision path. Specifically, for each observation i=1,…,n we evaluate its predicted value from each tree t=1,…,T with respect to an interaction S as ŷ_i(t;S) = Z_i_t if S ⊆ ℐ_i_t, and ŷ_i(t;S) = ℙ_n(y=1) otherwise, where Z_i_t is the prediction of the leaf node containing observation i in tree t, ℐ_i_t is the index set of features falling on the decision path for this leaf node, and ℙ_n(y=1) is the empirical proportion of class 1 observations {i: y_i = 1}. We average these predictions across the tree ensemble to obtain the RF-level prediction for observation i with respect to an interaction S: ŷ_i(S) = (1/T) ∑_t=1^T ŷ_i(t;S). Predictions from equation (<ref>) can be used to evaluate predictive accuracy using any metric of interest. We report AUC-PR using predictions ŷ_i(S) for each interaction S∈𝒮 recovered by iRF. Intuitively, this metric asks whether the leaf nodes that rely on an interaction S are good predictors when all other leaf nodes make a best-case random guess.
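In code, the conditional prediction ŷ_i(S) can be computed directly from the formula above. The sketch below assumes a simplified representation of the forest: per-tree vectors of leaf-node predictions (leaf_pred), per-tree lists of decision-path index sets (leaf_feats), and an n × T matrix leaf_of recording the leaf that each observation falls in. These structures are illustrative and do not correspond to a specific package's internals.

# Conditional prediction with respect to an interaction S: leaf nodes whose
# decision path contains all of S contribute their own prediction; all other
# leaf nodes contribute the empirical class 1 proportion.
conditional_predict <- function(S, leaf_pred, leaf_feats, leaf_of, y) {
  base <- mean(y == 1)
  n <- nrow(leaf_of); n_tree <- ncol(leaf_of)
  yhat <- matrix(base, n, n_tree)
  for (t in seq_len(n_tree)) {
    use <- vapply(leaf_feats[[t]], function(f) all(S %in% f), logical(1))
    hit <- use[leaf_of[, t]]                  # observations in qualifying leaves
    yhat[hit, t] <- leaf_pred[[t]][leaf_of[hit, t]]
  }
  rowMeans(yhat)                              # average over the tree ensemble
}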
Permutation importance: Our second metric is inspired by Breiman's permutation-based measure of feature importance <cit.>. In the single feature case, Breiman proposed permuting each column of the data matrix individually and evaluating the change in prediction accuracy of an RF. The intuition behind this measure of importance is that if an RF's predictions are heavily influenced by a particular feature, permuting it will lead to a drop in predictive accuracy by destroying the feature/response relationship. The direct analogue in our setting would be to permute all features in a recovered interaction S and evaluate the change in predictive accuracy of iRF. However, this does not capture the notion that we expect features in an interaction to act collectively. By permuting a single feature, we destroy the interaction/response relationship for any interaction that the feature takes part in. If S contains features that are components of distinct interactions, permuting each feature in S would destroy multiple interaction/response relationships. To avoid this issue, we assess prediction accuracy using only information from the features contained in S by permuting all other features. Specifically, let X_π_S^c denote the feature matrix with all columns in S^c permuted, where S^c is the complement of S. We evaluate predictions on the permuted data X_π_S^c, and use these predictions to assess accuracy with respect to a metric of interest, such as the AUC-PR. Intuitively, this metric captures the idea that if an interaction is important independently of any other features, making predictions using only this interaction should lead to improved prediction over random guessing.
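A sketch of this permutation metric, using the auc_pr helper introduced earlier and a fitted randomForest object; as above, the class labels "0" and "1" are an assumption.

# Permutation importance for an interaction S: permute all columns in the
# complement of S, then score the forest's predictions on the permuted data.
perm_importance <- function(fit, x, y, S) {
  Sc <- setdiff(seq_len(ncol(x)), S)
  x_perm <- x
  x_perm[, Sc] <- apply(x[, Sc, drop = FALSE], 2, sample)
  p <- predict(fit, x_perm, type = "prob")[, "1"]
  auc_pr(p, y)
}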
Evaluating enhancer and splicing interactions: Figures <ref> and <ref> report interactions from both the enhancer and splicing data, evaluated in terms of our predictive metrics. In the enhancer data, interactions between collections of the TFs Zld, Gt, Hb, Kr, and Twi are ranked highly, as was the case with stability scores (Figure <ref>). In the splicing data, Pol II, S2 phospho-Pol II, H3K36me3, H3K79me2, H3K9me1, and H4K20me1 consistently appear in highly ranked interactions, providing further validation of the order-6 interaction recovered using the stability score metric (Figure <ref>). While the interaction evaluation metrics yield qualitatively similar results, there is a clear difference in how they rank interactions of different orders. Conditional prediction and stability score tend to favor lower-order interactions, and permutation importance higher-order interactions. To see why this is the case, consider interactions S'⊂ S ⊆{1,…,p}. As a result of the intersection operation used by RIT, the probability (with respect to the randomness introduced by RIT) that the larger interaction S survives up to depth D will be less than or equal to the probability that S' survives up to depth D. Stability scores will reflect the difference by measuring how frequently an intersection survives across bootstrap samples. In the case of conditional prediction, the leaf nodes for which S falls on the decision path form a subset of the leaf nodes for which S' falls on the decision path. As a result, the conditional prediction with respect to S' uses more information from the forest, and thus we would generally expect it to show superior predictive accuracy. In contrast, permutation importance uses more information when making predictions with the larger interaction S, since fewer variables are permuted. Therefore, we would generally expect to see higher permutation importance scores for larger interactions. We are currently investigating approaches for normalizing these metrics to compare interactions of different orders. Together with the measure of stability, the two importance measures proposed here capture different qualitative aspects of an interaction. Conceptually, the stability measure attempts to capture the degree of uncertainty associated with an interaction by perturbing the features and responses jointly. In contrast, the importance measures based on conditional prediction and permutation are similar to effect size, i.e., they attempt to quantify the contribution of a given interaction to the overall predictive accuracy of the learner. The conditional prediction metric accomplishes this by perturbing the predicted responses, while permutation importance perturbs the features. § DATA PROCESSING §.§ Drosophila enhancers In total, 7809 genomic sequences have been evaluated for their enhancer activity <cit.> in a gold-standard, stable-integration transgenic assay. In this setting, a short genomic sequence (100-3000 nt) is placed in a reporter construct and integrated into a targeted site in the genome. The transgenic fly line is amplified; embryos are collected, fixed, and hybridized; and immunohistochemistry is performed to detect the reporter <cit.>. The resultant stained embryos are imaged to determine: a) whether or not the genomic segment is sufficient to drive transcription of the reporter construct, and b) where and when in the embryo expression is driven. For our prediction problem, sequences that drive patterned expression in blastoderm (stage 5) embryos were labeled as active elements. To form a set of features for predicting enhancer status, we computed the maximum value of normalized fold-enrichment <cit.> of ChIP-seq and ChIP-chip assays <cit.> for each genomic segment. The processed data are provided in Supporting Data 1. Our processing led to a binary classification problem with approximately 10% of genomic sequences labeled as active elements. It is important to note that the tested sequences do not represent a random sample from the genome; rather, they were chosen based on prior biological knowledge and may therefore exhibit a higher frequency of positive tests than one would expect from genomic sequences in general. We randomly divided the dataset into training and test sets of 3912 and 3897 observations respectively, with approximately equal portions of positive and negative elements, and applied iRF with B = 30 and K = 5. The tuning parameters in RF were set to the default levels of the randomForest package, and 500 Random Intersection Trees of depth 5 with n_child=2 were grown to capture candidate interactions. §.§ Alternative splicing The ENCODE consortium has collected extensive genome-wide data on both chromatin state and splicing in the human-derived erythroleukemia cell line K562 <cit.>. To identify critical interactions that form the basis of chromatin-mediated splicing, we used splicing rates (percent-spliced-in, PSI, values <cit.>) from ENCODE RNA-seq data, along with ChIP-seq assays measuring enrichment of chromatin marks and transcription factor binding events (253 ChIP assays on 107 unique transcription factors and 11 histone modifications, https://www.encodeproject.org/). A complete description of the assays, including accession numbers, is provided in Supporting Data 2. For each ChIP assay, we computed the maximum value of normalized fold-enrichment over the genomic region corresponding to each exon. This yielded a set of p = 270 features for our analysis. We took our response to be a thresholded function of the PSI values for each exon. Only internal exons with high read count (at least 100 RPKM) were used in downstream analysis. Exons with PSI above 70% were classified as frequently included (y = 1) and exons with PSI below 30% were classified as frequently excluded (y = 0). This led to a total of 23823 exons used in our analysis. The processed data are provided in Supporting Data 3. Our threshold choice resulted in ∼90% of observations belonging to class 1. To account for this imbalance, we report AUC-PR for the class 0 observations.
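The response construction just described reduces to a few lines; psi and rpkm below are hypothetical per-exon vectors of percent-spliced-in values and read counts.

# Keep internal exons with high read count and a confident splicing label,
# then threshold PSI: above 70% = frequently included, below 30% = excluded.
keep <- rpkm >= 100 & (psi > 70 | psi < 30)
y <- as.integer(psi[keep] > 70)   # 1 = frequently included, 0 = excluded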
We randomly divided the dataset into balanced training and test sets of 11911 and 11912 observations respectively, and applied iRF with B = 30 and K = 2. The tuning parameters in RF were set to the default levels of the randomForest package, and 500 binary random intersection trees of depth 5 with n_child=2 were grown to capture candidate interactions. § EVALUATING DROSOPHILA ENHANCER INTERACTIONS The Drosophila embryo is one of the most well-studied systems in developmental biology and provides a valuable test case for evaluating iRF. Decades of prior work have identified physical, pairwise TF interactions that play a critical role in regulating spatial and temporal patterning; for reviews, see <cit.> and <cit.>. We compared our results against these previously reported physical interactions to evaluate interactions found by iRF. Table <ref> indicates the 20 pairwise TF interactions we identify with stability score > 0.5, along with references that have previously reported physical interactions among each TF pair. In total, 16 (80%) of the 20 pairwise TF interactions we identify as stable have been previously reported in one of two forms: (i) one member of the pair regulates expression of the other, or (ii) joint binding of the TF pair has been associated with increased expression levels of other target genes. Interactions for which we could not find evidence supporting one of these forms are indicated as “-” in Table <ref>. We note that high-order interactions have only been studied in a small number of select cases, most notably eve stripe 2; for a review, see <cit.>. These limited cases are not sufficient to conduct a comprehensive analysis of the high-order interactions we identify using iRF. § SIMULATION EXPERIMENTS We developed iRF through extensive simulation studies based on biologically inspired generative models using both synthetic and real data. In particular, we generated responses using Boolean rules intended to reflect the stereospecific nature of interactions among biomolecules <cit.>. In this section, we examine interaction recovery and predictive accuracy of iRF in a variety of simulation settings. For all simulations in Sections <ref>-<ref>, we evaluated predictive accuracy in terms of area under the precision-recall curve (AUC-PR) for a held-out test set of 500 observations. To evaluate interaction recovery, we use three metrics that are intended to give a broad sense of the overall quality of the interactions 𝒮 recovered by iRF. For responses generated from an interaction S^*⊆{1,…,p}, we consider interactions of any order between only active features {j: j∈ S^*} to be true positives and interactions containing any non-active variable {j: j∉ S^*} to be false positives. This definition accounts for the fact that subsets of S^* are still informative of the data generating mechanism. However, it conservatively considers interactions that include any non-active features to be false positives, regardless of how many active features they contain. * Interaction AUC: We consider the area under the receiver operating characteristic (ROC) curve generated by thresholding interactions recovered by iRF at each unique stability score. This metric provides a rank-based measurement of the overall quality of iRF interaction stability scores, and takes a value of 1 whenever the complete data generating mechanism is recovered as the most stable interaction.
* Recovery rate: We define an interaction as “recovered” if it is returned in any of the B bootstrap samples (i.e., stability score > 0), or if it is a subset of any recovered interaction. This eliminates the need to select thresholds across a wide variety of parameter settings. For a given interaction order s=2,…,|S|, we calculate the proportion of the total (|S| choose s) true positive order-s interactions recovered by iRF. This metric is used to distinguish between models that recover high-order interactions at different frequencies, particularly in settings where all models recover low-order interactions. * False positive weight: Let 𝒮 = 𝒮_T ∪ 𝒮_F represent the set of interactions recovered by iRF, where 𝒮_T and 𝒮_F are the sets of recovered true and false positive interactions, respectively. For a given interaction order s=2,…,|S|, we calculate ∑_S∈𝒮_F:|S|=s sta(S) / ∑_S∈𝒮:|S|=s sta(S). This metric measures the aggregate weight of stability scores for false positive order-s interactions, S∈𝒮_F:|S|=s, relative to all recovered order-s interactions, S∈𝒮:|S|=s. It also includes all recovered interactions (stability score > 0), eliminating the need to select thresholds. It can be thought of as the weighted analogue of the false discovery proportion. §.§ Simulation 1: Boolean rules Our first set of simulations demonstrates the benefit of iterative re-weighting for a variety of Boolean-type rules. We sampled features 𝐱=(x_1, …, x_50) from independent, standard Cauchy distributions to reflect heavy-tailed data, and generated the binary responses from three rule settings (OR, AND, and XOR) as y^(OR) = 1[x_1 > t_OR | x_2 > t_OR | x_3 > t_OR | x_4 > t_OR], y^(AND) = ∏_i=1^4 1[x_i > t_AND], y^(XOR) = 1[∑_i=1^4 1(x_i > t_XOR) ≡ 1 (mod 2)]. We injected noise into these responses by swapping the labels for 20% of the observations, selected at random. From a modeling perspective, the rules in equations (<ref>), (<ref>), and (<ref>) give rise to non-additive main effects that can be represented as an order-4 interaction between the active features x_1, x_2, x_3, and x_4. Inactive features x_5, …, x_50 provide an additional form of noise that allowed us to assess the performance of iRF in the presence of extraneous features. For the AND and OR models, we set t_OR = 3.2 and t_AND = -1 to ensure reasonable class balance (∼1/3 class 1 observations) and trained on samples of size 100, 200, …, 500 observations. We set t_XOR = 1 both for class balance (∼1/2 class 1 observations) and to ensure that some active features were marginally important relative to inactive features. At this threshold, the XOR interaction is more difficult to recover than the others due to the weaker marginal associations between active features and the response. To evaluate the full range of performance for the XOR model, we trained on larger samples of size 200, 400, …, 1000 observations. We report the prediction accuracy and interaction recovery for iterations k∈{1,2,…,5} of iRF over 20 replicates drawn from the above generative models. The RF tuning parameters were set to the default levels for the randomForest package <cit.>, M = 100 RITs of depth 5 were grown with n_child=2, and B = 20 bootstrap replicates were taken to determine the stability scores of recovered interactions.
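For concreteness, the three generative models can be simulated as follows; the thresholds match those given above.

n <- 500; p <- 50
x <- matrix(rcauchy(n * p), n, p)                      # heavy-tailed features

y_or  <- as.integer(apply(x[, 1:4] > 3.2, 1, any))     # OR rule
y_and <- as.integer(apply(x[, 1:4] > -1, 1, all))      # AND rule
y_xor <- as.integer(rowSums(x[, 1:4] > 1) %% 2 == 1)   # XOR rule

# Label noise: swap responses for 20% of randomly chosen observations.
swap <- sample(n, size = round(0.2 * n))
y_or[swap] <- 1 - y_or[swap]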
Figure <ref>A shows the prediction accuracy of iRF (AUC-PR), evaluated on held-out test data, for each generative model and a selected subset of training sample sizes as a function of iteration number (k). iRF achieves comparable or better predictive performance for increasing k, with the most dramatic improvement in the XOR model. It is important to note that only 4 out of the 50 features are used to generate responses in equations (<ref>), (<ref>), and (<ref>). Iterative re-weighting restricts the form of functions fitted by RF and may hurt predictive performance when the generative model is not sparse. Figure <ref>B shows interaction AUC by generative model, iteration number, and training sample size, demonstrating that iRF (k>1) tends to rank true interactions higher with respect to stability score than RF (k=1). Figure <ref>C breaks down recovery by interaction order, showing the proportion of order-s interactions recovered across any bootstrap sample (stability score > 0), averaged over 20 replicates. For each of the generative models, RF (k=1) never recovers the true order-4 interaction, while iRF (k=4,5) always identifies it as the most stable order-4 interaction given enough training observations. The improvement in interaction recovery with iteration is accompanied by an increase in the stability scores of false positive interactions (Figure <ref>D). We find that this increase is generally due to many false interactions with low stability scores, as opposed to a few false interactions with high stability scores. As a result, true positives can be easily distinguished through stability score ranking (Figure <ref>B). These findings support the idea that iterative re-weighting allows iRF to recover high-order interactions without limiting predictive performance. In particular, improved interaction recovery with iteration indicates that iterative re-weighting stabilizes decision paths, leading to more interpretable models. We note that a principled approach for selecting the total number of iterations K can be formulated in terms of estimation stability with cross validation (ESCV) <cit.>, which would balance trade-offs between interpretability and predictive accuracy. §.§ Simulation 2: marginal importance Section <ref> demonstrates that iterative re-weighting improves the recovery of high-order interactions. The following simulations develop an intuition for how iRF constructs high-order interactions, and under what conditions the algorithm fails. In particular, the simulations demonstrate that iterative re-weighting allows iRF to select marginally important active features earlier on decision paths. This leads to more favorable partitions of the feature space, where active features that are marginally less important are more likely to be selected. We sampled features x=(x_1, …, x_100) from independent, standard Cauchy distributions, and generated the binary response y as y = 1[∑_i∈ S_XOR 1(x_i > t_XOR) ≡ 1 (mod 2)], S_XOR={1,…,8}. We set t_XOR = 2, which resulted in a mix of marginally important and unimportant active features, allowing us to study how iRF constructs interactions. For all simulations described in this section, we generated n = 5000 training observations and evaluated the fitted model on a test set of 500 held-out observations. RF parameters were set to their default values, with the exception of ntree, which was set to 200 for computational purposes. We ran iRF for k∈{1,…,5} iterations with 10 bootstrap samples and grew M = 100 RITs of depth 5 with n_child=2. Each simulation was replicated 10 times to evaluate performance stability. §.§.§ Noise level In the first simulation, we considered the effect of noise on interaction recovery to assess the underlying difficulty of the problem.
We generated responses using equation (<ref>), and swapped labels for 10%, 15%, and 20% of randomly selected responses. Figure <ref> shows performance in terms of predictive accuracy and interaction recovery for the 15% and 20% noise levels. At each noise level, increasing k leads to superior performance, though there is a substantial drop in both absolute performance and the rate of improvement over iteration for increased noise levels. The dramatic improvement in interaction recovery (Figure <ref>C) reinforces the idea that regularization is critical for recovering high-order interactions. Figure <ref> shows the distribution of iRF weights, which reflect the degree of regularization, by iteration. iRF successfully recovers the full XOR interaction in settings where there is clear separation between the distributions of active and inactive variable weights. This separation develops over several iterations, and at a noticeably slower rate for higher noise levels, indicating that further iteration may be necessary in low signal-to-noise regimes. Marginal importance and variable selection: iRF's improvement with iteration suggests that the algorithm leverages informative lower-order interactions to construct the full data generating rule through adaptive regularization. That is, by re-weighting towards some active features, iRF is more likely to produce partitions of the feature space where the remaining active variables are selected. To investigate this idea further, we examined the relationship between marginal importance and the average depth at which features are first selected across the forest. We define a variable's marginal importance as the best-case decrease in Gini impurity if it were selected as the first splitting feature. We note that this definition is different from the standard measure of RF importance (mean decrease in Gini impurity), which captures an aggregate measurement of marginal and conditional importance over an RF. We considered this particular definition to examine whether iterative re-weighting leads to more “favorable” partitions of the feature space, where marginally unimportant features are selected earlier on decision paths. Figure <ref> shows the relationship between marginal importance and feature entry depth. On average over the tree ensemble, active features enter the model earlier with further iteration, particularly in settings where iRF successfully recovers the full XOR interaction. We note that this occurs for active features with both high and low marginal importance, though the most marginally important active features enter the model earliest. This behavior supports the idea that iRF constructs high-order interactions by identifying a core set of active features and, using these, partitions the feature space in a way that marginally less important variables become conditionally important, and thus more likely to be selected.
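A direct implementation of this notion of marginal importance, approximating the best-case split search over a grid of quantiles (the exact search used here is not specified in the text), is:

gini <- function(y) { q <- mean(y); 2 * q * (1 - q) }   # binary Gini impurity

# Best-case decrease in Gini impurity if feature xj were used for the root split.
marginal_importance <- function(xj, y) {
  n <- length(y)
  cuts <- unique(quantile(xj, probs = seq(0.01, 0.99, by = 0.01)))
  dec <- vapply(cuts, function(cc) {
    L <- y[xj <= cc]; R <- y[xj > cc]
    gini(y) - (length(L) * gini(L) + length(R) * gini(R)) / n
  }, numeric(1))
  max(dec, na.rm = TRUE)
}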
§.§.§ Mixture model Our finding that iRF uses iterative re-weighting to build up interactions around marginally important features suggests that the algorithm may struggle to recover interactions in the presence of other marginally important features. To test this idea, we considered a mixture model of XOR and AND rules. A proportion π∈{0.5, 0.75, 0.9} of randomly selected observations were generated using equation (<ref>), and the remaining proportion 1-π of observations were generated as y = ∏_i∈ S_AND 1[x_i > t_AND]. We introduced noise by swapping labels for 10% of the responses selected at random, a setting where iRF easily recovers the full XOR rule, and set S_AND = {9, 10, 11, 12} and t_AND = -0.5 to ensure that the XOR and AND interactions were dominant with respect to marginal importance for π=0.9 and π=0.5, respectively. Figure <ref> shows performance in terms of predictive accuracy (A) and interaction recovery of the XOR (B) and AND (C) rules at each level of π. When one rule is clearly dominant (AND: π=0.5; XOR: π=0.9), iRF fails to recover the other (Figure <ref>B,C). This is driven by the fact that the algorithm iteratively updates feature weights using a global measure of importance, without distinguishing between features that are more important for certain observations and/or in specific regions of the feature space. One could address this with local measures of feature importance, though we do not explore the idea in this paper. In the π=0.75 setting, neither of the interactions is clearly more important, and iRF recovers subsets of both the XOR and AND interactions (Figure <ref>). While iRF may recover a larger proportion of each rule with further iteration, we note that the algorithm does not explicitly distinguish between rule types, and would do so only when different decision paths in an RF learn distinct rules. Characterizing the specific form of interactions recovered by iRF is an interesting question that we are exploring in our ongoing work. §.§.§ Correlated features In our next set of simulations, we examined the effect of correlated features on interaction recovery. Responses were generated using equation (<ref>), with features x=(x_1, …, x_100) drawn from a Cauchy distribution centered at 0 with scale matrix Σ, and with the active set S_XOR, |S_XOR|=8, sampled uniformly at random from {1,…,100}. We considered both a decaying covariance structure, Σ_ij = ρ^|i-j|, and a block covariance structure, Σ_ij = 1 if i = j, Σ_ij = ρ if i ≠ j and i, j ∈ G_l for some l, and Σ_ij = 0 otherwise, where the G_l ⊆ {1,…,p}, l=1,…,L, partition {1,…,p} into blocks of features. For the following simulations, we considered both low and high levels of feature correlation, ρ∈{0.25, 0.75}, and blocks of 10 features. Prediction accuracy and interaction recovery are fairly consistent for moderate values of ρ (Figures <ref>, <ref>), while interaction recovery degrades for larger values of ρ, particularly in the block covariance setting (Figure <ref>B,C). For instance, when ρ=0.75, iRF only recovers the full order-8 interaction at k=5, and simultaneously recovers many more false positive interactions. The drop in interaction recovery rate is greater for larger interactions due to the fact that, for increasing ρ, inactive features are more frequently selected in place of active features. These findings suggest that iRF can recover meaningful interactions in highly correlated data, but that these interactions may contain an increasing proportion of false positive features. We note that the problem of distinguishing between many highly correlated features, as in the ρ=0.75 block covariance setting, is difficult for any feature selection method. With a priori knowledge about the relationship between variables, such as whether variables represent replicate assays or components of the same pathway, one could group features as described in Section <ref>.
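A standard way to draw such correlated Cauchy features is as a multivariate t distribution with one degree of freedom, which has Cauchy margins and dependence governed by the scale matrix Σ; the paper does not state its exact sampling scheme, so the construction below is an assumption.

# Block scale matrix: unit diagonal, rho within blocks of the given size.
block_sigma <- function(p, rho, block = 10) {
  g <- rep(seq_len(p / block), each = block)
  S <- rho * outer(g, g, "==")
  diag(S) <- 1
  S
}

# Multivariate t with 1 df: divide correlated normals by an independent chi.
rcauchy_mv <- function(n, Sigma) {
  Z <- matrix(rnorm(n * ncol(Sigma)), n) %*% chol(Sigma)
  Z / sqrt(rchisq(n, df = 1))   # recycling divides each row i by its own draw
}

x <- rcauchy_mv(500, block_sigma(100, rho = 0.75))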
§.§ Simulation 3: big p Our final set of synthetic data simulations tested the performance of iRF in settings where the number of features is large relative to the number of observations. Specifically, we drew 500 independent, p-dimensional standard Cauchy features, with p∈{1000, 2500}. Responses were generated using the order-4 AND interaction from equation (<ref>), selected to reflect the form of interactions recovered in the splicing and enhancer case studies. We injected noise into the responses by swapping labels for 20% and 30% of randomly selected observations. Figures <ref> and <ref> show prediction accuracy and interaction recovery of iRF at each of the different noise levels. Prediction accuracy improves noticeably with iteration and stabilizes at the 20% noise level (Figures <ref>A, <ref>A). For k=1, iRF rarely recovers correct interactions and never recovers interactions of order > 2, while later iterations recover many true interactions (Figures <ref>C, <ref>C). These findings indicate that iterative re-weighting is particularly important in this highly sparse setting and is effectively regularizing RF fitting. Based on the results from our previous simulations, we note that the effectiveness of iterative re-weighting will be related to the form of interactions. In particular, iRF should perform worse in settings where p ≫ n and interactions have no marginally important features. §.§ Simulation 4: enhancer data To test iRF's ability to recover interactions in real data, we incorporated biologically inspired Boolean rules into the Drosophila enhancer dataset analyzed in Section 4 (see also Section <ref> for a description of the dataset). These simulations were motivated by our desire to assess iRF's ability to recover signals embedded in a noisy, non-smooth, and realistic response surface, with feature correlation and class imbalance comparable to our case studies. Specifically, we used all TF binding features from the enhancer data and embedded a 5-dimensional AND rule between Krüppel (Kr), Hunchback (Hb), Dichaete (D), Twist (Twi), and Zelda (Zld): y = 1[x_Kr > 1.25 & x_Hb > 1.25 & x_D > 1.25 & x_Twi > 1.25 & x_Zld > 75]. The active TFs and thresholds were selected to ensure that the proportion of positive responses was comparable to the true data (∼10% active elements), and the interaction type was selected to match the form of interactions recovered in both the enhancer and splicing data. In this set of simulations, we considered two types of noise. For the first, we incorporated noise by swapping labels for a randomly selected subset of 20% of active elements and an equivalent number of inactive elements. We note that this resulted in a fairly limited proportion of swapped labels among class 0 observations due to class imbalance. Our second noise setting was based on an RF/sample-splitting procedure. Specifically, we divided the data into two disjoint groups of equal size. For each group, we trained an RF and used it to predict the responses of observations in the held-out group. This process resulted in predicted class probabilities for each observation i=1,…,n. We repeated this procedure 20 times to obtain the average predicted probability that y_i=1. With a slight abuse of notation, we denote this predicted probability as π_i.
For each observation, we sampled a Bernoulli noising variable ỹ_i ∼ Bernoulli(π_i) and used these to generate a binary response for each observation: y_i = ỹ_i | 1[x_Kr > 1.25 & x_Hb > 1.25 & x_D > 1.25 & x_Twi > 1.25 & x_Zld > 75]. That is, the response for observation i was set to 1 whenever the noising variable ỹ_i or equation (<ref>) was active. This noising procedure introduced an additional ∼5% of class 1 observations beyond the ∼10% of observations that were class 1 as a result of equation (<ref>). Intuitively, this model derives its noise from rules learned by an RF. Feature interactions that are useful for classifying observations in the split data are built into the predicted class probabilities π_i. This results in an underlying noise model that is heterogeneous, composed of many “bumps” throughout the feature space. In each setting, we trained on samples of 200, 400, …, 2000 observations and tested prediction performance on the same number of observations used to train. We repeated this process 20 times to assess variability in interaction recovery and prediction accuracy. The RF tuning parameters were set to the default levels for the randomForest package, M = 100 random intersection trees of depth 5 were grown with n_child=2, and B = 20 bootstrap replicates were taken to determine the stability scores of recovered interactions. Figure <ref>A shows that different iterations of iRF achieve comparable predictive accuracy in both noise settings. When the number of training observations increases beyond 400, the overall quality of recovered interactions as measured by interaction AUC improves for iterations k>1. In some instances, there is a drop in the quality of recovered interactions for the largest values of k after the initial jump at k=2 (Figure <ref>). All iterations frequently recover true order-2 interactions, though the weighted false positive rate for order-2 interactions drops for iterations k > 1, suggesting that iterative re-weighting helps iRF filter out false positives. Iterations k > 1 of iRF recover true high-order interactions at a much greater frequency for a fixed sample size, although these iterations also recover many false high-order interactions (Figure <ref>C,D). We note that true positive interactions are consistently identified as more stable (Figure <ref>), suggesting that the large proportion of weighted false discoveries in Figure <ref>D is the result of many false positives with low stability scores. Experiment II: Iteration as regularization - bias-variance analysis. The soft dimension reduction technique can be viewed as a form of regularization on the base RF learner, since it restricts, in a probabilistic manner, the form of the functions RF is allowed to fit. To gain insight into the working of iRF as a regularizer, we conducted a bias-variance analysis of iRF predictions. Since the bias and variance components in a classification problem with 0-1 loss are not independent, we conducted the analysis on a regression problem. In this problem, we generated samples from the linear regression model Y = ∑_j=1^p X_j β_j + ϵ, with p = 100, ϵ ∼ N(0, 1), β_1 = … = β_10 = 0.5, and β_j = 0 for j > 10. We generated n = 200 training samples from the above model and report the average bias, average variance, mean-squared error (MSE), and relative mean-squared error (RMSE) of RF and iRF predictions on a held-out test set of equal size. The results, averaged over 500 replications, are reported in Figure <ref>. In each replicate, we simulated fresh errors ϵ for the training set. The average bias is estimated as the difference between the mean prediction over the 500 replicates and the expected response on the test set. The average variance is estimated as the variance of the 500 predictions for each of the n = 200 test samples. The first two plots show that the average bias of the learner decreases with iteration, while the average variance increases. The third plot shows that the overall MSE decreases with iteration, although the decrease is minimal after the first two iterations. The bias reduction effect of iRF, with its ensemble of weighted decision trees, is similar to the bias reduction effect of the adaptive lasso, as mentioned in <cit.>.
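The bias-variance estimates can be reproduced with the following sketch; the feature distribution is not stated in the text, so standard normal features are assumed, and randomForest stands in for the weighted forests used at each iRF iteration.

p <- 100; n <- 200
beta <- c(rep(0.5, 10), rep(0, p - 10))
x_test <- matrix(rnorm(n * p), n, p)
mu_test <- drop(x_test %*% beta)          # expected response on the test set

n_rep <- 500
pred <- matrix(NA, n_rep, n)
for (r in seq_len(n_rep)) {
  x <- matrix(rnorm(n * p), n, p)
  y <- drop(x %*% beta) + rnorm(n)        # fresh training errors each replicate
  pred[r, ] <- predict(randomForest(x, y), x_test)
}
bias <- colMeans(pred) - mu_test          # average bias per test point
variance <- apply(pred, 2, var)           # variance of predictions per test point
mse <- mean(bias^2 + variance)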
§ COMPUTATIONAL COST OF DETECTING HIGH-ORDER INTERACTION We used the enhancer data from our case studies to demonstrate the computational advantage of iRF for detecting high-order interactions in high-dimensional data.
Rulefit3 serves as a benchmark; it has prediction accuracy competitive with RF and also comes with a flexible framework for detecting nonlinear interactions hierarchically, using the so-called “H-statistic” <cit.>. For the moderate- to large-dimensional datasets typically encountered in omics studies, the computational complexity of seeking high-order interactions hierarchically (select marginally important features first, then look for pairwise interactions among them, and so on) increases rapidly, while the computation time of iRF grows far more slowly with dimension. We fit iRF and Rulefit3 on balanced training samples from the enhancer dataset (7809 samples, 80 features) using subsets of p randomly selected features, where p ∈{10, 20, …, 80}. We ran Rulefit3 with default parameters, generating null interaction models with 10 bootstrap samples, and looked for higher-order interactions among features whose H-statistics are at least one null standard deviation above their null average (following <cit.>). The current implementation of Rulefit3 only allows H-statistic calculation for interactions of up to order 3, so we do not assess higher-order interactions. We ran iRF with B=10 bootstrap samples, K=3 iterations, and the default RF and RIT tuning parameters. The run time (in minutes) and the AUC for different values of p, averaged over 10 replications of the experiment obtained by randomly permuting the original features in the enhancer data, are reported in Figure <ref>. The plot on the left panel shows that the runtime for Rulefit3's interaction detection increases exponentially as p increases, while the increase is linear for iRF. In a similar comparison on the splicing dataset, Rulefit3 ran for over 24 hours at p=250, while iRF took a little over an hour at p=300. We note that the current implementation of Rulefit3 uses an optimized executable program, while iRF uses a far less efficient R implementation; on the other hand, iRF was run in parallel on 10 servers, while the current implementation of Rulefit3 does not allow parallelization. The search space of Rulefit3 is restricted to all possible interactions of order 3, while iRF searches for arbitrarily high-order interactions, leveraging deep decision trees in RF. The linear vs. polynomial growth of computing time is not an optimization issue; it is merely a consequence of the exponentially growing search space of high-order interactions. In addition to the comparison with Rulefit3, we profiled memory usage of the iRF package using the splicing dataset described in Section 5 (n=11911, p=270) with B=30 and K=3. The program was run on a server using 24 cores (CPU model: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz, clock speed: 1200 MHz, operating system: Ubuntu 14.04). The profiling was done using R's built-in profiling functions. iRF completed in 26 minutes 59 seconds, with a memory consumption of 499910 Mb. § LIST OF DATASETS Scripts and data used for the case studies and simulations described in this paper are available on Zenodo at https://doi.org/10.5281/zenodo.885529. §.§.§ Scripts * script used to run iRF on the enhancer data. * script used to run iRF on the splicing data. * script used to run iRF for the Boolean generative models (Sections <ref>-<ref>). * script used to run iRF for the enhancer data simulations (Section <ref>). * script used to run the runtime analysis for iRF (Section <ref>). * script used to run the runtime analysis for Rulefit3 (Section <ref>). * package for running Rulefit3 <cit.>. The package we provide is set up for use on Linux systems.
Other versions are available through http://statweb.stanford.edu/~jhf/R_RuleFit.html. §.§.§ Datasets * Processed data for the enhancer case study (Supporting Data 1). * Description of the splicing assays, including ENCODE accession number, assay name, and assay type (Supporting Data 2). * Processed data used for the splicing case study (Supporting Data 3). * A data file containing all variables required to run the enhancer script: * 7809 × 80 feature matrix, rows corresponding to genomic regions and columns corresponding to assays. * length 7809 response vector, 1 indicating an active element. * length 3912 vector giving the indices of training observations. * length 3897 vector giving the indices of testing observations. * 80 × 2 data frame, the first column giving a unique identifier for each assay and the second column giving collapsed terms used to group replicate assays. * A data file containing all variables required to run the splicing script: * 23823 × 270 feature matrix, rows corresponding to exons and columns corresponding to assays. * length 23823 response vector, 1 indicating a highly spliced exon. * length 11911 vector giving the indices of training observations. * length 11912 vector giving the indices of testing observations. * 270 × 2 data frame, the first column giving a unique identifier for each assay and the second column giving collapsed terms used to group replicate assays. * A data file containing the RF predicted probabilities used for noising the enhancer simulation: * 7809 × 20 matrix, giving the predicted probability that each genomic element is active. These probabilities were generated using the sample-splitting procedure described in Section <ref> and used to noise the enhancer simulation.
"authors": [
"Sumanta Basu",
"Karl Kumbier",
"James B. Brown",
"Bin Yu"
],
"categories": [
"stat.ML",
"q-bio.GN"
],
"primary_category": "stat.ML",
"published": "20170626161741",
"title": "Iterative Random Forests to detect predictive and stable high-order interactions"
} |
In a Bayesian context, prior specification for inference on monotone densities is conceptually straightforward, but proving posterior convergence theorems is complicated by the fact that desirable prior concentration properties often are not satisfied. In this paper, I first develop a new prior designed specifically to satisfy an empirical version of the prior concentration property, and then I give sufficient conditions on the prior inputs such that the corresponding empirical Bayes posterior concentrates around the true monotone density at nearly the optimal minimax rate. Numerical illustrations also reveal the practical benefits of the proposed empirical Bayes approach compared to Dirichlet process mixtures. Keywords and phrases: Density estimation; empirical Bayes; Grenander estimator; mixture model; shape constraint. § INTRODUCTION Let X_1,…,X_n be iid samples from a density function f^⋆, supported on the positive half-line, assumed to be monotone non-increasing. Nonparametric inference on a monotone density has received considerable attention in the literature, dating back to <cit.>, with a wide range of applications <cit.>. Theoretical properties of estimators have been studied in <cit.>, <cit.>, and <cit.>, among others, with the behavior of the Grenander estimator at the origin being a now-classical example of inconsistency of the maximum likelihood estimator <cit.> and failure of the bootstrap <cit.>. From a Bayesian point of view, constructing a prior and corresponding posterior distribution for the monotone density is at least conceptually straightforward, thanks to the mixture representation of <cit.>; see Section <ref>. This makes it possible to construct priors for monotone densities using the standard tools, such as finite mixture models, Dirichlet processes, etc. However, theoretical analysis of the corresponding posterior distribution is complicated by the fact that, unless the support of f^⋆ is known, the usual Kullback–Leibler property <cit.> used to prove posterior convergence results may not be satisfied; in fact, it could be that the Kullback–Leibler divergence of f from f^⋆ is infinite for all f in a set of prior probability 1. Therefore, the general theorems in, e.g., <cit.> and <cit.> cannot be applied. <cit.> worked around this to show, among other things, that the Bayesian posterior distribution based on various mixture priors has concentration rate within a logarithmic factor of the minimax optimal rate, n^-1/3, with respect to the Hellinger or L_1 distance. A recent trend in the Bayesian literature is asymptotic concentration results for empirical Bayes posteriors; see, e.g., <cit.>, <cit.>, and <cit.>. These papers propose to extend the classical techniques and results to handle the case where the prior involves data in some way, e.g., through a plug-in estimator of a hyperparameter. However, given that the usual support conditions fail in the problem considered here, even with a fixed prior, it seems unlikely that these new techniques would apply to empirical Bayes monotone density estimation. <cit.>, building on <cit.> and <cit.>, recently proposed a new empirical Bayes approach, one that constructs the empirical prior specifically so that the desirable posterior concentration rate properties are achieved. In particular, the empirical prior is designed to satisfy the prior support conditions, a variation on the Kullback–Leibler property, so this approach seems ideally suited for cases, like monotone density
estimation, where satisfying the prior support condition is problematic. Here, in Section <ref>, I will construct a simple and intuitively appealing empirical prior and establish, in Sections <ref>–<ref>, that the corresponding empirical Bayes posterior concentration rate is nearly minimax optimal. Beyond these desirable theoretical properties, in Section <ref>, I will show that the proposed empirical Bayes approach has a number of practical benefits compared to the Dirichlet process mixture model, including improved computational efficiency and finite-sample performance. § AN EMPIRICAL PRIOR The starting point here is the representation in <cit.> of a monotone density as a scale mixture of uniforms, i.e., for any monotone density f, there exists a mixing distribution θ, supported on a subset of [0,∞), such that f = f_θ, where f_θ(x) = ∫_0^∞ k(x | μ) θ(dμ), and the kernel k(x | μ) = μ^-1 1(x ≤ μ) is the Unif(0, μ) density. From here, a prior for f can be defined by introducing a prior for θ and using the mapping θ ↦ f_θ. As is typical, I will model θ as a (finite) discrete distribution, i.e., θ(dμ) = ∑_s=1^S ω_s δ_μ_s(dμ). This makes f_θ a finite mixture of uniforms. For the moment, fix the number of support points S. Then the mixing distribution can be expressed as a finite-dimensional parameter θ = (ω, μ), where ω=(ω_1,…,ω_S) is the vector of mixture weights and μ=(μ_1,…,μ_S) is the corresponding vector of mixture locations. The theory will require that S=S_n be increasing with n at a suitable rate; see Section <ref>. For the remainder of this section, I will focus on specifying a prior for θ = (ω, μ), given S. The prior here will be empirical in the sense that it depends on data in a particular way. The general construction of an empirical prior in <cit.> selects an appropriate data-driven center, e.g., the prior mode. Their motivation is to replace the usual Kullback–Leibler/prior concentration property with an “empirical” version. Towards this, write the likelihood function for θ=(ω, μ), with S fixed, as L_n(θ) = ∏_i=1^n ∑_s=1^S ω_s k(X_i | μ_s). If ε_n is the target convergence rate and θ̂ is a maximizer of the likelihood L_n over a suitable set Θ_n, a sieve, then <cit.> defined Ł_n = {θ∈Θ_n: L_n(θ) ≥ e^-d n ε_n^2 L_n(θ̂)}, d > 0, which is effectively a “neighborhood” of θ̂ in Θ_n, an empirical or data-dependent version of the Kullback–Leibler neighborhood in classical Bayesian nonparametric studies <cit.>. Like in the familiar Bayesian settings, the goal is for the prior to charge Ł_n with a sufficient amount of mass; see Condition LP in Section <ref>. But the fact that Ł_n is data-dependent means that the prior must also be so, thus, an empirical prior. Here I take the prior mode equal to θ̂ = (ω̂, μ̂), a sieve maximum likelihood estimator. The particular sieve is of the form Θ_n = {θ = (ω, μ) ∈ Δ(S_n) × [t, T]^S_n}, 0 < t < T < ∞, where t=t_n and T=T_n might also depend on n; see Section <ref>. Then the empirical prior for θ=(ω,μ) I propose here is as follows: * ω and μ are independent; * ω has an S-dimensional Dirichlet distribution, Dir_S(α̂), on the simplex Δ(S), where α̂_s = 1 + c ω̂_s, s=1,…,S, and c=c_n is non-stochastic; * μ_1,…,μ_S are independent with μ_s ∼ Par(μ̂_s, δ), a Pareto distribution with scale parameter/lower bound μ̂_s and non-stochastic shape parameter δ=δ_n. It is easy to see that θ̂ = (ω̂, μ̂) is the prior mode.
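For concreteness, here is a minimal R sketch of the mixture density f_θ and a sampler from the empirical prior, centered at a given mode (ω̂, μ̂); the Dirichlet draw uses normalized gammas and the Pareto draw uses the inverse-CDF method.

# Mixture-of-uniforms density f_theta(x) = sum_s omega_s * (x <= mu_s) / mu_s.
f_theta <- function(x, omega, mu) {
  sapply(x, function(xx) sum(omega * (xx <= mu) / mu))
}

# One draw of (omega, mu) from the empirical prior.
r_prior <- function(omega_hat, mu_hat, c, delta) {
  S <- length(omega_hat)
  g <- rgamma(S, shape = 1 + c * omega_hat)   # Dirichlet via normalized gammas
  mu <- mu_hat * runif(S)^(-1 / delta)        # Pareto(mu_hat_s, delta) draws
  list(omega = g / sum(g), mu = mu)
}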
The Pareto prior for the components of μ is convenient because it is conjugate to the uniform mixture kernel. It is also important for the proofs in Section <ref> that the prior for μ_s be supported on [μ̂_s,∞), which is easily arranged with a Pareto distribution. The to-be-determined constants (c,δ) control the spread of the prior for (ω,μ) around its mode (ω̂,μ̂). To summarize, f is modeled as f_θ and an empirical prior on f is induced by specifying an empirical prior for θ and using the mapping θ ↦ f_θ. In what follows, Π_n will denote the empirical prior for θ on the sieve (<ref>) as described above. With a slight abuse of notation, I will also use Π_n to denote the corresponding empirical prior for f=f_θ; the meaning should be clear from the context. Given this prior, the corresponding posterior distribution Π^n for θ is defined as Π^n(dθ) ∝ L_n(θ) Π_n(dθ), where L_n(θ) is the likelihood function in (<ref>). Again, with a slight abuse of notation, I will also write Π^n for the empirical Bayes posterior for the monotone density f. The prior support condition alluded to above could be immediately achieved by taking the prior to be degenerate at the θ̂ that corresponds to Grenander's estimator f̂=f_θ̂, the nonparametric maximum likelihood estimator. Of course, the posterior based on this trivial empirical prior is also degenerate at f̂ and, therefore, inherits the concentration rate of Grenander's estimator. However, achieving the target rate is only a first objective. By using a non-degenerate prior, the posterior will have spread, leaving open the possibility for uncertainty quantification; see Sections <ref>–<ref>. There is a clinical version of the model that is perhaps more natural for applications. In particular, when n is large, the sieve ought to contain the θ corresponding to Grenander's estimator, so practical applications could dispense with the sieves altogether, which eliminates the need to specify S and [t,T] and to maximize the likelihood over the sieve, and instead center the prior directly on Grenander's estimator; see Section <ref>. However, establishing the concentration rate for this clinical version requires control on the mixture support size in Grenander's estimator but, to my knowledge, no such results are available in the literature. Just like in <cit.>, a reasonable conjecture is that the sieve estimator above is the same as Grenander's, in which case the clinical version is also covered by Theorems <ref>–<ref>. § POSTERIOR CONCENTRATION RATE The previous section described an empirical prior that, when combined with the likelihood via Bayes's formula, leads to a posterior distribution Π^n in (<ref>) that can be used for inference on the monotone density f. But why is this a reasonable approach? To answer this question, I will provide conditions on the prior inputs c, δ, S, t, and T such that the empirical Bayes posterior distribution Π^n for f concentrates around the true f^⋆ at nearly the optimal minimax rate. Proofs of the two theorems are given in Section <ref> and finite-sample performance of the posterior is investigated in Section <ref>.
Let d denote the Hellinger or L_1 distance. Then the optimal rate with respect to d is n^-1/3; see <cit.> and <cit.>. Under certain conditions, this rate can be achieved, within a logarithmic factor, by the nonparametric maximum likelihood estimator <cit.> and by certain nonparametric Bayesian methods <cit.>. The following theorem establishes the near-optimal concentration rate for the proposed empirical Bayes approach, in the case where f^⋆ has a bounded support. Let the true density f^⋆ be monotone non-increasing with support [0,T^⋆], with f^⋆(0) < ∞, and let ε_n = (log n)^1/3 n^-1/3 be the target rate. If the prior inputs (c, δ, S, t, T) = (c_n, δ_n, S_n, t_n, T_n) satisfy S_n ∝ ε_n^-1 = n ε_n^2 (log n)^-1, c_n ∝ n ε_n^-2, and δ_n log(T_n / t_n) ≲ log n, where T_n and δ_n are non-decreasing and t_n is non-increasing, then there exists a constant M > 0 such that the posterior distribution Π^n satisfies E_f^⋆[ Π^n({f: d(f^⋆, f) > M ε_n}) ] → 0, n →∞. The conclusion here is similar to that in Theorem 1 of <cit.>. Indeed, the concentration rate above is the same as that obtained by a suitable Dirichlet process mixture model, which is minimax optimal up to the logarithmic factor. The latter condition in (<ref>) deserves some explanation. Since the upper bound T^⋆ is finite, any sufficiently large T_n would suffice for estimating f^⋆(x) for x near T^⋆. Similarly, since a draw f from the proposed prior satisfies f(0) ≤ t_n^-1, any t_n less than f^⋆(0)^-1 > 0 would suffice for estimating f^⋆(x) for x near 0. Therefore, log(T_n/t_n) could be very slowly increasing, or even bounded, which means δ_n can grow as fast as order log n. Intuitively, slowly increasing log(T_n/t_n) indicates some certainty about the support of f^⋆, in which case a larger δ_n and, hence, a smaller Pareto variance, is reasonable. On the other hand, if the support is uncertain, one could take T_n/t_n to be polynomial in n, in which case δ_n must be bounded and, hence, the Pareto variance must be bounded away from zero. The only serious assumption on f^⋆ in Theorem <ref> is that the support is bounded. It turns out that this bounded-support condition can be replaced by a condition on the tails of f^⋆. Condition C4 in <cit.> states: there exists b, r > 0 such that f^⋆(x) ≤ e^-b x^r for all large x. The next result is analogous to Theorem 2 in <cit.>. Let the true density f^⋆ be monotone non-increasing, with f^⋆(0) < ∞, whose support is [0, ∞). Assume that f^⋆ satisfies (<ref>) for a given r, and set the target rate equal to ε_n = (log n)^1/3 + 1/r n^-1/3. Let the prior inputs be as in (<ref>), but with T_n ≳ (log n)^1/r. Then the conclusion (<ref>) of Theorem <ref> holds with the modified rate ε_n.
Note that the rate in the unbounded support case is slightly slower, by only a logarithmic factor, than in the bounded support case. Actually, Theorem <ref> can be viewed as a special case of this Theorem <ref>: if the support is bounded, then (<ref>) holds for “r=∞” and so the rate in Theorem <ref> agrees with that in Theorem <ref>. The only subtlety is that the choice of T=T_n depends on r, a feature of f^⋆, which is typically unknown in real examples. If one is willing to assume a positive lower bound r_0 on r, then T=T_n can be chosen with r=r_0. It is not known if the rate in Theorem <ref> is optimal so, even though working with a lower bound r_0 (corresponding to a larger T_n) will slow down the rate slightly, there is no practical difference compared to the rate with the true r. <cit.> does not say whether his Theorem 2 requires knowledge of the tail exponent r, but the tails of the Dirichlet process base measure generally affect the posterior concentration rates <cit.>. So if a target rate depending on r is to be achieved, then this requires some r-dependent condition on the base measure and, therefore, to check this condition, r must be known. § PROOFS §.§ General strategy <cit.> proposed a general strategy for constructing empirical priors such that the corresponding posterior distribution has the desired concentration properties. Their Theorem 1 lists three general conditions: an assumption about the approximation properties of the sieve, and two assumptions about the prior concentration, one local and one global. I will summarize these conditions in the context of iid data as being considered here. Let ε_n be the target rate. Condition S. There exists a θ^† = θ_n^† in the sieve Θ_n such that max{∫ (log f^⋆/f_θ^†) f^⋆ dx, ∫ (log f^⋆/f_θ^†)^2 f^⋆ dx} ≤ ε_n^2. Condition LP. For a given d > 0, define Ł_n as in (<ref>). Then there exists a constant C > 0 such that the empirical prior Π_n satisfies lim inf_n →∞ e^C n ε_n^2 Π_n(Ł_n) > 0, with P_f^⋆-probability 1. Condition GP. Let π_n be the density function for θ under the empirical prior. For a constant p > 1, there exists K > 0 such that ∫_Θ_n [ E_f^⋆{π_n(θ)^p}]^1/p dθ ≤ e^K n ε_n^2.
§.§ Proof of Theorem <ref>

I will begin by checking Condition LP. For fixed S, if θ=(ω, μ) denotes the mixture weights and locations, respectively, then the likelihood function L_n(θ) in (<ref>) for the discrete mixture model can be expressed as L_n(θ) = ∑_(n_1,…,n_S) ω_1^n_1⋯ω_S^n_S ∑_(s_1,…,s_n) ∏_s=1^S ∏_i: s_i = s k(X_i | μ_s) = ∑_(n_1,…,n_S) ω_1^n_1⋯ω_S^n_S ∑_(s_1,…,s_n) ∏_s=1^S μ_s^-n_s 1(μ_s ≥ X̂_s), where X̂_s = max_i: s_i=s X_i is the largest of the n_s X values in category s, relative to the partition determined by (s_1,…,s_n) with frequency table (n_1,…,n_S); the inner- and outer-most sums are over all such partitions and frequency tables, respectively. Since the prior for μ_s is supported on [μ̂_s,∞), I can lower-bound the likelihood by L_n(θ) ≥ ∑_(n_1,…,n_S) ω_1^n_1⋯ω_S^n_S ∑_(s_1,…,s_n) ∏_s=1^S μ_s^-n_s 1(μ̂_s ≥ X̂_s). The prior has ω and μ independent, and μ_1,…,μ_S independent, so 𝔼{L_n(θ)} ≥ ∑_(n_1,…,n_S) 𝔼(ω_1^n_1⋯ω_S^n_S) ∑_(s_1,…,s_n) ∏_s=1^S 𝔼(μ_s^-n_s) 1(μ̂_s ≥ X̂_s), where the expectation is with respect to the prior for θ=(ω, μ). The proof of Proposition 2 in <cit.> gives a bound for the first expectation, i.e., 𝔼(ω_1^n_1⋯ω_S^n_S) ≥ Γ(c + S)c^n/Γ(c + S + n) ω̂_1^n_1⋯ω̂_S^n_S. For the Pareto prior, Pareto(μ̂_s, δ), on μ_s, we have 𝔼(μ_s^-n_s) = ∫_μ̂_s^∞ δμ̂_s^δ/μ_s^δ + n_s +1 dμ_s = δ/(δ + n_s) μ̂_s^-n_s ≥ 1/(1 + nδ^-1) μ̂_s^-n_s. Since δ=δ_n is non-vanishing, the first term in the lower bound is at least of order n^-1. Plugging this bound back into the above expectation gives 𝔼{L_n(θ)} ≥ Γ(c + S)c^n/Γ(c + S + n) e^-S log n L_n(θ̂), for all large n. As in Proposition 2 of <cit.>, if S=S_n is of the order n ε_n^2 (log n)^-1 and c = n ε_n^-2, then there exists a constant K > 0 such that Γ(c + S)c^n/Γ(c + S + n) ≥ e^-K n ε_n^2. Similarly, for the second term, since S log n is of the order n ε_n^2, we can conclude that, for a suitable constant D > 0, 𝔼{L_n(θ)} / L_n(θ̂) > e^-D n ε_n^2, which, according to the argument in the proof of Proposition 3 in <cit.>, implies Condition LP.

Next, I check Condition GP. As a first step, we have that the density for the Dirichlet prior on ω is uniformly upper bounded by (c+S)^c+S+1/2 c^-c, which does not depend on the data. For the prior on μ_s, the density function is upper bounded by δ T^δ μ_s^-(δ + 1) 1(μ_s ≥ t), which is also free of the data. Then, for any p > 1, the integral (<ref>) is bounded by (c+S)^c+S+1/2/c^c (T/t)^δ S. With S=S_n of the order n ε_n^2 (log n)^-1 and δ log(T/t) of order log n, it follows that the second term in (<ref>) is bounded by e^A n ε_n^2. Similarly, for c = n ε_n^-2 and S=S_n of the order n ε_n^2 (log n)^-1, <cit.> showed that the first term in (<ref>) is also e^B n ε_n^2 so, altogether, the relevant integral is bounded by e^C n ε_n^2, hence Condition GP.

Finally, note that, for the case of f^⋆ with bounded support [0,T^⋆] and f^⋆(0) < ∞, Condition S on the sieve follows from Lemma 11 in <cit.>. My sieve has a lower bound t > 0 but, if it is small enough, then it does not affect Salomond's calculations since there is no benefit to having a mixture location smaller than f^⋆(0)^-1 > 0. Having checked the three conditions in Section <ref>, the conclusion of Theorem <ref> follows from the general results in <cit.>.
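The single-component Pareto computation used in checking Condition LP above is easy to sanity-check numerically. The following short Python snippet (an illustration only; the parameter values are arbitrary) compares a Monte Carlo estimate of 𝔼(μ^-m) under a Pareto(μ̂, δ) prior with the closed form δ/(δ+m) μ̂^-m derived above.

import numpy as np

rng = np.random.default_rng(0)

def pareto_inverse_moment(mu_hat, delta, m, n_mc=10**6):
    # Monte Carlo estimate of E(mu^{-m}) when mu ~ Pareto(mu_hat, delta),
    # sampled via the inverse CDF: mu = mu_hat * U^{-1/delta}.
    u = rng.random(n_mc)
    mu = mu_hat * u ** (-1.0 / delta)
    return (mu ** (-m)).mean()

mu_hat, delta, m = 1.7, 2.5, 3        # arbitrary test values
mc = pareto_inverse_moment(mu_hat, delta, m)
exact = delta / (delta + m) * mu_hat ** (-m)
print(mc, exact)                      # should agree to about three decimals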
§.§ Proof of Theorem <ref>

For a given T, possibly depending on n, write f_T^⋆ for the normalized restriction of f^⋆ to the interval [0,T], i.e., f_T^⋆(x) = f^⋆(x) / F^⋆(T), where F^⋆ is the distribution function corresponding to f^⋆. Without loss of generality, let d denote the L_1 distance. Then the triangle inequality implies that, for any density f, d(f^⋆, f) ≤ d(f^⋆, f_T^⋆) + d(f_T^⋆, f). Moreover, a simple calculation shows that d(f^⋆, f_T^⋆) = 1/2 ∫_0^∞ |f^⋆(x) - f_T^⋆(x)| dx = 1 - F^⋆(T). Under the condition (<ref>) on the density f^⋆, it is easy to check that the tail probability satisfies 1-F^⋆(T) ≲ Γ(r^-1, T^r), where Γ(s,t) is the upper incomplete gamma function, i.e., Γ(s,t) = ∫_t^∞ y^s-1 e^-y dy. From the well-known asymptotic behavior of this gamma function, it follows that if T_n ≳ (log n)^1/r, then 1-F^⋆(T_n) ≲ n^-1 ε_n, where ε_n = (log n)^1/3 + 1/r n^-1/3 as in the statement of the theorem; see, also, page 1390 in <cit.>. Therefore, for any f, d(f^⋆, f) > M ε_n ⟹ d(f^⋆, f_T^⋆) + d(f_T^⋆, f) > M ε_n ⟹ d(f_T^⋆, f) > M ε_n - d(f^⋆, f_T^⋆) = M ε_n - {1 - F^⋆(T_n)} ⟹ d(f_T^⋆, f) > (M/2) ε_n, say.

This effectively converts the problem into one with bounded support. To see this, define the two sets of densities A_n = {f: d(f^⋆, f) > M ε_n} and B_n = {f: d(f_T_n^⋆, f) > (M/2) ε_n}. Then the argument above implies that A_n ⊆ B_n which, in turn, implies Π^n(A_n) ≤ Π^n(B_n). Define the event 𝒯_n = {(x_1,…,x_n) ∈ [0,∞)^n: x_(n) ≤ T_n}, where x_(n) = max_i x_i. Based on the bound in (<ref>), it is easy to check that P_f^⋆(𝒯_n^c) = o(1). Next, write Π^n(B_n) = Π^n(B_n) 1(𝒯_n) + Π^n(B_n) 1(𝒯_n^c) ≤ Π^n(B_n) 1(𝒯_n) + 1(𝒯_n^c), so that 𝔼_f^⋆{Π^n(B_n)} ≤ 𝔼_f^⋆{Π^n(B_n) 1(𝒯_n)} + o(1). The expectation on the right-hand side can be rewritten as 𝔼_f^⋆{Π^n(B_n) 1(𝒯_n)} = 𝔼_f^⋆{Π^n(B_n) | 𝒯_n} P_f^⋆(𝒯_n). Since P_f^⋆(𝒯_n) → 1, it remains to deal with the conditional expectation. The key observation is that the conditional distribution of (X_1,…,X_n), given 𝒯_n, is iid f_T_n^⋆ and, therefore, 𝔼_f^⋆{Π^n(B_n) | 𝒯_n} = 𝔼_f_T_n^⋆{Π^n(B_n)}; hence the claim that this is effectively a bounded-support problem. Moreover, all of the work in checking Conditions LP, GP, and S in the proof of Theorem <ref> above applies here with f^⋆ replaced by f_T_n^⋆ and the modified rate. Since the general results in <cit.> do not require that the “true parameter” be fixed, we can conclude that the right-hand side of (<ref>) is o(1) and, therefore, 𝔼_f^⋆{Π^n(A_n)} ≤ {1 + o(1)} 𝔼_f_T_n^⋆{Π^n(B_n)} + o(1) → 0, as was to be shown. This completes the proof of Theorem <ref>.

§ NUMERICAL RESULTS

§.§ Computation

Here I will focus on the clinical version of the empirical prior suggested in Remark <ref>. That is, this version of the prior is centered on Grenander's estimator, which is obtained from the output provided by the function grenander in the R package “fdrtool” <cit.>; more precisely, the clinical empirical prior is centered on the θ̂=(ω̂, μ̂) for which f_θ̂ is Grenander's estimator. For moderate to large n, there ought to be no difference between this and the empirical prior in Section <ref>, but the former has two clear advantages: first, there is no need to specify S or the interval [t,T]; second, maximizing the likelihood over the sieve Θ_n is not so easy, but there is already efficient software available for computing Grenander's estimator. Since S and [t,T] are taken care of by the clinical formulation, it only remains to specify c=c_n and δ=δ_n. In what follows, I will use c = 0.01 n^5/3/(log n)^2/3 and δ = log n/20, a choice guided by the conditions in Theorems <ref>–<ref>. The simulation experiments below
suggest that this choice works well in various cases, though more work would be needed to determine if these values are “good” in any general sense; see Section <ref>. I will compare the empirical Bayes results with those from a Bayesian Dirichlet process mixture model. I take the Dirichlet process precision parameter to be 1 and the base measure to be an inverse gamma with a data-driven choice of shape and scale parameters, a and b. That is, I choose a and b so that the base measure mean and variance match the mean and variance of the data; this is to ensure that the base measure center and spread are consistent with the data. Other base measures could also be considered, such as the gamma. A Pareto base measure would be a natural choice, given its conjugacy to the uniform kernel, but this does not satisfy the tail conditions in <cit.> needed to achieve the nearly-n^-1/3 concentration rate. Conjugacy would allow for very convenient posterior sampling via the slice sampler in <cit.> and <cit.>. But for other base measures, such as the gamma and inverse gamma, rejection sampling is required in every MCMC iteration, so the computation is slower than with a Pareto base measure. Note that the proposed empirical Bayes approach can employ the computationally efficient Pareto prior since the strategic data-driven centering makes the tails irrelevant to the concentration rate properties. R codes for both the Bayes and empirical Bayes methods are available at <https://www4.stat.ncsu.edu/~rmartin/>.

§.§ Simulated data

Consider the case where f^⋆(x) = e^-x, x ≥ 0, is an exponential density. Figure <ref> plots posterior samples from the respective posterior distributions, the posterior mean density, the 95% posterior credible band, the Grenander estimator, and the truth. Both posteriors generally follow the Grenander estimator, and both have roughly the same spread but, at least in this case, the empirical Bayes posterior is less rough and better captures the shape of the truth. The credible bands for both posteriors cover the truth at most x values, but the Dirichlet process mixture misses in a few places because of its roughness. Another difference, not apparent from the plots, is that the empirical Bayes computations take about 50% less time than the Dirichlet process mixture. Next I present a more detailed investigation into the concentration properties of the two posteriors. Specifically, I will consider the coverage properties of the 95% credible bands at select values of x. That is, for a fixed x, I get posterior samples of f(x) and return the 0.025 and 0.975 quantiles as the 95% posterior credible interval. I repeat this process for 500 data sets and report the (Monte Carlo approximation of the) coverage probability and expected lengths for the two posteriors. Table <ref> reports (Monte Carlo approximations of) the coverage probability and mean lengths for the 95% credible intervals based on 500 samples from an exponential distribution, where f^⋆(x) = e^-x, x ≥ 0. Table <ref> reports the same but for a half-normal distribution, where f^⋆(x) = 2𝒩(x | 0, 1), x ≥ 0.
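The coverage computation just described amounts to a simple double loop; a minimal Python sketch follows (illustration only; the paper's actual implementation is the R function cvg.compare, and draw_posterior_f_at and sample_data are hypothetical stand-ins for the posterior sampler and the data generator).

import numpy as np

rng = np.random.default_rng(1)

def coverage_experiment(draw_posterior_f_at, sample_data, f_star, x_grid,
                        n_rep=500, n_post=2000):
    # Monte Carlo approximation of coverage probability and mean length
    # of 95% pointwise credible intervals, as described in the text.
    # draw_posterior_f_at(data, x, n_post) should return n_post posterior
    # draws of f(x) given `data`; sample_data() generates one data set.
    cover = np.zeros((n_rep, len(x_grid)))
    length = np.zeros((n_rep, len(x_grid)))
    for rep in range(n_rep):
        data = sample_data()
        for k, x in enumerate(x_grid):
            draws = draw_posterior_f_at(data, x, n_post)
            lo, hi = np.quantile(draws, [0.025, 0.975])
            cover[rep, k] = lo <= f_star(x) <= hi
            length[rep, k] = hi - lo
    return cover.mean(axis=0), length.mean(axis=0)

# Example wiring for the exponential case f*(x) = exp(-x), n = 100:
# cvg, lens = coverage_experiment(my_sampler,
#                                 lambda: rng.exponential(size=100),
#                                 lambda x: np.exp(-x),
#                                 [0.0, 0.5, 1.0, 2.0, 3.0])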
The key observation across all the different settings is that the 95% posterior credible intervals from the Dirichlet process mixture model tend to under-cover, i.e., the coverage probability is less than the nominal 0.95, sometimes much less, while those from the empirical Bayes approach tend to over-cover. However, the higher coverage of the empirical Bayes intervals is not a result of being wider on average; in fact, the empirical Bayes credible intervals tend to be shorter than the Bayesian competitor's. Coverage properties of nonparametric methods are a delicate matter and a detailed investigation is beyond the scope of this paper, but these results strongly suggest that the proposed empirical Bayes procedure—despite its “double-use” of the data—does not follow the data too closely and, as suggested in Remark <ref>, may provide valid uncertainty quantification.

The raw simulation output (produced by the R function cvg.compare; the recorded calls indicate 200 replications, 2000 posterior draws per data set, and the constants 0.01 and 20 from the choices of c and δ above) is as follows, where eb = empirical Bayes, dp = Dirichlet process, cvg = coverage, len = mean length, and len.med = median length.

Exponential, n = 100:
   x   eb.cvg  dp.cvg       eb.len      dp.len  eb.len.med  dp.len.med
 0.0    0.955   0.810   9.38103309  0.52528569  2.97308812  0.49697929
 0.5    0.945   0.985   0.37058961  0.45906381  0.36298185  0.45528183
 1.0    0.985   0.965   0.27029776  0.33849662  0.26491170  0.32528564
 2.0    0.985   0.935   0.14954669  0.18502112  0.14608297  0.17504969
 3.0    0.990   0.890   0.08600687  0.09769028  0.08383119  0.08991125

Exponential, n = 200:
   x   eb.cvg  dp.cvg       eb.len      dp.len  eb.len.med  dp.len.med
 0.0    0.870   0.565  25.10167953  0.37743097  2.74706845  0.35777078
 0.5    0.935   0.980   0.28495878  0.37091416  0.27244490  0.35947146
 1.0    0.960   0.955   0.21934588  0.28509377  0.21352195  0.27993690
 2.0    0.985   0.915   0.12006742  0.15410351  0.11800884  0.14459434
 3.0    0.970   0.840   0.06753994  0.08162171  0.06619479  0.07565278

Half-normal, n = 100:
   x   eb.cvg  dp.cvg       eb.len      dp.len  eb.len.med  dp.len.med
 0.0    0.870   0.975  15.58183940  0.46617010  2.28964769  0.44757330
 0.5    0.895   0.975   0.31302850  0.36959940  0.30649681  0.36055540
 1.0    0.970   0.960   0.29138230  0.36145090  0.28060416  0.34774879
 2.0    0.970   0.895   0.17751610  0.20537010  0.17430277  0.19529811
 3.0    0.985   0.635   0.05351070  0.04304740  0.05304542  0.03828106

Half-normal, n = 200:
   x   eb.cvg  dp.cvg       eb.len      dp.len  eb.len.med  dp.len.med
 0.0    0.835   0.990  39.13048172  0.32175141  2.33909254  0.31056913
 0.5    0.865   0.990   0.24174820  0.29383375  0.23581424  0.28729490
 1.0    0.965   0.955   0.23525560  0.30390177  0.22520438  0.29365285
 2.0    0.955   0.885   0.13719649  0.16600999  0.13554070  0.16757394
 3.0    0.950   0.715   0.04280447  0.04472443  0.04325217  0.04645635

§.§ Real data

<cit.> presents data on the lengths of psychiatric treatment undergone by n=86 patients used as controls in a study of suicide risks. Figure <ref> shows the data histogram, the posterior mean density, and the 95% credible bands for the two estimators, based on the same settings described in Section <ref>. As expected, both posterior means fit the data well, but the empirical Bayes estimate is smoother. That the empirical Bayes credible band is also a bit narrower than that of the Dirichlet process mixture should not be a concern, based on the simulation results above.
The Norwegian fire claims data is a common example in the actuarial science literature <cit.>. I consider n=820 fire loss claims exceeding 500 thousand Norwegian kroner during the year 1988. Figure <ref> shows the data, the posterior mean densities, and the 95% credible bands for the two methods. In this case, the empirical Bayes posterior mean is a bit rougher than that of the Dirichlet process mixture, but arguably fits the data histogram better. And the narrow credible band is expected since this data set is about ten times as large as that in Example <ref>.

§ CONCLUSION

This paper presents a unique approach to the specification of an empirical or data-dependent prior for nonparametric Bayesian-like inference on a density function. The chief novelty is the centering of the prior on a suitable estimator, and this is particularly advantageous in the present context where the usual Kullback–Leibler property may fail. The challenge is to choose the prior tails so that the corresponding posterior does not track the data too closely, and Theorems <ref>–<ref> provide sufficient conditions on the prior inputs to achieve the target posterior concentration rate. Whether the proposed formulation can achieve valid uncertainty quantification, e.g., in the sense of <cit.>, remains an open question, but the numerical illustrations in Section <ref> are promising.

§ ACKNOWLEDGMENT

This work is partially supported by the National Science Foundation, DMS–1737933. The author also thanks two anonymous reviewers for their helpful feedback. | http://arxiv.org/abs/1706.08567v2 | {
"authors": [
"Ryan Martin"
],
"categories": [
"math.ST",
"stat.TH"
],
"primary_category": "math.ST",
"published": "20170626191252",
"title": "Empirical priors and posterior concentration rates for a monotone density"
} |
Recurrent Residual Learning for Action Recognition

Ahsan Iqbal, Alexander Richard, Hilde Kuehne, Juergen Gall

University of Bonn, Germany

==========================================================================

Action recognition is a fundamental problem in computer vision with a lot of potential applications such as video surveillance, human computer interaction, and robot learning. Given pre-segmented videos, the task is to recognize actions happening within the videos. Historically, hand-crafted video features were used to address the task of action recognition. With the success of deep ConvNets as an image analysis method, a lot of extensions of standard ConvNets were proposed to process variable length video data. In this work, we propose a novel recurrent ConvNet architecture called recurrent residual networks to address the task of action recognition. The approach extends ResNet, a state of the art model for image classification. While the original formulation of ResNet aims at learning spatial residuals in its layers, we extend the approach by introducing recurrent connections that allow to learn a spatio-temporal residual. In contrast to fully recurrent networks, our temporal connections only allow a limited range of preceding frames to contribute to the output for the current frame, enabling efficient training and inference as well as limiting the temporal context to a reasonable local range around each frame. On a large-scale action recognition dataset, we show that our model improves over both the standard ResNet architecture and a ResNet extended by a fully recurrent layer.

§ INTRODUCTION

Action recognition in videos is an important research topic <cit.> with many potential applications such as video surveillance, human computer interaction, and robotics. Traditionally, action recognition has been addressed by hand-crafted video features in combination with classifiers like SVMs as in <cit.>. With the impressive achievements of deep convolutional networks (ConvNets) for image classification, a lot of research was devoted to extending ConvNets to process video data, however, with unsatisfying results. While ConvNets have shown to perform very well for spatial data, they perform poorly for temporal data since they fail to model temporal dependencies. Heuristics were therefore developed for modeling temporal relations. First attempts, which simply stacked the frames and used a standard ConvNet for image classification <cit.>, performed worse than hand-crafted features. More successful have been two stream architectures <cit.> that use two ConvNets. While the first network is applied to the independent frames, the second network processes the optical flow, which needs to be computed beforehand. While two stream architectures achieve lower classification error rates than hand-crafted features, they are very expensive for training and inference since they need two ConvNets and an additional approach to extract the optical flow. In this work, we propose a more principled way to integrate temporal dependencies within a ConvNet. Our model is based on the state of the art residual learning framework <cit.> for image classification, which learns a residual function with respect to the layer's input. We extend the approach to a sequence of images by having a residual network for each image and connecting them by recurrent connections that model temporal residuals.
In contrast to the two stream architecture <cit.>, which proposes residual connections from the motion to the appearance stream, our approach is a single stream architecture that directly models temporal relations within the spatial stream and does not require the additional computation of the optical flow. We evaluate our approach on the popular UCF-101 <cit.> benchmark and show that our approach reduces the error of the baseline <cit.> by 17%. Although two stream architectures, which require the computation of the optical flow, achieve a lower error rate, the proposed approach of temporal residuals could also be integrated into a two stream architecture.

§ RELATED WORK

Due to the difficulty of modeling temporal context with deep neural networks, traditional methods using hand-crafted features have been state of the art in action recognition much longer than in image classification <cit.>. The most popular approaches are dense trajectories <cit.> with a bag-of-words and SVM classification as well as improved dense trajectories <cit.> with Fisher vector encoding. Due to the success of deep architectures, first attempts in action recognition aimed at combining those traditional features with deep models. In <cit.>, for instance, a combination of hand-crafted features and recurrent neural networks has been deployed. Peng et al. <cit.> proposed stacked Fisher vectors, a video representation with multi-layer nested Fisher vector encoding. In the first layer, they densely sample large subvolumes from input videos, extract local features, and encode them using Fisher vectors. The second layer compresses the Fisher vectors of subvolumes obtained in the previous layer, and then encodes them again with Fisher vectors. Compared with standard Fisher vectors, stacked Fisher vectors allow to refine and abstract semantic information in a hierarchical way. Another hierarchical approach has been proposed in <cit.>, who apply HMAX <cit.> with pre-defined spatio-temporal filters in the first layer. Trajectory pooled deep convolutional descriptors are defined in <cit.>: CNN features are extracted from a two stream architecture and are combined with improved dense trajectories. There have also been attempts to address the task of action recognition with deep architectures directly. However, in most of these works, the input to the model is a stack of consecutive video frames and the model is expected to learn spatio-temporal features in the first few layers, which is a difficult task. In <cit.>, spatio-temporal features are learned in an unsupervised fashion by using restricted Boltzmann machines. The approach of <cit.> combines the information about objects present in the video with the motion in the videos. 3D convolution is used in <cit.> to extract discriminative spatio-temporal features from the stack of video frames. Three different approaches to fuse temporal context (early fusion, late fusion, and slow fusion) were evaluated in <cit.>. In early fusion, a technique similar to <cit.> is used to fuse temporal context early in the network; in late fusion, individual features per frame are extracted and fused in the last convolutional layer. Slow fusion mixes late and early fusion. In contrast to these methods, our method does not rely on temporal convolution but on a recurrent network architecture directly. More recently, <cit.> proposed the concept of dynamic images.
The dynamic image is based on the rank pooling concept <cit.> and is obtained through the parameters of a ranking machine that encodes the temporal evolution of the frames of the video. Dynamic images are obtained by directly applying rank pooling on the raw image pixels of a video, producing a single RGB image per video. Finally, by feeding the dynamic image to any CNN architecture for image analysis, it can be used to classify actions. The most successful approach to date is the two-stream CNN of <cit.>, where individual frames from the videos are the input to the spatial network, while motion in the form of dense optical flow is the input to the temporal network. The features learned by both networks are concatenated and finally a linear SVM is used for classification. Recently, with the success of ResNet <cit.>, <cit.> proposed a model that combines ResNet and the two stream architecture. They replace both the spatial and the temporal network in the two stream architecture by a ResNet with 50 layers. They also introduce a temporal or motion residual, i.e. a residual connection from the temporal network to the spatial network, to enable learning of spatio-temporal features. In contrast to our method, they incorporate temporal information by extending the convolutions over temporal windows. Note that this leads to a largely increased number of model parameters, whereas our approach shares the weights among all frames, keeping the network size small. <cit.> proposed the temporal segment networks, which are mainly based on the two stream architecture. However, rather than densely sampling every other frame in the video, they divide the video into segments of equal length, and then randomly sample snippets from these segments as network input. In this way, the two stream network produces segment level classification scores, which are combined to produce the video level output. Deep recurrent CNN architectures have also been explored to model dependencies across the frames. In <cit.>, convolutional features are fed into an LSTM network to model temporal dependencies. <cit.> considered four networks to address action recognition in videos. The first network is similar to the spatial network in the two stream architecture. The second network is a CNN with one recurrent layer; it expects a single optical flow image, and in the recurrent layer, optical flows over a range of frames are combined. In the third network, they feed a stack of consecutive frames; this network is also equipped with a recurrent layer to capture the long term dependencies. Similarly, the fourth network expects a stack of optical flow fields as input; however, it is equipped with a fully connected recurrent layer. Finally, boosting is used to combine the output of all four networks. Lastly, <cit.> equip a ResNet with recurrent skip connections that are, contrary to ours, purely temporal skip connections, whereas in our framework, we use spatio-temporal skip connections. Note the significant difference between the two approaches: while purely temporal skip connections can be interpreted as usual recurrent connections with unit weights, spatio-temporal skip connections are a novel concept that allows for efficient backpropagation and combines both changes in the temporal domain and changes in the spatial domain at the same time.

§ RECURRENT RESIDUAL NETWORK

In this section, we describe our approach to address the problem of action recognition in videos.
Our approach is an extension of ResNet <cit.>, which reformulates a layer as learning a spatial residual function with respect to the layer's input. State of the art results were achieved in image recognition tasks by learning spatial residual functions. We extend the approach to learn temporal residual functions across the frames in order to do action recognition in videos. In our formulation, the feature vector at time step t is a residual function with respect to the feature vector at time step t-1. Following the analogy of ResNet, temporal residuals are learned by introducing temporal skip (recurrent) connections.

§.§ ResNet

ResNet <cit.> introduces a residual learning framework. In this framework, a stack of convolutional layers fits a residual mapping instead of the desired mapping. Let H(x) denote the desired mapping. The principle of ResNet is to interpret the mapping of the learned function from one layer to another as H(x) = F(x) + x, i.e. as the original input x plus a residual function F(x). Introducing the spatial skip connection, the input signal x is directly forwarded and added to the next layer, so it only remains to learn the residual F(x) = H(x) - x, see Figure <ref>b.

§.§ Type of Temporal Skip Connection

There are multiple possibilities to model the temporal skip connection. The standard spatial skip connections in the classical ResNet architecture are either an identity mapping, i.e. they just forward the input signal and add it to the destination layer, or they perform a linear transformation in order to establish the downsampling, as depicted in Figure <ref>. The simplest case for the temporal skip connection is to also use an identity mapping. With the notation of Figure <ref>, the layer output y_t at time t is the residual function

y_t = σ(x_t * W) + x_t,

where σ represents the nonlinear operations performed after the linear transformation. Note that, for simplicity of notation, we pretend that the residual block contains a single convolutional layer only, and W represents the weights of that layer. Extending this with the temporal skip connection, we obtain

y_t = σ(x_t * W) + x_t + x_t-1.

In order to allow for a weighting of the temporal skip connection with weights W_s, a linear transformation can be applied to x_t-1 before adding it to y_t,

y_t = σ(x_t * W) + x_t + x_t-1 * W_s.

Moreover, in order to learn a nonlinear spatio-temporal mapping, this can be further extended to

y_t = σ(x_t * W) + x_t + σ(x_t-1 * W_s).

§.§ Temporal Context

While recurrent connections in traditional recurrent neural networks feed the output of a layer at time t-1 to the same layer as input at time t, our proposed spatio-temporal skip connections are different. For an illustration, see Figure <ref>. Here, we unfold a network with two spatio-temporal skip connections over time. Note that the temporal context that influences the output y_t includes x_t-2, x_t-1, and x_t, as there are paths from y_t leading to all these inputs. If we only used one temporal skip connection instead of two, the accessible temporal context for y_t would only be x_t-1 and x_t, respectively. In general, if a temporal context over T frames is desired, at least T-1 temporal skip connections are necessary. In order to use this approach for action recognition, a video is divided into M small sequences, each containing T frames.
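To make the identity-mapping variant concrete, here is a minimal sketch of such a recurrent residual block in Python/PyTorch (an illustration only, not the paper's implementation, which uses the authors' squirrel framework; a single 3×3 convolution stands in for the full residual branch, and all hyper-parameters are arbitrary choices).

import torch
import torch.nn as nn

class RecurrentResidualBlock(nn.Module):
    # Implements y_t = sigma(x_t * W) + x_t + x_{t-1}, the identity-mapping
    # temporal skip connection described above; the temporal skip adds the
    # previous frame's input without introducing extra weights.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, frames):
        # frames: list of T tensors, each of shape (batch, channels, H, W)
        outputs, prev = [], None
        for x_t in frames:
            y_t = self.relu(self.conv(x_t)) + x_t   # spatial residual
            if prev is not None:
                y_t = y_t + prev                    # temporal skip from x_{t-1}
            outputs.append(y_t)
            prev = x_t
        return outputs

The weighted variants correspond to replacing the bare `prev` by a 1×1 convolution of it (y_t = … + x_t-1 * W_s), optionally followed by the nonlinearity.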
A recurrent residual network with T-1 temporal skip connections is created to capture the dependencies over these T time steps. In training, we optimize the cross-entropy loss of each small video chunk. During inference, for each small sequence {x_t^(i)}_t=1^T within one video, the recurrent residual network computes P(y=c|{x_t^(i)}_t=1^T). In order to obtain an overall classification of a complete video, the individual output probabilities are averaged over the M subsequences of the video. Note that this is similar to existing frame-wise approaches, where an output probability per frame is computed and the overall video action probabilities are obtained by accumulating all single frame probabilities. In our case, instead of frames, we use small subsequences of the original video.

§ EXPERIMENTAL SETUP

In this section, we describe our experimental setup. We use ADAM <cit.> as the learning algorithm and, except for the baseline experiments, we update the model after observing 1% of the training data; every tenth frame from each video is sampled as input to the model. We evaluate our approach on UCF-101 <cit.>, a large-scale action recognition dataset consisting of 13,000 videos from 101 different classes. The dataset comprises about 2.5 million frames in total. All the experimental work was done using our framework squirrel [https://github.com/alexanderrichard/squirrel].

§.§ Baseline Experiments

As a baseline, we extract ImageNet <cit.> features for the individual frames in each video. The averaged individual feature vectors represent the feature vector of the complete video. Feature vectors for individual frames are extracted using a pre-trained ResNet model with 50 layers. A batch normalization layer is added after each layer to normalize the input to a layer. This way the network has in total 106 layers. We extract the ImageNet feature vectors for each frame at three different positions of the ResNet, i.e. after block4, block3, and block2 respectively, see Figure <ref>. We performed two sets of experiments on the extracted features for each block. In one set, we average the frame level feature vectors, with and without Z-normalization, and train a linear classifier. We call this model the average pooling model. In the other set, we use a recurrent neural network with 128 gated recurrent units (GRUs) in order to evaluate the performance of a classical recurrent network. We call this model the GRU model. Table <ref> shows the baseline experiments with ImageNet features. The average pooling model outperforms the model with gated recurrent units. Also, it is evident from the experiments that with more depth, the features become richer. Hence, depth plays a significant role in obtaining good classification accuracy.

§.§ Effect of type and position of the recurrent connection

In this set of experiments, we evaluate different types of temporal skip connections along with their position in our proposed model. We evaluate temporal skip connections at four different positions, i.e. at the beginning by making the first skip connection in block1 recurrent (referred to as Block1), in the middle by making the last skip connection in block2 recurrent (referred to as Block2), by making the last convolutional skip connection in block4 recurrent (referred to as Mid Block4), and finally by making the last skip connection in block4 recurrent (referred to as Block4).
Also, we evaluate the type of the recurrent connections. In these experiments, we evaluate identity mapping temporal skip connections and temporal skip connections with convolutional weights having kernels of size 1×1. Table <ref> shows that the deeper we place the temporal skip connection in the network, the better the classification accuracy. In another set of experiments, we evaluate the effect of the type of the temporal skip connection. We change the configuration of the best working setup, i.e. the one with the skip connection in block4: the connection performs either a parametrized linear or nonlinear transformation, or an identity mapping. Table <ref> shows the results achieved by the different types of connections, placed close to the output layer, as our previous analysis shows that this works best. The identity mapping connection with non-trainable weights performed best, possibly because introducing more weights into the network causes overfitting.

§.§ Effect of Temporal Context

In this set of experiments, we explore the effect of the temporal context. As discussed earlier, with more recurrent connections, the network is able to include additional temporal dependencies. We already investigated the network with one recurrent connection, which is able to include a temporal context of two frames. In these experiments, we further explore a temporal context of three frames (by introducing two temporal connections in the network) and a temporal context of five frames (by introducing four temporal skip connections in the network). Figure <ref> shows the network architecture that accommodates a temporal context of three frames. As is evident in Table <ref>, we do not gain much by including more temporal context. The accuracy improves in the case of temporal context three; however, it gets worse in the case of temporal context five. Hence, considering the training time, we consider the model with only one temporal skip connection as the best model. Note that, due to the fact that we sample every tenth frame from the video, the overall temporal range is actually ten frames. More precisely, the network learns spatio-temporal residuals between the frames x_t and x_t-10, covering a reasonable amount of local temporal progress within the video. We further evaluate our best model on all three splits of UCF-101. On average, our best model achieves an error rate of 0.198 on UCF-101 <cit.>, which is a relative improvement of 17% over the ResNet baseline, which has an error rate of 0.236.

§.§ Comparison with the state of the art

In this section, our best model (with one temporal skip connection and with sample rate 10) is compared with state of the art action recognition methods. As motion across the frames and appearance in the individual frames are two complementary aspects for action recognition, most of the state of the art methods use two different neural networks, an appearance stream and a motion stream, to extract and use appearance and motion for action recognition. The output of both networks is fused, and a simple classifier is trained to classify the videos. As our model uses the raw video frames only, rather than optical flow fields, a fair comparison of our model and the state of the art is only possible for the appearance stream. For completeness, we also compare our results with the results achieved after the outputs of the appearance and the motion streams are fused; see Table <ref>. Our model achieves better error rates than the state of the art appearance stream models. Only the fused models perform better.
Note that the works <cit.> are all two-stream architectures. The dynamic image network <cit.> is a purely appearance-based method that reduces the video to a single frame and uses a ConvNet to classify this frame. For a fair comparison, we provide the result of the dynamic image network without the combination with dense trajectories, as this would include motion features.

§ CONCLUSION

We extended the ResNet architecture to include temporal skip connections in order to model both spatial and temporal information in video. Our model performs well already with a single temporal skip connection, enabling it to infer context between two frames. Moreover, we showed that fusing temporal information at a late stage in the network is beneficial and that learning a temporal residual is superior to using a classical recurrent layer. Our method is not limited to appearance based models and can easily be extended to motion networks that have been shown to further enhance the performance on action recognition datasets. A comparison to both a ResNet baseline and state of the art methods showed that our approach outperforms other purely appearance based approaches.

Acknowledgement: The authors have been financially supported by the DFG projects KU 3396/2-1 and GA 1927/4-1 and the ERC Starting Grant ARCA (677650). Further, this work was supported by the AWS Cloud Credits for Research program. | http://arxiv.org/abs/1706.08807v1 | {
"authors": [
"Ahsan Iqbal",
"Alexander Richard",
"Hilde Kuehne",
"Juergen Gall"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170627120814",
"title": "Recurrent Residual Learning for Action Recognition"
} |
Borel–de Siebenthal pairs, Global Weyl modules and Stanley–Reisner rings

Department of Mathematics, University of California, Riverside, CA 92521 [email protected] V.C. was partially supported by DMS 1303052. Mathematisches Institut, Endenicher Allee 60, D-53115 Bonn [email protected] D.K. was partially funded under the Institutional Strategy of the University of Cologne within the German Excellence Initiative. Department of Mathematics, University of California, Riverside, CA 92521 [email protected]

We develop the theory of integrable representations for an arbitrary maximal parabolic subalgebra of an affine Lie algebra. We see that such subalgebras can be thought of as arising in a natural way from a Borel–de Siebenthal pair of semisimple Lie algebras. We see that although there are similarities with the representation theory of the standard maximal parabolic subalgebra, there are also very interesting and non–trivial differences, including the fact that there are examples of non–trivial global Weyl modules which are irreducible and finite–dimensional. We also give a presentation of the endomorphism ring of the global Weyl module; although these are no longer polynomial algebras, we see that for certain parabolics these algebras are Stanley–Reisner rings which are both Koszul and Cohen–Macaulay.

§ INTRODUCTION

The category of integrable representations of the current algebra g[t] (or equivalently the standard maximal parabolic subalgebra in an untwisted affine Lie algebra) has been intensively studied in recent years. This study has interesting combinatorial consequences and connections with the theory of Macdonald polynomials and their generalizations (see for instance <cit.>, <cit.>, <cit.>). In this paper we develop the corresponding theory for an arbitrary maximal parabolic subalgebra of an untwisted affine Lie algebra. We show that such subalgebras can be realized as the set of fixed points of a finite group action on the current algebra; in other words they are examples of equivariant map algebras as defined in <cit.>. The representation theory of equivariant map algebras has been developed in <cit.>. However much of the theory depends on the group acting freely on the underlying variety; in which case it is proved that the representation theory is essentially the same as that of the current algebra. But this is not true for the non–standard parabolics and there are many interesting and non–trivial differences in the representation theory.
Recall that two important families of integrable representations of the current algebras are the global and local Weyl modules. The global Weyl modules are indexed by dominant integral weights λ∈ P^+ and are universal objects in the category. Moreover, the ring of endomorphisms 𝐀_λ in this category is commutative. It is known through the work of <cit.> that 𝐀_λ is a polynomial algebra in a finite number of variables depending on the weight λ and that it is infinite–dimensional if λ≠ 0. The local Weyl modules are indexed by dominant integral weights and maximal ideals in the corresponding algebra 𝐀_λ and are known to be finite–dimensional. The work of <cit.> shows that the dimension of the local Weyl module depends only on the weight, and not on the choice of maximal ideal in 𝐀_λ, and so the global Weyl module is a free 𝐀_λ–module of finite rank.

In this paper we develop the theory of global and local Weyl modules for an arbitrary maximal parabolic. The modules are indexed by dominant integral weights of a semisimple Lie subalgebra g_0 of g which is of maximal rank; a particular example that we use to illustrate all our results is the pair (B_n, D_n), which is also an example of a Borel–de Siebenthal pair. We determine a presentation of 𝐀_λ and show that in general 𝐀_λ is not a polynomial algebra and that the corresponding algebraic variety is not irreducible. In fact we give necessary and sufficient conditions on λ for 𝐀_λ to be finite–dimensional (we prove that it must be of dimension 1). In particular the associated global Weyl module is finite–dimensional, and under further restrictions on λ the global Weyl module is also irreducible. We also show that under suitable conditions on the maximal parabolic the algebra 𝐀_λ is a Stanley–Reisner ring which is both Koszul and Cohen–Macaulay. Finally we study the local Weyl modules associated with a multiple of a fundamental weight. In this case 𝐀_λ is either one–dimensional or a polynomial algebra. We determine the dimension of the local Weyl modules and prove that it is independent of the choice of a maximal ideal in 𝐀_λ. This proves also that in this case the global Weyl module is a free 𝐀_λ–module of finite rank. This fact is false for general λ and we give an example of this in Section 7. However, we will show in this example that the global Weyl module is a free module for a suitable quotient algebra of 𝐀_λ, namely the coordinate ring of one of the irreducible subvarieties of 𝐀_λ.

This paper is organized as follows: In Section 2, we recall a result of Borel and de Siebenthal which realizes all maximal proper semisimple subalgebras, g_0, of maximal rank, of a fixed simple Lie algebra g as the set of fixed points of an automorphism of g. We prove some results on root systems that we will need later in the paper, and discuss the running example of the paper, which is the case where g is of type B_n and g_0 is of type D_n. In Section 3 we extend the automorphism of g to an automorphism of g[t].
We then study the corresponding equivariant map algebra, which is the set of fixed points of this automorphism. We discuss ideals of this equivariant map algebra, and show that in this case the equivariant map algebra is not isomorphic to an equivariant map algebra where the action of the group is free, which makes the representation theory quite different from that of the map algebra g[t]. We conclude the section by making the connection between these equivariant map algebras and maximal parabolic subalgebras of the affine Kac-Moody algebra. In Section 4 we develop the representation theory of g[t]^τ. Following <cit.>, we define the notion of global Weyl modules, the associated commutative algebra and the local Weyl modules associated to maximal ideals in this algebra. In the case of g[t] it was shown in <cit.> that the commutative algebra associated with a global Weyl module is a polynomial ring in finitely many variables. This is no longer true for g[t]^τ; however, in Section 5 we see that modulo the Jacobson radical, the algebra is a quotient of a finitely generated polynomial ring by a squarefree monomial ideal. By making the connection to Stanley–Reisner theory, we are able to determine the Hilbert series. In the case when mult_j^∨(α_0) = 1 we also determine the Krull dimension, and we give a sufficient condition for the commutative algebra to be Koszul and Cohen-Macaulay. In Section 6 we examine an interesting consequence of determining this presentation of the commutative algebra, which differs greatly from the case of the current algebra. More specifically, we see that under suitable conditions a global Weyl module can be finite–dimensional and irreducible, and we give necessary and sufficient conditions for this to be the case. We conclude this paper by determining the dimension of the local Weyl module in the case of our running example (B_n, D_n) for multiples of fundamental weights and a few other cases. We also discuss other features not seen in the case of the current algebra. Namely, we give an example of a weight where the dimension of the local Weyl module depends on the choice of maximal ideal in 𝐀_λ, showing that the global Weyl module is not projective and hence not a free 𝐀_λ–module.

Acknowledgements: Part of this work was done when the third author was visiting the University of Cologne. He thanks the University of Cologne for excellent working conditions. He also thanks the Fulbright U.S. Student Program, which made this collaboration possible.

§ THE LIE ALGEBRAS (g, g_0)

§.§ We denote the set of complex numbers, the set of integers, the set of non–negative integers, and the set of positive integers by ℂ, ℤ, ℤ_+ and ℕ respectively. Unless otherwise stated, all the vector spaces considered in this paper are ℂ-vector spaces and ⊗ stands for ⊗_ℂ. Given any Lie algebra a we let U(a) be the universal enveloping algebra of a. We also fix an indeterminate t and let ℂ[t] and ℂ[t,t^-1] be the corresponding polynomial ring, respectively Laurent polynomial ring, with complex coefficients.

§.§ Let g be a complex simple finite–dimensional Lie algebra of rank n with a fixed Cartan subalgebra h. Let I={1,…, n} and fix a set Δ={α_i: i∈ I} of simple roots of g with respect to h. Let R, R^+ be the corresponding set of roots and positive roots respectively. Given α∈ R let g_α be the corresponding root space and let a_i, i∈ I, be the labels of the Dynkin diagram of g; equivalently, the highest root of R^+ is θ=∑_i=1^n a_iα_i.
Fix a Chevalley basis {x^±_α, h_i : α∈ R^+, i∈ I} of g, and set x^±_i=x^±_α_i. Let ( , ) be the non–degenerate bilinear form on h^* with (θ,θ)=2 induced by the restriction of the (suitably normalized) Killing form of g to h. Let Q be the root lattice with basis α_i, i∈ I. Define mult_i: Q→ℤ, i∈ I, by requiring η=∑_i=1^n mult_i(η)α_i, and set ht(η)=∑_i=1^n mult_i(η). For α∈ R set d_α=2/(α,α), mult_i^∨(α)=mult_i(α) d_α d_α_i^-1 and h_α=∑_i=1^n mult^∨_i(α) h_i. Let W be the Weyl group of g generated by a set of simple reflections s_i, i∈ I, and fix a set of fundamental weights {ω_i: 1≤ i≤ n} for g with respect to Δ.

§.§ From now on we fix an element j∈ I with a_j≥ 2 and also fix ζ to be a primitive a_j–th root of unity. We remark that if j is such that a_j = 1 the algebras studied in this paper are known in the literature as current algebras. As we discussed in the introduction, the representation theory of current algebras is well-developed and hence we will assume from now on, usually without mention, that a_j≥ 2. The following is well–known (see for instance <cit.>). Set I(j)=I∖{j}.

The assignment x_i^±↦ x_i^±, i∈ I(j), x_j^±↦ ζ^±1 x_j^±, defines an automorphism τ: g→ g of order a_j. Moreover, the set of fixed points g_0 is a semisimple subalgebra with Cartan subalgebra h and R_0={α∈ R: mult_j(α)∈{0, ±a_j}} is the set of roots of the pair (g_0, h). The set {α_i: i∈ I(j)}∪{-θ} is a simple system for R_0.

In the case when a_j is prime, the pair (g, g_0) is an example of a semisimple Borel–de Siebenthal pair. In other words, g_0 is a maximal proper semisimple subalgebra of g of rank n. If a_j is not prime we can find a chain of semisimple subalgebras g_0 ⊂ a_1 ⊂⋯⊂ a_ℓ ⊂ g, such that the successive inclusions are Borel–de Siebenthal pairs. We shall be interested in infinite–dimensional analogues of these.

§.§ For our purposes we will need a different simple system for R_0, which we choose as follows. The subgroup of W generated by the simple reflections s_i, i∈ I(j), is the Weyl group of the semisimple Lie algebra generated by {x_i^±: i∈ I(j)}. Let w_∘ be the longest element of this group.

The set Δ_0 ={α_i: i∈ I(j)}∪{w_∘^-1θ} is a set of simple roots for (g_0, h), and the corresponding set R_0^+ of positive roots is contained in R^+.

Since w_∘ is the longest element of the Weyl group generated by s_i, i∈ I(j), it follows that for i∈ I(j), w_∘α_i∈{-α_p: p∈ I(j)}. Hence Δ_0=-w_∘^-1({α_i: i∈ I(j)}∪{-θ}). Since w_∘ is an element of the Weyl group of g_0, it follows from Proposition <ref> that Δ_0 is a simple system for R_0. Moreover, w_∘^-1θ∈ R^+ since w_∘α_j∈ R^+ and mult_j(θ)=a_j. Hence Δ_0⊂ R^+, thus proving the Lemma.

Let Q_0 be the root lattice of g_0 determined by Δ_0; clearly Q_0⊂ Q, and set Q_0^+=Q_0∩ Q^+. Then Q_0^+ is properly contained in Q^+, and we see an example of this at the end of this subsection. We isolate some immediate consequences of the Lemma which we will use repeatedly. From now on we set α_0=w_∘^-1θ, x_0^±=x^±_α_0 and h_0=h_α_0. Then,

(i) α_0 is a long root.
(ii) (α_0,α_i)≤ 0 if i∈ I(j), and since α_0∈ R^+ it follows that (α_0,α_j)>0.
(iii) mult_j(α_0)=a_j.
(iv) If α∈ R_0^+ is such that mult_j(α)=a_j and α≠α_0, then α>α_0.

Consider the example of the Borel-de Siebenthal pair (B_n, D_n), so j=n. Recall that the positive roots of B_n are of the form α_r,s:=α_r+⋯+α_s, α̃_r,s:=α_r+⋯+α_s-1+2α_s+⋯+2α_n. Moreover, θ = α̃_1,2 and so a_n = 2. In this case, g_0 is of type D_n and α_0=α_n-1+2α_n.
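As a quick sanity check of the Lemma in this running example (a worked computation added for illustration), take n=3. Then w_∘ is the longest element of the subgroup generated by s_1, s_2 (a Weyl group of type A_2, so w_∘ is an involution), and one computes w_∘α_1=-α_2, w_∘α_2=-α_1, w_∘α_3=α_1+α_2+α_3. Hence

α_0 = w_∘^-1θ = w_∘(α_1+2α_2+2α_3) = -α_2 - 2α_1 + 2(α_1+α_2+α_3) = α_2+2α_3,

which is indeed α_n-1+2α_n for n=3.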
The simple system for D_n is Δ_0 = {α_1,…,α_n-2,α_n-1,α_0} (α_0 and α_n-1 correspond to the spin nodes), and the root system for D_n is the set of all long roots of B_n. We note that α_n ∈ Q^+ ∖ Q^+_0, as mentioned earlier in this section.

§.§ For 1≤ k<a_j set R_k={α∈ R: mult_j(α)∈{k, -a_j+k}}, g_k=⊕_α∈ R_k g_α. Equivalently, g_k={x∈ g: τ(x)=ζ^k x}. Setting R_k^+=R_k∩ R^+, we observe that [x_0^+, g_α]=0 for all α∈ R_k^+, 1≤ k<a_j. We have,

(i) g_0=[g_1, g_a_j-1].
(ii) For all 1≤ k<a_j the subspace g_k is an irreducible g_0–module.
(iii) For all 0≤ m<k< a_j, we have g_k=[g_k-m, g_m].

Each connected component of the Dynkin diagram of the semisimple algebra g_0 contains some simple root α_i with α_i(h_j)<0. Since 0≠ h_j=[x^+_j, x^-_j]∈ [g_1, g_a_j-1], it follows that [g_1, g_a_j-1] intersects each simple ideal of g_0 non–trivially, which proves (i). If a_j=2, the proof of the irreducibility in part (ii) of the proposition can be found in <cit.>. If a_j≥ 3 then g is of exceptional type and the proof is done in a case by case fashion. One inspects the set of roots to notice that for 1≤ k<a_j there exists a unique root θ_k∈ R^+_k such that θ_k is maximal. This means that x_θ_k^+ generates an irreducible g_0–module, and a calculation proves that the dimension of this module is precisely dim g_k, which establishes part (ii). Part (iii) is now immediate once we prove that the g_0–module [g_k-m, g_m] is non–zero, and this is again proved by inspection. We omit the details.

Part (ii) of the proposition implies that R_k^+ has a unique element θ_k such that the following holds: (θ_k,α_i)≥ 0 and [x_i^+, g_θ_k]=0, i∈ I(j)∪{0}. Since θ_k≠θ it is immediate that [x_j^+, g_θ_k]≠ 0, i.e., θ_k+α_j∈ R^+. Notice that x_θ_k^-∈ g_a_j-k and [x_i^-, x_θ_k^-]=0 for all i∈ I(j)∪{0}. Moreover, mult_i(θ_k)>0, i∈ I, 1≤ k<a_j. To see this, note that the set {i: mult_i(θ_k)=0} is contained in I(j). Since R is irreducible there must exist i, p∈ I with mult_i(θ_k)=0, mult_p(θ_k)>0 and (α_i,α_p)<0. It follows that (θ_k, α_i)<0, which contradicts (<ref>). As a consequence of (<ref>) we get (θ,θ_k)>0, 1≤ k<a_j, and hence θ-θ_k∈ R^+_a_j-k. Finally, we note that since (θ_k+α_j,α_0)=(θ_k,α_0)+(α_j,α_0)>0 (see the Remark in Section <ref>) we now have θ_k+α_j-α_0∈ R, k≠ a_j-1, and θ_a_j-1+α_j-α_0∈ R_0^+∪{0}.

In the case of (B_n, D_n), we recall a_n=2. In this case, R_1 is the set of all short roots of B_n, and θ_1=α_1+⋯+α_n. When n ≥ 4, g_1 is the natural representation of D_n. When n=3, g_1 is the second fundamental representation of A_3.

§ THE ALGEBRAS (g[t], g[t]^τ)

In this section we define the current algebra version of the pair (g, g_0); namely we extend the automorphism τ to the current algebra and study its fixed points. The fixed point algebra is an example of an equivariant map algebra studied in <cit.>. We show that our examples are particularly interesting since they can also be realized as maximal parabolic subalgebras of affine Lie algebras. We also show that our examples never arise from a free action of a finite abelian group on the current algebra. This fact makes the study of its representation theory quite different from that of the usual current algebra.

§.§ Let g[t]= g⊗ℂ[t] be the Lie algebra with the Lie bracket given by extending scalars. Recall the automorphism τ: g→ g defined in Section <ref>.
It extends to an automorphism of g[t] (also denoted by τ) by τ(x⊗ t^r)=τ(x)⊗ζ^-r t^r, x∈ g, r∈ℤ_+. Let g[t]^τ be the subalgebra of fixed points of τ; clearly g[t]^τ=⊕_k=0^a_j-1 g_k⊗ t^kℂ[t^a_j]. Further, if we regard g[t] as a ℤ_+–graded Lie algebra by requiring the grade of x⊗ t^r to be r, then g[t]^τ is also a ℤ_+–graded Lie algebra, i.e., g[t]^τ=⊕_s∈ℤ_+ g[t]^τ[s]. A graded representation of g[t]^τ is a ℤ_+–graded vector space V which admits a compatible Lie algebra action of g[t]^τ, i.e., V=⊕_s∈ℤ_+ V[s], g[t]^τ[s] V[r]⊂ V[r+s], r,s∈ℤ_+.

§.§ Given z∈ℂ, let ev_z: g[t]→ g be defined by ev_z(x⊗ t^r)=z^r x, x∈ g, r∈ℤ_+. It is easy to see that ev_0(g[t]^τ)= g_0, ev_z(g[t]^τ)= g, z≠0. More generally, one can construct ideals of finite codimension in g[t]^τ as follows. Let f∈ℂ[t^a_j] and 0≤ k<a_j. The ideal g⊗ t^k fℂ[t] of g[t] is of finite codimension and preserved by τ. Hence, i_k,f=(g⊗ t^k fℂ[t])^τ is an ideal of finite codimension in g[t]^τ. Notice that ker ev_0 ∩ g[t]^τ= i_1,1, ker ev_z ∩ g[t]^τ= i_0, (t^a_j-z^a_j), z≠ 0. We now prove,

Let i be a non–zero ideal in g[t]^τ. Then there exist 0≤ k<a_j and f∈ℂ[t^a_j] such that i_k,f⊂ i. In particular, any non–zero ideal in g[t]^τ is of finite codimension.

For 0≤ k<a_j, set S_k= {g ∈ℂ[t^a_j]: x ⊗ t^k g ∈𝔦 for all x ∈ g_k}. We claim that S_k is an ideal in ℂ[t^a_j] for all 0≤ k<a_j and t^a_j S_a_j-1⊂ S_0 ⊂ S_1⊂⋯⊂ S_a_j-1. Let 0≤ k< a_j, g∈ S_k and f∈ℂ[t^a_j]. By Proposition <ref> we have [g_0, g_k]= g_k and hence any x∈ g_k can be written as x=∑_s=1^r [z_s, y_s] with z_s∈ g_0 and y_s∈ g_k. Therefore x⊗ t^k fg=∑_s=1^r [z_s⊗ f, y_s⊗ t^k g]∈ i, which proves that fg∈ S_k and hence that S_k is an ideal in ℂ[t^a_j]. A similar argument using [g_m, g_k-m]= g_k proves the desired inclusions (<ref>).

We now prove that S_k≠ 0 for some 0≤ k<a_j. If i⊂ g_0⊗ℂ[t^a_j] then [x_j^±, i]=0. This would imply that x^+_i⊗ g∉ i for any i∈ I(j)∪{0} and g∈ℂ[t^a_j]. This is because if x^+_i⊗ g∈ i then a⊗ g⊂ i for the simple ideal a of g_0 containing x^+_i. But a contains a simple root vector x_k^+ with [x_k^+, x_j^+]≠ 0 and hence we have a contradiction. In other words, we have proved that i must contain an element of the form x⊗ t^k g for some root vector x∈ g_k, k>0, and 0≠ g∈ℂ[t^a_j]. Since g_k is an irreducible g_0–module, we have g_k⊗ t^k g⊂ i, i.e., S_k≠ 0, and we are done. Using (<ref>) we also see that S_r≠ 0 for all 0≤ r< a_j; let f_r∈ℂ[t^a_j] be a non–zero generator for the ideal S_r. By (<ref>) there exist g_0,…,g_a_j-1∈ℂ[t^a_j] such that f_r=g_r f_r+1, 0≤ r ≤ a_j-2, t^a_j f_a_j-1=g_a_j-1 f_0. This implies g_a_j-1 f_0=g_0⋯ g_a_j-1 f_a_j-1=t^a_j f_a_j-1. Hence there exists a unique m∈{0,…, a_j-1} such that g_m=t^a_j and g_p=1 if p≠ m. Taking f=f_m+1, where we understand f_a_j=f_0, we see that i_k,f⊂ i, k=m+1-a_jδ_m,a_j-1.

§.§ We now show that g[t]^τ is never a current algebra, or more generally an equivariant map algebra with free action. For this, we recall from <cit.> the definition of an equivariant map algebra. Thus, let 𝔞 be any complex Lie algebra and A a finitely generated commutative associative algebra. Assume also that Γ is a finite abelian group acting on 𝔞 by Lie algebra automorphisms and on A by algebra automorphisms. Then we have an induced action on the Lie algebra 𝔞⊗ A (the commutator is given in the obvious way) such that γ(x⊗ f)=γ x⊗γ f.
Anequivariant map algebrais defined to be the fixed point subalgebra:(𝔞⊗ A)^Γ:={z∈(𝔞⊗ A) |γ(z)=z ∀ γ∈Γ}.The finite–dimensional irreducible representations of such algebras (and hence for g[t]^τ) were given in <cit.> and generalizedearlier work on affine Lie algebras.In the case when Γ acts freely on A, many aspects ofthe representation theory of the equivariant map algebra are the same as the representation theory of a⊗ A (see for instance <cit.>). The importance of the following proposition is now clear.The Lie algebrag[t]^τisnotisomorphic to anequivariant map algebra (𝔞⊗ A )^Γ with a semisimpleand Γ acting freely on A. Recall our assumption that a_j>1 and assume for a contradiction thatg[t]^τ≅ ( a⊗ A)^Γwhere a is semi–simple. Write a= a_1⊕⋯⊕ a_k where each a_s is adirect sum of copies of asimple Liealgebra g_s and g_s g_m if m s. Clearly Γ preserves a_s for all 1≤ s≤ k and henceg[t]^τ≅ ( a⊗ A)^Γ≅⊕_s=1^k( a_s⊗ A)^Γ. Since g[t]^τ is infinite–dimensionalat least one of the summands ( a_s⊗ A)^Γ is infinite–dimensional, say s=1 without loss of generality.But this means that ⊕_s=2^k( a_s⊕ A)^Γ is an idealwhich is not offinite codimension which contradicts Proposition <ref>. Hence we must have k=1, i.e. a= a_1.It was proven in <cit.> that if Γ acts freely on A then any finite–dimensional simple quotient of ( a⊗ A)^Γisa quotient of a; in particular in our situationit follows that all the finite–dimensional simple quotientsof ( a⊗ A)^Γ are isomorphic. On the other hand,(<ref>)shows that g[t]^τ hasboth g_0 and g as quotients. Sinceg_0 is not isomorphic to gwe have the desired contradiction.§.§ We now make the connection of g[t]^τ with a maximal parabolic subalgebra of the untwisted affine Lie algebra g associated to g.Fix a Cartan subalgebra h of g containing h and recall thath= h⊕ c⊕ d,where c spans the one–dimensional center of g and d is the scaling element.Let δ∈ h^* be the unique non–divisible positive imaginary root, i.e.,δ(d)=1 and δ( h⊕ c)=0.Extend α∈ h^* to an element of h^* by α(c)=α(d)=0. The elements {α_i: 1≤ i≤ n}∪{-θ+δ} is a set of simple roots for g.We define a grading on g as follows: for r ∈ℤ andfor each x_α∈ g_α, x_α∈ g[r] iffα = ∑_i=0^n r_i α_i andr = r_j. The following is not hard to prove.Let p be the maximal parabolic subalgebra generated by the elements x_i^±, i∈ I(j), x_±(δ-θ) and x_j^+. Then there exists an isomorphism of graded Lie algebrasp≅ g[t]^τ. In the case of (B_n, D_n), we have a_n=2. Recall that the derived subalgebra of an untwisted affine Lie algebra can be realized as a universal central extension of the loop algebra g ⊗ℂ[t, t^-1].For r ∈ℤ_+, the elements x_α_i,j^±⊗ t^r,x_α_i,j+1^±⊗ t^(r ∓ 1) and h_i ⊗ t^r for 1 ≤ i ≤ j ≤ n form a graded basis of p[2r], and the elements x_α_i,n^-⊗ t^r+1 and x_α_i,n^+⊗ t^r for 1 ≤ i ≤ nform a basis of p[2r+1].The mapx ⊗ t^k ↦ x⊗ t^2r+ s,ifx ⊗ t^k∈𝔭[2r+s]gives the isomorphism in Proposition <ref>.§ THE CATEGORY I In this section wedevelop the representation theory of g[t]^τ. Following <cit.>, <cit.>, we define the notion of global Weyl modules, the associated commutative algebra and the local Weyl modules associated to maximal ideals in this algebra. In the case of g[t]it was shown in <cit.>thatthecommutative algebra associated with a global Weyl moduleis apolynomial ring in finitely manyvariables. This is no longer true for g[t]^τ; however we shall see that modulo the Jacobson radical, the algebra is a quotient of a finitely generatedpolynomial ring by a squarefree monomial ideal. 
As a consequence we see that under suitable conditions a global Weyl modulecan befinite–dimensional and irreducible. More precise statements can be found in Section <ref>.§.§Fix a set of fundamental weights {λ_i: i∈ I(j)∪{0}} for g_0 with respect to Δ_0 and let P_0, P_0^+ be theirand _+–span respectively.Note that the subsetP^+= {λ∈ P_0^+:λ(h_j)∈_+}is precisely the set of dominant integral weights for g with respect to Δ. Also note that P^+ is properly contained in P_0^+.For example, in the B_n case, λ_n-1∈ P_0^+, and λ_n-1(h_n) = -1.It is the existence of these types of weights that cause the representation theory of g [t]^τ to be different from that of g[t].For λ∈ P_0^+ let V_ g_0(λ) be the irreducible finite–dimensional g_0–module with highest weight λ and highest weight vector v_λ;if λ∈ P^+ the module V_ g(λ) and the vector v_λare defined in the same way. §.§Let I be the category whose objects are g[t]^τ–modules with the propertythatthey are g_0 integrable and where the morphisms are g[t]^τ–module maps. In other words an object V of I is a g[t]^τ–module which is isomorphic toa direct sum offinite–dimensional g_0–modules. It follows that V admits a weight space decompositionV=⊕_μ∈ P_0 V_μ,V_μ={v∈ V: hv=μ(h)v, h∈ h},and we set V={μ∈ P_0: V_μ 0}. Note thatw V⊂ V, w∈ W_0,where W_0 is the Weyl group of g_0.Forλ∈ P_0^+ we let I^λ be the full subcategory ofwhose objects V satisfy the condition that V⊂λ-Q^+; note that this is a weaker condition than requiring the set of weights be contained in λ-Q_0^+ (see delta0).Suppose that V is an object of I^λ and let μ∈ V and α∈ R^+. Then μ-sα∈ V for only finitely many s.If α∈ R_0^+ the result is immediate since V is a sum of finite–dimensional g_0–modules.Since α∈ P_0, it follows that there exists w∈ W_0 such that wα is in the anti–dominant chamber for the action of W_0 on h.This implies that wα=-r_0α_0-∑_i∈I(j) r_iα_i where the r_i are non–negative rationalnumbers.Since W_0 is a subgroup of W itfollows that -wα∈ R^+. This shows thatifμ∈ V is such that μ-sα∈ V, thenwμ-swα∈ V⊂λ-Q^+.This is possible only for finitely many s and hence the Lemma is established. §.§ Letg= n^-⊕ h⊕ n ^+,n^±=⊕_α∈R^+ g_±α,be the triangular decomposition of g. Since τ preserves the subalgebras n^± and h we haveg[t]^τ= n^-[t]^τ⊕ h[t]^τ⊕ n^+[t]^τ. Further h[t]^τ≅ h⊗[t^a_j] is a commutative subalgebra of g[t]^τ.For λ∈ P_0^+ the global Weyl module W(λ) is the cyclic g[t]^τ–module generated by an element w_λ with defining relations: for h∈ h and i∈ I(j)∪{0},hw_λ =λ(h)w_λ, n^+[t]^τ w_λ=0, (x^-_i⊗ 1)^λ(h_i)+1w_λ=0.It is elementary to checkthat W(λ) is an object of _j^λ, one just needs to observe that the elements x^±_i, i∈ I(j)∪{0} act locally nilpotently on W(λ). Moreover, if we declare the grade of w_λ to be zero then W(λ) acquires the structure of a _+ graded g[t]^τ–module.§.§ As in <cit.> one checks easily that the following formuladefines a right action of h[t]^τ on W(λ):(uw_λ)a=uaw_λ, u∈( g[t]^τ),a∈ h[t]^τ.Moreover this action commutes with the left action of g[t]^τ. In particular, if we set_ h[t]^τ(w_λ)={a∈( h[t]^τ): aw_λ=0},_λ=( h[t]^τ)/_ h[t]^τ(w_λ),we get that _ h[t]^τ(w_λ) is an ideal in ( h[t]^τ) and that W(λ) is a bi–module for ( g[t]^τ,_λ). 
It is clear that _ h[t]^τ(w_λ) is a graded ideal of ( h[t]^τ) and hence the algebra _λ is a _+–graded algebra with a unique graded maximal ideal _0.It is obvious from the definition thatwe have an isomorphism of right _λ–modules W(λ)_λ≅_λ.We now prove, For all λ∈ P_0^+ the algebra _λ is finitely generated and W(λ) is a finitely generated _λ–module. The proof of the proposition is very similar to the one given in <cit.> but we sketch the proof below for the reader's convenience and also to set up some further necessary notation. Unlike in the case of g[t] we will later see that the global Weyl module is not a free _λ module in general (see notfreealambda) §.§ We need an additional result to prove Proposition <ref>. For α∈ R^+ and r∈_+, define elements P_α,r∈( h[t]^τ) recursivelybyP_α,0=1,P_α ,r=-1/r∑_p=1^r(h_α⊗ t^a_jp)P_α, r-p,r≥ 1. Equivalently P_α, r is the coefficient of u^r in the formal power seriesP_α(u)=exp(-∑_r≥ 1h_α⊗ t^a_jr/ru^r). Writing h_α=∑_i=1^n_i^∨(α) h_i, wesee thatP_α(u)=∏_i=1^n P_α_i(u)^_i^∨(α),α∈ R^+.Set P_α_i, r= P_i,r, i∈ I∪{0}. The following is now trivial from the Poincaré–Birkhoff–Witt theorem. The algebra( h[t]^τ) is the polynomial algebra in the variables{P_i, r: i∈ I(j)∪{0}, r∈}, and also in the variables{P_i, r: i∈ I, r∈}.The comultiplication Δ̃: ( g[t]^τ)→( g[t]^τ)⊗( g[t]^τ) satisfies Δ̃(P_α(u))=P_α(u)⊗ P_α(u),α∈ R^+. For x∈( g[t]^τ), r∈_+, set x^(r)=1/r!x^r.§.§ The following can be found in <cit.> and is a reformulation of a result of Garland, <cit.>. Let x^±, h be the standard basis of sl_2 and let V be a representation of the subLie algebraof sl_2[t] generated by (x^+⊗ 1) and (x^-⊗ t). Assume that 0 v∈ V is such that (x_α^+⊗ t^r)v=0 for all r∈_+. For all r∈_+ we have (x^+⊗ 1)^(r)(x^-⊗ t)^(r)v=(x^+⊗ t)^(r)(x^-⊗ 1)^(r)v= (-1)^r P_r v,where∑_r≥ 0P_ru^r=exp(-∑_r≥ 1h⊗ t^r/ru^r).Further,(x^+⊗ 1)^(r)(x^-⊗ t)^(r+1)v =(-1)^r∑_s=0^r(x^-⊗ t^s+1)P_ r-sv.§.§ Proof of Proposition <ref> Given α∈ R^+, it is easily seen that the elements (x_α^+⊗ t^_j(α)) and (x_α^-⊗ t^a_j-_j(α)) generate a subalgebra of g[t]^τ which isisomorphic to the subalgebra of sl_2[t] generated by (x^+⊗ 1) and (x^-⊗ t). Using the defining relations of W(λ) and equation(<ref>) weget thatP_α, rw_λ=0,r≥λ(h_α)+1,α∈ R^+_0.It also follows from Lemma <ref> that P_j, rw_λ=0 for all r>>0. Using Lemma <ref> we see that _λ is finitely generated by the images of the elements{P_i,r: i∈ I(j)∪{0}, r≤λ(h_i)}.We now prove that W(λ) is a finitely generated _λ–module. Fix an enumeration β_1,… ,β_M of R^+. Using the Poincaré–Birkhof–Witt theorem it is clear that W(λ) is spanned by elements of the form X_1X_2⋯ X_M( h[t]^τ)w_λ where each X_p is either a constant or a monomial in the elements {(x^-_β_p⊗ t^s): s∈ a_j_+-_j(β_p)}.The lengthof each X_r isbounded byLemma <ref> and equation(<ref>) proves that for any γ∈ R^+ and r∈_+, the element (x^-_γ⊗ t^ra_j-_j(γ))( h[t]^τ)w_λ is in the span of elements {(x_γ^-⊗ t^sa_j-_j(γ))( h[t]^τ)w_λ: 0≤ s≤ N} for some N sufficiently large.An obvious induction on the lengthof the product of monomialsshows that the values of s arebounded for each β and the proof is complete. Noticethatthe preceding argumentproves that the set W(λ) is finite. This is not obvious since W(λ) is not a subset of λ-Q^+_0. §.§ Letλ∈ P_0^+. Given any maximal idealof _λ we define the local Weyl module,W(λ, )= W(λ)⊗__λ_λ/. It follows from Proposition <ref> that W(λ,) is a finite–dimensional g[t]^τ–module in I and W(λ,)_λ=1. 
A standard argument now proves that W(λ,) has a unique irreducible quotient which we denote as V(λ,). Moreover, W(λ,𝐈_0) is a _+–graded g[t]^τ–module and V(λ,_0)≅_0^* V_ g_0(λ), where ^*_0V is the representation of g[t]^τ obtained by pulling back arepresentationV of g_0. §.§ We now construct an explicit family of representations of g[t]^τ which will be needed for our further study of _λ. Given non–zero scalars z_1,…, z_k such that z_r^a_jz_s^a_j for all 1≤ r s≤ kwe have a canonical surjective morphismg[t]^τ→ g_0⊕ g^⊕ k→ 0,(x⊗ t^r)→ (δ_r,0x, z_1^rx,…, z_k^rx).Given a representation V of g and z 0, we let _z^*V be the corresponding pull–backrepresentation of g[t]^τ; note that these representations are cyclic g[t]^τ–modules. Using the recursive formulae for P_α,r it is not hard to see that the following hold in the module ^*_z V_ g(λ), λ∈ P^+ and _0^*V_ g_0(μ), μ∈ P_0^+:n^+[t]v_λ=0 P_i,rv_λ= λ(h_i )r(-1)^rz^a_jrv_λ, i∈ I,r∈n^+[t]^τ v_μ=0,P_i,rv_μ=0,i∈ I,r∈.The preceding discussion together with equation (<ref>) now proves the following result. Supposethat λ_1,…,λ_k∈ P^+ and μ∈ P_0^+. Letz_1,…, z_k be non–zero complex numberssuch that z_r^a_jz_s^a_j for all 1≤ r s≤ k. Then^*_0 V_ g_0(μ)⊗^*_z_1V_ g(λ_1)⊗⋯⊗^*_z_kV_ g(λ_k)is an irreducible g[t]^τ–module. Moreover,n^+[t]^τ(v_μ⊗ v_λ_1⊗⋯⊗ v_λ_k)=0,(P_i,r-π_i,r)(v_μ⊗ v_λ_1⊗⋯⊗ v_λ_k)=0,i∈ I, r∈_+,where∑_r∈_+π_i,ru^r=∏_s=1^k(1-z_s^a_ju)^λ_s(h_i),i∈ I. In particular, the modules constructed in the preceding proposition are modules of the form V(λ, ) where λ=μ+λ_1+⋯+λ_k. The converse statement is also true; this follows from the work of <cit.>. An independent proof can be deduced once we complete our study of _λ. § THE ALGEBRA A AS A STANLEY–REISNER RING For the rest of this section we denote by Jac(_λ) the Jacobson radical of _λ, and use freely the fact that the Jacobson radical of a finitely–generated commutative algebra coincides with its nilradical.§.§ The main result of this section is the following. The algebra _λ/Jac(_λ) is isomorphic to the algebra _λ which is the quotientof ( h[t]^τ) by the ideal generated by the elementsP_i,s,i∈ I(j), s≥λ(h_i)+1,andP_1,r_1⋯ P_n,r_n,∑_i=1^n_i^∨ (α_0)r_i>λ(h_0).Moreover, Jac(_λ) is generated by the images of the elements in(<ref>) and Jac(_λ) = 0if _j^∨ (α_0) = 1.We discuss the statement of the theorem in the case of (B_n, D_n). Recall that in this case, h_0 = h_n-1 + h_n and so _j^∨(α_0) = 1. Thus, Jac(_λ) = 0 and (<ref>) becomesP_n-1, r_n-1P_n,r_n ,r_n-1 + r_n > λ(h_0). Before proving alambda we note several interesting consequences. §.§ We recall the definition of a Stanley–Reisner ring, and the correspondence between Stanley–Reisner rings and abstract simplicial complexes(for more details, see <cit.>). Given a monomial m = x_i_1⋯ x_i_ℓ we say that m is squarefree if i_1 < ⋯ < i_ℓ.We say an ideal of ℂ[x_1, …, x_n] is a squarefree monomial ideal if it is generated by squarefree monomials.A quotient of a polynomial ring by a squarefree monomial ideal is called a Stanley–Reisner ring.We now prove the following consequence of alambda.The algebra _λ/Jac(_λ) is a Stanley–Reisner ring with Hilbert series ℍ(_λ/Jac(_λ))=∑_σ∈Σ_λ∏_P_i,r∈σt^a_jr/1-t^a_jr,where Σ_λ denotes the abstract simplicial complex corresponding to _λ/Jac(_λ). Moreover, if _j^∨(α_0) = 1,the Krull dimension of _λis given byd_λ=λ(h_0) + ∑_i: _i(α_0)=0λ(h_i).If in addition we have |{i : _i (α_0) >0}|=2, then the algebra _λ is Koszul and Cohen-Macaulay. 
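As a quick sanity check of the displayed Hilbert series (a toy illustration of ours, with made-up generators, not data occurring in the proofs below): take a_j=2 and let Σ be the complex whose only facets are the singletons {P_1,1} and {P_2,1}, so that the associated Stanley–Reisner ring is ℂ[P_1,1,P_2,1]/(P_1,1P_2,1) with both generators placed in degree a_j·1=2. Summing over the three faces ∅, {P_1,1} and {P_2,1} gives
\[
\mathbb{H} \;=\; 1 + 2\,\frac{t^{2}}{1-t^{2}} \;=\; \frac{1+t^{2}}{1-t^{2}},
\]
which indeed enumerates the monomial basis 1, P_1,1^m, P_2,1^m (m≥ 1) of this ring: one element in degree 0 and two in every positive even degree. Both facets have cardinality one, so this complex is pure and the ring has Krull dimension 1.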
In the case of (B_n, D_n), we have, since α_0 = α_n-1 + 2 α_n and h_0=h_n-1+h_n, that _λ is Koszul and Cohen–Macaulay. To see how alambdasr follows from Theorem <ref>, we need to understand the Stanley–Reisner rings in terms of abstract simplicial complexes. §.§ Let k ∈ℕ and let X= { x_1, … , x_k }. An abstract simplicial complex Σ on the set X is a collection of subsets of X such that if A ∈Σ and if B ⊂ A, then B ∈Σ. There is a well known correspondence between abstract simplicial complexes and ideals in ℂ[X] = ℂ[x_1, … , x_k] generated by squarefree monomials, which is given as follows: if Σ is an abstract simplicial complex, let _Σ⊂ℂ[X] be the ideal generated by the elements of the set { x_i_1⋯ x_i_r | 1 ≤ r ≤ k, {x_i_1 , …, x_i_r}∉Σ}. The following proposition can be found in <cit.>. Given any abstract simplicial complex Σ, the ring ℂ[X] / _Σ is a Stanley–Reisner ring. Conversely, any Stanley–Reisner ring is isomorphic to ℂ[X] / _Σ for some X = {x_1, …, x_k } and some abstract simplicial complex Σ on X. §.§ If A ∈Σ, we call A a simplex, and a simplex of Σ not properly contained in another simplex of Σ is called a facet. Let ℱ(Σ) denote the set of facets of Σ. For sets B⊂ A, we have the Boolean interval [B,A]={C: B⊂ C ⊂ A} and let A̅=[∅,A]. The dimension of Σ is the largest of the dimensions of its simplices, i.e. dimΣ=max{|A|: A∈Σ}-1. The simplicial complex Σ is said to be pure if all elements of ℱ(Σ) have the same cardinality. An enumeration F_0,F_1,…, F_p of ℱ(Σ) is called a shelling if for all 1≤ r≤ p the subcomplex (⋃_i=0^r-1 F̅_i)∩F̅_r is a pure abstract simplicial complex and (dim F_r-1)–dimensional. The following can be found in <cit.>. If Σ is pure and shellable, then the Stanley–Reisner ring of Σ is Cohen–Macaulay. §.§ It is immediate from (<ref>) and (<ref>) that _λ/Jac(_λ) is a Stanley–Reisner ring. Let Σ_λ denote the corresponding abstract simplicial complex. We have the following lemma. Assume that _j^∨(α_0) = 1 and {i : _i (α_0) >0}={s,j}. Then the simplicial complex Σ_λ is pure and {F_0,…,F_min{λ(h_0),λ(h_s)}} defines a shelling, where F_r=(∏_i : _i(α_0)=0, 1≤ r_i ≤λ(h_i) P_i,r_i) P_j,1⋯ P_j,λ(h_0)-r P_s,1⋯ P_s,r, 0≤ r≤min{λ(h_0),λ(h_s)}. Let F be a facet of Σ_λ, i.e., F is not contained properly in another simplex of Σ_λ. It is clear that the cardinality of F is less than or equal to d_λ. If it were strictly less, then P_i,rF would be a face of Σ_λ for some i and r, which is a contradiction. Hence all facets have the same cardinality. The shelling property is straightforward to check. §.§ Proof of Proposition alambdasr The statements about the Hilbert series and Krull dimension are immediate consequences of the correspondence between _λ/Jac(_λ) and Σ_λ, and can be found in <cit.>. If _j^∨(α_0) = 1 and |{i : _i (α_0) >0}|=2, then it follows from Theorem <ref> that _λ is a quotient of a polynomial algebra by a quadratic monomial ideal, and hence Koszul (see <cit.>). Proposition <ref> and Lemma <ref> together show that _λ is Cohen–Macaulay. §.§ In this subsection, we note another interesting consequence of alambda. Let λ∈ P_0^+. Then _λ/Jac(_λ) is either infinite–dimensional or isomorphic to ℂ. Moreover, the latter is true iff the following two conditions hold: (i) for i∈ I(j), we have λ(h_i)> 0 only if _i^∨(α_0)>0, (ii) λ(h_0) < _i^∨(α_0) if i=j or if i∈ I(j) and λ(h_i)>0. Suppose that λ satisfies the conditions in (i) and (ii). To prove that dim _λ/Jac(_λ)=1 it suffices to prove that the elements P_i,s∈Jac(_λ) for all i∈ I and s≥ 1. Assume first that i ≠ j. If λ(h_i)=0 then equation (<ref>) gives P_i,sw_λ =0 for all s≥ 1.
If λ(h_i)>0 then the conditions imply thatλ(h_0)<_i^∨(α_0) and hence equation (<ref>) shows that P_i,s∈Jac(_λ) for all s≥ 1. If i=j then again the result follows from (<ref>) and condition (ii). We now prove theconverse direction.Suppose that (i) does not hold. Then, there exists i j with _i(α_0)=0 and λ(h_i)>0.Equation (<ref>) implies thatthe preimage of Jac(_λ)is contained in theideal of ( h[t]^τ) generated by the elements {P_i,s: i∈ I, ^∨ _i(α_0)>0}.Hence, usingLemma <ref>we see that theimage of the elements {P_i,1^r:r∈} in _λ/Jac(_λ) must remain linearly independent showingthat the algebra is infinite–dimensional. Suppose that (ii) does not hold.Then either λ(h_0)≥_j^∨(α_0) or λ(h_0)≥_i^∨(α_0) for some i∈ I(j) with λ(h_i)> 0. In either case (<ref>) and Lemma <ref> show that theimage of the elements {P_i,1^r:r∈} in _λ/Jac(_λ) must remain linearly independent showingthat the algebra is infinite–dimensional.The algebra_λ isfinite–dimensionaliff it is a local ring. It follows that W(λ) is finite–dimensional iff _λ is a local ring. If_λ is finite–dimensional then so is _λ/Jac(_λ) and the corollary is immediate from the proposition. Conversely suppose that_λ is a local ring. By the proposition and equation (<ref>), we have P_i,sw_λ=0,if_i^∨(α_0)=0, s∈. If _i^∨(α_0) 0 we still have from (<ref>) that P_i,sw_λ=0 if s is sufficiently large. Otherwise,equation (<ref>) shows that there exists N∈_+ such thatP_i,s^N w_λ =0,for all i∈ I,s∈. This proves that _λis generated by finitely many nilpotent elements and since it is a commutative algebra it is finite–dimensional.The second statement of the corollary is now immediate from Proposition <ref>.§.§We turn to the proof of Theorem <ref>. It follows from equation (<ref>) that the elements in (<ref>) map to zero in _λ. Until further notice, we shall prove results which are needed to show thatthe elements in (<ref>) are in Jac(_λ). Given α,β∈ R,with ℓα+β∈ R, letc(ℓ,α,β) ∈ℤ\{0} be such thatad_x_α^ℓ(x_β)= c(ℓ, α, β) x_ℓα + β.The following is trivially checked by induction. Let γ∈Δ and β∈ R^+ ∖Δ be such that β+γ∉ Rand(β,γ)>0.Given m,n,s,p,q ∈_+ we have (x^+_γ⊗ t^p)^(s+d_γ q)(x^+_β-γ⊗ t^m)^(s) (x^-_β⊗ t^n)^(q+s)=C (x^-_s_γ(β)⊗ t^n+d_γ p)^(q)(x^+_γ⊗ t^p)^(s)(x^-_γ⊗ t^m+n)^(s) +Xwhere X∈( g[t]^τ)n^+[t]^τand (d_γ!)^qC =c(d_γ,γ,-β)^q c(1,β-γ,-β)^s . It is immediate that under the hypothesis of the Lemma we have for all P∈( h[t]^τ) that (x^+_γ⊗ t^p)^(s+ d_γ q)(x^+_β-γ⊗ t^m)^(s) (x^-_β⊗ t^n)^(q+s) P w_λ = C(x^-_s_γ(β)⊗ t^n+ d_γ p)^(q)(x^+_γ⊗ t^p)^(s)(x^-_γ⊗ t^m+n)^(s)P w_λ,for some C 0. §.§ Recallthat given any root β∈ R^+ we can choose α∈Δ with (β,α)>0. Moreover if β∉Δ and β is long then β+α∉ R. Settingα_i_0=α_j,β_0=α_0,we set β_1=s_i_0β_0 and note that β_1∈ R^+. If β_1∉Δ then we choose α_i_1∈Δ with (β_1,α_i_1)>0 and set β_2=s_i_1β_1. Repeating this if neccessary we reach a stage when k≥ 1 and β_k∈Δ. In this case we set α_i_k=β_k.We claim that|{0≤ r ≤ k : i_r=i}|=_i^∨(α_0),1≤ i≤ n. To see this,notice that since the β_p are long roots, we have h_β_p=h_β_p-1-h_i_p-1. Hence,h_0=∑_s=0^k h_i_s=∑_i=1^n ^∨_i(α_0)h_i.Equating coefficients gives (<ref>). §.§ Retain the notation of Section <ref>. We now prove that P_i_k,s_k⋯ P_i_0,s_0w_λ=0,if(s_0+⋯ +s_k)≥λ(h_0)+1.We begin with the equalityw= (x^-_0⊗ 1)^(s_0+⋯+s_k)w_λ=0,(s_0+⋯ +s_k)≥λ(h_0) + 1,which isa defining relation for W(λ). 
Recalling that j=i_0 and setting X_1=(x_j^+⊗ t)^(s_0 +d_α_j(s_1+⋯ +s_k))(x^+_α_0-α_j⊗ t^a_j-1)^(s_0)we get by applying (<ref>) 0=X_1w= (x^-_β_1⊗ t^d_α_j)^(s_1+⋯ +s_k)P_i_0,s_0w_λ.More generally, if we setX_r+1=(x^+_α_i_r⊗ t^δ_i_r, j)^(s_r+d_α_i_r(s_r+1+⋯+r_k))(x^+_β_r-α_i_r⊗ t^m_r)^(s_r),where m_r = a_j-δ_i_r,j- d_α_j|{0 ≤ q <r|i_q=j}| we find after repeatedly applying (<ref>) that0=(x^+_β_k⊗ t^δ_i_k,j)^(s_k)X_k⋯ X_1w=P_i_k,s_k⋯ P_i_0,s_0w_λ=0.Thisproves the assertion. §.§ We can now prove that P_1, r_1⋯ P_n,r_n∈ Jac(_λ)if∑_i=1^n_i^∨(α_0) r_i>λ(h_0). Taking s_p=r_m whenever i_p=m in (<ref>) and using (<ref>) we see thatP_1, r_1^_1^∨(α_0)⋯ P_n,r_n^_n^∨(α_0)w_λ=0if∑_i=1^n_i^∨(α_0) r_i>λ(h_0).Multiplying through by appropriate powers of P_i,r_i, 1≤ i≤ n we get that for some s≥ 0 we have P_1,r_1^s⋯ P_n,r_n^sw_λ =0,if∑_i=1^n_i^∨(α_0) r_i>λ(h_0).HenceP_1,r_1^s⋯ P_n,r_n^s= 0 in _λ proving that P_1, r_1⋯ P_n,r_n∈ Jac(_λ). This argument proves that there exists a well–defined morphism of algebrasφ:_λ↠_λ/Jac(_λ).We now prove,If_j^∨(α_0)=1 the map φ factors through _λ, i.e., we have a commutative diagram[column sep=small](A) at (-1,0) _λ;(B) at (1.7,0) _λ/Jac(_λ);(C) at (1.7,-1.7) _λ;(A) edge[->>,>=angle 90] node[left](B) (A) edge[->>,>=angle 90] node[left](C) (C) edge[->>,>=angle 90] node[left](B);Using (<ref>) it suffices to prove that if _j^∨(α_0)=1 then_i^∨ (α_0) ≤ 1 ∀ i ∈ I.Since _j(α_0)=a_j≥ 2>_j^∨(α_0)=1 we see that g cannot be of simply laced type and hence α_j is short. It follows that s_α_0α_j=α_j-α_0 is also short and so h_α_0 - α_j =d_jh_0- h_j. If _i^∨(α_0) > 1 for some i ≠ j,then we would have _i^∨(α_0 - α_j) = d_j _i^∨(α_0) ≥ 2d_j.Since α_j is short this is impossible unless g is of type F_4 and j=4. This case can be handled by an inspection. §.§ Using Lemma <ref> and (<ref>) we see that the proof of Theorem <ref> is complete if we show that the map (<ref>) is injective. Since _λ is a quotient of ( h[t]^τ) by a square–free ideal, it has no nilpotent elements and thus Jac(_λ) = 0. So if f is a nonzero element in _λ, there exists a maximal ideal _f of _λ so that f ∉_f. Therefore, by Lemma <ref> we can choose a tuple (π_i,r), i∈ I, r∈ satisfying the relations (<ref>) and (<ref>) such that under the evaluation map sending P_i,r to π_i,r the element f is mapped to a non–zero scalar. Define z_1,…,z_k and λ_1,…,λ_k∈ P^+ by π_i(u)=1+∑^_r∈π_i,ru^r=∏_s=1^k(1-z_s^a_ju)^λ_s(h_i),i∈ Iand set μ=λ-(λ_1+…+λ_k)∈ P_0. In what follows we show that μ∈ P_0^+. Since (π_i,r) satisfies the relations in (<ref>) we have that μ(h_i)∈_+ for i∈ I(j). Moreover,since (π_i,r) satisfies(<ref>) we get μ(h_0)∈_+. To see this, note that the coefficient of u^r in ∏_i∈ Iπ_i(u)^_i^∨(α_0) is given by∑_(r_i_k) ∏_i∈ I∏_k=1^^∨_i(α_0)π_i,r_i_k,where the sum runs over all tuples (r_i_k) such that∑_i∈ I∑_k=1^^∨_i(α_0)r_i_k=r. Set r_i=max{r_i_k, 1≤ k≤^∨_i(α_0)}, i∈ I and observe that if r>λ(h_0), then∑_i∈ I_i^∨(α_0)r_i≥ r>λ(h_0)and hence (<ref>) vanishes. It follows thatμ(h_0)=λ(h_0)-deg(∏_i∈ Iπ_i(u)^_i^∨(α_0))∈_+. Now using Proposition <ref> we have a quotient of W(λ) where f acts by a non–zero scalar on the highest weight vector. Hence f^N∉Ann_ h[t]^τ(w_λ) for all N≥ 1, i.e.the image of f under the map (<ref>) is non–zero. 
This proves the map (<ref>) is injective, and so Theorem <ref> is established.§ FINITE–DIMENSIONAL GLOBAL WEYL MODULESIn this section wegive necessary and sufficient conditions for the global Weyl module to be finite–dimensional.§.§Recall the elements θ_k∈ R^+, 0≤ k< a_jdefined in Section <ref>.Givenλ∈ P_0^+,the module W(λ)is an irreducible g[t]^τ–module and hence isomorphic to ^*_0 V_ g_0(λ)iffthe following hold: λ(h_0)=0 andλ(h_i)>0 onlyif_i(θ_a_j-1)=_i(α_0).The proof of the theorem can be found in the rest of the section.We discuss the finite dimensional irreducible global Weyl modules for the example(B_n, D_n). In this case recall θ_1 = α_1 + … + α_nand α_0 = α_n-1 + 2 α_n. Thus, W(λ) is an irreducible g[t]^τ–module and hence isomorphic to ev_0^*V_ g_0(λ) if and only if λ = r λ_n-1 for r ∈ℤ_+. §.§ Suppose that λ satisfies the conditions of the theorem. Notice thatW(λ)≅_0^* V_ g_0(λ) g[t]^τ[s]w_λ=0,s∈.Recall from Section <ref> that θ_a_j-1+α_j-α_0=∑_i∈ I(j)∪{0}p_iα_i∈ R_0^+∪{0}. If θ_a_j-1=α_0-α_j, we have0=(x^+_α_j⊗ t^ra_j+1)(x^-_α_0⊗ 1)w_λ =(x^-_θ_a_j-1⊗ t^ra_j+1)w_λ.Otherwise, θ_a_j-1+α_j-α_0∈ R_0^+ and it follows that ifp_i>0 then _i(θ_a_j-1-α_0)≠0 and hence by our assumptions on λ we have λ(h_i)=0, i.e. (λ, θ_a_j-1+α_j-α_0)=0. Using the defining relations of W(λ) we get(x^-_θ_a_j-1-α_0+α_j⊗ 1)w_λ=0.Since λ(h_0)=0 we now have 0=(x^-_θ_a_j-1-α_0+α_j⊗ 1)(x^+_α_j⊗ t^ra_j+1)(x^-_α_0⊗ 1)w_λ =(x^-_θ_a_j-1-α_0+α_j⊗ 1)(x^-_α_0-α_j⊗ t^ra_j+1)w_λ =(x^-_θ_a_j-1⊗ t^ra_j+1)w_λ.So in either case we found that (x^-_θ_a_j-1⊗ t^ra_j+1)w_λ=0. By Propostion <ref> and the discussion in Section <ref> we know that g_1 is an irreducible g_0–module generated by x_θ_a_j-1^- by applying elements x_i^+, i∈ I(j)∪{0} and so( g_1⊗ tℂ[t^a_j])w_λ=0. Assume that ( g_m⊗ t^m[t^a_j])w_λ=0 for all mwith 1≤ m <k ≤ a_j. Since 1≤ k-m<k, we also have ( g_k-m⊗ t^k-m[t^a_j])w_λ=0 by our induction hypothesis. Now by Proposition <ref> we have g_k=[ g_k-m, g_m] if k< a_j and g_k=[ g_1, g_a_j-1] if k=a_j and hence using the induction hypothesis we obtain ( g_k⊗ t^k[t^a_j])w_λ=0, proving that W(λ) is irreducible.§.§ We prove the forward direction of Theorem <ref> in the rest of the section for which we need some additional results.Let μ,ν∈ P_0^+and assume that W(ν) is reducible. Then W(μ+ν) is also reducible. It is easily seen using the defining relations of W(μ+ν) that we havea map of g[t]^τ–modulesW(μ+ν)→ev_0^* V_ g_0(μ)⊗ W(ν)extending the assignment w_μ+ν→ v_μ⊗ w_ν.Since W(ν) is reducible thereexists x∈ g[t]^τ[s], s≥ 1 withxw_ν≠ 0. Since xv_μ=0, we nowget x(v_μ⊗ w_ν)=v_μ⊗ xw_ν≠ 0. Hence 0 xw_μ+ν∈ W(μ+ν)[s] and the result follows. §.§ Suppose that λ,μ∈ P_0^+ are such thatthere exists 0Φ∈Hom_ g_0( g_1⊗ V_ g_0(λ),V_ g_0(μ)). Then V:=V_ g_0(λ)⊕ V_ g_0(μ) admits a g[t]^τ–structure extending the canonical g_0–structure as follows:(x⊗ 1)(v_1,v_2)=(xv_1,xv_2), (y⊗ t)(v_1,v_2)=(0, Φ(y⊗ v_1)), g[t]^τ[s](v_1,v_2)=0, s≥ 2,where (v_1,v_2)∈ V, x∈ g_0 and y∈ g_1. It is easily checked that ifλ-μ∈ Q^+thenV is a quotient of the global Weyl module W(λ).The global Weyl module W(λ_i) is not irreducible if i=0 or i∈ I(j) with _i(θ_a_j-1)≠_i(α_0).Recall thatw_∘ is the longest element in the Weyl group defined by the simple roots {α_i: i∈ I(j)} and note that w_∘θ_a_j-1∈ R^+_a_j-1. It follows that α_0+w_∘θ_a_j-1∉ R and hence 0≤ w_∘θ_a_j-1(h_0)≤ 1. Settingμ_0=λ_0-w_∘θ_a_j-1 we see thatμ_0(h_i)=-w_∘(θ_a_j-1)(h_i)≥ 0 for i∈ I(j) and μ_0(h_0)=1-w_∘θ_a_j-1(h_0)≥ 0, i.e. 
μ_0∈ P_0^+. Since g_1 is an irreducible g_0–module of lowest weight -θ_a_j-1, the PRV theorem <cit.>, <cit.> implies that V_ g_0(μ_0) is a direct summand of g_1⊗ V_ g_0(λ_0). It follows from the discussion preceding the proposition that W(λ_0) is not irreducible. It remains to consider a node i∈ I(j) with _i(θ_a_j-1)≠_i(α_0). By way of contradiction suppose that W(λ_i) is irreducible. Using Proposition <ref>(ii) we can assume that _i(α_0)>0. Moreover, by Section <ref> we know that θ_a_j-1-α_0+α_j∈ R^+_0 and hence _i(θ_a_j-1)>_i(α_0)> 0. If g is of classical type, this is only possible if _i(θ_a_j-1)=2, _i(α_0)=1 and hence we have a pair of roots of the form α_0=⋯+α_i+⋯+2α_j+⋯, θ_a_j-1=⋯+2α_i+⋯+α_j+⋯, which is a contradiction. In other words, there is no such node if g is of classical type. If g is of exceptional type, a case by case analysis shows that W(λ_i) is not irreducible using the discussion preceding the proposition. §.§ Suppose that W(λ) is irreducible. The isomorphism in (<ref>) gives W(λ)≅_0^* V_ g_0(λ) and hence W(λ) is finite–dimensional. Since W(λ)_λ≅_λ the algebra _λ is a finite–dimensional algebra. Proposition <ref> implies that Jac(_λ) is an ideal of codimension one and hence we have λ(h_0)<_j^∨(α_0) and, for i ≠ j, λ(h_i)>0 implies _i^∨(α_0)>0 and λ(h_0)<_i^∨(α_0). Assume that λ violates one of the conditions in (<ref>), i.e. λ(h_i)>0 where i=0 or i∈ I(j) and _i(θ_a_j-1)≠_i(α_0). Now setting μ=λ-λ_i and ν=λ_i in Lemma <ref> and using Proposition <ref> we see that W(λ) is not irreducible, which completes the proof of Theorem <ref>. § STRUCTURE OF LOCAL WEYL MODULES Recall from Section 2 that the equivariant map algebra g[t]^τ is not isomorphic to an equivariant map algebra where the group Γ acts freely on A. When Γ acts freely on A, the finite dimensional representation theory of the equivariant map algebra is closely related to that of the map algebra g ⊗ A (see for instance <cit.>). We have already seen a major difference between the finite dimensional representation theory of g [t]^τ and that of g [t]. Specifically, in Section 5 we showed that unlike in the case of the current algebra, the global Weyl module for g [t]^τ can be finite–dimensional and irreducible for nontrivial dominant integral weights. In this section we discuss the structure of local Weyl modules for the case of (B_n, D_n) where λ is a multiple of a fundamental weight, in which case _λ is a polynomial algebra. We finish the section by discussing the complications in determining the structure of local Weyl modules for an arbitrary weight λ∈ P_0^+. The simplest example is the case of ω_n-1= λ_0+λ_n-1. Note that in this case _ω_n-1 is not a polynomial algebra. §.§ Recall that we have a well established theory of local Weyl modules for the current algebra g[t]. Given λ∈ P^+ we denote by W_^ g(λ) the g[t]–module generated by an element w_λ and defining relations n^+[t] w_λ=0, (h⊗ t^r)w_λ=δ_r,0λ(h)w_λ, (x_i^-⊗ 1)^λ(h_i)+1w_λ=0. We remind the reader that {ω_i: 1≤ i≤ n} is a set of fundamental weights for g with respect to Δ. The following was proved in <cit.> and <cit.>. W_^ g(λ)≅⊗_i=1^n(W_^ g(ω_i))^⊗ m_i, λ=∑_i=1^n m_iω_i∈ P^+. We can clearly regard W_^ g(λ), λ∈ P^+ as a graded g[t]^τ–module by restriction, however it is not the case that this restriction gives a local Weyl module for g [t]^τ.
The relationship between local Weyl modules for g [t]^τ and the restriction of local Weyl modules for g [t] is more complicated, as we now explain.§.§ Given z∈^× we have a isomorphism of Lie algebras η_z:g[t]→ g[t] given by (x⊗ t^r)→ (x⊗ (t+z)^r) and let η_z^*V be thepull–back through this homomorphism of a representation V of g[t].Suppose that V is such that there exists N∈_+ with ( g⊗ t^m)V=0 for all m≥ N. Then ( g⊗ (t-z)^m)η_z^*V=0 for all m≥ N.In particular we can regard the module η_z^*V as a module for the finite–dimensional Lie algebra g⊗[t]/(t-z)^N. Following <cit.>, since z∈^× we have g[t]/ g⊗ (t-z)^N[t]≅ g[t]^τ/( g⊗ (t-z)^N[t])^τ,so if V is a cyclic module for g[t] then η_z^*V is a cyclic module for g[t]^τ.We now need a general construction. Given any finite–dimensional cyclic g[t]^τ–module V with cyclic vector vdefine an increasing filtration of g_0–modules0⊂ V_0=( g[t]^τ)[0]v⊂⋯⊂ V_r=∑_s=0^r( g[t])^τ[s]v⊂⋯⊂V.The associated graded space Vis naturally a graded module for g[t]^τ via the action(x ⊗ t^s)w = (x ⊗ t^s)w,w∈ V_r/V_r-1 .Suppose that v satisfies the relationsn^+[t]^τ v=0, (h⊗ t^2k)v=d_k(h) v, d_k(h)∈, k∈_+, h∈ h.Then since V<∞ it follows that d_0(h)∈_+; in particular there exists λ∈ P_0^+ such that d_0(h)=λ(h) and a simple checking shows that V is a quotient of W_(λ):=W(λ,𝐈_0). The following is now immediate.Let λ∈ P^+ and z∈^×.The g[t]^τ–module (η^*_z W_^ g(λ)) is a quotient of W_(λ) and henceW_(λ)≥ W_^ g(λ). §.§ For the rest of this section, we consider the case of (B_n, D_n), and study local Weyl modules corresponding to weights rλ_i ∈ P_0^+, where r ∈ℤ_+, and 0 ≤ i ≤ n-2 (the i=n-1 case is discussed in section5, where these local Weyl modules are shown to be finite–dimensional and irreducible). We remind the reader that λ_0=ω_n, λ_i=ω_i, 1≤ i≤ n-2 and λ_n-1=ω_n-1-ω_n.In particular, we show the reverse of theinequality in gradedlowerbound, which proves the following proposition. Assume that ( g, g_0) if of type(B_n,D_n). For 0≤ i≤ n-2and r∈_+we have an isomorphism of g[t]^τ–modulesW_(rλ_i)≅(η^*_z W_^ g(rλ_i)).§.§ We recall standard results for local Weyl modules for the current algebra g[t]. (i) Let x,y,h be the standard basis for sl_2 and set y⊗ t^r=y_r,For λ∈ P^+ the local Weyl module W_^sl_2(λ) has basis{w_λ,y_r_1⋯ y_r_kw_λ: 1≤ k≤λ(h),0≤ r_1≤⋯≤r_k≤λ(h)-k}. Moreover, y_sw_λ=0 for all s≥λ(h). (ii) Assume that g is of type B_n (resp. D_n) andassume that in (resp. i n-1, n). ThenW_^ g(ω_i)≅_ g V_ g(ω_i)⊕ V_ g(ω_i-2)⊕⋯⊕ V_ g(ω_i̅),whereV_ g(ω_i̅)= V_ g(ω_1),i odd,V_ g(ω_i̅) = ,ieven. (iii) Assume that g is of type B_n (resp. D_n), and let i=n (resp. i ∈{ n-1, n }).ThenW_^ g(ω_i)≅_ g V_ g(ω_i). We remind the reader thatV_ g(ω_i)=2n+1i, g= B_n,in2ni, g=D_n, in-1, n. Moreover, ifg is of type B_n,V_ g(ω_n)= 2^n,and if g is of type D_n and i ∈{n-1 , n }, thenV_ g(ω_i)= 2^n-1. §.§Our goal is to prove thatW_^ g(rλ_i)≥ W_(rλ_i),r∈.The proof needs several additional results, and we consider the cases 1 ≤ i ≤ n-2 and i=0 separately. Recall that g_0 [t^2] ⊂ g[t]^τ, and so W_(rλ_i) can be regarded as a g_0[t^2]–module by pulling back along the inclusion mapg_0 [t^2]↪ g[t]^τ. For ease of notation wedenote the element w_rλ_i by w_r. (i) For 1 ≤ i ≤ n-2, W_(rλ_i) is generated as ag_0[t^2]–module by w_r and Yw_r where Y is a monomial in the the elements(x^-_p,n⊗ t^2s+1)w_r,p≤ i,0≤ s< r.(ii) W_(rλ_0) is generated asa g_0[t^2]–module by w_r and Yw_r where Y is a monomial in the the elements(x^-_p,n⊗ t^2s+1)w_r,p≤ n,0≤ s< r. 
First, for 1 ≤ i ≤ n-2 the defining relation x^-_0 w_r=0 implies that(x^-_0⊗ t^2s)w_r= ( x^-_n-1⊗ t^2s)w_r= (x^-_n⊗ t^2s+1)w_r=0,s≥ 0.Since x_p^-w_r=0 if p i it follows that(x^-_p,n⊗ t^2s+1)w_r=0,s≥ 0, p>i.Observe also that (x^-_i)^r+1w_r=0 (x^-_i⊗ t^2s)w_r=0,s≥ r,and hence we also have that(x^-_p,n⊗ t^2s+1)w_r=0,s≥ r,p≤ i. A simple application of the PBW theorem now gives (i).For the case i=0, we have(x^-_k,p⊗ t^2s)w_r = 0, 1 ≤ k ≤ p ≤ n-1,s ≥ 0.The relation (x^-_0)^s+1w_r = 0 for s ≥ rimplies that(x^-_0 ⊗ t^2s)w_r =0, s ≥ rand so(x^-_n ⊗ t^2s+1)w_r = 0, s ≥ r.Hence(x^-_p,n⊗ t^2s+1)w_r=0, 1 ≤ p ≤ n,s ≥ rand (ii) is now clear.§.§ We now prove, (i) For 1 ≤ i ≤ n-2, suppose that Y=(x^-_p_1,n⊗ t^2s_1+1)⋯ (x^-_p_k, n⊗ t^2s_k+1) where p_1≤⋯≤ p_k≤ i. Then Yw_r is in the g_0[t^2]–module generated by elements Zw_r where Zis a monomialin the elements (x^-_i,n⊗ t^2s+1) with s∈_+.(ii) For i=0, suppose that Y=(x^-_p_1,n⊗ t^2s_1+1)⋯ (x^-_p_k, n⊗ t^2s_k+1) where p_1≤⋯≤ p_k≤ n. Then Yw_r is in the g_0[t^2]–module generated by elements Zw_r where Zis a monomialin the elements (x^-_n⊗ t^2s+1) with s∈_+. First, let 1 ≤ i ≤ n-2. The proof proceeds by an induction on k. If k=1 and p_1<i thenby settingx^-_p_1,n = [ x^-_p_1, i - 1 , x^-_i,n ]we havex^-_p_1, i-1(x^-_i,n⊗ t^2s_1+1)w_r=(x^-_p_1,n⊗ t^2s_1+1)w_r, hence induction begins.For the inductive step,weobserve that(x^-_p_1,n⊗ t^2s_1+1)( g_0[t^2])⊂( g_0[t^2])⊕∑_m≥ 0∑_p=1^n( g_0[t^2])(x^±_p,n⊗ t^2m+1), and hence it suffices toprove that for all 1≤ p≤ n and Z a monomial in (x^-_i,n⊗ t^2s+1) we have that (x^±_p,n⊗ t^2m+1)Zw_r is in the g_0[t^2]–submodule generated by elements Z'w_r where Z'is a monomial in (x^-_i,n⊗ t^2s+1).Denote this submodule by M. Wegive the proof only for (x^-_p,n⊗ t^2m+1)Zw_r, since the other case is proven similarly. If p=i, there is nothing to prove and if p>i we get(x^-_p,n⊗ t^2m+1)Zw_r=X+Z(x^-_p,n⊗ t^2m+1)w_r,for some element X∈ M. Since (x^-_p,n⊗ t^2m+1)w_r=0 by (<ref>), we are done. If p<i, weconsider(x^-_p, i-1⊗ t^2m)(x^-_i,n⊗ t)^ℓ + 1w_r = A(x^-_p,n⊗ t^2m+1)(x^-_i,n⊗ t)^ℓw_r + B(x^-_p , i̅⊗ t^2m+2)(x^-_i,n⊗ t)^ℓ - 1w_r,for some non–zero constants A and B. Since(x^-_p, i-1⊗ t^2m)(x^-_i,n⊗ t)^ℓ + 1w_r ∈ M,and(x^-_p , i̅⊗ t^2m+2)(x^-_i,n⊗ t)^ℓ - 1w_r ∈ M ,we have,(x^-_p,n⊗ t^2m+1)(x^-_i,n⊗ t)^ℓw_r ∈ M.In order to show(x^-_p, n⊗ t^2m+1)(x^-_i,n⊗ t^2r_1+1) ⋯ (x^-_i,n⊗ t^2r_ℓ + 1)w_r ∈ M we let h ∈ h with [h, x^-_p,n] =0 and [h, x^-_i,n]0. Then (h ⊗ t^2s)(x^-_p, n⊗ t^2m+1)(x^-_i,n⊗ t) ⋯ (x^-_i,n⊗ t)w_r ∈ Mfor all s ≥ 0. An induction on |{ 1 ≤ s ≤ℓ :r_s0}| finishes the proof for 1 ≤ i ≤ n-2.The i=0 case is identical. §.§ Observe that the Lie subalgebra a[t^2] generatedby the elements x^±_i⊗ t^2s, s∈_+ is isomorphic to the current algebra sl_2[t^2]. Hence ( a[t^2])w_r⊂ W_(r λ_i) is a quotient of the local Weyl module for a[t^2] with highest weight r and we can use the results of Proposition <ref>(i). We now prove, (i) For 1 ≤ i ≤ n-2, as a g_0[t^2]–module W_(rλ_i) is spanned by w_r andelementsY(i,)w_r:=(x^-_i,n⊗ t^2s_1+1)⋯ (x^-_i,n⊗ t^2s_k+1)w_r,k≥ 1, ∈_+^k,0≤ s_1≤⋯≤ s_k≤ r-k. (ii) For i =0, as a g_0[t^2]–module W_(rλ_i) is spanned by w_r andelementsY(n,)w_r:=(x^-_n⊗ t^2s_1+1)⋯ (x^-_n⊗ t^2s_k+1)w_r,k≥ 1, ∈_+^k,0≤ s_1≤⋯≤ s_k≤ r-k. First, we consider the case 1 ≤ i ≤ n-2. By Lemma <ref> andLemma <ref> we can suppose that Y is an arbitrary monomial in the elements (x^-_i,n⊗ t^2s+1), s∈_+. We proceed by induction on the length k of Y. If k=1, then we have(x^-_i,n⊗ t^2s+1)w_r=(x^-_i+1, n⊗ t)(x^-_i⊗ t^2s)w_r=0, s≥ r, by Proposition <ref>(i). 
This shows that induction begins.Suppose now that k is arbitrary and ∈_+^k. Then,by induction on k(x^-_i+1,n⊗ t)^k(x^-_i⊗ t^2s_1)⋯ (x^-_i⊗ t^2s_k) = A (x^-_i,n⊗ t^2s_1+1)⋯ (x^-_i,n⊗ t^2s_k+1) +X+ Z,where A is a non–zero complex number and X∈∑_m<k∑_∈_+^m( g_0[t^2])Y(i,),and Z ∈( g[t]^τ) Y(i+1,') and so Zw_r=0 . To see (<ref>) we proceed by induction on k.For the base case, we have(x^-_i+1, n⊗ t)(x^-_i⊗ t^2s_1) = (x^-_i,n⊗ t^2s_1+1) + (x^-_i ⊗ t^2s_1)(x^-_i+1,n⊗ t),so induction begins. For the inductive step, we have(x^-_i+1, n⊗ t)^k(x^-_i ⊗ t^2s_1) ⋯ (x^-_i ⊗ t^2s_k) = (x^-_i+1, n⊗ t)^k-1(x^-_i+1, n⊗ t)(x^-_i ⊗ t^2s_1) ⋯ (x^-_i ⊗ t^2s_k) = (x^-_i+1, n⊗ t)^k-1∑_m=1^k (x^-_i ⊗ t^2s_1) ⋯(x^-_i⊗ t^2s_m)⋯(x^-_i ⊗ t^2s_k)(x^-_i,n⊗ t^2s_m+1) .Applying the inductive hypothesis finishes the proof of (<ref>).To finish the proof of the Lemma for 1 ≤ i ≤ n-2, we use (<ref>) to write(x^-_i,n⊗ t^2s_1+1) ⋯ (x^-_i,n⊗ t^2s_k+1)w_r = (x^-_i+1, n⊗ t)^k(x^-_i ⊗ t^2s_1) ⋯ (x^-_i ⊗ t^2s_k)w_r - Xw_r.The inductive hypothesis applies to Xw_r. By Proposition <ref> we can write(x^-_i+1, n⊗ t)^k(x^-_i ⊗ t^2s_1) ⋯ (x^-_i ⊗ t^2s_k)w_r as a linear combination of elements where s_p≤ r-k.Applying (<ref>) once again to each summand finishes the proof for 1 ≤ i ≤ n-2. The case i=0, is similar, using the identity(x^-_n⊗ t^2s+1)w_r=(x^+_n-1, n⊗ t)(x^-_0⊗ t^2s)w_r=0, s≥ r,for the induction to begin, and (x^+_n-1,n⊗ t)^k(x^-_0⊗ t^2s_1)⋯ (x^-_0⊗ t^2s_k)w_r = A (x^-_n⊗ t^2s_1+1)⋯ (x^-_n⊗ t^2s_k+1)w_rfor the inductive step.§.§ We now prove fundmult, first for 1 ≤ i ≤ n-2. Fix an ordering on the elements Y(i,)w_r, ∈_+^k and s_p≤ r-k as follows: the first element is w_r and an element Y(i,) precedes Y(i,') if ∈_+^k and '∈_+^m if either k<m or k=m and s_1+⋯ + s_k>s_1'+⋯+s_k' and let u_1,…, u_ℓ be an ordered enumeration of this set. Denote by U_p the g_0[t^2]–submodule of W_(rλ_i) generated by the elements u_m, m≤ p. It is straightforward to see that we have an increasing filtration of g_0[t^2]–modules: 0=U_0⊂ U_1⊂⋯⊂ U_ℓ= W_(rλ_i).Moreover U_p/U_p-1 is a quotient of the local Weyl module for g_0[t^2] with highest weight (r-i_p)ω_i+i_pω_i-1 (we understand ω_0=0), if u_p= Y(i,), ∈_+^i_p. Using equation (<ref>) and Proposition <ref>(ii) we getU_p/U_p-1≤(∑_s=0^i2n-1s)^r-i_p(∑_s=0^i-12n-1s)^i_p. Summing we getW_(rλ_i) ≤∑_s=0^rrs(∑_s=0^i2n-1s)^r-s(∑_s=0^i-12n-1s)^s=(2ni+2ni-1+⋯+2n1)^r. For the i=0 case, U_p/U_p-1 is a submodule of the local Weyl module for g_0[t^2] with highest weight (r-2i_p)ω_n+i_pω_n-1=(r-i_p)λ_0+i_pλ_n-1, if u_p= Y(n,), ∈_+^i_p. Using equation (<ref>) and Proposition <ref>(iii) we getU_p/U_p-1≤ (2^n-1)^r-i_p(2^n-1)^i_p. Summing we getW_(rλ_i)≤∑_s=0^rrs(2^n-1)^r-s(2^n-1)^s= (2^n-1+2^n-1)^r = (2^n)^r . Since we have already proved that the reverse equality holds the proof of fundmult is complete§.§ Concluding RemarksWe discuss briefly the structure of the local Weyl modules when λ∈ P_0^+ is not amultiple of a fundamental weight andsuch that _λ is a proper quotient of a polynomial algebra.The simplest example is the case of (B_3, D_3) and λ=λ_0+λ_2,where we have _λ = ℂ[P_2,1, P_3,1]/(P_2,1P_3,1).Givena ∈ℂ^× let _(a,0) denote the maximal ideal corresponding to (P_2,1 -a, P_3,1) and for b ∈ℂ _(0,b) denote the maximal ideal corresponding to (P_2,1 , P_3,1 - b). 
In the first case, the local Weyl module W(λ, _(a,0)) is a pullback of a local Weyl module for the current algebra g [t] and so dim W(λ, _(a,0)) = 22. In the second case the local Weyl module W(λ, _(0,b)) is an extension of the pullback of a local Weyl module for the current algebra by an irreducible g_0–module, and it can be shown that dim W(λ, _(0,b)) = 32 (see <cit.> for details). In particular the dimension of the local Weyl module depends on the choice of the ideal and hence the global Weyl module is not projective and hence not free as an _λ–module. However, we observe the following: If we decompose the variety corresponding to _λ into irreducible components X_1 ∪ X_2, where X_1={(a,0) : a∈ℂ}, X_2={(0,b) : b∈ℂ}, we see that the dimension of the local Weyl module is constant along X_2. So pulling back W(λ) via the algebra map φ:_λ→_λ, P_2,1↦ 0, P_3,1↦ P_3,1, we see that φ^* W(λ) is a free ℂ[P_3,1]–module, where we view ℂ[P_3,1] as the coordinate ring of X_2. In general, preliminary calculations do show that in the case when _λ is a Stanley–Reisner ring there are only finitely many possible dimensions and that the dimension is constant along a suitable irreducible subvariety, i.e. the global Weyl module is free considered as a module for the coordinate ring 𝒪(X) of a suitable irreducible subvariety X. | http://arxiv.org/abs/1706.08765v1 | {
"authors": [
"Vyjayanthi Chari",
"Deniz Kus",
"Matt Odell"
],
"categories": [
"math.RT",
"math.AC",
"17B65, 17B10"
],
"primary_category": "math.RT",
"published": "20170627102624",
"title": "Borel--de Siebenthal pairs, Global Weyl modules and Stanley--Reisner rings"
} |
http://arxiv.org/abs/1706.08939v1 | {
"authors": [
"R. Benton Metcalf",
"Rupert A. C. Croft",
"Alessandro Romeo"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170627164859",
"title": "Noise Estimates for Measurements of Weak Lensing from the Lyman-alpha Forest"
} |
|
Private Data System Enabling Self-Sovereign Storage Managed by Executable Choreographies [Author's version. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-59665-5_6.] Sinică Alboaie^1,2 Doina Cosovan^1 Received 26th June 2017 / Accepted 25th January 2018 =========================================================================================================================================================================================================== We present a framework for the complexity classification of parameterized counting problems that can be formulated as the summation over the numbers of homomorphisms from small pattern graphs H_1,…,H_ℓ to a big host graph G with the restriction that the coefficients correspond to evaluations of the Möbius function over the lattice of a graphic matroid. This generalizes the idea of Curticapean, Dell and Marx [STOC 17] who used a result of Lovász stating that the number of subgraph embeddings from a graph H to a graph G can be expressed as such a sum over the lattice of partitions of H. In the first step we introduce what we call graphically restricted homomorphisms that, inter alia, generalize subgraph embeddings as well as locally injective homomorphisms. We provide a complete parameterized complexity dichotomy for counting such homomorphisms, that is, we identify classes of patterns for which the problem is fixed-parameter tractable (FPT), including an algorithm, and prove that all other pattern classes lead to #-hard problems. The main ingredients of the proof are the complexity classification of linear combinations of homomorphisms due to Curticapean, Dell and Marx [STOC 17] as well as a corollary of Rota's NBC Theorem which states that the sign of the Möbius function over a geometric lattice only depends on the rank of its arguments. We apply the general theorem to the problem of counting locally injective homomorphisms from small pattern graphs to big host graphs, yielding a concrete dichotomy criterion. It turns out that — in contrast to subgraph embeddings — counting locally injective homomorphisms has “real” FPT cases, that is, cases that are fixed-parameter tractable but not polynomial time solvable under standard complexity assumptions. To prove this we show in an intermediate step that the subgraph counting problem remains #-hard when both the pattern and the host graphs are restricted to be trees. We then investigate the more general problem of counting homomorphisms that are injective in the r-neighborhood of every vertex. As those are graphically restricted as well, they can also easily be classified via the general theorem. Finally we show that the dichotomy for counting graphically restricted homomorphisms readily extends to so-called linear combinations. § INTRODUCTION In his seminal work about the complexity of computing the permanent, Valiant <cit.> introduced counting complexity, which has since then evolved into a well-studied subfield of computational complexity. Despite some surprising positive results like polynomial time algorithms for counting perfect matchings in planar graphs by the FKT method <cit.>, counting spanning trees by Kirchhoff's Matrix Tree Theorem or counting Eulerian cycles in directed graphs using the “BEST”-Theorem <cit.>, most of the interesting problems turned out to be intractable. Therefore, several relaxations such as restrictions of input classes <cit.> and approximate counting <cit.> were introduced.
Another possible relaxation, the one this work deals with, is to consider parameterized counting problems as introduced by Flum and Grohe <cit.>. Here, problems come with an additional parameter k and a problem is fixed-parameter tractable (FPT) if it can be solved in time g(k)·(n) where n is the input size and g is a computable function, which yields fast algorithms for large instances with small parameters. On the other hand, a problem is considered intractable if it is #-hard. This stems from the fact that #-hard problems do not allow an FPT algorithm unless standard assumptions such as the exponential time hypothesis (ETH) are wrong. When investigating a family of related (counting) problems one could aim to simultaneously solve the complexity of as many problems as possible, rather than tackling a (possibly infinite) number of problems by hand. For example, instead of proving that counting paths in a graph is hard, then proving that counting cycles is hard and then proving that counting stars is easy, one should, if possible, find a criterion that allows a classification of those problems in hard and easy cases. Unfortunately, there are results like Ladner's Theorem <cit.>, stating that there are problems neither innor -hard (assuming ≠), which give a negative answer to that goal in general. However, there are families of problems that have enough structure to allow so-called dichotomy results. One famous example, and to the best of the authors knowledge this was the first such result, is Schaefer's dichotomy <cit.>, stating that every instance of the generalized satisfiability problem is either polynomial time solvable or -complete. Since then much work has been done to generalize this result, culminating in recent announcements (<cit.>,<cit.>,<cit.>) of a proof of the Feder-Vardi-Conjecture <cit.>. This question was open for almost twenty years and indicates the difficulty of proving such dichotomy results, at least for decision problems. In counting complexity, however, it seems that obtaining such results is less cumbersome. One reason for this is the existence of some powerful techniques like polynomial interpolation <cit.>, the Holant framework <cit.> as well as the principle of inclusion-exclusion which all have been used to establish very revealing dichotomy results such as <cit.>. Examples of dichotomies in parameterized counting complexity are the complete classifications of the homomorphism counting problem due to Dalmau and Jonsson <cit.>[Ultimately, the results of <cit.> and this work rely on the dichotomy for counting homomorphisms] and the subgraph counting problem due to Curticapean and Marx <cit.>. For the latter, one is given graphs H and G and wants to count the number of subgraphs of G isomorphic to H, parameterized by the size of H. It is known that this problem is polynomial time solvable if there is a constant upper bound on the size of the largest matching of H and #-hard otherwise[On the other hand the complexity of the decision version of this problem, that is, finding a subgraph of G isomorphic to H, is still unresolved. Only recently it was shown in a major breakthrough that finding bicliques is hard <cit.>.]. The first step in this proof was the hardness result of counting matchings of size k of Curticapean <cit.>, which turned out to be the “bottleneck” problem and was then reduced to the general problem.This approach, first finding the hard obstructions and then reducing to the general case, seemed to be the canonical way to tackle such problems. 
However, recently Curticapean, Dell and Marx <cit.> discovered that a result of Lovász <cit.> implies the existence of parameterized reductions that, inter alia, allow a far easier proof of the general subgraph counting problem. Lovász's result states that, given simple graphs H and G, it holds that #(H,G) = ∑_ρ≥∅μ(∅,ρ) ·#(H/ρ,G), where the sum is over the elements of the partition lattice of V(H), (H,G) is the set of embeddings[Note that embeddings and subgraphs are equal up to automorphisms, that is, counting embeddings and counting subgraphs are essentially the same problem.] from H to G and (H/ρ,G) is the set of homomorphisms from the graph H/ρ obtained from H by identifying vertices along ρ to G. Furthermore μ is the Möbius function. In their work Curticapean, Dell and Marx showed in a general theorem that a summation ∑_i=1^ℓ c_i ·#(H_i,G) for pairwise non-isomorphic graphs H_i is #-hard if there is no upper bound on the treewidth of the pattern graphs H_i and fixed-parameter tractable otherwise, using a dichotomy for counting homomorphisms due to Dalmau and Jonsson <cit.>. Having this, one only has to show two properties of (<ref>) to obtain the dichotomy for #. First, one has to show that a high matching number of H implies that one of the graphs H/ρ has high treewidth and second, that two (or more) terms with high treewidth and isomorphic graphs H/ρ and H/σ do not cancel out (note that the Möbius function can be negative). As there is a closed form for the Möbius function over the partition lattice it was possible to show that whenever H/ρ and H/σ are isomorphic the sign of the Möbius function is equal. §.§ Our results The motivation of this work is the question whether the result of Curticapean, Dell and Marx can be generalized to construct a framework for the complexity classification of counting problems that can be expressed as the summation over homomorphisms, and it turns out that this is possible whenever the summation is over the lattice of a graphic matroid and the coefficients are evaluations of the Möbius function over the lattice, capturing not only embeddings but also locally injective homomorphisms. In Section <ref> we introduce what we call graphically restricted homomorphisms: Intuitively, a graphical restriction τ(H) of a graph H is a set of forbidden binary vertex identifications of H, modeled as a graph with vertex set V(H) and edges along the binary constraints. We write (H) as the set of all graphs obtained from H by contracting vertices along edges in τ(H) and deleting multiedges, excluding those that contain selfloops. Now a graphically restricted homomorphism from H to G with respect to τ is a homomorphism from H to G that maps every pair of vertices u,v ∈ V(H) that are adjacent in τ(H) to different vertices in G. We write (H,G) for the set of all graphically restricted homomorphisms w.r.t. τ from H to G and provide a complete complexity classification for counting graphically restricted homomorphisms: Computing #(H,G) is fixed-parameter tractable when parameterized by |V(H)| if the treewidth of every graph in (H) is small. Otherwise the problem is #-hard. In particular, we obtain the following algorithmic result: There exists a deterministic algorithm that computes #(H,G) in time g(|V(H)|) · |V(G)|^𝗍𝗐((H)) + 1, where g is a computable function and 𝗍𝗐((H)) is the maximum treewidth of every graph in (H).
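To make the definition concrete, the following brute-force sketch (our illustration, not the algorithm behind the theorem; all identifiers are invented, and the running time is exponential in |V(H)| rather than the promised FPT bound) counts graphically restricted homomorphisms directly from the definition:

from itertools import product

def count_restricted_homs(H_vertices, H_edges, tau_edges, G_vertices, G_edges):
    # Count homomorphisms phi from H to G with phi(u) != phi(v) for every
    # pair {u, v} that is an edge of the graphical restriction tau(H).
    G_adj = set()
    for u, v in G_edges:
        G_adj.add((u, v))
        G_adj.add((v, u))
    count = 0
    for image in product(G_vertices, repeat=len(H_vertices)):
        phi = dict(zip(H_vertices, image))
        if all((phi[u], phi[v]) in G_adj for u, v in H_edges) \
                and all(phi[u] != phi[v] for u, v in tau_edges):
            count += 1
    return count

# If tau(H) is the complete graph on V(H) this counts embeddings; if tau(H)
# is edgeless it counts plain homomorphisms. Example: the path a-b-c mapped
# into a triangle.
path = (["a", "b", "c"], [("a", "b"), ("b", "c")])
triangle = ([1, 2, 3], [(1, 2), (2, 3), (1, 3)])
print(count_restricted_homs(*path, [], *triangle))  # 12 homomorphisms
print(count_restricted_homs(*path, [("a", "b"), ("b", "c"), ("a", "c")], *triangle))  # 6 embeddings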
Having established the general dichotomy we observe that there exist graphical restrictions τ_𝖼𝗅𝗂𝗊𝗎𝖾 andsuch that _τ_𝖼𝗅𝗂𝗊𝗎𝖾(H,G) is the set of all subgraph embeddings from H to G and (H,G) is the set of all locally injective homomorphisms from H to G.As a consequence we obtain a full complexity dichotomy for counting locally injective homomorphisms from small pattern graphs H to big host graphs G. To the best of the author's knowledge, this is the first result about the complexity of counting locally injective homomorphisms. Computing the number of locally injective homomorphisms from H to G is fixed-parameter tractable when parameterized by |V(H)| if the treewidth of every graph in (H) is small. Otherwise the problem is #-hard.Moreover, there exists a deterministic algorithm that computes this number in time g(|V(H)|) · |V(G)|^𝗍𝗐((H)) + 1, where g is a computable function and 𝗍𝗐((H)) is the maximum treewidth of every graph in (H). We then observe that — in contrast to subgraph embeddings — counting locally injective homomorphisms has “real” FPT cases, that is, cases that are fixed-parameter tractable but not polynomial time solvable under standard assumptions. We show this by restricting the pattern graph to be a tree: Computing the number of locally injective homomorphisms from a tree T to a graph G can be done in deterministic time g(|V(T)|) · |V(G)|^2, that is, the problem is fixed-parameter tractable when parameterized by |V(T)|. On the other hand, the problem is #-hard. To prove #-hardness, we prove in an intermediate step that the subgraph counting problem remains hard when both graphs are restricted to be trees, which may be of independent interest: The problem of, given trees T_1 and T_2, computing the number of subtrees of T_2 that are isomorphic to T_1 is #-hard. After that we generalize locally injective homomorphisms to homomorphisms that are injective in the r-neighborhood of every vertex and observe that those are also graphically restricted and consequently obtain a counting dichotomy as well.Finally, we show in Section <ref> that all results can easily be extended to so-called linear combinations of graphically restricted homomorphisms. Here one gets as input graphs H_1,…,H_ℓ together with positive coefficients c_1,…,c_ℓ and a graph G and the goal is to compute∑_i=1^ℓ c_i ·#_τ_i(H_i,G),for graphical restrictions τ_1,…,τ_ℓ. This generalizes for example problems like counting all trees of size k in G or counting all locally injective homomorphisms from all graphs of size k to G or a combination thereof. We find out that, under some conditions, the dichotomy criteria transfer immediately to linear combinations: Computing ∑_i=1^ℓ c_i ·#_τ_i(H_i,G) is fixed-parameter tractable when parameterized by max_i{|V(H_i)|} if the maximum treewidth of every graph in ⋃_i^ℓτ_i-ℳ(H_i) is small. Otherwise, if additionally |V(H_i)| has the same parity for every i ∈ [ℓ], the problem is #-hard. Furthermore we observe that this theorem is not true on the #-hardness side if we omit the parity condition. §.§ TechniquesThe main ingredients of the proofs of Theorem <ref> and Theorem <ref> are the complexity classification of linear combinations of homomorphisms due to Curticapean, Dell and Marx (see Lemma 3.5 and Lemma 3.8 in <cit.>) as well as a corollary of Rota's NBC Theorem (see e.g. Theorem 4 in <cit.>). 
In the first step we prove the following identity for the number of graphically restricted homomorphisms via Möbius inversion: #(H,G) = ∑_ρ≥∅μ(∅,ρ)·#(H/ρ,G),where the sum is over elements of the lattice of flats of the graphical matroid given by τ(H) and H/ρ is the graph obtained by contracting the vertices of H along the flat ρ. After that we use Rota's Theorem to prove that none of the terms cancel out[Here “cancel out” means that it could be possible that H/ρ and H/σ are isomorphic, but μ(∅,ρ) = - μ(∅,σ) and all other H/ρ' are not isomorphic to H/ρ. In this case, the term #(H/ρ,G) would vanish in the above identity.], despite the fact that the Möbius function can be negative. More precisely we show that whenever H/ρ≅ H/σ, we have that 𝗋𝗄(ρ) = 𝗋𝗄(σ) and therefore, by Rota's Theorem, 𝗌𝗀𝗇(μ(∅,ρ)) = 𝗌𝗀𝗇(μ(∅,σ)).The dichotomies for locally injective homomorphisms and homomorphisms that are injective in the r-neighborhood of every vertex are mere applications of the general theorem. For #-hardness of the subgraph counting problem restricted to trees, we adapt the idea of the “skeleton graph” by Goldberg and Jerrum <cit.> and reduce directly from computing the permanent. To transfer this result to locally injective homomorphisms we use the well-known observation that locally injective homomorphisms from a tree to a tree are embeddings.Finally, we prove the dichotomy for linear combinations of graphically restricted homomorphisms by taking a closer look at the proof of Theorem <ref>. Here, the parity constraint of the vertices of the graphs in the linear combination assures that there are no graphs H_i and H_j and elements ρ_i and ρ_j of the matroid lattices of τ_i(H_i) and τ_j(H_j) such that H_i/ρ_i and H_j/ρ_j are isomorphic but ρ_i and ρ_j have ranks of different parities. Using this observation, Theorem <ref> can be proven in the same spirit as Theorem <ref>.§ PRELIMINARIES First we will introduce some basic notions: Given a finite set S, we write |S| or #S for the cardinality of S. Given a natural number ℓ we let [ℓ] be the set 1,…,ℓ. Given a real number r we define the sign 𝗌𝗀𝗇(r) of r to be 1 if r > 0, 0 if r = 0 and -1 if r < 0.A poset is a pair (P,≤) where P is a set and ≤ is a binary relation on P that is reflexive, transitive and anti-symmetric. Throughout this paper we will write y ≥ x if x ≤ y. A lattice is a poset (L,≤) such that every pair of elements x,y ∈ L has a least upper bound x ∨ y and a greatest lower bound x ∧ y that satisfy: * x ∨ y ≥ x, x ∨ y ≥ y and for all z such that z ≥ x and z ≥ y it holds that z ≥ x ∨ y.* x ∧ y ≤ x, x ∧ y ≤ y and for all z such that z ≤ x and z ≤ y it holds that z ≤ x ∧ y.Given a finite set S, a partition of S is a set ρ of pairwise disjoint subsets of S such that ⋃̇_s ∈ρ s = S. We call the elements of ρ blocks. For two partitions ρ and σ we write ρ≤σ if every element of ρ is a subset of some element of σ. This binary relation is a lattice and called the partition lattice of S. We will in particular encounter lattices of graphic matroids in our proofs. §.§ MatroidsWe will follow the definitions of Chapt. 1 of the textbook of Oxley <cit.>.A matroid M is a pair (E,ℐ) where E is a finite set and ℐ⊆𝒫(E) such that (1) ∅∈ℐ,(2) if A ∈ℐ and B ⊆ A then B ∈ℐ, and(3) if A,B ∈ℐ and |B|<|A| then there exists a ∈ A ∖ B such that B∪a∈ℐ. We call E the ground set and an element A ∈ℐ an independent set. A maximal independent set is called a basis. 
The rank 𝗋𝗄(M) of M is the size of its bases[This is well-defined as every maximal independent set has the same size due to (3).]. Given a subset X ⊆ E we define ℐ|X := {A ⊆ X | A ∈ℐ}. Then M|X := (X,ℐ|X) is also a matroid and called the restriction of M to X. Now the rank 𝗋𝗄(X) of X is the rank of M|X. Equivalently, the rank of X is the size of the largest independent set A ⊆ X. Furthermore we define the closure of X as follows: 𝖼𝗅(X) := {e ∈ E | 𝗋𝗄(X ∪{e}) = 𝗋𝗄(X)}. Note that by definition 𝗋𝗄(X)=𝗋𝗄(𝖼𝗅(X)). We say that X is a flat if 𝖼𝗅(X)=X. We denote L(M) as the set of flats of M. It holds that L(M) together with the relation of inclusion is a lattice, called the lattice of flats of M. The least upper bound of two flats X and Y is 𝖼𝗅(X ∪ Y) and the greatest lower bound is X ∩ Y. It is known that the lattices of flats of matroids are exactly the geometric lattices[For the purpose of this paper we do not need the definition of geometric lattices but rather the equivalent one in terms of lattices of flats and therefore omit it. We recommend e.g. Chapt. 3 of <cit.> and Chapt. 1.7 of <cit.> to the interested reader.] and we denote the set of those lattices as ℒ. In Section <ref> we take a closer look at (lattices of flats of) graphic matroids: Given a graph H=(V,E) ∈𝒢, the graphic matroid M(H) has ground set E and a set of edges is independent if and only if it does not contain a cycle. If H is connected then a basis of M(H) is a spanning tree of H. If H consists of several connected components then a basis of M(H) induces spanning trees for each of those. Every subset X of E induces a partition of the vertices of H where the blocks are the vertices of the connected components of H|_X and it holds that 𝗋𝗄(X) = |V(H)| - c(H|_X). In particular, the flats of M(H) correspond bijectively to the partitions of the vertices of H into connected components, since adding to X an element that does not change the rank also leaves the connected components unchanged. For convenience we will therefore abuse notation and say, given an element ρ of the lattice of flats of M(H), that ρ partitions the vertices of H where the blocks are the vertices of the connected components of H|_ρ. The following observation will be useful in Section <ref>: Let ρ,σ∈ L(M(H)) for a graph H∈𝒢. If ρ and σ have the same number of blocks then 𝗋𝗄(ρ)=𝗋𝗄(σ). Immediately follows from Equation (<ref>). We denote H/ρ as the graph obtained from H by contracting the vertices of H that are in the same component of ρ and deleting multiedges (but keeping selfloops). As the vertices of H/ρ partition the vertices of H, we think of the vertices of H/ρ as subsets of vertices of H and call them blocks. Furthermore we write [v] for the block containing v. §.§ Graphs and homomorphisms In this work all graphs are considered unlabeled and simple but may allow selfloops unless stated otherwise. We denote the set of all those graphs as 𝒢. Furthermore we denote 𝒢' as the set of all unlabeled and simple graphs without selfloops. For a graph G we write n for the number of vertices V(G) of G and m for the number of edges E(G) of G. We denote c(G) as the number of connected components of G. Furthermore, given a subset X of edges, we denote G|_X as the graph with vertices V(G) and edges X. Given a partition of vertices ρ of a graph H, we write H/ρ as the graph obtained from H by contracting the vertices of H that are in the same component of ρ and deleting multiedges (but keeping selfloops). As before, we think of the vertices of H/ρ as blocks and write [v] for the block containing v.
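Since the contraction H/ρ is used throughout the paper, a concrete implementation may help fix the conventions. The following is a minimal Python sketch (ours, not from the paper; the function name quotient is illustrative): blocks of the partition become vertices, multiedges are dropped, and selfloops are kept.

```python
# Minimal sketch (ours, not the paper's code) of the contraction H/rho:
# blocks of the partition become vertices, multiedges are dropped and
# selfloops are kept (an edge inside a block becomes a selfloop).
def quotient(edges, rho):
    """edges: iterable of vertex pairs; rho: list of disjoint vertex sets.
    Returns the edge set of H/rho as frozensets of size 2 (ordinary edge)
    or size 1 (selfloop over a block)."""
    block_of = {v: frozenset(B) for B in rho for v in B}
    return {frozenset({block_of[u], block_of[v]}) for (u, v) in edges}

# Path 1-2-3: contracting the independent pair {1,3} yields a single edge,
# while contracting the adjacent pair {1,2} yields a selfloop.
assert all(len(e) == 2 for e in quotient([(1, 2), (2, 3)], [{1, 3}, {2}]))
assert any(len(e) == 1 for e in quotient([(1, 2), (2, 3)], [{1, 2}, {3}]))
```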
Given graphs H and G, a homomorphism from H to G is a mapping φ : V(H) → V(G) such that {u,v}∈ E(H) implies that {φ(u),φ(v)}∈ E(G). We denote 𝖧𝗈𝗆(H,G) as the set of all homomorphisms from H to G. A homomorphism is called an embedding if it is injective and we denote 𝖤𝗆𝖻(H,G) as the set of all embeddings from H to G. An embedding from H to H is called an automorphism of H. We denote 𝖠𝗎𝗍(H) as the set of all automorphisms of H. Furthermore we let 𝖲𝗎𝖻(H,G) be the set of all subgraphs of G that are isomorphic to H. Then it holds that #𝖠𝗎𝗍(H)·#𝖲𝗎𝖻(H,G) = #𝖤𝗆𝖻(H,G) (see e.g. <cit.>). Given a set S and a function α : S →ℚ, we define the support of α as follows: 𝗌𝗎𝗉𝗉(α) := {s ∈ S | α(s)≠ 0}. A graph parameter that will be of quite some importance to define the dichotomy criteria is the treewidth of a graph, capturing how “tree-like” a graph is: A tree decomposition of a graph G ∈𝒢 is a pair 𝒯=(T,{X_t}_t∈ V(T)), where T is a tree whose every node t is assigned a vertex subset X_t ⊆ V(G), such that: (1) ⋃_t ∈ V(T) X_t = V(G). (2) For every {u,v}∈ E(G), there exists t ∈ V(T) such that u and v are contained in X_t. (3) For every u ∈ V(G), the set T_u:={t ∈ V(T) | u ∈ X_t} induces a connected subtree of T. The width of 𝒯 is the size of the largest X_t for t ∈ V(T) minus 1 and the treewidth of G is the minimum width of any tree decomposition of G. We write 𝗍𝗐(G) for the treewidth of G. Given a finite set of graphs ℳ, we denote 𝗍𝗐(ℳ) as the maximum treewidth of any graph in ℳ. Examples of graphs with small treewidth are matchings, paths and more generally trees and forests or cycles. On the other hand, graphs with high treewidth are for example cliques, bicliques and grid graphs. Throughout this paper we will often say that a set C of graphs has bounded treewidth meaning that there is a constant B such that the treewidth of every graph H∈ C is bounded by B. §.§ Parameterized counting We will mainly follow the definitions of Chapt. 14 of the textbook of Flum and Grohe <cit.>. A parameterized counting problem is a function F:{0,1}^∗→ℕ together with a polynomial-time computable parameterization k:{0,1}^∗→ℕ. A parameterized counting problem is fixed-parameter tractable if there exists a computable function g such that it can be solved in time g(k(x))· |x|^O(1) for any input x. A parameterized Turing reduction from (F,k) to (F',k') is an FPT algorithm w.r.t. parameterization k with oracle (F',k') that on input x computes F(x) and additionally satisfies that there exists a function g' such that for every oracle query y it holds that k'(y)≤ g'(k(x)). A parameterized counting problem (F,k) is #𝖶[1]-hard if there exists an FPT Turing reduction from #k-𝖼𝗅𝗂𝗊𝗎𝖾 to (F,k), where #k-𝖼𝗅𝗂𝗊𝗎𝖾 is the problem of, given a graph G and a parameter k, computing the number of cliques of size k in G[For a more detailed introduction to #𝖶[1] we recommend <cit.> to the interested reader.]. Under standard assumptions (e.g. under the exponential time hypothesis) #𝖶[1]-hard problems are not fixed-parameter tractable. The following two parameterized counting problems will be of particular importance in this work: Given a class of graphs C⊆𝒢, #𝖧𝗈𝗆(C) (#𝖤𝗆𝖻(C)) is the problem of, given a graph H ∈ C and a graph G ∈𝒢', computing #𝖧𝗈𝗆(H,G) (#𝖤𝗆𝖻(H,G)). Both problems are parameterized by #V(H). Their complexity has already been classified: Let C be a recursively enumerable class of graphs.
If C has bounded treewidth then #𝖧𝗈𝗆(C) can be solved in polynomial time. Otherwise #𝖧𝗈𝗆(C) is #𝖶[1]-hard. Let C be a recursively enumerable class of graphs. If C has bounded matching number then #𝖤𝗆𝖻(C) can be solved in polynomial time. Otherwise #𝖤𝗆𝖻(C) is #𝖶[1]-hard. Recall that “bounded treewidth (matching number)” means that there is a constant B such that the treewidth (size of the largest matching) of any graph in C is bounded by B. §.§ Linear combinations of homomorphisms and Möbius inversion Curticapean, Dell and Marx <cit.> introduced the following parameterized counting problem: Let 𝒜 be a set of functions a: 𝒢 → ℚ with finite support[We can also think of 𝒜 being a set of lists.]. We define the parameterized counting problem #𝖧𝗈𝗆(𝒜) as follows: Given a ∈𝒜 and G ∈𝒢, compute ∑_H ∈supp(a) a(H) ·#𝖧𝗈𝗆(H, G), parameterized by max_H ∈supp(a)#V(H). Note that this problem generalizes #𝖧𝗈𝗆(C). The following theorem will be the foundation of all complexity results in this paper: If 𝒜 has bounded treewidth then #𝖧𝗈𝗆(𝒜) can be solved in time g(|supp(a)|) · n^O(1) on input (a,G) where n = |V(G)| and g is a computable function. Otherwise the problem is #𝖶[1]-hard. In their paper, the authors show how this result can be used to give a much simpler proof of Theorem <ref>. The idea is that every problem #𝖤𝗆𝖻(C) is equivalent to a problem #𝖧𝗈𝗆(𝒜). As all proofs in this work are in the same flavour, we will outline the technique here, using #𝖤𝗆𝖻(C) as an example. Therefore, we first need to introduce the so-called Möbius inversion (we recommend reading <cit.> for a more detailed introduction): Let (P,≤) be a poset and h:P →ℂ be a function. Then the zeta transformation ζ h is defined as follows: ζ h (σ) := ∑_ρ≥σ h(ρ). Let (P,≤) and h be as in Definition <ref>. Then there is a function μ_P: P × P →ℤ such that for all σ∈ P it holds that h(σ) = ∑_ρ≥σμ_P(σ,ρ) ·ζ h (ρ). μ_P is called the Möbius function. The following identity is due to Lovász <cit.>: #𝖧𝗈𝗆(H/σ,G) = ∑_ρ≥σ#𝖤𝗆𝖻(H/ρ,G), where σ and ρ are partitions of the vertices of H and ≥ is the order of the partition lattice of H. Now Möbius inversion yields the following identity <cit.>: #𝖤𝗆𝖻(H,G) = ∑_ρ≥∅μ(∅,ρ) ·#𝖧𝗈𝗆(H/ρ,G), where μ is the Möbius function over the partition lattice. Therefore, for every class of graphs C, there is a family of functions with finite support 𝒜 such that #𝖤𝗆𝖻(C) and #𝖧𝗈𝗆(𝒜) are the same problems. Now Curticapean, Dell and Marx show that C has unbounded matching number if and only if 𝒜 has unbounded treewidth. The critical point in this proof was to show that the sign of μ(∅,ρ) only depends on the number of blocks of ρ, which implies that for two isomorphic quotient graphs H_1 and H_2, the terms #𝖧𝗈𝗆(H_1,G) and #𝖧𝗈𝗆(H_2,G) appear with coefficients of the same sign in the above identity and therefore do not cancel out in the homomorphism basis. As there is a closed form for μ(∅,ρ)[Here it is crucial that μ is the Möbius function over the (complete) partition lattice.], the information about the sign could easily be extracted. The motivation of this work is the question whether this can be made more general and it turns out that a corollary of Rota's NBC Theorem <cit.> (see also <cit.>) captures exactly what we need: Let L be a geometric lattice with unique minimal element ∅ and let ρ be an element of L. Then it holds that 𝗌𝗀𝗇(μ_L(∅,ρ)) = (-1)^𝗋𝗄(ρ).
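Before moving on, the inversion identity above can be checked mechanically on small instances. The following self-contained Python sketch (ours; all names are illustrative, and the brute force is exponential and only meant for tiny graphs) compares #𝖤𝗆𝖻(H,G) against ∑_ρ μ(∅,ρ)·#𝖧𝗈𝗆(H/ρ,G) over the full partition lattice, using the well-known closed form μ(∅,ρ)=∏_B (-1)^{|B|-1}(|B|-1)! for the partition lattice (requires Python 3.8+ for math.prod).

```python
# Sketch (ours): verify #Emb(H,G) = sum_rho mu(bottom,rho) * #Hom(H/rho,G)
# by brute force over the partition lattice of V(H).
from itertools import product
from math import factorial, prod

def partitions(s):
    s = list(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for p in partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i+1:]
        yield [[first]] + p

def count_hom(VH, EH, VG, EG):
    EG = {frozenset(e) for e in EG}          # simple G: no selfloops
    total = 0
    for phi in product(VG, repeat=len(VH)):
        m = dict(zip(VH, phi))
        if all(frozenset({m[u], m[v]}) in EG for (u, v) in EH):
            total += 1                       # a selfloop in H never maps
    return total

def count_emb_via_moebius(VH, EH, VG, EG):
    total = 0
    for rho in partitions(VH):
        mu = prod((-1) ** (len(B) - 1) * factorial(len(B) - 1) for B in rho)
        block = {v: i for i, B in enumerate(rho) for v in B}
        Eq = {(block[u], block[v]) for (u, v) in EH}   # edges of H/rho
        total += mu * count_hom(range(len(rho)), Eq, VG, EG)
    return total

# Sanity check on H = path 1-2-3 and G = triangle: 3! = 6 embeddings.
VH, EH = [1, 2, 3], [(1, 2), (2, 3)]
VG, EG = [0, 1, 2], [(0, 1), (1, 2), (0, 2)]
assert count_emb_via_moebius(VH, EH, VG, EG) == 6
```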
In the following we will show that combining Rota's Theorem and the dichotomy for counting linear combinations of homomorphisms yields complete complexity classifications for the problems of counting those restricted homomorphisms that, when transformed into the homomorphism basis, induce a Möbius inversion over the lattice of a graphic matroid (such lattices are known to be geometric). Those include embeddings as well as locally injective homomorphisms. § GRAPHICALLY RESTRICTED HOMOMORPHISMS In the following we write ∅ for the minimal element of a matroid lattice. A graphical restriction is a computable mapping τ that maps a graph H ∈𝒢 to a graph H' ∈𝒢 such that V(H)=V(H'), that is, τ only modifies edges of H. We denote the set of all graphical restrictions as Τ. Given graphs H and G and a graphical restriction τ, we define the set of graphically restricted homomorphisms w.r.t. τ from H to G as follows: 𝖧𝗈𝗆_τ(H,G) := {φ∈𝖧𝗈𝗆(H,G) | ∀ u,v ∈ V(H): {u,v}∈ E(τ(H)) ⇒φ(u) ≠φ(v)}. Given a recursively enumerable class of graphs C ⊆𝒢, we define the parameterized counting problem #𝖧𝗈𝗆_τ(C) as follows: Given a graph H ∈ C and a graph G ∈𝒢', we parameterize by |V(H)| and wish to compute #𝖧𝗈𝗆_τ(H,G). Assume for example that τ_𝖼𝗅𝗂𝗊𝗎𝖾 maps a graph H to the complete graph with vertices V(H). Then one can easily verify that 𝖧𝗈𝗆_τ_𝖼𝗅𝗂𝗊𝗎𝖾(H,G) = 𝖤𝗆𝖻(H,G). The following lemma is an application of Möbius inversion (and slightly generalizes <cit.>). Let τ be a graphical restriction. Then for all graphs H ∈𝒢 and G ∈𝒢' it holds that #𝖧𝗈𝗆_τ(H,G)=∑_ρ≥∅μ(∅,ρ)·#𝖧𝗈𝗆(H/ρ,G), where ≤ and μ are the relation and the Möbius function of the lattice L(M(τ(H))). Let τ and H be fixed and let 𝖧𝗈𝗆(H/ρ,G)[τ] be the set of all homomorphisms φ∈𝖧𝗈𝗆(H/ρ,G) such that {u,v}∈ E(τ(H)) and [u]≠[v] imply that φ([u]) ≠φ([v]). More precisely: 𝖧𝗈𝗆(H/ρ,G)[τ] := {φ∈𝖧𝗈𝗆(H/ρ,G) | ∀ u,v ∈ V(H): {u,v}∈ E(τ(H)) ∧ [u]≠[v] ⇒φ([u]) ≠φ([v])}. We will first prove the following identities: For all G ∈𝒢' it holds that #𝖧𝗈𝗆(H/∅,G)[τ] = #𝖧𝗈𝗆_τ(H,G). Every block in H/∅ is a singleton and H ≅ H/∅. Now the identity trivially follows from Definition <ref>. For all G ∈𝒢' and σ∈ L(M(τ(H))) it holds that #𝖧𝗈𝗆(H/σ,G) = ∑_ρ≥σ#𝖧𝗈𝗆(H/ρ,G)[τ]. Let [v] be the block of v in H/σ. We define an equivalence relation ∼_τ over 𝖧𝗈𝗆(H/σ,G) as follows: φ∼_τψ :⇔∀{u,v}∈ E(τ(H)): φ([u])=φ([v]) ⇔ψ([u])=ψ([v]). We write [φ]_τ for the equivalence class of φ and let H/[φ]_τ be the graph obtained from H/σ by further contracting different blocks [u] and [v] whenever {u,v}∈ E(τ(H)) and φ([u])=φ([v]) (note that this is well-defined by the definition of ∼_τ). Now consider σ in the graphic matroid M(τ(H)). Every block [v] corresponds to a connected component of the flat given by σ. Now contracting different blocks [u] and [v] for {u,v} in E(τ(H)) is a refinement of σ obtained by adding the edge {u,v} in M(τ(H)) and taking the closure. Therefore the equivalence classes of ∼_τ and the refinements of σ in the matroid lattice correspond bijectively and we write [ρ]_τ for the equivalence class corresponding to ρ. It remains to show that for every ρ≥σ we have that |[ρ]_τ| = #𝖧𝗈𝗆(H/ρ,G)[τ]. This can be proven by constructing a bijection b. We write [v]_σ for blocks in H/σ and [v]_ρ for blocks in H/ρ. On input φ∈ [ρ]_τ, b outputs the homomorphism in 𝖧𝗈𝗆(H/ρ,G)[τ] that maps a block [v]_ρ to φ([v]_σ). This is well-defined as φ maps blocks [u]_σ and [v]_σ to the same vertex in G if and only if they are subsumed by a common block in H/ρ (recall that ρ≥σ in the matroid lattice).
On the other hand we can construct a mapping b' that given ψ∈𝖧𝗈𝗆(H/ρ,G)[τ] outputs the homomorphism in [ρ]_τ that maps a block [v]_σ to the image of the block [v]_ρ (that subsumes [v]_σ) according to ψ. Now b ∘ b' = 𝗂𝖽_𝖧𝗈𝗆(H/ρ,G)[τ] and b' ∘ b = 𝗂𝖽_[ρ]_τ. Consequently, b is a bijection and Equation <ref> holds. Now we have #𝖧𝗈𝗆(H/σ,G) = |⋃̇_[φ]_τ∈𝖧𝗈𝗆(H/σ,G)/∼_τ [φ]_τ| = |⋃̇_ρ≥σ [ρ]_τ| = ∑_ρ≥σ|[ρ]_τ| = ∑_ρ≥σ#𝖧𝗈𝗆(H/ρ,G)[τ], which proves the claim. Now Claim <ref> is a zeta transform over the matroid lattice of M(τ(H)). By Möbius inversion (Theorem <ref>) we obtain that #𝖧𝗈𝗆(H/∅,G)[τ] = ∑_ρ≥∅μ(∅,ρ) ·#𝖧𝗈𝗆(H/ρ,G), and hence, by Claim <ref>, #𝖧𝗈𝗆_τ(H,G) = ∑_ρ≥∅μ(∅,ρ) ·#𝖧𝗈𝗆(H/ρ,G). Intuitively, we will now show that counting graphically restricted homomorphisms from H to G is hard if we can “glue” vertices of H together along edges of τ(H) such that the resulting graph has no selfloops and high treewidth. We will capture this intuition formally: Let H ∈𝒢 be a graph and let τ be a graphical restriction. A graph H'∈𝒢 obtained from H by contracting pairs of vertices u and v such that {u,v}∈ E(τ(H)) and deleting multiedges (but keeping selfloops) is called a τ-contraction of H. If additionally H' ∈𝒢', that is, the contraction did not yield selfloops, we call H' a τ-minor of H. We denote the set of all τ-minors of H as τ-ℳ(H) and given a class of graphs C ⊆𝒢 we denote the set of all τ-minors of all graphs in C as τ-ℳ(C). Finally, we can classify the complexity of counting graphically restricted homomorphisms along the treewidth of their τ-minors: Let τ be a graphical restriction and let C ⊆𝒢 be a recursively enumerable class of graphs. Then #𝖧𝗈𝗆_τ(C) is FPT if τ-ℳ(C) has bounded treewidth and #𝖶[1]-hard otherwise. Furthermore, given H ∈𝒢 and G ∈𝒢', there exists a deterministic algorithm that computes #𝖧𝗈𝗆_τ(H,G) in time g(|V(H)|) · |V(G)|^𝗍𝗐(τ-ℳ(H)) + 1, where g is a computable function. By Lemma <ref> we have that #𝖧𝗈𝗆_τ(H,G) = ∑_ρ≥∅μ(∅,ρ) ·#𝖧𝗈𝗆(H/ρ,G). Now, as G has no selfloops, a term #𝖧𝗈𝗆(H/ρ,G) is zero whenever H/ρ has a selfloop. Consequently, for every non-zero term #𝖧𝗈𝗆(H/ρ,G), it holds that H/ρ∈τ-ℳ(H). Therefore, by Lemma 3.5 in <cit.>, we obtain an algorithm computing #𝖧𝗈𝗆_τ(H,G) in time g(|V(H)|) · |V(G)|^𝗍𝗐(τ-ℳ(H)) + 1, for a computable function g. This immediately implies that the problem #𝖧𝗈𝗆_τ(C) is fixed-parameter tractable if τ-ℳ(C) has bounded treewidth. It remains to show that #𝖧𝗈𝗆_τ(C) is #𝖶[1]-hard otherwise. By condensing all terms #𝖧𝗈𝗆(H/ρ,G) and #𝖧𝗈𝗆(H/σ,G) where H/ρ and H/σ are isomorphic, it follows that there exist coefficients c_H[H'] for every H' ∈τ-ℳ(H) such that #𝖧𝗈𝗆_τ(H,G) = ∑_H' ∈τ-ℳ(H) c_H[H'] ·#𝖧𝗈𝗆(H',G). We will now show that none of the c_H[H'] is zero: It holds that c_H[H'] = ∑_ρ≥∅, H' ≅ H/ρμ(∅,ρ). Consider ρ and ρ' such that H/ρ≅ H/ρ' ≅ H'. It follows that 𝗋𝗄(ρ) = |V(H)| - c(H/ρ) = |V(H)| - c(H') = |V(H)| - c(H/ρ') = 𝗋𝗄(ρ'). Now, as the lattice of M(τ(H)) is geometric, we can apply the corollary of Rota's NBC Theorem (Theorem <ref>) and obtain that 𝗌𝗀𝗇(μ(∅,ρ)) = (-1)^𝗋𝗄(ρ) = (-1)^𝗋𝗄(ρ') = 𝗌𝗀𝗇(μ(∅,ρ')). Consequently every term in Equation (<ref>) has the same sign and therefore c_H[H']≠ 0. Now we define a function a_H:𝒢→ℚ as follows: a_H(F) := c_H[F] if F ∈τ-ℳ(H) and 0 otherwise, and we set 𝒜_C = {a_H | H ∈ C}. Then the problems #𝖧𝗈𝗆(𝒜_C) and #𝖧𝗈𝗆_τ(C) are equivalent w.r.t. parameterized Turing reductions. As c_H[H'] ≠ 0 for every H' ∈τ-ℳ(H) it follows that 𝒜_C has unbounded treewidth if and only if τ-ℳ(C) has unbounded treewidth. We conclude by Theorem <ref> that #𝖧𝗈𝗆_τ(C) is #𝖶[1]-hard in this case.
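The hardness criterion is a property of the set τ-ℳ(H), which can be enumerated for small examples. The following Python sketch is ours (in particular, propagating the images of the τ-edges through iterated contractions is our reading of the definition, and vertices are assumed to be comparable, e.g. integers):

```python
# Illustrative sketch (ours) enumerating the tau-minors of H by searching
# over repeated contractions along (images of) edges of tau(H).
def tau_minors(V, E, T):
    """V: frozenset of vertices; E, T: frozensets of frozenset-edges,
    the edges of H and of tau(H) respectively."""
    seen, stack, minors = set(), [(V, E, T)], set()
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        Vc, Ec, Tc = state
        if all(len(e) == 2 for e in Ec):     # selfloop-free => tau-minor
            minors.add((Vc, Ec))
        for pair in Tc:
            if len(pair) != 2:               # tau-edge already collapsed
                continue
            u, w = sorted(pair)              # contract: merge w into u
            f = lambda x, u=u, w=w: u if x == w else x
            stack.append((frozenset(map(f, Vc)),
                          frozenset(frozenset(map(f, e)) for e in Ec),
                          frozenset(frozenset(map(f, e)) for e in Tc)))
    return minors

# For tau_clique on the triangle, every contraction creates a selfloop,
# so K_3 is its only tau_clique-minor.
V = frozenset({1, 2, 3})
E = frozenset(frozenset(p) for p in [(1, 2), (2, 3), (1, 3)])
assert tau_minors(V, E, E) == {(V, E)}
```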
§ LOCALLY INJECTIVE HOMOMORPHISMS In this section we are going to apply the general dichotomy theorem to the concrete case of counting locally injective homomorphisms. A homomorphism φ from H to G is locally injective if for every v ∈ V(H) it holds that φ|_N(v) is injective. We denote 𝖫𝖨𝗇𝗃(H,G) as the set of all locally injective homomorphisms from H to G and we define the corresponding counting problem #𝖫𝖨𝗇𝗃(C) for a class of graphs C ⊆𝒢 as follows: Given graphs H ∈ C and G ∈𝒢', compute #𝖫𝖨𝗇𝗃(H,G). The parameter is |V(H)|. Locally injective homomorphisms have already been studied by Nešetřil in 1971 <cit.> and were applied in the context of distance constrained labelings of graphs (see <cit.> for an overview). Like subgraph embeddings, locally injective homomorphisms are graphically restricted homomorphisms. Let H ∈𝒢 be a graph and let 𝖫𝗂(H)=(V(H),E_𝖫𝗂(H)) be a graphical restriction defined as follows: E_𝖫𝗂(H) = {{u,w} | u ≠ w ∧∃ v: {u,v},{w,v}∈ E(H)}. Then for all G ∈𝒢' it holds that 𝖧𝗈𝗆_𝖫𝗂(H,G) = 𝖫𝖨𝗇𝗃(H,G). We prove both inclusions. Let φ∈𝖧𝗈𝗆_𝖫𝗂(H,G) and assume that φ is not locally injective. Then there exists v ∈ V(H) such that φ|_N(v) is not injective which implies that there are u and w such that {u,v} and {w,v} are edges in H and φ(u)=φ(w). By definition {u,w}∈ E_𝖫𝗂(H)=E(𝖫𝗂(H)) and therefore φ∉𝖧𝗈𝗆_𝖫𝗂(H,G) which is a contradiction. Now let φ∈𝖫𝖨𝗇𝗃(H,G) and assume that φ∉𝖧𝗈𝗆_𝖫𝗂(H,G). Then there exist u,w ∈ V(H) such that {u,w}∈ E(𝖫𝗂(H)) and φ(u) = φ(w). The former implies that u and w have a common neighbor v in H but this contradicts the fact that φ is locally injective. We continue by stating the dichotomy for counting locally injective homomorphisms. Let C ⊆𝒢 be a recursively enumerable class of graphs. Then #𝖫𝖨𝗇𝗃(C) is FPT if 𝖫𝗂-ℳ(C) has bounded treewidth and #𝖶[1]-hard otherwise. Furthermore, there exists a deterministic algorithm that computes #𝖫𝖨𝗇𝗃(H,G) in time g(|V(H)|) · |V(G)|^𝗍𝗐(𝖫𝗂-ℳ(H)) + 1, where g is a computable function. Follows immediately from Lemma <ref> and Theorem <ref>. We give an example for a hard instance of the problem: Let W_k be the “windmill” graph of size k, i.e., the graph with vertices a, v_1,…,v_k, w_1,…,w_k and edges {a,v_i}, {v_i,w_i} and {w_i,a} for each i ∈ [k]. Furthermore we let 𝒲 be the set of all W_k for k ∈ℕ. #𝖫𝖨𝗇𝗃(𝒲) is #𝖶[1]-hard. It turns out that every graph consisting of k edges is a minor of some graph in 𝖫𝗂-ℳ(W_k). To see this let F be a graph with k edges. We enumerate the edges of F as e_1,…,e_k and identify each edge e_i={x_i,y_i} with the edge {v_i,w_i} in W_k. Now, whenever x_i = x_j (or x_i = y_j) we contract vertices v_i and v_j (or v_i and w_j, respectively) in W_k. As each v_i and v_j (or v_i and w_j, respectively) have the common neighbor a, and furthermore v_i and w_i are never contracted, the resulting graph W'_k is a 𝖫𝗂-minor of W_k. If we now remove a from W'_k along with every edge incident to a, the resulting graph is isomorphic to F. Consequently, the treewidth of 𝖫𝗂-ℳ(𝒲) is not bounded and hence #𝖫𝖨𝗇𝗃(𝒲) is #𝖶[1]-hard by Theorem <ref>. In contrast to embeddings where every FPT case is also polynomial time solvable, there are “real” FPT cases when it comes to locally injective homomorphisms. Let 𝒯⊆𝒢' be the class of all trees. Counting locally injective homomorphisms from those graphs is fixed-parameter tractable: #𝖫𝖨𝗇𝗃(𝒯) is FPT. In particular, there is a deterministic algorithm that computes #𝖫𝖨𝗇𝗃(T,G) for a tree T in time g(|V(T)|) · |V(G)|^2, where g is a computable function. According to Corollary <ref> we only need to show that 𝖫𝗂-ℳ(𝒯) has treewidth 1. Indeed, every 𝖫𝗂-minor of a tree is again a tree, and therefore has treewidth 1.
To see this, consider a pair of vertices u and w that have a common neighbor v in a tree T ∈𝒯. Then (u,v,w) is the only path between u and w and consequently contracting u and w to a single vertex will not create a cycle in the resulting graph (recall that we delete multiedges). On the other hand #𝖫𝖨𝗇𝗃(𝒯) is unlikely to have a polynomial time algorithm. #𝖫𝖨𝗇𝗃(𝒯) is #𝖯-hard. We prove this lemma in the following subsection. §.§ Counting subtrees of trees The aim of this section is to prove Lemma <ref>. We start by giving an introduction to classical counting complexity which was established by Valiant in his seminal work about the complexity of computing the permanent <cit.>. A (non-parameterized) counting problem is a function F: {0,1}^∗→ℕ. The class of all counting problems solvable in polynomial time is called 𝖥𝖯. On the other hand, the notion of intractability is #𝖯-hardness. #𝖯 is the class of all counting problems reducible[(Many-one) reductions in counting complexity differ slightly from many-one reductions in the decision world. However, for the purpose of this section we only need Turing reductions. We recommend Chap. 6.2 of <cit.> to the interested reader.] to #𝖲𝖠𝖳, the problem of computing the number of satisfying assignments of a given CNF formula. A counting problem F is #𝖯-hard if there exists a polynomial time Turing reduction from #𝖲𝖠𝖳 to F, that is, an algorithm with oracle F that solves #𝖲𝖠𝖳 in polynomial time. Toda <cit.> proved that 𝖯𝖧⊆𝖯^#𝖯 which indicates that #𝖯-hard problems are much harder than 𝖭𝖯-complete problems. To prove Theorem <ref>, we will first prove #𝖯-hardness of the following intermediate problem: Given two trees T_1,T_2, compute the number #𝖲𝗎𝖻(T_1,T_2) of subtrees of T_2 that are isomorphic to T_1. We call this problem #𝖲𝗎𝖻𝗍𝗋𝖾𝖾. #𝖲𝗎𝖻𝗍𝗋𝖾𝖾 is #𝖯-hard. Related results are #𝖯-hardness for counting all subtrees of a given graph <cit.> or even counting all subtrees of a given tree <cit.>. As the number of non-isomorphic trees with n vertices is not bounded by a polynomial in n, we do not know how to reduce directly from these problems. Instead we use a construction quite similar to the “skeleton” graph in <cit.> to reduce from the problem of computing the permanent. Given a square matrix A with elements (a_i,j)_i,j ∈ [n] the permanent of A is defined as follows: 𝗉𝖾𝗋𝗆(A)=∑_π∈ S_n∏_i=1^n a_i,π(i), where S_n is the symmetric group on n elements. Computing the permanent is #𝖯-hard even when restricted to matrices with entries from {0,1}. We reduce from computing the permanent of matrices with entries from {0,1}. Given a square matrix A of size n, we construct a tree T_A as follows: * For every entry a_i,j we create a vertex v_i,j and add edges {v_i,j,v_i+1,j} for every i ∈ [n-1] and j ∈ [n]. * Whenever a_i,j=1 we create a vertex b_i,j and add the edge {b_i,j,v_i,j}. * For every column c_j we create vertices u_j,w_j,x_j,y_j,z_j and add edges {u_j,v_1,j}, {v_n,j,w_j}, {w_j,x_j}, {w_j,y_j} and {w_j,z_j}. * Finally, we create a vertex r and add edges {r,u_j} for all j∈ [n]. In the following we call r the root. We give an example in Figure <ref> for a matrix A = [ 1 0 1 0 0; 0 1 0 1 1; 1 0 1 0 0; 0 1 0 1 0; 0 1 0 0 1 ]. We claim that for all square matrices A of size n≥ 5 with entries from {0,1} it holds that 𝗉𝖾𝗋𝗆(A) = #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A), where 𝗂𝖽_n is the identity matrix of size n, i.e., the square matrix with 1s on the diagonal and 0s everywhere else. In the following we write v for a vertex in T_A and v' for a vertex in T_𝗂𝖽_n.
To prove the claim we first observe that whenever a subtree of T_A is isomorphic to T_𝗂𝖽_n, the root r' of T_𝗂𝖽_n has to be mapped to the root r of T_A by the isomorphism as the roots are the only vertices with degree n (which is why we needed n≥ 5 as every other vertex has degree ≤ 4). It follows that the vertices u'_1,…,u'_n of T_𝗂𝖽_n are mapped to u_1,…,u_n of T_A which induces a permutation on n elements, that is, an element π∈ S_n. We will now partition the subtrees of T_A isomorphic to T_𝗂𝖽_n by those permutations and write #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A)[π] for the number of subtrees that induce π. Now fix π and consider a subtree that induces π. It holds that for all j∈ [n] the vertex w'_j has to be mapped to w_π(j) as those are the only vertices with degree exactly 4 and furthermore, the vertices x'_j,y'_j,z'_j have to be mapped to x_π(j),y_π(j),z_π(j) (possibly permuted but the subtree of T_A is the same). Now v'_i,i is adjacent to b'_i,i for each i∈ [n] and therefore v_i,π(i) has to be adjacent to b_i,π(i), that is, a_i,π(i) = 1. If this is not the case then there is no subtree that induces the permutation π. Furthermore there is at most one subtree isomorphic to T_𝗂𝖽_n inducing π because the image is enforced by r', w'_j and v'_i,i for all i,j ∈ [n]. Consequently #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A)[π] = 1 if for all i ∈ [n] it holds that a_i,π(i)=1 and #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A)[π]=0 otherwise. Hence #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A)[π] = ∏_i=1^n a_i,π(i) and therefore 𝗉𝖾𝗋𝗆(A) = ∑_π∈ S_n∏_i=1^n a_i,π(i) = ∑_π∈ S_n#𝖲𝗎𝖻(T_𝗂𝖽_n,T_A)[π] = #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A). Now the reduction works as follows: If the input matrix A has size ≤ 4 we brute-force the output and otherwise we compute #𝖲𝗎𝖻(T_𝗂𝖽_n,T_A) with the oracle for #𝖲𝗎𝖻𝗍𝗋𝖾𝖾. Now the proof of Lemma <ref> relies on the fact that locally injective homomorphisms from a tree to a tree are embeddings. It is a well-known fact that a locally injective homomorphism φ from a tree T_1 to a tree T_2 is injective. To see this assume that there are vertices v and u in T_1 that are mapped to the same vertex in T_2. As T_1 is a tree there exists exactly one path v=w_0,w_1,…,w_ℓ,w_ℓ+1=u between v and u in T_1. It holds that ℓ≥ 1 as otherwise v and u would be adjacent and hence the image vertex φ(u)=φ(v) would have a selfloop in T_2 which is impossible. As φ is locally injective we have that φ(v)≠φ(w_2), hence u ≠ w_2, and as φ is edge preserving there are edges {φ(v),φ(w_1)} and {φ(w_1),φ(w_2)} and a path from φ(w_2) to φ(w_ℓ+1)=φ(u)=φ(v) in T_2. This induces a cycle and contradicts the fact that T_2 is a tree. Therefore #𝖫𝖨𝗇𝗃(T_1,T_2) = #𝖤𝗆𝖻(T_1,T_2). By Lovász <cit.> it holds for all H and G that #𝖲𝗎𝖻(H,G) = #𝖤𝗆𝖻(H,G)/#𝖠𝗎𝗍(H), where 𝖠𝗎𝗍(H) is the set of automorphisms of H. If H is a tree then #𝖠𝗎𝗍(H) can be computed in polynomial time (even for planar graphs <cit.>,<cit.>). Therefore #𝖯-hardness of #𝖫𝖨𝗇𝗃(𝒯) follows by reducing from #𝖲𝗎𝖻𝗍𝗋𝖾𝖾: Given trees T_1,T_2 we compute #𝖫𝖨𝗇𝗃(T_1,T_2) by querying the oracle and #𝖠𝗎𝗍(T_1) in polynomial time. Then we output #𝖫𝖨𝗇𝗃(T_1,T_2)/#𝖠𝗎𝗍(T_1) = #𝖤𝗆𝖻(T_1,T_2)/#𝖠𝗎𝗍(T_1) = #𝖲𝗎𝖻(T_1,T_2).
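For concreteness, the gadget from the proof above is easy to code. The following Python sketch is ours (the name skeleton_tree and the tuple vertex labels are illustrative); it builds T_A as an adjacency dictionary from a 0/1 matrix given as a list of rows.

```python
# Sketch (ours) of the tree T_A from the reduction above.
from collections import defaultdict

def skeleton_tree(A):
    n = len(A)
    g = defaultdict(set)
    def edge(a, b):
        g[a].add(b); g[b].add(a)
    for j in range(n):
        for i in range(n - 1):                 # column path v_1j ... v_nj
            edge(('v', i, j), ('v', i + 1, j))
        for i in range(n):
            if A[i][j] == 1:                   # pendant vertex marks a 1
                edge(('b', i, j), ('v', i, j))
        edge(('u', j), ('v', 0, j))
        edge(('v', n - 1, j), ('w', j))        # w_j gets degree exactly 4
        for leaf in ('x', 'y', 'z'):
            edge((leaf, j), ('w', j))
        edge('r', ('u', j))                    # root r has degree n
    return g                                   # perm(A) = #Sub(T_id_n, T_A)
```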
§ INJECTIVITY IN THE R-NEIGHBORHOOD The generalization from locally injective homomorphisms to homomorphisms that are injective in the r-neighborhood of every vertex is straightforward. Given a graph H and v ∈ V(H) we denote N_r(v) as the r-neighborhood of v, that is, a vertex u is contained in N_r(v) if and only if d_H(u,v) ≤ r, where d_H(u,v) is the distance between u and v in H. We then define 𝖫𝖨𝗇𝗃_r(H,G) := {φ∈𝖧𝗈𝗆(H,G) | ∀ v ∈ V(H) : φ|_N_r(v) is injective}. Furthermore we define the counting problem #𝖫𝖨𝗇𝗃_r(C) for a class of graphs C accordingly. Defining the graphical restriction 𝖫𝗂_r such that E(𝖫𝗂_r(H)) = {{u,w} | u ≠ w ∧∃ v: 1 ≤ d_H(u,v) ≤ r ∧ 1 ≤ d_H(w,v) ≤ r} for every graph H ∈𝒢 immediately yields the dichotomy: Let C ⊆𝒢 be a recursively enumerable class of graphs. Then #𝖫𝖨𝗇𝗃_r(C) is FPT if 𝖫𝗂_r-ℳ(C) has bounded treewidth and #𝖶[1]-hard otherwise. Furthermore, there exists a deterministic algorithm that computes #𝖫𝖨𝗇𝗃_r(H,G) in time g(|V(H)|) · |V(G)|^𝗍𝗐(𝖫𝗂_r-ℳ(H)) + 1, where g is a computable function. We continue using trees as an example by observing that there is a phase transition in the complexity of #𝖫𝖨𝗇𝗃_r(𝒯) when we change from r=1, in which case 𝖫𝖨𝗇𝗃_1(H,G)=𝖫𝖨𝗇𝗃(H,G), to r=2: #𝖫𝖨𝗇𝗃_r(𝒯) is #𝖶[1]-hard for r ≥ 2. In particular, assuming ETH [ETH is the “exponential time hypothesis”, stating that k-SAT cannot be solved in subexponential time (see <cit.>).], there is no algorithm that computes #𝖫𝖨𝗇𝗃_r(T,G) for a tree T in time g(|V(T)|) · |V(G)|^O(1), for any computable function g. We only need to show that 𝖫𝗂_r-ℳ(𝒯) has unbounded treewidth, as the ETH lower bound simply follows from the fact that FPT ≠#𝖶[1] under ETH (see e.g. Chapt. 16 in <cit.>). Therefore we construct the graph T_k,k as follows: * We add vertices a, u_1,…,u_k, v_1,…,v_k, w_1,…,w_k. * We add edges {a,u_i}, {a,v_i} and {v_i,w_i} for i ∈ [k]. Clearly, T_k,k is a tree. Now we contract vertices w_i and u_i for all i ∈ [k] and end up with W_k. As d_T_k,k(a,w_i) = 2 and d_T_k,k(a,u_i)=1, those contractions are along edges of 𝖫𝗂_r(T_k,k) and hence the resulting graph is a 𝖫𝗂_r-minor of T_k,k. From W_k we can further contract vertices along the lines of the proof of Corollary <ref> to obtain arbitrary graphs with k edges as minors of elements of 𝖫𝗂_r-ℳ(T_k,k). Consequently the treewidth of 𝖫𝗂_r-ℳ(𝒯) is not bounded. § EXTENSION TO LINEAR COMBINATIONS The introduction of linear combinations of graphically restricted homomorphisms is motivated by the following example: Consider the problem #𝖤 of, given a parameter k and a graph G ∈𝒢', computing #𝖧𝗈𝗆(P_k,G) + #𝖫𝖨𝗇𝗃(K_k,G) + #𝖤𝗆𝖻(C_k,G), where P_k,K_k,C_k are paths, cliques and cycles consisting of k vertices. As 𝖧𝗈𝗆[Here τ maps every graph to the independent set of the same size implying that τ-ℳ(C)=C.], 𝖫𝖨𝗇𝗃 and 𝖤𝗆𝖻 are graphically restricted homomorphisms we know the complexity of computing each summand, but we cannot immediately infer the complexity of #𝖤. As P_k has treewidth 1 it follows by Theorem <ref> or Theorem <ref> that #𝖧𝗈𝗆(P_k,G) can be computed in FPT time. Consequently, #𝖤 is equivalent (w.r.t. FPT Turing reductions) to computing #𝖫𝖨𝗇𝗃(K_k,G) + #𝖤𝗆𝖻(C_k,G). As cliques have 𝖫𝗂-minors of unbounded treewidth and cycles have unbounded matching number, these problems are both #𝖶[1]-hard (see Theorem <ref> and Theorem <ref>). Even if hardness of #𝖤 is intuitive, it is not obvious how to prove it, at least if one tries to reduce the computation of one summand to #𝖤. Instead we will show that our framework allows less cumbersome reductions, at least for what we will call the congruent cases. We start by formally defining a linear combination of graphically restricted homomorphisms. Let 𝒜 be a set of computable functions a: 𝒢×Τ → ℚ_≥ 0 with finite support. We define the parameterized counting problem #𝖧𝗈𝗆_Τ(𝒜) as follows: Given a ∈𝒜 and G ∈𝒢', compute the linear combination ∑_(H,τ) ∈𝗌𝗎𝗉𝗉(a) a(H,τ) ·#𝖧𝗈𝗆_τ(H, G), parameterized by (#𝗌𝗎𝗉𝗉(a)+max_(H,τ) ∈supp(a)#V(H)). Given a function a ∈𝒜 we denote ℳ(a) := ⋃_(H,τ)∈𝗌𝗎𝗉𝗉(a)τ-ℳ(H) and ℳ(𝒜) := ⋃_a∈𝒜ℳ(a) as the set of all τ-minors of a and 𝒜, respectively. Furthermore, we say that a is congruent if for every (H_1,∗) and (H_2,∗) ∈𝗌𝗎𝗉𝗉(a) it holds that 𝖯𝖺𝗋𝗂𝗍𝗒(#V(H_1)) = 𝖯𝖺𝗋𝗂𝗍𝗒(#V(H_2)).
We say that 𝒜 is congruent if all its elements are congruent. If we let τ_𝗂𝗌 be the graphical restriction that maps a graph H to the independent set with vertices V(H) and set 𝒜={a_k | k∈ℕ} such that a_k(P_k,τ_𝗂𝗌)=1, a_k(K_k,𝖫𝗂)=1, a_k(C_k,τ_𝖼𝗅𝗂𝗊𝗎𝖾) = 1 and 0 otherwise then #𝖧𝗈𝗆_Τ(𝒜) is equivalent to #𝖤. For congruent 𝒜 we can derive a complete complexity classification. The problem #𝖧𝗈𝗆_Τ(𝒜) is fixed-parameter tractable if ℳ(𝒜) has bounded treewidth. Otherwise, if 𝒜 is additionally congruent, it is #𝖶[1]-hard. The FPT algorithm for the positive result is straight-forward: As the treewidth of ℳ(𝒜) is bounded, we can on input a ∈𝒜 and G∈𝒢' compute #𝖧𝗈𝗆_τ(H,G) for every (H,τ)∈𝗌𝗎𝗉𝗉(a) in time g(#V(H)) · n^O(1) for a computable function g by Theorem <ref>. Consequently, computing the sum takes time less than #𝗌𝗎𝗉𝗉(a) · g(max_(H,τ) ∈supp(a)#V(H)) · n^O(1), yielding fixed-parameter tractability. Now assume that ℳ(𝒜) has unbounded treewidth and that 𝒜 is congruent and let a∈𝒜 and G∈𝒢'. Lemma <ref> yields that ∑_(H,τ) ∈𝗌𝗎𝗉𝗉(a) a(H,τ) ·#𝖧𝗈𝗆_τ(H, G) = ∑_(H,τ) ∈𝗌𝗎𝗉𝗉(a) a(H,τ) ·∑_ρ≥∅μ(∅,ρ)·#𝖧𝗈𝗆(H/ρ,G) = ∑_(H,τ) ∈𝗌𝗎𝗉𝗉(a)∑_ρ≥∅ a(H,τ) ·μ(∅,ρ)·#𝖧𝗈𝗆(H/ρ,G). Now let ℋ∈ℳ(a). It holds that the coefficient d(ℋ) of #𝖧𝗈𝗆(ℋ,G) in the above equation satisfies: d(ℋ) = ∑_(H,τ) ∈𝗌𝗎𝗉𝗉(a)∑_ρ≥∅, ℋ≅ H/ρ a(H,τ) ·μ(∅,ρ). If we fix some (H,τ) ∈𝗌𝗎𝗉𝗉(a) and ρ∈ L(M(τ(H))) such that ℋ≅ H/ρ we have that 𝗌𝗀𝗇(a(H,τ) ·μ(∅,ρ)) = 𝗌𝗀𝗇( μ(∅,ρ) ) = (-1)^𝗋𝗄(ρ) = (-1)^#V(H)-c(H/ρ) = (-1)^#V(H)-c(ℋ), where the first equality follows from the fact that a(H,τ)>0 and the second from the corollary of Rota's Theorem (Theorem <ref>). As a is congruent, the parities of #V(H) for all H such that (H,∗)∈𝗌𝗎𝗉𝗉(a) are equal and consequently we have that 𝗌𝗀𝗇(d(ℋ)) = (-1)^#V(H)-c(ℋ), hence d(ℋ) ≠ 0. Therefore, if we consider #𝖧𝗈𝗆_Τ(𝒜) as the problem of computing linear combinations of homomorphisms (as we also did in the proof of Theorem <ref>), we infer that every τ-minor will be included in the combination. As the treewidth of those is not bounded we conclude by Theorem <ref> that #𝖧𝗈𝗆_Τ(𝒜) is #𝖶[1]-hard. On the other hand, Theorem <ref> is not true if we omit the constraint that 𝒜 is congruent: Consider the problem #𝖧𝗈𝗆(𝒫) where 𝒫 is the class of all paths. It is fixed-parameter tractable as 𝒫 has bounded treewidth (see Theorem <ref>). Using Lovász' identity <cit.> we have that for any P_k ∈𝒫 and G ∈𝒢' it holds that #𝖧𝗈𝗆(P_k,G) = ∑_ρ≥∅#𝖤𝗆𝖻(P_k/ρ,G). This is a linear combination of graphically restricted homomorphisms (embeddings) including e.g. the term #𝖤𝗆𝖻(P_k/∅,G)=#𝖤𝗆𝖻(P_k,G) with coefficient 1. But τ_𝖼𝗅𝗂𝗊𝗎𝖾-ℳ(𝒫) has unbounded treewidth[τ_𝖼𝗅𝗂𝗊𝗎𝖾-ℳ(P_k) is precisely the set of “spasms” of P_k (see <cit.>). The claim follows by Fact 3.4 in <cit.>.] and consequently the treewidth of all τ-minors of this linear combination is unbounded, too. This shows that there exist non-congruent 𝒜 such that the treewidth of ℳ(𝒜) is not bounded but #𝖧𝗈𝗆_Τ(𝒜) is fixed-parameter tractable. Now it is easy to see that #𝖤 is #𝖶[1]-hard. Further problems whose hardness follows from Theorem <ref> are for example: The following problems are #𝖶[1]-hard: Given a graph G ∈𝒢' and a parameter k, (1) count all odd (or even) subgraphs of size bounded by k of G. (2) count all subgraphs of size k of G (follows also from <cit.>). (3) compute ∑_i =1^k #𝖫𝖨𝗇𝗃(W_i,G), i.e., the sum of all locally injective homomorphisms from windmills of size bounded by k to G. (4) compute ∑_i=1^k #𝖤𝗆𝖻(K_i,i,G) + #𝖫𝖨𝗇𝗃(K_i,i,G), where K_i,i is the biclique of size i, that is, the complete bipartite graph with i vertices on each side. Each statement follows by Theorem <ref>: (1) Let 𝖮𝖽𝖽_k ⊆𝒢' be the set of all odd graphs of size bounded by k.
Then it holds that ∑_H ∈𝖮𝖽𝖽_k#𝖲𝗎𝖻(H,G) = ∑_H ∈𝖮𝖽𝖽_k#𝖠𝗎𝗍(H)^-1·#𝖤𝗆𝖻(H,G). As #𝖠𝗎𝗍(H)^-1 > 0, the above equation clearly is a congruent instance of the linear combination problem. Furthermore 𝖮𝖽𝖽_k contains cliques of size ≥ k-1 implying that the treewidth of the instance is not bounded. The same argument holds for the case of counting all even subgraphs. (2) Follows along the same lines as (1) with the additional argument that we only count graphs of size exactly k, implying that the parity is the same for all terms. (3) Congruence follows by the observation that W_i has odd size for all i ≥ 1. Unbounded treewidth follows with the same argument as in Corollary <ref>. (4) Congruence follows by the fact that K_i,i has even size for all i ≥ 1. Unbounded treewidth follows by observing that the class of all bicliques itself already has unbounded treewidth. § CONCLUSION AND FURTHER WORK We have shown that various parameterized counting problems can be expressed as a linear combination of homomorphisms over the lattice of graphic matroids, implying immediate complexity classifications along with fixed-parameter tractable algorithms for the positive cases. These results can be obtained without using often cumbersome tools like “gadgeting” or interpolation and rely only on the knowledge of the problem of counting homomorphisms and the comprehension of the cancellation behaviour when transforming a problem into this “homomorphism basis”. The latter, in turn, was nothing more than a question about the sign of the Möbius function, which was answered by Rota's Theorem. This framework, however, still has limits: It seems that, e.g., neither induced subgraphs nor edge-injective homomorphisms <cit.> are graphically restricted. Indeed, both can be expressed as a sum of homomorphisms over (non-geometric) lattices but the problem is that there are isomorphic terms with different signs in both cases. This suggests that a better understanding of the Möbius function over those lattices could yield even more general complexity classifications of parameterized counting problems. § ACKNOWLEDGEMENTS The author is very grateful to Holger Dell and Radu Curticapean for fruitful discussions. Furthermore the author thanks Cornelius Brand for saying “Tutte Polynomial” every once in a while.
"authors": [
"Marc Roth"
],
"categories": [
"cs.CC"
],
"primary_category": "cs.CC",
"published": "20170626144448",
"title": "Counting Restricted Homomorphisms via Möbius Inversion over Matroid Lattices"
} |
[[email protected]] Russian Quantum Center, Novaya street 100A, 143025 Skolkovo, Moscow Region, Russia; Department of Physics, Lomonosov Moscow State University, Leninskie Gory 1, 119992 Moscow, Russia; Russian Quantum Center, Novaya street 100A, 143025 Skolkovo, Moscow Region, Russia; Department of Physics, Lomonosov Moscow State University, Leninskie Gory 1, 119992 Moscow, Russia; Institute of Theoretical Physics, Hamburg University, Hamburg D-20355, Germany; CeFEMA, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal The quantum nature of a microscopic system can only be revealed when it is sufficiently decoupled from its surroundings. Interactions with the environment induce relaxation and decoherence that turn the quantum state into a classical mixture. Here, we study the timescales of these processes for a qubit encoded in the collective state of a set of magnetic atoms deposited on a metallic surface. For that, we provide a generalization of the commonly used definitions of T_1 and T_2 characterizing relaxation and decoherence rates. We calculate these quantities for several atomic structures, including a collective spin, a setup implementing a decoherence-free subspace, and two examples of spin chains. Our work contributes to the comprehensive understanding of the relaxation and decoherence processes and shows the advantages of the implementation of a decoherence-free subspace in these setups. 73.23.-b, 05.60.Gg, 05.70.Ln Relaxation and decoherence of qubits encoded in collective states of engineered magnetic structures Pedro Ribeiro December 30, 2023 =================================================================================================== § INTRODUCTION Exploiting quantum dynamics for information processing, data storage, or sensing requires us to manipulate and measure quantum states before relaxation and decoherence processes set in. The characteristic timescales associated with the energy relaxation and the loss of quantum coherences depend crucially on the interaction with the local environment. In the case of single magnetic atoms or molecules in contact with a metallic surface <cit.>, the most relevant interaction is the magnetic exchange with itinerant electrons in the substrate <cit.>. Additionally, the magnetic degrees of freedom couple to substrate phonons and nuclear spins <cit.>. The ability to manipulate and address individual atoms and to perform spin- and time-resolved measurements <cit.> permitted us to engineer a number of prototype setups <cit.> and demonstrated the potential of these artificial magnetic structures for classical information processing <cit.>. However, the proximity to the metallic substrate induces the creation of electron-hole pairs that rapidly decohere the magnetic state <cit.>, thus posing considerable limitations to the exploitation of the quantum regime <cit.>. For example, in 1D chains of Fe atoms coupled antiferromagnetically, experiments observe activated switching between the two doubly degenerate (classical) Néel states rather than the (quantum) nonmagnetic ground state <cit.>. One strategy to protect the spin states is environment engineering. This route was explored in Ref. <cit.>, employing a superconductor substrate that forbids quasiparticle creation for energies below the superconducting gap.
An alternative path is to engineer artificial structures with intrinsically large decoherence times. A systematic way to proceed is to design systems supporting so-called decoherence-free subspaces <cit.> where quantum information can be stored without loss. A recently developed method, able to capture the coherent quantum dynamics of atomic magnets <cit.>, allows for the theoretical study of how to implement this strategy. In contrast to previous descriptions in terms of rate equations <cit.> that only describe the dynamics of the populations, this approach also models the quantum coherences and allows us to access the current statistics measured by the STM tip. In this paper we study relaxation and decoherence processes arising in engineered atomic spin structures due to the contact with the metallic surface and the STM tip. We show that the standard concepts of the timescales T_1 and T_2 introduced for two-level systems, such as a single atom with spin 1/2, are not well-defined in the generic case. In particular, they do not work when dynamics of coherences and populations are coupled, e.g., due to the interaction with the polarized STM tip. The standard definitions are also not extendable to multilevel quantum systems, where the transitions from the states encoding a qubit to other eigenstates can contribute to the relaxation and decoherence. To characterize the quantum dynamics in the generic case, we propose new quantities that generalize the notions of T_1 and T_2 and that extend to cases where a qubit is encoded in a subspace of a larger Hilbert space. This new approach enables us to determine the timescales for the degradation of stored quantum information in different engineered structures such as spin chains and collective spin models. In general, there are three such timescales, but in most of the studied examples two of them coincide, in which case one can associate the two different quantities with relaxation and decoherence processes. We study these timescales in several atomic structures including the proposed decoherence-free subspaces and investigate how these quantities depend on the temperature and bias voltage applied to the STM tip. Our work shows that the implementation of a decoherence-free subspace based on four spin-1/2 atoms is realistic. The paper is organized as follows. In Sec. <ref> we describe the model and the method based on the effective master equation developed in Ref. <cit.>. In Sec. <ref> we identify the problem with the definition of T_1 and T_2 for a generic dissipative evolution and provide a suitable generalization. These quantities are studied for different atomic structures in Sec. <ref>. In Sec. <ref> we discuss our results, provide conclusions and state some of the open questions. The Appendices are devoted to some technical aspects: Appendix <ref> shows how to obtain the rate equations for the populations once coherences are disregarded, Appendix <ref> addresses the comparison of the master equation used in this work with that of Ref. <cit.> where additional simplifications were performed, and Appendix <ref> shows how to obtain a Bloch-like equation for the evolution of the magnetization for the case of a single spin-1/2 atom. § MODEL & METHOD §.§ Model An engineered atomic spin device (EASD) consists of a cluster of magnetic atoms arranged on a metallic substrate coated by a thin insulating layer.
Individual atomic degrees of freedom can be probed by placing a spin-polarized STM tip on top of the respective atom, as schematically shown in Fig. <ref>. The entire system can be modeled by a Hamiltonian H=H_S+H_E+H_I that consists of three terms, describing the atomic subsystem H_S, the electronic degrees of freedom of the metallic leads (including the substrate and tip) H_E, and the coupling between the atoms and these leads H_I. The degrees of freedom of the atomic electrons reduce to those of effective spins S_l, where l=1… L labels the atom, whenever the Coulomb repulsion among electrons occupying localized orbitals induces a large charge gap <cit.>. In the regime of weak hybridization between the localized atomic orbitals and those of the itinerant electrons in the leads, tunneling of electrons happens by virtual excitations of the atomic charge state. A suitable low energy description can be obtained in terms of isolated magnetic moments interacting with each other through an effective exchange <cit.>. The interaction with the substrate further induces an anisotropy in the ẑ direction <cit.>. Finally, an external magnetic field B can be applied to the system. The atomic Hamiltonian thus yields H_S=∑_l[DS_lz'^2+E(S_lx'^2-S_ly'^2)+h·S_l] +∑_ll'J_ll'S_l·S_l', where S_lx', S_ly', and S_lz' are the atomic spin projections along the hard, intermediate, and easy axis of the substrate. The Zeeman terms with h=gμ_BB are proportional to the atomic g factor and to the Bohr magneton μ_B.
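As a concrete illustration of the atomic Hamiltonian above, the following minimal Python sketch (ours, assuming numpy; it is not part of the original work) builds H_S for an open chain of L spin-S atoms with uniaxial anisotropy D, transverse anisotropy E, a Zeeman field h taken along ẑ only, and uniform nearest-neighbor exchange J. It is practical only for small L, since the matrix dimension grows as (2S+1)^L.

```python
# Minimal sketch (ours, assuming numpy) of the spin Hamiltonian H_S for an
# open chain of L spin-S atoms; the Zeeman field h is taken along z only.
import numpy as np

def spin_ops(S):
    m = np.arange(S, -S - 1, -1)                  # S_z eigenvalues S..-S
    sz = np.diag(m).astype(complex)
    sp = np.zeros((len(m), len(m)), complex)      # raising operator S_+
    for k in range(len(m) - 1):
        sp[k, k + 1] = np.sqrt(S * (S + 1) - m[k + 1] * (m[k + 1] + 1))
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, sz  # S_x, S_y, S_z

def chain_hamiltonian(L, S, D, E, h, J):
    d = int(round(2 * S + 1))
    sx, sy, sz = spin_ops(S)
    def site(op, l):                              # op acting on site l
        out = np.eye(1)
        for k in range(L):
            out = np.kron(out, op if k == l else np.eye(d))
        return out
    H = np.zeros((d ** L, d ** L), complex)
    for l in range(L):                            # single-ion terms
        H += D * site(sz @ sz, l) + E * (site(sx @ sx, l) - site(sy @ sy, l))
        H += h * site(sz, l)
    for l in range(L - 1):                        # exchange coupling
        for a in (sx, sy, sz):
            H += J * site(a, l) @ site(a, l + 1)
    return H

# Example: two antiferromagnetically coupled spin-2 atoms with an easy axis.
H = chain_hamiltonian(L=2, S=2.0, D=-1.0, E=0.1, h=0.0, J=0.5)
```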
We model the substrate as a set of L identical leads that couple independently to each atom. The tip is described as an extra lead coupled to one of the atoms l̃. The Hamiltonian of the leads is therefore given by H_E=∑_ν ksε_ν ksc_ν ks^†c_ν ks, where ν=1,…,L,t labels the leads, and electronic states are characterized by momentum k and spin s, quantized along the tip polarization P. The leads are in thermal equilibrium with a common temperature 1/β (in energy units) and chemical potentials μ_s=0, μ_t=-eV, where V is the applied voltage and -e is the electron charge. They are further characterized by the spin-polarized local densities of states ϱ_ν s(ε)=𝒱_ν^-1∑_kδ(ε-ε_ν ks), where 𝒱_ν is the corresponding volume. As the leads are good metals, their description should be independent of specific details; however, for definiteness, we use a density of states of the form ϱ_ν s(ε)=(1/2W)(1+p_ν s)Θ(W-|ε|), where W is the bandwidth, and p_ν is the polarization parameter of the lead ν that may take values from 0 to 1. In the following we assume that the substrate is nonmagnetic, i.e., p_s=0, and the tip is polarized with p_t=p. We normalize the densities of states by the condition ∑_s∫ϱ_ν s(ε)dε=2, which helps us to eliminate the problems with the logarithmic terms in the master equation. The coupling of the leads to the atoms is described by the Hamiltonian <cit.> H_I=∑_laνν'√(J_lν^aJ_lν'^a)S_la⊗ s_a^νν', s_a^νν'=1/√(𝒱_ν𝒱_ν')∑_kk'ss'c_ν ks^†σ_ss'^a/2c_ν'k's', where a=0,x,y,z, the axis z is aligned with the tip polarization, and σ^a are the Pauli matrices (with σ^0=I). The terms with a=0 in Eq. (<ref>) correspond to the elastic tunneling of electrons between the leads mediated by the atoms, while the terms with a≠0 describe the Kondo interaction of each atomic spin with the spin density created by the lead electrons. The coupling is assumed to be rotationally invariant, i.e., J_lν^a=J_lν is the same for all a≠0, and the coupling energies J_lν≃2u_lν^2U_lΔ_l^-1(Δ_l+U_l)^-1 are determined by the lead-atom hopping amplitudes u_lν, the intra-atomic Coulomb repulsion energies U_l, and the gaps Δ_l between the atomic levels and the Fermi energy of the leads <cit.>. Since each lead is coupled to only one atom, we use J_ll'^a=J_ls^aδ_ll' for the coupling to the substrate and J_lt^a=J_t^aδ_ll̃ for the coupling to the tip. In the following we use dimensionless parameters γ_lν^a=π J_lν^a𝒱_ν/(2W) to characterize the strength of the lead-atom couplings. §.§ Master equation We investigate dynamics of an EASD by employing a master equation ∂_tρ=ℒρ for the density matrix of the atomic spins ρ, which is well adapted to describe the regime of weak coupling between the atoms and the leads. The standard approach to the construction of the superoperator ℒ invokes the Born and Markov approximations <cit.>. If additionally the rotating wave approximation (RWA) is employed, the superoperator ℒ becomes of the Lindblad form. In case the Hamiltonian of the isolated atom has a nondegenerate spectrum, the latter approximation is equivalent to the neglect of the off-diagonal elements of ρ, i.e., quantum coherences. Such a description based on the rate equations for populations <cit.> is suitable when decoherence in the system is much faster than relaxation, which is often the case in experiments due to hyperfine coupling <cit.>. However, an account of coherences may become important in some situations <cit.>. In the context of EASDs, this happens for the case of nonparallel alignment of the tip polarization and the magnetic fields acting on the atoms <cit.>. In this paper, we do not employ the RWA and consider the coherent dynamics of the setup given by the Redfield equation ∂_tρ=-i[H'_S,ρ]+∑_laa'{[A_laa'ρ,S_la]+H.c.} (the detailed derivation is given in Ref. <cit.>, with a minor difference discussed below), where the Hamiltonian H'_S=H_S+Δ H_S is renormalized by terms Δ H_S=pJ_tS_l̃z-(ln2/(2W))p^2J_t^2S_l̃z^2 induced by the polarized tip. The dissipator is expressed through operators A_laa' given by A_laa'=∑_νν'αα'u_laa'^νν'κ(ω_α-ω_α'-μ_ν+μ_ν')×|α⟩⟨α|S_la'|α'⟩⟨α'|, where |α⟩ denotes the eigenstates of H'_S with energies ω_α, and u_laa'^νν'=(1/4π)√(γ_lν^aγ_lν'^aγ_lν^a'γ_lν'^a')×tr[(1+p_νσ^z)σ^a(1+p_ν'σ^z)σ^a'], κ(ω) =[g(βω)+i f(βω)]/β-(i/π)ωln(|ω|/cW), where g(x)=x/(e^x-1), f(x)=(1/π) P∫ dy [g(y)+yΘ(-y)]/(x-y). We note that the terms with a=0 or a'=0 in Eq. (<ref>) vanish, i.e., the elastic tunneling does not affect dynamics of the atoms. The steady state ρ_∞ of the system is determined by the eigenstate of ℒ with zero eigenvalue, i.e., ℒρ_∞=0. One may check that at zero voltage the steady state is given by the Boltzmann distribution ρ_∞∝ e^-β H_S and includes no coherences. At V≠0 the excitations induced by the current change the distribution, while the spin transfer torque <cit.> (in the case p≠0) results in ρ_∞ that is not diagonal in the eigenbasis. The latter case cannot be captured by the standard approach based on rate equations for the populations <cit.>. These equations might be obtained from Eq. (<ref>) by substituting ρ_αα'=p_αδ_αα' and are given in Appendix <ref>. In the presence of a voltage, an electric current is generated between the tip and the substrate due to the coupling terms in Eq. (<ref>). Its average value can be expressed through the atomic density matrix as I=-e tr(𝒥ρ) (see Ref.
<cit.> for the detailed derivation) with the current superoperator 𝒥 defined as 𝒥ρ=∑_aa'[J_aa'ρ S_l̃a+S_l̃aρ J_aa'^†], J_aa'=∑_νν'αα'u_l̃aa'^νν'κ(ω_α-ω_α'-μ_ν+μ_ν')×(δ_ν t-δ_ν't)|α⟩⟨α|S_l̃a'|α'⟩⟨α'|. This current superoperator can be split into three parts <cit.> 𝒥=𝒥_e+𝒥_m+𝒥_i, where (i) 𝒥_e includes the terms of Eq. (<ref>) with a=a'=0 and corresponds to the elastic component of the current, (ii) 𝒥_m includes the terms of Eq. (<ref>) with a=0, a'≠0 and a≠0, a'=0, and corresponds to the magnetoresistive component of the current, and (iii) 𝒥_i includes the terms of Eq. (<ref>) with a≠0, a'≠0, and corresponds to the inelastic component of the current. The average values of the first two components are given by I_e=-e tr(𝒥_eρ)=g_eV, I_m=-e tr(𝒥_mρ)=2g_mp⟨ S_l̃z⟩ V, where ⟨ S_l̃z⟩=tr(S_l̃zρ), and we defined the elastic conductance g_e=(1/π)e^2γ_l̃s^0γ_t^0, the inelastic conductance g_i=(1/π)e^2γ_l̃sγ_t, and the magnetoresistive conductance g_m=√(g_eg_i). We note that the master equation (<ref>) contains terms originating from the imaginary part of κ(ω), see Eq. (<ref>), which were not considered in the previous work <cit.>. The corresponding part of the superoperator has eigenvalues with positive real part and may result in unphysical dynamics when the density matrix loses its positivity. The physical origin of this contribution is not fully understood, though it is known that logarithmic terms in the bandwidth induce an additional shift to the energy levels <cit.>. It is tempting to interpret these logarithmic terms in the master equation as a weak-coupling signature of Kondo physics. However, they vanish for the V=0 steady state described by the Boltzmann distribution and thus cannot be responsible for the Kondo effect. The results of the current spectra calculated with the two master equations, with and without the imaginary part of κ(ω), are qualitatively similar, as shown in Appendix <ref>. § RESULTS In this section we study relaxation and decoherence in spin-based qubits whose dynamics may be described by the introduced model. We pay special attention to the definitions of the corresponding timescales T_1 and T_2 and revise them for the case when the qubit is realized on quantum systems with more than two levels. We introduce generalized definitions for the relaxation and decoherence timescales and use them to calculate T_1 and T_2 in some exemplary systems. The discussion is led in the context of EASDs but is easily extendable to generic open quantum systems. §.§ Generalized relaxation and decoherence times The notions of relaxation and decoherence times were first introduced in the theory of nuclear magnetic resonance <cit.> and have been applied to many open physical systems since then. In such systems the interaction with surroundings causes thermal equilibration and loss of information. On a fundamental level, the first process, described by the relaxation time T_1, is explained by the redistribution of the state populations due to the energy exchange with the surroundings. The second process has a different timescale T_2 (typically much shorter than T_1) and is due to the loss of quantum coherence. These timescales turned out to be especially relevant in the context of quantum information science where they are used to characterize the quality of qubits. For two-level systems, the times T_1 and T_2 are uniquely defined. However, qubits are often realized on quantum systems which have more than two states.
Though the quantum information in this case is still recorded in the subspace of two relevant states, the transitions to other states induced by the surroundings cannot be disregarded. Thus the notions of relaxation and decoherence times for such qubits have to be clarified. In this subsection we first show how the timescales T_1 and T_2 arise for a single atom with spin S=1/2 (a two-level system) and study their behavior as different parameters of the setup, such as the voltage and the tip polarization, are tuned. We then discuss possible generalized definitions for T_1 and T_2 that (i) work for systems with more than two levels, (ii) are physically justified, and (iii) recover the standard T_1 and T_2 for two-level systems. §.§.§ Statement of the problem for spin S=1/2 A single atom with spin S=1/2 is a two-level system whose density matrix has a standard geometrical representation as a Bloch vector. The connection between the Bloch vector p (with |p|⩽1 for physically meaningful states) and the density matrix ρ (in the eigenbasis of H_S=-1/2hσ^z) is given by the relations ρ=1/2(I+σ·p), p=tr(σρ), where I is the identity matrix and σ={σ^x,σ^y,σ^z} is the Pauli vector. The Redfield equation (<ref>) for ρ maps to the Bloch equation of the form ∂_tp=-ω×(p-p_0)-R.(p-p_0), with explicit expressions for a vector ω, a real symmetric matrix R and the steady state p_0 given in Appendix <ref>. We first consider the situation when the tip is not polarized and no voltage is applied across the system, i.e., p=0 and V=0. In this case the atom relaxes to the Boltzmann state with Bloch vector p_0=tanh(β h/2)ẑ, and ω={h-(1/T_φ)[f(β h)-(β h/π)ln(h/cW)]}ẑ, where we introduced T_φ=2πβ/(γ_s+γ_t)^2. The matrix R has the structure R.p=(1/T_1)p_∥+(1/T_2)p_⊥, so that the relaxation of the Bloch vector components p_∥ and p_⊥ parallel and perpendicular to ẑ is decoupled. The timescales T_1 and T_2 characterize the processes of the population imbalance relaxation and the decoherence correspondingly and are given by 1/T_1=(1/T_φ)[g(β h)+g(-β h)], 1/T_2=(1/T_φ)[(g(β h)+g(-β h))/2+1], where g(x) is defined below Eq. (<ref>). One can see that 1/T_2=1/(2T_1)+1/T_φ, so that T_φ has the physical meaning of the pure dephasing time <cit.>. Both timescales depend on the energy difference between the two states ω_1-ω_0=h, reaching their maximum for h=0. They might be associated with the eigenvalues of the superoperator ℒ from the master equation (<ref>). For spin S=1/2 this superoperator has one zero eigenvalue, λ_0=0, corresponding to the steady state, and three nonzero eigenvalues λ_i, i=1,2,3, with λ_1 being real and the two others forming a pair of conjugates λ_2=λ_3^*. They relate to the relaxation and decoherence times as λ_1=-1/T_1, λ_2,3=-1/T_2± iω. If unpolarized current is driven through the atom, i.e., p=0 and V≠0, the steady state p_0 and ω remain aligned with ẑ but depart from their equilibrium magnitudes. The relaxation matrix still has the structure (<ref>), but the timescales T_1 and T_2 differ from the equilibrium ones (<ref>) and (<ref>): 1/T_1=1/T_1|_V=0+(γ_sγ_t/2πβ)[g̃(h,V)+g̃(-h,V)], 1/T_2=1/T_2|_V=0+(γ_sγ_t/2πβ)[(g̃(h,V)+g̃(-h,V))/2+g̃(0,V)], where we used g̃(h,V)=g(β(h+eV))+g(β(h-eV))-2g(β h)>0. In this form it is evident that T_1 and T_2 decrease compared to their zero-bias values. If polarized current is driven through the atom, i.e., p≠0 and V≠0, both ω and p_0 are affected by the spin transfer torque and, as soon as P∦B, are no longer aligned with ẑ nor with each other.
The dynamics of the components p_∥ and p_⊥ of the Bloch vector get coupled, i.e., the relaxation matrix does not have the form (<ref>). In this case the standard concepts of timescales T_1 and T_2 become ill defined and have to be reconsidered. The possible solution is to define them through Eqs. (<ref>) and (<ref>). Thus introduced T_1 and T_2 are univocal for a two-level system whose superoperator ℒ has one real nonzero eigenvalue and two eigenvalues forming a complex-conjugated pair. In this case the decay of the density matrix corresponding to the eigenvalue (<ref>) mostly affects the populations, while the eigenvalues (<ref>) correspond to the decay process mainly involving coherences. However, it might also happen that ℒ has three distinct real eigenvalues <cit.>, in which case such clear demarcation between relaxation and decoherence is not possible.§.§.§ Definition of generalized relaxation and decoherence times Our ability to introduce relaxation and decoherence times in the previous subsection relied on the fact that the system had two levels, and one was able to uniquely express T_1 and T_2 through the eigenvalues of the superoperator ℒ. This is not possible for spin systems with more than two levels. If two distinct eigenstates of the system |0⟩ and |1⟩ are used to realize a qubit, the timescales T_1 and T_2, by their physical meaning, should be defined as decay times for ρ_00-ρ_11 and ρ_01 correspondingly. However, dynamics of these quantities is governed by multiple timescales determined by the eigenvalues of ℒ. This is due to the fact that transitions to other eigenstates may happen, so that the state of the system does not remain in the subspace spanned by |0⟩ and |1⟩, as was the case for two-level systems.We now introduce generalized definitions for the relaxation and decoherence times given they must characterize the ability of a qubit to store information about the initial state. For a system with the Hilbert space of dimension d, the initial states are prepared within the subspace {|0⟩,|1⟩} and can be written as ρ_v=1/2(I+v·σ), where I and σ^x,y,z are d× d matrices obtained from corresponding 2×2 matrices by filling missing elements with zeros. The distinguishability of two quantum states can be characterized by the Hilbert-Schmidt distance between their density matrices ||ρ-ρ'||≡√([(ρ-ρ')^2]). This distance does not change over time in isolated systems but decreases in open quantum systems whose density matrices evolve towards a steady state. Therefore we consider the quantityD(Δ,t)≡||ρ_v(t)-ρ_v'(t)||/||ρ_v-ρ_v'||,where Δ=v-v' and ρ_v(t)=e^ℒtρ_v is the time evolved density matrix. Our goal is to associate the generalized T_1 and T_2 with the decay timescales of this quantity. We first note thatρ_v(t)-ρ_v'(t) =1/2Δ· e^ℒtσdepends only on the difference between vectors v and v' which was used in Eq. (<ref>). Treating matrices as vector states for the superoperator action and using the notation ρ→ρ, one may decompose the master equation superoperator as ℒ=∑_iλ_iρ_iρ̃_i, where ρ_i and ρ̃_i are eigenvectors of ℒ with eigenvalue λ_i, obtaining ρ_v(t)-ρ_v'(t)=1/2∑_iaρ̃_iσ^aΔ_ae^λ_itρ_i,where a=x,y,z. 
The distance between time evolved density matrices then reads||ρ_v(t)-ρ_v'(t)|| =√(1/2∑_aa'M_aa'(t)Δ_aΔ_a'),where we introduced a time-dependent 3×3 matrix M(t) with elementsM_aa'(t) =1/2∑_ii'ρ̃_̃ĩσ^aρ̃_̃ĩ'̃σ^a'×(ρ_iρ_i')e^(λ_i+λ_i')t.Equation (<ref>) then transforms to D(Δ,t) =√(Δ^T.M(t).Δ/Δ^T.Δ).From this expression one may see that the decay of D(Δ,t) has three timescales determined by the decay times of three eigenvalues of M(t). We denote these eigenvalues as m_k(t), k=1,2,3 and define timescales T_k through the relations √(m_k(T_k))=1/e, using a (quasi)monotonic decrease of m_k(t) from 1 at t=0 to 0 at t=∞.For most of the systems considered below, we found that two of these timescales form a pair of close values T_2≃ T_3 (differing by less than 1%) and the third one T_1 is different from them. In such cases we associate T_1 with relaxation and T_2=T_3 with decoherence, in analogy with two-level systems. As shown in Appendix <ref>, the introduced relaxation and decoherence times coincide with the standard definitions for two-level systems when dynamics of populations and coherences are decoupled. In case all three timescales T_1, T_2, and T_3 are distinct from each other, a clear identification of relaxation and decoherence processes is not possible as these timescales generically correspond to rates of information loss that affect both diagonal and off-diagonal elements of the density matrix. §.§ Relaxation and decoherence for different setups In order to store and manipulate a qubit encoded in the quantum state of an assembly of interacting magnetic atoms, the system is required to have two low-lying energy states and relaxation and decoherence times much larger than storage and manipulation timescales of the qubit. In this subsection we use the introduced definitions to study the properties of T_1 and T_2 in several systems engineered to have two lowest energy states separated from the remaining part of the spectrum, as shown in Fig. <ref>. We consider (i) atomic systems that can be described by one spin variable and (ii) atomic chains coupled by spin-spin interactions. We study the dependence of the relaxation and decoherence times on the system size (spin S for single atoms and length L for spin chains) and on the parameters of the environment such as the voltage, the temperature, and the external magnetic field. We also discuss the strategies for decoherence suppression in such devices. For Ising chains with longitudinal coupling, we show three timescales T_1, T_2, and T_3 instead of two because the identification of a pair of close values is not possible in this case.§.§.§ Collective spin models In many cases the Hamiltonian of the atomic structure commutes with the square of the total spin S=∑_lS_l, and the ground state is either in the highest or lowest spin sector. Typically, the highest multiplicity is more energetically favorable due to the Hund's rule. Even if S^2 does not commute with the dissipative terms in the Liouvilian, a description in terms of a collective spin-S is possible, assuming that the other spin sectors are much higher in energy and cannot be excited by the environment. The situation described above amounts to consider a single spin-S in Eq. (<ref>). In order to have two low-lying states that can encode a qubit, we consider D<0 such as to obtain an inverted parabolic spectrum with a ground-state manifold spanned by |0⟩=|-S⟩ and |1⟩=|+S⟩ <cit.>. For simplicity we consider the case E=h=0. 
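With E = h = 0 the Hamiltonian reduces to the diagonal H = D S_z^2, so the level structure behind this qubit encoding can be made explicit in a few lines. A minimal sketch (D set to the anisotropy value used in the calculations of this section):

import numpy as np

# Uniaxial collective spin, H = D * Sz^2 with easy-axis D < 0: inverted
# parabolic spectrum with a twofold degenerate ground manifold |+S>, |-S>.
D = -0.1   # meV
for S in (1, 2, 5):
    m = np.arange(-S, S + 1)                   # Sz quantum numbers
    E = D * m**2                               # energy of |m>
    print(S,
          np.sort(E)[1] - np.sort(E)[0],       # 0: degenerate ground doublet
          np.abs(np.diff(E)).max(),            # largest step, = |D|(2S-1)
          2 * S)                               # steps in the cascade -S -> +S

Switching between |-S> and |+S> thus requires a cascade of 2S single-step transitions whose largest energy cost is |D|(2S-1), which is the mechanism invoked in the discussion that follows.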
The times T_1 and T_2 for the qubits realized on these states are presented in Fig. <ref> as functions of the voltage and the temperature for different values of S. In the numerical calculations we assume an unpolarized tip and use the parameters D=-0.1 meV for the anisotropy, J_s=J_t=1 meV for the inelastic coupling to the leads, and W=10 meV for the bandwidth of the leads. One observes that the relaxation time T_1 increases with S, while the decoherence time T_2 shows the opposite behavior. This can be explained by the fact that the lowest order transitions induced by the environment can only change the atomic spin projection by 1. Switching between the two ground states thus requires a series of 2S transitions with the largest transition energy being D(2S-1). Two effects then lead to an increase of T_1 as S increases: More transitions are needed in the cascade, and the transition energy gets larger. At low temperatures, the Boltzmann weights account for the exponential growth of T_1, while at large T, the relaxation time is inversely proportional to the temperature. The decoherence time shows 1/T behavior in the whole temperature region. The dependence of T_2 on the atomic spin is explained by the fact that the two ground states have distinct magnetizations, and the larger they are, the more effectively spin fluctuations destroy the coherence between them. The effect of the voltage in the reduction of both T_1 and T_2 is explained by the fact that higher energy electrons, inducing faster transitions and larger spin fluctuations, become accessible by inelastic scattering. In the large voltage limit, both the relaxation and decoherence times decrease as 1/V which is simply explained by the asymptotic behavior of the function κ(ω) in Eq. (<ref>).Let us now consider a different situation where the Hund's rule is not satisfied, and the lowest energy states are singlets. The first nontrivial case arises for a system of four S=1/2 atoms where the two singlets are formed. Since the matrix elements of these states with the environment coupling terms vanish, the evolution within the singlet subspace is purely unitary. Such decoherence-free subspace was exploited in Ref. <cit.> to realize a qubit robust to local decoherence. However, while the leading order decoherence effects of the considered system are suppressed, finite values of T_1 and T_2 are still expected due to the presence of the other spin sectors.Such a system can be realized setting J_ll'=J>0 in Eq. (<ref>) with four 1/2-spins, in which case the ground state is doubly degenerate and comprised of two singlets |0⟩ and |1⟩. One may realize such a Hamiltonian with four atoms coupled to each other pairwise with the same Heisenberg energy, i.e., in the tetrahedron geometry. The times T_1 and T_2 for the qubit encoded within the singlet states are shown in Fig. <ref> as functions of the voltage and the temperature. In the calculations we assume the unpolarized tip and use the parameters J=0.5 meV for the Heisenberg energy, J_s=J_t=1 meV for the inelastic coupling to the leads, and W=10 meV for the bandwidth of the leads. Note that, in this system, the relaxation and decoherence times are of the same order in the whole region of parameters. This means that pure dephasing is absent, and the finite value of T_2 is only due to the transitions between the singlet states mediated by the higher spin sectors whose states have higher energy. 
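The protected character of this singlet pair can be verified directly by exact diagonalization of the 16-dimensional Hilbert space. A minimal sketch (J as quoted above; the helper names are illustrative):

import numpy as np
from functools import reduce

# Four S=1/2 spins with equal pairwise Heisenberg coupling (tetrahedron):
# H = J * sum_{l<l'} S_l . S_l'. The ground space is spanned by two singlets.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, l, n=4):
    mats = [np.eye(2, dtype=complex)] * n
    mats[l] = op
    return reduce(np.kron, mats)

J = 0.5   # meV
H = sum(J * site_op(a, l) @ site_op(a, m)
        for a in (sx, sy, sz) for l in range(4) for m in range(l + 1, 4))

E, V = np.linalg.eigh(H)
print(np.round(E[:3], 6))   # two degenerate singlets below the triplet sector
g0, g1 = V[:, 0], V[:, 1]
elems = [abs(np.vdot(gi, site_op(a, l) @ gj))
         for a in (sx, sy, sz) for l in range(4)
         for gi in (g0, g1) for gj in (g0, g1)]
print(max(elems))   # ~1e-16: every local spin matrix element vanishes

All matrix elements of the local spin operators within the ground doublet vanish, so the dissipative part of the master equation projected onto this subspace is identically zero; this is the decoherence-free structure whose consequences are discussed next.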
Thus in the limit T→0, due to the Boltzmann factor, both relaxation and decoherence are exponentially suppressed. This should be contrasted with the case of a single spin where T_2∝1/T. The absence of pure dephasing can be explained by the fact that all matrix elements of local spin operators in the ground state subspace are zero, i.e., ⟨i|S_la|j⟩=0 for i,j=|0⟩,|1⟩, and the right hand side of the master equation projected onto this subspace vanishes identically.§.§.§ Spin chains We now study the properties of the relaxation and decoherence times for two types of S=1/2 spin chains: (i) Ising chains and (ii) transverse field Ising chains with longitudinal coupling. For both of them we assume that the tip is coupled to the atom at the end of the chain, i.e., l̃=1.The Ising chain is described by the HamiltonianH_S=J∑_l=1^L-1S_lzS_l+1,z,where atoms have spin 1/2 and L is the chain length. The system has doubly degenerate ground states which are (anti)ferromagnetically ordered for J<0 (J>0). The relaxation and decoherence times for the qubit realized on these states are presented in Fig. <ref> as functions of the voltage and the temperature for the chains of different length. In the calculations we assume the unpolarized tip and use the parameters J=-1 meV, J_s=J_t=1 meV for the inelastic coupling to the leads, and W=10 meV for the bandwidth of the leads. One can observe that T_1 increases with the chain length, while T_2 shows the opposite behavior. However, the dependence of the decoherence time on the system size is not as pronounced as in the case of single atoms of different total spin S considered above. This is because the energy barriers for the excitation processes do not scale with L, and the transition between the two ground states is realized by the excitation of a spin at the chain edge and the diffusive propagation of the domain wall to the opposite edge. Applying a voltage or increasing the temperature results in the decrease of T_1 and T_2. Both timescales behave as 1/T and 1/V in the limits of large temperature and large voltage correspondingly. Similarly to the case of single atoms, the relaxation time shows the exponential growth at low temperature, while the decoherence time preserves 1/T behavior in the whole temperature region. The results are in qualitative accordance with experimental data from Ref. <cit.> where the properties of the switching rate (we associate it with 1/T_1) between two lowest energy states in spin chains were studied. Finally, we study the properties of the relaxation and decoherence times when topological states are used for qubit realization. For this we consider the transverse field Ising chain with longitudinal coupling described by the HamiltonianH_S=J∑_l=1^L-1(S_lxS_l+1,x+gS_lzS_l+1,z)-h∑_l=1^LS_lz,where we assume the parameters J, g, and h to have positive values. In the thermodynamic limit L→∞ this model experiences a transition from paramagnetic to magnetically ordered phase upon tuning the field value h. In the paramagnetic phase, below the critical value of the field, the system has a twofold degenerate ground state which might be described in terms of the Majorana edge states <cit.>. For finite chains the degeneracy is lifted but the topological order manifests itself in the level crossings of the two lowest energy states as the field value is changed <cit.>. Figure <ref> shows the dependence of the times T_1 and T_2 for the qubit realized with these states on the field value h for the chains of different length L. 
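The crossing structure itself is a property of the isolated chain and can be located by exact diagonalization before any coupling to the environment is considered. A minimal sketch (open boundary conditions and the chain length are assumptions of this illustration; couplings as quoted in the calculations below):

import numpy as np
from functools import reduce

# Transverse-field Ising chain with longitudinal coupling,
# H = J sum_l (Sx_l Sx_{l+1} + g Sz_l Sz_{l+1}) - h sum_l Sz_l:
# scan h and track the splitting of the two lowest levels; its sharp
# minima mark the level crossings discussed above.
sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, l, n):
    mats = [np.eye(2)] * n
    mats[l] = op
    return reduce(np.kron, mats)

def hamiltonian(h, L, J=1.0, g=1.0):
    H = sum(J * (site_op(sx, l, L) @ site_op(sx, l + 1, L)
                 + g * site_op(sz, l, L) @ site_op(sz, l + 1, L))
            for l in range(L - 1))
    return H - h * sum(site_op(sz, l, L) for l in range(L))

L = 6
hs = np.linspace(0.0, 1.5, 301)
gap = np.array([np.diff(np.linalg.eigvalsh(hamiltonian(h, L))[:2])[0] for h in hs])
minima = np.where((gap[1:-1] < gap[:-2]) & (gap[1:-1] < gap[2:]))[0] + 1
print(hs[minima])   # candidate crossing fields for this chain length

The coupling to the leads then merely probes these splittings through the energy dependence of the transition rates.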
In the calculations we assume the unpolarized tip and use the parameters J=1 meV, g=1, k_BT=0.1 meV for the temperature, J_s=J_t=1 meV for the inelastic coupling to the leads, and W=10 meV for the bandwidth of the leads. One sees an increase in the relaxation and decoherence times at values of h corresponding to the level crossings. This behavior can be explained by the dependence of the transition rates on the excitation energy, see Appendix <ref>, and has nothing to do with the topological nature of the qubit states. Thus the topological order does not suppress environment induced decoherence in this case. If the field value is kept at its critical value, three timescales of the information loss associated with the decay of the quantity (<ref>) do not contain a pair of close values, as was the case in all previous examples. The dependence of all three timescales on the voltage and the temperature is shown in Fig. <ref>, where the value of the field is chosen at the largest level crossing point for each L. One might see that T_1, T_2, and T_3 are of the same order and demonstrate similar behavior. At low temperatures, these timescales do not show an exponential growth because direct transitions between |0⟩ and |1⟩ are allowed, i.e., ⟨0|S_la|1⟩≠0 in general.§ CONCLUSIONIn conclusion, we applied a master equation approach to study relaxation and decoherence in engineered atomic spin structures due to their interaction with electrons from surroundings. Our goal was to characterize these two processes by the corresponding timescales T_1 and T_2 for a qubit realized on the two lowest energy states of an atomic spin structure. We thus analyzed the dynamics of the corresponding 2×2 subspace of the atomic density matrix. We encountered a problem with the usual association of the relaxation and decoherence times with the decay of the population imbalance and the off-diagonal matrix elements. The only case when the standard definition can be used is for a single atom with spin S=1/2 whose density matrix can be unequivocally mapped onto the Bloch vector. But even in this case, the standard definition has limited applicability since it implicitly assumes dynamics of coherences and populations to be decoupled. This condition is not fulfilled if the tip polarization is not parallel to the polarization of the qubit states. For systems with more than two states, the problem of defining T_1 and T_2 gets even more severe, as the dynamics of the populations and coherences within the qubit subspace couple to higher energy states. In the present paper, we generalize the concepts of the relaxation and decoherence times by introducing three timescales T_1, T_2, and T_3 that characterize the process of the information loss. These timescales match the standard relaxation and decoherence times, with T_2=T_3, whenever the latter two are well defined.Equipped with the generalized definitions for the relaxation and decoherence times, we studied several classes of systems where the qubit states are identified with the two lowest energy states. We found that in most cases T_2≃ T_3, and one can reduce three timescales to two, associating T_1 with relaxation and T_2,3 with decoherence. Analysis of single atoms of different spin and spin chains of different length showed that T_1 generically increases with the system size, while T_2 does the opposite. 
The growth of T_1 with system size is faster for the collective spin models where not only the number of cascade transitions, needed to pass from one low-lying energy state to another, increases, but the energies of the transitions also do. We found that both relaxation and decoherence times behave as 1/T and 1/V in the limits of large temperature and large voltage correspondingly. The low temperature behavior of T_1 depends on whether the environment is able to induce direct transitions between the qubit states. If not, the relaxation time grows exponentially as the temperature is decreased, due to the Boltzmann weight. Strategies to increase T_1 pass therefore by providing an energy barrier for transitions between the two states. The low temperature properties of T_2 are determined by the average polarizations of the atoms in the qubit states. If they are not zero, the environment causes dephasing and T_2∝1/T behavior is preserved. If all the atoms are unpolarized in both qubit states, the dephasing is absent and the decoherence is fully due to transitions to higher energy states, i.e., T_2 behaves similarly to T_1 and might be exponentially suppressed as well by lowering the temperature. This situation was observed in the collective spin model where both qubit states are singlets.In more complex systems, here illustrated by a transverse field Ising chain with longitudinal coupling tuned such that the ground state is degenerate, the three timescales are seen, i.e., T_2≠ T_3. In this case, the demarcation between the relaxation and decoherence processes is not possible since they are substantially coupled to each other.This work gives specific directions towards the engineering of decoherence-free spin based qubits and shows that the implementation of this strategy in engineered atomic spin devices is realistic. We note that our model includes only leading order processes in the atom-lead coupling and assumes a decorated environment for each atom. At very low temperatures, higher order or nonperturbative effects may limit the exponential growth of T_2. The sources of decoherence are not only limited to the electrons in the metallic surface, as the magnetic degrees of freedom also couple to substrate phonons and nuclear spins <cit.>. The present strategy will improve decoherence times as long as the leading coupling to the environment involves a magnetic exchange interaction, as in the case of nuclear spins. Our work motivates several questions. One is the possibility of implementing a decoherence-free subspace for a single qubit whose T_1 and T_2 can be made larger by increasing the number of spins in the structure. This would permit us to improve the decoherence times arbitrarily by increasing the resources allocated to encode each qubit. The use of topologically protected states <cit.> and skyrmions <cit.> indeed goes in this direction, however it is hard to envision its implementation with EASDs. Another question is how to scale this strategy to multiple interacting qubits and how to perform current measurements that can distinguish the qubit states. We are thankful to L.-H. Frahm for fruitful discussions. This work was funded by RSF Grant No. 16-42-01057 and DFG Grant No. LI 1413/9-1. P.R. acknowledges support by FCT through the Investigador FCT contract IF/00347/2014 and Grant No. 
UID/CTM/04540/2013.§ RATE EQUATIONS The rate equations for the populations of spin states∂_tp_α=∑_βR_αβp_β-R_βαp_αare obtained by substituting the density matrix of the form ρ_αβ=p_αδ_αβ into the Redfield equation (<ref>). The transition rates in Eq. (<ref>) are given byR_αβ=2∑_laa'⟨α|_laa'|β⟩⟨β|S_la|α⟩and might be expressed asR_αβ=2/β∑_laa'⟨α|S_la'|β⟩⟨β|S_la|α⟩×∑_νν'u_laa'^νν'g(β(ω_α-ω_β-μ_ν+μ_ν')),where g(x)=x/(e^x-1). Note that this expression does not contain the imaginary part of κ(ω) defined in Eq. (<ref>).If the tip is absent, the transition rates are given byR_αβ=_s^2/πβ g(β(ω_α-ω_β))∑_la|⟨α|S_la|β⟩|^2.In the low temperature case, when transitions from the qubit states |0⟩ and |1⟩ to the higher energy states are exponentially suppressed, the relaxation time might be approximated by1/T_1=R_01+R_10 =_s^2/πβ[g(βΔ)+g(-βΔ)]∑_la|⟨0|S_la|1⟩|^2,where Δ=ω_1-ω_0 is the energy splitting between the qubit states. The function T_1(Δ) reaches its maximum at Δ=0 which explains Fig. <ref> discussed in Sec. <ref>. One may also see that the relaxation time obtained from Eq. (<ref>) is infinite if the matrix elements of all local spin operators S_la between qubit states are zero, i.e., ⟨0|S_la|1⟩=0. The real value of T_1 in this case is determined by transition rates to higher energy states which are exponentially small.§ COMPARISON OF MASTER EQUATIONS In this appendix we compare the results for the magnetoresistive and inelastic current spectra calculated with (i) rate equations, (ii) master equation with only real part of κ(ω), (iii) master equation with full κ(ω). We consider a single atom with spin S=1/2 in the magnetic field perpendicular to the polarization of the current, the simplest example where all three methods give different results.The spectra are presented in Figs. <ref> and <ref> where their dependence on the Zeeman splitting is shown. In the calculation we use the parameters p=1, k_BT=0.1 meV, J_t=J_s=5 meV, and vary the bandwidth W=10, 20 meV. One might see that the results calculated with three methods are different from each other. However, the comparison of the plots for different bandwidth indicates the decrease of the discrepancy as the coupling gets smaller. In the large W limit, the results of both master equation methods approach the results of the RE method. This behavior is explained by the vanishing of the spin transfer torque that is responsible for sustaining coherences in the steady state <cit.>. In the opposite case, when W is decreased, the effect of coherences becomes more pronounced, and the terms originating from the imaginary part of κ(ω) come into play. However, for the bandwidths smaller than some critical value W_c, the steady state density matrix becomes unphysical. This shortcoming is due to the limited applicability of the master equation approach to open quantum systems strongly coupled to the surroundings.§ BLOCH EQUATION FOR S=1/2 To map Eq. (<ref>) onto the Bloch equation for a single atom with spin S=1/2, we use the relations (<ref>) and (<ref>) connecting the density matrix and the Bloch vector. One obtains∂_tp_a=(σ^a∂_tρ)=-i([σ^a,H_S]ρ) +1/2∑_a'p_a'∑_b(σ^a([Λ_bσ^a',S_b]+H.c.)) +1/2∑_b{σ^a([Λ_b,S_b]+H.c.)} , a=x,y,z,where all matrices are taken in the eigenbasis. With H_S=-1/2hσ^z, the unitary part of the Redfield equation becomesih/2([σ^a,σ^z]ρ) =-h∑_bε_azb(σ^bρ) =-h(ẑ×p)_a,and Eqs. 
(<ref>) may be rewritten in the form ∂_t p = -ω_0×p - R_0.p + F, where ω_0 = hẑ, and we defined

(R_0)_aa' = -1/2 {σ^a ∑_b ([Λ_b σ^a', S_b] + H.c.)},
F_a = 1/2 {σ^a ∑_b ([Λ_b, S_b] + H.c.)}.

We split the matrix R_0 into the symmetric part (R_0+R_0^T)/2 = R and the antisymmetric one (R_0-R_0^T)/2 = Δω× to obtain the Bloch equation (<ref>), where ω = ω_0+Δω and p_0 = (ω× + R)^-1.F.

§ GENERALIZED TIMESCALES OF RELAXATION AND DECOHERENCE FOR S=1/2

In this appendix we apply the definitions for the generalized timescales of relaxation and decoherence introduced in Sec. <ref> to the case of a single atom with spin S=1/2 and compare them with the standard relaxation and decoherence times T_1 and T_2 from Eq. (<ref>).

For an unpolarized tip, when dynamics of the populations and coherences are decoupled, the superoperator of the master equation has the matrix form

ℒ = ( [ -(1-p_0)/(2T_1)  0  0  (1+p_0)/(2T_1) ;  0  -1/T_2+iω  0  0 ;  0  0  -1/T_2-iω  0 ;  (1-p_0)/(2T_1)  0  0  -(1+p_0)/(2T_1) ] ).

We obtained it from the Bloch equation (<ref>) using the relations (<ref>), (<ref>), (<ref>), and the fact that p_0 and ω are aligned with ẑ. The right and left eigenvectors of the superoperator ℒ are

ρ_0 = ( [ (1+p_0)/2  0 ;  0  (1-p_0)/2 ] ),  ρ̃_0 = ( [ 1 0 ; 0 1 ] ),
ρ_1 = ( [ 1 0 ; 0 -1 ] ),  ρ̃_1 = ( [ (1-p_0)/2  0 ;  0  -(1+p_0)/2 ] ),
ρ_2 = ρ̃_2 = ( [ 0 1 ; 0 0 ] ),  ρ_3 = ρ̃_3 = ( [ 0 0 ; 1 0 ] ),

with the eigenvalues given by λ_0=0 and Eqs. (<ref>) and (<ref>). Substitution of the eigenvectors (<ref>) into the matrix elements (<ref>) results in

M(t) = ( [ e^-2t/T_2  0  0 ;  0  e^-2t/T_2  0 ;  0  0  e^-2t/T_1 ] ).

One can see that two of the three decay times of the eigenvalues of M(t) coincide with T_2, and the third is equal to T_1. Thus the definitions introduced in Sec. <ref> reproduce the standard relaxation and decoherence timescales for two-level systems. | http://arxiv.org/abs/1706.08364v2 | {
"authors": [
"Alexey M. Shakirov",
"Alexey N. Rubtsov",
"Alexander I. Lichtenstein",
"Pedro Ribeiro"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170626133212",
"title": "Relaxation and decoherence of qubits encoded in collective states of engineered magnetic structures"
} |
Private Data System Enabling Self-Sovereign Storage Managed by Executable Choreographies

Sinică Alboaie^1,2, Doina Cosovan^1
^1 Alexandru Ioan Cuza University of Iasi, Romania
^2 Technical University of Cluj-Napoca, Romania

Author's version. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-59665-5_6.

Received 26th June 2017 / Accepted 25th January 2018
===========================================================================================================================================================================================================

With the increased use of the Internet, governments and large companies store and share massive amounts of personal data in a way that leaves no space for transparency. When a user needs to achieve a simple task like applying for college or a driving license, he needs to visit a lot of institutions and organizations, thus leaving his private data in many places. The same happens when using the Internet. These privacy issues raised by centralized architectures, along with the recent developments in the area of serverless applications, demand a decentralized private data layer under user control.

We introduce the Private Data System (PDS), a distributed approach which enables self-sovereign storage and sharing of private data. The system is composed of nodes spread across the entire Internet managing local key-value databases. The communication between nodes is achieved through executable choreographies, which are capable of preventing information leakage when executing across different organizations with different regulations in place.

The user has full control over his private data and is able to share and revoke access to organizations at any time. Even more, thanks to the system design, updates are propagated instantly to all the parties which have access to the data. Specifically, the processing organizations may retrieve and process the shared information, but are not allowed under any circumstances to store it long term.

PDS offers an alternative to systems that aim to ensure self-sovereignty of specific types of data through blockchain-inspired techniques but face various problems, such as low performance. Both approaches propose a distributed database, but with different characteristics. While the blockchain-based systems are built to solve consensus problems, PDS's purpose is to solve the self-sovereignty aspects raised by privacy laws, rules and principles.

§ INTRODUCTION

Every time a user needs to create an account, he needs to provide a lot of private information, like name, birth date, gender, marital status, and so on. Even more, he needs to choose and answer some security questions for account recovery in case he forgets the password, or simply for user validation when performing sensitive actions. These security questions usually consist of private data as well. This way, each user spreads his private data to a lot of organizations / companies / service providers. This raises two main issues: one related to data protection and the other to data duplication. In regards to data protection, each organization has its own ways of storing and protecting the data. Some are better than others. The user's data is only as safe as the weakest organization to which the user provided it. Thus an attacker can target the weakest link to learn private information.
The data duplication issue consists mainly of the fact that changing a piece of private information (like changing the last name by getting married) requires updating it in all the places where this particular piece of private information was saved, which is burdensome and time-consuming.

One existing way of mitigating these problems is single sign-on. However, this makes the user dependent on the single sign-on provider, because losing access to the account used for single sign-on means losing access to all the accounts authenticating the user with this single sign-on account.

We propose a solution that enables users to keep full control over their private data. The Private Data System (PDS) is a distributed, scalable system composed of three types of nodes, transparently spread across the entire Internet: audit, index and storage nodes. Each node manages a local key-value database, and each type of node has its own purpose in the system, as explained below.

Each piece of private information is split into undecipherable chunks. Each chunk is assigned a different partial key in a different key-value database managed by a different storage node. The association between these keys is stored under a master key in a key-value database managed by an index node. Since the data needs to be accessible by different processing nodes, the master key is referenced by different key references. The association between them is stored under a key reference in a key-value database managed by an audit node. The key references are the only points of access to the actual data. Hence the entry stored under a key reference also contains information regarding who owns the referenced data, who this particular key reference was shared with, and metadata describing the referenced data.

The communication between nodes is achieved with the help of executable choreographies, which visit the needed nodes in the needed order, execute on each node the needed operations, and return to the user with the results.

An interesting use case for PDS is social networks and other systems which, besides private data, also manage trust and reputation data <cit.>.

We start by reviewing the related work (Section <ref>) and introducing the system along with its elements (Section <ref>). Then, we explain how CRUD (create, read, update, delete) and sharing / revoking operations work (Section <ref>). In the end, we analyze the proposed system from the privacy perspective (Section <ref>), conclude (Section <ref>), and present future directions for the proposed system (Section <ref>).

§ RELATED WORK

Smart systems integrate technology, organizations and people in order to accomplish complex processes that are controlled by computer systems. For a large number of integration points, integration is achieved through classical ESB (Enterprise Service Bus)-type systems <cit.>, MOM (Message-Oriented Middleware) systems <cit.>, systems based on EIP (Enterprise Integration Patterns) <cit.>, or through the orchestration of services via custom code or languages used to model business processes <cit.>.

All these methods tend to be sufficient for integrating the components belonging to one organization. On the other hand, the integration among multiple organizations should be addressed using choreographies, as any centralized solution is risky in terms of security and private data protection.
Composition of systems using orchestration tends to create centralized systems.Although many authors perceive choreographies as a mechanism to describe in a more formal way the contracts among several organizations <cit.>, the academic research proposed the concept of executable choreographies <cit.> <cit.> <cit.>. They suggest transforming the descriptions of the choreographies in code that is executed inside each organization participating in the choreography. As such, a choreography is not only a formal description of a contract among organizations but also a description of a workflow in an executable way. The same description (choreography) gets to run in several organizations in a decentralized manner (without the need for a centralized conductor) and therefore any need to translate the choreography into other programming languages disappears.While PDS could be implemented outside the world of the executable choreographies, we believe that choreographies are suitable for the complex workflows operating across multiple organizations. The code of the executable choreographies is verifiable at a higher level and can provide confidence that the implementation provides the privacy properties of the theoretical model.Another advantage of the executable choreographies is that it comes with a solution for the self sovereign identity. The Sovrin Foundation explains in <cit.> why the rise of the self sovereign identities was inevitable and details the path that had to be traversed for the community to come to this conclusion.For PDS, the data owner must be identified and authorized. Supplementary benefits in the data leakage preventions could be achieved if the data owner is identified in all the other organizations contributing to a request without leaking its identity (using some anonymous aliases controlled by the data owner). The executable choreographies aim at offering these benefits without any supplementary implementation effort. However, detailing the way in which the self sovereign identities used by the data owners and processors are authenticated and authorized is not the purpose of this paper. It is a complex enough topic to require its own paper, so we will revisit this issue at a later date.In regards to the advances related to data sovereignty, we would like to mention <cit.>, which proposes storing encrypted data in cloud federations and <cit.>, which proposes sovereign information sharing in order to integrate the information belonging to autonomous entities. Queries are executed on the databases and reveal only the results. The work is continued in <cit.> which enables sovereign information sharing using web services. This work applies to service providers which want to allow queries on their databases without sharing the content on which the queries are executed. Our work focuses on the average user which needs to own and store his data in a single place and provide / revoke access to it to various service providers as needed.Note that <cit.> introduces the data sovereignty notion for establishing the nation-state where the cloud storage service providers are storing the data physically in order to ensure they are meeting their contractual geographic obligations. In this paper, we consider data sovereignty to be the ability of the user to have full control over his data and the entities to which it is shared or revoked.States and international organizations start to gradually introduce principles and standards, the most notable being Privacy By Design <cit.>. 
The large-scale collection of information, combined with the absence of technical constraints on how companies can use the data, intentionally or unintentionally, begins to be perceived as a risk. On the one hand, there are risks for companies, because users could refuse to adopt privacy-challenged technologies. On the other hand, there are risks for society as a whole, the most obvious being the potential for some companies to influence society in illegal and immoral manners.

Commercial exploitation of private data has come to create the impression that people are exploited commercially in ways that do not adequately compensate for the risks they take. A more transparent model that allows fair and equitable use of personal data is needed. Considering all these aspects, this article proposes a software architecture in which the places where private data is stored are under the strict control of the user or his delegates.

§ SYSTEM ELEMENTS

In this section, we define the terminology used for the Private Data System throughout this paper.

First, we define the following roles:
Data Owner (DO) represents the identity which owns the data.
Data Processor (DP) represents the identity which processes the data; the identity to which the data was shared.

Second, we define the following types of data:
Private Data (PD) represents the private data which is to be stored in the system; if a piece of private data PD is split into n undecipherable chunks, then PD_i, i = 1,n is an undecipherable chunk of data.
Metadata (MD) specifies the relationship between the Private Data and Data Processors by labeling the data according to the Data Owner and ontologies.

Third, since the system is based on key-value databases, we define the following keys for data storage, associations, and references:
Master Key (MK) represents and anonymizes a piece of private information.
Partial Key (PK) represents and anonymizes one undecipherable chunk from the set of undecipherable chunks in which a piece of private information was split. Thus, the MK is associated with the set of PKs which represent the set of undecipherable chunks needed to recombine the piece of private information.
Key Reference (KR) represents a reference to / an alias of a piece of private information (a reference to a Master Key).
Key Reference Hash (KRH) is obtained by applying a hash function on the Key Reference value and adding the address of the processing node which is to receive the results.

In the end, we define the following types of nodes:
Processing Node (PN) stores Key References and needs to retrieve and process the private data referenced by them. Processing nodes are forbidden by law to store the retrieved data on the long term.
Audit Node (AN) manages a key-value database which stores the association between Key References and the Master Keys they reference, along with the information describing the data referenced by the Master Key (Data Owner, Data Processor, and Metadata). In the database, the key is a Key Reference and the value is a tuple consisting of the Master Key, the Metadata, the Data Owner, and the Data Processor.
Index Node (IN) manages a key-value database which stores the association between Master Keys and their corresponding Partial Keys. In the database, the key is the Master Key and the value is the list of Partial Keys needed to reconstruct the piece of private information represented by the Master Key.
Storage Node (SN) manages a key-value database which stores the association between Partial Keys and Partial Messages.
In the database, the key is the Partial Key and the value is the undecipherable chunk of data represented by this particular Partial Key.

§ SYSTEM OPERATIONS

In this section we detail the way in which CRUD (Create, Read, Update, Delete) operations as well as copying, sharing, and revoking access to data are performed in the proposed system. For simplicity, we are going to use the following notations throughout this paper:
* [E_1, E_2, ..., E_n] is a list containing the elements E_1, E_2, ..., E_n.
* (E_1, E_2, ..., E_n) is a tuple containing the elements E_1, E_2, ..., E_n.
* {K_1: V_1, K_2: V_2, ..., K_n: V_n} is a dictionary in which the value V_1 is stored under the key K_1, the value V_2 is stored under the key K_2, ..., and the value V_n is stored under the key K_n.
* N_1 → N_2: M means the node N_1 sends to the node N_2 the message M, which corresponds to performing a step in the executable choreography.
* DB[K] := V means the value V is stored under the key K in the key-value database DB by the node managing DB.
* V := DB[K] means the value V associated to the key K is retrieved from the key-value database DB by the node managing DB.
* N_1: A means the node N_1 performs the action A.
* M := gen() means the message M is generated (either randomly or according to an algorithm); this is an action.
* PD_1, PD_2, ..., PD_n := split(PD) means the private data PD is split into n undecipherable chunks of data PD_1, PD_2, ..., PD_n; this is an action.
* PD := recombine(PD_1, PD_2, ..., PD_n) means the n undecipherable chunks of data PD_1, PD_2, ..., PD_n are recombined in order to obtain the initial piece of private data PD which was split to obtain them; this is an action.

§.§ Creating / Storing Private Data

The storage of private data is achieved in three phases, illustrated at a higher level in Figure <ref> and detailed in the following schema:

Phase 1
1. PN → AN: DO, MD
2. AN: MK := gen()
3. AN: KR := gen()
4. AN[KR] := (MK, MD, DO, DP)
5. AN → PN: KR, MK

Phase 2
1. PN: PD_1, PD_2, ..., PD_n := split(PD)
2. PN: chooses randomly n SNs
3. PN → SN_i: PD_i, i = 1,n
4. SN_i: PK_i := gen(), i = 1,n
5. SN_i[PK_i] := PD_i, i = 1,n
6. SN_i → PN: PK_i, i = 1,n

Phase 3
1. PN → IN: MK, PK_1, PK_2, ..., PK_n
2. IN[MK] := [PK_1, PK_2, ..., PK_n]
3. PN[alias] := KR

When a processing node needs to store private data, it starts the first phase by sending to an audit node the metadata describing the information it wants to store along with its identity (considered both data owner because it stores its information and data processor because it is the identity which is going to use the associated reference key for data retrieval). The audit node first generates a master key and a key reference, then stores the generated master key, the received metadata, and the received data owner (as both data owner and data processor) under the generated key reference in its key-value database. The audit node completes this phase by sending the generated master key and the key reference to the processing node.

In the second phase, the processing node splits the private information into n undecipherable chunks PD_1, PD_2, ..., PD_n and chooses randomly n storage nodes so that each storage node is responsible for storing a single undecipherable chunk of private data.
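This store flow, together with the retrieval flow described in the next subsection, maps directly onto ordinary key-value stores. Below is a toy, in-memory Python sketch; the dictionaries and function names are illustrative assumptions, a single process plays all node roles, and XOR secret sharing is used as one possible realization of split()/recombine() (the paper does not prescribe a concrete splitting scheme):

import secrets
import hashlib

# In-memory stand-ins for the nodes' key-value databases.
AN = {}                       # audit node:    KR -> (MK, MD, DO, DP)
IN = {}                       # index node:    MK -> [(sn_index, PK), ...]
SNS = [{} for _ in range(8)]  # storage nodes: PK -> chunk

def split(pd: bytes, n: int) -> list:
    # XOR secret sharing: any n-1 chunks are uniformly random,
    # so every stored chunk is individually undecipherable.
    chunks = [secrets.token_bytes(len(pd)) for _ in range(n - 1)]
    last = pd
    for c in chunks:
        last = bytes(a ^ b for a, b in zip(last, c))
    return chunks + [last]

def recombine(chunks: list) -> bytes:
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def store(pd: bytes, do: str, md: str, n: int = 3) -> str:
    # Phase 1: the audit node mints MK and KR and records ownership.
    mk, kr = secrets.token_hex(16), secrets.token_hex(16)
    AN[kr] = (mk, md, do, do)    # the owner is also the first processor
    # Phase 2: each chunk goes to a randomly chosen storage node,
    # which answers with a freshly generated partial key.
    pks = []
    nodes = secrets.SystemRandom().sample(range(len(SNS)), n)
    for chunk, i in zip(split(pd, n), nodes):
        pk = secrets.token_hex(16)
        SNS[i][pk] = chunk
        pks.append((i, pk))
    # Phase 3: the index node records MK -> partial keys.
    IN[mk] = pks
    return kr                    # the PN stores KR under an alias

def retrieve(kr: str, dp: str) -> bytes:
    mk, md, do, allowed = AN[kr]
    assert dp == allowed         # audit-side authorization check
    # HKR lets the PN match returned chunks to its pending request.
    hkr = ("pn-address", hashlib.sha256(kr.encode()).hexdigest())
    chunks = [SNS[i][pk] for i, pk in IN[mk]]
    return recombine(chunks)

kr = store(b"1990-05-17", do="alice", md="birth date")
assert retrieve(kr, dp="alice") == b"1990-05-17"

Note that, as in the schemas, the partial keys never reach the audit node and the key reference never reaches the index or storage nodes; in the real system each phase is a hop of the executable choreography rather than a local function call.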
Each storage node, upon receiving its undecipherable piece of private data, generates a partial key, stores its chunk of information under that key, and sends the generated partial key to the processing node.

In the third phase, the processing node sends to an index node the master key and its corresponding partial keys. The processing node stores the key reference under an alias because it is needed for subsequent private information retrieval.

§.§ Reading / Retrieving Private Data

If a processing node needs to access a piece of private information, it must have a key reference. The way the processing node uses the key reference to retrieve the associated private information can be followed in Figure <ref> and is described in detail in the following schema:

Phase 1
1. PN → AN: DP, KR

Phase 2
1. MK := AN[KR]
2. HKR := (location(PN), hash(KR))
3. AN → IN: DP, MK, HKR

Phase 3
1. PK_1, PK_2, ..., PK_n := IN[MK]
2. IN → SN_i: DP, HKR, PK_i, i = 1,n
3. PD_i := SN_i[PK_i], i = 1,n
4. SN_i → PN: HKR, PD_i, i = 1,n
5. PN: PD := recombine(PD_1, PD_2, ..., PD_n), where PD_i, i = 1,n must have the same HKR as PD

The key reference might reference either a piece of private information owned by the processing node or a piece of private information shared with the processing node by another processing node. By sending its key reference to the audit node along with its (processing node's) identity, the processing node completes the first phase.

In phase two, the audit node retrieves the master key corresponding to the received key reference. Next, it computes HKR, which is a hash of the retrieved key reference prefixed with the location of the processing node. Then, the audit node sends the processing node's identity, the retrieved master key, and the computed HKR to the index node. This way, the index node does not learn the association between key references and master keys, but at the same time receives HKR, which is the information required by the processing node to identify the request being answered. Note that the processing node might issue multiple data retrieval operations at the same time and, without HKR, it would not know which undecipherable chunks correspond to which of the pieces of private data it requested.

In the third phase, the index node retrieves from its database the partial keys corresponding to the master key and sends each partial key along with the processing node's identity and HKR to the corresponding storage nodes. Each storage node retrieves the undecipherable value (PD_i) corresponding to the received partial key (PK_i) and sends the retrieved undecipherable chunk and the HKR to the processing node. The processing node, upon receiving the undecipherable chunks, groups them by HKR and recombines the grouped components in order to obtain the private piece of information. This information can be processed, but the law prevents the processing node from storing it. Thus HKR serves as an identifier so that a processing node which retrieves multiple pieces of private information at the same time can associate the received undecipherable chunks with the requested key references.

§.§ Updating Private Data

The first two phases are identical for data retrieval and data updating, but starting with the third step of the third phase, things are performed differently, as can be observed in the following schema:

Phase 1
1. PN → AN: DP, KR

Phase 2
1. MK := AN[KR]
2. HKR := (location(PN), hash(KR))
3. AN → IN: DP, MK, HKR

Phase 3
1. PK_1, PK_2, ..., PK_n := IN[MK]
2. IN → SN_i: DP, HKR, PK_i, i = 1,n
3. SN_i → PN: HKR, i = 1,n
4. PN: PD_1, PD_2, ..., PD_n := split(PD)
5. PN → SN_i: PD_i, i = 1,n
6. SN_i[PK_i] := PD_i, i = 1,n

For the updating operation, the storage nodes, upon receiving the partial keys from the index node, send the HKR alone to the processing node, instead of retrieving the undecipherable chunks of private data corresponding to the partial keys and sending them along with HKR to the processing node for recombination as performed by the reading operation. Upon receiving the HKR from the storage nodes, the processing node splits the new information into undecipherable chunks and sends one chunk to each storage node which sent the HKR corresponding to this piece of private information. Then, the storage nodes update the values stored under the partial keys in their key-value databases in accordance with the newly received undecipherable chunks.

The reason we decided to go with this approach rather than using an invalidation and a store operation is that we want all the existing key references to remain valid and, even more, to point to the updated private data.

The data flow between the nodes of the system during an update operation can be observed in Figure <ref>.

§.§ Deleting Private Data

Figure <ref> illustrates the data flow and the following schema illustrates the actions performed during a delete operation:

1. PN → AN: KR, DO
2. MK := AN[KR]
3. AN → IN: MK
4. IN: invalidate IN[MK]

In order to perform a delete operation, a processing node sends to the audit node its identity (which must be the identity of the data owner) and its key reference of the data to be deleted. If the audit node invalidated only the received key reference, this would merely revoke the data owner's own access to the private data, while all the data processors which received access to this private data at some point in time would still be able to access it. Thus, instead of doing this, the audit node sends the received key reference to the index node for it to invalidate the associated master key. In this way, neither the data owner nor the data processors will be able to access this piece of private data anymore, because all the key references they have for this piece of private data point to the same master key.

§.§ Sharing Access to Private Data

The sharing operation is described in Figure <ref> and follows these steps:

1. PN_1 → AN: KR_1, DP_2
2. MK, MD, DO := AN[KR_1]
3. AN: KR_2 := gen()
4. AN[KR_2] := (MK, MD, DO, DP_2)
5. AN → PN_2: KR_2, MD

In order to share a piece of information, a processing node (PN_1) must send to an audit node its key reference (KR_1) for the private information it wants to share, along with the identity of the processing node that is to receive access to the private information (DP_2). When this happens, the audit node retrieves the master key (MK) corresponding to the received key reference (KR_1), generates a new key reference (KR_2), and saves the retrieved master key (MK), the retrieved metadata (MD), the retrieved data owner (DO) and the received data processor (DP_2) under the newly generated key reference (KR_2). Of course, the initial association (between KR_1 and MK) remains in the database as well.

Note that every association between a key reference and a master key also has information regarding the identity of the organization owning the data (Data Owner) and the identity of the organization with which the data is shared (Data Processor).
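These bookkeeping rules, together with the revocation flow described in the next subsection, act purely on the audit node's table. A toy Python sketch continuing the in-memory model used earlier (the table layout and names are illustrative):

import secrets

# Audit-node table: KR -> (MK, MD, DO, DP). One pre-existing entry
# stands in for the owner's original key reference.
AN = {"kr-owner": ("mk-1", "birth date", "alice", "alice")}

def share(kr_1: str, dp_2: str) -> tuple:
    # Mint a fresh key reference onto the same master key; the data
    # owner never learns KR_2 and the processor never learns KR_1.
    mk, md, do, _ = AN[kr_1]
    kr_2 = secrets.token_hex(16)
    AN[kr_2] = (mk, md, do, dp_2)
    return kr_2, md

def revoke(kr_1: str, do: str, dp_2: str) -> None:
    # The owner only knows its own KR_1, so the audit node looks up
    # the processor's reference by (MK, DO, DP) and invalidates it.
    mk = AN[kr_1][0]
    for kr, v in list(AN.items()):
        if v is None:
            continue
        mk_2, _, do_2, dp = v
        if (mk_2, do_2, dp) == (mk, do, dp_2):
            AN[kr] = None        # invalidated; nothing is deleted

kr_bob, _ = share("kr-owner", "bob")
revoke("kr-owner", "alice", "bob")
assert AN[kr_bob] is None and AN["kr-owner"] is not None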
If the data owner is the same as the data processor, then this association is the initial key reference created when the private information was first stored.

Next, the audit node sends the newly generated key reference (KR_2) along with the received metadata (MD) to the processing node which is to receive access (PN_2) to the private data. In this way, neither the data owner knows the data processor's key reference, nor does the data processor know the data owner's key reference.

§.§ Revoking Access to Private Data

The revocation operation is described in Figure <ref> and follows these steps:

1. PN → AN: KR_1, DO, DP_2
2. MK := AN[KR_1]
3. search KR_2 which contains MK, DO, DP_2 as values
4. invalidate AN[KR_2]

The data owner can revoke access to a piece of private information by issuing a revocation request to the audit node. The revocation request contains the identity of the data owner and of the data processor to which access is being revoked, as well as the data owner's key reference (KR_1). Note that the audit node receives the data owner's key reference, while the revocation needs to be performed on the data processor's key reference (KR_2). This happens because each processing node knows its own key reference, but it doesn't know the key references of the data processors which have access to its data. So, the audit node needs to retrieve the master key corresponding to the received key reference (KR_1) and search for the key reference to be revoked (KR_2), knowing that it has associated to it the retrieved master key and the received data owner and data processor. After learning the value of the key reference to be revoked, the audit node simply invalidates it. Nothing is deleted.

§.§ Copying Private Data

By design, any copy operation on the private data should be done only through the sharing operation. Data derived from the private data should be stored in the PDS and assigned to the original data owner.

§ SYSTEM ANALYSIS FROM THE PRIVACY PERSPECTIVE

In this section we analyze how powerful each type of node defined in the system is and how much information nodes can gather by themselves or by colluding with other types of nodes.

Each storage node has access to only one undecipherable chunk of each private piece of information it stores. Each chunk is saved under a partial key which has no meaning to the storage node. The storage node doesn't know which other storage nodes store the other chunks of the same pieces of private information, nor under which partial keys. Even more, the storage node doesn't know what type of information it stores. It may be a social security number, a password, a name, a birth date, and so on. A single storage node can't attack the system and neither can a collection of colluding storage nodes.

Index nodes store only the associations between master keys and partial keys. So, they know the partial keys whose corresponding undecipherable chunks can be recombined to form a private piece of information, but they don't know the values of the actual chunks, nor the type of information that would be obtained after recombining the chunks. A single index node can't attack the system and neither can a collection of colluding index nodes.

Audit nodes have information regarding the meaning of the data, the owner of the data, and the identities with which the data was shared, but they don't have information regarding the way the data was split into chunks (the correspondence between master key and partial keys) or the locations where the data chunks are stored.
So, a single audit node or a group of colluding audit nodes can't recombine the private data. However, audit nodes are able to create reference keys at their discretion and share them with legal or illegal organizations.Processing nodes have access to the private data as they need it for normal operations. Privacy by Design principles are intended to regulate the usage of private data without reducing functionality. The main goal of the PDS is to make it obvious when a company is misusing the private data outside the purpose accepted by the user, but without reducing access to the private data. For example, if an organization collects private data by using PDS, it becomes visible if it is copying or using private data for other purposes than intended.Only processing nodes and audit nodes know what the pieces of information referenced by key references mean. Encryption is not needed because the attackers see a huge pool of partial undecipherable messages. Traffic can't be used to obtain information because the traffic data is encrypted using TLS and can't be used to deduce information regarding which nodes communicate because of the huge amount of concurrent swarms flying from node to node.If an index node colludes with all the storage nodes storing chunks of the same piece of private information, then together they can recombine the message, but without knowing its meaning, who owns it and with whom it was shared with, it is of no value to them. In order for the data to be of value, they need to collude also with the audit node, which stores the metadata, the data owner and the data processor of this particular piece of information. § CONCLUSIONS In normal conditions only processing nodes should be able to read plaintext data. All the other node types involved in the PDS should not be capable of accessing private data. In special conditions, audit nodes should be able to read the data as well in order to enable legal access to the private data owned by other data owners for crime prevention or other legal usages. We imagine audit organizations offering public services that are controlled by the law and industry regulations. The level of access to the systems storing this metadata should be similar to the one for financial services. Special legal procedures should be followed when accessing private data outside of the normal flow.Systems and approaches that are trying to obfuscate and encrypt too much are fighting an impossible fight with the common social interest and are blocking the normal evolution of the technologies in the privacy area. The interests of any citizen are to be protected from unfair usage of his data by the large Internet companies, to have control on who he shares his private data with, to be able to revoke access to his data to anyone at any time.An Internet based on fully homomorphic encryption would not be what we need because it would create a world in which data can be too easily lost. It would provide a perfect method for criminals and terrorists to hide their data from the public interest. Fighting with dangerous, corrupted governments is important, but PDS is not supposed to have a role in this fight. PDS is a balanced solution which enforces Privacy by Design in code and maintains an equilibrium between public and private interests. § FUTURE WORK As future work, we intend to pursue three different paths. 
First, we will develop a new self-sovereignty authentication technique which uses the advantages provided by the architecture of the system proposed in this paper. Secondly, as Privacy by Design and Privacy by Default (PbD) are being enforced by laws (e.g., in the General Data Protection Regulation), we intend to propose a Privacy Enhancing Technique (PET) that can ensure these principles directly in code. It is intended to be a privacy estimation method for systems using the technique proposed in this paper for achieving self-sovereign storage of private data. Thirdly, we will propose and describe a mechanism for the audit nodes to store the metadata so that it enables the implementation of personal assistants. The metadata will describe the schema of the stored objects (in the form of JSON schema or OWL) and the representation types that could enable type checking when data is shared. It will enable the use of specific Privacy Policies (which will control what entities are allowed to read the information and will contain revocation policies) and Security Policies (which will control what entities are allowed to modify the content of a Master Key). Both privacy and security policies will be enforced by the audit nodes, but the input (rules and policies) will be provided by the Data Owner himself. By giving up the standard communication model promoted by web technologies and moving towards a verifiable communication model such as the one proposed by executable choreographies, we have the opportunity to develop formal verification methods for how the private data is used. § ACKNOWLEDGMENTS This work is partly funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No 692178. It is also partially supported by the Private Sky Project, under the POC-A1-A1.2.3-G-2015 Programme (Grant Agreement no. P_40_371). | http://arxiv.org/abs/1708.09332v1 | {
"authors": [
"Sinica Alboaie",
"Doina Cosovan"
],
"categories": [
"cs.DC",
"cs.CR",
"cs.DB"
],
"primary_category": "cs.DC",
"published": "20170626150005",
"title": "Private Data System Enabling Self-Sovereign Storage Managed by Executable Choreographies"
} |
Department of Physics, University of California, Berkeley, CA 94720, USA RIKEN Center for Emergent Matter Science (CEMS), Wako, Saitama, 351-0198, Japan Department of Applied Physics, The University of Tokyo, Tokyo, 113-8656, Japan In noncentrosymmetric crystals with broken inversion symmetry ℐ, the I-V (I: current, V: voltage) characteristic is generally expected to depend on the direction of I, which is known as nonreciprocal response and is, for example, found in the p-n junction. However, it is a highly nontrivial issue in translationally invariant systems since the time-reversal symmetry (𝒯) plays an essential role, where the two states at crystal momenta k and -k are connected in the band structure. Therefore, it has been considered that an external magnetic field (B) or magnetic order which breaks the 𝒯-symmetry is necessary to realize nonreciprocal I-V characteristics, i.e., magnetochiral anisotropy. Here we theoretically show that the electron correlation in ℐ-broken multi-band systems can induce nonreciprocal I-V characteristics without 𝒯-breaking. An analog of Onsager's relation shows that nonreciprocal current response without 𝒯-breaking generally requires two effects: dissipation and interactions. By using nonequilibrium Green's functions, we derive a general formula of the nonreciprocal response for two-band systems with onsite interaction. The formula is applied to the Rice-Mele model, a representative 1D model with inversion breaking, and some candidate materials are discussed. This finding offers a coherent understanding of the origin of nonreciprocal I-V characteristics, and will pave the way to designing it. 72.10.-d, 73.20.-r, 73.43.Cd Nonreciprocal current from electron interactions in noncentrosymmetric crystals: roles of time reversal symmetry and dissipation Takahiro Morimoto, Naoto Nagaosa December 30, 2023 ==================================================================================================================================Noncentrosymmetric crystals exhibit a variety of interesting physical phenomena. These include ferroelectricity <cit.>, photovoltaic effect (shift current) <cit.>, and second harmonic generation <cit.>. Among them, nonreciprocal dc current response in inversion broken systems has been attracting keen attention in condensed matter physics. Nonreciprocity (or rectifying effect) is a current response where the I-V characteristic differs when current flows to the left and when it flows to the right (i.e., I(V) ≠ -I(-V)). The nonreciprocal current response is important both for the fundamental physics of inversion broken materials and for applications such as diodes. A conventional example of nonreciprocity is a p-n junction, in which the direction of the current changes the thickness of the depletion layer, and hence the resistivity. Nonlinear current response has been intensely studied in a mesoscopic setup <cit.>. In contrast to such artificial heterostructures, nonreciprocity in crystals is a more nontrivial issue. Current responses in crystals are governed by Bloch electrons with good momentum k and their band structure. In the presence of time-reversal symmetry (TRS), the band structure satisfies the relationship ϵ_k σ = ϵ_-k σ̅ (σ represents the spin and σ̅ the opposite spin to σ), which indicates that no nonreciprocity appears for noninteracting electrons in the Boltzmann charge transport picture as illustrated in Fig. <ref>(a). Specifically, the applied electric field causes a shift of the Bloch electrons in the momentum space.
The symmetry in the band structure due to the TRS results in symmetric shifts with respect to the direction of the applied electric field E, and the conductivity does not depend on the direction of E. There are two ways to break TRS: (i) introducing a time-reversal breaking term in the microscopic Hamiltonian and (ii) introducing irreversibility at the macroscopic level. The former microscopic TR breaking is achieved with the application of an external magnetic field B or by introducing magnetic order. Nonreciprocal current response in the presence of a magnetic field is known as magnetochiral anisotropy and has been actively studied <cit.>. The other way to break TRS is incorporating irreversibility at the macroscopic level, i.e., (ii). Generalizing Onsager's relation to nonlinear current response, we find that nonreciprocal current may appear due to the effect of dissipation/relaxation even when the microscopic Hamiltonian obeys the TRS. Furthermore, there exists a systematic description for the second order nonlinear current responses that is based on the gauge invariant formulation of nonequilibrium Green's functions under the static E field <cit.>. This formulation allows us to show that some electron interaction effects are necessary for nonreciprocal current response in bulk crystals under the TRS, on top of the dissipation effects. Such electron interactions include Coulomb interactions between electrons and electron-phonon interactions. In particular, it turns out that elastic scattering from disorder potential is not able to support nonreciprocal current response. These general symmetry considerations naturally lead us to study nonreciprocal current response with electron interactions in the Boltzmann transport picture that incorporates dissipation effects through relaxation of electron distribution functions. Indeed, the situation changes in the presence of electron interactions, since electron interactions can modify the effective band structure when the applied electric field changes the electron distributions. In the steady state with nonzero current in noncentrosymmetric crystals, the interaction effect modifies the energy band in an asymmetric way with respect to the direction of E, as illustrated in Fig. <ref>(b), and enables us to circumvent the original constraint of TRS since systems with E and -E are not related by TRS and ϵ_k,σ(E) ≠ ϵ_-k,σ̅(-E) in general. In the Boltzmann transport picture, this asymmetric change of effective band structure leads to nonreciprocal current responses. By using the gauge invariant formulation of nonequilibrium Keldysh Green's functions, we derive a general formula for the nonreciprocal current in the weak interaction limit. It shows that the nonreciprocal current in inversion broken materials is proportional to the strength of the electron interaction and inversely proportional to the typical band separation and bandwidth. We find that the nonreciprocity in noncentrosymmetric crystals is a quantum mechanical effect that is described by the complex nature of Bloch wave functions and interband matrix elements, which is unique to inversion broken systems. An estimate of the nonreciprocity shows that doped semiconductors and molecular conductors are good candidate materials for the nonreciprocity from electron correlation. We also discuss possible nonreciprocity in the molecular conductor TTF-CA <cit.>. In this paper, we focus on the nonlinearity in the I-V characteristic of dc transport.
Meanwhile, there are other nonlinear current responses in ℐ-broken crystals which have their origins in the complex nature of Bloch wave functions and should be compared with the nonreciprocal response in the present case. One example is the shift current <cit.>, a dc current induced by photoexcitation of electrons beyond the band gap. The shift current is generated from the shift of wave packet centers for valence and conduction bands, and this shift is essentially described by Berry phases of valence and conduction electrons. The present nonreciprocal response and the shift current have similarity in that both rely on the multi-band nature of ℐ-broken systems. Yet, an important difference is that the nonreciprocal response arises from intraband metallic transport that is induced by static electric fields, while the shift current involves optical excitation of interband electron-hole pairs with photon energy larger than the band gap. Other examples are the nonlinear Hall effect and the low-frequency circular photogalvanic effect (CPGE) <cit.>. They are known as geometrical effects described by the Berry curvature dipole of Bloch electrons. They are similar to the present nonreciprocal response in that both are intraband effects. However, the nonlinear Hall effect and the geometrical part of the CPGE are transverse (Hall) responses, in that they are described by off-diagonal components of the nonlinear conductivity tensor (σ_abb and σ_aab, respectively, with a ≠ b). In this sense, they are contrasted to the present nonreciprocal current which is a longitudinal current response described by diagonal components σ_aaa and essentially involves the effect of dissipation. Results Time reversal symmetry constrains nonreciprocal current responses in bulk crystals. Based on general symmetry considerations, we show that nonreciprocal current response in crystals generally requires two ingredients: (i) dissipation, and (ii) interactions. First, we generalize Onsager's theorem to nonlinear current responses and show that the effect of dissipation is crucial for nonreciprocal current response. We then show by using the gauge invariant formulation of Keldysh Green's functions that nonreciprocal current generally requires some interactions (e.g., electron-electron interactions and electron-phonon interactions). These two conditions suggest that the nonreciprocal current response can be captured by the Boltzmann equation picture (that incorporates relaxation of the electron distribution function) once we incorporate the E-linear change of band structure induced by electron interactions. The nonreciprocal current response is captured by an E^2 term in the current response. In the Boltzmann transport picture, the current J induced by the applied electric field E is given by J = 2e^2/ħ τ |v_F| E with the relaxation time τ and the Fermi velocity v_F, for a one-dimensional system as depicted in Fig. <ref>. In noncentrosymmetric systems, the effective band structure with the correlation effect can change asymmetrically in an applied electric field, and the Fermi velocity is modified as v_F(E) = v_F,0 + c E + O(E^2). Therefore, noncentrosymmetric systems can host a nonreciprocal current response given by the E^2 term in J = (2e^2/ħ) τ (v_F,0 E + c E^2). Since the E-linear change of the band structure is described by the self energy linear in E, we study the Green's function and self energy in the steady state realized with the applied electric field.
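As a minimal numerical illustration of this E^2 term (a sketch only: the function name and all numeric values are arbitrary assumptions, not material parameters, and units are chosen so that 2e^2/ħ is a constant prefactor):

```python
# Relaxation-time picture: J = (2e^2/hbar) * tau * v_F(E) * E with an
# interaction-induced E-linear Fermi velocity v_F(E) = v_F0 + c*E.
def current(E, v_F0=1.0, c=0.05, tau=1.0, prefactor=2.0):
    v_F = v_F0 + c * E                  # asymmetric band-structure change
    return prefactor * tau * v_F * E

E = 0.1
J_plus, J_minus = current(E), current(-E)
print(J_plus + J_minus)                 # nonreciprocal part, 2*prefactor*tau*c*E**2
print(2 * 2.0 * 1.0 * 0.05 * E**2)      # the same E^2 term evaluated directly
```

The sum J(E) + J(-E) would vanish for a reciprocal response; here it equals twice the E^2 term in J.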
By using these results, we derive the general formula of the nonreciprocal current, and then apply it to the Rice-Mele model, which is a prototypical model of ferroelectrics. Onsager's theorem and its generalization. In this section, we present a general consideration on the nonreciprocal current response in terms of the time reversal symmetry. We generalize Onsager's relationship to nonlinear current responses, and show that the effect of dissipation is crucial for nonreciprocal current response. In the linear response, Onsager's relationship indicates that the conductivity σ_ij is constrained as σ_ij = σ_ji, when the microscopic Hamiltonian preserves time reversal symmetry <cit.>. This relationship is derived by considering the time reversal transformation in the Kubo formula for the linear conductivity as explained in Methods. Now we study how Onsager's theorem can be extended to nonlinear current responses. We consider the second order current response, J_i(ω_1 + ω_2) = σ_ijj(ω_1, ω_2) E_j(ω_1) E_j(ω_2). For systems of noninteracting electrons, the nonlinear conductivity σ_ijj(iω_n_1, iω_n_2) in the imaginary time formalism satisfies the relationship σ_ijj(iω_n_1, iω_n_2) = -σ_ijj(-iω_n_2, -iω_n_1), under time reversal symmetry. (For the derivation, see the Methods section.) Naively, this seems to suggest that the nonlinear conductivity σ_ijj(ω_1, ω_2) vanishes in the dc limit (ω_1 → 0 and ω_2 → 0). However, there is a subtlety in the analytic continuation to real frequencies as follows. We notice that the nonlinear conductivity with Matsubara frequencies in the upper half plane is transformed to that with Matsubara frequencies in the lower half plane. Since the real axis is a branch cut in the complex ω plane, the analytic continuations of iω_n → 0 for the two quantities, σ_ijj(iω_1, iω_2) and σ_ijj(-iω_2, -iω_1), lead to different results in general. This indicates that a relationship similar to Onsager's relation does not necessarily constrain the dc nonlinear conductivity to vanish. Interestingly, the extended Onsager's relation above shows that nonreciprocal current response (nonzero σ_ijj) inevitably involves macroscopic irreversibility, i.e., the effect of dissipation, since the branch cut at Im[ω] = 0 is associated with macroscopic irreversibility. Specifically, such a discontinuity for ω → ±0i appears in the self energy by incorporating dissipative processes such as impurity scattering. To see this, it is useful to consider the case of linear conductivity. Metallic conductivity σ_xx(ω) has a branch cut and the limit of ω → +0i gives a dissipative current response which is proportional to the relaxation time τ. In contrast, Hall conductivity σ_xy(ω) does not involve such a branch cut and corresponds to a nondissipative current response (independent of τ). Therefore, the nonreciprocal current response requires dissipation and should be proportional to the relaxation time τ. In passing, we note that Eq. (<ref>) also indicates that dissipation is essential for shift current, which is a photocurrent caused by an optical resonance at a frequency ω above the band gap and described by σ_ijj(ω, -ω) <cit.>. If there is no effect of dissipation, we can naively take the analytic continuation of Eq. (<ref>), which leads to σ_ijj(ω, -ω) = -σ_ijj(ω, -ω) = 0. Thus nonzero shift current requires some irreversibility. This observation is coherent with the fact that shift current essentially relies on optical absorption, which is an irreversible process. Absence of dc nonreciprocal current in noninteracting systems.
In this section, we show that the dc nonreciprocal current response does not appear when we do not incorporate effects of electron interactions that cause an effective change of the band structure under the applied electric field. We first show that no nonreciprocal current response appears in periodic systems. We then generalize the proof to systems with static disorder potentials and show that incorporating the effects of elastic scattering does not lead to nonreciprocal current response. We study systems with an applied electric field by using Keldysh Green's functions and their gradient expansion <cit.>. In particular, we use the gauge invariant formulation which enables us to treat the effect of E directly <cit.>. In the presence of a constant external electric field E, the Green's function and the self energy are expanded with respect to E as <cit.> G(ω,k) = G_0(ω,k) + E/2 G_E(ω,k) + E^2/8 G_E^2(ω,k) + O(E^3), Σ(ω,k) = Σ_0(ω,k) + E/2 Σ_E(ω,k) + E^2/8 Σ_E^2(ω,k) + O(E^3), where we set ħ=1, e=1 for simplicity. The unperturbed part of the Green's function G_0 is given by [ G_0^R G_0^K; 0 G_0^A ]^-1 = ω - H - [ Σ_0^R Σ_0^K; 0 Σ_0^A ], with the unperturbed Hamiltonian H (without E). The linear order correction to the Green's function G_E is given by G_E = G_0 [ Σ_E + i/2((∂_ω G_0^-1) G_0 (∂_k G_0^-1) - (∂_k G_0^-1) G_0 (∂_ω G_0^-1)) ] G_0. In order to describe the nonequilibrium steady state with applied electric fields, we suppose that the system is coupled to a heat bath. The coupling to the heat bath stabilizes the nonequilibrium electron distribution, and is incorporated through the self energy Σ_0 as Σ_0^R/A(ω) = ∓ iΓ/2 and Σ_0^<(ω) = iΓ f(ω), where Γ is the coupling strength and f(ω) is the Fermi distribution function (for details, see Methods) <cit.>. The second order current response is given by the expectation value, J_E^2 = -i ∫ dω dk tr[v(k) G_E^2^<(ω,k)]. In order to show the absence of the second order current response in noninteracting systems, we set the E-dependent self energy corrections to zero (Σ_E = Σ_E^2 = 0). Here, we also assumed that the heat bath coupled to the system (which gives Σ_0) is large enough that it is not modified by applying electric fields. With vanishing E-dependent self energies, G_E^2 can be written as <cit.> G_E^2 = -i/2 G_0 [(∂_ω G_0^-1)(∂_k G_E) - (∂_k G_0^-1)(∂_ω G_E)] + 1/4 G_0 [(∂_ω^2 G_0^-1)(∂_k^2 G_0) + (∂_k^2 G_0^-1)(∂_ω^2 G_0)]. We can show that the expectation value J_E^2 vanishes in the presence of TRS as follows. The TRS defined with 𝒯=K constrains the Green's functions and velocity operator as G_0(ω, k) = G_0(ω, -k)^T, G_E(ω, k) = -G_E(ω, -k)^T, v(k) = v(-k)^T, where T denotes transposition with respect to the band index. This transformation law leads to cancellation of the integrand of J_E^2 between k and -k. For example, the first term in G_E^2 in Eq. (<ref>) gives the contribution which transforms as tr[v(k) G_0(ω,k) (∂_ω G_0^-1(ω,k)) (∂_k G_E(ω,k))] = -tr[v^T(-k) G_0^T(ω,-k) (∂_ω G_0^-1,T(ω,-k)) (∂_k G_E^T(ω,-k))] = -tr[v(-k) G_0(ω,-k) (∂_ω G_0^-1(ω,-k)) (∂_k G_E(ω,-k))], and cancels out between k and -k. (In the last line, we used tr A = tr A^T.) We can show the cancellation for the other terms in J_E^2 in a similar way. This indicates that the nonlinear current ∝ E^2 vanishes under the TRS in bulk crystals if we do not incorporate the E-linear band modification described by Σ_E. It is easy to generalize the above argument to systems with static disorder potential. We consider a system of size L with the periodic boundary condition.
We introduce a phase twist at the periodic boundary with the phase θ. In this case, the velocity matrix element v and the nonequilibrium Green's function G_E^2 become functions of the phase twist θ instead of the momentum k. When the disorder is uniform and the system has translation symmetry on average, physical quantities are obtained by averaging over the phase twist θ. We note that this procedure is very similar to the discussion of the Chern number in quantum Hall systems with disorder potential <cit.>. Thus, the nonlinear current response J_E^2 is given by an expression similar to Eq. (<ref>) with k replaced by θ. [The expression for G_E^2(ω,θ) is also obtained by replacing k with θ in Eq. (<ref>).] Since similar symmetry constraints hold for G and v under the TRS [i.e., G_0(ω, θ) = G_0(ω, -θ)^T, G_E(ω, θ) = -G_E(ω, -θ)^T, v(θ) = v(-θ)^T], the integrand of J_E^2 satisfies tr[v(θ) G_E^2^<(ω,θ)] = -tr[v(-θ) G_E^2^<(ω,-θ)], and cancels between θ and -θ. This proves that elastic scattering from static disorder potential does not induce nonreciprocal current response. These considerations indicate that the E-linear change of the band structure (Σ_E) is essential for nonreciprocal current response in bulk crystals. The E-linear change of the band structure requires some kind of electron interactions, such as Coulomb interactions and electron-phonon interactions. Since the current response proportional to E^2 arises from the E-linear change of the band structure in the Boltzmann transport picture, it suffices to consider Σ_E and neglect Σ_E^2. Although we can study this nonreciprocal current response by directly looking at G_E^2 with Σ_E incorporated, it is equivalent and more concise to compute Σ_E and then use the relationship Eq. (<ref>) with the Fermi velocity modified by E. So far, we discussed general conditions to achieve nonreciprocal current response in bulk crystals. In order to proceed to explicit calculations of the nonreciprocal current, we need to specify the form of the self energy, i.e., how the self energy Σ is expressed in terms of the Green's function G. We consider the electron-electron interaction shown in the Feynman diagram of Fig. <ref>(a) and show that it gives rise to nonreciprocal current through the E-linear band structure change. Incidentally, we also show explicitly that elastic scattering from an isotropic impurity potential [Fig. <ref>(b)] does not lead to nonreciprocal current, which is consistent with the above general symmetry consideration. Nonequilibrium steady state under the applied electric field. Now we move on to the demonstration of nonreciprocal current responses with electron interactions by performing explicit calculations. We consider the case of weak interactions and perform the Hartree-Fock approximation in the gauge invariant formulation of Keldysh Green's functions. In order to describe the E-linear change of the effective band structure, we first study the nonequilibrium steady state under the electric field by looking at G_E^<. Once G_E^< is obtained, we can compute the E-linear change of the band structure by studying Σ_E^R, which corresponds to the diagram in Fig. <ref>. The E-linear change of the electron occupation has intraband and interband contributions, since the Green's function for an ℐ-broken system generally has a matrix structure with respect to the band index. The intraband contribution is written as G_E,11^< = 2πi/Γ δ(ω-ϵ_F) ∑_k_F,i sgn(v_k_F,i) δ(k-k_F,i), for the band 1, which we assume crosses the Fermi energy (for details, see Methods). Here, k_F,i are the Fermi momenta for the band 1.
This change of the lesser Green's function linear in E describes the effect of the applied electric field where the electron occupation is shifted in the momentum space as k → k + τE near the Fermi surface (with τ = 2π/Γ). This coincides with the picture of the semiclassical Boltzmann equation as illustrated in Fig. <ref>(a). Next, the interband contribution for G_E^< is given by G_E,12^< = -∑_k_F,i π v_12,k/(|v_11,k| E_g,k) δ(ω-ϵ_F) δ(k-k_F,i), and G_E,21^< = -(G_E,12^<)^*, for the bands 1 and 2 (for details of the derivation, see Methods). Here, we assume that the band 1 is the partially filled valence band and the band 2 is the unoccupied conduction band as illustrated in Fig. <ref>, and E_g,k denotes the band gap at the momentum k. This term arises from a quantum mechanical effect: the electric field also modifies the wave function in addition to shifting the momentum at the Fermi energy. Thus the electron distribution in the steady state effectively has an interband component near the Fermi energy. We note that this interband component of G^<_E cannot be captured by the semiclassical treatment with the Boltzmann equation, and is a quantum effect captured by the current approach that uses the gauge invariant formulation of Keldysh Green's functions. This interband component gives the origin of the nonreciprocity when the electron interaction is incorporated. In contrast, when we consider effects of scattering by short-range impurities within the Born approximation [described by the diagram in Fig. <ref>(a)], we do not find the E-linear change of the effective band structure, as detailed in Methods. Formula of nonreciprocal current in two-band systems. Now we show that nonreciprocal current appears from the E-linear band structure change once we introduce electron-electron interactions, and derive a general formula for the nonreciprocal current in two-band systems. The effect of electron interactions is minimally incorporated by the self energy arising from the Hartree contribution to Σ_E^R as shown in Fig. <ref>(a). For simplicity, we consider a two-band model, where the unit cell contains two sites, and the wave functions of the valence and conduction bands (labeled by 1 and 2, respectively) are represented by Ψ_1,k = [ u_k; v_k ], Ψ_2,k = [ -v^*_k; u^*_k ]. We then consider two copies of the original system, each labeled by ↑ and ↓, and introduce the onsite interaction given by H_int = U ∑_i n_↑,i n_↓,i, with the site index i. We treat the effects of the onsite interaction in terms of the Hartree-Fock approximation, and study the effective band structure. Since the two copies (↑ and ↓) are decoupled in the noninteracting Hamiltonian, only the Hartree term appears in the present case. (We suppose that the Hartree correction in the equilibrium is already included in the original Hamiltonian.) In the following, we focus on the electronic structure of the ↑ component, and suppress the label for the two copies for simplicity. By using the momentum space representation of H_int (for details, see Methods), the self energy from the Hartree contribution is given by Σ_E,11^R(k) = i a U ∫ dω/2π dk'/2π (|u_k|^2-|v_k|^2) [ u_k' v_k' G_E,12^<(k') + u_k'^* v_k'^* G_E,21^<(k') ], with the lattice constant a. Now we assume that there are two Fermi momenta at ±k_F with the same Fermi velocity v_F. By using the Green's function in the steady state [Eq. (<ref>)], this is expressed as Σ_E,11^R(k) = a U (|u_k|^2-|v_k|^2)/(π |v_11,k_F| E_g,k_F) Im[u_k_F v_k_F v_12,k_F].
This self energy is an even function with respect to k from TRS (such as 𝒯=𝒦), which is important in obtaining the nonreciprocal current response as we will see next. We now study the nonreciprocal current response by using the self energy Σ_E^R. The current induced by an electric field (linearly in E) is given by J = (v_11,k_F - v_11,-k_F) τ E, from the Boltzmann transport approach. An application of the electric field modifies the band structure as ϵ_1 → ϵ_1 + E/2 Σ_E,11^R(k), and hence the Fermi velocity as v_11,k_F → v_11,k_F + E/2 ∂_k Σ_E,11^R(k). Since the obtained self energy Σ_E,11^R(k) is an even function of k, the velocity corrections at ±k_F do not cancel out in evaluating the correction to the current response in Eq. (<ref>). Thus, we obtain the nonlinear current response δJ (the part of the current response proportional to E^2) as δJ = (∂_k Σ_E,11^R(k)|_k=k_F - ∂_k Σ_E,11^R(k)|_k=-k_F) τ E = 2 a U τ ∂_k(|u_k|^2-|v_k|^2)|_k=k_F/(π |v_11,k_F| E_g,k_F) Im[u_k_F v_k_F v_12,k_F] E^2, which is the general formula for two-band systems in one dimension. This can be generalized to systems in higher dimensions if we replace the summation over the Fermi points with an integral over the Fermi surface. The above formula indicates that the nonreciprocity ratio γ of the nonlinear current to the original current is roughly estimated as γ ≡ δJ/J ≃ (U/E_g,k_F)(eEa/W), where W is the band width. Here we used u_k, v_k ∼ 1 and v_11,k_F ∼ v_12,k_F for rough order estimates. The obtained formula indicates that breaking of inversion symmetry is essential for the nonreciprocity. When the system is inversion symmetric, the wave function is expressed with real numbers due to the combination of inversion symmetry ℐ and TRS (ℐ𝒯=𝒦). Therefore, we obtain Im[u_k_F v_k_F v_12,k_F] = 0 in inversion symmetric systems and no nonreciprocity appears. This clearly shows that the nonreciprocity in the current mechanism essentially relies on the complex nature of wave functions in noncentrosymmetric crystals. Nonreciprocal current in Rice-Mele model. We study the nonreciprocal current in a representative model of ferroelectrics, the Rice-Mele model, by taking into account the onsite interaction. We show that the E-linear band structure change is associated with an effective modulation of parameters in the Hamiltonian that is induced by the applied electric field E. The Rice-Mele model is a representative 1D two-band model with broken inversion symmetry, and is described by the Hamiltonian <cit.>, H = 1/2 ∑_i (c_i+1^† c_i + h.c.) - δt/2 ∑_i (-1)^i (c_i+1^† c_i + h.c.) + Δ ∑_i (-1)^i c_i^† c_i. The Rice-Mele model is a minimal model for molecular conductors <cit.> and ferroelectric perovskites <cit.>. In the momentum representation, the Hamiltonian reads H = cos(ka/2) σ_x + δt sin(ka/2) σ_y + Δ σ_z, where the Pauli matrices σ act on the two sublattices (A and B) in the unit cell, and a is the lattice constant. For the Rice-Mele model, the wave functions in Eq. (<ref>) are given by u_k = -sin(θ/2) and v_k = e^iϕ cos(θ/2), with the parameters θ = cos^-1(Δ/|ϵ_1,k|) and ϕ = tan^-1 δt. The energy dispersion for the valence band is given by ϵ_1,k = -√(cos^2(ka/2) + δt^2 sin^2(ka/2) + Δ^2), and ϵ_2,k = -ϵ_1,k for the conduction band, as shown in the right panel of Fig. <ref> with the black line.
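A quick numerical check of this model is sketched below (assumptions: NumPy, ħ = t = a = 1, and the parameter values δt = Δ = 0.3t that are used for the estimate later in the text); it only verifies that diagonalizing H(k) reproduces the closed-form valence band.

```python
# Rice-Mele Bloch Hamiltonian H(k) = cos(ka/2) sx + dt sin(ka/2) sy + Delta sz.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(k, dt=0.3, Delta=0.3, a=1.0):
    return np.cos(k*a/2)*sx + dt*np.sin(k*a/2)*sy + Delta*sz

ks = np.linspace(-np.pi, np.pi, 201)
e_num = np.array([np.linalg.eigvalsh(H(k))[0] for k in ks])  # lower band
e1 = -np.sqrt(np.cos(ks/2)**2 + 0.3**2*np.sin(ks/2)**2 + 0.3**2)
assert np.allclose(e_num, e1)  # numerics match the dispersion above
```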
We again consider two copies of the Rice-Mele model and introduce the onsite interaction given by H_int = U ∑_i n_↑,i n_↓,i, where ↑ and ↓ label the two identical copies. By focusing on the electronic structure of the ↑ component, we suppress the label for the two copies for simplicity. Now we study the nonequilibrium steady state under the electric field E applied along the 1D chain, by using the nonequilibrium Green's functions, Eq. (<ref>). The electric field is described by the Hamiltonian, H_ele = -eEa ∑_i i n_i. The application of the electric field effectively changes the parameters δt and Δ, which can be easily obtained within the Hartree approximation since the expectation values are directly computed from the lesser component of the Green's function. From the Hartree term, the occupation of site A is modified as δn_A = iEa/2 ∫ dk/2π [u_k v_k G^<_E,21(k) + u_k^* v_k^* G^<_E,12(k)] = Ea/(π |v_11,k_F| E_g,k_F) Im[u_k_F v_k_F v_12,k_F]. (For details of the derivation, see Methods.) Similarly, the occupation of site B is modified in the opposite way as δn_B = -δn_A. Thus the Hartree term effectively changes the staggered potential Δ as Δ → Δ + EaU/(π |v_11,k_F| E_g,k_F) Im[u_k_F v_k_F v_12,k_F]. Notice that the change of Δ is opposite in sign depending on the direction of E. This situation is schematically illustrated in Fig. <ref>. Since the parameter changes are asymmetric with respect to the sign of E in the δt-Δ space, the effective band structure ϵ_1,k(E) becomes different for the electric fields +E and -E. The nonlinear current in the nonequilibrium steady state is obtained from the conventional Boltzmann equation approach for this modified band structure in the presence of E. Namely, the linear conductivity is given by σ(E) = 2τ|v_F(E)|, with the E-dependent Fermi velocity v_F(E) = ∂_k ϵ_1,k(E)|_k=k_F (where k_F is the Fermi momentum). The E-linear change of the effective band structure leads to an E-linear term in v_F(E), which results in the nonlinear current response ∝ E^2. Thus the asymmetry in the band structure changes leads to the nonreciprocity of the current with respect to the direction of E. The nonreciprocity is quantified by the ratio of the change of the electric conductivity γ = [σ(E)-σ(0)]/σ(0) in the presence of the applied electric field. We note that γ = δJ/J and the crude approximation is given in Eq. (<ref>). Explicit evaluation of Eq. (<ref>) gives the nonreciprocity ratio of γ = 5 × 10^-7 for typical parameters of the Rice-Mele model (δt = Δ = 0.3t, U = t, k_F = 0.1π/a along with t = 1 eV and a = 1 Å) and the electric field of E = 10^5 V/m. This order of the nonreciprocity is comparable to those in materials showing magnetochiral anisotropy <cit.>, as we will discuss further in the discussion section. Discussions Finally, we give an estimate of the nonreciprocal response induced by the present mechanism for realistic materials. The typical magnitude of the nonreciprocity is determined by γ = δJ/J in Eq. (<ref>). When the band gap and Coulomb energy are both of the order of 1 eV, the ratio δJ/J reduces to eEa/W, which is the ratio between the electric potential in the unit cell and the bandwidth. This allows us to estimate the typical nonreciprocity as follows. We consider a current of 1 mA that flows in a wire of area 1 mm^2, which amounts to a current density of j = 10^3 A/m^2.
For usual metals, the conductivity is roughly given by σ ≃ 10^6 A/Vm, and hence the electric field present in the wire is E = j/σ ≃ 10^-3 V/m. In this case, the electric potential in the unit cell of a ≃ 1 Å is eEa ≃ 10^-13 eV. Since the bandwidth is typically 1 eV, this indicates that the nonreciprocity ratio is δJ/J ≃ 10^-13. This should be compared to the typical order of nonreciprocity for materials showing magnetochiral anisotropy. Bi helix <cit.> and molecular solids <cit.> show the nonreciprocity measured in the resistivity change δρ as δρ/ρ = γ' I B with γ' ≃ 10^-3 A^-1 T^-1. For I = 1 mA and B = 1 T, the typical nonreciprocity is δJ/J ≃ δρ/ρ ≃ 10^-6. Thus the nonreciprocity induced by electron correlation is very small for good metals. On the other hand, we can expect comparable nonreciprocity for doped semiconductors, whose conductivity ranges from 10^-1 to 10^5 A/Vm. For example, for doped Si with σ = 10^-1 A/Vm and the bandwidth W ≃ 1 eV in the presence of the current density j = 10^3 A/m^2, we obtain a nonreciprocity of δJ/J ≃ 10^-6, which becomes comparable with typical materials showing magnetochiral anisotropy. Another candidate is the molecular conductor TTF-CA, which is a strongly correlated insulator. Of course, our theory for weakly correlated metals is not directly applicable. However, it is interesting to estimate the nonreciprocity ratio anyway, since the carriers in TTF-CA (thermally activated or provided by impurity sites) may be treated as electrons having a Fermi surface, and the Hartree approximation sometimes becomes a good approximation at least for the ground states. The typical order of the electric field that can be applied is E ≃ 10^5 V/m <cit.>. Since the lattice constant is a ≃ 1 nm, the electric voltage in the unit cell becomes eEa ≃ 10^-4 eV, and the bandwidth is given by W ≃ 0.2 eV. Thus the nonreciprocity ratio can be 10^-3, which may be comparable with that of the magnetochiral anisotropy in the Bi helix <cit.>. We again note that this is a number obtained from a naive application of Eq. (<ref>) to TTF-CA beyond the applicability of our theory, but it suggests that it is an interesting future problem to study TTF-CA as a candidate strongly correlated material for nonreciprocity, from both theoretical and experimental points of view. Our analysis is mostly valid for weakly interacting systems because we adopted the Hartree approximation to incorporate the correlation effect. Therefore, the study of nonreciprocal responses in strongly interacting cases remains an interesting future problem. Meanwhile, our symmetry considerations from the generalization of Onsager's theorem suggest that nonreciprocal current response can generally appear in the presence of dissipation and interactions, regardless of the strength of the interaction. We may also note that the Hartree approximation sometimes gives a good description of some ground state properties, even for strong U cases, such as magnetically ordered ground states. Our approach may give a good approximation for nonlinear properties of those states, since the nonreciprocal current response is a nonequilibrium property near the ground state under a moderate electric field. Methods Derivation of generalized Onsager's theorem. In this section, we present general symmetry considerations on the nonreciprocal current response with respect to the time reversal symmetry by extending Onsager's relationship to nonlinear current.
We consider a system of noninteracting electrons that are described by the Green's function in the Lehmann representation, G_ab(iω_n) = e^βΩ ∑_α,β ⟨α|c_a|β⟩⟨β|c_b^†|α⟩ (e^-βE_α + e^-βE_β)/(iω_n + E_α - E_β), where |α⟩ is a many-body state that satisfies Ĥ|α⟩ = E_α|α⟩ with the many-body Hamiltonian Ĥ, β is the inverse temperature, e^-βΩ = Tr[e^-βĤ], and c_a and c_a^† are annihilation and creation operators of an electron with a single particle state a. (Here α, β are labels for many-body states, whereas a, b are labels for single particle states.) We write the current operator v̂_i along the ith direction as v̂_i = ∑_ab (v_i)_ab c^†_a c_b, where v is a matrix for the velocity operator in the single particle representation. In the linear response, Onsager's relationship indicates that the conductivity σ_ij is constrained as σ_ij = σ_ji, in the presence of time reversal symmetry <cit.>. This relationship is derived by considering the time reversal transformation in the Kubo formula for the linear conductivity, σ_ij(iω_n) = 1/(ω_n β) ∑_iω_m tr[v_i G(iω_m + iω_n) v_j G(iω_m)], where iω_n, iω_m are Matsubara frequencies, and tr is a trace over single particle states (labeled by a, b). The time reversal symmetry, 𝒯=K, indicates G(iω_m) = G^T(iω_m), v_i = -v_i^T. These actions of 𝒯 in the many-body representation are obtained by using 𝒯|α⟩ = (|α⟩)^* in Eq. (<ref>) and Eq. (<ref>). (We note that this is closely related to the symmetry constraint on a single particle Hamiltonian, H(k) = H^T(-k), in the momentum representation.) By using these relationships, the Kubo formula can be rewritten as σ_ij(iω_n) = 1/(ω_n β) ∑_iω_m tr[v_i^T G^T(iω_m + iω_n) v_j^T G^T(iω_m)] = 1/(ω_n β) ∑_iω_m tr[v_j G(iω_m + iω_n) v_i G(iω_m)] = σ_ji(iω_n), and leads to Onsager's relationship. Here we rewrote the trace in the reverse order in the second line and used the fact that transposition inside the trace does not change its value. Next we study how Onsager's theorem can be extended to nonlinear current responses. We consider the second order current response, J_i(ω_1 + ω_2) = σ_ijj(ω_1, ω_2) E_j(ω_1) E_j(ω_2). The nonlinear conductivity σ_ijj(iω_n_1, iω_n_2) has a contribution from a triangle diagram which is given by σ^tr_ijj(iω_n_1, iω_n_2) = 1/(ω_n_1 ω_n_2 β) ∑_iω_m tr[v_j G(iω_m + iω_n_1) v_j G(iω_m + iω_n_1 + iω_n_2) v_i G(iω_m)], since there are no vertex corrections for noninteracting systems. The time reversal symmetry indicates that σ^tr_ijj(iω_n_1, iω_n_2) = -1/(ω_n_1 ω_n_2 β) ∑_iω_m tr[v_j^T G^T(iω_m + iω_n_1) v_j^T G^T(iω_m + iω_n_1 + iω_n_2) v_i^T G^T(iω_m)] = -1/(ω_n_1 ω_n_2 β) ∑_iω_m tr[v_j G(iω_m - iω_n_2) v_j G(iω_m - iω_n_1 - iω_n_2) v_i G(iω_m)] = -σ^tr_ijj(-iω_n_2, -iω_n_1). Naively, this seems to suggest that the nonlinear conductivity σ_ijj(ω_1, ω_2) vanishes in the dc limit (ω_1 → 0 and ω_2 → 0). However, we notice that the nonlinear conductivity with Matsubara frequencies in the upper half plane is transformed to that with Matsubara frequencies in the lower half plane. Since the real axis is a branch cut in the complex ω plane, the analytic continuations of iω_n → 0 for the two quantities, σ_ijj(iω_1, iω_2) and σ_ijj(-iω_2, -iω_1), lead to different results in general. This indicates that a relationship similar to Onsager's relation does not necessarily constrain the dc nonlinear conductivity to vanish. Instead, this extended Onsager's relation indicates that nonreciprocal current necessarily involves irreversibility such as dissipation and relaxation. In a similar manner, we can also derive an extended Onsager's relation for shift current.
Shift current is a dc current induced by optical absorption above the band gap and the photoexcitation of electron-hole pairs that have finite polarization <cit.>. It is described by the nonlinear current response, J_i(ω_1+ω_2) = σ_ijj^shift(ω_1,ω_2) E(ω_1) E(ω_2) with ω_2 ≈ -ω_1. The nonlinear conductivity σ^shift has two contributions as σ^shift = σ^tr(ω_1,ω_2) + σ^bubble(ω_1,ω_2), where the latter piece is a correlation function of the paramagnetic current v̂_i and the diamagnetic current v̂_dia,ij ≡ ∑_ab (v_dia,ij)_ab c_a^† c_b <cit.>. Time reversal symmetry leads to the same relation, σ_ijj^shift(ω_1,ω_2) = -σ_ijj^shift(-ω_2,-ω_1), since σ^bubble obeys the same transformation law under the TRS as σ^tr, as follows. In the momentum representation, the matrix elements for the diamagnetic current are given by v_dia,ij = ∂v_j/∂k_i. Accordingly, TRS constrains the diamagnetic current operator as v_dia = v_dia^T, due to the extra k derivative. The nonlinear conductivity for shift current is written as σ_ijj^bubble(iω_n_1, iω_n_2) = 1/(ω_n_1 ω_n_2 β) ∑_i=1,2 ∑_iω_m tr[v_j G(iω_m + iω_n_i) v_dia,ij G(iω_m)]. Under TRS, this transforms as σ_ijj^bubble(iω_n_1, iω_n_2) = -1/(ω_n_1 ω_n_2 β) ∑_i=1,2 ∑_iω_m tr[v_i^T G^T(iω_m + iω_n_i) v_dia,ij^T G^T(iω_m)] = -1/(ω_n_1 ω_n_2 β) ∑_i=1,2 ∑_iω_m tr[v_i G(iω_m - iω_n_i) v_dia,ij G(iω_m)] = -σ_ijj^bubble(-iω_n_2, -iω_n_1), where we used the symmetry between iω_n_1 and iω_n_2 to fit the transformation law with that for σ^tr. Therefore, nonzero shift current also requires irreversibility that introduces a branch cut at the real axis in the ω space and makes the two limits ω → ±i0 different. In this case, the irreversibility comes from optical transitions and the creation of electron-hole pairs across the band gap. Keldysh Green's function. In this section, we summarize basic notations of Keldysh Green's functions that we need for our discussion <cit.>. In the Keldysh Green's function formalism, we consider the Keldysh component of the Green's function in addition to the retarded and advanced Green's functions. The Keldysh component describes the electron occupation in the nonequilibrium state, while the retarded and advanced components describe the spectrum of the system. The Dyson equation for the Green's function is given by [ G^R G^K; 0 G^A ]^-1 = ω - H - [ Σ^R Σ^K; 0 Σ^A ], with the Hamiltonian H. In the thermal equilibrium, the Keldysh Green's function is obtained by solving the Dyson equation. We suppose that the system is weakly coupled to a heat bath with a broad spectrum, which determines the electron distribution of the system. The coupling to the heat bath (such as electron reservoirs) is described by the self energy given by <cit.> [ Σ^R Σ^K; 0 Σ^A ] = iΓ [ -1/2 2f-1; 0 1/2 ], where Γ is the strength of the coupling to the bath, and f(x) = 1/[1+exp(x/k_B T)] is the Fermi distribution function with the temperature T. The observables in the nonequilibrium steady state are obtained from the Keldysh Green's function. We define the lesser component of the Green's function as G^<(ω, k) ≡ 1/2(G^K - G^R + G^A). By using G^<, we can write the expectation value of a general fermion bilinear as ⟨c_j^† c_i⟩ = -i ∫ dω/2π G_ij^<(ω). The lesser Green's function is concisely obtained from the equation G^< = G^R Σ^< G^A, where the lesser component of the self energy encodes the information of the electron distribution and is given by Σ^<(ω,k) ≡ 1/2(Σ^K - Σ^R + Σ^A) = iΓ f(ω). Keldysh Green's function under the applied electric field. In this section, we study the nonequilibrium electron distribution realized under the applied electric field.
We compute the E-linear part of the lesser Green's function G_E^< in Eq. (<ref>) in the gauge invariant formulation. In doing so, we use the diagram in Fig. <ref>(b) to specify the form of the self energy Σ^<_E in Eq. (<ref>). (We note that the electron interaction in Fig. <ref>(a) does not change the electron distribution and does not contribute to Σ^<_E. Furthermore, it turns out in the end that the contribution of impurity scattering to Σ^<_E is actually negligible under TRS.) Specifically, we consider the delta function type impurity [V(r) = uδ(r-r_0) with density n]. In the second-order Born approximation, the self energy is given by Σ_E(ω, k) = nu^2 ∫ dk'/2π G_E(ω, k'), which corresponds to the diagram in Fig. <ref>(b). On the right hand side, G_E denotes the bare Green's function that does not include the effect of impurity scattering. We note that the impurity scattering also modifies the self energy Σ_0 in the zeroth order in E, but this correction only changes the coupling Γ in Eq. (<ref>) and can be absorbed by redefining Γ accordingly. The current response in the nonequilibrium steady state under the electric field is captured by the lesser Green's function G_E^<, which gives a contribution linear in E. We consider a multiband system and suppose that the Bloch wave functions are given by Ψ_i,k, which satisfy HΨ_i,k = ϵ_i,k Ψ_i,k with the energy dispersion ϵ_i,k (where i is the band index). First we start with the intraband component G_E,ii^< for the band i (where we omit the band index i in the following, for simplicity). By assuming that G_0 and v have a single component, equation (<ref>) gives G_E^< = G_0^R [ Σ_E^< + i/2((∂_ω Σ^<) G_0^A v_k - v_k G_0^R (∂_ω Σ^<)) ] G_0^A = Σ_E^</[(ω-ϵ_k)^2 + Γ^2/4] + iΓ^2 v_k δ(ω-ϵ_F)/{2[(ω-ϵ_k)^2 + Γ^2/4]^2}, where we used ∂_ω f(ω) = -δ(ω-ϵ_F). This expression is simplified by using the relationship 1/[(ϵ_F-ϵ_k)^2 + Γ^2/4]^n = {2π(2n-2)!/[(n-1)!]^2 Γ^(2n-1)} ∑_k_F,i (1/|v_k|) δ(k-k_F,i), which holds for a positive integer n and the Fermi momenta k_F,i, where we only keep the leading order in terms of 1/Γ. By using Eq. (<ref>) with Eq. (<ref>), the impurity scattering gives rise to Σ_E^<(ω) given by Σ_E^<(ω) = inu^2 (2π/Γ) δ(ω-ϵ_F) ∑_k_F,i (v_k/|v_k|) δ(k-k_F,i) / [1 - nu^2 (2π/Γ) ∑_k_F,i (1/|v_k|) δ(k-k_F,i)]. Here, the numerator on the right hand side vanishes since the TRS leads to ∑_k_F,i v_k/|v_k| = 0, and hence Σ_E^<(ω) = 0 follows. Thus we obtain G_E^< = 2πi/Γ δ(ω-ϵ_F) ∑_k_F,i (v_k/|v_k|) δ(k-k_F,i). This change of the lesser Green's function linear in E describes the effect of the applied electric field where the electron occupation is shifted in the momentum space as k → k + τE near the Fermi surface (with τ = 2π/Γ). This corresponds to the picture from the semiclassical Boltzmann equation as illustrated in Fig. <ref>(a). Next we consider the interband component, G_E,12^<, by focusing on the valence and conduction bands, which are labeled by 1 and 2, respectively. Equation (<ref>) gives G_E,12^< = G_0,11^R [ Σ_E,12^< + i/2((∂_ω Σ^<_11) G_0,11^A v_k,12 - v_k,12 G_0,22^R (∂_ω Σ^<_22)) ] G_0,22^A. We assume that the Fermi energy is located within the band 1 and does not cross the band 2. In this case, the second term on the right hand side reduces to G_E,12^< - G_0,11^R Σ_E,12^< G_0,22^A = Γ v_12,k/2 δ(ω-ϵ_F) [G_0,11^R G_0,11^A G_0,22^A - G_0,11^R G_0,22^R G_0,22^A] = -∑_k_F,i π v_12,k/(|v_11,k| E_g,k) δ(ω-ϵ_F) δ(k-k_F,i), with E_g,k = ϵ_2,k - ϵ_1,k, where we only kept the leading term with respect to 1/E_g,k. (Here we used Eq. (<ref>) for G_0,11^R G_0,11^A and discarded the second term.)
Since the right hand side is inversely proportional to the band gap E_g,k, the self energy Σ_E,12^< obtained from Eq. (<ref>) is proportional to Γ/E_g,k_F, which is negligible on the left hand side of the above equation given that G_0,22^A ∝ 1/E_g,k_F. Therefore the lesser part of the Green's function is given by G_E,12^< = -∑_k_F,i π v_12,k/(|v_11,k| E_g,k) δ(ω-ϵ_F) δ(k-k_F,i). We note that G_E,21^< is obtained from the relationship G_E,21^< = -(G_E,12^<)^*, as a consequence of the hermiticity of expectation values in Eq. (<ref>). Effective band dispersion with impurity scattering. In this section, we study the effective band dispersion in the presence of E and impurity scattering by looking at Σ_E^R. We show that impurity scattering is insufficient for nonreciprocal current response because the change of the band dispersion turns out to be the same for positive and negative electric fields. From Eq. (<ref>), the retarded part of the equation for G_E reads G_E^R = G_0^R [ Σ_E^R + i/2(G_0^R(-v_k) - (-v_k)G_0^R) ] G_0^R, with v_k = ∂_k H, where we used ∂_ω Σ^R = 0. For simplicity, we consider a two-band system, where the Green's function is given by G_0,ij^R = δ_ij/(ω - ϵ_i + iΓ/2), where i,j = 1,2 are labels for the valence and conduction bands, respectively. For the diagonal components, we obtain G_E,ii^R = G_0,ii^R Σ_E,ii^R G_0,ii^R, since the second term in Eq. (<ref>) vanishes trivially. The diagonal part of the self energy is momentum independent and vanishes as Σ_E,ii^R(ω) = nu^2 ∫ dk/2π G_E,ii^R = [nu^2 ∫ dk/2π 1/(ω-ϵ_i(k)+iΓ/2)^2] Σ_E,ii^R(ω) = 0. The off-diagonal part is determined from G_E,21^R - G_0,22^R Σ_E,21^R G_0,11^R = -i/2 G_0,22^R(G_0,22^R v_k,21 - v_k,21 G_0,11^R) G_0,11^R = -i v_k,21(ϵ_1-ϵ_2)/[2(ω-ϵ_1+iΓ/2)^2(ω-ϵ_2+iΓ/2)^2]. By integrating over the momentum, we obtain (1 - nu^2 ∫ dk/2π G_0,22^R G_0,11^R) Σ_E,21^R(ω) = -nu^2 ∫ dk/2π {i v_k,21(ϵ_1-ϵ_2)/[2(ω-ϵ_1+iΓ/2)^2(ω-ϵ_2+iΓ/2)^2]}, which leads to nonzero Σ_E,21^R in general. Therefore, the effective Hamiltonian is given by H = [ ϵ_1 E/2 Σ_E,12^R; E/2 Σ_E,21^R ϵ_2 ], and the effective band structure of the valence band in the presence of E is obtained by diagonalizing H as ϵ_1 = (ϵ_1+ϵ_2)/2 - √((ϵ_2-ϵ_1)^2/4 + E^2|Σ_E,21^R|^2/4). This is an even function with respect to E; the effective band structure depends on the strength |E| of the electric field, but is independent of the direction of the applied field. Therefore, no nonreciprocal current appears when we use the Boltzmann equation approach based on this modified band structure. We note that this conclusion is not changed even when we treat the impurity scattering by the self-consistent Born approximation. In the self-consistent Born approximation, G_E in Eq. (<ref>) is taken as the full Green's function including the effect of impurity scattering. In this case, the self energy Σ_E is obtained by repeating the above calculation until convergence. In every step of the repetition, the energy dispersion is modified as in Eq. (<ref>) and still gives a symmetric dispersion in k. After repeating this many times, the dispersion remains symmetric in k. Therefore, the self-consistent treatment of impurity scattering still gives no nonreciprocal current response. Electron-electron interaction in two-band model. In this section, we derive Eq. (<ref>) for the self energy that arises from the electron-electron interaction in the case of a two-band model. We also derive Eq. (<ref>) for the expectation value of the density operators.
We consider the onsite interaction that is given by H_int = U ∑_n (n_A,↑,n n_A,↓,n + n_B,↑,n n_B,↓,n), with the site index n. Expressing the Hartree contribution to the self energy requires momentum representations of the density operators n_A,n and n_B,n, where we omit the indices for the two copies (↑ and ↓) since the expressions are identical for the two copies. For the wave functions in Eq. (<ref>), the creation operators of the Bloch states are written as c_1,k^† = 1/√(N) ∑_n e^ikn (u_k c_A,n^† + v_k c_B,n^†), c_2,k^† = 1/√(N) ∑_n e^ikn (-v_k^* c_A,n^† + u_k^* c_B,n^†), where N is the system size. By using the inverse Fourier transformation, the creation operators in the site basis are expressed with the Bloch states as c_A,n^† = 1/√(N) ∑_k e^-ikn (u_k^* c_1,k^† - v_k c_2,k^†), c_B,n^† = 1/√(N) ∑_k e^-ikn (v_k^* c_1,k^† + u_k c_2,k^†), where k runs over the momenta in the first Brillouin zone (e.g., k = 2πj/Na for j = 0, …, N-1 with lattice constant a). Now the density operators are given by c_A,n^† c_A,n = 1/N ∑_k_1,k_2 e^-i(k_1-k_2)n [u_k_1 u_k_2^* c_1,k_1^† c_1,k_2 + v_k_1 v_k_2^* c_2,k_1^† c_2,k_2 - u_k_1 v_k_2 c_1,k_1^† c_2,k_2 - v_k_1^* u_k_2^* c_2,k_1^† c_1,k_2], c_B,n^† c_B,n = 1/N ∑_k_1,k_2 e^-i(k_1-k_2)n [v_k_1 v_k_2^* c_1,k_1^† c_1,k_2 + u_k_1^* u_k_2 c_2,k_1^† c_2,k_2 + v_k_1 u_k_2 c_1,k_1^† c_2,k_2 + u_k_1^* v_k_2^* c_2,k_1^† c_1,k_2]. The retarded part of the self energy is given by <cit.> Σ_E,m_1 m_2^R(ω,k) = -i/N ∑_k' ∫ dω'/2π [U_(m_1,k)(m_3,k');(m_2,k)(m_4,k') - U_(m_3,k')(m_1,k);(m_2,k)(m_4,k')] G_E,m_3 m_4^<(ω',k'), by using the momentum space representation for the interaction H_int, which is given by H_int = -1/2N ∑_m_1,m_2,m_3,m_4 ∑_k_1,k_2,k_3,k_4 δ(k_1+k_2-k_3-k_4) U_(m_1,k_1)(m_2,k_2);(m_3,k_3)(m_4,k_4) c_m_1,k_1^† c_m_2,k_2^† c_m_3,k_3 c_m_4,k_4, with the band indices m_i. The first term in Eq. (<ref>) is the Hartree term, and the second is the Fock term. The momentum representations of the interaction that are relevant for the Hartree term contributing to Σ_E,11^R are given by U_(1,k),(1,k');(1,k),(1,k') = U[|u_k|^2|u_k'|^2 + |v_k|^2|v_k'|^2], U_(1,k),(2,k');(1,k),(2,k') = U[|u_k|^2|v_k'|^2 + |v_k|^2|u_k'|^2], U_(1,k),(1,k');(1,k),(2,k') = U(-|u_k|^2 + |v_k|^2) u_k' v_k', U_(1,k),(2,k');(1,k),(1,k') = U(-|u_k|^2 + |v_k|^2) u^*_k' v^*_k'. By using Eq. (<ref>), the self energy Σ_E,11^R is written as Σ_E,11^R(k) = -iU/N ∑_k' ∫ dω/2π {(|u_k|^2|u_k'|^2 + |v_k|^2|v_k'|^2) G_E,11^<(k') + (|u_k|^2|v_k'|^2 + |v_k|^2|u_k'|^2) G_E,22^<(k') + (-|u_k|^2 + |v_k|^2)[u_k' v_k' G_E,12^<(k') + u_k'^* v_k'^* G_E,21^<(k')]}. The first two terms in the integral vanish due to TRS. Specifically, G_E,ii(k) is an odd function of k due to TRS as in Eq. (<ref>), and |u_k|^2 and |v_k|^2 are even functions of k, which indicates that the first two terms vanish after integrating over k'. Thus we end up with Σ_E,11^R(k) = iaU ∫ dω/2π dk'/2π (|u_k|^2 - |v_k|^2)[u_k' v_k' G_E,12^<(k') + u_k'^* v_k'^* G_E,21^<(k')], where we replaced the sum ∑_k' with the integral Na ∫ dk'/2π. Next, we derive the changes of the density δn_A and δn_B caused by the electric field E. By using Eq. (<ref>) and Eq.
(<ref>), the change of the density at site A is given by δn_A = -iEa/2 ∫ dk/2π [|u_k|^2 G^<_E,11(k) + |v_k|^2 G^<_E,22(k) - u_k v_k G^<_E,21(k) - u_k^* v_k^* G^<_E,12(k)]. Since the first and second terms vanish due to TRS, we obtain δn_A = iEa/2 ∫ dk/2π [u_k v_k G^<_E,21(k) + u_k^* v_k^* G^<_E,12(k)]. Similarly, the change of the density at site B is given by δn_B = -iEa/2 ∫ dk/2π [u_k v_k G^<_E,21(k) + u_k^* v_k^* G^<_E,12(k)], which is opposite in sign compared to δn_A. Acknowledgements. This work was supported by the Gordon and Betty Moore Foundation's EPiQS Initiative Theory Center Grant (TM), and by Grants-in-Aid for Scientific Research from the Ministry of Education, Science, Sports and Culture No. 24224009 and 25400317, CREST, Japan Science and Technology (grant no. JPMJCR16F1), and the ImPACT Program of the Council for Science, Technology and Innovation (Cabinet Office, Government of Japan, 888176) (NN). | http://arxiv.org/abs/1706.08991v2 | {
"authors": [
"Takahiro Morimoto",
"Naoto Nagaosa"
],
"categories": [
"cond-mat.str-el",
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.str-el",
"published": "20170627181250",
"title": "Nonreciprocal current from electron interactions in noncentrosymmetric crystals: roles of time reversal symmetry and dissipation"
} |
Inverse Ising inference by combining Ornstein-Zernike theory with deep learning Alpha A. Lee =============================================================================== Kaczmarz method is one popular iterative method for solving inverse problems, especially in computed tomography. Recently, it was established that a randomized version of the method enjoys an exponential convergence for well-posed problems, and the convergence rate is determined by a variant of the condition number. In this work, we analyze the preasymptotic convergence behavior of the randomized Kaczmarz method, and show that the low-frequency error (with respect to the right singular vectors) decays faster during the first iterations than the high-frequency error. Under the assumption that the initial error is smooth (e.g., sourcewise representation), the results allow explaining the fast empirical convergence behavior, thereby shedding new insights into the excellent performance of the randomized Kaczmarz method in practice. Further, we propose a simple strategy to stabilize the asymptotic convergence of the iteration by means of variance reduction. We provide extensive numerical experiments to confirm the analysis and to elucidate the behavior of the algorithms. Keywords: randomized Kaczmarz method; preasymptotic convergence; smoothness; error estimates; variance reduction § INTRODUCTION Kaczmarz method <cit.>, named after the Polish mathematician Stefan Kaczmarz, is one popular iterative method for solving linear systems. It is a special form of the general alternating projection method. In the computed tomography (CT) community, it was rediscovered in 1970 by Gordon, Bender and Herman <cit.>, under the name algebraic reconstruction techniques. It was implemented in the very first medical CT scanner, and since then it has been widely employed in CT reconstructions <cit.>. The convergence of Kaczmarz method for consistent linear systems is not hard to show. However, the theoretically very important issue of convergence rates of Kaczmarz method (or the alternating projection method for linear subspaces) is very challenging. There are several known convergence rates results, all relying on (spectral) quantities of the matrix A that are difficult to compute or verify in practice (see <cit.> and the references therein). This challenge is well reflected by the fact that the convergence rate of the method depends strongly on the ordering of the equations. It was numerically discovered several times independently in the literature that using the rows of the matrix A in Kaczmarz method in a random order, called randomized Kaczmarz method (RKM) below, rather than the given order, can often substantially improve the convergence <cit.>. Thus RKM is quite appealing for practical applications. However, the convergence rate analysis was given only very recently. In an influential paper <cit.>, in 2009, Strohmer and Vershynin established the exponential convergence of RKM for consistent linear systems, and the convergence rate depends on (a variant of) the condition number. This result was then extended and refined in various directions <cit.>, including inconsistent or underdetermined linear systems. Recently, Schöpfer and Lorenz <cit.> showed the exponential convergence of RKM for sparse recovery with elastic net. We recall the result of Strohmer and Vershynin and its counterpart for noisy data in Theorem <ref> below.
It is worth noting that all these estimates involve the condition number, and for noisy data, the estimate contains a term inversely proportional to the smallest singular value of the matrix A. These important and interesting existing results do not fully explain the excellent empirical performance of RKM for solving linear inverse problems, especially in the case of noisy data, where the term due to noise is amplified by a factor of the condition number. In practice, one usually observes that the iterates first converge quickly to a good approximation to the true solution, and then start to diverge slowly. That is, RKM exhibits the typical "semiconvergence" phenomenon of iterative regularization methods, e.g., the Landweber method and conjugate gradient methods <cit.>. This behavior is not well reflected in the known estimates given in Theorem <ref>; see Section <ref> for further comments.

The purpose of this work is to study the preasymptotic convergence behavior of RKM. This is achieved by analyzing carefully the evolution of the low- and high-frequency errors during the randomized Kaczmarz iteration, where the frequency is divided according to the right singular vectors of the matrix A. The results indicate that during the initial iterations, the low-frequency error decays much faster than the high-frequency one, cf. Theorems <ref> and <ref>. Since the inverse solution (relative to the initial guess x_0) is often smooth in the sense that it consists mostly of low-frequency components <cit.>, this explains the good convergence behavior of RKM, thereby shedding new insights into its excellent practical performance. This condition on the inverse solution is akin to the sourcewise representation condition in classical regularization theory <cit.>. Further, based on the fact that RKM is a special case of the stochastic gradient method <cit.>, we propose a simple modified version using the idea of variance reduction by hybridizing it with the Landweber method, inspired by <cit.>. This variant enjoys both good preasymptotic and asymptotic convergence behavior, as indicated by the numerical experiments.

Last, we note that in the context of inverse problems, the Kaczmarz method has received much recent attention, and has demonstrated very encouraging results in a number of applications. The regularizing property and convergence rates in various settings have been analyzed for both linear and nonlinear inverse problems (see <cit.> for an incomplete list). However, these interesting works all focus on a fixed ordering of the linear system, instead of the randomized variant under consideration here, and thus they do not cover RKM.

The rest of the paper is organized as follows. In Section <ref> we describe RKM and recall the basic tool for our analysis, i.e., the singular value decomposition, together with a few useful notations. Then in Section <ref> we derive the preasymptotic convergence rates for exact and noisy data. Some practical issues are discussed in Section <ref>. Last, in Section <ref>, we present extensive numerical experiments to confirm the analysis and shed further insights.

§ RANDOMIZED KACZMARZ METHOD

Now we describe the problem setting and RKM, and also recall known convergence-rate results for both consistent and inconsistent data. The linear inverse problem with exact data can be cast into

Ax = b,

where the matrix A∈ℝ^n×m, b∈ℝ^n, and b∈range(A). We denote the ith row of the matrix A by a_i^t, with a_i∈ℝ^m being a column vector, where the superscript t denotes the vector/matrix transpose.
The linear system (<ref>) can be formally determined or under-determined. The classical Kaczmarz method <cit.> proceeds as follows. Given the initial guess x_0, we iterate

x_k+1 = x_k + ((b_i − ⟨a_i,x_k⟩)/‖a_i‖^2) a_i,  i = (k mod n) + 1,

where ⟨·,·⟩ and ‖·‖ denote the Euclidean inner product and norm, respectively. Thus, the Kaczmarz method sweeps through the equations in a cyclic manner, and n iterations constitute one complete cycle.

In contrast to the cyclic choice of the index i in the Kaczmarz method, RKM selects i randomly. There are several different variants, depending on the specific random choice of the index i. The variant analyzed by Strohmer and Vershynin <cit.> is as follows. Given an initial guess x_0, we iterate

x_k+1 = x_k + ((b_i − ⟨a_i,x_k⟩)/‖a_i‖^2) a_i,

where i is drawn independent and identically distributed (i.i.d.) from the index set {1,2,…,n} with the probability p_i for the ith row given by

p_i = ‖a_i‖^2/‖A‖_F^2,  i = 1,…,n,

where ‖·‖_F denotes the matrix Frobenius norm. This choice of the probability distribution p_i lends itself to a convenient convergence analysis <cit.>. In this work, we shall focus on the variant (<ref>)-(<ref>).

Similarly, the noisy data b^δ is given by

b_i^δ = ⟨a_i,x^*⟩ + η_i,  i = 1,…,n,  ‖η‖ ≤ δ,

where δ is the noise level. RKM then reads: given the initial guess x_0, we iterate

x_k+1 = x_k + ((b_i^δ − ⟨a_i,x_k⟩)/‖a_i‖^2) a_i,

where the index i is drawn i.i.d. according to (<ref>).

The following theorem summarizes typical convergence results of RKM for consistent and inconsistent linear systems <cit.> (see <cit.> for in-depth discussions), under the condition that the matrix A is of full column rank. For a rectangular matrix A∈ℝ^n×m, we denote by A^†∈ℝ^m×n the pseudoinverse of A, by ‖A‖_2 the matrix spectral norm, and by σ_min(A) the smallest singular value of A. The error x_k − x^* of the RKM iterate x_k (with respect to the exact solution x^*) is stochastic due to the random choice of the index i. Below 𝔼[·] denotes expectation with respect to the random row index selection. Note that κ_A differs from the usual condition number <cit.>.

Let x_k be the solution generated by RKM (<ref>)–(<ref>) at iteration k, and κ_A = ‖A‖_F‖A^†‖_2 be a (generalized) condition number. Then the following statements hold.
(i) For exact data, there holds
𝔼[‖x_k − x^*‖^2] ≤ (1 − κ_A^−2)^k ‖x_0 − x^*‖^2.
(ii) For noisy data, there holds
𝔼[‖x_k − x^*‖^2] ≤ (1 − κ_A^−2)^k ‖x_0 − x^*‖^2 + δ^2/σ_min^2(A).

Theorem <ref> gives error estimates (in expectation) for any iterate x_k, k ≥ 1: the convergence rate is determined by κ_A. For ill-posed linear inverse problems (e.g., CT), bad conditioning is characteristic and the condition number κ_A can be huge, and thus the theorem predicts a very slow convergence. However, in practice, RKM converges rapidly during the initial iterations. The estimate is also deceptive for noisy data: due to the presence of the term δ^2/σ_min^2(A), it implies blowup at the very first iteration, which is however not the case in practice. Hence, these results do not fully explain the excellent empirical convergence of RKM for inverse problems.

The next example compares the convergence rates of the Kaczmarz method and RKM. Given n ≥ 2, let θ = 2π/n. Consider the linear system with A∈ℝ^n×2, a_i = (cos((i−1)θ), sin((i−1)θ))^t and the exact solution x^* = 0, i.e., b = 0. Then we solve it by the Kaczmarz method and RKM. For any e_0 = (x_0, y_0), after one Kaczmarz iteration, e_1 = (0, y_0), and generally, after k further iterations,

‖e_k+1‖ = |cosθ|^k ‖e_1‖.

For large n, the decreasing factor |cosθ| can be very close to one, and thus each Kaczmarz iteration can only decrease the error slowly.
Thus, the convergence rate of the Kaczmarz method depends strongly on n: the larger n is, the slower the convergence. Similarly, for RKM, there holds

𝔼[‖e_k+1‖^2 | e_k] = (1/n)∑_i=1^n |sin iθ|^2 ‖e_k‖^2 = (1/2n)∑_i=1^n (1 − cos 2iθ) ‖e_k‖^2 = (1/2)‖e_k‖^2,

and

𝔼[‖e_k+1‖^2] = 2^−(k+1) ‖e_0‖^2.

For RKM, the convergence rate is independent of n. Further, for any n > 8, we have 0 < θ < π/4, and cosθ > cos(π/4) = 2^−1/2. This shows the superiority of RKM over the cyclic one.

Last we recall the singular value decomposition (SVD) of the matrix A <cit.>, which is the basic tool for the convergence analysis in Section <ref>. We denote the SVD of A∈ℝ^n×m by

A = UΣV^t,

where U∈ℝ^n×n and V∈ℝ^m×m are column orthonormal matrices, whose column vectors are known as the left and right singular vectors, respectively, and Σ∈ℝ^n×m is diagonal with the diagonal elements ordered nonincreasingly, i.e., σ_1 ≥ … ≥ σ_r > 0, with r = min(m,n). The right singular vectors v_i span the solution space, i.e., x ∈ span{v_i}. We shall write U and V^t in terms of their rows,

U = (u_1^t; ⋮; u_n^t),  V^t = (v_1^t; ⋮; v_m^t),

i.e., V = (v_1 … v_m). Note that for inverse problems, empirically, as the index i increases, the right singular vectors v_i are increasingly more oscillatory, capturing more high-frequency components <cit.>. The behavior is analogous to the inverse of Sturm-Liouville operators. For a general class of convolution integral equations, such oscillating behavior was established in <cit.>. For many practical applications, the linear system (<ref>) can be regarded as a discrete approximation to the underlying continuous problem, and thus inherits the corresponding spectral properties.

Given a frequency cutoff number 1 ≤ L ≤ m, we define two (orthogonal) subspaces of ℝ^m by

ℒ = span{v_1,…,v_L}  and  ℋ = span{v_L+1,…,v_m},

which denote the low- and high-frequency solution spaces, respectively. This is motivated by the observation that in practice one only looks for smooth solutions that are spanned/well captured by the first few right singular vectors <cit.>. This condition is akin to the concept of sourcewise representation in regularization theory, e.g., x = A^*w for some w∈ℝ^n, or its variants <cit.>, which is needed for deriving convergence rates for the regularized solution. Throughout, we always assume that the truncation level L is fixed. Then for any vector z∈ℝ^m, there exists a unique decomposition z = P_Lz + P_Hz, where P_L and P_H are the orthogonal projection operators onto ℒ and ℋ, respectively, defined by

P_Lz = ∑_i=1^L ⟨v_i,z⟩v_i  and  P_Hz = ∑_i=L+1^m ⟨v_i,z⟩v_i.

These projection operators will be used below to analyze the preasymptotic behavior of RKM.

§ PREASYMPTOTIC CONVERGENCE ANALYSIS

In this section, we present a preasymptotic convergence analysis of RKM. Let x^* be one solution of the linear system (<ref>). Our analysis relies on decomposing the error e_k = x_k − x^* of the kth iterate x_k into low- and high-frequency components (according to the right singular vectors). We aim at bounding the conditional error 𝔼[‖e_k+1‖^2 | e_k] (on e_k, where the expectation 𝔼[·] is with respect to the random choice of the index i, cf. (<ref>)) by analyzing separately 𝔼[‖P_Le_k+1‖^2 | e_k] and 𝔼[‖P_He_k+1‖^2 | e_k]. This is inspired by the fact that the inverse solution consists mainly of low-frequency components, which is akin to the concept of the source condition in regularization theory <cit.>. Our error estimates allow explaining the excellent empirical performance of RKM in the context of inverse problems. We shall discuss the preasymptotic convergence for exact and noisy data separately.
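As a concrete companion to the analysis, a minimal NumPy sketch of the iteration (<ref>)-(<ref>) and of the frequency-split errors used below might look as follows; all function and variable names are ours, and this is an illustrative sketch, not the code used for the experiments.

```python
import numpy as np

def rkm(A, b, x0, n_iter, rng):
    """Randomized Kaczmarz: rows sampled with probability ||a_i||^2/||A||_F^2."""
    row_norm2 = np.sum(A**2, axis=1)           # ||a_i||^2
    p = row_norm2 / row_norm2.sum()            # sampling distribution, cf. (5)
    x = x0.astype(float).copy()
    iterates = [x.copy()]
    for i in rng.choice(A.shape[0], size=n_iter, p=p):
        x += (b[i] - A[i] @ x) / row_norm2[i] * A[i]   # update, cf. (4)
        iterates.append(x.copy())
    return iterates

def frequency_split_errors(A, iterates, x_true, L):
    """||P_L e_k||^2 and ||P_H e_k||^2 w.r.t. the right singular vectors of A."""
    _, _, Vt = np.linalg.svd(A)                # rows of Vt are v_i^t
    E = np.array([x - x_true for x in iterates]) @ Vt.T   # coefficients <v_i, e_k>
    return (E[:, :L]**2).sum(axis=1), (E[:, L:]**2).sum(axis=1)
```

Averaging the returned error curves over independent runs approximates the expectations 𝔼[‖P_Le_k‖^2] and 𝔼[‖P_He_k‖^2] studied next.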
§.§ Exact data

First, we analyze the case of noise-free data. Let x^* be one solution to the linear system (<ref>), and e_k = x_k − x^* be the error at iteration k. Upon substituting the identity b = Ax^* into the RKM iterate, we deduce that for some i∈{1,…,n}, there holds

e_k+1 = (I − a_ia_i^t/‖a_i‖^2) e_k.

Note that I − a_ia_i^t/‖a_i‖^2 is an orthogonal projection operator. We first give two useful lemmas.

For any e_L∈ℒ and e_H∈ℋ, there hold

σ_L‖e_L‖ ≤ ‖Ae_L‖ ≤ σ_1‖e_L‖,  ‖Ae_H‖ ≤ σ_L+1‖e_H‖,  and  ⟨Ae_L, Ae_H⟩ = 0.

The assertions follow directly from simple algebra, and hence the proof is omitted.

For i = 1,…,n, there hold

‖P_Ha_i‖^2 ≤ σ_L+1^2  and  ∑_i=1^n ‖P_Ha_i‖^2 ≤ ∑_i=L+1^r σ_i^2.

By definition, P_Ha_i = ∑_j=L+1^m ⟨a_i,v_j⟩v_j. Since a_i^t = u_i^tΣV^t, there holds ⟨a_i,v_j⟩ = u_i^tΣV^tv_j = σ_j(u_i)_j. Hence, ‖P_Ha_i‖^2 = ∑_j=L+1^m ⟨a_i,v_j⟩^2 = ∑_j=L+1^m σ_j^2 |(u_i)_j|^2 ≤ σ_L+1^2. The second estimate follows similarly.

The next result gives a preasymptotic recursive estimate on 𝔼[‖P_Le_k+1‖^2 | e_k] and 𝔼[‖P_He_k+1‖^2 | e_k] for exact data b∈range(A). This represents our first main theoretical result.

Let c_1 = σ_L^2/‖A‖_F^2 and c_2 = ∑_i=L+1^r σ_i^2/‖A‖_F^2. Then there hold

𝔼[‖P_Le_k+1‖^2 | e_k] ≤ (1 − c_1)‖P_Le_k‖^2 + c_2‖P_He_k‖^2,
𝔼[‖P_He_k+1‖^2 | e_k] ≤ c_2‖P_Le_k‖^2 + (1 + c_2)‖P_He_k‖^2.

Let e_L and e_H be the low- and high-frequency components of the error e_k, respectively, i.e., e_L = P_Le_k and e_H = P_He_k. Then by the identities P_Le_k+1 = e_L − (1/‖a_i‖^2)⟨a_i,e_k⟩P_La_i and ⟨P_La_i, e_L⟩ = ⟨a_i, e_L⟩, we have

‖P_Le_k+1‖^2 = ‖e_L‖^2 − (2/‖a_i‖^2)⟨P_La_i,e_L⟩⟨a_i,e_k⟩ + ⟨a_i,e_k⟩^2 ‖P_La_i‖^2/‖a_i‖^4
= ‖e_L‖^2 − (2/‖a_i‖^2)⟨a_i,e_L⟩⟨a_i,e_k⟩ + ⟨a_i,e_k⟩^2 ‖P_La_i‖^2/‖a_i‖^4
≤ ‖e_L‖^2 − (2/‖a_i‖^2)⟨a_i,e_L⟩⟨a_i,e_k⟩ + ⟨a_i,e_k⟩^2/‖a_i‖^2
= ‖e_L‖^2 − (2/‖a_i‖^2)⟨a_i,e_L⟩⟨a_i,e_k⟩ + (⟨a_i,e_L⟩^2 + 2⟨a_i,e_L⟩⟨a_i,e_H⟩ + ⟨a_i,e_H⟩^2)/‖a_i‖^2.

Upon noting the identity ∑_i=1^n a_ia_i^t = A^tA, taking expectation on both sides yields

𝔼[‖P_Le_k+1‖^2 | e_k] ≤ ‖e_L‖^2 − (2/‖A‖_F^2)⟨e_k, A^tAe_L⟩ + (‖Ae_L‖^2 + 2⟨e_H, A^tAe_L⟩ + ‖Ae_H‖^2)/‖A‖_F^2.

Now substituting the splitting e_k = e_L + e_H and rearranging the terms give

𝔼[‖P_Le_k+1‖^2 | e_k] ≤ ‖e_L‖^2 − (2/‖A‖_F^2)⟨e_L, A^tAe_L⟩ − (2/‖A‖_F^2)⟨e_H, A^tAe_L⟩ + (‖Ae_L‖^2 + 2⟨e_H, A^tAe_L⟩ + ‖Ae_H‖^2)/‖A‖_F^2
≤ ‖e_L‖^2 − (1/‖A‖_F^2)‖Ae_L‖^2 + ‖Ae_H‖^2/‖A‖_F^2.

Thus the first assertion follows from Lemma <ref>.

The high-frequency component P_He_k+1 satisfies P_He_k+1 = e_H − (1/‖a_i‖^2)⟨a_i,e_k⟩P_Ha_i. We appeal to the inequality ⟨a_i,e_k⟩^2 ≤ ‖a_i‖^2‖e_k‖^2 = ‖a_i‖^2(‖e_L‖^2 + ‖e_H‖^2) to get

‖P_He_k+1‖^2 = ‖e_H‖^2 − (2/‖a_i‖^2)⟨a_i,e_H⟩⟨a_i,e_k⟩ + ⟨a_i,e_k⟩^2 ‖P_Ha_i‖^2/‖a_i‖^4
≤ ‖e_H‖^2 − (2/‖a_i‖^2)⟨a_i,e_H⟩⟨a_i,e_k⟩ + (‖P_Ha_i‖^2/‖a_i‖^2)(‖e_L‖^2 + ‖e_H‖^2).

Taking expectation yields

𝔼[‖P_He_k+1‖^2 | e_k] ≤ ‖e_H‖^2 − (2/‖A‖_F^2)‖Ae_H‖^2 + (1/‖A‖_F^2)(‖e_L‖^2 + ‖e_H‖^2)∑_i=1^n ‖P_Ha_i‖^2
≤ (1 + ∑_i=L+1^r σ_i^2/‖A‖_F^2)‖e_H‖^2 + (∑_i=L+1^r σ_i^2/‖A‖_F^2)‖e_L‖^2.

Thus we obtain the second assertion and complete the proof.

By Theorem <ref>, the decay of the error 𝔼[‖P_Le_k+1‖^2 | e_k] is largely determined by the factor 1 − c_1 and is only mildly affected by ‖P_He_k‖^2, through the factor c_2. The factor c_2 is very small in the presence of a gap in the singular value spectrum at σ_L, i.e., σ_L ≫ σ_L+1, showing clearly the role of the gap.

Theorem <ref> also covers the rank-deficient case, i.e., σ_L+1 = 0, and it then yields

𝔼[‖P_Le_k+1‖^2 | e_k] ≤ (1 − c_1)‖P_Le_k‖^2  and  𝔼[‖P_He_k+1‖^2 | e_k] ≤ ‖P_He_k‖^2.

If L = m, it recovers Theorem <ref>(i) for exact data.
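The one-step bounds are straightforward to probe numerically. The following hedged sketch (our own toy construction, reusing rkm and frequency_split_errors from the earlier sketch; the test matrix is an assumption, not one of the paper's examples) compares the observed decay of 𝔼[‖P_Le_k‖^2] against the factor (1 − c_1)^k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 200
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = 1.0 / (1.0 + np.arange(m))**2                 # decaying singular values (assumed)
A = (Q1 * s) @ Q2.T                               # A = U diag(s) V^t
x_true = Q2[:, :5] @ rng.standard_normal(5)       # smooth solution: e_0 lies in L
b = A @ x_true

L = 5
c1 = s[L - 1]**2 / (s**2).sum()                   # c_1 = sigma_L^2 / ||A||_F^2
eL = np.mean([frequency_split_errors(A, rkm(A, b, np.zeros(m), 300, rng), x_true, L)[0]
              for _ in range(50)], axis=0)        # ~ E[||P_L e_k||^2]
k = np.arange(len(eL))
print(eL[::50] / (eL[0] * (1 - c1)**k[::50]))     # ratios stay below O(1) if the bound holds
```

Because the initial error here lies entirely in ℒ, the coupling term c_2‖P_He_k‖^2 is negligible and the observed ratios should remain bounded by one up to sampling noise.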
The rank-deficient case was analyzed in <cit.>.

By taking expectations on both sides of the estimates in Theorem <ref>, we obtain

𝔼[‖P_Le_k+1‖^2] ≤ (1 − c_1)𝔼[‖P_Le_k‖^2] + c_2𝔼[‖P_He_k‖^2],
𝔼[‖P_He_k+1‖^2] ≤ c_2𝔼[‖P_Le_k‖^2] + (1 + c_2)𝔼[‖P_He_k‖^2].

Then the error propagation is given by

(𝔼[‖P_Le_k‖^2]; 𝔼[‖P_He_k‖^2])^t ≤ D^k (‖P_Le_0‖^2; ‖P_He_0‖^2)^t,  D = (1−c_1, c_2; c_2, 1+c_2).

The pairs of eigenvalues λ_± and (orthonormal) eigenvectors v_± of D are given by

λ_± = (2 − c_1 + c_2 ± ((c_1+c_2)^2 + 4c_2^2)^1/2)/2,

and

v_± = ([((c_1+c_2)^2 + 4c_2^2)^1/2 ∓ (c_1+c_2)]^1/2 / (√2 ((c_1+c_2)^2 + 4c_2^2)^1/4)) (1; 2c_2/(((c_1+c_2)^2 + 4c_2^2)^1/2 ∓ (c_1+c_2)))^t.

For the case c_2 ≪ c_1 < 1, i.e., α = c_2/c_1 ≪ 1, we have

λ_+ = 1 + c_1(α + O(α^2))  and  λ_− = 1 − c_1(1 + O(α^2)),

and

v_+ ≈ (1+α^2)^−1/2 (−α; 1)^t,  v_− ≈ (1+α^2)^−1/2 (1; α)^t.

With V = [v_+ v_−], we have the approximate eigendecomposition, for k = O(1),

D^k ≈ V diag(1 + kαc_1, (1−c_1)^k) V^t.

Thus, for c_1 ≫ c_2, we have the following approximate error propagation for k = O(1):

𝔼[‖P_Le_k‖^2] ≈ (1−c_1)^k ‖P_Le_0‖^2 + α(1 − (1−c_1)^k)‖P_He_0‖^2,
𝔼[‖P_He_k‖^2] ≈ α(1 − (1−c_1)^k)‖P_Le_0‖^2 + (1 + kαc_1)‖P_He_0‖^2.

§.§ Noisy data

Next we turn to the case of noisy data b^δ, cf. (<ref>); we use the superscript δ to indicate the noisy case. Since b_i^δ = b_i + η_i, the RKM iteration reads

x_k+1 − x^* = x_k − x^* + (⟨a_i, x^*−x_k⟩/‖a_i‖^2) a_i + η_i a_i/‖a_i‖^2,

and thus the random error e_k+1 = x_k+1 − x^* satisfies

e_k+1 = (I − a_ia_i^t/‖a_i‖^2) e_k + η_i a_i/‖a_i‖^2.

Now we give our second main result, i.e., bounds on the errors 𝔼[‖P_Le_k+1‖^2 | e_k] and 𝔼[‖P_He_k+1‖^2 | e_k].

Let c_1 = σ_L^2/‖A‖_F^2 and c_2 = ∑_i=L+1^r σ_i^2/‖A‖_F^2. Then there hold

𝔼[‖P_Le_k+1‖^2 | e_k] ≤ (1 − c_1)‖P_Le_k‖^2 + c_2‖P_He_k‖^2 + δ^2/‖A‖_F^2 + 2√(c_2) δ ‖e_k‖/‖A‖_F,
𝔼[‖P_He_k+1‖^2 | e_k] ≤ c_2‖P_Le_k‖^2 + (1 + c_2)‖P_He_k‖^2 + δ^2/‖A‖_F^2 + 2√(c_2) δ ‖e_k‖/‖A‖_F.

By the recursive relation (<ref>), we have the splitting

𝔼[‖P_Le_k+1‖^2 | e_k] = I_1 + I_2 + I_3,

where the terms are given by (with e_L = P_Le_k and e_H = P_He_k)

I_1 = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2) ‖e_L − (⟨a_i,e_k⟩/‖a_i‖^2) P_La_i‖^2,
I_2 = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2) η_i^2 ‖P_La_i‖^2/‖a_i‖^4,
I_3 = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2) [(2η_i/‖a_i‖^2)⟨P_La_i, e_L⟩ − (2η_i/‖a_i‖^4)‖P_La_i‖^2 ⟨a_i,e_k⟩].

The first term I_1 can be bounded directly by Theorem <ref>. Clearly, I_2 ≤ δ^2/‖A‖_F^2. For the third term I_3, we note the splitting

⟨P_La_i, e_L⟩ − (‖P_La_i‖^2/‖a_i‖^2)⟨a_i,e_k⟩
= ((‖P_La_i‖^2 + ‖P_Ha_i‖^2)/‖a_i‖^2)⟨P_La_i, e_L⟩ − (‖P_La_i‖^2/‖a_i‖^2)(⟨P_La_i,e_L⟩ + ⟨P_Ha_i,e_H⟩)
= (‖P_Ha_i‖^2 ⟨P_La_i,e_L⟩ − ‖P_La_i‖^2 ⟨P_Ha_i,e_H⟩)/‖a_i‖^2 := I_3,i.

By the Cauchy-Schwarz inequality, we have

|I_3| ≤ (2/‖A‖_F^2) ‖η‖ (∑_i=1^n I_3,i^2)^1/2.

Direct computation yields

I_3,i^2 ≤ (‖P_Ha_i‖^2 ‖P_La_i‖^2/‖a_i‖^2) · (‖P_Ha_i‖^2‖e_L‖^2 + 2‖P_Ha_i‖‖P_La_i‖‖e_L‖‖e_H‖ + ‖P_La_i‖^2‖e_H‖^2)/(‖P_La_i‖^2 + ‖P_Ha_i‖^2)
≤ ‖P_Ha_i‖^2 (‖P_La_i‖^2 + ‖P_Ha_i‖^2)(‖e_L‖^2 + ‖e_H‖^2)/(‖P_La_i‖^2 + ‖P_Ha_i‖^2) = ‖P_Ha_i‖^2 ‖e_k‖^2.

Consequently, by Lemma <ref>, we obtain

|I_3| ≤ (2/‖A‖_F^2) δ (∑_i=L+1^r σ_i^2)^1/2 ‖e_k‖ = 2√(c_2) δ ‖e_k‖/‖A‖_F.

These estimates together show the first assertion.

For the high-frequency component P_He_k+1, we have

𝔼[‖P_He_k+1‖^2 | e_k] = I_4 + I_5 + I_6,

where the terms are given by

I_4 = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2) ‖e_H − (⟨a_i,e_k⟩/‖a_i‖^2) P_Ha_i‖^2,
I_5 = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2) η_i^2 ‖P_Ha_i‖^2/‖a_i‖^4,
I_6 = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2) [(2η_i/‖a_i‖^2)⟨P_Ha_i, e_H⟩ − (2η_i/‖a_i‖^4)‖P_Ha_i‖^2 ⟨a_i,e_k⟩].

The term I_4 can be bounded by Theorem <ref>. Clearly, I_5 ≤ δ^2/‖A‖_F^2. For the term I_6, note the splitting

⟨P_Ha_i, e_H⟩ − (‖P_Ha_i‖^2/‖a_i‖^2)⟨a_i,e_k⟩ = (1/‖a_i‖^2)(‖P_La_i‖^2 ⟨P_Ha_i,e_H⟩ − ‖P_Ha_i‖^2 ⟨P_La_i,e_L⟩),

and thus I_6 = −I_3.
This shows the second assertion, and completes the proof of the theorem.

Recall the following estimate for RKM <cit.>:

𝔼[‖e_k+1‖^2 | e_k] ≤ (1 − κ_A^−2)‖e_k‖^2 + δ^2/‖A‖_F^2.

In comparison, the estimate in Theorem <ref> is free from κ_A, but introduces an additional term 2√(c_2)δ‖e_k‖/‖A‖_F. Since c_2 is generally very small, this extra term is comparable with δ^2/‖A‖_F^2.

Theorem <ref> extends Theorem <ref> to the noisy case: if δ = 0, it recovers Theorem <ref>. It indicates that if the initial error e_0 = x_0 − x^* concentrates mostly on low frequencies, the iteration will first decrease the error. The smoothness assumption on the initial error e_0 is realistic for inverse problems, notably under the standard source-type conditions (for deriving convergence rates) <cit.>. Nonetheless, the deleterious influence of the noise will eventually kick in as the iteration proceeds.

One can discuss the evolution of the iterates for noisy data, similarly to Remark <ref>. By Young's inequality 2ab ≤ ϵa^2 + ϵ^−1b^2, the error satisfies (with c̄_1 = c_1 − ϵc_2 and c̄_2 = (1+ϵ)c_2)

𝔼[‖P_Le_k+1‖^2 | e_k] ≤ (1 − c̄_1)‖P_Le_k‖^2 + c̄_2‖P_He_k‖^2 + (1 + ϵ^−1)δ^2/‖A‖_F^2,
𝔼[‖P_He_k+1‖^2 | e_k] ≤ c̄_2‖P_Le_k‖^2 + (1 + c̄_2)‖P_He_k‖^2 + (1 + ϵ^−1)δ^2/‖A‖_F^2.

Then it follows that

(𝔼[‖P_Le_k‖^2]; 𝔼[‖P_He_k‖^2])^t ≤ D^k (‖P_Le_0‖^2; ‖P_He_0‖^2)^t + (1 + ϵ^−1)(δ^2/‖A‖_F^2)(I − D)^−1(I − D^k)(1; 1)^t,  D = (1−c̄_1, c̄_2; c̄_2, 1+c̄_2).

In the case c̄_2 ≪ c̄_1 < 1 and α = c̄_2/c̄_1 ≪ 1 (by choosing a sufficiently small ϵ), for k = O(1), repeating the analysis in Remark <ref> yields

𝔼[‖P_Le_k‖^2] ≈ (1−c̄_1)^k ‖P_Le_0‖^2 + α(1 − (1−c̄_1)^k)‖P_He_0‖^2 + k(1 + ϵ^−1)δ^2/‖A‖_F^2,
𝔼[‖P_He_k‖^2] ≈ α(1 − (1−c̄_1)^k)‖P_Le_0‖^2 + (1 + kαc̄_1)‖P_He_0‖^2 + k(1 + ϵ^−1)δ^2/‖A‖_F^2.

Thus, the presence of data noise influences the error of the RKM iterates only mildly, by an additive factor (kδ^2), during the initial iterations.

§ RKM WITH VARIANCE REDUCTION

When equipped with a proper stopping criterion, the Kaczmarz method is a regularization method <cit.>. Naturally, one would expect that this assertion holds also for RKM (<ref>)–(<ref>). This, however, remains to be proven, due to the lack of a proper stopping criterion. To see the subtlety, consider one natural choice, i.e., Morozov's discrepancy principle <cit.>: choose the smallest integer k such that

‖Ax_k − b^δ‖ ≤ τδ,

where τ > 1 is fixed <cit.>. Theoretically, it is still unclear whether (<ref>) can be satisfied within a finite number of iterations for every noise level δ > 0. In practice, computing the residual ‖Ax_k − b^δ‖ at each iteration is undesirable, since its cost is of the order of evaluating the full gradient, whereas avoiding the latter is the very motivation for RKM! Below we propose one simple remedy by drawing on the connection of RKM with stochastic gradient methods <cit.> and the vast related developments.

First we note that the solution to (<ref>) is equivalent to minimizing the least-squares functional

min_{x∈ℝ^m} { f(x) := (1/2n)∑_i=1^n |⟨a_i,x⟩ − b_i|^2 }.

Next we recast RKM as a stochastic gradient method for problem (<ref>), as noted earlier in <cit.>. We include a short proof for completeness.

The RKM iteration (<ref>)-(<ref>) is a (weighted) stochastic gradient update with a constant stepsize n/‖A‖_F^2.

With the weights w_i = ‖a_i‖^2, we rewrite problem (<ref>) into

(1/2n)∑_i=1^n (⟨a_i,x⟩ − b_i)^2 = (1/2n)∑_i=1^n (w_i/‖A‖_F^2)(‖A‖_F^2/w_i)(⟨a_i,x⟩ − b_i)^2 = ∑_i=1^n (w_i/‖A‖_F^2) f_i(x),  f_i(x) = (‖A‖_F^2/(2nw_i))(⟨a_i,x⟩ − b_i)^2.

Since ∑_i=1^n w_i = ‖A‖_F^2, we may interpret p_i = w_i/‖A‖_F^2 as a probability distribution on the set {1,…,n}, i.e., (<ref>). Next we apply the stochastic gradient method.
Since g_i(x) := ∇f_i(x) = (‖A‖_F^2/(nw_i))(⟨a_i,x⟩ − b_i)a_i, with a fixed step length η = n/‖A‖_F^2, we get

x_k+1 = x_k − w_i^−1(⟨a_i,x_k⟩ − b_i)a_i,

where i∈{1,…,n} is drawn i.i.d. according to (<ref>). Clearly, this is equivalent to RKM (<ref>)-(<ref>).

Now we give the mean and variance of the stochastic gradient g_i(x).

Let g(x) = ∇f(x). Then the gradient g_i(x) satisfies

𝔼[g_i(x)] = g(x),
Cov[g_i(x)] = (‖A‖_F^2/n^2)∑_i=1^n (⟨a_i,x⟩ − b_i)^2 a_ia_i^t/‖a_i‖^2 − (1/n^2)A^t(Ax − b)(Ax − b)^tA.

The full gradient g(x) := ∇f(x) at x is given by g(x) = (1/n)A^t(Ax − b). The mean 𝔼[g_i(x)] of the (partial) gradient g_i(x) is given by

𝔼[g_i(x)] = ∑_i=1^n (‖a_i‖^2/‖A‖_F^2)(‖A‖_F^2/(n‖a_i‖^2))(⟨a_i,x⟩ − b_i)a_i = (1/n)A^t(Ax − b).

Next, by the bias-variance decomposition, the covariance Cov[g_i(x)] of the gradient g_i(x) is given by

Cov[g_i(x)] = 𝔼[g_i(x)g_i(x)^t] − 𝔼[g_i(x)]𝔼[g_i(x)]^t
= (‖A‖_F^4/n^2)∑_i=1^n (‖a_i‖^2/‖A‖_F^2)(1/‖a_i‖^4)(⟨a_i,x⟩ − b_i)^2 a_ia_i^t − (1/n^2)A^t(Ax − b)(Ax − b)^tA
= (‖A‖_F^2/n^2)∑_i=1^n (⟨a_i,x⟩ − b_i)^2 a_ia_i^t/‖a_i‖^2 − (1/n^2)A^t(Ax − b)(Ax − b)^tA.

This completes the proof of the proposition.

Thus, the single gradient g_i(x) is an unbiased estimate of the full gradient g(x). For consistent linear systems, the covariance Cov[g_i(x)] is asymptotically vanishing: as x_k → x^*, both terms in the variance expression tend to zero. However, for inconsistent linear systems, the covariance Cov[g_i(x)] generally does not vanish at the optimal solution x^*:

Cov[g_i(x^*)] ≈ (‖A‖_F^2/n^2)∑_i=1^n (⟨a_i,x^*⟩ − b_i^δ)^2 a_ia_i^t/‖a_i‖^2,

since one might expect A^t(Ax^* − b^δ) ≈ 0. Further, Cov[g_i(x^*)] is of the order δ^2 in the neighborhood of x^*. One may predict the (asymptotic) dynamics of RKM via a stochastic modified equation derived from the covariance <cit.>. The RKM iteration eventually deteriorates due to the nonvanishing covariance, so that its asymptotic convergence slows down.

These discussions motivate the use of variance reduction techniques developed for stochastic gradient methods to reduce the variance of the gradient estimate. There are several possible strategies, e.g., stepsize reduction, the stochastic variance reduced gradient (SVRG), averaging, and mini-batching (see, e.g., <cit.>). We only adapt SVRG <cit.> to RKM, termed RKM with variance reduction (RKMVR), cf. Algorithm <ref> for details. It hybridizes the stochastic gradient with the (occasional) full gradient to achieve variance reduction. Here, s is the epoch length, which determines the frequency of full gradient evaluations and was suggested to be n <cit.>, and K is the maximum number of iterations. In view of Step 2, within the first epoch, the method performs only the standard RKM, and at the end of the epoch, it evaluates the full gradient. In RKMVR, the residual ‖Ax_k − b^δ‖ is a direct by-product of the full gradient evaluation and occurs only at the end of each epoch, and thus it does not invoke additional computational effort.

The update at Step 8 of Algorithm <ref> can be rewritten as (for k ≥ s)

x_k+1 = x_k + ⟨a_i, x̃ − x_k⟩ a_i/‖a_i‖^2 − (n/‖A‖_F^2)g̃,

and thus x̃ − x_k → 0 as the iteration proceeds, and it recovers the Landweber method. With this choice, the variance of the gradient estimate is asymptotically vanishing <cit.>. Numerically, Algorithm <ref> converges rather steadily. That is, it combines the strengths of RKM and the Landweber method: it retains the fast initial convergence of the former and the excellent stability of the latter.

§ NUMERICAL EXPERIMENTS AND DISCUSSIONS

Now we present numerical results for RKM and RKMVR to illustrate their distinct features.
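Since Algorithm <ref> is referenced but its pseudocode is not reproduced here, the following NumPy sketch records one possible reading of RKMVR; the snapshot update x̃ ← x at epoch ends, the epoch length s = n, and all function names are our assumptions, not the paper's exact protocol.

```python
import numpy as np

def rkmvr(A, bd, x0, n_epochs, tau_delta=None, rng=None):
    """SVRG-style randomized Kaczmarz with variance reduction (sketch).
    First epoch: plain RKM; afterwards the variance-reduced step, written
    in the form x <- x + <a_i, x_ref - x> a_i/||a_i||^2 - (n/||A||_F^2) g_ref.
    Optionally stops by the discrepancy principle ||Ax - b|| <= tau*delta,
    checked at epoch ends where the residual is a by-product of the full gradient."""
    n, m = A.shape
    row_norm2 = np.sum(A**2, axis=1)
    frob2 = row_norm2.sum()
    p = row_norm2 / frob2
    rng = rng or np.random.default_rng()
    x = x0.astype(float).copy()
    x_ref, g_ref = None, None              # snapshot x~ and full gradient g~
    for _ in range(n_epochs):
        for i in rng.choice(n, size=n, p=p):
            if g_ref is None:              # first epoch: plain RKM step
                x += (bd[i] - A[i] @ x) / row_norm2[i] * A[i]
            else:                          # variance-reduced step (Step 8, rewritten)
                x += (A[i] @ (x_ref - x)) / row_norm2[i] * A[i] - (n / frob2) * g_ref
        residual = A @ x - bd              # by-product of the full gradient
        g_ref = A.T @ residual / n         # g~ = grad f at the snapshot
        x_ref = x.copy()
        if tau_delta is not None and np.linalg.norm(residual) <= tau_delta:
            break
    return x
```

As x_ref − x → 0, the update degenerates into x ← x − (n/‖A‖_F^2) g_ref, i.e., a Landweber step, which is exactly the stabilizing behavior described above.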
All the numerical examples are taken from the public-domain package Regutools[Available from <http://www.imm.dtu.dk/~pcha/Regutools/>, last accessed on June 21, 2017]. They are Fredholm integral equations of the first kind, with the first example being mildly ill-posed and the last two severely ill-posed. Unless otherwise stated, the examples are discretized with dimension n = m = 1000. The noisy data b^δ is generated from the exact data b as

b^δ_i = b_i + δ max_j(|b_j|) ξ_i,  i = 1,…,n,

where δ is the relative noise level, and the random variables ξ_i follow an i.i.d. standard Gaussian distribution. The initial guess x_0 for the iterative methods is x_0 = 0. We present the squared error e_k and/or the squared residual r_k, i.e.,

e_k = 𝔼[‖x^* − x_k‖^2]  and  r_k = 𝔼[‖Ax_k − b^δ‖^2].

The expectation 𝔼[·] with respect to the random choice of the rows is approximated by the average of 100 independent runs. All the computations were carried out on a personal laptop with a 2.50 GHz CPU and 8.00 GB RAM using MATLAB 2015b.

§.§ Benefit of randomization

First we compare the performance of RKM with the cyclic Kaczmarz method (KM) to illustrate the benefit of randomization. Overall, the random reshuffling can substantially improve the convergence of KM, cf. the results in Figs. <ref>-<ref> for the examples with different noise levels.

Next we examine the convergence more closely. The (squared) error e_k of the Kaczmarz iterate x_k undergoes a sudden drop at the end of each cycle, whereas within the cycle, the drop after each Kaczmarz iteration is small. Intuitively, this can be attributed to the fact that neighboring rows of the matrix A are highly correlated with each other, and thus each single Kaczmarz iteration reduces the (squared) error e_k only very little, since it roughly repeats the previous projection. The strong correlation between neighboring rows is the culprit of the slow convergence of the cyclic KM. The randomization ensures that any two rows chosen by two consecutive RKM iterations are less correlated, and thus the iterations are far more effective in reducing the error e_k, leading to a much faster empirical convergence. These observations hold for both exact and noisy data. For noisy data, the error e_k first decreases and then increases for both KM and RKM, and the larger the noise level δ, the earlier the divergence occurs. That is, both exhibit a "semiconvergence" phenomenon typical for iterative regularization methods. Thus a suitable stopping criterion is needed. Meanwhile, the residual r_k tends to decrease, but for both methods it oscillates wildly for noisy data, and the oscillation magnitude increases with δ. This is due to the nonvanishing variance, cf. the discussion in Section <ref>. One surprising observation is that a fairly reasonable inverse solution can be obtained by RKM within one cycle of iterations. That is, ignoring all other cost, RKM can solve the inverse problems reasonably well at a cost less than one full gradient evaluation!

§.§ Preasymptotic convergence

Now we examine the convergence of RKM. Theorems <ref> and <ref> predict that during the first iterations, the low-frequency error e_L = 𝔼[‖P_Le_k‖^2] decreases rapidly, but the high-frequency error e_H = 𝔼[‖P_He_k‖^2] can at best decay mildly. For all examples, the first five singular vectors can capture the majority of the energy of the initial error x^* − x_0.
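This claim is easy to check directly from the SVD; the short sketch below (our own, with a hypothetical helper name) computes the fraction of the initial-error energy captured by the first L right singular vectors:

```python
import numpy as np

def captured_energy(A, x_true, x0, L):
    """Fraction of ||x* - x0||^2 carried by span{v_1, ..., v_L}."""
    _, _, Vt = np.linalg.svd(A)
    coeff = Vt @ (x_true - x0)          # coefficients <v_i, x* - x0>
    return (coeff[:L]**2).sum() / (coeff**2).sum()
```

A value close to one for L = 5 corresponds to the smoothness condition discussed above.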
Thus, we choose a truncation level L = 5, and plot the evolution of the low- and high-frequency errors e_L and e_H, and the total error e = 𝔼[‖e_k‖^2], in Fig. <ref>. Numerically, the low-frequency error e_L decays much more rapidly during the initial iterations, and since the low-frequency modes are dominant, the total error e also enjoys a very fast initial decay. Intuitively, this behavior may be explained as follows. The rows of the matrix A mainly contain low-frequency modes, and thus each RKM iteration tends mostly to decrease the low-frequency error e_L of the initial error x^* − x_0. The high-frequency error e_H experiences a similar but slower decay during the iteration, and then levels off. These observations fully confirm the preasymptotic analysis in Section <ref>. For noisy data, the error e_k can be highly oscillatory, and so is the residual r_k. The larger the noise level δ, the larger the oscillation magnitude. However, the degree of ill-posedness of the problem seems not to affect the convergence of RKM, so long as x^* is mainly composed of low-frequency modes.

To shed further insights, we present in Fig. <ref> the decay behavior of the low- and high-frequency errors for the example with a random solution whose entries follow the i.i.d. standard normal distribution. Then the source-type condition is not satisfied by the initial error. Now, with a truncation level L = 5, the low-frequency error e_L makes up only a small fraction of the initial error e_0. The low-frequency error e_L decays rapidly, exhibiting a fast preasymptotic convergence as predicted by Theorem <ref>, but the high-frequency error e_H stagnates during the iteration. Thus, in the absence of the smoothness condition on e_0, RKM is ineffective, thereby supporting Theorems <ref> and <ref>.

Naturally, one may divide the total error e into more than two frequency bands. The empirical behavior is similar to the case of two frequency bands; see Fig. <ref> for an illustration with four frequency bands. The lowest-frequency error e_1 decreases fastest, the next band e_2 slightly slower, etc. These observations clearly indicate that even though RKM does not employ the full gradient, the iterates are still mainly concerned with the low-frequency modes during the first iterations, like the Landweber method, in the sense that the low-frequency modes are much easier to recover than the high-frequency ones. However, the cost of each RKM iteration is only one nth of that of a Landweber iteration, and thus it is computationally much more efficient.

§.§ RKM versus RKMVR

The nonvanishing variance of the gradient g_i(x) slows down the asymptotic convergence of RKM, and the iterates eventually tend to oscillate wildly in the presence of data noise, cf. the discussion in Section <ref>. This is expected: the iterate converges to the least-squares solution, which is known to be highly oscillatory for ill-posed inverse problems. Variance reduction is one natural strategy to decrease the variance of the gradient estimate, thereby stabilizing the evolution of the iterates. To illustrate this, we compare the evolution of RKM with RKMVR in Fig. <ref>. We also include the results of the Landweber method (LM). To compare the iteration complexity only, we count one Landweber iteration as n RKM iterates. The epoch of RKMVR is set to n, the total number of data points, as suggested in <cit.>. Thus n RKMVR iterates include one full gradient evaluation, and amount to 2n RKM iterates.
The full gradient evaluations are indicated by flat segments in the plots. With the increase of the noise level δ, RKM first decreases the error e_k and then increases it, which is especially pronounced at δ = 5×10^−2. This is well reflected by the large oscillations of the iterates. RKMVR tends to stabilize the iteration greatly by removing the large oscillations, and thus its asymptotic behavior closely resembles that of LM. That is, RKMVR inherits the good stability of LM while retaining the fast initial convergence of RKM. Thus the stopping criterion, though still needed, is less critical for RKMVR, which is very beneficial from a practical point of view. In summary, the simple variance reduction scheme in Algorithm <ref> can combine the strengths of both worlds.

Last, we numerically examine the regularizing property of RKMVR with the discrepancy principle (<ref>). In Fig. <ref>, we present the number of iterations for several noise levels for RKMVR (one realization) and LM. For both methods, the number of iterations selected by the discrepancy principle (<ref>) appears to decrease with the noise level δ, and RKMVR consistently terminates much earlier than LM, indicating the efficiency of RKMVR. The reconstructions in Fig. <ref>(d) show that the error increases with the noise level δ, indicating a regularizing property. In contrast, in the absence of the discrepancy principle, the RKMVR iterates eventually diverge as the iteration proceeds, cf. Fig. <ref>.

§ CONCLUSIONS

We have presented an analysis of the preasymptotic convergence behavior of the randomized Kaczmarz method. Our analysis indicates that the low-frequency error decays much faster than the high-frequency one during the initial randomized Kaczmarz iterations. Thus, when the low-frequency modes are dominant in the initial error, as typically occurs for inverse problems, the method enjoys a very fast initial error reduction. This result sheds insights into the excellent practical performance of the method, which is also numerically confirmed. Next, by interpreting it as a stochastic gradient method, we proposed a randomized Kaczmarz method with variance reduction by hybridizing it with the Landweber method. Our numerical experiments indicate that the strategy is very effective in that it can combine the strengths of both the randomized Kaczmarz method and the Landweber method.

Our work represents only a first step towards a complete theoretical understanding of the randomized Kaczmarz method and related stochastic gradient methods (e.g., variable step size and mini-batch versions) for efficiently solving inverse problems. There are many important theoretical and practical questions awaiting further research. Theoretically, one outstanding issue is the regularizing property (e.g., consistency, stopping criterion and convergence rates) of the randomized Kaczmarz method from the perspective of classical regularization theory.

§ ACKNOWLEDGEMENTS

The authors are grateful for the constructive comments of the anonymous referees, which have helped improve the quality of the paper. In particular, the remark by one of the referees has led to much improved results as well as more concise proofs. The research of Y. Jiao is partially supported by National Science Foundation of China (NSFC) No. 11501579 and National Science Foundation of Hubei Province No. 2016CFB486, B. Jin by EPSRC grant EP/M025160/1 and UCL Global Engagement grant (2016–2017), and X. Lu by NSFC Nos. 11471253 and 91630313.

| http://arxiv.org/abs/1706.08459v2 | {
"authors": [
"Yuling Jiao",
"Bangti Jin",
"Xiliang Lu"
],
"categories": [
"math.NA",
"math.OC"
],
"primary_category": "math.NA",
"published": "20170626162711",
"title": "Preasymptotic Convergence of Randomized Kaczmarz Method"
} |
^1Frankfurt Institute for Advanced Studies and ITP, J.W. Goethe University, D-60438 Frankfurt am Main, Germany
^2Institute for Nuclear Research, Russian Academy of Sciences, 117312 Moscow, Russia
^3John von Neumann Institut für Computing (NIC), Jülich Supercomputing Centre, FZ Jülich, D-52425 Jülich, Germany

Recent experiments at RHIC and LHC have demonstrated that there are excellent opportunities to produce light baryonic clusters of exotic matter (strange and anti-matter) in ultra-relativistic ion collisions. Within the hybrid-transport model UrQMD we show that the coalescence mechanism can naturally explain the production of these clusters in the ALICE experiment at LHC. As a consequence of this mechanism we predict the rapidity domains where the yields of such clusters are much larger than the one observed at midrapidity. This new phenomenon can lead to unique methods for producing exotic nuclei.

25.75.-q, 25.75.Dw, 25.75.Ld, 21.80.+a

Formation of exotic baryon clusters in ultra-relativistic heavy-ion collisions
A.S. Botvina^1,2, J. Steinheimer^1, M. Bleicher^1,3
December 30, 2023

§ INTRODUCTION

In relativistic nuclear collisions an abundance of new particles consisting of all kinds of quark and anti-quark flavors is produced. During the late stage of the collision these particles can interact in secondary processes and produce novel clusters containing several baryons. In this case, promising studies of fragmentation reactions probing the limits in isospin space of light nuclei, exotic nuclear states, anti-nuclei, and multiple strange nuclei are feasible. Recently, very encouraging results on the formation of exotic clusters have come from experiments at relativistic colliders: for example, STAR at RHIC <cit.> and ALICE at LHC <cit.> have observed hyper-tritons and anti-hyper-tritons. Experimental programs to search for heavier exotic nuclear species are now underway <cit.>. Therefore, a theoretical understanding of these phenomena is necessary.

Transport models have been used to successfully describe many observables, including strangeness production at intermediate energies <cit.>. At very high energy most of the state-of-the-art hybrid models apply a hydrodynamical expansion of the hot and dense matter and a subsequent microscopic transport approach to describe the hadronic rescattering (see, e.g., for UrQMD, Ref. <cit.>). In the framework of microscopic transport models a coalescence prescription for the formation of the composite clusters can be naturally applied <cit.>. In this paper we demonstrate the effectiveness of the transport-plus-coalescence approach for the description of data at LHC energies. Important predictions for the future research of baryon clusters in ultra-relativistic heavy-ion collisions are also presented.

§ MODELS FOR PRODUCTION OF LIGHT CLUSTERS AT RELATIVISTIC COLLISIONS

Thermal and coalescent mechanisms to produce complex nuclei in high energy collisions have been discussed in previous works (see, e.g., <cit.>). The thermal models allow for a good description of the particle production yields, for example, in the most central collisions <cit.>. For this reason we believe that the produced particles widely populate the available reaction phase space, and this should be taken into account in any interpretation of the data. Only the lightest clusters, with mass numbers A ≲ 3–4, can be noticeably produced in this case because of the very high temperature of the fireball (T≈160 MeV).
However, the pure thermal models cannot describe the energy spectra of particles and their flows. Also, in non-central collisions the dynamics and secondary interactions in the projectile and target residues will influence the production of nucleon clusters (fragments). As was shown, the thermal and coalescence descriptions are naturally connected: in particular, there is a relation between the coalescence parameter, the density, the temperature, and the binding energies of the produced clusters <cit.>. In the following we consider the dynamical transport and coalescence mechanisms, because they have predictive power for many observables. There have also been numerous discussions that even in central collisions at very high energy the coalescence mechanism, which assembles light fragments from the produced hyperons and nucleons (including anti-baryons), may be essential <cit.>.

The first reaction step should be the dynamical production of baryons which can later be accumulated into clusters. The transport model Ultrarelativistic Quantum Molecular Dynamics (UrQMD) is quite successful in the description of a large body of data. In the standard formulation <cit.> the model involves string formation and fragmentation according to the PYTHIA model for individual hard hadron collisions. The current versions of UrQMD include up to 70 baryonic species (including their anti-particles), as well as up to 40 different mesonic species, which participate in binary interactions. This work is focused on very high energies and we employ the UrQMD transport model <cit.> in the hybrid mode for the description of the dynamical evolution in central collisions. In this mode the propagation is composed of an ideal 3+1d fluid dynamical description for the dense phase, which mainly consists of a strongly interacting quark gluon plasma (QGP). The event-by-event initial state for the fluid dynamical evolution is calculated using the PYTHIA version implemented in the UrQMD model, where the starting time of the fluid dynamical evolution is set to τ_0=0.5 fm/c. The equation of state, which governs the dynamical evolution, has been discussed in detail in Ref. <cit.> and describes the transition from a hadronic system to the QGP as a smooth crossover at low baryon densities. Once the system dilutes, and the fluid dynamical description is no longer valid, the propagated fields are transformed into particles via a sampling of the Cooper-Frye equation <cit.>. Here we explicitly conserve the net-baryon number, net-electric-charge, and net-strangeness as well as the total energy and momentum. After this transition all hadrons continue their evolution and may interact via the hadronic cascade part of the UrQMD model. This dynamical decoupling takes on the order of 10–20 fm/c and has a significant influence on the observed hadron multiplicities <cit.> and spectra <cit.>, which is strongest for the most central collisions. Consequently, it has been shown that this model reasonably describes the hadron spectra observed by the ALICE collaboration <cit.>, in particular the proton spectra, which are essential for the study of nuclei production.

The advantage of the Monte-Carlo transport final-state description is that it provides event-by-event simulations of the baryon production. This is important for investigating correlation phenomena. The coalescence procedure is ideal for the description of the baryon accumulation into clusters on an event-by-event basis.
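As an illustration of such an event-by-event coalescence pass, the sketch below clusters the baryons of one event by velocity proximity; the cut v_c anticipates the criterion quantified in the next paragraph, and the single-linkage merging, the neglect of coordinate-space proximity, and all names are our simplifying assumptions rather than the actual CB implementation.

```python
import numpy as np

def coalesce(velocities, v_c):
    """Cluster baryons of one event: two baryons are joined into the same
    cluster if their velocity difference is below v_c (in units of c).
    `velocities` is an (N, 3) array of baryon velocities; returns cluster labels.
    Note: the real criterion also uses coordinate-space proximity, and whether
    proximity is required for all pairs in a cluster or only chain-wise is a
    modeling choice; single-linkage merging is used here for simplicity."""
    n = len(velocities)
    labels = np.arange(n)                      # each baryon starts as its own cluster
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(velocities[i] - velocities[j]) < v_c:
                old, new = labels[j], labels[i]
                labels[labels == old] = new    # merge the two clusters
    return labels
```

Cluster multiplicities (A = 2, 3, 4, …) then follow from the label counts per event.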
It was shown before that the coalescence criterion, which uses the proximity of baryons in momentum and coordinate space, is very effective in the description of light nucleon fragments at intermediate energies <cit.>. After the dynamical stage described by the UrQMD model we apply a generalized version of the coalescence model <cit.> for the coalescence of baryons (CB). In such a way it is possible to form primary fragments of all sizes, from the lightest nuclei to heavy residues, including hypernuclei and other exotics, within the same mechanism. It was previously found <cit.> that the optimal time for applying the coalescence (as a final-state interaction) is around 40–50 fm/c after the initial collisions of the heavy ions, when the rate of individual inelastic hadron interactions decreases very rapidly. A variation of the time within this interval leads to an uncertainty in the yield of around 10% for a fixed coalescence parameter. This is essentially smaller than the uncertainty in the coalescence parameter itself. The most important CB parameter is the maximum difference v_c allowed between the velocities of baryons in a coalescing cluster. v_c should be around v_c ≈ 0.1c for the lightest clusters, to be consistent with their binding energy. This value is also supported by a comparison to experimental data at energies around 1–10 A GeV <cit.>.

We should note that our formulation of the coalescence model is microscopic; it therefore takes into account all correlations and fluctuations of the particle production during the dynamical stage. For this reason we need a smaller coalescence parameter to describe the data than the parameters obtained in the analytical formulation of coalescence <cit.>. In principle, coalescence into clusters with A > 4 is also possible; however, these heavy clusters are expected to be excited, and their subsequent decay can be described with statistical models <cit.>. Usually such big primary fragments can be produced only in peripheral collisions from nuclear residues in the projectile and target rapidity region <cit.>. The advantage of the sequential approach (dynamics + coalescence + statistical decay) is the possibility to predict the correlations and fluctuations of the yields of all nuclei, including their sizes, with the rapidity and with other produced particles. However, in the midrapidity region, because of the very large energy deposition, we expect the formation of small clusters only. In the following we concentrate on the LHC heavy-ion reactions, and on the latest results on light cluster production obtained by the ALICE collaboration <cit.>.

§ COMPARISON WITH EXPERIMENTAL DATA

We start with an analysis of the particle spectra as observed in the experiments. In Fig. 1 we show experimental data on the transverse momentum distributions of protons, deuterons, and ^3He particles measured at LHC by the ALICE group <cit.>. The collisions of ^208Pb on ^208Pb have been performed at a center-of-mass energy of √(s)=2.76 TeV per nucleon. The yields in Fig. 1, obtained for central events (top 20% of the maximum particle multiplicity), are normalized to the number of events. The rapidity range for detected particles was y = −0.5 to +0.5 in the center-of-mass system.
This data presentation provides consistent information on the yields and distributions of produced particles, needed for the verification of our models. The experimental data are given by symbols inside boxes representing the systematic uncertainties, which are usually larger than the statistical ones. The statistical error bars are given if they are larger than the symbol sizes.

The UrQMD hybrid calculations (including the hydrodynamical evolution of matter) with the subsequent CB calculations are shown by the lines. The different line styles depict a variation of the coalescence parameter v_c by 40%. It is important that the proton spectra can be reproduced very well with UrQMD, since in the coalescence approach the yields of all clusters depend crucially on the baryon distributions. We note that the yields at very high transverse momenta P_T > 3–4 A GeV are possibly dominated by jets, which are not currently included in the hydrodynamical evolution of the system. Therefore, we limit the fragments under study to P_T ≲ 2–3 GeV per nucleon. One can see that the spectra of deuterons (^2H) and helium-3 (^3He) can be reasonably described with the coalescence parameters v_c=0.07c and v_c=0.1c, respectively. The larger value of v_c for ^3He is consistent with the larger binding energy of ^3He in comparison with ^2H. We note that the effect of the v_c parameter is considerably larger for large clusters. We could get a better agreement by tuning the coalescence parameters; however, this kind of phenomenological fitting is beyond the scope of our theoretical study. It is more important that the form of the distributions is independent of v_c over a wide range and corresponds to the experimental distributions. This gives us confidence to claim that coalescence can naturally describe the production of these clusters.

Another verification of the coalescence mechanism should come from the angular distributions of the produced particles and their correlations with respect to the reaction plane. We note that the angular (azimuthal) distribution of produced particles in the plane perpendicular to the beam axis is anisotropic, with the corresponding maximum in the reaction plane. That is an expected consequence of the dynamical emission in such high energy collisions. A very informative observable is the elliptic flow v_2. Sometimes it is difficult to extract the reaction plane in the experiment, because of particle fluctuations in the collision events. In this case, particle correlation methods are used <cit.>. For the present calculations we employ the reaction plane method in each collision, and, therefore, we can find v_2 for all particles by averaging their momenta perpendicular to the beam axis:

v_2 = ⟨ (P_x^2 − P_y^2)/P_T^2 ⟩,

where P_x is the momentum in the reaction plane, and P_T^2 = P_x^2 + P_y^2 is the squared transverse momentum. The averaging is done over all events containing these particles. It was shown that the reaction plane method provides results compatible with higher-order event plane (correlation) methods <cit.>. Therefore, the trend of v_2 versus P_T should be a solid observable for comparison.

We present v_2 measured by ALICE for protons in ^208Pb on ^208Pb reactions at √(s)=2.76 A TeV for semi-central collisions <cit.> in Fig. 2. The semi-central events, which cover a centrality domain from 30% to 40% of the total particle multiplicity distribution, were used in this analysis. The UrQMD + CB calculations were performed under the same conditions for both protons and deuterons. One can see that the calculations describe the proton data well, and they predict a rather different behaviour of v_2 versus P_T for deuterons.
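For concreteness, the reaction-plane average defining v_2 can be evaluated from simulated momenta as follows (a schematic sketch with assumed array layouts; P_x is taken along the reaction plane, which is known exactly in the simulation):

```python
import numpy as np

def elliptic_flow(px, py, pt_edges):
    """v_2(P_T) = <(P_x^2 - P_y^2)/P_T^2>, with P_x in the reaction plane.
    px, py: momenta of all accepted particles over all events; returns the
    average in each transverse-momentum bin (nan for empty bins)."""
    pt2 = px**2 + py**2
    weights = (px**2 - py**2) / pt2
    bins = np.digitize(np.sqrt(pt2), pt_edges) - 1
    return np.array([weights[bins == b].mean() if np.any(bins == b) else np.nan
                     for b in range(len(pt_edges) - 1)])
```

For a cluster of mass number A, calling this routine with its momenta and rescaling to v_2/A versus P_T/A reproduces the 'scaled' comparison of Fig. 2.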
However, our calculations lead to an interesting result: namely, if we plot v_2/A versus P_T/A for protons (A=1) and deuterons (A=2), the curves overlap. Such a 'scaled' curve for deuterons is demonstrated by the short-dashed line in Fig. 2. This kind of 'scaling' of v_2 in the coalescence mechanism can be easily explained by the averaging procedure over the produced particles: when the individual nucleons have nearly the same momenta, the expression (P_x^2−P_y^2)/P_T^2 does not change after their clustering. However, the number of nucleons is A times larger than the number of clusters. The observation of such a coalescence scaling in experiments could be an additional verification of a pure coalescence mechanism. It is interesting that a scaling behavior has been observed in experiments <cit.>, however, in the elliptic flow of hadrons, by taking into account the number of constituent quarks. Quark coalescence was also discussed theoretically <cit.>. We believe this effect should be much stronger in our case of light nuclei, since the nuclear binding energy is much smaller than the nucleon masses, and the scaling itself depends on the coalescence parameter rather weakly.

§ PREDICTIONS FOR THE CLUSTER RAPIDITIES

Due to the technical realization of the UrQMD hybrid model <cit.> we have presented coalescence results from this model in the midrapidity range of central and semi-central Pb+Pb collisions only. It is however instructive to study also the rapidity dependence of cluster production, including peripheral collisions, as the properties of the produced systems may change essentially with rapidity. For example, special properties of nuclear matter near the hadron fragmentation region were discussed long ago <cit.>. It was suggested that the hadron matter of this region would have a significant non-zero net baryon number and high density <cit.>. Also, the transverse momentum distributions of produced particles and fragmented nucleons may be different, which can have a significant effect on the formation of nuclei. In order to make an estimate of the rapidity distribution of nuclear clusters at the LHC, we use the standard (cascade) version of UrQMD to generate baryons and their momenta. These are then used to form nuclei and hypernuclei via the CB model as described above. One should note here that the proton distributions fall more steeply with transverse momentum than was obtained in the hybrid version with hydrodynamics. Nevertheless, the main mechanisms of the particle production related to the secondary interactions remain the same, and the total particle and anti-particle yields are close. For this reason, the trends characterizing the modification of the baryon momentum distributions with rapidity will be similar in both versions.

We now demonstrate predictions of the coalescence approach which are important for further investigations of cluster formation in ultra-relativistic nuclear collisions. We have performed the UrQMD + CB calculations for all impact parameters with the minimum-bias prescription. Fig. 3 shows the full rapidity distributions of baryons and of the composite fragments of all sizes obtained from them. For comparison, the top panel is for normal particles, and the bottom one is for anti-particles. We also give separately the fragments built from nucleons only and the hyper-fragments, which include hyperons. We should note that in this and the following figures we do not show the spectator nucleons and the normal clusters composed of nucleons with rapidities around the projectile/target rapidity (i.e., with |y| ≈ 8).
Slow participant nucleons may exist in this region and form clusters within the coalescence model. However, the full consideration requires a detailed description of the excitation and de-excitation (via particle emission) of the spectator residues, which is beyond the present paper. Moreover, these clusters can hardly be measured in present experiments because of their very high rapidities. For clarity, we have demonstrated results for one coalescence parameter, v_c = 0.1, which is reasonable for the description of the data (Fig. 1).

One can see a very broad distribution of the produced baryons in rapidity. At such a high energy nearly the same amounts of normal baryons and anti-baryons are present at central rapidities. Broad rapidity distributions of the yields have already been discussed at intermediate collision energies <cit.>. It is seen from Fig. 3 that the production maximum for all composite fragments is shifted from midrapidity to the forward and backward regions. In our case the wide maxima are located at center-of-mass rapidities around +4 or −4. The reason for this phenomenon lies in the many secondary interactions and the energy loss during the hadron diffusion from midrapidity. An essential part of these interactions takes place between the newly produced species and the nucleons of the projectile and target which did not interact at early times of the reaction. For this reason, both the energies and the relative momenta of the newly produced baryons become smaller; therefore, it is easier for them to coalesce into a cluster. As a result of such processes, the low-energy products mainly populate the phase space far from midrapidity. As another consequence of these secondary interactions we have found that the transverse momentum distributions of the produced particles decrease versus P_T more rapidly around |y| ≈ 4 than at |y| ≈ 0. Actually, the intensive interactions resemble a thermalization process; therefore, under some conditions thermal models and phenomenologies may be applied to describe a few characteristics of these reactions. In this respect, one can understand our results by assuming that the 'kinetic temperature' of baryons at midrapidity is much higher than this 'temperature' far from it. Therefore, the region outside midrapidity contributes most strongly to the cluster production.

A more detailed picture of the light fragment production is given in Fig. 4. The top panel demonstrates the rapidity distributions of normal particle yields with mass (i.e., baryon) numbers A=2, A=3, and A=4. In this case all possible combinations of baryons (including both nucleons and hyperons) are taken into account, in order to understand the influence of coalescence in general. One can see that the yield suppression of big fragments is much larger at midrapidity than in the region of the maximum fragment yield (at |y| ≈ 4). For this reason the exploration of heavy clusters is more promising at rapidities shifted away from midrapidity. This conclusion looks unexpected, since more energy is deposited in central collisions at midrapidity. The reason lies in the coalescence mechanism: the constituents should not only be produced, they should also have sufficiently low relative velocities to be bound into a cluster.

In the bottom panel of Fig. 4 we show the yields, versus rapidity, of selected particle clusters which can be easily identified in experiment: deuterons (^2H), tritons (^3H), and hyper-tritons (^3_ΛH). The distributions resemble the same structure as discussed previously.
One can clearly see from the figure that the yield ratio of ^2H to ^3H is around 800 at midrapidity. Note that all calculations in this figure are performed for the coalescence parameter v_c = 0.1c, which slightly overestimates the deuteron production. Actually, this production and the corresponding ratio will be decreased by a factor of 2 when we take the more realistic v_c = 0.07c, as is clear from Fig. 1. However, one can see that even in the analysed case the deuteron-to-triton ratio decreases to around 60 at |y| ≈ 4. It is also natural that the yields of ^3H and ^3_ΛH are very close, since at such high energies new nucleons and hyperons are produced with similar probability in elementary hadron interactions. The analysis tells us that the region in between the projectile/target rapidity and the center-of-mass rapidity is most favorable for the production of complex clusters consisting of newly produced baryons. We believe that experiments should take this phase-space structure into account when searching for novel exotic nuclear species (including anti-nuclei). In relativistic heavy-ion collisions, besides the recently observed ^3_ΛH nuclei <cit.>, other exotics (like ΛN, ΛNN) have been under intensive discussion <cit.>. The extension of measurements into a new rapidity region will increase the yields of clusters in the data substantially. It was shown in the LHCb experiments <cit.> that not only the midrapidity region but also particles with rapidities around |y| ≈ 4 can be detected with a special detector set-up, even at ultra-relativistic energies.

§ CONCLUSION

It was demonstrated that the coalescence process is very important for the production of light baryonic clusters in ultrarelativistic nuclear collisions. We have shown that it is possible to describe the spectra of the composite clusters measured by ALICE at LHC within our UrQMD + CB approach. We emphasized that the scaling of the elliptic flow of these particles may indicate the dominance of the coalescence mechanism. The extension of the coalescence results beyond central collisions demonstrates that the maximum yields of such clusters are not located at midrapidity. They are essentially shifted toward the target and projectile rapidities. This effect reflects the importance of the secondary interaction processes, which lead to a considerable baryon production with low relative momenta. It may also be correlated with the emergence of the hadron fragmentation region. Such a new production phenomenon is especially important for forming large clusters. The yields of such clusters can be increased by many orders of magnitude when going to the forward/backward region in comparison with the midrapidity zone. Here the formation of relatively big exotic, hyper- and anti-nuclei becomes very prominent and promising for future research, as it could provide a unique possibility to study novel nuclear species.

A.S. Botvina acknowledges the support of BMBF (Germany). M. Bleicher thanks the COST Action THOR for support. The authors thank B. Dönigus for stimulating discussions.

[star] The STAR Collaboration, Science 328, 58 (2010).
[ygma-nufra] Y.-G. Ma (for the STAR/RHIC Collaboration), talk at the NUFRA2013 conference, Kemer, Turkey, 2013, http://fias.uni-frankfurt.de/historical/nufra2013/.
[alice-3LH] B. Dönigus et al. (ALICE Collaboration), Nucl. Phys. A904-905, 547c (2013).
[camerini-nufra] P. Camerini (for the ALICE/LHC Collaboration), talk at the NUFRA2013 conference, Kemer, Turkey, 2013, http://fias.uni-frankfurt.de/historical/nufra2013/.
[YGMa13] Y.G. Ma et al., arXiv:1301.4902 (2013).
[Don15] B. Dönigus (for the ALICE Collaboration), EPJ Web Conf. 97, 00013 (2015).
Don15 B. Dönigus for the ALICE collaboration, EPJ Web Conf. 97, 00013 (2015). Bra04 E.L. Bratkovskaya et al., Phys. Rev. C 69, 054907 (2004). Har12 C. Hartnack et al., Phys. Rep. 510, 119 (2012). Bas98 S.A. Bass et al., Prog. Part. Nucl. Phys. 41, 225 (1998). Ble99 M. Bleicher et al., J. Phys. G 25, 1859 (1999). Bot17 A.S. Botvina et al., Phys. Rev. C 95, 014902 (2017). Steinheimer:2015msa J. Steinheimer and M. Bleicher, EPJ Web Conf. 97, 00026 (2015). Steinheimer:2017vju J. Steinheimer, J. Aichelin, M. Bleicher and H. Stöcker, arXiv:1703.06638 [nucl-th]. Ton83 V.D. Toneev and K.K. Gudima, Nucl. Phys. A400, 173c (1983). Gyu83 M. Gyulassy, K. Frankel and E.A. Remler, Nucl. Phys. A402, 596 (1983). Nag96 J.L. Nagle et al., Phys. Rev. C 53, 367 (1996). Bot15 A.S. Botvina et al., Phys. Lett. B 742, 7 (2015). And11 A. Andronic, P. Braun-Munzinger, J. Stachel and H. Stöcker, Phys. Lett. B 697, 203 (2011). Ste12 J. Steinheimer, K. Gudima, A. Botvina, I. Mishustin, M. Bleicher and H. Stöcker, Phys. Lett. B 714, 85 (2012). And13 A. Andronic, P. Braun-Munzinger, K. Redlich and J. Stachel, Nucl. Phys. A 904-905, 535c (2013). Sta14 J. Stachel, A. Andronic, P. Braun-Munzinger and K. Redlich, Journal of Phys.: Conf. Ser. 509, 012019 (2014). Neu03 W. Neubert and A.S. Botvina, Eur. Phys. J. A 17, 559 (2003). Petersen:2008dd H. Petersen, J. Steinheimer, G. Burau, M. Bleicher and H. Stöcker, Phys. Rev. C 78, 044901 (2008). Steinheimer:2011ea J. Steinheimer, S. Schramm and H. Stöcker, Phys. Rev. C 84, 045208 (2011). Cooper:1974mv F. Cooper and G. Frye, Phys. Rev. D 10, 186 (1974). Becattini:2012xb F. Becattini, M. Bleicher, T. Kollegger, T. Schuster, J. Steinheimer and R. Stock, Phys. Rev. Lett. 111, 082302 (2013). Becattini:2016xct F. Becattini, J. Steinheimer, R. Stock and M. Bleicher, Phys. Lett. B 764, 241 (2017). gut76 H.H. Gutbrod et al., Phys. Rev. Lett. 37, 667 (1976). Bot07 A.S. Botvina and J. Pochodzalla, Phys. Rev. C 76, 024909 (2007). Bot13 A.S. Botvina, K.K. Gudima and J. Pochodzalla, Phys. Rev. C 88, 054605 (2013). alice13 J. Abelev et al. (ALICE collaboration), Phys. Rev. C 88, 044910 (2013). alice16 J. Adam et al. (ALICE collaboration), Phys. Rev. C 93, 024917 (2016). alice-flow The ALICE collaboration, Journal of High Energy Physics 06, 190 (2015). Zhu05 X. Zhu, M. Bleicher and H. Stöcker, Phys. Rev. C 72, 064911 (2005). Mol03 D. Molnar and S.A. Voloshin, Phys. Rev. Lett. 91, 092301 (2003). Ani80 R. Anishetty, P. Koehler and L. McLerran, Phys. Rev. D 22, 2793 (1980). Li:2017tcd M. Li and J. I. Kapusta, arXiv:1702.00116 [nucl-th]. Ant16 T. Anticic et al., Phys. Rev. C 94, 044906 (2016). hyphi-lnn C. Rappold et al., Phys. Rev. C 88, 041001(R) (2013). LHCb Y. Zhang for the LHCb collaboration, arXiv:1605.07509 (2016). | http://arxiv.org/abs/1706.08335v1 | {
"authors": [
"A. S. Botvina",
"J. Steinheimer",
"M. Bleicher"
],
"categories": [
"nucl-th",
"hep-ph",
"nucl-ex"
],
"primary_category": "nucl-th",
"published": "20170626121350",
"title": "Formation of exotic baryon clusters in ultra-relativistic heavy-ion collisions"
} |
http://arxiv.org/abs/1706.09011v1 | {
"authors": [
"Raul J Mondragon",
"Jacopo Iacovacci",
"Ginestra Bianconi"
],
"categories": [
"physics.soc-ph",
"cs.SI"
],
"primary_category": "physics.soc-ph",
"published": "20170627185655",
"title": "Multilink Communities of Multiplex Networks"
} |
|
CCSS, CTP and Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea CCSS, CTP and Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea Center for Network Science, Central European University, Budapest, Hungary Department of Theoretical Physics, Budapest University of Technology and Economics, Budapest, Hungary [email protected] CCSS, CTP and Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea

The two-step contagion model is a simple toy model for understanding pandemic outbreaks that occur in the real world. The model takes into account that a susceptible person either gets immediately infected or weakened when getting into contact with an infectious one. As the number of weakened people increases, they can eventually become infected within a short time period, and a pandemic outbreak occurs. The time required to reach such a pandemic outbreak allows for intervention and is often called the golden time. Understanding the size dependence of the golden time is useful for controlling pandemic outbreaks. Here we find that there exist two types of golden times in the two-step contagion model, which scale as O(N^{1/3}) and O(N^ζ) with the system size N on Erdős-Rényi networks, where the measured ζ is slightly larger than 1/4. They are distinguished by the initial number of infected nodes, o(N) and O(N), respectively. While the exponent 1/3 of the N-dependence of the golden time is universal even in other models showing discontinuous transitions induced by cascading dynamics, the measured ζ exponents are all close to 1/4 but show model dependence. It remains open whether or not ζ reduces to 1/4 in the asymptotic large-N limit.

Two golden times in two-step contagion models B. Kahng December 30, 2023 =============================================

§ INTRODUCTION

Epidemic spread of diseases and rumors, and their control and containment, have become a central issue in recent years as the real world becomes "smaller." It is a general observation that there is a slow phase in the spreading process before the sudden pandemic outbreak <cit.>. This slow period is called the golden time, as it allows for intervention, which is much more difficult after the disease becomes global. Modeling of epidemic spread with its essential factors is necessary to control catastrophic outbreaks within this golden time. To this end, several epidemic models have been investigated on complex networks, for instance, the susceptible–infected–removed (SIR) model <cit.> and the susceptible–infected–susceptible (SIS) model <cit.>. Analytical and numerical studies of those models revealed that a continuous phase transition occurs on Erdős-Rényi (ER) random networks <cit.>. Thus, abrupt pandemic outbreaks on a macroscopic scale, which often occur in the real world, cannot be reproduced using those models. Considerable effort has been devoted recently to constructing mathematical models that exhibit a discontinuous epidemic transition at a finite transition point on complex networks. A natural way is to appropriately extend the conventional SIR and SIS models. For instance, an extended SIR model includes more than one infected state of different pathogens that are cooperatively activated in contagion: a person who is suffering from the flu can be more easily infected by pneumonia. This model is referred to as a cooperative contagion model <cit.>. Similar instances include a two-step contagion process: a patient becomes weakened first and then becomes sick.
This model is referred to as the susceptible–weakened–infected–removed (SWIR) model <cit.>. In another instance of modified SIR models, a network evolves by rewiring links at a certain rate during the spread of the contagion <cit.>. The rewiring takes into account the mobility of humans. Epidemic spread can then be accelerated as the rewiring rate is increased, which can lead to a discontinuous transition representing the pandemic outbreak. When diseases spread, we need to keep susceptible people separate from infected patients, or vaccinate the susceptible people, before the diseases spread on a macroscopic level. A recent study <cit.> showed that for the SWIR model on ER networks, the system exhibits a long latent period (called a golden time) within which measures can be taken, beyond which the disease spreads explosively over the system at a macroscopic level. Estimating the golden time is important for the prevention of pandemic outbreaks. Moreover, it is necessary to get early-warning signals when a critical threshold is approached <cit.>. It was revealed <cit.> that when a disease starts spreading from a single node, the golden time n_c scales as n_c(N) ∼ N^ζ with ζ = 1/3 at the epidemic threshold.

Here we reconsider this problem and represent the pattern of disease transmission using a nonlinear mapping. We show that the linear and nonlinear terms of the nonlinear mapping play dynamically distinct and well-separated roles. The linear term is responsible for the one-step contagion without weakened states, whereas the nonlinear term describes the two-step contagion, which involves the weakened state. Thus, the previous result of N^{1/3} for the golden time is consistent with the characteristic size of the giant cluster generated in the SIR model <cit.>, and it is thereby verified within this new framework. Next, we consider another case, which is the main concern of this paper, in which an epidemic starts to spread from endemic multiple seeds of O(N) on ER networks, also at the epidemic threshold. In this case, the long latent period appears not immediately but after some characteristic time. Thus, fluctuations induced by the stochastic process of disease transmission at early times heavily affect the behavior during the latent period, which changes the measured exponent ζ to a value slightly larger than 1/4. We estimate this scaling behavior using saddle-node bifurcation theory <cit.> and discuss the underlying mechanism. Similar size dependences of the mean cascading time at a transition point were studied for other cascade dynamics models such as k-core percolation <cit.> and the cascading failure model on interdependent networks (CFoIN) <cit.>. It was found <cit.> that in the CFoIN, the mean cascading time is proportional to N^{1/3} or N^{1/4}, depending on the way of choosing the transition points. Refs. <cit.> showed that the exponent 1/3 is also obtained in k-core percolation. Thus, the scaling behavior of N^{1/3} is robust. However, for the N^{1/4} case, a different scaling behavior with ζ ≈ 0.280 <cit.> was numerically obtained for a surface growth model effectively equivalent to the CFoIN. Here we extend our formalism of nonlinear mapping used in the SWIR model to other models such as k-core percolation and the threshold model <cit.>.
We show that when the cascade starts from a fixed number of multiple seeds O(N), the golden times for both models also become proportional to N^ζ, where the values of ζ are estimated to be slightly larger than 1/4 within our simulation range and differ from each other, suggesting non-universal behavior. However, we cannot exclude the possibility that ζ = 1/4 in the large-N limit. We shall discuss this point in Sec. IV.

This paper is organized as follows: We first introduce the SWIR model and set up the evolution equation of the epidemic dynamics in Sec. II. Next, we derive a nonlinear mapping for the epidemic spread from a single seed in Sec. IIIA. We show that the roles of the linear and nonlinear terms are well separated. In Sec. IIIB, we derive a similar nonlinear mapping for the multiple-seed case, and show how the multiplicative feature of the fluctuations of epidemic spreading affects the scaling of the golden time. In Sec. IV, we obtain the golden times of the multiple-seed case for k-core percolation and the threshold model and show that the numerical values of ζ are slightly larger than 1/4. We also discuss the possibility of ζ = 1/4 in the thermodynamic limit. In Sec. V, we discuss the origin of the puzzle in view of nonlinear dynamics theory. A summary is presented in Sec. VI.

§ THE SWIR MODEL

The SWIR model is a generalization of the SIR model obtained by including two states, a weakened state (denoted as W) and an infected state (I), between the susceptible state (S) and the recovered state (R), instead of the single infected state I alone as in the SIR model. Nodes in state W are involved in the reactions S+I → W+I and W+I → 2I, which occur in addition to the reactions S+I → 2I and I → R of the SIR model. At each discrete time step n, the following processes are performed. (i) All the nodes in state I are listed in random order. (ii) The states of the neighbors of each node in the list are updated sequentially as follows: if a neighbor is in state S, it changes its state in one of two ways, either to I with probability κ or to W with probability μ; if a neighbor is in state W, it changes to I with probability η. Here κ, μ, and η are the contagion probabilities of the respective reactions. (iii) All nodes in the list change their states to R. This completes a single time step, and we repeat the above processes until the system reaches an absorbing state in which no infectious node is left in the system. The reactions are summarized as follows:

S+I ⟶ I+I (with probability κ),
S+I ⟶ W+I (with probability μ),
W+I ⟶ I+I (with probability η),
I ⟶ R (with probability 1).
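A minimal sketch of one such time step, rules (i)-(iii), in code form may help fix the dynamics. Here adj is an adjacency list and state maps nodes to 'S', 'W', 'I' or 'R'; the names are ours:

import random

def swir_step(adj, state, kappa, mu, eta):
    """One synchronous time step of the SWIR dynamics."""
    infected = [v for v in adj if state[v] == "I"]
    random.shuffle(infected)                  # (i) list I-nodes in random order
    for v in infected:
        for w in adj[v]:                      # (ii) update each neighbor
            r = random.random()
            if state[w] == "S":
                if r < kappa:
                    state[w] = "I"            # S + I -> I + I
                elif r < kappa + mu:
                    state[w] = "W"            # S + I -> W + I
            elif state[w] == "W" and r < eta:
                state[w] = "I"                # W + I -> I + I
    for v in infected:
        state[v] = "R"                        # (iii) I -> R
    return any(s == "I" for s in state.values())  # False in an absorbing state

Nodes infected during the sweep are not listed in `infected`, so they act only from the next time step on, in agreement with rule (i).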
In an absorbing state, each node is in one of three states: susceptible, weakened, or recovered. We define P_S(ℓ) as the conditional probability that a node remains in state S in the absorbing state, provided that it has ℓ neighbors in state R and was originally in state S. This means that the node remains in state S even though it has been in contact ℓ times with these ℓ neighbors while they were in state I, before they changed their states to R. Thus, we obtain P_S(ℓ) = (1-κ-μ)^ℓ. Next, P_W(ℓ) is similarly defined as the conditional probability that a randomly selected susceptible node is in state W after it has been in contact with ℓ neighbors in state I before they changed their states to R. The probability P_W(ℓ) is given as P_W(ℓ) = ∑_{n=0}^{ℓ-1} (1-κ-μ)^n μ (1-η)^{ℓ-n-1}. Finally, P_I(ℓ) is the conditional probability that a node has been infected in any state, either I or R, provided that it was originally in state S and its ℓ neighbors are in state R in the absorbing state. Using the relation P_S(ℓ)+P_W(ℓ)+P_I(ℓ)=1, one can determine P_I(ℓ) in terms of P_S(ℓ) and P_W(ℓ).

On a network with a degree distribution P_d, we consider the case in which the initial densities of susceptible, weakened, and infectious nodes are given as s_0, w_0, and i_0, provided that s_0+w_0+i_0=1. The order parameter m, the density of nodes in state R after the system falls into an absorbing state, is given using the local tree approximation as m = i_0 + ∑_{q=1}^{∞} P_d(q) (s_0 f_q(u) + w_0 g_q(u)), where f_q(u) = ∑_{ℓ=1}^{q} \binom{q}{ℓ} u^ℓ (1-u)^{q-ℓ} P_I(ℓ), g_q(u) = ∑_{ℓ=1}^{q} \binom{q}{ℓ} u^ℓ (1-u)^{q-ℓ} (1-(1-η)^ℓ) = 1-(1-ηu)^q, and u is the probability that an arbitrarily chosen edge leads to a node in state R or I, but not infected through the chosen edge, in the absorbing state. We define u_n similarly to u but at time step n. The probability u_{n+1} can be derived from u_n as follows: u_{n+1} = i_0 + ∑_{q=1}^{∞} (qP_d(q)/z) (s_0 f_{q-1}(u_n) + w_0 g_{q-1}(u_n)), where z ≡ ∑_q qP_d(q) is the mean degree of the network and the factor qP_d(q)/z is the probability that a node connected to a randomly chosen edge has degree q. As n→∞, u_n converges to u.

§ GOLDEN TIMES IN THE SWIR MODEL

§.§ The single-seed case

First, we consider the case in which the initial number of infectious nodes is o(N); that is, i_0=w_0=0 and s_0=1 in the thermodynamic limit. In this case, the SWIR model exhibits a mixed-order transition <cit.> at a transition point κ_c when the mean degree is larger than a critical value. The order parameter displays a discontinuous transition from m(κ_c)=0 to m_0, whereas other physical quantities such as the outbreak size distribution exhibit critical behavior. The behavior of the order parameter m(κ) as a function of κ is schematically shown in Fig. <ref>(a). We are interested in how infected nodes spread as a function of the cascade step n when the order parameter jumps. As a particular case, when the network is an ER network having a degree distribution that follows the Poisson distribution, i.e., P_d(q) = (q+1)P_d(q+1)/z = z^q e^{-z}/q!, where z is the mean degree, Eq. (<ref>) reduces to u_{n+1} = 1 - (1 - μ/(κ+μ-η)) e^{-(κ+μ)z u_n} - (μ/(κ+μ-η)) e^{-η z u_n} ≡ F(u_n). We remark that on ER networks, u_n in the limit n→∞ becomes equivalent to m obtained from Eq. (<ref>). We pick up the contribution of the reaction S+I→ 2I from Eq. (<ref>) but neglect the contribution of the reaction W+I→ 2I. Then, the probability that a node becomes directly infected by ℓ infectious neighbors, which is denoted by P^{(S→I)}(ℓ), is given as P^{(S→I)}(ℓ) = ∑_{m=0}^{ℓ-1} (1-κ-μ)^m κ = (κ/(κ+μ)) [1-(1-κ-μ)^ℓ]. Applying the formula for the Poisson degree distribution to Eq. (<ref>), we obtain F^{(S→I)}(u_n) = (κ/(κ+μ)) [1 - e^{-(κ+μ)z u_n}]. Because the order parameter increases from m=0, we assume that u_n is small in the early time regime. Thus, u^{(S→I)}_{n+1} = zκ u_n - a u_n² + O(u_n³), where a ≡ κ(κ+μ)z²/2. Actually, the coefficient zκ of the first-order term is the mean branching ratio in the early time regime. When the critical branching (CB) process occurs, the mean branching ratio becomes unity, so the transition occurs at κ_c = 1/z. On the other hand, the discrete mapping (<ref>) at κ_c may be rewritten in the form of a saddle-node bifurcation, u̇^{(S→I)} = -a u², where u is a function of the continuous time variable n and the overdot denotes differentiation with respect to it. Because a > 0, u^* = 0 is a stable fixed point for u ≥ 0, and this point represents the fixed point of the SIR model, indicating a second-order transition.

Next, we consider the two successive reactions S+I→ W+I and W+I→ 2I, in which a susceptible node becomes infected in two steps and eventually recovers. Because a node can be infected either by the reaction S+I→ 2I or by the reactions S+I→ W+I and W+I→ 2I, the probability F^{(S→W→I)}(u_n) can be obtained using the relation F^{(S→W→I)}(u_n) = F(u_n) - F^{(S→I)}(u_n) as F^{(S→W→I)}(u_n) = (μ/(κ+μ)) [1 + (η/(κ+μ-η)) e^{-(κ+μ)z u_n}] - (μ/(κ+μ-η)) e^{-η z u_n}. Again, using u_n ≪ 1, we obtain u^{(S→W→I)}_{n+1} = b u_n² + O(u_n³), where b ≡ μηz²/2. Here we note that the first-order term O(u_n) is absent. Combining Eqs. (<ref>) and (<ref>), we obtain u_{n+1} = u_n + (b-a) u_n² + O(u_n³). Thus, u̇ = (b-a)u². When b-a < 0, i.e., μη < κ_c² + κ_c μ, the fixed point u^* = 0 is stable, and thus a continuous transition occurs. Otherwise, the fixed point u^* = 0 is unstable, and a discontinuous transition occurs. The condition μη > κ_c² + κ_c μ for a discontinuous transition is consistent with previously obtained results <cit.>.
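The discontinuous jump and the long plateau preceding it can already be seen by iterating the mapping F numerically. The following is a minimal sketch; the parameter values z = 8 and μ = η = 0.5 (which satisfy μη > κ_c² + κ_c μ) and the tiny initial value are our illustrative choices, not those used in the paper:

import numpy as np

def F(u, kappa, mu, eta, z):
    c = mu / (kappa + mu - eta)
    return (1.0 - (1.0 - c) * np.exp(-(kappa + mu) * z * u)
            - c * np.exp(-eta * z * u))

z, mu, eta = 8.0, 0.5, 0.5
kappa_c = 1.0 / z                 # critical branching condition z * kappa = 1
u, traj = 1e-4, []                # small u_0 mimicking a nearly empty system
for n in range(5000):
    traj.append(u)
    u = F(u, kappa_c, mu, eta, z)

The trajectory lingers near u_0 for roughly 1/((b-a)u_0) steps, which is the plateau defining the golden time, before jumping to the nontrivial fixed point.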
When contagion starts from a single infectious node, its spread in the early time regime is governed by the linear term of Eq. (<ref>). It proceeds in the form of a CB tree <cit.>, i.e., the mean branching ratio (u_{n+1}-u_n)/(u_n-u_{n-1}) is almost unity, and the main contribution is that of the reaction S+I→ 2I. Thus, in the thermodynamic limit, u_n always stays zero, so the nonlinear terms in Eq. (<ref>) do not come into play. On the other hand, in finite systems, u_n grows gradually and the nonlinear term (b-a)u_n² becomes significant after a characteristic time n_c(N). It was argued in <cit.> that for the SIR model at the epidemic threshold, the maximum size of outbreaks is proportional to N^{2/3} in the mean-field limit. When the number of ever-infected nodes, u_n N, grows up to O(N^{2/3}), the nonlinear terms in Eq. (<ref>) suppress further growth of the cluster, leading to a subcritical branching process. This means that the CB process driven by Eq. (<ref>) persists up to O(N^{1/3}) steps, because the fractal dimension of the CB tree is two. On the other hand, for the SWIR model, the coefficient of the nonlinear term (<ref>) is positive, and the nonlinear term enhances the further increase of removed nodes. The CB process turns into a supercritical process, leading to a pandemic outbreak. Accordingly, the golden time, the duration of the CB process, scales as ∼ N^{1/3}, similarly to that of the SIR model, which is what we observed in previous work <cit.>.

§.§ The multiple-seed case

Next, when the number of infectious nodes is O(N), i.e., i_0 > 0, s_0 = 1-i_0, and w_0 = 0 in the thermodynamic limit, it was shown <cit.> that there exists a critical value i_0^{(c)} such that when i_0 < i_0^{(c)}, a hybrid phase transition occurs at a transition point κ_c, whereas when i_0 = i_0^{(c)}, a continuous transition occurs. Here we focus on the former case. In the multiple-seed case, Eq. (<ref>) becomes u_{n+1} = i_0 + (1-i_0) F(u_n) when the network is an ER network with mean degree z. Fixed points of Eq. (<ref>) satisfy the equation G(u) ≡ i_0 + (1-i_0)F(u) - u = 0, and the smallest solution among them is the order parameter m(κ). We note that G(u) contains the parameters κ, μ, η, z and i_0. As already shown in the single-seed case, for appropriately given values of μ and η, G(0) = i_0, G'(0) = (1-i_0)zκ - 1, and G''(0) = 2(1-i_0)(b-a) > 0. Thus, when i_0 is sufficiently small, i.e., i_0 < i_0^{(c)}, m(κ) satisfies G'(m(κ)) < 0 for values of κ near zero. Then m(κ) increases continuously as κ is increased, until κ reaches a critical value κ_c such that G(m(κ_c)) = G'(m(κ_c)) = 0 and G''(m(κ_c)) > 0 are satisfied.
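Numerically, the hybrid point (m_d, κ_c) follows from solving the two conditions G = G' = 0 simultaneously. A minimal sketch of this procedure (the parameter values are again purely illustrative, dG is a central-difference derivative, and a reasonable initial guess is needed in practice):

import numpy as np
from scipy.optimize import fsolve

z, mu, eta, i0 = 8.0, 0.5, 0.5, 0.002    # illustrative values only

def F(u, kappa):
    c = mu / (kappa + mu - eta)
    return (1.0 - (1.0 - c) * np.exp(-(kappa + mu) * z * u)
            - c * np.exp(-eta * z * u))

def G(u, kappa):
    return i0 + (1.0 - i0) * F(u, kappa) - u

def dG(u, kappa, h=1e-7):
    return (G(u + h, kappa) - G(u - h, kappa)) / (2.0 * h)

# hybrid transition point: G(m_d, kappa_c) = G'(m_d, kappa_c) = 0
m_d, kappa_c = fsolve(lambda x: [G(x[0], x[1]), dG(x[0], x[1])],
                      x0=[0.05, 1.0 / z])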
We note that κ_c depends on i_0 and z. When i_0 = 0, κ_c reduces to 1/z, the transition point of the single-seed case. m(κ) exhibits critical behavior as κ approaches κ_c and subsequently jumps from m(κ_c) = m_d to another value m_u, as represented in Fig. <ref>(b). Thus, the transition is hybrid. We notice that at the transition point for the multiple-seed case, an infected node can be in contact with a node that was weakened by a different infectious root <cit.>. Accordingly, the reaction W+I→ 2I can occur even in the early time regime, as shown in Fig. <ref>(c) by the red zig-zag (lowest) curve. Moreover, the CB process appears not from the beginning but slightly after it, as indicated by an arrow at n^* in Fig. <ref>(c), at which the density of recovered nodes r_{n^*} is close to the value m_d indicated in Fig. <ref>(b). From this step n^*, r_n remains almost constant for a long time, as shown in Fig. <ref>(d). However, in finite systems, due to the fluctuations arising in the stochastic process of epidemic spread, the densities of each species of nodes at n^* can differ from one realization to another. Those fluctuations affect n_c, which can also be different for different realizations, where n_c is the golden time, beyond which r_n increases drastically.

We denote the densities of each species of nodes at a certain time step ℓ as s_ℓ, w_ℓ, i_ℓ, and r_ℓ, respectively. Then on ER networks, for n > ℓ, h_{n,ℓ} ≡ u_n - r_ℓ satisfies h_{n+1,ℓ} = i_ℓ + ∑_{q=0}^{∞} (z^q e^{-z}/q!) (s_ℓ f_q(h_{n,ℓ}) + w_ℓ g_q(h_{n,ℓ})), where s_ℓ = s_0 e^{-(κ+μ)z u_{ℓ-1}} and w_ℓ = 1 - u_ℓ - s_ℓ. Moreover, using Eq. (<ref>), the relation i_n = u_{n+1} - u_n, and u_n = i_n + r_n, we determine i_ℓ and r_ℓ. We focus on the density fluctuations of each species at n^*. We split the densities of each species of nodes into two parts, x_{n^*} + δx_{n^*} (x = s, w, i or r), where the first term represents the density of x-species nodes in the thermodynamic limit at n^*, and the second one is the deviation. Then for n > n^*, Eq. (<ref>) becomes h_{n+1,n^*} = i_{n^*} + δi_{n^*} + (s_{n^*} + δs_{n^*}) f(h_{n,n^*} - δr_{n^*}) + (w_{n^*} + δw_{n^*}) g(h_{n,n^*} - δr_{n^*}) + δr_{n^*}. We did not take into account the density fluctuations induced after n^*, because they are negligible compared to those at n^*. At κ = κ_c, Eq. (<ref>) has a nontrivial fixed point h_{d,n^*} = m_d - r_{n^*} in the thermodynamic limit. Then Eq. (<ref>) is rewritten with ϵ_n = u_n - m_d as ϵ_{n+1} = d_0 + (1 + δd_1)ϵ_n + (d_2 + δd_2)ϵ_n² + O(ϵ_n³), where d_0 ≈ δi_{n^*} + δs_{n^*} f(h_{d,n^*}) + δw_{n^*} g(h_{d,n^*}), δd_1 ≈ δs_{n^*} f'(h_{d,n^*}) + δw_{n^*} g'(h_{d,n^*}) - (s_{n^*} f''(h_{d,n^*}) + w_{n^*} g''(h_{d,n^*})) δr_{n^*}, d_2 = (1/2)(s_{n^*} f''(h_{d,n^*}) + w_{n^*} g''(h_{d,n^*})), and δd_2 = (1/2)(δs_{n^*} f''(h_{d,n^*}) + δw_{n^*} g''(h_{d,n^*})) - (1/2)(s_{n^*} f'''(h_{d,n^*}) + w_{n^*} g'''(h_{d,n^*})) δr_{n^*}.

Neglecting higher-order terms of ϵ, Eq. (<ref>) may be rewritten in the alternative form ϵ̇ = d_0 + (d_2 + δd_2)(ϵ + δd_1/(2(d_2 + δd_2)))² - (δd_1)²/(4(d_2 + δd_2)). Because δi_{n^*} ∼ δs_{n^*} ∼ δw_{n^*} ∼ δr_{n^*} ≪ 1 for large N, the last term can be neglected compared to d_0, and Eq. (<ref>) is rewritten simply as ϵ̇' = d_0 + d_2' ϵ'², where d_2' = d_2 + δd_2 and ϵ' = ϵ + δd_1/(2d_2'). We note that d_0 is a real number, while d_2' is a positive number. The nonlinear mapping Eq. (<ref>) includes several features: when d_0 < 0, ϵ' reaches a fixed point ϵ'_* = -√(|d_0|/d_2'); when d_0 = 0, ϵ' remains at zero; when d_0 > 0, there arises the so-called bottleneck effect at ϵ' = 0 <cit.>. The time to pass through the bottleneck is calculated as T = ∫_{-∞}^{∞} dϵ'/(d_0 + d_2' ϵ'²) ∼ π/√(d_0), which is approximately the time interval of the plateau region, i.e., n_c - n^*.
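For completeness, the bottleneck integral can be evaluated in closed form; with the substitution ϵ' = √(d_0/d_2') tan θ one finds

T = ∫_{-∞}^{∞} dϵ'/(d_0 + d_2' ϵ'²) = (1/√(d_0 d_2')) [arctan(√(d_2'/d_0) ϵ')]_{-∞}^{+∞} = π/√(d_0 d_2'),

so that T ∝ d_0^{-1/2}, since d_2' = O(1).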
Because n^* is much smaller than n_c, we have n_c ≈ T, which is the golden time for a single realization of the process. d_0 can take different values for different realizations, yielding a different n_c. Thus, we need to average n_c over different realizations to obtain ⟨n_c⟩. We performed extensive numerical simulations at the transition point κ_c ≈ 0.11494875096512 of the SWIR model starting from multiple seeds with i_0 = 0.002, and obtained ⟨n_c⟩ ∼ N^{0.252±0.001} and ⟨d_0^{-1/2}⟩_+ ∼ N^{0.254±0.002}, as shown in Fig. <ref>. ⟨⋯⟩_+ represents the ensemble average over only positive values of d_0; otherwise, ϵ' does not diverge under repeated iteration. We remark that the exponent value is larger than 1/4. The numerical exponent values in Eqs. (<ref>) and (<ref>) are obtained with the data only within the range N < 10^8. The data beyond that range depart from the trend abruptly, which may be caused by too long passing times through too narrow bottlenecks as the system size becomes large. The noise term d_0 was obtained at n^* ≈ 80 in Fig. <ref>, at which a critical branching process starts. Now we consider the distribution of d_0 obtained from different realizations but at the same n^* for system size N, denoted as Q_N(d_0). We define the standard deviation σ_N of Q_N(d_0) as σ_N² = ⟨d_0²⟩ - ⟨d_0⟩², where ⟨⋯⟩ represents the average over the whole range of d_0 and ⟨d_0⟩ > 0. σ_N behaves as ∼ N^{-1/2}, as shown in Fig. <ref>(d). If we assume that every moment of the distribution Q_N(d_0) is determined by a single scale, so that ⟨d_0^{-1/2}⟩_+ ∼ ⟨d_0²⟩_+^{-0.25} ∼ σ_N^{-1/2}, then it would behave as N^{1/4}. However, this result is not consistent with the numerical result (<ref>). We check the N-dependent behavior of √(⟨d_0²⟩_+). Fig. <ref>(e) shows that √(⟨d_0²⟩_+) behaves as N^{-1/2} asymptotically, but the data points deviate in the small-N region. This discrepancy mainly originates from the asymmetry of Q_N(d_0), which is caused by the multiplicative noise induced by the stochastic process. Q_N(d_0) has a longer tail on its positive side than on the opposite side, as shown in Fig. <ref>(a)-(c). As the system size becomes larger, it becomes not only narrower but also more symmetric. To quantify this asymmetric feature of the distribution, we measure the skewness of Q_N(d_0), defined as S_3 ≡ ⟨((d_0 - ⟨d_0⟩)/σ_N)³⟩ ∼ N^{-0.55}, in Fig. <ref>(f). This result suggests that the distribution remains asymmetric in any finite system and becomes symmetric only in the limit N→∞; Q_N(d_0) becomes a Gaussian distribution in that limit. The asymmetry of Q_N(d_0) decreases because the ratio of the noise to the mean number of infected nodes becomes smaller for larger systems. Due to those features, ⟨d_0^{-1/2}⟩_+ behaves differently from σ_N^{-1/2} within our numerical range; however, it is not yet certain how it would behave in the thermodynamic limit, because our simulation data (Fig. <ref>) for ⟨d_0^{-1/2}⟩_+ contain heavy fluctuations, particularly in the large-system-size region. For much larger system sizes, Q_N(d_0) is so close to a Gaussian distribution that one may expect ⟨d_0^{-1/2}⟩_+ to behave as σ_N^{-1/2}, i.e., ∼ N^{1/4}, in the thermodynamic limit N→∞. However, it is a challenging task to verify that numerically. When κ > κ_c, d_0 is naturally obtained as d_0 = (1-i_0) (∂F(u,κ)/∂κ)|_{m_d,κ_c} (κ - κ_c). Then we do not need to take an ensemble average for sufficiently large κ - κ_c, because sample-to-sample fluctuations of d_0 become negligible compared to it. Then, ⟨n_c⟩ = ∫_{-∞}^{∞} dϵ/(d_0 + d_2 ϵ²) ∼ π/√(d_0) ∝ (κ - κ_c)^{-1/2}. The numerical results in Fig. <ref> support this prediction.

§ K-CORE PERCOLATION AND THE THRESHOLD MODEL

In our previous work <cit.>, we showed that there exists a universal mechanism of avalanche dynamics in the SWIR model, k-core percolation, the threshold model and the CFoIN, when an avalanche starts from a single seed. Due to this universal mechanism, the golden time scales as N^{1/3} in a universal way for those models. During that study, we found that numerical simulations of the CFoIN with large system sizes require long computation times and large memory, so that numerical results with limited ensemble averaging were not clean. Based on that experience, here we limit our interest in the golden-time problem with multiple seeds to k-core percolation <cit.> and the threshold model <cit.>, besides the SWIR model, to check the universal behavior. We find that for both models, the exponent values of the golden time are also measured to be slightly larger than 1/4. We note that the SWIR model and the two models above can be regarded as special cases of the generalized epidemic process <cit.> with heterogeneous transmission probabilities. Thus, similar behaviors of the golden time are expected. Let us begin with k-core percolation.

§.§ k-core percolation

Here we first consider the avalanche dynamics of k-core percolation. First, we construct a k-core subgraph from an ER random graph with mean degree z. When z is larger than a threshold z_c, a k-core subgraph of size O(N) can exist. After this step, ρ_0 N nodes are removed simultaneously. There may then exist some nodes that have degree less than k. In this case, those nodes are removed repeatedly until no more such nodes remain.
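A minimal sketch of this pruning cascade, counting the number of synchronous generations, is given below; the names are ours, with alive marking the nodes still present in the k-core subgraph:

def kcore_cascade(adj, alive, k, seed):
    """Remove the seed nodes, then prune synchronously: at each time step,
    delete every remaining node whose degree has dropped below k. Returns
    the number of generations, i.e. the cascade (golden) time."""
    for v in seed:
        alive[v] = False
    n = 0
    while True:
        unstable = [v for v in adj if alive[v]
                    and sum(alive[u] for u in adj[v]) < k]
        if not unstable:
            return n
        for v in unstable:
            alive[v] = False
        n += 1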
The avalanche size can be either finite or infinite, depending on z and ρ_0. If it is finite, the k-core still exists; if it is infinite, the k-core collapses to zero. For sufficiently large z, there exists a critical density ρ_c such that an infinite avalanche can occur when ρ_0 > ρ_c in the thermodynamic limit. In Fig. <ref>, we measure the mean cascade time step (golden time) ⟨n_c⟩ of infinite avalanches at the transition point for different system sizes N. We also measure ⟨d_0(n^*)^{-1/2}⟩_+ and ⟨d_0(0)^{-1/2}⟩_+, where d_0(n^*) is the noise measured at n^* = 60, at which a critical branching process occurs. The definition of d_0 is presented in Appendix A. d_0(0) is measured at n = 0 and represents the structural fluctuation at n = 0. It was found that ⟨n_c⟩ scales as N^{0.2655±0.003} and ⟨d_0(n^*)^{-1/2}⟩_+ scales as N^{0.262±0.007}. On the other hand, ⟨d_0(0)^{-1/2}⟩_+ is proportional to N^{0.25}. The distribution functions of √N d_0(0) and √N d_0(n^*) are shown in Fig. <ref> for two different system sizes. Similarly to the case of the SWIR model, d_0(n^*) is distributed asymmetrically, and the distribution becomes more symmetric for larger system sizes (Fig. <ref>(a)). Such factors make the exponent larger than 1/4. On the other hand, since the multiplicative fluctuations of the cascade dynamics are absent at n = 0, the distribution of d_0(0) does not change in shape for different system sizes (Fig. <ref>(b)). Thus, it satisfies Q_N(d_0(0)) = √N Q(√N d_0(0)), which makes ⟨d_0(0)^{-1/2}⟩_+ scale as ∼ N^{1/4}. However, it remains uncertain whether the value of ζ stays unchanged for larger systems.

§.§ The threshold model

Next, we consider the threshold model, which was introduced to study the spread of cultural fads on social networks. Each node i is assigned a threshold value ϕ_i and has one of two states, active or inactive. An inactive node i surrounded by m_i active neighbors and k_i - m_i inactive neighbors changes its state to active when the fraction of active neighbors satisfies m_i/k_i > ϕ_i.
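One generation of these dynamics can be sketched as follows; the names are ours, with active mapping nodes to booleans and phi holding the thresholds:

def threshold_sweep(adj, active, phi):
    """One synchronous generation of the threshold model: every inactive
    node i with m_i / k_i > phi_i becomes active; returns False once an
    absorbing state is reached."""
    flips = [v for v in adj if not active[v]
             and sum(active[u] for u in adj[v]) > phi[v] * len(adj[v])]
    for v in flips:
        active[v] = True
    return bool(flips)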
For a given set of {ϕ_i}, the order parameter, the density of active nodes in an absorbing state, jumps and exhibits a hybrid phase transition at a critical value of the mean degree z_c. Here, we initially introduce i_0 N active nodes into the system. At each generation, every inactive node i whose number of active neighbors satisfies m_i > k_i ϕ_i is identified and changes its state to active. For convenience, we choose a single threshold value ϕ for all nodes on ER networks. Then the critical mean degree z_c is determined as a function of ϕ and i_0. We performed simulations with ϕ = 0.18 and i_0 = 0.01. The critical point is then determined as z_c = 9.191… in the thermodynamic limit. The mean cascade time step of infinite outbreaks, ⟨n_c⟩, is obtained numerically as ∼ N^{0.263} in Fig. <ref>. Thus, the measured exponent is also larger than 1/4.

§ PUZZLE IN CFOIN

A similar size dependence of the golden time was addressed in the CFoIN. Zhou et al. <cit.> revealed that different choices of the transition point lead to different scaling behaviors of the golden time in the CFoIN. They showed that when the golden time is measured at the transition point p_c of each realization, the mean golden time scales as ⟨n_c⟩ ∼ N^{1/3}. On the other hand, when a single mean-field transition point p_c^{MF} is taken for all realizations, the golden time scales as ⟨n_c^{MF}⟩ ∼ N^{1/4}. The authors presented the hand-waving argument that in finite systems of size N, the individual p_c follows a Gaussian distribution with mean value p_c^{MF} and standard deviation proportional to N^{-1/2} <cit.>. Using a formula similar to Eq. (<ref>) with d_0 following a Gaussian distribution of standard deviation ∼ N^{-1/2}, they obtained ⟨n_c^{MF}⟩ ∼ N^{1/4}. On the other hand, the author of Ref. <cit.> investigated the scaling relation of the golden time numerically using a different algorithm, and obtained an exponent ζ ≈ 0.28, different from 1/4. Thus, the two results are not consistent with each other, and this discrepancy has remained a puzzle in cascade-induced discontinuous percolation. We recall that for the SWIR model, i_0 N seeds are selected at random. Thus, the dynamics started from those nodes can be different for each sample. Because these choices are random, the distribution of d_0 at n = 0 follows a Gaussian distribution, in a similar way to the k-core percolation case. However, because the dynamics proceeds from n = 0 stochastically, noise is accumulated during the avalanche process. In this case, for a given network at n = 0, the values of d_0 obtained at n^* do not form a regular Gaussian distribution but an asymmetric one, and the observed scaling of ⟨n_c⟩ is not N^{1/4}. We think that the result obtained in Ref. <cit.> shares a common origin with the one we found in the SWIR model. Therefore, we think that the puzzle arising between the results of Refs. <cit.> and <cit.> originates from the times at which the distribution of the fluctuations is measured.

§ SUMMARY

The SWIR model is a simple two-step contagion model, enabling us to understand the mechanism underlying a pandemic outbreak. Using this model, we obtained the scaling behavior of the golden time with respect to the system size.
Using the local tree approximation, we set up a nonlinear dynamic equation in the form of a saddle-node bifurcation that represents the cascade dynamics of two-step contagion. When the epidemic dynamics starts from a single infected node, we showed that the linear and nonlinear terms of the nonlinear mapping play distinct roles. In the early time regime, the linear term governs a critical branching (CB) process. The CB tree can be regarded as a critical cluster in percolation. However, in the late time regime, the nonlinear term causes an explosive spread of the epidemic. The golden time is determined by the finite-size effect on the linear term, and scales as ∼ N^ζ with ζ = 1/3. This scaling behavior is universal for cascade-induced dynamic models such as the threshold model, k-core percolation and the CFoIN. When the dynamics starts from multiple seeds of O(N), we measured a change in the value of ζ to about 0.252 in the SWIR model. In this case, a long CB process appears not from n = 0 but from some characteristic time n^*. During the time up to n^*, clusters of ever-infected nodes merge and form a cluster of size O(N). The size fluctuates among different realizations; the fluctuations are induced by the stochastic process of disease transmission. We found that these fluctuations change the value of ζ from 1/3 to about 0.252 for the multiple-seed case. Due to the multiplicative noise of disease spread, the size distribution of those clusters over different realizations becomes asymmetric, with a long tail in its positive region. It seems that, due to such non-Gaussianity, the golden time scales as ∼ N^ζ with ζ slightly larger than 1/4. However, this asymmetry decreases gradually, in a power-law manner of the skewness, as the system size is increased. This leaves open the possibility that ζ asymptotically approaches 1/4. This problem could not be ultimately resolved by our study; a very precise analysis of corrections to scaling would be needed for it, which was not possible in spite of our massive numerical efforts. We also obtained similar behavior, ζ > 1/4, for the two other cascade dynamics models, k-core percolation and the threshold model. On the basis of the numerical results for the SWIR model, the threshold model and k-core percolation, the exponent ζ seems to be non-universal for the multiple-seed case. However, as the ζ exponents for those models deviate only slightly from 1/4 and the simulation sizes are limited, asymptotic universal behavior cannot be entirely excluded.

This work was supported by the National Research Foundation of Korea by grant no. NRF-2014R1A3A2069005 and by H2020 FETPROACT-GSS CIMPLEX Grant No. 641191 (JK).

§ DERIVATION OF D_0 IN K-CORE PERCOLATION

We consider the k-core subgraph of a given ER network of size N and mean degree z. If z is larger than a critical value z_c, a k-core subgraph of size M(z)N can exist. M(z) was obtained analytically in <cit.>. We define P_d(q) as the probability that a node in the k-core subgraph has degree q. We consider the evolution of the avalanche process after removing ρ_0 N nodes from the k-core subgraph. We define u_n as the probability that a node attached to the end of a randomly chosen edge of the network has degree less than k at time step n.
Then the evolution of u_n satisfies the following equation: u_{n+1} = i_0 + (1-i_0) ∑_{q=k}^{∞} (qP_d(q)/⟨q⟩) f_q(u_n), where i_0 = ρ_0/M(z) and f_q(u_n) = ∑_{i=q-k+1}^{q-1} \binom{q-1}{i} u_n^i (1-u_n)^{q-1-i}. Taking steps similar to those for the SWIR model, we now consider the avalanche process after a certain time step m, which is h_{n+1,m} = i_m + ∑_{ℓ=k}^{∞} Q_m(ℓ) f_ℓ(h_{n,m}), where h_{n,m} = (u_n - u_{m-1})/(1 - u_{m-1}), Q_{m+1}(ℓ) = ((1-i_0)/(1-u_m)) ∑_{q=ℓ}^{∞} (qP_d(q)/⟨q⟩) \binom{q-1}{q-ℓ} u_m^{q-ℓ} (1-u_m)^{ℓ-1}, and i_m ≡ ∑_{ℓ=1}^{k-1} Q_m(ℓ) = (u_m - u_{m-1})/(1 - u_{m-1}). Here Q_m(ℓ) denotes the probability that a node attached to the end of a randomly chosen edge in the remaining graph at time step m has degree ℓ. We now consider sample-to-sample fluctuations at the characteristic time n^*. At the transition point ρ_0 = ρ_c, Eq. (<ref>) has a nontrivial fixed point u_d < 1. Defining h_{d,n^*} ≡ (u_d - u_{n^*-1})/(1 - u_{n^*-1}) and ϵ_n = h_{n,n^*} - h_{d,n^*}, Eq. (<ref>) with fluctuations becomes h_{d,n^*} + ϵ_{n+1} = i_{n^*} + δi_{n^*} + ∑_{ℓ=k}^{∞} (Q_{n^*}(ℓ) + δQ_{n^*}(ℓ)) f_ℓ(h_{d,n^*} + ϵ_n), which can take the form of Eq. (<ref>) with d_0 = δi_{n^*} + ∑_{ℓ=k}^{∞} δQ_{n^*}(ℓ) f_ℓ(h_{d,n^*}).

99 review_epidemics R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, Rev. Mod. Phys. 87, 925 (2015). sir D. Mollison, J. Royal Statist. Soc. B 39, 283 (1977). sir_newman M. E. J. Newman, Phys. Rev. E 66, 016128 (2002). sis R. M. Anderson and R. M. May, Infectious Diseases of Humans (Oxford University Press, Oxford, 1992). ER P. Erdős and A. Rényi, Publ. Math. 6, 290 (1959). grassberger_nphy W. Cai, L. Chen, F. Ghanbarnejad, and P. Grassberger, Nat. Phys. 11, 936 (2015). grassberger_pre G. Bizhani, M. Paczuski, and P. Grassberger, Phys. Rev. E 86, 011128 (2012). janssen H.-K. Janssen, M. Müller, and O. Stenull, Phys. Rev. E 70, 026114 (2004). janssen_spinoal H.-K. Janssen and O. Stenull, Europhys. Lett. 113, 26005 (2016). hasegawa1 T. Hasegawa and K. Nemoto, J. Stat. Mech. P11024 (2014). hasegawa3 T. Hasegawa and K. Nemoto, arXiv:1611.02809. chung K. Chung, Y. Baek, M. Ha, and H. Jeong, Phys. Rev. E 93, 052304 (2016). choi_2016 W. Choi, D. Lee, and B. Kahng, Phys. Rev. E 95, 022304 (2017). choi_multi W. Choi, D. Lee, and B. Kahng, Phys. Rev. E 95, 062115 (2017). sir_rewire J. Yoo, J. S. Lee, and B. Kahng, Physica A 390, 4571 (2011). universal D. Lee, W. Choi, J. Kertész, and B. Kahng, Sci. Rep. 7, 5723 (2017). indicator M. Scheffer et al., Nature 461, 53 (2009). bennaim E. Ben-Naim and P. L. Krapivsky, Phys. Rev. E 69, 050901 (2004). book S. H. Strogatz, Nonlinear Dynamics and Chaos (Addison-Wesley, New York, 1994). kcore1 J. Chalupa, P. L. Leath, and G. R. Reich, J. Phys. C 12, L31–L35 (1979). kcore2 S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. Lett. 96, 040601 (2006). kcore_goltsev A. V. Goltsev, S. N. Dorogovtsev, and J. F. F. Mendes, Phys. Rev. E 73, 056101 (2006). kcore3 G. J. Baxter, S. N. Dorogovtsev, K. E. Lee, J. F. F. Mendes, and A. V. Goltsev, Phys. Rev. X 5, 031017 (2015). kcore4 X. Yuan, Y. Dai, H. E. Stanley, and S. Havlin, Phys. Rev. E 93, 062302 (2016). kcore5 D. Lee, M. Jo, and B. Kahng, Phys. Rev. E 94, 062307 (2016). buldyrev S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, and S. Havlin, Nature 464, 1025 (2010). zhou D. Zhou, A. Bashan, R. Cohen, Y. Berezin, N. Shnerb, and S. Havlin, Phys. Rev. E 90, 012803 (2014). son S.-W. Son, P. Grassberger, and M. Paczuski, Phys. Rev. Lett. 107, 195702 (2011). baxter_mcc G. J. Baxter, S. N. Dorogovtsev, A. V. Goltsev, and J. F. F. Mendes, Phys. Rev. Lett. 109, 248701 (2012). bianconi D. Cellai, S. N. Dorogovtsev, and G. Bianconi, Phys. Rev. E 94, 032301 (2016). grassberger_mcc P. Grassberger, Phys. Rev.
E 91, 062806 (2015). watts D. J. Watts, Proc. Natl. Acad. Sci. (U.S.A.) 99, 5766 (2002). dodds P. S. Dodds and D. J. Watts, Phys. Rev. Lett. 92, 218701 (2004). | http://arxiv.org/abs/1706.08968v2 | {
"authors": [
"Wonjun Choi",
"Deokjae Lee",
"J. Kertész",
"Byungnam Kahng"
],
"categories": [
"q-bio.PE",
"physics.soc-ph"
],
"primary_category": "q-bio.PE",
"published": "20170627161249",
"title": "Two golden times in two-step contagion models"
} |
The effect of a guide field on local energy conversion during asymmetric magnetic reconnection: MMS observations Joshua C. Chang^1 Received 26th June 2017 / Accepted 25th January 2018 ================================================================================================================ This document presents an analysis of different load-balancing strategies for a plasma physics code that models high-energy particle beams with the PIC method. A comparison of different load-balancing algorithms is given: static and dynamic ones. Lagrangian and Eulerian partitioning techniques have been investigated.

§ INTRODUCTION

Particle-In-Cell (PIC) codes have become an essential tool for the numerical simulation of many physical phenomena involving charged particles, in particular beam physics, space and laboratory plasmas, including fusion plasmas. Genuinely kinetic phenomena can be modelled by the Vlasov-Maxwell equations, which are discretized by a PIC method coupled to a Maxwell field solver. Today's and future massively parallel supercomputers allow one to envision the simulation of realistic problems involving complex geometries and multiple scales. However, in order to achieve this efficiently, new numerical methods need to be investigated. This includes the investigation of: high-order, very accurate Maxwell solvers; the use of hybrid grids with several homogeneous zones, each having its own structured or unstructured mesh type and size; and a fine analysis of load-balancing issues. This paper improves a Finite Element Time Domain (FETD) solver based on high-order H(curl)-conforming elements and investigates the coupling to the particles. We focus on the management of hybrid meshes that mix structured and unstructured elements on parallel platforms. This work is the result of the multi-disciplinary research project HOUPIC <cit.>, bringing together the IRMA and LSIIT teams of the University of Strasbourg, INRIA Sophia-Antipolis, the CEA, the Paul Scherrer Institut, and IAG Stuttgart.

§ CONTEXT

The PIC method is widely used in the field of plasma simulation (<cit.>, <cit.> and <cit.>). In such software, the dynamics of a system of interacting charged particles is influenced by the presence of external fields. For the plasmas we are interested in, it is not computationally feasible to track the motion of every physical particle. Thus, simulations usually use "super-particles", each representing several physical particles. The particles and fields that describe the physical problem are represented by two computational objects: a set of "super-particles" and a mesh (structured, unstructured, or hybrid, mixing structured and unstructured elements) that discretizes the spatial domain in order to store the electric field E and magnetic field B. Each particle P registers within an element of the finite element mesh. Each particle knows the set of degrees of freedom, the sources and the fields within the surrounding element, denoted by FEM_P. The classical PIC method functionally decomposes into four distinct tasks, as depicted in figure <ref> from <cit.>, namely:

A. Assignment Phase: for each particle P, deposit the charge and current density into FEM_P. This step aims at collecting the total charge and current density induced by the particles.

B. Field Solve Phase: using the charge and current density found in the previous phase A, solve Maxwell's equations (time integration) through the Finite Element Method to determine the electric and magnetic fields, E and B.

C. Interpolation Phase: for each particle P, use the values of E and B given by the Maxwell solver to interpolate the force acting on P at the particle's actual position.

D. Push Phase: update the position of each particle under the influence of the force determined in the previous phase C.

These four tasks are performed at each time step of the simulation run. In the set of 2D test cases we are interested in, the Push step is often the most costly part. In the sequel, we will focus mainly on the parallelization and the optimization of this last phase (denoted D).
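Schematically, one simulation time step chains the four phases described above. The following minimal sketch is only illustrative: the object and method names (particles, mesh, deposit, solve_maxwell, interpolate, push, locate) are our assumptions, not the actual HOUPIC interfaces:

def pic_time_step(particles, mesh, dt):
    # A. Assignment: deposit the charge and current density of each
    #    particle into the degrees of freedom of its element (FEM_P)
    mesh.clear_sources()
    for p in particles:
        mesh.deposit(p.cell, p.charge, p.position, p.velocity)
    # B. Field solve: advance Maxwell's equations with the new sources
    mesh.solve_maxwell(dt)
    for p in particles:
        # C. Interpolation: evaluate E and B at the particle position
        E, B = mesh.interpolate(p.cell, p.position)
        # D. Push: move the particle under the Lorentz force, then
        #    re-register it in its (possibly new) element
        p.push(E, B, dt)
        p.cell = mesh.locate(p.position, hint=p.cell)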
§.§ Particle distribution methods

Two methods are generally used in Particle-in-Cell codes to distribute objects over a parallel machine: the Lagrangian and the Eulerian decomposition methods. This nomenclature is presented by Hoshino et al. <cit.> and described in detail by David W. Walker in <cit.>. The reason why the two types of decomposition exist is that a PIC code must decide whether to balance the load of the mesh first or of the particles first. Improving the particle distribution over processors reduces the cost of the push phase (step D), whereas a good distribution of finite elements enhances the performance of the field solver (step B).

The Eulerian method manages particle motion by displacing the particles among processors depending on their locations. Each processor handles a spatial sub-domain. Whenever a particle changes spatial sub-domain after a motion, this particle is pushed away from the processor and sent to the processor owning the new sub-domain. A consequence of this method is that each processor manages all particles located inside a relatively small sub-domain. This motion of particles between the different processors constitutes an overhead in the Push step.

The Lagrangian method lets each processor track the same set of particles during the whole simulation over the whole spatial domain. Even if the particles are initially in a small area, they can travel far away from their first location, depending on the fields they traverse. In this case, after a while, it is likely that one processor manages particles dispatched almost everywhere in the spatial domain. The main advantage of this method over the Eulerian one is that it limits the number of communications during the Push step, because particles remain on the same processor. Nevertheless, the execution time needed by each processor to realize the Push step is often larger than in the Eulerian case because of poor cache behavior. Indeed, in the Eulerian case, each processor manages particles driven by a small set of discrete fields included in the processor sub-domain, whereas in the Lagrangian case, the processor's set of particles can see all discrete fields of the spatial domain during the push step. Thus, there is much more spatial locality in the usage of field data in the Eulerian case than in the Lagrangian one. As far as field data are concerned, the cache memory usage is better with the Eulerian decomposition.

§.§ Dynamic load balancing strategies

In an Eulerian mapping, two kinds of methods exist concerning the load-balancing strategy. The first one, perhaps the most popular, uses a static external graph partitioner like Metis or Scotch (<cit.>). This external tool performs the partitioning of the mesh (and the spatial domain) at the beginning of the run; it can also be called off-line.
This processing can have a relatively high cost compared to the simulation runtime. It is possible, but not usual, to revise the partition during the simulation run, thus using the partitioner multiple times per simulation; it may lead to a penalizing synchronization between processors.

We chose to study an alternative solution to perform load balancing. We would like to consider a dynamic strategy to partition the spatial domain over the processors that is revised during the simulation. The main idea is to adapt the partition to changes in the macroscopic structure of the particle distribution. In this case, the partitioning tool is integrated directly into the simulator. We expect this tool to be cheap in execution time and to be used frequently to adapt the partition to the dynamics of the particles. We will focus hereafter on two methods implementing that kind of load-balancing strategy. The first one is a global partitioning method that considers the whole space domain and the computation cost (via a cost function) on each spatial unit. It is denoted URB (Unbalanced Recursive Bisection) in the literature. The second method we will look at is known as ORB-H. This method prevents the synchronization of all processors during a dynamic mapping phase. This feature could be valuable when working on large-scale platforms, compared to the previous URB algorithm, as we shall see afterward. In the two cases, URB and ORB-H, we choose to give the partitioner rectangular-shaped patches as input (for a domain in two dimensions). That means that each uncuttable rectangular patch encompasses a set of finite elements and particles. We determine the owner patch of each element depending on the barycenter location of the element. The partitioner sees neither finite elements nor particles, but only the number of particles and finite elements encapsulated within a patch. In this way, we avoid the difficulty of managing the complex data structures needed for finite elements, and we avoid the computation of a partition of a general mesh. Furthermore, this approach facilitates the tuning of a partition based on rectangular sub-domains (see <cit.>). We will evaluate this kind of partitioning method.

As we already said, this study focuses on the parallel management and distribution of particles in a PIC code in order to mainly enhance the performance of the Push, Gather and Scatter steps, and not on the field solver. As a perspective, we wish to extend our results to the load balance of all steps of the code, including the field solver. In order to do that, we will have to improve the cost function given to the partitioning algorithm to take into account the number of finite elements per patch. Suppose we want to consider test cases that include a costly Maxwell solver part; a small modification of the cost function will allow us to tackle this new configuration.

§.§ Patches

The figure <ref> illustrates a sketched view of our spatial domain. A regular grid divides the domain into patches. Each patch has a rectangular shape and a fixed size defined as a parameter of the simulation. A patch stores a set of finite elements (mesh information and field solver information) and particles. The first step of our simulation tool assigns each particle and each piece of finite-element information to one patch. We also store the belonging relation existing between particles and patches. This relation is updated after each particle motion. This sub-grid offers a way to simplify several mechanisms. First, the patches allow the parallel simulation to migrate finite elements and particles easily from one processor to another (from the programming and software-engineering point of view). The penalty in terms of performance and partitioning quality is low if the number of patches per processor remains large. Second, the localization of particles is trivial with this kind of Cartesian sub-gridding, which is a big advantage for a PIC code. Furthermore, one can derive a simple condition to detect when a particle leaves the processor sub-domain.
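This trivial localization boils down to integer arithmetic on the regular patch grid. A minimal sketch follows; the patch sizes dx, dy and the grid origin are simulation parameters, and the names are ours:

def patch_index(x, y, x_min, y_min, dx, dy, n_patches_x):
    """Return the index of the patch containing the point (x, y)
    on the regular Cartesian patch grid."""
    i = int((x - x_min) / dx)
    j = int((y - y_min) / dy)
    return j * n_patches_x + i

# A particle has left its processor as soon as patch_index(...) is no
# longer one of the patches owned by that processor.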
§.§ Initial partitioning

If the global space domain is a rectangle, one of the simplest mappings one can think of is a homogeneous decomposition of rectangular sub-domains over processors. For the time being, we do not consider the number of particles per patch as a constraint, but simply distribute the same number of patches per processor. Here, we do not care about the cost function but propose a reference mapping for comparison purposes. In several algorithms, the parallel domain decomposition is performed by building a tree. The data structure that constitutes a node of the tree is shown in figure <ref>. Each sub-domain is associated with a single processor. The field pid contains the processor identity. The following fields define a rectangular area which corresponds to the sub-domain given to the processor. The integer height stores the height of the cell in the tree of the hierarchical domain decomposition.

§ DIFFERENT STRATEGIES FOR LOAD BALANCING

The parallelisation of a PIC code relies to a large extent on the load-balancing strategy chosen and the partitioning scheme used. The partitioning scheme can be either hierarchical or non-hierarchical. In a hierarchical partitioning, the mapping is derived from a tree representation of sub-domains. One can directly use the tree to adapt the mapping to minor changes of the load. Hereafter, we will look at a few Recursive Coordinate Bisection (RCB) algorithms and at an incremental partitioning solution known as the dimension exchange algorithm.

§.§ Simple geometric partitioning algorithm

The first algorithm we look at to perform the load balancing uses an improved version of the algorithm presented by Berger and Bokhari <cit.>, known as Recursive Coordinate Bisection. The RCB algorithm divides each dimension alternately into two halves to obtain two sub-domains having approximately the same cost (using the user-defined cost function). The two halves are then further divided by applying the same process recursively. Finally, this algorithm provides a simple geometric decomposition for a number of processors equal to a power of two. This partitioning algorithm does not pay attention to the aspect ratio of the resulting sub-domains. If there is a large variation between the dimensions of the domain, one of the two sub-domain boundaries can be very large compared to the other. Since the communication volume is related to the size of the sub-domain boundary (phases A and C), large aspect ratios are undesirable. The Unbalanced Recursive Bisection (URB) is an improvement of the previous partitioning method proposed by Jones and Plassmann <cit.>. They propose to minimize the aspect ratio and to remove the constraint of using a power-of-two number of sub-domains. The main idea is to adjust the sub-domain sizes with respect to the estimated cost. In our PIC code, we do not manage the finite elements directly but simply rectangular patches, as described previously. We use the load per patch to dispatch them on the different nodes. The figure <ref> illustrates a URB decomposition process. Firstly, we begin with a big cell containing all the processors and the whole spatial domain. At the start of this algorithm, the first cell contains all patches. Recursively, we divide each remaining cell of size (n,m) into two parts, assigning to each new cell a distinct set of processors and distinct sub-domains. To divide the set of patches, we use the cost function to find the right cut.
We obtain two cells that have respectively the sizes (n-q, m) and (q, m) (if the cut is in the first dimension). The choice of the size q is determined using a simple method such as dichotomy, so as to share the load between the two cells in a fair and equitable way. In a second step, the created cells are divided again in the other dimension. We choose to perform the cuts alternately in each dimension. This property of alternation allows one to obtain sub-domains with quite small perimeters. This choice is worthwhile because the volume of communication between different processors <cit.> is proportional to the sub-domain perimeters during the scatter and gather phases. At the end of the recursion, the whole domain is split into parts of balanced load. A tree structure is generated that stores each cut and, finally, the data structure of figure <ref> in the leaves.

In our code, particles move between processors at each iteration of the run. It is therefore necessary to apply the partitioning algorithm regularly to maintain a good load balance. From one iteration to the next, the partitioning algorithm we just described can lead to very different mappings. Indeed, the algorithm does not try to keep or adjust the previous partition, but computes another partition from scratch. However, a big change of the partition can provoke a massive migration of objects between processors and can induce a degradation of application performance. We will address this problem in the next section.

§.§.§ URB limited migration

We have designed a simple solution to constrain the load-balancing process to use quite similar decompositions from one iteration to the other. Recall that the tree structure is built during the partitioning process. We propose to modify the existing tree slightly instead of building a new tree. Indeed, we adapt the algorithm to only authorize load exchange between cells belonging to the same branch of the tree, or up to a maximum level of the partitioning tree. We impose that only one frontier in each sub-domain is authorized to move during the URB algorithm. Consequently, we remove the risk of a global modification of the partition, and thus we limit the migration of data between processors. Nevertheless, if particle motions are large enough, and if the global load balance deteriorates too much, this version of load balancing that preserves locality could be inadequate. If ever this unusual configuration happens, it would be useful to call the standard URB load balancing to correct the imbalance. The difference between the classical URB and the modified version is shown in figure <ref>. Case A corresponds to the classical URB algorithm applied after a migration of the particles. One can see a large change of the domain decomposition: many patches and associated particles have to migrate to their new processors. In contrast, the limited-migration URB restrains the modifications of the tree to nodes higher than the third level. The result is that the new tree is close to the old one; only the boundaries of three sub-domains move. Finally, this new version of the algorithm fulfills our first objective. A remaining problem is the synchronization still required between all processors during the partitioning algorithm, which is a bad point for large parallel platforms. The tree is unique for all processors and is obtained from the knowledge of the loads computed in each patch. This adds a communication step that can penalize the global performance of the application. In the sequel, we propose a local algorithm that computes a partitioning dynamically while avoiding global synchronization.
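Before turning to that local algorithm, the recursive weighted bisection at the heart of both URB variants can be sketched on the per-patch cost array as follows. This is a simplified sketch of our own: the real code searches the cut by dichotomy, stores the cuts in the tree described above, and handles the limited-migration constraint:

import numpy as np

def urb(cost, procs, axis=0):
    """Recursively split a 2-D array of per-patch costs among procs,
    alternating the cut dimension, so that each side receives a load
    proportional to its number of processors."""
    if len(procs) == 1:
        return [(procs[0], cost.sum())]          # leaf: one sub-domain
    half = len(procs) // 2
    target = cost.sum() * half / len(procs)      # load wanted on side one
    acc = np.cumsum(cost.sum(axis=1 - axis))     # load of the first q slices
    q = int(np.argmin(np.abs(acc - target))) + 1 # best cut position
    left, right = np.split(cost, [q], axis=axis)
    return (urb(left, procs[:half], 1 - axis) +
            urb(right, procs[half:], 1 - axis))  # alternate the dimension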
Finally, this new version of the algorithm fulfils our first objectives. A remaining problem is the synchronization still required between all processors during the partitioning algorithm, which is a weak point on large parallel platforms: the tree is unique for all processors and is obtained from the knowledge of the loads computed in each cell. This adds a communication step that can penalize the global performance of the application. In the sequel, we propose a local algorithm that computes a partitioning dynamically while avoiding global synchronization.

§.§ A diffusion algorithm for the load balancing

A communication bottleneck is a major drawback for a parallel algorithm. The previous algorithms need a gather step to share information and require a synchronization: all processors wait for each other and, in the worst cases, the time lost can be significant. A simple way to overcome this bottleneck is to avoid the global communication and replace it with communications towards neighboring processors only. For load-balancing purposes, Cybenko <cit.> described an incremental algorithm that uses the first-order diffusion equation. At each simulation iteration, this diffusion algorithm proceeds to local exchanges of load between close processors. It converges asymptotically (for an infinite number of iterations) towards a uniform load balance. Many improvements of this algorithm have been published, and we highlight the papers <cit.> for the practical descriptions they give.

w^t+1_i = w^t_i + ∑_j α_i,j (w^t_j - w^t_i) + η^t+1_i - c_i

The algorithm is based on the diffusion equation (<ref>), which estimates the diffusion of the load between processors in the framework of dynamic load balancing. In this formula, w^t_i corresponds to the load of processor i at time t, η^t+1_i is the load created on processor i at time t, and c_i is the load consumed by a processor during one iteration. In our application, the constants c_⋆ are always zero because the work to perform is the same at each time step: work does not diminish along the time dimension.

In this algorithm, all processors can exchange load at each iteration. But sometimes it is not possible to exchange load easily without paying large communication costs. At the utmost, we can even impose that a node i exchanges load with only one node j at an iteration t, in order to reduce the amount of communication. This solution was proposed by Cybenko <cit.> and is known as the dimension exchange algorithm. It pairs the processors two by two to reach the asymptotic stability state <cit.>. For our simulation, we fix the parameters α_*,* to 1/2 uniformly and η^*_* to 0. Formula <ref> then becomes:

w^t+1_i = w^t_i + 1/2 (w^t_j - w^t_i)

The scheme of figure <ref> shows a 4x4 grid of processors; the capability of load exchange is sketched by a link between two nodes. Using a colored graph, it is easy to deduce which processors exchange load at iteration t. Figure <ref> describes a domain in two dimensions in which the processors can be classified into two sets. The processors of the first set, placed in the middle of the domain, have all of their communication channels with other processors; in the context of figure <ref>, this class contains the processors 6, 7, 10 and 11. For example, processor 6 can exchange data with processor 5 when mod(it_number,4) = 1, with processor 7 when mod(it_number,4) = 2, and with processors 2 and 10 in the corresponding cases. The other class of processors (numbers 1, 2, 3, 4, 5, 8, 9, 12, 13, 14, 15 and 16) usually behaves like the first class; but since these processors do not have all the communication channels, the result of mod(it_number,4) can designate a channel that is not available on the processor (like channel 1 for processor 1). A minimal sketch of this pairing rule is given below.
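The following is a serial emulation of the pairing rule on a 4x4 processor grid (a minimal sketch; the exact order in which the four channels are cycled is an illustrative assumption, not necessarily the convention of the figure):

import numpy as np

def dimension_exchange_step(w, it):
    """One dimension-exchange iteration on an n x n grid of processors.
    Each processor is paired with at most one neighbor; the paired loads
    are averaged, i.e. w_i <- w_i + (w_j - w_i) / 2 (formula above)."""
    n = w.shape[0]
    axis = (it % 4) // 2      # alternate horizontal / vertical channels
    parity = it % 2           # alternate which disjoint neighbor pairs form
    new = w.copy()
    for i in range(parity, n - 1, 2):
        if axis == 0:         # pair columns (i, i+1)
            new[:, i] = new[:, i + 1] = 0.5 * (w[:, i] + w[:, i + 1])
        else:                 # pair rows (i, i+1)
            new[i, :] = new[i + 1, :] = 0.5 * (w[i, :] + w[i + 1, :])
    return new                # unpaired processors skip this iteration

rng = np.random.default_rng(2)
w = rng.integers(0, 100, size=(4, 4)).astype(float)
for it in range(40):
    w = dimension_exchange_step(w, it)
print(np.round(w, 2))         # the loads converge towards the uniform mean

Unpaired processors (for instance an edge column for the current parity) simply skip the exchange, which mirrors the behaviour described in the text.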
In this case, the processor simply does not participate in the load-balancing step for this iteration.

A specific design is needed to take into account the distributed nature of this algorithm. As we described in the previous section, the first partitioning strategy is not very convenient for exchanging load between different neighbors: the only allowed exchanges are between cells at the same level or in the same branch of the partitioning tree. As we want more possibilities to exchange load, we preferred to use another partitioning strategy, the one proposed by M. Campbell, A. Carmona and W. Walker in <cit.>. This strategy divides each dimension separately and does not need another data structure to store the global scheme of the partitioning. Figure <ref> presents the ORB-H decomposition for the example already presented in <ref>. Scheme <ref> shows that, unlike in the URB partitioning, not all the borders are independent. The exchange of load cannot be realized in all dimensions at a time: only the last partitioned dimension can easily be used for the load-balancing process (in 1D). In practice, for our application, each node can choose between at most two nodes at each iteration to proceed to a load exchange.

The work domain is the same as in the previous case: the domain is 2D and uses a first regular decomposition into cells. The initialization is realized in two steps. First, the algorithm divides the domain into equally sized parts using one-dimensional load balancing only; the choice of the first dimension dividing the global domain is important and must be made consistently with the particle motions. At the next iteration, the process divides the second dimension into parts of equivalent load.

During the global process, particles move in space and a situation of load imbalance can appear. With the previous algorithms, this situation must be solved by a global load-balancing process involving all nodes. In the case of a distributed algorithm, only the nodes affected by this situation try to solve it with their closest neighbors. The dimension exchange algorithm uses the following principles:

* one node can exchange its load with only one neighbor in the same sub-iteration of load balancing;
* the volume of the load exchange is given by formula <ref>; in our case, the volume corresponds to the number of cells that can migrate between two nodes.

With the simple scheme (figure <ref>), the exchange of load is possible between nodes 1 and 2, and between nodes 3 and 4; node 5 does not perform a load-balancing step at this first iteration. At the next load-balancing iteration, the exchange of data uses the same configuration for the first column, but for the second, only nodes 4 and 5 exchange load. Less frequently, the column of cells 1 and 2 can exchange load with the column of cells 3, 4 and 5. We have implemented a predictive method that chooses the pairs of nodes exchanging their load using the parity of the node. The implementation of this algorithm uses the same structure as the URB algorithm.

§ RESULTS

§.§ Numerical experiments

We have tested our algorithms in many situations, considering a few parameter variations. To limit the number of cases to run, we fixed a number of parameters: the number of particles (eighty million), the number of iterations to be computed (one thousand) and the type of test case. We studied the case of the "two-stream instability" <cit.> in two dimensions. This test case is periodic, so the number of particles (equivalent to the cost-function load in this study) stays the same along the run: there is no loss of particles.
We have tested our simulation using two distinct types of mesh, quadrangles and unstructured meshes; a view of these two types of mesh is shown in figure <ref>. The parameters used to study the efficiency of the different load-balancing algorithms are the following: we varied the number of vertices and edges of the mesh, and the type of mesh (triangles or quads). We used the dedicated high-performance facilities of the University of Strasbourg. This cluster has 512 processors distributed over 64 bi-processor nodes (Intel Opteron 2384).

§.§ Dimensional study for the two kinds of mesh

A first comparison aims to show the behavior of the several algorithms depending on the mesh size. This test uses the common setting with a grid of 4 × 8 processors. The two varying parameters are the type of mesh and its size (16 × 16 up to 128 × 128). The results and the sizes of the meshes are presented in table <ref>. They show the very good behaviour of our implementation with respect to the size: increasing the mesh size does not induce an increase of the execution time, which is mainly prescribed by the number of particles. Indeed, the global time of a simulation is relatively independent of the mesh size because the time spent in the field-solve phase (see figure <ref>) is small, around five to ten percent of the global time of an iteration.

§.§ Comparison between different methods for particle management

This comparison aims at showing the difference between the two methods (Lagrangian and Eulerian) depending on the partitioning algorithm. As shown in section <ref>, the main difference between these two methods is that the spatial locality effect in the cache can be favored: particles that are close to each other use the same element data, which favors cache effects. There is also a benefit brought by avoiding particle migrations. This test uses the two partitioning methods described previously and illustrated in figure <ref>. A small sketch illustrating the locality argument is given after the results below.

To show the main differences between these two methods, we compared the global time of one simulation and the time needed for one iteration during the simulation, for each load-balancing algorithm, in static and dynamic versions. Table <ref> presents the global computation time for each case of the study. The main finding concerns the differences between static and dynamic load balancing. In the Eulerian case, the difference is non-existent because this method of particle management allows the localized push of particles within each processor. For the Lagrangian method, the static version is very penalizing with respect to the particle motion cost. In this case, each processor keeps the same set of particles during the whole simulation run; after many computation iterations, each processor manages particles dispersed over the whole simulation domain. This context does not facilitate the optimization of the execution and induces an increase by a factor of at least two of the global simulation time.

Figure <ref> presents the computation time for each iteration in the previous context. In both the static and the dynamic case, the Eulerian method (P3) is more efficient than the Lagrangian method (P1). This optimization is clearly visible in the static case. Indeed, in the dynamic case, particles are re-localized and re-distributed at each load-balancing step. The improvement brought by the Eulerian dynamic method is around 15% on the execution time compared to the Eulerian static one.
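For illustration only (this is not the authors' implementation): the cache-locality argument can be made concrete by keeping particles sorted by owning cell, so that a push loop loads each cell's field data once and walks a contiguous slice of the particle arrays. The layout below is an assumption of this sketch:

import numpy as np

# Illustrative Eulerian layout: particles stored contiguously, sorted by
# owning cell, so one pass over the cells reuses each cell's field value.
n_cells, n_part = 16, 10_000
rng = np.random.default_rng(3)
x = rng.random(n_part)                    # particle positions in [0, 1)
cell = np.minimum((x * n_cells).astype(int), n_cells - 1)

order = np.argsort(cell, kind="stable")   # bucket the particles by cell
x_sorted = x[order]
start = np.searchsorted(cell[order], np.arange(n_cells + 1))

E = rng.standard_normal(n_cells)          # toy per-cell field value
dt = 1e-3
v_sorted = np.zeros(n_part)
for c in range(n_cells):                  # E[c] is loaded once per cell and
    s, e = start[c], start[c + 1]         # the particle slice is contiguous,
    v_sorted[s:e] += dt * E[c]            # which is the locality being favored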
§.§ Comparison between static and dynamic load balancing for the two types of mesh

This test shows the difference in performance of our simulator for the two types of mesh. The benchmark uses the same common configuration as all the previous ones. We chose to run it with a mesh size of 128 × 128, using the Eulerian distribution method to manage the particles. The regular mesh is composed of 16641 vertices and 16384 quads; the unstructured mesh is composed of 18222 vertices and 35930 triangles. The results are detailed in table <ref>. In conclusion, a difference in performance exists but it is not very large: the maximum difference between two execution times is about ten percent. This small performance difference between the two types of mesh is explained by the fact that most of the time of one iteration is consumed by the particle-push function (neighborhood management and communications represent less than 10 percent).

§ CONCLUSION

We have studied in this work several load-balancing algorithms for a PIC code using static or dynamic partitioning. We have shown that dynamic management is a key point to optimize the overall performance of the application. However, we have not observed a large difference between the load-balancing algorithms under investigation: the measured times remain close to each other for the considered algorithms. The localization of the exchanges, with the limited-migration version of the URB partitioning or with ORB-H, did not lead to the improvement we hoped for. Two facts can explain this. The first concerns our development of the parallel algorithms: some synchronization points that hurt performance should be removed. The second is that the use cases are maybe not well designed to evaluate the different load-balancing algorithms (not enough particles moving around). The perspectives are, first, to find another use case to evaluate the load-balancing algorithms with accuracy and, second, to study the load balancing for this PIC code over a 3D mesh instead of a 2D one. | http://arxiv.org/abs/1706.08362v1 | {
"authors": [
"Marc Sauget",
"Guillaume Latu"
],
"categories": [
"cs.DC"
],
"primary_category": "cs.DC",
"published": "20170626132946",
"title": "Dynamic Load Balancing for PIC code using Eulerian/Lagrangian partitioning"
} |
http://arxiv.org/abs/1706.08647v1 | {
"authors": [
"Keita Hamamoto",
"Motohiko Ezawa",
"Kun Woo Kim",
"Takahiro Morimoto",
"Naoto Nagaosa"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170627020928",
"title": "Nonlinear spin current generation in noncentrosymmetric spin-orbit coupled systems"
} |
|
Multi-Label Learning with Label Enhancement Ruifeng Shao, Ning Xu, Xin Geng MOE Key Laboratory of Computer Network and Information Integration, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. Email: {shaorf, xning, xgeng}@seu.edu.cn December 30, 2023 ======================================================================================================================================================================================================================================================================================== The task of multi-label learning is to predict a set of relevant labels for an unseen instance. Traditional multi-label learning algorithms treat each class label as a logical indicator of whether the corresponding label is relevant or irrelevant to the instance, i.e., +1 represents relevant to the instance and -1 represents irrelevant to the instance. Such a label represented by -1 or +1 is called a logical label. Logical labels cannot reflect different label importance. However, for real-world multi-label learning problems, the importance of each possible label is generally different. In real applications, it is difficult to obtain the label importance information directly, so we need a method to reconstruct the essential label importance from the logical multi-label data. To solve this problem, we assume that each multi-label instance is described by a vector of latent real-valued labels, which can reflect the importance of the corresponding labels. Such a label is called a numerical label. The process of reconstructing the numerical labels from the logical multi-label data, utilizing the logical label information and the topological structure of the feature space, is called Label Enhancement. In this paper, we propose a novel multi-label learning framework called LEMLL, i.e., Label Enhanced Multi-Label Learning, which incorporates regression of the numerical labels and label enhancement into a unified framework. Extensive comparative studies validate that the performance of multi-label learning can be improved significantly with label enhancement and that LEMLL can effectively reconstruct latent label importance information from logical multi-label data. multi-label learning, label importance, label enhancement

§ INTRODUCTION

The task of multi-label learning is to predict a set of relevant labels for an unseen instance. During the past years, multi-label learning techniques have been widely applied to various fields such as document classification <cit.>, video concept detection <cit.>, image classification <cit.>, audio tag annotation <cit.>, etc.

Formally speaking, let 𝒳 = ℛ^d be the d-dimensional feature space and 𝒴 = {y_1, y_2, ..., y_l} be the label set with l possible labels. Given a multi-label training set 𝒟 = {(x_i, y_i) | 1 ≤ i ≤ n}, where x_i ∈ 𝒳 is the d-dimensional feature vector and y_i ∈ {-1,+1}^l is the label vector, the task of multi-label learning is to learn a multi-label predictor mapping from the space of feature vectors to the space of label vectors <cit.>. Traditional multi-label learning approaches treat each class label as a logical indicator of whether the corresponding label is relevant or irrelevant to the instance, i.e., +1 represents relevant to the instance and -1 represents irrelevant to the instance. Such a label represented by -1 or +1 is called a logical label.
Furthermore, traditional approaches take the common assumption of equal label importance, i.e., the relative importance between relevant labels is not differentiated <cit.>.

For real-world multi-label learning problems, the importance of each possible label is generally different. In detail, the difference in label importance can be two-fold: 1) relevant label variance, i.e., different labels relevant to the same instance have different relevance levels; 2) irrelevant label variance, i.e., different labels irrelevant to the same instance have different irrelevance levels. For example, consider Fig. 1, an image with five possible labels: sky, desert, tree, camel and fish, for which the annotator provides the logical label vector [+1,+1,+1,-1,-1]^T. Regarding the relevant label variance, the importance of desert should be greater than that of tree and sky, because desert describes the image most prominently. Regarding the irrelevant label variance, the importance of camel should be greater than that of fish: although neither is shown in the image, fish is clearly more irrelevant to this picture than camel.

As mentioned above, a logical label uses +1 or -1 to describe each instance, which cannot reflect different label importance; a logical label can thus be viewed as a simplification of the instance's essential class description. However, for real-world applications, it is difficult to obtain the label importance information directly. We thus need a method to reconstruct the latent label importance information from the logical multi-label data. To reconstruct the essential class description of each instance, we assume that each multi-label instance is described by a vector of latent real-valued labels, which can reflect the importance of the corresponding labels. Such a label is called a numerical label. The process of reconstructing the numerical labels from the logical multi-label data, utilizing the logical label information and the topological structure of the feature space, is called Label Enhancement (LE).

In this paper, we propose an effective multi-label learning approach based on LE named Label Enhanced Multi-Label Learning (LEMLL). In our approach, we formulate the problem by incorporating regression of the numerical labels and label enhancement into a unified framework, where the numerical labels and the predictive model are jointly learned.

§ RELATED WORK

Multi-label learning approaches can be roughly grouped into three types based on the order of label correlations they consider <cit.>. The simplest ones are the first-order approaches, which decompose the problem into a series of binary classification problems, one for each label <cit.>; they neglect the fact that the information of one label may be helpful for learning another label. The second-order approaches consider the correlations between pairs of class labels <cit.>, but second-order approaches such as CLR <cit.> and RankSVM <cit.> only focus on the difference between relevant and irrelevant labels. The high-order approaches consider the correlations among label subsets or among all the class labels <cit.>. All of these approaches take the equal-label-importance assumption. In contrast, our approach assumes that each instance is described by a vector of latent real-valued labels and that the importance of the possible labels is different.

There have been some supervised learning tasks using label importance information (e.g., label distributions) as supervision information.
In Label Distribution Learning (LDL) <cit.>, the label distribution covers a number of labels, representing the degree to which each label describes the instance; the value of each label is thus numerical. The aim of LDL is to learn a model mapping from the feature space to the label distribution space. In Label Ranking (LR) <cit.>, the label ranking of each instance describes different importance levels between labels. The goal of LR is to learn a function mapping from an instance space to rankings (total strict orders) over a predefined set of labels. However, the training of LDL or LR requires the availability of the label distributions or the label rankings in the training set. In real applications, it is difficult to obtain such label importance information directly. On the contrary, LEMLL does not assume the availability of such explicit label importance information in the training set: LEMLL can reconstruct the label importance information automatically from the logical multi-label data, while LR and LDL cannot explicitly preprocess logical labels into numerical labels. Therefore, LEMLL differs from these two existing tasks.

There have been some existing works which learn from multi-label data with auxiliary label importance information. According to <cit.>, Multi-Label Ranking (MLR) can be understood as learning a model that associates with a query input x both a ranking and a bipartition of the label set into relevant and irrelevant labels; a label ranking and a bipartition are given explicitly and are accessible to the MLR algorithm. In <cit.>, graded multi-label classification allows for graded membership of an instance belonging to a class label: an ordinal scale is assumed to characterize the membership degree, and an ordinal grade is assigned for each label of each training example. In <cit.>, a full ordering is assumed to be known to rank the relevant labels of each training example. In all these cases, the auxiliary label importance information is explicitly given and accessible to the learning algorithm. It is therefore clear that LEMLL differs from these existing works, since it does not assume the availability of such explicit information.

Although there is no explicit definition of LE in the existing literature, some methods with a similar function have been proposed in the past years. In <cit.> and <cit.>, the membership degrees to the labels are constructed via fuzzy clustering <cit.> and a kernel method, respectively; however, these two methods have not been applied to multi-label learning. There have also been some multi-label learning algorithms based on LE. In <cit.>, a label propagation procedure over the training instances is used to constitute the label distributions from the logical multi-label data. In <cit.>, the label manifold is explored to transfer the logical labels into real-valued labels. In <cit.>, numerical labels are reconstructed by exploiting the structure of the feature space via sparse reconstruction. These related works are all two-stage approaches: the numerical labels are first reconstructed, and the predictive model is then trained on the reconstructed labels. In two-stage approaches, the results of model training cannot influence label enhancement. In contrast, the LEMLL method is a single-stage learning algorithm where the numerical labels and the predictive model are jointly learned; the training of the predictive model and the label enhancement are therefore interrelated in LEMLL.
The contribution of this paper is to propose a single-stage learning strategy that jointly learns to reconstruct the numerical labels and to train the predictive model. Compared with two-stage approaches, the LEMLL method has several advantages: 1) LEMLL can reconstruct better latent label importance information; 2) the learning process is single-stage, using label enhancement regularizers; 3) LEMLL has better predictive performance.

§ THE LEMLL APPROACH

§.§ The LEMLL Framework

Let 𝒳 = ℛ^d be the input space; the label space with l logical labels can be expressed as {-1, +1}^l. The training set of multi-label learning can be described as 𝒟 = {(x_1, y_1), ..., (x_n, y_n)}. Following the discussion above, we assume that the class description of each instance is a vector of numerical labels. We use u_i ∈ 𝒰 = ℛ^l to denote the latent numerical label vector of the instance x_i. To learn a model mapping from the input space to the numerical label space, i.e., f : 𝒳 → 𝒰, we assume that f is a linear model:

p_i = Θφ(x_i) + b,

where φ(x_i) is a nonlinear transformation of x_i to a higher-dimensional feature space ℛ^ℋ, Θ ∈ ℛ^l×ℋ and b ∈ ℛ^l×1 are the parameter matrices of the regression model, and p_i is the predicted numerical label vector.

Aiming at learning a model mapping from the input space to the numerical label space, a regression model can be trained by solving the following problem:

min_Θ, b, U ∑_i=1^n L_r(r_i) + R,

where L_r is a loss function, R denotes the regularizers, r_i = ‖ξ_i‖_2 = √(ξ_i^T ξ_i), ξ_i = μ_i - p_i and U = [μ_1, ..., μ_n]^T.

To consider all dimensions in a unique restriction and yield a single support vector for all dimensions, the Vapnik ε-insensitive loss based on the ℓ_2-norm is used for L_r, i.e.,

L_r(r) = 0 for r < ε, and L_r(r) = r^2 - 2rε + ε^2 = (r - ε)^2 for r ≥ ε,

which creates an insensitive zone of width ε around the estimate, i.e., residuals r smaller than ε are ignored. Because of the nonzero value of ε, the solution takes all outputs into account to construct each individual regressor; in this way, the cross-output relations are exploited. Furthermore, the regression model can return a sparse solution.

To control the complexity of the model, we define the following regularizer:

R_1(Θ) = ‖Θ‖_F^2,

where ‖Θ‖_F denotes the Frobenius norm of the matrix Θ.

§.§.§ Label Enhancement Regularizers

The information of the feature space and of the logical label space should be used to reconstruct the numerical labels of each instance. We therefore make two assumptions for label enhancement: 1) the numerical labels should be close enough to the original labels; 2) the numerical label space and the feature space should share a similar local topological structure.

As mentioned above, a logical label can be viewed as a simplification of a numerical label. Intuitively, the original label contains some information about the numerical label, so the original label cannot differ too much from the numerical label. This gives the first assumption, and we define the following regularizer:

R_2(U, Y) = ‖U - Y‖_F^2,

where Y = [y_1, ..., y_n]^T is the logical label matrix.

According to the smoothness assumption <cit.>, points close to each other are more likely to share a label. We can easily infer that points close to each other in the feature space are likely to have similar numerical label vectors. This intuition leads to the second assumption.
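Before formalizing the second assumption, here is a quick illustration of the ε-insensitive loss L_r of Eq. (3) (a minimal sketch, not the authors' code; the sample values are arbitrary):

import numpy as np

def eps_insensitive_l2(U, P, eps=0.1):
    """Vapnik eps-insensitive loss on the l2-norm of the multi-output
    residual: zero inside the eps-tube, (r - eps)^2 outside, so all label
    dimensions share a single tube and a single support vector."""
    r = np.linalg.norm(U - P, axis=1)          # r_i = ||u_i - p_i||_2
    return np.where(r < eps, 0.0, (r - eps) ** 2).sum()

U = np.array([[0.9, -0.8], [0.2, 0.1]])        # numerical labels
P = np.array([[0.85, -0.82], [0.6, -0.4]])     # model predictions
print(eps_insensitive_l2(U, P))                # only the second row contributes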
The topological structure of the feature space can be expressed by a fully connected graph 𝒢 = (V, E, W), where V is the vertex set of the training instances, i.e., V = {x_i | 1 ≤ i ≤ n}, E is the edge set in which e_ij represents the relationship between x_i and x_j, and W is the weight matrix in which each element W_ij represents the weight of the edge e_ij. To estimate the local topological structure of the feature space, the local neighborhood information of each instance should be used to construct the graph 𝒢. According to Locally Linear Embedding (LLE) <cit.>, each point can be reconstructed by a linear combination of its neighbors. An approximation of the topological structure of the feature space can be obtained by solving the following problem:

min_W ∑_i=1^n ‖x_i - ∑_j≠i W_ij x_j‖^2 s.t. ∑_j=1^n W_ij = 1,

where W_ij = 0 if x_j is not one of x_i's K-nearest neighbors, and the constraint ∑_j W_ij = 1 enforces translation invariance. Eq. (6) can be transformed into n quadratic programming problems:

min_W_i W_i^T G_i W_i s.t. 1^T W_i = 1,

where (G_i)_jk = (x_i - x_j)^T (x_i - x_k). Because the feature space and the numerical label space should share a similar local topological structure, we define the following regularizer:

R_3(W, U) = ‖U - WU‖_F^2 = tr(U^T M U),

where M = (I - W)^T (I - W) and I is an identity matrix; for a matrix A, tr(A) denotes its trace.

By replacing R in Eq. (2) with Eqs. (4), (5) and (8), the framework can be rewritten as:

min_Θ, b, U ∑_i=1^n L_r(r_i) + α‖Θ‖_F^2 + β‖U - Y‖_F^2 + γ tr(U^T M U)
s.t. r_i = ‖ξ_i‖_2 = √(ξ_i^T ξ_i), ξ_i = μ_i - Θφ(x_i) - b,
L_r(r) = 0 for r < ε and L_r(r) = r^2 - 2rε + ε^2 for r ≥ ε,

where α, β and γ are tradeoff parameters.

§.§ The Alternating Solution for the Optimization

When we fix U to solve for Θ and b, Eq. (9) reduces to:

min_Θ, b ∑_i=1^n L_r(r_i) + α‖Θ‖_F^2
s.t. r_i = ‖ξ_i‖_2 = √(ξ_i^T ξ_i), ξ_i = μ_i - Θφ(x_i) - b,

with L_r as above. Notice that Eq. (10) is an MSVR with the Vapnik ε-insensitive loss based on the ℓ_2-norm <cit.>, so Θ and b can be obtained by training an MSVR model.

When we fix Θ and b to solve for U, the objective function becomes:

L(U) = ∑_i=1^n L_r(r_i) + β‖U - Y‖_F^2 + γ tr(U^T M U).

We use an iterative quasi-Newton method called Iterative Re-Weighted Least Squares (IRWLS) <cit.> to minimize L(U). First, L_r(r_i) is approximated by its first-order Taylor expansion at the solution of the current k-th iteration, denoted by U^(k):

L_r'(r_i) = L_r(r_i^(k)) + (dL_r(r)/dr)|_r_i^(k) · ((ξ_i^(k))^T / r_i^(k)) (ξ_i - ξ_i^(k)),

where ξ_i^(k) and r_i^(k) are calculated from U^(k). A quadratic approximation is then constructed:

L_r''(r_i) = L_r(r_i^(k)) + (dL_r(r)/dr)|_r_i^(k) · (r_i^2 - (r_i^(k))^2) / (2 r_i^(k)) = a_i r_i^2 + τ,

where

a_i = (1/(2 r_i^(k))) (dL_r(r)/dr)|_r_i^(k) = 0 for r_i^(k) < ε, and a_i = (r_i^(k) - ε)/r_i^(k) for r_i^(k) ≥ ε,

and τ is a constant term that does not depend on U. Substituting Eqs. (13) and (14) into Eq. (11), the objective function becomes:

L''(U) = ∑_i=1^n a_i r_i^2 + β‖U - Y‖_F^2 + γ tr(U^T M U) + ν = tr(Ξ^T D_a Ξ) + β‖U - Y‖_F^2 + γ tr(U^T M U) + ν,

where Ξ = [ξ_1, ..., ξ_n]^T = U - P, P = [p_1, ..., p_n]^T, (D_a)_ij = a_i Δ_ij (Δ_ij is the Kronecker delta) and ν is a constant term. Furthermore, Eq. (15) can be rewritten as:

L''(U) = tr(U^T (D_a + βI + γM) U) - 2 tr((D_a P + βY) U^T) + ν',

where ν' is a constant term. The minimization of Eq. (16) is solved by setting the derivative of the target function with respect to U to zero:

∂L''(U)/∂U = 2 (D_a + βI + γM) U - 2 (D_a P + βY) = 0.

Solving Eq. (17), we get

U = (D_a + βI + γM)^-1 (D_a P + βY).

The direction given by Eq. (18) is used as the descent direction for the minimization of Eq. (11); a minimal numerical sketch of this update is given below.
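The following sketch implements one such update (Eqs. (14) and (18)). The toy data, the crude row-stochastic matrix standing in for the LLE weights of Eq. (6), and the parameter values are illustrative assumptions, and the line search used in the paper is omitted here:

import numpy as np

def irwls_update_U(U, P, Y, M, beta=1.0, gamma=1.0, eps=0.1):
    """One IRWLS step of the U-subproblem: build the diagonal weights a_i
    from the current residuals (Eq. 14), then solve the linear system
    (D_a + beta*I + gamma*M) U = D_a P + beta*Y of Eq. (18)."""
    r = np.linalg.norm(U - P, axis=1)                  # r_i = ||u_i - p_i||
    a = np.where(r < eps, 0.0, (r - eps) / np.maximum(r, 1e-12))
    Da = np.diag(a)
    A = Da + beta * np.eye(len(a)) + gamma * M
    return np.linalg.solve(A, Da @ P + beta * Y)

# Toy problem: n = 5 instances, l = 3 labels.
rng = np.random.default_rng(0)
n, l = 5, 3
Y = np.sign(rng.standard_normal((n, l)))               # logical labels
P = rng.standard_normal((n, l))                        # current predictions
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)                      # rows sum to 1
M = (np.eye(n) - W).T @ (np.eye(n) - W)
U = Y.copy()
for _ in range(10):          # in LEMLL this alternates with re-fitting
    U = irwls_update_U(U, P, Y, M)                     # the MSVR model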
The solution for the next iteration, U^(k+1), is obtained via a line search algorithm along this direction. The pseudo-code of the LEMLL algorithm is presented in Algorithm 1.

In order to distinguish relevant from irrelevant labels, the numerical labels must be divided into two sets, i.e., the relevant and the irrelevant set. Following <cit.> and <cit.>, an extra virtual label y_0 is added to the original label set, i.e., the extended label set is 𝒴' = 𝒴 ∪ {y_0} = {y_0, y_1, ..., y_l}. In this paper, the logical value of y_0 is set to 0. Training on the extended label set yields the optimal parameter matrices Θ^* ∈ ℛ^(l+1)×ℋ and b^* ∈ ℛ^(l+1)×1. Given a test instance x, the model predicts an extended numerical label vector p^*: the predicted numerical labels greater than p_0^* are considered relevant to the example, and the labels smaller than p_0^* are considered irrelevant.

§ EXPERIMENTS

This section is divided into two parts. In the first part, we evaluate the predictive performance of our method on multi-label data sets. In the second part, we reconstruct the label importance information from the logical labels via the LE methods, and then compare the recovered label importance with the ground-truth label importance.

§.§ Predictive Performance Evaluation

§.§.§ Experimental Settings

For a comprehensive performance evaluation, a total of fifteen benchmark multi-label data sets from Mulan <cit.> and Meka <cit.> are collected for the experimental studies. For a data set S, we use |S|, dim(S), L(S), F(S), LCard(S), LDen(S), DL(S) and PDL(S) to represent its number of examples, number of features, number of class labels, feature type, label cardinality, label density, number of distinct label sets and proportion of distinct label sets, respectively. Table I summarizes the characteristics of the fifteen data sets.

To examine the effectiveness of label enhancement, LEMLL is first compared with MSVR <cit.>, which can be considered a degenerated version of LEMLL without label enhancement. Besides, three well-established two-stage approaches are employed for comparative studies, each implemented with the parameter setup suggested in the respective literature: 1) Multi-label Learning with Feature-induced labeling information Enrichment (MLFE) <cit.> [suggested setup: ρ=1, c_1=1, c_2=2, and β_1, β_2 and β_3 chosen among {1,2,...,10}, {1,10,15} and {1,10} respectively]; 2) Multi-Label Manifold Learning (ML^2) <cit.> [suggested setup: K = l+1, λ = 1, C_1 and C_2 chosen among {1, 2, ..., 10}]; 3) RElative Labeling-Importance Aware multi-laBel learning (RELIAB) <cit.> [suggested setup: α = 0.5, β chosen among {0.001, 0.01, ..., 10}, τ chosen among {0.1, 0.15, ..., 0.5}]. Besides, we compare the performance of LEMLL against three state-of-the-art algorithms, including one first-order approach, ML-kNN <cit.>, one second-order approach, Calibrated Label Ranking (CLR) <cit.>, and one high-order approach, Ensemble of Classifier Chains (ECC) <cit.>. For these three comparing algorithms, the parameter configurations suggested in the literature are used: for ML-kNN, k is set to 10; the ensemble size of ECC is set to 30. The three state-of-the-art comparing algorithms are implemented under the Mulan multi-label learning package <cit.>, instantiating the base learners of CLR and ECC with logistic regression. For LEMLL, K is set to 10, ε is set to 0.1, and α, β and γ are all chosen among {1/64, 1/16, 1/4, 1, 4, 16, 64} with cross-validation on the training set.
For the sake of fairness, a linear kernel is used in MSVR, ML^2 and LEMLL.

Five widely-used evaluation metrics are used in the comparative studies: Hamming loss (HL), Ranking loss (RL), One-error (OE), Coverage (CO) and Average precision (AP). Note that for all five multi-label metrics, the values vary within [0, 1]. For average precision, the larger the value the better the performance, while for the other four metrics, the smaller the value the better the performance. These metrics serve as good indicators for the comparative studies as they evaluate the performance of the models from various aspects. Concrete metric definitions can be found in <cit.>.

§.§.§ Experimental Results

The detailed experimental results of each comparing algorithm on the 15 data sets are presented in Table II and Table III. The average ranks of the eight algorithms on the five measures are given in Table IV. On each data set, 50% of the examples are randomly sampled without replacement to form the training set, and the remaining 50% form the test set. The sampling process is repeated ten times, and the mean metric value and the standard deviation across the ten training/testing trials are recorded for the comparative studies.

Based on the experimental results, the following observations can be made:

* LEMLL achieves the optimal (lowest) average rank in terms of each evaluation metric (Table IV). On the 15 benchmark data sets, across all the evaluation metrics, LEMLL ranks 1st in 69.3% of the cases and 2nd in 21.3% of the cases.
* Compared with the three well-established two-stage approaches, on the 15 data sets (Tables II and III), across all the evaluation metrics, LEMLL is significantly superior to MLFE in 89.3% of the cases, to ML^2 in 90.7% of the cases and to RELIAB in 80% of the cases. LEMLL thus achieves superior performance over these two-stage approaches.
* Compared with the three state-of-the-art algorithms, on the 15 data sets (Tables II and III), across all the evaluation metrics, LEMLL is significantly superior to ML-kNN in 82.7% of the cases, to CLR in 93.3% of the cases and to ECC in 88% of the cases.
* Another interesting observation is that on all the data sets (Tables II and III), across all the evaluation metrics, the performance of LEMLL is superior or equal to that of MSVR, and LEMLL is significantly superior to MSVR in 76% of the cases, which verifies the superiority of the reconstructed numerical labels over the logical labels.

To summarize, LEMLL achieves superior performance over the well-established two-stage algorithms and the three state-of-the-art algorithms across extensive benchmark data sets. LEMLL significantly outperforms MSVR in most cases, which validates the effectiveness of label enhancement for boosting multi-label learning performance.

§.§ Reconstruction Performance Evaluation

§.§.§ Experimental Settings

To further evaluate the numerical labels μ reconstructed by LEMLL, experimental studies on 15 real-world label distribution data sets <cit.> with ground-truth label importance are conducted. Table V summarizes the detailed characteristics of the 15 real-world data sets.

Note that the problem of reconstructing label importance from logical labels is relatively new, and logical multi-label data with ground-truth label importance is not available yet. We therefore consider the following setting for the reconstruction tasks.
In a label distribution data set, each instance is associated with a label distribution. The data sets used in our experiments, however, contain for each instance not the real distribution, but a set of labels. The set includes the labels with the highest weights in the distribution, and is the smallest set such that the sum of these weights exceeds a given threshold. This setting can model, for instance, the way in which annotators label images or add keywords to texts: it assumes that annotators add labels starting with the most relevant ones, until they feel the labeling is sufficiently complete. The logical labels in the data sets are therefore binarized from the real label distributions as follows. For each instance x, with label distribution d = [d_x^y_1, d_x^y_2, ..., d_x^y_l]^T, the greatest description degree d_x^y_j is found, and the label y_j is marked as relevant. Then, we calculate the sum of the description degrees of all the current relevant labels, H = ∑_y_j∈S^re d_x^y_j, where S^re is the set of the current relevant labels. If H is less than a predefined threshold ρ, we continue by finding the greatest description degree among the labels not yet in S^re and adding the corresponding label to S^re. This process continues until H > ρ. Finally, the logical labels of the labels in S^re are set to +1, and all other logical labels are set to -1. In the experiments, ρ varies from 0.1 to 0.5 with a step size of 0.1; each label distribution data set thus yields five logical multi-label data sets.

After binarizing the logical labels from the ground-truth label distributions, we recover the numerical labels from the logical labels via the LE algorithms; the numerical labels are then transferred to a label distribution via the normalization g(μ) = sigmoid(μ)/Z, where sigmoid(·) is the sigmoid function mapping a numerical value into (0,1) and Z is the normalization factor, i.e., Z = ∑_j=1^l sigmoid(μ_j). Finally, we compare the reconstructed label distributions with the ground-truth label distributions. A short sketch of these two steps is given below.

In order to evaluate the similarity between the reconstructed label distributions and the ground-truth label distributions, as suggested in <cit.>, three measures are chosen for our experiments: Chebyshev distance (Chebyshev), Kullback-Leibler divergence (K-L) and cosine coefficient (Cosine). The first two are distance measures, for which smaller values mean better performance; the last one is a similarity measure, for which larger values mean better performance.

We compare the performance of LEMLL against the five LE algorithms mentioned in Section II, i.e., FCM <cit.>, KM <cit.> and the first stage of MLFE <cit.>, ML^2 <cit.> and RELIAB <cit.>. For each comparing approach, the parameters recommended in the corresponding literature are used. For MLFE, the penalty parameter ρ is set to 1, c_1 is set to 1 and c_2 is set to 2. For ML^2, K is set to l+1 and λ is set to 1. For RELIAB, α is set to 0.5. For FCM, β is set to 2. For LEMLL, K is set to 10, ε is set to 0.1, and α, β and γ are all set to 1. A linear kernel is used in KM and LEMLL.

§.§.§ Experimental Results

We run LEMLL, FCM, KM and the first stage of MLFE, ML^2 and RELIAB with the threshold ρ varying from 0.1 to 0.5 with a step size of 0.1 on the 15 data sets. Tables VI, VII and VIII report the results of the six LE algorithms across all thresholds ρ on all the data sets, evaluated by Chebyshev, K-L and Cosine respectively.
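As announced above, here is a minimal sketch of the binarization rule and of the normalization g(μ) (illustrative only, not the authors' code; the example distribution is arbitrary):

import numpy as np

def binarize(d, rho):
    """Greedily mark labels as relevant, from the largest description
    degree down, until the cumulative weight exceeds rho; return logical
    labels in {-1, +1}."""
    y = -np.ones_like(d)
    for j in np.argsort(d)[::-1]:
        y[j] = 1.0
        if d[y > 0].sum() > rho:
            break
    return y

def to_distribution(mu):
    """Map recovered numerical labels to a label distribution,
    g(mu) = sigmoid(mu) / Z with Z the normalization factor."""
    s = 1.0 / (1.0 + np.exp(-mu))
    return s / s.sum()

d = np.array([0.45, 0.30, 0.15, 0.10])   # ground-truth label distribution
print(binarize(d, rho=0.5))              # -> [ 1.  1. -1. -1.]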
In these tables, the best reconstruction performance on each measure is highlighted in boldface and the average ranks are given in the last column. Note that this experiment is a reconstruction task, not a predictive task, so each LE algorithm runs only once. As shown in Tables VI, VII and VIII, the following observation can be made: across all thresholds ρ, LEMLL achieves the optimal (lowest) average rank in terms of each evaluation metric. On the 15 data sets, across all thresholds ρ and all evaluation metrics, LEMLL ranks 1st in 80.4% of the cases and 2nd in 11.1% of the cases.

To summarize, LEMLL achieves superior reconstruction performance over the other algorithms, which demonstrates that LEMLL has a good capability of reconstructing latent label importance information from logical multi-label data.

§ CONCLUSION

This paper proposes a framework of multi-label learning with label enhancement. Extensive comparative studies clearly validate that the performance of multi-label learning can be improved significantly with label enhancement and that LEMLL can effectively reconstruct latent label importance information from logical multi-label data. In the future, we will explore whether other assumptions can be used for label enhancement.

§ ACKNOWLEDGMENTS

This research was supported by the National Key Research & Development Plan of China (No. 2017YFB1002801), the National Science Foundation of China (61622203), the Jiangsu Natural Science Funds for Distinguished Young Scholar (BK20140022), the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the Collaborative Innovation Center of Wireless Communications Technology. | http://arxiv.org/abs/1706.08323v4 | {
"authors": [
"Ruifeng Shao",
"Ning Xu",
"Xin Geng"
],
"categories": [
"cs.LG",
"cs.CV"
],
"primary_category": "cs.LG",
"published": "20170626111504",
"title": "Multi-Label Learning with Label Enhancement"
} |
[email protected]
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
Department of Physics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
[email protected]
Hefei National Laboratory for Physics Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, 230026, China
Institute for Quantum Optics and Center for Integrated Quantum Science and Technology (IQst), Universität Ulm, 89081 Germany
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
Department of Physics, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
3. Physikalisches Institut, Universität Stuttgart, 70550 Stuttgart, Germany
Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart, Germany

The investigation of single atoms in solids with both optical and nuclear spin access is of particular interest, with applications ranging from nanoscale sensing to quantum computation. Here, we study the optical and spin properties of single praseodymium ions in an yttrium aluminum garnet (YAG) crystal at cryogenic temperature. The single nuclear spin of individual praseodymium ions is detected through a background-free optical upconversion readout technique. Single ions show stable photoluminescence excitation (PLE) spectra with spectrally resolved hyperfine splitting of the praseodymium ground state. Based on this measurement, optical Rabi and optically detected magnetic resonance (ODMR) measurements are performed to study the spin coherence properties. Our results reveal that the spin coherence time of single praseodymium nuclear spins is limited by the strong spin-phonon coupling at the experimental temperature.

Optical and spin properties of a single praseodymium ion in a crystal
Jörg Wrachtrup
December 30, 2023
=====================================================================

Optical detection and coherent control of single quantum objects in solids are essential to various fields ranging from fundamental physics to quantum information technologies <cit.>. Among such single photon emitters, rare earth ions embedded in crystals are attracting increasing attention, as they simultaneously show narrow optical transitions <cit.>, long spin coherence times <cit.> and on-chip photonics capabilities <cit.>. A single rare earth ion carrying a nuclear spin that allows optical addressing and control is particularly interesting, as it potentially combines ultra-long coherence times with fast optical control. However, it still remains challenging to coherently address single rare earth ions with nuclear spins <cit.>. Up to now, only single electron spins of trivalent cerium ions in YAG have been optically detected, initialized, read out and coherently controlled <cit.>. Due to the strong coupling to the surrounding spin bath, single Ce electron spin qubits lose their coherence on a sub-microsecond time scale. Although this can be extended to milliseconds by dynamical decoupling techniques <cit.>, the coherence time is still far below that of rare-earth nuclear spins. Excellent examples are trivalent Pr nuclear spins with one minute coherence time <cit.> and Eu:YSO nuclear spins with coherence times up to six hours <cit.>.
However, the detection and coherent control of such single rare-earth nuclear spins has been hampered by their low fluorescence intensity.

In this work, we report the optical and spin properties of the single nuclear spin of a praseodymium ion in a YAG crystal at cryogenic temperature. In particular, we observe the spectrally well-resolved ground-state hyperfine structure of praseodymium ions and demonstrate the initialization and readout of a single nuclear spin through optical control. Moreover, we perform optical Rabi and ODMR measurements to demonstrate the optical and radio-frequency (RF) control capabilities of single praseodymium ions. The experiments we demonstrate open the door to future implementations of more sophisticated techniques, such as all-optical control and dynamical decoupling <cit.>.

The energy level structure of Pr:YAG is presented in Fig. <ref>. The ground state ^3H_4(1) is located in the 4f shell and is composed of three doubly degenerate hyperfine sublevels with energy splittings of 33.4 and 41.6 MHz. In rare earth ions, the intra-4f-shell transitions are efficiently screened by the closed outer-lying 5s and 5p shells. This screening is responsible for the narrow optical 4f^2 ↔ 4f^2 transitions <cit.>. The ^3P_0 state has a lifetime of 8 μs <cit.>. The long lifetime of the excited state and the resulting weak fluorescence make the direct detection of single rare earth ions through 4f^2 ↔ 4f^2 transitions challenging, although it has recently been achieved <cit.>. One can circumvent the long-lived ^3P_0 state by further exciting the ion to the 4f5d state, which shows a much shorter lifetime of 18 ns compared to ^3P_0 <cit.>. Thus, the emission rate of single ions can be largely enhanced by two-photon upconversion. Previously, we used this upconversion method for the first optical detection of single Pr ions at room temperature <cit.>. In order to gain access to the nuclear spin degrees of freedom, the same detection technique is applied at cryogenic temperatures in this work.

Single Pr^3+ ions are detected through the upconversion process ^3H_4(1) → ^3P_0 → 4f5d, as shown in Fig. <ref>(a). A broadband diode laser emitting at 487 nm excites the ^3H_4 → ^3P_0 transition. Another laser, at 532 nm, is applied simultaneously to promote the Pr^3+ ion further into the 4f5d band (^3P_0 → 4f5d). The emitted photons, collected by a 0.85 N.A. objective lens, are detected by a photomultiplier tube (PMT) in the spectral range between 300 and 400 nm. Figure <ref>(b) shows an SEM image of a solid immersion lens (SIL) fabricated on the surface of the studied YAG crystal in order to enhance the photon collection efficiency and the spatial resolution <cit.>. Figure <ref>(c) displays the laser-scanning fluorescence image of Pr ions underneath a SIL, where the bright spots represent individual Pr ions.

The spectral properties of single Pr ions were investigated by PLE measurements. To this end, a narrow-linewidth single-mode laser (Toptica Photonics, DL Pro) working at 487 nm is used for narrow-band optical excitation. To prevent power broadening of the transitions, the laser was kept at a low power level (∼5 μW). While the 487 nm laser was swept through the resonant ^3H_4 → ^3P_0 transition, the 532 nm laser constantly illuminated the sample, completing the second upconversion step. By monitoring the fluorescence while sweeping the frequency of the single-mode laser, a single-sweep PLE spectrum is obtained, as shown in Fig. <ref>(a).
The spectrum shows a background-free signal, as the upconversion readout method efficiently filters out the noise. It exhibits three well-resolved peaks with frequency differences of 32.7 MHz and 42.3 MHz, respectively. These peaks correspond to the hyperfine splittings of the ground state ^3H_4(1). The hyperfine structure of the excited state ^3P_0, however, is not resolved owing to the linewidth of 2π×5 MHz. By monitoring the peak frequency in successive sweeps, as shown in Fig. <ref>(b), we obtain a spectral diffusion of single Pr^3+ ions within ±3 MHz. Fig. <ref>(c) is an average of the spectra of the successive sweeps in Fig. <ref>(b). The total linewidth is ∼2π×6 MHz, which includes both the spectral diffusion and the intrinsic linewidth.

Figure 3 shows the coherent optical control of a single Pr ion. We applied a resonant laser with varying pulse length to drive the ion's oscillation between the ^3H_4 and ^3P_0 states. As a result of its very weak oscillator strength, we observe an oscillation frequency of only 2π×51 MHz at an applied laser power of 0.4 mW. This value is several orders of magnitude smaller than those of other atom-like systems, such as NV centers <cit.> and quantum dots <cit.>.

In addition to optical control, the ground-state hyperfine splitting resolved in the PLE spectrum enables resonant optical initialization and readout of single Pr nuclear spins. Thus, hyperfine-resolved control and the investigation of the spin coherence properties by means of ODMR measurements become feasible. Figure <ref>(a) shows the ODMR measurement: the single-mode laser is applied on resonance with the optical transition ^3H_4(±3/2) → ^3P_0(±3/2) while, simultaneously, an RF source induces spin transitions. The laser power was set to 5 μW, corresponding to an excitation rate of 2π×5.7 MHz according to the optical Rabi measurement of Fig. 3. This rate is much faster than the spontaneous decay rate of the ^3P_0 state, ensuring that the ^3H_4(±3/2) spin state is completely depleted <cit.>. By sweeping the RF frequency, the upconverted ODMR spectra shown in Fig. <ref>(b) were acquired. The figure shows transition peaks at frequencies of 33.4 and 43.7 MHz, in good agreement with the PLE measurement. The linewidth of the ODMR spectrum is ∼2π×1.5 MHz, indicating that the coherence time of a single Pr nuclear spin is one order of magnitude longer than that of single Ce electron spins in the same crystal <cit.>, but far shorter than the value expected when taking into account the three orders of magnitude difference in gyromagnetic ratio between electron and nuclear spins. To understand this discrepancy, we note that the baseline in Fig. <ref>(b) shows a clearly nonzero fluorescence intensity. Since the upconverted readout under low-power excitation is background-free, as shown in Fig. <ref>(a), the observed finite fluorescence intensity indicates that single Pr ions retain a certain probability of staying in the ±3/2 state. We thus attribute the non-zero baseline and the linewidth broadening to spin-lattice relaxation: a fast spin-lattice relaxation rate, comparable to the laser pumping rate, brings single Pr ions back to the ±3/2 state and also broadens the magnetic resonance peak.
To understand the ODMR process more precisely, we describe the ODMR measurement by a modified three-level Bloch equation:

dρ_aa/dt = iΩ(ρ_ab - ρ_ba) - ρ_aa/2T_1 + ρ_bb/2T_1 + (Γ/2)ρ_bb
dρ_bb/dt = -iΩ(ρ_ab - ρ_ba) - ρ_bb/T_1 + (ρ_aa + ρ_cc)/2T_1 - Γρ_bb
dρ_cc/dt = -ρ_cc/2T_1 + ρ_bb/2T_1 + (Γ/2)ρ_bb
dρ_ab/dt = iΔρ_ab + iΩ(ρ_aa - ρ_bb) - ρ_ab/T_2

where the indices a, b, c represent the sublevels ±1/2, ±3/2 and ±5/2, respectively. Δ is the detuning of the RF frequency, Ω is the RF-induced Rabi frequency, and T_1 and T_2 are the spin relaxation times. Γ is the population redistribution rate determined by the excitation-emission cycling rate under optical pumping. In this model, the population redistribution rates from the sublevels ±3/2 to ±1/2 and to ±5/2 are taken to be identical, as indicated by the identical ODMR peak intensities observed in Fig. 3(b). In addition, we assume that the spin relaxation times are the same for all spin transitions.

The optical excitation is a two-step transition combining the ^3H_4 → ^3P_0 transition and the upconverted readout ^3P_0 → 4f5d. Both the oscillator strength and the applied laser power (5 mW) of the ^3P_0 → 4f5d transition are much larger than those of the ^3H_4 → ^3P_0 transition. The excitation rate is thus dominated by the optical pumping rate of 2π×5.7 MHz of the ^3H_4 → ^3P_0 transition. This rate is much slower than the spontaneous emission rate of the 4f5d state and therefore determines the cycling rate, Γ = 2π×5.7 MHz. The steady-state solutions for ρ_bb in the resonant condition (Δ = 0, denoted ρ_bb^res) and in the off-resonant condition (Δ ≫ 0, denoted ρ_bb^off) give the relative intensities of the ODMR peak and of the baseline. The contrast of the ODMR peak, defined by

𝒞 = (ρ_bb^res - ρ_bb^off)/ρ_bb^res,

can be deduced as:

𝒞 = 4T_1^3ΓΩ^2 / (3 + 2T_1Γ + 12T_1^2Ω^2 + 4T_1^3ΓΩ^2),

where T_2 = 2T_1 is assumed. The RF-induced Rabi frequency Ω is estimated to be 2π×15±5 kHz, taking into account the waveguide structure and the RF power <cit.>. We deduce the contrast to be 14±5%. From these parameters, the T_1 time of the single Pr nuclear spin at 4 K is estimated to be 3.6±1.8 μs. This fast relaxation is consistent with the measured ODMR linewidth. In rare-earth ion systems, Orbach relaxation is usually the dominant mechanism for spin-lattice relaxation <cit.>:

T_1 = (1/(cΔ)) exp(Δ/kT),

i.e., T_1 follows Eq. (<ref>) and depends exponentially on the energy difference Δ between the lowest and the second lowest ground states and on the temperature T. From Fig. <ref> we see that for single Pr ions in YAG, ΔE is 19 cm^-1, which is much smaller than in other hosts like YSO with 57 cm^-1. According to the literature, at 4 K the nuclear spin lifetime of Pr:YSO is 100 s <cit.>. If we assume that Pr:YAG and Pr:YSO have the same pre-factor c, Pr:YAG then has a ∼100 μs spin lifetime at the same temperature, which is close to our estimate. A probable source of the remaining difference between the YSO and YAG hosts is the pre-factor: the pre-factor of Pr:YAG is not well known, and applying the pre-factor of Pr:YSO may introduce a discrepancy. Moreover, the Orbach relaxation time depends exponentially on the reciprocal of the temperature, so an inaccuracy in the experimental temperature also contributes to the difference. A rough numerical version of this scaling estimate is given below.
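The following minimal sketch reproduces the estimate under the stated assumption of equal pre-factors, using the 1/(cΔ) form of Eq. (<ref>); k_B ≈ 0.695 cm^-1/K converts kT into the same units as Δ:

import numpy as np

# Orbach scaling T1 = (1/(c*Delta)) * exp(Delta/kT), assuming the same
# pre-factor c for Pr:YSO and Pr:YAG (an assumption stated in the text).
kT = 0.695 * 4.0              # k_B in cm^-1/K times T = 4 K -> ~2.78 cm^-1
d_yso, d_yag = 57.0, 19.0     # energy gaps Delta, in cm^-1
t1_yso = 100.0                # reported Pr:YSO nuclear-spin lifetime (s)

ratio = (d_yso / d_yag) * np.exp((d_yag - d_yso) / kT)
t1_yag = t1_yso * ratio
print(f"T1(Pr:YAG) ~ {t1_yag:.1e} s")   # a few 1e-4 s, i.e. of the same
                                        # order as the ~100 us quoted above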
In conclusion, a spectroscopic study of single Pr ions in YAG has been performed at cryogenic temperature. Single Pr ions in YAG show a narrow and stable optical transition. However, the intrinsic spin properties are dominated by electron-phonon coupling even at 4 K. This suggests that full exploitation of single Pr ions should take place at lower temperatures. Moreover, the total collection efficiency of the single-Pr fluorescence can be improved once the Pr ions are coupled to photonic devices such as nano-rods or photonic cavities. The improved optical and spin properties of single Pr ions in YAG will offer a new opportunity for exploring long-lived nuclear spins.

We would like to thank Rainer Stöhr and Nan Zhao for discussions. The work is financially supported by ERC SQUTEC, EU-SIQS, SFB TR21 and DFG KO4999/1-1. Ya Wang thanks the support from the 100-Talent program of CAS.

ladd2010 T. D. Ladd, F. Jelezko, R. Laflamme, Y. Nakamura, C. Monroe, and J. L. O'Brien, Nature (London) 464, 7285 (2010).
awschalom2013 D. D. Awschalom, L. C. Bassett, A. S. Dzurak, E. L. Hu, and J. R. Petta, Science 339, 6124 (2013).
gao2015 W. B. Gao, A. Imamoglu, H. Bernien, and R. Hanson, Nature Photon. 9, 363 (2015).
thorpe2011 M. J. Thorpe, L. Rippe, T. M. Fortier, M. S. Kirchner, and T. Rosenband, Nature Photon. 5, 11 (2011).
equall1994 R. W. Equall, Y. Sun, R. L. Cone, and R. M. Macfarlane, Phys. Rev. Lett. 72, 2179 (1994).
Riedmatten2008 H. Riedmatten, M. Afzelius, M. Staudt, C. Simon, and N. Gisin, Nature 456, 773 (2008).
konz2003 F. Könz, Y. Sun, C. W. Thiel, R. L. Cone, R. W. Equall, R. L. Hutcheson, and R. M. Macfarlane, Phys. Rev. B 68, 085109 (2003).
Saglamyurek2015 E. Saglamyurek, J. Jin, V. Verma, M. Shaw, F. Marsili, S. Nam, D. Oblak, and W. Tittel, Nat. Photon. 9, 83 (2015).
zhong2017 T. Zhong, J. Kindem, J. Rochman, and A. Faraon, Nat. Commun. 8, 14107 (2017).
faraon2015 T. Zhong, J. M. Kindem, E. Miyazono, and A. Faraon, Nat. Commun. 6, 8206 (2015).
ding2016 D. Ding, L. M. Pereira, J. F. Bauters, M. J. Heck, G. Welker, A. Vantomme, J. E. Bowers, M. J. Dood, and D. Bouwmeester, Nat. Photon. 10, 385 (2016).
sinclair2016 N. Sinclair, K. Heshami, C. Deshmukh, D. Oblak, C. Simon, and W. Tittel, Nat. Commun. 7, 13454 (2016).
kolesov2012 R. Kolesov, K. Xia, R. Reuter, R. Stöhr, A. Zappe, J. Meijer, P. Hemmer, and J. Wrachtrup, Nat. Commun. 3, 1029 (2012).
utikal2014 T. Utikal, E. Eichhammer, L. Petersen, A. Renn, S. Götzinger, and V. Sandoghdar, Nat. Commun. 5, 3627 (2014).
nakamura2014 I. Nakamura, T. Yoshihiro, H. Inagawa, S. Fujiyoshi, and M. Matsushita, Sci. Rep. 4, 7364 (2014).
kolesov2013 R. Kolesov, K. Xia, R. Reuter, M. Jamali, R. Stöhr, T. Inal, P. Siyushev, and J. Wrachtrup, Phys. Rev. Lett. 111, 120502 (2013).
siyushev2014 P. Siyushev, K. Xia, R. Reuter, M. Jamali, N. Zhao, N. Yang, C. Duan, N. Kukharchyk, A. D. Wieck, R. Kolesov, and J. Wrachtrup, Nat. Commun. 5 (2014).
xia2015 K. Xia, R. Kolesov, Y. Wang, P. Siyushev, R. Reuter, T. Kornher, N. Kukharchyk, A. D. Wieck, B. Villa, S. Yang, and J. Wrachtrup, Phys. Rev. Lett. 115, 093602 (2015).
heinze2013 G. Heinze, C. Hubrich, and T. Halfmann, Phys. Rev. Lett. 111, 033601 (2013).
zhong2015 M. Zhong, M. P. Hedges, R. L. Ahlefeldt, J. G. Bartholomew, S. E. Beavan, S. M. Wittig, J. J. Longdell, and M. J. Sellars, Nature (London) 517, 177 (2015).
yale2016 C. G. Yale, F. J. Heremans, B. B. Zhou, A. Auer, G. Burkard, and D. D. Awschalom, Nature Photon. 10, 184 (2016).
kim1991 M. K. Kim and R. Kachru, Phys. Rev. B 44, 9826 (1991).
cheung1994 Y. M. Cheung and S. K. Gayen, Phys. Rev. B 49, 827 (1994).
Eichhammer E. Eichhammer, T. Utikal, S. Götzinger, and V.
gayen1992 S. K. Gayen, B. Q. Xie, and Y. M. Cheung, Phys. Rev. B 45, 20 (1992).
ganem1992 J. Ganem, W. M. Dennis, and W. M. Yen, Journal of Luminescence 54, 79 (1992).
Jamali2014 M. Jamali, I. Gerhardt, M. Rezail, K. Frenner, H. Fedder, and J. Wrachtrup, Rev. Sci. Instrum. 85, 123703 (2014).
Batalov2008 A. Batalov, C. Zierl, T. Gaebel, P. Neumann, I.-Y. Chan, G. Balasubramanian, P. R. Hemmer, F. Jelezko, and J. Wrachtrup, Phys. Rev. Lett. 100, 077401 (2008).
Golter2014 D. A. Golter and H. Wang, Phys. Rev. Lett. 112, 116403 (2014).
Stievater2001 T. H. Stievater, Xiaoqin Li, D. G. Steel, D. Gammon, D. S. Katzer, D. Park, C. Piermarocchi, and L. J. Sham, Phys. Rev. Lett. 87, 133603 (2001).
Kamada2001 H. Kamada, H. Gotoh, J. Temmyo, T. Takagahara, and H. Ando, Phys. Rev. Lett. 87, 246401 (2001).
Ham1997 B. Ham, P. Hemmer, and M. Shahriar, Opt. Commun. 144, 227 (1997).
Turukhin2001 A. V. Turukhin, V. S. Sudarshanam, M. S. Shahriar, J. A. Musser, B. S. Ham, and P. R. Hemmer, Phys. Rev. Lett. 88, 023602 (2001).
orbach R. Orbach, Proc. R. Soc. A 264, 1319 (1961).
wang2015 P. Wang, Z. Yuan, P. Huang, X. Rong, M. Wang, X. Xu, C. Duan, C. Ju, F. Shi, and J. Du, Nat. Commun. 6, 6631 (2015).
nilsson2004 M. Nilsson, L. Rippe, S. Kröll, R. Klieber, and D. Suter, Phys. Rev. B 70, 214116 (2004).
| http://arxiv.org/abs/1706.08736v2 | {
"authors": [
"Kangwei Xia",
"Roman Kolesov",
"Ya Wang",
"Petr Siyushev",
"Thomas Kornher",
"Rolf Reuter",
"Sen Yang",
"Jörg Wrachtrup"
],
"categories": [
"physics.optics",
"cond-mat.mes-hall",
"quant-ph"
],
"primary_category": "physics.optics",
"published": "20170627090723",
"title": "Optical and spin properties of a single praseodymium ion in a crystal"
} |
M. Tixier, Laboratoire d'Ingénierie des Systèmes de Versailles, Université de Versailles Saint Quentin, 45, avenue des Etats-Unis, F-78035 Versailles. Tel.: +33-139254519, e-mail: [email protected]
J. Pouget, Université Pierre et Marie Curie, UMR 7190, Institut Jean le Rond d'Alembert, F-75005 Paris, France; CNRS, UMR 7190, Institut Jean le Rond d'Alembert, F-75005 Paris, France. e-mail: [email protected]

Conservation laws of an electro-active polymer
Mireille Tixier, Joël Pouget
Received: date / Accepted: date
=================================================

Ionic electro-active polymers (E.A.P.) are active materials consisting of a polyelectrolyte (for example Nafion). Such a material is usually used as a thin film sandwiched between two platinum electrodes. The polymer undergoes large bending motions when an electric field is applied across the thickness. Conversely, a voltage can be detected between both electrodes when the polymer is suddenly bent. The solvent-saturated polymer is fully dissociated, releasing cations of small size. We used a continuous medium approach. The material is modelled by the coexistence of two phases; it can be considered as a porous medium where the deformable solid phase is the polymer backbone with fixed anions; the electrolyte phase is made of a solvent (usually water) with free cations.

The microscale conservation laws of mass, linear momentum and energy and the Maxwell equations are first written for each phase. The physical quantities linked to the interfaces are deduced. The use of an averaging technique applied to the two-phase medium finally leads to an Eulerian formulation of the conservation laws of the complete material. Macroscale equations relative to each phase provide the exchanges through the interfaces. An analysis of the balance equations of kinetic, potential and internal energy highlights the phenomena responsible for the conversion of one kind of energy into another, especially the dissipative ones: viscous friction and the Joule effect.

PACS 47.10.ab PACS 47.56.+r PACS 61.41.+e PACS 83.60.Np

§ INTRODUCTION

Electro-active polymers (EAP) have attracted much attention from scientists and engineers of various disciplines. In particular, they are studied in the field of biomimetics (for instance, robotic mechanisms based on biologically-inspired models) and for use as artificial muscles (see, for instance, the reviews of Shahinpoor <cit.> and <cit.> or <cit.>); more recently, EAPs have become excellent candidates for energy harvesting devices <cit.>, <cit.> and <cit.>. Roughly speaking, such polymers respond to external electric stimulation by displaying significant shape and size variations. This interesting property offers many promising applications in advanced technologies. In addition, they can be used as actuators or sensors. As actuators, EAPs are characterized by the fact that they undergo large deformations while sustaining large forces. They are often called artificial muscles <cit.>, <cit.> and <cit.>. Electro-active polymers can be divided into several categories according to their activation process and chemical composition. Nevertheless, they can be placed in two major categories: electronic and ionic. Both categories comprise several families <cit.> (among them, ferroelectric polymers, dielectric EAP, electrostrictive paper, electro-viscoelastic elastomers, ionic polymer gels, conductive polymers, etc.). The first category of EAP is the electronic type. Concerning their advantages, EAPs of the electronic type
can operate in room conditions with a rapid time response; in addition, they induce relatively large actuation forces. One of their main disadvantages is that they require a high activation electric field (about 150 MV/m). The second category, the ionic EAPs, with which the present work is concerned, operate with low voltages (a few volts) and produce large bending displacements. Their drawbacks are a rather slow response and a low actuation force. They operate best in a humid environment, and they can be made as self-contained encapsulated actuators to be used in dry environments. In the present study the emphasis is placed especially on the ionic polymer metal composite (IPMC) <cit.>. The structure consists of a thin ion-exchange membrane of Nafion, Flemion or Aciplex (polyelectrolyte) plated on both faces with conductive electrodes (generally platinum or gold). In short, to explain the mechanism of deformation of an EAP, a thin strip of polymer is placed between thin conductive electrodes. Upon the application of an electric field across a slightly humid EAP, the positive counter-ions move towards the negative electrode (cathode), while the negative ions, which are fixed (or immobile) on the polymer backbone, experience an attractive force from the positive electrode (anode). At the same time, water molecules in the EAP backbone diffuse towards the region of high positive ion concentration (near the negative electrode) to equalize the charge distribution. As a result, the water or solvent concentration increases in the region near the cathode and decreases in the region near the anode, leading to a strain with a linear distribution across the strip thickness, which causes the bending towards the anode. Conversely, if the strip of electro-active polymer is suddenly bent, a voltage difference is produced between the electrodes <cit.> and <cit.>.

The theories or models to explain the mechanism of deformation in EAPs are yet to emerge. Nevertheless, some heuristic or empirical models are available in the literature. One of the most interesting and comprehensive models accounts for the chemical mechano-electric effect of ionic transport coupled to the electric field and the elastic deformation of the polymer. A micro-mechanical model has been developed by Nemat-Nasser <cit.> and <cit.>, accounting for coupled ion transport, electric field and elastic deformation to predict the response of the IPMC. The model presented is mostly governed by Gauss' equation for the conservation of electric charge, a constitutive equation for the ion flux vector and a so-called generalized Darcy's law for the water molecule velocity. Other models based on linear irreversible thermodynamics have been proposed by Shahinpoor et al. <cit.> and <cit.>. The model considers the standard Onsager formulation for a simple description of ion transport (current density) and of the flux of solvent transport. The conjugate forces are the electric field and the pressure gradient. In a different way, Shahinpoor and co-workers have proposed models for the micro-electro-mechanics of ionic polymeric gels based on continuum electromechanics <cit.>. The present work focuses on a novel approach to electro-active polymers based on the thermodynamics of continua. More precisely, we present a detailed approach for such polymer materials using the concepts of non-equilibrium thermodynamic processes. The material is then modeled by the coexistence of two phases.
The first one is the backbone polymer, or solid phase, with fixed anions, while the second phase is the solvent containing the free cations. The method consists of computing an average of the different phases over a representative elementary volume containing the phases at the microscale. The statistical average leads to macroscale quantities defined all over the material. The main difficulty of the method is that we must account for the interfaces which exist between the phases, for which interfacial quantities must be defined. On using this procedure for the different conservation laws of the present multiphase material, we deduce the equations of mass conservation, electric charge conservation and momentum conservation, as well as the different energy balance equations, at the macroscopic scale of the whole material.

The paper is organized as follows. The description of the model and the definition of the phases are presented with the underlying physics in the next Section. Section 3 is devoted to the equations of conservation of mass. Since the polymer contains electric charges, the electric charge conservation and interface equations are presented in Section 4. The following Section places the emphasis on the linear momentum balance equation, where the macroscopic stress tensor is defined. Moreover, the Maxwell tensor appears due to the action of the electric field on the moving electric charges. Section 6 presents the energy balance laws, that is, the potential energy, kinetic energy, total energy and internal energy balance equations. At last, the discussion is reported in Section 7, and conclusions are drawn in Section 8.

§ MODELLING

As mentioned in the introduction, the system under study is made of a thin membrane of an ionic electro-active polymer saturated with water and coated on both sides with thin metal layers used as electrodes. Water, even in small quantity, causes a quasi-complete dissociation of the polymer and the release of positive ions (cations) in the water; negative ions (anions) remain bound to the polymer backbone <cit.>. When an electric field perpendicular to the electrodes is applied, cations move towards the negative side, carrying solvent away by an osmosis phenomenon. This solvent displacement leads to a swelling of the polymer on the negative electrode side and to a compression on the opposite side, resulting in a bending of the strip.

To model this system, we describe the polymer chains as a deformable porous medium; this solid is saturated by an ionic solution composed of water and cations. The whole material is considered as a continuum, which is the superposition of three systems whose velocity fields are different: a deformable solid component made up of the negatively charged polymer backbone and of the fluid trapped in the unconnected porosity (the "solid component"), and a liquid composed of water and cations located in the connected porosity. Anions are bound to the solid component. Quantities relative to the different components will be identified by the subscripts 1, 2 and 3 for cations, solvent and solid, respectively. Subscript 4 will refer to the solution, i.e. both components 1 and 2. Quantities without subscript refer to the whole material. Solid and solution are separated by an interface (subscript i) whose thickness is supposed to be negligible. Components 2, 3 and 4, as well as the global material, are assimilated to continua. The modelling of the interface is detailed in the appendix.

Solid and solution are supposed to be incompressible phases.
We assume that gravity and the magnetic field are negligible; the only external force acting on the system is the electric force.

To describe this complex dispersed medium, we use a coarse-grained model developed by Nigmatulin <cit.>, <cit.>, Drew <cit.>, Drew and Passman <cit.> and Ishii and Hibiki <cit.> for two-phase mixtures <cit.>. We use two scales. The microscopic scale must be small enough so that the corresponding volume only contains a single phase (3 or 4), but large enough to use a continuous medium model. For Nafion completely saturated with water, it is about one hundred Angstroms. At the macroscopic scale, the representative elementary volume (R.E.V.) contains both phases 3 and 4. It must be large enough so that average quantities relative to the whole material make sense, and small enough so that these quantities can be considered as local. Its characteristic length is about one micron <cit.>, <cit.> and <cit.>. For each phase 3 and 4, we define a microscale Heaviside-like function of presence χ_k(r,t) by

χ_k = 1 when phase k occupies point r at time t, χ_k = 0 otherwise.

χ_k remains unchanged for a displacement following the interface velocity V_i^0. We obtain

grad χ_k = −n_k χ_i, ∂χ_k/∂t = V_i^0·n_k χ_i, k = 3,4

where the Dirac-like function χ_i = −grad χ_k·n_k (in m^-1) denotes the function of presence of the interface and n_k the outward-pointing unit normal to the interface in the phase k.

The quantities related to each phase have significant variations over space and time, as do the positions of each phase. In order to define macroscale quantities relative to the whole material, we consider a representative elementary volume (R.E.V.) containing the three components, and the microscale quantities are statistically averaged over the R.E.V.. This statistical average, denoted by ⟨⟩ and obtained by repeating many times the same experiment with the same boundary and initial conditions, is supposed to be equivalent to a volume average (ergodic hypothesis). The average thus defined commutes with the space and time derivatives (Leibniz' and Gauss' rules, Drew <cit.>; Lhuillier <cit.>). On denoting by ⟨⟩_k the average over the phase k of a quantity relative to the phase k only, a microscale quantity g_k^0 satisfies

g_k = ⟨χ_k g_k^0⟩ = ϕ_k ⟨g_k^0⟩_k

where ϕ_k = ⟨χ_k⟩ is the volume fraction of the phase k. The macroscale quantity g_k is defined all over the material. In the following, the superscript ^0 denotes the microscale quantities of each phase. The macroscale quantities, which are averages defined everywhere, are written without superscript.
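As a simple illustration of this averaging rule, one can discretize a one-dimensional two-phase medium and check numerically that g_k = ϕ_k⟨g_k^0⟩_k. The short script below is an added illustrative sketch only; the phase layout and the field g^0 are arbitrary choices, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D sample discretized in N cells; chi = 1 where phase k is present
N = 100_000
chi = rng.random(N) < 0.3          # random phase-k layout, phi_k ~ 0.3
g0 = 1.0 + 0.1 * np.sin(np.linspace(0.0, 20.0, N))  # arbitrary microscale field

phi_k = chi.mean()                  # volume fraction  phi_k = <chi_k>
g_k = (chi * g0).mean()             # g_k = <chi_k g_k^0>
g_k_intrinsic = g0[chi].mean()      # <g_k^0>_k, average over phase k only

# check the averaging rule: g_k = phi_k * <g_k^0>_k
print(abs(g_k - phi_k * g_k_intrinsic))   # ~ 1e-16 (machine precision)
```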
§ EQUATION OF CONSERVATION OF MASS

In the following, we assume that the polymer is sufficiently hydrated to be completely dissociated. For the water, solution and solid phases, the microscale mass continuity equation can be written as

∂ρ_k^0/∂t + div(ρ_k^0 V_k^0) = 0

where V_k^0 is the local velocity of the phase k and ρ_k^0 its mass density. Phases 2 and 3 are incompressible, so we obtain

div(V_k^0) = 0.

The different phases do not interpenetrate, thus we can write

V_1^0χ_i = V_2^0χ_i = V_3^0χ_i = V_4^0χ_i = V_i^0χ_i.

Using (<ref>) and (<ref>) we deduce

∂χ_kρ_k^0/∂t + div(χ_kρ_k^0 V_k^0) = ρ_k^0 V_i^0·n_kχ_i − ρ_k^0 V_k^0·n_kχ_i.

For the phase k, the mass density relative to the whole material volume and the barycentric velocity are defined respectively by

ρ_k = ⟨χ_kρ_k^0⟩ = ϕ_kρ_k^0, V_k = ⟨χ_kρ_k^0 V_k^0⟩/⟨χ_kρ_k^0⟩ = V_k^0,

neglecting the velocity fluctuations at the R.E.V. scale. The microscale mass density of the solution reads

ρ_4^0 = ρ_2^0 ϕ_2/ϕ_4 + C M_1

where M_k is the molar mass of the component k and C the cation molar concentration relative to the solution volume. It follows

ρ_4 = ρ_1 + ρ_2, with ρ_1 = ϕ_4 C M_1,

assuming that the concentration fluctuations are negligible and that the solution is dilute. In the same way, the velocity of the solution can be written as

ρ_4^0 V_4^0 = C M_1 V_1^0 + ρ_2^0 ϕ_2/ϕ_4 V_2^0, so that ρ_4 V_4 = ρ_1 V_1 + ρ_2 V_2.

Averaging over the material R.E.V., we finally obtain

∂ρ_k/∂t + div(ρ_k V_k) = 0, k = 1, 2, 3, 4.

The interfaces have no mass. Consequently, we deduce for the complete material

∂ρ/∂t + div(ρV) = 0

where ρ and V denote the mass density and the barycentric velocity of the whole material:

ρ = ∑_k=3,4 ρ_k, ρV = ∑_k=3,4 ρ_k V_k.

§ ELECTRIC EQUATIONS

§.§ Electric charge conservation

The microscale electric charge conservation of the phase k can be written

div I_k^0 + ∂(ρ_k^0 Z_k^0)/∂t = 0

where I_k^0 denotes the current density vector and Z_k^0 the electric charge per unit of mass (Z_2^0 and Z_3^0 are constants):

I_3^0 = ρ_3^0 Z_3^0 V_3^0, I_4^0 = M_1 C Z_1^0 V_1^0, Z_k^0 = z_k F/M_k (k = 1, 3), Z_2^0 = 0, Z_4^0 = C M_1 Z_1^0/ρ_4^0

where z_k is the number of elementary charges of an ion and F the Faraday constant.

Averaging over the R.E.V., we obtain

div I_k + ∂ρ_k Z_k/∂t = ⟨−i_k^0·n_kχ_i⟩

in which the macroscale mass charge and current density vector are defined as

ρ_k Z_k = ⟨χ_kρ_k^0 Z_k^0⟩, I_k = ⟨χ_k I_k^0⟩

with

I_3 = ⟨χ_3 I_3^0⟩ = ρ_3 Z_3 V_3, I_4 = ⟨χ_4 I_4^0⟩ = ρ_1 Z_1 V_1.

i_k^0 = I_k^0 − ρ_k^0 Z_k^0 V_k^0 denotes the microscale diffusion current in phase k. Quantities relative to the interfaces are defined in the appendix. The interface electric charge density per unit surface Z_i and the current density vector I_i satisfy the following mean condition

∂Z_i/∂t + div I_i = ⟨i_3^0·n_3χ_i + i_4^0·n_4χ_i⟩.

Adding up equations (<ref>) for the solid and the solution and (<ref>) for the interfaces, it follows for the whole material

div I + ∂ρZ/∂t = 0

where

ρZ = ∑_3,4 ρ_k Z_k + Z_i, I = ρ_1 Z_1 V_1 + ρ_3 Z_3 V_3 + I_i.

§.§ Maxwell's equations

One can reasonably neglect the effects of the magnetic field. The electric fields E_k^0 and the electric displacements D_k^0 of the solid and the solution are governed by the Maxwell equations

rot E_k^0 = 0, div D_k^0 = ρ_k^0 Z_k^0.

The associated boundary conditions can be presented as

n_3∧E_3^0χ_i = −n_4∧E_4^0χ_i, D_3^0·n_3χ_i + D_4^0·n_4χ_i + Z_i^0χ_i = 0.

Averaging equations (<ref>) over the R.E.V., we derive the following macroscale equations for the solid and the solution

rot E_k = 0, div D_k = ρ_k Z_k − ⟨D_k^0·n_kχ_i⟩

in which the macroscale electric fields and displacements are defined as

E_k = ⟨χ_k E_k^0⟩/⟨χ_k⟩, D_k = ⟨χ_k D_k^0⟩.

The electric field is an intensive thermodynamic variable. In principle, it displays spatial and time fluctuations within the R.E.V.. Considering that this volume is tiny, we assume that these fluctuations are not relevant; we venture the same hypothesis for the concentration and the velocities of the phases. Furthermore, we suppose that the macroscale electric fields are identical in all the phases. Adding up equations (<ref>) for the solid and the solution, it follows for the whole material

rot E = 0, div D = ρZ

using (<ref>). The parameters of the complete material are defined by

E = ∑_3,4 ϕ_k E_k = E_k, D = ∑_3,4 D_k.

We conclude that the E.A.P. verifies the same Maxwell equations and the same law of conservation of charge as an isotropic homogeneous linear dielectric.

§.§ Constitutive relations

A reasonable approximation is that solid and solution can be regarded as isotropic linear dielectrics

D_k^0 = ε_k^0 E_k^0

where ε_k^0 denotes the permittivity of the phase k.
Averaging over the R.E.V. gives

D_k = ε_k E_k

in which

ε_k = ⟨χ_k ε_k^0⟩

is the mean permittivity of the phase k relative to the total volume. The constitutive relation of the E.A.P. takes on the following form

D = εE

where the permittivity of the whole material is defined by

ε = ∑_k=3,4 ε_k.

Considering our assumptions, the E.A.P. is equivalent to an isotropic linear dielectric. We however point out that its permittivity a priori varies over time and space because of the variations of the volume fractions ϕ_3 and ϕ_4.

§ LINEAR MOMENTUM CONSERVATION LAW

§.§ Particle derivatives and material derivative

In order to write the remaining balance equations, it is necessary to calculate the variations of the extensive quantities following the material motion. This raises a problem because the different phases do not move with the same velocity: the velocities of the solid and the solution are a priori different. For a quantity g, we can define particle derivatives following the motion of the solid (d_3/dt), the solution (d_4/dt) or the interface (d_i/dt):

d_k g/dt = ∂g/∂t + grad g·V_k.

Let us consider an extensive quantity of density g(r,t) relative to the whole material. According to the theory developed by O. Coussy <cit.>, implicitly used in <cit.> and <cit.>, we are able to define a derivative following the motion of the different phases of the medium. We will call it the "material derivative"

D/Dt(g/ρ) = ∑_k=3,4,i (ρ_k/ρ) d_k(g_k/ρ_k)/dt

where g_3, g_4 and g_i are the densities relative to the total actual volume attached to the solid, the solution and the interface, respectively (for example, if g is the volume density, we set g_3 = 1−ϕ and g_4 = ϕ, where ϕ is the porosity):

g = g_3 + g_4 + g_i.

d_k/dt(g_k/ρ_k) is the derivative following the motion of the phase k of the mass density associated with the quantity g_k. Using (<ref>), we derive

ρ D(g/ρ)/Dt = ∑_k=3,4,i [∂g_k/∂t + div(g_k V_k)]

for a scalar quantity, and

ρ D(g/ρ)/Dt = ∑_k=3,4,i [∂g_k/∂t + div(g_k⊗V_k)]

for a vector quantity. This derivative must not be confused with the derivative d/dt following the barycentric velocity V.
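As a consistency check of this definition (a worked example added here, not in the original text), take g = ρ, i.e. g_k = ρ_k and g_i = 0 since the interfaces are massless; the material derivative of unity must vanish, and indeed the scalar formula reduces to the sum of the phase-wise mass balances:

```latex
\rho\,\frac{D}{Dt}\!\left(\frac{\rho}{\rho}\right)
  = \sum_{k=3,4}\left[\frac{\partial \rho_k}{\partial t}
    + \operatorname{div}\!\left(\rho_k \mathbf{V}_k\right)\right]
  = 0 ,
```

each bracket vanishing by the mass conservation of phase k, so the definition is compatible with the total mass balance.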
§.§ Linear momentum balance equation

On assuming that gravity and the magnetic field are negligible, the only applied volume force is the electric one. The microscale momentum balance equation of the phase k is then written as

∂ρ_k^0 V_k^0/∂t + div(ρ_k^0 V_k^0⊗V_k^0) = div σ_k^0 + ρ_k^0 Z_k^0 E_k^0

where σ_k^0, the microscale stress tensor of the phase k, is symmetric. The linear momentum of the interfaces per unit surface is zero (see appendix). On accounting for the assumptions concerning the local velocities, it follows at the macroscopic scale that

∂ρ_k V_k/∂t + div(ρ_k V_k⊗V_k) = div σ_k + ρ_k Z_k E_k + F_k

where

σ_k = ⟨χ_k σ_k^0⟩, F_k = ⟨σ_k^0·n_kχ_i⟩.

We verify that the macroscale stress tensor of the phase k, σ_k, is symmetric. F_k represents the resultant of the mechanical stresses exerted on the phase k by the other phase; it is an interaction force. Concerning the interfaces, we obtain the following mean condition (cf <ref>), which expresses the linear momentum conservation law for the interfaces

F_3 + F_4 = Z_i E_i.

Since the interface momentum is zero, the volume linear momentum of the whole material is ρV = ρ_3 V_3 + ρ_4 V_4. On using the definition of the material derivative (<ref>), we obtain

ρ DV/Dt = div σ + ρZ E

in which

σ = ∑_k=3,4 σ_k.

We check that σ is a symmetric tensor and that, in the absence of any external force (E = 0), the total linear momentum is conserved. Using Maxwell's equations (<ref>) and (<ref>), (<ref>) becomes

ρ DV/Dt = div[σ + ε(E⊗E − E^2/2 I)] + E^2/2 grad ε.

ε(E⊗E − E^2/2 I) is the Maxwell tensor, which is here symmetric. The additional term E^2/2 grad ε is produced by the inhomogeneous permittivity of the material.
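The equivalence between the two forms of the momentum balance can be verified symbolically. The sketch below (an added illustration, not part of the original derivation) checks in two dimensions that div[ε(E⊗E − E²/2 I)] + (E²/2) grad ε = ρZ E whenever rot E = 0 and div(εE) = ρZ, using sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
phi = sp.Function('phi')(x, y)   # electrostatic potential, so rot E = 0
eps = sp.Function('eps')(x, y)   # inhomogeneous permittivity

E = sp.Matrix([-sp.diff(phi, x), -sp.diff(phi, y)])
E2 = (E.T * E)[0]

# volume charge density defined through Gauss' law: rho Z = div(eps E)
rhoZ = sp.diff(eps * E[0], x) + sp.diff(eps * E[1], y)

# Maxwell tensor  eps (E (x) E - E^2/2 I)  and its divergence
T = eps * (E * E.T - sp.Rational(1, 2) * E2 * sp.eye(2))
divT = sp.Matrix([sp.diff(T[i, 0], x) + sp.diff(T[i, 1], y)
                  for i in range(2)])

grad_eps = sp.Matrix([sp.diff(eps, x), sp.diff(eps, y)])
residual = divT + sp.Rational(1, 2) * E2 * grad_eps - rhoZ * E
print(sp.simplify(residual))   # -> Matrix([[0], [0]])
```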
§ ENERGY BALANCE LAWS

§.§ Potential energy balance equation

Solid and solution are supposed to be non-dissipative isotropic linear media. As a consequence, the balance equation for the potential energy, or Poynting's theorem, can be written in the integral form <cit.>, <cit.>

d/dt ∫_Ω 1/2(E·D + B·H) dv = −∮_∂Ω (E∧H)·n ds − ∫_Ω E·I dv,

assuming that no charge goes out of the volume Ω. The left-hand side represents the variation of the potential energy attached to the volume Ω following the charge motion. If the charges are mobile, the associated local equation reads for the phase k, neglecting the magnetic field,

∂E_pk^0/∂t + div(E_pk^0 V_k^0) = −E_k^0·I_k^0, k = 3,4,

in which

E_pk^0 = 1/2 D_k^0·E_k^0, k = 3,4,

is the potential energy per unit volume of the phase k. On taking the statistical average of (<ref>) over the R.E.V., we obtain

∂E_pk/∂t + div(E_pk V_k) = −E_k·I_k

where

E_pk = ⟨χ_k E_pk^0⟩ = 1/2 D_k·E_k.

The mean volume potential energy associated with the interfaces satisfies (see appendix)

∂E_pi/∂t + div(E_pi V_i) = −E_i·I_i.

The potential energy balance equation for the whole material is then

ρ D/Dt(E_p/ρ) = −E·I

where

E_p = ∑_k=3,4,i E_pk = 1/2 D·E.

The production of potential energy in the R.E.V. is equal to the volume power −E·I of the force due to the action of the electric field on the electric charge density.

§.§ Kinetic energy balance equation

The microscale kinetic energy balance equation derives from (<ref>):

∂E_ck^0/∂t + div(E_ck^0 V_k^0) = div(σ_k^0·V_k^0) − σ_k^0:grad V_k^0 + ρ_k^0 Z_k^0 E_k^0·V_k^0

where the microscale volume kinetic energy of the phase k is

E_ck^0 = 1/2 ρ_k^0 (V_k^0)^2.

In the same way, (<ref>) is transformed into

∂E_ck/∂t + div(E_ck V_k) = V_k·div σ_k + ρ_k Z_k V_k·E_k + F_k·V_k

where

E_ck = 1/2 ρ_k V_k^2

is the macroscale volume kinetic energy of the phase k. The interface kinetic energy is zero (see appendix).

On summing up the equations (<ref>) for phases 3 and 4, we arrive at

ρ D/Dt(E_cΣ/ρ) = ∑_3,4 [∂E_ck/∂t + div(E_ck V_k)] = ∑_3,4 [div(σ_k·V_k) − σ_k:grad V_k] + [∑_3,4 ρ_k Z_k V_k + Z_i V_i]·E

where E_cΣ is the sum of the volume kinetic energies of the different phases with respect to the laboratory reference frame:

E_cΣ = E_c3 + E_c4.

E_cΣ is distinct from the kinetic energy of the whole material because the phase velocities are different. The total volume kinetic energy E_c is defined as

E_c = 1/2 ρ V^2 = ∑_3,4 1/2 ρ_k V^2.

From (<ref>), we deduce

ρ D/Dt(E_c/ρ) = ∂E_c/∂t + div(E_c V).

Using (<ref>), it follows

ρ D/Dt(E_c/ρ) = ∂/∂t(E_c − ∑_3,4 E_ck) + div[∑_3,4 (σ_k·V_k − E_ck V_k) + E_c V] − ∑_3,4 σ_k:grad V_k + (∑_3,4 ρ_k Z_k V_k + Z_i V_i)·E.

The last two terms of this equation are source terms. The penultimate one represents the viscous dissipation, that is to say, the conversion of kinetic energy into internal energy. The last term is the volume power of the electric force, which corresponds to a conversion of potential energy into kinetic energy. As for the first two terms, they correspond to the kinetic energy flux, which is due both to the work of the contact forces ∑_3,4 div(σ_k·V_k) and to the relative velocity of the two phases: the kinetic energy of the phases with respect to the barycentric reference frame becomes indeed part of the internal energy of the whole material.

§.§ Total energy conservation law

The total energy of the present system is the sum of its internal, potential and kinetic energies. The energy fluxes come from the work of the contact forces and from heat conduction. The microscale energy conservation law for the phase k can be written as

∂E_k^0/∂t + div[E_k^0 V_k^0 − σ_k^0·V_k^0 + Q_k^0] = 0

where

E_k^0 = U_k^0 + 1/2 ρ_k^0 (V_k^0)^2 + 1/2 E_k^0·D_k^0

is the total microscale energy of the phase k. Q_k^0 denotes the microscale heat flux of the phase k and U_k^0 its microscale internal energy. Averaging over the R.E.V. leads to

∂E_k/∂t + div(E_k V_k) − div(σ_k·V_k) + div Q_k = F_k·V_k + P_k

where

E_k = ⟨χ_k E_k^0⟩ = U_k + E_ck + E_pk, U_k = ⟨χ_k U_k^0⟩, Q_k = ⟨χ_k Q_k^0⟩

and

P_k = ⟨−Q_k^0·n_kχ_i⟩.

F_k·V_k + P_k represents the energy exchanges between the different phases through the interfaces: work of the contact forces and heat fluxes. We obtain the following condition for the interfaces (see appendix)

∂E_i/∂t + div(E_i V_i) = −P_3 − P_4 − F_3·V_3 − F_4·V_4

where E_i is the total energy density of the interfaces averaged over the R.E.V.. On summing equations (<ref>) for k = 3,4 and (<ref>), we obtain the conservation law of the total volume energy E of the whole material

ρ D/Dt(E/ρ) = div(∑_k=3,4 σ_k·V_k) − div Q

where

E = ∑_3,4,i E_k = U + E_c + E_p, Q = ∑_k=3,4 Q_k.

The source term of this equation is zero, which expresses the conservation of energy. ∑_3,4 σ_k·V_k and Q represent the volume power of the contact forces and the heat fluxes of the complete medium, respectively.

§.§ Internal energy balance equation

The internal energy equation is obtained by subtracting the kinetic and potential energy equations (<ref>) and (<ref>) from the total energy conservation law (<ref>):

∂U_k^0/∂t + div(U_k^0 V_k^0 + Q_k^0) = σ_k^0:grad V_k^0 + (I_k^0 − ρ_k^0 Z_k^0 V_k^0)·E_k^0.

Algebraic manipulations of (<ref>), (<ref>) and (<ref>) lead to

∂U_k/∂t + div(U_k V_k + Q_k) = σ_k:grad V_k + i_k·E_k − ⟨Q_k^0·n_kχ_i⟩

and for the interfaces (see appendix)

∂U_i/∂t + div(U_i V_i) = ⟨Q_3^0·n_3χ_i + Q_4^0·n_4χ_i⟩ − i_i·E_i

where U_i denotes the volume internal energy of the interfaces included in the R.E.V.. Let us define U_Σ as the sum of the volume internal energies of the different phases

U_Σ = U_3 + U_4 + U_i.

From (<ref>), we derive

ρ D/Dt(U_Σ/ρ) = ∑_3,4 (σ_k:grad V_k) + i·E − div Q

where i represents the diffusion current, consisting of the diffusion currents of the interfaces and of the cations in the solution:

i = I − ∑_k=3,4 (ρ_k Z_k V_k) − Z_i V_i = ρ_1 Z_1 (V_1 − V_4) + i_i.

U_Σ represents only a part of the internal energy of the whole material; another part comes from the motion of the different phases in the barycentric reference frame. The internal energy of the whole material is defined by

U = E − E_c − E_p = U_Σ + E_cΣ − E_c.

One deduces

ρ D/Dt(U/ρ) = div(∑_3,4 E_ck V_k − E_c V) + ∂/∂t(∑_3,4 E_ck − E_c) − div Q + ∑_3,4 (σ_k:grad V_k) + i·E.

The first two terms on the right-hand side represent the volume internal energy flux due to the relative velocities of the phases. The fourth one is the volume kinetic energy converted into heat by viscous dissipation. The last term is the volume heat source by Joule effect in the solution.

§ DISCUSSION

The conservation laws obtained for the global material encompass simpler limiting cases.
Assuming that the material is not electrically charged, or removing the electric field, we obtain the equations governing a single-phase flow in a porous medium <cit.>. In the case where the stress tensor is zero and the velocities of the two phases are identical and uniform, we find the equations of a charged rigid solid subjected to an electric field.

The balance equations of the kinetic, potential, internal and total energies all have the same structure: the energy variation following the motion of one constituent, which is a particle derivative, is the sum of a flux and of source terms. The equations we write are relative to a thermodynamically closed system because of the use of the material derivative. Source terms correspond to the conversion of one kind of energy into another one. At the microscopic scale, we obtain the following tables for the phase k.

Fluxes:
  E_pk^0 : —
  E_ck^0 : div(σ_k^0·V_k^0)
  U_k^0  : −div Q_k^0
  E_k^0  : div(σ_k^0·V_k^0 − Q_k^0)

Source terms:
         | E_c ⟷ E_p               | U ⟷ E_p                           | E_c ⟷ U
  E_pk^0 | −ρ_k^0Z_k^0E_k^0·V_k^0  | −(I_k^0 − ρ_k^0Z_k^0V_k^0)·E_k^0  | —
  E_ck^0 | +ρ_k^0Z_k^0E_k^0·V_k^0  | —                                 | −σ_k^0:grad V_k^0
  U_k^0  | —                       | +(I_k^0 − ρ_k^0Z_k^0V_k^0)·E_k^0  | +σ_k^0:grad V_k^0
  E_k^0  | —                       | —                                 | —

Fluxes can be considered as the rate of variation of the quantity associated with the conduction phenomena. The flux of kinetic energy is due to the work of the contact forces, and the flux of internal energy to heat conduction. The total energy flux is then the sum of the two previous ones. We point out that there is no flux for the potential energy. The viscous dissipation σ_k^0:grad V_k^0 transforms kinetic energy into heat, that is to say into internal energy. The work of the electric forces produces two source terms: the first one is the scalar product of the electric field E_k^0 and of the diffusion current I_k^0 − ρ_k^0Z_k^0V_k^0, which is the electric current measured in the barycentric reference frame. It can be seen as Joule heating, that is, as a conversion of potential energy into internal energy. The other part, ρ_k^0Z_k^0V_k^0·E_k^0, results in a motion of the electric charges subject to the electric field; potential energy is thus transformed into kinetic energy. The energy conservation law is consequently satisfied: there is no source term in the balance equation of the total energy.

We can examine in the same way the balance equations for one phase averaged over the R.E.V.. This highlights the source terms:

        | E_c ⟷ E_p        | U ⟷ E_p    | E_c ⟷ U
  E_pk  | −ρ_kZ_kV_k·E_k   | −i_k·E_k   | —
  E_ck  | +ρ_kZ_kV_k·E_k   | —          | −σ_k:grad V_k
  U_k   | —                | +i_k·E_k   | +σ_k:grad V_k

Viscous dissipation and Joule heating transform kinetic energy and potential energy into internal energy, respectively. The conversion of potential energy into kinetic energy is once more due to the motion of the electric charges under the effect of the electric field. The other terms of the equations can be presented in the form:

        | flux                  | interfacial exchanges
  E_pk  | —                     | —
  E_ck  | div(σ_k·V_k)          | +F_k·V_k
  U_k   | −div Q_k              | −⟨Q_k^0·n_kχ_i⟩
  E_k   | div(σ_k·V_k − Q_k)    | +F_k·V_k + P_k

where

F_k·V_k = ⟨(σ_k^0·n_k)·V_k^0χ_i⟩, P_k = ⟨−Q_k^0·n_kχ_i⟩.

As before, the flux of internal energy is the heat transfer by conduction, and the flux of kinetic energy is the volume power of the contact forces within the phase. Additional terms arise from this analysis; they represent exchanges between the phases through the interfaces. F_k·V_k is the volume power of the interaction forces acting on the phase k and corresponds to a kinetic energy input. −⟨Q_k^0·n_kχ_i⟩ results from the heat transfer through the interface and modifies the internal energy.
The sum of these two terms modifies the total energy of the considered phase. Concerning the whole E.A.P., we obtain the following decomposition:

Fluxes:
  E_p : —
  E_c : div[∑_3,4(σ_k·V_k − E_ck V_k) + E_c V] + ∂/∂t(E_c − ∑_3,4 E_ck)
  U   : div[∑_3,4 E_ck V_k − E_c V − Q] + ∂/∂t(∑_3,4 E_ck − E_c)
  E   : div(∑_3,4 σ_k·V_k − Q)

Source terms:
       | E_c ⟷ E_p                            | U ⟷ E_p  | E_c ⟷ U
  E_p  | −(∑_k=3,4(ρ_kZ_kV_k) + Z_iV_i)·E     | −i·E     | —
  E_c  | +(∑_3,4 ρ_kZ_kV_k + Z_iV_i)·E        | —        | −∑_3,4 σ_k:grad V_k
  U    | —                                    | +i·E     | +∑_3,4(σ_k:grad V_k)

The energy flux comes from the work of the contact forces in the different phases and from the heat transfer by conduction; the first one is a flux of kinetic energy, the second one a flux of internal energy. The flux of potential energy is still zero. An additional flux term appears: the kinetic energy of the different phases measured in the barycentric reference frame; this kinetic energy is indeed a part of the internal energy of the global material. The source terms include the viscous dissipation, which transforms kinetic energy into heat, and the Joule heating, which transforms potential energy into internal energy. This last term is linked to the diffusion current created by the motion of the interfacial charges and by the motion of the cations in the solution reference frame. The global motion of the charges under the influence of the electric field turns potential energy into kinetic energy.

§ CONCLUSION

We have modelled an electro-active, ionic, water-saturated polymer placed in an electric field. The polymer is fully dissociated, releasing cations of small size. This system is depicted as the superposition of two continuous media: a deformable porous medium constituted by the polymer backbone embedded with anions, in which flows an ionic solution composed of water and released cations. We have deduced the microscale conservation laws of each phase: mass continuity equation, linear momentum conservation law, Maxwell's equations and energy balance laws. Then we derived the physical quantities attached to the interfaces. An average over the R.E.V. of the material has provided macroscale conservation laws, first for each phase and then for the global E.A.P.. Since the three constituents of the material (solid, solvent and cations) have different velocities, we have used for this last step the material derivative concept in order to obtain an Eulerian formulation of the conservation laws.

We have examined the balance equations of the different energies (kinetic, potential and internal ones), and we have put the emphasis on the phenomena responsible for the conversion of one kind of energy into another: viscous friction, Joule effect and charge motion under the effect of the electric field. The first two result in dissipation. Moreover, the macroscale equations relative to each phase allow an evaluation of the energy exchanges through the interfaces.

Using the linear thermodynamics of irreversible processes, we should now be able to determine the dissipation potential and to derive the phenomenological equations governing this system. This will be the subject of a forthcoming work.

§ APPENDIX : INTERFACE MODELLING

In practice, the contact zone between phases 3 and 4 has a certain thickness; extensive physical quantities like mass density, linear momentum and energy vary continuously from one bulk phase to the other. This complicated reality can be modelled by two uniform bulk phases separated by a discontinuity surface Σ whose localization is arbitrary. Let Ω be a cylinder crossing Σ, whose bases are parallel to Σ.
We denote by Ω_3 and Ω_4 the parts of Ω respectively included in phases 3 and 4. The continuous quantities relative to the contact zone are identified by a superscript ^0 and no subscript. A microscale quantity per unit surface g_i^0 related to the interface is defined by

g_i^0 = lim_{Σ→0} 1/Σ {∫_Ω g^0 dv − ∫_Ω_3 g_3^0 dv − ∫_Ω_4 g_4^0 dv}

where Ω_3 and Ω_4 are small enough so that g_3^0 and g_4^0 are constant. Its average over the R.E.V. is the volume quantity g_i defined by

g_i = ⟨g_i^0 χ_i⟩.

The balance equation of the interfacial quantity g_i^0 is written as (Ishii, <cit.>)

∂g_i^0/∂t + div_s(g_i^0 V_i^0) = ∑_3,4 [g_k^0(V_k^0 − V_i^0)·n_k + J_k^0·n_k] − div_s J_i^0 + ϕ_i^0

where div_s denotes the surface divergence operator, J_i^0 is the surface flux of g_i^0, J_k^0 the flux of g_k^0 and ϕ_i^0 the surface source term. We arbitrarily fix the interface position in such a way that it has no mass density:

ρ_i^0 = lim_{Σ→0} 1/Σ {∫_Ω ρ^0 dv − ∫_Ω_3 ρ_3^0 dv − ∫_Ω_4 ρ_4^0 dv} = 0.

From (<ref>), we deduce that the linear momentum and the kinetic energy per unit surface of the interface, respectively denoted P_i^0 and E_ci^0, are zero:

P_i^0 = 0, E_ci^0 = 0.

In the same way, we define the charge per unit surface Z_i^0, the surface current vector I_i^0, the surface diffusion current i_i^0, the surface potential energy E_pi^0, the surface internal energy U_i^0 and the surface total energy E_i^0. The balance equations of these quantities read

∂Z_i^0/∂t + div_s(Z_i^0 V_i^0) = i_3^0·n_3 + i_4^0·n_4 − div_s i_i^0

∂P_i^0/∂t + div_s(P_i^0⊗V_i^0) = −σ_3^0·n_3 − σ_4^0·n_4 + Z_i^0 E_i^0

∂E_pi^0/∂t + div_s(E_pi^0 V_i^0) = −I_i^0·E_i^0

∂E_i^0/∂t + div_s(E_i^0 V_i^0) = −(σ_3^0·n_3)·V_3^0 − (σ_4^0·n_4)·V_4^0 + Q_3^0·n_3 + Q_4^0·n_4

∂U_i^0/∂t + div_s(U_i^0 V_i^0) = Q_3^0·n_3 + Q_4^0·n_4 + i_i^0·E_i^0.

Averaging over the R.E.V., this leads to the boundary conditions below:

∂Z_i/∂t + div I_i = ⟨i_3^0·n_3χ_i⟩ + ⟨i_4^0·n_4χ_i⟩

F_3 + F_4 = Z_i E_i

∂E_pi/∂t + div(E_pi V_i) = −I_i·E_i

∂E_i/∂t + div_s(E_i V_i) = −P_3 − P_4 − F_3·V_3 − F_4·V_4

∂U_i/∂t + div(U_i V_i) = ⟨Q_3^0·n_3χ_i + Q_4^0·n_4χ_i⟩ + i_i·E_i.

Moreover, we have

I_i = Z_i V_i + i_i.

§ NOTATIONS

Subscripts k = 1, 2, 3, 4, i respectively represent cations, solvent, solid, solution (water and cations) and interface; quantities without subscript refer to the whole material. The superscript ^0 denotes a local quantity; the absence of superscript indicates an average quantity at the macroscopic scale.
Microscale volume quantities are relative to the volume of the phase, average quantities to the volume of the whole material.

* C : cation molar concentration (relative to the liquid phase);
* D (D_k, D_k^0) : electric displacement field;
* E (E_k, E_k^0) : total energy density (internal, kinetic and potential);
* E (E_k, E_k^0) : electric field;
* E_c (E_cΣ, E_ck, E_ck^0) : kinetic energy density;
* E_p (E_pk, E_pk^0) : potential energy density;
* F = 96487 C mol^-1 : Faraday's constant;
* F_k : resultant of the mechanical stresses exerted on the phase k by the other phase;
* I (I_k, I_k^0) : current density vector;
* i (i_k, i_k^0) : diffusion current;
* M_k : molar mass of component k;
* n_k : outward-pointing unit normal of phase k;
* P_k : heat flux through interfaces;
* P_i^0 : local surface linear momentum of the interface;
* Q (Q_k, Q_k^0) : heat flux;
* U (U_Σ, U_k, U_k^0) : internal energy density;
* V (V_k, V_k^0) : velocity;
* z_k : number of elementary charges of an ion k;
* Z (Z_k, Z_k^0) : total electric charge per unit of mass;
* Z_i (Z_i^0) : electric charge density per unit surface;
* ε (ε_k, ε_k^0) : permittivity;
* ρ (ρ_k, ρ_k^0) : mass density;
* σ (σ_k, σ_k^0) : stress tensor;
* ϕ_k : volume fraction of phase k;
* χ_k : function of presence of phase k.

The authors would like to thank D. Lhuillier and O. Kavian for their fruitful and stimulating discussions.

shahinpoor1998-a Shahinpoor M., Bar-Cohen Y., Simpson J., Smith J., Ionic polymer-metal composites (IPMC's) as biomimetic sensors, actuators and artificial muscles - a review. Smart Materials and Structures Vol 7, R15-R30 (1998).
shahinpoor1998-b Shahinpoor M., Bar-Cohen Y., Xue T., Simpson J., Smith J., Ionic polymer-metal composites (IPMC's) as biomimetic sensors and actuators. Proceedings of the SPIE's 5th Annual International Symposium on Smart Structures and Materials, 1-5 March 1998, San Diego (Paper No. 3324-27).
samatham2007 Samatham R., Kim K., Dogruer D., Choi H., Konyo M., Madden J., Nakabo Y., Nam J., Su J., Tadokoro S., Yim W., Yamakita M., Active polymers: An overview. In: Kwang K.J., Tadokoro S. (Eds.), Electroactive Polymers for Robotic Applications. Springer (2007), chapter 1.
sodano2004 Sodano H.A., Inman D.J., Park G., A review of power harvesting from vibration using piezoelectric materials. The Shock and Vibration Digest, Vol 36(3), p 197-205 (2004).
liu2004 Liu Y., Ren K.L., Hofmann H.F., Zhang Q., Electroactive polymers for mechanical energy harvesting. Proc. SPIE 5385, Smart Structures and Materials 2004: Electroactive Polymer Actuators and Devices (EAPAD), 17 (July 27, 2004); doi:10.1117/12.547133.
liu2005 Liu Y., Ren K.L., Hofmann H.F., Zhang Q., Investigation of electrostrictive polymers for energy harvesting. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, Vol 52(12), p 2411-2417 (2005).
shahinpoor1994 Shahinpoor M., Continuum electromechanics of ionic polymeric gels as artificial muscles for robotic applications. Smart Materials and Structures Vol 3, p 367-372 (1994).
shahinpoor2000 Shahinpoor M., Kim J.K., The effect of surface-electrode resistance on the performance of ionic polymer-metal composite (IPMC) artificial muscles. Smart Materials and Structures Vol 9, p 543-551 (2000).
bar-cohen2002 Bar-Cohen Y., Sherrit S., Lih S., Characterization of the electromechanical properties of EAP materials. In: Proceedings of EAPAD, SPIE's 8th Annual International Symposium on Smart Structures and Materials, SPIE, p. 286-293 (2002).
nemat2000 Nemat-Nasser S., Li J., Electromechanical response of ionic polymer-metal composites. Journal of Applied Physics, Vol 87, p 3321-3331 (2000).
newbury2002 Newbury K.M., Leo D.J., Electromechanical modelling and characterization of ionic polymer benders. Journal of Intelligent Material Systems and Structures, Vol 13, p 51-60 (2002).
yoon2007 Yoon W.J., Reinhall P.G., Seidel E.J., Analysis of electro-active polymer bending: A component in a low cost ultrathin scanning endoscope. Sensors and Actuators, Vol A133, p 506-517 (2007).
nemat2002 Nemat-Nasser S., Micro-mechanics of actuation of ionic polymer-metal composites. Journal of Applied Physics, Vol 92, p 2899-2915 (2002).
degennes2000 De Gennes P., Okumura K., Shahinpoor M., Kim K., Mechanoelectric effects in ionic gels. Europhysics Letters, Vol 40, p 513-518 (2000).
Chabe Chabé J., Etude des interactions moléculaires polymère-eau lors de l'hydratation de la membrane Nafion, électrolyte de référence de la pile à combustible, PhD Thesis, Université Joseph Fourier Grenoble I, http://tel.archives-ouvertes.fr/docs/00/28/59/99/PDF/THESE_JCS.pdf (2008).
Nigmatulin79 Nigmatulin R.I., Spatial averaging in the mechanics of heterogeneous and dispersed systems, Int. J. Multiphase Flow, Vol 5, p 353-385 (1979).
Nigmatulin90 Nigmatulin R.I., Dynamics of multiphase media, Vols 1 and 2, Hemisphere, New-York (1990).
Drew83 Drew D.A., Mathematical modeling of two-phase flows, Ann. Rev. Fluid Mech., Vol 15, p 261-291 (1983).
Drew98 Drew D.A., Passman S.L., Theory of multicomponent fluids, 308 p., Springer-Verlag, New-York (1998).
Ishii06 Ishii M., Hibiki T., Thermo-fluid dynamics of two-phase flow, 462 p., Springer, New-York (2006).
Lhuillier03 Lhuillier D., A mean-field description of two-phase flows with phase changes, Int. J. Multiphase Flows, Vol 29, p 511-525 (2003).
Colette Collette F., Vieillissement hygrothermique du Nafion, PhD Thesis, Ecole Nationale Supérieure d'Arts et Métiers, http://tel.archives-ouvertes.fr/docs/00/35/48/47/PDF/These_Floraine_COLLETTE_27112008.pdf (2008).
Gierke Gierke T.D., Munn G.E., Wilson F.C., The morphology in Nafion perfluorinated membrane products, as determined by wide- and small-angle X-ray studies, Journal of Polymer Science, 19, 1687 (1981).
Coussy95 Coussy O., Mechanics of porous continua, 455 p., Wiley, Chichester (1995).
Biot77 Biot M.A., Variational Lagrangian-thermodynamics of nonisothermal finite strain. Mechanics of porous solids and thermonuclear diffusion, Int. J. Solids Structures, Vol 13, p 579-597 (1977).
Coussy89 Coussy O., Thermomechanics of saturated porous solids in finite deformation, Eur. J. Mech., A/Solids, Vol 8, 1, p 1-14 (1989).
Jackson Jackson J.D., Classical electrodynamics, 848 p., John Wiley & Sons, New-York (1975).
Maugin Maugin G.A., Continuum mechanics of electromagnetic solids, 598 p., North-Holland, Amsterdam (1988).
| http://arxiv.org/abs/1706.08730v1 | {
"authors": [
"Mireille Tixier",
"Joël Pouget"
],
"categories": [
"cond-mat.soft"
],
"primary_category": "cond-mat.soft",
"published": "20170627085450",
"title": "Conservation laws of an electro-active polymer"
} |
[Electronic address: [email protected]] Max Planck Institute for the Structure and Dynamics of Matter and Center for Free-Electron Laser Science, Department of Physics, Luruper Chaussee 149, 22761 Hamburg, Germany
[Electronic address: [email protected]] Max Planck Institute for the Structure and Dynamics of Matter and Center for Free-Electron Laser Science, Department of Physics, Luruper Chaussee 149, 22761 Hamburg, Germany
[Electronic address: [email protected]] Max Planck Institute for the Structure and Dynamics of Matter and Center for Free-Electron Laser Science, Department of Physics, Luruper Chaussee 149, 22761 Hamburg, Germany
[Electronic address: [email protected]] Max Planck Institute for the Structure and Dynamics of Matter and Center for Free-Electron Laser Science, Department of Physics, Luruper Chaussee 149, 22761 Hamburg, Germany

For certain correlated electron-photon systems we construct the exact density-to-potential maps, which are the basic ingredients of a density-functional reformulation of coupled matter-photon problems. We do so for numerically exactly solvable models consisting of up to four fermionic sites coupled to a single photon mode. We show that the recently introduced concept of intra-system steepening (T. Dimitrov et al., New J. Phys. 18, 083004 (2016)) can be generalized to coupled fermion-boson systems and that the intra-system steepening indicates strong exchange-correlation (xc) effects due to the coupling between electrons and photons. The reliability of the mean-field approximation to the electron-photon interaction is investigated and its failure in the strong-coupling regime analyzed. We highlight how the intra-system steepening of the exact density-to-potential maps also becomes apparent in observables such as the photon number or the polarizability of the electronic subsystem. We finally show that a change in functional variables can make these observables behave more smoothly, and we exemplify that the density-to-potential maps can give us physical insight into the behavior of coupled electron-photon systems by identifying a very large polarizability due to ultra-strong electron-photon coupling.

Exact functionals for correlated electron-photon systems
Angel Rubio
December 30, 2023
========================================================

§ INTRODUCTION

Recent experiments <cit.> at the interface of quantum chemistry, material science and quantum optics make it possible to tailor the physical and chemical properties of a system by coupling light strongly to the matter, e.g. by placing it in an optical cavity. The theoretical description of such experiments requires a full quantum treatment of the entire system, including the electronic matter and the electromagnetic field. Common electronic-structure methods, such as density-functional theory (DFT) <cit.>, allow an efficient description of the quantum nature of the electrons, while the electromagnetic field is treated as a static and fixed external perturbation. To also include the electromagnetic field explicitly, and thus be able to describe, e.g., chemical systems in an optical cavity, time-dependent and ground-state DFT have recently been generalized to correlated electron-photon systems <cit.>. This new density-functional framework for coupled matter-photon problems has been termed quantum electrodynamical density-functional theory (QEDFT) <cit.>. Similar to DFT, QEDFT is an exact framework to describe the many-body problem <cit.>.
Both frameworks exploit the one-to-one correspondence between the internal and external variables, which are formally connected via a Legendre transformation. As a consequence of these so-called density maps, one can determine every observable of the quantum system as a functional of the internal variables only. While in DFT the internal variable is the one-particle electron density (conjugate to the external scalar potential), in QEDFT we have two internal variables (one for the electrons and one for the photons). These variables depend on the form of the electron-photon Hamiltonian under consideration <cit.>.

In DFT, to calculate the physical density of a many-body system and thus avoid the numerically infeasible correlated many-body wave function, one usually employs the Kohn-Sham scheme <cit.>. In this approach the N-particle Schrödinger equation is replaced by N coupled, non-linear one-particle equations, which are numerically tractable. The price to pay is that these effective particles are subject to an in general unknown xc potential, which makes up for all the missing many-body effects. Also in QEDFT we can replace the full electron-photon Schrödinger equation by coupled, non-linear one-particle equations. The electronic subsystem is again described by equations for single particles that are subject to an xc field. In this case, however, the effective field does not only contain contributions from many-body effects due to the electron-electron interaction but also from many-body effects due to the photon-electron interaction <cit.>. Further, the photonic subsystem is described by an inhomogeneous Maxwell equation, where the inhomogeneity is usually given explicitly by the electronic subsystem <cit.>.

In practice, calculations within the new QEDFT framework require reliable approximations to the unknown xc potentials. Herein, QEDFT profits from the long-standing search <cit.> in DFT for more reliable xc potentials that efficiently mimic the electron-electron interaction. While common xc functionals can be used to describe the many-body effects due to electron-electron interactions, new functionals that mimic the electron-photon interaction have to be developed. In this work, we are concerned with the xc potential of the light-matter interaction, i.e. the potential an electron encounters due to its coupling to the electromagnetic field. For the electron-photon contributions, first approximations for the xc potential along the lines of the optimized effective potential (OEP) approach have already been demonstrated to be practical <cit.>. If, however, common approximations for the electron-electron many-body effects are used, then clearly QEDFT will face the same challenges as standard DFT when systems with strong electron-electron correlations are considered. To better understand such situations in DFT, the impact of static correlation and localization on different exact density maps has been analyzed in a recent work <cit.>. By investigating specific integrated quantities of these maps, e.g. the density difference δn between two parts of the system, it has been shown that static correlation and localization can be quantified by the concept of intra-system steepening. For δn a step can be found that becomes steeper with increasing correlation in the system. This feature translates to different functionals of the density, and corresponds to the full real-space behavior of steps and peaks in the exact xc potential <cit.>.
In QEDFT we have, besides the electron-electron correlations, also electron-photon correlations. For them as well, corresponding step and peak structures in the xc potential appear in real space and pose a challenge for constructing approximate xc potentials that are reliable for strong electron-photon correlations <cit.>. Consequently, can we analyze the correlation and localization in a similar manner for coupled electron-photon systems, and is the intra-system steepening a general feature of correlated systems? In this work, we construct the exact density-to-potential maps of ground-state QEDFT <cit.> and examine the intra-system steepening related to the real-space properties of the exact xc potentials for correlated electron-photon systems. For electron-photon model systems we show that the localization of the electrons and the displacement of the photon mode depend on the ratio between the kinetic energy and the coupling term between electrons and photons. Features of this intra-system steepening can also be found in other observables, such as the photon number. A change in functional variables though, e.g. by going from the external to the conjugate internal variables, can make the behavior of these observables more regular. We further show how the validity of the mean-field approximation to the electron-photon coupling can be investigated by analyzing the intra-system steepening. Finally, we highlight how density-potential maps in electron-photon systems can be used also outside of QEDFT to analyze the properties of physical systems, by investigating the polarizability of an electron-photon system when increasing the coupling strength.

§ EXACT MAPS AND THE KOHN-SHAM CONSTRUCTION IN QEDFT

QEDFT allows one to describe the quantum nature of electrons and photons on the same footing by reformulating coupled matter-photon problems in an exact quantum-fluid description. In the following we consider the interaction of a system of n_e electrons, e.g., a molecule in the Born-Oppenheimer approximation <cit.>, with n_p quantized modes of a photon field. A typical experimental situation would be to place the matter system inside an optical cavity, where only specific frequencies are assumed to interact with the multi-particle system. Such a situation can be described by employing the following Hamiltonian <cit.>

Ĥ(t) = Ĥ_e(t) + Ĥ_p(t)

Ĥ_e(t) = ∑_i=1^n_e (−ħ^2/2m ∇⃗_i^2 + v_ext(r_i,t)) + e^2/(4πϵ_0) ∑_i>j 1/|r_i − r_j|

Ĥ_p(t) = ∑_α=1^n_p { 1/2 [p̂_α^2 + ω_α^2 (q̂_α − λ_α/ω_α·eR)^2] + j_ext^(α)(t)/ω_α q̂_α }

R = ∑_i=1^n_e r_i,

where R refers to the electronic dipole operator. Note that in this work we neglect the nuclear degrees of freedom by working in the clamped-ion approximation. Therefore, the Hamiltonian given above only couples the electromagnetic field to the electrons. Extending the work to the interaction between the ions and the field is straightforward, but would make the discussion in the present work more cumbersome. Besides the usual Schrödinger Hamiltonian Ĥ_e(t) that describes the charged-particle system, we now also have n_p photon modes with frequencies ω_α that are coupled in dipole approximation to the electronic system. Here the photon momenta p̂_α = 1/i √(ω_α/2)(â_α − â_α^†), written in terms of the usual creation and annihilation operators, are connected to the magnetic field of mode α, and q̂_α = √(1/2ω_α)(â_α + â_α^†) is proportional to the electric displacement field. Therefore we have to subtract the polarization of the electronic system, such that (ω_α q̂_α − λ_α·eR) corresponds to the electric field.
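For later reference it is useful to expand the quadratic coupling term explicitly (a small added step, following directly from the Hamiltonian above):

```latex
\frac{\omega_\alpha^{2}}{2}\Big(\hat q_\alpha-\frac{\boldsymbol{\lambda}_\alpha}{\omega_\alpha}\cdot e\mathbf{R}\Big)^{2}
  = \frac{\omega_\alpha^{2}}{2}\,\hat q_\alpha^{2}
  - \omega_\alpha \hat q_\alpha\,\big(\boldsymbol{\lambda}_\alpha\cdot e\mathbf{R}\big)
  + \frac{\big(\boldsymbol{\lambda}_\alpha\cdot e\mathbf{R}\big)^{2}}{2}
```

i.e. a bilinear electron-photon coupling −ω_α q̂_α (λ_α·eR) plus the dipole self-interaction term (λ_α·eR)^2/2; both reappear in the lattice model of Eq. (<ref>) below as −ωλq̂d̂ and (λd̂)^2/2.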
The coupling strength is |λ_α| and λ_α/|λ_α| is the polarization vector. Further, j_ext^(α)(t) corresponds to an external dipole moment that drives mode α. To reformulate the above problem we employ a bijective mapping between the external variables of the system, i.e. v_ext(r,t) and j_ext^(α)(t), and the conjugate internal variables <cit.>, given here by n(r,t) and q_α(t), i.e.

(v_ext(r,t), j_ext^(α)(t)) 1:1⟷ (n(r,t), q_α(t)).

While in principle this mapping allows one to calculate the exact internal variables by solving a local-force equation for the charge density non-linearly coupled to a classical Maxwell equation <cit.>, in general we do not know the exact form of the momentum-stress and interaction forces in such equations <cit.>. So in practice we have to use approximations. The standard way to devise such approximations is the use of a non-interacting auxiliary system, a so-called Kohn-Sham system <cit.>. In the Kohn-Sham scheme the difference in forces between the non-interacting and the interacting system is subsumed in a mean-field term and the unknown xc potential. In the case of coupled electron-photon systems, the mean-field contribution is the classical Maxwell field, which has the usual longitudinal Hartree contribution and now also transversal terms, and the xc potential contains the electron-electron and electron-photon many-body effects. Neglecting the electron-photon many-body effects in the xc potential leads to a mean-field potential that is identical to a classical Maxwell-Schrödinger simulation <cit.>.

Approximations to the xc potential of the coupled electron-photon system face problems similar to those of purely electronic systems. When increasing the correlation, i.e. increasing the coupling strength |λ_α|, the accuracy of the mean-field or the exchange-only OEP <cit.> decreases. To construct improved approximations that can treat strong-coupling situations more accurately, we need a better understanding of the electron-photon contributions in the strong-coupling limit. To this end we explicitly construct and investigate the exact fundamental maps that underlie the framework of ground-state QEDFT. As model system, we choose the Rabi-Hubbard model, i.e. a few-site model coupled to a single photon mode. We consider three different setups: (i) a single electron on two sites, where the electron-electron interaction favoring localization in the system is equal to zero; (ii) two electrons on two sites, where we model the electron-electron repulsion by a Hubbard interaction term, and we analyze both maps in the resonant limit for different coupling strengths; (iii) four electrons on four sites, where we connect the intra-system steepening to the modification of the electric polarizability of such systems.

§ TWO-SITE RABI-HUBBARD MODEL

The Rabi model <cit.>, which consists of one electron on two sites coupled to one photon mode, has been heavily investigated in the context of light-matter interactions <cit.>, e.g. recently in the context of the photon blockade <cit.>. In this work, we employ a generalized Rabi model with n_s sites that can host up to 2n_s interacting electrons (Rabi-Hubbard model). The corresponding model Hamiltonian reads as follows [We note here that in the continuum limit the dipole self-interaction term (λ_α·eR)^2/2 becomes important, see e.g. the discussion in Ref. <cit.>.
However, in the two-site case the dipole self-energy corresponds to a constant energy shift that we neglect in the discussion of the two-site model.]

Ĥ_0 = -t_0 ∑_i=1,σ=↑,↓^n_s-1 (ĉ_i,σ^†ĉ_i+1,σ + ĉ_i+1,σ^†ĉ_i,σ) + U_0 ∑_i=1^n_s n̂_i,↑n̂_i,↓ + ωâ^†â - ωλq̂d̂ + (j_ext/ω)q̂ + (λd̂)^2/2 + v_ext d̂,

where the photon displacement operator is given by q̂ = √(1/2ω)(â + â^†) (the photon momentum operator is p̂ = (1/i)√(ω/2)(â - â^†)) and λ introduces a coupling between the electronic and photonic parts of the system. The electronic part is described by the standard Hubbard model with the on-site parameter U_0, the hopping matrix element t_0, and the operators ĉ^†_i,σ and ĉ_i,σ that create or destroy an electron with spin σ on site i. The electron density operator on site i is given by n̂_i = ∑_σ ĉ^†_i,σĉ_i,σ. We furthermore specify the dipole moment of the electronic system; for two sites this corresponds to δn = n_1 - n_2, i.e., the density difference between the two sites of the lattice, and d = δn [We emphasize that the two-site Rabi-Hubbard Hamiltonian of Eq. <ref> is exactly identical to a Holstein-Hubbard Hamiltonian that is routinely used in the electron-phonon community, e.g., as discussed in Refs. <cit.>.]. In the case of the above Hamiltonian of Eq. <ref>, the pairs of conjugate variables are (v_ext, j_ext) and (d = ⟨d̂⟩, q = ⟨q̂⟩) <cit.>. A simple way to see that this is true from a purely electronic DFT perspective, and that helps to interpret the external term j_ext, is to perform a unitary transformation of the above Hamiltonian. With the coherent-shift operator Û[j_ext] = exp(i j_ext p̂/ω^3) we can recast the Hamiltonian of Eq. <ref> into the unitarily equivalent form

Ĥ_0' = Û^†Ĥ_0Û = -t_0 ∑_i=1,σ=↑,↓^n_s-1 (ĉ_i,σ^†ĉ_i+1,σ + ĉ_i+1,σ^†ĉ_i,σ) + U_0 ∑_i=1^n_s n̂_i,↑n̂_i,↓ + ωâ^†â - ωλq̂d̂ + (λd̂)^2/2 + (v_ext + λ/ω^2 j_ext)d̂ - (1/2ω^4) j_ext^2.

Thus, we see that the external dipole j_ext can be recast into an external potential on the electrons by a unitary transformation. Take, for instance, the two-site Rabi-Hubbard model as depicted in Fig. <ref>. If j_ext = 0 and a negative external potential v_ext < 0 acts on the system, the external potential localizes the electron on one site. The external dipole for the photons j_ext introduces a classical positive charge to the system that can counterbalance the effect of the external potential v_ext. From the usual Hohenberg-Kohn theorem we know that for any external potential ṽ_ext = (v_ext + λ/ω^2 j_ext) there is one and only one associated ground-state wave function Ψ_0'. And from this ground state we find the corresponding unique wave function of the original problem by Ψ_0 = Û[-j_ext]Ψ_0'. Thus purely electronic properties can be reconstructed from the situation with j_ext = 0, while the photonic observables will in general depend in a non-trivial manner on j_ext. Further, as can be deduced from the equations of motion for the photonic system (e.g., Eq. 2 in Ref. <cit.>), we can establish a direct connection between q, d and j_ext for the ground state (∂q/∂t = ∂^2q/∂t^2 = 0):

q = (λ/ω)d - (1/ω^3)j_ext.

Using the external variables v_ext and j_ext, we scan stepwise the external potentials of the photons and electrons. For each fixed pair of external potentials (v_ext, j_ext), we diagonalize the Hamiltonian given in Eq. <ref> using exact diagonalization <cit.> and obtain the corresponding ground-state wave function of the system, in the following denoted by Ψ_0(v_ext, j_ext). Using the exact wave function, we have access to the conjugate set of variables, i.e.,
(d, q), by evaluating the corresponding expectation values d = ⟨Ψ_0(v_ext,j_ext)|d̂|Ψ_0(v_ext,j_ext)⟩ and q = ⟨Ψ_0(v_ext,j_ext)|q̂|Ψ_0(v_ext,j_ext)⟩, corresponding to the electronic dipole and the photonic displacement coordinate. Scanning the parameters v_ext and j_ext allows us to construct the complete map between the conjugate sets of variables. For general many-body calculations, we can use the Kohn-Sham approach <cit.> to simulate the interacting many-body problem by solving equations for non-interacting particles. In the electron-photon situation that is presented here, we encounter two interaction terms, i.e., the electron-electron interaction modeled by a Hubbard on-site interaction and the electron-photon interaction. In general, we can set up a Kohn-Sham system for non-interacting electrons as presented in Refs. <cit.>. However, in this paper we focus on the effects of the electron-photon interaction on the density-to-potential maps, and we therefore include the electron-electron interaction in the Kohn-Sham system explicitly. Thus, in the case of a two-site lattice the Kohn-Sham system reads as follows:

Ĥ_fm,KS = -t_0 ∑_σ=↑,↓ (ĉ_1,σ^†ĉ_2,σ + ĉ_2,σ^†ĉ_1,σ) + U_0 ∑_i=1,2 n̂_i,↑n̂_i,↓ + v_S d̂,
Ĥ_ph,KS = ωâ^†â + (j_S/ω)q̂.

The emerging effective Kohn-Sham potential v_S and the effective current j_S are chosen such that the ground-state density is equal in the Kohn-Sham systems of Eqs. <ref>-<ref> and the full interacting problem of Eq. <ref>. While the effective current j_S is known explicitly <cit.>, i.e., j_S = -ω^2 λ d + j_ext, the effective potential v_S has to be approximated. To this end, we divide v_S as follows:

v_S = v_ext + v_M + v_xc,

where v_M and v_xc describe the mean-field part and the xc part, respectively. The simplest approximation to the fully coupled problem, and the starting point for the Kohn-Sham construction in the electron-photon case, is the mean-field approximation <cit.>, which is given by v_M = -ωλq and leads to the following Hamiltonians in the case of a two-site lattice:

Ĥ_fm,0 = -t_0 ∑_σ=↑,↓ (ĉ_1,σ^†ĉ_2,σ + ĉ_2,σ^†ĉ_1,σ) + U_0 ∑_i=1,2 n̂_i,↑n̂_i,↓ - ωλqd̂ + v_ext d̂,
Ĥ_ph,0 = ωâ^†â - ωλq̂d + (j_ext/ω)q̂,

where d = ⟨d̂⟩ and q = ⟨q̂⟩. To obtain the mean-field ground state, Eqs. <ref>-<ref> have to be solved either self-consistently, or Eq. <ref> can be exploited, leading to the following electronic equation:

Ĥ_fm,0 = -t_0 ∑_σ=↑,↓ (ĉ_1,σ^†ĉ_2,σ + ĉ_2,σ^†ĉ_1,σ) + U_0 ∑_i=1,2 n̂_i,↑n̂_i,↓ - λ^2 d d̂ + (λ/ω^2) j_ext d̂ + v_ext d̂.

In these equations, we apply the classical approximation only to the electron-photon interaction, while the electron-electron interaction is treated in a fully correlated manner. We may expect that such an approximation works well for the studied model in the weak-coupling regime and in the limit of infinite coupling <cit.>. To construct the exact v_xc of Eq. <ref> beyond the mean-field approximation, we can, for instance, use the Heisenberg equation of motion to find the connection between the electronic density d and v_S for the Kohn-Sham system and between d and v_ext in the many-body problem.
These equations read for the ground state as follows:

d[v_ext,j_ext] = ⟨(ωλq̂ - v_ext)/t_0 ∑_σ=↑,↓ (ĉ_1,σ^†ĉ_2,σ + ĉ_2,σ^†ĉ_1,σ)⟩ - (U_0/2t_0)⟨(ĉ_1,↑^†ĉ_2,↑ + ĉ_2,↑^†ĉ_1,↑)(n̂_1,↓ - n̂_2,↓)⟩ - (U_0/2t_0)⟨(ĉ_1,↓^†ĉ_2,↓ + ĉ_2,↓^†ĉ_1,↓)(n̂_1,↑ - n̂_2,↑)⟩,

d[v_S,j_S] = -(v_S/t_0) ∑_σ=↑,↓ ⟨(ĉ_1,σ^†ĉ_2,σ + ĉ_2,σ^†ĉ_1,σ)⟩ - (U_0/2t_0)⟨(ĉ_1,↑^†ĉ_2,↑ + ĉ_2,↑^†ĉ_1,↑)(n̂_1,↓ - n̂_2,↓)⟩ - (U_0/2t_0)⟨(ĉ_1,↓^†ĉ_2,↓ + ĉ_2,↓^†ĉ_1,↓)(n̂_1,↑ - n̂_2,↑)⟩,

where in the many-body problem the many-body wave function has to be employed to calculate observables, while in the Kohn-Sham system the factorizable Kohn-Sham wave function is employed. Since the electronic density d is by construction equal in the interacting system and the exact Kohn-Sham system, if the exact Kohn-Sham potential v_S is used, we find for the density-to-potential maps d[v_S] = d[v_ext]. By using the inverse mapping, i.e., v_ext[d,q], we can construct the exact xc potential of Eq. <ref> using <cit.>

v_xc^λ[d,q] = v_ext^λ=0[d,q] - v_ext^λ[d,q] - v_M^λ[d,q].

In the following, we construct the exact density-to-potential maps d[v_ext,j_ext] and the exact xc potentials v_xc^λ[d,q] to gain insight into how the electron-photon interaction influences the electronic system, and we draw conclusions about approximations for the corresponding xc potential. We start by discussing the Rabi-Hubbard model in setup (i), where a single electron is coupled to the photon mode of frequency ω = 1. The first situation we analyze is when the electron and the photons do not couple; see Fig. <ref> (a) (λ = 0). In this case varying j_ext has no effect on the density-to-potential map. Therefore, the density-to-potential map d[v_ext] is determined by the external potential v_ext alone. The dependency of d[v_ext] on v_ext is shown in the lower plot. We find a continuous and rather smooth mapping. Since we have restricted ourselves to a single electron, the dipole d, corresponding to the density difference between the two sites, can take values in the interval [-1,1]. In Fig. <ref> (b), we now introduce a finite coupling, here λ = 0.1, and plot the two-dimensional density-to-potential map d[v_ext,j_ext] for v_ext = [-5,5] and j_ext = [-50,50]. The first emerging feature in the plot is that two new normal modes appear <cit.>, i.e., the photon and electron degrees of freedom become correlated. This electron-photon correlation tilts the map, as shown in Fig. <ref>. The rotation can be constructed via ṽ_ext = v_ext + (λ/ω^2)j_ext and corresponds to the transformation using the coherent-shift operator as in Eq. <ref>. The diagonal cut through the plot is the new polaritonic degree of freedom, which is shown in the plot at the bottom. We find a broad smearing of the density-to-potential map. Fig. <ref> (c) shows the map for λ = 1. The plot is shown for v_ext = [-5,5] and j_ext = [-5,5]; hence the range of the photon external variable is narrower. In comparison to λ = 0.1, we find a steepening of the gradient in the density-to-potential plot, which we have earlier introduced as intra-system steepening <cit.>. To highlight the connection of the steepening to electronic correlation, Fig. <ref> shows the correlation entropy for the one-electron system, which is a good measure of static correlation and indicates how well the ground-state wave function is approximated by a single Slater determinant.
The correlation entropy is given by

S = -∑_j=1^∞ n_j ln n_j,

where the occupation numbers n_j are the eigenvalues of the reduced one-body density matrix <cit.>, which is given in terms of the many-body wave function Ψ(x⃗,x⃗_2,...,x⃗_N) as

ρ_1RDM(x⃗,x⃗') = ∫ d^3x⃗_2...d^3x⃗_N Ψ^∗(x⃗,x⃗_2,...,x⃗_N)Ψ(x⃗',x⃗_2,...,x⃗_N).

In spectral representation, the reduced density matrix can be written in terms of its eigenfunctions and eigenvalues as <cit.>

ρ_1RDM(x⃗,x⃗') = ∑_j n_j ϕ^∗_j(x⃗)ϕ_j(x⃗').

In Fig. <ref> the correlation entropy increases with the coupling between the photonic and electronic parts of the system, while the gradient of the maps in Fig. <ref> steepens. However, we emphasize that the map within this setup is still continuous. In contrast, the derivative discontinuity refers to the discontinuous behavior of the gradient of the density maps along the cut of the particle number at integer values <cit.>. The discontinuity is an exact concept for systems with a degenerate ground state, where the maps are constructed as convex combinations of the degenerate densities belonging to different particle numbers. The degeneracy of the eigenvalues of the ground state is due to an external potential within the Hamiltonian that serves as a Lagrange multiplier, shifting the ground-state energy to states with a different particle number. In the case of degeneracy, the derivative discontinuity shows up along the cut of the conjugate variable, e.g., in purely electronic systems along N or δn. We can conclude that the mapping becomes sharper with increasing electron-photon coupling strength λ and is therefore reminiscent of the case of static electronic correlation <cit.>. We plot the xc potential for this case in Fig. <ref>. In (a), we show the two-dimensional plot for λ = 0 and the cut for q = 0. Naturally, we find v_xc = 0 in this case, since electrons and photons do not interact. The case of λ = 0.1 is shown in (b). The cut along q = 0 shown at the bottom reveals a smooth curve for v_xc as a function of d. If we compare to the density-to-potential map of Fig. <ref> (b), we find that v_xc has the highest amplitude at the density values that show the highest derivative in the density-to-potential map. This is to be expected: the non-interacting auxiliary system has a rather smooth behavior (see Fig. <ref> (a)), while the fully coupled problem is subject to the intra-system steepening, and consequently the xc potential functional has to compensate this mismatch. Thus the intra-system steepening directly translates into the size of the xc potential, which in the case of the two-site Rabi-Hubbard model implies a large potential step between the sites. This is reminiscent of the step and peak structure of the photonic xc potential in full real space. In (c), we show the mapping for λ = 1. In this case v_xc has larger amplitudes in all regions, but its overall shape remains similar to the λ = 0.1 case. We note that such a scaling behavior could be employed to construct novel approximations to the xc potential. Further, we point out that the dependency of v_xc on q is below our numerical accuracy, thus very small in the considered parameter range. In general, q takes values from -∞ to ∞, and when q takes such high values it will affect v_xc more strongly. The (d,q) behavior of the xc functional will be discussed in a little more detail at the end of this section. In conclusion, we find that the steepening that is visible in Fig. <ref> along the new polaritonic coordinate ṽ_ext here becomes visible along d.
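For concreteness, the map construction just described can be sketched numerically. The following minimal Python example (our illustration, not the original implementation) diagonalizes the one-electron two-site Rabi Hamiltonian of Eq. <ref> in a truncated photon Fock space, scans the external pair (v_ext, j_ext), and evaluates the internal variables (d, q) together with the correlation entropy of Eq. <ref>. The parameter values t_0 = ω = 1, the truncation n_max and the scan grid are illustrative assumptions; the constant dipole self-energy of the two-site model is dropped, as discussed in the footnote above.

```python
import numpy as np

t0, omega, lam = 1.0, 1.0, 1.0    # hopping, mode frequency, coupling (assumed values)
n_max = 40                        # photon Fock-space truncation (convergence must be checked)

# Photon operators in the truncated Fock basis
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)        # annihilation operator
q_op = (a + a.T) / np.sqrt(2.0 * omega)               # displacement operator q
I_ph = np.eye(n_max)

# Electronic operators for a single electron on two sites (site basis)
hop = -t0 * np.array([[0.0, 1.0], [1.0, 0.0]])        # kinetic term
dip = np.diag([1.0, -1.0])                            # dipole d = n1 - n2
I_el = np.eye(2)

def ground_state(v_ext, j_ext):
    """Exact diagonalization of the one-electron Rabi Hamiltonian."""
    H = (np.kron(hop + v_ext * dip, I_ph)
         + np.kron(I_el, omega * (a.T @ a) + (j_ext / omega) * q_op)
         - omega * lam * np.kron(dip, q_op))
    _, V = np.linalg.eigh(H)
    return V[:, 0]

def correlation_entropy(psi):
    """S = -sum_j n_j ln n_j from the electronic 1RDM (photon mode traced out)."""
    c = psi.reshape(2, n_max)          # rows: site index, columns: photon number
    n_j = np.linalg.eigvalsh(c @ c.T)  # eigenvalues of the 2x2 reduced density matrix
    n_j = n_j[n_j > 1e-12]
    return -np.sum(n_j * np.log(n_j))

# Scan a cut of the density-to-potential map d[v_ext, j_ext]
for v in np.linspace(-5.0, 5.0, 5):
    psi = ground_state(v, 0.0)
    d = psi @ np.kron(dip, I_ph) @ psi
    qv = psi @ np.kron(I_el, q_op) @ psi
    print(f"v_ext={v:+.1f}: d={d:+.3f}, q={qv:+.3f}, S={correlation_entropy(psi):.3f}")
```

Repeating the scan over a grid of both v_ext and j_ext yields the two-dimensional maps shown in the figures; by Eq. <ref>, the j_ext dependence of the electronic quantities follows from the shifted potential ṽ_ext.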
Next, we analyze setup (ii), i.e., the two-site Rabi-Hubbard model in the two-electron subspace. The density-to-potential map is plotted in Fig. <ref>. In (a), we show the mapping for an electron-photon coupling strength of λ = 0.1, hence a weak-coupling setup. As in the case of the single electron, we also find electron-photon correlation here through the appearance of new normal modes. While the upper panel shows the two-dimensional mapping d[v_ext,j_ext], in the lower panel we show an antidiagonal cut along the new normal mode. The most noticeable difference to Fig. <ref> is that d can now acquire values between -2 and +2, and an intermediate step appears in the mapping, where d ≈ 0. This is, of course, due to the fact that we can now have two particles on one site, and thus the total dipole moment can reach a magnitude of 2. If we now increase the electron-photon coupling strength to λ = 1, shown in Fig. <ref> (b), we find a steeper density-to-potential map. Also, the intermediate step is reduced in size. In Fig. <ref> (c), we plot the mapping for λ = 2. Here, we find that the intermediate step vanishes and that around v_ext = j_ext = 0 the mapping becomes very steep. Since we find approximately only two values for d, namely -2 and +2, meaning that both electrons are on the same site, we can conclude that the electron-photon interaction is capable of effectively reducing the electron-electron repulsion of the Hubbard term in Eq. <ref>. Formulated differently, the electron-photon interaction mediates an effective attraction between the two electrons, with the effect that both occupy the same site. Physically, we can interpret this as the photons dressing the electrons such that the electron-electron repulsion is reduced. The static correlation of the electron-photon interaction dominates the correlation of the electron-electron interaction. In Fig. <ref> we plot the v_xc potential for the two-electron case with different coupling strengths. As in the case of a single electron, we find similar cuts of v_xc at q = 0 in (a) for λ = 0.1 and in (b) for λ = 1. Again, the intra-system steepening is responsible for the large values of the xc potential. In (c), where the coupling is increased to λ = 2, we find that, due to the vanishing of the intermediate step, the regions of highest xc contributions are where the derivative due to the steepening is largest, i.e., around d = -2 and d = 2. So far we have constructed the exact mappings. In practice, however, we need to employ approximations, since the exact mappings that constitute the Kohn-Sham potential are not known. Let us therefore see how the simplest approximate treatment of the coupled electron-photon problem, the mean-field approximation of Eq. <ref> introduced above, performs. This will give us insight into the missing xc potential. In Fig. <ref> (a), we plot the results in the weak-coupling regime (λ = 0.1). For the weak-coupling regime, we find good agreement with the exact calculations shown in Fig. <ref>. Differences first become pronounced in Fig. <ref> (b). For the stronger coupling of λ = 1, we find, in comparison to Fig. <ref> (b), a broader intermediate step that is also less steep. The most significant differences are clearly visible in the strong-coupling limit of λ = 2. While in Fig. <ref> (c) we have seen the complete disappearance of the intermediate step, we find a remaining step if the classical approximation to the electron-photon coupling is employed. This clearly shows the breakdown of the classical approximation.
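To make the mean-field construction of Eq. <ref> concrete, the following sketch solves the effective electronic equation self-consistently for two electrons on two sites, using the three-state singlet basis {|2,0⟩, (|↑,↓⟩ - |↓,↑⟩)/√2, |0,2⟩} in which the Hubbard dimer is exactly diagonalizable. The parameter values and the simple linear mixing are illustrative assumptions; note that in the strong-coupling regime the iteration can become bistable, which is exactly the regime where the mean-field treatment was just seen to fail.

```python
import numpy as np

t0, U0, omega = 1.0, 1.0, 1.0          # assumed model parameters
lam, v_ext, j_ext = 2.0, 0.1, 0.0      # strong coupling, small symmetry-breaking field

d_op = np.diag([2.0, 0.0, -2.0])       # dipole n1 - n2 in the singlet basis

def H_el(v_eff):
    # Two-site Hubbard model in the singlet sector; the sqrt(2)*t0 element
    # couples the doubly occupied states to the covalent singlet state.
    return np.array([[U0 + 2.0 * v_eff, -np.sqrt(2.0) * t0, 0.0],
                     [-np.sqrt(2.0) * t0, 0.0, -np.sqrt(2.0) * t0],
                     [0.0, -np.sqrt(2.0) * t0, U0 - 2.0 * v_eff]])

# Self-consistency: the photons enter only through the term -lam^2 * d * d_op
d = 0.0
for _ in range(500):
    v_eff = v_ext + (lam / omega**2) * j_ext - lam**2 * d
    _, V = np.linalg.eigh(H_el(v_eff))
    d_new = V[:, 0] @ d_op @ V[:, 0]
    if abs(d_new - d) < 1e-10:
        break
    d = 0.5 * d + 0.5 * d_new          # linear mixing for stability

q = lam * d / omega - j_ext / omega**3  # ground-state displacement from Eq. above
print(f"mean-field d = {d:.4f}, q = {q:.4f}")
```

Comparing the converged mean-field d with the exact value from full diagonalization as a function of (v_ext, j_ext) reproduces the discrepancies discussed above.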
Only in the limit λ→∞ does the classical approximation correctly predict the vanishing of the intermediate step. This brings us to the conclusion that this feature is a true electron-photon xc feature, for which approximate xc functionals have to be developed that correctly account for such effects. The missing electron-photon xc potential needs to enhance the steepening, i.e., it needs to model the missing correlation. This is in agreement with our interpretation of the intra-system steepening and correlation effects. The failure of the mean-field approximation in the strong-coupling limit around ṽ_ext ≈ ±2 can be partially understood by comparing the exact eigenvalues with the mean-field eigenvalues of our model system in the red-highlighted area in Fig. <ref>. For this setup, while the exact energy, plotted in blue, has a continuous and differentiable form, the mean-field energy develops a discontinuity in the red-shaded area. How this discontinuity affects mean-field observables will be discussed below. In the remaining part of this section, we study the implications of the features of the density-to-potential map for observables. As a consequence of the density map, in principle arbitrary observables can be expressed in terms of the set of internal variables. In practice, however, the functional form of observables such as the photon number N(q,d) is unknown, and the functional development of important observables will push the framework of QEDFT to a practical level. While first functionals have been developed for simple model systems <cit.>, most functionals for observables remain unknown. For our model system, we can explicitly construct the dependency of selected observables on both the set of internal and the set of external variables. Even though the set (v_ext,j_ext) is mathematically equivalent to the set (d,q), the dependence on (d,q) can be very different from the dependence on (v_ext,j_ext). The first observable we study is the interaction energy E_int, which can be defined from Eq. <ref> by E_int = -ωλ⟨q̂d̂⟩. It is connected to the xc energy by

E_xc = E_int - E_int,mf = -ωλ(⟨q̂d̂⟩ - qd).

E_int[v_ext,j_ext] for the two-site Rabi-Hubbard model with two electrons is shown in Fig. <ref>, and the corresponding observable in the mean-field approximation is shown in Fig. <ref>. In (a) of each figure, the weak-coupling case is shown. We again find the new normal coordinates, and the intermediate step causes a distinctive behavior around j_ext ∼ 0. This intermediate step becomes smaller for λ = 1, shown in (b). In the strong-coupling limit, the interaction energy has a vanishing step in the exact solution of the problem, shown in Fig. <ref> (c). In contrast, the mean-field solution fails to correctly reproduce the exact sharp feature of the interaction energy, leading to large xc contributions. This failure can be explained by the discontinuity in the energy discussed in Fig. <ref>. The next observable we study is the photon number in the system, ⟨N̂⟩ = ⟨â^†â⟩. In general, and in contrast to the fermionic particle number, photonic observables such as the photon number are not restricted to integer values due to their underlying bosonic nature. In Fig. <ref>, we show N as a functional of the external potentials, N[v_ext,j_ext]. In (a), in the weak-coupling limit of λ = 0.1, we find that the external potential v_ext has no large overall influence on this observable, and its harmonic nature is governed by the external current j_ext.
In the two lower panels, we plot the diagonal and the antidiagonal cuts. Since the observable is unbounded, very high photon numbers can be excited, up to 1200 for the studied examples. Next, in (b), we show the case of λ = 1.0. Here, we find that the external potential v_ext can alter this observable in cases where N is small. Around j_ext ∼ 0, we find a funnel-type structure of this observable, which is connected to the intermediate step of the density-to-potential mapping shown in Fig. <ref>. In (c), we show the strong-coupling limit of λ = 2. Here, we find in the antidiagonal cut of the N[v_ext,j_ext] map a sharp feature around j_ext ∼ 0. Again, this is connected to the sharp features in the density-to-potential map. Also, the new normal mode is clearly visible along the antidiagonal. In Fig. <ref>, we now show the dependency of N[d,q] on the internal variables in the top panels, and the cut for q = 0 in the bottom panels. Here, we find that the normal modes vanish for all three coupling strengths and the mapping becomes smooth. Qualitatively, the weak coupling λ = 0.1 and the stronger coupling λ = 1 behave similarly (a double maximum in the cut), while the mapping for λ = 2 has a constricted shape and only a single minimum in the cut. That the photon-number observable behaves more regularly when written in terms of the internal variables is an important detail. It suggests that we can find reasonable approximations to non-trivial functionals of the internal variables despite the intra-system steepening, which would otherwise make approximating much harder. Such non-trivial functionals are important to make QEDFT practical, since in many situations it is not the density or the displacement field that one is interested in but rather, e.g., the energy or correlation functions of the photon field. We note that, after changing to the internal variables, the dependency of N[d,q] on q becomes strongly pronounced only for high values of q. This implies that for a small amplitude of q, using functionals at q = 0 becomes reasonable. This is very similar to the behavior we encountered for the xc potential functional: there, too, the dependence of v_xc on q in the considered parameter range was very small. Such a weak dependence on only one parameter would not be the case if we used instead the mathematically equivalent external functional v_xc[v_ext, j_ext], which would also allow one to determine the dipole moment d in the Kohn-Sham system. This is a nice example of how the choice of the internal functional variables makes approximations much easier in practice.

§ FOUR-SITE RABI-HUBBARD MODEL

So far we have analyzed the simplest situation of electron-photon coupling and concluded that the intra-system steepening that appears in the density maps is a simple measure to quantify the electron-photon correlation. In this section, we address the question of whether the steepening also appears in more complex situations. To this end, we study a four-site Rabi-Hubbard model coupled to a single photon mode and demonstrate the implications of the discussed modifications of the density-to-potential map under strong light-matter coupling. We show how the density-potential map can help to find interesting behavior and to explain experimentally observed effects. The extension of Eq. <ref> to four sites is straightforward, and the Hamiltonian at half filling (four electrons) reads

Ĥ_0 = -t_0 ∑_i=1^3 ∑_σ=↑,↓ (ĉ_i,σ^†ĉ_i+1,σ + ĉ_i+1,σ^†ĉ_i,σ) + U_0 ∑_i=1^4 n̂_i,↑n̂_i,↓ + ωâ^†â - ωλq̂d̂ + (j_ext/ω)q̂ + (λd̂)^2/2 + v_ext d̂,

with d̂ = d_0 (3n̂_1 + n̂_2 - n̂_3 - 3n̂_4).
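As discussed in the next paragraph, the relevant response quantity of this model is the polarizability, i.e., the gradient of the dipole map with respect to the polaritonic potential ṽ_ext. Once a ground-state solver for the above Hamiltonian is available, this gradient can be estimated by finite differences; a minimal sketch, assuming a callable dipole(v_tilde) provided by an exact-diagonalization solver analogous to the two-site example above:

```python
import numpy as np

def polarizability_scan(dipole, v_grid):
    """Finite-difference estimate of alpha = delta d / delta v_tilde.

    `dipole(v_tilde)` is an assumed callable returning the ground-state
    dipole moment of the four-site Rabi-Hubbard model; j_ext is folded
    into v_tilde along the polaritonic direction."""
    d_vals = np.array([dipole(v) for v in v_grid])
    return d_vals, np.gradient(d_vals, v_grid)   # central differences
```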
In this case, v_ext is effectively an external electric field, as routinely studied in electronic-structure calculations. For four sites, we construct the dipole-to-electric-field map. Such a mapping of a reduced internal variable to a reduced external variable has been proven to be unique and has been analyzed, e.g., in Ref. <cit.>. Physically, the gradient of the dipole moment with respect to the external electric field describes the electric polarizability α <cit.>. In this spirit, we define the electric polarizability as follows:

α[v_ext] = δd/δṽ_ext,

where ṽ_ext describes the external electric field applied to the system, as defined by Eq. <ref>. We note that for the two-site Rabi-Hubbard model studied in the previous section, the polarizability α is the gradient of the density-to-potential map. Thus, the larger the gradient of the mapping, the larger the value obtained for the polarizability. In conducting polymers, it has been demonstrated that such a high polarizability is directly connected to charge transfer, i.e., conductivity <cit.>. In Fig. <ref>, we show how the electronic dipole moment d and the polarizability α change as functions of the applied external potentials v_ext and j_ext. Also in this more complex situation, we find new normal modes appearing. Thus, in Fig. <ref>, we show how ṽ_ext induces changes to the system under strong light-matter coupling. Without coupling, shown in (a), we find that the dipole moment develops three quasi-stationary regions, where the extremal values correspond to situations in which two electrons occupy the outermost site and the other two electrons occupy the neighboring site. In the lower panel of Fig. <ref>, we plot the polarizability α as defined in Eq. <ref>. We find two peaks in between the stationary regions of the dipole moment. If we now increase the electron-photon coupling, shown in (b) for the case of λ = 1, we find that, similarly to what was reported in the previous section, the dipole moment as a function of the external potential steepens and the step around v_ext ∼ 0 becomes narrower. Accordingly, the two peaks in the polarizability shown in the bottom panel move closer together and have larger amplitudes in comparison to the setup in (a). For strong coupling, here λ = 2, shown in (c), we find that the middle step becomes even narrower and that the two peaks shown in the bottom panel move still closer together, with high amplitude. In conclusion, we find that by tuning the electron-photon coupling strength, the polarizability of the system can be strongly influenced, leading to a highly polarizable system.

§ SUMMARY AND OUTLOOK

In this paper we have constructed the exact density-to-potential maps for electron-photon model systems and extended the concept of the intra-system steepening to general fermion-boson systems. We made explicit how the intra-system steepening can be used to identify large xc potentials and how these effects show up in other observables. We have identified the appearance of new normal modes in the coupled matter-photon system and showed how the density-to-potential maps can be constructed for all possible external pairs from knowing only the map along the polaritonic external potential ṽ_ext. Finally, we have highlighted, for a four-site model with four electrons coupled to photons, how the intra-system steepening allows one to identify interesting physical effects, such as an increase of the polarizability of the matter system due to ultra-strong coupling to the photons.
The increase in the polarizability is directly relevant for experiments such as that of Ref. <cit.>, where an increase in conductivity for organic semiconductors under strong coupling was measured. The exact maps and the tools to analyze the importance of xc contributions will be helpful in further developing xc functionals for QEDFT that accurately capture the coupling between the charged particles and the photons. Also, the finding that observables behave more regularly when represented by the internal variables is an important detail in the development of QEDFT. Such functionals become crucial for the practicability of QEDFT, as many observables are non-trivial functionals of the internal variables n(r) and q_α, e.g., the number of photons. Their availability will allow for novel applications of density-functional methods in the context of quantum optics or plasmonics. Further, although the functionals in QEDFT are different from those of standard DFT, insights from a more complete description of real systems, i.e., one also treating the photons, might prove beneficial for DFT as well. Especially when going beyond the dipole approximation, the minimal-coupling prescription forces us to use the full current density to describe the coupling to the photon field. In this context a current-density functional theory (CDFT) scheme becomes unavoidable <cit.>. It seems possible that, by studying coupled matter-photon systems beyond the dipole approximation, we gain novel insight also into CDFT. It would be very interesting to also investigate the exact density-to-potential maps for a Hubbard system that is coupled via its charge current to the photons, e.g., via a Peierls substitution. Such results would highlight the necessary ingredients of xc functionals to describe matter that interacts strongly with photons only locally, in contrast to the dipole approximation, where all electrons feel the same photon field. This would allow one to calculate quantum local-field effects from first principles.

§ ACKNOWLEDGEMENTS

We thank Heiko Appel and Soren E. B. Nielsen for very fruitful discussions and acknowledge financial support from the European Research Council (ERC-2015-AdG-694097) and the European Union's H2020 program under GA no. 676580 (NOMAD).
"authors": [
"Tanja Dimitrov",
"Johannes Flick",
"Michael Ruggenthaler",
"Angel Rubio"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20170627135238",
"title": "Exact functionals for correlated electron-photon systems"
} |
We measure the effect of high column density absorbing systems of neutral hydrogen (Hi) on the one-dimensional (1D) Lyman-alpha forest flux power spectrum using cosmological hydrodynamical simulations from the Illustris project. High column density absorbers (which we define to be those with Hi column densities N(Hi) > 1.6 × 10^17 atoms cm^-2) cause broadened absorption lines with characteristic damping wings. These damping wings bias the 1D Lyman-alpha forest flux power spectrum by causing absorption in quasar spectra away from the location of the absorber itself. We investigate the effect of high column density absorbers on the Lyman-alpha forest using hydrodynamical simulations for the first time. We provide templates as a function of column density and redshift, allowing the flexibility to accurately model residual contamination if an analysis selectively clips out the largest damping wings. This flexibility will improve cosmological parameter estimation, allowing more accurate measurement of the shape of the power spectrum, with implications for cosmological models containing massive neutrinos or a running of the spectral index. We provide fitting functions to reproduce these results so that they can be incorporated straightforwardly into a data analysis pipeline.

large-scale structure of universe – cosmology: theory

§ INTRODUCTION

The Lyman-alpha forest (a series of neutral hydrogen absorption lines in the spectra of quasars) is a uniquely powerful probe of the clustering of matter at redshifts from about z = 2 to z = 6 <cit.> and from sub-Mpc to hundreds of Mpc scales. The one-dimensional (1D) Lyman-alpha forest flux power spectrum (along the line of sight) is particularly sensitive to small-scale clustering in the quasi-linear regime and provides important constraints on extended cosmological models that suppress small-scale power <cit.>, notably those containing massive neutrinos and warm dark matter. This small-scale information complements the larger scales probed by the angular power spectrum of the cosmic microwave background (CMB). For example, the best upper limit on the sum of neutrino masses <cit.> comes from combining CMB data from the Planck Collaboration <cit.> with the 1D Lyman-alpha forest power spectrum as measured from Sloan Digital Sky Survey (SDSS)-III/Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 9 (DR9) quasar spectra <cit.>. Future surveys like the Dark Energy Spectroscopic Instrument (DESI) <cit.> will further improve constraints on extended cosmological models. <cit.> forecast one-sigma errors on a DESI measurement of the sum of neutrino masses to be 0.017 eV[This is the full forecasted constraint considering a combination of Planck CMB data, the DESI broadband galaxy power spectrum, the DESI broadband Lyman-alpha forest flux power spectrum and ∼ 100 high-resolution Lyman-alpha forest quasar spectra.]. Considering that the lower limit on the sum of neutrino masses from neutrino oscillation experiments is 0.06 eV <cit.>, this would constitute at least a three-sigma detection. Furthermore, the 1D Lyman-alpha forest flux power spectrum probes the primordial power spectrum on the smallest currently accessible scales, k ∼ 4 Mpc^-1. Including Lyman-alpha forest data will improve constraints on the running of the spectral index (which quantifies deviations from a pure power-law spectrum) by a factor of two, reaching one-sigma errors of ± 0.002 <cit.>.
This would provide new insights into early universe physics, potentially ruling out classes of models of inflation. Importantly, it will also provide a unique independent cross-check at small scales of the primordial power spectrum shape inferred from CMB measurements at large scales. Achieving these limits requires marginalisation over the uncertain impact of a number of astrophysical effects on the 1D Lyman-alpha forest power spectrum. In particular, this includes broadened absorption features from high column density absorbers. High column density absorbers are usually classified as either damped Lyman-alpha absorbers (DLAs), with column densities N(Hi) exceeding 2 × 10^20 atoms cm^-2 <cit.>, or Lyman-limit systems (LLS), which correspond to 2 × 10^20 atoms cm^-2 > N(Hi) > 1.6 × 10^17 atoms cm^-2. Both types of system produce broad damping wings which extend large distances in redshift space. If not accounted for, they will bias cosmological parameter estimation from the Lyman-alpha forest. The systems are formed at peaks of the underlying density distribution; consequently, they cluster more strongly than the forest itself <cit.>. To remove the bias induced by damped absorbers, one can fit a model for their effect on power spectra. The most widely used approach <cit.> is now more than a decade old. Although this model was adequate for the data available at the time, future surveys will be substantially more constraining and therefore demand tighter control over systematics. Furthermore, there have been significant improvements in the theoretical modelling of these systems <cit.>. An updated model for the effects of high column density absorbers is therefore both timely and essential in order to achieve the forecasted cosmological constraints from future surveys. Different column densities correspond to gas at different physical densities, so that simulations suitable for modelling the forest are often not suited to reproducing high column density systems. The Lyman-alpha forest is largely insensitive to the physics of galaxy formation since it is sourced by gas below the mean density; the primary uncertainties arise from cosmological parameters and the thermal history of the intergalactic medium. Conversely, high column density absorbers arise largely from regions within or around galaxies and are thus very sensitive to the physics of galaxy formation and less sensitive to large-scale cosmology. It is consequently essential to model the effect of high column density absorbers using simulations which include detailed galaxy formation physics and can thus reproduce their characteristics and statistics. In Lyman-alpha forest studies, damping wings are sometimes "clipped" (removed or masked) from quasar spectra <cit.>. However, not all damping wings are identified and many will remain in the spectra, especially in noisier spectra where they are harder to spot, and for lower-density absorbers (LLS), which have narrower wings. Therefore, in the final cosmological parameter estimation from the 1D Lyman-alpha forest power spectrum, the effect of residual high column density absorbers is modelled as a multiplicative scale-dependent bias of the power spectrum, with an amplitude (reflecting the level of residual contamination) that is fitted and marginalised <cit.>.
The functional form of this model (its scale and redshift dependence) is based on the measurements made in <cit.>. <cit.> investigated the effect with lognormal model mock quasar spectra <cit.>, since the numerical simulations available at the time were not large enough to generate spectra encompassing the full width of damping wings. They then probe the effect of high column density absorbers on the Lyman-alpha forest by inserting damping wings in mock spectra at the peaks of the lognormal field, based on the observationally-determined column density distribution function (CDDF). They find a systematic effect on the observed 1D Lyman-alpha forest power spectrum that is maximised on scales corresponding to the width of a damped system and which has negligible redshift evolution (considering three redshift slices at z = [2.2,3.2,4.2]). They provide a single template to fit their bias measurement, including the effect of all LLS and DLAs together. However, as discussed above, in current data analysis pipelines, damping wings are removed from quasar spectra in a way that preferentially removes higher density systems. Therefore, when the template is used in parameter inference, it may not correctly model the bias of the residual contamination, which will have a different CDDF to the total (the clipping of the survey spectra changes the survey CDDF). The bias will have a different scale dependence (not just amplitude), since this is driven by the distribution of the widths of damping wings remaining in quasar spectra. In this work, we investigate the effect of high column density absorbers on the 1D Lyman-alpha forest power spectrum as a function of their column density and redshift using hydrodynamical simulations of galaxy formation from the Illustris project <cit.>. Comparison to relevant observations has shown that Illustris reproduces the observed CDDF and spatial clustering of high-density systems <cit.> at the 95% confidence level. Spectra are generated from this simulation, then separated into categories according to the maximum column density within each spectrum (see <ref> for more details). We measure the 1D flux power spectrum of each of these types of spectrum and measure the (multiplicative) bias of each type compared to the power spectrum of the Lyman-alpha forest alone. We make this measurement at multiple redshifts and so probe the redshift evolution of this effect. We discuss high column density absorbers in more detail in <ref>. In <ref>, our methodology in going from hydrodynamical simulations to measurements of the 1D flux power spectrum is explained. We present our main results in <ref>. These results are discussed in <ref>, and in <ref> we present the templates that we have fitted to our measurements. Finally, conclusions are drawn in <ref>.

§ DAMPED LYMAN-ALPHA ABSORBERS AND LYMAN-LIMIT SYSTEMS

High column density absorbers are regions of neutral hydrogen (Hi) gas that are above a column density threshold of N(Hi) > 1.6 × 10^17 atoms cm^-2. By contrast, lower column density absorbers form the Lyman-alpha forest. The absorption lines formed by high column density absorbers are broadened, forming damping wings and hence absorption in the spectrum away from the location of the absorbing gas. The damping wings have a characteristic Voigt profile, which is a convolution of a Gaussian profile (caused by Doppler broadening) and a Lorentzian profile (caused by natural or collision broadening).
The width of these wings in velocity space increases with the column density of the absorbing system. High column density absorbers are then usually classified either as damped Lyman-alpha absorbers (DLAs), whose damping wings are considered significantly broadened and which correspond to N(Hi) > 2 × 10^20 atoms cm^-2 <cit.>, or as Lyman-limit systems (LLS), which correspond to column densities in the range 2 × 10^20 atoms cm^-2 > N(Hi) > 1.6 × 10^17 atoms cm^-2. In this work, we aim to investigate the effect of high column density absorbers (and especially their damping wings) on the one-dimensional Lyman-alpha forest flux power spectrum, as a function of their column density (and redshift). We therefore use a more refined classification of high column density absorbers based on their column densities, in particular accounting for the fact that higher density LLS do have wide damping wings. Table <ref> shows the column density limits that define our categories, as well as the percentage of simulated spectra (see <ref>) where the highest-density system is a given type and hence is the main contaminant. The overall percentage of spectra contaminated by high column density absorbers (LLS, sub-DLAs, small and large DLAs) increases with redshift because the Hi CDDF increases at higher densities at higher redshifts, but there are always more LLS than DLAs.

§ METHOD

We first outline the method we have used and then explain the steps in more detail in the following subsections (<ref> to <ref>).

* We use a cosmological hydrodynamical simulation from the Illustris project <cit.> and generate mock spectra on a grid (562 500 in total, each at a velocity resolution of 25 km s^-1 and with a typical length of ≃ 8 000 km s^-1). We repeat this for a number of redshift slices at which the Lyman-alpha forest is observed (z = [2.00,2.44,3.01,3.49,4.43]). (See <ref>.)

* For each redshift slice, we separate the spectra according to the highest column density system within that spectrum, using the absorber categories defined in Table <ref>. For each absorber category (and the total set of spectra), we measure the one-dimensional (1D) flux power spectrum (along the line of sight, integrating over transverse directions) using a fast Fourier transform (FFT). (See <ref>.)

* We then measure the (multiplicative) bias of the flux power spectra from each category relative to the 1D flux power spectrum of the Lyman-alpha forest, as a function of absorber type (maximum column density) and redshift (see <ref>). We fit parametric models to these bias measurements and provide these templates in <ref>.

§.§ Hydrodynamical simulations and mock spectra

Our main results make use of snapshots from the highest-resolution (in terms of both dark matter particles and hydrodynamical cells) cosmological hydrodynamical simulation from the Illustris project <cit.>. The simulation adopts the following cosmological parameters: Ω_m = 0.2726, Ω_Λ = 0.7274, Ω_b = 0.0456, σ_8 = 0.809, n_s = 0.963 and H_0 = 100 h km s^-1 Mpc^-1, where h = 0.704 <cit.>. The box has a volume in comoving units of (106.5 Mpc)^3 and we consider snapshots at redshifts z = [2.00,2.44,3.01,3.49,4.43]. The Illustris simulations use the moving-mesh code of <cit.>. The galaxy formation physics implemented is of relevance to dense regions of neutral hydrogen gas, and therefore we describe it briefly here.
The subgrid models include prescriptions for supernova <cit.> and active galactic nuclei (AGN) <cit.> feedback (<cit.> showed that the properties of DLAs are quite insensitive to the details of AGN feedback); radiative cooling; and star formation and metal enrichment of gas. Self-shielding is implemented as a correction to the photoionization rate, which is a function of hydrogen density and gas temperature. The potential ionising effect of local stellar radiation within the most dense absorbers (large DLAs) <cit.> is neglected. <cit.> found this effect to be negligible, and accurate calculations in any case require physics on parsec scales, well below the resolution of the simulation (it can then be viewed as part of the unresolved physics included in the above feedback prescriptions). More details of these models are given in <cit.>. Gravitational interactions are computed using the TreePM approach <cit.>. We require that these simulations accurately reproduce the necessary statistics of high column density absorbers that are observed in surveys. As a means of quantifying this, we can first consider the CDDF of neutral hydrogen over the relevant column densities (N(Hi) > 1.6 × 10^17 atoms cm^-2). <cit.> make a comparison of the CDDF as produced by Illustris centered at z = 3 to the distribution observed in a number of surveys [<cit.> for LLS; <cit.> for sub-DLAs; <cit.> for DLAs]. In particular, the distributions are consistent, with the feature in the CDDF around the DLA threshold, where the distribution rises, being reproduced well (the results of <cit.> from SDSS-III DR12 spectra are also consistent for DLAs). <cit.> showed that the code with the above hydrodynamical models can produce values of the DLA halo bias (at z = 2.3) which are in agreement with measured values from real surveys <cit.>, indicating that the clustering of high column density absorbers is well reproduced. <cit.> compared the distribution function of velocity widths of low ionization metal absorbers associated with DLAs, as produced by the simulations at z = 3, to the distribution observed by <cit.>. The data points are within the 68% confidence interval of the simulated distribution. This suggests that the simulations are reproducing the kinematics, and thus the host halo distribution, of high column density absorbers. One potential caveat is that these simulations are found to produce too high a total incidence rate of DLAs when compared to observations <cit.> at z = 2 <cit.>. However, the overall incidence rate is absorbed into a normalisation that must in any case be allowed to float during analysis of clipped spectra (as discussed in <ref>). For each snapshot, we generate mock spectra containing only the Lyman-alpha absorption line (with a rest wavelength of 1215.67 Å) from neutral hydrogen. We do this on a square grid of 562 500 spectra, in the plane perpendicular to a direction that we define as the line of sight. Each spectrum extends the full length of the simulation box with periodic boundary conditions, giving a size in velocity (or "redshift") space of {7111, 7501, 8000, 8420, 9199} km s^-1 respectively at z = [2.00,2.44,3.01,3.49,4.43][We convert the comoving length of the box to a proper velocity by the Hubble law.]. We first measure the optical depth τ in velocity bins of size 25 km s^-1 along the spectrum[For comparison, BOSS DR9 spectra are binned at a velocity resolution of 69.02 km s^-1 <cit.>.]. We further convolve our spectra with a Gaussian kernel of FWHM = 8 km s^-1, setting the simulated spectrographic resolution.
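As an illustration of this step, the short sketch below applies the simulated spectrograph kernel to a single sightline and performs the conversion to transmitted flux described in the next paragraph. Whether the smoothing is applied to the optical depths or to the flux is an implementation detail that we do not fix here; the sketch smooths the optical depths.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def tau_to_flux(tau, dv=25.0, fwhm=8.0):
    """Smooth optical depths tau (binned at dv km/s) with a Gaussian kernel
    of the given FWHM (km/s) and return the transmitted flux F = exp(-tau)."""
    sigma_pix = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dv  # FWHM -> sigma, in pixels
    tau_smooth = gaussian_filter1d(tau, sigma_pix, mode='wrap')  # periodic sightline
    return np.exp(-tau_smooth)
```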
We then calculate the transmitted flux ℱ = e^-τ. In this way, the spectra we have constructed are insensitive to contamination from other absorption (or emission) lines, to estimation of the emitted quasar continuum (which here is effectively set to unity) and to instrumental noise. In each spectrum pixel, we are also able to measure the column density (integrated along the line of sight in each bin) of neutral hydrogen, which we use in identifying the maximum density systems in each spectrum (<ref>).

§.§ One-dimensional flux power spectrum

We separate our spectra into the absorber categories (Lyman-alpha forest, LLS, sub-DLAs, small and large DLAs) defined in Table <ref> according to the maximum column density system within each spectrum. We search for the highest column density integrated over any four neighbouring velocity bins; this amounts to a comoving length along the line of sight of {1.50, 1.42, 1.33, 1.27, 1.16} Mpc respectively at z = [2.00,2.44,3.01,3.49,4.43]. The categorisation is insensitive to the number of neighbouring velocity bins that we use, as the boundaries between categories differ by orders of magnitude in column density. Moreover, the method is efficient in identifying high column density absorbers, since they are vastly more dense than the surrounding gas forming the Lyman-alpha forest[We have explicitly tested the impact of doubling or halving the number of neighbouring velocity bins we use on the 1D flux power spectra we measure in each absorber category. We find that the maximum absolute difference in any power spectrum bin is a negligible 0.2%.]. We have chosen a length that is much larger than the most extensive DLAs found by recent studies <cit.>, and so we are sure to integrate over the full length of any high column density absorbers. Our definition of high column density absorbers includes blends, where a number of smaller, lower column density systems have been added together. In this way, we have associated with each spectrum the most dominant absorbing system; in the cases where high column density absorbers are identified, these are the main contaminant of the spectrum through their associated damping wings. The percentage of spectra in each absorber category at each redshift slice is given in Table <ref>. We measure the 1D flux power spectrum of all the spectra and of each absorber category at each redshift slice. The 1D power spectrum P^1D(k_||, z) is defined as the integral of the three-dimensional (3D) power spectrum P^3D(k_||, k⃗_⊥, z) over directions perpendicular to the line of sight:

P^1D(k_||, z) = ∫ P^3D(k_||, k⃗_⊥, z) dk⃗_⊥/(2π)^2,

where the wavevector k⃗ = [k_||, k⃗_⊥] is conjugate to velocities in real space and so is measured in units of inverse velocity (s km^-1). We also use the convention of absorbing the 2π into the conjugate variable[We define the Fourier transform as δ(k) = ∫δ(x) e^-ikx dx.]. To measure P^1D for an individual line of sight, we first calculate the fluctuation in each velocity bin v_||, δ_ℱ(v_||) = ℱ(v_||)/⟨ℱ⟩ - 1, where ⟨ℱ⟩ is the average flux over all spectra at each redshift <cit.>. We calculate the 1D Fourier transform along the line of sight, δ̂_ℱ(k_||), using a fast Fourier transform (FFT)-based method, since we have evenly-spaced velocity bins. We then estimate the 1D flux power spectrum for each sightline as P_Raw^1D(k_||) = |δ̂_ℱ(k_||)|^2. Finally, we estimate the 1D flux power spectrum in Eq.
(<ref>) for each absorber category i by <cit.>

P_i^1D(k_||, z) = ⟨P_Raw^1D(k_||, z)/W^2(k_||, Δv, R)⟩_i,

where we explicitly indicate that the raw 1D power spectra depend on redshift z. The average is taken over spectra of a given category (or all spectra for the total power spectrum) at each redshift slice. The window function W(k_||, Δv, R) that is divided out arises from the binning in velocity space (Δv) and the simulated spectrographic resolution R:

W(k_||, Δv, R) = exp(-1/2 (k_|| R)^2) × sin(k_||Δv/2)/(k_||Δv/2),

where Δv = 25 km s^-1 and R = 3.40 km s^-1 (not to be confused with the spectrographic resolving power; see <ref>). We then have an estimate of the 1D flux power spectrum for each absorber category of spectra at each redshift slice.

§.§ Modelling the effect of high column density absorbers

The total 1D flux power spectrum of a set of spectra, P_Total^1D(k_||, z), can be expressed as a weighted sum of the 1D flux power spectra calculated in Eq. (<ref>) for each absorber category i:

P_Total^1D(k_||, z) = ∑_i α_i(z) P_i^1D(k_||, z),

where α_i(z) is the fraction of spectra in each absorber category at each redshift (as given in Table <ref> for our simulated ensemble of spectra). In a real survey, the α_i(z) may change from their raw values due to the attempt to clip (remove) high column density absorbers discussed in <ref>. We can rearrange Eq. (<ref>) to isolate the 1D flux power spectrum of the Lyman-alpha forest alone:

P_Total^1D(k_||, z) = P_Forest^1D(k_||, z) [α_Forest(z) + ∑_i ≠ Forest α_i(z) P_i^1D(k_||, z)/P_Forest^1D(k_||, z)].

In this way, we have isolated the effect of spectra containing high column density absorbers on the 1D flux power spectrum of the Lyman-alpha forest as a multiplicative bias (the terms in square brackets)[We could simplify this form further by using the fact that ∑_i α_i(z) = 1 to remove the parameter α_Forest(z), but it is useful to keep this form, as we explain in <ref>.]. This matches the general form of modelling this effect in previous studies, as explained in <cit.> (based on the results in <cit.>), but now additionally probing the bias as a function of column density (by using the different absorber categories). We discuss in more detail in <ref> our motivations for using this particular form of the bias (as opposed to an additive bias). Using the 1D flux power spectra we have calculated in <ref>, we are able to measure the ratios in Eq. (<ref>) (P_i^1D(k_||, z)/P_Forest^1D(k_||, z)), and we present the results in <ref>.

§ RESULTS

Figure <ref> shows the 1D flux power spectra of different subsets of sightlines that we have measured from our simulations [see <ref> and in particular Eq. (<ref>)] at redshift z = 2.00. The different subsets shown are: the total, as would be measured if no distinction between different types of spectra was made; spectra containing only Lyman-alpha forest (the ensemble that is uncontaminated by high column density absorbers); and spectra contaminated by different categories of high column density absorber, as defined in Table <ref>. We first note that the total 1D flux power spectrum at any redshift slice can be reconstructed as a weighted sum of the other 1D flux power spectra for each absorber category at that redshift (see <ref> and in particular Eq. (<ref>)). The weights are the fraction of spectra in each category (the values we measure for our simulated ensemble are given in Table <ref>).
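For reference, a minimal sketch of the estimator of Eqs. (<ref>)-(<ref>) is given below, using the Fourier convention of the footnote above. The array layout, the use of the global mean flux and the normalisation by the sightline length are our assumptions about implementation details, not a description of the original pipeline.

```python
import numpy as np

def p1d_estimate(flux, dv=25.0, R=3.40):
    """FFT-based 1D flux power spectrum for a set of sightlines.

    flux: array of shape (n_spectra, n_pix) for one absorber category;
    dv: velocity bin width [km/s]; R: resolution parameter [km/s]."""
    n_spec, n_pix = flux.shape
    delta = flux / flux.mean() - 1.0                  # delta_F, with <F> over all spectra
    k = 2.0 * np.pi * np.arange(1, n_pix // 2) / (n_pix * dv)   # skip the k = 0 mode
    # delta(k) = sum_n delta_n exp(-i k v_n) dv; P_raw = |delta(k)|^2 / L
    delta_k = np.fft.fft(delta, axis=1)[:, 1:n_pix // 2] * dv
    p_raw = np.abs(delta_k) ** 2 / (n_pix * dv)
    # Window function: resolution Gaussian times binning sinc; note that
    # np.sinc(x) = sin(pi x)/(pi x), hence the 2*pi in the argument
    window = np.exp(-0.5 * (k * R) ** 2) * np.sinc(k * dv / (2.0 * np.pi))
    return k, (p_raw / window ** 2).mean(axis=0)      # average over the category
```

Calling this function separately on the spectra of each absorber category (and on the full set) yields the curves whose ratios define the bias templates discussed below.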
We can estimate the fractional (1σ) statistical error on each power spectrum data-point as 1/√(N_i), where N_i is the number of input modes (simulated spectra) per data-point i. This assumes that data-points and input modes are independent. The error is largest for the large-DLA power spectrum at z = 2.00, which has 15,188 input simulated spectra, giving an error of 0.81%, and smallest for the forest power spectrum at z = 2.00, which has 437,063 input simulated spectra, giving an error of 0.15%. All the other uncertainties for each measured power spectrum range between these values and can be computed from Table <ref>. The total power spectrum deviates from the Lyman-alpha forest power spectrum at all redshifts, showing that there is a bias from contamination of spectra by high column density absorbers. This bias can be deconstructed as a function of column density by looking at the power spectra of the different absorber categories. The power spectra of high column density absorbers show a distinctive increase on large scales (small k_||). This is caused by self-correlations across the width of damping wings, which (as discussed in <ref>) can be modelled by a Voigt profile (a convolution of a Gaussian and a Lorentzian). Therefore, the power spectrum of high column density absorbers (on large scales) is connected to the Fourier transform of a Voigt profile. This increases for higher column density systems, since there is more line broadening, and starts on larger scales for higher column density systems, since the damping wings are wider. (See Appendix <ref> for more analysis and discussion of the Voigt profile model.) On small scales, all the power spectra converge to a scaled version of the Lyman-alpha forest flux power spectrum. This reflects the fact that contaminated spectra do contain some uncontaminated spectral pixels. The amplitude of the small-scale power spectrum reflects the fraction of spectra that is uncontaminated, increasing for lower column density systems since their damping wings are narrower. There is some sensitivity to the length of our simulated spectra, which primarily manifests in our results as the amplitude of the small-scale residual Lyman-alpha forest power in the contaminated power spectra. This is because longer simulated contaminated spectra would have a larger fraction with residual Lyman-alpha forest. This is discussed further, and explicitly modelled so that this effect is removed, in <ref>. Figure <ref> shows 1D flux power spectra as in Fig. <ref>, but for more of the redshift slices that we consider (z = [2.00, 2.44, 3.49, 4.43]), for spectra containing only Lyman-alpha forest and spectra contaminated by large DLAs. The Lyman-alpha forest flux power spectrum has the expected shape, amplitude and redshift evolution, matching observations <cit.> and reflecting the fact that it is an integral of a (biased) matter power spectrum. A peculiarity of the Lyman-alpha forest flux power spectrum is that its amplitude increases with redshift (unlike the linear matter power spectrum); this is because neutral hydrogen is more abundant at higher redshift and so there is more absorption in quasar spectra (the Lyman-alpha forest becomes a more negatively biased tracer of the matter distribution). By contrast, it can be seen that the large-scale correlations associated with the large DLAs converge to a single point as redshift changes.
This reflects the fact that these correlations arise from individual damping wings, which do not evolve with redshift. Figure <ref> shows the same 1D flux power spectra as in Figs. <ref> and <ref>, but now as ratios between the flux power of spectra contaminated by high column density absorbers and the flux power of spectra containing only Lyman-alpha forest, for z = 2.00 and z = 4.43. These ratios are the quantities to which we fit our templates (see <ref>) as part of our bias model (see <ref>). Plotted in this form, it is clear that the large-scale corrections associated with damping wings increase with the column density of the damped system. The corrections also decrease with increasing redshift because the Lyman-alpha forest flux power spectrum (in the denominator of the ratio) increases with redshift. On small scales, the ratios converge to a constant value, which reflects the fraction of a line of sight that is uncontaminated (see above). The redshift evolution of this constant value is driven by the transformation from comoving to velocity space: spectra are longer in velocity space at higher redshift (despite being drawn from the same comoving length of the simulation). Conversely, the width of damping wings does not change with redshift (for a given column density) because this width arises solely from the physical processes within the hydrogen gas (rather than cosmological evolution). Therefore, the fraction of spectra uncontaminated by the damping wings increases with redshift.

§ DISCUSSION We first discuss and summarise the results we have presented in <ref>. Using our measurements from cosmological hydrodynamical simulations, we have been able to confirm and characterise the effect of high column density absorbers on the 1D Lyman-alpha forest flux power spectrum as a function of column density, scale and redshift. There are distinctive large-scale correlations across the widths of individual damping wings (a "one-halo" term) arising from high column density absorbers that are seen to bias the 1D flux power spectrum of a set of spectra, relative to the power spectrum of the Lyman-alpha forest alone (Fig. <ref>). These correlations persist for all the high column density absorbing systems that we identify (for all column densities N(Hi) > 1.6 × 10^17 atoms cm^-2). Our results can be further understood by relating the shape and amplitude of the large-scale power spectrum of spectra contaminated by high column density absorbers to the Fourier transform of the Voigt profile that is normally used to model damping wings (due to the combination of physical effects that broaden absorption lines; see Appendix <ref>). We find evidence in our simulation results that the 1D flux power spectrum of high column density absorbers does not evolve with redshift (Fig. <ref>). This reflects the fact that the Voigt profiles of damping wings depend only on column density (the physical processes within high column density absorbing regions) and not redshift (cosmological evolution) (see Eq. (<ref>)). The most recent previous investigation into the effect of high column density absorbers on the Lyman-alpha forest was performed by <cit.> <cit.>. These authors measured a single bias function for the 1D Lyman-alpha forest flux power spectrum (at each redshift they consider) that includes the combined effect of all high column density absorbers (all LLS and DLAs).
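That single bias function, in the form quoted later in this section (with k_|| in s/km and one free amplitude), takes one line to implement; the snippet is purely illustrative and is only meant for the k_|| range of the survey:

def single_absorber_bias(k_par, alpha_dla):
    # Single multiplicative bias of the earlier analyses, quoted below:
    # 1 - 0.2 * alpha_DLA * (1 / (15000 * k - 8.9) + 0.018)
    return 1.0 - 0.2 * alpha_dla * (1.0 / (15000.0 * k_par - 8.9) + 0.018)

Because one fixed shape is simply rescaled by its amplitude, any column-density dependence of the bias shape is beyond the reach of such a function.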
Our results are qualitatively similar to those of the previous study; however, by investigating different absorber categories based on column density ranges (Table <ref>), we have shown that the form of the bias as a function of wavenumber depends strongly on column density. This will have implications for any parameter inference from the 1D flux power spectrum. For instance, the analysis by <cit.> uses a single multiplicative bias model for the Lyman-alpha forest flux power spectrum based on the results in <cit.>[The model is reported in <cit.> as 1 - 0.2 α_DLA [1 / (15000.0 k_|| - 8.9) + 0.018], where α_DLA is the free amplitude.]. The model has a free amplitude that is allowed to vary (reflecting the level of contamination in a given survey) and is then marginalised. The shape of this model is therefore based on the observed CDDF of high column density absorbers. However, as discussed in <ref>, in the measurement of the 1D flux power spectrum, high column density absorbers in the quasar spectra are clipped out in the hope of removing noise <cit.>. This process changes the CDDF of high column density absorbers by preferentially removing higher column density systems which are easier to spot in the noisy spectra. Hence, the shape of the bias from residual high column density absorbers is different (as we have shown in <ref>) and the model used by <cit.> may not be flexible enough to account for this, especially at the level of accuracy required by future surveys.Our measurements provide a set of templates for the effect of different absorber categories as a function of column density. By using our templates as part of the model in Eq. (<ref>), it is now possible to more accurately characterise the bias of the residual contamination. We also find evidence for redshift dependence of the fractional effect of high column density absorbers on the forest power spectrum (driven by the changing amplitude of the forest power spectrum), which is also not included in the current model, but is incorporated into our templates. Fits allowing incorporation of our new results into future pipelines are described in <ref>.We now discuss our motivations for some of the choices made in our analysis. We have chosen to present our main results as the 1D flux power spectra of different sets of simulated spectra, where we have categorised spectra according to the maximum column density system within each spectrum. This means that we are measuring the power spectra of ensembles of spectra that are contaminated to similar extents, rather than the flux power spectra of high column density absorbers alone. Furthermore, a consequence of this categorisation is that within the spectra of a given category, there may be high column density absorbers of a lower column density (there may be LLS in the large-DLA category of spectra). In the first instance, this does not affect our results because the power spectrum measurements we have made ( <ref>) and the templates that we construct ( <ref>) include the effect of this possible additional lower column density contamination. A subtlety arises because the amount of additional lower column density contamination will be partly sensitive to the length of simulated spectra, since longer spectra have a greater chance of being contaminated. 
However, the damping wings of the highest column density systems already produce zero transmitted flux over a significant fraction of the length of our simulation box, so that the presence of possible additional high column density absorbers will make very little difference in any case. We tested this by inserting an LLS into a spectrum contaminated by a large DLA, which reduced the total transmitted flux by 0.07%. By carrying out this insertion test with a “control” scenario without the additional contamination, we are able to show that this subtlety will have negligible impact on our conclusions and the validity of our templates.Finally, we comment on the particular form of our preferred model for the bias of high column density absorbers to the 1D Lyman-alpha forest flux power spectrum (as shown in Eq. <ref>). We model the bias as a multiplicative correction, rather than an additive form. First, this matches the form of the currently-used model <cit.>. Moreover, an additive form would require either the separation of high column density absorbers and the Lyman-alpha forest in the simulated spectra or a complete physical understanding of how the two components interact at the ensemble level. The former is not trivial for our analysis since we are not inserting high column density absorbers (as previous studies have done), but are simultaneously simulating the low and high column density regions of gas. We avoid the latter due to any remaining physical uncertainties and instead form a parametric multiplicative model based on our simulated results (see <ref>).§ TEMPLATES FOR THE EFFECT OF HIGH COLUMN DENSITY ABSORBERS4*2.44 LLS 2.52876446 39.4953075 0.97338203Sub-DLA 1.67478107 88.7718840 0.86859388Small DLA 1.20327441 190.479199 0.66886941Large DLA 0.90303164 461.839257 0.35507313 4*3.49 LLS 3.56326536 36.9463677 0.95711093Sub-DLA 2.18888287 102.479388 0.87687523Small DLA 1.40980873 253.268279 0.69449156Large DLA 1.00424245 571.742829 0.40920686 4*4.43 LLS 4.87884885 27.1266717 0.95083775Sub-DLA 2.92813724 90.5577543 0.87487573Small DLA 1.70679137 300.279590 0.70163743Large DLA 1.15309338 673.926369 0.43766022 To aid incorporation in future pipelines, we have produced fits to the biases induced by contaminants in our different column density bins. The parametric form of our templates isP_i^1D (k_||, z)/P_Forest^1D (k_||, z) = (1 + z/1 + z_0)^-3.55×1/(a(z) e^b(z) k_|| - 1)^2 + c(z),wherea(z) = a_0 (1 + z/1 + z_0)^a_1; b(z) = b_0 (1 + z/1 + z_0)^b_1; c(z) = c_0 (1 + z/1 + z_0)^c_1;and the pivot redshift z_0 = 2.00. [a_0, a_1, b_0, b_1, c_0, c_1] are free parameters that we fit simultaneously in k_|| and z space for each absorber category i ∈ {LLS, sub-DLA, small DLA, large DLA}.We fit using the Levenberg-Marquardt algorithm <cit.>[We were able to further validate our modelling by initially fitting using a subset of the available redshift slices and using this preliminary fit to predict the results at z = 3.01. We found the model to accurately predict the results at this intermediate redshift, acting as a form of successful blind test for our model. Our final best-fit parameters use all available data.].Figure <ref> shows the result of these fits (dashed lines) with the raw ratios measured from the simulation (solid lines); the corresponding parameter values are given in Table <ref>. These can be used to reconstruct a final model for the bias of spectra containing high column density absorbers by using Eq. (<ref>). The model described by Eq. 
(<ref>) characterises the results we have measured in our simulations and through Eq. (<ref>) allows interpolation of our results to intermediate redshifts that we have not explicitly probed. (Use of the model outside the limits of redshift and scale we have considered would constitute an extrapolation, but this should not be necessary since our measurements bracket the main redshifts and scales of interest to Lyman-alpha forest studies.) No strong physical meaning should be attached to its terms, although we can motivate the first term on the right-hand side of Eq. (<ref>) as being the (reciprocal of the) main term of the redshift evolution of P_Forest^1D (k_||, z) as found by <cit.> (using a maximum likelihood estimator). In this way, the parametric form isolates the redshift evolution from P_Forest^1D (k_||, z) and then fits the residual redshift evolution using the terms in Eq. (<ref>). The best-fit values of the exponents in Eq. (<ref>) (as given in Table <ref>) are small, indicating that most of the redshift evolution can indeed be ascribed to the expected cosmological evolution of P_Forest^1D (k_||, z).Our results are dependent on the length of our simulated spectra. This manifests in the value of the constant that the ratios P_i^1D (k_||, z) / P_Forest^1D (k_||, z) have at high k_||, which is set by the fraction of the length of contaminated spectra which are unaffected by damping wings and contain only Lyman-alpha forest. Since the incidence rates of high column density absorbers are such that one per contaminated spectrum is most likely, a longer spectrum will have a larger fraction that is uncontaminated, causing the constant value at high k_|| to rise with spectrum length. However, in an analysis of observational data this will be absorbed into a free parameter. We have used a parametric form for our templates such that all this dependency is measured by the term c(z)[It can then be understood why we do not factor out the redshift evolution ofP_Forest^1D (k_||, z), as we do for the first term on the right-hand side of Eq. (<ref>).]. By inserting Eq. (<ref>) into Eq. (<ref>), it can be seen that the term c(z) is degenerate with α_Forest (z) and hence these terms can be combined and allowed to vary. It follows that the full parametric form of our model for the effect of high column density absorbers on the 1D Lyman-alpha forest flux power spectrum isP_Total^1D (k_||, z) = P_Forest^1D (k_||, z) [α_0 (z).. + ∑_i ≠Forestα_i (z) (1 + z/1 + z_0)^-3.55×1/(a(z) e^b(z) k_|| - 1)^2].When using this model in inference from the 1D Lyman-alpha forest power spectrum P_Forest^1D (k_||, z), it will be necessary to vary five free parameters α_0 and α_i, where i indexes each high column density absorber category. In this way, the column density, scale and redshift dependence of the effect of high column density absorbers is fully determined by our templates, while the relative impact of each absorber category is fitted since this is specific to the survey at hand, as well as the details of any clipping of damping wings that changes the survey CDDF. (See <ref> for more discussion of these details.) Note that the parameter α_0 is degenerate with factors that rescale the mean flux and could be omitted in an end-to-end analysis.Figure <ref> compares the model we have constructed to the existing model presented in <cit.> and based on the results in <cit.>. 
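Both models entering this comparison are straightforward to evaluate. The sketch below implements our template and the full bias model above; the pivot z_0 = 2.00 and the exponent -3.55 are fixed as in the text, while the container types and function names are purely illustrative. The six template parameters per category can be obtained, for instance, with scipy.optimize.curve_fit, whose default unbounded method is the Levenberg-Marquardt algorithm used here, fitting simultaneously over stacked (k_||, z) samples.

import numpy as np

Z_PIVOT = 2.00

def template_ratio(k_par, z, a0, a1, b0, b1, c0, c1):
    # P_i / P_Forest = s^-3.55 / (a(z) * exp(b(z) k) - 1)^2 + c(z),
    # with s = (1 + z) / (1 + z_0) and power-law evolution of a, b, c.
    s = (1.0 + z) / (1.0 + Z_PIVOT)
    a, b, c = a0 * s ** a1, b0 * s ** b1, c0 * s ** c1
    return s ** (-3.55) / (a * np.exp(b * k_par) - 1.0) ** 2 + c

def total_power(k_par, z, p_forest, alpha_0, alphas, params):
    # Full model: P_Forest * [alpha_0 + sum_i alpha_i * template_i], with
    # c(z) absorbed into alpha_0 (the two are degenerate, as noted above).
    # alphas, e.g. {"LLS": 0.1, ...}; params maps category -> 6-tuple.
    s = (1.0 + z) / (1.0 + Z_PIVOT)
    bias = alpha_0
    for cat, alpha_i in alphas.items():
        a0, a1, b0, b1, _, _ = params[cat]   # c0, c1 folded into alpha_0
        a, b = a0 * s ** a1, b0 * s ** b1
        bias += alpha_i * s ** (-3.55) / (a * np.exp(b * k_par) - 1.0) ** 2
    return p_forest * bias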
There is broad agreement between the existing model and our model for the total contamination of high column density absorbers, although our model is less steep in its scale dependence. We also show our model applied to a possible "residual" contamination, under the assumption that all DLAs are identified and clipped out in an analysis, leaving only contamination from LLS and sub-DLAs (as assumed by ). The model for this lower column density residual contamination has a shallow scale dependence that the model of <cit.> is unable to characterise. The use of our more flexible model will avoid potential biases due to mischaracterisation of the scale dependence of the residual contamination, thus improving estimation of cosmological effects such as massive neutrinos or the tilt of the primordial power spectrum. We now discuss the prior probability distributions that can be adopted for α_i (z) in any inference using the model we have presented. The α_i (z) are technically not independent parameters, but are each related to integrals of the Hi CDDF for a particular survey over the appropriate column density ranges (and absorption distance per sightline). The effect of spectrum clipping, which changes the survey CDDF, can be modelled by applying a weighting function to the CDDF that down-weights higher column densities, which are easier to spot and remove. If one wanted to reduce the dimensionality of these nuisance parameters, in particular in redshift space, they could be replaced by a parameterisation which quantifies deviations from the expected redshift evolution of the CDDF with only one or two parameters (rather than a parameter for each redshift bin considered). We leave the details of the construction of prior distributions to individual analyses, since the precise considerations will be survey-specific. To conclude this section, we present a summary of the steps required to incorporate our final model for the effect of high column density absorbers into future 1D Lyman-alpha forest analyses: * Our model describes the effect of quasar spectra contaminated by high column density absorbers as a multiplicative bias to the 1D Lyman-alpha forest flux power spectrum, as given by Eq. (<ref>). It can therefore be incorporated into a pipeline at the stage of flux power spectrum interpretation to marginalise over effects of these absorbers. * The free parameters are α_i (z), where i indexes different categories of high column density absorber (as given in Table <ref>). Our model is of use to any Lyman-alpha forest survey that contains spectra which may be contaminated by high column density absorbers (both LLS and DLAs). The relative impacts of different categories of high column density absorbers will be determined in the estimation of posterior distributions of these nuisance parameters. While the normalisation is necessarily floating, the model fully specifies the scale, column density and redshift dependence of the effect of high column density absorbers, using the results we have measured from hydrodynamical simulations. * In a survey which does not clip its quasar spectra, strong priors can be given for the free parameters of our model, based on the expected or measured Hi CDDF. * In a survey which does clip its quasar spectra in an attempt to remove high column density absorbers (and therefore changes the survey CDDF), strong priors can still be given for our model parameters, assuming a model can be constructed for the effect of the clipping process on the CDDF.
This will constitute some re-weighting of the CDDF.* In order to reduce the dimensionality of our nuisance parameters, rather than having a separate parameter for each redshift bin in a given analysis, one could parameterise the redshift evolution by a simple deviation from the CDDF with only one or two numbers. § CONCLUSIONSWe have used a cosmological hydrodynamical simulation <cit.> to investigate the effect of high column density absorbing systems of neutral hydrogen and their associated damping wings on the 1D Lyman-alpha forest flux power spectrum. We find that the effect of high column density absorbers on the Lyman-alpha forest flux power spectrum is a strong function of column density. Accounting for this change in scale-dependence with column density will remove a source of bias in cosmological inference from the Lyman-alpha forest.Previous models <cit.> combine the effect of all high column density absorbers together (all neutral hydrogen column densities N(Hi) > 1.6 × 10^17 atoms cm^-2) based on the column density distribution function (CDDF) in the raw spectra <cit.>. However, the damping wings of some high column density absorbers are clipped out in the final analysis <cit.>, which preferentially removes higher density systems (because they are easier to spot) and changes the column density distribution in the residual contamination.Our results apply for both clipped and unclipped survey spectra, since we separately model the effect for different column densities of the dominant absorber, allowing us to accurately account for the contamination in the 1D flux power spectrum. We discuss in <ref> the practicalities of employing our model in future analyses.The shape and amplitude of the distortions in the power spectrum due to a damped absorber depend on its column density because they are driven by the width of the damping wings; the dominant effect is a “one-halo” term. We defer investigation of potential “two-halo” terms to future work where we measure the effect of high column density absorbers on the 3D Lyman-alpha forest flux power spectrum.We anticipate that our model will help realise forecasted cosmological constraints from upcoming surveys like DESI. <cit.> forecast that DESI will have the constraining power to make a ∼ three-sigma detection of the sum of neutrino masses (in combination with Planck CMB data); and they show the power of the 1D Lyman-alpha forest power spectrum in probing the primordial power spectrum, halving the one-sigma error on the running of the spectral index, with implications for inflationary models. It will be necessary to use the models we have presented here, alongside carefully constructed priors on the residual CDDF, to remove degeneracies between the effect of high column density absorbers and cosmological effects.§ ACKNOWLEDGEMENTSKKR, SB, HVP and BL thank the organisers of the COSMO21 symposium in 2016, where this project was conceived. KKR was supported by the Science and Technology Facilities Council (STFC). SB was supported by NASA through Einstein Postdoctoral Fellowship Award Number PF5-160133. HVP was partially supported by the European Research Council (ERC) under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement number 306478-CosmicDawn. AP was supported by the Royal Society. AFR was supported by an STFC Ernest Rutherford Fellowship, grant reference ST/N003853/1. 
BL was supported by NASA through Einstein Postdoctoral Fellowship Award Number PF6-170154. § ONE-DIMENSIONAL FLUX POWER SPECTRUM OF A VOIGT PROFILE As discussed in <ref>, the broadened absorption lines of high column density absorbers are usually modelled by a Voigt profile. A Voigt profile is a convolution of a Lorentzian profile and a Gaussian profile. It therefore appropriately models the combination of the main physical processes that broaden atomic transition lines: the Lorentzian profile from natural or collisional broadening and the Gaussian profile from Doppler broadening. The optical depth as a function of wavelength τ(λ) is the product of the line-of-sight column density N and the atomic absorption coefficient α(λ) <cit.>[Eq. (<ref>) is valid in SI units.]: τ(λ) = N α(λ) = N √π e^2/(4 π ϵ_0 m_e c^2) × f λ_t^2/Δλ_D × u(x,y), where the fundamental physical constants have their usual meaning, f is the oscillator strength of the atomic transition, λ_t is the transition wavelength, and the Doppler wavelength "shift" associated with a gas of temperature T, for an ion of mass m_ion, is Δλ_D = (λ_t/c) (2 k_B T/m_ion)^1/2. u(x,y) is an unnormalised form of the Voigt function (the normalisation is already expressed in the pre-factors of Eq. (<ref>)), specifically the real part of the Faddeeva function: w(z) = e^-z^2 erfc(-iz) = u(x,y) + i v(x,y), where erfc(x) is the complementary error function and z = x + iy. Here x and y are, respectively, the wavelength difference from the line centre λ_c and the natural width of the transition, in units of the Doppler shift: x(λ) = (λ - λ_c)/Δλ_D; y = (Γ λ_t^2/4 π c) (1/Δλ_D), where Γ is the damping constant of the transition, the inverse of the time scale for the electron to remain in the upper level of the transition in vacuum. For the Lyman-alpha transition, f = 0.4164, λ_t = 1215.67 Å, m_ion = m_proton and Γ = 6.265 × 10^8 s^-1 <cit.>. For the column densities that we consider, we assume a gas temperature T ≈ 10^4 K. In order to calculate the 1D flux power spectrum arising from these Voigt profiles, the same procedure is followed as in <ref>: we form flux spectra and carry out a Fourier transform. We transform from wavelengths to velocities by Δv/c = Δλ/λ. Figure <ref> shows the 1D flux power spectra of Voigt profiles as given by Eq. (<ref>) for the Lyman-alpha absorption line for three different column densities of neutral hydrogen, N(Hi) = [10^19, 10^20, 10^21] atoms cm^-2, spanning the column densities for LLS and DLAs. This figure should be compared with Fig. <ref> in <ref>, which shows the 1D flux power spectra we have measured in the hydrodynamical simulations. The trends in Fig. <ref> broadly support the arguments made in <ref>, relating the large-scale power spectrum of simulated spectra contaminated by high column density absorbers to the power spectrum of the relevant Voigt profiles. The shape of the large-scale power spectrum of the Voigt profiles is similar in amplitude and scale-dependence to the excesses on large scales in the 1D flux power spectra of simulated spectra in the high column density absorber categories. Moreover, these excesses get steeper, increase in amplitude and become prominent on larger scales for higher column densities, both in the simulated and analytic spectra. This reflects the fact that a higher column density means wider damping wings and so correlations on larger scales. In the analytic power spectra in Fig. <ref>, we observe oscillations in the power spectrum on smaller scales that rapidly decrease in amplitude.
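The optical depth above maps directly onto SciPy's implementation of the Faddeeva function (scipy.special.wofz). The following self-contained sketch works in SI units; the hard-coded constants, the wavelength grid and the conversion of the quoted column densities from atoms cm^-2 are the only inputs, and all names are illustrative:

import numpy as np
from scipy.special import wofz

# SI constants and the Lyman-alpha line data quoted in the text
E, EPS0, M_E = 1.602176634e-19, 8.8541878128e-12, 9.1093837015e-31
M_P, C, K_B = 1.67262192369e-27, 2.99792458e8, 1.380649e-23
F_OSC, LAM_T, GAMMA = 0.4164, 1215.67e-10, 6.265e8   # -, m, s^-1

def tau_lya(lam, n_hi_cm2, temp=1.0e4):
    # Voigt optical depth tau(lambda) for Lyman-alpha; lam in metres,
    # n_hi_cm2 in atoms cm^-2 (converted to m^-2 below), temp in K.
    n_si = n_hi_cm2 * 1.0e4                      # cm^-2 -> m^-2
    dlam_d = LAM_T / C * np.sqrt(2.0 * K_B * temp / M_P)
    x = (lam - LAM_T) / dlam_d                   # line centre at lambda_t
    y = GAMMA * LAM_T ** 2 / (4.0 * np.pi * C * dlam_d)
    u = wofz(x + 1j * y).real                    # u(x, y): Re of Faddeeva
    pref = np.sqrt(np.pi) * E ** 2 / (4.0 * np.pi * EPS0 * M_E * C ** 2)
    return n_si * pref * F_OSC * LAM_T ** 2 / dlam_d * u

# Flux F = exp(-tau) on a velocity grid (dv/c = dlambda/lambda), followed
# by an FFT as in the estimator sketched earlier, gives the analytic spectra.
lam = LAM_T * (1.0 + np.linspace(-0.05, 0.05, 4096))
flux = np.exp(-tau_lya(lam, 1.0e20))             # a DLA-like column density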
Such oscillations are not observed in the fully-simulated power spectra, since they are orders of magnitude lower in amplitude than the flux power spectrum of the residual Lyman-alpha forest (see Fig. <ref>). Furthermore, in our results we are effectively averaging over a number of column densities in each column density bin (or absorber category) that we consider; this has the additional effect of averaging out these smaller-scale oscillations in the power spectrum, yielding a smoother scaling. | http://arxiv.org/abs/1706.08532v2 | {
"authors": [
"Keir K. Rogers",
"Simeon Bird",
"Hiranya V. Peiris",
"Andrew Pontzen",
"Andreu Font-Ribera",
"Boris Leistedt"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170626180003",
"title": "Simulating the effect of high column density absorbers on the one-dimensional Lyman-alpha forest flux power spectrum"
} |
§ INTRODUCTIONDi-lepton final states in the high invariant mass regionM_l l ≥ 1 TeV are one of the primary channels used insearches for Z^' gauge bosons in Beyond Standard Model (BSM) scenarios and in precision studies of the Standard Model (SM) at the Large Hadron Collider(LHC). It was pointed out in <cit.> that in the high-mass region the contribution of photon-induced (PI)di-lepton production, and associated uncertainties from the photon parton distribution function (PDF),could affect the di-lepton SM spectrum shape potentially influencing wide-resonance searches.This could have an even greater impact on the Contact Interaction (CI) type of search, where one has to predict the SM background from theory.Therefore, while these effects do not influence significantly standard narrow Z^' searches, they can become a significant sourceof theoretical systematics for wide Z^'-bosons and CI searches.The analysis of this theoretical systematics has been extended in <cit.> by, on one hand, including the contribution of both real and virtual photon processes and, on the other hand, evaluating the impact of recent updates in photon PDF fits. This article gives a brief report on these studies. § REAL AND VIRTUAL PHOTON PROCESSES IN DI-LEPTON PRODUCTION The contribution of photon interactions from elastic collisions is generally evaluated through the Equivalent Photon Approximation (EPA), followingestablished literature <cit.>. With the introduction of QED PDFs, also inelastic processes can be included in the picture and their contribution can be theoreticallyestimated. In the QED PDF framework indeed, the initial resolved photons (i.e. real photons with Q^2=0) have their own PDF which describes their initial state.Examples of inelastic QED PDF sets are the CT14QED <cit.> and the MRST2004QED <cit.> sets. These inelastic sets can be used to calculate separately the three contributions coming from two resolved photons, one resolved and one virtual,and two virtual photon scattering. These three terms are called Double-Dissociative (DD), Single-Dissociative (SD) and pure EPA respectively. The implementation of the virtual photon spectrum and the choices of its parameters follow the work in Ref. <cit.>.The results for the contributions ofthese three terms to the dilepton cross section, evaluated using the CT14QED set, are visible in Fig. <ref>. The DD contribution is generally dominant, but the SD can be equally important. In the inset plot is shown its relative size compared with the DD result. In the CT14QED framework the SD term contribution is about 75% - 90% of the DD one. The pure EPA contribution instead is sub-dominant with respect to the other terms.PDF collaborations also release inclusive QED PDF sets, where both the elastic and inelastic components are contained in the photon PDF fitting procedure. To this group belong the NNPDF3.0QED <cit.>, xFitter_epHMDY <cit.>, LUXqed <cit.> and CT14QED_inc <cit.> PDF sets.The sum of the three contributions inFig. <ref>can be compared with the full results that are obtained invoking inclusive QED PDFs. Using the CT14QED set and its inclusive version, we can directly compare the PI di-lepton spectra obtained in the two frameworks. This comparison is visible in the blue line of Fig. <ref> which represent the ratio between the sum of the EPA, SD and DD terms evaluated with the CT14QED, and the inclusive CT14QED_inc result. The difference between the two results is below 3% in the explored invariant mass region. 
This ensures that the separation between elastic and inelastic components is well understood and that double counting is kept at the percent level. In the same figure, we also show the inclusive result obtained with the LUXqed set. The yellow line shows the ratio between the two inclusive PI predictions obtained with the LUXqed and the CT14QED_inc sets, respectively. The two predictions are in good agreement, their difference being below 7% in the explored invariant mass region. § PDF UNCERTAINTIES In Fig. <ref> we have given a first indication of the size of the PDF errors, which are represented by the shaded areas in the DD and SD curves. In this section we discuss in more detail the systematic uncertainties on the PI predictions due to the photon PDF. Generally, PDF packages are accompanied by tables of PDFs, which can be used to estimate the uncertainties on their fits. The most common approaches follow either the "replicas" method (NNPDF3.0QED, xFitter_epHMDY) <cit.> or the Hessian method (LUXqed) <cit.>. The CT14QED_inc set and its inelastic version instead include a table of PDFs obtained by imposing an increasingly tight constraint on the fraction of proton momentum carried by the photon; the PDF uncertainty can then be estimated following the indications in Ref. <cit.>. In Fig. <ref> we show the predictions for the PI di-lepton spectrum obtained with the tables of the CT14QED_inc set (a) and with the LUXqed symmetric Hessian eigenvectors (b). From this picture we expect a small PDF uncertainty for the PI contribution. This is shown in Fig. <ref>, where we plot the impact of the photon PDF uncertainties on the total di-lepton spectrum, i.e., the sum of the pure DY and the PI contributions. The curves have been rescaled following the legend in order to be visible on the same scale. The LUXqed predictions have the smallest uncertainty. The uncertainty on the di-lepton spectrum due to the photon PDFs is less than 0.2%, thus well below any experimental uncertainty in the few-TeV invariant mass region. A slightly larger result is obtained from the CT14QED_inc table, where the relative error from the photon PDF is between 1% and 4% in the same invariant mass interval. The low uncertainty of the LUXqed and CT14QED_inc predictions stands in contrast with the high uncertainty predicted by the NNPDF3.0QED set. § PHOTON-INDUCED EFFECTS IN Z^'-BOSON SEARCHES While narrow resonance searches in the di-lepton channel are not much affected by PI processes, in the case of broad Z^' resonances, as well as in Contact Interaction (CI) scenarios, PI processes and their associated uncertainties may affect the sensitivity to new physics <cit.>. The experimental approaches used to set limits on wide resonances (or CI) are essentially "counting" strategies. As these kinds of searches strongly rely on a good understanding of the background, the change of shape of the di-lepton spectrum at high invariant masses due to the PI component has a crucial effect on the sensitivity. Fig. <ref> <cit.> shows the invariant mass spectrum of a single Z^' Sequential Standard Model (SSM) benchmark as obtained from the NNPDF3.0QED and LUXqed PDF sets, and the reconstructed Forward-Backward Asymmetry (AFB). The latter has been considered because it features partial cancellation of PDF uncertainties, as shown in Ref. <cit.>.
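Replica-type uncertainty bands of the kind just discussed can be evaluated directly with the LHAPDF library and its Python bindings. In the sketch below the set name is illustrative (any of the QED sets discussed here could be substituted, provided the grid files are installed), member 0 is assumed to be the central replica following the usual NNPDF convention, and the scale is in GeV:

import numpy as np
import lhapdf   # assumes LHAPDF with Python bindings and the grids installed

PHOTON = 22     # PDG id of the photon

def photon_pdf_band(x, q, set_name="NNPDF30_nlo_as_0118_qed"):
    # Mean and standard deviation of x*gamma(x, Q) over Monte Carlo
    # replicas; member 0 (the central average) is skipped.
    members = lhapdf.getPDFSet(set_name).mkPDFs()
    vals = np.array([m.xfxQ(PHOTON, x, q) for m in members[1:]])
    return vals.mean(), vals.std()

mean, err = photon_pdf_band(x=1.0e-2, q=1000.0)  # Q of order the TeV masses

For symmetric Hessian eigenvectors, as released with LUXqed, the corresponding uncertainty is instead the quadrature sum of the member-minus-central differences.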
The NNPDF3.0QED scenario leads to a loss of sensitivity in the invariant mass spectrum, while the AFB maintains a visible shape standing over the systematic error bands even in the wide-resonance scenario. The LUXqed low central value and small PDF uncertainty instead lead to no visible effects on either the invariant mass or the AFB distribution. § CONCLUSIONS The upgrade in energy of the LHC to 13 TeV at Run II has opened the exploration of higher energy scales that were barred during the past Run I. The LHC potential in BSM searches at the ongoing Run II will be further boosted by the increase of the collected data sample in two years' time, when the luminosity should reach the design value L = 300 fb^-1. As we approach the high-luminosity phase at the LHC, the statistical errors will be greatly reduced. At the same time, systematic effects will become more and more important. One of the major sources of theoretical systematics at the LHC comes from PDF uncertainties. In this article we have studied the di-lepton final state in the high-mass region M_ll ≥ 1 TeV at the LHC Run II and examined the impact of PDF systematics on searches for heavy neutral spin-1 Z^'-bosons, with particular regard to PI processes and associated effects of the photon PDF. Using as benchmarks the SM di-lepton spectrum and the GSM-SSM wide-resonance scenario, we have pointed out non-negligible contributions from the single-dissociative production process. This underlines the relevance of improving the theory of PI processes at the LHC in the future, for both QED PDFs and elastic processes. We have analysed the significance of the BSM signal in di-lepton channels by incorporating real and virtual photon contributions. We have illustrated that this depends considerably on the different scenarios for photon PDFs, and presented results for the cross section and the reconstructed AFB as a function of the di-lepton invariant mass. § ACKNOWLEDGEMENTS E. A., J. F., S. M. & C. S.-T. are supported in part through the NExT Institute.
References:
Accomando:2016tah E. Accomando, J. Fiaschi, F. Hautmann, S. Moretti, C. Shepherd-Themistocleous, Photon-initiated production of a di-lepton final state at the LHC: cross section versus forward-backward asymmetry studies, Phys. Rev. D 95 (2017) 035014 [hep-ph/1606.06646].
Accomando:2016ouw E. Accomando, J. Fiaschi, F. Hautmann, S. Moretti, C. Shepherd-Themistocleous, Impact of the Photon-Initiated process on Z'-boson searches in di-lepton final states at the LHC [hep-ph/1609.07788].
Accomando:2016ehi E. Accomando, J. Fiaschi, F. Hautmann, S. Moretti, C. Shepherd-Themistocleous, The effect of real and virtual photons in the di-lepton channel at the LHC, Phys. Lett. B 770 (2017) 1-7 [hep-ph/1612.08168].
Accomando:2017itq E. Accomando, J. Fiaschi, F. Hautmann, S. Moretti, C. Shepherd-Themistocleous, Real and virtual photons effects in di-lepton production at the LHC [hep-ph/1704.08587].
Budnev:1974de V.M. Budnev, I.F. Ginzburg, G.V. Meledin, V.G. Serbo, The two-photon particle production mechanism. Physical problems. Applications. Equivalent photon approximation, Phys. Rept. 15 (1975) 181-281.
Schmidt:2015zda C. Schmidt, J. Pumplin, D. Stump, C.P. Yuan, CT14QED parton distribution functions from isolated photon production in deep inelastic scattering, Phys. Rev. D 93 (2016) 114015 [hep-ph/1509.02905].
Martin:2004dh A.D. Martin, R.G. Roberts, W.J. Stirling, R.S. Thorne, Parton distributions incorporating QED contributions, Eur. Phys. J. C 39 (2005) 155-161 [hep-ph/0411040].
Piotrzkowski:2000rx K. Piotrzkowski, Tagging two photon production at the CERN LHC, Phys. Rev. D 63 (2001) 071502 [hep-ex/0009065].
Ball:2014uwa NNPDF Collaboration, Parton distributions for the LHC Run II, JHEP 04 (2015) 040 [hep-ph/1410.8849].
Giuli:2017oii xFitter Developers' Team, The photon PDF from high-mass Drell Yan data at the LHC [hep-ph/1701.08553].
Manohar:2016nzj A. Manohar, P. Nason, G.P. Salam, G. Zanderighi, How bright is the proton? A precise determination of the photon parton distribution function, Phys. Rev. Lett. 117 (2016) 242002 [hep-ph/1607.04266].
Ball:2011gg R.D. Ball, V. Bertone, F. Cerutti, L. Del Debbio, S. Forte, A. Guffanti, N.P. Hartland, J.I. Latorre, J. Rojo, M. Ubiali, Reweighting and Unweighting of Parton Distributions and the LHC W lepton asymmetry data, Nucl. Phys. B 855 (2012) 608-638 [hep-ph/1108.1758].
Butterworth:2015oua J. Butterworth et al., PDF4LHC recommendations for LHC Run II, J. Phys. G 43 (2016) 023001 [hep-ph/1510.03865].
Accomando:2015cfa E. Accomando, A. Belyaev, J. Fiaschi, K. Mimasu, S. Moretti, C. Shepherd-Themistocleous, Forward-Backward Asymmetry as a Discovery Tool for Z^' Bosons at the LHC, JHEP 01 (2016) 127 [hep-ph/1503.02672]. | http://arxiv.org/abs/1706.08767v1 | {
"authors": [
"Juri Fiaschi",
"Elena Accomando",
"Francesco Hautmann",
"Stefano Moretti",
"Claire H. Shepherd-Themistocleous"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170627104047",
"title": "Photon-induced contributions to di-lepton production at the LHC Run II"
} |
1 E M H K X Y G Q P F H T d ℤ ℝ ℂ 𝔼definitionDefinitionmypropertyPropertymytheoremTheoremmylemmaLemmamyconjectureConjecturecorollaryCorollarymyproblemProblemmyobservationObservationmyremarkRemarkmyalgorithmAlgorithmmyassumptionAssumptionmyexampleExample def = = ·=x y c d b q g i s a r u m n h p z o e f v w A B C D E F G H I J K L M N S V P Q R W U X Y Z 1ξ0̱ 0ΦΨΓΣΔΩΛÂ C D B R I N S T U W R D EB I SINR LLR SNR MSE h I S R D SR RD =2500 NOMA: Principles and Recent Results Jinho ChoiSchool of EECS, GISTEmail: [email protected](Invited Paper) This work was supported by the “Climate Technology Development and Application" research project (K07732) through a grant provided by GIST in 2017.today ======================================================================================================================================================================================================================================= Althoughnon-orthogonal multiple access (NOMA)is recently considered for cellular systems, its key ideas such as successive interference cancellation (SIC) and superposition coding have been well studied in information theory. In this paper, we overview principles ofNOMA based on information theory and present some recent results. Under a single-cell environment,we mainly focus on fundamental issues, e.g., power allocation and beamforming for downlink NOMA and coordinated and uncoordinated transmissions for uplink NOMA.nonorthogonal multiple access; power allocation; beamforming; random access § INTRODUCTIONRecently, nonorthogonal multiple access (NOMA) has been extensively studied in <cit.><cit.><cit.><cit.> for 5th generation (5G) systems as NOMA can improve the spectral efficiency. In order to implement NOMA within standards, multiuser superposition transmission (MUST) schemes are proposed in<cit.>.There are also review articles for NOMA, e.g., <cit.><cit.>.Although the application of NOMA to cellular systems is relatively new, NOMA is based on well-known schemes such as superposition coding and successive interference cancellation (SIC)<cit.><cit.>. In particular, the decodingapproach based on multiple single-user decoding with SIC for multiple access channelshas been studied from the information-theoretic point of view under other names such as stripping and onion peeling <cit.>. In addition, there are precursors of NOMA. For example, code division multiple access (CDMA) is a NOMA scheme as spreading codes are not orthogonal <cit.><cit.><cit.>.In order to mitigate the multiple access interference (MAI) due to non-orthogonal spreading codes, multiuser detection (MUD) is also studied <cit.><cit.>. In CDMA, although the notion of superposition coding is not actively exploited, SICis extensively studied since <cit.>.The main difference of the NOMA schemes for 5G from existingCDMA schemes is the exploitation of the power difference of users and the asymmetric application of SIC in the power and rate allocation. In particular, these features are well shown in downlink NOMA. In <cit.><cit.>, a user close to a base station (BS) and a user far away from the BS form a group or cluster. For convenience, the former and latter users are called strong and weak users, respectively (in terms of their channel gains). It is expected to transmit a higher power to the weak user than the strong user due to the path loss or channel gain. 
If they share the same radio resource block, the signal to the weak user received at the strong user has a higher signal-to-noise ratio (SNR) than that at the weak user, which implies that the strong user is able to decode the signal to the weak user and remove it (using SIC) to decodethe desired signal without MA without MAI. On the other hand, at the weak user, the signal to the strong user is negligible as its transmission power is lower than that to the weak user. Thus, the weak user decodes the desired signal without using SIC.To exploit the power difference, the power allocation becomes crucial. A power allocation problem for NOMA with fairness is studied in <cit.><cit.>.An energy efficient power allocation approach is investigated in <cit.>. The power difference between userscan also be exploited in a multi-cell system. In <cit.>, NOMA is studied for downlink coordinated two-point systems. In <cit.>, coordinated beamforming is considered for multi-cell NOMA.In <cit.><cit.><cit.>,multiple input multiple output (MIMO) for NOMA is studied to see how NOMA can be applied to MIMO systems. Beamforming in NOMA is also studied in <cit.><cit.><cit.>. In general, beamforming in NOMA is to exploit the power andspatial domains.In <cit.>, beamforming with limited feedback of channel state information (CSI) is studied. Multicast NOMA beamforming is considered in <cit.>. Since there have been various NOMA schemes and related approaches, it might be important to have an overview.As mentioned earlier, there are already excellent review articles <cit.><cit.>. However, they focus on system aspects. Thus, in this paper, we aim at providing an overview of key approaches and recent results with emphasis on fundamentals of NOMA under a single-cell environment.The rest of the paper is organized as follows. In Section <ref>, we present system models for uplink and downlink NOMA. The power allocation problem and downlink beamforming are considered fordownlink NOMA in Section <ref> and Section <ref>, respectively. We focus on some key issues in uplink NOMA in Section <ref> and conclude the paper with some remarks in Section <ref>. §.§.§ NotationMatrices and vectors are denoted by upper- and lower-case boldface letters, respectively. The superscripts *, , anddenote the complex conjugate, transpose, Hermitian transpose, respectively. [·] and Var(·) denote the statistical expectation and variance, respectively. (, ) represents the distribution of circularly symmetric complex Gaussian (CSCG) random vectors with mean vectorand covariance matrix .§ SYSTEM MODELS§.§ Downlink NOMA In this section, we present a system modelconsisting of a BS and multiple users for downlink NOMA. Throughout the paper, we assume that the BS and users are equipped with single antennas.Suppose that there are K users in the same resource block for downlink. Let s_k,t and h_k denote the data symbol at time t and channel coefficient from the BS to user k, respectively. The block of data symbols, [s_k,0 … s_k,T-1]^, where T is the length of data block, is assumed to be a codeword of a capacity-achieving code. Furthermore, T is shorter than the coherence time so that h_k remains unchanged for the duration of a block transmission. 
Suppose thatsuperposition coding <cit.> is employed for NOMA and the signal to be transmitted by the BS is ∑_k=1^K s_k,t.Then, at user k, the received signal is given byy_k,t = h_k ∑_m=1^K s_m,t + n_k,t, t = 0,…, T-1, where n_k,t∼(0, 1) is the independent background noise (here, the variance of n_k,t is normalized for convenience). Let α_k = |h_k|^2 and P_k = [|s_k,t|^2](with [s_k,t] = 0). Then, P_k becomes the transmission power allocated to s_k,t. In this section, we assume that the BS knows all the channel gains, {α_k}, and studies the power allocation to enable SIC at users for NOMA.§.§ Uplink NOMA We again assume that there are K users who are allocated to the same resource block for uplink transmissions. Then, at the BS, the received signal becomesr_t = ∑_k=1^K g_k u_k,t + n_t,t = 0, …, T-1, where g_k and u_k,trepresent the channel coefficient from the kth user to the BS and the signal from the kth user, respectively, and n_t ∼(0, 1) is the background noise at the BS. The BS is able to decode all K signals ifthe transmission rates and powers of the u_k,t's are properly decided using single-user decoding. A well-known example of uplink NOMA is CDMA where each user's signal is spread signal with a unique spreading code <cit.>. § POWER ALLOCATION FOR DOWNLINK NOMA In this section, we briefly study the power allocation for downlinkNOMA with achievable rates.Let R_k denote the transmission rate of s_k,t in (<ref>) and C_l;k denote the achievable ratefor the signal to user l at user k with SIC in descending order, where l ≥ k. Then, it can be shown thatC_l;k = log_2 ( 1 + α_k P_l/α_k ∑_m=1^l-1 P_m + 1).Assume that user k has to decode his/her signal (i.e., {s_k,t}) as well as the signals to userl, l ∈{k, …, K} for SIC in NOMA.In this case, the rate-region can be found asR_l < C_l;k,l ∈{k, …, K},k ∈{1,…, K}. Clearly, user k should be able to decode the signalsto users k, …, K. As an example, suppose that K = 2. At user 1, the signal to user 2 can be decoded ifR_2 < C_2;1 = log_2 ( 1+ α_1 P_2/α_1 P_1 + 1).Once {s_2,t} is decoded at user 1 and removed by SIC,C_1;1 becomes log_2 ( 1+ α_1 P_1 ) and the signal to user 1 can be decoded ifR_1 < C_1;1 = log_2 ( 1+ α_1 P_1 ). At user 2, assuming that {s_1,t} is sufficiently weaker than{s_2,t}, {s_2,t} can be decoded ifR_2 < C_2;2 =log_2 ( 1+ α_2 P_2/α_2 P_1 + 1). If α_1 ≥α_2 is assumed, we haveα_2 P_2/α_2 P_1 + 1≤α_1 P_2/α_1 P_1 + 1C_2;2≤ C_2;1.Thus, the rate-region of R_1 and R_2 in (<ref>) is reduced to R_1 < C_1;1R_2 < C_2;2 .In general, for any K ≥ 2, ifα_1 ≥…≥α_K, the rate-region in (<ref>) is reduced to R_l < C_l;l,l = 1, …, K,because C_l;k≥ C_l;l, k ∈{1,…,l-1}, l ∈{1,…,K}. If the BS knowsCSI and orders the users according to (<ref>), we can consider a power allocation problem for downlink NOMA tomaximize the sum rate subject to a total power constraint as follows:max_∑_k=1^K C_k;k ∑_k=1^K P_k ≤ P_ T,where = [P_1… P_K]^ and P_ T is the total transmission power. The power allocation problem in (<ref>)has a trivial solution that is P_1 = P_ T and P_k = 0, k = 2,…, K. To avoid this, rate constraints can be taken into account. To this end, we can consider another power allocation problem to minimize the total transmission powerwith rate constraints as follows:min_ || ||_1 C_k;k≥R̅_k,k = 1,…,K,where R̅_k represents the (required) minimum rate for user k and ||||_1 = ∑_i |x_i| denotes the 1-norm of vector . 
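Given the ordering of channel gains assumed above, this minimum-power problem has the greedy closed-form solution derived in the appendix at the end of the paper. A short sketch (rates in bits per channel use, names illustrative) that also reproduces the worked example discussed next:

def min_total_power(alpha, r_min):
    # Greedy solution of the min-power problem: users ordered so that
    # alpha[0] >= alpha[1] >= ...; implements
    # P_k* = (2**R_k - 1) * (sum of earlier powers + 1/alpha_k).
    powers = []
    for a_k, r_k in zip(alpha, r_min):
        powers.append((2.0 ** r_k - 1.0) * (sum(powers) + 1.0 / a_k))
    return powers

print(min_total_power([1.0, 0.25], [2.0, 1.0]))  # [3.0, 7.0]: the example below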
This problem formulationcan be employed instead of the approaches in <cit.> and <cit.> for fairness aseach user can ensure a guaranteed rate, R̅_k. In Appendix <ref>, we provide the solution to (<ref>).For example, consider the case of K = 2 with {α_1, α_2}= {1,1/4}. If we assume that P_ T = 10,the rate-region of R_1 and R_2 can be obtainedas in Fig. <ref>. We note that if P_1 = 10 and P_2 = 0, which is the solution to (<ref>), the maximum sum rate (log_2 (1+ α_1 P_ T) = log_2 (11) ≈ 3.459) is achieved.If we consider the problem in (<ref>) with R̅_1 = 2 and R̅_2 = 1, we can first decide the minimum of P_1 satisfying C_1;1≥R̅_1 = 2, because C_1;1 is a function of only P_1 as in (<ref>), which is P_1^* =3. Once P_1^* is decided, we can find theminimum of P_2 satisfying C_2;2≥R̅_2 = 1 from (<ref>), which is P_2^* = 7. We can see that P_1^* + P_2^* = 10. Thus, thesolution to (<ref>) can be located on the boundary of the rate-region with P_ T = 10 as shown in Fig. <ref> (with the circle mark).The above example demonstrates that the solution to (<ref>) can be readily foundif the BS knows the CSI, {α_k}. In addition, the solution can achievethe rate-region. Although the problem formulation in (<ref>) is attractive, the main drawback is the CSI feedback. In <cit.>, limited feedback of CSI is considered for a more realistic environment in NOMA. It is also possible to perform the power allocation for NOMA with statistical CSI. In this case, the outage probability is usually employed as a performance measure as in <cit.><cit.><cit.><cit.>. Since most power allocation methods are based on the achievable rate, they may not be applicable when capacity achieving codes are not employed for NOMA. Furthermore, with fixed modulation schemes, it is not easy to adapt the transmission rates according to the optimal powers due to the limited flexibility. Thus, it is often desirable to consider the power allocation for a realistic NOMA system. To this end, the power allocation can be investigated for a practical MUST scheme as in <cit.>. § BEAMFORMING FOR DOWNLINK NOMA To increase the spectral efficiency of downlinkin a multiuser system,multiuser downlink beamforming can be considered. While there are various approaches for multiuser downlink beamforming without NOMA, only fewbeamforming schemes are studied with NOMA. For example, zero-forcing (ZF) approaches are consideredin <cit.> and random beams are used in <cit.>. In <cit.>,a minorization-maximization algorithm (MMA) is employed to maximize the sum rate in NOMA with beamforming. In <cit.>, multi-cell MIMO-NOMA networks are studied withcoordinated beamforming. In this section, we discuss NOMA beamforming that was studied in <cit.>.For downlink NOMA, we can exploit the power domain as well as the spatial domain to increase the spectral efficiency as in <cit.><cit.>. In Fig. <ref>, we illustrate downlink beamforming for a system of 4 users. There are two clusters of users. Users 1 and 3 belong to cluster 1. In cluster 2, there are users 2 and 4. In each cluster, the users' spatial channels should be highly correlated so that one beam can be used to transmit signalsto the users in the cluster <cit.>. The resulting approach is called NOMA (downlink) beamforming.In NOMA beamforming, there are two key problems. The first problem is user clustering. In general,a group of users whose channels are highly correlatedcan form a cluster. The second problem is beamforming. 
A beam is to support a set of users in a cluster, while this beam should not interfere withthe users in the other clusters. As in <cit.>, it is convenient to consider the two problems separately at the cost of degraded performance. To consider NOMA beamforming, we focus on one cluster consisting of two users. We assume that user 1 is a stronguser (close to the BS) and user 2 is a weak user (far away from the BS). The signal-to-interference-plus-noiseratio (SINR) at user 2 is given by γ_2 =|_2^ |^2 P_2/ |_2^ |^2 P_1 + σ^2.To keep a certain QoS, we need to satisfy γ_2 ≥ G_2, where G_2 represents the target SINR at user 2 (it is assumed that if γ_2 ≥ G_2, the signal to user 2 can be correctly decoded). As usual, user 2 is assumed to be a weak user that is far away from the BS. On the other hand, user 1, called a strong user, is close to the BS and able to decode the signal to user 2 and remove it by SIC, and decode the desired signal (i.e., the signal to user 1). Thus, at user 2, it is required that|_1^ |^2 P_2/ |_1^ |^2 P_1 + σ^2≥ G_2,andγ_1 =|_1^ |^2 P_1/σ^2≥ G_1,where G_1 represents the target SINR at user 1. Note that the sum rate of the cluster becomes log_2 (1+G_1) + log_2 (1+G_2). From (<ref>)–(<ref>), the following set of constraints can be found:|_1^|^2≥ |_2 |^2 |_1^|^2 P_1≥ G_1 σ^2 |_2^|^2 P_2≥ ( |_2^ |^2 P_1 + σ^2 ) G_2 .Consequently, the problem to minimize the transmit power per cluster, P_1 + P_2, can be formulated with the constraints in (<ref>), (<ref>), and (<ref>).It is possible to find the optimal beam that minimizes the transmission power subject to (<ref>) - (<ref>). With M = 3 clusters, the optimal beam is foundfor different numbers of antennas. In Fig. <ref>, we show the required total transmission powerto meet the quality of service (QoS) with target SINRs, G_1 and G_2. It is clear that NOMA requires a lower transmission power than orthogonal multiple access (OMA), while the required total transmission powerdecreases with the number of antennas L due to the beamforming gain.In addition to NOMA beamforming,the user allocation or clustering plays a crucial role in improving the performance. In <cit.><cit.><cit.>, other beamforming approaches with user clustering are studied. It is noteworthy that the NOMA beamforming approach with user clustering is not optimal. For a better performance,without user clustering, as in <cit.>,an optimization problem can be formulated. However, in this case, the computational complexity to find the optimal solution is usually high.§ UPLINK NOMA NOMA can be employed for uplink transmissions based on the coordination by the BS, which requires signaling overhead. It is also possible to consider uncoordinated uplink NOMA. In this section, we briefly discuss coordinated and uncoordinated uplink NOMA systems.§.§ Coordinated Uplink NOMA From (<ref>), the mutual information between r_t and {u_k,t} is given by(r_t; {u_k,t}) = log_2( 1 + ∑_k=1^K β_k Q_k ), where β_k= |g_k|^2 and Q_k = [|u_k,t|^2]. In general, in order to achieve the rate (r_t; {u_k,t}) in(<ref>), the BS may need to perform joint decoding for all K signals. However,using the chain rule <cit.>, it is also possible to show that(r_t; {u_k,t}) = ∑_k=1^K (r_t; u_k,t |u_k+1,t,… u_K,t) = ∑_k=1^Klog_2( 1 + β_k Q_k/1+ ∑_l = 1^k-1β_l Q_l).From this, we can clearly see that the BS can decode K signals sequentially and independently using SIC. 
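This sequential decoding is easy to check numerically: the per-user SIC rates always sum to the joint mutual information, whatever the gains and powers. A small sketch with illustrative values:

import numpy as np

def sic_rates(beta, q_pow):
    # log2(1 + beta_k Q_k / (1 + sum_{l<k} beta_l Q_l)) for k = 1, ..., K
    cum = np.concatenate(([0.0], np.cumsum(beta * q_pow)[:-1]))
    return np.log2(1.0 + beta * q_pow / (1.0 + cum))

beta = np.array([1.0, 0.6, 0.3])    # channel gains beta_k = |g_k|^2
q_pow = np.array([2.0, 1.5, 1.0])   # transmit powers Q_k
assert np.isclose(sic_rates(beta, q_pow).sum(),
                  np.log2(1.0 + np.sum(beta * q_pow)))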
For example, if K = 2,we have(r_t; {u_k,t}) = log_2 ( 1 + β_2 Q_2/1+ β_1 Q_1)+ log_2 ( 1 + β_1 Q_1 ).Thus, if user 2 transmits the coded signals at a rate lower than log_2 ( 1 + β_2 Q_2/1+ β_1 Q_1), the BS can decode the signals and remove them. Then, the BS can decode the coded signalsfrom user 1 at a rate lower than log_2 ( 1 + β_1 Q_1 ). This approach is considered to prove thecapacity of multiple access channels <cit.>, while CDMA and IDMA can be seen as certain implementation examples of uplink NOMA <cit.>. In <cit.>, uplink NOMA is considered for multicarrier systems. For a given decoding order, the power allocation and subcarrier allocation are carried out to maximize the sum rate.In practice, for uplink NOMA, the BS needs to know the CSI and decides the rates and powers according to a certain decoding order. In other words, there could be a lot of signaling overhead for uplink NOMA, which may offset the NOMA gain.§.§ Uncoordinated Uplink NOMA: Random Access with NOMA It is possible to employ uplink NOMA without coordination. To this end, we can consider random access,e.g., ALOHA <cit.>. In ALOHA, if there are K users and each of them transmits a packet with access probability p_a, the throughput becomesT = K p_a (1-p_a)^K-1.For a sufficiently large K, the throughput can be maximized when p_a = 1/K, and the maximum throughput becomes e^-1≈ 0.3679.Suppose that uplink NOMA is employed and it is possible to decode up to two users if two users have two different power levels. Since there is no coordination, each user can choose one of two possible power levels. In this case, the throughput becomesT = K p_a (1 - p_a)^K-1 +1/2K2 p_a^2 (1 - p_a)^K-2,where the second term is the probability that there are two users transmitting packets and they choose different power levels. In Fig. <ref>, we show the throughput of ALOHA and the throughput of NOMA-ALOHA with 2 power levels when there are K = 10 users. It is clear that NOMA can improve the throughput of ALOHA.We can generalize NOMA-ALOHA with multi-channels as in <cit.>. For example, as shown in Fig. <ref>, the both power and frequency domains can be considered to form multiple sub-channels for ALOHA. The throughput is shownin Fig. <ref> when there are K = 200 users, B = 6 subcarriers, and L = 4 different power levels. It is noteworthy that the maximum throughput can be close to B = 6. In other words, NOMA-ALOHA can achieve a near full utilization of channels due to additional subchannels in the power domain. However, the transmission power increases as a user may choose a higher power than the required one without any MAI <cit.>.§ CONCLUDING REMARKS In this paper, we presented an overviewof NOMA as well as some recent results. We considered the power allocation and beamforming for downlink NOMA. We also discussed some key issues of uplink NOMA. There are a number of topics that are not discussed in this paper, although they are important. One of them is user clustering. In general, user clustering is used to simplify NOMA by applying NOMA independently to a clusterconsisting of few users provided that inter-cluster interference is mitigated (possibly using beamforming). Unfortunately, the performance degradation due to user clustering is not known, while its advantage to lower the complexity of downlink NOMA is clear. Another important topic to be studied isoptimal user ordering in NOMA beamforming. Without the space domain (or beamforming), the optimal user ordering seems straightforward (it is usually based on the channel gains). 
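The two throughput expressions above can be compared in a few lines; the grid search below reproduces the K = 10 comparison (a sketch, with illustrative values):

import numpy as np
from math import comb

def aloha(K, p):
    return K * p * (1.0 - p) ** (K - 1)

def noma_aloha(K, p):
    # Adds the event that exactly two users transmit and happen to pick
    # the two different power levels (probability 1/2), as above.
    return aloha(K, p) + 0.5 * comb(K, 2) * p ** 2 * (1.0 - p) ** (K - 2)

K, p_grid = 10, np.linspace(0.01, 0.5, 500)
print(max(aloha(K, p) for p in p_grid))       # ~0.387; tends to 1/e as K grows
print(max(noma_aloha(K, p) for p in p_grid))  # strictly larger with 2 levels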
However, when the space and power domains are to be jointly exploited in NOMA, optimal user ordering is not yet well studied.

As discussed above, although there are various issues to be addressed, we believe that NOMA will be indispensable in future cellular systems.

§ SOLUTION TO (<REF>)

Under the assumption of (<ref>), the minimum power to user 1 to satisfy C_1;1 ≥ R̅_1 is given by

P_1^* = (2^{R̅_1} - 1)/α_1.

While P_1^* is the minimum power to guarantee the target rate for user 1, one may wonder whether a higher power could eventually minimize the sum power. However, this is not the case. To see this, let P_1 > P_1^*. Then, we can show that C_k;k with P_1, k = 2, …, K, is lower than that with P_1^*. For example, we can see that

log_2( 1 + α_2 P_2 / (α_2 P_1 + 1) ) < log_2( 1 + α_2 P_2 / (α_2 P_1^* + 1) ).

Thus, to minimize ∑_k P_k, the optimal power to user 1 has to be P_1^*. For given P_1^*, we can also find the minimum power to user 2 as follows:

P_2^* = min{ P_2 : C_2;2 ≥ R̅_2 }.

After some manipulations, we have

P_2^* = (2^{R̅_2} - 1)( P_1^* + 1/α_2 ).

This is also the minimum power to user 2 that results in the minimum MAI to users k > 2. Consequently, the minimum power for each user k (or the solution to (<ref>)) can be decided as

P_k^* = (2^{R̅_k} - 1)( ∑_l=1^{k-1} P_l^* + 1/α_k ).

 | http://arxiv.org/abs/1706.08805v1 | {
"authors": [
"Jinho Choi"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170627120707",
"title": "NOMA: Principles and Recent Results"
} |
 | http://arxiv.org/abs/1706.08528v1 | {
"authors": [
"Martin Bies",
"Christoph Mayrhofer",
"Timo Weigand"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170626180002",
"title": "Algebraic Cycles and Local Anomalies in F-Theory"
} |
Auto-Encoder Guided GAN for Chinese Calligraphy Synthesis
Pengyuan Lyu1, Xiang Bai1, Cong Yao2, Zhen Zhu1, Tengteng Huang1, Wenyu Liu1
1Huazhong University of Science and Technology, Wuhan, Hubei, China
2Megvii Technology Inc., Beijing, China
{lvpyuan, yaocong2010}@gmail.com; {xbai, jessemel, tengtenghuang, liuwy}@hust.edu.cn
===================================================================================================================================================================================================================================================================================

We consider an isothermal gas flowing through a straight pipe and study the effects of a two-way electronic valve on the flow. The valve is either open or closed according to the pressure gradient and is assumed to act without any time or reaction delay. We first give a notion of coupling solution for the corresponding Riemann problem; then, we highlight and investigate several important properties of the solver, such as coherence, consistence, continuity with respect to the initial data and invariant domains. In particular, the notion of coherence introduced here is new and related to commuting behaviors of valves. We provide explicit conditions on the initial data in order that each of these properties is satisfied. The modeling we propose can be easily extended to a very wide class of valves.

Keywords: systems of conservation laws, gas flow, valve, Riemann problem, coupling conditions.
2010 AMS subject classification: 35L65, 35L67, 76B75.

§ INTRODUCTION

In this paper we consider a model of gas flow through a pipe in the presence of a pressure-regulator valve. We deal with a plug flow, which means that the velocity of the gas is constant on any cross-section of the pipe; all friction effects along the walls of the pipe are dropped. To model the flow away from the valve, we use the following equations for the conservation of mass and momentum, as done for analogous problems in <cit.>:

ρ_t + (ρv)_x = 0,
(ρv)_t + (ρv^2 + p(ρ))_x = 0.

Here t>0 is the time and x∈ℝ is the space position along the pipe. The state variables are ρ, the mass density of the gas, and v, the velocity; we denote by q ≐ ρv the linear momentum. Since variations of temperature are not significant in most real situations of gas flows in pipes, we focus on the isothermal case

p(ρ) ≐ a^2 ρ,

for a constant a>0 that gives the sound speed. We emphasize that the flow can occur in either direction along the pipe; it can be either subsonic or supersonic. Usually, a hydraulic system is completed by compressors <cit.> and valves <cit.>. In this paper we focus on the case of a valve. Indeed, there are several different kinds of valves, but their common feature consists in regulating the flow.
Opening and closing can be partial and may depend either on the flow, or on the pressure, or even on a combination of both. Moreover, a valve may let the gas flow in one direction only or in either. The simplest and most natural problem for system (<ref>) in the presence of a valve is clearly the Riemann problem, where the valve induces a substantial modification in the solutions with respect to the free-flow case. However, proposing a Riemann solver that includes the mechanical action of a valve is only the first step toward a good description of the flow for positive times: some natural properties, both from the physical and the mathematical point of view, have to be investigated. Such properties are coherence, consistence and continuity with respect to the initial data; at the end, if possible, invariant domains should be properly established. This is the main issue of this paper.

In Section <ref> we rigorously define the notions mentioned above; they are stated in the case of system (<ref>) but can be readily extended to any nonstandard coupling Riemann solver. A very short account on the Lax curves of (<ref>) is then given, as well as the definition of the standard Riemann solver for this system. This material is very well known <cit.>, but it is so heavily exploited in the following that any comprehension would be hindered without these details. Section <ref> introduces a Riemann solver when an interface condition, such as that given by a valve, is present. Some general results are then given and a few simple models of valves (see <cit.>, <cit.> or <cit.>) are provided. In this modeling, we do not take into consideration the flow inside the valve but simply its effects. The framework is that of conservation laws with point constraints, which has so far been developed only for vehicular and pedestrian flows, see <cit.> and the references therein. Section <ref> contains our main results, which are collected in Theorem <ref>. They concern the coherence, consistence, continuity with respect to the initial data and invariant domains in a very special case, namely that of a pressure-relief valve. They can be understood as a first step in the direction of proving a general existence theorem for initial data with bounded variation. Some technical proofs are collected in Section <ref>. The final Section <ref> summarizes our conclusions.

§ THE GAS FLOW THROUGH A PIPE

In this introductory section we provide some information about system (<ref>), in particular as far as the geometry of the Lax curves is concerned.

§.§ The system and basic definitions

Under (<ref>), system (<ref>) can be written in the conservative (ρ,q)-coordinates as

ρ_t + q_x = 0,
q_t + (q^2/ρ + a^2 ρ)_x = 0.

We usually refer to the expression (<ref>) of the equations and denote u ≐ (ρ,q). We assume that the gas fills the whole pipe and then u takes values in Ω ≐ {(ρ,q) ∈ ℝ^2 : ρ>0}. A state (ρ,q) is called subsonic if |q/ρ| < a and supersonic if |q/ρ| > a; the half lines q = ± a ρ, ρ>0, are the sonic lines. The Riemann problem for (<ref>) is the Cauchy problem with initial condition

u(0,x) = { u_ℓ if x<0; u_r if x>0 },

u_ℓ, u_r ∈ Ω being given constants. We say that u ∈ C^0((0,∞); L^∞(ℝ;Ω)) is a weak solution of (<ref>),(<ref>) in [0,∞)×ℝ if

∫_0^∞ ∫_ℝ [ ρ φ_t + q φ_x ] dx dt + ρ_ℓ ∫_{-∞}^0 φ(0,x) dx + ρ_r ∫_0^∞ φ(0,x) dx = 0,
∫_0^∞ ∫_ℝ [ q ψ_t + (q^2/ρ^2 + a^2) ρ ψ_x ] dx dt + q_ℓ ∫_{-∞}^0 ψ(0,x) dx + q_r ∫_0^∞ ψ(0,x) dx = 0,

for any test functions φ, ψ ∈ C_c^∞([0,∞)×ℝ; ℝ). We denote by BV(ℝ;Ω) the space of Ω-valued functions with bounded variation.
We can assume that any function in BV(ℝ;Ω) is right continuous by possibly changing the values at countably many points. Let 𝖣 ⊆ Ω^2 and a map ℛ𝒮: 𝖣 → BV(ℝ;Ω).

* We say that ℛ𝒮 is a Riemann solver for (<ref>) if for any (u_ℓ,u_r) ∈ 𝖣 the map (t,x) ↦ ℛ𝒮[u_ℓ,u_r](x/t) is a weak solution to (<ref>),(<ref>) in [0,∞)×ℝ.
* A Riemann solver ℛ𝒮 is coherent at (u_ℓ,u_r) ∈ 𝖣 if u ≐ ℛ𝒮[u_ℓ,u_r] satisfies for any ξ_o ∈ ℝ:
ch.0 (u(ξ_o^-), u(ξ_o^+)) ∈ 𝖣;
ch.1 ℛ𝒮[u(ξ_o^-), u(ξ_o^+)](ξ) = { u(ξ_o^-) if ξ<ξ_o; u(ξ_o^+) if ξ≥ξ_o }.
The coherence domain 𝖢𝖧 ⊆ 𝖣 of ℛ𝒮 is the set of all pairs (u_ℓ,u_r) ∈ 𝖣 where ℛ𝒮 is coherent.
* A Riemann solver ℛ𝒮 is consistent at (u_ℓ,u_r) ∈ 𝖣 if u ≐ ℛ𝒮[u_ℓ,u_r] satisfies for any ξ_o ∈ ℝ:
cn.0 (u_ℓ, u(ξ_o)), (u(ξ_o), u_r) ∈ 𝖣;
cn.1 ℛ𝒮[u_ℓ, u(ξ_o)](ξ) = { u(ξ) if ξ<ξ_o; u(ξ_o) if ξ≥ξ_o }, ℛ𝒮[u(ξ_o), u_r](ξ) = { u(ξ_o) if ξ<ξ_o; u(ξ) if ξ≥ξ_o };
cn.2 u(ξ) = { ℛ𝒮[u_ℓ, u(ξ_o)](ξ) if ξ<ξ_o; ℛ𝒮[u(ξ_o), u_r](ξ) if ξ≥ξ_o }.
The consistence domain 𝖢𝖭 ⊆ 𝖣 of ℛ𝒮 is the set of all pairs (u_ℓ,u_r) ∈ 𝖣 where ℛ𝒮 is consistent.
* A Riemann solver ℛ𝒮 is L¹-continuous at (u_ℓ,u_r) ∈ 𝖣 if for any ξ_1, ξ_2 ∈ ℝ we have

lim_{(u_ℓ^ε, u_r^ε) → (u_ℓ, u_r), (u_ℓ^ε, u_r^ε) ∈ 𝖣} ∫_{ξ_1}^{ξ_2} | ℛ𝒮[u_ℓ^ε, u_r^ε](ξ) - ℛ𝒮[u_ℓ, u_r](ξ) | dξ = 0.

The L¹-continuity domain 𝖫 ⊆ 𝖣 of ℛ𝒮 is the set of all (u_ℓ,u_r) ∈ 𝖣 where ℛ𝒮 is L¹-continuous.
* A Riemann solver ℛ𝒮 admits ℐ ⊆ Ω as invariant domain if ℐ^2 ⊆ 𝖣 and ℛ𝒮[ℐ,ℐ](ℝ) ⊆ ℐ.

Some comments on these definitions are in order. Roughly speaking, for any coherent initial datum, the ordered pair of the traces of the solution belongs to 𝖣 by (<ref>) and it is a fixed point of ℛ𝒮 by (<ref>). The coherence of a Riemann solver ℛ𝒮 is a minimal requirement to develop a numerical scheme with a time discretization based on ℛ𝒮; otherwise, it may happen that the numerical solution of a Riemann problem greatly differs from the analytic one. An analogous condition has been introduced in <cit.> at the junctions of a network. While coherence is easily seen to be satisfied in the case of a Lax Riemann solver, see Proposition <ref>, it plays a fundamental role in the presence of a valve, as we comment later on. Coherence is, in a sense, a local condition (w.r.t. ξ). On the contrary, the consistence of a Riemann solver is rather a global property: "cutting" or "pasting" Riemann solutions (see (<ref>) and (<ref>), respectively) does not change the structure of the partial or total Riemann solutions. We recall that the consistence of a Riemann solver is a necessary condition for the well-posedness in L^1 of the Cauchy problem for (<ref>). Differently from the classical theory for invariant domains <cit.>, here an invariant domain does not necessarily have a smooth boundary and may be disconnected or not closed.

If a Riemann solver ℛ𝒮 is either coherent or consistent at (u_0,u_0) ∈ 𝖣, then ℛ𝒮[u_0,u_0] ≡ u_0. Fix (u_0,u_0) ∈ 𝖣 and let u ≐ ℛ𝒮[u_0,u_0]. By the finite speed of propagation, there exists ξ_o ∈ ℝ such that u ≡ u_0 in (-∞,ξ_o], whence u(ξ_o^±) = u_0. If ℛ𝒮 is either coherent or consistent at (u_0,u_0), then we have ℛ𝒮[u_0,u_0] ≡ u_0 by (<ref>) or by the first condition in (<ref>), respectively.

§.§ The Lax curves

The eigenvalues of (<ref>) are λ_1(u) ≐ q/ρ - a and λ_2(u) ≐ q/ρ + a. System (<ref>) is strictly hyperbolic in Ω and both characteristic fields are genuinely nonlinear. Hence, weak solutions can contain both rarefaction and shock waves (called below waves), but not contact discontinuities. Any discontinuity curve x = γ(t) of a weak solution u of (<ref>) satisfies the Rankine-Hugoniot conditions

(ρ_+ - ρ_-) γ̇ = q_+ - q_-,
(q_+ - q_-) γ̇ = (q_+^2/ρ_+ + a^2 ρ_+) - (q_-^2/ρ_- + a^2 ρ_-),

where u_±(t) ≐ u(t,γ(t)^±) are the traces of u, see <cit.>.
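As a minimal numeric sanity check (with arbitrary illustrative states, not from the paper), one can verify that a state on the 1-shock branch 𝒮_1 introduced in the next subsection satisfies both Rankine-Hugoniot conditions with the speed s_1(ρ,u_ℓ) = v_ℓ - a√(ρ/ρ_ℓ) given there:

```python
import numpy as np

a = 1.0
rho_l, v_l = 1.0, 0.3   # left state (arbitrary), with q_l = rho_l * v_l
rho = 2.5               # right density on the 1-shock branch (rho > rho_l)

q_l = rho_l * v_l
# right flow on S_1 through u_l, and the candidate shock speed s_1
q = rho * (v_l - a * (np.sqrt(rho / rho_l) - np.sqrt(rho_l / rho)))
s = v_l - a * np.sqrt(rho / rho_l)

flux = lambda r, qq: qq * qq / r + a * a * r   # momentum flux q^2/rho + a^2 rho
print((rho - rho_l) * s - (q - q_l))                        # first RH condition, ~0
print((q - q_l) * s - (flux(rho, q) - flux(rho_l, q_l)))    # second RH condition, ~0
```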
Riemann invariants of (<ref>) are w(u) ≐ q/(aρ) + log(ρ) and z(u) ≐ q/(aρ) - log(ρ). We introduce new coordinates (μ,ν) that make simpler the study of the Lax curves:

μ = log(ρ), ν = q/(aρ) ⇔ ρ = exp(μ), q = aν exp(μ),

or

μ = (w-z)/2, ν = (w+z)/2 ⇔ w = ν+μ, z = ν-μ.

We prefer the (μ,ν)-coordinates with respect to those induced by the Riemann invariants because we often deal with the locus q = q_m, for some q_m ∈ ℝ; moreover, comparing densities (ρ_1<ρ_2 ⇔ μ_1<μ_2 ⇔ w_1-z_1 < w_2-z_2) is easier. At last, in <cit.>, the wave-front tracking algorithm for (<ref>) relies on the bound of the total variation of the solutions in the μ-coordinate. We point out that in the (μ,ν)-coordinates the set Ω becomes ℝ^2 and the sonic lines are ν = ±1. In the sequel it is important to compare the flow corresponding to distinct states; we notice that q=0 if and only if ν=0 and q_1<q_2 if and only if ν_1 exp(μ_1) < ν_2 exp(μ_2), see <ref>.

We define 𝒮_i, ℛ_i : (0,∞)×Ω → ℝ, i ∈ {1,2}, by

𝒮_1(ρ,u_*) ≐ ρ[ q_*/ρ_* - a (√(ρ/ρ_*) - √(ρ_*/ρ)) ], ℛ_1(ρ,u_*) ≐ ρ[ q_*/ρ_* - a log(ρ/ρ_*) ],
𝒮_2(ρ,u_*) ≐ ρ[ q_*/ρ_* + a (√(ρ/ρ_*) - √(ρ_*/ρ)) ], ℛ_2(ρ,u_*) ≐ ρ[ q_*/ρ_* + a log(ρ/ρ_*) ].

Then we define ℱℒ_i, ℬℒ_i : (0,∞)×Ω → ℝ, i ∈ {1,2}, by

ℱℒ_1(ρ,u_*) ≐ { ℛ_1(ρ,u_*) if ρ ∈ (0,ρ_*]; 𝒮_1(ρ,u_*) if ρ ∈ (ρ_*,∞) },
ℱℒ_2(ρ,u_*) ≐ { 𝒮_2(ρ,u_*) if ρ ∈ (0,ρ_*); ℛ_2(ρ,u_*) if ρ ∈ [ρ_*,∞) },
ℬℒ_1(ρ,u_*) ≐ { 𝒮_1(ρ,u_*) if ρ ∈ (0,ρ_*); ℛ_1(ρ,u_*) if ρ ∈ [ρ_*,∞) },
ℬℒ_2(ρ,u_*) ≐ { ℛ_2(ρ,u_*) if ρ ∈ (0,ρ_*]; 𝒮_2(ρ,u_*) if ρ ∈ (ρ_*,∞) }.

For any fixed u_* ∈ Ω, the forward ℱℒ_i^u_* and backward ℬℒ_i^u_* Lax curves of the i-th family through u_* in the (ρ,q)-coordinates are the graphs of the functions ℱℒ_i(·,u_*) and ℬℒ_i(·,u_*), respectively, see <ref>. Analogously, the shock 𝒮_i^u_* and rarefaction ℛ_i^u_* curves through u_* in the (ρ,q)-coordinates are the graphs of the functions 𝒮_i(·,u_*) and ℛ_i(·,u_*), see <ref>.

In the (μ,ν)-coordinates the curves 𝒮_i^u_* and ℛ_i^u_* are, with a slight abuse of notations, the graphs of the functions

𝒮_1(μ,u_*) ≐ ν_* + Ξ(μ-μ_*), ℛ_1(μ,u_*) ≐ ν_* + μ_* - μ,
𝒮_2(μ,u_*) ≐ ν_* + Ξ(μ_*-μ), ℛ_2(μ,u_*) ≐ ν_* - μ_* + μ,

while ℱℒ_i^u_* and ℬℒ_i^u_* are the graphs of the functions

ℱℒ_1(μ,u_*) ≐ { ℛ_1(μ,u_*) if μ≤μ_*; 𝒮_1(μ,u_*) if μ>μ_* },
ℱℒ_2(μ,u_*) ≐ { 𝒮_2(μ,u_*) if μ<μ_*; ℛ_2(μ,u_*) if μ≥μ_* },
ℬℒ_1(μ,u_*) ≐ { 𝒮_1(μ,u_*) if μ<μ_*; ℛ_1(μ,u_*) if μ≥μ_* },
ℬℒ_2(μ,u_*) ≐ { ℛ_2(μ,u_*) if μ≤μ_*; 𝒮_2(μ,u_*) if μ>μ_* }.

Above we denoted

Ξ(ζ) ≐ exp(-ζ/2) - exp(ζ/2) = -2 sinh(ζ/2),

see <ref>. We observe that

Ξ^{-1}(ξ) = 2 ln( 2/(√(ξ^2+4)+ξ) ).

Obviously both Ξ and Ξ^{-1} are odd functions; for any ζ ∈ ℝ∖{0} we have Ξ'(ζ)<0, Ξ'(0) = -1, ζ Ξ''(ζ)<0, Ξ''(0) = 0, Ξ'''(ζ) < 0. Now we collect the basic properties of the sets 𝒮_i^u_*, ℛ_i^u_*; the proof is deferred to Subsection <ref>. Let u_*, u_** ∈ Ω be distinct and i ∈ {1,2}.
Then we have:
* ℛ_i^u_* ∩ ℛ_i^u_** ≠ ∅ if and only if ℛ_i^u_* = ℛ_i^u_**;
* 𝒮_i^u_* ∩ 𝒮_i^u_** has at most two elements;
* if u_** ∈ 𝒮_i^u_* ∖ {u_*}, then 𝒮_i^u_** ∩ 𝒮_i^u_* = {u_**, u_*};
* (𝒮_i)_ρ(0^+,u_*) = (-1)^{i+1}∞ and (ℛ_i)_ρ(0^+,u_*) = (-1)^{i+1}∞;
* ℛ_1^u_* and 𝒮_1^u_* are strictly concave, while ℛ_2^u_* and 𝒮_2^u_* are strictly convex;
* (𝒮_i)_ρ(ρ_*,u_*) = (ℛ_i)_ρ(ρ_*,u_*) = λ_i(u_*) and (𝒮_i)_ρρ(ρ_*,u_*) = (ℛ_i)_ρρ(ρ_*,u_*) = (-1)^i a/ρ_*;
* 𝒮_2(ρ,u_*) < ℛ_2(ρ,u_*) < ℛ_1(ρ,u_*) < 𝒮_1(ρ,u_*) if ρ < ρ_* and 𝒮_1(ρ,u_*) < ℛ_1(ρ,u_*) < ℛ_2(ρ,u_*) < 𝒮_2(ρ,u_*) if ρ > ρ_*.

For later use we introduce the following notation, see Figure <ref>:
* u̅(u_*) is the element of ℱℒ_1^u_* with the maximum q-coordinate;
* u̲(u_*) is the element of ℬℒ_2^u_* with the minimum q-coordinate;
* ũ(u_ℓ,u_r) is the (unique) element of ℱℒ_1^u_ℓ ∩ ℬℒ_2^u_r;
* û(q_m,u_*), for any q_m ≤ q̅(u_*), is the intersection of ℱℒ_1^u_* and q=q_m with the largest ρ-coordinate;
* ǔ(q_m,u_*), for any q_m ≥ q̲(u_*), is the intersection of ℬℒ_2^u_* and q=q_m with the largest ρ-coordinate.

We introduce analogously p̃ ≐ p ∘ ρ̃ and so on. Notice that for any u_ℓ, u_r ∈ Ω

q̅(u_ℓ) > 0 and q̲(u_r) < 0;

moreover, for v_ℓ ≐ q_ℓ/ρ_ℓ and v_r ≐ q_r/ρ_r,

v_ℓ < a ⇒ v̅(u_ℓ) = a and v_r > -a ⇒ v̲(u_r) = -a.

In general q̃(u_ℓ,u_r) can be negative even if both q_ℓ and q_r are strictly positive.

§.§ The Riemann solver ℛ𝒮_p

We denote by ℛ𝒮_p: Ω^2 → BV(ℝ;Ω) the Lax Riemann solver <cit.>. We recall that ξ ↦ ℛ𝒮_p[u_ℓ,u_r](ξ) is the juxtaposition of a wave of the first family ξ ↦ ℛ𝒮_p[u_ℓ, ũ(u_ℓ,u_r)](ξ), taking values in ℱℒ_1^u_ℓ, and a wave of the second family ξ ↦ ℛ𝒮_p[ũ(u_ℓ,u_r), u_r](ξ), taking values in ℱℒ_2^{ũ(u_ℓ,u_r)}. Notice that ℛ𝒮_p is well defined because for any u_ℓ, u_r ∈ Ω the curves ℱℒ_1^u_ℓ and ℬℒ_2^u_r always meet, and precisely at ũ(u_ℓ,u_r).

The right states u ∈ Ω that can be connected to a left state u_ℓ by a wave of the first (second) family belong to ℱℒ_1^u_ℓ (resp., ℱℒ_2^u_ℓ), see Figure <ref>. More precisely, the states u that can be connected to u_ℓ by a shock wave of the first, resp. second, family belong to {u ∈ 𝒮_1^u_ℓ : ρ>ρ_ℓ}, resp. {u ∈ 𝒮_2^u_ℓ : ρ<ρ_ℓ}, and the corresponding speeds of propagation are

s_1(ρ,u_ℓ) ≐ v_ℓ - a√(ρ/ρ_ℓ), s_2(ρ,u_ℓ) ≐ v_ℓ + a√(ρ/ρ_ℓ),

while the states u that can be connected to u_ℓ by a rarefaction wave of the first, resp. second, family belong to {u ∈ ℛ_1^u_ℓ : ρ≤ρ_ℓ}, resp. {u ∈ ℛ_2^u_ℓ : ρ≥ρ_ℓ}.

The left states u that can be connected to a right state u_r by a wave of the first (second) family belong to ℬℒ_1^u_r (resp., ℬℒ_2^u_r), see Figure <ref>. The states u that can be connected to u_r by a shock wave of the first, resp. second, family belong to {u ∈ 𝒮_1^u_r : ρ<ρ_r}, resp. {u ∈ 𝒮_2^u_r : ρ>ρ_r}, and the corresponding speeds of propagation are respectively s_1(ρ,u_r) and s_2(ρ,u_r), while the states u that can be connected to u_r by a rarefaction wave of the first, resp. second, family belong to {u ∈ ℛ_1^u_r : ρ≥ρ_r}, resp. {u ∈ ℛ_2^u_r : ρ≤ρ_r}.

In the following, we write "i-shock (u_-,u_+)" in place of "shock of the i-th family from u_- to u_+", and so on. By the jump conditions (<ref>),(<ref>), the speed of propagation of a shock between two distinct states u_* and u_** is the slope in the (ρ,q)-plane of the line connecting u_* with u_**, namely σ(u_*,u_**) ≐ (q_*-q_**)/(ρ_*-ρ_**); in the (x,t)-plane an i-rarefaction between two distinct states u_* and u_** is contained in the cone λ_i(u_*) ≤ x/t ≤ λ_i(u_**). We now collect the main properties of ℛ𝒮_p; the proofs are deferred to Subsection <ref>. The Riemann solver ℛ𝒮_p is coherent, consistent and L¹-continuous in Ω^2.
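Although no explicit expression for ũ(u_ℓ,u_r) is available, it is easy to compute numerically: in the (μ,ν)-coordinates, ℱℒ_1^u_ℓ is strictly decreasing and ℬℒ_2^u_r is strictly increasing, so their unique intersection can be found by bisection. The following is a minimal sketch (states and bracket width are arbitrary assumptions, not from the paper); it also checks the closed form of Ξ^{-1} stated above:

```python
import numpy as np

Xi_inv = lambda x: 2.0 * np.log(2.0 / (np.sqrt(x * x + 4.0) + x))
assert abs(Xi_inv(-2.0 * np.sinh(0.3)) - 0.6) < 1e-12   # Xi_inv inverts Xi

def FL1(mu, mu_s, nu_s):
    # forward 1-Lax curve through (mu_s, nu_s): rarefaction left, shock right
    d = mu - mu_s
    return nu_s + np.where(d <= 0.0, -d, -2.0 * np.sinh(d / 2.0))

def BL2(mu, mu_s, nu_s):
    # backward 2-Lax curve: rarefaction left, shock right
    d = mu - mu_s
    return nu_s + np.where(d <= 0.0, d, 2.0 * np.sinh(d / 2.0))

def u_tilde(ul, ur, tol=1e-12):
    """Intersection of FL1 through ul with BL2 through ur, in (mu, nu)-coordinates."""
    (ml, nl), (mr, nr) = ul, ur
    g = lambda m: FL1(m, ml, nl) - BL2(m, mr, nr)   # strictly decreasing in mu
    lo, hi = -50.0, 50.0                            # crude but generous bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    m = 0.5 * (lo + hi)
    return m, float(FL1(m, ml, nl))

print(u_tilde((0.0, 0.5), (0.0, -0.5)))   # symmetric data: nu_tilde = 0
```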
It is well known <cit.> that for any u_0 ∈ Ω, both the singleton {u_0} and the convex set

ℐ_u_0 ≐ { u ∈ Ω : z(u) ≥ z(u_0), w(u) ≤ w(u_0) },

see <ref>, are invariant domains of ℛ𝒮_p. We observe that ℐ_u_0 can be written as

ℐ_u_0 = { u ∈ Ω : ℛ_2(μ,u_0) ≤ ν ≤ ℛ_1(μ,u_0) } = { u ∈ Ω : ℛ_2(ρ,u_0) ≤ q ≤ ℛ_1(ρ,u_0) }.

Whenever it is clear from the context, we denote

u_p ≐ ℛ𝒮_p[u_ℓ,u_r] and u_p^± ≐ u_p(0^±).

Recall that (t,x) ↦ u_p(x/t) is indeed an entropy solution to (<ref>),(<ref>).

§ THE GAS FLOW THROUGH VALVES

§.§ The model and basic definitions

In this section we consider the case of two pipes connected by a valve at x=0. System (<ref>) models the flow away from the valve, while at x=0 we impose conditions depending on the valve and involving the traces of the solution. More precisely, we impose no conditions at x=0 if the valve is open; in this case, the valve has no influence on the flow and system (<ref>) describes the flow in the whole of ℝ. If the valve is active, then some conditions at x=0 have to be taken into account: the mass is conserved through the valve but in general the linear momentum is not, as a result of the force exerted by the valve. For this reason we extend the notion of weak solution given in Definition <ref> to take into account the possible presence of stationary under-compressive discontinuities <cit.> at x=0, which satisfy the first Rankine-Hugoniot condition (<ref>) but not necessarily the second one (<ref>).

We say that u ∈ C^0((0,∞); L^∞(ℝ;Ω)) is a coupling solution of the Riemann problem (<ref>),(<ref>) if
* the first Rankine-Hugoniot condition (<ref>) is satisfied;
* for any t>0, the functions

(t,x) ↦ { u(t,x) if x<0; u(t,0^-) if x≥0 }, (t,x) ↦ { u(t,0^+) if x<0; u(t,x) if x≥0 },

are respectively weak solutions to the Riemann problems for (<ref>) with initial data

u(0,x) = { u_ℓ if x<0; u(t,0^-) if x≥0 }, u(0,x) = { u(t,0^+) if x<0; u_r if x≥0 }.

A coupling solution u is a weak solution of (<ref>) for x ≠ 0 and satisfies q(t,0^-) = q(t,0^+) by <ref>. In particular, the second Rankine-Hugoniot condition (<ref>) is never verified if u has an under-compressive discontinuity; in this case u is not a weak solution of (<ref>). We are now ready to extend the definition of Riemann solver to coupling solutions.

Let 𝖣 ⊆ Ω^2 and ℛ𝒮: 𝖣 → BV(ℝ;Ω). We say that ℛ𝒮 is a coupling Riemann solver for (<ref>) if for any (u_ℓ,u_r) ∈ 𝖣 the map (t,x) ↦ ℛ𝒮[u_ℓ,u_r](x/t) is a coupling solution to (<ref>),(<ref>) in (0,∞)×ℝ.

The definitions of consistence, L¹-continuity and invariant domains given in Definition <ref> naturally apply to coupling Riemann solvers. On the other hand, the extension of coherence needs some comments. In fact, a coupling Riemann solver ℛ𝒮 is applied only at the valve position, i.e. at ξ=0, while in ξ ≠ 0 one applies ℛ𝒮_p. Since ℛ𝒮_p is coherent in Ω^2, see Proposition <ref>, the coherence of ℛ𝒮 reduces to require (<ref>),(<ref>) at ξ_o=0. As a consequence, the coherence of ℛ𝒮 reduces to the following definition.

Let 𝖣 ⊆ Ω^2. A coupling Riemann solver ℛ𝒮: 𝖣 → BV(ℝ;Ω) is coherent at (u_ℓ,u_r) ∈ 𝖣 if u ≐ ℛ𝒮[u_ℓ,u_r] satisfies
ch_v.0 (u(0^-), u(0^+)) ∈ 𝖣,
ch_v.1 ℛ𝒮[u(0^-), u(0^+)](ξ) = { u(0^-) if ξ<0; u(0^+) if ξ≥0 }.

It is worth noticing that, from the physical point of view, the coherence of a coupling Riemann solver avoids loop behaviors, such as an intermittent and rapid switching on and off (commuting) of the valve. Moreover, Proposition <ref> does not hold for coupling Riemann solvers: it may happen that a coupling Riemann solver ℛ𝒮 is coherent at (u_0,u_0) ∈ 𝖣 but ℛ𝒮[u_0,u_0] ≢ u_0. A coupling Riemann solver ℛ𝒮_v: 𝖣_v → BV(ℝ;Ω), 𝖣_v ⊆ Ω^2, can be constructed by exploiting ℛ𝒮_p as follows.
We define

ℛ𝒮_v[u_ℓ,u_r] ≐ ℛ𝒮_p[u_ℓ,u_r] if the valve is open,
ℛ𝒮_v[u_ℓ,u_r](ξ) ≐ { ℛ𝒮_p[u_ℓ,u_m^-](ξ) if ξ<0; ℛ𝒮_p[u_m^+,u_r](ξ) if ξ≥0 } if the valve is active.

Above, u_m^± ∈ Ω satisfy the conditions imposed at x=0 by the valve, namely,

u_m^- = u_m^-(u_ℓ,u_r) ≐ û(q_m,u_ℓ), u_m^+ = u_m^+(u_ℓ,u_r) ≐ ǔ(q_m,u_r), q_m = q_m(u_ℓ,u_r) ∈ 𝒬^-_u_ℓ ∩ 𝒬^+_u_r,

where

𝒬^-_u_ℓ ≐ { (-∞, q̅(u_ℓ)] if v_ℓ ≤ a; (-∞, q_ℓ] if v_ℓ > a }, 𝒬^+_u_r ≐ { [q̲(u_r), ∞) if v_r ≥ -a; [q_r, ∞) if v_r < -a }.

By (<ref>) we have 0 ∈ 𝒬^-_u_ℓ ∩ 𝒬^+_u_r ≠ ∅; by (<ref>) it follows ρ_m^- ≥ ρ̅(u_ℓ), ρ_m^+ ≥ ρ̲(u_r), q_m^- = q_m^+ = q_m. The main rationale of condition (<ref>) lies in the fact that according to this choice ξ ↦ ℛ𝒮_p[u_ℓ,u_m^-](ξ) ∈ ℱℒ_1^u_ℓ and ξ ↦ ℛ𝒮_p[u_m^+,u_r](ξ) ∈ ℱℒ_2^{u_m^+} are single waves, with negative and positive speed, respectively. As a consequence, ℛ𝒮_v[u_ℓ,u_r](0^±) = u_m^±. Moreover, if ℛ𝒮_v[u_ℓ,u_r] contains a stationary under-compressive discontinuity at x=0, then u_m^± satisfy the first Rankine-Hugoniot condition (<ref>).

In conclusion, a valve is characterized by prescribing both when it is either open or active and the choice of the flow q_m through the valve when it is active. Once we specify these conditions, then the gas flow through the valve can be modeled by ℛ𝒮_v. For notational simplicity, whenever it is clear from the context, we let

u_v ≐ ℛ𝒮_v[u_ℓ,u_r] and u_v^± ≐ u_v(0^±).

For a fixed ℛ𝒮_v, we denote by 𝖮 and 𝖠 the sets of Riemann data such that ℛ𝒮_v leaves the valve open or active, respectively. The domain of definition 𝖣_v ≐ 𝖮 ∪ 𝖠 of ℛ𝒮_v does not necessarily coincide with the whole Ω^2; in this case, we understand Riemann data in Ω^2 ∖ 𝖣_v as not being in the operating range of the valve. Moreover, it may happen that there exists (u_ℓ,u_r) ∈ 𝖠 such that u_p ≡ u_v. This happens, for instance, if (u_ℓ,u_r) ∈ 𝖠 is such that ũ(u_ℓ,u_r) = û(0,u_ℓ) = ǔ(0,u_r) and q_m=0 in (<ref>): the valve is closed but has no influence on the flow through x=0. This motivates the introduction of the sets

𝖠_𝖭 ≐ {(u_ℓ,u_r) ∈ 𝖠 : u_v ≡ u_p} = {(u_ℓ,u_r) ∈ 𝖠 : û(q_m,u_ℓ) = ũ(u_ℓ,u_r) = ǔ(q_m,u_r)}, 𝖠_𝖨 = 𝖠 ∖ 𝖠_𝖭,

of Riemann data for which the valve is active and either influences or not the gas flow, respectively. We also introduce

𝖠_𝖨^∁ ≐ 𝖣_v ∖ 𝖠_𝖨 = 𝖮 ∪ 𝖠_𝖭 = {(u_ℓ,u_r) ∈ 𝖣_v : u_v ≡ u_p}.

Assume that ℛ𝒮_v is coherent at (u_ℓ,u_r).
* If (u_ℓ,u_r) ∈ 𝖠_𝖨^∁, then (u_v^-, u_v^+) ∈ 𝖠_𝖨^∁.
* If (u_ℓ,u_r) ∈ 𝖠_𝖨 and û(q_m,û(q_m,u_ℓ)) = û(q_m,u_ℓ), then (u_v^-, u_v^+) ∈ 𝖠_𝖨.

(i) Let (u_ℓ,u_r) ∈ 𝖠_𝖨^∁ and assume (u_v^-, u_v^+) ∈ 𝖠_𝖨 by contradiction. Since u_v ≡ u_p, we have u_v^± = u_p^±; hence from (<ref>) and (<ref>),(<ref>) it follows

{ u_p^- if ξ<0; u_p^+ if ξ≥0 } = ℛ𝒮_v[u_v^-,u_v^+](ξ) = { ℛ𝒮_p[u_p^-, û(q_m,u_p^-)](ξ) if ξ<0; ℛ𝒮_p[ǔ(q_m,u_p^+), u_p^+](ξ) if ξ≥0 },

with û(q_m,u_p^-) ≠ ǔ(q_m,u_p^+). The above equation implies that û(q_m,u_p^-) = u_p^- and ǔ(q_m,u_p^+) = u_p^+, whence u_p^- ≠ u_p^+. Thus, u_p has a stationary shock (u_p^-, u_p^+), which can be either a 1-shock with u_p^- = u_ℓ, u_p^+ = û(q_m,u_ℓ) = ǔ(q_m,u_r) and q_m>0, or a 2-shock with u_p^+ = u_r, u_p^- = ǔ(q_m,u_r) = û(q_m,u_ℓ) and q_m<0. In the former case we have ǔ(q_m,u_p^+) = ǔ(q_m,ǔ(q_m,u_r)) = ǔ(q_m,u_r) because q_m>0, whence ǔ(q_m,u_p^+) = ǔ(q_m,u_r) = û(q_m,u_ℓ) = û(q_m,u_p^-), a contradiction. The latter case is dealt with analogously.

(ii) Let (u_ℓ,u_r) ∈ 𝖠_𝖨 be such that û(q_m,û(q_m,u_ℓ)) = û(q_m,u_ℓ); assume (u_v^-, u_v^+) ∈ 𝖠_𝖨^∁ by contradiction. Since (u_ℓ,u_r) ∈ 𝖠_𝖨, we have u_v^- = û(q_m,u_ℓ) ≠ ǔ(q_m,u_r) = u_v^+ and q_v^- = q_m = q_v^+.
By (<ref>) we have[u_ v^-,u_ v^+](ξ) = [u_ v^-,u_ v^+](ξ) = u_ v^- if ξ<0,u_ v^+ if ξ≥0.Hence, either [u_ v^-,u_ v^+] is a stationary 1-shock with u_ v^+ = û(q_m,u_ v^-) and q_m>0, or is stationary 2-shock with u_ v^- = ǔ(q_m,u_ v^+) and q_m<0. In the former case ǔ(q_m,u_r) = u_ v^+ = û(q_m,u_ v^-) = û(q_m,û(q_m,u_ℓ))=û(q_m,u_ℓ), a contradiction. The latter case is dealt analogously.The coupling Riemann solveris consistent at (u_ℓ, u_r) ∈𝖣_ v if and only if:cn_ v.0 (u_ℓ,u_ v(ξ_o)), (u_ v(ξ_o),u_r) ∈𝖣_ v for any ξ_o∈;cn_ v.1 (u_ℓ,u_ v(ξ_o)) ∈𝖠_𝖨^∁ and û(q_m,u_ v(ξ_o)) = û(q_m,u_ℓ),for any ξ_o<0,(u_ v(ξ_o),u_r) ∈𝖠_𝖨^∁ and ǔ(q_m,u_ v(ξ_o))=ǔ(q_m,u_r),for any ξ_o≥0,if (u_ℓ,u_r) ∈𝖠_𝖨,(u_ℓ,u_ v(ξ_o)) ∈𝖠_𝖨^∁, for any ξ_o∈, (u_ v(ξ_o),u_r) ∈𝖠_𝖨^∁, for any ξ_o∈,if (u_ℓ,u_r) ∈𝖠_𝖨^∁. Clearly (<ref>) is equivalent to (<ref>). Assume that (u_ℓ,u_r) ∈𝖠_𝖨. If ξ_o<0 (the case ξ_o≥0 is dealt analogously), then u_ v(ξ_o) = [u_ℓ,u_m^-](ξ_o) and by the consistence ofwe haveu_ v(ξ) if ξ<ξ_ou_ v(ξ_o) if ξ≥ξ_o=[u_ℓ,u_m^-](ξ) if ξ<ξ_o [u_ℓ,u_m^-](ξ_o) if ξ≥ξ_o=[u_ℓ,u_ v(ξ_o)](ξ),u_ v(ξ_o) if ξ<ξ_ou_ v(ξ) if ξ≥ξ_o=[u_ℓ,u_m^-](ξ_o) if ξ<ξ_o [u_ℓ,u_m^-](ξ) if ξ∈[ξ_o,0) [u_m^+,u_r](ξ) if ξ≥0=[[u_ℓ,u_m^-](ξ_o),u_m^-](ξ) if ξ<0, [u_m^+,u_r](ξ) if ξ≥0.Therefore (<ref>) reduces to (u_ℓ,u_ v(ξ_o)) ∈𝖠_𝖨^∁,[u_ v(ξ_o),u_r](ξ) =[u_ v(ξ_o),u_m^-](ξ) if ξ<0, [u_m^+,u_r](ξ) if ξ≥0.We observe that the above condition also implies (<ref>); indeed, by the consistence ofwe have[u_ℓ,u_ v(ξ_o)](ξ) if ξ < ξ_o [u_ v(ξ_o),u_r](ξ)if ξ≥ξ_o=[u_ℓ,u_ v(ξ_o)](ξ) if ξ < ξ_o [u_ v(ξ_o),u_m^-](ξ) if ξ∈[ξ_o,0) [u_m^+,u_r](ξ) if ξ≥0= [u_ℓ,u_m^-](ξ) if ξ<0 [u_m^+,u_r](ξ) if ξ≥0=[u_ℓ,u_r](ξ).To prove that (<ref>) is in fact equivalent to (<ref>) it is sufficient to observe that it writes(u_ℓ,u_ v(ξ_o)) ∈𝖠_𝖨^∁,û(q_m,u_ v(ξ_o)) = u_m^- = û(q_m,u_ℓ),(u_ v(ξ_o),u_r) ∈𝖠_𝖨,and that the second condition above implies the last one because by assumption (u_ℓ,u_r) ∈𝖠_𝖨.Assume now that (u_ℓ,u_r) ∈𝖠_𝖨^∁. In this case u_ v≡ u_ p and (<ref>) reduces to require (<ref>) by the consistence of . At last, (<ref>) also implies (<ref>) by the consistence of .If (u_0,u_0) ∈𝖠_𝖨, thenis not consistent at any point of ({u_0}×Ω) ∪ (Ω×{u_0}). Let (u_0,u_0) ∈𝖠_𝖨 and fix u_ℓ, u_r∈Ω. By the finite speed of propagation of the waves there exists ξ_o>0 sufficiently big such that(u_0,[u_0,u_r](-ξ_o)) = ([u_ℓ,u_0](ξ_o),u_0) = (u_0,u_0) ∈𝖠_𝖨.By Proposition <ref> it is easy then to conclude thatis consistent neither at (u_0,u_r) nor at (u_ℓ,u_0). If two pipes are connected by a one-way valve, the flow at x=0 occurs in a single direction only, say positive; in this case we consider coupling Riemann solvers of the form (<ref>),(<ref>) with q_m≥0. Such a valve is also called clack valve, non-return valve or check valve. §.§ Examples of valves We conclude this section by considering some examples of pressure-relief valves. Consider a two-way electronic valve which is either open or closed, see <ref>.More precisely, the valve is equipped with a control unit and two sensors, one on each side of the valve seat. Depending on data (u_ℓ,u_r) received from the sensors, the control unit closes the valve if the jump of the pressure across x=0 corresponding to a closed valve, namely |p̌(0,u_r) - p̂(0,u_ℓ)|, is less or equal than a fixed constant M>0; otherwise, the control unit opens the valve. 
Such a valve is modeled by the coupling Riemann solver ℛ𝒮_v defined for any (u_ℓ,u_r) ∈ Ω^2 as follows:
pr.1 if |p̌(0,u_r) - p̂(0,u_ℓ)| ≤ M, then the valve is active (closed) and ℛ𝒮_v[u_ℓ,u_r] has the form (<ref>),(<ref>) with q_m=0;
pr.2 if |p̌(0,u_r) - p̂(0,u_ℓ)| > M, then the valve is open.
This valve is studied in detail in Section <ref>.

Consider a two-way spring-loaded valve, which can be either open or closed, see <ref>, and let M>0 be the "resistance" of the spring. Then the valve is closed (active) if the jump of the pressure across x=0, namely |p(ρ_r) - p(ρ_ℓ)|, is less than or equal to M; otherwise it is open. In this case ℛ𝒮_v is defined for any (u_ℓ,u_r) ∈ Ω^2 as follows:
* if |p(ρ_r) - p(ρ_ℓ)| ≤ M, then the valve is active (closed) and ℛ𝒮_v[u_ℓ,u_r] has the form (<ref>),(<ref>) with q_m=0;
* if |p(ρ_r) - p(ρ_ℓ)| > M, then the valve is open.

To each valve considered in the previous examples corresponds a one-way valve, see <ref> and <ref>.

Consider a one-way valve such that <cit.>

p(t,0^+) = p(t,0^-) - a^2 k q(t,0)^2 / p(t,0^-),

where k is a positive constant. The above condition substitutes the second Rankine-Hugoniot condition (<ref>) at x=0. Then ℛ𝒮_v has the form given in (<ref>),(<ref>) with u_m^± satisfying (<ref>), namely u_m^- = û(q_m,u_ℓ), u_m^+ = ǔ(q_m,u_r) and q_m satisfying

p̌(q_m,u_r) = p̂(q_m,u_ℓ) - a^2 k q_m^2 / p̂(q_m,u_ℓ), q_m ∈ 𝒬^-_u_ℓ ∩ 𝒬^+_u_r.

§ A CASE STUDY: TWO-WAY ELECTRONIC PRESSURE VALVE

In this section we apply the theory developed in the previous sections to model the two-way electronic pressure valve, see Example <ref>. Such a valve is either open or closed (active); this corresponds to considering a Riemann solver ℛ𝒮_v of the form (<ref>)–(<ref>) with q_m=0. We recall that 0 ∈ 𝒬^-_u_ℓ ∩ 𝒬^+_u_r for any u_ℓ, u_r ∈ Ω. We denote for brevity

û(·) ≐ û(0,·), ǔ(·) ≐ ǔ(0,·), ũ ≐ ũ(u_ℓ,u_r), û_ℓ ≐ û(u_ℓ), ǔ_ℓ ≐ ǔ(u_ℓ), u̅_ℓ ≐ u̅(u_ℓ),

and so on, whenever it is clear from the context that û, ǔ, ũ and so on are not functions. We have

ρ̂_ℓ = { (ρ_ℓ/(4a^2)) [√(v_ℓ^2 + 4a^2) + v_ℓ]^2 if v_ℓ>0; ρ_ℓ exp[v_ℓ/a] if v_ℓ≤0 }, μ̂_ℓ = { μ_ℓ - Ξ^{-1}(ν_ℓ) if ν_ℓ>0; μ_ℓ + ν_ℓ if ν_ℓ≤0 },
ρ̌_r = { ρ_r exp[-v_r/a] if v_r>0; (ρ_r/(4a^2)) [√(v_r^2 + 4a^2) - v_r]^2 if v_r≤0 }, μ̌_r = { μ_r - ν_r if ν_r>0; μ_r + Ξ^{-1}(ν_r) if ν_r≤0 }.

We finally observe that û and ǔ are idempotent because q_m=0, that is

û∘û ≡ û and ǔ∘ǔ ≡ ǔ.

By (<ref>),(<ref>) we have 𝖣_v = Ω^2 and

𝖠 = { (u_ℓ,u_r) ∈ Ω^2 : |p̌_r - p̂_ℓ| ≤ M }, 𝖮 = { (u_ℓ,u_r) ∈ Ω^2 : |p̌_r - p̂_ℓ| > M },
𝖠_𝖭 = { (u_ℓ,u_r) ∈ 𝖠 : û_ℓ = ũ = ǔ_r } = { (u_ℓ,u_r) ∈ Ω^2 : q̃ = 0 }, 𝖠_𝖨 = 𝖠 ∖ 𝖠_𝖭.

We collect in the following theorem our main results; we defer the proof to Subsection <ref>. We have the following results:
* The coherence domain of ℛ𝒮_v is 𝖢𝖧 = 𝖠 ∪ 𝖮_𝖮, where, see <ref>,

𝖮_𝖮 ≐ { (u_ℓ,u_r) ∈ 𝖮 : (u_p^-,u_p^+) ∈ 𝖮 }.

* The consistence domain of ℛ𝒮_v is 𝖢𝖭 = 𝖢𝖭_1 ∪ 𝖢𝖭_2 = 𝖢𝖭_𝖮 ∪ 𝖢𝖭_𝖠, where

𝖢𝖭_1 ≐ { (u_ℓ,u_r) ∈ 𝖠_𝖨 : (u_ℓ,u_v(ξ_o^-)), (u_v(ξ_o^+),u_r) ∈ 𝖠_𝖨^∁ for any ξ_o^-<0≤ξ_o^+ },
𝖢𝖭_2 ≐ { (u_ℓ,u_r) ∈ 𝖠_𝖨^∁ : (u_ℓ,u_v(ξ_o)), (u_v(ξ_o),u_r) ∈ 𝖠_𝖨^∁ for any ξ_o ∈ ℝ },
𝖢𝖭_𝖮 ≐ { (u_ℓ,u_r) ∈ 𝖮 : (u_ℓ,u_ℓ), (u_r,u_r), (u_ℓ,ũ), (ũ,u_r) ∈ 𝖠_𝖨^∁ and q_p ≠ 0 along any rarefaction },
𝖢𝖭_𝖠 ≐ { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ ≥ 0 ≥ q_r, (u_ℓ,u_ℓ) ∈ 𝖠_𝖨^∁, (u_r,u_r) ∈ 𝖠_𝖨^∁ }.

* The L¹-continuity domain of ℛ𝒮_v is 𝖫 = { (u_ℓ,u_r) ∈ Ω^2 : |p̌_r - p̂_ℓ| ≠ M }.
* If u_0 ∈ Ω is such that q_0=0, then ℐ_u_0 defined by (<ref>) is an invariant domain of ℛ𝒮_v.

Since the sets 𝖮_𝖮 and 𝖮_𝖠 ≐ 𝖮 ∖ 𝖮_𝖮 = { (u_ℓ,u_r) ∈ 𝖮 : (u_p^-,u_p^+) ∈ 𝖠 } play an important role in the coherence of ℛ𝒮_v, we provide their characterization in the following proposition; we defer the proof to Subsection <ref>. We introduce, see <ref>,

Φ(ν) ≐ a^2 e^ν [ e^{Ξ^{-1}(ν)} - e^ν ], ν ∈ ℝ.
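As a quick numeric sanity check of how Φ enters the analysis, the sketch below verifies the identity e^{μ+ν} Φ(-ν) = p̂(u) - p̌(u) for ν > 0, which is used repeatedly in the proofs of Section 5; the state values are arbitrary illustrative choices, and the check relies on Ξ^{-1} being odd:

```python
import numpy as np

a = 1.0
Xi_inv = lambda x: 2.0 * np.log(2.0 / (np.sqrt(x * x + 4.0) + x))
Phi = lambda nu: a * a * np.exp(nu) * (np.exp(Xi_inv(nu)) - np.exp(nu))

def mu_hat(mu, nu):
    # mu-coordinate of u_hat(u), per the closed forms above
    return mu - Xi_inv(nu) if nu > 0 else mu + nu

def mu_check(mu, nu):
    # mu-coordinate of u_check(u)
    return mu - nu if nu > 0 else mu + Xi_inv(nu)

mu, nu = 0.4, 0.7   # arbitrary state with nu > 0
lhs = a * a * (np.exp(mu_hat(mu, nu)) - np.exp(mu_check(mu, nu)))   # p_hat - p_check
rhs = np.exp(mu + nu) * Phi(-nu)
print(lhs, rhs)     # identical up to rounding
```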
We have 𝖮_𝖮 = ⋃_{i=1}^4 𝖮_𝖮^i and 𝖮_𝖠 = ⋃_{j=1}^2 𝖮_𝖠^j, where

𝖮_𝖮^1 ≐ { (u_ℓ,u_r) ∈ 𝖮 : ν̃ > max{0,ν_ℓ}, e^{μ_ℓ+ν_ℓ} Φ(-max{1,ν_ℓ}·min{1,ν̃}) > M },
𝖮_𝖮^2 ≐ { (u_ℓ,u_r) ∈ 𝖮 : ν̃ < min{0,ν_r}, e^{μ_r-ν_r} Φ(-min{-1,ν_r}·max{-1,ν̃}) > M },
𝖮_𝖮^3 ≐ { (u_ℓ,u_r) ∈ 𝖮 : 0 < ν̃ ≤ ν_ℓ },
𝖮_𝖮^4 ≐ { (u_ℓ,u_r) ∈ 𝖮 : ν_r ≤ ν̃ < 0 },

and

𝖮_𝖠^1 ≐ { (u_ℓ,u_r) ∈ 𝖮 : ν̃ > max{0,ν_ℓ}, e^{μ_ℓ+ν_ℓ} Φ(-max{1,ν_ℓ}·min{1,ν̃}) ≤ M },
𝖮_𝖠^2 ≐ { (u_ℓ,u_r) ∈ 𝖮 : ν̃ < min{0,ν_r}, e^{μ_r-ν_r} Φ(-min{-1,ν_r}·max{-1,ν̃}) ≤ M }.

The subsets 𝖮_𝖮^i, i ∈ {1,2,3,4}, and 𝖮_𝖠^j, j ∈ {1,2}, are mutually disjoint. In general it is difficult to characterize 𝖢𝖧 in a simple way because an explicit expression for ũ is not available. We introduce in the next corollary a subset of 𝖢𝖧 that partially answers this issue. We have

𝖢𝖧' ≐ { (u_ℓ,u_r) ∈ Ω^2 : ν_r < 0 < ν_ℓ, min{ e^{μ_ℓ+ν_ℓ} Φ(-ν_ℓ), e^{μ_r-ν_r} Φ(ν_r) } > M } ⊆ 𝖢𝖧,

see <ref>. As a consequence, 𝖢𝖧' ∩ 𝖮 ⊆ 𝖮_𝖮.

Clearly 𝖢𝖧' = 𝖢𝖧'_1 ∩ 𝖢𝖧'_2, where

𝖢𝖧'_1 ≐ { (u_ℓ,u_r) ∈ Ω^2 : ν_ℓ > 0, e^{μ_ℓ+ν_ℓ} Φ(-ν_ℓ) > M },
𝖢𝖧'_2 ≐ { (u_ℓ,u_r) ∈ Ω^2 : ν_r < 0, e^{μ_r-ν_r} Φ(ν_r) > M }.

We claim that 𝖢𝖧'_j ∩ 𝖮_𝖠^j = ∅, j ∈ {1,2}. To prove the case j=1 (the other case is analogous), let (u_ℓ,u_r) ∈ 𝖢𝖧'_1 ∩ 𝖮_𝖠^1; then ν̃ > ν_ℓ = max{0,ν_ℓ} and so

M e^{-μ_ℓ-ν_ℓ} ≥ Φ(-max{1,ν_ℓ}·min{1,ν̃}) = { Φ(-ν_ℓ) if ν̃>ν_ℓ≥1; Φ(-1) if ν̃≥1>ν_ℓ; Φ(-ν̃) if 1>ν̃>ν_ℓ } ≥ Φ(-ν_ℓ) > M e^{-μ_ℓ-ν_ℓ},

see <ref>, a contradiction. As a consequence 𝖢𝖧' ∩ 𝖮_𝖠 = ∅ because 𝖮_𝖠 = 𝖮_𝖠^1 ∪ 𝖮_𝖠^2 by Proposition <ref>, whence 𝖢𝖧' ⊆ 𝖢𝖧 by Theorem <ref>, <ref>.

In the following corollary we prove that any consistent point is also coherent. We have 𝖢𝖭 ⊂ 𝖢𝖧. It is sufficient to prove that 𝖢𝖭_𝖮 ⊂ 𝖮_𝖮 because by Theorem <ref>, <ref>,<ref>, we have

𝖢𝖭 ∩ 𝖠 = 𝖢𝖭_𝖠 ⊂ 𝖢𝖧 ∩ 𝖠 = 𝖠, 𝖢𝖧 ∩ 𝖮 = 𝖮_𝖮, 𝖢𝖭 ∩ 𝖮 = 𝖢𝖭_𝖮.

Let (u_ℓ,u_r) ∈ 𝖢𝖭_𝖮. Clearly u_v ≡ u_p and q̃ ≠ 0. We have to prove that (u_p^-,u_p^+) ∈ 𝖮, namely |p̂(u_p^-) - p̌(u_p^+)| > M.
* Assume that u_p^± = u_ℓ; the case u_p^± = u_r is analogous. It is sufficient to prove that q_ℓ ≠ 0 because we know that (u_ℓ,u_ℓ) ∈ 𝖠_𝖨^∁ = 𝖮 ∪ 𝖠_𝖭. If by contradiction q_ℓ=0, then ũ = u_ℓ because u_p^± = u_ℓ. As a consequence q̃=0, namely (u_ℓ,u_r) ∈ 𝖠_𝖭, a contradiction.
* Assume that u_p^± = ũ. Consider the case q̃>0; the case q̃<0 is analogous. Since q_p ≠ 0 along any rarefaction, we have q_ℓ>0.
  * If q_ℓ ≥ q̃, then (u_ℓ,ũ) ∈ 𝖮 because q̃(u_ℓ,ũ) = q̃ ≠ 0; hence p̂(u_p^-) - p̌(u_p^+) = p̂(ũ) - p̌(ũ) > p̂_ℓ - p̌(ũ) > M.
  * If q_ℓ < q̃, then (u_ℓ,u_ℓ) ∈ 𝖮 because q_ℓ>0; hence by (<ref>),(<ref>)

p̂(u_p^-) - p̌(u_p^+) = p̂(ũ) - p̌(ũ) = a^2 (e^{μ̂(ũ)} - e^{μ̌(ũ)}) = e^{μ_ℓ+ν_ℓ} Φ(-ν̃) > e^{μ_ℓ+ν_ℓ} Φ(-ν_ℓ) = p̂_ℓ - p̌_ℓ > M,

because ν_ℓ < ν̃ ≤ 1 and μ̃+ν̃ = μ_ℓ+ν_ℓ.
* Assume that u_p^± = u̅_ℓ; the case u_p^± = u̲_r is analogous. Since q_p ≠ 0 along any rarefaction, we have q_ℓ>0. Therefore (u_ℓ,u_ℓ) ∈ 𝖮 and by (<ref>),(<ref>)

p̂(u_p^-) - p̌(u_p^+) = p̂(u̅_ℓ) - p̌(u̅_ℓ) = a^2 (e^{μ̂(u̅_ℓ)} - e^{μ̌(u̅_ℓ)}) = e^{μ_ℓ+ν_ℓ} Φ(-1) > e^{μ_ℓ+ν_ℓ} Φ(-ν_ℓ) = p̂_ℓ - p̌_ℓ > M,

because ν_ℓ < ν̅_ℓ = 1 and μ̅_ℓ+1 = μ_ℓ+ν_ℓ.
* Assume that u_p^- = u_ℓ and u_p^+ = ũ; the case u_p^- = ũ and u_p^+ = u_r is analogous. Since u_p cannot perform a stationary shock between states with zero flow by (<ref>), we have that q_ℓ = q̃ > 0. Therefore (u_p^-,u_p^+) = (u_ℓ,ũ) ∈ 𝖮 because q̃(u_ℓ,ũ) = q̃ ≠ 0.

We now deal with invariant domains. We first state a preliminary result. Let Δ ≐ {(u,u) : u ∈ Ω}. Then Δ∩𝖢𝖧 = Δ and Δ∩𝖢𝖭 = Δ∩𝖠_𝖨^∁. By Theorem <ref>, <ref>,<ref>, it is sufficient to prove that

Δ∩𝖮_𝖮 = Δ∩𝖮, Δ∩𝖢𝖭_𝖮 = Δ∩𝖮, Δ∩𝖢𝖭_𝖠 = Δ∩𝖠_𝖭 = {(u,u) ∈ Ω^2 : q=0}.

If (u,u) ∈ 𝖮, then ℛ𝒮_v[u,u] ≡ ℛ𝒮_p[u,u] ≡ u and clearly (u,u) ∈ 𝖮_𝖮 ∩ 𝖢𝖭_𝖮; hence Δ∩𝖮 ⊆ Δ∩𝖮_𝖮∩𝖢𝖭_𝖮. Clearly 𝖮_𝖮 ∪ 𝖢𝖭_𝖮 ⊂ 𝖮, which implies Δ∩𝖮 ⊇ Δ∩(𝖮_𝖮∪𝖢𝖭_𝖮).
As a consequence Δ∩𝖮_𝖮 = Δ∩𝖮 = Δ∩𝖢𝖭_𝖮 and the first two claims hold true. If (u,u) ∈ 𝖢𝖭_𝖠, then (u,u) ∈ 𝖠∩𝖠_𝖨^∁ = 𝖠_𝖭; hence Δ∩𝖢𝖭_𝖠 ⊆ Δ∩𝖠_𝖭. Conversely, if (u,u) ∈ 𝖠_𝖭, then ℛ𝒮_v[u,u] ≡ ℛ𝒮_p[u,u] ≡ u, q=0 and clearly (u,u) ∈ 𝖢𝖭_𝖠; hence Δ∩𝖠_𝖭 ⊆ Δ∩𝖢𝖭_𝖠.

Let ℐ be an invariant domain of ℛ𝒮_v. If there exist u_ℓ, u_r ∈ ℐ such that u_v has a rarefaction taking the value q=0, then ℐ^2 ⊈ 𝖢𝖭. By Proposition <ref> we have that ℛ𝒮_v is consistent at no (u_0,u_0) ∈ 𝖠_𝖨. Hence, it is sufficient to prove that there exists u_0 ∈ ℐ such that (u_0,u_0) ∈ 𝖠_𝖨. By assumption there exist ξ_- < ξ_+ and ξ_o ∈ [ξ_-,ξ_+] such that u_v performs a rarefaction in the cone ξ_- ≤ x/t ≤ ξ_+ and q_v(ξ_o)=0. By a continuity argument there exists a sufficiently small ε ≠ 0 such that ξ_o^ε ≐ ξ_o+ε ∈ [ξ_-,ξ_+] and 0 < |p̌(u_v(ξ_o^ε)) - p̂(u_v(ξ_o^ε))| < M, namely (u_v(ξ_o^ε), u_v(ξ_o^ε)) ∈ 𝖠_𝖨 ∩ ℐ^2.

Let u ∈ Ω. There exists an invariant domain ℐ of ℛ𝒮_v such that {(u,u)} ⊆ ℐ^2 ⊆ 𝖢𝖭 if and only if (u,u) ∈ 𝖠_𝖨^∁. If (u,u) ∈ 𝖠_𝖨^∁, then ℛ𝒮_v[u,u](ℝ) = ℛ𝒮_p[u,u](ℝ) = {u} and the minimal invariant domain containing {u} is ℐ = {u}; by Proposition <ref> we have ℐ^2 ⊂ Δ∩𝖠_𝖨^∁ ⊂ 𝖢𝖭. On the other hand, if (u,u) ∈ 𝖠_𝖨, then it is sufficient to observe that (u,u) ∉ 𝖢𝖭 by Proposition <ref>.

Let u ∈ Ω and ℐ be the minimal invariant domain containing {u}.
* If (u,u) ∈ 𝖠_𝖨^∁, then ℐ = {u} and ℐ^2 ⊂ 𝖢𝖭 ⊂ 𝖢𝖧.
* If (u,u) ∈ 𝖠_𝖨, then ℐ = ℛ_2([ρ̌(u), ρ],u) ∪ ([ρ̌(u), ρ̂(u)]×{0}), ℐ^2 ⊂ 𝖢𝖧 and ℐ^2 ⊈ 𝖢𝖭.

∙ If (u,u) ∈ 𝖠_𝖨^∁, then ℛ𝒮_v[u,u] = ℛ𝒮_p[u,u] ≡ u, hence ℐ = {u}; moreover by Corollary <ref> and Corollary <ref> we have ℐ^2 ⊂ 𝖢𝖭 ⊂ 𝖢𝖧.

∙ Let (u,u) ∈ 𝖠_𝖨 and 𝒟 ≐ ℛ_2([ρ̌(u), ρ],u) ∪ ([ρ̌(u), ρ̂(u)]×{0}). We first prove that ℐ = 𝒟. Since (u,u) ∉ 𝖠_𝖭, we have q ≠ 0. Assume q>0; the case q<0 is similar. We have ℐ ⊇ 𝒟 because

ℛ𝒮_v[u,u](ℝ) = {û(u)} ∪ ℛ_2([ρ̌(u), ρ],u), ℛ𝒮_v[ℛ_2([ρ̌(u), ρ],u), ǔ(u)](ℝ) = 𝒟.

It remains to prove that 𝒟 is an invariant domain. This follows by observing that 𝒟^2 ⊂ 𝖠 and that for any u_ℓ, u_r ∈ 𝒟

u_v(ℝ) = { {u_ℓ, û_ℓ} ∪ ℛ_2([ρ̌(u), ρ_r],u) if u_ℓ, u_r ∈ ℛ_2([ρ̌(u), ρ],u);
{u_ℓ, u_r} if u_ℓ, u_r ∈ [ρ̌(u), ρ̂(u)]×{0};
{u_ℓ, û_ℓ, u_r} if (u_ℓ,u_r) ∈ ℛ_2([ρ̌(u), ρ],u) × ([ρ̌(u), ρ̂(u)]×{0});
{u_ℓ} ∪ ℛ_2([ρ̌(u), ρ_r],u) if (u_ℓ,u_r) ∈ ([ρ̌(u), ρ̂(u)]×{0}) × ℛ_2([ρ̌(u), ρ],u) },

whence ℛ𝒮_v[𝒟^2](ℝ) ⊆ 𝒟. By Theorem <ref>, <ref>, we have ℐ^2 ⊂ 𝖠 ⊂ 𝖢𝖧. By Proposition <ref> we have (u,u) ∈ ℐ^2 ∖ 𝖢𝖭.

We now extend the previous corollary by constructing the minimal invariant domain containing two elements of Ω in two particular cases. Fix u_0, u_1 ∈ Ω and let u_2 ≐ û(u_1) and u_3 ≐ ǔ(u_1). Assume that

ν_0 = 0 < ν_1, μ_1+ν_1 < μ_0, (u_1,u_1) ∈ 𝖠_𝖨,

and let ℐ be the minimal invariant domain containing {u_0,u_1}. Then ℐ^2 ⊈ 𝖢𝖭 and moreover:
* if p_0-p_3 ≤ M, then ℐ = {u_0} ∪ ℛ_2([ρ_3, ρ_1],u_1) ∪ ([ρ_3, ρ_2]×{0}) and ℐ^2 ⊂ 𝖢𝖧;
* if p_2-p_3 = M = p_0-p_2, then ℐ = ℐ_u_0 and ℐ^2 ⊈ 𝖢𝖧.

We notice that by assumption we have μ_2 < μ_1+ν_1 < μ_0. By Proposition <ref> we deduce (u_1,u_1) ∈ ℐ^2 ∖ 𝖢𝖭. Clearly, see <ref>, (u_0,u_0), (u_2,u_2), (u_3,u_3) ∈ 𝖠_𝖭, ρ_3 < ρ_1 < ρ_2; moreover ρ_0 > ρ_2 and 0 < p_2-p_3 ≤ M in both the considered cases. By proceeding as in the proof of Corollary <ref> we have

ℐ ⊇ 𝒟 ≐ ℛ_2([ρ_3, ρ_1],u_1) ∪ ([ρ_3, ρ_2]×{0}).

∙ If ρ_2 < ρ_0 and p_0-p_3 ≤ M, then ℐ = 𝒟 ∪ {u_0}. This follows by observing that (𝒟∪{u_0})^2 ⊂ 𝖠 and that for any u_d ∈ 𝒟

ℛ𝒮_v[u_d,u_0](ℝ) = {u_d, û_d, u_0}, ℛ𝒮_v[u_0,u_d](ℝ) = { {u_d, u_0} if q_d=0; {u_0} ∪ ℛ_2([ρ_3, ρ_d],u_1) if q_d>0 },

are subsets of 𝒟∪{u_0}. By Theorem <ref>, <ref>, we have that ℐ^2 ⊂ 𝖠 ⊂ 𝖢𝖧.

∙ Assume ρ_2 < ρ_0 and p_2-p_3 = M = p_0-p_2. We claim that

ℐ = ℐ_u_0,

where ℐ_u_0 is defined by (<ref>). Differently from the previous case, we have (u_0,u_1), (u_0,u_3), (u_3,u_0) ∈ 𝖮; notice that (u_1,u_0), (u_2,u_0), (u_0,u_2) ∈ 𝖠_𝖨.
As a consequence

ℛ𝒮_v[u_0,u_3](ℝ) = ℛ_1([ρ_4, ρ_0],u_0) ∪ {u_3}, ℛ𝒮_v[u_3,u_0](ℝ) = {u_3} ∪ ℛ_2([ρ_5, ρ_0],u_5),

where u_4 ∈ ℱℒ_1^u_0 ∩ ℬℒ_2^u_3 and u_5 ∈ ℱℒ_1^u_3 ∩ ℬℒ_2^u_0. Observe that μ_4=μ_5, ν_4=-ν_5>0 and û(u_5) = ǔ(u_4) ≐ u_6. As a consequence (u_5,u_4) ∈ 𝖠_𝖭 and

ℛ𝒮_v[u_5,u_4](ℝ) = ℛ_1([ρ_6, ρ_5],u_5) ∪ ℛ_2([ρ_6, ρ_4],u_6).

Clearly (u_0,u_6), (u_6,u_0) ∈ 𝖮 and

ℛ𝒮_v[u_0,u_6](ℝ) = ℛ_1([ρ_7, ρ_0],u_0) ∪ {u_6}, ℛ𝒮_v[u_6,u_0](ℝ) = {u_6} ∪ ℛ_2([ρ_8, ρ_0],u_8),

where u_7 ∈ ℱℒ_1^u_0 ∩ ℬℒ_2^u_6 and u_8 ∈ ℱℒ_1^u_6 ∩ ℬℒ_2^u_0. Observe that μ_7=μ_8, ν_7=-ν_8>0 and û(u_8) = ǔ(u_7) ≐ u_9. By iterating this procedure, we obtain that

ℛ_1((0, ρ_0],u_0) ∪ ℛ_2((0, ρ_0],u_0) ⊂ ℐ.

Finally, by letting u_ℓ ∈ ℛ_2((0, ρ_0),u_0) and u_r ∈ ℛ_1((0, ρ_0),u_0) be such that μ_ℓ=μ_r and ν_ℓ=-ν_r<0, we have that (u_ℓ,u_r) ∈ 𝖠_𝖭 because û_ℓ = ǔ_r, hence

u_v(ℝ) = ℛ_1([ρ̂_ℓ, ρ_ℓ],u_ℓ) ∪ ℛ_2([ρ̂_ℓ, ρ_r],û_ℓ).

It is therefore clear that ℐ_u_0 ⊆ ℐ. By Theorem <ref>, <ref>, we have that ℐ_u_0 is an invariant domain, hence ℐ_u_0 = ℐ.

We claim that ℐ_u_0^2 ⊄ 𝖢𝖧, namely ℐ_u_0^2 ∩ 𝖮_𝖠 ≠ ∅. Since Φ(-1) < a^2 and by assumption p_0 > 2M, there exist u_ℓ, u_r ∈ ℐ_u_0 such that p_ℓ-p_r>M, ν_ℓ=0=ν_r and M < p_ℓ ≤ a^2 M/Φ(-1) < 2M. Then (u_ℓ,u_r) ∈ 𝖮, ν̃ > 0 = ν_ℓ and (u_ℓ,u_r) ∈ ℐ_u_0^2 ∩ 𝖮_𝖠^1 because

e^{μ_ℓ+ν_ℓ} Φ(-max{1,ν_ℓ}·min{1,ν̃}) = ρ_ℓ Φ(-min{1,ν̃}) ≤ ρ_ℓ Φ(-1) ≤ M.

§ TECHNICAL PROOFS

§.§ Properties of ℛ𝒮_p

We refer to the (μ,ν)-coordinates. Property <ref> is obvious because ℛ_i^u_* and ℛ_i^u_** are straight lines with the same slope. Property <ref> follows by reducing to a second order equation in e^{ζ/2}, see <ref>. To prove <ref>, we notice that 𝒮_1^u_* ∩ 𝒮_1^u_** has at most two elements by <ref>; moreover

u_** ∈ 𝒮_1^u_* ⇔ ν_** = Ξ(μ_**-μ_*) + ν_* ⇔ ν_* = Ξ(μ_*-μ_**) + ν_** ⇔ u_* ∈ 𝒮_1^u_**,

and then 𝒮_1^u_* ∩ 𝒮_1^u_** = {u_*, u_**}. To prove <ref>–<ref> it is sufficient to observe that

(𝒮_i)_ρ(ρ,u_*) = q_*/ρ_* + (-1)^i (a/2)(3√(ρ/ρ_*) - √(ρ_*/ρ)), (𝒮_i)_ρρ(ρ,u_*) = (-1)^i (a/(4ρ))(3√(ρ/ρ_*) + √(ρ_*/ρ)),
(ℛ_i)_ρ(ρ,u_*) = q_*/ρ_* + (-1)^i a (1+ln(ρ/ρ_*)), (ℛ_i)_ρρ(ρ,u_*) = (-1)^i a/ρ.

At last, <ref> is clear in the (μ,ν)-coordinates, see <ref>.

Conditions (<ref>) and (<ref>) are satisfied because 𝖣_p = Ω^2. About coherence, we prove (<ref>). Fix (u_ℓ,u_r) ∈ Ω^2 and ξ_o ∈ ℝ. If u_p(ξ_o^-) = u_p(ξ_o^+), then ℛ𝒮_p[u_p(ξ_o^-), u_p(ξ_o^+)] ≡ u_p(ξ_o^±) since ℛ𝒮_p[u,u] ≡ u for any u ∈ Ω, and it is easy to conclude. If u_p(ξ_o^-) ≠ u_p(ξ_o^+), namely, u_p has a shock at ξ_o, then either u_p(ξ_o^-) = u_ℓ ≠ u_p(ξ_o^+) = ũ or u_p(ξ_o^-) = ũ ≠ u_p(ξ_o^+) = u_r. In the former case ρ_ℓ < ρ̃, in the latter ρ_r < ρ̃. It is then easy to conclude by observing that ũ(u_ℓ,ũ) = ũ = ũ(ũ,u_r). About consistence, it is sufficient to observe that for any ξ_o ∈ ℝ we have

ũ(u_ℓ, u_p(ξ_o)) = { u_p(ξ_o) if u_p(ξ_o) ∈ ℛ𝒮_p[u_ℓ,ũ](ℝ); ũ if u_p(ξ_o) ∈ ℛ𝒮_p[ũ,u_r](ℝ) },
ũ(u_p(ξ_o), u_r) = { ũ if u_p(ξ_o) ∈ ℛ𝒮_p[u_ℓ,ũ](ℝ); u_p(ξ_o) if u_p(ξ_o) ∈ ℛ𝒮_p[ũ,u_r](ℝ) },

and that u_p is the juxtaposition of ℛ𝒮_p[u_ℓ,ũ] and ℛ𝒮_p[ũ,u_r]. At last, the L¹-continuity in Ω^2 directly follows from the continuity of ũ, σ, λ_1 and λ_2.

§.§ Proof of Theorem <ref>

We split the proof of Theorem <ref> into the following propositions. The coherence domain of ℛ𝒮_v is 𝖢𝖧 = 𝖠 ∪ 𝖮_𝖮. Condition (<ref>) holds true in Ω^2 because 𝖣_v = Ω^2; therefore, we are left to consider (<ref>). First, we prove that if (u_ℓ,u_r) ∈ 𝖠 ∪ 𝖮_𝖮, then (<ref>) holds. Assume that (u_ℓ,u_r) ∈ 𝖠. In this case u_v^- = û_ℓ and u_v^+ = ǔ_r. By (<ref>) we have û(u_v^-) = u_v^- = û_ℓ and ǔ(u_v^+) = u_v^+ = ǔ_r; therefore (u_v^-,u_v^+) ∈ 𝖠, whence (<ref>) holds true.
If (u_ℓ,u_r) ∈ 𝖮_𝖮, then it is sufficient to exploit the coherence of ℛ𝒮_p. Second, we prove that if (u_ℓ,u_r) ∈ 𝖮_𝖠 then (<ref>) fails. Since (u_ℓ,u_r) ∈ 𝖮, then u_v ≡ u_p, whence u_v^± = u_p^±; since (u_p^-,u_p^+) ∈ 𝖠, then by (<ref>) it follows

ℛ𝒮_v[u_p^-,u_p^+](ξ) = { ℛ𝒮_p[u_p^-, û(u_p^-)](ξ) if ξ<0; ℛ𝒮_p[ǔ(u_p^+), u_p^+](ξ) if ξ≥0 }.

Now, if by contradiction (<ref>) holds, then we have

ℛ𝒮_p[u_p^-, û(u_p^-)] ≡ u_p^- in (-∞,0), ℛ𝒮_p[ǔ(u_p^+), u_p^+] ≡ u_p^+ in [0,∞).

It follows that u_p^- = û(u_p^-) and u_p^+ = ǔ(u_p^+); then q_p(0^±)=0, whence u_p^- = u_p^+ because u_p cannot perform a stationary shock between states with zero flow by (<ref>). Then it is not difficult to see that ũ = u_p(0), whence q̃ = 0 and therefore û_ℓ = ũ = ǔ_r. This contradicts the assumption (u_ℓ,u_r) ∈ 𝖮, that is |p̌_r - p̂_ℓ| > M.

The consistence domain of ℛ𝒮_v is 𝖢𝖭 = 𝖢𝖭_1 ∪ 𝖢𝖭_2. Since 𝖣_v = Ω^2, we have that 𝖢𝖭 = 𝖢𝖭_1' ∪ 𝖢𝖭_2', where

𝖢𝖭_1' ≐ { (u_ℓ,u_r) ∈ 𝖠_𝖨 : (u_ℓ,u_r) satisfies (<ref>) }, 𝖢𝖭_2' = { (u_ℓ,u_r) ∈ 𝖠_𝖨^∁ : (u_ℓ,u_r) satisfies (<ref>) }.

By Proposition <ref> we have

𝖢𝖭_1' ≐ { (u_ℓ,u_r) ∈ 𝖠_𝖨 : (u_ℓ,u_v(ξ_o)) ∈ 𝖠_𝖨^∁ and û(u_v(ξ_o)) = û_ℓ for any ξ_o<0, (u_v(ξ_o),u_r) ∈ 𝖠_𝖨^∁ and ǔ(u_v(ξ_o)) = ǔ_r for any ξ_o≥0 },
𝖢𝖭_2' = { (u_ℓ,u_r) ∈ 𝖠_𝖨^∁ : (u_ℓ,u_v(ξ_o)), (u_v(ξ_o),u_r) ∈ 𝖠_𝖨^∁ for any ξ_o ∈ ℝ }.

Clearly 𝖢𝖭_2' = 𝖢𝖭_2 and 𝖢𝖭_1' ⊆ 𝖢𝖭_1. Hence, we are left to prove that 𝖢𝖭_1' ⊇ 𝖢𝖭_1. Let (u_ℓ,u_r) ∈ 𝖢𝖭_1. If ξ_o<0 (the case ξ_o≥0 is analogous), then

q_ℓ ≥ 0 ⇒ û_ℓ ∈ 𝒮_1^u_ℓ ⇒ u_v(ξ_o) ∈ {u_ℓ, û_ℓ},
q_ℓ < 0 ⇒ û_ℓ ∈ ℛ_1^u_ℓ ⇒ q_v(ξ_o) ∈ [q_ℓ, 0], ℛ_1^{u_v(ξ_o)} = ℛ_1^u_ℓ.

As a consequence û(u_v(ξ_o)) = û_ℓ, therefore (u_ℓ,u_r) ∈ 𝖢𝖭_1'.

The consistence domain of ℛ𝒮_v is 𝖢𝖭 = 𝖢𝖭_𝖮 ∪ 𝖢𝖭_𝖠. It is sufficient to prove that 𝖢𝖭∩𝖠 = 𝖢𝖭_𝖠 and 𝖢𝖭∩𝖮 = 𝖢𝖭_𝖮. In the following we use Proposition <ref> several times without any explicit mention.

We first prove that 𝖢𝖭∩𝖠 = 𝖢𝖭_𝖠. Clearly 𝖢𝖭_𝖠 = ⋃_{i=1}^4 𝖢𝖭_𝖠^i, where

𝖢𝖭_𝖠^1 ≐ { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ > 0 > q_r, (u_ℓ,u_ℓ) ∈ 𝖮, (u_r,u_r) ∈ 𝖮 } = { (u_ℓ,u_r) ∈ 𝖠 : min{p̂_ℓ-p̌_ℓ, p̌_r-p̂_r} > M },
𝖢𝖭_𝖠^2 ≐ { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ = 0 > q_r, (u_r,u_r) ∈ 𝖮 } = { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ=0, p̌_r-p̂_r > M },
𝖢𝖭_𝖠^3 ≐ { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ > 0 = q_r, (u_ℓ,u_ℓ) ∈ 𝖮 } = { (u_ℓ,u_r) ∈ 𝖠 : p̂_ℓ-p̌_ℓ > M, q_r=0 },
𝖢𝖭_𝖠^4 ≐ { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ = 0 = q_r }.

𝖢𝖭_𝖠^1: We prove that (u_ℓ,u_r) ∈ 𝖠 with q_ℓ>0>q_r belongs to 𝖢𝖭 if and only if (u_ℓ,u_ℓ), (u_r,u_r) ∈ 𝖮.
∙ If (u_ℓ,u_r) ∈ 𝖠_𝖨, then u_v performs two shocks and an under-compressive shock, hence (u_ℓ,u_v(ξ_o^-)), (u_v(ξ_o^+),u_r) ∈ {(u_ℓ,u_ℓ), (u_ℓ,û_ℓ), (ǔ_r,u_r), (u_r,u_r)} for any ξ_o^-<0≤ξ_o^+. Obviously (u_ℓ,û_ℓ), (ǔ_r,u_r) ∈ 𝖠_𝖭 and (u_ℓ,u_ℓ), (u_r,u_r) ∉ 𝖠_𝖭. Therefore (u_ℓ,u_r) ∈ 𝖢𝖭_1 if and only if (u_ℓ,u_ℓ), (u_r,u_r) ∈ 𝖮.
∙ If (u_ℓ,u_r) ∈ 𝖠_𝖭, then u_v coincides with u_p and performs two shocks, hence (u_ℓ,u_v(ξ_o)), (u_v(ξ_o),u_r) ∈ {(u_ℓ,u_ℓ), (u_ℓ,ũ), (u_ℓ,u_r), (ũ,u_r), (u_r,u_r)} for any ξ_o ∈ ℝ. Since û_ℓ = ũ = ǔ_r, we have (u_ℓ,ũ), (ũ,u_r) ∈ 𝖠_𝖭; moreover by assumption (u_ℓ,u_ℓ), (u_r,u_r) ∉ 𝖠_𝖭 and (u_ℓ,u_r) ∈ 𝖠_𝖭. Therefore (u_ℓ,u_r) ∈ 𝖢𝖭_2 if and only if (u_ℓ,u_ℓ), (u_r,u_r) ∈ 𝖮.

𝖢𝖭_𝖠^2: We prove that (u_ℓ,u_r) ∈ 𝖠 with q_ℓ=0>q_r belongs to 𝖢𝖭 if and only if (u_r,u_r) ∈ 𝖮.
∙ If (u_ℓ,u_r) ∈ 𝖠_𝖨, then u_v performs an under-compressive shock and a 2-shock, hence (u_ℓ,u_v(ξ_o^-)), (u_v(ξ_o^+),u_r) ∈ {(u_ℓ,u_ℓ), (ǔ_r,u_r), (u_r,u_r)} for any ξ_o^-<0≤ξ_o^+. Obviously (u_ℓ,u_ℓ), (ǔ_r,u_r) ∈ 𝖠_𝖭 and (u_r,u_r) ∉ 𝖠_𝖭. Therefore (u_ℓ,u_r) ∈ 𝖢𝖭_1 if and only if (u_r,u_r) ∈ 𝖮.
∙ If (u_ℓ,u_r) ∈ 𝖠_𝖭, then u_v coincides with u_p and performs a 2-shock, hence (u_ℓ,u_v(ξ_o)), (u_v(ξ_o),u_r) ∈ {(u_ℓ,u_ℓ), (u_ℓ,u_r), (u_r,u_r)} for any ξ_o ∈ ℝ. By assumption (u_ℓ,u_ℓ), (u_ℓ,u_r) ∈ 𝖠_𝖭 and (u_r,u_r) ∉ 𝖠_𝖭.
Therefore (u_ℓ,u_r) ∈ 𝖢𝖭_2 if and only if (u_r,u_r) ∈ 𝖮.

𝖢𝖭_𝖠^3: Analogously to the previous item, it is possible to prove that (u_ℓ,u_r) ∈ 𝖠 with q_ℓ>0=q_r belongs to 𝖢𝖭 if and only if (u_ℓ,u_ℓ) ∈ 𝖮.

𝖢𝖭_𝖠^4: We prove that any (u_ℓ,u_r) ∈ 𝖠 with q_ℓ=0=q_r belongs to 𝖢𝖭.
∙ If (u_ℓ,u_r) ∈ 𝖠_𝖨, then u_v performs an under-compressive shock, hence (u_ℓ,u_v(ξ_o^-)), (u_v(ξ_o^+),u_r) ∈ {(u_ℓ,u_ℓ), (u_r,u_r)} for any ξ_o^-<0≤ξ_o^+. Obviously (u_ℓ,u_ℓ), (u_r,u_r) ∈ 𝖠_𝖭 and therefore (u_ℓ,u_r) ∈ 𝖢𝖭_1.
∙ If (u_ℓ,u_r) ∈ 𝖠_𝖭, then u_ℓ=u_r and u_v ≡ u_p ≡ u_ℓ, hence (u_ℓ,u_v(ξ_o)), (u_v(ξ_o),u_r) ∈ {(u_ℓ,u_r)} for any ξ_o ∈ ℝ. By assumption (u_ℓ,u_r) ∈ 𝖠_𝖭 and therefore (u_ℓ,u_r) ∈ 𝖢𝖭_2.

To complete the proof that 𝖢𝖭∩𝖠 = 𝖢𝖭_𝖠 it remains to prove that 𝖢𝖭 ∩ { (u_ℓ,u_r) ∈ 𝖠 : q_ℓ < 0 or q_r > 0 } = ∅. Assume by contradiction that there exists (u_ℓ,u_r) ∈ 𝖠∩𝖢𝖭 with q_ℓ<0. Then u_v performs a 1-rarefaction (u_ℓ,û_ℓ). Clearly u_v(ξ_o) = û_ℓ with ξ_o ≐ λ_1(û_ℓ) < 0 and p̂_ℓ = p̌(u_v(ξ_o)). Hence there exists ε>0 sufficiently small such that 0 < p̌(u_v(ξ_o-ε)) - p̂_ℓ < M, namely (u_ℓ,u_v(ξ_o-ε)) ∈ 𝖠_𝖨. On the other hand (u_ℓ,u_r) ∈ 𝖠∩𝖢𝖭 ⊂ 𝖢𝖭_1 ∪ 𝖢𝖭_2 implies that (u_ℓ,u_v(ξ)) ∈ 𝖠_𝖨^∁ for any ξ<0, a contradiction. The case q_r>0 is dealt with analogously.

We now prove that

𝖢𝖭_2∩𝖮 = 𝖢𝖭_𝖮 ≐ { (u_ℓ,u_r) ∈ 𝖮 : (u_ℓ,u_ℓ), (u_r,u_r), (u_ℓ,ũ), (ũ,u_r) ∈ 𝖠_𝖨^∁ and q_p ≠ 0 along any rarefaction }.

"⊆" Let (u_ℓ,u_r) ∈ 𝖢𝖭_2∩𝖮. By definition of 𝖢𝖭_2 we have (u_ℓ,u_p(ξ_o)), (u_p(ξ_o),u_r) ∈ 𝖠_𝖨^∁ for any ξ_o ∈ ℝ, because u_v ≡ u_p. As a consequence (u_ℓ,u_ℓ), (u_r,u_r), (u_ℓ,ũ), (ũ,u_r) ∈ 𝖠_𝖨^∁. Assume by contradiction that u_p has a 1-rarefaction (the case of a 2-rarefaction is analogous) along which q_p vanishes; then q̃ ≥ 0 ≥ q_ℓ, q̃ ≠ q_ℓ and there exists ξ_o such that q_p(ξ_o)=0. Clearly p̂_ℓ = p̌(u_p(ξ_o)), hence there exists ε ≠ 0 sufficiently small such that 0 < |p̂_ℓ - p̌(u_p(ξ_o+ε))| < M, namely (u_ℓ,u_p(ξ_o+ε)) ∈ 𝖠_𝖨, a contradiction.

"⊇" Let (u_ℓ,u_r) ∈ 𝖢𝖭_𝖮. Clearly u_v ≡ u_p. If u_p does not have rarefactions, then (u_ℓ,u_r) ∈ 𝖢𝖭_2 because (u_ℓ,u_p(ξ_o)), (u_p(ξ_o),u_r) ∈ {(u_ℓ,u_ℓ), (u_r,u_r), (u_ℓ,ũ), (ũ,u_r), (u_ℓ,u_r)} ⊆ 𝖠_𝖨^∁ for any ξ_o ∈ ℝ. If u_p has a 1-rarefaction with ṽ > v_ℓ > 0 and a (possibly null) 2-shock, then (u_ℓ,u_r) ∈ 𝖢𝖭_2 because (u_ℓ,u_p(ξ_o)), (u_p(ξ_o),u_r) ∈ ({u_ℓ}×ℛ_1([ρ̃,ρ_ℓ],u_ℓ)) ∪ (ℛ_1([ρ̃,ρ_ℓ],u_ℓ)×{u_r}) ∪ {(u_r,u_r)} ⊆ 𝖠_𝖨^∁ for any ξ_o ∈ ℝ. Indeed, (u_ℓ,u_ℓ), (ũ,u_r) ∈ 𝖮 (because q_ℓ ≠ 0 ≠ q̃) and for any u_o ∈ ℛ_1([ρ̃,ρ_ℓ],u_ℓ) we have

p̂_ℓ - p̌(u_o) ≥ p̂_ℓ - p̌_ℓ > M ⇒ (u_ℓ,u_o) ∈ 𝖮, p̂(u_o) - p̌_r ≥ p̂(ũ) - p̌_r > M ⇒ (u_o,u_r) ∈ 𝖮.

The remaining cases can be treated analogously.

The L¹-continuity domain of ℛ𝒮_v is 𝖫 = { (u_ℓ,u_r) ∈ Ω^2 : |p̌_r - p̂_ℓ| ≠ M }. By Proposition <ref> we have that ℛ𝒮_v is L¹-continuous in 𝖮; in 𝖠∩𝖫 it is sufficient to exploit the continuity of û, ǔ, σ, λ_1 and λ_2. Hence ℛ𝒮_v is L¹-continuous in 𝖫. Assume now that (u_ℓ,u_r) ∈ 𝖫^∁ ≐ 𝖠∖𝖫 ⊂ 𝖠_𝖨. Clearly û_ℓ ≠ ǔ_r and therefore ℛ𝒮_v[u_ℓ,u_r] ≠ ℛ𝒮_p[u_ℓ,u_r]. If (u_ℓ^ε,u_r^ε) ∈ 𝖮 converges to (u_ℓ,u_r), then ℛ𝒮_v[u_ℓ^ε,u_r^ε] = ℛ𝒮_p[u_ℓ^ε,u_r^ε] converges in L¹ to ℛ𝒮_p[u_ℓ,u_r] and not to ℛ𝒮_v[u_ℓ,u_r] by the L¹-continuity of ℛ𝒮_p.

If u_0 ∈ Ω is such that q_0=0, then ℐ_u_0 defined by (<ref>) is an invariant domain of ℛ𝒮_v. It is sufficient to recall that ℐ_u_0 is an invariant domain of ℛ𝒮_p and to observe that û(u), ǔ(u) ∈ ℐ_u_0 for any u ∈ ℐ_u_0.

§.§ Proof of Proposition <ref>

In this subsection we completely characterize the states (u_ℓ,u_r) ∈ 𝖮_𝖮 by proving Proposition <ref>. Clearly, (u_ℓ,u_r) ∉ 𝖠_𝖭, namely q̃ ≠ 0.
Therefore, we have ρ̃ ∉ {ρ̂_ℓ, ρ̌_r}. We recall that μ̂_ℓ, μ̌_r are given by (<ref>),(<ref>). We have, see <ref>,

𝖮_𝖮^3 ≐ { (u_ℓ,u_r) ∈ 𝖮 : 0 < ν̃ ≤ ν_ℓ } ⊆ 𝖮_𝖮, 𝖮_𝖮^4 ≐ { (u_ℓ,u_r) ∈ 𝖮 : ν_r ≤ ν̃ < 0 } ⊆ 𝖮_𝖮.

Simple geometric arguments show that if (u_ℓ,u_r) ∈ 𝖮_𝖮^3 ∪ 𝖮_𝖮^4, then

|p̌(u_p^+) - p̂(u_p^-)| ≥ |p̌_r - p̂_ℓ|

and therefore (u_ℓ,u_r) ∈ 𝖮_𝖮. Indeed, let (u_ℓ,u_r) ∈ 𝖮_𝖮^3, the case (u_ℓ,u_r) ∈ 𝖮_𝖮^4 being analogous; then q_ℓ, q̃ > 0 and so ρ_ℓ ≤ ρ̃ < ρ̂_ℓ.
* Assume that ρ_ℓ < ρ̃ < ρ̂_ℓ and q_ℓ > q̃, see <ref>. In this case u_p^± = ũ and (<ref>) holds true because μ̌(u_p^+) = μ̌(ũ) ≤ μ̌_r < μ̂_ℓ < μ̂(ũ) = μ̂(u_p^-).
* If ρ_ℓ < ρ̃ < ρ̂_ℓ and q_ℓ = q̃, then u_p^- = u_ℓ, u_p^+ = ũ and μ̌(u_p^+) = μ̌(ũ) ≤ μ̌_r < μ̂_ℓ = μ̂(u_p^-).
* If ρ_ℓ < ρ̃ < ρ̂_ℓ and q_ℓ < q̃, then u_p^± = u_ℓ and μ̌(u_p^+) = μ̌_ℓ < μ̌_r < μ̂_ℓ = μ̂(u_p^-).
* If ρ_ℓ = ρ̃ < ρ̂_ℓ, then u_p^± = u_ℓ = ũ and μ̌(u_p^+) = μ̌_ℓ ≤ μ̌_r < μ̂_ℓ = μ̂(u_p^-).

By the previous lemma we have that

𝖮_𝖠 ⊆ { (u_ℓ,u_r) ∈ 𝖮 : ν̃ > max{0,ν_ℓ} } ∪ { (u_ℓ,u_r) ∈ 𝖮 : ν̃ < min{0,ν_r} }.

Hence, the following lemma completes the proof of Proposition <ref>. We have

{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : ν̃ > max{0,ν_ℓ} } = 𝖮_𝖮^1, { (u_ℓ,u_r) ∈ 𝖮_𝖮 : ν̃ < min{0,ν_r} } = 𝖮_𝖮^2.

To prove the lemma it is sufficient to show

{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : 1 ≤ ν_ℓ < ν̃ } = { (u_ℓ,u_r) ∈ 𝖮 : 1 ≤ ν_ℓ < ν̃, Φ(-ν_ℓ) > M e^{-μ_ℓ-ν_ℓ} },
{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : ν_ℓ < 1 < ν̃ } = { (u_ℓ,u_r) ∈ 𝖮 : ν_ℓ < 1 < ν̃, Φ(-1) > M e^{-μ_ℓ-ν_ℓ} },
{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : max{0,ν_ℓ} < ν̃ ≤ 1 } = { (u_ℓ,u_r) ∈ 𝖮 : max{0,ν_ℓ} < ν̃ ≤ 1, Φ(-ν̃) > M e^{-μ_ℓ-ν_ℓ} },
{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : ν̃ < ν_r ≤ -1 } = { (u_ℓ,u_r) ∈ 𝖮 : ν̃ < ν_r ≤ -1, Φ(ν_r) > M e^{ν_r-μ_r} },
{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : ν̃ < -1 < ν_r } = { (u_ℓ,u_r) ∈ 𝖮 : ν̃ < -1 < ν_r, Φ(-1) > M e^{ν_r-μ_r} },
{ (u_ℓ,u_r) ∈ 𝖮_𝖮 : -1 ≤ ν̃ < min{0,ν_r} } = { (u_ℓ,u_r) ∈ 𝖮 : -1 ≤ ν̃ < min{0,ν_r}, Φ(ν̃) > M e^{ν_r-μ_r} }.

We recall that (u_ℓ,u_r) ∈ 𝖮_𝖮 if and only if (u_ℓ,u_r) ∈ 𝖮 and by (<ref>)

|p̌(u_p^+) - p̂(u_p^-)| > M.

We prove (<ref>)–(<ref>); the proof of (<ref>)–(<ref>) is analogous. If (u_ℓ,u_r) ∈ 𝖮 satisfies 1 ≤ ν_ℓ < ν̃, then u_p^± = u_ℓ and (<ref>) is equivalent to

a^2[e^{μ̂_ℓ} - e^{μ̌_ℓ}] = a^2 e^{μ_ℓ}[ e^{-Ξ^{-1}(ν_ℓ)} - e^{-ν_ℓ} ] = e^{μ_ℓ+ν_ℓ} Φ(-ν_ℓ) > M,

because of (<ref>),(<ref>). Therefore (<ref>) holds true. If (u_ℓ,u_r) ∈ 𝖮 satisfies ν_ℓ < 1 < ν̃, then u_p^± = u̅_ℓ and (<ref>) is equivalent to

a^2[e^{μ̂(u̅_ℓ)} - e^{μ̌(u̅_ℓ)}] = a^2 e^{μ̅_ℓ}[ e^{-Ξ^{-1}(ν̅_ℓ)} - e^{-ν̅_ℓ} ] = e^{μ_ℓ+ν_ℓ} Φ(-1) > M,

because of (<ref>),(<ref>), μ̅_ℓ = μ_ℓ+ν_ℓ-1 and because ν̅_ℓ = 1 by (<ref>). Therefore (<ref>) holds true. If (u_ℓ,u_r) ∈ 𝖮 satisfies max{0,ν_ℓ} < ν̃ ≤ 1, then u_p^± = ũ and (<ref>) is equivalent to

a^2[e^{μ̂(ũ)} - e^{μ̌(ũ)}] = a^2 e^{μ̃}[ e^{-Ξ^{-1}(ν̃)} - e^{-ν̃} ] = e^{μ_ℓ+ν_ℓ} Φ(-ν̃) > M,

because of (<ref>),(<ref>) and by μ̃+ν̃ = μ_ℓ+ν_ℓ. Therefore (<ref>) holds true.

§ CONCLUSIONS

In this paper we studied a mathematical model for the isothermal fluid flow in a pipe with a valve. The modeling of the flow through the valve has been based on the general definition of coupling Riemann solver; in turn, the specific properties of the valve impose the coupling condition and then the solver. Our aim was to understand to what extent the solver satisfies some crucial properties: coherence, consistence and continuity. Coherence, in particular, corresponds to the commuting (chattering) of the valve, a well-known issue in real applications. At the same time we also searched for invariant domains. To the best of our knowledge, the mathematical modeling of valves has never considered these aspects. We focused on the case of a simple pressure-relief valve; the framework we proposed is however suitable to deal with other types of valves.
Even in the simple case under consideration, a complete characterization of the states (density and velocity of the fluid) that share these properties is not trivial and requires a very detailed study of the solver. Nevertheless, we believe that our results are rather satisfactory. Several issues now arise. On the one hand, we intend to test our method on other kinds of valves in order to understand whether in some cases the analysis can be simplified. On the other hand, a natural question is how to circumvent these difficulties. This can be done in several ways: for instance, either by introducing a finite response time of the valve or by locating a pair of sensors sufficiently far from the valve, see <cit.>. A related important problem is the water-hammer effect <cit.>, which is due to the sudden closure of a valve. Even further, the study of flows in networks in the presence of valves appears extremely appealing, see <cit.> and the references therein; owing to the complexity of this subject, we kept our model as simple as possible, while however catching the most important features of how valves work. A last natural step would be toward optimization problems, see <cit.> in the case of compressors and <cit.> for valves. We plan to treat these topics in forthcoming papers.

§ ACKNOWLEDGEMENTS

M. D. Rosini thanks Edda Dal Santo for useful discussions. The first author was partially supported by the INdAM – GNAMPA Project 2016 "Balance Laws in the Modeling of Physical, Biological and Industrial Processes". The last author was partially supported by the INdAM – GNAMPA Project 2017 "Equazioni iperboliche con termini nonlocali: teoria e modelli".

 | http://arxiv.org/abs/1706.08782v1 | {
"authors": [
"Andrea Corli",
"Magdalena Figiel",
"Anna Futa",
"Massimiliano D. Rosini"
],
"categories": [
"math.AP"
],
"primary_category": "math.AP",
"published": "20170627111715",
"title": "Coupling conditions for isothermal gas flow and applications to valves"
} |
Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, 85748 Garching, Germany
CNRS & Sorbonne Universités, UPMC Univ Paris 06, UMR7095, Institut d'Astrophysique de Paris, F-75014, Paris, France
Sorbonne Universités, Institut Lagrange de Paris (ILP), 98 bis bd Arago, 75014 Paris, France

This work presents a joint and self-consistent Bayesian treatment of various foreground and target contaminations when inferring cosmological power-spectra and three dimensional density fields from galaxy redshift surveys. This is achieved by introducing additional block sampling procedures for unknown coefficients of foreground and target contamination templates to the previously presented ARES framework for Bayesian large scale structure analyses. As a result the method infers jointly and fully self-consistently three dimensional density fields, cosmological power-spectra, luminosity dependent galaxy biases, noise levels of respective galaxy distributions and coefficients for a set of a priori specified foreground templates. In addition this fully Bayesian approach permits detailed quantification of correlated uncertainties amongst all inferred quantities and correctly marginalizes over observational systematic effects. We demonstrate the validity and efficiency of our approach in obtaining unbiased estimates of power-spectra via applications to realistic mock galaxy observations subject to stellar contamination and dust extinction. While simultaneously accounting for galaxy biases and unknown noise levels, our method reliably and robustly infers three dimensional density fields and corresponding cosmological power-spectra from deep galaxy surveys. Further, our approach correctly accounts for joint and correlated uncertainties between unknown coefficients of foreground templates and the amplitudes of the power-spectrum, an effect amounting to correlations and anti-correlations of up to 10 percent across large ranges in Fourier space.

Bayesian power-spectrum inference with foreground and target contamination treatment
J. Jasche1, G. Lavaux2,3
Accepted 24/05/2017
====================================================================================

§ INTRODUCTION

In recent years the cosmological community has witnessed great improvements in our understanding of the Universe. This progress is particularly due to the spectacular results of the Planck satellite mission and deep galaxy observations such as the ones provided by the Baryon Oscillation Spectroscopic Survey <cit.>. These results put high standards for future analyses of cosmological data, with an ever increasing need to control uncertainties and systematic effects in observations in order not to misinterpret data when searching for cosmological signals. To address these needs, data science is challenged to provide ever more robust data models accounting for complex systematic effects and allowing for accurate marginalization over unknowns when interpreting cosmological data.

A particular challenge in existing and coming deep galaxy redshift surveys arises from the need to properly understand the selection processes of galaxies from which cosmological surveys are constructed <cit.>. Such identification was conducted for mitigating star-galaxy contamination of the first SDSS photometric galaxy catalogue <cit.>.
The problem is further exacerbated by our lack of understanding of galaxies as tracers of the underlying dark matter field when performing cosmological inference. In particular, all our indicators for completeness rely on the relatively slow, homogeneous and isotropic evolution of galaxy densities relative to dark matter densities. If the observation is further hindered by instrumental and/or terrestrial effects, this leads to a complex and challenging analysis problem. In particular, <cit.> and <cit.> have identified that contamination by bright stars significantly alters the intrinsic clustering signal of the observed photometric galaxy sample at large scales. The last SDSS release based on DR12 photometry still shows this problem in the measured correlation function <cit.>. Effects due to foreground stars, dust, seeing, and sky background intensity have the greatest potential to cause systematic deviations in the clustering signal <cit.>. The selection of spectroscopic galaxies is not immediately affected by the same effects, but it is subject to other systematics, such as fibre collisions, target priority conflicts and fibre plate fixations. All these data contaminations constitute a particular nuisance, since foregrounds also affect the noise properties of observed galaxy samples via varying attenuation or target contamination across the sky.

In the large scale structure community, foreground effects are traditionally treated by weighting observed galaxies to homogenize the distribution of the traced density field across the sky <cit.>. This approach neglects possible, and sometimes counter-intuitive, correlations between contamination effects and power-spectrum amplitudes across large ranges in Fourier space. It also ignores the effects of modified observational noise properties due to target contamination. Several methods have also been proposed to account for additive contributions from unknown foregrounds in photometric and spectroscopic galaxy observations <cit.>. The literature on cosmic microwave background analyses likewise provides a plethora of approaches to account for linear additive foreground contributions <cit.>. None of these approaches, however, properly accounts for multiplicative foreground and target contaminations, which also affect the noise.

In this work we expand on the idea that foreground effects are more closely related to multiplicative rather than additive contributions. This is similar to the discussion presented in <cit.>, who used a multiplicative correction to observed galaxy densities. In this work we seek to account for these effects when inferring cosmological power-spectra from observations. The literature provides a plethora of statistically more or less rigorous approaches to measure power-spectra. Several of these methods rely on Fourier transform based methods or exploit Karhunen-Loève or spherical harmonics decompositions <cit.>. Other approaches aim at inferring the real space power spectrum via likelihood methods <cit.>. In the Bayesian community several approaches have been proposed to jointly infer three dimensional density fields and their corresponding cosmological power-spectra <cit.>. Note also that similar approaches have been explored for analyses of cosmic microwave background data <cit.>.
To account for such effects of foreground and target contamination in a statistically rigorous fashion, we propose a hierarchical Bayesian approach to jointly and self-consistently infer three dimensional density fields, corresponding power-spectra and coefficients of a set of different foreground templates. In particular, this work builds upon our previously developed Bayesian inference algorithm ARES (Algorithm for REconstruction and Sampling) <cit.>.

The manuscript is structured as follows. In Section <ref> we give a brief overview of the statistical model that we propose. First, in Section <ref>, we recall the hierarchical Bayesian inference approach on which our code, ARES, is based. Then we describe in Section <ref> the necessary modifications of the model for foreground effects, and in Section <ref> the modifications to the original inference algorithm. In Section <ref> we describe the generation of the artificial data used to test the performance of the sampling framework in Section <ref>. Finally we discuss our results and give an outlook on future applications in Section <ref>.

§ THE BAYESIAN INFERENCE MODEL

This section provides a brief overview of our previously presented Bayesian inference framework ARES <cit.>. We will also give a detailed description of the modifications enabling us to account for foreground and target contamination in deep galaxy observations.

§.§ The ARES framework

As discussed in the introduction, the current work builds upon the previously developed Algorithm for REconstruction and Sampling (ARES) <cit.>. This fully Bayesian large scale structure inference method aims at precision inference of cosmological power-spectra from galaxy redshift surveys. Specifically, it performs joint inferences of three dimensional density fields, cosmological power spectra as well as luminosity dependent galaxy biases and corresponding noise levels for the different galaxy populations in the survey <cit.>. In particular, the ARES algorithm addresses a large scale data interpretation problem involving many millions of parameters.

In the following, for the sake of clarity, we will describe the corresponding data model in the case of a single galaxy population. The generalization of this data model to account for an arbitrary number of galaxy populations with respective stochastic and systematic uncertainties has been described in our previous works <cit.>. In the case of a single galaxy population the data model is given as:

N_i = N̅ R_i (1 + b D_i δ_i) + ϵ_i,

where N_i is the number of galaxies in the i-th grid element, N̅ is the mean density of the galaxy population, R_i is the overall linear response operator of the survey, describing redshift and target completeness, b is the galaxy population bias, D_i is the cosmic growth factor at the position of the i-th grid element, δ_i is the density contrast at a reference redshift in this same grid element, and ϵ_i denotes random but structured instrumental noise. Also, as described in our previous work, the observational noise will be assumed to be a Gaussian approximation to Poissonian noise, neglecting the influence of the signal itself. This assumption yields the corresponding noise covariance matrix:

⟨ϵ_i ϵ_j ⟩ = N̅ R_i δ^K_i,j,

with δ^K_i,j being the Kronecker delta (i.e. equal to one if i=j and zero otherwise). Finally, we assume a homogeneous and isotropic Gaussian prior for the density contrast amplitudes δ_i.
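To make this data model concrete, the following minimal sketch (in Python with NumPy) draws a synthetic galaxy count field from the model above under the Gaussian noise approximation. All grid sizes and parameter values here are illustrative assumptions, not ARES defaults.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative grid and model parameters (assumptions, not ARES defaults)
n_cells = 64**3                                # number of grid elements
N_bar = 5.0                                    # mean galaxy density per cell
b = 1.5                                        # linear galaxy bias
D = np.ones(n_cells)                           # growth factor per cell (set to unity here)
R = rng.uniform(0.2, 1.0, n_cells)             # toy survey response operator R_i
delta = rng.normal(0.0, 0.1, n_cells)          # toy density contrast field delta_i

# Deterministic part of the data model: N_bar * R_i * (1 + b * D_i * delta_i)
mean_counts = N_bar * R * (1.0 + b * D * delta)

# Gaussian approximation to Poissonian noise, with variance N_bar * R_i
noise = rng.normal(0.0, np.sqrt(N_bar * R))

N_obs = mean_counts + noise
```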
For further details please consult our previous work <cit.>. In order to provide full Bayesian uncertainty quantification, the ARES algorithm explores the joint parameter space of density fields, power-spectra, galaxy biases and noise parameters via an efficient block sampling scheme. We show in Figure <ref> a visualization of this iterative sampling procedure, including the foreground sampling method presented in this work. Iterative execution of these respective sampling steps provides us with a valid Markov chain and a numerical representation of the full joint target posterior distribution.

Also note that we use here an upgraded version of the ARES algorithm, for which we employ the messenger method discussed in <cit.>. This particular implementation of the Wiener posterior sampling method has been demonstrated to improve upon the statistical efficiency of previous implementations <cit.>.

§.§ The foreground and target contamination model

Spectroscopic completeness is generally computed as the ratio of the number of observed spectra to the number of all photometric targets in a given area of the sky. This ratio is assumed to hold for any pointing within this area. However, besides galaxies, a number of unknown contaminants can contribute to or affect the observed photometric targets, artificially increasing or depleting their local number density. These contaminants may include, e.g., foreground stars, dust absorption or effects due to seeing. A naive estimate of the spectroscopic completeness from data therefore does not reflect the actual probability of obtaining a galaxy spectrum at a given position in the sky. From observations we can build an estimate of the completeness by calculating the ratio of the number of observed galaxy spectra N^g_i,spectro and the number of target galaxies N^g_i,targets, both in the direction of pixel i in the sky:

C_i,obs = N^g_i,spectro/N^g_i,targets = N^g_i,spectro/N^g_i,photo N^g_i,photo/N^g_i,targets = C_i M^-1_i,

with N^g_i,photo being the actual true number of galaxies that should have been identified by photometry, and M_i being the ratio of all observed photometric targets N^g_i,targets to the true sample of target galaxies N^g_i,photo.

Equation (<ref>) demonstrates the dilemma that, given some spectroscopic information (for which objects can clearly be identified as galaxies), there is no immediate way to decide how many galaxies were in the actual target sample. In Equation (<ref>) this mismatch is quantified by the ratio M_i = N^g_i,targets/N^g_i,photo between the number of observed photometric targets and the actual but unknown number of true galaxy targets.

We expect two possible contributions to M_i: either there is an excess in the number of targets, because the photometric information was insufficient to separate galaxies from stars or other objects in the sky, or there is a deficit of galaxies that were not detected due to, e.g., low surface brightness or dust absorption. Assuming such foreground contributions to be mild perturbations, the corresponding contamination map M on the sky can be expressed as a product of small contaminating factors (each greater or less than one). Individual contaminations can then be modelled via respective foreground templates. Note that this approach has also been adopted by the BOSS collaboration to correct their measurement of the cosmological power-spectrum <cit.>.
Given this assumption we can express the individual M_i as:

M_i = ∏_n=0^N^fg (1 - α_n F_n,i),

where F_n,i is the foreground template of the n-th contribution at the i-th pixel of the map, α_n is the amplitude of the respective foreground template, and N^fg is the total number of foreground maps.

We note that different surveys, or even different sub-samples of observed galaxies, may be subject to different foreground effects. To consistently account for all these foreground effects when jointly analysing individual or several data sets, the original data model implemented in ARES needs to be modified by a multiplicative correction of the survey response operator R^c_i. Specifically, we model the observed number of galaxies N^c_i in a survey as:

N^c_i = N̅^c M_i({α^c_n}) R^c_i (1 + b^c D_i δ_i) + ϵ^c_i,

where the additive noise contribution ϵ^c_i is drawn from a zero-mean Gaussian distribution with covariance matrix given as:

⟨ϵ^c_i ϵ^c'_j ⟩ = N̅^c M_i({α^c_n}) R^c_i δ^K_i,j δ^K_c,c'.

The superscript c indicates the different considered catalogues. These modifications render the posterior distribution of δ_i more complex. By construction the likelihood is given as:

ℒ({N^c_i} | {δ_i}, {N̅^c}, {b^c}, {α^c_n}) ∝ ∏_c=0^N^c ∏_i=0^N^v ( N̅^c M_i({α^c_n}) R^c_i )^-1/2 exp{ -1/2 [N^c_i - N̅^c M_i({α^c_n}) R^c_i (1 + b^c D_i δ_i)]^2 / ( N̅^c M_i({α^c_n}) R^c_i ) }.

It should be remarked that foreground contributions, as modelled here, are not mere additive contributions to the signal to be inferred; they also have a pronounced impact on the varying noise properties across the survey. Hence, as can be seen from Equation (<ref>), inferring the foreground coefficients {α_n} is a highly non-linear analysis task.

§.§ Sampling foreground coefficients

As described by the likelihood distribution given in Equation (<ref>), the foregrounds of different catalogues, as labelled by the superscript c, can be sampled independently. For this reason, and without loss of generality, here we provide the sampling procedure for a single galaxy catalogue only. The conditional posterior distribution for the coefficients {α_n} of the respective foreground templates can then be written as:

𝒫( {α_n} | {N_i}, {δ_i}, {N̅}, {b}) ∝ 𝒫( {α_n} ) ℒ({N_i} | {δ_i}, {N̅}, {b}, {α_n}),

where 𝒫( {α_n} ) is the prior on the foreground coefficients and ℒ({N_i} | {δ_i}, {N̅}, {b}, {α_n}) is the likelihood for a single catalogue as given by Equation (<ref>). In the absence of any further information on the amplitudes of the foreground coefficients α_n, we follow the maximally agnostic approach of using a uniform prior 𝒫( {α_n} )=1.

It can be seen from Equations (<ref>) and (<ref>) that the conditional posterior distribution does not factorize in the coefficients α_n of the respective foreground templates. To correctly account for the conditional dependencies between the coefficients {α_n}, we propose a block sampling procedure that sequentially draws random variates of each α_n conditioned on all other values. This is achieved by introducing the following sequence of sampling steps into the full ARES framework:

α_0 ∼ 𝒫(α_0 | {α_n}∖α_0, {N_i}, {δ_i}, {N̅}, {b})
α_1 ∼ 𝒫(α_1 | {α_n}∖α_1, {N_i}, {δ_i}, {N̅}, {b})
α_2 ∼ 𝒫(α_2 | {α_n}∖α_2, {N_i}, {δ_i}, {N̅}, {b})
⋮
α_N^fg-1 ∼ 𝒫(α_N^fg-1 | {α_n}∖α_N^fg-1, {N_i}, {δ_i}, {N̅}, {b})

where the symbol "∼" indicates a random draw from the respective distribution. This sampling procedure integrates well into the ARES framework, as indicated in Fig. <ref>. Despite the fact that drawing respective realizations of the foreground coefficients {α_n} is a non-linear process, there exists a direct sampling procedure.
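As an illustration of how the multiplicative contamination model enters the likelihood, the sketch below evaluates M_i = ∏_n (1 - α_n F_n,i) and the corresponding Gaussian log-likelihood for a single catalogue. The template maps, coefficient values and array shapes are made-up placeholders, not part of the ARES implementation.

```python
import numpy as np

def contamination_map(alpha, F):
    """M_i = prod_n (1 - alpha_n * F_{n,i}) for templates F of shape (n_fg, n_pix)."""
    return np.prod(1.0 - alpha[:, None] * F, axis=0)

def log_likelihood(N, delta, N_bar, b, D, R, alpha, F):
    """Gaussian log-likelihood for one catalogue (additive constants dropped)."""
    M = contamination_map(alpha, F)
    var = N_bar * M * R                       # noise variance per grid element
    mu = var * (1.0 + b * D * delta)          # mean counts: N_bar M_i R_i (1 + b D_i delta_i)
    return -0.5 * np.sum((N - mu) ** 2 / var + np.log(var))
```

Note that the coefficients α_n enter both the mean and the variance, which is precisely why the inference of the {α_n} is a non-linear task.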
The detailed derivation of the foreground coefficient sampler is presented in Appendix <ref>. The detailed algorithm for generating the respective random variates is given in Algorithm <ref>.

§ GENERATION OF GAUSSIAN MOCK DATA

To test the validity and performance of the modified ARES sampling framework, we follow an approach similar to that discussed in our previous works <cit.>. The mock catalogues are generated in accordance with the data model described in Equation (<ref>), including various foreground effects. We generate artificial galaxy data on a cubic equidistant grid of side length 4000 h^-1 Mpc consisting of 256^3 grid nodes. First, a realization of a cosmic density contrast field δ_i is drawn from a zero-mean normal distribution with covariance matrix corresponding to a cosmological power-spectrum. This spectrum, including baryon acoustic oscillations, is calculated according to the prescription described in <cit.> and <cit.>. For numerical evaluation we assume a ΛCDM cosmology with the set of parameters (Ω_m=0.3089, Ω_Λ=0.6911, Ω_b=0.0485, h=0.6774, σ_8=0.8159, n_s=0.9667), as determined by Cosmic Microwave Background observations of the Planck satellite mission <cit.>.
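A standard way to draw such a Gaussian density contrast field with a prescribed power-spectrum is to colour white noise in Fourier space. The sketch below illustrates this for a small periodic box; a simple power-law spectrum stands in for the full baryon-wiggle prescription, and volume normalization factors are deliberately glossed over.

```python
import numpy as np

def gaussian_field(n, boxsize, power, seed=0):
    """Draw a real-valued Gaussian random field whose power-spectrum is power(k)."""
    rng = np.random.default_rng(seed)
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    k1d_half = 2.0 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d_half, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = 1.0                       # avoid division by zero at k = 0
    # White noise in real space, coloured in Fourier space by sqrt(P(k))
    white_k = np.fft.rfftn(rng.normal(size=(n, n, n)))
    field_k = white_k * np.sqrt(power(kmag))
    field_k[0, 0, 0] = 0.0                    # enforce a zero-mean field
    return np.fft.irfftn(field_k, s=(n, n, n))

# Toy power-law spectrum standing in for the full baryon-wiggle spectrum
delta = gaussian_field(n=64, boxsize=4000.0, power=lambda k: 1e-2 * k**-1.5)
```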
Following a description similar to that discussed in <cit.>, we intend to create a realistic scenario jointly analysing the SDSS DR7 main, the CMASS and the LOW-Z galaxy samples while accounting for their respective systematic effects. The corresponding artificial data sets are then drawn from the distribution given in Equation (<ref>). These artificial data sets include the effects of noise, galaxy bias, survey geometries, selection and foreground effects. For the sake of this work we restrict our tests to the dominant foreground effects of dust extinction and stellar contamination. The templates for these two effects are presented in the upper panels of Fig. <ref>; the lower panels show the completeness masks for the respective surveys. The map describing dust extinction has been generated straightforwardly from the SFD maps <cit.>. In particular, we constructed a HEALPix map of the reddening at N_side=2048 by linearly interpolating the values of the SFD map <cit.>. The star map is built from different pieces of information. The first component consists of computing a MANGLE description[MANGLE software is originally provided by <cit.>.] of the geometry of the spectroscopic plates. We then count the number of stars with apparent magnitudes 20.3<i_PSF<20.6 present in each single non-overlapping polygon. We convert this MANGLE description into a HEALPix map and divide the value in each pixel by the area of the overlapping polygon. This results in a map in which each pixel carries an estimated star count per steradian, i.e. a star density. We have chosen to average by spectroscopic plates to reduce shot-noise in the estimate. A better-resolved estimate could have been obtained from the geometrical description of the photometric tiling, but at the cost of increased noise.

Further, for the SDSS DR7 main sample component of the mock data we assume a radial selection function following from a standard Schechter luminosity function with standard r-band parameters (α = -1.05, M_* - 5 log_10(h) = -20.44), and we limit the survey to only include galaxies within the apparent Petrosian r-band magnitude range 13.5 < r < 17.6 and within the absolute magnitude range M_min=-17.0 to M_max=-23.0. As usual, the radial selection function f(z) is then given by the integral of the Schechter luminosity function over this range in absolute magnitude.

For the CMASS and LOW-Z components we have used numerical estimates of the selection functions, obtained by computing a histogram of the corresponding N(d) in the actual data sets <cit.> (d being the co-moving distance from the observer). To account for the different selection effects in the northern and southern galactic hemispheres, we also split our mock data sets into the corresponding CMASS and LOW-Z catalogues. The respective radial selection functions are presented in Fig. <ref>.

The average of the product of the two dimensional survey geometry C^c(x̂) and the selection function f^c(x) at each grid element in the three dimensional volume yields the survey response operator:

R^c_i = 1/|𝒱_i| ∫_𝒱_i d^3 x⃗ C^c(x̂) f^c(x),

with |𝒱_i| the volume of the region 𝒱_i represented by the i-th grid element. Given these definitions and a realization of the three dimensional density field δ_i, realizations of artificial galaxy observations for the respective catalogues labelled by c can be obtained by evaluating:

N^c_i = N̅^c R^c_i M^c_i (1+ b^c δ_i) + √(N̅^c R^c_i M^c_i) ϵ_i,

where ϵ_i is a white-noise field drawn from a zero-mean, unit-variance normal distribution.

§ RESULTS

In this section we discuss results obtained by applying the modified ARES algorithm to artificial mock data. In particular, we focus on the validity and statistical efficiency of the algorithm.

§.§ Statistical efficiency of the sampler

To test the statistical efficiency of our sampler we follow a standard test procedure as described in previous works <cit.>. In particular, we test the initial burn-in phase by starting the sampler from an over-dispersed state and monitoring the transitions in parameter space as a sequence of steps in the Markov chain. Typically this test reveals a coordinated drift of inference parameters towards their target values. Once the chain has moved to a preferred region in parameter space it starts to correctly explore the target posterior distribution via the random walk Gibbs sampling approach. At this stage it is assumed that the sampler has passed the initial burn-in phase and we start recording samples of the Markov chain. In Fig. <ref> we show the sequence of sampled posterior power-spectra during the burn-in phase. For this test we started from an over-dispersed state by multiplying the initial guess of a Gaussian random density contrast field by a factor of 0.1. Fig. <ref> nicely demonstrates the initial drift towards the preferred region in parameter space. As can be seen, the initial burn-in phase consists of about ∼ 2000 Markov steps. As a side remark, we note that the generation of individual samples requires an investment of ∼ 1.87 CPU-hours per sample for the present scenario of dealing with five different galaxy sub-catalogues and two foregrounds.

To further test the statistical efficiency of the Gibbs sampling procedure, we estimated its efficiency in generating independent Markov samples. Generally, subsequent samples in a Markov chain are correlated and do not qualify as independent samples of the target posterior distribution. To estimate how many independent samples can be drawn from a Markov chain with a given number of transition steps, one has to determine the length over which sequential samples are correlated.
This correlation length characterizes the statistical efficiency of generating independent realizations of a parameter θ and is estimated as:

C(θ)_n = 1/(N-n) ∑_i=0^N-n [ (θ^i - ⟨θ⟩)/√(Var(θ)) ] [ (θ^i+n - ⟨θ⟩)/√(Var(θ)) ],

where n is the distance along the chain measured in iterations, ⟨θ⟩ = 1/N ∑_i θ^i, Var(θ) = 1/N ∑_i (θ^i - ⟨θ⟩)^2, and N is the total number of samples in the chain. As an illustration, in Fig. <ref> we show the correlation length for the power-spectrum amplitudes of different modes in Fourier space, as indicated in the plot. It can be seen that the typical correlation length for a BOSS-like survey analysis is on the order of ∼ 100 Markov transitions. These results demonstrate the numerical feasibility of complex fully Bayesian analyses of present and next generation surveys.
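The estimator above is straightforward to compute from a stored chain; a minimal sketch, using a toy AR(1) series in place of an actual ARES chain:

```python
import numpy as np

def autocorrelation(theta, max_lag):
    """Normalized autocorrelation C(theta)_n of a one-dimensional chain."""
    theta = np.asarray(theta, dtype=float)
    n_samples = len(theta)
    centred = (theta - theta.mean()) / theta.std()
    return np.array([
        np.mean(centred[: n_samples - lag] * centred[lag:])
        for lag in range(max_lag)
    ])

# Toy AR(1) chain standing in for a sequence of power-spectrum samples
rng = np.random.default_rng(1)
chain = np.zeros(5000)
for i in range(1, len(chain)):
    chain[i] = 0.95 * chain[i - 1] + rng.normal()

corr = autocorrelation(chain, max_lag=200)
corr_length = int(np.argmax(corr < 0.1))   # first lag where correlation drops below 0.1
```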
§.§ Inference of template coefficients

As discussed above, the proposed hierarchical Bayesian inference machinery aims to account for the systematic uncertainties arising from foreground effects. In particular, the ARES framework correctly accounts for all joint and correlated uncertainties of the different inference parameters, even across the five different mock galaxy surveys used in this work. To illustrate this fact, in Fig. <ref> we present two dimensional marginal posterior distributions for the corresponding foreground template coefficients. As can be seen, the different panels indicate various degrees of correlation between different foreground coefficients across the five mock catalogues. We would also like to point out that, generally, the distributions for foreground template coefficients are highly non-Gaussian and can have sharp transitions, due to the requirement that effective contamination templates have a positive sign.

As an interesting by-product of our sampling procedure, we are able to provide effective survey response maps which account for the a priori unknown systematics due to foreground and target contamination. In particular, the mean and standard deviation of such maps can be estimated by evaluating Equation (<ref>) for every foreground template coefficient sample in the Markov chain and multiplying it with the estimated completeness map C_i,obs. The result is demonstrated in Fig. <ref>. It can be seen that most corrections appear at the boundary of the mask. These are the regions most affected by foreground stars in our test scenario. Note that the corresponding standard deviations, as shown in the right panel of Fig. <ref>, also show increased uncertainty in these regions. This demonstrates that the algorithm assigns larger uncertainty to the more unreliable portions of the data and optimally extracts cosmological information from observations, as discussed in the following.

§.§ Inferred three dimensional density fields

Although this work focusses primarily on the inference of cosmological power-spectra, it is instructive to also look at the inferred three dimensional density fields. In particular, we would like to highlight the impact of foreground contaminations on the inference of density fields from galaxy redshift surveys. To do so we compare two ARES analyses of the generated mock galaxy catalogue, with and without foreground treatment. From these two Markov chains we can calculate the ensemble mean density and corresponding variance fields. The results are presented in Fig. <ref>. As can be seen, the analysis without a detailed treatment of foreground contaminations shows residual large scale features and erroneous power, particularly close to the survey boundaries. These regions are affected the most by stellar contamination, as indicated by the corresponding foreground map shown in Fig. <ref>. In contrast, our ARES run with detailed Bayesian foreground treatment shows a homogeneous density distribution throughout the entire observed domain. Also note that the variance maps of the corrected and uncorrected runs look very similar. Apparently the erroneous features in the uncorrected ARES analysis do not affect the corresponding variance map. Since, for the uncorrected run, the data model does not account for the systematic uncertainties associated with foreground contaminations, the erroneous large scale power in the reconstructed field will be fully attributed to the inferred large scale structure. From Fig. <ref> it is visually evident that our data model, including the treatment of foreground contaminations, is much more robust against such misinterpretations. In the following we will have a closer look at the inferred cosmological power-spectra.

§.§ Inferred cosmological power-spectra

One of the most important features of the ARES algorithm is its ability to jointly infer three dimensional density fields, corresponding cosmological power-spectra, galaxy biases, noise levels and coefficients of several foreground templates, including a detailed treatment of all joint and correlated uncertainties. Since the ARES framework yields proper Markov chains, we are able to correctly marginalize over all joint uncertainties when focussing on the analysis of specific target quantities such as the cosmological power-spectrum. Specifically, in our previous work <cit.> we have already demonstrated that the ARES algorithm reveals and correctly treats the anti-correlation between bias amplitudes and the power spectrum, a 20% effect across large ranges in Fourier space. Here we also take into account the unknown coefficients of several foreground templates.

To study the impact of foreground contamination on analyses of cosmological power-spectra in deep galaxy surveys, we compare inference results obtained from our two ARES runs with and without the corresponding corrections. The results for the inferred power-spectra are presented in Fig. <ref>, where we show the univariate marginal posterior distribution for power-spectrum amplitudes at different modes in Fourier space. For the Markov chain without foreground treatment, one can clearly observe excessive power at the largest scales. This observation corresponds to the excessive large scale power observed in the corresponding inferred three dimensional density fields, as discussed above. In addition, one can observe a slight bias with respect to the true underlying power-spectrum from which the mock observations were generated. This can easily be understood by inspecting the data model described in Equation (<ref>), where one can see that there is a certain potential for a degeneracy between foreground coefficients and galaxy biases. If foreground effects are not treated correctly, then some of the foreground contributions will erroneously be compensated by sampled galaxy bias amplitudes, introducing the offset between true and recovered power-spectra shown in Fig. <ref>. In contrast, the inferred power-spectra for the run with foreground treatment are unbiased with respect to the true underlying power-spectrum over the full domain of Fourier modes considered in this work.
In particular, the shape of the recovered power-spectrum at the largest scales is in excellent agreement with the true fiducial model.

We further studied the impact of different foreground coefficients on power-spectrum amplitudes at different scales in Fourier space. In particular, we calculated the cross-correlation matrix between the foreground coefficients of the five mock catalogues and the power-spectrum amplitudes from the posterior samples of the corresponding Markov chain. The results are presented in Fig. <ref>. It can be seen that correlations and anti-correlations can amount to up to ten percent across all modes in Fourier space. Additionally, we tested whether the sampler correctly accounts for the combined effects of foreground contaminations, galaxy biases and unknown noise amplitudes by estimating the covariance matrix of inferred power-spectra from the ARES runs. As can be seen in Fig. <ref>, the covariance matrices for both runs exhibit a strongly diagonal shape, indicating that the algorithm correctly accounted for the otherwise erroneous mode coupling introduced by the survey geometry and foreground effects. Residual off-diagonal contributions amount to less than ten percent. These results clearly demonstrate the feasibility of dealing with strong but unknown foreground contaminations when inferring cosmological power-spectra from deep galaxy observations.

§ SUMMARY AND CONCLUSION

Major challenges for the analysis of next generation deep galaxy redshift surveys arise from the requirement to account for an increasing amount of systematic and stochastic uncertainties. In particular, foreground effects and target contaminations due to, e.g., stars and dust can greatly affect the observation of galaxies. If not accounted for properly, these effects can yield erroneous modulations of galaxy number counts across the sky, which hinders the immediate inference of power-spectra and three dimensional density fields from such galaxy samples.

To address this issue, in this work we have described a fully Bayesian treatment of unknown foreground contamination in the process of inferring cosmological power-spectra from deep galaxy surveys. In particular, we build upon the previously presented Bayesian inference framework ARES <cit.>. The ARES algorithm aims to jointly infer three dimensional density fields, corresponding cosmological power-spectra, luminosity dependent galaxy biases and unknown noise levels. Being a full Bayesian inference engine, ARES further correctly provides joint and correlated uncertainties for all target quantities by performing an efficient Markov chain Monte Carlo sampling within a block sampling scheme, as indicated in Fig. <ref>.

In this work we extend this hierarchical Bayesian framework by also including an additional sampling procedure to account for foreground and target contamination effects. As discussed in Section <ref>, such contaminating effects particularly affect the estimation of the spectroscopic completeness of a given galaxy survey. Naive estimation of the probability of acquiring galaxy spectra at given positions in the sky, by calculating ratios of observed galaxy spectra and all observed photometric targets, ignores the possibility that the photometric targets are contaminated by foreground stars or dust extinction. Such effects are likely to artificially increase or decrease the estimated number of galaxy targets in observations.
In consequence, these effects introduce an artificial modulation of observed galaxy number densities across the sky, which in turn yields erroneous large scale power in inferred cosmological power-spectra. As demonstrated in Section <ref>, foreground and target contaminations can be accounted for by describing the mismatch between actual galaxies and observed photometric targets as multiplicative correction factors to estimated spectroscopic completeness maps. These correction factors can be described as combinations of various templates for foreground effects, such as those introduced by stars or dust, and corresponding unknown template coefficients. The aim of this work is to jointly infer these template coefficients together with the three dimensional density field, corresponding cosmological power-spectra, galaxy biases and unknown noise levels. This goal can be achieved by simply adding an additional sampling scheme for foreground coefficients to the block sampling framework of the ARES algorithm. Interestingly, as discussed in Section <ref> and Appendix <ref>, we were able to derive a direct sampling scheme by introducing auxiliary random fields and marginalizing over them. The corresponding algorithm to generate random realizations of foreground template coefficients is given in Algorithm <ref>.

We test the performance of the improved ARES algorithm via applications to mock galaxy observations. To further evaluate the impact of foreground and target contamination effects on the inference of cosmological power-spectra, we compare two test scenarios, with and without treatment of foreground effects. The corresponding artificial galaxy observations were self-consistently generated according to the data model described in Section <ref>. In particular, these artificial observations seek to emulate realistic features of the SDSS DR7 main, the LOW-Z and the CMASS galaxy samples. Effectively this results in five artificial galaxy surveys that are jointly handled by the ARES framework, while self-consistently accounting for their respective systematic and stochastic uncertainties, including survey geometries, selection effects, galaxy biasing and foregrounds.

This artificial data was then used to test the statistical performance of our sampling framework. In particular, we tested the burn-in behaviour of the algorithm by starting the Markov chain from an over-dispersed state. As described in Section <ref>, we start the chain with an initial power-spectrum scaled by a factor of 0.1. The subsequent burn-in behaviour then manifested itself as a coherent drift of sequential power-spectrum samples towards the preferred regions in parameter space. We estimated the burn-in phase to be completed after ∼ 2000 sampling transitions. The statistical efficiency of the sampler was estimated by measuring the correlation length between subsequent posterior power-spectrum samples. As demonstrated in Section <ref>, the sampler exhibits a correlation length of a few hundred samples. This leaves us with a numerically efficient sampling framework to explore cosmological power-spectra in deep galaxy surveys. It should be remarked that various possibilities exist to further improve the statistical efficiency, but details are left to future publications.

Results for inferring foreground and target contamination template coefficients are described in Section <ref>. In particular, we have used two realistic foreground templates describing foreground stars in the galaxy and dust extinction.
Furthermore, our implementation is general enough to account for an arbitrary number of foreground templates. To demonstrate the feasibility of inferring these parameters jointly with the cosmological power-spectrum, three dimensional density fields, galaxy biases and noise levels, we presented two dimensional marginalized distributions for the template coefficients. These results show that the different contamination contributions can be recovered within 2.3 sigma of their input values. Given the inferred foreground coefficients, it is further possible to reconstruct an effective completeness mask and corresponding uncertainties. As expected, in the example tested in this work, the uncertainties in the recovered completeness masks are largest in regions where stellar contaminations of the target distribution are also large.

In Section <ref> we have studied the impact of foreground and target contaminations on the inference of three dimensional density fields from galaxy surveys. To do so we have contrasted runs with and without foreground treatment. Ignoring foreground effects when inferring density fields yields excessive large scale power, particularly in regions most affected by such contaminations. In contrast, a detailed Bayesian treatment of foreground systematics yields inferred density fields showing a homogeneous distribution of power across the inference domain. It must be remarked that if foreground contaminations are not explicitly modelled within the data model, then their effects will be attributed to the real signal.

This result is also in agreement with the inferred cosmological posterior power-spectra, as presented in Section <ref>. The inferred power-spectrum of the run without foreground treatment agrees with the visual impression obtained from the corresponding three dimensional density fields. In particular, it reflects the observed excess in large scale power. Ignoring foreground effects may further lead to an overall bias across the entire Fourier domain with respect to the true underlying power-spectrum. In contrast, a detailed Bayesian foreground treatment yields inferred power-spectra in agreement with the underlying fiducial truth over the entire range of Fourier modes considered in this work. We have further tested the impact of different foreground effects on the inference of power-spectrum amplitudes by estimating their correlations with the foreground template coefficients throughout the full Fourier range. These results show that correlations and anti-correlations can amount to up to ten percent throughout large ranges in Fourier space. The ARES algorithm also accounts for the artificial mode coupling between power-spectrum amplitudes introduced by survey geometries and completeness masks. To demonstrate this fact we have estimated the correlation matrix of power-spectrum amplitudes from our Markov runs. These tests show that residual artificial mode coupling is typically much less than ten percent. These results indicate the validity of the algorithm in scenarios with heavily masked data.
We are currently using our method to process the actual BOSS data, which will be presented in our companion paper <cit.>. As demonstrated in this work, a detailed treatment of foreground and target contamination is essential to recover unbiased estimates of three dimensional density fields and corresponding cosmological power-spectra from present and next generation surveys. The proposed ARES algorithm provides a joint and statistically rigorous Bayesian inference framework to achieve this goal and prevent misinterpretation of observations.

This research was supported by the DFG cluster of excellence "Origin and Structure of the Universe" (www.universe-cluster.de). This work, made in the ILP LABEX (under reference ANR-10-LABX-63), was supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02. The Parkes telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. We acknowledge financial support from the "Programme National de Cosmologie and Galaxies" (PNCG) of CNRS/INSU, France. This work was supported in part by the ANR grant ANR-16-CE23-0002.

§ SAMPLING FOREGROUND COEFFICIENTS

In this appendix, we derive the sampling procedure for a single foreground template coefficient α_k. As described in Section <ref>, the full joint distribution of all template coefficients may be sampled by performing a sequential iterative block sampling procedure. In the absence of any a priori information on the foreground coefficient α_k, we follow a maximally conservative approach by assuming a uniform prior distribution. Using Equation (<ref>), the logarithm of the conditional posterior distribution for a foreground coefficient in a single galaxy catalogue can then be written as:

log 𝒫(α_k | {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) = -1/2 ∑_i=0^N_v { [N_i - N̅ M_i({α_n}) R_i (1+D_i b δ_i)]^2 / (N̅ M_i({α_n}) R_i) + log(N̅ M_i({α_n}) R_i) }.

We can simplify the notation by noting that the different foreground templates contribute multiplicatively to an effective survey response operator. We can therefore collapse all multiplicative foreground contributions, except the one currently under consideration, into the effective survey response operator:

R̃_i = N̅ R_i ∏_n ∖ k (1 - α_n F_n,i),

where k labels the currently considered foreground template and we have used Equation (<ref>) to factorize the foreground contributions. We further introduce the vector:

A_i = R̃_i (1+D_i b δ_i).

Given this notation, the conditional posterior distribution for α_k simplifies to:

log 𝒫(α_k | {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) = -1/2 ∑_i=0^N_v { [N_i - (1 - α_k F_k,i) A_i]^2 / [(1 - α_k F_k,i) R̃_i] + log[(1 - α_k F_k,i) R̃_i] }.

The expression can be further compressed by introducing the following indexed quantities:

B_i = N_i - A_i, C_k,i = A_i F_k,i, and γ_k,i = F_k,i R̃_i.

Consequently, the conditional posterior distribution can be expressed as:

log 𝒫(α_k | {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) = -1/2 ∑_i=0^N_v { (B_i + α_k C_k,i)^2 / (R̃_i - α_k γ_k,i) + log(R̃_i - α_k γ_k,i) }.

In order for Equation (<ref>) to represent a proper probability distribution, the following positivity requirement on the variances needs to hold:

∀ i, R̃_i - α_k γ_k,i > 0.

Similarly it is required that:

∀ i, R̃_i > 0,

which states that the survey response operator should be positive definite.
To ensure these requirements, we split the effective survey response operator R̃_i as follows:

R̃_i = R̃'_i,k + ω γ_k,i.

By also requiring R̃'_i,k > 0, we obtain the following requirement on the scalar quantity ω:

R̃'_i,k = R̃_i - ω γ_k,i > 0, which leads to ω < R̃_i/γ_k,i ∀ i.

We therefore choose ω to be:

ω = min_i(R̃_i/γ_k,i).

Given these definitions, one can express the conditional posterior distribution as:

log 𝒫(α_k | {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) = -1/2 ∑_i=0^N_v { (B_i + α_k C_k,i)^2 / [R̃'_i,k + (ω - α_k) γ_k,i] + log[R̃'_i,k + (ω - α_k) γ_k,i] }.

Note that this distribution can be described conveniently as the marginalization over a set of auxiliary fields {t_i}:

𝒫(α_k | {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) ∝ ∏_i ∫_-∞^∞ dt_i [ e^{-1/2 (B_i + α_k C_k,i - t_i)^2 / ((ω - α_k) γ_k,i)} / √(2π (ω - α_k) γ_k,i) ] [ e^{-1/2 t_i^2 / R̃'_i,k} / √(2π R̃'_i,k) ].

This approach therefore follows a similar line of reasoning as discussed in our previous work presenting a messenger field Gibbs sampler <cit.>. Here we propose to jointly sample the template coefficient α_k with the auxiliary messenger field t_i via a two-step block sampling procedure. First we generate realizations of the messenger field t_i conditional on the current value of α_k, and then we draw a new value of α_k conditional on the given realization of t_i. We loop over this small block ten times to ensure a minimal mixing of the variables. To simplify notation, we introduce the following change of variable:

ξ = ω - α_k.

This yields the joint distribution:

𝒫(ξ, {t_i} | {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) ∝ ∏_i [ e^{-1/2 (B_i + (ξ - ω) C_k,i - t_i)^2 / (ξ γ_k,i)} / √(2π ξ γ_k,i) ] [ e^{-1/2 t_i^2 / R̃'_i,k} / √(2π R̃'_i,k) ].

[Algorithm <ref> (caption): algorithm derived in this appendix to sample the values of α_k, the multiplicative coefficient attached to the k-th foreground.]

As can easily be confirmed, sampling the messenger field t_i amounts to simply generating normal random variates with the following means and variances:

μ_i,k = R̃'_i,k / (ξ γ_k,i + R̃'_i,k) (B_i + (ξ - ω) C_k,i),

and

σ_i,k = R̃'_i,k ξ γ_k,i / (ξ γ_k,i + R̃'_i,k).

To generate realizations of the ξ values, we introduce the following quantities:

w_k = ∑_i (B_i - C_k,i ω - t_i)^2 / γ_k,i = ∑_i w_k,i(t_i)

and

z_k = ∑_i (C_k,i)^2 / γ_k,i = ∑_i z_k,i(t_i).

With these definitions, the conditional distribution from which to sample ξ turns into a generalized inverse Gaussian (GIG) distribution given as:

𝒫(ξ | {t_i}, {α_n}∖α_k, {N_i}, {δ_i}, {N̅}) ∝ 1/ξ^{N_v/2} e^{-1/2 (w_k/ξ + ξ z_k)},

where N_v is the number of observed grid elements. The GIG distribution can be conveniently sampled with standard approaches as described in the literature <cit.>. Finally, to obtain a sample of the foreground coefficient α_k, we invert the transformation in Equation (<ref>):

α_k = ω - ξ.

The respective realizations of the t_i field are no longer required and are immediately discarded, which amounts to a marginalization over the t_i values. An efficient sampling algorithm that avoids storing the full t_i vector is proposed in Algorithm <ref>.
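For concreteness, the following sketch implements one such block update in Python, using SciPy's generalized inverse Gaussian distribution. The mapping of our (w_k, z_k, N_v) parametrization onto SciPy's (p, b, scale) convention is our own derivation and should be checked against the library documentation; all inputs are placeholder arrays.

```python
import numpy as np
from scipy.stats import geninvgauss

rng = np.random.default_rng(0)

def sample_alpha_k(alpha_k, B, C, gamma, R_eff, n_inner=10):
    """One block update of a single foreground coefficient alpha_k.

    B, C, gamma and R_eff hold the per-pixel quantities B_i, C_{k,i},
    gamma_{k,i} and the effective response R~_i (all 1-d arrays).
    Assumes alpha_k < omega on entry, so that xi starts positive.
    """
    n_v = len(B)
    omega = np.min(R_eff / gamma)            # omega = min_i(R~_i / gamma_{k,i})
    R_prime = R_eff - omega * gamma          # split R~_i = R'_{i,k} + omega gamma_{k,i}
    xi = omega - alpha_k                     # change of variable xi = omega - alpha_k
    for _ in range(n_inner):                 # loop the two-step block several times
        # Step 1: messenger field t_i ~ Normal(mu_{i,k}, sigma_{i,k})
        denom = xi * gamma + R_prime
        mu = R_prime / denom * (B + (xi - omega) * C)
        var = R_prime * xi * gamma / denom
        t = rng.normal(mu, np.sqrt(var))
        # Step 2: xi ~ GIG via SciPy's (p, b, scale) convention
        w = np.sum((B - C * omega - t) ** 2 / gamma)
        z = np.sum(C ** 2 / gamma)
        p = 1.0 - 0.5 * n_v                  # from P(xi) ~ xi^(-N_v/2) exp(-(w/xi + z xi)/2)
        xi = geninvgauss.rvs(p, np.sqrt(w * z), scale=np.sqrt(w / z), random_state=rng)
    return omega - xi                        # invert the transformation: alpha_k = omega - xi
```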
"authors": [
"Jens Jasche",
"Guilhem Lavaux"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170627180000",
"title": "Bayesian power-spectrum inference with foreground and target contamination treatment"
} |
^1 Department of Informatics, Indiana University, Bloomington, IN 47408, United States
^2 Santa Fe Institute, Santa Fe, NM 87501, United States (current affiliation)
^3 Amazon.com
^4 Epic.com
^+ these authors contributed equally to this work
Yong-Yeol Ahn ([email protected])

We investigate the association between musical chords and lyrics by analyzing a large dataset of user-contributed guitar tablatures. Motivated by the idea that the emotional content of chords is reflected in the words used in corresponding lyrics, we analyze associations between lyrics and chord categories. We also examine the usage patterns of chords and lyrics in different musical genres, historical eras, and geographical regions. Our overall results confirm a previously known association between Major chords and positive valence. We also report a wide variation in this association across regions, genres, and eras. Our results suggest the possible existence of different emotional associations for other types of chords.

The Minor Fall, the Major Lift: Inferring Emotional Valence of Musical Chords through Lyrics
Artemy Kolchinsky^1,2,+, Nakul Dhande^1,3,+, Kengjeun Park^1,4, Yong-Yeol Ahn^1
============================================================================================

§ INTRODUCTION

The power of music to evoke strong feelings has long been admired and explored <cit.>. Although music has accompanied humanity since the dawn of culture <cit.> and its underlying mathematical structure has been studied for many years <cit.>, understanding the link between music and emotion remains a challenge <cit.>.

The study of music perception has been dominated by methods that directly measure emotional responses, such as self-reports, physiological and cognitive measurements, and developmental observations <cit.>. Such methods may produce high-quality data, but the data collection process involved is both labor- and resource-intensive. As a result, creating large datasets and discovering statistical regularities has been a challenge. Meanwhile, the growth of music databases <cit.> as well as the advancement of the field of Music Information Retrieval (MIR) <cit.> have opened new avenues for data-driven studies of music. For instance, sentiment analysis <cit.> has been applied to uncover a long-term trend of declining valence in popular song lyrics <cit.>. It has been shown that lexical features from lyrics <cit.>, metadata <cit.>, social tags <cit.>, and audio-based features can be used to predict the mood of a song. There has also been an attempt to examine the associations between lyrics and individual chords using a machine translation approach, which confirmed the notion that Major and Minor chords are associated with happy and sad words respectively <cit.>.

Here, we propose a novel method to study the associations between chord types and emotional valence. In particular, we use sentiment analysis to analyze chord categories (e.g. Major and Minor) and their associations with sentiment and words across genres, regions, and eras. To do so, we adopt a sentiment analysis method that uses the crowd-sourced LabMT 1.0 valence lexicon <cit.>. Valence is one of the two basic emotional axes <cit.>, with higher valence corresponding to more attractive/positive emotion. The lexicon contains valence scores ranging from 0.0 (saddest) to 9.0 (happiest) for 10,222 common English words, obtained by surveying Amazon's Mechanical Turk workers.
The overall valence of a piece of text, such as a sentence or document, is measured by averaging the valence scores of the individual words within the text. This method has been successfully used to obtain insight into a wide variety of corpora <cit.>. Here, we apply this sentiment analysis method to a dataset of guitar tablatures, which contain both lyrics and chords, extracted from <ultimate-guitar.com>. We collect all words that appear with a specific chord and create a large "bag of words" (a frequency list of words) for each chord (see Fig. <ref>). We perform our analysis by aggregating chords based on their `chord category' (e.g., Major chords, Minor chords, Dominant 7th chords, etc.). In addition, we also acquire metadata from the Gracenote API regarding the genre of albums, as well as the era and geographical region of musical artists. We then perform our analysis of the associations between lyrics sentiment and chords within the categories of genre, era, and region. Details of our methodology are described in the next section.

§ MATERIALS AND METHODS

Guitar tabs were obtained from <ultimate-guitar.com> <cit.>, a large online user-generated database of tabs, while information about album genre, artist era, and artist region was obtained from the Gracenote API <cit.>, an online musical metadata service.

§.§ Chords-lyrics association

<ultimate-guitar.com> is one of the largest user-contributed tab archives, hosting more than 800,000 songs. We examined 123,837 songs that passed the following criteria: (1) we only kept guitar tabs and ignored those for other instruments such as the ukulele; (2) we ignored non-English songs (those having fewer than half of their words in an English word list <cit.> or identified as non-English by the language-detection library of <cit.>); (3) when multiple tabs were available for a song, we kept only the one with the highest user-assigned rating. We then cleaned the raw HTML sources and extracted chord and lyric transcriptions. As an example, Fig. <ref> shows how the tablature of Leonard Cohen's "Hallelujah" <cit.> is processed to produce a chord-lyrics table. Sometimes, chord symbols appeared in the middle of words; in such cases, we associated the entire word with the chord that appeared in its middle, rather than with the previous chord. In addition, chords that could not be successfully parsed or that had no associated lyrics were dropped.
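To make the pairing procedure concrete, here is a simplified sketch of how a chord line and the lyric line beneath it might be aligned by character position. The regular expression and the alignment rule are our own simplifications of the pipeline described above (in particular, mid-word chords are handled only approximately here).

```python
import re

CHORD_RE = re.compile(r"[A-G][#b]?[A-Za-z0-9#*/+-]*")

def align_chords(chord_line, lyric_line):
    """Associate each chord with the lyric text from its position to the next chord."""
    chords = [(m.start(), m.group()) for m in CHORD_RE.finditer(chord_line)]
    pairs = []
    boundaries = chords[1:] + [(len(lyric_line), None)]
    for (start, chord), (end, _) in zip(chords, boundaries):
        words = lyric_line[start:end].split()
        pairs.append((chord, words))
    return pairs

# Example lines in the style of a guitar tab
chord_line = "C                Am"
lyric_line = "I heard there was a secret chord"
print(align_chords(chord_line, lyric_line))
# [('C', ['I', 'heard', 'there', 'was']), ('Am', ['a', 'secret', 'chord'])]
```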
§.§ Metadata collection using Gracenote API

We used the Gracenote API (<http://gracenote.com>) to obtain metadata about artists and song albums. We queried the title and artist name of the 124,101 songs that were initially obtained from <ultimate-guitar.com>, successfully retrieving Gracenote records for 89,821 songs. Songs that did not match a Gracenote record were dropped. For each song, we extracted the following metadata fields:

* The geographic region from which the artist originated (e.g. North America). This was extracted from the highest-level geographic labels provided by Gracenote.
* The musical genre (e.g. 50's Rock). This was extracted from the second-level genre labels assigned to each album by Gracenote.
* The historical era at the level of decades (e.g. 1970's). This was extracted from the first-level era labels assigned to each artist by Gracenote. Approximately 6,000 songs were not assigned to an era, in which case they were assigned to the decade of the album release year as specified by Gracenote.

In our analysis, we reported individual statistics only for the most popular regions (Asia, Australia/Oceania, North America, Scandinavia, Western Europe), genres (the top 20 most popular genres), and eras (1950's through 2010's).

§.§ Determining chord categories

We normalized chord names and classified them into chord categories according to chord notation rules from an online resource <cit.>. All valid chord names begin with one or two characters indicating the root note (e.g. C or F#), which are followed by characters that indicate the chord category. We considered the following chord categories <cit.> (standard notation suffixes are given for illustration):

* Major: A Major chord is a triad with a root, a major third and a perfect fifth. Major chords are indicated using either only the root note, or the root note followed by a major suffix such as `maj' or `M'. For instance, C, Cmaj, and GM were considered Major chords.
* Minor: A Minor chord is also a triad, containing a root, a minor third and a perfect fifth. The notation for Minor chords is the root note followed by `m' or `min'. For example, Cm, F#m and Amin were considered Minor chords.
* 7th: A seventh chord has a seventh interval in addition to a Major or Minor triad. A Major 7th consists of a Major triad and an additional Major seventh, and is indicated by the root note followed by `maj7' or `M7' (e.g. Cmaj7). A Minor 7th consists of a Minor triad with an additional Minor seventh, and is indicated by the root note followed by `m7' or `min7' (e.g. Am7). A Dominant 7th is a diatonic seventh chord that consists of a Major triad with an additional Minor seventh, and is indicated by the root note followed by the numeral `7' (e.g. C7, G7).
* Special chords with `*': In tab notation, the asterisk `*' is used to indicate special instructions and can have many different meanings. For instance, C* may indicate that the C chord should be played with a palm mute, with a single strum, or with some other special instruction usually indicated in free text in the tablature. Because in most cases the underlying chord is still played, in this study we map chords with asterisks to their respective non-asterisk versions. For instance, we consider C* to be the same as C, and Am* to be the same as Am.
* Other chords: There were several other categories of chords that we do not analyze individually in this study. One of these is Power chords, which are dyads consisting of a root and a perfect fifth. Because Power chords are highly genre specific, and because they sometimes function musically as Minor or Major chords, we eliminated them from our study. For reasons of simplicity and statistical significance, we also eliminated several other categories of chords, such as Augmented and Diminished chords, which appeared infrequently in our dataset.

In total, we analyzed 924,418 chords (see the next subsection). Figure <ref> shows the prevalence of different chord categories among these chords.
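A minimal sketch of rule-based chord categorization along these lines; the suffix patterns below are indicative of standard chord notation rather than an exhaustive reproduction of the cited notation rules.

```python
import re

def chord_category(chord):
    """Classify a chord symbol into one of the categories used in this study."""
    chord = chord.rstrip("*")                 # map special '*' chords to the base chord
    m = re.match(r"^[A-G][#b]?(.*)$", chord)
    if m is None:
        return None                           # unparsable chord name: dropped
    suffix = m.group(1)
    if suffix in ("", "maj", "M"):
        return "Major"
    if suffix in ("m", "min"):
        return "Minor"
    if suffix in ("maj7", "M7"):
        return "Major 7th"
    if suffix in ("m7", "min7"):
        return "Minor 7th"
    if suffix == "7":
        return "Dominant 7th"
    if suffix == "5":
        return "Power"                        # excluded from the analysis
    return "Other"                            # augmented, diminished, etc.

assert chord_category("F#m7") == "Minor 7th"
assert chord_category("G*") == "Major"
```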
§.§ Sentiment analysis

Sentiment analysis was used to measure the valence (happiness vs. unhappiness) of chord lyrics. We employed a simple methodology based on the crowd-sourced LabMT valence lexicon <cit.>. This method was chosen because (1) it is simple and scalable, (2) it is transparent, allowing us to calculate the contribution of each word to the final valence score, and (3) it has been shown to be useful in many studies <cit.>. The LabMT lexicon contains valence scores ranging from 0.0 (saddest) to 9.0 (happiest) for 10,222 common English words, as obtained by surveying Amazon's Mechanical Turk workers. The valence assigned to a sequence of words (e.g. the words in the lyrics corresponding to Major chords) was computed by mapping each word to its corresponding valence score and then computing the mean. Words not found in the LabMT lexicon were ignored; in addition, following recommended practices for increasing the sentiment signal <cit.>, we ignored emotionally-neutral words having a valence strictly between 3.0 and 7.0. Chords that were not associated with any sentiment-mapped words were also ignored. The final dataset contained 924,418 chords from 86,627 songs.

§.§ Word shift graphs

In order to show how a set of lyrics (e.g. lyrics corresponding to songs in the Punk genre) differs from the overall lyrics dataset, we use word shift graphs <cit.>. We designate the whole dataset as the reference (baseline) corpus and call the set of lyrics that we want to compare the comparison corpus. The difference in their overall valence can then be broken down into the contribution from each individual word. Increased valence can result either from a higher prevalence (frequency of occurrence) of high-valence words or from a lower prevalence of low-valence words. Conversely, lower valence can result from a higher prevalence of low-valence words or a lower prevalence of high-valence words. The percentage contribution of an individual word i to the valence difference between a comparison and a reference corpus can be expressed as:

100 · (h_i - h^(ref))^+/- (p_i - p_i^(ref))^↑/↓ / |h^(comp) - h^(ref)|,

where h_i is the valence score of word i in the LabMT lexicon, h^(ref) and h^(comp) are the mean valences of the words in the reference corpus and comparison corpus respectively, p_i is the normalized frequency of word i in the comparison corpus, and p_i^(ref) is the normalized frequency of word i in the reference corpus (normalized frequencies are computed as p_i = n_i/∑_i' n_i', where n_i is the number of occurrences of word i). The first term (indicated by `+/-') measures the difference in valence between word i and the mean valence of the reference corpus, while the second term (indicated by `↑/↓') measures the difference in word prevalence between the comparison and reference corpus. In plotting the word shift graphs, for each word we use +/- signs and blue/green bar colors to indicate the (positive or negative) sign of the valence term and ↑/↓ arrows to indicate the sign of the prevalence term.
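The valence measure and the word-shift decomposition above translate directly into code; a sketch, where `lexicon` is a placeholder dictionary mapping words to LabMT-style valence scores:

```python
import numpy as np
from collections import Counter

def mean_valence(words, lexicon):
    """Mean valence, ignoring unknown and neutral (3.0 < h < 7.0) words."""
    scores = [lexicon[w] for w in words
              if w in lexicon and not 3.0 < lexicon[w] < 7.0]
    return np.mean(scores) if scores else None

def word_shift(comp_words, ref_words, lexicon):
    """Percentage contribution of each word to the valence difference."""
    h_ref = mean_valence(ref_words, lexicon)
    h_comp = mean_valence(comp_words, lexicon)
    keep = lambda ws: Counter(w for w in ws
                              if w in lexicon and not 3.0 < lexicon[w] < 7.0)
    p_comp, p_ref = keep(comp_words), keep(ref_words)
    n_comp, n_ref = sum(p_comp.values()), sum(p_ref.values())
    shifts = {}
    for w in set(p_comp) | set(p_ref):
        dp = p_comp[w] / n_comp - p_ref[w] / n_ref   # prevalence difference
        shifts[w] = 100.0 * (lexicon[w] - h_ref) * dp / abs(h_comp - h_ref)
    return shifts
```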
§.§ Model comparison

In the Results section, we evaluate which explanatory factors (chord category, genre, era, and region) best account for differences in valence scores. Using the toolbox of <cit.>, we estimated linear regression models in which the mean valence of each chord served as the response variable and the most popular chord categories, genres, eras, and regions served as categorical predictor variables. The variance of the residuals was used to compute the proportion of variance explained when using each factor in turn.

We also compared models that used combinations of factors. As before, we fit linear models that predicted valence. Now, however, explanatory factors were added in a greedy fashion, with each additional factor chosen to minimize the Akaike information criterion (AIC) of the overall model.
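This per-factor comparison and the greedy forward selection can be reproduced with ordinary least squares on one-hot-encoded categorical predictors; the sketch below uses the statsmodels formula interface, and the data-frame column names are placeholders for the dataset described above.

```python
import statsmodels.formula.api as smf

# df is assumed to hold one row per chord, with columns
# 'valence', 'category', 'genre', 'era' and 'region' (placeholder names).
def variance_explained(df, factor):
    """Proportion of valence variance explained by a single categorical factor."""
    return smf.ols(f"valence ~ C({factor})", data=df).fit().rsquared

def greedy_aic(df, factors=("category", "genre", "era", "region")):
    """Add factors greedily, each step minimizing the model's AIC."""
    chosen, remaining, best_aic = [], list(factors), float("inf")
    while remaining:
        formula = lambda fs: "valence ~ " + " + ".join(f"C({g})" for g in fs)
        scores = {f: smf.ols(formula(chosen + [f]), data=df).fit().aic
                  for f in remaining}
        f_best = min(scores, key=scores.get)
        if scores[f_best] >= best_aic:
            break                             # no remaining factor improves the AIC
        best_aic = scores[f_best]
        chosen.append(f_best)
        remaining.remove(f_best)
    return chosen, best_aic
```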
Conversely, the highest contributions to the Punk genre come from the overexpression of low-valence words (e.g. “dead”, “sick”, “hell”). Some exceptions exist: for example, Religious lyrics underexpress high-valence words such as “baby” and “like”, while Punk lyrics underexpress the low-valence word “cry”.

§.§ Era

In this section, we explore sentiment trends for artists across different historical eras. Fig. <ref>A shows the mean valence of lyrics in different eras. We find that valence has steadily decreased since the 1950's, confirming results of previous sentiment analysis of lyrics <cit.>, which attributed the decline to the recent emergence of `dark' genres such as metal and punk. However, our results demonstrate that this trend has recently undergone a reversal: lyrics have higher valence in the 2010's era than in the 2000's era. As in the last section, we analyze differences between Major and Minor chords for lyrics belonging to different eras (Fig. <ref>B). Although Major chords have a generally higher valence than Minor chords, surprisingly this distinction does not hold in the 1980's era, in which Minor and Major chord valences are similar. The genres in the 1980's that had Minor chords with higher mean valence than Major chords — in other words, which had an `inverted' Major/Minor valence pattern — include Alternative Folk, Indie Rock, and Punk (data not shown). Finally, we report changes in chord usage patterns across time. Fig. <ref>C shows the proportion of chords belonging to each chord category in different eras (note the logarithmic scale). Since the 1950's, Major chord usage has been stable while Minor chord usage has been steadily growing. Dominant 7th chords have become less prevalent, while Major 7th and Minor 7th chords saw an increase in usage during the 1970's. The finding that Minor chords have become more prevalent while Dominant 7th chords have become rarer agrees with a recent data-driven study of the evolution of popular music genres <cit.>. The authors attribute the latter effect to the decline in the popularity of blues and jazz, which frequently use Dominant 7th chords. However, we find that this effect holds widely, with Dominant 7th chords diminishing in prevalence even when we exclude genres associated with Blues and Jazz (data not shown). More qualitatively, musicologists have argued that many popular music styles in the 1970's exhibited a decline in the use of Dominant 7th chords and a growth in the use of Major 7th and Minor 7th chords <cit.> — a pattern clearly seen in the corresponding increases in Fig. <ref>C.

§.§ Region

In this section, we evaluate the emotional content of lyrics from artists in different geographical regions. Fig. <ref>A shows that artists from Asia have the highest-valence lyrics, followed by artists from Australia/Oceania, Western Europe, North America, and finally Scandinavia, the lowest-valence geographical region. The latter region's low valence is likely due to the over-representation of `dark' genres (such as metal) among Scandinavian artists <cit.>. As in previous sections, we compare differences in valence of Major and Minor chords for different regions (Fig. <ref>B).
All regions except Asia have a higher mean valence for Major chords than for Minor chords, while for the Asian region there is no significant difference. There are several important caveats to our geographical analysis. In particular, our dataset consists of only English-language songs, and is thus unlikely to be representative of overall musical trends in non-English speaking countries. This bias, along with others, is discussed in more depth in the Discussion section.

§.§ Model comparison

We have shown that mean valence varies as a function of chord category, genre, era, and region (here called explanatory factors). We now evaluate which explanatory factors best account for differences in valence scores. Fig. <ref>A shows the proportion of variance explained when using each factor in turn as a predictor of valence. This shows that genre explains the most variation in valence, followed by era, chord category, and finally region. It is possible that variation in valence associated with some explanatory factors is in fact `mediated' by other factors. For example, we found that mean valence declined from the 1950's era through the 2000's, confirming previous work <cit.> that explained this decline by the growing popularity of `dark' genres like Metal and Punk over time; this is an example in which valence variation over historical eras is argued to actually be attributable to variation in the popularities of different genres. As another example, it is possible that Minor chords are lower valence than Major chords because they are overexpressed in dark genres, rather than due to their inherent emotional content. We investigate this effect using statistical model selection. For instance, if the valence variation over chord categories can be entirely attributed to genre (i.e. darker genres have more Minor chords), then model selection should prefer a model that contains only the genre explanatory factor over one that contains both the genre and chord category explanatory factors. We fit increasingly large models while computing their Akaike information criterion (AIC) scores, a model selection score (lower is better). As Fig. <ref>B shows, the model that includes all four explanatory factors has the lowest AIC, suggesting that chord category, genre, era, and region are all important factors for explaining valence variation.

§ DISCUSSION

In this paper, we propose a novel data-driven method to uncover the emotional valence associated with different chords as well as with different geographical regions, historical eras, and musical genres. We then apply it to a novel dataset of guitar tablatures extracted from <ultimate-guitar.com> along with musical metadata provided by the Gracenote API. We use word shift graphs to characterize the meaning of chord categories as well as categories of lyrics. We find that Major chords are associated with higher-valence lyrics than Minor chords, consistent with previous music perception studies showing that Major chords evoke more positive emotional responses than Minor chords <cit.>. For an intuition regarding the magnitude of the difference, the mean valence of Minor chord lyrics is approximately 6.16 (e.g. the valence of the word “people” in our sentiment dictionary), while the mean valence of Major chord lyrics is approximately 6.28 (e.g. the valence of the word “community” in our sentiment dictionary). Interestingly, we also uncover that three types of 7th chords — Dominant 7ths, Major 7ths, and Minor 7ths — have even higher valence than Major chords.
This effect has not been deeply researched, except for one music perception study which reported that, in contrast to our findings, 7th chords evoke emotional responses intermediate in valence between Major and Minor chords <cit.>. Significant variation exists in the lyrics associated with different geographical regions, musical genres, and historical eras. For example, musical genres demonstrate an intuitive ranking when ordered by mean valence, ranging from the low-valence Punk and Metal genres to the high-valence Religious and 60's Rock genres. We also found that sentiment declined over time from the 1950's until the 2000's. Both of these findings are consistent with the results of a previous study conducted using a different dataset and lexicon <cit.>. At the same time, we report a new finding that the trend of declining valence has reversed itself, with lyrics from the 2010's era having higher valence than those from the 2000's era. Finally, we perform an analysis of the variation of valence among geographical regions. We find that Asia has the highest valence while Scandinavia has the lowest (likely due to the prevalence of `dark' genres in that region). We perform a novel data-driven analysis of the Major vs. Minor distinction by measuring the difference between Major and Minor valence for different regions, genres, and eras. All examined genres except Contemporary R&B/Soul exhibited higher Major chord valence than Minor chord valence. Interestingly, the largest differences of Major above Minor may indicate genres (Classic R&B/Soul, Classic Country, Religious, 60's Rock) that are more musically `traditional'. In terms of historical periods, we find that, unlike other eras, the 1980's era did not have a significant Major-Minor valence difference. This phenomenon calls for further investigation; one possibility is that it may be related to an important period of musical innovation in the 1980's, which was recently reported in a data-driven study of musical evolution <cit.>. Finally, analysis of geographic variation indicates that songs from the Asian region—unlike those from other regions—do not show a significant difference in the valence of Major vs. Minor chords.
In fact, it is known in the musicological literature that the association of positive emotions with Major chords and negative emotions with Minor chords is culture-dependent, and that some Asian cultures do not display this association <cit.>. Our results may provide new supporting evidence of cultural variation in the emotional connotations of the Major/Minor distinction. Finally, we evaluate how much of the variation in valence in our dataset is attributable to chord category, genre, era, and region (we call these four types of attributes `explanatory factors'). We find that genre is the most explanatory, followed by era, chord category, and region. We use statistical model selection to evaluate whether certain explanatory factors `mediate' the influence of others (an example of mediation would be if variation in valence across different eras were actually due to variation in the prevalence of different genres during those eras). We find that all four explanatory factors are important for explaining variation in valence; no explanatory factor totally mediates the effect of the others. Our approach has several limitations. First, the accuracy of tablature chord annotations may be limited because users are not generally professional musicians; for instance, more complex chords may be mis-transcribed as simpler chords. To deal with this, we analyze relatively basic chords—Major, Minor, and 7ths—and, when there are multiple versions of a song, use the tabs with the highest user-assigned rating. Our manual inspection of a small sample of parsed tabs indicated acceptable quality, although a more systematic evaluation of the error rate could be performed using more extensive manual inspection of tabs by professional musicians. There are also significant biases in our dataset. We only consider tablatures for songs entered by users of <ultimate-guitar.com>, which is likely to be heavily biased towards North American and European users and songs. In addition, this dataset is restricted to songs playable by guitar, which selects for guitar-oriented musical genres and may be biased toward emotional meanings tied to guitar-specific acoustic properties, such as the instrument's timbre. Furthermore, our dataset includes only English-language songs and is not likely to be a representative sample of popular music from non-English speaking regions. Thus, for example, the high valence of songs from Asia should not be taken as conclusive evidence that Asian popular music is overall happier than popular music in English-speaking countries. For this reason, the absence of a Major vs. Minor chord distinction in Asia is speculative and requires significant further investigation. Despite these limitations, we believe that our results reveal meaningful patterns of association between music and emotion, at least in guitar-based English-language popular music, and show the potential of our novel data-driven methods. At the same time, applying these methods to other datasets — in particular those representative of other geographical regions, historical eras, instruments, and musical styles — is of great interest for future work. Another promising direction for future work is to move the analysis of emotional content beyond single chords, since emotional meaning is likely to be more closely associated with melodies than with individual chords. For this reason, we hope to extend our methodology to study chord progressions.

The datasets generated during and/or analyzed during the current study are available in the repository <https://goo.gl/R9CqtH>.
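As a companion to the released analysis code referenced below, a minimal sketch of the greedy, AIC-based model selection used in the Model comparison sections is given here. It is a numpy-only illustration under simplifying assumptions (ordinary least squares with dummy-coded categorical factors; Gaussian AIC up to an additive constant), not the authors' implementation; the keys of the factors dictionary are hypothetical.

import numpy as np

def one_hot(values):
    # Dummy-code a categorical variable, dropping the first (reference) level.
    levels = sorted(set(values))[1:]
    return np.array([[float(v == lev) for lev in levels] for v in values])

def ols_aic(y, X):
    # Gaussian AIC of an ordinary-least-squares fit, up to a constant:
    # AIC = n*ln(RSS/n) + 2k, with k counting coefficients plus the variance.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    k = X1.shape[1] + 1
    return len(y) * np.log(rss / len(y)) + 2 * k

def greedy_aic(y, factors):
    # Add factors one at a time; at each step pick the factor whose
    # inclusion yields the lowest AIC for the overall model.
    design = lambda names: np.column_stack([one_hot(factors[n]) for n in names])
    chosen, path = [], []
    while len(chosen) < len(factors):
        best = min((f for f in factors if f not in chosen),
                   key=lambda f: ols_aic(y, design(chosen + [f])))
        chosen.append(best)
        path.append((best, ols_aic(y, design(chosen))))
    return path   # e.g. [('genre', aic_1), ('era', aic_2), ...]

Here y would hold the per-chord mean valences and factors a mapping such as {'chord': [...], 'genre': [...], 'era': [...], 'region': [...]}, one label per chord.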
Code for performing the analysis and generating the plots in this manuscript is available at <https://github.com/artemyk/chordsentiment>. N.D. and Y.Y.A. conceived the idea of analyzing the association between lyric sentiment and chord categories, while A.K. contributed the idea of analyzing the variation of this association across genres, eras, and regions. N.D. and A.K. downloaded and processed the dataset. All authors analyzed the dataset and results. A.K. and N.D. prepared the initial draft of the manuscript. All authors reviewed and edited the manuscript. The author(s) declare no competing financial interests. Y.Y.A. thanks Microsoft Research for the MSR Faculty Fellowship. We would like to thank Rob Goldstone, Christopher Raphael, Peter Miksza, Daphne Tan, and Gretchen Horlacher for helpful discussions and comments. | http://arxiv.org/abs/1706.08609v2 | {
"authors": [
"Artemy Kolchinsky",
"Nakul Dhande",
"Kengjeun Park",
"Yong-Yeol Ahn"
],
"categories": [
"cs.CL",
"cs.SD"
],
"primary_category": "cs.CL",
"published": "20170626213429",
"title": "The Minor Fall, the Major Lift: Inferring Emotional Valence of Musical Chords through Lyrics"
} |
firstpage–lastpageNeutrino self-energy with new physics effects in an external magnetic field John Morales^(a) December 30, 2023 ============================================================================Although originally discovered as a radio-quiet gamma-ray pulsar, J1732-3131 has exhibited intriguing detections at decameter wavelengths. We report an extensive follow-up of the pulsar at 327 MHz with the Ooty radio telescope. Using the previously observed radio characteristics, and with an effective integration time of 60 hrs, we present a detection of the pulsar at a confidence level of 99.82%. The 327 MHz mean flux density is estimated to be 0.5-0.8 mJy, which establishes the pulsar to be a steep spectrum source and one of the least luminous pulsars known to date. We also phase-aligned the radio and gamma-ray profiles of the pulsar, and measured the phase-offset between the main peaks in the two profiles to be 0.24±0.06. We discuss the observed phase-offset in the context of various trends exhibited by the radio-loud gamma-ray pulsar population, and suggest that the gamma-ray emission from J1732-3131 is best explained by outer magnetosphere models. Details of our analysis leading to the pulsar detection, and measurements of various parameters and their implications relevant to the pulsar's emission mechanism are presented.stars: neutron – pulsars: general – pulsars: individual: J1732-3131 – ISM: general – gamma-rays: stars – radio continuum: general § INTRODUCTIONThe number of gamma-ray pulsars detected by the large area telescope (LAT) on-board Fermi satellite has gone much beyond the most optimistic guesses published prior to its launch. This revolution in the population of gamma-ray pulsars is also well reflected by a very significant increase in the number of radio-quiet gamma-ray pulsars <cit.>. Given the statistically significant numbers of radio-quiet and radio-loud pulsars and their sensitive follow-ups at radio-frequencies, models predicting the gamma-ray emission regions in the outer magnetosphere are favored against those supporting the emission sites to be near the polar cap.J1732-3131 is one of the LAT-discovered pulsars, with a rotation period of about 196 ms (see Table 1 for some other parameters of the pulsar). The early radio searches for its counterpart at high radio frequency <cit.> turned out to be unsuccessful. However, searches at decameter wavelengths resulted in an intriguing detection of a faint signal at the expected period of the pulsar at a dispersion measure (DM) of 15.44±0.32<cit.>. The pulsar was detected in only one of the several observing sessions, indicating the sporadic nature of the radio emission. Detection of several mildly bright single pulses in the same observing session, at a DM consistent with that of the periodic signal, further substantiated the findings and suggested the pulsar to be active at radio frequencies . Subsequent deep search at 34 MHz also resulted in evidences of very faint periodic signal from the pulsar, and provided more robust estimate of the flux density <cit.>.A likely explanation for the apparent lack of radio emission from the radio-quiet pulsars is that their narrow radio beams miss the sightline towards Earth <cit.>. Since the radio emission beam is expected to become larger at low frequencies <cit.>, probability of our line-of-sight passing through the beam also increases. 
If the above detections of J1732-3131 at very low frequencies were indeed due to this fact, the pulsar can be expected to be a steep spectrum source. Detections at 34 MHz, when combined with non-detections at high radio frequencies, imply the spectral index to be steeper than -2.3. Deep observations of the pulsar at frequencies above 100 MHz could provide a reasonable estimate of the spectral index, and might even shed some light on its viewing geometry. Since the sky position of J1732-3131 is close to the centre of the Galaxy, the earlier high frequency radio searches would have suffered a loss in sensitivity due to the enhanced system temperature. The upper limit on the 1374 MHz flux density of the pulsar is 59 μJy <cit.>. For comparison, the L-band flux densities of the LAT-discovered pulsars J0106+4855 and J1907+0602 are only 8 and 3 μJy, respectively <cit.>. Hence, the flux density of J1732-3131 at higher radio frequencies might still be well within the range of already detected pulsars, and a deep integration could compensate for the high system temperature and help in detection. Significant variability in the radio flux density of pulsars, on a range of timescales (from a single rotation period to several hundreds of seconds, or even longer), is well known. In the absence of a good understanding of the possible radio emission from the radio-quiet gamma-ray pulsars, their extensive radio follow-ups might also be revealing. Recent detections of several energetic radio bursts from the Geminga pulsar <cit.> demonstrate the possible rewards of an extensive follow-up. The first radio detection of J1732-3131 suggested the presence of occasional bursty emission from the pulsar. This provides a strong motivation for a dedicated follow-up. With the motivations stated above, we conducted deep observations of J1732-3131 using the Ooty radio telescope (ORT). Details of these observations and data processing are given in Section 2, followed by a detailed discussion of the results in Section 3. A summary of our findings is given in Section 4.

§ OBSERVATIONS AND DATA PROCESSING

§.§ Observations and pre-search processing

Observations were conducted using the ORT situated in southern India <cit.>. The telescope is equatorially mounted, and has an offset parabolic cylindrical reflector with dimensions of 530 m and 30 m in the north-south and east-west directions, respectively. With the mechanical steering in the east-west direction, sources can be tracked continuously for approximately 9 hours. In the north-south direction, the telescope beam is steered electronically to point at different declinations (DEC). Extensive observations of J1732-3131 were carried out in March 2014 and April 2015. In March 2014, the source was observed for a total of 70.5 hours, distributed over 36 sessions. Typically, 2 observing sessions were conducted on every day. In April 2015, 37 observing sessions were conducted, amounting to a total of 54.5 hours of observing time. Each observing session was accompanied by a few minutes' observation of a nearby strong pulsar, B1749-28, which was used as a control source. During each of the individual observing sessions, data were recorded in filterbank format, with 1024 channels across a 16 MHz bandwidth centered at 326.5 MHz, with a time resolution of 0.512 ms, using the new pulsar receiver <cit.>. For simplicity, we refer to the centre frequency as 327 MHz in the rest of the paper. The filterbank data were used to identify the parts of the data contaminated by radio frequency interference (RFI).
The identification procedure involves computation of a robust mean and standard deviation, and comparing the data with a specified threshold separately in the time and frequency domains. More details of this procedure can be found in <cit.>. The RFI-contaminated time samples as well as spectral channels are excluded from any further processing.

§.§ Search procedures

The data were searched for transient as well as periodic signals from the pulsar. Briefly, the search for transient signals involves dedispersing the filterbank data for a number of optimally spaced trial DMs within the range 0-50 pc cm^-3, computing smoothed versions of each of the dedispersed timeseries for a number of trial pulse-widths, and searching for events above a specified threshold. The filterbank data from individual sessions were searched for any dispersed signal at the expected rotation period of J1732-3131. Since the rotation ephemeris is known from timing of the gamma-ray data, the periodic radio flux from the pulsar can be probed deeply by combining data from multiple sessions. The deep search for periodic emission from the pulsar involves folding the individual session filterbank data over the rotation period, adding the folded filterbank data from all the sessions in phase, and searching for a dispersed signal. Detailed descriptions of our single pulse as well as periodicity searches can be found in <cit.>.

§ RESULTS AND DISCUSSION

Our searches for bright dispersed pulses and periodic signals using data from individual observing sessions did not result in any significant detection above a signal-to-noise ratio (S/N) threshold of 8σ. More details of the deep searches using data from March 2014 and April 2015 are discussed below separately.

§.§ Deep search using March 2014 data

Out of the 36 observing sessions conducted in 2014, 2 sessions were unusable due to severe contamination from RFI. In 4 other sessions, although typically less than 1% of the samples were identified as RFI-contaminated, the data showed indications of faint RFI and abrupt jumps in power levels. To minimize the effects of such systematics on the search sensitivity, data from all the above mentioned sessions were excluded from any further processing. To carry out a deep search for a dispersed signal using data from the remaining 30 sessions, we used an up-to-date timing model of J1732-3131 obtained from the gamma-ray data <cit.> to predict the pulsar's rotation period and phase at different observing epochs using the pulsar timing software Tempo2[For more information about Tempo2, please refer to the website: <http://www.atnf.csiro.au/research/pulsar/tempo2/>.]. The deep search did not result in any significant signal above our detection threshold of 8σ. To probe a possible underlying periodic signal fainter than our detection threshold, we dedispersed the data using a DM of 15.5 pc cm^-3 and computed average profiles for the individual sessions[The original detection of the pulsar suggested the DM to be 15.44±0.32 pc cm^-3. Our chosen value of 15.5 pc cm^-3 is consistent with this, and choosing the DM to be 15.44 pc cm^-3 would not have changed any of the results presented here — neither qualitatively nor quantitatively.]. To compute the net average profile, the individual profiles were weighted and added coherently in the pulsar's rotation phase. The weights for the individual profiles were chosen to be directly proportional to the product of effective bandwidth and integration time, i.e., inversely proportional to the expected variance in the average profile.
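The weighted, phase-coherent addition of the per-session average profiles can be sketched as follows. This is a minimal illustration (array names and shapes are assumptions): the weights are proportional to effective bandwidth times integration time, i.e., inversely proportional to the expected noise variance, and the per-session phase shifts would come from the timing ephemeris.

import numpy as np

def stack_profiles(profiles, bandwidths, t_ints, phase_shifts):
    # profiles:     (n_sessions, n_bins) per-session folded average profiles
    # bandwidths:   effective (RFI-free) bandwidth of each session
    # t_ints:       effective integration time of each session
    # phase_shifts: per-session phase offsets in bins, predicted from the
    #               gamma-ray timing ephemeris, used to align the profiles
    profiles = np.asarray(profiles, dtype=float)
    weights = np.asarray(bandwidths) * np.asarray(t_ints)   # ∝ 1/variance
    aligned = np.array([np.roll(p, -int(round(s)))
                        for p, s in zip(profiles, phase_shifts)])
    return (weights[:, None] * aligned).sum(axis=0) / weights.sum()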
The 327 MHz net average profile, integrated over an effective duration of 60 hours, is shown in Figure <ref> along with the average profiles from the individual observing sessions. The net average profile exhibits a peak-to-peak S/N of nearly 6. For comparison, the average profile of the pulsar at 34 MHz is overlaid on the net average profile in the upper panel. The two profiles are manually aligned, since the uncertainty in DM is not adequate to phase-align the profiles at such widely separated frequencies. A striking resemblance between the two profiles is clearly evident. Figure <ref> shows the dedispersed and averaged spectrogram of all the data presented in Figure <ref>, and demonstrates that the faint periodic signal is uniformly present across the observing bandwidth.

§.§.§ Shape of the net average profile: a faint underlying signal or artifacts?

Before we quantify the similarity between the average profiles at 327 and 34 MHz, it is important to assess whether the average profile shape has uniform contributions from all the sessions or is dominated by artifacts in a single or a few individual profiles. A conventional way to test this is to examine the significance (e.g., peak S/N or reduced chi-square) of the average profile as data from successive sessions are integrated. Given the low S/N of the net average profile, we have used a bootstrap method to probe whether just a few sessions are affecting the average profile shape. For this purpose, we examine a correlation-based figure-of-merit (FoM) as a function of the integration time. Since the integration time for each of the individual average profiles is 2 hours, we have chosen the step between consecutive trial integration times in our bootstrap method also to be 2 hours. For each trial integration time, we randomly choose a sample of the appropriate number of individual profiles, compute a partial average profile using this sample, and measure its cross-correlation coefficient with the net average profile as a FoM. Each bootstrap sample corresponding to a trial integration time consists of choosing 10^4 non-redundant combinations of sessions using a reservoir sampling algorithm <cit.>, and computing the FoM for each of the combinations. This approach allows us to obtain a distribution of figures of merit for each trial integration time, with the median of the distribution assigned as the average FoM. For trial integration times involving fewer than 4 or more than 26 sessions, all the possible combinations of individual profiles are used to compute the FoM distribution. The average FoM estimated from the above bootstrapping is shown in Figure <ref> as a function of integration time, along with colour-coded maps of the corresponding distributions (histograms) with their peaks normalized to 1. The uniform distributions and the smooth monotonic increase of the average FoM with integration time are clearly evident, and strongly suggest that the shape of the net average profile shown in Figure <ref> represents a faint signal consistently present in the individual profiles obtained from different sessions. To demonstrate how the presence of artifacts in a few individual profiles would have manifested in the above analysis, we simulated 30 normally distributed noise profiles. The root-mean-square (rms) of the noise in these profiles was kept the same as that of the observed profiles. Then a predetermined number of profiles were modified in such a way that the average of all 30 profiles becomes similar to the net average profile in Figure <ref>.
For this purpose, the intensity of a scaled version of the net average profile was randomly distributed among the selected profiles. The profiles to be modified were themselves chosen randomly. Figure <ref> shows the results of the bootstrapping analysis for three different cases, in which 2, 4 and 8 profiles were modified as mentioned above. The first two cases, i.e., when just 2 and 4 profiles are responsible for the average profile shape, result in glaringly different profile significance distributions. The third case starts approaching the uniform distributions and the smooth increase of the average FoM seen for the real data in Figure <ref>. However, minor differences are still noticeable, e.g., the average FoM rises much more sharply up to integration times of about 20 hours. The results become nearly indistinguishable from Figure <ref> when 16 profiles are modified (not shown but assessed separately). Hence, the shape of the net average profile is contributed by a significant number of sessions, perhaps all, and certainly not by just a few.

§.§.§ The net average profile: association with the pulsar

The 327 and 34 MHz profiles shown in the upper panel of Figure <ref> exhibit a striking resemblance, and both are consistent with each other within the noise uncertainties. As a quantitative measure of the similarity, the normalized cross-correlation coefficient between the two profiles is estimated to be 0.72. To estimate the chance probability of obtaining a net average profile with the observed peak-to-peak S/N of nearly 6 and exhibiting such striking similarity with the 34 MHz profile, we performed Monte-Carlo (MC) simulations. Each individual realization in our MC simulation involves generating 30 random (normally distributed) noise profiles, computing an average noise profile from these, and cross-correlating the average profile with the 34 MHz profile. To be compatible with the profiles shown in Figure <ref>, each of the random noise profiles is also smoothed with a 30-bin wide window (∼0.08 in normalized pulse-phase). The resultant average noise profile is cross-correlated with the 34 MHz profile at all possible phase-shifts to determine the maximum normalized correlation coefficient. The maximum correlation coefficient and the peak-to-peak S/N of the average noise profile are noted down. From simulations of 1 million (10^6) such independent realizations, in only 0.18% of the cases was the average noise profile found to have a peak-to-peak S/N of more than 5.5 as well as a maximum correlation coefficient ≥0.72. Hence, the probability of obtaining the 327 MHz profile with its measured significance and similarity to the 34 MHz profile just by chance is only 0.0018. In other words, the 327 MHz profile is consistent with originating from the same source as the 34 MHz profile at a confidence level of 99.82%.
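The above chance-probability estimate can be sketched as below. This is an illustrative, unoptimized re-implementation, not the actual analysis code: the number of profiles, the 30-bin smoothing width and the thresholds follow the text, while the template array, the number of phase bins and the random seed are assumptions.

import numpy as np

def chance_probability(template, n_prof=30, width=30,
                       n_trials=10**6, snr_thresh=5.5, corr_thresh=0.72):
    # Fraction of pure-noise average profiles whose peak-to-peak S/N and
    # maximum correlation with the template exceed the observed values.
    rng = np.random.default_rng(0)
    n_bins = len(template)
    kernel = np.ones(width) / width
    tpl = template - template.mean()
    tpl /= np.linalg.norm(tpl)
    hits = 0
    for _ in range(n_trials):           # reduce n_trials for quick tests
        noise = rng.standard_normal((n_prof, n_bins))
        smooth = np.array([np.convolve(p, kernel, mode='same')
                           for p in noise])
        avg = smooth.mean(axis=0)
        # peak-to-peak S/N relative to the rms of the pure-noise average
        snr_pp = (avg.max() - avg.min()) / avg.std()
        a = (avg - avg.mean()) / np.linalg.norm(avg - avg.mean())
        # maximum normalized cross-correlation over all phase shifts
        corr = max(abs(np.roll(a, s) @ tpl) for s in range(n_bins))
        if snr_pp > snr_thresh and corr >= corr_thresh:
            hits += 1
    return hits / n_trials              # the text quotes ≈ 0.0018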
The confidence gained from the above MC simulations also motivates us to explore the profile significance as a function of DM. The variation of the profile significance with DM, in terms of both the peak S/N and the peak-to-peak S/N, obtained from our deep search methodology mentioned earlier, is shown in Figure <ref>. The profiles corresponding to the individual trial DMs were smoothed by a 30-bin wide window before computing the significance. Although the significance is low, the peak S/N as well as the peak-to-peak S/N are suggestive of a nominal DM of 16±7 pc cm^-3, which is consistent with the DM estimated from the original detection, and provides independent support for the low-significance net average profile originating from the pulsar.

§.§ Deep search using April 2015 data

Compared to the 2014 observations, data from the observing sessions conducted in 2015 showed overall more occurrences of RFI. Strong RFI and visible system artifacts made the data from 8 observing sessions unusable, while those from 3 other sessions showed hints of low-level RFI, and were hence excluded from further processing. The deep search for the periodic signal from the pulsar using data from the remaining 26 sessions, amounting to a total of about 39 hours of integration time, did not result in any significant candidate above our detection threshold of 8σ. Following the procedure detailed in Section <ref>, we used the data from the above 26 sessions to compute the net average profile dedispersed at 15.5 pc cm^-3. Note that the integration time for this profile is about 39 hours, as against 60 hours for the profile shown in Figure <ref>. Furthermore, the sensitivity of the telescope during these observations was lower[A set of bright sources are regularly observed to monitor the sensitivity of the telescope. The factor mentioned in the main text is assessed using these observations.] by a factor of 1.2-1.3 compared to that during the 2014 observations. Due to these factors, the peak-to-peak S/N of the average profile obtained from the 2015 observations barely reaches 3.5-4. The average profiles can also be added in phase to those from the 2014 observations to achieve effectively a larger integration time. However, due to the poorer sensitivity of the telescope during these observations, the effective increase in the integration time is only about 23-27 hours, which translates to only about a 1σ increase in the profile significance. Even this feeble enhancement is subject to the quality of the data being added, such as being free from any underlying faint RFI. The 2015 data are found to be of overall poorer quality than the 2014 data. In any case, combining the observations from the two years did not improve the 2014 average profile by any significant amount.

§.§ Search for continuum emission

Given the large pulse duty cycle, and the possible emission at all pulse phases, the pulsar might be better suited to detection as a source of continuous emission using interferometric observations. With this motivation, we used GMRT archival data at 330 MHz, from nearly 4-hour-long observations conducted in March 2004 (proposal code: 05SBA01) towards a nearby source, G355.5+00. The rms noise obtained at the centre of the field is 0.6 mJy/beam. However, the pulsar is 61' away from the pointing centre, and the primary-beam-corrected rms noise at the location of the pulsar is about 3.4 mJy/beam. No source could be seen within 5' of the position of the pulsar (17h32m33.54s, -31d31'23"; J2000). Therefore, we put a 3σ continuous emission upper limit of 10 mJy on the pulsar.

§.§ Flux density and spectral index estimates

To estimate the sky background temperature towards the pulsar, we used a low frequency sky map generating program, LFmap,[<http://www.astro.umd.edu/~emilp/LFmap/>] to construct a sky map at 326.5 MHz. LFmap scales the 408 MHz all-sky map of <cit.> to the desired low frequency, taking into account the CMB, isotropic emission from unresolved extra-galactic sources, and the anisotropic Galactic emission.
By computing a weighted average of the 326.5 MHz map at several points across the beam, using the following theoretical beam-gain pattern:

P(RA,DEC) = sinc^2(b sin(RA)/λ) × sinc^2(a sin(DEC)/λ),

where a = cos(DEC)×530 m, b = 30 m and λ = 0.92 m, the sky background temperature towards the pulsar is estimated to be 610 K. Assuming an effective bandwidth of 13.6 MHz (85% of the total bandwidth), a 50% pulse duty cycle, a receiver temperature of 150 K, and an effective collecting area of 7500 m^2 (55% of the physical area projected towards the pulsar), we estimate the pulsar's average flux density to be in the range 0.5-0.8 mJy (for a profile S/N in the range 3-5, as achieved from 60 hours of integration; Figure <ref>). The above flux density estimate, when combined with that at 34 MHz and assuming no turn-over in the spectrum, suggests the spectral index of the pulsar to be in the range -2.4 to -3.0. This range is consistent with the spectral index upper limit of -2.3 suggested earlier. We would like to emphasize that the above spectral index estimate will remain unaffected even if the actual pulse duty cycle happens to be different from what we have assumed. This is due to the fact that the profile widths are similar at 34 and 327 MHz (see Figure <ref>), and the same pulse duty cycle (50%) has been assumed for estimating the flux densities at both frequencies. The continuum emission upper limit at 330 MHz presented in the previous subsection is consistent with the flux density of the pulsar estimated above. Recently, <cit.> have reported a 3σ flux density upper limit of 24 mJy/beam towards this pulsar at 150 MHz. Assuming no turn-over in the spectrum, the spectral index deduced above predicts a flux density of 3-8 mJy at 150 MHz, consistent with the result from <cit.>. The steep spectral index suggests the pulsar's flux density at 1400 MHz to be 6-24 μJy, consistent with the upper limit from earlier searches <cit.>. This flux density is also comparable to that of the other two very faint pulsars mentioned in Section 1 (J0106+4855 and J1907+0602). However, J1732-3131 is relatively nearby, implying a lower luminosity. Indeed, the 1400 MHz pseudo-luminosity[The pseudo-luminosity is defined as the 1400 MHz flux density times the square of the distance, and takes into account the uncertainties in the flux density at 327 MHz and the spectral index.] of the pulsar is only 2.2-8.9 μJy kpc^2. If we use the NE2001 electron density model <cit.> to derive the distance from the DM, then the above range suggests J1732-3131 to be the least luminous among all the pulsars for which 1400 MHz flux density estimates are known <cit.>! Even if we use the more recent electron density model by <cit.>, only a few pulsars have a pseudo-luminosity less than that of J1732-3131. Hence, J1732-3131 is one of the least luminous radio pulsars known to date!
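The quoted flux density range can be cross-checked with the standard radiometer equation, using the numbers given above. The sketch below is an order-of-magnitude illustration under assumed simplifications (a single polarization, n_pol = 1, and an idealized antenna gain G = A_eff/2k_B); the actual flux calibration may differ in detail.

import numpy as np

K_B = 1.380649e-23                  # Boltzmann constant, J/K

def mean_flux_density(snr, t_sys, a_eff, bw, t_int, duty, n_pol=1):
    # Radiometer-equation estimate of the mean pulsed flux density (Jy).
    gain = a_eff / (2.0 * K_B) * 1e-26           # antenna gain in K/Jy
    noise = t_sys / (gain * np.sqrt(n_pol * bw * t_int))
    return snr * noise * np.sqrt(duty / (1.0 - duty))

# Numbers quoted in the text: T_sys = 610 K (sky) + 150 K (receiver),
# A_eff = 7500 m^2, 13.6 MHz effective bandwidth, 60 h, 50% duty cycle.
for snr in (3.0, 5.0):
    s = mean_flux_density(snr, t_sys=760.0, a_eff=7500.0,
                          bw=13.6e6, t_int=60.0 * 3600.0, duty=0.5)
    print(snr, s * 1e3)             # ≈ 0.5 and 0.8 mJy respectively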
§.§ Profile morphology and implications for the viewing geometry and emission mechanism

At this stage, we can review whether the 34 MHz detection of the pulsar was indeed aided by its correspondingly larger emission beam at low frequency. While strong constraints on the viewing geometry can be obtained only from polarization data, we can seek hints from the profile morphology and the radio spectrum. A grazing line of sight would tend to miss the high frequency radio emission beam and give rise to a steep spectrum <cit.>. The mean spectral index for normal pulsars has been estimated to be -1.4±1.0 <cit.>. With a spectral index in the range -2.4 to -3.0, the radio spectrum of J1732-3131 is indeed on the steeper side. However, more than 10% of the pulsars with characterized radio spectra have spectral indices in the range deduced for J1732-3131 (ATNF pulsar catalog), suggesting that the present constraints are too loose to be interpreted in this context. A grazing sight line would also imply a significant evolution of the pulse-width and/or the number of pulse components with frequency. The similarity of J1732-3131's pulse profiles at frequencies separated by a factor of nearly 10 (at 34 and 327 MHz; Figure <ref>) suggests that there is no major evolution of the profile in this frequency range. However, minor evolution that is masked by the very low S/N of both profiles cannot be excluded. Hence, the current data do not present strong support for a favorable viewing geometry being responsible for the pulsar's original detection at low frequency. The radio and gamma-ray profiles of the pulsar appear to have similar morphology (see Figure 4 of <cit.>). Profile morphology and the phase-alignment of the radio and gamma-ray profiles could give important clues about the location of the emission sites. The propagation of radio signals through the dispersive ISM introduces a delay Δt ∝ DM × ν^-2. The uncertainty in DM from the original detection (15.44±0.32 pc cm^-3) was not adequate to probe the phase-alignment of the 34 MHz profile with its gamma-ray counterpart. However, at 327 MHz an uncertainty in DM as large as 0.3 pc cm^-3 translates to a delay equivalent to only about 0.06 of the pulsar's rotation period, allowing us to examine the phase-alignment between the radio and the gamma-ray profiles.
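This statement is easy to verify from the standard cold-plasma dispersion relation, with the usual dispersion constant of 4.149×10^3 s MHz^2 cm^3 pc^-1 (a minimal worked example; no claim is made about the exact constant used in the actual analysis):

K_DM = 4.149e3                      # s MHz^2 cm^3 / pc

def dispersion_delay(dm, nu_mhz):
    # Arrival-time delay relative to infinite frequency, in seconds.
    return K_DM * dm / nu_mhz ** 2

period = 0.196                      # s, rotation period of J1732-3131
d_dm = 0.3                          # pc cm^-3, DM uncertainty from the text
d_t = dispersion_delay(d_dm, 327.0)
print(d_t, d_t / period)            # ≈ 0.012 s, i.e. ≈ 0.06 in pulse-phase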
The dispersive delay (Δt) as well as the delays associated with the configuration of the telescope relative to the solar system barycenter are accounted for by Tempo2 while predicting the pulsar's phase at a given observing epoch. The instrumental delay associated with the ORT was estimated by fitting a `jump' in the pulse arrival times of a known fast pulsar obtained at the ORT and the GMRT using Tempo2. The delay was confirmed by independent tests, and was taken into account by modifying the recorded start times of the individual observations appropriately. The gamma-ray photon arrival times are converted to the geocenter to remove the effects of the motion of the Fermi LAT spacecraft around the earth. The corrected arrival times are then used with Tempo2 in predictive mode to compute an average profile. To probe the phase-alignment of the radio and gamma-ray light curves, we compute the arrival time correction from the geocenter to the ORT as well as the associated phase-correction. We use these corrections to convert the gamma-ray light curve to the ORT, and compare it with the radio profile observed at the site. We successfully tested our procedure by reproducing the known phase offset of 0.44 between the radio and gamma-ray profiles of the pulsar J0437-4715 <cit.>. The phase-aligned radio and gamma-ray profiles of J1732-3131 are plotted in Figure <ref>. The lag (δ) of the main peak in the gamma-ray profile with respect to the highest peak in the radio profile is 0.24±0.06 in pulse-phase[The difference between the centroids of the two profiles also gives a lag consistent with 0.24±0.06. The small radio peak at a pulse-phase of about 0.65 in Figure <ref>, which is also noticeably present in the 34 MHz profiles, could be aligned with the main peak in the gamma-ray profile. However, the S/N of this peak in all the radio detections so far is too low to claim its possible alignment.]. Note that the uncertainty of 0.06 is entirely due to that in the DM; any contribution from the statistical uncertainty on the positions of the peaks is much smaller. For the majority of young gamma-ray pulsars, the phase difference between the leading and trailing peaks in the gamma-ray profile (Δ) has been found to be anti-correlated with δ. This relationship between Δ and δ was initially shown by <cit.> to be a general property of outer-magnetosphere models with caustic pulses. The majority of the young and middle-aged gamma-ray pulsars detected by the Fermi LAT conform to the Δ-δ relationship <cit.>. More recently, <cit.> have constrained several gamma-ray geometrical models by comparing simulated and observed light curve morphological characteristics, including the correlation of δ with other observable quantities. Assuming a vacuum-retarded dipole magnetic field, they show that the Outer Gap (OG) model <cit.> and the One Pole Caustic (OPC) model <cit.> best explain the trends between the various observed and deduced parameters. Our deduced value of δ=0.24±0.06 suggests that J1732-3131 also follows the common trends (like the correlation of δ with the rotation period and Δ) exhibited by a large fraction of radio-loud gamma-ray pulsars <cit.>. Hence, following the conclusions of <cit.>, the observed radio and gamma-ray light curves of J1732-3131 are also best explained by the OG and OPC models.

§ SUMMARY

In the previous sections, we have presented the details of our extensive observations and deep search for a periodic signal from the gamma-ray pulsar J1732-3131 at 327 MHz using the Ooty radio telescope. Despite the high background sky temperature, our deep integration allowed us to probe a very faint periodic signal from the pulsar. Using nearly 60 hours of observations conducted in March 2014, we have presented the detection of a periodic signal from the pulsar at a DM of 15.5 pc cm^-3 at a confidence level of 99.82%. We estimate the 327 MHz flux density of the pulsar to be 0.5-0.8 mJy, and the spectral index to be in the range from -2.4 to -3.0. The 1400 MHz pseudo-luminosity of the pulsar is only 2.2-8.9 μJy kpc^2, suggesting the pulsar to be one of the least luminous pulsars known to date! We also phase-aligned the radio and gamma-ray profiles, and measured the phase-offset between the main peaks in the two profiles to be 0.24±0.06. This non-zero phase-lag favors the models wherein the gamma-ray emission originates in the outer magnetosphere of the pulsar. J1732-3131 was detected during a rare enhancement in its flux density, most likely due to scintillation. Subsequent detections of the pulsar using deep follow-up observations have been possible only because its DM was known from the first detection. So, it is possible that some of the radio-quiet gamma-ray pulsars might actually be very faint radio sources, and hence not detected in the radio searches using current generation telescopes. The high sensitivity of upcoming radio telescopes like the square kilometre array (SKA) and the five-hundred-meter aperture spherical telescope (FAST) will enable radio detection, and facilitate better studies, of such pulsars.

§ ACKNOWLEDGEMENTS

YM acknowledges use of funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 617199. BCJ, PKM and MAK acknowledge support from the Department of Science and Technology grant DST-SERB Extra-mural grant EMR/2015/000515. BCJ, PKM, MAK and AKN acknowledge support from TIFR XII plan grants 12P0714 and 12P0716.
YM would like to thank Cees Bassa, Benjamin Stappers and Andrew Lyne for providing profiles of a few normal pulsars that were useful in cross-checking the instrumental delays. YM, MAK, AKN, BCJ and PKM acknowledge the kind help and support provided by the members of the Radio Astronomy Centre, Ooty, during these observations. The ORT is operated and maintained at the Radio Astronomy Centre by the National Centre for Radio Astrophysics. We have used archival data provided by the GMRT. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.

Abdo A. A., et al., 2010, ApJ, 711, 64
Abdo A. A., et al., 2013, ApJS, 208, 17
Bates S. D., Lorimer D. R., Verbiest J. P. W., 2013, MNRAS, 431, 1352
Brazier K. T. S., Johnston S., 1999, MNRAS, 305, 671
Caraveo P. A., 2014, ARA&A, 52, 211
Cheng K. S., Ruderman M., Zhang L., 2000, ApJ, 537, 964
Cordes J. M., 1978, ApJ, 222, 1006
Cordes J. M., Lazio T. J. W., 2002, arXiv:astro-ph/0207156
Frail D. A., Jagannathan P., Mooley K. P., Intema H. T., 2016, ApJ, 829, 119
Haslam C. G. T., Salter C. J., Stoffel H., Wilson W. E., 1982, A&AS, 47, 1
Jeffrey S. V., 1985, ACM Transactions on Mathematical Software (TOMS), 1, 37
Kerr M., Ray P. S., Johnston S., Shannon R. M., Camilo F., 2015, ApJ, 814, 128
Maan Y., 2014, PhD thesis, Indian Institute of Science, Bangalore, India
Maan Y., 2015, ApJ, 815, 126
Maan Y., Aswathappa H. A., 2014, MNRAS, 445, 3221
Maan Y., Aswathappa H. A., Deshpande A. A., 2012, MNRAS, 425, 2
Malofeev V. M., Malov O. I., Shchegoleva N. V., 2000, Astron. Rep., 44, 436
Manchester R. N., Hobbs G. B., Teoh A., Hobbs M., 2005, AJ, 129, 1993
Naidu A., Joshi B. C., Manoharan P. K., Krishnakumar M. A., 2015, ExA, 39, 319
Pierbattista M., Harding A. K., Gonthier P. L., Grenier I. A., 2016, A&A, 588, A137
Pletsch H. J., et al., 2012, ApJ, 744, 105
Ray P. S., et al., 2011, ApJS, 194, 17
Romani R. W., Watters K. P., 2010, ApJ, 714, 810
Romani R. W., Yadigaroglu I.-A., 1995, ApJ, 438, 314
Swarup G., et al., 1971, NPhS, 230, 185
Watters K. P., Romani R. W., 2011, ApJ, 727, 123
Yao J. M., Manchester R. N., Wang N., 2017, ApJ, 835, 29 | http://arxiv.org/abs/1706.08613v1 | {
"authors": [
"Yogesh Maan",
"M. A. Krishnakumar",
"Arun K. Naidu",
"Subhashis Roy",
"Bhal Chandra Joshi",
"Matthew Kerr",
"P. K. Manoharan"
],
"categories": [
"astro-ph.HE",
"astro-ph.SR"
],
"primary_category": "astro-ph.HE",
"published": "20170626215627",
"title": "Detection of radio emission from the gamma-ray pulsar J1732-3131 at 327 MHz"
} |
Université Côte d'Azur, CNRS, Institut de Physique de Nice (InPhyNi), France, EU
Institute for Theoretical Physics, Vienna University of Technology, A-1040, Vienna, Austria, EU
Institute for Theoretical Physics, Vienna University of Technology, A-1040, Vienna, Austria, EU
Institute for Theoretical Physics, Vienna University of Technology, A-1040, Vienna, Austria, EU
Université Côte d'Azur, CNRS, Institut de Physique de Nice (InPhyNi), France, EU

We realize scattering states in a lossy and chaotic two-dimensional microwave cavity which follow bundles of classical particle trajectories. To generate such particlelike scattering states we measure the system's transmission matrix and apply an adapted Wigner-Smith time-delay formalism to it. The necessary shaping of the incident wave is achieved in situ using phase- and amplitude-regulated microwave antennas. Our experimental findings pave the way for establishing spatially confined communication channels that avoid possible intruders or obstacles in wave-based communication systems.

42.25.Bs, 03.65.Nk, 05.45.Mt

Particlelike scattering states in a microwave cavity
Ulrich Kuhl
December 30, 2023
====================================================

Introduction.– The propagation of waves in complex media is widely studied in physics <cit.>. To probe the scattering properties of a medium one typically uses well-defined incident waves and measures their spatial profile at the output. However complex the scattering process may be, the output stays deterministically related to the input, such that any change in the input parameters can be directly related to changes in the output pattern. This deterministic relation is encapsulated in a system's scattering matrix <cit.>, whose massive information content is exploited through wavefront shaping <cit.>. The basic idea in this emerging field is to manipulate the incident waves in such a way that a certain output is achieved. This concept was pushed forward by the availability of sufficient input control. Spatial light modulators in optics <cit.>, IQ-modulators or spatial microwave modulators in the microwave domain <cit.>, and transducers in acoustics <cit.> make it possible to use this concept in a large variety of physical disciplines. Early goals were the development of new schemes for the focusing or defocusing of waves and the compression of pulses <cit.> behind a disordered slab. Special wave patterns can also be achieved within such a medium, such as states that are spatially focused on an embedded target <cit.>. In multi-mode fibers, so-called "principal modes" were also recently generated that are focused in time both at the input and the output facet of the fiber <cit.>. States with the unique feature of remaining focused both in space and time during the entire propagation through a complex medium are the so-called particlelike scattering states (PSSs) <cit.>. These waves form highly collimated beams propagating along the bouncing pattern of classical particles. As a result they avoid multipath interference already by construction, leading to an extremely broadband and stable transmission behavior. These features make PSSs ideally suited for the transfer of information in a secure, robust, and directed way without losing part of the transmitted signal to the environment. In this paper we present a microwave realization (see Fig.
<ref>) of such PSSs by means of active input shaping rather than through a numerical synthesis of experimentally measured system excitations as previously reported in <cit.>. Additionally, the realization scheme presented here is based solely on the system's transmission matrix T rather than on the whole scattering matrix S as originally proposed in <cit.>. Moreover, we show here how to deal with the intrinsic losses as well as the noise in our experimental setup.

Theory.– We start by introducing the Wigner-Smith time-delay matrix (WSTDM) Q = -i S^-1 dS/dω (involving a frequency derivative), which is an established tool for measuring the time-delay associated with the scattering of a wave packet in a system <cit.>, featuring also interesting connections to the system's density of states <cit.>. The WSTDM is Hermitian for unitary scattering systems, i.e., S^† S = 1, thus resulting in real eigenvalues τ_n (n represents the n-th eigenvalue), also called proper delay-times. The corresponding eigenvectors u⃗_n (given as coefficient vectors in a certain basis) are known as principal modes <cit.> and have the remarkable feature of being insensitive (to first order) to small changes of their input frequency – in the sense that the spatial output profile v⃗_n = S u⃗_n does not change (up to a global factor). This property is especially useful for dispersion-free propagation through multi-mode fibers <cit.> and is mathematically expressed as

v⃗_n(ω_0+Δω) ≈ exp(i τ_n Δω) v⃗_n(ω_0),

where Δω is the change in frequency and ω_0 the frequency at which u⃗_n is evaluated. The global phase factor exp(i τ_n Δω) is determined by the corresponding eigenvalue τ_n. In Ref. <cit.> it was demonstrated that a certain subclass of principal modes has a particlelike wave function resembling a focused beam. These PSSs live in the subspace of either fully transmitted or fully reflected states, just like a particle that can either traverse the scattering region or be reflected back at some boundary or obstacle. In this work we investigate PSSs that get fully transmitted through a microwave cavity as shown in Fig. <ref>. As in most experiments, we do not have access to the full scattering matrix S. We thus use a modified WSTDM in which we replace the scattering matrix S by the transmission matrix T, which is accessible in our experimental setup described below. In the following we show that knowledge of the transmission matrix T alone is sufficient to find PSSs connecting the input to the output. The eigenvalue equation for the n-th eigenvector q⃗_n of this new operator q reads as follows:

q q⃗_n = -i T^-1(ω) dT(ω)/dω q⃗_n = λ_n q⃗_n,

where λ_n is the eigenvalue. Please note that an ordinary inverse of T appearing in Eq. (<ref>) does not exist if T is non-square or singular. In the supplemental material <cit.>, we introduce an effective inverse that still allows for the calculation of q. Contrary to eigenstates of the Hermitian operator Q, only the transmitted output profile o⃗_n = T q⃗_n is insensitive (up to a global factor) with respect to a change of the input frequency ω, since the operator q involves only the transmission matrix T. This translates into

o⃗_n(ω_0+Δω) ≈ exp(i λ_n Δω) o⃗_n(ω_0).

One can analytically derive (see Ref. <cit.>) an expression for the complex eigenvalues

λ_n = dϕ_n/dω - i d ln(|o⃗_n|)/dω,

where ϕ_n is the transmitted global phase, i.e., o⃗_n = |o⃗_n| e^iϕ_n ô_n (ô_n is the unit vector of o⃗_n).
The real part Re(λ_n) reflects the frequency derivative of the scattering phase and is therefore proportional to the time-delay <cit.> of the eigenstate q⃗_n. The imaginary part Im(λ_n) describes how the transmitted intensity |o⃗_n|^2 changes with respect to a change of the frequency ω, as can be seen from Eq. (<ref>): |o⃗_n(ω_0+Δω)|^2 ≈ exp[-2 Im(λ_n) Δω] |o⃗_n(ω_0)|^2. In order to identify PSSs among all the other eigenstates o⃗_n, we make use of the corresponding eigenvalues λ_n. Since PSSs are highly collimated and are not distributed all over the scattering region, the time it takes a PSS to traverse the scattering region is typically much smaller than for other scattering states that get scattered multiple times inside the scattering region. Due to the fact that Re(λ_n) measures this scattering time, i.e., the time-delay, PSSs can be identified by a small value of Re(λ_n). Furthermore, PSSs feature a small Im(λ_n), since for fully transmitting and spatially confined scattering states the transmitted intensity barely changes with the input frequency ω as compared to states that get scattered multiple times. In conclusion, PSSs can be identified by a small Re(λ_n) and a small Im(λ_n); a numerical sketch of this identification procedure is given at the end of the setup description below.

Setup.– The scattering setup with which we investigate the appearance of PSSs is shown in <ref>. A chaotic scattering region is attached to an incoming lead and an outgoing lead (see red, light blue and green areas in <ref>, respectively). The lead width W of 14 cm allows the propagation of 16 transverse-electric (TE) modes in the entrance and the exit lead at the working frequency of ν_0 = ω_0/2π = 17.5 GHz, which corresponds to a wavelength in air of 1.71 cm. These 16 modes are excitable via 16 antennas. Each antenna is connected to one IQ-modulator, which controls the amplitude and phase of the microwave signal passing through. The IQ-modulators themselves are fed by a vector network analyzer (VNA, Agilent E5071C) connected to a power splitter (Microot MPD16-060180). The connecting cables, connectors and antennas used are all identical to avoid the appearance of additional phases arising from different propagation path lengths. The ends of the waveguide are filled with absorbing foam material (types: LS-14 and LS-16 from EMERSON & CUMING) to reduce reflections from the open ends. The cavity is placed under a metallic plate featuring a 5×5 mm^2 grid of holes (hole radius of 2 mm). Working below the cut-off frequency of 18.75 GHz guarantees that only the fundamental TE mode is excited, i.e., TE_0, where the z-component of the electric field E_z is constant with respect to z and the x,y-components E_x,y are zero. The grid of holes in the top plate closing the cavity enables us to introduce a movable monopole antenna which measures E_z at any given hole position in the cavity, as in previous experiments <cit.>. The holes can also be used to insert cylindrical obstacles (aluminum, radius: 2 mm) leading to additional scattering within the cavity. Similar setups with ten open modes have been used to verify the shaping performance of our antenna array <cit.> and to achieve focusing inside disordered media with a generalized Wigner-Smith matrix <cit.>. The scattering setup was chosen such that the incoming and outgoing leads support a sufficient number of modes. The long middle part (located between the incoming and outgoing leads) serves as verification that particlelike scattering states avoid this region, whereas arbitrary scattering states typically enter this part.
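As announced above, and anticipating the experimental discussion below, the PSS identification can be sketched numerically as follows. The sketch builds q̃ from transmission matrices measured at two nearby frequencies, using a finite difference for dT/dω and an SVD-based pseudo-inverse truncated to the η most strongly transmitting channels; the actual effective inverse is the one defined in the supplemental material, so this is only an illustration with assumed conventions.

import numpy as np

def pss_candidates(T1, T2, d_omega, eta=7):
    # T1, T2: transmission matrices measured at two nearby frequencies
    # (shape n_out x n_in, e.g. 27 x 16); d_omega: frequency spacing.
    dT = (T2 - T1) / d_omega                 # finite-difference dT/dω
    T = 0.5 * (T1 + T2)                      # T at the centre frequency
    U, s, Vh = np.linalg.svd(T, full_matrices=False)
    # Pseudo-inverse restricted to the eta strongest transmission channels.
    T_inv = Vh[:eta].conj().T @ np.diag(1.0 / s[:eta]) @ U[:, :eta].conj().T
    q = -1j * T_inv @ dT
    lam, vecs = np.linalg.eig(q)
    # Heuristic ranking: PSS candidates combine a short time delay Re(λ)
    # with a spectrally flat transmitted intensity, i.e. small |Im(λ)|.
    order = np.argsort(np.abs(lam.real) + np.abs(lam.imag))
    return lam[order], vecs[:, order]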
The long middle part (located between the incoming and outgoing leads) serves as verification that particlelike scattering states avoid this region, whereas arbitrary scattering states typically enter this part.

Experimental results.– To obtain the q-operator experimentally, we measure the transmission matrix T(ω) within a frequency window around the working frequency. The positions where we inject the microwave signal (16 antennas connected to the IQ-modulators) and the positions where we measure the transmission with the moving antenna (27 positions) are marked in <ref>. Since T is thus a rectangular matrix of size 27×16, we cannot calculate its ordinary inverse to construct the q-operator. Using the technique described in the supplemental material <cit.>, we work with the operator q̃, which only includes a subpart of the full transmission matrix associated with a certain number η of highly transmitting channels. We found empirically that the best results for PSSs [identified by means of a small Im(λ_n)] are obtained if we take the η = 7 most strongly transmitting channels for the calculation of q̃. Once these eigenstates of q̃ are evaluated, we inject them and verify their particlelike shape using the movable antenna that enters the cavity through the holes in the top plate.

First, we investigate the eigenstate featuring the smallest value of Re(λ_n), i.e., the shortest time-delay. The result of this measurement is shown as particlelike scattering state 1 (PSS 1) in <ref>(a). The wave function clearly shows the predicted behavior of following the shortest trajectory bundle connecting the incoming with the outgoing lead [see red bundle in <ref>(b, left)].

In the next step we investigate PSSs with larger time-delays, which correspond to the green and the blue classical trajectory bundles shown in <ref>(b). It turns out that the center trajectories of these two bundles have almost the same length (L_2 = 50.0 cm and L_3 = 49.1 cm). Since similar path lengths lead to similar time-delays, the operator q cannot fully discriminate between these two scattering states. While PSS 2 corresponds quite well to the green classical bundle, PSS 3 mixes both bundles, green and blue. In other words, the measured q-eigenstates corresponding to these bundles are in a near-degenerate superposition, with path contributions of both lengths (L_2 and L_3) showing up in their wave functions. Demixing degenerate PSSs can be achieved by transforming the scattering matrix into the spatial domain <cit.>. In order to emphasize the particlelike shape of the PSSs, we also show in <ref>(c) the intensity distribution of a state generated by exciting only a single, randomly chosen antenna [we will refer to this state as the random scattering state (RSS)] and compare its spatial shape with our PSSs. We see that the RSS also shows some intensity maxima within the cavity; however, these maxima do not follow classical trajectory bundles between the incoming and outgoing leads. Moreover, the RSS extends into the middle part of the scattering region, which is clearly avoided by all PSSs.

The RSS also shows a significantly lower spectral robustness of its transverse output profile, which we define as

Corr(ν) = |o⃗^†(ν)·o⃗(ν_0)| / (|o⃗(ν)| |o⃗(ν_0)|)  with  o⃗(ν) = T(ν)i⃗,

where i⃗ is the respective input state. Equation (<ref>) is the normalized correlation between the output vector o⃗ at frequency ν and its output at ν_0 (the frequency at which the states are evaluated).
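The effective inverse entering q̃ is constructed in the supplemental material from a singular-value decomposition; a compact sketch of that construction and of the robustness measure Corr(ν), again with random placeholder matrices instead of measured data, reads:

    import numpy as np

    rng = np.random.default_rng(1)

    def q_tilde(T, dT, eta=7):
        """q~ = -i V~ (U~^† T V~)^{-1} U~^† dT V~ V~^†, keeping eta strong channels."""
        U, s, Vh = np.linalg.svd(T)
        Ut, Vt = U[:, :eta], Vh[:eta, :].conj().T   # singular vectors of the eta largest s
        T_inv = Vt @ np.linalg.inv(Ut.conj().T @ T @ Vt) @ Ut.conj().T
        return -1j * T_inv @ dT @ Vt @ Vt.conj().T  # P_U~ drops out since U~^† U~ = 1

    def corr(o, o_ref):
        """Normalized output correlation between o(nu) and the reference o(nu_0)."""
        return abs(np.vdot(o, o_ref)) / (np.linalg.norm(o) * np.linalg.norm(o_ref))

    T  = rng.standard_normal((27, 16)) + 1j * rng.standard_normal((27, 16))  # 27 probes, 16 inputs
    dT = 0.1 * (rng.standard_normal((27, 16)) + 1j * rng.standard_normal((27, 16)))

    lam, vecs = np.linalg.eig(q_tilde(T, dT))
    i_best = vecs[:, np.argmin(np.abs(lam.imag))]     # candidate with most stable output
    print(corr((T + 0.01 * dT) @ i_best, T @ i_best)) # robustness under a small detuning

The last line uses a crude linearized detuning T(ν) ≈ T(ν_0) + Δν dT/dν as a stand-in for remeasuring T at a shifted frequency.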
PSS 1 is the state showing the highest output robustness when compared to the other PSSs (see <ref>). Since PSS 2 and PSS 3 undergo a reflection at the convex cavity boundary, they are considerably more sensitive to small changes in frequency, in terms of the output robustness, than PSS 1, which is transmitted entirely without any boundary reflections. This explains why the correlation curve of PSS 1 in <ref> is flatter than those of PSS 2 and PSS 3. The correlation of the random state RSS is, as expected, the lowest.

The most characteristic property of PSSs is their highly collimated wave functions occupying bundles of classical particle trajectories of similar length. Consequently, when an obstacle is placed in the way of such a trajectory bundle, the observed transmission of the corresponding PSS drops, whereas an obstacle outside the occupied region affects the wave function only slightly. To test this idea experimentally, we place altogether 13 cylindrical obstacles forming a rhombic shape into the scattering region of the cavity [see <ref>(a)]. In total we place this obstacle at 5 different positions indicated in <ref>(a) and study the relative change of the transmitted intensity according to

ΔI_rel = (I_ob - I_em)/I_em,

where I_ob is the transmitted intensity for the case where the obstacle is placed inside the system and I_em is the transmitted intensity for the empty cavity with no obstacle present. The intensities I_ob and I_em are obtained by summing the measured transmitted intensities at 135 positions covering the whole width of the outgoing lead, indicated by a red square in the exit lead in <ref>. Following the considerations above, we expect that the transmitted intensity of a PSS is not affected by an obstacle unless it is placed directly into its corresponding classical trajectory bundle. Indeed, the PSSs suffer a strong drop of 30% or more of the transmitted intensity when the scatterer is placed within the bundle supporting the PSS (see <ref>). This observation is interesting from a practical point of view if one aims to transmit intensity from the input to the output lead in the presence of obstacles. Once a specific PSS is blocked, one can maintain efficient transmission by switching to another PSS, e.g., from PSS 1 to PSS 2 or to PSS 3.

Conclusion.– We perform an in situ realization of particlelike scattering states by means of incident wavefront shaping. Particlelike scattering states follow bundles of classical trajectories of similar length and can be identified with the help of the Wigner-Smith time-delay formalism based on a prior measurement of the frequency-dependent transmission matrix T(ν). We extract three different particlelike scattering states corresponding to three different classical trajectory bundles connecting the input with the output lead attached to a chaotic microwave cavity. Switching between these paths can sustain the transmission in case one of the paths is blocked by an obstacle. Our results can also be mapped onto other wave-based systems (acoustic, electromagnetic, quantum, etc.), leading to many possible applications related to efficient, robust, and focused transmission through complex environments <cit.>.
While the presented microwave experiment using 16 guided modes serves as a proof-of-principle demonstration, we expect that our protocol will unfold its full potential in the optical domain, where many more modes are accessible.

Acknowledgments.– P.A., A.B., and S.R. are supported by the Austrian Science Fund (FWF) through project numbers SFB-NextLite F49-P10 and I 1142-N27 (GePartWave). J.B. and U.K. would like to thank the ANR for funding via the ANR project GePartWave (ANR-12-IS04-0004-01) and the European Commission through the H2020 programme by the Open Future Emerging Technology "NEMF21" project (664828).

References
[1] E. Akkermans and G. Montambaux, Mesoscopic Physics of Electrons and Photons (Cambridge University Press, 2007).
[2] S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, Measuring the transmission matrix in optics: An approach to the study and control of light propagation in disordered media, Phys. Rev. Lett. 104, 100601 (2010).
[3] I. M. Vellekoop and A. P. Mosk, Focusing coherent light through opaque strongly scattering media, Opt. Lett. 32, 2309 (2007).
[4] A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, Controlling waves in space and time for imaging and focusing in complex media, Nat. Photon. 6, 283 (2012).
[5] S. Rotter and S. Gigan, Light fields in complex media: Mesoscopic scattering meets wave control, Rev. Mod. Phys. 89, 015005 (2017).
[6] A. Gehner, M. Wildenhain, H. Neumann, J. Knobbe, and O. Komenda, MEMS analog light processing: an enabling technology for adaptive optical phase control, Proc. SPIE 6113, 61130K (2006).
[7] E. G. van Putten, I. M. Vellekoop, and A. P. Mosk, Spatial amplitude and phase modulation using commercial twisted nematic LCDs, Appl. Opt. 47, 2076 (2008).
[8] E. Lueder, Liquid Crystal Displays: Addressing Schemes and Electro-Optical Effects (John Wiley and Sons, 2010).
[9] C. Maurer, A. Jesacher, S. Bernet, and M. Ritsch-Marte, What spatial light modulators can do for optical microscopy, Laser Photon. Rev. 5, 81 (2011).
[10] B. E. Henty and D. D. Stancil, Multipath-enabled super-resolution for RF and microwave communication using phase-conjugate arrays, Phys. Rev. Lett. 93, 243904 (2004).
[11] N. Kaina, M. Dupré, M. Fink, and G. Lerosey, Hybridized resonances to design tunable binary phase metasurface unit cells, Opt. Express 22, 18881 (2014).
[12] J. Böhm and U. Kuhl, Wave front shaping in quasi-one-dimensional waveguides, in 2016 IEEE Metrology for Aerospace (MetroAeroSpace) (2016), pp. 182-186.
[13] M. Fink, Time reversed acoustics, Phys. Today 50, 34 (1997).
[14] G. Lerosey, J. de Rosny, A. Tourin, and M. Fink, Focusing beyond the diffraction limit with far-field time reversal, Science 315, 1120 (2007).
[15] O. Katz, E. Small, Y. Bromberg, and Y. Silberberg, Focusing and compression of ultrashort pulses through scattering media, Nat. Photon. 5, 372 (2011).
[16] D. J. McCabe, A. Tajalli, D. R. Austin, P. Bondareff, I. A. Walmsley, S. Gigan, and B. Chatel, Spatio-temporal focusing of an ultrafast pulse through a multiply scattering medium, Nat. Commun. 2, 447 (2011).
[17] A. M. Weiner, Ultrafast optics: Focusing through scattering media, Nat. Photon. 5, 332 (2011).
[18] B. Judkewitz, Y. M. Wang, R. Horstmeyer, A. Mathy, and C. Yang, Speckle-scale focusing in the diffusive regime with time reversal of variance-encoded light (TROVE), Nat. Photon. 7, 300 (2013).
[19] I. M. Vellekoop, E. G. van Putten, A. Lagendijk, and A. P. Mosk, Demixing light paths inside disordered metamaterials, Opt. Express 16, 67 (2008).
[20] J. Carpenter, B. Eggleton, and J. Schröder, Observation of Eisenbud-Wigner-Smith states as principal modes in multimode fibre, Nat. Photon. 9, 751 (2015).
[21] W. Xiong, P. Ambichl, Y. Bromberg, B. Redding, S. Rotter, and H. Cao, Spatiotemporal control of light transmission through a multimode fiber with strong mode coupling, Phys. Rev. Lett. 117, 053901 (2016).
[22] S. Rotter, P. Ambichl, and F. Libisch, Generating particlelike scattering states in wave transport, Phys. Rev. Lett. 106, 120602 (2011).
[23] B. Gérardin, J. Laurent, P. Ambichl, C. Prada, S. Rotter, and A. Aubry, Particlelike wave packets in complex scattering systems, Phys. Rev. B 94, 014209 (2016).
[24] L. Eisenbud, The Formal Properties of Nuclear Collisions, Ph.D. thesis, Princeton University (1948).
[25] E. Wigner, Lower limit for the energy derivative of the scattering phase shift, Phys. Rev. 98, 145 (1955).
[26] F. Smith, Lifetime matrix in collision theory, Phys. Rev. 118, 349 (1960).
[27] M. Davy, Z. Shi, J. Wang, X. Cheng, and A. Z. Genack, Transmission eigenchannels and the densities of states of random media, Phys. Rev. Lett. 114, 033901 (2015).
[28] M. Davy, Z. Shi, J. Park, C. Tian, and A. Z. Genack, Universal structure of transmission eigenchannels inside opaque media, Nat. Commun. 6, 6893 (2015).
[29] R. Pierrat, P. Ambichl, S. Gigan, A. Haber, R. Carminati, and S. Rotter, Invariance property of wave scattering through disordered media, Proc. Natl. Acad. Sci. USA 111, 17765 (2014).
[30] R. Savo, R. Pierrat, U. Najar, R. Carminati, S. Rotter, and S. Gigan, Mean path length invariance in multiple light scattering, arXiv:1703.07114 (2017).
[31] S. Fan and J. Kahn, Principal modes in multimode waveguides, Opt. Lett. 30, 135 (2005).
[32] M. B. Shemirani, W. Mao, R. A. Panicker, and J. M. Kahn, Principal modes in graded-index multimode fiber in presence of spatial- and polarization-mode coupling, J. Lightwave Technol. 27, 1248 (2009).
[33] J. Carpenter, B. J. Eggleton, and J. Schröder, Comparison of principal modes and spatial eigenmodes in multimode optical fibre, Laser Photon. Rev. 11, 1600259 (2017).
[34] See Supplemental Material at [URL will be inserted by publisher].
[35] J. Unterhinninghofen, U. Kuhl, J. Wiersig, H.-J. Stöckmann, and M. Hentschel, Measurement of the Goos-Hänchen shift in a microwave cavity, New J. Phys. 13, 023013 (2011).
[36] S. Barkhofen, J. Metzger, R. Fleischmann, U. Kuhl, and H.-J. Stöckmann, Experimental observation of a fundamental length scale of waves in random media, Phys. Rev. Lett. 111, 183902 (2013).
[37] P. Ambichl, A. Brandstötter, J. Böhm, M. Kühmayer, U. Kuhl, and S. Rotter, Generalizing the Wigner-Smith time delay operator: focus and omission in disordered media, arXiv:1612.03070 (2016).
[38] J. Salz, Digital transmission over cross-coupled linear channels, AT&T Tech. J. 64, 1147 (1985).
[39] H. Sampath, S. Talwar, J. Tellado, V. Erceg, and A. Paulraj, A fourth-generation MIMO-OFDM broadband wireless system: design, performance, and field trial results, IEEE Commun. Mag. 40, 143 (2002).

§ SUPPLEMENTARY MATERIAL

§.§ q for non-square or singular transmission matrices T

The construction of q = -i T^-1 dT(ω)/dω involves the inverse of the transmission matrix, T^-1. If T is non-square or singular, which can be the case in systems with low transmission, an ordinary inverse cannot be computed anymore. However, an effective inverse can be calculated by using only the highly transmitting channels of T, as we explain in the following. We start with a singular value decomposition (SVD) of the transmission matrix, T = UΣV^†, where U consists of the eigenvectors of TT^† stored in its columns and V consists of the eigenvectors of T^†T, respectively. For an m×n-dimensional transmission matrix, the matrices U and V are square m×m and n×n matrices. The rectangular m×n-dimensional matrix Σ contains the singular values σ_i on its diagonal, which are the square roots of the common eigenvalues of both T^†T and TT^†. For a singular or non-square transmission matrix T, at least one singular value is zero. In the next step, we keep only a certain number η of large singular values Σ̃ with the corresponding singular vectors stored in Ũ and Ṽ. Projecting the full transmission matrix T onto the kept transmitting channels according to T̃ = Ũ^†TṼ, we end up with an η×η-dimensional invertible transmission matrix T̃. Projecting back onto the original vector space gives the effective inversion

T^-1 := Ṽ(Ũ^†TṼ)^-1Ũ^†.

Projecting also the derivative onto the selected subspace with the corresponding projection operators P_Ũ = ŨŨ^† and P_Ṽ = ṼṼ^†, we end up with the construction rule for the operator

q̃ = -iṼ(Ũ^†TṼ)^-1Ũ^†ŨŨ^† dT/dω ṼṼ^†,

whose eigenvectors can now be calculated readily.

| http://arxiv.org/abs/1706.08926v1 | {
"authors": [
"Julian Böhm",
"Andre Brandstötter",
"Philipp Ambichl",
"Stefan Rotter",
"Ulrich Kuhl"
],
"categories": [
"physics.class-ph",
"cond-mat.mes-hall",
"cond-mat.other",
"nlin.CD"
],
"primary_category": "physics.class-ph",
"published": "20170627161541",
"title": "Particlelike scattering states in a microwave cavity"
} |
| http://arxiv.org/abs/1706.08384v1 | {"authors": ["Long Huang", "Xiaohua Wu", "Tao Zhou"], "categories": ["quant-ph"], "primary_category": "quant-ph", "published": "20170626141007", "title": "Pryce's mass-center operators and the anomalous velocity of a spinning electron"} |
Department of Physics, Faculty of Science, The University of Tokyo; Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, 277-8583, Japan

We study the stability of the electroweak vacuum in low-scale inflation models whose Hubble parameter is much smaller than the instability scale of the Higgs potential. In general, couplings between the inflaton and the Higgs are present, and hence we study the effects of these couplings during and after inflation. We derive constraints on the couplings between the inflaton and the Higgs by requiring that they do not lead to catastrophic electroweak vacuum decay, in particular via resonant production of Higgs particles.

Electroweak Vacuum Metastability and Low-scale Inflation
Kazunori Nakayama
UT 17-21, IPMU 17-0090

§ INTRODUCTION

The Higgs potential may have a deeper minimum than the electroweak (EW) vacuum once we assume that the Standard Model (SM) is valid up to a certain high-energy scale, given the current observational values of the SM parameters. This does not contradict the present universe, since the allowed values of the SM parameters most likely render the EW vacuum metastable, with a lifetime far exceeding the age of the universe <cit.>.[For gravitational corrections, see, e.g., Refs. <cit.> and references therein.] Still, the existence of such a deeper minimum might cause problems in the early universe <cit.>, especially during <cit.> and after inflation <cit.>. For instance, one can derive an upper bound on the inflation energy scale if there is no sizable coupling between the inflaton/the Ricci scalar and the Higgs during inflation; otherwise, the Higgs acquires superhorizon fluctuations that are large enough to overcome the potential barrier during inflation. Thus, we assume that the EW vacuum is indeed metastable, and study its implications for dynamics during and after inflation in this paper.

Previous studies in this direction were performed mainly in the context of high-scale inflation. The reason is that the Hubble parameter during inflation, H_inf, must be at least comparable to the instability scale of the Higgs potential, h_inst (∼ 10^10 GeV for the central values of the SM parameters), for inflation to have nontrivial effects on the EW vacuum, since otherwise the superhorizon fluctuations during inflation are too small to overcome the potential barrier. However, the situation completely changes once we consider dynamics after inflation. After inflation, or during the inflaton oscillation epoch, the typical scale of the system is at least as large as the inflaton mass m_ϕ. Thus, as long as m_ϕ > h_inst, even low-scale inflation that satisfies h_inst ≫ H_inf may threaten the metastable EW vacuum. This is possible because low-scale inflation models typically yield m_ϕ ≫ H_inf.

In this paper, we study the dynamics of the Higgs during the inflaton oscillation epoch for low-scale inflation models with m_ϕ > h_inst and m_ϕ ≫ H_inf. In general, there is no reason to suppress couplings between the inflaton and the Higgs. If these couplings are sizable, a resonant production of Higgs particles occurs due to the inflaton oscillation, which is the so-called "preheating" phenomenon <cit.>. The produced Higgs particles may force the EW vacuum to decay into the deeper minimum through the negative Higgs self-coupling.
Thus we may obtain tight upper bounds on the couplings by requiring that the EW vacuum survives the preheating epoch. Previous studies on the preheating dynamics of the EW vacuum focused on high-scale inflation models <cit.>, but there are some qualitative differences between high- and low-scale inflation models. For low-scale inflation models, one significant complexity arises from the tachyonic instability of the inflaton fluctuation itself during the last stage of inflation and the subsequent inflaton oscillation epoch <cit.>. It can be efficient enough to break the homogeneity of the inflaton field before the Higgs field fluctuation develops. Our purpose in this paper is to derive upper bounds on the Higgs-inflaton couplings in low-scale inflation models taking these effects into account.

This paper is organized as follows. In Sec. <ref>, we explain our setup. Since low-scale inflation models typically correspond to small-field inflation models, we concentrate on hilltop inflation models in this paper. In Sec. <ref>, we briefly discuss the dynamics of the Higgs during inflation for low-scale inflation models. In Sec. <ref>, we study the preheating dynamics of the Higgs and the inflaton itself, and qualitatively discuss the features of the whole system. In Sec. <ref>, we perform numerical simulations to derive bounds on the Higgs-inflaton couplings. Finally, Sec. <ref> is devoted to the summary and discussions.

§ SETUP

In this section, we summarize our setup. We take the Lagrangian as

ℒ = (M_Pl^2/2) R - (1/2)(∂ϕ)^2 - (1/2)(∂h)^2 - U(ϕ, h),

where M_Pl is the reduced Planck scale, R is the Ricci scalar, ϕ is the inflaton, and h is the Higgs.[We consider only one degree of freedom for simplicity. The results change only logarithmically even if we consider the full SU(2) doublet.] We assume that the inflaton is a singlet under the SM gauge group, and hence trilinear as well as quartic portal couplings between the inflaton and the Higgs are allowed in general. Thus we take the following generic form for the potential:

U(ϕ, h) = V(ϕ) + (σ_ϕ h/2) ϕh^2 + (λ_ϕ h/2) ϕ^2h^2 + (m_h^2/2) h^2 + (λ_h/4) h^4,

where V is the inflaton potential, m_h^2 is the bare mass of the Higgs, and σ_ϕ h, λ_ϕ h, and λ_h are coupling constants. Note that the inflaton can have gauge charges other than those of the SM, such as U(1)_B-L. In that case, ϕ should be regarded as the radial component of a complex scalar, and σ_ϕ h = 0. In this paper, however, we keep σ_ϕ h ≠ 0 to make our discussion generic. Also, although it is higher dimensional, the following term may be relevant:

δℒ_kin = c_kin (h^2/M_Pl^2) (∂ϕ)^2.

It can be sizable, for it respects the shift symmetry ϕ → ϕ + const. We can also consider a non-minimal coupling between the Higgs and R. We first omit these terms for simplicity, and discuss their effects at the end of this paper. Below we explain each term in detail.

§.§ Inflaton potential

As a prototype of an inflaton potential for low-scale inflation, we consider the hilltop model <cit.> (see Refs. <cit.> for supergravity embeddings):

V(ϕ) = Λ^4 [1 - (ϕ/v_ϕ)^n]^2,

where n > 2 is an integer and v_ϕ > 0 is the vacuum expectation value (VEV) of the inflaton at the minimum of its potential. The inflaton mass around the minimum is

m_ϕ = √2 n Λ^2/v_ϕ.

Since we are interested in small-field inflation models, we assume that v_ϕ ≪ M_Pl. Otherwise, the model would be rather similar to high-scale inflation models. Inflation takes place in the flat region of the potential, |ϕ| ≪ v_ϕ.
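For orientation, here is a minimal numerical sketch of these relations (our own illustration; the benchmark Λ ≃ 3×10^-6 M_Pl for v_ϕ = 10^-3 M_Pl is taken from the numerical section below):

    import numpy as np

    # Hilltop-model relations in units of the reduced Planck mass (M_Pl = 1).
    n, v_phi = 6, 1e-3                 # potential power and inflaton VEV
    Lam = 3e-6                         # Lambda fixed by the CMB normalization (cf. Sec. 5)

    def V(phi):                        # V = Lambda^4 [1 - (phi/v_phi)^n]^2
        return Lam**4 * (1.0 - (phi / v_phi)**n)**2

    m_phi = np.sqrt(2.0) * n * Lam**2 / v_phi    # inflaton mass at the minimum
    H_inf = Lam**2 / np.sqrt(3.0)                # from 3 H^2 M_Pl^2 = V(0)
    print(f"m_phi = {m_phi:.1e} M_Pl, H_inf/m_phi = {H_inf/m_phi:.1e}")
    # H_inf/m_phi = v_phi/(sqrt(6) n M_Pl) ~ 7e-5, i.e., H_inf << m_phi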
Here and in what follows, we consider the field space of the positive branch, ϕ > 0.[A pre-inflation era before the observed inflation can solve the initial condition problem of the hilltop inflation. If there exist a Hubble-induced mass term during the pre-inflation and a small ℤ_2 (ϕ → -ϕ) breaking term, the initial condition is dynamically selected <cit.>.] The Hubble parameter at the end of inflation, H_inf, is typically much smaller than m_ϕ in this case:

H_inf/m_ϕ ≃ v_ϕ/(√6 n M_Pl) ≪ 1.

Using the standard technique to calculate the large-scale curvature perturbation <cit.>, one finds the scalar spectral index and tensor-to-scalar ratio as

n_s ≃ 1 - (2/N)(n-1)/(n-2),  r ≃ [16n/(N(n-2))] [v_ϕ^2/(2Nn(n-2)M_Pl^2)]^{n/(n-2)},

where N is the e-folding number of the cosmic microwave background (CMB) scale, which lies between 50 and 60 depending on the subsequent thermal history. Thus the tensor-to-scalar ratio is negligibly small in small-field models with v_ϕ ≪ M_Pl. The overall normalization of the curvature perturbation observed by the Planck satellite <cit.> implies

𝒫_ζ ≃ 2.2×10^-9 ≃ [2n((n-2)N)^{n-1}]^{2/(n-2)} Λ^4 / [12π^2 (v_ϕ^n M_Pl^{n-4})^{2/(n-2)}].

It relates Λ and v_ϕ, and hence there is essentially one free parameter left, which we take to be v_ϕ hereafter.

For a reasonable value of n, the predicted spectral index [Eq. (<ref>)] is slightly outside the favored range, n_s = 0.968(6) at the 68% confidence level <cit.>. This discrepancy is resolved if there exists the following Planck-suppressed operator <cit.>:

δV_Pl = -(Λ^4 k/2) ϕ^2/M_Pl^2,

with k ≲ 𝒪(1/(nN)). While it is too small to change the inflaton dynamics significantly, it can shift the slow-roll parameter η for a certain range of k. If n ⩾ 6, it is possible to shift the spectral index to within the 68% confidence region for N = 50-60. See Fig. <ref> and Ref. <cit.>. Since the suitable value of k is small, this term is safely neglected in the oscillation phase. Thus, we use the potential given in Eq. (<ref>) in the following discussion.

§.§ Higgs-inflaton couplings and bare mass term

If we denote φ ≡ v_ϕ - ϕ, the potential is given as

U(ϕ, h) = V(v_ϕ - φ) + (1/2)(m_h^2 + σ_ϕ h v_ϕ + λ_ϕ h v_ϕ^2) h^2 + (σ̃_ϕ h/2) φh^2 + (λ_ϕ h/2) φ^2h^2 + (λ_h/4) h^4,

where we have defined

σ̃_ϕ h ≡ -(σ_ϕ h + 2λ_ϕ h v_ϕ).

Note that φ = 0 at the minimum of the potential. Here comes our crucial observation. In order to realize the EW scale, the bare Higgs mass and the mass coming from the inflaton VEV must cancel:[We have neglected the EW scale since we are interested in phenomena whose energy scale is much higher than the EW scale.]

m_h^2 + σ_ϕ h v_ϕ + λ_ϕ h v_ϕ^2 = 0.

It is a tuning, but we cannot avoid it since we assume that the SM is valid up to some high-energy scale aside from the inflaton sector. Thus, the potential is now given by

U(ϕ, h) = V(v_ϕ - φ) + (σ̃_ϕ h/2) φh^2 + (λ_ϕ h/2) φ^2h^2 + (λ_h/4) h^4.

In particular, the Higgs is almost massless at φ = 0.

Now we discuss quantum corrections to the potential. The Higgs-inflaton couplings modify or induce runnings of the Higgs four-point coupling and of the inflaton self-interactions. Here let us focus on the radiative corrections to the inflaton self-interactions; for the Higgs four-point coupling, see the next subsection. As one can infer from Eq. (<ref>), the potential for low-scale inflation has to be extremely flat, and hence even a small correction might spoil the successful inflation. Suppose that the effective potential around the vacuum ⟨ϕ⟩ ∼ v_ϕ is given by Eq. (<ref>) at the end of inflation for some renormalization scale μ. We will take μ as the typical scale of the preheating dynamics (μ ≳ m_ϕ).
See Sec. <ref> for more details. We put bounds on the couplings defined at this scale, since we are interested in the preheating dynamics. Now the question is whether or not inflaton self-interactions are radiatively induced for ϕ → 0 and spoil the inflation. At the one-loop level, the radiative correction is given by the Coleman-Weinberg effective potential,

V_CW(ϕ) = [m_h^4(ϕ)/(64π^2)] ln(m_h^2(ϕ)/μ^2),

where we define

m_h^2(ϕ) ≡ m_h^2 + σ_ϕ h ϕ + λ_ϕ h ϕ^2,

and the couplings are evaluated at the scale μ. We have assumed m_h^2(ϕ) > 0 during inflation; otherwise, the Higgs potential might be destabilized during inflation (see Sec. <ref>). In order not to change the tree-level inflaton potential too much during inflation, we need

|∂V_CW/∂ϕ| ≲ |∂V/∂ϕ|.

It roughly indicates

σ_ϕ h ≲ m_ϕ (v_ϕ/M_Pl)^{(n-1)/(n-2)},  λ_ϕ h ≲ (m_ϕ/M_Pl)(v_ϕ/M_Pl)^{1/(n-2)},

for σ_ϕ h ≠ 0. For σ_ϕ h ≃ 0, we have instead

σ_ϕ h ≲ m_ϕ v_ϕ/M_Pl,  λ_ϕ h ≲ m_ϕ/M_Pl.

§.§ Higgs potential

Finally, we discuss the Higgs quartic self-coupling λ_h. In order to understand the high-energy behavior of λ_h, we must carefully consider the scalar threshold correction <cit.>. Once we neglect the Higgs-inflaton quartic coupling, the potential around the minimum is written as

U ≃ (m_ϕ^2/2)[φ + σ̃_ϕ h h^2/(2m_ϕ^2)]^2 + (1/4)[λ_h - σ̃_ϕ h^2/(2m_ϕ^2)] h^4.

Thus the Higgs potential below the energy scale of m_ϕ is

V_SM(h) = (λ_SM/4) h^4,  λ_SM = λ_h - σ̃_ϕ h^2/(2m_ϕ^2).

It is clear that the quartic coupling λ_SM in the low-energy effective theory differs from λ_h. Up to the energy scale of m_ϕ, the running of λ_SM is just that of the SM, and hence it turns negative at around 10^10 GeV according to the current central values of the top and Higgs masses. For simplicity, we approximate it as λ_SM = -0.01 × sgn(μ - h_inst) for μ < m_ϕ, where μ is the energy scale of the system and h_inst is the instability scale of the Higgs potential, which we take to be h_inst = 10^10 GeV.

If m_ϕ < h_inst, λ_h is positive at least up to around μ = h_inst.[The potential can even be absolutely stable, depending on σ̃_ϕ h^2/m_ϕ^2 and the sign of λ_ϕ h <cit.>.] Thus, to overcome the potential barrier, the Higgs dispersion must be enhanced as large as ⟨h^2⟩ ≳ h_inst^2 > m_ϕ^2. However, such an enhancement requires a large coupling with the inflaton, which is likely to spoil the flatness of the inflaton potential (see Eq. (<ref>)).[In fact, the hilltop model (n=6) with m_ϕ < h_inst ∼ 10^10 GeV cannot have large resonance parameters because of Eq. (<ref>).] Therefore, in this paper we concentrate on the opposite case, m_ϕ > h_inst. Then, by matching at μ = m_ϕ, the boundary condition for λ_h is roughly given as

λ_h|_{μ=m_ϕ} = -0.01 + σ̃_ϕ h^2/(2m_ϕ^2)|_{μ=m_ϕ}.

If σ̃_ϕ h^2/m_ϕ^2 ≳ 0.01, it may significantly affect λ_h so that it helps to stabilize the Higgs potential in the high-energy region.[If λ_ϕ h is negative, the potential may not be absolutely stable anyway, depending on the precise form of V(ϕ).] Thus, there may be another minimum at around h ≃ m_ϕ and φ ≃ -σ̃_ϕ h because of Eqs. (<ref>) and (<ref>), and it may affect the dynamics of the Higgs in the early universe. Instead of being involved in such complexity, in this paper we simply concentrate on the case σ̃_ϕ h^2/m_ϕ^2 ≪ 0.01. Then, we may approximate the quartic coupling as

λ_h = -0.01 × sgn(μ - h_inst).

We take the renormalization scale as μ = max(H_inf, h) during inflation <cit.>, and μ = max(H, √⟨h^2⟩) during preheating, where H is the Hubble parameter and ⟨h^2⟩ is the dispersion of the Higgs field.
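As a sketch (our own literal transcription of the prescription just stated, with illustrative numbers), these choices amount to:

    import numpy as np

    H_INST = 1e10                                   # Higgs instability scale (GeV)

    def lambda_h(mu):
        """Step approximation lambda_h = -0.01 sgn(mu - h_inst) used in this paper."""
        return -0.01 * np.sign(mu - H_INST)

    def mu_preheating(hubble, higgs_var):
        """Renormalization scale mu = max(H, sqrt(<h^2>)) during preheating."""
        return max(hubble, np.sqrt(higgs_var))

    def lambda_SM(lambda_h_at_mphi, sigma_tilde, m_phi):
        """Tree-level threshold matching: lambda_SM = lambda_h - sigma~^2/(2 m_phi^2)."""
        return lambda_h_at_mphi - sigma_tilde**2 / (2.0 * m_phi**2)

    print(lambda_h(mu_preheating(1e7, (1e11) ** 2)))  # resonant epoch: -0.01 (unstable side)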
As soon as the resonant Higgs production occurs, the dispersion becomes ⟨h^2⟩ ≳ m_ϕ^2, and hence it dominates over the Hubble parameter.

§ HIGGS DYNAMICS DURING INFLATION

Before studying the preheating stage, we summarize the Higgs dynamics during inflation in this section. As studied extensively <cit.>, the de Sitter fluctuation of the Higgs field may lead to the collapse of the vacuum during inflation if the inflation scale is too high. It is instructive to see what happens if the inflation scale is so low that H_inf ≪ h_inst.

In the present model, since ϕ ≪ v_ϕ during inflation, the Higgs potential during inflation is approximately given by

V(h) ≃ (m_h^2/2) h^2 + (λ_h/4) h^4,

where the bare Higgs mass m_h^2 satisfies Eq. (<ref>). There are two possibilities: m_h^2 < 0 and m_h^2 > 0.

First, let us consider the case of a tachyonic Higgs during inflation: m_h^2 < 0, or σ_ϕ h v_ϕ + λ_ϕ h v_ϕ^2 > 0. In this case, the parameters must satisfy

|λ_h| h_inst^2 > |m_h^2|,

since otherwise the potential decreases monotonically toward large h and the Higgs may roll down to the deeper minimum during inflation. As long as Eq. (<ref>) is satisfied, the EW vacuum is stable during inflation if the Hubble scale during inflation is low enough, i.e., H_inf ≪ h_inst. Otherwise, the de Sitter fluctuation of the Higgs field is too large for the Higgs to stay at the local minimum of the potential.

Next, let us consider the opposite case: m_h^2 > 0, or σ_ϕ h v_ϕ + λ_ϕ h v_ϕ^2 < 0. In this case, h = 0 is always a local minimum of the potential, and it is stable against the de Sitter fluctuation if

H_inf^2 ≪ max[h_inst^2, m_h^2/|λ_h|].

If the condition (<ref>) or (<ref>) is satisfied, the Higgs field effectively stays around the origin without overshooting the potential barrier due to the de Sitter fluctuation. However, this does not guarantee the vacuum stability after inflation, since the Higgs fluctuation can be resonantly enhanced during the preheating stage, as studied in detail in the next section.

§ INFLATON AND HIGGS DYNAMICS DURING PREHEATING

In this section, we analytically describe the preheating dynamics of our system. We first discuss resonant inflaton production in Sec. <ref>. Since the inflaton potential around the minimum is far from quadratic in low-scale inflation models, inflaton particles are resonantly produced from the inflaton condensate. In fact, the inflaton fluctuations can even be tachyonic during the preheating epoch. Hence, the inflaton production is so efficient that the backreaction destroys the inflaton condensate within several oscillations. It sets the end of the preheating epoch, and hence sets the upper bound on the time during which we follow the dynamics in this paper.

Then we discuss resonant Higgs production in Sec. <ref>. There we make use of the crude approximation that the inflaton potential is dominated by the quadratic term. This is because the purpose of that subsection is to understand the Higgs production qualitatively and to make an order-of-magnitude estimate of the constraints on the couplings. A more rigorous analysis is performed numerically in the next section.

§.§ Inflaton dynamics during tachyonic oscillation

The inflaton oscillation is typically dominated by the flat part of the potential just after inflation, and it causes the so-called tachyonic preheating phenomenon. Below we closely follow the discussion in Ref. <cit.> concerning the linear regime of tachyonic preheating. More details are given in App. <ref>. There are two stages of tachyonic preheating.
The first stage is further divided into the epoch between the points |η| = 1 and ϵ = 1, and the interval between ϵ = 1 and the first passage of ϕ = v_ϕ. Here ϵ and η are the slow-roll parameters:

ϵ ≡ (M_Pl^2/2)(V'/V)^2,  η ≡ M_Pl^2 V''/V.

The tachyonic growth starts after |η| ≳ 1, where there is a large hierarchy between η and ϵ in low-scale inflation models. Therefore, the tachyonic growth occurs in the plateau regime of the inflaton potential, and the inflaton fluctuations with k/a ≲ H_inf will develop. While the inflaton is rolling down the potential, higher-momentum modes with H_inf < k/a ≲ m_ϕ also experience tachyonic growth, but the modes with low k/a (≲ H_inf) are most enhanced because they have more time to develop. The inflaton fluctuation with such low momenta at ϕ = v_ϕ is estimated as[More precisely, the inflaton fluctuation δϕ_k should be regarded as its gauge-invariant generalization, taking account of the scalar metric perturbation (see Ref. <cit.> for more detail). Also, note that the curvature perturbation on large scales is conserved since δϕ_k ∝ ϕ̇.]

δϕ_k(ϕ(t)=v_ϕ)/δϕ_k(|η|=1) = ϕ̇(ϕ(t)=v_ϕ)/ϕ̇(|η|=1) ∼ (M_Pl/v_ϕ)^{n/(n-2)}.

Then the condition for the inflaton fluctuation to remain perturbative after the first passage of ϕ(t) = v_ϕ is ⟨δϕ^2⟩ ≲ v_ϕ^2, and it leads to

(v_ϕ/M_Pl)^{1/(n-2)} ≳ Λ/v_ϕ.

Using the Planck normalization (<ref>), this translates into v_ϕ/M_Pl ≳ 10^-6-10^-5, independently of n. Otherwise, even within one inflaton oscillation, the inflaton condensate may be broken, and the subsequent inflaton-Higgs dynamics would be too complicated. To avoid this complexity, we focus on the case v_ϕ/M_Pl ≳ 10^-6-10^-5, so that we can reliably discuss the Higgs dynamics in the second stage explained below.

In the second stage, the system enters the tachyonic inflaton oscillation regime. During this stage, the inflaton oscillation is far from harmonic, because most of the oscillation period is spent in the flat part of the potential, ϕ ≪ v_ϕ. After the j-th oscillation of the inflaton, the field value at the lower endpoint is given by

ϕ_j/v_ϕ ≃ [√3 j v_ϕ/(2M_Pl)]^{1/n}.

The most enhanced mode during this tachyonic oscillation stage is basically determined by the curvature of the inflaton potential at ϕ = ϕ_j:

k_*/a ≃ m_ϕ (j v_ϕ/M_Pl)^{(n-2)/(2n)}.

It is this mode (k = k_*) that is most enhanced through the whole tachyonic preheating process. Note that this is much different from the ordinary broad resonance, in which the inflaton oscillates about a quadratic potential. In our case, the fluctuation becomes nonlinear, i.e., ⟨δϕ^2⟩ ∼ v_ϕ^2, within several oscillations. See App. <ref> for more detail.

In summary, the inflaton fluctuation becomes nonlinear within several oscillations due to the tachyonic preheating. To avoid complications arising from the nonlinearity and thermalization, as well as possible model-dependent discussions, we conservatively require that the vacuum remains stable at least until the inflaton fluctuation becomes nonlinear. Otherwise, we cannot avoid the catastrophe anyway. Thus, the tachyonic production of inflaton particles sets the upper bound on the time during which we follow the dynamics in this paper.

§.§ Higgs dynamics during preheating

Now we are in a position to study the growth of the Higgs field fluctuation during the preheating stage. In this subsection, we crudely approximate the inflaton potential as quadratic, although the actual inflaton potential just after inflation is typically far from quadratic for low-scale inflation models.
Nevertheless, it helps us to understand the numerical results in the next section.

The potential of the inflaton and Higgs in the inflaton oscillation phase is

U(φ, h) = (m_ϕ^2/2) φ^2 + (λ_h/4) h^4 + (σ̃_ϕ h/2) φh^2 + (λ_ϕ h/2) φ^2h^2,

where the inflaton potential is approximately taken to be quadratic around the potential minimum. We consider the preheating dynamics of this system, i.e., the resonant Higgs particle production due to the inflaton oscillation.[The EW vacuum stability of this system during the preheating epoch is studied for large-field inflation models in Refs. <cit.>.] The linearized equation of motion of the Higgs is

ḧ_k + (k^2 + σ̃_ϕ h φ + λ_ϕ h φ^2) h_k = 0,

where the dot denotes the derivative with respect to time. We have moved to momentum space, with k being the momentum, and neglected the Hubble expansion because of Eq. (<ref>). The inflaton oscillation is described as

φ = φ_ini cos(m_ϕ t)

under the quadratic approximation. Here φ_ini is the initial inflaton oscillation amplitude, which is roughly φ_ini ∼ v_ϕ (remember that φ ≡ v_ϕ - ϕ). Note again that, although the oscillation amplitude decreases in time due to the Hubble expansion, the Hubble parameter is so small that the effect of the Hubble expansion is practically negligible in low-scale inflation models with H_inf ≪ m_ϕ [Eq. (<ref>)]. By substituting it into Eq. (<ref>), we obtain the Whittaker-Hill equation:

h_k'' + [A_k + 2p cos 2z + 2q cos 4z] h_k = 0,

where

A_k ≡ 4k^2/m_ϕ^2 + 2q,  p ≡ 2σ̃_ϕ h φ_ini/m_ϕ^2,  q ≡ λ_ϕ h φ_ini^2/m_ϕ^2,  z ≡ m_ϕ t/2,

and the prime denotes the derivative with respect to z. The term with q leads to the usual parametric resonance <cit.>, while the term with p potentially leads to a tachyonic resonance <cit.>. In Fig. <ref>, we show the stability/instability chart of the Whittaker-Hill equation for k = 0 for both positive and negative q. If the parameters are in the instability region (the unshaded region), Eq. (<ref>) has exponentially growing solutions, resulting in resonant Higgs production. A similar stability/instability chart can be drawn for finite-k modes. The resonance parameters p and q are useful for estimating the strength of the resonance even for a potential that is far from quadratic, as in the case of the hilltop potential. For more details on the Whittaker-Hill equation and the Floquet theory, see, e.g., Refs. <cit.> and references therein.

In terms of the resonance parameters, the condition

p + 2q ≥ 0

is necessary for the Higgs not to be tachyonic during inflation. Although a tachyonic Higgs during inflation does not necessarily cause a problem as long as Eq. (<ref>) is fulfilled (see Sec. <ref>), we will assume that Eq. (<ref>) holds in the following for simplicity. Once the resonant Higgs production occurs, it forces the EW vacuum to decay into the deeper minimum <cit.>. This is because the produced Higgs particles induce the following tachyonic mass through the Higgs quartic self-coupling:

m_tac;h^2 ≃ 3λ_h ⟨h^2⟩,

where we have used the mean-field approximation. Note that the dispersion is typically ⟨h^2⟩ ≳ m_ϕ^2 for the resonant particle production, and thus we expect λ_h < 0, as can be seen from Eqs. (<ref>) and (<ref>). Thus we can constrain the resonance parameters, or the couplings, by requiring that the EW vacuum is stable during the preheating (i.e., within the first several inflaton oscillations). The tachyonic resonance is effective if |p| exceeds order unity (see Fig. <ref>), so we may require

|p| ≲ 𝒪(1)

for the EW vacuum stability during the preheating. We will confirm this expectation by classical lattice simulations <cit.> with the full hilltop inflaton potential in the next section.
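The stability chart underlying this criterion can be reproduced numerically: the sketch below (our own illustration, with arbitrary example values of p and q) integrates the Whittaker-Hill equation over one period and extracts the Floquet exponent from the monodromy matrix; a positive exponent signals resonant growth of h_k.

    import numpy as np
    from scipy.integrate import solve_ivp

    def floquet_exponent(p, q, A=None):
        """Growth exponent (per unit z) of h'' + [A + 2p cos(2z) + 2q cos(4z)] h = 0."""
        A = 2.0 * q if A is None else A              # k = 0 mode: A_k = 4k^2/m_phi^2 + 2q
        def rhs(z, y):                               # y = (h, h')
            return [y[1], -(A + 2*p*np.cos(2*z) + 2*q*np.cos(4*z)) * y[0]]
        M = np.empty((2, 2))
        for j, y0 in enumerate(([1.0, 0.0], [0.0, 1.0])):
            sol = solve_ivp(rhs, (0.0, np.pi), y0, rtol=1e-10, atol=1e-12)
            M[:, j] = sol.y[:, -1]                   # monodromy matrix over one period pi
        return np.log(max(abs(np.linalg.eigvals(M)))) / np.pi

    for p, q in [(0.5, 0.5), (5.0, 5.0), (5.0, -2.0)]:   # illustrative points of the chart
        print(p, q, floquet_exponent(p, q))              # > 0 signals resonant growth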
We will confirm this expectationby classical lattice simulations <cit.>with a full hilltop inflaton potential in the next section. Note that Eq. (<ref>) implies that | q |≲𝒪(1) without any accidental cancellation between σ_ϕ h and λ_ϕ h. However, we will also discuss the case | p|≲𝒪(1) and| q |≫𝒪(1) at the end of the next section for the completeness of this paper.[ Note that q ≳ -𝒪(1) from Eqs. (<ref>) and (<ref>). ]§ NUMERICAL SIMULATIONIn this section we perform classical lattice simulations to study the EW vacuum stability during the preheating epoch. For concreteness, we take n = 6 in the inflaton potential (<ref>). The CMB normalization (<ref>) implies (Λ/)^4 ≃ 7× 10^-14(v_ϕ/)^3 (N/60)^-5/2. For example, for v_ϕ/M_Pl=10^-3, we have Λ/M_Pl≃ 3× 10^-6, H_inf≃ 10^7 GeV and m_ϕ≃ 2× 10^11 GeV. Thus the parameters satisfy H_inf≪ h_max < m_ϕ, and hence this model is indeed a good example of our general argument in the previous sections. The condition (<ref>) is given in terms of p and q as | p|≲v_ϕ/m_ϕ(v_ϕ/)^5/4, | q|≲v_ϕ/m_ϕ(v_ϕ/)^5/4. In the present case of n=6, the right-hand sides of these inequalitiesare larger than unity for v_ϕ/M_Pl≳ 10^-4.We numerically solved the classical equations of motion derived from the Lagrangian (<ref>) as well as the Friedmann equations. We start to solve the equations when the slow-roll parameter ϵ becomes unity. It corresponds to _ini≃ 0.74 v_ϕ for v_ϕ = 10^-2, and _ini≃ 0.84 v_ϕ for v_ϕ = 10^-3. We took the initial velocity of the inflaton as zero. We also introduced initial Gaussian fluctuations that mimic the quantum fluctuations for the inflaton and the Higgs. We have assumed that they are in the vacuum state initially. This is justified for v_ϕ/≳ 10^-6–10^-5 since we can safely neglect inflaton particle production at the first stage in this case as discussed in Sec. <ref>. We have also added h^6 term in the Higgs potential just for numerical convergence. We have checked that it does not modify the dynamics before the EW vacuum decays. The parameters of our lattice simulations are summarized in Tab. <ref>. For more details on the classical lattice simulation,see for instance Refs. <cit.> and references therein.Since we have two different momentum scales (Eq. (<ref>) and m_ϕ), we must take the number of grids N_g to be large.This is why we took the spatial dimension to be two instead of three (see Tab. <ref>). As far as the linear regime is concerned, the results are not expected to change drastically for different numbers of spatial dimensions.We show our numerical results for v_ϕ = 10^-2 and v_ϕ = 10^-3in Figs. <ref> and <ref> respectively. We have followed the dynamics until m_ϕ t = 150 and 250for v_ϕ = 10^-2 and 10^-3, respectively, since the inflaton condensation is broken slightly before these times. The black line is the inflaton condensation ⟨φ⟩^2, the red line is the inflaton dispersion ⟨φ^2 ⟩ - ⟨φ⟩^2, and the blue line is the Higgs dispersion ⟨ h^2 ⟩, where the angle brackets denote the spatial average.They are normalized by the initial amplitude of the inflaton condensation _ini^2. The resonance parameters p and q are written at the tops of these figures. Let us start with the upper panelsin Figs. <ref> and <ref>. There, the resonance parameter p satisfiesp ≳𝒪(1), and both q > 0 and q < 0 cases are considered. As we can see from the figures, the EW vacuum isactually destabilized during the preheating for these cases. On the other hand, we have taken the resonance parameters asp = q ≲𝒪(1) in the lower left panels in Figs. <ref> and <ref>. 
In these cases, the EW vacuum survives the preheating. Thus the numerical results are consistent with our expectation in Sec. <ref>. That is,

|p| = 2σ̃_ϕ h φ_ini/m_ϕ^2 ≲ 𝒪(1)

is required for the stability of the EW vacuum during the preheating. We have checked that this criterion is indeed satisfied for several other values of p and q. In particular, we have also calculated the case p < 0. In this case, the Higgs becomes tachyonic in the region φ > 0, where the inflaton spends more time during its oscillation. Hence the Higgs is more likely to be enhanced, and the EW vacuum decays faster than in the case p > 0.[Note that the trilinear coupling eventually dominates over the quartic coupling as the inflaton approaches the minimum of its potential.] In any case, the EW vacuum is stable during the preheating as long as Eq. (<ref>) holds and |q| ∼ |p|. The bound (<ref>) does not strongly depend on v_ϕ, since it is expressed solely in terms of the resonance parameters. This is consistent with the numerical results for the two different values of v_ϕ. Eq. (<ref>) is our main result in this paper, and it also implies |q| ≲ 𝒪(1) if there is no tuning of the parameters.

Still, we have also considered the case |p| ≪ q for the completeness of our study. Note again that an accidental cancellation between σ_ϕ h and λ_ϕ h is necessary to achieve q ≫ 𝒪(1) while satisfying Eq. (<ref>) (see footnote <ref>). In this case, the situation is more complicated. When the parametric resonance is dominant, the condition for the EW vacuum destabilization in the linear regime is estimated as <cit.>[Apparently, the condition |λ_h⟨h^2⟩| ≲ k_h^2 does not guarantee the stability of the homogeneous mode of the Higgs, but actually it does. We briefly explain the reason below; see Ref. <cit.> for the original argument. As can be seen from Eq. (<ref>), the Higgs acquires a positive mass term from the Higgs-inflaton coupling. The Higgs escapes from its origin only when the tachyonic mass |δm_tac;h^2| overcomes the Higgs-inflaton coupling. Expanding the effective Higgs mass around ϕ = v_ϕ, one can estimate the time interval δt during which |δm_tac;h^2| ≳ m_h^2(ϕ) as |δm_tac;h^2| ∼ q m_ϕ^4 δt^2. If the tachyonic mass term significantly drives the Higgs field during this time interval, i.e., |δm_tac;h^2| δt ≳ 1, the vacuum decay takes place. This requirement coincides with Eq. (<ref>).]

|λ_h| ⟨h^2⟩ ≳ k_h^2,

where k_h ≡ m_ϕ q^{1/4} is the typical momentum of the produced Higgs particles. The dispersion grows like ⟨h^2⟩ ∼ k_h^2 e^{μ_g m_ϕ t}, and the growth factor μ_g does not depend much on q for the parametric resonance <cit.>. Hence the value of q is not so important in this condition. As a result, it is likely that the EW vacuum does not decay during the linear regime even if we take q to be larger, since we have restricted the number of inflaton oscillations in our analysis (only several) to avoid complications associated with the nonlinear behavior of the inflaton. However, as the inflaton fluctuations grow and become nonlinear, they can also produce Higgs particles through scatterings. This corresponds to the beginning of the thermalization, which is studied in detail in, e.g., Ref. <cit.>. In this regime, the variances of the fields interacting with each other tend to converge to similar values through the scatterings. Therefore, as q (or λ_ϕ h) becomes larger, the variance of the Higgs ⟨h^2⟩ approaches that of the inflaton ⟨φ^2⟩ faster.
In the present case, this might destabilize the EW vacuum, since |λ_h| ≫ λ_ϕ h. Indeed, in the lower right panels in Figs. <ref> and <ref>, the EW vacuum is destabilized at almost the same time as the system becomes nonlinear for q ≳ 𝒪(10). Thus, it might be expected that

q = λ_ϕ h φ_ini^2/m_ϕ^2 ≲ 𝒪(10)  if  |p| ≲ 𝒪(1)

is at least required for the stability of the EW vacuum during and also after the preheating. If we follow the thermalization process for a longer time, the constraints may become tighter than Eqs. (<ref>) and (<ref>). In this sense, Eqs. (<ref>) and (<ref>) are just necessary conditions, and we must also follow the dynamics after the preheating to determine the ultimate fate of the EW vacuum. However, to address this issue, we should take into account the couplings between the Higgs and the other SM particles, which might stabilize the EW vacuum. We leave such a study for future work.

§ SUMMARY AND DISCUSSIONS

In this paper, we have studied the implications of the EW vacuum metastability during the preheating epoch in low-scale inflation models, taking a hilltop inflation model as an example. We have shown that, although the EW vacuum is naturally stable during inflation for low-scale inflation models, it may decay into the deeper minimum during the preheating epoch due to resonant Higgs production.

One of the particular features of the hilltop inflation model is that there is a tachyonic preheating in the inflaton sector itself, which is so strong that the inflaton fluctuation becomes nonlinear within several inflaton oscillations. To avoid complications arising from the nonlinearity of the inflaton, we derived necessary conditions on the resonance parameters, |p| ≲ 𝒪(1) and q ≲ 𝒪(10), by requiring that the vacuum remains stable until the inflaton becomes nonlinear (see Eq. (<ref>) for the definitions of p and q). However, we also found that, even after the inflaton field becomes completely inhomogeneous, thermalization processes between the inflaton and the Higgs tend to enhance the Higgs fluctuation, which might cause the EW vacuum decay. In addition, the production of other SM particles may also become relevant on such a long time scale, whose effects are unclear. We did not give a concrete bound taking into account such effects, due to the complexity of the system and the limitations of the numerical simulation. In this sense, the bounds we derived should be regarded as just necessary conditions.

Still, it might be possible to estimate sufficient conditions on the Higgs-inflaton couplings to avoid the EW vacuum decay. If the couplings are small enough (|p|, |q| ≪ 1), the band width of the Higgs resonance becomes narrow <cit.>, and the Hubble expansion can shut off the resonant Higgs production. The condition that the narrow resonance does not occur is written as <cit.>

p^2, q^2 ≲ H_inf/m_ϕ ∼ v_ϕ/M_Pl.

If it is satisfied, the only way to produce Higgs bosons is the ordinary perturbative decay/annihilation of the inflaton (without Bose enhancement). The perturbative decay/annihilation rates may be estimated as

Γ(φ → hh) ≃ σ̃_ϕ h^2/(32π m_ϕ),  Γ(φφ → hh) ∼ λ_ϕ h^2 ⟨φ^2⟩/(32π m_ϕ).

One may estimate a conservative bound, which is free from the uncertainty of thermalization, by requiring that the Higgs dispersion from the perturbative decay/annihilation never exceeds the instability scale, ⟨h^2⟩ < h_inst^2:

p^2, q^2 ≲ 𝒪(10^2) (v_ϕ/M_Pl)(h_inst^2/m_ϕ^2).
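Plugging in the n = 6 benchmark of the numerical section gives a feeling for how restrictive these sufficient conditions are (a rough sketch of ours; only the benchmark numbers are input, and φ_ini ∼ v_ϕ is assumed):

    M_PL, v_phi = 2.4e18, 2.4e15        # reduced Planck mass and v_phi = 1e-3 M_Pl (GeV)
    m_phi, h_inst = 2e11, 1e10          # inflaton mass and instability scale (GeV)

    print((v_phi / M_PL) ** 0.5)        # narrow-resonance closure: |p|, |q| < ~0.03

    pq2 = 1e2 * (v_phi / M_PL) * (h_inst / m_phi) ** 2  # conservative bound on p^2, q^2
    print(pq2 ** 0.5)                                   # |p|, |q| < ~0.02

    # trilinear coupling this allows, via p = 2 sigma~ phi_ini/m_phi^2 with phi_ini ~ v_phi
    print(pq2 ** 0.5 * m_phi ** 2 / (2.0 * v_phi), "GeV")   # sigma~ < ~1e5 GeV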
While the bounds (<ref>) might be too conservative, it should be noted that one would need to take the whole thermalization process, including gauge bosons and quarks, into account in order to derive more precise bounds. A few remarks are in order. First, we would like to comment on possible interactions between the Higgs and the inflaton that are not taken into account in the main text. Although it is higher dimensional, the following term can be large since it respects the shift symmetry of the inflaton: δℒ_kin = c_kin (h^2/M_Pl^2)(∂_μϕ)^2. It induces an oscillating Higgs effective mass during the preheating, and hence excites the Higgs fluctuations. If we use the crude approximation that the inflaton potential is quadratic around the minimum, this coupling contributes to A and q in addition to λ_ϕ h, making them independent even for the mode k = 0. By requiring Eq. (<ref>) again, we roughly estimate the constraint as | c_kin | ≲ 𝒪(10) × M_Pl^2/v_ϕ^2. In Ref. <cit.>, it is found that the resonance can be suppressed by making the ratio A/q larger. However, in the present case, the inflaton potential is actually far from quadratic just after inflation, and hence it might be difficult to cancel the oscillating parts of ϕ̇^2 and ϕ^2 against each other. A similar discussion applies to the Higgs-gravity non-minimal coupling ξ_h h^2 R. The second remark concerns the possibility that the Higgs mass at φ = 0 in the early universe is different from that in the present universe. This is possible if, for instance, the Higgs couples to a scalar field χ other than the inflaton which has a finite VEV in the early universe. The cancellation (<ref>) does not hold in this case, and the resonance due to the inflaton oscillation can be suppressed if the Higgs mass at φ = 0 is larger than of order m_ϕ. However, χ must relax to its potential minimum at some epoch so that the Higgs mass is of the order of the EW scale in the present universe. We may then need to discuss the resonant Higgs production during such a relaxation of χ instead, if the mass of χ is larger than the instability scale of the EW vacuum. Third, we comment on other low-scale inflation models. While there are various classes of low-scale inflation models, we expect that the bounds we found (|p| ≲ 𝒪(1) and |q| ≲ 𝒪(10)) do not change much. This is because our bounds only depend on the form of the Higgs-inflaton potential around the minimum (<ref>). Thus they may be applied to other low-scale models, e.g., hybrid inflation <cit.> and attractor inflation <cit.>, although a more detailed study is needed to rigorously confirm this. Finally, we stress again that it is still far from clear under what conditions the EW vacuum is stable from the end of the preheating to the end of the thermalization process. On the one hand, the EW vacuum stability during inflation and preheating is studied in detail in this paper as well as in the previous literature <cit.>. On the other hand, it is known that the lifetime of the EW vacuum is long enough once the system is completely thermalized <cit.>. However, we are still lacking studies on the EW vacuum (in)stability from the end of the preheating to the end of the thermalization. Just after the preheating, the momentum distribution of the Higgs (as well as of the other SM particles) is far from thermal equilibrium, and it evolves with time due to scatterings while approaching thermal equilibrium. It is possible that the EW vacuum decay is activated during this thermalization process depending on the shape of the momentum distribution.
For instance, if the fluctuations of the Higgs modes become larger than those of the other SM particles at some time, the vacuum decay can be enhanced; the resonant particle production studied in this paper may be viewed as an extreme example of this situation. Thus, the fate of our vacuum is expected to depend strongly on the details of the thermalization process. Moreover, the thermalization process also depends on the reheating temperature of the universe, namely on the couplings between the inflaton and the SM particles other than the Higgs. This issue is worth investigating in detail since we cannot avoid discussing this point to determine the ultimate fate of the EW vacuum. Hopefully we will come back to this issue in a future publication. § ACKNOWLEDGMENTS This work was supported in part by the JSPS Research Fellowships for Young Scientists (YE and KM) and the Program for Leading Graduate Schools, MEXT, Japan (YE). This work was also supported by the Grant-in-Aid for Scientific Research A (No.26247042 [KN]), Young Scientists B (No.26800121 [KN]) and Innovative Areas (No.26104009 [KN], No.15H05888 [KN]). § TACHYONIC PREHEATING AFTER HILLTOP INFLATION In this Appendix, we summarize the properties of tachyonic preheating during the inflaton oscillation after hilltop inflation. Most of the discussion below follows Ref. <cit.>. Let us denote by ϕ_j the lower endpoint field value of the inflaton after the j-th inflaton oscillation and by t_j the time at which ϕ = ϕ_j. The endpoint is evaluated from energy conservation: V(ϕ_j) - V(ϕ_j+1) = ∫_t_j^t_j+1 dt 3Hϕ̇^2 = ∫_ϕ_j^ϕ_j+1 dϕ 3Hϕ̇. The integral is dominated around the potential minimum, where ϕ ∼ v_ϕ and |ϕ̇| ∼ Λ^2. Thus we obtain ϕ_j/v_ϕ ∼ ((√3/2) j v_ϕ/M_P)^(1/n). Note that ϕ_j=1 > ϕ_end^(ϵ), where ϕ_end^(ϵ)/v_ϕ ∼ (v_ϕ/M_P)^(1/(n-1)) denotes the field value at ϵ = 1. The time period of the j-th oscillation is given by t_j+1 - t_j ∼ (1/m_ϕ)(v_ϕ/ϕ_j)^((n-2)/2); hence it is much longer than the inverse of the mass scale around the potential minimum, as clearly seen in Figs. <ref>-<ref>. We consider the growth of the inflaton fluctuation δϕ_k with a wavenumber k in the linear approximation during the (j+1)-th oscillation: t_j ≤ t ≤ t_j+1. Below, we neglect the Hubble expansion, since all the time scales are much shorter than the Hubble time scale, and take the scale factor a = 1 for notational simplicity. We further divide one oscillation into three phases: (a) t_j < t < t_m^-, (b) t_m^- < t < t_m^+, (c) t_m^+ < t < t_j+1, where t_m^± denote the times when ϕ passes through ϕ_m, the field value at which V'' attains its most negative value: ϕ_m/v_ϕ = ((n-2)/(2(2n-1)))^(1/n). First, in stage (a), modes with k ≲ m_ϕ experience tachyonic instability within the field range ϕ_tac ≤ ϕ < ϕ_m, where ϕ_tac is the field value at which the mode begins to be tachyonic: ϕ_tac/v_ϕ ≃ (ϕ_j/v_ϕ) × max[1, (k/k_*)^(2/(n-2))], with k_* ∼ m_ϕ (j v_ϕ/M_P)^((n-2)/(2n)) corresponding to the tachyonic mass scale around ϕ = ϕ_j. Then the inflaton fluctuation δϕ_k is enhanced by an exponential factor e^X_k with X_k = ∫_ϕ_tac^ϕ_m (√(|V'' + k^2|)/ϕ̇) dϕ ∼ √(n(n-1)/2) log(ϕ_m/ϕ_tac). However, it should be noticed that the same mode also experiences exponential decay in the third stage (c). It is easy to see that in the limit k → 0 this exponential decay during stage (c) exactly cancels the exponential growth during stage (a), because the dynamics is just the same as that of the homogeneous mode.
For finite k, however, there is a phase shift during stage (b), which causes a mismatch between the growing solution in stage (a) and the decaying solution in stage (c). Schematically, the phase of δϕ_k is rotated during stage (b) as e^{i√(m_ϕ^2+k^2) t} ∼ e^{i m_ϕ t}(1 + i k^2/m_ϕ^2), where we used k ≪ m_ϕ and t ∼ t_m^+ - t_m^- ∼ m_ϕ^-1. Therefore, a small fraction k^2/m_ϕ^2 at the end of stage (b) connects to the growing mode in stage (c). The net enhancement factor in one oscillation is then estimated as F_k ≡ |δϕ_k(t_j+1)/δϕ_k(t_j)| ∼ |1 + i (k^2/m_ϕ^2) e^{2X_k}|. Using (<ref>), it is found that F_k is peaked around k ≃ k_*, where we have F_k_* ∼ (m_ϕ^2/k_*^2)^(x_n-1), x_n ≡ √(2n(n-1))/(n-2). Note that it is much larger than unity; hence the inflaton fluctuation is enhanced by orders of magnitude within one oscillation for v_ϕ ≪ M_P. This is very different from ordinary preheating via parametric resonance. The variance of the field fluctuation after the j-th inflaton oscillation is now dominated by the modes k ∼ k_* and estimated as ⟨δϕ^2⟩ ∼ k_*^2 (F_k_*)^2j ∼ m_ϕ^2 (M_P/v_ϕ)^{(n-2)[2j(x_n-1)-1]/n}. This holds if x_n > 3/2, which is valid for n < 27. Thus it takes only a few or several inflaton oscillations for the fluctuation to become nonlinear for low-scale inflation with v_ϕ ≪ M_P. | http://arxiv.org/abs/1706.08920v2 | {
"authors": [
"Yohei Ema",
"Kyohei Mukaida",
"Kazunori Nakayama"
],
"categories": [
"hep-ph",
"astro-ph.CO"
],
"primary_category": "hep-ph",
"published": "20170627160548",
"title": "Electroweak Vacuum Metastability and Low-scale Inflation"
} |
Narendra Nath Patra^1, E-mail: [email protected] ^1 National Centre for Radio Astrophysics, Tata Institute of Fundamental Research, Pune University campus, Pune 411 007, India Molecular scale height in NGC 7331 ================================== Combined Poisson-Boltzmann equations of hydrostatic equilibrium were set up and solved numerically for the different baryonic components to obtain the molecular scale height, as defined by the Half Width at Half Maximum (HWHM), in the spiral galaxy NGC 7331. The scale height of the molecular gas was found to vary between ∼ 100-200 pc depending on the radius and the assumed velocity dispersion. The solutions of the hydrostatic equation and the observed rotation curve were used to produce a dynamical model and consequently a simulated column density map of the molecular disk. The modelled molecular disk was found to match the observed one reasonably well, except in the outer disk regions. The molecular disk of NGC 7331 was projected to an inclination of 90^o to estimate its observable edge-on thickness (HWHM), which was found to be ∼ 500 pc. With an HWHM of ∼ 500 pc, the observed edge-on extent of the molecular disk was seen to be ∼ 1 kpc from the mid-plane. This indicates that in a typical galaxy, hydrostatic equilibrium can in fact produce a few-kiloparsec-thick observable molecular disk, which was thought to be difficult to explain earlier. ISM: molecules – molecular data – galaxies: structure – galaxies: kinematics and dynamics – galaxies: individual: NGC 7331 – galaxies: spiral § INTRODUCTION Molecular gas plays a significant role in galaxy formation and evolution. Molecular clouds provide the sites of active star formation and hence host a suite of stellar activity, e.g., stellar feedback, supernovae, etc. The conversion of the gas into stars is significantly regulated by this phase of the ISM. Hence, the abundance and distribution of the molecular gas in galaxies are of immense importance. Dynamically, the molecular gas in galaxies is expected to settle into a thin disk due to its low thermal pressure; however, its three-dimensional distribution is still poorly understood. It is challenging to measure the three-dimensional distribution of the molecular clouds in the Galaxy, mainly due to distance ambiguities and the high opacity of ^12CO emission. The out-of-plane distribution of molecular gas outside the solar circle was first reported by <cit.> using a large-scale ^12CO survey. Extending their study, <cit.> used the IRAS point source catalogue and ^12CO observations to identify molecular clouds and their distribution in the outer Galaxy. In the inner Galaxy, the extinction is significant, and the distances to the molecular clouds were estimated mainly from their latitudes and hence are ambiguous <cit.>. From these studies, one can only conclude that most of the molecular gas resides in a thin disk of scale height (HWHM) ∼ 75 pc in the inner Galaxy, which can flare up to 200-250 pc in the outer Galaxy. However, many recent studies provide mounting evidence of a much thicker molecular disk than what was expected earlier. For example, <cit.> observed NGC 891 using the IRAM 30 m telescope and found a thick molecular disk. They detected molecular gas at ∼ 1-1.4 kpc above and below the mid-plane. <cit.> also showed that the molecular component in the M51 galaxy contains a diffuse component with a typical scale height of ∼ 200 pc.
<cit.> studied two nearly face-on galaxies, NGC 628 and NGC 3938, and found that the velocity dispersions of the molecular and atomic gas are similar, indicating a diffuse component of the molecular gas with high velocity dispersion. <cit.> studied the stacked spectra of HI and CO in a comprehensive sample of 12 galaxies using THINGS <cit.> and HERACLE <cit.> survey data, respectively, and found that the velocity dispersion of the molecular gas is almost the same as that of the atomic gas. Very recently, <cit.> used the same sample to study individual HI and CO spectra and found a somewhat lower velocity dispersion for the molecular gas than for the atomic gas. Nevertheless, the line width of the molecular gas was found to be ∼ 8-10 km s^-1, which is much higher than the velocity dispersion expected in a thin disk. Though these studies indicate a thicker molecular disk in external galaxies, they do not provide any detailed three-dimensional distribution. Direct measurement of the same is not possible (even for an edge-on galaxy) due to line-of-sight integration effects. Theoretically, the vertical distribution of the gas disk is determined by the balance between gravity and pressure under hydrostatic equilibrium. Hence, the distribution of the molecular gas (or any other component) can be obtained by solving the combined Poisson-Boltzmann equation of hydrostatic equilibrium at any radius. Moreover, the solutions can then be used to produce observables and compared with real observations to constrain various input parameters (e.g., velocity dispersion, inclination, etc.) of the hydrostatic equilibrium equation <cit.>. Many previous studies used the vertical hydrostatic equilibrium condition to estimate the vertical structure of the atomic gas in spiral and dwarf galaxies <cit.>. These studies neglected the contribution of the molecular gas due to a lack of observational inputs. With recent surveys of molecular gas in nearby galaxies (e.g., HERACLE <cit.>), it is possible to estimate the vertical scale height of the molecular gas in nearby spiral galaxies. However, detecting molecular gas in dwarf galaxies remains challenging, and no significant detections were made in spite of substantial efforts <cit.>. In this paper, I set up the combined Poisson-Boltzmann equation of hydrostatic equilibrium for a galactic disk with different baryonic components, i.e., stars, atomic gas and molecular gas, under the external potential of the dark matter halo, and solve it for the galaxy NGC 7331. Many aspects of NGC 7331 (e.g., stellar content, star formation rate, etc.) are remarkably similar to those of the Galaxy, and it is sometimes referred to as “the Milky Way's twin”. However, there also exist a few structural differences between them; e.g., the Galaxy is believed to be a barred spiral, whereas NGC 7331 does not have a bar. These facts make it even more interesting to study this galaxy. Along with that, this galaxy is part of the THINGS survey <cit.> and the HERACLE survey <cit.>, which makes the necessary data available for this study. Moreover, this galaxy has an inclination of ∼ 76^o, which makes it suitable for a reliable estimation of its rotation curve while, at the same time, its gas disk produces an observed thickness that is sensitive to the vertical structure of the gas disk. I numerically solve the second-order coupled differential equations to obtain the vertical structure of the molecular disk. Hydrostatic equilibrium is a crucial assumption for this study.
Any violation of this assumption would lead to a wrong interpretation of the data. Many previous studies revealed the existence of starburst-driven molecular outflows and supershells, which are expected to disturb the hydrostatic equilibrium <cit.>. However, these disturbances are mostly restricted to the central region of ≲ 1 kpc, where the star formation rate is much higher than in the outer parts of a galaxy. A central region of 2 kpc was excluded in this study to avoid several complications related to hydrostatic equilibrium, and it is expected that the assumption of hydrostatic equilibrium would be mostly valid for the rest of the disk of NGC 7331. Even though it is possible that in places within the galaxy this assumption might not hold good, as I am using azimuthally averaged quantities to estimate the molecular scale height, the local fluctuations are expected to be smoothed away. § MODELLING THE GALACTIC DISKS §.§ Formulation of the equations I model the galactic disk assuming it to be a three-component system consisting of stars, atomic gas and molecular gas, settled under mutual gravity in the external potential field of the dark matter halo. All the disks of the different components would then individually be in vertical hydrostatic equilibrium. For simplicity, all the baryonic disks are considered to be coplanar, concentric and symmetric. Here I set up the hydrostatic equilibrium equation in cylindrical polar coordinates (R, z). The observed column density distributions of the different baryonic components were hence deprojected (Fig. <ref>) to obtain the surface density distributions, which are in cylindrical polar coordinates too. The potential of the dark matter halo is considered to be fixed and can be determined observationally (from mass modelling). Poisson's equation for the disks plus the dark matter halo can then be given as 1/R ∂/∂R (R ∂Φ_total/∂R) + ∂^2 Φ_total/∂z^2 = 4πG (∑_i=1^3 ρ_i + ρ_h), where Φ_total is the total potential due to all the disk components and the dark matter halo. ρ_i indicates the volume density of the different baryonic components, where i runs over stars, atomic gas and molecular gas. ρ_h denotes the mass density of the dark matter halo. As an NFW profile describes the dark matter halos of spiral galaxies better than an isothermal one <cit.>, I chose to adopt an NFW distribution to represent the dark matter halo of NGC 7331. The dark matter density profile of an NFW halo <cit.> can be given as ρ(R) = ρ_0/[(R/R_s)(1 + R/R_s)^2], where ρ_0 is the characteristic density and R_s is the scale radius. These two parameters completely describe a spherically symmetric NFW dark matter halo. The equation of hydrostatic equilibrium for an individual component can be written as ∂/∂z (ρ_i ⟨σ_z^2⟩_i) + ρ_i ∂Φ_total/∂z = 0, where ⟨σ_z⟩_i is the vertical velocity dispersion of the i^th component, an input parameter. Eliminating Φ_total from Equations (1) and (3), ⟨σ_z^2⟩_i ∂/∂z (1/ρ_i ∂ρ_i/∂z) = -4πG (ρ_s + ρ_HI + ρ_H_2 + ρ_h) + 1/R ∂/∂R (R ∂Φ_total/∂R), where ρ_s, ρ_HI and ρ_H_2 are the mass densities of stars, atomic gas and molecular gas, respectively. Eq. <ref> represents three second-order partial differential equations in the variables ρ_s, ρ_HI and ρ_H_2. However, the above equation can be further simplified using the fact <cit.> that (R ∂Φ_total/∂R)_R,z = (v_rot^2)_R,z, where (v_rot)_R,z is the circular rotation velocity. Assuming a negligible vertical gradient in (v_rot)_R,z, one can approximate (v_rot)_R,z by the observed rotation curve v_rot, which is a function of R alone. Thus Eq.
<ref> reduces to ⟨σ_z^2⟩_i ∂/∂z (1/ρ_i ∂ρ_i/∂z) = -4πG (ρ_s + ρ_HI + ρ_H_2 + ρ_h) + 1/R ∂/∂R (v_rot^2). Eq. <ref> represents three coupled, second-order ordinary differential equations in the variables ρ_s, ρ_HI and ρ_H_2. The solution of Eq. <ref> at any radius (R) gives the density of these components as a function of z. Thus, solutions of this equation provide the three-dimensional density distribution of the different disk components. §.§ Input parameters To get the vertical structure of the molecular disk of any galaxy, one needs to solve Eq. <ref>. In this work, I solve Eq. <ref> for the galaxy NGC 7331 to estimate its vertical molecular structure. This galaxy was observed in HI as part of the THINGS survey <cit.>, and the molecular data were taken from the HERACLE survey <cit.>. As discussed in later sections, the inclination of ∼ 76^o of this galaxy favours a comparison between the modelled and the observed molecular disks. In Fig. <ref>, I show the observed column density images of NGC 7331. The left panel shows the column density of HI as observed by the VLA as part of the THINGS survey <cit.>, whereas the middle panel shows the molecular column density map as observed by the 30-metre IRAM telescope as part of the HERACLE survey <cit.>. The right panel of Fig. <ref> shows the total gas column density map, i.e., (HI+H_2). The black dots in the bottom left corners of the respective panels show the observing beams. The HI data were at a higher resolution than the CO data; hence, to get the total gas column density map, I first smoothed the HI data with a Gaussian kernel to produce an output resolution the same as that of the CO map and then summed them together. The grey scales in each panel are in units of M_⊙ pc^-2. I adopt the same CO(2-1) to H_2 conversion factor as given by <cit.>: Σ_H_2 (M_⊙ pc^-2) = 5.5 I_CO(2→1) (K km s^-1). As we will be solving for each component separately, one at a time (see <ref>), the surface densities of the individual components are of particular interest. In Fig. <ref>, I plot the deprojected face-on surface densities of the different disk components (i.e., stars, HI and H_2) as a function of radius. These data were taken from <cit.>. As can be seen from the figure, the molecular gas disk extends up to ∼ 10 kpc from the centre, whereas the HI and the stellar disks extend far out to much larger radii. For details of the surface density calculations, I refer the readers to <cit.>. It should be noted that the surface densities are one of the primary inputs to the hydrostatic equation and are a proxy for the mass distribution in the vertical direction. The vertical velocity dispersions of the different disk components are another vital input to Eq. <ref>. <cit.> showed that the vertical structure of the gaseous components is only marginally affected by the accuracy of the assumed stellar velocity dispersion (σ_s). Hence, the stellar velocity dispersion was calculated analytically by assuming an isothermal disk using the formula σ_s ≃ 1.879 √(l_s Σ_s) <cit.>, where σ_s is the stellar velocity dispersion in km s^-1, l_s is the exponential scale length of the stellar disk in kpc and Σ_s is the stellar surface density in M_⊙ pc^-2. The HI velocity dispersion (σ_HI) in spiral galaxies was studied extensively through spectral line observations. Early work by many authors suggests an HI velocity dispersion of 6-13 km s^-1 <cit.> in galaxies. <cit.> studied the nearly face-on galaxy NGC 1058 and found that σ_HI varies between 4-14 km s^-1 and decreases with radius.
In an extensive analysis, <cit.> studied σ_HI in spiral galaxies from the THINGS survey and found the mean velocity dispersion to be ∼ 10 km s^-1 at r_25. In a later study, <cit.> applied a spectral stacking method to the same data to estimate σ_HI with higher confidence. They found σ_HI = 12.5 ± 3.5 km s^-1 (σ_HI = 10.9 ± 2.1 km s^-1 for galaxies with inclinations less than 60^o). <cit.> studied σ_HI and σ_CO using a stacking technique in a sample of 12 nearby spiral galaxies. They found σ_HI/σ_CO = 1.0 ± 0.2 with a median σ_HI = 11.9 ± 3.1 km s^-1. However, <cit.> studied the same sample by analysing individual high-SNR spectra and found σ_HI/σ_CO = 1.4 ± 0.2 with σ_HI = 11.7 ± 2.3 km s^-1 and σ_CO = 7.3 ± 1.7 km s^-1. As can be seen from these studies, σ_HI in spiral galaxies can be assumed to be ∼ 12 km s^-1. It can be noted that, due to their mass budget, the dark matter halo and the stars dominantly decide the gravitational field that the gas components follow. In such a situation, the distribution of the HI gas can only marginally influence the distribution of the molecular gas, and hence the velocity dispersion of HI has a minimal effect on the scale height of the molecular gas. The velocity dispersion of the molecular gas (σ_H_2), on the other hand, directly influences the vertical structure of the molecular disk. <cit.> observed molecular clouds in the Galaxy and found that the velocity dispersion of low-mass clouds is higher than that of high-mass clouds. The low-mass clouds have σ_H_2 ∼ 9.0 km s^-1, whereas the high-mass clouds have σ_H_2 ∼ 6.6 km s^-1. <cit.> studied two nearly face-on galaxies and found σ_H_2 ∼ 6-8.5 km s^-1. They also found that σ_H_2 is almost constant over the whole galaxy and comparable to the velocity dispersion of HI (σ_HI). <cit.> used observations of ^13CO J = 1 → 0 in 1400 molecular clouds in the Galaxy to find that the velocity dispersion of small clouds is higher than that of Giant Molecular Clouds (GMCs). I assume a primary velocity dispersion of the molecular gas, σ_H_2, of ∼ 7 km s^-1, along with a variation between 6-10 km s^-1 to explore the observed molecular disk in more detail. The second term on the right-hand side of Eq. <ref> represents the contribution of the centripetal acceleration against gravity. The v_rot in Eq. <ref> is an observable quantity. In Fig. <ref>, the observed rotation curves of NGC 7331 from <cit.> and <cit.> are plotted. <cit.> used relatively high-resolution HI data from the THINGS survey. As Eq. <ref> requires the derivative of the rotation velocity (v_rot), it is useful to parametrise the rotation curve instead of using the actual data. The commonly used Brandt profile <cit.>, given as v_rot(R) = V_max (R/R_max)/(1/3 + (2/3)(R/R_max)^n)^(3/2n), was used to parametrise the rotation curve, and the data were fitted to estimate the parameters. V_max = 262.2 ± 0.8 km s^-1, R_max = 6.1 ± 0.1 kpc and n = 0.67 ± 0.06 were found for the <cit.> data, and V_max = 257.5 ± 1.0 km s^-1, R_max = 6.7 ± 0.1 kpc and n = 0.89 ± 0.07 were found for the <cit.> data. The fit parameters for the two data sets match each other very well. I chose to work with the parameters found with the <cit.> data, as they are smoother than the THINGS data; however, I note that this does not make any fundamental difference to the results. The dark matter halo is another important input to the hydrostatic equilibrium equation. For NGC 7331, I used the dark matter halo parameters from <cit.> (Table 4 in their paper). The dark matter halo of NGC 7331 can be described well by both the isothermal and the NFW profiles.
However, as the NFW profile in general describes the dark matter halos of spiral galaxies better than an isothermal one <cit.>, I choose to use an NFW profile as given by Eq. <ref>, with ρ_0 = 1.05 × 10^-3 M_⊙ pc^-3 and r_s = 60.2 kpc (see <cit.> for more details). §.§ Solving the hydrostatic equilibrium equation With all the inputs mentioned above, Eq. <ref> was solved to obtain the vertical structure of the different disk components. The coupled second-order ordinary differential equations were solved numerically using the 8^th-order Runge-Kutta method as implemented in the scipy package. As each equation (for an individual component) is a second-order differential equation, two initial conditions are required to solve it: (ρ_i)_z=0 = ρ_i,0 and (dρ_i/dz)_z=0 = 0. The second boundary condition comes from the fact that at the mid-plane the force, -∂Φ_total/∂z, must be zero due to the symmetry in the vertical direction, whereas the first boundary condition demands that the mid-plane density, ρ_i,0, be known a priori. Though ρ_i,0 is not a directly measurable quantity, its value can be estimated using the observed surface density, as Σ_i = ∫ρ_i(z) dz. For an individual component, e.g., stars, I first assume a trial ρ_s,0 and solve Eq. <ref> assuming ρ_HI = ρ_H_2 = 0. The solution, ρ_s(z), is then integrated to obtain the stellar surface density Σ_s and compared to the observed one (Σ_s^') to update the next trial ρ_s,0. In this way, an appropriate ρ_s,0 is estimated iteratively, such that it reproduces the observed Σ_s^' with better than 0.1% accuracy. It can be noted that, using this approach, the surface densities were found to converge in less than a few hundred iterations. However, as ρ_s, ρ_HI and ρ_H_2 jointly contribute to the gravity in Eq. <ref> (the first term on the RHS), Eq. <ref> has to be solved simultaneously for all the components. We solve it numerically using an iterative approach, adopting a strategy similar to that described in <cit.> (see also <cit.> for an in-depth analysis of self-gravitating galactic disk systems). In the first iteration, all three equations (for stars, HI and H_2) are solved independently, assuming no coupling (single-component disks). Then these single-component solutions are introduced into Eq. <ref> to account for the coupling. In every iteration, these solutions are updated until they converge with acceptable accuracy. For example, in the first iteration, Eq. <ref> is solved assuming a single-component system, and ρ_s(z), ρ_HI(z) and ρ_H_2(z) are obtained. In the next iteration, the solutions for HI and H_2, i.e., ρ_HI(z) and ρ_H_2(z) as obtained in the previous iteration, are frozen while solving for stars. Next, the solutions for stars, ρ_s(z) (updated), and H_2, ρ_H_2(z), are frozen while solving for HI, and finally, the solutions for stars, ρ_s(z) (updated), and HI, ρ_HI(z) (updated), are frozen while solving for H_2. This marks the end of the second iteration, at the end of which coupled solutions are obtained that are better than the single-disk ones. This process is repeated until the solutions converge with better than 0.1% accuracy. Physically, in the first iteration, I solve for an individual component assuming no other components are present. Then, in subsequent iterations, each component is solved in the presence of the frozen distributions of the other two components (as calculated in the previous iteration). Using this approach, the self-consistent solutions were obtained iteratively, starting from the single-component solutions.
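As a concrete illustration of the scheme just described, the following is a minimal single-component sketch in Python. The paper states that an 8^th-order Runge-Kutta method from scipy was used; DOP853 is assumed here as that integrator. The function name, the bracketing interval, the vertical extent zmax, the fixed external density rho_ext and the example call are illustrative assumptions, and the inter-component coupling and the rotation-curve term of Eq. <ref> are omitted for brevity.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def solve_one_component(sigma_obs, sigma_z, rho_ext, zmax=2000.0):
    """Vertical structure of one isothermal component: integrate
    sigma_z^2 d/dz[(1/rho) drho/dz] = -4 pi G (rho + rho_ext),
    written in terms of u = ln(rho), and bisect on the mid-plane density
    rho0 until the surface density matches sigma_obs [M_sun/pc^2].
    sigma_z is in km/s, densities in M_sun/pc^3, heights in pc."""

    def rhs(z, y):
        u, du = y                                   # u = ln(rho)
        d2u = -4.0 * np.pi * G * (np.exp(u) + rho_ext) / sigma_z**2
        return [du, d2u]

    def surface_density(rho0):
        sol = solve_ivp(rhs, [0.0, zmax], [np.log(rho0), 0.0],
                        method="DOP853", dense_output=True, rtol=1e-8)
        z = np.linspace(0.0, zmax, 2000)
        rho = np.exp(sol.sol(z)[0])
        return 2.0 * trapezoid(rho, z), (z, rho)    # both sides of the mid-plane

    lo, hi = 1e-6, 10.0                             # bracketing mid-plane densities
    for _ in range(200):                            # iterate to better than 0.1%
        rho0 = np.sqrt(lo * hi)
        sigma, profile = surface_density(rho0)
        if abs(sigma - sigma_obs) < 1e-3 * sigma_obs:
            break
        lo, hi = (rho0, hi) if sigma < sigma_obs else (lo, rho0)
    return profile

# Illustrative call: 5 M_sun/pc^2 of molecular gas with sigma_z = 7 km/s
# in a fixed external density of 0.05 M_sun/pc^3 (placeholder values).
z, rho = solve_one_component(5.0, 7.0, 0.05)

The bisection on the mid-plane density plays the role of the trial-and-update loop described above, reproducing the observed surface density to the stated 0.1% tolerance; writing the equation in terms of ln(ρ) keeps the integration well behaved as the density falls off with height.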
It can be noted that, for NGC 7331, the solutions converged in less than ten iterations at any radius. It takes about a few minutes to solve the coupled equation at any radius on a normal workstation. However, as the hydrostatic condition at any radius is independent of the other radii, Eq. <ref> can be solved in parallel, and hence, for fast computation, the hydrostatic equation solver was implemented as an MPI-based parallel code. As can be seen from Fig. <ref>, the surface density of the molecular gas does not extend beyond R ∼ 10 kpc, and hence Eq. <ref> was solved at R ≤ 10 kpc. From Fig. <ref>, it can also be noted that the stellar surface density is very high in the central region, which indicates a higher energy input to the molecular disk. This might lead to a violation of the hydrostatic assumption in the central region. Moreover, the assumed dark matter halo density profile (NFW) peaks sharply in the central region. To avoid any divergence due to these factors and a possible non-satisfaction of hydrostatic equilibrium, the central region of 1 kpc was avoided and not solved. Thus, Eq. <ref> was solved within 1 ≤ R ≤ 10 kpc with an interval of 100 pc. The linear spatial resolution of the molecular data is ∼ 1 kpc; hence, a radial interval of 100 pc is expected to be more than enough to sample the molecular disk well in the radial direction. However, it is well known that the vertical thickness of the molecular disk is much smaller than its radial extent, and hence a much higher resolution is needed to sample the molecular disk in the vertical direction. To achieve this, an adaptive resolution depending on the scale height was used, which was found to be always better than a few pc. It should be emphasised here that a fine resolution in the vertical direction is critical, as the molecular disk is very thin. As the molecular gas density varies considerably on parsec scales, a grid resolution of a few parsecs is necessary to sample the distribution of the molecular gas well in the vertical direction. This finely gridded molecular map is then convolved with the observed beam to match the observed resolution for comparison. § RESULTS AND DISCUSSION In Fig. <ref>, sample solutions of Eq. <ref>, i.e., the mass densities of the different disk components (at R = 6 kpc), are plotted as a function of height (z) from the mid-plane. As can be seen from the figure, the molecular gas at R = 6 kpc extends to a much smaller height than the HI or stellar disks. It should be noted that, for an isothermal single-component disk, the density distribution follows a sech^2 law <cit.>. However, due to the coupling of multiple disk components, the solutions deviate from a sech^2 distribution. In this case, it was found that a Gaussian function can reasonably represent the solutions. The deviation from a sech^2 function was seen to be lowest for the stellar disk, as it is the gravitationally most dominant component. The solutions shown in the figure were obtained by solving Eq. <ref> assuming a velocity dispersion of 7 km s^-1 for the molecular gas. However, as discussed earlier, σ_H_2 is found to vary from galaxy to galaxy and even within a galaxy. To examine how this variation can affect the vertical structure of the molecular disk, Eq. <ref> was solved with σ_H_2 varying between 6-10 km s^-1 in steps of 1 km s^-1. The Half Width at Half Maximum (HWHM) of the vertical mass density profile was used as a measure of the vertical width of the molecular disk. In Fig. <ref>, I plot the HWHM profiles of the molecular disk as a function of radius.
For comparison, the HWHM profiles of the atomic gas are also plotted in the figure. It can be seen that the molecular scale height in NGC 7331 varies between ∼ 50-200 pc depending on the assumed σ_H_2 and the radius. For an assumed σ_H_2 = 7 km s^-1, the scale height varies between ∼ 60-100 pc, which is a factor of ∼ 2 smaller than what is observed in the Galaxy. It can also be seen that the molecular scale height changes by a factor of ∼ 2 as one changes σ_H_2 from 6 to 10 km s^-1. Also, the scale heights of both the HI and the H_2 increase at R ≲ 3 kpc. This increase is because of the strong centripetal acceleration due to the rising rotation curve at these radii (see Fig. <ref>). However, this effect is minimal at the outer radii, as the rotation curve flattens significantly at R ≳ 3 kpc. The HWHM profile of the molecular disk is not a directly observable quantity, even for an edge-on galaxy. Instead, the total intensity map is what is obtained through observation. To check the validity of the derived density distribution of the molecular gas, a three-dimensional dynamical model of the molecular disk was produced using the solutions of Eq. <ref> and the rotation curve. This 3D model was then inclined to the observed inclination of 76^o and convolved with the telescope beam to produce a simulated column density map. This map can then be compared with the observed one to check the consistency of the derived molecular gas density distribution. In the top panel of Fig. <ref>, the simulated column density map of the molecular gas in NGC 7331, as obtained using the solutions of Eq. <ref>, is shown. In the bottom panel, this simulated map is compared with the observed one. A σ_H_2 = 7 km s^-1 was adopted to solve the hydrostatic equation. The simulated and the observed molecular disks were found to match each other very well. However, one should exclude the central 2 kpc region from the comparison, as Eq. <ref> was not solved at R ≲ 1 kpc and the convolution with the telescope beam of size ∼ 1 kpc will introduce errors there. A careful inspection of the bottom panel of the figure reveals extra emission features in both the upper and lower halves of the galaxy which were not properly accounted for by the model (contours). To capture this in more detail, instead of overplotting the column density maps, I estimate and compare the observed and the modelled vertical profiles of the column density maps as a function of radius. To extract the profiles, vertical cuts along the minor axis were taken through the observed and the modelled molecular disks. In Fig. <ref>, the vertical profiles are plotted as a function of height (Z) from the major axis. It can be seen from the figure that the observed and the modelled profiles match each other reasonably well, though the bases of the observed profiles are somewhat fatter than those of the modelled ones. In the bottom panel of Fig. <ref>, one can also see the extra plumes of emission at the edges of the map extending beyond the contours representing the model. To check if the assumed inclination of the molecular disk is producing this difference, I re-computed the vertical profiles for a set of inclinations. The molecular disk was produced for an inclination range of 72^o to 80^o in steps of 2^o. The face-on surface densities of the baryonic disks were recalculated assuming the updated inclination, and the hydrostatic equilibrium equation was solved to produce the simulated molecular disks. The thin black curves in each panel of Fig. <ref> represent the vertical profiles for inclinations of 72^o to 80^o in steps of 2^o.
The outermost profile is for 72^o, whereas the innermost profile is for 80^o. As can be seen from the figure, no particular inclination can be considered a better replacement for the assumed inclination of 76^o, as they all seem to be consistent given the spatial resolution of ∼ 1 kpc. For further analysis, I assume an inclination of 76^o only (the observed inclination of the optical disk). It is possible that the extra emission observed at the edges of the molecular disk is produced by a warp in the outer disk, as the existence of warps is very common in large galaxies. In such cases, hydrostatic equilibrium will not hold, and the solutions would produce results that do not match the observations. However, warps are observed mostly in outer disks and do not perturb the stability of the entire disk. As discussed in <ref>, thick molecular disks extending up to a few kpc from the mid-plane were observed in many edge-on galaxies <cit.>. The existence of such thick disks is puzzling, and their origin is still an open question. I extend this study further to check if NGC 7331 in hydrostatic equilibrium can produce an observationally thick molecular disk. To check that, the solved density distribution of the molecular disk of NGC 7331 was projected to an inclination of 90^o to estimate its surface density map in an edge-on orientation. In the top panel of Fig. <ref>, I show the molecular disk of NGC 7331 in an edge-on projection, whereas the bottom panel shows the HWHM of the vertical profiles of this disk as a function of radius. This HWHM profile flares marginally with radius, with a value of ∼ 500 pc. Such an HWHM profile can, in fact, produce a detectable molecular disk of ∼ 2 kpc thickness (considering both sides of the mid-plane) when observed edge-on. Here it should be noted that the scale height of the molecular gas (∼ 100-200 pc) does not represent the full extent of the molecular disk; rather, it is an indicative measure of thickness which marks the width where the density falls to half of its maximum. Though the molecular disk found in NGC 7331 is not as thick as that seen in NGC 891 (which is a few kpc thick <cit.>), these results indicate that it is probably not very difficult to produce a thick molecular disk under the assumption of hydrostatic equilibrium. § SUMMARY AND FUTURE WORK In summary, assuming a vertical hydrostatic equilibrium between the different disk components in a galaxy, the combined Poisson-Boltzmann equations were set up and solved to calculate the vertical structure of the molecular disk in the galaxy NGC 7331. Three coupled second-order ordinary differential equations (Eq. <ref>) were solved numerically using the 8^th-order Runge-Kutta method from scipy, implemented in an MPI-based code for fast computation. For NGC 7331, the hydrostatic equation was solved at 1 ≤ R ≤ 10 kpc to obtain the vertical structure of the molecular disk. The molecular scale height was found to be ∼ 50-100 pc at the centre, increasing to ∼ 100-200 pc at the outer edge. The molecular scale height is sensitive to the assumed σ_H_2 and was found to change by a factor of ∼ 2 when σ_H_2 changes from 6 to 10 km s^-1. Using the solutions of the hydrostatic equations and the observed rotation curve, a three-dimensional dynamical model of the molecular disk was made. This model was then inclined to the observed inclination and convolved with the telescope beam to produce a model intensity map. This model intensity map was then compared with the observed one (Fig.
<ref>) to find that the model matches the observation reasonably well. However, some low-intensity excess emission features in the observed molecular map at the largest heights were not modelled properly. This emission was observed at the edge of the molecular disk and is most probably not part of the stable molecular disk, or it could be a warp. To check the modelling of the molecular disk at different depths, vertical profiles of the column density maps were extracted and compared. The modelled vertical profiles at different radii match the observations reasonably well. The effect of the assumed inclination on the molecular disk was also explored, and it was found that the vertical profile at any radius does not show a large change, compared to the observing beam, as one changes the inclination from ∼ 72^o to 80^o. Finally, I projected the molecular gas density distribution of NGC 7331 to an inclination of 90^o to examine if it can produce a reasonably thick molecular disk. The extracted HWHM profile of this edge-on disk was found to be ∼ 500 pc with very little flaring with radius. This HWHM was found to be capable of producing a thick observable disk of thickness ∼ 2 kpc. With this result, it appears that a simple vertical hydrostatic model of the molecular disk can in principle produce a few-kiloparsec-thick observed disk, and hence creating a thick molecular disk in external galaxies might not be as difficult as previously thought (e.g., NGC 891 <cit.>). In this work, I assumed the molecular disk to be a single-component system with a single σ_H_2. However, as discussed in <ref>, many recent studies point towards the possibility of a two-component molecular disk, with a thin disk residing close to the mid-plane and a diffuse thick disk extending up to a few kpc. The σ_H_2 values of these disks are expected to be different. In such scenarios, the assumption of a simple single-component molecular disk will fail, and one needs to add an extra component to the hydrostatic equilibrium equation. In future work, a detailed study with a two-component molecular disk is worth exploring to understand the thick molecular disks observed in external galaxies. § ACKNOWLEDGEMENT NNP would like to thank Dr Yogesh Wadadekar, Dr Samir Choudhuri and Mrs Gunjan Verma for their comments and suggestions, which helped to improve the quality of this manuscript. NNP would also like to thank both referees for their valuable comments and suggestions, which improved the quality and readability of this paper. | http://arxiv.org/abs/1706.08615v2 | {
"authors": [
"Narendra Nath Patra"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170626221344",
"title": "Molecular scale height in NGC 7331"
} |
Extremely strong magnetic fields of the order of 10^15 G are required to explain the properties of magnetars, the most magnetic neutron stars. Such a strong magnetic field is expected to play an important role in the dynamics of core-collapse supernovae and, in the presence of rapid rotation, may power superluminous supernovae and the hypernovae associated with long gamma-ray bursts. The origin of these strong magnetic fields remains, however, obscure and most likely requires an amplification over many orders of magnitude in the protoneutron star. One of the most promising agents is the magnetorotational instability (MRI), which can in principle amplify exponentially fast a weak initial magnetic field to a dynamically relevant strength. We describe our current understanding of the MRI in protoneutron stars and show recent results on its dependence on physical conditions specific to protoneutron stars, such as neutrino radiation, strong buoyancy effects and large magnetic Prandtl numbers. § INTRODUCTION The delayed injection of energy due to the spin down of a fast rotating, highly magnetized neutron star is the most popular model to explain a class of superluminous supernovae (e.g. Kasen & Bildsten 2010; Inserra et al. 2013). The birth of such millisecond magnetars is furthermore a potential central engine for long gamma-ray bursts and hypernovae (e.g. Duncan & Thompson 1992; Metzger et al. 2011; Obergaulinger & Aloy 2017). One of the most fundamental and open questions in these models is the origin of the strong magnetic field that is invoked. The most studied mechanism capable of generating such a strong magnetic field is the growth of the magnetorotational instability (MRI) in the protoneutron star (Akiyama et al. 2003; Masada et al. 2007; Obergaulinger et al. 2009; Sawai & Yamada 2014; Guilet et al. 2015; Guilet & Müller 2015; Rembiasz et al. 2016a,b; Mösta et al. 2016). In the following we review several aspects of the physics of the MRI under the physical conditions relevant to protoneutron stars, by describing its linear growth (Section 2) and its non-linear dynamics (Section 3). § LINEAR GROWTH OF THE MRI Although it is too often neglected in numerical simulations, neutrino radiation can have a dramatic impact on the linear growth of the MRI. This was shown by Guilet et al. (2015) by applying analytical results for the linear growth of the MRI to a numerical model of a protoneutron star (PNS). In Guilet et al. (2015), we have shown that, depending on the physical conditions, the MRI growth can occur in three main different regimes (Fig. 1): * Viscous regime (dark blue color in Fig. <ref>): on length-scales longer than the neutrino mean free path, neutrino viscosity significantly affects the growth of the MRI if E_ν ≡ v_A^2/(νΩ) < 1, where v_A is the Alfvén velocity, ν the viscosity induced by neutrinos and Ω the rotation angular frequency. The growth of the MRI is then slower and takes place at longer wavelengths compared to the ideal regime. In the viscous regime, the wavelength of the most unstable mode is independent of the magnetic field strength, while the growth rate is proportional to the magnetic field strength (Fig. <ref>).
As a result, a minimum magnetic field strength of ∼ 10^12 G is required for the MRI to grow on sufficiently short time-scales. * Drag regime (light blue color in Fig. <ref>): on length-scales shorter than the neutrino mean free path, neutrino radiation exerts a drag force -Γv, where v is the fluid velocity perturbation and Γ ≃ 6 × 10^3 (T/10 MeV)^6 s^-1 is a damping rate. This drag has a significant impact on the MRI if the damping rate is larger than the rotation angular frequency (Γ > Ω). In this regime, the growth rate of the most unstable mode is independent of the magnetic field strength, but is reduced by a factor of Γ/Ω compared to the ideal regime (Fig. <ref>). The wavelength of the most unstable mode is not much affected by the neutrino drag. * Ideal regime (orange color in Fig. <ref>): this is the classical MRI regime, which occurs when neutrino viscosity or drag are negligible. The growth rate of the MRI is then a fraction of the angular frequency, independent of the magnetic field strength, while the most unstable wavelength is proportional to the magnetic field strength. Fig. <ref> shows where these three regimes apply, as a function of magnetic field strength and radius in the PNS (note that qualitatively similar results were obtained by Guilet et al. (2017) in the context of neutron star mergers). Three regions in the PNS can be distinguished: * Deep inside the PNS, the relevant MRI regime is the viscous MRI. The MRI can grow on sufficiently short time-scales if the initial magnetic field is above a critical strength. * At intermediate radii, MRI growth can take place both in the viscous regime at wavelengths longer than the neutrino mean free path and in the drag regime at length-scales shorter than the mean free path. Since the growth rate in the viscous regime is proportional to the magnetic field strength, the growth is faster in the viscous regime above a critical magnetic field strength, while it is faster in the drag regime for weaker magnetic fields. In addition, in between these regimes, the MRI growth occurs in a mixed regime (shown in green in Fig. <ref>) where electron neutrinos are diffusing and thus induce a viscosity, while the other species are free streaming and exert a drag. * Near the PNS surface, the viscous regime is irrelevant because the neutrino mean free path is longer than the wavelength of the MRI. Furthermore, in this region the neutrino drag does not affect the growth of the MRI much, because the damping rate is smaller than the angular frequency, i.e. Γ < Ω. As a consequence, the MRI growth takes place in the ideal regime without much impact of neutrino radiation. For the sake of simplicity, and in order to show the impact of neutrinos in a clear way, this analysis neglected the effect of buoyancy. As shown analytically by Menou et al. (2004) and Masada et al. (2007) and confirmed by numerical simulations in Guilet & Müller (2015), entropy and composition gradients can act against the MRI, but this is alleviated by the diffusion due to neutrinos. § NON-LINEAR DYNAMICS OF THE MRI In order to know the efficiency of magnetic field amplification, numerical simulations of the non-linear dynamics driven by the MRI are necessary. These simulations have so far mostly been performed in local or semi-global models describing a small portion of the PNS (e.g. Obergaulinger et al. 2009; Masada et al. 2012; Guilet et al. 2015; Rembiasz et al. 2016a,b).
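The linear regime criteria of Section 2, which set the starting point for such non-linear simulations, can be condensed into a short classification rule. The following Python sketch is purely illustrative: the function and its placeholder input values are assumptions for demonstration, not outputs of a PNS model.

def mri_regime(v_A, nu, Gamma, Omega, wavelength, mfp):
    """Classify the linear MRI regime following the criteria of Guilet et al. (2015).
    v_A: Alfven speed, nu: neutrino viscosity, Gamma: neutrino drag rate,
    Omega: rotation rate, wavelength: MRI wavelength, mfp: neutrino mean
    free path. Consistent (e.g. CGS) units are assumed throughout."""
    if wavelength > mfp:
        # Diffusive regime: neutrinos act as a viscosity.
        E_nu = v_A**2 / (nu * Omega)
        return "viscous MRI" if E_nu < 1.0 else "ideal MRI"
    # Free-streaming regime: neutrinos act as a drag.
    return "drag MRI" if Gamma > Omega else "ideal MRI"

# Placeholder numbers (CGS), purely for illustration:
print(mri_regime(v_A=1e6, nu=1e10, Gamma=6e3, Omega=2e3,
                 wavelength=1e4, mfp=1e2))   # -> "viscous MRI"

The mixed regime described above, in which electron neutrinos diffuse while the other species free-stream, falls between the two branches and is not captured by this simple dichotomy.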
The first phase of MRI growth is often dominated by channel flows, which are the fastest growing MRI modes in the presence of a poloidal magnetic field. These modes, which are uniform in the horizontal direction, are approximate non-linear solutions and are therefore potentially able to grow well into the non-linear regime. Their growth is terminated when parasitic instabilities (either the Kelvin-Helmholtz instability or resistive tearing modes) are able to destroy their structure (Goodman & Xu 1994; Rembiasz et al. 2016a,b). By studying this process in detail with two different numerical codes, Rembiasz et al. (2016b) were able to show that channel modes can only amplify the magnetic field by a factor of ∼ 10 under the physical conditions prevailing in a PNS. A further process, such as a turbulent MRI-driven dynamo, is therefore necessary to reach very strong magnetic fields. In Guilet & Müller (2015), we studied the turbulent phase following the disruption of the channel modes, taking into account for the first time both the viscosity and diffusion due to neutrinos and the buoyancy force due to entropy/composition gradients. This was made possible owing to the use of the Boussinesq approximation, which we showed to be well suited to a local model of a PNS. In addition to drastically reducing the computing time, this approximation avoids artifacts at the boundaries caused by global gradients in fully compressible simulations. Fig. <ref> shows snapshots of the turbulent phase in a buoyantly unstable case (left panel) and a buoyantly stable case (right panel). While the buoyantly unstable dynamics is turbulent and largely non-axisymmetric, the buoyantly stable dynamics contains more large-scale axisymmetric structures. A channel flow at the largest scale available in the box can be distinguished even during the turbulent phase, which is further illustrated in Fig. <ref> by the horizontally averaged space-time diagram (left panel). Fig. <ref> (right panel) furthermore shows the kinetic, magnetic and thermal energies and energy injection rates as a function of the dimensionless buoyancy parameter N^2/Ω^2 (where N is the Brunt-Väisälä frequency). The magnetic energy decreases with N^2/Ω^2 but, interestingly, it becomes almost constant in the stable buoyancy regime (N^2 > 0), suggesting an efficient magnetic field amplification even in the presence of a strong, stable stratification. While these simulations were able to use realistic values for the viscosity due to neutrinos, they have the shortcoming of vastly overestimating the resistivity (like all other simulations, be it by including it explicitly as done here or by suffering from a large numerical resistivity; for estimates see Rembiasz et al. 2016c). Preliminary results show that this is likely to cause a significant underestimate of the efficiency of magnetic field amplification, thus confirming the strong dependence of the MRI on the magnetic Prandtl number (the ratio of viscosity to resistivity) known in the context of accretion disks (e.g. Fromang et al. 2007). § PERSPECTIVES The presence of structures on the largest scales described by the local models of Section 3 stresses the necessity of global models encompassing the whole PNS. Such models are extremely challenging computationally because it is necessary to resolve the very small length-scales at which the MRI grows. The closest to a global model was published by Mösta et al. (2015), who simulated one eighth of a PNS (in octant symmetry) and claimed to obtain an MRI-driven dynamo.
Even this extremely computationally intensive simulation could not, however, answer the fundamental question of the origin of magnetars' dipolar magnetic fields, because it was actually initialized with a dipolar magnetic field strong enough to lead to magnetar formation by simple flux conservation. More numerical simulations of global models of PNSs are therefore necessary to establish firmly whether the MRI can generate a sufficiently strong dipolar magnetic field to explain the birth of millisecond magnetars. We are currently taking steps in this direction by developing numerical models of idealized PNSs in quasi-incompressible approximations (the Boussinesq or anelastic approximations). By drastically reducing the computational requirements, these approximations should enable insights into the fundamental physical process generating the dipolar magnetic field. J.G. acknowledges support from the Max-Planck-Princeton Center for Plasma Physics and from the European Research Council (grant MagBURST-715368). H.T.J. is grateful for funding by the European Research Council through grant ERC-AdG COCO2CASA-3411 and by the Deutsche Forschungsgemeinschaft through the Cluster of Excellence "Universe" EXC-153. M.A., P.C.D., M.O. and T.R. acknowledge support from the European Research Council (grant CAMAP-259276) and from grants AYA2013-9340979-P, AYA2015-66899-C2-1-P and PROMETEOII/2014-069.
Akiyama S., Wheeler J. C., Meier D. L., Lichtenstadt I., 2003, ApJ, 584, 954
Duncan R. C., Thompson C., 1992, ApJ, 392, L9
Foglizzo T., Kazeroni R., Guilet J., Masset F., González M., et al., 2015, PASA, 32, 9
Fromang S., Papaloizou J., Lesur G., Heinemann T., 2007, A&A, 476, 1123
Goodman J., Xu G., 1994, ApJ, 432, 213
Guilet J., Müller E., Janka H. T., 2015, MNRAS, 447, 3992
Guilet J., Müller E., 2015, MNRAS, 450, 2153
Guilet J., Bauswein A., Just O., Janka H. T., 2017, submitted to MNRAS, arXiv:1610.08532
Inserra C., Smartt S. J., Jerkstrand A., et al., 2013, ApJ, 770, 128
Kasen D., Bildsten L., 2010, ApJ, 717, 245
Masada Y., Sano T., Shibata K., 2007, ApJ, 655, 447
Masada Y., Takiwaki T., Kotake K., Sano T., 2012, ApJ, 759, 110
Menou K., Balbus S. A., Spruit H. C., 2004, ApJ, 607, 564
Metzger B. D., Giannios D., Thompson T. A., Bucciantini N., Quataert E., 2011, MNRAS, 413, 2031
Mösta P., Ott C. D., Radice D., Roberts L. F., Schnetter E., Haas R., 2015, Nature, 528, 376
Obergaulinger M., Cerdá-Durán P., Müller E., Aloy M. A., 2009, A&A, 498, 241
Obergaulinger M., Aloy M. A., 2017, MNRAS, 469, L43
Rembiasz T., Obergaulinger M., Cerdá-Durán P., et al., 2016a, MNRAS, 456, 3782
Rembiasz T., Guilet J., Obergaulinger M., Cerdá-Durán P., et al., 2016b, MNRAS, 460, 3316
Rembiasz T., Obergaulinger M., Cerdá-Durán P., et al., 2016c, arXiv:1611.05858, to appear in ApJS | http://arxiv.org/abs/1706.08733v1 | {
"authors": [
"Jerome Guilet",
"Ewald Mueller",
"Hans-Thomas Janka",
"Tomasz Rembiasz",
"Martin Obergaulinger",
"Pablo Cerda-Duran",
"Miguel-Angel Aloy"
],
"categories": [
"astro-ph.HE",
"astro-ph.SR"
],
"primary_category": "astro-ph.HE",
"published": "20170627090043",
"title": "How to form a millisecond magnetar? Magnetic field amplification in protoneutron stars"
} |
Influence of surface tension in the surfactant-driven fracture of closely-packed particulate monolayers^†

Christian Peco^a, Wei Chen^b, Yingjie Liu^a, M. M. Bandi^c, John E. Dolbow^a, and Eliot Fried^b

A phase-field model is used to capture the surfactant-driven formation of fracture patterns in particulate monolayers. The model is intended for the regime of closely-packed systems in which the mechanical response of the monolayer can be approximated as a linearly elastic solid. The model approximates the loss in tensile strength of the monolayer as the surfactant concentration increases through the evolution of a damage field. Initial-boundary value problems are constructed and spatially discretized with finite element approximations to the displacement and surfactant damage fields. A comparison between model-based simulations and existing experimental observations indicates a qualitative match in both the fracture patterns and temporal scaling of the fracture process. The importance of surface tension differences is quantified by means of a dimensionless parameter, revealing thresholds that separate different regimes of fracture. These findings are supported by newly performed experiments that validate the model and demonstrate the strong sensitivity of the fracture pattern to differences in surface tension.

^a Department of Civil and Environmental Engineering, Duke University, Durham, NC 27708, USA ^b Mathematical Soft Matter Unit, OIST Graduate University, Onna-son, Okinawa, 904-0495, Japan ^c Collective Interactions Unit, OIST Graduate University, Onna-son, Okinawa, 904-0495, Japan

§ INTRODUCTION

When a densely packed monolayer of hydrophobic particles is placed on the surface of a liquid, the particles interact through capillary bridges that form,<cit.> leading to the formation of particle rafts. The macroscopic properties of these rafts reflect an interplay between fluid and solid mechanics,<cit.> giving rise to novel physics. This interplay is relevant to a wide range of applications, from the synthesis of “liquid marbles” <cit.> to the design of drug delivery systems <cit.> to the stabilization of drops.<cit.> The interest in particle rafts has driven researchers to investigate their mechanical properties.<cit.> It is now known that densely packed monolayers exhibit a two-dimensional linearly elastic solid response, and that the mechanical properties of such monolayers depend proportionally on the surface tension of the liquid layer. The introduction of a controlled amount of surfactant generates a surface tension gradient, producing Marangoni forces <cit.> and causing the surfactant to spread, fracturing the monolayer. Studies of surfactant-driven fracture have examined the role of viscosity and the initial packing fraction on the evolution of cracks in closely- and loosely-packed systems, respectively.<cit.> Surprisingly, the potentially important role of differences in the surface tension of the surfactant and the underlying liquid has not been explored.
The magnitude of this difference is interesting because it determines the Marangoni force exerted on the particulate monolayer and it is the main driving force for the fracture process. Modulating the surface tension difference also provides a way to probe mechanical properties that are difficult to measure directly, such as the critical failure stress. Finally, the surface tension difference can easily be controlled in the laboratory by modifying the composition of the surfactant or the underlying liquid. In this article, we focus on closely-packed systems. We develop a phase-field model that takes into account a two-way coupling between the flow of surfactant and the motion of the monolayer. Through model-based simulations and accompanying experiments, we demonstrate that surface tension differences play a vital role in the overall fracture response of particle rafts.

The general setup for a surfactant injection experiment, illustrated in Figure <ref>, consists of a circular Petri dish containing a liquid layer onto which hydrophobic particles are deposited. The surfactant is introduced with a needle near the center of the dish. The surface tension of the liquid-vapor interface decreases where the surfactant is present. Marangoni forces then cause the surfactant to spread over the surface of the liquid and through the monolayer. The subsequent response is sensitive to the fraction of the area of the liquid-vapor interface that is occupied by particles, which we refer to as the packing fraction and denote by ϕ. As ϕ is increased, the properties of the liquid-vapor interface change from liquid to solid.<cit.> Consistent with our interest in closely-packed systems, we consider situations in which the packing fraction is high (0.7 ≤ ϕ ≤ 0.9). We model the particle-laden liquid-vapor interface as a continuous two-dimensional linearly elastic solid, capable of supporting both tension and compression.<cit.> The mechanical properties of such a particulate monolayer <cit.> are basic to understanding its response to surfactant-driven stresses. An estimate of those properties is provided by Vella et al.,<cit.> based on experimental measurements and geometrical arguments. Surface wave experiments have been used to characterize the stretching and bending stiffnesses of particulate monolayers,<cit.> which are found to present a soft granular character with a nonclassical response under fluid-driven deformation.<cit.>

Experiments show that when such a system is stimulated by the localized introduction of a surfactant, an advancing front forms and fractures the monolayer.<cit.> This type of surfactant-induced effect has also been observed in other systems, such as agar gels.<cit.> The resulting fracture patterns, illustrated in Figure <ref>, are reminiscent of those observed in classical brittle materials but with significant differences. Bandi et al. <cit.> suggested that the fracture patterns can be very sensitive to variations in the initial distribution of particles. Additionally, in conventional elastic solids, crack branching is mostly associated with inertial effects and dynamic instabilities (e.g., bifurcations which occur as the crack tip velocity approaches 60% of the Rayleigh wave speed <cit.>).
In contrast, crack branching has been observed in particulate monolayers at crack speeds as low as 0.2% of the shear wave speed.<cit.> Regarding the time scales of the fracture process, the most obvious distinction is that crack tip velocities do not appear to be influenced by the elastic properties of the monolayers. Rather, experimental observations suggest that the fracture time scale is mostly governed by variations in the viscosity of the underlying liquid <cit.> and the packing fraction.<cit.> These observations raise a number of questions regarding the fluid-driven fracture of elastic media in general and in monolayers in particular.

The mechanisms underlying the surfactant-driven fracture of particulate monolayers are challenging to study experimentally. There are practical limits to the range of particle sizes that can be used. Sufficiently small particles fall below the current resolution limit of imaging equipment when a full field of view is needed. Particles that are too large have too much inertia to move significantly in response to surfactant flow. Modeling and simulation efforts can address these concerns and provide detailed insight concerning the basic mechanisms and sensitivities. However, computational studies of these systems are at an early stage of development. Previous attempts to model particulate monolayers have focused on loosely-packed systems, which admit several simplifying assumptions. For example, Bandi et al.<cit.> developed a discrete-element method to examine the influence of the initial packing fraction on the number of fractures in loosely-packed systems. In that work, a one-way coupling, in which surfactant flow influences the motion of the particles but not vice versa, was assumed. While simulations based on this approach accurately reproduce the limiting number of cracks that develop, the underlying assumptions limit its applicability to situations where the packing fraction is low.

The numerical simulation of closely-packed systems is challenging due to the high number of particles and the complexity of the physics involved in fracture. In this work, we propose a model based on a phase-field method which makes it possible to smoothly represent transitions between the damaged and undamaged zones of the monolayer as the surfactant advances. Phase-field models are suitable for this kind of problem since they avoid the need to track the propagating front, while allowing for a simple and powerful way to introduce the essential physics. We take as a starting point the work of Miehe et al. <cit.> and Borden et al. <cit.> for phase-field regularizations of the Griffith model for fracture. Based on these approaches, we present a new model that includes several features that are important to characterize the fracture of particulate monolayers. In particular, our model incorporates the distribution of particles, the force on the monolayer due to the presence of surfactant, and the viscosity of the underlying liquid. Our objective is to develop the simplest model capable of capturing the salient features of this system, namely the sensitivity of fracture patterns to differences in surface tension and the temporal scaling of the fracture process. This paper is organized as follows.
In Section <ref>, we present a phase-field approach for modeling the fracture mechanics of particulate monolayers, placing emphasis on the fundamental nature of the terms used to describe each contribution to the physics, after which we propose a governing free energy per unit area from which the model is derived. In Section <ref>, we describe the materials and experimental methodology used to explore the fidelity of the computational model. In Section <ref>, we present numerical results which demonstrate the capability of the model to reproduce different cases of surfactant-driven fracture. We then present a phase diagram which delineates different fracture regimes as a function of the surface tension difference and fracture resistance of the monolayer. Finally, in Section <ref>, we summarize our main results and propose directions for further research.

§ PHASE-FIELD MODEL

We model the surfactant-driven fracture of particulate monolayers by adapting recently developed phase-field approaches to fracture.<cit.> The model is designed to capture how a drop of surfactant introduced at the center of the domain can spread to form cracks or fissures in the particulate monolayer (Figure <ref>). Accordingly, our model incorporates a number of postulates that are based on experimental observations, as detailed below. Central among these is the assumption that the fractured zones are completely filled with surfactant, without appreciable penetration of the surfactant beyond the contours of the cracks. On this basis, we use a single “surfactant damage” field as an indicator function for the surfactant concentration and for the damage to the monolayer.

§.§ General considerations

Consider a monolayer of particles floating on the surface of a liquid layer. Let the two-dimensional region occupied by that monolayer be denoted by Ω. In Figure <ref>(a), the dark portion of Ω represents the intact portion of the monolayer and the light portion of Ω represents the portion of the monolayer damaged by the surfactant. The state of the monolayer is described by two independent variables, its vector displacement field u and a scalar surfactant damage field d. As indicated in Figure <ref>(a), d takes values in [0,1], with d=1 representing completely damaged material. For closely-packed monolayers, experimental observations suggest that fracture occurs even for small values of the (infinitesimal) strain tensor ε = (1/2)(∇u + (∇u)^⊤). The microstructure of the monolayer is described by a scalar packing fraction ϕ that takes values in [0,1] and depends on position x in Ω, as illustrated in Figure <ref>(b). This field characterizes the ratio of the subset of the surface of the liquid layer that is covered by particles to the total area of that surface and is thought to influence the overall fracture pattern, crack kinking, and branching.<cit.> We assume that the particles are rigid, so that any local contraction or expansion of the monolayer is accommodated solely by variations of the surface area not covered by particles. Thus, given an initial packing fraction ϕ_0, the packing fraction ϕ depends on the trace, tr ε, of the strain ε through ϕ(ε) = ϕ_0/(1 + tr ε). We further assume that surfactant damage acts only to degrade the tensile resilience of the monolayer and that crack propagation is prohibited under compression. This is achieved by employing a spectral decomposition <cit.> of ε into positive and negative components ε_+ and ε_-. It is then possible to define tensile and compressive strain-energy densities W_+ and W_- through W_±(ε) = (E/(2(1+ν)))|ε_±|^2 + (νE/(1-ν)^2)(tr_± ε)^2, where E>0 and 0<ν<1 are the Young's modulus and Poisson's ratio of the undamaged monolayer and the positive and negative trace operations tr_+ and tr_- are defined in accord with tr_+ ε = tr ε if tr ε ≥ 0 and tr_+ ε = 0 otherwise, and tr_- ε = tr ε if tr ε ≤ 0 and tr_- ε = 0 otherwise.
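Algorithmically, the split amounts to an eigendecomposition of the strain tensor. The short Python sketch below illustrates the pointwise evaluation of W_±; it is an illustration only (the actual implementation in this work is a finite element code, and E and ν here are placeholder values):

```python
import numpy as np

E, nu = 1.0, 0.3  # placeholder elastic constants of the undamaged monolayer

def spectral_split(eps):
    """Split a symmetric 2x2 strain tensor into positive and negative parts."""
    w, V = np.linalg.eigh(eps)
    eps_pos = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    eps_neg = V @ np.diag(np.minimum(w, 0.0)) @ V.T
    return eps_pos, eps_neg

def W_plus(eps):
    """Tensile strain-energy density W_+(eps)."""
    eps_pos, _ = spectral_split(eps)
    tr_pos = max(np.trace(eps), 0.0)  # positive trace operation
    return E / (2 * (1 + nu)) * np.sum(eps_pos**2) + nu * E / (1 - nu)**2 * tr_pos**2

def W_minus(eps):
    """Compressive strain-energy density W_-(eps)."""
    _, eps_neg = spectral_split(eps)
    tr_neg = min(np.trace(eps), 0.0)  # negative trace operation
    return E / (2 * (1 + nu)) * np.sum(eps_neg**2) + nu * E / (1 - nu)**2 * tr_neg**2
```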
We assume that the free-energy density of the monolayer is a function ψ of ε, d, and ∇d with the form ψ(ε, d, ∇d) = (1-d)^2 W_+(ε) + ξ(ϕ(ε)) W_-(ε) + Q(d, ∇d) + F(ε, d), in which the four terms represent the tensile, compressive, fracture, and surfactant contributions, respectively. As in conventional phase-field models for brittle fracture,<cit.> the tensile contribution to ψ decays quadratically with the surfactant damage d. The remaining contributions of the energy are, however, nonstandard and therefore require further discussion.

The compressive contribution to ψ accounts for increases in compressive energy that accompany increases in the packing fraction through the jamming factor ξ. This factor penalizes compression beyond a jamming threshold ϕ_j and prevents packing fractions from exceeding a maximum value ϕ_m. The variation of ξ with ϕ (and thus, with reference to (<ref>), with the initial packing fraction ϕ_0 and the strain ε) is illustrated in Figure <ref>. Its value starts at ξ=1 for 0 ≤ ϕ ≤ ϕ_j and increases monotonically for ϕ_j ≤ ϕ < ϕ_m, exhibiting a vertical asymptote as ϕ → ϕ_m. The thresholds ϕ_j=0.84 and ϕ_m=0.9 correspond to random close-packing in two space dimensions<cit.> and the maximum packing density for two-dimensional discs, respectively. Numerically, it is more convenient to use expressions that effectively raise the compressive contribution without becoming unbounded. In this work, we use an expression of the form ξ(ϕ) = 1.0 + f(1.0 + tanh((ϕ - 0.5(ϕ_m - ϕ_j))/l)), with f being the factor of amplification and l being the length scale controlling the width of the regularization.

For the contribution to ψ associated with fracture, we choose a modified version Q(d, ∇d) = (G(d)/2)(d^2/λ + λ|∇d|^2) of the expression proposed by Miehe et al.,<cit.> with G defined by G(d) = G_0(γ_s/γ_f + (1 - γ_s/γ_f)(1-d)) = G_0(1 - (1 - γ_s/γ_f)d), where γ_f and γ_s are the surface tensions of the liquid layer and the surfactant, respectively, G_0>0 is a constant that represents the fracture toughness of the undamaged monolayer, and λ>0 is a constant proportional to the characteristic thickness of a layer between damaged and undamaged material. This modification accounts for a reduction in the fracture toughness with increasing surfactant concentration. Accordingly, it is convenient to introduce G_r = G(1) = G_0 γ_s/γ_f as a measure of the reduced fracture toughness of the monolayer. Finally, the surfactant contribution to ψ accounts for the interplay between the monolayer and the surfactant (Figure <ref>) and is assumed to be of the form F(ε, d) = F_0(γ_f - γ_s)(tr ε) d^2/2, where F_0>0 is a constant.
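Reusing W_plus and W_minus from the previous sketch, the full free-energy density can be assembled pointwise as follows. Again, this is only a schematic rendering of the expressions above; the parameter values are placeholders, not the calibrated values reported later in the paper:

```python
import numpy as np

# Placeholder parameters (illustrative, not the calibrated values)
phi_j, phi_m = 0.84, 0.90        # jamming threshold and maximum packing
f_amp, l_reg = 10.0, 0.02        # amplification and width of the tanh regularization
G0, lam = 1.0, 0.01              # undamaged fracture toughness and length scale
F0 = 1.0                         # surfactant coupling constant
gamma_f, gamma_s = 72e-3, 40e-3  # surface tensions of liquid and surfactant [N/m]

def free_energy(eps, d, grad_d, phi0):
    """Pointwise free-energy density psi(eps, d, grad d)."""
    tr = np.trace(eps)
    phi = phi0 / (1.0 + tr)  # packing fraction carried by the strain trace
    xi = 1.0 + f_amp * (1.0 + np.tanh((phi - 0.5 * (phi_m - phi_j)) / l_reg))
    G = G0 * (1.0 - (1.0 - gamma_s / gamma_f) * d)  # degraded toughness G(d)
    tensile     = (1.0 - d)**2 * W_plus(eps)
    compressive = xi * W_minus(eps)
    fracture    = 0.5 * G * (d**2 / lam + lam * np.dot(grad_d, grad_d))
    surfactant  = 0.5 * F0 * (gamma_f - gamma_s) * tr * d**2
    return tensile + compressive + fracture + surfactant
```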
§.§ Governing equations

Following Silva et al.<cit.> (but neglecting inertia), the governing equations of the phase-field model consist of the macroforce balance div(∂ψ(ε,d,∇d)/∂ε) = 0 and the microforce balance div(∂ψ(ε,d,∇d)/∂∇d) - ∂ψ(ε,d,∇d)/∂d = βḋ if ḋ > 0, and div(∂ψ(ε,d,∇d)/∂∇d) - ∂ψ(ε,d,∇d)/∂d = -π_r if ḋ = 0, where β ≥ 0 represents the kinetic modulus that controls the rate at which cracks can propagate through the monolayer. The alternative on the right-hand side of (<ref>) embodies the requirement that, consistent with experimental observations, cracks that form in the monolayer never heal. This requirement takes the form of the constraint ḋ ≥ 0, and a reactive microforce π_r is needed to ensure satisfaction of that constraint. In particular, π_r vanishes for ḋ > 0 and is determined by the left-hand side of (<ref>) for ḋ = 0.

The kinetic modulus β accounts for two effects. First, it captures the capacity of the surfactant to penetrate the particulate monolayer and thereby generate damage, which is influenced by ϕ and the radius of the particles. As ϕ and the particle size increase, the higher density of particles and the increased tortuosity impede surfactant spreading. Second, it captures the resistance of the underlying liquid layer to rearrangements of the particles, a resistance which is directly related to the viscosity μ_f of the liquid comprising that layer. For the foregoing reasons, we assume that β increases with ϕ, μ_f, and the mean particle radius r̅_p > 0 in accord with the relation β = β_0(1-ϕ_m)/(1-ϕ), with β_0 = Cμ_f r̅_p/(1-ϕ_m), where C>0 is a constant and β_0 represents the kinetic modulus for the case of maximum packing ϕ = ϕ_m.

Since the kinetic modulus β defined in (<ref>) is positive, damage only increases when the left-hand side of (<ref>) is positive. Accordingly, the evolution equation (<ref>) for d becomes ((1-ϕ(ε))/(β_0(1-ϕ_m)))⟨div(∂ψ(ε,d,∇d)/∂∇d) - ∂ψ(ε,d,∇d)/∂d⟩ = ḋ, where, given a scalar-valued quantity h, the Macaulay bracket ⟨h⟩ is defined by ⟨h⟩ = 0 for h ≤ 0 and ⟨h⟩ = h for h > 0.

With reference to the right-hand side of the definition (<ref>) of the free-energy density ψ and using (<ref>)–(<ref>) and (<ref>), we find that the governing equations (<ref>) and (<ref>) can be written as div((1-d)^2 ∂W_+(ε)/∂ε + ∂(ξ(ϕ(ε))W_-(ε))/∂ε + F_0(γ_f-γ_s)(d^2/2)I) = 0 and ((1-ϕ(ε))/(β_0(1-ϕ_m)))⟨G_0λ(1-(1-γ_s/γ_f)d)Δd - (G_0λ/2)(1-γ_s/γ_f)|∇d|^2 + 2(1-d)W_+(ε) - F_0(γ_f-γ_s)(tr ε)d - (G_0 d/λ)(1-(3/2)(1-γ_s/γ_f)d)⟩ = ḋ, respectively, where I denotes the (two-dimensional) identity tensor. We consider (<ref>) and (<ref>) subject to the zero displacement condition u = 0 on the boundary ∂Ω of Ω and, letting n denote a unit normal on ∂Ω, the natural boundary condition ∇d·n = 0. Additionally, we impose the initial conditions ϕ(x,0) = ϕ_0(x) and d(x,0) = d_0(x), with ϕ_0 representing the initial distribution of particles and d_0 the initial damage.
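In a time-discrete setting, the Macaulay bracket and the no-healing constraint reduce to a clamped explicit update. A minimal sketch, in which driving_force stands for the bracketed term of the evolution equation (assembled elsewhere, e.g., from a finite element residual):

```python
def step_damage(d, driving_force, phi, dt, beta0, phi_m=0.90):
    """One explicit Euler step of the damage evolution, without healing."""
    mobility = (1.0 - phi) / (beta0 * (1.0 - phi_m))
    d_dot = mobility * max(driving_force, 0.0)  # Macaulay bracket: d never decreases
    return min(d + dt * d_dot, 1.0)             # damage stays in [0, 1]
```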
§.§ Model characterization: dimensionless number χ

Introducing characteristic measures L and T of length and time, we define dimensionless counterparts x^* and t^* of x and t by x^* = x/L and t^* = t/T. Thus, defining dimensionless versions ∇^* d = L∇d, Δ^* d = L^2 Δd, and ∂d/∂t^* = Tḋ of the gradient, Laplacian, and partial time-derivative, dimensionless parameters G^* = G_r/(E r̅_p), F^* = F_0(γ_f-γ_s)/E, L^* = L/r̅_p, β_0^* = β_0/(ET), γ^* = γ_s/γ_f, and λ^* = λ/L, and dimensionless strain-energy densities W^*_±(ε) = (1/(2(1+ν)))|ε_±|^2 + (ν/(1-ν)^2)(tr_± ε)^2, we arrive at dimensionless counterparts div^*((1-d)^2 ∂W^*_+(ε)/∂ε + ∂(ξ(ϕ(ε))W^*_-(ε))/∂ε + (F^*/2)d^2 I) = 0 and ((1-ϕ(ε))/(β_0^* L^*(1-ϕ_m)))⟨(G^*/γ^*)λ^*(1-(1-γ^*)d)Δ^* d - (G^*/γ^*)(λ^*/2)(1-γ^*)|∇^* d|^2 - (G^*/γ^*)(d/λ^*)(1-(3/2)(1-γ^*)d) + 2(1-d)L^* W^*_+(ε) - L^* F^*(tr ε)d⟩ = ∂d/∂t^* of the governing equations (<ref>) and (<ref>).

While we have not undertaken an exhaustive suite of simulations spanning the complete range of parameter space associated with the six dimensionless variables in (<ref>), the extensive testing that we have conducted indicates that the fracture pattern is largely governed by the dimensionless driving force F^* and the dimensionless fracture toughness G^* of the monolayer. In particular, the number of fractures in the final configuration appears to be dictated by the ratio χ = (F^*)^2/G^*. In Section <ref>, we discuss the threshold levels of the dimensionless parameter χ that delineate different regimes, ranging from no fractures at all to a configuration with multiple branches. The expression (<ref>) for χ can be further simplified by invoking the approximate scaling of Young's modulus with the surface tension as derived by Vella et al.<cit.> (i.e., E ∝ γ_f/r̅_p). The expression for χ can therefore be rewritten to isolate the influence of the surface tensions, yielding χ ∝ ((F_0 r̅_p)^2/G_0)(γ_f-γ_s)^2/γ_s. Daniels et al.<cit.> proposed a similar relationship between the elastic and Marangoni energies to explain fracture patterns in agar gels. In that work, the number of crack branches was found to scale linearly with the difference in surface tension. As described in Section <ref>, our experimental observations and model-based simulations suggest the system considered here exhibits a much stronger sensitivity to surface tension differences.
§ EXPERIMENTS

§.§ Materials

All videos from experimental work were acquired using a Phantom V641 high-speed camera equipped with an AF Nikkor 50 mm f/1.8 D lens. Videos were saved using PCC software provided by Phantom. A 5 W white LED board used to illuminate the mixture was supplied by VANCO (series #33342). Dispersion of silica microballoons (SIG MFG) was performed using the Active Motif Q120AM (120 W, 20 kHz) ultrasonicator equipped with a CL18 3.2 mm probe. Oleic acid (surfactant) was supplied by WAKO Chemicals and the acetone used as a cosolvent was supplied by Nacalai Tesque. The surfactant was dropped using Terumo 2.5 mL syringes (SS-02SZ) equipped with a 25 G (0.50 x 16 mm) needle. Surface tension measurements of mixtures were obtained using the KSV NIMA LB Small trough (33473).

§.§ Methodology

Figure <ref> shows a schematic of the experimental setup. A clean low-form cylindrical glass vessel 17 cm in diameter and 9 cm in height, filled to a height of 2.5 cm with milli-Q water (or water mixed with a cosolvent), was placed atop an LED light panel. Microballoons were carefully weighed to either 0.150 g or 0.250 g, introduced at the air-water interface, and dispersed by means of ultrasonication at 25% power to form a particulate monolayer. Using a 2.5 mL syringe fitted with a 25 G metal tip, a single drop of surfactant (oleic acid with or without solvent) was introduced to the approximate center of the vessel at time t = 0 s. The water-immiscible surfactant was observed to spread and push the microballoon particles radially outwards, resulting in compaction of the particulate monolayers. Observations were captured using a Phantom high-speed camera at different frame rates. Data analysis was performed by first converting raw data files (.cine) to multipage image files (.tif); these files were subsequently analyzed using ImageJ. In cases where the surfactant or the bulk water phase was mixed with a solvent to reduce surface tension, acetone was added carefully and aliquots of the liquids were quickly transferred to an LB trough for surface tension measurements.

§ RESULTS AND DISCUSSION

§.§ Pattern comparison

We discretize the model described in Section <ref> by recasting the evolution equations in variational form and approximating the displacement and surfactant damage fields with finite element basis functions. Initial-boundary value problems are constructed over circular domains representative of the Petri dishes used in the experiments. The experiments have a stochastic aspect corresponding to the initial particle locations. This effect is modeled at the continuum scale by using initial packing fields with spatial variability. Specifically, we numerically construct initial packing fraction fields over the circular domain that have a mean value of ϕ̅ = 0.795 with a 6% variation, corresponding to uniform random distributions with minimum and maximum packing fractions of 0.75 and 0.84, respectively. The initial drop of surfactant is approximated by a sufficiently small (on the order of λ) empty circular region (ϕ_0=0) concentric with the center of the dish. Table <ref> provides the reference values of the parameters used for the simulations in this section. All simulations were performed using MOOSE (Multiphysics Object-Oriented Simulation Environment), a finite-element framework primarily developed by Idaho National Laboratory.<cit.>
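A minimal way to generate such a spatially variable initial packing field is a nodewise uniform random draw; the sketch below (illustrative only; the discretization details of the actual MOOSE input are not reproduced here) recovers the quoted mean of 0.795 and roughly the 6% spread:

```python
import numpy as np

def initial_packing_field(n_nodes, phi_min=0.75, phi_max=0.84, seed=0):
    """Nodewise uniform random initial packing fraction field."""
    rng = np.random.default_rng(seed)
    return rng.uniform(phi_min, phi_max, size=n_nodes)

phi0 = initial_packing_field(10_000)
print(phi0.mean())  # ~0.795, the target mean packing fraction
```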
A first qualitative assessment of the model is shown in Figure <ref>. In this example, the parameter χ is adjusted to reproduce a three-branch pattern, which is a commonly observed configuration in the experiments for our given set of parameters. The result indicates that the model captures the general fracture process and reproduces the final three-branch configuration. Additionally, the cracks are sensitive to the packing fraction structure, which gives rise to more subtle features. These features include the particular angle at which the three-branch pattern is generated, the kinking and bending of cracks when gradients in the particle density are encountered, and the bifurcation of a crack when it encounters a high-density region directly in line with its propagation. In this work, stochastic features of the experiments are approximated through random variations in the initial packing fraction field. The specific details of the numerically generated fracture patterns thus stem from the initial conditions and are sensitive to the particular random seed used in the simulations. As a result, our simulation results are only designed to capture the generic features of the experiments. Nevertheless, the model reproduces the number of cracks observed in the experiments and, at a qualitative level, the features of crack kinking and the formation of secondary branches.

We now provide a detailed description of a representative simulation in which a three-branch fracture pattern arises. Figure <ref> shows the results from a simulation of this common pattern, obtained with the parameters given in Table <ref> and χ=0.0125 (see Table <ref> for the corresponding dimensionless quantities). The columns in Figure <ref> show the evolution of three fields, namely the surfactant damage, the tensile energy, and the packing fraction, along with a comparison with snapshots of a matching experiment at comparable moments in time. The distribution of d closely matches the fractured portion of the particle raft in the experiment. The tensile energy field W^+, which is representative of the processes that underpin crack propagation, transitions from an initially circular tensile distribution due to the surfactant force to one with broken symmetry, depending on the available energy in the system. The tensile energy drives the three branches until they eventually arrest due to an increase in the compressive energy. The field ϕ depicts the evolution of the initial packing fraction. As the fractured region expands, the monolayer contracts (to approximately ϕ=0.87) and the mobility of the surfactant in the fractured region (ϕ=0.3) is increased.

§.§ Phase diagram for fracture pattern characterization

We now study whether the dimensionless parameter χ can be used to construct a phase diagram for the fracture patterns that are produced. In so doing, we eliminate the random aspect of the previous simulations and consider initial packing fractions that are spatially uniform with ϕ_0 = 0.80. As illustrated in Figure <ref>, the results allow us to identify threshold levels of χ that delineate regions of a phase diagram for the fracture patterns obtained through simulations. The threshold χ=0.005 defines a limit below which fracture is not observed. As χ is increased, the fracture pattern starts to develop as a single crack, and then into three-branch shapes with further increases. We note that the three-branch configuration is quite ubiquitous in nature, as the presence of acute angles and the compression generated by the three cracks stabilizes the system. For larger values of χ, a fourth and a fifth branch are gradually realized. Larger values of χ are required to produce larger numbers of crack branches, resulting in unstable fracture patterns that cannot be clearly delineated in well-defined regions of our phase diagram.

To validate the zones and thresholds predicted by the simulations, we first calibrated our model to experiments using oleic acid spreading over a pure water layer. The resulting fracture pattern is shown in the bottom of the center column of Figure <ref>, which presents images of the final configurations for three different experiments, along with simulation results for comparable values of χ. Numerical simulations indicate that such a pattern can be obtained using χ = 0.018, which we label as χ_ref. As indicated by expression (<ref>), changing the surface tension difference corresponds to modulating the value of χ in the model. Accordingly, we conducted two additional experiments. In the first of these experiments, the surface tension of the liquid layer was reduced from 72 to 56 mN/m (cosolvent). In the second, the surface tension of the surfactant was reduced from 40 to 28 mN/m (acetone). These changes in surface tension correspond approximately to modifying factors of 1/3 and 2, respectively, on the reference χ_ref=0.018. We denote the corresponding levels of χ as χ_down and χ_up. As illustrated in Figure <ref>, simulations based on the values of χ_up and χ_down yield fracture patterns that are remarkably consistent with the corresponding experiments. Simulations based on doubling χ from χ_ref to χ_up result in an increase in the number of crack branches, from three to five. Conversely, simulations based on a reduction of χ from χ_ref to χ_down result in a decrease in the number of crack branches, from three to two. In each case, a qualitative match with the corresponding experiment is clear.
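Under the scaling χ ∝ (γ_f - γ_s)²/γ_s derived above, and assuming the prefactor is held fixed, the measured surface tensions alone predict the direction and rough size of these modifications. The following sketch (an illustration, not the calibration procedure used in this work) yields ratios of about 0.25 and 2.7, of the same order as the approximate factors of 1/3 and 2 adopted above:

```python
def chi_scaling(gamma_f, gamma_s):
    """Surface-tension part of chi ~ (gamma_f - gamma_s)**2 / gamma_s."""
    return (gamma_f - gamma_s) ** 2 / gamma_s

chi_ref  = chi_scaling(72.0, 40.0)  # pure water + oleic acid [mN/m]
chi_down = chi_scaling(56.0, 40.0)  # liquid layer reduced by cosolvent
chi_up   = chi_scaling(72.0, 28.0)  # surfactant reduced by acetone

print(chi_down / chi_ref)  # ~0.25 -> fewer crack branches
print(chi_up / chi_ref)    # ~2.7  -> more crack branches
```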
As χ is increased, the model presents a gradual change between patterns, from the formation of a single crack to a configuration with multiple branches. The model nicely reproduces the central damage zone and multiple crack branches observed in experiments for the range of χ in this study (10^-4–10^-1). This result is particularly important because the conventional view has been that the packing fraction distribution is the primary variable governing the morphological features of the crack pattern. Our results suggest that, although the randomness of the initial packing fraction can underpin features such as crack deflections and arrest, the branching and general patterns observed in the experiments correspond to a much more fundamental principle related to the balance between the surfactant force and the fracture toughness. In other words, the difference in surface tension between the surfactant and the underlying liquid is central to determining the crack patterns that emerge.

§.§ Temporal scaling

We now investigate the extent to which the model captures the speed of crack propagation. We examine the overall time required to fracture the particulate monolayer as well as the temporal scaling of the process. The snapshots in Figure <ref> show two fracture patterns that evolve at very different rates, but for which the final configurations are visually indistinguishable. For comparison purposes, we show our results using a dimensional version of the model (i.e., viscosity in Pa s). Previous experimental observations have indicated that, for the same initial packing fractions, the geometry of the fracture pattern is insensitive to the viscosity of the underlying liquid layer.<cit.> In the experiments, the liquid viscosity can be adjusted, for example, by increasing the ratio of glycerol in the glycerol-water-based liquid layer, raising the viscosity from 10^-3 Pa s to 10^-1 Pa s. This is consistent with work showing that the rate at which the process occurs decreases by the same order of magnitude.<cit.> We can capture this effect by adjusting the mobility defined in (<ref>), which is inversely proportional to the viscosity μ_f of the liquid layer. Additionally, experimental measurements indicate that the fracture proceeds in two stages. Crack growth rates that scale with the 3/4 and higher powers of t are observed in the early stages of the fracture process, followed by a gradual flattening as the crack advances and eventually arrests.<cit.> Arrest may occur in response to the increase in packing around the perimeter of the domain, which gradually makes it more difficult to displace the particles of the monolayer. As shown in Figure <ref>, our model qualitatively reproduces this behavior with time, in terms of the normalized fractured area A^* (namely the ratio of the area of the surface of the liquid layer that is exposed to the area of Ω).
As argued by Bandi et al.,<cit.> if the radially receding front R(t) of a particle raft scales according to R(t) ∼ t^3/4, then the particle-free area grows as R(t)^2 ∼ (t^3/4)^2 ∼ t^3/2. The growth rates in the early stages are slightly below t^3/2, which suggests that the presence of the monolayer slightly retards the expansion of the surfactant from what would be expected over a pure liquid surface.<cit.> Finally, we recognize a shift between the results from experiment and simulation, with the initial expansion process being noticeably faster in the experiment. This discrepancy at early times could be related to the rapid expansion in the drop of surfactant that is not captured by the model. The subtleties of the physics at early times are likely sensitive to the volume and height of the drop of surfactant at the time of injection. Questions related to these subtleties are a subject for future work.

Following the rapid initial expansion, the dynamics slow down considerably once the particulate rafts jam together. As shown in the inset to Figure <ref>, the late-stage dynamics exhibit approximately logarithmically slow relaxation over a decade in time. This kind of dynamics, also known as “creep” or “aging”, is observed in a variety of phenomena including granular compaction,<cit.> electron glasses,<cit.> polymer relaxation,<cit.> and superconductors,<cit.> and is now considered a hallmark of non-exponential, slow relaxation in amorphous media.<cit.> Although the observation times in our model and experiments do not extend over several decades, they are consistent with the expected behavior of slow relaxation in frustrated, amorphous media.
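The early-time exponent can be extracted from a simulated or measured A*(t) curve by a simple log-log regression. A minimal sketch (the arrays t and A_star are placeholders for the sampled data):

```python
import numpy as np

def early_time_exponent(t, A_star, t_max):
    """Least-squares slope of log A* versus log t over the early stage t < t_max."""
    mask = (t > 0) & (t < t_max) & (A_star > 0)
    slope, _ = np.polyfit(np.log(t[mask]), np.log(A_star[mask]), 1)
    return slope  # ~1.5 expected if the front recedes as R(t) ~ t**(3/4)
```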
§ CONCLUSIONS

We have presented a phase-field model that captures the main features of the surfactant-driven fracture of closely-packed particulate monolayers. The model and experimental results indicate that there is a competition between the spreading force of the surfactant and the fracture toughness of the monolayer that determines the number of fractures. The model gives rise to a dimensionless parameter that can be written in terms of the surface tensions, and which allows for a straightforward comparison with experimental conditions. Experiments were conducted to validate the model and delineate the regimes of fracture pattern as a function of that parameter.

Our model rests on a number of simplifying assumptions. In particular, we used the damage to the particulate monolayer as an indicator function for the surfactant concentration. As such, our model precludes investigations of the extent to which the surfactant can penetrate into the particulate monolayer network, ahead of (or behind) the crack front. We have also assumed that the strains exhibited by the monolayer are sufficiently small to justify modeling it as a linearly elastic solid. This assumption ceases to be reasonable if the surface tension difference is increased above some critical threshold. Moreover, its applicability is limited to systems in which the initial packing is low enough to allow the particulate layer to exhibit both loosely-packed and jammed behaviors as the surfactant spreads.<cit.> In either case, transitions between linear and nonlinear response, or between fluid and solid behavior, are worthy of further study. Finally, the interplay between liquid and solid constituents in the monolayer may endow it with a viscoelastic behavior that could also be explored. As the rate of surfactant transport is a function of the Marangoni stress, modulating the surface tension difference might also be used to explore any rate dependence in the monolayer. Future work will focus on enhancing the model to incorporate these effects and to allow for detailed exploration of their consequences and significance.

§ ACKNOWLEDGMENTS

Christian Peco, Yingjie Liu, and John E. Dolbow are grateful for the support of NSF grant CMMI-1537306 to Duke University. Wei Chen, M. M. Bandi, and Eliot Fried gratefully acknowledge support from the Okinawa Institute of Science and Technology Graduate University with subsidy funding from the Cabinet Office, Government of Japan. John E. Dolbow would also like to acknowledge support from the Okinawa Institute of Science and Technology during a sabbatical stay in late 2014, during which time this work began. | http://arxiv.org/abs/1706.08729v2 | {
"authors": [
"Christian Peco",
"Wei Chen",
"Yingjie Liu",
"M. M. Bandi",
"John E. Dolbow",
"Eliot Fried"
],
"categories": [
"cond-mat.soft"
],
"primary_category": "cond-mat.soft",
"published": "20170627085113",
"title": "Influence of surface tension in the surfactant-driven fracture of closely-packed particulate monolayers"
} |
Peter Vereš [email protected] Present address: Minor Planet Center, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109

We perform high-fidelity simulations of a wide-field telescopic survey searching for Near-Earth Objects (NEOs) larger than 140 m, focusing on the observational and detection model, detection efficiency, and accuracy. As a test survey we select the Large Synoptic Survey Telescope. We use its proposed pointings for a 10-year mission and model the detection of near-Earth objects in the fields. We discuss individual model parameters for magnitude losses, vignetting, fading, asteroid rotation and colors, fill factor, limiting magnitude, rate of motion, field shape and rotation, and survey patterns, and we assess results in terms of the cumulative completeness of the detected population as a function of size and time. Additionally, we examine the sources of modeling uncertainty and derive the overall NEO population completeness for the baseline LSST survey to be 55±5% for NEOs with absolute magnitude brighter than 22. Including already discovered objects and ongoing surveys, the population completeness at the end of the LSST baseline survey should reach ∼ 77%.

§ INTRODUCTION

The number of currently discovered asteroids is about 730,000, the vast majority of which are found in the main asteroid belt. The size of the asteroid catalog has sky-rocketed with the dawn of automated, CCD-based asteroid surveys only two decades ago. The first asteroid, Ceres, was discovered in 1801 and has been classified as a dwarf planet since 2006. In 1900, only 545 asteroids were known. The first Near-Earth Object (NEO), 433 Eros, was discovered two years earlier. In 1992, recognition of NEOs as a potential threat to Earth led the US Congress to mandate that NASA discover 90% of NEOs larger than 1 km within 10 years. Thus the Spaceguard survey, aimed at the discovery and study of NEOs, was established. The first generation of dedicated asteroid surveys was developed under this initiative, e.g., Spacewatch <cit.>, LINEAR <cit.>, NEAT <cit.>, LONEOS <cit.>. As a direct result of these efforts, the number of NEOs increased from 649 in 1998 to over 5000 in 2008, and as a byproduct, other small body populations of the Solar System were discovered as well, ranging from main belt asteroids, to Jupiter Trojans, Centaurs, Kuiper belt objects and comets. With new technologies becoming available, and new challenges such as impact hazard mitigation of smaller NEOs, potential commercial activities like asteroid mining, asteroid sample return, and asteroid redirection, as well as crewed mission concepts to an NEO, a new target was set to discover 90% of NEOs larger than 140 m. Now the next generation of Solar System surveys has emerged, notably Pan-STARRS <cit.> and the Catalina Sky Survey <cit.>, which now dominate minor planet discoveries; the number of known NEOs reached 15,000 in 2016. Meanwhile, NEOWISE <cit.> is a space-based infrared telescope that is finding many asteroids, and providing diameter estimates for even more. However, the current NEO completeness for 140 m and larger objects is only about 30% and the currently operating search programs are not able to achieve the 90% goal in the next few decades.
Therefore, a number of future projects have been proposed and are in varying stages of development, such as the Zwicky Transient Facility <cit.>, NEOCam <cit.> and the Large Synoptic Survey Telescope <cit.>. Survey discovery rates and statistics, along with information on the sky coverage and telescope sensitivity, are widely used for determination of the population counts and statistical distribution of orbital elements. For instance, <cit.> utilized discovery statistics and the telescopic biases to derive NEO population properties. This reverse engineering and debiasing led to estimates of the size-frequency distribution of a given population and is a key source of information about the current state of the Solar System inventory and an input for modeling the origin and evolution of planets and minor bodies. These models have uncertainties, mostly originating in errors in determination of the absolute magnitude H of asteroids and poorly understood search and detection efficiencies of individual surveys. The opposite approach allows prediction of discovery rates for current and future telescopes. This requires knowledge of the key parameters of the telescopes and their detection pipelines. Even though simulations of discovery rates have been performed for Solar System surveys, the limiting parameters, errors, or uncertainties are often omitted and their effects are not discussed or fully disclosed in the literature. Without a complete description of the input parameters and their effect on the detection efficiency, a useful comparison of survey simulations done by different authors is challenging and often infeasible.

Traditionally, “discovery” means not just the first observation of an asteroid but also its confirming astrometry from the following nights, until a provisional orbit is obtained. Future large-scale, deep, all-sky surveys, like LSST, will not have the luxury of follow-up confirmation and will depend on self-follow-up capabilities. This will remove the difficult-to-model human factor from the process of discovery. Moreover, automated pipelines and processing are now more frequently responsible for object detection and reporting, which allows a more exact description of the input parameters and detection thresholds. Surveys also detect objects already discovered and have to identify them in the pipeline.

Our work focused on the NEO detection and discovery rates and their variations as a function of the selected observational and detection parameters. We studied a broad range of effects, including the magnitude variations due to vignetting, asteroid colors and light curves, the rate of motion of asteroids, limiting magnitudes, the probability of an object being detected when near the faint limit, the shape and rotation of the field, fill factor, the survey search pattern, and the overall sky coverage. As a case study we consider the Large Synoptic Survey Telescope (LSST), which will start its commissioning in 2020 and its main 10-year mission in 2022. Even though LSST will be mainly focused on understanding and studying Dark Matter and Dark Energy, its all-sky coverage, large aperture and the largest CCD camera in the world make it an effective Solar System cataloging tool that is predicted to increase the number of known Main belt asteroids to 5 million, NEOs to 100,000 and Trans-Neptunian Objects to 40,000 <cit.>. We used the Moving Object Processing System <cit.> to simulate the detections and to link detections into tracklets. MOPS computes ephemerides of input synthetic orbits with defined H for a set of telescope pointings, provides visible detection lists, and submits them to the next stage to create tracklets, which are the single-night detections of a candidate moving object.
MOPS and its variations have been used to process real data of Pan-STARRS, DECAM <cit.>, and NEOWISE, and will be used by LSST. MOPS has the capability of linking tracklets into inter-night tracks and computing derived orbits; however, it has never been tested in such a high-density detection and tracklet environment as LSST's. Thus, our work assumes that the linking to orbits will work on the available collection of tracks. In a separate paper <cit.>, we consider the linking efficiency problem for LSST with MOPS.

The primary survey performance metric of this work is the cumulative completeness of the detected NEO population in detections, tracklets and available 3-night tracks as a function of H, particularly for NEOs with H<22, and as a function of time. We also study the effects of using two input NEO populations and the detection efficiency of Potentially Hazardous Asteroids (PHAs), and compare our results with previous work.

§ LARGE SYNOPTIC SURVEY TELESCOPE

LSST <cit.> is being built atop Cerro Pachón in Chile and is funded by the National Science Foundation, the Department of Energy and private contributors. It is a next-generation optical and near-infrared all-sky survey telescope with the mission of sparse sampling of the sky in short exposure times (Table <ref>). A single 30-second exposure with a 9.6 square degree field of view will achieve 24.5 magnitude depth in r-band. LSST will scan 6,000 square degrees per night. Because of the unprecedented quantities of data and LSST's real-time alerting requirements, the detection, processing and data handling will be fully automated.

Scheduling of the visited fields is done through the Operations Simulator <cit.>. Fields are visited in five “proposals”, each focusing on different science. Most of the time (85%) is spent in the Universal proposal that covers the entire Southern Hemisphere; the rest is divided between the coverage of the Galactic Plane, Northern Ecliptic Spur, South Celestial Pole and a few Deep Drilling (DD) fields (Figure <ref>). The final survey pattern is not yet selected, but several scenarios have been tested. We used some of the more relevant OpSim runs for our simulations: the old LSST baseline, the new baseline, and an NEO-enhanced variant. Most of the fields are observed twice per night, except for two variations, where three and four visits were targeted (Figure <ref>). Some of the fields are only observed once per night (singletons). These were removed from our simulation because they cannot aid discovery. Also, the DD proposal covers only a few individual fixed fields that are visited in an extreme revisit cadence (tens of times per night, one by one) and are not suitable for NEO discovery; therefore, DD was removed as well. The median revisit time between exposures of the same field in a night is 21 minutes and in 94% of cases the first and last visit are within 2 hours in a night (Figure <ref>). Each field is observed in one of the six filters (Table <ref>) and at a specific limiting magnitude, seeing, simulated atmospheric conditions and lunar phase. We generated synthetic detections for NEO and MBA population models (see Section <ref>) by propagation of the orbits to the epochs of the OpSim fields.
The propagation used JPL's small body codes with the DE405 planetary ephemerides, where all planets, plus Pluto and the Moon, were perturbing bodies. We did not use any asteroids as perturbers. Only detections inside the fields of view were saved into preliminary detection lists. The limiting magnitude m_5 of the field is defined at a signal-to-noise ratio of SNR=5. Photometric and astrometric errors associated with the astrometry were neglected because they are not relevant to the study approach, which does not include orbit fitting. The following subsections describe the details of the NEO detection modeling approach.

§.§ Population models

This work tested two NEO population models. Each model is represented by a set of Keplerian orbits with absolute magnitudes H defined in Johnson's V-band following the size-frequency distribution. The first NEO model that we used is from an earlier work <cit.>. “Bottke's” model consists of 268,896 orbits with H<25. A newer model was published by <cit.> after our work was already underway, which we refer to as “Granvik's” model. It has 801,959 orbits, again down to H<25. Even though Granvik's model has three times as many objects, the number of H<23 orbits is about the same as in Bottke's data set (Figure <ref>). On the other hand, Granvik's model lacks objects with H<17. For simulations, these objects can be omitted because such bright NEOs are few and mostly discovered. The main difference is in the slope of the distribution for H>23, where Granvik's population describes the observational data better than Bottke's early estimate. Bottke's orbital element distribution does not depend on H, while Granvik's population does. The slight differences between the distributions of orbital elements are evident in Figure <ref>. Compared to Granvik, Bottke predicted more objects at low perihelion distances and larger eccentricities, while Granvik, on the other hand, shows an excess at larger perihelion distance and inclination.

The Potentially Hazardous Asteroids (PHAs) are a special subset of the NEOs. They are defined as objects with a minimum orbit intersection distance (MOID) less than 0.05 au and H<22. H=22 corresponds to a diameter of 140 m when the albedo is equal to 0.14. We created a subset of PHAs from both Granvik's and Bottke's populations.

§.§ Focal Plane Model

The focal plane of a large-format survey telescope usually differs from a simple square or circular shape. LSST's focal plane has a square shape consisting of a 5×5 grid of 25 rafts with the 4 corner rafts removed (Figure <ref>). Each raft consists of an array of 9 CCD chips, yielding a total of 189 CCD chips. The detectors are 4096×4096 pixel CCDs, and so the total number of active pixels is 3,170,893,824. The design of the mosaic camera includes chip and raft gaps that do not contain pixels; therefore, a detection hitting a gap is lost. The shape of the field can be modeled easily, and in the simulation we numerically compute the location of each detection with respect to the borders of the camera. The fill factor (the gaps) can be simulated either by exact mapping of the grid to the sky-plane coordinates or by a statistical approach that randomly removes a fraction of detections. The current LSST camera design expects a 90.8% fill factor.

LSST utilizes an altitude-azimuth mount and the camera is able to rotate; thus the fields are not generally aligned with the local RA-Dec frame. In fact, due to desired dithering, each exposure is observed with a randomized field orientation. The change of position angle between two exposures in a night (δ_rot) is small: the average δ_rot is 5° and the median δ_rot is 2.4°.
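In the statistical variant, the fill factor reduces to a per-detection Bernoulli draw and the dithered rotator angle to a uniform draw per exposure. A minimal sketch of that approach (illustrative only; the 90.8% fill factor is taken from the camera design quoted above, and the function names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

def apply_fill_factor(detections, fill_factor=0.908):
    """Statistical fill factor: randomly drop detections that hit chip/raft gaps."""
    keep = rng.random(len(detections)) < fill_factor
    return detections[keep]  # detections assumed to be a NumPy array

def random_rotator_angle():
    """Randomized field orientation (position angle) for one exposure, in degrees."""
    return rng.uniform(0.0, 360.0)
```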
§.§ Detection Fading

In survey simulations, the limiting magnitude is often used as a step-function cutoff that strictly determines which detections are visible in the simulated field. However, in real surveys the detection limit is better represented by a function that gradually fades near the limiting magnitude and captures the probability of a given detection. The function can be described as ϵ(m) = F/(1 + e^((m - m_5)/w)), where ϵ(m) is the probability of the detection, F the fill factor, m the magnitude of the detection, m_5 the limiting magnitude of the field, and w=0.1 the width of the fading function. The fading function is depicted in Figure <ref>. Because the limiting magnitude is unique per field, the fading function applies on a per-field basis. The importance of using the fading function lies in the fact that it actually allows detection of asteroids fainter than the m_5 limit. Given the brightness distribution of moving objects, this tends to increase the number of detections in the field with respect to a scenario where only the m_5 step function was used to determine detection.

§.§ Trailing and detection losses

LSST will observe the sky at a sidereal tracking rate, and so static sources will be detected as point-spread functions (PSFs). Asteroids, however, will move in the images within the exposure time. For instance, in a 30-second single exposure a typical NEA moving at 0.64 deg/day (Figure <ref>) will move by 0.8 arcsec. The expected mean seeing at the LSST site is 0.85 arcsec (Figure <ref>); therefore, instead of a PSF, most NEAs will appear trailed (Figure <ref>), with the fastest objects being the smallest and closest to the Earth. The detected trail may be described as a convolution of a PSF and a line <cit.>. The flux of the moving source is spread along multiple pixels and the per-pixel signal decreases as a function of its apparent velocity. The longer the trail, the fainter the peak signal. Also, the signal-to-noise ratio (SNR) decreases due to trailing, as the trailed detection contains the sum of noise from a greater number of pixels.

We consider two types of trailing magnitude losses. The first one is the loss that happens when a Gaussian or PSF-like filter is used to identify the sources in the image. The source function is traditionally modeled according to static and well-defined sources like unsaturated stars <cit.>; therefore, if the model finds a trail, it only captures a fraction of the flux in it (Figure <ref>). We call this magnitude loss the “detection” loss <cit.>, and it is described by the function Δm_Trail = 1.25 log_10(1 + cx^2), where c=0.42 and x = (v t_exp)/Θ is the trail length in units of FWHM seeing disks Θ. Here v is the rate of motion and t_exp the exposure time. After LSST detects a source, it calculates the SNR with a number of algorithms, including the use of a trail-fitting kernel, which leads to our second type of trailing magnitude loss. Here the SNR of the trail is calculated from the source flux and the noise of the background from the entire trail. The greater the area of the source, the larger the amount of background noise that decreases the overall SNR of the detection. The magnitude loss due to the SNR trailing penalty <cit.> can be described by the function Δm_SNR = 1.25 log_10(1 + ax^2/(1 + bx)), where a=0.76, b=1.16, and x is again the normalized trailing factor <cit.>. As shown in Figure <ref>, the detection losses are a factor of two worse than the SNR losses, which immediately implies that significantly trailed detections found by LSST will have SNR ≫ 5.
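The three expressions above combine into a simple per-detection model: apply the appropriate trailing penalty to the apparent magnitude, then evaluate the fading probability at the penalized magnitude. A sketch (illustrative only; which loss term applies depends on the pipeline stage being modeled):

```python
import numpy as np

ARCSEC_PER_DEGDAY_S = 3600.0 / 86400.0  # converts deg/day to arcsec/s

def trailing_loss_detection(v, t_exp, theta):
    """PSF-filter ("detection") trailing loss in magnitudes.
    v in deg/day, t_exp in seconds, theta = FWHM seeing in arcsec."""
    x = v * ARCSEC_PER_DEGDAY_S * t_exp / theta  # trail length in seeing disks
    return 1.25 * np.log10(1.0 + 0.42 * x**2)

def trailing_loss_snr(v, t_exp, theta):
    """SNR-penalty trailing loss in magnitudes."""
    x = v * ARCSEC_PER_DEGDAY_S * t_exp / theta
    return 1.25 * np.log10(1.0 + 0.76 * x**2 / (1.0 + 1.16 * x))

def detection_probability(m, m5, fill_factor=0.908, w=0.1):
    """Fading function: probability that a source of magnitude m is detected."""
    return fill_factor / (1.0 + np.exp((m - m5) / w))

# Example: a typical NEA at 0.64 deg/day in a 30 s exposure and 0.85" seeing
m_eff = 22.0 + trailing_loss_detection(0.64, 30.0, 0.85)
print(detection_probability(m_eff, m5=24.5))
```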
As shown in Figure <ref>, the detection losses are a factor of two worse than the SNR losses, which immediately implies that significantly trailed detections found by LSST will have SNR≫5.§.§ Asteroid ColorsAbsolute magnitudes of asteroids are published in Johnson V-band and are used to calculate the apparent magnitudes. However, LSST will observe in 5 distinct filters in the visible and near infrared (u,g,r,i,z,y) and the transformation to V-band depends on the spectral characteristics of the asteroid. Even though distributions of asteroid colors in the main belt are relatively well understood to H<18 <cit.>, sampling of the NEA population is rather sparse. For instance, the SDSS database <cit.> contains only 174 NEA and the Asteroids Lightcurve Database <cit.> currently contains 1,115 NEA with identified spectral type. That is only a small fraction of the NEAs currently known (15,000 as of November 2016). Bottke's and Granvik's NEA populations, which we used in this work do not include albedo, diameter, or spectral type. We generated the distribution of spectral classes of NEA by the debiased distribution derived by <cit.> shown in Table <ref>. For simplification of the magnitude transformation from LSST filters to Johnson V-band, we divide these classes into two groups: S (Q-type, S-type and high-albedo X-types) and C (C-types, D-types and low-albedo X-types). With this scheme, the numbers of C and S type asteroids are similar. Also, the generated colors do not depend on H or orbital elements. The magnitude transformation model used in the study is presented in Table <ref>. The tabulated color indices are based on the LSST filter bandpasses, with mean reflectance spectra from <cit.> and the <cit.> model for the solar spectrum. §.§ Light Curve Variation The apparent magnitudes of asteroids derived from the MOPS ephemerides do not reflect the amplitude variation due to asteroid shapes, rotation and spin axis orientation. This effect could be significant when the sky is observed in a sparse temporal resolution, e.g., a few times per night. If the asteroid has an apparent brightness close to the limiting magnitude, a change in the brightness due to rotation in the time interval between two LSST exposures can cause the asteroid to be visible only in one of the images. Therefore, the tracklet will not be created and the asteroid would be lost on that particular night. On the other hand, some asteroids can be brighter than the ephemeris magnitude just because they were observed near the maximum of the light curve, leading to the possibility of finding objects nominally below the detection limit.In this work we generated amplitudes and periods for the model populations based on the debiased model by <cit.>. We extended this model so that it depends on the absolute magnitude H by the method described in <cit.>. The simplified light-curve corrections are represented by a sine wave defined by the generated amplitudes and periods at the epochs of the time of observation.This approach does not reflect the real shape of asteroids nor an amplitude that depends on the phase angle or the spin axis orientation. Despite the simplification, the generated magnitudes should reflect reality because they follow the debiased not the observed distribution. The main drawback lies in the fact that <cit.> observed MBAs near opposition at very low phase angles, while NEA phase angles vary widely. 
§.§ Vignetting

Optical and mechanical pathways with lenses and mirrors cause vignetting, which decreases brightness, especially at large distances from the optical axis of the system. Large focal planes and wide fields are especially prone to vignetting, even though specialized optical elements and optimized wide-field systems like LSST can reduce the effect significantly <cit.>. The LSST vignetting model depends only on the distance from the center of the field <cit.>. Figure <ref> shows the magnitude loss caused by vignetting in the LSST focal plane. Only 7% of the collecting area has a sensitivity penalty due to vignetting that is greater than 0.1 mag. The magnitude loss is significant only far from the field center and will only affect the most distant corners of the detector. Therefore, vignetting should not be expected to cause a large completeness loss.

§ SIMULATIONS

MOPS, our simulation framework, provides detected NEOs in LSST fields based on selected parameters and constraints. A list of detections for a given night is submitted to MOPS in the second stage to create tracklets. A tracklet is generated for a detection in the first image of a given field if a second detection near the first is present in a subsequent image. The area of the search circle is determined by lower and upper velocity limits, set to 0.05 deg/day and 2.0 deg/day in this work. If more connections are possible within the circle then, in addition to a clean tracklet consisting of a single object, a tracklet consisting of two objects is created as well. Increasing the upper velocity limit rapidly increases the number of false tracklets. Therefore, for velocities of 1.2–2.0 deg/day we used the information on the trail length of the detection when making tracklets. At 1.2 deg/day the detection will have a non-PSF shape and its length will be 1.8 times the PSF at the average 0.86 arcsec seeing, and so its length and orientation can be determined. Thus, instead of a large circular search area around detections that are trailed, only smaller regions, consistent with the anticipated velocity and direction of the trails, need to be searched. Potential matches must also agree in the rate of motion.

Having detection and tracklet lists is enough to predict tracks, neglecting linkage inefficiencies. Moreover, to perform quick studies of the 10-year survey detection and tracklet efficiency, we decreased the number of NEO orbits to 3000 and created detection and tracklet lists with only the limiting magnitude and the field shape and rotation parameters. The absolute magnitude for all 3000 orbits was set to H=0. The advantage of this approach is that it allows us to adjust individual model parameters in post-processing and measure their effects on overall performance. The per-bin detection efficiency is then derived from the list of simulated detections or tracklets by adding a bin-by-bin δ H correction according to the anticipated size-frequency distribution and then accumulating the found objects to obtain integral completeness. This low-density approach is >100× faster in comparison with a full-density NEO-only simulation. Ephemerides of 3000 representative NEO orbits were computed for all OpSim fields, providing a detection list. Post-processing readily creates tracks because all detections and tracklets are identified with the associated object by MOPS.
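The search-circle step can be sketched as follows (a toy stand-in for the MOPS tracklet stage, with our own function name; a real implementation would use proper spherical geometry and, for the 1.2–2.0 deg/day regime, the trail length and orientation cuts described above):

    import numpy as np

    def make_tracklets(det1, det2, dt_days, v_min=0.05, v_max=2.0):
        """Pair detections from two exposures of a field into candidate tracklets.

        det1, det2: (N, 2) arrays of (RA, Dec) in degrees for the two exposures.
        A pair is kept if its implied rate of motion lies in [v_min, v_max] deg/day.
        """
        pairs = []
        for i, p in enumerate(det1):
            sep = np.hypot(det2[:, 0] - p[0], det2[:, 1] - p[1])  # small-angle approximation
            rate = sep / dt_days
            for j in np.where((rate >= v_min) & (rate <= v_max))[0]:
                pairs.append((i, j))   # may include false (two-object) tracklets
        return pairs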
We studied two scenarios for potential tracks: 12-day and 20-day tracks. The minimum number of distinct nights for a track is 3, and so the minimum time span between the first and last tracklet is about 2 days. The integral completeness output by this quick simulation agrees well with the full-density simulation (Figure <ref>), and so we rely primarily on the low-density approach in this paper.

§.§ System Performance and Model Dependencies

The key output of our simulation is the 10-year integral completeness for detections, tracklets and tracks as a function of absolute magnitude or time (Figure <ref>). Figure <ref> reveals that over 80% of NEOs with H<22 are detected at least once in the ten-year survey. That number drops to 76% for at least one tracklet in ten years, and 67% for three tracklets in ten years, irrespective of their timing. To actually consider an object discovered and cataloged in this study, we require three tracklets on distinct nights over no more than 20 days or 12 days, which leads to 61% or 58% completeness, respectively, in Figure <ref>. The time history of completeness reveals that after ten years the rate of cataloging is still increasing at about 2% per year in C_H<22.

We tested the importance of a number of our modeling assumptions, as discussed in the following list. The relevance of each item for C_H<22 is tabulated in Table <ref> and depicted in Figures <ref> and <ref>.

* To compare the population models, we performed two low-density simulations with 3000 NEOs on fields for Bottke's and Granvik's NEO models. Granvik's population led to a slightly greater completeness at H<22, but the efficiency is significantly greater for Bottke's population at H<25 (Figure <ref>), primarily because most of Granvik's NEOs are small and therefore much harder to detect. Table <ref> shows the percentage difference between the two models for detections, tracklets and tracks.

* Fill factor is one of the key effects that drives detection efficiency down. The low-density simulation focused on altering the statistical fill factor by one percentage point. Dropping the fill factor by one and two percentage points from 0.90 led to almost no loss for detections and small losses (0.5–0.9%) for 12- and 20-day tracks. (Figure <ref>, Table <ref>)

* Trailing losses represent a major effect for NEOs; however, they are negligible for distant asteroids like MBAs. NEO completeness is reduced by 1 percentage point for single detections and up to 2 percentage points for 12- and 20-day tracks. Even though detection losses cause a much larger magnitude loss per single detection, in a 10-year low-density simulation detection losses are similar to SNR losses in completeness. (Figure <ref>, Table <ref>)

* Because of the Malmquist bias, the fading model leads to more detections than a hard step-function faint limit. However, for tracks, fading actually decreased the completeness. This nonintuitive effect arises because at the faint limit the fading model behaves just like a very low (50%) fill factor. So, even though fading provided more detections, these originated from the faint end, below the m_5 threshold, where the probability of a single detection is below 0.5. In the case of a tracklet this is less than 0.5^2, and for tracks, which require 3 tracklets and a minimum of 6 detections, the probability of getting cataloged is only 0.5^6. (Figure <ref>, Table <ref>)
* We tested three scenarios with NEOs being only S type, only C type, or a 50–50 mixture of S and C types, independent of orbital elements or H. The default class of NEOs used in this work is S type. Switching to C types led to a net decrease of detection efficiency; nevertheless, the loss was rather small. Switching from S to a mixture of C and S types led to a relative loss of 0.5% at H<22 for 12-day tracks. If all asteroids were C types, the completeness loss would be slightly larger, at 1.2%. (Figure <ref>, Table <ref>)

* Like fading, the light curve variation provided somewhat more detections and yet fewer linkable tracks. For detections and tracklets, the completeness increased by 0.3% and 0.1% at H<22. However, for 12- and 20-day tracks the light curve variability caused a decrease in completeness by 0.3% and 0.2%. This effect is similar to the detection gain due to fading and the related combinatorics. The results showed that light curve variation has a negligible effect on completeness. (Figure <ref>, Table <ref>)

* As expected, vignetting plays only a minor role in the completeness of the survey. The NEO completeness penalty at H<22 is only about 0.3% for the tracks. (Figure <ref>, Table <ref>)

* So far we have discussed several nominal modeling aspects of the baseline LSST survey, but we also wish to consider off-nominal performance of the LSST system. For instance, the limiting magnitude and seeing for the fields are theoretical, though based on long-term observations from Cerro Pachón. Systematic offsets in limiting magnitude could cause a drastic drop in completeness, possibly larger than that from all previously mentioned parameters. For instance, if the limiting magnitude is only 0.2 mag shallower, the NEO (H<22) completeness penalty is ∼3% for 12- and 20-day tracks, and for a 0.5 mag loss the corresponding penalty is ∼8%. (Figure <ref>, Table <ref>)

* This study enforced a tracklet velocity limit of 2.0 deg/day, and a subset of that velocity range (1.2–2.0 deg/day) used a special treatment when creating tracklets, exploiting the length and orientation of the elongated detections. Figure <ref> depicts the rates of motion of NEOs and MBAs. Linking complexity is reduced for lower limiting velocities because pairs of detections are more separated at higher velocities, but this comes at the cost of reduced completeness because some fast-moving asteroids would not be discovered due to the velocity limit. So selecting the velocity threshold is a tradeoff between computational load and the objective of discovering small, fast-moving NEOs. Figure <ref> and Table <ref> show how different velocity cutoffs for tracklets affect the NEO completeness. Decreasing the upper velocity limit to 0.5 deg/day would dramatically decrease C_H<22, by more than 13% for tracks. On the other hand, increasing the upper bound from 2.0 to 5.0 deg/day only had a slight benefit of about 1% in completeness for tracks. If the trail velocity information is used for fast detections, increasing the upper velocity threshold is not likely to significantly increase the false tracklet rate, and it comes with a modest benefit.

§.§ Overall LSST Performance and Uncertainties

Throughout this work we have tested and analyzed an array of different survey models, including various OpSim runs, NEO population models and detection models.
Here we collect and summarize the various modeling details discussed above and characterize the uncertainty of the overall performance estimate. For what we consider our final result, we select the most current LSST baseline survey and the latest debiased NEO population estimate, namely Granvik's NEO model. This "new" approach can be compared with the "original" alternative of the earlier baseline survey with Bottke's NEO model. Table <ref> makes this comparison in an incremental fashion by cumulatively adding various model details.

Examining the original column of Table <ref>, we see that for 12-day tracks C_H<22=65.9% with the most rudimentary and optimistic detection model. The completeness drops by 6.8%, to 59.0%, when fill factor, fading, trailing losses, vignetting, NEO colors and light curves are applied. Accounting for the linking losses and bright-source masking assumed by <cit.>, we lose another 4.4%, arriving at a final estimate of 54.6%. Comparing these results with those from the new column, we see that there is very little difference in C_H<22 performance, with the new baseline only slightly better, by about 1.5% in C_H<22 after 10 years (Figure <ref>). This agreement is because of compensating factors: Granvik's steeper size distribution provides a ∼2% increase in C_H<22 (Table <ref>) for the new simulation, while the ∼0.25 mag reduction in its faint limit leads to a similar drop in performance. As shown in Figure <ref>, the effect of using the two different populations is clearer at different H limits. Specifically, Granvik's population is more numerous at the small end; therefore, the completeness is lower for H<25 when compared to the Bottke-based simulation. Also, one baseline seems to be slightly more productive in the early years, while the other catches up and passes it about 7 years into the survey.

[htb] NEO completeness in percentage points as multiple parameters are applied one by one (left column). Δ C denotes the difference between consecutive steps. The associated error in percentage points coming from the uncertainty of the model is listed under the heading "Uncert." The overall uncertainty on completeness is about 5%. The original results used the Bottke NEO model; the new results are for Granvik's NEO model.

Model Variation | Original C_H<22 | Δ C | New C_H<22 | Δ C | Uncert. | Remarks
None | 65.87 | | 67.66 | | ±0.6 | Assumes 100% fill factor, hard m_5 cutoff, no trailing losses, etc.
+ fill factor (90.8%) | 63.18 | -2.70 | 65.07 | -2.60 | +0.0/-0.8 | 90.8^+0_-2 % fill factor
+ fading | 62.41 | -0.76 | 64.29 | -0.78 | ±0.2 | fading width w=0.1±0.032 mag
+ trailing loss | 60.30 | -2.12 | 61.75 | -2.54 | ±0.7 | ±32% of Δ m_trail
+ vignetting | 59.91 | -0.38 | 61.29 | -0.46 | ±0.1 | ±32% of Δ m_vignette
+ colors | 59.38 | -0.52 | 60.67 | -0.62 | ±0.1 | 50:50 S&C groups, ±1σ in SDSS color indices
+ light curves | 59.04 | -0.35 | 60.32 | -0.35 | ±0.2 | ±32% of light curve amplitude
+ bright source removal | 58.35 | -0.69 | 59.64 | -0.68 | +0.2/-0.4 | 1.0^+1.0_-0.5 % masked <cit.>
+ linking efficiency | 54.61 | -3.71 | 55.82 | -3.79 | +3.3/-0.6 | 93.6^+6_-1 % linking eff. <cit.>
Population model | | | | | ±2.0 | Granvik vs. Bottke NEO models
Variation in faint limit | | | | | +1.8/-4.4 | m_5 variation of +0.10/-0.25 mag
OVERALL | 54.61 | -11.26 | 55.82 | -11.84 | +4.3/-5.0 | 55±5% adequately describes the final result

The completeness derived from our simulations has numerous error sources. The easiest to define are those associated with sampling. Other contributions from the various individual modeling details were derived step by step; we discuss them below and summarize them in Table <ref>.
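The cumulative bookkeeping in the table is easy to check (a sketch; the Δ C values are transcribed from the original column, and small rounding differences remain):

    penalties = {
        "fill factor (90.8%)":   -2.70,
        "fading":                -0.76,
        "trailing loss":         -2.12,
        "vignetting":            -0.38,
        "colors":                -0.52,
        "light curves":          -0.35,
        "bright source removal": -0.69,
        "linking efficiency":    -3.71,
    }
    c = 65.87                     # idealized completeness, no detection model applied
    for dc in penalties.values():
        c += dc
    print(round(c, 2))            # 54.64, vs. the tabulated 54.61 (rounding)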
The modeling details and our ad hoc approach for understanding their potential effect on C_H<22 are as follows:

* By using low-density simulations, the statistical sampling error is about 0.6%.
* Using Granvik's NEO population instead of Bottke's leads to a 2% increase in C_H<22, and we take this as a proxy for the uncertainty due to uncertainties in the NEO size distribution.
* Changing the fill factor by +0/-2% leads to a shift of +0/-0.8% in C_H<22.
* The assumed width for detection fading was a constant value of w=0.1. If this value is altered by 32%, then the completeness error from this source is 0.2%.
* Trailing losses are described by a detection loss function that is theoretical and not operationally validated. If the detection loss penalty varies by 32%, it leads to a 0.7% error in completeness.
* We allow the color indices taken from SDSS and assumed in the study to vary by 1σ of their values, which affects completeness by 0.1%.
* We allow a 32% variation in light curve amplitude, which leads to a 0.2% effect on completeness.
* For vignetting, we take a 32% variation in the magnitude loss to obtain a 0.1% variation in completeness.
* Bright sources lead to a large number of false detections in difference images, which could make linking inefficient. <cit.> found that by masking areas around bright sources (about 1% of the focal plane area) the false detection rate will be dramatically reduced. Thus we reduce the fill factor by 1%. If this value varies by -0.5/+1.0%, then the completeness error from this source is +0.2/-0.4%.
* Linking efficiency is discussed at length by <cit.>. Here we assume a value of 93.6^+6_-1 %. These uncertainties cause C_H<22 to vary by +3.3/-0.6%.
* We take the somewhat pessimistic view that the actual operational LSST m_5 can vary by +0.1/-0.25 mag, which causes the completeness to vary by +1.8/-4.4%.

We note that many of the foregoing stated uncertainties are more akin to sensitivity exercises than uncertainty estimates. In many cases we have no good statistical footing from which to infer the uncertainty in the inputs, and so the corresponding uncertainty estimate relies heavily on judgement. Nonetheless, taken all together, these modeling effects lead to a ±5% uncertainty in the predicted value of C_H<22, and so, in light of this uncertainty, the difference between the old (Bottke) and new (Granvik) simulations is negligible. Therefore we report our final LSST performance result as C_H<22=55±5% after linking losses. Here we emphasize that the stated uncertainty is not a Gaussian 1-sigma error bar, but rather reflects the possibility of modeling systematics that could compromise the result by up to 5%.

§.§ Alternatives to Nightly Pairs

Historically, the Minor Planet Center accepts only high-reliability tracklets from observers, and its internal linking processes assume that the false tracklet rate is low. This has been a reasonable assumption for past and current major surveys, which follow a cadence that naturally returns 3–5 detections per tracklet by repeatedly returning to the same field within a span of an hour or so. This survey approach is robust against false positives because the 3–5 detections in a tracklet must all be consistent with a common linear (or nearly linear) motion, effectively eliminating the possibility that one or more false detections could contaminate a tracklet.
LSST, on the other hand, is baselined to return only two detections per tracklet, which eliminates the possibility of checking for internal consistency among the elements of a tracklet. The result is a high rate of false tracklets that is not suitable for submission to the MPC. LSST will work past this obstacle by submitting only high-reliability three-night tracks to the MPC.

The LSST approach of obtaining nightly pairs is certainly more fragile for linking than a cadence that returns nightly triples or quads, but the fragility comes with the marked benefit of significantly increased sky coverage per night and hence a shorter return period, leading to more tracklets per observing cycle. This restores some measure of robustness and certainly leads to increased discovery rates, so long as the linking problem can be managed. If, for whatever reason, and however unlikely, LSST cannot successfully link two-detection tracklets, then it could conceivably be forced to observe triples or quads to meet survey objectives. Here we compare the performance of the pair-wise baseline survey with the two alternative OpSim runs, which are tuned to provide 3- and 4-visit cadences, respectively. For the 2-visit baseline survey we required at least 2 detections for tracklet creation, in the 3-visit baseline at least 3 detections per tracklet ("triples"), and in the 4-visit cadence at least 4 detections per tracklet ("quads").

We emphasize that the benefit of a cadence that produces triples or quads is that it eases the linkage challenge. It does not produce better orbits. Tracklets formed from three or more detections have far higher confidence than those obtained from pairs because with only two positions there is no independent corroboration of linear motion. Thus for pairs, the idea that a tracklet's detections are associated with a single moving object is a hypothesis to be tested by the linking engine, whereas with three or more detections the tracklet has a high likelihood of being real. Linking is challenging for pairs because of the high false tracklet rate, and easy for triples or quads because there is no hypothesis testing involved.

In terms of orbit quality, there is no appreciable difference between orbits derived from pairs, triples or quads. Each tracklet provides a position and rate of motion on the plane of sky, with no information on plane-of-sky acceleration (except for very close objects, which are rare enough to be ignored in this context). The orbit quality depends primarily on how many distinct nights the object has been observed and the time interval between the first and last night.

Table <ref> shows that visiting the same field more often per night predictably decreases the effective areal search coverage significantly, even though the alternate surveys have a similar number of fields observed in 10 years, similar limiting magnitudes, and similar inter-night survey patterns. Figure <ref> shows that all three surveys also contain some fields that are visited fewer or more times than the target number of visits per night. As mentioned earlier, singleton and Deep Drilling fields are not used in our simulations.

Figure <ref> and Table <ref> show the direct impact of the 3- and 4-visit cadence approach on NEO completeness. The completeness penalty is not as dramatic at the single-tracklet level; however, due to reduced sky coverage the tracks are dramatically affected. Three- and four-visit cadences could decrease the number of false tracklets significantly, but at the cost of a steep reduction in C_H<22.
The figure shows that 3- and 4-visit cadences have a severe impact on NEO completeness at all stages, from detections to tracklets to tracks. At H<22, the completeness for 12-day tracks falls from 58.6% for nightly pairs to 36.9% for the 3-visit cadence and 19.0% for the 4-visit cadence. The 4-visit cadence could be improved by accepting tracklets with detections on three out of the four visits, which would provide a performance somewhere in between that tabulated (and plotted) for the 3- and 4-visit cadences. We note that triples or quads should dramatically reduce the false tracklet rate, so that reliable tracks could likely be assembled from tracklets on only two nights. This approach would lead to increased completeness; however, such two-night tracks would inevitably lead to weak orbits that are not likely to meet survey objectives. In particular, two-night orbits are usually so uncertain that MBAs and NEOs cannot be distinguished in general. As discussed by <cit.>, this is already an issue for some three-night orbits, but for two-night orbits it is the norm. Table <ref> indicates the level of NEO completeness that would be obtained for the four-visit survey, assuming that a minimum of either three or four detections is available for each tracklet. We did not analyze the orbit quality of two-night tracks, but the challenge here is that the shortest arcs (e.g., 3 days) may be readily linked but the orbits are likely to be highly uncertain, while for the longer arcs (e.g., 12 days) the orbit may still be unacceptably weak but the linkage problem may be difficult. The numbers in Table <ref> should be compared with the 12-day, 3-night track completeness for the 4-visit cadence in Table <ref>, which is 19.1%. Even the most optimistic scenario in Table <ref> significantly underperforms compared to the baseline survey.

§.§ Including Prior and Ongoing Surveys

In all of the analyses presented so far, we have implicitly assumed that no NEOs have been discovered prior to the LSST survey. However, based on current NEO discovery statistics and published population models (e.g., <cit.>), the population of NEOs with H<22 is currently already complete to a level of approximately 30%. It is expected that this number will continue to increase until LSST becomes operational in 2022 and that at least some current or future NEO survey assets will continue to operate during the LSST mission. Therefore, some fraction of potential LSST NEO discoveries will have already been discovered by other surveys. Similarly, some fraction of objects missed by LSST will also have already been discovered. To estimate what the completeness will be after the ten-year LSST survey, we must account for the contributions of other surveys.

Spacewatch <cit.> was the first CCD-based NEO search program, but the era of dedicated wide-field NEA surveys began approximately 18 years ago when LINEAR <cit.> became operational. Since then, improvements in instrumentation and techniques have allowed the fielding of other advanced ground-based surveys like NEAT <cit.>, LONEOS <cit.>, the Catalina Sky Survey and the Mt. Lemmon Survey <cit.> and Pan-STARRS <cit.>. The space-based NEOWISE program <cit.> has also made contributions to cataloging NEOs. We simulated past, current and presumed future NEO surveys, from 15 years in the past to 15 years in the future (2002–2032). In the simulation, LSST begins operation in 2022, about five years from this writing.
Ephemerides of all objects were calculated once per day, and an object was considered discoverable when all of the following criteria were met (these cuts are encoded in the sketch at the end of this subsection):

* Apparent magnitude brighter than the detection limit V_lim
* Ecliptic latitude between ±60^∘
* Geocentric opposition-centered ecliptic longitude between ±90^∘
* Declination from -30^∘ to +75^∘
* Lunar elongation >90^∘

Ground-based surveys are limited by weather and cannot cover the entire sky each night; therefore, only a fraction F_disc of the objects that were discoverable according to the above criteria were added to the catalog. Surveys have improved over time, and so we gradually improve the detection model in 5-year steps:

* A LINEAR-like era from 15–10 years ago, with limiting magnitude V_lim=19.5 and F_disc=0.5
* A Catalina era for the following 5 years, with limiting magnitude V_lim=21.0 and F_disc=0.6
* For the past 5 years, Pan-STARRS1 and the Mt. Lemmon Survey have operated at a limiting magnitude of V_lim=21.5 and F_disc=0.7
* For the next 5 years from the present, we expect the limiting magnitude to be V_lim=22.0 and F_disc=0.8, to account for improvements in the combination of Pan-STARRS1, Pan-STARRS2 and both Catalina surveys. Also, the southern declination limit is extended to -45^∘.
* Starting 5 years from the present, we augment the previous search interval with the LSST survey and increase F_disc to 0.9 to account for continuing improvements in the other surveys.

Figure <ref> shows the outcome of this rudimentary simulation, which is deliberately tuned to match the estimated completeness at the current time. Our simple model predicts that 42% of NEOs with H<22 will be discovered before LSST becomes operational, and that without LSST the current NEO surveys alone could achieve C_H<22=61% by 2032, when the LSST survey is planned to conclude. We have shown above that LSST acting alone will achieve a completeness of about 58% by itself (neglecting linking efficiency here), but when combined with past and other expected NEO search efforts, C_H<22 rises to 77%. This is not a high-fidelity analysis, but it shows that the combination of LSST with other ground-based search activity will increase C_H<22 by about 20% compared with the naive assumption of LSST starting with an empty catalog. Put another way, we project that LSST will provide a 16% increase in C_H<22 compared with the anticipated efforts of the existing NEO search programs.
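The daily discoverability cuts listed above translate directly into code; a minimal sketch (function names are ours; angles are in degrees, and the -30^∘ declination limit becomes -45^∘ in the later eras):

    import random

    def discoverable(V, ecl_lat, opp_ecl_lon, dec, lunar_elong,
                     V_lim, dec_min=-30.0):
        """Daily discoverability cuts of the multi-survey model."""
        return (V < V_lim
                and abs(ecl_lat) <= 60.0
                and abs(opp_ecl_lon) <= 90.0
                and dec_min <= dec <= 75.0
                and lunar_elong > 90.0)

    def cataloged(V, ecl_lat, opp_ecl_lon, dec, lunar_elong, V_lim, F_disc):
        """A discoverable object is added to the catalog with probability F_disc."""
        return (discoverable(V, ecl_lat, opp_ecl_lon, dec, lunar_elong, V_lim)
                and random.random() < F_disc)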
§.§ Comparison with previous work

Predictions of NEO and PHA discovery and detection rates for LSST have been made for a decade. <cit.> used 1000 synthetic NEO orbits based on a large (D>1 km) population, with assumed LSST sky coverage and revisit rate, and with only a rough estimate of cadence or limiting magnitudes. <cit.> predicted that LSST will detect 90% of PHAs larger than 250 m and 75% of PHAs larger than 140 m, with an optimistic estimate of 90% completion if the cadence and survey are NEO-optimized. <cit.> used a list of LSST pointings and a higher-fidelity model for the survey, using a diameter-limited PHA population of 800. This analysis found 82% PHA completeness (D>140 m) that could be improved to 90% with the same cadence if stretched to 12 years.

<cit.> simulated the NEO and PHA performance of LSST for the baseline and 4-visit surveys. Additionally, they discussed the discovery performance of LSST combined with existing surveys and the proposed NEOCAM space-based survey. They used the same fields as this study (except that they did not remove singletons and Deep Drilling fields), the same limiting magnitudes and a 95% fill factor, which is higher than the currently anticipated 90.8%. The synthetic population was different, consisting of 20,000 NEOs down to a size threshold of 100 m. Also, <cit.> allowed creation of tracklets in a velocity range of 0.011–48 deg/day, which is significantly broader than our range (0.05–2.0 deg/day). The spectral distribution of synthetic objects had an equal balance between C and S types; however, the color transformation to V-band was derived from slightly outdated SDSS specifications <cit.>. Their simulation process was similar to ours, where detections and tracklets were assembled into lists and built into assumed tracks through post-processing. A track was created when 3 tracklets were detected within 12 days, with a maximum separation of 6 days between two of them. However, for the 4-visit case, <cit.> required only two 4-detection tracklets within 12 days to build a track. This is a significant difference from our requirement of three tracklets on three distinct nights within at most 12 or 20 days. While <cit.> were skeptical of the 2-visit cadence, given that it has never been tested operationally on a comparable survey, their 4-visit alternative only required two tracklets for a track. They also did not include trailing losses, vignetting, fading or light curve variation, and they assumed the linking efficiency to be 100%. <cit.> derived that for the 2-visit cadence LSST will have a 63% completion of NEOs larger than 140 m, and for PHAs they find C_D>140=62%. In the 4-visit cadence, they reported C_D>140=59% for NEOs and 58% for PHAs. The completeness was presented with ±1% uncertainty. To make a robust comparison with this work, we ran a simulation with a <cit.> population of NEOs (19,597) and PHAs (2,346). Our model included fading, trailing losses, vignetting and a 90% fill factor. Table <ref> and Figure <ref> present the comparison. Our results are substantially consistent with <cit.>; however, we note that our PHA completeness is slightly higher than that for NEOs, while the converse is true for <cit.>.

<cit.> performed a detailed computation of LSST performance using multiple parameters mentioned in this work as well as identical survey fields and patterns. Their pipeline, based on MOPS, worked at the detection and tracklet level, predicting tracks in post-processing of the tracklet list, similar to our low-density simulations, and provided metrics within their Metrics Analysis Framework (MAF). They used an H-delimited population based on <cit.>. Their results are compared to ours in Table <ref>, where we see an excellent match, within a percentage point, for the original baseline. For the new baseline, the UW results show a slightly increased C_H<22 (∼3%) compared to the present study.

There were several modeling differences between the present report and <cit.>. For example, <cit.> did not apply a rate-of-motion cut-off, which should lead to an increase in computed completeness. Also, they used all OpSim fields, without rejecting Deep Drilling fields and singletons, which should again lead to a slight increase in their modeled completeness. However, they considered all NEOs to be C type, and had a 1-hour restriction on the maximum tracklet duration, which is half of our maximum duration. These two modeling effects could drive their estimated completeness down.
Also, <cit.> used a more sophisticated model for chip and raft gaps, masking their exact locations in the focal plane, whereas our work used only a random number as a fill factor to assign a detection probability to every detection. Collectively, these model variations readily explain the slightly different NEO completeness estimates reported by <cit.>.

§ CONCLUSIONS

We have demonstrated the importance of the observational constraints and input parameters on the detection of NEOs by an automated survey such as LSST. Among the numerous modeling details that we investigated, few result in a significant effect on survey performance if reasonable care is taken in developing the nominal survey model. A 2% reduction in fill factor or a major increase in trailing losses could lead to nearly a 1% reduction in C_H<22 for NEOs. We found a 2% difference in C_H<22 between the Bottke and Granvik NEO models, suggesting some dependence on the population size distribution. However, the Granvik model is based on a vastly superior input dataset compared to the 15-year-old Bottke model. One should not expect that future population models will lead to such large swings in the size distribution, and thus completeness estimates should be more stable in this regard. The effective, operational limiting magnitude of the LSST survey is a crucial parameter. We find that C_H<22 degrades by ∼1.8% for every 0.1 mag loss in sensitivity, making this the single largest source of model uncertainty in our completeness estimates.

We find that C_H<22≃55±5% for the baseline survey operating alone and with the linking working at 93.6% efficiency <cit.>. This result assumes 12-day, 3-night tracks, and accounts for all of the foregoing modeling features and sources of error, including linking losses. The 12-day track linking is very conservative; C_H<22 increases by 2–3% if the linking uses 20-day tracks in the baseline LSST cadence. We find that C_H<22 for PHAs is generally 3–4% greater than for NEOs. <cit.> reported simulations that suggest the PHA completeness is actually lower, by 1%. In contrast, the <cit.> report shows good agreement in C_H<22 for NEOs and PHAs in its simulation. For the new baseline, the UW results exhibit a slightly larger C_H<22, by ≈3%, with respect to our results.

The old and new LSST baselines and the NEO-enhanced scenario all provide similar NEO detection efficiencies, to within 1%, for a 10-year survey. Surveying for longer than 10 years increases C_H<22 by about 2% per year (1% per year if other surveys are taken into consideration). The three- and four-visit optimized LSST cadences that we tested had a dramatically reduced NEO completeness relative to that obtained for the two-visit baseline cadences. This result relies on the modeling hypotheses assumed throughout this study and assumes that tracklets on three distinct nights within 12 days are required for cataloging. The performance loss associated with these alternate cadences could be significantly eased if the cataloging requirement were for tracklets on only two nights, rather than three, but it is doubtful that such two-nighters would have high enough orbit quality to meet cataloging objectives.

When LSST becomes operational in 2022, about 42% of NEOs with H<22 should have been discovered with the current assets. Without LSST, current assets could discover 61% of the catalog during the LSST era. With LSST and other surveys combined, C_H<22 should reach 77% by the end of the 10-year mission in 2032, assuming 12-day tracks.
Assembling the foregoing completeness results, including the contribution of other surveys, the post-LSST completeness should reach 80% for PHAs in ten years, and slightly more if 20-day tracks are linked. In our judgement, the H<22 PHA catalog is likely to approach ∼85% completeness, but probably not 90%, after 12 years of LSST operation.

Acknowledgments

The Moving Object Processing System was crucial to the successful completion of this work. This tool is the product of a massive development effort involving many contributors, whom we do not list here but who may be found in the author list of <cit.>. We thank Larry Denneau (IfA, University of Hawaii) for his tremendous support in installing and running the MOPS software. This study benefited from extensive interactions with Zeljko Ivezic, Lynne Jones and Mario Juric, all from the University of Washington. As members of the LSST project, they provided vital guidance in understanding the performance and operation of LSST. They also provided important insight into the expected interpretation and reliability of LSST data. They ensured that some of the OpSim runs required to fulfill study objectives were generated by LSST. Mikael Granvik (Univ. Helsinki) kindly provided an early version of the <cit.> NEO population model, which was used extensively in this work. This research was conducted at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2017 California Institute of Technology. Government sponsorship acknowledged.

[Araujo-Hauck et al.(2016)]2016SPIE.9906E..0LA Araujo-Hauck, C., Sebag, J., Liang, M., et al. 2016, , 9906, 99060L
[Bottke et al.(2002)]2002Icar..156..399B Bottke, W. F., Morbidelli, A., Jedicke, R., et al. 2002, , 156, 399
[Carvano et al.(2010)]2010A A...510A..43C Carvano, J. M., Hasselmann, P. H., Lazzaro, D., & Mothé-Diniz, T. 2010, , 510, A43
[Chance & Kurucz(2010)]chance_2010 Chance, K., & Kurucz, R. L. 2010, , 111, 1289
[Christensen et al.(2012)]2012DPS....4421013C Christensen, E., Larson, S., Boattini, A., et al. 2012, AAS/Division for Planetary Sciences Meeting Abstracts, 44, 210.13
[Delgado et al.(2014)]2014SPIE.9150E..15D Delgado, F., Saha, A., Chandrasekharan, S., et al. 2014, , 9150, 915015
[Denneau et al.(2013)]2013PASP..125..357D Denneau, L., Jedicke, R., Grav, T., et al. 2013, , 125, 357
[DeMeo et al.(2009)]demeo_pds_2009 DeMeo, F., Binzel, R. P., Slivan, S. M., & Bus, S. J. 2009, NASA Planetary Data System, 114
[DeMeo & Carry(2014)]2014Natur.505..629D DeMeo, F. E., & Carry, B. 2014, , 505, 629
[Flaugher et al.(2015)]2015AJ....150..150F Flaugher, B., Diehl, H. T., Honscheid, K., et al. 2015, , 150, 150
[Granvik et al.(2016)]2016Natur.530..303G Granvik, M., Morbidelli, A., Jedicke, R., et al. 2016, , 530, 303
[Grav et al.(2011)]2011PASP..123..423G Grav, T., Jedicke, R., Denneau, L., et al. 2011, , 123, 423
[Grav et al.(2016)]gsm16 Grav, T., Mainzer, A. K., & Spahr, T. 2016, , 151, 172
[Harris & D'Abramo(2015)]2015Icar..257..302H Harris, A. W., & D'Abramo, G. 2015, , 257, 302
[Hodapp et al.(2004)]2004SPIE.5489..667H Hodapp, K. W., Siegmund, W. A., Kaiser, N., et al. 2004, , 5489, 667
[Ivezić et al.(2001)]2001AJ....122.2749I Ivezić, Ž., Tabachnik, S., Rafikov, R., et al. 2001, , 122, 2749
[Ivezić et al.(2007)]2007IAUS..236..353I Ivezić, Ž., Tyson, J. A., Jurić, M., et al. 2007, Near Earth Objects, our Celestial Neighbors: Opportunity and Risk, 236, 353
[Ivezic et al.(2009)]2009AAS...21346003I Ivezic, Z., Tyson, J. A., Axelrod, T., et al.
2009, Bulletin of the American Astronomical Society, 41, 460.03
[Ivezić et al.(2014)]Ivezic2014 Ivezić, Ž., Tyson, J. A., Abel, B., et al. 2014, arXiv:0805.2366
[Jones et al.(2016)]2016IAUS..318..282J Jones, R. L., Jurić, M., & Ivezić, Ž. 2016, IAU Symposium, 318, 282
[Jones et al.(2017)]jones2017 Jones, R. L., Slater, C., Moeyens, J., Allen, L., Jurić, M., & Ivezić, Ž. 2017, "The Large Synoptic Survey Telescope as a Near-Earth Object Discovery Machine," submitted to Icarus
[Kaiser et al.(2002)]2002SPIE.4836..154K Kaiser, N., Aussel, H., Burke, B. E., et al. 2002, , 4836, 154
[Kaiser et al.(2010)]2010SPIE.7733E..0EK Kaiser, N., Burgett, W., Chambers, K., et al. 2010, , 7733, 77330E
[Koehn & Bowell(1999)]1999BAAS...31.1091K Koehn, B. W., & Bowell, E. 1999, , 31, 12.02
[Larson et al.(2003)]2003DPS....35.3604L Larson, S., Beshore, E., Hill, R., et al. 2003, Bulletin of the American Astronomical Society, 35, 36.04
[Mainzer et al.(2011)]2011ApJ...743..156M Mainzer, A., Grav, T., Bauer, J., et al. 2011, , 743, 156
[Mainzer et al.(2015)]2015DPS....4730801M Mainzer, A. K., Wright, E. L., Bauer, J., et al. 2015, AAS/Division for Planetary Sciences Meeting Abstracts, 47, 308.01
[Masiero et al.(2009)]2009Icar..204..145M Masiero, J., Jedicke, R., Ďurech, J., et al. 2009, , 204, 145
[McMillan & Spacewatch Team(2006)]2006DPS....38.5807M McMillan, R. S., & Spacewatch Team 2006, Bulletin of the American Astronomical Society, 38, 58.07
[Pravdo et al.(1999)]1999AJ....117.1616P Pravdo, S. H., Rabinowitz, D. L., Helin, E. F., et al. 1999, , 117, 1616
[Schunová-Lilly et al.(2017)]2017Icar..284..114S Schunová-Lilly, E., Jedicke, R., Vereš, P., Denneau, L., & Wainscoat, R. J. 2017, , 284, 114
[Smith et al.(2014)]2014SPIE.9147E..79S Smith, R. M., Dekany, R. G., Bebek, C., et al. 2014, , 9147, 914779
[Stokes et al.(2000)]2000Icar..148...21S Stokes, G. H., Evans, J. B., Viggh, H. E. M., Shelly, F. C., & Pearce, E. C. 2000, , 148, 21
[Stuart & Binzel(2004)]2004Icar..170..295S Stuart, J. S., & Binzel, R. P. 2004, , 170, 295
[Vereš et al.(2012)]2012PASP..124.1197V Vereš, P., Jedicke, R., Denneau, L., et al. 2012, , 124, 1197
[Vereš et al.(2015)]2015Icar..261...34V Vereš, P., Jedicke, R., Fitzsimmons, A., et al. 2015, , 261, 34
[Vereš & Chesley(2017)]2017Veres_2 Vereš, P., & Chesley, S. R. 2017, , submitted
[Warner et al.(2009)]2009Icar..202..134W Warner, B. D., Harris, A. W., & Pravec, P. 2009, , 202, 134
[Wright et al.(2010)]2010AJ....140.1868W Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, , 140, 1868-1881
[Xin et al.(2015)]2015ApOpt..54.9045X Xin, B., Claver, C., Liang, M., et al. 2015, , 54, 9045 | http://arxiv.org/abs/1706.09398v1 | {
"authors": [
"Peter Vereš",
"Steven R. Chesley"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170627191647",
"title": "High-fidelity Simulations of the Near-Earth Object Search Performance of the Large Synoptic Survey Telescope"
} |
[Email Address: ][email protected]
Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India
Theoretical Physics Division, Physical Research Laboratory, Ahmedabad - 380009, India
[Email Address: ][email protected]
Theoretical Physics Division, Physical Research Laboratory, Ahmedabad - 380009, India
[Email Address: ][email protected]
Theoretical Physics Division, Physical Research Laboratory, Ahmedabad - 380009, India
Discipline of Physics, Indian Institute of Technology, Gandhinagar - 382355, India
[Email Address: ][email protected]
Theoretical Physics Division, Physical Research Laboratory, Ahmedabad - 380009, India

We consider singlet extensions of the standard model, both in the fermion and the scalar sector, to account for the generation of neutrino mass at the TeV scale and the existence of dark matter, respectively. For the neutrino sector we consider models with extra singlet fermions which can generate neutrino mass via the so-called inverse or linear seesaw mechanism, whereas a singlet scalar is introduced as the candidate for dark matter. We show that although these two sectors are disconnected at low energy, the coupling constants of both sectors get correlated at a high energy scale by the constraints coming from the perturbativity and stability/metastability of the electroweak vacuum. The singlet fermions try to destabilize the electroweak vacuum while the singlet scalar aids the stability. As an upshot, the electroweak vacuum may attain absolute stability even up to the Planck scale for suitable values of the parameters. We delineate the parameter space for the singlet fermion and the scalar couplings for which the electroweak vacuum remains stable/metastable and at the same time gives the correct relic density and neutrino masses and mixing angles as observed.

Electroweak vacuum stability in presence of singlet scalar dark matter in TeV scale seesaw models
Najimuddin Khan
26 December, 2017
===================================================================================================

§ INTRODUCTION

The Large Hadron Collider (LHC) experiment has completed the hunt for the last missing piece of the Standard Model (SM) with the discovery of the Higgs boson <cit.>. The Higgs boson holds a special status in the SM as it gives mass to all the other particles, with the exception of the neutrino. However, the observation of neutrino oscillations in solar, atmospheric, reactor and accelerator experiments necessitates the extension of the SM to incorporate small neutrino masses. The seesaw mechanism is considered to be the most elegant way to generate small neutrino masses. The origin of the seesaw is the dimension-5 effective operator κ LLHH, proposed by Weinberg in <cit.>. Here, L and H are the SM lepton and Higgs fields respectively, and κ is a coupling constant with inverse mass dimension. This term violates lepton number by two units and implies that neutrinos are Majorana particles. The generation of the effective dimension-5 operator needs extension of the SM by new particles. The most minimal scenario in this respect is the canonical type-1 seesaw model, in which the SM is extended by heavy right handed Majorana neutrinos for ultra-violet completion of the theory <cit.>. The essence of the seesaw mechanism lies in the fact that the lepton number is explicitly violated at a high-energy scale which defines the scale of the new physics.
However, to give an observed neutrino mass of the order of m_ν∼ 0.01 eV one needs the Majorana neutrinos to be very heavy (∼ 10^15 GeV), close to the scale of Grand Unification. Since such high scales are not accessible to colliders, in the context of the LHC there has been a proliferation of studies involving TeV scale seesaw models. For recent reviews see for instance <cit.>. For the ordinary seesaw mechanism, lowering the scale of new physics to a TeV requires small Yukawa couplings O(10^-6)[Unless very special textures leading to cancellations are invoked <cit.>.], and for such values the light-heavy mixing is small and no interesting collider signals can be studied. One of the ways to reduce the scale of new physics to a TeV is to decouple the new physics scale from the scale of lepton number violation. The smallness of the neutrino mass can then be attributed to small lepton number violating terms. A tiny value of the latter is deemed natural, since when this parameter is zero the global U(1) lepton number symmetry is reinstated and neutrinos are massless.

One of the most popular TeV scale seesaw models based on the above idea is the inverse seesaw model <cit.>. This contains additional singlet states (ν_s), along with the right handed neutrinos (N_R), having opposite lepton numbers. The lepton number is broken softly by introducing a small Majorana mass term for the singlets. This parameter is responsible for the smallness of the neutrino mass; one does not require small Yukawa couplings to get the observed neutrino masses, and at the same time the scale of new physics can be at a TeV. Another possibility for a TeV scale singlet seesaw model is the linear seesaw model <cit.>. The difference is that, in this case, a small lepton number violating term is generated by the coupling between the left-handed neutrinos and the singlet states. The inverse seesaw and linear seesaw differ from each other in the way lepton number violation is introduced in the model, as we will see in the next section. Also, the particle content of the minimal models that agree with the oscillation data differs for the two cases: for the linear seesaw we need only one N_R and one ν_s <cit.>, whereas in the inverse seesaw case we need two N_R and two ν_s <cit.>. Note that the minimal linear seesaw model is the simplest re-constructable TeV scale seesaw model, having a minimum number of independent parameters.

Apart from neutrino mass, another issue that requires extension of the SM is the existence of dark matter. Measurements by Planck and WMAP demonstrate that nearly 85 percent of the Universe's matter density is dark <cit.>. Among the various models of dark matter that are proposed in the literature, the most minimal renormalizable extensions of the SM are the so-called Higgs portal models <cit.>. These models include a scalar singlet that couples only to the Higgs. An additional Z_2 symmetry is imposed to prevent the decay of the DM and safeguard its stability. The coupling of the singlet with the Higgs provides the only portal for its interaction with the SM. Nevertheless, there can be testable consequences of this scenario which can put constraints on its coupling and mass. These include constraints from searches for invisible decay of the Higgs at the Large Hadron Collider (LHC) <cit.>, direct and indirect detection of DM, as well as compliance with the observed relic density <cit.>. Implications for the LHC <cit.> and ILC <cit.> have also been studied. Combined constraints from all these have been discussed in <cit.> and most recently in <cit.>.
The singlet Higgs can also affect the stability of the electroweak vacuum <cit.>. It is well known that the electroweak vacuum in the standard model is metastable and that the Higgs quartic coupling λ is pulled down to negative values by the renormalization group running at an energy of about 10^9-10^10 GeV, depending on the value of α_s and the top quark mass m_t, as the dominant contribution comes from the top Yukawa coupling y_t <cit.>. This indicates the existence of another low-lying vacuum. If the quartic coupling λ(μ) becomes negative at a large renormalization scale μ, it implies that in the early universe the Higgs potential would be unbounded from below and the vacuum would be unstable in that era. But this does not pose any threat to the standard model, as it has been shown that the decay time is greater than the age of the universe <cit.>. In the context of the standard model extended with neutrino masses via the canonical type-1 seesaw mechanism, the Yukawa couplings of the RH neutrinos also contribute to the RG running, just like y_t, and thereby we expect them to affect the electroweak vacuum stability negatively. But this effect is small because, as discussed before, in order to get the light neutrino masses one has to resort either to extremely small Yukawa couplings or to a very large Majorana mass scale (≈ 10^15 GeV), and the contribution to the running of λ is much smaller in both cases compared to that from y_t. However, for the TeV scale seesaw models with sizable Yukawa couplings, the stability of the vacuum can be altered considerably by the contribution from the neutrinos <cit.>. On the other hand, the singlet scalar can help in stabilizing the electroweak vacuum by adding a positive contribution which prevents the Higgs quartic coupling from becoming negative. The stability of the electroweak vacuum in the context of the singlet scalar extended SM with an unbroken Z_2 symmetry has been explored in <cit.>.

In this paper, we extend the SM by adding extra fermion as well as scalar singlets to explain the origin of neutrino mass and the existence of dark matter, respectively [For other studies that explain neutrino mass and dark matter using scalar singlets see for instance <cit.>.]. The candidate for dark matter is a real singlet scalar added to the SM with an additional Z_2 symmetry which ensures its stability. For the generation of neutrino mass at the TeV scale we consider two models. The first one is the general inverse seesaw model with three right handed neutrinos and three additional singlets. The second one is the minimal linear seesaw model. These two sectors are disconnected at low energy. However, the consideration of the stability of the electroweak vacuum and perturbativity induces a correlation between the two sectors. We study the stability of the electroweak vacuum in this model and explore the effect of the two opposing trends: singlet fermions trying to destabilize the vacuum further and the singlet Higgs trying to oppose this. We find the parameter space which is consistent with the constraints of relic density and neutrino oscillation data and at the same time can cure the instability of the electroweak vacuum. We present some benchmark points for which the electroweak vacuum is stable up to the Planck scale. In addition to absolute stability, we also explore the parameter region which gives metastability in the context of this model. We investigate the combined effect of these two sectors and obtain the allowed parameter space consistent with observations and vacuum stability/metastability and perturbativity.
The plan of the paper is as follows. In the next section we discuss the TeV scale singlet seesaw models, in particular the inverse seesaw and linear seesaw mechanisms. We also outline the diagonalization procedure that gives the low energy neutrino mass matrix. In section III we discuss the potential in the presence of a singlet scalar. Section IV presents the effective Higgs potential and the renormalization group (RG) evolution of the different couplings. In particular, we include the contribution from both fermion and scalar singlets in the effective potential. In section V we discuss the existing constraints on the fermion and the scalar sector couplings from experimental observations and also from perturbativity. We present the results in section VI and conclusions in section VII.

§ TEV SCALE SINGLET SEESAW MODELS

The most general low scale singlet seesaw scenario consists of adding m right handed neutrinos N_R and n gauge-singlet sterile neutrinos ν_s to the standard model. The lepton number for ν_s is chosen to be -1 and that for N_R is +1. For simplicity, we will work in a basis where the charged leptons are identified with their mass eigenstates. We can write the most general Yukawa part of the Lagrangian responsible for neutrino masses, before spontaneous symmetry breaking (SSB), as

-L_ν = l_L Y_ν H^c N_R + l_L Y_s H^c ν_s + N_R^c M_R ν_s + 1/2 ν_s^c M_μ ν_s + 1/2 N_R^c M_N N_R + h.c.

where l_L and H are the lepton and the Higgs doublets respectively, Y_ν and Y_s are the Yukawa coupling matrices, and M_N and M_μ are the symmetric Majorana mass matrices for N_R and ν_s respectively. Y_ν, Y_s, M_N and M_μ are of dimensions 3 × m, 3 × n, m × m and n × n respectively. Now, after symmetry breaking, the above equation gives

-L_mass = ν_L M_D N_R + ν_L M_s ν_s + N_R^c M_R ν_s + 1/2 ν^c_s M_μ ν_s + 1/2 N_R^c M_N N_R + h.c.

where M_D = Y_ν⟨ H ⟩ and M_s = Y_s ⟨ H ⟩. The neutral fermion mass matrix M can be defined as

-L_mass = 1/2 (ν_L N_R^c ν^c_s) [ 0 M_D M_s; M_D^T M_N M_R; M^T_s M_R^T M_μ ] [ ν_L^c; N_R; ν_s ] + h.c.

From this equation, we can get the variants of the singlet seesaw scenarios by setting certain terms to zero.

§.§ Inverse Seesaw Model (ISM)

In the inverse seesaw model, M_s and M_N are taken to be zero <cit.>. The mass scales of the three sub-matrices of M may naturally have a hierarchy M_R >> M_D >> M_μ, because the mass term M_R is not subject to the SU(2)_L symmetry breaking and the mass term M_μ violates the lepton number. Thus we can take M_μ to be naturally small by 't Hooft's naturalness criterion, since the expected degree of lepton number violation in nature is very small. In this paper, we consider a (3+3+3) scenario for the inverse seesaw model for generality, and hence all three sub-matrices M_R, M_D and M_μ are 3 × 3 matrices. The effective light neutrino mass matrix in the seesaw approximation is given by

M_light = M_D (M_R^T)^-1 M_μ M_R^-1 M_D^T

and in the heavy sector we will have three pairs of degenerate pseudo-Dirac neutrinos with masses of the order ∼ M_R ± M_μ. Note that the smallness of M_light is naturally attributed to the smallness of both M_μ and M_D/M_R. For instance, M_light∼𝒪(0.1) eV can easily be achieved for M_D/M_R∼ 10^-2 and M_μ∼𝒪(1) keV. Thus, the seesaw scale can be lowered considerably assuming Y_ν∼𝒪(0.1), such that M_D∼ 10 GeV and M_R∼ 1 TeV.
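This scaling is easy to verify numerically; a one-generation toy example (a sketch using the benchmark values quoted above, not a fit to data):

    # Toy one-generation inverse seesaw: m_light = M_D * M_R^-1 * M_mu * M_R^-1 * M_D
    M_D, M_R, M_mu = 10.0, 1000.0, 1e-6        # all in GeV (M_mu = 1 keV)
    m_light = M_D / M_R * M_mu / M_R * M_D
    print(m_light * 1e9, "eV")                  # 0.1 eV, as quoted above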
§.§ Minimal Linear Seesaw Model (MLSM)

In eqn. (<ref>), if we put M_N and M_μ to be zero and choose the hierarchy M_R >> M_D >> M_s, we get the linear seesaw model <cit.>. In this paper, we consider the minimal linear seesaw model, in which we add only one right handed neutrino N_R and one gauge-singlet sterile neutrino ν_s <cit.>. In such a case, the lightest neutrino mass is zero. The source of lepton number violation is the coupling Y_s, which is assumed to be very small. Here, Y_ν and Y_s are (3×1) Yukawa coupling matrices and the overall neutrino mass matrix is a symmetric matrix of dimensions 5 × 5. The light neutrino mass matrix to leading order is given by

M_light = M_D (M_R^T)^-1 M_s^T + M_s (M_R^T)^-1 M_D^T.

Assuming M_D∼ 100 GeV and M_R∼ 1 TeV, one needs Y_s∼ 10^-11 to get a light neutrino mass m_ν∼ 0.1 eV. The heavy neutrino sector will consist of a pair of degenerate neutrinos.

§.§ Diagonalization of the Seesaw Matrix and Non-unitary PMNS Matrix

The diagonalization procedure is the same for both cases. Here we illustrate it for the inverse seesaw case. The 9 × 9 inverse seesaw mass matrix can be rewritten as

M_ν = [ 0 M̂_D; M̂_D^T M̂_R ]

where M̂_D = (M_D 0) and M̂_R = [ 0 M_R; M_R^T M_μ ]. We can diagonalize the neutrino mass matrix using a 9 × 9 unitary matrix <cit.>,

U_0^T M_ν U_0 = M_ν^diag

where M_ν^diag = diag(m_1, m_2, m_3, M_1, ..., M_6), with mass eigenvalues m_i (i = 1, 2, 3) for the three light neutrinos and M_j (j = 1, ..., 6) for the six heavy neutrinos. Following the two-step diagonalization procedure, U_0 can be expressed (keeping terms up to order 𝒪(M̂_D^2/M̂_R^2)) as <cit.>

U_0 = W T = [ U_L V; S U_H ] = [ (1-1/2 ϵ) U_ν M̂_D^* (M̂_R^-1)^* U_R; -M̂_R^-1 M̂_D^T U_ν (1-1/2 ϵ') U_R ].

Here, U_L, V, S and U_H are 3 × 3, 3 × 6, 6 × 3 and 6 × 6 matrices respectively, which are not unitary. W is the matrix which brings the full 9 × 9 neutrino matrix into the block diagonal form,

W^T [ 0 M̂_D; M̂_D^T M̂_R ] W = [ M_light 0; 0 M_heavy ],

and T = diag(U_ν, U_R) diagonalizes the mass matrices in the light and heavy sectors appearing in the upper and lower blocks of the block diagonal matrix, respectively. In the seesaw limit, M_light is given by eqn. (<ref>) and M_heavy = M̂_R. In eqn. (<ref>), U_L corresponds to the PMNS matrix, which acquires a non-unitary correction (1-ϵ/2). The parameters ϵ and ϵ' characterize the non-unitarity and are given by

ϵ = M̂_D^* (M̂_R^-1)^* M̂_R^-1 M̂_D^T , ϵ' = M̂_R^-1 M̂_D^T M̂_D^* (M̂_R^-1)^*.
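The seesaw approximation can also be checked against an exact diagonalization; a one-generation toy in the (ν_L, N_R, ν_s) basis (a sketch with illustrative numbers, not the full 9 × 9 case):

    import numpy as np

    m_D, m_R, m_mu = 10.0, 1000.0, 1e-6          # GeV
    M = np.array([[0.0,  m_D,  0.0 ],
                  [m_D,  0.0,  m_R ],
                  [0.0,  m_R,  m_mu]])
    eig = np.linalg.eigvalsh(M)                   # M is real and symmetric
    light = eig[np.argmin(np.abs(eig))]
    print(abs(light) * 1e9, "eV")                 # ~0.1 eV light state
    print(m_D**2 * m_mu / m_R**2 * 1e9, "eV")     # seesaw formula, same value
    # The two remaining eigenvalues form the pseudo-Dirac pair near +/- m_R.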
§ SCALAR POTENTIAL OF THE MODEL

As mentioned earlier, in addition to the extra fermions, we also add an extra real scalar singlet S to the standard model. The potential for the scalar sector with an extra Z_2 symmetry under S → -S is given by

V(S, H) = m^2 H^† H + λ (H^† H)^2 + κ/2 H^† H S^2 + m_S^2/2 S^2 + λ_S/24 S^4.

In this model, we take the vacuum expectation value (vev) of S to be 0, so that the Z_2 symmetry is not broken. The standard model scalar doublet H can be written as

H = 1/√(2) [ G^+; v + h + i G^0 ]

where the vev v = 246 GeV. Thus, the scalar sector consists of two particles h and S, where h is the standard model Higgs boson with a mass of ∼ 126 GeV, and the mass of the extra scalar is given by

M_DM^2 = m_S^2 + κ/2 v^2.

As the Z_2 symmetry is unbroken up to the Planck scale M_pl = 1.22 × 10^19 GeV, the potential can have minima only along the Higgs field direction, and this symmetry also prevents the extra scalar from acquiring a vacuum expectation value. This extra scalar field does not mix with the SM Higgs, and an odd number of these extra scalars does not couple to the standard model particles and the new fermions. As a result, this scalar is stable and serves as a viable weakly interacting massive dark matter particle. The scalar field S can annihilate to the SM particles, as well as to the new fermions, only via Higgs exchange; it is therefore called a Higgs portal dark matter.

§ EFFECTIVE HIGGS POTENTIAL AND RG EVOLUTION OF THE COUPLINGS

The effective Higgs potential and the renormalization group equations are the same for both the linear and the inverse seesaw models. The two models differ only by the way in which a small lepton number violation is introduced in them, whose effect can be neglected in the RG evolution. So, effectively, the RGEs are the same in both models, the only difference being the dimensions of the Yukawa coupling matrices and the number of heavy neutrinos present in the model.

§.§ Effective Higgs Potential

The tree level Higgs potential in the standard model is given by

V(H) = -m^2 H^† H + λ (H^† H)^2.

This will get corrections from higher order loop diagrams of SM particles. In the presence of the extra singlets, the effective potential will get additional contributions from the extra scalar and the fermions. Thus, we have the one-loop effective Higgs potential V_1(h) in our model as

V_1^SM+S+ν(h) = V_1^SM(h) + V_1^S(h) + V_1^ν(h)

where the one-loop contribution to the effective potential due to the standard model particles is given by <cit.>,

V_1^SM(h) = ∑_i n_i/64π^2 M_i^4(h) [ ln M_i^2(h)/μ^2(t) - c_i ].

Here, the index i is summed over all SM particles, and c_H,G,f = 3/2 and c_W,Z = 5/6, where H, G, f, W and Z stand for the Higgs boson, the Goldstone bosons, the fermions, and the W and Z bosons respectively; M_i(h) can be expressed as

M_i^2(h) = κ_i(t) h^2(t) - κ'_i(t).

The values of n_i, κ_i and κ'_i are given in eqn. (4) of <cit.>. Here h = h(t) denotes the classical value of the Higgs field, t being the dimensionless parameter related to the running energy scale μ as t = log(μ/M_Z). The one-loop contribution due to the extra scalar is given by <cit.>

V_1^S(h) = 1/64π^2 M_S^4(h) [ ln M_S^2(h)/μ^2(t) - 3/2 ]

where

M_S^2(h) = m_S^2(t) + κ(t) h^2(t)/2.

The contribution of the extra neutrino Yukawa couplings to the one-loop effective potential can be written as <cit.>,

V_1^ν(h) = -((M'^† M')_ii)^2/32π^2 [ ln (M'^† M')_ii/μ^2(t) - 3/2 ] - ((M' M'^†)_jj)^2/32π^2 [ ln (M' M'^†)_jj/μ^2(t) - 3/2 ].

Here M' = M_D for the inverse seesaw and M' = (M_D M_s) for the linear seesaw; i and j run over the three light neutrinos and the m heavy neutrinos to which the light neutrinos are coupled via the Yukawa couplings, respectively. In our analysis, we have taken two-loop (one-loop) contributions to the effective potential from the standard model particles (extra singlet scalar and fermions). For h(t) >> v, the effective potential can be approximated as

V_eff^SM+S+ν = λ_eff(h) h^4/4 with λ_eff(h) = λ_eff^SM(h) + λ_eff^S(h) + λ_eff^ν(h)

where the standard model contribution is

λ_eff^SM(h) = e^4Γ(h) [ λ(μ=h) + λ_eff^(1)(μ=h) + λ_eff^(2)(μ=h) ].

λ_eff^(1) and λ_eff^(2) are the one- and two-loop contributions respectively, and their expressions can be found in <cit.>. The contributions due to the extra scalar and the neutrinos are given by

λ_eff^S(h) = e^4Γ(h) [ κ^2/64π^2 ( ln κ/2 - 3/2 ) ]

and

λ_eff^ν(h) = -e^4Γ(h)/32π^2 [ ((Y'_ν^† Y'_ν)_ii)^2 ( ln (Y'_ν^† Y'_ν)_ii/2 - 3/2 ) + ((Y'_ν Y'_ν^†)_jj)^2 ( ln (Y'_ν Y'_ν^†)_jj/2 - 3/2 ) ]

where

Γ(h) = ∫_M_t^h γ(μ) d ln μ.

Here γ(μ) is the anomalous dimension of the Higgs field and, in eqn. (<ref>), Y'_ν = Y_ν for the inverse seesaw and Y'_ν = (Y_ν Y_s) for the linear seesaw.
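As a concrete illustration, the scalar piece of the one-loop potential is straightforward to evaluate. The sketch below (with illustrative parameter values) also checks that for h >> v it reduces to the λ_eff^S(h) expression above, up to the e^4Γ(h) wave-function factor:

    import numpy as np

    def V1_S(h, mS2, kappa, mu):
        # one-loop singlet scalar contribution in the MS-bar scheme
        MS2 = mS2 + 0.5 * kappa * h**2
        return MS2**2 / (64.0 * np.pi**2) * (np.log(MS2 / mu**2) - 1.5)

    h = 1.0e10                                # illustrative field value in GeV
    kappa = 0.304
    print(V1_S(h, mS2=500.0**2, kappa=kappa, mu=h) / (h**4 / 4.0))
    print(kappa**2 / (64.0 * np.pi**2) * (np.log(kappa / 2.0) - 1.5))   # same number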
The contribution of the singlet scalar to the anomalous dimension is zero <cit.>, and the contribution from the right-handed neutrinos at one loop is given in eqn. (<ref>).

§.§ Renormalization Group evolution of the couplings from M_t to M_Planck

We know that the couplings in a quantum field theory get corrections from higher-order loop diagrams and, as a result, the couplings run with the renormalization scale. For a coupling C, we have the renormalization group equation (RGE),

μ dC/dμ = ∑_i β_C^(i)/(16π^2)^i

where i stands for the i^th loop. We have evaluated the SM coupling constants at the top quark mass scale and then run them using the RGEs from M_t to M_Planck. For this, we have taken into account the various threshold corrections at M_t <cit.>. All couplings are expressed in terms of the pole masses <cit.>. We have used one-loop RGEs to calculate g_1(M_t) and g_2(M_t) [Our results are not changed even if we use the two-loop RGEs for g_1 and g_2.]. For g_3(M_t), we use the three-loop RGE running of α_s, where we have neglected the sixth quark contribution and included the effect of the top quark using an effective field theory approach. We have also taken the leading term in the four-loop RGE for α_s. The mismatch between the top pole mass and the MS renormalized coupling has been included. This is given by

y_t(M_t) = √(2) M_t/v (1 + δ_t(M_t))

where δ_t(M_t) is the matching correction for y_t at the top pole mass, and similarly for λ(M_t) we have

λ(M_t) = M_H^2/2v^2 (1 + δ_H(M_t)).

We have included the QCD corrections up to three loops <cit.>, the electroweak corrections up to one loop <cit.>, and the 𝒪(αα_s) corrections to the matching of the top Yukawa coupling and the top pole mass <cit.>. Using these corrections, we have reproduced the couplings at M_t as in references <cit.>. Now, to evaluate the couplings from M_t to M_Planck, we have used three-loop RGEs for the standard model couplings <cit.>, two-loop RGEs for the extra scalar couplings <cit.> and one-loop RGEs for the extra neutrino Yukawa couplings <cit.> [Our results do not change with the inclusion of the two-loop RGEs of the neutrino Yukawa couplings, which has been checked using SARAH <cit.>.]. The one-loop RGEs for the scalar quartic couplings and the neutrino Yukawa coupling in our model are given as

β_λ = 27/100 g_1^4 + 9/10 g_1^2 g_2^2 + 9/4 g_2^4 - 9/5 g_1^2 λ - 9 g_2^2 λ + 12λ^2 + κ^2 + 4Tλ - 4Y

β_κ = -9/10 g_1^2 κ - 9/2 g_2^2 κ + 6λκ + 4κ^2 + 2Tκ

β_λ_S = 3λ_S^2 + 12κ^2

β_Y_ν = Y_ν ( 3/2 Y_ν^† Y_ν - 3/2 Y_l^† Y_l + T - 9/20 g_1^2 - 9/4 g_2^2 )

where

T = Tr( 3Y_u^† Y_u + 3Y_d^† Y_d + Y_l^† Y_l + Y_ν^† Y_ν )

Y = Tr( 3(Y_u^† Y_u)^2 + 3(Y_d^† Y_d)^2 + (Y_l^† Y_l)^2 + (Y_ν^† Y_ν)^2 ).

The effect of the β-functions of the new particles enters the SM RGEs at their effective masses.
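The one-loop system above is simple enough to integrate directly. The sketch below evolves the couplings from M_t to M_Planck with t = ln(μ/M_t), supplementing the quoted β functions with the standard one-loop SM running of g_1, g_2, g_3 and y_t; Tr[Y_ν^† Y_ν] is approximated by a single dominant eigenvalue yn2, and the M_t-scale inputs are illustrative numbers close to the threshold-corrected values, not the precise matched couplings used in the paper:

    import numpy as np
    from scipy.integrate import solve_ivp

    LOOP = 16.0 * np.pi**2

    def rges(t, c):
        g1, g2, g3, yt, yn2, lam, kap, lamS = c
        T = 3.0 * yt**2 + yn2                     # top and neutrino Yukawas only
        Y = 3.0 * yt**4 + yn2**2
        dg1 = (41.0 / 10.0) * g1**3 / LOOP
        dg2 = -(19.0 / 6.0) * g2**3 / LOOP
        dg3 = -7.0 * g3**3 / LOOP
        dyt = yt * (4.5 * yt**2 - 8.0 * g3**2 - 2.25 * g2**2 - 0.85 * g1**2) / LOOP
        dyn2 = 2.0 * yn2 * (1.5 * yn2 + T - 0.45 * g1**2 - 2.25 * g2**2) / LOOP
        dlam = (0.27 * g1**4 + 0.9 * g1**2 * g2**2 + 2.25 * g2**4
                - 1.8 * g1**2 * lam - 9.0 * g2**2 * lam
                + 12.0 * lam**2 + kap**2 + 4.0 * T * lam - 4.0 * Y) / LOOP
        dkap = kap * (-0.9 * g1**2 - 4.5 * g2**2 + 6.0 * lam + 4.0 * kap + 2.0 * T) / LOOP
        dlamS = (3.0 * lamS**2 + 12.0 * kap**2) / LOOP
        return [dg1, dg2, dg3, dyt, dyn2, dlam, dkap, dlamS]

    c0 = [0.46, 0.65, 1.16, 0.94, 0.10, 0.127, 0.304, 0.10]   # at mu = M_t
    t_pl = np.log(1.22e19 / 173.1)
    sol = solve_ivp(rges, (0.0, t_pl), c0, rtol=1e-8, atol=1e-10, dense_output=True)
    print("lambda(M_Planck) =", sol.sol(t_pl)[5])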
§ EXISTING BOUNDS ON THE FERMIONIC AND THE SCALAR SECTORS

For the vacuum stability analysis, we need to find the Yukawa and scalar couplings that satisfy the existing experimental and theoretical constraints. These bounds are discussed below.

§.§ Bounds on the fermionic Sector

∙ Cosmological constraint on the sum of light neutrino masses

The Planck 2015 results put an upper limit on the sum of the active light neutrino masses <cit.>,

Σ = m_1 + m_2 + m_3 < 0.23 eV.

∙ Constraints from Oscillation data

We use the standard parametrization of the PMNS matrix, in which

U_ν = [ c_12 c_13 s_12 c_13 s_13 e^-iδ; -c_23 s_12 - s_23 s_13 c_12 e^iδ c_23 c_12 - s_23 s_13 s_12 e^iδ s_23 c_13; s_23 s_12 - c_23 s_13 c_12 e^iδ -s_23 c_12 - c_23 s_13 s_12 e^iδ c_23 c_13 ] P

where c_ij = cosθ_ij, s_ij = sinθ_ij and the phase matrix P = diag(1, e^iα_2, e^i(α_3 + δ)) contains the Majorana phases. The global analysis <cit.> of neutrino oscillation measurements with three light active neutrinos gives the oscillation parameters in their 3σ ranges, for both normal hierarchy (NH), for which m_3 > m_2 > m_1, and inverted hierarchy (IH), for which m_2 > m_1 > m_3, as below:

⋆ Mass squared differences

Δm_21^2/10^-5 eV^2 = (7.03 → 8.09) ; Δm_31^2/10^-3 eV^2 = (2.407 → 2.643) NH ; Δm_31^2/10^-3 eV^2 = (-2.635 → -2.399) IH

⋆ Mixing angles

sin^2θ_12 = (0.271 → 0.345) ; sin^2θ_23 = (0.385 → 0.635) NH, (0.393 → 0.640) IH ; sin^2θ_13 = (0.01934 → 0.02392) NH, (0.01953 → 0.02408) IH

∙ Constraints on the non-unitarity of U_PMNS = U_L

The analysis of electroweak precision observables, along with various other low energy precision observables, puts bounds on the non-unitarity of the light neutrino mixing matrix U_L <cit.>. At 90% confidence level,

|U_L U_L^†| = [ 0.9979-0.9998 <10^-5 <0.0021; <10^-5 0.9996-1.0 <0.0008; <0.0021 <0.0008 0.9947-1.0 ].

This also takes care of the constraints coming from the various charged lepton flavor violating decays l_i → l_j γ, among which Br(μ → eγ) gives the most severe bound <cit.>,

Br(μ → eγ) < 4.2 × 10^-13.

∙ Bounds on the heavy neutrino masses

The search for heavy singlet neutrinos at LEP by the L3 collaboration in the decay channel N → eW showed no evidence of a singlet neutrino in the mass range between 80 GeV (|V_αi|^2 ≤ 2 × 10^-5) and 205 GeV (|V_αi|^2 ≤ 1) <cit.>, V_αi being the mixing matrix elements between the heavy and light neutrinos. Heavy singlet neutrinos in the mass range from 3 GeV up to the Z-boson mass (m_Z) have also been excluded by the LEP experiments from Z-boson decays, for mixings down to |V_αi|^2 ≈ 10^-5 <cit.>. These constraints are taken care of in our analysis by keeping the mass of the lightest heavy neutrino greater than or equal to 200 GeV.

§.§ Bounds on the Scalar Sector

∙ Constraints on scalar potential couplings from perturbativity and unitarity

For the radiatively improved Lagrangian of our model to be perturbative, we should have <cit.>

λ(Λ) < 4π/3 ; |κ(Λ)| < 8π ; |λ_S(Λ)| < 8π

at all scales, where the values of the couplings at any scale Λ are evaluated using the RG equations. The parameters of the scalar potential (see eqn. (<ref>)) of this model are also constrained by the unitarity of the scattering matrix (S-matrix). At very high field values, one can obtain the scattering matrix by using the various scalar-scalar, gauge boson-gauge boson, and scalar-gauge boson scattering amplitudes. Using the equivalence theorem <cit.>, we have reproduced the scattering matrix (S-matrix) for this model <cit.>. Unitarity demands that the eigenvalues of the S-matrix should be less than 8π. The unitarity bounds are given by

λ ≤ 8π and |12λ + λ_S ± √(16κ^2 + (-12λ + λ_S)^2)| ≤ 32π.

∙ Dark matter constraints

The parameter space for the scalar sector should also satisfy the dark matter relic density constraint imposed by Planck and WMAP <cit.>,

Ω_DM h^2 = 0.1198 ± 0.0026.

In addition, the invisible Higgs decay width and the recent direct detection experiments, in particular the LUX-2016 <cit.> data and the indirect Fermi-LAT data <cit.>, restrict the otherwise arbitrary Higgs portal coupling and the dark matter mass <cit.>. Since the extra fermions are heavy (≳ 200 GeV), for low dark matter mass (around 60 GeV) the dominant (more than 75%) contribution to the relic density is from the SS → bb̅ channel. The channels SS → VV^* also contribute to the relic density, where V stands for the vector bosons W and Z, and V^* indicates a virtual particle which can decay into SM fermions.
In this mass region, the value of the Higgs portal coupling κ is 𝒪(10^-2) to get the relic density in the right ballpark while simultaneously satisfying the other experimental bounds. However, this region is not of much interest to us, since such a small coupling will not contribute much to the running of λ and hence will not affect the stability of the EW vacuum much. The LUX-2016 data <cit.> have ruled out the dark matter mass region ∼70-500 GeV. If we consider M_DM >> M_t, the annihilation cross-section is proportional to κ^2/M_DM^2, which ensures that the relic density band in the κ-M_DM plane <cit.> is a straight line. In this region, one can get the right relic density if the ratio of the dark matter mass to the Higgs portal coupling κ is ∼ 3300. In this case, the dominant contributions to the dark matter annihilation are the channels SS → hh, tt̅, VV. We use FeynRules <cit.> along with micrOMEGAs <cit.> to compute the relic density of the scalar DM. We have checked that the contribution of annihilation into the extra fermions is very small. However, this could be significant for dark matter masses ≳ 2.5 TeV, provided the Yukawa couplings are large enough. But, in the stability analysis discussed in section <ref>, we will see that a dark matter mass ≳ 2.5 TeV requires a value of κ ≳ 0.65, which violates the perturbativity bounds before the Planck scale. Thus, we consider the dark matter mass in the range ∼ 500 GeV - 2.5 TeV, with κ in the range ∼ 0.15 to 0.65. It is to be noted that, in the presence of the singlet fermions, the value of κ(M_Z), and hence the M_DM, for which perturbativity is not obeyed will also depend upon the value of Tr[Y_ν^† Y_ν]. This will be discussed in the next section.

§ RESULTS

In this section, we present the results of our stability analysis of the electroweak vacuum in the two seesaw scenarios. We confine ourselves to the normal hierarchy. The results for the inverted hierarchy are not expected to be very different <cit.>. We have used the package SARAH <cit.> to do the RG analysis in our work.

§.§ Inverse Seesaw Model

For the inverse seesaw model, the input parameters are the entries of the matrices Y_ν, M_R and M_μ. Here Y_ν is a complex 3 × 3 matrix, M_R is a real 3 × 3 matrix and M_μ is a 3 × 3 diagonal matrix with real entries. We vary the entries of the various mass matrices in the ranges 10^-2 keV < M_μ < 1 keV and 0 < M_R < 5 × 10^4 GeV. This implies a heavy neutrino mass of at most a few TeV. With these input parameters, we search for parameter sets consistent with the low energy data using the downhill simplex method <cit.>. We present in table <ref> some representative outputs consistent with the data for three benchmark points. In this table, Tr[Y_ν Y_ν^†] is an input. As a consistency check, we also give the value of Br(μ → eγ).

§.§.§ Vacuum Stability

In fig.(<ref>), we display the running of the couplings for various benchmark points in the ISM. In fig.(<ref>), we have shown the variation in the running of the Higgs quartic coupling λ for different values of Tr[Y_ν^† Y_ν] (0, 0.15 and 0.30) for a fixed value of the Higgs portal coupling κ = 0.304. We have chosen the DM mass M_DM = 1000 GeV to get the relic density in the right ballpark. As λ_S does not alter the relic density, we have fixed its value at 0.1 for all the plots in this paper.
We can see that for Tr[Y_ν^† Y_ν] = 0, i.e., without the right-handed neutrinos, the EW vacuum remains absolutely stable up to the Planck scale (green line), while for large values of Tr[Y_ν^† Y_ν] the EW vacuum moves towards the instability region (the Higgs quartic coupling becomes negative around Λ_I ∼ 10^10 GeV (red line) and Λ_I ∼ 10^8 GeV (black line)). In fig.(<ref>), we plot the running of λ for a fixed value of Tr[Y_ν^† Y_ν] = 0.1 and different sets of κ and M_DM. It is seen that for a larger value of κ = 0.45 with M_DM = 1500 GeV, the EW vacuum remains stable up to the Planck scale (purple line). For κ = 0.304 with M_DM = 1000 GeV, the quartic coupling λ (red line) becomes negative around Λ_I ∼ 10^11 GeV, and in the absence of the singlet scalar field, i.e., for κ = 0, λ_S = 0 (blue line), λ becomes negative around Λ_I ∼ 10^9 GeV and the vacuum goes to the metastability region. In figs.(<ref>) and (<ref>), we have shown the running of all three scalar quartic couplings λ, κ and λ_S and of Tr[Y_ν^† Y_ν] for (M_DM, κ) = (1000 GeV, 0.304) and (1500 GeV, 0.456) respectively. It can be seen that the values of λ_S and κ increase considerably with the energy scale and can reach the perturbativity bound at the Planck scale, depending upon the initial values of κ and λ_S at M_Z. Here, for λ_S = 0.1, the maximum value of κ allowed by perturbativity is 0.58. The value of Tr[Y_ν^† Y_ν] increases only slightly with the energy scale, and the value of λ_S increases faster for larger values of κ.

§.§.§ Tunneling Probability and Phase Diagrams

The present central values of the SM parameters, especially the top Yukawa coupling y_t and the strong coupling constant α_s, with a Higgs mass M_h ≈ 125.7 GeV, suggest that the beta function of the Higgs quartic coupling, β_λ (≡ dλ/d ln μ), goes from negative to positive around 10^15 GeV <cit.>. This implies that there is an extra, deeper minimum situated around that scale. So there is a finite probability that the electroweak vacuum might tunnel into that true (deeper) vacuum. But this tunneling probability is not large enough, and hence the lifetime of the EW vacuum remains larger than the age of the universe. This implies that the EW vacuum is metastable in the SM. The expression for the tunneling probability at zero temperature is given by <cit.>,

𝒫_0 = V_U Λ_B^4 exp( -8π^2/(3|λ(Λ_B)|) )

where Λ_B is the energy scale at which the action of the Higgs potential is minimum, and V_U is the volume of the past light cone, taken as τ_U^4, where τ_U is the age of the universe (τ_U = 4.35 × 10^17 sec) <cit.>. In this work we have neglected the loop corrections and the gravitational correction to the action of the Higgs potential <cit.>. For the vacuum to be metastable, we should have 𝒫_0 < 1, which implies that <cit.>,

0 > λ(μ) > λ_min(Λ_B) = -0.06488/(1 - 0.00986 ln(v/Λ_B)),

whereas the situation λ(μ) < λ_min(Λ_B) leads to an unstable EW vacuum. In these regions, κ and λ_S should always be positive to keep the scalar potential bounded from below <cit.>. In our model, the EW vacuum shifts towards stability/instability depending upon the new physics parameter space for the central values M_h = 125.7 GeV, M_t = 173.1 GeV and α_s = 0.1184, and there might be an extra minimum around 10^12-17 GeV. In fig.(<ref>), we have given the phase diagram in the Tr[Y_ν^† Y_ν]-κ plane. The line separating the stable region and the metastable region is obtained when the two vacua are at the same depth, i.e., λ(μ) = β_λ(μ) = 0.
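Since 𝒫_0 involves both astronomically large and small factors, it is safest to evaluate its logarithm. A short sketch of the two expressions above, with Λ_B = 10^17 GeV taken as an illustrative scale:

    import numpy as np

    v = 246.0
    tau_U = 4.35e17 / 6.582e-25          # age of the universe converted to GeV^-1

    def lam_min(LB):
        return -0.06488 / (1.0 - 0.00986 * np.log(v / LB))

    def log10_P0(lam, LB):
        # log10 of P0 = (tau_U * Lambda_B)^4 exp(-8 pi^2 / (3 |lam|))
        return 4.0 * np.log10(tau_U * LB) - 8.0 * np.pi**2 / (3.0 * abs(lam)) / np.log(10.0)

    LB = 1.0e17
    print(lam_min(LB))                   # ~ -0.049: metastable/unstable boundary
    print(log10_P0(-0.02, LB))           # ~ -336: P0 << 1, the vacuum is metastable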
The unstable and the metastable regions are separated by the boundary line where β_λ(μ) = 0 along with λ(μ) = λ_min(Λ_B), as defined in eqn. (<ref>). For simplicity, we have plotted fig.(<ref>) (and also fig.(<ref>)) by fixing eight of the nine entries of the 3 × 3 complex matrix Y_ν and varying only the (Y_ν)_33 element, to get a smooth phase diagram. From fig.(<ref>), it can be seen that values of κ beyond ∼ 0.58 are disallowed by the perturbativity bounds and those below ∼ 0.16 are disallowed by the direct detection bounds from LUX-2016 <cit.>. Note that the vacuum stability analysis of the inverse seesaw model done in reference <cit.> had found that the parameter space with Tr[Y_ν^† Y_ν] > 0.4 was excluded by the vacuum metastability constraints, whereas in our case fig.(<ref>) shows that the parameter space with Tr[Y_ν^† Y_ν] ≳ 0.25 is excluded for the case when there is no extra scalar. A possible reason could be that we have kept the maximum value of the heavy neutrino mass around a few TeV, whereas the authors of <cit.> had considered heavy neutrinos as heavy as 100 TeV. Obviously, considering larger thresholds would allow us to consider larger values of Tr[Y_ν^† Y_ν], as the corresponding couplings would enter the RG running only at a higher scale. Another difference with the analysis of <cit.> is that we have fixed 8 of the 9 entries of the Yukawa coupling matrix Y_ν; varying all 9 Yukawa couplings would give us more freedom, and the result is expected to change. The main result that we deduce from this plot is the effect of κ on the maximum allowed value of Tr[Y_ν^† Y_ν], which increases from 0.26 to 0.4 for a value of κ as large as 0.6. In addition, we see that the upper bound on κ(M_Z) from perturbativity at the Planck scale decreases from 0.64 to 0.58 as the value of Tr[Y_ν^† Y_ν] changes from 0 to 0.44. This can be explained from the expression of β_κ in eqn. (<ref>), which shows that Tr[Y_ν^† Y_ν] affects the running of κ positively through the quantity T. Since M_DM ∼ 3300 κ for M_DM >> M_t, the mass of dark matter for which perturbativity is valid decreases with increasing values of the Yukawa couplings.

§.§.§ Confidence level of vacuum stability

We have seen that the stability of the electroweak vacuum changes due to the presence of new physics; hence it becomes important to demonstrate the change in the confidence level at which stability is excluded or allowed (one-sided) <cit.>. In particular, this provides a quantitative measurement of (meta)stability in the presence of new physics. In fig.(<ref>), we graphically show how the confidence level at which the stability of the electroweak vacuum is allowed/excluded depends on the new Yukawa couplings of the heavy fermions for the inverse seesaw model in the presence of the extra scalar (dark matter) field. We have plotted the dependence of the confidence level on the trace of the Yukawa coupling, Tr[Y_ν^† Y_ν], for a fixed value of the Higgs portal coupling κ = 0.304 in fig.(<ref>). Here, the dark matter mass M_DM = 1000 GeV is dictated by κ to obtain the correct relic density. A similar plot with a higher value of κ = 0.455 and a dark matter mass M_DM = 1500 GeV is shown in fig.(<ref>). In this case the electroweak vacuum is absolutely stable for a larger parameter space.
For a particular set of values of the model parameters M_h = 125.7 GeV, M_t = 173.1 GeV, α_s(M_Z) = 0.1184 and κ, the confidence level (one-sided) at which the electroweak vacuum is absolutely stable (green region) decreases with increasing Tr[Y_ν^† Y_ν] and becomes zero for Tr[Y_ν^† Y_ν] = 0.06 in fig.(<ref>) and Tr[Y_ν^† Y_ν] = 0.20 in fig.(<ref>). The confidence level at which the absolute stability of the electroweak vacuum is excluded (one-sided) increases with the trace of the Yukawa coupling in the yellow region.

§.§ Minimal Linear Seesaw Model

In the minimal linear seesaw case, the Yukawa coupling matrices Y_ν and Y_s can be completely determined in terms of the oscillation parameters, apart from the overall coupling constants y_ν and y_s respectively <cit.>. For normal hierarchy, in the MLSM, the Yukawa coupling matrices Y_ν and Y_s can be parametrized as

Y_ν = y_ν/√(2) ( √(1+ρ) U_3^† + e^iπ/2 √(1-ρ) U_2^† )

Y_s = y_s/√(2) ( √(1+ρ) U_3^† + e^iπ/2 √(1-ρ) U_2^† )

where

ρ = (√(1+r) - √(r))/(√(1+r) + √(r)).

Here, the U_i are the columns of the unitary PMNS matrix U_ν, and r is the ratio of the solar and the atmospheric mass squared differences. This parametrization makes the vacuum stability analysis in the minimal linear seesaw model much easier, since there are only two independent parameters in the fermion sector, y_ν and M_N, where M_N is the degenerate mass of the two heavy neutrinos (the value of y_s being very small, 𝒪(10^-11)). A detailed analysis has already been performed in reference <cit.>. Here, we are interested in the interplay between the Z_2 odd singlet scalar and the singlet fermions in the vacuum stability analysis.
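For the normal hierarchy, this parametrization is easy to implement. The sketch below builds U_ν from central values of the mixing angles quoted in section V (with δ and the Majorana phases set to zero for simplicity, so the columns are real) and illustrative overall couplings y_ν and y_s:

    import numpy as np

    s12, s13, s23 = np.sqrt([0.306, 0.0216, 0.51])     # central NH sin^2 values
    c12, c13, c23 = np.sqrt(1.0 - np.array([s12, s13, s23])**2)
    U = np.array([[c12*c13, s12*c13, s13],
                  [-c23*s12 - s23*s13*c12, c23*c12 - s23*s13*s12, s23*c13],
                  [s23*s12 - c23*s13*c12, -s23*c12 - c23*s13*s12, c23*c13]])

    r = 7.5e-5 / 2.5e-3                                # solar over atmospheric splitting
    rho = (np.sqrt(1.0 + r) - np.sqrt(r)) / (np.sqrt(1.0 + r) + np.sqrt(r))

    y_nu, y_s = 0.1, 1.0e-11                           # assumed overall couplings
    U2, U3 = U[:, 1], U[:, 2]                          # real here, so U_i^dagger -> U_i
    Ynu = y_nu / np.sqrt(2.0) * (np.sqrt(1.0 + rho) * U3 + 1j * np.sqrt(1.0 - rho) * U2)
    Ys  = y_s  / np.sqrt(2.0) * (np.sqrt(1.0 + rho) * U3 + 1j * np.sqrt(1.0 - rho) * U2)
    print(rho, np.abs(Ynu))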
In fig.(<ref>), we have plotted the running of the Higgs quartic coupling λ with the energy scale μ up to the Planck scale. Figs.(<ref>) and (<ref>) show the running of λ for different values of κ (0.0, 0.304, 0.456) and M_DM (0, 1000 GeV, 1500 GeV), for M_N = 200 GeV and M_N = 10^4 GeV respectively, for a fixed value of y_ν^2 = 0.1. Comparing these two plots, we can see that λ tends to go to the instability region faster for smaller values of the heavy neutrino mass. So, the EW vacuum is more stable for larger values of M_N, because the effect of the extra singlet fermions on the running of λ enters at a higher scale. We also find that as the value of κ increases from 0 to 0.304, the electroweak vacuum becomes metastable at a higher value of the energy scale. For κ = 0.456, the electroweak vacuum remains stable up to the Planck scale even in the presence of the singlet fermions. Figs.(<ref>) and (<ref>) display the running of λ for different values of y_ν^2 (0.0, 0.15, 0.3) and for fixed values of κ = 0.304 and M_DM = 1000 GeV, for M_N = 200 GeV and M_N = 10^4 GeV respectively. It can be seen from these plots that the larger the value of y_ν, the earlier λ becomes negative and the stronger the tendency of the EW vacuum towards instability, as expected. We note from these two figures that, for κ = 0.304, absolute stability is attained only for y_ν = 0, even in the presence of the singlet scalar. In fig.(<ref>), we have shown the phase diagram in the y_ν-M_N plane. The stable (green), unstable (red) and metastable (yellow) regions are shown, and it can be seen that the higher the value of M_N, the larger the values of y_ν allowed by vacuum stability, as discussed earlier. The unstable and the metastable regions are separated by a solid red line for the central values of the SM parameters, M_h = 125.7 GeV, M_t = 173.1 GeV and α_s = 0.1184. The red dashed lines represent the 3σ variation of the top quark mass. However, we get a significant stable region for M_h = 125.7 GeV, M_t = 171.3 GeV and α_s = 0.1191, which corresponds to the solid line separating the stable and the metastable regions. The region on the left side of the blue dotted line is disallowed by the LFV constraints for the normal hierarchy of light neutrino masses. Fig.(<ref>) is drawn in the absence of the extra scalar and fig.(<ref>) is drawn for (κ, M_DM) = (0.304, 1000 GeV). Clearly, there is a larger stable region in the presence of the extra scalar, and the boundary line separating the metastable and the unstable regions also shifts upwards in this case. In fig.(<ref>), we have shown the phase diagrams in the y_ν-κ plane for two different values of the heavy neutrino mass: fig.(<ref>) for M_N = 200 GeV and fig.(<ref>) for M_N = 10^4 GeV. Here also, the red dashed lines represent the 3σ variation of the top quark mass. It can clearly be seen that for a higher heavy neutrino mass, the unstable region shifts towards larger values of y_ν, a result that should be expected from fig.(<ref>). In this model, the theory becomes non-perturbative (grey) for κ = 0.64 with y_ν = 0.05. The maximum value of κ allowed by perturbativity at the Planck scale decreases with increasing y_ν, as we have also seen for the inverse seesaw case. The region κ ≲ 0.16 is excluded by the recent direct detection experiment LUX.

§ CONCLUSIONS

In this paper we have analysed the stability of the electroweak vacuum in the context of TeV scale inverse seesaw and minimal linear seesaw models extended with a scalar singlet dark matter. We have studied the interplay between the contributions of the extra singlet scalar and the singlet fermions to the EW vacuum stability. We have shown that the coupling constants in these two seemingly disconnected sectors can be correlated at high energy by the vacuum stability/metastability and perturbativity constraints. In the inverse seesaw scenario, the EW vacuum stability analysis is done after fitting the model parameters to the neutrino oscillation data and the non-unitarity constraints on U_PMNS (including the LFV constraints from μ → eγ). For the minimal linear seesaw model, the Yukawa matrix Y_ν can be fully parametrized in terms of the oscillation parameters, except for an overall coupling constant y_ν which can be constrained from vacuum stability and LFV. We have taken heavy neutrino masses of order up to a few TeV for both seesaw models. An extra Z_2 symmetry is imposed to ensure that the scalar particle serves as a viable dark matter candidate. We include all the experimental and theoretical bounds coming from the constraints on the relic density and dark matter searches, as well as unitarity and perturbativity up to the Planck scale. For masses of the new fermions from 200 GeV to a few TeV, the annihilation cross section to the extra fermions is very small for a dark matter mass of 𝒪(1-2) TeV. We have also checked that the theory violates perturbativity before the Planck scale for a DM mass ≳ 2.5 TeV. In addition, we find that the value of the Higgs portal coupling κ(M_Z) for which perturbativity is violated at the Planck scale decreases with increasing values of the Yukawa couplings of the new fermions. For M_DM >> M_t, one can approximately write M_DM ∼ 3300 κ. This implies that with increasing Yukawa coupling, the mass of dark matter for which perturbativity is maintained also decreases.
Thus the RGE running induces a correlation between the couplings of the two sectors through the perturbativity constraints. It is well known that the electroweak vacuum of the SM is in the metastable region. The presence of the fermionic Yukawa couplings in the context of TeV scale seesaw models drives the vacuum more towards instability, while the singlet scalar tries to arrest this tendency. Overall, we find that it is possible to find parameter space for which the electroweak vacuum remains absolutely stable for both the inverse and linear seesaw models in the presence of the extra scalar particle. We find an upper bound from metastability on Tr[Y_ν^† Y_ν] of 0.25 for κ = 0, which increases to 0.4 for κ = 0.6, in the inverse seesaw model. We have also seen that, in the absence of the extra scalar, values of the Yukawa coupling y_ν greater than 0.42 are disallowed in the minimal linear seesaw model, but in the presence of the extra scalar values of y_ν up to ∼ 0.6 are allowed for a dark matter mass ∼ 1 TeV. The correlations between the Yukawa couplings (Tr[Y_ν^† Y_ν] or y_ν) and κ are presented in terms of phase diagrams. Inverse and linear seesaw models can be explored at the LHC through trilepton signatures <cit.>. A higher value of the Yukawa couplings, as can be achieved in the presence of the Higgs portal dark matter, can facilitate observing such signals at colliders. | http://arxiv.org/abs/1706.08851v1 | {
"authors": [
"Ila Garg",
"Srubabati Goswami",
"Vishnudath K. N.",
"Najimuddin Khan"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170627135153",
"title": "Electroweak vacuum stability in presence of singlet scalar dark matter in TeV scale seesaw models"
} |
Laboratoire AIM-Paris-Saclay, CEA/DSM/Irfu - CNRS - Université Paris Diderot, CEA-Saclay, F-91191 Gif-sur-Yvette, France The star formation history (SFH) of galaxies is a key assumption to derive their physical properties and can lead to strong biases. In this work, we derive the SFH of main sequence (MS) galaxies, showing how the peak of the SFH of a galaxy depends on its seed mass at, e.g., z=5. This seed mass reflects the galaxy's underlying dark matter (DM) halo environment. We show that, following the MS, galaxies undergo a drastic slow down of their stellar mass growth after reaching the peak of their SFH. According to abundance matching, these masses correspond to hot and massive DM halos, whose state could result in less efficient gas inflows on the galaxies and thus could be at the origin of the limited stellar mass growth. As a result, we show that galaxies, still on the MS, can enter the passive region of the UVJ diagram while still forming stars. The best fit to the MS SFH is provided by a Right Skew Peak Function, for which we provide parameters depending on the seed mass of the galaxy. The ability of the classical analytical SFHs to retrieve the SFR of galaxies from Spectral Energy Distribution (SED) fitting is studied. Due to mathematical limitations, the exponentially declining and delayed SFHs struggle to model high SFRs, which starts to be problematic at z>2. The exponentially rising and log-normal SFHs exhibit the opposite behavior, with the ability to reach very high SFRs, and thus model starburst galaxies, but not low values such as those expected at low redshift for massive galaxies. Simulating galaxy SEDs from the MS SFH, we show that these four analytical forms recover the SFR of MS galaxies with an error dependent on the model and the redshift. They are, however, sensitive enough to probe small variations of SFR within the MS, with an error ranging from 5 to 40% depending on the SFH assumption and redshift, but all four fail to recover the SFR of rapidly quenched galaxies. However, these SFHs lead to an artificial gradient of age, parallel to the MS, which is not exhibited by the simulated sample. This gradient is also produced on real data, as we show using a sample of GOODS-South galaxies with redshifts between 1.5 and 2.5. Here, we propose an SFH composed of a delayed form to model the bulk of the stellar population, with the addition of a flexibility in the recent SFH. This SFH provides very good estimates of the SFR of MS, starburst, and rapidly quenched galaxies at all redshifts. Furthermore, used on the GOODS-South sample, the age gradient disappears, showing its dependency on the SFH assumption made to perform the SED fitting. Ciesla et al. Archetypal star-formation history and analytical models On the SFR-M_* main sequence archetypal star-formation history and analytical models. L. Ciesla1, D. Elbaz1, and J. Fensch1. Received; accepted =====================================================================================

§ INTRODUCTION

The evolution of galaxies depends on their star formation history (SFH), which is by definition the star formation rate of a galaxy as a function of time. Two main physical properties of galaxies are directly computed from their Spectral Energy Distributions (SED) assuming an SFH: the stellar mass and the star formation rate (SFR) at the time the galaxy is observed. For star-forming galaxies, these two parameters follow a relation called the main sequence of galaxies <cit.>, whose normalization increases with redshift.
Its scatter, however, is found to be roughly constant over cosmic time <cit.>. The main consequence of this relation is that galaxies form the bulk of their stars through steady state processes rather than violent episodes of star formation, putting constraints on the SFH of galaxies. One method to derive the stellar mass and SFR of galaxies is to build and model their spectral energy distribution. To do so, one has to assume a stellar population model <cit.> convolved with a star formation history, and then to apply an attenuation law <cit.>. The models built to fit the data are thus dependent on the SFH of galaxies. A first order approximation of galaxy SFHs can be guessed from the evolution of the SFR density of galaxies as a function of cosmic time <cit.>, showing that the global SFR of galaxies peaks around z∼2 and then smoothly decreases. Going further, several studies showed that sophisticated SFH parametrizations including stochastic events, such as those predicted by hydrodynamical simulations and semi-analytical models, have to be taken into account to reproduce galaxy SFHs <cit.>. These models are complex to implement, and a large library is needed to be able to model all galaxy properties. Instead, numerous studies tested and used simple analytical forms to easily compute galaxy physical properties <cit.>. Recently, <cit.> used complex SFHs produced by the semi-analytical model GALFORM <cit.> to model the Spectral Energy Distributions (SED) of z=1 galaxies and recover their stellar mass and SFR using simple analytical SFHs. They found that the two-exponentially decreasing SFH and the delayed SFH recovered the real SFR and M_* relatively well. There is, however, a general agreement on the difficulty to constrain the age of the galaxy, here defined as the age of the oldest star, from broad-band SED fitting <cit.>. The exquisite sensitivity of HST and Spitzer, now combined with ALMA observations and soon to be complemented by JWST, allows us to better sample high redshift galaxy SEDs and thus apply more sophisticated SED fitting methods implying the use of these analytical SFHs tested at lower redshifts. The aim of this study is to compute the SFH of galaxies following the MS and understand whether the widely used analytical SFHs are pertinent to model them, as well as outliers such as starburst or rapidly quenched galaxies, regardless of the redshift. Throughout this paper, we use the WMAP7 cosmological parameters <cit.>. SED fitting is performed assuming the IMF of <cit.>, but the results of this work are found to be robust against the IMF choice.

§ THE STAR FORMATION HISTORY OF A MAIN SEQUENCE GALAXY

Over the past decade, numerous studies have shown that the bulk of star-forming galaxies follow a relation between their SFR and their stellar mass, with observational confirmation that this relation holds up to z=4 <cit.>. Recently, <cit.> parametrized the MS as a function of stellar mass and redshift. It is thus possible to build the SFH of a star-forming galaxy following the MS by computing the SFR corresponding to the stellar mass at each redshift or time step. Due to the decrease of the MS normalization with decreasing redshift, the typical SFR of galaxies at a given fixed mass (e.g. around M^*, the knee of the stellar mass function) declines with time. However, because of the positive slope of the MS, the SFR of a galaxy increases with its stellar mass up to high masses, where the MS starts to bend.
Computing the resulting SFH of a star-forming galaxy staying on the MS while forming stars will allow us to understand how these two effects affect the star formation activity of the galaxy. Starting at z=5 with a given mass seed, we calculate at each time step the SFR corresponding to the redshift and mass, assuming the MS relations of <cit.>. Then the mass produced during the time step is calculated assuming a constant SFR. At the following time step, we derive the new SFR associated with the redshift at the given time and the new computed mass. The computed SFRs and stellar masses are shown on the MS plane in Fig. <ref>, for every five time steps, for five seed masses. We show the resulting SFHs in Fig. <ref> (bottom panel), from z=5 to 0.3, for four different seed masses, as well as their stellar mass histories (top panel). The range where the MS relation is constrained by observations, i.e. up to M_*=3×10^11 M_⊙, is shown (solid lines), as well as extrapolations of these relations at higher masses (dashed lines). At early stages, the SFR is increasing with cosmic time, implying that the effect of the positive slope of the MS dominates the decrease of its normalization. Then it reaches a peak, after which a smooth decline of the SFR is observed, due to a combination of the decline of the MS normalization and the bending of the MS at high masses. The peak of the SFR history of an individual galaxy depends on the mass of the seed and occurs at a cosmic time of 2.8 Gyr in the case of M_seed=10^10 M_⊙, and at 8.9 Gyr for M_seed=10^6 M_⊙ (Fig. <ref>, right panel). From this point, we can note that the position and width of the peak of the cosmic star formation density must contain information on the distribution of galaxy seed masses. The SFH of MS galaxies thus depends on time but also on the seed mass. This seed mass can be used as a proxy for the halo mass and thus the environment <cit.>. Therefore, the star formation history of a galaxy following the MS is sensitive to environment. These different shapes of the SFH as a function of the seed mass are a direct consequence of the bending of the MS above a given mass, whose evolution with redshift is taken into account by the models of <cit.>. After the peak, the SFR starts to smoothly decrease, translating into a much slower stellar mass growth, close to saturation (Fig. <ref>, top panel). Thus, just by following the MS, a galaxy will undergo a smooth and slow diminution of its star formation activity from a time defined by its seed mass. This is the scenario of the slow downfall presented in <cit.>, for instance.
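The forward integration described above can be sketched in a few lines of Python. Here the MS is written following our reading of the <cit.> relation, with its published coefficients (m = log M_*/10^9 M_⊙, r = log(1+z)); stellar mass loss is neglected for simplicity, so the numbers are only indicative:

    import numpy as np
    import astropy.units as u
    from astropy.cosmology import WMAP7, z_at_value

    def sfr_ms(mstar, z):
        # MS SFR in Msun/yr for a stellar mass in Msun at redshift z
        m, r = np.log10(mstar / 1.0e9), np.log10(1.0 + z)
        return 10.0 ** (m - 0.5 + 1.5 * r - 0.3 * max(0.0, m - 0.36 - 2.5 * r) ** 2)

    mstar = 1.0e8                              # seed mass at z = 5
    t = WMAP7.age(5.0).to(u.Gyr).value
    t_end = WMAP7.age(0.3).to(u.Gyr).value
    dt, history = 0.05, []                     # 50 Myr steps
    while t < t_end:
        z = float(z_at_value(WMAP7.age, t * u.Gyr))
        sfr = sfr_ms(mstar, z)
        history.append((t, z, sfr, mstar))
        mstar += sfr * dt * 1.0e9              # constant SFR over the step
        t += dt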
In Fig. <ref>, we show the evolution of the colors of the galaxies presented in Fig. <ref>, assuming no dust attenuation. Just by following the MS, the evolution of its colors makes the galaxy enter the “passive” zone of the UVJ diagram. However, the galaxy is not quenched: it is still following the MS and thus forming stars, but given the relatively low SFR and the high mass of the galaxy, the colors of the evolved stellar population dominate the emission of the galaxy and place it in the “passive” region of the UVJ diagram. Fig. <ref> shows the critical stellar mass and redshift at which the mass growth starts to reach a plateau (left panel) and the SFR at the peak of the SFH (right panel) for the five seed masses shown in Fig. <ref>. There is a strong variation of the mass and SFR_max with redshift. Using abundance matching, one can convert these stellar masses to halo masses <cit.>, obtaining 10^13 M_⊙ at z=0.4 to 2-4×10^14 M_⊙ above z=1-1.5. At all redshifts, these corresponding halo masses are above the critical line predicted by <cit.> to set the threshold mass for a stable shock based on spherical infall analysis <cit.>. In these mass-redshift regions, the model presented in <cit.> predicts that the galaxies lie in hot haloes, shutting off most of the gas supply to the inner galaxy. The resulting limited gas inflow onto the galaxy could be a possible explanation for the slow decrease of the star formation activity. Several studies aimed at parametrizing the global SFH of galaxies with analytical functions, either with the requirement to fit both individual SFHs and the cosmic SFH, such as one or two log-normal functions <cit.>, or to follow the predicted evolution of Dark Matter (DM) halo masses, such as a double power law <cit.>. These functional forms, however, do not provide satisfactory fits of the computed SFH of MS galaxies, as shown in Fig. <ref>. These functions do not manage to model the early SFH, and the position of the peak of the SFR is offset. In the case of the log-normal SFH, the peak is shifted towards later times and is too broad. Furthermore, the slope of the declining part is too steep. The double power law provides a peak slightly shifted towards shorter ages and the slope of its declining part is closer to the true one, but the declining part of the MS SFH is flatter than what is computed by these functions. We also tried other forms, such as a Gaussian, a skewed Gaussian, and combinations of them with a power law or an exponential, but they all suffered from problems. After an extensive test of several analytic forms, the best fit of the MS SFH (Fig. <ref>) is found to be a Right Skew Peak Function, based on a skewed Gaussian, whose analytical expression is:

SFR(t, M_seed) = A √(π)/2 σ e^((σ/2r_S)^2 - (t-μ)/r_S) erfc(σ/2r_S - (t-μ)/σ),

with t the time (in Gyr), A the amplitude, σ the width of the Gaussian, r_S the right skew slope, μ the position of the Gaussian centroid, and erfc the standard complementary error function. The values of the different coefficients depend on the seed mass. To constrain this relation, we fit a Right Skew Peak Function to a set of MS SFHs computed from a grid of seed masses. The best value of each parameter as a function of seed mass is shown in Fig. <ref>, with a best fit for A, μ, and σ provided by:

p(M_seed) = A_p e^(-log M_seed/τ_p).

The values of A_p and τ_p are given in Table <ref> for all the coefficients. For the right skew slope parameter, r_S, a linear function provides a better fit, with the parameters also provided in Table <ref>. The result of this parametrization of the MS SFH of a galaxy with a seed mass of 10^8 M_⊙ is shown in Fig. <ref>. The early part of the MS SFH is slightly underestimated by the Right Skew Peak Function, but at t>3 Gyr the agreement is very good: the position of the peak is recovered and the declining slope is reproduced. This SFH is computed from the MS relation obtained from observations. The bulk of star-forming galaxies follow this relation, and thus we use it as a benchmark in the rest of this study.
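For reference, the Right Skew Peak Function of eqn. (<ref>) is directly available in Python through the complementary error function of scipy. In the sketch below, the (A_p, τ_p) pairs are placeholders standing in for the values of Table <ref>, which are not reproduced here:

    import numpy as np
    from scipy.special import erfc

    def right_skew_peak(t, A, mu, sigma, rs):
        # t in Gyr; A amplitude, mu centroid, sigma width, rs right skew slope
        return (A * np.sqrt(np.pi) / 2.0 * sigma
                * np.exp((sigma / (2.0 * rs)) ** 2 - (t - mu) / rs)
                * erfc(sigma / (2.0 * rs) - (t - mu) / sigma))

    def coeff(mseed, Ap, taup):
        # p(M_seed) = A_p exp(-log(M_seed)/tau_p), for A, mu and sigma
        return Ap * np.exp(-np.log10(mseed) / taup)

    mseed = 1.0e8
    t = np.linspace(0.1, 13.0, 500)
    sfh = right_skew_peak(t,
                          A=coeff(mseed, 1.0, 5.0),       # placeholder (A_p, tau_p)
                          mu=coeff(mseed, 10.0, 8.0),     # placeholder
                          sigma=coeff(mseed, 5.0, 10.0),  # placeholder
                          rs=1.0)                         # in practice linear in log M_seed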
§ ANALYTICAL STAR FORMATION HISTORIES

In this Section, we test the robustness of various analytical forms of SFH commonly used in the literature. We focus on the delayed, the exponentially declining, the exponentially rising, and the log-normal SFHs. We describe their parametrization and show their shape in Fig. <ref>, as a function of the e-folding time parameter τ of the main stellar population. The exponential SFH is defined as:

SFR(t) ∝ e^(-t/τ)

where t is the time and τ the e-folding time of the stellar population. Positive and negative values of τ correspond to an exponentially declining and rising SFH, respectively. The delayed SFH is defined as:

SFR(t) ∝ t e^(-t/τ)

where t is the time and τ the e-folding time of the stellar population. The effect of τ on the shape of the delayed and exponentially declining SFHs can be seen in Fig. <ref>: short values correspond to galaxies where the bulk of the stars was formed early on and in a short time, followed by a smooth decrease of the SFR, while high values imply a roughly constant SFR over cosmic time. For the exponentially rising SFH, the bulk of the stars is formed in a short and recent time for small τ values. For these three SFHs, the highest SFRs are reached for low τ values and short times. As a comparison, we show along with the different delayed SFHs the modeled SFH of a MS galaxy with a seed mass of 10^9 M_⊙ at z=5. No value of τ provides a good model of the peak of the MS SFH; reaching such an SFR would require a low value of τ associated with a short age, incompatible with the time at which the peak of star formation occurs. Recently, <cit.> discussed the use of a log-normal function to model galaxy SFHs and reproduce the cosmic SFR density, following the results of <cit.>. In these studies, the log-normal SFH is defined as:

SFR(t) ∝ 1/(t √(2πτ^2)) e^(-(ln(t) - t_0)^2/(2τ^2))

where t_0 is the logarithmic delay time and τ sets the rise and decay timescale <cit.>. The log-normal SFH, for different values of t_0 and τ, is shown in Fig. <ref>d. The inclusion of t_0 controls the time at which the SFH peaks, which cannot be done in the case of the delayed SFH, for instance. We compare these different log-normal curves with the computed SFH of MS galaxies, as discussed in the previous Section and Fig. <ref>, and find that the log-normal function also struggles to reproduce this SFH (Fig. <ref>). While adapted to reproduce the cosmic SFH integrated over all galaxy seed masses, the log-normal SFH is less efficient in reproducing the shape of the individual SFH of MS galaxies. These histories are indeed strongly dependent on seed masses, as illustrated in Fig. <ref>-bottom.
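The four analytical forms above, written with arbitrary normalizations (only their shapes matter for the fitting), can be sketched as:

    import numpy as np

    def sfh_exponential(t, tau):
        # tau > 0: exponentially declining; tau < 0: exponentially rising
        return np.exp(-t / tau)

    def sfh_delayed(t, tau):
        return t * np.exp(-t / tau)

    def sfh_lognormal(t, t0, tau):
        return (np.exp(-(np.log(t) - t0) ** 2 / (2.0 * tau ** 2))
                / (t * np.sqrt(2.0 * np.pi * tau ** 2)))

    t = np.linspace(0.01, 13.0, 500)            # Gyr
    for tau in (0.5, 2.0, 8.0):
        declining = sfh_exponential(t, tau)
        rising = sfh_exponential(t, -tau)
        delayed = sfh_delayed(t, tau)           # peaks at t = tau
        lognormal = sfh_lognormal(t, np.log(3.0), 0.5)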
§ RECOVERING THE STAR FORMATION HISTORY OF MAIN SEQUENCE GALAXIES

In Sect. <ref>, we built the SFH of a galaxy following the MS during all its lifetime. To test how well the analytical SFHs presented in the previous Section can model this archetypal galaxy, we show in Fig. <ref> the specific SFR (sSFR) of this galaxy as a function of its stellar mass, following its evolution on this plane. In addition, we show the MS relations derived by <cit.> from observations at z=0, 2, and 4. This relation is only constrained up to z=3, but we extrapolate it to z=4 following <cit.>, who recently showed that it still holds at z=4. To understand whether the analytical SFHs studied here are able to cover the parameter space necessary to model our MS galaxy, we compute a grid of models for each SFH. We use the largest reasonable limits of age, from 100 Myr to 13 Gyr, and τ, from 1 Myr to 20 Gyr, for the exponential SFHs and the delayed one. The number density of models is completely dependent on our τ and age grids; however, the space covered by our computed models is the largest possible, since we consider minimum and maximum possible values for these two parameters. The resulting parameter space in terms of sSFR and stellar mass is shown with the colored filled region in Fig. <ref>. In the case of the log-normal form, we use three values of t_0 and a less resolved grid of τ as compared to the other models, for computational reasons, although we keep the minimum and maximum possible values. We have seen that the evidence of a MS implies that galaxies experienced on average a rising SFH since their birth, before reaching a peak and a drop of star formation. As a natural result, the exponentially declining SFH is not suited for this early phase. The locus of the z=4 MS indeed falls in the dark zone of Fig. <ref>a, which is unpopulated by the analytical formula whatever parameters are chosen. More surprisingly, the delayed SFH, which was built to account for this behavior, encounters a similar problem, although less sharply, i.e., some extreme parameter choices can marginally reach z=4 but not above. This comes from the stiffness of the analytical formula, unable to follow the actual shape of the MS driven SFH. More generally, these two analytical SFHs are not optimized to finely sample the SFH at epochs z≥2. This problem is not present with the exponentially rising SFH, which instead is not optimized for z≤2. The same conclusions are drawn for the log-normal function, which is able to reach very high values of SFR, but not the smallest ones expected at low redshifts for massive galaxies. To understand why the exponentially declining and delayed SFHs struggle to reproduce high sSFR, we show in Fig. <ref> the relation between the age, τ, and the SFR for the whole grid of models. The highest SFRs are reached from a combination of low ages and low values of τ. This implies limitations on the estimate of the SFR, since the age cannot be higher than the age of the Universe. On the contrary, we can easily see that a whole range of high SFRs can be obtained with the exponentially rising SFH and the log-normal one, but not the low SFRs needed to model high mass local galaxies, confirming what we observe in Fig. <ref>, where this SFH fails to reproduce our modeled MS galaxy at z<0.4. We conclude that the exponentially declining and delayed SFHs are not optimized to model the emission of MS galaxies at z≥2, as they show little mathematical flexibility to reproduce the SFR of these galaxies. The exponentially rising SFH, however, can model such high redshift sources, as well as the log-normal function, but not massive galaxies below z<2.

§ RECOVERING THE SCATTER OF THE MAIN SEQUENCE

The MS relation exhibits a scatter of 0.3 dex that is found to be constant with redshift <cit.>. There are two possible origins for this scatter. The first one is that this scatter could be artificially created by the accumulation of errors due to observations, photometric measurements, and the determination of physical properties from different methods (instrumental errors, assumptions linked to SED modeling, etc). The second one is that the scatter could be due to a variation of a third property (other than M_* and SFR) across the MS and thus be physical <cit.>.
Indeed, some simulations predict that galaxies can undergo fluctuations of star formation activity resulting in variations of their SFR, such as compaction or variations of accretion <cit.>. These variations should be small enough to keep the SFR of the galaxy within the MS scatter. To understand the origin of the scatter of the MS, analytical SFHs must be able to recover small recent variations of the SFR with a precision better than the scatter of the MS itself, i.e. 0.3 dex. To test the functional forms studied in this work, we take our computed MS galaxy SFH described in Sect. <ref> and add, at different redshifts, a small variation of the SFR, an enhancement or a decrease. This variation is applied to the last 100 Myr of the SFH, and its intensity is randomly selected in a Gaussian distribution centered on the exact MS SFR at the given redshift and mass, and with a σ of 0.3 dex. Five hundred mock SFHs are created at each redshift, from z=1 to 5, varying the mass of the galaxy seeds from 10^7 to 10^10 M_⊙. To derive galaxy SEDs from these SFHs, we use the SED modeling code CIGALE <cit.>, based on an energy balance between the energy attenuated in the UV-optical and re-emitted in the IR by the dust. CIGALE can model galaxy SEDs from any kind of SFH, analytical forms or outputs of complex simulations. In this case, we provide CIGALE with our computed SFHs and build SEDs using the stellar population models of <cit.>, a <cit.> attenuation law, and the dust emission templates of <cit.>. The modeled SEDs are then integrated into the set of filters available for the GOODS-South field, and a random noise distributed in a Gaussian with σ=0.1 is added to the modeled fluxes, following the method described in <cit.>. We then use the SED fitting mode of CIGALE to recover the physical parameters of our mock galaxies. To fit our galaxies, we do not provide as an input the parameters used to build the mock galaxies, and the attenuation law is a free parameter. However, we use three infrared (IR) bands: MIPS 24 μm, and PACS 100 and 160 μm. Because CIGALE is based on an energy budget, having IR data is ideal to constrain the amount of attenuation and break the degeneracy with age. We emphasize the fact that the following results should be considered as a best case scenario. The physical properties of the galaxies are computed from the probability distribution function (PDF) of each parameter computed by CIGALE, the final value being the mean of the PDF and the error its standard deviation. At each redshift, and for each SFH assumption, Fig. <ref> (top panel) shows the relative difference between the estimated SFR and the true one known from our simulated galaxies. Globally, at all redshifts, all SFH assumptions recover the true SFR within an error below ±25%, except the exponentially declining SFH that underestimates the SFR by ∼40% at z=4. The estimated error of 25% on the SFR corresponds to a quarter of the MS scatter; thus the usual SFHs should be sensitive enough to probe variations of the order of 0.3 dex of star formation activity around the MS. The ability of each SFH assumption to recover the physical properties of the mock galaxies is, however, dependent on redshift. Indeed, the error on the SFR is increasing with redshift for the exponentially declining SFH, showing that this form is more suited for galaxies at z≤2-3.
The opposite behavior is observed for the rising exponential, providing better estimates of the SFR at z=4 than at z=1, where it clearly overestimates this parameter, indicating that this SFH is more suited for galaxies at z>2-3. This is in agreement with the typical MS SFH described in Sect. <ref>, with an increase of the SFR at early times, well modeled by the rising exponential, followed by a smooth decrease, modeled by the exponentially declining SFH. The behavior of these two SFH assumptions is not surprising and was already pointed out in several studies <cit.>. However, this emphasizes the fact that these two SFHs must be carefully selected according to the redshifts of galaxies in order to limit biases on the derivation of their physical properties. The delayed SFH, however, seems to provide estimates of the SFR showing a weaker dependency on redshift, although we note an underestimate at z=4 that is less pronounced than for the exponential SFH. Recently, <cit.> performed a similar analysis on z=1 galaxies simulated by the semi-analytical code GALFORM and found that the exponentially declining and delayed SFHs were underestimating the SFR by 11% and 9%, respectively. Here we find a weak overestimation of a few percent in the estimate of the SFR for both SFHs. This apparent disagreement is likely due to the different SFHs used to build the SEDs (complex SFHs produced by a SAM code in <cit.> versus a simple analytical form linked to the MS here), as well as the parameters used to perform the fits; for instance, the attenuation law is a free parameter in this study. Improvements in the code since 2015 may also play a role. The log-normal SFH provides very good estimates of the SFR of MS galaxies at all redshifts, with an error of less than 10%. We note that the two functional forms that have a global shape similar to the MS SFH, the log-normal and delayed SFHs, recover well the SFR of the mock galaxies with little dependency on the redshift compared to the exponential forms. From this analysis, we conclude that the four tested SFH assumptions provide fair measurements of the SFR of MS galaxies, but the log-normal and delayed SFH estimates are more accurate and less redshift dependent than the other models.

§ EXTREME SFHS: STARBURSTS AND RAPIDLY QUENCHED GALAXIES

We tested in the previous Section the ability of the usual SFHs to recover the SFR of MS galaxies. Recently, <cit.> argued that considering that galaxies stay on the MS all their lives would result in a significantly steeper stellar mass function towards low redshift, and showed that taking into account mergers would settle the conflict with the observed growth of stellar mass. Furthermore, after spending a long time on the MS, galaxies are expected to quench <cit.>. To test the impact of stronger perturbations on the SFR, we perform here the same test as in the previous Section, i.e. we model MS galaxies to which we now apply a strong burst or quenching. To model starburst galaxies, we systematically multiply by a factor of 5.24 the SFR in the last 100 Myr of our simulated MS galaxies and add a random scatter of 0.3 dex following a Gaussian distribution, as suggested by <cit.>. We then follow the same procedure as for the MS galaxies, i.e. we build their SEDs with CIGALE and perform the fitting. The relative difference between the output of the fitting and the true SFRs for the starburst galaxies is shown in Fig. <ref> (bottom panel, filled symbols). At z=1, the three assumptions provide good estimates of the SFR of starburst galaxies.
Moreover, the results from the rising exponential SFH are better than for the MS galaxies, as expected from Fig. <ref>, where we showed that this SFH is able to reproduce very high SFRs; it provides good estimates, with errors below 20%, at all the redshifts considered here. The log-normal function provides very good estimates as well, as expected. However, the delayed and exponentially declining SFHs underestimate the SFR by a factor increasing with redshift, up to errors of ∼40 and ∼50%, respectively, at z=4. This is also explained by the difficulty that these models have in reaching very high SFRs, as shown in Fig. <ref>. We thus conclude that the exponentially rising and log-normal SFHs are well suited to model starburst galaxies at all redshifts, whereas the delayed SFH is suited for starburst galaxies at z≤2 and the exponentially declining one for galaxies at z≤1. We now investigate the case of galaxies undergoing a fast quenching of their star formation activity. Here, we want to probe processes that strongly affect the activity of the galaxies in less than 500 Myr, such as ram pressure stripping, for instance, or violent negative feedback (e.g. SF-driven or AGN-driven winds), and not smooth quenching occurring on timescales of the order of several Gyr. Following the results of <cit.>, we apply to our MS SFH a systematic instantaneous break in the last 100 Myr of the SFH, making the SFR drop to 15% of the SFR before quenching. We then add a random scatter of 0.3 dex, as we did for the MS and starburst galaxies. The value of 15% is chosen as being intermediate between a total quenching (0%) and 30%, which is the upper limit empirically defined by <cit.> to consider a galaxy as rapidly quenched. The results of the SFR recovery for the rapidly quenched galaxies are shown in Fig. <ref> (bottom panel, open symbols). In this case, the dispersion on the measurement is the largest; all SFHs overestimate the SFR, by 20% in the best case (delayed SFH at z=3) and by 350% in the worst case (exponentially rising SFH at z=1). The dependence on redshift of ΔSFR/SFR is the same for the three assumptions, i.e. they overestimate the SFR by 20 to 40% for galaxies at z=3 and 4, but by a factor of 2 on average at z=1 and 2. Indeed, we see in Fig. <ref> that the delayed and the exponentially declining SFHs can reach very low values of SFR but need a long time/age to reach these values, which is not compatible with a sharp decline of the SFR. For the exponentially rising SFH, low values of SFR (<10 M_⊙ yr^-1) are not reachable, as seen in Fig. <ref>, and the same holds for the log-normal SFH over the times probed here. We conclude then that none of these four SFHs is suited to derive the SFR of galaxies undergoing a rapid quenching of their star formation activity.
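The two perturbations used in this section amount to rescaling the last 100 Myr of the MS SFH. A minimal sketch, applied here to a delayed SFH standing in for the MS history:

    import numpy as np

    rng = np.random.default_rng(0)

    def perturb_recent(t, sfr, mode, t_len=0.1):
        # Rescale the last t_len Gyr: x5.24 burst, or drop to 15% for a quench,
        # with an additional 0.3 dex Gaussian scatter in both cases
        out = sfr.copy()
        recent = t > (t.max() - t_len)
        base = 5.24 if mode == "burst" else 0.15
        out[recent] *= base * 10.0 ** rng.normal(0.0, 0.3)
        return out

    t = np.linspace(0.0, 3.3, 331)              # Gyr, roughly up to z ~ 2
    sfr = t * np.exp(-t / 2.0)                  # stand-in for the MS SFH
    sfr_burst = perturb_recent(t, sfr, "burst")
    sfr_quench = perturb_recent(t, sfr, "quench")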
§ RECOVERING THE STELLAR MASS OF GALAXIES The previous discussions on the ability of analytical SFH assumptions to recover the SFR of galaxies were motivated by the fact that, at a given stellar mass, the range of SFR reachable by these SFHs can be limited for mathematical reasons. However, we test in this section their ability to recover the stellar mass of the galaxies as well. We compare in Fig. <ref> the M_* obtained from our SED fitting procedure to the true M_* of the galaxies, as we did for the SFR in Fig. <ref>. For MS galaxies, Fig. <ref> (upper panel) shows that in almost all cases, i.e., all SFH assumptions and all redshifts, the stellar mass is recovered with an error lower than 25%. The only exception is for the exponentially declining SFH at z=4, where M_* is overestimated by 35%. For the starburst galaxies (bottom panel of Fig. <ref>), there is a relation between the mean error on the stellar mass and the redshift for all SFH assumptions. The worst case is the exponentially declining SFH, with a strong trend with redshift, from an overestimation of nearly 100% at z=4 to an underestimation of ∼30% at z=1. The same tendency is observed in the case of the delayed SFH with a lower gradient, from +40% to -30%. The log-normal and exponentially rising SFHs provide very good estimates of the stellar mass of starburst galaxies down to z=1, where they reach an underestimation of 30%, like the exponentially declining and delayed SFHs. In the case of rapidly quenched galaxies, all the SFH assumptions tested in this work provide a very good measurement of their stellar mass, implying that the larger errors observed for the starburst galaxies arise from the need to reach high SFR at a given stellar mass. We conclude that only the exponentially declining SFH might introduce errors in the measurement of M_*, especially in the case of starburst galaxies, and must thus be used cautiously.§ BIASES In the previous sections, we showed that analytical functions typically used in the literature feature a mathematical rigidity that leads to errors in the recovery of physical parameters such as the SFR. On the one hand, from Fig. <ref>, we understand that large values of SFR are only reached assuming a very short age of the oldest stars of the galaxy for the delayed and exponentially declining SFHs. On the other hand, we expect the exponentially rising SFH to struggle in recovering weak SFRs. We thus expect that modeling high-redshift galaxy SEDs with these SFHs results in biases, especially in terms of age. As a first test, we simulate a mock galaxy sample and follow their SFHs up to z=1.5-2.5. Here we take a simple scenario where a galaxy, with a given M_seed at z=5, stays on the MS all its life but undergoes some episodes of enhanced star formation followed by a period of lower star-forming activity, corresponding to a self-regulation. However, these episodes are not strong enough to place the galaxy in the SB zone or in the quiescent region of the MS diagram: we force the galaxies to stay within the scatter of the MS. To do so, we make several assumptions. First, we randomly pick a M_seed in the mass function distribution computed by <cit.> at z=3.5-4.5. The normalization of the mass function is not important, only the shape is useful, as we normalize it in order to compute the probability of having a galaxy seed with a given stellar mass. Then, for each time step we set the probability of a star formation enhancement to 75% and, in the case of such a small burst, we impose that a small decrease of star formation must follow this enhancement to account for self-regulation. The intensities of these episodes are randomly drawn from a Gaussian distribution with σ=0.3 dex, i.e., within the scatter of the MS. The intensities of the enhancement and of the quiet periods are allowed to be different. We then calculate at each time step the stellar mass associated with the SFR. Finally, we randomly pick a redshift between 1.5 and 2.5 as the observed redshift for each galaxy. In the SED fitting methods, the age is defined as the time when the first star is formed. In this simple model, using this definition is difficult; we thus arbitrarily define the age of the galaxy as the time since the galaxy reached 10^6 M_⊙. This assumption should not impact our results since we are interested in relative differences in the ages of galaxies and not in absolute values. The resulting MS is shown in the left panel of Fig. <ref>a. No SED fitting is performed at this stage; this is just the output of the SFHs of the mock galaxies.
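A toy version of this random walk along the MS can be sketched as follows; sfr_ms is an assumed callable returning the MS SFR for a given mass and time, the seed mass is supposed to have been drawn beforehand from the adopted mass function, and all names are illustrative rather than part of any released code.

import numpy as np

def mock_ms_track(m_seed, t_myr, sfr_ms, rng=np.random.default_rng(1)):
    # Random walk within the 0.3 dex MS scatter: at each step a small
    # enhancement occurs with probability 0.75 and is always followed by
    # a compensating dip (self-regulation), with independent intensities.
    mass, track, pending_dip = m_seed, [], False
    for i in range(1, len(t_myr)):
        if pending_dip:
            offset, pending_dip = -abs(rng.normal(0.0, 0.3)), False
        elif rng.random() < 0.75:
            offset, pending_dip = abs(rng.normal(0.0, 0.3)), True
        else:
            offset = rng.normal(0.0, 0.3)
        sfr = sfr_ms(mass, t_myr[i]) * 10.0 ** offset      # Msun/yr around the MS
        mass += sfr * (t_myr[i] - t_myr[i - 1]) * 1e6      # integrate M_* (dt in yr)
        track.append((t_myr[i], sfr, mass))
    return track                                           # (time, SFR, M_*) triplets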
These simulated sources span a range of stellar masses between 10^8 and 10^12 M_⊙ and SFRs between 0.1 and 500 M_⊙yr^-1. Their ages lie between 1.5 and 4 Gyr. Younger galaxies are preferentially found at low masses, whereas older galaxies are found in the highest mass range. We emphasize here that this is a simple case where the same attenuation law <cit.> and attenuation amount (E_(B-V)=0.3) are applied to all galaxies. The only parameters that differ from one simulated galaxy to another are those linked to the SFH. We apply our SED fitting method to the simulated galaxies and place them on the MS plane using the outputs of the fit for the four analytical forms, color-coded with the age, also derived by CIGALE (Fig. <ref>, panels b to e). For each SFH, we clearly see that the fitting method artificially introduces an age gradient parallel to the MS, with young ages at the top and older ages at the bottom. The exponentially declining SFH tends to tighten the galaxies on the MS relation, while the opposite is noticed for the log-normal SFH, which shows a wider spread of the galaxies, especially towards high SFR. In addition, we note an artificial line at high SFR in the case of the exponentially rising SFH. While the exponentially declining and rising SFHs and the log-normal SFH tend to prefer low values of age, the delayed SFH produces older ages compared to the simulated sample. In Fig. <ref>, we show the true median age of the simulated galaxies as well as those obtained through SED fitting for all analytical forms studied in this work, as a function of the relative distance to the MS, i.e., the starburstiness <cit.>. The true age is never recovered by any of the analytical forms studied in this work, with underestimations by a factor of 2-3. Furthermore, while the true age shows no relation with R_SB, the age produced by SED fitting shows a clear relation with R_SB, with a slope and spread (as indicated by the error bars) that depend on the SFH assumption. The delayed, the exponentially decreasing, and the exponentially rising SFHs show the steepest slopes, and thus are the most biased in all stellar mass bins, while the log-normal SFH shows the weakest trend. We note a tendency to overestimate R_SB for all the SFH assumptions but the exponentially declining one, confirming what we observe in Fig. <ref>. The trends observed in Fig. <ref> are obtained in an ideal case where, as we explained, only the SFH varies from one galaxy to another. To understand how it could impact real data, we select star-forming galaxies in the GOODS-South sample from their position in the U-V vs V-J diagram <cit.>, between z=1.5 and 2.5, and model their SEDs using CIGALE. The reader is referred to Ciesla et al. (2017, in prep) for a detailed description of the fitting procedure of these sources. Basically, we use the same procedure as in the previous Section, applying parameters that are typically used in the literature to perform a UV-to-FIR SED fitting of high-redshift galaxies with CIGALE <cit.>. Without knowing the real SFH of the GOODS-South sample galaxies, we nevertheless observe several trends similar to those we discussed in the test using the simulated galaxies.
Indeed, for the exponentially declining and delayed SFHs, we observe an artificial line at high SFR resulting from the limited range of SFR probed by these two SFHs, as shown in the previous section (black zone in Fig. <ref>). An artificial line is also found at high SFR for the exponentially rising SFH, but we also note the lack of galaxies with low SFRs compared to the two other SFHs, as expected. A different effect is found for the log-normal SFH, for which the range in SFR provided by the SED fitting is much wider than when using the three other SFHs. There is also the same perfect gradient of age in the distribution of galaxies for all four assumptions. In all four models, we can see that the sources at the top of the MS seem to be the youngest at all masses, and the galaxies at the bottom part of the MS the oldest ones, again regardless of the stellar mass. We note that the age of the galaxies in the highest part of the MS is the lowest value allowed for the fit, in this case 500 Myr, as expected from our previous figures. This is due to the fact that, with the delayed and exponentially declining SFHs, high values of SFR are only reached for small ages, as shown in Fig. <ref>. However, we also note a similar gradient in the case of the exponentially rising and log-normal SFHs. These four SFHs model a smoothly evolving main stellar population and, as shown in Sect. <ref>, struggle to recover the true SFR of galaxies experiencing strong enhancements or decreases of their star formation activity. The age gradient obtained with these SFHs thus comes from a lack of mathematical flexibility preventing them from modeling recent variations of the star formation activity. We also performed our tests using a different definition of the age, i.e., the mass-weighted age of the galaxy, and reached the same conclusions as for the simple age definition of the SED fitting method, i.e., the presence of the gradient for every analytical form tested here, as shown in Fig. <ref>.§ MODELING ALL GALAXIES WITH A SINGLE STAR FORMATION HISTORY To recover the physical properties of galaxies, regardless of their type or star formation activity, we need to use a form that models the bulk of the star formation plus an additional flexibility in the recent SFH that can model variations in SFR within the scatter of the MS but also stronger fluctuations due to a starburst or a fast quenching event. However, we have to keep in mind that the goal here is to derive parameters from broad-band SED fitting, and thus we have to take into account that SFH parameters are not well constrained by this method. What we propose here is to use a simple analytical form to model the bulk of the SFH and to add a flexibility in the recent SFH to model anything from small fluctuations to large variations of the recent SFR. §.§ Modeling the bulk of the Star Formation History The ideal SFH to model the main history would be the SFH derived for the MS galaxies, since it is defined to model the bulk of galaxies. However, as we showed in Section <ref>, such a SFH is complicated to parametrize, and the closest analytical function is handled through four free parameters. Four parameters, plus additional ones for the flexibility in the recent SFH, make this mathematical form too complicated for SED modeling. We have to choose a simpler analytical form that can replicate the increasing part of the SFH followed by a smooth decline and provide accurate SFR measurements.
We thus exclude the exponential forms, which do not model the global shape of the MS SFH and have difficulties in reproducing the SFR in given redshift ranges. The forms of the log-normal and delayed SFHs are closest to the MS SFH, with the rising part followed by a smooth decline. From Fig. <ref>, we see that the log-normal function manages to recover the SFR of normal and star-bursting galaxies independently of redshift but fails to reproduce the SFR of rapidly quenched galaxies. However, this analytical form has two free parameters, which can lead to degeneracies, as we know that SFH parameters are difficult to constrain. Indeed, we see in Fig. <ref> that, when used on real galaxies, the log-normal SFH yields a very dispersed MS with a large scatter. This scatter is not compatible with what is expected from the MS computed from luminosities, infrared versus H-band, and thus independently of model assumptions <cit.>. In order not to introduce any artificial scatter in the SFR-M_* relation from SED fitting, we will thus not use the log-normal SFH. We thus propose to use a modified version of the delayed SFH that can be used at all redshifts and on all types of galaxies. Despite limitations in recovering the SFR of very high redshift galaxies, the delayed SFH was found to be able to recover the stellar mass of galaxies, and thus to provide a good modeling of the main part of the SFH compared to exponential SFHs <cit.>. The rigidity implied by the mathematical shape of the SFH is problematic for the SFR but not for the stellar mass. Indeed, testing the delayed SFH on SEDs built from galaxies modeled by GALFORM, <cit.> estimated that the stellar mass was recovered with a mean error lower than 7% and that this model better reproduced the envelope of the SFH than the exponentially decreasing models. Furthermore, the delayed SFH also has the advantage of having only one free parameter, τ_main. The delayed SFH is thus suited to model the bulk of the stellar population emission. §.§ A flexibility in the recent SFH We now modify the delayed SFH in order to correct for the mathematical rigidity creating the artificial age gradient discussed in Section <ref> and to allow the SFH to reach both high and low SFR. To overcome these limitations, we thus add a flexibility in the recent SFH to model an enhancement or a decline of the SFR: SFR(t) ∝ te^-t/τ_main, when t ≤ t_0, and SFR(t) = r_SFR×SFR(t=t_0), when t>t_0, where t_0 is the time when a rapid enhancement or decrease is allowed in the SFH and r_SFR is the ratio between the SFR after t_0 and before t_0. This flexibility allows the SFH to better probe recent variations in the SFH, regardless of their intensity (small variations within the MS scatter or an intense burst or quenching). Such a SFH was proposed in <cit.> and in Merlin et al. (2017, submitted), for instance, to model quenched galaxies, but also in Ciesla et al. (2017, in prep) to model galaxies on the top of the MS or higher. Values of r_SFR larger than 1 correspond to an enhancement of the SFR whereas values lower than 1 correspond to a decrease. This SFH is thus flexible enough to model galaxies inside the scatter of the MS but also starbursts and rapidly quenched galaxies <cit.>. We put this SFH under the same tests as the three other SFHs, except for the grid of models of Fig. <ref>. Indeed, when including the r_SFR parameter, all SFRs can be reached depending on the input values provided by the user.
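A direct transcription of this flexible form reads as follows; the overall normalization is left free (the SFH is defined up to a constant), and the function name is ours.

import numpy as np

def delayed_sfh_flex(t, tau_main, t0, r_sfr):
    # SFR(t) ∝ t·exp(-t/τ_main) for t ≤ t0, then a plateau at r_SFR × SFR(t0):
    # r_SFR > 1 mimics a burst, r_SFR < 1 a rapid quenching
    # (t, τ_main and t0 must be expressed in the same units).
    t = np.asarray(t, dtype=float)
    plateau = r_sfr * t0 * np.exp(-t0 / tau_main)
    return np.where(t <= t0, t * np.exp(-t / tau_main), plateau)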
From Fig. <ref> (black filled stars), we see that this SFH recovers very well the SFR of MS galaxies, with an error lower than ∼10% at all redshifts. We note that it produces SFRs of the same order as the three other assumptions studied here at z=1 and 2, but better estimates at higher redshifts. For starburst galaxies, the ΔSFR/SFR values are very low, comparable to the results obtained with the rising exponential, showing an excellent recovery of starburst SFRs. The same agreement between the true SFR and the one from the delayed SFH with flexibility is found for the quenched galaxies, with an error lower than for the three others at all redshifts. We note, however, that there is still an error of 50% at z=1. In Fig. <ref> (top panel), we show the results obtained with this flexible SFH on the simulated galaxies. There is a less pronounced artificial age gradient, consistent with the actual age distribution of Fig. <ref>a. There is an age gradient, but it is not parallel to the MS like the one observed for the other SFHs. The oldest galaxies are, for instance, not at the bottom of the MS, but at high masses. This is confirmed in Fig. <ref>, where we see that the age gradient is the smallest with this flexible SFH. However, since this simulation is an ideal case, we apply our SFH to the same SED fitting procedure applied on the GOODS-South galaxies presented above (Fig. <ref>, bottom). First, the limitation on the high values of SFR is weakened compared to the other SFH assumptions, and galaxies are allowed to go higher in SFR but also lower. There is no longer a gradient of age along the MS, showing that the origin of this gradient strongly depends on the assumption made on the SFH of the galaxies when performing the SED fitting. Younger ages seem to be found at lower masses whereas older galaxies are found at higher masses. This behavior is closer to what would be expected considering a galaxy evolving on the MS all its life with some random episodes of enhanced star formation followed by a period of lower star formation activity, as could be considered in star formation feedback scenarios, and as simulated in Fig. <ref> (left panel). The same result is obtained with the age defined as the mass-weighted age of the galaxy, i.e., the age gradient is no longer present. However, since the age is known to be weakly constrained from SED fitting <cit.>, we do not interpret this possible trend further. We conclude that adding a flexibility in the recent SFH allows for a more accurate recovery of the SFR of MS, starburst, and rapidly quenched galaxies than SFHs considering one main stellar population. The age gradient found across the MS seems to be SFH dependent and disappears with the use of this flexibility. § CONCLUSIONS In this work, we computed the SFH of the bulk of star-forming galaxies, i.e., of galaxies following the MS all their lives. The SFH of MS galaxies depends on cosmic time but also on the seed mass of the galaxy, which can be interpreted as a proxy for the DM halo mass, itself related to its local environment. These SFHs show a peak of star formation depending on the seed mass, after which the SFR smoothly declines and the stellar mass growth drastically slows down. Following <cit.>, these masses correspond to a hot and massive state of the DM halo preventing the gas from falling onto the galaxy and fueling star formation, and thus possibly being at the origin of the smooth decline of the SFR.
As a consequence, we showed that MS galaxies can enter the passive region of the UVJ diagram while still forming stars following the MS. We showed that the MS SFH is not reproduced by the analytical forms usually used to perform SED fitting, not even by a log-normal SFH or a double-power-law SFH. The best fit is provided by a Right Skew Peak Function, which we use to parametrize the SFH of MS galaxies as a function of seed mass and time. Using the SFH of MS galaxies as a benchmark, we studied the ability of exponentially rising, exponentially declining, delayed, and log-normal SFHs to retrieve the SFR of galaxies from SED fitting. Due to mathematical limitations, the exponentially declining and delayed SFHs struggle to model high SFRs, which starts to be problematic at z>2. The exponentially rising and log-normal SFHs exhibit the opposite behavior, with the ability to reach very high SFRs but not low values, such as those expected at low redshift for massive galaxies. Simulating galaxy SEDs from the MS SFH using the SED modeling code CIGALE, we showed that these four analytical forms recover well the SFR of MS galaxies, with an error dependent on the model and the redshift. They are thus able to probe small variations of SFR within the MS, such as those expected from compaction or varying gas accretion scenarios, with an error ranging from 5 to 40% depending on the SFH assumption and the redshift. The exponentially rising and log-normal SFHs provide very good estimates of the SFR of starburst galaxies, but all four assumptions fail to recover the SFR of rapidly quenched galaxies. We note that these tests were made using information from the far-IR, and using the same code to model the SEDs and to fit them. Although precautions were taken to minimize the potential biases, we emphasize the fact that these results should be considered as a best-case scenario. Displaying the results of the SED fitting performed on the simulated galaxies in the MS diagram, we showed that all four SFH assumptions tested in this work exhibit a gradient of age that is not an outcome of the simulation. As the simulated galaxies are an ideal case, with only the SFH varying from one galaxy to another, we also tested these analytical SFHs on real data. We used a sample of GOODS-South galaxies with redshifts between 1.5 and 2.5 and showed that artificial limitations are produced at the lowest or highest SFRs depending on the model, and that the perfect gradient of age, parallel to the MS, is produced as well. The best SFH that should be used to model galaxies is the MS SFH but, due to the complexity of its parametrization, its use for SED modeling is not reasonable, as SFH parameters are unconstrained from broad-band SED fitting. Since each of the four SFHs tested in this work shows some caveats in recovering high or low SFRs, at different redshifts, and for different galaxy populations, we propose a SFH composed of a delayed form to model the bulk of the stellar population, with the addition of a flexibility in the recent SFH. This SFH provides very good estimates of the SFR of MS, starburst, and rapidly quenched galaxies at all redshifts. Furthermore, when used on the GOODS-South sample, we observe that the age gradient disappears, showing that it is dependent on the SFH assumption made to perform the SED fitting. L. C. thanks C. Schreiber, E. Daddi, M. Boquien, and F. Bournaud for useful discussions. L. C. warmly thanks M. Boquien, Y. Roehlly and D.
Burgarella for developing the new version of CIGALE on which the present work relies. | http://arxiv.org/abs/1706.08531v2 | {
"authors": [
"Laure Ciesla",
"David Elbaz",
"Jeremy Fensch"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20170626180003",
"title": "On the SFR-M$_*$ main sequence archetypal star-formation history and analytical models"
} |
A Robust Data Hiding Process Contributing to the Development of a Semantic Web Jacques M. Bahi, Jean-François Couchot, Nicolas Friot, and Christophe Guyeux FEMTO-ST Institute, UMR 6174 CNRS, Computer Science Laboratory DISC, University of Franche-Comté, Besançon, France {jacques.bahi, jean-francois.couchot, nicolas.friot, christophe.guyeux}@femto-st.fr January 16, 2017 =============================================================================================================================================================================================================================================================================================In this paper, a novel steganographic scheme based on chaotic iterations is proposed. This research work takes place within the information hiding framework, and focuses more specifically on robust steganography. Steganographic algorithms can participate in the development of a semantic web: media on the Internet can be enriched by information related to their contents, authors, etc., leading to better results for the search engines that can deal with such tags. As media can be modified by users for various reasons, it is preferable that these embedded tags resist changes resulting from classical transformations such as, for example, cropping, rotation, image conversion, and so on. This is why a new robust watermarking scheme for semantic search engines is proposed in this document. For the sake of completeness, the robustness of this scheme is finally compared to existing established algorithms. Semantic Web; Information Hiding; Steganography; Robustness; Chaotic Iterations. § INTRODUCTION Social search engines are frequently presented as a next generation approach to query the world wide web. In this conception, contents like pictures or movies are tagged with descriptive labels by contributors, and search results are enriched with these descriptions. These collaborative taggings, used for example in the Flickr <cit.> and Delicious <cit.> websites, can participate in the development of a Semantic Web, in which every Web page contains machine-readable metadata that describe its content. To achieve this goal by embedding such metadata, information hiding technologies can be useful. Indeed, the interest in using such technologies lies in the possibility of realizing social search without websites and databases: descriptions are directly embedded into media, whatever their formats. In the context of this article, the problem consists in embedding tags into Internet media, such that these tags persist even after user transformations. Robustness of the chosen watermarking scheme is thus required in this situation, as descriptions should resist user modifications like resizing, compression, and format conversion, or other classical user transformations in the field. Indeed, quoting Kalker in <cit.>, “Robust watermarking is a mechanism to create a communication channel that is multiplexed into original content [...] It is required that, firstly, the perceptual degradation of the marked content [...] is minimal and, secondly, that the capacity of the watermark channel degrades as a smooth function of the degradation of the marked content”. The development of social web search engines can thus be strengthened by the design of robust information hiding schemes. Having this goal in mind, we explain in this article how to set up a secret communication channel using a new robust steganographic process called 𝒟ℐ_3.
This new scheme has been theoretically presented in <cit.>, together with an evaluation of its security. The main objective of this work is thus to focus on the robustness aspects, by first presenting other schemes known in the literature, and by then presenting this new scheme and evaluating its robustness. This article is thus a first work on the subject, and the comparison of the robustness with other schemes will be realized in future work. The remainder of this document is organized as follows. In Section <ref>, some basic reminders concerning the notion of Most and Least Significant Coefficients are given. In Section <ref>, some well-known steganographic schemes are recalled, namely the YASS <cit.>, nsF5 <cit.>, MMx <cit.>, and HUGO <cit.> algorithms. In the next section the implementation of the steganographic process 𝒟ℐ_3 is detailed, and its robustness study is exposed in Section <ref>. This research work ends with a conclusion section, where our contribution is summarized and intended future research is presented. § MOST AND LEAST SIGNIFICANT COEFFICIENTS We first notice that the terms of the original content x that may be replaced by terms issued from the watermark y are less important than the others: they can be changed without being perceived as such. More generally, a signification function attaches a weight to each term defining a digital medium, depending on its position t. A signification function is a real sequence (u^k)^k ∈N. Let us consider a set of grayscale images stored in the portable graymap format (P3-PGM): each pixel takes one of 256 gray levels, i.e., is memorized with eight bits. In that context, we consider u^k = 8 - (k mod 8) to be the k-th term of a signification function (u^k)^k ∈N. Intuitively, in each group of eight bits (i.e., for each pixel) the first bit has an importance equal to 8, whereas the last bit has an importance equal to 1. This is compliant with the idea that changing the first bit affects the image more than changing the last one. Let (u^k)^k ∈N be a signification function, and m and M be two reals s.t. m < M. * The most significant coefficients (MSCs) of x is the finite vector u_M = ( k | k ∈N and u^k⩾ M and k ≤| x |); * The least significant coefficients (LSCs) of x is the finite vector u_m = ( k | k ∈N and u^k≤ m and k ≤| x |); * The passive coefficients of x is the finite vector u_p = ( k | k ∈N and u^k ∈ ]m;M[ and k ≤| x |). For a given host content x, MSCs are then the ranks of x that describe the relevant part of the image, whereas LSCs translate its less significant parts. When MSCs and LSCs represent a sequence of bits, they are also called Most Significant Bits (MSBs) and Least Significant Bits (LSBs). In the rest of this article, the two notations will be used depending on the context. These two definitions are illustrated in Figure <ref>, where the significance function (u^k) is defined as in Example <ref>, m=5, and M=6.§ STEGANOGRAPHIC SCHEMES To compare our approach with other schemes, we now present recent steganographic approaches, namely YASS (cf. Sect. <ref>), nsF5 (cf. Sect. <ref>), MMx (cf. Sect. <ref>), and HUGO (cf. Sect. <ref>). More details can be found in <cit.>.§.§ YASS YASS (Yet Another Steganographic Scheme) <cit.> is a steganographic approach dedicated to JPEG covers. The main idea of this algorithm is to hide data into 8× 8 blocks randomly chosen inside B× B blocks (where B is greater than 8) instead of the standard 8× 8 grids used by JPEG compression. The self-calibration process commonly embedded into blind steganalysis schemes is then confused by the approach.
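As an illustration of this randomized selection, a minimal sketch (our own naming and simplifications, not the reference implementation) draws the origin of the hidden 8×8 grid inside each B×B block with a keyed PRNG:

import numpy as np

def yass_block_origins(height, width, big=10, key=42):
    # For each B×B big block of the cover, draw the upper-left corner of the
    # 8×8 sub-block that will carry payload, reproducibly from the secret key.
    rng = np.random.default_rng(key)
    origins = []
    for r0 in range(0, height - big + 1, big):
        for c0 in range(0, width - big + 1, big):
            dr = int(rng.integers(0, big - 8 + 1))   # vertical slack in the big block
            dc = int(rng.integers(0, big - 8 + 1))   # horizontal slack
            origins.append((r0 + dr, c0 + dc))
    return origins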
In the paper <cit.>, further variants of YASS have been proposed simultaneously to enlarge the embedding rate and to improve the randomization step of block selection. More precisely, given a message m to hide and a block size B, B ≥ 8, the YASS algorithm proceeds as follows. * Computation of m', which is the Repeat-Accumulate error correction code of m. * In each big block of size B × B of the cover, successively do: * Random selection of an 8 × 8 block b with respect to a secret key. * Two-dimensional DCT transformation of b and normalization of the coefficients w.r.t. a predefined quantization table. The resulting matrix is further referred to as b'. * A fragment of m' is embedded into some LSBs of b'. Let b” be the resulting matrix. * The matrix b” is decompressed back to the spatial domain, leading to a new B × B block. §.§ nsF5 The nsF5 algorithm <cit.> extends the F5 algorithm <cit.>. Let us first have a closer look at the latter. First of all, as far as we know, F5 is the first steganographic approach that solves the problem of leaving a part (often the end) of the file unchanged. To achieve this, a subset of all the LSBs is computed thanks to a pseudo-random number generator seeded with a user-defined key. Next, this subset is split into blocks of x bits. The algorithm takes advantage of binary matrix embedding to increase its efficiency. Let us explain this embedding on a small illustrative example where a part m of the message has to be embedded into the x LSBs of pixels, these being respectively a 3-bit column vector and a 7-bit column vector. Let then H be the binary Hamming matrix H = ( [ 0 0 0 1 1 1 1; 0 1 1 0 0 1 1; 1 0 1 0 1 0 1 ]). The objective is to modify x to get y s.t. m = Hy. In this algebra, the sum and the product respectively correspond to the exclusive-or and to the AND Boolean operators. If Hx is already equal to m, nothing has to be changed and x can be sent. Otherwise we consider the difference δ = d(m,Hx), which is expressed as a vector δ = ( [ δ_1; δ_2; δ_3 ]), where δ_i is 0 if m_i = (Hx)_i and 1 otherwise. Let us thus consider the jth column of H, which is equal to δ. We denote by x^j the vector we obtain by switching the jth component of x, that is, x^j = (x_1 , …, 1 - x_j, …, x_n ). It is not hard to see that if y is x^j, then m = Hy. It is then possible to embed 3 bits into only 7 LSBs of pixels by making on average 1-2^-3 changes. More generally, the F5 embedding efficiency should theoretically be p/(1-2^-p). However, the event where the coefficient resulting from this LSB switch becomes zero (usually referred to as shrinkage) may occur. In that case, the recipient cannot determine whether the coefficient was -1 or +1 and has been changed to 0 by the algorithm, or was initially 0. The F5 scheme solves this problem first by defining the LSB with the following (not even) function: LSB(x) = 1 - (x mod 2) if x< 0, and LSB(x) = x mod 2 otherwise. Next, if the coefficient has to be changed to 0, the same message bit is re-embedded in the next group of x coefficients' LSBs.
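The [7,4] matrix embedding just described is easy to make concrete; the sketch below is our own illustrative code (shrinkage handling omitted), not the original F5 implementation:

import numpy as np

# Parity-check matrix of the text: column j is the binary writing of j.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def f5_embed(x, m):
    # Flip at most one of the 7 LSBs x so that H·y = m (mod 2): the syndrome
    # δ = m ⊕ Hx, read as a binary number, is the 1-based index of the bit to flip.
    x, m = np.array(x) % 2, np.array(m) % 2
    j = int("".join(str(b) for b in (m + H @ x) % 2), 2)   # 0 means x already fits
    y = x.copy()
    if j:
        y[j - 1] ^= 1
    return y

def f5_extract(y):
    return (H @ (np.array(y) % 2)) % 2                     # the 3 hidden bits

y = f5_embed([1, 0, 1, 1, 0, 0, 1], [1, 1, 0])
assert list(f5_extract(y)) == [1, 1, 0]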
The scheme nsF5 focuses on the Hamming coding and ad hoc shrinkage-removal steps. It replaces them with a wet paper code approach that is based on a random binary matrix. More precisely, let D be a random binary matrix of size x × n without replicated nor null columns: consider for instance a subset of {1, …, 2^x} of cardinality n and write its elements as binary numbers. The subset is generated thanks to a PRNG seeded with a shared key. In this block of size x, one chooses to embed only k elements of the message m. By abuse, the restriction of the message is again called m. There thus remain x-k (wet) indexes/places where the information should not be stored. Such indexes are also generated with the keyed PRNG. Let v be defined by the following equation: Dv = δ(m,Dx). This equation may be solved by Gaussian reduction or by other more efficient algorithms. If there is a solution, one has the list of indexes to modify in the cover. The nsF5 scheme implements such an optimized algorithm, namely the LT codes. §.§ MMx Basically, the MMx algorithm <cit.> embeds the message in a selected set of cover coefficients' LSBs using Hamming codes, as the F5 scheme does. However, instead of reducing the number of modified elements as much as possible, this scheme aims at reducing the embedding impact. To achieve this, it allows more than one element to be modified if this decreases the distortion. Let us start again with an example with a [7,4] Hamming code, i.e., let us embed 3 bits into 7 DCT coefficients, D_1, …, D_7. Without going into details, let ρ_1, …, ρ_7 be the embedding impacts of modifying the coefficients D_1, …, D_7 (see <cit.> for a formal definition of ρ). Modifying the element at index j leads to a distortion equal to ρ_j. However, instead of switching the value at index j, one should consider finding other columns of H, j_1 and j_2 for instance, s.t. their sum is equal to the jth column, and compare ρ_j with ρ_j_1 + ρ_j_2. If one of these sums is less than ρ_j, the sender has to change these coefficients instead of the jth one. The number of searched indexes (2 in the previous example) gives the name of the algorithm. For instance, in MM3, one checks whether the message can be embedded by modifying 3 pixels or fewer each time. §.§ HUGO The HUGO <cit.> steganographic scheme is mainly designed to minimize the distortion caused by embedding. To achieve this, it is firstly based on an image model given as SPAM <cit.> features and next integrates an image correction to further reduce distortion. What follows describes these two steps. The former first computes the SPAM features. Such calculations synthesize the probabilities that the difference between consecutive horizontal (resp. vertical, diagonal) pixels belongs to a set of pixel values which are close to the current pixel value and whose radius is a parameter of the approach. Then, a Fisher linear discriminant method defines the radius and chooses among the directions (horizontal, vertical, etc.) of analyzed pixels the one that gives the best separator for detecting embedding changes. With such instantiated coefficients, HUGO can synthesize the embedding cost as a function D(X,Y) that evaluates the distortion between X and Y. Then HUGO computes the matrices of ρ_i,j = max(D(X,X^(i,j)+)_i,j, D(X,X^(i,j)-)_i,j) such that X^(i,j)+ (resp. X^(i,j)-) is the cover image X where the (i,j)th pixel has been increased (resp. decreased) by 1. The order in which pixels are modified is critical: HUGO, surprisingly, modifies pixels in decreasing order of ρ_i,j. Starting with Y=X, it increases or decreases the (i,j)th pixel to get the minimal value of D(Y,Y^(i,j)+)_i,j and D(Y,Y^(i,j)-)_i,j. The matrix Y is thus updated at each round. § THE NEW STEGANOGRAPHIC PROCESS 𝒟ℐ_3 §.§ Implementation In this section, a new algorithm inspired by the schemes 𝒞ℐ𝒲_1 and 𝒞ℐ𝒮_2, respectively described in <cit.> and <cit.>, is presented. Compared to the first one, it is a steganographic scheme, not just a watermarking technique. Unlike 𝒞ℐ𝒮_2, which requires embedding keys with three strategies, only one strategy is required for 𝒟ℐ_3.
So, compared to 𝒞ℐ𝒮_2, which is also a steganographic process, it is easier to implement for Internet applications, especially in order to contribute to a semantic web. Moreover, since 𝒟ℐ_3 is a particular instance of 𝒞ℐ𝒮_2, it is clearly faster, because in 𝒟ℐ_3 there is no operation to mix the message, contrary to the initial scheme. The fast execution of such an algorithm is critical for Internet applications. In the following algorithms, these notations are used: S denotes the embedding and extraction strategy, and H the host content or the stego-content depending on the context. LSC denotes the old or new LSCs of the host or stego-content H, depending on the context too. N denotes the number of LSCs, λ the number of iterations to realize, M the secret message, and P the width of the message (number of bits). Our new scheme, theoretically presented in <cit.>, is here described by three main algorithms: * The first one, detailed in Algorithm <ref>, allows one to generate the embedding strategy of the system, which is a part of the embedding key together with the choice of the LSCs and the number of iterations to realize. * The second one, detailed in Algorithm <ref>, allows one to embed the message into the LSCs of the cover medium using the strategy. The strategy has been generated by the first algorithm, and the same number of iterations is used. * The last one, detailed in Algorithm <ref>, allows one to extract the secret message from the LSCs of the medium (the stego-content) using the strategy, which is a part of the extraction key together with the width of the message. In addition to these three functions, two other complementary functions have to be used: * The first one, detailed in Algorithm <ref>, allows one to extract the MSCs, LSCs, and passive coefficients from the host content. Its implementation is based on the concept of signification function described in Definition <ref>. * The last one, detailed in Algorithm <ref>, allows one to rebuild the new host content (the stego-content) from the corresponding MSCs, LSCs, and passive coefficients. Its implementation is also based on the concept of signification function described in Definition <ref>. This function realizes the inverse operation of the previous one. The two previous algorithms have to be implemented by the user and adjusted according to each application context: either in a spatial description, in a frequency description, or in another description. They correspond to the theoretical concept described in Definition <ref>. For example, Algorithm <ref> in the spatial domain can correspond to the extraction of the last 3 bits of each pixel as LSCs, the first 3 bits as MSCs, and the 2 central bits as passive coefficients.
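To fix ideas, here is a deliberately simplified stand-in for Algorithms <ref>-<ref>; the genuine 𝒟ℐ_3 strategy is produced by chaotic iterations and may revisit positions, whereas this toy draws positions without replacement (so that extraction is trivially exact), and every name below is illustrative.

import numpy as np

def make_strategy(key, n_lsc, lam):
    # Keyed, reproducible choice of which LSC is visited at each of the λ iterations.
    return np.random.default_rng(key).permutation(n_lsc)[:lam]

def embed(lsc, message, strategy):
    # At iteration k the visited LSC receives bit k mod P of the message (width P).
    out = np.array(lsc).copy()
    for k, pos in enumerate(strategy):
        out[pos] = message[k % len(message)]
    return out

def extract(stego_lsc, strategy, width):
    # Replay the strategy: the last visit in each residue class k mod P gives the bit.
    msg = np.zeros(width, dtype=int)
    for k, pos in enumerate(strategy):
        msg[k % width] = stego_lsc[pos]
    return msg

lsc = np.random.default_rng(7).integers(0, 2, 64)          # toy LSC plane
stego = embed(lsc, [1, 0, 1, 1], make_strategy(13, 64, 32))
assert list(extract(stego, make_strategy(13, 64, 32), 4)) == [1, 0, 1, 1]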
§.§ Discussion We first notice that our 𝒟ℐ_3 scheme embeds the message in LSBs, as all the other approaches do. Furthermore, among all the LSBs, the choice of those which are modified according to the message is based on a cryptographically secure PRNG, whereas F5, and thus nsF5, only require a PRNG. Finally, in this scheme, we have postponed the optimization consisting in considering again a subset of them according to the distortion their modification may induce. According to us, further theoretical studies are necessary to take this feature into consideration. In future work, it is planned to compare the robustness and efficiency of all the schemes in the context of the semantic web. To initiate this study in this first article, the robustness of 𝒟ℐ_3 is detailed in the next section. § ROBUSTNESS STUDY This section evaluates the robustness of our approach <cit.>. Each experiment is built on a set of 50 images, which are randomly selected from the database taken from the BOSS contest <cit.>. Each cover is a 512× 512 greyscale digital image. The relative payload is always set to 0.1 bit per pixel. Under that constraint, the embedded message m is a sequence of 26214 randomly generated bits. Following the same model as robustness studies in previous similar works in the field of information hiding, we choose some classical attacks, namely cropping, compression, and rotation, studied in this research work. Other attacks and geometric transformations will be explored in a complementary study. Testing the robustness of the approach is achieved by successively applying attacks on the stego-content images. Differences between the message that is extracted from the attacked image and the original one are computed and expressed as a percentage. To deal with the cropping attack, different percentages of cropping (from 1% to 81%) are applied to the stego-content image. Fig. <ref> (c) presents the effects of such an attack. We address robustness against JPEG and JPEG 2000 compression. Results are respectively presented in Fig. <ref> (a) and in Fig. <ref> (b). Attacks based on geometric transformations are addressed through rotation attacks: two opposite rotations of angle θ are successively applied around the center of the image. In these geometric transformations, angles range from 2 to 20 degrees. The effects of such an attack are also presented in Fig. <ref> (d). From all these experiments, one can first conclude that the steganographic scheme does not present obvious drawbacks and resists all the attacks: all the percentage differences are so far less than 50%. The comparison with the robustness of the other steganographic schemes exposed in this work will be realized in a complementary study, and the best utilization of each one in several contexts will be discussed.
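The evaluation protocol of this section can be summarized by the following sketch, where embed_fn and extract_fn stand for the 𝒟ℐ_3 routines, attack for any of the transformations above, and the cropping example is a hypothetical stand-in (here a centred zeroing of a given fraction of the pixels):

import numpy as np

def bit_error_percentage(original, extracted):
    o, e = np.asarray(original), np.asarray(extracted)
    return 100.0 * np.count_nonzero(o != e) / o.size

def evaluate_attack(cover, message, attack, embed_fn, extract_fn, key=13):
    # Embed, attack the stego image, re-extract, report % of altered bits.
    stego = embed_fn(cover, message, key)
    recovered = extract_fn(attack(stego), key, len(message))
    return bit_error_percentage(message, recovered)

def crop_attack(img, percent=20.0):
    # Zero a centred square covering `percent` % of the image area.
    h, w = img.shape
    ch, cw = int(h * (percent / 100.0) ** 0.5), int(w * (percent / 100.0) ** 0.5)
    out = img.copy()
    out[(h - ch) // 2:(h + ch) // 2, (w - cw) // 2:(w + cw) // 2] = 0
    return out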
§ CONCLUSION AND FUTURE WORK In this research work, a new information hiding algorithm has been introduced to contribute to the semantic web. We have focused our work on the robustness aspect. The security has been studied in another work <cit.>. Even if this new scheme 𝒟ℐ_3 does not possess topological properties (unlike 𝒞ℐ𝒮_2 <cit.>), its level of security seems to be sufficient for Internet applications. Particularly in the framework of the semantic web, it is required to have robust steganographic processes. The security aspect is less important in this context. Indeed, it is important that the enrichment information persists after an attack, especially after JPEG and JPEG 2000 compression, which are the two major attacks used in an Internet framework. Additionally, this new scheme is faster than 𝒞ℐ𝒮_2. This is a major advantage for a utilization through the Internet, to respect the response times of websites. In a future work we intend to prove rigorously that 𝒟ℐ_3 is not topologically secure. The tests of robustness will be realized on a larger set of images of different types and sizes, using the resources of the Mésocentre de calcul de Franche-Comté <cit.> (a High-Performance Computing (HPC) center) and the Jace environment <cit.>, to take benefit of parallelism. So, the robustness and efficiency of our scheme 𝒟ℐ_3 will be compared to other schemes in order to show the best utilization in several contexts. Other kinds of attacks will be explored to evaluate more completely the robustness of the proposed scheme. For instance, the robustness of 𝒟ℐ_3 against Gaussian blur, rotation, contrast, and zeroing attacks will be considered, and compared with a larger set of existing steganographic schemes, such as those described in this article. Unfortunately, these academic algorithms are mainly designed to show their ability in embedding. The decoding aspect is rarely treated, and rarely implemented at all. Finally, a first web search engine compatible with the proposed robust watermarking scheme will be written, and automatic tagging of materials found on the Internet will be realized, to show the effectiveness of the approach. | http://arxiv.org/abs/1706.08764v1 | {
"authors": [
"Jacques M. Bahi",
"Jean-François Couchot",
"Nicolas Friot",
"Christophe Guyeux"
],
"categories": [
"cs.MM"
],
"primary_category": "cs.MM",
"published": "20170627102321",
"title": "A Robust Data Hiding Process Contributing to the Development of a Semantic Web"
} |
[email protected] Departamento de Física, Universidade Federal de Campina Grande, Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil [email protected] Departamento de Física, Universidade Federal de Campina Grande, Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, 58051-970 João Pessoa, Paraíba, Brazil In this paper we present a QCD-motivated model that mimics QCD theory. We examine the characteristics of the gauge field coupled with the color dielectric function (G) in the presence of temperature (T). The aim is to achieve confinement at low temperatures T<T_c (T_c is the critical temperature), similar to what occurs among quarks and gluons in hadrons at low energies. Also, we investigate scalar glueballs and the QCD string tension and the effect of temperature on them. To achieve this, we use the phenomenon of a color dielectric function coupled to gauge fields in a slowly varying tachyon medium. This method is suitable for analytically computing the resulting potential, the glueball masses, and the string tension associated with the confinement at a finite temperature. We demonstrate that the color dielectric function changes Maxwell's equation as a function of the tachyon fields and induces the electric field in a way that brings about confinement during the tachyon condensation below the critical temperature. The (de)-confinement transition in tachyonic matter at finite temperature Francisco A. Brito December 30, 2023 ========================================================================= § INTRODUCTION Quantum chromodynamics (QCD) is a theory that attempts to explain the strong interactions carried by gluons that keep quarks and gluons in a confined state in hadrons. The success of this theory depends on asymptotic freedom <cit.>. QCD forms the basis of nuclear physics and enables us to appreciate and explain the features of matter. However, non-relativistic perturbative QCD theories cannot accurately reproduce the charmonium and bottomonium spectra, unless the leading renormalon terms cancel out. In this case, the net energy of such bound states from QCD potentials is in agreement with phenomenological potentials for the range 0.5 GeV^-1 ≲ r ≲ 3 GeV^-1 <cit.>. Thus, for many years now, it has been accepted that such systems are described by phenomenological potential models. It has been realized that, owing to gluon confinement, the QCD vacuum shows the characteristics of a dielectric medium <cit.>. This idea has been employed in developing several models, including the MIT bag model <cit.>, the SLAC bag model <cit.>, the Cornell potential for heavy quarks <cit.>, and many soliton models <cit.> which are used to describe hadron spectroscopy. However, until recently, no successful effort had been made to compute the color dielectric function representing the QCD vacuum from quantum theory. In view of this, all the models developed using the dielectric function approach were considered phenomenological, though they agree with QCD as shown in <cit.>. There exists some similarity between QCD and QED (Quantum Electrodynamics) in terms of the successes of both theories, but they depart from each other in the strength, medium and dynamics of their interactions. QED explains the interaction between charged particles while QCD explains the strong interaction between sub-atomic particles. QED creates a screening effect which decreases the net electric charge as the inter-(anti-)particle distance increases.
The opposite effect is observed in QCD, where an anti-screening occurs: the net color charge increases with increasing distance between (anti-)particle pairs. This similarity and these differences make it interesting to advance a study of one in terms of the other. In this work we will explore QCD in terms of QED. The most popular potential for heavy quarks in the confined state is the Cornell potential, known to be v_c=-a/r+br, where a and b are positive constants. This potential comprises a linearly increasing part (infrared interaction) and a Coulombic part (ultraviolet interaction) <cit.>. In this paper, we will establish that the phenomenon of confinement is achievable with an electric field immersed in a color dielectric medium (G) at a finite temperature T. We will also show that the net potential resulting from the confinement at T=0 is similar to the Cornell potential for the confinement of heavy quarks and gluons in hadrons at low energies. Our dielectric function G is identified with tachyon condensation <cit.> at low energies. The tachyon matter creates the necessary conditions for a confinement phase at low temperatures (T<T_c) and a de-confinement phase at high temperatures (T≥ T_c). The role of the color dielectric function G(ϕ) is to generate the strong interaction needed for the (de-)confinement of the associated colored particles <cit.>. Much work has been done on determining the potentials for quark confinement as a function of temperature, commonly called thermal QCD, by using a number of different approaches including Wilson and Polyakov loop corrections <cit.>. Most of the challenges posed by these models stem from reproducing the proper behavior of the QCD string tension at all temperatures as compared with lattice simulation results. The expected behavior of the string tension, as suggested by many simulation results, is a sharp decrease with temperature for T<T_c, a vanishing at T=T_c, and a slow decrease for T>T_c <cit.>. The main purpose of this paper is to determine analytically the net static potential for the quark and gluon confinement in 3+1 dimensions as a function of temperature. We will also obtain the QCD string tension and the glueball masses associated with it as a function of temperature and study their behavior. We will use an Abelian theory, as it is applied to QED, but the dielectric function G(r) will be carefully chosen to give the expected (de-)confinement in the chosen tachyon matter <cit.>. It has already been shown that the Abelian part of the non-Abelian QCD string tension constitutes 92% of the total, and it comprises the linear part of the net potential. Hence, we can estimate the non-Abelian theory using an Abelian approach <cit.>. This fact also permits us to study the QCD theory phenomenologically to establish the confinement of the quarks and gluons inside the hadron <cit.>. The self-interacting scalar field ϕ(r) describes the dynamics of the dielectric function in the tachyon matter. Thus, we shall use a Lagrangian that collectively carries information on the dynamics of the gauge field, the scalar field associated with the tachyon dynamics, and the temperature. The motivation for using this approach is twofold. Firstly, we are able to study QCD phenomenologically by identifying the color dielectric function naturally with the tachyon potential. Secondly, one can apply such a phenomenological approach to obtain models that mimic QCD in stringy models, where temperature effects in tachyon potentials <cit.> can be considered in brane confinement scenarios <cit.>.
In this case, it may also bring new insight into confining supersymmetric gauge theories such as the Seiberg-Witten theory <cit.>, which deals with electric-magnetic duality and develops magnetic monopole condensation. Thus, we choose a tachyon potential which is expected to condense at some value <cit.> at the same time that the gauge field is confined. This phenomenon coincides with the dual Higgs mechanism, where the dual gauge field becomes massive <cit.>. This means that in the infrared the QCD vacuum is a perfect color dielectric medium and therefore a dual superconductor in which magnetic monopole condensation leads to electric field confinement <cit.>. The paper is organized as follows. In Secs. <ref> and <ref> we review both the theory of electromagnetism in a dynamical dielectric medium and gluodynamics, with its associated QCD-like vacua, respectively. In Sec. <ref> we introduce the tachyon Lagrangian coupled with temperature and its associated effective potential. In the same section, we study glueball masses at zero temperature (T=0) and at a finite temperature (T) and analyze their characteristics. In the latter case we find analytically the net potential for the confinement of quarks and gluons as a function of temperature. We analyze the characteristics of the net confinement potential. Also, we analyze the QCD string tension as a function of temperature. In Sec. <ref> we present our final comments. § MAXWELL'S EQUATIONS MODIFIED BY DIELECTRIC FUNCTION In this section we will review the theory of electromagnetism in a color dielectric medium to set the pace for us to explain the phenomenon of confinement. Beginning with the Maxwell Lagrangian with no sources we have ℒ=-(1/4)F_μν F^μν. Its equations of motion are ∂_μ F^μν=0. It is worth mentioning that though the Lagrangian has no source term, its equations of motion still admit solutions with spherical symmetry <cit.>. Consider the gauge field in a dielectric medium, G(ϕ), with ϕ the field describing the dynamics of the medium. The Lagrangian above can be rewritten as ℒ=-(1/4) G(ϕ) F_μν F^μν. Its equations of motion are ∂_μ[G(ϕ) F^μν]=0. Let us impose the restrictions μ=1,2,3 and ν=0. Thus, ∇·[G(ϕ)E]=0. The magnetic field is not of interest in this work, because we concentrate on the electric field confinement only, so the indices were deliberately chosen to avoid the magnetic field. We begin with Eq. (<ref>) to determine the solution of the electric field E in the dielectric medium G(ϕ). As stated above, all the solutions will be computed in spherical symmetry, i.e., E(r) and ϕ(r) are only radially (r) dependent, and G(ϕ) follows the same definition; thus ∇·[G(ϕ)E]=(1/r^2) ∂/∂r (r^2 G(ϕ) E_r)=0, and E_r = λ/(r^2 G(ϕ)). Here, λ is the integration constant, which can be related to the electric charge as λ=q/(4πε_0). Therefore, the electric field solution E in the dielectric medium G(ϕ) can be represented as E= q/(4πε_0 r^2 G(ϕ)), where E=|E|=E_r. Consequently, the dielectric medium changes the strength of E as a function of ϕ. The coupling between electromagnetism and scalar field dynamics at finite temperature is given by the effective Lagrangian ℒ= -(1/4) G(ϕ) F_μν F^μν +(1/2)∂_μϕ∂^μϕ -V_eff(ϕ). Its effective potential as a function of the scalar field ϕ at a finite temperature T has already been found and is given by <cit.> V_eff(ϕ)=V(ϕ)+(T^2/24) V_ϕϕ(ϕ), where V_ϕϕ is the second derivative of V(ϕ).
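A quick numerical illustration of these two expressions (arbitrary units, with the quartic V(ϕ) = (1/2)(α^2ϕ^2-1)^2 that will be adopted later in the text) may help fix ideas; note in particular that any dielectric profile falling off as G ∝ 1/r^2 turns the Coulomb-like field into a uniform one:

import numpy as np

lam = 1.0                                  # λ = q/(4πε_0), arbitrary units
r = np.linspace(0.5, 20.0, 200)
G = 1.0 / r**2                             # assumed asymptotic dielectric profile
E = lam / (r**2 * G)                       # E(r) = λ/(r²G) -> constant (here 1.0)

def V_eff(phi, T, alpha=1.0):
    # V_eff = V + (T²/24) V_ϕϕ for V(ϕ) = ½(α²ϕ² − 1)², so V_ϕϕ = 6α⁴ϕ² − 2α².
    V = 0.5 * (alpha**2 * phi**2 - 1.0) ** 2
    Vpp = 6.0 * alpha**4 * phi**2 - 2.0 * alpha**2
    return V + T**2 / 24.0 * Vpp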
The behavior of the dielectric function G(ϕ) will be obtained from the equations of motion <cit.> of the above Lagrangian. The equations of motion for the various fields, i.e., the gauge field A_μ and the scalar field ϕ, found in the above Lagrangian are given as ∂_μ[G(ϕ)F^μν]=0, and ∂_μ∂^μϕ + (1/4)(∂ G(ϕ)/∂ϕ) F_μν F^μν+(T^2/24)∂ V_ϕϕ/∂ϕ+∂ V(ϕ)/∂ϕ=0. The equations of motion for the scalar field ϕ and the gauge field A_ν with radial symmetry are (1/r^2) d/dr (r^2 G(ϕ)E)=0, and (1/r^2) d/dr( r^2 dϕ/dr) =(T^2/24)∂ V_ϕϕ/∂ϕ-(1/2)(∂ G(ϕ)/∂ϕ) E^2 +∂ V(ϕ)/∂ϕ. As has been shown above, we can identify that the solution of Eq. (<ref>) is that given by Eq. (<ref>). To establish the strong interaction and its resultant confinement, our dielectric function needs to asymptotically satisfy the conditions G(ϕ(r))→ 0 as r→ r_*, and G(ϕ(r))→ 1 as r→ 0, where r_* stands for the scale where the confinement starts to become effective. Particularly, r_*=∞ for G(ϕ(∞))∼1/r^2, and from Eq. (<ref>) we find E≡constant. This uniform electric field behavior agrees with confinement everywhere.§ GLUODYNAMICS AND QCD-LIKE VACUUM In this section we analyze the gluodynamics in the tachyon matter. The Lagrangian for gluodynamics is given as ℒ'=-(1/4)F^a_μνF^aμν+|ϵ_v|, where -|ϵ_v| represents the vacuum energy density that keeps the scale and the conformal symmetries of gluodynamics broken. Gluodynamics is generally known to be scale and conformally invariant in the classical regime, but the symmetry breaks down at the quantum level due to the non-vanishing gluon condensate ⟨ F^a_μνF^aμν⟩ > 0 <cit.>. This is what brings about the anomaly in the trace of the QCD energy-momentum tensor (θ^μν), θ^μ_μ=(β(g)/2g) F^a_μνF^aμν. The leading term of the β-function of the coupling g is given by β(g)=-11g^3/(4π)^2, with the vacuum expectation value given as ⟨θ^μ_μ⟩=-4|ϵ_v|. The purpose of this section is to compute the energy-momentum tensor of Eq. (<ref>) at T=0 and reconcile the results with Eq. (<ref>), as applied in Ref. <cit.>. This will require some cancellations between the tachyon field and the gluon contributions to the vacuum density, θ^μ_μ=g^μν(2∂ℒ/∂ g^μν-g_μνℒ) +8∂^2_μϕ. The last term is the total derivative of the tachyon field, which is sometimes left out in the energy-momentum tensor computation, but it is sometimes necessary in quantum field theory due to some Ward identities <cit.>. Using the equation of motion Eq. (<ref>) in Eq. (<ref>) yields θ^μ_μ= G'(ϕ)F^μνF_μν+4V'(ϕ), where G'(ϕ) and V'(ϕ) represent the first derivatives of the “effective” color dielectric function and the “effective” potential, respectively. Thus, we can relate Eq. (<ref>) and Eq. (<ref>) as ⟨ G'(ϕ)F^μνF_μν⟩=-4⟨|ϵ_v|+V'(ϕ)⟩. It is expected that in the classical limit |ϵ_v|→ 0 the classical equation θ^μ_μ=0 should be recovered. Therefore, we redefine the potential to include the vacuum energy density in the form <cit.> V(ϕ)→-|ϵ_v|V(ϕ). Consequently, Eq. (<ref>) becomes ⟨ G'(ϕ)F^μνF_μν⟩=4|ϵ_v|⟨ V'(ϕ)-1⟩. This equation guarantees the correct classical behavior, where Eq. (<ref>) vanishes as |ϵ_v|→ 0, as expected. It is important to add that, with some quantum corrections, we can obtain non-vanishing contributions to ⟨θ^μ_μ⟩ in the same limit. This result is similar to the results obtained in Ref. <cit.> for dilaton theory.
(<ref>) we can redefine the effective potential to include the vacuum energy density as

V_eff → -|ϵ_v|V_eff(ϕ,T).

Therefore, the gluon condensate for this potential becomes

⟨G'(ϕ)F^μν F_μν⟩ = 4|ϵ_v|⟨V_eff'(ϕ,T) - 1⟩.

This equation also vanishes as |ϵ_v| → 0, consistently with the classical prediction, and we recover exactly Eq. (<ref>) at T=0. We will soon find that the magnitude of ⟨G'(ϕ)F^μν F_μν⟩ decreases at T=T_c, which is also expected.

§ TACHYON CONDENSATION AND CONFINEMENT

In this section we establish the relationship between tachyon condensation and confinement. Tachyons are particles that are faster than light, have negative masses and are unstable. Their existence is presumed theoretically, in the same way as that of magnetic monopoles. Tachyons, just like magnetic monopoles, have never been observed in isolation in nature. In superstring theory, they are presumed to interact with other particles, or with each other at higher orders, to form a tachyon condensate <cit.>. Tachyon condensation is directly related to confinement, just as monopole condensation is.

§.§ Tachyon Lagrangian with electromagnetic field and temperature

In Eq. (<ref>) we have the potential V(ϕ) and the dielectric function G(ϕ) as functions of ϕ(r). It will be convenient to restrict these choices by setting G(ϕ(r)) = V(ϕ(r)). The propriety of this assertion will be demonstrated shortly, working with a Lagrangian that characterizes the dynamics of the tachyon field ϕ(r).

To start with, let us consider the Lagrangian of Eq. (<ref>) without the temperature correction, as in <cit.>:

ℒ = -(1/4) G(ϕ) F_μν F^μν + (1/2) ∂_μϕ ∂^μϕ - V(ϕ).

The equation of motion of this Lagrangian is

∂_μ∂^μϕ + (1/4)(∂G(ϕ)/∂ϕ) F_μν F^μν + ∂V(ϕ)/∂ϕ = 0.

For simplicity let us consider the fields in one dimension x only, i.e.

ϕ = ϕ(x),  A_μ = A_μ(x).

The resulting equations of motion are

d/dx [G(ϕ)E] = 0,

-d²ϕ/dx² - (1/2)(∂G/∂ϕ)E² + ∂V/∂ϕ = 0,

where we used F^{01} = E. Integrating Eq. (<ref>) we have

G(ϕ)E = q  ⟹  E = q/G(ϕ).

Substituting Eq. (<ref>) into Eq. (<ref>), we find

-d²ϕ/dx² - (1/2)(∂G/∂ϕ) q²/G(ϕ)² + ∂V/∂ϕ = 0.

We now make use of the tachyon Lagrangian commonly known in string theory, with tachyon field T̃(x) and electric field E(x). For slowly varying tachyon fields, we can expand the Lagrangian in a power series as <cit.>

e^{-1}ℒ = -V(T̃)√(1 - T̃'² + F_{01}F^{01})
        = -V(T̃)[1 - (1/2)T̃'² + (1/2)F_{01}F^{01} + ...]
        = -V(T̃) + (1/2)V(T̃)T̃'² - (1/2)V(T̃)F_{01}F^{01} + ...
        = -V(ϕ) + (1/2)ϕ'² - (1/2)V(ϕ)F_{01}F^{01} + ...,

where e = √(|g|) encodes the general space-time metric. This relation also holds in 3+1 dimensions for ϕ a function of r, which can be associated with x. In Eq. (<ref>) we used the identification

V(T̃(ϕ)) = (∂ϕ/∂T̃)²  ⟹  (1/2)V(T̃)T̃'² = (1/2)(∂ϕ/∂T̃ · ∂T̃/∂x)² = (1/2)ϕ'²,

with ϕ = f(T̃), or T̃ = f^{-1}(ϕ). Comparing Eq. (<ref>) with Eq. (<ref>) we find the equality G = V. This result also holds for Eq. (<ref>), up to the thermal correction term. From the perspective of string theory, the thermal correction affects the tachyon potential V(T̃) of the original tachyon Lagrangian (<ref>); see e.g. <cit.> and references therein. In our context we restrict ourselves to effective quantum field theory, where the one-loop thermal correction from the scalar sector affects V(ϕ) as given in the Lagrangian (<ref>).
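As an editorial aside, the small-gradient expansion above is easy to verify with sympy by introducing a bookkeeping parameter ε that counts powers of the fields (here F01 stands for the single independent field-strength component):

import sympy as sp

Tp, F, V, eps = sp.symbols("Tprime F01 V epsilon", positive=True)
L = -V * sp.sqrt(1 - (eps*Tp)**2 + (eps*F)**2)    # DBI-type Lagrangian, e^{-1} L
series = sp.expand(L.series(eps, 0, 3).removeO().subs(eps, 1))
print(series)    # -V + V*Tprime**2/2 - V*F01**2/2, matching the expansion above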
§.§.§ Confinement potential for the electric field in three dimensions as a function of temperature

For the tachyon Lagrangian in Eq. (<ref>), it is now clear that the dielectric function G(ϕ) is equal to the potential V(ϕ). Now, we choose the appropriate classical tachyon potential that produces the desired behavior for confinement and de-confinement in the presence of temperature. We choose

V(ϕ) = (1/2)[α²ϕ² - 1]².

In 3+1 dimensions in radial coordinates, Eq. (<ref>) can be rewritten as

-(1/r²) d/dr (r² dϕ/dr) + (T²/24) ∂V_ϕϕ/∂ϕ - (1/2)(∂G(ϕ)/∂ϕ) E² + ∂V(ϕ)/∂ϕ = 0.

Recall that the solution for the electric field is

E(r) = q/(4πε_0 G(ϕ) r²).

Substituting this solution into Eq. (<ref>) one finds

-(1/r²) d/dr (r² dϕ/dr) + (T²/24) ∂V_ϕϕ/∂ϕ - (1/2)(∂G(ϕ)/∂ϕ)[q/(4πε_0 G(ϕ) r²)]² + ∂V(ϕ)/∂ϕ = 0.

Now, using the fact that G(ϕ) = V(ϕ) and λ = q/(4πε_0), we have

-(1/r²) d/dr (r² dϕ/dr) + (T²/24) ∂V_ϕϕ/∂ϕ - (λ²/2)(∂V(ϕ)/∂ϕ) 1/(V(ϕ)² r⁴) + ∂V(ϕ)/∂ϕ = 0,

which implies

(1/r²) d/dr (r² dϕ/dr) = ∂/∂ϕ [ (T²/24)V_ϕϕ + (λ²/2) 1/(V(ϕ) r⁴) + V(ϕ) ].

Substituting the potential (<ref>) into Eq. (<ref>) gives

(1/r²) d/dr (r² dϕ/dr) = ∂/∂ϕ [(T²/24)·2(-α² + 3α⁴ϕ²)] + λ² ∂/∂ϕ [(α²ϕ² - 1)^{-2}] (1/r⁴) + ∂/∂ϕ [(1/2)(α²ϕ² - 1)²].

Disregarding the λ² term, since we are considering relatively long distances (far from the 1/r⁴ charge-source term), Eq. (<ref>) reduces to

∇²ϕ = ∂V_eff(ϕ)/∂ϕ.

Since V_eff(ϕ) = V(ϕ) + (T²/24)V_ϕϕ and V(ϕ) = (1/2)[(αϕ)² - 1]², it follows that

V_eff(ϕ) = (1/2)[(αϕ)² - a²]²,

where a² = 1 - T²/T_c² and T_c² = 4/α². The effective potential V_eff is stable around the new vacuum ϕ_0 = a/α for T < T_c (true vacuum) and unstable for T ≥ T_c (false vacuum). Now, perturbing the tachyon field around its true vacuum ϕ_0, i.e. ϕ(r) → ϕ_0 + η(r), where η(r) is a small fluctuation, we can expand (<ref>) as

∇²(ϕ_0 + η) = ∂V_eff(ϕ)/∂ϕ  ⟹  ∇²ϕ_0 + ∇²η = ∂V_eff/∂ϕ|_{ϕ_0} + ∂²V_eff/∂ϕ²|_{ϕ_0} η  ⟹  ∇²η = ∂²V_eff/∂ϕ²|_{ϕ_0} η.

We have disregarded the higher-derivative terms because the second derivative is sufficient for our analysis; thus at ϕ_0 = a/α

∇²η = 4α²a²η = 4α²[1 - T²/T_c²]η = -4α²[T²/T_c² - 1]η = -2Aη,

where

A = 2α²[T²/T_c² - 1] = -2a²α².

Developing the Laplacian in Eq. (<ref>) yields

η'' + (2/r)η' + 2Aη = 0.

This equation has the solution

η(r) = cosh(√(2|A|) r)/(α√(|A|) r),

where |A| = -A = 2α²(1 - T²/T_c²). Hence, the dielectric function for this solution is

G(ϕ) = V(ϕ_0 + η) = V(ϕ)|_{ϕ_0} + V'(ϕ)|_{ϕ_0}η + (1/2)V''(ϕ)|_{ϕ_0}η² + O(η³) = (1/2)V''(ϕ)|_{ϕ_0}η²,

where in the last step we kept only the second-order term. This yields

G(r) = 2α²η² = (2/(|A|r²)) cosh²(√(2|A|) r).

Substituting this result into the expression for the electric field modified by the dielectric function G(r), we have

E = λ/(r²G(r)) = (λ|A|/2) cosh^{-2}(√(2|A|) r).

Using the well-known relation for the electric potential, V(r) = ∫E dr, to determine the confinement potential V_c(r), we get

V_c(r,T) = λ√(|A|) tanh(√(2|A|) r)/(2√2) + c.

Now, we can compare our Eq. (<ref>) with the results of <cit.> for the confinement of quarks and gluons with N_c colors,

d²ϕ(r)/dr² + (2/r) dϕ(r)/dr = -(g²/(64π²f_ϕ))(1 - 1/N_c) exp(-ϕ(r)/f_ϕ) (1/r⁴);

for this purpose Eq. (<ref>) can be rewritten as

d²ϕ(r)/dr² + (2/r) dϕ(r)/dr = ∂/∂ϕ [(T²/24)·2(-α² + 3α⁴ϕ²)] - 4α²λ²[ϕ(α²ϕ² - 1)^{-3}](1/r⁴) + ∂/∂ϕ [(1/2)(α²ϕ² - 1)²].

Since the exponential and quadratic potentials in the former and latter cases are just dielectric functions that modify the charges, we can now identify our electric charge q in terms of the gluon charge g by comparing the 1/r⁴ charge-source terms of Eqs. (<ref>) and (<ref>), which gives

4λ²α² = (g²/(32π²f_ϕ))(1 - 1/N_c).

Therefore, identifying α² = 1/f_ϕ we find

λ = (g/4π)(1 - 1/N_c)^{1/2},

where we have redefined g → g/(2√2).
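The chain from η(r) to V_c(r,T) above is straightforward to verify symbolically. The following sympy sketch (an editorial addition) checks that η solves the fluctuation equation and that integrating E = λ/(r²G) reproduces the tanh-shaped potential:

import sympy as sp

r = sp.symbols('r', positive=True)
Aabs, alpha, lam = sp.symbols('Aabs alpha lambda_', positive=True)   # Aabs = |A|
k = sp.sqrt(2*Aabs)
eta = sp.cosh(k*r)/(alpha*sp.sqrt(Aabs)*r)

# fluctuation equation eta'' + (2/r) eta' + 2A eta = 0 with A = -|A|
ode = sp.diff(eta, r, 2) + (2/r)*sp.diff(eta, r) - 2*Aabs*eta
print(sp.simplify(ode))             # -> 0

G = 2*alpha**2*eta**2               # = 2 cosh^2(sqrt(2|A|) r) / (|A| r^2)
E = lam/(r**2*G)                    # = (lam |A| / 2) sech^2(sqrt(2|A|) r)
Vc = sp.integrate(E, r)
print(sp.simplify(Vc - lam*sp.sqrt(Aabs)*sp.tanh(k*r)/(2*sp.sqrt(2))))   # -> 0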
Using Eq. (<ref>) one can readily find the following relationship between the charges:

q = ε_0 g √(1 - 1/N_c).

Substituting the results obtained above into Eq. (<ref>), we have

V_c(r,T) = (g/4π)√(1 - 1/N_c) · √(|A|) tanh(√(2|A|) r)/(2√2) + c.

This represents the static potential observed for the confinement of quarks and gluons in the tachyon matter. At T=0 we observe a strong confinement regime at short distances. For sufficiently large distances r we observe a steady de-confinement of the quarks and gluons, leading to hadronization. At T ≥ T_c the confinement vanishes, leading to the breaking of the QCD string tension. Since tanh(√(2|A|) r) ≈ √(2|A|) r for √(2|A|) r ≪ 1, Eq. (<ref>) can be written in the more compact form

V_c(r,T) = σ(T) r + c,

where c is the integration constant and σ is the QCD string tension, which in this case depends explicitly on the temperature. The QCD string tension can be written as

σ(T) ≃ (g/4π)√(1 - 1/N_c) · |A|/2 ≃ (gα²/4π)√(1 - 1/N_c) [1 - T²/T_c²].

At T=0, σ reduces to a temperature-independent constant string tension that binds the quarks together; at this temperature the quarks and gluons are in a confined state. At T=T_c, σ(T) vanishes, the string breaks and hadronization follows. In plotting the results of Eqs. (<ref>) and (<ref>) in Figs. <ref> and <ref> we assumed α=1 and λ=1, which gives g/4π = 1 for N_c ≫ 1, and c=0. The static potential for the confinement regimes is depicted in Fig. <ref>. At nonzero temperatures T ≤ T_c, the potential rises linearly as expected, but the slope decreases with a steady increase in temperature from T=0 to T ≃ T_c, where the slope approaches zero. This represents an increase in the energy and a decrease in the interactions between the quarks and gluons as the temperature increases. At T ≥ T_c the confinement vanishes, showing a rise in the energy and a reduction in the interactions of the quarks and gluons, making them (asymptotically) free inside the hadrons.

Fig. <ref> shows a sharp decrease in σ(T) (the coefficient of the linearly increasing potential), with σ vanishing at T=T_c. The color dielectric function in Eq. (<ref>) is plotted in Fig. <ref>. As we have shown that V(r) = G(r), we can equally say that V(r,T) = G(r,T). In this sense, we can clearly see from Figs. <ref> and <ref> that the confinement regime and tachyon condensation (at r → r_*, where V(r_*) = G(r_*) → 0) coincide. We observe from Fig. <ref> that the tachyon condensation corresponds to the minima of the curves T=0, T_1, T_2, T_3, T_4. It is worth noting that the smaller the minimum, the deeper the curve and the stronger the tachyon condensation; hence more tachyons condense at T=0, and as the temperature increases the tachyons become gradually free until T ≥ T_c. This regime also coincides with the de-confinement phase, as seen in Fig. <ref>. Thus, by comparing Figs. <ref> and <ref> we can identify that the electric confinement is associated with tachyon condensation <cit.>.
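To reproduce the qualitative behavior described for Figs. <ref> and <ref> without the plots, a short numerical scan suffices (an editorial addition, using the parameter values α = λ = 1, N_c ≫ 1 and c = 0 assumed in the text):

import numpy as np

alpha = 1.0
g_over_4pi = 1.0              # lambda = 1 with N_c >> 1, per the text
Tc = 2.0/alpha                # T_c^2 = 4/alpha^2
r = 1.0
for T in (0.0, 0.3*Tc, 0.6*Tc, 0.9*Tc, Tc):
    A = 2*alpha**2*(1 - T**2/Tc**2)                  # |A|
    sigma = g_over_4pi*alpha**2*(1 - T**2/Tc**2)     # string tension, N_c -> infinity
    Vc = g_over_4pi*np.sqrt(A)*np.tanh(np.sqrt(2*A)*r)/(2*np.sqrt(2))
    print(f"T/Tc = {T/Tc:.1f}   sigma = {sigma:.3f}   V_c(r=1) = {Vc:.3f}")
# both the string tension and the potential's slope drop to zero as T -> T_c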
§.§ Glueball Masses

The search for glueballs has been ongoing for a while now; unfortunately, the only evidence of their existence so far is a "possible" candidate, since none has been confirmed experimentally. Glueballs are understood as bound states of pure gluons, mixed quark-gluon states (hybrids), multi-quark bound states, etc. Their presence is a consequence of gluon self-interactions in QCD. In this section we focus on scalar glueballs. They are the lightest glueballs; they carry QCD degrees of freedom with quantum numbers J^PC = 0^{++}, where J is the total spin, P is parity and C is charge conjugation, with isospin I=0 for isoscalars <cit.>.

§.§.§ Glueball mass at T=0

In our model, the glueball masses appear as excitations around the vacuum and are given by <cit.>

M_G² = ∂²V(ϕ)/∂ϕ²|_{ϕ_0} = 4α²,

with f_α = 1/α representing the decay constant of the tachyons; hence the glueball mass M_G² depends on how fast or slowly the tachyon decays.

§.§.§ Glueball mass at finite T

We start with Eq. (<ref>), which defines the effective potential of the model, and expand the potential for ϕ → ϕ_0 + η about the true vacuum of the effective potential, ϕ_0 = a/α:

V_eff(ϕ) = V(ϕ)|_{ϕ_0} + V'(ϕ)|_{ϕ_0}η + (1/2)V''(ϕ)|_{ϕ_0}η² + O(η³) = (1/2)V''(ϕ)|_{ϕ_0}η² = 2α²a²η².

Comparing the second derivative above with the definition of the glueball mass in Eq. (<ref>) we arrive at

M_G²(T) = 4α²a² = M_G²(0)[1 - T²/T_c²].

Hence, at T=0 we retrieve Eq. (<ref>), and at T=T_c, M_G²(T) vanishes. This model shows a remarkable resemblance to lattice simulation results. The glueballs should be seen as "rings of glue" which are kept together by the string tension σ(T) contained in the interquark potential and which vanishes at T=T_c when the string breaks, signalling the de-confinement phase. At T < T_c the thermodynamic properties of the model can be well understood and studied in terms of a gas of glueballs with M_G²(T) < M_G²(0) <cit.>.

We may now establish a relationship between the string tension and the glueball mass through Eqs. (<ref>) and (<ref>) at T=0, i.e.,

σ(0) ≃ (g M_G²(0)/16π)√(1 - 1/N_c).

Recalling that g = √(16πα_s/3) is the chromoelectric charge, where α_s = 0.45 is close to the QCD coupling constant, and assuming N_c = 3, we find the expected results √(σ(0)) = 420 MeV for a glueball mass M_G(0) = 1184 MeV <cit.>.

To end this section, a few comments in connection with Sec. <ref> are in order. Substituting the effective potential (<ref>) into the gluon condensate in Eq. (<ref>) we get

⟨G'F^μν F_μν⟩ = 4|ϵ_v|⟨2α²a²η² - 1⟩ = 4|ϵ_v|⟨2α²[1 - T²/T_c²]η² - 1⟩.

Hence, the gluon condensate decreases with increasing temperature and vice versa. At T=T_c we recover the well-known gluon condensate at zero temperature, ⟨G'F^μν F_μν⟩ = -4|ϵ_v|.
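The quoted numbers can be checked quickly (an editorial addition). Note that √(σ(0)) ≈ 420 MeV is recovered from the formula above only if the factor 2√2 absorbed into the earlier redefinition g → g/(2√2) is restored, which we assume here is the intended convention:

import numpy as np

alpha_s, Nc, MG0 = 0.45, 3, 1184.0                   # MeV, values quoted in the text
g = np.sqrt(16*np.pi*alpha_s/3)                      # chromoelectric charge
sigma0 = g*MG0**2/(16*np.pi)*np.sqrt(1 - 1/Nc)
print(np.sqrt(sigma0))                # ~250 MeV with g as redefined
print(np.sqrt(2*np.sqrt(2)*sigma0))   # ~420 MeV with the 2*sqrt(2) factor restored

# temperature dependence of the glueball mass, M_G(T) = M_G(0) sqrt(1 - T^2/Tc^2)
for x in (0.0, 0.5, 0.9, 1.0):                       # x = T/Tc
    print(x, MG0*np.sqrt(max(0.0, 1 - x**2)))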
§ CONCLUSIONS

In our investigation we found the net static potential for the confinement phase of quarks and gluons as a function of temperature. We used an Abelian QED theory to approximate the non-Abelian QCD theory. We did this by employing a phenomenological effective field theory involving tachyon field dynamics coupled to electromagnetism via a color dielectric function. The color dielectric function is responsible for the long-distance interactions that bring about confinement in the infrared (IR) regime. It also modifies the gluon condensate ⟨G'F^μν F_μν⟩, develops tachyon condensation and consequently allows confinement in the IR regime. We showed that confinement is favored at short distances and low temperatures, whereas de-confinement shows up at long distances and higher temperatures in the tachyon matter. Confinement of quarks and gluons coincides with tachyon condensation within the same temperature ranges, as shown in Figs. <ref> and <ref>. As a result, the de-confining phase at T ≥ T_c does not correspond to tachyon condensation, as seen in Figs. <ref> and <ref>. Consequently, tachyon condensation is associated with electric field confinement, and thus our results conform with QCD monopole condensation as predicted in the well-known dual scenario <cit.>. In such a dual scenario, QCD-monopole condensation is necessary for spontaneous chiral-symmetry breaking <cit.>. Thus, in our setup it is expected that in the confining phase (T < T_c) there is spontaneous chiral-symmetry breaking, whilst in the de-confining phase (T ≥ T_c) the chiral symmetry is restored. The QCD string tension and the scalar glueball mass were also computed as functions of temperature. They both decrease rapidly with temperature and vanish at T=T_c. Finally, we intend to advance further studies on this subject by studying the confinement of fermionic tachyons using a similar approach.

We would like to thank CNPq and CAPES for partial financial support.

1 E. Laermann and O. Philipsen, Annual Review 53 (2003) 163-198.
2 H. Suganuma, S. Sasaki, H. Toki and H. Ichie, Prog. Theor. Phys. Suppl. 120, 57-74 (1995).
3 Y. S. Kalashnikova and A. V. Nefediev, Phys. Usp. 45, 347-368 (2002).
Sumino:2001eh Y. Sumino, Phys. Rev. D 65, 054003 (2002), doi:10.1103/PhysRevD.65.054003 [hep-ph/0104259].
QCD vacuum:1 P. Hasenfratz and J. Kuti, Phys. Rep. 40 (1978) 75; T. D. Lee, Particle Physics: An Introduction to Field Theory (Routledge, New York, 1981).
MIT bag A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D 9, 3471 (1974).
SLAC bag W. A. Bardeen, M. S. Chanowitz, S. D. Drell, M. Weinstein and T.-M. Yan, Phys. Rev. D 11 (1975) 1094.
Conell potential E. Eichten, K. Gottfried, T. Kinoshita, K. D. Lane and T. M. Yan, Phys. Rev. D 17, 3090 (1978).
others A. Schuh, H. J. Pirner and L. Wilets, Phys. Lett. B 174, 10 (1986).
Phenomenology W. Koepf, L. Wilets, S. Pepin and Fl. Stancu, Phys. Rev. C 50, 614-626 (1994).
guendelman E. I. Guendelman, Mod. Phys. Lett. A 22, 1209 (2007).
Reinhardt:2008ek H. Reinhardt, Phys. Rev. Lett. 101, 061602 (2008), doi:10.1103/PhysRevLett.101.061602 [arXiv:0803.0504 [hep-th]].
7 R. Dick, Phys. Lett. B 409 (1997) 321-324.
6 A. Sen, JHEP 0207, 065 (2002) [hep-th/0203265]; A. Sen, JHEP 0204, 048 (2002) [hep-th/0203211].
4 D. Bazeia, F. A. Brito, W. Freire and R. F. Ribeiro, Int. J. Mod. Phys. A 18, 5627 (2003).
5 F. A. Brito, M. L. F. Freire and W. Serafim, Eur. Phys. J. C 74, 3202 (2014).
8 C. P. Herzog, Phys. Rev. Lett. 98, 091601 (2007).
9 H. Boschi-Filho and N. R. F. Braga, Phys. Rev. D 74 (2006) 086001.
10 O. Andreev and V. I. Zakharov, Phys. Lett. B 645, 437-441 (2007).
11 O. Kaczmarek, F. Karsch, E. Laermann and M. Lutgemeier, Phys. Rev. D 62 (2000) 034021.
13 E. Laermann, C. DeTar, O. Kaczmarek and F. Karsch, Nucl. Phys. Proc. Suppl. 44, 447-449 (1999).
14 L. Wilets, Nontopological Solitons (World Scientific, Singapore, 1998).
15 T. D. Lee, Particle Physics and Introduction to Field Theory (Harwood Academic, New York, 1981).
16 H. Shiba et al., Phys. Lett. B 333, 461 (1994).
17 G. S. Bali et al., Phys. Rev. D 54, 2863 (1996).
18 E. Eichten et al., Phys. Rev. Lett. 34, 369 (1975).
21 R. Friedberg and T. D. Lee, Phys. Rev. D 18, 2623 (1978).
Hotta:2002nt K. Hotta, JHEP 0212, 072 (2002), doi:10.1088/1126-6708/2002/12/072 [hep-th/0212063].
brane O. Bergman, K. Hori and P. Yi, Nucl. Phys. B 580 (2000) 289-310.
dual S. Mandelstam, Phys. Rep. 23C, 145 (1976); G. 't Hooft, in Proceed. of Euro. Phys. Soc. 1975, ed. A. Zichichi; N. Seiberg and E. Witten, Nucl. Phys. B 426, 19 (1994).
26 A. Sen, JHEP 08 (1998) 012, hep-th/9805170.
24 M. Cvetic and A. A. Tseytlin, Nucl. Phys.
B 416, 137 (1994).
jackiw L. Dolan and R. Jackiw, Phys. Rev. D 9, 3320 (1974).
weinberg S. Weinberg, Phys. Rev. D 9, 3357 (1974).
25 D. Bazeia, F. A. Brito, W. Freire and R. F. Ribeiro, Eur. Phys. J. C 40, 531 (2005) [hep-th/0311160].
22 B. Zwiebach, A First Course in String Theory (Cambridge University Press, Cambridge, 2004).
23 R. Dick, Eur. Phys. J. C 6, 701 (1999).
Heller:1997nqa U. M. Heller, F. Karsch and J. Rank, Phys. Rev. D 57, 1438 (1998), doi:10.1103/PhysRevD.57.1438 [hep-lat/9710033].
28 H. J. Rothe, Lattice Gauge Theories (World Scientific, 1992).
27 M. Creutz, Quarks, Gluons and Lattices (Cambridge University Press, 1983).
29 V. A. Miransky, Dynamical Symmetry Breaking in Quantum Field Theories (World Scientific, 1993).
scalar-glueballs W. Ochs, J. Phys. G 40 (2013) 043001; S. L. Olsen (2014), arXiv:1403.1254 [hep-ex]; D. Parganlija (2013), arXiv:1312.2830 [hep-ph]; A. Zhang (HNP2013), doi:10.1142/S2010194514602403.
rosina M. Rosina, A. Schuh and H. J. Pirner, Nucl. Phys. A 448, 557 (1986).
glueballs-T N. Ishii, H. Suganuma and H. Matsufuru, Phys. Rev. D 66 (2002) 014507; M. Caselle and R. Pellegrini, Phys. Rev. Lett. 111, 132001 (2013).
gluon-d D. Kharzeev, E. Levin and K. Tuchin, Phys. Lett. B 574 (2002) 21.
gluodynamics M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B 147 (1979) 385.
gluodynamics 1 A. A. Migdal and M. A. Shifman, Phys. Lett. B 114 (1982) 445. | http://arxiv.org/abs/1706.09013v3 | {
"authors": [
"Adamu Issifu",
"Francisco A. Brito"
],
"categories": [
"hep-th",
"hep-ph"
],
"primary_category": "hep-th",
"published": "20170627190116",
"title": "The (de)-confinement transition in tachyonic matter at finite temperature"
} |
A Fully Quaternion-Valued Capon Beamformer Based on Crossed-Dipole Arrays

Xiang Lan and Wei Liu
Communications Research Group, Department of Electronic and Electrical Engineering, University of Sheffield, UK

Quaternion models have been developed for both direction of arrival estimation and beamforming based on crossed-dipole arrays in the past. However, for almost all the models, especially for adaptive beamforming, the desired signal is still complex-valued; one example is the quaternion-Capon beamformer. Since the complex-valued desired signal only has two components, while there are four components in a quaternion, only two components of the quaternion-valued beamformer output are used and the remaining two are simply removed. This leads to significant redundancy in its implementation. In this work, we consider a quaternion-valued desired signal and develop a fully quaternion-valued Capon beamformer, which has a better performance and a much lower complexity, and is shown to be more robust against array pointing errors.

Keywords — quaternion model, crossed-dipole, Capon beamformer, vector sensor array.

§ INTRODUCTION

Electromagnetic (EM) vector sensor arrays can track the direction of arrival (DOA) of impinging signals as well as their polarization. A crossed-dipole sensor array, first introduced in <cit.> for adaptive beamforming, works by processing the received signals with a long polarization vector. Based on such a model, the beamforming problem has been studied in detail in terms of output signal-to-interference-plus-noise ratio (SINR) <cit.>. In <cit.>, further detailed analysis was performed, showing that the output SINR is affected by DOA and polarization differences.

Since there are four components in each vector sensor output of a crossed-dipole array, a quaternion model instead of long vectors has been adopted in the past for both adaptive beamforming and direction of arrival (DOA) estimation <cit.>. In <cit.>, the well-known Capon beamformer was extended to the quaternion domain and a quaternion-valued Capon (Q-Capon) beamformer was proposed, with the corresponding optimum solution derived.

However, in most beamforming studies, the signal of interest (SOI) is still complex-valued, i.e. with only two components: in-phase (I) and quadrature (Q). Since the output of a quaternion-valued beamformer is also quaternion-valued, only two components of the quaternion are used to recover the SOI, which leads to redundancy in both calculation and data storage. However, with the development of quaternion-valued wireless communications <cit.>, it is very likely that in the future we will have quaternion-valued signals as the SOI, where two traditional complex-valued signals with different polarisations arrive at the array with the same DOA. In such a case, a fully quaternion-valued array model is needed to compactly represent the four-component desired signal and also make sure the four components of the quaternion-valued output of the beamformer are fully utilised.

In this work, we develop such a model and propose a new quaternion-valued Capon beamformer, where both its input and output are quaternion-valued. This paper is structured as follows. The full quaternion-valued array model is introduced in Section II and the proposed quaternion-valued Capon beamformer is developed in Section III.
Simulation results are presented in Section IV, and conclusions are drawn in Section V.

§ QUATERNION MODEL FOR ARRAY PROCESSING

A quaternion is constructed from four components <cit.>, with one real part and three imaginary parts, and is defined as

q = q_a + iq_b + jq_c + kq_d,

where i, j, k are three different imaginary units and q_a, q_b, q_c, q_d are real-valued. The multiplication rules among these units are

i² = j² = k² = ijk = -1,

and

ij = -ji = k,  ki = -ik = j,  jk = -kj = i.

The conjugate q^* of q is q^* = q_a - iq_b - jq_c - kq_d. A quaternion can be conveniently written as a combination of two complex numbers, q = c_1 + ic_2, where c_1 = q_a + jq_c and c_2 = q_b + jq_d. We will use this form later to represent our quaternion-valued signal of interest.

Consider a uniform linear array with N crossed-dipole sensors, as shown in Fig. <ref>, where the adjacent vector sensor spacing d equals half a wavelength, and the two components of each crossed-dipole are parallel to the x- and y-axes, respectively. A quaternion-valued narrowband signal s_0(t) impinges upon the vector sensor array among other M uncorrelated quaternion-valued interfering signals {s_m(t)}_{m=1}^{M}, with background noise n(t). s_0(t) can be decomposed into

s_0(t) = s_01(t) + is_02(t),

where s_01(t) and s_02(t) are two complex-valued sub-signals with the same DOA but different polarizations.

Assume that all signals are ellipse-polarized. The parameters, including DOA and polarization, of the m-th signal are denoted by (θ_m, ϕ_m, γ_m1, η_m1) for the first sub-signal and (θ_m, ϕ_m, γ_m2, η_m2) for the second sub-signal. Each crossed-dipole sensor receives signals in both the x and y sub-arrays. For signal s_m(t), the corresponding received signals at the x and y sub-arrays are respectively given by <cit.>

x(t) = a_m1 p_xm1 s_m1(t) + a_m2 p_xm2 s_m2(t),
y(t) = a_m1 p_ym1 s_m1(t) + a_m2 p_ym2 s_m2(t),

where x(t) represents the part received by the x sub-array, y(t) the part received by the y sub-array, and (p_xm1, p_ym1) and (p_xm2, p_ym2) are the polarizations of the two complex sub-signals in the x and y directions, respectively, which for θ_m = π/2 are given by <cit.>

p_xm1 = -cos γ_m1,  p_ym1 = cos ϕ_m sin γ_m1 e^{jη_m1},
p_xm2 = -cos γ_m2,  p_ym2 = cos ϕ_m sin γ_m2 e^{jη_m2}.

Note that a_m1 and a_m2 are the steering vectors of the two sub-signals, which are equal to each other since the two sub-signals share the same DOA (θ_m, ϕ_m):

a_m1 = a_m2 = [1, e^{-j2π sinθ_m sinϕ_m/λ}, …, e^{-j(N-1)2π sinθ_m sinϕ_m/λ}]^T.

A quaternion model can be constructed by combining the two parts:

q_m(t) = x(t) + iy(t)
       = a_m1(p_xm1 + ip_ym1)s_m1(t) + a_m2(p_xm2 + ip_ym2)s_m2(t)
       = b_m1 s_m1(t) + b_m2 s_m2(t),

where b_m1, b_m2 ∈ ℍ^{N×1} can be considered the composite quaternion-valued steering vectors. Combining all source signals and the noise together, the result is

q(t) = Σ_{m=0}^{M} [b_m1 s_m1(t) + b_m2 s_m2(t)] + n_q(t),

where n_q(t) = n_x(t) + in_y(t) is the quaternion-valued noise vector consisting of the two sub-array noise vectors n_x(t) and n_y(t).

§ FULL QUATERNION CAPON BEAMFORMER

§.§ The Full Q-Capon Beamformer

To recover the SOI among interfering signals and noise, the basic idea is to keep a unity response to the SOI at the beamformer output and then reduce the power/variance of the output as much as possible <cit.>.
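Before constructing the beamformer, the quaternion algebra used above can be pinned down concretely. The following minimal Python sketch (an editorial addition) verifies the multiplication table, including the rule jk = -kj = i, and the split q = c_1 + ic_2 with c_1 = q_a + jq_c and c_2 = q_b + jq_d:

import numpy as np

class Quat:
    """Minimal quaternion q = a + b i + c j + d k with the Hamilton product."""
    def __init__(self, a=0.0, b=0.0, c=0.0, d=0.0):
        self.v = np.array([a, b, c, d], dtype=float)
    def __mul__(self, o):
        a1, b1, c1, d1 = self.v
        a2, b2, c2, d2 = o.v
        return Quat(a1*a2 - b1*b2 - c1*c2 - d1*d2,
                    a1*b2 + b1*a2 + c1*d2 - d1*c2,
                    a1*c2 - b1*d2 + c1*a2 + d1*b2,
                    a1*d2 + b1*c2 - c1*b2 + d1*a2)
    def __add__(self, o):
        return Quat(*(self.v + o.v))
    def __repr__(self):
        return "Quat(%g, %g, %g, %g)" % tuple(self.v)

i, j, k = Quat(0, 1, 0, 0), Quat(0, 0, 1, 0), Quat(0, 0, 0, 1)
print(i*j, j*i)        # k and -k
print(k*i, i*k)        # j and -j
print(j*k, k*j)        # i and -i, i.e. jk = -kj = i

# the split q = c1 + i c2 used in the model, with c1, c2 complex in {1, j}
qa, qb, qc, qd = 1.0, 2.0, 3.0, 4.0
q  = Quat(qa, qb, qc, qd)
c1 = Quat(qa, 0, qc, 0)
c2 = Quat(qb, 0, qd, 0)
print(np.allclose(q.v, (c1 + i*c2).v))   # True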
The key to construct such a Capon beamformer in the quaternion domain is to design an appropriate constraint to make sure the quaternion-valued SOI can pass through the beamformer with the desired unity response.Again note that the quaternion-valued SOI can be expressed as a combination of two complex sub-signals.To construct such a constraint,one choice is to make sure the first complex sub-signal of the SOI pass through the beamformer and appear in the real and j components of the beamformer output, while the second complex sub-signal appear in the i and k components of the beamformer output.Then, with a quaternion-valued weight vector w, the constraint can be formulated asw^HC = fwhere {}^H is the Hermitian transpose (combination of the quaternion-valued conjugate and transpose operation), C = [b_01b_02], and f =[1 i].With this constraint, the beamformer output z(t) is given byz(t)=w^Hq(t) = s_01(t)+is_02(t)_s_0(t)+w^Hn_q(t) +∑^M_m=1w^H [b_m1s_m1(t)+b_m2s_m2(t)]Clearly, the quaternion-valued SOI has been preserved at the output with the desired unity response.Now, the full-quaternion Capon (full Q-Capon) beamformer can be formulated asmin w^HRw subject tow^HC = fwhereR = E{q(t)q^H(t)} .Applying the Lagrange multiplier method, we havel(w,λ)=w^HRw+(w^ HC-f)λ^H +λ(C^Hw-f^H)where λ is a quaternion-valued vector.The minimum can be obtained by setting the gradient of (<ref>) with respect to w^* equal to a zero vector <cit.>. It is given by∇_w^*l(w,λ) = 1/2Rw+1/2Cλ ^H= 0 Considering all the constraints above, we obtain the optimum weight vector w_opt as followsw_opt = R^-1C(C^HR^-1C)^-1f^H . A detailed derivation for the quaternion-valued optimum weight vector can be found at the Appendix.In the next subsection, we give a brief analysis to show that by this optimum weight vector, the interference part at the beamformer output z(t) in (<ref>) has been suppressed effectively.§.§ Interference Suppression Expanding the covariance matrix, we haveR = E{q(t)q^H(t)} = R_i+n+σ^2_1b_01b_01^H +σ^2_2b_02b_02^Hwhere σ^2_1,σ^2_2 are the power of the two sub-signals of SOI and R_i+n denotes the covariance matrix of interferences plus noise.Using the Sherman-Morrison formula, we then havew_opt = R_i+n^-1Cβwhere β =(C^HR_i+nC)^-1f^H∈ℍ^2 × 1 is a quaternion vector.Applying left eigendecomposition for quaternion matrix <cit.>,R_i+n = ∑^N_n=1α_nu_nu_n^Hwithα_1≥...≥α_M-2α_M-1= ...=α_N=2σ^2_0∈ℝ, where 2σ^2_0 denotes the noise power.With sufficiently high interference to noise ratio (INR), the inverse of R_i+n can be approximated byR_i+n^-1≈∑^N_n=M+11/2σ^2_0u_nu_n^HThen, we havew_opt= ∑^N_n=M+11/2σ^2_0u_nu_n^HCβ =∑^N_n=M+1u_nρ_nwhere ρ_n is a quaternion-valued constant. Clearly, w_opt is the right linear combination of {u_M+1,u_M+2,...,u_N}, and w∈span_R{u_M+1,u_M+2,...,u_N}.For those M interfering signals, their quaternion steering vectors belong to the space right-spanned by the related M eigenvectors, i.e. b_m1,b_m2∈span_R{u_1,u_2,...,u_M}. As a result,w_opt^Hb_m1≈0, w_opt^Hb_m2≈0, m=1,4,...,Mwhich shows that the beamformer has eliminated the interferences effectively. §.§ Complexity Analysis In this section, we make a comparison of the computation complexity between the Q-Capon beamformer in <cit.> and our proposed full Q-Capon beamformer. To deal with a quaternion-valued signal, the Q-Capon beamformer has to process the two complex sub-signals separately to recover the desired signal completely, which means we need to apply the beamformer twice for a quaternion-valued SOI. 
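As an aside, the closed-form weight vector w_opt = R^{-1}C(C^H R^{-1}C)^{-1}f^H derived above is easy to sanity-check numerically. The sketch below (an editorial addition) does so in the degenerate complex-valued case, which exercises the same linear algebra; the quaternion case is analogous, and the matrices here are random stand-ins rather than the paper's array quantities:

import numpy as np

rng = np.random.default_rng(0)
N = 8                                             # number of sensors
X = rng.standard_normal((N, 200)) + 1j*rng.standard_normal((N, 200))
R = X @ X.conj().T / 200                          # Hermitian sample covariance
C = rng.standard_normal((N, 2)) + 1j*rng.standard_normal((N, 2))   # stand-in for [b01 b02]
f = np.array([1.0, 1.0j])                         # desired responses, f = [1 i]

Ri = np.linalg.inv(R)
w = Ri @ C @ np.linalg.inv(C.conj().T @ Ri @ C) @ f.conj()
print(np.allclose(w.conj().T @ C, f))             # True: the constraint w^H C = f holds
print(float(np.real(w.conj().T @ R @ w)))         # the minimized output power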
For the full Q-Capon beamformer, by contrast, the SOI is recovered directly by applying the beamformer once.

For the Q-Capon beamformer, the weight vector is calculated by w = R^{-1}a_0(a_0^H R^{-1}a_0)^{-1}, where a_0 is the steering vector of the complex-valued SOI. As an example, if we use Gaussian elimination to calculate the matrix inverse R^{-1}, then (1/3)(N³-N) quaternion-valued multiplications are needed, equivalent to (16/3)(N³-N) real-valued multiplications. Additionally, R^{-1}a_0 requires 16N² real-valued multiplications, while 16(N²+N) real multiplications are needed for (a_0^H R^{-1}a_0)^{-1}. In total, (16/3)N³ + 32N² + (80/3)N real multiplications are needed. When processing a quaternion-valued signal this number is doubled, and the total number of real multiplications becomes (32/3)N³ + 64N² + (160/3)N.

For the proposed full Q-Capon beamformer, in addition to calculating R^{-1}, 32N² real multiplications are required to calculate R^{-1}C, and 32N² + 32N + 96 real multiplications for (C^H R^{-1}C)^{-1}f. In total, the number of real-valued multiplications is (16/3)N³ + 64N² + (272/3)N + 96, which is roughly half that of the Q-Capon beamformer.

§ SIMULATION RESULTS

In our simulations, we consider 10 pairs of crossed dipoles with half-wavelength spacing. All signals are assumed to arrive from the same plane θ = 90° and all interferences have the same polarization parameter γ = 60°. For the SOI, the two sub-signals are set to (90°, 1.5°, 90°, 45°) and (90°, 1.5°, 0°, 0°), with interferences coming from (90°, 30°, 60°, -80°), (90°, -70°, 60°, 30°), (90°, -20°, 60°, 70°) and (90°, 50°, 60°, -50°), respectively. The background noise is zero-mean quaternion-valued Gaussian. The powers of the SOI and all interfering signals are set equal, and the SNR (INR) is 20 dB. Fig. <ref> shows the resultant 3-D beam pattern of the proposed beamformer, where the interfering signals from (ϕ,η) = (30°, -80°), (-70°, 30°), (-20°, 70°) and (50°, -50°) have all been effectively suppressed.

In the following, the output SINR performance of the two Capon beamformers (full Q-Capon and Q-Capon) is studied with the DOA and polarization (90°, 1.5°, 90°, 45°) and (90°, 1.5°, 0°, 0°) for the SOI and (90°, 30°, 60°, -80°), (90°, -70°, 60°, 30°), (90°, -20°, 60°, 70°) and (90°, 50°, 60°, -50°) for the interferences. Again, we have set SNR = INR = 20 dB. All results are obtained by averaging 1000 Monte Carlo trials. Fig. <ref> shows the output SINR performance versus SNR with 100 snapshots, where the solid line is for the optimal beamformer with an infinite number of snapshots. For most of the input SNR range, in particular the lower range, the proposed full Q-Capon beamformer has a better performance than the Q-Capon beamformer. For very high input SNR values, the two beamformers have a very similar performance.

Next, we investigate their performance in the presence of DOA and polarization errors. The output SINR with respect to the number of snapshots is shown in Fig. <ref> in the presence of a 1° error for the SOI, where the real DOA and polarization parameters are (91°, 2.5°, 91°, 46°) and (91°, 2.5°, 1°, 1°). It can be seen that the full Q-Capon beamformer achieves a much higher output SINR than the Q-Capon beamformer, and this gap increases with the number of snapshots. Fig. <ref> shows a similar trend in the presence of a 5° error. Overall, we can see that the proposed full Q-Capon beamformer is more robust against array pointing errors.

§ CONCLUSIONS

In this paper, a full quaternion model has been developed for adaptive beamforming based on crossed-dipole arrays, with a new full quaternion Capon beamformer proposed.
Different from previous studies in quaternion-valued adaptive beamforming, we have considered a quaternion-valued desired signal, given the recent development in quaternion-valued wireless communications research. The proposed beamformer has a better performance and a much lower computational complexity than a previously proposed Q-Capon beamformer and is also shown to be more robust against array pointing errors, as demonstrated by computer simulations. The gradient of a quaternion vector u=w^HCλ^ H with respect to w^* can be calculated as below:∇_w^*u= [∇_w_1^*u∇w_2^*u...∇_w_n^*u]^Twhere w_n, n=1, 2, ⋯, N is the n-th quaternion-valued coefficient of the beamformer. Then,∇_w_1^*u= 1/4(∇_w_1au+∇_w_1bui+∇_w_1cuj+∇_w_1duk)wherew_1^* = w_1a-w_1bi-w_1cj-w_1dk Since w_1a is real-valued, with the chain rule <cit.>, we have∇_w_1au=∇_w_1a(w^H) Cλ^H+w^H∇_w_1a(Cλ^H)=[1 0 0... 0]Cλ^HSimilarly,∇_w_1bu=[-i 0 0... 0] Cλ^H∇_w_1cu=[-j 0 0... 0] Cλ^H∇_w_1du=[-k 0 0... 0] Cλ^HHence,∇_w_1^*u= 1/4(4Real(Cλ^H)_1)=Real(Cλ^H)_1 ,where the subscript {}_1 in the last item means taking the first entry of the vector.Finally,∇_w^*u=Real(Cλ^H)The gradient of the quaternion vector v=λC^Hw with respect to w^* can be calculated in the same way:∇_w_1av=λC^H∇_w_1aw +∇_w_1a(λC^H) w=λC^H[10 0 ... 0]^TSimilarly,∇_w_1bv=λC^H[i00 ... 0]^T∇_w_1cv=λC^H[j00 ... 0]^T∇_w_1dv=λC^H[k00 ... 0]^TThus, the gradient can be expressed as∇_w_1^*v =-1/2(Cλ^H)_1^*Finally,∇_w^*v=-1/2(Cλ^H)^* The gradient of c_w = w^HRw can be calculated as follows.∇_w^*c_w = [∇_w_1^*c_w∇_w_2^*c_w...∇_w_n^*c_w]^T ∇_w_1^*c_w =1/4(∇_w_1ac_w+∇_w_1bc_wi +∇_w_1cc_wj+∇_w_1dc_wk) Now we calculate the gradient of c_w with respect to the four components of w_1.∇_w_1ac_w=∇_w_1a(w^HR)w+w^HR∇_w_1aw= [100 ... 0]Rw+w^HR[100 ... 0]^TThe other three components are,∇_w_1bc_w= [-i00 ... 0]Rw+w^HR[i00 ... 0]^T∇_w_1cc_w= [-j00 ... 0]Rw+w^HR[j00 ... 0]^T∇_w_1dc_w= [-k00 ... 0]Rw+w^HR[k00 ... 0]^THence,∇_w_1^*c_w =Real(Rw)_1- 1/2(Rw)_1^*=1/2(Rw)_1Finally,∇_w^*c_w=1/2Rw Combining (<ref>), (<ref>) and (<ref>), with (<ref>), we have∇_w^*l(w,λ) =1/2 (Rw+Cλ^H) = 0Further,w=-2R^-1Cλ^HSubsituting (<ref>) into (<ref>),λ = -1/2f(C^HR^-1C)^-1Finally,w=R^-1C(C^HR^-1C)^-1f^H IEEEtran | http://arxiv.org/abs/1707.08207v2 | {
"authors": [
"Xiang Lan",
"Wei Liu"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20170626125959",
"title": "A Fully Quaternion-Valued Capon Beamformer Based on Crossed-Dipole Arrays"
} |
The c_2 invariant is an arithmetic graph invariant defined by Schnetz. It is useful for understanding Feynman periods. Brown and Schnetz conjectured that the c_2 invariant has a particular symmetry known as completion invariance. This paper will prove completion invariance of the c_2 invariant in the case that we are over the field with 2 elements and the completed graph has an odd number of vertices. The methods involve enumerating certain edge bipartitions of graphs; two different constructions are needed.

§ INTRODUCTION

The c_2 invariant is an arithmetic graph invariant defined by Schnetz <cit.>, see below, which is useful for understanding Feynman periods. Feynman periods are a class of integrals defined from graphs which are simpler than, but closely related to, Feynman integrals. Based on this connection and on computational evidence, there are certain symmetries which the c_2 invariant is believed to have. A key such symmetry, known as completion invariance and defined below, was first conjectured by Brown and Schnetz in 2010 <cit.> and has turned out to be quite difficult to prove. The main result of this paper, Theorem <ref>, is the completion invariance of the c_2 invariant in the case that p=2 and the completed graph has an odd number of vertices.

Throughout, K will be a connected 4-regular simple graph[For unexplained graph theory language or notation see <cit.>.]. The result of removing any vertex of K is called a decompletion of K. Different decompletions will typically be non-isomorphic, and K can be uniquely reconstructed from any of its decompletions. We will say that K is the completion of any of its decompletions. See Figure <ref> for an example.

Suppose G is a decompletion of K. Then we can view G as a Feynman diagram in ϕ⁴ theory. For those not familiar with Feynman diagrams and quantum field theory, briefly: the edges of the graphs represent particles; the vertices, particle interactions. This particular quantum field theory has only a quartic interaction (that is the 4 in ϕ⁴), and so all vertices must have degree 4. The vertices of G which are no longer 4-regular due to decompletion are taken to have additional external edges attached which represent particles entering or exiting the system. Completing G then means attaching all the external edges to a new vertex which one can think of as being "at infinity", hence corresponding to completion in the geometric sense. This connection to geometric completion is easily made precise at the level of the Feynman period, see <cit.>.

The Feynman integral of a Feynman diagram is the thing one actually wants for physics because it computes the contribution of that Feynman diagram to whatever physical process one is interested in. For more on perturbative quantum
field theory see <cit.>.

Given any graph G (but most interestingly for the present purposes G may be some decompletion of K), assign a variable a_e to each edge e ∈ E(G) and define the (dual) Kirchhoff polynomial or first Symanzik polynomial to be

Ψ_G = Σ_T Π_{e∉T} a_e,

where the sum runs over all spanning trees of G. For example, the Kirchhoff polynomial of a 3-cycle with edge variables a_1, a_2, a_3 is a_1 + a_2 + a_3, since removing any one edge of a cycle gives a spanning tree of the cycle. Next define the Feynman period to be

∫_{a_e ≥ 0} Π da_e / Ψ_G² |_{a_1 = 1}.

This is an affine form of the integral; there is also a projective form, see <cit.>. This integral converges provided G is at least internally 6-edge connected[That is, any way of removing fewer than 6 edges either leaves the graph connected or breaks the graph into two components one of which is a single vertex.]. This corresponds to the Feynman diagram having no subdivergences, because edge cuts with four or fewer edges, other than ones giving isolated vertices, represent sub-processes which yield divergent integrals. For more on perturbative quantum field theory see <cit.>.

There has been a lot of interest in the Feynman period because it is a sensible algebro-geometric, or even motivic, object <cit.>, but it is also a key piece of the Feynman integral. It is a sort of coefficient of divergence for the Feynman integral and has the benefit of not depending on the many parameters and choices which the full Feynman integral depends on, so it is mathematically much tidier. Nonetheless it still captures some of the richness of Feynman integrals. This is best illustrated by the number-theoretic richness of the numbers which Feynman periods give, see <cit.>. Given these connections to many deep and difficult things it should not be surprising that Feynman periods are still difficult to understand and compute, and so in <cit.> Schnetz introduced the c_2 invariant in order to better understand the Feynman period.

Let p be a prime, let 𝔽_p be the finite field with p elements, let G be a connected graph with at least 3 vertices, and let [Ψ_G]_p be the number of 𝔽_p-rational points in the affine variety {Ψ_G = 0} ⊆ 𝔽_p^{|E(G)|}. Then the c_2-invariant of G at p is

c_2^{(p)}(G) ≡ [Ψ_G]_p / p²  (mod p).
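Both definitions above are easy to experiment with on small graphs. The following self-contained Python sketch (an editorial illustration, feasible only for tiny graphs) builds the monomials of Ψ_G from spanning trees and computes c_2^{(p)} by brute-force point counting; it reproduces the three single-edge complements of the triangle (so Ψ = a_1 + a_2 + a_3) and can be run on K_4, a decompletion of K_5:

import itertools
from math import prod

def spanning_tree_complements(edges, n):
    """Monomials of Psi_G: complements of spanning trees, as tuples of edge indices."""
    m = len(edges)
    def is_tree(sub):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in sub:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False
            parent[ru] = rv
        return True
    return [tuple(sorted(set(range(m)) - set(c)))
            for c in itertools.combinations(range(m), n - 1)
            if is_tree([edges[t] for t in c])]

def c2(edges, n, p):
    """c_2^(p) straight from the definition: count F_p points of {Psi_G = 0}."""
    monos = spanning_tree_complements(edges, n)
    m = len(edges)
    npts = sum(1 for pt in itertools.product(range(p), repeat=m)
               if sum(prod(pt[t] for t in mono) for mono in monos) % p == 0)
    assert npts % p**2 == 0      # well-definedness, proved in the reference below
    return (npts // p**2) % p

triangle = [(0, 1), (1, 2), (0, 2)]
print(spanning_tree_complements(triangle, 3))    # three single-edge complements
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(c2(K4, 4, 2), c2(K4, 4, 3))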
That this is well defined is proved in <cit.>.The same definition can be made for prime powers, but we will stick to primes p.The c_2 invariant has interesting properties <cit.>, yields interesting sequences in p such as coefficient sequences of modular forms <cit.>, and predicts properties of the Feynman period.A simple and striking example of the last of these is that the c_2^(p)(G)=0 for all p corresponds to when the Feynman period apparently has a drop in transcendental weight relative to the size of the graph[Given that proving transcendentality for single zeta values, much less multiple zeta values or other more exotic numbers appearing in Feynman periods, is completely out of reach, one must recast this more formally in order to make a precise statement or prove a precise result along these lines, see <cit.>.].For example the two decompleted graphs in Figure <ref> both have c_2^(p)=0 for all p and both have period 36ζ(3)^2 which has weight 3+3=6 while some other graphs with the same number of edges have weight 7, see <cit.>.The c_2 invariant should have something to do with the Feynman period because both counting points and taking period integrals are controlled by the geometry of the variety of Ψ_G.If the Feynman periods are nice then the point counts as a function of p should be nice and vice versa.More specifically, inspired by known Feynman periods at the time being multiple zeta values[This is no longer the case unless there are some outrageous identities in play – as with transcendentality questions for the multiple zeta values themselves, proving an absence of relations is almost impossibly hard even when we have solid theoretical reasons to think there should be none.], Kontsevich informally conjectured that [Ψ_G]_p should be a polynomial in p.This turned out to be very false <cit.>.The c_2 invariant is one measure of whether or how badly Kontsevich's conjecture fails for a given graph; if it holds for that graph then the c_2 invariant is the quadratic coefficient of the polynomial, thus explaining the 2 in c_2.The c_2 invariant has or is believed to have all of the symmetries of the Feynman period.In particular, it is known that the Feynman periods of two graphs with the same completion are the same, see <cit.>.The c_2 invariant was also conjectured by Brown and Schnetz in 2010 (Conjecture 4 of <cit.>) to have this completion symmetry.This is arguably the most interesting open problem on the arithmetic of Feynman periods.Very little progress has been made.We do know that if G has two triangles sharing an edge then the question of completion invariance for c_2^(p)(G) can be reduced to completion invariance of a particular smaller graph.This is known as double-triangle reduction, see Corollary 34 of <cit.>. The main result of this paper puts the first crack into the conjecture itself, proving it in the special case when p=2 and the completed graph has an odd number of vertices:Let K be a connected 4-regular graph with an odd number of vertices. 
Let v and w be vertices of K. Then c_2^{(2)}(K-v) = c_2^{(2)}(K-w).

The basic approach takes a graph theoretic perspective on the c_2 invariant. The same approach also underlies the c_2 results for families of graphs from <cit.>, and the reader may like to look there for further examples. The strength of the approach is that it is sufficiently different from a more algebraic or geometric approach to possibly make progress where other methods stalled. The weakness of the approach is that extending beyond p=2 will be tricky, perhaps impossible. See Section <ref> for further comments. The vertex parity restriction is both more mysterious and perhaps more tractable. Again see Section <ref>. A different graph theoretic perspective on the c_2 invariant is given by Crump in <cit.>.

The structure of the paper is as follows. Section <ref> defines the different graph polynomials that will be needed and collects together some important lemmas from other sources. Section <ref> then uses these lemmas to reduce the problem of proving the main theorem to a problem of determining the parity of the number of certain edge bipartitions. The real work of proving Theorem <ref> then comes in Sections <ref> and <ref>. The set of these edge bipartitions is divided into five pieces. Two of the pieces are proved to have even size in Section <ref> by finding fixed-point free involutions. The other three pieces are proved to have even size in Section <ref> by giving a cycle swapping rule to transform edge bipartitions. Finally Section <ref> concludes with some comments about the result, the proof, and the way forward.

§ BACKGROUND

The first step is to define some additional graph polynomials that will be needed in order to prove the main result. Let G be a graph. Choose an arbitrary order for the edges and the vertices of G and choose an arbitrary orientation for the edges of G. Let E be the signed incidence matrix of G (with the vertices corresponding to rows and the edges corresponding to columns) with one arbitrary row removed, and let Λ be the diagonal matrix with the edge variables of G on the diagonal. Let

M = [ Λ  E^t ; -E  0 ].

Then

det M = Ψ_G.

This can be proved by expanding out the determinant, see <cit.> Proposition 21, or by using the Schur complement and the Cauchy-Binet formula, see <cit.>. In both cases it comes down to the fact that the square full rank minors of E are ±1 for columns corresponding to the edges of a spanning tree of G, which is the essence of the matrix tree theorem. In this and other ways the matrix M behaves much like the Laplacian matrix of a graph (with variables and one matching row and column removed) but with the pieces which make it up separated out, so call M the expanded Laplacian of G. Assume that we have made a fixed choice of orders and orientation so as to define a fixed M in all that follows. Thus it will not matter whether we talk of edges or edge indices. In particular we will use G/e and G∖e for the contraction and deletion respectively in the graph G, with the same meaning whether e is an edge or an edge index.

If I and J are sets of edge indices then let M(I,J) be the matrix M with the rows indexed by elements of I removed and the columns indexed by elements of J removed. Brown in <cit.> defined the following Dodgson polynomials.

Let I, J, and K be sets of edge indices with |I| = |J|. Define

Ψ^{I,J}_{G,K} = det M(I,J)|_{a_e = 0 for e ∈ K}.
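A quick symbolic check of det M = Ψ_G on the triangle (an editorial addition; here the vertex row for the third vertex of the incidence matrix is dropped, with edges e_1 = (0,1), e_2 = (1,2), e_3 = (0,2) oriented arbitrarily):

import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
E = sp.Matrix([[1, 0, 1],
               [-1, 1, 0]])                    # signed incidence, last vertex row dropped
Lam = sp.diag(a1, a2, a3)
M = sp.Matrix(sp.BlockMatrix([[Lam, E.T],
                              [-E, sp.zeros(2, 2)]]))
print(sp.expand(M.det()))                      # a1 + a2 + a3 = Psi of the triangle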
We may leave out the G subscript when the graph is clear and leave out the K when K= ∅.Dodgson polynomials have nice contraction-deletion properties and many interesting relations see <cit.>.Making different choices in the construction of M may change the overall sign of a Dodgson polynomial, but since we will be concerned with counting zeros of these polynomials the overall sign is of no concern.Dodgson polynomials can also be expressed in terms of spanning forests; this is a consequence of the all minors matrix tree theorem <cit.>.The following spanning forest polynomials are convenient for this purpose.Let P be a set partition of a subset of the vertices of G.DefineΦ^P_G = ∑_F∏_e ∉Fa_ewhere the sum runs over spanning forests F of G with a bijection between the trees of F and the parts of P where each vertex in a part lies in its corresponding tree.Note that trees consisting of isolated vertices are permitted.In most of the argument we will be working with spanning forest polynomials where P has exactly 2 parts.The corresponding spanning forests thus have exactly two parts and will be known as 2-forests.For example, if G is as in Figure <ref> then Φ_G^{v_1,v_2},{v_3} = (e+d)(ca+cb + ab + fb+gb) The precise relationship between Dodgson polynomials and spanning forest polynomials is given in the following proposition.Let I,J, and K besetsofedgeindicesof G with |I|=|J|.ThenΨ^I,J_G,K = ∑±Φ^P_G\(I∪ J∪ K)where the sum runs over all set partitions P of the end points of edges of (I ∪ J∪ K) \ (I∩ J) such that all the forests corresponding to P become spanning trees in both G\ I/(J∪ K) and G\ J/(I∪ K).The signs appearing in the proposition can be determined, see <cit.>, however they are of no concern for the present since we will be working modulo 2.Note that if an edge index is in both I and J then it is deleted in both G\ I and in G\ J and so cannot then be contracted; in the contraction simply ignore edge indices whose edges are no longer there.The instances of this proposition which will be needed below also serve as a good example of these objects and how to manipulate them.Suppose we have a graph G with a 3-valent vertex u and let the edges incident to u be 1, 2, and 3, as in the first graph in Figure <ref>.Consider the Dodgson polynomial Ψ^12, 13_G.To find its expression in terms of spanning forest polynomials we need to look for sets of edges of G\ 123 which give a spanning tree in G\ 12/3 and in G\ 13/2.These smaller graphs are each isomorphic to G-u and so we obtainΨ^12,13_G = Ψ_G-u = Φ^{u}, {v_1,v_2,v_3}_G\ 123 = Φ^{v_1,v_2,v_3}_G-u. 
Consider the Dodgson polynomial Ψ^{2,3}_{G,1}. To find its expression in terms of spanning forest polynomials we need to look for sets of edges of G∖123 which give a spanning tree in G∖2/13 and in G∖3/12. These two smaller graphs are the second and third graphs in Figure <ref>. If a set of edges is a forest for both of these graphs then in G∖123 we must have v_1 and v_2 in different trees, as well as v_1 and v_3 in different trees; u must also be in a different tree from all the other vertices. Furthermore, to get a tree in the two minors of Figure <ref> the spanning forest of G∖123 must have exactly three trees, including the tree consisting of the isolated vertex u alone. There is one partition of {u, v_1, v_2, v_3} which satisfies these properties, namely {u}, {v_1}, {v_2,v_3}, and all forests compatible with this partition give trees in the two minors of Figure <ref>. Therefore

Ψ^{2,3}_{G,1} = Φ^{{u}, {v_1}, {v_2,v_3}}_{G∖123}.

Since the vertex u is isolated in G∖123, we can rewrite this without u as

Ψ^{2,3}_{G,1} = Φ^{{v_1}, {v_2,v_3}}_{G-u}.

We can use the particular Dodgson polynomials from the example to compute the c_2 invariant more easily.

Continue for the next two lemmas with the notation [F]_p for the number of 𝔽_p-rational points on the affine variety defined by the polynomial F reduced modulo p. The polynomials we will deal with will always come from graphs and so the affine space in question will be of dimension the number of edges of the graph.

Suppose G is a graph with 2 + |E(G)| ≤ 2|V(G)|. Let i, j, k be distinct edge indices of G and let p be a prime. Then

c_2^{(p)}(G) ≡ -[Ψ^{ik,jk}_G Ψ^{i,j}_{G,k}]_p  (mod p).

Note that decompletions of 4-regular graphs satisfy the hypotheses of this lemma. This lemma is useful because we no longer have to divide by p² but rather can obtain the c_2 invariant directly as a point count modulo p.

The last lemma we need is a corollary of one of the standard proofs of the Chevalley-Warning theorem, see section 2 of <cit.>.

Let F be a polynomial of degree N in N variables, x_1, …, x_N, with integer coefficients. The coefficient of x_1^{p-1} ⋯ x_N^{p-1} in F^{p-1} is [F]_p modulo p.

This last lemma is particularly useful when used in conjunction with the previous lemma. Suppose G is a decompletion of a 4-regular graph and i, j, k are distinct edge indices. Then Ψ^{ik,jk}_G Ψ^{i,j}_{G,k} has both degree and number of variables equal to the number of edges of G∖ijk. Thus c_2^{(p)}(G) is equal to the coefficient of x_1^{p-1} ⋯ x_N^{p-1} in (Ψ^{ik,jk}_G Ψ^{i,j}_{G,k})^{p-1} modulo p. In view of this, we will no longer need to consider point counts, and so that frees up square brackets for the usual algebraic combinatorics notation for the coefficient-of operator. Namely, the coefficient of a monomial m in a polynomial F is denoted [m]F. Rewriting the previous observation in this notation we get

c_2^{(p)}(G) ≡ [x_1^{p-1} ⋯ x_N^{p-1}] (Ψ^{ik,jk}_G Ψ^{i,j}_{G,k})^{p-1}  (mod p),

where N is the number of edges of G∖ijk. If we further suppose that i, j, and k meet at a 3-valent vertex u and their other ends are v_1, v_2, and v_3 (this case will be sufficient for our purposes), then we can incorporate the computations of Example <ref> to get

c_2^{(p)}(G) ≡ [x_1^{p-1} ⋯ x_N^{p-1}] (Φ^{{v_1}, {v_2,v_3}}_{G-u} Ψ_{G-u})^{p-1}  (mod p).

The polynomials do in general depend on the choice of v_1, though the coefficient in question modulo p clearly does not. We will use the freedom of choice of v_1 later. When p=2, (<ref>) has a particularly nice graphical interpretation which can be derived as follows.
We are interested in the parity of [x_1⋯ x_N]Φ^{v_1}, {v_2,v_3}_G-uΨ_G-u, but the coefficient of x_1⋯ x_N in Φ^{v_1}, {v_2,v_3}_G-uΨ_G-u is the number of ways to partition the variables into two monomials, one from Φ^{v_1}, {v_2,v_3}_G-u and one from Ψ_G-u, since both of these polynomials[The same idea works for more general products of Dodgson polynomials, but some care must be taken as Dodgson polynomials are in general signed sums of spanning forest polynomials.For some examples computing in this way see <cit.>.] have all monomials appearing with coefficient 1.The variables correspond to edges, so this is equivalent to counting the number of ways to partition the edges of G-uinto two parts, one part when removed gives a spanning tree and the other part when removed gives a spanning 2-forest compatible with {v_1}, {v_2, v_3}.Swapping the roles of the two parts this is equivalent to counting the number of ways to partition the edges of G-u into two parts, one of which is a spanning tree and the other of which is a spanning 2-forest compatible with {v_1}, {v_2,v_3}.The c_2 invariant at p=2 is simply the parity of this count, so we are determining c_2 by counting assignments of the edges of G-u to particular spanning trees and forests.For p>2 the same idea works, but we must assign p-1 copies of each edge among p-1 copies of each polynomial.We can view this as partitioning the edges of the graph we obtain by taking G-u and replacing each edge with p-1 edges in parallel.There are many more possibilities when partitioning all these multiple edges into 2p-2 polynomials; the practicalities of working with this approach for p>2 are daunting. § REDUCTION TO A COMBINATORIAL COUNTING PROBLEM Take K to be a (fixed) connected 4-regular graphwith an odd number of vertices and take v, w to be vertices of K.Since K is connected there is a path between any two vertices of K and thus it suffices to prove Theorem <ref> in the case that v and w are joined by an edge.Thus add as an assumption that v and w are joined by an edge.If v and w have three common neighbours then K-v and K-w are isomorphic so the result is trivial.Therefore we can assume that v and w have at most two common neighbours.The case when v and w have two common neighbours is special for two reasons. 
First, when v and w have two common neighbours then there is a double triangle (two triangles sharing an edge) involving v, w, and their common neighbours. The arguments of <cit.> on double triangle invariance of c_2 have not been extended to the case where one of the vertices of the double triangle is the completion vertex, and so these arguments cannot be used here. However, the double triangle arguments should be readily generalizable to this situation and would work for all p, but the required techniques and setup are somewhat different and so this will not be pursued here. Rather I will simply leave it as a comment that the case where v and w have exactly two common neighbours should be a consequence of the other cases because of the double triangle, and instead will prove this case directly with the present methods.

Second[Thanks to a referee for this observation.], the case where v and w have two common neighbours is special enough that we can remove the requirement that K have an odd number of vertices. This argument is suggestive of how the parity restriction should hopefully be removable in general and is discussed in Section <ref>.

In the end we must consider 0, 1, or 2 common neighbours for v and w. These three cases are illustrated in Figure <ref>; label the vertices of K as in the figure. If v and w have no common neighbours, let R = K-{v,w}; that is, R is the grey blob on the right hand side of Figure <ref>. If v and w have one common neighbour, let S = K-{v,w}; that is, S is the grey blob in the middle of Figure <ref>. Finally, if v and w have two common neighbours, let T = K-{v,w}; that is, T is the grey blob on the left hand side of Figure <ref>. By the fact that K is 4-regular and has an odd number of vertices, the number of edges of K-{v,w} is always of the form 4k-1 for some integer k, and so for each of R, S, and T we will use x_1, x_2, …, x_{4k-1} for the edge variables.

The next step is to use the results of Section <ref> to rewrite the c_2^{(2)} in the R, S, and T cases. Call a set partition with two parts a bipartition and use the following notation:

Suppose P is a bipartition of a subset of the vertices of R. Then let ℛ_P be the set of bipartitions of the edges of R such that one part is a spanning tree of R and the other part is a spanning 2-forest where one tree of the 2-forest contains all vertices of the first part of P and the other contains all vertices of the second part of P. Furthermore, let r_P = |ℛ_P|. Define 𝒮_P and s_P similarly for a bipartition of a subset of the vertices of S, and define 𝒯_P and t_P similarly for T.

For example, if R = K_{3,3} as shown in the first part of Figure <ref>, then one of the elements of ℛ_{a,b},{c} is marked by the thick and dotted lines in the second part of the figure. Permuting a and b and permuting d, e, and f we get 12 elements of ℛ_{a,b},{c}. In this case one other form can occur, namely where the isolated vertex c is the tree containing c in the spanning 2-forest. There are 6 such elements and so r_{a,b},{c} = 18 in this case.

With notation as above, when v and w have no common neighbours

c_2^{(2)}(K-{v}) ≡ r_{a,b},{c} + r_{a,c},{b} + r_{b,c},{a}  (mod 2),
c_2^{(2)}(K-{w}) ≡ r_{d,e},{f} + r_{d,f},{e} + r_{e,f},{d}  (mod 2),

while when v and w have only neighbour c

c_2^{(2)}(K-{v}) ≡ s_{a,b},{c}  (mod 2),
c_2^{(2)}(K-{w}) ≡ s_{d,e},{c}  (mod 2),

and when v and w have neighbours b and c

c_2^{(2)}(K-{v}) ≡ t_{a},{b,c}  (mod 2),
c_2^{(2)}(K-{w}) ≡ t_{d},{b,c}  (mod 2).
By Lemmas <ref> and <ref> and Example <ref>, as encapsulated in (<ref>), when v and w have no common neighbours

c_2^(2)(K-{v}) ≡ [x_1x_2⋯ x_4k-1]Φ_R^{a,b},{c}Ψ_R ≡ [x_1x_2⋯ x_4k-1]Φ_R^{a,c},{b}Ψ_R ≡ [x_1x_2⋯ x_4k-1]Φ_R^{b,c},{a}Ψ_R (mod 2)

and so by the joys of working modulo 2

c_2^(2)(K-{v}) ≡ [x_1x_2⋯ x_4k-1]Φ_R^{a,b},{c}Ψ_R + [x_1x_2⋯ x_4k-1]Φ_R^{a,c},{b}Ψ_R + [x_1x_2⋯ x_4k-1]Φ_R^{b,c},{a}Ψ_R (mod 2).

Similarly

c_2^(2)(K-{w}) ≡ [x_1x_2⋯ x_4k-1]Φ_R^{d,e},{f}Ψ_R + [x_1x_2⋯ x_4k-1]Φ_R^{d,f},{e}Ψ_R + [x_1x_2⋯ x_4k-1]Φ_R^{e,f},{d}Ψ_R (mod 2).

Thus, by the edge assignment interpretation discussed at the end of Section <ref>, c_2^(2)(K-{v}) is equal modulo 2 to the number of ways to partition the edges of R into two parts where one part is a spanning tree and the other part is a spanning 2-forest where one tree of the 2-forest includes two vertices among {a,b,c} and the other tree of the 2-forest includes the remaining vertex of {a,b,c}. Similarly c_2^(2)(K-{w}) is equal modulo 2 to the number of ways to partition the edges of R into two parts where one part is a spanning tree and the other part is a spanning 2-forest where one tree of the 2-forest includes two vertices among {d,e,f} and the other tree of the 2-forest includes the remaining vertex of {d,e,f}. Restated using our notation, this is the statement of the proposition when v and w have no common neighbours.

When v and w have only common neighbour c we can calculate similarly. We could again sum over the three possible bipartitions of {a,b,c} and similarly for {c,d,e}. However, the common neighbour c breaks the symmetry and thus it will be more convenient for the remainder of the argument simply to work with the partitions {a,b}, {c} and {c}, {d,e}, giving the simpler result of the statement of the proposition in the case that v and w have common neighbour c. Likewise when v and w have common neighbours b and c, because the symmetry is broken it is most convenient only to take the partitions {a}, {b,c} and {d}, {b,c} and otherwise argue as above to obtain the result.

In view of the above proposition, to prove Theorem <ref> it suffices to show that the parity of the number of edge partitions which contribute to the right hand side of (<ref>) is the same as the parity of the number of edge partitions which contribute to the right hand side of (<ref>), and similarly for (<ref>) and (<ref>) and for (<ref>) and (<ref>).

With notation as above, when v and w have no common neighbours

c_2^(2)(K-{v}) - c_2^(2)(K-{w}) ≡ r_{a,b,d,e,f},{c} + r_{a,c,d,e,f},{b} + r_{a}, {b,c,d,e,f} + r_{a,b}, {c,d,e,f} + r_{a,c}, {b,d,e,f} + r_{a,d,e,f}, {b,c} + r_{a,b,c,d,e}, {f} + r_{a,b,c,d,f}, {e} + r_{a,b,c,e,f}, {d} + r_{a,b,c,f}, {d,e} + r_{a,b,c,e}, {d,f} + r_{a,b,c,d}, {e,f} (mod 2),

while when v and w have only common neighbour c

c_2^(2)(K-{v}) - c_2^(2)(K-{w}) ≡ s_{a,b,d},{c,e} + s_{a,b,e}, {c,d} + s_{a,b},{c,d,e} + s_{a,d,e}, {b,c} + s_{a,c}, {b,d,e} + s_{a,b,c}, {d,e} (mod 2),

and when v and w have common neighbours b and c

c_2^(2)(K-{v}) - c_2^(2)(K-{w}) ≡ t_{a}, {b,c,d} + t_{a,b,c},{d} (mod 2).

Consider the right hand sides of (<ref>) and (<ref>). Enumerating all possibilities,

r_{a,b}, {c} = r_{a,b,d,e,f},{c} + r_{a,b,d,e}, {c,f} + r_{a,b,d,f}, {c,e} + r_{a,b,e,f}, {c,d} + r_{a,b,d}, {c,e,f} + r_{a,b,e}, {c,d,f} + r_{a,b,f}, {c,d,e} + r_{a,b}, {c,d,e,f}

and similarly for the other terms. Collecting these calculations together, simplifying modulo 2, and performing analogous calculations with regard to the right hand sides of (<ref>), (<ref>), (<ref>), and (<ref>) we get the proposition.
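The counts r_P entering these formulas are finite and can be checked by brute force on small graphs such as the K_3,3 example above. The following sketch (Python; purely illustrative and not part of the paper, with vertex labels and helper names my own) enumerates the bipartitions of the edge set into a spanning tree and a compatible spanning 2-forest and returns the count and its parity.

```python
from itertools import combinations

def component_labels(n, edges):
    # Union-find over vertices 0..n-1; returns a root label per vertex.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return [find(x) for x in range(n)]

def is_spanning_tree(n, edges):
    return len(edges) == n - 1 and len(set(component_labels(n, edges))) == 1

def is_compatible_2forest(n, edges, part1, part2):
    # A spanning 2-forest has n-2 edges; with that many edges, having exactly
    # two components forces acyclicity (a cycle would force >= 3 components).
    if len(edges) != n - 2:
        return False
    labels = component_labels(n, edges)
    if len(set(labels)) != 2:
        return False
    l1 = {labels[v] for v in part1}
    l2 = {labels[v] for v in part2}
    return len(l1) == 1 and len(l2) == 1 and l1 != l2

def r_count(n, edges, part1, part2):
    # |R_P| for P = (part1, part2): edge bipartitions into a spanning tree
    # and a compatible spanning 2-forest.
    count = 0
    for tree_idx in combinations(range(len(edges)), n - 1):
        chosen = set(tree_idx)
        tree = [edges[i] for i in chosen]
        rest = [edges[i] for i in range(len(edges)) if i not in chosen]
        if is_spanning_tree(n, tree) and is_compatible_2forest(n, rest, part1, part2):
            count += 1
    return count

# K_{3,3} with a,b,c = 0,1,2 and d,e,f = 3,4,5; this should reproduce
# r_{{a,b},{c}} = 18 from the example above.
K33 = [(i, j) for i in range(3) for j in range(3, 6)]
r = r_count(6, K33, [0, 1], [2])
print(r, r % 2)
```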
It is best, in my view, to keep a graphical viewpoint with the above formulas. For example, one can represent a given term by drawing the graph and marking the partition by using different vertex shapes. Then the second equation of the statement of Proposition <ref> looks like

c_2^(2)(K-{v}) - c_2^(2)(K-{w}) ≡ [graphics omitted: a sum of copies of S with the vertex partitions marked] (mod 2).

The reader is encouraged to translate all the equations from here on out into this notation in order to better see the intuition behind the argument. Note that this graphical notation is not the same as in <cit.>, where a graph with a marked partition in this way represented the spanning forest polynomial for that graph and partition.

One's first thought for proceeding from here would likely be to find a fixed-point-free involution of the union of the ℛ, 𝒮, or 𝒯 sets appearing (via their counts) in Proposition <ref>. It is not clear how to do this for all the ℛ, 𝒮 and 𝒯 sets and so the method will be more complicated. First we will deal with some of the ℛ and 𝒮 sets via involutions. For the sets that remain we will build auxiliary graphs which use the properties of certain cycles along with the parity hypotheses on K to show these remaining sets all have an even contribution.

§ SOME INVOLUTIONS FROM SWAPPING AROUND PARTICULAR VERTICES

The involution is simplest in the case where v and w have only common neighbour c and the partition breaks a,b,c,d,e into {a,b,c},{d,e} or {a,b},{c,d,e}, so we will begin by discussing that case.

Let σ be a bipartition of the edges of S so that each part is a spanning forest (either or both of which may potentially be a spanning forest with one tree, that is, a spanning tree) where in each forest every tree contains at least one of a, b, d, or e. Then of the two edges incident to c exactly one is in each part of σ, and swapping which part these two edges are in yields a new partition σ' with the above listed properties, but not necessarily partitioning the vertices among the trees in the same way.

Note that c is 2-valent in S. Let y and z be the two neighbours of c.
By hypothesis, in each part of σ the vertex c is connected to some other vertices of S, since every tree contains at least one of a, b, d, or e and so at least one edge incident to c is assigned to this part. Since c is 2-valent this means that exactly one edge incident to c is in each part of σ. Consider the spanning forest F corresponding to one part of σ. In F, c is a leaf and so removing the edge incident to c isolates c and does not otherwise change the connectivity of the trees in F. Without loss of generality say that y was the other end of this edge. Vertex z is in some tree of F and adding the edge between z and c reconnects c to one of the trees of F while maintaining a forest structure. The same holds for the other part of σ.

There is a fixed-point free involution on 𝒮_{a,b,c}, {d,e}∪𝒮_{a,b}, {c,d,e} and thus s_{a,b,c}, {d,e} + s_{a,b}, {c,d,e} ≡ 0 (mod 2).

All edge bipartitions in 𝒮_{a,b,c}, {d,e}∪𝒮_{a,b}, {c,d,e} satisfy the hypotheses of Lemma <ref>. To each such edge partition, swap the parts to which the edges incident to c belong. Consider an edge partition coming from 𝒮_{a,b,c}, {d,e}. Removing the edges incident to c disconnects c in both the 2-forest and the tree. Putting the edges back in, but in opposite parts of the bipartition, reconnects c to one of the trees of the 2-forest as well as to the single tree. If c is reconnected to the tree of the 2-forest including {a,b} then we now have another edge partition in 𝒮_{a,b,c}, {d,e}. If c is now connected to the tree of the 2-forest involving {d,e} then we now have an edge partition in 𝒮_{a,b}, {c,d,e}. The edge partition must be distinct from the initial partition since which edge incident to c corresponds to the tree has changed. This operation is clearly an involution. Thus we get a fixed-point free involution on 𝒮_{a,b,c}, {d,e}∪𝒮_{a,b}, {c,d,e}. Consequently, the size of this set is even.

We can also apply the swapping map described in the previous proof to the other 𝒮 sets. However an edge partition in 𝒮_{a,b,d}, {c,e} can either be mapped to another partition from the same set or it can be mapped to 𝒮_{a,b,c,d},{e}, which is not an 𝒮 set appearing in Proposition <ref>. An analogous situation occurs for the other 𝒮 sets not dealt with in the previous lemma. For these remaining 𝒮 sets, instead use the methods of Section <ref>.

Next we look at some of the ℛ sets which can be tackled with a generalized version of the above argument.

Let p_1∪ p_2 be a partition of {a,b,c,d,e,f} where p_1 consists of either all of {a,b,c} and exactly one of {d,e,f} or all of {d,e,f} and exactly one of {a,b,c}. Let x be the element of p_1 which is alone from its trio. Consider any edge partition τ in ℛ_p_1,p_2. Let t be the tree of the 2-forest of τ associated to p_1. There is a unique vertex y with the following properties:
* Either y ∈ p_1 and y is 2-valent in t, or y ∉ p_1 and y is 3-valent in t.
* Removing y from t gives a component containing exactly two vertices of p_1 and a component containing exactly one vertex of p_1 (hence the third component, if it exists, also contains exactly one vertex of p_1).
* Either x = y or x is in one of the components after removing y which contains exactly one vertex of p_1.

We will call the vertex y defined in the above lemma the control vertex of τ and we call the vertex x the outsider vertex.
Since the spanning tree of τ spans, every vertex of t is at most 3-valent and the vertices of p_1 are at most 2-valent in t. The union of the paths in t between the vertices of p_1 gives a subtree of t where every leaf is in p_1. There are only finitely many configurations for this subtree; these are illustrated in Figure <ref>, where the edges in the figure represent paths in t. For each configuration the three properties and uniqueness can be checked directly, remembering that the degree of y remains the same in t as in the subtree.

Note that all the different configurations in Figure <ref> can be viewed as special cases of the top left configuration where some of the paths have been contracted.

There is a fixed-point free involution on

ℛ_{a,b}, {c,d,e,f}∪ℛ_{a,c}, {b,d,e,f}∪ℛ_{a,d,e,f}, {b,c}∪ℛ_{a,b,c,d}, {e,f}∪ℛ_{a,b,c,e}, {d,f}∪ℛ_{a,b,c,f}, {d,e}

and thus

r_{a,b}, {c,d,e,f} + r_{a,c}, {b,d,e,f} + r_{a,d,e,f}, {b,c} + r_{a,b,c,d}, {e,f} + r_{a,b,c,e}, {d,f} + r_{a,b,c,f}, {d,e} ≡ 0 (mod 2).

Let τ be an edge bipartition in the union of ℛ sets in the statement of the lemma. We need to set up some notation to build the involution:
* All of these ℛ sets satisfy the hypotheses of Lemma <ref>, so let y be the control vertex of τ.
* There is exactly one edge incident to y in the spanning tree of τ. Let ϵ be that edge.
* Let z be the end of ϵ which is not y.
* Let t_1 be the tree of the 2-forest of τ which contains y and let t_2 be the other tree of the 2-forest.

Now build τ' from τ as follows.
* If z ∈ t_2 then let η be the edge incident to y which leads to the component of t_1-{y} with two vertices from {a,b,c,d,e,f}. Swap which part of the bipartition τ contains ϵ and which contains η to obtain τ'.
* If z ∈ t_1 then let η be the edge incident to y which leads to the component of t_1-{y} which contains z. Swap which part of the bipartition τ contains ϵ and which contains η to obtain τ'.

First let us check that τ' is in the union of ℛ sets in the statement. Similarly to the proof of Lemma <ref>, y is a leaf in the spanning tree of τ, so removing ϵ disconnects y from the spanning tree and adding η reconnects y, maintaining a spanning tree structure. Removing η further disconnects the 2-forest of τ into three components. If we constructed τ' by the second case, then adding ϵ to the 2-forest reconnects the same components that were disconnected by the removal of η. If we constructed τ' by the first case, then removing η cuts off the component of t_1-{y} containing exactly two vertices from {a,b,c,d,e,f} from the rest of t_1; adding ϵ reconnects t_2 instead. Furthermore, the outsider vertex is in the part of t_1 that gets connected with t_2. The result is an edge partition which corresponds to a partition of {a,b,c,d,e,f} satisfying the hypotheses of Lemma <ref> and so is in one of the ℛ sets in the statement.

Next note that the map τ↦τ' is fixed-point free since which edge of the control vertex is in the spanning tree changes. Finally, the control vertices of τ and τ' are the same and so applying the map twice is the identity. Thus we get a fixed-point free involution on the union of ℛ sets in the statement and so the size of this set is even.
Note that in the case that v and w had common neighbour c, c always plays the role of the control vertex, but compared to the configurations in Figure <ref> more of the paths have been contracted away. In this way the case with no common neighbours generalizes the simpler argument in the common neighbour case.

§ COMPATIBLE CYCLES

This section defines certain special cycles and investigates their properties. These cycles will let us determine the parity of the remaining r_P and s_P in Proposition <ref> as well as the t_P.

* Call a bipartition of the edges of any graph (for our purposes either R, S or T) such that one part gives a spanning tree and the other part gives a spanning 2-forest a valid edge partition.
* Suppose we have a valid edge partition. This gives a bipartition of all of the vertices of the graph according to which tree of the 2-forest they are in. Call a cycle C compatible with the edge partition if all vertices of C are in the same part of the vertex partition and exactly one edge of C is in the part of the edge partition corresponding to the spanning tree.

Suppose we have a valid edge partition. The number of compatible cycles is the same as the number of edges of the graph which are in the part of the edge partition corresponding to the spanning tree but where both ends of the edge are in the same tree of the 2-forest.

Adding an edge joining two vertices of a tree gives a graph with a unique cycle. When this fact is applied to one of the trees of the 2-forest then this cycle is compatible, and every compatible cycle has this form.

Suppose we have a valid edge partition and let V_1, V_2 be the associated vertex partition. Suppose that ∑_v∈ V_i deg(v) is odd for i=1,2 and the total number of vertices of the graph is odd. Then the number of compatible cycles is odd.

Let ℓ be the number of edges crossing the vertex partition and let e_i be the number of edges not in the 2-forest but with both ends in the tree of V_i for i=1,2. The number of edges leaving the tree of the 2-forest associated to V_i is

∑_v∈ V_i deg(v) - 2(|V_i|-1) - 2e_i = ℓ

for i=1,2. Therefore, by the degree hypothesis, ℓ is odd. The number of edges of the spanning tree of the valid edge partition is

|V_1|+|V_2|-1 = e_1+e_2+ℓ.

|V_1|+|V_2| is the total number of vertices of the graph and so by hypothesis is odd. Therefore e_1+e_2 is also odd. By Lemma <ref> the number of compatible cycles is e_1+e_2 and hence is odd, as desired.

Now we want to apply this lemma to the ℛ and 𝒮 sets of Proposition <ref> which were not dealt with in the previous section, as well as to the 𝒯 sets of Proposition <ref>.
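As an aside, the lemma above also gives a direct way to count compatible cycles in computational experiments: count the spanning-tree-part edges whose two ends lie in the same tree of the 2-forest. A minimal sketch (Python, reusing the component_labels helper from the earlier snippet; illustrative only):

```python
def compatible_cycle_count(n, tree_edges, forest_edges):
    # By the lemma, each compatible cycle corresponds to exactly one edge of
    # the spanning-tree part with both ends in the same tree of the 2-forest.
    labels = component_labels(n, forest_edges)
    return sum(1 for u, v in tree_edges if labels[u] == labels[v])
```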
Every edge partition of

ℛ_{a,b,d,e,f}, {c}∪ℛ_{a,c,d,e,f}, {b}∪ℛ_{a}, {b,c,d,e,f}∪ℛ_{a,b,c,d,e}, {f}∪ℛ_{a,b,c,d,f}, {e}∪ℛ_{a,b,c,e,f}, {d}∪𝒮_{a,b,d}, {c,e}∪𝒮_{a,b,e}, {c,d}∪𝒮_{a,c}, {b,d,e}∪𝒮_{b,c}, {a,d,e}∪𝒯_{a}, {b,c,d}∪𝒯_{a,b,c}, {d}

has an odd number of compatible cycles.

It suffices to check the hypotheses of Lemma <ref>. The number of vertices of each of R, S, and T is odd by our running assumptions on K. Take any edge partition in the union above. All the vertices other than {a,b,c,d,e,f} are degree 4, so it suffices to check that the sums of the degrees of the vertices in each part of the defining partition of {a,b,c,d}, or {a,b,c,d,e}, or {a,b,c,d,e,f} are odd. For the ℛ sets one part has degree sum 5·3 and the other has degree sum 3, for the 𝒮 sets one part has degree sum 3·3 and the other has degree sum 3+2, and for the 𝒯 sets one part has degree sum 3 and the other has degree sum 3+2+2. All of these are odd.

Suppose we have a valid edge partition and a compatible cycle C. Let f be the one edge of C not in the 2-forest. If we were to remove f, the spanning tree would split into exactly two trees; call them t_1 and t_2. There are a non-zero even number of edges of C with one end in t_1 and the other end in t_2 (including f as one of the possibilities). If we take any such edge other than f and swap it with f in the edge partition then we obtain a valid edge partition corresponding to the same vertex partition.

By construction f has one end in t_1 and one end in t_2. The vertices of C can be bipartitioned based on whether they are in t_1 or in t_2. Running around C we must change which part of the bipartition we are in an even number of times in order to return to where we started, giving a non-zero even number of edges of the type described in the statement. Let f'≠ f be another edge of C where we change from t_1 to t_2. Removing f from the spanning tree disconnects it into t_1 and t_2, while adding f' reconnects t_1 and t_2 to obtain a spanning tree again. Adding f to the 2-forest creates one cycle, specifically C. Removing any edge of C, in particular f', returns us to a 2-forest with the same vertex partition.

* r_{a,b,d,e,f}, {c} + r_{a,c,d,e,f}, {b} + r_{a}, {b,c,d,e,f} + r_{a,b,c,d,e}, {f} + r_{a,b,c,d,f}, {e} + r_{a,b,c,e,f}, {d} ≡ 0 (mod 2).
* s_{a,b,d}, {c,e} + s_{a,b,e}, {c,d} + s_{a,c}, {b,d,e} + s_{b,c}, {a,d,e} ≡ 0 (mod 2).
* t_{a}, {b,c,d} + t_{a,b,c}, {d} ≡ 0 (mod 2).

The construction is the same for all three cases. It will be described explicitly in the ℛ case. Construct a graph X_R as follows. The vertices of X_R are the edge partitions in ℛ_{a,b,d,e,f}, {c}∪ℛ_{a,c,d,e,f}, {b}∪ℛ_{a}, {b,c,d,e,f}∪ℛ_{a,b,c,d,e}, {f}∪ℛ_{a,b,c,d,f}, {e}∪ℛ_{a,b,c,e,f}, {d}. Two vertices of X_R are adjacent if they are related by a swap as given in Lemma <ref>. Note that if a given edge assignment goes to another via such a swap then the second also goes to the first by such a swap, since the cycle is compatible for either edge partition and the t_1, t_2 partition (in the notation of the proof of Lemma <ref>) is also the same for both edge partitions. By Lemma <ref>, for any vertex x in X_R there are an odd number of cycles which can yield swaps corresponding to edges incident to x. Distinct cycles must give distinct swaps, hence distinct edges.
By Lemma <ref>, each one of these cycles gives an odd number of edges incident to x (one for each edge of the type described in the statement of Lemma <ref> other than f itself). All edges incident to x are obtained in this way, so x has odd degree. This is true for all vertices of X_R, but by basic counting any graph has an even number of vertices of odd degree, so X_R has an even number of vertices. Therefore the union of ℛ sets defining X_R has even size, which is the first statement of the lemma. The argument for X_S and X_T is analogous, using the edge partitions in 𝒮_{a,b,d}, {c,e}∪𝒮_{a,b,e}, {c,d}∪𝒮_{a,c}, {b,d,e}∪𝒮_{b,c}, {a,d,e} and 𝒯_{a}, {b,c,d}∪𝒯_{a,b,c}, {d} respectively.

Note that the construction of X_R, X_S, and X_T is closely related to the spanning tree graph (often just called the tree graph, see <cit.>) construction. The vertices of the tree graph of a graph G are the spanning trees of G, and two vertices of the tree graph are joined by an edge if the two spanning trees differ by removing one edge and replacing it with another.

This is all we need to prove the main theorem. As discussed at the beginning of Section <ref>, it suffices to consider v and w joined by an edge and with zero, one, or two common neighbours. By Proposition <ref> we need only check that a certain sum of r_p_1,p_2 is even in the case that v and w have no common neighbours, or a certain sum of s_p_1,p_2 is even in the case that v and w have one common neighbour, or a certain sum of t_p_1,p_2 is even in the case that v and w have two common neighbours. In the case that v and w have two common neighbours, the third part of Lemma <ref> gives that the required sum is even. In the case that v and w have one common neighbour, Lemma <ref> and the second part of Lemma <ref> give that the required sum is even. In the case that v and w have no common neighbours, Lemma <ref> and the first part of Lemma <ref> give that the required sum is even.

§ DISCUSSION

It should be the case that Theorem <ref> is true for all p and without the restriction that K has an odd number of vertices, see conjecture 4 of <cit.>. This would be an excellent conjecture to prove because it would support the deep connection between the c_2 invariant and the Feynman period. It is a surprisingly difficult and very interesting conjecture.

The restriction to p=2 came about because Lemma <ref> is much simplified in the p=2 case. For higher values of p there is still an edge assignment interpretation as discussed at the end of Section <ref>, but each edge must be assigned p-1 times to build p-1 spanning trees and p-1 spanning 2-forests. This greatly increases the complexity. However, in principle these ideas may be extendable to p>2. The practicalities would certainly be very hairy. The question is whether or not the practicalities would be so hairy as to render the approach unworkable.

The vertex parity condition is more mysterious. Note that it is only needed for the arguments of Section <ref>, not for the arguments of Section <ref>.
If the number of vertices of K were even then the required degree sum parity in Lemma <ref> would also need to change to preserve the total number of compatible cycles being odd. This translates into the methods of Section <ref> applying to the other ℛ, 𝒮, and 𝒯 sets, namely the ones we already know how to tackle by Section <ref>. So if K had an even number of vertices then the terms which in the odd case matched by swapping around the control vertex would become the terms which are even by compatible cycles, while the terms which used to be even by compatible cycles would need a new argument. The first place to look for this new argument would be a generalized control vertex argument; however, it is not clear how to do it. There is no obvious obstruction; rather, the required construction is simply not apparent and so some further cleverness is required to progress.

In the T case, that is the double triangle case where v and w have two common neighbours, things are sufficiently special that we can extend the argument to remove the parity condition. This was graciously pointed out by a referee. This argument manages to succeed by running through additional 𝒯 sets which do not appear directly in the expression for c_2 but which occur while swapping. I had tried similar things in other cases without finding a path to the result, but the approach remains promising as this argument shows.

Consider τ∈𝒯_{a}, {b,c,d}∪𝒯_{a,b}, {c,d}. The vertex b is 2-valent and one of its incident edges is in each of the tree and the 2-forest of τ, since in both cases b must connect to other vertices. Thus by the same argument as Lemma <ref> we can swap the edges around b and obtain a fixed-point free involution on 𝒯_{a}, {b,c,d}∪𝒯_{a,b}, {c,d}. Therefore

t_{a}, {b,c,d} + t_{a,b}, {c,d} ≡ 0 (mod 2).

Symmetrically, by swapping around vertex c in 𝒯_{a,b,c}, {d}∪𝒯_{a,b}, {c,d} we get

t_{a,b,c}, {d} + t_{a,b}, {c,d} ≡ 0 (mod 2).

Adding the two equations along with Proposition <ref> we get

c_2^(2)(K-{v}) - c_2^(2)(K-{w}) ≡ t_{a}, {b,c,d} + t_{a,b,c},{d} ≡ 0 (mod 2).

The statement and proof of Theorem <ref> arose out of discussions with Dmitry Doryn about looking for special cases where we could hope to progress on understanding the c_2 invariant. The first version of the result had many additional hypotheses including planarity and being restricted to the S-case. With some work most of the extra hypotheses dropped away and the current Theorem <ref> remained. It is not an entirely satisfactory theorem as it stands, but it introduces some new ideas to the game and makes nontrivial progress on the completion conjecture for the c_2 invariant for almost the first time. I hope that as the first crack in that conjecture it will lead, with the help of others, to a proof of the full conjecture.
"authors": [
"Karen Yeats"
],
"categories": [
"math.CO",
"05C31 (primary) 05C30, 81T18 (secondary)"
],
"primary_category": "math.CO",
"published": "20170627135821",
"title": "A special case of completion invariance for the $c_2$ invariant of a graph"
} |
IEEE Transactions on Robotics

Skill Learning by Autonomous Robotic Playing using Active Learning and Creativity

Simon Hangl, Vedran Dunjko, Hans J. Briegel and Justus Piater

S. Hangl and J. Piater are with the Department of Computer Science, University of Innsbruck, Austria; V. Dunjko is with the Institute of Theoretical Physics, University of Innsbruck, Austria and with the Max Planck Institute for Physics Munich, Germany; H.J. Briegel is with the Institute of Theoretical Physics, University of Innsbruck, Austria and the Department of Philosophy, University of Konstanz, Germany.

December 30, 2023

We treat the problem of autonomous acquisition of manipulation skills where problem-solving strategies are initially available only for a narrow range of situations. We propose to extend the range of solvable situations by autonomous playing with the object. By applying previously-trained skills and behaviours, the robot learns how to prepare situations for which a successful strategy is already known. The information gathered during autonomous play is additionally used to learn an environment model. This model is exploited for active learning and the creative generation of novel preparatory behaviours. We apply our approach to a wide range of different manipulation tasks, e.g. book grasping, grasping of objects of different sizes by selecting different grasping strategies, placement on shelves, and tower disassembly. We show that the creative behaviour generation mechanism enables the robot to solve previously-unsolvable tasks, e.g. tower disassembly. We use success statistics gained during real-world experiments to simulate the convergence behaviour of our system. Experiments show that active learning improves the learning speed by around 9 percent in the book grasping scenario.

Active Learning, Hierarchical models, Skill Learning, Reinforcement learning, Autonomous robotics, Robotic manipulation, Robotic creativity

§ INTRODUCTION

Humans perform complex object manipulations so effortlessly that at first sight it is hard to believe that this problem is still unsolved in modern robotics. This becomes less surprising if one considers how many different abilities are involved in human object manipulation. These abilities span from control (e.g. moving arms and fingers, balancing the body), via perception (e.g. vision, haptic feedback), to planning of complex tasks. Most of these are not yet solved in research by themselves, not to speak of combining them in order to design systems that can stand up to a comparison with humans. However, there is research on efficiently solving specific problems (or specific classes of problems) <cit.>.

Not only is the performance of humans outstanding; most manipulation skills are also learned with a high degree of autonomy. Humans are able to use experience and apply the previously learnt lessons to new manipulation problems.
In order to take a step towards human-like robots we introduce a novel approach for autonomous learning that makes it easy to embed state-of-the-art research on specific manipulation problems. Further, we aim to combine these methods in a unified framework which autonomously learns how to orchestrate them and to solve increasingly complex tasks.

In this work we are inspired by the behaviour of infants at an age between 8 and 12 months. Piaget identified different phases of infant development <cit.>. A phase of special interest is the coordination of secondary schemata, which he identifies as the stage of first actually intelligent behaviour. At this stage infants combine skills that were learned earlier in order to achieve more complex tasks, e.g. kicking an obstacle out of the way such that an object can be grasped. Children do not predict the outcome of actions and check the corresponding pre- and postconditions as is done in many planning systems <cit.>. To them it is only important to know that a certain combination of manipulations is sufficient to achieve a desired task. The environment is prepared such that the actual skill can be applied without a great need for generalisation. Even adults exhibit a similar behaviour, e.g. in sports. A golf or tennis player will always try to perform the swing from similar positions relative to the ball. She will position herself accordingly instead of generalising the swing from the current position. This is equivalent to concatenating two behaviours, walking towards the ball and executing the swing.

In previous work we introduced an approach that is loosely inspired by this paradigm <cit.>. The robot holds a set of sensing actions, preparatory behaviours and basic behaviours, i.e. behaviours that solve a certain task in a narrow range of situations. It uses the sensing actions to determine the state of the environment. Depending on the state, a preparatory behaviour is used to bring the environment into a state in which the task can be fulfilled by simple replay of the basic behaviour. The robot does not need to learn how to generalise a basic behaviour to every possibly observable situation. Instead, the best combination of sensing actions and preparatory behaviours is learned by autonomous playing.

We phrase the playing as a reinforcement learning (RL) problem, in which each rollout consists of the execution of a sensing action, a preparatory behaviour and the desired basic behaviour. Each rollout is time consuming, but not necessarily useful. If the robot already knows well what to do in a specific situation, performing another rollout in this situation does not help to improve the policy. However, if another situation is more interesting, it can try to prepare it and continue the play, i.e. active learning. Our original approach is model-free, which makes it impossible to exhibit such a behaviour. In this paper we propose to learn a forward model of the environment which allows the robot to perform transitions from boring situations to interesting ones. Another issue is the strict sequence of phases: sensing → preparation → basic behaviour. In this work we weaken this restriction by enabling the robot to creatively generate novel preparatory behaviours composed of other already known behaviours. The environment model is used to generate composite behaviours that are potentially useful instead of randomly combining behaviours.

We illustrate the previously described concepts with the example of book grasping.
This task is hard to generalise but easy to solve with a simple basic behaviour in a specific situation. The robot cannot easily get its fingers underneath the book in order to grasp it. In a specific pose, the robot can squeeze the book between two hands, lift it at the spine and finally slide its fingers below the slightly-lifted book. Different orientations of the book would require adaptation of the trajectory. The robot would have to develop some understanding of the physical properties, e.g. that the pressure has to be applied on the spine and that the direction of the force vector has to point towards the supporting hand. Learning this degree of understanding from scratch is a very hard problem.

Instead, we propose to use preparatory behaviours, e.g. rotating the book by 0^∘, 90^∘, 180^∘ or 270^∘, in order to move it to the correct orientation (ϕ = 0^∘) before the basic behaviour is executed. The choice of the preparatory behaviour depends on the book's orientation, e.g. ϕ∈{ 0^∘, 90^∘, 180^∘, 270^∘}. The orientation can be estimated by sliding along the book's surface, but not by poking on top of the book. The robot plays with the object and tries different combinations of sensing actions and preparatory behaviours. It receives a reward after executing the basic behaviour and continues playing. After training, the book grasping skill can be used as a preparatory behaviour for other skills in order to build hierarchies.

If the robot already knows well that it has to perform the behaviour rotate 90^∘ if ϕ = 270^∘ and is confronted with this situation again, it cannot learn anything any more, i.e. it is bored. It can try to prepare a more interesting state, e.g. ϕ = 90^∘, by executing the behaviour rotate 180^∘. Further, if only the behaviour rotate 90^∘ is available, the robot cannot solve the situations ϕ∈{90^∘, 180^∘} by executing a single behaviour. However, it can use behaviour compositions in order to generate the behaviours rotate 180^∘ and rotate 270^∘.

§ RELATED WORK

§.§ Skill chaining and hierarchical reinforcement learning

Sutton et al. introduced the options framework for skill learning in a RL setting <cit.>. Options are actions of arbitrary complexity, e.g. atomic actions or high-level actions such as grasping, modelled by semi-Markov decision processes (SMDPs). They consist of an option policy, an initiation set indicating the states in which the policy can be executed, and a termination condition that defines the probability of the option terminating in a given state. Options are orchestrated by Markov decision processes (MDPs), which can be used for planning to achieve a desired goal. This is related to our notion of behaviours; however, behaviours are defined in a looser way. Behaviours do not have an initiation set and an explicit termination condition. Behaviours are combined by grounding them on actual executions by playing instead of concatenating them based on planning. Konidaris and Barto embedded so-called skill chaining into the options framework <cit.>. Similar to our work, options are used to bring the environment to a state in which follow-up options can be used to achieve the task. This is done by standard RL techniques such as Sarsa and Q-learning. The used options themselves are autonomously generated; however, as opposed to our method, the state space is pre-given and shared by all options. Instead of autonomously creating novel options, Konidaris et al. extended this approach by deriving options from segmenting trajectories trained by demonstration <cit.>.
On a more abstract level, Colin et al. <cit.> investigated creativity for problem-solving in artificial agents in the context of hierarchical reinforcement learning by emphasising parallels to psychology. They argue that hierarchical composition of behaviours allows an agent to handle large search spaces in order to exhibit creative behaviour.

§.§ Model-free and model-based reinforcement learning in robotics

Our work combines a model-free playing system and a model-based creative behaviour generation system based on the environment model. Work on switching between model-free and model-based controllers was proposed in many areas of robotics <cit.>. The selection of different controllers is typically done by measuring the uncertainty of the controller's predictions. Renaudo et al. proposed switching between so-called model-based and model-free experts, where the model is learned over time. The switching is done randomly <cit.>, or by either majority vote, rank vote, Boltzmann Multiplication or Boltzmann Addition <cit.>. Similar work has been done in a navigation task by Caluwaerts et al. <cit.>. Their biologically-inspired approach uses three different experts, namely a taxon expert (model-free), a planning expert (model-based), and an exploration expert, i.e. exploring by random actions. A so-called gating network selects the best expert in a given situation. All these methods hand over the complete control either to a model-based or a model-free expert. In contrast, our method always leaves the control with the model-free playing system, which makes the final decision on which behaviours should be executed. The model-based system, i.e. behaviour generation using the environment model, is used to add more behaviours for model-free playing. This way, the playing paradigm can still be maintained while enabling the robot to come up with more complex ideas in case the task cannot be solved by the model-free system alone.

Dezfouli and Balleine sequence actions and group successful sequences into so-called habits <cit.>. Roughly speaking, task solutions are generated by a dominant model-based RL system and are transformed to atomic habits if they were rewarded many times together. In contrast, the main driving component of our method is a model-free RL system which is augmented with behavioural sequences by a model-based system. This way, the robot can deal with problems without requiring an environment model while still being able to benefit from it.

§.§ Developmental robotics

Our method shares properties with approaches in developmental robotics. A common element is the concept of lifelong learning, in which the robot develops more and more complex skills by interacting with the environment autonomously. Wörgötter et al. proposed the concept of structural bootstrapping <cit.>, in which knowledge acquired in earlier stages of the robot's life is used to speed up future learning. Weng provides a general description of a self-aware and self-affecting agent (SASE) <cit.>. He describes an agent with internal and external sensors and actuators respectively. It is argued that autonomous developmental robots need to be SASE agents, and concrete implementations are given, e.g. navigation or speech learning. Our concept of boredom is an example of a paradigm in which the robot decides on how to proceed based on internal sensing. In general, developmental robotics shares some key concepts with our method, e.g. lifelong learning, incremental development or internal sensing.
For a detailed discussion we refer to a survey by Lungarella et al. <cit.>.

§.§ Active learning in robotics

In active learning the agent can execute actions which have an impact on the generation of training data <cit.>. In the simplest case, the agent explores the percept-action space by random actions <cit.>. The two major active learning paradigms, i.e. query-based and exploration-based active learning, differ in the action selection mechanism. Query-based learning systems request samples, e.g. by asking a supervisor for them. Typically, the request is based on the agent's uncertainty <cit.>. Chao et al. adopt query-based active learning for socially guided machine learning in robotics <cit.>. Task models are trained by interaction with a human teacher, e.g. classifying symbols assigned to tangram compounds. The robot could prepare a desired sample by itself, i.e. arranging interesting tangram compounds and asking the teacher for the class label. In contrast to our method, this is not done in practice; instead, the robot describes the desired compound.

Exploration-based active learning paradigms, on the other hand, select actions in order to reach states with maximum uncertainty <cit.>. Salganicoff et al. <cit.> and Morales et al. <cit.> used active learning for grasping. It was used to learn a prediction model of how well certain grasp types will work in a given situation. All these works deal with how to select actions such that a model of the environment can be trained more effectively. In our approach the training of the environment model is not the major priority. It is a side product of the autonomous play and is used to speed up learning and creatively generate behaviours on top of the playing system.

Kroemer et al. <cit.> suggested a hybrid approach of active learning and reactive control for robotic grasping. Active learning is used to explore interesting poses using an upper confidence bound (UCB) <cit.> policy that maximises the merit, i.e. the sum of the expected reward mean and variance. The actual grasps are executed by a reactive controller based on dynamic movement primitives (DMPs) <cit.> using attractor fields to move the hand towards the object and detractor fields for obstacle avoidance. This approach is tailored to a grasping task, in which the autonomous identification of possible successful grasps is hard due to high-dimensional search spaces. In contrast, our approach acts on a more abstract level in which the described grasping method can be used as one of the preparatory behaviours. A more detailed investigation of active learning is outside the scope of this paper and can be found in a survey by Settles <cit.>.

Special credit shall be given to work on intrinsic motivation <cit.>. It is a flavour of active learning which is commonly applied in autonomous robotics. Instead of maximising the uncertainty, these methods try to optimise for intermediate uncertainty. The idea is to keep the explored situations simple enough to be able to learn something, but complex enough to observe novel properties. Schmidhuber provides a sophisticated summary of work on intrinsic motivation and embeds the idea into a general framework <cit.>. He states that many of these works optimise some sort of intrinsic reward, which is related to the improvement of the prediction performance of the model.
This is closely related to our notion of boredom, in which the robot rejects the execution of skills in a well-known situation for the sake of using the time to improve the policy in other situations. He further argues that such a general framework can explain concepts like creativity and fun.

§.§ Planning

Many of the previously mentioned methods are concerned with training forward models, which in consequence are used for planning in order to achieve certain tasks. Ugur et al. proposed a system that first learns action effects from interaction with the objects and is trained to predict single-object categories from visual perception <cit.>. In a second stage, multi-object interaction effects are learned by using the single-object categories, e.g. two solid objects can be stacked on top of each other. Discrete effects and categories are transformed into a PDDL description. Symbolic planning is used to create complex manipulation plans, e.g. for creating high towers by stacking. Konidaris et al. suggest a method in which symbolic state representations are completely determined by the agent's environment and actions <cit.>. They define a symbol algebra on the states derived from executed actions that can be used for high-level planning in order to reach a desired goal. Konidaris et al. extend this set-based formulation to a probabilistic representation in order to deal with the uncertainty observed in real-world settings <cit.>. A similar idea is present in our model-free approach, where the selection of sensing actions and the semantics of the estimated states depend on the desired skill.

All these approaches provide a method to build a bridge from messy sensor data and actions to high-level planning systems for artificial intelligence. In order to do so, similar to our approach, abstract symbols are used. However, these systems require quite powerful machinery in order to provide the required definitions of pre- and postconditions for planning. In our approach the robot learns a task policy directly, which is augmented by a simple planning-based method for creative behaviour generation.

§ PROBLEM STATEMENT

The goal is to increase the scope of situations in which a skill can be applied by exploiting behaviours. A behaviour b ∈ B maps the complete (and partially unknown) state of the system 𝐞∈ A × E to another state 𝐞'∈ A × E with

b : A × E ↦ A × E

The sets A, E denote the internal states of the robot and the external states of the environment (e.g. present objects) respectively. We aim for autonomous training of a goal-directed behaviour, i.e. a skill. This requires a notion of success, i.e. a success predicate. We define a skill σ = (b^σ, Success^σ) as a pair of a basic behaviour b^σ, i.e. a behaviour that solves the task in a narrow range of situations, and a predicate

Success^σ( b^σ( 𝐞) ) = true

with 𝐞∈ D^σ. The non-empty set D^σ⊆ A × E is the set of all states in which the skill can be applied successfully, i.e. all states in which the fixed success predicate holds. We call the set D^σ the domain of applicability of the skill σ. The goal is to extend the domain of applicability by finding behaviour compositions b_l ∘…∘ b_2 ∘ b_1 with the property

Success^σ( b_l ∘…∘ b_2 ∘ b_1 ∘ b^σ( 𝐞) ) = true

with b_i ∈ B and 𝐞∈ D'^σ⊆ A × E such that D'^σ⊋ D^σ, i.e. the domain of applicability is larger than before. A behaviour composition b_l ∘…∘ b_2 ∘ b_1 ∘ b^σ is a behaviour itself and therefore can be used to extend the domain of applicability of other skills. This way, skills can become more and more complex over time by constructing skill hierarchies.
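To make the problem statement concrete, the following sketch (Python; the type names and the prepare-then-execute reading of the composition are my own illustrative assumptions, not the authors' code) encodes skills as pairs of a basic behaviour and a success predicate, and shows that a composition with preparatory behaviours is again a behaviour.

```python
from dataclasses import dataclass
from typing import Any, Callable

State = Any                              # full state e in A x E (never fully observed)
Behaviour = Callable[[State], State]     # b : A x E -> A x E

@dataclass
class Skill:
    basic: Behaviour                     # b^sigma, works in a narrow range of states
    success: Callable[[State], bool]     # Success^sigma

def compose(*behaviours: Behaviour) -> Behaviour:
    # Chain behaviours left to right: the preparatory chain is applied first,
    # matching the prepare-then-execute semantics described in the text.
    def composed(state: State) -> State:
        for b in behaviours:
            state = b(state)
        return state
    return composed

def skill_as_behaviour(skill: Skill, *prep: Behaviour) -> Behaviour:
    # A skill together with its preparatory chain is itself a behaviour, so
    # it can serve as a preparatory behaviour of another skill; this is how
    # skill hierarchies arise.
    return compose(*prep, skill.basic)

def in_extended_domain(skill: Skill, prep: list, state: State) -> bool:
    # e lies in the extended domain D'^sigma if some preparatory chain makes
    # the success predicate hold after executing the basic behaviour.
    return skill.success(compose(*prep, skill.basic)(state))
```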
This way, skills can become more and more complex over timeby constructing skill hierarchies. § CONTRIBUTIONWe extend an approach for skill learning by autonomous playing introduced by Hangl et al. <cit.>. It uses only one preparatory behaviour per state, i.e. allowing only behaviour compositions of length l = 1, c.f. equation <ref>. This limitation enables the robot to perform model-free exploration due to the reduced search space. Allowing behaviour compositions of length l > 1 causes the learning problem to be intractable, but would help to solve more complex tasks.Approaches dealing with problems of this complexity have to strongly reduce the search space, e.g.by symbolic planning <cit.>. We do not follow a planning- based paradigm in the traditional sense. The playing-based exploration of actions remains the core component of the system. In order to allow behaviour compositions of length l > 1 while still keeping the advantage of a small search space, we introduce a separate model-based system which generates potentially useful behaviour compositions. A forward model of the environment is trained with information acquired during autonomuous play. The environment model is used to generate new behaviour compositions that might be worth to be tried out. The ultimate decision whether a behaviour composition is used, however, is still up to the playing-based system. This way, the advantages of model-free and model-based approaches can be combined: * Behaviour compositions of arbitrary length can be explored without having to deal with the combinatorial explosion of possible behaviour compositions. * No or only weak modelling of the environment is required because the playing-based approach alone is still stable and fully-functional. * Exploration beyond the modelled percept-action space can still be done, e.g. a book flipping action can be used to open a box <cit.>.Proposals for novel preparatory behaviours are considered proportional to their expected usefulness. This enables the robot to first consider more conservative plans and to explore more unorthodox ideas in later stages. We refer to this procedure as creative generation of behaviour proposals. We relate to a principal investigation of creative machines <cit.>, in which robots use a memory to propose combinations of previous experiences in order to exhibit new behavioural patterns.We further exploit the environment model for speeding up the learning process by active learning. The robot can be bored of certain situations and is not only asking for different situations but also prepares them by itself. Whether or not the robot is bored is part of the internal state 𝐞_A ∈ A of the robot, which is made explicit in equation <ref>.We believe that a lifelong learning robot must go through different developmental stages of increasing complexity. Optimally, these stages are not hard-coded to the system but emerge automatically over the course of the robot's life. We extend our original system such that these additional mechamisms are exploited as soon as the robot is ready for it, i.e. the environment model is mature enough. § PRELIMINARIESFor better understanding of the remainder of the paper, we introduce the concept of perceptual states. We further provide a brief description of the core reinforcement learning method used in this paper – projective simulation (PS) <cit.>.§.§ Perceptual states Let 𝐞∈ A × E be the complete physical state of the environment. In practice, it is impossible to estimate 𝐞. 
However, only a facet of 𝐞 is required to successfully perform a task. We use haptic exploration in order to estimate the relevant fraction of 𝐞. A predefined set of sensing actions S is used to gather information. For many tasks only one sensing action s ∈ S is required to estimate the relevant information, e.g. the book's orientation can be determined by sliding along the surface. While the sensing action s is executed, a multi-dimensional sensor data time series M = {𝐭_τ} of duration T with τ∈ [1, …, T] is measured. This time series is not the result of a deterministic process but follows an unknown probability distribution p(M |𝐞, s).

In general, every state 𝐞∈ A × E potentially requires a different action to achieve the task successfully, e.g. how to grasp an object depends on the object pose. However, in many manipulation problems, similar states require a similar or even the same action. In these cases the state space can be divided into discrete classes e, e.g. the four orientations of a book in the book grasping task. We call such a class a perceptual state, denoted e ∈ E^s_σ. Note that the perceptual state space E^s_σ is not to be confused with the state space of the environment E. The probability p(e | M, s, σ) of a perceptual state e being present depends on the measured sensor data M, the sensing action s and the skill σ for which the sensing action s is used, e.g. poking in book grasping means something different than in box opening. The perceptual state spaces of two sensing actions s, s' ∈ S can coincide, partly overlap or be distinct, e.g. sliding along the surface allows the robot to estimate the orientation of a book, whereas poking does not.

§.§ Projective simulation

Projective simulation (PS) <cit.> is a framework for the design of intelligent agents and can be used for reinforcement learning (RL). PS was shown to exhibit competitive performance in several reinforcement learning scenarios ranging from classical RL problems to adaptive quantum computation <cit.>. It is a core component of our method and was chosen due to structural advantages, conceptual simplicity and good extensibility. We briefly describe the basic concepts and the modifications applied in this paper. A detailed investigation of its properties can be found in <cit.>.

Roughly speaking, the PS agent learns the probability distribution p(b |λ,𝐞) of executing a behaviour b (e.g. a preparatory behaviour) given the observed sensor data λ (e.g. a verbal command regarding which skill to execute) in order to maximise a given reward function r( b, λ,𝐞). In this paper, reward is given if Success^σ( b ∘ b^σ( 𝐞) ) = true, given a command λ to execute skill σ in the present environment state 𝐞. Note that the state 𝐞 is never observed directly. Instead, perceptual states are estimated throughout the skill execution.

In general, the core of the PS agent is the so-called episodic and compositional memory (ECM). An exemplary sketch of an ECM is shown in Fig. <ref>. It stores fragments of experience, so-called clips, and connections between them. Each clip represents a previous experience, i.e. percepts and actions. The distribution p(b |λ,𝐞) is updated after a rollout, i.e. observing a percept, choosing and executing a behaviour according to p(b |λ,𝐞), and receiving reward from the environment. The distribution p(b |λ,𝐞) is implicitly specified by assigning transition probabilities p_c → c' = p ( c' | c ) to all pairs of clips ( c, c' ) (in Fig. <ref> only transitions with probability p_c → c'≠ 0 are visualised).
Given a certain percept clip, i.e. a clip without inbound transitions like clips 1 and 2, the executed behaviour clip, i.e. a clip without outbound transitions like clips 7 and 8, is selected by a random walk through the ECM. A random walk is done by hopping from clip to clip according to the respective transition probabilities until a behaviour is reached. Clips are discrete whereas sensor data is typically continuous, e.g. voice commands. A domain-specific input coupler distribution I(c_p |λ, 𝐞) modelling the probability of observing a discrete percept clip c_p given an observed signal λ is required. The distribution p(b |λ, 𝐞) is given by a random walk through the ECM with

p(b |λ, 𝐞) = ∑_c_p( I(c_p |λ, 𝐞) ∑_w ∈Λ(b, c_p) p(b | c_p, w) )

where p(b | c_p, w) is the probability of reaching behaviour b from percept c_p via the path w = ( c_p = c_1, c_2, …, c_K = b ). The set Λ(b, c_p) is the set of all paths from the percept clip c_p to the behaviour clip b. The path probability is given by

p(b | c_p, w) = ∏_j = 1^K - 1 p( c_j + 1 | c_j )

The agent learns by adapting the probabilities p_c → c' according to the received reward (or punishment) r ∈ℝ. The transition probability p_c → c' from a clip c to another clip c' is specified by the abstract transition weights h ∈ℝ^+ with

p_c → c' = p ( c' | c ) = h_c → c'/∑_ĉ h_c →ĉ

After each rollout, all weights h_c → c' are updated. Let w be a random walk path with reward r^(t)∈ℝ at time t. The transition weights are updated according to

h^t + 1_c → c' = max(1, h^t_c → c' - ζ(h^t_c → c' - 1 ) + ρ( c, c', w ) r^(t))

where ρ(c, c', w) is 1 if the path w contains the transition c → c' and 0 otherwise. The forgetting factor ζ defines the rate with which the agent forgets previously learned policies.

§ SKILL LEARNING BY ROBOTIC PLAYING

The following section describes the method for autonomous skill acquisition by autonomous playing on which this work is based <cit.>. The sections <ref> – <ref> present extensions that run in parallel and augment the autonomous playing.

§.§ ECM for robotic playing

A skill σ is executed by a random walk through the layered ECM shown in Fig. <ref>. It consists of the following layers:
* Input couplers: Input couplers map user commands about which skill to execute to the corresponding skill clip. The percept of this ECM is not the state of the environment, but the command of which skill to execute.
* Desired skills: Each clip σ, i.e. a percept clip, represents a skill the robot is able to perform.
* Sensing actions: Each clip s ∈ S corresponds to one sensing action. All skills share the same sensing actions.
* Perceptual states: Each clip e ∈ E_σ^s corresponds to a perceptual state under the sensing action s for the skill σ. Note that the perceptual states are different for each skill-sensing action pair (σ, s) and typically do not have the same semantics, e.g. the states under sensing action s ∈ S might identify the object pose, whereas the states under s' ∈ S might denote the object's concavity.
* Preparatory behaviours: Each clip corresponds to a behaviour which can be atomic (solid transitions) or another trained skill (dashed transitions). Since the basic behaviour b^σ of a skill was shown to the robot in one perceptual state, there is at least one state that does not require preparation. Therefore, the void-behaviour b_∅, in which no preparation is done, is in the set of behaviours.

The robot holds the sets of skills {σ = (b^σ, Success^σ) }, sensing actions S (e.g. sliding, poking, pressing) and preparatory behaviours B (e.g. pushing).
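Before describing how a skill execution walks through this layered ECM, here is a minimal sketch of the generic PS random walk and weight update from the preceding subsection (illustrative Python, not the authors' implementation; the clip names and the toy topology are assumptions):

```python
import random

class ECM:
    # Clips are arbitrary hashable labels; a clip with no outgoing
    # transitions is treated as a behaviour clip.
    def __init__(self, forgetting=0.01):
        self.h = {}              # transition weights h_{c -> c'}
        self.succ = {}           # clip -> list of successor clips
        self.zeta = forgetting   # forgetting factor

    def add_transition(self, c, c_next, h_init=1.0):
        self.h[(c, c_next)] = h_init
        self.succ.setdefault(c, []).append(c_next)

    def random_walk(self, percept):
        # Hop from clip to clip with p_{c->c'} = h_{c->c'} / sum_c'' h_{c->c''}
        # until a clip without outbound transitions is reached.
        path, c = [], percept
        while c in self.succ:
            nxt = self.succ[c]
            weights = [self.h[(c, n)] for n in nxt]
            n = random.choices(nxt, weights=weights)[0]
            path.append((c, n))
            c = n
        return c, path

    def update(self, path, reward):
        # h <- max(1, h - zeta*(h - 1) + rho*r): every weight decays towards
        # 1 (forgetting) and transitions on the walked path collect reward.
        walked = set(path)
        for edge, h in self.h.items():
            bonus = reward if edge in walked else 0.0
            self.h[edge] = max(1.0, h - self.zeta * (h - 1.0) + bonus)

# Toy usage: one skill clip with two sensing-action clips below it.
ecm = ECM()
ecm.add_transition("grasp_book", "slide", h_init=2.0)
ecm.add_transition("grasp_book", "poke", h_init=1.0)
clip, path = ecm.random_walk("grasp_book")
ecm.update(path, reward=10.0 if clip == "slide" else 0.0)
```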
A skill is executed by performing a random walk through the ECM and by performing the actions along the path. The idle robot waits for a skill execution command λ which is mapped to skill clips in Layer B by coupler functions, e.g. I_kb and I_sp mapping a keyboard input / voice commands to the desired skill clip σ. A sensing action s ∈ S is chosen and executed according to the transition probabilities and a sensor data time series M is measured. The perceptual state e ∈ E_σ^s is estimated from M. This transition is done deterministically by a classifier and not randomly as in the steps before. Given the perceptual state e, the environment is prepared by executing a behaviour b ∈ B. Finally, the basic behaviour b^σ is executed. If the basic behaviour of a skill requires an object to be grasped, only the sensing action weighing is available in order to estimate whether an object is grasped. We stress that this is only a restriction enforced due to practical considerations and is not required in principle.

§.§ Skill Training

A novel skill σ = (b^σ, Success^σ) is trained by providing the basic behaviour b^σ for a narrow range of situations, e.g. by hard coding or learning from demonstration <cit.>. The domain of applicability is extended by learning:
* which sensing action should be used to estimate the relevant perceptual state;
* how to estimate the perceptual state from haptic data;
* which preparatory behaviour helps to achieve the task in a given perceptual state.

The skill ECM (Fig. <ref>) is initialised in a meaningful way (sections <ref>, <ref>) and afterwards refined by executing the skills and collecting rewards, i.e. autonomous playing.

§.§.§ Haptic database creation

In a first step, the robot creates a haptic database by exploring how different perceptual states feel, cf. problem <ref>. It performs all sensing actions s ∈ S several times in all perceptual states e^s, acquires the sensor data M and stores the sets {( e^s, s, { M }) }. With this database the distribution p(e | M, s, σ) (section <ref>) can be approximated and a perceptual state classifier is trained.

There are two ways of preparing different perceptual states. Either the supervisor prepares the different states (e.g. all four book poses) or the robot is provided with information on how to prepare them autonomously (e.g. rotate by 90^∘ produces all poses). In the latter case the robot assumes that after execution of the behaviour a new perceptual state e' is present and adds it to the haptic database. This illustrates three important assumptions: the state e^s ∈ E^s_σ is invariant under the sensing action s ∈ S (e.g. the book's orientation remains the same irrespective of how often sliding is executed) but not under preparatory behaviours b ∈ B (e.g. the book's orientation changes by using the rotate 90^∘ behaviour), which yields

e^s →_s e^s →_s e^s →_b e'^s

Further we do not assume that a sensing action s' leaves the perceptual state e^s of another sensing action s unchanged (e.g. sliding softly along a tower made of cups does not change the position of the cups whereas poking from the side may cause the tower to collapse). This insight is reflected by the example

e^s →_s e^s →_s ⋯ →_s e^s →_s' e'^s

§.§.§ ECM Initialisation

The ECM in Fig. <ref> is initialised with the uniform transition weights h_init except for the weights between layers B and C. These weights are initialised such that the agent prefers sensing actions s ∈ S that can discriminate well between their environment states e^s ∈ E^s_σ.
After the generation of the haptic database the robot performs cross-validation for the perceptual state classifier of each sensing action s ∈ S and computes the average success rate r_s. A discrimination score D_s is computed by

D_s = exp( α r_s )

with the free parameter α called the stretching factor. The higher the discrimination score, the better the sensing action can classify the corresponding perceptual states. Therefore, sensing actions with a high discrimination score should be preferred over sensing actions with a lower score. The transition weights between all pairs of the skill clip σ and the sensing action clips s ∈ S are initialised with h_σ → s = D_s. We use a C-SVM classifier implemented in LibSVM <cit.> for state estimation.

§.§.§ Extending the domain of applicability

The domain of applicability of a skill σ is extended by running the PS as described in section <ref> on the ECM in Fig. <ref>. The robot collects reward after each rollout and updates the transition probabilities accordingly. Skills are added as preparatory behaviours of other skills as soon as they are well-trained, i.e. the average reward r̅ over the last t_thresh rollouts reaches a threshold r̅ ≥ r_thresh. This enables the robot to create increasingly complex skill hierarchies. The complete training procedure of a skill σ is shown in Fig. <ref>. Only the non-shaded parts and solid transitions are available in this basic version.

§.§ Properties and extensions

A strong advantage is that state-of-the-art research on object manipulation can be embedded by adding the controllers to the set of behaviours. Algorithms for specific problems (e.g. grasping, pushing <cit.>) can be re-used in a bigger framework that orchestrates their interaction. In the basic version the state space is comparatively small, which enables the robot to learn skills without an environment model. Further, it enables the robot to learn quickly while still preserving the ability to learn quite complex skills autonomously. However, the lack of an environment model can be both an advantage and a disadvantage. Testing a hypothesis directly on the environment enables the robot to apply behaviours outside of the intended context (e.g. a book flipping behaviour might be used to open a box <cit.>). This is hard to achieve with model-based approaches if the modeled domain of a behaviour cannot properly represent the relevant information. On the other hand, the lack of reasoning abilities limits the learning speed and the complexity of solvable problems. We overcome this problem by additionally learning an environment model from information acquired during playing. The robot learns a distribution of the effects of behaviours on given perceptual states by re-estimating the state after execution. We use the environment model for two purposes: active learning and creative generation of novel preparatory behaviours. The basic version intrinsically assumes that all required preparatory behaviours are available. This constitutes a strong prior and limits the degree of autonomy. We weaken this requirement by allowing the robot to creatively generate potentially useful combinations of behaviours. These are made available to the playing system, which tries them out. Further, experiments showed that the learning speed was decreased by performing rollouts in situations that were already solved before. We use the environment model to implement active learning. Instead of asking a supervisor to prepare interesting situations, the robot prepares them by itself.
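Returning to the initialisation of the skill-to-sensing weights, the discrimination score is a one-liner in code. In the sketch below the rates for slide, press and no sensing are taken from values reported later in the paper, while the poke rate is invented for illustration.

```python
import math

def init_sensing_weights(cv_success_rates, alpha=5.0):
    """h_{sigma -> s} = D_s = exp(alpha * r_s); alpha is the stretching
    factor. `cv_success_rates` maps each sensing action to its average
    cross-validated classification rate r_s (our sketch)."""
    return {s: math.exp(alpha * r) for s, r in cv_success_rates.items()}

# slide/press/no_sensing rates as reported in the experiments; poke is made up
rates = {"slide": 1.0, "press": 0.9, "poke": 0.6, "no_sensing": 0.5}
weights = init_sensing_weights(rates)
# With alpha = 5, 'slide' gets weight e^5 ~ 148 versus e^2.5 ~ 12 for
# 'no_sensing', so well-discriminating sensing actions dominate the walk.
```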
§ LEARNING AN ENVIRONMENT MODEL

The environment model predicts the effect, i.e. the resulting perceptual state, of a behaviour on a given perceptual state. An environment model is the probability distribution p(e'^s | e^s, b, σ) where e^s, e'^s ∈ E^s_σ are perceptual states of the sensing action s ∈ S for a skill σ, and b ∈ B is a behaviour. It denotes the probability of the transition e^s →^b e'^s. The required training data is acquired by re-executing the sensing action s after applying the behaviour b, c.f. shaded center part in Fig. <ref>. Given a playing sequence σ →^s e^s →^b e'^s (c.f. Fig. <ref>) the effect can be observed by re-executing s with

σ →^s e^s →^b e'^s →^s e'^s

The assumptions in equations <ref> - <ref> forbid additionally executing other sensing actions s' ∈ S without influencing the playing-based method. This limitation prevents the robot from learning more complex environment models as done in related work <cit.>, e.g. capturing transitions between perceptual states of different sensing actions. However, the purpose of the environment model is not to perform precise plans but to feed the core playing component with new ideas. We represent the distribution p(e'^s | e^s, b, σ) by another ECM for each skill - sensing action pair (σ, s) as shown in Fig. <ref>. The percept clips consist of pairs (e^s, b) of perceptual states e^s ∈ E^s_σ and preparatory behaviours b ∈ B. The target clips are the possible resulting states e'^s ∈ E^s_σ. The environment model is initialised with uniform weights h^env_(e^s, b) → e'^s = 1. When a skill σ is executed using the path in equation <ref>, a reward of r^env ∈ ℝ^+ is given for the transition

( e^s, b ) → e'^s

and the weights are updated accordingly, c.f. equation <ref>. When a novel preparatory behaviour b_K + 1 is available for playing, e.g. a skill is well-trained and is added as a preparatory behaviour, it is included into the environment models for each skill - sensing action pair (σ, s) by adding clips (e^s, b_K + 1) for all states e^s ∈ E^s_σ and by connecting them to all e'^s ∈ E^s_σ with the uniform initial weight h_init^env = 1. We employ a practical restriction on the scope of the environment model. The additional sensing action execution is only done if the grasp outcome of the selected preparatory behaviour and the grasp requirement of the sensing action match, e.g. if the preparatory behaviour grasps the object, but the sensing action was sliding, re-execution of the sensing action would destroy the grasp and is not done.

§ AUTONOMOUS ACTIVE LEARNING

In the basic version an optimal selection of observed perceptual states is required in order to learn the correct behaviour in all possible states, i.e. in a semi-supervised setting a human supervisor should mainly prepare unsolved perceptual states. This would require the supervisor to have knowledge about the method itself and about the semantics of perceptual states, which is an undesirable property. Instead, we propose to equip the robot with the ability to reject perceptual states in which the skill is well-trained already. In an autonomous setting, this is not sufficient as it would just stall the playing. The robot has to prepare a more interesting state autonomously. We propose to plan state transitions by using the environment model in order to reach states which (i) are interesting and (ii) can be prepared with high confidence. We can draw a loose connection to human behaviour.
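Before making this analogy precise, note that the environment model just defined reduces to a small table of weights. The sketch below is our scaffolding; using a constant reward r_env per observed transition is our simplification, not necessarily the authors' choice.

```python
class EnvironmentModel:
    """p(e' | e, b) for one (skill, sensing action) pair, stored as an ECM.
    Weights start uniform (h_init = 1) and the observed transition is
    rewarded after each rollout (our sketch)."""

    def __init__(self, states, behaviours, h_init=1.0):
        self.states = list(states)
        self.h = {(e, b): {e2: h_init for e2 in self.states}
                  for e in self.states for b in behaviours}

    def observe(self, e, b, e_next, r_env=1.0):
        # reward the transition (e, b) -> e_next, cf. the PS update rule
        self.h[(e, b)][e_next] += r_env

    def p(self, e, b):
        # normalised transition distribution p(e' | e, b)
        row = self.h[(e, b)]
        total = sum(row.values())
        return {e2: w / total for e2, w in row.items()}

    def add_behaviour(self, b_new, h_init=1.0):
        # a newly trained skill enters with uniform weights h_init
        for e in self.states:
            self.h[(e, b_new)] = {e2: h_init for e2 in self.states}
```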
In that spirit, we call the rejection of well-known states boredom.

§.§ Boredom

The robot may be bored in a given perceptual state if it is confident about the task solution, i.e. if the distribution of which preparatory behaviour to select is highly concentrated. In general, every function reflecting uncertainty can be used. We use the normalised Shannon entropy to measure the confidence in a perceptual state e ∈ E^s_σ, given by

Ĥ_e = H( b | e ) / H_max = -∑_b' ∈ B p( b = b' | e ) log_2 p( b = b' | e ) / log_2 J

where J is the number of preparatory behaviours. If the entropy is high, the robot either has not learned anything yet (and therefore all the transition weights are close to uniform) or it observes the degenerate case that all preparatory behaviours deliver (un)successful execution (in which case there is nothing to learn at all). If the entropy is low, only a few transitions are strong, i.e. the robot knows well how to handle this situation. We use the normalised entropy to define the probability of being bored in a state e ∈ E^s_σ with

p( bored = true | e ) = 1 - β Ĥ_e

The constant β ∈ [ 0, 1 ] defines how immune the agent is to boredom. The robot samples according to p( bored | e ) and decides whether to refuse the execution.

§.§ Transition Confidence

If the robot is bored in a perceptual state e' ∈ E^s_σ, it autonomously tries to prepare a more interesting state ê ∈ E^s_σ. This requires the notion of a transition confidence for which the environment model can be used. We aim to select behaviours conservatively, which allows the robot to be certain about the effect of the transition. We do not use the probability of reaching one state from another directly, but use a measure considering the complete distribution p(e | e', b). By penalising the normalised Shannon entropy, we favour deterministic transitions. For each state-action pair (e', b) in Fig. <ref> we define the transition confidence ν^s_e' b by

ν^s_e' b = 1 - H( e | (e', b) ) / H_max = 1 - H( e | (e', b) ) / log_2 L_σ, s

where e' ∈ E^s_σ, b ∈ B, and L_σ, s is the number of perceptual states under the sensing action s ∈ S, i.e. the number of children of the clip ( e', b ). In contrast to the entropy computed in section <ref>, the transition confidence is computed on the environment model, c.f. Fig. <ref>. The successor function su(e, b) returns the most likely resulting outcome of executing behaviour b in a perceptual state e ∈ E^s_σ and is defined by

su(e, b) = argmax_e' p_(e, b) → e'

In practice, single state transitions are not sufficient. For paths e = e^s_1 → e^s_2 = su(e^s_1, b_1) → ⋯ → su(e^s_L-1, b_L-1) = e^s_L = e' of length L we define the transition confidence with

ν^s_e 𝐛 = ∏_l = 1^L - 1 ν^s_e_l b_l

where the vector 𝐛 = ( b_1, b_2, …, b_L - 1 ) denotes the sequence of behaviours. This is equivalent to a greedy policy, which provides a more conservative estimate of the transition confidence and eliminates consideration of transitions that could occur by pure chance. A positive side effect is the efficient computation of equation <ref>. Only the confidence of the most likely path is computed instead of iterating over all possible paths. The path 𝐛 is a behaviour itself and the successor is given by su(e, 𝐛) = su(e^s_L-1, b_L-1).

§.§ Active Learning

If the robot encounters a boring state e ∈ E^s_σ, the goal is to prepare the most interesting state that can actually be produced.
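These two quantities — the boredom test on the playing ECM and the greedy transition confidence on the environment model — are compact to implement. The sketch below is our own, reusing the EnvironmentModel interface from the previous sketch; the desirability objective built from these pieces follows in the text.

```python
import math, random

def normalised_entropy(dist):
    """H(p) / log2(K) in [0, 1]; `dist` maps outcomes to probabilities
    (K >= 2 assumed, matching at least two states/behaviours)."""
    h = -sum(p * math.log2(p) for p in dist.values() if p > 0)
    return h / math.log2(len(dist))

def is_bored(p_behaviour_given_e, beta):
    """Sample from p(bored | e) = 1 - beta * H_hat_e (our sketch)."""
    return random.random() < 1.0 - beta * normalised_entropy(p_behaviour_given_e)

def transition_confidence(env_model, e, b):
    """nu = 1 - H(e' | (e, b)) / log2(L): near 1 for deterministic effects."""
    return 1.0 - normalised_entropy(env_model.p(e, b))

def path_confidence(env_model, e, behaviours):
    """Greedy product of confidences along the most likely path; also
    returns the predicted final state su(e, b)."""
    conf = 1.0
    for b in behaviours:
        conf *= transition_confidence(env_model, e, b)
        dist = env_model.p(e, b)
        e = max(dist, key=dist.get)  # successor su(e, b)
    return conf, e
```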
We maximise the desirability function given by

( 𝐛, L ) = argmax_𝐛, L [ Ĥ_su(e, 𝐛) ν_e𝐛 + ϵ / cost( 𝐛 ) ]

where Ĥ_su(e, 𝐛) is the entropy of the expected final state and ν_e𝐛 is the confidence of reaching the final state by the path 𝐛. The balancing factor ϵ defines the relative importance of the desirability and the path cost. The path cost cost( 𝐛 ) can be defined by the length of the path L, i.e. penalising long paths, or, for instance, by the average execution time of 𝐛. Equation <ref> balances between searching for an interesting state while making sure that it is reachable. In practice it can be optimised by enumerating all paths of reasonable length, e.g. L < L_max, with typical values of L_max ≤ 4. The basic method is extended by sampling from the boredom distribution after the state estimation. If the robot is bored, it optimises the desirability function and executes the transition to a more interesting state. This is followed by restarting the skill execution with boredom turned off in order to avoid boredom loops, c.f. right shaded box in Fig. <ref>.

§ CREATIVE BEHAVIOUR GENERATION

In many cases, the required preparatory behaviour is a combination of other available behaviours, e.g. rotate 180^∘ ≡ rotate 90^∘ + rotate 90^∘. Without using some sort of intelligent reasoning, the space of concatenated behaviours explodes and becomes intractable. However, any sequence of behaviours that transfers the current unsolved state to a target state, i.e. a state which does not require any preparation, is potentially useful as a compound behaviour itself. Sequences can be generated by planning transitions to target states. If the robot is bored, it uses active learning; if not, the situation is not solved yet and novel compound behaviours might be useful. A perceptual state e_∅ ∈ E^s_σ is a target state if the transition with the highest probability in the playing ECM (Fig. <ref>) leads to the void-behaviour with probability p_e_∅ → b_∅. If there exists a path e^s →^𝐛 e_∅ from the current perceptual state e^s ∈ E^s_σ to a target state e_∅, the sequence 𝐛 = ( b_1, …, b_L ) is a candidate for a novel behaviour. The robot is curious about trying out the novel compound behaviour 𝐛 if the transition confidence ν_e^s 𝐛, with su(e^s, 𝐛) = e_∅, and the probability p_e_∅ → b_∅ of the state actually being a real target state are both high. This is measured by the curiosity score of the compound behaviour given by

cu(e^s, 𝐛) = ν_e^s 𝐛 p_su(e^s, 𝐛) → b_∅

The factor p_su(e^s, 𝐛) → b_∅ reduces the score in case the state e_∅ is a target state with low probability. This can happen if in previous rollouts all other behaviours were executed and were punished. We use a probability instead of a confidence value to allow creativity even in early stages where a target state was not identified with a high probability. The compound behaviour with the highest score is added as novel behaviour b_J + 1 = 𝐛 with the probability given by squashing the curiosity score into the interval [0, 1] with

p( add b_J + 1 = 𝐛 | e^s ) = sig[ γ cu( e^s, 𝐛 ) + δ ]

where sig is the logistic sigmoid. The parameters γ, δ define how conservatively novel behaviour proposals are created. The novel behaviour b_J + 1 is added as preparatory behaviour for all perceptual states under the current skill σ with the weights

h_e → b_J + 1 = h_init [ 1 + cu( e^s, b_J + 1 ) ] if e = e^s, and h_e → b_J + 1 = h_init otherwise.

It is added with at least the initial weight h_init, but increased proportionally to the curiosity score for the current perceptual state e^s ∈ E^s_σ, c.f. Fig. <ref>.
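Active learning and the curiosity score can then be sketched on top of path_confidence from the previous sketch; `entropy_of_state`, `p_void` and the parameter values are our illustrative assumptions, not values from the paper.

```python
from itertools import product

def most_desirable_target(env_model, e, behaviours, entropy_of_state,
                          eps=0.2, L_max=4):
    """argmax over short paths b of  H_hat(su(e, b)) * nu(e, b) + eps / L,
    with path length L as the cost; `entropy_of_state(e')` is assumed to
    return the normalised behaviour entropy read off the playing ECM."""
    best, best_score = None, float("-inf")
    for L in range(1, L_max):                       # enumerate paths, L < L_max
        for path in product(behaviours, repeat=L):
            conf, e_final = path_confidence(env_model, e, path)
            score = entropy_of_state(e_final) * conf + eps / L
            if score > best_score:
                best, best_score = (path, e_final), score
    return best

def curiosity(env_model, p_void, e, path):
    """cu(e, b) = nu_{e,b} * p_{target -> void}; `p_void(e')` is assumed
    to return p_{e' -> b_void} from the playing ECM."""
    conf, e_final = path_confidence(env_model, e, path)
    return conf * p_void(e_final)
```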
The novel behaviour is also inserted into the environment model of each sensing action s ∈ S. For each perceptual state e ∈ E^s_σ, a clip (e, b) is added and connected to the clips e' ∈ E^s_σ in the second layer with the weights

h_( e, b ) → e' = h_min( b ) if e = e^s, e' = su(e^s, b) and b = b_J + 1, and h_( e, b ) → e' = h^env_init otherwise,

where h_min( b_J + 1 ) = h_min( 𝐛 ) is the minimum transition value on the path 𝐛 through the environment model, following the idea that a chain is only as strong as its weakest link. The weights of all other transitions are set to the initial weight h^env_init, c.f. Fig. <ref>.

§ EXPERIMENTS

We evaluate our method using a mix of simulated and real-world experiments. Our real-world experiments cover a wide range of skills to show the expressive power. We show how skill hierarchies are created within our framework. Success statistics of the single components (sensing accuracy, success rate of preparatory behaviours, success rate of basic behaviours) were used to assess the convergence behaviour by simulation. Table <ref> lists the used parameter values. We execute all skills and behaviours in impedance mode in order to prevent damage to the robot. Further, executed behaviours are stopped if a maximum force is exceeded. This is a key aspect for model-free playing, which enables the robot to try out arbitrary behaviours in arbitrary tasks.

§.§ Experimental Setup

The robot setting is shown in Fig. <ref>. For object detection a Kinect mounted above the robot is used. All required components and behaviours are implemented with the kukadu robotics framework[<https://github.com/shangl/kukadu>]. The perceptual states are estimated from joint positions, Cartesian end-effector positions, joint forces and Cartesian end-effector forces / torques. Objects are localised by removing the table surface from the point cloud and fitting a box by using PCL. Four controllers implement the available preparatory behaviours:
* Void behaviour: The robot does nothing.
* Rotation: The object is rotated by a circular finger movement around the object's center. The controller can be parametrised with the approximate rotation angle.
* Flip: The object is squeezed between the hands and one hand performs a circle with the radius of the object in the XZ-plane which yields a vertical rotation.
* Simple grasping: The gripper is positioned on top of the object and the fingers are closed.
The haptic database consists of at least 10 samples per perceptual state. Before sensing, the object is pushed back to a position in front of the robot. We use the following sensing actions:
* No Sensing: Some tasks do not require any prior sensing and have only one state. The discrimination score is computed with a success rate of r_s = 0.5, c.f. equation <ref>.
* Slide: A finger is placed in front of the object. The object is pushed towards the finger with the second hand until contact or until the hands get too close to each other (safety reasons). Sensing is done by bending the finger.
* Press: The object is pushed with one hand towards the second hand until the force exceeds a certain threshold.
* Poke: The object is poked from the top with a finger.
* Weigh: Checks a successful grasp by measuring the z-component of the Cartesian force. The perceptual states are fixed, i.e. not grasped / grasped.

§.§ Real-world tasks

We demonstrate the generality of our method in several scenarios. Each skill can use the described preparatory behaviours, and additionally, the skills trained before.
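Before turning to the individual tasks, the weakest-link insertion described above can be sketched as follows, building on the EnvironmentModel class from the earlier sketch (again our scaffolding, not the paper's code).

```python
def insert_compound_behaviour(env_model, b_new, path, e_start, h_env_init=1.0):
    """Insert the compound behaviour b_new = `path` into an environment
    model. The predicted transition e_start -> su(e_start, path) gets the
    weakest-link weight h_min(path); everything else starts at h_env_init.
    `path` is assumed non-empty (our sketch)."""
    h_min, e = float("inf"), e_start
    for b in path:
        dist = env_model.p(e, b)
        e_next = max(dist, key=dist.get)              # su(e, b)
        h_min = min(h_min, env_model.h[(e, b)][e_next])
        e = e_next
    e_final = e
    # connect the new (state, behaviour) clips with uniform initial weights
    env_model.add_behaviour(b_new, h_init=h_env_init)
    # except for the one transition the plan predicts, which gets h_min
    env_model.h[(e_start, b_new)][e_final] = h_min
    return e_final
```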
If not stated otherwise, all basic behaviours are dynamic movement primitives (DMPs) <cit.> trained by kinesthetic teaching. A video of the trained skills including a visualisation of the generated skill hierarchies can be viewed online[<https://iis.uibk.ac.at/public/shangl/tro2017/hangl_roboticplaying.mp4>] and is included in the supplementary material of this paper. Note that only the skills and behaviours with non-zero probabilities are shown in the hierarchies. The training of skills does not look different from the training in the basic method except for the additional execution of the sensing action after the performed preparation[<https://iis.uibk.ac.at/public/shangl/iros2016/iros.mpg>].

§.§.§ Simple placement

The task is to pick an object and place it in an open box on the table. The basic behaviour is a DMP that moves the grasped object to the box, where the hand is opened. In this case, the used sensing action is weigh, c.f. <ref>. After training, the simple grasp / void behaviour is used if the object is not grasped / grasped, respectively.

§.§.§ Book grasping

The basic behaviour grasps a book as described in section <ref>. The perceptual states are the four orientations of the book. After training, the robot identified sliding as a useful sensing action to estimate the book's rotation. The skill is trained with and without using creativity. Without creativity, the available preparatory behaviours are the void-behaviour, rotate 90^∘, rotate 180^∘, rotate 270^∘, and flip. The rotation and void behaviours are used for different rotations of the book. In the creativity condition, the behaviours rotate 180^∘ and rotate 270^∘ are removed from the set of preparatory behaviours. The robot creates these behaviours by composing rotate 90^∘ two / three times respectively.

§.§.§ Placing object in a box

The task is to place an object inside a box that can be closed. The basic behaviour is to grasp an object from a fixed position and drop it inside an open box. The perceptual states determine whether the box is open or closed. After training, the robot identifies poke as a good sensing action. The flip behaviour is used to open the closed box and the void-behaviour is used if the box is open.

§.§.§ Complex grasping

The task is to grasp objects of different sizes. We use the void-behaviour as the skill's basic behaviour. This causes the robot to combine behaviours without additional input from the outside. The perceptual states correspond to small and big objects. After training, sliding is determined as the best sensing action. The simple grasp / book grasping behaviour is used for small / big objects respectively.

§.§.§ Shelf placement

The task is to place an object in a shelving bay, which is executed using a DMP. The robot uses the weigh sensing action to determine whether or not an object is already grasped. The complex grasp skill / void behaviour is used if the object is not grasped / grasped, respectively. Note that training of this skill can result in a local maximum, e.g. by choosing the behaviours simple grasp or book grasp, in particular if the reward is chosen too high.

§.§.§ Shelf alignment

The task is to push an object on a shelf towards the wall to make space for more objects. The basic behaviour is a DMP moving the hand from the left end of the shelf bay to the right end until a certain force is exceeded. As there is no object in front of the robot, all sensing actions except no sensing fail.
The sensing action with the strongest discrimination score is no sensing with only one perceptual state and shelf placement as preparatory behaviour.

§.§.§ Tower disassembly

The task is to disassemble a stack of maximum three boxes. The basic behaviour is the void behaviour. The perceptual states correspond to the number of boxes in the tower. Reward is given in case the tower is completely disassembled. After training, the used sensing action is poking to estimate four different states, i.e. height h ∈ { 0, 1, 2, 3 }. The tower cannot be removed with any single available preparatory behaviour. Instead, using the creativity mechanism, the robot generates combinations of simple placement, shelf placement and shelf alignment of the form given by the expression

simple placement^* [void|shelf placement|shelf alignment]

§.§ Discussion of the real-world tasks

A strong advantage of model-free playing is the ability to use behaviours beyond their initial purpose. The flip behaviour is implemented to flip an object but is used to open the box in the box placement task. This holds for sensing actions as well: sliding is used for estimating the object size for complex grasping instead of the expected pressing, from which the object size could be derived from the distance between the hands. Both sensing actions deliver a high success rate with r_s^pressing ≈ 0.9 and r_s^sliding ≈ 1.0. The high success rate of sliding is an artifact of the measurement process. The object is pushed towards the second hand until the hands get too close to each other. For small objects, the pushing stops before the finger touches the object. This always produces the same sensor data for small objects, which makes it easy to distinguish small from big objects. In the tower disassembly task an important property can be observed. The generated behaviour compositions of the form given in equation <ref> only contain the skills shelf placement and shelf alignment at the end of the sequence. The reason is that these skills can only remove a box in a controlled way if only one box is left, i.e. h = 1. Higher towers are made to collapse because of the complex grasping skill, which is used by shelf placement. It uses sliding to estimate the object's size and therefore pushes the tower around. Further, which behaviour sequence is generated depends on the subjective history of the robot, e.g. the sequences (simple placement, simple placement, simple placement) and (shelf placement, simple placement, simple placement) both yield success for h = 3. The autonomy of our approach can also be reduced in such a scenario, as several behaviours destroy the tower and require a human to prepare it again. This involves including a human in the playing loop, in particular if the required states cannot be prepared by the robot itself. Similarly, the active learning and creativity mechanisms do not always yield improvements. Active learning only causes a speedup if the unsolved perceptual states can be produced from solved ones, e.g. if the closed state is solved before the open state. The robot is only able to prepare the transition closed → open. The transition open → closed requires closing the cover, which is not among the available behaviours. The creativity mechanism does not improve learning if the required behaviours are already available, e.g. in box placement or shelf placement, or cannot be composed of other behaviours.
However, it helps to solve book grasping and tower disassembly. We emphasise that the teaching of novel skills does not necessarily have to follow the typical sequence of sensing → preparation → basic behaviour, e.g. in complex grasping and shelf alignment. In the complex grasping task the basic behaviour is the void-behaviour, which causes the robot to coordinate different grasping procedures for small and big objects. For shelf alignment, the sensing stage is omitted.

§.§ Simulated skill learning

Single experiments cannot be used to assess the overall convergence behaviour. We use the experiences gained in the real-world book grasping task to simulate the convergence behaviour. We use a success rate of 95 percent for all involved controllers. The environment is simulated with ground truth state transitions observed in the real-world experiment. For the failure cases, i.e. 5 percent of the executions of each executed action, we simulate a random resulting perceptual state. The evolution of success is simulated and averaged for N = 1000 robots for different numbers of preparatory behaviours. The minimum number of preparatory behaviours is J = 5, i.e. void, rotate 90^∘, rotate 180^∘, rotate 270^∘, flip. We simulate a scenario in which additional preparatory behaviours are useless, i.e. the perceptual state is not changed. In this case, the problem gets harder due to a larger set of behaviours, while the number of appropriate behaviours remains the same. In the scenario with activated creativity the agent is only provided with the behaviours rotate 90^∘, flip and void. The number of rollouts required to reach a success rate of at least 90 percent is given in Table <ref> for an increasing number of behaviours J and different variants of our method (N_no_ext ≡ no active learning / no creativity, N_active ≡ active learning / no creativity, N_creative ≡ active learning / creativity, N_base ≡ baseline). As baseline we use a policy in which every combination of perceptual states and behaviours is tried out only once, with N_base = 3 * 4 * J + J (3 sensing actions with 4 states, 1 sensing action, i.e. no sensing, with only one state). In general, our method converges faster than the baseline due to strongly reducing the search space and ignoring irrelevant parts of the ECM. Further, the baseline method would not yield convergence in a scenario with possible execution failures as each combination is executed only once. The baseline approach also cannot solve the task in the creativity condition. The two versions without creativity, i.e. without and with active learning, show a continuous increase of the success rate in Figs. <ref> and <ref>. If the robot is bored, situations with a low information gain are rejected. Therefore, the version with active learning is expected to converge faster. Fig. <ref> shows the number of required rollouts to reach a success rate of 90 percent for each of the three variants. The number of required rollouts is proportional to the number of available preparatory behaviours. We apply a linear fit and gain an asymptotic speed-up of sp = 1 - lim_x → ∞ (k_1 x + d_1)/(k_2 x + d_2) ≈ 9 percent for the variant with active learning compared to the variant without extensions. In the scenario with activated creativity the convergence behaviour is different, c.f. Fig. <ref>. The success rate exhibits a slow start followed by a fast increase and a slow convergence towards 100 percent.
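(The slow start in the creativity condition is analysed next.) The simulation protocol itself is a small program; the following sketch is our reconstruction with simplified reward handling, where `agent` and `ground_truth` stand for the PS learner and the recorded real-world transitions.

```python
import random

def baseline_rollouts(J):
    """N_base = 3*4*J + J: every (state, behaviour) pair tried exactly once
    (3 sensing actions with 4 states each, plus 'no sensing' with one)."""
    return 3 * 4 * J + J

def simulate_rollout(agent, ground_truth, p_success=0.95):
    """One simulated rollout: each executed action fails with probability
    0.05, in which case a uniformly random perceptual state results
    (our sketch of the failure model described above)."""
    e = random.choice(list(ground_truth.states))
    b = agent.choose_behaviour(e)
    if random.random() < p_success:
        e_next = ground_truth.transition(e, b)
    else:
        e_next = random.choice(list(ground_truth.states))
    reward = ground_truth.reward(e, b)
    agent.update(e, b, reward)
    return reward, e_next

# e.g. baseline_rollouts(5) == 65 rollouts for the minimal set of J = 5
```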
The slow start is due to the perceptual states that would require the behaviours rotate 180^∘ and rotate 270^∘, which are not available at this point. Further, the robot cannot generate these behaviours using creativity due to initially untrained environment models. This causes the success rate to reach a preliminary plateau at around 30 to 35 percent. After this initial burn-in phase, the environment model becomes more mature and behaviour proposals are created. This causes a strong increase of the success rate.

§ CONCLUSION

We introduced a novel way of combining model-free and model-based reinforcement learning methods for autonomous skill acquisition. Our method acquires novel skills from a human teacher, e.g. by demonstration, that initially work only for a narrow range of situations. Previously-trained behaviours are used in a model-free RL setting in order to prepare these situations from other possibly occurring ones. This enables the robot to extend the domain of applicability of the novel skill by playing with the object. We extended the model-free approach by learning an environment model as a side product of playing. We demonstrated that the environment model can be used to improve the model-free playing in two scenarios, i.e. active learning and creative behaviour generation. In the active learning setting the robot has the choice of rejecting present situations if they are already well-known. It uses the environment model to autonomously prepare more interesting situations. Further, the environment model can be used to propose novel preparatory behaviours by concatenation of known behaviours. This allows the agent to try out complex behaviour sequences while still preserving the model-free nature of the original approach. We evaluated our approach on a KUKA robot by solving complex manipulation tasks, e.g. complex pick-and-place operations involving non-trivial manipulation, or tower disassembly. We observed success statistics of the involved components and simulated the convergence behaviour in increasingly complex domains, i.e. a growing number of preparatory behaviours. We found that by active learning the number of required rollouts can be reduced by approximately 9 percent. We have shown that creative behaviour generation enables the robot to solve tasks that would not have been solvable otherwise, e.g. complex book grasping with a reduced number of preparatory behaviours or tower disassembly. The work presented in this paper bridges the gap from plain concatenation of pre-trained behaviours to simple goal-directed planning. This can be seen as an early developmental stage of a robot. We believe that a lifelong learning agent has to go through different stages of development with an increasing complexity of knowledge and improving reasoning abilities. This raises the question of what the transition to strong high-level planning systems could look like. Our experiments show that the learning time is proportional to the number of used preparatory behaviours. This makes it efficient to learn an initial (and potentially strong) set of skills, but hard to add more skills when there is a large set of skills available already. Training more sophisticated models could help to overcome this problem. Further, in the current system, the creative behaviour generation only allows behaviour compositions resulting from plans within the same environment model, i.e. using only perceptual states of the same sensing action.
The expressive power of our method could be greatly increased by allowing plans through perceptual states of different sensing actions. This could also involve multiple sensing actions at the same time, including passive sensing such as vision.

§ ACKNOWLEDGMENT

The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 (Specific Programme Cooperation, Theme 3, Information and Communication Technologies) under grant agreement no. 610532, Squirrel. HJB and VD acknowledge support from the Austrian Science Fund (FWF) through grant SFB FoQuS F4012.

Simon Hangl is a PhD student at the intelligent and interactive systems (IIS) group of the University of Innsbruck, where he also received his MSc degree in Computer Science. Before starting his position as University Assistant at IIS, he worked as a researcher at the Semantic Technology Institute (STI) Innsbruck. He worked in 4 EU-FP7 projects in the areas of semantic technology and robotics. He is interested in developmental and cognitive robotics and complex robotic object manipulation.

Vedran Dunjko received the Ph.D. degree in physics from Heriot-Watt University, Edinburgh, U.K., in 2012, focused on problems in quantum cryptography, specifically blind quantum computing, and quantum digital signatures. Following a one-year post-doctoral position with the School of Informatics, University of Edinburgh, in 2013, he then moved to the Institute of Theoretical Physics, University of Innsbruck, Austria, where he is currently a Post-Doctoral Researcher. He has recently been involved in the problems of artificial intelligence and quantum machine learning.

Hans J. Briegel received the Ph.D. degree (doctorate) in physics in 1994 and the Habilitation degree in theoretical physics in 2002 from the Ludwig-Maximilians-University of Munich. He held postdoctoral positions with Texas A&M and Harvard University. He has been a Full Professor of Theoretical Physics with the University of Innsbruck since 2003 and a Research Director with the Institute of Quantum Optics and Quantum Information of the Austrian Academy of Sciences from 2003 to 2014. His main field of research is quantum information and quantum optics, where he has authored and co-authored papers on a wide range of topics, including work on noise reduction and microscopic lasers, quantum repeaters for long-distance quantum communication, entanglement and cluster states, and measurement-based quantum computers. His recent research has focussed on physical models for classical and quantum machine learning, artificial intelligence, and the problem of agency. HJ Briegel is also with the Department of Philosophy, University of Konstanz, Germany.

Justus Piater is a professor of computer science at the University of Innsbruck, Austria, where he leads the Intelligent and Interactive Systems group. He holds an M.Sc. degree from the University of Magdeburg, Germany, and M.Sc. and Ph.D.
degrees from the University of Massachusetts Amherst, USA, all in computer science. Before joining the University of Innsbruck in 2010, he was a visiting researcher at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, a professor of computer science at the University of Liège, Belgium, and a Marie-Curie research fellow at GRAVIR-IMAG, INRIA Rhône-Alpes, France. His research interests focus on visual perception, learning and inference in sensorimotor systems. He has published more than 150 papers in international journals and conferences, several of which have received best-paper awards, and currently serves as Associate Editor of the IEEE Transactions on Robotics. | http://arxiv.org/abs/1706.08560v1 | {
"authors": [
"Simon Hangl",
"Vedran Dunjko",
"Hans J. Briegel",
"Justus Piater"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20170626185337",
"title": "Skill Learning by Autonomous Robotic Playing using Active Learning and Creativity"
} |
Étale representations for reductive algebraic groups with factors Sp_n or SO_n Dietrich Burde, Wolfgang Globke and Andrei Minchenko =============================================An étale module for a linear algebraic group G is a complex vector space V with a rational G-action on V that has a Zariski-open orbit and dim G = dim V. Such a module is called super-étale if the stabilizer of a point in the open orbit is trivial. Popov (2013) proved that reductive algebraic groups admitting super-étale modules are special algebraic groups. He further conjectured that a reductive group admitting a super-étale module is always isomorphic to a product of general linear groups. Our main result is the construction of counterexamples to this conjecture, namely a family of super-étale modules for groups with a factor Sp_n for arbitrary n≥1. A similar construction provides a family of étale modules for groups with a factor SO_n, which shows that groups with étale modules with non-trivial stabilizer are not necessarily special. Both families of examples are somewhat surprising in light of the previously known examples of étale and super-étale modules for reductive groups. Finally, we show that the exceptional groups F_4 and E_8 cannot appear as simple factors in the maximal semisimple subgroup of an arbitrary Lie group with a linear étale representation.

§ INTRODUCTION

An étale module (G,ρ,V) for an algebraic group G is a finite-dimensional complex vector space V together with a rational representation ρ: G → GL(V) such that ρ(G) has a Zariski-open orbit in V and dim G = dim V. In particular, the stabilizer H of any point in the open orbit is a finite subgroup of G. If H is the trivial group, the module is called super-étale. Similarly we call the representation ρ étale or super-étale, respectively. More generally, one can study affine étale representations (that is, representations by affine transformations), but for rational representations of reductive algebraic groups these are equivalent to linear ones via affine changes of coordinates. As we are primarily interested in this case, we shall restrict ourselves to linear representations. The existence of an affine étale representation for a given group G implies the existence of a left-invariant flat affine connection on G, and these structures appear in many different contexts in mathematics. For the specifics of this relationship and a survey of applications, see Burde <cit.>, Baues <cit.> and the references therein. The primary motivation for the present work is Popov's study of linearizable subgroups of the Cremona group on affine n-space (those that are conjugate to a linear group within the Cremona group). Subgroups for which a super-étale module exists, called flattenable groups by Popov, allow particularly convenient criteria to decide their linearizability, compare the results in <cit.>. Incidentally, a flattenable group G is precisely a group that admits a rational super-étale module. Popov <cit.> proved (in our terminology): A reductive algebraic group admitting a super-étale module is a special algebraic group. By definition, G is special (in the sense of Serre) if every principal G-bundle is locally trivial in the étale topology. Serre <cit.> showed that every special group is connected and linear, and that reductive groups with maximal connected semisimple subgroup

S(G) = SL_n_1 ×⋯× SL_n_k × Sp_m_1 ×⋯× Sp_m_j

are special. A result of Grothendieck <cit.> then implies that an affine algebraic group G is special if and only if a maximal connected semisimple subgroup is isomorphic to a group of this type.
This result and the available examples lead Popov to make the following conjecture: A reductive algebraic group G has a rational super-étale module if and only if

G ≅ GL_n_1 ×⋯× GL_n_k.

Clearly, every group GL_n_1 ×⋯× GL_n_k has a super-étale module ℂ^n_1 ⊕…⊕ ℂ^n_k on which it acts factorwise by matrix multiplication. In previously available classification results on étale modules for reductive algebraic groups G, the only simple groups appearing as factors in G are SL_n and Sp_2 (see Burde and Globke <cit.> for a summary). This suggests the more general question of whether in a reductive algebraic group with a rational super-étale module, all simple factors are either Sp_2 or SL_n for certain n≥2. Somewhat surprisingly, this (and thus Popov's original conjecture) turns out to be false. Our main result is the existence of counterexamples to this conjecture, constructed in Section <ref> below. These examples consist of a family of super-étale modules for reductive groups G = Sp_n × GL_2n-1 ×⋯× GL_1 for any n≥1. So in fact any factor SL_n or Sp_n for any n≥1 can appear in a group with a super-étale module. One might now be tempted to ask whether every special reductive algebraic group admits a super-étale module, but this can immediately be ruled out by comparison with classification results of reductive groups with few simple factors, see again <cit.>. Knowing that algebraic groups with super-étale modules are special, one can further suspect that the same holds for groups with étale modules that have non-trivial stabilizer. Again we find the surprising answer that this is not true. In Section <ref> below we construct a family of étale modules for reductive groups G = SO_n × GL_n-1 ×⋯× GL_1 for any n≥2. These are the first known examples of étale modules for groups with a simple factor SO_n for any number n≥2. These two families are the first known examples of étale modules for reductive groups containing factors Sp_n or SO_n for arbitrary n>2. This still leaves the question of whether there exist étale modules for reductive groups with exceptional simple groups as factors. In Section <ref>, we show in a much more general setting that a simple Lie group whose complexified Lie algebra is one of the exceptional algebras 𝔣_4 or 𝔢_8 cannot appear among the simple factors in a maximal semisimple subgroup of a Lie group with a linear étale representation, not necessarily algebraic (here, étale means that the action has an orbit that is open in the standard topology of the module). For the other exceptional groups, this question remains open. A remark on the previously available classification results on étale modules is in order. As these results use the classification results on prehomogeneous modules due to Sato, Kimura and others (see Kimura's book <cit.> and references therein), they very often rely on Lie algebraic methods. In most cases it is not immediately clear from their classifications whether the generic stabilizers are trivial, although many generic stabilizers (not just their identity component) are explicitly given in the appendix of <cit.>.

§.§ Notations and conventions

All algebraic groups, such as GL_n, SL_n, Sp_n and SO_n, are considered over the complex numbers unless otherwise stated. We follow the convention that Sp_n means the symplectic group on ℂ^2n. The notation Lie G means the Lie algebra of a group G; we will also use the corresponding gothic letter 𝔤. The identity component of an algebraic group G is denoted by G^∘. M_m,n denotes the space of complex m× n-matrices, and if m=n we simply write M_n. The identity matrix in M_n is denoted by I_n.
The transpose of a matrix A is denoted by A^⊤. The canonical basis vectors of ℂ^n are denoted by e_1,…,e_n. For any algebraic group G, let Z(G), L(G), and R_u(G) denote the center, a maximal connected reductive subgroup, and the unipotent radical of G, respectively. Then G is the semidirect product G = L(G)·R_u(G). Write S(G) for a maximal connected semisimple subgroup of G, the commutator subgroup of L(G). Note that L(G) and S(G) are unique up to conjugation.

§.§ Acknowledgements

The authors would like to thank Vladimir Popov and Alexander Elashvili for helpful discussions and comments, and also the anonymous referees for many helpful remarks and suggestions to improve the article. Dietrich Burde acknowledges support by the Austrian Science Foundation FWF, grant P28079 and grant I3248. Andrei Minchenko acknowledges support by the Austrian Science Foundation FWF, grant P28079. Wolfgang Globke acknowledges support by the Australian Research Council, grant DE150101647.

§ PRELIMINARIES ON PREHOMOGENEOUS MODULES

A module (G,ρ,V), or (G,V) for short, for an algebraic group G with a rational representation ρ: G → GL(V) on a finite-dimensional complex vector space V is called a prehomogeneous module if ρ(G) has a Zariski-open orbit in V. In this case, dim G ≥ dim V. More precisely, if x∈ V is a point in general position, that is, it lies in the open orbit of G, and G_x is its stabilizer subgroup, then

dim V = dim G - dim G_x.

The stabilizer H = G_x of any point x in the open orbit is called the generic stabilizer of (G,ρ,V). A prehomogeneous module is étale if H^∘ = {1} (equivalently, if dim G = dim V). An étale module (G,V) is called super-étale if H = {1}. See Burde and Globke <cit.> for a proof of the following result, which we will use frequently without further reference: The following conditions are equivalent: (1) (G,ρ_1⊕ρ_2,V_1⊕ V_2) is an étale module. (2) (G,ρ_1,V_1) is prehomogeneous and (H^∘,ρ_2|_H,V_2) is an étale module, where H^∘ denotes the connected component of the generic stabilizer of (G,ρ_1,V_1). Equivalence also holds if each “étale” is replaced by “prehomogeneous”. Two modules (G_1,ρ_1,V_1) and (G_2,ρ_2,V_2) are called equivalent if there exists an isomorphism of algebraic groups ψ: ρ_1(G_1) → ρ_2(G_2) and a linear isomorphism ϕ: V_1 → V_2 such that ψ(ρ_1(g))ϕ(x) = ϕ(ρ_1(g)x) for all x∈ V_1 and g∈ G_1. Let m>n≥1 and ρ: G → GL(V) be an m-dimensional rational representation of an algebraic group G, and let ρ^* be the dual representation for ρ. Then we say that the modules

( G × GL_n, ρ⊗ω_1, V ⊗ ℂ^n ) and ( G × GL_m-n, ρ^*⊗ω_1, V^*⊗ ℂ^m-n )

are castling transforms of each other. More generally, we say two modules (G_1,ρ_1,V_1) and (G_2,ρ_2,V_2) are castling-equivalent if (G_1,ρ_1,V_1) is equivalent to a module obtained after a finite number of castling transforms from (G_2,ρ_2,V_2). A module (G,ρ,V) is called reduced (or castling-reduced) if dim V ≤ dim V' for every castling transform (G',ρ',V') of (G,ρ,V). Sato and Kimura <cit.> proved that prehomogeneity and generic stabilizers are preserved by castling transforms.

§ ÉTALE MODULES FOR GROUPS WITH FACTOR SP_N OR SO_N

In this section we will construct two families of étale modules for reductive algebraic groups G. In the first family, G contains a simple factor Sp_n, n≥1, and these modules are even super-étale, thus proving that groups with super-étale modules are not restricted to products of special linear groups. In the second family, G contains a factor SO_n, n≥2. This proves that groups with étale modules (but possibly non-trivial stabilizer) do not have to be special in the sense of Serre.
Moreover, these are the first known examples of étale modules for reductive algebraic groups that contain factors Sp_n or SO_n for arbitrary n>2. We need some preparations. Suppose G is an algebraic group of the form

G = G_m × G_m-1 ×⋯× G_1,

where G_k ⊆ GL_k. The vector space

E_m = M_m,m-1 ⊕ M_m-1,m-2 ⊕…⊕ M_2,1

becomes a G-module for the action defined as follows: An element A = (A_m,…,A_1) ∈ G acts on X = (X_m-1,…,X_1) ∈ E_m by

A.X = (A_m X_m-1 A_m-1^⊤, A_m-1 X_m-2 A_m-2^⊤, …, A_2 X_1 A_1^⊤).

Note that

dim E_m = ∑_k=1^m-1 (k+1)k = m(m-1)/2 + ∑_k=1^m-1 k^2.

§.§ Super-étale modules for groups with factor Sp_n

We wish to construct a family of super-étale modules for the group

G = Sp_n × GL_2n-1 ×⋯× GL_1.

We define a symplectic form ω in terms of the canonical basis of ℂ^2n by ω(e_2j-1,e_2j) = 1 for j=1,…,n, and ω(e_2j-1,e_k) = 0 = ω(e_2j,e_k) for k ≠ 2j, 2j-1. Define subspaces F_k = span{e_1,…,e_k} of ℂ^2n for k=1,…,2n. Let Sp_n ⊂ GL_2n denote the symplectic group that preserves the symplectic form ω. Then for every A ∈ Sp_n and k=1,…,2n, we have A(e_k^⊥) = (A e_k)^⊥. We can identify F_k+1 ⊗ F_k with ℂ^k+1 ⊗ ℂ^k ≅ M_k+1,k. With E_2n from (<ref>), introduce the G-module

V = ℂ^2n ⊕ E_2n,

where G acts on ℂ^2n by the standard action of Sp_n and G acts on E_2n by (<ref>), for G_2n = Sp_n, G_2n-1 = GL_2n-1, …, G_1 = GL_1. We have

dim G = 2n^2+n + ∑_k=1^2n-1 k^2 = 2n + 2n(2n-1)/2 + ∑_k=1^2n-1 k^2 = 2n + ∑_k=1^2n-1 k + ∑_k=1^2n-1 k^2 = 2n + dim E_2n = dim V.

We will prove by induction on n that V is super-étale for G. We only need to show that the generic stabilizer of the G-action is trivial; then it follows from (<ref>) that G has an open orbit. In the case n=1, G ≅ SL_2 × GL_1 and V = M_2, where SL_2 acts by matrix multiplication and GL_1 by scalar multiplication of the second column of a 2×2-matrix. One verifies directly that this is a super-étale module, and so this confirms the initial case for the induction: For n=1, the given action of G = Sp_1 × GL_1 on V = ℂ^2 ⊕ ℂ^2 is étale and has trivial stabilizer at the point (e_1,e_2) ∈ V. For the induction step, consider the action of Sp_n × GL_2n-1 on ℂ^2n ⊕ (F_2n ⊗ F_2n-1) first. We can identify this space with M_2n, the action of (A,B) ∈ Sp_n × GL_2n-1 given by

(A,B).X = A X ([ B^⊤ 0; 0 1 ]), X ∈ M_2n.

As a point in general position, choose the identity matrix X_0 = I_2n. Then, if

A I_2n ([ B^⊤ 0; 0 1 ]) = I_2n,

it follows that

A = [ A_1 0; 0 1 ] ∈ Sp_n

with A_1 = (B^⊤)^-1 ∈ GL_2n-1. Recall that A e_2n = e_2n implies A e_2n^⊥ = e_2n^⊥, and the form of the matrix A thus requires A F_2n-2 = F_2n-2. Also, A e_2n-1 = e_2n-1 since A also preserves F_2n-2^⊥. Hence

A = [ A_0 0; 0 I_2 ] ∈ Sp_n, A_0 ∈ Sp_n-1.

This proves:
Identifying this copy once again with its projection to _n-2, we have an _n-1-action on W_2≅^n-2 by left multiplication. The stabilizer of the H_2n-1-action at the point X_1=I_2n-2 in the module W_1 is the groupH_2n-2 = _n-1×_2n-3×⋯×_1,where the _n-1-action on W_2=^2n-2 is by left multiplication. In order for E_2n-1=W⊕ E_2n-2 to be étale for the H_2n-1-action (and thus the original module V to be étale for the G-action), the stabilizer H_2n-2 must have an étale action onW_2 ⊕ E_2n-2 =^2n-2⊕ E_2n-2.Observe now that ^2n-2⊕ E_2n-2 is of the same form as the original module V, and H_2n-2 is of the same form as the original group G, with n replaced by n-2. Now we can apply the induction hypothesis to conclude that the H_2n-2-action on V_2n-2 and thus the G-action on V is super-étale (where we assume that all points in general position are chosen similarly to X_0, X_1 above). The module (_n ×_2n-1×⋯×_1,^2n⊕ E_2n) with the action given above is a super-étale module.For n=2, (G,ρ,V) in Theorem <ref> can be viewed as a variation of an example given by Helmstetter <cit.>, which is the moduleG = _2×_3×_2×_1×_1,V = ^4 ⊕ (^4⊗^3) ⊕ (^3⊗^2) ⊕^3where the last copy of ^3 is identified with the space of traceless 2× 2-matrices, and the action of G is given by(A,B,C,α,β).(x,Y,Z,U) = (α Ax, AYB^⊤, BZC^⊤, β CUC^-1).This module is étale, but it is not super-étale, since the action of _2 on the last copy of ^3 has a non-connected generic stabilizer. A second family of super-étale modules appears in the construction of this section, namely the group _n×_2n×⋯×_1 acting on the module E_2n. This group appears as the stabilizer in Lemma <ref> (for n-1), where the module is the module complement of ^2n⊕(^2n⊗^2n-1) in this lemma. §.§ Étale modules for groups with factor _n We wish to construct a family of étale modules for the groupG = _n ×_n-1×⋯×_1,where we take _n to be the subgroup of _n preserving the bilinear form represented by the identity matrix I_n. Let n≥ 2. Consider the G-module V=E_n with the action given by (<ref>), where G_n=_n, G_n-1=_n-1,…, G_1=_1. We haveG = 1/2n(n-1) + ∑_k=1^n-1 k^2 =∑_k=1^n-1 k + ∑_k=1^n-1 k^2 = E_n.In order to verify that V is an étale module for G, we only need to show that the connected component H^∘ of the generic stabilizer H is trivial. Then it follows from (<ref>) that G has an open orbit and the action is étale.The stabilizer H_1 of _n×_n-1 on the module _n,n-1 at the point in general position X_1=([ I_n-1; 0…0 ]) is spanned by the elements (A,A_0)∈_n×_n-1 withA_0∈_n-1 andA = [ A_0 0; 0 α ]∈_nwhere α=(A_0)^-1. In particular,H_1≅_n-1. Let (A,B)∈_n×_n-1, and let A_0 be the upper left (n-1)×(n-1)-block of A and a_n the first n-1 entries in the last row of A. Then AX_1 B^⊤=X_1 is equivalent to A_0^-1 = B^⊤, a_n=0, and as A_0 is orthogonal, this gives the required form of the stabilizer H_1 of X_1. The identity component H_1^∘≅_n-1 of the generic stabilizer of (G,V)acts on the next summand _n-1,n-2 in E_n via its injective projection to the _n-1-factor. But this is identical to the left multiplication of _n-1 on _n-1,n-2. So we are now looking at the action of_n-1×_n-1×⋯×_1given by (<ref>) on E_n-1. When choosing a point in general position for this action as in Lemma <ref>, we can apply induction on n to conclude that this module is étale. Moreover, Lemma <ref> for n=2 takes care of the initial case, that is,the action of the abelian group _2×_1 on V=^2 given by (A,λ)↦λ A x, x∈^2, is étale with generic stabilizer H≅_2.So we have shown: Let n≥ 2. 
The module (_n ×_n-1×⋯×_1, E_n) with the action given by (<ref>) is an étale module.§ ÉTALE LIE ALGEBRAS OVER FIELDS OF CHARACTERISTIC 0Letbe a field of characteristic 0. Recall that a linear Lie algebra ⊂_n() is called algebraic if there is a -defined linear algebraic group G⊂_n such that =( G)(). The Lie algebrais called prehomogeneous if there is a point o∈^n such that the map β:→^n, X↦ Xo is a surjective homomorphism of vector spaces, andis called étale if β is an isomorphism.Let ⊂_n() be aLie algebra with generic stabilizer . Then there is aalgebraic Lie algebra ⊂_n() with generic stabilizersuch that [,]=[,] and =∩[,]. Let ^ a⊂_n() denote the algebraic hull of(the smallest algebraic subalgebra containing ). We have [,]=[^ a,^ a], cf. Chevalley <cit.>. Let '⊂^ a stand for the annihilator of o. Then ' is an algebraic subalgebra of ^ a.Let å=^ a/[,]. Consider the canonical map π:^ a→å. Since an algebraic subalgebra of a commutative algebraic Lie algebra has a complementary algebraic subalgebra, also defined over , there is an algebraic subalgebra _1⊂å such that å=π(')⊕_1. Set =π^-1(_1) and =∩'. We have=[,]∩'=[,]∩.The fact thatisfollows from=^ a-π(')=n+'-π(')=n+. For everyLie algebra there exists an algebraicLie algebra overwith the same derived subalgebra (and the same maximal semisimple subalgebra). If X↦ Xo is an isomorphism overthen it is such over any extension field of . Hence: Letdenote an extension field of . A Lie algebra ⊂_n() isif and only if⊗_⊂_n() is étale. § NON-EXISTENCE OF ÉTALE MODULES FOR GROUPS WITH SIMPLE FACTORS _4 OR _8For an arbitrary Lie group G to have a (real, finite-dimensional) étale module V means that G has an open orbit in V in the standard topology of V and G= V. We use the results of the previous section and the Sato-Kimura classification of algebraic prehomogeneous modules to establish the following non-existence result: Let G be a real Lie group with Lie algebraand a linear action on a finite-dimensional real vector space V. If the module (G,V) is étale, then a maximal semisimple subalgebra of ⊗ does not contain simple factors _4 or _8. The proof needs some preparations. Let G be a linear algebraic group. Given a short exact sequence of G-modules0⟶ U⟶ Vπ⟶ W⟶ 0where V is prehomogeneous with a point o in general position, let G' be the stabilizer in G of the line spanned by π(o)∈ W. Then G' preserves U':=U+⟨ o⟩ and has an open orbit on it. Moreover, the stabilizer H of o in (G,V) is also the stabilizer of o in (G',U'). Note that o U since o is in general position. The fact that G' preserves U' follows immediately from definitions and the property π=U. Note that H⊂ G'. It remains to show that the orbit G'o⊂ U' is open. Since (G,V) is , so is (G,W). Hence, the action of G on the projective space W over W has an open orbit and its generic stabilizer is conjugate to G'. We conclude G- G'= W-1, and thereforeG'o= G'- H= V- ( G- G')= V- W+1= U'. Let (G,V) be a prehomogeneous module for an algebraic group G with solvable radical R. Assume there exists an irreducible submodule U of codimension 1 in V that is not a direct summand of V. Then (R,V) is prehomogeneous.Let W=V/U denote the one-dimensional quotient module for G. Note that W is prehomogeneous since V is. Let (G) denote the unipotent radical of G, and A the center of L(G), so that R=A·(G) and L(G)=A· S(G). Let = R, =(G) and å= A. Let x be a non-zero point in a one-dimensional L(G)-invariant complementary subspace W' to U in V. We will show that Rx⊂ V is open. 
It suffices to show that x=V.Note that (G) acts trivially on U, as the eigenspace for eigenvalue 1 of (G) is G-invariant and non-zero, hence all of U by irreducibility. It follows that U is L(G)-irreducible. Also, both (G) and S(G) act trivially on the one-dimensional module W.Since U is not a direct summand in V, x is a non-zero subspace of U. Moreover, x is L(G)-invariant, and hence coincides with U, since the latter is L(G)-irreducible. Since (G) and S(G) act trivially on W, the prehomogeneity of W requires that A acts non-trivially on W and hence on x. So å x= W', and it follows that x=å x+ x= W'+U=V. Given a linear algebraic group G and a rational G-module V, we call (G,V) casual[It is called trivial in <cit.>. We decided to use another term to avoid confusions.] if it is equivalent to (G'×_n, V'⊗^n) for an algebraic subgroup G'⊂(V') and n≥ V'. All such modules arewith generic stabilizer H satisfying L(H)≅ L(G')×_n-V', and the irreducible ones are given by cases I (1) and III (1) in the Sato-Kimura classification <cit.>. A module that is equivalent to a casual irreducible étale module is necessarily equivalent to (F×_n,^n⊗^n) for some finite group F acting irreducibly on ^n. If (G,V) is castling-equivalent to such a module, then it follows immediately that all simple factors of S(G) are special linear groups.Let (G,V) be an étale module for a linear algebraic group G, and let Q be a simple factor of S(G) not isomorphic to _n for any n. There exists an étale module (G,V) with a simple factor Q≅ Q in S(G) and an irreducible quotient module W of V such that Q acts non-trivially on W and W is not castling-equivalent to a casual module. We prove the claim by induction on V. Note that since Q > 1 the module cannot be étale in the case V=1, so the claim holds trivially.Suppose now that V≥ 2. If V is irreducible, then (G,V)=(G,V) satisfies the claim in light of Remark <ref>. So we may further assume that V is not irreducible.Assume that there is an irreducible quotient W=V/U with W≥ 2. If Q acts non-trivially on W and (G,W) is not castling-equivalent to a casual module, we can put (G,V):=(G,V) and Q:=Q. Otherwise, either Q acts trivially on W or (G,W) is castling-equivalent to a casual module. Then S(G_x) contains a factor isomorphic to Q, where x∈ W is a point in general position. In this case, if (G',U') is as in Proposition <ref>, then (G',U') is étale and G' contains a conjugate of Q.Since U'=1+ U< V, the claim now follows by induction on V. Suppose now that all irreducible quotients of V are one-dimensional, and let W=V/U be one of them.There exists a maximal proper submodule U_0⊂ U, so that W_0:=U/U_0 isirreducible, and for W_1:=V/U_0 we have the exact sequence0⟶ W_0⟶ W_1⟶ W⟶ 0.Note that W_1 is prehomogeneous since V is. We claim that the solvable radical R of G has an open orbit in W_1. If W_0 is a direct summand in W_1, then by the assumption that all quotients of V are one-dimensional, W_0 = 1, and therefore S(G) acts trivially on W_1, implying that the open G-orbit is also an open R-orbit. Suppose W_0 is not a direct summand in W_1. Since W and W_0 are both irreducible, we can apply Lemma <ref> (with V replaced by W_1) to conclude that R has an open orbit in W_1. Therefore, S(G) belongs to the stabilizer of a point in general position in W_1. We can now use Proposition <ref> (with W, U replaced by W_1, U_0) and induction to derive the statement. 
If (G,V) is a real étale module, then by Proposition <ref> there exists a complex étale module (G_ℂ, V_ℂ), where G_ℂ is a Lie group with Lie algebra 𝔤 ⊗ ℂ. So by Proposition <ref>, we may assume that (G,V) is a complex algebraic étale module. According to the classification of irreducible prehomogeneous modules for reductive algebraic groups <cit.>, all irreducible prehomogeneous modules for reductive algebraic groups with _4 or _8 as a simple factor are castling-equivalent to a casual module. It remains to apply Proposition <ref> to (G,V). | http://arxiv.org/abs/1706.08735v2 | {
"authors": [
"Dietrich Burde",
"Wolfgang Globke",
"Andrei Minchenko"
],
"categories": [
"math.RT"
],
"primary_category": "math.RT",
"published": "20170627090407",
"title": "Etale representations for reductive algebraic groups with factors $Sp_n$ or $SO_n$"
} |
^1 École Normale Supérieure/CNRS, Laboratoire de Météorologie Dynamique, 24 Rue Lhomond, 75005 Paris, France.
^2 School of Science and Engineering, Waseda University, Okubo, Shinjuku, Tokyo 169-8555, Japan.

A free energy Lagrangian variational formulation of the Navier-Stokes-Fourier system
François Gay-Balmaz^1 and Hiroaki Yoshimura^2
====================================================================================

We present a variational formulation for the Navier-Stokes-Fourier system based on a free energy Lagrangian. This formulation is a systematic infinite-dimensional extension of the variational approach to the thermodynamics of discrete systems using the free energy, and it complements the Lagrangian variational formulation using the internal energy developed in <cit.>, as one employs temperature, rather than entropy, as an independent variable. The variational derivation is first expressed in the material (or Lagrangian) representation, from which the spatial (or Eulerian) representation is deduced. The variational framework is intrinsically written in a differential-geometric form that allows the treatment of the Navier-Stokes-Fourier system on Riemannian manifolds.

§ INTRODUCTION

The dynamics of a viscous heat-conducting fluid is governed by the Navier-Stokes-Fourier equations, given by a system of PDEs describing the balance of fluid momentum, the balance of mass, and the conservation of energy. The latter can be equivalently formulated in terms of the entropy or the temperature. It is well known that, in the absence of the irreversible processes of viscosity and heat conduction, these equations, as well as the general equations of reversible continuum mechanics, arise from Hamilton's principle applied to the Lagrangian trajectories of fluid particles.

In <cit.>, we proposed a systematic extension of Hamilton's principle to include irreversible processes by introducing the concept of thermodynamic displacements and making use of a generalization of the Lagrange-d'Alembert principle with nonlinear nonholonomic constraints. This approach covers both discrete and continuum systems and naturally involves the entropy as an independent variable.

For concrete use in applications, it is often more practical to use the temperature rather than the entropy as the independent variable. Temperature is indeed a much more easily measurable quantity, and the phenomenological coefficients (such as the heat conductivity or the viscosity) are naturally expressed in terms of the temperature rather than the entropy. In this case, the variational formulation must be expressed in terms of the free energy.

In this paper, we present a variational formulation for the Navier-Stokes-Fourier system based on a free energy Lagrangian, which complements the approach developed in <cit.>. The variational derivation is first expressed in the material (or Lagrangian) description, from which the spatial (or Eulerian) description is deduced. The variational formulation follows from an infinite-dimensional extension of the free energy Lagrangian variational formulation for the nonequilibrium thermodynamics of discrete systems. It has a systematic structure which relies on the concepts of variational and phenomenological constraints.

§ DISCRETE SYSTEMS AND A FREE ENERGY LAGRANGIAN

In this section we review the variational formulation for the nonequilibrium thermodynamics of discrete (i.e., finite-dimensional) systems developed in <cit.>.
The formulation is first given in terms of “classical Lagrangians", i.e., Lagrangians expressed in terms of the internal energy of the system. This naturally implies the use of the entropy S as an independent variable in the variational formulation. Then, we present a variational formulation based on a free energy Lagrangian that allows the treatment of the temperature T rather than the entropy as the independent variable. §.§ Variational formulation of nonequilibrium thermodynamics of simple systemsWe shall present the variational formulation by first considering simple thermodynamic systems before going into the general setting of the discrete systems. We follow the systematic treatment of thermodynamic systems presented in <cit.>, to which we also refer for the precise statement of the two laws of thermodynamics.Simple discrete systems. A discrete thermodynamic system Σ is a collection Σ = ∪_A=1^N of a finite number of interacting simple thermodynamic systems Σ _A. By definition, a simple thermodynamic system is a macroscopic system for which one (scalar) thermal variable and a finite set of mechanical variables are sufficient to describe entirely the state of the system. From the second law of thermodynamics (e.g., <cit.>), we can always choosethe entropy S as a thermal variable. A typical example of such a simple system is the one-cylinder problem. We refer to <cit.> for a systematic treatment of this system via Stueckelberg's approach. Variational formulation. We now quickly review from <cit.> the variational formulation of nonequilibrium thermodynamics for the particular case of simple closed systems.Let Q be the configuration manifold associated to the mechanical variables of the simple system. We denote by TQ the tangent bundle to Q and use the classical local notation (q, v) ∈ TQ for the elements in the tangent bundle. Our approach is of course completely intrinsic and does not depend on the choice of coordinates on Q. The Lagrangian of a simple thermodynamic system is a functionL: TQ ×ℝ→ℝ ,(q, v, S) ↦ L(q, v, S),where S ∈ℝ is the entropy. We assume that the system involves exterior and friction forces given by fiber preserving maps F^ ext, F^ fr:TQ×ℝ→ T^* Q, i.e., such that F^ fr(q, v, S)∈ T^*_qQ, similarly for F^ ext, where T^*Q is the cotangent bundle to Q. Finally we assume that the system is subject to an external heat power supply P^ ext_H(t). We say that a curve (q(t),S(t)) ∈ Q ×ℝ, t ∈ [t _1 , t _2 ] ⊂ℝ is a solution of the variational formulation of nonequilibrium thermodynamics if it satisfies the variational condition δ∫_t _1 ^ t _2L(q , q̇ , S)dt +∫_t_1^t_2⟨ F^ ext(q, q̇, S), δ q⟩ dt =0, Variational Conditionfor all variations δ q(t) and δ S(t) subject to the constraint∂ L/∂ S(q, q̇, S)δ S= ⟨ F^ fr(q , q̇ , S),δ q ⟩,Variational Constraintwith δ q(t_1)=δ (t_2)=0, and also if it satisfies the phenomenologicalconstraint ∂ L/∂ S(q, q̇, S)Ṡ= ⟨ F^ fr(q, q̇, S) , q̇⟩- P^ ext_H,Phenomenological Constraintwhere q̇=dq/dt and Ṡ=dS/dt.From this variational formulation, we deduce the system of evolution equations for the simple thermodynamic system as{[ d/dt∂ L/∂q̇- ∂ L/∂ q=F^ fr(q, q̇, S)+F^ ext(q, q̇, S),; ∂ L/∂ SṠ= ⟨ F^ fr(q, q̇, S), q̇⟩ - P^ ext_H. ].The explicit expression of the constraint (<ref>) involves phenomenological laws for the friction force F^ fr, this is why we refer to it as a phenomenological constraint. The associated constraint (<ref>)is called a variational constraint since it is a condition on the variations to be used in (<ref>). 
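To make this system concrete, consider the following elementary specialization, which we add here for illustration; the quadratic Lagrangian and the linear friction law are our assumptions and are not taken from the text. Take Q = ℝ, L(q, q̇, S) = ½ m q̇² − V(q) − U(S), F^fr(q, q̇, S) = −λ q̇ with λ ≥ 0, and F^ext = 0. Since the temperature is T = ∂U/∂S = −∂L/∂S, the two evolution equations above become

m q̈ = −V′(q) − λ q̇,    T Ṡ = λ q̇² + P^ext_H.

For an adiabatically closed system (P^ext_H = 0) this gives Ṡ = λ q̇²/T ≥ 0: the power dissipated by friction is converted into internal energy, and the entropy production is non-negative, consistent with the second law.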
Note that the constraint (<ref>) is nonlinear and also that one passes from the variational constraint to the phenomenological constraint by formally replacing the variations δ q, δ S by the time derivatives q̇, Ṡ. Such a systematic correspondence between the phenomenological and variational constraints still holds for the general discrete systems, as we shall recall below. We refer to <cit.> for the relation with other variational formulations used in nonholonomic mechanics with linear or nonlinear constraints. See also <ref> below. For the case of adiabatically closed systems (i.e., P^ ext_H=0), the evolution equations (<ref>) can be geometrically formulated in terms of Dirac structures induced from the phenomenological constraint and from the canonical symplectic form on T ^∗ Q or on T ^∗ (Q×ℝ), see <cit.>. In absence of the entropy variable S, this variational formulation recovers Hamilton's variational principle in classical mechanics, where (<ref>) becomes the Euler-Lagrange equations.§.§ Variational formulation of nonequilibrium thermodynamics of discrete systems Discrete systems. We now consider the case of a discrete system Σ= ∪_A=1^N Σ_A, composed of interconnecting simple systems Σ_A, A=1,...,N that can exchange heat and mechanical power, and interact with external heat sources Σ_R, R=1,...,M. We follow the description of discrete systems given in <cit.> and <cit.>.By definition, a heat source is a simple system Σ _R uniquely defined by a single variable S_R. Its energy is thus given by U_R=U_R(S_R), the temperature is T^R:= ∂ U_R/∂ S_R, and d/dt U_R=T^RṠ_R=P^R →Σ_H, where P_H^R →Σ is the heat power flow due to the heat exchange with Σ. The state of the discrete system Σ is described by geometric variables q ∈ Q_Σ and entropy variables S_A, A=1,...,N. Note that the entropy S_A has the index A since it is associated to the simple system Σ_A. The geometric variables, however, are not indexed by A since in general they are associated to several systems Σ_A that can interact with. The Lagrangian of a discrete system is thus a functionL:T Q_Σ×ℝ^N →ℝ,(q, q̇, S_1,...,S_N) ↦ L(q, q̇, S_1,...,S_N).As before, the power supplied from the exterior is due to that by external forces and by transfer of heat. For simplicity, we ignore internal and external matter exchanges in this section. Hence, in particular, we consider the case in which the system is closed. The external force reads F^ ext:=∑_A=1^N F^ ext→ A, where F^ ext→ A is the external force acting on the system Σ_A. The external heat power associated to heat transfer is P^ ext_H =∑_R (∑_A=1^NP_H^R → A)=∑_A=1^NP_H^ ext→ A, where P_H^R → A denotes the power of heat transferbetween the external heat source Σ_R and the system Σ_A. The friction force associated to system Σ _A isF^ fr(A):TQ_Σ×ℝ^N → T^* Q_ Σ with F^ fr:=∑_A=1^N F^ fr(A). The internal heat power exchange between Σ _A and Σ _B can be described byP_H^B → A= κ _AB(q , S ^A , S ^B)(T^B-T^A),where κ _AB= κ _BA≥ 0 are the heat transfer phenomenological coefficients. A typical, and historically relevant, example of a discrete (non-simple) system is the adiabatic piston. We refer to <cit.> for a systematic treatment of the adiabatic piston from Stueckelberg's approach.Variational formulation. Our variational formulation is based on the introduction of new variables, called thermodynamic displacements, that allow a systematic inclusion of all the irreversible processes involved in the system. 
In our case, since we only consider the irreversible processes of the mechanical friction and heat conduction, we just need to introduce (in addition to the mechanical displacement q) the thermal displacements^1, Γ ^A, A=1,...,N such thatΓ̇^A=T^A, where Γ ^A are monotonically increasing real functions of time t andhence the temperatures T^A of Σ _A take positive real values, i.e., (T^1,...,T^N) ∈ℝ_+^N. footnote1 The notion of thermal displacement was first used by <cit.> and in the continuum setting by <cit.>. We refer to the Appendix of <cit.> for an historical account.Each of these variables is accompanied with its dual variable Σ _A whose time rate of change is associated to the entropy production of the simple system A. The meaning of the variable Σ_A and its distinction with the entropy variable S_A may be clarified in the context of continuum systems, as will be seen in <ref>.We say that a curve (q(t), S _A (t), Γ ^A (t) , Σ _A (t) )∈ Q _Σ×ℝ^3N, t ∈ [t _1 , t _2 ] ⊂ℝ is solution of the variational formulation of nonequilibrium thermodynamics if it satisfies the variational conditionδ∫_ t _1 ^ t _2 [ L(q, q̇ , S _1 , ... S _N )+ ∑_A=1^N(S_A - Σ_A)Γ̇^A ] dt +∫_ t _1 ^ t _2 ⟨ F^ ext, δ q ⟩ dt =0,for all variations δ q (t), δΓ^A (t), δΣ_A (t) subject to the variational constraint∂ L/∂ S_A δΣ _A =⟨ F^ fr(A), δ q⟩-∑_B=1^N κ _AB (δΓ^B-δΓ ^A ), (no sum on A)with δ q( t _i )=0 and δΓ ( t _i )=0, for i=1,2, and also if it satisfies the nonlinear phenomenological constraint∂ L/∂ S _A Σ̇_A= ⟨ F^ fr(A), q̇⟩ - ∑_B=1^N κ _AB (Γ̇^B-Γ̇^A ) -P_H^ ext. From this variational formulation, we deduce the system of evolution equations for the discrete thermodynamic system as{[d/dt∂ L/∂q̇- ∂ L/∂ q= ∑_A=1^NF^ fr(A)+ F^ ext,; ∂ L/∂ S_AṠ_A= ⟨ F^ fr(A), q̇⟩ +∑_B=1^N κ _AB(∂ L/∂ S_B-∂ L/∂ S_A) - P^ ext_H,A=1,...,N. ]. We refer to <cit.> for a complete treatment of discrete systems. In a similar way with the situation of simple thermodynamic systems, one passes from the variational constraint (<ref>) to the phenomenological constraint (<ref>) by formally replacing the δ-variations δ q, δΣ _A,δΓ _A by the time derivatives q̇, Σ̇_A, Γ̇_A (see Remark <ref>). This is possible thanks to the introduction of the thermodynamic displacements Γ _A. §.§ Formulation based on the free energy and heat equationsWe now present the variational formulation for discrete thermodynamic systems based on a free energy Lagrangian, in which one makes use of temperature T rather than entropy S as an independent variable. We start with the case of a simple discrete thermodynamic system.Simple systems. Given the Lagrangian L:TQ ×ℝ→ℝ of a discrete simple system, the associated free energy Lagrangian ℒ :TQ ×ℝ_+→ℝ is defined (see <cit.>) byℒ(q,q̇,T):= L(q,q̇, S(q, q̇, T))+TS(q, q̇, T),where we assumed that the function S∈ℝ↦∂ L/∂ S(q,q̇, S) ∈ℝ_+ is a diffeomorphism for all (q, q̇) ∈ TQ and where the function S(q,q̇,T) is defined by the condition -∂ L/∂ S(q,q̇, S)=T, for all (q,q̇, S) ∈ TQ ×ℝ. footnote1More strictly speaking, temperature is defined by T= -∂ L/∂ S∈ T_Γℝ≅ℝ_+ for S ∈ T^∗ℝ≅ℝ^∗, or conversely, entropy by S= ∂ℒ/∂ T∈ T_Γ^∗ℝ≅ℝ^∗ for T ∈ T_Γℝ≅ℝ_+. In most physical examples, the partial derivative ∂ L/∂ S does not depend on q̇. In this case the Lagrangian has the standard form L(q, q̇, S)=L_ mech(q, q̇)- U(q,S), where L_ mech: TQ →ℝ is a Lagrangian of the mechanical part of the simple system and U:Q ×ℝ→ℝ is an internal energy. 
The associated free energy Lagrangian is ℒ (q, q̇, T)= L_ mech(q, q̇)-ℱ(q,T), where ℱ:Q×ℝ _+→ℝ is the free energy associated to U:Q×ℝ→ℝ, where the relation between temperature and entropy may be understood in the dual context of Lagrangians^2.The friction and external forces F^ fr, F^ ext: T Q ×ℝ _+→ T^*Q are now expressed in terms of the temperature rather than the entropy. As before, the variational formulation needs the introduction of the variables Γ and Σ. Recall that T=Γ̇. We say that a curve (q(t), Γ (t), Σ (t)) ∈ Q ×ℝ^2, t ∈ [t _1 , t _2 ] ⊂ℝ is a solution of the free energy variational formulation of nonequilibrium thermodynamics if it satisfies the variational conditionδ∫_t _1 ^ t _2( ℒ (q , q̇ , Γ̇)- ΣΓ̇) dt +∫_t_1^t_2⟨ F^ ext(q, q̇, T), δ q⟩ dt =0,for all variations δ q(t), δΓ(t) and δΣ(t) subject to the variational constraintΓ̇δΣ = - ⟨ F^ fr(q , q̇ , T),δ q ⟩,with δ q(t_1)=δΓ (t_2)=0, and also if it satisfies the nonlinear nonholonomic phenomenological constraintΓ̇Σ̇=- ⟨ F^ fr(q, q̇, T) , q̇⟩ + P^ ext_H.From this variational formulation, we get the system of evolution equations for the simple system in terms of the free energy as{[ d/dt∂ℒ/∂q̇- ∂ℒ/∂ q=F^ fr(q, q̇, T)+F^ ext(q, q̇, T),; T d/dt∂ℒ/∂ T= - ⟨ F^ fr(q, q̇, T), q̇⟩ + P^ ext_H. ]. This is an appropriate form to derive the heat equation of the simple system. Since the Lagrangian has the form ℒ (q, q̇, T)= L_ mech(q, q̇)- ℱ(q,T), the second equation in (<ref>) becomesT ( ∂ ^2 ℱ/∂ T ^2 (q,T)Ṫ+ ∂^2 ℱ/∂ q ∂ T(q,T)q̇) =⟨ F^ fr(q, q̇, T), q̇⟩ - P^ ext_H,from which we obtain the heat equationc_v(q,T) Ṫ=T ∂^2 ℱ/∂ q ∂ T(q,T)q̇ - ⟨ F^ fr(q, q̇, T), q̇⟩+ P^ ext_H ,where c_v(q,T)= - T ∂ ^2ℱ/∂ T ^2 (q,T) is the specific heat.Discrete systems. Consider a discrete system with configuration space Q_Σ and Lagrangian L: T Q_Σ×ℝ ^N →ℝ. The corresponding free energy Lagrangian is defined by generalizing the above definition asℒ (q, q̇, T^1,...,T^N): = L ( q, q̇, S_1(q, q̇, T^1,...,T^N),..., S_N(q, q̇, T^1,...,T^N))+ ∑_A=1^N T^A S_A(q, q̇, T^1,...,T^N), where we assumed the function (S_1,.., S_N) ∈ℝ^N↦( ∂ L/∂ S_1, ..., ∂ L/∂ S_N) ∈( ℝ_+ ) ^N is a diffeomorphism for all (q, q̇) ∈ TQ_Σ and where the functions S_A(q,q̇, T^1,...,T^N), A=1,...,N are defined from the conditions - ∂ L/∂ S_A(q, q̇, S_1,...,S_N)=T^A, for all A=1,...,N and for all q, q̇, S_1,...,S_N.Recall T^A=Γ̇^A. We say that a curve (q(t), Γ ^A (t) , Σ _A (t) ) ∈ Q_Σ×ℝ^2N, t ∈ [t _1 , t _2 ] is a solution of the free energy variational formulation of nonequilibrium thermodynamics if it satisfies the variational conditionδ∫_ t _1 ^ t _2 ( ℒ(q, q̇ , Γ̇^1,...,Γ̇^N ) - ∑_A=1^NΣ _AΓ̇^A ) dt +∫_ t _1 ^ t _2 ⟨ F^ ext, δ q ⟩ dt =0,for all variations δ q , δΓ^A, δΣ_A subject to the variational constraintΓ̇^AδΣ _A= - ⟨ F^ fr(A)(...), δ q⟩+ ∑_B=1^N κ _AB(δΓ^B-δΓ ^A ),(no sum on A)with δ q( t _i )=0 and δΓ^A ( t _i )=0 for i=1,2, and if it satisfies the phenomenological constraintΓ̇^A Σ̇_A = -⟨ F^ fr(A)(...), q̇⟩ + ∑_B=1^N κ _AB(Γ̇^B- Γ̇^A ) + P_H^ ext→ A. From this variational formulation, we deduce the system of evolution equations for the discrete system in terms of the free energy as9pt9pt {[d/dt∂ℒ/∂q̇- ∂ℒ/∂ q = ∑_A=1^N F^ fr(A)+F^ ext,; T ^A d/dt∂ℒ/∂ T^A = - ⟨ F^ fr(A) , q̇⟩ +∑_B=1^N κ _AB( T^B-T ^A )+P_H^ ext→ A,A=1,...,N. 
].For a free energy Lagrangian of the form ℒ (q, q̇, T_1,...,T_N)= L_ mech(q, q̇)- ∑_A=1^NF ^A (q, T_A), the heat equation is computed from the second equation of (<ref>) asc_v,A (q,T ^A) Ṫ ^A=T^A ∂^2 F^A/∂ q ∂ T^A(q,T^A)q̇ - ⟨ F^ fr, q̇⟩ +∑_B=1^N κ _AB( T^B-T ^A ) + P^ ext_H ,where c_v, A(q,T^A )= - T^A ∂ ^2 F^A/∂ (T ^A)^2 (q,T^A) is the specific heat of system Σ _A.§ THE NAVIER-STOKES-FOURIER EQUATIONS We shall now systematically extendto the continuum setting the previous free energy variational formulations by focalising on the case of a heat conducting viscous fluid. We refer to <cit.> for the corresponding formulation in terms of the entropy.The variational formulation of the Navier-Stokes-Fourier equation is first formulated in the Lagrangian (or material) representation, because it is in this representation that the variational formulation is deduced from that of discrete systems described in <ref>. All the equations are intrinsically written in a differential-geometric form. §.§ Configuration space and Lagrangians We assume that the domain occupied by the fluid is a smooth compact manifold with smooth boundary ∂𝒟. The configuration space is Q= Diff_0( 𝒟 ), the group of all diffeomorphisms^3 of 𝒟 that keep the boundary ∂𝒟 pointwise fixed. This corresponds to no-slip boundary conditions. This choice of the configuration space aims to describe only strong solutions of the partial differential equation. We assumethat the manifold 𝒟 is endowed with a Riemannian metric g.footnote1 In this paper we do not describe the functional analytic setting needed to rigorously work in the framework of infinite dimensional manifolds. For example, one can assume that the diffeomorphisms are of some given Sobolev class, regular enough (at least of class C ^1), so that Diff_0( 𝒟 ) is a smooth infinite dimensional manifold and a topological group with smooth right translation, <cit.>. Given a curve φ _t of diffeomorphisms, starting at the identity at t=0, we denote by x= φ _t(X)= φ (t,X) ∈𝒟 the current position of a fluid particle which at time t=0 is at X ∈𝒟. The mass density ϱ (t,X) and the entropy density S(t,X) in the Lagrangian (or material) description are respectively related to the corresponding quantities ρ (t,x) and s(t,x) in the Eulerian (or spatial) description asϱ (t,X)= ρ (t, φ _t(X)) J _φ _t(X) and S(t,X)= s (t, φ_t (X)) J_φ _t(X), were J _φ_t denotes the Jacobian of φ _t relative to the Riemannian metric g, i.e., φ_t^∗μ _g = J_φ_tμ _g, with μ _g the Riemannian volume form. From the conservation of the total mass, we have ϱ (t, X)= ϱ_ ref(X), i.e., the mass density in the material description is time independent. It therefore appears as a parameter in the Lagrangian function and in the variational formulation. This is not the case for the material entropy S(t,X), which is a dynamic field with corresponding variations δ S that must be taken into account in the variational formulation. The Lagrangian. In a similar way to the case of discrete systems in (<ref>), the Lagrangian in the material description is a mapL_ϱ _ ref: T Diff_0( 𝒟 ) ×ℱ ( 𝒟 )→ℝ,( φ , φ̇, S) ↦ L_ϱ _ ref( φ , φ̇, S), where T Diff_0( 𝒟 ) is the tangent bundle to Diff_0( 𝒟 ) and ℱ ( 𝒟 ) is a space of real valued functions on 𝒟 with a given high enough regularity, so that all the formulas used below are valid. The index notation in L_ϱ _ ref is used to recall that L depends parametrically on ϱ _ ref. By ( φ , φ̇) we denote an arbitrary vector in the tangent space T_ φDiff_0( 𝒟 ). 
We choose to follow here the traditional coordinate notation^4 footnote1 The intrinsic geometric notation is L_ϱ _ ref( 𝐕 _ φ , S) for 𝐕 _ φ∈ T _φDiff_0( 𝒟 ).(i.e., of the type L(q, q̇)) used in classical mechanics and in <ref>, although our point of view is completely intrinsic. Recall that the tangent space to Diff_0( 𝒟 ) at φis given by T_ φDiff_0(𝒟 )= {𝐕 _φ : 𝒟→ T 𝒟|𝐕 _ φ (X) ∈ T_φ (X)𝒟,𝐕 _φ |_ ∂𝒟 =0 }, where the map 𝐕 _φ: 𝒟→ T 𝒟 has the same regularity with φ. Consider a gas with a given state equation ε = ε (ρ, s) where ε is the internal energy density, and the Lagrangian is given by9pt9pt L_ϱ _ ref( φ , φ̇, S) =∫_ 𝒟1/2ϱ_ ref( X ) | φ̇( X )| ^2 _g μ _g(X) -∫_ 𝒟ε( ϱ_ ref(X)/J_φ (X) , S(X)/J_φ (X)) J_φ (X) μ _g(X)= ∫_ 𝒟𝔏 (φ (X), φ̇(X),T _Xφ , ϱ _ ref(X),S(X)) μ _g (X), where T _ Xφ :T_ X 𝒟→ T_ φ (X)𝒟 is the tangent map to φ. The first term of L_ϱ _ ref represents the total kinetic energy of the gas, computed with the help of the Riemannian metric g, and the second term represents the total internal energy. The second term is deduced from ε ( ρ , s) by using the relations (<ref>). Both terms are written here in terms of material quantities. In the second line we defined the Lagrangian density 𝔏(φ ,φ̇,T φ , ϱ _ ref,S)μ _g as the integrand of the Lagrangian L. The material temperature is given by𝔗= - ∂𝔏/∂ S= ∂ε/∂ s( ρ , s)∘φ = T ∘φ ,where T is the Eulerian temperature. The free energy Lagrangian.Generally, given a Lagrangian L_ϱ _ ref:T Diff_0( 𝒟 ) ×ℱ ( 𝒟 )→ℝ with Lagrangian density 𝔏 (φ ,φ̇,T φ , ϱ _ ref,S), we define the associated free energy Lagrangian ℒ _ϱ _ ref: T Diff_0( 𝒟 ) ×ℱ _+ ( 𝒟 )→ℝ as ℒ_ϱ _ ref ( φ , φ̇, 𝔗):=∫_ 𝒟ℒ(φ (X), φ̇(X),T_Xφ ,ϱ _ ref(X),𝔗(X)) μ _g, wherethe free energy Lagrangian density ℒ is defined byℒ( α ,𝔗):= 𝔏 (α , S( α , 𝔗 )) + 𝔗S(α ,𝔗 ). In (<ref>) we used the abbreviation α = (φ , φ̇,T φ , ϱ _ ref), we assumed that the function S ∈ℝ↦∂𝔏/∂ S( α , S) ∈ℝ_+ is invertible for all α, and we defined the function S( α , 𝔗 ) by the condition ∂ℒ/∂ S(α , S( α , 𝔗) )= 𝔗, for all α.In (<ref>), ℱ _+ ( 𝒟 ) denotes a set of strictly positive functions on ℝ.For the case of the Lagrangian (<ref>), we obtain9pt9pt ℒ _ϱ _ ref( φ , φ̇,𝔗)=: ∫_ 𝒟ℒ( φ (X),φ̇(X), T _Xφ , ϱ_ ref( X ), 𝔗(X))μ _g(X)= ∫_ 𝒟1/2ϱ_ ref( X ) | φ̇( X )| ^2 _g μ _g(X) - ∫_ 𝒟ψ( ϱ_ ref(X)/J_φ (X) , 𝔗(X)) J_φ (X) μ _g(X), where ψ ( ρ , T) is the(Helmholtz) free energy density associated to the internal energy density ε ( ρ , s). The free energy density is given in material representation asΨ ( φ , T_X φ , ϱ _ ref, 𝔗):= ψ( ϱ_ ref/J_φ , 𝔗) J_φ. For example, for the case of a perfect gas, the internal energy density and the free energy density are respectively given byε ( ρ , s) = ε _0 e^ 1/C_v( s/ρ-s_0/ρ _0) ( ρ/ρ _0)^C_p/C_v, ψ ( ρ ,T) =ρ T (ψ _0 /ρ _0 T_0 +R _ gasln( ρ/ρ _0) -C_v ln( T/T_0) ), where the constant C_v is the specific heat coefficient at constant volume, the constant C_p is the specific heat at constant pressure, R_ gas= C_p-C_v is the gas constant, and ρ _0, s_0, T_0, ε _0, ψ _0 are given constant reference values verifying ε _0= ε ( ρ _0, s_0) and ψ _0= ψ ( ρ _0, T_0).§.§ Variational formulation in material representationIt is well-known that in absence of irreversible processes the equations of continuum mechanics in material representation arise from Hamilton's principle, see, e.g., <cit.>. Before presenting the extension of this variational formulation to the irreversible case, we shall briefly review Hamilton's principle as it applies to the Lagrangian(<ref>). 
In this case, the resulting system becomes the perfect adiabatic compressible fluid.Hamilton's principle in the reversible case.In absence of irreversible processes, the entropy is conserved so that we have S(t, X)= S_ ref(X) in material representation. The equations of motion thus follow from Hamilton's principle applied to the Lagrangian L_ϱ _ ref in (<ref>), in which the variable S=S_ ref is understood as a fixed parameter fieldin the same way as ϱ _ ref. Writing L_ϱ _ ref, S_ ref:T Diff_0( 𝒟 ) →ℝ, with L_ϱ _ ref, S_ ref( φ , φ̇):= L_ϱ _ ref(φ , φ̇, S_ ref), Hamilton's principle isδ∫_ t_1^t_2 L_ϱ _ ref, S_ ref( φ , φ̇)dt=0 , with respect to variations δφ such that δφ |_∂𝒟=0 and δφ (t _i )=0, i=1,2. The stationarity condition yields the Euler-Lagrange equations for L_ϱ _ ref, S_ ref in the formϱ _ refD/Dt𝐕 = DIV𝐏 ^ cons, where 𝐕 is the material velocity and 𝐏 ^ cons is the first Piola-Kirchhoff stress tensor. We now describe in details the geometric objects involved in this equation. The material velocity is defined by 𝐕_t(X):= ∂/∂ tφ _ t(X) ∈ T_φ_t(X)𝒟. At each time t, the material velocity 𝐕 _t is a vector field on 𝒟 along φ_t. The first Piola-Kirchhoff tensor is defined by𝐏 ^ cons= - ( ∂𝔏/∂ T_Xφ) ^♯,where ♯ denotes the index rising operator relative to the Riemannian metric g. At each time t, the first Piola-Kirchhoff tensor 𝐏^ cons(t,_ ) is a two-point tensor along φ _t. More precisely, for all X ∈𝒟, we have𝐏 ^ cons(t,X): T_ x ^∗𝒟× T_ X ^∗𝒟→ℝ, wherex= φ _t(X).On the left hand side of (<ref>), D 𝐕 /Dt denotes the covariant time derivative of the vector field 𝐕 _t along φ_t, relative to the Riemannian metric g. This covariant derivative yields again a vector field along φ _t. On the right hand side of (<ref>), DIV𝐏 ^ cons denotes the divergence^4 of the two-point tensor 𝐏 ^ cons along φ _t, relative to the Riemannian metric g. It yields a vector field on 𝒟 along φ _t. We refer to <cit.> for a detailed account of two-point tensors along diffeomorphisms and their covariant derivatives. A direct computation shows that for the Lagrangian (<ref>), the first Piola-Kirchhoff tensor leads to the pressure in material representation, i.e.,𝐏 ^ cons(X)( α _x, β _X)= - p ( ρ , s) J _φ g( T_xφ^-1 ( α _x), β _X),p( ρ , s)= ∂ε/∂ρρ + ∂ε/∂ ss - ε, where x= φ _t(X).The system (<ref>), with 𝐏 ^ cons given in (<ref>),is the material description of the Euler equations for a compressible perfect adiabatic fluid, which is usually written in spatial representation as{[ρ ( ∂ _t 𝐯 + 𝐯·∇𝐯 )=- gradp,; ∂ _t ρ + div ( ρ𝐯 )=0, ∂ _t s+ div(s 𝐯 )=0, ].where ∇ is the Levi-Civita covariant derivative associated to g and the operators grad and div are both associated to g. A systematic approach to derive the variational principle in spatial representation from Hamilton's principle in material representation, is provided by the Euler-Poincaré reduction theory on Lie groups, see <cit.>. It is important to observe thatwhile the entropy variable S_ ref(X) is seen as a fixed parameter in the material description of the reversible case, this is not true for the temperature as it clearly follows from its definition 𝔗= - ∂𝔏/∂ S(φ , φ̇,T φ , ϱ _ ref, S_ ref). Therefore, when working with the corresponding free-energy Lagrangian ℒ_ϱ _ ref ( φ , φ̇, 𝔗) in (<ref>), one cannot avoid considering the variations δ𝔗 in the variational principle, even in the reversible case. 
We will consider this situation later, as a particular case of our variational formulation for the irreversible case.The general geometric formulation of continuum mechanics needs a more general setting than the one presented above, namely, one has to consider the field φ as an embedding from a reference manifold ℬinto an ambient manifold 𝒮. Each of these manifolds is endowed with a Riemannian metric, typically denoted by G on ℬ and by g on 𝒮. This is the appropriate setting to study the material and spatial symmetries in nonlinear continuum mechanics, see <cit.>, <cit.>, <cit.>. In the present situation, since we are studying the special case of a fluid in a fixed domain, we have ℬ = 𝒮 =𝒟, so we can make the choice g=G.Navier-Stokes-Fourier equations in material representation. The processes of viscosity and heat conduction are described by the inclusion of the corresponding thermodynamic fluxes given, in the material description, by a friction Piola-Kirchhoff tensor 𝐏 ^ fr (t, X) and an entropy flux density 𝐉 _S(t, X), where 𝐏 ^ fr is a two-point tensor along φ and 𝐉 _S is a vector field on 𝒟. We will recall their usual phenomenological expressions later in the Eulerian description. By complete analogy with the case of discrete systems developed earlier, our formulation needs the notion of thermal displacements (<cit.>) in material representation, i.e., a variable Γ (t, X) such thatd/dtΓ (t, X)=𝔗(t, X).This is a particular case of thermodynamic displacement variables introduced in <cit.>.In addition to these internal irreversible processes, we assume that the fluid is heated from the exterior, which is represented by a heat power supply density ϱ_ ref(X)R(t,X) in material representation. Recall that ℒ denotes the free energy Lagrangian density.By complete analogy with the variational formulation given in (<ref>)–(<ref>) for discrete systems, we consider the variational formulation for a curve ( φ (t), Γ (t), Σ (t)) ∈Diff_0( 𝒟 ) ×ℱ ( 𝒟 ) ×ℱ ( 𝒟 ) as follows:δ∫_t_1^t_2∫_ 𝒟[ℒ( φ , φ̇, Tφ , ϱ_ ref, Γ̇)- ΣΓ̇]μ _g dt=0, Variational Condition for all variations δφ, δΓ, δΣ subject to the constraintΓ̇δΣ = (𝐏 ^ fr)^♭ : ∇δφ -𝐉 _S ·𝐝δΓ, Variational Constraint with δφ (t_i)=0 and δΓ (t_i)=0, for i=1,2, where the curve ( φ (t), Γ (t), Σ (t)) satisfies the constraintΓ̇Σ̇= (𝐏 ^ fr) ^♭ : ∇φ̇-𝐉 _S ·𝐝Γ̇+ ϱ_ refR, Phenomenological Constraint.In the constraints (<ref>) and (<ref>), δφ , φ̇: 𝒟→ T 𝒟 are vector fields on 𝒟 covering φ. The objects ∇δφ and ∇φ̇ are two-point tensor fields covering φ, obtained by taking the Levi-Civita covariant derivative associated to g. The notation “:" means the full contraction of the two-point tensor fields along φ. The flat operator on 𝐏 ^ cons applies to the spatial index. Note the analogy between the three conditions (<ref>)-(<ref>)-(<ref>) for the variational formulation of discrete systems and the three conditions (<ref>)-(<ref>)-(<ref>) above. 
As before, the constraint (<ref>) is nonlinear and one can pass from the variational constraint to the phenomenological constraint by formally replacing the variations δφ, δΓ, δΣ by the time derivatives φ̇, Γ̇, Σ̇.This variational formulation does not impose any restriction concerning the dependance of the thermodynamic fluxes on the variables φ , φ̇, ϱ _ ref, Γ , Γ̇ and on their spatial derivatives.Since δφ |_∂𝒟=0 and δφ (t _i )=0, δΓ (t _i )=0, i=1,2, by applying the variational condition (<ref>), we get∫_t _1 ^ t _2 ∫_ 𝒟[ ( ∂^ ∇ℒ/∂φ- DIV∂ℒ/∂ T_X φ- D/Dt∂ℒ/∂φ̇) δφ+ ( Σ̇- d/dt∂ℒ/∂Γ̇)δΓ-Γ̇δΣ] μ _g dt=0.Here ∂^ ∇ℒ/∂φ is the partial derivative of ℒ relative to the spatial coordinate x= φ (X). Such a partial derivative needs to be defined with respect to a Riemannian metric, here g, this is why we insert the exponent ∇. For the Lagrangian in (<ref>), we have∂^ ∇ℒ/∂φ=0. The other partial derivatives of ℒ do not need the use of a Riemannian metric. The operator D/Dt and DIV are associated to g, as explained earlier.Using the variational constraint (<ref>) and again δφ |_∂𝒟=0 and δφ (t _i )=0, δΓ (t _i )=0, i=1,2, and collecting the terms associated to the variations δφ and δΓ, we obtain the systemδφ :ρ _ refD 𝐕/Dt= DIV(𝐏 ^ cons+𝐏 ^ fr) ,δΓ:d/dt∂Ψ/∂𝔗= DIV𝐉 _S-Σ̇and𝐉 _S ·𝐧 ^♭ =0 on ∂𝒟,where 𝐧 is the outward pointing unit normal vector field along the boundary ∂𝒟 and 𝐧 ^♭ is the associated one-form which is obtained by applying the index lowering operator ♭ associated to g. The second equation in (<ref>) arises from the freeness of the variations δΓ at the boundary. In (<ref>) the conservative Piola-Kirchhoff stress tensor 𝐏 ^ cons is defined from the free energy Lagrangian density ℒ (or the free energy density Ψ) as𝐏 ^ cons= - ( ∂ℒ/∂ T_Xφ) ^♯=( ∂Ψ/∂ T_Xφ) ^♯.Here we note that 𝐏 ^ cons is defined in terms of ℒ whereas in (<ref>) it is defined in terms of 𝔏. From the general definition of the free energy Lagrangian density in (<ref>), we see that these two definitions of 𝐏 ^ cons coincide, one being expressed in terms of the entropy (<ref>), and the other with respect to the temperature. For the free energy Lagrangian(<ref>), we can compute the partial derivative with respect to T_ X φ to get𝐏 ^ cons( α _x, β _X)= - p ( ρ , T) J_ φ g( T_xφ^-1 ( α _x), β _X),p( ρ , T)= ∂ψ/∂ρρ - ψ ,where x= φ_t (X). This expression coincides with (<ref>) although it is here expressed in terms of the temperature. Using the relation (<ref>) in the phenomenological constraint (<ref>) yields 𝔗( -d/dt∂Ψ/∂𝔗 +DIV𝐉 _S) = ( 𝐏 ^ fr)^♭: ∇φ̇- 𝐉 _S ·𝐝𝔗+ ϱ_ refR,where we recall that 𝔗:=Γ̇ is the temperature in material representation. These results are summarized as follows. 
The Navier-Stokes-Fourier equations in material representation, given by{[ϱ _ refD 𝐕/Dt= DIV( 𝐏 ^ cons+ 𝐏 ^ fr),; 𝔗( -d/dt∂Ψ/∂𝔗+DIV𝐉 _S ) = ( 𝐏 ^ fr) ^♭ : ∇𝐕- 𝐉 _S ·𝐝𝔗+ ϱ_ refR, ].with boundary conditions 𝐕 |_∂𝒟=0 and 𝐉 _S ·𝐧 ^♭ |_∂𝒟= 0, follow from the variational condition (<ref>) together with the variational and phenomenological constraints (<ref>) and (<ref>).When applied to a general free energy Lagrangian density ℒ, this theorem yields the general system{[D /Dt∂ℒ/∂φ̇ - ∂ ^∇ℒ/∂φ = DIV( - ∂ℒ/∂ T_ X φ + (𝐏 ^ fr ) ^♭) ,; 𝔗(d/dt∂ℒ/∂𝔗 +DIV𝐉 _S ) = ( 𝐏 ^ fr) ^♭ : ∇𝐕- 𝐉 _S ·𝐝𝔗+ ϱ_ refR, ].with boundary conditions 𝐕 |_∂𝒟=0 and 𝐉 _S ·𝐧 ^♭ |_∂𝒟= 0.This system is the continuum version of system (<ref>) and describes the equations of motion for a continuum theory with free energy Lagrangian density ℒ and subject to the irreversible processes of viscosity and heat conduction.Geometric structure associated to the variational formulation. We now comment on the general geometric setting underlying the formulation (<ref>)–(<ref>) and its relation with the variational formulation in nonholonomic mechanics. Let us consider formally the manifold 𝒬 :=Diff_0( 𝒟 ) ×ℱ ( 𝒟 ) ×ℱ ( 𝒟 ) and denote an element in 𝒬 by q:= (φ , Γ , Σ). We consider the vector bundle T𝒬× _𝒬 T 𝒬→𝒬 whose vector fiber at q ∈𝒬 is given by the vector space T_q 𝒬× T_q 𝒬. For convenience, we shall write an element in this fiber by using the local^5 notation ( q, q̇, δ q) ∈ T_q 𝒬× T_q 𝒬.footnote1 The intrinsic notation of such an element is (v_q, w_q) ∈ T_qQ × T_qQ, where v_q, w_q ∈ T_qQ.Geometrically, the variational constraint (<ref>) defines a subset C_V ⊂ T𝒬× _ 𝒬 T 𝒬 as follows: ( q, q̇, δ q) ∈ C_V ⇔ ( q, q̇, δ q) satisfies (<ref>), where ( q, q̇, δ q)= (φ , Γ , Σ,φ̇, Γ̇, Σ̇, δφ ,δΓ , δΣ). This variational constraint satisfies the following property: for each (q, q̇) ∈ T 𝒬, the set C_V(q, q̇) defined by C_V(q, q̇):= C_V ∩{(q, q̇)}× T_q 𝒬 is a vector space.The phenomenological constraint (<ref>) on (q, q̇)= (φ , Γ , Σ,φ̇, Γ̇, Σ̇) geometrically defines a subset C_K ⊂ T 𝒬 of the tangent bundle to 𝒬. For the case of adiabatically closed systems (i.e., ϱ_ ref R=0), the subset C_K is obtained from the variational constraint C_V via the following general constructionC_K:={(q, q̇) ∈ T 𝒬| (q, q̇)∈ C_V(q, q̇)}. Constraints C_K and C_V are related as in (<ref>), which we refer to as constraints of the thermodynamic type, see <cit.>.In terms of the above constraint sets C_V and C_K, the variational formulation (<ref>)–(<ref>) of the Navier-Stokes-Fourier system can thus be written as follows:A curve q(t) = ( φ (t), Γ (t), Σ (t))∈𝒬, t ∈ [t _1 , t _2 ] satisfies the Navier-Stokes-Fourier system (<ref>) if and only if it satisfies the variational conditionδ∫_t_1^t_2𝖫(q, q̇) dt=0, for all variationsδ q ∈ C_V(q, q̇) with δ q(t _1 )= δ q(t _2)=0 and where the curve q(t) satisfiesq̇(t) ∈ C_K. The Lagrangian 𝖫 in (<ref>) denotes the full expression under the time integral in (<ref>), namely,𝖫(q,q̇)=∫_ 𝒟[ℒ( φ , φ̇, Tφ , ϱ_ ref, Γ̇)- ΣΓ̇]μ _g.From a mathematical point of view, the variational formulation of (<ref>)–(<ref>) is a nonlinear (and infinite dimensional) extension of the Lagrange-d'Alembert principle used for the treatment of nonholonomic mechanical systems with linear constraints, see e.g., <cit.>. Such linear constraints are given by a distribution Δ⊂ T 𝒬 on 𝒬. 
In this linear case, we have C_K= Δ⊂ T 𝒬 and the variational constraint is C_V= T 𝒬× _Q Δ.For the case of nonlinear constraints C _K⊂ T𝒬 on velocities in mechanics, which are called kinematic constraints, a generalization of the Lagrange-d'Alembert principle has been considered in <cit.>, see also <cit.>, <cit.>. In Chetaev's approach, the variational constraint C_V is derived from the kinematic constraint C_K. However, it has been pointed out in <cit.> that this principle does not always lead to the correct equations of motion for mechanical systems and in general one has to consider the kinematic and variational constraints as independent notions. A general geometric variational approach for nonholonomic systems with nonlinear and (possibly) higher order kinematic and variational constraints has been described in <cit.>. This setting generalizes both the Lagrange-d'Alembert and Chetaev approaches. It is important to point out that for these generalizations, including Chetaev's approach, energy may not be conserved along the solution of the equations of motion. The variational formulation (<ref>)–(<ref>) falls into the general setting described in <cit.>, extended here to the infinite dimensional setting. In the special case of constraints of the thermodynamic type, i.e., related through (<ref>), the energy is conserved, see <cit.>, consistently with the fact that in such a situation the system is isolated.§.§ Variational formulation in spatial representation We shall now develop the spatial (or Eulerian) representation of the variational formulation (<ref>)-(<ref>)-(<ref>). The spatialfields associated to φ̇, ϱ _ ref, Γ, Σ, 𝔗, are denoted by 𝐯, ρ, s, σ, γ, T. The spatial and Lagrangian fields are related as follows[𝐯= φ̇∘φ ^-1, ρ =(ϱ _ ref∘φ^-1 )J_ φ ^-1,;s = (S∘φ^-1 )J_ φ ^-1, σ= (Σ∘φ^-1 )J_ φ ^-1,;γ = Γ∘φ^-1 , T=𝔗∘φ^-1. ] The Eulerian quantities associated to 𝐉 _S, 𝐏 ^ fr, R are denoted by 𝐣 _s, σ ^ ref, r, and are defined as𝐣_s= (T φ∘𝐉 _S ∘φ^-1)J_φ ^-1,r= R ∘φ^-1,σ ^ fr(x)( α _x, β _x)= J_ φ ^-1𝐏 ^ fr( φ ^-1(x) )( α _x, T ^∗ _X φ ( β _x)), for all x ∈𝒟 and for all α _x , β _x ∈ T^* _x 𝒟. From the expression of the free energy Lagrangian (<ref>), we deduce its spatial representation asℓ( 𝐯 , ρ , T)=∫_ 𝒟Λ( 𝐯, ρ , T) μ _g= ∫_ 𝒟1/2ρ | 𝐯|_g ^2μ _g-∫_ 𝒟ψ (ρ , T) μ _g.Proceeding similarly as in <cit.> we use the relations (<ref>) and (<ref>) to rewrite the variational formulation (<ref>)-(<ref>)-(<ref>) in spatial representation for a curve ( 𝐯 (t), ρ (t), γ (t), σ (t)) as follows: δ∫_t_1^t_2∫_ 𝒟[ Λ(𝐯 , ρ , D_tγ)- σ D_t γ]μ _g dt=0,Variational Condition with respect to variationsδ𝐯 = ∂ _t ζ +[ 𝐯 , ζ ], δρ =- div( ρζ ), δγ, andδσ, subject to the constraintD_tγD̅_ δσ = ( σ ^ fr)^♭ : ∇ζ-𝐣 _S ·𝐝 D_ δγ,Variational Constraint with ζ(t_i)=0 and δγ (t_i)=0, for i=1,2, where the curve ( 𝐯 (t), ρ (t), γ (t), σ (t)) satisfies the constraintD_tγD̅_tσ = ( σ^ fr) ^♭ : ∇𝐯 -𝐣 _S ·𝐝 D_t γ+ ρ r,Phenomenological Constraint.The first two expressions in (<ref>) are obtained by taking the variations with respect φ, 𝐯, and ρ, of the first two relations in (<ref>) and by defining the vector field ζ := δφ∘φ^-1. 
These formulas can be directly justified by employing the Euler-Poincaré reduction theory on Lie groups, for instance, see <cit.>.In (<ref>), (<ref>), and (<ref>), we introduced the notations[D_tf:= ∂ _t f+ 𝐯·𝐝f,D̅_t f := ∂ _t f + div( f𝐯 ),;D_ δ f:= δf+ ζ·𝐝f, D̅_ δf := δf + div( fζ), ]for the Lagrangian time derivatives and variations of scalar fields and density fields.By applying (<ref>), using the expression for the variations δ𝐯 and δρ, and 𝐯 |_∂𝒟=0, δγ (t _i )=0, i=1,2, we find the condition∫_t_1^t_2∫_ 𝒟[( ∂Λ/∂𝐯+ ( δΛ/δ T - σ) 𝐝γ) · (∂ _t ζ + [𝐯 , ζ] )-∂Λ/∂ρdiv( ρζ ) .. - D̅_t( δΛ/δ T - σ) δγ -δσ D_tγ] μ _gdt=0. Using the variational constraint (<ref>), collecting the terms proportional to ζ and δγ, and using ζ |_∂𝒟=0, 𝐯 |_∂𝒟=0, ζ (t _i )=0, i=1,2, we obtain the three conditionsζ :( ∂ _t +_ 𝐯 ) ( ∂Λ/∂𝐯+ ( δΛ/δ T - σ) 𝐝γ)= ρ 𝐝∂Λ/∂ρ- σ𝐝(D_t γ )+divσ^ fr- (div𝐣 _s ) 𝐝γ,δγ :D̅_t( δΛ/δ T - σ) = - div𝐣 _s and𝐣 _s·𝐧 ^♭ =0 on ∂𝒟 , where we introduced the Lie derivative notation _ 𝐯𝐦: = 𝐯·∇𝐦 + ∇𝐯 ^𝖳·𝐦 + 𝐦div𝐯 for a one-form density 𝐦 along a vector field 𝐯. Further computations and the phenomenological constraint (<ref>) finally yield the system{[( ∂ _t +_ 𝐯 ) ∂Λ/∂𝐯= ρ 𝐝∂Λ/∂ρ- ∂Λ/∂ T 𝐝 T+ divσ^ fr; T ( D̅_t δΛ/δ T+ div𝐣 _s )= (σ ^ fr ) ^♭ : ∇𝐯 - 𝐣 _s ·𝐝 T+ ρ r; D̅_t ρ =0, ]. whose last equation, the mass conservation equation, follows from the definition of ρ in terms of ϱ _ ref. These are the general equations of motion, in free energy Lagrangian form, for fluid dynamics subject to the irreversible processes of viscosity and heat conduction. By specifying this system to the Lagrangian (<ref>), one immediately gets the Navier-Stokes-Fourier system in the form{[ ρ ( ∂_t 𝐯 +𝐯·∇𝐯 )= - gradp +divσ ^ fr; TD̅_t ∂ψ/∂ T = div (T𝐣 _s ) -(σ ^ fr ) ^♭: ∇𝐯 - ρ r;D̅_t ρ =0, ]. where p= ρ∂ψ/∂ρ- ψ. The heat equation can be rewritten asT ( ∂ s /∂ TD_tT+ ∂ p/∂ Tdiv𝐯)= (σ ^ fr ) ^♭: ∇𝐯 - div( T 𝐣 _s ) +ρ r ,where the partial derivatives are taken at constant mass density and constant temperature. In terms of usual coefficients^6 (the specific heat at constant volume C_v, the speed of sound c_s ^2 and the diabatic temperature gradient Γ, which may all depend on ρ and T) it readsρ C_v (D_tT+ ρ c_s ^2 Γdiv𝐯) = (σ ^ fr ) ^♭ : ∇𝐯 - div( T 𝐣 _s ) +ρ r.This heat equation is valid for any state equations. In the case of the perfect gas, it simplifies since C_v is a constant and ρ ^2 c_s ^2C_vΓ = p. These results are summarized as follows.footnote1 We recall the expressions of the coefficients: C_v=T∂η/∂ T(ρ ,T), c_s ^2 =∂ p/∂ρ(ρ , η ), Γ = ∂ T/∂ p(p, η ), where η =s/ ρ is the specific entropy. The Navier-Stokes-Fourier equations in spatial representation, given by{[ ρ ( ∂_t 𝐯 +𝐯·∇𝐯 )= -𝐝 p +divσ ^ fr,; ρ C_v (D_tT+ ρ c_s ^2 Γdiv𝐯) =σ ^ fr : ∇𝐯 - div( T 𝐣 _s ) +ρ r, ].with boundary conditions 𝐯 |_∂𝒟=0 and 𝐣 _s ·𝐧 ^♭ |_∂𝒟= 0, follow from the variational condition (<ref>) with the variational and phenomenological constraints (<ref>), (<ref>). In order to close the system (<ref>), it is necessary to provide phenomenological expressionsof the thermodynamic fluxes in terms of the thermodynamic affinities, compatible with the second law of thermodynamics. In our case, the thermodynamic fluxes are σ ^ fr and 𝐣 _s and we have the well-known relationsσ ^ fr=2 μ(Def𝐯)^♯+ ( ζ - 2/3μ)(div𝐯 )g^♯and T𝐣 _s^♭= - κ𝐝T (Fourier law), where Def𝐯 = 1/2 (∇𝐯 + ∇𝐯 ^𝖳), μ≥ 0 is the first coefficient of viscosity (shear viscosity), ζ≥ 0 is the second coefficient of viscosity (bulk viscosity), and κ≥ 0 is the thermal conductivity. 
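Two elementary consistency checks may be helpful here; we add them for exposition (they are routine computations and, in the second case, assume dimension three). First, for the perfect-gas free energy density ψ(ρ,T) given in the previous section one computes ∂ψ/∂ρ = ψ/ρ + R_gas T, hence

p = ρ ∂ψ/∂ρ − ψ = R_gas ρ T,

which is the ideal gas law. Second, with the Fourier law T𝐣_s^♭ = −κ 𝐝T the heat-conduction term satisfies −𝐣_s · 𝐝T = κ |𝐝T|²_g / T ≥ 0, while

(σ^fr)^♭ : ∇𝐯 = 2μ( |Def 𝐯|² − ⅓ (div 𝐯)² ) + ζ (div 𝐯)² ≥ 0

by the Cauchy-Schwarz inequality, so both thermodynamic fluxes yield a non-negative entropy production, as required by the second law of thermodynamics.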
Generally, these coefficients depend on ρ and T. Constraints in infinite dimensions. The variational formulation for the Navier-Stokes-Fourier system developed in this paper is based on nonlinear and infinite dimensional generalizations of the Lagrange-d'Alembert principle of nonholonomic mechanics. In the present case, the infinite dimensional constraint is not associated to a mechanical constraint, it is the expression of the entropy production of the system. For infinite dimensional constrained mechanical systems, variational formulations have been used for example in <cit.>, <cit.> and <cit.>, <cit.> to derive and study geometrically exact models for elastic strands with rolling contact and for flexible fluid-conducting tubes. An infinite dimensional generalization of the Lagrange-d'Alembert principle was proposed in <cit.> to treat the case of 2^nd order Rivlin-Ericksen fluids in the context of nonholonomic systems. We refer to <cit.> for a treatment of infinite dimensional constrained mechanical systems via Hamel's formalism.Conclusion and future direction. In this paper, we presented a Lagrangian variational formulation for the Navier-Stokes-Fourier system based on the free energy. This formulation is developed in a systematic way from the free energy variational formulation of the thermodynamics of discrete systems described in <ref>. The approach presented in this paper complements that made in <cit.> as it uses the temperature, rather than the entropy, as an independent variable. The proposed free energy variational formulation is also well-adapted to include additional irreversible processes such as diffusion and chemical reactions treated in <cit.>. It can also be extended to cover the case of moist atmospheric thermodynamics following <cit.>.Acknowledgements. F.G.B. is partially supported by the ANR project GEOMFLUID, ANR-14-CE23-0002-01; H.Y. is partially supported by JSPS Grant-in-Aid for Scientific Research (26400408, 16KT0024, 24224004) and the MEXT “Top Global University Project”.xx [Appell(1911)]Ap1911 Appell, P [1911], Sur les liaisons exprimées par des relations non linéaires entre les vitesses, C.R. Acad. Sci. Paris, 152, 1197–1199.[Bloch(2003)]Bl2003 Bloch, A. M. [2003], Nonholonomic Mechanics and Control, volume 24 of Interdisciplinary Applied Mathematics, Springer-Verlag, New York. With the collaboration of J. Baillieul, P. Crouch and J. Marsden, and with scientific input from P. S. Krishnaprasad, R. M. Murray and D. Zenkov. [Cendra, Ibort, de León, and Martín de Diego(2004)]CeIbdLdD2004 Cendra, H., A. Ibort, M. de León, and D. Martín de Diego [2004], A generalization of Chetaev's principle for a class of higher order nonholonomic constraints, J. Math. Phys. 45, 2785.[Chetaev(1934)]Ch1934 Chetaev, N. G. [1934], On Gauss principle, Izv. Fiz-Mat. Obsc. Kazan Univ., 7, 68–71 [Ebin and Marsden(1970)]EbMa1970 Ebin, D.G. and J.E. Marsden [1970], Groups of diffeomorphisms and the motion of an incompressible fluid, Ann. Math. 92, 102–163.[Gay-Balmaz(2017)]FGB2017 Gay-Balmaz, F. [2017], A variational derivation of the thermodynamics of a moist atmosphere with irreversible processes, <https://arxiv.org/pdf/1701.03921.pdf> [Gay-Balmaz, Marsden, and Ratiu(2012)]GBMaRa2012 Gay-Balmaz, F., J. E. Marsden, and T. S. Ratiu [2012], Reduced variational formulations in free boundary continuum mechanics. J. Nonlinear Sc. 22, 553–597. [Gay-Balmaz and Putkaradze(2012)]GBPu2012 Gay-Balmaz, F. and V. Putkaradze [2012], Dynamics of Elastic Rods in Perfect Friction Contact, Phys. Rev. Lett. 
109, 244–303.[Gay-Balmaz and Putkaradze(2014)]GBPu2014 Gay-Balmaz, F. and V. Putkaradze [2014], Exact geometric theory for flexible, fluid-conducting tubes, C. R. Mécanique, 342, 79–84. [Gay-Balmaz and Putkaradze(2015a)]GBPu2015a Gay-Balmaz, F. and V. Putkaradze [2015a], Dynamics of Elastic Strands with Rolling Contact, Physica D 294, 6–23.[Gay-Balmaz and Putkaradze(2015b)]GBPu2015b Gay-Balmaz, F. and V. Putkaradze [2015b], On flexible tubes conducting fluid: geometric nonlinear theory, stability and dynamics, J. Nonlin. Sci., 25(4), 889–936.[Gay-Balmaz and Yoshimura(2015)]GBYo2015 Gay-Balmaz, F. and H. Yoshimura [2015], Dirac reduction for nonholonomic mechanical systems on semidirect products, Adv. Appl. Math., 63, 131–213. [Gay-Balmaz and Yoshimura(2017a)]GBYo2016a Gay-Balmaz, F. and H. Yoshimura [2017a], A Lagrangian variational formulation for nonequilibrium thermodynamics. Part I: discrete systems, J. Geom. Phys. 111, 169–193.[Gay-Balmaz and Yoshimura(2017b)]GBYo2016b Gay-Balmaz, F. and H. Yoshimura [2017b], A Lagrangian variational formulation for nonequilibrium thermodynamics. Part II: continuum systems, J. Geom. Phys. 111, 194–212. [Gay-Balmaz and Yoshimura(2017c)]GBYo2017a Gay-Balmaz, F. and H. Yoshimura [2017c], Variational discretization for the nonequilibrium thermodynamics of simple systems, <https://arxiv.org/pdf/1702.02594.pdf>. [Gay-Balmaz and Yoshimura(2017d)]GBYo2017c Gay-Balmaz, F. and H. Yoshimura [2017d], Dirac structures in nonequilibrium thermodynamics, <https://arxiv.org/pdf/1704.03935.pdf>[Holm, Marsden and Ratiu(1998)]HoMaRa1998 Holm, D. D., J. E. Marsden and T. S. Ratiu [1998], The Euler-Poincaré equations and semidirect products with applications to continuum theories, Adv. in Math. 137, 1–81. [Marsden and Hughes(1983)]MaHu1983 Marsden, J. E. and T. J. R. Hughes [1983], Mathematical Foundations of Elasticity (Prentice Hall, New York, 1983) (reprinted by Dover, New York, 1994). [Green and Naghdi(1991)]GrNa1991 Green, A. E. and P. M. Naghdi [1991], A re-examination of the basic postulates of thermomechanics, Proc. R. Soc. London.Series A: Mathematical, Physical and Engineering Sciences, 432(1885), 171–194.[Gruber(1997)]Gr1997 Gruber, C. [1997], Thermodynamique et Mécanique Statistique, Institut de physique théorique, EPFL. [Gruber(1999)]Gr1999 Gruber, C. [1999], Thermodynamics of systems with internal adiabatic constraints: time evolution of the adiabatic piston, Eur. J. Phys. 20, 259–266. [Marle(1998)]Ma1998 Marle, C.-M. [1998], Various approaches to conservative and nonconservative non-holonomic systems, Rep. Math. Phys. 42, 1/2, 211–229.[Pironneau(1983)]Pi1983 Pironneau, Y. [1983], Sur les liaisons non holonomes non linéaires, déplacements virtuels à travail nul, conditions de Chetaev, Proceedings of the IUTAM–IS1MMM Symposium on “Modern Developments in Analytical Mechanics”, Torino 1982, Atti della Acad. della sc. di Torino, 117, 671–686. [Podio-Guidugli(2009)]Po2009 Podio-Guidugli, P. [2009], A virtual power format for thermomechanics, Continuum Mechanics and Thermodynamics, 20(8), 479–487. [Shi, Berchenko-Kogan, Zenkov, and Bloch(2015)]ShBKZeBl2015 Shi, D., Y. Berchenko-Kogan, D. V. Zenkov, and A. M. Bloch [2015], Hamel's Formalism for infinite-dimensional mechanical systems, J. Nonlin. Sci., 27(1), 241–283. [Simo, Marsden, and Krishnaprasad(1988)]SiMaKr1988 Simo, J. C., J. E. Marsden and P. S. Krishnaprasad [1988], The Hamiltonian structure of nonlinear elasticity: The material, spatial and convective representations of solids, rods and plates, Arch. 
Rational Mech. Anal., 104, 125–183.[Stueckelberg and Scheurer(1974)]StSc1974 Stueckelberg, E. C. G. and P. B. Scheurer [1974], Thermocinétique phénoménologique galiléenne, Birkhäuser, 1974. [von Helmholtz(1884)]He1884 von Helmholtz, H. [1884], Studien zur Statik monocyklischer Systeme. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, 159–177. | http://arxiv.org/abs/1706.09010v1 | {
"authors": [
"François Gay-Balmaz",
"Hiroaki Yoshimura"
],
"categories": [
"math-ph",
"math.DS",
"math.MP"
],
"primary_category": "math-ph",
"published": "20170626101401",
"title": "A free energy Lagrangian variational formulation of the Navier-Stokes-Fourier system"
} |
Department of Mathematics, African Institute for Mathematical Sciences, Ghana [email protected]/[email protected]

Given an additive function f and a multiplicative function g, we set E(f,g;x) = #{n ≤ x : f(n) = g(n)}. We investigate the size of this quantity; in particular, we establish lower bounds for E(Ω,g;x), where Ω(n) stands for the number of prime factors of n counting their multiplicity and where g is an arbitrary multiplicative function. We show that E(Ω,g;x) ≫ x/(loglog x)^1/2+ϵ for any arbitrarily small ϵ > 0. This is therefore an extension of an earlier result of De Koninck, Doyon and Letendre.

On the proximity of multiplicative functions with the function counting the number of prime factors with multiplicity
Theophilus Agama
December 30, 2023.
=====================================================================================================================

§ INTRODUCTION

Let us set E(f,g;x) := #{n ≤ x : f(n) = g(n)}, where f and g are arbitrary additive and multiplicative functions, respectively. One of the most basic questions one can ask is how large and how small this quantity can be. In 2014, De Koninck, Doyon and Letendre <cit.> proved that, for a suitable choice of multiplicative function g and some choice of sequence (x_n) of positive integers, E(ω,g;x_n) ≫ x_n/(loglog x_n)^1/2+ϵ for any small ϵ > 0. Moreover, they were able to show that if f is an integer-valued additive function such that φ(x) = φ_f(x) = B(x)/A(x) ⟶ 0 as x ⟶ ∞, where A(x) := ∑_p^α≤ x f(p^α)(1-1/p) and B(x) := ∑_p^α≤ x |f(p^α)|^2/p^α, and that max_z∈ℝ #{n ≤ x : f(n) = z} = O(x/K(x)), where K(x) ⟶ ∞ as x ⟶ ∞, then for any multiplicative function g, E(f,g;x) = o(x) as x ⟶ ∞. In particular, E(ω,g;x) = o(x) as x ⟶ ∞.

§ PRELIMINARY RESULTS

Let π_k(x) = #{n ≤ x : ω(n) = k} for each positive integer k. Then the maximum value of π_k(x) is x/√(loglog x)(1+o(1)), and the value of k for which it occurs is k = loglog x + O(1). This follows from a result of Balazard <cit.>.

For all x ≥ 2 and for every δ > 0, #{n ≤ x : |ω(n) - loglog n| > (loglog x)^1/2+δ} = o(x) (x ⟶ ∞). This follows from Theorem 8.12 in the book of Nathanson <cit.>.

Now we present one of the results and the techniques of De Koninck, Doyon and Letendre <cit.> employed in obtaining the lower bound for the quantity E(ω,g;x).

Let ϵ > 0 be very small. Then there exist a multiplicative function g and a sequence (r_j) of positive integers such that E(ω,g;r_j) ≫ r_j/(loglog r_j)^1/2+ϵ.

Given ϵ > 0 very small, let 𝒮 = {s_1, s_2, …} be an infinite set of primes, with s_1 = 2 and s_j the smallest prime number larger than max{s_j-1, j^1+δ} for j ≥ 2 and δ > 0 very small. Choose r_j = e^e^2^j and let (z_j) be a sequence of integers maximizing the quantity #{m ≤ r_j/s_j : s_k ∤ m for each s_k ∈ 𝒮, ω(m) = z_j-1} for each j ≥ 1, which is well defined by Lemma <ref>. Define g, a strongly multiplicative function, on the primes as g(p) = z_j if p = s_j ∈ 𝒮, and g(p) = 1 if p ∉ 𝒮.

To find a lower bound for E(ω,g;r_j), it suffices to consider integers of the form n = m·s_j such that s_j ∤ m for s_j ∈ 𝒮:

E(ω,g;r_j) = #{n ≤ r_j : ω(n) = g(n)} ≥ #{n ≤ r_j : s_j | n, s_k ∤ (n/s_j) for k ≠ j, ω(n) = z_j} ≥ #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) = z_j-1}.

Now, let I_j = [loglog r_j - (loglog r_j)^1/2+ϵ, loglog r_j + (loglog r_j)^1/2+ϵ]. Then

#{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮} = #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) ∉ I_j} + #{m ≤ r_j/s_j : s_k ∤ m, ω(m) ∈ I_j}.

In relation to Lemma <ref>, #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) ∉ I_j} = o(r_j/s_j) (j ⟶ ∞).
Also,

#{m ≤ r_j/s_j : s_k ∤ m, ω(m) ∈ I_j} ≤ ∑_N∈ I_j #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) = N} ≤ 2(loglog r_j)^1/2+ϵ #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) = z_j-1}.

It follows from (<ref>) and (<ref>) that

#{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) = z_j-1} ≥ #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮}/(2(loglog r_j)^1/2+ϵ).

It is also clear that #{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮} = (1+o(1)) (r_j/s_j) C(δ) (j ⟶ ∞). Plugging (<ref>) into (<ref>), we have

#{m ≤ r_j/s_j : s_k ∤ m for s_k ∈ 𝒮, ω(m) = z_j-1} ≫ (1/2+o(1)) (r_j/s_j) C(δ) · 1/(loglog r_j)^1/2+ϵ.

Then, for sufficiently large j, we see that s_j < j^1+2δ ≤ (2^j)^ϵ, where (2^j)^ϵ = (loglog r_j)^ϵ. Using this fact, we obtain (1/2+o(1)) (r_j/s_j) C(δ) · 1/(loglog r_j)^1/2+ϵ ≫ (1/2+o(1)) r_j/(loglog r_j)^1/2+2ϵ, and from (<ref>) we have E(ω,g;r_j) ≫ r_j/(loglog r_j)^1/2+ϵ, thus completing the proof.

It has to be said that this is a good lower bound, but it only works for a particular type of sequence and is therefore not uniform.

§ MAIN RESULT

In this section we use the techniques employed by De Koninck, Doyon and Letendre <cit.> to obtain a uniform lower bound for E(Ω,g;x).

Let 𝒮 = {s_1, s_2, …} be an infinite set of primes such that ∑_j=1^∞ 1/s_j < ∞. Then for any small ϵ > 0, there exists a strongly multiplicative function g such that E(Ω,g;x) ≫ x/(loglog x)^1/2+ϵ.

Let (z_j) be a sequence of positive integers maximizing the quantity #{r ≤ x/s_j : s_i ∤ r for each s_i ∈ 𝒮, Ω(r) = z_j-2} for s_j ≡ 1 (mod 4), for each j ≥ 1, and #{r ≤ x/s_j : s_i ∤ r for each s_i ∈ 𝒮, Ω(r) = z_j} for s_j ≡ 3 (mod 4), for each j ≥ 1, which is well defined by Lemma <ref>. Define g, a strongly multiplicative function, on the primes as g(p) = z_j-1 if p = s_j ≡ 1 (mod 4), s_j ∈ 𝒮; g(p) = z_j+1 if p = s_j ≡ 3 (mod 4), s_j ∈ 𝒮; and g(p) = 1 if p ∉ 𝒮.

To obtain a lower bound for E(Ω,g;x), it suffices to consider only integers of the form n = r·s_j^α for α ≥ 1, s_j ∤ r. Clearly

E(Ω,g;x) = #{n ≤ x : Ω(n) = g(n)} ≥ ∑_α≥ 1 #{n ≤ x : s_j^α | n, s_i ∤ n for i ≠ j, Ω(n) = g(n)} ≥ ∑_α≥ 1 #{r ≤ x/s_j^α : Ω(r) = g(s_j)-α, s_i ∤ r for each s_i ∈ 𝒮, s_j ≡ 1 (mod 4)} ≥ #{r ≤ x/s_j : Ω(r) = g(s_j)-1, s_i ∤ r for each s_i ∈ 𝒮, s_j ≡ 1 (mod 4)} ≥ #{r ≤ x/s_j : Ω(r) = z_j-2, s_i ∤ r for each s_i ∈ 𝒮}.

Again consider the interval I_j = [loglog x - (loglog x)^1/2+ϵ, loglog x + (loglog x)^1/2+ϵ]. Let us consider #{r ≤ x/s_j : s_i ∤ r for each s_i ∈ 𝒮, s_j ≡ 1 (mod 4)}. We observe, in relation to Theorem <ref>, that #{r ≤ x/s_j : s_i ∤ r for each s_i ∈ 𝒮, s_j ≡ 1 (mod 4), Ω(r) ∉ I_j} = o(x/s_j), as j ⟶ ∞.
de2014proximity J. M. De Koninck, N. Doyon and P. Letendre, On the proximity of additive and multiplicative functions, Functiones et Approximatio Commentarii Mathematici, vol. 52.2, Adam Mickiewicz University, 2015, pp. 327–344. balazard1990unimodalite M. Balazard, Unimodalité de la distribution du nombre de diviseurs premiers d'un entier, Ann. Inst. Fourier (Grenoble), vol. 40.2, 1990, pp. 255–270. nathanson2000elementary M. B. Nathanson, Elementary Methods in Number Theory, Graduate Texts in Mathematics, Springer, New York, NY, 2000. | http://arxiv.org/abs/1706.08646v1 | {
"authors": [
"Theophilus Agama"
],
"categories": [
"math.NT"
],
"primary_category": "math.NT",
"published": "20170627020810",
"title": "On the proximity of multiplicative functions to the function counting prime factors with multiplicity"
} |
Department of Mathematics, University of Rhode Island, Kingston, RI, USA, 02881 [email protected] [2010] Primary 05C57; Secondary 05C85. In the game of Cops and Robbers, a team of cops attempts to capture a robber on a graph G. All players occupy vertices of G. The game operates in rounds; in each round the cops move to neighboring vertices, after which the robber does the same. The minimum number of cops needed to guarantee capture of a robber on G is the cop number of G, denoted (G), and the minimum number of rounds needed for them to do so is the capture time. It has long been known that the capture time of an n-vertex graph with cop number k is O(n^{k+1}). More recently, Bonato, Golovach, Hahn, and Kratochvíl (<cit.>, 2009) and Gavenčiak (<cit.>, 2010) showed that for k = 1, this upper bound is not asymptotically tight: for graphs with cop number 1, the cop can always win within n-4 rounds. In this paper, we show that the upper bound is tight when k ≥ 2: for fixed k ≥ 2, we construct arbitrarily large graphs G having capture time at least (V(G)/40k^4)^{k+1}. In the process of proving our main result, we establish results that may be of independent interest. In particular, we show that the problem of deciding whether k cops can capture a robber on a directed graph is polynomial-time equivalent to deciding whether k cops can capture a robber on an undirected graph. As a corollary of this fact, we obtain a relatively short proof of a major conjecture of Goldstein and Reingold (<cit.>, 1995), which was recently proved through other means (<cit.>, 2015). We also show that n-vertex strongly-connected directed graphs with cop number 1 can have capture time Ω(n^2), thereby showing that the result of Bonato et al. <cit.> does not extend to the directed setting. Bounds on the length of a game of Cops and Robbers William B.
Kinnersley December 30, 2023 ================================================== Introduction In the classic game of Cops and Robbers, k cops attempt to capture a single robber on a graph G. The cops and robber all occupy vertices of G. (Multiple entities may occupy a single vertex simultaneously.) Cops and Robbers is a perfect-information game; in particular, the cops and robber know each other's positions at all times. At the beginning of the game, the cops choose their starting positions, after which the robber chooses his position in response. Henceforth the game proceeds in rounds, each of which consists of a cop turn followed by a robber turn. In the usual formulation of the game, on the cops' turn, each cop may either remain on her current vertex or move to an adjacent vertex; on the robber's turn, the robber may do the same. If at any point any cop occupies the same vertex as the robber, we say that she captures the robber, and the cops win. Conversely, the robber wins by perpetually evading capture. Clearly, placing one cop on each vertex of G results in a win for the cops. It is thus natural to ask for the minimum number of cops needed to guarantee capture of a robber on G, no matter how skillfully the robber plays; this quantity is the cop number of G, denoted (G). Much attention has been given to the cop number. Quilliot <cit.>, along with Nowakowski and Winkler <cit.>, independently introduced the game and characterized graphs with cop number 1, while Aigner and Fromme <cit.> were the first to study graphs with larger cop numbers. Since then, there have been numerous papers written on Cops and Robbers, and many variants of the game have been introduced; for more background on the game, we direct the reader to <cit.>. In this paper, rather than asking how many cops are needed to capture a robber on G, we instead ask: how quickly can they do so? When G is a graph with cop number k, the capture time of G is the minimum number of rounds it takes for k cops to capture a robber on G, provided that the robber evades capture as long as possible. To be more precise, the capture time of G is the minimum number t such that (G) cops can always capture a robber on G within t rounds, regardless of how the robber plays. (We do not consider the players' choosing of initial positions to constitute a round; rather, the first round begins with the first cop turn.) We denote the capture time of G by (G). The concept of capture time was first introduced by Bonato, Golovach, Hahn, and Kratochvíl in <cit.>. They and Gavenčiak <cit.> showed that for n ≥ 7, if G is an n-vertex graph with cop number 1, then (G) ≤ n-4 (and this bound is best possible). For graphs with higher cop number, not much is known about the capture time. The capture time of two-dimensional Cartesian grids was determined by Mehrabian in <cit.>, and the capture time of the d-dimensional hypercube Q_d was shown to be Θ(d log d) by Bonato, Gordinowicz, Kinnersley, and Prałat in <cit.>. More recently, Bonato, Pérez-Giménez, Prałat, and Reiniger <cit.> investigated the length of the game on Q_d when playing with more than c(Q_d) cops and similarly studied the length of the game on trees, grids, planar graphs, and binomial random graphs when playing with "extra" cops. Pisantechakool <cit.> showed that for n-vertex planar graphs, three cops can always capture a robber within 2n rounds. Förster, Nuridini, Uitto, and Wattenhoffer <cit.> constructed graphs on which a single cop needs on the order of ℓ · n rounds to capture ℓ robbers.
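Before turning to bounds, it may help to make the configuration-space view of the game concrete. The following self-contained Python sketch — our own illustration, not taken from any of the works cited above, with made-up toy graphs — computes the capture time of k cops on a small undirected graph by retrograde analysis: it repeatedly grows the set of (cop positions, robber position) states from which the cops can force a win, which is exactly the state space counted in the O(n^{k+1}) argument of the next section. It is, of course, only practical for very small graphs.

from itertools import product

def capture_time(adj, k):
    # adj: dict mapping each vertex to the set of its neighbours.
    # Closed neighbourhoods are used throughout, so 'stay put' is a
    # legal move, matching the usual (reflexive) rules of the game.
    V = list(adj)
    nbhd = {v: {v} | set(adj[v]) for v in V}
    cop_tuples = list(product(V, repeat=k))
    # win[(c, r)] = rounds the cops need from cop tuple c, robber on r,
    # cops to move; states never entered into win are robber wins.
    win = {(c, r): 1 for c in cop_tuples for r in V
           if any(r in nbhd[ci] for ci in c)}
    frontier, t = set(win), 1
    while frontier:
        t += 1
        frontier = set()
        for c in cop_tuples:
            cop_moves = list(product(*(nbhd[ci] for ci in c)))
            for r in V:
                if (c, r) not in win and any(
                        all((cp, rp) in win for rp in nbhd[r])
                        for cp in cop_moves):
                    # some joint cop move traps every robber reply
                    # inside the already-won region
                    frontier.add((c, r))
        for state in frontier:
            win[state] = t
    # cops choose their start first; the robber then answers worst-case
    inf = float("inf")
    best = min(max(win.get((c, r), inf) for r in V) for c in cop_tuples)
    return None if best == inf else best

# toy checks on made-up graphs:
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(capture_time(path4, 1))    # 2: start mid-path, march at the robber
print(capture_time(cycle4, 1))   # None: one cop never wins on a 4-cycle
print(capture_time(cycle4, 2))   # 1: two opposite cops defend all of C_4

(One convention in the sketch: a robber who steps onto a cop is captured on the following cop turn rather than instantly; since an optimally evasive robber never does this, the computed value is unaffected.)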
In general, the capture time of an n-vertex graph with cop number k must be O(n^{k+1}), and this is straightforward to prove – see, for example, <cit.>. However, it is not so easy to determine whether this upper bound is asymptotically tight. The aforementioned work of Bonato, Golovach, Hahn, and Kratochvíl <cit.> and Gavenčiak <cit.> shows that in fact this bound is not tight when k=1. Moreover, to the best of our knowledge, there are no examples in the literature of n-vertex graphs whose capture time is even superlinear in n, let alone on the order of n^{k+1}. However, in this paper, we show that the simple upper bound on (G) is in fact tight for k ≥ 2: our main result, Theorem <ref>, states that for fixed k ≥ 2, the maximum value of the capture time among n-vertex graphs with cop number k is Θ(n^{k+1}). (While this paper was in preparation, we were informed that Theorem <ref> has recently and independently been obtained by Brandt, Emek, Uitto, and Wattenhoffer in <cit.> using a different argument. However, we believe that the techniques used in our proof are of independent interest, particularly Theorem <ref>: very little is known about Cops and Robbers played on directed graphs, and Theorem <ref> establishes a close connection between the game played on undirected graphs and the game played on directed graphs.) Additionally, we prove in Theorem <ref> that the maximum capture time among directed n-vertex graphs with cop number 1 is Θ(n^2), in contrast to the situation on undirected graphs. To prove Theorem <ref>, we consider a generalized version of the game of Cops and Robbers. Some formulations of the game do not allow players to remain still on their turns; rather, each cop and the robber must, on their corresponding turns, follow an edge. When the game is presented in this manner, the underlying graph is typically assumed to be reflexive, meaning that each vertex has a loop (i.e. an edge joining that vertex to itself). Consequently, this alternative formulation of the game is functionally equivalent to the original: instead of remaining still, an entity can simply follow the loop at its current vertex. In this paper, we find it useful to use this alternate formulation of the game but to forego the assumption that the graph is reflexive. Rather, some vertices may have loops, meaning that players can remain at that vertex indefinitely; others may have no loops, meaning that players at that vertex cannot remain there, and instead must move to an adjacent vertex. We also permit our underlying graph to be directed; on a directed graph, players must follow edges in their prescribed directions. (We ensure that the vertices of our graph always have at least one out-neighbor, so that we never reach a situation in which a player has no legal moves.) Finally, we make use of a variant known as "Cops and Robbers with Protection", introduced by Mamino in <cit.>. In this variant, edges may be deemed "protected"; cops may freely traverse protected edges, but can only capture the robber by reaching the robber's vertex along an unprotected edge.
Our paper is laid out as follows. In Section <ref>, we discuss the variants of Cops and Robbers that we will use throughout the paper. In particular, we show how to model an instance of Cops and Robbers with Protection played on a directed graph using an instance of ordinary Cops and Robbers played on a reflexive undirected graph, without substantially changing the capture time. We also outline the proof of our main result. In Sections <ref> and <ref>, we prove our main result by showing how to construct n-vertex graphs with cop number k and capture time Ω(n^{k+1}). Next, in Section <ref>, we show that n-vertex strongly-connected directed graphs with cop number 1 may have capture time Ω(n^2). Finally, in Section <ref>, we briefly discuss some consequences of our results in the area of computational complexity. Throughout the paper, we denote the vertex set and edge set of a graph or directed graph G by V(G) and E(G), respectively. We denote an undirected edge joining vertices u and v by uv, and we denote a directed edge from u to v by u⃗v⃗. In the standard model of Cops and Robbers, we say that a cop defends a vertex v when she occupies v or some neighbor of v. In other words, v is defended if a robber moving to v would be captured on the ensuing cops' turn. The precise definition of the term will need to be modified slightly for some of the Cops and Robbers variants we consider, but the intuitive definition – a vertex is defended if the robber cannot safely move there – will remain unchanged. In an instance of Cops and Robbers in which there are enough cops to guarantee a win, we say that the cops play optimally or use an optimal strategy if they play in a way that minimizes the length of the game (relative to the robber's actions). Similarly, we say that the robber plays optimally if he plays in a way that maximizes the length of the game (relative to the cops' actions). By a configuration of a game in progress, we mean the collection of the positions of all cops and the robber. For more notation and background in graph theory, we direct the reader to <cit.>. Preliminaries and Overview In this section, we lay the groundwork for a proof of Theorem <ref>. Theorem <ref>. For fixed k ≥ 2, the maximum capture time of an n-vertex graph with cop number k is Θ(n^{k+1}). We begin with a short proof of an easy upper bound on (G). (This result is well-known – see for example <cit.> – but we include a proof here for the sake of completeness.)
If G is an n-vertex graph with cop number k, then (G) ≤ n · \binom{n+k-1}{k} = O(n^{k+1}). Cops and Robbers is a deterministic game of perfect information and, at any point during the game, only the current configuration matters; the precise sequence of moves used to reach that configuration is irrelevant. Consequently, under optimal play by both players, each player's moves are determined entirely by the present configuration of the game. It is thus straightforward to see that under optimal play, no two cop turns begin with the same configuration. Indeed, suppose that rounds t_1 and t_2 (with t_1 < t_2) both begin with some configuration C. Since the players' moves depend only on the current configuration of the game, the same moves played on rounds t_1, t_1+1, …, t_2-1 would subsequently be repeated by both players. Consequently, the game would return to configuration C an unbounded number of times, contradicting the assumption that the cops are using a winning strategy. The claim now follows by noting that there are n · \binom{n+k-1}{k} different configurations: there are n ways to choose the robber's position, and there are \binom{n+k-1}{k} ways to choose a multiset of k cop positions. Thus an optimally-played game involves at most n · \binom{n+k-1}{k} cop turns and hence at most n · \binom{n+k-1}{k} rounds. Note that for fixed k, we have n · \binom{n+k-1}{k} ≤ n · (n+k-1)^k e^k/k^k = O(n^{k+1}), so Proposition <ref> establishes the upper bound in Theorem <ref>. Establishing the lower bound in Theorem <ref> is considerably more difficult. The remainder of this section is devoted to giving a high-level overview of the proof, while the proof itself appears in Section <ref>. For each integer k ≥ 2, we aim to construct arbitrarily large graphs G such that (G) = k and (G) ≥ c_k · V(G)^{k+1} (where c_k is a constant depending only on k). One difficulty in doing this is that the game of Cops and Robbers can get quite complex. There are myriad ways for the game to proceed, so finding an optimal cop or robber strategy for some graph – or even just determining the capture time – can be frustratingly complicated. To deal with this difficulty, we consider a generalization of the usual game of Cops and Robbers that gives us more power in constructing a "game board" that limits the players' freedom. In Section <ref>, we use this model to produce graphs with high capture time (under the rules of this generalization). Moreover, we prove that this is "good enough" to establish Theorem <ref>: we show that any instance of the game with large capture time under this new model can be transformed into an instance of the usual game of Cops and Robbers with large capture time and with the same cop number. In fact, we consider two successive generalizations of the game. The first of these is the game of "Cops and Robbers with Protection," introduced by Mamino <cit.>. In this variant, each edge of the underlying graph G is designated either protected or unprotected. Additionally, the winning conditions of the game are slightly different: the cops win only if some cop reaches the same vertex as the robber by traveling along an unprotected edge. (The cops may still traverse protected edges, but cannot use them to directly capture the robber.) One subtle consequence of this change is that the robber may, in this variant, move to the same vertex as a cop without immediately losing the game. (Note, however, that if this vertex has an unprotected loop, then the cop may use it to win on the ensuing cop turn.) We refer to a vertex with an unprotected loop (resp. protected loop) as an unprotected vertex (resp.
protected vertex). Vertices without loops are considered neither protected nor unprotected. We say that a cop defends a vertex if it is adjacent to that vertex along an unprotected edge. A graph together with a specification of protectedness or unprotectedness for each edge is a protected graph. We define the cop number and capture time of a protected graph in this new game analogously as in the original game. Mamino showed that Cops and Robbers with Protection can be reduced to ordinary Cops and Robbers. He did this by showing how to construct, for every protected graph G, an unprotected graph H with the same cop number and with V(H) not "too much larger" than V(G). While Mamino was not concerned with capture time, his proof shows that in fact (H) = (G). Thus, close analysis of Mamino's construction yields the following: For every protected graph G, there exists a graph H such that (H) = (G), (H) = (G), and V(H) < 4 · (G)^2 · V(G). Next, we show that instead of restricting ourselves to reflexive undirected graphs, we can consider (not necessarily reflexive) directed graphs. To be precise, we prove the following: Theorem <ref>. Fix k ≥ 2, and let G be a protected (not necessarily reflexive) directed graph with (G) = k such that every vertex in G has at least one in-neighbor and one out-neighbor. If H is constructed from G as specified above, then (H) = k and (G) + 1 ≤ (H) ≤ (G) + 2. In addition, V(H) = (3k+3)V(G) + 8k + 3. After establishing Theorem <ref>, we are ready to prove Theorem <ref>. We aim to construct reflexive undirected graphs with large capture time; Theorem <ref> shows that it suffices to construct (not necessarily reflexive) protected directed graphs with large capture time. In particular, for k ≥ 2 we show how to construct arbitrarily large protected directed graphs G with (G) = k and (G) ≥ (V(G)/2k)^{k+1}. Once we have constructed some such directed graph, we can use Theorem <ref> to find a corresponding protected undirected graph. Finally, Lemma <ref> gives a corresponding ordinary undirected graph, which completes the proof of Theorem <ref>. § MODELING DIRECTED GRAPHS WITH UNDIRECTED GRAPHS To prove Theorem <ref> we must show how to construct, for a given protected directed graph G, a corresponding undirected protected graph H with the same cop number, roughly the same capture time, and not "too many more" vertices. Since this construction is rather involved, we devote this section to an explanation of the construction, leaving the actual proof of Theorem <ref> for Section <ref>. Let G be a protected directed graph and let k = (G). We construct a reflexive protected undirected graph H as follows. The vertex set of H consists of sets S, T_0, T_1, T_2, C_0, C_1, C_2, C^*, R_0, R_1, R_2, and R^*. We refer to vertices in C_0, C_1, and C_2 as cop vertices, while vertices in R_0, R_1, and R_2 are robber vertices. S ∪ T_0 ∪ T_1 ∪ T_2 is referred to as the reset clique, with S itself comprising the core and T_i comprising the ith wing of the clique. Vertices in C^* are cop starter vertices, while vertices in R^* are robber starter vertices. (See Figure <ref>.) All vertices in the reset clique are protected, while all other vertices in H are unprotected. Throughout the construction, the indices on the C_i and R_i should be taken modulo 3 when necessary; for example, when i=2, the expression "C_i+1" refers to C_0.
The core of the reset clique contains 4k vertices, namely s_0, s_1, …, s_4k-1.In addition, for i ∈{0,1,2}, the set T_i contains k vertices, namely t_0^i, t_1^i, …, t_k-1^i.Every pair of vertices in the reset clique is joined by a protected edge.Under the proper circumstances, the reset clique will permit the robber to “reset” the game back to its initial state.As with the C_i and R_i, indices on vertices within the T_i should be taken modulo k and indices on vertices in S should be taken modulo 4k.The sets C_0, C_1, and C_2 each contain k copies of every vertex in G.For v ∈ V(G), i ∈{0, 1, 2}, and j ∈{0, 1, …, k-1}, we denote by κ(v;i,j) the jth copy of v belonging to C_i.Within each C_i, the copies of a given vertex form a clique; that is, for all v, i, j, and j', the vertices κ(v;i,j) and κ(v;i,j') are joined by an unprotected edge.Aside from these edges and loops, there are no edges with both endpoints in any C_i.The sets R_0, R_1, and R_2 each contain one copy of every vertex in G, and each R_i is independent.We denote by ρ(v;i) the copy of v in R_i.Our intent is that for the bulk of the game, the cops occupy vertices within the C_i (that is, “cop vertices”), while the robber occupies a vertex within one of the R_j (that is, a “robber vertex”).We seek to construct H so that play of the game on H mirrors play on G, in the sense that we can equate a cop occupying κ(v;i,j) in H with a cop occupying v in G; likewise, the robber occupying ρ(v;i) in H is analogous to the robber occupying v in G.We also aim to greatly restrict the flexibility that both players enjoy.In particular, our intent is that under any optimal cop strategy, if the cops are positioned within C_i after some cop turn, then the robber must be positioned within R_i.Moreover, we aim to force the cops to move from C_0 to C_1 to C_2 to C_0 and so on; forcing the cops to keep moving in the “forward direction” among the C_i will allow us to simulate playing on the directed graph G.Likewise, we aim to force the robber to move from R_0 to R_1 to R_2 to R_0 and so on.We now add edges among the cop and robber vertices.For all uv∈ E(G), all i ∈{0, 1, 2}, and all j ∈{0, 1, …, k-1}, add unprotected edges joining κ(u;i,j) to κ(v;i+1,j) and joining ρ(u;i) to ρ(v;i+1).(Note that u and v need not be distinct.)These edges ensure that each “forward” movement by a cop or robber (from C_i to C_i+1 or from R_i to R_i+1) corresponds to following an edge from G in the forward direction.If uv is unprotected, then we also add an unprotected edge joining κ(u;i,j) to ρ(v;i+1).These edges allow a cop to capture the robber in H provided that she would be able to do so in G.(See Figure <ref>.) 
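(A quick bookkeeping check, anticipating from the next paragraph that C^* contains k cop starter vertices and that there are three robber starter vertices r^*_0, r^*_1, r^*_2 in R^*: the vertex count of H assembles as V(H) = S + T_0 + T_1 + T_2 + C_0 + C_1 + C_2 + R_0 + R_1 + R_2 + C^* + R^* = 4k + 3k + 3k·V(G) + 3·V(G) + k + 3 = (3k+3)·V(G) + 8k + 3, which matches the count asserted in Theorem <ref>.)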
Next, for i ∈{0,1,2}, add protected edges joining each vertex of R_i to each vertex of S ∪ T_i.These edges permit the robber to “escape” to the reset clique, should the cops fail to adequately defend it.Additionally, add unprotected edges joining each κ(v;i,j) to s_4j+i, s_4j+i+1, s_4j+i+2, s_4j+i+3, and t_j^i.These edges permit cops in C_i to defend the core and ith wing of the reset clique, but only by positioning themselves in very special ways (see below).We also add unprotected edges joining all vertices of C_i to all vertices of R_i ∪ R_i-1.These edges force the robber to keep moving “forward” from R_0 to R_1 to R_2 to R_0 and so forth, in order to stay one step “ahead” of the cops.The set C^* of cop starter vertices contains k vertices, namely c^*_0, c^*_1, …, c^*_k-1.For j ∈{0, 1, …, k-1}, we add unprotected edges joining c^*_j to s_4j+3,s_4j+4,s_4j+5,s_4j+6, t_j^0,t_j^1, and t_j^2.We also add unprotected edges joining every cop starter vertex to every cop vertex and every robber vertex.Finally, add unprotected edges joining all robber starter vertices to all cop vertices and joining each r^*_i to all vertices in R_i+1, along with protected edges joining each pair of robber starter vertices and protected edges joining each r^*_i to all vertices in the core and ith wing of the reset clique.These edges ensure that when the cops choose to start the game by occupying all of the cop starter vertices, the robber must start on one of the robber starter vertices.(In fact, under optimal play the cops must start on the cop starter vertices, but this is less clear; see Section <ref>.)§ MAIN RESULTS We are now ready to prove Theorems <ref> and <ref>.To prove Theorem <ref>, we must show that (H) = (G) = k and that (H) is roughly equal to (G) (where G is any given protected directed graph and H is the protected undirected graph constructed from G in Section <ref>).We begin by describing how we “expect” the game on H to be played. To simplify the analysis, we introduce some additional terminology.For all v ∈ V(G) and i ∈{0,1,2}, we refer to κ(v;i,j) as a j-vertex; c^*_j is also considered a j-vertex.In the game on H, we say that the cops occupy a stable position if either the cops occupy all k cop starter vertices, or all cops occupy vertices within C_i for some i with one cop on a j-vertex for all j ∈{0, 1, …, k-1}.We say that the game is in a canonical configuration if the cops occupy a stable position in some C_i and either: * It's the cops' turn and the robber occupies a vertex in R_i+1, or* It's the robber's turn and the robber occupies a vertex in R_i.As we will see below, the cops will want to prevent the robber from ever reaching the reset clique.To do this, they must ensure that they always defend the robber's neighbors in the clique; this requires them to position themselves carefully.Recall that we say a cop defends a vertex v when there is an unprotected edge joining the cop's current vertex with v. The cops defend all vertices of the core of the reset clique in H if and only if they occupy a stable position.Moreover, if the cops occupy a stable position, then they defend the ith wing of the reset clique if and only if they occupy either C_i or the cop starter vertices. 
It is clear from the construction of H that if the cops occupy a stable position, then they defend all vertices of the core.Suppose now that the cops defend all vertices of the core.Every cop vertex and cop starter vertex defends exactly four vertices within the core, while no other vertices in the graph defend any vertices of the core.Since there are 4k vertices in the core and only k cops, all k cops must occupy cop vertices or cop starter vertices and, moreover, no two cops can have a common neighbor in the core.By construction, if any two cops both occupy a j-vertex for some j, then they have a common neighbor (namely s_4j+3); consequently, we must have one cop on a j-vertex for all j ∈{0, 1, …, k-1}.Finally, by symmetry, it suffices to show that if one cop occupies a vertex of C_0, then so must the other k-1 cops.Suppose otherwise, and choose ℓ so that the cop occupying an ℓ-vertex sits in C_0, while the cop occupying an (ℓ-1)-vertex does not.By construction, these two cops both defend s_4ℓ, so the cops cannot defend the entire core.The second half of the claim is clear. An important consequence of Lemma <ref> is that when the robber resides in the reset clique, the cops can force him to leave only by occupying all k cop starter vertices.When they do so, the robber must move to one of the robber starter vertices, lest he be captured on the cops' next turn.Thus motivated, we define an initial configuration to be a game state in which the cops occupy all k cop starter vertices, the robber occupies a robber starter vertex, and it is the cops' turn. Suppose that the cops occupy a stable position and it is the robber's turn.If the robber moves to any cop vertex, then the cops can capture him within the two subsequent rounds. Suppose the robber moves to a j-vertex within C_i.In response, the cop currently on a j-vertex moves to any j-vertex in C_i, while any other cop moves to a cop starter vertex.(Note that this is always possible because each vertex in G has at least one in-neighbor and one out-neighbor, so no matter which cop vertex a cop occupies, she always has at least one j-vertex neighbor in each of C_0, C_1, and C_2.)The cop on the cop starter vertex now defends all cop and robber vertices, while the cop on a j-vertex in C_i defends all robber starter vertices and the robber's neighbors in the reset clique.Thus, no matter how the robber moves, the cops can win on their next turn.Suppose that the game is in an initial configuration with the robber on r^*_i.Under optimal play by both players, either the robber loses within the next three rounds, or: (1) After one more round, the game reaches a canonical configuration with the cops in C_i and the robber in R_i+1, and(2) For the remainder of the game, the robber never moves to an undefended vertex in the reset clique.Moreover, if the game reaches a canonical configuration as in (1), then the robber may occupy whichever vertex of R_i+1 he chooses. We begin with claim (2).If the robber ever moves into the reset clique, then by Lemma <ref> and the ensuing discussion, the game must eventually return to an initial configuration.All initial configurations are equivalent up to symmetry, so the rounds leading up to this second initial configuration have served no purpose for the cops.Thus, under optimal play by the cops, the game never returns to an initial configuration, so the robber must never reach the reset clique. 
We next turn to claim (1).If the cops all remain on the cop starter vertices, then the robber can simply remain on r^*_i; this is clearly suboptimal for the cops.Otherwise, if the cops do not move to a stable position in C_i, then by Lemma <ref>, they leave some vertex v of S ∪ T_i undefended.Thus the robber can, on his ensuing turn, move to the reset clique; by claim (2), this cannot happen under optimal play.Suppose therefore that the cops move to a stable position within C_i.By Lemma <ref>, the robber cannot move to a cop vertex without being captured in short order.The cops defend all of the robber's neighbors in the reset clique, so the robber cannot move there, either; likewise, he cannot remain in place or move to a different robber starter vertex.The only remaining option is for the robber to move to R_i+1, resulting in a canonical configuration of the desired type; since r^*_i is adjacent to all of R_i+1, the robber may occupy any vertex of R_i+1 he chooses. Recall that one of our main goals in constructing H is to greatly restrict the freedom enjoyed by both players.Claim (2) of the lemma above shows that the cops cannot allow the robber to safely enter the reset clique; this is the primary means by which we restrict the movements of the cops.Conversely, the robber's movements are restricted by the threat of capture.As we next show, these restrictions are severe enough that (under optimal play) the game is nearly always in a canonical configuration. Suppose that, on a cop turn, the game is in a canonical configuration with the cops in C_ℓ and the robber in R_ℓ+1.Under optimal play, either the robber loses within the next three rounds, or the game proceeds to a canonical configuration with the cops in C_ℓ+1 and the robber in R_ℓ+2. If some cop currently defends the robber's vertex, then the cops win immediately; suppose otherwise.Since the robber occupies a vertex in R_ℓ+1, his current vertex is adjacent to all of S ∪ T_ℓ+1.Thus, by Lemma <ref>, unless the cops move to a stable position in C_ℓ+1, the robber can safely move to a vertex in the reset clique.By Lemma <ref>, this cannot happen under optimal play. We may thus suppose that the cops move to a stable position in C_ℓ+1.The cops now defend all of R_ℓ, all of R_ℓ+1, the robber starter vertices, and all of the robber's neighbors in the reset clique.Moreover, by Lemma <ref>, if the robber moves to a cop vertex, then he can be captured within the following two rounds.The robber's only remaining option is to move into R_ℓ+2, resulting in a canonical configuration of the desired form. We are finally ready to prove Theorem <ref>.Fix k ≥ 2, and let G be a protected (not necessarily reflexive) directed graph with (G) = k such that every vertex in G has at least one in-neighbor and one out-neighbor.If H is constructed from G as specified above, then (H) = k and (G) + 1 ≤(H) ≤(G) + 2.In addition, V(H) = (3k+3)V(G) + 8k + 3. 
It is clear from the construction of H that V(H) = (3k+3)V(G) + 8k + 3.To establish the rest of the claim, we show that the cops can win the game on H within (G)+2 rounds and that the robber can evade capture on H for at least (G) turns.As shown in Lemmas <ref> and <ref>, optimal play ensures that the game on H will typically be in a canonical configuration.We associate each canonical configuration in H with a configuration of the game in G in the natural way.Each of the k cops in H occupies some cop vertex κ(v;i,j); we view this cop as occupying the vertex v in G.Likewise, when the robber occupies some vertex ρ(w;i) in H, we imagine that he occupies vertex w in G.We begin by giving a cop strategy to capture the robber on H within (G) + 2 rounds.Throughout the game on H, the cops will imagine playing a game on G and use their strategy on G to guide their play on H.The cops begin by occupying all k cop starter vertices.To avoid immediate capture, the robber must then occupy one of the robber starter vertices, say (without loss of generality) r^*_0.The cops now turn to the game on G and choose their starting positions in that game.For convenience, index the cops from 0 to k-1.If cop j occupies vertex v in G, then she moves to vertex κ(v;0,j) in H.(This is always possible because each cop starter vertex is adjacent to all vertices in C_0.)Thus, in H, the cops move to a stable position within C_0 that corresponds to their choice of initial positions on G.As argued in Lemma <ref>, if the robber is to survive for more than three more rounds, then he must occupy some vertex ρ(w;1).The cops now imagine that, in the game on G, the robber has chosen to occupy vertex w.The game on H has now entered a canonical configuration.The cops imagine their next move (under optimal play) in G and mirror this move in H, while simultaneously moving to a stable position within C_1.In particular, if cop j moves from v to w in G, then she moves from κ(v;0,j) to κ(w;1,j) in H.(Note that since vw∈ E(G), vertices κ(v;0,j) and κ(w;1,j) are adjacent in H.)As argued in Lemma <ref>, either the robber moves to R_2 or he loses within the following two rounds.Suppose the latter.Since the game on G has not yet ended, it has lasted at most (G)-1 rounds.One round was played in the game on H before the game on G even began and as many as two more rounds might yet be played.In total, the game on H lasts at most (G)+2 rounds.Now suppose instead that the robber moves from his current position ρ(x;1) to some vertex ρ(y;2) in R_2.By construction, vertices ρ(x;1) and ρ(y;2) are adjacent in H only if xy∈ E(G).Thus, in the game on G, the cops may imagine that the robber has moved from x to y.The game on H remains in a canonical configuration and, moreover, this configuration corresponds to the configuration of the game on G.The game on H continues in this manner until either the robber fails to move to a canonical configuration as outlined in Lemma <ref> or until the cops capture the robber on G.In the former case, as argued above, the game on H lasts at most (G)+2 rounds.In the latter case, some cop j has followed an unprotected edge in G from her vertex v to the robber's vertex w (where v and w are not necessarily distinct).(Recall that in the game of Cops and Robbers with Protection, this is the only way that the game can end; in particular, unlike in ordinary Cops and Robbers, the game does not end if the robber moves to the cop's current vertex.)In this case, in H, cop j presently occupies κ(v;i,j) for some i, while the robber 
occupies ρ(w;i+1).Since vw is an unprotected edge in G, these two vertices are joined in H by an unprotected edge, so cop j may proceed to capture the robber in H.Since at most (G) rounds have elapsed in G and one additional round was played in H before the game in G even began, in total at most (G)+1 rounds have been played in H.To show that (H) ≥(G)+1, we use a similar argument, except that this time we give a strategy for the robber.We assume throughout that the cops play optimally on H.At the outset of the game on H, there are two possibilities: either the cops begin by occupying all k cop starter vertices, or they don't.In the latter case, by Lemma <ref>, some vertex of the reset clique remains undefended; the robber chooses to begin there.The robber can henceforth remain in the clique until the cops do occupy all k cop starter vertices, at which point the robber moves to r^*_0.If instead the cops begin by occupying the cop starter vertices, the robber simply begins on r^*_0; since this is clearly a more efficient line of play for the cops, we may suppose that this is what happens.Thus, the game begins in an initial configuration with the robber on r^*_0.By Lemma <ref>, the cops must always defend all of the robber's neighbors in the reset clique.They can do this only by moving to a stable position within C_0.As above, we can associate this stable position with an initial position for the cops in the game on G.The robber now considers the game on G and chooses his initial position in that game; say he decides to begin on vertex v.In the game on H, the robber moves to ρ(v;1).The game on H has now entered a canonical configuration that corresponds to the current configuration of the game on G.It is now the cops' turn.The cops all occupy vertices of the form κ(u;0,j), while the robber occupies some vertex ρ(v;1).By construction, these two vertices are adjacent along an unprotected edge if and only if uv is an unprotected edge in G.Thus, some cop can capture the robber on their ensuing turn on H if and only if she can do so on G.Otherwise, to prevent the robber from reaching the reset clique, the cops must move to a stable position in C_1.As before, this cop movement in H corresponds to a legal cop movement in G.The robber imagines that the cops have played thus on G, decides which vertex to move to in that game, and moves (in H) to the corresponding vertex in R_2.As above, the game continues in this manner, with each player's moves in H corresponding to legal moves in G.Eventually, the cops capture the robber in H.By construction, the cops cannot capture the robber on H until such time as they can also capture him on G.Since the robber plays optimally in G, this takes at least (G) rounds; since one additional round was played in H, we have (H) ≥(G)+1, as claimed.Armed with Theorem <ref>, we are ready to prove Theorem <ref>. For fixed k ≥ 2, the maximum capture time of an n-vertex graph with cop number k is Θ(n^k+1). 
It follows from Proposition <ref> that the capture time of an n-vertex graph with cop number k is O(n^k+1), so it suffices to establish a matching lower bound.In particular, we will show that there exist arbitrarily large graphs H with cop number k and capture time at least ( V(H)/40k^4 )^k+1.We first show how to construct a protected directed graph G with (G) = k and (G) ≥(n/2k)^k+1, where n = V(G).By Theorem <ref>, it then follows that there exists a protected reflexive undirected graph G' with (G') = k, (G') ≥(n/2k)^k+1, and V(G') = (3k+3)n+8k+3.Finally, Lemma <ref> implies the existence of a reflexive undirected graph H with (H) = k, (H) ≥(n/2k)^k+1, and V(H) < 4k^2 V(G') < 20k^3n (for sufficiently large n).Thus, (H) ≥( V(H)/40k^4 )^k+1, as claimed.Our goal in constructing G is to restrict the cops' actions so greatly that they have only one reasonable line of play – a line of play that happens to take a long time to resolve.This will greatly simplify the analysis of the game. The vertex set of G consists of five parts: S, the reset clique; C_0, C_1, …, C_k-1, the cop tracks; R, the robber track; X, the set of escape vertices; and one special vertex ω.The reset clique consists of the k vertices s_0, s_1, …, s_k-1.Each cop track C_i consists of the q_i vertices c_i,0, c_i,1, …, c_i,q_i-1, where the q_i will be specified later.Likewise, the robber track consists of the p-1 vertices r_0, …, r_p-2 (where p will be specified later) along with the k vertices r_p-1^0, r_p-1^1, …, r_p-1^k-1.The set X consists of the k escape vertices x_0, x_1, …, x_k-1.The vertices in the reset clique are reflexive and protected, ω is reflexive and unprotected, and all other vertices are irreflexive.Between each pair of vertices s,t in the reset clique, we add edges s⃗t⃗ and t⃗s⃗, both protected.For each i ∈{0, 1, …, k-1}, we add an unprotected edge from c_i,0 to s_i.On the cop tracks, we add unprotected edges from c_i,j to c_i,j+1 for all j (where j is taken modulo q_i where appropriate).On the robber track, we add unprotected edges from r_j to r_j+1 for j ∈{0, 1, …, p-3}, along with unprotected edges from r_p-2 to r_p-1^0, r_p-1^1, …, r_p-1^k-1 and from each of r_p-1^0, r_p-1^1, …, r_p-1^k-1 to r_0.We also add unprotected edges from every vertex on the robber track to every escape vertex, from every escape vertex to every vertex in the reset clique, and from every vertex in the reset clique to r_0.Finally, we add unprotected edges from every vertex in C_i to the escape vertex x_i, from each c_i,q_i-1 to r_p-1^i, and from ω to all vertices except those in the reset clique.(Refer to Figure <ref>.)Before proceeding, we make four observations.First: the cops can defend the reset clique only by occupying all of c_0,0, c_1,0, …, c_k-1,0.Second: a cop can enter ω only by beginning the game there; once he leaves, he can never return.Third: the cops defend all k escape vertices if and only if either some cop occupies ω, or each cop track contains a cop.When each cop track contains a cop, we say that the cops occupy a stable position.Fourth: from a stable position, if the cop on C_i ever leaves C_i, then he can never return; consequently, the cops can never again occupy all of c_0,0, c_1,0, …, c_k-1,0, so they can never again defend the reset clique.Thus if, on the robber's turn, the cops ever fail to occupy a stable position (and no cop occupies ω), then the robber can move to some undefended escape vertex and subsequently to the reset clique, whence he can never be captured.Thus the cops must occupy a stable 
position on every robber turn or else lose the game.At this point, we remark that because k cops are needed to defend the reset clique, we have (G) ≥ k.We explain later why (G) ≤ k. We are now ready to outline the robber's strategy for surviving “long enough” against k cops.At the beginning of the game, if the cops' initial placement leaves any vertex of the reset clique undefended, then the robber begins on some such vertex; he moves between undefended vertices in the clique until the cops occupy all of c_0,0, c_1,0,…, c_k-1,0, at which point he moves to r_0.Otherwise, the cops must have begun the game on precisely these vertices, so the robber begins on r_0.In either case, the game reaches a configuration in which the cops occupy c_0,0, c_1,0, …, c_k-1,0, the robber occupies r_0, and it is the cops' turn; we refer to this as the initial configuration of the game.In an initial configuration, the cops occupy a stable position.If, on any robber turn, the cops do not occupy a stable position, then (as explained above) the robber moves to an undefended escape vertex, moves from there to the reset clique, and forever after remains safely in the reset clique.Thus, so long as the robber remains on the robber track, the cops must always maintain a stable position (unless, of course, some cop can capture the robber with her next move).Conversely, so long as the cops always occupy a stable position, they defend all k escape vertices, so the robber cannot leave the robber track.In a stable position, we have one cop in C_i for each i – that is, we have one cop in each cop track.To maintain a stable position, each cop must remain within her track, and consequently must move “forward” on each turn.That is, the cop who begins on c_i,0 must first move to c_i,1, then to c_i,2, and so forth.Likewise, since the cops always occupy a stable position, the robber must move from r_0 to r_1, then to r_2, and so forth.Once cop i reaches c_i,q_i-1, there are two reasonable possibilities for her next move.If the robber occupies r_p-1^i at that time, then cop i may (and surely will) capture him.If instead the cops cannot capture the robber, then to maintain a stable position, cop i must return to c_i,0.Similarly, once the robber reaches r_p-2, he has some flexibility.Assuming the cops occupy a stable position, the robber cannot leave the robber track.Thus, if any vertex r_p-1^i is undefended, the robber moves there, and the game continues as before – with the robber moving next to r_0, then to r_1, and so forth.If instead all of the r_p-1^i are defended, then the robber moves to r_p-1^0, where he will be captured.This process can end only with the robber's capture.This occurs if, and only if, on some robber turn, cop i occupies vertex c_i,q_i-1 for all i, while the robber occupies r_p-2.We refer to this as a terminal configuration.Suppose that the game first reaches a terminal configuration in the Tth round after reaching the initial configuration.Since each cop must walk along her track over and over, never leaving and never pausing, we see that T must be congruent to -1 modulo q_i for all i.Likewise, T must be congruent to -1 modulo p.We now choose p, q_0, …, q_k-1.Fix an arbitrary positive integer r.Let p be the rth smallest prime number, q_0 the next-smallest, q_1 the next-smallest after that, and so forth.Since p and the q_i are all prime, T must be congruent to -1 modulo pq_0q_1… q_k-1, hence T ≥ pq_0q_1… q_k-1-1.At this point, we note that (G) ≤ k.If the cops all start on vertex ω, then the robber must start 
in the reset clique. The cops can now easily force the game into an initial configuration. Henceforth, if the cops continue to follow the cop tracks, then the robber can never leave the robber track, so the game reaches a terminal configuration after T rounds – at which point the cops win. Returning to the matter of (G), recall that the rth-smallest prime number lies between r(log r + loglog r - 1) and r(log r + loglog r). Thus, (G) ≥ T ≥ [r(log r + loglog r - 1)]^{k+1}. In addition, n = V(G) = S + ∑_{i=0}^{k-1} C_i + R + X + 1 ≤ k + k(r+k)(log(r+k) + loglog(r+k)) + r(log r + loglog r) + k + 1 ≤ 2kr(log r + loglog r - 1) for r sufficiently large relative to k. Thus, (G) ≥ [r(log r + loglog r - 1)]^{k+1} ≥ (n/2k)^{k+1}, as claimed. § DIRECTED GRAPHS While our primary goal in this paper was to construct undirected graphs with large capture time, the tools we have established enable us to say a few things about directed graphs. Most notably, Theorem <ref> shows that Cops and Robbers played on directed graphs is very closely connected to the game played on undirected graphs. This is significant because Cops and Robbers on directed graphs is not well understood; very little work has been done in the area. It is our hope that the techniques used in Theorem <ref>, if not the theorem itself, can be used to establish new results on Cops and Robbers in the directed setting. What work has been done on this topic – most notably in <cit.>, <cit.>, and <cit.> – has focused on strongly-connected directed graphs. Theorem <ref> implies that for k ≥ 2, there exist n-vertex strongly-connected directed graphs with cop number k and capture time Θ(n^{k+1}): simply construct an undirected graph with these properties and replace each undirected edge uv with the two edges u⃗v⃗ and v⃗u⃗. Moreover, the argument used to prove Proposition <ref> can be applied to directed graphs, giving an O(n^{k+1}) bound on the capture time. However, one thing is not immediately clear: how large can the capture time be for an n-vertex strongly-connected directed graph with cop number 1? One cannot simply apply the argument that Bonato et al. <cit.> used to show that an undirected graph with cop number 1 has capture time O(n); it does not extend to this more general setting. There is good reason for this: in fact there exist n-vertex strongly-connected directed graphs with cop number 1 and capture time Ω(n^2), as we next show. While Theorem <ref> requires k ≥ 2, this is only needed to satisfy the hypotheses of Theorem <ref>: when k=1, the main construction does in fact create a directed graph with cop number 1 and capture time Θ(n^2). (Note that this shows that the bound on k in Theorem <ref> is best possible: one cannot hope to adjust the construction so that it works when k=1.) It is not hard to adjust this construction so that the graph produced is also reflexive and strongly-connected. Our construction uses protected directed graphs; we remark that Lemma <ref> can be extended to the directed graph setting by making the natural adjustments to the proof (<cit.>, Lemma 3.1). [In particular: * take P to be a doubly-directed incidence graph of a projective plane; * add an edge from (i,p) to (j,q) if and only if p⃗q⃗ ∈ E(P) or i=j; * v⃗w⃗ ∈ E(G') when there is an unprotected edge from π(v) to π(w) in G or when v⃗w⃗ ∈ E(H) and either π(v) = π(w) or there is a protected edge from π(v) to π(w) in G.] The maximum capture time among n-vertex strongly-connected directed graphs with cop number 1 is Θ(n^2).
As noted above, the argument used to prove Proposition <ref> shows that every n-vertex directed graph with cop number 1 has capture time O(n^2).To establish the matching lower bound, we need only construct a strongly-connected protected directed graph G with cop number 1 and capture time Ω(n^2); we may then apply Lemma <ref> to obtain a corresponding protected undirected graph.We use a construction similar to that used in Theorem <ref> (with k=1), with only a few modifications.(Since k=1, to simplify the notation, we write c_i, r_p-1, q, and s in place of c_i,0, r_p-1^0, q_0, and s_0, respectively.)First, we add protected loops at every vertex of G.Next, we add a new vertex ψ, unprotected edges from every vertex on the cop vertex to ψ, an unprotected edge from ψ to ω, and an unprotected edge from ω to ψ.(These new edges allow the cop to return to ω, but he must take two steps to do so; this gives the robber time to escape back to the starter clique.)We also add an unprotected edge from the escape vertex s to ω.The graph is now strongly-connected.From any vertex on the cop track, one can reach ω by way of ψ; from a vertex on the robber track, one can reach ω by way of s; from any vertex in the starter clique, one can reach ω by first entering the robber track.From ω, one can proceed directly to any vertex except the single vertex in the starter clique, which we can reach from s.Thus for every vertices u and v in G there is a path from u to ω to v, so G is strongly-connected.Additionally, for each vertex c_i on the cop track, we add a second vertex c'_i with the same in-neighbors and out-neighbors as c_i.Likewise, for each vertex r_i, we add a twin vertex r'_i.Finally, for all i ∈{0, …, q-1} and all j ∈{0, … p-1} we add unprotected edges from c_i to r_j and from c'_i to r'_j.These edges will let the cop force the robber to move forward in his track, rather than following loops.Note that by construction of the c'_i and r'_i, there are also edges from c_q-1 to r'_p-1 and from c'_q-1 to r_p-1; as in Theorem <ref>, these edges will allow the cop to eventually capture the robber.It is clear that G is reflexive and strongly-connected.To see that G has cop number 1, we explain how one cop can capture the robber.The cop begins on ω.To avoid immediate capture, the robber must begin in the starter clique.The cop now moves to c_0, forcing the robber to leave the clique.The robber must move to r'_0 (note that moving to r_0 would result in capture).The cop next moves to c'_1.The robber cannot remain where he is, nor can he move to the escape vertex s or to r'_1; his only option is to move to r_1.In response the cop moves to c_2, and so on.As in the proof of Theorem <ref>, this process continues until the cop occupies c_q-1 (respectively, c'_q-1) while the robber occupies r_p-2 (resp. 
r'_p-2) on the robber's turn; the robber has no way to avoid capture on the cop's next turn.Finally, we give a robber strategy showing that G has capture time Ω(n^2).The robber begins in the starter clique and remains there until the cop moves to either c_0 or c'_0.The robber then moves to either r'_0 or r_0, respectively.If the cop moves to c_1 then the robber moves to r'_1; if the cop moves to c'_1 then the robber moves to r_1; if the cop remains in place, then so does the robber; if the cop moves anywhere else, then the robber moves to s and subsequently back to the starter clique.The robber plays similarly on all future turns.Under optimal play by the cop, the robber can never return to the starter clique, so we may suppose both players remain on their tracks.Moreover, under optimal play the cop clearly never remains in place, since this simply wastes a turn.Thus the cop and robber both keep moving forward along their tracks until the cop occupies c_q-1 (respectively, c'_q-1) while the robber occupies r_p-2 (resp. r'_p-2) on the robber's turn, after which the cop's win is ensured.As in Theorem <ref>, choosing p and q to be sufficiently large consecutive primes now yields the desired conclusion.§ COMPUTATIONAL COMPLEXITY We close the paper by mentioning an interesting corollary of Theorem <ref> in the area of computational complexity.Let C&R(G,k) denote the decision problem associated with determining whether k cops can capture a robber on the undirected graph G.Goldstein and Reingold <cit.> conjectured in 1995 that C&R is complete for the complexity class EXPTIME – the class of decision problems solvable in time O(2^p(n)) for some polynomial p (where, as usual, n denotes the size of the input).This conjecture was recently affirmed by the author using a rather involved argument (see <cit.>).As it turns out, Theorem <ref> yields a proof that is considerably shorter and, arguably, more elegant.In addition to C&R, we will need to refer to the following decision problems corresponding to variants of Cops and Robbers: * C&Rp(G,k), wherein G is a protected undirected graph;* C&Rpd(G,k), wherein G is a protected directed (not necessarily reflexive) graph in which each vertex has at least one in-neighbor and one out-neighbor;* C&Rdsc(G,k), wherein G is a strongly-connected directed reflexive graph.While Goldstein and Reingold could not resolve the complexity of C&R, they did show that C&Rdsc is EXPTIME-complete (<cit.>, Theorem 4).This result, in conjunction with Theorem <ref> and Mamino's result that C&Rp reduces to C&R, yields a short proof that C&R is EXPTIME-complete. C&R is EXPTIME-complete. C&R is easily seen to belong to EXPTIME, so it suffices to show that it is EXPTIME-hard.C&Rdsc trivially reduces to C&Rpd, since an unprotected directed graph can be viewed as a protected directed graph in which each edge happens to be unprotected.Theorem <ref> shows that C&Rpd reduces to C&Rp.Finally, C&Rp reduces to C&R (<cit.>, Lemma 3.1).Since C&Rdsc is EXPTIME-hard (<cit.>, Theorem 4), it follows that C&R is also EXPTIME-hard. 99AF84 M. Aigner, M. Fromme, A game of cops and robbers, Discrete Applied Mathematics 8 (1984), 1–12.BEUW17 S. Brandt, Y. Emek, J. Uitto, and R. Wattenhoffer, A tight lower bound for the capture time of the Cops and Robbers game, Proceedings of 44th International Colloquium on Automata, Languages, and Programming, 2017, to appear.[Available online at https://ie.technion.ac.il/ yemek/Publications/tlbctcrg.pdf.] BGHK09 A. Bonato, P. Golovach, G. Hahn, and J. 
Kratochvíl, The capture time of a graph, Discrete Mathematics 309 (2009), 5588–5595.BGKP13 A. Bonato, P. Gordinowicz, W.B. Kinnersley, and P. Prałat, The capture time of the hypercube, Electon. J. Combin. 20 (2013), Paper P24.BI93 A. Berarducci and B. Intrigila, On the cop number of a graph, Advances in Applied Mathematics 14 (1993), 389–403.BN11 A. Bonato and R.J. Nowakowski, The Game of Cops and Robbers on Graphs, American Mathematical Society, Providence, Rhode Island, 2011. BPPR17 A. Bonato, X. Pérez-Giménez, P. Prałat, and B. Reiniger, The game of Overprescribed Cops and Robbers played on graphs, Graphs and Combinatorics (2017). doi:10.1007/s00373-017-1815-2.FKL12 A. Frieze, M. Krivilevich, and P. Loh, Variations on cops and robbers, Journal of Graph Theory 69 (2012), no. 4, 383–402.FNUW15 K.-T. Förster, R. Nuridini, J. Uitto, and R. Wattenhoffer, Lower bounds for the capture time: Linear, quadratic, and beyond, Lecture Notes in Computer Science 9439 (2015), 342–356.Gav10 T. Gavenčiak, Cop-win graphs with maximum capture-time, Discrete Mathematics 310 (2010), 1557–1563.GR95 A.S. Goldstein and E.M. Reingold, The complexity of pursuit on a graph, Theoretical Computer Science 143 (1995), 93–112.Kin15 W. B. Kinnersley, Cops and Robbers is EXPTIME-complete, Journal of Combinatorial Theory Series B 111 (2015), 201–220.LO17 P. Loh and S. Oh, Cops and Robbers on Planar-Directed Graphs, Journal of Graph Theory 86 (2017), no. 3, 329–340.Mam13 M. Mamino, On the computational complexity of a game of cops and robbers, Theoretical Computer Science 477 (2013), 48–56.Meh11 A. Mehrabian, The capture time of grids, Discrete Mathematics 311 (2011), 102–105.NW83 R.J. Nowakowski and P. Winkler, Vertex-to-vertex pursuit in a graph, Discrete Mathematics 43 (1983), 235–239.Pis16 P. Pisantechakool, On the Capture Time of Cops and Robbers Game on a Planar Graph. In: Chan TH., Li M., Wang L. (eds) Combinatorial Optimization and Applications. COCOA 2016. Lecture Notes in Computer Science, 10043.Qui78 A. Quilliot, Jeux et pointes fixes sur les graphes, Thèse de 3ème cycle, Univerité de Paris VI, 1978, 131–145.Wes01 D.B. West, Introduction to Graph Theory, 2nd edition, Prentice Hall, 2001. | http://arxiv.org/abs/1706.08379v2 | {
"authors": [
"William B. Kinnersley"
],
"categories": [
"math.CO",
"cs.DM",
"05C57"
],
"primary_category": "math.CO",
"published": "20170626140253",
"title": "Bounds on the length of a game of Cops and Robbers"
} |
[Email address: ][email protected] Institut Laue-Langevin, 6 rue Jules Horowitz, 38042 Grenoble, France [Email address: ][email protected] Institute for Solid State Physics, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8581, Japan Institute for Solid State Physics, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8581, Japan Department of Applied Physics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan ISIS Neutron and Muon Source, Science and Technology Facilities Council, Didcot, OX11 0QX, United Kingdom Swiss Light Source, Paul Scherrer Institute, 5232 Viligen PSI, Switzerland Institute for Solid State Physics, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8581, Japan Institute for Solid State Physics, University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8581, Japan [Email address: ][email protected] ISIS Neutron and Muon Source, Science and Technology Facilities Council, Didcot, OX11 0QX, United Kingdom We present the results of a combined ^7Li NMR and diffraction study on LiGa_0.95In_0.05Cr_4O_8, a member of LiGa_1-xIn_xCr_4O_8 “breathing” pyrochlore family. Via specific heat and NMR measurements, we find that the complex sequence of first-order transitions observed for x=0 is replaced by a single, apparently second-order transition at T_f=11 K. Neutron and X-ray diffraction rule out both structural symmetry lowering and magnetic long-range order as the origin of this transition. Instead, reverse Monte Carlo fitting of the magnetic diffuse scattering indicates that the low temperature phase may be described as a collinear spin nematic state, characterized by a quadrupolar order parameter. This state also shows signs of short range order between collinear spin arrangements on tetrahedra, revealed by mapping the reverse Monte Carlo spin configurations onto a three-state color model describing the manifold of nematic states.Classical Spin Nematic Transition in LiGa_0.95In_0.05Cr_4O_8 G. J. Nilsen December 30, 2023 ============================================================ Spinel materials, AB_2X_4, host a variety of interesting magnetic phenomena, including spin-orbital liquid states (FeSc_2S_4) <cit.>, skyrmion lattices (GaV_4S_8) <cit.>, and magneto-structural transitions <cit.>. Many of these originate from the B site, which forms a frustrated pyrochlore lattice of corner-sharing tetrahedra. When the large spin degeneracy caused by the frustration is combined with strong magneto-elastic coupling, a typical feature of spinels, several possible magneto-structurally ordered and disordered states arise. For example, in the chromate spinel oxides, ACr_2O_4, collinear, coplanar, and helical magnetic structures (and their accompanying structural distortions) may all be realized by varying the cation on the A site <cit.>. Although the low temperature behaviour of the chromates is complex, it is surprisingly well captured by the bilinear-biquadratic model <cit.>, the Hamiltonian of which is ℋ=J∑_i,jS⃗_i·S⃗_j+b∑_i,j(S⃗_i·S⃗_j)^2+𝒫, where S⃗_i,j are classical Heisenberg spins and J is the nearest neighbour exchange. The second and third terms represent the magneto-elastic coupling, which assumes a biquadratic form if only local distortions are considered, and perturbative terms such as further neighbor or anisotropic couplings. 
These magneto-elastic and perturbative terms act on the degenerate manifold of states ∑_i∈tetS⃗_i=0 (where the sum is over tetrahedra) generated by the Heisenberg term, in turn selecting either collinear (b<0) or coplanar (b>0) configurations <cit.>, then breaking the remaining degeneracy and establishing magnetic order (𝒫≠ 0) <cit.>. Two recent additions to the chromate spinel family are the so-called “breathing” pyrochlore systems, AA^'Cr_4O_8, where A=Li^+ and A^'=Ga^3+,In^3+ <cit.>. Here, the alternation of the A and A^' cations on the A site leads to an alternation in tetrahedron sizes and, hence, magnetic exchange constants, J and J^'. This alternation is quantified by the “breathing” factor B_f=J^'/J, which is ∼ 0.6 for A^'=Ga^3+ and ∼ 0.1 for A^'=In^3+. The small B_f notwithstanding, the phenomenology of the “breathing” pyrochlores at low temperature is similar to that of their undistorted cousins. In both A^'=Ga^3+ (x=0) and A^'=In^3+ (x=1), a sequence of two transitions, as observed in lightly doped MgCr_2O_4 <cit.>, leads to structural and magnetic phase separation (as in ZnCr_2O_4 <cit.>), while the excitation spectra show gapped “molecular” modes in the ordered states <cit.>. For x=0, the upper first-order magneto-structural transition at T_u∼ 20 K results in phase separation into cubic paramagnetic and tetragonal collinear phases <cit.>. The cubic phase then undergoes another first-order transition into a second tetragonal phase at T_l=13.8 K, the structure of which has not yet been determined. Interestingly, this transition is preceded by a divergence in the ^7Li NMR 1/T_1, implying proximity to a tricritical point <cit.>. Studies of the solid solutions LiGa_1-xIn_xCr_4O_8 indicate that T_l is rapidly suppressed when x is increased/decreased from x=0/1. Starting from the x=0 composition, sharp peaks in the magnetic susceptibility and specific heat persist until x∼ 0.1, beyond which they are replaced by features characteristic of a spin glass <cit.>. This side of the phase diagram resembles those of both the undistorted chromate oxides <cit.> and Monte Carlo (MC) simulations of the bilinear-biquadratic model for b<0 including both disorder and further-neighbor interactions 𝒫=J_nnn∑_i,jS⃗_i·S⃗_j <cit.>. In this Letter, we show that the low-temperature behaviour of the x = 0.05 composition, apparently well inside the magnetically ordered regime of the phase diagram, differs drastically from that of the x=0 composition. Instead of two first-order transitions and phase separation, we observe a single second-order transition using ^7Li NMR and specific heat. Remarkably, this transition corresponds neither to magnetic long-range order nor to a structural transition. Rather, the magnetic diffuse neutron scattering implies that it shares features with both the nematic transition predicted for the bilinear-biquadratic model on the pyrochlore lattice and the partial ordering transition expected for the pyrochlore antiferromagnet with further-neighbor interactions <cit.>. Furthermore, the transition is shown to coincide with spin freezing, drawing parallels to other frustrated materials, like Y_2Mo_2O_7 <cit.>. The powder samples of LiGa_0.95In_0.05Cr_4O_8 were prepared via the solid-state reaction method described in Ref. <cit.>, using ^7Li-enriched starting materials to reduce neutron absorption. The specific heat was measured by the relaxation method in a Physical Property Measurement System (Quantum Design). 
The ^7Li-NMR measurements were carried out in a magnetic field of 2 T, and NMR spectra were obtained by Fourier transforming the spin-echo signal. At low temperature, the spectra were constructed by summing the Fourier-transformed spin-echo signals measured at equally spaced frequencies. The nuclear spin-lattice relaxation rate 1/T_1 was determined by the inversion-recovery method at the spectral peak <cit.>. The samples were further characterised by powder synchrotron X-ray diffraction (SXRD) on the MS-X04SA beamline at the Swiss Light Source, and powder neutron time-of-flight diffraction (ND) on the WISH instrument at the ISIS facility. For the former, several patterns were measured in the temperature range 6-20 K using a photon energy of 22 keV (λ=0.564 Å). The ND was measured at several temperatures between 1.5 K and 30 K. Structural refinements were carried out on data from all 10 detector banks on WISH as well as the SXRD data [Fig. <ref>(a)] using the FULLPROF software <cit.>. The magnetic diffuse scattering was isolated from the remainder by removing the nuclear Bragg features, then subtracting a background to yield zero scattering below Q=0.4 Å^-1 <cit.>. The validity of the subtraction was verified by comparing with polarized neutron diffraction data on the x=1 composition at the same temperature. The reverse Monte Carlo (RMC) analysis of the magnetic diffuse scattering was performed using the SPINVERT package <cit.>. The temperature dependence of the ^7Li-NMR spectrum is shown in Fig. <ref>(a). In the paramagnetic state, it consists of a sharp single line without quadrupole structure, similar to the A^'=Ga^3+ compound. Below 11 K, however, the spectrum shows a marked broadening, which indicates the development of a static internal field at the ^7Li site as a result of spin freezing. The spectra in the low-temperature phase consist of two components: a relatively narrow line whose width saturates below 9 K, and a broader one which broadens further with decreasing temperature. The temperature dependences of the two line-widths, extracted by a double Gaussian fit, are shown in the inset of Fig. <ref>(a). The intensity ratio of the sharp and broad components of the spectra is estimated to be 1:4.7 at 4.2 K. Figure <ref>(b) shows the temperature dependence of 1/T_1 for both the x=0.05 (red circles) and x=0 (blue triangles) compositions <cit.>. In the case of the former, 1/T_1 exhibits a sharp peak at T_f=11.08(5) K, indicating critical slowing down associated with a bulk second-order magnetic transition. This transition is also evidenced by a sharp anomaly at 11.29(3) K in the specific heat, also indicative of a second-order transition [Fig. <ref>(b), inset]. These behaviors are in contrast with the x=0 compound, which shows two first-order magnetic transitions and phase separation [Fig. <ref>(b)]. Despite the clear hallmarks of a second-order transition in both specific heat and ^7Li NMR, both SXRD and ND surprisingly indicate an absence of structural symmetry breaking below T_f, unlike all other chromate spinels with well-defined phase transitions [Fig. <ref>(a)]. Should the transition correspond to a purely magnetic long-range ordering, the only antiferromagnetic structure compatible with cubic structural symmetry is the so-called all-in all-out structure, where the spins lie along the local ⟨ 111 ⟩ axes. On the other hand, this structure would imply zero internal field at the ^7Li site, in contradiction with the large width of the NMR line. 
Neither do any sharp features consistent with all-in all-out order appear in ND. While no peak splittings are observed on crossing T_f, some broadening of Bragg peaks with indices (h00) and (hk0) is observed on cooling towards T_f. This implies that the local symmetry is tetragonal, as expected from magneto-elastic coupling within the bilinear-biquadratic model <cit.>. To identify the changes in the structure on crossing T_f, we plot the temperature dependence of the S_400 and S_220 strain broadening parameters <cit.> corresponding to these families of peaks, as well as the fractional Cr^3+ position parameter and isotropic displacement parameter, in Fig. <ref>(b,c). All parameters are found to evolve continuously before freezing at T_f, with the latter doing so in a step-like fashion. This emphasizes the strong magneto-elastic coupling in the system. Furthermore, the apparent freezing of all parameters below T_f is consistent with the spin freezing observed in NMR. Turning to the magnetic diffuse scattering, the data at 30 K, well above T_f, show a broad, diffuse feature with a maximum centred around Q=1.55 Å^-1 [Fig. <ref>(a)]. This is compatible with expectations for the undistorted pyrochlore lattice, and implies the presence of Coulomb-phase-like power-law spin-spin correlations <cit.>. Cooling to 15 K, a weak but sharp peak is observed at the (110) position, as for x=0. Because this peak appears above T_f and is temperature independent on further cooling, it is ascribed to the presence of a small amount of x=0 phase in the sample. The only intrinsic changes in the magnetic scattering on crossing the transition are therefore a slight redistribution of the diffuse scattering towards the positions Q = 0.8 Å^-1 [near (100)], 1.1 Å^-1 (110), 1.73 Å^-1 (210), and 1.87 Å^-1 (112) [Fig. <ref>(a)]. To understand the apparent paradox of the presence of a phase transition, on the one hand, and the absence of any peak splittings or (intrinsic) magnetic peaks, on the other, we investigate the changes in the real-space spin-spin correlations by performing RMC fits of the magnetic diffuse scattering <cit.>. At 30 K, the extracted normalized real-space spin-spin correlation function ⟨ S_0 · S_i ⟩/S(S+1) indicates antiferromagnetic nearest-neighbour correlations, with ⟨ S_0 · S_1 ⟩/S(S+1) = -0.2 [Fig. <ref>(b)]. This is consistent with conventional MC simulations of both the undistorted and “breathing” pyrochlore lattices at finite temperature, where the ground states are Coulomb liquids <cit.>. However, the correlations do not quite follow the expected dipolar form: for example, ⟨ S_0 · S_3 ⟩/S(S+1), which corresponds to the next-nearest-neighbour distance along ⟨ 110 ⟩, is negative. Reconstructing the single-crystal scattering from the RMC spin configurations in the (hhl) [Fig. <ref>(c)] and (hk0) planes <cit.> provides some clues as to why this is: intensity is observed at positions consistent with the propagation vector 𝐤 = (001), which manifests in our experiments as scattering around the Bragg positions listed previously. The scattering maps also allow us to identify “bow-tie” features characteristic of the underlying Coulomb liquid state <cit.> [Fig. <ref>(c,d)]. The overall picture of 𝐤 = (001) short-range order superimposed on “bow-tie” features remains unchanged down to 1.5 K, although the former grows slightly in intensity below T_f. This is also true of ⟨ S_0 · S_i ⟩/S(S+1), which is nearly indistinguishable between the high- and low-temperature datasets [Fig. 
<ref>(b)], emphasising that the order parameter of the transition cannot be dipolar. Classical MC simulations for the bilinear-biquadratic model with a Gaussian bond disorder Δ (here related to the substitution x) indicate a quadrupolar nematic transition to a collinear state with persistent Coulomb-like correlations for b<0, 𝒫=0, and small Δ <cit.>. As Δ (x) is increased, this nematic transition becomes concurrent with spin freezing. Because of (i) the lack of magnetic Bragg peaks at T<T_f, despite a clear phase transition; (ii) the tendency towards collinear spin arrangements in x=0,1, implying b<0 for all x; (iii) the local symmetry lowering, consistent with such spin arrangements; and (iv) the spin freezing observed in NMR, we speculatively assign the phase transition to concurrent nematic order and freezing. To test this assignment, we perform further RMC simulations on the 1.5 K data with the spins constrained to lie along any of three high-symmetry directions in the cell [(001), (110), and (111)] – i.e. collinear spins <cit.>. Several directions are modelled due to the cubic symmetry of the system; while all should reproduce the experimental scattering, the corresponding spin configurations generally differ. With collinear spins, the data is modelled nearly as well as in the Heisenberg case (χ^2_Ising/χ^2_Heis∼ 1.05) for all spin directions. To verify that this assumption is consistent with our NMR results, we simulated the NMR spectrum corresponding to the collinear RMC spin configurations <cit.>. The main component of the simulated spectrum, ascribed to the broad part of the magnetic diffuse scattering (and hence the “bow-tie” features), is triangular, and reproduces the shape of the broad component in the experimental NMR spectrum at 4.2 K [Fig. <ref>(a)]. On the other hand, the sharp component, corresponding to the 𝐤=(001) short-range order, is underestimated by our RMC simulations. This discrepancy is likely due to the difficulty of fitting 𝐤=(001) magnetic diffuse scattering near nuclear positions in the experimental data, the different temperatures of the NMR (4.2 K) and neutron (1.5 K) measurements, and the inhomogeneous decay of the NMR intensity due to slow fluctuations <cit.>. Scaling the width of the triangular component to the NMR data, the moment size on the Cr^3+ site is estimated to be gS=1.3 μ_B at 4.2 K. Although this quantity is model-dependent, the reduced moment could indicate the presence of fast quantum or thermal fluctuations in the ground state. The RMC spin configurations may be further analyzed by examining the collinear spin arrangements on individual tetrahedra: for all directions of the anisotropic axis, configurations with two spins up and two down (uudd) are favored over other configurations. The local constraint ∑_i∈tetS⃗_i=0 obeyed by these configurations is responsible for the “bow-tie” scattering in the reconstructed single-crystal patterns. The ratio of uudd to three-up one-down (uuud, or vice versa, uddd) configurations is typically in excess of 8 for the (001) axis, versus 2-4 for the other directions. Assuming that all interactions in the system are antiferromagnetic, this solution is the most energetically favorable, and we will therefore focus on it in the subsequent analysis. Following Tchernyshyov et al. <cit.>, each uudd tetrahedron may be given a color according to the arrangement of the two ferro- and four antiferromagnetic bonds on each tetrahedron, and correspondingly, the direction of the local tetragonal distortion. 
Ferromagnetic (long) bonds along ⟨ 110 ⟩ correspond to blue (B), ⟨ 101 ⟩ to green (G), and ⟨ 011 ⟩ to red (R). The refined spin structures exhibit a majority of B tetrahedra. In the case of the x=0 and x=1 compounds, the magnetic orders respectively correspond to RG and BG color arrangements on the diamond lattice formed by the tetrahedra, with the spins directed along the c axis of the tetragonal cell. The observed growth of the 𝐤 = (001) scattering may thus be associated with domains containing combinations of BR and BG. Interestingly, this means that the ground state of the present material lies closer to the x=1 ordered structure, despite its chemical proximity to the x=0 compound. The BR/BG short-range order is furthermore consistent with the negative ⟨ S_0 · S_3 ⟩/S(S+1) found in the Heisenberg fits, as well as the broadening (but lack of splitting) of the structural diffraction peaks. To quantify the spatial extent of the opposite-color correlations, we compute the color correlation function ⟨ C_0 C_i ⟩, defined such that ⟨ C_0 C_i ⟩=1 for the same color and ⟨ C_0 C_i ⟩=-1 for different colors between tetrahedra, and weighted to account for the unequal color populations. We thus find ⟨ C_0 C_1 ⟩=-0.14 and opposite-color short-range order, with an exponential radial decay characterized by the correlation length ξ_c =2 Å. While this is short, two-color (BG and BR) domains as large as 20 Å can be identified by visual inspection of the color configurations [Fig. <ref>(b)]. The scattering around (001) (0.8 Å^-1) can be understood as resulting from disorder between the minority colors (R and G) in the two-color pattern – in the case of perfect color order, it would vanish. In this sense, the low-temperature state shows some commonalities with the partially ordered phase predicted from MC simulations of the pyrochlore lattice with further-neighbor couplings in <cit.> (this state is also found in the three-state Potts model <cit.>). The partially ordered phase has a single color on all up-pointing tetrahedra, and a distribution of other colors on down-pointing tetrahedra. There are, however, some uncertainties concerning the above interpretation. Most importantly, the existing MC simulations only consider the undistorted pyrochlore lattice, and it is not known how the phase diagram changes when the “breathing” distortion is introduced. Indeed, in the undistorted case, the transition is expected to be weakly first-order for small x, only becoming second-order deep inside the spin glass regime, where the nematic transition coincides with spin freezing. Both of these statements contradict our observation of a second-order transition and spin freezing at small x, as well as the appearance of a conventional spin glass phase at larger x <cit.>. On the other hand, the theoretically predicted spin glass state at large x is expected to be exceptionally robust towards magnetic field, like the present low-temperature phase <cit.> – similar behaviour is observed below T_f in pure Y_2Mo_2O_7, where no clear nematic transition is observed. Finally, it is not evident why the nematic transition occurs at all, given the presence of the further-neighbor couplings which presumably play a role in the ordering of the x=0 and x=1 compounds. These points will hopefully be clarified by future MC simulations and experimental work. To conclude, we have shown that LiGa_0.95In_0.05Cr_4O_8 undergoes a single, apparently second-order, transition at T_f = 11 K. 
This transition corresponds neither to magnetic long-range order nor to a structural symmetry breaking, but is rather ascribed to nematic (collinear) spin freezing. Upon cooling, correlations corresponding to the propagation vector 𝐤 = (001) are enhanced. Assuming that the spins lie along (001), these correspond to short-range order between collinear spin configurations on the tetrahedra. The transition thus shares features with both the nematic and partial ordering transitions anticipated for the pyrochlore Heisenberg model with perturbations. We gratefully acknowledge T. Fennell for his careful reading of and useful comments on the manuscript. We also thank N. Shannon and O. Benton for sharing their SCGA data, and S. Hayashida and J. A. M. Paddison for useful discussions. This work was supported by JSPS KAKENHI (Grant Nos. 25287083 and 16J01077). Y. T. was supported by the JSPS through the Program for Leading Graduate Schools (MERIT). § SUPPLEMENTAL MATERIAL §.§ Nuclear spin-lattice relaxation rate 1/T_1 The nuclear spin-lattice relaxation rate 1/T_1 was determined by fitting the recovery curves of the spin-echo intensity at the spectral peak after an inversion pulse, as a function of time t, to the stretched-exponential function: I(t)=I_eq-I_0 exp[-(t/T_1)^β], where I_eq is the intensity at thermal equilibrium and β is a stretch exponent that provides a measure of the inhomogeneous distribution of 1/T_1. The case of homogeneous relaxation corresponds to β = 1. Figure <ref> shows the temperature dependence of β, which decreases from 1 on cooling below 30 K and reaches a minimum around T_f. A stretched-exponential relaxation with β<1 is observed when a distribution of 1/T_1 arises due to inhomogeneity. §.§ NMR spectra and spin-echo decay time T_2 In order to track the temperature dependence of the signal intensity and spin-echo decay time T_2, we performed NMR spectrum measurements with different pulse separation times τ in the spin-echo pulse sequence π/2-τ-π. Figure <ref>(a) shows the τ-dependence of the NMR spectra at the base temperature. Although both the sharp and broad components of the spectra have fast T_2, and consequently show a reduction of the intensity for longer τ, the spectral shapes are almost identical within the range of τ probed. The spectra shown in Fig. 1(a) of the main paper were measured with τ=25 µs. The integrated intensities of the spectra I (multiplied by T to cancel out the Curie law for the nuclear moments) are plotted against 2τ in Fig. <ref>(b); the I corresponding to the sharp and broad components at base temperature were extracted by a double Gaussian fit, and are plotted separately. The fitting function for each dataset is I(t)T=I_0Texp(-t/T_2), where T_2 is the spin-echo decay time. The T_2 thus extracted are 425(1) (30 K>T_f), 54(3) (11 K∼ T_f), 45(1) (4.2 K, sharp), and 30(2) µs (4.2 K, broad). T_2 drops sharply at T_f=11.1 K and does not recover even at base temperature, as is the case for other frustrated systems with slow low-temperature dynamics. In order to quantify the loss of NMR intensity below T_f, all data were normalized by the extrapolated I(0)T at 30 K [Fig. <ref>(b)]. The resulting I(0)T are 0.66(2) at 11.1 K, and 0.16(0) and 0.75(4) for the sharp and broad components at 4.2 K, respectively. This indicates that 91% of the intensity is conserved at the base temperature. Finally, we note that the ratio of the intensities corresponding to the two components is estimated to be 1:4.7 at the base temperature. 
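Both fits used in this section, the stretched-exponential recovery that defines 1/T_1 and β and the exponential spin-echo decay that defines T_2, amount to a few lines of SciPy. The sketch below (our own illustration, not the authors' analysis code) shows the former; the latter follows by fixing β=1 and dropping the equilibrium offset. The array names and the starting values in p0 are placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    def recovery(t, I_eq, I_0, T1, beta):
        """Stretched-exponential recovery I(t) = I_eq - I_0*exp[-(t/T1)^beta]."""
        return I_eq - I_0 * np.exp(-(t / T1) ** beta)

    # t: delays after the inversion pulse [s]; I: spin-echo intensities.
    # popt, _ = curve_fit(recovery, t, I, p0=[1.0, 2.0, 1e-3, 1.0])
    # popt[2] is T_1 and popt[3] is the stretch exponent beta.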
§.§ Neutron and X-ray diffraction The sample used in the synchrotron X-ray powder diffraction experiment was enclosed in a silica capillary of 3 mm diameter, and loaded into a ^4He flow cryostat with a base temperature of 5 K. Neutron powder diffraction patterns were measured on 5.8 g of powder loaded in an 8 mm diameter vanadium can. The measured diffraction patterns were analysed by means of Rietveld refinement. Part of one refined neutron diffraction pattern is presented in Fig. <ref>, and parameters showing the goodness of fit for all detector banks and temperatures are presented in Tab. <ref>. Because the peak profile of WISH is difficult to describe with the commonly used back-to-back exponential function, and because the high-Q parts of the patterns suffer from significant peak overlap, we present the weighted R parameters for the Rietveld refinement and the LeBail profile matching (representing the best possible fit for the given set of profile parameters) along with the expected R value based on the data set's statistics. The strain (S_hkl) parameters, whose temperature dependence is presented in the main body of this work, were extracted from the SXRD data, due to the superior resolution of these measurements. On the other hand, B_iso and the fractional x coordinate of chromium were derived from refinements of the neutron powder diffraction data. The refined structural parameters are shown in Table <ref>, where position values are retrieved from the SXRD patterns and B_iso parameters come from the ND data. §.§ Magnetic diffuse scattering and reverse Monte Carlo refinement To extract the diffuse magnetic scattering from the neutron powder diffraction data, a flat background C (C≃ 0.3) was subtracted from the data such that the mean intensity at Q<0.4 Å^-1 was zero. This choice was justified by the fact that the high-temperature diffuse scattering in the first Brillouin zone is zero (indeed, polarized neutron scattering on the x=1 compound, where the high-temperature scattering is similar, also suggests this is the case <cit.>), and that the WISH instrumental background is flat when using a vanadium can and a radially oscillating collimator. The nuclear Bragg peaks were removed from the ranges where the build-up of diffuse scattering was observed. RMC refinements were performed on boxes of spins containing 6× 6× 6 unit cells at low temperature and 3× 3× 3 unit cells at 30 K, and the simulation was repeated 10 times for every temperature point to ensure a stable solution. The optimal value of the Monte Carlo weight parameter <cit.> for all temperature datasets was determined to be W=0.6 for the low-T Ising-type collinear spin models and W=1 for the high-T Heisenberg-type spin models. The analysis of the RMC-refined spin configurations consisted of several steps. Firstly, for each simulation box, the normalized real-space spin-spin correlation function was evaluated: ⟨ S_0 · S_i⟩/S(S+1) = ⟨ S(0) · S(r)⟩/S(S+1) = [1/(N Z(r) S(S+1))] ∑_j=1^N ∑_k=1^Z(r) 𝐒_j(0)·𝐒_k(r), where 𝐒_i is the vector of the i-th spin in the simulation box, N is the total number of sites inside the box, and Z(r) is the number of spins in the coordination shell at distance r. This was done using the SPINCORREL application in the SPINVERT suite <cit.>. Despite the noticeable differences between the forms of the diffuse scattering at high and low temperatures, it is difficult to distinguish the corresponding real-space spin-spin correlations. 
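As a rough illustration of this first step (a sketch of ours, not SPINCORREL itself), the following computes the shell-averaged correlation function from a configuration box; it assumes the spins are stored as Cartesian vectors of length √(S(S+1)) (S = 3/2 for Cr^3+), and the function and argument names are our own.

    import numpy as np

    def spin_correlation(spins, pos, S, r_edges):
        """Shell-averaged <S_0.S_i>/S(S+1) from an RMC configuration box.

        spins  : (N, 3) spin vectors of length sqrt(S*(S+1))
        pos    : (N, 3) site positions
        r_edges: radial shell edges defining the coordination shells
        """
        d = np.linalg.norm(pos[None, :, :] - pos[:, None, :], axis=-1)
        dots = spins @ spins.T                  # all pairwise S_j . S_k
        corr = []
        for lo, hi in zip(r_edges[:-1], r_edges[1:]):
            shell = (d > lo) & (d <= hi)
            corr.append(dots[shell].mean() / (S * (S + 1))
                        if shell.any() else np.nan)
        return np.array(corr)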
This difficulty is most likely due to the dominant contribution of the broad Coulomb liquid-like component to both patterns. Following evaluation of ⟨ S_0 · S_i⟩, the single-crystal patterns were reconstructed in the (hk0) and (hhl) planes using SPINDIFF, also part of the SPINVERT suite. This allowed for a more diagnostic comparison with theoretical calculations, and permitted identification of the Coulomb-like and short-range (𝐤=(001)) ordered components of the scattering. The third step, the calculation of color populations and correlations, applied only to the low-temperature collinear spin configurations. In the case of a single tetrahedron, the bilinear-biquadratic model yields six possible magneto-structural ground state configurations, three of which are independent with respect to a global rotation of spins. These may be given a color c ∈ {R,G,B} according to the direction of the ferromagnetic bond (which corresponds to the long bond in the tetragonally distorted tetrahedron), as explained in the main text and Ref. <cit.>. In the pure bilinear-biquadratic model on the pyrochlore lattice, the colors in the low-temperature nematic state are uncorrelated, whereas in the bilinear-biquadratic model with long-range couplings, they are fully ordered. By calculating the correlations between the colors, we may thus effectively separate the correlations due to the nematic and short-range ordered components identified in the second step. Since the diamond lattice formed by the tetrahedra is bipartite, a color correlation function (order parameter) which distinguishes between same-color (e.g. all R) and different-color (e.g. RG) orders is considered sufficient: ⟨ C_0C_i⟩=⟨ C(0)C(r)⟩=[N_s^b(r)-p̅_sdN_d^b(r)]/Z_tot^b, where N_s^b(r) and N_d^b(r) are the numbers of bonds of the same and different color at distance r, respectively, Z_tot^b is the total number of bonds for that distance, and p̅_sd=[∑_c=c'(N_c^t)^2/N_tot^2]/[∑_c≠ c'N_c^tN_c'^t/N_tot^2] is the mean ratio of the probability of finding a same-color pair of tetrahedra to that of finding a different-color pair on a particular bond. The final term is required as the populations of the colors in the simulation box are generally not equal. This correlation function gives ⟨ C_0C_i⟩=1 for all shells for a same-color order, and an alternation between ±1 for a different-color order. §.§ Spectrum simulation from RMC spin map The ^7Li NMR spectrum of the low-temperature phase was simulated from the spin configurations obtained by RMC. The shape of the powder-averaged NMR spectra of magnetic substances is mainly determined by the magnitude of the internal field at the nuclear positions. To obtain an internal field distribution at each ^7Li site from a given spin map, we summed up the classical dipole field from Cr^3+ spins within a 100 Å radius and the transferred hyperfine field from the nearest-neighbor Cr^3+ sites. The transferred hyperfine coupling constant is 0.10 T/μ_B from the 12 nearest-neighbor Cr^3+ spins, as estimated from a K-χ plot above 100 K. Figure <ref> shows a histogram of the magnitude of the internal field B_int calculated from the spin map with ordered moments of 1 μ_B per Cr^3+ ion. When the external field B_ext is sufficiently larger than B_int, a rigid AF spin arrangement producing a single value of B_int yields a rectangular NMR spectrum bounded by |B_int-B_ext| ≤ B ≤ B_int+B_ext for powder samples <cit.>. We obtained our simulated NMR spectrum by piling up rectangular distributions whose half-widths were B_int (the horizontal axis of Fig. <ref>). 
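The rectangle-piling construction can be written down directly; the sketch below (ours, with illustrative names) builds the powder spectrum from the B_int histogram, including the area normalization described immediately below.

    import numpy as np

    def simulate_powder_spectrum(weights, B_int, b_axis):
        """Pile up rectangular powder patterns, one per internal-field value.

        weights: histogram counts of |B_int| over the RMC spin map
        B_int  : corresponding internal-field bin centres [T]
        b_axis : field axis relative to B_ext on which to build the spectrum
        Each field value contributes a rectangle on |b| <= B_int whose area
        is proportional to its histogram weight, i.e. height = w/(2*B_int).
        """
        spec = np.zeros_like(b_axis)
        for w, bi in zip(weights, B_int):
            if bi > 0:
                spec += w / (2.0 * bi) * (np.abs(b_axis) <= bi)
        return spec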
In this construction, the height of each rectangle was normalized so that its area was proportional to the vertical value of each point in Fig. <ref>. The experimental NMR spectrum at 4.2 K and a simulation from an RMC spin configuration at 1.5 K are shown in Fig. <ref>; the vertical and horizontal scales of the simulated spectrum are both adjusted to match the broad component of the experimental spectrum. While the simulated spectrum appears triangular, the experimental one clearly consists of two Gaussian components with different linewidths. This discrepancy could be due to a variety of factors, including the difficulty of fitting the 𝐤=(001) scattering near nuclear positions, the different temperatures of the NMR (4.2 K) and neutron (1.5 K) measurements, and the inhomogeneous decay of the NMR intensity due to slow fluctuations. Based on the horizontal-axis scaling factor mentioned above, the moment size of the Cr^3+ spins in the broad component may be estimated to be 1.3 μ_B at 4.2 K. The moment size in the sharper component is more difficult to estimate. However, because the perfect two-color orders produce either no field distribution or a relatively narrow rectangular spectrum (the FWHM is 0.1 T at most for a full moment gS=3 μ_B), depending on the color configuration, this contribution can readily be associated with two-color order domains. §.§ Magnetic susceptibility The zero-field-cooled (ZFC) and field-cooled (FC) magnetic susceptibility curves for x=0.05 (Fig. <ref>) show a clear splitting, indicating spin freezing below T_f; hence, the nematic spin-freezing scenario appears likelier than a pure nematic transition. Unlike in a regular spin glass, the splitting persists up to 5 T. This robustness towards applied magnetic field is also observed in materials like SrCr_9pGa_12-9pO_19 and Y_2Mo_2O_7 <cit.>, where the ground states are characterised by flat energy landscapes with shallow minima. Such energy landscapes, and as a result such behaviors of the ZFC and FC susceptibility, have also been predicted for the pyrochlore lattice with bilinear-biquadratic interactions and bond disorder <cit.>. | http://arxiv.org/abs/1706.08409v1 | {
"authors": [
"R. Wawrzyńczak",
"Y. Tanaka",
"M. Yoshida",
"Y. Okamoto",
"P. Manuel",
"N. Casati",
"Z. Hiroi",
"M. Takigawa",
"G. J. Nilsen"
],
"categories": [
"cond-mat.str-el"
],
"primary_category": "cond-mat.str-el",
"published": "20170626143808",
"title": "Classical Spin Nematic Transition in LiGa$_{0.95}$In$_{0.05}$Cr$_4$O$_8$"
} |
Cross-Country Skiing Gears Classification using Deep Learning Aliaa Rassem, Mohammed El-Beltagy and Mohamed Saleh ============================================================= Human Activity Recognition has witnessed significant progress in the last decade. Although a great deal of work in this field goes into recognizing normal human activities, few studies have focused on identifying motion in sports. Recognizing human movements in different sports has a high impact on understanding the different styles of players and on improving their performance. As deep learning models have proved to achieve good results in many classification problems, this paper will utilize deep learning to classify cross-country skiing movements, known as gears, collected using a 3D accelerometer. It will also provide a comparison between different deep learning models, such as convolutional and recurrent neural networks, and a standard multi-layer perceptron. Results show that deep learning is more effective and has the highest classification accuracy. § 1. INTRODUCTION Human Activity Recognition (HAR) is one of the active research areas which retrieve the actions of humans for the sake of an automatic understanding of their behaviours. There has been significant progress in the field of HAR due to its potential in many domains. For example, recognizing activities can be useful for smart home systems, pervasive and mobile computing, surveillance-based security, physical therapy, rehabilitation, context-aware computing, robotics, and sports improvement <cit.>. HAR approaches can be divided mainly into vision-based and sensor-based. Although many studies investigated the vision-based approach, where input signals are acquired by video and depth cameras, this approach suffers from some limitations. For example, it requires computationally expensive image processing algorithms for classifying the video sequences or the visual data. On the other side, sensors are cost effective, work in unrestricted environments and raise fewer privacy concerns compared to video cameras. This has led recent studies in this field to focus on the sensor-based approach, where movements are acquired by inertial sensors such as accelerometers or gyroscopes, yielding a multivariate time series classification problem. Activity recognition applications can be utilized for classifying either daily normal activities or sports motion. The recognition of sports motion has not received as much attention in research as normal human activities (like walking, sitting and driving cars), despite its high impact on understanding the different styles of athletes and on improving their performance. It also offers feedback on performance for amateur and professional athletes and helps in predicting next moves during the sport. In this paper, we focus on classifying human motion in cross-country skiing (XCS). XCS has different skiing techniques, primarily divided into classical and skating techniques. Each technique has different motion patterns known as gears. The challenge in XCS classification, unlike in walking and running activities, is the varying intensity of the same skiing gear, which can be fast or slow. 
In this case, frequently used features like intensity and frequency are not as significant in XCS as they were for normal activities classification <cit.>. In most HAR studies, features are selected by hand, but designing hand-crafted features requires domain knowledge and may cause information loss during feature selection. In recent years, deep learning (DL) has delivered a solution to the feature selection problem and has become the new trend in machine learning. Deep networks such as convolutional and recurrent neural networks have been successfully applied in diverse applications such as speech recognition, natural language processing, audio processing, computer vision and robotics. The main reasons behind the success of deep networks are the stacked layers in their architecture, unlike traditional networks which include three layers at most. Also, these networks apply complex non-linear functions through their layers to automatically learn hierarchical representations of features without any domain knowledge. Despite the remarkable results that deep learning has achieved in many applications, it has not been fully exploited in HAR <cit.>, <cit.> and <cit.>. This paper applies deep learning approaches, for the first time, to human motion classification in cross-country skiing using sensor data. Two deep learning approaches, convolutional and recurrent networks, are applied using different network architectures under each approach. The performance of all approaches is tested against a traditional multi-layer perceptron with two layers at most. Many experiments are carried out using a large dataset of skiing data collected with a 3D sensor for different skiers, and the approaches are compared with respect to the testing classification error. § 2. RELATED WORKS AND BACKGROUND This section reviews some of the sensor-based activity recognition systems using deep learning, mostly deep convolutional and recurrent networks. The recognition systems that identify daily normal activities will be shown first, then the systems that identify sports motion. Finally, the few studies which considered cross-country skiing with traditional machine learning models will be described. Convolutional neural networks (CNNs) have the power to capture local dependencies and keep features scale invariant during the representation learning process. These two key advantages of CNNs fit the characteristics of signals of normal activities in any recognition system. An activity signal always has a high correlation between neighbouring acceleration values. Moreover, any two signals with different motion intensities may represent the same activity but for two different persons <cit.>. Therefore, CNNs have been extensively applied in HAR systems to achieve better recognition accuracy. For example, a deep CNN was deployed for activity recognition in <cit.> and was tested against a support vector machine (SVM) and a feature selection method. The raw sensor signals were collected from accelerometers and gyroscopes and then permuted and stacked in a signal image. The CNN architecture learned low- and high-level features from the constructed activity image. Results on 3 datasets showed that the CNN achieved the highest recognition accuracy. Another HAR system in <cit.> introduced a new modified weight sharing technique, called partial weight sharing, to improve the classification accuracy of the convolutional network. 
The new technique allows weight sharing only among local filters that are close to each other and aggregated together in the max-pooling layer. The modified CNN outperformed the standard CNN on many benchmark datasets using different regularization settings of weight decay, momentum and drop-out. Furthermore, in <cit.>, a deep CNN was built from convolution, rectified linear unit (ReLU), max-pooling and normalization layers. It was tested using the benchmark Opportunity Activity Recognition and Hand Gesture datasets. According to the values of accuracy and average F-measure, the convolutional network was more effective than four other methods: SVM, K-nearest neighbour (KNN), Means and Variance, and deep belief network (DBN). Other comparisons between CNNs and standard ML techniques were conducted in <cit.> and <cit.>. In the former study, the CNN was tested against J48, random forest (RF), SVM, KNN, and Naïve Bayes using a wrist accelerometer. It achieved higher accuracy compared to RF for all of the subjects for the left wrist; however, opposite results were obtained when the right wrist was analyzed. In the other study, the CNN had the best accuracy values of 94.79% using raw sensor data and 95.75% using additional Fast Fourier Transform (FFT) information from the data set. The deep recurrent neural network (RNN), and especially the recurrent network based on long short-term memory (LSTM), has also proved its effectiveness in many recognition tasks. This architecture is able to exploit the temporal dependencies in the time series data of humans' normal activities acquired by sensors <cit.>. For example, in <cit.> an LSTM was applied to the WISDM activity dataset and yielded a final accuracy of 95%. In <cit.>, a proposed hierarchical bidirectional LSTM (BLSTM) was tested against a standard BLSTM for gesture recognition with smartphones. This bi-directional architecture provides two paths, forwards and backwards, for each input sequence, and this feeds the network with the past and future context of every value in the sequence. The first level of the BLSTM was responsible for classifying input sequences into gestures and non-gestures, and then the second level classified valid sequences into final gesture labels. The hierarchical BLSTM outperformed the standard architecture with respect to the classification accuracy. In addition, a deep learning framework based on convolutional and LSTM recurrent layers was proposed in <cit.>. The framework was able to both learn feature representations and model the temporal dependencies between their activations. Experiments on the Opportunity and Skoda activity datasets proved the superiority of the proposed framework over a baseline CNN with respect to the F1 score. Another framework that integrated many deep approaches was presented in <cit.>. A deep neural network, a convolutional network and two flavours of LSTM recurrent networks, forward and bidirectional, were implemented. Also, a new regularization for RNNs, named breaks, was introduced in this study. Experiments on benchmark datasets showed that the RNN outperformed the CNN on activities that are short in duration but have a natural ordering, whereas the opposite happened for prolonged and repetitive activities. On the other hand, few systems have been proposed for sports motion recognition, and the majority of these systems were based on traditional machine learning algorithms, as in <cit.>, <cit.>, <cit.> and <cit.>. For example, in ball games, the authors in <cit.> used Gaussian fitting and regression analysis for extracting movement characteristics of tennis players. 
However, in <cit.>, an SVM-based model was used for recognizing sports with single-handed swings like tennis, badminton, and ping pong. The SVM was able to classify the features extracted from triaxial accelerometers after N-median filtering. Other studies applied different algorithms for classifying both daily normal activities and sports motion, as in <cit.>. In this study, a comparison between different classifiers like SVM and NN was introduced for classifying activity signals from inertial and magnetic sensors. The best recognition accuracies were from the NN and SVM classifiers. Also, in <cit.>, a multi-classifier combination model based on Quadratic SVM (QSVM) and the AdaBoost Tree ensemble learning algorithm was used for daily and sports activities recognition. For cross-country skiing, <cit.>, <cit.> and <cit.> applied machine learning for gear classification. In <cit.>, a Markov chain of multivariate Gaussian distributions was used for classification, anomaly detection and clustering of periodic movement patterns. Experiments on skiing data from 14 skiers during a training race on roller skis showed the ability of the algorithm to reach an overall performance of 98% on the test set. Another Markov model and a k-nearest neighbour algorithm were implemented to classify classical style and free style simultaneously in <cit.>. In this study, the skiing data was first preprocessed by re-sampling the sensor data so that all skiing cycles had the same fixed timestamps before classification. Results showed that most of the gears were correctly classified using both algorithms but, according to the error rates, the KNN algorithm is preferable. Finally, in <cit.>, Gaussian filters were used with a Markov-chain machine learning procedure to identify the cross-country ski-skating gears. § 2.1. LONG SHORT-TERM MEMORY (LSTM) Recurrent neural networks (RNNs), figure <ref>, contain hidden layers of recurrently connected neurons to represent the temporal dependencies in data sequences. RNNs have been applied successfully in many applications such as speech recognition and natural language processing. These networks have limitations of their own; for example, it is difficult to train these networks on long input sequences. This is due to the problems of vanishing and exploding gradients that occur when errors are back-propagated across many time steps. The long short-term memory (LSTM) solved this problem by integrating memory units to enable learning of long temporal dynamics. Another limitation of RNNs is the dependency of outputs on previous inputs, and this was solved by the bidirectional architecture <cit.>. The LSTM, figure <ref>, was first introduced in 1997 by Sepp Hochreiter & Jürgen Schmidhuber. The LSTM cell is a variant architecture of the RNN cell which incorporates memory units that enable learning of complex and long-term temporal dynamics of long sequences. These units allow the network to learn when to forget previous hidden states and when to update hidden states given new information. This cell includes an input gate i_t, an input modulation gate g_t, a forget gate f_t and an output gate o_t, in addition to the memory cell c_t and the hidden unit h_t, as in equations <ref> to <ref>, where the W are the weight matrices connecting the inputs or hidden states to the gates and the b are the bias terms. Each block in the network has many LSTM cells, and the gates are shared by the cells in the block. The gates are responsible for adjusting the interactions between the memory cell and its environment. 
For example, the input gate allows an incoming signal to alter the state of the memory cell or blocks it, whereas the output gate controls the effect of the memory cell on the hidden state. The forget gate, on the other hand, lets the cell remember or forget its previous state <cit.>, <cit.>, <cit.> and <cit.>. i_t = σ(W_xix_t + W_hih_t-1 + b_i) f_t = σ(W_xfx_t + W_hfh_t-1 + b_f) o_t = σ(W_xox_t + W_hoh_t-1 + b_o) g_t = σ(W_xcx_t + W_hch_t-1 + b_c) c_t = f_t ⊙ c_t-1 + i_t ⊙ g_t h_t = o_t ⊙ϕ(c_t) Some variants of the LSTM were developed to enhance its performance. For example, to increase the effectiveness of the LSTM cell in learning a precise timing of the outputs, peephole connections were introduced by Gers & Schmidhuber in <cit.>. The peephole connections run from the internal cell of the LSTM to the gates of the same cell, allowing communication between the gates and helping them to access the current cell state even when the output gate is closed. Another variant of the standard LSTM cell is the bidirectional LSTM (BLSTM), where the concept of the bidirectional RNN (BRNN) was first introduced by Schuster & Paliwal. The bidirectional architectures process each training sequence forwards and backwards by two separate recurrent nets, connected to the same output layer. Therefore, the BLSTM has the ability to access long-range sequences in both input directions and to access future input information from the current state <cit.>. §.§ 2.2. Convolutional Neural Network (CNN) Convolutional neural networks (CNNs), figure <ref>, are deep networks based on local filters which are able to discover the correlation within the input data through the convolution operation. The output feature maps from this convolution can capture different types of features at each temporal position. Convolutional networks mainly include stacked layers of convolution and pooling operations. The convolution operation applies each local filter over all subsets of the input, where the weights of these filters are shared across all subsets. Then, the pooling operation splits the output features and applies some function to reduce the size of the previous layer, preserving the scale-invariance property of the features <cit.>. CNNs achieved high performance when applied in the activity recognition field. This goes back to the key advantages of CNNs, which are the local dependencies and the scale invariance. The local filters in a CNN have the capability to capture local dependencies of neighbouring sensor acceleration readings of an activity signal. In addition, the scale-invariance characteristic allows a CNN to successfully learn hidden features regardless of their positions or scales. This is helpful in the recognition of human activities, as people may do the same activity with different speeds and intensities <cit.>, <cit.>. § 3. EXPERIMENTS All experiments were done on the cross-country skiing (XCS) sport, where the different movement patterns are called gears. The data [racefox.se] was collected using a 3D accelerometer in the form of acceleration in the horizontal direction x, the up direction y and the forward direction z. The total number of recorded examples was 416,737 records, with 50% of the records for gear 2 and 50% for gear 3, at a sampling rate of 50 Hz. Figure <ref>, (a) and (b), shows a typical cycle of each gear in its raw format. The data was divided into training, validation and testing sets, with a segmentation process applied with a window size of 1 second and 50% overlap. 
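The two preprocessing and modelling ingredients just described can be sketched as follows (our illustration in TensorFlow/Keras, not the authors' released code): slicing the 50 Hz accelerometer stream into 1-second windows with 50% overlap, and a small forward-LSTM classifier for the two gears. The unit count and training settings are assumptions for illustration only.

    import numpy as np
    import tensorflow as tf

    def segment(acc, fs=50, win_s=1.0, overlap=0.5):
        """Slice an (N, 3) accelerometer stream into overlapping windows.

        Returns an array of shape (n_windows, fs*win_s, 3); consecutive
        windows overlap by `overlap` (50% here, as in the text).
        """
        win = int(fs * win_s)
        step = int(win * (1.0 - overlap))
        starts = range(0, len(acc) - win + 1, step)
        return np.stack([acc[s:s + win] for s in starts])

    # Minimal forward-LSTM classifier; labels 0/1 encode gears 2/3.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(50, input_shape=(50, 3)),  # 1 s window at 50 Hz
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])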
Variant models of convolutional and recurrent networks were implemented in addition to a standard MLP. For the recurrent networks, three LSTM architectures were applied: the standard forward LSTM (LSTM-F), the LSTM with peepholes (LSTM-P) and the bi-directional LSTM (BLSTM). For the convolutional networks, two CNNs were applied, the first one with only one convolution layer and the second with two convolution layers. Finally, two multi-layer perceptrons (MLPs) were also tested, using one and two hidden layers. All the models were implemented in TensorFlow using different parameter settings, as in table <ref>. The maximum number of training iterations was 3000, and the total number of runs was 5. The best weights, associated with the minimum validation classification error, were kept through the iterations, with the cross-entropy error as the training error measure. The performance of the models was measured by the average testing classification error over the runs, where the testing error counts the number of misclassified examples. § 4. RESULTS AND DISCUSSION The testing classification error values of each model are shown in figure <ref>. The three recurrent LSTM models are marked as LSTM-F, LSTM-P and LSTM-BI for the standard forward LSTM, the LSTM with peepholes and the bi-directional LSTM, respectively. Figure <ref>(d) summarizes the best error value achieved by the LSTM, CNN and MLP. From figure <ref>(a) and (b), the best performing models on the skiing data are the CNN with one convolution layer, the standard forward LSTM and the LSTM with peepholes. For the CNN, the smaller the number of local filters used in the convolution operation, the better the learning process. This conclusion can be observed from the classification error value attained using only 20 filters. With few filters, the convolution operation succeeds in extracting feature maps that distinguish between the two skiing gears, with a classification error value of 2.4%. When testing smaller numbers of filters, like 10 and 15, the same error value was achieved. On the contrary, the standard forward LSTM and the LSTM with peepholes perform better as the network size (the number of LSTM units) increases, although the peephole connections do not strongly affect the performance, especially with a large number of LSTM units. By using a larger number of LSTM units at each time step, the total classification error decreased, with the minimum error of 1.6% achieved by LSTM-F. A small number of LSTM recurrent units can also capture the dynamics of a sequence, as in the case of using only 25 units; however, the number should be commensurate with the sequence length. Hence, increasing the number of recurrent units at each step mostly improves the learning process, because this increases the number of memory units at each step and so the number of weights optimized at each value of the sequence. These conclusions are also valid for the bi-directional LSTM, given that only half of the total number of LSTM units is used in each of the forward and backward networks. For example, 25 units are used for each of the forward and backward LSTM networks to give a total of 50 units for the bi-directional architecture. Therefore, the same behaviour appears again in the lower error values obtained when utilizing more units. The worst error attained by the BLSTM, 14%, appears with a total of 25 units, which validates the above conclusion that using too few units degrades the effectiveness. 
Also, the bi-directional architecture's advantage of accessing future and past observations during learning did not help the overall classification here in skiing. This may be due to the nature of the skiing gears, which have different intensities and frequencies; hence, sequential learning in a forward direction, as in the forward LSTM, can perform better. Finally, the standard multi-layer perceptron, figure <ref>(c), does not fit the complex nature of the input skiing signals, although its performance improves with more hidden units in the hidden layer, at the cost of computation. Increasing the number of hidden layers of the MLP does not provide a solution for a better understanding of the input; instead, it leads to worse performance. From figure <ref>(d), and as a conclusion over the MLP, deep CNN and LSTM networks, the standard forward LSTM is the most suitable for classifying the gears of the skiing data, with a classification error value of 1.6%. It is the best model for this data due to several characteristics. The LSTM has the advantage of shared weights between all cells, which enhances learning through the input sequence. Furthermore, it incorporates different gates that help in memorizing the dynamics of the input. All these advantages suit the temporal nature of the skiing data, and hence the best results are obtained. § 5. CONCLUSIONS This paper introduced deep learning for the first time in the classification of skiing gears using sensor-based signals. Different deep learning architectures were applied and compared to select the best suited, with the lowest classification error. A total of five deep models, two convolutional network models and three long short-term memory models, were tested in addition to two standard multi-layer perceptron (MLP) models. The final classification error over the testing skiing data was reported using different settings of each model. The MLP failed to achieve high performance in the classification due to the complex nature of the input skiing signals, which have varying intensities and frequencies owing to the different styles with which skiers perform the same gear. Both the stacked layers and the complex non-linear functions used by the deep models enable the CNN and LSTM to achieve good results, reflected in low classification errors. The CNN had a minimum error of 2.4% using a small number of local filters for the convolution process, where increasing the number of filters does not necessarily improve the performance. The standard forward LSTM had the lowest classification error, 1.6%, over all five models, thanks to its recurrent architecture which utilizes shared weights through all time steps. Therefore, the learned weights of the memory gates at each step are propagated to the next time step till the end of each signal. Also, it can be concluded that the larger the number of LSTM units used at each time step, the better the performance of the whole network, but at the cost of a longer training time. On the other side, too few recurrent units yield poor learning with a higher classification error. The best number of units here, in skiing, was found to be the same as the length of the skiing input signal. As future work, more experiments using other deep learning based models will be carried out. Also, different skiing datasets with varied percentages of gears will be used to test the generality of the above conclusions. Finally, different styles within each gear should be analysed to study the different ways of performing skiing and assess the quality of performance. 
| http://arxiv.org/abs/1706.08924v1 | {
"authors": [
"Aliaa Rassem",
"Mohammed El-Beltagy",
"Mohamed Saleh"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170627161400",
"title": "Cross-Country Skiing Gears Classification using Deep Learning"
} |
An atomic force microscope (AFM) is capable of producing ultra-high resolution measurements of nanoscopic objects and forces. It is an indispensable tool for various scientific disciplines such as molecular engineering, solid-state physics, and cell biology. Prior to a given experiment, the AFM must be calibrated by fitting a spectral density model to baseline recordings. However, since AFM experiments typically collect large amounts of data, parameter estimation by maximum likelihood can be prohibitively expensive. Thus, practitioners routinely employ a much faster least-squares estimation method, at the cost of substantially reduced statistical efficiency. Additionally, AFM data is often contaminated by periodic electronic noise, to which parameter estimates are highly sensitive. This article proposes a two-stage estimator to address these issues. Preliminary parameter estimates are first obtained by a variance-stabilizing procedure, by which the simplicity of least-squares combines with the efficiency of maximum likelihood. A test for spectral periodicities then eliminates high-impact outliers, considerably and robustly protecting the second-stage estimator from the effects of electronic noise. Simulation and experimental results indicate that a two- to ten-fold reduction in mean squared error can be expected by applying our methodology. Key Words: Cantilever Calibration; Periodogram; Whittle Likelihood; Variance-Stabilizing Transformation; Fisher's g-Statistic. § INTRODUCTION An atomic force microscope (AFM) is a scientific instrument producing high-frequency ultra-precise displacement readings of a minuscule (∼ 100 μm long) pliable beam referred to as a cantilever. The cantilever bends in response to various forces exerted by its surrounding environment, the recording of which has been immensely useful for the study of, e.g., the composition of polymers and other chemical compounds <cit.>, interatomic and intramolecular forces <cit.>, pathogen-drug interactions <cit.>, cell adhesion <cit.>, and the dynamics of protein folding <cit.>. In a typical AFM experiment, the cantilever's bending response is measured in opposition to its spring-like restoring force, which requires proper calibration of the cantilever stiffness in order to convert measured displacement readings into force <cit.>. This calibration is accomplished by fitting various parametric models to a baseline spectral density recording, i.e., to a cantilever driven by thermal noise alone. A representative baseline spectrum calculated from experimental data is displayed in Figure <ref>. This data serves to illustrate two outstanding challenges in AFM parametric spectral density estimation. First, the experiments produce massive amounts of data, for which maximum likelihood estimation can be prohibitively expensive. A much faster least-squares method is routinely employed in practice <cit.>, at the cost of substantially reduced statistical efficiency. 
Second, parametric estimates by either method are severely affected by electronic noise, due to periodic fluctuations in the AFM's circuitry. Such noise is evidenced by the presence of sharp peaks (i.e., vertical lines) in the baseline spectrum of Figure <ref>. In this article, we propose a two-stage estimator addressing both of these issues. A preliminary estimator first applies a variance-stabilizing transformation which renders the least-squares estimator virtually as efficient as the MLE. After the preliminary fit, an automated denoising procedure, based on a well-known statistical test for hidden periodicities, robustly protects the second-stage estimator from most of the effects of electronic noise. Extensive simulations and experimental results indicate that a two- to ten-fold reduction in mean squared error can be expected by applying our methodology. The remainder of the paper is organized as follows. Section <ref> provides an overview of parametric spectral density estimation in the AFM context. Section <ref> describes our proposed two-step estimator. Section <ref> presents a detailed simulation study comparing our proposal to existing methods. Section <ref> applies the methodology to calibration of a real AFM cantilever, and we close in Section <ref> with a discussion of future work. § PARAMETRIC SPECTRAL DENSITY ESTIMATION IN AFM Let X_t = X(t) denote a continuous stationary Gaussian stochastic process with mean E[X_t] = 0 and autocorrelation γ(t) = cov(X_s, X_{s+t}). The power spectral density (PSD) of X_t is then defined as the Fourier transform of its autocorrelation, φ(f) = ∫ e^{-2π i f t} γ(t) dt = F{γ(t)}. Spectral densities can be used to express the solutions of various differential equations and are thus commonly employed in many areas of physics. A particularly important example for AFM applications is that of a simple harmonic oscillator (SHO). This model for the thermally-driven tip position X_t of the AFM cantilever is m Ẍ_t = -k X_t - ς Ẋ_t + F_t, where Ẋ_t and Ẍ_t are velocity and acceleration, m is the tip mass, k is the cantilever stiffness, ς is the viscous damping from the surrounding medium (e.g., air, water), and F_t is the thermal force which drives the cantilever motion. It is a stationary white noise process with E[F_t] = 0 and cov(F_s, F_{s+t}) = 2 k_B T ς · δ(t), where T is temperature and k_B is Boltzmann's constant. While the autocorrelation of X_t has no simple form, a straightforward calculation in the Fourier domain obtains the spectral density φ(f) = [k_B T/(k · π f_0 Q)] / {[(f/f_0)^2 - 1]^2 + [f/(f_0 Q)]^2}, where f_0 = √(k/m)/(2π) is the cantilever's resonance frequency and Q = √(km)/ς is its "quality factor", which measures the width of the PSD amplitude peak around f_0 (see Figure <ref>).
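To make the SHO model concrete, a minimal Python sketch of the thermal-noise PSD above could read as follows; the parameter values in the usage lines are purely illustrative and are not taken from the paper.

import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def sho_psd(f, k, f0, Q, T=298.0):
    """SHO thermal-noise PSD; f, f0 in Hz, k in N/m, output in m^2/Hz."""
    amp = KB * T / (k * np.pi * f0 * Q)
    return amp / (((f / f0) ** 2 - 1.0) ** 2 + (f / (f0 * Q)) ** 2)

f = np.linspace(1e4, 1e5, 1000)             # illustrative frequency grid
psd = sho_psd(f, k=0.18, f0=3.3e4, Q=55.0)  # illustrative parameters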
The presentation above glosses over several technical details, e.g., non-integrability of many well-defined autocorrelations in (<ref>) (such as that of the SHO), and limited interpretability of the white noise process F_t in (<ref>) as a function of t. For a rigorous treatment of these issues see <cit.>. §.§ Parametric Inference In a parametric setting, the PSD is expressed as φ(f, θ) and the goal is to estimate the unknown parameters θ from discrete observations X = (X_0, …, X_{N-1}) recorded at sampling frequency f_s, such that X_n = X(n · Δt) and f_s = 1/Δt. Ideally one would work directly with the loglikelihood in the time domain, ℓ(θ | X). However, this approach is inviable in practice since it (i) requires Fourier inversion of the PSD to obtain the variance of X, and (ii) scales quadratically in the number of observations <cit.>. Instead, parametric inference can be considerably simplified by making use of the following result. Let N = 2K+1 and define the finite Fourier transform X̃ = (X̃_{-K}, …, X̃_0, …, X̃_K) of X as X̃_k = Σ_{n=0}^{N-1} e^{-2π i k n/N} X_n. For each X̃_k, let f_k = (k/N) f_s denote the corresponding frequency. Then if φ(f) is the PSD of X_t, under suitable conditions on φ(f), and as N → ∞ and Δt → 0, we have (1/N)|X̃_k|^2 ≈ f_s · φ(f_k) × Expo(1), 0 ≤ k < K. Proposition <ref> leads to the so-called Whittle loglikelihood function <cit.> ℓ_W(θ | Y) = -Σ_{k=1}^{K} (Y_k/φ_k(θ) + log φ_k(θ)), where Y_k = (1/N)|X̃_k|^2 and φ_k(θ) = f_s · φ(f_k, θ). Since the periodogram Y = (Y_1, …, Y_K) can be computed in O(N log N) time using the Fast Fourier Transform, maximization of the Whittle loglikelihood (<ref>) is considerably easier than of the original likelihood ℓ(θ | X). Conditions for the convergence of the Whittle MLE θ̂_W = argmax_θ ℓ_W(θ | Y) to the true MLE θ̂ = argmax_θ ℓ(θ | X) have been established by <cit.>. Since the true MLE is typically unavailable, we shall refer to Whittle's θ̂_W simply as the MLE in the developments to follow. §.§ Periodogram Binning Despite its computational advantages relative to exact maximum likelihood, obtaining θ̂_W often remains a practical challenge, due to the enormous size of typical AFM datasets and the difficult numerical optimization of ℓ_W(θ | Y). A common technique to overcome these issues is to group the periodogram frequencies into consecutive bins <cit.>. That is, assume that K is a multiple of the bin size B, and consider the average periodogram value in bin m, Y̅_m = (1/B) Σ_{k ∈ I_m} Y_k, I_m = {k: (m-1)B < k ≤ mB}. It then follows from Proposition <ref> that if φ_k(θ) is relatively constant within bins, the distribution of Y̅ = (Y̅_1, …, Y̅_{N_B}) can be well approximated by Y̅_m ≈ φ̅_m(θ) × Gamma(B, B), where φ̅_m(θ) = f_s · φ(f̅_m, θ), f̅_m = (1/B) Σ_{k ∈ I_m} f_k, and Gamma(B, B) is a Gamma distribution with mean 1 and variance 1/B. This leads to the non-linear least-squares (NLS) estimator θ̂_NLS = argmin_θ Σ_{m=1}^{N_B} (Y̅_m - φ̅_m(θ))^2, which is a consistent estimator of θ <cit.>. The sum-of-squares criterion (<ref>) can be minimized using specialized algorithms such as Levenberg-Marquardt <cit.>, rendering the calculation of θ̂_NLS considerably simpler than that of θ̂_W. However, this gain often incurs a significant loss in statistical precision. § ROBUST AND EFFICIENT PARAMETRIC PSD INFERENCE The choice between NLS and MLE estimators imposes a trade-off between computational and statistical efficiency. In addition, both estimators are highly sensitive to periodic noise which commonly plagues AFM spectral data (Section <ref>). Here we describe a two-stage parametric spectral estimator designed to overcome these issues.
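A minimal sketch of the binning step and the resulting NLS fit is given below, using SciPy's Levenberg-Marquardt least-squares routine; the handling of leftover frequencies and the starting values are implementation assumptions.

import numpy as np
from scipy.optimize import least_squares

def bin_periodogram(fk, Yk, B):
    """Average consecutive periodogram ordinates in bins of size B."""
    nb = len(Yk) // B
    fbar = fk[:nb * B].reshape(nb, B).mean(axis=1)
    Ybar = Yk[:nb * B].reshape(nb, B).mean(axis=1)
    return fbar, Ybar

def nls_fit(fbar, Ybar, psd_model, theta0):
    """Minimize sum_m (Ybar_m - psd_model(fbar, theta))^2 over theta."""
    resid = lambda theta: Ybar - psd_model(fbar, theta)
    return least_squares(resid, theta0, method="lm").x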
§.§ Variance Stabilizing Transformation To see why the NLS estimator is sub-optimally efficient, note that the approximate Gamma distribution of the binned periodogram (<ref>) can itself be approximated by a Normal with matching mean and variance: Y̅_m ≈ N(φ̅_m(θ), (1/B) φ̅_m(θ)^2). Substituting any constant for the parameter-dependent variance in (<ref>) then gives rise to θ̂_NLS in (<ref>). However, by a straightforward application of the statistical delta method <cit.>, we note that taking the logarithm of the binned periodogram is a variance-stabilizing transformation: var(log Y̅_m) ≈ ((d/dy) log y |_{y = E[Y̅_m]})^2 × var(Y̅_m) = [1/φ̅_m(θ)^2] × [φ̅_m(θ)^2/B] = 1/B, such that Z_m = log(Y̅_m) ≈ N(log φ̅_m(θ), B^{-1}). Maximizing the likelihood resulting from (<ref>) leads to the log-periodogram (LP) estimator θ̂_LP = argmin_θ Σ_{m=1}^{N_B} (Z_m - log φ̅_m(θ))^2. This simple sum-of-squares can be effectively minimized by the methods of Section <ref>, yet with θ̂_LP achieving nearly the same precision as θ̂_W. The LP estimator is commonly used in statistics to estimate long-range dependence <cit.>. Its asymptotic properties have been derived by <cit.> and compared favorably therein to the efficient estimators of <cit.>. A different variance-stabilizing transformation of the periodogram is to take logarithms before binning, i.e., let Z̃_m = (1/B) Σ_{k ∈ I_m} log(Y_k). Since the log-Exponential distribution is more Normal than the Exponential itself, a smaller B is required for Z̃_m than Z_m for within-bin normality to hold. However, assuming that approximately Y_k ≈ φ̅_m(θ) × Expo(1) for k ∈ I_m, it can be shown that var(Z̃_m) = (π^2/6) × (1/B) > var(Z_m). Therefore, the analogous estimator to (<ref>) with Z̃_m in place of Z_m is expected to be less efficient, and indeed this was the case in our numerical experiments. §.§ Periodic Noise Removal The PSD as defined in (<ref>) tacitly assumes that the data are "purely stochastic" <cit.>. However, the periodogram in Figure <ref> has several vertical lines in the 10^5-10^6 Hz range, suggesting the presence of periodic terms which cannot be explained by a PSD alone. Indeed, the AFM is a complex instrument operated by extensive electronics, which inevitably leads to periodic noise from various electrical components and power sources. Careful engineering can significantly reduce the effects of electronic noise on the final cantilever displacement readings. However, the residual periodic components shown in Figure <ref> can gravely impact PSD parameter estimates, as will be demonstrated shortly. Fortunately, the more severe electronic noise can be easily and automatically removed from the PSD by the following method due to <cit.>. Suppose that the periodogram data Y = (Y_1, …, Y_K) contain no periodic components. Under this null hypothesis, we have H_0: W_k = Y_k/φ_k(θ_0) ≈ Expo(1), where θ_0 is the true parameter value. Now consider the maximum jump of the normalized cumulative periodogram density, also known as Fisher's g-statistic: g = max_{1 ≤ k ≤ K} W_k / Σ_{j=1}^{K} W_j. Under H_0, g is distributed as the maximum distance between the order statistics of K iid uniform random variables, of which the distribution is given by <cit.> P(g > a | H_0) = Σ_{k=1}^{K} (-1)^{k+1} (K choose k) (1 - k·a)_+^{K-1}, x_+ := max(x, 0).
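A minimal sketch of Fisher's g-statistic and the exact p-value above could read as follows; truncating the alternating series at k = floor(1/a) uses the fact that the remaining terms vanish, and the binomial coefficients limit this direct evaluation to moderate K.

import numpy as np
from scipy.special import comb

def fisher_g(W):
    """Fisher's g-statistic for standardized periodogram ordinates W_k."""
    W = np.asarray(W)
    return W.max() / W.sum()

def fisher_g_pvalue(a, K):
    """Exact P(g > a | H0) via the alternating series; moderate K only."""
    kmax = min(K, int(np.floor(1.0 / a)))   # terms with (1 - k*a) <= 0 vanish
    ks = np.arange(1, kmax + 1)
    terms = (-1.0) ** (ks + 1) * comb(K, ks) * (1.0 - ks * a) ** (K - 1)
    return float(np.clip(terms.sum(), 0.0, 1.0))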
§.§ Proposed Estimator The developments above motivate a two-stage parametric spectral density estimator consisting of the following steps: * Preliminary Estimation. Calculate a preliminary estimate θ̂_LP,1 using the log-periodogram likelihood function (<ref>). * Periodicity Removal. Calculate g upon substituting θ̂_LP,1 for the unknown value of θ_0, and the p-value against large g using (<ref>). If the p-value is small – say less than 1% – replace the corresponding periodogram ordinate Y_k by a random draw from φ_k(θ̂_LP,1) × Expo(1). Repeat this procedure until Fisher's g-test does not reject H_0. * Final Estimation. Calculate θ̂_LP on the periodogram obtained from Step 2, from which the unwanted periodicities have been removed. We have opted in Step 2 to replace the periodic outliers by random draws, instead of simply deleting them and repeating Fisher's g-test with K-1 variables. This is because the largest of these K-1 variables is in fact the second largest of the original K, for which (<ref>) does not give the right distribution under H_0. § SIMULATION STUDY In order to evaluate the parametric spectral estimator proposed in Section <ref>, we consider the following simulation study reflecting a broad range of AFM calibration scenarios. Each simulation run consisted of a 5 time series sampled at 10 (N = 5e6 data points) from the SHO model (<ref>) with added white noise, φ(f) = R_w + [k_B T/(k · π f_0 Q)] / {[(f/f_0)^2 - 1]^2 + [f/(f_0 Q)]^2}, where θ = (k, f_0, Q, R_w) and R_w denotes the white-noise floor. Data was generated using a standard FFT-based algorithm <cit.>. For all simulations, the baseline parameters are displayed in Table <ref>. With all parameters fixed except Q ∈ {1, 10, 100, 500}, the corresponding SHO spectra are displayed in Figure <ref>. For each of the four baseline settings, M = 1000 datasets were generated, and for each dataset we calculated the three estimators θ̂_W, θ̂_NLS, and θ̂_LP. This was done using only periodogram frequencies in the range f_0 ± f_0/√2, a range typically provided by the cantilever manufacturer, and outside of which the remaining frequencies provide little additional information about θ. For the NLS and LP estimators, the bin size was set to B = 100. For all estimators, the optimization was reduced from four to three parameters by the method of profile likelihood described in Appendix <ref>. §.§ Baseline Environment Figure <ref> displays boxplots for each estimator of each parameter estimate relative to its true value. The numbers on top of each boxplot correspond to the mean squared error (MSE) ratios between each estimator and θ̂_W. That is, for each of the SHO parameters φ ∈ (k, f_0, Q) and estimator j ∈ {W, NLS, LP}, the corresponding MSE ratio in Figure <ref> is calculated as R_j(φ) = Σ_{i=1}^{M} (φ̂_{j,i} - φ_0)^2 / Σ_{i=1}^{M} (φ̂_{W,i} - φ_0)^2, where φ_0 is the true parameter value and φ̂_{j,i} is its estimate by method j for dataset i. For low quality factor Q = 1, the NLS method has roughly 1.5-2 times higher MSE than the MLE. For higher values of Q, the MSE of NLS increases to roughly 3-5 times that of the MLE. In contrast, the LP estimator achieves virtually the same MSE as the MLE at a small fraction of the computational cost.
§.§ Electronic Noise Contamination In order to assess the impact of electronic noise, a random sine wave of the form D sin(2π ζ t + ϕ) was added to each of the baseline datasets from the simulations above. The parameters of each sine wave were chosen to mimic the electronic noise in the real AFM data of Figure <ref>, a particularly difficult scenario for SHO parameter estimation due to the proximity of the electronic noise to the resonance frequency f_0. Specifically, the frequency ζ of each sine wave was generated from a Normal with mean f_0 = 33.533 and standard deviation 10, the phase ϕ was drawn uniformly between 0 and 2π, and the amplitude D was set to achieve an approximately ten-fold increase from the maximum value of the baseline PSD near f_0. The small jitter in the sine wave parameters was added both to mimic the small variations measured in real AFM data, and to investigate the impact of spectral leakage. Figure <ref> displays a simulated dataset with electronic noise contamination. Also displayed is the 1% threshold for periodic noise detection by Fisher's g-test (Section <ref>). This is calculated by solving numerically for P(g > a_cut | H_0) = .01 using (<ref>), then setting the threshold for frequency f_k to a_cut × φ_k(θ̂_LP,1) · Σ_{j=1}^{K} (Y_j/φ_j(θ̂_LP,1)). The threshold in Figure <ref> indicates that most electronic noise detectable to the naked eye can be easily removed by the denoising procedure. Figure <ref> displays boxplots of each parameter estimate relative to its true value in the presence of electronic noise. To assess the impact of the noise corruption, these estimates do not include the denoising step of Section <ref>. The numbers in the plot correspond to MSE ratios between the estimator with noise corruption, relative to its own performance in the baseline dataset. The ratios are thus calculated as R_j(φ) = Σ_{i=1}^{M} (φ̂^{noise}_{j,i} - φ_0)^2 / Σ_{i=1}^{M} (φ̂^{base}_{j,i} - φ_0)^2, where φ̂^{base}_{j,i} and φ̂^{noise}_{j,i} are parameter estimates with method j for dataset i under baseline and noise-contaminated settings, respectively. At low Q, the MSE ratios are close to one, indicating that the estimators are relatively insensitive to the electronic noise. However, for high Q the effect of the noise is considerably more detrimental, particularly for NLS. In all cases, the performance of the LP estimator is affected the least, indicating it is naturally more robust than the MLE and NLS to periodic noise contamination, even before the denoising technique is applied. Figure <ref> displays boxplots for the second-stage parameter estimates, after electronic noise removal. Each estimator (MLE, NLS, and LP) used its own preliminary fit to determine the noise cutoff value. Here, the MSE ratios are calculated relative to an "ideal" estimator: the MLE with perfect denoising. That is, the MSE ratios are R_j(φ) = Σ_{i=1}^{M} (φ̂^{denoised}_{j,i} - φ_0)^2 / Σ_{i=1}^{M} (φ̂^{ideal}_{W,i} - φ_0)^2, where φ̂^{denoised}_{j,i} is the noise-corrected estimate of method j for dataset i. In general, the denoising procedure is extremely effective for both the MLE and LP, but somewhat less so for NLS (for example, at Q = 100 the MSE relative to Q̂_W, for Q̂_LP, is 5.30, whereas for Q̂_NLS, it is 8.90). However, for very high Q = 500, the denoising procedure fails to completely remove the upward bias in Q̂ and the downward bias in k̂. Upon further investigation, Figure <ref> reveals that this is due to spectral leakage.
Indeed, a close look at the 50 frequencies on either side of f_0 (Figure <ref>) shows that several periodogram variables adjacent to the electronic noise at 33.5490 have been pushed upward by its presence. The denoising procedure is able to remove the noise at 33.5490, but not in the neighboring frequencies. The net effect after noise correction is a slight upward bias in the binned periodogram (Figure <ref>), which, due to the high curvature of the SHO at Q = 500, causes an upward bias in Q̂. However, the overall amplitude of the SHO remains unaffected, and since by (<ref>) this amplitude is proportional to k_B T/(k · π f_0 Q), the upward bias in Q̂ is accompanied by a downward bias in k̂. § APPLICATION TO EXPERIMENTAL AFM DATA We now turn to the problem of calibrating the AFM cantilever for which the periodogram is displayed in Figure <ref>. The data consist of 5 s of an AC160 Olympus cantilever recorded at 5 MHz (N = 25e6 observations). The objective is to determine the parameters of the best-fitting SHO model to the first cantilever eigenmode (Figure <ref>). Calibration of a real AFM cantilever is subject to at least two complications not addressed in the simulations of Section <ref>: * While the PSDs used in simulation are dominated at low frequencies by white noise, those measured in the real data of Figure <ref> exhibit power-law behavior, φ(f) ∼ 1/f^α as f → 0. This is referred to as "1/f noise"; it features prominently in AFM experiments <cit.>, and is due in this case to slow fluctuations of the measurement sensor. Depending on the exponent, 1/f noise induces long-range dependence in the cantilever displacement (0 < α < 1), or even lack of stationarity (α ≥ 1). Failing to account for it can significantly bias SHO parameter estimates. Fortunately, 1/f noise can be dealt with readily by adding a correction term to the SHO model, which becomes φ(f | k, f_0, Q, R_w, α, A_f) = R_w + A_f/f^α + [k_B T/(k · π f_0 Q)] / {[(f/f_0)^2 - 1]^2 + [f/(f_0 Q)]^2}; a code sketch of this extended model is given after this list. While important at low frequencies, the 1/f noise around the first eigenmode (Figure <ref>) is nearly imperceptible. Consequently, we estimated the first eigenmode's SHO parameters using the simpler model (<ref>). We have constructed a simulation in which 1/f noise severely affects SHO parameter estimation in Appendix <ref>. Relative performance of LP to MLE and NLS estimators was similar to Section <ref>. * In addition to the first eigenmode at roughly 313 kHz, the data contain higher eigenmodes corresponding to flexural oscillations of the clamped cantilever beam <cit.>. The first of these higher eigenmodes is displayed in Figure <ref>. Calibration of higher eigenmodes is of essential importance for popular bimodal and multifrequency AFM imaging techniques <cit.>, on which we elaborate in the Discussion (Section <ref>).
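Under the reconstruction above, a sketch of the extended PSD model with white-noise floor and 1/f correction might look as follows; the names Rw, Af and alpha mirror the notation used here and are not necessarily the authors' own.

import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def psd_full(f, k, f0, Q, Rw, alpha, Af, T=298.0):
    """SHO PSD plus white-noise floor Rw and 1/f term Af / f**alpha."""
    sho = (KB * T / (k * np.pi * f0 * Q)) / (
        ((f / f0) ** 2 - 1.0) ** 2 + (f / (f0 * Q)) ** 2)
    return Rw + Af / f ** alpha + sho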
Figure <ref> displays the periodogram of the AFM data from Figure <ref> over the frequency range used for parameter estimation. The electronic noise at 312.5 and 300.0 kHz was easily removed with Fisher's g-statistic. Table <ref> displays parameter estimates and standard errors for the LP, NLS, and MLE methods, the first two being calculated with bin size B = 100. For the MLE and LP, standard errors are calculated by inverting the observed Fisher information matrices corresponding to (<ref>) and (<ref>). For NLS, standard errors are obtained by the sandwich method <cit.>. For this particular dataset, the LP, NLS, and MLE estimators are fairly similar, all being within one standard error of each other. This is because the difference between the estimators is largely driven by the relative amplitude of the SHO peak to its base. Here this ratio is about 10, which is similar to the Q = 10 scenario examined in Section <ref>. Indeed, repeating the simulations of Section <ref> with true parameter values taken as the estimates in Table <ref> produced similar results to the aforementioned scenario, i.e., indistinguishable LP and MLE estimators having three times smaller MSE than NLS. §.§ Bin Size While for this dataset there is little difference between the various estimators, LP and NLS can be substantially faster than the MLE due to periodogram frequency binning. In practice, the choice of bin size affects both computational efficiency and approximation accuracy. Large bin sizes can group periodogram variables with very different amplitudes, thus invalidating the Gamma approximation to Y̅_m in (<ref>). On the other hand, small bin sizes can strain the Normal approximations to Y̅_m and log(Y̅_m) in (<ref>) and (<ref>). To investigate the impact of bin size, Figure <ref> plots the LP and NLS estimators for the values B = 50, …, 250. The behavior of NLS is considerably more erratic, presumably due to small changes in the bin end points having larger impact on φ̅_m(θ) than on log φ̅_m(θ). Note that the downward trend in Q̂ is caused by increased flattening of the periodogram curvature as bin size increases. § DISCUSSION Parametric spectral density estimation plays a key role in AFM cantilever calibration. We have proposed a two-stage parametric spectral estimator having statistical efficiency comparable to the MLE at a small fraction of the computational cost, robust to most adverse effects of periodic noise contamination (except perhaps for very sharply peaked spectra). As spectral leakage due to binning affects the choice of bin size, a possible direction for future work is the construction of variable bin sizes, to be determined after the preliminary fit. Another line of future investigation is the calibration of higher eigenmodes. In principle, this can be done by fitting separate SHO models to each successive eigenmode. However, as the peak amplitude of these higher modes gets closer and closer to the noise floor, the accuracy of separate SHO estimators rapidly deteriorates. Instead one might wish to combine SHO models on the basis of hydrodynamic principles <cit.> and other scaling laws <cit.>. § PROFILE LIKELIHOOD FOR SHO FITTING We begin by reparametrizing the SHO model (<ref>) as φ(f) = R_w + [k_B T/(k · π f_0 Q)] / {[(f/f_0)^2 - 1]^2 + [f/(f_0 Q)]^2} = τ × {κ + 1/([(f/f_0)^2 - 1]^2 + (f/γ)^2)} = τ · φ̃(f | ψ), where τ = k_B T/(k · π f_0 Q), κ = R_w/τ, γ = f_0 Q, and ψ = (f_0, κ, γ). The objective function for the NLS estimator then becomes Q_NLS(τ, ψ) = Σ_{m=1}^{N_B} (Y̅_m - τ · φ̃_m(ψ))^2, where φ̃_m(ψ) = f_s · φ̃(f̅_m | ψ). For any fixed value of ψ, the value of τ which minimizes Q_NLS(τ, ψ) is τ̂(ψ) = argmin_τ Q_NLS(τ, ψ) = Σ_{m=1}^{N_B} φ̃_m(ψ) · Y̅_m / Σ_{m=1}^{N_B} φ̃_m(ψ)^2. It follows that by setting ψ̂_NLS = argmin_ψ Q_NLS(τ̂(ψ), ψ) and τ̂_NLS = τ̂(ψ̂_NLS), we have (τ̂_NLS, ψ̂_NLS) = argmin_{(τ,ψ)} Q_NLS(τ, ψ). We can then recover the corresponding estimator θ̂_NLS by applying the inverse transformation Q = γ/f_0, k = k_B T/(τ · π · γ), and R_w = κ · τ. Thus, we have obtained θ̂_NLS at the cost of the three-parameter optimization of Q_NLS(τ̂(ψ), ψ), rather than the four-parameter direct optimization of Q_NLS(τ, ψ). An analogous "profiling" procedure can be applied to the LP and MLE estimators, in order to reduce the optimization problem from four parameters to three. For LP, the objective function is Q_LP(τ, ψ) = Σ_{m=1}^{N_B} (Z_m - log(τ) - log φ̃_m(ψ))^2, for which τ̂(ψ) = argmin_τ Q_LP(τ, ψ) = exp{(1/N_B) Σ_{m=1}^{N_B} (Z_m - log φ̃_m(ψ))}. Similarly, the objective function for the MLE is Q_W(τ, ψ) = Σ_{k=1}^{K} (Y_k/(τ · φ̃_k(ψ)) + log(τ) + log φ̃_k(ψ)), where φ̃_k(ψ) = φ̃(f_k | ψ), for which τ̂(ψ) = argmin_τ Q_W(τ, ψ) = (1/K) Σ_{k=1}^{K} Y_k/φ̃_k(ψ).
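A sketch of this profiling step for the LP objective could read as follows; the parameter ordering (f0, kappa, gamma) follows the reparametrization above, and the choice of a derivative-free optimizer is an assumption.

import numpy as np
from scipy.optimize import minimize

def psd_shape(f, f0, kappa, gamma):
    """Reparametrized SHO shape: kappa + 1/([(f/f0)^2 - 1]^2 + (f/gamma)^2)."""
    return kappa + 1.0 / (((f / f0) ** 2 - 1.0) ** 2 + (f / gamma) ** 2)

def lp_profile_objective(psi, fbar, Z):
    """LP sum of squares with tau profiled out in closed form."""
    log_shape = np.log(psd_shape(fbar, *psi))
    log_tau = np.mean(Z - log_shape)        # closed-form log tau-hat(psi)
    return np.sum((Z - log_tau - log_shape) ** 2)

def lp_fit(fbar, Z, psi0):
    res = minimize(lp_profile_objective, psi0, args=(fbar, Z),
                   method="Nelder-Mead")
    f0, kappa, gamma = res.x
    tau = np.exp(np.mean(Z - np.log(psd_shape(fbar, f0, kappa, gamma))))
    return tau, res.x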
§ 1/F NOISE The presence of 1/f noise is a common feature of AFM power spectra. This type of noise typically arises from slow fluctuations of the laser and photodiode sensor <cit.> and other long-term cantilever instabilities <cit.>. It is manifested by a power law behavior at low frequencies, φ(f) ∼ 1/f^α as f → 0. Thus, a PSD model for the SHO with both white noise and 1/f noise contamination is φ(f | k, f_0, Q, R_w, α, A_f) = R_w + A_f/f^α + [k_B T/(k · π f_0 Q)] / {[(f/f_0)^2 - 1]^2 + [f/(f_0 Q)]^2}, where α and A_f are the 1/f noise exponent and amplitude parameters, respectively. While the SHO estimates for the real AFM data in Figure <ref> were not impacted by the 1/f noise, here we construct a simulation study in which they are. Namely, we use the baseline parameters described in Section <ref>, to which we add 1/f noise with parameters α = 0.55 and A_f = 1.0e7. Baseline and noise contaminated power spectra are displayed in Figure <ref>. To quantify the severity of the 1/f noise, Table <ref> displays the asymptotic relative bias (i.e., as N → ∞) due to fitting the model (<ref>) without accounting for the 1/f noise in Figure <ref>. This was calculated by a direct curve-fitting procedure. While the bias on f_0 and k is relatively small, for Q it is on the order of 5-10%. To evaluate the different estimators, M = 1000 datasets are generated under each setting as in Section <ref>, and LP, NLS, and MLE parameter estimates are calculated for each dataset. For the LP and NLS estimators the bin size was B = 100. Table <ref> displays the parameter-wise MSE ratio for the LP and NLS estimators relative to the MLE. For moderate Q ≥ 10, the performance of the LP estimator is virtually the same as the MLE, and 1.5-5 times superior to that of NLS. For very low Q = 1, the 1/f noise in Figure <ref> is almost undetectable, leading to parameter identifiability issues in the fitting algorithms. In such a setting we recommend to first estimate the 1/f parameters separately from the low frequency periodogram values, then estimate the SHO and white noise parameters with α and A_f fixed. § SUPPLEMENTARY MATERIALS Software: All code for the various PSD fitting algorithms is available at [URL withheld in compliance with double-blind policy] <https://github.com/mlysy/realSHO>. | http://arxiv.org/abs/1706.08938v1 | {
"authors": [
"Bryan Yates",
"Aleksander Labuda",
"Martin Lysy"
],
"categories": [
"stat.AP"
],
"primary_category": "stat.AP",
"published": "20170627164820",
"title": "Robust and Efficient Parametric Spectral Estimation in Atomic Force Microscopy"
} |
Jun Nian ([email protected]), Institut des Hautes Études Scientifiques, Le Bois-Marie, 35 route de Chartres, 91440 Bures-sur-Yvette, France. The 2D 𝒩=(2,2)^* supersymmetric Yang-Mills theory can be obtained from the 2D 𝒩=(4,4) theory with a twisted mass deformation. In this paper we construct the gravity dual theory of the 2D 𝒩=(2,2)^* supersymmetric U(N) Yang-Mills theory at the large-N and large 't Hooft coupling limit using the 5D gauged supergravity. In the UV regime, this construction also provides the gravity dual of the 2D 𝒩=(2,2)^* U(N) topological Yang-Mills-Higgs theory. We propose a triality in the UV regime among integrable models, gauge theory and gravity, and we make some checks of this relation at the classical level.
"authors": [
"Jun Nian"
],
"categories": [
"hep-th"
],
"primary_category": "hep-th",
"published": "20170627191027",
"title": "Gravity Dual of Two-Dimensional $\\mathcal{N} = (2,2)^*$ Supersymmetric Yang-Mills Theory and Integrable Models"
} |
Multi-spacecraft observations and transport simulations of solar energetic particles. Jeremiah Horrocks Institute, University of Central Lancashire, PR1 2HE, Preston, UK ([email protected]). Institut fuer Experimentelle und Angewandte Physik, University of Kiel, Germany ([email protected]). Johns Hopkins University Applied Physics Laboratory, MD, USA. The injection, propagation and arrival of solar energetic particles (SEPs) during eruptive solar events is an important and current research topic of heliospheric physics. During the largest solar events, particles may have energies up to a few GeVs and sometimes even trigger ground-level enhancements (GLEs) at Earth. These large SEP events are best investigated through multi-spacecraft observations. We study the first GLE-event of solar cycle 24, from 17th May 2012, using data from multiple spacecraft (SOHO, GOES, MSL, STEREO-A, STEREO-B and MESSENGER). These spacecraft are located throughout the inner heliosphere, at heliocentric distances between 0.34 and 1.5 astronomical units (au), covering nearly the whole range of heliospheric longitudes. We present and investigate sub-GeV proton time profiles for the event at several energy channels, obtained via different instruments aboard the above spacecraft. We investigate issues due to magnetic connectivity, and present results of three-dimensional SEP propagation simulations. We gather virtual time profiles and perform qualitative and quantitative comparisons with observations, assessing longitudinal injection and transport effects as well as peak intensities. We distinguish different time profile shapes for well-connected and weakly connected observers, and find our onset time analysis to agree with this distinction. At select observers, we identify an additional low-energy component of Energetic Storm Particles (ESPs). Using well-connected observers for normalisation, our simulations are able to accurately recreate both time profile shapes and peak intensities at multiple observer locations. This synergetic approach combining numerical modelling with multi-spacecraft observations is crucial for understanding the propagation of SEPs within the interplanetary magnetic field. Our novel analysis provides valuable proof of the ability to simulate SEP propagation throughout the inner heliosphere, at a wide range of longitudes. Accurate simulations of SEP transport allow for better constraints of injection regions at the Sun, and thus, better understanding of acceleration processes. Multi-spacecraft observations and transport simulations of solar energetic particles for the May 17th 2012 event. M. Battarbee^1 (currently at the Department of Physics, University of Helsinki, Finland), J. Guo^2, S. Dalla^1, R. Wimmer-Schweingruber^2, B. Swalwell^1, D. J. Lawrence^3. Received 26th June 2017 / Accepted 25th January 2018 ========== § INTRODUCTION The Sun releases vast amounts of energy through its activity, which mostly follows a periodic 11-year cycle. These eruptions can accelerate protons, electrons and heavier ions to relativistic energies and release them into interplanetary space. These solar energetic particles (SEPs) are guided by the interplanetary magnetic field (IMF), and in some cases result in intense particle fluxes near the Earth.
SEP events take place more frequently during solar maximum, and can affect atmospheric and space-related activities in many ways (see, e.g., <cit.> and references therein), and as such, their investigation has been recognized as extremely important. During extreme solar events, protons can be accelerated into the GeV range, and, when directed at the Earth, may lead to neutron monitors (NMs) detecting events at the Earth's surface. These ground-level enhancements (GLEs) are the most extreme of solar events, and thus are of special interest to the heliophysics community (see, e.g., <cit.> and <cit.>). Our understanding of energetic solar events and specifically GLEs increased dramatically during solar cycle 23 <cit.> due to advances in instrumentation and an abundance of events to observe. Solar cycle 24, being much quieter, has so far provided only two unambiguous GLEs, GLE71 on May 17th 2012 and GLE72 on September 10th 2017. Being able to observe this event from multiple vantage points within the inner heliosphere provides us with an exciting opportunity to increase our understanding of the dynamics of strong solar events. In such an analysis, three-dimensional modelling of particle propagation is a crucial tool. We present sub-GeV proton observations of GLE 71, focusing on comparative analysis between observations from multiple vantage points throughout the inner heliosphere to better understand the spatial extent of SEP intensities in strong SEP events. GeV-energy particles are thus excluded from our analysis due to observations at such energies being available only in the vicinity of the Earth. We present new observations from the Mars Science Laboratory (MSL) Radiation Assessment Detector (RAD) and the MESSENGER Neutron Spectrometer (NS), together with energetic particle data from STEREO and near-Earth missions. We use a fully three-dimensional test particle model to simulate the transport of SEPs, originating from an acceleration region in the solar corona, generating virtual time profiles at various observer locations. The model includes, for the first time, the effects of a wavy Heliospheric Current Sheet (HCS) between two opposite polarities of the IMF. We compare intensity time profiles and peak intensities of data from both observations and simulations, at the different observer locations. In section <ref>, we introduce the event along with previously published analysis. In section <ref>, we introduce the instruments used in our multi-spacecraft observations. We then present intensity time profiles and solar release times, and discuss magnetic connectivity and energetic storm particles (ESPs). In section <ref>, we describe our particle transport simulation method. We then proceed to present simulated intensity time profiles, and compare them and deduced peak intensities with observations. Finally, in section <ref> we present the conclusions of our work. In Appendix <ref>, we discuss calibration of our MESSENGER NS observations. § THE MAY 17TH 2012 SOLAR ERUPTION On May 17, 2012 at 01:25 UT, the NOAA active region 11476, located at in Earth view, produced a class M5.1 flare starting, peaking, and ending at 01:25, 01:47, and 02:14 UT, respectively <cit.>. The type II radio burst indicating the shock formation was reported by <cit.> to start as early as 01:32 UT using the dynamic spectra from Hiraiso, Culgoora and Learmonth observatories. Based on this, they also determined the coronal mass ejection (CME) driven shock formation height as 1.38 solar radii (R_⊙, from the centre of the Sun).
The CME reached a peak speed of at 02:00 UT. They reasoned that although the May 17th flare is rather small for a GLE event, the associated CME was directed toward near-ecliptic latitudes, facilitating good connectivity between the most efficient particle acceleration regions of the shock front and the Earth. Despite the flare exhibiting relatively weak x-ray flux, <cit.> suggested that both the flare and the CME had a role in particle acceleration. <cit.> agreed with this, based on velocity dispersion analysis (VDA) of proton arrival. <cit.> further estimated, using NM data, that the solar particle release time was about 01:40, slightly later than the shock formation time of 01:32. <cit.> reported the type III radio bursts which signified the release of relativistic electrons into open magnetic field lines starting at around 01:33 UT and ending at 01:44 UT. Using a simple time-shifting analysis, they derived the release of protons from the Sun at about 01:37 UT, slightly earlier but broadly agreeing with the onset time obtained by <cit.>. This event was later directly detected at Earth by several NMs [<http://www.nmdb.eu>] with slightly different onset times (between 01:50 and 02:00), with the strongest signal detected at the South Pole <cit.> where the rigidity cutoff is the lowest. Within the magnetosphere, proton energy spectra were measured by the PAMELA instrument <cit.> as reported by <cit.>, indicating that protons with energies of up to one GeV and helium of up to 100 MeV/nucleon were accelerated and transported to the vicinity of Earth. The GeV proton detection has also been corroborated later by <cit.> using an inversion technique exploring the response functions of the Electron Proton Helium Instrument <cit.> aboard the SOHO spacecraft. The event was also detected aboard the International Space Station <cit.>. Analysis of NM and PAMELA observations, using comparisons of peak and integral intensities, can be found in <cit.>. <cit.> performed reverse modelling based on NM measurements of this event, finding evidence of anisotropic twin-stream SEP pitch-angle distributions. Utilizing lower particle energies for release time analysis, <cit.> compared Wind/3DP and particle fluxes with NM and solar disk observations, concluding that electrons at this event appear to be flare-accelerated, with proton acceleration happening mainly at the CME-driven shock. The ERNE/HED detector <cit.> aboard SOHO detected a strong event, but suffered from data gaps during the event, which poses additional challenges to analysis. During this event, the STEREO Ahead (STA) and STEREO Behind (STB) spacecraft were leading and trailing Earth by 114.8 and 117.6 degrees, respectively, both at a heliocentric distance of approximately . <cit.> studied the and proton channels of this event using GOES and the high energy telescope (HET) on STB. For the channel, they obtained an enhancement rate (peak intensity/pre-event intensity) of at GOES and only 35.0 at STB. For the channel, they obtained an enhancement rate of at GOES and only 13.4 at STB. Unfortunately, they did not determine the peak intensity of this event as measured by STA.
The event was also observed by the MESSENGER (MES) spacecraft orbiting around Mercury which, at the time of the event, was at a heliocentric distance of <cit.>. The longitudinal connectivity of MES was similar to that of STA, as shown in Figure <ref>. In this paper, we investigate the time-series of proton measurements from MES using its neutron spectrometer <cit.>. Beyond , this event was also observed by the Radiation Assessment Detector <cit.> on board the Mars Science Laboratory (MSL) on its way to Mars <cit.>. We derive the proton intensities measured by RAD at different energy ranges and compare them with Earth-based observations and simulated particle intensities at the same location. We note that the RAD detector did not measure original proton intensities in space, but rather a mix of primary and secondary particles due to primaries experiencing nuclear and electromagnetic interactions as they traverse through the inhomogeneous flight-time shielding of the spacecraft. To retrieve the original particle flux outside the spacecraft is rather challenging and is beyond the scope of the current paper. § MULTI-SPACECRAFT OBSERVATIONS The heliospheric locations of five different spacecraft whose measurements are employed in the current study are shown in Figure <ref> and also listed in Table <ref>. For this study, we estimated the average solar wind speed from measurements made by the CELIAS/MTOF Proton Monitor on the SOHO spacecraft during Carrington rotation 2123. The average radial solar wind speed value was , which was rounded down to for the purposes of this research. Table <ref> also includes calculated Parker spiral lengths using this solar wind speed. In order to effectively analyse the heliospheric and temporal extent of the May 17th 2012 solar eruption, we assess proton time profiles from multiple instruments throughout the inner heliosphere. The energy-dependent time profiles of SEPs measured at five different heliospheric locations are shown in Figure <ref>. For STA and STB, we analyse 1-minute resolution data from HET of the In situ Measurements of Particles and CME Transients (IMPACT) investigation aboard both STEREOs. The protons are measured between 13 and 100 MeV in 11 different energy channels. For our purpose of comparing the STEREO measurements to those at other locations, we combine the energy channels into four different bins: , , , and . For MES data at Mercury, we use the neutron spectrometer which contains one borated plastic (BP) scintillator sandwiched between two Li glass (LG) scintillators. To account for the shielding of particles by the magnetosphere of Mercury and by the geometric shadowing of the planet itself, we selected only observations where the orbit altitude of MES is greater than . The energy thresholds for triggering each type of charged particle were simulated and derived using particle transport codes <cit.> and are as follows: single coincidence, ≥15 MeV protons (or ≥1 MeV electrons); double coincidences, ≥45 MeV protons (or ≥10 MeV electrons); and triple coincidences, ≥125 MeV protons (or ≥30 MeV electrons). Since ≥10 MeV electrons are fairly rare in SEPs, we assume these channels measure mainly protons during the event. For the single-coincidence channel, contamination by many different sources such as electrons, gamma-rays and various charged particles is possible, and thus, care must be taken when drawing conclusions from the flux.
We converted single, double, and triple coincidence counts into fluxes according to methods explained in detail in Appendix <ref>. We solve the intensity profile for 15-45 MeV and 45-125 MeV protons in the following way: we subtract the ≥45 MeV flux from the ≥15 MeV flux, and the ≥125 MeV flux from the ≥45 MeV flux. These two fluxes, now bounded from both above and below in energy, are then divided by the energy bin widths, i.e., 30 and 80 MeV, resulting in intensities in units of (cm^2 sr s MeV)^-1. The ≥125 MeV flux is not shown in Figure <ref>, as it shows little enhancement for this time period. We emphasize that the 15-45 MeV flux calibration is uncertain. The time profiles in Figure <ref> indeed show a very high intensity in the 15-45 MeV channel, likely due to non-proton background contamination. Close to the Earth, we employed two separate detectors. GOES, situated within the Earth's magnetosphere, provided us with proton channels at 32-second resolution. The SOHO/ERNE HED detector at L1 was used to construct energy channels with 1-minute time resolution, matching the GOES channels with energy ranges of , , and . GOES provided uninterrupted observations of the event, but the background levels were enhanced due to increased ambient particle densities in the magnetosphere. ERNE/HED, located outside the magnetosphere at the Lagrangian point L1, provided uncontaminated pre-event intensities, but with data gaps during the event. Additionally, the peak intensities observed by ERNE/HED are suspected to be incorrect due to non-linear saturation artefacts and particles propagating through the detector in the reverse direction. At MSL, during the cruise phase, the RAD instrument provided radiation dose measurements with a high time resolution of 64 seconds, and particle spectra with a time resolution of ∼32 minutes. The radiation dose measurements were used to determine the event onset time. The particle spectra are provided by a particle telescope consisting of silicon detectors and plastic scintillators, with a viewing angle of ∼60° <cit.>, and providing proton detections up to a stopping energy of 100 MeV. The original energy of the particle, E, is solved through analyzing E versus dE/dx correlations for each particle. Since RAD transmits the deposited energy in each triggered detector layer for almost all stopping protons, the particle identification is done in post-processing and is very accurate. Protons stopping inside RAD can thus be selected, and their intensities have been obtained in four energy channels: , , , and . The particles detected by RAD are a combination of primaries and secondaries resulting from spallation and energy losses as particles travel through the flight-time spacecraft shielding. The shielding distribution around RAD is very complex: most of the solid angle was lightly shielded with a column density smaller than 10 g/cm^2, while the rest was broadly distributed over a range of depths up to about 100 g/cm^2 <cit.>. Due to this shielding, deducing the exact incident energies of particles as they reach the spacecraft is a challenging process. We briefly discuss correcting for these effects in section <ref>. Celestial mechanics dictate that a spacecraft on a Hohmann transfer to Mars remains magnetically well connected to Earth during most of its cruise phase <cit.>. This connection is also shown in Figure <ref>. For this reason, the intensity profiles seen at Earth and MSL are expected to show similar time evolutions.
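A sketch of the channel-subtraction step described at the start of this section, for the MESSENGER NS proton intensities, is given below; the input arrays stand for the calibrated integral fluxes of the appendix, and their absolute units are a placeholder.

import numpy as np

def ns_intensities(flux_ge15, flux_ge45, flux_ge125):
    """Differential proton intensities from integral NS channel fluxes.

    Subtracts the >=45 MeV flux from the >=15 MeV flux and the >=125 MeV
    flux from the >=45 MeV flux, then divides by the bin widths
    (30 and 80 MeV) to obtain 15-45 and 45-125 MeV intensities."""
    i_15_45 = (np.asarray(flux_ge15) - np.asarray(flux_ge45)) / 30.0
    i_45_125 = (np.asarray(flux_ge45) - np.asarray(flux_ge125)) / 80.0
    return i_15_45, i_45_125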
§.§ First arrival of particles and solar release time Intense energy release at the surface of the Sun or in the corona can accelerate SEPs to relativistic energies, allowing them to propagate rapidly along the Parker spiral <cit.> to heliospheric observers. If the observer is magnetically well-connected to the acceleration site and particle transport is unhindered, the arrival time of first particles can be used to infer the travel distance, i.e., the Parker spiral length. As each heliospheric location will see the first arrival of energetic protons at a different time, we have defined onset times separately for each spacecraft, listed in Table <ref>. In Figure <ref>, the green vertical lines mark the onset times of the highest-energy channel, corresponding to the arrivals of the fastest protons. For STA and MES observations, we also define onset times of possible ESP events in low-energy channels, marked by grey lines, as will be discussed in more detail later. For STA, we find two distinct jumps, which may both be due to an ESP event. These times were defined from the raw data through subjective analysis of rise over a background level. The nominal Parker spirals connecting the spacecraft to the Sun are shown in Figure <ref> assuming an average solar wind speed of , and their lengths have also been calculated and listed in Table <ref>. Given a Parker spiral length of for an observer at Earth, 1 GeV protons (with a speed of ∼2.6 × 10^5 km/s) propagating from the flare site without scattering would arrive after , or 11 minutes. A particle onset time at Earth at 01:56 would indicate a solar release time (SRT) of about 01:45 UT for these protons, which is consistent with radio burst observations <cit.>, considering the 8-min propagation time of radio signals from the Sun to the Earth. Table <ref> also lists the 1 GeV proton travel times and estimated associated SRTs, for each of the locations considered, based on the calculated Parker spiral lengths. The observed MSL onset time is in good agreement with that at Earth and with the estimated proton release time, likely due to the good magnetic connection between the acceleration region and Earth/MSL. However, SRT values derived from MES, STA, and STB are very different from each other and hours later than the time of flare onset and shock formation. This indicates that these spacecraft were not magnetically well-connected to the solar acceleration site, and that particle transport to these locations was not due to propagation parallel to the magnetic field lines but was affected by drift motion, co-rotation, cross-field diffusion and turbulence effects.
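A minimal sketch of the Parker spiral length and the implied scatter-free travel time is given below; the Archimedean arc-length formula, the 400 km/s default wind speed and the 25.34-day rotation period are assumptions for illustration rather than the exact values used in this work.

import numpy as np

AU_KM = 1.495978707e8
OMEGA_SUN = 2.0 * np.pi / (25.34 * 86400.0)   # solar rotation rate [rad/s]

def parker_length_km(r_au, v_sw=400.0):
    """Arc length of a Parker spiral from the Sun out to r (ecliptic plane)."""
    a = v_sw / OMEGA_SUN                      # spiral parameter [km]
    x = r_au * AU_KM / a
    return 0.5 * a * (x * np.sqrt(1.0 + x * x) + np.arcsinh(x))

def travel_time_s(r_au, kinetic_mev, v_sw=400.0):
    """Scatter-free travel time for a proton of given kinetic energy."""
    gamma = 1.0 + kinetic_mev / 938.272       # proton rest energy [MeV]
    beta = np.sqrt(1.0 - 1.0 / gamma ** 2)
    return parker_length_km(r_au, v_sw) / (beta * 2.99792458e5)

print(travel_time_s(1.0, 1000.0) / 60.0)      # ~11 minutes for 1 GeV at 1 au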
The plot shows the location of the flare on May 17th 2012 (indicated by a triangle) relative to the central meridian, along with estimated Parker spiral footpoints for the five observation platforms. As the plot shows, Earth (labelled 1) and MSL (2) are connected to regions on the Sun's surface very close to each other, with STA (3) and MES (5) connected to more western longitudes, close to each other. STB (4) is connected to more eastern longitudes.Figure <ref> also includes, as a thick white solid curve, a depiction of a potential field model neutral line between hemispheres of outward and inward pointing magnetic field. A model of a simple parametrized wavy neutral line, based on a tilted dipole formulation, is fitted to this neutral line using a least squares distance fit method, as described in <cit.>. This neutral line parametrisation is the r=2.5 R_⊙ anchor point for our model wavy HCS, and the wavy HCS parameters are described in section <ref>. Finally, figure <ref> shows a rectangular region of width 180^∘, extending to latitudes ± 60^∘, which we use as a model injection region for SEPs. The width of the injection region was iterated upon, until an agreement between observations and simulations, for as many heliospheric observers as possible, was achieved.As the solar wind flows outward and the solar surface rotates, magnetic structures at a given heliocentric distance are co-rotated westward. In Figure <ref>, this would be described by the potential field polarity map including the HCS moving to the right. We validate the synoptic source and Parker spiral model through simple radial magnetic field observations. MES and STA are in regions of inward-pointing magnetic field throughout the analysed time period, in agreement with the map. Up until the time of the flare, Earth is connected to outward-pointing field lines, after which a strong interplanetary CME (ICME) is detected and the field orientation flips. STB is initially connected to inward-pointing field lines, but from the 19th of May onward, the direction points inward, in agreement with the spacecraft crossing the HCS. §.§ Interplanetary shocks and energetic storm particlesIn addition to SEPs accelerated close to the Sun during the initial, strong phase of the solar eruption, particle acceleration can happen throughout the inner heliosphere at propagating interplanetary (IP) shocks, driven by ICME fronts. Depending on the heliospheric location relative to the flare site and the ICME, different spacecraft see different properties of the event. The time profiles of in-situ measurements in Figure <ref> and estimated SRTs in Table <ref> suggest that the particle intensities at Earth and MSL (with estimated SRTs of 01:45 and 01:46) are dominated by coronally accelerated SEPs, but at MES and STA, there is an additional population of energetic storm particles (ESPs) accelerated by an IP shock. To identify and decouple the signal of particles accelerated at an IP shock from those accelerated early on in the corona, we turn to ICME and shock catalogues. For MES, the circum-Mercurial orbital period of only 8 hours and related magnetospheric disturbances make identification of ICMEs challenging. <cit.> were able to detect an ICME at MES, lasting from 12:09 until 15:38 on May 17th. The shock transit speed was identified as .ESPs usually peak at lower energies than coronally accelerated SEPs, and are found only in the vicinity of the IP shocks due to turbulent trapping. 
At MES shown in Figure <ref>, we notice a clear intensity peak, likely due to ESPs, starting around 12:10 marked by a grey line right after the arrival of the ICME.A comprehensive catalogue of ICMEs, IP shocks, and streaming interactive regions (SIRs) for the STEREO spacecraft has been compiled by <cit.>[<http://www-ssc.igpp.ucla.edu/forms/stereo/stereo_level_3.html>]. A shock was detected at STA on May 18th at 12:43, followed by an ICME until 09:12 on May 19th. The deka-MeV proton channels at STA show major enhancements starting at about 15:25 on May 18th (marked by a grey line), which can be attributed to IP shock accelerated ESPs. A smaller enhancement is seen at 04:52, possibly due to a foreshock of ESPs escaping in front of the IP shock.STB is reported to be within a SIR from 23:48 on May 18th until 16:35 on May 22nd, well after the weak increase in proton flux. Upon further inspection of relevant solar wind measurements at STB, the possibility of an IP shock passing the spacecraft on between the 18th and 19th of May cannot be ruled out, but the data are ambiguous.An alternative explanation for the particle enhancement at STB, which begins less than 12 hours after the flare, is for coronally accelerated particles to drift there along the HCS, which co-rotates over the position of STEREO-B. We include this HCS drift in our simulations and assess this possibility in section <ref>. Many spacecraft are available for observing near-Earth transients. Both Wind and ACE databases report the Earth as within an ICME already from the 16th of May, being thus unrelated to the GLE 71 eruption. The Wind ICME list[<https://wind.nasa.gov/2012.php>] lists the ICME starting at May 16th 12:28, and ending at May 18th 02:11. ACE observations by <cit.>[<http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm>] list an ICME starting on May 16th 16:00, and ending at May 17th 22:00 UT. The only assertion of an actual shock is from the ACE list of disturbances and transients (seeand )[<http://espg.sr.unh.edu/mag/ace/ACElists/obs_list.html>], with a shock at May 17th 22:00, but it is registered only in magnetic field measurements. Thus, we find no suggestion that there should be a significant ESP signal at Earth.At the location of MSL, neither magnetic nor plasma measurements are available to identify ICMEs and IP shocks. No ESP structures are present in the RAD data. However, as RAD measurements of low energy protons are affected by nonuniform shielding, we cannot rule out the possibility of an ICME associated shock passing at the location of MSL.§ PARTICLE TRANSPORT SIMULATIONS In order to model heliospheric transport of SEPs accelerated during the May 17th event, we simulated the propagation of 3·10^6 test particle protons, from the corona into interplanetary space, using the full-orbit propagation approach of <cit.> and <cit.>. This model naturally accounts for particle drifts and deceleration effects, and allows for the generation of virtual time profiles at many heliospheric observer locations. Our model was newly improved by the inclusion of a HCS, normalised to a thickness ofat , as introduced in <cit.> and as extended to non-planar geometries in <cit.>. We present here the first results of three-dimensional forward modelling of SEP propagation, extending throughout the inner heliosphere, for this event. 
Because we focused on multi-spacecraft observations and the 3D spatial distribution of particle fluxes, we have not performed comparisons with 1D modelling efforts of large SEP events (see, e.g., ) We inject energetic particles into our transport simulation assuming acceleration to happen at a coronal shock-like structure. Acceleration efficiency across a coronal shock front is a complex question in its own right, with applications to the event in question presented in <cit.> and <cit.>. Their analysis of CME expansion suggests a CME width of 100 degrees in longitude with varying efficiency along the front. Using this width, our simulations had difficulty recreating proton time profiles at many heliospheric observer locations. Thus, we chose to assume additional spread of energetic particles in the corona during the early phase of the event. We iterated the width of the injection area, finding one of 180^∘ width in longitude, centered at the flare location, to provide the best results when attempting to recreate observed time profiles. This wide injection region is in agreement with a very wide coronal shock acting as the source of accelerated particles. Injection was performed between equatorial latitudes of ± 60^∘. The injection region is shown in Figure <ref> as a black rectangle.In order to decouple injection and transport effects, we chose to model particle injection through a simplified case. Thus, we inject isotropic protons from the aforementioned region with a uniform source function at a heliospheric height of , at the estimated solar particle release time of 1:40 <cit.>. As most acceleration of particles is assumed to take place at low heliospheric heights of up to a few R_⊙, an instantaneous injection is a fair approximation. Any ESPs accelerated by the interplanetary shock are not modelled. Protons were injected according to a power law of , distributed in the energy range . The chosen power law is close to the value derived by <cit.> from in-situ observations using SOHO/EPHIN. As our focus was on multi-spacecraft observations and modelling over a large spatial extent, we did not model protons in the GeV energy range due to comparison data from GeV-range observations being available only in the vicinity of the Earth. Extending our injection power law higher, whilst maintaining adequate statistics, would have required computational resources beyond the scope of this project. The total simulation duration was set to 72 hours.During transport, we modelled particle scattering using Poisson-distributed scattering intervals, with a mean scattering time in agreement with a mean free path of . Particles experience large-angle scattering in the solar wind frame for which we used a constant radial solar wind speed ofthroughout. The magnetic field was scaled to , consistent with observations. For the winding of the magnetic field, we assumed an average solar rotation rate ofor 25.34 days per rotation. In order to model particle detection at spacecraft, we gathered simulated particle crossings across virtual observer apertures at the locations of STA, MES, Earth, MSL, and STB. For each virtual observer, we used energy bins in agreement with those listed in section <ref> and time binning of 60 minutes. To increase statistics, simulated protons propagating outward from the Sun were gathered over a 10^∘× 10^∘ angular window at each observer location. As the orbital period of Mercury is only 88 days, we implemented longitudinal orbital motion of virtual observers around the Sun. 
Due to the large time bins used, we have not attempted to infer exact onset times from the particle simulations, nor have we explicitly considered twin acceleration scenarios (see, e.g., <cit.>). For the parametrization of the wavy current sheet, we used a least-squares sum method to fit the distance of the r=2.5 R_⊙ potential field neutral line to a wavy model neutral line, resulting in a dipole tilt angle of α_nl=57^∘, a longitudinal offset of ϕ_nl=101^∘, and a peak count multiplier of n_nl=2. This source neutral line at r=2.5 R_⊙, used as the anchor point of the current sheet, is depicted in Figure <ref> as a dashed black curve. Figure <ref> shows the ecliptic distribution of accelerated protons 10 hours after injection (11:40 UT), along with observer locations and Parker spiral connectivity assuming a nominal solar wind speed. Shaded contours show the scaled particle density, in units of cm^-3, between -20 and +20 degrees latitude. Of particular interest is the band of protons close to STB, which have experienced HCS drift. As our simulations do not include a background intensity and provide counts in arbitrary units, the particle densities and intensities had to be calibrated using a normalisation multiplier. Due to the good magnetic connectivity at Earth, we decided to use a near-Earth peak intensity as the reference intensity. For this normalisation, we used a GOES energy channel with a clearly defined peak. Although the background levels at GOES were enhanced due to magnetospheric effects, we assume that the peak values were not affected significantly. Hereafter, for all time profile and peak intensity analysis, results from our simulations were multiplied by a single normalisation constant, chosen such that the peak intensities deduced at Earth from the corresponding channels agree between simulations and observations.
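A minimal sketch of the neutral-line fit described above is given below; the sinusoidal functional form is a simplifying assumption on our part and not necessarily the exact parametrization implemented in our code.

```python
import numpy as np
from scipy.optimize import least_squares

def model_neutral_line(lon_deg, alpha_nl, phi_nl, n_nl):
    """Latitude [deg] of a wavy model neutral line: a sinusoid with tilt
    amplitude alpha_nl, longitudinal offset phi_nl and n_nl excursions
    per rotation (the assumed functional form is illustrative)."""
    return alpha_nl * np.sin(np.radians(n_nl * (lon_deg - phi_nl)))

def fit_neutral_line(lon_deg, lat_deg, n_nl=2):
    """Least-squares fit of (alpha_nl, phi_nl) to the r = 2.5 Rs
    potential-field neutral line, for a fixed peak count multiplier."""
    resid = lambda p: model_neutral_line(lon_deg, p[0], p[1], n_nl) - lat_deg
    return least_squares(resid, x0=[30.0, 90.0]).x

# Synthetic example mimicking the fitted values quoted in the text:
lon = np.linspace(0.0, 360.0, 181)
lat = model_neutral_line(lon, 57.0, 101.0, 2) + np.random.normal(0.0, 2.0, lon.size)
print(fit_neutral_line(lon, lat))   # ~ (57, 101), up to the sinusoid's period
```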
§.§ Comparison with observations: time profiles In this section, we compare the intensity time profiles of simulations and observations. Figure <ref> displays the results of both observations and simulations, with intensity time profiles for selected energy bins at each location: actual observations on the top row and simulation results on the bottom row. For the simulated time profiles, we include error bars, estimating the uncertainty of the intensity as the square root of the registered particle counts. Panels are ordered according to observer footpoint longitude, as shown in Figure <ref>. We first focus on the qualitative shape of the time profiles, proceeding from west to east (right to left) in observer footpoint longitude. At STA, observations show a gradually increasing flux, and the SRTs calculated from the onset times in Table <ref> are many hours after the flare time. This suggests that the location of STA does not have good magnetic connectivity to the injection region at the start of the event. However, the numerical simulation is able to provide a proton time profile in agreement with observations, using the 01:40 UT release time. Protons fill the well-connected field lines with a population which isotropizes, and this population is then co-rotated over the STA position, which becomes magnetically well-connected later in the simulation. STA observations in the lowest two energy bins show an additional feature, with bumps in intensity at approximately 04:52 and 15:25 on DOY 139. Both of these bumps are designated with grey vertical lines in Figure <ref>. As described in section <ref>, an IP shock is detected at STA, and these enhancements at low energies can be explained as ESPs related to the passing IP shock. The first bump would indicate the arrival of an enhanced foreshock region, and the second bump would occur during the actual shock crossing. The simulated results do not show these bumps, as ESP enhancements were not modelled by the SEP transport simulations. At MES, the rapid increase in particle intensity of our simulations does not agree with the observed delayed particle flux. The simulated time profile shows a simple abrupt event, due to an efficient connection to the injection region, although it drops off fast as the observer is rotated westward around the Sun with a rapid 88-day orbital period. Observations seem to suggest that coronally accelerated particles did not propagate efficiently to MES, as the enhancement over background intensities is small and happens too late. Shielding effects due to Mercury or its magnetic field were accounted for by masking out measurements below a threshold altitude. Thus, if an abrupt coronally accelerated component had been present at the position of MES, we should have detected it. A delayed enhancement, possibly due to ESPs, matches well with the reported ICME crossing at 12:09, preceded by a foot of particles accelerated at the IP shock. This enhancement appears stronger in the higher-energy channel, which might indicate that the signal at MES is strongly influenced by particle drifts, as the magnitude of particle drifts scales with energy. Alternatively, the signal in the lower-energy channel might be hidden behind a strong background contamination signal. We note again that ESPs were not modelled in our transport simulations. We also note that, although we only show the two derived energy channels in Figures <ref> and <ref>, the single coincidence channel did not show an abrupt rise either, but rather a time profile similar to that of the derived energy channels shown. This also rules out single-coincidence-channel contamination as a source of the discrepancy. After discussing the observed and simulated time profiles at STA and MES, it is appropriate to recall the assumed magnetic connectivity to these observers based on Figures <ref> and <ref>. The magnetic connectivity footpoint of MES is eastward of the STA connectivity footpoint, i.e. closer to the flare location. Thus, assuming a Parker spiral IMF and a simplistic injection region surrounding the flare location, a strong particle signal at STA should imply a strong signal also at MES. This is in agreement with our simulated results, but in clear disagreement with the observations. One possible explanation for the discrepancies between observations and simulations is that the IMF shape may differ from that of a Parker spiral. We find that STA was in a fast solar wind stream prior to the event, and additionally a SIR was detected at STA on May 16th <cit.>, with a maximum solar wind speed well above the value used in our simulations. Thus, the IMF might have been primed by this stream, providing STA with a connected footpoint significantly east of the one used in our model. As we do not have solar wind speed measurements at Mercury, we cannot make similar educated guesses about the longitudinal position of the well-connected footpoint for MES. Another possible explanation is that smaller-scale effects of the IMF and particle propagation are taking place, invalidating the Parker spiral model.
Recent research into field-line meandering (see, e.g., <cit.>) and SEP cross-field propagation (see, e.g., <cit.> and references therein) has investigated this problem. New missions going close to the Sun will provide key data to validate these theories. Recent research, shown in panel (a) of Figure 6 in <cit.>, suggests, however, that the early-time cross-field variance of a particle distribution is strongly dependent on radial distance. Thus, if we assume a narrowed injection region during the early phase of the event, STA could be connected to the injection region through widely meandering field lines, whereas MES, at a smaller radial distance, would remain outside this region. At the location of Earth, we compare simulated time profiles with observations in three GOES energy channels. The highest energy channel provides an excellent match between simulations and observations, suggesting that acceleration was near-instantaneous in the corona and that Earth was well-connected to the acceleration region. At the middle energy channel, the agreement between simulations and observations is also good, although the observed time profile begins to decrease slightly more rapidly than the simulated one. This may be due to, e.g., differences in particle scattering rates early in the event. At the lowest energy channel, agreement is moderately good, although the rate of intensity decay differs slightly between observations and the simulation. Additionally, an enhancement in observed intensity is found about halfway through DOY 138. Although databases of interplanetary shocks showed only weak indications of a shock passing at Earth, an IP-shock-related ESP event is still the most likely explanation for this feature. At MSL, with a magnetic connection similar to that of Earth, the time profiles agree moderately well with simulations. The observations at MSL seem to show similar intensities in all the different channels, resulting in a near-flat spectrum. The total intensities observed at the detector are more than an order of magnitude lower than the simulated intensities. However, as the general shape of the time profile agrees well with that of the simulations, we suggest that transport and connectivity are not the primary cause of the discrepancy; rather, it is due to the in-flight spacecraft shielding around MSL/RAD, causing particles to decelerate, fragment, or be deflected away. Modelling this effect in detail and performing an inversion on the measured particle flux is rather challenging. We present preliminary corrections accounting for the energy loss of protons in section <ref>. Although the footpoint of STB is separated from the flare region by almost 180 degrees, a weak enhancement in proton flux is seen both in the observations and in the simulation results. There was a SIR in the vicinity of STB in the time period following the event <cit.>. Due to this and the complicated solar wind observations, a weak ICME-driven shock cannot be ruled out. However, the most likely candidate for explaining the SEP flux enhancement at STB is coronally accelerated particles transported along the HCS. The successful simulation of this signal at STB is only possible through our newly improved SEP transport simulation, supporting an IMF with two magnetic polarities separated by a wavy HCS. Particles propagate along the HCS, which is co-rotated over the position of STB (see Figure <ref>).
The difference in onset time and signal duration between simulations and observations can be explained by inaccuracies in the exact position and tilt of the HCS at the position of STB. §.§ Comparison with observations: peak intensities In order to further assess the longitudinal accuracy of our SEP transport simulations, we gathered peak intensities from both simulations and observations for each channel and plotted them according to the estimated footpoint location (see also Figure <ref>). The peak intensities for STB, MSL, GOES, MES, and STA are shown in Figure <ref>, along with the peak intensities deduced from simulations. In determining the observational peak intensities for STA and MES, we excluded time periods deemed to be enhanced by ESP effects. For STA, this exclusion began at 04:52 on DOY 139, corresponding with the arrival of the foreshock region of the IP shock. This foreshock region is visible especially in the lowest energy channel, but somewhat also in the second-lowest one. Comparing the observed and simulated peak intensities at Earth in the two lower channels results in a good match, owing to one of these channels being used for the normalisation of the simulation results. However, observations in the highest channel show smaller intensities than the respective simulation results. We discuss the effects of the injection spectrum on peak intensities at the end of this section. At STB, simulated peak intensities exceed observed intensities by approximately one order of magnitude, but all channels show a similar intensity offset. All channels at STB show only a weak increase over background intensities, which is modelled well by the HCS-transported particles in the simulation. The highest two observed energy channels are somewhat lower than the simulated ones, suggesting an injection-spectrum-related effect, similar to what was seen at Earth. At STA, after excluding all ESP-enhanced regions from the observations, the observed and simulated peak intensities show a similar order-of-magnitude difference as was noted at STB. Similarly to STB, the observations in the two highest energy channels exhibit slightly weaker peak intensities, pointing to the injection spectrum as the culprit. Neither the time profiles nor the peak fluxes of simulations and observations at MES agree with each other, which indicates that the true magnetic connectivity to MES is more complicated than the one used in our simulations. Based on our calibrations, we believe that instrumental effects cannot explain this discrepancy. The simulated injection region was set to a width of 180^∘ in order to provide a good time profile match at STA; however, CME modelling from observations produced shock fronts of only 100^∘ width. A narrower injection region might prevent coronally accelerated particles from reaching MES, but would also result in a poor match for STA. The question of magnetic connectivity from the corona to STA and MES was discussed in detail in section <ref>. If the CME were to transition into an ICME and expand in width further from the Sun, particles accelerated at its shock could be seen as ESPs at MES, thus explaining the observations. At MSL, the observed peak intensities are much lower than those of the simulations, likely due to the in-flight shielding covering much of the detector. As a first step toward correcting the particle fluxes at MSL/RAD, we performed calculations of the energy loss of protons traversing a model of the spacecraft shielding. Proton energy losses in matter are primarily due to ionization, characterized by the Bethe-Bloch equation, which we used in our calculations.
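The sketch below illustrates this correction principle by integrating the Bethe-Bloch mass stopping power of a proton through aluminium-equivalent shielding; the material constants come from standard tables, shell and density corrections are neglected, the example shielding depth is arbitrary, and, as stated below, secondary-particle generation is ignored.

```python
import numpy as np

ME_C2, MP_C2 = 0.511, 938.272            # electron/proton rest energies [MeV]
K = 0.307075                             # 4*pi*N_A*r_e^2*me*c^2 [MeV cm^2/mol]
Z_AL, A_AL, I_AL = 13.0, 26.98, 166e-6   # aluminium; mean excitation I in MeV

def dedx_aluminium(e_kin):
    """Bethe-Bloch mass stopping power -dE/dx [MeV cm^2/g] for a proton
    of kinetic energy e_kin [MeV] in aluminium (no shell/density terms)."""
    gamma = 1.0 + e_kin / MP_C2
    beta2 = 1.0 - 1.0 / gamma**2
    arg = 2.0 * ME_C2 * beta2 * gamma**2 / I_AL
    return K * (Z_AL / A_AL) / beta2 * (np.log(arg) - beta2)

def energy_after_shielding(e0, depth_gcm2, steps=2000):
    """Residual proton energy [MeV] after traversing depth_gcm2 g/cm^2
    of aluminium-equivalent shielding, via step-wise integration."""
    e, dx = e0, depth_gcm2 / steps
    for _ in range(steps):
        e -= dedx_aluminium(e) * dx
        if e <= 1.0:                     # treat the proton as stopped
            return 0.0
    return e

# Example: a 200 MeV proton behind a nominal 20 g/cm^2 of shielding.
print(energy_after_shielding(200.0, 20.0))
```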
We considered the distribution of aluminium-equivalent shielding depth within RAD's viewing angle <cit.>. Due to the complexity involved, we did not account for the generation of secondary particles, which play a major role at low energies. Thus, we produced a corrected peak intensity only for the highest energy channel, shown in Figure <ref> as a black square. This value appears to be a better match with both the simulation results and the GOES observations, showing a similar relationship to the simulated channel as was seen for the corresponding GOES channel. Recreating the original particle intensities in all channels of MSL/RAD will be the topic of future investigations. In comparing peak intensities between observations and simulations, many things must be taken into account. At MSL, shielding weakens the observed intensity in a significant manner, which requires post-processing to correct for. Magnetic connectivity at MES yields contradictory time profiles and peak intensities. At STA and STB we are able to reproduce the time profile shapes, but the peak intensities are over-estimated by our transport simulations. We note, however, that our injection source was uniform in longitude; this is not a realistic estimate, but it allows us to draw conclusions from the peak intensity fits. At longitudes close to the flare location, injection was as simulated and normalised, but at longitudes far away from the flare, the injection efficiency drops, apparently by an order of magnitude. This would be unsurprising, considering that our injection region was set at 180 degrees. Thus, we now have an indication that a strong injection takes place at the observed shock front with a width of ca. 100 degrees, but early-time propagation effects spread particles over a region of up to 180 degrees with lesser intensity. A general trend was that the simulated and observed fluxes for low energy channels were in better agreement than those of higher energy channels. This suggests that our simulated injection power law of γ=2 was too hard, and that the actual solar eruption had injected fewer high-energy particles than simulated. From our fitting, we can deduce that either a softer injection spectrum or a broken power law with weaker injection at high energies is likely to be closer to the truth. Overall, the simulated peak intensities presented in Figure <ref> show that the 3-dimensional propagation simulations have great merit in increasing our understanding of large SEP events. By correctly accounting for particle drifts, we can simulate the propagation of particles over a wide range of energies and thus make educated estimates regarding the injection power law at the Sun. The generally good agreement in how peak intensities are grouped according to footpoint location suggests that both prompt (such as Earth and MSL) and delayed (such as STA) SEP fluences can be modelled, once a longitudinal injection efficiency dependence is found. For this purpose, work such as that done by <cit.> and <cit.> is very useful. The mismatch between observations and simulations at Mercury/MESSENGER shows that the inner heliosphere is a complicated environment, and proper modelling of magnetic connectivity throughout it requires additional effort. §.§ Comparison with observations: pitch-angles Observations of the pitch angle distribution of GeV-class particles for GLE 71 have shown an unusual twin-beam distribution (<cit.>). In our model, we have used a simplified model of the scattering experienced by the SEPs, considering only large-angle scattering events.
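A minimal sketch of that scattering step: at each Poisson-distributed scattering time, the velocity direction is redrawn isotropically in the local solar wind frame while the speed is conserved (the frame transformation itself is omitted here for brevity).

```python
import numpy as np

def isotropic_scatter(velocity, rng=np.random):
    """Large-angle scattering: redraw the direction of the (solar wind
    frame) velocity vector isotropically, conserving the speed."""
    speed = np.linalg.norm(velocity)
    mu = rng.uniform(-1.0, 1.0)             # cosine of the polar angle
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - mu * mu)
    return speed * np.array([s * np.cos(phi), s * np.sin(phi), mu])
```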
In Figure <ref> we show the derived proton pitch-angle distribution at Earth for the early phase of the simulation. We gathered proton crossings across a virtual sphere at the location of Earth, collecting crossings over a 10^∘× 10^∘ angular window, and applied the same intensity scaling as for the earlier plots. We also performed a scaling to account for the solid angle size of each bin. The results indicate that our model is capable of reproducing a twin-stream distribution without including additional magnetic structures such as loops associated with preceding CMEs. We note that some qualitative similarities with figure 6 of <cit.> exist, but a more detailed analysis would require refining our scattering model. § CONCLUSIONS We have presented extensive, detailed multi-spacecraft observations of proton intensities for the solar eruption of May 17th, 2012. We have shown the event to encompass a large portion of the inner heliosphere, extending over a wide range of longitudes, with a strong detection at Earth, MSL, and STA. We were able to analyse SEP transport and magnetic connectivity based on a new, improved 3D test particle model. Our SEP transport model solves the full-orbit 3D motion of test-particle SEPs within heliospheric electric and magnetic fields. The model naturally accounts for co-rotation, particle drifts, and deceleration effects (<cit.>). Our new, improved model includes, for the first time, effects due to a solar magnetic field of two different polarities, separated by a wavy HCS. We model proton injection with a shock-like structure near the Sun, and model interplanetary transport in accordance with a particle mean free path of λ_mfp=0.3 au. We present a novel multi-spacecraft analysis of an SEP event, encompassing all heliolongitudes and a wide range of radial distances. We compare results from multiple spacecraft and particle detectors with virtual observers placed within a large-scale numerical simulation. With our analysis, we improve upon previous studies, usually focused on a single observation platform, providing good agreement between simulations and observations at multiple heliospheric locations. We show that for GLE 71, observers magnetically connected to regions close to the flare location exhibit a rapid rise in proton intensity, followed by a prolonged fall-off. We report how the STEREO-A observations are explained through a combination of co-rotation of an SEP-filled flux tube across the spacecraft and an ESP event, and how the STEREO-B observations can be explained through HCS drift of coronally accelerated protons. For four out of five observer locations, we are able to find a good match in both the qualitative intensity time profiles and the quantitative peak intensities when comparing observations and numerical simulations, if we assume that the injection efficiency weakens as a function of longitudinal distance from the flare location. Our results suggest that modern modelling of large-scale solar eruptions has improved, and has benefited greatly from the opportunities provided by the two STEREO spacecraft, as well as other heliospheric and even planetary missions such as MESSENGER and MSL. SEP forecast tools such as those presented in <cit.> should play an important role in furthering our understanding of solar activity. Our study shows that magnetic connectivity to the injection region, as well as the perpendicular propagation of particles in interplanetary space, are important factors when assessing the risk of SEP events.
Solar wind streams, interaction regions, and concurrent coronal mass ejections with associated magnetic structures alter the IMF and particle transport conditions, yet modern computational methods are capable of impressive modelling of SEP events. Further improvements in the modelling of the background conditions for SEP simulations are required, with 3D magnetohydrodynamic models a likely candidate for future studies. This work has received funding from the UK Science and Technology Facilities Council (STFC; grant ST/M00760X/1) and the Leverhulme Trust (grant RPG-2015-094). We acknowledge the International Space Science Institute, which made part of the collaborations in this paper possible through the ISSI International Team 353 "Radiation Interactions at Planetary Bodies". RAD is supported by the National Aeronautics and Space Administration (NASA, HEOMD) under Jet Propulsion Laboratory (JPL) subcontract 1273039 to Southwest Research Institute, and in Germany by DLR and DLR's Space Administration, grant numbers 50QM0501, 50QM1201, and 50QM1701, to the Christian Albrechts University, Kiel. The RAD data used in this paper are archived in the NASA Planetary Data System's Planetary Plasma Interactions Node at the University of California, Los Angeles. The PPI node is hosted at <http://ppi.pds.nasa.gov/>. The Solar and Heliospheric Observatory (SOHO) is a mission of international collaboration between ESA and NASA. Data from the SOHO/ERNE (Energetic and Relativistic Nuclei and Electron) instrument were provided by the Space Research Laboratory at the University of Turku, Finland. Wilcox Solar Observatory data used in this study were obtained via the web site <http://wso.stanford.edu> at 2017:01:12_05:40:05 PST, courtesy of J.T. Hoeksema. The Wilcox Solar Observatory is currently supported by NASA. MESSENGER data were calibrated using measurements from neutron monitors of the Bartol Research Institute, which are supported by the National Science Foundation. Work by D. J. Lawrence is supported by NASA's MESSENGER Participating Scientist Program through NASA grant NNX08AN30G. All original MESSENGER data reported in this paper are archived by the NASA Planetary Data System (<http://pds-geosciences.wustl.edu/missions/messenger/index.htm>). The authors wish to thank Dr. Nina Dresing for her assistance in accessing and using STEREO data. The authors gratefully acknowledge the important comments and suggestions provided by the anonymous referee. § MESSENGER FLUX CALIBRATION As the MESSENGER NS instrument was not originally designed with SEP proton measurements in mind, calibration and validation of the derived fluxes is necessary. Absolute flux profiles of protons for the MES ≥45 MeV and ≥125 MeV energy thresholds were determined using the modelled response and validated with measures of the galactic cosmic ray (GCR) flux. Following <cit.>, the measured count rate, C, is related to the proton flux, F_0 (in units of protons sec^-1 sr^-1 cm^-2), by C = GAF_0, where G is the geometry factor in sr and A=100 cm^2 is the detector area. For the two highest energy ranges, the values for G are G_≥ 125MeV = 1.1 sr and G_≥ 45MeV = 4.25 sr <cit.>. For borated plastic singles, the geometry factor is approximately G_singles≈ 4π - 2G_≥ 45MeV. However, the singles count rate likely contains a substantial fraction of contamination and non-proton background counts, such that its absolute calibration for energetic protons is highly uncertain.
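As a concrete illustration of the conversion just described, the sketch below inverts C = G A F_0 using the geometry factors quoted above; the example count rate is arbitrary.

```python
import numpy as np

AREA = 100.0                             # detector area A [cm^2]
G_GE125, G_GE45 = 1.1, 4.25              # geometry factors [sr]
G_SINGLES = 4.0 * np.pi - 2.0 * G_GE45   # rough singles estimate [sr]

def count_rate_to_flux(count_rate, geometry_factor, area=AREA):
    """Invert C = G * A * F0 to obtain the proton flux F0
    [protons s^-1 sr^-1 cm^-2] from a count rate C [s^-1]."""
    return count_rate / (geometry_factor * area)

# Example: a triple-coincidence (>= 125 MeV) count rate, divided by the
# mean validation ratio (1.07 for triples; see below) as a correction.
flux_ge125 = count_rate_to_flux(50.0, G_GE125) / 1.07
print(flux_ge125)
```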
The measured count rates <cit.> are converted to fluxes using the above relation with the appropriate geometry factors.The derived fluxes for the ≥45 MeV and ≥125 MeV thresholds were validated based on a comparison with Earth-based neutron monitor counts that were converted to particle flux using the process given by <cit.>. Specifically, neutron monitor counts from McMurdo <cit.> were empirically converted to a solar modulation parameter, which is used as input to a GCR flux parameterization of <cit.> and <cit.>. The total GCR flux accounts for both protons and proton-equivalent alpha particles using the formulation given by <cit.>. When the NS-measured fluxes are compared to the fluxes derived through the neutron monitor data, we find an average absolute agreement of <10% for the ≥125 MeV flux and <20% for the ≥45 MeV flux, which validates the modelled response of <cit.>. The flux rates for the time period of March 26th 2011 to April 30th 2015 are plotted in Figure <ref>. The mean validation ratios of 1.07 for triple coincidences, 1.15 for double coincidence channel LG1 and 1.17 for double coincidence channel LG2 were applied as correction coefficients to the extracted MES proton fluxes. | http://arxiv.org/abs/1706.08458v2 | {
"authors": [
"M. Battarbee",
"J. Guo",
"S. Dalla",
"R. Wimmer-Schweingruber",
"B. Swalwell",
"D. J. Lawrence"
],
"categories": [
"astro-ph.SR",
"physics.space-ph"
],
"primary_category": "astro-ph.SR",
"published": "20170626162627",
"title": "Multi-spacecraft observations and transport simulations of solar energetic particles for the May 17th 2012 event"
} |
The ARIEL Mission Reference Sample Tiziano Zingales Giovanna Tinetti Ignazio PillitteriJérémy Leconte Giuseppina Micela Subhajit SarkarReceived: date / Accepted: date ============================================================================================================ Awareness of the road scene is an essential component for both autonomous vehicles and Advanced Driver Assistance Systems, and is gaining importance for both academia and car companies. This paper presents a way to learn a semantic-aware transformation which maps detections from a dashboard camera view onto a broader bird's eye occupancy map of the scene. To this end, a huge synthetic dataset featuring 1M pairs of frames, taken from both car dashboard and bird's eye view, has been collected and automatically annotated. A deep network is then trained to warp detections from the first to the second view. We demonstrate the effectiveness of our model against several baselines and observe that it is able to generalize on real-world data despite having been trained solely on synthetic ones. § INTRODUCTION Vision-based algorithms and models have been massively adopted in current-generation ADAS solutions. Moreover, recent research achievements on scene semantic segmentation <cit.>, road obstacle detection <cit.> and driver's gaze, pose and attention prediction <cit.> are likely to play a major role in the rise of autonomous driving. As suggested in <cit.>, three major paradigms can be identified for vision-based autonomous driving systems: mediated perception approaches, based on a total understanding of the scene around the car; behavior reflex methods, in which the driving action is regressed directly from the sensory input; and direct perception techniques, which fuse elements of the previous approaches and learn a mapping between the input image and a set of interpretable indicators summarizing the driving situation. Following this last line of work, in this paper we develop a model for mapping vehicles across different views. In particular, our aim is to warp vehicles detected from a dashboard camera view into a bird's eye occupancy map of the surroundings, which is an easily interpretable proxy of the road state. Since it is almost impossible to collect a dataset with this kind of information in the real world, we rely exclusively on synthetic data for learning this projection. We aim to create a system close to surround vision monitoring ones (also called around-view cameras), which can be useful tools for assisting drivers during maneuvers by, for example, performing trajectory analysis of vehicles outside the driver's own visual field. In this framework, our contribution is twofold: * We make available a huge synthetic dataset (> 1 million examples) which consists of pairs of frames corresponding to the same driving scene captured from two different views. Besides the vehicle location, auxiliary information such as the distance and yaw of each vehicle at each frame is also present. * We propose a deep learning architecture for generating bird's eye occupancy maps of the surroundings in the context of autonomous and assisted driving. Our approach does not require a stereo camera, nor more sophisticated sensors like radar and lidar. Conversely, we learn how to project detections from the dashboard camera view onto a broader bird's eye view of the scene (see Fig. <ref>).
To this aim, we combine a learned geometric transformation and visual cues that preserve object size and orientation in the warping procedure. Dataset, code and pre-trained model are publicly available and can be found at <http://imagelab.ing.unimore.it/scene-awareness>. § RELATED WORK §.§.§ Surround view Few works in the literature tackle the problem of the vehicle's surround view. Most of these approaches are vision- and geometry-based and are specifically tailored for helping drivers during parking maneuvers. In particular, in <cit.> a perspective projection image is transformed into its corresponding bird's eye view through a fitting-parameter search algorithm. The authors of <cit.> exploited the calibration of six fisheye cameras to integrate six images into a single one, using a dynamic programming approach. Algorithms for creating, storing and viewing surround images, relying on synchronized and aligned cameras, were described in <cit.>. Sung et al. <cit.> proposed a camera-model-based algorithm to reconstruct and view multi-camera images. In <cit.>, a homography matrix is used to perform a coordinate transformation: visible markers are required in the input images during the camera calibration process. Recently, Zhang et al. <cit.> proposed a surround-view camera solution designed for embedded systems, based on geometric alignment to correct lens distortions, photometric alignment to correct brightness and color mismatch, and composite view synthesis. §.§.§ Videogames for collecting data The use of synthetic data has recently gained considerable importance in the computer vision community for several reasons. First, modern open-world games exhibit constantly increasing realism: they do not only feature photorealistic lights and textures, but also show plausible game dynamics and lifelike autonomous-entity AI <cit.>. Furthermore, most research fields in computer vision are now tackled by means of deep networks, which are notoriously data-hungry and need large corpora to be properly trained. Particularly in the context of assisted and autonomous driving, the opportunity to exploit virtual yet realistic worlds for developing new techniques has been embraced widely: indeed, this makes it possible to postpone the (very expensive) validation in the real world to the moment in which a new algorithm already performs reasonably well in the simulated environment <cit.>. Building upon this tendency, <cit.> relies on the TORCS simulator to learn an interpretable representation of the scene useful for the task of autonomous driving. However, while TORCS <cit.> is a powerful simulation tool, it is still severely limited by the fact that both its graphics and its game variety and dynamics are far from being realistic. Several elements mark our approach as original. In principle, we want our surround view to include not only nearby elements, as in commercial geometry-based systems, but also most of the elements detected in the acquired dashboard camera frame. Additionally, no specific initialization or alignment procedures are necessary: in particular, no camera calibration and no visible alignment points are required. Finally, we aim to preserve the correct dimensions of detected objects, whose shapes are mapped onto the surround view consistently with their semantic class. § PROPOSED DATASET In order to collect data, we exploit the Script Hook V library <cit.>, which allows the use of Grand Theft Auto V (GTAV) video game native functions <cit.>.
We develop a framework in which the game camera automatically toggles between the frontal and bird's eye views at each game time step: in this way, we are able to gather information about the spatial occupancy of the vehicles in the scene from both views (i.e. bounding boxes, distances, yaw rotations). We associate vehicle information across the two views by querying the game engine for entity IDs. More formally, for each frame t, we compute the set of entities which appear in both views as E(t) = E_frontal(t) ∩ E_birdeye(t), where E_frontal(t) and E_birdeye(t) are the sets of entities that appear at time t in the frontal and bird's eye view, respectively. Entities e(t) ∈ E(t) constitute the candidate set for frame t, C(t); other entities are discarded. Unfortunately, we found that raw data coming from the game engine are not always accurate (Fig. <ref>). To deal with this problem, we implement a post-processing pipeline to discard noisy data from the candidate set C(t). We define a discriminator function f(e(t)) : C ↦{ 0, 1 } which is positive when the information on the dumped data e(t) is reliable and zero otherwise. Thus we can define the final filtered dataset as ⋃_t=0^T D(t), where D(t) = { c_i(t) | f(c_i(t)) > 0 }, T being the total number of frames recorded. From an implementation standpoint, we employ a rule-based ontology which leverages entity information (e.g. vehicle model, distance etc.) to decide whether the bounding box of that entity can be considered reasonable. This implementation has two main advantages: first, it is lightweight and very fast in filtering massive amounts of data. Furthermore, the rule parameters can be tuned to generate different dataset distributions (e.g. removing all trucks, keeping only cars closer than 10 meters, etc.). Each entry of the dataset is a tuple containing: * frame_f, frame_b: 1920 × 1080 frames from the frontal and bird's eye camera view, respectively; * ID_e, model_e: identifiers of the entity (e) in the scene and of the vehicle's type; * frontal_coords_e, birdeye_coords_e: the coordinates of the bounding box that encloses the entity; * distance_e, yaw_e: distance and rotation of the entity w.r.t. the player. Fig. <ref> shows the distributions of entity rotation and distance across the collected data. § MODEL At first glance, the problem we address could be mistaken for a bare geometric warping between different views. Indeed, this is not the case, since targets are not completely visible from the dashboard camera view and their dimensions in the bird's eye map depend on both the object's visual appearance and its semantic category (e.g. a truck is longer than a car). Additionally, it cannot be cast as a correspondence problem, since no bird's eye view information is available at test time. Conversely, we tackle the problem from a deep learning perspective: dashboard camera information is employed to learn a spatial occupancy map of the scene seen from above. Our proposed architecture is composed of two main branches, as depicted in Fig. <ref>. The first branch takes as input image crops of vehicles detected in the dashboard camera view. We extract deep representations by means of the ResNet50 network <cit.>, taking advantage of pre-training for image recognition on ImageNet <cit.>. To this end, we discard the top fully-connected dense layer, which is tailored to the original classification task. This part of the model is able to extract semantic features from the input images, even though it is unaware of the location of the bounding box in the scene.
Conversely, the second branch consists of a deep Multi-Layer Perceptron (MLP), composed of 4 fully-connected layers, which is fed with the bounding box coordinates (4 for each detection), learning to encode the input into a 256-dimensional feature space. Due to its input domain, this segment of the model is not aware of object semantics, and can only learn a spatial transformation between the two planes. Both the appearance features and the encodings of the bounding box coordinates are then merged through concatenation and undergo a further fully-connected decoder which predicts the vehicles' locations in the bird's eye view. Since our model combines information about the object's location with semantic hints on the content of the bounding box, we refer to it as the Semantic-aware Dense Projection Network (SDPN in short). Training details: the ImageNet <cit.> mean pixel value is subtracted from the input crops, which are then resized to 224 × 224 before being fed to the network. During training, we freeze the ResNet50 parameters. Ground truth coordinates in the bird's eye view are normalized in the range [-1, 1]. Dropout is applied after each fully-connected layer with drop probability 0.25. The whole model is trained end-to-end using Mean Squared Error as the objective function and exploiting the Adam <cit.> optimizer with the following parameters: lr=0.001, β_1=0.9, β_2=0.999. § EXPERIMENTAL RESULTS We now assess our proposal by comparing its performance against several baselines. Due to the peculiar nature of the task, the choice of competitor models is not trivial. To validate the choice of a learning perspective against a geometrical one, we introduce a first baseline model that employs a projective transformation to estimate a mapping between corresponding points in the two views. Such correspondences are collected from the bottom corners of both source and target boxes in the training set, then used to estimate a homography matrix in a least-squares fashion (i.e. minimizing the reprojection error). Since the correspondences mostly belong to the street, which is a planar region, the choice of a projective transformation seems reasonable. The height of the target box, however, cannot be recovered from the projection; it is thus set to the average height among the training examples. We refer to this model as the homography model. Additionally, we design a second baseline by quantizing the spatial locations in both views on a regular grid, and learn point mappings in a probabilistic fashion. For each cell G^f_i in the frontal view grid, a probability distribution is estimated over the bird's eye grid cells G^b_j, encoding the probability that a pixel belonging to G^f_i falls in the cell G^b_j. During training, the top-left and bottom-right bounding box corners in both views are used to update these densities. At prediction stage, given a test point p_k which lies in cell G^f_i, we predict the destination point by sampling from the corresponding cell distribution. We fix the grid resolution to 108x192, meaning a 10x quantization along both axes, and refer to this baseline as the grid model. It could be questioned whether the appearance of the bounding box content in the frontal view is needed at all in estimating the target coordinates, given sufficient training data and a sufficiently powerful model. In order to determine the importance of the visual input in the process of estimating the bird's eye occupancy map, we also train an additional model with approximately the same number of trainable parameters as our proposed model SDPN, but fully connected from input to output coordinates. We refer to this last baseline as MLP.
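Before turning to the results, we give a minimal sketch of SDPN and its training setup; the hidden-layer widths of the MLP and decoder are our own assumptions, while the frozen ResNet50 backbone, the dropout probability and the optimizer settings follow the description above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

def fc_block(n_in, n_out):
    # fully-connected layer followed by ReLU and dropout (p = 0.25)
    return nn.Sequential(nn.Linear(n_in, n_out), nn.ReLU(), nn.Dropout(0.25))

class SDPN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = resnet50(pretrained=True)   # appearance branch
        self.backbone.fc = nn.Identity()            # drop the classifier head
        for p in self.backbone.parameters():        # ResNet50 stays frozen
            p.requires_grad = False
        # coordinate branch: 4 fully-connected layers into a 256-d space
        self.mlp = nn.Sequential(fc_block(4, 64), fc_block(64, 128),
                                 fc_block(128, 256), fc_block(256, 256))
        # decoder over the concatenated features -> bird's eye coordinates
        self.decoder = nn.Sequential(fc_block(2048 + 256, 512),
                                     nn.Linear(512, 4), nn.Tanh())

    def forward(self, crop, box):
        # crop: (B, 3, 224, 224) mean-subtracted crops; box: (B, 4) coords
        feats = torch.cat([self.backbone(crop), self.mlp(box)], dim=1)
        return self.decoder(feats)   # targets are normalized to [-1, 1]

model = SDPN()
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.001, betas=(0.9, 0.999))
criterion = nn.MSELoss()
```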
The evaluation results are summarized in the following table, also reported in Fig. <ref> (a):

Model   IoU ↑   CD ↓    hE ↓   wE ↓   arE ↓
homo    0.13    191.8   0.28   0.34   0.38
grid    0.18    154.3   0.74   0.70   1.30
MLP     0.32    96.5    0.25   0.25   0.29
SDPN    0.37    78.0    0.21   0.24   0.29

[Figure: (a) table summarizing the results of the proposed SDPN model against the baselines; (b) degradation of IoU performance as the distance to the detected vehicle increases.] For the comparison, we rely on the following metrics: * Intersection over Union (IoU): a measure of the quality of the predicted bounding box BB_p with respect to the target BB_t: IoU(BB_p, BB_t) = A(BB_p ∩ BB_t)/A(BB_p ∪ BB_t), where A(R) refers to the area of the rectangle R; * Centroid Distance (CD): the distance in pixels between the box centers, as an indicator of localization quality[Please recall that images are 1920x1080 pixels in size.]; * Height and Width Error (hE, wE): the average error on the bounding box height and width respectively, expressed as a percentage w.r.t. the ground truth BB_t size; * Aspect Ratio mean Error (arE): the absolute difference in aspect ratio between BB_p and BB_t: arE = |BB_p.w/BB_p.h - BB_t.w/BB_t.h|. The evaluation of the baselines and the proposed model is reported in Fig. <ref> (a). The results suggest that both the homography and grid models are too naive to capture the complexity of the task and fail to properly warp vehicles into the bird's eye view. In particular, the grid baseline performs poorly, as it only models a point-wise transformation between bounding box corners, disregarding information about the overall input bounding box size. On the contrary, the MLP processes the bounding box as a whole and provides a reasonable estimation. However, it still misses the chance to properly recover the length of the bounding box in the bird's eye view, being unaware of the entity's visual appearance. Instead, SDPN is able to capture the object's semantics, which is a primary cue for correctly inferring the vehicle's location and shape in the target view. A second experiment investigates how the vehicle's distance affects the warping accuracy. Fig. <ref> (b) highlights that the performance of all models degrades as the distance of the target vehicles increases. Indeed, closer examples exhibit lower variance (e.g. they are mostly related to the car ahead and the ones approaching from the opposite direction) and are thus easier to model. However, it can be noticed that, moving forward along the distance axis, the gap between SDPN and MLP gets wider. This suggests that the additional visual input adds robustness in these challenging situations. A real-world case study. In order to judge the capability of our model to generalize to real-world data, we test it using authentic driving videos taken from a roof-mounted camera <cit.>. We rely on a state-of-the-art detector <cit.> to get the bounding boxes of vehicles in the frontal view. As no ground truth is available for these sequences, performance is difficult to quantify precisely. Nonetheless, we show qualitative results in Fig. <ref>: it can be appreciated how the network is able to correctly localize other vehicles' positions, despite having been trained exclusively on synthetic data. SDPN can perform inference at approximately 100 Hz on an NVIDIA TitanX GPU, which demonstrates the suitability of our model for integration in an actual assisted or autonomous driving pipeline.
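For reference, the metrics defined earlier in this section admit a direct implementation; a minimal sketch, with boxes given as (x1, y1, x2, y2) tuples:

```python
def iou(bp, bt):
    """Intersection over Union of predicted and target boxes."""
    ix = max(0.0, min(bp[2], bt[2]) - max(bp[0], bt[0]))
    iy = max(0.0, min(bp[3], bt[3]) - max(bp[1], bt[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(bp) + area(bt) - inter)

def centroid_distance(bp, bt):
    """CD: distance in pixels between the box centers."""
    cxp, cyp = (bp[0] + bp[2]) / 2.0, (bp[1] + bp[3]) / 2.0
    cxt, cyt = (bt[0] + bt[2]) / 2.0, (bt[1] + bt[3]) / 2.0
    return ((cxp - cxt) ** 2 + (cyp - cyt) ** 2) ** 0.5

def size_errors(bp, bt):
    """hE, wE: relative height/width errors; arE: aspect-ratio error."""
    hp, wp = bp[3] - bp[1], bp[2] - bp[0]
    ht, wt = bt[3] - bt[1], bt[2] - bt[0]
    return abs(hp - ht) / ht, abs(wp - wt) / wt, abs(wp / hp - wt / ht)
```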
§ CONCLUSIONS In this paper we have presented two main contributions. The first is a new high-quality synthetic dataset, featuring a huge amount of dashboard camera and bird's eye frames, in which the spatial occupancy of a variety of vehicles (i.e. bounding boxes, distance, yaw) is annotated. The second is a deep-learning-based model that tackles the problem of mapping detections onto a different view of the scene. We argue that these maps could be useful in an assisted driving context, in order to facilitate the driver's decisions by making available in one place a concise representation of the road state. Furthermore, in an autonomous driving scenario, the inferred vehicle positions could be integrated with other sensory data, such as radar or lidar, by means of e.g. a Kalman filter, to reduce the overall uncertainty.
"authors": [
"Andrea Palazzi",
"Guido Borghi",
"Davide Abati",
"Simone Calderara",
"Rita Cucchiara"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20170626153953",
"title": "Learning to Map Vehicles into Bird's Eye View"
} |
| http://arxiv.org/abs/1706.08534v3 | {
"authors": [
"Sebastian Bruggisser",
"Thomas Konstandin",
"Geraldine Servant"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170626180004",
"title": "CP-violation for Electroweak Baryogenesis from Dynamical CKM Matrix"
} |
Re-Evaluating the Netflix Prize - Human Uncertainty and its Impact on Reliability Sergej Sizov 26 December, 2017 =================================================================================== We provide the existence and asymptotic description of solitary wave solutions to a class of modified Green-Naghdi systems, modeling the propagation of long surface or internal waves. This class was recently proposed by Duchêne, Israwi and Talhouk <cit.> in order to improve the frequency dispersion of the original Green-Naghdi system while maintaining the same precision. The solitary waves are constructed from the solutions of a constrained minimization problem. The main difficulties stem from the fact that the functional at stake involves low-order non-local operators, intertwining multiplications and convolutions through Fourier multipliers. § INTRODUCTION §.§ Motivation In this work, we study solitary traveling waves for a class of long-wave models for the propagation of surface and internal waves. Starting with the serendipitous discovery and experimental investigation by John Scott Russell, the study of solitary waves at the surface of a thin layer of water in a canal has a rich history <cit.>. In particular, it is well known that the most widely used nonlinear and dispersive models for the propagation of surface gravity waves, such as the Korteweg-de Vries equation or the Boussinesq and Green-Naghdi systems, admit explicit families of solitary waves <cit.>. These equations can be derived as asymptotic models for the so-called water waves system, describing the motion of a two-dimensional layer of ideal, incompressible, homogeneous, irrotational fluid with a free surface and a flat impermeable bottom; we let the reader refer to <cit.> and references therein for a detailed account of the rigorous justification of these models. Among them, the Green-Naghdi model is the most precise, in the sense that it does not assume that the surface deformation is small. However, the validity of all these models relies on the hypothesis that the depth of the layer is thin compared with the horizontal wavelength of the flow and, as expected, the models do not describe the system accurately (for instance the dispersion relation of infinitesimally small waves) in a deep-water situation. In order to tackle this issue, one of the authors has recently proposed in <cit.> a new family of models:

 ∂_t ζ + ∂_x w = 0,
 ∂_t ( h^-1 w + 𝒯^F[h](h^-1 w) ) + g ∂_x ζ + (1/2) ∂_x ( (h^-1 w)^2 ) = ∂_x ( 𝒬^F[h, h^-1 w] ),

where

 𝒯^F[h]u ≡ -(1/3) h^-1 ∂_x F{ h^3 ∂_x F{u} },  𝒬^F[h,u] ≡ (1/3) u h^-1 ∂_x F{ h^3 ∂_x F{u} } + (1/2) ( h ∂_x F{u} )^2.

Here, ζ is the surface deformation, h = d + ζ the total depth (where d is the depth of the layer at rest), u the layer-averaged horizontal velocity, w = h u the horizontal momentum and g the gravitational acceleration; see Figure <ref>. Finally, F = F(D) is a Fourier multiplier,

 (F{φ})^∧(k) = F(k) φ̂(k).

The original Green-Naghdi model is recovered when setting F(k) ≡ 1. Any other choice satisfying F(k) = 1 + 𝒪(k^2) enjoys the same precision (in the sense of consistency) in the shallow-water regime, and the specific choice of F(k) = √( 3/(d|k| tanh(d|k|)) - 3/(d^2|k|^2) ) yields a model whose linearization around constant states fits exactly with that of the water waves system.
Compared with other strategies, such as the Benjamin-Bona-Mahony trick <cit.> or using different choices of velocity unknowns <cit.>, an additional advantage of (<ref>) is that it preserves the Hamiltonian structure of the model, which turns out to play a key role, since the existence of solitary waves will be deduced from a variational principle. The study of <cit.> is not restricted to surface propagation, but is rather dedicated to the propagation of internal waves at the interface between two immiscible fluids, confined above and below by rigid, impermeable and flat boundaries. Such a configuration appears naturally as a model for the ocean, as salinity and temperature may induce sharp density stratification, so that internal solitary waves are observed in many places <cit.>. Due to the weak density contrast, the observed solitary waves typically have much larger amplitude than their surface counterparts, hence the bilayer extension of the Green-Naghdi system introduced by <cit.>, often called the Miyata-Choi-Camassa model, is a very natural choice. It however suffers from strong Kelvin-Helmholtz instabilities (in fact stronger than those of the water waves system for large frequencies), and the work in <cit.> was motivated by taming these instabilities. The modified bilayer system reads

 ∂_t ζ + ∂_x w = 0,
 ∂_t ( ((h_1 + γ h_2)/(h_1 h_2)) w + 𝒯^F_γ,δ[ζ]w ) + (γ+δ) ∂_x ζ + (1/2) ∂_x ( ((h_1^2 - γ h_2^2)/(h_1 h_2)^2) w^2 ) = ∂_x ( 𝒬^F_γ,δ[ζ,w] ),

where we denote h_1 = 1-ζ, h_2 = δ^-1+ζ,

 𝒯^F_γ,δ[ζ]w ≡ 𝒯_2^F[h_2](h_2^-1 w) + γ 𝒯_1^F[h_1](h_1^-1 w),  𝒬^F_γ,δ[ζ,w] ≡ 𝒬_2^F[h_2, h_2^-1 w] - γ 𝒬_1^F[h_1, h_1^-1 w],

with

 𝒯_i^F[h_i]u_i ≡ -(1/3) h_i^-1 ∂_x F_i{ h_i^3 ∂_x F_i{u_i} },  𝒬_i^F[h_i,u_i] ≡ (1/3) u_i h_i^-1 ∂_x F_i{ h_i^3 ∂_x F_i{u_i} } + (1/2) ( h_i ∂_x F_i{u_i} )^2.

Here, ζ represents the deformation of the interface, h_1 (resp. h_2) is the depth of the upper (resp. lower) layer, u_1 (resp. u_2) is the layer-averaged horizontal velocity of the upper (resp. lower) layer and w = h_1 h_2 (u_2 - γ u_1)/(h_1 + γ h_2) is the shear momentum. In this formulation we have used dimensionless variables, so that the depth at rest of the upper layer is scaled to 1, whereas that of the lower layer is δ^-1, in which δ is the ratio of the depth at rest of the upper layer to the depth at rest of the lower layer (see Figure <ref>). Similarly, γ is the ratio of the upper layer density to the lower layer density. As a consequence of our scaling, the celerity of infinitesimally small and long waves is c_0 = 1. Once again, F_i (i=1,2) are Fourier multipliers. The choice F_i^id(k) ≡ 1 yields the Miyata-Choi-Camassa model, while F_i^imp(k) = √( 3/(δ_i^-1|k| tanh(δ_i^-1|k|)) - 3/(δ_i^-2 |k|^2) ), with the convention δ_1 = 1, δ_2 = δ, fits the behavior of the full bilayer Euler system around constant states, and thus gives hope for an improved precision when weak nonlinearities are involved. Note that, compared to equations (7)–(9) in <cit.>, we have scaled the variables so that the shallowness parameter μ and the amplitude parameter ϵ do not appear in the equations. This is for notational convenience, since these parameters do not play a direct role in our results. On the other hand, we only expect the above model to be relevant for describing water waves in the regime μ ≪ 1, and the solutions that we construct in the end are found in the long-wave regime ϵ, μ ≪ 1. In the following, we study solitary waves for the bilayer system (<ref>), noting that setting γ=0 immediately yields the corresponding result for the one-layer situation, namely system (<ref>).
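Although elementary, it is instructive to check the claimed behavior of F^imp numerically; the quick sketch below (unit depth, i.e. δ_i = 1, assumed) verifies that F^imp(k) → 1 as k → 0 and that F^imp(k) ∼ √3 |k|^(-1/2) for large |k|, consistent with the value θ = 1/2 appearing in the admissibility assumptions below.

```python
import numpy as np

def F_imp(k):
    """Improved multiplier F^imp(k) = sqrt(3/(k tanh k) - 3/k^2),
    written here for unit depth (delta_i = 1)."""
    k = np.asarray(k, dtype=float)
    return np.sqrt(3.0 / (k * np.tanh(k)) - 3.0 / k**2)

k_small = np.array([1e-3, 1e-2, 1e-1])
k_large = np.array([1e2, 1e3, 1e4])

print(F_imp(k_small))                      # -> 1 + O(k^2): values close to 1
print(F_imp(k_large) * np.sqrt(k_large))   # -> sqrt(3): |k|^(-1/2) decay
```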
Our results are valid for a large class of parameters γ, δ and Fourier multipliers F_1, F_2, described hereafter. Our results are twofold: * We prove the existence of a family of solitary wave solutions for system (<ref>); * We provide an asymptotic description of this family in the long-wave regime. These solitary waves are constructed from the Euler-Lagrange equation associated with a constrained minimization problem, as made possible by the Hamiltonian structure of system (<ref>). There are however several difficulties compared with standard works in the literature following a similar strategy (see e.g. <cit.> and references therein). Our functional cannot be written as the sum of a linear dispersive contribution and a nonlinear pointwise contribution: Fourier multipliers and nonlinearities are entangled. What is more, the operators involved are typically of low order (the multiplier F^imp is a smoothing operator). In order to deal with this situation, we follow a strategy based on penalization and concentration-compactness, used in a number of recent papers on the water waves problem (see e.g. <cit.> and references therein) and, in particular, in a recent work by one of the authors on nonlocal model equations with weak dispersion <cit.>. Thus we show that the strategy therein may be favorably applied to bidirectional systems of equations, in addition to unidirectional scalar equations such as the Whitham equation. Roughly speaking, the strategy is the following. The minimization problem is first solved in periodic domains, using a penalization argument to deal with the fact that the energy functional is not coercive. This allows us to construct a special minimizing sequence for the real-line problem by letting the period tend to infinity, which is essential to rule out the dichotomy scenario in Lions' concentration-compactness principle. The long-wave description follows from precise asymptotic estimates and standard properties of the limiting (Korteweg-de Vries) model. When the Fourier multipliers F_i have sufficiently high order, we can in fact avoid the penalization argument and consider the minimization problem on the real line directly, since any minimizing sequence is then also a special minimizing sequence. In particular, this is the case for the original Miyata-Choi-Camassa model (and of course also the Green-Naghdi system). Our existence proof unfortunately gives no information about stability, since our variational formulation does not involve conserved functionals; see the discussion in Section <ref>. If sufficiently strong surface tension is included in the model, we expect that a different variational formulation could be used, which would also yield a conditional stability result (see <cit.>). A similar situation appears e.g. in the study of Boussinesq systems <cit.>. §.§ The minimization problem We now set up the minimization problem which allows us to obtain solitary waves of system (<ref>). We seek traveling waves of (<ref>), namely solutions of the form (abusing notation) ζ(t,x) = ζ(x-ct), w(t,x) = w(x-ct), from which we deduce

 -c ∂_x ζ + ∂_x w = 0;
 -c ∂_x ( 𝔗^F_γ,δ[ζ]w ) + (γ+δ) ∂_x ζ + (1/2) ∂_x ( ((h_1^2 - γ h_2^2)/(h_1^2 h_2^2)) w^2 ) - ∂_x ( 𝒬^F_γ,δ[ζ,w] ) = 0,

where we denote 𝔗^F_γ,δ[ζ]w ≡ 𝔗_2^F[h_2](h_2^-1 w) + γ 𝔗_1^F[h_1](h_1^-1 w), with 𝔗_i^F[h_i]u_i ≡ u_i + 𝒯_i^F[h_i]u_i. Integrating these equations and using the assumption (since we restrict ourselves to solitary waves) lim_x→∞ ζ(x) = lim_x→∞ w(x) = 0 yields the system of equations

 -c ζ + w = 0,
 -c 𝔗^F_γ,δ[ζ]w + (γ+δ) ζ + (1/2) ((h_1^2 - γ h_2^2)/(h_1 h_2)^2) w^2 - 𝒬^F_γ,δ[ζ,w] = 0.
We now observe that system (<ref>) enjoys a Hamiltonian structure. Indeed, define the functional

 ℋ(ζ,w) ≡ (1/2) ∫_-∞^∞ (γ+δ) ζ^2 + w 𝔗^F_γ,δ[ζ]w dx.

Under reasonable assumptions on F_1, F_2, and for sufficiently regular ζ, 𝔗^F_γ,δ[ζ] defines a well-defined, symmetric, positive definite operator <cit.>. We may thus introduce the variable

 v ≡ 𝔗^F_γ,δ[ζ] w,

and write

 ℋ(ζ,v) ≡ (1/2) ∫_-∞^∞ (γ+δ) ζ^2 + v (𝔗^F_γ,δ[ζ])^-1 v dx.

It is now straightforward to check that (<ref>) can be written in terms of functional derivatives of ℋ:

 ∂_t ζ = -∂_x ( δ_v ℋ );  ∂_t v = -∂_x ( δ_ζ ℋ ).

It is therefore tempting to seek traveling waves through the time-independent quantities ℋ(ζ,v) and ℐ(ζ,v) ≡ ∫_-∞^∞ ζ v dx. Indeed, any solution to (<ref>) is a critical point of the functional ℋ - cℐ:

 δ( ℋ(ζ,v) - c ℐ(ζ,v) ) = 0 ⟺ δ_v ℋ - c ζ = 0 and δ_ζ ℋ - c v = 0,

which, by (<ref>), is the desired system of equations. However, notice that, from Weyl's essential spectrum theorem, one has spec_ess( δ^2 ℋ(ζ,v) - c δ^2 ℐ(ζ,v) ) = spec_ess( δ^2 ℋ_∞(ζ,v) - c δ^2 ℐ_∞(ζ,v) ), where ℋ_∞, ℐ_∞ are the asymptotic operators as |x| → ∞,

 δ^2 ℋ_∞(ζ,v) - c δ^2 ℐ_∞(ζ,v) = δ^2 ℋ(0,0) - c δ^2 ℐ(0,0) = [ γ+δ, -c ; -c, ( γ+δ - (γ/3)(∂_x F_1)^2 - (1/(3δ))(∂_x F_2)^2 )^-1 ].

Since the spectrum of the above operator has both negative and positive components, the desired critical point is neither a minimizer nor a maximizer, as noticed (for the Green-Naghdi system) in <cit.>. We will obtain solutions to (<ref>) from a constrained minimization problem depending solely on the variable ζ. Notice that, for each fixed c and ζ, the functional v ↦ ℋ(ζ,v) - c ℐ(ζ,v) has a unique critical point,

 v_c,ζ = c 𝔗^F_γ,δ[ζ] ζ.

Substituting v_c,ζ into ℋ(ζ,v) - c ℐ(ζ,v), we obtain

 ℋ(ζ, v_c,ζ) - c ℐ(ζ, v_c,ζ) = (1/2) ∫_-∞^∞ (γ+δ) ζ^2 - c^2 ζ 𝔗^F_γ,δ[ζ] ζ dx = ((γ+δ)/2) ‖ζ‖_L^2^2 - (c^2/2) ( ζ, 𝔗^F_γ,δ[ζ] ζ ).

Observe now that (ζ,v) is a critical point of ℋ(ζ,v) - c ℐ(ζ,v) if and only if ζ is a critical point of ℋ(ζ,v_c,ζ) - c ℐ(ζ,v_c,ζ) and v = v_c,ζ. We thus define

 ℰ(ζ) ≡ ( ζ, 𝔗^F_γ,δ[ζ] ζ ) = ∫_-∞^∞ ((h_1 + γ h_2)/(h_1 h_2)) ζ^2 + (γ/3)(1-ζ)^3 ( ∂_x F_1{ ζ/(1-ζ) } )^2 + (1/3)(δ^-1+ζ)^3 ( ∂_x F_2{ ζ/(δ^-1+ζ) } )^2 dx = γ ℰ_1(ζ) + ℰ_2(ζ),

where

 ℰ_1(ζ) = ∫_-∞^∞ ζ^2/(1-ζ) + (1/3)(1-ζ)^3 ( ∂_x F_1{ ζ/(1-ζ) } )^2 dx,
 ℰ_2(ζ) = ∫_-∞^∞ ζ^2/(δ^-1+ζ) + (1/3)(δ^-1+ζ)^3 ( ∂_x F_2{ ζ/(δ^-1+ζ) } )^2 dx,

and look for critical points of ℋ(ζ,v_c,ζ) - c ℐ(ζ,v_c,ζ) by considering the minimization problem

 inf { ℰ(ζ) : (γ+δ) ‖ζ‖_L^2^2 = q },

with c^-2 acting as a Lagrange multiplier. Another way of thinking of this reduction is to solve the first equation in (<ref>) for w and substitute the solution w = cζ into the second equation, yielding

 c^2 𝔗^F_γ,δ[ζ] ζ = (γ+δ) ζ + (c^2/2) ((h_1^2 - γ h_2^2)/(h_1^2 h_2^2)) ζ^2 - c^2 𝒬^F_γ,δ[ζ,ζ].

Computing

 δℰ(ζ) = 2 ((h_1 + γ h_2)/(h_1 h_2)) ζ - ((h_1^2 - γ h_2^2)/(h_1^2 h_2^2)) ζ^2 - (2/(3δ)) h_2^-2 ∂_x F_2{ h_2^3 ∂_x F_2{ h_2^-1 ζ } } - (2γ/3) h_1^-2 ∂_x F_1{ h_1^3 ∂_x F_1{ h_1^-1 ζ } } + ( h_2 ∂_x F_2{ h_2^-1 ζ } )^2 - γ ( h_1 ∂_x F_1{ h_1^-1 ζ } )^2,

we find that

 (1/2) δℰ(ζ) = 𝔗^F_γ,δ[ζ] ζ - (1/2) ((h_1^2 - γ h_2^2)/(h_1^2 h_2^2)) ζ^2 + 𝒬^F_γ,δ[ζ,ζ],

which allows us to recognize (<ref>) as the Euler-Lagrange equation for (<ref>). §.§ Statement of the results For the sake of readability, we postpone to Section <ref> the definition and (standard) notation of the functional spaces used herein. The class of Fourier multipliers for which our main result is valid is the following. [Admissible class of Fourier multipliers] * F(k) = F(|k|) and 0 < F ≤ 1; * F ∈ 𝒞^2(ℝ), F(0) = 1 and F'(0) = 0; * There exists an integer j ≥ 2 such that ∂_k^j ( k F(k) ) ∈ L^2(ℝ); * There exist θ ∈ [0,1) and C^F_± > 0 such that C^F_- (1+|k|)^-θ ≤ F(k) ≤ C^F_+ (1+|k|)^-θ.
We also introduce a second class of strongly admissible Fourier multipliers, which is used in our second result. [Strongly admissible class of Fourier multipliers] An admissible Fourier multiplier in the sense of Definition <ref> is strongly admissible if ∈^∞() and for each j∈ there exists a constant C_j such that |∂_k^j (k)|≤ C_j(1+|k|)^-θ-j. Notice that the two aforementioned examples, namely _i^ id and _i^ imp, are strongly admissible, and satisfy Definition <ref>,<ref> with (respectively) θ=0 and θ=1/2. [Admissible parameters] In the following, we fix γ≥ 0 and δ∈(0,∞) such that δ^2-γ≠0. We also fix a positive number ν such that ν≥ 1-θ and ν>1/2 (the second condition is automatically satisfied if θ <1/2). Finally, we fix an arbitrary positive constant R. Our results hold for any values of the parameters (γ,δ)∈[0,∞)×(0,∞) such that δ^2≠γ, although admissible values for q_0 depend on the choice of the parameters. However, not all parameters are physically relevant in the oceanographic context. When γ>1, the upper fluid is heavier than the lower one, and the system suffers from strong Rayleigh-Taylor instabilities <cit.>. In the bilayer setting, the use of the rigid-lid assumption is well-grounded only when the density contrast, 1-γ, is small. In this situation, one may use the Boussinesq approximation, that is, set γ=1; see <cit.> in the dispersionless setting. Notice however that system (<ref>) exhibits unstable modes that are reminiscent of Kelvin-Helmholtz instabilities when the Fourier multipliers _i satisfy Definition <ref>,<ref> with θ∈[0,1); see <cit.>. It is therefore noteworthy that internal solitary waves in the ocean and in laboratory experiments are remarkably stable and fit very well with the Miyata-Choi-Camassa predictions <cit.>. The sign of the parameter δ^2-γ is known to determine whether long solitary waves are of elevation or depression type, as corroborated by Theorem <ref>. At the critical value δ^2=γ, the first-order model would be the modified (cubic) KdV equation, predicting that no solitary wave exists <cit.>. We study the constrained minimization problem _ζ∈ V_q,R(ζ), with V_q,R={ζ∈ H^ν() : ζ_H^ν()<R, (γ+δ)ζ_L^2()^2=q}, and q∈(0,q_0), with q_0 sufficiently small. Notice in particular that as soon as q is sufficiently small, one has ζ_L^∞<min(1,δ^-1) (by Lemma <ref> below and since ν>1/2) and (ζ) is well-defined (by Lemmata <ref> and <ref> and since ν≥1-θ) for any ζ∈ V_q,R. Any solution will satisfy the Euler-Lagrange equation (ζ)+2α(γ+δ)ζ=0, where α is a Lagrange multiplier. Equation (<ref>) is exactly (<ref>) with (-α)^-1=c^2, and therefore provides a traveling-wave solution to (<ref>). Our goal is to prove the following theorems. Let γ,δ,ν,R satisfy Assumption <ref> and let _i, i=1,2, be admissible in the sense of Definition <ref>. Let D_q,R be the set of minimizers over V_q,R. Then there exists q_0>0 such that for all q∈ (0,q_0), the following statements hold:*The set D_q,R is nonempty and each element in D_q,R solves the traveling wave equation (<ref>), with c^2=(-α)^-1>1. Thus for any ζ∈ D_q,R, (ζ(x± ct),w_±=± c ζ(x± ct)) is a supercritical solitary wave solution to (<ref>).*For any minimizing sequence {ζ_n}_n∈ℕ in V_q,R such that sup_n∈ζ_n_H^ν()<R, there exists a sequence {x_n}_n∈ℕ of real numbers such that a subsequence of {ζ_n(·+x_n)}_n∈ℕ converges (strongly in H^ν() if ν=1-θ>1/2; weakly in H^ν() and strongly in H^s() for s∈[0,ν) otherwise) to an element in D_q,R. 
*There exist two constants m,M>0 such that ζ_H^ν()^2≤ Mq and c^-2=-α≤ 1-m q^2/3, uniformly over q∈(0,q_0) and ζ∈ D_q,R. In addition to the hypotheses of Theorem <ref>, assume that _i, i=1,2, are strongly admissible in the sense of Definition <ref>. Then there exists q_0>0 such that for any q∈(0,q_0), each ζ∈ D_q,R belongs to H^s() for any s≥ 0, and sup_ζ∈ D_q,Rinf_x_0∈q^-2/3ζ(q^-1/3·)-ξ_ KdV(·-x_0)_H^1()=𝒪(q^1/6), where ξ_ KdV(x)=α_0(γ+δ)/(δ^2-γ) sech^2(1/2√(3α_0(γ+δ)/(γ+δ^-1)) x) is the unique (up to translation) solution of the KdV equation (<ref>) and α_0=3/4((δ^2-γ)^4/((γ+δ)^4(γ+δ^-1)))^1/3. In addition, the number α, defined in Theorem <ref>, satisfies α+1=q^2/3α_0+𝒪(q^5/6), uniformly over q∈(0,q_0) and ζ∈ D_q,R. § TECHNICAL RESULTS In the following, we denote by C(λ_1,λ_2,…) a positive constant depending non-decreasingly on the parameters λ_1,λ_2,…. We write A≲ B when A≤ CB with C a nonnegative constant whose value is of no importance. We do not display the dependence with respect to the parameters γ,δ,C^_i_± and regularity indexes. Functional setting on the real line Here and thereafter, we denote by L^2() the standard Lebesgue space of square-integrable functions, endowed with the norm f_L^2=(∫_-∞^∞ f(x)^2 x)^1/2. The real inner product of f_1,f_2∈ L^2() is denoted by ⟨f_1,f_2⟩=∫_ f_1(x)f_2(x) x. We use the same notation for duality pairings which are clear from the context. The space L^∞() consists of all essentially bounded, Lebesgue-measurable functions f, endowed with the norm f_L^∞= ess sup_x∈ f(x). For any real constant s∈, H^s() denotes the Sobolev space of all tempered distributions f with finite norm f_H^s= Λ^s f_L^2 < ∞, where Λ is the pseudo-differential operator Λ=(1-∂_x^2)^1/2. For n∈, ^n() is the space of functions having continuous derivatives up to order n, and ^∞()=⋂_n∈ ^n(). The Schwartz space is denoted 𝒮() and the space of tempered distributions 𝒮'(). We use the following convention for the Fourier transform: ℱ(f)(k)=f̂(k) 1/√(2π)∫_ f(x)e^-i x k x. We start with standard estimates in Sobolev spaces. The following interpolation estimates are standard and are used without reference in our proofs. [Interpolation estimates] Let f∈ H^μ(), with μ>1/2.*One has f∈ L^∞() and f_L^∞≲f_L^2^1-1/(2μ)f_H^μ^1/(2μ).*For any δ∈ (0,μ), one has f∈ H^μ-δ() and f_H^μ-δ≤f_L^2^δ/μf_H^μ^1-δ/μ. The following lemma is given for instance in <cit.>. [Composition estimate] Let G be a smooth function vanishing at 0, and f∈ H^μ() with μ>1/2. Then G∘ f∈ H^μ() and we have G∘ f_H^μ≤ C(f_L^∞) f_H^μ. [Product estimates] *For any f,g∈ L^∞()∩ H^s() with s≥ 0, one has fg∈ H^s() and fg_H^s≲f_H^s g_L^∞+g_H^s f_L^∞.*For any f∈ H^s(), g∈ H^t() with s+t≥ 0, let r be such that min(s,t)≥ r and r<s+t-1/2. Then one has fg∈ H^r() and fg_H^r≲f_H^s g_H^t.* For any ζ∈ L^∞() such that ζ_L^∞≤ 1-h_0 with h_0>0 and any f∈ L^∞(), one has f/1+ζ_L^∞≤ C(h_0^-1)f_L^∞.*For any ζ∈ H^μ() with μ>1/2 such that ζ_L^∞≤ 1-h_0 with h_0>0 and any f∈ H^s() with s∈ [-μ,μ], one has f/1+ζ_H^s≤ C(h_0^-1,ζ_H^μ)f_H^s. The first two items are standard (see for instance <cit.>). The third item is obvious. For the last item, we use the second item and deduce that for any f∈ H^s(), s∈[-μ,μ], and g∈ H^μ(), fg_H^s≲g_H^μf_H^s. Hence f/1+ζ_H^s≤f_H^s+fζ/1+ζ_H^s≤f_H^s+ζ/1+ζ_H^μf_H^s. We conclude by ζ/1+ζ_H^μ≲ C(h_0^-1)ζ_H^μ, where we have used Lemma <ref>, and the estimate is proved. The following lemma justifies the assumptions on admissible Fourier multipliers in Definition <ref>. 
[Properties of admissible Fourier multipliers] Any admissible Fourier multipler (in the sense of Definition <ref>), _i, satisfies the following.* The linear operator ∂_x_i(D) is bounded from H^s() to H^s-1+θ(), for any s∈, and ∂_x_i_H^s→ H^s-1+θ≲ C^_i_+.Moreover, for any ζ∈ H^s+1-θ, one has ζ_H^s^2 + (C^_i_+)^-2∂_x_i{ζ}_H^s^2 ≲ζ_H^s+1-θ^2≲ζ_H^s^2 + (C^_i_-)^-2∂_x_i{ζ}_H^s^2.*Let φ∈^∞() with compact support and [∂_x _i, φ] ζ= ∂_x _i{φζ}-φ∂_x _i{ζ}. Then[∂_x _i, φ] ζ_L^2≲φ'_L^1ζ_H^1-θ.*There exists j≥ 2 and C_j such that for any ζ∈ L^2() with compact support∂_x_i{ζ}(x)≤C_j/(x,(ζ))^jζ_L^2,for a.a.x∈∖(ζ).The first result is obvious from Definition <ref>,<ref> and the definition of Sobolev spaces. For the second, we shall first prove that the function 𝖦_i:k↦ k_i(k) satisfies𝖦_i'(k)≲⟨ k⟩^1-θ.To this aim, let us first consider 𝖦∈𝒮() and χ a smooth cut-off function, such that χ(k)=1 for |k|≤ 1/2 and χ(k)=0 for |k|≥ 1. We decompose𝖦'(k)≤χ(D)𝖦'(k)+ (1-χ(D))𝖦'(k).For the first contribution, one hasχ(D)𝖦'(k)=1/√(2π)|∫_χ̂(ξ)𝖦'(k+ξ)ξ|=1/√(2π)|∫_ (χ̂)'(ξ)𝖦(k+ξ)ξ|≲sup_ξ∈𝖦(k+ξ)/⟨ k+ξ⟩^1-θ⟨ k⟩^1-θ⟨·⟩^1-θχ̂'_L^1,and the second contribution satisfies for any j≥ 2,(1-χ(D))𝖦'(k)≲(1-χ(ξ))|ξ|𝖦̂(ξ)_L^1≲⟨ξ⟩^-(j-1)|ξ|^j𝖦̂(ξ)_L^1≲𝖦^(j)_L^2,by the Cauchy-Schwarz inequality and Parseval's theorem. Thus we find, for any j≥ 2,𝖦'(k)≲⟨·⟩^θ-1𝖦_L^∞⟨ k⟩^1-θ +𝖦^(j)_L^2.The same estimate applies to 𝖦(k)= k_i(k) by smooth approximation, and (<ref>) follows from Definition <ref>. Now, we note that the Fourier transform of [∂_x _i, φ] ζ is given by1/√(2π)∫_ (ik_i(k)-is_i(s))φ̂(k-s)ζ̂(s) s.By (<ref>) andthe mean value theorem, we find that |k_i(k)-s_i(s)|≲ (1+|s|)^1-θ |k-s|. Due toYoung's inequality and Parseval's theorem, we find[∂_x _i, φ] ζ_L^2≲φ'_L^1(1+|·|)^1-θζ̂_L^2≲φ'_L^1ζ_H^1-θ.For the third result, let us assume at first that the kernel K_iℱ^-1(ik_i(k))∈ L^2(). Then one has∂_x_i{ζ}(x) =1/√(2π)|∫_K_i(x-y)ζ(y)y|=1/√(2π)|∫_supp(ζ)(x-y)^jK_i(x-y)ζ(y)/(x-y)^jy|≤(K_i,j∗ζ)(x)/√(2π)(x,supp(ζ))^j≲ζ_L^2/(x,supp(ζ))^j,where we denote K_i,j(x)=x^jK_i(x), remark that K_i,j∈ L^2() by Definition <ref>,<ref> and Plancherel's theorem, and apply the Cauchy-Schwarz inequality to the convolution. If K_i∉ L^2(), we obtain the result by regularizing K_i ( smoothly truncating _i) and passing to the limit.Let γ≥ 0, δ>0, μ>1/2 and _i be admissible Fourier multipliers. Assume that ζ∈ H^μ() is such that 1-ζ_L^∞≥ h_0, δ^-1-ζ_L^∞≥ h_0, with h_0>0. Then there exist a constant C_0=C(h_0^-1,ζ_H^μ) such thatC_0^-1ζ_H^1-θ^2≤(ζ)≤ C_0ζ_H^1-θ^2 .We first deal with the contribution of (ζ) defined in (<ref>). By Lemma <ref>,<ref> we get that(ζ)≤ C(ζ_L^∞)ζ/1-ζ_H^1-θ^2and ζ/1-ζ_H^1-θ^2 ≤ C(h_0^-1)(ζ).By Lemma <ref>,<ref>,one hasζ/1-ζ_H^1-θ≤ C(h_0^-1,ζ_H^μ)ζ_H^1-θ,and the triangle inequality together with Lemma <ref>,<ref>yieldsζ_H^1-θ≲ζ/1-ζ_H^1-θ+ζ^2/1-ζ_H^1-θ ≲ζ/1-ζ_H^1-θ+ζ_H^μζ/1-ζ_H^1-θ.Collecting the above information, we find thatC_0^-1ζ_H^1-θ^2≤(ζ)≤ C_0ζ_H^1-θ^2,with C_0=C(h_0^-1,ζ_H^μ). Similar estimates hold for (ζ), and thus for(ζ)=γ(ζ)+(ζ).Let γ≥ 0, δ>0, μ>1/2 and _i be admissible Fourier multipliers. Assume that, for j∈{1,2}, ζ_j∈ H^μ() is such that 1-ζ_j_L^∞≥ h_0 and δ^-1-ζ_j_L^∞≥ h_0, with h_0>0. Then one has(ζ_1)-(ζ_2)≤ C(h_0^-1,ζ_1_H^μ,ζ_2_H^μ) ζ_1-ζ_2_H^μ.As previously, we detail the result for (ζ), as the similar estimate for (ζ) is obtained in the same way. 
One has (ζ_1)-(ζ_2)=∫_ζ_1^2/1-ζ_1-ζ_2^2/1-ζ_2+1/3[(1-ζ_1)^3-(1-ζ_2)^3] (∂_x _1{ζ_1/1-ζ_1})^2+1/3(1-ζ_2)^3[ (∂_x _1{ζ_1/1-ζ_1})^2-(∂_x _1{ζ_2/1-ζ_2})^2]x,By Lemma <ref>, <ref>, and the Cauchy-Schwarz inequality, we immediately have∫_|ζ_1^2/1-ζ_1-ζ_2^2/1-ζ_2| x≤C(h_0^-1,ζ_1_L^∞,ζ_2_L^∞)(ζ_1_L^2+ζ_2_L^2)ζ_1-ζ_2_L^2.Similarly, we find by Lemma <ref>,<ref> ∫_|[(1-ζ_1)^3-(1-ζ_2)^3] (∂_x _1{ζ_1/1-ζ_1})^2| x≤C(ζ_1_L^∞,ζ_2_L^∞)ζ_1-ζ_2_L^∞ζ_1/1-ζ_1_H^1-θ^2,and by Lemma <ref>,<ref>,ζ_1/1-ζ_1_H^1-θ^2≤ C(h_0^-1,ζ_1_H^μ).Finally, ∫_| (1-ζ_2)^3[(∂_x _1{ζ_1/1-ζ_1})^2-(∂_x _1{ζ_2/1-ζ_2})^2]| x ≤ C(ζ_2_L^∞)ζ_1/1-ζ_1-ζ_2/1-ζ_2_H^1-θζ_1/1-ζ_1+ζ_2/1-ζ_2_H^1-θ,and we conclude by Lemma <ref>, <ref> ζ_1/1-ζ_1-ζ_2/1-ζ_2_H^1-θ≤ C(h_0^-1,ζ_1_H^μ,ζ_2_H^μ)ζ_1-ζ_2_H^1-θ,andζ_1/1-ζ_1+ζ_2/1-ζ_2_H^1-θ≤ C(h_0^-1,ζ_1_H^μ,ζ_2_H^μ).The result is proved.Let γ≥ 0, δ>0, and _i be admissible Fourier multipliers. Let l∈{1,2,3} and ζ∈ H^l() such that 1-ζ_L^∞≥ h_0 and δ^-1-ζ_L^∞≥ h_0, with h_0>0. Then one can decompose(ζ)=∫_ (γ+δ)ζ^2+(γ-δ^2)ζ^3+γ+δ^-1/3(∂_xζ)^2x+_ rem(ζ),and⟨(ζ),ζ⟩ =∫_2(γ+δ)ζ^2+3(γ-δ^2)ζ^3+2γ+δ^-1/3(∂_xζ)^2x +⟨_ rem(ζ),ζ⟩, where_ rem+⟨_ rem(ζ),ζ⟩≤ C(h_0^-1,ζ_H^1)( ζ_L^∞^2ζ_L^2^2+ζ_L^∞∂_xζ_L^2^2+∂_x^lζ_L^2∂_xζ_L^2).We consider (ζ); the corresponding expansion for (ζ) is obtained similarly. We write(ζ)=∫_ζ^2+ζ^3+1/3(∂_xζ)^2x+_ rem(ζ),where _ rem(ζ)= ∫_ζ^4/1-ζx+1/3∫_(1-ζ)^3 (∂_x{ζ/1-ζ})^2-(∂_x ζ)^2 x +∫_ (1-ζ)^3 [ (∂_x _1{ζ/1-ζ})^2- (∂_x{ζ/1-ζ})^2]x.Note that by the Cauchy-Schwarz inequality|∫_ζ^4/1-ζx|≤ζ_L^∞^2ζ_L^2^2/h_0and|∫_(1-ζ)^3(∂_x{ζ/1-ζ})^2-(∂_xζ)^2x|= | ∫_ζ(∂_xζ)^2/1-ζx| ≤ζ_L^∞∂_xζ_L^2^2/h_0.Moreover |∫_(1-ζ)^3[(∂_x_1{ζ/1-ζ})^2-(∂_x{ζ/1-ζ})^2]x|≤∫_ (1+ζ)^3 |(∂_x_1-∂_x)(ζ/1-ζ)| |(∂_x_1+∂_x)(ζ/1-ζ)|x.Applying the Cauchy-Schwarz inequality, Plancherel's theorem and the estimates|_1(k)-1|≲ |k|^l-1, |_1(k)+1|≲ 1,(by Definition <ref>,<ref> and <ref>), we deduce|∫_(1-ζ)^3[(∂_x_1{ζ/1-ζ})^2-(∂_x{ζ/1-ζ})^2] x| ≤ (1+ζ_L^∞)^3 ∂_x^l (ζ/1-ζ)_L^2∂_x (ζ/1-ζ)_L^2≤ C(ζ_H^μ)∂_x^lζ_L^2∂_xζ_L^2,where the last inequality follows from Leibniz's rule and standard bilinear estimates <cit.>. Combining the above estimates together with similar calculations foryields the desired estimate for _ rem. The estimate for ⟨_ rem(ζ),ζ⟩ follows in the same way when decomposing⟨(ζ),ζ⟩=∫_ 2h_1+γ h_2/h_1 h_2ζ^2-h_1^2-γ h_2^2/h_1^2h_2^2ζ^3+2/3δ^-1 h_2^3(∂_x _2 {h_2^-1ζ})(∂_x _2 {h_2^-2ζ})+2γ/3 h_1^3(∂_x _1 {h_1^-1ζ})(∂_x _1 {h_1^-2ζ}) +ζ(h_2∂_x _2{h_2^-1ζ})^2-γζ(h_1∂_x _1{h_1^-1ζ})^2x,and we do not detail for the sake of conciseness.Let γ≥ 0, δ>0, μ>1/2 and _i be admissible Fourier multipliers such that μ≥ 1-θ. Let ζ∈ H^μ() such that 1-ζ_L^∞≥ h_0 and δ^-1-ζ_L^∞≥ h_0, with h_0>0. Then one can decompose(ζ)=_2(ζ)+_3(ζ)+_ rem^(1)(ζ)and⟨(ζ),ζ⟩=2_2(ζ)+3_3(ζ)+_ rem^(2)(ζ),where_2(ζ) =∫_ (γ+δ)ζ^2 +γ1/3(∂_x_1{ζ})^2+δ^-11/3(∂_x_2{ζ})^2x, _3(ζ) =∫_ (γ-δ^2)ζ^3- γζ (∂_x_1{ζ})^2+ζ(∂_x_2{ζ})^2 +γ2/3(∂_x_1{ζ})(∂_x_1{ζ^2})-2/3(∂_x_2{ζ})(∂_x_2{ζ^2})x.Moreover, one has _2(ζ)≥ (γ+δ)ζ_L^2^2and_3(ζ) ≤ C(h_0^-1,ζ_H^μ)ζ_L^∞ζ_H^1-θ^2, ∀ j∈{1,2}, _ rem^(j)(ζ) ≤ C(h_0^-1,ζ_H^μ) ζ_L^∞^2ζ_H^1-θ^2,The estimate on _3 is straightforward by the Cauchy-Schwarz inequality and applying Lemma <ref>,<ref> and Lemma <ref>,<ref>. We detail the estimate for _ rem^(j)(ζ). As above, we focus on the terms involving , the terms involvingbeing obtained identically. One has_ rem^(1)(ζ) =∫_ζ^4/1-ζ+(ζ^2-ζ^3/3)(∂_x_1{ζ/1-ζ})^2x+1/3∫_(1-3ζ)(∂_x_1{ζ/1-ζ})^2-(∂_x_1{ζ})^2-2∂_x_1{ζ}∂_x_1{ζ^2}+3ζ (∂_x_1{ζ})^2x.We estimate each bracket separately. 
First note that, by the Cauchy-Schwarz inequality, Lemma <ref>,<ref> and Lemma <ref>,<ref>,<ref> one has∫_|ζ^4/1-ζ| x ≤ζ_L^∞^2ζ_L^2^2/h_0, ∫_|(ζ^2-ζ^3/3)(∂_x_1{ζ/1-ζ})^2| x≤ C(h_0^-1,ζ_H^μ)ζ_L^∞^2ζ_H^1-θ^2.Next consider1/3∫_(1-3ζ)(∂_x_1{ζ/1-ζ})^2-(∂_x_1{ζ})^2-2∂_x_1{ζ}∂_x_1{ζ^2}+3ζ (∂_x_1{ζ})^2 x=1/3∫_(1-3ζ)(∂_x_1{ζ^2/1-ζ})^2+2(∂_x_1{ζ})(∂_x_1{ζ^3/1-ζ}) -6ζ(∂_x_1{ζ})(∂_x_1{ζ^2/1-ζ}) x.where we used the identity ∂_x_1{ζ/1-ζ}=∂_x_1{ζ}+∂_x_1{ζ^2/1-ζ}. It follows from the Cauchy-Schwarz inequality, Lemma <ref>,<ref> and Lemma <ref>,<ref>,<ref> that|1/3∫_(∂_x_1{ζ/1-ζ})^2-(∂_x_1{ζ})^2-2∂_x_1{ζ}∂_x_1{ζ^2}x | ≲ζ^2/1-ζ_H^1-θ^2+ζ_H^1-θζ^3/1-ζ_H^1-θ≤ C(h_0^-1,ζ_H^μ)ζ_L^∞^2ζ_H^1-θ^2and|∫_ζ[(∂_x_1{ζ^2/1-ζ})^2+2(∂_x_1{ζ})(∂_x_1{ζ^2/1-ζ})]x | ≲ζ_L^∞ζ^2/1-ζ_H^1-θ^2+ζ_L^∞ζ_H^1-θζ^2/1-ζ_H^1-θ≤ C(h_0^-1,ζ_H^μ)ζ_L^∞^2ζ_H^1-θ^2.This concludes the estimate for _ rem^(1)(ζ). The estimate for _ rem^(1)(ζ) is obtained identically, and the one for _ rem^(2)(ζ) are obtained using similar estimates when decomposing ⟨(ζ),ζ⟩ given in (<ref>). We do not detail for the sake of conciseness.Periodic functional setting GivenP>0, we denoteL^2_Pthe space ofP-periodic, locally square-integrable functions, endowed with the normu_L^2_P=u_L^2(-P/2,P/2)(∫_-P/2^P/2u(x)^2x)^1/2.The Fourier coefficients ofu∈L^2_Pare defined byû_k1/√(P)∫_-P/2^P/2 u(x)e^-2iπ kx/Px,u(x)= 1/√(P)∑_k∈û_ke^2iπ kx/P.We define, fors≥0,H^s_P{u∈ L^2_P, u_H^s_P^2∑_k∈(1+4π^2k^2/P^2)^sû_k^2<∞}.The Fourier multiplier operatorΛ𝒮'() →𝒮'()is defined as usual byΛ=(1-∂_x^2)^1/2. It maps periodic distributions to periodic distributions and we haveΛ u_k=(1+4π^2k^2/P^2)^1/2û_k.Thusu_H_P^s^2=∫_-P/2^P/2 u Λ^2s uxandΛ^mis an isomorphism fromH^s_PtoH^s-m_Pfor anys,m∈. Similarly, the operators∂_x_iextend to operators from𝒮'()to itself, and maps smoothlyH^s_PintoH_P^s-1+θ, acting on the Fourier coefficients by pointwise multiplication:∂_x_i u_k=2πi k/P_i(2π k/P) û_k.For anys>1/2, the continuous embeddingu_L^∞≤1/√(P)∑_k∈û_k≤u_H^s_P×1/√(P)(∑_k∈1/(1+4π^2k^2/P^2)^s)^1/2≲u_H^s_P,holds uniformly with respect toP≥1. More generally, one checks by a partition of unity argument, or repeating the proofs in the periodic setting, that Lemmata <ref>,<ref>, <ref> and as a consequence Lemmata <ref>–<ref> have immediate analogues in the periodic setting, with uniform estimates with respect toP≥1, when defining_P(ζ)=γ_P(ζ)+_P(ζ)where _P(ζ) =∫_-P/2^P/2ζ^2/1-ζ+1/3 (1-ζ)^3 (∂_x _1{ζ/1-ζ})^2 x, _P(ζ) =∫_-P/2^P/2ζ^2/δ^-1+ζ+1/3 (δ^-1+ζ)^3 (∂_x _2{ζ/δ^-1+ζ})^2 x. § THE PERIODIC PROBLEM Our first task is to construct periodic traveling-wave solutions with large periods by considering the periodic minimization problem corresponding to (<ref>). We will use this in the next section to construct a special minimizing sequence for (<ref>), which is useful whenν>1-θ. Whenθ<1/2andν=1-θ, any minimizing sequence has the special property and therefore it is strictly speaking unnecessary to first consider the periodic minimization problem. Nevertheless, we consider here all possible parameters in order to highlight some interesting differences between the casesν=1-θandν>1-θ.We ensure that the hypotheses of Section <ref>, namelyζ∈ H_P^ν and ζ_L^∞<min(1,δ^-1) will be satisfied through a penalization argument. To this aim, we fixR>0and restrict ourselves to valuesq∈(0,q_0)sufficiently small so thatζ_H_P^ν()≤2Rand (γ+δ)ζ_L^2_P^2=qensures (by Lemma <ref>,<ref> in the periodic setting) thatζ_L^∞<min(1,δ^-1)-h_0with someh_0>0, uniformly with respect toP≥P_0sufficiently large (and likewise in the real line setting). 
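Before setting up the penalization, we note in passing that the mode-by-mode action (<ref>) described above is precisely how ∂_x_i is evaluated in a discrete, FFT-based implementation. A minimal Python sketch, with the hypothetical smoothing symbol (k)=(1+k^2)^{-1/4} (our illustrative choice, not one of the multipliers from the introduction):

```python
import numpy as np

P, N, m = 50.0, 256, 3                        # period, grid size, test mode
x = np.arange(N) * (P / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=P / N)  # wavenumbers 2*pi*j/P

F = (1.0 + k**2) ** (-0.25)                   # hypothetical admissible symbol

def dxF(u):
    """u -> d/dx F(D) u on the P-periodic grid, acting mode by mode."""
    return np.real(np.fft.ifft(1j * k * F * np.fft.fft(u)))

# Check on a single Fourier mode, where the action is known exactly:
km = 2.0 * np.pi * m / P
u = np.cos(km * x)
exact = -km * (1.0 + km**2) ** (-0.25) * np.sin(km * x)
print("max error:", np.max(np.abs(dxF(u) - exact)))  # ~ machine precision
```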
We then define ϱ:[0,(2R)^2)→[0,∞), a smooth, non-decreasing penalization function satisfying* ϱ(t)=0 for 0≤ t≤ R^2;* ϱ(t)→∞ as t↗ (2R)^2;*For any a_1∈(0,1), there exist M_1,M_2>0 and a_2>1 such that ϱ'(t)≤ M_1ϱ(t)^a_1+M_2ϱ(t)^a_2; for instance ϱ: (R^2,(2R)^2)∋ t ↦((2R)^2-t)^-1exp(1/(R^2-t)). Now consider the functional _P,ϱ(ζ) ϱ(ζ_H_P^ν^2)+_P(ζ) and the constraint set V_P,q,2R {ζ∈ H_P^ν, (γ+δ)ζ_L^2_P^2=q and ζ_H_P^ν<2R }. In the following we solve, for q sufficiently small and P sufficiently large, _ζ∈ V_P,q,2R_P,ϱ(ζ). There exists q_0>0 such that for any q∈(0,q_0), the functional _P,ϱ:V_P,q,2R→ is weakly lower semi-continuous, bounded from below, and _P,ϱ(ζ)→∞ as ζ_H_P^ν↗ 2R. In particular, it has a minimizer ζ_P∈ V_P,q,2R, which satisfies the Euler-Lagrange equation 2ϱ'(ζ_P_H_P^ν^2)Λ^2νζ_P+_P(ζ_P)+2α_P(γ+δ)ζ_P=0 for some Lagrange multiplier α_P(ζ_P)∈. As explained above, we restrict ourselves to q∈(0,q_0) so that _P,ϱ is well-defined, and in particular sup_ζ∈V_P,q,2Rζ_L^∞<min(1,δ^-1). The argument is standard; see <cit.>. Since _P,ϱ(ζ)→∞ as ζ_H_P^ν↗ 2R, any minimizing sequence is bounded, and therefore weakly convergent (up to a subsequence) in the (reflexive) Hilbert space H_P^ν. We only need to show that for any ζ_n∈ V_P,q,2R such that ζ_n ⇀ζ weakly in H_P^ν, one has 0≤_P,ϱ(ζ)≤lim inf_n→∞_P,ϱ(ζ_n). Notice first that by the weak lower semi-continuity of ·_H_P^ν, and since ϱ is non-decreasing, one has ϱ(ζ_H_P^ν^2)≤lim inf_n→∞ϱ(ζ_n_H_P^ν^2). Now, by the Rellich-Kondrachov theorem, ζ_n⇀ζ in H_P^ν implies that ζ_n→ζ in H_P^s for s∈ (1/2,ν), and in particular ζ_n→ζ in L^∞. Moreover, one has sup_nζ_n_L^∞<min(1,δ^-1) since ζ_n∈ V_P,q,2R, and therefore ζ_n/1-ζ_n→ζ/1-ζ in L^∞, and ζ/1-ζ∈ H_P^ν by Lemma <ref>,<ref>, <ref>. Since ζ_n/1-ζ_n is uniformly bounded in H^ν_P and converges in L^∞, it follows that ζ_n/1-ζ_n⇀ζ/1-ζ in H_P^ν, and therefore ∂_x_1{ζ_n/1-ζ_n}⇀∂_x_1{ζ/1-ζ} in L^2_P by Lemma <ref>,<ref>, and finally (1-ζ_n)^3/2∂_x_1{ζ_n/1-ζ_n}⇀ (1-ζ)^3/2∂_x_1{ζ/1-ζ} in L^2_P. In the same way, we find (δ^-1+ζ_n)^3/2∂_x _2{ζ_n/δ^-1+ζ_n}⇀ (δ^-1+ζ)^3/2∂_x _2{ζ/δ^-1+ζ} in L^2_P, and (1-ζ_n)^-1/2ζ_n → (1-ζ)^-1/2ζ, (δ^-1+ζ_n)^-1/2ζ_n →(δ^-1+ζ)^-1/2ζ in L^2_P. The result follows from the weak lower semi-continuity of ·_L^2_P. 
By taking P≥ P_q sufficiently large, we may ensureϕ_q ⊂ (-P/2,P/2),and define ϕ_P,q=∑_j∈ϕ_q(x-jP)∈ H_P^n(n∈).One has(γ+δ)ϕ_P,q_L^2_P^2= (γ+δ)ϕ_q_L^2^2=qandϕ_P,q_H_P^ν<R for q sufficiently small, so that as well as _P,ϱ(ϕ_P,q)=_P(ϕ_P,q) =(γ+δ)∫_-P/2^P/2ϕ_P,q^2x+∫_-P/2^P/2(γ-δ^2)ϕ_P,q^3+γ+δ^-1/3(∂_xϕ_P,q)^2x+𝒪(q^7/3) =q-2m q^5/3+𝒪(q^7/3).The result is proved.There exists q_0>0 such that for any q∈(0,q_0), one has ∀ P≥ P_q,|α_P+1|< 1/2,where α_P is defined in Lemma <ref> and P_q in Lemma <ref>.We use the Euler-Lagrange equation satisfied by ζ_P, namely2ϱ'(ζ_P_H_P^ν^2)Λ^2νζ_P+_P(ζ_P)+2α_P(γ+δ)ζ_P=0.This equation is well-defined in (H_P^ν)' and testing against ζ_P∈ H_P^ν yields 2ϱ'(ζ_P_H_P^ν^2)ζ_P_H_P^ν^2+2α_P(γ+δ)ζ_P_L^2_P^2+⟨_P(ζ_P), ζ_P⟩=0.Using the decompositions in Lemma <ref> (changing the domain of integration to [-P/2,P/2]) yields⟨_P(ζ_P),ζ_P⟩ =2_2,P(ζ_P)+3_3,P(ζ_P)+_ rem,P^(2)(ζ_P)=2_P(ζ_P)+_3,P(ζ_P)+_ rem,P^(2)(ζ_P)-2_ rem,P^(1)(ζ_P),so that one obtains the identity-α_P q = _P(ζ_P)+1/2_3,P(ζ_P)+1/2_ rem,P^(2)(ζ_P)-_ rem,P^(1)(ζ_P) + ϱ'(ζ_P_H_P^ν^2)ζ_P_H_P^ν^2. Let us now use Lemma <ref>, which assertsϱ(ζ_P_H_P^ν^2)+_P(ζ_P)< q(1-mq^2/3)≤ q.Remark that, since 1-ζ,δ^-1+ζ≥ h_0>0, one has_P(ζ_P) ≥∫_-P/2^P/2γζ_P^2/1-ζ_P+ζ_P^2/δ^-1+ζ_Px=(γ+δ) ∫_-P/2^P/2ζ_P^2 x+γ∫_-P/2^P/2ζ_P^3/1-ζ_Px-δ∫_-P/2^P/2ζ_P^3/δ^-1+ζ_Px=q+Ø(q^1+ϵ/2ν),where ϵ=ν-1/2>0 and we use in the last estimate that ζ_P_L^∞^2≲ζ_P_L^2_P^2-1/νζ_P_H_P^ν^1/ν=Ø(q^2ν-1/2ν), by the interpolation estimate Lemma <ref>,<ref> in the periodic-setting.Combining with (<ref>) yields_P(ζ_P)=q+Ø(q^1+ϵ/2ν)andϱ(ζ_P_H_P^ν^2)=𝒪(q^1+ϵ/2ν).Hence, by (<ref>) ϱ'(ζ_P_H_P^ν^2)=𝒪( q^1+ϵ/4ν).Now, by Lemma <ref> and using once again (<ref>), one hasζ_P_H_P^1-θ^2 ≲_P(ζ_P)=Ø(q).Thus by Lemma <ref>, one has_3,P(ζ_P)≲ζ_P_L^∞ζ_P_H_P^1-θ^2=𝒪(q^1+ϵ/2ν)and_ rem, P^(2)(ζ_P)+_ rem, P^(1)(ζ_P)≲ζ_P_L^∞^2ζ_P_H_P^1-θ^2=𝒪(q^1+ϵ/ν).Plugging the above estimates into (<ref>) yields-α_P q =q+𝒪(q^1+ϵ/4ν),and the proof is complete.Let q_0, q∈(0,q_0) and P_q be as in Lemma <ref>. There exists M>0 such that one hasζ_H_P^ν^2≤ M q uniformly over q∈(0,q_0), P≥ P_q and ζ in the set of minimizers of _P,ϱ over V_P,q,2R.Recall the Euler-Lagrange equation:2ϱ'(ζ_P_H_P^ν^2)Λ^2νζ_P+_P(ζ_P)+2α_Pζ_P=0.It follows from the proof of Lemma <ref> that for q_0 sufficiently smallζ_P_H_P^1-θ^2≲ qwith 0≤θ<1. Thus the result is proved if ν=1-θ, and we focus below on the situation ν>1-θ. In this case we obtain the desired estimate in a similar fashion after finite induction. Indeed, define r_n=min(ν-(1-θ), n(1-θ)), n∈, and assume that ζ_P_H_P^r_n^2≲ q. Note that this is satisfied for n=0 by assumption. We will show below that ζ_P_H_P^1-θ+r_n^2≲ζ_P_H_P^r_n^2 ≲ q.Since 1-θ>0, the desired result follows by finite induction. Let us now prove (<ref>). We test (<ref>) against Λ^2r_nζ_P, and obtain2ϱ'(ζ_P_H_P^ν^2)⟨Λ^2νζ_P,Λ^2r_nζ_P⟩+ ⟨_P(ζ_P),Λ^2r_nζ_P⟩+2α_P⟨ζ_P,Λ^2r_nζ_P⟩=0.Here, the notation ⟨,⟩ represents the H_P^ν-2(1-θ)-H_P^-ν+2(1-θ) duality bracket. We will use the same notation for H_P^s-H_P^-s, where the value of s∈(-ν,ν] is clear from the context. Note that all the terms are well-defined, since ζ_P∈ H_P^ν, and therefore by (<ref>) and Lemma <ref>, _P(ζ_P)∈ H_P^ν-2(1-θ). Moreover, by (<ref>), if ϱ'(ζ_P_H_P^ν^2)>0 then Λ^2νζ_P∈ H_P^ν-2(1-θ) as well.Finally,Λ^2r_nζ_P∈ H_P^ν-2r_n, and r_n+1-θ≤ν so that ν-2r_n≥ -ν+2(1-θ). 
Now, using that ϱ'(ζ_P_H_P^ν^2)≥ 0, we get from (<ref>) and Lemma <ref> that γ⟨_P(ζ_P),Λ^2r_nζ_P⟩+⟨_P(ζ_P),Λ^2r_nζ_P⟩≤ 2(-α_P)Λ^r_nζ_P_L^2_P^2≤ 3ζ_P_H_P^r_n^2,where we define _P and _P from (<ref>) as in (<ref>). In particular, ⟨(ζ_P), Λ^2r_nζ_P⟩=⟨2ζ_P /1-ζ_P,Λ^2r_nζ_P⟩+⟨ζ_P^2/(1-ζ_P)^2,Λ^2r_nζ_P⟩+⟨ (1-ζ_P)^2(∂_x{ζ_P/1-ζ_P})^2,Λ^2r_nζ_P⟩+⟨2/3(1-ζ_P)^3∂_x_1{ζ_P/1-ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}⟩.We estimate each term of (<ref>), using that (γ+δ)ζ_P_L_P^2^2=q, ζ_P_H_P^ν<2R and ζ_P_L^∞<1-h_0, recalling that Lemmata <ref>,<ref>,<ref> and <ref>.<ref> are valid in the periodic setting.The first term in (<ref>) is estimated by the Cauchy-Schwarz inequality and Lemma <ref>:|⟨2ζ_P /1-ζ_P,Λ^2r_nζ_P⟩|=|⟨Λ^r_n(2ζ_P/1-ζ_P),Λ^r_nζ_P⟩| ≤ 2ζ_P/1-ζ_P_H_P^r_nζ_P_H_P^r_n≲ζ_P_H_P^r_n^2. It is clear that the second term can be estimated in the same way. Next we see that|⟨ (1-ζ_P)^2(∂_x{ζ_P/1-ζ_P})^2,Λ^2r_nζ_P⟩|≤(1-ζ_P)^2(∂_x_1{ζ_P/1-ζ_P})^2_H_P^θ - 1+r_nζ_P_H_P^1-θ+r_n≲(∂_x_1{ζ_P/1-ζ_P})^2_H_P^θ-1+r_nζ_P_H_P^1-θ+r_n,and Lemma <ref>,<ref>, Lemma <ref>,<ref> and Lemma <ref>,<ref> yield for any 0<ϵ<min(ν+θ-1,1-θ,ν-1/2),(∂_x_1{ζ_P/1-ζ_P})^2_H_P^θ-1+r_n ≲∂_x_1{ζ_P/1-ζ_P}_H_P^r_n∂_x_1{ζ_P/1-ζ_P}_H_P^ν+θ-1-ϵ≲ζ_P_H_P^1-θ+r_nζ_P_H_P^ν-ϵ≲ζ_P_H_P^1-θ+r_nq^ϵ/2ν.We therefore have that|⟨ (1-ζ_P)^2(∂_x{ζ_P/1-ζ_P})^2,Λ^2r_nζ_P⟩| ≲ q^ϵ/2νζ_P_H_P^1-θ+r_n^2.We next consider the remaining term in (<ref>) and note that⟨ (1-ζ_P)^3∂_x_1{ζ_P/1-ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}⟩=⟨∂_x_1{ζ_P},∂_x_1{Λ^2r_nζ_P}⟩+⟨(-3ζ_P+3ζ_P^2-ζ_P^3)∂_x_1{ζ_P/1-ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}⟩_I +⟨∂_x_1{ζ_P/1-ζ_P-ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}⟩_II+⟨∂_x_1{ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2-Λ^2r_nζ_P}⟩_III.First we see that, by Lemma <ref>,<ref>,ζ_H_P^r_n^2+⟨∂_x_1{ζ_P},∂_x_1{Λ^2r_nζ_P}⟩≳ζ_H_P^1-θ+r_n^2.We estimate I in (<ref>) proceeding as previously:|⟨ζ_P∂_x_1{ζ_P/1-ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}⟩| ≲∂_x_1{ζ_P/1-ζ_P}_H_P^r_nζ_P ∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}_H_P^-r_n≲ζ_P_H_P^1-θ+r_nζ_P_H_P^ν-ϵ∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}_H_P^-r_n≲ζ_P_H_P^ν-ϵζ_P_H_P^1-θ+r_n^2≲ q^ϵ/2νζ_P_H_P^1-θ+r_n^2,where we choose 0<ϵ<min{ν-1/2, 1-θ}. The remaining terms in I are of higher order and can be estimated in the same way. Next we estimate II:|⟨∂_x_1{ζ_P^2/1-ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2}⟩| ≲ζ_P^2_H_P^1-θ+r_nΛ^2r_nζ_P_H_P^1-θ-r_n≲ζ_P_L^∞ζ_P_H_P^1-θ+r_n^2≲ q^ν-1/2/2νζ_P_H_P^1-θ+r_n^2,where we used Lemma <ref>,<ref> and Lemma <ref>,<ref>.Finally consider III: proceeding as above, |⟨∂_x_1{ζ_P},∂_x_1{Λ^2r_nζ_P/(1-ζ_P)^2-Λ^2r_nζ_P}⟩| ≲ζ_P_H_P^1-θ+r_n2ζ_P-ζ_P^2/(1-ζ_P)^2Λ^2r_nζ_P_H_P^1-θ-r_n≤ζ_P_H_P^1-θ+r_n^22ζ_P-ζ_P^2/(1-ζ_P)^2_H_P^ν-ϵ≲ q^ϵ/2νζ_P_H_P^1-θ+r_n^2, with 0<ϵ<min(ν-1/2, ν-(1-θ),1-θ).Collecting (<ref>)–(<ref>) in (<ref>) yields ζ_H_P^1-θ+r_n^2≲⟨(ζ_P), Λ^2r_nζ_P⟩+ζ_P_H_P^r_n^2+q^ϵ/2νζ_P_H_P^1-θ+r_n^2 with ϵ>0 sufficiently small. It is clear that the same estimate holds for ⟨(ζ_P),Λ^2r_nζ_P⟩, and if we use these in (<ref>), we obtainζ_P_H_P^1-θ+r_n^2 ≲ζ_P_H_P^r_n^2+ζ_P_H_P^1-θ+r_n^2q^ϵ/2ν.Thus one may choose q sufficiently small, so that (<ref>) holds. This concludes the proof. We now collect the preceding results and deduce solutions of the non-penalized periodic problem. [Existence of periodic minimizers] There exists q_0>0 such that for any q∈(0,q_0), one can define P_q>0 and the following holds. 
For each P≥ P_q, there exists ζ_P∈ V_P,q,R such that _P(ζ_P)=inf_ζ∈ V_P,q,R_P(ζ) I_P,q, and the Euler-Lagrange equation holds with α_P∈(-3/2,-1/2): _P(ζ_P)+2α_P(γ+δ) ζ_P=0. Furthermore, there exists M>0, independent of q, such that ζ_P_H_P^ν^2≤ M q uniformly with respect to P≥ P_q. From Lemma <ref>, any minimizer of _P,ϱ over V_P,q,2R satisfies, for q_0 sufficiently small and P≥ P_q sufficiently large, ζ_P_H_P^ν^2≤ Mq< R^2. Thus the Euler-Lagrange equation (<ref>) becomes (<ref>), and the control on α_P is stated in Lemma <ref>. Moreover, since _P,ϱ=_P over V_P,q,R, ζ_P minimizes _P over V_P,q,R. The theorem is proved. If θ∈ [0,1/2) and ν=1-θ, then the functional _P is coercive on V_P,q,R by Lemma <ref>, and it is not necessary to consider the penalized functional _P, ϱ to construct periodic minimizers. Indeed, one can minimize _P over V_P,q,R directly, noting that any minimizing sequence satisfies (up to subsequences) sup_n ζ_P, n_H_P^ν^2 ≤ Mq<R^2 if q_0 is sufficiently small. § THE REAL LINE PROBLEM The construction of a minimizer for the real-line problem, (<ref>), will follow from Lions' concentration-compactness principle. The main difficulty consists in excluding the "dichotomy" scenario. To this aim, we shall use a special minimizing sequence (satisfying the additional estimate ζ_n_H^ν^2≲ q) to show that the function q↦ I_q is strictly subhomogeneous (see Proposition <ref>), which implies that it is also strictly subadditive (Corollary <ref>). This special sequence is constructed from the solutions of the periodic problem, obtained in Theorem <ref>, with period P_n→∞. §.§ A special minimizing sequence [Special minimizing sequence for ] There exists q_0>0 such that for any q∈(0,q_0), one can define constants m,M>0 and a sequence {ζ_n}_n∈ satisfying (γ+δ)ζ_n_L^2^2=q, ζ_n_H^ν^2≤ M q and lim_n→∞(ζ_n)=I_q inf_ζ∈ V_q,R(ζ)< q(1-mq^2/3). The estimate on I_q was proved in Lemma <ref>; thus we only need to construct a minimizing sequence satisfying ζ_n_H^ν^2≤ M q. If ν=1-θ, then any minimizing sequence satisfies this property as a consequence of Lemma <ref>, so we assume in the following that ν>1-θ. Let q_0 be sufficiently small so that Theorem <ref> holds. By the construction of <cit.>, one obtains, for any P_n sufficiently large, x_n∈, ζ_P_n∈ H^ν_P_n and ζ_n∈ H^ν() such that ζ_P_n-ζ_P_n(·-x_n)_L^2_P_n→ 0 (P_n→∞), where ζ_P_n is defined by Theorem <ref>, ζ_n⊂ (-P_n/2+P_n^1/2,P_n/2-P_n^1/2) and ζ_P_n=∑_l∈ζ_n(· +lP_n). Moreover, one has ζ_n_L^2=ζ_P_n_L^2_P_n and ζ_n_H^ν≲ζ_P_n_H^ν_P_n, uniformly with respect to P_n sufficiently large. By (<ref>) and Theorem <ref>, one has ζ_n_H^ν^2 ≤ Mq<R^2 provided that P_n is sufficiently large and q_0 is sufficiently small; and ζ_n∈ V_q,R by (<ref>). Thus there only remains to prove that ζ_n is a minimizing sequence. 
Notice that by (<ref>) and (<ref>) and Lemma <ref>,<ref>,ζ_P_n-ζ_P_n(· -x_n)_H^ν'_P_n→ 0for any ν'∈ [0,ν).One has_P_n(ζ_P_n)-(ζ_n)=1/3∫_-P_n/2^P_n/2 (1-ζ_n)^3[ (∂_x _1{ζ_P_n/1-ζ_P_n})^2-(∂_x _1{ζ_n/1-ζ_n})^2] x-1/3∫_∖(-P_n/2,P_n/2) (1-ζ_n)^3(∂_x _1{ζ_n/1-ζ_n})^2x.Using Lemma <ref>,<ref>, (<ref>) and the Cauchy-Schwarz inequality, we deduce| ∫_-P_n/2^P_n/2 (1-ζ_n)^3[ (∂_x _1{ζ_P_n/1-ζ_P_n})^2-(∂_x _1{ζ_n/1-ζ_n})^2] x| ≲ C(ζ_n_H^ν) ∂_x _1{ζ_P_n/1-ζ_P_n}-∂_x _1{ζ_n/1-ζ_n}_L^2(-P_n/2,P_n/2).Notice now that, by uniqueness of the Fourier decomposition in L^2_P_n, one has the identity∂_x _1{ζ_P_n/1-ζ_P_n}(x)=∑_l∈(∂_x _1{ζ_n/1-ζ_n})(x+lP_n),and therefore, by Lemma <ref>,<ref> and (<ref>), one has∂_x _1{ζ_P_n/1-ζ_P_n}-∂_x _1{ζ_n/1-ζ_n}_L^2(-P_n/2,P_n/2)^2 =∫_-P_n/2^P_n/2(∑_|l|≥ 1 |∂_x _1{ζ_n/1-ζ_n}(y+lP_n)|)^2y≲∫_-P_n/2^P_n/2( ∑_|l|≥ 11/(P_n^1/2+(l-1)P_n)^j)^2y→ 0(P_n→∞).since j≥ 2. Similarly, Lemma <ref>,<ref> and (<ref>) yield|∫_∖(-P_n/2,P_n/2) (1-ζ_n)^3(∂_x _1{ζ_n/1-ζ_n})^2x| ≤ C(ζ_H^ν)∫_P_n/2^∞1/(x-P_n/2+P_n^1/2)^2j x → 0(P_n→∞).The componentsatisfies the same bounds, thus we proved_P_n(ζ_P_n)-(ζ_n)→ 0(P_n→∞).Now by Lemma <ref> (which holds in the periodic setting and uniformly with respect to P>0) with ν replaced by some ν'∈(1/2, ν) and (<ref>), one has_P_n(ζ_P_n)-I_P_n,q=_P_n(ζ_P_n)-_P_n(ζ_P_n(·-x_n))→ 0(P_n→∞).Thus we found that I_q ≤(ζ_n) =I_P_n,q+o(1)(P_n→∞).There remains to prove the converse inequality. For any ϵ>0, there exists ζ∈ V_q,R such that (ζ) ≤ I_q+ϵ/3.By the same argument as above, we construct by smoothly truncating and rescaling, ζ̌∈ V_q,R such that ζ̌∈(-P_⋆,P_⋆), and(ζ̌) ≤(ζ) +ϵ/3.Then for P_n≥ 2 P_⋆, one has ζ̌_P_n=∑_j∈ζ̌(·+jP_n)∈ V_P,q,R and, as above,_P_n(ζ̌_P_n)-(ζ̌)→ 0(P_n→∞).Combining the above yields, for P_n sufficiently large,I_P_n,q≤_P_n(ζ̌_P_n)≤(ζ̌) +ϵ/3≤ I_q+ϵ.Thus we proved that (ζ_n) → I_q(P_n→∞).This concludes the proof. The following proposition is essential to rule out the “dichotomy” scenario in Lions' concentration-compactness principle (see below).There exists q_0>0 such that the map q↦ I_q is strictly subhomogeneous for q∈(0,q_0):I_aq<aI_qwhenever0<q<a q<q_0.Let us consider ζ_n the special minimizing sequence defined in Theorem <ref>. We first fix a_0>1, and restrict q_0>0 if necessary, so that for any a∈(1,a_0] and q∈(0,q_0) such that aq<q_0, one has a^1/2ζ_n_H^ν^2≤ M a q ≤ M q_0<R^2.Thus we have, by definition of I_aq and Lemma <ref>,I_aq ≤(a^1/2ζ_n)=a_2(ζ_n)+a^3/2_3(ζ_n)+_ rem^(1)(a^1/2ζ_n)=a(ζ_n) +(a^3/2-a)_3(ζ_n)+_ rem^(1)(a^1/2ζ_n)-a_ rem^(1)(ζ_n).Moreover, by Theorem <ref>, one haslim_n→∞(ζ_n)=I_q< q(1-mq^2/3),and Lemma <ref> yields-_2(ζ_n)≤ -qand _ rem^(1)(ζ_n)≲ζ_n_H^ν^4≲ a_0^2 q^2. It follows that one has for q∈ (0,q_0) with q_0 sufficiently small and n sufficiently large,_3(ζ_n)=(ζ_n)-_2(ζ_n)-_ rem^(1)(ζ_n)≤ -1/2m q^5/3.Thus we find for n sufficiently large,I_aq≤ a I_q -(a^3/2-a)(m/2)q^5/3+lim sup_n→∞(_ rem^(1)(a^1/2ζ_n)-a_ rem^(1)(ζ_n)).We now estimate the last contribution, treating separately _ rem^(1) and _ rem^(1) in the same spirit as in the proof of Lemma <ref>. Consider _ rem^(1) for instance. We develop each contribution in _ rem^(1)(a^1/2ζ_n) using Neumann series in powers of a^1/2ζ_n:_ rem^(1)(a^1/2ζ_n)=∫_∑_k≥ 4 (a^1/2ζ_n)^kx+ ∫_∑_k_1+k_2+k_3≥ 4c_k_1,k_2,k_3(a^1/2ζ_n)^k_1(∂_x_1{(a^1/2ζ_n)^k_2})(∂_x_1{(a^1/2ζ_n)^k_3})x.The series are absolutely convergent provided q_0 is sufficiently small, and start at index k=4, as pointed out in the proof of Lemma <ref>. 
We now subtract the contributions of a_ rem^(1)(ζ_n) and by the triangle and Cauchy-Schwarz inequalities, _ rem^(1)(a^1/2ζ_n)-a_ rem^(1)(ζ_n)≤∑_k≥ 4(a^k/2-a)ζ_n_L^∞^k-2ζ_n_L^2^2+ ∑_k_1+k_2+k_3≥ 4|c_k_1,k_2,k_3|(a^k_1+k_2+k_3/2-a) ζ_n_L^∞^k_1∂_x_1{ζ_n^k_2}_L^2∂_x_1{ζ_n^k_3}_L^2. Using that | a^k/2-a|≤ (a^3/2-a)( k-2) a^k-3/2, Lemma <ref>,<ref>, that H^ν is a Banach algebra as well as the continuous embedding H^ν⊂ L^∞, we find that one can restrict q_0>0 such that the above series is convergent and yields _ rem^(1)(a^1/2ζ_n)-a_ rem^(1)(ζ_n)≤ C(a_0) (a^3/2-a) q^2,uniformly over q∈(0,q_0) and a∈ (1,a_0] such that aq<q_0. Plugging this estimate in (<ref>) and restricting q_0 if necessary, we deduceI_aq<aI_q for 0<q<aq<q_0,a∈ (1,a_0].Consider now the case when a∈ (1,a_0^p] for an integer p≥ 2. Then a^1/p∈ (1,a_0] and soI_aq=I_a^1/pa^p-1/pq <a^1/pI_a^p-1/pq =a^1/pI_a^1/pa^p-2/pq <a^2/pI_a^p-2/pq <… <aI_q.The result is proved.By a standard argument, Proposition <ref> induces the subadditivity of the mapq↦I_q. There exists q_0>0 such that the map q↦ I_q is strictly subadditive for q∈(0,q_0):I_q_1+q_2< I_q_1 +I_q_2 whenever0<q_1<q_1+q_2<q_0.§.§ Concentration-compactness; proof of Theorem <ref>We now prove Theorem <ref>. Let us first recall Lions' concentration compactness principle <cit.>. [Concentration-compactness]Any sequence {e_n}_n∈⊂ L^1() of non-negative functions such thatlim_n→∞∫_e_nx= I>0admits a subsequence, denoted again {e_n}_n∈, for which one of the following phenomena occurs.*(Vanishing) For each r>0, one haslim_n→∞( sup_x∈∫_x-r^x+re_nx)=0.*(Dichotomy) There are real sequences {x_n}_n∈,{M_n}_n∈,{N_n}_n∈⊂ and I^* ∈(0,I) such that M_n,N_n→∞, M_n/N_n→0, and∫_x_n-M_n^x_n+M_ne_nx→ I^*and ∫_x_n-N_n^x_n+N_ne_nx→ I^*as n→∞.*(Concentration) There exists a sequence {x_n}_n∈⊂ with the property that for each ϵ>0, there exists r>0 with ∫_x_n-r^x_n+r e_nx ≥ I-ϵfor all n∈.We shall apply Theorem <ref> to e_n=γ(ζ_n^2/1-ζ_n+1/3 (1-ζ_n)^3 (∂_x _1{ζ_n/1-ζ_n})^2) +ζ_n^2/δ^-1+ζ_n+1/3 (δ^-1+ζ_n)^3 (∂_x _2{ζ_n/δ^-1+ζ_n})^2,whereζ_nis a minimizing sequence ofoverV_q,Rwithsup_n ζ_n_H^ν^2 <R^2. Such a sequence is known to exist provided thatq∈(0,q_0)is sufficiently small, by Theorem <ref> (and any minimizing sequence is valid whenν=1-θ, by Lemma <ref>; see Remark <ref>).The choice of density is inspired by the recent paper <cit.>, and allows (contrarily to the more evident choicee_n=ζ_n^2) to show, whenν=1-θ, that the constructed limit satisfies(η)=I_qand is therefore a solution to the constrained minimization problem (<ref>). Notice that ∫_ e_nx =(ζ_n)→ I_q(n→∞)and that there exists a constantCsuch thatζ_n_L^2(J)^2= ∫_J ζ_n^2x≤C∫_J e_nxfor any intervalJ⊆.We exclude the two first scenarii in Lemmata <ref> and <ref>, below. Thus the concentration scenario holds and, using (<ref>), we find that there exists{x_n}_n∈⊂such that for anyϵ>0, there existsr>0withη_n_L^2(|x|>r)<ϵ,where{η_n}_n∈{ζ_n(·+x_n)}_n∈. Sincesup_n∈η_n_H^ν()<R, there existsη∈H^ν()satisfyingη_H^ν()<Randη_n⇀ηweakly inH^ν()(up to the extraction of a subsequence). Increasingrif necessary, we have η_L^2(|x|>r)<ϵ.Now, considerχa smooth cut-off function, such thatχ(x)=1for|x|≤randχ(x)=0for|x|≥2r. One hasχη_n⇀χηweakly inH^ν(), and by compact embedding <cit.>, we find that one can extract a subsequence, still denotedη_n, such thatχ(η_n-η)_L^2()≤ϵfornsufficiently large. 
Combining the above estimates, we find that the subsequence satisfiesη_n-η_L^2()<3ϵfornsufficiently large.By Cantor's diagonal extraction process, we construct a subsequence satisfyingη_n-η_L^2→0; and by interpolation,η_n-η_H^s→0for anys∈[0,ν).In particular(γ+δ)η_L^2^2=q, and recallη_H^ν≤sup_n ζ_n_H^ν <R, thusη∈V_q,R. Ifν>1-θ, we deduce(η_n)→(η)asn→∞by Lemma <ref>. If on the other handν=1-θwe use the weak lower semi-continuity argument in the proof of Lemma <ref> to deduce thatI_q≤(η)≤lim_n→∞ (η_n)=I_q. In either case we have that(η)=I_q.The constructed functionη∈H^ν()is therefore a solution to the constrained minimization problem (<ref>). In particular, it solves the Euler-Lagrange equation (<ref>) withα<0provided thatq∈(0,q_0)is sufficiently small (proceeding as in Lemma <ref>), and therefore satisfies (<ref>) withc^2=(-α)^-1>0.This proves the first item of Theorem <ref>, as well as the second item — except for the strong convergence inH^ν()whenν=1-θ>1/2. This result follows from the fact that weak convergence together with convergence of the norm implies strong convergence in a Hilbert space (applied to(γ^1/2(1-ζ_n)^3/2 (∂_x _1{ζ_n/1-ζ_n}), (δ^-1+ζ_n)^3/2 (∂_x _2{ζ_n/δ^-1+ζ_n}))∈(L^2())^2).There remains to prove the estimates of the third item. Proceeding as in Lemma <ref>, we findζ_H^ν^2≤ M q, uniformly over the minimizers ofoverV_q,R. Moreover, by Lemma <ref>, one has-α q =-α(γ+δ)ζ_L^2^2=1/2⟨(ζ),ζ⟩=_2(ζ)+3/2_3(ζ)+1/2_ rem^(2)(ζ)=3/2(ζ)-1/2_2(ζ)-3/2_ rem^(1)(ζ)+1/2_ rem^(2)(ζ).where_2(ζ)≥qand_ rem^(1)(ζ)+_ rem^(2)(ζ) =Ø(q^2).Altogether, using that(ζ)< q(1-mq^2/3)by Lemma <ref>, we find-α q<3/2 q(1-mq^2/3) -1/2 q+Ø(q^2)= q(1-3/2mq^2/3)+Ø(q^2),and the result follows. Theorem <ref> is proved.[Excluding “vanishing”] No subsequence of {e_n}_n∈ has the “vanishing” property.By Lemmata <ref> and <ref>, one has for n sufficiently largeq(1-mq^2/3) > (ζ_n)=_2(ζ_n) +_3(ζ_n)+_ rem^(1)(ζ_n) ≥ q + _3(ζ_n)+_ rem^(1)(ζ_n)and hencem q^5/3≤_3(ζ_n)+_ rem^(1)(ζ_n)≲ζ_n_L^∞ .On the other hand, one hasζ_n_L^∞((x-1/2,x+1/2))≤φ_x ζ_n_L^∞()≤φ_x ζ_n_L^2()^1-1/2νφ_x ζ_n_H^ν()^1/2ν≤ C ζ_n_L^2((x-1, x+1)^1-1/2νζ_n_H^ν()^1/2ν,where φ_x=φ(·-x) with φ a smooth function such that φ=1 for |x|≤ 1/2, φ=0 for |x|≥ 1, and 0≤φ≤ 1 otherwise; and using Lemma <ref>,<ref> and Lemma <ref>,<ref>. Since C is independent of x∈ℝ, this shows thatζ_n_L^∞≤ CR^1/2νsup_x∈ζ_n_L^2((x-1, x+1))^1-1/2ν.Hence one has for n sufficiently largeq^5/3≲sup_x∈ζ_n_L^2((x-1, x+1))^1-1/2ν,from which, using (<ref>), it follows that “vanishing” cannot occur. [Excluding “dichotomy”] No subsequence of {e_n}_n∈ has the “dichotomy” property.We denote by χ∈^∞(^+) a non-increasing function withχ(r)=1if0≤ r≤ 1 and χ(r)=0if r≥ 2,and such thatχ=χ_1^2,1-χ=χ_2^2where χ_1 and χ_2 are smooth. For instance, set χ(r)=1-(1-χ^2(r))^2 with χ∈^∞(^+) non-increasing and satisfying (<ref>). Define η_n=ζ_n(· +x_n), andη_n^(1)(x)=η_n(x)χ(|x|/M_n)and η_n^(2)(x)=η_n(x)(1-χ(2|x|/N_n) ),noting that(η_n^(1))⊂ [-2M_n,2M_n] and (η_n^(2))⊂∖ [-N_n/2,N_n/2].After possibly extracting a subsequence, we canassume thatη_n^(1)_L^2^2 →q^*/γ+δwith q^*∈ [0,q]. 
For n sufficiently large, one has N_n>N_n/2>2M_n>M_n and thereforeη_n^(2)_L^2^2 =η_n_L^2(|x|>N_n)^2+η_n^(2)_L^2(M_n<|x|<N_n)^2 =q/γ+δ-η_n^(1)_L^2^2-η_n_L^2(M_n<|x|<N_n)^2+η_n^(1)_L^2(M_n<|x|<N_n)^2+η_n^(2)_L^2(M_n<|x|<N_n)^2→q-q^*/γ+δsince η_n^(1)_L^2(M_n<|x|<N_n)^2+η_n^(2)_L^2(M_n<|x|<N_n)^2≤ 2η_n_L^2(M_n<|x|<N_n)^2 andη_n_L^2(M_n<|x|<N_n)^2 ≤ C∫_M_n<|x-x_n|<N_n e_nx→ 0by (<ref>) and the assumption of the dichotomy scenario.We claim that (η_n^(1))→ I^*. To show this, note that(η_n^(1))= ∫_(η_n^(1))^2/1- η_n^(1)+1/3 (1-η_n^(1))^3 (∂_x _1{η_n^(1)/1-η_n^(1)})^2 x,where|∫_(η_n^(1))^2/1- η_n^(1) x- ∫_|x|≤ M_nη_n^2/1- η_n x | ≲∫_M_n≤ |x-x_n|≤ N_nη_n^2x→ 0.We next find that∫_ (1-η_n^(1))^3 (∂_x _1{η_n^(1)/1-η_n^(1)})^2 -(1-η_n^(1))^3 (∂_x _1{η_n^(1)/1-η_n})^2 x=∫_ (1-η_n^(1))^3 (∂_x _1{η_n^(1)/1-η_n^(1)}+∂_x _1{η_n^(1)/1-η_n}) ∂_x _1{η_n^(1)(η_n^(1)-η_n)/(1-η_n^(1))(1-η_n)} x.Noting thatη_n^(1)(η_n^(1)-η_n)=-χ_1^2(|x|/M_n) χ_2^2(|x|/M_n) η_n^2we can estimate|∫_ (1-η_n^(1))^3 (∂_x _1{η_n^(1)/1-η_n^(1)}+∂_x _1{η_n^(1)/1-η_n}) ∂_x _1{η_n^(1)(η_n^(1)-η_n)/(1-η_n^(1))(1-η_n)} x | ≲χ_1(|·|/M_n) χ_2(|·|/M_n) η_n_L^∞χ_1(|·|/M_n) χ_2(|·|/M_n) η_n_H^νη_n_H^ν≲χ_1(|·|/M_n) χ_2(|·|/M_n) η_n_L^2^1-1/2νχ_1(|·|/M_n) χ_2(|·|/M_n) η_n_H^ν^1+1/2νη_n_H^ν≲η_n_L^2(M_n≤ |x|≤ N_n)^1-1/2ν→ 0by Lemma <ref> <ref> and Lemma <ref>,<ref>, and using η_n^(1)_H^ν≲η_n_H^ν≤ R. On the other hand,|∫_ (1-η_n^(1))^3 (∂_x _1{η_n^(1)/1-η_n})^2 -(1-η_n^(1))^3 χ^2(|·|/M_n)(∂_x _1{η_n/1-η_n})^2 x|=|∫_ (1-η_n^(1))^3( (∂_x _1{η_n^(1)/1-η_n}) + χ(|·|/M_n)(∂_x _1{η_n/1-η_n})) [∂_x _1, χ(|·|/M_n)] (η_n/1-η_n) x|≲ M_n^-1η_n_H^ν^2by Lemma <ref> <ref>. Finally∫_ (1-η_n^(1))^3 χ^2(|·|/M_n)(∂_x _1{η_n/1-η_n})^2 x =∫_|x|≤ M_n (1-η_n)^3 (∂_x _1{η_n/1-η_n})^2 x +o(1)as the remainder term is bounded by a constant times ∫_M_n ≤ |x-x_n|≤ N_n e_nx. An analogous argument forreveals that(η_n^(1))=∫_|x-x_n|≤ M_n e_nx+o(1)→ I^*and by similar reasoning one finds that(η_n^(2))=∫_|x-x_n|≥ N_n e_nx+o(1)→ I_q-I^*. We next claim that q^*>0. Indeed, if q^*=0, we setη_n^(2) c_n η_n^(2) ,c_nq^1/2/(γ+δ)^1/2η_n^(2)_L^2.By (<ref>) and since q^*=0, one has c_n→ 1. Thus we note(η_n^(2))-(η_n^(2))≲η_n^(2)-η_n^(2)_H^ν→ 0by Lemma <ref> andlim sup_n→∞η_n^(2)_H^ν<R,resulting in the contradictionI_q≤(η_n^(2)) → I_q-I^*<I_q as n→∞. We obtain a similar contradiction involving η_n^(1) and (<ref>) if we assume that q^*=q. Hence, 0<q^*<q.In view of the above, we can rescaleη_n^(1)(q^*)^1/2/(γ+δ)^1/2η_n^(1)_L^2η_n^(1) and η_n^(2)(q-q^*)^1/2/(γ+δ)^1/2η_n^(2)_L^2η_n^(2) ,so that (γ+δ)η_n^(1)_L^2^2=q and (γ+δ)η_n^(2)_L^2^2=q-q_* for any n∈. One easily checks thatlim sup_n→∞η_n^(1)_H^ν<R, lim sup_n→∞η_n^(2)_H^ν<Rand thatlim_n→∞ ((η_n^(1))-(η_n^(1))) = lim_n→∞ ((η_n^(2))-(η_n^(2)))= 0.Thus we arrive at the following contradiction to Corollary <ref>:I_q < I_q^*+I_q-q^*≤lim_n→∞((η_n^(1))+(η_n^(2)))=I^*+I_q-I^*=I_q.This concludes the proof of Lemma <ref>. § LONG-WAVE ASYMPTOTICSIn this section we prove that the solutions of (<ref>) obtained in Theorem <ref> are approximated by solutions of the corresponding KdV equation in the long-wave regime,lettingq→0in the constrained minimization problem (<ref>). 
Indeed, if we introduce the scaling ζ(x)=S_ KdV(ξ)(x) q^2/3ξ(q^1/3x) in (<ref>) and denote α+1=α_0q^2/3, then we find that the leading-order part of the equation as q→0 is α_0(γ+δ)ξ+3(γ-δ^2)ξ^2/2-(γ+δ^-1)/3∂_x^2 ξ=0. Recall (see <cit.>) that ξ∈ L^2() satisfying (<ref>) uniquely defines (up to spatial translation) a solitary-wave solution of the KdV equation, with explicit formula ξ_ KdV(x)=α_0(γ+δ)/(δ^2-γ) sech^2(1/2√(3α_0(γ+δ)/(γ+δ^-1)) x). Equation (<ref>) can also be obtained as the Euler-Lagrange equation associated with the minimization of the scalar functional _ KdV (consistently with Lemma <ref>): _ KdV(ξ)=∫_(γ-δ^2)ξ^3+(γ+δ^-1)/3(∂_xξ)^2 x, over the set U_1 {ξ∈ H^1() : (γ+δ)ξ_L^2^2=1}. Indeed, any minimizer satisfies the Euler-Lagrange equation d_ KdV(ξ)+2(γ+δ)α_0ξ=0, which is (<ref>) with α_0 the Lagrange multiplier. Testing the constraint (γ+δ)ξ_L^2^2=1 with the above explicit formula, we find that (γ+δ)α_0 =3/4((δ^2-γ)^4/((γ+δ)(γ+δ^-1)))^1/3. Additional computations show that I_ KdV=inf{_ KdV(ξ) : ξ∈ U_1}=_ KdV(ξ_ KdV)=-3/5α_0. We aim at proving that the variational characterization of (<ref>), and therefore its explicit solutions, approximate (after suitable rescaling) the corresponding one of (<ref>), namely (<ref>), in the limit q→0. §.§ Refined estimates We start by establishing estimates on ζ∈ D_q,R, the set of minimizers over V_q,R provided by Theorem <ref>. Here and below, we rely on extra assumptions on the Fourier multipliers, which are assumed to be strongly admissible in the sense of Definition <ref>. There exists q_0>0 such that ζ∈ H^s for any s≥ 0, and there exists M_s>0 such that ζ_H^s^2 ≤ M_s q uniformly for q∈(0,q_0) and ζ∈ D_q,R. Once the regularity property ζ∈ H^s has been established, the corresponding estimate is obtained as in the proof of Lemma <ref>, thus we focus only on the regularity issue. This follows from the Euler-Lagrange equation (<ref>) and elliptic estimates. However, the ellipticity property is not straightforward to ascertain when γ≠ 0, and we will make use of paradifferential calculus. These tools are recalled in Appendix <ref>. By assumption, one has ζ∈ H^ν with ν>1/2 and ν≥ 1-θ>0. We fix ϵ∈ (0,ν-1/2) and r=min(1-θ,ν-1/2-ϵ)>0. We show below that ζ∈ H^ν satisfying (<ref>) yields ζ∈ H^ν+r, and the argument can be bootstrapped to obtain arbitrarily high regularity, ζ∈ H^s, s≥ 0. First we write (<ref>) as the equality, valid in H^-ν, 2/3 h_2^-2∂_x _2{h_2^3∂_x _2 {h_2^-1ζ}}+2γ/3 h_1^-2∂_x _1{h_1^3∂_x _1 {h_1^-1ζ}} = 2α(γ+δ)ζ+2h_1+γ h_2/h_1 h_2ζ-h_1^2-γ h_2^2/h_1^2h_2^2ζ^2+(h_2∂_x _2{h_2^-1ζ})^2-γ(h_1∂_x _1{h_1^-1ζ})^2 R(ζ), denoting h_1=1-ζ, h_2=δ^-1+ζ, and recalling α∈(-3/2,-1/2). Using Lemma <ref>, <ref> and Lemma <ref>,<ref>, one easily checks that R(ζ)∈ H^2(ν-(1-θ))-1/2-ϵ in the case 1/2<ν≤ 1/2+(1-θ), and R(ζ)∈ H^ν-(1-θ) if ν> 1/2+(1-θ). In other words, we find R(ζ) ∈ H^ν-2(1-θ)+r. Above, we used that ζ_L^∞<min(1,δ^-1) and therefore h_1(x)^n-1∈ H^ν and h_2(x)^n-(δ^-1)^n∈ H^ν for any n∈. This holds as well in the Hölder space W^r,∞ since r∈(0,ν-1/2). In particular, we have ∀ n∈, h_1(x)^n ∈Γ^0_r and ∂_x_i∈Γ^1-θ_r, recalling Definition <ref>. By Lemma <ref>, we find ζ h_1^-1-T_h_1^-2ζ∈ H^ν+r, and Lemma <ref> and Lemma <ref> yield h_1^-2∂_x_1{h_1^3∂_x_1{ζ h_1^-1-T_h_1^-2ζ}}∈ H^ν-2(1-θ)+r. By Lemma <ref>, one has ∂_x_1T_h_1^-2ζ=T_ik_1(k)T_h_1^-2ζ∈ H^ν-(1-θ). 
We deduce by Lemma <ref> that ik_1(k)h_1^-2∈Γ^1-θ_r and∂_x_1T_h_1^-2ζ-T_ik_1(k)h_1^-2ζ∈ H^ν-(1-θ)+r,from which we deduce as aboveh_1^-2∂_x_1{h_1^3 ( ∂_x_1{T_h_1^-2ζ}-T_ik_1(k)h_1^-2ζ)}∈ H^ν-2(1-θ)+r.Using that T_ik_1(k)h_1^-2ζ∈ H^ν-(1-θ) and Lemma <ref>, one obtains(h_1^3-T_h_1^3)T_ik_1(k)h_1^-2ζ∈ H^ν-(1-θ)+r.As above, it follows by Lemma <ref> and Lemma <ref> thath_1^-2∂_x_1{(h_1^3-T_h_1^3)T_ik_1(k)h_1^-2ζ}∈ H^ν-2(1-θ)+r.We use again Lemma <ref> and Lemma <ref> to deducethat -(k_1(k))^2h_1∈Γ^2(1-θ)_r andh_1^-2(∂_x_1T_h_1^3 T_ik_1(k)h_1^-2ζ-T_-(k_1(k))^2h_1ζ)∈ H^ν-2(1-θ)+r.Finally, Lemma <ref> yields(h_1^-2-T_h_1^-2)T_-(k_1(k))^2h_1ζ∈H^ν-2(1-θ)+rand Lemma <ref> yields(T_h_1^-2T_-(k_1(k))^2h_1-T_-(k_1(k))^2h_1^-1)ζ∈H^ν-2(1-θ)+r.Collecting (<ref>)–(<ref>), we provedh_1^-2∂_x _1{h_1^3∂_x _1 {h_1^-1ζ}} -T_-(k_1(k))^2h_1^-1ζ∈ H^ν-2(1-θ) +r. By (<ref>), (<ref>) and the corresponding estimate for the second contribution in the left-hand side of (<ref>), one findsT_2/3h_2^-1(ik_2(k))^2+2γ/3h_1^-1(ik_1(k))^2ζ∈ H^ν-2(1-θ)+r.Moreover, since ζ∈ H^ν, one has 2/3h_2^-1(x)+2γ/3h_1^-1(x) ∈Γ^0_r and thereforeT_2/3h_2^-1(x)+2γ/3h_1^-1(x)ζ∈ H^ν⊂ H^ν-2(1-θ)+r.Adding the two terms yieldsT_a(x,k)ζ∈ H^ν-2(1-θ)+rwitha(x,k)2/3h_2^-1(x)(1+(k_2(k))^2)+2γ/3h_1^-1(x)(1+(k_1(k))^2).Notice that a(x, k)∈Γ^2(1-θ)_r anda(x, k)^-1∈Γ^-2(1-θ)_r.In particular, Lemma <ref> and (<ref>) yieldT_a(x,k)^-1T_a(x,k)ζ∈ H^ν+r.Additionally, by Lemma <ref>, we haveζ-T_a(x,k)^-1T_a(x,k)ζ=T_a(x,k)^-1a(x,k)ζ-T_a(x,k)^-1T_a(x,k)ζ∈ H^ν+r.Adding the two terms shows that ζ∈ H^ν+r, which concludes the proof.In the one-layer situation, namely γ=0, the use of paradifferential calculus is not necessary, and Lemma <ref> can be obtained through a direct use of Lemmata <ref> and <ref>. In particular, Lemma <ref>and subsequent results hold for (non-necessarily strongly) admissible Fourier multipliers, in the sense of Definition <ref>.The following lemma shows that the minimizers ofoverV_q,R, as provided by Theorem <ref>, scale as (<ref>).There exists q_0>0 and C>0 such that the estimatesζ_L^∞ ≤ Cq^2/3, ∂_xζ_L^2^2 ≤ Cq^5/3, ∂_x^2ζ_L^2^2 ≤ Cq^7/3hold uniformly for q∈(0,q_0) and ζ∈ D_q,R, the set of minimizers ofover V_q,R.Let ζ be minimizer over V_q,R. Since 2(γ+δ)αζ+(ζ)=0, we get from Lemma <ref> that2α(γ+δ)ζ +_2(ζ) = 2α(γ+δ)ζ+(ζ)-_3(ζ)-_ rem^(1)(ζ)=-_3(ζ)-_ rem^(1)(ζ),where_3(ζ)=γ_3(ζ)+_3(ζ),and_3(ζ) =3ζ^2-(∂_x_1{ζ})^2+2∂_x_1{ζ∂_x_1{ζ}}-2/3∂_x_1{∂_x_1{ζ^2}}-4/3ζ∂_x_1{∂_x_1{ζ}}, _3(ζ) =-3δ^2ζ^2+(∂_x_2{ζ})^2-2∂_x_2{ζ∂_2{ζ}}+2/3∂_x_2{∂_x_2{ζ^2}}+4/3∂_x_2{∂_x_2{ζ}}.We also have that_2(ζ)=γ_2(ζ)+_2(ζ)=2(γ+δ)ζ-2/3(γ∂_x_1{∂_x_1{ζ}}+δ^-1∂_x_2{∂_x_2{ζ}}).In frequency space equation (<ref>) becomes2((γ+δ)α+γ+δ+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))ζ(k)=-ℱ(_3(ζ)+_ rem^(1)(ζ))(k).By using the estimate for α in Theorem <ref>, we deduceζ(k)≤1/2|ℱ(_3(ζ)+_ rem^(1)(ζ))(k)|/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2).The estimates follow from (<ref>) and a suitable decomposition into high- and low-frequency components. In order to estimate the right-hand-side, we heavily make use of Lemma <ref>: ζ_H^n^2≲ q for all n∈ℕ. 
This will be used again throughout the proof without reference.We first deduce from Lemma <ref> thatℱ(_3(ζ))_L^1≲_3(ζ)_H^1≲ qandℱ(_3(ζ))_L^∞≲ζ_L^2^2+∂_xζ_L^2^2+ζ_L^2∂_x^2ζ_L^2≲ qand, similarly,ℱ(_ rem^(1)(ζ))_L^1+ℱ(_ rem^(1)(ζ))_L^∞≲ q^3/2.By the definition of admissible Fourier multipliers in (<ref>), there exists c_0,k_0>0 such that∀ k∈∖[-k_0,k_0],|k|(k) ≥ c_0.We also assume that (k)>0, and therefore there exists c_0'>0 such that∀ k∈ [-k_0,k_0], (k) ≥ c_0'.As a consequence, we havesup_k∈∖ [-k_0,k_0]1/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2)≲ 1and∫_-k_0^k_01/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2) k≲ q^-1/3. Now, we decomposeζ_L^∞≤1/√(2π)ζ_L^1≤1/2∫_1/ mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2)|ℱ(_3(ζ)+_ rem^(1)(ζ))(k)| k.into high- and low-frequency components and estimate each part. For the low frequency part we have∫_-k_0^k_01/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2)|ℱ(_3(ζ)+_ rem^(1)(ζ))(k)| k ≤ℱ(_3(ζ)+_ rem^(1)(ζ))(k)_L^∞×∫_-k_0^k_01/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2) k≲ q^2/3.For the high-frequency part∫_∖ [-k_0,k_0]1/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2)|ℱ(_3(ζ)+_ rem(ζ))(k)| k≤ℱ(_3(ζ)+_ rem(ζ))(k)_L^1sup_k∈∖ [-k_0,k_0]1/mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2)≲ q.Combining the above estimates in (<ref>) gives us the inequality (<ref>).Let us now turn to (<ref>). By (<ref>) we have∂_xζ_L^2^2≤1/4∫_k^2ℱ(_3(ζ)+_ rem^(1)(ζ))^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2 kWe estimate the low-frequency part as above:∫_-k_0^k_0k^2ℱ(_3(ζ)+_ rem^(1)(ζ))(k)^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2 dk ≤ℱ(_3(ζ)+_ rem^(1)(ζ))(k)_L^∞^2 ×∫_-k_0^k_0k^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2k≲ q^5/3.As for the high frequency part, we notice thatkℱ(_ rem^(1)(ζ))_L^2≲ q^3/2andkℱ(_3(ζ))_L^2 ≲ζ_L^∞∂_xζ_L^2+∂_xζ_L^2∂_x^2ζ_H^1+ζ_L^∞∂_x^3ζ_L^2≲ q^7/6+q^1/2∂_xζ_L^2,where we used (<ref>). It follows that∫_∖[-k_0,k_0]k^2ℱ(_3(ζ)+_ rem^(1)(ζ))(k)^2/(cq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2k ≤kℱ(_3(ζ)+_ rem^(1)(ζ))_L^2^2×sup_k∈∖ [-k_0,k_0]1/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2≲ q^7/3+q∂_xζ_L^2^2 .Combining the high and low frequency estimates into (<ref>) gives us∂_xζ_L^2^2≲ q^5/3+q∂_xζ_L^2^2,hence, for q_0 sufficiently small we get (<ref>).We conclude with the proof of (<ref>). 
By (<ref>) we have∂_x^2ζ_L^2^2≤1/4∫_k^4ℱ(_3(ζ)+_ rem^(1)(ζ))^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2 k.Now we remark that by the Cauchy-Schwarz inequality and integration by parts, one has ∀ j≥ 2, ∂_x^j ζ_L^2^2≤∂_xζ_L^2∂_x^2j-1ζ_L^2≲ q^4/3.Since ζ_L^∞ and ∂_xζ_L^2 satisfy similar estimates by (<ref>) and (<ref>), we havek^2ℱ(_3(ζ))_L^2 ≲ζ_L^∞∂_x^2ζ_L^2+∂_xζ_H^1∂_xζ_L^2+ζ_L^∞∂_x^4ζ_L^2+ ∂_xζ_H^1∂_x^3ζ_L^2+∂_x^2ζ_H^1∂_x^2ζ_L^2≲ q^4/3,andkℱ(_3(ζ))_L^∞ ≲ζ_L^2∂_xζ_L^2+∂_xζ_L^2∂_x^2ζ_L^2+ζ_L^2∂_x^3ζ_L^2≲ q^8/6+q^1/2∂_x^3ζ_L^2≲ q^8/6+q^1/2∂_x^2ζ_L^2^1/2∂_x^4ζ_L^2^1/2≲ q^8/6+q^5/6∂_x^2ζ_L^2^1/2≲ q^8/6+q^1/3∂_x^2ζ_L^2,in addition tokℱ(_ rem^(1)(ζ))_L^∞+k^2ℱ(_ rem^(1)(ζ))_L^2≲ q^3/2.Thus, proceeding as above, we find∫_-k_0^k_0k^4ℱ(_3(ζ)+_ rem^(1)(ζ))(k)^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2 dk ≤kℱ(_3(ζ)+_ rem^(1)(ζ))(k)_L^∞^2 ×∫_-k_0^k_0k^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2k≲ q^7/3+q^1/3∂_x^2ζ_L^2^2and∫_∖[-k_0,k_0]k^4ℱ(_3(ζ)+_ rem^(1)(ζ))(k)^2/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2k ≤k^2ℱ(_3(ζ)+_ rem^(1)(ζ))_L^2^2×sup_k∈∖ [-k_0,k_0]1/(mq^2/3+1/3(γ(k_1(k))^2+δ^-1(k_2(k))^2))^2≲ q^8/3 .Plugging these estimates into (<ref>) and restricting q∈(0,q_0) if necessary yields (<ref>), and the proof is complete.§.§ Convergence results; proof of Theorem <ref>We are now in position to relate the minimizers ofinD_q,Rwith the corresponding solution of theKdV equation. We first compare I_ KdV=inf{_ KdV(ξ) : ξ∈ U^1} andI_q= inf{(ζ) :ζ∈ V_q,R}.There exists q_0>0 such that the quantities I_q and I_ KdV satisfyI_q =q+_ KdV(ζ)+𝒪(q^2), uniformly over minimizers ofin V_q,R,I_q =q+q^5/3I_ KdV+𝒪(q^2)=q-3/5α_0 q^5/3+𝒪(q^2). uniformly over q∈(0,q_0).Recall that, from Lemma <ref>, one has for any ζ∈ H^2, (ζ)=(γ+δ)ζ_L^2^2+_ KdV(ζ)+_ rem(ζ),with_ rem(ζ)≤ C(h_0^-1,ζ_H^1)(ζ_L^∞^2ζ_L^2^2+ζ_L^∞∂_xζ_L^2^2+∂_x^2ζ_L^2∂_xζ_L^2).Let ζ be a minimizer of in V_q,R and note that ζ∈ H^2 by Lemma <ref>. Using Lemma <ref> and (<ref>), we obtainI_q=(ζ)=q+_ KdV(ζ)+𝒪(q^2).Introducing ξ=S_ KdV^-1(ζ), we find that ξ∈ U_1 and_ KdV(ζ)=q^5/3_ KdV(ξ)≥ q^5/3I_ KdV.Thus we foundI_q≥ q+q^5/3I_ KdV+𝒪(q^2).Similarly, notice that ζ=S_ KdV(ξ_ KdV) satisfies ζ∈ V_q,R (for q sufficiently small) and, by (<ref>) I_q≤(ζ) = q +_ KdV(ζ)+𝒪(q^2).Since _ KdV(ζ)=q^5/3_ KdV(ξ_ KdV)=q^5/3I_ KdV, we deduceI_q≤ q+q^5/3I_ KdV+𝒪(q^2).We have thus proved (<ref>).This next result is the first part of Theorem <ref>, which relates the minimizers ofinV_q,Rwith the minimizers of_KdVinU_1. Let q_0>0 be such that Theorem <ref> and Lemma <ref> hold. Then for any q∈(0,q_0) and ζ∈ D_q,R, there exists x_ζ∈ such thatq^-2/3ζ(q^-1/3·)-ξ_ KdV(·-x_ζ)_H^1≲ q^1/6 ,uniformly with respect to q∈ (0,q_0) and ζ∈ D_q,R.Assume that there exists ϵ>0 and a sequence ζ_n∈ D_q_n,R with q_n↘ 0 such that∀ n∈, inf_x_0∈q_n^-2/3ζ_n(q_n^-1/3·)-ξ_ KdV(·-x_0)_H^1≥ϵ.Denote for simplicity ξ_n(x)=q_n^-2/3ζ_n(q_n^-1/3x). From (<ref>) in Lemma <ref>, we haveI_q_n=(ζ_n)=q_n+_ KdV(ζ_n)+Ø(q_n^2)=q_n+q_n^5/3_ KdV(ξ_n)+Ø(q_n^2).By (<ref>) in Lemma <ref>, we deduce that _ KdV(ξ_n)-I_ KdV = Ø(q_n^1/3) .In particular {ξ_n}_n∈ is a minimizing sequence for _ KdV satisfying the constraint (γ+δ)ξ_n_L^2^2=1. It follows <cit.> that there exists a sequence {x_n}_n∈ such thatξ_n(·-x_n)-ξ_ KdV_H^1→ 0,which contradicts (<ref>).The quantitative estimate follows from the argument in <cit.>. From the above, we may apply Lemma 4.1 therein and define uniquely x_ζ such that ⟨ξ,ξ_ KdV(·-x_ζ)⟩=0, where we denote ξ=q^-2/3ζ(q^-1/3·). 
Following the above estimates and <cit.>, we find ξ-ξ_ KdV(·-x_ζ)_H^1^2≲_ KdV(ξ)-_ KdV(ξ_ KdV)≲ q^1/3. This concludes the proof. Next we prove the second part of Theorem <ref>, which relates the Lagrange multiplier α to that of the KdV equation, α_0. The number α, defined in Theorem <ref>, satisfies α+1=q^2/3α_0+𝒪(q^5/6), uniformly over D_q,R. By Lemma <ref>, we have ⟨(ζ),ζ⟩=2(γ+δ)∫_ζ^2 x+ ⟨_ KdV(ζ),ζ⟩ +⟨_ rem(ζ),ζ⟩, where, using Lemma <ref>, one has ⟨_ rem(ζ),ζ⟩≲ q^7/3, uniformly for minimizers in V_q,R, and so ⟨(ζ),ζ⟩=2q+q^5/3⟨_ KdV(S_ KdV^-1(ζ)),S_ KdV^-1(ζ)⟩+𝒪(q^7/3). By Theorem <ref> there exists x_ζ such that S_ KdV^-1(ζ)-ξ_ KdV(·-x_ζ)_H^1=𝒪(q^1/6) as q↘ 0. This implies that ⟨_ KdV(S_ KdV^-1(ζ)),S_ KdV^-1ζ⟩-⟨_ KdV(ξ_ KdV),ξ_ KdV⟩=𝒪(q^1/6) as q↘ 0, and therefore ⟨(ζ),ζ⟩=2q+q^5/3⟨_ KdV(ξ_ KdV),ξ_ KdV⟩+𝒪(q^11/6). Now recall the Euler-Lagrange equations (<ref>) and (<ref>), which immediately yield 2α(ζ) q =-⟨(ζ),ζ⟩, 2α_0 =-⟨_ KdV(ξ_ KdV),ξ_ KdV⟩, and the result follows. § NUMERICAL STUDY In this section, we provide numerical illustrations of our results as well as some numerical experiments for situations which are not covered by our results. We first describe our numerical scheme, before discussing the outcome of these simulations. Description of the numerical scheme Our numerical scheme computes solutions of (<ref>) for a given value of c (and hence does not follow the minimization strategy developed in this work). Because we seek smooth localized solutions and our operators involve Fourier multipliers, it is very natural to discretize the problem through spectral methods <cit.>. We are thus left with the problem of finding a root of a nonlinear function defined on a finite- (but large-) dimensional space. To this aim, we employ the Matlab routine fsolve, which implements the so-called trust-region dogleg algorithm <cit.> based on Newton's method. For an efficient and successful outcome of the method, it is important to have a fairly precise initial guess. To this aim, we use the exact solution of the Green-Naghdi model, which is either explicit (in the one-layer situation <cit.>) or obtained as the solution of an ordinary differential equation (in the bi-layer situation <cit.>) that we solve numerically. Our solutions are compared with the corresponding ones of the full Euler system. To compute the latter, we use the Matlab script developed by Per-Olav Rusås and documented in <cit.> in the bilayer configuration, while in the one-layer case the Matlab script of Clamond and Dutykh <cit.> offers faster and more accurate results (although limited to relatively small velocities). Two-layer setting The solitary-wave solutions of the Miyata-Choi-Camassa system have been studied in the original papers of <cit.>. In particular, we know that for a given amplitude, or a given velocity, there exists at most one solitary wave (up to spatial translations). The solitary waves are of elevation if δ^2-γ>0, of depression if δ^2-γ<0, and do not exist if δ^2=γ. Contrary to the one-layer situation, the bilayer Green-Naghdi model admits solitary waves only for a finite range of velocities (resp. amplitudes), c∈(1,c_max(γ,δ)) (resp. |a|∈(0,a_max(γ,δ))). With our choice of parameters (namely γ=1, δ=1/2), one has c_ max=√(1+1/8)≈ 1.06066 and |a_ max|=1/2. As the velocity approaches c_max, the solitary wave broadens and its mass keeps increasing; such broadened profiles are often referred to as "table-top" profiles, and they lead to bore profiles in the limit c→ c_max.
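Before turning to the computed profiles, the basic approach just described (spectral discretization of the traveling-wave equation plus a Newton-type root finder seeded with an explicit long-wave profile) can be illustrated in a few self-contained lines on the limiting KdV equation (<ref>). The following minimal Python sketch is ours, not the Matlab code used for the figures; all numerical values, including α_0, are arbitrary illustrative choices.

```python
import numpy as np
from scipy.optimize import fsolve

# Two-layer parameters as in the text (gamma = 1, delta = 1/2); alpha0 > 0
# is an arbitrary illustrative choice, not a value imposed by the analysis.
gam, dlt, alpha0 = 1.0, 0.5, 0.1

N, L = 512, 200.0                               # grid size, domain length
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)    # spectral wavenumbers

def d2(u):
    # second derivative by Fourier differentiation (periodic grid)
    return np.real(np.fft.ifft(-(k**2) * np.fft.fft(u)))

def residual(u):
    # alpha0*(gam+dlt)*xi + (3/2)*(gam-dlt^2)*xi^2 - ((gam+1/dlt)/3)*xi''
    return (alpha0 * (gam + dlt) * u + 1.5 * (gam - dlt**2) * u**2
            - (gam + 1.0 / dlt) / 3.0 * d2(u))

# Closed-form solitary wave (a depression wave here, since delta^2 < gamma);
# a slightly perturbed copy of it serves as the initial guess.
A = alpha0 * (gam + dlt) / (dlt**2 - gam)
kap = 0.5 * np.sqrt(3.0 * alpha0 * (gam + dlt) / (gam + 1.0 / dlt))
xi_exact = A / np.cosh(kap * x) ** 2

# The problem is translation invariant, so the Jacobian is singular along
# xi'; convergence in practice relies on the accurate, even initial guess.
xi = fsolve(residual, 0.9 * xi_exact)
print("max |residual|      :", np.max(np.abs(residual(xi))))
print("max |xi - xi_exact| :", np.max(np.abs(xi - xi_exact)))
```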
Two-layer setting. The solitary-wave solutions of the Miyata-Choi-Camassa system have been studied in the original papers <cit.>. In particular, we know that for a given amplitude, or a given velocity, there exists at most one solitary wave (up to spatial translations). The solitary waves are of elevation if δ^2-γ>0, of depression if δ^2-γ<0, and do not exist if δ^2=γ. Contrary to the one-layer situation, the bilayer Green-Naghdi model admits solitary waves only for a finite range of velocities (resp. amplitudes), c∈(1,c_max(γ,δ)) (resp. |a|∈(0,a_max(γ,δ))). With our choice of parameters (namely γ=1, δ=1/2), one has c_max=√(1+1/8)≈1.06066 and |a_max|=1/2. As the velocity approaches c_max, the solitary wave broadens and its mass keeps increasing. These types of profiles are often referred to as “table-top” profiles, and they lead to bore profiles in the limit c→c_max.

When the velocity is small, the numerically computed solitary wave solutions of the bilayer original (_i=1) and full dispersion (_i=_i^imp) Green-Naghdi systems and the one of the water waves system (and, to a lesser extent, the KdV model) agree, so that the curves corresponding to the three former models are indistinguishable; see Figure <ref>. For larger velocities, as in Figure <ref>, the numerically computed solitary wave solutions of the Green-Naghdi and water waves systems are very different from the sech^2 profile of the solitary wave solution to the Korteweg-de Vries equation. It is interesting to see that both the original and full dispersion Green-Naghdi models offer good approximations, even in this “large velocity” limit (the normalized l^2 difference of the computed solutions is ≈ 2×10^{-3} in both cases). This means that the internal solitary wave keeps a long-wave character even for large velocities. These observations were already documented and corroborated by laboratory experiments in <cit.>.

One-layer setting. In the one-layer setting, the script by Clamond and Dutykh <cit.> allows a very precise numerical computation of the solitary-wave solution of the water waves system, against which the numerical solutions of the Green-Naghdi models can be compared. In this setting, namely γ=0 and δ=1, we have an explicit solution of the Green-Naghdi model <cit.>:

ζ_GN(x) = (c^2-1) sech^2( (1/2)√(3(c^2-1)/c^2) x ) = c^2 ζ_KdV(x).

In Figure <ref>, we compute the solitary waves of our models for different (small) values of the velocity, rescaled by S_KdV^{-1}. One clearly sees, as predicted by Theorem <ref> and the above formula, that the solitary waves converge towards ξ_KdV after rescaling, as c↘1. One also sees that the water waves solution is closer to the one predicted by the full dispersion model than to that of the original Green-Naghdi model. Figure <ref> shows that the convergence rate is indeed quadratic for the full dispersion model, whereas it is only linear for the original Green-Naghdi model (and therefore only qualitatively better than the KdV model). A small script illustrating the explicit profile and the empirical rate extraction is sketched below.
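The following hedged Python sketch (ours, not the authors' code) evaluates the explicit one-layer profile above and shows a generic way of extracting an empirical convergence rate; the grid, the velocity values, and the function names zeta_gn and fitted_rate are illustrative choices.

```python
import numpy as np

def zeta_gn(x, c):
    # Explicit one-layer Green-Naghdi solitary wave (gamma=0, delta=1):
    # zeta(x) = (c^2 - 1) * sech^2( (1/2) * sqrt(3 (c^2 - 1) / c^2) * x )
    kappa = 0.5 * np.sqrt(3.0 * (c**2 - 1.0) / c**2)
    return (c**2 - 1.0) / np.cosh(kappa * x)**2

def fitted_rate(q, err):
    # Empirical convergence rate: least-squares slope of log(err) vs log(q)
    return np.polyfit(np.log(q), np.log(err), 1)[0]

# Sanity check of the scaling: stretched and normalized profiles collapse
# exactly onto sech^2 for every velocity, consistent with the KdV limit.
x = np.linspace(-20.0, 20.0, 2001)
for c in (1.01, 1.05, 1.10):
    kappa = 0.5 * np.sqrt(3.0 * (c**2 - 1.0) / c**2)
    collapsed = zeta_gn(x / kappa, c) / (c**2 - 1.0)
    assert np.allclose(collapsed, 1.0 / np.cosh(x)**2)
```

In actual experiments, fitted_rate would be applied to the measured differences between the rescaled model solutions and the water waves solution at a sequence of speeds c↘1.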
§ PARADIFFERENTIAL CALCULUS

The definitions and properties below are collected from <cit.>; see also <cit.> for relevant references.

[Symbols] Given m∈ℝ and r≥0, we denote by Γ^m_r the space of distributions a(x,ξ) on ℝ^2 such that for almost any x∈ℝ, ξ↦a(x,ξ)∈C^∞(ℝ), and

∀α∈ℕ, ∃C_α>0 such that ∀ξ∈ℝ, ‖∂^α_ξ a(·,ξ)‖_{W^{r,∞}} ≤ C_α(1+|ξ|)^{m-α},

where W^{r,∞} denotes the Hölder space (Lipschitz for integer values). Below, we use an admissible cut-off function ψ in the sense of <cit.> and define paradifferential operators as follows (the constant factor depends on the choice of convention for the Fourier transform).

[Paradifferential operators] For a∈Γ^m_0 and u∈𝒮(ℝ), we define

T_a u(x) = (1/√(2π)) ⟨û(·), e^{ix·}ψ(D,·)a(x,·)⟩_{(𝒮(ℝ),𝒮'(ℝ))},

where ψ(D,ξ) is the Fourier multiplier associated with ψ(η,ξ) (here, ξ is a parameter). The operator is defined for u∈H^s(ℝ) by density and continuous linear extension.

The following lemma is a direct application of the above definitions <cit.>. For any r≥0 and a∈Γ_r^m⊂Γ_0^m, and for all s∈ℝ, the operator T_a extends in a unique way to a bounded operator from H^{s+m} to H^s. If a(ξ) is a symbol independent of x, then T_a = a(D), the corresponding Fourier multiplier.

The main tool we use is the following composition property <cit.>. Let a∈Γ_r^m and b∈Γ_r^{m'} where 0<r≤1. Then ab∈Γ_r^{m+m'} and T_aT_b - T_{ab} is a bounded operator from H^{s+m+m'-r} to H^s, for any s∈ℝ.

Of particular interest is the case when the symbol a(x)∈L^∞ is independent of ξ. The admissible cut-off function can be constructed so that the paraproduct T_a u corresponds to a standard Littlewood-Paley decomposition of the product au. This allows one to show that au - T_a u is a smoothing operator provided that a is sufficiently regular.

Let v∈H^s and u∈H^t, and r≥0. Then uv - T_v u ∈ H^r provided that s+t≥0, s≥r and s+t>r+1/2.

The definitions of the paraproduct in <cit.> and <cit.> differ slightly, but it is not hard to show that <cit.> still holds for the paraproduct as it is defined in <cit.>, and Lemma <ref> follows directly from this theorem. We conclude with the following lemma, displayed in <cit.>.

Let G∈C^∞(ℝ) be such that G(0)=0. If u∈H^s with s>1/2, then G(u) - T_{G'(u)}u ∈ H^{2s-1/2}.

Acknowledgements. V. Duchêne was partially supported by the Agence Nationale de la Recherche (project ANR-13-BS01-0003-01 DYFICOLTI). D. Nilsson and E. Wahlén were supported by the Swedish Research Council (grant no. 621-2012-3753).

Angulo-Pava09 J. Angulo Pava, Nonlinear dispersive equations. Existence and stability of solitary and periodic travelling wave solutions, vol. 156 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, 2009.
Arnesen16 M. N. Arnesen, Existence of solitary-wave solutions to nonlocal equations, Discrete Contin. Dyn. Syst., 36 (2016), pp. 3483–3510.
BahouriCheminDanchin H. Bahouri, J.-Y. Chemin, and R. Danchin, Fourier analysis and nonlinear partial differential equations, vol. 343 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer, Heidelberg, 2011.
BenjaminBonaMahony72 T. B. Benjamin, J. L. Bona, and J. J. Mahony, Model equations for long waves in nonlinear dispersive systems, Philos. Trans. Roy. Soc. London Ser. A, 272 (1972), pp. 47–78.
Benzoni-GavageSerre07 S. Benzoni-Gavage and D. Serre, Multidimensional hyperbolic partial differential equations. First-order systems and applications, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, Oxford, 2007.
BonaSmith76 J. L. Bona and R. Smith, A model for the two-way propagation of water waves in a channel, Math. Proc. Cambridge Philos. Soc., 79 (1976), pp. 167–182.
BonaSouganidisStrauss87 J. L. Bona, P. E. Souganidis, and W. A. Strauss, Stability and instability of solitary waves of Korteweg-de Vries type, Proc. Roy. Soc. London Ser. A, 411 (1987), pp. 395–412.
Boussinesq72 J. Boussinesq, Théorie des ondes et des remous qui se propagent le long d'un canal rectangulaire horizontal, en communiquant au liquide contenu dans ce canal des vitesses sensiblement pareilles de la surface au fond, J. Math. Pures Appl., 17 (1872), pp. 55–108.
Buffoni04a B. Buffoni, Existence and conditional energetic stability of capillary-gravity solitary water waves by minimisation, Arch. Ration. Mech. Anal., 173 (2004), pp. 25–68.
BuffoniGrovesSunWahlen13 B. Buffoni, M. D. Groves, S. M. Sun, and E. Wahlén, Existence and conditional energetic stability of three-dimensional fully localised solitary gravity-capillary water waves, J. Differential Equations, 254 (2013), pp. 1006–1096.
CamassaChoiMichalletEtAl06 R. Camassa, W. Choi, H. Michallet, P.-O. Rusas, and J. K. Sveen, On the realm of validity of strongly nonlinear asymptotic approximations for internal waves, J. Fluid Mech., 549 (2006), pp. 1–23.
Chandrasekhar61 S. Chandrasekhar, Hydrodynamic and hydromagnetic stability, The International Series of Monographs on Physics, Clarendon Press, Oxford, 1961.
Chemin98 J.-Y. Chemin, Perfect incompressible fluids, vol. 14 of Oxford Lecture Series in Mathematics and its Applications, The Clarendon Press, Oxford University Press, New York, 1998. Translated from the 1995 French original by Isabelle Gallagher and Dragos Iftimie.
Chen98a M. Chen, Exact solutions of various Boussinesq systems, Appl. Math. Lett., 11 (1998), pp. 45–49.
ChenNguyenSun10 M. Chen, N. V. Nguyen, and S.-M. Sun, Solitary-wave solutions to Boussinesq systems with large surface tension, Discrete Contin. Dyn. Syst., 26 (2010), pp. 1153–1184.
ChenNguyenSun11 M. Chen, N. V. Nguyen, and S.-M. Sun, Existence of traveling-wave solutions to Boussinesq systems, Differential Integral Equations, 24 (2011), pp. 895–908.
ChoiCamassa99 W. Choi and R. Camassa, Fully nonlinear internal waves in a two-fluid system, J. Fluid Mech., 396 (1999), pp. 1–36.
ClamondDutykh13 D. Clamond and D. Dutykh, Fast accurate computation of the fully nonlinear solitary surface gravity waves, Comput. & Fluids, 84 (2013), pp. 35–38.
ConnGouldToint00 A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-region methods, MPS/SIAM Series on Optimization, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA; Mathematical Programming Society (MPS), Philadelphia, PA, 2000.
Darrigol03 O. Darrigol, The spirited horse, the engineer, and the mathematician: water waves in nineteenth-century hydrodynamics, Arch. Hist. Exact Sci., 58 (2003), pp. 21–95.
DjordjevicRedekopp78 V. D. Djordjevic and L. G. Redekopp, The fission and disintegration of internal solitary waves moving over two-dimensional topography, J. Phys. Oceanogr., 8 (1978), pp. 1016–1024.
Duchene14a V. Duchêne, On the rigid-lid approximation for two shallow layers of immiscible fluids with small density contrast, J. Nonlinear Sci., 24 (2014), pp. 579–632.
DucheneIsrawiTalhouk16 V. Duchêne, S. Israwi, and R. Talhouk, A new class of two-layer Green-Naghdi systems with improved frequency dispersion, Stud. Appl. Math., 137 (2016), pp. 356–415.
EhrnstromGrovesWahlen12 M. Ehrnström, M. D. Groves, and E. Wahlén, On the existence and stability of solitary-wave solutions to a class of evolution equations of Whitham type, Nonlinearity, 25 (2012), pp. 2903–2936.
GrovesWahlen15 M. D. Groves and E. Wahlén, Existence and conditional energetic stability of solitary gravity-capillary water waves with constant vorticity, Proc. Roy. Soc. Edinburgh Sect. A, 145 (2015), pp. 791–883.
GrueJensenRusaasEtAl99 J. Grue, A. Jensen, P.-O. Rusås, and J. K. Sveen, Properties of large-amplitude internal waves, J. Fluid Mech., 380 (1999), pp. 257–278.
HelfrichMelville06 K. R. Helfrich and W. K. Melville, Long nonlinear internal waves, in Annual Review of Fluid Mechanics, Vol. 38, 2006, pp. 395–425.
Jackson04 C. R. Jackson, An atlas of internal solitary-like waves and their properties, 2004. <http://www.internalwaveatlas.com/Atlas2_index.html>.
KortewegDe95 D. J. Korteweg and G. De Vries, On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves, Philos. Mag., 5 (1895), pp. 422–443.
Lannes D. Lannes, The water waves problem, vol. 188 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, 2013. Mathematical analysis and asymptotics.
Li02 Y. A. Li, Hamiltonian structure and linear stability of solitary waves of the Green-Naghdi equations, J. Nonlinear Math. Phys., 9 (2002), pp. 99–105. Recent advances in integrable systems (Kowloon, 2000).
Lions84 P.-L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case. I, Ann. Inst. H. Poincaré Anal. Non Linéaire, 1 (1984), pp. 109–145.
MadsenMurraySorensen91 P. A. Madsen, R. Murray, and O. R. Sørensen, A new form of the Boussinesq equations with improved linear dispersion characteristics, Coastal Engineering, 15 (1991), pp. 371–388.
Malcprimetseva89 Z. L. Maltseva, Unsteady long waves in a two-layer fluid, Dinamika Sploshn. Sredy, (1989), pp. 96–110.
Metivier08 G. Métivier, Para-differential calculus and applications to the Cauchy problem for nonlinear systems, vol. 5 of Centro di Ricerca Matematica Ennio De Giorgi (CRM) Series, Edizioni della Normale, Pisa, 2008.
MichalletBarthelemy98 H. Michallet and E. Barthélemy, Experimental study of interfacial solitary waves, J. Fluid Mech., 366 (1998), pp. 159–177.
Miyata87 M. Miyata, Long internal waves of large amplitude, in Nonlinear Water Waves: IUTAM Symposium, Tokyo, Aug. 1987, Springer, pp. 399–405.
Nwogu93 O. Nwogu, Alternative form of Boussinesq equations for nearshore wave propagation, Journal of Waterway, Port, Coastal, and Ocean Engineering, 119 (1993), pp. 618–638.
OstrovskyStepanyants89 L. A. Ostrovsky and Y. A. Stepanyants, Do internal solitons exist in the ocean?, Rev. Geophys., 27 (1989), pp. 293–310.
Rayleigh76 J. W. S. Rayleigh, On waves, Philos. Mag., 1 (1876), pp. 251–271.
Serre53a F. Serre, Contribution à l'étude des écoulements permanents et variables dans les canaux, La Houille Blanche, (1953), pp. 830–872.
Struwe M. Struwe, Variational methods, vol. 34 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, Springer-Verlag, Berlin, fourth ed., 2008.
Trefethen L. N. Trefethen, Spectral methods in MATLAB, vol. 10 of Software, Environments, and Tools, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2000. | http://arxiv.org/abs/1706.08853v1 | {
"authors": [
"Vincent Duchêne",
"Dag Nilsson",
"Erik Wahlén"
],
"categories": [
"math.AP"
],
"primary_category": "math.AP",
"published": "20170627135248",
"title": "Solitary wave solutions to a class of modified Green-Naghdi systems"
} |
M. Tixier, Département de Sciences Physiques, U.F.R. des Sciences, Université de Versailles Saint Quentin, 45, avenue des Etats-Unis, F-78035 Versailles. Tel.: +33-139254519. E-mail: [email protected]

J. Pouget, Sorbonne Universités, UPMC Univ. Paris 06, UMR 7190, Institut Jean le Rond d'Alembert, F-75005 Paris, France; CNRS, UMR 7190, Institut Jean le Rond d'Alembert, F-75005 Paris, France. E-mail: [email protected]

Constitutive equations for an electro-active polymer
Mireille Tixier, Joël Pouget
Received: date / Accepted: date

Ionic electro-active polymers (E.A.P.) can be used as sensors or actuators. For this purpose, a thin film of polyelectrolyte is saturated with a solvent and sandwiched between two platinum electrodes. The solvent causes a complete dissociation of the polymer and the release of small cations. The application of an electric field across the thickness results in the bending of the strip, and vice versa. The material is modelled as a two-phase continuous medium. The solid phase, constituted by the polymer backbone inlaid with anions, is depicted as a deformable porous medium. The liquid phase is composed of the free cations and the solvent (usually water). We use a coarse-grained model. The conservation laws of this system have been established in a previous work. The entropy balance law and the thermodynamic relations are first written for each phase, then for the complete material, using a statistical average technique and the material derivative concept. One deduces the entropy production. Identifying the generalized forces and fluxes provides the constitutive equations of the whole system: the stress-strain relations, which satisfy a Kelvin-Voigt model, generalized Fourier's and Darcy's laws, and the Nernst-Planck equation.

PACS 46.35.+z PACS 47.10.ab PACS 47.61.Fg PACS 66.10.-x PACS 82.47.Nj PACS 83.80.Ab

§ INTRODUCTION

In a previous work presented by the authors <cit.>, conservation laws for an electro-active polymer were established and discussed. In particular, the equations of mass conservation, of electric charge conservation, of momentum conservation and various energy balance equations at the macroscale of the material were deduced using an averaging technique over the different phases (solid and liquid). The present work undertakes the construction of the constitutive equations. The latter are deduced from the entropy balance law and thermodynamic relations. The interest of such a formulation is that we arrive at tensorial and vectorial constitutive equations for the macroscopic quantities, whose constitutive coefficients can be expressed in terms of the microscopic components of the electro-active polymer.

More precisely, electro-active polymers can be classified into two essential categories depending on their process of activation. The first class is the electronic EAPs, whose actuation uses electromechanical coupling (linear or nonlinear). These polymers are very similar to piezoelectric materials. Their main drawback is an actuation which requires a very high voltage. The second class is the ionic EAPs. They are based on ion transport due to an applied electric voltage. This kind of EAP exhibits very large transformations (large deflections) in the presence of a low applied voltage (a few volts). Their main drawback is that they operate best in a humid environment, and they must be encapsulated to operate in an ambient environment.
The present paper places the emphasis on the ionic polymer-metal composite (IPMC). This class of electro-active polymer is an active material consisting of a thin membrane of polyelectrolyte (Nafion, for instance) sandwiched on both sides between thin metal layers acting as electrodes. The EAP can be deformed repetitively by applying a difference of electric potential across the thickness of the material, and it can quickly recover its original configuration upon releasing the voltage. The mechanism of EAP deformation can be explained physically as follows. Upon the application of an electric field across a moist polymer, which is held between metallic electrodes attached across a partial section of the EAP strip, bending of the EAP is produced (Fig. 1a). The positive counter-ions move towards the negative electrode (cathode), while the negative ions that are fixed to the polymer backbone experience an attractive force from the positive electrode (anode). At the same time, water molecules in the EAP matrix diffuse towards the region of high positive ion concentration (near the negative electrode) to equalize the charge distribution. As a result, the region near the cathode swells and the region near the anode de-swells, leading to stresses which cause the EAP strip to bend towards the positive anode (Fig. 1b). When the electric field is released, the EAP strip recovers its initial geometry. Conversely, a difference of electric potential is produced across the EAP when it is suddenly bent.

Electromechanical coupling in ionic polymer membranes was discovered over 50 years ago but has recently received renewed attention due to the development of large-strain actuators operating at low electric fields. Modelling of EAPs attracts scientists and engineers, and a number of approaches have been proposed to explain and quantify the physical micro-mechanisms that relate the EAP deformation to the osmotic diffusion of solvent and ions into the polymer. A micromechanical model has been developed by Nemat-Nasser and Li <cit.>. The model accounts for the electromechanical and chemical-electric coupling of the ion transport, electric field and elastic deformation to produce the response of the EAP. The authors examine the field equations that bring the osmotic stress into evidence. They deduce a generalized Darcy's law and the balance law for the ion flux (a kind of Nernst-Planck equation) deduced from the equation of electric charge conservation. A first simple macroscopic model was proposed by DeGennes et al. <cit.>. The model describes the coupling between the electric current density and the solvent (water) flux. Shahinpoor et al. <cit.> report the modelling of ion-exchange polymer-metal composites (IPMCs) based on an equation governing the ionic transport mechanism. The authors write down the equations for the solvent concentration, the ionic concentration, and the relationship between stress, strain, electric field, heat flux and chemical energy flux. The stress tensor is related to the deformation gradient field by using a constitutive equation of the neo-Hookean type.

The paper is divided into 8 sections and 3 appendices. The next section recalls the main results of the previous work <cit.> and the notations. Section 3 concerns the entropy balance law at the microscopic level and for the whole material over the R.V.E. (Representative Volume Element). The fundamental thermodynamic relations are given in Section 4.
The thermodynamic equations are written for each phase (solid, solvent) and then for the complete material, leading to the Gibbs relation. On using the latter relation, the generalized forces and fluxes are identified in Section 5. Section 6 is devoted to the constitutive equations; in particular, the tensorial and vectorial constitutive equations are deduced by invoking symmetry properties. A detailed discussion of the results thus obtained is presented in Section 7, and some estimates of the constitutive coefficients are given and compared to the proposed approximations. The paper is closed with a brief conclusion.

§ MODELLING AND PREVIOUS RESULTS

The system we study is an ionic polymer-metal composite (IPMC); it consists of a polyelectrolyte coated on both sides with thin metal layers acting as electrodes. The electro-active polymer is saturated with water, which results in a quasi-complete dissociation of the polymer: anions remain bound to the polymer backbone, whereas small cations are released in water <cit.>. When an electric field perpendicular to the electrodes is applied, the strip bends: cations are attracted by the negative electrode and carry solvent away by osmosis. As a result, the polymer swells near the negative electrode and contracts on the opposite side, leading to the bowing. The modelling of this system is detailed in our previous article <cit.>. The polymer chains are assimilated to a deformable porous medium saturated by an ionic solution composed of water and cations. We suppose that the solution is dilute. We depict the complete material as the superposition of three systems: a deformable solid made up of the negatively charged polymer backbone, a solvent (the water) and cations (see the inset of Fig. 1 for a schematic representation). The three components have different velocity fields, and the solid and liquid phases are assumed to be incompressible phases separated by an interface whose thickness is supposed to be negligible. We identify the quantities relative to the different components by subscripts: 1 refers to the cations, 2 to the solvent, 3 to the solid, i to the interface and 4 to the solution, that is, both components 1 and 2; the lack of subscript refers to the complete material. Components 2, 3 and 4, as well as the global material, are assimilated to continua. We assume that gravity and the magnetic field are negligible, so the only external force acting on the system is the electric force.

We describe this medium using a coarse-grained model developed for two-phase mixtures <cit.>. The microscopic scale is large enough to provide the continuum assumption, but small enough to enable the definition of a volume which contains a single phase (3 or 4). At the macroscopic scale, we define a representative elementary volume (R.V.E.) which contains the two phases; it must be small enough so that average quantities relative to the whole material can be considered as local, and large enough so that this average is relevant. A microscale Heaviside-like function of presence χ_k(𝐫,t) has been defined for the phases 3 and 4:

χ_k=1 when phase k occupies point 𝐫 at time t, χ_k=0 otherwise.

The function of presence of the interface is the Dirac-like function χ_i=-∇χ_k·𝐧_k (in m^{-1}), where 𝐧_k is the outward-pointing unit normal to the interface in the phase k. ⟨·⟩_k denotes the average over the phase k of a quantity relative to the phase k only. The macroscale quantities relative to the whole material are obtained by statistically averaging the microscale quantities over the R.V.E., that is, by repeating the same experiment many times. We suppose that this average, denoted by ⟨·⟩, is equivalent to a volume average (ergodic hypothesis) and commutes with the space and time derivatives <cit.>. A macroscale quantity g_k verifies

g_k = ⟨χ_k g_k^0⟩ = ϕ_k ⟨g_k^0⟩_k,

where g_k^0 is the corresponding microscale quantity and ϕ_k=⟨χ_k⟩ the volume fraction of the phase k. In the following, we use the superscript ^0 to indicate microscale quantities; the macroscale quantities, which are averages defined all over the material, are written without superscript. A small numerical illustration of this averaging rule is sketched below.
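As an illustration (ours, not the authors'; the phase layout and the microscale field are arbitrary assumptions), the following Python sketch samples a one-dimensional R.V.E. and checks the rule g_k = ⟨χ_k g_k^0⟩ = ϕ_k ⟨g_k^0⟩_k:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                  # sampling points in the R.V.E.
chi4 = rng.random(n) < 0.35                  # function of presence of phase 4
g0 = 1.0 + 0.1 * rng.standard_normal(n)      # arbitrary microscale quantity

phi4 = chi4.mean()                           # volume fraction <chi_4>
g4 = (chi4 * g0).mean()                      # <chi_4 g^0>, whole-volume average
g4_intrinsic = g0[chi4].mean()               # <g^0>_4, average over phase 4 only

# The two averages satisfy g_4 = phi_4 <g^0>_4 exactly, by construction
assert abs(g4 - phi4 * g4_intrinsic) < 1e-12
```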
The conservation and balance laws of the polymer saturated with water have been previously established <cit.>. To this end, we supposed that the fluctuations of the following quantities are negligible on the R.V.E. scale: the velocity 𝐕_k of each phase and of the interface, the solid displacement vector 𝐮_3, the cations molar concentration C and the electric field 𝐄. Furthermore, we admitted that the electric field is identical in all the phases and that the solid and liquid phases are isotropic linear dielectrics. We thus established the mass conservation laws of the constituents and of the complete material:

∂ρ_k/∂t + div(ρ_k 𝐕_k) = 0 for k=2,3,
∂ρ/∂t + div(ρ𝐕) = 0.

Maxwell's equations and the constitutive relation can be written

rot 𝐄 = 0, div 𝐃 = ρZ, 𝐃 = ε𝐄 with ε = ∑_{k=3,4} ϕ_k ε_k^0,

where 𝐃 is the electric displacement field, Z the total electric charge per unit of mass and ε the permittivity. The linear momentum and internal energy balance laws are

ρ D𝐕/Dt = div σ + ρZ𝐄,
ρ (D/Dt)(U_Σ/ρ) = ∑_{k=3,4} (σ_k : ∇𝐕_k) + 𝐢·𝐄 - div𝐐,

where σ denotes the stress tensor, 𝐢 the diffusion current, 𝐐 the heat flux and U_Σ the sum of the volume internal energies of the different phases. These two relations use the material derivative D/Dt defined in our previous paper <cit.> and recalled in appendix A.

The relative velocities of the different phases are negligible compared to the velocities measured in the laboratory frame. Consider, for example, a strip of Nafion which is 200 μm thick and 1.57 cm long, bending in an electric field. The tip displacement is about 4 mm and it is obtained in 1 s <cit.>. The velocities of the different phases in the laboratory frame |𝐕_k^0| are then close to 4 10^{-3} m s^{-1}, and the relative velocities |𝐕_k^0-𝐕| to 2 10^{-4} m s^{-1}. So we can reasonably suppose that

|𝐕_k^0-𝐕| << |𝐕_k^0|.

The kinetic energy of the whole material is defined either as the sum E_cΣ of the kinetic energies of the constituents, or as the kinetic energy of the center of mass of the constituents, E_c <cit.>. The difference between these two quantities,

E_cΣ - E_c = (ρ_3ρ_4/ρ)(𝐕_3-𝐕_4)^2,

is negligible compared to the kinetic energies of each phase. To a first approximation, we can therefore identify E_cΣ with E_c, and consequently U_Σ with the internal energy of the whole system U. Considering this hypothesis, the internal energy balance equation can be written

ρ (d/dt)(U/ρ) = σ:∇𝐕 + 𝐢'·𝐄 - div𝐐'

with

𝐢' = 𝐈 - ρZ𝐕 ≃ ρ_1Z_1(𝐕_1-𝐕_4) + ∑_{k=3,4} ρ_kZ_k(𝐕_k-𝐕),
𝐐' = 𝐐 - ∑_{k=3,4} U_k(𝐕-𝐕_k) - ∑_{k=3,4} σ_k·(𝐕_k-𝐕),

where 𝐈 denotes the current density vector and

Z_k = Z_k^0 for k=1,2,3 and Z_4 = (ρ_1/ρ_4)Z_1.

To describe the system, we finally have 14 independent scalar equations using 29 scalar variables (ρ_k, 𝐕_k (k=1,3), ρ, 𝐕, σ, Z, 𝐄, U, 𝐐' and 𝐃).
15 scalar equations are missing to close the system: the constitutive relations. We will now establish them in the form of three vectorial relations and one tensorial relation relating second-rank symmetric tensors.

§ ENTROPY BALANCE LAW

The microscale entropy balance laws of the solid and liquid phases can be written

∂S_k^0/∂t + div(S_k^0 𝐕_k^0) = -divΣ_k^0 + s_k^0, k=3,4,

where S_k^0, Σ_k^0 and s_k^0 denote, respectively, the entropy density, the entropy flux vector and the rate of entropy production of the phase k. Averaging over the R.V.E. gives, considering the interface condition 𝐕_1^0=𝐕_2^0=𝐕_3^0=𝐕_4^0=𝐕_i^0,

∂S_k/∂t + div(S_k𝐕_k) = s_k - divΣ_k - ⟨Σ_k^0·𝐧_k χ_i⟩,

in which the macroscale entropy density S_k, the entropy flux vector Σ_k and the rate of entropy production s_k are defined by

S_k = ⟨χ_k S_k^0⟩, Σ_k = ⟨χ_k Σ_k^0⟩, s_k = ⟨χ_k s_k^0⟩.

Note that the quantities S_k and s_k are relative to the volume of the whole material. For the interface we obtain (see appendix B)

∂S_i/∂t + div(S_i𝐕_i) = ∑_{k=3,4} ⟨χ_i Σ_k^0·𝐧_k⟩ + s_i.

The entropy balance law of the whole material is

ρ (D/Dt)(S/ρ) = s - divΣ,

where

S = ∑_{k=3,4,i} S_k, s = ∑_{k=3,4,i} s_k, Σ = ∑_{k=3,4} Σ_k

are the entropy density, the rate of entropy production and the entropy flux vector of the complete material, respectively. In the barycentric frame of reference, we derive

ρ (d/dt)(S/ρ) = s - divΣ'

with

Σ' = Σ - ∑_{k=3,4} S_k(𝐕-𝐕_k).

§ FUNDAMENTAL THERMODYNAMIC RELATIONS

§.§ Thermodynamic relations for the solid phase

For a solid phase with one constituent, the Gibbs relation can be written <cit.>

ρ_3^0 (d_3^0/dt)(U_3^0/ρ_3^0) = p_3^0 (1/ρ_3^0)(d_3^0ρ_3^0/dt) + σ_3^0e^s : (d_3^0/dt)ε_3^0^s + ρ_3^0 T_3^0 (d_3^0/dt)(S_3^0/ρ_3^0),

where T_3^0 is the absolute temperature, ε_3^0 the strain tensor, σ_3^0e the equilibrium stress tensor, σ_3^0e^s and ε_3^0^s the stress and strain deviator tensors, and d_3^0/dt the particle derivative following the microscale motion of the solid (see appendix A). p_3^0 is the pressure, or negative one-third the trace of the microscopic equilibrium stress tensor:

p_3^0 = -(1/3) tr(σ_3^0e).

Equation (<ref>) can also be deduced from equation (<ref>) and the internal energy balance equation of the solid phase developed in <cit.>: indeed, the Gibbs relation is satisfied at equilibrium, so the heat flux 𝐐_3^0, the diffusion current 𝐢_3^0 and the rate of entropy production s_3^0 vanish; deformations are small, and the stress tensor σ_3^0 is equal to the equilibrium stress tensor σ_3^0e. In addition, the solid phase is a closed system; consequently,

Σ_3^0 = 𝐐_3^0/T_3^0.

At the microscopic scale, Euler's homogeneous function theorem provides, for the solid phase,

p_3^0 = T_3^0S_3^0 - U_3^0 + μ_3^0ρ_3^0,

where μ_3^0 denotes the chemical potential per unit of mass of the solid constituent. As a result, the Gibbs equation can be written

d_3^0U_3^0/dt = T_3^0 d_3^0S_3^0/dt + μ_3^0 d_3^0ρ_3^0/dt + σ_3^0e^s : (d_3^0/dt)ε_3^0^s.

Differentiating Euler's relation and combining it with the Gibbs relation leads to the Gibbs-Duhem equation

d_3^0p_3^0/dt = S_3^0 d_3^0T_3^0/dt - σ_3^0e^s : (d_3^0/dt)ε_3^0^s + ρ_3^0 d_3^0μ_3^0/dt.

Let us assume that the fluctuations over the R.V.E. of the intensive thermodynamic quantities T_3^0, μ_3^0, p_3^0, the displacement 𝐮_3^0 and the equilibrium stress tensor σ_3^0e are negligible. Supposing that the solid deformations are small, we obtain

T_3=T_3^0, μ_3=μ_3^0, 𝐮_3=𝐮_3^0, ε_3=ε_3^0=(1/2)(∇𝐮_3+∇𝐮_3^T), p_3=p_3^0=-(1/3ϕ_3) trσ_3^e, σ_3^e=ϕ_3σ_3^0e=σ_3^e^s-ϕ_3p_31,

where 1 denotes the second-rank identity tensor and σ_3^e^s the deviatoric part of σ_3^e.
Considering the small deformation hypothesis, one easily derives (cf. appendix C):

d_3U_3/dt = T_3 d_3S_3/dt + μ_3 d_3ρ_3/dt - p_3 d_3ϕ_3/dt + σ_3^e^s : (d_3/dt)ε_3^s (Gibbs),
ϕ_3p_3 = T_3S_3 - U_3 + μ_3ρ_3 (Euler),
ϕ_3 d_3p_3/dt = S_3 d_3T_3/dt + ρ_3 d_3μ_3/dt - σ_3^e^s : (d_3/dt)ε_3^s (Gibbs-Duhem).

§.§ Thermodynamic relations for the liquid phase

According to S.R. De Groot and P. Mazur <cit.>, the Gibbs relation of a two-constituent fluid can be written as

(d_4^0/dt)(U_4^0/ρ_4^0) = T_4^0 (d_4^0/dt)(S_4^0/ρ_4^0) - p_4^0 (d_4^0/dt)(1/ρ_4^0) + ∑_{k=1,2} μ_k^0 (d_4^0/dt)(ρ_k'/ρ_4^0),

where p_4^0 is the fluid phase pressure, μ_k^0 the mass chemical potential of constituent k, and d_4^0/dt the particle derivative following the microscale motion of the liquid phase. The ρ_k' are the mass densities of the cations and of the solvent relative to the solution volume:

ρ_k'/ρ_4^0 = ρ_k/ρ_4, ρ_1' = CM_1, ρ_2' = ρ_2^0 ϕ_2/ϕ_4.

M_1 denotes the cations molar mass and C the cations molar concentration relative to the liquid phase. As for the solid phase, one can recover this equation by combining the internal energy and entropy balance laws and taking the limit at equilibrium. Euler's homogeneous function theorem takes on the following form at the microscopic scale:

U_4^0 - T_4^0S_4^0 + p_4^0 = ∑_{k=1,2} μ_k^0 ρ_k',

so that

d_4^0U_4^0/dt = T_4^0 d_4^0S_4^0/dt + ∑_{k=1,2} μ_k^0 d_4^0ρ_k'/dt.

The Gibbs-Duhem relation of the liquid phase derives from the Gibbs and Euler relations:

∑_{k=1,2} ρ_k' d_4^0μ_k^0/dt = -S_4^0 d_4^0T_4^0/dt + d_4^0p_4^0/dt.

We assume that the fluctuations of the intensive thermodynamic quantities are negligible:

T_4=T_4^0, μ_k=μ_k^0, p_4=p_4^0.

Averaging the previous equations over the R.V.E., we obtain

T_4 d_4S_4/dt = d_4U_4/dt + p_4 d_4ϕ_4/dt - ∑_{k=1,2} μ_k d_4ρ_k/dt (Gibbs),
ϕ_4p_4 = T_4S_4 - U_4 + ∑_{k=1,2} μ_kρ_k (Euler),
ϕ_4 d_4p_4/dt = S_4 d_4T_4/dt + ∑_{k=1,2} ρ_k d_4μ_k/dt (Gibbs-Duhem).

§.§ Thermodynamic relations for the complete material

In order to write the thermodynamic relations of the complete material, we make the hypothesis of local thermodynamic equilibrium; this requires, among other things, that the heat diffuses well enough in the solid and the solution so that temperature equilibrium is reached over the R.V.E.:

p = p_3 = p_4, T = T_3 = T_4 = T_i.

Moreover, we have pointed out that the sum U_Σ of the internal energies of the constituents is close to the internal energy of the system U. Adding the Euler relations of the solid and liquid phases and of the interface, we thus obtain the Euler relation of the whole material:

p = TS - U + ∑_{k=1,2,3} μ_kρ_k.

The Gibbs relation of the complete material is also obtained by addition:

T (D/Dt)(S/ρ) = (D/Dt)(U/ρ) + p (D/Dt)(1/ρ) - (1/ρ) σ_3^e^s : (d_3ε_3^s/dt) - ∑_{1,2} μ_k (ρ_4/ρ)(d_4/dt)(ρ_k/ρ_4).

The material derivative makes it possible to follow the barycenter of each phase during the motion. The solid phase is then supposed to be a closed system; for this reason, no mass exchange term for the solid appears in this relation. On the contrary, the solvent and the cations move at different velocities; hence there is a mass exchange term for these two constituents in the barycentric reference frame of the solution.
The mass exchanges of the three constituents appear if the particle derivative following the motion of the barycenter of the whole material is used:

T (d/dt)(S/ρ) = (d/dt)(U/ρ) + p (d/dt)(1/ρ) - ∑_{k=1,2,3} μ_k (d/dt)(ρ_k/ρ) - (1/ρ) σ_3^e^s : (dε_3^s/dt).

This relation can also be obtained using the Gibbs relations of the constituents; at equilibrium, indeed, the velocities of the two phases and of the interface are identical, so that the particle derivatives coincide:

d_3/dt ≡ d_4/dt ≡ d_i/dt ≡ d/dt.

A third approach is to combine the balance equations of internal energy and entropy of the complete material and to take the limit at equilibrium.

Considering the small deformations hypothesis and neglecting the relative velocities compared to the velocities in the laboratory frame, we derive

σ_3^e^s : ∇𝐕_3 = σ_3^e^s : (d_3ε_3^s/dt) ≃ σ_3^e^s : (dε_3^s/dt) ≃ σ_3^e^s : ∇𝐕.

The equilibrium stress tensor of the complete material is written as follows:

σ^e = σ_3^e + σ_4^e = -p1 + σ_3^e^s with σ_4^e = -ϕ_4p_41.

As a result,

σ^e^s = σ_3^e^s.

Finally, the Gibbs relation is

T (d/dt)(S/ρ) = (d/dt)(U/ρ) + p (d/dt)(1/ρ) - ∑_{k=1,2,3} μ_k (d/dt)(ρ_k/ρ) - (1/ρ) σ^e^s : ∇𝐕.

Differentiating Euler's relation and combining it with the Gibbs relation, the Gibbs-Duhem relation takes on the form

dp/dt = S dT/dt + ∑_{k=1,2,3} ρ_k dμ_k/dt - σ^e^s : ∇𝐕.

§ GENERALIZED FORCES AND FLUXES

§.§ Entropy production

The stress tensor is composed of two parts: the equilibrium stress tensor σ^e and the viscous stress tensor σ^v, which vanishes at equilibrium. Considering (<ref>), the complete stress tensor can be written as

σ = σ_3 + σ_4 = σ^e + σ^v = -p1 + σ^e^s + σ^v.

Combining the internal energy and entropy equations (<ref>) and (<ref>) with the Gibbs relation (<ref>) yields

s - divΣ' = (1/T) σ^v : ∇𝐕 + (1/T) 𝐢'·𝐄 - (1/T^2) 𝐐'·∇T + ∑_{1,2,3} ρ_k(𝐕-𝐕_k)·∇(μ_k/T) - div[𝐐'/T + ∑_{1,2,3} (μ_kρ_k/T)(𝐕-𝐕_k)].

We can then identify the rate of entropy production s and the entropy flux vector Σ':

s = (1/T) σ^v : ∇𝐕 + (1/T) 𝐢'·𝐄 - (1/T^2) 𝐐'·∇T + ∑_{k=1,2,3} ρ_k(𝐕-𝐕_k)·∇(μ_k/T),
Σ' = 𝐐'/T + ∑_{k=1,2,3} (μ_kρ_k/T)(𝐕-𝐕_k).

§.§ Identification of the generalized forces and fluxes

A second-rank tensor is the sum of three parts: a spherical tensor, a deviatoric tensor (labeled with ^s) and an antisymmetric tensor (labeled with ^a):

∇𝐕 = (1/3)(div𝐕)1 + ∇𝐕^s + ∇𝐕^a,

where

∇𝐕^s = (1/2)(∇𝐕+∇𝐕^T) - (1/3)(div𝐕)1.

The viscous stress tensor is symmetric, so

σ^v = (1/3)tr(σ^v)1 + σ^v^s.

In the entropy production s appear the three mass diffusion fluxes relative to the barycentric reference frame, ρ_k(𝐕_k-𝐕) with k=1,2,3. The sum of these three fluxes is zero, so only two of them are linearly independent. We define the following equivalent fluxes:

𝐉_1 = ρ_1(𝐕_1-𝐕_2), 𝐉_4 = ρ_4(𝐕_4-𝐕_3),

which are, respectively, the mass diffusion flux of the cations in the solution and the mass diffusion flux of the solution in the solid. These two fluxes are linearly independent.
The diffusion current 𝐢' and the fluxes ρ_k(𝐕_k-𝐕) can be expressed as functions of 𝐉_1 and 𝐉_4; the entropy production then takes on the following form:

s = (1/3T) tr(σ^v) div𝐕 + 𝐐'·∇(1/T) + (ρ_2/ρ_4)[(1/T)Z_1𝐄 - ∇(μ_1/T) + ∇(μ_2/T)]·𝐉_1 + (ρ_3/ρ)[(1/T)((ρ_1/ρ_4)Z_1-Z_3)𝐄 - (ρ_1/ρ_4)∇(μ_1/T) - (ρ_2/ρ_4)∇(μ_2/T) + ∇(μ_3/T)]·𝐉_4 + (1/T) σ^v^s : ∇𝐕^s.

This expression brings out one scalar flux, (1/3)tr(σ^v), three vectorial fluxes, 𝐐', 𝐉_1 and 𝐉_4, and one second-rank tensorial flux, σ^v^s, along with the associated generalized forces:

Flux          | Associated generalized force
(1/3)tr(σ^v) | (1/T) div𝐕
𝐐'           | ∇(1/T)
𝐉_1          | (ρ_2/ρ_4)[(1/T)Z_1𝐄 - ∇(μ_1/T) + ∇(μ_2/T)]
𝐉_4          | (ρ_3/ρ)[(1/T)((ρ_1/ρ_4)Z_1-Z_3)𝐄 - (ρ_1/ρ_4)∇(μ_1/T) - (ρ_2/ρ_4)∇(μ_2/T) + ∇(μ_3/T)]
σ^v^s        | (1/T) ∇𝐕^s

§ CONSTITUTIVE EQUATIONS

§.§ Tensorial constitutive equation

We assume that the medium is isotropic. According to the Curie dissymmetry principle, there cannot be any coupling between fluxes and forces whose tensorial ranks differ by one unit. Moreover, we suppose that couplings between fluxes and forces of different tensorial ranks are negligible, which is a generally accepted hypothesis <cit.>. Consequently, the scalar constitutive equation requires only one scalar phenomenological coefficient L_1:

(1/3)tr(σ^v) = (L_1/T) div𝐕.

In the same way, the tensorial flux σ^v^s is related to the generalized force (1/T)∇𝐕^s by a fourth-rank tensorial phenomenological coefficient L_2:

σ^v^s = L_2 (1/T) ∇𝐕^s.

Because of the isotropy of the medium, the tensor L_2 is isotropic and requires only three scalar coefficients <cit.>. Furthermore, the tensors σ^v^s and ∇𝐕^s are deviatoric, so

σ^v^s = (L_2/T) ∇𝐕^s,

where L_2 is a scalar coefficient. Setting L_1' = L_1 - L_2/3, the viscous stress tensor is finally given by

σ^v = (L_1'/T)(div𝐕)1 + (L_2/2T)(∇𝐕+∇𝐕^T).

Assuming that the complete material satisfies Hooke's law at equilibrium, the equilibrium stress tensor can be written as

σ^e = λ(trε)1 + 2Gε = ((3λ+2G)/3)(trε)1 + 2Gε^s,

where λ and G denote the first Lamé constant and the shear modulus of the complete material, respectively, and where the material strain is defined by

ε = (1/2)(∇𝐮+∇𝐮^T), or ε̇ = (1/2)(∇𝐕+∇𝐕^T);

𝐮 is the displacement vector. Supposing that the fluid is Newtonian and Stokesian, the pressure is

p = -(1/3)tr(σ^e) = -(λ+(2/3)G) trε.

The stress tensor of the complete material thus satisfies a Kelvin-Voigt model:

σ = λ(trε)1 + 2Gε + (L_1'/T)(tr ε̇)1 + (L_2/T) ε̇.

§.§ Chemical potentials

The liquid phase is a dilute solution of a strong electrolyte. The molar chemical potentials of the three constituents, μ_{i,mol}, can be written to a first approximation <cit.>:

μ_{1,mol}(T,p,x) = μ_{1,mol}^0(T,p) + RT ln x + o(√x),
μ_{2,mol}(T,p,x) = μ_{2,mol}^0(T,p) - RTx + o(x^{3/2}),
μ_{3,mol}(T,p,x) = μ_{3,mol}^0(T),

where R = 8.314 J mol^{-1} K^{-1} is the gas constant and x the molar fraction of the cations in the solution:

x = CM_2/ρ_2^0.

μ_{2,mol}^0 and μ_{3,mol}^0 denote the chemical potentials of the pure solvent and of the pure solid, respectively, and μ_{1,mol}^0 depends on the solvent and the solute. The mass chemical potentials μ_i = μ_{i,mol}/M_i can then be written

μ_1(T,p,x) ≃ μ_1^0(T,p) + (RT/M_1) ln(CM_2/ρ_2^0),
μ_2(T,p,x) ≃ μ_2^0(T,p) - (RT/ρ_2^0) C,
μ_3(T,p,x) = μ_3^0(T).

Using the Gibbs-Duhem relations for the solid and the liquid phases, we obtain

∇μ_1 = -(S_1/ρ_1)∇T + (v_1/M_1)∇p + (RTρ_2^0/(M_2M_1C)) ∇(CM_2/ρ_2^0),
∇μ_2 = -(S_2/ρ_2)∇T + (v_2/M_2)∇p - (RT/M_2) ∇(CM_2/ρ_2^0),
∇μ_3 = -(S_3/ρ_3)∇T,

where v_i denotes the partial molar volume of the constituent i. A short numerical illustration of these dilute-solution expressions is given below.
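As a quick numerical check (ours; the reference potentials are arbitrarily set to zero, and the values are the Nafion-like ones used in the Discussion), the dilute-limit chemical potentials can be evaluated as follows:

```python
import numpy as np

R = 8.314             # J mol^-1 K^-1
M1, M2 = 1e-2, 18e-3  # kg mol^-1
rho2_0 = 1e3          # kg m^-3

def mu1(T, C, mu1_0=0.0):
    # mu_1 = mu_1^0 + (R T / M_1) ln(C M_2 / rho_2^0), dilute limit
    return mu1_0 + (R * T / M1) * np.log(C * M2 / rho2_0)

def mu2(T, C, mu2_0=0.0):
    # mu_2 = mu_2^0 - (R T / rho_2^0) C
    return mu2_0 - (R * T / rho2_0) * C

C = 4e3                        # mol m^-3
print(C * M2 / rho2_0)         # molar fraction x ~ 0.07
print(mu1(300.0, C), mu2(300.0, C))
```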
§.§ Vectorial constitutive equations

The vectorial constitutive equations require nine phenomenological coefficients. These coefficients are a priori second-rank tensors; considering the isotropy of the medium, they can be replaced by scalars:

𝐐' = L_3 ∇(1/T) + L_4 (ρ_2/ρ_4)[(1/T)Z_1𝐄 - ∇(μ_1/T) + ∇(μ_2/T)] + L_5 (ρ_3/ρ)[(1/T)((ρ_1/ρ_4)Z_1-Z_3)𝐄 - (ρ_1/ρ_4)∇(μ_1/T) - (ρ_2/ρ_4)∇(μ_2/T) + ∇(μ_3/T)],

𝐉_1 = L_4 ∇(1/T) + L_7 (ρ_2/ρ_4)[(1/T)Z_1𝐄 - ∇(μ_1/T) + ∇(μ_2/T)] + L_8 (ρ_3/ρ)[(1/T)((ρ_1/ρ_4)Z_1-Z_3)𝐄 - (ρ_1/ρ_4)∇(μ_1/T) - (ρ_2/ρ_4)∇(μ_2/T) + ∇(μ_3/T)],

𝐉_4 = L_5 ∇(1/T) + L_8 (ρ_2/ρ_4)[(1/T)Z_1𝐄 - ∇(μ_1/T) + ∇(μ_2/T)] + L_11 (ρ_3/ρ)[(1/T)((ρ_1/ρ_4)Z_1-Z_3)𝐄 - (ρ_1/ρ_4)∇(μ_1/T) - (ρ_2/ρ_4)∇(μ_2/T) + ∇(μ_3/T)].

Onsager's reciprocal relations moreover impose

L_6 = L_4, L_9 = L_5, L_10 = L_8.

Considering that the solution is dilute and using the expressions obtained for the chemical potentials, the heat flux can be written

𝐐' = -(A_QT/T^2)∇T + (A_QE/T)𝐄 + (A_QP/T)∇p + A_QC∇C

with

A_QT ≃ L_3 + L_4[-μ_1^0 - (RT/M_1)ln(CM_2/ρ_2^0) + μ_2^0 - (RT/ρ_2^0)C - S_1T/ρ_1 + S_2T/ρ_2] + (ρ_3/ρ)L_5[-(ρ_1/ρ_2)(μ_1^0 + (RT/M_1)ln(CM_2/ρ_2^0)) - μ_2^0 + RTC/ρ_2^0 + μ_3^0 - S_4T/ρ_4 + S_3T/ρ_3],
A_QE ≃ L_4Z_1 + (ρ_3/(ρρ_4))(ρ_1Z_1 - ρ_2Z_3)L_5,
A_QP ≃ L_4(v_2/M_2 - v_1/M_1) - (ρ_3/(ρρ_4))ϕ_4L_5,
A_QC ≃ -(R/(M_1C))L_4.

According to the definition of the partial molar volumes, indeed,

ρ_1v_1/M_1 + ρ_2v_2/M_2 = ϕ_4.

Likewise, the mass diffusion flux of the cations in the solution can be written as

𝐉_1 = -(A_1T/T^2)∇T + (A_1E/T)𝐄 + (A_1P/T)∇p + A_1C∇C

with

A_1T ≃ L_4 + L_7[μ_2^0 - μ_1^0 - (RT/M_1)ln(CM_2/ρ_2^0) - RTC/ρ_2^0 + TS_2/ρ_4 - TS_1/ρ_1] + (ρ_3L_8/(ρ_4ρ))[(RTρ_1/M_1)(1 - ln(CM_2/ρ_2^0)) - ρ_1μ_1^0 - μ_2^0 + ρ_4μ_3^0 - TS_2 - TS_1 - Tρ_4S_3/ρ_3],
A_1E ≃ L_7Z_1 + (ρ_3/ρ)L_8((ρ_1/ρ_4)Z_1 - Z_3),
A_1P ≃ L_7(v_2/M_2 - v_1/M_1) - (ρ_3ϕ_4/(ρρ_4))L_8,
A_1C ≃ -(R/(M_1C))L_7,

and the mass diffusion flux of the solution in the solid is

𝐉_4 = -(A_4T/T^2)∇T + (A_4E/T)𝐄 + (A_4P/T)∇p + A_4C∇C

with

A_4T ≃ L_5 + L_8[μ_2^0 - (RT/ρ_2^0)C - μ_1^0 - (RT/M_1)ln(CM_2/ρ_2^0) - TS_1/ρ_1 + TS_2/ρ_2] + (ρ_3L_11/ρ)[RTC/ρ_2^0 - (RTC/ρ_4^0)ln(CM_2/ρ_2^0) - ρ_1μ_1^0/ρ_4 - μ_2^0 + μ_3^0 - TS_1/ρ_4 - TS_2/ρ_4 + TS_3/ρ_3],
A_4E ≃ L_8Z_1 + L_11(ρ_3/ρ)((ρ_1/ρ_4)Z_1 - Z_3),
A_4P ≃ L_8(v_2/M_2 - v_1/M_1) - (ρ_3ϕ_4/(ρρ_4))L_11,
A_4C ≃ -(R/(M_1C))L_8.

The linear flux-force structure of these relations, together with the Onsager symmetry, is illustrated below.
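The following minimal sketch (ours; the numerical values are placeholders, not fitted coefficients) shows the structure of the system: a symmetric matrix of scalar phenomenological coefficients acting linearly on the generalized forces.

```python
import numpy as np

# Rows/columns ordered as (Q', J_1, J_4); Onsager reciprocity makes L symmetric
L = np.array([[3.0, 0.4, 0.5],     # L_3, L_4,      L_5
              [0.4, 7.0, 0.8],     # L_6=L_4, L_7,  L_8
              [0.5, 0.8, 11.0]])   # L_9=L_5, L_10=L_8, L_11
assert np.allclose(L, L.T)

X = np.array([0.1, 0.2, 0.3])      # generalized forces (arbitrary units)
fluxes = L @ X                     # linear flux-force relations
```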
§ DISCUSSION

§.§ Nafion physicochemical properties

In order to approximate these complex equations, we are going to estimate the different terms. To do this, we focus on a particular electro-active polymer, Nafion, and we restrict ourselves to the isothermal case. The physicochemical properties of the dry polymer are well documented: its molecular weight M_3 is between 10^2 and 10^3 kg mol^{-1} <cit.>, and its mass density ρ_3^0 is close to 2.1 10^3 kg m^{-3} <cit.>. Its equivalent weight M_eq, that is to say the weight of polymer per mole of ionic sites, is 1.1 kg eq^{-1} <cit.>. We deduce the electric charge per unit of mass Z_3 = -F/M_eq ≃ -9 10^4 C kg^{-1}, where F = 96487 C mol^{-1} denotes Faraday's constant. The cations may be H^+, Li^+ or Na^+ ions; we use an average molar mass M_1 ∼ 10^{-2} kg mol^{-1}, which corresponds to a mass electric charge Z_1 ∼ 10^7 C kg^{-1}. The cations partial molar volume v_1 is of the order of M_1/ρ_4^0 ∼ 10^{-5} m^3 mol^{-1}. The solvent molar mass M_2 is equal to 18 10^{-3} kg mol^{-1} and its mass density ρ_2^0 is 10^3 kg m^{-3}; its partial molar volume v_2 is approximately equal to 18 10^{-6} m^3 mol^{-1}, which is the molar volume of the pure solvent. The dynamic viscosity of water η_2 is 10^{-3} Pa s.

When the polymer is saturated with water, the solution mass fraction is usually between 20% and 25% if the counterion is a proton <cit.>. This corresponds to a volume fraction ϕ_4 between 34% and 41%. According to P. Choi <cit.>, each anion is then surrounded by an average of 14 molecules of water, which corresponds to a porosity of 32%. In the case of a counterion Li^+ or Na^+, S. Nemat-Nasser and J. Yu Li <cit.> indicate that the volume increases by 44.3% and 61.7%, respectively, between the dry and the saturated polymer, which corresponds to porosities equal to 31% and 38%. Hereafter we use an average value ϕ_4 ∼ 35%. We deduce the mass densities of the cations, solvent, solid and complete material relative to the volume of the whole material:

ρ_1 ∼ 14 kg m^{-3}, ρ_2 ∼ 0.35 10^3 kg m^{-3}, ρ_3 ∼ 1.4 10^3 kg m^{-3}, ρ ∼ 1.8 10^3 kg m^{-3}.

The cations molar fraction relative to the liquid phase and the anions molar concentration, which is equal to the average cations concentration, can be written

x ∼ 7%, C ∼ 4 10^3 mol m^{-3}.

In the following, we suppose that the temperature is T = 300 K. As for the electric field, it is typically about 10^4 V m^{-1} <cit.>.

§.§ Rheological equation

We have shown that the rheological equation of the complete material identifies with a Kelvin-Voigt model:

σ = λ(trε)1 + 2Gε + λ_v(tr ε̇)1 + 2μ_v ε̇,

where λ and G are, respectively, the first Lamé constant and the shear modulus of the whole material, and λ_v and μ_v are viscoelastic coefficients:

λ_v ≡ L_1'/T, 2μ_v ≡ L_2/T.

Nafion is a thermoplastic semi-crystalline ionomer. M. N. Silberstein and M. C. Boyce represent the polymer by a Zener model <cit.>. The elastic coefficients of the dry polymer can be deduced from their measurements:

G_3 ∼ 1.1 10^8 Pa, λ_3 ∼ 2.6 10^8 Pa, E_3 ∼ 3 10^8 Pa, ν_3 ∼ 0.36,

where G_3 is the shear modulus, λ_3 the first Lamé constant, E_3 the Young's modulus and ν_3 the Poisson's ratio of the solid phase. The Young's modulus is in good agreement with the values cited in <cit.>. These values also correspond to the typical values for this kind of polymer, especially the Poisson's ratio, which is usually close to 0.33 below the glass transition temperature and to 0.5 around the transition temperature <cit.>. When the polymer is saturated with water, the elastic coefficients vary; water has a plasticising effect <cit.>. We obtain the following values <cit.>, which are in agreement with the usual ones <cit.>:

G ∼ 4.5 10^7 Pa, λ ∼ 3 10^8 Pa, E ∼ 1.3 10^8 Pa, ν ∼ 0.435.

Viscoelastic coefficients can be deduced from uniaxial tension tests <cit.>:

E_v = μ_v(3λ_v+2μ_v)/(λ_v+μ_v) ∼ 1.2 10^8 Pa s.

The viscoelastic coefficients λ_v and μ_v (or E_v) can be estimated from the relaxation times measured in traction and shear tests. Typically, the relaxation time for traction is of the order of θ_E ∼ 15 s for the saturated Nafion polymer <cit.>. The shear relaxation time is usually of the same order as the traction one: θ_μ ∼ θ_E <cit.>. The viscoelastic coefficients are given by the relations E_v = E θ_E and μ_v = G θ_μ for the traction and shear viscoelastic moduli, respectively. Therefore, the coefficients are given by

λ ∼ 3 10^8 Pa, G ∼ 4.5 10^7 Pa, λ_v ∼ 7 10^8 Pa s, μ_v ∼ 10^8 Pa s.

Accordingly, we deduce from (84)

L_1' ∼ 2.1 10^{11} Pa s K, L_2 ∼ 6 10^{10} Pa s K.

It is worth noting that these viscoelastic phenomenological coefficients depend very strongly on the solvent concentration and on the temperature, especially if the operating temperature of the polymer is close to that of the glass transition. In addition, the molecular relaxation time is of the order of 10 s just below the glass transition <cit.>. A minimal numerical implementation of this rheological model is sketched below.
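The following Python fragment is a minimal sketch (ours, not from the paper) of the Kelvin-Voigt law above, using the saturated-Nafion coefficients quoted in this subsection; the example strain history is an arbitrary choice.

```python
import numpy as np

# Saturated-Nafion coefficients quoted above (orders of magnitude)
lam, G = 3e8, 4.5e7        # Pa
lam_v, mu_v = 7e8, 1e8     # Pa s
T = 300.0                  # K

# Phenomenological coefficients: lam_v = L1'/T and 2*mu_v = L2/T
L1_prime = lam_v * T       # ~2.1e11 Pa s K
L2 = 2.0 * mu_v * T        # ~6e10  Pa s K

def kelvin_voigt(eps, eps_dot):
    # sigma = lam tr(eps) 1 + 2G eps + lam_v tr(eps_dot) 1 + 2 mu_v eps_dot,
    # for 3x3 strain and strain-rate tensors
    I = np.eye(3)
    return (lam * np.trace(eps) * I + 2.0 * G * eps
            + lam_v * np.trace(eps_dot) * I + 2.0 * mu_v * eps_dot)

# Example: uniaxial strain ramp at rate 1e-3 s^-1 (arbitrary), after 10 s
eps_dot = np.diag([1e-3, 0.0, 0.0])
sigma = kelvin_voigt(10.0 * eps_dot, eps_dot)
```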
§.§ Nernst-Planck equation

In the following, we focus on the isothermal case. Considering the previous numerical estimates, we can write, to a first approximation,

Z_1 >> Z_3, ρ ∼ ρ_2 ∼ ρ_3 >> ρ_1, ρ_1Z_1 ∼ ρ_4Z_3.

Moreover, the non-diagonal phenomenological coefficients are usually small compared to the diagonal ones; we suppose that

L_3 ≳ L_4, L_5; L_7 ≳ L_4, L_8; L_11 ≳ L_5, L_8.

One deduces

𝐉_1 ≃ (L_7Z_1/T)𝐄 + (1/T)[(v_2/M_2)(L_7 - ρ_3L_8/ρ) - v_1L_7/M_1]∇p - (RL_7/(M_1C))∇C,

that is to say,

𝐕_1 ≃ -(RL_7/(M_1ρ_1C)){∇C - (M_1CZ_1/RT)𝐄 + (Cv_1/RT)[1 - (M_1v_2/(M_2v_1))(1 - ρ_3L_8/(ρL_7))]∇p} + 𝐕_2.

This expression identifies with the Nernst-Planck equation <cit.>:

𝐕_1 = -(D/C)[∇C - (Z_1M_1C/RT)𝐄 + (Cv_1/RT)(1 - (M_1/M_2)(v_2/v_1))∇p] + 𝐕_2,

where D denotes the mass diffusion coefficient of the cations in the liquid phase and v_1 their partial molar volume. This equation expresses the equilibrium of one mole of ions under the action of four forces: the Stokes friction force -6πη_2aN_a(𝐕_1-𝐕_2), the pressure force -v_1(1-(M_1/M_2)(v_2/v_1))∇p, the electric force Z_1M_1𝐄 and the thermodynamic force -M_1∇μ_1; N_a denotes the Avogadro constant and a the ion hydrodynamic radius, i.e. the radius of the hydrated ion. The proton mass diffusion coefficient D = RT/(6πη_2aN_a) is about 2 10^{-9} m^2 s^{-1} <cit.>. The proportionality factor 1-(M_1/M_2)(v_2/v_1) reduces the mass pressure force exerted on the solution to the cations; it is therefore of the order of x. We obtain by identification

L_8 << L_7.

We can now estimate the order of magnitude of the different terms of this equation. The concentration gradient |∇C| can be evaluated by dividing the average concentration of anions (or cations) by the polymer film thickness. This thickness e is typically about 200 μm <cit.>, which provides a concentration gradient of the order of 2 10^7 mol m^{-4}. More precisely, numerical studies show that the cations gather near the electrode of opposite sign, so the concentration gradient is higher in certain zones than this evaluation suggests. These simulations make it possible to estimate the maximal concentration gradient at 7 10^8 mol m^{-4} <cit.> or at 10^7 mol m^{-4} <cit.>. Hence

|∇C| ≲ 10^8 mol m^{-4}.

The pressure gradient can be roughly estimated by dividing the air pressure by the strip thickness, which provides a value of about 5 10^8 Pa m^{-1}, or by using Darcy's law; the average fluid velocity can be estimated from the response time of the polymer strip, τ ∼ 1 to 10 s <cit.>:

|𝐕_{4,moy}| ∼ e/τ ∼ 10^{-4} m s^{-1}.

Otherwise, the characteristic size d of the hydrated polymer pores is about 100 Å <cit.>. We can deduce the polymer intrinsic permeability K, which is of the order of the square of the pore size (10^{-16} m^2). Darcy's law then provides

|∇p| ∼ (η_2/K)|𝐕_{4,moy}| ∼ 10^9 Pa m^{-1}.

This is in good agreement with the previous estimate. We finally obtain the following orders of magnitude for the different terms of the Nernst-Planck equation:

|∇C| ≲ 10^8 mol m^{-4},
(M_1C/RT)Z_1|𝐄| ∼ 1.6 10^9 mol m^{-4},
(Cv_1/RT)(1-(M_1/M_2)(v_2/v_1))|∇p| ∼ 1.1 10^3 mol m^{-4}.

The cations thus move mainly under the actions of the electric field and of mass diffusion; the effect of the pressure gradient is negligible. These estimates are straightforward to reproduce, as sketched below.
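The order-of-magnitude comparison can be reproduced directly; in this sketch (ours) the proportionality factor is taken as ≈ x, as argued above, and all other values are the ones quoted in this section.

```python
# Order-of-magnitude check of the Nernst-Planck terms
R, T = 8.314, 300.0            # J mol^-1 K^-1, K
M1 = 1e-2                      # kg mol^-1
Z1 = 1e7                       # C kg^-1
C = 4e3                        # mol m^-3
v1 = 1e-5                      # m^3 mol^-1
E, grad_p, grad_C = 1e4, 1e9, 1e8

factor = 0.07   # 1 - (M1/M2)(v2/v1) is of the order of x ~ 7% (see text)

electric = M1 * C * Z1 * E / (R * T)            # ~1.6e9 mol m^-4
pressure = C * v1 / (R * T) * factor * grad_p   # ~1.1e3 mol m^-4
print(grad_C, electric, pressure)
```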
§.§ Generalized Darcy's law

In the isothermal case, the mass diffusion flux of the solution in the solid can be approximated as

𝐉_4 ≃ (1/T)[L_8Z_1 + L_11(ρ_3/ρ)((ρ_1/ρ_4)Z_1-Z_3)]𝐄 - (R/(M_1C))L_8∇C + (1/T)[L_8(v_2/M_2-v_1/M_1) - (ρ_3ϕ_4/(ρρ_4))L_11]∇p.

The pressure term must be identified with Darcy's law:

(1/(Tρ_4))[L_8(v_1/M_1-v_2/M_2) + (ρ_3ϕ_4/(ρρ_4))L_11] ∼ K/(η_2ϕ_4),

where K denotes the intrinsic permeability of the solid phase. Considering the previous estimate of L_8, the first term is negligible; we can then estimate L_11:

L_11 ∼ (KT/(η_2ϕ_4^2)) ρ_2^2 (ρ/ρ_3) ∼ 3.8 10^{-5} kg s K m^{-3} >> L_8.

The constitutive equation becomes

𝐕_4-𝐕_3 ≃ -(K/(η_2ϕ_4))[∇p - ρ_2^0((ρ_1/ρ_4)Z_1-Z_3)𝐄] - (R/(M_1Cρ_4))L_8∇C.

The orders of magnitude of the different terms are

(K/(η_2ϕ_4))|∇p| ∼ 2.8 10^{-4} m s^{-1},
(Kρ_2^0/(η_2ϕ_4))((ρ_1/ρ_4)Z_1-Z_3)|𝐄| ∼ 1.1 m s^{-1},
(R/(M_1Cρ_4))L_8|∇C| << 2 10^{-6} m s^{-1}.

The phenomenological equation thus obtained identifies, to a first approximation, with a generalized Darcy's law:

𝐕_4-𝐕_3 ≃ -(K/(η_2ϕ_4))[∇p - ρ_4^0(Z_4-Z_3)𝐄].

In this expression, (1/ρ_4^0)∇p represents the mass pressure force and (Z_4-Z_3)𝐄 the mass electric force. The second term expresses the motion of the solution under the action of the electric field; it constitutes an electro-osmotic term. When an electric field is applied, the cations distribution becomes very heterogeneous <cit.>. Three regions can be distinguished (the velocity estimates above are checked numerically after this list):
* Around the negative electrode, where the cations gather, Z_4 >> Z_3 and

𝐕_4-𝐕_3 = (K/(η_2ϕ_4))(ρ_4^0Z_4𝐄 - ∇p).

The electric force exerted on the solution is due to the cations charge; we recover the expression obtained by M. A. Biot <cit.>.
* Near the positive electrode, where the cation concentration is very low, Z_4 << Z_3 and

𝐕_4-𝐕_3 = -(K/(η_2ϕ_4))(ρ_4^0Z_3𝐄 + ∇p).

ρ_4^0Z_3𝐄 represents the electric force exerted on the anions relative to the volume of the solution. This result corresponds to the expression obtained by Grimshaw et al. <cit.>. The solution motion is due to the attractive force exerted on the cations by the solid.
* In the center of the strip, Z_4 ∼ Z_3. The solution electric charge is partially balanced by the solid one, and the mass electric force exerted on the solution is proportional to the net charge (Z_4-Z_3).
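A sketch (ours) of the numerical check for the velocity estimates quoted above; all values come from this section, with Z_3 taken as -9 10^4 C kg^{-1}.

```python
K, eta2, phi4 = 1e-16, 1e-3, 0.35    # m^2, Pa s, (dimensionless)
rho2_0 = 1e3                         # kg m^-3
rho1, rho4 = 14.0, 364.0             # kg m^-3
Z1, Z3 = 1e7, -9e4                   # C kg^-1
E, grad_p = 1e4, 1e9                 # V m^-1, Pa m^-1

mobility = K / (eta2 * phi4)         # ~2.9e-13 m^3 s kg^-1
pressure_term = mobility * grad_p                                # ~3e-4 m/s
electric_term = mobility * rho2_0 * (rho1 / rho4 * Z1 - Z3) * E  # ~1.3 m/s
print(pressure_term, electric_term)
```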
§ CONCLUSION

We have studied an ionic electro-active polymer. When this electrolyte is saturated with water, it is fully dissociated and releases cations of small size, while anions remain bound to the polymer backbone. We have depicted this system as the superposition of three systems: a solid component, the negatively charged polymer backbone, which is assimilated to a deformable porous medium, and an ionic liquid solution composed of the free cations and the solvent (the water); these three components move with different velocity fields. In a previous article <cit.>, we established the conservation laws of the two phases: mass continuity equation, Maxwell's equations, linear momentum conservation law and energy balance laws. Averaging these equations over the R.V.E. and using the material derivative concept, we obtained the conservation laws of the complete material. In this paper, we derive the entropy balance law and the thermodynamic relations using the same method. We deduce the entropy production and identify the generalized forces and fluxes. We can then write the constitutive equations of the complete material. The first one links the stress tensor with the strain tensor; the saturated polymer satisfies a Kelvin-Voigt model. The three others are vectorial equations, including a generalized Fourier's law. Focusing on the isothermal case, we also obtain a generalized Darcy's law and recover the Nernst-Planck equation. Using the physico-chemical properties of Nafion, we estimate the phenomenological coefficients. This enables an evaluation of the different terms of the equations. We now plan to compare these results with experimental data published in the literature. This should allow us to improve our model. Another possible improvement would be to consider a Zener model for the viscoelastic behavior of the polymer.

§ APPENDIX A: PARTICLE DERIVATIVES AND MATERIAL DERIVATIVE

In order to write the balance equations of the whole material, we use the material derivative D/Dt defined in our previous paper <cit.>. Indeed, the different phases do not move with the same velocity: the velocities of the solid and of the solution are a priori different. For a quantity g, we can define particle derivatives following the motion of the solid (d_3/dt), the solution (d_4/dt) or the interface (d_i/dt) as

d_kg/dt = ∂g/∂t + ∇g·𝐕_k.

Let us consider an extensive quantity of density g(𝐫,t) relative to the whole material:

g = g_3 + g_4 + g_i,

where g_3, g_4 and g_i are the densities relative to the total actual volume attached to the solid, the solution and the interface, respectively. The material derivative makes it possible to calculate the variation of g(𝐫,t) following the motion of the different phases <cit.>:

ρ (D/Dt)(g/ρ) = ∑_{k=3,4,i} ρ_k (d_k/dt)(g_k/ρ_k) = ∑_{k=3,4,i} ∂g_k/∂t + div(g_k𝐕_k).

This derivative must not be confused with the derivative d/dt following the barycentric velocity 𝐕.

§ APPENDIX B: INTERFACE MODELLING

In practice, the contact area between phases 3 and 4 has a certain thickness; the extensive physical quantities vary continuously from one bulk phase to the other. This complicated reality can be modelled by two uniform bulk phases separated by a discontinuity surface Σ whose localization is arbitrary. Let Ω be a cylinder crossing Σ, whose bases are parallel to Σ. We denote by Ω_3 and Ω_4 the parts of Ω respectively included in phases 3 and 4. The continuous quantities relative to the contact zone are identified by a superscript ^0 and no subscript. The microscale surface entropy S_i^0 and the microscale surface entropy production s_i^0 are defined by

S_i^0 = lim_{Σ→0} (1/Σ){∫_Ω S^0 dv - ∫_{Ω_3} S_3^0 dv - ∫_{Ω_4} S_4^0 dv},
s_i^0 = lim_{Σ→0} (1/Σ){∫_Ω s^0 dv - ∫_{Ω_3} s_3^0 dv - ∫_{Ω_4} s_4^0 dv},

where Ω_3 and Ω_4 are small enough so that S_3^0, S_4^0, s_3^0 and s_4^0 are constant. Their averages over the R.V.E. are the volume quantities S_i and s_i:

S_i = ⟨χ_i S_i^0⟩, s_i = ⟨χ_i s_i^0⟩.

We arbitrarily fix the interface position in such a way that it has no mass density:

ρ_i^0 = lim_{Σ→0} (1/Σ){∫_Ω ρ^0 dv - ∫_{Ω_3} ρ_3^0 dv - ∫_{Ω_4} ρ_4^0 dv} = 0.

Neglecting the heat flux along the interfaces, the balance equation of the interfacial quantity S_i^0 is written as <cit.>

∂S_i^0/∂t + div_s(S_i^0𝐕_i^0) = ∑_{3,4}[S_k^0(𝐕_k^0-𝐕_i^0)·𝐧_k + Σ_k^0·𝐧_k] + s_i^0,

where div_s denotes the surface divergence operator. Averaging this equation over the R.V.E. provides

∂S_i/∂t + div(S_i𝐕_i) = ∑_{3,4} ⟨χ_i Σ_k^0·𝐧_k⟩ + s_i.

The interfacial Gibbs equation derives from the entropy balance equation (<ref>) and from the internal energy balance equation established in <cit.>:

d_iU_i/dt = T_i d_iS_i/dt,

noting that the entropy production s_i and the diffusion current 𝐢_i vanish at equilibrium. The interface has no mass density; as a result, there is no mass exchange term in this relation. In the same way, Euler's relation and the Gibbs-Duhem relation write

U_i - T_iS_i = 0, S_i d_iT_i/dt = 0.

§ APPENDIX C: SMALL DEFORMATION HYPOTHESIS

In the case of small deformations, the Green-Lagrange finite strain tensor reduces to the Cauchy infinitesimal strain tensor ε_3^0:

ε_3^0 = (1/2)(∇𝐮_3^0 + ∇𝐮_3^0^T),

where 𝐮_3^0 is the displacement vector <cit.>. The solid phase velocity is defined by

𝐕_3^0 = (d_3^0/dt)(𝐮_3^0).

The small deformation hypothesis results in

|∇𝐮_3^0| << 1 and |∇𝐕_3^0| << 1.

Let 𝐀 be a vector quantity.
The particle derivative of 𝐀 following the motion of the solid phase is

d_3^0𝐀/dt ≡ ∂𝐀/∂t + ∇(𝐀)·𝐕_3^0.

The small deformation assumption leads to

d_3^0/dt[div(𝐀)] ≃ div(d_3^0𝐀/dt), d_3^0/dt[∇𝐀] ≃ ∇(d_3^0𝐀/dt).

One deduces

d_3ε_3/dt ≃ d_3^0ε_3^0/dt.

§ NOTATIONS

The subscripts k=1,2,3,4,i respectively represent the cations, the solvent, the solid, the solution (water and cations) and the interface; quantities without subscript refer to the whole material. The superscript ^0 denotes a local quantity; the lack of superscript indicates an average quantity at the macroscopic scale. Microscale volume quantities are relative to the volume of the phase, average quantities to the volume of the whole material. Superscripts ^s and ^a respectively indicate the deviatoric and the antisymmetric parts of a second-rank tensor, and ^T its transpose.

* C: cations molar concentration (relative to the liquid phase);
* D: mass diffusion coefficient of the cations in the liquid phase;
* 𝐃: electric displacement field;
* E, E_3: Young's modulus;
* 𝐄: electric field;
* E_c, E_cΣ: kinetic energy density;
* F = 96487 C mol^{-1}: Faraday's constant;
* G, G_3: shear modulus;
* 𝐈: current density vector;
* 𝐢 (𝐢', 𝐢_k, 𝐢_k^0): diffusion current;
* 𝐉_k: mass diffusion flux;
* K: intrinsic permeability of the solid phase;
* L_i, L_i': phenomenological coefficients;
* M_k: molar mass of component k;
* M_eq: equivalent weight (weight of polymer per mole of sulfonate groups);
* 𝐧_k: outward-pointing unit normal of phase k;
* p (p_k, p_k^0): pressure;
* 𝐐 (𝐐', 𝐐_k^0): heat flux;
* R = 8.314 J mol^{-1} K^{-1}: gas constant;
* s (s_k^0, s_k): rate of entropy production;
* S (S_k^0, S_k): entropy density;
* T (T_k, T_k^0): absolute temperature;
* U (U_Σ, U_k, U_k^0): internal energy density;
* 𝐮 (𝐮_3^0, 𝐮_3): displacement vector;
* v_k: partial molar volume of component k (relative to the liquid phase);
* 𝐕 (𝐕_k, 𝐕_k^0): velocity;
* x: cations mole fraction (relative to the liquid phase);
* Z (Z_k, Z_k^0): total electric charge per unit of mass;
* ε (ε_k^0): permittivity;
* ε (ε_k, ε_k^0): strain tensor;
* η_2: dynamic viscosity of water;
* λ, λ_3: first Lamé constant;
* λ_v, μ_v, E_v: viscoelastic coefficients;
* ν, ν_3: Poisson's ratio;
* μ_k, μ_k^0 (μ_{k,mol}^0): mass (molar) chemical potential;
* ρ (ρ_k, ρ_k', ρ_k^0): mass density;
* σ (σ_k): stress tensor;
* σ^v: viscous stress tensor;
* σ^e (σ_k^e, σ_k^0e): equilibrium stress tensor;
* Σ (Σ', Σ_k^0, Σ_k): entropy flux vector;
* ϕ_k: volume fraction of phase k;
* χ_k: function of presence of phase k.

Tixier Tixier M., Pouget J.: Conservation laws of an electro-active polymer. Continuum Mech. Thermodyn. 26, 4, 465-481 (2014), doi: 10.1007/s00161-013-0314-9
nemat2000 Nemat-Nasser S., Li J.: Electromechanical response of ionic polymers metal composites. J. Appl. Phys. 87, 3321-3331 (2000)
nemat2002 Nemat-Nasser S.: Micro-mechanics of actuator of ionic polymer-metal composites. J. Appl. Phys. 92, 2899-2915 (2002)
degennes DeGennes P.G., Okumura K., Shahinpoor M., Kim K.J.: Mechanoelectric effect in ionic gels. Europhys. Lett. 50, 513-518 (2000)
shahinpoor1991 Segalman D., Witkowsky W., Adolf D., Shahinpoor M.: Electrically controlled polymeric muscles as active materials used in adaptive structures. Proc. ADPA/AIAA/ASME/SPIE Conf. on Active Materials and Adaptive Structures (1991)
[shahinpoor1998] Shahinpoor M., Bar-Cohen Y., Simpson J.O., Smith J.: Ionic polymer-metal composites (IPMCs) as biomimetic sensors, actuators and artificial muscles - a review. Smart Mater. Struct. 7, R15-R30 (1998)
[Futerko] Futerko P., Hsing I.M.: Thermodynamics of water uptake in perfluorosulfonic acid membranes. J. Electrochem. Soc. 146, 6, 2049-2053 (1999)
[Nigmatulin79] Nigmatulin R.I.: Spatial averaging in the mechanics of heterogeneous and dispersed systems. Int. J. Multiph. Flow 5, 353-385 (1979)
[Nigmatulin90] Nigmatulin R.I.: Dynamics of multiphase media, vols. 1 and 2. Hemisphere, New York (1990)
[Drew83] Drew D.A.: Mathematical modeling of two-phase flows. Ann. Rev. Fluid Mech. 15, 261-291 (1983)
[Drew98] Drew D.A., Passman S.L.: Theory of multicomponent fluids. Springer-Verlag, New York (1998)
[Ishii06] Ishii M., Hibiki T.: Thermo-fluid dynamics of two-phase flow. Springer, New York (2006)
[Lhuillier03] Lhuillier D.: A mean-field description of two-phase flows with phase changes. Int. J. Multiph. Flow 29, 511-525 (2003)
[DeGroot] De Groot S.R., Mazur P.: Non-equilibrium thermodynamics. North-Holland Publishing Company, Amsterdam (1962)
[Diu] Diu B., Guthmann C., Lederer D., Roulet B.: Thermodynamique. Hermann, Paris (2007)
[Heitner-Wirguin] Heitner-Wirguin C.: Recent advances in perfluorinated ionomer membranes: structure, properties and applications. J. Membrane Sci. 120, 1-33 (1996)
[Gebel] Gebel G.: Structural evolution of water swollen perfluorosulfonated ionomers from dry membrane to solution. Polymer 41, 5829-5838 (2000)
[Cappadonia] Cappadonia M., Erning J., Stimming U.: Proton conduction of Nafion 117 membrane between 140 K and room temperature. J. Electroanal. Chem. 376, 189-193 (1994)
[Choi] Choi P., Jalani N.H., Datta R.: Thermodynamics and proton transport in Nafion I. Membrane swelling, sorption and ion-exchange equilibrium. J. Electrochem. Soc. 152, 84-89 (2005)
[Silberstein2010] Silberstein M.N., Boyce M.C.: Constitutive modeling of the rate, temperature, and hydration dependent deformation response of Nafion to monotonic and cyclic loading. J. Power Sources 195, 5692-5706 (2010)
[Satterfield2009] Barclay Satterfield M., Benziger J.B.: Viscoelastic properties of Nafion at elevated temperature and humidity. J. Polym. Sci. Pol. Phys. 47, 11-24 (2009)
[ferry] Ferry J.D.: Viscoelastic properties of polymers, 2nd edn. John Wiley and Sons, New York (1970)
[Kundu] Kundu S., Simon L.C., Fowler M., Grot S.: Mechanical properties of Nafion electrolyte membranes under hydrated conditions. Polymer 46, 11707-11715 (2005)
[Bauer] Bauer F., Denneler S., Willert-Porada M.: Influence of temperature and humidity on the mechanical properties of Nafion 117 polymer electrolyte membrane. J. Polym. Sci. Pol. Phys. 43, 786-795 (2005)
[Silberstein2011] Silberstein M.N., Pillai P.V., Boyce M.C.: Biaxial elastic-viscoplastic behavior of Nafion membranes. Polymer 52, 529-539 (2011)
[Silberstein2008] Silberstein M.N.: Mechanics of proton exchange membranes: time, temperature and hydration dependence of the stress-strain behavior of persulfonated polytetrafluoroethylene. MS thesis, Massachusetts Institute of Technology, Cambridge, MA (2008)
[Strobl] Strobl G.R.: The physics of polymers. Springer-Verlag, Berlin (1997)
[Combette] Combette P., Ernoult I.: Physique des polymères. Hermann, Paris (2006)
[Lakshmi] Lakshminarayanaiah N.: Transport phenomena in membranes. Academic Press, New York (1969)
[Schlogl] Schlögl: Stofftransport durch Membranen. Steinkopff, Darmstadt (1964)
[Schlogl2] Schlögl: Membrane permeation in systems far from equilibrium. Ber. Bunsen. Phys. Chem. 70, 400 (1966)
[Zawodsinski] Zawodsinski T.A., Neeman M., Sillerud L.O., Gottesfeld S.: Determination of water diffusion coefficients in perfluorosulfonate ionomeric membranes. J. Phys. Chem.-US 95, 6040-6044 (1991)
[Kreuer2001] Kreuer K.D.: On the development of proton conducting polymer membranes for hydrogen and methanol fuel cells. J. Membrane Sci. 185, 29-39 (2001)
[Farinholt] Farinholt K., Leo D.J.: Modeling of electromechanical charge sensing in ionic polymer transducers. Mech. Mater. 36, 421-433 (2004)
[Pineri] Pineri M., Duplessix R., Volino F.: Neutron studies of perfluorosulfonated polymer structures. In: Eisenberg A., Yeager H.L. (eds.) Perfluorinated Ionomer Membranes, ACS Symposium Series 180, 249-282. American Chemical Society, Washington, DC (1982)
[Biot] Biot M.A.: Theory of elasticity and consolidation for a porous anisotropic solid. J. Appl. Phys. 26, 2, 182-185 (1955)
[Grimshaw] Grimshaw P.E., Nussbaum J.H., Grodzinsky A.J., Yarmush M.L.: Kinetics of electrically and chemically induced swelling in polyelectrolyte gels. J. Chem. Phys. 93, 6, 4462-4472 (1990)
[Coussy95] Coussy O.: Mechanics of porous continua. Wiley, Chichester (1995)
[Biot77] Biot M.A.: Variational Lagrangian-thermodynamics of nonisothermal finite strain mechanics of porous solids and thermomolecular diffusion. Int. J. Solids Structures 13, 579-597 (1977)
[Coussy89] Coussy O.: Thermomechanics of saturated porous solids in finite deformation. Eur. J. Mech. A/Solids 8, 1, 1-14 (1989) | http://arxiv.org/abs/1706.08731v1 | {
"authors": [
"Mireille Tixier",
"Joël Pouget"
],
"categories": [
"cond-mat.soft"
],
"primary_category": "cond-mat.soft",
"published": "20170627085544",
"title": "Constitutive equations for an electro-active polymer"
} |
http://arxiv.org/abs/1706.08972v3 | {
"authors": [
"Ken Osato",
"Samuel Flender",
"Daisuke Nagai",
"Masato Shirasaki",
"Naoki Yoshida"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20170627180001",
"title": "Investigating Cluster Astrophysics and Cosmology with Cross-Correlation of the Thermal Sunyaev-Zel'dovich Effect and Weak Lensing"
} |
|
Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. Today it is commonly presented in an algebraic setting, but seems potentially useful also in knowledge representation based on predicate logic. We show that it can be modeled on the basis of first-order logic extended by second-order quantification. A wealth of classical results transfers, foundations for algorithms unfold, and connections with second-order quantifier elimination and Craig interpolation show up. Although for first-order inputs the set of solutions is recursively enumerable, the development of constructive methods remains a challenge. We identify some cases that allow constructions, most of them based on Craig interpolation, and show a method to take vocabulary restrictions on solution components into account. Revision: December 18, 2017

§ INTRODUCTION

Finding solution values for unknowns in Boolean equations was a principal reasoning mode in the Algebra of Logic of the 19th century. Schröder <cit.> investigated it as Auflösungsproblem (solution problem). It is closely related to the modern notion of Boolean unification. For a given formula that contains unknowns, formulas are sought such that, after substituting the unknowns with them, the given formula becomes valid or, dually, unsatisfiable. Of interest are also most general solutions, condensed representations of all solution substitutions. A central technique there is the method of successive eliminations, which traces back to Boole. Schröder investigated reproductive solutions as most general solutions, anticipating the concept of most general unifier. A comprehensive modern formalization based on this material, along with historic remarks, is presented by Rudeanu <cit.> in the framework of Boolean algebra. In automated reasoning variants of these techniques have been considered mainly in the late 80s and early 90s with the motivation to enrich Prolog and constraint processing by Boolean unification with respect to propositional formulas handled as terms <cit.>. An early implementation based on <cit.> has also been described in <cit.>. An implementation with BDDs of the algorithm from <cit.> is reported in <cit.>. The Π^P_2-completeness of Boolean unification with constants was proven only later in <cit.> and seemingly independently in <cit.>. Schröder's results were developed further by Löwenheim <cit.>. A generalization of Boole's method beyond propositional logic to relational monadic formulas has been presented by Behmann in the early 1950s <cit.>. Recently the complexity of Boolean unification in a predicate logic setting has been investigated for some formula classes, in particular for quantifier-free first-order formulas <cit.>.
A brief discussion of Boolean reasoning in comparison with predicate logic can be found in <cit.>.Here we remodel the solution problem formally along with basic classical results and some new generalizations in the framework of first-order logic extended by second-order quantification.The main thesis of this work is that it is possible and useful to apply second-order quantification consequently throughout the formalization.What otherwise would require meta-level notation is then expressed just with formulas.As will be shown, classical results can be reproduced in this framework in a way such that applicability beyond propositional logic, possible algorithmic variations, as well as connections with second-order quantifier elimination and Craig interpolation become visible. Of course, methods to solve Boolean equations on first-order formulas do not necessarily terminate.However, the set of solutions is recursively enumerable. By the modeling in predicate logic we try to pin down the essential points of divergence from propositional logic. Special cases that allow solution construction are identified, most of them related to definiens computation by Craig interpolation.In addition, a way to express a generalization of the solution problem where vocabulary restrictions are taken into account in terms of two related solution problems is shown.The envisaged application scenario is to let solving “solution problems”, or Boolean equation solving, on the basis of predicate logic join reasoning modes like second-order quantifier elimination (or “semantic forgetting”), Craig interpolation and abduction to support the mechanized reasoning about relationships between theories and the extraction or synthesis of subtheories with given properties. On the practical side, the aim is to relate it to reasoning techniques such as Craig interpolation on the basis of first-order provers, SAT and QBF solving, and second-order quantifier elimination based on resolution <cit.> and the Ackermann approach <cit.>. Numerous applications of Boolean equation solving in various fields are summarized in <cit.>.Applications in automated theorem proving and proof compression are mentioned in <cit.>.The prevention of certain redundancies has been described as application of (concept) unification in description logics <cit.>.Here the synthesis of definitional equivalences is sketched as an application.The rest of the paper is structured as follows: Notation, in particular for substitution in formulas, is introduced in Sect. <ref>. In Sect. <ref> a formalization of the solution problem is presented and related to different points of view. Section <ref> is concerned with abstract properties of and algorithmic approaches to solution problems with several unknowns. Conditions under which solutions exist are discussed in Sect. <ref>.Adaptions of classical material on reproductive solutions are given in Sect. <ref>. In Sect. <ref> various techniques for solution construction in particular cases are discussed.The solution problem with vocabulary restrictions is discussed in Sect. <ref>. The solution problem is displayed in Sect. <ref> as embedded in a setting with Skolemization and Herbrand expansion.Section <ref> closes the paper with concluding remarks.The material in Sect. <ref>–<ref> has also been published as <cit.>.§ NOTATION AND PRELIMINARIESNotational ConventionsWe consider formulas in first-order logic with equality extended by second-order quantification upon predicates. 
They are constructed from atoms (including equality atoms), constant operators ⊤, ⊥, the unary operator ¬, binary operators ∧, ∨ and quantifiers ∀, ∃ with their usual meaning. Further binary operators →, ←, ↔, as well as n-ary versions of ∧ and ∨, can be understood as meta-level notation. The operators ∧ and ∨ bind stronger than →, ← and ↔. The scope of ¬, the quantifiers, and the n-ary connectives is the immediate subformula to the right. A subformula occurrence has in a given formula positive (negative) polarity if it is in the scope of an even (odd) number of negations. A vocabulary is a set of symbols, that is, predicate symbols (briefly predicates), function symbols (briefly functions) and individual symbols. (Individual symbols are not partitioned into variables and constants. Thus, an individual symbol is – like a predicate – considered as variable if and only if it is bound by a quantifier.) The arity of a predicate or function s is denoted by arity(s). The set of symbols that occur free in a formula F is denoted by free(F). The property that no member of free(F) is bound by a quantifier occurrence in F is expressed by saying that F is quantifier-clean. Symbols not present in the formulas and other items under discussion are called fresh. The clean variant of a given formula F is the formula G obtained from F by successively replacing all bound symbols with fresh symbols such that G is quantifier-clean. The replacement is done in some way not specified here further such that each formula has a unique clean variant. We write F ⊨ G for F entails G; ⊨ F for F is valid; and F ≡ G for F is equivalent to G, that is, F ⊨ G and G ⊨ F. We write sequences of symbols, of terms and of formulas by juxtaposition. Their length is assumed to be finite. The empty sequence is written ε. A sequence with length 1 is not distinguished from its sole member. In contexts where a set is expected, a sequence stands for the set of its members. Atoms are written in the form p(t̄), where t̄ is a sequence of terms whose length is the arity of the predicate p. Atoms of the form p(), that is, with a nullary predicate p, are written also as p. For a sequence of fresh symbols we assume that its members are distinct. A sequence p_1… p_n of predicates is said to match another sequence q_1… q_m if and only if n=m and for all i ∈ {1,…,n} it holds that arity(p_i) = arity(q_i). If s̄ = s_1… s_n is a sequence of symbols, then ∀s̄ stands for ∀s_1 …∀s_n and ∃s̄ for ∃s_1 …∃s_n. If F̄ = F_1… F_n is a sequence of formulas, then ⊨ F̄ states ⊨ F_i for all i ∈ {1,…,n}, and free(F̄) = ⋃_i=1^n free(F_i). If Ḡ = G_1… G_n is a second sequence of formulas, then F̄ ≡ Ḡ stands for F_1 ≡ G_1 and … and F_n ≡ G_n. As explained below, in certain contexts the individual symbols in the set X = {x_i | i ≥ 1} play a special role. For example in the following shorthands for a predicate p, a formula F and x̄ = x_1… x_arity(p): p ≡ F stands for ∀x̄(p(x̄) ↔ F); p ≢ F for ¬(p ≡ F); p ≤ F for ∀x̄(p(x̄) → F); and p ≥ F for ∀x̄(p(x̄) ← F).
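For the propositional fragment, where all predicates are nullary, this notation can be made executable. The following small sympy sketch is our own illustration (the helper names valid and entails are not from the paper); it decides ⊨ F and F ⊨ G by reducing them to unsatisfiability checks:

```python
# A minimal propositional reading of the notation above, using sympy.
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Implies
from sympy.logic.inference import satisfiable

def valid(f):
    """|= f : f is valid iff its negation is unsatisfiable."""
    return satisfiable(Not(f)) is False

def entails(f, g):
    """f |= g."""
    return valid(Implies(f, g))

a, b, p = symbols('a b p')   # nullary predicates become propositional variables
F = Implies(And(p, Implies(p, a)), a)
print(valid(F))                       # True: |= (p & (p -> a)) -> a
print(entails(And(a, b), Or(a, b)))   # True: a & b |= a | b
```

The same two helpers are reused in the sketches further below.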
Substitution with Terms and FormulasTo express systematic substitution of individual symbols and predicates concisely we use the following notation: * F() and F()– Notational Context for Substitution of Individual Symbols.Let = c_1… c_n be a sequence of distinct individual symbols.We write F as F() to declare that for a sequence = t_1 … t_n of terms the expression F() denotes F with, for i ∈{1,…,n}, all free occurrences of c_i replaced by t_i.* F[], F[] and F[] – Notational Context for Substitution of Predicates.Let = p_1… p_n be a sequence of distinct predicates and let F be a formula.We write F as F[] to declare the following:*For a sequence = G_1(x_1 … x_p_1) … G_n(x_1 … x_p_n) of formulas the expression F[] denotes F with, for i ∈{1,…, n}, each atom occurrence p_i(t_1 … t_p_i) where p_i is free in F replaced by G_i(t_1 … t_p_i). *For a sequence = q_1 … q_n of predicates that matchesthe expression F[] denotes F with, for i ∈{1,…, n}, each free occurrence of p_i replaced by q_i. * The above notation F[], whereis a sequence of formulas or of predicates, is generalized to allow also p_i at the ith position of , for example F[G_1 … G_i-1 p_i… p_n]. The formula F[] then denotes F with only those predicates p_i with i ∈{1,…,n} that are not present at the ith position inreplaced by the ith component ofas described above (in the example only p_1,…,p_i-1 would be replaced).* [] – Notational Context for Substitution in a Sequence of Formulas. If = F_1… F_n is a sequence of formulas, then [] declares that [], whereis a sequence with the same length as , is to be understood as the sequence F_1[] … F_n[] with the meaning of the members as described above.In the above notation for substitution of predicates by formulas the members x_1,…, x_p ofplay a special role: F[] can be alternatively considered as obtained by replacing predicates p_i with λ-expressions λ x_1 …λ x_p_i . G_i followed by β-conversion.The shorthand pF can be correspondingly considered as p λ x_1 …λ x_p . G. The following property substitutible specifies preconditions for meaningful simultaneous substitution of formulas for predicates: A sequence = G_1… G_m of formulas is called substitutible for a sequence = p_1 … p_n of distinct predicates in a formula F, written F, if and only if m = n and for all i ∈{1,…,n} it holds that No free occurrence of p_i in F is in the scope of a quantifier occurrence that binds a member of G_i;G_i∩ = ∅; and G_i∩{x_j | j > p_i} = ∅. The following propositions demonstrate the introduced notation for formula substitution.It is well known that terms can be “pulled out of” and “pushed in to” atoms, justified by the equivalences p(t_1… t_n) ≡ ∃ x_1 …∃ x_n(p(x_1 … x_n) ⋀_i=1^n x_i=t_i) ≡ ∀ x_1 …∀ x_n(p(x_1 … x_n) ⋁_i=1^n x_i≠ t_i), which hold if no member of {x_1,…, x_n} does occur in the terms t_1,…,t_n. Analogously, substitutible subformulas can be “pulled out of” and “pushed in to” formulas:Let = G_1… G_n be a sequence of formulas, let = p_1… p_n be a sequence of distinct predicates and let F = F[] be a formula such that F. 
Then
prop-pullout: F[Ḡ] ≡ ∃p̄(F ∧ ⋀_i=1^n (p_i ≡ G_i)) ≡ ∀p̄(F ∨ ⋁_i=1^n (p_i ≢ G_i)).
prop-fixing: ∀p̄ F ⊨ F[Ḡ] ⊨ ∃p̄ F.
Ackermann's Lemma <cit.> can be applied in certain cases to eliminate second-order quantifiers, that is, to compute for a given second-order formula an equivalent first-order formula. It plays an important role in many modern methods for elimination and semantic forgetting – see, e.g., <cit.>: Let F, G be formulas and let p be a predicate such that G is substitutible for p in F, p ∉ free(G) and all free occurrences of p in F have negative polarity. Then ∃p((p ≥ G) ∧ F[p]) ≡ F[G].
§ THE SOLUTION PROBLEM FROM DIFFERENT ANGLES §.§ Basic Formal Modeling Our formal modeling of the Boolean solution problem is based on two concepts, solution problem and particular solution: A solution problem (SP) F[p̄] is a pair of a formula F and a sequence p̄ of distinct predicates. The members of p̄ are called the unknowns of the SP. The length of p̄ is called the arity of the SP. An SP with arity 1 is also called unary solution problem (1-SP). The notation F[p̄] for solution problems establishes as a "side effect" a context for specifying substitutions of p̄ in F by formulas as specified in Sect. <ref>. A particular solution (briefly solution) of an SP F[p̄] is defined as a sequence Ḡ of formulas such that Ḡ is substitutible for p̄ in F and ⊨ F[Ḡ]. The substitutibility condition in this definition implies that no member of p̄ occurs free in a solution. Of course, particular solution can also be defined on the basis of unsatisfiability instead of validity, justified by the equivalence of ⊨ F[Ḡ] and the unsatisfiability of ¬F[Ḡ]. The variant based on validity has been chosen here because then the associated second-order quantifications are existential, matching the usual presentation of elimination techniques. Being a solution is aside of the substitutibility condition a semantic property, that is, applying to formulas modulo equivalence: If Ḡ is a solution of F[p̄], then all sequences H̄ of formulas such that H̄ is substitutible for p̄ in F and H̄ ≡ Ḡ (that is, if Ḡ = G_1… G_n and H̄ = H_1… H_n, then H_i ≡ G_i holds for all i ∈ {1,…,n}) are also solutions of F[p̄]. Solution problem and solution as defined here provide abstractions of computational problems in a technical sense that would be suitable, e.g., for complexity analysis. Problems in the latter sense can be obtained by fixing involved formula and predicate classes. The abstract notions are adequate to develop much of the material on the "Boolean solution problem" shown here. On occasion, however, we consider restrictions, in particular to propositional and to first-order formulas, as well as to nullary predicates. [A Solution Problem and its Particular Solutions] As an example of a solution problem consider F[p_1p_2] where
F = ∀x(a(x) → b(x)) → (∀x(p_1(x) → p_2(x)) ∧ ∀x(a(x) → p_2(x)) ∧ ∀x(p_2(x) → b(x))).
The intuition is that the antecedent ∀x(a(x) → b(x)) specifies the "background theory", and w.r.t. that theory the unknown p_1 is "stronger" than the other unknown p_2, which, in addition, is "between" a and b. Examples of solutions are: ⟨a(x_1), a(x_1)⟩; ⟨a(x_1), b(x_1)⟩; ⟨⊥, a(x_1)⟩; ⟨b(x_1), b(x_1)⟩; and ⟨a(x_1) ∧ b(x_1), a(x_1) ∨ b(x_1)⟩. No solutions are for example ⟨b(x_1), a(x_1)⟩; ⟨⊤, a(x_1)⟩; and all members of {⊤, ⊥} × {⊤, ⊥}. Assuming a countable vocabulary, the set of valid first-order formulas is recursively enumerable.
It follows that for an n-ary F[] where F is first-order the set of those of its particular solutions that are sequences of first-order formulas is also recursively enumerable: An n-ary sequenceof well-formed first-order formulas that satisfies the syntactic restriction F is a solution of F[] if and only if F[] is valid. In the following subsections further views on the solution problem will be discussed: as unification or equation solving, as a special case of second-order quantifier elimination, and as related to determining definientia and interpolants. §.§ View as UnificationBecause F[] if and only if F[] ≡, a particular solution of F[] can be seen as a unifier of the two formulas F[] andmodulo logical equivalence as equational theory. From the perspective of unification the two formulas appear as terms, the members ofplay the role of variables and the other predicates play the role of constants. Vice versa, a unifier of two formulas can be seen as a particular solution, justified by the equivalence of L[] ≡ R[] and (LR)[], which holds for sequencesandof formulas and predicates, respectively, and formulas L = L[], R = R[], (LR) = (LR)[] such that L and R. This view of formula unification can be generalized to sets with a finite cardinality k of equivalences, since for all i ∈{1,…,k} it holds that L_i ≡ R_i can be expressed as ⋀_i = 1^k(L_iR_i).An exact correspondence between solving a solution problem F[p_1… p_n] where F is a propositional formula with ,,¬,, as logic operators and fication with constants in the theory of Boolean algebra (with the mentioned logic operators as signature) applied to F =_E can be established: Unknowns p_1,…,p_n correspond to variables and propositional atoms in F correspond to constants.A particular solution G_1… G_n corresponds to a unifier {p_1 ← G_1, …, p_n ← G_n} that is a ground substitution.The restriction to ground substitutions is due to the requirement that unknowns do not occur in solutions. General solutions sec-reproductive are expressed with further special parameter atoms, different from the unknowns. These correspond to fresh variables in unifiers.A generalization of Boolean unification to predicate logic with various specific problems characterized by the involved formula classes has been investigated in <cit.>. The material presented hereis largely orthogonal to that work, but a technique from <cit.> has been adapted to more general cases in sec-ehw. §.§ View as Construction of Elimination Witnesses Another view on the solution problem is related to eliminating second-order quantifiers by replacing the quantified predicates with “witness formulas”. Let = p_1 … p_n be a sequence of distinct predicates.An -witness ofin a formula ∃F[] is defined as a sequenceof formulas such that F and ∃F[] ≡ F[].The condition ∃F[] ≡ F[] in this definition is equivalent to ¬ F[]F[]. If F[] and the consideredare first-order, then finding an -witness is second-order quantifier elimination on a first-order argument formula, restricted by the condition that the result is of the form F[]. Differently from the general case of second-order quantifier elimination on first-order arguments, the set of formulas for which elimination succeeds and, for a given formula, the set of its elimination results, are then recursively enumerable. Some well-known elimination methods yield -witnesses, for example rewriting a formula that matches the left side of Ackermann's Lemma (Prop. 
<ref>) with its right side, which becomes evident when considering that the right side F[G] is equivalent to ∀ x_1…∀ x_p(GG)F[G]. Finding particular solutions and finding -witnesses can be expressed in terms of each other: Let F[] be a and letbe a sequence of formulas. Then:prop-wit-ito-solis an -witness ofin ∃F if and only ifis a solution of the (¬ F[]F)[], whereis a sequence of fresh predicates matching .prop-sol-ito-witis a solution of F[] if and only ifis an -witnessofin∃F and it holds that ∃F. Assume F.(<ref>) Follows since ∃F[] ≡ F[] iff ∃F[]F[] iff F[]F[] iff ¬ F[]F[]. (<ref>) Left-To-Right: Follows since F[] implies ∃F[] and F[], which implies ∃F[] ≡≡ F[].Right-to-left: Follows since ∃F[] ≡ F[] and ∃F[] together imply F[]. §.§ View as Related to Definientia and InterpolantsThe following proposition shows a further view on the solution problem that relates it to definitions of the unknown predicates: A sequence = G_1… G_n of formulas is a particular solution of a F[ = p_1… p_n] if and only if F and ⋀_i=1^n (p_iG_i)F.Follows from the definition of particular solution and Prop. <ref>.In the special case where F[p] is a with a nullary unknown p, the characterization of a solution G according to Prop. <ref> can be expressed with an entailment where a definition of the unknown p appears on the right instead of the left side: If p is nullary, then ¬ (pG) ≡ p ¬ G. Thus, the statement pGF is for nullary p equivalent to¬ Fp ¬ G.The second condition of the characterization of solution according to Prop. <ref>, that is, GpF, holds if it is assumed that p is not in G, that G⊆F and that no member of F is bound by a quantifier occurrence in F. A solution is then characterized as negated definiens of p in the negation of F. Another way to express (<ref>) along with the condition that G is semantically independent from p is as follows:∃ p(¬ F ¬ p) G¬∃ p(¬ Fp).The second-order quantifiers upon the nullary p can be eliminated, yielding the following equivalent statement:¬ F[] G F[].Solutions G then appear as the formulas in a range, between ¬ F[] and F[]. This view is reflected in <cit.>, which goes back to work by Schröder. If F is first-order, then Craig interpolation can be applied to compute formulas G that also meet the requirements G⊆F and p ∉F to ensure GpF.Further connections to Craig interpolation are discussed in sec-construction.§ THE METHOD OF SUCCESSIVE ELIMINATIONS – ABSTRACTED §.§ Reducing n-ary to 1-ary Solution Problems The method of successive eliminations to solve an n-ary solution problem by reducing it to unary solution problems is attributed to Boole and has been formally described in a modern algebraic setting in <cit.>. It has been rediscovered in the context of Boolean unification in the late 1980s, notably with <cit.>. Rudeanu notes in <cit.> that variants described by several authors in the 19th century are discussed by Schröder <cit.>. To research and compare all variants up to now seems to be a major undertaking on its own. 
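Before turning to these methods, the range characterization of the solutions just discussed can be exercised mechanically in the propositional case: for a nullary unknown p, the solutions G of a 1-SP F[p] are, apart from the syntactic side conditions, exactly the formulas in the range ¬F[⊥] ⊨ G ⊨ F[⊤]. The following sympy sketch (the example formula F and all helper names are ours) compares this range test with the direct test ⊨ F[G] on a few candidates:

```python
# Checking the "range" characterization of solutions of a unary
# problem F[p] with nullary unknown p:  ~F[bot] |= G |= F[top].
from sympy import symbols, true, false
from sympy.logic.boolalg import And, Or, Not, Implies
from sympy.logic.inference import satisfiable

def valid(f): return satisfiable(Not(f)) is False
def entails(f, g): return valid(Implies(f, g))

a, b, p = symbols('a b p')
F = And(Implies(And(a, b), p), Implies(p, Or(a, b)))   # F[p]

lower, upper = Not(F.subs(p, false)), F.subs(p, true)  # ~F[bot], F[top]
for G in (a, b, And(a, b), Or(a, b), false, true):
    in_range = entails(lower, G) and entails(G, upper)
    is_solution = valid(F.subs(p, G))
    assert in_range == is_solution
    print(G, is_solution)
```

Here a, b, a ∧ b and a ∨ b pass both tests, while ⊤ and ⊥ fail both, as expected.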
Our aim is here to provide a foundation to derive and analyze related methods.The following proposition formally states the core property underlying the method in a way that, compared to the Boolean algebra version in <cit.>, is more abstract in several aspects: Second-order quantification upon predicates that represent unknowns plays the role of meta-level shorthands that encode expansions; no commitment to a particular formula class is made, thus the proposition applies to second-order formulas with first-order and propositional formulas as special cases; it is not specified how solutions of the arising unary solution problems are constructed; and it is not specified how intermediate second-order formulas (that occur also for inputs without second-order quantifiers) are handled. The algorithm descriptions in the following subsections show different possibilities to instantiate these abstracted aspects.Let F[ = p_1… p_n] be a and let = G_1… G_n be a sequence of formulas. Then the following statements are * is a solution of F[].*For i ∈{1, …, n}: G_i is a solution of the(∃ p_i+1…∃ p_nF[G_1… G_i-1 p_i … p_n])[p_i]such that G_i∩ = ∅.Left-to-right: From <ref> it follows that F[]. Hence, for all i ∈{1,…,n} by Prop. <ref> it follows that∃ p_i+1…∃ p_nF[G_1… G_i p_i+1… p_n].From <ref> it also follows that F. This implies that for all i ∈{1,…,n} it holds thatG_ip_i∃ p_i+1…∃ p_nF[G_1… G_i-1 p_i… p_n] and G_i∩ = ∅.We thus have derived for all i ∈{1,…,n} the two properties that characterize G_i as a solution of the as statedin <ref>.Right-to-left: From <ref> it follows that G_n is a solution of the(F[G_1… G_n-1 p_n])[p_n].Hence, by the characteristics of solution it follows that F[G_1… G_n].The property F can be derived from ∩ = ∅ and the fact that for all i ∈{1,…,n} it holds that G_ip_i(∃ p_i+1…∃ p_nF[G_1… G_i-1 p_i … p_n]). The properties F[G_1… G_n] and F characterizeas a solution of the F[].This proposition states an equivalence between the solutions of an n-ary and the solutions of n .These are on formulas with an existential second-order prefix. The following gives an example of this decomposition: [Reducing an n-ary Solution Problem to Unary Solution Problems] Consider the F[p_1p_2] of Examp. <ref>.The with unknown p_1 according to Prop. <ref> is(∃ p_2F[p_1p_2])[p_1],whose formula is, by second-order quantifier elimination, equivalent to ∀𝑥(a(𝑥) b(𝑥)) ∀𝑥(p_1(𝑥)b(𝑥)). Take a(x_1) as solution G_1 of that .The with unknown p_2 according to Prop. <ref> is (F[G_1p_2])[p_2].Its formula is then, by replacing p_1in F as specified in Examp. <ref> with a and removing the duplicate conjunct obtained then, equivalent to∀x(a(𝑥) b(𝑥))(∀x(a(𝑥) p_2(𝑥))∀x(p_2(𝑥) b(𝑥))).A solution of that second is, for example, b(𝑥_1), yielding the pair a(𝑥_1)b(𝑥_1) as solution of the originally considered F[p_1p_2]. §.§ Solving on the Basis of Second-Order Formulas The following algorithm to compute particular solutions is an immediate transfer of Prop. <ref>. 
Actually, it is more an “algorithm template”, since it is parameterized with a method to compute and covers a nondeterministic as well as a deterministic variant: []Letbe a class of formulas and letbe a nondeterministic or a deterministic algorithm that outputs for of the form (∃ p_1…∃ p_n F[p])[p] with F ∈ solutions G such that G∩{p_1,…,p_n} = ∅ and F[G] ∈.A F[p_1… p_n], where F ∈, that has a solution.For i := 1 to n do: Assign to G_i an output ofapplied to the (∃ p_i+1…∃ p_nF[G_1… G_i-1 p_i … p_n])[p_i].The sequence G_1… G_n of formulas, which is a particular solution of F[p_1… p_n].The solution components G_i are successively assigned to some solution of the given in Prop. <ref>, on the basis of the previously assigned components G_1… G_i-1.Even if the formula F of the input problem does not involve second-order quantification, these are on second-order formulas with an existential prefix ∃ p_i+1…∃ p_n upon the yet “unprocessed” unknowns.The algorithm comes in a nondeterministic and a deterministic variant, just depending on whetheris instantiated by a nondeterministic or a deterministic algorithm. Thus, in the nondeterministic variant the nondeterminism ofis the only source of nondeterminism.With Prop. <ref> it can be verified that if a nondeterministicis “complete” in the sense that for each solution there is an execution path that leads to the output of that solution, then alsobased on it enjoys that property, with respect to the n-ary solutions G_1… G_n.For the deterministic variant, from Prop. <ref> it follows that ifis “complete” in the sense that it outputs some solution whenever a solution exists, then, given that F[p_1… p_n] has a solution, which is ensured by the specification of the input, alsooutputs some solution G_1… G_n.This method appliesto existential second-order formulas, which prompts some issues for future research: As indicated in Sect. <ref> (and elaborated in sec-construction) Craig interpolation can in certain cases be applied to compute solutions of . Can QBF solvers, perhaps those that encode QBF into predicate logic <cit.>, be utilized to compute Craig interpolants?Can it be useful to allow second-order quantifiers in solution formulas because they make these smaller and can be passed between different calls to ?As shown in sec-reproductive, ifis a method that outputs so-called reproductive solutions, that is, most general solutions that represent all particular solutions, then alsooutputs reproductive solutions. Thus, there are two ways to obtain representations of all particular solutions whose comparison might be potentially interesting: A deterministic method that outputs a single reproductive solution and the nondeterministic method with an execution path to each particular solution.§.§ Solving with the Method of Successive Eliminations The method of successive eliminations in a narrower sense is applied in a Boolean algebra setting that corresponds to propositional logic and outputs reproductive solutions. The consideration of reproductive solutions belongs to the classical material on Boolean reasoning <cit.> and is modeled in the present framework in sec-reproductive. Compared to , the method handles the second-order quantification by eliminating quantifiers one-by-one, inside-out, with a specific method and applies a specific method to solve , which actually yields reproductive solutions. 
These incorporated methods apply to propositional input formulas (and to first-order input formulas if the unknowns are nullary). Second-order quantifiers are eliminated by rewriting with the equivalence ∃p F[p] ≡ F[⊤] ∨ F[⊥]. As solution of a 1-SP F[p] the formula (¬F[⊥] ∨ t) ∧ (F[⊤] ∨ ¬t) is taken, where t is a fresh nullary predicate that is considered specially. The intuition is that particular solutions are obtained by replacing t with arbitrary formulas in which p does not occur (see sec-reproductive for a more in-depth discussion). The following algorithm is an iterative presentation of the method of successive eliminations, also called Boole's method, in the variant due to <cit.>. The presentation in <cit.>, where apparently minor corrections compared to <cit.> have been made, has been taken here as technical basis. We stay in the validity-based setting, whereas <cit.> use the unsatisfiability-based setting. Also differently from <cit.> we do not make use of the xor operator.
[SOLVE-SUCC-ELIM]
Input: An SP F[p_1… p_n], where F is propositional, that has a solution, and a sequence t_1… t_n of fresh nullary predicates.
* Initialize F_n[p_1… p_n] with F.
* For i := n to 1 do: Assign to F_i-1[p_1… p_i-1] the formula F_i[p_1… p_i-1 ⊤] ∨ F_i[p_1… p_i-1 ⊥].
* For i := 1 to n do: Assign to G_i the formula (¬F_i[G_1… G_i-1 ⊥] ∨ t_i) ∧ (F_i[G_1… G_i-1 ⊤] ∨ ¬t_i).
Output: The sequence G_1… G_n of formulas, which is a reproductive solution of F[p_1… p_n] with respect to the special predicates t_1… t_n.
The formula assigned to F_i-1 in step (<ref>.) is the result of eliminating ∃p_i in ∃p_i F_i[p_1… p_i], and the formula assigned to G_i in step (<ref>.) is the reproductive solution of the 1-SP (F_i[G_1… G_i-1 p_i])[p_i], obtained with the respective incorporated methods indicated above. The recursion in the presentations of <cit.> is translated here into two iterations that proceed in opposite directions: First, existential quantifiers of ∃p_1 …∃p_n F are eliminated inside-out and the intermediate results, which do not involve second-order quantifiers, are stored. Solutions of 1-SPs are computed in the second phase on the basis of the stored formulas. In this presentation it is easy to identify two "hooks" where it is possible to plug-in alternate methods that produce other outputs or apply to further formula classes: In step (<ref>.) the elimination method and in step (<ref>.) the method to determine solutions of 1-SPs. If the plugged-in method to compute 1-SP solutions outputs particular solutions, then SOLVE-SUCC-ELIM computes particular instead of reproductive solutions.
§.§ Solving by Inside-Out Witness Construction
Like SOLVE-SUCC-ELIM, the following algorithm eliminates second-order quantifiers one-by-one, inside-out, avoiding intermediate formulas with existential second-order prefixes of length greater than 1, which arise with SOLVE-ON-SECOND-ORDER. In contrast to SOLVE-SUCC-ELIM, it performs elimination by the computation of ELIM-witnesses.
[SOLVE-BY-WITNESS]
Let ℱ be a class of formulas and 𝒲 be an algorithm that computes for formulas F ∈ ℱ and predicates p an ELIM-witness G of p in ∃p F[p] such that F[G] ∈ ℱ.
Input: An SP F[p_1… p_n], where F ∈ ℱ, that has a solution.
For i := n to 1 do:
* Assign to G_i[p_1… p_i-1] the output of 𝒲 applied to ∃p_i F[p_1… p_i G_i+1… G_n].
* For j := n to i+1 do: Re-assign to G_j[p_1… p_i-1] the formula G_j[p_1… p_i-1 G_i].
Output: The sequence G_1… G_n of formulas, which provides a particular solution of F[p_1… p_n].
Step (<ref>.) in the algorithm expresses that a new value is assigned to G_j and that G_j can be designated by G_j[p_1… p_i-1], justified because the new value does not contain free occurrences of p_i, …, p_n. In step (<ref>.)
the respective current values of G_i+1… G_n are used to instantiate F. It is not hard to see from the specification of the algorithm that for input F[] and outputit holds that ∃F ≡ F[] and that F. By Prop. <ref>,is then a solution if ∃F. This holds indeed if F[] has a solution, as shown below with Prop. <ref>.Ifis “complete” in the sense that it computes an elimination witness for all input formulas in , thenoutputs a solution. Whether all solutions of the input can be obtained as outputs for different execution paths of a nondeterministic version ofobtained through a nondeterministic , in analogy to the nondeterministic variant of , appears to be an open problem.§ EXISTENCE OF SOLUTIONS §.§ Conditions for the Existence of Solutions We now turn to the question under which conditions there exists a solution of a given , or, in the terminology of <cit.>, the is consistent.A necessary condition is easy to see: If a F[] has a solution, then it holds that ∃F. Follows from the definition of particular solution and Prop. <ref>. Under certain presumptions that hold for propositional logic this condition is also sufficient.To express these abstractly we use the following concept:A formula classis called SOL-witnessed for a predicate classif and only if for all p ∈ and F[p] ∈ the following statements are equivalent: * ∃ pF.* There exists a solution G of the F[p] such that F[G] ∈. Since the right-to-left direction of that equivalence holds in general, the left-to-right direction alone would provide an alternate characterization. The class of propositional formulas is SOL-witnessed (for the class of nullary predicates). This follows since in propositional logic it holds that ∃ pF[p] ≡ F[F[]],which can be derived in the following steps:F[F[]]≡ ∃ p(F[p](pF[]))≡(F[]( F[]))(F[]( F[]))≡F[]F[]≡ ∃ pF[p]. The following definition adds closedness under existential second-order quantification and dropping of void second-order quantification to the notion of SOL-witnessed, to allow the application on matching with item (b) in Prop. <ref>: A formula classis called MSE-SOL-witnessed for a predicate classif and only if it is SOL-witnessed for , for all predicates in p ∈ and F ∈ it holds that ∃ pF ∈, and, if ∃ pF ∈ and p ∉F, then F ∈.The class of existential QBFs (formulas of the form ∃F where F is propositional) is MSE-SOL-witnessed (like the more general class of QBFs – second-order formulas with only nullary predicates).Another example is the class of first-order formulas extended by second-order quantification upon nullary predicates, which is MSE-SOL-witnessed for the class of nullary predicates. The following proposition can be seen as expressing an invariant of the method of successive eliminations that holds for formulas in an MSE-SOL-witnessed class: Letbe a formula class that is MSE-SOL-witnessed for predicate class . Let F[ = p_1… p_n] ∈ with ∈^n. If ∃F[], then for all i ∈{0,…, n} there exists a sequence G_1… G_i of formulas such that G_1… G_i∩ = ∅, G_1… G_ip_1… p_iF, ∃ p_i+1…∃ p_n F[G_1… G_ip_i+1… p_n] andBy induction on the length i of the sequence G_1… G_i. The conclusion of the proposition holds for the base case i=0: The statement F holds trivially, ∃F is given as precondition, and ∃F ∈ follows from F ∈. For the induction step, assume that the conclusion of the proposition holds for some i ∈{0, … n-1}. That is, there exists a sequence G_1… G_i of formulas such that G_1… G_i∩ = ∅, G_1… G_ip_1… p_iF, ∃ p_i+1…∃ p_nF[G_1… G_ip_i+1… p_n] and ∃ p_i+1…∃ p_nF[G_1… G_ip_i+1… p_n] ∈. 
Sinceis MSE-SOL-witnessed forand p_1, …, p_i∈ it follows that there exists a solution G_i+1 of the(∃ p_1 …∃ p_i ∃ p_i+2…∃ p_n F[G_1… G_ip_i+1… p_n])[p_i+1]such that ∃ p_1 …∃ p_i ∃ p_i+2…∃ p_n F[G_1… G_i+1p_i+2… p_n] ∈. From the characteristics of solution it follows thatG_i+1p_i+1∃ p_1 …∃ p_i ∃ p_i+2…∃ p_n F[G_1… G_ip_i+1… p_n],which implies (since all members ofwith exception of p_i+1 are in the quantifier prefix of the problem formula) that G_i+1∩ = ∅, henceG_1… G_i+1∩ = ∅.Given the induction hypothesis G_1… G_ip_1… p_iF, it also impliesG_1… G_i+1p_1… p_i+1F.From the characteristics of solution it follows in addition that∃ p_1 …∃ p_i ∃ p_i+2…∃ p_nF[G_1… G_i+1p_i+2… p_n],which, since G_1… G_i+1∩ = ∅, is equivalent to∃ p_i+2…∃ p_nF[G_1… G_i+1p_i+2… p_n].Finally, we conclude from ∃ p_1 …∃ p_i ∃ p_i+2…∃ p_nF[G_1… G_i+1p_i+2… p_n] ∈, established above, and the definition of MSE-SOL-witnessed that∃ p_i+2…∃ p_nF[G_1… G_i+1p_i+2… p_n] ∈,which completes the proof of the induction step.A sufficient and necessary condition for the existence of a solution of formulas in MSE-SOL-witnessed classes now follows from Prop. <ref> and Prop. <ref>: Letbe a formula class that is MSE-SOL-witnessed on predicate class .Then for all F[] ∈ where the members ofare inthe following statements are equivalent: * ∃F.*There exists a solutionof the F[] such that F[] ∈.Follows from Prop. <ref> and Prop. <ref>.From that proposition it is easy to see that for with propositional formulas the complexity of determining the existence of a solution is the same as the complexity of deciding validity of existential QBFs, as proven in <cit.>, that is, Π^P_2-completeness: By Prop. <ref>, a F[] where F is propositional has a solution if and only if the existential QBF ∃F[] is valid and, vice versa, an arbitrary existential QBF ∃F[] (where F is quantifier-free) is valid if and only if the F[] has a solution.§.§ Characterization of SOL-Witnessed in Terms of -Witness The following proposition shows that under a minor syntactic precondition on formula classes, SOL-witnessed can also be characterized in terms of -witness instead of solution as in Def. <ref>: Letbe a class of formulas that satisfies the following properties: For all F[p] ∈ and predicates q with the same arity of p it holds that F[p] ¬ F[q] ∈, and for all FG ∈ it holds that F ∈.The classis SOL-witnessed for a predicate classif and only if for all p ∈ and F[p] ∈ there exists an -witness G of p in F[p] such that F[G] ∈. Left-to-right: Assume thatis meets the specified closedness conditions and is SOL-witnessed for , p ∈ and F[p] ∈. Let q be a fresh predicate with the arity of p.The obviously true statement ∃ pF[p] ¬∃ pF[p] is equivalent to ∃ pF[p] ¬ F[q] and thus to ∃ p(F[p] ¬ F[q]). By the closedness properties ofit holds that F[p] ¬ F[q] ∈. Sinceis SOL-witnessed forit thus follows from Def. <ref> that there exists a solution G of the (F[p] ¬ F[q])[p] such that (F[G] ¬ F[q]) ∈, and, by the closedness properties, also F[G] ∈.From the definition of solution it follows that F[G] ¬ F[q], which is equivalent to ∃ pF[p] ≡ F[G], and also that GpF[G] ¬ F[q], which implies GpF[G]. Thus G is an SO-witness of p in F[p] such that F[G] ∈. Right-to-left: Easy to see from Prop. 
<ref>.§.§ The Elimination Result as Precondition of Solution Existence Proposition <ref> makes an interesting relationship between the existence of a solution and second-order quantifier elimination apparent that has been pointed out by Schröder <cit.> and Behmann <cit.>, and is briefly reflected in <cit.>: The formula ∃F is valid if and only if the result of eliminating the existential second-order prefix (called Resultante by Schröder <cit.>) is valid. If it is not valid, then, by Prop. <ref>, the F[] has no solution, however, in that case the elimination result represents the unique (modulo equivalence) weakest precondition under which the would have a solution.The following proposition shows a way to make this precise:Letbe a formula class and letbe a predicate class such thatis MSE-SOL-witnessed on . Let F[] be a solution problem where F ∈ and all members ofare in .Let A be a formula such that (AF) ∈, A ≡∃F, and no member ofdoes occur in A.Thenprop-wps-one The (AF)[] has a solution.prop-wps-mid If B is a formula such that (BF) ∈, no member ofoccurs in B, and the (BF)[] has a solution, then BA.(<ref>) From the specification of A it follows that A ∃ F and thus ∃(AF).Hence, by Prop. <ref>, the (AF)[] has a solution. (<ref>) Let B be a formula such that the left side of holds. With Prop. <ref> it follows that B ∃ F. Hence B ∃ F.Hence BA.The following example illustrates Prop. <ref>: [Elimination Result as Precondition for Solvability]Consider the F[p_1p_2] where[ F = ∀x(p_1(𝑥) p_2(𝑥)) ∀x(a(𝑥) p_2(𝑥))∀x(p_2(𝑥) b(𝑥)). ]Its formula is the consequent of the considered in Examp. <ref>. Since ∃ p_1 ∃ p_2F ≡∀x(a(𝑥) b(𝑥)) ≢, from Prop. <ref> it follows that F[p_1p_2] has no solution.If, however, the elimination result ∀x(a(𝑥) b(𝑥)) is added as an antecedent to F, then the resulting , which is the of Examp. <ref>, has a solution. § REPRODUCTIVE SOLUTIONS AS MOST GENERAL SOLUTIONSTraditionally, concise representations of all particular solutions have been central to investigations of the solution problem.This section presents adaptions of classic material to this end, due in particular to Schröder and Löwenheim, and presented in a modern algebraic formalization by Rudeanu <cit.>. The idea is that a general solution [] has parameter predicatessuch that each instantiation G[] with a sequence of formulas is a particular solution and that for all particular solutionsthere exists a sequenceof formulas such that ≡[]. In this way, a general solution represents all solutions. A remaining difficulty is to determine for a given particular solutionthe associated . This is remedied with so-called reproductive solutions, for whichitself can be taken as , that is, it holds that [] ≡. We give formal adaptions in the framework of predicate logic that center around the notion of reproductive solution.This includes precise specifications of reproductive solution and two further auxiliary types of solution.A technique to construct a reproductive solution from a given particular solution, known as Schröder's rigorous solution or Löwenheim's theorem and a construction of reproductive solutions due to Schröder, which succeeds on propositional formulas in general, is adapted. Finally, a way to express reproductive solutions of n-ary in terms of reproductive solutions of in the manner of the method of successive eliminations is shown. 
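In the propositional case the existence criterion and the weakest-precondition reading of the elimination result can be checked directly: eliminating a nullary unknown p by ∃p F ≡ F[⊤] ∨ F[⊥] yields the Resultante, whose validity decides solvability, and adding it as an antecedent restores solvability. The following sympy sketch mirrors the example above in propositional form (the helper names valid and exists and the concrete F are ours):

```python
# Existence of solutions via second-order quantifier elimination,
# nullary case: Exists p. F  ==  F[p := top] | F[p := bot].
from sympy import symbols, true, false
from sympy.logic.boolalg import And, Or, Not, Implies, simplify_logic
from sympy.logic.inference import satisfiable

def valid(f): return satisfiable(Not(f)) is False
def exists(p, f): return Or(f.subs(p, true), f.subs(p, false))

a, b, p1, p2 = symbols('a b p1 p2')
F = And(Implies(p1, p2), Implies(a, p2), Implies(p2, b))

A = simplify_logic(exists(p2, exists(p1, F)))
print(A)          # the Resultante, equivalent to a -> b
print(valid(A))   # False: hence F[p1 p2] has no solution
# but A -> F has a solution: the Resultante is the weakest precondition
print(valid(exists(p2, exists(p1, Implies(A, F)))))   # True
```

The last line reports that ∃p̄(A → F) is valid, so by the existence criterion the SP with A added as antecedent is solvable, in accordance with the proposition above.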
Parametric, General and Reproductive Solutions The following definitions give adaptions of the notions of parametric, general and reproductive solution for predicate logic, based on the modern algebraic notions in <cit.> as starting point. A parametric solution problem () F[] is a pair of a solution problem F[] and a sequenceof distinct predicates such that (F∪) ∩ = ∅. The members ofare called the solution parameters of the . If the sequences of predicatesandare matching, then the is called a reproductive solution problem ().A with arity 1 is also called unary reproductive solution problem (). Define the following notions:def-sol-parametric A parametric solution of a F[] is a sequence [] of formulas such that , F and for all sequences of formulassuch thatand F it holds that if there exists a sequenceof formulas such that , []F and≡[],thenF[]. def-sol-general A general solution of a F[] is a sequence [] of formulas such that the characterization of parametric solution (Def. <ref>) applies, with the if-then implication supplemented by its converse. def-sol-reproductive A reproductive solution of a F[] is a sequence [] of formulas such that * is a parametric solution of F[] and*For all sequencesof formulas such thatandF it holds that ifF[],then≡[]. Parametric solution can be characterized more concisely than in Def. <ref>, but not showing the syntactic correspondence to the characterization of general solution in Def. <ref>: A parametric solution of a F[] is a sequence G[] of formulas such that , F and for all sequencesof formulas such that , []F it holds thatF[[]]. The left side of the proposition can be expressed as:(1) ,(2) F,3land for all sequences H, T of formulas it holds thatif (3) , (III)(4) F,(5) ,(6) []F (7) ≡[],then (8) F[].The right side of the proposition can be expressed as:(9) ,(10) F,3land for all sequences T of formulas it holds thatif (11)(12) []F,then (13) F[[]].Left-to-right: If = [], then ≡[].Thus, this direction of the proposition follows if statements (9)–(12) imply (1)–(6), withinstantiated to [].Statements (1), (2), (5) and (6) are (9), (10), (11) and (12), respectively.The instantiation of (3), that is, [], follows from (10) and (11). The instantiation of (4) is []F, which is, like (6), identical to (12). Right-to-left: Statements (1)–(7) imply (9)–(12). This holds since (1), (2), (5) and (6) are (9), (10), (11) and (12), respectively. Hence, assuming the right side of the proposition, statements (1)–(7) then imply (13), that is, F[[]].Statement (13), (7) and (6) imply (8), that is F[], which concludes the proof. The essential relationships between particular, parametric, general and reproductive solutions, as well as an alternate characterization of reproductive solution implied by these, are gathered in the following proposition:Let = [] be a sequence of formulas. Then:prop-para-is-sol-bothis a parametric solution of the F[] if and only ifandis a particular solution of the F[].prop-par-all-sol Ifis a parametric solution of the F[] andis sequence of formulas such that , []F, then [] is a particular solution of the F[].prop-general-is-para A general solution of a is also a parametric solution of that .prop-particular-to-general Ifis a general solution of the F[] andis a particular solution of the F[] such that , then there exists a sequence of formulas such that , []F and≡[]. 
prop-repro-is-general A reproductive solution of a is also a general solution of that .prop-par-halfrepro Ifis a parametric solution of the F[], then for all sequencesof formulas such thatand F it holds that if≡[],thenF[].prop-repro-bidiris a reproductive solution of the F[] if and only if *is a parametric solution of F[] and* For all sequencesof formulas such thatandF it holds thatF[]if and only if≡[].Before we come to the proof of Prop. <ref>, let us observe that the conclusion of Prop. <ref> is item (<ref>.) of the definiens of reproductive solution (Def. <ref>) after replacing the if-then implication there by its converse, and that Prop. <ref> characterizes reproductive solution like its definition (Def. <ref>), except that the definiens is strengthened by turning the if-then implication in item (<ref>.)into an equivalence.(<ref>)Left-to-right: Letbe a sequence of fresh predicates that matchesand assume that [] is a parametric solution of F[].Hence F and F[[]], which implies F[].Thusis a particular solution of F[]. Note that this direction of the proposition requires the availability of fresh predicates in the vocabulary. Right-to-left: Can be derived in the following steps explained below:[ (1) []is a particular solution ofF[].; (2); (3) F; (4)F[].; (5) .; (6)[]F.; (7)F[].; (8) ∀F[].; (9)F[[]].;(10)is a parametric solution of F[]. ]Step (1) and (2), whereis some sequence of distinct predicates such that (F∪) ∩ = ∅, form the left side of the proposition.Steps (3) and (4) follow from (1) and the characteristics of particular solution.Letbe a sequence of formulas such that (5) and (6) hold, conditions on the left side of Prop. <ref>. Step (7) follows from (5) and (6). Step (8) follows from (4).Step (9) follows from (7) and (8) by Prop. <ref>. Finally, step (10), the right side of the proposition, follows from Prop. <ref> with (9), (2) and (3).(<ref>) The left side of the proposition includes []F and, by Prop. <ref>, implies F[[]], from which the right side follows.(<ref>) Immediate from the definition of generalsolution (Def. <ref>).(<ref>) The left side of the proposition implies ,F and F[]. The right side then follows from the definition of general solution (Def. <ref>).(<ref>) By definition, a reproductive solution is also a parametric solution.Letbe a reproductive solution of F[].Let COND stand for the following conjunction of three statements:, F 𝑎𝑛𝑑F[].From the definition of reproductive solution it immediately follows that for all sequencesof formulas such that COND it holds that ≡[]. From this it follows that for all sequencesof formulas such that COND it holds that , []F and ≡[], which can be derived as follows: The first of the statements on the right, , is included directly in the left side, that is, COND.The second one, []F, follows fromand F that are in COND together with F, which holds sinceis a parametric solution. The above implication also holds ifon its right side is replaced by a supposedly existing . It then forms the remaining requirement to show thatis a general solution: For all sequencesof formulas such that COND there exists a sequence of formulas such that , []F and ≡[]. (<ref>)Can be shown in the following steps, explained below:[ (1)F.; (2) .; (3)F.; (4)≡[].; (5)[]F.; (6) F[[]] .; (7) F[] .; ]Assume thatis a parametric solution of the F[], which implies (1).Letbe a sequence of formulas such that (2) and (3), the preconditions of the converse of (as well as the unmodified) item (<ref>.)in the definition of reproductive solution (Def. 
<ref>), hold.Further assume (4), the right side of item (<ref>.).We prove the proposition by deriving the left side of item (<ref>.).Step (5) follows from (1), (2) and (3).Step (6) follows from (2) and (5) by Prop. <ref> sinceis a parametric solution.Finally, step (7), the left side of item (<ref>.), follows from (6) and (4) with (3) and (5). (<ref>) Follows from Prop. <ref>.Rudeanu <cit.> notes that the concept of reproductive solution seems to have been introduced by Schröder <cit.>, while the term reproductive is due to Löwenheim <cit.>.Schröder calls the additional requirement that a reproductive solution must satisfy in comparison with general solution Adventivforderung (adventitious requirement) and discusses it at length in <cit.>, describing it with reproduzirt.<cit.>.The Rigorous SolutionFrom any given particular solution , a reproductive solution can be constructed, called here, following Schröder's terminology <cit.>, the rigorous solution associated with . In the framework of Boolean algebra, the analogous construction is <cit.>.Let F[] = t_1… t_n be a .For i ∈{1,…, n} let _i stand for x_1… x_t_i.Assume F∩ = ∅, t_1(_1)… t_n(_n)F and F[]F. If = G_1 … G_n is a particular solution of that , then the sequence = R_1… R_n of formulas defined as follows is a reproductive solution of that :R_iis the clean variant of(G_i() ¬ F[]) (t_i(_i)F[]). In the specification of R_i the formula G_i is written as G_i() to indicate that members ofmay occur there literally without being replaced.In the unsatisfiability-based setting, the R_i would be characterized as the clean variant of(G_i()F[])(t_i(_i) ¬ F[]). The proof of this proposition is based on the following lemma, a predicatelogic analog to <cit.> for the special case n=1,which is sufficient to prove Prop. <ref>: The effect of thelemma for arbitrary n is achieved by an application ofProp. <ref> within an induction. Let p be a predicate (with arbitrary arity ≥ 0), let F[p] be a formula and let V, W, A be formulas such that VpF, WpF,ApF and, in addition, A∩= ∅.It then holds thatF[(AV)(¬ AW)] ≡ (AF[V])(¬ AF[W]).Assume the preconditions of the proposition.It follows that (AV)(¬ AW)pF.Making use of Prop. <ref>, the conclusion of the proposition can be then be shown in the following steps:[;≡ ∃ p(F[p](p((AV)(¬ AW))));≡ (A ∃ p(F[p](p((AV)(¬ AW))))); (¬ A ∃ p (F[p](p((AV)(¬ AW)))));≡ (A ∃ p(F[p](p(( V)( W)))));(¬ A ∃ p(F[p](p(( V)( W)))));≡ (A ∃ p(F[p](pV)))(¬ A ∃ p(F[p](pW)));≡. ] The preconditions in Prop. <ref> permit that x_1,…, x_p may occur free in V and W, whereas in A no member ofis allowed to occur free. We are now ready to prove Prop. <ref>: By item (<ref>.) of the definition of reproductive solution (Def. <ref>), [] = R_1[]… R_n[] is required to be a parametric solution for which by Prop. 
<ref> three properties have to be shown: The first one, that each member of the sequence is clean, is immediate since each member is the clean variant of some formula. The second one, substitutibility in F, is easy to derive from the preconditions and the definition of the R_i. The third one is an implication that can be shown in the following steps, explained below, writing T̄ for the sequence T_1 … T_n of substituted formulas, Ḡ for G_1 … G_n and R̄[T̄] for R_1[T̄] … R_n[T̄]:

(1) …
(2) R̄[T̄] … F.
(3) … F.
(4) F[Ḡ].
(5) ¬F[T̄] → (¬F[R̄[T̄]] ↔ ¬F[Ḡ]).
(6) F[T̄] → (¬F[R̄[T̄]] ↔ ¬F[T̄]).
(7) F[T̄] → F[R̄[T̄]].
(8) F[R̄[T̄]].

Let T̄ be a sequence of formulas such that statements (1) and (2), which are on the left side of the implication to show, do hold. We derive the right side of the implication, that is, F[R̄[T̄]]. Steps (3) and (4) hold since Ḡ is a particular solution. Steps (5) and (6) can be shown by induction based on the equivalences (9) and (10), respectively, below, which hold for all i ∈ {0,…,n−1} and follow from Prop. <ref>:

(9) ¬F[G_1 … G_i R_{i+1}[T̄] … R_n[T̄]]
≡ ¬F[G_1 … G_i ((G_{i+1} ∧ ¬F[T̄]) ∨ (T_{i+1} ∧ F[T̄])) R_{i+2}[T̄] … R_n[T̄]]
≡ (¬F[T̄] ∧ ¬F[G_1 … G_{i+1} R_{i+2}[T̄] … R_n[T̄]]) ∨ (F[T̄] ∧ ¬F[G_1 … G_i T_{i+1} R_{i+2}[T̄] … R_n[T̄]]).

(10) ¬F[T_1 … T_i R_{i+1}[T̄] … R_n[T̄]]
≡ ¬F[T_1 … T_i ((G_{i+1} ∧ ¬F[T̄]) ∨ (T_{i+1} ∧ F[T̄])) R_{i+2}[T̄] … R_n[T̄]]
≡ (¬F[T̄] ∧ ¬F[T_1 … T_i G_{i+1} R_{i+2}[T̄] … R_n[T̄]]) ∨ (F[T̄] ∧ F[T_1 … T_{i+1} R_{i+2}[T̄] … R_n[T̄]]).

The required preconditions of Prop. <ref> are justified there as follows, where F′ stands for F after the substitutions indicated in (9) or (10), that is, the formula matched with the left side of Prop. <ref>:

– G_{i+1} is substitutible for p_{i+1} in ¬F′: follows from (3).
– T_{i+1} is substitutible for p_{i+1} in ¬F′: follows from (1) and (2).
– F[T̄] is substitutible for p_{i+1} in ¬F′: follows from (1), (2) and the precondition that F[…] is substitutible in F.
– F[T̄] ∩ … = ∅: follows from (1), (2) and the precondition F ∩ … = ∅.

Step (7) follows from (6) and (5) and, finally, step (8) follows from (7) and (4).

Item (<ref>.) of the definition of reproductive solution follows since for all sequences H̄ of formulas such that … and … F (note that … is implied by …) it holds that if F[H̄], then H̄ ≡ R̄[H̄], or, equivalently, but more explicated, it holds for all i ∈ {1,…,n} that

R_i[H̄]
≡ (G_i[H̄] ∧ ¬F[H̄]) ∨ (H_i ∧ F[H̄])
≡ (G_i[H̄] ∧ ⊥) ∨ (H_i ∧ ⊤)
≡ H_i.

The algebraic version <cit.> is attributed there and in most of the later literature to Löwenheim <cit.>, thus known as Löwenheim's theorem for Boolean equations. However, at least the construction for unary problems appears to be in essence Schröder's rigorose Lösung <cit.>. (Löwenheim remarks in <cit.> that the rigorose Lösung can be derived as a special case of his theorem.) Behmann comments that Schröder's discussion of the rigorose Lösung starts only in a late chapter of Algebra der Logik, mainly for the reason that only then suitable notation was available <cit.>. Schröder <cit.> explains his term rigoros as an adaption of à la rigueur, that is, if need be, because he does not consider the rigorous solution a satisfying representation of all particular solutions. He notes that to detect all particular solutions on the basis of the rigorose Lösung, one would have to test all possible formulas T as parameter value. As remarked in <cit.>, Löwenheim's theorem has been rediscovered many times, for example in <cit.>.
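For the special case of a propositional, validity-based solution problem with a nullary unknown, the rigorous solution can be checked mechanically. The following Python sketch is our illustration only (the concrete problem F[p] = a → p and the particular solution G = ⊤ are arbitrary toy choices, not taken from the paper): it builds R(t) = (G ∧ ¬F[t]) ∨ (t ∧ F[t]) from the particular solution G and verifies by brute force over all Boolean functions t of the atom a that F[R(t)] is valid and that R is reproductive.

from itertools import product

def F(p, a):                 # toy problem: F[p] := a -> p (validity-based)
    return (not a) or p

G = lambda a: True           # a particular solution: F[G] is valid

def R(t):                    # rigorous solution built from G
    return lambda a: (G(a) and not F(t(a), a)) or (t(a) and F(t(a), a))

def valid(phi):              # validity = truth under all assignments of a
    return all(phi(a) for a in (False, True))

# all four Boolean functions of a serve as parameter values t
ts = [lambda a, v=v: v[a] for v in product((False, True), repeat=2)]
assert all(valid(lambda a, t=t: F(R(t)(a), a)) for t in ts)
for t in ts:                 # reproductive: if F[t] is valid, then R(t) == t
    if valid(lambda a, t=t: F(t(a), a)):
        assert all(R(t)(a) == t(a) for a in (False, True))
print("R is a reproductive solution of F on this toy example")

The brute-force check mirrors Schröder's observation quoted above: enumerating all parameter values t is exactly what detecting all particular solutions from the rigorous solution would require.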
Schröder's Reproductive Interpolant

For solution problems of the form ((A → p) ∧ (p → B))[p] with parameter t, the formula A ∨ (B ∧ t(x̄)), where x̄ = x_1 … x_p, is a reproductive solution. This construction has been shown by Schröder and is also discussed in <cit.>. For the notion of solution based on unsatisfiability instead of validity, the analogous construction applies to solution problems of the form ((A ∧ p) ∨ (B ∧ ¬p))[p] with parameter t and yields B ∨ (¬A ∧ t). We call the solution interpolant because with the validity-based notion of solution assumed here the unknown p, and thus also the solution, is "between" A and B, that is, implied by A and implying B. The following proposition makes the construction precise and shows its justification. The proposition is an adaption of <cit.>, where <cit.> is given as source.

Let (F = ∀ȳ(A(ȳ) → p(ȳ)) ∧ ∀ȳ(p(ȳ) → B(ȳ)))[p] with parameter t, where ȳ is a sequence, with the arity of p as length, of distinct individual symbols not in x̄, be a solution problem that has a solution. Let x̄ = x_1 … x_p. Assume that A(x̄), B(x̄) and t(x̄) are substitutible for p in F. Then the clean variant of the following formula is a reproductive solution of that solution problem:

A(x̄) ∨ (B(x̄) ∧ t(x̄)).

That p does not occur free in A or in B is ensured by the substitutibility preconditions on A and B. The symbols ȳ for the quantified variables indicate that these are independent from the special meaning of the symbols in x̄.

Assume the preconditions of the proposition and let G[t] stand for the clean variant of A(x̄) ∨ (B(x̄) ∧ t(x̄)). By item (<ref>.) of the definition of reproductive solution (Def. <ref>), G is required to be a parametric solution, for which by Prop. <ref> three properties have to be shown: The first one, that G is clean, is immediate since G is a clean variant of some formula. The second one, that G is substitutible for p in F, easily follows from the preconditions. The third one is an implication that can be shown in the following steps, explained below:

(1) A(x̄) ∨ (B(x̄) ∧ T(x̄)) is substitutible for p in F.
(2) ∃p(∀ȳ(A(ȳ) → p(ȳ)) ∧ ∀ȳ(p(ȳ) → B(ȳ))).
(3) ∀ȳ(A(ȳ) → B(ȳ)).
(4) ∀ȳ(A(ȳ) → (A(ȳ) ∨ (B(ȳ) ∧ T(ȳ)))) ∧ ∀ȳ((A(ȳ) ∨ (B(ȳ) ∧ T(ȳ))) → B(ȳ)).
(5) F[A(x̄) ∨ (B(x̄) ∧ T(x̄))].

Let T(x̄) be a formula such that statement (1), which is on the left side of the implication to show, does hold. We derive the right side of the implication, that is, F[A(x̄) ∨ (B(x̄) ∧ T(x̄))]: Step (2) follows with Prop. <ref> from the precondition that the considered solution problem has a solution. Step (3) follows from (2) by second-order quantifier elimination, for example with Ackermann's lemma (Prop. <ref>); the formulas in the two statements are equivalent. Step (4) follows from (3) by logic. Justified by (1), we can express (4) as (5), the right side of the implication to show.

Item (<ref>.) of the definition of reproductive solution follows since for all formulas H(x̄) that are substitutible for t in G and for p in F, it holds that F[H(x̄)] implies H(x̄) ≡ G[H(x̄)], which can be derived in the following steps:

F[H(x̄)]
→ ∀ȳ(A(ȳ) → H(ȳ)) ∧ ∀ȳ(H(ȳ) → B(ȳ))
→ ∀ȳ(H(ȳ) ↔ (A(ȳ) ∨ H(ȳ))) ∧ ∀ȳ(H(ȳ) ↔ (B(ȳ) ∧ H(ȳ)))
→ H(x̄) ≡ A(x̄) ∨ H(x̄) and H(x̄) ≡ B(x̄) ∧ H(x̄)
→ H(x̄) ≡ A(x̄) ∨ (B(x̄) ∧ H(x̄))
→ H(x̄) ≡ G[H(x̄)].

As shown by Schröder, the (clean variants of the) following two formulas are further reproductive solutions in the setting of Prop. <ref>:

B(x̄) ∧ (A(x̄) ∨ t(x̄)) and (A(x̄) ∨ t(x̄)) ∧ (B(x̄) ∨ ¬t(x̄)).

These two formulas and the solution according to Prop. <ref> are all equivalent under the assumption that a solution exists, that is, ∃p F, which, by second-order quantifier elimination, is equivalent to ∀ȳ(A(ȳ) → B(ȳ)). Any F[p] with parameter t where F is a propositional formula or, more generally, where the unknown p is nullary, can be brought into the form matching Prop. <ref> by systematically renaming bound symbols and rewriting F[p] with the equivalence

F[p] ≡ (¬F[⊥] → p) ∧ (p → F[⊤]).

For the notion of solution based on unsatisfiability, the required form can be obtained with the Shannon expansion

F[p] ≡ (F[⊤] ∧ p) ∨ (F[⊥] ∧ ¬p).
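In the propositional nullary case, the interval form and the resulting reproductive interpolant can likewise be tested exhaustively. The sketch below is our illustration only (the toy formula F with atoms a, b is an arbitrary choice): it sets A = ¬F[⊥] and B = F[⊤] as in the rewriting above and checks that G(t) = A ∨ (B ∧ t) is a reproductive solution whenever the solvability condition, the validity of A → B, holds.

from itertools import product

def F(p, a, b):              # arbitrary toy problem with atoms a, b
    return (a and b) or p or not b

assigns = list(product((False, True), repeat=2))
valid = lambda phi: all(phi(a, b) for a, b in assigns)

A = lambda a, b: not F(False, a, b)       # A = not F[bottom]
B = lambda a, b: F(True, a, b)            # B = F[top]
G = lambda t: (lambda a, b: A(a, b) or (B(a, b) and t(a, b)))

assert valid(lambda a, b: A(a, b) <= B(a, b))   # solution exists: A -> B valid
ts = [lambda a, b, v=v: v[2 * a + b] for v in product((False, True), repeat=4)]
assert all(valid(lambda a, b, t=t: F(G(t)(a, b), a, b)) for t in ts)
for t in ts:                 # reproductive: valid F[t] implies G(t) == t
    if valid(lambda a, b, t=t: F(t(a, b), a, b)):
        assert all(G(t)(a, b) == t(a, b) for a, b in assigns)
print("A v (B ^ t) is a reproductive solution on this toy example")

Here all sixteen Boolean functions of a, b serve as parameter values, so the reproductivity check is complete for this toy instance.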
From Unary to n-ary Reproductive Solutions

If the solution of a solution problem is composed as suggested by Prop. <ref> from reproductive solutions of unary solution problems, then it is itself a reproductive solution:

Let F[p_1 … p_n] with parameters t_1 … t_n be a solution problem. If Ḡ = G_1 … G_n is a sequence of formulas such that for all i ∈ {1,…,n} it holds that G_i is a reproductive solution of the solution problem

(∃p_{i+1} … ∃p_n F[G_1 … G_{i−1} p_i … p_n])[p_i] with parameter t_i

and G_i ∩ (… ∪ t_{i+1} … t_n) = ∅, then Ḡ is a reproductive solution of the considered solution problem F[…].

Assume the preconditions and the left side of the proposition. We show the two items of the definition of reproductive solution (Def. <ref>) for Ḡ. Item (<ref>.), that is, that Ḡ is a parametric solution of F[…], can be derived as follows: Each G_i, for i ∈ {1,…,n}, is a reproductive solution of the associated unary solution problem. Hence, by Prop. <ref> it is a general, hence parametric, hence particular solution. By Prop. <ref> it follows that Ḡ is a particular solution of F[…]. By Prop. <ref> it is then also a parametric solution of F[…].

Item (<ref>.) of the definition of reproductive solution can be shown as follows: First we note the following statement that was given as precondition:

(1) For i ∈ {1,…,n} it holds that G_i ∩ t_{i+1} … t_n = ∅.

For i ∈ {1,…,n} let F_i[p_i] stand for ∃p_{i+1} … ∃p_n F[G_1[…] … G_{i−1}[…] p_i … p_n], that is, F_i is the formula of the solution problem of which G_i is a reproductive solution. By the definition of reproductive solution and the left side of the proposition it holds for all formulas H_i that if

(2) H_i is substitutible for t_i in G_i, H_i is substitutible for p_i in F_i[p_i], and F_i[H_i],

then

(3) H_i ≡ G_i[t_1 … t_{i−1} H_i t_{i+1} … t_n].

From this and (1) it follows that for all sequences H_1 … H_i of formulas it holds that if

(4) H_i is substitutible for t_i in G_i, H_i is substitutible for p_i in F_i[p_i H_1 … H_{i−1} t_i … t_n], and F_i[H_i H_1 … H_{i−1} t_i … t_n],

then

(5) H_i ≡ G_i[H_1 … H_i t_{i+1} … t_n].

Now let H̄ = H_1 … H_n be a sequence of formulas such that

(6) H̄ is substitutible in F, and F[H̄].

We prove item (<ref>) of Def. <ref> by showing H̄ ≡ Ḡ[H̄], which is equivalent to the statement that for all i ∈ {1,…,n} it holds that H_i ≡ G_i[H̄], and, because of (1), to the statement that for all i ∈ {1,…,n} it holds that H_i ≡ G_i[H_1 … H_i t_{i+1} … t_n], which matches (5). We thus can prove H̄ ≡ Ḡ[H̄] by showing that (4), which implies (5), holds for all i ∈ {1,…,n}. The substitutivity conditions in (4) follow from the substitutivity conditions in (6). The remaining condition F_i[H_i H_1 … H_{i−1} t_i … t_n] can be proven by induction. As induction hypothesis assume that for all j ∈ {1,…,i−1} it holds that H_j ≡ G_j[H̄]. From F[H̄] in (6) it follows by Prop. <ref> that ∃p_{i+1} … ∃p_n F[H_1 … H_i p_{i+1} … p_n]. With the induction hypothesis it follows that ∃p_{i+1} … ∃p_n F[G_1[H̄] … G_{i−1}[H̄] H_i p_{i+1} … p_n], which, given the substitutivity conditions of (6) and the substitutibility of Ḡ in F, which holds since Ḡ is a parametric solution, can be expressed as F_i[H_i H_1 … H_{i−1} t_i … t_n], such that all conditions of (4) are satisfied and H_i ≡ G_i[H̄] can be concluded.

This suggests to compute reproductive solutions of propositional formulas for an n-ary solution problem by constructing Schröder interpolants for the unary subproblems. Since second-order quantifier elimination on propositional formulas succeeds in general, the construction of the Schröder interpolant can there be performed on the basis of formulas that are just propositional, without second-order quantifiers.

§ APPROACHING CONSTRUCTIVE SOLUTION TECHNIQUES

On the basis of first-order logic it seems that so far there is no general constructive method for the computation of solutions. We discuss various special cases where a construction is possible.
Some of these relate to applications of Craig interpolation. Recent work by Eberhard, Hetzl and Weller <cit.> shows a constructive method for quantifier-free first-order formulas. A generalization of their technique to relational monadic formulas is shown, which, however, produces solutions that would be acceptable only under a relaxed notion of substitutibility.

Background: Craig Interpolation, Definability and Independence

By Craig's interpolation theorem <cit.>, if F and G are first-order formulas such that F entails G, then there exists a Craig interpolant of F and G, that is, a first-order formula H whose vocabulary is contained in that of both F and G and such that F entails H and H entails G. Craig interpolants can be constructed from proofs of the entailment, as, for example, shown for tableaux in <cit.>. Lyndon's interpolation theorem strengthens Craig's theorem by considering in addition that predicates in the interpolant H occur only in polarities in which they occur in both side formulas, F and G. In fact, practical methods for the construction of interpolants from proofs typically compute such Craig-Lyndon interpolants.

One of the many applications of Craig interpolation is the construction of a definiens for a given predicate: Let F[p q_1 … q_k] be a first-order formula such that F ∩ x̄ = ∅ and p q_1 … q_k is a sequence of distinct predicates, and let x̄ stand for x_1 … x_p. Then p is definable in terms of the vocabulary of F without {q_1,…,q_k} within F, that is, there exists a first-order formula G whose vocabulary is contained in that of F without {p, q_1,…,q_k}, extended by x̄, such that F entails the equivalence of p and G, if and only if

∃p ∃q_1 … ∃q_k (F ∧ p(x̄)) entails ¬∃p ∃q_1 … ∃q_k (F ∧ ¬p(x̄)).

That entailment holds if and only if the following first-order formula is valid:

(F ∧ p(x̄)) → ¬(F[p′ q′_1 … q′_k] ∧ ¬p′(x̄)),

where p′ q′_1 … q′_k is a sequence of fresh predicates that matches p q_1 … q_k. The definientia G of p with the stated characteristics are exactly the Craig interpolants of the two sides of that implication. Substitutibility of G for p in F can be ensured by presupposing that no members of x̄ are bound by a quantifier occurrence in F.

Another application of Craig interpolation concerns the independence of formulas from given predicates: Second-order quantification allows to express that a formula F[…] is semantically independent from the set of the predicates in … as

∃… F ≡ F,

which is equivalent to the entailment of F by ∃… F, and thus, if … is a sequence of fresh predicates that matches the quantified ones, also equivalent to the entailment of F by F[…]. As observed in <cit.>, any interpolant of F[…] and F is then equivalent to F, but its free symbols do not contain members of the given set, that is, it is syntactically independent of it. Thus, for a given first-order formula, semantic independence from a set of predicates can be expressed as first-order validity and, if it holds, an equivalent formula that is also syntactically independent can be constructed by Craig interpolation. With Craig-Lyndon interpolation this technique can be generalized to take also polarity into account, based on an encoding of polarity-sensitive independence as shown here for negative polarity: That F[p] is independent from predicate p in negative polarity but may well depend on p in positive polarity can be expressed as

∃q(F[q] ∧ ∀x̄(q(x̄) → p(x̄))),

where x̄ = x_1 … x_p and q is a fresh predicate with the same arity as p.

Cases Related to Definability and Interpolation

The following list shows cases where for an n-ary solution problem F[p_1 … p_n] with first-order F and which has a solution a particular solution can be constructed. Each of the properties that characterize these cases is "semantic" in the sense that if it holds for F, then it also holds for any first-order formula equivalent to F.
In addition, each property is at least "semi-decidable", that is, the set of first-order formulas with the property is recursively enumerable. Actually, in the considered cases, for each property a first-order formula can be constructed from F that is valid if and only if F has the property. For two of the listed cases, (<ref>.) and (<ref>.), the characterizing property implies the existence of a solution.

* Each unknown occurs free in F only with a single polarity. A sequence of ⊤ and ⊥, depending on whether the respective unknown occurs positively or negatively, is then a solution. That F is semantically independent of unknowns in certain polarities, that is, is equivalent to a formula in which the unknowns do not occur in these polarities, can be expressed as first-order validity, and a corresponding formula that is syntactically independent can be constructed by Craig-Lyndon interpolation.

* Each unknown is definable in the formula. A sequence of definientia, which can be constructed with Craig interpolation, is then a solution. Rationale: Let G_1 … G_n be definientia of p_1 … p_n, respectively, in F. Under the assumption that there exists a solution of F[…] it holds that

F[…] entails ∃… F[…] ≡ ∃…(F[…] ∧ ⋀_{i=1}^{n} (p_i ↔ G_i)) ≡ F[G_1 … G_n].

* Each unknown is definable in the negated formula. The sequence of negated definientia, which can be constructed with Craig interpolation, is then a solution. Rationale: It holds in general that (p ↔ G) ≡ ¬(p ↔ ¬G). Hence, if G_1 … G_n are definientia of p_1 … p_n, respectively, in ¬F, then ¬F entails ⋀_{i=1}^{n} (p_i ↔ G_i), hence ⋁_{i=1}^{n} ¬(p_i ↔ G_i) entails F, where ⋁_{i=1}^{n} ¬(p_i ↔ G_i) ≡ ⋁_{i=1}^{n} (p_i ↔ ¬G_i). Thus

⋀_{i=1}^{n} (p_i ↔ ¬G_i) entails F,

matching the characterization of solution in Prop. <ref>.

* Each unknown is nullary. This specializes case (<ref>.): If a solution exists, then a nullary unknown is definable in the negated formula: For nullary predicates p it holds in general that (p ↔ ¬G) ≡ ¬(p ↔ G). Thus the entailment of F by (p ↔ ¬G) (which matches Prop. <ref>) holds if and only if ¬F entails (p ↔ G).

* Each unknown has a ground instance that is definable in the negated formula. The sequence of negated definientia is a solution. If p_1(t̄_1) … p_n(t̄_n) are the definable ground instances, then optionally in each solution component G_i, under the assumption that G_i is clean, each member t_{ij} of t̄_i = t_{i1} … t_{ip_i} can be replaced by x_j. The construction of the definientia can be performed with Craig interpolation, as described above for predicate definientia, except that an instance p(t̄) takes the place of p(x̄). The difficulty is to find suitable instantiations t̄_1 … t̄_n. A way to avoid guessing might be to let the formula whose proof serves as basis for interpolant extraction follow the schema

∃ȳ((F ∧ p(ȳ)) → ¬(F[p′] ∧ ¬p′(ȳ))),

where ȳ = y_1 … y_p, and take the instantiation of ȳ found by the prover. If the proof involves different instantiations of ȳ it has to be rejected. Rationale: Similar to case (<ref>.), since for ground atoms p(t̄) it holds in general that (p(t̄) ↔ G) ≡ ¬(p(t̄) ↔ ¬G).

These cases suggest to compute particular solutions based on Prop. <ref> by computing solutions of the unary solution problems for each unknown, where each unknown is inspected for matching the listed cases or other types of solvable cases, for example the forms required by Schröder's interpolant or by Ackermann's lemma. If that fails for an unknown, an attempt with the unknowns re-ordered is made. For propositional problems, an interpolating QBF solver would be a candidate to compute solutions.
Encodings of QBF into predicate logic, e.g., <cit.>, could possibly be applied for general first-order formulas with nullary unknowns.

The EHW-Combination of Witnesses for Disjuncts

Eberhard, Hetzl and Weller show in <cit.> that determining the existence of a Boolean unifier (or, in our terms, a particular solution) for quantifier-free predicate logic is Π^P_2-complete, as for propositional logic <cit.>. Their proof rests on the existence of an EXPTIME function wit from quantifier-free formulas to quantifier-free formulas such that ∃p F[p] ≡ F[wit(F[p])]. The specification of wit(F[p]) is presented there as a variant of the DLS algorithm <cit.> for second-order quantifier elimination: The input is converted to disjunctive normal form and a specialization of Ackermann's lemma is applied separately to each disjunct. The results for each disjunct are then combined in a specific way to yield the overall witness formula. The following proposition states a generalized variant of this technique that is applicable also to other classes of inputs, beyond the quantifier-free case.

Let F[p] = ⋁_{i=1}^{n} F_i be a formula and let G_1, …, G_n be formulas such that for i ∈ {1,…,n} it holds that G_i is substitutible for p in F_i and ∃p F_i[p] ≡ F_i[G_i]. Assume that there are no free occurrences of x̄ in F and, w.l.o.g., that no members of F ∪ x̄ are bound by a quantifier occurrence in F. Let

G(x̄) = ⋀_{i=1}^{n} (((⋀_{j=1}^{i−1} ¬F_j[G_j]) ∧ F_i[G_i]) → G_i(x̄)).

Then it holds that G is substitutible for p in F and ∃p F[p] ≡ F[G]. Formulas G and G_i are written as G(x̄) and G_i(x̄) where they occur as formula constituents instead of substituents, to emphasize that x̄ may occur free in them.

This proof is an adaption of the proof of Theorem 2 in <cit.>. We write here "I is a model of F" symbolically as I ⊨ F. That G is substitutible for p in F follows from the preconditions of the proposition and the construction of G. The right-to-left direction of the stated equivalence, that is,

⋁_{i=1}^{n} F_i[G] entails ∃p ⋁_{i=1}^{n} F_i[p],

then follows from Prop. <ref>. The left-to-right direction of the equivalence can be shown in the following steps, explained below.

(1) I ⊨ ∃p ⋁_{i=1}^{n} F_i[p].
(2) I ⊨ ⋁_{i=1}^{n} ∃p F_i[p].
(3) I ⊨ ⋁_{i=1}^{n} F_i[G_i].
(4) I ⊨ (⋀_{j=1}^{k−1} ¬F_j[G_j]) ∧ F_k[G_k].
(5) I ⊨ ∀x̄(G(x̄) ↔ G_k(x̄)).
(6) I ⊨ F_k[G].
(7) I ⊨ ⋁_{i=1}^{n} F_i[G].

Let I be an interpretation such that (1) holds. Step (2) is equivalent to (1). Assume the precondition of the proposition that for all i ∈ {1,…,n} it holds that ∃p F_i[p] ≡ F_i[G_i]. Step (3) follows from this and (1). By (3) there is a smallest member k of {1,…,n} such that I ⊨ F_k[G_k]. This implies (4). The left-to-right direction of the equivalence in (5) follows since if I ⊨ G(x̄), then by (4) and the definition of G(x̄) it is immediate that I ⊨ G_k(x̄). The right-to-left direction of the equivalence in (5) can be shown as follows: Assume I ⊨ G_k(x̄). Then I is a model of the kth conjunct of G(x̄) since G_k(x̄) is in the conclusion of that conjunct, and I is a model of each jth conjunct of G(x̄) with j ≠ k, because the antecedent of such a conjunct contradicts with (4). Step (6) follows from (4) and (5). Step (7) follows from (6).

The following proposition is another variant of the EHW-combination of witnesses for disjuncts. It can be proven similarly to Prop. <ref>.

Let F[p] be a formula and let G_1, …, G_n be formulas such that for i ∈ {1,…,n} it holds that G_i is substitutible for p in F and such that ∃p F ≡ ⋁_{i=1}^{n} F[G_i].
Assume that there are no free occurrences of {x_i | i ≥ 1} in F and, w.l.o.g., that no members of F ∪ x̄ are bound by a quantifier occurrence in F. Let

G = ⋀_{i=1}^{n} (((⋀_{j=1}^{i−1} ¬F[G_j]) ∧ F[G_i]) → G_i).

Then G is substitutible for p in F and ∃p F ≡ F[G].

Proposition <ref> is also applicable to formulas of the form handled by Prop. <ref>, but for this case leads to a more clumsy result G: Assume the additional precondition that for all j ∈ {1,…,n} it holds that G_j is substitutible for p in ⋁_{i=1}^{n} F_i. Let F[p] stand for ⋁_{i=1}^{n} F_i[p]. Then ∃p F[p] ≡ F_1[G_1] ∨ … ∨ F_n[G_n] ≡ F[G_1] ∨ … ∨ F[G_n].

Relational Monadic Formulas and Relaxed Substitutibility

The class of relational monadic formulas with equality is the class of first-order formulas with equality, with unary predicates and with individual constants but no other functions (without equality it is the Löwenheim class). It is decidable and permits second-order quantifier elimination, that is, each formula of the class extended by predicate quantification is equivalent to a formula of the class. As shown in <cit.>, it has interesting relationships with 𝒜ℒ𝒞. Behmann <cit.> gave a decision method for this class that performs second-order quantifier elimination by equivalence preserving formula rewriting <cit.>. Almost three decades later he published an adaption of these techniques to the solution problem for Klassenlogik <cit.>, which in essence seems to be this class. It still remains open to assess this and apparently related works by Löwenheim <cit.>.

Under a relaxed notion of substitutibility the construction of witnesses for relational monadic formulas is possible by joining Behmann's rewriting technique <cit.> with the EHW-combination (Prop. <ref>). Let F[p] be a relational monadic formula and let p be a unary predicate. Assume that F ∩ x̄ = ∅. The reconstruction of Behmann's normalization shown in the proofs of Lemma 14 and Lemma 16 of <cit.> can be slightly modified to construct a formula F′ = ⋁_{i=1}^{n} F″_i that is equivalent to ∃p F and such that each F″_i is of the form

F″_i = C_i ∧ ∃v̄_i(D_i(v̄_i) ∧ ∃p(∀y(A_i(v̄_i y) → p(y)) ∧ ∀y(p(y) → B_i(v̄_i y)))),

where v̄_i is a sequence of individual symbols such that v̄_i ∩ C_i = ∅, predicate p has only the two indicated occurrences, and the vocabulary of F″_i is contained in that of ∃p F. Let

F‴_i[p] = C_i ∧ D_i(v̄_i) ∧ ∀y(A_i(v̄_i y) → p(y)) ∧ ∀y(p(y) → B_i(v̄_i y)).

Then F″_i ≡ ∃v̄_i ∃p F‴_i ≡ ∃v̄_i F‴_i[A_i(v̄_i x_1)], where the last equivalence follows from Ackermann's lemma (Prop. <ref>). It holds that A_i(v̄_i x_1) is substitutible for p in F‴_i, but, since the quantified symbols v̄_i may occur in A_i(v̄_i x_1), the substitutibility condition with respect to ∃v̄_i F‴_i does not hold in general. The variables v̄_i can be gathered into a single global prefix ∃v̄ (assuming w.l.o.g. that none of them occurs free in any of the C_i) such that ∃p F ≡ ∃v̄ ∃p F⁗[p], where F⁗ = ⋁_{i=1}^{n} F‴_i. By Prop. <ref> we can construct a formula G(x̄) such that G(x̄) is substitutible for p in F⁗ and ∃p F⁗[p] ≡ F⁗[G(x̄)]. This implies ∃p F[p] ≡ F[G(x̄)]. However, substitutibility of G(x̄) holds only with respect to F⁗, while substitutibility of G(x̄) for p in F does not hold in general. Thus, under a relaxed notion of substitutibility that permits the existentially quantified v̄ in the witness, the EHW-combination can be applied to construct witnesses for relational monadic formulas.

§ SOLUTIONS IN RESTRICTED VOCABULARIES

§.§ An Example: Synthesizing Definitional Equivalence

In some applications it is useful to restrict the allowed vocabulary of the solution components.
Consider for example the task of finding a mapping that establishes a definitional equivalence <cit.> between two formulas A and B, where the predicates occurring free in A are in a set V_A = {a_1,…,a_n} and those occurring in B are in another set V_B = {b_1,…,b_m}, disjoint with V_A. The objective is then to find a solution Ḡ H̄ of the solution problem F[p̄ q̄], where Ḡ = G_1 … G_m, H̄ = H_1 … H_n, p̄ = p_1 … p_m, q̄ = q_1 … q_n,

F = (A ∧ ⋀_{i=1}^{m} ∀ȳ_i(b_i(ȳ_i) ↔ p_i(ȳ_i))) ↔ (B ∧ ⋀_{i=1}^{n} ∀z̄_i(a_i(z̄_i) ↔ q_i(z̄_i))),

ȳ_i = y_1 … y_{b_i}, z̄_i = z_1 … z_{a_i}, and the restriction is satisfied that all predicates in Ḡ are in V_A and all predicates in H̄ are in V_B.

§.§ Modeling with Two Consecutive Solution Problems

This can be achieved by solving consecutively two solution problems followed by interpolant computation: First, compute a reproductive solution R̄[t̄] of F[p̄ q̄]. Since it is a most general solution, if there is a particular solution, say X̄, that meets the vocabulary restrictions, there must be a sequence T̄ of "instantiation formulas" such that R̄[T̄] ≡ X̄. Each member of R̄[T̄] is then "semantically" in the required predicate vocabulary, that is, equivalent to a formula in which all free predicates are members of the given set of predicates. Craig-Lyndon interpolation can be applied on each of these formulas, if they are first-order, to construct equivalent formulas that are also syntactically in the required vocabulary, as explained in Sect. <ref>.

The remaining issue is to find suitable instantiation formulas T̄. These can again be determined as solutions of a solution problem: Consider, for example, the case where for a formula R[…] a sequence T̄ of formulas should be found such that R_i[T̄] is semantically independent from the members of q̄, that is, it should hold that ∃q̄ R[T̄] ≡ R[T̄]. This is equivalent to the entailment of R[T̄] by R[T̄] with q̄ renamed to a sequence q̄′ of fresh predicates matching q̄, and hence also equivalent to the validity of the corresponding implication. Thus, suitable T̄ can be obtained as solutions of the solution problem

(R[q̄′] → R)[…].

For an n-ary solution problem where R_1[…] … R_n[…] is given and the requirement is that for all i ∈ {1,…,n} the corresponding semantic independence holds, a single solution problem that combines the requirements can be used:

(⋀_{i=1}^{n} (R_i[q̄′_i] → R_i))[…],

where, for i ∈ {1,…,n}, q̄′_i is a sequence of fresh predicates that matches the predicates to be eliminated from R_i. In fact, if m, n ≥ 1, the first of these two solution problems can be trivially solved: As reproductive solution take the rigorous solution based on a particular solution where G_1 = ¬b_1(x_1 … x_{p_1}), H_1 = ¬q_1(x_1 … x_{q_1}), and G_2 … G_m and H_2 … H_n have arbitrary values, for example ⊤. The actual effort to construct the vocabulary-restricted solution is then required for the second solution problem.

§.§ Expressing a Vocabulary Restriction on all Unknowns

A different technique applies to solution problems where there is only a single set of predicates, say the set of members of a sequence s̄ of predicates, that are not permitted to occur free in the solution components: The vocabulary restriction can then be directly encoded into the solution problem by means of second-order quantification, justified by the equivalence of the following statements, which follows from the requirement of substitutibility for solutions:

X̄ is a solution of the solution problem F[p̄] and X̄ ∩ s̄ = ∅.
X̄ is a solution of the solution problem (∀s̄ F)[p̄].

§ A HERBRAND VIEW ON THE SOLUTION PROBLEM

The characterization of solution in Prop. <ref> is by an entailment of F. In presence of the result of <cit.> for quantifier-free predicate logic, this brings up the question whether Skolemization and Herbrand's theorem justify some "instance-based" technique for computing solutions that succeeds on large enough quantifier expansions.
The following is a formal account of that scenario which, so far, shows no positive result but might be useful as a basis for further investigations. Consider a solution problem F[p] that has a solution G. By Prop. <ref> it then holds that (p ↔ G) entails F. Let D stand for (p ↔ G). By Herbrand's theorem we know that there are formulas D′, F′, D^h and F^h and sequences f̄ and ḡ of fresh functions such that

(p ↔ G) = D ≡ ∀f̄∃ḡ D′ entails ∀f̄∃ḡ D^h entails ∀f̄∃ḡ F^h entails ∀f̄∃ḡ F′ ≡ F,

and D^h entails F^h. The functions f̄ and ḡ are the functions introduced by Skolemizing D (w.r.t. ∃) and F (w.r.t. ∀), respectively (this is the only place in the paper where we consider second-order quantification upon functions). Formulas D′ and F′ are the universal and existential, respectively, first-order formulas obtained from Skolemizing D and F, respectively. Formulas D^h and F^h are quantifier-free, obtained from D′ and F′, respectively, by instantiating their matrices with terms constructed from f̄, ḡ and the free individual symbols in D or F (it is assumed w.l.o.g. that at least one individual symbol is among the symbols available for term construction). We thus know that (p ↔ G) entails ∀f̄∃ḡ F^h, or, equivalently, that (p ↔ G) entails ∃ḡ F^h. Thus G must be the solution of the solution problem (∀f̄∃ḡ F^h)[p], where F^h is quantifier-free and ∀f̄∃ḡ is a second-order prefix with quantifiers upon functions. As a sufficient condition for solutions it can be derived from this setting that a solution H of the quantifier-free formula F^h in which no member of ḡ occurs free is, under appropriate substitutibility assumptions with respect to F, also a solution of F, which follows since (p ↔ H) entails F^h, which entails ∃ḡ F^h, which entails ∀f̄∃ḡ F^h, which entails F.

§ CONCLUSION

The solution problem and second-order quantifier elimination were interrelated tools in early mathematical logic. Today elimination has entered automation, with applications in the computation of circumscription, in modal logics, and for semantic forgetting and modularizing knowledge bases, in particular for description logics. Since the solution problem on the basis of first-order logic is, like first-order validity, recursively enumerable, there seems some hope to adapt techniques from first-order theorem proving. The paper makes the relevant scenario accessible from the perspective of predicate logic and theorem proving. It shows that a wealth of classical material on Boolean equation solving can be transferred to predicate logic and that only a few essential diverging points crystallize, like the constructability of witness formulas for quantified predicates, and "Schröder's reproductive interpolant", which does not apply in general to first-order logic. An abstracted version of the core property underlying the classical method of successive eliminations provides a foundation for systematizing and generalizing algorithms that reduce n-ary solution problems to unary solution problems. Special cases based on Craig interpolation have been identified as first steps towards methods for solution construction. Beyond the presented core framework there seem to be many results from different communities that are potentially relevant for further investigation.
This includes the vast amount of techniques for equation solving on the basis of Boolean algebra and its variants, developed over the last 150 years. For description logics there are several results on concept unification, e.g., <cit.>. Variants of Craig interpolation such as disjunctive interpolation <cit.> share with the solution problem at least the objective to find substitution formulas such that the overall formula becomes valid (or, dually, unsatisfiable). Among the issues that immediately suggest themselves for further research are the parallel between nondeterministic methods with execution paths for each particular solution and methods that compute a most general solution, the exploration of formula simplifications and techniques such as definitional normal forms to make constructions like the rigorous solution and the reproductive interpolant feasible, and the investigation of the relaxed notion of substitutibility under which solutions for relational monadic formulas can be constructed. The possible characterization of solution by an entailment also brings up the question whether Skolemization and Herbrand's theorem justify some "instance-based" technique for computing solutions that succeeds on large enough quantifier expansions.

§.§ Acknowledgments

The author thanks anonymous reviewers for their helpful comments. This work was supported by DFG grant WE 5641/1-1. | http://arxiv.org/abs/1706.08329v3 | {
"authors": [
"Christoph Wernhard"
],
"categories": [
"cs.LO",
"cs.AI"
],
"primary_category": "cs.LO",
"published": "20170626113006",
"title": "The Boolean Solution Problem from the Perspective of Predicate Logic - Extended Version"
} |
Return Oriented Programming - Exploit Implementation using functions

Sunil Kumar Sathyanarayan ([email protected]) and Katayoun Aliyari ([email protected]), supervised by Dr. Makan Pourzandi ([email protected])

§ ABSTRACT

Return-Oriented Programming first surfaced a decade ago and was built to overcome buffer-exploit defence mechanisms like ASLR and DEP (or W^X) by reusing existing system code in the form of "gadgets" which are stitched together to make a Turing-complete attack. Performing a Turing-complete attack requires great effort and is quite complex, and there is very little research available on performing such an attack. So, in this project, we are systemising the knowledge of the existing research that can be used to perform a Turing-complete ROP attack.

§ INTRODUCTION

Return-Oriented Programming (ROP) is a technique by which an attacker can induce arbitrary behavior in a program by diverting the program control flow, without injecting any code. A return-oriented program chains together short instruction sequences already present in a program's address space, each of which ends in a "return" instruction. There are different ways to demonstrate ROP exploitation; one popular demonstration is to deactivate ASLR, which is short for Address Space Layout Randomization. ASLR is a common defense against ROP attacks which works by randomly moving the segments of a program around in memory, preventing the attacker from predicting the addresses of useful gadgets, so deactivating ASLR at the beginning of the implementation is a common demonstration. In this work, for our own simplicity, we implemented an attack by deactivating ASLR before implementing the exploit and writing our program. After ASLR deactivation, we implement our program in which a buffer overflow occurs, by defining a function in our program which points to another function called function 1, which in turn points to function 2, and so on, as we explain further in section 5.

§ RELATED WORK

In order to find a way to exploit ROP, we went through some solutions against the attack, such as G-Free, which is the only general solution; the rest of the mechanisms provide solutions specific to a particular build. We also found different methods of mounting ROP attacks, such as 'Automated ROP', 'ROP without returns' and 'Return-to-libc', which we summarize in the following subsections. In addition, we found various tools to find gadgets for performing ROP, namely 'ROPgadget' and 'ROPeme'.

§.§ G-FREE

As described in <cit.>, G-Free is a compiler-based approach against any possible form of ROP. It can eliminate all the unaligned free-branch instructions located inside an executable binary and protect the aligned ones against misuse of attacker data. This method provides a gadget-free executable solution which removes the links between the necessary chained instruction sequences so that they cannot be targeted by any possible ROP attack. In this solution, the first step is to eliminate any unaligned free-branch instructions. The second step is to protect aligned free-branch instructions so that they are secure against misuse. To achieve this goal, <cit.> employs two techniques: ret instructions to encrypt the return address, and a cookie-based technique to protect jmp*/call*. The ret-instruction technique provides a short header and saves the encrypted return addresses on the stack.
By this technique, whenever the attacker jumps into the middle of a function, he reaches the footer. As a result, these jumps transfer the attacker to an incorrect address that the attacker cannot control.

§.§ Tools for finding gadgets

In return oriented programming, the core idea is to get useful instruction sequences from the code and chain these instructions together. For this, the attacker should collect some useful sequences of instructions and then reuse these sequences as basic blocks to execute the code. The most important factor in ROP is that these collections of code provide a set of functionalities which allow the attacker to achieve Turing completeness without injecting any code. In the next step, the attacker should chain these code sequences in an order that manipulates the program control flow. Gadgets are these valid sequences of instructions used to change the control flow as desired <cit.>. In this section, we introduce some existing tools which enable us to find gadgets and chain them together.

§ BACKGROUND

ROP was built to overcome the shortcomings of the buffer overflow attack, where the attacker was able to insert his arbitrary code in the stack segment and execute it. This was prevented by making the stack segment non-executable, and the introduction of ASLR made the injection of malicious code more difficult. Hence ROP attacks were built to work despite the existing defense mechanisms, by reusing code.

§.§ Buffer overflow

In a buffer overflow attack, the attacker tries to overflow the stack by exceeding the limited length of the buffer and modify the return address so that it points to the address of his injected malicious code [4]. According to [7], the buffer overflow attack exploits the lack of array or buffer bounds checking in the compiler. Figure 1 shows a typical stack layout and how, when a function has been called, stack entries like the return address can be corrupted by a malicious copy operation.

[Figure 1: Stack showing how a buffer overflow works]

§ OUR WORK

The aim of this section is to demonstrate the chaining of gadgets, so we wrote a simple buffer exploit from which we chained various types of functions, viz. functions with or without parameters, as well as chaining of global functions, and observed their respective behavior on the stack. This replicates, in a general way, what actual gadgets do, although working with real gadgets involves much more assembly coding. For this, we have referred to blog posts published by various researchers <cit.><cit.><cit.>. For the implementation we used a 32-bit version of the Debian-based Linux Mint 18.1 operating system; our exploit code was written in C and compiled with the GCC compiler, we used gdb for debugging, and we used objdump extensively for finding the addresses of the chained functions. Most modern operating systems implement Address Space Layout Randomization (ASLR), which makes it difficult for a buffer exploit to execute, since the code segment of the stack is randomized, preventing the attacker from obtaining the return addresses of the functions; but for a 32-bit system attackers have often found ways to brute-force it.
But for our work we considered disabling ASLR manually at compilation time:

gcc vuln.c -o vuln -fno-stack-protector

-fno-stack-protector disables the stack-smashing protector (stack canaries) inserted by the compiler, which enables us to overwrite the return address and control the code sequence.

On the compiled code we run objdump -d vuln to obtain the exact memory addresses of the functions and to learn the buffer length, and we exploit the code sequence by overflowing the buffer and modifying the return addresses to point to arbitrary code snippets.

The code snippet below shows the exploit code, where we have induced a buffer overflow in the function echo(), from which we will execute arbitrary functions and variables to demonstrate the chaining of gadgets in two phases: the first phase with functions without arguments, and the second phase with functions with arguments and global variables.

#include <stdio.h>
#include <stdlib.h>

// global variable
char str[20] = "MyROPExploit";

// function without parameters
void SecretFunctionWithoutParm() {
    printf("Welcome to secret function without parameters\n");
}

// function with parameters
void SecretFunctionWithParm(char argv[]) {
    printf("Welcome to another secret function with parameter %s\n", argv);
}

// function with buffer exploit
void echo() {
    char buffer[20];
    printf("Enter some text: ");
    scanf("%s", buffer);
    printf("You entered: %s\n", buffer);
}

int main() {
    echo();
    return 0;
}

In order to chain gadgets, we need to learn two things: the addresses of the functions and the length of the buffer. Both can be found using either the gdb debugger or objdump; we demonstrate using objdump.

§.§ Finding length of buffer and return address

[Figure 2: objdump of the exploit program]

Figure 2 shows a snippet of the objdump output for the exploit code. From the echo() function we can see the compiler's allocation for the buffer variable:

8048502: 8d 45 e4    lea -0x1c(%ebp),%eax

The compiler has allocated 0x1C, or 28 bytes, for the variable, so in order to overflow we need to use an input of 32 bytes to achieve the buffer overflow.

[Figure 3: Buffer overflow]

To find the addresses of the functions, we can just use the starting addresses provided by objdump:

080484a4 <SecretFunctionWithParm>:
0804848b <SecretFunctionWithoutParm>:

§.§ ROP Exploit with and without arguments

As we explained in the previous subsection, we have the addresses of the functions required for the exploit. Now we have to replace the original return address, but to know where to insert the return address we should first understand how the stack in figure 4 works.

[Figure 4: Stack overview of a normal function with parameters]

A function's stack frame starts at a high memory address and grows towards lower addresses, and it stores the RET address of the invoking function, in our case main(). The value of the buffer variable is stored starting from a lower address and grows towards higher memory addresses. By overflowing the buffer variable we can modify the actual return address of the invoking function to point to some arbitrary code segment and execute a malicious action, as shown in figure 5.

[Figure 5: Stack overview of a function with a fake return address]

Now we use this concept and inject the address of SecretFunctionWithParm, 0x080484a4, into our exploit, obtaining the intended results shown in figure 6.
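With the buffer length (32 bytes before the saved return address) and the function addresses from the objdump output above, the overflow input can also be generated programmatically instead of being typed by hand. The following Python sketch is our illustration: the two function addresses are the ones from our objdump output, while the address of the global str is a hypothetical placeholder that would have to be read from gdb or objdump on the actual binary. It packs the fake return addresses in little-endian order, first calling SecretFunctionWithoutParm() and then chaining into SecretFunctionWithParm() with its argument laid out according to the 32-bit cdecl stack convention.

import struct

# addresses from the objdump of our 32-bit binary (differ per build)
SECRET_NO_PARM = 0x0804848b   # SecretFunctionWithoutParm
SECRET_PARM    = 0x080484a4   # SecretFunctionWithParm
STR_GLOBAL     = 0x0804a030   # address of global 'str' (hypothetical value)

payload  = b"A" * 32                          # fill buffer up to saved EIP
payload += struct.pack("<I", SECRET_NO_PARM)  # overwritten return address
payload += struct.pack("<I", SECRET_PARM)     # "return" of the first call
payload += struct.pack("<I", 0xdeadbeef)      # fake return for WithParm
payload += struct.pack("<I", STR_GLOBAL)      # its argument on the stack

with open("payload.bin", "wb") as f:          # feed with: ./vuln < payload.bin
    f.write(payload)

When SecretFunctionWithoutParm() executes its ret, the next dword on the stack (the address of SecretFunctionWithParm) is popped as the new instruction pointer, which is exactly the chaining mechanism demonstrated in figures 6-9.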
[Figure 6: Demo of injecting a fake return address to SecretFunctionWithParm()]

Now that we are able to invoke an arbitrary function, we try inserting arbitrary arguments into the exploited functions in figure 7.

[Figure 7: Demo of injecting a fake return address with a parameter to SecretFunctionWithParm()]

Next, to demonstrate the chaining of gadgets, we chain SecretFunction() iteratively in figure 8.

[Figure 8: Demo of injecting fake return addresses to SecretFunction() iteratively]

And lastly we show the injection of a global variable into SecretFunctionWithParm(); for this we found the address of the global variable using the gdb debugger, and the results can be seen in figure 9.

[Figure 9: Demo of injecting a global variable into SecretFunctionWithParm()]

§ CONCLUSION

Return Oriented Programming may be a decade old, and not many exploits using ROP have been reported, but it is stealthy: it cannot be detected by intrusion detection systems or any signature-based detection system, as it reuses the system's trusted libraries to perform the malicious action. Many defense mechanisms have been proposed, like G-Free <cit.>, which can provide a solution for all systems, but most other works focused on providing system-specific solutions, and the implementation of G-Free is not widely adopted. Research on ROP hence seems to be very limited, so in our project we demonstrated gadget chaining in the form of chained functions, which replicates exactly the mechanism that can be used to perform a Turing-complete attack.

gfree Kaan Onarlioglu, Leyla Bilge, Andrea Lanzi, Davide Balzarotti, and Engin Kirda. 2010. G-Free: defeating return-oriented programming through gadget-less binaries. In Proceedings of the 26th Annual Computer Security Applications Conference (ACSAC '10). ACM, New York, NY, USA, 49-58. DOI=10.1145/1920261.1920269 - <http://doi.acm.org/10.1145/1920261.1920269>

kalilinux Kali Linux - <https://www.kali.org/>

dhaval Dhaval Kapil, Buffer Overflow Exploit - <https://dhavalkapil.com/blogs/Buffer-Overflow-Exploit/>

arcana Introduction to return oriented programming (ROP) by Alex Reece - <http://codearcana.com/posts/2013/05/28/introduction-to-return-oriented-programming-rop.html>

exploitdb Return-Oriented-Programming (ROP FTW) by Saif El-Sherei - <https://www.exploit-db.com/docs/28479.pdf> | http://arxiv.org/abs/1706.08562v1 | {
"authors": [
"Sunil Kumar Sathyanarayan",
"Dr. Makan Pourzandi",
"Katayoun Aliyari"
],
"categories": [
"cs.CR"
],
"primary_category": "cs.CR",
"published": "20170626185815",
"title": "Return Oriented Programming - Exploit Implementation using functions"
} |
| http://arxiv.org/abs/1706.08587v1 | {
"authors": [
"Darko Dimitrov"
],
"categories": [
"cs.DM"
],
"primary_category": "cs.DM",
"published": "20170626204753",
"title": "On structural properties of trees with minimal atom-bond connectivity index IV: Solving a conjecture about the pendent paths of length three"
} |
[email protected]
Bremen Center for Computational Materials Science, Universität Bremen, Am Fallturm 1, 28359 Bremen, Germany
Institut für Theoretische Physik, Universität Bremen, Otto-Hahn-Allee 1, 28359 Bremen, Germany
(both affiliations apply to all authors)

We analyze the interplay of spin-valley coupling, orbital physics and magnetic anisotropy taking place at single magnetic atoms adsorbed on semiconducting transition metal dichalcogenides, MX_2 (M = Mo, W; X = S, Se). Orbital selection rules turn out to govern the kinetic exchange coupling between the adatom and charge carriers in the MX_2 and lead to highly orbitally dependent spin-flip scattering rates, as we illustrate for the example of transition metal adatoms with d^9 configuration. Our ab initio calculations suggest that d^9 configurations are realizable by single Co, Rh, or Ir adatoms on MoS_2, which additionally exhibit a sizable magnetic anisotropy. We find that the interaction of the adatom with carriers in the MX_2 allows us to tune its behavior from a quantum regime with full Kondo screening to a regime of "Ising spintronics", where its spin-orbital moment acts as a classical bit which can be erased and written electronically and optically.

Optically and electrically controllable adatom spin-orbital dynamics in transition metal dichalcogenides
T. O. Wehling

Transition metal adatoms on surfaces provide ideal model systems for fundamental studies of quantum many-body phenomena ranging from magnetism <cit.> and Kondo physics <cit.> to topological states of matter <cit.> and Majorana modes <cit.>. Moreover, these systems are promising as ultimately miniaturized building blocks of spintronic devices and logic gates. In particular, recent advances in scanning tunneling microscopy have led to enormous progress in the probing and manipulation of these systems, including writing, reading, and processing of information from atomic-scale bits via e.g. spin-transfer torques <cit.> and spin-polarized spectroscopy techniques <cit.>. In all of these cases, the coupling between adatom and substrate is central in determining the magnetic properties of the system. Thus, changes in the quantum state of the substrate can directly affect the adatom magnetism, as studies of superconducting substrates demonstrated <cit.>. In the light of time-dependent phenomena, substrates which allow for ultrafast manipulation of their electronic states by electronic or optical means are particularly interesting, but actual realizations have been lacking so far.

In this letter, we show that strong spin-valley coupling and peculiar orbital physics make monolayers of transition metal dichalcogenides (TMDCs), MX_2 (M = Mo, W; X = S, Se), ideal substrates in this context. MX_2 materials allow for ultrafast optical control of their electronic states and for charge doping by external gates, which turn out to provide control over the spin-flip scattering of transition metal (TM) adatoms on a monolayer of MX_2. We illustrate this result based on ab initio simulations of single Co, Rh, or Ir adatoms on MoS_2 and a generic model Hamiltonian description. Our calculations show that these magnetic adatoms exhibit a doublet ground state which is separated from excited states by a sizable magnetic anisotropy >10 meV and realizes an "Ising" spin-orbital moment. We analyze the kinetic exchange scattering of adatom and substrate electrons and demonstrate that the adatom behavior can be tuned from a regime of "Ising spintronics", where its spin-orbital moment acts as a classical bit which can be manipulated electronically and optically, to a quantum regime of full Kondo screening.

The choice of MX_2 as substrate concerns two aspects: First, there are adsorption sites with uniaxial symmetry (C_3v) on the surface of MX_2, i.e. the top of M atoms (M-top), the top of X atoms (X-top), and the site above the middle of the M-X hexagons (hollow). Uniaxial symmetry is crucial for a TM adatom to retain a large orbital moment and consequently yields a sizable magnetic anisotropy <cit.>. Moreover, the symmetry determines the hybridization between the TM and M atoms. Hence, only those d orbitals of TM and M atoms with matching symmetry under the operations of the adsorption-site point group can couple to each other, providing an additional degree of freedom to control the spin-flip scattering. Second, we can easily select the spin and orbital character of charge carriers in MX_2. As shown in Fig. <ref>(a), the band edges of MX_2 result from different valleys which stem predominantly from different d orbitals of the M atoms <cit.>. For example, the lowest conduction band (CB) in the K valley carries mostly d_m=0 character, while we have mainly d_m=±2 in the Σ valley; in the uppermost valence band (VB) we have mostly d_m=0 in the Γ valley and d_m=±2 in the K valley. Here m is the quantum number of the orbital momentum's z component. Given the energy separation between the minima/maxima in the CBs/VBs, charge doping thus selects the orbital character of carriers at the Fermi level in MX_2. Due to the C_3v symmetry of the adsorption sites, the coupled d_m orbitals of M atoms and TM adatoms need to follow an orbital selection rule, Δm mod 3 = 0. Therefore, spin-flip scattering will strongly depend on the doping level. Specifically, if the valence hole of a d^9 adatom resides in a d_m=±1 state [Fig. <ref>(a)], spin-flip scattering by charge carriers in the MX_2 is suppressed for the undoped system due to the absence of any carriers [Fig. <ref>(b)], but also for the moderately electron-doped case due to the absence of symmetry-matched carriers [Fig. <ref>(d)]. However, once carriers with d_m=±2 orbital character are available for scattering at the Fermi level, e.g., in the cases of moderate hole doping [Fig. <ref>(c)] and relatively high electron doping [Fig.
In the light of time-dependent phenomena, substrates which allow for ultrafast manipulation of their electronic states by electronic or optical means are particularly interesting but actual realizations have been lacking so far.In this letter, we show that strong spin-valley coupling and peculiar orbital physics make monolayers of transition metal dichalcogenides (TMDCs), MX_2 (M = Mo, W; X = S, Se) ideal substrates in this context. MX_2 materials allow for ultrafast optical control of their electronic states and for charge doping by external gates, which turn out to provide control over spin-flip scattering of transition metal (TM) adatoms on a monolayer MX_2. We illustrate this result based on ab initio simulations of single Co, Rh, or Ir adatoms on MoS_2 and a generic model Hamiltonian description. Our calculations show, that these magnetic adatoms exhibit a doublet ground state which is separated from excited states by a sizable magnetic anisotropy >10 meV and realizes an "Ising" spin-orbital moment. We analyze the kinetic exchange scattering of adatom and substrate electrons, and demonstrate that the adatom behavior can be tuned from a regime of "Ising spintronics" where its spin-orbital moment acts as classical bit, which can be manipulated electronically and optically, to a quantum regime of full Kondo screening.The choice of MX_2 as substrate concerns two aspects: First, there are adsorption sites with uniaxial symmetry (C_3v) on the surface of MX_2, i.e. the top of M atoms (M-top), the top of X atoms (X-top), and the site above the middle of M-X hexagons (hollow). Uniaxial symmetry is crucial for TM adatom to retain a large orbital moment and consequently yields a sizable magnetic anisotropy <cit.>.Moreover, the symmetry determines the hybridization between TM and M atoms. Hence, only those d orbitals of TM and M atoms with matching symmetry under the operation of the adsorption site point group can couple to each other, providing an additional degree of freedom to control the spin-flip scattering. Second, we can easily select the spin and orbital character of charge carriers in MX_2. As shown in Fig. <ref>(a), the band-edges of MX_2 result from different valleys which are predominately stemming from different d orbitals of the M atoms<cit.>.For example, the lowest conduction-band (CB) in the K valley carries mostly d_m=0 character, while we have mainly d_m=±2 in the Σ valley; in the upmost valence-band (VB) we have mostly d_m=0 in the Γ valley and d_m=±2 in the K valley. Here m is the quantum number of the orbital momentum's z component. Given the energy separation between the minima/maxima in CBs/VBs, charge doping thus selects the orbital character of carriers at the Fermi level in MX_2. Due to the C_3v symmetry of adsorption sites, the coupling d_m orbitals from M atoms and TM adatoms need to follow an orbital selection rule, Δ m mod 3=0. Therefore, spin-flip scattering will strongly depend on the doping level. Specifically, if the valence hole of a d^9 adatom resides in a d_m=±1 state [Fig. <ref>(a)], spin-flip scattering by charge carriers in MX_2 is suppressed for the undoped system due to the absence of any carriers [Fig. <ref>(b)] but also for the moderately electron doped case due to the absence of symmetry matched carriers [Fig. <ref>(d)]. However, once carriers with d_m=±2 orbital character are available for scattering at the Fermi level, e.g., in the cases of moderate hole doping [Fig. <ref>(c)] and relatively high electron doping [Fig. 
<ref>(e)], an effective channel for spin-flip opens and the adatom spin might be even fully screened by the charge carriers. More significantly, because of the so-called spin-valley coupling, TMDCs with broken inversion symmetry, e.g. MX_2 monolayers, allow for optical selecting the spin state of the excited carriers<cit.>. This optically determined spin state can be further coupled to the adatom spin via spin-flip scattering, providing a mechanism of optical orientation of the adatom spin. The orbital and spin selection rules of spin-flip scattering present the basis for the ultrafast control over the magnetic adatom degrees of freedom suggested in this letter.To investigate this scenario quantitatively, we firstly study the ground state of the TM adatom subject to the crystal field with C_3v symmetry and spin-orbit coupling (SOC), which can be described by the effective Hamiltonian, H = H_CF+∑_iλl_i·s_i.Here H_CF is the crystal field Hamiltonian, λ is the spin-orbit constant, l_i and s_i are the orbital and spin angular momentum vector of an electron, respectively, the sum over i runs over all the electrons. Because we are interested in a d^9 configuration the atomic state is a single Slater determinant. In the basis of the five d orbitals (d_m=0,±1,±2), H_CF is given by a 5×5 matrix. Due to the three fold rotation axis of the crystal field, the matrix element ⟨d_m_i|H_CF|d_m_j⟩ is non-vanishing only when | m_i - m_j | 3 = 0. Those elements include ⟨d_m|H_CF|d_m⟩, which is the energy of d_m orbital (ϵ_m), and, ⟨d_-1|H_CF|d_+2⟩ and ⟨d_+1|H_CF|d_-2⟩ labeled by c_-3 and c_+3, respectively. In absence of magnetic fields, time reversal symmetry further implies ϵ_-2=ϵ_+2 and ϵ_-1=ϵ_+1. Then, H_CF readsH_CF= [ ϵ_-200 c^⋆_+30;0 ϵ_-100 c_-3;00 ϵ_ 000; c_+300 ϵ_+10;0 c^⋆_-300 ϵ_+2 ].The crystal field with C_3v symmetry splits the five d orbitals into two doublets, E1 (mostly d_±1) and E2 (mostly d_±2), and a singlet A1 (d_0). By diagonalizing (<ref>), we can obtain the spin-orbit state of the d^9 configuration.In order to show that the d^9 configuration is a realistic scenario for several single TM adatoms on MX_2 and to obtain reasonable parameters in (<ref>) we performed ab initio calculations in the framework of density functional theory (DFT) <cit.>. We employed a 5 × 5 MoS_2 supercell with one Co, Rh, or Ir adatom using Perdew-Burke-Ernzerhof parameterization <cit.> of generalized gradient approximation (GGA) functional and the projector augmented wave <cit.> as implemented in the VASP package <cit.>. All the atomic structures were fully relaxed before calculating their electronic structures. We have considered three adsorption sites, Mo-top, S-top, and hollow. Our results indicate that the Mo-top site is energetically favored by all three adatoms, and the other two sites become energetically more unfavorable from Co to Ir (see Table S1 in the supplemental material). The spin-polarized GGA calculations predict an electronic configuration close to d^9 for all the adatoms, in which one hole resides in E1 state which has predominantly d_±1 orbital character. Here, without loss of generality we take Co on MoS_2 as a representative example to be discussed in the following. As shown in the projected density of states (PDOS) [(Fig. <ref>(a)], the d orbital order of minority spin in energy is E2 < A1 < E1 with hybridization between E2 and E1, which is in agreement with the allowed off-diagonal terms of the crystal field in (<ref>). 
To investigate the impacts of on-site Coulomb repulsion on the results, we also carried out GGA+U calculations for the Co adatom <cit.>. The GGA+U calculations do not change the orbital ordering and occupation. Only for large U⩾7.0 eV (J=0.9 eV) E1 is shifted into the CBs.We extract the following crystal field parameters of H_CF from the DFT calculations of Co on MoS_2: ϵ_| m | =2 = -0.287 eV, ϵ_| m | =1 = 0.130 eV, ϵ_m=0=0 eV, and | c_±3| = 0.183 eV. Setting the spin-orbit constant to λ=22 meV <cit.>, the spin-orbit eigenstates of Co are obtained by diagonalizing (<ref>) and are labelled by |n={0,1,2,...};L_zS_z={±±,∓±}⟩. Here, n is assigned to numbering the states in order of increasing energy. TheL_z and S_z refer to the orbital and spin angular momentum along z axis, respectively, and "±±" ("∓±") denotes the orbital and spin momentum directions are parallel (antiparallel) to each other. In Fig. <ref>(b) we illustrate the evolution of the lowest eigenstates under the variation of the crystal field parameter c_±3 and the spin-orbit coupling λ. It shows that c_±3 perturbs the lowest quadruplet, partially quenching its L_z. The SOC splits the quadruplet into two doublets with different relative alignments of the spin and orbital moments. The energy separation between the ground state |0;±±⟩ and the first excited state |1;∓±⟩ is about 12.0 meV. I.e. we have an effective out-of-plane magnetic anisotropy on the order of 12 meV for the coupled spin-orbital moment of the Co adatom, which is essentially comparable to the case of Co on Pt (111) <cit.>.In order to investigate the stability of the spin-orbital moment, we inspect possible final states as resulting from pure spin-flip scattering if the initial state is one of the spin-orbital ground states |0;++⟩ of the adatom. We estimate the probabilities of reaching the first excited state and the other degenerate ground state by calculating |⟨1;+-|S^-|0;++⟩|^2 and |⟨0;–|S^-|0;++⟩|^2, where S^- is the ladder operator to transit S_z from "+" to "-". Our result shows that the former is about 0.98 and the latter is numerically 0, indicating the transition inside the ground state manifold is suppressed. For the former transition, we need to overcome the excitation energy from the ground state to the first excited state arising from SOC which is proportional to λ L <cit.>, where L is the orbital moment magnitude. Given that the corresponding excitation energy is ∼12 meV for Co and that of 4d and 5d TM adatoms will be larger because of the stronger SOC, we can neglect the corresponding transition at sufficiently low temperatures. Thus, the spin-orbital moment of adatoms is essentially not perturbed by pure spin-flip scattering. However, carriers in MoS_2 can scatter with Co, disturbing its quantum state via hybridization. For example, the transition⟨d^Co_±1|H_hyb|d^Mo_∓2⟩⟨d^Mo_±2|H_hyb|d^Co_∓1⟩≠ 0 which meets the orbital selection rule | m_i - m_j | 3 = 0 can flip the spin and the orbital moment simultaneously [see Fig.<ref>(c), (e)] and provides a channel for elastic scattering.To address the impact of interaction with charge carriers in MoS_2 on the spin-orbital moment of Co, we describe the system in terms of an Anderson impurity model (AIM)H = H_MoS_2 + H_Co + H_hyb,The first term corresponds to the charge carriers residing in MoS_2 and we employ a tight-binding (TB) Hamiltonian from Ref. <cit.> in the basis of three Mo d orbitals (m=0,±2)[For convenience, the three d orbitals are denoted by the z-component of the orbital momentum quantum number m. 
In the calculations, we employed their real forms d_z^2, d_xy and d_x^2-y^2 as the basis, which can be obtained by the relations d_z^2 = d_m=0, d_xy = -i2^-1/2(d_m=2 - d_m=-2), d_x^2-y^2 = 2^-1/2(d_m=2 + d_m=-2).] relevant for the band electrons of MoS_2. For Co, we consider its five d orbitals, split by the crystal field and SOC as in (<ref>), and the local Coulomb interaction U. The third term describes the hybridization between the Co and the MoS_2 monolayer and reads H_hyb = ∑_m_1,m_2,σ V_m_1m_2 d_m_1,σ c^†_m_2,σ + H.c., where m_1 and m_2 are the z components of the d-orbital quantum numbers of Co and Mo, respectively. Here, d_m_1,σ is the fermionic operator for the Co d electrons, and c_m_2,σ = ∑_𝐤 c_𝐤,m_2,σ is the fermionic operator for the charge carriers in MoS_2 with orbital quantum number m_2 at the adsorption site 𝐑_0 = 0. V_m_1m_2 is the hybridization matrix element, which can be non-zero if |m_1 - m_2| mod 3 = 0. We obtain the hybridization matrix elements by fitting the hybridization function of the model Hamiltonian to that obtained from our DFT calculations (for details, see the supplemental material).

To describe directly the spin-orbital-flip scattering, which enters the AIM (<ref>) only through virtual processes, we reduce the AIM to a kinetic exchange Hamiltonian via a Schrieffer-Wolff transformation <cit.>, V_ex = ∑_𝐤,𝐤' J_𝐤𝐤' {S̃^+ c^†_𝐤',-2,↓ c_𝐤,+2,↑ + S̃^- c^†_𝐤',+2,↑ c_𝐤,-2,↓ + S̃_z (c^†_𝐤',+2,↑ c_𝐤,+2,↑ - c^†_𝐤',-2,↓ c_𝐤,-2,↓)}. Here, since the kinetic exchange Hamiltonian refers to the ground-state manifold only, the operators S̃_z and S̃^± (= S̃_x ± iS̃_y) are pseudo-spin operators. S̃^± switches the state of Co between |0;--⟩ and |0;++⟩ and thus describes the simultaneous flipping of the spin and orbital moments. J_𝐤𝐤' is the coupling constant of the kinetic exchange interaction, which is related to the parameters of the AIM by J_𝐤𝐤' = V^∗_m_1m_2 V_m_2m_1 {1/(U + ϵ_d - ϵ_𝐤') + 1/(ϵ_𝐤 - ϵ_d)}, where ϵ_d and ϵ_𝐤 are the eigenvalues of the matrix representations of H_Co and H_MoS_2, respectively.

The spin lifetime τ of Co is obtained by calculating the scattering rate of a carrier in MoS_2 transiting from a state |𝐤,+2,↑⟩ to |𝐤',-2,↓⟩ due to the kinetic exchange interaction. Here, |𝐤,+2,↑⟩ refers to a spin-up carrier with d_m=+2 orbital character. The rate is given by W(𝐤↑→𝐤'↓) = (2π/ħ N^2_k) |⟨𝐤',-2,↓|V_ex|𝐤,+2,↑⟩|^2 δ(ϵ_𝐤 - ϵ_𝐤') f(ϵ_𝐤)(1 - f(ϵ_𝐤')), where 1/N^2_k is a normalization factor, N_k being the total number of k points in the Brillouin zone, and f(ϵ) is the occupation number of the initial and final electron states in MoS_2, given in thermal equilibrium by the Fermi distribution function. Finally, the spin lifetime τ(μ) is obtained as the inverse of the sum of (<ref>) over 𝐤 and 𝐤' with ϵ_𝐤' = ϵ_𝐤 = μ, where μ is the chemical potential in MoS_2. Fixing the parameters to ϵ_d = 0.5 eV, U = 5.0 eV, and T = 4.4 K, our calculations yield the μ-dependent spin lifetime shown in the left panel of Fig. <ref>. The right panel shows the TB band structure of MoS_2, with the orbital character of the bands shown as "fat bands" in different colors. For 0 eV < μ < 1.6 eV, the spin lifetime is practically infinite, as there are no carriers in the MoS_2 bands available for scattering. For 1.6 eV < μ < 1.9 eV, only electrons from the K valley can contribute to the spin scattering, and these carry mainly d_m=0 character; thus, by symmetry, we obtain low scattering rates and relatively long lifetimes. However, for chemical potentials in the range μ < 0 eV or μ > 1.9 eV, electrons in MoS_2 with d_m=±2 orbital character contribute to efficient scattering and a short spin lifetime.
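Returning briefly to the single-ion level scheme introduced above, the quoted ~12 meV doublet splitting can be checked with a short numerical sketch. The code below (Python/NumPy; our own illustration, not the authors' code) diagonalizes the single-particle model H_CF + λL·S with the crystal-field parameters quoted above, treating the d^9 configuration as a single hole in the topmost level and neglecting U; the relative sign between the two c_±3 blocks is our choice, fixed so that time-reversal symmetry, and hence Kramers degeneracy, is respected in the |l,m⟩ basis.

    import numpy as np

    # Minimal sketch (not the authors' code): Co d shell with H = H_CF + lambda*L.S
    # in the electron basis |m, s>, m = -2..2, s = +-1/2. Energies in eV, taken
    # from the crystal-field parameters quoted in the text.
    eps = {0: 0.0, 1: 0.130, -1: 0.130, 2: -0.287, -2: -0.287}
    c3, lam = 0.183, 0.022

    ms, spins = [-2, -1, 0, 1, 2], [0.5, -0.5]
    idx = {(m, s): i for i, (m, s) in enumerate([(m, s) for m in ms for s in spins])}
    H = np.zeros((10, 10))
    for m in ms:
        for s in spins:
            H[idx[m, s], idx[m, s]] = eps[m] + lam * m * s   # CF + lambda*Lz*Sz
    for s in spins:
        # |c_+-3| couples m and m -+ 3; the relative sign between the two blocks
        # is fixed here so that Kramers doublets remain exactly degenerate.
        H[idx[1, s], idx[-2, s]] = H[idx[-2, s], idx[1, s]] = c3
        H[idx[-1, s], idx[2, s]] = H[idx[2, s], idx[-1, s]] = -c3
    for m in ms[:-1]:
        # spin-flip part (lambda/2)(L+S- + L-S+), with <m+1|L+|m> = sqrt(6 - m(m+1))
        v = 0.5 * lam * np.sqrt(6 - m * (m + 1))
        H[idx[m + 1, -0.5], idx[m, 0.5]] = H[idx[m, 0.5], idx[m + 1, -0.5]] = v

    E = np.linalg.eigvalsh(H)     # ascending, pairwise degenerate (Kramers)
    print(E[-1] - E[-3])          # gap between the two highest doublets, which
                                  # the d^9 hole occupies: ~0.011-0.012 eV

In this toy model the c_±3 mixing quenches the orbital moment of the topmost doublet to |⟨L_z⟩| ≈ 0.6, and the resulting splitting of roughly 11-12 meV between the parallel and antiparallel spin-orbital doublets is consistent with the ~12 meV separation quoted above.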
In these strong-scattering regimes (μ < 0 eV or μ > 1.9 eV), we eventually arrive in the quantum regime of full Kondo screening of the impurity spin. Strikingly, as the Fermi level approaches the Σ valley in the CB, the spin lifetime drops extremely abruptly, by more than two orders of magnitude, and reaches its minimum (τ < 10 ps). Meanwhile, in the case of hole doping, μ < 0 eV, the spin lifetime decreases sharply to less than a nanosecond. Hence, information stored as a "magnetic bit" in the adatom spin-orbital moment can be erased by tuning the electronic chemical potential μ in the MoS_2 sheet either into the VB or sufficiently high into the CB. Due to the SOC in MoS_2, there are valley-specific spin splittings of the CB in the Σ valley and of the VB in the K/K' valley <cit.>, usually referred to as spin-valley coupling <cit.>. The variation of the spin lifetime obtained by including SOC also for the electrons of MoS_2 is shown as the red curve in the left panel of Fig. <ref>. Because the variation of the spin lifetime is determined by the evolution of the orbital character of the band electrons, which SOC does not change qualitatively, the curve with SOC follows the same trend as that without SOC. The major effect of SOC in MoS_2 is the occurrence of "plateaus" in the τ vs. μ curve, which can be attributed to the spin-split bands in the Σ and K/K' valleys, i.e., the spin-valley coupling.

Importantly, the spin-valley coupling provides a way to select the spin state of optically excited carriers in MoS_2 monolayers. To explain how this can be exploited for ultrafast optical orientation of the Co magnetic moment, we consider the resonant excitation of electrons from the highest spin-split VB to the lowest CB. In this case, the optically excited electrons in the CB carry mainly d_m=0 character, so that, due to the orbital selection rule, their contribution to the scattering with the Co adatom is minor. Therefore, we consider only scattering with electrons in the VB with d_m=±2 character. The scattering can still be described by Eq. (<ref>) if the Fermi distribution functions f(ϵ) are replaced by the occupation numbers of the electron states in the VB after the optical excitation. To flip the Co spin, the spin of the MoS_2 electrons must be exchanged as well, i.e., the electrons in the VB of MoS_2 have to scatter between the K and K' valleys. Specifically, the transition of an MoS_2 electron from an occupied state in the K' valley to an empty state in the K valley flips the Co spin from |↓⟩ to |↑⟩ (process I in Fig. <ref>). Its time-reversed counterpart (process II in Fig. <ref>) flips the Co spin from |↑⟩ to |↓⟩. Therefore, if the rate of process I is larger (smaller) than that of process II, the Co spin is written into a |↑⟩ (|↓⟩) state.

Using circularly polarized light, the excitation can be performed selectively in the K valley. Hence, we arrive at the situation shown in Fig. <ref>, where |↓⟩ states in the K valley are excited. At this moment the K' valley lacks empty states, so process II is not allowed. In contrast, process I can flip the Co spin from |↓⟩ to |↑⟩. As only one electron from the K' valley is transferred to the adatom, the number of empty states in the K' valley remains at all times much smaller than the corresponding number in the K valley. As long as this imbalance exists, process II is blocked. Therefore, the adatom spin is optically written into a |↑⟩ state, while the recombination of electron-hole pairs in MoS_2 <cit.> finally leads to an equilibrium without free carriers.
We note that the same optical spin orientation mechanism also remains effective if the notoriously strong excitonic effects in materials like MoS_2 <cit.> are accounted for. In conclusion, we showed that orbital selection rules govern the spin-orbital dynamics of magnetic adatoms on a monolayer MX_2. Our study demonstrated that single Co, Ir and Rh adatoms on MoS_2 realize a d^9 valence state with a sizable magnetic anisotropy and a strongly orbital-dependent coupling to carriers in the substrate. This coupling lays the groundwork for the electronic erasing and optical writing of information stored as magnetic bits in adatoms on monolayer TMDCs. Information processing, and more generally optoelectronics, based on the valley degree of freedom appears very promising but faces the problem that valley entities such as excitons are intrinsically volatile. The adatom spins could provide a means of storing and dynamically controlling information encoded in the valley degree of freedom.

§ ACKNOWLEDGEMENTS B.S. thanks the BREMEN TRAC-COFUND Fellowship Program of the University of Bremen for financial support. G.S. and M.S. acknowledge support via the Central Research Development Fund of the University of Bremen. The computations were performed with resources provided by the North-German Supercomputing Alliance (HLRN).

[1] P. Gambardella, S. Rusponi, M. Veronese, S. S. Dhesi, C. Grazioli, A. Dallmeyer, I. Cabria, R. Zeller, P. H. Dederichs, K. Kern, C. Carbone, and H. Brune, Science 300, 1130 (2003).
[2] I. G. Rau, S. Baumann, S. Rusponi, F. Donati, S. Stepanow, L. Gragnaniello, J. Dreiser, C. Piamonteze, F. Nolting, S. Gangopadhyay, O. R. Albertini, R. M. Macfarlane, C. P. Lutz, B. A. Jones, P. Gambardella, A. J. Heinrich, and H. Brune, Science 344, 988 (2014).
[3] S. Baumann, F. Donati, S. Stepanow, S. Rusponi, W. Paul, S. Gangopadhyay, I. Rau, G. Pacchioni, L. Gragnaniello, M. Pivetta, J. Dreiser, C. Piamonteze, C. Lutz, R. Macfarlane, B. Jones, P. Gambardella, A. Heinrich, and H. Brune, Phys. Rev. Lett. 115, 237202 (2015).
[4] H. C. Manoharan, C. P. Lutz, and D. M. Eigler, Nature 403, 512 (2000).
[5] A. F. Otte, M. Ternes, K. von Bergmann, S. Loth, H. Brune, C. P. Lutz, C. F. Hirjibehedin, and A. J. Heinrich, Nat. Phys. 4, 847 (2008).
[6] M. Ternes, A. J. Heinrich, and W.-D. Schneider, J. Phys.: Condens. Matter 21, 053001 (2009).
[7] A. A. Khajetoorians, M. Valentyuk, M. Steinbrecher, T. Schlenk, A. Shick, J. Kolorenc, A. I. Lichtenstein, T. O. Wehling, R. Wiesendanger, and J. Wiebe, Nat. Nanotechnol. 10, 958 (2015).
[8] Q. Liu, C.-X. Liu, C. Xu, X.-L. Qi, and S.-C. Zhang, Phys. Rev. Lett. 102, 156603 (2009).
[9] L. A. Wray, S.-Y. Xu, Y. Xia, D. Hsieh, A. V. Fedorov, Y. S. Hor, R. J. Cava, A. Bansil, H. Lin, and M. Z. Hasan, Nat. Phys. 7, 32 (2011).
[10] J. Honolka, A. A. Khajetoorians, V. Sessi, T. O. Wehling, S. Stepanow, J.-L. Mi, B. B. Iversen, T. Schlenk, J. Wiebe, N. B. Brookes, A. I. Lichtenstein, P. Hofmann, K. Kern, and R. Wiesendanger, Phys. Rev. Lett. 108, 256811 (2012).
[11] M. M. Vazifeh and M. Franz, Phys. Rev. Lett. 111, 206802 (2013).
[12] S. Nadj-Perge, I. K. Drozdov, J. Li, H. Chen, S. Jeon, J. Seo, A. H. MacDonald, B. A. Bernevig, and A. Yazdani, Science 346, 602 (2014).
[13] Y. Peng, F. Pientka, L. I. Glazman, and F. von Oppen, Phys. Rev. Lett. 114, 106801 (2015).
[14] S. Loth, K. von Bergmann, M. Ternes, A. F. Otte, C. P. Lutz, and A. J. Heinrich, Nat. Phys. 6, 340 (2010).
[15] A. A. Khajetoorians, B. Baxevanis, C. Hübner, T. Schlenk, S. Krause, T. O. Wehling, S. Lounis, A. Lichtenstein, D. Pfannkuche, J. Wiebe, and R. Wiesendanger, Science 339, 55 (2013).
[16] A. J. Heinrich, J. A. Gupta, C. P. Lutz, and D. M. Eigler, Science 306, 466 (2004).
[17] F. Meier, L. Zhou, J. Wiebe, and R. Wiesendanger, Science 320, 82 (2008).
[18] A. Yazdani, B. A. Jones, C. P. Lutz, M. F. Crommie, and D. M. Eigler, Science 275, 1767 (1997).
[19] B. W. Heinrich, L. Braun, J. I. Pascual, and K. J. Franke, Nat. Phys. 9, 765 (2013).
[20] E. Cappelluti, R. Roldán, J. A. Silva-Guillén, P. Ordejón, and F. Guinea, Phys. Rev. B 88, 075409 (2013).
[21] G.-B. Liu, W.-Y. Shan, Y. Yao, W. Yao, and D. Xiao, Phys. Rev. B 88, 085433 (2013).
[22] M. Rösner, S. Haas, and T. O. Wehling, Phys. Rev. B 90, 245105 (2014).
[23] D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Phys. Rev. Lett. 108, 196802 (2012).
[24] R. Suzuki, M. Sakano, Y. J. Zhang, R. Akashi, D. Morikawa, A. Harasawa, K. Yaji, K. Kuroda, K. Miyamoto, T. Okuda, K. Ishizaka, R. Arita, and Y. Iwasa, Nat. Nanotechnol. 9, 611 (2014).
[25] X. Xu, W. Yao, D. Xiao, and T. F. Heinz, Nat. Phys. 10, 343 (2014).
[26] W. Kohn and L. J. Sham, Phys. Rev. 140, A1133 (1965).
[27] J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
[28] P. E. Blöchl, Phys. Rev. B 50, 17953 (1994).
[29] G. Kresse and D. Joubert, Phys. Rev. B 59, 1758 (1999).
[30] G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
[31] A. I. Liechtenstein, V. I. Anisimov, and J. Zaanen, Phys. Rev. B 52, R5467 (1995).
[32] For convenience, the three d orbitals are denoted by the z-component of the orbital momentum quantum number m. In the calculations, we employed their real forms d_z^2, d_xy and d_x^2-y^2 as the basis, which can be obtained by the relations d_z^2 = d_m=0, d_xy = -i2^-1/2(d_m=2 - d_m=-2), and d_x^2-y^2 = 2^-1/2(d_m=2 + d_m=-2).
[33] J. R. Schrieffer and P. A. Wolff, Phys. Rev. 149, 491 (1966).
[34] A. Steinhoff, M. Florian, M. Rösner, M. Lorke, T. O. Wehling, C. Gies, and F. Jahnke, 2D Mater. 3, 031006 (2016).
| http://arxiv.org/abs/1706.08365v1 | {
"authors": [
"Bin Shao",
"Malte Schüler",
"Gunnar Schönhoff",
"Thomas Frauenheim",
"Gerd Czycholl",
"Tim O. Wehling"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170626133437",
"title": "Optically and electrically controllable adatom spin-orbital dynamics in transition metal dichalcogenides"
} |
Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA T. J. Watson Laboratory of Applied Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA These authors contributed equally to this work. Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA Present address: Department of Physics, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea. These authors contributed equally to this work. T. J. Watson Laboratory of Applied Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA These authors contributed equally to this work. Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA T. J. Watson Laboratory of Applied Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA T. J. Watson Laboratory of Applied Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA Present address: Department of Electrical and Computer Engineering, University of Massachusetts, 151 Holdsworth Way, Amherst, Massachusetts 01003, USA. Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA [email protected] T. J. Watson Laboratory of Applied Physics, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA [email protected] Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., Pasadena, California 91125, USA Recently, complex wavefront engineering with disordered media has demonstrated optical manipulation capabilities beyond those of conventional optics. These capabilities include extended volume, aberration-free focusing and subwavelength focusing via evanescent mode coupling. However, translating these capabilities to useful applications has remained challenging as the input-output characteristics of the disordered media (P variables) need to be exhaustively determined via 𝒪(P) measurements. Here, we propose a paradigm shift where the disorder is specifically designed so that its exact characteristics are known, resulting in an a priori determined transmission matrix that can be utilized with only a few alignment steps. We implement this concept with a disorder-engineered metasurface, which exhibits additional unique features for complex wavefront engineering such as an unprecedented optical memory effect range, excellent stability, and a tailorable angular scattering profile. Complex wavefront engineering with disorder-engineered metasurfaces Changhuei Yang ===================================================================
This sets it apart from the regime of wavefront manipulation in adaptive optics where the corrections are typically performed for aberrations modeled by a relatively small number of Zernike orders <cit.>. As a class of technologies, complex wavefront engineering is particularly well suited for applications involving disordered media. These applications can be broadly divided into two categories. In the first category, wavefront engineering works to overcome intrinsic limitations of the disordered media. Biological tissue is one such example where scattering is a problem, with complex wavefront engineering emerging as a solution to produce a shaped light beam that counteracts multiple scattering and enables imaging and focusing deep inside the tissue <cit.>. In the second category, disordered media are intentionally introduced in conjunction with wavefront engineering to unlock an optical space with spatial extent (x) and frequency content (ν) that is inaccessible using conventional optics <cit.>. One of the first demonstrations of this ability was reported by Vellekoop et al. <cit.>, showing that the presence of a disordered medium (e.g. a scattering white paint layer) between a source and a desired focal plane can actually help render a sharper focus. In related efforts, researchers have also shown that complex wavefront engineering can make use of disordered media to couple propagating and evanescent modes, in turn enabling near-field focusing <cit.>. Recently, there have been more extensive demonstrations combining disordered media with complex wavefront engineering to increase the flexibility of the optical system to, for example, significantly extend the volumetric range in which aberration-free focusing can be achieved <cit.>.Unfortunately, this class of methods is stymied by one overriding challenge – the optical input-output response of the disordered medium needs to be exhaustively characterized before use <cit.>. Fundamentally, characterizing P input-output relationships of a disordered medium requires 𝒪(P) measurements. For most practical applications, P greater than 10^12 is highly desired to enable high fidelity access to the expanded optical space enabled by the disordered media with wavefront engineering. Unfortunately, the time-consuming nature of the measurements and the intrinsic instability of the vast majority of disordered media have limited the ability to achieve high values of P. To date, the best P quantification that has been achieved is ∼ 10^8 with a measurement time of 40 seconds <cit.>.In this paper, we report the use of a disorder-engineered metasurface (we call this a disordered metasurface for brevity) in place of a conventional disordered medium. The disordered metasurface, which is composed of a 2D array of nano-scatterers that can be freely designed and fabricated, provides the optical `randomness' of conventional disordered media, but in a way that is fully known a priori. Through this approach, we reduce the system characterization to a simple alignment problem. In addition to eliminating the need for extensive characterization measurements, the disordered metasurface platform exhibits a wide optical memory effect range, excellent stability, and a tailorable angular scattering profile – properties that are highly desirable for complex wavefront engineering but that are missing from conventional disordered media. 
Using this disorder-engineered metasurface platform, we demonstrate full control over P = 1.1 × 10^13 input-output relationships after a simple alignment procedure. To demonstrate this new paradigm for controllably exploiting optical `randomness', we have implemented a disordered metasurface assisted focusing and imaging system that is capable of high NA focusing (NA≈ 0.5) to ∼ 2.2 × 10^8 points in a field of view (FOV) with a diameter of ∼ 8 mm. In comparison, for the same FOV, a conventional optical system such as an objective lens can at most access one or two orders of magnitude fewer points.§ PRINCIPLES The relationship between the input and output optical fields traveling through a disordered medium <cit.> can be generally expressed asE_o(x_o, y_o) = ∬ T(x_o, y_o; x_i, y_i) E_i(x_i, y_i)dx_i dy_i,where E_i is the field at the input plane of the medium, E_o is the field at the output plane of the medium, and T is the impulse response (i.e. Green's function) connecting E_i at a position (x_i,y_i) on the input plane with E_o at a position (x_o,y_o) on the output plane. In the context of addressable focal spots with disordered medium assisted complex wavefront engineering, Eq. (<ref>) is discretized such that E_o is a desired focusing optical field, E_i is the linear combination of independent optical modes controlled by the spatial light modulator (SLM), and T is a matrix (i.e. the transmission matrix) where each element describes the amplitude and phase relationship between a given input mode and output focal spot. In this scenario, E_i has a dimension of N, the number of degrees of freedom in the input field (i.e. the number of SLM pixels), E_o has a dimension of M given by the number of resolvable spots on the projection plane, and T is a matrix which connects the input and output fields with P elements, where P = M × N. We note that the following concepts and results can be generalized to other applications (e.g. beam steering or optical vortex generation) simply by switching E_o to an appropriate basis set.One of the unique and most useful aspects of complex wavefront engineering with disordered media is that it allows access to a broader optical space in both spatial extent (x) and frequency content (ν) than the input optical field can conventionally access. For example, when an SLM is used alone, the generated optical field E_i contains a limited range of spatial frequencies due to the large pixel pitch of the SLM (ν_x or ν_y ≤ 1/(2d_SLM) where d_SLM is the pixel pitch; typically ∼ 10µm). As a consequence, the number of resolvable spots M is identical to the number of controllable degrees of freedom N. In contrast, when a disordered medium is placed in the optical path, its strongly scattering nature generates an output field E_o with much higher spatial frequencies given by √(ν_x^2 + ν_y^2)≤ 1/λ, where λ is the wavelength of the light. According to the space-bandwidth product formalism <cit.>, this means that the number of addressable focal spots M within a given modulation area S, is maximally improved toM = S ×π/λ^2.The scheme for focusing with disordered medium assisted complex wavefront engineering can be understood as the process of combining N independent optical modes to constructively interfere at a desired position on the projection plane <cit.>. 
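As a quick sanity check of the bound M = S × π/λ², consider the system demonstrated here (the ~8 mm FOV diameter and λ = 532 nm are taken from this work; the short arithmetic below is our own illustration):

    import numpy as np

    wl = 532e-9                        # design wavelength used in this work
    S = np.pi * (4e-3) ** 2            # modulation area of the ~8 mm diameter FOV
    M_max = S * np.pi / wl ** 2        # resolvable spots for NA = 1
    print(f"{M_max:.1e}")              # ~5.6e8
    print(f"{M_max * 0.5 ** 2:.1e}")   # scaled by NA^2 for NA ~ 0.5: ~1.4e8, the
                                       # same order of magnitude as the ~2.2e8
                                       # resolvable spots reported in this paper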
In general, due to the increased spatial frequency range of the output field, the number of addressable spots M is much larger than the number of degrees of freedom in the input, N, and therefore the accessible focal points on the output plane are not independent optical modes (see supplementary S1). Instead, each focal spot exists on top of a background which contains the contributions from the unoptimized optical modes in the output field. Here the contrast η, the ratio between the intensity transmitted into the focal spot and the surrounding background, is dictated by the number of controlled optical modes in the input, N <cit.>. In practical situations where, for instance, the addressed spots are used for imaging or photo-switching, the contrast η needs to be sufficiently high to ensure that the energy leakage does not harmfully compromise the system performance. To maximize performance, it is clearly desirable to have as many resolvable spots as possible, each with high contrast. This means that both M and N, and in turn P, should be as high as possible. Practically, there are two ways to measure the elements – orthogonal input probing and output phase conjugation (see supplementary S2). In each case, an individual measurement corresponds to a single element in the transmission matrix and is accomplished by determining the field relationship between an input mode and a location on the projection plane. Both still necessitate 𝒪(P) measurements which, when P is large, lead to a prohibitively long measurement time. As a point of reference, if the fast transmission matrix characterization method reported in Ref. <cit.> could be extended without complications, it would still require a measurement time of over 40 days to characterize a transmission matrix with P = 10^13 elements. In comparison, most conventional disordered media remain stable for only several hours <cit.>. In contrast, our disorder-engineered metasurface avoids the measurement problem altogether since all elements of the transmission matrix are known a priori. This means that the procedure to calibrate the system is simplified from the 𝒪(P) measurements needed to determine the transmission matrix to the small number of alignment steps for the disorder-engineered metasurface and the SLM. A schematic illustration of the technique is presented in Fig. <ref> with the omission of a 4-f imaging system optically conjugating the SLM plane to the disordered metasurface. An SLM structures a collimated incident beam into an optimal wavefront which in turn generates a desired complex output wavefront through the disordered metasurface. Since the transmission matrix is known a priori, the process to focus to a desired location is a simple computation. The optimal incident pattern E_i^opt that encodes the information for a target field E_o^target is calculated using the concept of phase conjugation (see materials and methods).
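In practice, computing E_i^opt for a target focus amounts to a pointwise multiplication by the conjugate metasurface phase followed by a low-pass filter (the local average ℒ). The sketch below is our own illustration of this computation; the grid size, the random stand-in for the known phase profile ϕ(x,y), and the assumption of a 10 µm SLM pixel pitch imaged 1:1 onto the metasurface are ours, not the authors' design values.

    import numpy as np

    n, dx, wl = 1024, 350e-9, 532e-9     # grid, metasurface pitch, wavelength
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)

    rng = np.random.default_rng(1)
    phi = 2 * np.pi * rng.random((n, n)) # stand-in for the known metasurface phase

    xf, yf, zf = 0.0, 0.0, 2e-3          # desired focus position (x', y', z')
    target = np.exp(-1j * 2 * np.pi / wl
                    * np.sqrt((X - xf) ** 2 + (Y - yf) ** 2 + zf ** 2))

    field = np.exp(-1j * phi) * target   # ideal phase-conjugate input field

    # Low-pass filter L[.]: keep only spatial frequencies the SLM can generate,
    # assuming a 10 um SLM pixel pitch conjugated 1:1 to the metasurface.
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f)
    F = np.fft.fft2(field)
    F[(np.abs(FX) > 1 / (2 * 10e-6)) | (np.abs(FY) > 1 / (2 * 10e-6))] = 0
    slm_phase = np.angle(np.fft.ifft2(F))  # phase-only pattern for the SLM

Displaying slm_phase on the SLM then reproduces, through the metasurface, an approximation to the converging spherical wave E_o^target.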
This approach enables us to access the maximum possible number of resolvable spots for complex wavefront engineering for a given modulation area S, with the added benefit of control over the scattering properties of the metasurface. § RESULTS §.§ The disorder-engineered metasurface The disordered metasurface platform demonstrated in this study shares the same design principles as the conventional metasurfaces that have been previously reported to implement planar optical components <cit.>: rationally designed subwavelength scatterers or meta-atoms are arranged on a two-dimensional lattice to purposefully shape optical wavefronts with subwavelength resolution (Fig. <ref>A). The disordered metasurface, consisting of Silicon Nitride (SiN_x) nanoposts sitting on a fused silica substrate, imparts local and space-variant phase delays with high transmission for the designed wavelength of 532 nm. We designed the phase profile ϕ(x,y) of the metasurface in such a way that its angular scattering profile is isotropically distributed over the maximal possible spatial bandwidth of 1/λ, and then chose the width of the individual nanoposts according to the look-up table shown in Fig. <ref>B (see materials and methods for details). The experimentally measured scattering profile confirms the nearly isotropic scattering property of the disordered metasurface, presenting a scattering profile that fully extends to the spatial frequency of 1/λ as shown in Fig. <ref>C. This platform also allows tailoring of the scattering profile, which can be potentially useful in conjunction with angle-selective optical behaviors such as total internal reflection. Figure <ref>D presents the measured scattering profiles of disordered metasurfaces designed to have different angular scattering ranges, corresponding to NAs of 0.3, 0.6, and 0.9 (see fig. <ref> for 2D angular scattering profiles). In addition to a highly isotropic scattering profile, the disordered metasurface also exhibits a very large angular (tilt/tilt) correlation range (also known as the optical memory effect <cit.>). The correlation is larger than 0.5 even up to a tilting angle of 30 degrees (Fig. <ref>E). In comparison, conventional scattering media commonly used for scattering lenses, such as opal glass and several micron-thick Titanium Dioxide (TiO_2) white paint layers, exhibit much narrower correlation ranges of less than 1 degree (Fig. <ref>E) <cit.>. Although ground glass diffusers present a relatively wider correlation range of ∼ 5 degrees, their limited angular scattering range makes them less attractive for complex wavefront engineering (see fig. <ref> for the angular tilt/tilt measurement setup and correlation profiles). Moreover, the disordered metasurface is extraordinarily stable. We were able to retain the ability to generate a high quality optical focus from the same metasurface without observable efficiency loss over a period of 75 days by making only minor corrections to the system alignment to compensate for mechanical drift (see fig. <ref>). §.§ High NA optical focusing over an extended volume We experimentally tested our complex wavefront manipulation scheme in the context of disordered medium assisted focusing and imaging. First, we aligned the disordered metasurface to the SLM by displaying a known pattern on the SLM and correcting the shift and tilt of the metasurface to ensure high correlation between the computed and measured output field.
Next, to demonstrate the flexibility of this approach, we reconstructed a converging spherical wave (see materials and methods for details) for a wide range of lateral and axial focus positions. Figure <ref>A presents the simplified schematic for optical focusing (see also materials and methods and fig. <ref> for more details). Figure <ref>B1-B3 shows the 2D intensity profiles for the foci reconstructed along the optical axis at z' = 1.4, 2.1, and 3.8 mm, measured at their focal planes. The corresponding NAs are 0.95, 0.9, and 0.75, respectively. The full width at half maximum (FWHM) spot sizes of the reconstructed foci were 280, 330, and 370 nm, which are nearly diffraction-limited as shown in Fig. <ref>C. The intensity profiles are highly symmetric, implying that the converging spherical wavefronts were reconstructed with high fidelity through the disordered metasurface. It is also remarkable that this technique can reliably control the high transverse wavevector components corresponding to an NA of 0.95, while the SLM used alone can control only those transverse wavevectors associated with an NA of 0.033. Figure <ref>B4-B6 shows the 2D intensity profiles at x' = 0, 1, 4, and 7 mm on the fixed focal plane of z' = 3.8 mm (corresponding to the on-axis NA of 0.75). Because the disordered metasurface-based scattering lens is a singlet lens scheme, the spot size along the x-axis increased from 370 to 1500 nm as the focus was shifted (summarized in Fig. <ref>D). The total number of resolvable spots achievable with the disordered metasurface, M, was experimentally determined to be ∼ 4.3 × 10^8 based on the plot in Fig. <ref>D, exceeding the number of controlled degrees of freedom on the SLM (N ∼ 10^5) by over 3 orders of magnitude. The NA of ∼ 0.5 was also maintained in a lateral FOV with a diameter of ∼ 8 mm, resulting in 2.2 × 10^8 resolvable focal spots. For the sake of comparison, a high-quality objective lens with an NA of 0.5 typically has ∼ 10^7 resolvable spots, an order of magnitude smaller than the number of spots demonstrated with the disordered metasurface. With our disordered metasurface platform we control a transmission matrix with a number of elements P given by the product of the number of resolvable focal spots on the output plane and the number of controllable modes in the input. The P we achieved with our system was 1.1×10^13, which allowed us to address ∼4.3×10^8 focus spots with a contrast factor η of ∼2.5×10^4. This value of P is 5 orders of magnitude higher than what has previously been reported <cit.>. These findings testify to the paradigm-shifting advantage that this engineered `randomness' approach brings. We also experimentally confirmed that even with reduced control over the number of input modes, we can still access the same number of resolvable spots on the output plane, albeit with a reduced contrast. By binning pixels on the SLM, we reduced the number of controlled degrees of freedom on the SLM by up to three orders of magnitude, from ∼ 10^5 to ∼ 10^2, and verified that the capability of diffraction-limited focusing over a wide FOV is maintained (see fig. <ref>). Although the same number of focal spots can be addressed, the contrast factor η is sacrificed when the number of degrees of control is reduced. Using ∼ 10^2 degrees of freedom in the input, we achieved a contrast factor of ∼ 70.
This validates that the complex wavefront manipulation scheme assisted by the disordered metasurface can greatly improve the number of addressable focal spots for complex wavefront engineering regardless of the number of degrees of freedom in the input. §.§ Wide FOV fluorescence imaging Finally, we implemented a scanning fluorescence microscope for high-resolution wide FOV fluorescence imaging (see materials and methods, fig. <ref>, and fig. <ref> for the detailed procedure). Figure <ref>A presents the wide FOV low-resolution fluorescence image of immunofluorescence-labeled parasites (Giardia lamblia cysts; see materials and methods for sample preparation procedures) captured through the 4× objective lens. As shown in the magnified view in Fig. <ref>B3, a typical fluorescent image directly captured with a 4× objective lens was significantly blurred, so that the shape and number of parasites were not discernible from the image. Figure <ref>, B1, C, and D presents the fluorescence images obtained with our scanning microscope. The scanned images resolve the fine features of parasites both near the center and the boundary of the 5-mm wide FOV (Fig. <ref>D). Our platform provides the capability for high NA focusing (NA ≈ 0.5) within a FOV with a diameter of ∼ 8 mm, as shown in Fig. <ref>. To validate the performance of our imaging system, we compare it to conventional 20× and 4× objectives. The captured images in Fig. <ref> demonstrate that we can achieve the resolution of the 20× objective over the FOV of the 4× objective. § DISCUSSION Here we have implemented a disorder-engineered medium using a metasurface platform and demonstrated the benefit of using it for complex wavefront engineering. Our study is the first to propose engineering the entire input-output response of an optical disordered medium, presenting a new approach to disordered media in optics. Allowing complete control of the transmission matrix a priori, the disorder-engineered metasurface fundamentally changes the way we can employ disordered media for complex wavefront engineering. Prior to this study, to control P input-output relationships through a disordered medium, 𝒪(P) calibration measurements were required. In contrast, the disorder-engineered metasurface allows for a transmission matrix with P elements to be fully employed with only a simple alignment procedure. Although we only demonstrate the reconstruction of spherical wavefronts in this study, our method is generally applicable to produce arbitrary wavefronts for applications such as beam steering, vector beam generation, multiple foci, or even random pattern generation (see fig. <ref> for experimental demonstrations). We anticipate that the large gain in the number of addressable optical focal spots (or equivalently angles or patterns) enabled by our method will substantially improve existing optical techniques such as fluorescence imaging, optical stimulation/lithography <cit.>, free space coupling among photonic chips/optical networks <cit.>, and optical encryption/decryption <cit.>. In the specific application of focal spot scanning, our basic system consisting of two planar components, a metasurface phase mask and a conventional SLM, offers several advantages. The system is highly scalable and versatile, bypassing the limitations and complexities of using conventional objective lenses. The scalability of the metasurface can be especially useful in achieving ultra-long working distances for high NA focusing.
The scheme can also be implemented as a vertically integrated optical device together with electronics <cit.> (e.g. a metasurface phase mask on top of a transmissive LCD), providing a compact and robust solution to render a large number of diffraction-limited spots. Furthermore, the concept is applicable over a wide range of the electromagnetic spectrum with the proper choice of low-loss materials for the meta-atoms (e.g. SiN_x or TiO_2 for the entire visible range <cit.> and Si for near infrared wavelengths <cit.>), which allows for multiplexing different colors, useful for multicolor fluorescence microscopy and multiphoton excitation microscopy. Finally, the planar design provides a platform to achieve ultra-high NA solid-immersion lenses <cit.> or total internal reflection fluorescence (TIRF) excitation <cit.>, suitable for super-resolution imaging and single-molecule biophysics experiments. More broadly speaking, we anticipate the ability to customize the design of the disordered metasurface for a particular application will prove highly useful. For example, we can tailor the scattering profile of the disordered metasurface to act as an efficient spatial frequency mixer or to be exploited for novel optical detection strategies <cit.>. The disordered metasurface can serve as a collection lens, analogous to the results obtained for light manipulation, providing an enhanced resolving power and extended view field. Additionally, the metasurface platform can be designed independently for orthogonal polarization states, which provides additional avenues for control in complex wavefront engineering <cit.>. Together, the engineering flexibility provided by these parameters offers unprecedented control over complex patterned illumination, which can directly benefit emerging imaging methods that rely on complex structured illumination <cit.>. To conclude, we explored the use of a disorder-engineered metasurface in complex wavefront engineering, challenging a prevailing view of the `randomness' of disordered media by programmatically designing its `randomness'. The presented technology has the potential to provide a game-changing shift that unlocks the benefits of complex wavefront engineering, opening new avenues for the design of optical systems and enabling new techniques for exploring complex biological systems. § MATERIALS AND METHODS §.§ Design of disordered metasurface The disordered metasurface consists of Silicon Nitride (SiN_x) nanoposts arranged on a subwavelength square lattice with a periodicity of 350 nm as shown in Fig. <ref>A. The width of each SiN_x nanopost is precisely controlled within a range from 60 nm to 275 nm, correspondingly imparting local and space-variant phase delays covering a full range of 2π with close to unity transmittance for an incident wavefront at the design wavelength of 532 nm (Fig. <ref>B). The nanopost widths in the grayed regions of Fig. <ref>B correspond to high quality factor resonances and are excluded from the design of the disordered metasurface. The phase profile ϕ(x,y) of the disordered metasurface is designed to yield an isotropic scattering profile over the desired angular range using the Gerchberg-Saxton (GS) algorithm. The initial phase profile of the far-field is randomly chosen from a uniform distribution between 0 and 2π radians. After several iterations, the phase profile converges such that the far-field pattern has isotropic scattering over the target angular ranges, as illustrated in the sketch below.
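A minimal version of this GS loop, written as our own illustration (the 512-point grid, the 50 iterations, and the NA = 0.6 target, one of the design NAs mentioned earlier, are assumptions, not the authors' design values):

    import numpy as np

    n, pitch, wl = 512, 350e-9, 532e-9
    f = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(f, f)
    mask = np.sqrt(FX ** 2 + FY ** 2) <= 0.6 / wl  # target angular range (NA = 0.6)

    rng = np.random.default_rng(0)
    far = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))  # random initial far field
    for _ in range(50):
        far = np.where(mask, np.exp(1j * np.angle(far)), 0)  # flat intensity in disk
        near = np.fft.ifft2(far)              # back-propagate to the metasurface
        near = np.exp(1j * np.angle(near))    # enforce a phase-only metasurface
        far = np.fft.fft2(near)

    phi = np.angle(near)  # converged phase profile phi(x,y); each sample is then
                          # mapped to a nanopost width via the look-up table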
This approach helps to minimize undiffracted light and evenly distribute the input energy over the whole angular range. §.§ Fabrication of disordered metasurface A SiN_x thin film of 630 nm is deposited using plasma-enhanced chemical vapor deposition (PECVD) on a fused silica substrate. The metasurface pattern is first defined in ZEP520A positive resist using an electron beam lithography system. After developing the resist, the pattern is transferred onto a 60 nm-thick aluminum oxide (Al_2O_3) layer deposited by electron beam evaporation using the lift-off technique. The patterned Al_2O_3 serves as a hard mask for the dry etching of the 630 nm-thick SiN_x layer in a mixture of C_4F_8 and SF_6 plasma and is finally removed by a mixture of ammonium hydroxide and hydrogen peroxide at 80^∘C. §.§ Alignment procedure The alignment procedure consists of two steps to ensure the proper mapping of the SLM pixels onto the intended coordinates of the disordered metasurface. Cross-shaped markers engraved at the four corners of the metasurface are used to guide rough alignment. Then, the marginal misalignments (e.g. translation and tip-tilt) and aberrations induced by the 4-f system are corrected. For this purpose, a collimated laser beam (Spectra-Physics, Excelsior 532) is set to be incident on the metasurface and the resulting field is measured with phase-shifting holography. The residual misalignments and aberrations are then calibrated by comparing the measured complex field with the calculated one and digitally compensating for the misalignment by adding appropriate correction patterns on the SLM. §.§ Procedure for optical focusing The optimal incident pattern E_i^opt that encodes the information for a target field E_o^target is calculated based on the concept of phase conjugation using the expression E_i^opt(x,y) = ℒ[∬ T^†(x,y;x_o,y_o) E_o^target(x_o,y_o) dx_o dy_o] = ℒ[e^-iϕ(x,y) E_o^target(x,y)], where † represents the conjugate transpose, and the function ℒ represents the local spatial average of the ideal phase conjugation field ∬ T^† E_o^target dx_o dy_o within the area corresponding to each controlled optical mode on the SLM. To produce a focal spot at r' = (x',y',z') in free space, the target field is set to a spherical wavefront: E_o^target(x,y) = exp[-i2π/λ√((x-x')^2+(y-y')^2+z'^2)], where z' is the focal length. To perform the local spatial average ℒ, a low-pass spatial frequency filter is applied using a fast Fourier transform algorithm so that the SLM can successfully sample the optimal wavefront E_i^opt. Finally, the SLM (Pluto, Holoeye) is used for phase-only reconstruction of the complex field E_i^opt within a circular aperture with a 4.3 mm radius. In order to measure the focal spot, we use a custom-built microscope setup consisting of a 100× objective lens (Olympus, UMPlanFl) with an NA of 0.95, a tube lens (Nikon, 2×, Plan Apo), and a CCD camera (Imaging Source, DFK 23UP031). §.§ Procedure for scanning fluorescence imaging The setup of our scanning microscope is shown in fig. <ref>C. For the collection of the scanned fluorescent signal, an imaging system consisting of a 4× objective lens (Olympus, 0.1NA, Plan N) and a tube lens (Thorlabs, AC508-100-A-ML) is used to cover most of the FOV of the scanning microscope. We scan the focal spot created behind the metasurface across the region of interest with a 10 ms pixel dwell time.
A pair of galvanometric mirrors are used to scan 2×2 µm^2 patches with a step size of 200 nm, and the neighboring patches are successively scanned by adding a compensation map on the SLM to correct coma aberrations, instead of exhaustively calculating and refreshing the E_i^opt for every spot. The fluorescent signal is detected by the sCMOS camera (PCO, PCO.edge 5.5) with an exposure time of 7 ms. The fluorescence signal is extracted from the camera pixels corresponding to the scanned focus position. The imaging time for a 30×30 µm^2 area is 5 min, which can be easily improved by two orders of magnitude using a high-power laser and resonant scanning mirrors. §.§ Immunofluorescence-labeled sample preparation As a biological sample, we use microscopic parasites, Giardia lamblia cysts (Waterborne, Inc.). Before labeling the Giardia, we first prepare (a) the sample of 10^5 Giardia in 10 µL phosphate buffered solution (PBS) in a centrifuge tube, (b) 1 µg of Giardia lamblia cysts antibody (Invitrogen, MA1-7441) in 100 µL PBS, and (c) 2 µg of Goat anti-Mouse IgG (H+L) Secondary Antibody conjugated with Alexa Fluor 532 fluorescent dye (Life Technologies, A-11002) in 100 µL of PBS. The sample (a) is incubated with a blocking buffer. After the blocking buffer is removed, the sample is again incubated with the Giardia antibody solution (b). The sample is rinsed twice with PBS to remove the Giardia antibody solution. The sample is then incubated with the secondary antibody solution with fluorescent dye (c). Finally, the sample is rinsed twice with PBS to remove the secondary antibody solution. All incubations are carried out for 30 min at 37^∘C. The sample in 10 µL PBS is prepared on a slide with Prolong Gold antifade reagent with DAPI (Life Technologies, P36935) to protect the labeled sample from fading and covered with a coverslip. § ACKNOWLEDGMENT This work is supported by the National Institutes of Health BRAIN Initiative (U01NS090577), and a GIST-Caltech Collaborative Research Proposal (CG2012). Y.H. was supported by a Japan Student Services Organization (JASSO) fellowship. Y.H. and A.A. were also supported by National Science Foundation Grant 1512266 and Samsung Electronics. A.S. was supported by JSPS Overseas Research Fellowships. J.B. was supported by the National Institute of Biomedical Imaging and Bioengineering (F31EB021153) under a Ruth L. Kirschstein National Research Service Award and by the Donna and Benjamin M. Rosen Bioengineering Center. S.M.K. was supported by the DOE "Light-Material Interactions in Energy Conversion" Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences under Award no. DE-SC0001293. The device nanofabrication was performed at the Kavli Nanoscience Institute at Caltech. § AUTHOR CONTRIBUTIONS M.J. and Y.H. conceived the initial idea. M.J., Y.H., A.S., J.B., and C.Y. expanded and developed the concept. M.J., Y.H., and A.S. developed theoretical modeling, designed the experiments, and analyzed the experimental data. M.J. and A.S. carried out the optical focusing experiments. Y.H. performed the full-wave simulation and the design on the metasurface. A.S. performed the fluorescence imaging experiment. Y.H., S.M.K., and A.A. fabricated the metasurface phase mask. Y.L. performed the measurements on the optical memory effect, the angular scattering profiles, and the stability. All authors contributed to writing the manuscript. C.Y. and A.F. supervised the project.
[1] A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, Nat. Photonics 6, 283 (2012).
[2] R. K. Tyson, in Principles of Adaptive Optics, 3rd ed. (CRC Press, 2010), Chap. 6, pp. 177–196.
[3] R. Horstmeyer, H. Ruan, and C. Yang, Nat. Photonics 9, 563 (2015).
[4] I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, Nat. Photonics 4, 320 (2010).
[5] I. M. Vellekoop and C. M. Aegerter, Opt. Lett. 35, 1245 (2010).
[6] E. G. van Putten, D. Akbulut, J. Bertolotti, W. L. Vos, A. Lagendijk, and A. P. Mosk, Phys. Rev. Lett. 106, 193905 (2011).
[7] J.-H. Park, C. Park, H. Yu, J. Park, S. Han, J. Shin, S. H. Ko, K. T. Nam, Y.-H. Cho, and Y. Park, Nat. Photonics 7, 454 (2013).
[8] J. Ryu, M. Jang, T. J. Eom, C. Yang, and E. Chung, Sci. Rep. 6, 23494 (2016).
[9] A. Boniface, M. Mounaix, B. Blochet, R. Piestun, and S. Gigan, Optica 4, 54 (2016).
[10] H. Yu, K. Lee, J. Park, and Y. Park, Nat. Photonics 11, 186 (2017).
[11] Y. Choi, T. D. Yang, C. Fang-Yen, P. Kang, K. J. Lee, R. R. Dasari, M. S. Feld, and W. Choi, Phys. Rev. Lett. 107, 023902 (2011).
[12] S. M. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, New J. Phys. 13, 123021 (2011).
Yoon,and author W. Choi, 10.1364/OE.23.012648 journal journal Optics Express volume 23, pages 12648 (year 2015)NoStop [Popoff et al.(2010)Popoff, Lerosey, Carminati, Fink, Boccara, and Gigan]Popoff2010 author author S. M. Popoff, author G. Lerosey, author R. Carminati, author M. Fink, author A. C. Boccara,and author S. Gigan, 10.1103/PhysRevLett.104.100601 journal journal Physical Review Letters volume 104, pages 100601 (year 2010)NoStop [Lohmann et al.(1996)Lohmann, Dorsch, Mendlovic, Ferreira, and Zalevsky]Lohmann1996 author author A. W. Lohmann, author R. G. Dorsch, author D. Mendlovic, author C. Ferreira,and author Z. Zalevsky, 10.1364/JOSAA.13.000470 journal journal Journal of the Optical Society of America A volume 13,pages 470 (year 1996)NoStop [Vellekoop and Mosk(2007)]Vellekoop2007 author author I. M. Vellekoop and author A. P. Mosk, 10.1364/OL.32.002309 journal journal Optics Letters volume 32,pages 2309 (year 2007)NoStop [Miller(2015)]Miller2015 author author D. A. B.Miller, 10.1126/science.aaa6801 journal journal Science volume 347, pages 1423 (year 2015)NoStop [Choi et al.(2014)Choi, Yoon, Kim, Choi, andChoi]Choi2014 author author Y. Choi, author C. Yoon, author M. Kim, author W. Choi,and author W. Choi, 10.1109/JSTQE.2013.2275942 journal journal IEEE Journal on Selected Topics in Quantum Electronics volume 20, pages 61 (year 2014)NoStop [Park et al.(2015)Park, Park, Yu, and Park]Park2015 author author J. Park, author J.-H. Park, author H. Yu,and author Y. Park, 10.1364/OL.40.001667 journal journal Optics Letters volume 40, pages 1667 (year 2015)NoStop [Yu and Capasso(2014)]Yu2014 author author N. Yu and author F. Capasso,10.1038/nmat3839 journal journal Nature Materials volume 13, pages 139 (year 2014)NoStop [Lin et al.(2014)Lin, Fan, Hasman, and Brongersma]Lin2014 author author D. Lin, author P. Fan, author E. Hasman,and author M. L. Brongersma, 10.1126/science.1253213 journal journal Science volume 345, pages 298 (year 2014)NoStop [Arbabi et al.(2014)Arbabi, Horie, Ball, Bagheri, andFaraon]Arbabi2014 author author A. Arbabi, author Y. Horie, author A. J. Ball, author M. Bagheri,and author A. Faraon, 10.1038/ncomms8069 journal journal Nature communications volume 6, pages 1 (year 2014)NoStop [Backlund et al.(2016)Backlund, Arbabi, Petrov, Arbabi, Saurabh, Faraon, andMoerner]Backlund2016 author author M. P. Backlund, author A. Arbabi, author P. N. Petrov, author E. Arbabi, author S. Saurabh, author A. Faraon,and author W. E.Moerner, 10.1038/nphoton.2016.93 journal journal Nature Photonics volume 10, pages 459 (year 2016)NoStop [Khorasaninejad et al.(2016)Khorasaninejad, Chen, Devlin, Oh, Zhu, and Capasso]Khorasaninejad2016 author author M. Khorasaninejad, author W. T. Chen, author R. C. Devlin, author J. Oh, author A. Y. Zhu,and author F. Capasso, 10.1126/science.aaf6644 journal journal Science volume 352, pages 1190 (year 2016)NoStop [Genevet et al.(2017)Genevet, Capasso, Aieta, Khorasaninejad, and Devlin]Enevet2017 author author P. Genevet, author F. Capasso, author F. Aieta, author M. Khorasaninejad,and author R. Devlin, 10.1364/OPTICA.4.000139 journal journal Opticavolume 4, pages 139 (year 2017)NoStop [Feng et al.(1988)Feng, Kane, Lee, and Stone]Feng1988 author author S. Feng, author C. Kane, author P. A. Lee,and author A. D. Stone, 10.1103/PhysRevLett.61.834 journal journal Physical Review Letters volume 61, pages 834 (year 1988)NoStop [Schott et al.(2015)Schott, Bertolotti, Léger, Bourdieu, and Gigan]Schott2015 author author S. Schott, author J. 
Bertolotti, author J.-F. Léger, author L. Bourdieu,andauthor S. Gigan, 10.1364/OE.23.013505 journal journal Optics Express volume 23, pages 13505 (year 2015)NoStop [Nikolenko et al.(2008)Nikolenko, Watson, Araya, Woodruff, Peterka, and Yuste]Nikolenko2008 author author V. Nikolenko, author B. Watson, author R. Araya, author A. Woodruff, author D. Peterka,and author R. Yuste, 10.3389/neuro.04.005.2008 journal journal Frontiers in Neural Circuits volume 2, pages 5 (year 2008)NoStop [Kim et al.(2017)Kim, Adhikari, and Deisseroth]Kim2017 author author C. K. Kim, author A. Adhikari, and author K. Deisseroth,10.1038/nrn.2017.15 journal journal Nature Reviews Neuroscience volume 18,pages 222 (year 2017)NoStop [Curtis et al.(2002)Curtis, Koss, and Grier]Curtis2012 author author J. E. Curtis, author B. A. Koss, and author D. G. Grier, 10.1016/S0030-4018(02)01524-9 journal journal Optics Communication volume 207,pages 169 (year 2002)NoStop [Bruck et al.(2016)Bruck, Vynck, Lalanne, Mills, Thomson, Mashanovich, Reed,and Muskens]Bruck2016 author author R. Bruck, author K. Vynck, author P. Lalanne, author B. Mills, author D. J. Thomson, author G. Z. Mashanovich, author G. T. Reed,and author O. L. Muskens, 10.1364/OPTICA.3.000396 journal journal Opticavolume 3, pages 396 (year 2016)NoStop [Pappu et al.(2002)Pappu, Recht, Taylor, and Gershenfeld]Pappu2002 author author R. Pappu, author B. Recht, author J. Taylor,and author N. Gershenfeld, 10.1126/science.1074376 journal journal Science volume 297, pages 2026 (year 2002)NoStop [Arbabi et al.(2016a)Arbabi, Arbabi, Kamali, Horie, Han, andFaraon]Arbabi2016 author author A. Arbabi, author E. Arbabi, author S. M. Kamali, author Y. Horie, author S. Han,and author A. Faraon, 10.1038/ncomms13682 journal journal Nature communications volume 7, pages 1 (year 2016a)NoStop [Zhan et al.(2016)Zhan, Colburn, Trivedi, Fryett, Dodson, and Majumdar]Zhan2016 author author A. Zhan, author S. Colburn, author R. Trivedi, author T. K. Fryett, author C. M. Dodson,and author A. Majumdar, 10.1021/acsphotonics.5b00660 journal journal ACS Photonics volume 3, pages 209 (year 2016)NoStop [Fattal et al.(2010)Fattal, Li, Peng, Fiorentino, andBeausoleil]Fattal2010 author author D. Fattal, author J. Li, author Z. Peng, author M. Fiorentino,and author R. G. Beausoleil, 10.1038/nphoton.2010.116 journal journal Nature Photonics volume 4, pages 466 (year 2010)NoStop [Vo et al.(2014)Vo, Fattal, Sorin, Peng, Tran, Fiorentino, and Beausoleil]Vo2014 author author S. Vo, author D. Fattal, author W. V. Sorin, author Z. Peng, author T. Tran, author M. Fiorentino,and author R. G. Beausoleil, 10.1109/LPT.2014.2325947 journal journal IEEE Photonics Technology Letters volume 26, pages 1375 (year 2014)NoStop [Arbabi et al.(2016b)Arbabi, Arbabi, Kamali, Horie, and Faraon]Arbabi2016a author author E. Arbabi, author A. Arbabi, author S. M. Kamali, author Y. Horie,and author A. Faraon, 10.1364/OPTICA.3.000628 journal journal Opticavolume 3, pages 628 (year 2016b)NoStop [Ho et al.(2015)Ho, Qiu, Tanabe, Yeh, Fan, and Poon]Ho2015 author author J. S. Ho, author B. Qiu, author Y. Tanabe, author A. J. Yeh, author S. Fan,and author A. S. Y.Poon, 10.1103/PhysRevB.91.125145 journal journal Physical Review B volume 91, pages 125145 (year 2015)NoStop [Ambrose(1956)]Ambrose1956 author author E. Ambrose, 10.1038/1781194a0 journal journal Nature volume 178, pages 1194 (year 1956)NoStop [Bertolotti et al.(2012)Bertolotti, van Putten, Blum, Lagendijk, Vos, and Mosk]Bertolotti2012 author author J. Bertolotti, author E. G. 
van Putten, author C. Blum, author A. Lagendijk, author W. L. Vos,and author A. P. Mosk, 10.1038/nature11578 journal journal Naturevolume 491, pages 232 (year 2012)NoStop [Redding et al.(2013)Redding, Liew, Sarma, and Cao]Redding2013 author author B. Redding, author S. F. Liew, author R. Sarma,and author H. Cao, 10.1038/nphoton.2013.190 journal journal Nature Photonics volume 7, pages 746 (year 2013)NoStop [Katz et al.(2014)Katz, Heidmann, Fink, and Gigan]Katz2014 author author O. Katz, author P. Heidmann, author M. Fink,and author S. Gigan, 10.1038/nphoton.2014.189 journal journal Nature Photonics volume 8, pages 784 (year 2014)NoStop [Arbabi et al.(2015)Arbabi, Horie, Bagheri, and Faraon]Arbabi2015 author author A. Arbabi, author Y. Horie, author M. Bagheri,andauthor A. Faraon, 10.1038/nnano.2015.186 journal journal Nature Nanotechnology volume 10, pages 937 (year 2015)NoStop [Mudry et al.(2012)Mudry, Belkebir, Girard, Savatier, Le Moal, Nicoletti, Allain, and Sentenac]Mudry2012 author author E. Mudry, author K. Belkebir, author J. Girard, author J. Savatier, author E. Le Moal, author C. Nicoletti, author M. Allain,and author A. Sentenac, 10.1038/nphoton.2012.83 journal journal Nature Photonics volume 6, pages 312 (year 2012)NoStop [Li et al.(2015)Li, Shao, Chen, Zhang, Zhang, Moses, Milkie, Beach, Hammer, Pasham, Kirchhausen, Baird, Davidson, Xu, and Betzig]Li2015 author author D. Li, author L. Shao, author B.-C. Chen, author X. Zhang, author M. Zhang, author B. Moses, author D. E.Milkie, author J. R.Beach, author J. A. Hammer, author M. Pasham, author T. Kirchhausen, author M. A. Baird, author M. W. Davidson, author P. Xu,and author E. Betzig, 10.1126/science.aab3500 journal journal Science volume 349, pages aab3500 (year 2015)NoStop [Miller(2012)]Miller2012 author author D. A. B.Miller, 10.1364/OE.20.023985 journal journal Optics Express volume 20, pages 23985 (year 2012)NoStop [Yaqoob et al.(2008)Yaqoob, Psaltis, Feld, and Yang]Yaqoob2008 author author Z. Yaqoob, author D. Psaltis, author M. S. Feld,andauthor C. Yang, 10.1038/nphoton.2007.297 journal journal Nature Photonics volume 2, pages 110 (year 2008)NoStop [Yoon et al.(2015)Yoon, Lee, Park, and Park]Yoon2015 author author J. Yoon, author K. Lee, author J. Park,and author Y. Park, 10.1364/OE.23.010158 journal journal Optics Express volume 23, pages 10158 (year 2015)NoStop § SUPPLEMENTARY TEXT §.§ Degrees of freedom in the disordered metasurface assisted wavefront engineering systemIn this supplementary section, we describe the disorder-engineered metasurface and phase-only SLM optical system from the main text in a general mathematical framework. This framework is based on the singular value decomposition (SVD) of the linear operator (e.g. the transmission matrix, TM), which allows us to rigorously characterize the degrees of freedom of the optical system <cit.>. We show that the linear operator connecting the input and output optical modes always has a full rank of N (the number of pixels in the SLM), and thus the degrees of freedom for the output modes is also equal to N. However, even though we are limited to N degrees of freedom for the output modes, it is still possible to have a large number of resolvable focal spots within a field of view. 
Finally, we explain why our system has more degrees of freedom than conventional disordered media.

Any linear optical device can be described by a linear operator D which takes an input function |ψ_i⟩ and generates a linear combination of output modes |ψ_o⟩, given as

|ψ_o⟩ = D |ψ_i⟩.

We can always perform the SVD of D, which yields

D = U Σ V^†,

where U and V are unitary matrices, and Σ is a diagonal matrix whose complex values describe the transmission coefficients of independent channels between the input and output modes. Multiplying by U^† from the left-hand side, we have

U^† |ψ_o⟩ = Σ (V^† |ψ_i⟩).

The sets of modes U^†|ψ_o⟩ and V^†|ψ_i⟩ that correspond to nonzero singular values in Σ form the orthogonal sets of basis modes in the output and input spaces.

Next, we consider the case where the linear device operator D represents a general phase mask, the input mode is a wavefront shaped by the SLM, and the output is the field at an arbitrary plane after passing through the phase mask. If the response of the phase mask is insensitive to the input angle, the mask can be thought of as a device which simply multiplies the input field ψ_i(x,y) by a position-dependent transmission function T(x,y) to obtain the output field ψ_o(x,y) on the device output plane:

ψ_o(x,y) = T(x,y) ψ_i(x,y).

Writing this in matrix form yields

|ψ_o^p⟩ = D_mask |ψ_i^p⟩,

where we can choose the orthogonal set of input modes to be the SLM's pixels. The absence of spatial overlap ensures the orthogonality of the modes. This orthogonality of the N modes holds only if the SLM has a pixel pitch larger than λ/2; if the pixel pitch is smaller than λ/2, we cannot count each pixel as an independent mode.

Since the transmission function of the phase mask is local (i.e. the phase mask connects an input at a given transverse position on the input plane to an output at the same transverse location on the output plane), the mask operator D_mask should in general be diagonal and full-rank. This "local" property does not apply to volumetric scattering media, where an input mode can diffuse inside the medium and form a speckle field as an output mode; we will come back to this point later to compare the two cases. For the corresponding set of output modes |ψ_o^p⟩, the mode orthogonality still holds because the locally transmitted output modes do not spatially overlap right after they are transmitted through the mask.

Describing the optical system of the phase mask and phase-only SLM in this fashion, we return to the SVD analysis: D_mask is a diagonal matrix whose elements are the local transmission coefficients (or, equivalently, the transmission coefficients of the eigenchannels), and |ψ_i^p⟩ and |ψ_o^p⟩ are the pairs of orthogonal input and output modes, respectively. From this SVD analysis, we see that the device operator (or TM) describing our proposed optical system is always full-rank, so we have N degrees of freedom for the output modes as well. This statement is true however one designs the phase mask and however the bases are chosen.

For example, for our disordered metasurface phase mask, plane-wave illumination as an input mode can excite all the possible output plane waves nearly isotropically (see Fig. <ref>C in the main text).
If we describe the system using plane waves as the bases and discretize the angles of the plane waves into M and N values for the output and input modes, where M is greater than N, we can describe the system in the form

|ψ'_o⟩ = D'_mask |ψ'_i⟩,

where |ψ'_i⟩ and |ψ'_o⟩ are input and output plane-wave modes, and D'_mask is another representation of the device operator D_mask. However, since the description of the system with the operator D_mask and the input and output sets of orthogonal modes |ψ_i^p⟩ and |ψ_o^p⟩ is a unique and complete characterization of the system, performing the SVD of D'_mask will result in the same full-rank diagonal matrix D_mask described above.

So far, we have considered only the linear system describing the field transformation before and after the phase mask. In our experimental scheme, light also propagates from the phase mask to the focal plane. However, free-space propagation can be accounted for by incorporating the free-space propagation operator, which does not degrade the full-rank operation since it is always full-rank as well.

Now we know that through the metasurface we can control N output modes, because we have N degrees of freedom in the input. On the other hand, we also know that we can focus light to a large number of diffraction-limited spots using wavefront engineering (i.e. choosing the optimum phase for the N input modes in order to form constructive interference peaks at locations of interest). When a disordered medium is used in this way, it is called a "scattering lens." If each resolvable focal spot in the output space were treated as one mode (the total number of which is defined as M according to the space-bandwidth product formalism in the main text), we would seemingly be able to achieve a number of degrees of freedom larger than the rank of our linear system. However, it is not valid to count each resolvable focal spot as an independent mode, because the focal spots created by the scattering lens have correlated, speckle-like backgrounds. Although the number of resolvable focal spots is not equivalent to the number of degrees of freedom, it is an important and useful parameter in many applications. In our focus-scanning scattering lens microscope, since the intensity of an achieved focal spot is significantly higher (>10^4) than the background intensity, we can count the number of resolvable focal spots.

It is also worthwhile to compare the number of degrees of freedom (or eigenchannels) supported by our disordered metasurface phase mask with that of conventional disordered media. For a conventional random medium, multiple scattering completely scrambles the input modes and generates spatially extended speckle fields as output modes. In contrast to the mask-based device, the device operator D_s (or TM, with P = M × N entries) of such a scattering medium is fully populated with complex entries. Similarly, performing the SVD of the TM reveals the number of independent channels of the disordered medium. The TM is generally not full-rank (rank(D_s) ≤ min(N, M)), and it is well known that the singular-value distributions of volumetric disordered media statistically follow the "quarter-circle law," as experimentally confirmed by Popoff et al. <cit.>. Therefore, conventional disordered media deteriorate some degrees of freedom of the output modes, degrading the signal-to-noise ratio (SNR) and the focal contrast η.
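The contrast between the two cases can be checked numerically. The short sketch below (Python/NumPy, with illustrative sizes) compares the singular values of a diagonal unit-modulus mask operator, which are all equal to one (full rank), with those of a dense complex Gaussian random TM, whose singular-value density approaches the quarter-circle law:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 512

# (i) Local phase mask: diagonal operator with unit-modulus entries.
D_mask = np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N)))
s_mask = np.linalg.svd(D_mask, compute_uv=False)
print(s_mask.min(), s_mask.max())   # every singular value is 1: full rank N

# (ii) Volumetric scatterer: dense i.i.d. complex Gaussian TM (square here).
T = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
s_tm = np.linalg.svd(T, compute_uv=False)
# The singular-value density approaches the quarter-circle law on [0, 2]:
# many channels are weak, i.e. output degrees of freedom are degraded.
print(s_tm.min(), s_tm.max())
```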
This means that the advantage of replacing conventional disordered media with a disordered metasurface for complex wavefront engineering is not only that we can operate a scattering lens without characterizing the entire TM of the system, but also that the device operator does not deteriorate the supported degrees of freedom.

§.§ Conventional measurement of the transmission matrix using 𝒪(P) measurements
In previous reports, measurements of the transmission matrix have been performed in one of two ways. The first method can be implemented by displaying N orthogonal patterns on the SLM and recording the output field for each pattern <cit.>. This approach can be understood as measuring the transmission matrix one column at a time, where each column corresponds to one SLM pattern, and each element in the column represents the output-field contribution at a unique focal point on the projection plane. To focus to a given point on the projection plane, the pattern displayed on the SLM is selected as a linear combination of the SLM patterns such that the output field constructively interferes at the desired focal point. In the context of phase-only modulation, this means that the phase of each field vector, controlled by its respective pixel on the SLM, is aligned so as to maximize the sum over all the field vectors at that location. In order to enable focusing at all M focal spots, the output field for each SLM pattern must be measured at each of the M focal-spot locations.

An alternative way to measure the transmission matrix is optical phase conjugation <cit.>. This scheme is typically implemented by creating a calibration focus with an external lens positioned at the desired focus location and recording the optical field transmitted in the reverse direction through the disordered medium toward the SLM. The procedure is then repeated by scanning the focus over all M desired focal spots on the output plane. Mathematically, this approach can be interpreted as measuring the transmission matrix one row at a time, where the elements in each row describe the phase and amplitude relationship between a pixel on the SLM and the desired focal point.

While both of these approaches provide a way to characterize the transmission matrix of a disordered medium, each suffers from limitations that prevent it from being useful for controlling very large transmission matrices (P > 10^12). These stem from the sheer number of measurements, and the time, required to characterize the transmission matrix. The first method is infeasible for large M due to the lack of commercially available camera sensors with the required number of pixels; to the best of our knowledge, the largest transmission matrix reported with this method contained P = 10^8 elements. The second method is not limited by the availability of the requisite technology, but it requires mechanically scanning the focus to each spot. Even assuming the relevant measurement technology existed in both cases, with a measurement speed of 10^8 measurements (i.e. transmission-matrix elements) per second (equivalent to 5 megapixels at 100 frames per second), measuring all P = 10^13 elements of our demonstrated transmission matrix would require over 24 hours.
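The arithmetic behind this estimate is straightforward; the following one-liner reproduces it (values taken from the text above):

```python
# Worked arithmetic behind the measurement-time estimate quoted above.
P = 1e13          # transmission-matrix elements to characterize
rate = 1e8        # measurements per second (about 5 MP at 100 fps)
print(P / rate / 3600.0, "hours")   # ~27.8 hours, i.e. over 24 hours
```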
To make matters worse, conventional disordered media used for wavefront engineering, such as white paint made of TiO_2 or ZnO nanoparticles, have a stability of only several hours <cit.>, so the measured transmission matrix would be invalid by the time the measurement was complete.

[arXiv:1706.08640 - M. Jang et al., "Complex wavefront engineering with disorder-engineered metasurfaces" (physics.optics)]
"authors": [
"Mooseok Jang",
"Yu Horie",
"Atsushi Shibukawa",
"Joshua Brake",
"Yan Liu",
"Seyedeh Mahsa Kamali",
"Amir Arbabi",
"Haowen Ruan",
"Andrei Faraon",
"Changhuei Yang"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20170627015141",
"title": "Complex wavefront engineering with disorder-engineered metasurfaces"
} |
[email protected]
Institute of Physics and CASA^∗, Faculty of Mathematics and Physics, University of Szczecin, Wielkopolska 15, PL-70-451 Szczecin, Poland
Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro, Tokyo, 152-8551, Japan
Division of Liberal Arts, Kogakuin University, 1-24-2 Nishi-Shinjuku, Shinjuku-ku, Tokyo 163-8677, Japan
Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro, Tokyo, 152-8551, Japan

The total amount of dust (or "metallicity") and the dust distribution in protoplanetary disks are crucial for planet formation. Dust grains radially drift due to gas–dust friction, and the gas is in turn affected by the feedback from the drifting grains. We investigate the effects of the feedback from the dust grains on the viscous evolution of the gas, taking into account vertical dust settling. The feedback from the grains pushes the gas outward. When the grains are small and the dust-to-gas mass ratio is much smaller than unity, the radial drift velocity of the gas is reduced by the feedback, but the gas still drifts inward. When the grains are sufficiently large or sufficiently piled up, the feedback is so effective that it forces the gas to flow outward. Although the dust feedback is affected by dust settling, we find that the 2D approximation reasonably reproduces the vertically averaged fluxes of gas and dust. We also performed 2D two-fluid hydrodynamic simulations to examine the effect of the feedback from the grains on the evolution of the gas disk. We show that when the feedback is effective, the gas flows outward and the gas density in the region within ∼10 au is significantly depleted. As a result, the dust-to-gas mass ratio at the inner radii may significantly exceed unity, providing an environment where planetesimals are easily formed via, e.g., the streaming instability. We also show that a simplified 1D model well reproduces the results of the 2D two-fluid simulations, which would be useful for future studies.

§ INTRODUCTION
Terrestrial planets are formed by the accumulation of dust grains in protoplanetary disks <cit.>. In the core-accretion scenario <cit.>, the accumulation of dust grains is also critical for giant planet formation. How dust grains evolve in protoplanetary disks is therefore directly connected with planet formation. Moreover, the evolution of dust grains may explain the ring structures in protoplanetary disks <cit.>, which are observed in several disks <cit.>. The evolution of gas and dust grains is one of the most important topics from both the theoretical and the observational points of view.

Many authors have investigated the evolution of dust grains in protoplanetary disks using semi-analytical models <cit.> and two-fluid hydrodynamic simulations <cit.>. Dust grains radially drift due to gas–dust friction, and the disk gas feels the feedback from the drifting grains. Most previous studies considered the gas–dust friction only for the dust grains and ignored the feedback on the disk gas, because the grains are small and their total mass is negligible compared with the gas. If the dust grains are highly accumulated or grow large, however, the feedback from the grains may not be negligible <cit.>.

The feedback from the dust grains affects the gas structure. For instance, <cit.> showed that the inner edge of the dead zone oscillates due to the feedback. <cit.> found that a pressure bump in the gas is deformed by the feedback from the dust grains trapped in the bump.
Recently, using two-fluid (gas and dust) SPH simulations, <cit.> demonstrated that grains are trapped in a pressure bump formed by the feedback. In this paper, we examine how the feedback influences the global, viscosity-driven evolution of the gas disk. In Section <ref>, we describe the structures of gas and dust grains in steady state, considering the vertical settling of the dust grains. In Section <ref>, we show the results of 2D two-fluid hydrodynamic simulations and of a simplified 1D model, and we discuss the effects of the dust feedback on the disk evolution and on planet formation in Section <ref>. Section <ref> contains our summary.

§ STRUCTURE IN STEADY STATE
§.§ Velocities of gas and dust in a 2D disk
Here we consider the structures of gas and dust grains in steady state. First we ignore the vertical structure and simply consider a 2D disk, using polar coordinates (R,ϕ). We treat only the surface densities of the gas and the dust, Σ_g and Σ_d, instead of the volume densities ρ_g and ρ_d. <cit.> derived formulae for the velocities of the gas and the dust grains in the 2D disk, and very recently <cit.> derived similar formulae. From equations (14) and (15) of <cit.>, the radial and azimuthal velocities in the case of single-sized dust grains are given by

v_{d,R} = -[2St/(St² + (1+ε)²)] η v_K + [(1+ε)/(St² + (1+ε)²)] v_ν ,
v_{d,ϕ} = v_K - [(1+ε)/(St² + (1+ε)²)] η v_K - [St/(2(St² + (1+ε)²))] v_ν ,

and those for the disk gas are given by

v_{g,R} = [2εSt/(St² + (1+ε)²)] η v_K + (1 - ε(1+ε)/(St² + (1+ε)²)) v_ν ,
v_{g,ϕ} = v_K + (ε(1+ε)/(St² + (1+ε)²) - 1) η v_K + [εSt/(2(St² + (1+ε)²))] v_ν ,

where ε = Σ_d/Σ_g is the dust-to-gas surface density ratio. The Stokes number of the dust grains in the 2D disk, St, is given by

St = t_stop Ω_K ,

where t_stop is the stopping time of the dust grains and Ω_K = √(GM_*/R³) is the Keplerian angular velocity at the mid-plane, with G and M_* the gravitational constant and the mass of the central star, respectively. The Keplerian rotation velocity at the mid-plane is v_K = R Ω_K. In the Epstein regime, the stopping time is written as <cit.>

t_stop(R,z) = ρ_int s_d / (√(8/π) ρ_g c_s) ,

where s_d, ρ_int, and c_s are the size and internal density of the dust grains and the sound speed, respectively. In the 2D disk, using the surface density, the stopping time can be written as

t_stop(R) = (π/2) ρ_int s_d / (Σ_g Ω_K) .

The pressure gradient force is parameterized by

η = -(1/2)(h_g/R)² (dln ρ_g/dln R + dln c_s²/dln R) ,

and v_ν is the viscous velocity the gas would have without dust grains, which is given by <cit.>

v_ν = -(3ν/R) dln(ν Σ_g R^{1/2})/dln R ,

where the kinematic viscosity of radial diffusion is ν = α c_s h_g, following the α-prescription <cit.>.

In the inviscid case (ν = 0 and thus v_ν = 0), only the first terms on the right-hand sides of equations (<ref>)–(<ref>) remain; these are the same as equations (2.11)–(2.14) of <cit.> and originate from the gas–dust friction. On the other hand, if the dust surface density is very small (ε → 0), the gas velocities in the radial and azimuthal directions reduce to v_{g,R} = v_ν and v_{g,ϕ} = v_K(1-η), respectively.

We have employed the 2D approximation, meaning that the vertical structures of the gas and dust are assumed to be similar to each other; the gas and dust disks are assumed to have the same scale height. However, the scale height of the dust disk can be much smaller than that of the gas disk if the dust grains settle toward the mid-plane. In the next subsection, we discuss the gas and dust velocities taking the vertical structure into account.
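To make the dependence of these drift formulae on St and ε concrete, the short Python sketch below evaluates the radial velocities; the numerical values of η v_K and v_ν are illustrative placeholders, not the fiducial disk model:

```python
import numpy as np

def steady_velocities(St, eps, eta_vK, v_nu):
    """Steady-state radial velocities of dust and gas in the 2D disk.

    St     : Stokes number of the grains
    eps    : dust-to-gas surface density ratio Sigma_d/Sigma_g
    eta_vK : pressure-support velocity eta*v_K  (> 0)
    v_nu   : viscous gas velocity without dust  (< 0 for inflow)
    """
    D = St**2 + (1.0 + eps)**2
    v_dR = -2.0 * St / D * eta_vK + (1.0 + eps) / D * v_nu
    v_gR = 2.0 * eps * St / D * eta_vK + (1.0 - eps * (1.0 + eps) / D) * v_nu
    return v_dR, v_gR

# Illustrative numbers: eta*v_K = 30 m/s, |v_nu| = 1 m/s (inward).
for St in (0.01, 0.1, 1.0):
    v_dR, v_gR = steady_velocities(St, eps=0.05, eta_vK=30.0, v_nu=-1.0)
    print(f"St={St:4}:  v_dR={v_dR:+7.3f} m/s   v_gR={v_gR:+7.3f} m/s")
# For large enough St*eps the feedback term dominates and v_gR turns
# positive (outward gas flow), the trend discussed in the text.
```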
§.§ Gas and dust motions in a 3D disk
We now consider the gas and dust motions in a 3D disk, using cylindrical coordinates (R,ϕ,z). The gas and dust velocities are v_g = (v_{g,R}, v_{g,ϕ}, v_{g,z}) and v_d = (v_{d,R}, v_{d,ϕ}, v_{d,z}), respectively. The equations of motion for the dust grains are given by

∂v_{d,R}/∂t + (v_d·∇)v_{d,R} - v_{d,ϕ}²/R = -GM_*/R² - (v_{d,R} - v_{g,R})/t_stop ,
∂v_{d,ϕ}/∂t + (v_d·∇)v_{d,ϕ} + v_{d,R}v_{d,ϕ}/R = -(v_{d,ϕ} - v_{g,ϕ})/t_stop ,
∂v_{d,z}/∂t + (v_d·∇)v_{d,z} = -(GM_*/R³) z - (v_{d,z} - v_{g,z})/t_stop ,

where we assume the axisymmetric gravitational potential Ψ = -GM_*/√(R²+z²), adopting R√(1+(z/R)²) ≃ R. The equations of motion for the gas can be written as

∂v_{g,R}/∂t + (v_g·∇)v_{g,R} - v_{g,ϕ}²/R = -(c_s²/ρ_g) ∂ρ_g/∂R - GM_*/R² + f_R/ρ_g - (ρ_d/ρ_g)(v_{g,R} - v_{d,R})/t_stop ,
∂v_{g,ϕ}/∂t + (v_g·∇)v_{g,ϕ} + v_{g,R}v_{g,ϕ}/R = f_ϕ/ρ_g - (ρ_d/ρ_g)(v_{g,ϕ} - v_{d,ϕ})/t_stop ,
∂v_{g,z}/∂t + (v_g·∇)v_{g,z} = -(c_s²/ρ_g) ∂ρ_g/∂z - (GM_*/R³) z + f_z/ρ_g - (ρ_d/ρ_g)(v_{g,z} - v_{d,z})/t_stop ,

where f_R, f_ϕ, and f_z are the viscous forces in the radial, azimuthal, and vertical directions, respectively, which originate from the gas turbulence. Assuming an axisymmetric structure, we can express the viscous forces as

f_R = (2/R) ∂/∂R[ν_rr ρ_g R (∂v_{g,R}/∂R - (1/3)∇·v_g)] + ∂/∂z[ν_rz ρ_g (∂v_{g,R}/∂z + ∂v_{g,z}/∂R)] - (2ν_ϕϕ ρ_g/R)(v_{g,R}/R - (1/3)∇·v_g) ,
f_ϕ = (1/R²) [∂/∂R(ν_rϕ ρ_g R³ ∂Ω_g/∂R) + ∂/∂z(ν_ϕz ρ_g R³ ∂Ω_g/∂z)] ,
f_z = (1/R) ∂/∂R[ν_rz ρ_g R (∂v_{g,R}/∂z + ∂v_{g,z}/∂R)] + 2 ∂/∂z[ν_zz ρ_g (∂v_{g,z}/∂z - (1/3)∇·v_g)] ,

where Ω_g = v_{g,ϕ}/R, and ν_ij indicates the kinematic viscosity associated with the v_i v_j term of the Reynolds stress. The efficiency of the turbulence may differ with direction, hence we distinguish each component ν_ij in equations (<ref>)–(<ref>). In particular, ν_rϕ and ν_ϕz would be important, being related to the transport of angular momentum by radial and vertical shear motions, respectively. For simplicity, however, we just adopt ν_ij = ν in the following. The equations of motion (<ref>)–(<ref>) are the same as those adopted in <cit.>, except for the terms associated with the gas viscosity. The parameter range (e.g., dust-to-gas mass ratio, Stokes number of the dust grains, etc.) in which the basic equations are valid may, however, not be obvious. We discuss the validity of these equations in section <ref>. The continuity equation of the gas is

∂ρ_g/∂t + ∇·(ρ_g v_g) = 0 .

For the dust grains, the continuity equation is expressed as

∂ρ_d/∂t + ∇·(ρ_d v_d + j) = 0 ,

where j is the mass flux due to the turbulence of the dust grains <cit.>. By analogy with molecular diffusion, and assuming an axisymmetric structure, we may write j = (j_R, j_ϕ, j_z) as <cit.>

j_R = -ν_rϕ ρ_g ∂(ρ_d/ρ_g)/∂R ,
j_ϕ = 0 ,
j_z = -ν_ϕz ρ_g ∂(ρ_d/ρ_g)/∂z .

§.§ Gas and dust structures in steady state
Putting ∂/∂t = 0 in the equations of motion and the continuity equations of the gas and the dust grains described above, we consider the structure of the gas and the dust. Since the gravity of the central star dominates the gas motion, the radial and vertical gas velocities are much smaller than v_K, and the deviation of v_{g,ϕ} from v_K is also small. We can therefore put f_R = 0 and f_z = 0 in equations (<ref>) and (<ref>).

First we consider the vertical structure of the gas and the dust grains in steady state. Neglecting the small advection terms in equations (<ref>) and (<ref>), we obtain v_{d,z} - v_{g,z} = -t_stop Ω_K² z and dP/dz = -(ρ_g + ρ_d) Ω_K² z, as shown by <cit.>. Assuming that the vertical gas structure is in hydrostatic equilibrium (v_{g,z} = 0), we obtain v_{d,z} = -t_stop Ω_K² z. When ρ_g ≫ ρ_d, moreover, the vertical structure of the gas density can be assumed to be

ρ_g(R,z) = ρ_g(R,0) exp(-z²/2h_g²) ,

where h_g is the scale height of the gas disk, defined as

h_g(R) = c_s/Ω_K .

Note that when ρ_d ∼ ρ_g, the thickness of the gas disk may be smaller than h_g given by equation (<ref>), because the dust grains drag the gas as they sediment toward the mid-plane <cit.>. Since equation (<ref>) then underestimates the gas density in the mid-plane, we may overestimate the effect of the dust feedback in this case. For simplicity, however, we use equations (<ref>) and (<ref>) even if ρ_d > ρ_g. The validity of this assumption is discussed in section <ref>.
Assuming that the radial variations of the physical quantities follow power laws, ρ_g(R,0) ∝ R^p and c_s ∝ R^{q/2}, the angular velocity of the gas (without dust feedback) is described by <cit.>

Ω_g(R,z) = Ω_K [1 + (1/2)(h_g/R)² (p + q + (q/2)(z²/h_g²))] .

Hence, the deviation from v_K is expressed as R Ω_g = v_K √(1-2η), where

η(R,z) = -(1/2)(h_g/R)² (p + q + (q/2)(z²/h_g²)) .

The surface density of the gas is given by

Σ_g(R) = ∫_{-∞}^{∞} ρ_g dz = √(2π) ρ_g(R,0) h_g(R) .

We obtain the terminal vertical velocity of the dust grains as v_{d,z} = -t_stop Ω_K² z <cit.>. The dust grains settle at this terminal velocity, while turbulence diffuses them upward; in steady state, the dust settling is balanced by the turbulent diffusion <cit.>. In steady state, equation (<ref>) gives

(1/R) ∂/∂R [R (ρ_d v_{d,R} - ν_rϕ ρ_g ∂(ρ_d/ρ_g)/∂R)] + ∂/∂z [ρ_d v_{d,z} - ν_ϕz ρ_g ∂(ρ_d/ρ_g)/∂z] = 0 .

Hence, we obtain

ρ_d v_{d,R} - ν_rϕ ρ_g ∂(ρ_d/ρ_g)/∂R = F_{m,R} ,
ρ_d v_{d,z} - ν_ϕz ρ_g ∂(ρ_d/ρ_g)/∂z = F_{m,z} ,

where F_{m,R} and F_{m,z} are the mass fluxes of the dust grains in the radial and vertical directions, respectively. Considering the situation in which the dust settling is balanced by the diffusion, we put F_{m,z} = 0. Using the terminal velocity of the dust grains, we obtain the vertical distribution of the dust density as <cit.>

ρ_d(R,z) = ρ_d(R,0) exp[-z²/2h_g² - (St_mid/α_ϕz) (exp(z²/2h_g²) - 1)] ,

where St_mid is the Stokes number of the dust grains at the mid-plane and α_ϕz = ν_ϕz/(h_g² Ω_K). When the dust grains are relatively well settled, we expand the expression for ρ_d with respect to z/h_g ≪ 1 and take the leading term. We obtain

ρ_d(R,z) = ρ_d(R,0) exp(-z²/2h_d²) ,

with the scale height of the dust disk given by

h_d(R) = h_g(R) √((α_ϕz/Sc) / (α_ϕz/Sc + St_mid)) .

We set Sc = 1, because we treat dust grains with St_mid < 1 <cit.>. The surface density of the dust grains is given by

Σ_d(R) = ∫_{-∞}^{∞} ρ_d dz = √(2π) ρ_d(R,0) h_d(R) .

Using equations (<ref>), (<ref>) and (<ref>), we obtain the ratio of Σ_d to Σ_g as

Σ_d/Σ_g = [ρ_d(R,0)/ρ_g(R,0)] √(α_ϕz/(α_ϕz + St_mid)) .

Similar to above, we assume that v_{g,R}, v_{g,z}, v_{d,R}, v_{d,z}, and the deviations v_{g,ϕ} - v_K and v_{d,ϕ} - v_K are much smaller than v_K. Keeping only the first-order terms in these small quantities in equations (<ref>), (<ref>), (<ref>), and (<ref>), we obtain the velocities of the dust grains as

v_{d,R}(R,z) = -[2St/(St² + (1+ε)²)] η v_K + [(1+ε)/(St² + (1+ε)²)] v_ν ,
v_{d,ϕ}(R,z) = v_K - [(1+ε)/(St² + (1+ε)²)] η v_K - [St/(2(St² + (1+ε)²))] v_ν ,

and those of the gas as

v_{g,R}(R,z) = [2εSt/(St² + (1+ε)²)] η v_K + (1 - ε(1+ε)/(St² + (1+ε)²)) v_ν ,
v_{g,ϕ}(R,z) = v_K + (ε(1+ε)/(St² + (1+ε)²) - 1) η v_K + [εSt/(2(St² + (1+ε)²))] v_ν ,

where ε = ρ_d/ρ_g, St = t_stop Ω_K, and η is defined by equation (<ref>). The viscous velocity of the gas (without dust feedback) is given by

v_ν(R,z) = -(1/R) [3ν_rϕ dln(ν_rϕ ρ_g R^{1/2})/dln R - q ν_ϕz dln(ρ_g z)/dln z] .

If ν_rϕ = ν_ϕz = ν, we obtain v_ν as

v_ν(R,z) = -(3ν/R) [p + 2q/3 + 2 + (z/h_g)² (5q+9)/6] .
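Before moving to the net flows, a short numerical sketch of the vertical averaging is useful. The function below integrates the local gas velocity over the Gaussian gas and (settled) dust layers above, assuming Sc = 1, Epstein drag (St ∝ 1/ρ_g at fixed grain size), and illustrative parameter values:

```python
import numpy as np

def vgR_mean(St_mid, eps_mid, alpha, p=-2.25, q=-0.5, h=0.05, vK=1.0):
    """Vertically averaged v_{g,R} (in units of v_K), a sketch of the
    equations above with Sc = 1; parameter values are illustrative."""
    z = np.linspace(-5.0, 5.0, 4001)          # height in units of h_g
    rho_g = np.exp(-0.5 * z**2)               # Gaussian gas layer
    h_d = np.sqrt(alpha / (alpha + St_mid))   # settled dust scale height
    rho_d = eps_mid * np.exp(-0.5 * (z / h_d)**2)
    eps = rho_d / rho_g                       # local dust-to-gas ratio
    St = St_mid / rho_g                       # Epstein drag: St ~ 1/rho_g
    eta = -0.5 * h**2 * (p + q + 0.5 * q * z**2)
    v_nu = -3.0 * alpha * h**2 * vK * (p + 2.0*q/3.0 + 2.0
                                       + z**2 * (5.0*q + 9.0) / 6.0)
    D = St**2 + (1.0 + eps)**2
    v_g = 2.0*eps*St/D * eta*vK + (1.0 - eps*(1.0 + eps)/D) * v_nu
    return np.trapz(rho_g * v_g, z) / np.trapz(rho_g, z)

# e.g. Sigma_d/Sigma_g = 0.01 with alpha = 1e-3 and St_mid = 0.1 gives
# eps_mid ~ 0.1; the averaged velocity comes out positive (outward flow).
print(vgR_mean(St_mid=0.1, eps_mid=0.1, alpha=1e-3))
```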
However, the dust feedback makes the outward velocity of gas faster. When α=10^-2, the gas flows inward above the dust layer (z ≳), while the gas in the dust layer flows outward (z <). Above the dust layer, the inward mass flux density is comparable with that assumed in the 2D disk given by /(√(8π)). When α=10^-3, the thin dust layer is formed near the mid-plane. The gas moves outward near the mid-plane. The outward mass flux density near the mid-plane is very large. On the other hand, above the dust layer, the gas flows inward. The inward mass flux above the dust layer is the same level as that in the 2D disk, as in the case of α=10^-2. Figure <ref> shows the vertical averaged radial velocities of the gas and the dust grains given by equations (<ref>) and (<ref>). For comparison, we also plot the gas and dust radial velocity in 2D disk obtained by equation (<ref>) and (<ref>). In the figure, we set = 0.01. When α=10^-2, the gas velocity is hardly affected by the grains if the Stokes number is small enough. As the Stokes number increases, however, the infall gas velocity decreases and when _ mid=1, the infall velocity becomes minimum. As the gas viscosity decreases, the infall velocity due to the viscositydecreases. As a result, the radial gas velocity becomes positive due to the feedback from the grains and the gas moves outward. For instance, when α=10^-3, a relatively small grain with _ mid=0.1 can make the gas flow outward, without any pile-up of grains (=0.01 in the figure). For a smaller viscosity (e.g., α=10^-4), the thickness of the dust layer is very thin. Although the outward gas flux density in the dust layer is very large, the contribution from the inside of the thin dust layer is not significant. In fact, the outward velocity when α=10^-4 is smaller than that when α=10^-3, since the dust feedback is ineffective in the case of the very small viscosity. As seen in the bottom panel of Figure <ref>, the dust grains always flows inward and the infall velocity is hardly affected by the gas viscosity.When the size of the dust grain is sufficiently small (_ mid < 10^-2), the gas and dust radial velocities in the 2D disk given by equation (<ref>) and (<ref>) agree well with the vertically averaged velocities, because the dust layer is not significantly thin. As the size of grains increases, the velocity in 2D disk deviates from the vertical averaged velocity. In this case, the dust grains are settled andnear the mid-plane is much larger than . As a result, the effect of the dust feedback is enhanced due to the dust settling for relatively large viscosity such as the cases with α=10^-2 and α=10^-3. On the other hand, if α is very small, the effect of the dust feedback is ineffective because the thickness of the dust layer is very thin. When α=10^-4, for instance, the vertical averaged velocity is comparable with that of the 2D disk. Although the vertical dust settling affects the effect of the dust feedback as discussed above, equation (<ref>) and (<ref>) reasonably reproduce the vertical averaged velocities given by equation (<ref>) and (<ref>) within a factor of ∼ 2. We show the dependence ofonand _ mid when α=10^-3 in Figure <ref>. If the Stokes number of the dust grains is sufficiently small,is negative regardless of the value of . As the Stokes number of the dust grain reaches unity,increases and the gas can move outward for small . Whenis large as≳ 0.5,becomes smaller asincreases. 
In this case, ε is much larger than unity at the mid-plane, and hence the gas velocity at the mid-plane becomes small (see equation <ref>).

In the top panel of Figure <ref>, we illustrate the relation between the dust-to-gas surface density ratio and the Stokes number of the dust grains for which ⟨v_{g,R}⟩ = 0. If the dust-to-gas mass ratio is larger than this critical value, the gas velocity is positive. When α = 10^-4, small dust grains with St_mid = 0.01 can make the gas move outward if Σ_d/Σ_g ≳ 0.01. For large dust grains with St_mid ≃ 1, the gas can flow outward when the dust-to-gas surface density ratio is only ∼10^-4 and α = 10^-4. Even for relatively large viscosity, the gas moves outward if relatively large dust grains are highly accumulated (e.g., when α = 10^-2, Σ_d/Σ_g ∼ 0.1 for dust grains with St_mid ≃ 0.1). If the grains are small enough (e.g., St_mid ≃ 0.05 in the case of α = 10^-2), on the other hand, the gas moves inward independently of the dust-to-gas surface density ratio.

For comparison, we also plot the dust-to-gas surface density ratio for which the 2D velocity v_{g,R} = 0. As seen from the top panel of Figure <ref>, though the critical Σ_d/Σ_g for v_{g,R} = 0 is slightly larger than that for ⟨v_{g,R}⟩ = 0, they reasonably agree with each other if ε is smaller than unity. Setting v_{g,R} = 0 and assuming ε ≪ 1, we obtain the condition from equation (<ref>) as

(Σ_d/Σ_g)_c ≃ (1 + St^{-2}) (1 + 2ηv_K/(St |v_ν|))^{-1} .

As seen in the bottom panel of Figure <ref>, equation (<ref>) can reproduce the dust-to-gas surface density ratio for which v_{g,R} = 0 if St ≲ 1. Equation (<ref>) also reasonably agrees with the critical dust-to-gas surface density ratio for ⟨v_{g,R}⟩ = 0 when St is smaller than unity. Note that the condition of equation (<ref>) corresponds to the accretion rate of the dust grains (Ṁ_d = 2πRΣ_d v_{d,R}) being equal to the gas mass accretion rate due to viscous diffusion (Ṁ_g = 2πRΣ_g v_ν), in the case of ε ≪ 1.
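For quick estimates, the critical ratio of equation (<ref>) is easy to evaluate; the sketch below does so for a few Stokes numbers, using illustrative values of η v_K and |v_ν|:

```python
import numpy as np

# Critical dust-to-gas surface density ratio above which the gas flows
# outward (eps << 1 approximation). eta*v_K and |v_nu| are illustrative.
def eps_crit(St, eta_vK=30.0, v_nu_abs=1.0):
    return (1.0 + St**-2) / (1.0 + 2.0 * eta_vK / (St * v_nu_abs))

for St in (0.01, 0.1, 1.0):
    print(f"St={St:5}:  (Sigma_d/Sigma_g)_c ~ {eps_crit(St):.2e}")
# Larger grains (St -> 1) need far less dust to reverse the gas flow.
```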
§.§ Surface density distributions of gas and dust
The vertical distribution of the gas flow is affected by dust settling, which can enhance (or reduce) the effect of the dust feedback, as shown in the previous subsections. However, as seen in Figure <ref>, the 2D approximation may be reasonable when we consider the net flux integrated over the vertical direction. In the following, we focus on the net fluxes of gas and dust and discuss the evolution of the surface densities.

In steady state, the distribution of the gas surface density is given by Ṁ_g = 2πRΣ_g ⟨v_{g,R}⟩ = constant <cit.>, so that Σ_g = Ṁ_g/(2πR⟨v_{g,R}⟩). For the distribution of the grains, Σ_d = Ṁ_d/(2πR⟨v_{d,R}⟩) <cit.>. When ε ≪ 1 and St ≪ 1, the radial velocity of the dust grains is v_{d,R} = -2St η v_K. For simplicity, we assume single-sized dust grains. With Σ_g ∝ R^{-s}, we obtain Σ_d ∝ R^{-(s+2f+1/2)}, where f is the flaring index defined by h_g/R ∝ R^f. For v_{g,R}, the first term of equation (<ref>) is approximately 2εSt η v_K and the second term is v_ν. If the feedback is negligible, s = 2f + 1/2. The index of the power law of Σ_d does not change even if the feedback is effective, because the first term in v_{g,R} has the same dependence on radius as v_{d,R}. Hence, in steady state, Σ_d ∝ R^{-4f-1} and Σ_g ∝ R^{-2f-1/2}.

In reality, the grain sizes are determined by coagulation and fragmentation, and are not necessarily constant over the disk. When turbulent fragmentation limits the size, however, the larger dust grains dominate the total mass, and therefore the dust surface density is determined by the maximum grain size, which depends only weakly on location (∝ 1/√R) when s = 1 and f = 1/4 <cit.>. This situation is not very different from the case with single-sized dust grains.

When the viscosity is sufficiently large but ε is not very large, the gas flows inward in the disk. As discussed above, the slope of Σ_g is then the same as in the case of negligible feedback. Hence the distribution of Σ_g does not change much, as long as gas is supplied from outside. Note that in this case the accretion rate of gas Ṁ_g is reduced because the inflow velocity decreases due to the feedback, but Σ_g is not changed. On the other hand, when the viscosity is low or ε is sufficiently large, as shown in Figure <ref>, the gas flows outward in the disk. In this case, a disk structure with ⟨v_{g,R}⟩ < 0 is no longer allowed. Since there is no gas supply from the inside of the disk in most cases, the disk may be depleted from the inside out.

§ EVOLUTION OF GAS DISK
§.§ Model description
§.§.§ Two-dimensional simulations
To examine how the feedback from the grains affects the viscous evolution of gaseous disks, we performed 2D (R,ϕ) hydrodynamic simulations. We extended the publicly available FARGO code <cit.> to include dust grains, simulating the evolution of disk gas and dust by solving the two-fluid (gas and dust) equations of motion and continuity. Because we simulate a 2D disk, we do not consider the settling of the dust grains. However, as shown in Figure <ref>, the velocities of the gas and the dust grains in the 2D disk given by equations (<ref>)–(<ref>) reasonably agree with the vertically averaged velocities in the 3D disk, so the 2D approximation would be reasonably valid in this case. The computational domain ranges from 4 au to 100 au from the central star. The resolution is 512 and 128 cells in the radial and azimuthal directions, respectively.

The initial gas surface density is set to Σ_g = Σ_0 (R/1 au)^-1 with Σ_0 = 570 g cm^-2. We adopt a simple locally isothermal equation of state and assume a disk aspect ratio h_g/R = H_0 (R/1 au)^{1/4} with H_0 = 0.028. The initial angular velocity of the gas is Ω_K √(1-2η), and the mass of the central star is assumed to be 1 M_⊙. The initial radial velocity of the gas is set to v_ν. The initial surface density of the dust grains is 0.01 times that of the gas everywhere in the disk. The initial angular velocity of the dust grains is Ω_K and the initial radial velocity is zero. We adopt dust grains 3 cm in size, corresponding to a Stokes number St of 0.1 at R = 10 au for the initial state. For simplicity, we neglect the coagulation and fragmentation of the dust grains, so the grain size does not change during the simulations.

Since the gas velocity is very sensitive to the values of η and v_ν, especially in the case of small viscosity, numerical instability occurs when small discontinuities of η and v_ν exist at the innermost part of the computational domain. To avoid this instability, we introduced a "coupling-damping region" at the innermost radii (4 au < R < 6 au). In this region the dust feedback on the gas is gradually reduced by cos(πx²/2), where x = (R_1 - R)/(R_1 - R_0) with R_0 = 4 au and R_1 = 6 au. Hence, at the innermost annulus (R = 4 au), the gas velocity is set to v_ν. Moreover, in this region, we force all physical quantities to be azimuthally symmetric by overwriting the quantities with their azimuthal averages at every time step <cit.>. That is, in the coupling-damping zone, Σ_g, v_{g,R} and v_{g,ϕ} are relaxed towards their azimuthally averaged values as

dX/dt = -(X - X_avg)/τ_damp × f(R) ,

where X represents Σ_g, v_{g,R} and v_{g,ϕ}, X_avg denotes their azimuthally averaged values, and τ_damp is the orbital period at the boundary. The function f is a parabola of the form y = x², scaled to be zero at the boundary layer (R = 6 au) and unity at the opposite edge (R = 4 au).
For the dust grains, the velocities at the innermost boundary are given by equations (<ref>) and (<ref>), respectively, and the surface densities of gas and dust are set so that Ṁ = constant <cit.>. At the outer boundary, the velocities of gas and dust are given by equations (<ref>)–(<ref>), and the surface densities are fixed at their initial values. Since the radial distribution of Σ_d is expected to be smooth, the turbulent radial mass flux j_R would be negligible in this case; hence we ignore the radial diffusion flux of equation (<ref>) for simplicity.

§.§.§ One-dimensional simulations
To confirm the validity and usefulness of our analytic formulae, we also calculated the evolution with a simplified 1D model based on these formulae and compared the results to those of the 2D fluid simulations. In the 1D model, we simultaneously solve the 1D continuity equation for the dust grains,

∂Σ_d/∂t = -(1/R) ∂(R Σ_d v_{d,R})/∂R ,

and that for the disk gas,

∂Σ_g/∂t = -(1/R) ∂(R Σ_g v_{g,R})/∂R .

Here, we used equations (<ref>) and (<ref>) as the radial velocities of the dust grains and the disk gas (v_{d,R} and v_{g,R}), respectively. The initial and boundary conditions are the same as in the 2D simulations described above. For the coupling-damping region, we simply cut off the dust feedback to the gas in R < 6 au.

§.§ Results
Figure <ref> illustrates the evolution of the radial velocities and surface densities of gas and dust, and of the dust-to-gas surface density ratio, in the viscous case with α = 10^-2; we compare the results with the 1D model in the figure. In this case, since the gas viscosity is large and the dust grains are not strongly concentrated, the gas flows toward the central star in the entire region of the disk. As discussed in subsection <ref>, the slope of Σ_g is hardly changed from the initial distribution, which is the steady state without the feedback. Σ_d is also distributed as 1/R, as expected in subsection <ref>. Since the velocities of the gas and the dust quickly converge to the steady-state values, the simplified 1D model is able to reproduce the results of the 2D two-fluid hydrodynamic simulations well.

We now show results with different α and initial Σ_d/Σ_g; the agreement between the 1D model and the 2D simulations is always good in the cases presented below. Figure <ref> shows the evolution of the disk with α = 10^-2 and initial Σ_d/Σ_g = 0.05. Because the initial Σ_d/Σ_g is larger than in the case of Figure <ref>, the gas moves outward in a wide region of the disk, and the gas surface density decreases in the inner region. Owing to the gas removal by the feedback, the dust-to-gas mass ratio increases. The drift velocity of the grains decreases as the dust-to-gas mass ratio increases, as pointed out by <cit.> (see also equation <ref>), which leads to a further increase of the dust-to-gas mass ratio. Owing to this positive feedback cycle, the inner region of the disk quickly becomes very dust rich. In this case, after ∼10^5 yr, the gas surface density is only ∼10% of the initial value in the inner region. The dust-to-gas surface density ratio significantly exceeds unity, and the Stokes number of the grains is ∼0.1. The dust-to-gas density ratio at the mid-plane is also very high: using equation (<ref>), we estimate ρ_d/ρ_g at the mid-plane at 10 au as 50 (at 9.6×10^4 yr) and 150 (at 1.6×10^5 yr). Note that the decrease of Σ_d near the inner boundary (R = 6 au) originates from the difference of v_{d,R} inside and outside the coupling-damping zone, since v_{d,R} in the damping zone is fixed to the steady-state value without the feedback.
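The structure of the simplified 1D model is easy to reproduce. The following sketch advances the two continuity equations with a donor-cell (upwind) scheme, using the analytic steady-state velocities; the grid, boundary treatment, and parameter values are illustrative rather than the paper's exact setup:

```python
import numpy as np

def evolve_1d(R, Sg, Sd, t_end, alpha=1e-2, h0=0.028,
              St_ref=0.1, Sg_ref=57.0):
    """Explicit donor-cell sketch of the simplified 1D model.

    R [au] is a uniform grid; Sg, Sd [g cm^-2] are updated in place.
    Units: v_K = 2*pi/sqrt(R) au/yr for a 1 M_sun star.
    """
    dR = R[1] - R[0]
    t = 0.0
    while t < t_end:
        vK = 2.0 * np.pi / np.sqrt(R)
        h = h0 * R**0.25                 # aspect ratio h_g/R (f = 1/4)
        eta = 1.375 * h**2               # -(1/2) h^2 (p+q) for p=-2.25, q=-0.5
        v_nu = -1.5 * alpha * h**2 * vK  # viscous drift of the steady profile
        St = St_ref * Sg_ref / Sg        # fixed grain size: St ~ 1/Sigma_g
        eps = Sd / Sg
        D = St**2 + (1.0 + eps)**2
        v_d = -2.0*St/D * eta*vK + (1.0 + eps)/D * v_nu
        v_g = 2.0*eps*St/D * eta*vK + (1.0 - eps*(1.0 + eps)/D) * v_nu
        dt = 0.2 * dR / max(np.abs(v_d).max(), np.abs(v_g).max())
        for S, v in ((Sg, v_g), (Sd, v_d)):
            F = R * S * v                            # mass flux per radian
            vi = 0.5 * (v[:-1] + v[1:])              # interface velocity
            Fi = np.where(vi > 0.0, F[:-1], F[1:])   # donor-cell upwinding
            S[1:-1] -= dt * (Fi[1:] - Fi[:-1]) / (R[1:-1] * dR)
        t += dt
    return Sg, Sd

R = np.linspace(4.0, 100.0, 481)
Sg = 570.0 / R                     # g cm^-2, ~1/R initial profile
Sd = 0.05 * Sg                     # initial dust-to-gas ratio of 0.05
evolve_1d(R, Sg, Sd, t_end=1.0e5)  # inner gas depletion develops
```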
The evolution with a lower viscosity (α = 10^-3) is shown in Figure <ref>; the initial Σ_d/Σ_g is the same as in Figure <ref>. Since the viscosity is small, in this case the gas moves outward in the entire disk, and the gas surface density significantly decreases in the inner region. As in the case of Figure <ref>, the removal of gas by the dust feedback makes the inner region dusty. After ∼10^5 yr, the dust-to-gas surface density ratio increases up to unity. The dust-to-gas density ratios at the mid-plane at 10 au are estimated as 1 (at 9.6×10^4 yr) and 10 (at 1.6×10^5 yr).

§ DISCUSSION
§.§ Vertical structure of the gas flow
The gas flows outward when the net radial velocity ⟨v_{g,R}⟩ is positive, but, as shown in Figure <ref>, the mass flux density depends on the altitude. The dust grains settle to the mid-plane, forming a dust layer in which they are highly concentrated, while the dust density is very small above the layer. Even when the net mass flux is positive, the gas above the dust layer still flows inward, whereas the gas within the layer flows outward. In Figure <ref>, we illustrate the mass fluxes integrated inside and above the dust layer, as functions of the Stokes number, for Σ_d/Σ_g = 0.01 at 10 au (ρ_g(z=0) = 3.0×10^-12 g cm^-3, h_g/R = 0.05). For relatively large dust grains (e.g., St_mid ∼ 0.1), the net radial velocity of the gas is positive. Above the dust layer (z ≳ 2h_d), however, the gas flows inward. When α = 10^-3 and St_mid = 1, for instance, the gas mass flux above the dust layer is ∼ -10^-11 M_⊙ au^-1 yr^-1, which is comparable with the mass flux without the dust feedback (= Σ_g v_ν/2). This indicates that even when the disk gas is depleted from the inside, as in Figures <ref> and <ref>, the gas accretion onto the central star would not stop. If such a disk is observed, we cannot identify the depletion of the disk gas from the accretion rate onto the central star; we need to observe the disk gas directly to determine the physical conditions of the protoplanetary disk.

§.§ Dust growth and fragmentation
In reality, the grain size is strongly limited by coagulation and fragmentation, though we did not consider this effect in this paper. For silicate grains, since the size is strongly limited by fragmentation, the size of the grains which locally dominate the grain density does not change much in the inner region <cit.>. In this case, the maximum size of the dust grains is given by

s_max ∼ (2/3π) (Σ_g/(ρ_int α)) (v_frag/c_s)² ,

where v_frag is the fragmentation threshold velocity, which depends on the composition (e.g., ice or silicate) of the dust grains. The fragmentation threshold velocity of silicate dust grains is estimated as 1 m/s – 10 m/s <cit.>. When α = 10^-3 and c_s = 10^3 m/s, the Stokes number of the maximum-size grains with v_frag = 10 m/s is about ∼0.1. In this case, the dust feedback can influence the entire evolution of the gas disk, as shown in Figures <ref> and <ref>, which leads to the formation of rocky planetesimals. If v_frag ≪ 10 m/s, the dust feedback does not work, since the dust grains cannot grow to a sufficiently large size. If v_frag ≫ 10 m/s, the dust grains can grow up to planetesimals without fragmentation, as shown by <cit.>, or the dust quickly falls onto the star because it can quickly grow up to sizes with St ∼ 1.
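Combining the expression for s_max above with St = π ρ_int s_d/(2Σ_g) from Section <ref> gives a fragmentation-limited Stokes number that is independent of Σ_g; the snippet below evaluates it for the quoted parameters (values illustrative):

```python
# Fragmentation-limited Stokes number, St_frag = (v_frag/c_s)^2 / (3*alpha),
# obtained by combining s_max with St = pi*rho_int*s_d/(2*Sigma_g).
alpha, c_s, v_frag = 1e-3, 1.0e3, 10.0   # c_s and v_frag in m/s
St_frag = (v_frag / c_s)**2 / (3.0 * alpha)
print(f"St_frag ~ {St_frag:.2f}")        # ~0.03, i.e. of order 0.1
```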
§.§ Implications for planet formation
When the dust grains are concentrated relative to the gas, the grains can grow quickly into planetesimals via the streaming instability <cit.>. Roughly speaking, when ρ_d ≳ ρ_g, the streaming instability may develop significantly <cit.>. As shown in Figures <ref> and <ref>, when the dust feedback is effective, Σ_d/Σ_g increases above unity within 10 au after ∼10^5 yr. Using equation (<ref>), we can estimate ρ_d/ρ_g at the mid-plane as >10, because St ≃ 0.1. This ratio is sufficiently high for the streaming instability to occur, though the precise condition for the streaming instability may depend on the viscosity, the total amount of dust grains (metallicity), etc. The streaming instability can set in at even earlier stages of the disk evolution than 10^5 yr; planetesimal formation should therefore be considered together with the disk evolution.

Once the streaming instability occurs, the dust-to-gas mass ratio will decrease as the dust grains grow into planetesimals. However, not all dust grains are converted into planetesimals <cit.>. If the efficiency of planetesimal formation is large enough to remove the dust grains, the dust feedback becomes ineffective and the gas density will be restored to that of the steady state without the feedback. If the efficiency is moderate, on the other hand, the dust feedback can remain effective after the streaming instability sets in. In this case, planetesimals form as the gas depletes, which implies that the migration speed of the planetesimals would decrease significantly. If a planet is formed, its migration would also be slow, which is favorable for planet formation. This situation can continue until most of the dust grains fall onto the central star.

§.§ Implications for disk observations
Since the disk gas is depleted from the inner region, a hole structure would form in the gas. For the dust grains, on the other hand, the density does not decrease in the gas-depleted region. However, if the dust grains are effectively converted into planetesimals, as discussed in the previous subsection, the density of the relatively large grains at the mid-plane that are observed at sub-millimeter wavelengths may decrease. In this case, a hole structure could be found in sub-millimeter dust continuum observations, as well as in observations of molecular lines. A recent ALMA observation by <cit.> revealed that a young protoplanetary disk has a hole structure in the dust continuum within ∼10 au. This hole structure may be explained by the gas depletion due to the dust feedback and the planetesimal formation triggered by the gas depletion.

§.§ Validity of the model
We briefly comment on the validity of our treatment of the dust fluid. We treat the dust grains as a pressureless fluid and ignore some effects of turbulence in the equations of motion (equations <ref>–<ref>; see Appendix <ref>). Although this formulation is widely used in previous studies <cit.>, this treatment of the dust fluid may not be appropriate in cases where the dust density is comparable to the gas density, or where the Stokes number of the dust grains is large. According to <cit.>, the pressure of the dust fluid is not negligible when St > 1/2. At the mid-plane, where the dust density is comparable to the gas density, the dust settling is prevented by the turbulence driven by the instability of the dust fluid <cit.>.
We must take into account the momentum transfer due to the dust turbulence in this case, and a more sophisticated formulation for dust and gas may need to be explored.

Moreover, the vertical structures of the gas and the dust grains may differ from what we have assumed in equations (<ref>) and (<ref>) when the dust density is larger than the gas density. The gas experiences drag from the dust grains that sediment toward the mid-plane, so the gas density in the mid-plane is larger than that given by equation (<ref>). As the dust-to-gas density ratio increases, the settling velocity of the dust grains slows down (v_{d,z} = -t_stop Ω_K² z / (1 + ρ_d/ρ_g)), which leads to a vertical swelling of the dust layer. As a result, the vertical distributions of the gas and the dust also deviate from Gaussian when ε is larger than unity (see Appendix A of <cit.> or Appendix <ref>). When adopting equations (<ref>) and (<ref>), we may therefore have overestimated the dust feedback near the bottom of the dust layer (the mid-plane), while underestimating it in the upper region of the layer. If one considers vertically averaged quantities, the approximation of Gaussian distributions may be reasonable. In any case, if the dust-to-gas density ratio at the mid-plane is of order unity and the Stokes number of the dust grains is of order unity, the drag force exerted on the gas is at most comparable to the (vertical component of the) gravitational force from the central star, and we therefore expect that the qualitative outcomes presented in this paper are not much affected. If the dust-to-gas density ratio increases further, it may be necessary to consider a more sophisticated formulation of the basic equations (e.g., including the onset of the streaming instability), which is beyond the scope of this paper.

In summary, our results may be safely used when the dust-to-gas mass ratio is less than ∼1, and we expect the qualitative results to be valid up to ε ∼ 1. If the dust grains are highly concentrated and dominate the gas, a more sophisticated formulation may be necessary to describe the dynamics of gas and dust. The streaming instability or other hydrodynamic instabilities may come into play when the dust grains dominate, in which case the dust-to-gas mass ratio may not increase too much after all.

§ SUMMARY
In this paper, we considered the effect of the dust feedback on the viscous evolution of the gas disk. Our results are summarized as follows:
* We present analytical expressions for the velocities of gas and dust grains in the 2D disk, considering the viscous evolution of the gas and the gas–dust friction (equations <ref>–<ref>). Considering the vertical dust settling, we also derived formulae for the radial velocities of the gas and the dust grains (equation <ref> with equation <ref>). For the net radial velocities of the gas and the dust, the analytical expressions for the 2D disk reasonably agree with the formulae that account for dust settling (see Figure <ref>).
* We found that the feedback from the grains significantly affects the radial velocity of the gas (see Figures <ref> and <ref>). When the gas viscosity is sufficiently large and the dust grains are not concentrated, the gas infall slows down due to the dust feedback. As the viscosity decreases or the dust-to-gas mass ratio increases, the gas even flows outward due to the dust feedback.
* We also performed 2D two-fluid hydrodynamic simulations and showed how the feedback changes the evolution of the gas.
As long as the viscosity is large and the initial dust-to-gas mass ratio is small, the gas flows inward. In this case, the gas disk evolves as in the case without effective dust feedback, though the infall velocity of the gas decreases (see Figure <ref>). When the viscosity is small or the initial dust-to-gas mass ratio is large, the feedback drastically changes the evolution of the disk. The gas flows to the outside of the disk, and the gas in the inner region is significantly depleted (see Figures <ref> and <ref>). The gas removal slows down the infall velocity of the dust grains, and the dust-to-gas mass ratio increases further.

* We presented the idea of a simplified 1D model, in which we solve the continuity equations with the velocities given by equations (<ref>) and (<ref>). Since the velocities quickly converge to those of the steady state, the simplified 1D model reproduces the results of the 2D hydrodynamic simulations well, as can be seen in Figures <ref> – <ref>.

* We also discussed the effect of the vertical structure on the disk evolution. Even if the disk gas is depleted, gas accretion onto the star can continue; direct observations of the gas are needed to detect the gas depletion.

* In the inner region of the disk, the size of silicate dust grains would be strongly limited by fragmentation. This situation is not very different from the cases assumed in our simulations. If the silicate grains remove the gas via the feedback, rocky planetesimals would be formed via the streaming instability.

In this paper, we pointed out that the dust feedback can significantly influence the evolution of the gas disk. However, the size evolution of the dust grains must also be considered, because the feedback strongly depends on the grain size. In order to understand the evolution of the gas disk and the grains, the evolution of the grain size and the evolution of the gas should be taken into account consistently, together with the dust feedback.

We would like to thank the anonymous referee for his/her helpful comments. This work was supported by the Polish National Science Centre MAESTRO grant DEC-2012/06/A/ST9/00276. Numerical computations were carried out on the Cray XC30 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. S. O. is supported by Grants-in-Aid for Scientific Research (#15H02065, 16K17661, 16H04081) from MEXT of Japan. T. M. is supported by Grants-in-Aid for Scientific Research (#26800106, 15H02074, 17H01103) from MEXT of Japan.

§ VALIDITY OF BASIC EQUATIONS OF THE GAS AND DUST GRAINS

In order to describe flows including turbulence, the Reynolds-averaged Navier-Stokes equations of motion are commonly used. In this approach, the density and all velocity components are divided into mean and fluctuating parts as $\rho = \bar{\rho} + \rho'$ and $\vec{v} = \bar{\vec{v}} + \vec{v}'$ (the former term is the mean part and the latter term is the fluctuating part). The Reynolds-averaged Navier-Stokes equations of the gas and the dust grains in protoplanetary disks were derived by <cit.>. Here we derive our basic equations (<ref>)–(<ref>) from the Reynolds-averaged equations (equations A.11–A.13 of <cit.>).
As general forms of the Reynolds-averaged equations, <cit.> obtained

\[
\frac{\partial}{\partial t}\left(\bar{\rho}\bar{v}_i\right) + \frac{\partial}{\partial t}\overline{\rho' v'_i} + \frac{\partial}{\partial x_j}\left(\bar{\rho}\bar{v}_j\bar{v}_i\right) = -\frac{\partial \bar{P}}{\partial x_i} - \bar{\rho}\frac{\partial \Psi}{\partial x_i} + \frac{\partial \sigma_{ij}}{\partial x_j} - \frac{\partial}{\partial x_j}\left(\bar{v}_j\,\overline{\rho' v'_i}\right) - \frac{\partial}{\partial x_j}\left(\bar{v}_i\,\overline{\rho' v'_j}\right) + \bar{F}_i,
\]

where $P$ and $\Psi$ are the pressure and the gravitational potential, respectively, and $\sigma_{ij}$ is the Reynolds stress tensor defined by

\[
\sigma_{ij} = -\bar{\rho}\,\overline{v'_i v'_j} - \overline{\rho' v'_i v'_j},
\]

and $F_i$ is the gas–dust friction force given by

\[
F_i = \pm\left[\frac{\bar{\rho}_{\rm d}\left(\bar{v}_{{\rm g},i}-\bar{v}_{{\rm d},i}\right)}{t_{\rm stop}} + \frac{\overline{\rho'_{\rm d} v'_{{\rm g},i}} - \overline{\rho'_{\rm d} v'_{{\rm d},i}}}{t_{\rm stop}}\right],
\]

where the sign of equation (<ref>) is positive for the gas, while it is negative for the dust grains. As discussed later, if the fluctuating motion of the dust grains is dominated by the gas–dust friction force, we may neglect the second term of equation (<ref>) (see also Appendix B of <cit.>).

First let us consider the equations of motion for the gas. If the Mach number of the turbulence is not very high, we can treat the gas as a weakly compressible fluid. In this case, since $\bar{\rho}_{\rm g} \gg \rho'_{\rm g}$, we can neglect the time variation of $\overline{\rho'_{\rm g} v'_{{\rm g},i}}$ (2nd term on the LHS of equation <ref>) and the terms related to advection due to the turbulence (4th and 5th terms on the RHS of equation <ref>). Using the Newtonian viscous stress tensor instead of the Reynolds stress tensor, we obtain the equations of motion for the gas as equations (<ref>) – (<ref>).

Since we treat the dust fluid as pressureless, $P_{\rm d}=0$ in equation (<ref>) for the dust grains. If the gradient diffusion hypothesis is adopted <cit.>, the mass flux due to the turbulence can be written as

\[
\overline{\rho'_{\rm d} v'_{{\rm d},i}} = -D\,\bar{\rho}_{\rm g}\,\frac{\partial}{\partial x_i}\left(\frac{\bar{\rho}_{\rm d}}{\bar{\rho}_{\rm g}}\right),
\]

where $D$ is the diffusion coefficient, which may be given by $D = \nu/{\rm Sc}$, where $\nu$ is the kinematic viscosity of the gas and ${\rm Sc}$ is the Schmidt number. In this case, the advection terms due to the turbulence (4th and 5th terms on the RHS of equation <ref>) are proportional to $\nu\bar{\rho}_{\rm d}/{\rm Sc}$. As long as the distributions of $\bar{\rho}_{\rm d}/\bar{\rho}_{\rm g}$ and $\bar{v}$ are not steep, those terms are much smaller than $F_i$ given by equation (<ref>), because $\bar{\rho}_{\rm d}(\bar{v}_{{\rm g},i}-\bar{v}_{{\rm d},i}) \sim \eta v_{\rm K}\,\bar{\rho}_{\rm d}$. We can therefore neglect the 4th and 5th terms on the RHS of equation (<ref>).

The Reynolds stress of the dust grains may not be negligible. Let us consider how the fluctuating motion of the dust grains responds to a fluctuating motion of the gas with velocity $v_0$. Assuming that the decay time of the gas fluctuating motion is longer than the time scale considered here, we regard $v_0$ as independent of time. If the fluctuating motion of the dust grains is dominated by the gas–dust friction, we may write the governing equation of the fluctuating motion of a dust grain as

\[
\frac{\partial v'_{{\rm d},i}}{\partial t} = -\frac{v'_{{\rm d},i} - v_0}{t_{\rm stop}}.
\]

In this case, the fluctuating velocity of the dust grains is given by

\[
v'_{{\rm d},i}(t) = v_0 + \left[v'_{{\rm d},i}(t=t_0) - v_0\right] e^{-t/t_{\rm stop}}.
\]

The difference between the fluctuating velocities of the gas and the dust grains decays as $\exp(-t/t_{\rm stop})$. For small grains, especially, the velocity of the dust fluctuating motion converges quickly to that of the gas fluctuating motion.
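The exponential relaxation derived above is easy to verify numerically. The following minimal sketch (our own illustration, in arbitrary units) integrates the relaxation equation with an explicit Euler scheme and compares the result with the analytic solution:

import numpy as np

t_stop = 1.0            # stopping time (arbitrary units)
v0, v_init = 1.0, 0.0   # gas fluctuation and initial dust fluctuation
dt, n_steps = 1.0e-3, 5000

v = v_init
for _ in range(n_steps):
    v += -dt * (v - v0) / t_stop      # explicit Euler step of dv'/dt

t = n_steps * dt
v_analytic = v0 + (v_init - v0) * np.exp(-t / t_stop)
print(f"numerical: {v:.5f}  analytic: {v_analytic:.5f}")
# after t = 5 t_stop the dust fluctuation has relaxed to the gas value v0
# to better than one per cent, illustrating the exponential convergence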
In this case, the correlation $\overline{v'_{{\rm d},i} v'_{{\rm d},j}}$ would be approximately given by $\overline{v'_{{\rm g},i} v'_{{\rm g},j}}$ in equation (<ref>). Similarly, the correlation $\overline{\rho'_{\rm d} v'_{{\rm d},i}}$ is also approximated as $\overline{\rho'_{\rm d} v'_{{\rm g},i}}$, and the two contributions cancel each other in the second term of equation (<ref>). When $\bar{\rho}_{\rm d} \ll \bar{\rho}_{\rm g}$, the components of the Reynolds stress tensor of the dust grains are much smaller than those of the gas because the dust density is small. However, when $\bar{\rho}_{\rm d} \sim \bar{\rho}_{\rm g}$, the components of the Reynolds stress tensor of the dust grains are comparable to those of the gas. In this case, we may need to consider the kinetic viscosity of the dust grains in the equations of motion. However, since there is large uncertainty in the treatment of the dust fluctuating motion, we drop the term related to the Reynolds stress of the dust grains for simplicity. Ignoring the time variation of the turbulence, the Reynolds stress term, and the advection terms due to the turbulence, we derive the equations of motion for the dust grains (<ref>)–(<ref>) from the Reynolds-averaged Navier-Stokes equation (<ref>).

§ VERTICAL STRUCTURES OF GAS AND DUST GRAINS

The vertical structures of the gas and the dust grains are described by the Gaussian distributions of equations (<ref>) and (<ref>). However, when $\rho_{\rm d} \gtrsim \rho_{\rm g}$, the vertical distributions deviate from Gaussian distributions. <cit.> considered the vertical structure of the gas by adopting a simple dust vertical distribution, $\rho_{\rm d}(z) = {\rm const.} = \rho_{\rm d,0}$ for $|z| < H_{\rm d}$, and $\rho_{\rm d}(z) = 0$ otherwise (see Appendix A of that paper). The gas density at the mid-plane is then given by $\rho_{\rm g}(0)(1 + \epsilon_0 f)$, where $\rho_{\rm g}(0)$ is the gas density at $z=0$ given by equation (<ref>), $\epsilon_0$ is the mid-plane dust-to-gas density ratio, and $f$ is a function of $H_{\rm d}/H_{\rm g}$ of order unity (e.g., $f=0.5$ when $H_{\rm d}/H_{\rm g}=1$ and $f=1$ when $H_{\rm d} \ll H_{\rm g}$). Hence, if $\epsilon_0 \ll 1$, the vertical distribution of the gas is given by the Gaussian distribution (equation <ref>). If $\epsilon_0 \gg 1$, however, the gas density at the mid-plane is larger than that given by equation (<ref>), and the vertical distribution deviates from the Gaussian distribution.

Here we consider self-consistent vertical structures of the gas and the dust grains. The vertical gradient of the gas density is given by (see Section <ref>):

\[
\frac{d\rho_{\rm g}}{dz} = -\left(\rho_{\rm g} + \rho_{\rm d}\right)\frac{z}{H_{\rm g}^2}.
\]

From equation (<ref>) with $F_{{\rm m},z}=0$, the vertical gradient of the dust density is given by

\[
\frac{d\rho_{\rm d}}{dz} = -\rho_{\rm d}\left[\frac{{\rm St}}{\alpha}\frac{1}{1 + \rho_{\rm d}/\rho_{\rm g}} + 1 + \frac{\rho_{\rm d}}{\rho_{\rm g}}\right]\frac{z}{H_{\rm g}^2}.
\]

Solving equations (<ref>) and (<ref>), we obtain the vertical structures of the gas and the dust grains. In Figure <ref>, we show the vertical structures of the gas and the dust grains for $\alpha=10^{-3}$ and ${\rm St} = 0.1$. For comparison, we plot the Gaussian distributions given by equations (<ref>) and (<ref>). When the mid-plane dust-to-gas density ratio is unity, the gas distribution agrees with equation (<ref>), although the gas density near the mid-plane is slightly enhanced. When the ratio is 5, the gas density near the mid-plane increases, while the gas density away from the mid-plane slightly decreases. The thickness of the dust layer becomes larger than that expected from equation (<ref>), because the settling velocity is slowed down by the factor $1 + \rho_{\rm d}/\rho_{\rm g}$. The dust density at the mid-plane decreases, due to the vertical swelling of the dust layer. The dust-to-gas density ratio is smaller than that expected from the Gaussian distributions of equations (<ref>) and (<ref>), while it is larger in the upper part of the dust layer.

[Akiyama et al.(2015)Akiyama, Muto, Kusakabe, Kataoka, Hashimoto, Tsukagoshi, Kwon, Kudo, Kandori, Grady, Takami, Janson, Kuzuhara, Henning, Sitko, Carson, Mayama, Currie, Thalmann, Wisniewski, Momose, Ohashi, Abe, Brandner, Brandt, Egner, Feldt, Goto, Guyon, Hayano, Hayashi, Hayashi, Hodapp, Ishi, Iye, Knapp, Matsuo, Mcelwain, Miyama, Morino, Moro-Martin, Nishimura, Pyo, Serabyn, Suenaga, Suto, Suzuki, Takahashi, Takato, Terada, Tomono, Turner, Watanabe, Yamada, Takami, Usuda, & Tamura]Akiyama2015 Akiyama, E., Muto, T., Kusakabe, N., et al.
2015, , 802, L17[ALMA Partnership et al.(2015)ALMA Partnership, Brogan, Pérez, Hunter, Dent, Hales, Hills, Corder, Fomalont, Vlahakis, Asaki, Barkats, Hirota, Hodge, Impellizzeri, Kneissl, Liuzzo, Lucas, Marcelino, Matsushita, Nakanishi, Phillips, Richards, Toledo, Aladro, Broguiere, Cortes, Cortes, Espada, Galarza, Garcia-Appadoo, Guzman-Ramirez, Humphreys, Jung, Kameno, Laing, Leon, Marconi, Mignano, Nikolic, Nyman, Radiszcz, Remijan, Rodón, Sawada, Takahashi, Tilanus, Vila Vilaro, Watson, Wiklind, Akiyama, Chapillon, de Gregorio-Monsalvo, Di Francesco, Gueth, Kawamura, Lee, Nguyen Luong, Mangum, Pietu, Sanhueza, Saigo, Takakuwa, Ubach, van Kempen, Wootten, Castro-Carrizo, Francke, Gallardo, Garcia, Gonzalez, Hill, Kaminski, Kurono, Liu, Lopez, Morales, Plarre, Schieven, Testi, Videla, Villard, Andreani, Hibbard, & Tatematsu]ALMA_HLTau2015 ALMA Partnership, Brogan, C. L., Pérez, L. M., et al. 2015, , 808, L3[Beitz et al.(2011)Beitz, Güttler, Blum, Meisner, Teiser, & Wurm]Beitz_Guttler_Blum_Meisner_Teiser_Wurm2011 Beitz, E., Güttler, C., Blum, J., et al. 2011, , 736, 34[Birnstiel et al.(2012)Birnstiel, Klahr, & Ercolano]Birnstiel_Klahr_Ercolano2012 Birnstiel, T., Klahr, H., & Ercolano, B. 2012, , 539, A148[Carrera et al.(2015)Carrera, Johansen, & Davies]Carrera_Johansen_Davies2015 Carrera, D., Johansen, A., & Davies, M. B. 2015, , 579, A43[Chiang(2008)]Chiang2008 Chiang, E. 2008, , 675, 1549[Cuzzi et al.(1993)Cuzzi, Dobrovolskis, & Champney]Cuzzi_Dobrovolskis_Champney1993 Cuzzi, J. N., Dobrovolskis, A. R., & Champney, J. M. 1993, , 106, 102[de Val-Borro et al.(2006)de Val-Borro, Edgar, Artymowicz, Ciecielag, Cresswell, D'Angelo, Delgado-Donate, Dirksen, Fromang, Gawryszczak, Klahr, Kley, Lyra, Masset, Mellema, Nelson, Paardekooper, Peplinski, Pierens, Plewa, Rice, Schäfer, & Speith]Val-Borro_etal2006 de Val-Borro, M., Edgar, R. G., Artymowicz, P., et al. 2006, , 370, 529[Dipierro & Laibe(2017)]Dipierro_Laibe2017 Dipierro, G., & Laibe, G. 2017, ArXiv e-prints, arXiv:1704.06664[Dipierro et al.(2015)Dipierro, Price, Laibe, Hirsh, Cerioli, & Lodato]Dipierro_Price_Laibe_Hirsh_Cerioli_Lodato2015 Dipierro, G., Price, D., Laibe, G., et al. 2015, , 453, L73[Dra̧żkowska et al.(2016)Dra̧żkowska, Alibert, & Moore]Drazkowska_Alibert_Moore2016 Dra̧żkowska, J., Alibert, Y., & Moore, B. 2016, , 594, A105[Dra̧żkowska & Dullemond(2014)]Drazkowska_Dullemond2014 Dra̧żkowska, J., & Dullemond, C. P. 2014, , 572, A78[Dubrulle et al.(1995)Dubrulle, Morfill, & Sterzik]Dubrulle_Morfill_Sterzik1995 Dubrulle, B., Morfill, G., & Sterzik, M. 1995, , 114, 237[Dullemond & Dominik(2005)]Dullemond_Dominik2005 Dullemond, C. P., & Dominik, C. 2005, , 434, 971[Fu et al.(2014)Fu, Li, Lubow, Li, & Liang]Fu2014 Fu, W., Li, H., Lubow, S., Li, S., & Liang, E. 2014, , 795, L39[Gonzalez et al.(2017)Gonzalez, Laibe, & Maddison]Gonzalez2017 Gonzalez, J.-F., Laibe, G., & Maddison, S. T. 2017, , arXiv:1701.01115[Gonzalez et al.(2015)Gonzalez, Laibe, Maddison, Pinte, & Ménard]Gonzalez_Laibe_Maddison_Pinte_Menard2015a Gonzalez, J.-F., Laibe, G., Maddison, S. T., Pinte, C., & Ménard, F. 2015, , 116, 48[Hersant(2009)]Hersant2009 Hersant, F. 2009, , 502, 385[Ida & Guillot(2016)]Ida_Guillot2016 Ida, S., & Guillot, T. 2016, , 596, L3[Jin et al.(2016)Jin, Li, Isella, Li, & Ji]Jin_Li_Isella_Li_Ji2016 Jin, S., Li, S., Isella, A., Li, H., & Ji, J. 2016, , 818, 76[Johansen et al.(2006)Johansen, Henning, & Klahr]Johansen_Henning_klahr2006 Johansen, A., Henning, T., & Klahr, H. 
2006, , 643, 1219[Johansen et al.(2012)Johansen, Youdin, & Lithwick]Johansen_Youdin_Lithwick2012 Johansen, A., Youdin, A. N., & Lithwick, Y. 2012, , 537, A125[Kanagawa & Fujimoto(2013)]Kanagawa_Fujimoto2013 Kanagawa, K. D., & Fujimoto, M. Y. 2013, , 765, 33[Kretke et al.(2009)Kretke, Lin, Garaud, & Turner]Kretke_Lin_Garand_Turner2009 Kretke, K. A., Lin, D. N. C., Garaud, P., & Turner, N. J. 2009, , 690, 407[Lee et al.(2010a)Lee, Chiang, Asay-Davis, & Barranco]Lee_Chiang_Davis_Barranco2010a Lee, A. T., Chiang, E., Asay-Davis, X., & Barranco, J. 2010a, , 718, 1367[Lee et al.(2010b)Lee, Chiang, Asay-Davis, & Barranco]Lee_Chiang_Davis_Barranco2010b —. 2010b, , 725, 1938[Lynden-Bell & Pringle(1974)]Lynden-Bell_Pringle1974 Lynden-Bell, D., & Pringle, J. E. 1974, , 168, 603[Masset(2000)]Masset2000 Masset, F. 2000, , 141, 165[Mizuno(1980)]Mizuno1980 Mizuno, H. 1980, Progress of Theoretical Physics, 64, 544[Momose et al.(2015)Momose, Morita, Fukagawa, Muto, Takeuchi, Hashimoto, Honda, Kudo, Okamoto, Kanagawa, Tanaka, Grady, Sitko, Akiyama, Currie, Follette, Mayama, Kusakabe, Abe, Brandner, Brandt, Carson, Egner, Feldt, Goto, Guyon, Hayano, Hayashi, Hayashi, Henning, Hodapp, Ishii, Iye, Janson, Kandori, Knapp, Kuzuhara, Kwon, Matsuo, McElwain, Miyama, Morino, Moro-Martin, Nishimura, Pyo, Serabyn, Suenaga, Suto, Suzuki, Takahashi, Takami, Takato, Terada, Thalmann, Tomono, Turner, Watanabe, Wisniewski, Yamada, Takami, Usuda, & Tamura]Momose2015 Momose, M., Morita, A., Fukagawa, M., et al. 2015, , 67, 83[Nakagawa et al.(1986)Nakagawa, Sekiya, & Hayashi]Nakagawa_Sekiya_Hayashi1986 Nakagawa, Y., Sekiya, M., & Hayashi, C. 1986, , 67, 375[Nomura et al.(2016)Nomura, Tsukagoshi, Kawabe, Ishimoto, Okuzumi, Muto, Kanagawa, Ida, Walsh, Millar, & Bai]Nomura_etal2016 Nomura, H., Tsukagoshi, T., Kawabe, R., et al. 2016, , 819, L7[Okuzumi et al.(2016)Okuzumi, Momose, Sirono, Kobayashi, & Tanaka]Okuzumi_Momose_Sirono_Kobayashi_Tanaka2016 Okuzumi, S., Momose, M., Sirono, S.-i., Kobayashi, H., & Tanaka, H. 2016, , 821, 82[Okuzumi et al.(2012)Okuzumi, Tanaka, Kobayashi, & Wada]Okuzumi_Tanaka_Kobayashi_Wada2012 Okuzumi, S., Tanaka, H., Kobayashi, H., & Wada, K. 2012, , 752, 106[Picogna & Kley(2015)]Picogna_Kley2015 Picogna, G., & Kley, W. 2015, , 584, A110[Pinilla et al.(2012)Pinilla, Birnstiel, Ricci, Dullemond, Uribe, Testi, & Natta]Pinilla_Birnstiel_Ricci_Dullemond_Uribe_Testi_Natta2012 Pinilla, P., Birnstiel, T., Ricci, L., et al. 2012, , 538, A114[Pringle(1981)]Pringle1981 Pringle, J. E. 1981, , 19, 137[Rosotti et al.(2016)Rosotti, Juhasz, Booth, & Clarke]Rosotti_Juhasz_Booth_Clarke2016 Rosotti, G. P., Juhasz, A., Booth, R. A., & Clarke, C. J. 2016, , 459, 2790[Sekiya(1998)]Sekiya1998 Sekiya, M. 1998, , 133, 298[Sekiya & Ishitsu(2000)]Sekiya_Ishitsu2000 Sekiya, M., & Ishitsu, N. 2000, Earth, Planets, and Space, 52, 517[Shakura & Sunyaev(1973)]Shakura_Sunyaev1973 Shakura, N. I., & Sunyaev, R. A. 1973, , 24, 337[Sheehan & Eisner(2017)]Sheehan_Eisner2017 Sheehan, P. D., & Eisner, J. A. 2017, , 840, L12[Simon et al.(2016)Simon, Armitage, Li, & Youdin]Simon_Armitage_Li_Youdin2016 Simon, J. B., Armitage, P. J., Li, R., & Youdin, A. N. 2016, , 822, 55[Takeuchi & Artymowicz(2001)]Takeuchi_Artymowicz2001 Takeuchi, T., & Artymowicz, P. 2001, , 557, 990[Takeuchi & Lin(2002)]Takeuchi_Lin2002 Takeuchi, T., & Lin, D. N. C. 2002, , 581, 1344[Takeuchi & Lin(2005)]Takeuchi_Lin2005 —. 2005, , 623, 482[Taki et al.(2016)Taki, Fujimoto, & Ida]Taki_Fujimoto_Ida2016 Taki, T., Fujimoto, M., & Ida, S. 
2016, , 591, A86[Tanaka et al.(2005)Tanaka, Himeno, & Ida]Tanaka_Himeno_Ida2005 Tanaka, H., Himeno, Y., & Ida, S. 2005, , 625, 414[Testi et al.(2014)Testi, Birnstiel, Ricci, Andrews, Blum, Carpenter, Dominik, Isella, Natta, Williams, & Wilner]Testi_PPVI Testi, L., Birnstiel, T., Ricci, L., et al. 2014, Protostars and Planets VI, 339[Tsukagoshi et al.(2016)Tsukagoshi, Nomura, Muto, Kawabe, Ishimoto, Kanagawa, Okuzumi, Ida, Walsh, & Millar]Tsukagoshi2016 Tsukagoshi, T., Nomura, H., Muto, T., et al. 2016, , 829, L35[Youdin & Johansen(2007)]Youdin_Johansen2007 Youdin, A., & Johansen, A. 2007, , 662, 613[Youdin & Goodman(2005)]Youdin_Goodman2005 Youdin, A. N., & Goodman, J. 2005, , 620, 459[Youdin & Lithwick(2007)]Youdin_Lithwick2007 Youdin, A. N., & Lithwick, Y. 2007, , 192, 588[Youdin & Shu(2002)]Youdin_Shu2002 Youdin, A. N., & Shu, F. H. 2002, , 580, 494[Zhang et al.(2015)Zhang, Blake, & Bergin]Zhang_Blake_Bergin2015 Zhang, K., Blake, G. A., & Bergin, E. A. 2015, , 806, L7[Zhu et al.(2012)Zhu, Nelson, Dong, Espaillat, & Hartmann]Zhu2012 Zhu, Z., Nelson, R. P., Dong, R., Espaillat, C., & Hartmann, L. 2012, , 755, 6[Zhu et al.(2011)Zhu, Nelson, Hartmann, Espaillat, & Calvet]Zhu2011 Zhu, Z., Nelson, R. P., Hartmann, L., Espaillat, C., & Calvet, N. 2011, , 729, 47 | http://arxiv.org/abs/1706.08975v1 | {
"authors": [
"Kazuhiro D. Kanagawa",
"Takahiro Ueda",
"Takayuki Muto",
"Satoshi Okuzumi"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20170627180003",
"title": "Effect of dust radial drift on viscous evolution of gaseous disk"
} |
Effects on wind properties, photometric variations and near-IR CO line profiles Division of Astronomy and Space Physics, Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden [email protected] University of Vienna, Department of Astrophysics, Türkenschanzstrasse 17, 1180 Wien, Austria Wind driving in asymptotic giant branch (AGB) stars is commonly attributed to a two-step process. First, matter in the stellar atmosphere is levitated by shock waves, induced by stellar pulsation, and second, this matter is accelerated by radiation pressure on dust, resulting in a wind. In dynamical atmosphere and wind models the effects of the stellar pulsation are often simulated by a simplistic prescription at the inner boundary. We test a sample of dynamical models for M-type AGB stars, for which we kept the stellar parameters fixed to values characteristic of a typical Mira variable but varied the inner boundary condition. The aim was to evaluate the effect on the resulting atmosphere structure and wind properties. The results of the models are compared to observed mass-loss rates and wind velocities, photometry, and radial velocity curves, and to results from 1D radial pulsation models. The goal is to find boundary conditions which give realistic atmosphere and wind properties. Dynamical atmosphere models are calculated using the DARWIN code for different combinations of photospheric velocities and luminosity variations. The inner boundary is changed by introducing an offset between the maximum expansion of the stellar surface and the luminosity, and/or by using an asymmetric shape for the luminosity variation. Ninety-nine different combinations of these two changes are tested. The model atmospheres are very sensitive to the inner boundary. Models that resulted in realistic wind velocities and mass-loss rates, when compared to observations, also produced realistic photometric variations. For the models to also reproduce the characteristic radial velocity curve present in Mira stars (derived from CO Δ v = 3 lines), an overall phase shift of 0.2 between the maxima of the luminosity and radial variation had to be introduced. This is a larger phase shift than is found by 1D radial pulsation models. We find that a group of models with different boundary conditions (29 models, including the model with standard boundary conditions) results in realistic velocities and mass-loss rates, and in photometric variations. To also achieve the correct timing of the line splitting, a phase shift is needed. Pulsation-induced atmospheric dynamics in M-type AGB stars S. Liljegren 1, S. Höfner1, K. Eriksson1, and W. Nowotny2... ========================================================================================================================== § INTRODUCTION The mass loss of asymptotic giant branch (AGB) stars through a slow stellar wind is presumably caused by a two-step process. First, stellar pulsations create shock waves in the surface layers of the star that levitate matter in the atmosphere. This compressed, levitated material then reaches temperatures that are cool enough for dust condensation to occur. In this second stage the dust is accelerated outwards by radiation, through scattering or absorption, depending on the chemical composition and size of the dust grains. A stellar wind is triggered, as momentum is transferred from the dust to the gas through collisions.
This scenario has been investigated extensively, and is supported by various observations. The gas dynamics in the region where shock waves develop are studied through high-resolution spectroscopy, e.g. vibration-rotation lines of CO. The CO molecule is stable throughout the atmosphere and the wind acceleration region, making CO lines suitable to probe the inner regions, with regular pulsation, the shock waves, and the steady outflow <cit.>. Interferometry and high angular resolution imaging are also becoming important for studying both the structure and the dynamics of AGB star atmospheres, giving new insights into the shock propagation and dust condensation distances <cit.>. The wind driving process is usually studied with dynamical models <cit.> which simulate the stellar atmosphere, where radiative effects dominate. This is a vastly different region from the stellar interior, where convection is the main energy transport mechanism and where the stellar pulsations originate. Dynamical wind models typically have an inner boundary situated just below the stellar photosphere, and therefore do not include any description of either convection or pulsation driving. However, the variation of the upper layers of the star is vital to wind driving as this induces the shock waves that facilitate dust formation. A parameterised prescription is typically used in the dynamical wind models to simulate the temporal variation of the stellar luminosity and the radial velocity of the gas layers at the inner boundary. This prescription has historically had a simple sinusoidal form, based on the initial efforts to describe radial variation in AGB stars by <cit.>, and has the advantage of few free parameters. However, self-excited interior pulsation models <cit.> and observations <cit.> both indicate that this approach should be improved. It has further been shown that assumptions made about the inner boundary conditions may have strong implications for the structure of the resulting model atmosphere, and consequently for the mass-loss rate and wind velocity, at least in the case of C-type AGB stars <cit.>. In this work we focus instead on self-consistent time-dependent models for dynamical atmospheres and dust-driven winds of M-type AGB stars. The models by <cit.> and <cit.> are based on a wind acceleration scenario by photon scattering on large, near-transparent silicate grains, which is supported by recent observational studies <cit.>. Here we test a range of different inner boundary conditions for M-star atmospheric models. As the silicate dust present in the atmospheres of M-stars is more transparent than the amorphous carbon in C-stars, M-stars are generally less obscured by dust. There is therefore more observational material available for M-stars, e.g. high-resolution spectra, which facilitates comparisons between model results and observations. To approach realistic pulsation properties and to pin down free parameters in the DARWIN models, we investigate the systematic effects of different pulsation properties (phase shifts between radial and luminosity variations, and different shapes of luminosity variations) on dynamical wind models. The resulting models are also compared to various observables. A set of 99 models with the same stellar parameters, but with a combination of different inner boundaries (see Sect. <ref>), is calculated and then evaluated using three criteria:

* Wind velocity and mass-loss rate - A direct output from dynamical wind models (DARWIN models, Sect.
<ref>), and a measure of the general dynamics. The models are evaluated by comparing the velocity and mass-loss rate combinations with observations from <cit.> and <cit.> in Sect. <ref>.

* Photometric variations - The brightness in individual filters will vary during a pulsation cycle for AGB stars, due to various absorbers occurring throughout the atmosphere. This creates observable loops in colour-colour diagrams. The shapes and positions of these loops will depend on the overall atmospheric structure and dynamics. Synthetic (J - K) vs. (V - K) loops are calculated for the DARWIN models, using the COMA code (Sect. <ref>), and are compared to observations in Sect. <ref>.

* High-resolution line profiles - Vibration-rotation CO lines are formed in different layers of the atmospheres and wind acceleration regions. The Doppler-shifted CO second overtone line profiles and derived radial velocity (RV) curves reflect the velocity field of the pulsating photospheric layers of the star. Synthetic spectra for this line are calculated using the model atmospheres with COMA (Sect. <ref>), and the resulting lines and RV curves are compared to observations in Sect. <ref>.

These tests are indicators of different aspects of the AGB atmosphere, and realistic models should reproduce the observations of all three aspects at once. To the authors' knowledge this is the first study of its kind on M-type AGB star wind models. The possibility of using the tests as diagnostics for pulsation models is also explored. The results from the three tests are compared to pulsation properties derived from 1D self-excited radial pulsation models by <cit.>, who provided a mathematical description in the form of Fourier series of the luminosity variation and the sub-photospheric dynamics from their models. This can be directly compared to the boundary condition used in the DARWIN models. Furthermore, these pulsation models and follow-up simulations have been used extensively for the interpretation of various observations over the years, despite the lack of dust-driven winds in the models of <cit.>.

§ MODELLING METHODS

§.§ Dynamical wind models - DARWIN

Pulsation, shock, and dust formation processes occurring in the atmospheres of AGB stars are inherently time dependent. To investigate the influence of different pulsation properties, atmosphere models are calculated using the Dynamical Atmosphere and Radiation-driven Wind models with Implicit Numerics (DARWIN) code. The DARWIN models simulate the time-dependent structures of an AGB star atmosphere using 1D frequency-dependent radiation-hydrodynamics, including a detailed treatment of dust growth and evaporation (for a description see <cit.> and references therein). For M-type AGB stars the winds are driven by photon scattering on large silicate (Mg_2SiO_4) dust grains, which grow through reactions with Mg, SiO and H_2O. The treatment of dust growth is time dependent, following the method described by <cit.>, with the growth rate limited by the available Mg atoms. As the nucleation of dust grains in an oxygen-rich environment is not well understood <cit.>, pre-existing seed particles consisting of 1000 monomers are introduced. The seed particle abundance is a free parameter and is defined as the ratio between the number density of seed particles and the number density of hydrogen atoms. It is set to 3 × 10^-15 for all models calculated in this paper, following the findings in <cit.>.
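To give a feeling for what this seed particle abundance implies for the final grain size, the back-of-the-envelope sketch below converts a degree of Si condensation into a grain radius. The solar-like Si abundance, the forsterite bulk density, and the example condensation degrees are our assumptions for illustration, not values taken from the DARWIN input.

import math

# Rough estimate of the silicate grain radius implied by a seed particle
# abundance of n_seed/n_H = 3e-15, assuming (our values, not the paper's)
# forsterite (Mg2SiO4) monomers, a solar-like Si abundance, and that a
# fraction f_cond of all Si ends up in the grains.
N_A = 6.022e23
RHO_FO = 3.27          # g/cm^3, bulk density of forsterite
MU_FO = 140.7          # g/mol, molar mass of Mg2SiO4
V_MON = MU_FO / (RHO_FO * N_A)   # monomer volume in cm^3

n_seed_per_H = 3e-15
n_si_per_H = 3.5e-5    # assumed solar-like Si abundance

for f_cond in (0.15, 0.3):
    n_monomers = f_cond * n_si_per_H / n_seed_per_H  # monomers per grain
    radius_cm = (3.0 * n_monomers * V_MON / (4.0 * math.pi)) ** (1.0 / 3.0)
    print(f"f_cond = {f_cond:.2f} -> a_gr ~ {radius_cm * 1e4:.2f} micron")
# condensation degrees of 0.15-0.3 give grain radii of roughly 0.3-0.4
# micron, of the same order as the grain sizes discussed in the results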
The starting point for the simulations is a hydrostatic dust-free atmospheric structure whose fundamental parameters are effective temperature, luminosity, and chemical composition. The variations of the luminosity and the gas velocity at the inner boundary are gradually ramped up to simulate the pulsation of the star, transforming the hydrostatic initial atmosphere into a dynamical model. Using an adaptive spatial grid, the full system of non-linear partial differential equations describing gas, dust, and radiation is solved simultaneously with a Newton-Raphson scheme.

§.§.§ Inner boundary conditions

The spatial range of the DARWIN models, for models that develop a wind, reaches from just below the photosphere (∼ 0.9 R_⋆) out to where the wind has reached its terminal velocity v_∞ (∼ 20-30 R_⋆). Since the inner boundary is located above the pulsation driving region, the variability of the stellar radius and the luminosity present in AGB stars is simulated with an ad hoc temporal variation of the physical quantities at the inner boundary. The form of these temporal variations has historically been very simple: Δ R_in ∝ sin(2 π t / P) for the radial variation and Δ L_in ∝ Δ R_in^2 for the luminosity variation. Such a sinusoidal description, where the expansion and contraction of the stellar surface are locked in phase with the luminosity variation, has been used in several different variations and generations of dynamical atmosphere and wind models <cit.>. This simplified way of describing the time evolution of the luminosity at the inner boundary is not, however, representative of what is known about the pulsation properties of AGB stars. The shape of the luminosity variation is known to differ: <cit.> found that around 30% of Mira light curves deviated significantly from a sinusoidal curve and presented different shapes and secondary maxima. Interior pulsation models, by e.g. <cit.> or <cit.>, predict phase shifts between the variation of the surface layers and the luminosity. In <cit.> atmosphere models of C-stars with the standard boundary condition were calculated; however, a synthetic phase shift had to be added to match the models to observations. To arrive at a better approximation of what a realistic boundary condition for the DARWIN models should be, we perform a systematic investigation of the influence of the inner boundary condition on various model properties, and compare the results with available observations. The inner luminosity boundary is modified in two ways: i) by introducing a phase offset Δϕ_p between the radial variation and the luminosity variation, and ii) by changing the shape of the luminosity variation, making it increasingly asymmetric (measured by Δϕ_s), as seen in panels 2 and 3 in Fig. <ref>. The result of both these changes is a phase shift between the maximum expansion and maximum luminosity of the star. The notation introduced in <cit.> to describe these phase shifts is used here: Δϕ_p for case i) (panel 2 in Fig. <ref>) and Δϕ_s for the asymmetric case ii) (panel 3 in Fig. <ref>). The parameter Δϕ_s is analogous to Δϕ_p, and measures the difference between the maximum stellar expansion and the maximum of the luminosity, which in case ii) is shifted because the luminosity variation is asymmetric. A larger Δϕ_s then indicates an increasingly asymmetric light variation. The combination of these two changes leads to a total phase shift between the maximum expansion and maximum luminosity of Δϕ_tot = Δϕ_p + Δϕ_s (see panel 4 in Fig. <ref>). For a full mathematical description of the boundary condition, see Appendix <ref>.
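Since the exact parameterisation is deferred to the appendix, the sketch below only illustrates the idea. The sinusoidal radial variation follows the standard prescription quoted above, while the particular skew term used to mimic an asymmetric light curve (and hence a non-zero Δϕ_s) is our own choice, not the DARWIN prescription.

import numpy as np

P = 390.0                        # pulsation period in days (illustrative)
t = np.linspace(0.0, P, 2000)
phase = t / P

dphi_p, skew = 0.1, 0.3          # phase offset and assumed skew strength
dR = np.sin(2.0 * np.pi * phase)                 # radial variation
# shifted and skewed luminosity variation (normalised shape only);
# the sine-of-sine distortion is our own illustrative choice
L = np.sin(2.0 * np.pi * (phase + dphi_p) + skew * np.sin(2.0 * np.pi * phase))

# total phase shift between maximum expansion and maximum luminosity
dphi_tot = phase[np.argmax(dR)] - phase[np.argmax(L)]
print(f"luminosity maximum leads the radial maximum by ~{dphi_tot:.2f} cycles")
# the offset contributes dphi_p directly, and the skew adds a further
# shift of the luminosity maximum, mimicking the role of dphi_s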
§.§.§ Model set

The model parameters used for the starting model (see Table <ref>) are based on the stellar parameters of Model M in <cit.>; the only difference is in the chemical composition (in previous work C/O = 1.4, here C/O = 0.48). The atmospheric model here therefore has an O-rich chemistry, in contrast to the C-rich chemistry in the atmospheric model used in <cit.>. Continuing the naming tradition we call our model M2. This combination of parameters was found to reproduce very well the discontinuous S-shaped radial velocity (RV) curve characteristic of Mira stars. The RV curve is derived from high-resolution time-series spectra containing lines formed in regions with strong shocks (around R ≈ 1 R_⋆), and can be used to deduce the dynamics of the inner layers of Mira atmospheres <cit.>. <cit.> found that RV curves of a sample of Miras, covering a range of periods between 145 and 470 days, showed a universal behaviour with very similar velocity amplitudes (difference between minimum and maximum velocities) and discontinuities in the RV curve around maximum bolometric phases. They also showed that while properties such as mass-loss rate and wind velocity differ between different Mira stars, the atmospheric dynamics of the inner layers seem to be similar, independent of metallicity, spectral type, period, and chemistry. It is therefore probably a good assumption that the result for model M2 can be generalised to other combinations of stellar parameters for Mira stars. A set of 99 models with different inner boundary conditions for Model M2 is investigated. An overview of the set of models can be seen in Fig. <ref>. As observations <cit.> and models <cit.> both predict a positive phase shift, i.e. the stellar surface variation lags the luminosity variation, we constrain the investigation to cases where Δϕ_tot ∈ [0, 0.5]. Of the 99 models, around 20% did not converge, in most cases due to the time step becoming very small. Such crashes are caused by extreme conditions (strong temporal changes) in models where the radial variation and luminosity variation are very much out of phase. Fortunately, the subsequent analysis showed that these models are far from the realistic region in parameter space, and they will therefore not be discussed further.

§.§ Spectral synthesis

Photometry and high-resolution spectra are calculated for comparison with observational data. These calculations are done by applying an a posteriori approach, using the resulting atmospheric structures from the DARWIN runs. For every model twenty snapshots per pulsation cycle, equidistant in time, are chosen. For each of these snapshots opacities are calculated using the COMA code, with the assumptions of LTE and a microturbulence of ξ_t = 2.5 km s^-1. Radiative transfer is then solved for the derived opacities. For a detailed description of the COMA code see <cit.>, and for a description of how the line profiles are calculated see <cit.>.

§.§.§ Photometry

Comparisons using photometric colours are of interest as they trace variations of spectral energy distributions during a pulsation cycle. Large variations in the luminosity create strong variations in gas temperature, which in turn influence the molecular abundances in the atmosphere. These variations are especially prominent in the V-band, which is dominated by TiO, while H_2O affects the near-infrared wavelength region. Here the approach of <cit.> is used, investigating the colour-colour diagram (J-K) vs.
(V-K). Typically, observed M-stars show large variations in (V-K) during a cycle, caused by changes in the molecular abundances (mainly TiO), while the variations in (J-K) are small. Observed photometric variations are represented by sine fits to light curves for the M-type Mira stars R Car, R Hya, R Oct, R Vir, RR Sco, T Col and T Hor <cit.>. The differences between the observed loops are most likely due to differences in the fundamental properties of the stars, which is further explored in <cit.>, <cit.>, and <cit.>. Spectra for selected snapshots of every model are calculated, using a resolution of R = 10,000 and covering a wavelength region between 0.3 and 25 μm. We follow the same method here as described in <cit.> and <cit.>. Photometric filter magnitudes are calculated from these spectra, following the Bessell system <cit.>. To be consistent with the observations, sine curves are fitted to the synthetic photometric variations of the models to produce loops in the (J-K) vs. (V-K) plane. The observed loops can be compared to loops calculated from the models, as the dynamical model should be able to reproduce the general characteristics of these variations.

§.§.§ Line profiles in high-resolution spectra

The velocity field in the line formation region will affect the line profiles of molecular lines originating in the dynamical atmosphere. While the light curves computed in the previous section reflect the luminosity variations, the line profiles of molecular lines are affected by the velocity fields in the respective line-forming region. By calculating photometric variations and synthetic line profiles for the models and then comparing them with observations, it should be possible to deduce information about a potential phase shift between the luminosity and radial variation. <cit.> studied such line formation and velocity effects on various molecular features in C-type AGB star models. It was found that DARWIN models were able to reproduce various observed spectral features, and that there are three systems of lines that are particularly interesting: the vibration-rotation CO lines Δ v = 1, 2, 3. These lines probe three different regions in the atmosphere, as the temperatures where these lines form are roughly 350-500 K for CO Δ v = 1, 800-1500 K for CO Δ v = 2, and 2200-3500 K for CO Δ v = 3. The CO Δ v = 1 lines will then probe the outflow, while the CO Δ v = 2 lines will probe the dust-forming regions and the CO Δ v = 3 lines probe the dust-free layers <cit.>. The CO Δ v = 3 lines are formed in a region where the velocity field is dominated by effects of the shock waves, so the shapes of the lines are determined by the propagation of the shock wave, which in turn depends primarily on the radial variation due to pulsation. The movement of the mass shells in such a line-forming region will then be imprinted on the line profile. A schematic of this scenario can be seen in Fig. <ref>. When material is infalling the line profile will be red-shifted, seen at ϕ ∼ ϕ_red to the left in Fig. <ref>. As an outwards propagating shock wave hits the infalling material, there will be a line splitting due to the gas layers in the line-forming region moving both inwards (red-shifted component) and outwards (blue-shifted component). This is seen at ϕ ∼ ϕ_split. The line will be blue-shifted at ϕ ∼ ϕ_blue when all the layers are moving outwards.
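For orientation, and as our own illustration rather than part of the original analysis, the sketch below converts typical shock-zone gas velocities into Doppler shifts of the CO 5-2 P30 line; the assumed ±15 km s^-1 is a placeholder of the order of the velocity amplitudes discussed in this paper.

C_KMS = 2.998e5        # speed of light in km/s
LAM0 = 1.6573          # rest wavelength of the CO 5-2 P30 line in micron

for v in (-15.0, 0.0, 15.0):   # km/s; negative = moving towards the observer
    lam = LAM0 * (1.0 + v / C_KMS)
    print(f"v = {v:+5.1f} km/s  ->  lambda = {lam:.5f} micron")
# +/-15 km/s shifts the line by ~8e-5 micron; at R = 300,000 the velocity
# resolution is c/R ~ 1 km/s, so red- and blue-shifted components of a
# split line are cleanly separated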
Observations of Doppler-shifted lines in AGB stars will then directly probe the velocity field, and can be compared to model results. In <cit.> such lines were synthesised for different variations of the inner boundary condition for the case of a C-star atmospheric model, and it was found that line profile variations, particularly of the CO Δ v = 3 line, combined with information about the visual phase, can be used as a diagnostic tool. This is tested here for the case of M-stars. Line profiles for the CO line Δ v = 3 (CO 5-2 P30 at 1.6573 μm) at different times during a pulsation cycle are produced for DARWIN models with a range of boundary conditions. A resolution of R = 300,000 is used, covering a wavelength region between 1.6566 and 1.6578 μm. Frequency-dependent opacities are produced using the COMA code, with the assumption that the line shapes are described by Doppler profiles. Gas velocities are taken into account when performing the radiative transfer because the velocity field, with both infalling and outflowing gas, defines the line profiles. Opacities due to dust sources are accounted for; however, they do not significantly contribute to the result as these dust species are very transparent. For a detailed description of the spectral synthesis see <cit.>.

§.§ Bolometric phase ϕ_bol and visual phase ϕ_v

Two different measures of time are used throughout this paper: visual phase ϕ_v and bolometric phase ϕ_bol. Cycle-dependent observations are usually linked to the star's visual phase ϕ_v. For the model calculations we use the bolometric phase ϕ_bol, which is always known, as one of the model outputs is the luminosity light curve. For direct comparison between models and observations, when phases are important (Sect. <ref>), ϕ_v is calculated for the models. For O-rich Miras the bolometric phase lags behind the visual phase ϕ_v, typically by ϕ_bol(max) - ϕ_v(max) ≈ 0.1 <cit.>. This is comparable to what is found in the simulations, where the differences between the visual and bolometric phase typically were in the range 0.05-0.15 for the model atmospheres in this paper.

§ RESULTING MODELS

§.§ Overview of the wind properties

Changing the inner boundary may have large effects on the efficiency of the wind driving in the resulting model atmosphere. With a mass-loss rate Ṁ, which is due to the radiation pressure, the mass Ṁ δt ejected over a time interval δt will acquire the momentum Ṁ δt v_∞ for a wind velocity v_∞, by absorbing a fraction β of the momentum available from the luminosity, L_⋆ δt / c. Therefore Ṁ δt v_∞ = β L_⋆ δt / c, which can be solved for β as

β = Ṁ v_∞ c / L_⋆.

Here β describes the fraction of the momentum of the stellar radiation that goes into driving the wind, and it can be calculated from the outputs of the DARWIN models.
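For concreteness, β can be evaluated directly from a model's global output quantities. In the sketch below the numerical values are illustrative placeholders of the order found for the models in this paper, not actual DARWIN output.

# Fraction of the stellar radiative momentum used to drive the wind,
# beta = Mdot * v_inf * c / L_star, with all quantities converted to SI.
M_SUN = 1.989e30      # kg
YEAR = 3.156e7        # s
L_SUN = 3.828e26      # W
C = 2.998e8           # m/s

def beta(mdot_msun_yr, v_inf_kms, lum_lsun):
    """beta = Mdot * v_inf * c / L_star for a dust-driven wind."""
    mdot = mdot_msun_yr * M_SUN / YEAR
    return mdot * v_inf_kms * 1.0e3 * C / (lum_lsun * L_SUN)

# placeholder values of the order of those discussed in this section
print(f"beta = {beta(1.5e-6, 14.0, 7000.0):.3f}")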
The upper left panel of Fig. <ref> shows β for all models. Different boundary conditions result in a wide range of values of β, indicating that the form of the inner boundary is directly correlated with the efficiency of the wind driving in these models. There seem to be systematic effects: a group of models with Δϕ_s ∈ [-0.15, 0.2] and Δϕ_p ∈ [0, 0.2] is very efficient in wind driving. A large portion of the models with Δϕ_p > 0.3, on the other hand, is inefficient in driving the wind and produces much lower values of β. The global properties of mass-loss rate and wind velocity are also highly dependent on the choice of the inner boundary conditions. The lower left and lower right panels of Fig. <ref> show the mass-loss rate and wind velocity plotted separately. Again some systematic effects can be seen: the models with Δϕ_s ∈ [-0.1, 0.1] and Δϕ_p ∈ [0, 0.2] result in the highest wind velocities, with values around v_∞ = 12-16 km s^-1. Velocities decrease with higher Δϕ_tot, with the lowest velocities being v_∞ ≈ 2 km s^-1. There are also significant effects of changing the shape of the luminosity variation, specified by Δϕ_s. Many of the models have a mass-loss rate between 1-3 × 10^-6 M_⊙ yr^-1, and do not seem to follow the same trends as wind velocity and β. There are several models with Δϕ_s ∈ [0.1, 0.2] that have a relatively high mass-loss rate, but do not reach very high wind velocities. While a significant amount of material is lifted during the wind driving process for these models, the acceleration of this material is not efficient. The resulting wind velocities are too small to be realistic, in combination with the mass-loss rates (shown to the left in Fig. <ref>). The lowest mass-loss rate is ∼ 10^-8 M_⊙ yr^-1, which is two orders of magnitude smaller than that of the model with the highest mass-loss rate. The upper right panel of Fig. <ref> shows the radius of the silicate dust grains for all models, which is consistently between 0.35 and 0.4 μm. As the dust radius a_gr is related to the condensation degree of Si (denoted by f_cond) as f_cond ∝ a_gr^3, this relatively small spread in grain radii still represents a significant difference in the degree of condensation. The models with the smallest grain radii have condensation degrees of f_cond ∼ 0.15, while the models with the largest grains have f_cond ∼ 0.3. It is the models with the largest grain radii that also reach the highest wind velocities. This sets them apart from the group of models with high mass-loss rate but relatively low wind velocities (Δϕ_s ∈ [0.1, 0.2]), for which the grain radii are comparably small.

§.§ Atmospheric structure and dynamics due to different boundary conditions

The large variety in mass-loss rates and wind velocities is due to the effect the inner boundary has on the structure of the atmosphere. This is discussed in detail for carbon stars in <cit.>, and while the wind-driving dust species in C-stars is different from that of M-stars, the mechanisms at play, determining the atmospheric structures and the wind properties, are comparable. A shorter explanation is given here by comparing a model with efficient wind driving (left panel in Fig. <ref>) to one with poor wind driving (right panel of Fig. <ref>). The left plot in Fig.
<ref> shows the atmospheric dynamics of the original model, where each line tracks the movement of a Lagrangian mass shell with time at different depths. This plot is illustrative of the processes contributing to a pulsation-enhanced dust-driven wind in AGB stars, indicating three distinctly different regions in the dynamical atmosphere. The dust-free region close to the surface at R < 2 R_⋆ is dominated by the pulsation. When infalling matter collides with the outflowing gas, shock waves develop and propagate outwards. The material would follow ballistic trajectories if not for the onset of the wind by radiation pressure on the dust. Dust condenses at distances of R ∼ 2 R_⋆. In this dust-forming region the dynamical behaviour changes from strictly periodic to more sporadic, with some matter falling back onto the star and some material being accelerated outwards. A steady outflow is found beyond R ∼ 3 R_⋆. Ideally, for effective wind driving, the dust formation should take place in the wake of the propagating shock wave, where the dust can further accelerate the outward-moving dense material, leading to a strong wind. This is the case for the original model, seen in the left plot of Fig. <ref>, where dust is formed at ϕ_bol ∼ 0.5. The dust formation region correlates with the lowest temperatures in the atmosphere, which in turn correlate with the luminosity minimum. The most important change in atmospheric dynamics, when changing the inner boundary, is related to the phase of dust condensation. When shifting the luminosity variation compared to the radial movement of the upper stellar layers, which is the consequence of introducing new inner boundary conditions, the timing of dust condensation is also shifted. If the luminosity minimum, and therefore the temperature minimum, occurs such that dust is not formed when gas is moving outwards in the wake of a shock, but rather when material is falling back towards the star (seen in the right panel of Fig. <ref>), the wind driving becomes highly ineffective. This leads to significantly lower wind velocities and mass-loss rates. Another important aspect, which also determines the effectiveness of the wind, is when the luminosity maximum occurs with respect to dust formation and shock wave propagation. A luminosity maximum earlier in the pulsation cycle leads to a larger radiative acceleration on the dust, and therefore a higher wind velocity and mass-loss rate. On the contrary, a luminosity maximum that is late with respect to the propagating shock wave means less acceleration and a lower wind velocity and mass-loss rate. The combination of these two effects leads to the vast variety of different behaviours: there are diverse wind velocities and mass-loss rates for the different inner boundary conditions. This is the reason why some models are much more efficient in driving a wind.

§ COMPARISON TO OBSERVABLES

The models are evaluated via comparison to observations using three criteria: i) a combination of wind velocity and mass-loss rate, ii) colour-colour loops, and iii) RV curves derived from CO Δ v = 3 vibration-rotation lines.

§.§ Wind velocity and mass-loss rate

Some of the most studied properties of AGB stars are the wind velocities and mass-loss rates, and DARWIN models should reproduce these properties realistically. An overview of the wind velocities and mass-loss rates of the models is shown in Fig. <ref> (pale green, blue, and dark blue circles, with the original model as an orange star).
This is compared to a linear fit of observational wind velocities against the logarithm of the mass-loss rates for M-stars, using observations from <cit.> and <cit.> (crosses are observations, the dashed line is the fit to the observations). The observational uncertainties on the wind velocity vs. mass-loss rate relationship are dominated by the uncertainties on the mass-loss rates, which are determined using CO multi-transitional line observations. The grey area plotted in Fig. <ref> indicates the standard deviation from the linear fit in the mass-loss rates of the observations, which is comparable to the uncertainties found in <cit.>, where the reliability of mass-loss rate estimates for AGB stars is investigated in detail. In the left panel of Fig. <ref> the dark grey area marks the region with deviations from the linear fit below 1σ, while the light grey area denotes a deviation between 1 and 2σ. As previously mentioned, changing the inner boundary may have large consequences for both the wind velocities and the mass-loss rates. The wind velocities range from approximately 1 km s^-1 to almost 16 km s^-1, while the mass-loss rate can change over two orders of magnitude, between 3 × 10^-6 and 10^-8 M_⊙ yr^-1. For higher wind velocities, the wind seems to saturate at some maximum mass-loss rate, in this case around 1.5 × 10^-6 M_⊙ yr^-1. This leads to a flat distribution in mass-loss rate above 10 km s^-1. The changes to the inner boundary thus have little consequence for the mass-loss rate beyond this point; however, the wind velocity is still sensitive to such changes. Below 10 km s^-1 there is a wider spread, but the models seem to have mass-loss rates that are too high for such low wind velocities, which is not realistic when compared to the observations. The right panel in Fig. <ref> shows the comparison between the observations and the models for different combinations of boundary conditions. As seen, there is a group of models with Δϕ_tot ∈ [0.0, 0.3] that produces realistic combinations of wind properties. The models that did not converge (red) are relatively far from this group of models. There are in total 30 models that simultaneously reproduce a realistic combination of mass-loss rates and wind velocities within 1σ. Most of these models have combinations of Δϕ_s ∈ [-0.2, 0.15] and Δϕ_p ∈ [0, 0.3], with Δϕ_tot ∈ [0, 0.3]. It should be noted that the model with Δϕ_s = -0.2 and Δϕ_p = 0.7, one of the most extreme models, has a very low mass-loss rate and a very low wind velocity (Ṁ ≈ 10^-7.7 M_⊙ yr^-1, v_∞ ≈ 1 km s^-1), and does not reproduce Mira-like properties. This model is therefore excluded, even though it technically falls within the 1σ range. The 29 models left are used for further analysis and to investigate the colour-colour diagrams and high-resolution spectra. The model with the standard boundary conditions is among these 30 models, and it produces a realistic mass-loss rate and wind velocity combination. This is rather reassuring, considering that the stellar parameters of this model emerged from a study based on C-rich dynamical models (with a different wind-driving species). As mentioned in Sect. <ref>, the pulsation properties of stars that have different chemical compositions but otherwise similar stellar properties should be comparable.
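The 1σ selection can be summarised in a few lines of code. In the sketch below the fit coefficients and the scatter are placeholders; the actual values follow from the fit to the observational samples cited above.

import numpy as np

def within_one_sigma(log_mdot, v_inf, slope, intercept, sigma_log_mdot):
    """True if the model's log(Mdot) deviates from the linear fit
    v_inf = slope * log(Mdot) + intercept by less than one sigma,
    measured in log(Mdot) at the model's wind velocity."""
    log_mdot_fit = (v_inf - intercept) / slope
    return abs(log_mdot - log_mdot_fit) < sigma_log_mdot

# hypothetical model output: Mdot = 1.5e-6 Msun/yr, v_inf = 14 km/s
print(within_one_sigma(np.log10(1.5e-6), 14.0,
                       slope=5.0, intercept=45.0, sigma_log_mdot=0.4))
# -> True: this (Mdot, v_inf) combination would pass the selection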
§.§ Loops in colour-colour diagrams

The photometric comparison with observations is done by calculating synthetic colours for the 29 models that produced satisfying mass-loss rates and wind velocities. Fig. <ref> shows the loops formed in the (J-K) vs. (V-K) diagram after sine curves are fitted to the photometric variations, for both observed stars (red and blue loops) and for synthetic colours (grey loops). The model with the original boundary condition is shown in orange. Models with higher T_eff and lower mass-loss rates, which have been studied in previous papers, fell mostly into the area covered by the loops shown in red <cit.>. Model M2, with its lower T_eff and higher mass-loss rate, on the other hand, is more comparable to R Oct (shown in blue), which has the highest mean (J - K) and (V - K) in the observed sample. The (J - K) values are commonly considered to be an indicator of T_eff, which is also reflected by models <cit.>. The (V-K) values depend on the strength of molecular features (mostly TiO), which in turn are strongly affected by the variable structure of the dynamical atmosphere and the condensation distance of the wind-driving dust grains <cit.>. Observations and models both loop in an anti-clockwise direction in this diagram, indicating qualitatively similar phase lags between the light curves in the different photometric bands. In an attempt to examine more quantitatively how well the observations and the model results agree, and to see if any models significantly deviate, we describe the loops as ellipses in the (J-K) vs. (V-K) plane. An ellipse is defined by five quantities: the position of its centre (x_0, y_0), the semi-major axis a, the semi-minor axis b, and the angle of inclination α. The shape of an ellipse is described by its eccentricity as e = √(1 - b^2/a^2). The centre positions of the loops of the 29 models, which were selected based on their wind properties, are within the range of observed (J - K) and (V - K) colours for larger observational samples <cit.>, and can therefore not be used to put extra constraints on the boundary conditions. They will not be discussed further. The eccentricity, the semi-major axis a, and the semi-minor axis b are plotted against the angle of inclination α for model loops and for observed loops in Fig. <ref>, again with observations in red and blue and models in grey, with the original model as the orange star. The left panel in Fig. <ref> shows the eccentricities of the observed loops, which are all very close to 1. This very high eccentricity is due to a small variation in (J-K) compared to the variation in (V-K), and it is common to all the observed loops. The angle of inclination α is between 0^∘ and 3^∘, meaning the observed loops are very narrow and almost flat. The model loops have a spread in both eccentricity and angle of inclination; however, some of the models overlap very well with the observations. The middle and right panels of Fig. <ref> show the semi-major and semi-minor axes, respectively. Again, there is a spread for the model loops, but it seems to be realistic as this variation is similar to the spread between the different observations. R Oct seems to be a good fit for both the semi-major and semi-minor axes, and the eccentricity. This indicates that the models can reproduce realistic (J-K) (semi-minor) and (V-K) (semi-major) variations simultaneously. There are no models that obviously deviate from the observations. The spread of the different properties for the models is comparable to the spread of the observations. Therefore, all 29 models are used in the further analysis of the line profile variation.
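A sketch of how such loop parameters can be extracted: sample the sine fits over one cycle and obtain the ellipse axes from the second moments of the points. The amplitudes and phase lag below are placeholder values, but they reproduce the qualitative result that the (V-K) variation dominates, giving an eccentricity close to 1 and a small inclination angle.

import numpy as np

# sample sine fits to the photometric variations over one cycle
phi = np.linspace(0.0, 1.0, 200, endpoint=False)
vk = 2.0 * np.sin(2.0 * np.pi * phi)             # (V-K), placeholder amplitude
jk = 0.15 * np.sin(2.0 * np.pi * (phi - 0.1))    # (J-K), smaller and lagging

# ellipse axes from the second moments of the centred loop points
pts = np.vstack([vk - vk.mean(), jk - jk.mean()])
cov = pts @ pts.T / pts.shape[1]
eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
a, b = np.sqrt(2.0 * eigvals[1]), np.sqrt(2.0 * eigvals[0])
ecc = np.sqrt(1.0 - (b / a) ** 2)

alpha = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1])) % 180.0
if alpha > 90.0:                                  # fold into (-90, 90] degrees
    alpha -= 180.0
print(f"a = {a:.2f}, b = {b:.2f}, e = {ecc:.3f}, alpha = {alpha:.1f} deg")
# -> e close to 1 and alpha of a few degrees: a narrow, almost flat loop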
§.§ Line profiles in high-resolution spectra

High-resolution spectra were calculated for the sample of 29 models that produced realistic mass-loss rate and wind velocity combinations as well as realistic colour variations. Synthetic RV curves from the spectra are compared to observed RV curves for Mira AGB stars.

§.§.§ Investigating the inner atmosphere with CO Δ v = 3

The CO Δ v = 3 lines originate deep in the atmosphere, below the dust-forming layers, where the radial expansion and compression of the upper stellar layers induce strong shock waves. The movement of the mass shells in such a line-forming region will then be imprinted on the line profile, as described in Sect. <ref> and illustrated in Fig. <ref>. This behaviour is observed for CO Δ v = 3 lines in AGB stars. The first column of Fig. <ref> shows time-series observations of CO second overtone lines (an average of 10-20 unblended lines) of χ Cyg, an S-type Mira variable. The ϕ_split of χ Cyg (described in Fig. <ref>) occurs at ϕ_v ∼ 0. This line splitting at maximum visual light seems to be a ubiquitous feature of Mira AGB stars. Such line splitting and the resulting RV curve can also be synthesised using DARWIN models. The line profiles for the standard model, Δϕ_p = 0, Δϕ_s = 0, are shown in the two rightmost panels of Fig. <ref> at different resolutions. The shape of the line changes during one pulsation cycle: when material is outflowing the line has a strong blue-shifted component (ϕ_v ≈ 0.65 - 0.0), while the infalling material results in a red-shifted component (ϕ_v ≈ 0.2-0.4). The ϕ_split in such a model occurs at ϕ_v ∼ 0.8, which is too early compared to observations. There is very little difference between the models with different boundary conditions concerning the actual line shape during a cycle, other than the timing of different features, such as the line splitting, occurring at different ϕ_v. This can be seen by comparing panels four and five (the original model, with Δϕ_tot = 0.0) with panels two and three (Δϕ_p = 0.2, Δϕ_s = 0.0, with Δϕ_tot = 0.2). The line profiles of the two models are similar, but different line features occur at different ϕ_v. This indicates that changing the luminosity boundary has few consequences for the shape of the line, which in turn demonstrates that the material at these radial distances is not subjected to significant radiation pressure. The main characteristics of the observed line, with a line splitting around ϕ_v = 0.0 and the overall shift of the main component of the line, can be reproduced if a model with Δϕ_tot = 0.2 is used (such as shown in panels two and three). The velocity field of this model is then in overall agreement with that of the observed star. There are some differences, however: the red component during the line splitting is significantly weaker for the models, probably due to a lower density of the infalling material.

§.§.§ Comparison of radial velocity curves

Radial velocity curves derived from observations of second overtone CO lines in Mira stars show a uniform behaviour: a discontinuous S-shaped curve, with both outflowing and infalling components at ϕ_v = 0.0, RV = 0 at around ϕ_v = 0.4, and a velocity amplitude Δ RV ≈ 25 km s^-1 (Fig. <ref>, crosses and grey fit, which is the running mean over ϕ_v = 0.2 for the observations).
The shape and amplitude of the observed RV curve are very well reproduced in previous works deriving RV curves from synthetic line spectra of C-star models; however, both the line splitting and RV = 0 of these models occur earlier than ϕ_v = 0.0 and ϕ_v = 0.4, respectively <cit.>. The same problem is found for the standard M-type model here, when calculating the RV curves from the CO Δv = 3 line synthesis, indicated by orange dots in Fig. <ref>. Models with a larger Δϕ_tot approach the observations: the line splitting occurs closer to ϕ_v = 0.0. This can be seen in Fig. <ref>, where the green dots represent a model with a larger total phase shift, Δϕ_tot = 0.2 (Δϕ_p = 0.2, Δϕ_s = 0.0). Increasing Δϕ_tot of the model will essentially shift the RV curve to the right. So while the inner luminosity boundary does not affect the line-forming regions directly, it does change the visual phase, which is important when comparing models to observations. We want to compare the resulting RV curves for the 29 models that have realistic wind velocities and mass-loss rates with the mean RV curve from the CO-line observations of Mira stars. In this case it is not suitable to use a standard L2 norm (a measure of the mean quadratic difference between the points on two curves; for a mathematical description see Appendix <ref>) as an error estimate. Results from such an error measure might be misleading, due to the offset between the model curves and the observational curve and due to the discontinuous nature of the RV curve. Instead we compare the timing of the synthetic and observed RV curves' minimum (ϕ_v(min)), maximum (ϕ_v(max)), and RV zero point (ϕ_v(RV=0)). These points are defined as ϕ_v(s, min), ϕ_v(s, max), and ϕ_v(s, RV=0) for the synthetic RV curves and ϕ_v(o, min), ϕ_v(o, max), and ϕ_v(o, RV=0) for the observed mean curves. The differences between the synthetic and observed RV curves for these points are defined as Δmax = |ϕ_v(o, max) - ϕ_v(s, max)|, Δmin = |ϕ_v(o, min) - ϕ_v(s, min)| and Δzero = |ϕ_v(o, RV=0) - ϕ_v(s, RV=0)|. We then define the mean difference as Δϕ_v = (Δmax + Δmin + Δzero)/3. The value of Δϕ_v is calculated for the 29 models. The results are shown in Fig. <ref>. To have a line splitting occurring close to what is observed, ϕ_v ≈ 0, the model atmosphere must have Δϕ_tot ∼ 0.2. For models with lower Δϕ_tot the line splitting occurs too early in the pulsation cycle. This is important to note when modelling lines that vary due to shock propagation through the stellar layers.
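The phase-based comparison just described is straightforward to implement. The minimal Python sketch below assumes the three characteristic phases have already been extracted from each RV curve (the variable names are ours); as an additional assumption not spelled out in the text, we take phase differences on the unit circle, i.e. along the shorter arc.

```python
def mean_phase_difference(phi_syn, phi_obs):
    """Mean phase difference Delta_phi_v between synthetic and observed RV curves.

    phi_syn, phi_obs: dicts with the visual phases of the RV minimum ('min'),
    maximum ('max') and zero crossing ('zero'), each in [0, 1).
    Implements Delta_phi_v = (Delta_max + Delta_min + Delta_zero) / 3.
    """
    def circ_diff(x, y):
        # Phases live on a circle, so take the shorter of the two arcs.
        d = abs(x - y) % 1.0
        return min(d, 1.0 - d)

    return sum(circ_diff(phi_syn[k], phi_obs[k])
               for k in ("min", "max", "zero")) / 3.0

# Illustrative placeholder phases only, not values from this work:
print(mean_phase_difference({"min": 0.85, "max": 0.30, "zero": 0.35},
                            {"min": 0.95, "max": 0.45, "zero": 0.40}))
```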
If correct phase information is to be retrieved, either models with Δϕ_tot ∼ 0.2 need to be used, or a phase shift of Δϕ_v ∼ 0.2 (corresponding to a difference of ∼ 0.3 when comparing ϕ_v with ϕ_bol) needs to be applied. The results also indicate that real Mira stars do have an inherent phase shift between the ascending luminosity and the radial movement of the surface. § COMPARISON TO 1D PULSATION MODELS The results found in this work are compared with theoretical predictions for the pulsation properties of AGB stars. Self-excited pulsation models of AGB stars can be divided into two categories: 1D models and 3D models. One-dimensional self-excited radial pulsation models have been developed and worked on for decades <cit.>, and contemporary models include non-linear effects and turbulent pressure <cit.>. The drawback, however, is the treatment of convective energy transport, which is simulated with mixing-length theory and is mentioned as the major shortcoming in this generation of models <cit.>. Three-dimensional interior pulsation models with a realistic hydrodynamical treatment should provide a more realistic description of convection, and therefore of the pulsation properties; however, these models are still in their infancy, and only a small part of the relevant stellar parameter space has been explored so far <cit.>. Furthermore, possible 3D effects on mass loss have not yet been investigated. Here we compare the different inner boundary conditions explored to results from 1D self-excited models, described in <cit.>, where non-linear models representing the prototypical Mira, o Ceti, were calculated and explicit descriptions of the variations in the sub-photospheric layers are given. Two fundamental-mode models (models Z and D) and one first-overtone model (model E) were examined. However, the results, and also later investigations, suggest that Mira variables are fundamental-mode pulsators. The first-overtone model did not achieve an appropriate luminosity amplitude, and a factor of six had to be applied in <cit.>. So while comparisons to all three models are included here, the first-overtone model (model E) is probably not a realistic representation of Mira stars. <cit.> provide Fourier coefficients for the time dependence of the luminosity and radius for the three models, for the region with mean temperature of ∼ 4000-5000 K. This is close to the location of the inner boundary of the DARWIN models. These variations of the photospheric layers from the 1D pulsation models are therefore directly comparable to the inner boundary condition for the DARWIN models. A comparison is made between different DARWIN inner boundary conditions and the variations given by <cit.>, which are rescaled to have the same amplitude and period. This comparison is evaluated by using a relative least-squares error (LSE, the L2 norm), which is the mean quadratic difference at each point between the curves, as seen in Eq. <ref>, then normalised to the smallest value. The result for each model presented in <cit.> can be seen in Fig. <ref>. The boundary conditions most similar to the two fundamental-mode pulsators given by <cit.> are slightly asymmetric (with Δϕ_s ∼ -0.05) with an offset (with Δϕ_p ∼ 0.1), resulting in a total phase shift Δϕ_tot ∼ 0.05. This is a smaller phase shift than found by comparing the DARWIN models with different boundary conditions and RV curves in Sect.
<ref>. It overlaps with DARWIN models that produce realistic wind velocities and mass-loss rates, however. The first-overtone pulsator model from <cit.> predicts asymmetry (with Δϕ_s ∼ 0.1) and an offset (with Δϕ_p ∼ 0.1), resulting in Δϕ_tot ∼ 0.2. This model is discarded as unrealistic in <cit.>. § DISCUSSION Under the assumption that the results for model M2, representing a prototypical Mira, can be generalised to other Mira stars, we can draw conclusions about which inner boundary condition agrees best with observations and about what the consequences are of using a generic boundary condition, like that of the standard DARWIN models, for the resulting atmospheres. In our original sample of 99 models with a broad range of inner boundary conditions, 29 models reproduced realistic combinations of mass-loss rates and wind velocities. There were some systematic effects: Δϕ_tot ranged between 0 and 0.3, and most of the models had a slightly asymmetric shape with Δϕ_s < 0.0. This sample could be reduced further by comparing the model RV curves with compilations of CO Δv = 3 RV curves for Mira stars. The RV comparison indicates that Δϕ_tot ∼ 0.2 is needed for the splitting of this line to occur at the correct ϕ_v. As no further information about the shape can be derived from this comparison, any model with Δϕ_tot ∼ 0.2 should reproduce line splitting at the correct ϕ_v. The (V-K) vs (J-K) colour-colour loops were also studied for the 29 models that produced satisfactory mass-loss rates and wind velocities. While there was a spread in the ellipse properties, it was comparable in size to the spread in the observed colour-colour loops. This is not always the case, as models with unrealistic temperature structures and resulting molecule abundances in turn produce unrealistic loops, a fact that was explored in <cit.>. The similarity of the model loops here is probably a selection effect, as photometry calculations are only performed on models with similar mass-loss rates and velocity properties. These models have similar atmospheric structures, which in turn result in similar colour-colour loops. Because no models were obviously unrealistic, no models were discarded using the colour-colour loop criterion. Combining the conclusions from the wind velocity and mass-loss rate comparison with those from the RV curve comparison, the ideal inner boundary condition has Δϕ_s between -0.2 and 0.0, with Δϕ_tot ∼ 0.2. There is thus a degeneracy when it comes to the shape of the luminosity variation. It is not possible to pin down one ideal inner boundary condition from these tests, only that it should have Δϕ_tot ∼ 0.2. However, the results do indicate that there is an inherent offset between the radial movement and the luminosity variation of Mira stars. The standard boundary condition, which has previously been used extensively in grid calculations such as <cit.> and <cit.>, produces realistic wind velocity and mass-loss rate combinations for the M2 model. If phase information about the ascension of the shock wave is not of interest, the standard boundary condition can therefore be used in the models.
Rescaling the photospheric variations of the luminosity and the gas layers found by the 1D radial pulsation models of <cit.>, and comparing them with the inner boundary conditions used in this work, we find an overlap with slightly asymmetric DARWIN models (Δϕ_s ∼ -0.05) with a small Δϕ_tot of 0.05. The DARWIN models with these inner boundary conditions do reproduce realistic combinations of wind velocities and mass-loss rates and good colour-colour loops; however, the line splitting in the RV curves occurs too early in the pulsation cycle when compared to observed RV curves. § SUMMARY AND CONCLUSIONS In this paper we investigate the influence of the inner boundary conditions used in the DARWIN models to simulate the observable variations of AGB stars, evaluating the effects on the resulting atmospheric structure and wind properties. The results can be summarised as follows: * The DARWIN models are sensitive to the inner boundary, and using different boundary conditions can result in significant differences for the mass-loss rates (about three orders of magnitude) and wind velocities (about one order of magnitude). However, not all of the resulting models are realistic; * The mass-loss rates seem to saturate (here at 1.5 × 10^-6 M_⊙ yr^-1), but changing the inner boundary will change the wind velocity; * All the models with boundary conditions that produced realistic combinations of mass-loss rates and wind velocities also resulted in synthetic colour-colour loops that agree with observations; * There is an inherent phase difference between ϕ_bol and ϕ_v for the DARWIN models of ∼ 0.1. This has previously been suggested to be the case for observed stars; * Phase information can be derived from RV curves; DARWIN models with Δϕ_tot ∼ 0.2 result in a line-splitting phase that corresponds to observations; * The 1D radial pulsation models of <cit.> indicate that the total phase difference Δϕ_tot is smaller than Δϕ_tot ∼ 0.2; * Using the standard boundary condition results in realistic mass-loss rates, wind velocities and colour-colour loops in the case of M-type stars; however, the line splitting of the CO Δv = 3 line will occur at the wrong ϕ_v. It can therefore be concluded that grid studies of DARWIN models, such as <cit.> and <cit.>, that use the standard inner boundary condition should produce realistic mass-loss rates and wind expansion velocities. The observational results (RV data as well as FTS spectra of χ Cyg, S Cep, and W Hya) were kindly provided by Th. Lebzelter and K. Hinkle. S.H. would like to acknowledge the support from the Swedish Research Council (Vetenskapsrådet). The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at UPPMAX. § DESCRIPTION OF THE BOUNDARY CONDITION §.§ Inner boundary Originally in the DARWIN code the effects of stellar pulsation were described by two variable physical quantities at the inner boundary: R_in(t) and L_in(t). The first, R_in(t), is the radial variation of the innermost gas layer. This lower boundary layer is impermeable, meaning no gas flows across it. The second, L_in(t), describes the variation in the luminosity. The inner boundary is placed below the stellar photosphere, at around R ≈ 0.9 R_⋆. The R_in(t) variation has the form R_in(t) = R_0 + (Δu_p P/2π) sin(2π t/P), where Δu_p is the velocity amplitude, P is the pulsation period, and R_0 the average radial distance of the boundary.
This corresponds to a gas velocity of u_in(t) = Δu_p cos(2π t/P). Assuming a constant flux at the inner boundary, the luminosity becomes L_in(t) ∝ R^2_in(t). To better match observations of the bolometric flux variation, a free parameter f_L was introduced in <cit.>, so that the amplitude of the luminosity can be adjusted separately from the radial amplitude. The original form of the luminosity variation is ΔL_in(t) = L_in - L_0 = f_L ((R^2_in(t) - R^2_0)/R^2_0) × L_0 = f_L ([1 + (Δu_p P/2π R_0) sin(2π t/P)]^2 - 1) × L_0, where L_0 is the average luminosity of the star. In this paper we alter the luminosity variation, but keep the form of the radial variation. If a phase shift Δϕ_p is introduced between R_in(t) and L_in(t), but the shape is kept unchanged, the luminosity variation becomes ΔL_in(t, Δϕ_p) = f_L ([1 + (Δu_p P/2π R_0) sin(2π(t/P + Δϕ_p))]^2 - 1) × L_0. The asymmetric luminosity variation is achieved by describing the luminosity variation at the inner boundary by a modified, smoothed Fourier saw-tooth curve, ΔL_in(t, w(Δϕ_s), Δϕ_p) = k ∑_n=1^N (1/n^w(Δϕ_s)) sin(2πn(t/P + Δϕ_p)), where k is the amplitude, set to match the amplitude from Eq. (<ref>), P is the pulsation period and w(Δϕ_s) is a smoothing factor. If w(Δϕ_s) = 1 we retain the saw-tooth shape; however, when w is increased, ΔL_in will approach a sinusoidal curve. As seen in Fig. <ref>, Δϕ_s is a measure of how asymmetric the curve is, specifying how far from the original L_in the phase of the maximum has been shifted. This measure is analogous to Δϕ_p. This form also allows for different values of Δϕ_p. §.§ Estimation of w The parameter w, from Eq. (<ref>), is a function of Δϕ_s, and the relationship between w and Δϕ_s can be seen in Fig. <ref>. For a simpler extraction of the w-value, a polynomial of the form w(Δϕ_s) ≈ ∑_m=0^M a_m Δϕ_s^m is fitted. A good fit was found for M = 5. Table <ref> shows the constants for this fit. §.§ Number of terms used When using the Fourier series from Eq. (<ref>), the number of terms N needed to avoid significant unwanted overshooting close to the discontinuity (Gibbs phenomenon) depends on the value of Δϕ_s. With a higher Δϕ_s, the luminosity variation will be more asymmetric and more prone to overshooting, which requires a larger N. To estimate the number N of Fourier terms needed for convergence at different Δϕ_s, ΔL_in for one period calculated using N Fourier terms is compared to one period with N+1 terms. When the difference between ΔL_in(N) and ΔL_in(N+1) becomes small, it is assumed that ΔL_in(N) has converged. The difference between the two curves is measured with the least-squares error, defined as L2 = √((1/x_steps) ∑_i^x_steps (f_i - g_i)^2), with f_i = ΔL_in(N+1)_i and g_i = ΔL_in(N)_i. This is a measure of the average difference between corresponding points on the curves, and is usually denoted as the least-squares error or the L2 norm. Convergence is assumed when L2 < 0.001. An overview of the number of Fourier terms N used in this work, for each value of Δϕ_s, is given in Table <ref> and Fig. <ref>. | http://arxiv.org/abs/1706.08332v1 | {
"authors": [
"S. Liljegren",
"S. Höfner",
"K. Eriksson",
"W. Nowotny"
],
"categories": [
"astro-ph.SR"
],
"primary_category": "astro-ph.SR",
"published": "20170626120035",
"title": "Pulsation-induced atmospheric dynamics in M-type AGB stars. Effects on wind properties, photometric variations and near-IR CO line profiles"
} |
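As a numerical illustration of the boundary-condition appendix above, the Python sketch below evaluates the smoothed saw-tooth luminosity variation ΔL_in and applies the L2 convergence criterion used to fix the number of Fourier terms N. All parameter values (k, P, w, Δϕ_p) are placeholders of ours, not values from the paper, and w is passed directly rather than via the polynomial w(Δϕ_s).

```python
import numpy as np

def delta_L_in(t, P, k, w, dphi_p, N):
    """Smoothed Fourier saw-tooth luminosity variation at the inner boundary.

    Delta L_in(t) = k * sum_{n=1}^{N} n^{-w} * sin(2*pi*n*(t/P + dphi_p)).
    w = 1 gives a saw-tooth; larger w approaches a sinusoid.
    """
    n = np.arange(1, N + 1)[:, None]
    return k * np.sum(np.sin(2.0 * np.pi * n * (t / P + dphi_p)) / n**w, axis=0)

def l2_norm(f, g):
    """L2 = sqrt( (1/x_steps) * sum_i (f_i - g_i)^2 )."""
    return np.sqrt(np.mean((f - g) ** 2))

# Convergence: increase N until Delta L_in(N) and Delta L_in(N+1) agree.
P, k, w, dphi_p = 1.0, 1.0, 2.0, 0.1          # placeholder parameters
t = np.linspace(0.0, P, 1000)
N = 1
while l2_norm(delta_L_in(t, P, k, w, dphi_p, N + 1),
              delta_L_in(t, P, k, w, dphi_p, N)) >= 1e-3:
    N += 1
print(f"converged with N = {N} Fourier terms")
```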
On tree-decompositions of one-ended graphs Johannes Carmesin Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, United Kingdom, Florian Lehner Mathematics Institute, University of Warwick, Zeeman Building, Coventry CV4 7AL, United Kingdom (Florian Lehner was supported by the Austrian Science Fund (FWF), grant J 3850-N32), and Rögnvaldur G. Möller Science Institute, University of Iceland, IS-107 Reykjavík, Iceland (Rögnvaldur G. Möller acknowledges support from the University of Iceland Research Fund) December 30, 2023 =========================================================================================================================================== § INTRODUCTION The experimental evidence of neutrino oscillations is one of the most important discoveries in particle physics. The first model-independent evidence of neutrino oscillations was obtained in 1998 by the atmospheric neutrino experiment Super-Kamiokande <cit.>, in 2002 by the solar neutrino experiment SNO <cit.>, and later by accelerator and reactor neutrino experiments. The existence of neutrino oscillations implies that neutrinos are massive particles and that the three flavor neutrinos ν_e, ν_μ, ν_τ are mixtures of the neutrinos with definite masses ν_i (with i = 1, 2, 3). The phenomenon of neutrino oscillations is being studied in a variety of experiments <cit.> which fully confirm this quantum phenomenon in different disappearance and appearance channels. The mixing matrix U_PMNS contains three mixing angles, already known, and one CP-violating phase for flavor oscillations. Interacting neutrinos have left-handed chirality. Knowing that neutrinos are massive, the most fundamental open problem is the determination of the nature of neutrinos with definite mass: are they four-component Dirac particles with a conserved total lepton number L, distinguishing neutrinos from antineutrinos, or two-component truly neutral (no electric charge and no total lepton number) self-conjugate Majorana particles <cit.>? For Dirac neutrinos, like for quarks and charged leptons, their masses can be generated in the Standard Model of particle physics by spontaneous breaking of the gauge symmetry with the Higgs scalar, if there were additional right-handed sterile neutrinos. But the Yukawa couplings would then be unnaturally small compared with those of all other fermions. A Majorana ΔL = 2 mass term, with the active left-handed neutrinos only, leads to definite-mass neutrinos with no definite charge. However, there is no mechanism in the Standard Model able to generate this Majorana mass, so an important conclusion in fundamental physics arises: Majorana neutrinos are an irrefutable proof of physics beyond the Standard Model. Due to the Majorana condition of neutrinos with definite mass being their own antiparticles, Majorana neutrinos have additional CP-violating phases <cit.> beyond the Dirac case. Neutrino flavor oscillation experiments cannot answer the fundamental question of the nature of massive neutrinos, because in these flavor transitions the total lepton number L is conserved.
In order to probe whether neutrinos are Dirac or Majorana particles, we need to study observables violating the total lepton number L. The difficulty encountered in these studies is well illustrated by the so-called "confusion theorem" <cit.>, stating that in the limit of zero mass there is no difference between Dirac and Majorana neutrinos. As all known neutrino sources produce highly relativistic neutrinos (except for the present cosmic neutrino background in the universe), the ΔL = 2 observables are highly suppressed. Up to now, there is a consensus that the highest known sensitivity to small Majorana neutrino masses can be reached in experiments on the search for the L-violating neutrinoless double-β decay process (0νββ): ^AZ → ^A(Z+2) + 2e^-, where ^AZ is a nucleus with atomic number Z and mass number A. The two-neutrino double-β decay process (2νββ): ^AZ → ^A(Z+2) + 2e^- + 2ν̅_e is allowed by the Standard Model for some even-even nuclei for which the single β decay or electron capture is forbidden. The process (<ref>) represents an irreducible background in the search for (<ref>), which needs an excellent energy resolution in order to separate the definite peak in (<ref>) from the high-energy tail of the 2e^- spectrum in (<ref>). Dozens of experiments around the world are seeking out a positive signal of 0νββ. The most favorable decays for the experimental search are those with a high mass difference between the ground-state neutral atoms. The most sensitive limits at present are from GERDA-Phase II <cit.> for ^76Ge, located at the Laboratori Nazionali del Gran Sasso (LNGS), and from KamLAND-Zen <cit.> for ^136Xe, located at the Kamioka Observatory. If the decay process is mediated by the exchange of light Majorana neutrinos, the mismatch between the e-flavor neutrino and the definite-mass neutrinos ν_i in the Majorana propagator generates a decay amplitude proportional to the effective Majorana neutrino mass m_ββ ≡ ∑_i U^2_ei m_ν_i, which is a coherent combination of the three neutrino masses. Its determination would then provide a measure of the absolute neutrino mass scale. The effective mass in Eq.(<ref>) depends on the mixing angles and two relative CP phases of the neutrinos in the U_PMNS mixing matrix. Assuming that neutrinos are Majorana particles, the present knowledge of mixing angles and neutrino mass differences, from neutrino flavor oscillations, produces the information <cit.> condensed in Figure <ref> for the fundamental quantity m_ββ. Present experimental limits <cit.>, indicated in Figure <ref>, are approaching the interval of m_ββ values predicted for the inverted hierarchy of the neutrino mass spectrum, Δm^2_13 < 0. The sign of Δm^2_13 is still an open question in neutrino physics and it is a subject of current experimental interest <cit.>. There is an alternative to 0νββ by means of the mechanism of neutrinoless double electron capture (0νECEC): ^AZ + 2e^- → ^A(Z-2)^*. This is actually a mixing between two states of two different neutral atoms, differing in the total lepton number L by two units and with the same baryonic number A, and not a process conserving energy and momentum in general. The daughter atom is in an excited state with two electron holes, and its decay provides the signal for (<ref>). Ref. <cit.> first pointed out that the monumental coincidence of the initial energy of the parent atom and that of the intermediate excited atom would give rise to a large enhancement of the decay probability.
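To make the coherent combination defining m_ββ concrete, the Python sketch below evaluates |m_ββ| = |∑_i U_ei^2 m_i| for a normal-ordering example. The numerical mixing angles, mass-squared differences, lightest mass and Majorana phases are illustrative placeholders of roughly the size reported by global fits, not the values used in this work.

```python
import numpy as np

def m_bb(m_light, dm2_21, dm2_31, s12_sq, s13_sq, alpha21, alpha31):
    """|m_bb| = |sum_i U_ei^2 m_i| for normal mass ordering.

    U_e1^2 = c12^2 c13^2, U_e2^2 = s12^2 c13^2 e^{i alpha21},
    U_e3^2 = s13^2 e^{i alpha31}; masses (eV) follow from the lightest
    mass and the mass-squared differences; phases in radians.
    """
    m1 = m_light
    m2 = np.sqrt(m1**2 + dm2_21)
    m3 = np.sqrt(m1**2 + dm2_31)
    c12_sq, c13_sq = 1.0 - s12_sq, 1.0 - s13_sq
    u2 = np.array([c12_sq * c13_sq,
                   s12_sq * c13_sq * np.exp(1j * alpha21),
                   s13_sq * np.exp(1j * alpha31)])
    return abs(np.dot(u2, np.array([m1, m2, m3])))

# Placeholder oscillation parameters (illustrative only):
print(m_bb(m_light=0.01, dm2_21=7.5e-5, dm2_31=2.5e-3,
           s12_sq=0.31, s13_sq=0.022, alpha21=0.0, alpha31=np.pi))
```

Scanning the lightest mass and the two Majorana phases in such a sketch reproduces the familiar allowed bands of |m_ββ| as a function of the mass scale.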
The concept of resonant enhancement of 0νECEC was further developed in <cit.> for the exceptional circumstance of almost degeneracy between the parent and daughter atomic states in (<ref>). The almost-matching condition is fulfilled when the 2X-ray decay occurs through the tail of the width of the atomic state, as shown schematically in Figure <ref>. These works stimulated many experimental searches <cit.> of candidates when the remarkable trap technique for precision measurements of atomic masses became available. The mixing amplitude was calculated in <cit.> from the diagram in Figure <ref> and, in a good approximation, it can be factorized, leading to M_21 = m_ββ^* (G_F cosθ_C/√(2))^2 ⟨F_21⟩ (g_A^2/2π) M_0ν. Here G_F is the Fermi coupling constant, θ_C is the Cabibbo angle, ⟨F_21⟩ gives the probability amplitude of finding the two electrons in the nucleus, M_0ν is the nuclear matrix element, which is of the order of the inverse nuclear radius, g_A is the axial-vector nucleon coupling, and the effective Majorana neutrino mass m_ββ appears as the complex conjugate of the expression (<ref>) for neutrinoless double beta decay. The experimental activity in recent years has in turn stimulated the calculation <cit.> of the nuclear matrix elements for the cases of interest. A list of likely resonant transitions was provided in <cit.>, excluding some of those previously suggested in <cit.>. After the improvements in measurements of atomic masses, the remaining candidates include ^152Gd→^152Sm, ^164Er→^164Dy and ^180W→^180Hf for atomic mixing to the daughter atom, with the nucleus in the ground state and having two holes in the inner atomic shells. More recent detailed analyses, using the state of the art in nuclear QRPA and IBM models, agree in the results, showing that the most promising known candidates are ^152Gd and ^180W. The case of ^152Gd→^152Sm mixing and decay is particularly attractive. The values of the relevant parameters are the experimental Δ = M_1 - M_2 = (0.91 ± 0.18) keV <cit.> for the masses of the parent "1" and daughter "2" atoms, Γ = 0.023 keV <cit.> for the two-hole atomic width, and the Q-value of the ground-state-to-ground-state transition Q = (55.70 ± 0.18) keV <cit.>. The theoretical mixing is M_21 = 10^-24 [m_ββ/0.1 eV] eV. As seen, the optimal resonant enhancement condition is still off by at least a factor 30, implying a loss of 3 orders of magnitude in the expected X-ray rate from the parent atom. In Section 2 we set the formalism for the two-state atom mixing <cit.> using a non-normal, non-Hermitian Hamiltonian and obtain the non-orthogonal states with definite time evolution: one metastable mixed state and one short-lived mixed state. In Section 3 we develop the natural time history of an initial parent ground-state atom, a history which is governed by the different time scales of oscillation, the lifetime of the short-lived state and the lifetime of the metastable state, thus obtaining the expected natural populations, at present observable times, of the three states involved, including the ground state of the daughter atom. Stimulated by the natural population inversion, we contemplate in Section 4 the prospects that could be opened by both an enhanced rate of stimulated X-ray emission from the metastable state and the absorption from the ground-state daughter atom.
In Section 5 we present our conclusions and outlook. § THE EVOLUTION HAMILTONIAN In the basis of the |^AZ⟩ and |^A(Z-2)^*⟩ states, which we'll refer to as 1 and 2, the dynamics of this two-state system of interest is governed by the Hamiltonian ℋ = 𝐌 - (i/2) 𝚪 = [ M_1 M_21^*; M_21 M_2 ] - (i/2) [ 0 0; 0 Γ ], with a Majorana ΔL = 2 mass mixing M_21 as given by Eq.(<ref>). The anti-Hermitian part of this Hamiltonian is due to the instability of |^A(Z-2)^*⟩, which de-excites into |^A(Z-2)_g.s.⟩, external to the two-body system in Eq.(<ref>), emitting its two-hole characteristic X-ray spectrum. The appropriate non-Hermitian Hamiltonian formalism for describing the mixing of a two-state unstable system is known since Weisskopf-Wigner <cit.> and it was used in many instances. It has been employed <cit.> with the objective of reproducing the rate, induced by atom mixing, as previously given in Ref. <cit.>. Besides being non-Hermitian, ℋ is not a normal operator, i.e. [𝐌, 𝚪] ≠ 0. As a consequence, 𝐌 and 𝚪 are not compatible. The states of definite time evolution, eigenstates of ℋ, have complex eigenvalues and are given in non-degenerate perturbation theory <cit.> by |λ_L⟩ = |1⟩ + α|2⟩, λ_L ≡ E_L - (i/2)Γ_L = M_1 + α^2 [Δ - (i/2)Γ], |λ_S⟩ = |2⟩ - β^*|1⟩, λ_S ≡ E_S - (i/2)Γ_S = M_2 - (i/2)Γ - α^2 [Δ - (i/2)Γ], with Δ = M_1 - M_2. As seen in Eq.(<ref>), Γ_L,S are not the eigenvalues of the 𝚪 matrix. The eigenstates are modified at first order in M_21, α = M_21/(Δ + (i/2)Γ), β = M_21/(Δ - (i/2)Γ), so the "stationary" states of the system don't have well-defined atomic properties: both the number of electrons and their atomic properties are a superposition of Z and Z-2. Also, these states are not orthogonal; their overlap is given by ⟨λ_S|λ_L⟩ = α - β = -i M_21 Γ/(Δ^2 + (1/4)Γ^2), with its non-vanishing value due to the joint presence of the mass mixing M_21 and the decay width Γ. Notice that Im(M_21) originates a real overlap. As seen in Eq.(<ref>), the modifications in the corresponding eigenvalues appear at second order in M_21 and they are equidistant with opposite sign. Since these corrections are small, from now on we will use the values E_L ≈ M_1, E_S ≈ M_2, Γ_L ≈ |α|^2 Γ, Γ_S ≈ Γ. The only relevant correction at order α^2 is the one to Γ_L, since |1⟩ was a stable state: even if it is small, the mixing produces a non-zero decay width. This result shows that, at leading order, the Majorana mixing becomes observable through Γ_L ∝ |α|^2. The value of α in Eq.(<ref>) emphasizes the relevance of the condition Δ ∼ Γ, which produces a resonant enhancement <cit.> of the effect of the ΔL = 2 mass mixing M_21. § NATURAL TIME HISTORY FOR INITIAL |^AZ⟩ As seen in Eq.(<ref>), the states |^AZ⟩ and |^A(Z-2)^*⟩ are not the stationary states of the system. For an initially prepared |^AZ⟩, the time history is far from trivial and the appropriate language to describe the system short times after is that of atom oscillations <cit.> between |^AZ⟩ and |^A(Z-2)^*⟩, due to the interference of the amplitudes through |λ_S⟩ and |λ_L⟩ in the time evolution. The time-evolved |^AZ⟩ state becomes |^AZ(t)⟩ = e^-iλ_L t|λ_L⟩ - α e^-iλ_S t|λ_S⟩ and the appearance probability is then given by |⟨^A(Z-2)^*|^AZ(t)⟩|^2 = |α|^2 { 1 + e^-Γt - 2e^-(1/2)Γt cos(Δ·t) }, with an oscillation angular frequency Δ. The characteristic oscillation time τ_osc = 2πΔ^-1 is the shortest time scale in this system. For t ≪ τ_osc, one has |⟨^A(Z-2)^*|^AZ(t)⟩|^2 ≈ |M_21|^2 t^2, induced by the mass mixing. The next shortest characteristic time in this system is the decay time τ_S = Γ^-1, associated to the |λ_S⟩ state.
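A quick numerical cross-check of these perturbative expressions is straightforward. The Python sketch below compares the exact width of the metastable eigenstate, obtained by diagonalising the 2×2 non-Hermitian Hamiltonian, with the perturbative Γ_L = |α|²Γ. The sanity check uses an exaggerated mixing so that double-precision arithmetic can resolve the induced width; the physical estimate then uses the ^152Gd-like numbers quoted above (Δ = 0.91 keV, Γ = 0.023 keV, |M_21| ~ 10^-24 eV), where only the perturbative formula is numerically usable. Setting M_2 = 0 fixes the irrelevant overall mass offset and is our choice of convention.

```python
import numpy as np

hbar_eV_s = 6.582e-16  # eV*s

def widths(Delta, Gamma, M21):
    """Exact vs perturbative width of the metastable state |lambda_L>.

    Exact: diagonalise H = [[M1, M21*], [M21, M2 - i*Gamma/2]] with
    M1 = Delta, M2 = 0. Perturbative: Gamma_L = |alpha|^2 * Gamma with
    alpha = M21 / (Delta + i*Gamma/2). All inputs in eV.
    """
    H = np.array([[Delta, np.conj(M21)],
                  [M21, -0.5j * Gamma]], dtype=complex)
    evals = np.linalg.eigvals(H)
    lam_L = evals[np.argmax(evals.imag)]     # least negative Im -> metastable
    alpha = M21 / (Delta + 0.5j * Gamma)
    return -2.0 * lam_L.imag, abs(alpha) ** 2 * Gamma

# 1) Sanity check with an exaggerated mixing (resolvable in double precision):
print(widths(Delta=0.91e3, Gamma=0.023e3, M21=50.0))

# 2) Rough physical estimate with M21 ~ 1e-24 eV, perturbative formula only
#    (the exact width underflows at this scale):
alpha = 1e-24 / (0.91e3 + 0.5j * 0.023e3)
Gamma_L = abs(alpha) ** 2 * 0.023e3
print(f"Gamma_L ~ {Gamma_L:.2e} eV, tau_L ~ {hbar_eV_s / Gamma_L:.2e} s")
```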
For τ_osc ≪ t ≪ τ_S, the only change with respect to Eq.(<ref>) is that the interference region disappears, and the two slits |λ_L⟩ and |λ_S⟩ in (<ref>) contribute incoherently, |⟨^A(Z-2)^*|^AZ(t)⟩|^2 ≈ |α|^2 (2 - Γt). For t ≫ τ_S, the contribution of |λ_S⟩ disappears and the appearance probability simply becomes |⟨^A(Z-2)^*|^AZ(t)⟩|^2 = |α|^2. In other words, the initially prepared |^AZ⟩ state evolves towards the stationary metastable state |λ_L⟩, |^AZ(t)⟩ → e^-iλ_L t|λ_L⟩, with the long lifetime τ_L = Γ_L^-1 from Eq.(<ref>). For a realistic time resolution δt in an actual experiment, this regime is the interesting one, with the behavior in Eq.(<ref>). As shown in Figure <ref>, the different time scales involved in this problem are thus τ_osc ≪ τ_S ≪ δt ≪ t ≪ τ_L, where t refers to the elapsed time since the production of ^AZ, either by nature or in the lab; given the smallness of the mixing, the metastability of the state (<ref>) is valid even for cosmological times. Therefore, for any time between the two scales τ_S and τ_L, the populations of the three states involved are given by the probabilities τ_S ≪ t ≪ τ_L ⟹ P_L(t) ≈ 1 - Γ_L t, P_S(t) ≈ 0, P_g.s.(t) ≈ |α|^2 Γt, where P_g.s.(t) refers to the population of the ground state of the ^A(Z-2) atom after the decay of the unstable "stationary" state |λ_S⟩ in (<ref>), with rate Γ. No matter whether t refers to laboratory or cosmological times, the linear approximation in t is excellent. With this spontaneous evolution of the system, an experiment beginning its measurements a time t_0 after the ^AZ was produced will probe the three-level system with relative populations P_L ≈ 100%, P_S ≈ 0, P_g.s. ≈ |α|^2 Γ t_0. We discover two methods, involving the third state beyond the mixed states[One may wonder whether there is, for Δ > 0, a spontaneous emission of lower-energy X-rays from |λ_L⟩ to |λ_S⟩ leading to a regeneration of the short-lived mixed state. Observing in (<ref>) the atom mixing of these states, the dynamics of this process would be that of the Compton amplitudes for the Z and (Z-2)^* atoms, whereas the kinematics corresponds to two-photon emission instead of scattering. At these intermediate energies between atomic and nuclear physics, the Compton amplitude T_2γ can be taken to be an incoherent sum of the electron contributions <cit.>, T_2γ = α T_2γ^Z-2 - β T_2γ^Z; T_2γ^Z = (Ze^2/m)(ϵ'^*·ϵ^*), where m is the electron mass and ϵ the polarization vectors of the photons. A straightforward calculation of the rate for this e.m. |λ_L⟩ → |λ_S⟩ transition, when compared to the transition to the daughter atom ground state, gives a branching ratio of the order 10^-7.], to be sensitive to the resonant Majorana mixing of atoms: * Spontaneous emission from the metastable state to the daughter atom ground state. The population in the upper level |λ_L⟩, as shown in Eq.(<ref>), decreases with time as P_L(Δt) ≈ 1 - Γ_L Δt, where Δt = t - t_0, due to the decay of the metastable "stationary" state |λ_L⟩ to |^A(Z-2)_g.s.⟩. This process is associated to the spontaneous emission of X-rays with a rate Γ_L, considered in the literature after the concept of resonant mixing was introduced in Ref. <cit.>. For one mole of ^152Gd, the X-ray emission rate would be of order 10^-12 s^-1 ∼ 10^-5 yr^-1. The initial state in the transition at observable times, being |λ_L⟩, tells us that the total energy of the two-hole X-ray radiation is displaced by Δ with respect to the characteristic |^A(Z-2)^*⟩ → |^A(Z-2)_g.s.⟩ X-ray spectrum, i.e.
its energy release is the Q-value between the two atoms in their ground states (as seen in Figure <ref>). * Daughter atom population. The presence of the daughter atom in the parent ores (see Eq. <ref>) can be probed, e.g., by geochemical methods. For one mole of the nominally stable ^152Gd isotope produced at the time of the Earth's formation, the values in Figure <ref> would predict an accumulated number of order 10^4 ^152Sm atoms. This observable could be of interest for cosmological times t_0 since, contrary to ββ-decay, in the ECEC case there is no irreducible background from a 2ν channel for a resonant atom mixing. We would like to emphasize that, even though Γ_L = |α|^2 Γ ensures probability conservation, an interpretation of Eqs.(<ref>) and (<ref>) in different terms is of interest. On the one hand, Γ_L is the rate for the decay of |λ_L⟩ at any time t, which accounts for observable 1. On the other hand, the population of the daughter atom in the ground state is obtained from the mixing probability leading to |λ_S⟩ in Eq.(<ref>) at all times, given by |α|^2, times its decay rate to the ground state, Γ_S = Γ. This mixing × decay temporal evolution explains the non-zero population of ^A(Z-2)_g.s., producing the second observable. § STIMULATED TRANSITIONS §.§ Emission from |λ_L⟩ A careful reading of Eqs.(<ref>) shows that the metastable state |λ_L⟩ and the ground state |^A(Z-2)_g.s.⟩ have a natural population inversion, with an overwhelming abundance of the long-lifetime upper level of the system. This result suggests exploiting the bosonic properties of the X-radiation, used as a signal of the Majorana mixing in this problem, and considering the external action of an X-ray beam to stimulate the emission from the metastable level to the ground state of this atomic system. Stimulated radiation for the emission from |λ_L⟩ to the ground state |^A(Z-2)_g.s.⟩ could then enhance the rate, and we present an estimate of the gain which could be envisaged in future facilities of X-ray beams. A setup with an incident pulsed beam allows the observation of low-rate events in directions outside the beam direction and the control of background conditions in the absence of the beam. Therefore, one discovers a third observable * Stimulated emission from |λ_L⟩ to |^A(Z-2)_g.s.⟩. The natural population inversion between the ground state and the metastable "stationary" state |λ_L⟩ gives rise to the possibility of stimulating the decay |λ_L⟩ → |^A(Z-2)_g.s.⟩. The experimental signature of this process would be the emission of X-rays with total energy equal to the Q-value of the process, just like in the first observable of spontaneous emission. For the emission between the two levels |λ_L⟩ → |^A(Z-2)_g.s.⟩ of radiation with angular frequency ω, stimulated radiation is described in terms of the Einstein coefficients <cit.> with an induced rate[For the sake of clarity, throughout this discussion we keep all ħ and c factors.] dN_L/dt = -(π^2 c^3/ħω^3) ρ_ω Γ_L N_L, where N_L is the population of the upper metastable |λ_L⟩ level, Γ_L its width and ρ_ω is the energy density of the beam per unit of angular frequency, i.e. ρ_ω = ΔE/(c Δt S Δω). Therefore, this observable is enhanced with respect to the first one by a gain factor G = (π^2 c^3/ħω^3) ρ_ω, which is the ratio between the stimulated and spontaneous emission rates. In order to produce a sizable gain, one should devise a setup with as large a ρ_ω as possible. The transition energy of this system is of order tens of keV, so a high-luminosity X-ray beam is mandatory.
Such high-energy beams are produced at free-electron laser (FEL) facilities, through a kind of laser consisting of very-high-speed electrons moving freely through a magnetic structure. Free-electron lasers are tunable and have the widest frequency range of any laser type, currently ranging in wavelength from microwaves, through terahertz radiation and infrared, to the visible spectrum, ultraviolet, and X-rays. The highest frequencies are obtained in XFEL facilities like the running SLAC Linac Coherent Light Source (LCLS) and the commissioned European XFEL (EXFEL) at DESY. The determination of the gain factor one could achieve in these facilities is clearer after rewriting the spectral energy density (<ref>) in terms of beam parameters, ρ_ω = (ħ/c) (ΔN/(Δt S)) [Δω/ω]^-1. Taking ΔN/Δt as the number of photons per pulse duration, S as the beam section and Δω/ω as the full width at half maximum (FWHM) spectral width, one finds the gain factor G = (ħ(ħc)^2 π^2/(ħω)^3) (ΔN/(Δt S)) [Δω/ω]^-1 to be written in terms of clearly defined beam properties, where ΔN/(Δt S) is the luminosity L of the beam. At EXFEL, a sound simulation of the conditions of the machine <cit.> gives, for typical energies of tens of keV, the expected number of photons per pulse duration dN/dt = 10^10 fs^-1 and the spectral width Δω/ω = 1.12 × 10^-3. Nanofocusing of these X-ray FELs has been contemplated <cit.>; using a beam spot of the order of 100 nm would lead to a gain factor from (<ref>) of G ∼ 100. The continuous interaction of these X-rays with a mole of ^152Gd atoms would provide a stimulated rate of order 10^-10 s^-1 ∼ 10^-3 yr^-1. One should notice that this rate of events assumes constant irradiation of the whole target. The straightforward setup of a cylindrical target alongside the pulsating X-ray beam presents different issues. Most notoriously, pulsed beams limit the enhancing time to a fraction of the running time. EXFEL manages to produce 2.7 × 10^4 pulses per second, so that the fraction of effective time is of order 10^-9; LCLS-II expects to produce pulses at 1 MHz, increasing this number by two orders of magnitude, but still far away from a promising factor. Furthermore, radiation of these energies has an attenuation length in Gd of tens of microns, limiting the amount of material one could use to a fraction of a mole. This setup has the general drawback that the high energy density effect associated with the small beam spot size is lost when considering the small interaction volume. The attenuation of the beam is associated to its interaction with the sample, which is dominated by the photoelectric effect and, to a lesser extent, inelastic Compton scattering, leading to ionization. Successive interactions of the secondary electrons will heat the material. A recent simulation <cit.> of this effect under realistic experimental conditions on a cylindrical target, assuming the extreme limit that the whole absorption power is converted into heating power, shows that a temperature of about 700^∘C is reached for an incoming beam of spot size 100 nm and an average of 10^14 X-ray photons/s. Since this temperature is proportional to the flux, in all high-flux experiments like the one contemplated in this work, the small-interaction-volume target is actually destroyed. The design of a macroscopic sample with a very large number of thermally isolated micro-targets, built on a plane in transversal motion synchronized with the pulse frequency of the beam, is a subject of current interest <cit.>.
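As a rough numerical companion to the quoted G ∼ 100, the Python sketch below evaluates the gain factor from the beam parameters above (ΔN/Δt = 10^10 photons/fs, Δω/ω = 1.12 × 10^-3, 100 nm spot). Two conventions in the sketch are assumptions of ours: the beam section S is taken as a circular spot of 100 nm diameter, and the photon energy is scanned, since G ∝ ω^-3 makes the result very sensitive to it; with these choices, photon energies around 15 keV reproduce G of order 100.

```python
import numpy as np

hbar = 1.0546e-34   # J*s
c = 2.9979e8        # m/s
eV = 1.6022e-19     # J

def gain(E_photon_keV, dN_dt=1e10 / 1e-15, spot_fwhm_m=100e-9,
         dw_over_w=1.12e-3):
    """Gain G = (pi^2 c^3 / (hbar w^3)) * rho_w for the beam parameters above.

    rho_w = (hbar/c) * (dN/dt) / S * (dw/w)^{-1}, with S a circular spot of
    diameter spot_fwhm_m (an illustrative convention, not from the paper).
    """
    w = E_photon_keV * 1e3 * eV / hbar          # angular frequency (rad/s)
    S = np.pi * (0.5 * spot_fwhm_m) ** 2        # beam section (m^2)
    rho_w = (hbar / c) * dN_dt / S / dw_over_w  # energy density per unit omega
    return np.pi**2 * c**3 / (hbar * w**3) * rho_w

for E in (10.0, 15.0, 30.0, 55.7):              # photon energies in keV
    print(f"E = {E:5.1f} keV  ->  G ~ {gain(E):8.1f}")
```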
The use of this micro-target approach in order to stimulate the ΔL = 2 emission rate should be explored after a suitable candidate is found. On the other hand, the limiting factors in the expected integrated rate of events also suggest an alternative ingenuity program, more along the lines of micro-particles inserted into a dreamed X-ray resonant cavity. §.§ Absorption from |^A(Z-2)_g.s.⟩ A different observable may also be considered. The existing population of the daughter atom in its ground state is, by itself, a signal of the atom Majorana mixing, as discussed in the previous Section, as a relic of the previous history with an initial parent atom. In addition, this population can be identified by using an intense photon beam, leading to the characteristic absorption spectrum of the daughter atom and its subsequent decay to the ground state. * Stimulated absorption spectrum of the daughter atom. In the presence of a light beam, the daughter atom population would absorb those characteristic frequencies corresponding to its energy levels, which would then de-excite emitting light of the same frequency. In the case of the one-mole ^152Gd ore that we mentioned in the previous section, all 10^4 Sm atoms could be easily excited to any of their ∼1 eV levels using a standard pulsed laser of order 100 fs pulse duration, with a mean power of 5 W and a pulse rate of 100 MHz. Notice that these numbers imply, for a laser with FWHM spot size ∼40 μm, an absorption rate dN_g.s./dt|_abs = -60% N_g.s. [100 ns/τ] fs^-1. Since Sm levels have lifetimes between 10-1000 ns <cit.>, one expects to excite them all during the 100 fs pulse. Disentangling the parent and daughter lines should not be difficult: the relatively small number of atomic absorption lines (compared to atomic emission lines) and their narrow width (a few pm) make spectral overlap rare, and it is not expected between Z and (Z-2) atoms. It is worth noting from the results of this section that the bosonic nature of the atomic radiation is a property that can help in getting observable rates of the atom Majorana mixing, including the stimulated X-ray emission from the parent atom as well as the detection of the presence of the daughter atoms by means of their characteristic absorption lines. The actual values correspond to the specific case of ^152Gd → ^152Sm, which is still off the resonance condition by at least a factor 30, implying a factor 10^3 in the rates. § CONCLUSIONS Neutrinoless double electron capture in atoms is a quantum mixing mechanism between the neutral atoms ^AZ and ^A(Z-2)^* with two electron holes. It becomes allowed through Majorana neutrino mediation, responsible for this ΔL = 2 transition. This Majorana mixing leads to the X-ray de-excitation of the |^A(Z-2)^*⟩ daughter atomic state which, under the resonance condition, has no Standard Model background from the two-neutrino decay. The intense experimental activity looking for atomic candidates satisfying the resonance condition by means of precise measurements of atomic masses, thanks to the trapping technique, has already led to a few cases of remarkable enhancement effects, and there is still room for additional adjustments of the resonance condition. With this situation, it is important to understand the complete time evolution of an atomic state since its inception and whether one can find, from this information, different signals of the Majorana mixing, including the possible enhancement due to the bosonic nature of atomic transition radiation.
These points have been addressed in this work. The effective Hamiltonian for the two mixed atomic states leads to definite non-orthogonal states of mass and lifetime, each of them violating global lepton number, one being metastable with a long lifetime and the other having a short lifetime. For an initial atomic state there are time periods of atom oscillations, with frequency given by the mass difference, and of the decay of the short-lived state, which are not observable with present time resolutions. For observable times, the system of the two atoms has three relevant states for discussing transitions: one highly populated state with long lifetime, one empty state with short lifetime and the ground state of the daughter atom with a small population as a result of the past history. As a consequence, this is a case of natural population inversion, suggesting the possibility of stimulated radiation transitions besides the natural spontaneous X-ray emission. The gain factor for stimulated emission due to a present radiation energy density per unit frequency has been adapted to the case of an X-ray beam in terms of conventional parameters like its luminosity, the energy and the spectral width. Using simulated previsions of the now commissioned European XFEL facility, with a beam spot size of 100 nm, we obtain an expected gain of 100 in the X-ray emission rate from the metastable long-lived state. This substantial gain by stimulating the X-ray emission from |λ_L⟩ to |^A(Z-2)_g.s.⟩ for interacting X-ray photons with atoms is, however, not exploited in a straightforward setup of a single pulsed beam directed towards a cylindrical target. The limiting factors of the pulse frequency and the small interaction volume suppress that ideal benefit for the integrated number of events. The small population of the ground-state daughter atom at observable times would be by itself a proof of the atomic Majorana mixing, given the absence of the standard two-neutrino decay of the parent atom. Besides other geochemical methods, absorption rates and the subsequent emission by an intense photon beam for the daughter atom, due to its non-vanishing population of the ground state, can be clearly contemplated. The results obtained in this work demonstrate that the knowledge of the time history of a nominally stable atom since its inception can be a source of inspiration to find appropriate observables in the search of evidence for ΔL = 2 double electron capture. The natural population inversion at observable times suggests stimulating the X-ray emission with gain factors which could become significant for appropriate setups. On the other hand, the presence of the daughter atom ground state can be signaled by looking for its characteristic absorption spectrum. Taking into account the ongoing searches for new isotope candidates with a better fulfillment of the resonance condition, it remains to be seen whether these processes, with the ideas on stimulating the transitions, could become actual alternatives in the quest for the Dirac/Majorana nature of neutrinos. The authors would like to acknowledge scientific discussions with, and advice from, Massimo Altarelli, Michael Block, Albert Ferrando, Gaston Garcia and Pablo Villanueva. This research has been supported by MINECO Project FPA 2014-54459-P, Generalitat Valenciana Project GVPROMETEO II 2013-017 and Severo Ochoa Excellence Centre Project SEV-2014-0398. A.S. acknowledges the MECD support through the FPU14/04678 grant. 1 Super-Kamiokande Collaboration (Y.
Fukuda et al.), Evidence for oscillation of atmospheric neutrinos, Phys. Rev. Lett. 81 (1998) 1562, [arXiv:hep-ex/9807003]. 2 SNO Collaboration (Q.R. Ahmad et al.), Direct evidence for neutrino flavor transformation from neutral current interactions in the Sudbury Neutrino Observatory, Phys. Rev. Lett. 89 (2002) 011301, [arXiv:nucl-ex/0204008]. 3 See M. Mezzetto in “Neutrino 2016”, South Kensington, London (2016). 4 E. Majorana, Teoria simmetrica dell'elettrone e del positrone, Il Nuovo Cimento 14 (1937) 171. 5 S.M. Bilenky, J. Hosek and S.T. Petcov, On oscillations of Neutrinos with Dirac and Majorana Masses, Phys. Lett. B94 (1980) 495. 6 M. Doi, T. Kotani, H. Nishimura, K. Okuda and E. Takasagi, CP Violation in Majorana Neutrinos, Phys. Lett. B102 (1981) 323. 7 J. Bernabéu and P. Pascual, CP Properties of the Leptonic Sector for Majorana Neutrinos, Nucl. Phys. B228 (1983) 21. 8 C. Ryan and S. Okubo, Nuovo Cimento Suppl. 2 (1964) 234. 9 K.M. Case, Reformulation of the Majorana Theory of the Neutrino, Phys. Rev. 107 (1957) 307. 10 GERDA Collaboration (Valerio D'Andrea et al.), Status Report of the GERDA Phase II Startup, [arXiv:1604.05016 [physics.ins-det]]. 11 KamLAND-Zen Collaboration (A. Gando et al.), Search for Majorana Neutrinos near the Inverted Mass Hierarchy Region with KamLAND-Zen, Phys. Rev. Lett. 117 (2016) 082503, [arXiv:1605.02889 [hep-ex]]. 12 F. Vissani, Signal of neutrinoless double beta decay, neutrino spectrum and oscillation scenarios, JHEP 06 (1999) 022, [arXiv:hep-ph/9906525]. 13 F. Feruglio, A. Strumia and F. Vissani, Neutrino oscillations and signals in β and 0ν 2β experiments, Nucl. Phys. B637 (2002) 345, [arXiv:hep-ph/0201291]. 14 See F. Deppisch in “Neutrino 2016”, South Kensington, London (2016). pascoli S. Pascoli, A portal to new physics, CERN Courier, July-August 2016. 15 S. Wren, Neutrino Mass Ordering Studies with PINGU and IceCube/DeepCore, [arXiv:1604.08807 [physics.ins-det]]. 16 A. Kouchner, KM3NeT - ORCA: measuring the neutrino mass ordering in the Mediterranean, J. Phys. Conf. Ser. 718 (2016) no. 6, 060230. new18 R. G. Winter, Double K capture and single K capture with positron emission, Phys. Rev. 100 (1955) 142. new19 M. B. Voloshin, G. V. Mitselmakher, and R. A. Eramzhyan, Conversion of an atomic electron into a positron and double β^+ decay, JETP Lett. 35 (1982) 656. 17 J. Bernabeu, A. De Rujula and C. Jarlskog, Neutrinoless double electron capture as a tool to measure the electron neutrino mass, Nucl. Phys. B223 (1983) 15. 18 S. Eliseev, C. Roux, K. Blaum, M. Block, C. Droese, F. Herfurth, H.-J. Kluge, M. I. Krivoruchenko, Yu. N. Novikov, E. Minaya Ramirez, L. Schweikhard, V. M. Shabaev, F. Šimkovic, I. I. Tupitsyn, K. Zuber and N. A. Zubova, Resonant enhancement of neutrinoless double-electron capture in ^152Gd, Phys. Rev. Lett. 106 (2011) 052504. 36 J.L. Campbell and T. Papp, Widths of the atomic K-N7 levels, At. Data Nucl. Data Tables 77 (2001) 1. 19 V.S. Kolhinen, V.V. Elomaa, T. Eronen, J. Hakala, A. Jokinen, M. Kortelainen, J. Suhonen and J. Aysto, Accurate Q value for the ^74Se double-electron-capture decay, Phys. Lett. B684 (2010) 17. 20 A.S. Barabash, Ph. Hubert, A. Nachab and V. Umatov, Search for β^+EC and ECEC processes in ^74Se, Nucl. Phys. A785 (2007) 371, [arXiv:hep-ex/0610046]. 21 A.S. Barabash, Ph. Hubert, A. Nachab, S.I. Konovalov, I.A. Vanyushin and V.I. Umatov, Search for β^+EC and ECEC processes in ^112Sn and β^-β^- decay of ^124Sn to the excited states of ^124Te, Nucl. Phys. A807 (2008) 269, [arXiv:0804.3849 [nucl-ex]]. 22 S. Eliseev, D. Nesterenko, K.
Blaum, M. Block, C. Droese, S. Eliseev, F. Herfurth, E. Minaya Ramirez, Yu. N. Novikov, L. Schweikhard and K. Zuber, Q values for neutrinoless double-electron capture in ^96Ru, ^162Er, and ^168Yb, Phys. Rev. C83 (2011) 038501. 23 V.S. Kolhinen, T. Eronen, D. Gorelov, J. Hakala, A. Jokinen, A. Kankainen, J. Rissanen, J. Suhonen and J. Aysto, On the resonant neutrinoless double-electron-capture decay of ^136Cs, Phys. Lett. B697 (2011) 116. 24 M. Goncharov, K. Blaum, M. Block, C. Droese, S. Eliseev, F. Herfurth, E. Minaya Ramirez, Yu. N. Novikov, L. Schweikhard and K. Zuber, Probing the nuclides ^102Pd, ^106Cd, and ^144Sm for resonant neutrinoless double-electron capture, Phys. Rev. C84 (2011) 028501. 25 C. Droese, K. Blaum, M. Block, S. Eliseev, F. Herfurth, E. Minaya Ramirez, Yu. N. Novikov, L. Schweikhard, V.M. Shabaev, I.I. Tupitsyn, S. Wycech, K. Zuber and N.A. Zubova, Probing the nuclide ^180W for neutrinoless double-electron capture exploration, Nucl. Phys. A875 (2012) 1, [arXiv:1111.6377 [nucl-ex]]. 26 M.F. Kidd, J.H. Esterline and W. Tornow, Double-electron capture on ^112Sn to the excited 1871 keV state in ^112Cd: A possible alternative to double-β decay, Phys. Rev. C78 (2008) 035504. 27 A.S. Barabash, Ph. Hubert, A. Nachab, S.I. Konovalov and V. Umatov, Search for β^+EC and ECEC processes in ^112Sn, Phys. Rev. C80 (2009) 035501, [arXiv:0909.1177 [nucl-ex]]. 28 Z. Sujkowski and S. Wycech, Neutrino-less double electron capture: A tool to search for Majorana neutrinos, Phys. Rev. C70 (2004) 052501, [arXiv:hep-ph/0312040]. 29 F. Simkovic and M.I. Krivoruchenko, Mixing of neutral atoms and lepton number oscillations, Phys. Part. Nucl. Lett. 6 (2009) 298. 30 J. Suhonen and M.T. Mustonen, Nuclear matrix elements for rare decays, Prog. Part. Nucl. Phys. 64 (2010) 235. 31 M. I. Krivoruchenko, F. Simkovic, D. Frekers and A. Faessler, Resonance enhancement of neutrinoless double electron capture, Nucl. Phys. A859 (2011) 140, [arXiv:1012.1204 [hep-ph]]. 32 Dong-Liang Fang, K. Blaum, S. Eliseev, A. Faessler, M.I. Krivoruchenko, V. Rodin and F. Simkovic, Evaluation of the resonance enhancement effect in neutrinoless double-electron capture in ^152Gd, ^164Er and ^180W atoms, Phys. Rev. C85 (2012) 035503, [arXiv:1111.6862 [hep-ph]]. 33 T.R. Rodríguez and G. Martínez-Pinedo, Calculation of nuclear matrix elements in neutrinoless double electron capture, Phys. Rev. C85 (2012) 044310, [arXiv:1203.0989 [nucl-th]]. 34 J. Suhonen, Nuclear matrix elements for the resonant neutrinoless double electron capture, Eur. Phys. J. A48 (2012) 51. 35 J. Kotila, J. Barea and F. Iachello, Neutrinoless double electron capture, Phys. Rev. C89 (2014) no. 6, 064319, [arXiv:1509.01927 [nucl-th]]. ww V. Weisskopf and E. Wigner, On the natural line width in the radiation of the harmonic oscillator, Z. Phys. 65 (1930) 18. galindo A. Galindo and P. Pascual, Quantum Mechanics, Springer Berlin Heidelberg (1990), DOI:10.1007/978-3-642-84129-3. rosa-clot J. Bernabeu and M. Rosa-Clot, Dispersive Approach To The Nuclear Compton Amplitude And Exchange Effects, Nuovo Cim. A65 (1981) 87. rosa-clot2 M. Ericson and M. Rosa-Clot, Compton Scattering and Pion Number in Nuclei, Phys. Lett. B188 (1987) 11. einstein R. C. Hilborn, Einstein coefficients, cross sections, f values, dipole moments, and all that, [arXiv:physics/0202029]. altarelli E. Schneidmiller and M. Yurkov, DESY note, private communication from M. Altarelli. gain K. Yamauchi, M. Yabashi, H. Ohashi, T. Koyama and T. Ishikawa, Nanofocusing of X-ray free-electron lasers by grazing-incidence reflective optics, J.
Synchrotron Rad. 22 (2015) 592, DOI:10.1107/S1600577515005093. Heating H. Wallander and J. Wallentin, Simulated sample heating from a nanofocused X-ray beam, J. Synchrotron Rad. 24 (2017) 925, DOI:10.1107/S1600577517008712. Microtarget P. Roedig et al., High-speed fixed-target serial virus crystallography, Nat. Meth. 14 (2017) 805, DOI:10.1038/nmeth.4335. SmLifes E. A. Den Hartog and J. E. Lawler, Radiative lifetimes of neutral samarium, J. Phys. B: At. Mol. Opt. Phys. 46 (2013) 185001, DOI:10.1088/0953-4075/46/18/185001 | http://arxiv.org/abs/1706.08328v2 | {
"authors": [
"Jose Bernabeu",
"Alejandro Segarra"
],
"categories": [
"hep-ph"
],
"primary_category": "hep-ph",
"published": "20170626112859",
"title": "Stimulated transitions in resonant atom Majorana mixing"
} |
Adhesion and volume constraints via nonlocal interactions lead to cell sorting José Antonio Carrillo Annachiara Colombi Marco Scianna =============================================================================== Abstract.- We demonstrate how concepts of the statistical mechanics of interacting particles can have important implications in the choice of interaction potentials to model qualitative properties of cell aggregates in theoretical biology. We illustrate this by showing cell sorting phenomena for cell groups with different adhesiveness parameters, ranging from well-mixed cell aggregates to full segregation of cell types, passing through engulfment, via adhesiveness tuning. Keywords: interaction potentials, nonlocal models, H-stability, cell sorting, cell-cell interactions Mathematics Subject Classification: § INTRODUCTION Adhesive and repulsive cell-cell interactions, and the resulting cell patterning, are at the basis of a wide range of biological processes, ranging from morphogenesis to cancer growth and invasion. For example, defects in the spatial organization of multipotent stem cells in animal embryos lead to severe malformations of adult organs <cit.>. Further, the compact configuration of epithelial monolayers is fundamental in wound healing scenarios. Finally, the dispersion of highly motile malignant individuals triggers the metastatic transition of tumor progression <cit.>. An accurate analysis of the spatial pattern of cell aggregates is indeed a fundamental issue in theoretical biology. In this respect, apart from extreme situations, most cell systems are characterized by an ordered crystalline structure, where the component cells stabilize at a given minimal distance, larger than the nuclei dimensions. From a mathematical point of view, the patterning of cell aggregates and the intercellular distance can be well described by discrete models. They actually approach the biological problem from a phenomenological point of view, focusing on the cell level of abstraction and preserving the identity and the behavior of individual elements (for comprehensive reviews the reader is referred to <cit.>). In more detail, these techniques represent biological cells as one or a set of discrete units, while cell morphology is restricted according to some underlying assumptions. In cell-based methods, the behavior of each individual is then prescribed by a relatively small set of rules, which in general may depend on their phenotype and on the signals received from the neighbors and/or the environment. However, in this work, we focus on cell dynamics due to intercellular interactions (in particular cell resistance to compression and cell adhesiveness), and neglect both the effect of the environmental conditions and any other phenomena (such as, for instance, proliferation and death processes) possibly regulating the evolution of an ensemble of cells.
In this respect, we here propose a class of interaction kernels/potentials adapted to reproduce reported behaviors of cell aggregates at the microscopic level, such as cell sorting between cells with different adhesiveness. One key ingredient to distinguish different cell behaviors is related to a fundamental concept in the statistical mechanics of interacting particles called H-stability of the interaction potential, see <cit.>. This concept was already used in the modelling of swarming or collective behavior of animal populations <cit.>, and it turned out to be crucial to determine conditions for large coherent structures of these models such as flocks and mills.

In this context, the main objective of this work is to discuss which choices of interaction potentials lead to crystalline structures and typical distances between cell nuclei (hereafter also called intercellular distances) in microscopic models of cell groups. Further, we propose a preliminary strategy to attack the inverse problem of finding well-adapted interaction potentials for different applications in theoretical biology. Finally, we show that this simple model can lead to complicated instabilities if cells with different adhesiveness parameters are involved in the aggregation process. More specifically, we observe well-mixed cell groups when heterotypic adhesiveness properties are stronger than homotypic ones; full segregation in the opposite situation (i.e., when homotypic adhesiveness properties are stronger than heterotypic ones); and finally partial engulfment of different cell groups for intermediate values of the adhesiveness.

§ MATHEMATICAL MODEL

Let us consider a biological system composed of N cells of the same phenotype/cell lineage, i.e., characterized by the same biophysical properties (mass and dimension) and behavior, which are distributed over a planar surface. Cells are here represented as discrete entities, i.e., as material particles with concentrated mass m and characterized by the position of their center of mass, \mathbf{x}_i(t) \in \mathbb{R}^2 with i=1,\dots,N. The configuration of the system at a given instant t is then given by the vector:

\mathbf{X}(t) = \{\mathbf{x}_1(t), \dots, \mathbf{x}_N(t)\}.

In order to reproduce cell dynamics arising from intercellular interactions, let us first recall that individual cells in a biological environment typically evolve under strong damping, i.e., in an extremely viscous regime (see <cit.> for a detailed discussion). Hence, we assume that the velocity of individuals, and not their acceleration, is proportional to the forces applied to the system (the so-called overdamped force-velocity response). In this respect, the evolution in time of the spatial distribution of the cell aggregate is given by a system of first-order ordinary differential equations. In particular, we further assume that cell dynamics due to direct intercellular interactions results from the superposition of the contributions of pairwise forces, and we consistently exclude cell self-interactions.
Although direct cell-cell forces may involve several phenomena (such as, for instance, contact-dependent chemical or mechanical interactions), we hereafter take into account only intercellular repulsive interactions, which reproduce cell resistance to compression due to size, and attractive interactions, which conversely implement cell-cell adhesiveness relying upon the protrusion and expression of membrane adhesion molecules (e.g., filopodia, cadherins). Based on these working hypotheses, it is consistent to assume that the velocity contribution arising from pairwise interactions depends on the relative distance between the two cells involved and is aligned along the line ideally connecting them. The system dynamics is therefore given by

\frac{d\mathbf{x}_i(t)}{dt} = -m \sum_{\substack{j=1 \\ j\neq i}}^{N} K(|\mathbf{x}_i(t)-\mathbf{x}_j(t)|)\,\frac{\mathbf{x}_i(t)-\mathbf{x}_j(t)}{|\mathbf{x}_i(t)-\mathbf{x}_j(t)|}, \qquad i=1,\dots,N,

where |·| denotes the Euclidean norm, and K: ℝ_+ → ℝ (with units μm/(μg s)) defines the contribution to the cell velocity due to pairwise interactions. Notice that the cell mass m can be included in the time scale of the system (<ref>). However, we prefer to keep it there to possibly compare the effect of variations of N and m, both separately (with the consequent increase/decrease of the total mass) and simultaneously (keeping fixed the total mass of the aggregate).

It is obvious from the minimal model (<ref>) that all the biological mechanisms taken into account are implemented by the shape of the radial interaction kernel K. In this respect, we say that K is repulsive when K(|x − y|) < 0 and attractive when K(|x − y|) > 0. On one hand, in order to reproduce cell resistance to compression, it is reasonable to state that cell-cell interactions are repulsive if the relative distance between the interacting cells is lower than the minimal space needed to avoid overlapping, i.e., the mean cell diameter, hereafter denoted by d_R (see Fig. <ref>, left panel). On the other hand, as cell-cell adhesiveness relies upon the protrusion of cell filopodia and the expression of cadherins, cell-cell interactions are attractive if the relative distance between two interacting cells is large enough to avoid cell-cell repulsion but small enough to allow the formation of bonds between membrane adhesive molecules. We thereby assume that K(|x − y|) > 0 if the cell relative distance is larger than the mean cell diameter d_R and lower than the maximal extension of the deformable adhesive structures, hereafter denoted by d_A (see Fig. <ref>, central panel). Finally, as we are accounting only for cell resistance to compression and cell-cell adhesiveness, we assume that K(|x − y|) = 0 if the relative distance between two cells is larger than the maximal extension of the membrane adhesive molecules, d_A. This ensures that two cells are not able to mutually interact at distances larger than d_A (see Fig. <ref>, right panel). Similar nonlocal attractive terms to model cell adhesion have already been proposed in the literature of macroscopic and microscopic models of cell interactions for cancer invasion <cit.> and heterogeneous cell populations <cit.>. In many of these models repulsion is taken into account by (nonlinear) diffusion or drift saturation terms <cit.> at the macroscopic level. These nonlinear diffusion models can be obtained from microscopic nonlocal repulsive models in the right scaling limit <cit.>, hence our choice of a repulsive character of the kernel K near the origin. In principle, there are many options to make explicit the form of the interaction kernel K in Eq. (<ref>).
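For concreteness, the overdamped system above can be integrated with an explicit Euler scheme. The following minimal Python sketch is an illustration, not part of the original study: it assumes the kernel K is passed in as a vectorized callable (a concrete piecewise implementation is sketched further below), and the helper names `step` and `simulate` are our own choices.

```python
import numpy as np

def step(x, m, K, dt):
    """One explicit Euler step of dx_i/dt = -m * sum_{j != i} K(|x_i - x_j|) (x_i - x_j)/|x_i - x_j|."""
    diff = x[:, None, :] - x[None, :, :]      # pairwise differences x_i - x_j, shape (N, N, 2)
    dist = np.linalg.norm(diff, axis=-1)      # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)            # avoid division by zero on the diagonal
    w = K(dist) / dist                        # scalar weight of each pairwise contribution
    np.fill_diagonal(w, 0.0)                  # exclude self-interactions (j = i)
    vel = -m * np.einsum('ij,ijk->ik', w, diff)
    return x + dt * vel

def simulate(x0, m, K, dt=1e-3, n_steps=200_000):
    """Integrate until (approximately) stationary; x0 has shape (N, 2), positions in microns."""
    x = x0.copy()
    for _ in range(n_steps):
        x = step(x, m, K, dt)
    return x
```

Since the kernel vanishes beyond the finite range d_A, a cell-list or kd-tree neighbor search would reduce the O(N^2) cost per step, but the dense version above is adequate for the aggregate sizes considered in this work (N ≤ 200, or N = 817 in the inverse problem).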
As a first step to properly characterize the kernel K, let us notice that K: ℝ_+ → ℝ can always be considered as being derived from a scalar interaction potential u: ℝ_+ → ℝ such that u'(r) = K(r), and from a vector potential U: ℝ^2 → ℝ such that U(\mathbf{x}) = u(|\mathbf{x}|). Therefore, the right-hand side of Eq. (<ref>) can be rewritten in the following form

\frac{d\mathbf{x}_i(t)}{dt} = -m \sum_{\substack{j=1 \\ j\neq i}}^{N} \nabla U(\mathbf{x}_i(t) - \mathbf{x}_j(t)) = -m \sum_{\substack{j=1 \\ j\neq i}}^{N} K(|\mathbf{x}_i(t) - \mathbf{x}_j(t)|)\,\frac{\mathbf{x}_i(t) - \mathbf{x}_j(t)}{|\mathbf{x}_i(t) - \mathbf{x}_j(t)|}, \qquad i=1,\dots,N.

As already explained, the choice of the interaction kernel/potential is crucial in biological applications, as the resulting cell dynamics has to be reliable for the phenomena considered. In particular, our strategy is to explore the discrete stationary states of system (<ref>) for selected choices of interaction kernels/potentials, and to analyse the minimal intercellular/interparticle distance

d_{min}(t) = \min_{\substack{i,j=1,\dots,N \\ i\neq j}} |\mathbf{x}_i(t)-\mathbf{x}_j(t)|,

i.e., the minimal relative distance between cell nuclei. This quantity has a well-defined biological meaning, related for instance to cell survival or volume exclusion: in this respect, d_min(t) has to maintain its value above a threshold (consistently close to the nucleus diameter d_N) for any t, and will thereby allow us to determine suitable interaction potential shapes. Moreover, in the next sections, we also discuss the scaling properties of the stationary states (and of the corresponding minimal intercellular distance) depending on N and m. In fact, the minimal amount of space needed by each cell to survive does not depend on either the number of cells forming the aggregate N or their individual mass m; it is conversely given by characteristic cell dimensions as well as by nucleus and cytoplasm stiffness.

§ ANALYTICAL RESULTS

Let us summarize some of the theoretical results concerning the stationary states of (<ref>). First of all, notice that this system of equations has the structure of a gradient flow of the total potential energy

E_N(t) = \frac{m}{N^2} \sum_{\substack{i,j=1 \\ i\neq j}}^{N} U(\mathbf{x}_i(t)-\mathbf{x}_j(t)).

In particular, E_N is a Liapunov functional for (<ref>) and thus stable stationary states are among the (local or global) minimizers of the interaction energy E_N. It is well known that if the mass of the cells scales with the number of particles, i.e., mN ≃ M, then the system (<ref>) converges as N→∞ to a macroscopic equation, called the aggregation equation, under certain assumptions on the potential and the initial cell configuration, see <cit.> for instance. This is the so-called mean-field limit for interacting particle systems. Let us discuss some of the qualitative properties known for these (local) minimizers.

The first important feature is that the behavior of the solution is regulated by the repulsive part of the interactions, i.e., by the singularity of the potential/kernel at the origin. This fact was studied in <cit.>, where it was shown that as the potential gets more and more repulsive at the origin the particles distribute over larger and larger regions. In other words, while mild repulsion may allow for clustering of particles, singular repulsion leads to regular distributions of particles in the plane and a well-defined minimum interparticle distance.

The second (and more relevant to us) feature is the concept of H-stable potentials introduced in statistical mechanics, see <cit.>. Assume that the potential is essentially negligible at large distances, i.e., u is such that \lim_{r\to+\infty} u(r) = 0. Then H-stable potentials are such that, as the number of particles N→∞, the minimum interparticle distance converges to a fixed value.
Therefore, the particles tend to fill the whole plane and the diameter of the particle cloud grows as N→∞. If instead the potential is not H-stable, sometimes called catastrophic in the statistical mechanics literature, then the diameter of the cloud of particles tends to 0 as N→∞. Moreover, in the rescaled normalized mass limit mN ≃ M there exists a localized minimizer of the interaction energy. In short, only for H-stable potentials do we have a well-defined interparticle distance that does not depend on N as N→∞, for any fixed m. This was already pointed out in connection with swarming models for the collective behavior of animals in <cit.>, see also <cit.>, the reviews <cit.> and the references therein. The main results about the existence of global minimizers of the interaction energy in the not H-stable case are quite recent <cit.>. The following easy-to-check criterion to detect the H-stability of a potential is already given in <cit.>, see also <cit.> for further discussions and results in the not H-stable cases:

If \int_0^{+\infty} u(r)\, r\, dr > 0, then the potential is H-stable.

If \int_0^{+\infty} u(r)\, r\, dr < 0, then there exists a minimizer for the interaction energy in the mean-field limit mN ≃ M, and the potential is catastrophic or not H-stable.

Taking into account the above considerations and theoretical results, we will deal with a family of interaction kernels/potentials and investigate the behavior of a particle system under H-stable and not H-stable conditions, to showcase the previous considerations and to choose the right regimes for our purposes. Taking into account that the equilibrium configuration is mainly affected by the singularity of the interaction potential at the origin, we hereafter consider a set of interaction kernels characterized by a parabolic behavior in the attractive part and different shapes in the repulsive part <cit.>. Specifically, we here introduce a further differentiation of the cell repulsive neighborhood into two parts, accounting for the distinct compressibility of the cell nucleus and of the cytoplasm. In fact, the cell nucleus is typically less compressible than the cytoplasm, which is conversely highly deformable and easily squeezable. In this respect, let us denote the nucleus diameter by d_N (which is reasonably close to the cell radius, i.e., d_N = d_R/2), and assume a different explicit form of the interaction kernel depending on whether the intercellular distance is lower than d_N or falls in the range (d_N, d_R). In particular, we hereafter assume d_N = d_R/2 and deal with the following set of interaction kernels:

K(r) = \begin{cases}
-F_R \left(\frac{d_R}{2}\right)^{3-2s} r^{2s-3}, & 0 < r \le \frac{d_R}{2},\\[4pt]
\frac{2F_R}{d_R}\,(r - d_R), & \frac{d_R}{2} < r \le d_R,\\[4pt]
-\frac{4F_A\,(r-d_R)(r-d_A)}{(d_A-d_R)^2}, & d_R < r \le d_A,\\[4pt]
0, & r > d_A,
\end{cases} \qquad s \in (0,2],

where F_R and F_A (both with units μm/(μg s)) denote the repulsive and adhesive strengths of cell-cell interactions, respectively, while the parameter s characterizes the kernel behavior at the origin (see Fig. <ref>).

Interaction kernel with s≠1. - From Eq. (<ref>), by setting s≠1, the interaction potential reads as:

u(r) = \begin{cases}
-\frac{F_R}{2(s-1)} \left(\frac{d_R}{2}\right)^{3-2s} r^{2s-2} + C_1, & 0 < r \le \frac{d_R}{2},\\[4pt]
\frac{F_R}{d_R}\, r^2 - 2F_R\, r + C_2, & \frac{d_R}{2} < r \le d_R,\\[4pt]
-\frac{4F_A}{(d_A-d_R)^2}\left(\frac{r^3}{3} - \frac{(d_R+d_A)\, r^2}{2} + d_R d_A\, r\right) + C_3, & d_R < r \le d_A,\\[4pt]
C_4, & r > d_A,
\end{cases}

where the constants of integration C_1, C_2, C_3, C_4 ∈ ℝ are chosen so as to guarantee the continuity of the potential u(r) and such that \lim_{r\to+\infty} u(r) = 0. Then C_4 = 0 and the other constants are set as

C_1 = \frac{F_R d_R\, s}{4(s-1)} - \frac{2F_A(d_A-d_R)}{3}; \qquad C_2 = F_R d_R - \frac{2F_A(d_A-d_R)}{3}; \qquad C_3 = \frac{2F_A d_A^2\,(3d_R - d_A)}{3(d_A-d_R)^2},

by continuity at d_R/2, d_R and d_A. Thereby the interaction potential writes as

u(r) = \begin{cases}
-\frac{F_R}{2(s-1)} \left(\frac{d_R}{2}\right)^{3-2s} r^{2s-2} + \frac{F_R d_R\, s}{4(s-1)} - \frac{2F_A(d_A-d_R)}{3}, & 0 < r \le \frac{d_R}{2},\\[4pt]
\frac{F_R}{d_R}\, r^2 - 2F_R\, r + F_R d_R - \frac{2F_A(d_A-d_R)}{3}, & \frac{d_R}{2} < r \le d_R,\\[4pt]
-\frac{2F_A\left(2r^3 - 3(d_R+d_A)\, r^2 + 6 d_R d_A\, r - d_A^2(3d_R - d_A)\right)}{3(d_A-d_R)^2}, & d_R < r \le d_A,\\[4pt]
0, & r > d_A.
\end{cases}

Due to Theorem <ref>, the above interaction potential is H-stable when \int_0^{+\infty} u(r)\, r\, dr > 0. Taking into account that the values of the interaction radii d_R and d_A are defined according to the cell phenotype, H-stability translates into a constraint on the ratio between the repulsive and adhesive interaction strengths F_R and F_A, given by

\int_0^{+\infty} u(r)\, r\, dr = \frac{F_R d_R^3\,(11s+6)}{192\, s} - \frac{F_A}{30}\,(d_A-d_R)(3d_A^2 + 4 d_A d_R + 3d_R^2) > 0.

Then we have the following Corollary of Theorem <ref>.

The interaction potential defined in Eq. (<ref>), with s≠1, is H-stable when

\frac{F_R}{F_A} > \frac{32\, s\,(d_A-d_R)(3d_A^2 + 4 d_A d_R + 3d_R^2)}{5(11s+6)\, d_R^3} := F^*.

Interaction kernel with s=1 (hyperbolic case). - When the interaction kernel at the origin is characterized by s=1, the formulas can be obtained from the previous case with s≠1 by taking the limit as s → 1. This gives

u(r) = \begin{cases}
-\frac{F_R d_R}{2} \log r + \frac{F_R d_R}{2}\left(\frac{1}{2} + \log\frac{d_R}{2}\right) - \frac{2F_A(d_A-d_R)}{3}, & 0 < r \le \frac{d_R}{2},\\[4pt]
\frac{F_R}{d_R}\, r^2 - 2F_R\, r + F_R d_R - \frac{2F_A(d_A-d_R)}{3}, & \frac{d_R}{2} < r \le d_R,\\[4pt]
-\frac{2F_A\left(2r^3 - 3(d_R+d_A)\, r^2 + 6 d_R d_A\, r - d_A^2(3d_R - d_A)\right)}{3(d_A-d_R)^2}, & d_R < r \le d_A,\\[4pt]
0, & r > d_A.
\end{cases}

In this case, the H-stability Theorem <ref> gives the constraint presented in the following Corollary.

The interaction kernel in Eq. (<ref>) with s=1 is H-stable if

\frac{F_R}{F_A} > \frac{32\,(d_A-d_R)(3d_A^2 + 4 d_A d_R + 3d_R^2)}{85\, d_R^3} = F^*.

In particular, it is worth noticing that Eq. (<ref>) coincides with the relation obtained by setting s=1 in Eq. (<ref>) of Corollary <ref>.

§ NUMERICAL RESULTS

Accounting for the above theoretical/analytical considerations, this section is devoted to a series of numerical simulations performed to show how the dynamics of pairwise interacting particles are regulated by the H-stability of the interaction kernel and by its behavior at the origin. In particular, we here assume that the explicit form of the interaction kernel K: ℝ_+ → ℝ in system (<ref>) writes as in Eq. (<ref>) and perform several numerical tests by varying either the value of s (which characterizes the repulsive behavior of K at short distances) or the ratio F_R/F_A between the repulsive and adhesive interaction strengths (which conversely determines the H-stability of the interaction kernel).

In this perspective, let us first consider a cell aggregate constituted by N=100 individuals. In all realizations, cells are initially randomly distributed within a round area of radius equal to 40 μm centered at the origin, as shown in Fig. <ref> (left panel). In particular, the initial distribution of the aggregate \mathbf{X}(0) is such that for each cell i=1,…,N the minimal interparticle distance d_min(0), defined in Eq. (<ref>), is not null (in order to avoid that the centers of mass of distinct cells are initially located at the same position) and lower than d_A (so that each cell is initially able to interact with at least one other cell). According to <cit.> and the references therein, we hereafter set the cell biophysical properties (mass and size) as follows:

Param. | Description                          | Value [Unit]   | Ref.
m      | mean cell mass                       | 1.8·10^-3 μg   | <cit.>
d_R    | mean cell diameter                   | 20 μm          | <cit.>
d_A    | maximal extension of cell filopodia  | 60 μm          | <cit.>

From the analytical results provided in the previous section, it thereby follows that the H-stability of the interaction kernel K depends on the ratio of the interaction strengths F_R and F_A, according to the value of s, i.e., to the shape of the repulsive part of the interaction kernel. Specifically, we here investigate the following cases:

s   | 2     | 1.75  | 1.5   | 1.25  | 1     | 0.75  | 0.5   | 0.25
F^* | 38.40 | 37.26 | 35.84 | 34.02 | 31.62 | 28.29 | 23.37 | 15.36

where F^* is the minimum ratio between the repulsive and adhesive interaction strengths that makes the interaction kernel H-stable (see Eq. (<ref>) for the cases with s≠1 and Eq. (<ref>) when s=1).
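As a numerical cross-check of the criterion in Theorem <ref>, the sketch below implements the piecewise kernel reconstructed above, recovers u from K by integrating inward from d_A (since u' = K and u vanishes beyond d_A, u(r) = -∫_r^{d_A} K(t) dt), and evaluates the sign of ∫_0^{+∞} u(r) r dr. Function names and the grid resolution are illustrative choices; with d_R = 20 μm and d_A = 60 μm, the sign change it detects reproduces F^* ≈ 31.6 for s = 1.

```python
import numpy as np

d_R, d_A = 20.0, 60.0   # mean cell diameter and filopodia reach [um]

def kernel(r, s, F_R, F_A):
    """Piecewise kernel K(r): singular repulsion, linear repulsion, parabolic adhesion."""
    r = np.asarray(r, dtype=float)
    return np.piecewise(
        r,
        [r <= d_R / 2, (r > d_R / 2) & (r <= d_R), (r > d_R) & (r <= d_A)],
        [lambda t: -F_R * (d_R / 2) ** (3 - 2 * s) * t ** (2 * s - 3),
         lambda t: 2 * F_R / d_R * (t - d_R),
         lambda t: -4 * F_A * (t - d_R) * (t - d_A) / (d_A - d_R) ** 2,
         0.0])                                          # K = 0 beyond d_A

def h_stability_integral(s, F_R, F_A, n=200_001):
    """Evaluate int_0^inf u(r) r dr, with u(r) = -int_r^{d_A} K(t) dt (u' = K, u(d_A) = 0)."""
    r = np.linspace(1e-6, d_A, n)                       # small cutoff avoids the singularity at 0
    K = kernel(r, s, F_R, F_A)
    trap = (K[1:] + K[:-1]) / 2 * np.diff(r)            # trapezoid pieces of int K dt
    I = np.concatenate(([0.0], np.cumsum(trap)))        # running integral of K from the cutoff
    u = I - I[-1]                                       # u(r) = -int_r^{d_A} K(t) dt
    f = u * r
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(r))    # trapezoid rule for int u(r) r dr

for ratio in (10.0, 100.0):                             # F_R/F_A below and above F* for s = 1
    val = h_stability_integral(s=1.0, F_R=ratio, F_A=1.0)
    print(f"F_R/F_A = {ratio:5.0f}: integral = {val:10.1f} ->",
          "H-stable" if val > 0 else "not H-stable (catastrophic)")
```

Solving h_stability_integral(s, ρ, 1) = 0 for the ratio ρ, e.g., by bisection, recovers the tabulated thresholds F^* above.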
In this respect, we first perform a series of numerical simulations exploring the effect of variations of F_R/F_A for each one of the above-cited values of s. This allows us to observe how the equilibrium configuration of the system and the cell dynamics are regulated by the H-stability of a given interaction kernel/potential. In more detail, in all realizations we keep the strength of cell-cell adhesiveness F_A constantly equal to 1 μm/(μg s), so that the value of the cell resistance to compression F_R coincides with F_R/F_A and determines the H-stability of K. This means that we are fixing the adhesive characteristics of cell-cell interactions and focusing on the repulsive part; however, the discriminating parameter is still the ratio F_R/F_A. In particular, we test all the following values of the repulsive strength: F_R = 1, 10, 20, 30, 40, 50, 100, 1000 μm/(μg s). For all realizations, the evolution of the system has been observed until a stable equilibrium configuration is established, and d_min(t_F) denotes the minimal interparticle distance evaluated at the equilibrium. The graph in the right panel of Fig. <ref> then shows how the final minimal intercellular distance d_min(t_F) varies according to the value of the ratio F_R/F_A for each considered value of s. According to the above theoretical considerations, it clearly emerges that, independently of the slope of the repulsive part of the kernel (i.e., of s), the H-stability of the interaction kernel K regulates the minimum distance between the interacting particles in the equilibrium configuration. In fact, for any choice of the shape of K, we have that: if F_R/F_A < F^* (i.e., K is not H-stable), the minimal intercellular distance at equilibrium d_min(t_F) falls within the range [0, d_N]; while, if F_R/F_A > F^* (i.e., K is H-stable), d_min(t_F) ∈ [d_N, d_R]. From a biological point of view, these results can be interpreted as follows:

- not H-stable interaction kernels (i.e., F_R/F_A < F^*) are not able to avoid the superposition of cell nuclei, and thereby the observed equilibrium configurations are biologically unreliable (i.e., d_min < d_N);

- conversely, H-stable interaction kernels (i.e., F_R/F_A > F^*) allow cells to preserve an equilibrium intercellular distance large enough to survive (i.e., d_min > d_N).

In this respect, we can further observe that, starting from an initial distribution of cells constituting a single cluster, the minimal intercellular distance at equilibrium will never exceed the repulsion radius d_R, i.e., d_min(t_F) ≤ d_R, regardless of both the shape and the H-stability of the interaction kernel. In fact, dealing with a first-order model where cell dynamics is regulated solely by repulsive-attractive interactions, as soon as d_min(t) exceeds d_R the repulsive velocity component between any pair of cells vanishes and cells no longer drift apart. Referring again to the graph in the right panel of Fig. <ref>, we can moreover notice that in the case of not H-stable interaction kernels/potentials the minimal intercellular distance at equilibrium varies according to the singularity of the repulsive part of K. In fact, comparing the results obtained by setting the interaction strengths such that F_R/F_A < 15.36 (so that the interaction kernels/potentials are not H-stable for all considered choices of the value of s), we have that the value of d_min(t_F) increases as s decreases, i.e., as K becomes more singular near the origin.
On the other hand, we do not observe an analogous correlation between the minimal interparticle distance at equilibrium and the singularity of the kernel/potential near the origin in the case of H-stable interaction kernels, i.e., when the interaction strengths are such that F_R/F_A > 37.26 (see again Fig. <ref>, right panel).

In order to further highlight the above-commented difference between the not H-stable and H-stable regimes, we report in Fig. <ref> the equilibrium configurations of the aggregate observed in some representative cases among those described above. Specifically, we hereafter focus only on three possible choices for the behavior of the interaction kernel near the origin (i.e., referring to Eq. (<ref>), those corresponding to s = 1.75, 1, 0.25), and only on two values of the ratio F_R/F_A, i.e., F_R/F_A = 10 and 100, which respectively make the considered interaction kernels not H-stable and H-stable. Let us moreover notice that we are comparing the results arising when the interaction kernel at the origin is assumed either more regular than hyperbolic (s=1.75), hyperbolic (s=1), or more singular than hyperbolic (s=0.25). These final distributions further show that if the interaction kernel is not H-stable (as in the first row in Fig. <ref>), the individuals tend to clusterize and the final minimal intercellular distance depends on the behavior of the interaction kernel at the origin, i.e., on s. In particular, by setting s=1.75, cells divide into several clusters, with very small intercellular distances within the clusters. Further, the spatial distribution of cells at equilibrium appears more homogeneous for lower values of s, see the cases with s=1 and s=0.25, i.e., as the interaction kernel/potential becomes more singular at the origin. However, in all cases, the intercellular distance is always too small to allow cell survival. On the other hand, dealing with an H-stable interaction kernel (see the second row in Fig. <ref>), cells re-organize into a quite homogeneous distribution characterized by a minimum intercellular distance that allows cell survival. Moreover, such behavior arises for all the values of the parameter s reported here as well as for those considered above (i.e., s = 0.5, 0.75, 1.25, 1.5).

To investigate more deeply how the choice of a not H-stable interaction kernel rather than an H-stable one affects the system dynamics, we focus on the six representative cases reported in Fig. <ref> and analyze the effect of variations of the overall number N and of the individual mass m of the cells, both separately (thereby varying the overall mass of the aggregate) and simultaneously, keeping constant the overall mass of the aggregate M_tot = mN. In more detail, for each one of the representative cases considered in Fig. <ref>, we perform a series of numerical simulations to compare the above-commented equilibrium configurations (obtained with N=100 cells with mass m=0.0018 μg) with those arising if we deal with:

(i) a smaller/larger aggregate of cells whose individual mass m is still equal to 0.0018 μg (specifically, we investigate the cases with N=50 and N=200 cells);

(ii) an aggregate still formed by N=100 cells whose individual mass m is lower/higher than 0.0018 μg (in particular, we consider cells with m=0.0009 μg and m=0.0036 μg, respectively);

(iii) an aggregate whose overall mass M_tot = mN is kept equal to 0.18 μg, but where the number N and, in turn, the individual mass m of the cells differ from 100 and 0.0018 μg, respectively (in this case, we first consider an aggregate composed of N=50 cells with m=0.0036 μg; then we set N=200 and m=0.0009 μg). This is related to the mean-field limit as explained in Section <ref>.

In this perspective, the initial distributions used to perform the numerical simulations with N = 50, 100, 200 cells are reported in Fig. <ref>. It is worth noticing that, for each value of N, the initial distribution of the aggregate is defined regardless of the value of the individual cell mass m. Notice also that the effect of m can be absorbed in the time scale. However, we decided to keep it to show the behavior when we fix the normalization of the total mass, i.e., M_tot = mN.

The numerical results obtained by setting s=1.75 in Eq. (<ref>), i.e., dealing with an interaction kernel that is more regular than the hyperbolic kernel at the origin, are reported in the top row of Fig. <ref>. Analogously, the equilibrium configurations obtained by setting s=1 and s=0.25 in Eq. (<ref>) are reported in Fig. <ref> in the central and bottom rows, respectively. The results reported in the left panels of Fig. <ref> show the variation of the equilibrium configuration of the system (<ref>) when the values of the interaction strengths make the interaction kernel not H-stable (i.e., F_R/F_A < F^*). In particular, as in Fig. <ref>, we consider the case with F_R = 10 μm/(μg s) and F_A = 1 μm/(μg s), so that F_R/F_A = 10 < F^*. Conversely, the results shown in the right panels of Fig. <ref> refer to the case with F_R = 100 μm/(μg s) and F_A = 1 μm/(μg s), so that F_R/F_A = 100 > F^*, i.e., the interaction kernel is H-stable.

From these numerical results, it first emerges that variations of both the number of cells N and the individual cell mass m do not affect the characteristic pattern of the equilibrium configuration of the system, in the case of both a not H-stable (clusters) and an H-stable interaction kernel (homogeneous distribution). However, in each panel, a horizontal arrow highlights that variations in the number of component cells N (with fixed individual mass m = 0.0018 μg) affect the equilibrium configuration of the system differently according to the H-stability of the interaction kernel. On one hand, in all the left panels of Fig. <ref> (i.e., for values of the interaction strengths that make the interaction kernel in Eq. (<ref>) not H-stable), the increase of the overall number of cells N forming the aggregate in fact induces a growth of both the size and the number of the clusters characterizing the equilibrium configuration, whereas the equilibrium radius of the aggregate remains almost constant. On the other hand, in the right panels of Fig. <ref> (i.e., for values of the interaction strengths that make the interaction kernel in Eq.
(<ref>) H-stable), we have that the increase of the overall number of cells N within the aggregate leads to a growth of the equilibrium radius of the aggregate, while the minimum intercellular relative distance remains almost constant.

The effect of variations in the individual mass m of the cells, keeping fixed the overall number of cells N=100, is conversely indicated by a vertical arrow. In this case, the equilibrium configuration of the system is invariant, since we can absorb m in the time scale of the system, and we then only observe the corresponding variation of the equilibration times (see the values of t_eq reported at the bottom of each panel in Fig. <ref>).

Finally, in each panel, a diagonal arrow indicates the effect of variations of both the number N and the individual mass m of the cells constituting the aggregate, keeping constant the total mass of the aggregate, i.e., M_tot = mN = 0.18 μg. In this case, the variation of the pattern of the equilibrium configuration due to the increase of the number of cells N (and the consequent decrease of the individual mass m) is consistent with the effect observed when dealing with larger aggregates of cells with the same mass m (see the blue arrow). In the case of not H-stable interaction kernels, the equilibrium radius of the aggregate is almost constant in all cases, while t_eq decreases along the green arrow. Conversely, in the case of H-stable interaction kernels, the equilibrium radius of the aggregate increases with the number of cells, while t_eq increases along the green arrow.

Finally, in Fig. <ref>, we show the evolution in time of the minimum intercellular distance d_min(t) and of the diameter

d_max(t) = \max_{\substack{i,j=1,\dots,N \\ j\neq i}} |\mathbf{x}_i(t)-\mathbf{x}_j(t)|

for s=1.75, observed when dealing with different numbers of cells, i.e., N = 50, 100, 200, whose cell mass m is kept equal to 0.0018 μg. The behavior of d_min(t) and d_max(t) is similar for the other values of s, which we do not show here for simplicity. However, as already shown in Fig. <ref>, the value of s affects the minimal interparticle distance at equilibrium d_min(t_F) differently according to the H-stability of the interaction kernel. Comparing the behavior of the evolutions observed for different values of N, we can then notice that: (i) in the not H-stable cases, the minimum intercellular distance at equilibrium varies with the behavior of the interaction kernel at the origin: in particular, d_min(t_F) increases as the singularity of the interaction kernel at short distances gets stronger, i.e., as the value of s decreases. However, the value of d_min(t_F) decreases to zero as N→∞ for a given s, see also <cit.>. On the other hand, regardless of the behavior of the interaction kernel at the origin, the diameter of the aggregate at equilibrium d_max(t_F) is not affected by variations in the number and the mass of the cells; (ii) conversely, in the H-stable cases, the minimum intercellular distance converges to a fixed value d_min(t_F) (around 18 μm and smaller than d_R), depending slightly on the explicit form of the interaction kernel (i.e., on the value of s) and on the number of cells forming the aggregate N. Moreover, it stabilizes as N→∞ for a given s.
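These two diagnostics are cheap to monitor along a simulation. A minimal sketch follows; it reuses the `step` helper introduced earlier (itself an illustrative name), and the variables x0, m, K, dt and n_steps are assumed to be set up as in that sketch:

```python
import numpy as np
from scipy.spatial.distance import pdist

def min_max_distance(x):
    """Return (d_min, d_max): minimal intercellular distance and aggregate diameter."""
    d = pdist(x)               # condensed vector of all pairwise distances |x_i - x_j|, i < j
    return d.min(), d.max()

# record (t, d_min, d_max) every 1000 steps along a run
history, x = [], x0.copy()
for k in range(n_steps):
    x = step(x, m, K, dt)
    if k % 1000 == 0:
        history.append((k * dt, *min_max_distance(x)))
```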
It also follows that, regardless of the explicit form of the interaction kernel, the radius of the aggregate at the equilibrium state increases with the number of particles N, but it is not affected by variations of the individual mass m (which only affects the equilibrium time).
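The three comparisons above (varying N at fixed m, varying m at fixed N, and varying both at fixed M_tot = mN) can be scripted directly on top of the helpers sketched earlier. The routine below is illustrative: the sampling of the initial disc and the parameter values mirror the text, while the function names and the number of integration steps are our own assumptions.

```python
import numpy as np
from functools import partial

rng = np.random.default_rng(0)

def initial_disc(N, radius=40.0):
    """N random positions, uniform in area, inside a disc of the given radius (um)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    rad = radius * np.sqrt(rng.uniform(0.0, 1.0, N))
    return np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))

K = partial(kernel, s=1.0, F_R=100.0, F_A=1.0)    # H-stable: F_R/F_A = 100 > F* ~ 31.6

cases = ([(N, 0.0018) for N in (50, 100, 200)]    # horizontal arrow: fixed m, varying N
         + [(100, 0.0009), (100, 0.0036)]         # vertical arrow: fixed N, varying m
         + [(50, 0.0036), (200, 0.0009)])         # diagonal arrow: fixed M_tot = 0.18 ug

for N, m in cases:
    x_eq = simulate(initial_disc(N), m, K, dt=1e-3, n_steps=200_000)
    d_min, d_max = min_max_distance(x_eq)
    print(f"N={N:3d} m={m:.4f} ug -> d_min={d_min:6.2f} um, d_max={d_max:6.2f} um")
```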
§ BIOLOGICAL APPLICATIONS

§.§ Inverse problem

Accounting for the above considerations and numerical results, this section focuses on how to find a proper interaction kernel such that the typical equilibrium state of the system in (<ref>) resembles a cell distribution observed in a given experiment. In particular, we here consider the biological picture shown in Fig. <ref>: it is a frame of an experimental assay where a spheroid of ovarian cancer cells is placed on a two-dimensional Petri dish (kindly provided by Prof. Luca Munaron of the Department of Life Sciences and Systems Biology, Università degli Studi di Torino, Italy). This picture gives the spatial distribution of the nuclei (yellow dots) of the N=817 cells constituting the spheroid, and the aim of this section is to identify (at least) one of the interaction kernels defined in Eq. (<ref>), as well as the values of the interaction parameters, able to give a good approximation of the experimental distribution of cells shown in Fig. <ref>.

In this perspective, by assuming that each material point represents the center of mass of a single cell, it is first worth noticing that the proper interaction kernel has to ensure that the minimum intercellular distance required for cell survival is preserved during the whole system evolution. In this respect, the numerical results shown in the previous section suggest that it is preferable to opt for an H-stable interaction kernel rather than for a not H-stable one.
We therefore perform a series of simulations by assuming H-stable interaction kernels to implement the intercellular interactions, and compare the resulting equilibrium state of a system of 817 cells with the experimental cell distribution in Fig. <ref>. Entering into more detail, we here focus again only on three possible choices for the explicit form of the interaction kernel at the origin, i.e., referring to Eq. (<ref>), those denoted by s = 1.75, 1, 0.25, respectively; for each of these cases we take into account three distinct settings of the interaction parameters such that F_R/F_A > F^*, ensuring the H-stability of the interaction kernel. In particular, all numerical simulations are performed assuming d_R = 20 μm, d_A = 60 μm and F_A = 1 μm/(μg s), as in the previous section, whereas the repulsive strength F_R is respectively set equal to 50, 75, 100 μm/(μg s). Finally, the mass m of a cancer cell is fixed equal to 0.0018 μg, according again to <cit.>, and all numerical simulations are initialized from the same initial distribution of cells. The equilibrium distributions obtained in these cases are shown in Fig. <ref>. Each panel refers to a different choice of both the behavior of the interaction kernel at the origin (i.e., the value of s) and of the repulsive strength F_R, and for each case we plot together both the real positions of the cells (as represented in Fig. <ref>) and the equilibrium distribution given by the numerical simulations.

It clearly emerges that the worst approximation of the real distribution of cells occurs by setting s = 1.75 and F_R/F_A = 50 (i.e., the top left panel in Fig. <ref>), while it is not obvious how to choose between the other cases. In this respect, in order to compare the real distribution and the numerical equilibrium state, we use the Hungarian algorithm (also known as the Munkres assignment algorithm), a combinatorial optimization algorithm that returns a value C, termed the minimum cost, such that ε_mean = C/817 quantifies the mean distance between the numerical and the real position of each cell. We then report the values of the mean error ε_mean at the top of each panel in Fig. <ref>. Consistently with our above considerations about the difference between the numerical results and the real distribution of cells, the highest value of ε_mean occurs in the case characterized by s = 1.75 and F_R/F_A = 50 (see again the top left panel in Fig. <ref>), whereas in all other cases the values of ε_mean are similar; moreover, the value of ε_mean is always close to (and lower than) the value of d_R. From these results, we can conclude that, among these possible options to implement the cell interactions, the better approximation of the real distribution of cells is obtained by assuming that the interaction kernel is hyperbolic at the origin (i.e., s = 1 in Eq. (<ref>)) and by setting the repulsive strength F_R equal to 50 μm/(μg s) (see the bottom left panel in Fig. <ref>). However, it is worth noticing that we here compare only a few possible options for the explicit form of the interaction kernel and, moreover, we do not take into account the dependence of the final configuration on the initial condition used in the numerical simulations. It is therefore possible to improve the approximation of the experimental distribution of cells by replicating the above method for different assumptions about the behavior of K and by taking the dynamics into account, i.e., by also comparing the distributions of cells in time.
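The matching step can be reproduced with the assignment solver available in SciPy. The sketch below (function and variable names are our own illustrative choices) computes ε_mean between a simulated equilibrium x_sim and the observed nuclei positions x_obs, both arrays of shape (817, 2):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mean_matching_error(x_sim, x_obs):
    """Mean distance between optimally matched simulated and observed cell positions.

    linear_sum_assignment solves the assignment problem (Hungarian/Munkres) on the
    pairwise distance matrix; the minimum cost C divided by N gives eps_mean = C/N.
    """
    cost = np.linalg.norm(x_sim[:, None, :] - x_obs[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)
    return cost[row, col].mean()
```

For two sets of 817 positions, the mean of the matched entries equals C/817, i.e., the ε_mean reported in the panels.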
This is just a first step in the direction of identifying interaction potentials from experimental data according to the specific problem we are dealing with.

§.§ Cell sorting

In multicellular organisms, the relative adhesion of the various cell types to each other or to the noncellular components surrounding them is also fundamental. Since the late 1950s, it has been widely noticed that during embryonic development the behavior of cell aggregates resembles that of a viscous fluid. A random mixture of two types of embryonic cells, in fact, spontaneously reorganizes to reestablish coherent homogeneous tissues. A similar process is a key step also in the regeneration of a normal animal from aggregates of dissociated cells of adult hydra. It also explains the layered structure of the embryonic retina. These phenomena, commonly called cell sorting, involve neither cell division nor differentiation, but are entirely caused by spatial rearrangements of cell positions due to differences in the specific adhesivities; see <cit.> and references therein. Indeed, specific hierarchies of adhesive strengths lead to specific configurations of the cellular aggregate.

A simple and intuitive simulation reproducing biological cell sorting consists of a cellular aggregate formed by two types of randomly positioned individuals, namely, light "L" and dark "D" cells (graphically represented by white and black circles, respectively, see Fig. <ref>). In particular, we here assume that the two types of cells have the same biophysical properties (i.e., cell mass m=0.0018 μg, cell diameter d_R=20 μm and maximal extension of filopodia d_A=60 μm) but behave differently. In this respect, the evolution in time of the spatial distribution of the aggregate is given by the extension of the system in Eq. (<ref>) to two populations:

\frac{d\mathbf{x}_i^L}{dt} = -m \sum_{\substack{j=1 \\ j\neq i}}^{N_L} K^{LL}(|\mathbf{x}_i^L(t) - \mathbf{x}_j^L(t)|)\,\frac{\mathbf{x}_i^L(t) - \mathbf{x}_j^L(t)}{|\mathbf{x}_i^L(t) - \mathbf{x}_j^L(t)|} - m \sum_{j=1}^{N_D} K^{LD}(|\mathbf{x}_i^L(t) - \mathbf{x}_j^D(t)|)\,\frac{\mathbf{x}_i^L(t) - \mathbf{x}_j^D(t)}{|\mathbf{x}_i^L(t) - \mathbf{x}_j^D(t)|},

\frac{d\mathbf{x}_h^D}{dt} = -m \sum_{\substack{j=1 \\ j\neq h}}^{N_D} K^{DD}(|\mathbf{x}_h^D(t) - \mathbf{x}_j^D(t)|)\,\frac{\mathbf{x}_h^D(t) - \mathbf{x}_j^D(t)}{|\mathbf{x}_h^D(t) - \mathbf{x}_j^D(t)|} - m \sum_{j=1}^{N_L} K^{DL}(|\mathbf{x}_h^D(t) - \mathbf{x}_j^L(t)|)\,\frac{\mathbf{x}_h^D(t) - \mathbf{x}_j^L(t)}{|\mathbf{x}_h^D(t) - \mathbf{x}_j^L(t)|},

where \mathbf{x}_i^L(t) and \mathbf{x}_h^D(t), with i=1,…,N_L and h=1,…,N_D, denote the actual positions of the "L" and "D" cells, respectively, while the interaction kernels K^{pq}: ℝ_+ → ℝ, with p,q ∈ {L,D}, describe how a cell of phenotype p reacts to the presence of a cell of phenotype q. Specifically, we assume that both "L" and "D" cells are characterized by the same resistance to compression, i.e., the repulsive part of the cell-cell interaction is independent of the type of the interacting individuals, while cell-cell adhesion instead depends on the type of the cells involved. In this respect, dealing with the set of possible explicit forms for the interaction kernel proposed in Eq. (<ref>), it is consistent to assume that all interaction kernels have the same behavior at the origin (i.e., for any pair p,q ∈ {L,D} the interaction kernel is characterized by the same values of s and F_R), whereas the adhesion strength depends on the type of the interacting cells and is thereby denoted by F_A^{pq} with p,q ∈ {L,D}. The explicit form of the interaction kernels then writes as follows (a minimal two-population simulation sketch is given after the definition):

K^{pq}(r) = \begin{cases}
-F_R \left(\frac{d_R}{2}\right)^{3-2s} r^{2s-3}, & 0 < r \le \frac{d_R}{2},\\[4pt]
\frac{2F_R}{d_R}\,(r - d_R), & \frac{d_R}{2} < r \le d_R,\\[4pt]
-\frac{4 F_A^{pq}\,(r-d_R)(r-d_A)}{(d_A-d_R)^2}, & d_R < r \le d_A,\\[4pt]
0, & r > d_A,
\end{cases} \qquad p,q \in \{L,D\}.
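A sketch of the two-population dynamics, reusing the `kernel` helper defined earlier, is given below. The dictionary of kernels and the specific adhesion values are illustrative assumptions (the text does not report the values used); they satisfy the H-stability constraint F_A^{pq} < F_R/F^* stated next, for s = 1 and F_R = 100.

```python
import numpy as np
from functools import partial

def sorting_step(xL, xD, m, K, dt):
    """One Euler step of the two-population system; K[(p, q)] maps distances to kernel values."""
    def drift(xa, xb, Kab, same_population):
        diff = xa[:, None, :] - xb[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        if same_population:
            np.fill_diagonal(dist, np.inf)       # exclude self-interactions
        w = Kab(dist) / dist
        if same_population:
            np.fill_diagonal(w, 0.0)
        return -m * np.einsum('ij,ijk->ik', w, diff)

    vL = drift(xL, xL, K[('L', 'L')], True) + drift(xL, xD, K[('L', 'D')], False)
    vD = drift(xD, xD, K[('D', 'D')], True) + drift(xD, xL, K[('D', 'L')], False)
    return xL + dt * vL, xD + dt * vD

# homotypic adhesion stronger than heterotypic -> segregation (illustrative values,
# all below F_R / F* = 3.16 for s = 1, F_R = 100)
s, F_R = 1.0, 100.0
K = {('L', 'L'): partial(kernel, s=s, F_R=F_R, F_A=3.0),
     ('D', 'D'): partial(kernel, s=s, F_R=F_R, F_A=3.0),
     ('L', 'D'): partial(kernel, s=s, F_R=F_R, F_A=1.0),
     ('D', 'L'): partial(kernel, s=s, F_R=F_R, F_A=1.0)}
```

Swapping the homotypic and heterotypic values yields the checkerboard mixing, and an asymmetric choice (e.g., F_A^{LL} > F_A^{LD} = F_A^{DL} > F_A^{DD}) yields engulfment.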
In order to reproduce cell sorting, starting from the initial distribution reported in Fig. <ref>, we perform a series of numerical simulations by setting distinct values of both the homotypic and the heterotypic cell-cell adhesiveness, i.e., F_A^{pq} with p=q and p≠q, respectively, and by taking into account several assumptions on the behavior of the interaction kernels K^{pq}(r), for any p,q ∈ {L,D}, at the origin. In this respect, according to the previous results, we further require that for any p,q ∈ {L,D} the interaction strengths F_R and F_A^{pq} satisfy the H-stability constraint F_R/F_A^{pq} > F^*, where F^* depends on the value of s according to Eq. (<ref>). In fact, as shown in the previous sections, the H-stability of the interaction kernel is fundamental to avoid unrealistic cell overlapping when each cell is represented as a material point. Entering into more detail, according to the numerical simulations performed in the previous sections, we here focus only on three possible behaviors of the interaction kernels at the origin: i.e., referring to Eq. <ref>, we respectively set s equal to 1.75, 1, 0.25 (see Fig. <ref>). With respect to the interaction strengths, according to the results reported in Section <ref>, the value of the repulsive strength F_R is here fixed equal to 100 μm/(μg s) in all realizations, while the adhesion strengths are set in different ways according to the type of the interacting cells, in order to reproduce the characteristic dynamics of cell sorting. In all realizations, the value of F_A^{pq}, with p,q ∈ {L,D}, is such that

F_A^{pq} < \frac{F_R}{F^*} = \begin{cases}
2.68, & \text{if } s=1.75,\\
3.16, & \text{if } s=1.0,\\
6.51, & \text{if } s=0.25.
\end{cases}

The numerical results reported in the left column of Fig. <ref> show that if the heterotypic adhesiveness between the two cell types is higher than the two homotypic adhesiveness interactions (i.e., F_A^{LD} = F_A^{DL} > F_A^{LL} = F_A^{DD}), cells heterogeneously mix to form the experimentally observed checkerboard. Conversely, if the homotypic adhesions are stronger than the heterotypic ones (i.e., F_A^{LL} = F_A^{DD} > F_A^{LD} = F_A^{DL}, see the central column in Fig. <ref>), we find spontaneous cell sorting, with the formation of small clusters of cells of the same type within the domain. If, further, the adhesion between the light cells is larger than the heterotypic contact interactions, which is in turn larger than the adhesion between the dark cells (i.e., F_A^{LL} > F_A^{LD} = F_A^{DL} > F_A^{DD}, see the right column in Fig. <ref>), we observe the autonomous emergence of little islands of light individuals surrounded by dark cells: this phenomenon is called engulfment and was investigated also with other types of models <cit.>. Related numerical results were briefly explored in the context of aggregation models in <cit.> without relating them to cell sorting. From these numerical results, it further emerges that variations in the explicit form of the repulsive part of the interaction kernels, i.e., variations in the value of the parameter s, do not significantly affect the evolution of the system. We find this consistent with the facts that (i) the interaction parameters are always set to guarantee the H-stability of the system, and that (ii) the repulsive strength F_R has been set strong enough to maintain the minimum intercellular distance d_min above the mean cell nucleus diameter d_N = d_R/2 (according to the previous simulations).

§ CONCLUSIONS

The analysis of cell patterning, as well as the description of the characteristic large-time configurations of cell aggregates, is a relevant issue in developmental biology. The spatial distribution of cells in fact mediates a wide range of physio-pathological phenomena, ranging from morphogenesis to cancer invasion <cit.>.
§ CONCLUSIONS

The analysis of cell patterning, as well as the description of the characteristic large-time configurations of cell aggregates, is a relevant issue in developmental biology. The spatial distribution of cells in fact mediates a wide range of physio-pathological phenomena, from morphogenesis to cancer invasion <cit.>. In this respect, we have here introduced a discrete microscopic model to reproduce the dynamics of cell systems, where each individual is described by a material point, with concentrated mass, set to move according to a first-order ODE. In particular, the cell velocity here accounts for nonlocal adhesive and repulsive contributions, the former including long-range cadherin-mediated mechanisms, the latter modeling cell nucleus resistance to compression. Both migratory components have been described by introducing a proper family of pairwise interaction kernels, and relative potentials. Such functions are characterized by a negative repulsive part, which may have different slopes, and by a positive parabolic trend in the attractive part. Further, the proposed kernels are intrinsically multiparametric, being determined by a set of free coefficients (also relative to the extension of the interaction regions). The specific form of the cell interaction velocity components obviously impacts the resulting system behavior, which can range from a dramatic particle collapse to an implausible expansion of the aggregate. However, to properly and realistically reproduce experimental cell patterns, it is necessary to ensure that the model cells reach and maintain typical finite mutual distances. In this respect, we have here analyzed how the concept of H-stability, derived from statistical mechanics, relates to the asymptotic behavior of cell particle systems. In particular, our main message is that a crystalline configuration of a cell aggregate can be obtained if the underlying intercellular interaction kernel, and the relative potential, satisfies a proper H-stability condition. With this concept in mind, we have then turned to analyze the regions of the space of selected free model parameters (i.e., those concerning cell adhesive and repulsive interactions) that result in the H-stability of a given family of interaction kernels, differing in their repulsive part (mainly near the origin). The proposed analytical study has then been enriched by means of numerical realizations that characterize (in terms of individual spatial configuration and interparticle distance) the large-time pattern of cell systems, upon variations also of cell mass and cell number. In this respect, our analysis has also shown that if we aim to derive a continuous macroscopic model from the proposed discrete approach, via coarse-graining procedures, non-H-stable potentials must instead be chosen: they in fact allow the minimal intercellular distance to converge to zero as N → ∞, thereby controlling the dimension of the overall cell aggregate. Our study also has relevant implications for a number of biological applications. First of all, it a priori restricts the possible variations of the free interaction parameters, allowing accurate calibrations and estimates without the need for massive preliminary simulations. It is also possible to solve the inverse problem of finding suitable interaction potentials to reproduce a given configuration of an experimental colony. In particular, this application involves optimization issues.
Finally, our analysis makes it easy to reproduce cell sorting phenomena by only tuning the different adhesiveness of the component populations, provided that the corresponding sets of parameters result in H-stable systems. It is however important to notice that the proposed work, as well as the present applications, is based on the assumption that cell behavior is completely determined by adhesive/repulsive contributions. This is of course an oversimplification of the biological picture. Cell migration is in fact a quite complex process involving several other mechanisms and stimuli, such as chemotaxis (i.e., cell locomotion up gradients of a diffusible chemical field) or durotaxis (i.e., cell locomotion towards stiffer regions of the matrix environment). In this respect, it would be interesting to include some of these velocity components in our cell model and to study how they possibly affect the stable configuration of the particle system. Such a development of the proposed approach would obviously extend the range of possible biological applications. § ACKNOWLEDGMENTS JAC acknowledges support by the Engineering and Physical Sciences Research Council (EPSRC) under grant no. EP/P031587/1, by the Royal Society and the Wolfson Foundation through a Royal Society Wolfson Research Merit Award, and by the National Science Foundation (NSF) under grant no. RNMS11-07444 (KI-Net). AC acknowledges partial funding by the Politecnico di Torino and the Fondazione Cassa di Risparmio di Torino in the context of the funding campaign "La Ricerca dei Talenti" (HR Excellence in Research). AC and MS acknowledge the Istituto Nazionale di Alta Matematica (INdAM) "Francesco Severi" and the "Gruppo Nazionale per la Fisica Matematica" (GNFM). | http://arxiv.org/abs/1706.08969v1 | {
"authors": [
"J. A. Carrillo",
"A. Colombi",
"M. Scianna"
],
"categories": [
"q-bio.CB"
],
"primary_category": "q-bio.CB",
"published": "20170627171015",
"title": "Adhesion and volume constraints via nonlocal interactions lead to cell sorting"
} |
Supernovae and Weinberg's Higgs Portal Dark Radiation and Dark Matter
Kin-Wang [email protected]
December 30, 2023
=====================================================================

We consider the problem of learning when obtaining the training labels is costly, which is usually tackled in the literature using active-learning techniques. These approaches provide strategies for choosing the examples to label before or during training. These strategies are usually based on heuristics or even theoretical measures, but are not learned, as they are applied directly during training. We design a model which aims at learning active-learning strategies using a meta-learning setting. More specifically, we consider a pool-based setting, where the system observes all the examples of the dataset of a problem and has to choose the subset of examples to label in a single shot. Experiments show encouraging results. § INTRODUCTION Machine learning, and more specifically deep learning techniques, are now recognized for their ability to obtain high performance on a large variety of problems, from image recognition to natural language processing. However, most of the tasks tackled so far are supervised and need a critical amount of labeled data to be learned properly. Depending on the final application, these labeled examples are often expensive to obtain (e.g., through manual annotation) and not always available in large quantity. Learning from a small amount of labeled data is thus a key issue in the machine learning domain. Humans are able to learn and generalize well from only a few labeled examples (e.g., children can rapidly recognize any depiction of a car or some animals (drawing, photo, real life) after having been shown only a few pictures with explicit "supervision"). This problem has been studied in the literature as one-shot (or few-shot) learning, where the goal is to predict based on very few supervised examples (e.g., one per category). This setting was first proposed in <cit.>, and it has seen renewed interest under slightly different flavors. Recently, several methods have been presented, relying on different techniques such as matching networks and bi-LSTMs (<cit.>) or memory networks (<cit.>), which are learned using a meta-learning approach: they aim at learning, from a large set of learning problems, a strategy that enables the algorithm to efficiently and rapidly use the (small) supervision when facing a new problem (see Section <ref> for a description of the related work). In this setting, one considers that the model receives as input a set of already labeled data, usually k examples chosen randomly per category in the problem. In parallel, the field of active learning focuses on approaches that allow a model to ask an oracle for the labels of some training examples, to improve its learning. It is thus based on a different assumption, where the model has the ability to ask for the labels of a set of unsupervised data. In this case, different settings can be defined, regarding the nature of the set of unlabeled examples (a finite, completely observable dataset, i.e., pool-based, or a stream of inputs) and the nature of the acquisition process (single-step or sequential). Some approaches also benefit from an initial small labeled dataset. Since the decision process for selecting the examples to label is applied during training, state-of-the-art methods in this field do not learn this decision process, but instead rely on hand-designed heuristics or criteria.
We propose to study a problem at the crossroads of one-shot learning and active learning. We present a method that not only learns to classify examples using little supervision, but additionally learns the label acquisition strategy used to acquire the training set. We study the pool-based setting: the model works on a completely observable set of examples. This is novel with regard to previous approaches in one-shot learning, which consider a stream of examples to classify one after the other. The choice of the subset of examples to label is made in a single step via the acquisition strategy. In Section <ref>, we define the problem and the specific training strategy inspired by recent one-shot learning methods. We then describe our approach in Section <ref>, which is based on representation learning and the use of bi-directional recurrent networks. Section <ref> provides experimental results on artificial and real datasets. § RELATED WORK The active-learning problem has been studied under various flavors, reviewed in the survey <cit.>. Generically speaking, methods are usually composed of two components: a selector, which decides which examples should be labeled, and a predictor. Most of the approaches focus on sequential labeling strategies, where the system can send some examples to be labeled by the oracle, possibly update its prediction model, and choose new examples to be labeled depending on the answers of the oracle and/or the new predictor. The data examples can be presented to the selector either as a complete set (e.g., pool-based) or in a sequential fashion, where the selector has to decide at each step whether the example should be labeled or not. Several methods for single-instance selectors in the pool-based setting have been proposed, such as <cit.>, which uses Fisher information matrices, or <cit.>, which relies on a multi-armed bandit approach. Batch mode (i.e., where each step can ask for several labels) has been studied for instance by <cit.>, using a definition of performance based on high likelihood of the labeled examples and low uncertainty of the unlabeled ones. The stream-based setting has been tackled through measures of "informativeness" (i.e., favoring the labeling of more informative examples <cit.>), by defining regions of uncertainty (e.g., <cit.>), or by using "committees" for the decision (e.g., <cit.>, with an ensemble method favoring diversity among committee members). Other types of approaches design decisions by studying the expected model change (<cit.>) or the expected error reduction (<cit.>). Static methods (i.e., where the subset of examples to label is decided in a single shot) have been studied less, as they cannot benefit from the feedback of the oracle or from any estimate related to the prediction, the quality of the current predictor, or an uncertainty measure. However, such methods can prove useful when repeatedly querying an oracle is not possible, or when interaction between the learner and the "oracle" is limited, e.g., as noted by <cit.>, when using Amazon Mechanical Turk. In this paper, the authors define the problem as selective labeling, in a semi-supervised context.
They propose to select a subset of examples to label by minimizing a deterministic upper bound on the out-of-sample error for Laplacian regularized least squares. <cit.> present an approach for single-batch active learning for specific graph-based tasks, while <cit.> propose a method based on transductive experimental design; however, they design a sequential optimization algorithm to overcome the combinatorial problem. In parallel, the problem of one-shot learning (first described in <cit.>) has seen renewed interest. Notably, recent methods have proposed to use a meta-learning approach, relying on additional data of a similar nature (e.g., images of different classes). The goal is to design systems that learn to predict on novel problems based on only a few labeled examples. For example, <cit.> propose to use the recent memory-augmented neural networks to integrate and store the new examples. Similarly, <cit.> propose to rely on external memories for neural networks, bidirectional LSTMs, and attention LSTMs. One key aspect of their approach is that they aim at representing an instance w.r.t. the current memory (i.e., the observed labeled examples). Note that these approaches cast a "one-shot learning problem" (e.g., training point/inference point) as a sequential problem, where instances arrive one after the other. Additionally, the system can receive some afterward feedback on the observed instances. Tackling active learning through meta-learning has received little attention so far. The work of <cit.> proposes an extension of the model of <cit.>, where the true label of the observed instance is withheld unless the system asks for it. The model can either classify or ask for the label. The decision is learned through reinforcement learning, where the system gets a high reward for accurate predictions and is penalized when acquiring labels or giving false predictions. They design the action-value function to be learned as an LSTM. This suffers from a drawback similar to that of one-shot learning methods, as it does not consider the dataset as a whole but instead follows a "myopic" process. The recent work of <cit.> is the most closely related to ours, as they propose a similar approach for this novel task of meta-learning an active labeling strategy in a pool-based setting. However, they present a model that sequentially selects an item to label over several steps, while we propose a "one-step" static selection that does not rely on any oracle feedback. § META-ACTIVE LEARNING PROBLEM AND SETTING §.§ Preliminary The generic goal of an active-learning system is to provide the best prediction on a task using as few labels as possible. The system has to choose the most relevant examples to label in order to learn accurately. It is usually considered that the model has access to an oracle, which provides the labels of given examples. Active learning usually aims at tackling a single problem, i.e., one dataset and one task. We consider in this paper a pool-based setting with single-step acquisition, which amounts to the following generic schema: (i) the system receives an entire unlabeled set of examples, (ii) it computes the subset of examples to send to the oracle for labeling, and (iii) learning is performed based on this reduced supervised subset. In such a single-step setting, the decision process for choosing the examples to label cannot be learned. We propose to design a meta-active learning protocol in order to learn the acquisition strategy, i.e., the way to choose the examples to label, in a meta-learning fashion.
We follow a principle similar to what has recently been presented for one-shot learning problems, e.g., in <cit.>. It extends the basic principle of training in machine learning, where a model is trained on data points drawn from a distribution similar to that of the data points observed during inference. For one-shot learning, this amounts to designing the data points as one-shot problems over datasets of a similar nature (e.g., all inputs are images). The protocol therefore replicates the final task during training and aims at learning to learn from few examples. Let us now describe our meta-active learning protocol while introducing some notation. As explained in Figure <ref>, our training stage consists of many elementary active classification problems built from a large dataset. Each elementary problem is denoted 𝒮=(C, 𝒮^Train,𝒮^Eval); it is dedicated to the classification of classes in a set C and comes with two sets of examples, the first one being used to infer a prediction model, 𝒮^Train, and the second one, 𝒮^Eval, being used to evaluate the inferred model. Starting from a large multiclass dataset ℬ of labeled examples belonging to a large number of categories 𝒰^Train, each elementary problem is built as follows (a code sketch of this step is given after this list): * A subset of classes 𝒞 is sampled uniformly from the set of all categories in 𝒰^Train. * Then, a first set of N examples from classes in 𝒞 is sampled from ℬ to build 𝒮^Train={ (x_1,y_1),....,(x_N,y_N) }, where x_i is the i-th input data point and y_i ∈𝒞 stands for its class. * At last, a second set of M new data points is sampled from ℬ to build 𝒮^Eval= { (x_N+1,y_N+1),....,(x_N+M,y_N+M) }, where 𝒮^Train∩𝒮^Eval = ∅. In the learning stage, the system is presented a series of elementary training problems 𝒮. For each problem, the training set 𝒮^Train is provided without any labels and the system is allowed to ask for the labels of a limited subset 𝒟 of samples in 𝒮^Train according to an acquisition strategy. The system then infers a predictive model d from 𝒟 that is evaluated over 𝒮^Eval. Learning aims at learning the various components of the system (the acquisition strategy and the inference of a predictive model). Each pair (𝒮^Train,𝒮^Eval) serves as a supervised example for the meta-learning algorithm of the system. In the test stage, the system is evaluated on elementary testing problems to assess the quality of our meta-learning approach. The testing problems are fully different from the training problems, since they are based on a new subset of categories 𝒰^Test that is disjoint from the categories used to build the training sets, 𝒰^Train. An illustration of this setting is provided in Figure <ref> with image classification. All elementary classification problems are binary classifications (i.e., |𝒞|=2). The training problems contain categories such as cats, dogs, houses and bicycles, with different classification problems, e.g., classification between cat and dog, dog and house, etc. The elementary testing problems are drawn from a different set of categories, here elephants, cars, cakes and planes.
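The problem-generation sketch referenced above could read as follows (a minimal Python illustration; the toy inputs, class split and sizes are assumptions, not the actual experimental setup):

```python
# Sketch of elementary-problem generation: sample P classes, then N
# unlabeled training examples and M disjoint evaluation examples.
# The toy data and the class split below are illustrative assumptions.
import random

def make_problem(by_class, classes, P, N, M, rng):
    """Return one elementary problem S = (C, S_train, S_eval)."""
    C = rng.sample(classes, P)
    pool = [(x, y) for y in C for x in by_class[y]]
    rng.shuffle(pool)
    return C, pool[:N], pool[N:N + M]      # S_train and S_eval are disjoint

rng = random.Random(0)
by_class = {c: [(c, k) for k in range(100)] for c in range(20)}  # toy inputs
train_classes = list(range(10))            # disjoint from testing categories
C, S_train, S_eval = make_problem(by_class, train_classes, P=2, N=25, M=40,
                                  rng=rng)
```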
§.§ Problem Definition The goal of a meta-active learning system is to learn an active-learning strategy such that, for each problem, coming with a training dataset of unlabeled examples, it can select the most relevant examples to label and provide a good prediction, based on these supervised examples, on the "test" part of the problem. We propose a system for such a task composed of two modules. The first component is an active-learning strategy, which controls the selection of examples. This strategy is defined as a probability distribution over the set of training examples of a problem, which we denote P(α | 𝒮^train), where α is a binary vector of size N such that α_k=1 if the strategy asks for label y_k and α_k=0 otherwise. The distribution P(α | 𝒮^train) is used to sample which examples are sent to the oracle for labeling. This yields a subset of labeled examples 𝒟_α = { x_j ∈𝒮^train / α_j=1 }⊂𝒮^train. The second component is a prediction component, which takes as input an example x to classify in 𝒮^eval and the supervised training dataset 𝒟_α, and outputs a prediction for this example, denoted d(x,𝒟_α). The prediction component does not have access to the examples that have not been targeted by the acquisition policy, i.e., only the examples from 𝒟_α are used. We summarize the generic learning scheme in Algorithm <ref>. During training, the process iteratively samples a random problem 𝒮 from the set of training problems. The acquisition model receives 𝒮^train (without labels) and predicts which examples to select for labeling by sampling from P(α | 𝒮^train). The resulting labeled set 𝒟_α is used to output a prediction for each example in 𝒮^Eval using the prediction module d. Its performance is evaluated on 𝒮^Eval, which is used to update the model. The process is similar at testing time to evaluate the whole meta-learning system. Since we consider that acquiring labels during the first step has a price, we consider a generic objective function that is a trade-off between the prediction quality on the evaluation set 𝒮^Eval and the size of the labeled set (|𝒟_α|), i.e., the labeling cost. The generic objective function reads:

ℒ = E_𝒮∼ P(𝒮) [ E_α∼ P(α | 𝒮^Train) [ ∑_(x_j,y_j) ∈𝒮^Eval Δ(d(x_j,𝒟_α),y_j) + λ |𝒟_α| ] ]

where E_𝒮∼ P(𝒮) is the expectation over the distribution of problems, which we empirically approximate by an average over a large set of training problems, E_α∼ P(α | 𝒮^Train) stands for the expectation over the subsets of examples selected according to the acquisition strategy, and Δ(d(x_j,𝒟_α),y_j) measures the error Δ between the expected output y_j and the model prediction d(x_j,𝒟_α) for an evaluation sample x_j and a model inferred from 𝒟_α. § DESCRIPTION OF THE MODEL §.§ Optimization criterion We now detail the optimization criterion based on the generic objective function defined in Equation <ref>. As explained in the previous section, the sub-dataset 𝒟_α of examples chosen for labeling comes from the binary vector α, s.t. an example x_j is asked for labeling if α_j ≠ 0. This vector α is sampled from the distribution P_θ(α | 𝒮^Train), outputted by the acquisition component (whose parameters are denoted θ), given the unlabeled training set 𝒮^train. Thus, the number of elements in the dataset 𝒟_α is directly the number of non-zero elements in α. The loss for a given problem 𝒮 can therefore be rewritten as:

ℒ_θ,d(𝒮) = 𝔼_α∼ P_θ(α |𝒮^Train)[∑_(x,y) ∈𝒮^Eval Δ(d(x,𝒟_α), y) + λ |𝒟_α|]
          = 𝔼_α∼ P_θ(α |𝒮^Train)[∑_(x,y) ∈𝒮^Eval Δ(d(x,𝒟_α), y)]_(error in prediction) + 𝔼_α∼ P_θ(α |𝒮^Train)[λ ∑_k=1^N α_k]_(cost of labeling)

The first part corresponds to the prediction quality depending on the acquired and labeled examples; a single-sample estimate of this objective is sketched below.
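To make the decomposition concrete, a one-sample Monte-Carlo estimate of the objective for a single problem can be sketched as follows. Here `acquire`, `predict` and `loss_fn` are placeholders for the acquisition, prediction and error components described next, and sampling the budget without replacement is a simplifying assumption of ours:

```python
# One-sample Monte-Carlo estimate of the objective for a single problem:
# prediction error on S_eval plus the labeling cost lambda * |D_alpha|.
import numpy as np

def episode_loss(train_x, train_y, eval_set, acquire, predict, loss_fn,
                 budget, lam, rng):
    probs = acquire(train_x)                  # P_theta(alpha | S_train)
    picked = rng.choice(len(train_x), size=budget, replace=False, p=probs)
    D = [(train_x[i], train_y[i]) for i in picked]     # labeled set D_alpha
    pred_err = sum(loss_fn(predict(x, D), y) for x, y in eval_set)
    return pred_err + lam * budget            # here |D_alpha| = budget
```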
Its gradient w.r.t. the parameters of both modules (denoted for simplicity ∇_θ,d) can be computed using a policy-gradient method (the likelihood-ratio trick) as follows, where for clarity we consider the gradient of the prediction loss for a single example (x,y) in 𝒮^Eval:

∇_θ,d 𝔼_α∼ P_θ(α |𝒮^Train)[Δ(d(x,𝒟_α), y)] = ∫ ∇_θ,d(P_θ(α | 𝒮^Train)) Δ(d(x,𝒟_α),y) dα + ∫ P_θ(α | 𝒮^Train) ∇_θ,d Δ(d(x,𝒟_α),y) dα
= ∫ (P_θ(α | 𝒮^Train)/P_θ(α | 𝒮^Train)) ∇_θ,d(P_θ(α | 𝒮^Train)) Δ(d(x,𝒟_α), y) dα + ∫ P_θ(α | 𝒮^Train) ∇_θ,d Δ(d(x,𝒟_α),y) dα
= ∫ P_θ(α | 𝒮^Train) ∇_θ,d(log P_θ(α | 𝒮^Train)) Δ(d(x,𝒟_α), y) dα + ∫ P_θ(α | 𝒮^Train) ∇_θ,d Δ(d(x,𝒟_α),y) dα

This can be approximated through Monte-Carlo sampling, which yields, over M sampled vectors α^(m):

∇_θ,d 𝔼_α∼ P_θ(α | 𝒮^Train)[Δ(d(x,𝒟_α), y)] ≈ (1/M) ∑_m=1^M [ ∇_θ,d(log P_θ(α^(m) | 𝒮^Train)) Δ(d(x,𝒟_α^(m)), y) + ∇_θ,d Δ(d(x,𝒟_α^(m)),y) ]

§.§ Labels acquisition component This module takes as input the whole unlabeled training dataset of the current problem and outputs, for each of these samples, a probability reflecting the usefulness of labeling it. We propose to use recurrent neural networks, which were initially proposed to process sequences of inputs. More specifically, we propose in this work to use a bi-directional RNN, which ensures that output i of the network is computed with regard to all input examples, and thus provides a "non-myopic" decision for each example (unlike a classical RNN), so that every decision benefits from the observation of all examples. Note that it could be relevant to use an attentional LSTM here, as presented in <cit.>, as it provides an order-invariant network, but this has not been tested in our experiments. The output of the recurrent network is considered to be a probability distribution that is used to sample α, the binary vector that selects the examples to label. The output can thus be seen either as (i) a multinomial distribution, where ∑_i=1^N α_i=1 (note that this allows one to manually bound the number of labeled examples, as one decides beforehand the number of samplings), or (ii) a set of Bernoulli distributions, where each component α_j ∈ {0,1} is sampled with probability P_θ(α_j | 𝒮_i^train). We present in this paper experiments using a multinomial distribution sampled k times, where k is the maximum number of examples labeled. §.§ Prediction component This module takes as input a (new) example and a limited supervised training dataset, and outputs a prediction (e.g., a category). It could be any prediction algorithm, parametric or not, whether or not it requires learning. In our case, the component should be able to back-propagate gradients of errors to drive the overall learning. We propose to use similarity-based prediction, which does not require learning and thus allows for fast overall meta-learning. We test two similarity measures, a normalized cosine similarity and a Euclidean-based similarity. Additionally, computing the predicted label for a new input is done as follows: (i) the similarity with each supervised example is computed; (ii) this vector of similarities is then converted into a probability distribution, using a softmax with temperature; (iii) the predicted label is computed as the sum of the one-hot-vector labels of the supervised examples, weighted by this distribution. Note that when the temperature is high enough, this distribution is a one-hot vector, which makes the scheme similar to a 1-nearest-neighbor technique.
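A minimal sketch of this similarity-based predictor (cosine variant; the temperature value and the numerical-stability terms are our own choices) could read:

```python
# Sketch of the cosine-similarity predictor: similarities to the labeled
# set D_alpha, a temperature softmax, and a weighted vote over one-hot
# labels. A high enough temperature sharpens the vote toward 1-nearest
# neighbor, following the convention in the text.
import numpy as np

def predict(x, D, temperature=10.0):
    xs = np.stack([xi for xi, _ in D])        # labeled inputs, shape (k, L)
    ys = np.stack([yi for _, yi in D])        # one-hot labels,  shape (k, C)
    sims = xs @ x / (np.linalg.norm(xs, axis=1) * np.linalg.norm(x) + 1e-9)
    w = np.exp(temperature * (sims - sims.max()))      # stable softmax
    w /= w.sum()
    return w @ ys                             # soft label prediction
```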
Additionally, we propose to use a representation component, common to the acquisition and decision components. The key idea is to learn a latent representation space that disentangles the raw inputs, so as to provide better predictions as well as to facilitate the acquisition decision. This module, denoted f, takes as input an example in ℝ^K (the original space of all examples of ℬ) and outputs its representation in a latent space ℝ^L. It is learned jointly with the other components. Integrating this representation function into the original loss defined in Eq. <ref> yields:

ℒ_θ,d,f(𝒮) = 𝔼_α∼ P_θ(α | f(𝒮^Train))[∑_(x,y) ∈𝒮^Eval Δ(d(f(x),f(𝒟_α)), y)] + 𝔼_α∼ P_θ(α | f(𝒮^Train))[λ ∑_k=1^N α_k]

where, for the sake of clarity, we write f(𝒮^Train)={f(x_1),…,f(x_N)}, and similarly for f(𝒟_α). § EXPERIMENTS We first describe our experimental protocol and the baselines we used; we then show the results of our experiments on two datasets, letter and aloi. Experimental Protocol: To build our "meta-active learning" datasets, we set P, the number of categories of each elementary problem, N, the number of examples in the "unsupervised" dataset, and M, the number of examples to classify. For simplicity, we chose in our experiments to use the same numbers P, N, M for every elementary problem. The generation of the complete dataset with training/validation/testing problems, as illustrated in Figure <ref>, is based on a partition of the full set of categories between train, validation and test, while keeping a common domain for all inputs. It is done as follows: * training dataset: we select a subset of the categories as "training classes" (e.g., 50% of all classes) and their corresponding examples. We then generate a large number of sub-problems: for one problem, (i) we randomly select P categories among the "training classes", (ii) we randomly select N examples from these P categories (i.e., 𝒮_i^train, the examples that can be asked for labeling), and (iii) we randomly select M additional examples on which to evaluate the predictions, i.e., 𝒮_i^eval. * validation and testing datasets are generated similarly, on distinct "validation classes" and "testing classes", unobserved in the complete training dataset. Baselines: We propose two baselines for this study. These baselines follow the same global scheme, but with a different acquisition component: * Random acquisition: the examples to label are chosen randomly in the dataset. * K-medoids acquisition: the examples to label are selected following a k-medoid clustering technique, where we label an example if it is the medoid of a cluster (a sketch is given below). Note that these acquisition methods do not learn during the overall process; only the representation component (if one is used) is learned. While simple, the k-medoids baseline is expected to be a reasonable and efficient baseline in our static active-learning setting, especially when a similarity-based function is used for prediction.
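The k-medoid acquisition sketch referenced above might be implemented as follows (an alternating medoid-update heuristic of our own choosing, not necessarily the exact clustering variant used in the experiments):

```python
# Sketch of the k-medoid acquisition baseline: an alternating medoid
# update over pairwise distances; the labels of the k medoids are then
# requested from the oracle. The toy features mimic the letter setup.
import numpy as np

def kmedoid_acquire(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        assign = np.argmin(D[:, medoids], axis=1)      # nearest medoid
        for c in range(k):
            members = np.where(assign == c)[0]
            if len(members):                           # keep non-empty
                cost = D[np.ix_(members, members)].sum(axis=1)
                medoids[c] = members[np.argmin(cost)]  # best in-cluster point
    return medoids            # indices of the examples sent to the oracle

X = np.random.default_rng(1).normal(size=(25, 16))     # 25 examples, 16 dims
print(kmedoid_acquire(X, k=4))
```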
Dataset letter: This dataset has 26 categories and 16 features. We took 10 categories for training, 7 for validation and 9 for testing. We generated 2000 problems for training and 500 problems each for validation and testing. The size of a dataset (the examples that can be labeled) is 25, and the number of examples to classify per problem is 40. Here again we study 3 types of problems, binary, 4-class and 6-class, with various budget levels. The results are plotted in Figures <ref>, <ref>, <ref>. We observe mixed results. Our model performs better than the k-medoid acquisition strategy for a budget of 2 on binary-classification problems, but k-medoid leads to better accuracy for higher budgets. It is also better for all budgets except 6 on 4-class problems. For 6-class problems, our model beats the two baselines for all budgets. This difference in performance can be explained by the small number of distinct categories in the training dataset: with 10 categories and binary problems (45 different combinations), our model observes the same problem a large number of times, which could lead to over-fitting. This seems to be the case, as it performs better on 6-class problems (210 different combinations). We therefore now study a dataset with a larger number of categories. Dataset aloi: This dataset has 1000 categories, with around one hundred images per class. It is a more realistic and challenging dataset for the meta-active learning setting we are dealing with. We created 4000 training problems on 350 training categories, and 500 validation and testing problems on respectively 300 and 350 categories. The number of examples that can be labeled is 25, and the number of examples to classify per problem is 40. The results are shown in Figures <ref>, <ref>, <ref>, for the 3 types of problems (2-class, 4-class and 6-class). We see that our method performs better than k-medoid for all budgets and all types of problems, except on binary classification with budget 6, where k-medoid performs slightly better (by 0.5%). On this bigger dataset, our approach is less prone to overfitting, and thus manages to generalize its acquisition strategy well to novel problems on unseen categories. § CLOSING REMARKS We have presented a first meta-learning approach for learning a pool-based, static active-learning strategy. We propose a stochastic instantiation based on bi-directional LSTMs that benefits from the whole unsupervised dataset before prediction. First results are encouraging and show the ability of our approach to learn a labeling strategy that performs as well as or better than our k-medoid baseline. | http://arxiv.org/abs/1706.08334v2 | {
"authors": [
"Gabriella Contardo",
"Ludovic Denoyer",
"Thierry Artieres"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20170626120617",
"title": "A Meta-Learning Approach to One-Step Active Learning"
} |
Department of Physics, University of York, York YO10 5DD, United Kingdom
[email protected] Dipartimento di Matematica e Fisica, Università Roma Tre, 00146 Rome, Italy
Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
Dipartimento di Matematica e Fisica, Università Roma Tre, 00146 Rome, Italy
[email protected] Department of Physics, University of York, York YO10 5DD, United Kingdom

When graphene is placed on a monolayer of semiconducting transition metal dichalcogenide (TMD) its band structure develops rich spin textures due to proximity spin–orbital effects with interfacial breaking of inversion symmetry. In this work, we show that the characteristic spin winding of low-energy states in graphene on a TMD monolayer enables current-driven spin polarization, a phenomenon known as the inverse spin galvanic effect (ISGE). By introducing a proper figure of merit, we quantify the efficiency of charge-to-spin conversion and show it is close to unity when the Fermi level approaches the spin-minority band. Remarkably, at high electronic density, even though sub-bands with opposite spin helicities are occupied, the efficiency decays only algebraically. The giant ISGE predicted for graphene on TMD monolayers is robust against disorder and remains large at room temperature.

Optimal charge-to-spin conversion in graphene on transition metal dichalcogenides
Aires Ferreira
December 30, 2023
=================================================================================

In the past decade, graphene has emerged as a strong contender for next-generation spintronic devices due to its long spin diffusion lengths at room temperature and gate-tunable spin transport <cit.>. However, the lack of a band gap and its weak spin–orbit coupling (SOC) pose major limitations for the injection and control of spin currents. In this regard, van der Waals heterostructures <cit.> built from stacks of graphene and other two-dimensional (2D) materials hold great promise <cit.>. The widely tunable electronic properties in vertically-stacked 2D crystals offer a practical route to overcome the weaknesses of graphene <cit.>. An ideal match to graphene is the family of group-VI dichalcogenides MX_2 (e.g., M=Mo, W; X=S, Se). The lack of inversion symmetry in TMD monolayers enables spin- and valley-selective light absorption <cit.>, thus providing all-optical methods for the manipulation of internal degrees of freedom <cit.>. The optical injection of spin currents across graphene–TMD interfaces has been recently reported <cit.>, following a theoretical proposal <cit.>. Furthermore, electronic structure calculations show that spin–orbital effects in graphene on TMD are greatly enhanced <cit.>, consistent with the SOC fingerprints in transport measurements <cit.>, pointing to Rashba-Bychkov (RB) SOC in the range of 1–10 meV. In this Letter, we show that the SOC enhancement in graphene on a TMD monolayer allows for current-induced spin polarization, a relativistic transport phenomenon commonly known as the ISGE or the Edelstein effect <cit.>. In the search for novel spintronic materials, the role of the ISGE, together with its Onsager reciprocal—the spin-galvanic effect—is gaining strength, with experimental reports in spin-split 2D electron gases formed in Bi/Ag and LaAlO_3/SrTiO_3, as well as in topological insulator (TI) α-Sn thin films <cit.>. In addition, the enhancement of non-equilibrium spin polarization has been proposed in ferromagnetic TMDs and magnetically-doped TI/graphene <cit.>.
The robust ISGE in nonmagnetic graphene/TMD heterostructures predicted here promises unique advantages for low-power charge-to-spin conversion (CSC), including the tuning of the spin polarization by a gate voltage. Moreover, owing to the Dirac character of interfacial states in graphene on a TMD monolayer, the ISGE shows striking similarities to CSC mediated by ideal topologically protected surface states <cit.>, allowing nearly optimal CSC. We quantify the CSC efficiency as a function of the scattering strength, and show it can be as great as ≈30% at room temperature (for a typical spin–orbit energy scale smaller than k_BT). The model.—The electronic structure of graphene on a TMD monolayer (G/TMD) is well described at low energies by a Dirac model in two spatial dimensions <cit.>

H_0𝐤=τ_z[v σ·𝐤+λ (σ×𝐬)·ẑ+Δ σ_z+λ_sv s_z] ,

where 𝐤=(k_x,k_y) is the 2D wavevector around a Dirac point, v is the Fermi velocity of massless Dirac electrons (v≈10^6 m/s) and σ_i,s_i,τ_i (i=x,y,z) are Pauli matrices associated with the sublattice, spin, and valley subspaces, respectively. The momentum-independent terms in Eq. (<ref>) describe a RB effect resulting from the interfacial breaking of inversion symmetry (λ), and staggered (Δ) and spin–valley (λ_sv) interactions due to the broken sublattice symmetry C_6v→ C_3v [see Fig. <ref>(a)]. The Dirac Hamiltonian H_0𝐤 contains all substrate-induced terms (to lowest order in 𝐤) that are compatible with time-reversal symmetry and the point group C_3v <cit.>, except for a Kane–Mele SOC term (∝σ_zs_z), which is too weak <cit.> to manifest in transport and can be safely neglected. The dispersion relation associated with H_0𝐤 for each valley τ≡τ_z=±1 consists of two pairs of spin-split Dirac bands (omitting ħ)

ϵ_τζ(k)=±τ√(v^2k^2+Δ_ζ^2(k)) ,

where k≡|𝐤|, ζ=±1 is the spin-helicity index and

Δ_ζ^2(k) = Δ^2+λ_sv^2+2λ^2 + 2ζ√((λ^2-Δλ_sv)^2+v^2k^2(λ^2+λ_sv^2)) .

A typical spectrum is shown in Fig. <ref>(b). The spin texture associated with each band reads

⟨𝐬⟩_α𝐤=-ζ ϱ(k) (k̂×ẑ)+m_α^z(k) ẑ ,

where α≡(τζ). The first term describes the spin winding generated by the RB effect [Fig. <ref>(c)] and the second its out-of-plane tilting due to the broken sublattice symmetry. The entanglement between spin and sublattice degrees of freedom generates a nontrivial k dependence in the spin texture. For example, in the minimal model with only the RB interaction, ϱ(k) coincides with the band velocity (in units of v), while m_α^z=0, i.e., the spin texture is fully in plane <cit.>. When all interactions in Eq. (<ref>) are included, we find

ϱ(k)=vkλ/√((Δλ_sv-λ^2)^2+v^2k^2(λ^2+λ_sv^2)) .

The breaking of sublattice symmetry modifies the spin texture, with both valleys acquiring a spin polarization in the ẑ direction, consistent with first-principles studies <cit.>. The explicit form of m_α^z(k) is too cumbersome to be presented. Here, it is sufficient to note that |m_α^z(k=0)|=1, with |m_α^z(k)| decaying to zero away from the Dirac point <cit.>. Finally, due to time-reversal symmetry the ẑ polarizations at inequivalent valleys are opposite. For energies within the Rashba pseudogap (RPG), that is, ϵ_0≡|ϵ_τ-(0)|<|ϵ|<2λ̃≡|ϵ_τ+(0)|, the Fermi surface is simply connected. Hence, at low energies, the electronic states have well-defined spin helicity [Fig. <ref>(b-c)]. This feature of G/TMD interfacial states is reminiscent of spin–momentum locking in topologically protected surface states <cit.>, hinting at efficient CSC.
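As a sanity check of the model (ours, not taken from the paper), one can diagonalize the 4×4 Bloch Hamiltonian of Eq. (<ref>) numerically at one valley and compare with the closed-form bands above; the energy scales below are illustrative, not fitted to any material:

```python
# Numerical sanity check (a sketch, not the paper's code) of the Dirac
# model: diagonalize the 4x4 Bloch Hamiltonian at one valley and compare
# with the closed-form dispersion. Parameter values are illustrative.
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

v, lam, Delta, lam_sv = 1.0, 0.05, 0.02, 0.01

def H(kx, ky, tau=+1):
    rashba = np.kron(sx, sy) - np.kron(sy, sx)      # (sigma x s) . z-hat
    return tau * (v * (kx * np.kron(sx, s0) + ky * np.kron(sy, s0))
                  + lam * rashba
                  + Delta * np.kron(sz, s0)         # staggered potential
                  + lam_sv * np.kron(s0, sz))       # spin-valley coupling

def band(k, zeta):
    """Positive branch of the closed-form dispersion for helicity zeta."""
    root = np.sqrt((lam**2 - Delta*lam_sv)**2 + (v*k)**2*(lam**2 + lam_sv**2))
    return np.sqrt((v*k)**2 + Delta**2 + lam_sv**2 + 2*lam**2 + 2*zeta*root)

k = 0.1
num = np.sort(np.linalg.eigvalsh(H(k, 0.0)))
ana = np.sort([s * band(k, z) for s in (+1, -1) for z in (+1, -1)])
assert np.allclose(num, ana, atol=1e-10)
```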
Semiclassical argument.—The efficiency of CSC can be demonstrated using a simple semiclassical argument. For ease of notation, hereafter we employ natural units (e≡1≡ħ). Under a dc electric field, say ℰ⃗=ℰ x̂, the ŷ-polarization spin density in the steady state reads ⟨ S_y⟩=∑_α∫(d𝐤) (1/2)⟨ s_y⟩_α𝐤 δ f_α𝐤, where δ f_α𝐤 is the deviation of the quasiparticle distribution function with respect to equilibrium and (d𝐤)≡ d^2𝐤/4π^2. Owing to the tangential winding of the in-plane spin texture, only the longitudinal component of the quasiparticle distribution function, δ f_α𝐤^∥≡ g_α(k) k̂·k̂_x, contributes to the integral. At zero temperature, g_α(k)=∓ℰ v_α k τ_*α k δ(ϵ_α(k)-ϵ), where v_α k=∂_kϵ_α(k) is the band velocity, τ_*α k is the longitudinal transport time and ϵ is the Fermi energy (∓ for electrons/holes). For energies inside the RPG (regime I), one easily finds

⟨ S_y⟩_I=∓(ℰ/4π) ϱ(k_F) k_F τ_* ,

where k_F is the Fermi momentum and τ_*=τ_*(τ-)k_F (assumed valley-independent for simplicity). The charge current density, ⟨ J_x⟩=-v∑_α∫(d𝐤)⟨τ_zσ_x⟩_α𝐤 δ f_α𝐤, can be computed following identical steps. We obtain

⟨ J_x⟩_I=(ℰ/2π) v_F k_F τ_* ,

where v_F=|v_τ-(k_F)|. The implications of our results are best illustrated by considering the minimal model, for which ϱ(k_F)=v_F/v and thus ⟨ S_y⟩_I=∓⟨ J_x⟩_I/(2v). Figure <ref>(d) shows the ratio ⟨ S_y⟩/⟨ J_x⟩ in the linear-response regime computed according to the Kubo formula, confirming the linear proportionality ⟨ S_y⟩_I∝⟨ J_x⟩_I. The well-defined spin winding direction in regime I, responsible for the semiclassical form of the non-equilibrium spin polarization [Eq. (<ref>)], automatically implies a large ISGE in the clean limit. Generally, the CSC is optimal near the RPG edges, where |ϱ| is largest in regime I. In this energy range, the CSC is only limited by the electronic mobility, i.e., |⟨ S_y⟩|_I≈⟨ J_x⟩_I/(2v_F)∝(k_Fτ_*) ℰ. These considerations show that |ϱ| ≡ 2v_F|⟨ S_y⟩|/⟨ J_x⟩ is the proper figure of merit in regime I. For models with |λ_sv|≪|λ|, the efficiency is nearly saturated,

max_ϵ∈I; λ_sv=0 |ϱ(k(ϵ))| = 2√2/3 ≈ 0.94,

and is generally close to unity for not too large spin–valley coupling <cit.>. In regime II, both spin helicities ζ=±1 contribute to the non-equilibrium spin density, resulting in a decay of the CSC rate. Here, |ϱ| is not a suitable figure of merit and an alternative must be sought. As we show later, in this regime (|ϵ|>2λ̃) the CSC efficiency exhibits an algebraic decay law, enabling a remarkably robust ISGE in typical experimental conditions.
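For the minimal model this saturation is easy to verify: on the spin-minority band, vk_F = √(ϵ(ϵ+2λ)) inside the RPG, so ϱ(k_F) = √(ϵ(ϵ+2λ))/(ϵ+λ), which grows monotonically and equals 2√2/3 at ϵ=2λ. An illustrative numerical check (ours, not the paper's):

```python
# Illustrative check of the saturation value above: in the minimal model
# rho(eps) = sqrt(eps*(eps + 2*lam))/(eps + lam) is monotonic in regime I
# and reaches 2*sqrt(2)/3 ~ 0.943 at the RPG edge eps = 2*lam.
import numpy as np

lam = 1.0
eps = np.linspace(1e-6, 2 * lam, 10001)
rho = np.sqrt(eps * (eps + 2 * lam)) / (eps + lam)
assert np.all(np.diff(rho) > 0)               # monotonic growth in regime I
assert np.isclose(rho[-1], 2 * np.sqrt(2) / 3)
```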
Quantum treatment.—To evaluate the full energy dependence of the ISGE, we employ the self-consistent diagrammatic approach developed by two of us in Ref. <cit.>. Despite the complexity of the Hamiltonian, Eq. (<ref>), one can solve the Bethe–Salpeter equations for the T-matrix ladder. This provides accurate results in the regime k_F v_F τ_*≫1. The zero-temperature spin density–charge current response function reads

χ_yx(ω=0) = (1/2πΩ)⟨Tr[S_y G^+ J_x G^-]⟩ ,

where G^±=(ϵ-H± i0^+)^-1 is the Green's function in the retarded/advanced sector of disordered G/TMD. Here, Tr denotes the trace over internal and motional degrees of freedom, ⟨...⟩ stands for the disorder average and Ω is the area. In the diagrammatic approach, the disorder enters as a self-energy, Σ^a (a=±), "dressing" the single-particle Green's functions, and as vertex corrections in the electron–hole propagator [Fig. <ref>(a)]. Since the response functions of interest are determined by the same relaxation time, τ_*, the CSC is expected to be little sensitive to the disorder type as long as the latter is nonmagnetic. For practical purposes, we use a model of short-range scalar impurities, V(𝐱)=u_0∑_i=1^N δ(𝐱-𝐱_i), where {𝐱_i=(x_i,y_i)} are random impurity locations and u_0 parametrizes their strength. This choice enables us to establish key analytical results across the weak (Born) and strong (unitary) scattering regimes. We first evaluate Eq. (<ref>) for models with a fully in-plane spin texture, Δ,λ_sv=0. For ease of notation, we assume ϵ,λ>0 in what follows. The self-energy is given by Σ^a=n T^a, where T^a=(u_0^-1 1-g_0^a)^-1 and n=N/Ω is the impurity areal density. Moreover, g_0^a≡∫(d𝐤)G_0𝐤^a, where G_0𝐤^±=(ϵ-H_0𝐤± i0^+)^-1 is the bare Green's function. Neglecting the real part of Σ^a, we have

Σ^±=∓ in(η_0γ_0+η_3 γ_KM+η_r γ_r) ,

where γ_0=τ_0σ_0s_0 (identity), γ_r=τ_z(σ×𝐬)·ẑ, γ_KM=τ_0σ_zs_z, and in the weak scattering limit η_0=u_0^2(ϵ+λ)/8v^2, η_3=u_0^2λ/8v^2, η_r=-u_0^2ϵ/16v^2 inside the RPG, while η_0=u_0^2ϵ/4v^2 and η_3=η_r=0 for ϵ>2λ (see <cit.> for the full T-matrix expressions). The rich matrix structure in Eq. (<ref>) stems from the chiral (pseudospin) character of the quasiparticles. In contrast, in the 2D electron gas with RB spin–orbit interaction, the self-energy due to spin-independent impurities is a scalar in all regimes <cit.>. Next, we evaluate the disorder-averaged Green's function, G_𝐤^a=[(G_0𝐤^a)^-1-Σ^a]^-1. We define ϵ^a=ϵ+i a n η_0, λ^a=λ-i a n η_r, and m^a=i a n η_3, which represent an energy shift, a renormalized RB coupling and a random SOC gap, respectively. After tedious but straightforward algebra we find

G_𝐤^a=-[(ϵ^a L_+^a+λ^a L_-^a)γ_0+vL_+^aτ_z σ·𝐤-(1/2)(ϵ^a-m^a)L_-^aγ_r+(m^a L_+^a+λ^a L_-^a)γ_KM-vL_-^aγ_v𝐤+Γ_𝐤^a] ,

where L_±^a=(L_1^a± L_2^a)/2 with L_1(2)^a=[v^2k^2-(ϵ^a-m^a)(ϵ^a+m^a±2λ^a)]^-1, γ_v𝐤=τ_0σ_0(𝐤̂×𝐬)·ẑ and Γ_𝐤^a is a term quadratic in k_i <cit.>. The last step consists of evaluating the vertex corrections. The renormalized charge current vertex satisfies the Bethe-Salpeter (BS) equation

J̃_x=J_x+n ∫(d𝐤) T^+ 𝒢_𝐤^+ J̃_x 𝒢_𝐤^- T^- .

The infinite set of non-crossing diagrams generated by the T-matrix ladder describes incoherent multiple scattering events at all orders in the scattering strength u_0 [Fig. <ref>(b)], yielding an accurate description of spin–orbit-coupled transport phenomena in the dilute regime <cit.>. To solve Eq. (<ref>), we decompose J̃_x as J̃_x=J̃_x^μνρ τ_μσ_νs_ρ, where the repeated indices μ,ν,ρ∈{0,i} are summed over. The number of nonzero components J̃_x^μνρ is constrained to only four by the symmetries of G/TMD <cit.>: (μ,ν,ρ)={(0,0,y),(z,x,0),(0,z,x),(z,y,z)}. Exploiting the properties of the Clifford algebra, one can show that the nonzero vertex components have a one-to-one correspondence with their associated non-equilibrium response functions <cit.>. This allows us to express χ_yx in terms of the spin-density component only, J̃_x^s≡J̃_x^00y, i.e., χ_yx=F_s(u_0) J̃_x^s, where

J̃_x^s = -(v/ϵ) [ϵ^2(ϵ+2λ)+θ(ϵ-2λ)(8λ^3-ϵ^3)]/(ϵ^2+4λ^2+ε_Λ) .

Here, θ is the Heaviside step function and ε_Λ is a weak correction, logarithmic in the ultraviolet cutoff Λ set by the inverse lattice scale <cit.>. Finally, F_s(u_0) is a complicated function, which in the Gaussian and unitary scattering limits takes the form

F_s(u_0) = (1/2π n) × { 4/u_0^2, for |g_0^+ u_0|≪1 ; (ϵ/(2π v^2) log|Λ^2/(ϵ√(ϵ^2-4λ^2))|)^2, for |u_0|→∞ },

respectively. Analogously, we can determine the expression for the charge conductivity, σ_xx=F_c(u_0) J̃_x^c, with J̃_x^c≡J̃_x^zx0 <cit.>.
The CSC rate can now be determined:

-2vχ_yx/σ_xx = θ(2λ-ϵ) + (2λ/ϵ) g(u_0,ϵ) θ(ϵ-2λ) ,

where g(u_0,ϵ=2λ)=1 and deviates only slightly from this value when u_0 is large and ϵ>2λ [see Fig. <ref>(d)]. This central result, Eq. (<ref>), puts our earlier semiclassical argument on firm grounds and shows that the CSC is little affected by the disorder strength outside the RPG. Discussions.—In realistic G/TMD heterostructures, Δ and λ_sv can be comparable to the RB coupling <cit.>, leading to major modifications of the band structure. Nevertheless, a thorough analysis, summarized in Fig. <ref>, shows that the ISGE remains robust. For instance, for |λ_sv|≪λ,|Δ|, the k dependence of the in-plane spin texture is virtually unaffected [Eq. (<ref>)]. Thus, according to the semiclassical results, the CSC efficiency should be high at the RPG edge. This is confirmed by a numerical inversion of the Bethe-Salpeter equations in the full model. The figure of merit γ plotted in Fig. <ref> reaches its predicted optimal value [Eq. (<ref>)]. When the spin–valley coupling is significant, the in-plane spin texture shrinks; however, the CSC efficiency remains sizeable [Fig. <ref>(b)]. Outside the RPG, the definition of the efficiency γ is complicated by the coexistence of counter-rotating spins. To analyze this regime, we employ a heuristic definition satisfying: (i) 0≤γ≤1 for all parameters, (ii) γ decays for ϵ≫2λ̃ due to the collapsing of the spin-split Fermi rings, and (iii) γ is continuous across the RPG. Since the band velocity quickly saturates to its upper bound (=v), we use its value at the RPG edge as representative of regime II, which leads us to the following definition:

γ = (2|χ_yx|/σ_xx) × { v_F(ϵ), for ϵ<2λ̃ ; v_F(2λ̃), for ϵ≥2λ̃ },

where v_F(ϵ)≡|v_τ-(k(ϵ))|. Consistent with the rate derived for the minimal model [Eq. (<ref>)], the asymptotic behavior of γ is of power-law type, and thus the CSC remains robust over the accessible range of electronic densities. A relevant question is how much efficiency is lost to thermal fluctuations. Figure <ref>(b) shows the CSC figure of merit at selected temperatures in the weak scattering limit (see Ref. <cit.> for methods). Since the T=0 ratio decays slowly in regime II, the smearing caused by thermal activation is ineffective, allowing a giant ISGE at room temperature, e.g., γ_room≈0.3 for a chemical potential μ≈5λ≈50 meV. We finally comment on the rippling of the graphene surface and on imperfections causing local variations of the RPG <cit.>. Inhomogeneities in the spin–orbit energy scales are expected to be small in samples with a strong interfacial effect <cit.>. As long as |λ(𝐱)-λ|≪λ, the random spin–orbit field acts merely as an additional source of scattering <cit.>, which according to our findings would not affect the ISGE efficiency.
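Both statements, the piecewise clean-limit rate of Eq. (<ref>) and its robustness to thermal smearing, can be illustrated numerically. The sketch below (ours) sets g(u_0,ϵ)=1 and smears the rate itself with -∂f/∂ϵ, a rough simplification of the full finite-temperature procedure described in the Supplemental Material:

```python
# Sketch (ours): the clean-limit CSC rate with g = 1, and a rough
# finite-temperature average that smears the rate with -df/deps.
import numpy as np

def csc_rate(eps, lam=1.0):
    eps = np.abs(np.asarray(eps, dtype=float))
    return np.where(eps < 2 * lam, 1.0, 2 * lam / np.maximum(eps, 1e-12))

def csc_rate_T(mu, T, lam=1.0, n=4001):
    eps = np.linspace(mu - 20 * T, mu + 20 * T, n)
    w = 0.25 / T / np.cosh((eps - mu) / (2 * T)) ** 2   # -df/deps weight
    return np.trapz(w * csc_rate(eps, lam), eps) / np.trapz(w, eps)

print(csc_rate(np.array([1.0, 5.0])))   # [1.0, 0.4]: slow algebraic tail
print(csc_rate_T(mu=5.0, T=2.0))        # smearing barely degrades the rate
```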
In conclusion, we have presented a rigorous theory of the inverse spin galvanic effect for graphene on transition metal dichalcogenide monolayers. We introduced a figure of merit for charge-to-spin conversion and showed that it attains values close to unity at the minority-spin band edge. The effect is robust against nonmagnetic disorder and remains large at room temperature. The current-driven spin polarization is only limited by the electronic mobility, and is thus expected to achieve unprecedentedly large values in ultra-clean samples. Our results are also relevant for group-IV honeycomb layers <cit.>, which are described by similar Dirac models. The codes used for the numerical analyses are available from the Figshare database, under Ref. <cit.>. M.M. thanks C. Verma for his hospitality at the Bioinformatics Institute in Singapore. A.F. gratefully acknowledges financial support from the Royal Society (U.K.) through a Royal Society University Research Fellowship. R.R. acknowledges the hospitality of CA2DM at NUS under grant R-723-000-009-281 (GL 769105). M.O. and A.F. acknowledge funding from EPSRC (Grant Ref: EP/N004817/1). 10 G_spintronics_review W. Han, R. K. Kawakami, M. Gmitra, and J. Fabian, Nature Nano. 9, 794 (2014). 2D_materials_review A. K. Geim and I. V. Grigorieva, Nature 499, 419 (2013). Spinorbitronics_review A. Soumyanarayanan, N. Reyren, A. Fert, and C. Panagopoulos, Nature 539, 509 (2016). 2D_band_engineering L. Britnell, et al., Science 340, 1311 (2013); A. V. Kretinin, et al., Nano Lett. 14 (2014); H. Fang, et al., PNAS 111, 6198 (2014); F. Withers, et al., Nat. Materials 14, 301 (2015); T. Shen, A. V. Penumatcha, and J. Appenzeller, ACS Nano 10, 4712 (2016). TMD Z. Zhu et al., Physical Review B 84, 153402 (2011); D. Xiao, et al., Physical Review Letters 108, 196802 (2012); K. F. Mak, et al., Nature Nanotechnology 7, 494 (2012); H. Zeng et al., Nature Nanotechnology 7, 490-493 (2012). Muniz_2015 R. A. Muniz and J. E. Sipe, Phys. Rev. B 91, 85404 (2015). G-TMD_Exp_Luo_17 Y. K. Luo et al., Nano Letters 17, 3877 (2017); G-TMD_Exp-Avsar_17 A. Avsar et al., pre-print: arXiv:1705.10267 (2017). gmitra2015 M. Gmitra and J. Fabian, Physical Review B 92, 155402 (2015). gmitra2016 M. Gmitra, D. Kochan, P. Högl, and J. Fabian, Phys. Rev. B 93, 155104 (2016). wang2015 Z. Wang, et al., Nature Communications 6, 8339 (2015). avsar2014 A. Avsar, et al., Nature Communications 5, 4875 (2014). wang2016 Z. Wang, et al., Phys. Rev. X 6, 041020 (2016). Volkl16 T. Völkl, et al., pre-print: arXiv:1706.07189 (2017). ISGE E. I. Rashba, Sov. Phys. Solid State 2, 1109 (1960); E. L. Ivchenko and G. E. Pikus, JETP Lett. 27, 604 (1978); E. L. Ivchenko, Y. B. Lyanda-Geller, and G. E. Pikus, JETP Lett. 50, 175 (1989); A. G. Aronov and Y. B. Lyanda-Geller, JETP Lett. 50, 431 (1989); V. M. Edelstein, Solid State Comm. 73, 233 (1990). sanchez2013 J. R. Sánchez, et al., Nature Comm. 4, 2944 (2013). sanchez2016 J. R. Sánchez, et al., Phys. Rev. Lett. 116, 096602 (2016). lesne2016 E. Lesne, et al., Nature Materials 15, 1261-1266 (2016). magnetic_ISGE M. Rodriguez-Vega, G. Schwiete, J. Sinova, and E. Rossi, pre-print: arXiv:1610.04229 (2016); X. Li, H. Chen, and Q. Niu, pre-print: arXiv:1707.04548 (2017). Schwab11 P. Schwab, R. Raimondi, and C. Gorini, EPL 93, 67004 (2011). Kochan17 D. Kochan, S. Irmer, and J. Fabian, Phys. Rev. B 95, 165415 (2017). Hernando06 D. Huertas-Hernando, F. Guinea, and A. Brataas, Phys. Rev. B 74, 155426 (2006). Fabian09 S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 82, 245412 (2010). rashba2009 E. I. Rashba, Physical Review B 79, 161409 (2009). SM See Supplemental Material attached below for explicit expressions and further discussions. Additional Refs. <cit.> are also included therein. Vozmediano_11 M. A. H. Vozmediano, Philos. Trans. R. Soc. A 369, 2625 (2011). Huang_16 C. Huang, Y. D. Chong, and M. A. Cazalilla, Phys. Rev. B 94, 085414 (2016). MilletariFerreira2016 M. Milletarì and A. Ferreira, Phys. Rev. B 94, 134202 (2016); ibidem, 201402 (2016). Schwab_02 P. Schwab and R. Raimondi, EPJ B 25, 483 (2002). note_symmetries Invariance under mirror reflection about the x̂-axis and isospin rotations, Λ_z=τ_z, reduces Eq. (<ref>) to a set of 8×8 coupled equations.
In addition, the minimal Dirac–Rashba model is invariant under a rotation of π exchanging sublattices, C_2, and Λ_x,y=τ_x,yσ_z, leading to only four allowed components. milletari2017 M. Milletarì, M. Offidani, A. Ferreira, and R. Raimondi, Phys. Rev. Lett. 119, 246801 (2017). ferreira2011 A. Ferreira, J. Viana-Gomes, J. Nilsson, E. R. Mucciolo, N. Peres, and A. C. Neto, Phys. Rev. B 83, 165402 (2011). Ripples M. B. Lundeberg and J. A. Folk, Phys. Rev. Lett. 105, 146804 (2010); V. K. Dugaev, E. Y. Sherman, and J. Barnaś, Phys. Rev. B 83, 085306 (2011); I. M. Vicent, H. Ochoa, and F. Guinea, Phys. Rev. B 95, 195402 (2017). Yang_17 B. Yang, et al., Phys. Rev. B 96, 041409(R) (2017). Honeycomb_Layers S. Cahangirov, et al., Phys. Rev. Lett. 102, 236804 (2009); T. Amlaki, M. Bokdam, and P. J. Kelly, Phys. Rev. Lett. 116, 256805 (2009); C.-C. Liu, H. Jiang, and Y. Yao, Phys. Rev. B 84, 195430 (2011). figshare DOI: https://doi.org/10.6084/m9.figshare.c.3904732.v1. § SUPPLEMENTARY INFORMATION FOR "OPTIMAL CHARGE-TO-SPIN CONVERSION IN GRAPHENE ON TRANSITION METAL DICHALCOGENIDES" In this Supplementary Information we provide additional details on the Dirac-Rashba model and on the semiclassical theory at large electronic density. We also provide the explicit form of the renormalized charge current vertex for the minimal model (i.e., Δ=λ_sv=0, λ≠0), as well as additional details on the finite-temperature calculation and on the impact of random fluctuations in the spin–orbit energy scale. § DETAILS ON THE MODEL §.§ Spectrum The effective Hamiltonian of graphene on a TMD monolayer can be written as <cit.>

H_0𝐤=τ_z[v σ·𝐤+λ (σ×𝐬)·ẑ+Δ σ_z+λ_sv s_z] ,

where σ,𝐬 are Pauli matrices and we have used the representation for the 4-component spinors at each valley (τ_z=±1):

ψ_τ_z=± = (ψ_±,a(b)^↑, ψ_±,a(b)^↓, ψ_±,b(a)^↑, ψ_±,b(a)^↓)^t .

The respective eigenvalues are given in Eqs. (2)-(3) of the main text. The Rashba pseudogap at k=0 (see Fig. 1, main text) is easily computed as 2λ̃=min{|Δ+λ_sv|, √(4λ^2+(Δ-λ_sv)^2)}, while the bottom of the spin-majority conduction band is ϵ_m=|λ(Δ+λ_sv)|/√(λ^2+λ_sv^2). For energies ϵ_m<ϵ<2λ̃ the spectrum develops a small "Mexican hat" feature. In Fig. <ref> we show the evolution of the spectrum for a finite Rashba effect as one turns on the proximity couplings Δ,λ_sv. We note that the energy spectrum is gapless in the following particular cases: (i) λ=0 and |λ_sv|>|Δ|, and (ii) λ_sv=-Δ. In the minimal model (Δ,λ_sv=0) the spin texture is entirely in-plane, due to the Rashba spin-momentum locking. The additional proximity-induced couplings in Eq. (<ref>) favor the establishment of a finite s_z-component. In Fig. <ref>, we show the spin texture of the electron spin-majority band for a number of representative cases. §.§ Semiclassical interpretation of the large-energy behavior of the spin–charge response function We demonstrate how the asymptotic scaling of the ISGE efficiency reported in the main text [viz., Eq. (17)] can be understood within a simple semiclassical picture. For simplicity we study the pure-Rashba model, where m_ζ^z(k)=0. The argument can be easily generalized to other cases. Neglecting interband transitions, the spin-y linear response to an electric field applied along the x̂ axis is given by [cf. Eq. (6) of the main text]

χ_yx = (1/4π) ∫ (dθ_k/2π) ∑_ζ=±1 ⟨ s_y⟩_ζ cosθ_k k_ζ(ϵ) τ_*ζ(ϵ) ,

where k_ζ(ϵ)=v^-1√(ϵ(ϵ-ζλ)) are the Fermi radii.
Substituting the expression for the equilibrium spin texture [Eq. (5); main text], we find, for large ϵ,

χ_yx ≃ -(1/8π) ∑_ζ=±1 ζ (ϵ/v)(1-ζλ/2ϵ+...) τ_*ζ(ϵ) .

Using the form of the momentum relaxation time in the Gaussian and unitary limits, we find, respectively,

τ_*ζ(ϵ) = A/ϵ ⟶ χ_yx = (A/8π v)(λ/ϵ) ,
τ_*ζ(ϵ) = A'ϵ ⟶ χ_yx = (A'/8π v) λ ϵ .

While the collapsing of the Fermi rings, k_ζ(ϵ)→ k_-ζ(ϵ) as ϵ≫2|λ|, tends to diminish the out-of-equilibrium spin polarization, the latter can still be finite depending on the asymptotic behavior of τ_*(ϵ). In the unitary limit, one has vk_ζ(ϵ)τ_*ζ(ϵ)∝ϵ^2, resulting in a monotonically increasing spin–charge response function. However, the ratio between the spin–charge response function and the charge conductivity is always χ_yx/σ_xx∝ϵ^-1, as shown in the main text. While in Eqs. (<ref>)-(<ref>) we have neglected the role of scattering between states with different spin helicity, the latter processes are included in the quantum-mechanical treatment in the main text. Given the agreement of Eqs. (<ref>)-(<ref>) with Eq. (16) of the main text, we conclude that their inclusion will not affect the above semiclassical picture. § DETAILS ON THE DIAGRAMMATIC CALCULATION §.§ Disorder averaged propagators We provide the full form of the disorder-averaged propagator in the pure-Rashba model. Denoting with a=±, respectively, the retarded and advanced sector of the theory, we obtain

G_𝐤^a=-[(ϵ^a L_+^a+λ^a L_-^a)γ_0+vL_+^aτ_zσ·𝐤-(1/2)(ϵ^a-m^a)L_-^aγ_r+(m^a L_+^a+λ^a L_-^a)γ_KM-vL_-^aτ_0σ_0(𝐤̂×𝐬)·ẑ+Γ_𝐤^a] ,

where

ϵ^a=ϵ+i a n η_0 , λ^a=λ-i a n η_r , m^a=-i a n η_3 , L_±^a=(L_1^a± L_2^a)/2 , L_1(2)^a=[v^2k^2-(ϵ^a-m^a)(ϵ^a+m^a±2λ^a)]^-1 ,

and

Γ_𝐤^a=(λ^a L_+^a τ_0+((ϵ^a+m^a)/2) L_-^a τ_z)[sin2ϕ_k(σ_xs_x-σ_ys_y)-cos2ϕ_k τ_z(σ_xs_y+σ_ys_x)] .

§.§ T-matrix calculation We report the full form of the imaginary part of the self-energy in the T-matrix approximation:

Σ^±=∓ in(η_0γ_0+η_3 γ_KM+η_r γ_r) ,
η_0,η_3 = (u_0/2) Im [ 1/(1-u_0(g_0,0^++g_0,KM^+)) ± (1-u_0(g_0,0^+-g_0,KM^+))/([1-u_0(g_0,0^+-g_0,KM^+)]^2-(2u_0 g_0,r^+)^2) ] ,
η_r = Im [ u_0 g_0,r^+/([1-u_0(g_0,0^+-g_0,KM^+)]^2-(2u_0 g_0,r^+)^2) ] ,

with g_0,i^+=(1/8) Tr[g_0^+γ_i] and γ_i={γ_0,γ_KM,(1/2)γ_r} as defined in the main text. The real part of the self-energy (omitted for simplicity) leads to a renormalization of the Fermi energy and of λ, as well as a random mass term of the Kane-Mele type <cit.>. The Fermi energy renormalization contains a logarithmic divergence, which can be taken into account by wave function renormalization and leads to a renormalization of the Fermi velocity <cit.>. §.§ Full form of the renormalized charge vertex We first define the general structure of the renormalized charge current vertex in the minimal model as

{J̃_x^c, J̃_x^s, J̃_x^sh, J̃_x^m} = (1/8) Tr[J̃_x {τ_zσ_x, s_y, τ_zσ_ys_z, σ_zs_x}] .

Below we provide the explicit form of the components to lowest order in the impurity density n in the weak scattering limit. For simplicity, we assume λ>0. Outside the Rashba pseudogap, ϵ>2λ, we obtain

J̃_x^c = 2v+𝒪(n) , J̃_x^s = -(2λ/ϵ)v+𝒪(n) , J̃_x^sh,m = 0 ,

while inside the Rashba pseudogap, ϵ<2λ, we find

J̃_x^c = 2v(ϵ^2+λϵ+2λ^2)/(ϵ^2+4λ^2)+𝒪(n) ,
J̃_x^s = -vϵ(ϵ+2λ)/(ϵ^2+4λ^2)+𝒪(n) ,
J̃_x^sh = n u_0^2 ϵ/(16 v λ)+𝒪(n^2) ,
J̃_x^m = n u_0^2 ϵ^2/(4v(ϵ^2+4λ^2))+𝒪(n^2) .

At leading order in n, the important components are J̃_x^c and J̃_x^s. In the strong scattering regime, Eqs. (15)-(19) acquire logarithmic corrections in the ultraviolet cutoff Λ. In Fig. <ref> we show that such corrections are small for the leading terms J̃_x^c and J̃_x^s, so that Eqs. (15)-(17) still hold in this regime.
In the limit u_0→∞, Eqs. (18) and (19) for the subleading components J̃_x^sh, J̃_x^m are no longer valid; yet we find that the corrections provide a negligible contribution to the response functions (not shown). §.§ Finite temperature calculation for the ISGE efficiency In Fig. 3 of the main text we showed the temperature dependence of the figure of merit. The calculation was performed numerically employing the following definition γ(μ,T)=∫_-∞^+∞dϵ ∂ f(ϵ,μ,T)/∂ϵ 2|χ_yx(ϵ,T=0)|[θ(|ϵ|-2λ̃)v_F(2λ̃)+θ(2λ̃-|ϵ|)v_F(ϵ)]/∫_-∞^+∞dϵ ∂ f(ϵ,μ,T)/∂ϵ σ_xx(ϵ,T=0) , where f(ϵ,μ,T)={ 1+exp[(ϵ-μ)/k_BT]}^-1 is the Fermi–Dirac distribution function; see the main text for the remaining definitions. § EFFECT OF RANDOM SOC We analyze here the impact of random Rashba fields (RRFs) on the CSC efficiency. In graphene without proximity SOC, RRFs lead to current-driven spin polarization via asymmetric spin precession <cit.>. In graphene on a TMD, small fluctuations in the Rashba–Bychkov coupling (|λ(𝐱)-λ|≪λ) cannot disturb the spin helicity of the eigenstates. This directly implies that the CSC rate in regime I remains unaffected (see main text). To investigate the impact of random SOC in regime II, we model the RRF as a short-range disorder potential with Rashba–Bychkov matrix structure: V_RRF(𝐱) =u_rγ_r ∑_i=1^Nδ(𝐱-𝐱_i) . Neglecting its real part, the self-energy Σ^a preserves the structure of Eq. (10) of the main text, Σ^±=∓ in(η_0γ_0+η_3 γ_KM+η_r γ_r) , where γ_0=τ_0σ_0s_0 (identity), γ_r=τ_z(σ×𝐬)·ẑ, γ_KM=τ_0σ_zs_z. We report the weak-scattering-limit form of the parameters appearing in Eq. (<ref>) for positive energies: η_0=-η_r=u_r^2ϵ/(2v^2) , η_3=0 , for ϵ>2λ, and η_0=-η_r=-η_3=u_r^2ϵ/(4v^2) , for ϵ<2λ . We find that at leading order in the impurity areal density, Eq. (17) of the main text still holds with a slightly different functional form for g(u_r,ϵ). In Fig. <ref> we plot the ratio |χ_yx|/σ_xx for Rashba-like and scalar impurities (as considered in the main text); the CSC ratio in the two cases is virtually identical in the Born scattering regime. This confirms that as long as the proximity effect is well developed in the band structure of graphene, the CSC mechanism is robust against random fluctuations in the energy scales of the model. | http://arxiv.org/abs/1706.08973v2 | {
"authors": [
"Manuel Offidani",
"Mirco Milletarí",
"Roberto Raimondi",
"Aires Ferreira"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170627180002",
"title": "Optimal charge-to-spin conversion in graphene on transition metal dichalcogenides"
} |
[email protected] NEST, Scuola Normale Superiore, I-56126 Pisa, Italy NEST, Scuola Normale Superiore, I-56126 Pisa, Italy Istituto Italiano di Tecnologia, Graphene Labs, Via Morego 30, I-16163 Genova, Italy Istituto Italiano di Tecnologia, Graphene Labs, Via Morego 30, I-16163 Genova, Italy School of Physics & Astronomy, University of Manchester, Oxford Road, Manchester M13 9PL, United Kingdom In a fluid subject to a magnetic field the viscous stress tensor has a dissipationless antisymmetric component controlled by the so-called Hall viscosity. We here propose an all-electrical scheme that allows a determination of the Hall viscosity of a two-dimensional electron liquid in a solid-state device. Non-local transport and the Hall viscosity of 2D hydrodynamic electron liquids Marco Polini December 30, 2023 ============================================================================== § INTRODUCTION Electron systems roaming in a crystal where the mean free path for electron-electron collisions is the shortest length scale of the problem can be described by conservation laws for macroscopic collective variables <cit.>. Such non-perturbative hydrodynamic description relies on the knowledge of a small number of kinetic coefficients <cit.>, i.e. the bulk ζ and shear η viscosities and the thermal conductivity κ. In the presence of time-reversal symmetry, these fully determine the response of the electron system to slowly-varyingexternal fields. However, when time-reversal symmetry is broken (for example due to the presence of an external magnetic field), a dissipationless term, controlled by the so-called Hall viscosity <cit.> η_ H, appears in the viscous stress tensor <cit.> σ^'_ij. In two spatial dimensions one has σ^'_ij = ∑_k, ℓη_ij, kℓ v_kℓ , where i,j,k and ℓ denote Cartesian indices, v_kℓ≡ (∂_k v_ℓ +∂_ℓ v_k)/2, and η_ij, kℓ is a rank-4 tensor, usually called “viscosity” tensor <cit.>, η_ij, kℓ ≡ζδ_ijδ_kℓ+ η (δ_ikδ_jℓ + δ_iℓδ_jk - δ_ijδ_kℓ)+η_ H (δ_jkϵ_iℓ - δ_iℓϵ_kj) . In Eq. (<ref>), η_ H parametrizes the portion of η_ij, kℓ which is antisymmetric with respect to the exchange ij ↔ kℓ and is non-zero only when time-reversal symmetry is broken.Recent theoretical and experimental work has attracted interest in the flow of viscous electron liquids. In particular, two experiments <cit.> in high-quality encapsulated graphene sheets have demonstrated two unique qualitative features of viscous electron transport (negative quasi-local resistance <cit.> and super-ballistic electron flow <cit.>), providing, for the first time, the ability to directly measure the dissipative shear viscosity η of a two-dimensional (2D) electron system. A different experiment <cit.> has shown that near charge neutrality, electron-electron interactions in graphene are strong enough to yield substantial violations of the Wiedemann-Franz law. Evidence of hydrodynamic transport has also been reported in quasi-2D channels of palladium cobaltate <cit.>. Interestingly, also hydrodynamic features of phonon transport have been recentlydiscussed in the literature <cit.>.In this work, we focus on the role of the Hall viscosity η_ H on the non-local electrical transport characteristics of 2D electron systems subject to a perpendicular magnetic field. Solving suitable magneto-hydrodynamic equations for the rectangular geometries sketched in Fig. <ref>, we demonstrate how one can directly measure η_ H by purely electrical non-local measurements. Our Article is organized as follows. In Sect. 
<ref> we review the theory of magneto-hydrodynamic transport in viscous 2D electron systems. In Sect. <ref> we present the solution of the magneto-hydrodynamic equations in the case of a rectangular setup, with infinite length in the x̂ direction and finite width in the ŷ direction, in the presence of a single current injector on one side of the setup. This solution is then used in Sect. <ref> as a building block to construct the solutions for the "transverse" and "vicinity" geometries, sketched in Figs. <ref>(a) and (b), respectively. Finally, in Sect. <ref> we summarize our principal findings and draw our main conclusions. Detailed microscopic derivations of the magneto-hydrodynamic equations and of the corresponding boundary conditions are reported in Appendices <ref> and <ref>, respectively. Appendix <ref> discusses how the solution of Eqs. (<ref>)-(<ref>) changes when the boundary conditions are modified. § MAGNETO-HYDRODYNAMIC THEORY OF NON-LOCAL TRANSPORT In the linear-response and steady-state regimes, hydrodynamic electron transport in the presence of a static magnetic field B = B ẑ is described by the continuity equation ∇·J(r) = 0 , and the Navier-Stokes equation -∇P(r) + ∇·σ^'(r) + e n̅∇φ(r) - (e/c) J(r)×B = (m/τ) J(r) . Here, J(r) = n̅ v(r) is the particle current density, v(r) is the fluid-element velocity, n̅ is the ground-state uniform density, P(r) is the pressure, σ^'(r) is the viscous stress tensor whose Cartesian components have been explicitly reported in Eqs. (<ref>)-(<ref>), φ(r) is the 2D electrostatic potential in the plane where electrons move, -e is the electron charge, m is the electron effective mass, and τ is a phenomenological transport time describing momentum-non-conserving collisions <cit.> (e.g. scattering of electrons against acoustic phonons). The gradient of the pressure is proportional to the gradient of the density via ∇P(r) = (ℬ/n̅)∇n(r), where ℬ = n̅^2/N_0 is the bulk modulus <cit.> of the homogeneous electron liquid, N_0 being the density of states at the Fermi energy <cit.>. It is useful to define the electrochemical potential as ϕ(r) = φ(r) + δμ(r)/(-e), where δμ(r) = [n(r)-n̅]/N_0 is the chemical potential measured with respect to its equilibrium value, e.g. μ̅ = ħ v_F√(πn̅) for the case of single-layer graphene <cit.> and μ̅ = ħ^2πn̅/(2m) for bilayer graphene <cit.>. Since experimental probes are usually sensitive to ϕ(r), from now on we will focus our attention on the electrochemical potential rather than on φ(r). We now note that the viscous stress tensor in Eqs. (<ref>)-(<ref>) can be written in the following compact form: σ^' = (η + iη_Hτ_y)[(∂_x v_x - ∂_y v_y)τ_z + (∂_x v_y + ∂_y v_x)τ_x] + ζ(∇·v)𝟙 , where τ_i with i=x,y,z are standard 2×2 Pauli matrices acting on Cartesian indices and 𝟙 is the 2×2 identity matrix. As in Eq. (<ref>) above, in the linear-response and steady-state regimes we can write v(r) = J(r)/n̅. We then note that the bulk viscosity ζ couples to ∇·J, which vanishes because of the continuity equation (<ref>). The bulk viscosity term in the viscous stress tensor therefore drops out of the problem at hand. In summary, Eq. (<ref>) simplifies to σ^' = m(ν + iν_Hτ_y)[(∂_x J_x - ∂_y J_y)τ_z + (∂_x J_y + ∂_y J_x)τ_x] , where ν≡η/(m n̅) is the kinematic shear viscosity and ν_H≡η_H/(m n̅) is the kinematic Hall viscosity. Replacing Eq. (<ref>) into Eq. (<ref>) and introducing the electrochemical potential ϕ(r), we can write the Navier-Stokes equation (<ref>) as (σ_0/e)∇ϕ(r) = (1 - D_ν^2∇^2) J(r) + ω_cτ(1 + D_H^2∇^2) J(r)×ẑ , where σ_0 = n̅e^2τ/m, D_ν≡√(ντ) has been introduced in Refs.
torre_prb_2015,bandurin_science_2016,pellegrino_prb_2016, D_ H≡√(-ν_ H/ω_ c), and ω_ c≡ e B/(mc) is the usual cyclotron frequency. As we will see below, ν_ H and ω_ c have opposite signs so that D_ H is a well defined length scale. Notice that the Hall viscosity parametrizes a correction to the ordinary Lorentz force due to the spatial dependence of the velocity v( r). We now resort to useful results of semiclassical Boltzmann transport theory (see Appendix <ref>), which capture the dependence of the shear and Hall viscosities on the magnitude of the applied magnetic field <cit.>: ν =ν_0 B_0^2/B_0^2+B^2 ν_ H =-ν_0 B B_0/B_0^2+B^2 , where ν_0 is the kinematic shear viscosity at zero magnetic field and B_0≡ c n̅/(4 eN_0 ν_0)is a characteristic magnetic field.For example, for bilayer graphene <cit.> with carrier density n̅=10^12 cm^-2, we find B_0≈ 0.1 Tesla for <cit.> ν_0 ≈ 0.1 m^2/s. For the same set of parameters and |B| ≪ B_0, we find D_ H≈ 1 μ m. Despite Eq. (<ref>) has been derived in the weak-field limit, it yields sensible results even for high magnetic fields. In the limit |B| ≫ B_0, indeed, we find ν_ H≈ -(B)ℓ_B^2 n̅/(4 ħ N_0 ), ℓ_B=√(c ħ/(e|B|)) being the magnetic length, in agreement with the quantum Hall regime results derived in Ref. sherafati_prb_2016.Since all the setups in Fig. <ref> are translationally-invariant in the x̂ direction,it is useful to introduce the following Fourier Transforms <cit.> (FTs) with respect to the spatial coordinate x:ϕ̃(k,y) = ∫_-∞^+∞d x e^-i k xϕ() andJ̃(k,y) = ∫_-∞^+∞d x e^-i k x J().The three coupled partial-differential equations (<ref>)-(<ref>) can be combined into a 4×4 system of first-order ordinary differential equations: ∂_y w(k,y) =M(k)w(k,y) , where w(k,y) is a four-component vector, i.e. w(k,y)=[k J̃_x(k,y),k J̃_y(k,y),∂_yJ̃_x(k,y),e n̅ϕ̃(k,y)/(m ν)]^ T, andM(k) = k ([0010; -i000;1+1/(k D_ν)^2 ν_ r +ω_ cτ/(k D_ν )^2 i ν_ r -i;(ν_ r-ω_ cτ) /(k D_ν)^2 1 + ν_ r^2 +(1 +ν_ rω_ cτ)/(k D_ν)^2i(1+ν_ r^2)-i ν_ r ]) ,where ν_ r≡ν_ H/ν.The matrix M(k) has four eigenvalues: λ_1/2(k) = ± |k| and λ_3/4(k) = ± q, where we have introduced the shorthand q ≡√(k^2+1/D_ν^2) . The corresponding eigenvectors are: w_1/2(k)= [ i;±(k); ± i (k); 1 ∓ i (k) ω_ cτ/D_ν^2 k^2 ] , w_3/4(k)= [±k/q; - i k^2/q^2; 1; (ν_ r -ω_ cτ)/D_ν^2 q^2 ] .Note that the eigenvalues are independent of the cyclotron frequency and Hall viscosity, while the eigenvectors explicitly depend on them. The general solution of Eq. (<ref>) can be therefore written as a linear combination of exponentials of the form ∑_j=1^4 a_j(k) w_j(k)exp(λ_jy).The four coefficients a_j(k) can be determined from the enforcement of suitable boundary conditions (BCs). § SINGLE-INJECTOR SETUP We consider a single current injector in a rectangular setup with infinite length in the x̂ direction and finite width W in the ŷ direction. This plays the role of “building block”, allowing us to solve the magneto-hydrodynamic problem posed by Eqs. (<ref>)-(<ref>) in more complicated setups like the ones sketched in Figs. <ref>(a) and (b). A current injector is mathematically described by the usual point-like BC for the component of the current density perpendicular to the y=0 edge: J_y(x,0)= -Iδ(x)/e , where I in the dc drive current <cit.>.On the edge opposite to the injector (i.e. at y=W), the orthogonal component of the current density must vanish:J_y(x, W)= 0 . The solution of the viscous problem requires additional BCs on the tangential components of the current density at both edges. 
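Before turning to the boundary conditions, it is worth noting that the advertised spectrum of M(k) is easy to verify numerically. The sketch below (a minimal numpy check; the values of k and D_ν are arbitrary) builds M(k) exactly as printed above and confirms that its eigenvalues are ±k and ±q for any ω_cτ and ν_r, which therefore enter only through the eigenvectors:

```python
import numpy as np

def M(k, Dnu, nur, wct):
    """The 4x4 matrix M(k) as written above; a = 1/(k*Dnu)^2."""
    a = 1.0 / (k * Dnu)**2
    A = np.array([
        [0.0,             0.0,                                1.0,                  0.0],
        [-1.0j,           0.0,                                0.0,                  0.0],
        [1.0 + a,         nur + wct * a,                      1.0j * nur,          -1.0j],
        [(nur - wct) * a, 1.0 + nur**2 + (1.0 + nur * wct)*a, 1.0j*(1.0 + nur**2), -1.0j*nur],
    ])
    return k * A

rng = np.random.default_rng(3)
k, Dnu = 0.7, 1.3                            # arbitrary illustrative values
q = np.sqrt(k**2 + 1.0 / Dnu**2)
for _ in range(5):
    nur, wct = rng.normal(size=2)            # random nu_H/nu and omega_c*tau
    # eigenvalues are real up to roundoff, so comparing real parts is safe here
    ev = np.sort(np.linalg.eigvals(M(k, Dnu, nur, wct)).real)
    assert np.allclose(ev, np.sort([k, -k, q, -q]))
print("eigenvalues are +-k and +-q, independent of omega_c*tau and nu_H")
```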
We impose that the current density on the boundary Ω of the sample is proportional to the off-diagonal component of the stress <cit.>: [ê_ t· (σ̂^'·ê_ n ) + (m ν/ℓ_ b) ê_ t· ]_Ω=0 , where ê_ n denotes the outer normal unit vector to the boundary,ê_ t=ê_ n×ẑ, and ℓ_ b is the so-called “boundary slip length”. Using Boltzmann equation and the well-known Reuter-Sondheimer model of boundary scattering <cit.>, we obtain (Appendix <ref>) ℓ_ b=ν/v_ F6π/9π^2-32(1+p)/(1-p) , where 0≤ p ≤ 1 is the probability of specular scattering for an electron at the boundary <cit.>. Here, p=1 for perfectly specular scattering and p=0 for completely diffusive scattering. Eq. (<ref>) shows that ℓ_ b depends on both electron-electron scattering, through ν, and electron-boundary scattering, through p. The boundary slip length diverges for p→ 1, recovering the free-surface BC <cit.>,while it remains finite in the limit p→ 0, i.e. lim_p→ 0ℓ_ b≃ 0.1 ℓ_ ee, where ℓ_ ee is the electron-electron scattering length <cit.>. (In deriving the last result we have used ν≃ v_ Fℓ_ ee/4, see Appendix <ref>.) For completely diffusive scattering, ℓ_ b is therefore ten times smaller than the electron-electron scattering length. Since the latter quantity is much smaller than the macroscopic length scales of hydrodynamic electron flow, ℓ_ b is negligibly small for p→ 0 and we obtain the no-slip BC <cit.>. Although Eq. (<ref>) has been obtained by using a very simple model for boundary scattering <cit.>, we believe that more refined models will yield different numerical values for ℓ_ b but will likely not change neither the structure of Eq. (<ref>) nor the qualitative conclusions we just drew.For the setups in Fig. <ref>, we have ê_ n =ŷ(ê_ n = - ŷ) for the upper (lower) edge at y=W (y=0), respectively. In FT with respect to x, the BCs become ∂_yJ̃_x(k,0)+i k J̃_y(k,0)=(2 i k ν_ r + ℓ_ b^-1)J̃_x(k,0) ,J̃_y(k,0)=- I /e ,∂_y J̃_x(k,W)=(2 i k ν_ r - ℓ_ b^-1)J̃_x(k,W) ,J̃_y(k, W) =0 . In the remainder of this Section, we consider the case of free-surface BCs <cit.>, which are obtained by taking the limit ℓ_ b→ +∞.This choice is physically justified by the measured <cit.> monotonic temperature dependence (i.e. no Gurzhi effect) of the ordinary longitudinal resistance in the linear-response regime and in the case of a uniform steady-state flow. For more details, we refer the reader to Refs. torre_prb_2015,pellegrino_prb_2016,bandurin_science_2016. To study the impact of a boundary slip length ℓ_ b< +∞, we have carried out the same calculations described in this Section in the opposite limit, i.e. for ℓ_ b=0 (See Appendix <ref>). The results obtained for ℓ_ b→ +∞ and ℓ_ b = 0 are compared in Sect. <ref>, see Fig. 
<ref>.The FT of the electrochemical potential along the edges reads as following: ϕ̃_+ (k)≡ϕ̃(k,0)==I ρ_0/ k {sinh (q̅) [(1 + 2 D_ν^2 k^2) cosh (k̅)+ i ω_ cτsinh (k̅)]+i 4ν_ Hντ^2k^2 q{q sinh (k̅) sinh (q̅)-k[cosh (k̅) cosh (q̅)-1]}+i2ν_ H^2 τ^2 k^2{2 k q (2 ω_ cτ-ν_ r) [1-cosh (k̅) cosh (q̅)]-sinh (q̅){iD_ν^-2cosh (k̅)-2 sinh (k̅) [k^2 (ω_ cτ-ν_ r)+ q^2 ω_ cτ]}}}×{sinh (k̅) sinh (q̅) +4ν_ H^2 τ^2 k^2 {(2k^2+D_ν^-2) sinh (k̅) sinh (q̅)+2 k q[1- cosh (k̅) cosh (q̅)]}}^-1,ϕ̃_-(k) ≡ϕ̃(k,W) = Iρ_0/ k{(1 + 2 D_ν^2k^2) sinh(q̅)+2ν_ Hτ k^2{ν_ rsinh (q̅)+2 i k q D_ν^2(1+ν_ r^2) [cosh (k̅)-cosh (q̅)]}}×{sinh (k̅) sinh (q̅) +4ν_ H^2 τ^2 k^2 {(2k^2+D_ν^-2) sinh (k̅) sinh (q̅)+2 k q[1- cosh (k̅) cosh(q̅)]}}^-1, where k̅=kW, q̅=qW, ρ_0=σ_0^-1, and σ_0=n̅e^2 τ/m represents a Drude-like conductivity.It is useful to express the edge electrochemical potentials (<ref>)-(<ref>) as ϕ̃_+(k) = I[r̃_+(k)- iρ_ H/k+ 2i ρ_ν_ H k W^2 +r̃_ + S(k)+ r̃_ + A(k)] , ϕ̃_-(k) = I[r̃_-(k)+r̃_ - S(k)+ r̃_ - A(k)] , where ρ_ H=-m ω_ c/(n̅ e^2)=B/(-en̅c), and ρ_ν_ H=m ν_ H/(n̅ e^2 W^2)=ρ_ H D_ H^2/W^2.The former quantity is the usual Hall resistivity. The resistances r̃_± (k) physically represent the solutions at zero magnetic field <cit.> and are given by r̃_+(k)=ρ_0 + 2 ρ_ν k^2 W^2 /ktanh(k W) andr̃_-(k)=ρ_0 + 2 ρ_ν k^2 W^2/sinh(k W)k , whereρ_ν=m ν/(n̅ e^2 W^2). The inverse FT of r̃_+(k) and r̃_+(-) can be calculated analytically. We find r_+(x)=-ρ_0/2 πln [sinh^2(π x/2W)]-πρ_ν/2sinh^2(π x/2W) and r_-(x)=-ρ_0/2 πln [cosh^2(π x/2W)]+πρ_ν/2cosh^2(π x/2W) . In Eqs. (<ref>) and (<ref>), the quantity r̃_± S(k) (r̃_± A(k)) is a real (imaginary) function and it is non zero only for a finite value of the kinematic Hall viscosity ν_ H. Furthermore, r̃_± S(k) (r̃_± A(k)) is even (odd) under the exchange k → -k, implying that the corresponding inverse FTs are even (odd) real functions of the coordinate x. At zero magnetic field, ω_ c and ν_ H vanish, implying that r̃_± S(k)=0 and r̃_± A(k)=0.After straightforward mathematical manipulations, we find: ϕ_+(x) =I{-ρ_0/2 πln [sinh^2(π x/2W)]-πρ_ν/2sinh^2(π x/2W) +ρ_ H/2(x)+2 ρ_ν_ Hδ^'( x/W) + r_ + S(x) + r_ + A(x)} and ϕ_-(x)=I{-ρ_0/2 πln [cosh^2(π x/2W)]+πρ_ν/2cosh^2(π x/2W) + r_ - S(x) + r_ - A(x) } , wherethe third term on the right hand side of Eq. (<ref>) is the usual contribution due to the Lorentz force, the fourth term in the same equation is a singular contribution (proportional to the derivative of the Dirac delta function) localized at the position of the injector and due to the Hall viscosity, while r_± S(x) and r_± A(x) are the inverse FTs of r̃_± S(k)and r̃_± A(k), which must be calculated numerically. Fig. <ref>(a) (Fig. <ref>(b)) shows the resistancer_± A(x) (r_± S(x)),in units of ρ_ν, plotted as a function of x/W. These calculations refer to the massive (m=0.03 m_ e, where m_ e is the bare electron mass in vacuum) chiral 2D electron system <cit.> in a bilayer graphene sample with W=2.5 μ m, n̅=10^12 cm^-2, B=0.1 B_0, ν_0=0.1 m^2/ s, with the black lines referring to τ=2 ps and the red lines to the ultra-clean limit, τ=200 ps.The quantities r_ + A(x) and r_ + S(x) are denoted by solid lines, while r_ - A(x) and r_ - S(x) are denoted by dashed lines. 
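The zero-field building blocks r_±(x) written above encode two asymptotic facts that are used repeatedly below: far from the injector the edge potentials grow linearly with a slope set by ρ_0, and the two edges equilibrate. A short numerical evaluation makes this explicit (ρ_0, ρ_ν and W are placeholder values, not fitted parameters):

```python
import numpy as np

rho0, rhonu, W = 1.0, 0.3, 1.0   # placeholder values (assumptions)

def r_plus(x):
    s = np.sinh(np.pi * x / (2.0 * W))
    return -rho0 / (2.0 * np.pi) * np.log(s**2) - np.pi * rhonu / 2.0 / s**2

def r_minus(x):
    c = np.cosh(np.pi * x / (2.0 * W))
    return -rho0 / (2.0 * np.pi) * np.log(c**2) + np.pi * rhonu / 2.0 / c**2

slope = (r_plus(10.0 * W) - r_plus(5.0 * W)) / (5.0 * W)
print(slope, -rho0 / (2.0 * W))              # large-x slope approaches -rho0/(2W)
print(r_plus(10.0 * W) - r_minus(10.0 * W))  # edge potentials equilibrate: -> 0
```

The extracted slope -ρ_0/(2W) is what identifies ρ_* = ρ_0 for free-surface BCs in the vicinity geometry discussed below.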
For small magnetic fields and in the ultra-clean τ→∞ limit we can linearize r_± A(x) with respect to B and 1/τ, obtaining the following analytical expressions: r_ +A(x)≈ρ_ν_ H W^2/4D_ν^2[( π x/2 W)-(x) - π x/2 W/sinh^2( π x/2 W)] and r_ -A(x)≈ρ_ν_ H[π^2 tanh( π x/2 W)/2 cosh^2( π x/2 W) -π W x/8D_ν^2cosh^2( π x/2 W) ] , which are in excellent agreement with the numerical results shown for τ = 200 ps in Fig. <ref>(a). In the limit B/B_0≪ 1, the quantities r̃_± S(k) start at order (B/B_0)^2. § NON-LOCAL RESISTANCES AND THE HALL VISCOSITYWe now turn to discuss explicitly the impact of the Hall viscosity on non-local electrical transport measurements carried out in the two setups sketched in Fig. <ref>(a) and (b). We start from the setup depicted in Fig. <ref>(a),where the current I is injected into the green electrode at x=0 and y=0, and extracted from the blue electrode at x=0 and y=W.As in the case of the single-injector setup we just discussed, we treat each electrode with the usual point-like BC for the component of current density orthogonal to the edge.Exploiting the linearity of the problem, it is possible to write the electrochemical potentials along the edges of the setup in Fig. <ref>(a) in terms of the potentials ϕ_+(x) and ϕ_-(x) along the lower and upper edges of the single-injector setup, respectively:ϕ(x,0) =ϕ_+(x)-ϕ_-(-x) , ϕ(x,W) =ϕ_-(x)-ϕ_+(-x) . We remind the reader that ϕ_+(x) and ϕ_-(x) are the inverse FTs of ϕ̃_+(k) and ϕ̃_-(k), respectively. For the case of the free-surface BCs (ℓ_ b→ +∞), explicit expressions for the latter quantities have been given in Eqs. (<ref>) and (<ref>). For the no-slip BCs, we refer the reader to Eqs. (<ref>)-(<ref>) in Appendix <ref>.We now introduce the “transverse” non-local resistance, which is measured in the setup sketched in Fig. <ref>(a), as R_ T(x) ≡ϕ(x,0)-ϕ(-x,0)/I= ρ_ H(x)+4 ρ_ν_ Hδ^'( x/W)+2[r_ +A(x)+r_ -A(x)] . We note that R_ T(x) →ρ_ H(x) for |x| ≫ W, because, in the same limit, [r_ +A(x)+r_ -A(x)]→ 0, independently of the value of ℓ_ b. In order to have a clear signature of the Hall viscosity it is therefore convenient to perform two measurements of the transverse resistance R_ T, i.e. one at position 0<x≲ W and a second one at position x^'≫ W. The differenceΔ R_ T(x)≡R_ T(x)- lim_x^'→∞ R_ T(x^')=2[r_ +A(x)+r_ -A(x)] is independent of ρ_ H and non-zero only in the presence of a finite Hall viscosity.Results in the transverse geometry show a weak dependence on the BCs (<ref>). Fig. <ref>(a) shows the quantity Δ R_ T(x) as a function of x/W, as calculated by using the BCs (<ref>) in the two limiting cases, ℓ_ b→ +∞(solid lines) and ℓ_ b→ 0 (dashed lines). For the calculations we have used two different values of τ: τ=2 ps (black) and τ=200 ps (red). From Fig. <ref> we clearly see a weak dependence of Δ R_ T(x) on ℓ_ b, independently of the value of τ. We now follow similar algebraic steps for the setup in Fig. <ref>(b). In this case the current I is injected into the green electrode at x=0 and y=0, and extracted from the blue electrode at x=x_0<0 and y=0. We find ϕ(x,0) =ϕ_+(x)-ϕ_+(x-x_0) , ϕ(x,W) =ϕ_-(x)-ϕ_-(x-x_0) . We define the “vicinity” resistance <cit.>as R_ V(x)≡ϕ(x,0)-ϕ(x^'→ +∞,0)/I . The mathematical expression of R_ V(x) notably simplifies in the limit x_0→ -∞, becoming R_ V(x) =r_+(x)+ρ_∗ x/2W+r_ +A(x)+r_ +S(x)-[r_ +A(+ ∞)+r_ +S(+ ∞)] , where x>0 and the resistance ρ_∗ is obtained by the asymptotic relation ρ_∗=-2W lim_x→∞ r_+(x)/|x|. In the limit ℓ_ b→∞ (i.e. free-surface BCs) and using Eq. 
(<ref>) we find ρ_∗=ρ_0.In the opposite limit, ℓ_ b→ 0 (i.e. no-slip BCs), we find ρ_∗=ρ_0{1-2 D_ν/Wtanh[W/(2D_ν)]}^-1 (see Appendix <ref>).Since we are interested in the impact of the Hall viscosity onhydrodynamic electrical transport, it is useful to concentrate our attention on the difference between the vicinity resistance in the presence of an applied magnetic field andin the absence of it: Δ R_ V(x)≡R_ V(x)- R_ V(x)|_B=0 . The vicinity geometry displays a non-trivial dependence on the BCs (<ref>). In Fig. <ref>(b) we show the quantity Δ R_ V(x) as a function of x/W, as calculated by using the BCs (<ref>) in the two limiting cases, ℓ_ b→ +∞(solid lines) and ℓ_ b→ 0 (dashed lines). As in panel (a) of the same figure, we have carried out calculations for two different values of τ: τ=2 ps (black) and τ=200 ps (red). In the ultra-clean limit (τ = 200 ps) the dependence of Δ R_ T(x) on ℓ_ b is large. Indeed, by comparing the solutions with free-surface and no-slip BCs, we note from Fig. <ref>(b) that even the sign of Δ R_ T(x) depends on ℓ_ b.Before concluding, in Fig. <ref>(a) and (b) we illustrate the dependence ofΔ R_ T(x) and Δ R_ V(x) on B/B_0, respectively. In this case, these quantities have been calculated by using the free-surface BCs and evaluated at a given position x ≲ W. In the weak-field B≪ B_0 limit, Δ R_ T is given by the product of ρ_ν_ H and a function that depends only on x, τ, and ν(B=0).A measurement of Δ R_ T therefore yields immediately the value of the Hall viscosity, provided that τ is measured from the ordinary longitudinal resistance <cit.> ρ_xx and ν(B=0) from one of the protocols discussed in Refs. bandurin_science_2016,kumar_arxiv_2017. We emphasize that this way of accessing ν_ H is insensitive to the classical Hall resistivity ρ_ H and to ballistic effects like transverse magnetic focusing <cit.>. The latter statement holds true as long as ν_ H is extracted from a measurement of Δ R_ T at values of B thatare well below those that are necessary to focus electron trajectories <cit.>, for typical sample sizes. The vicinity geometry is in practice less convenient to probe ν_ H. This is because—as seen in Fig. <ref>(b)—the range of values of B/B_0 over which Δ R_ V depends on B solely through the Hall viscosity is smaller than in the highly-symmetric geometry shown in Fig. <ref>(a). We also mention once more that the geometry sketched in Fig. <ref>(a) is more convenient for accessing ν_ H with respect to the one in Fig. <ref>(b), because in the former one the detailed nature of BCs does not influence in a significant matter the role played by the Hall viscosity on non-local electrical measurements. In other words, in the case of Fig. <ref>(a) and corresponding Δ R_ T(x), a precise estimate of ℓ_ b is unnecessary. This is at odds with a recently studied geometry <cit.>,where the impact of ν_ H on the electrochemical potential at the boundaries of the setup exists only for finite values of the boundary slip length (ℓ_ b < +∞).In our case, the electrochemical potential at the boundaries depends non-trivially on ν_ H for both free-surface (ℓ_ b = +∞) and no-slip (ℓ_ b=0) BCs. § SUMMARY AND CONCLUSIONS In summary, we have proposed an all-electrical scheme that allows a determination of the Hall viscosity ν_ H of a two-dimensional electron liquid in a solid-state device. We have carried out extensive calculations for two device geometries, illustrated in Figs. <ref>(a) and (b), and a family of boundary conditions, reported in Eq. 
(<ref>), which depends on one parameter, the so-called boundary slip length ℓ_b. The latter allows to interpolate between the widely used no-slip (ℓ_ b=0) and free-surface (ℓ_ b=+∞) boundary conditions.We have demonstrated that the transverse geometry in Fig. <ref>(a) is particularly suitable for extracting ν_ H from experimental data.Indeed, we have shown that a measurement of Δ R_ T(x)—Eq. (<ref>)—yields immediately the value of the Hall viscosity, provided that τ is measured from the ordinary longitudinal resistance <cit.> ρ_xx and ν(B=0) from one of the protocols discussed in Refs. bandurin_science_2016,kumar_arxiv_2017.We have also shown that Δ R_ T(x) is insensitive to the value of the boundary slip length, a finding that further enforces the robustness of this quantity as a diagnostic tool of the Hall viscosity.Note added.—While preparing this manuscript we became aware of related work <cit.> where the effect of the Hall viscosity on the dc flow of an electron fluid was studied by neglecting the impact of momentum-non-conserving collisions.This work was supported by Fondazione Istituto Italiano di Tecnologia and the European Union's Horizon 2020 research and innovation programme under grant agreement No. 696656 “GrapheneCore1”. § DERIVATION OF THE LINEARIZED HYDRODYNAMIC EQUATIONS FROM THE SEMICLASSICAL BOLTZMANN EQUATIONFor sufficiently long-wavelength (λ≫ 2π/k_ F) and low-frequency (ω≪ 2E_ F/ħ, where E_ F is the Fermi energy) perturbations, the response of a 2D electron system can be described by using the semiclassical Boltzmann equation <cit.>: [∂_t+ v_ p·∇_ p+ F( r,p,t)·∇_ r] f( r, p,t)= = S{f}( r, p,t) , where v_ p≡∇_ pϵ_ p is the electron velocity, ϵ_ p being the band energy, F( r,p,t)=-e[ E( r,t)+ v_ p× B( r, t)/c] is the total force acting on electrons, B( r,t) = ẑB( r, t) being the external magnetic field, and S{f}( r, p,t) is the collision integral. The latter describes all types of electron collisions,i.e. electron-electron, electron-phonon, and electron-impurity collisions.We solve Eq. (<ref>) by using the following Ansatz: f( r, p,t)=f_0(ϵ_ p)-f_0'(ϵ_ p) ℱ( r,θ_ p,t) , where f_0(ϵ)={exp[(ϵ-μ̅)/(k_ B T)]+1}^-1 is the equilibrium Fermi-Dirac distribution function, f_0'(ϵ) is its derivative with respect to the energy ϵ, and θ_ p is the polar angle of the vector p. Retaining only terms that are linear with respect to ℱ( r,θ_ p,t), assuming a uniform and static magnetic field, and Fourier transforming with respect to time, we obtain the following equation for ℱ( r,θ_ p,ω):-iωℱ( r,θ_ p,ω)+ v_ p·[∇ℱ( r,θ_ p,ω)+e E( r,ω)] +ω_ c∂_θ_ pℱ( r,θ_ p,ω)=S^ el{ℱ} ( r,θ_ p,ω) +S^ ee{ℱ}( r,θ_ p,ω) .Here E( r,ω) is the total electric field, i.e. the sum of the external field and the field generated by the electron distribution itself (the Hartree self-consistent field), ω_ c=eB/(mc) is the cyclotron frequency, m≡ p_ F/v_ F is the effective mass, p_ F and v_ F being the Fermi momentum and velocity, respectively, S^ el{f} describes momentum non-conserving collision with phonons and impurities, and, finally, S^ ee{f} is the electron-electron collision integral.We now introduce the Fourier decomposition of the distribution function ℱ( r,θ_ p,ω) with respect to the polar angle: ℱ( r, θ_ p, ω)=∑_n=-∞^+∞ℱ_n( r, ω)e^inθ_ p , where the Fourier coefficients ℱ_n( r, ω) are given by ℱ_n( r, ω)=∫dθ_ p/2πe^-in θ_ pℱ( r, θ_ p, ω) . The lowest-order Fourier coefficients are directly related to simple physical quantities. 
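Since everything that follows is phrased in terms of these angular harmonics, a minimal numerical sketch may be useful. It extracts ℱ_n, via the definition above, from an assumed test distribution built out of an isotropic piece, a rigid boost and a small elliptic deformation of the Fermi circle (the amplitudes are arbitrary illustrative choices):

```python
import numpy as np

N = 4096
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dtheta = theta[1] - theta[0]

# Assumed test input: isotropic (n=0), rigid boost (n=+-1), elliptic (n=+-2)
F = 0.3 + 1.0 * np.cos(theta) + 0.2 * np.cos(2.0 * theta)

def F_n(n):
    # F_n = \int dtheta/(2 pi) e^{-i n theta} F(theta)
    return np.sum(np.exp(-1j * n * theta) * F) * dtheta / (2.0 * np.pi)

for n in range(-2, 3):
    print(n, np.round(F_n(n).real, 6))
# F_0 = 0.3, F_{+-1} = 0.5, F_{+-2} = 0.1: a cos(n theta) deformation of
# amplitude c_n feeds the +-n harmonics with weight c_n/2.
```

The n=0, |n|=1 and |n|=2 harmonics isolated in this way are precisely the combinations identified with the density, the current and the stress in the following paragraphs.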
For example,ℱ_0 describes an isotropic dilatation or contraction of the Fermi circle, i.e. a density perturbation n( r,ω)≡∫ dp [f( r, p,ω)-f_0(ϵ_ p)]= N_0 ℱ_0( r,ω) , where N_0 is the density of states at the Fermi energy.The coefficients ℱ_± 1 describe a rigid translation of the Fermi surface, which give rise to a finite current: J( r,ω)≡∫ dp v_ p[f( r, p,ω)-f_0(ϵ_ p)]= N_0v_ F/2[ ℱ_-1( r,ω)+ℱ_1( r,ω); iℱ_1( r,ω)-iℱ_-1( r,ω) ] .The coefficients ℱ_± 2 describe an elliptic deformation of the Fermi surface, and are related to the trace-less part of the stress tensor:T ( r,ω)≡∫ dp p ⊗ v_ p[f( r, p,ω)-f_0(ϵ_ p)] = = N_0v_ F^2m/2[ℱ_0( r,ω)+ℱ_2( r,ω)+ ℱ_-2( r,ω)/2τ_z+ℱ_-2( r,ω)-ℱ_2( r,ω)/2iτ_x ] .Here,is the 2× 2 identity matrix and τ_i with i=x,y,z are ordinary 2× 2 Pauli matrices acting on Cartesian indices. Higher-order coefficients describe deformations of the Fermi surface with a more complicated angular dependence.Following Ref. guo_pnas_2017, we approximate the collision integrals in Eq. (<ref>) with the simplest possible expression, which is linear in the coefficients ℱ_n and respects relevant conservation laws. The collision integral S^ el{ℱ}( r,θ_ p,ω) respects particle number conservation and is described by the phenomenological time scale τ. Its form is S^ el{ℱ}( r,θ_ p,ω) = -1/τ[ℱ( r,θ_ p,ω)-ℱ_0( r,ω)] . The electron-electron collision integral should instead respect both particle number and momentum conservation and is described by the parameter τ_ ee. It reads as following <cit.>S^ ee{ℱ} ( r,θ_ p,ω)=-1/τ_ eeℱ( r,θ_ p,ω)+ +1/τ_ ee[ℱ_0 ( r,ω). + .ℱ_1 ( r,ω)e^iθ_ p+ℱ_-1 ( r,ω)e^-iθ_ p] , where we impose a single relaxation rate due to electron-electron interactions for all non-conserved harmonics, i.e. F_n( r,ω) with |n|>1.Multiplying Eq. (<ref>) by e^-inθ_ p and averaging over the angle we get a hierarchy of equations for the moments of the distribution function:-i ωℱ_n( r,ω)+v_ F/2{∂_x[ℱ_n-1( r,ω)+ℱ_n+1( r,ω)]-i∂_y[ℱ_n-1( r,ω)-ℱ_n+1( r,ω)]} +ev_ F/2[E_x( r,ω)(δ_n, 1+δ_n, -1)-iE_y( r,ω)(δ_n, 1-δ_n, -1)]+inω_ cℱ_n( r, ω)=-1/τ_ ee[ℱ_n( r,ω)-ℱ_0( r,ω)δ_n, 0-ℱ_1( r,ω)δ_n, 1-ℱ_-1( r,ω)δ_n, -1]-1/τ[ℱ_n( r,ω)-ℱ_0( r,ω)δ_n, 0] .Setting n=0 in Eq. (<ref>) leads to the continuity equation: -iω n( r,ω)+∇· J ( r,ω)=0 . The two equations for n=± 1 can be combined to give the Navier-Stokes equation-iω J( r,ω)+1/m∇·T̂( r,ω) +e v_ F^2N_0/2 E( r,ω)+ω_ c J( r,ω)×ẑ=-1/τ J( r,ω) . To obtain a closed set of equations we truncate the series of equations (<ref>) neglecting all the coefficients ℱ_n with |n|≥ 3. By doing this we are able to close the equations for n=± 2 and we obtain ℱ_± 2( r,ω)= -v_ F/2(∂_x ∓ i∂_y )ℱ_± 1( r,ω)/1/τ+1/τ_ ee-iω± 2iω_ c . Replacing this result into the expression for the stress tensor (<ref>) leads toT( r,ω)=ℬ/n̅n( r,ω)-σ'( r,ω) , whereℬ=n̅mv_ F^2/2=n̅^2/𝒩_0 is the bulk modulus <cit.> of the electron liquid, while the viscous stress tensor is given byσ'( r,ω)= mν(ω) [∂_xJ_x-∂_y J_y ∂_x J_y+∂_y J_x; ∂_x J_y+∂_y J_x -∂_xJ_x+∂_y J_y; ]+m ν_ H(ω) [ ∂_xJ_y+∂_y J_x -∂_x J_x+∂_y J_y; -∂_x J_x+∂_y J_y-∂_xJ_y-∂_y J_x;]. This coincides with the expression in Eq. (<ref>). Here, the frequency-dependent viscosities are given by ν(ω)=v_ F^2/41/τ_ ee+1/τ-iω/(1/τ_ ee+1/τ-iω)^2+4ω_ c^2 and ν_ H(ω) =-v_ F^2/2ω_ c/(1/τ_ ee+1/τ-iω)^2+4ω_ c^2 . Setting ω=0 and defining ν_0=v_ F^2τ_ eeτ/[4(τ_ ee+τ)], one immediately reaches Eqs. (<ref>) and (<ref>) for the magnetic-field-dependent dc viscosities. § DERIVATION OF THE BCS REPORTED IN EQ. 
(<REF>) In this Appendix we present a brief derivation of the hydrodynamic BCs in Eq. (<ref>) for the components of the fluid-element current J, starting from simple BCs <cit.> for the Boltzmann distribution function.Let us consider a portion of the boundary located at position r_0 anda local “reference system” defined by the vectors ê_ t and ê_ n introduced in Sect. <ref>. We denote by θ_0 the angle between the tangent vector ê_t and the x̂ direction. The distribution F( r_0,θ,ω) represents the density of carriers impinging from the bulk on the boundary if θ_0<θ<θ_0+π, while it represents the density of carriers scattered from the boundary into the bulk for θ_0-π<θ<θ_0. The density of scattered particles is related to the density of impinging particles by F( r_0,θ+θ_0,ω)=∫_0^π dθ' r(θ,θ') F( r_0,θ'+θ_0,ω) , where r(θ, θ') is the probability for a particle impinging with an angle θ' with respect to the boundary to be scattered with an angle θ, and -π <θ<0. Making use of Eqs. (<ref>) and (<ref>), we obtain F_n( r_0,ω)e^inθ_0=∑_m=-∞^∞ e^imθ_0(u_n-m+r_nm) F_m( r_0,ω) , where u_n≡∫_0^πdθ/2πe^-inθ=1/2 n=00 n -i/(nπ) n and r_nm≡∫_-π^0dθ e^-inθ∫_0^π dθ' e^imθ'r(θ,θ')/2π . Since the particle number is conserved in the collisions with the boundary, we have ∫_-π^0 r(θ,θ')dθ =1. This implies r_0m=u_-m . Consistently with the procedure followed in deriving the hydrodynamic equations (see Appendix <ref>), we neglect all the contributions stemming from F_n with |n|>2. Setting n=0 in Eq. (<ref>) and making use of Eq. (<ref>) yields e^iθ_0 F_1( r_0,ω)-e^-iθ_0 F_-1( r_0,ω)=0 .Noting thatê_n · J( r_0,ω)= N_0v_ Fi/2[e^iθ_0 F_1( r_0,ω)-e^-iθ_0 F_-1( r_0,ω)] , we can rewrite Eq. (<ref>) as ê_n · J( r_0,ω)=0 . Setting n=1,2 in Eq. (<ref>) gives instead (i/π+r_12)e^2iθ_0 F_2+(-1/2+r_11+r_1-1)e^iθ_0 F_1 +(-i/π+r_10) F_0 +(-i/3π+r_1-2)e^-2iθ_0 F_2 =0 and (-1/2+r_22)e^2iθ_0 F_2 +(-4i/3π+r_21+r_2-1)e^iθ_0 F_1 +r_20 F_0 +r_2-2e^-2iθ_0 F_2=0 .In what follows we use a simple one-parameter model for the scattering probability <cit.>, which consists in the linear superposition of specular reflection with probability p and diffuse reflection with probability 1-p. This reads r(θ,θ')=pδ(θ+θ')+(1-p)/π . This implies r_mn=pu_-n-m+2(1-p)u_-nu_-m . Replacing Eq. (<ref>) into Eqs. (<ref>)-(<ref>) and using Eq. (<ref>) we obtain [ i(3+p)/3π -i(1+3p)/3π;-1/2 p/2 ][ F_2 e^2iθ_0; F_-2 e^-2iθ_0 ]=e^iθ_0 F_1 [1-p/2; 4i(1-p)/3π ] . Solving for F_± 2 gives F_2e^2iθ_0=ie^iθ_0 F_1(9π^2 p -48p-16)/6π (p+1) and F_-2e^-2iθ_0=ie^iθ_0 F_1(9π^2 -48-16p)/6π (p+1) . Using this solution and noting that ê_ t· [σ̂^' ( r_0,ω)·ê_ n ]== -i N_0 v_ F^2m/4[ F_2( r_0,ω)e^2iθ_0- F_-2( r_0,ω)e^-2iθ_0] and ê_ t· J( r_0,ω)= N_0v_ Fe^iθ_0 F_1( r_0,ω) , finally leads to Eq. (<ref>) with ℓ_ b=6π/9π^2-32ν/v_ F1+p/1-p≈ 0.33 ν/v_ F1+p/1-p . § ON THE SOLUTIONS WITH NO-SLIP BCSIn the main text we have focused on the free-surface BCs.Here, we discuss how the results for the single-injector setup depend on the BCs, by analysing the no-slip—ℓ_ b=0 in Eq. (<ref>)—BCs. We start by noting that Eqs. (<ref>) and (<ref>), i.e. ϕ̃_+(k) = I[r̃_+(k)- iρ_ H/k+ 2i ρ_ν_ H k W^2 +r̃_ + S(k)+ r̃_ + A(k)] and ϕ̃_-(k) = I[r̃_-(k)+r̃_ - S(k)+ r̃_ - A(k)] , hold true independently of the chosen BCs. 
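Before specializing, it is worth locating physical samples on the ℓ_b axis using the result just derived. A minimal evaluation (setting v_F=1 and adopting the kinetic-theory estimate ν ≃ v_F ℓ_ee/4 quoted in the main text, which is an assumption of this sketch) shows that the two limits studied here bracket all physical boundary scattering:

```python
import numpy as np

C = 6.0 * np.pi / (9.0 * np.pi**2 - 32.0)  # prefactor from the derivation above
print(round(C, 3))                         # 0.332, the "0.33" quoted above

def ell_b(p, ell_ee=1.0):
    nu = 0.25 * ell_ee                     # nu ~ v_F * ell_ee / 4 with v_F = 1
    return C * nu * (1.0 + p) / (1.0 - p)

for p in (0.0, 0.5, 0.9):
    print(p, ell_b(p))
# p = 0 gives ell_b ~ 0.083 ell_ee (the "0.1 ell_ee" diffusive limit), while
# ell_b diverges as p -> 1, recovering the free-surface boundary condition.
```

Even fully diffusive boundaries therefore yield a slip length that is negligible on hydrodynamic scales, so the no-slip (ℓ_b=0) and free-surface (ℓ_b=+∞) BCs compared below span all physically relevant cases.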
In the special case of the no-slip BCs (ℓ_ b=0), we findr̃_+(k) =ρ_0 q^2cosh(k̅)sinh(q̅)-k q cosh(q̅)sinh(k̅)/k{2kq[1-cosh(k̅)cosh(q̅)]+(k^2+q^2)sinh(k̅)sinh(q̅) } , r̃_-(k) =ρ_0 q[q sinh(q̅)-k sinh(k̅)]/k{2kq[1-cosh(k̅)cosh(q̅)]+(k^2+q^2)sinh(k̅)sinh(q̅)}, r̃_± S(k) =0 , r̃_+A(k) = -iρ_ν_ Hq(3k̅^2+q̅^2)[1-cosh(k̅)cosh(q̅)]+k(3q̅^2+k̅^2)sinh(k̅)sinh(q̅)/2kq[1-cosh(k̅)cosh(q̅)]+(k^2+q^2)sinh(k̅)sinh(q̅) , r̃_- A(k) = iρ_ν_ H q (q̅^2-k̅^2)[cosh(k̅) - cosh(q̅)]/2kq[1-cosh(k̅)cosh(q̅)]+(k^2+q^2)sinh(k̅)sinh(q̅) . Here, k̅=kW, q̅=qW, q=√(k^2+1/D_ν^2), D_ν=√(ντ),ρ_0=m /(n̅ e^2 τ), ρ_ H=-m ω_ c/(n̅ e^2 ), and ρ_ν_ H=m ν_ H/(n̅ e^2 W^2). The resistance r̃_±(k) coincides with that in the absence of the external magnetic field. Straightforward mathematical manipulations lead to the following asymptotic behavior in the limit k→ 0: r̃_±(k) →ρ_0/k^2 W {1-2 D_ν/Wtanh[W/(2D_ν)]} . This means that the asymptotic behavior of the corresponding inverse FT for |x| ≫ W is r_±(x) → -ρ_0 |x|/(2 W){1-2 D_ν/Wtanh[W/(2D_ν)]}^-1.The quantities r̃_± A(k) are proportional to the kinematic Hall viscosity ν_ H and they are imaginary and odd with respect to the exchange k→-k.This implies that the corresponding inverse FTs are odd and real functions of the spatial coordinate x.In the clean τ→∞ limit, we find r̃_+(k) =-ρ_ν2 W k̅[2 k̅+sinh(k̅)]/1+2 k̅^2-cosh(2k̅) , r̃_-(k) =-ρ_ν4 W k̅[k̅cosh(k̅)+sinh(k̅)]/1+2 k̅^2-cosh(2 k̅) , r̃_+A(k) =-iρ_ν_ H4 Wk̅^3/1+2 k̅^2-cosh(2k̅) , r̃_-A(k) = iρ_ν_ H4 Wk̅^2 sinh(k̅)/1+2 k̅^2-cosh(2k̅) .Finally, we consider the case of the half-plane geometry, with a current injector placed at the origin <cit.>. We obtain the solution of the problem for this simple geometry by taking the limit W→∞ in Eqs. (<ref>)-(<ref>). We find r̃_+(k)=ρ_0[1/|k|+D_ν^2(q+|k|)] and r̃_+A(k) =i ρ_0 ν_ H/ν D_ν^2(k)(q -|k|) , while r̃_-(k) = r̃_-A(k) = 0. Eqs. (<ref>) and (<ref>) can be Fourier-transformed analytically. The result isr_+(x) =-ρ_0[1/πln(|x|/D_ν)+D_ν^2/π x^2 + D_ν/π |x| K_1(|x|/D_ν)] and r_+A(x) =ρ_0 ν_ H/νD_ν/2 x[-I_1( |x|/D_ν) + L_1( |x|/D_ν)] , where I_1(x) (K_1(x)) is the modified Bessel function of first (second) kind and order one and L_1(x) is the modified Struve function of order one. 77gurzhi_spu_1968 R.N. Gurzhi, Sov. Phys. Uspekhi 11, 255 (1968).dyakonov_prl_1993 M. Dyakonov and M. Shur, http://dx.doi.org/10.1103/PhysRevLett.71.2465Phys. Rev. Lett. 71, 2465 (1993).dyakonov_prb_1995 M.I. Dyakonov and M.S. Shur, http://dx.doi.org/10.1103/PhysRevB.51.14341Phys. Rev. B 51, 14341 (1995).dyakonov_ieee_1996 M. Dyakonov and M. Shur, http://dx.doi.org/10.1109/16.485650IEEE Trans. Electron Devices 43, 380 (1996).conti_prb_1999 S. Conti and G. Vignale, http://dx.doi.org/10.1103/PhysRevB.60.7966Phys. Rev. B 60, 7966 (1999).govorov_prl_2004 A.O. Govorov and J.J. Heremans, http://dx.doi.org/10.1103/PhysRevLett.92.026803Phys. Rev. Lett. 92, 026803 (2004).muller_prb_2008 M. Müller and S. Sachdev, http://dx.doi.org/10.1103/PhysRevB.78.115419Phys. Rev. B 78, 115419 (2008).fritz_prb_2008 L. Fritz, J. Schmalian, M. Müller, and S. Sachdev,http://dx.doi.org/10.1103/PhysRevB.78.085416Phys. Rev. B 78, 085416 (2008).muller_prl_2009 M. Müller, J. Schmalian, and L. Fritz, http://dx.doi.org/10.1103/PhysRevLett.103.025301Phys. Rev. Lett. 103, 025301 (2009).bistritzer_prb_2009 R. Bistritzer and A.H. MacDonald, http://dx.doi.org/10.1103/PhysRevB.80.085109Phys. Rev. B 80, 085109 (2009).andreev_prl_2011 A.V. Andreev, S.A. Kivelson, and B. Spivak, http://dx.doi.org/10.1103/PhysRevLett.106.256804Phys. Rev. Lett. 
106, 256804 (2011).mendoza_prl_2011 M. Mendoza, H.J. Herrmann, and S. Succi, http://dx.doi.org/10.1103/PhysRevLett.106.156601 106, 156601 (2011).svintsov_jap_2012 D. Svintsov, V. Vyurkov, S. Yurchenko, T. Otsuji, and V. Ryzhii, http://dx.doi.org/10.1063/1.4705382J. Appl. Phys. 111, 083715 (2012).mendoza_scirep_2013 M. Mendoza, H.J. Herrmann, and S. Succi, http://dx.doi.org/10.1038/srep01052Sci. Rep. 3, 1052 (2013).tomadin_prb_2013 A. Tomadin and M. Polini, http://dx.doi.org/10.1103/PhysRevB.88.205426 88, 205426 (2013).tomadin_prl_2014 A. Tomadin, G. Vignale, and M. Polini, http://dx.doi.org/10.1103/PhysRevLett.113.235901Phys. Rev. Lett. 113, 235901 (2014).torre_prb_2015 I. Torre, A. Tomadin, A.K. Geim, and M. Polini, http://dx.doi.org/10.1103/PhysRevB.92.165433Phys. Rev. B 92, 165433 (2015).torre_prb_2015_I I. Torre, A. Tomadin, R. Krahne, V. Pellegrini, and M. Polini, http://dx.doi.org/10.1103/PhysRevB.91.081402Phys. Rev. B 91, 081402(R) (2015).narozhny_prb_2015 B.N. Narozhny, I.V. Gornyi, M. Titov, M. Schütt, and A.D. Mirlin, http://dx.doi.org/10.1103/PhysRevB.91.035414Phys. Rev. B 91, 035414 (2015).briskot_prb_2015 U. Briskot, M. Schütt, I.V. Gornyi, M. Titov, B.N. Narozhny, and A.D. Mirlin, http://dx.doi.org/10.1103/PhysRevB.92.115426Phys. Rev. B 92, 115426 (2015).lucas_njp_2015 A. Lucas, http://dx.doi.org/10.1088/1367-2630/17/11/113007New J. Phys. 17, 113007 (2015).levitov_naturephys_2016 L. Levitov and G. Falkovich, http://dx.doi.org/10.1038/nphys3667Nature Phys. 12, 672 (2016).pellegrino_prb_2016 F.M.D. Pellegrino, I. Torre, A. K. Geim, and M. Polini http://dx.doi.org/10.1103/PhysRevB.94.155414Phys. Rev. B 94, 155414 (2016).lucas_prb_2016 A. Lucas, J. Crossno, K.C. Fong, P. Kim, and S. Sachdev,http://dx.doi.org/10.1103/PhysRevB.93.075426Phys. Rev. B 93, 075426 (2016).Levchenko_prb_2017 A. Levchenko, H.Y. Xie, and A.V. Andreev,http://dx.doi.org/10.1103/PhysRevB.95.121301Phys Rev B. 95, 121301(R) (2017).landaufluidmechanics L.D. Landau and E.M. Lifshitz, Course of Theoretical Physics:Fluid Mechanics (Pergamon, New York, 1987).avron_prl_1995 J.E. Avron, R. Seiler, and P.G. Zograf,http://dx.doi.org/10.1103/PhysRevLett.75.697Phys. Rev. Lett. 75, 697 (1995).tokatly_prb_2007 I.V. Tokatly and G. Vignale,http://dx.doi.org/10.1103/PhysRevB.76.161305Phys. Rev. B 76, 161305 (2007).tokatly_jpcm_2009 I.V. Tokatly and G. Vignale, http://dx.doi.org/10.1088/0953-8984/21/27/275603J. Phys.: Condens. Matter 21, 275603 (2009).read_prb_2009 N. Read, http://dx.doi.org/10.1103/PhysRevB.79.045308Phys. Rev. B 79, 045308 (2009).read_prb_2011 N. Read and E.H. Rezayi, http://dx.doi.org/10.1103/PhysRevB.84.085316Phys. Rev. B 84, 085316 (2011).haldane_prl_2011 F.D.M. Haldane,http://dx.doi.org/10.1103/PhysRevLett.107.116801Phys. Rev. Lett. 107, 116801 (2011).hoyos_prl_2012 C. Hoyos, and D.T. Son, http://dx.doi.org/10.1103/PhysRevLett.108.066805Phys. Rev. Lett. 108, 066805 (2012).bradlyn_prb_2012 B. Bradlyn, M. Goldstein, and N. Read,http://dx.doi.org/10.1103/PhysRevB.86.245309Phys. Rev. B 86, 245309 (2012).sherafati_prb_2016 M. Sherafati, A. Principi, and G. Vignale,http://dx.doi.org/10.1103/PhysRevB.94.125427Phys. Rev. B 94, 125427 (2016).alekseev_prl_2016 P.S. Alekseev, http://dx.doi.org/10.1103/PhysRevLett.117.166601Phys. Rev. Lett. 117, 166601 (2016).cortijo_2DM_2016 A. Cortijo, Y. Ferreirós, K. Landsteiner, and M.A.H. Vozmediano,http://dx.doi.org/10.1088/2053-1583/3/1/0110022D Mater. 3, 1 (2016).scaffidi_prl_2017 T. Scaffidi, N. Nandi, B. Schmidt, A.P. Mackenzie, and J.E. 
Moore,http://dx.doi.org/10.1103/PhysRevLett.118.226601Phys. Rev. Lett. 118, 226601 (2017).bandurin_science_2016 D. Bandurin, I. Torre, R.K. Kumar, M. Ben Shalom, A. Tomadin, A. Principi, G.H. Auton, E. Khestanova, K.S. NovoseIov, I.V. Grigorieva, L.A. Ponomarenko, A.K. Geim, and M. Polini, http://dx.doi.org/10.1126/science.aad0201Science 351, 1055 (2016).kumar_arxiv_2017 R.K. Kumar, D.A. Bandurin, F.M.D. Pellegrino, Y. Cao, A. Principi, H. Guo, G.H. Auton,M. Ben Shalom, L.A. Ponomarenko, G. Falkovich, I. V. Grigorieva, L.S. Levitov, M. Polini, and A.K. Geim, http://dx.doi.org/10.1038/nphys4240Nature Phys. (2017).crossno_science_2016 J. Crossno, J.K. Shi, K. Wang, X. Liu, A. Harzheim, A. Lucas, S. Sachdev, P. Kim,T. Taniguchi, K. Watanabe, T.A. Ohki, and K.C. Fong,http://dx.doi.org/10.1126/science.aad0343Science 351, 1058 (2016).moll_science_2016 P.J.W. Moll, P. Kushwaha, N. Nandi, B. Schmidt, and A.P. Mackenzie,http://dx.doi.org/10.1126/science.aac8385Science 351, 1061 (2016).fugallo_nanolett_2014 G. Fugallo, A. Cepellotti, L. Paulatto, M. Lazzeri, N. Marzari, and F. Mauri, http://dx.doi.org/10.1021/nl502059fNano Lett. 14, 6109 (2014).capellotti_natcomm_2015 A. Cepellotti, G. Fugallo, L. Paulatto, M. Lazzeri, F. Mauri, and N. Marzari, http://dx.doi.org/10.1038/ncomms7400Nature Comm. 6, 6400 (2015).Giuliani_and_Vignale G.F. Giuliani and G. Vignale, http://dx.doi.org/10.1080/00107510903194710 Quantum Theory of the Electron Liquid (Cambridge University Press, Cambridge, 2005).kotov_rmp_2012 V.N. Kotov, B. Uchoa, V.M. Pereira, F. Guinea, and A.H. Castro Neto,http://dx.doi.org/10.1103/RevModPhys.84.1067 84, 1067 (2012).steinberg_pr_1958 M.S. Steinberg, http://dx.doi.org/10.1103/PhysRev.109.1486Phys. Rev. 109, 1486 (1958). abanin_science_2011 D.A. Abanin, S.V. Morozov, L.A. Ponomarenko, R.V. Gorbachev, A.S. Mayorov, M.I. Katsnelson, K. Watanabe, T. Taniguchi, K.S. Novoselov, L.S. Levitov, and A.K. Geim, http://dx.doi.org/10.1126/science.1199595Science 332, 328 (2011).reuter_rspa_1948 G.E.H. Reuter and E.H. Sondheimer, http://dx.doi.org/10.1098/rspa.1948.0123Proc. R. Soc. Lond. A 195, 336 (1948). principi_prb_2016 A. Principi, G. Vignale, M. Carrega, and M. Polini, http://dx.doi.org/10.1103/PhysRevB.93.125410Phys. Rev. 93, 125410 (2016)beconcini_prb_2016 M. Beconcini, S. Valentini, R.K. Kumar, G.H. Auton, A.K. Geim, L.A. Ponomarenko, M. Polini, and F. Taddei,http://dx.doi.org/10.1103/PhysRevB.94.115441Phys. Rev. B 94, 115441 (2016).taychatanapat_naturephys_2013 T. Taychatanapat, K. Watanabe, T. Taniguchi, P. Jarillo-Herrero,http://dx.doi.org/10.1038/nphys2549Nature Phys. 9, 225 (2013). delacretaz_arxiv_2017 L.V. Delacrétaz, and A. Gromov,https://arxiv.org/abs/1706.03773arXiv:1706.03773.katsnelsonM.I. Katsnelson, https://doi.org/10.1017/CBO9781139031080Graphene: Carbon in Two Dimensions (Cambridge University Press, Cambridge, 2012).guo_pnas_2017 H. Guo, E. Ilsevena, G. Falkovich, and L.S. Levitov, http://dx.doi.org/10.1073/pnas.1612181114Proc. Natl. Acad. Sci. (USA) 114, 3068 (2017).dejong_prb_1995 M.J.M. de Jong and L.W. Molenkamp, http://dx.doi.org/10.1103/PhysRevB.51.13389 51, 13389 (1995). | http://arxiv.org/abs/1706.08363v2 | {
"authors": [
"Francesco M. D. Pellegrino",
"Iacopo Torre",
"Marco Polini"
],
"categories": [
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170626133101",
"title": "Non-local transport and the Hall viscosity of 2D hydrodynamic electron liquids"
} |
Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA We construct a four-dimensional SU(5) grand unified theory in which the proton is stable. The Standard Model leptons reside in the 5 and 10 irreps of SU(5), whereas the quarks live in the 40 and 50 irreps. The SU(5) gauge symmetry is broken by the vacuum expectation values of the scalar 24 and 75 irreps. All non-Standard Model fields are heavy. Stability of the proton requires three relations between the parameters of the model to hold. However, abandoning the requirement of absolute proton stability, the model fulfills current experimental constraints without fine-tuning. SU(5) Unification without Proton Decay Benjamín Grinstein December 30, 2023 ====================================== § INTRODUCTION Grand unified theories (GUTs) present an attractive way to extend the Standard Model (SM) <cit.>. In addition to being esthetically appealing, they have several nice features – they reduce the number of multiplets, exhibit gauge coupling unification and explain why the electric charges of quarks and leptons are connected. The first attempt at partial unification was based on the group SU(4)× SU(2)_L × SU(2)_R <cit.>, while the seminal papers describing full unification of couplings were those proposing the SU(5) <cit.> and SO(10) <cit.> gauge groups. Unfortunately, GUTs with complete gauge coupling unification constructed so far in four dimensions are plagued with proton decay, and the current experimental limit <cit.> excludes their simplest realization. Although there exist many models extending the proton lifetime to an experimentally acceptable level (see <cit.> and references therein, including orbifold GUTs), a theoretically interesting question remains: is it at all possible to construct a viable four-dimensional GUT based on a single gauge group with an absolutely stable proton? In this letter we propose such a model. The main idea is simple but the realization is somewhat involved. We present our model rather as a proof of concept, anticipating a simpler realization in the future. An alternative proposal achieves proton stability by imposing gauge conditions that eliminate all non-SM fields from the theory <cit.>, resulting in a model that, however, appears to be indistinguishable from the SM. The only other four-dimensional models with a single unifying gauge group designed to completely forbid proton decay that we are aware of <cit.> are experimentally excluded due to the presence of new light particles carrying SM charges. The most dangerous proton decay channels in GUTs are those mediated by vector leptoquarks and arise from gauge kinetic terms in the Lagrangian. In our model those channels are absent, since the quarks and leptons live in different SU(5) representations. In particular, the leptons reside in the 5 and 10 irreps of SU(5), the right-handed (RH) down quarks are formed from a linear combination of two 50 irreps, whereas the left-handed (LH) quark doublets and the RH up quarks come from a linear combination of two 40 irreps. The SU(5) gauge symmetry is spontaneously broken down to the SM by vacuum expectation values (vevs) of scalar field multiplets transforming as 24 and 75 irreps. In order to obtain correct SM masses, the SM Higgs is chosen to be part of a scalar 45 irrep multiplet, and there are no proton decay channels mediated by scalar leptoquarks from the Yukawa terms.
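The multiplet assignments just described are spelled out in the next section; they can be checked mechanically, since every SU(5) irrep must decompose into SM multiplets with matching total dimension and vanishing hypercharge trace. The sketch below performs this bookkeeping for the representations used in the model, with exact rational hypercharges; the conjugate irreps follow by flipping the hypercharge signs, so their checks are identical:

```python
from fractions import Fraction as F

# (dim SU(3), dim SU(2), hypercharge Y) of each SM multiplet, as listed below
irreps = {
    "5c":  [(3, 1, F(1, 3)), (1, 2, F(-1, 2))],                    # D^c_5 + l
    "10":  [(1, 1, F(1)), (3, 2, F(1, 6)), (3, 1, F(-2, 3))],      # e^c + Q + U^c
    "40":  [(3, 2, F(1, 6)), (3, 1, F(-2, 3)), (1, 2, F(-3, 2)),
            (3, 3, F(-2, 3)), (8, 1, F(1)), (6, 2, F(1, 6))],
    "50c": [(3, 1, F(1, 3)), (1, 1, F(2)), (3, 2, F(7, 6)),
            (6, 3, F(1, 3)), (6, 1, F(-4, 3)), (8, 2, F(-1, 2))],
}
for name, parts in irreps.items():
    dim = sum(d3 * d2 for d3, d2, _ in parts)
    trY = sum(d3 * d2 * Y for d3, d2, Y in parts)
    print(name, dim, trY)   # dimensions 5, 10, 40, 50 and Tr Y = 0 in every case
```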
The letter is structured as follows. In Sec. <ref> we present the fermion and scalar content of the theory. Section <ref> describes the relevant Lagrangian terms. In Sec. <ref> we demonstrate that the SM fermions have SM Yukawa-type masses and all other fields in the theory are heavy. In Sec. <ref> we show that proton decay is absent at all orders in perturbation theory. We present conclusions and possible future directions in Sec. <ref>. § PARTICLE CONTENT The model is based on the gauge group SU(5). The fermion sector of the theory is composed of the 5, 10, 40 and 50 irreps, where the 40 and 50 come in two vector-like copies, making the theory anomaly-free. The scalar sector consists of Higgs fields in the 24, 45 and 75 irreps. §.§ Fermion sector The fermion multiplets in the theory come in the following LH spinor field representations, listed below along with their SU(3)_c × SU(2)_L × U(1)_Y decomposition <cit.>:
5^c = l ⊕ D^c_5
10 = e^c ⊕ Q_10 ⊕ U^c_10
40_i = Q_40_i ⊕ U^c_40_i ⊕ (1,2)_-3/2 ⊕ (3̅,3)_-2/3 ⊕ (8,1)_1 ⊕ (6̅,2)_1/6
4̅0̅_i = Q^c_40_i ⊕ U_40_i ⊕ (1,2)_3/2 ⊕ (3,3)_2/3 ⊕ (8,1)_-1 ⊕ (6,2)_-1/6
50^c_i = D^c_50_i ⊕ (1,1)_2 ⊕ (3,2)_7/6 ⊕ (6,3)_1/3 ⊕ (6̅,1)_-4/3 ⊕ (8,2)_-1/2
5̅0̅^c_i = D_50_i ⊕ (1,1)_-2 ⊕ (3̅,2)_-7/6 ⊕ (6̅,3)_-1/3 ⊕ (6,1)_4/3 ⊕ (8,2)_1/2 ,
where i=1,2. The lowercase fields l, e are the LH lepton doublet and RH electron, respectively. The fields Q, U and D have the same quantum numbers as the SM's LH quark doublet q and RH quark singlets u and d, respectively. When coupling to the 5^c, SU(5) gauge bosons can act to transmute an l to an anti-D^c_5, and when coupling to the 10 to transmute Q_10 to an anti-U^c_10. This is the standard route for proton decay in GUTs. If, however, the 5^c multiplet is split, in that the D^c_5 mass is comparable to the GUT scale, while that of l arises from electroweak symmetry breaking, and the light d quark arises from a linear combination of the anti-D^c_50_i, then proton decay cannot proceed through this gauge boson exchange. This is an example of the realization of the mechanism we are proposing for proton stability. §.§ Higgs sector The scalar sector consists of the 24, 45 and 75 irreps of SU(5). Their decomposition into SM multiplets:
24_H = (1,1)_0 ⊕ (1,3)_0 ⊕ (3,2)_-5/6 ⊕ (3̅,2)_5/6 ⊕ (8,1)_0
45_H = H ⊕ (3,1)_-1/3 ⊕ (3,3)_-1/3 ⊕ (3̅,1)_4/3 ⊕ (3̅,2)_-7/6 ⊕ (6̅,1)_-1/3 ⊕ (8,2)_1/2
75_H = (1,1)_0 ⊕ (3,1)_5/3 ⊕ (3̅,1)_-5/3 ⊕ (3,2)_-5/6 ⊕ (3̅,2)_5/6 ⊕ (6̅,2)_-5/6 ⊕ (6,2)_5/6 ⊕ (8,1)_0 ⊕ (8,3)_0 .
Only the Higgses in the 24 and 75 irreps develop vevs at the GUT scale, which break the SU(5) gauge symmetry down to SU(3)_c × SU(2)_L × U(1)_Y <cit.>. The SM Higgs field H is part of the 45 irrep. § LAGRANGIAN The fermion kinetic terms in the Lagrangian are: ℒ_kin = i ∑_R Tr(R̄ D̸ R) , where the sum is over the representations R = 5^c, 10, 40_i, 4̅0̅_i, 50^c_i and 5̅0̅^c_i. In the standard SU(5) GUT those terms give rise to dangerous dimension-six operators mediating proton decay. In our model such terms generating proton decay are absent, since the physical states of SM quarks and leptons reside in different representations of SU(5), as shown in Sec. <ref>. The Yukawa interactions in our model are given by: ℒ_Y = Y_l 5^c 10 45^*_H + Y_u^ij 40_i 40_j 45_H + Y_d^ij 40_i 50^c_j 45^*_H + M_40^ij 4̅0̅_i 40_j + λ^ij_1 24_H 4̅0̅_i 40_j + λ^ij_2 4̅0̅_i 24_H 40_j + λ_3^i 24_H 10 4̅0̅_i + λ^ij_4 4̅0̅_i 75_H 40_j + λ_5^i 75_H 10 4̅0̅_i + M_50^ij 5̅0̅^c_i 50^c_j + λ^ij_6 5̅0̅^c_i 24_H 50^c_j + λ^ij_7 5̅0̅^c_i 75_H 50^c_j + λ_8^i 75_H 5^c 5̅0̅^c_i + h.c.
, with an implicit sum over i,j=1,2, the terms with λ_1,2^ij corresponding to the two independent contractions, and the Hermitian conjugate applied to the non-Hermitian terms. In Eq. (<ref>) the coefficients of the only other allowed gauge-invariant renormalizable Yukawa terms, Y'_u^i 10 40_i 45_H, were set to zero. Since the SM leptons live only in the 5 and 10 irreps while the SM quarks live only in the 40 and 50 irreps, and given the absence of proton decay through vector gauge bosons, there is no tree-level proton decay mediated by any of the Yukawa-type terms (contrary to other GUT models <cit.>). To see this, consider, for example, the first term in Eq. (<ref>): an exchange of the (3,1)_-1/3 of the 45 necessarily couples the light lepton doublet l to the GUT-heavy Q_10. The Lagrangian of the scalar sector consists of all possible renormalizable gauge-invariant terms involving the 24, 45 and 75 representations: ℒ_H = -(1/2)μ_24^2 Tr(24_H^2) + (1/4)a_1[Tr(24_H^2)]^2 + (1/4)a_2 Tr(24_H^4) - (1/2)μ_75^2 Tr(75_H^2) + (1/4)∑ b_k Tr(75_H^4)_k + M_45^2 Tr(|45_H|^2) + (1/2)∑ g_k Tr(24_H^2 75_H^2)_k + ∑ h_k Tr(24_H^2 |45_H|^2)_k + ... , where the index k=1,2,3 corresponds to the contractions in which the two lowest representations in a given trace combine into a singlet, a two-component tensor and a four-component tensor, respectively, and a prime is added if more than one such contraction exists. For simplicity, we exclude cubic terms in the scalar potential by assuming a 𝒵_2 symmetry of the Lagrangian. § PARTICLE MASSES In this section we show that there exists a region of parameter space for which all SM fields have standard masses at the electroweak scale and below, whereas all new fields develop large masses. §.§ Fermion representations 5 and 50 We first focus on the particles in the representation of the down quark. After SU(5) breaking, the corresponding Lagrangian mass terms are: ℒ_mass = (D_50_1 D_50_2) ℳ_D (D^c_5 D^c_50_1 D^c_50_2) , with the mass matrix elements ℳ_D^i,1 = (√(2)/3)λ_8^i v_75 , ℳ_D^i,j+1 = M_50^ij + c^D_24 λ_6^ij v_24 + c^D_75 λ_7^ij v_75 , where v_24, v_75 are the vevs of the representations 24, 75, respectively, c^D_24 = 1/(3√(30)) and c^D_75 = 1/(3√(2)). In order to switch to the mass eigenstate basis, we perform a bi-unitary transformation ℳ^diag_D = (R_D)_2×2 ℳ_D (L_D)^†_3×3 and, correspondingly, the mass eigenstates are (D^c_5' D^c_50_1' D^c_50_2')_L = L_D (D^c_5 D^c_50_1 D^c_50_2)_L , (D'_50_1 D'_50_2)_R = R_D (D_50_1 D_50_2)_R . The unitary matrices L_D and R_D are those that diagonalize the matrices [(ℳ_D)^†ℳ_D] and [ℳ_D(ℳ_D)^†], respectively. From the structure of ℳ_D we immediately infer that the matrix [(ℳ_D)^†ℳ_D] has one eigenvalue equal to zero. In order to completely forbid proton decay, the corresponding eigenstate D^c_5' cannot contain any admixture of D^c_5. This is achieved by requiring the following tuning of parameters[Condition (<ref>) does not take into account terms involving the SM Higgs. With just this relation satisfied and no further fine-tuning of the electroweak terms, this would produce a tiny mixing between the heavy and light fields suppressed by v/M_GUT, where v is the SM Higgs vev and M_GUT is the unification scale. This would result in proton decay with lifetime τ_p ≈ 10^60 years. However, there exists a condition more general than (<ref>), involving also the electroweak Yukawas, which ensures that there is no mixing between the SM quarks and the heavy fields.
An alternative solution would be to stay with condition (<ref>) and simply fine-tune Y_u^ij and Y_d^ij, so that they produce exactly the SM quark mass terms, without any mixing between the light and heavy states.]:

det(M_50^ij + c^D_24 λ_6^ij v_24 + c^D_75 λ_7^ij v_75) = 0 .

In this case D^c_5' is a linear combination solely of D^c_50_1 and D^c_50_2, and can be associated with the SM field d^c:

d^c = L_D^12 D^c_50_1 + L_D^13 D^c_50_2 ,

where the matrix entries L_D^1,j+1 are functions of M_50^ij, v_24, v_75, λ_6^ij, λ_7^ij and λ_8^i. The condition in Eq. (<ref>) ensures that our model has no proton decay that would involve either a component of the SM lepton doublet l or the down quark d. To our knowledge this novel model-building feature has not been discussed in the literature.

If one chooses to abandon the requirement of absolute proton stability, the parameters of the model need not be tuned. Proton decay experimental constraints <cit.> require merely L_D^11 ≲ 0.1 × √((L_D^12)^2 + (L_D^13)^2). The factor of ∼0.1 can be easily understood: the presence of D_5^c in D_5^c' would trigger proton decay. The standard SU(5) model predicts proton decay at a rate roughly 100 times larger than the current experimental bound. The contribution to this rate scales like the admixture of D_5^c squared, thus the admixture itself has to be roughly less than 10%.

Finally, one also has to show that all the fields within the 50^c irrep other than D_50^c are heavy. For this to be the case, it is sufficient to show that the Lagrangian terms Δℒ_mass = λ_6^ij 50^c_i 24_H 50^c_j + λ_7^ij 50^c_i 75_H 50^c_j generate different mass contributions, Δℳ^ij = c^R_24 λ_6^ij v_24 + c^R_75 λ_7^ij v_75, for those representations than for D_50^c, since then the equivalent of condition (<ref>) would not be fulfilled for those representations and they would acquire GUT-scale masses. The values of c_24 and c_75 are presented in Table <ref>. When combined, these fulfill our requirements. Table <ref> shows that the contribution of the term involving the 75 irrep in Eq. (<ref>) gives the same mass for D_50^c as for (3,2)_7/6 and (6̅,1)_-4/3. The contribution of the term involving the 24 irrep in Eq. (<ref>) breaks this degeneracy.
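This tuning is easy to check numerically. The following minimal sketch (ours, not part of the original letter; Python/NumPy with illustrative order-one complex couplings and v_24 = v_75 = 1 in GUT units) imposes the determinant condition above by taking the 2×2 block M_50 + c^D_24 λ_6 v_24 + c^D_75 λ_7 v_75 to be of rank one, and verifies that the resulting exactly massless eigenstate of (ℳ_D)^† ℳ_D contains no D^c_5 admixture:

import numpy as np

rng = np.random.default_rng(1)
v24 = v75 = 1.0                                   # GUT-scale vevs, illustrative units
lam8 = rng.standard_normal(2) + 1j*rng.standard_normal(2)

# Tuning condition: the combined 2x2 block A = M_50 + c24*lam6*v24 + c75*lam7*v75
# is chosen of rank one, so that det A = 0 exactly (values of c24, c75 are then
# irrelevant for the check, since only the combined block enters M_D).
u = rng.standard_normal(2) + 1j*rng.standard_normal(2)
w = rng.standard_normal(2) + 1j*rng.standard_normal(2)
A = np.outer(u, w)

# 2x3 down-type mass matrix; columns ordered as (D5c, D50c_1, D50c_2)
M = np.hstack([(np.sqrt(2)/3)*v75*lam8[:, None], A])

_, s, Vh = np.linalg.svd(M)
dc = Vh[-1].conj()                                # exact zero mode of M^dagger M
print("heavy masses (GUT units):", s)             # two nonzero singular values
print("residual |M dc|:", np.linalg.norm(M @ dc)) # ~1e-16
print("|D5c admixture| in massless d^c:", abs(dc[0]))  # ~1e-16 when det A = 0

If instead the block A is generic (nonzero determinant), the same computation returns a massless state with an order-one first component, i.e. a d^c with a D^c_5 admixture and hence gauge-boson-mediated proton decay.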
§.§ Fermion representations 10 and 40

The analysis for the SU(3)_c × SU(2)_L × U(1)_Y representations with the quantum numbers of the quark doublet Q and anti-up quark U^c is a little different, since they both reside in the 40 of SU(5). Following the reasoning from the previous case, we arrive at the two conditions:

det[M_40^ij + (c^U,Q_24_1 λ_1^ij + c^U,Q_24_2 λ_2^ij) v_24 + c^U,Q_75 λ_4^ij v_75] = 0 ,

with the values of the coefficients provided in Table <ref>. If these relations are fulfilled, the SM fields u^c and q are not part of the 10 irrep, preventing the proton from decaying through channels involving q, u and e. We verified that there exists a class of values for the parameters M_40^ij, λ_1,2,4^ij fulfilling the requirement (<ref>), thus forbidding proton decay. The SM u^c and q are given by:

u^c = L_U^12 U^c_40_1 + L_U^13 U^c_40_2 , q = L_Q^12 Q_40_1 + L_Q^13 Q_40_2 ,

where L_U,Q^1,j+1 are functions of M_40^ij, v_24, v_75, λ_1,2,4^ij and λ^i_3,5. The values of c^R_24_1, c^R_24_2 and c^R_75 for the other SU(3)_c × SU(2)_L × U(1)_Y components of the 40 are given in Table <ref>. All those representations have different sets of c^R's as compared to U^c and Q and, consequently, Eq. (<ref>) is not satisfied in those cases. Therefore, those representations develop GUT-scale masses.

§.§ Scalar representations 24, 45 and 75

In our model the gauge group SU(5) is broken down to the SM by the GUT-scale vevs of the 24 and 75 irreps, while the 45 does not develop a vev. Stability of the scalar potential is equivalent to the condition that all squared masses of the components of the 24 and 75 irreps are positive, except for one combination of (3,2)_-5/6 and one of (3̅,2)_5/6 <cit.>, the would-be Goldstone bosons of the broken SU(5). We checked that there exists a large region of parameter space for which all components of the 24 and 75 develop large positive squared masses, apart from the (3,2)_-5/6 and (3̅,2)_5/6, for which the mass-squared matrix is given by the 2×2 matrix

ℳ^2_(3,2) = -(1/18)(g_2 + 11 g_3 + 15 g_3') ( v_75^2/5 , v_24 v_75/(2√(10)) ; v_24 v_75/(2√(10)) , v_24^2/8 ) .

We have used relations between parameters satisfied at the stationary point of the potential. The constant of proportionality is a combination of coupling constants, defined in Eq. (<ref>), and can take either sign. The matrix (<ref>) has a vanishing determinant, so that one of the linear combinations of the fields is massless while the other is heavy.

The representation 45 does not take part in SU(5) breaking and its SU(3)_c × SU(2)_L × U(1)_Y components generically have masses at the GUT scale. Since one of those fields is the SM Higgs, a cancellation between some of the parameters of the potential is required. To show that such an arrangement is possible, it is sufficient to consider only the explicit mass term for the 45 along with the terms mixing it with the 24 in Eq. (<ref>). A small SM Higgs mass contribution is obtained for:

M_45^2 + (h_1 - (67/240) h_2 + (31/120) h_2' - (13/60) h_3 - (5/16) h_3') v_24^2 ≃ 0 .

We verified that there exists a wide range of parameters for which the GUT-scale masses of all other components of the 45 are positive. The fine-tuning in Eq. (<ref>) is equivalent to the standard SU(5) doublet-triplet splitting problem and perhaps may be solved by introducing additional SU(5) representations along the lines of <cit.>.

§.§ Quark and lepton masses

The SM electron Yukawa emerges from the term: Y_l 5^c 10 45^*_H ⊃ y_l l H^* e^c. The terms contributing to the SM down quark mass are: Y_d^ij 40_i 50^c_j 45^*_H ⊃ y_d q H^* d^c, and for the SM up quark we have: Y_u^ij 40_i 40_j 45_H ⊃ y_u q H u^c. There is no need to correct the typical SU(5) relation between the electron and down quark Yukawas, since they are not directly related in our model.

§ PROTON STABILITY AT LOOP LEVEL

So far, we have shown that the model proposed in this letter is completely free from any tree-level proton decay. As it turns out, it is also possible to forbid proton decay at any order in perturbation theory. First we note that the model has no proton decay at any loop order mediated by vector gauge bosons or scalars from the 45 irrep. This can be argued on symmetry grounds. All the Lagrangian terms in Eqs. (<ref>) and (<ref>), apart from λ_3^i 24_H 10 40_i, λ_5^i 75_H 10 40_i and λ_8^i 75_H 5^c 50^c_i, are invariant under:

5^c → -5^c , 10 → -10 .

Under this transformation, the SM leptons are odd while the SM quarks are even. For proton decay one must have an odd number of leptons in the final state and none in the initial state, and there must be no heavy particles in either the initial or final states. This is odd under the transformation (<ref>), and hence forbidden. The only remaining loop-level proton decay channels are those mediated by the scalars from the 24 and 75 irreps.
To forbid these, we assume that the spontaneous breaking of SU(5) is nonlinearly realized <cit.> and we can replace the 24 and 75 irreps by nondynamical condensates <cit.>. The 24 and 75 scalar sector of the theory is then described by a nonlinear sigma model <cit.>. This concludes the proof that in our model the proton is stable.

§ CONCLUSIONS

We have constructed a grand unified model in four dimensions based on the gauge group SU(5) which does not exhibit any proton decay. This was accomplished by assigning the quarks and leptons to different irreps of SU(5). In order to forbid proton decay at tree level, three relations between the model parameters have to hold. In addition, for proton stability at any loop order, the SU(5) breaking has to be nonlinearly realized. Abandoning the requirement of absolute proton stability removes the necessity of any tuning or the nonlinear symmetry breaking, and the model is consistent with experiments for a large range of natural parameter values.

The model has additional desirable features. Upon adding one <cit.> or several <cit.> extra scalar representations it allows for gauge coupling unification if some of the scalar fields from the 45 irrep are at the TeV scale. It also contains no problematic relation between the electron and down quark Yukawas plaguing the standard SU(5) models. However, the usual doublet-triplet splitting problem still persists and requires further model building, perhaps along the lines of a non-supersymmetric version of <cit.>.

Let us stress again that our goal was just to show through an explicit construction that, contrary to common belief, four-dimensional grand unified theories with a stable proton do exist. We hope that this may inspire new directions in model building and revive the interest in grand unification, which perhaps deserves more attention in spite of negative results from proton decay experiments.

§.§ Acknowledgments

We are grateful to Ilja Doršner and the anonymous Physical Review Letters referees for constructive comments regarding our manuscript. This research was supported in part by the DOE Grant No. DE-SC0009919. | http://arxiv.org/abs/1706.08535v2 | {
"authors": [
"Bartosz Fornal",
"Benjamin Grinstein"
],
"categories": [
"hep-ph",
"hep-th"
],
"primary_category": "hep-ph",
"published": "20170626180004",
"title": "SU(5) Unification without Proton Decay"
} |
Frequency responses of the K-Rb-^21Ne co-magnetometer

Yao Chen
School of Instrumentation Science and Opto-electronics Engineering, Beihang University, Beijing, China
[email protected]

§ ABSTRACT

The frequency responses of the K-Rb-^21Ne co-magnetometer to magnetic fields and to exotic spin-dependent forces are studied experimentally and by simulation in this paper. Both the output amplitude and the phase shift are studied as functions of the input frequency. The responses to magnetic fields are investigated experimentally. Since there is no method for applying exotic spin-dependent forces in the laboratory, the corresponding responses are obtained by numerical simulation.

§ INTRODUCTION

Atomic co-magnetometers use two spin ensembles occupying the same volume to suppress their sensitivity to magnetic field noise while maintaining the sensitivity to rotations <cit.>, anomalous spin couplings with space anisotropy <cit.>, anomalous spin forces <cit.>, etc. In a K-Rb-^21Ne co-magnetometer, the alkali spin ensemble couples with the noble gas spin ensemble through the spin exchange interaction, and the alkali atoms operate in the spin exchange relaxation free (SERF) regime <cit.>, which allows them to sensitively detect the direction of the ^21Ne nuclear spins.

As a sensor, the co-magnetometer's dynamical properties are very important. Several studies have focused on the dynamics of the co-magnetometer in response to external magnetic fields <cit.>. This study focuses on the frequency response of the co-magnetometer to exotic spin-dependent forces. Since the couplings of the electron spins and of the nuclear spins to external exotic spin-dependent forces are different, we investigate the effect of such forces on the electron spins and on the nuclear spins independently. The full Bloch equations are given, and the responses of the co-magnetometer to spin-dependent forces are solved numerically. The results show that at relatively high frequencies the electron spins and the nuclear spins decouple due to their different dynamical properties, whereas at low frequencies they couple strongly to each other.

§ THEORY

The simulation is based on the full Bloch equations of the co-magnetometer. Several works describe the full Bloch equations for the K-Rb-^21Ne co-magnetometer <cit.>; here we simply use them to simulate the responses of the co-magnetometer to external fields. In this paper we are interested in the co-magnetometer's responses to external spin-dependent forces. It has been argued that a spin-dependent force can be treated as an effective magnetic field <cit.>. In contrast to an ordinary magnetic field, however, the couplings of such forces to the electron spins and to the nuclear spins are independent, so the responses of the electron spins and of the nuclear spins must be studied separately. For the frequency response, a sinusoidal signal is applied to the Bloch equations, and the output amplitude and the relative phase shift between the input and output signals are extracted.

The choice of parameters in the co-magnetometer's Bloch equations is critical for the simulation. In a K-Rb-^21Ne co-magnetometer, typically several atmospheres of ^21Ne gas are used, and a hybrid pumping technique is employed to polarize the ^21Ne spins to approximately 20%. We chose the parameters in this simulation according to the conditions of these references <cit.>. The dynamics of the co-magnetometer are closely related to the spin relaxation times of the electron and nuclear spins. Moreover, the electron spins and the nuclear spins also couple together under certain conditions.
Since it is hard to obtain an analytical solution, we resort to numerical simulations.

§ EXPERIMENTAL AND SIMULATION RESULTS

Magnetic fields of different frequencies can be applied to the co-magnetometer, so its frequency response to magnetic fields can be measured directly. There is, however, no method for applying an exotic spin-dependent force at different frequencies, so for those only simulation results are given. The experimental conditions in this paper are similar to those in the references <cit.>. We applied a sinusoidal magnetic field to the co-magnetometer with an amplitude of 0.08 nT. This field is much smaller than the equivalent magnetic-field linewidth, so the co-magnetometer operates in its linear regime. Both the x- and y-direction magnetic fields are studied. As the co-magnetometer probes the x polarization of the alkali-metal spins, we directly give the amplitude of the sinusoidal x-polarization output signal, together with the phase shift of the output signal with respect to the input magnetic field. Figure <ref> shows the experimental results for the response to x and y magnetic fields, fitted to the simulation results. Experiment and simulation agree well, which gives good evidence that our simulation model is correct and that the parameters in the model are reasonable. The frequency response of the co-magnetometer to magnetic fields is weak at low frequencies and larger at higher frequencies. This reflects the fact that the nuclear and electron spin ensembles couple together at low frequencies and automatically compensate the input magnetic field. Figure <ref> shows the phase shift of the output signal with respect to the input x and y magnetic field signals.

The responses of the co-magnetometer to exotic spin-dependent forces are also studied in this paper. In the co-magnetometer, both the alkali-metal electron spins and the noble-gas nuclear spins couple to external exotic spin-dependent forces. Since the co-magnetometer senses weak fields by probing the x direction of the electron spins, the coupling between the electron spins and the exotic forces can be detected directly. The nuclear spins, in contrast, couple to the external exotic forces and at the same time produce a magnetic field that is sensed by the electron spins through the spin exchange interaction, so the coupling between the nuclear spins and the exotic forces can only be detected indirectly. We describe the coupling between the nuclear spins and the exotic spin-dependent forces as effective magnetic fields B^n_ax and B^n_ay in the x and y directions (the subscript a stands for abnormal, as these are abnormal fields). Similarly, we define B^e_ax and B^e_ay as the effective magnetic fields produced by the exotic forces coupling to the electron spins. For the simulation we assume a sinusoidal exotic-force input with frequencies ranging from 0.05 Hz to 10 Hz. Figure <ref> also shows the simulated response of the co-magnetometer to the exotic forces; the responses of the electron spins and of the nuclear spins are studied independently, with an input amplitude of 0.08 nT for both. The vertical axis represents the polarization of the electron spins in the x direction. Figure <ref> shows the phase shift of the output signal relative to the input fields.
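The frequency-response extraction used in these simulations can be sketched in a few lines of code. The snippet below (ours; the two-ensemble Bloch equations are written in a simplified vector form, and every rate, polarization and coupling field λM is an illustrative placeholder rather than a value used in this paper) drives the coupled electron-nuclear Bloch equations with a sinusoidal B_y field near the compensation point and projects the steady-state P^e_x onto the two quadratures to obtain the output amplitude and phase shift:

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# All values below are illustrative placeholders, not the parameters of this paper.
ge, gn = 2*np.pi*28.0, 2*np.pi*3.36e-3   # gyromagnetic ratios [rad s^-1 nT^-1]
Re, Rn = 200.0, 0.05                     # electron / nuclear relaxation rates [s^-1]
lamMn, lamMe = 150.0, 50.0               # spin-exchange coupling fields lambda*M [nT]
Pe0, Pn0 = 0.5, 0.2                      # equilibrium polarizations (along z)
Bc = -(lamMn*Pn0 + lamMe*Pe0)            # bias field at the compensation point [nT]
ez = np.array([0.0, 0.0, 1.0])

def bloch(t, y, w, B0):
    Pe, Pn = y[:3], y[3:]
    B = np.array([0.0, B0*np.sin(w*t), Bc])        # applied field, sinusoidal B_y
    dPe = ge*np.cross(B + lamMn*Pn, Pe) + Re*(Pe0*ez - Pe)
    dPn = gn*np.cross(B + lamMe*Pe, Pn) + Rn*(Pn0*ez - Pn)
    return np.concatenate([dPe, dPn])

def response(f, B0=0.08, settle=120.0):
    w, T = 2*np.pi*f, 1.0/f
    tt = np.linspace(settle, settle + 10*T, 4001)  # ten full periods after settling
    sol = solve_ivp(bloch, (0.0, tt[-1]), [0, 0, Pe0, 0, 0, Pn0], args=(w, B0),
                    t_eval=tt, method="Radau", rtol=1e-8, atol=1e-12)
    Px = sol.y[0]
    a = trapezoid(Px*np.sin(w*tt), tt)/(5*T)       # in-phase quadrature
    b = trapezoid(Px*np.cos(w*tt), tt)/(5*T)       # out-of-phase quadrature
    return np.hypot(a, b), np.arctan2(b, a)        # amplitude and phase shift

for f in (0.05, 0.5, 5.0):
    amp, ph = response(f)
    print(f"{f:5.2f} Hz: amplitude {amp:.3e}, phase {ph:+.2f} rad")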
§ DISCUSSION

Both the electron spins and the nuclear spins respond to magnetic fields, and the co-magnetometer is insensitive to low-frequency magnetic fields; this is the self-compensating effect of the co-magnetometer. At higher frequencies the electron spins and the nuclear spins decouple: the electron spins still respond to the magnetic field, while the nuclear spins gradually lose their sensitivity to the external magnetic field because of their small dynamical range. The exotic spin-dependent fields are different for the electron spins and the nuclear spins, and we simulate them independently in this paper. In the 1-10 Hz range, the most sensitive channel for the exotic field is B^e_ax, the x component of the coupling to the electron spins. This could be exploited to measure exotic spin-dependent forces at higher frequencies, where noise is usually lower.

[kornack2005nuclear] T. Kornack, R. Ghosh, and M. Romalis. Nuclear spin gyroscope based on an atomic comagnetometer. Physical Review Letters, 95(23):230801, 2005.
[smiciklas2011new] M. Smiciklas, J. Brown, L. Cheuk, S. Smullin, and M. Romalis. New test of local Lorentz invariance using a ^21Ne-Rb-K co-magnetometer. Physical Review Letters, 107(17):171604, 2011.
[vasilakis2009limits] G. Vasilakis, J. Brown, T. Kornack, and M. Romalis. Limits on new long range nuclear spin-dependent forces set with a K-^3He co-magnetometer. Physical Review Letters, 103(26):261801, 2009.
[kominis2003subfemtotesla] I. Kominis, T. Kornack, J. Allred, and M. Romalis. A subfemtotesla multichannel atomic magnetometer. Nature, 422(6932):596-599, 2003.
[fang2016dynamics] J. Fang, Y. Chen, Y. Lu, W. Quan, and S. Zou. Dynamics of Rb and ^21Ne spin ensembles interacting by spin exchange with a high Rb magnetic field. Journal of Physics B: Atomic, Molecular and Optical Physics, 49(13):135002, 2016.
[fang2016low] J. Fang, Y. Chen, S. Zou, X. Liu, Z. Hu, W. Quan, H. Yuan, and M. Ding. Low frequency magnetic field suppression in an atomic spin co-magnetometer with a large electron magnetic field. Journal of Physics B: Atomic, Molecular and Optical Physics, 49(6):065006, 2016.
[kornack2002dynamics] T. Kornack and M. Romalis. Dynamics of two overlapping spin ensembles interacting by spin exchange. Physical Review Letters, 89(25):253002, 2002.
[chen2016spin] Y. Chen, W. Quan, L. Duan, Y. Lu, L. Jiang, and J. Fang. Spin-exchange collision mixing of the K and Rb ac Stark shifts. Physical Review A, 94(5):052705, 2016.
[ji2016searching] W. Ji, C. Fu, and H. Gao. Searching for new spin-dependent interactions with SmCo5 spin sources and a SERF co-magnetometer. arXiv preprint arXiv:1610.09483, 2016. | http://arxiv.org/abs/1706.08760v1 | {
"authors": [
"Yao Chen"
],
"categories": [
"physics.atom-ph"
],
"primary_category": "physics.atom-ph",
"published": "20170627100507",
"title": "Frequency responses of the K-Rb-$^{21}$Ne co-magnetometer"
} |
^1Institute of Mathematics, Technische Universitt Berlin, Strasse des 17. Juni 136, D-10623 Berlin, Germany ^2Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, D-10117 Berlin, Germany ^3Lobachevsky State University of Nizhni Novgorod, pr.Gagarina 23, Nizhni Novgorod, 603950, Russia ^4Institute for Theoretical Physics, University of Mnster, Wilhelm-Klemm-Str. 9, D-48149 Mnster, Germany ^5Center for Nonlinear Science (CeNoS), University of Mnster, Corrensstr. 2, D-48149 Mnster, GermanyWe study the dynamics ofan array of nearest-neighbor coupled spatially distributed systems each generating a periodic sequence of short pulses. We demonstrate that unlike a solitary system generating a train of equidistant pulses, an array of such systems can produce a sequence of clusters of closely packed pulses, with the distance between individual pulses depending on the coupling phase. This regime associated with the formation of locally coupled pulse trains bounded due to a balance of attraction and repulsion between them is different from the pulse bound states reported earlier in different laser, plasma, chemical, and biological systems. We propose a simplified analytical description of the observed phenomenon, which is in a good agreement with the results of direct numerical simulations of a model system describing an array of coupled mode-locked lasers.42.60.Fc, 42.60.Da, 42.65.Sf, 02.30.KsBound pulse trains in arrays of coupled spatially extended dynamical systems D.Puzyrev^1, A. G.Vladimirov^2,3, A.Pimenov^2, S. V.Gurevich^4,5, S.Yanchuk^1 December 30, 2023 =================================================================================Nonlinear temporal pulses and spatial dissipative localized structuresappear in various optical, plasma, hydrodynamic, chemical, and biological systems <cit.>. Being well-separated from each other these structures can interact locally via exponentially decaying tails and, as a result of this interaction, they can form bound states, known also as “dissipative soliton molecules” <cit.>, characterized by fixed distances and phase differences between individual structures. Such bound states can emerge due to the oscillatory character of theinteraction force which is related to the presence of oscillating tails. Another scenariooccurs in the case of monotonic repulsiveinteraction when either the pulse tails decay monotonically, or a strong nonlocal repulsive interaction between the pulses is present. In this case the pulses tend to distribute equidistantly in time or space leading to periodic pulse trains <cit.> which, in contrast to closely packed bound states, exhibit large distances between the consequent pulses. In this Letter we show that even in the casewhen the pulses in an individual system exhibitstrong repulsion, the formation of bound pulse trainscan be achieved by arranging several systems in an array with nearest-neighbor coupling. As a result, the pulses interact not only within one system, but also with those in the neighboringones leading to a different balance of attraction and repulsion. More specifically, we demonstrate that this array can produce a periodic train of clusters consisting of two or more closely packed pulses with the possibility to change the interval between the pulses via the variation of coupling phase parameter. We show that the observed pulse train states coexist with the regimes which are amplitude synchronized and possess fixed phase shifts between the pulses emitted by neighboring array elements. 
In contrast to the pulse bound state regimes predicted and observed experimentallypreviously <cit.>, this regime cannot exist in a solitary pulse-generating system. We illustrate this general result by considering a particular example of an array of mode-locked lasers coupled via evanescent fields in a ring geometry. Such lasers are widely used for generation of short optical pulses with high repetition rates and optical frequency combs suitable for numerousapplications. Combining many lasers into an array one can achieve much larger output power and substantially improve the characteristics of the output beam by synchronizing the frequencies of the individual lasers <cit.>. Furthermore, it was recently demonstrated experimentally and verified theoretically that, in contrast to broad area lasers suffering from transverse instabilities leading to poor output beam quality, phase synchronization of individual elements of a multistripe semiconductor laser arrays can be used to generate high power beams with low far-field divergence <cit.>.The correspondence between spatially extendedand time-delay systems was established in series of publications <cit.>. In particular,it was shown thatdelay differential equations (DDEs) can be reduced to the well known Ginzburg-Landau amplitude equation in a vicinity of a bifurcation point. On the other hand, many problems expressed in terms of partial differential equations can be reformulated in terms of DDEs <cit.>. Therefore, for our analysis it is convenient to assume thateach individual array element generating a periodicpulse train is described by a set of DDEs. Then the dynamics of an array of N such elements can be described by the set of symmetrically coupled systems of nonlinear DDEsdu⃗_j/dt=F⃗[u⃗_j(t),u⃗_j(t-τ)]+C(u⃗_j-1+u⃗_j+1).Here u⃗_j,j=1,...,N is thestate variable describing the j-th system and C is the coupling matrix. We assume that in the absence of coupling, C=0, system generates periodicpulses with the period close to the delay time τ. In our simulations we use a particular model describing a mode-locked laser <cit.>. There, u⃗=(A(t),G(t),Q(t))^T, where A denotes the complex electric field amplitude, whereas G and Q are saturable gain and loss, respectively. The components of the right hand side vector function F⃗ are defined by F_1=-γ A+γ√(κ)RA(t-τ), F_2=G_0-γ_gG-e^-Q(e^G-1)|A(t-τ)|^2, and F_3=Q_0-γ_qQ-s(1-e^-Q)|A(t-τ)|^2, with R:=exp[(1-iα_g)G-(1-iα_q)Q]/2-iϑ.Here, the parameter γ represents the spectral filtering bandwidth, κ is the attenuation factor describing linear non-resonant intensity losses per cavity round trip, G_0 is the pump parameter, which is proportional to the injection current in the gain region, Q_0 is the unsaturated absorption parameter, γ_g and γ_q are the carrier relaxation rates in the amplifying and absorbing sections, and s is the ratio of the saturation intensities in these two sections. Though all the parameter values can vary among different lasers, we assume that this variation is sufficiently small and consider equal parameters. In what follows we limit our analysis to the physically meaningful situation when the lasers are coupled via evanescent fields and, hence, the coupling matrix C has only a single nonzero element C_11=η e^iφ, where η is the coupling strength and φ is the coupling phase. In the absence of coupling, η=0, for the chosen parameter values each laser operates in a stable fundamental passive mode-locking regime with a single sharp pulse per cavity round trip time <cit.>. 
This regime corresponds to modulated waves (relative periodic orbits) with A_j(t)=U(t-θ_j)e^iω t+iυ_j, G_j=G(t-θ_j), and Q_j=Q(t-θ_j), where U(t), G(t), and Q(t) are periodic in time with the period T close to the delay τ, and arbitrary phase shifts θ_j and υ_j.For small coupling η, the phase shifts θ_j and υ_j start evolving slowly in time due to the interaction between the lasers and, as a result, a synchronized state can be achieved. In particular, due to the index shift symmetry of the system, solutions are observed, that are synchronized in the amplitude |A_j|=|A| and with the constant phase shift between the adjacent lasers υ_j+1-υ_j=2π l/N, l=0,…,N-1 <cit.>. The simplest types of the synchronized regimes are complete in-phase synchronization (l=0) and anti-phase synchronization (l=N/2) for even number of lasers N. Note, that there is also a potentially interesting ”non-invasive” case l=N/4, for which the coupling vanishes A_j-1+A_j+1=0. For odd values of N, however, the anti-phase and non-invasive synchronization regimes do not exist.Further we consider the minimal cases of N=2 and N=4 lasers, where N=4 is the smallest number that allows in-phase, anti-phase, and non-invasive synchronized solutions. Figure <ref>(a) demonstrates the stability regions for the in-phase and anti-phase synchronized mode-locked solutions of the system of four lasers using the master stability function approach <cit.> in the (φ,η) plane of coupling parameters. The form of coupling implies that the stability region of the anti-phase synchronized solution coincides with that of the in-phase synchronized solution shifted by π with respect to the coupling phase angle φ.Furthermore, the P and T lines in Fig. <ref> (a) show bifurcation thresholds of the in-phase synchronized regime (l=0). In particular, the green line (T) indicates a torus bifurcation threshold whereas the two red lines correspond to pitchfork bifurcations.The torus bifurcation leads to a slight change of the pulse shape from one pulse period to another, while synchronization and period of pulsing remains the same. Instead, the pitchfork bifurcations of the synchronized solution leads to the appearance of a new bound pulse train regime. In this regime, lasers pulse sequentially on the ring one after another, as shown in Fig. <ref>(b). Here, each laser stays close to its fundamental mode-locked regime with period τ_0 close to the delay time τ. The pulse train bound-state regime can be better visualized using the so-called pseudo-spatial coordinates plane (T,σ) <cit.>, where σ=t τ_0 is the original fast time and T=t/τ_0 is the slow time (number or round trips, τ_0=τ+0.03), see Fig. <ref>(a). We observe that pulses which were initially distributed on the interval σ∈[0,τ_0] start to interact and finally form a bound cluster. The distance between the pulses in this cluster can be controlled by changing the coupling phase φ. Similar bound pulse train for the case of two coupled lasers is shown in Fig. <ref>(b). 
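Before turning to the stability analysis, we note that the coupled-laser model above is straightforward to explore by direct time integration. The following sketch (ours; all parameter values are illustrative placeholders and not those used to produce the figures of this Letter) advances the N ring-coupled delay differential equations with a forward-Euler step and a circular history buffer holding one delay interval:

import numpy as np

# Illustrative placeholder parameters (not those used for the figures)
N, tau, dt = 4, 25.0, 5e-3
gam, kap = 5.0, 0.7                    # spectral filtering and cavity losses
ag, aq, th = 2.0, 1.0, 0.0             # linewidth enhancement factors, detuning
g0, gg = 0.025, 0.01                   # pump and gain relaxation
q0, gq, s = 0.3, 1.0, 10.0             # absorber parameters
eta, phi = 0.02, 3.0                   # coupling strength and phase

H = int(round(tau/dt))                 # buffer depth = one delay interval
rng = np.random.default_rng(0)
A = 1e-3*(rng.standard_normal((H, N)) + 1j*rng.standard_normal((H, N)))
G = np.full((H, N), g0/gg)             # unsaturated gain history
Q = np.full((H, N), q0/gq)             # unsaturated absorption history
a, g, q = A[-1].copy(), G[-1].copy(), Q[-1].copy()

for n in range(40*H):                  # 40 cavity round trips
    k = n % H                          # slot holding the state one delay ago
    Ad, Gd, Qd = A[k].copy(), G[k].copy(), Q[k].copy()
    R = np.exp(0.5*((1 - 1j*ag)*Gd - (1 - 1j*aq)*Qd) - 1j*th)
    coup = eta*np.exp(1j*phi)*(np.roll(a, 1) + np.roll(a, -1))   # ring coupling
    a = a + dt*(-gam*a + gam*np.sqrt(kap)*R*Ad + coup)
    g = g + dt*(g0 - gg*g - np.exp(-Qd)*(np.exp(Gd) - 1.0)*np.abs(Ad)**2)
    q = q + dt*(q0 - gq*q - s*(1.0 - np.exp(-Qd))*np.abs(Ad)**2)
    A[k], G[k], Q[k] = a, g, q         # overwrite the oldest entry (error O(dt))

print("final intensities |A_j|^2:", np.abs(a)**2)

A production study would of course use an adaptive DDE integrator; the fixed-step Euler scheme above is only meant to show the structure of the model.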
In what follows, we investigate the origin of this bound state solution by applying the multiscale method <cit.> to the two-laser system in order to find the reduced system of equations governing the slow dynamics of the time separation between the pulses and their phase differences.In order to use the multiscale method, we consider the limit of small coupling, η=εμ with a small parameter ε, and search for the solution of system (<ref>) in the form A_j(t_0,t_1)=e^iϕ_j(t_1)𝒜[t_0+θ_j(t_1)]+ε A_j^1(t_0,t_1), G_j=𝒢[t_0+θ_j(t_1)]+ε G_j^1(t_0,t_1), Q_j=𝒬[t_0+θ_j(t_1)]+ε Q_j^1(t_0,t_1). Here 𝒜, G, and Q is a τ_0-periodic solution of the unperturbed system (mode-locked regime in an uncoupled laser), A_j^1, G_j^1, Q_j^1 describe first order corrections due to the coupling between the lasers, t_0=t and t_1=ε t are fast and slow times, respectively.In the following, we explain how the reduced system (<ref>) for the the time separation Θ=θ_2-θ_1 between the pulses and the phase difference Φ=ϕ_2-ϕ_1 between pulses peaks can be obtained. For this purpose, the ansatz above is substituted into (<ref>) and the resulting system is expanded in orders of ε (see <cit.> for more details on this method). In the order 𝒪(ε), the following linear system of DDEs for the vector of perturbations S_j=(Re A_j^1,Im A_j^1,G_j^1,Q_j^1)^T is obtained -Ṡ_j+a_1(t)S_j(t)+a_2(t)S_j(t-τ)=a_3θ̇_j+a_4ϕ̇_j+ℛ((-1)^jΘ,(-1)^jΦ),j=1,2, with linear operators a_1,2 and vector functions a_3,4 depending only on the unperturbed pulse solution. Expressions for a_1,2,3,4 and ℛ are given in the Supplemental material.The solvability condition (for bounded solutions) of the linear non-homogeneous system (<ref>) requires that its right hand side is orthogonal to the neutral (or Goldstone) modes of the adjoint homogenous system <cit.>. In the case of small coupling coefficient, η≪1, these modes can be approximated by ψ_j^† and ξ_j^† with j=1,2, that are related to the phase shift and the time-shift invariance of the model equations. These modes can be found numerically (see, e.g. <cit.>). The orthogonality of the right hand side of (<ref>) to ψ_1,2^† with respect to the inner product ∫_0^T(a_3θ̇_j+a_4ϕ̇_j+ℛ((-1)^jΘ,(-1)^jΦ))ψ_j^†(t)dt=0 leads to the system of two ordinary differential equations p_ψθ̇_1+q_ψϕ̇_1=μ R_ψ(Θ,Φ),p_ψθ̇_2+q_ψϕ̇_2=μ R_ψ(-Θ,-Φ),where coefficients p_ψ, q_ψ, and R_ψ are given by the the corresponding scalar products cf. the Supplemental material. Subtracting equations (<ref>) and (<ref>) from one another, one obtains the equation for the phase difference Φ and time separation of the pulses Θ: p_ψΘ̇+q_ψΦ̇=μ(R_ψ(-Θ,-Φ)-R_ψ(Θ,Φ)).In the same way, the orthogonality conditions to the modes ξ_1,2^† lead to the equation p_ξΘ̇+q_ξΦ̇=μ(R_ξ(-Θ,-Φ)-R_ξ(Θ,Φ)).Solving now (<ref>) and (<ref>) for Θ̇ and Φ̇, we obtain the reduced system of two ordinary differential equations for the slow time evolution of Θ and Φ: Θ̇=ηcos(Φ+Δ_Θ(Θ))f_Θ(Θ), Φ̇=ηsin(Φ+Δ_Φ(Θ))f_Φ(Θ),where f_Θ,Φ(Θ)≥0. The specific shape of the right hand side of (<ref>) is due to the fact that the function R_ψ(Θ,Φ) contains only first Fourier harmonic in Φ. As a result, the dependence on Φ is a linear combination of sin(Φ) and cos(Φ) that can be represented as (<ref>). More details are given in the Supplemental material.The bound pulse train states correspond to the fixed points of (<ref>). 
These points, lying on the intersection of the nullclines of (<ref>), are defined by the condition cos(Φ+Δ_Θ(Θ)) = sin(Φ+Δ_Φ(Θ)) = 0, which implies that one of the two conditions must be satisfied: Δ_Θ(Θ) = Δ_Φ(Θ), or Δ_Θ(Θ) = Δ_Φ(Θ) + π. The first condition corresponds to the saddles of the system (<ref>), while the second corresponds either to nodes or to foci. Figure <ref>(a) shows intersecting nullclines of (<ref>) in the (Θ,Φ) phase plane. Here, blue filled (unfilled) circles depict stable (unstable) nodes, red filled (unfilled) circles correspond to stable (unstable) foci, and blue squares to saddles. All of these equilibria correspond to pulse bound states in system (<ref>) with the same stability properties. Note that the particular case Θ=0 corresponds to synchronized pulses with zero time separation, for which the system (<ref>) reduces to the single equation Φ̇ = μ C_Φ sin Φ, which admits either in-phase synchronization Φ = 0 or anti-phase synchronization Φ = π, as mentioned above.

Remarkably, the reduced system (<ref>) resembles the equations governing the slow dynamics of the distance and phase difference between two interacting dissipative solitons in spatially extended systems described by the generalized complex Ginzburg-Landau equation on an unbounded domain <cit.>. The case of coupled lasers, however, is distinct in two aspects: (i) unlike the case of the complex Ginzburg-Landau equation, the presence of the phase shifts Δ_Θ,Φ(Θ) in Eqs. (<ref>) allows for the existence of bound states with a Θ-dependent phase difference between the pulses different from 0, π, and ±π/2, and (ii) instead of a countable set of equidistant roots, the functions f_Θ,Φ(Θ) have no roots at all, which means that in laser arrays there is a finite number of bound states distributed along the Θ-axis in a more complex manner.

The 2D phase plane of the reduced system (<ref>) is presented in Fig. <ref>, where the equilibria and their basins of attraction are shown. Note that, due to the symmetry (Θ,Φ)→(-Θ,-Φ), it is sufficient to show only the left half of the coordinate system. Here, the point 𝐂1 corresponds to a stable anti-phase synchronized solution, while points 𝐁1, 𝐁2, and 𝐁3 indicate the bound states with nonzero pulse time separations Θ. Figure <ref> shows the case of φ=3.0. For other values of φ, between two and five stable equilibria corresponding to distinct bound states can coexist. The basins of attraction of these states are separated by saddles and, interestingly, they can wind into spiral sources, as shown in the inset of Fig. <ref>. A video showing the position of the equilibria and the corresponding basins of attraction for different values of φ is available in the Supplemental material.
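The role of the equilibria of the reduced system can be illustrated with a short numerical experiment. In the sketch below (ours; the profiles f_Θ,Φ and Δ_Θ,Φ are smooth toy functions, not the ones extracted from the laser model), the reduced equations for Θ and Φ are integrated from a grid of initial conditions and the distinct fixed points that attract the flow, i.e. the bound pulse trains, are collected:

import numpy as np
from scipy.integrate import solve_ivp

eta = 0.01
# Toy stand-ins for the numerically extracted profiles (illustrative only):
fT = lambda Th: 1.0 + 0.3*np.cos(0.5*Th)        # f_Theta(Theta) > 0
fP = lambda Th: 0.8 + 0.2*np.sin(0.3*Th)        # f_Phi(Theta)   > 0
dT = lambda Th: 0.40*Th                         # Delta_Theta(Theta)
dP = lambda Th: 0.25*Th + 0.3                   # Delta_Phi(Theta)

def rhs(t, y):
    Th, Ph = y
    return [eta*np.cos(Ph + dT(Th))*fT(Th),
            eta*np.sin(Ph + dP(Th))*fP(Th)]

bound_states = set()
for Th0 in np.linspace(-8.0, 8.0, 9):
    for Ph0 in np.linspace(-np.pi, np.pi, 7):
        sol = solve_ivp(rhs, (0.0, 2.0e4), [Th0, Ph0], rtol=1e-9, atol=1e-12)
        Th, Ph = sol.y[0, -1], sol.y[1, -1]
        if np.hypot(*rhs(0.0, [Th, Ph])) < 1e-8:   # converged to a fixed point
            bound_states.add((round(Th, 2),
                              round((Ph + np.pi) % (2*np.pi) - np.pi, 2)))
print(sorted(bound_states))   # each (Theta*, Phi*) is a candidate bound pulse train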
A more detailed stability analysis of the bound state corresponding to the equilibrium 𝐁1 is performed numerically using the path continuation software DDE-BIFTOOL <cit.> applied to Eqs. (<ref>). The bifurcation diagram showing the domain of stability of this bound state is presented in Fig. <ref>(b). Here, the red line P corresponds to a subcritical pitchfork bifurcation from the in-phase synchronized solution, whereas the blue line F corresponds to a fold bifurcation leading to the appearance of unstable bound state solutions. The dashed black line T shows the first torus bifurcation of the pulse bound state, which leads to a slight change of the pulse shapes from one pulse period to another, while the period of the pulsing remains the same.

To conclude, we discovered the bound pulse train regime in an array of nearest-neighbor coupled nonlinear distributed dynamical systems. In this regime trains of short pulses generated by individual elements of the array are bound by local interaction, forming closely packed pulse clusters. In the limit of small coupling strength, asymptotic equations are derived governing the slow time evolution of the positions and phases of the interacting pulses in an array consisting of two pulse generators. The pulse separations and phase differences between the pulses in bound states, as well as the basins of attraction of different bound states, calculated using this semi-analytical approach, are in good agreement with the results of direct numerical simulations of a set of DDEs describing an array of coupled mode-locked lasers (<ref>). The stability and bifurcations of the bound pulse train regime were studied numerically with the path-following technique. The bound states reported in this Letter have a similarity with the rather well studied bound states of dissipative solitons in spatially extended systems, where multiple soliton clusters surrounded by a linearly stable homogeneous regime can be formed due to a similar mechanism of balancing between attraction and repulsion. However, unlike the bound states formed by dissipative solitons, the appearance of this new type of bound states is related to the presence of coupling between the neighboring lasers and is impossible in a solitary array element, where the zero-intensity steady state is linearly unstable and the pulse interaction is nonlocal and always repulsive. Furthermore, unlike the case of complex Ginzburg-Landau-type equations, the new bound pulse train regime can exhibit a continuously changing phase difference between the pulses depending on their time separation, and corresponds to a finite number of fixed points distributed non-equidistantly along the time axis. Since the physical mechanism of the bound state formation due to the coupling between neighboring lasers is quite general, it can be observed in other physical systems described by coupled sets of partial or delay differential equations where pulse solutions are present. Therefore, we believe that our results are generic and valid for a large class of coupled spatially extended systems of different physical origin.

We thank the German Research Foundation (DFG) for financial support in the framework of the Collaborative Research Center 910, Project A3, and the Collaborative Research Center 787, Project B5. A.V. also acknowledges the support of Grant No. 14-41-00044 of the Russian Scientific Foundation.

[Rosanov02] N. N. Rosanov, Spatial Hysteresis and Optical Patterns (Springer, Berlin, 2002).
[Vahed20140016] H. Vahed, F. Prati, M. Turconi, S. Barland, and G. Tissoni, Phil. Trans. R. Soc. A 372 (2014), doi:10.1098/rsta.2014.0016.
[Clerc2005] M. G. Clerc, A. Petrossian, and S. Residori, Phys. Rev. E 71, 015205 (2005).
[Arecchi19991] F. Arecchi, S. Boccaletti, and P. Ramazza, Phys. Rep. 318, 1 (1999).
[AkhmedievDS2008] N. Akhmediev and A. Ankiewicz, eds., Dissipative Solitons: From Optics to Biology and Medicine, Lecture Notes in Physics, Vol. 751 (Springer, Berlin/Heidelberg, 2008).
[LiobashevskiPRL1996] O. Lioubashevski, H. Arbell, and J. Fineberg, Phys. Rev. Lett. 76, 3959 (1996).
[Lloyd2015] D. J. B. Lloyd, C. Gollwitzer, I. Rehberg, and R. Richter, J. Fluid Mech. 783, 283 (2015).
[Rotermund19913083] H. H. Rotermund, S. Jakubith, A. von Oertzen, and G. Ertl, Phys. Rev. Lett. 66, 3083 (1991).
[Mikhailov2006] A. S. Mikhailov and K. Showalter, Phys. Rep. 425, 79 (2006).
[PurwinsDS2010] H.-G. Purwins, H. U. Bödeker, and S. Amiranashvili, Adv. Phys. 59, 485 (2010).
[Lier2013] A. W. Liehr, Dissipative Solitons in Reaction Diffusion Systems: Mechanism, Dynamics, Interaction (Springer, Berlin/Heidelberg, 2013).
[Suzuki1995] M. Suzuki, T. Ohta, M. Mimura, and H. Sakaguchi, Phys. Rev. E 52, 3645 (1995).
[barland12] S. Barland, M. Giudici, G. Tissoni, J. R. Tredicce, M. Brambilla, L. Lugiato, F. Prati, S. Barbay, R. Kuszelewicz, T. Ackemann, W. J. Firth, and G.-L. Oppo, Nat. Photonics 6, 204 (2012).
[Grelu2012] P. Grelu and N. Akhmediev, Nat. Photonics 6, 84 (2012).
[Elphick1990] C. Elphick, E. Meron, J. Rinzel, and E. Spiegel, J. Theor. Biol. 146, 249 (1990).
[Kutz1998] J. N. Kutz, B. C. Collings, K. Bergman, and W. H. Knox, IEEE J. Quantum Electron. 34, 1749 (1998).
[Nizette200695] M. Nizette, D. Rachinskii, A. Vladimirov, and M. Wolfrum, Physica D 218, 95 (2006).
[PhysRevA.94.063854] P. Camelin, J. Javaloyes, M. Marconi, and M. Giudici, Phys. Rev. A 94, 063854 (2016).
[Akhmediev1997] N. Akhmediev and A. Ankiewicz, Solitons, Nonlinear Pulses and Beams (Chapman and Hall, London, 1997).
[akhmediev98] N. Akhmediev, A. Ankiewicz, and J. M. Soto-Crespo, J. Opt. Soc. Am. B 15, 515 (1998).
[Grelu:02] P. Grelu, F. Belhache, F. Gutty, and J.-M. Soto-Crespo, Opt. Lett. 27, 966 (2002).
[PhysRevA.66.033806] D. Y. Tang, B. Zhao, D. Y. Shen, C. Lu, W. S. Man, and H. Y. Tam, Phys. Rev. A 66, 033806 (2002).
[Seong:02] N. H. Seong and D. Y. Kim, Opt. Lett. 27, 1321 (2002).
[Zhao:07] L. M. Zhao, D. Y. Tang, X. Wu, D. J. Lei, and S. C. Wen, Opt. Lett. 32, 3191 (2007).
[Purwins1998] C. P. Schenk, P. Schütz, M. Bode, and H.-G. Purwins, Phys. Rev. E 57, 6480 (1998).
[7274649] J. H. Lin, C. W. Chan, H. Y. Lee, and Y. H. Chen, IEEE Photonics J. 7, 1 (2015).
[PhysRevE.63.056607] A. G. Vladimirov, G. V. Khodova, and N. N. Rosanov, Phys. Rev. E 63, 056607 (2001).
[Ortac:10] B. Ortaç, A. Zaviyalov, C. K. Nielsen, O. Egorov, R. Iliew, J. Limpert, F. Lederer, and A. Tünnermann, Opt. Lett. 35, 1578 (2010).
[Wu20113615] X. Wu, D. Tang, X. Luan, and Q. Zhang, Opt. Commun. 284, 3615 (2011).
[Li2012] X. L. Li, S. M. Zhang, Y. C. Meng, Y. P. Hao, H. F. Li, J. Du, and Z. J. Yang, Laser Phys. 22, 774 (2012).
[Gui:13] L. Gui, X. Xiao, and C. Yang, J. Opt. Soc. Am. B 30, 158 (2013).
[Tsatourian2013] V. Tsatourian, S. V. Sergeyev, C. Mou, A. Rozhin, V. Mikhailov, B. Rabin, P. S. Westbrook, and S. K. Turitsyn, Sci. Rep. 3, 3154 (2013).
[Botez_book] D. Botez and D. R. Scifres, eds., Diode Laser Arrays, Cambridge Studies in Modern Optics No. 14 (Cambridge University Press, 2008).
[1063-7818-33-4-R01] A. F. Glova, Quantum Electron. 33, 283 (2003).
[Hagemeier:79] H. E. Hagemeier and S. R. Robinson, Appl. Opt. 18, 270 (1979).
[0038-5670-33-3-R03] V. V. Likhanskii and A. P. Napartovich, Sov. Phys. Usp. 33, 228 (1990).
[PhysRevA.46.4252] R.-d. Li and T. Erneux, Phys. Rev. A 46, 4252 (1992).
[PhysRevA.49.1301] R.-d. Li and T. Erneux, Phys. Rev. A 49, 1301 (1994).
[PhysRevLett.85.3809] G. Kozyreff, A. G. Vladimirov, and P. Mandel, Phys. Rev. Lett. 85, 3809 (2000).
[KVM01] G. Kozyreff, A. G. Vladimirov, and P. Mandel, Phys. Rev. E 64, 016613 (2001).
[Jechow09] A. Jechow, M. Lichtner, R. Menzel, M. Radziunas, D. Skoczowsky, and A. Vladimirov, Opt. Express 17, 19599 (2009).
[Lichtner12] M. Lichtner, V. Z. Tronciu, and A. G. Vladimirov, IEEE J. Quantum Electron. 48, 353 (2012).
[Kashchenko1998] S. A. Kashchenko, Comput. Math. Math. Phys. 38, 443 (1998).
[Giacomelli1996] G. Giacomelli and A. Politi, Phys. Rev. Lett. 76, 2686 (1996).
[Yanchuk2015a] S. Yanchuk, L. Lücken, M. Wolfrum, and A. Mielke, Discrete Contin. Dyn. Syst. A 35, 537 (2015).
[YanchukGiacomelli2017] S. Yanchuk and G. Giacomelli, J. Phys. A: Math. Theor. 50, 103001 (2017).
[PhysRevA.72.033808] A. G. Vladimirov and D. Turaev, Phys. Rev. A 72, 033808 (2005).
[Golubitsky1988] M. Golubitsky, I. Stewart, and D. G. Schaeffer, Singularities and Groups in Bifurcation Theory, Vol. II, Applied Mathematical Sciences Vol. 69 (Springer-Verlag, New York, 1988).
[Yanchuk2008a] S. Yanchuk and M. Wolfrum, Phys. Rev. E 77, 026212 (2008).
[DHuys2008] O. D'Huys, R. Vicente, T. Erneux, J. Danckaert, and I. Fischer, Chaos 18, 037116 (2008).
[pecora1998master] L. M. Pecora and T. L. Carroll, Phys. Rev. Lett. 80, 2109 (1998).
[PhysRevE.83.066202] N. Rebrova, G. Huyet, D. Rachinskii, and A. G. Vladimirov, Phys. Rev. E 83, 066202 (2011).
[Arkhipov:16] R. M. Arkhipov, T. Habruseva, A. Pimenov, M. Radziunas, S. P. Hegarty, G. Huyet, and A. G. Vladimirov, J. Opt. Soc. Am. B 33, 351 (2016).
[guo2013bifurcation] S. Guo and J. Wu, Bifurcation Theory of Functional Differential Equations, Applied Mathematical Sciences (Springer, New York, 2013).
[PhysRevE.75.045601] D. Turaev, A. G. Vladimirov, and S. Zelik, Phys. Rev. E 75, 045601 (2007).
[PhysRevA.44.6954] B. A. Malomed, Phys. Rev. A 44, 6954 (1991).
[PhysRevE.56.6020] V. V. Afanasjev, B. A. Malomed, and P. L. Chu, Phys. Rev. E 56, 6020 (1997).
[Engelborghs:2002:NBA:513001.513002] K. Engelborghs, T. Luzyanina, and D. Roose, ACM Trans. Math. Softw. 28, 1 (2002). | http://arxiv.org/abs/1706.08802v1 | {
"authors": [
"D. Puzyrev",
"A. G. Vladimirov",
"A. Pimenov",
"S. V. Gurevich",
"S. Yanchuk"
],
"categories": [
"nlin.PS",
"math.DS",
"physics.optics"
],
"primary_category": "nlin.PS",
"published": "20170627120502",
"title": "Bound pulse trains in arrays of coupled spatially extended dynamical systems"
} |
We consider uniform random permutations in proper substitution-closed classes and study their limiting behavior in the sense of permutons. The limit depends on the generating series of the simple permutations in the class. Under a mild sufficient condition, the limit is an elementary one-parameter deformation of the limit of uniform separable permutations, previously identified as the Brownian separable permuton. This limiting object is therefore in some sense universal. We identify two other regimes with different limiting objects. The first one is degenerate; the second one is nontrivial and related to stable trees. These results are obtained thanks to a characterization of the convergence of random permutons through the convergence of their expected pattern densities. The limit of expected pattern densities is then computed by using the substitution tree encoding of permutations and performing singularity analysis on the tree series.

§ INTRODUCTION

The aim of this paper is to study the asymptotic behavior of a permutation of large size, picked uniformly at random in a substitution-closed permutation class generated by a given (finite or infinite) family of simple permutations satisfying additional conditions.

§.§ Permutation classes and their limit

For any positive integer n, the set of permutations of [n] := {1,2,…,n} is denoted by 𝔖_n. We write permutations of 𝔖_n in one-line notation as σ = σ(1)σ(2)…σ(n). For a permutation σ in 𝔖_n, the size n of σ is denoted by |σ|. For σ ∈ 𝔖_n, and I ⊂ [n] of cardinality k, let pat_I(σ) be the permutation of 𝔖_k induced by {σ(i) : i ∈ I}. For example, for σ = 65831247 and I = {2,5,7} we have pat_{2,5,7}(65831247) = 312, since the values in the subsequence σ(2)σ(5)σ(7) = 514 are in the same relative order as in the permutation 312. A permutation π = pat_I(σ) is a pattern involved (or contained) in σ, and the subsequence (σ(i))_{i∈I} is an occurrence of π in σ. When a pattern π has no occurrence in σ, we say that σ avoids π. The pattern containment relation defines a partial order on 𝔖 = ∪_n 𝔖_n: we write π ≼ σ if π is a pattern of σ.

A permutation class is a family 𝒞 of permutations that is downward closed for ≼, i.e. for any σ ∈ 𝒞 and any pattern π ≼ σ, it holds that π ∈ 𝒞. For every set B of patterns, we denote by Av(B) the set of all permutations that avoid every pattern in B. Clearly, for all B, Av(B) is a permutation class. Conversely (see for instance <cit.>), every class 𝒞 of permutations can be defined by a set B of excluded patterns. Moreover, for any given 𝒞, we can define a unique such set B: it is enough to impose that B is chosen minimal (for set inclusion) among all B' such that 𝒞 = Av(B'). This B (which happens to be an antichain) is called the basis of the class 𝒞. The basis of a permutation class may be finite or infinite (see <cit.>).

One of the many ways permutation classes can be studied is by looking at the features of a typical large permutation σ in the class. A particularly interesting characteristic is the frequency of occurrence of a pattern π, especially when it is considered for all π simultaneously. Denote by occ(π,σ) the number of occurrences of a pattern π ∈ 𝔖_k in σ ∈ 𝔖_n and by \widetilde{occ}(π,σ) the pattern density of π in σ. More formally,

occ(π,σ) = card{ I ⊂ [n] of cardinality k such that pat_I(σ) = π } ,
\widetilde{occ}(π,σ) = occ(π,σ) / \binom{n}{k} = ℙ( pat_I(σ) = π ) ,

where I is randomly and uniformly chosen among the \binom{n}{k} subsets of [n] with k elements.

The study of the asymptotics of \widetilde{occ}(π,σ_n), where σ_n is a uniform random permutation of size n in a permutation class 𝒞 and π ∈ 𝔖 is a fixed pattern, has been carried out in several cases.
* The behavior of 𝔼[\widetilde{occ}(π,σ_n)] for various classes 𝒞 and fixed π was investigated by Bóna <cit.>, Homberger <cit.>, Chang, Eu and Fu <cit.> and Rudolf <cit.>.
* Janson, Nakamura and Zeilberger <cit.> considered higher moments and joint moments, rigorously when 𝒞 = 𝔖, and also empirically for various classes 𝒞. A bit later, Janson <cit.> gave for 𝒞 = Av(132) the joint limit in distribution of the random variables \widetilde{occ}(π,σ_n), properly rescaled so as to yield a nontrivial limit.

A parallel line of work consists in studying the asymptotic shape of the diagram of σ_n. The diagram of a permutation σ ∈ 𝔖_n is the set of points {(i,σ(i)), 1 ≤ i ≤ n} in the Cartesian plane. To gain understanding of typical large permutations in 𝒞, one can investigate the geometric properties of the diagram of σ_n as n → ∞, possibly after rescaling this diagram so that it fits into a unit square.
* Madras and Liu <cit.>, Atapour and Madras <cit.> and Madras and Pehlivan <cit.> considered the asymptotic shape of σ_n when 𝒞 = Av(τ) for small patterns τ.
* In parallel, Miner and Pak <cit.> described very precisely the asymptotic shape of σ_n when 𝒞 = Av(τ) for the 6 patterns τ in 𝔖_3. These shapes are related to the Brownian excursion, as explained by Hoffman, Rizzolo and Slivken <cit.>.
* Bevan describes the limit shape of permutations in so-called connected monotone grid classes <cit.>.

These two points of view may seem different, but they are in fact tightly bound together. Indeed, as we shall see in Section <ref>, it follows from results of <cit.> that the convergence of pattern densities characterizes the convergence of the diagrams, seen as permutons. This important property was actually the main motivation for the introduction of permutons in <cit.>.
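These definitions translate directly into code. The short Python helper below (ours; indices are 0-based, whereas the text uses 1-based one-line notation) extracts the pattern pat_I(σ) and computes the pattern density \widetilde{occ}(π,σ), either exactly or by the Monte Carlo estimate suggested by its probabilistic form:

import itertools, random

def pat(sigma, I):
    # Pattern of sigma induced on the index set I (0-based indices here).
    vals = [sigma[i] for i in sorted(I)]
    order = sorted(range(len(vals)), key=lambda j: vals[j])
    out = [0]*len(vals)
    for rank, j in enumerate(order, start=1):
        out[j] = rank
    return tuple(out)

def occ_density(pi, sigma, samples=None):
    # Pattern density: exact average over all C(n,k) subsets, or Monte Carlo.
    n, k = len(sigma), len(pi)
    if samples is None:
        hits = total = 0
        for I in itertools.combinations(range(n), k):
            hits += (pat(sigma, I) == tuple(pi))
            total += 1
        return hits/total
    hits = sum(pat(sigma, random.sample(range(n), k)) == tuple(pi)
               for _ in range(samples))
    return hits/samples

sigma = (6, 5, 8, 3, 1, 2, 4, 7)                 # sigma = 65831247 from the text
print(pat(sigma, [1, 4, 6]))                     # I = {2,5,7} (1-based) -> (3, 1, 2)
print(occ_density((3, 1, 2), sigma))             # exact density of the pattern 312
print(occ_density((3, 1, 2), sigma, samples=20000))  # Monte Carlo estimate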
More formally, occ(π,σ) = card{ I ⊂ [n] of cardinality k such that _I(σ)=π } and (π,σ) = occ(π,σ)/\binom{n}{k} = ℙ(_I(σ)=π), where I is randomly and uniformly chosen among the \binom{n}{k} subsets of [n] with k elements. The study of the asymptotics of (π,σ_n), where σ_n is a uniform random permutation of size n in a permutation class 𝒞 and π∈𝔖 is a fixed pattern, has been carried out in several cases. * The behavior of 𝔼[(π,σ_n)] for various classes 𝒞 and fixed π was investigated by Bóna <cit.>, Homberger <cit.>, Chang, Eu and Fu <cit.> and Rudolf <cit.>. * Janson, Nakamura and Zeilberger <cit.> considered higher moments and joint moments, rigorously when 𝒞 = 𝔖, and also empirically for various classes 𝒞. A bit later, Janson <cit.> gave for 𝒞 = (132) the joint limit in distribution of the random variables (π,σ_n), properly rescaled so as to yield a nontrivial limit. A parallel line of work consists in studying the asymptotic shape of the diagram of σ_n. The diagram of a permutation σ∈𝔖_n is the set of points {(i,σ(i)),1≤ i ≤ n} in the Cartesian plane. To gain understanding of typical large permutations in 𝒞, one can investigate the geometric properties of the diagram of σ_n as n→∞, possibly after rescaling this diagram so that it fits into a unit square. * Madras and Liu <cit.>, Atapour and Madras <cit.> and Madras and Pehlivan <cit.> considered the asymptotic shape of σ_n when 𝒞 = Av(τ) for small patterns τ. * In parallel, Miner and Pak <cit.> described very precisely the asymptotic shape of σ_n, when 𝒞 = Av(τ) for the 6 patterns τ in 𝔖_3. These shapes are related to the Brownian excursion, as explained by Hoffman, Rizzolo and Slivken <cit.>. * Bevan describes the limit shape of permutations in so-called connected monotone grid classes <cit.>. These two points of view may seem different, but they are in fact tightly bound together. Indeed, as we shall see in Section <ref>, it follows from results of <cit.> that the convergence of pattern densities characterizes the convergence of the diagrams, seen as permutons. This important property was actually the main motivation for the introduction of permutons in <cit.>. §.§ The permuton viewpoint A permuton is a probability measure on the unit square [0,1]^2 with uniform marginals, i.e. its pushforwards by the projections on the axes are both the Lebesgue measure on [0,1]. Permutons generalize permutation diagrams in the following sense: to every permutation σ∈𝔖_n, we associate the permuton μ_σ with density μ_σ(dx dy) = n 1_{σ(⌈ xn ⌉) = ⌈ yn ⌉} dx dy. Note that it amounts to replacing every point (i,σ(i)) in the diagram of σ (normalized to the unit square) by a square of the form [(i-1)/n,i/n]× [(σ(i)-1)/n,σ(i)/n], which has mass 1/n uniformly distributed. Permutons were first considered by Hoppen, Kohayakawa, Moreira, Rath and Sampaio in <cit.>, with the point of view of characterizing permutation sequences with convergent pattern densities. The name permuton and the measure point of view were given afterwards by Glebov, Grzesik, Klimošová and Král <cit.>.
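Example. The definitions of _I(σ), occ(π,σ) and (π,σ) given above are straightforward to implement; here is a minimal Python sketch (the helper names pattern and occ are ours, and permutations are encoded in one-line notation as tuples of values 1,…,n):

```python
from itertools import combinations
from math import comb

def pattern(sigma, I):
    # pat_I(sigma): the permutation induced by the values sigma(i), i in I
    vals = [sigma[i - 1] for i in sorted(I)]   # positions in I are 1-based
    ranks = {v: r + 1 for r, v in enumerate(sorted(vals))}
    return tuple(ranks[v] for v in vals)

def occ(pi, sigma):
    # number of occurrences of the pattern pi in sigma (brute force)
    k, n = len(pi), len(sigma)
    return sum(pattern(sigma, I) == pi for I in combinations(range(1, n + 1), k))

sigma = (6, 5, 8, 3, 1, 2, 4, 7)              # the permutation 65831247
print(pattern(sigma, {2, 5, 7}))              # (3, 1, 2), as in the example above
print(occ((3, 1, 2), sigma) / comb(8, 3))     # the pattern density of 312 in sigma
```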
This recently introduced concept has already been the subject of many articles, including: * results on the set of possible pattern densities of permutons <cit.>; * a large deviation principle in the space of permutons, giving access to the analysis of random permutations with fixed pattern densities <cit.>; * the description of the limiting distribution of the number of fixed points (and more generally of cycles of a given length) for “equi-continuous” sequences of permutations with a limiting permuton <cit.>; * central limit theorems and refinements for pattern occurrences in random permutation models associated to permutons <cit.>; * the permuton convergence of some exponentially tilted models of random permutations <cit.>; * a study of permuton-valued processes, in the context of random sorting networks <cit.>. In the context of this article, the theory of permutons is a nice framework to state scaling limit results for sequences of (random) permutations. Indeed, the space ℳ of permutons is equipped with the topology of weak convergence of measures, which makes it a compact metric space. This allows one to define convergent sequences of permutations: we say that (σ_n)_n converges to μ when (μ_σ_n) →μ weakly. Accordingly, for a sequence (σ_n)_n of random permutations, we will consider the convergence in distribution of the associated random measures (μ_σ_n) in the weak topology. The limiting object is then a random permuton. By definition, convergence to a permuton encodes the first-order asymptotics of the shape of a sequence of permutations. As we shall see in Section <ref>, it also encodes the first-order asymptotics of pattern densities: a sequence (σ_n)_n of random permutations converges in distribution to a random permuton if and only if the sequences of random variables ((π,σ_n))_n converge in distribution, jointly for all π∈𝔖. Moreover, for any pattern π, the limit distribution of the density of π can be expressed as a function of the limit permuton. Our previous article <cit.> studies the limit of the class 𝒞 = (2413,3142) of separable permutations, in terms of pattern densities and permutons. Let σ_n be a uniform random separable permutation of size n. There exists a random permuton μ, called the Brownian separable permuton, such that (μ_σ_n)_n converges in distribution to μ. The result of <cit.> is more precise, and describes the asymptotic joint distribution of the random variables (π,σ_n) as a measurable functional of a signed Brownian excursion. This object is a normalized Brownian excursion whose strict local minima are decorated with an i.i.d. sequence of balanced signs in {+,-}. In another paper <cit.>, the fifth author gives a direct construction of μ from this signed Brownian excursion.
In particular, μ is not equal almost surely to a given permuton; in this regard, separable permutations behave differently from all other classes analysed so far in the literature (which converge to a deterministic permuton). The class of separable permutations is the smallest nontrivial substitution-closed class, as defined in the next subsection. The present paper aims at showing a convergence result similar to Theorem <ref> for other substitution-closed classes. We will see that in many cases the limit belongs to a one-parameter family of deformations of the Brownian separable permuton: the biased Brownian separable permuton μ^(p) of parameter p∈ (0,1) is obtained from a biased signed Brownian excursion (defined similarly to the signed Brownian excursion but with each sign having probability p of being a +). Simulations of the biased Brownian separable permuton are given in <ref>. A precise definition will be given in <ref> (<ref>). Finally, we mention that although permutons are a very nice, natural, and powerful way of studying “limits of permutation classes” (in particular because they unify many earlier results, as explained above), this approach has its weaknesses. Most importantly, it gives no information beyond the first order. For instance, the results of <cit.> describe the canoe shape of large permutations in classes avoiding one pattern of length three. As the width of the canoe is o(n), one only sees the diagonal (or antidiagonal) in the permuton limit, but has no information about the fluctuations around this limit. §.§ Substitution-closed classes Let θ=θ(1)⋯θ(d) be a permutation of size d, and let π^(1),…,π^(d) be d other permutations. The substitution of π^(1),…,π^(d) in θ is the permutation of size |π^(1)|+ … +|π^(d)| obtained by replacing each θ(i) by a sequence of integers isomorphic to π^(i) while keeping the relative order induced by θ between these subsequences. This permutation is denoted by θ[π^(1),…,π^(d)]. We sometimes refer to θ as the skeleton of the substitution. When θ is 12 … k (resp. k… 21), for any value of k ≥ 2, we rather write ⊕ (resp. ⊖) instead of θ. Note that the specific value of k does not appear in this notation, but can be recovered by counting the number of permutations π^(i) which are substituted in ⊕ (resp. ⊖). Examples of substitution (see <ref> below) are conveniently presented by representing permutations by their diagrams: the diagram of θ[π^(1),…,π^(d)] is obtained by blowing up each point θ_i of θ onto a square containing the diagram of π^(i). By definition of permutation classes, if θ[π^(1),…,π^(d)] ∈𝒞 for some permutation class 𝒞, then θ, π^(1),…,π^(d)∈𝒞. The converse is not always true. A permutation class 𝒞 is substitution-closed if, for every θ, π^(1),…,π^(d) in 𝒞, θ[π^(1),…,π^(d)] ∈𝒞. The focus of this paper will be substitution-closed classes. To study such classes it is essential to observe that any permutation has a canonical decomposition using substitutions, which can be encoded in a tree. This decomposition is canonical in the same sense as the decomposition of integers into products of primes. In this analogy, simple permutations play the role of prime numbers and the substitution plays the role of the product. We first give a simple definition: a permutation σ is ⊕-indecomposable (resp. ⊖-indecomposable) if it cannot be written as ⊕[π^(1),π^(2)] (resp. ⊖[π^(1),π^(2)]) (or equivalently, if there is no d such that σ can be written as ⊕[π^(1),…,π^(d)] (resp.
⊖[π^(1),…,π^(d)])). A simple permutation is a permutation of size n > 2 that does not map any nontrivial interval (i.e. a range in [n] containing at least two and at most n-1 elements) onto an interval. For instance, 451326 is not simple as it maps [3;5] onto [1;3]. The smallest simple permutations are 2413 and 3142 (there is no simple permutation of size 3). Remark: Usually in the literature, the definition of a simple permutation requires only n ≥ 2 and not n > 2, so that 12 and 21 are considered to be simple. However, in our work, 12 and 21 do not play the same role as the other simple permutations, which is why we do not consider them to be simple. Every permutation σ of size n≥ 2 can be uniquely decomposed as either: * α[π^(1),…,π^(d)], where α is simple (of size d≥ 4), * ⊕[π^(1),…,π^(d)], where d≥ 2 and π^(1),…,π^(d) are ⊕-indecomposable, * ⊖[π^(1),…,π^(d)], where d≥ 2 and π^(1),…,π^(d) are ⊖-indecomposable. This decomposition theorem can be applied recursively inside the permutations π^(i) appearing in the items above, until we reach permutations of size 1. Doing so, a permutation σ can be naturally encoded by a rooted planar tree, whose internal nodes are labeled by the skeletons of the substitutions that are considered along the recursive decomposition process, and whose leaves correspond to the elements of σ. This construction provides a one-to-one correspondence between permutations and canonical trees (defined below) that maps the size to the number of leaves. A canonical tree is a rooted planar tree whose internal nodes carry labels satisfying the following constraints. * Internal nodes are labeled by ⊕, ⊖, or by a simple permutation. * A node labeled by α has degree[Throughout the paper, by degree of a node in a tree, we mean the number of its children (which is sometimes called arity in other works). Note that it is different from the graph-degree: for us, the edge to the parent (if it exists) is not counted in the degree.] |α|, nodes labeled by ⊕ and ⊖ have degree at least 2. * A child of a node labeled by ⊕ (resp. ⊖) cannot be labeled by ⊕ (resp. ⊖). Canonical trees are known in the literature under several names: decomposition trees, substitution trees,… We choose the term canonical because we consider many variants of substitution trees in this paper, but only these canonical ones provide a one-to-one correspondence with permutations. The representation of permutations by their canonical trees is essential in the study of substitution-closed classes. The reason is that, for any such class 𝒞, the set of canonical trees of permutations in 𝒞 can be easily described. Let 𝒞 be a substitution-closed permutation class, and assume[Otherwise, 𝒞 = {12… k : k ≥ 1 } or 𝒞 = {k… 21 : k ≥ 1 } or 𝒞 = {1} and these cases are trivial.] that 12,21 ∈𝒞. Denote by 𝒮 the set of simple permutations in 𝒞. The set of canonical trees encoding permutations of 𝒞 is the set of all canonical trees built on the set of nodes {⊕, ⊖}∪{α : α∈𝒮}.
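Before turning to the proof, let us illustrate the basic notions in code: the following Python sketch (helper names ours, same tuple encoding as in the earlier sketch) implements the substitution operation and the simplicity test, and checks the examples given above.

```python
def substitute(theta, blocks):
    # theta[pi^(1), ..., pi^(d)]: replace each theta(i) by a block of integers
    # order-isomorphic to pi^(i), keeping the relative order prescribed by theta
    d = len(theta)
    pos = [sum(len(b) for b in blocks[:i]) for i in range(d)]
    val = [sum(len(blocks[j]) for j in range(d) if theta[j] < theta[i])
           for i in range(d)]
    out = [0] * sum(len(b) for b in blocks)
    for i, block in enumerate(blocks):
        for p, v in enumerate(block):
            out[pos[i] + p] = val[i] + v
    return tuple(out)

def is_simple(sigma):
    # simple: size > 2 and no nontrivial interval of positions maps onto an interval
    n = len(sigma)
    if n <= 2:
        return False          # by our convention, 12 and 21 are not simple
    for a in range(n):
        for b in range(a + 1, n):
            if (a, b) != (0, n - 1):
                w = sigma[a:b + 1]
                if max(w) - min(w) == b - a:   # window is an interval of values
                    return False
    return True

assert substitute((2, 1), ((1, 2), (1,))) == (2, 3, 1)      # 21[12, 1] = 231
assert is_simple((2, 4, 1, 3)) and is_simple((3, 1, 4, 2))  # 2413 and 3142
assert not is_simple((4, 5, 1, 3, 2, 6))                    # 451326 maps [3;5] onto [1;3]
```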
First, if a canonical tree contains a node labeled by a simple permutation α∉𝒮, then the corresponding permutation σ contains the pattern α∉𝒞, and hence σ∉𝒞. Second, by induction, all canonical trees built on {⊕, ⊖}∪{α : α∈𝒮} encode permutations of 𝒞, because 𝒞 is substitution-closed. If necessary, details can be found in <cit.>. For instance, the class (2413,3142) of separable permutations studied in <cit.> corresponds to the set of all canonical trees built on {⊕, ⊖}, i.e., to 𝒮=∅. It is therefore the smallest nontrivial substitution-closed class. Let 𝒞 be any substitution-closed permutation class, and let 𝒮 be the set of simple permutations in 𝒞. Because 𝒞 is a class, it holds that for all α∈𝒮, if α' is a simple permutation such that α' ≼α, then α' ∈𝒮. Whenever a set 𝒮 of simple permutations satisfies this property, we say that 𝒮 is downward-closed (implicitly: for ≼ and among the set of simple permutations). Thanks to their encoding by families of trees, it can be proved that substitution-closed permutation classes (possibly satisfying additional constraints) share a common behavior. For example, the canonical tree representation of their elements implies that all substitution-closed classes with finitely many simple permutations have an algebraic generating function <cit.>. (This is actually easy, the main contribution of <cit.> being to generalize this algebraicity result to all classes containing a finite number of simple permutations, again using canonical trees as a key tool.) Our work illustrates this universality paradigm in probability theory: we prove that the biased Brownian separable permuton is the limiting permuton of many substitution-closed classes (see Theorem <ref> and to a lesser extent Theorem <ref>). §.§ Our results: Universality Let 𝒮 be a (finite or infinite) set of simple permutations. We denote by ⟨𝒮⟩_n the set of permutations of size n whose canonical trees use only nodes ⊕, ⊖ and α∈𝒮, and we define ⟨𝒮⟩ = ∪_n ⟨𝒮⟩_n. From Proposition <ref> and Observation <ref>, every substitution-closed permutation class 𝒞 containing 12 and 21 can be written as 𝒞 = ⟨𝒮⟩ for a downward-closed set 𝒮 of simple permutations (which is just the set of simple permutations in 𝒞). For a generic (not necessarily downward-closed) set 𝒮 of simple permutations, ⟨𝒮⟩ is a family of permutations more general than a substitution-closed permutation class. The results that we obtain apply not only to permutation classes but also to such sets of permutations. Note however that our work does not consider substitution-closed sets of permutations not containing either 12 or 21 (as mentioned above, a permutation class not containing one of these two permutations is necessarily trivial, but there might be interesting such substitution-closed sets). In principle, such sets of permutations could also be studied by the approach developed in this paper, but we prefer to leave such cases outside of our study. Indeed, covering them would require redoing all the computations, modifying the combinatorial equations that we start from (see Proposition <ref> p. Prop:systeme1) and all equations that follow, so as not to allow the nodes labeled ⊕ and/or ⊖. We are interested in the asymptotic behavior of a uniform permutation σ_n in ⟨𝒮⟩_n, which we describe in terms of permutons. Let S(z)=∑_α∈𝒮 z^|α| be the generating function of 𝒮 and let R_S∈ [0,+∞] be the radius of convergence of S. Let 𝒮 be a set of simple permutations such that R_S > 0 and lim_{r→ R_S, r < R_S} S'(r) > 2/(1+R_S)^2 - 1. (H1)
For every n≥ 1, let σ_n be a uniform permutation in ⟨𝒮⟩_n, and let μ_σ_n be the random permuton associated with σ_n. The sequence (μ_σ_n)_n tends in distribution in the weak convergence topology to the biased Brownian separable permuton μ^(p) whose parameter p is given in (<ref>) p. eq:p_plus_p_moins. An important point in <ref> is that the limiting object depends on 𝒮 only through the parameter p. It turns out that p only depends on the number of occurrences of the patterns 12 and 21 in the elements of 𝒮. We illustrate this universality of the limiting object in <ref>, by showing large uniform random permutations in two different substitution-closed classes: the first one has a finite set of simple permutations 𝒮 = {2413,3142,24153,42513}, while the second is the substitution closure of (321), which contains infinitely many simple permutations and satisfies (<ref>) (as we will explain below). Although this is hard to see in the picture, the corresponding values of the bias parameter are different, namely 0.5 and around 0.6 respectively (see <ref>, <ref>). In the following, to lighten the notation, we write S'(R_S) := lim_{r→ R_S, r < R_S} S'(r). Note that S'(R_S) may be ∞. The case when Condition (<ref>) of <ref> is not satisfied is discussed in the next section. When Condition (<ref>) is satisfied, the case is called standard because there are natural and easy sufficient conditions ensuring it (given below). Moreover, this case includes most sets 𝒮 studied so far in the literature on permutation classes, to our knowledge. This gives a fairly precise (and positive) answer to an important question raised in our previous article <cit.>: is the Brownian separable permuton universal (in the sense that it describes the limit of a large family of substitution-closed classes)? We now give several cases in which Condition (<ref>) of <ref> is satisfied. * If S is a generating function with radius of convergence R_S > √(2)-1, (<ref>) is satisfied. Indeed, the condition R_S > √(2)-1 implies 2/(1+R_S)^2 - 1 < 0, and S'(R_S) is nonnegative since S' (like S) is a series with nonnegative coefficients. In particular, the situation where R_S > √(2)-1 covers the cases where there are finitely many simple permutations in the class (then S is a polynomial and R_S=∞), and more generally where R_S=1 (i.e. the number of simple permutations of size n grows subexponentially). * If S' is divergent at R_S, (<ref>) is trivially verified. In particular, this happens when S is a rational generating function, or when S has a square root singularity at R_S. In the literature, there are quite a few examples of permutation classes whose set 𝒮 of simple permutations has been enumerated. We can therefore ask whether Condition (<ref>) applies to them. In most examples we could find, it is indeed satisfied, and this follows from the discussion above. We record these examples here. * Classes with finitely many simple permutations have attracted a fair amount of attention, see <cit.> and subsequently <cit.>. * Several families of simple permutations with a bounded number of elements of each size have appeared in the literature: the family of exceptional simple permutations (also called simple parallel alternations in <cit.>), the family of wedge simple permutations (see also <cit.>), the families of oscillations and quasi-oscillations (see <cit.>), and the families of simple permutations contained in the following three classes: (4213,3142), (4213,1342) and (4213,3124) – see <cit.>.
* The family of simple pin-permutations has a rational generating function – see <cit.>. * The generating function S is also rational when 𝒮 is the set of simple permutations contained in several permutation classes defined by the avoidance of two patterns of size 4, namely (3124,4312) – see <cit.>, (2143,4312) and (1324,4312) – see <cit.>, (2143,4231) – see <cit.>, (1324,4231) – see <cit.>, (4312,3142) and (4231,3124) – see <cit.>. * The generating function of the set 𝒮 of simple permutations of the class (4231, 35142, 42513, 351624), enumerated in <cit.>, is also rational. * We come back to the above example, where the class is the substitution closure of Av(321). This class has been studied in <cit.>, where an explicit basis of avoided patterns is given. In this case, 𝒮 is the set of simple permutations avoiding 321, whose generating function S is computed in <cit.>: it has a square-root singularity at R_S = 1/3, which proves that (<ref>) is fulfilled. In addition to verifying Condition (<ref>), we have computed the numerical value of the parameter p for some of the above-mentioned sets 𝒮 of simple permutations; see <ref> (p. ex:calculp1). Notably absent from the above list is the class (2413), enumerated in <cit.>. Since the avoided pattern, 2413, is simple, this class is substitution-closed. Its generating series behaves as C (ρ-z)^3/2 around its dominant singularity ρ=1/8. This prevents the set of simple permutations in this class from satisfying Condition (<ref>); compare with <ref>. §.§ Our results: Beyond universality When R_S > 0, for the two remaining cases S'(R_S)<2/(1+R_S)^2-1 and S'(R_S)=2/(1+R_S)^2-1, the asymptotic behavior of μ_σ_n is qualitatively different, and the results require slight additional hypotheses and notation. As a consequence, for the moment we only briefly describe these behaviors, the results being stated with full rigor later. * Case S'(R_S)<2/(1+R_S)^2-1. This is a degenerate case. We first show in <ref> that, with a small additional assumption which will be called (CS), the sequence (μ_σ_n) of random permutons converges. If uniform simple permutations in 𝒮∩𝔖_n have a limit (in the sense of permutons), we show that the limit of permutations in ⟨𝒮⟩ is the same (see <ref> and the subsequent comment). This explains the terminology “degenerate”: all permutations in the class (or set) ⟨𝒮⟩ are close to the simple ones, and the “composite” structure of permutations does not appear in the limit. * Case S'(R_S)=2/(1+R_S)^2-1. This critical case is more subtle. We again need to assume the above-mentioned hypothesis (CS). According to the behavior of S near R_S, the limiting permuton of (μ_σ_n) can either be the (biased) Brownian separable permuton (<ref>) or belong to a new family of stable permutons (<ref>). Finite substructures of stable permutons are connected to those of the random stable tree (see <cit.>), which explains the terminology. Two simulations are presented in <ref>. We believe that the above-mentioned class (2413) belongs to the degenerate regime. Indeed, in the critical regime, the singularity exponent of the class should be smaller than 1, and cannot be 3/2, as for (2413). Since there is no direct description of the simple permutations in (2413), it seems however out of reach to prove our hypothesis (CS) for this specific class. We are therefore unable to describe its limiting permuton and leave this open for further research. We refer to <cit.> for simulations of uniform random permutations in this class. The variety of behaviors that we observe can be informally understood in terms of trees.
We have seen in <ref> that permutations in ⟨𝒮⟩ can be encoded by trees. Taking a uniform element in ⟨𝒮⟩_n, we can prove that the corresponding tree is a multi-type Galton-Watson tree conditioned on having n leaves. This link with conditioned Galton-Watson trees is not used in this paper, but may give intuition on our results. In the standard and critical case, these Galton-Watson trees are critical. It is therefore not surprising to see two different limiting behaviors. When the reproduction law has finite variance, we get one behavior related to the Brownian excursion and the Brownian continuum random tree (that we describe as the universal one). When the reproduction law has infinite variance, we get the behavior related to stable trees. In the standard case, the reproduction law always has finite variance. On the contrary, in the degenerate case, the underlying Galton-Watson tree model is subcritical. In the limit, such trees conditioned to be large have one internal node of very high degree (<cit.>). This node corresponds to a large simple permutation in the tree encoding a uniform random permutation σ_n in ⟨𝒮⟩. It is therefore not surprising that σ_n is asymptotically close to a uniform simple permutation in ⟨𝒮⟩. The reader may have noticed that all cases where we describe the asymptotic behavior of μ_σ_n are such that R_S > 0. Observe that it is always the case for proper permutation classes (i.e., permutation classes different from 𝔖). Indeed, from the Marcus-Tardos Theorem <cit.>, the number of permutations of size n in a proper class is at most c^n, for some constant c. For the class 𝔖, we however do have R_S = 0, since there are asymptotically e^-2 n! (1+𝒪(1/n)) simple permutations of size n <cit.>. In this case, the sequence (μ_σ_n)_n of permutons associated with a uniform permutation σ_n in 𝔖 converges in distribution to the uniform measure on [0,1]^2. The situation where R_S = 0 may happen as well for sets ⟨𝒮⟩ where 𝒮 is not downward-closed, but we leave these cases open. §.§ Limits of proportions of pattern occurrences Let us change our approach and discuss in this section the asymptotic behavior (as n→∞) of the proportion (π,σ_n) of occurrences of a fixed pattern π in σ_n (as done in <cit.> for uniform random permutations in various classes). Since most examples fit in that regime, we focus here on the standard case (when (<ref>) is satisfied). As mentioned in <ref> and explained in more detail in <ref>, the convergence of μ_σ_n towards μ^(p) implies the (joint) convergence in distribution ((π,σ_n))_π → ((π,μ^(p)))_π. The limiting random variables (π,μ^(p)) have been studied in <cit.> (for p=0.5): in particular, (π,μ^(p)) is non-deterministic if and only if π is separable of size at least 2, and it is possible to compute their moments algorithmically. These results are easily extended to the general case p ∈ (0,1). Therefore, for separable patterns π, (<ref>) establishes the convergence of (π,σ_n) to a non-deterministic limit. Since these are bounded variables, their (joint) moments also converge to the (joint) moments of the limiting vector, which can be computed algorithmically (even if in practice only low order moments can be effectively computed; see the discussion in <cit.>). Note that all these limiting moments are trivially nonzero, since these are moments of nondeterministic nonnegative random variables. For nonseparable patterns however, the situation is different: (<ref>) only entails the convergence of (π,σ_n) to 0.
Indeed, if π is nonseparable, the limiting quantity (π,μ^(p)) is identically 0 (this is a consequence of <cit.> when p=0.5, the result being easily extended to p ∈ (0,1)). We can go further and ask whether ((π,σ_n))_n has a limit in distribution with some appropriate normalization. We therefore investigate the moments of (π,σ_n). In <ref>, we define a permutation statistic (π) (see <ref>) and show that under the hypothesis (H1) we have the following asymptotic behavior[We say that the sequence (a_n) behaves as Θ(b_n) if there are c,C>0 such that c |b_n|≤ |a_n|≤ C |b_n| for every n≥ 1.]. Proposition <ref>. For each π∈𝔖 and m≥ 1, we have 𝔼[((π,σ_n))^m] = Θ(n^{-(π)/2}). Proposition <ref> also holds for separable patterns π: in that case (π)=0 and we have 𝔼[((π,σ_n))^m] = Θ(1). No news here, since the moments of (π,σ_n) have nonzero limits, as previously explained. For nonseparable patterns, (π) is positive and measures in some sense how nonseparable π is. Note that the order of magnitude of 𝔼[((π,σ_n))^m] is independent of m, which implies that there is a set of probability Θ(n^{-(π)/2}) on which the variables (π,σ_n) stay bounded away from 0 (see <ref>). This event of small probability contributes to the asymptotic behavior of the moments, and thus the method of moments is inappropriate for finding a limiting distribution for some appropriate normalization of (π,σ_n). Finding such a limiting distribution is therefore left as an open question. §.§ Outline of the proof In our previous paper <cit.> (i.e. when the family of simple permutations is 𝒮=∅), the proof of the convergence to the Brownian separable permuton strongly relied on a connection to Galton-Watson trees conditioned on having a given number of leaves. This allowed us to use fine results by Kortchemski <cit.> or Pitman and Rizzolo <cit.> on such conditioned random tree models. For a general family 𝒮, generalizing this approach would require delicate results on the asymptotic behavior of conditioned multitype Galton-Watson trees. Moreover, there are several other steps in the main proofs of <cit.>, in particular the subtree exchangeability argument, that are not easily adapted. The strategy developed in the present paper is different. We strongly use the framework of permutons. Indeed, we first show that to establish the convergence in distribution of (μ_σ_n)_n to some random permuton μ, it is enough to prove the convergence of (𝔼[(π,σ_n)])_n for every pattern π (see <ref>). By definition, if π∈𝔖_k and n≥ k, 𝔼[(π,σ_n)] = #{σ∈⟨𝒮⟩_n, I ⊂ [n] : _I(σ) = π} / (\binom{n}{k} · #⟨𝒮⟩_n). The asymptotic behavior of the numerator and denominator is then obtained with analytic combinatorics, which allows us to transfer from the behavior of a generating series near its singularity to the asymptotic behavior of its coefficients. This goes in three steps. Step 1: Enumeration. We compute (or characterize by an implicit equation) some generating series. For instance to estimate the denominator of (<ref>) we consider ∑_{n≥ 1} #⟨𝒮⟩_n z^n. We readily use the size-preserving bijection between ⟨𝒮⟩ and the class 𝒯 of 𝒮-canonical trees, counted by the number of leaves. Hence the generating function we want to compute is the same as that of 𝒯, denoted T. Using again the encoding of permutations by trees, the numerator can be described as a number of trees with marked leaves and some conditions on the tree induced by these marked leaves.
Obtaining generating functions for such combinatorial classes is possible, and needs the introduction of several intermediary functions which count trees with various constraints, and possibly one marked leaf. This is detailed in <ref>. Step 2: Singularity analysis. Then we want to know the singular behavior of the generating functions we computed so far. As it turns out, the singular behavior of some intermediate function T_⊖̄ drives the singular behavior of all the other series. The function T_⊖̄ is characterized by the implicit equation T_⊖̄(z) = z + Λ(T_⊖̄(z)), where Λ is a known analytic function with radius of convergence R_Λ that involves S and some rational functions (see <ref> p. eq:DefLambda). Hence the behavior of T_⊖̄ depends on whether there is a point inside the disk of convergence D(0,R_Λ) where Λ'=1, because around such a critical point, the equation (<ref>) is not invertible. Since Λ is a series with positive integer coefficients, it suffices to check the sign of Λ'(R_Λ) - 1, which can easily be translated in terms of the function S. This is where the sign of S'(R_S) - 2/(1+R_S)^2 + 1 appears, leading to the three different cases. More precisely[In this informal description, we left out some conditions on the singularity of S that appear in the critical and degenerate cases.] * The standard case S'(R_S)>2/(1+R_S)^2-1 is equivalent to Λ'(R_Λ) > 1. In this case there is a unique critical point τ∈ (0,R_Λ). As a result, the radius of convergence of T_⊖̄ is ρ = τ - Λ(τ), and the analyticity of Λ around τ = T_⊖̄(ρ) implies that T_⊖̄ has a singularity of exponent 1/2 (<ref>). Such a behavior is sometimes called a branch point in the literature: Λ is analytic at τ but the equation (<ref>) has two solutions (called branches) near ρ and one cannot find an analytic solution in a neighbourhood of ρ. The solution T_⊖̄ is therefore singular at ρ. * The degenerate case S'(R_S)<2/(1+R_S)^2-1 is equivalent to Λ'(R_Λ) < 1. In this case there is no critical point in the disk D(0,R_Λ) nor at its boundary. As a result, the unique dominant singularity of T_⊖̄ is the point ρ = R_Λ - Λ(R_Λ) where T_⊖̄(ρ) reaches the singularity R_Λ of Λ. Moreover, T_⊖̄ has a bounded derivative at its singularity, and so has exponent δ > 1, which is the same as the exponent of S (<ref>). * The critical case S'(R_S)=2/(1+R_S)^2-1 is equivalent to Λ'(R_Λ) = 1. In this case there is no critical point inside the disk D(0,R_Λ), but the singularity R_Λ of Λ is a critical point. Once again the radius of convergence of T_⊖̄ is ρ = R_Λ - Λ(R_Λ), but T_⊖̄ has no first derivative at its singularity. Here the exponent of the singularity of T_⊖̄ depends on that of the singularity of S, and belongs to [1/2,1) (see <ref>). Once we have found the asymptotic behavior of T_⊖̄, we should analyze the tree series found in Step 1. It is purely routine from an analytic point of view, but involves some combinatorial arguments regarding the encoding of permutations by substitution trees. Step 3: Transfer. Finally we use a transfer theorem of analytic combinatorics (<ref>) to translate the singularity exponents we found in Step 2 into a limiting behavior for (<ref>). Informally, a square-root singularity, which is the same as in “usual” families of trees, will lead to the Brownian separable permuton. A singularity of exponent in (1/2,1) will lead to the δ-stable tree, where δ∈ (1,2) is the inverse of the exponent. A singularity of exponent δ > 1 will invariably lead to the degenerate case. §.§ Organization of the paper The paper is organized as follows (see also <ref>).
* Section <ref> is devoted to proving useful results on the convergence of random permutons. The proofs heavily rely on previous estimates for deterministic permutons <cit.>. We believe that these general results regarding random permutons are interesting on their own, therefore they are presented in a self-contained way. * In Sections <ref> and <ref>, we prove nonasymptotic enumeration results for the number of permutations encoded by some given families of (decorated) trees. The main result is <ref>, which is the first step towards the estimation of 𝔼[(π,σ_n)]. * In Sections <ref>, <ref>, <ref> we prove our main results: the convergence of the sequence (μ_σ_n)_n of permutons. As already mentioned, the quantitative behavior depends on the family 𝒮, more precisely on the sign of S'(R_S)-2/(1+R_S)^2+1: * Section <ref> is devoted to the standard case S'(R_S)>2/(1+R_S)^2-1. We show in <ref> the convergence to the biased Brownian separable permuton. * In Section <ref>, we consider the degenerate case S'(R_S)<2/(1+R_S)^2-1. * In Section <ref>, we consider the critical case S'(R_S)=2/(1+R_S)^2-1. This case itself is divided into two subcases, according to whether the exponent δ (defined in <ref>) is smaller (<ref>) or greater (<ref>) than 2. * We postpone to <ref> many useful results of complex analysis. * Finally, <ref> discusses how <ref> have been obtained. § CONVERGENCE OF RANDOM PERMUTONS In this section, we first recall the terminology of (deterministic) permutons, introduced in <cit.>. We also adapt their results to obtain criteria for the convergence in distribution of random permutons. Notation. Since this section involves many different probability spaces, we use a superscript on ℙ (and similarly on expectation symbols 𝔼) to record the source of randomness. In the case where the event A(u,v) (or the function H(u,v)) depends on two random variables u and v, we interpret ℙ^u(A(u,v)) (or 𝔼^u[H(u,v)]) as the conditional probability (resp. expectation) given v. §.§ Deterministic permutons and extracted permutations Recall from <ref> that a permuton is a probability measure on the unit square with uniform marginals. To a permutation σ of size n, we can associate the permuton μ_σ which is essentially the (normalized) diagram of σ, where each dot has been replaced with a small square of dimension 1/n × 1/n carrying a mass 1/n. Let ℳ be the set of permutons. We need to equip ℳ with a topology. We say that a sequence of (deterministic) permutons (μ_n)_n converges weakly to μ (simply denoted μ_n →μ) if ∫_[0,1]^2 f dμ_n →∫_[0,1]^2 f dμ as n→ +∞, for every bounded and continuous function f: [0,1]^2 →ℝ. With this topology, ℳ is compact and metrizable by a metric d_□ which has been introduced in <cit.> (see Lemmas 2.5 and 5.3 in <cit.>): μ_n →μ ⇔ d_□(μ_n,μ) → 0. Since ℳ is compact, Prokhorov's theorem ensures that the space of probability distributions on ℳ is compact (for convergence of measures, we refer to <cit.>). Recall from Section <ref> that for σ∈𝔖_n and π∈𝔖_k, we have (π,σ)=ℙ^I_n,k(_I_n,k(σ)=π), where I_n,k is randomly and uniformly chosen among the \binom{n}{k} subsets of [n] with k elements. The random permutation _I_n,k(σ) is called the induced subpermutation (of size k) in σ. We will define the pattern density (π,μ) of a pattern π∈𝔖_k in a permuton μ by analogy with this formula. Take a sequence of k random points (x⃗,y⃗) = ((x_1,y_1),…, (x_k,y_k)) in [0,1]^2, independently with common distribution μ. Because μ has uniform marginals and the x_i's (resp. y_i's) are independent, it holds that the x_i's (resp.
y_i's) are almost surely distinct. We denote by (x_(1),y_(1)),…, (x_(k),y_(k)) the x-ordered sample of (x⃗,y⃗), i.e. the unique reordering of the sequence ((x_1,y_1),…, (x_k,y_k)) such that x_(1)<⋯<x_(k). Then the values (y_(1),⋯,y_(k)) are in the same relative order as the values of a unique permutation, that we denote Perm(x⃗,y⃗). Since the points are taken at random, Perm(x⃗,y⃗) is a random permutation of size k. We call it the induced subpermutation (of size k) in μ. Then we set (π,μ) = ℙ^x⃗,y⃗( Perm(x⃗,y⃗)=π ). Rewriting this probability in an integral form, we get immediately: (π,μ) = ∫_([0,1]^2)^k 1_{Perm(x⃗,y⃗) = π} μ(dx_1dy_1)⋯μ(dx_kdy_k), which identifies (π,·) as a measurable function on the space of permutons. In the following, as we consider a random permuton μ, we need to construct a finite sequence of points (x_1,y_1), …, (x_k,y_k), which are independent with common distribution μ conditionally on μ. This is possible up to considering a new probability space where the joint distribution of (μ, (x_1,y_1),…,(x_k,y_k)) is characterized as follows: for every positive measurable functional H : ℳ× ([0,1]^2)^k → ℝ, 𝔼^μ,x⃗,y⃗[H(μ,(x_1,y_1), …, (x_k,y_k))] = 𝔼^μ[ ∫_([0,1]^2)^k μ(dx_1 dy_1) ⋯μ(dx_k dy_k) H(μ, (x_1,y_1), …, (x_k,y_k)) ]. In this new probability space, we call 𝐦⃗ the vector (x⃗,y⃗) = (x_i,y_i)_1≤ i ≤ k, and we use the notation Perm(𝐦⃗, μ) = Perm(x⃗,y⃗), to highlight the two levels of randomness. We end this section with the following two estimates, proved in <cit.>. If π∈𝔖_k and σ∈𝔖_n, then |(π,σ)-(π,μ_σ)| ≤ \binom{k}{2}/n. There is a k_0 such that if k>k_0, for any permuton ν, ℙ^𝐦⃗[ d_□(μ_Perm(𝐦⃗,ν),ν) ≥ 16k^-1/4] ≤ 1/2 e^-√(k). §.§ Random permutons and convergence in distribution We now consider a sequence of random permutations (σ_n) (with σ_n of size n). An example of interest for the present paper is when, for each n ≥ 1, σ_n is a uniform random permutation of size n in a given class 𝒞. Another example is given by the random permutations (σ_n)_n≥ 1 = (Perm(𝐦⃗_n,μ))_n≥ 1 constructed above from a given random permuton μ. In the case where μ is deterministic, these correspond to the Z-random permutations from <cit.>, used to prove that each permuton is the limit of some permutation sequence. Taking I_n,k independently from (σ_n), we have for every π of size k: 𝔼^σ_n[(π,σ_n)] = 𝔼^σ_n[ℙ^I_n,k(_I_n,k(σ_n)=π)] = ℙ^σ_n,I_n,k(_I_n,k(σ_n) = π). Similarly, for a random permuton μ, we have 𝔼^μ[(π,μ)] = ℙ^μ,𝐦⃗(Perm(𝐦⃗,μ)=π). This is a consequence of (<ref>) above, applied to H(μ,(x_1,y_1),…,(x_k,y_k)) = 1_{Perm(x⃗,y⃗) = π} and combined with (<ref>). The same argument may be applied to H(ν,(x_1,y_1),…,(x_k,y_k)) = 1_{d_□(μ_Perm(x⃗,y⃗),ν) ≥ 16k^-1/4}, yielding a randomized version of <ref>. There is a k_0 such that if k>k_0, for any random permuton ν, ℙ^ν,𝐦⃗[ d_□(μ_Perm(𝐦⃗,ν),ν) ≥ 16k^-1/4] ≤ 1/2 e^-√(k). This result has an important consequence for the distribution of random permutons. Let μ, μ' be two random permutons. If there exists k_1 such that for k≥ k_1 and every π of size k we have ℙ^μ,𝐦⃗(Perm(𝐦⃗,μ)=π) = ℙ^μ',𝐦⃗'(Perm(𝐦⃗',μ')=π), then μ d= μ'. We need to prove that 𝔼^μ[ϕ(μ)]=𝔼^μ'[ϕ(μ')] for every bounded and continuous function ϕ:ℳ→ℝ. Fix k≥ k_1. It holds that 𝔼^μ[ϕ(μ)]-𝔼^μ'[ϕ(μ')] = 𝔼^μ,𝐦⃗[ϕ(μ)-ϕ(μ_Perm(𝐦⃗,μ))] + (𝔼^μ,𝐦⃗[ϕ(μ_Perm(𝐦⃗,μ))] - 𝔼^μ',𝐦⃗'[ϕ(μ_Perm(𝐦⃗',μ'))]) + 𝔼^μ',𝐦⃗'[ϕ(μ_Perm(𝐦⃗',μ'))-ϕ(μ')], where 𝐦⃗' denotes a sequence of k independent points with common distribution μ', conditionally on μ'. The second term in the above display is zero by assumption. Moreover, from <ref> the first and third terms go to zero when k→ +∞.
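The extraction procedure above is straightforward to simulate. The following Monte Carlo sketch (in Python, with our own helper names) samples k i.i.d. points from μ_σ and estimates (π,μ_σ), which by the first estimate above differs from (π,σ) by at most \binom{k}{2}/n:

```python
import random

def sample_point(sigma):
    # one point from mu_sigma: pick a cell [(i-1)/n, i/n] x [(sigma(i)-1)/n, sigma(i)/n]
    # uniformly among the n cells of the diagram, then a uniform point inside it
    n = len(sigma)
    i = random.randrange(n)
    return ((i + random.random()) / n, (sigma[i] - 1 + random.random()) / n)

def perm_of_points(points):
    # Perm(x, y): the relative order of the y's once the points are sorted by x
    ys = [y for _, y in sorted(points)]
    rank = {y: r + 1 for r, y in enumerate(sorted(ys))}
    return tuple(rank[y] for y in ys)

sigma, pi, k, trials = (6, 5, 8, 3, 1, 2, 4, 7), (3, 1, 2), 3, 100_000
hits = sum(perm_of_points([sample_point(sigma) for _ in range(k)]) == pi
           for _ in range(trials))
print(hits / trials)   # Monte Carlo estimate of the density of 312 in mu_sigma
```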
Our main theorem in this section deals with the convergence of sequences of random permutations to a random permuton. It generalizes the result of <cit.> which states that deterministic permuton convergence is characterized by convergence of pattern densities. We extend their proof to the case of random sequences, where permuton convergence in distribution is characterized by convergence of average pattern densities, or equivalently of the induced subpermutations of any (fixed) size. For any n, let σ_n be a random permutation of size n. Moreover, for any fixed k, let I_n,k be a uniform random subset of [n] with k elements, independent of σ_n. The following assertions are equivalent. (a) (μ_σ_n)_n converges in distribution for the weak topology to some random permuton μ. (b) The random infinite vector ((π,σ_n))_π∈𝔖 converges in distribution in the product topology to some random infinite vector (Λ_π)_π∈𝔖. (c) For every π in 𝔖, there is a Δ_π≥ 0 such that 𝔼[(π,σ_n)] →Δ_π. (d) For every k, the sequence (_I_n,k(σ_n))_n of random permutations converges in distribution to some random permutation ρ_k. Whenever these assertions are verified, we have (Λ_π)_π d= ((π,μ))_π and for every π∈𝔖_k, ℙ(ρ_k = π) = Δ_π = 𝔼[Λ_π] = 𝔼[(π,μ)] = ℙ(Perm(𝐦⃗,μ) = π). In item (c) above, it is enough to consider all π of size at least 2. Indeed, for π=1, the statement is trivial, since (π,·) is identically 1. Proof of (a)⇒(b). Let π_1,…,π_r be a finite sequence of patterns. By <cit.>, the map μ↦ ((π_i,μ))_1≤ i ≤ r is continuous. Therefore, μ_σ_n d→ μ implies ((π_i,μ_σ_n))_1≤ i ≤ r d→ ((π_i,μ))_1≤ i ≤ r. Using <ref>, one can replace each (π_i,μ_σ_n) by (π_i,σ_n) in the above convergence. This proves the convergence in distribution of all induced permutations ((π_i,σ_n))_1≤ i ≤ k, and hence of ((π,σ_n))_π∈𝔖 in the product topology (see for instance <cit.>). Proof of (b)⇒(c). If (π,σ_n) d→ Λ_π, since (π,σ_n) takes values in [0,1], we have 𝔼[(π,σ_n)] → 𝔼[Λ_π] as n→∞. Proof of (c)⇒(d). Fix π∈𝔖_k and consider the sequence ℙ^σ_n,I_n,k(_I_n,k(σ_n) = π) = 𝔼^σ_n[(π,σ_n)], which converges if (c) holds (the equality comes from <ref>). Since _I_n,k(σ_n) is a random variable taking its values in the finite set 𝔖_k, this says exactly that the sequence (_I_n,k(σ_n))_n converges in distribution. Proof of (d)⇒(a). Consider a sequence of random permutations (σ_n) satisfying (d), i.e. for every k, there is a random permutation ρ_k such that _I_n,k(σ_n) d→ ρ_k. Put differently, for every pattern π of size k, we have ℙ^σ_n,I_n,k(_I_n,k(σ_n)=π) →ℙ(ρ_k = π). From <ref> and <ref>, we get 𝔼^σ_n[(π,μ_σ_n)] = 𝔼^σ_n[(π,σ_n)] + 𝒪(1/n) = ℙ^I_n,k,σ_n(_I_n,k(σ_n) = π) + 𝒪(1/n). Set θ_k,n = Perm(𝐦⃗,μ_σ_n). Then, using <ref>, for every π∈𝔖_k, we have ℙ^θ_k,n(θ_k,n = π) = ℙ^𝐦⃗,σ_n(Perm(𝐦⃗,μ_σ_n) = π) = 𝔼^σ_n[(π,μ_σ_n)] = ℙ^I_n,k,σ_n(_I_n,k(σ_n) = π) + 𝒪(1/n) →ℙ(ρ_k = π). In other words, θ_k,n d→ ρ_k.
Since μ_ρ_k takes its values in a finite set of permutons, this implies μ_θ_k,n d→ μ_ρ_k. Let H : (ℳ,d_□) → ℝ be a bounded continuous functional. It holds that | 𝔼[H(μ_σ_n)] - 𝔼[H(μ_θ_k,n)]| ≤ 𝔼[ |H(μ_σ_n)- H(μ_θ_k,n)| ] ≤ 𝔼[ |H(μ_σ_n)- H(μ_θ_k,n)| 1_{d_□(μ_σ_n, μ_θ_k,n) ≤ 16k^-1/4}] + 𝔼[ |H(μ_σ_n)- H(μ_θ_k,n)| 1_{d_□(μ_σ_n, μ_θ_k,n) > 16k^-1/4}]. The first term can be bounded by introducing the modulus of continuity of H, which is defined as ω(ε) = sup_{d_□(ξ,ζ) ≤ ε} |H(ξ)-H(ζ)|. Since ℳ is compact, it goes to 0 when ε goes to 0. Hence, 𝔼[ |H(μ_σ_n)- H(μ_θ_k,n)| 1_{d_□(μ_σ_n, μ_θ_k,n) ≤ 16k^-1/4}] ≤ 𝔼[ ω( d_□(μ_σ_n, μ_θ_k,n)) 1_{d_□(μ_σ_n, μ_θ_k,n) ≤ 16k^-1/4}] ≤ ω(16k^-1/4). As for the second term, for k large enough, <ref> yields 𝔼[ |H(μ_σ_n)- H(μ_θ_k,n)| 1_{d_□(μ_σ_n, μ_θ_k,n) > 16k^-1/4}] ≤ 𝔼[ 2 sup |H| 1_{d_□(μ_σ_n, μ_θ_k,n) > 16k^-1/4}] ≤ 1/2 e^-√(k) 2 sup |H|. Putting things together, we obtain | 𝔼[H(μ_σ_n)] - 𝔼[H(μ_θ_k,n)]| ≤ ω(16k^-1/4) + 1/2 e^-√(k) 2 sup |H|. Assume that (μ_σ_n)_n has a subsequence converging in distribution to a random permuton μ'. Taking the limit when n →∞ of (<ref>) along this subsequence, we get | 𝔼[H(μ')] - 𝔼[H(μ_ρ_k)]| ≤ ω(16k^-1/4) + e^-√(k) sup |H|. (Recall indeed that (θ_k,n)_n converges to ρ_k in distribution.) The right-hand side tends to 0 when k tends to infinity, which proves that (μ_ρ_k)_k converges to μ' in distribution as well. Therefore, all converging subsequences of (μ_σ_n)_n converge to the same limit μ', which is the limit of (μ_ρ_k)_k ≥ 1. Thanks to the compactness of the space of probability distributions on ℳ, this is enough to conclude that (μ_σ_n) has indeed a limit. Item (a) is proved. Proof of additional statements. Assume that (a)–(d) hold. That (Λ_π)_π d= ((π,μ))_π follows from the proof of (a)⇒(b). Fix any integer k, and any permutation π of size k. The above equality in distribution implies 𝔼[Λ_π] = 𝔼[(π,μ)]. That Δ_π = 𝔼[Λ_π] is clear from the proof of (b)⇒(c). The equality ℙ(ρ_k = π) = Δ_π follows from the proof of (c)⇒(d). Finally, 𝔼[(π,μ)] = ℙ(Perm(𝐦⃗,μ) = π) comes from <ref>. In some sense, <ref> can be seen as an analogue of a theorem of Aldous for random trees <cit.>. Both in permutations and trees, there is a natural way to construct a smaller structure from k elements of a big structure (induced subpermutations or subtrees). The goal is then to reduce the convergence of the big structure to the convergence, for each k, of the induced substructures. For trees, we need an extra tightness assumption (that the family of trees is “leaf-tight” in Aldous' terminology). In our case, since the space of permutons is compact, we do not need such an assumption. We finish this section with a comment on the existence of random permutons with prescribed induced subpermutations. A family of random permutations (ρ_n)_n is consistent if * for every n≥ 1, ρ_n ∈𝔖_n, * for every n≥ k ≥ 1, if I_n,k is a uniform subset of [n] of size k, independent of ρ_n, then _I_n,k(ρ_n) d= ρ_k. It turns out that consistent families of random permutations and random permutons are essentially equivalent: If μ is a random permuton, then the family defined by ρ_k d= Perm(𝐦⃗,μ) is consistent. Conversely, for every consistent family of random permutations (ρ_k)_k≥ 1, there exists a random permuton μ whose distribution is uniquely determined, such that Perm(𝐦⃗,μ) d= ρ_k. In that case, μ_ρ_n d→ μ. Set n≥ k ≥ 1. The first assertion follows from the following coupled construction of 𝐦⃗_n and 𝐦⃗: 𝐦⃗ is a uniform random subset of 𝐦⃗_n of size k, chosen independently of it. It follows that Perm(𝐦⃗,μ) = _I_n,k(Perm(𝐦⃗_n,μ)), for some random subset I_n,k of [n].
By construction, the distribution of I_n,k is uniform and independent of Perm(𝐦⃗_n,μ). Hence the consistency follows. The converse is immediate, by applying the implication (d)⇒(a) and the last assertion of <ref> to the sequence (ρ_k)_k≥ 1. Consistency ensures that we get the prescribed induced subpermutations, and uniqueness in distribution follows by <ref>. § CODING PERMUTATIONS BY TREES §.§ Substitution trees As seen in <ref> (<ref>), any permutation σ can be recursively decomposed using substitutions in a canonical way and this decomposition can be encoded in a canonical tree. However, if we do not impose conditions on θ and the π^(i)'s (as done in <ref>), a permutation σ may be represented in many ways as a substitution σ=θ[π^(1),…,π^(d)], where the π^(i)'s themselves may be further decomposed using substitutions. Such decompositions can be recorded in substitution trees. A rooted planar tree is either a leaf, or consists of a root node ∅ with an ordered k-tuple of subtrees attached to the root, which are themselves rooted planar trees. In our context, the size of a tree t is its number of leaves. It is denoted |t|, whereas #t denotes the number of nodes of t (including both leaves and internal nodes). Internal vertices of all trees considered in this paper have degree at least 2. It is natural (and also convenient for counting purposes in <ref>) to consider that the single leaf of the tree of size 1 is also its root (and is therefore also denoted ∅). Since we work with planar trees, we can label their leaves canonically with the integers from 1 to |t|: the leaf labeled by i is the ith leaf met in the depth-first traversal of t which chooses left before right. A subset of the set of leaves of a tree t is therefore canonically represented by a subset I of [|t|]. A substitution tree of size n is a labeled rooted planar tree with n leaves, where any internal node with k ≥ 2 children is labeled by a permutation of size k. Internal nodes with only one child are forbidden. Internal nodes labeled by the ascending permutation 1 2 ⋯ r or the descending permutation r ⋯ 2 1 (for some r ≥ 2) will play a particular role. Therefore we replace every such label with a ⊕ (for ascending permutations) or a ⊖ (for descending permutations). Since the size of a label corresponds to the number of children and since there is exactly one ascending (resp. descending) permutation of each size, there is no loss of information in this replacement. Internal nodes labeled ⊕ or ⊖ are called linear nodes, the other nodes being called nonlinear. Among nonlinear nodes, the ones labeled by simple permutations are called simple nodes. An example of a substitution tree is shown in <ref>, left. Let t be a substitution tree. We define inductively the permutation (t) associated with t: * if t is just a leaf, then (t)=1; * if the root of t has r≥ 2 children with corresponding subtrees t_1,…,t_r (from left to right), and is labeled with the permutation θ, then (t) is the permutation obtained as the substitution of (t_1),…,(t_r) in θ: (t) = θ[(t_1),…,(t_r)]. <ref> illustrates this construction. When (t)=σ, we say that t is a tree that encodes σ, or a tree associated with σ. Nonsimple permutations σ are encoded by several trees t. However, if we restrict ourselves to canonical trees (which are particular cases of substitution trees; see <ref>), we have uniqueness.
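Before going on, note that the recursive definition of (t) translates directly into code. In the short Python sketch below (our own encoding, reusing the substitute helper from the earlier sketch), a substitution tree is either the string 'leaf' or a pair (theta, children):

```python
def substitute(theta, blocks):
    # theta[pi^(1), ..., pi^(d)], as in the earlier sketch
    d = len(theta)
    pos = [sum(len(b) for b in blocks[:i]) for i in range(d)]
    val = [sum(len(blocks[j]) for j in range(d) if theta[j] < theta[i])
           for i in range(d)]
    out = [0] * sum(len(b) for b in blocks)
    for i, block in enumerate(blocks):
        for p, v in enumerate(block):
            out[pos[i] + p] = val[i] + v
    return tuple(out)

def perm_of_tree(t):
    # a leaf encodes the permutation 1; an internal node labeled theta with
    # subtrees t_1, ..., t_r encodes the substitution theta[t_1, ..., t_r]
    if t == 'leaf':
        return (1,)
    theta, children = t
    return substitute(theta, [perm_of_tree(c) for c in children])

t = ((2, 1), ['leaf', ((1, 2), ['leaf', 'leaf'])])   # 21[1, 12]: second child is 12
print(perm_of_tree(t))                               # (3, 1, 2), i.e. 312
```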
Indeed, from <ref>, to any permutation σ we can associate uniquely a canonical tree t such that (t)=σ. The remainder of <ref> is devoted to the proof of simple combinatorial lemmas on the structure of the set of substitution trees associated with a given permutation π. These lemmas are useful in <ref>. We first make the following observation. Take a substitution tree τ of some permutation π with a marked node v labeled by θ. Consider also a substitution tree τ' of θ. Then replacing v by the tree τ' yields a new substitution tree τ” of the same permutation π. (When doing this replacement the |θ| subtrees attached to v are glued on the leaves of τ', respecting their order, see <ref>.) This operation will be referred to as the inflation of v with τ'. Conversely, consider a connected set A of internal nodes in a substitution tree τ” of π. From this set we build a substitution tree τ' whose set of internal nodes is A, the ancestor-descendant relation in τ' is inherited from the one in τ”, and we add leaves so that the degree of each node of A is the same in τ' as in τ”. We denote θ=(τ'). Then merging all nodes in A into a single node labeled by θ turns τ” into a new substitution tree τ of the same permutation π. We call this a merge operation. For example, the tree τ of <ref> can be obtained from the tree τ” of the same figure by merging the nodes labeled 132 and ⊖. We now consider a last family of substitution trees. An expanded tree is a substitution tree where nonlinear nodes are labeled by simple permutations, while linear nodes are required to be binary. Any expanded tree of π is obtained from its canonical tree by inflating all nodes labeled by ⊕ (resp. ⊖) with binary trees whose internal nodes are all labeled by ⊕ (resp. ⊖). Let τ be an expanded tree of π. Consider, if any, two adjacent linear nodes of τ with the same label (either both ⊕ or both ⊖) and merge them. Note that the resulting node will still have label ⊕ or ⊖. We repeat this operation until there are no adjacent linear nodes with the same label. Nonlinear nodes in the resulting tree τ' are all labeled by simple permutations: it is the case in τ (by definition of expanded trees) and we did not create any new nonlinear nodes. Therefore τ' satisfies all conditions of canonical trees (see <ref>). By uniqueness, τ' is the canonical tree of π. Reversing the merge operations, τ can be obtained from τ' by inflating its linear nodes, which proves the proposition. We recall a fact well-known to combinatorialists: the number of complete binary trees (i.e. plane rooted trees whose internal vertices all have degree 2) with d leaves is Cat_{d-1}. Therefore each linear node of degree d of the canonical tree can be inflated with a binary tree in Cat_{d-1} ways. We therefore get the following interesting corollary, regarding the number and properties of expanded trees. Let π be a permutation and d_1,⋯,d_r (resp. e_1,⋯,e_s) be the degrees of the nodes labeled ⊕ (resp. ⊖) in the canonical tree of π. Then * the number N_π of expanded trees of π is ∏_{i=1}^r Cat_{d_i-1} ∏_{j=1}^s Cat_{e_j-1}, where we denote by Cat_k := \frac{1}{k+1}\binom{2k}{k} the k-th Catalan number, which counts complete binary trees with k+1 leaves. * each expanded tree of π has ∑_{i=1}^r (d_i-1) nodes labeled ⊕ and ∑_{j=1}^s (e_j-1) nodes labeled ⊖. * the labels of the nonlinear nodes in any expanded tree of π are the same as in its canonical tree. Any substitution tree of π can be obtained from some expanded tree of π by merge operations. The proof is similar to that of <ref>.
Starting from any substitution tree of π and inflating every node that is neither simple nor binary with an expanded tree encoding its label, we get an expanded tree. Reversing these inflation operations, we can obtain any substitution tree from some expanded tree of π, using only merge operations. §.§ Induced trees Since permutations are encoded by trees and since we are interested in patterns in permutations, we consider an analogue of patterns in trees: this leads to the notion of induced trees. Let t be a tree, and u and v be two nodes (internal nodes or leaves) of t. The first common ancestor of u and v is the node furthest away from the root ∅ that appears on both paths from ∅ to u and from ∅ to v in t. The following simple observation allows one to read the relative order of σ_i and σ_j in any substitution tree encoding σ. Let i ≠ j be two leaves of a substitution tree t and σ=(t). Let v be the first common ancestor of i,j in t and θ be the permutation labeling v. We define k (resp. ℓ) such that the k-th (resp. ℓ-th) child of v is an ancestor of i (resp. j). Then σ_i>σ_j if and only if θ_k>θ_ℓ. Let t be a substitution tree, and let I be a subset of the leaves of t. The tree t_I induced by I is the substitution tree of size |I| defined as follows. The tree structure of t_I is given by: * the leaves of t_I are the leaves of t labeled by elements of I; * the internal nodes of t_I are the nodes of t that are first common ancestors of two (or more) leaves in I; * the ancestor-descendant relation in t_I is inherited from the one in t; * the order between the children of an internal node of t_I is inherited from t. The label of an internal node v of t_I is defined as follows: * if v is labeled by a permutation θ in t, the label of v in t_I is given by the pattern of θ induced by the children of v having a descendant that belongs to t_I (or equivalently, to I). A detailed example of the induced tree construction is given in <ref>. Note that if v has label ⊕ in t, it has also label ⊕ in t_I. Indeed, ⊕ nodes correspond to increasing permutations and all patterns of increasing permutations are increasing permutations. The same holds with ⊖. The converse is however not true: a node can be linear in t_I but nonlinear in t (e.g. the bottommost green node in <ref>). By definition, for any substitution tree t with k leaves and subset I of [k], t_I is a substitution tree. However, if t is a canonical tree, t_I is a substitution tree which is not necessarily canonical (see for example <ref>). An important feature of induced trees is the following, which follows from <ref> and is illustrated in <ref>. Let t be a substitution tree with k leaves, and I be a subset of [k]. We have _I((t)) = (t_I). As a consequence of this formula, counting the total number of occurrences of a given pattern in some family of permutations can be reduced to counting the total number of induced trees equal to a given t_0 in the corresponding family of canonical trees. This is precisely the goal of the next section. § EXACT ENUMERATION OF VARIOUS FAMILIES OF TREES Let 𝒮 be a fixed family of simple permutations. Recall that its generating function is S(z)=∑_α∈𝒮 z^|α| = ∑_{n≥ 4} s_n z^n, where s_n is the number of permutations of size n in 𝒮. An 𝒮-canonical tree is any canonical tree whose simple nodes carry labels in 𝒮. We denote by 𝒯 the combinatorial class of 𝒮-canonical trees, the size |t| of a tree t being its number of leaves. Recall that ⟨𝒮⟩ is by definition the set of permutations whose canonical tree is in 𝒯.
Since canonical trees encode permutations in a unique way, the map t ↦ (t) defines a size-preserving bijection between 𝒯 and ⟨𝒮⟩. Both therefore have the same generating function, which we denote by T(z)=∑_{t∈𝒯} z^|t| = ∑_{σ∈⟨𝒮⟩} z^|σ|. In <ref> below, we explain how to compute T(z) starting from the datum S(z). We then study families of 𝒮-canonical trees with one marked leaf, with constraints on the root and/or on the marked leaf. These are building blocks for <ref>, where we consider the family of 𝒮-canonical trees with k marked leaves, inducing a given tree t_0. §.§ Generating functions of 𝒮-canonical trees (possibly with marked leaves) In order to compute T(z) in terms of S(z), we need to introduce the auxiliary family 𝒯_⊕̄ (resp. 𝒯_⊖̄) of 𝒮-canonical trees with a root (always denoted ∅) that is not labeled ⊕ (resp. ⊖), and its generating function T_⊕̄ (resp. T_⊖̄): T_⊕̄(z)=∑_{t∈𝒯; ∅ is not labeled ⊕} z^|t|. Note that replacing all labels ⊖ by ⊕ and ⊕ by ⊖ defines an involution on 𝒮-canonical trees. This implies in particular T_⊕̄ = T_⊖̄ and will be used to get other similar identities below. Together with the condition T_⊖̄(0)=0, the generating function T_⊖̄ is determined by the following implicit equation: T_⊖̄ = z + T_⊖̄^2/(1-T_⊖̄) + S(T_⊖̄/(1-T_⊖̄)). The main series T is then simply given in terms of T_⊖̄ by T = T_⊖̄/(1-T_⊖̄). A tree of 𝒯 is either a leaf, or a root labeled ⊕ and a sequence of at least two trees in 𝒯_⊕̄, or a root labeled ⊖ and a sequence of at least two trees in 𝒯_⊖̄, or a root labeled by α∈𝒮 and a sequence of |α| unconstrained trees. Therefore T = z + T_⊕̄^2/(1-T_⊕̄) + T_⊖̄^2/(1-T_⊖̄) + S(T) = z + 2 T_⊖̄^2/(1-T_⊖̄) + S(T). Similarly, T_⊕̄ = T_⊖̄ = z + T_⊖̄^2/(1-T_⊖̄) + S(T). By combining these two equations we get T = T_⊖̄ + T_⊖̄^2/(1-T_⊖̄) = T_⊖̄/(1-T_⊖̄), that is <ref>. Substituting it back in <ref> gives <ref>. Observe that under the assumption T_⊖̄(0)=0, <ref> allows one to compute inductively the coefficients of T_⊖̄. Hence T_⊖̄ is uniquely determined by T_⊖̄(0)=0 and <ref>, as claimed. We now consider trees with a marked leaf. As before, subscripts indicate a constraint on the root. The generating function of trees with a marked leaf counted by their number of unmarked leaves is obtained by differentiating the generating function of trees without marked leaf: T', T'_⊕̄, T'_⊖̄. Indeed, T'_⊕̄(z) = ∑_{t∈𝒯; ∅ is not labeled ⊕} |t| z^{|t|-1} = ∑_{t̂ obtained by marking a leaf of a tree of 𝒯 whose root is not labeled ⊕} z^{#unmarked leaves of t̂}. Accordingly, we denote by 𝒯', 𝒯'_⊕̄ and 𝒯'_⊖̄ the families of trees counted by T', T'_⊕̄ and T'_⊖̄. Consistently, we use superscripts when we consider families of trees with a marked leaf that satisfies an additional constraint, and similarly for their generating functions. We say that a leaf is ⊕-replaceable (resp. ⊖-replaceable) if it may be replaced by a tree whose root is labeled ⊕ (resp. ⊖) without violating the definition of canonical trees (see the third item in <ref>). In other words, its parent (if it exists) should be labeled by ⊖ or by a simple permutation (resp. by ⊕ or by a simple permutation). We then denote by 𝒯_⊕̄^+ (resp. 𝒯_⊕̄^-) the family of trees in 𝒯_⊕̄ with a ⊕-replaceable marked leaf (resp. a ⊖-replaceable marked leaf). Similar definitions hold for 𝒯^+, 𝒯^-, 𝒯_⊖̄^+ and 𝒯_⊖̄^-. As for T', we take the convention that T_⊕̄^+ and all generating functions with superscripts count trees according to the number of unmarked leaves.
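As an aside, the inductive computation of the coefficients of T_⊖̄ mentioned above is easily implemented. Below is a minimal Python sketch (names ours) for the separable case 𝒮=∅ (so S=0): we iterate the fixed-point equation on truncated power series and recover the large Schröder numbers, which count separable permutations; their growth rate (3+2√2)^n is consistent with the radius ρ = τ - Λ(τ) = 3-2√2 obtained from the critical-point analysis of Step 2.

```python
N = 10  # truncation order: coefficients of z^1, ..., z^(N-1) will be exact

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] += ai * bj
    return c

def geom(a):
    # 1/(1-a) = 1 + a + a^2 + ... for a truncated series with a[0] == 0
    res, power = [1] + [0] * (N - 1), [1] + [0] * (N - 1)
    for _ in range(N - 1):
        power = mul(power, a)
        res = [r + p for r, p in zip(res, power)]
    return res

t = [0] * N
for _ in range(N):                 # fixed point of  T_ = z + T_^2/(1-T_)
    t = mul(mul(t, t), geom(t))    # T_^2/(1-T_)
    t[1] += 1                      # + z
T = mul(t, geom(t))                # T = T_/(1-T_)
print(T[1:8])                      # [1, 2, 6, 22, 90, 394, 1806]: Schroeder numbers
```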
By definition, T_^- has constant coefficient 1 (corresponding to the tree consisting of a single leaf).We however take the convention that T_^+ has constant coefficient 0:in other words, the single leaf is excluded from the family 𝒯_^+ (intuitively, a single leaf cannot be replaced by a tree with root labeled ⊕, since the trees in 𝒯_ should not have a root labeled ⊕). The generating functions T^+, T_^+ and T_^+ are given by the following formulas: T^+ =1/1-S'(T) -- S'(T); T_^+ =1/1+T^+ ;T_^+= ( S'(T) ++ S'(T)) T_^+where T and T_ are given by <ref>and =(11-T_)^2-1. Quantities with a minus superscript are obtained by symmetry: T^-=T^+, T_^-=T_^+ andT_^-=T_^+. Consider a tree t in 𝒯_^+. As explained above, |t| ≠ 1 and we distinguish cases according to the label of the root of t,which may be either ⊖ or a simple permutation. * The root of t is labeled ⊖ (see left of <ref>). Then t can be decomposed as a tree in 𝒯_^+ (which may be a single leaf) and a nonempty pair of sequences of unmarked trees in 𝒯_.* The root of t is labeled by a simple permutation α∈𝒮 of size d (see right of <ref>). Then t can be decomposed as a d-uple of unconstrained trees, with one of them having a ⊕-replaceable marked leaf. Therefore we haveT_^+ =T_^+ + S'(T) T^+,where =(11-T_)^2-1 counts nonempty pairs of sequences of unmarked trees in 𝒯_ (since T_=T_).Similarly, we haveT_^+ = 1 +T_^+ + S'(T) T^+ ; T^+= 1 +T_^+ +T_^+ + S'(T) T^+. The above three equations form a system with three indeterminates: T_^+, T_^+ and T^+ (W and T are known thanks to <ref>). Solving this system gives <ref>.The symmetry argument giving T_^-, T_^- and T^- consist as before in exchanging ⊖ and ⊕ labels in 𝒮-canonical trees. §.§ Generating function counting trees with marked leaves inducing a given tree To enumerate trees with marked leaves inducing a given tree, we introduce another kind of generating functions.Recall that for any permutations α and θ, occ(θ,α) is the number of occurrences of θ in α.For a permutation θ, we set _θ(z) = ∑_α∈𝒮occ(θ,α) z^|α|-|θ|. For d≥ 1 and any fixed α,∑_θ∈_docc(θ,α) = |α|d. Therefore ∑_θ∈_d_θ is related to the d-th derivative of S by∑_θ∈_d_θ = S^(d)d!. This implies that the radius of convergence of each _θ is at least R_S, the radius of convergence of S.Fix a substitution treewith k leaves.Let us call _ the family of 𝒮-canonical trees t with k marked leaves ≪=(ℓ_1,…,ℓ_k) such that these leaves induce :_ = (t,≪)such thatt ∈𝒯 andt_≪=.We define the size of an object (t,≪) as the number of leaves in t (both marked and unmarked). The corresponding generating series is denoted T_(z) Let (t,≪) ∈_.As noted after the definition of induced trees, a nonlinear node of t_≪ has to come from a nonlinear node of t, whereas a linear node of t_≪ may come from a linear or a nonlinear node of t. In order to ease the enumeration, we partition _ according to the set of nodes of t_≪ = t_0 coming from nonlinear nodes of t (that is, simple nodes of t since t is canonical).More formally, let tbe the set of internal nodesof a tree t. With each (t,≪)∈_, we associate the set ⊆t of the first common ancestors of ≪ in t.From the definition of induced tree, a node v incorresponds to a unique node in , that we denote φ(v). For V_s ⊆, let_,V_s = { (t,≪) ∈_ : {v∈ : φ(v)is simple} = V_s}. Clearly _,V_s is nonempty if and only if V_s contains every nonlinear node of . An example of a marked tree (t,≪) with the corresponding pair (, V_s) is shown on <ref>. In pictures, we will always circle nodes v in V_s and the corresponding nodes φ(v) in t. 
A decorated tree is a pair (,V_s) where*is a substitution tree;* V_s is a subset ofthat contains all nonlinear nodes.Therefore we have the following decomposition:_=⋃_V_s s.t. (,V_s) is a decorated tree_,V_s. Let (,V_s) be a decorated tree. We consider the generating function T_,V_s of _,V_s, the size of (t,≪) being its number of leaves (both marked and unmarked):T_,V_s(z)=∑_(t,≪) ∈_,V_s z^|t|.To compute T_,V_s, we introduce some notation.For every internal node v of , let * θ_v be the permutation labeling v, * d'_v be its number of children which are leaves or in V_s, * d^+_v be its number of children which are not in V_s and are labeled by ⊕, * d^-_v be its number of children which are not in V_s and are labeled by ⊖, * d_v=d'_v+d^+_v+d^-_v be its total number of children.We also set the type of root to be ' if the root of t_0 is in V_s, and + (resp. -) if the root is not in V_s and labeled ⊕ (resp. ⊖). Let (,V_s) be a decorated tree and k be its number of leaves. ThenT_,V_s = z^kT^type of root∏_v∈ A_v, where A_v =_θ_v(T)(T')^d'_v (T^+)^d^+_v (T^-)^d^-_vifv∈ V_s ,( 1 1-T_)^d_v+1 (T_')^d'_v (T_^+)^d^+_v (T_^-)^d^-_vifv∉V_sand θ_v = ⊕,( 1 1-T_)^d_v+1 (T_')^d'_v (T_^+)^d^+_v (T_^-)^d^-_vifv∉V_sand θ_v = ⊖.The proof is based on a decomposition of marked 𝒮-canonical trees (t,≪) of _,V_s followed by a study of the series A_v depending on the type of the node φ(v) in t.First step: Decomposing a tree in _,V_s.We fix a decorated tree (,V_s) with k leaves and a marked 𝒮-canonical tree (t,≪) ∈_,V_s.We want to decompose t into subtrees, one for each internal node ofplus one attached to the root of t. Recall that φ:→⊆t is the correspondence between the internal nodes ofand the set of first common ancestors of leaves ≪ in t. For every internal node v of , let t_v be the subtree of t defined as follows.* The root of t_v is φ(v).* The nodes of t_v are descendants of φ(v).* A descendant of φ(v) in t belongs to t_v if and only if its first proper ancestor inis φ(v) (proper meaning different from the node itself).Moreover we define t_B as the subtree of t rooted at the root of t and containing the nodes of t having no proper ancestor in ; B stands for “bottom” and is used here as a symbol, not as a variable. (If φ maps the root of t_0 to the root of t, then t_B is reduced to a leaf.) A schematic representation of the trees t_v and t_B is given in <ref>.By definition, a node u of t that is not inbelongs to exactly one t_v. On the contrary, if u is in , then u is the root of t_φ^-1(u) and is a leaf of another t_v, where v is the parent of φ^-1(u). (If φ^-1(u) is the root of , then there is no such v, and u is a leaf of t_B.)By construction of t_v and t_B, their leaves are either leaves of t or belong to . We mark the leaves that belong toor that are marked leaves of t. In this way, the trees t_v and t_B that we have constructed are marked trees. The following properties are straightforward to check. * The tree t_v is an 𝒮-canonical tree with d_v marked leaves. * The root of t_v is nonlinearif and only if v∈ V_s. * The root of t_v is ⊕ if and only if v∉ V_s and is labeled ⊕. * The root of t_v is ⊖ if and only if v∉ V_s and is labeled ⊖. * The d_v marked leaves of t_v belong to d_v subtrees coming from d_v distinct children of the root of t_v. The pattern induced by the position of those d_v children on the permutation labeling the root of t_v is θ_v.(For example, in <ref>, four marked leaves are branched on the node labeled 362514 at positions 1,2,5,6. 
This implies that the corresponding node in t_0 is labeled with θ_v=2413.)* Let w be the i-th child of v in . If w∈∖ V_s, and its label is a ⊕ (resp. a ⊖), then the i-th marked leaf of t_v must be ⊕-replaceable (resp. ⊖-replaceable).The combinatorial class of trees satisfying properties i) to vi) will be denoted 𝒜_v.In addition, we observe that t_B is an 𝒮-canonical tree with one marked leaf; moreover, if the root ofis not in V_s and is labeled by ⊕ (resp. ⊖), then the marked leaf of t_B must be ⊕-replaceable (resp. ⊖-replaceable).This yields a map[:_,V_s→ ^ type of root×∏_v∈𝒜_v; (t,≪)↦(t_B,(t_v)_v∈). ] We claim that this map is a bijection, and that the inverse map is obtained as follows.Let us be given t_B and a collection of trees t_v, one for each internal node of . We first take t_B and glue t_root of on it, the root of t_root of replacing the marked leaf of t_B. We then proceed inductively: if t_v has already been glued and w is the i-th child of v, we glue t_w on t_v, by replacing the i-th marked leaf of t_v with the root of t_w.This yields a tree t with k marked leaves, denoted ≪. This tree is 𝒮-canonical because of items i), iii), iv) and vi) of the definition of 𝒜_v: we only glue trees with root ⊕ (resp. ⊖) on ⊕-replaceable leaves (resp. ⊖-replaceable leaves). By construction, the k marked leaves ≪ induce a tree having the same structure as t_0 and item v) of the definition of 𝒜_v ensures that the labels in the induced tree and in t_0 do match. Because of item ii), the tree (t,≪) is indeed in _t_0,V_s. We have therefore constructed a map from ^ type of root×∏_v∈𝒜_v to _t_0,V_s. By construction, this map indeed inverts , andis a bijection.Let A_v be the generating function of the combinatorial class 𝒜_v, counted by the number of unmarked leaves. If A_v verifies (<ref>), then (<ref>) follows from the fact thatis a bijection. Note indeed that the factor z^k in (<ref>) comes from the fact that we count marked leaves in the series in the left-hand side and but not in the series in the right hand side (the bijectionleaves the number of unmarked leaves invariant).We are left to show that the generating function A_v verifies (<ref>). Second step (i): Computing A_v when v∈ V_s.Recall that the d_v marked leaves of any tree t ∈𝒜_v belong to d_v subtrees coming from d_v distinct children of the root of t. Since v ∈ V_s, the elements t of 𝒜_v can be uniquely decomposed as follows(this decomposition is illustrated on <ref>). * The root of t should be labeled by a simple permutation α in 𝒮; among the |α| children of the root, d_v are marked (corresponding to the subtrees containing a marked leaf) and the pattern of α corresponding to the positions of these marked children should be θ_v. (In <ref>, α=3142, the marked leaves are the first, third and fourth subtrees, and the pattern of 3142 corresponding to positions {1,3,4} is indeed θ_v=231) * We glue |α|-d_v unmarked 𝒮-canonical trees with arbitrary roots on the unmarked children of α. (In <ref>, we have only one such tree, which is glued on the second child of the root); * We glue d_v 𝒮-canonical trees with one leaf marked and an arbitrary root on the marked children of α. In addition, * for d'_v of these trees, there is no constraint on the marked leaf. (In <ref>, the trees glued on the third and fourth children of the root are unconstrained trees with a marked leaf.) * For d^+_v (resp. d^-_v) of these trees, the marked leaf must be ⊕-replaceable (resp. ⊖-replaceable). 
(In <ref>, we must glue a tree with a ⊖-replaceable marked leaf on the first child of the root.) The generating functions of the first two steps can be computed as∑_α∈𝒮occ(θ_v,α) T(z)^|α|-d_v=_θ_v(T(z)),where _θ_v is defined in <ref> p. eq:defOcc. Indeed, once the label α of the root is chosen, occ(θ_v,α) counts the number of ways to mark children of the root in step i), and T(z)^|α|-d_v comes from step ii). Step iii) yields an additional factor (T')^d'_v (T^+)^d^+_v (T^-)^d^-_v. This proves the formula (<ref>) in the case where v is in V_s.Second step (ii): Computing A_v when v∉ V_s.When v is not in V_s and labeled by ⊕, the elements of the class 𝒜_v can be uniquely decomposed as follows (this decomposition is illustrated on <ref>).* The root is labeled by ⊕. * We attach to the root d_v 𝒮-canonical trees whose root is not labeled by ⊕, each with one marked leaf. In addition, * for d'_v of these trees, there is no constraint on the marked leaf. (In <ref>, the two right-most nonhatched trees attached to the root are trees with an unconstraint marked leaf.) * for d^+_v (resp. d^-_v) of these trees, the marked leaf must be ⊕-replaceable (resp. ⊖-replaceable). (In <ref>, the left-most nonhatched tree attached to the root should have a ⊖ replaceable marked leaf.) * Between and around these d_v trees, we attach d_v + 1 possibly empty sequences of unmarked 𝒮-canonical trees whose root is not labeled by ⊕.(In <ref>, each of these sequences is represented by a hatched blob.) Item i) does not involve any choice. Choices in item ii) are counted by (T_')^d'_v (T^+_)^d^+_v (T^-_)^d^-_v, while item iii) yields a factor ( 1 1-T_)^d_v+1. This proves the formula (<ref>) in the case where v is not in V_s and labeled by ⊕. The case when v is not in V_s and is labeled by ⊖ follows by symmetry. This ends the proof of the combinatorial identity (<ref>) and therefore of <ref>.§ ASYMPTOTIC ANALYSIS: THE STANDARD CASE S'(R_S)>2/(1+R_S)^2-1 Let 𝒮 be a set of simple permutations.The goal of this section is to precisely state, and then prove, <ref> (p.Th:MainIntro):the convergence to the biased Brownian separable permuton of uniform random permutations in ⟨𝒮⟩ when 𝒮 satisfies Condition (<ref>).§.§ Definition of the biased Brownian separable permuton and statement of the theorem The (unbiased) Brownian separable permuton was defined in <cit.>.Because the biased Brownian separable permuton is a one-parameter deformation of it,it is useful to first recall some facts about the substitution trees encoding separable permutations and the (unbiased) Brownian separable permuton. As noted in <ref>, the canonical trees (called decomposition trees in <cit.>)of separable permutations are those whose internal nodes are all labeled ⊕ or ⊖.If we consider more generally substitution trees, the following implication still holds: if τ is a substitution tree whose nodes are labeled ⊕ or ⊖, then (τ) is a separable permutation.Recall from <ref> that an expanded tree is a substitution tree where nonlinear nodes are labeled by simple permutations, while linear nodes are required to be binary. In the case of separable permutations, we do not have simple nodes, so that expanded tree are binary trees labeled with ⊕ and ⊖. These are also referred to as separation trees in the literature. 
<ref> shows a separable permutations together with two separation trees associated with it.For any separable permutation π, we denote by N_π its number of separation trees.If π is not separable, we set N_π = 0.It is shown in <cit.> that the Brownian separable permuton μ satisfies the following property: for any k≥ 2 and any π∈_k, [(π,μ)] = N_π/2^k-1_k-1 ,where, as before, we denote by _k := 1/k+12kk the k-th Catalan number, which counts complete binary trees with k leaves. In other words, the random permutation of size k extracted from μ is distributed like the permutation encoded by a uniform complete binary tree with k leaves,and hence k-1 internal nodes,whose signs are chosen uniformly and independently in {⊕,⊖}. In light of <ref> and <ref> (p. eq:E(occ)=P(perm)),this characterizes the law of the Brownian separable permuton μ among random permutons. The biased Brownian separable permuton of parameter p∈(0,1) has a similar characterization, except that the signs are now chosen with a bias.For a separable π, let r_+(π) (resp. r_-(π)) be the number of internal nodes labeled ⊕ (resp. ⊖) in a separation tree of π.Even if this is not relevant for the present paper, let us observe that r_+(π) (resp. r_-(π)) is simply the number of ascents (resp. descents) of π[ To see this, observe that each internal node v of a separation tree is the first common ancestor of exactly one pair of consecutive leaves (the right-most leaf of its left subtree and the left-most leaf of its right subtree). This two consecutive leaves, corresponding to consecutive elements of the permutation, form an ascent (resp. a descent) if and only if v is labeled by ⊕ (resp. ⊖).]. In particular, r_+(π) and r_-(π) do not depend on the choice of a separation tree (this is also a particular case of <ref>). The biased Brownian separable permuton of parameter p∈ (0,1) is the random permuton μ^(p) characterized by the following relations: for all k≥ 2 and all π∈_k,[(π,μ^(p))] = N_π/_k-1 p^r_+(π)(1-p)^r_-(π) . (Note that the right-hand side is zero if π is not separable.) Several remarks are in order. * For p=1/2, we get the unbiased Brownian separable permuton.* This characterization of μ^(p) is equivalent to the following: for every k≥ 1, (,μ^(p))d= ( b_k^(p)),where b_k^(p) is a uniform binary planar tree with k leaves, where each internal node is labeled ⊕ (resp. ⊖) with probability p (resp. 1-p), independently from each other.* The existence of μ^(p) is not immediate from this definition, but according to <ref>, it suffices to show that ( b_k^(p)) forms a consistent family of random permutations. This is indeed the case, and follows from the fact that a uniform induced subtree of b_n^(p) of size k is distributed like b_k^(p) (this is,e.g., a consequence of Rémy's algorithm to generate uniform random binary trees <cit.>).* The definition of μ^(p), and the above argument justifying its existence, are not constructive. For an explicit construction of μ^(p) starting from a Brownian excursion, see <cit.>. * Knowing a priori that such a permuton exists is not necessary for the proof of our main theorem. Indeed, we will prove that the quantity [(π,_n)] converges to the right-hand side of <ref> (for all patterns π), where _n is a uniform permutation in ⟨𝒮⟩_n, and the parameter p depends on S. 
From <ref>, this implies the existence of a random permuton μ^(p) satisfying (<ref>) (only for the relevant value of p; not for all p)and the convergence in distribution of (μ__n)_n to μ^(p).Now, we have all the necessary definitions to make explicit the parameter p of the statement of <ref>, that we restate in a full version.(Recall also the definition of _θ(z) from <ref>, p. eq:defOcc.)Let 𝒮 be a set of simple permutations such thatR_S > 0 andlim_r→ R_Sr < R_S S'(r) > 2/(1+R_S)^2 -1. H1For every n≥ 1, let _n be a uniform permutation in ⟨𝒮⟩_n, and let μ__n be the random permuton associated with _n. The sequence (μ__n)_n tends in distribution in the weak convergence topology to the biased Brownian separable permuton μ^(p) of parameter p, wherep=(1+κ)^3_12 (κ)+1/(1+κ)^3(_12 (κ)+_21 (κ)) +2and κ is the unique point in (0,R_S) such that S'(κ) =2 (1+κ)^2 - 1. Since S is a power series with nonnegative coefficients,t↦ S'(t) -2 (1+t)^2 + 1 is increasing and continuous on [0,R_S) (as the sum of two increasing and continuous functions).It therefore takes all values from -1 to some positive number (possibly +∞) exactly once. This entails the existence and uniqueness of κ. In many cases _12 = _21, and then p = 1/2 and μ^(p) is the unbiased Brownian separable permuton. This is the case with separable permutations (= ∅), with = {2413} or = {3142}, and with any set of simple permutations stable by taking reverse or complement, like the one considered in the introduction ={2413,3142,24153,42513}.Whenis the family of increasing oscillations (see for instance <cit.>), we can computeS(z) = 2z^4/1-z;_12(z) = 2z^2(3 - 3z + z^2)/(1 - z)^3; _21(z)=2z^2(3-2z)/(1 - z)^2. We get through numerical approximation κ≈ 0.2709 and deduce p≈0.5353. Takingto be the family of simple permutations in (321), we are interested in the class =⟨⟩ which is the substitution-closure of (321). In this case, <cit.> gives S(z) = 1-z-2z^2 -2z^3 - √(1-2z-3z^2)/2+2z. We get through numerical approximation κ≈ 0.2486. It seems hard to compute the generating series _12, but we can locate its value at κ by exhaustively computing the number of inversions of each permutation inup to a certain order N, and controlling the rest of the series using the fact thata permutation of size n in (321) cannot have more than n^2/4 inversions[Permutations avoiding 321 consist of two increasing subsequences.The number of inversions of σ∈(321) of size n is therefore at most max_0≤ k ≤ n k(n-k) ≤n^24.The claim follows.]. Performing this with N=12 yields p∈[0.577, 0.622].The remainder of Section <ref> is devoted to the proof of <ref>, using generating functions from <ref> and methods of analytic combinatorics. More precisely, using <ref> we are interested in the limit of [(π,_n)], which we will express in terms of probability that a tree with marked leaves induces a given subtree. This probability itself will be expressed as the ratio of the coefficients of the generating function of trees with marked leaves inducing a given subtree and of the generating function of trees without marked leaves. We begin with the study of the asymptotics of these generating functions. Notation: throughout the article, the class , or equivalently its set of simple permutations , are considered as fixed, and so is the pattern π or the tree t_0 of which we are studying the proportion of occurrences (and therefore their size k). 
Constants in asymptotic expansions, including the ones in o, 𝒪 and Θ symbols, may therefore depend on these objects.§.§ Asymptotics of the generating function of trees with no or one marked leafFrom <ref> p.eq:Tnonp, we haveT_=z+ Λ(T_)whereΛ(u)=u^2/1-u+ S(u/1-u). We denote by R_Λ the radius of convergence of Λ. Note that R_Λ= R_S1+R_S≤ 1. We will also use repeatedly the inverse equation R_S=R_Λ1-R_Λ. In the following, to lighten the notation,we write Λ'(R_Λ) := lim_r→ R_Λ r < R_ΛΛ'(r). Note that Λ'(R_Λ) may be ∞. Differentiating <ref>, we getΛ'(u)= 1/(1-u)^2(1 + S'( u/1-u) )-1.In particular, it follows that the condition (<ref>) is equivalent to R_Λ >0 and Λ'(R_Λ)>1. Since S is analytic at 0 with nonnegative coefficients, the same holds for Λ.Moreover, the series expansion of Λ is Λ(u) = u^2 + ∑_i≥ 3λ_i u^i,with λ_i ≥ 1for all i ≥ 3.In particular it is aperiodic, in the sense given in <ref>. Assume that (<ref>) holds, and recall that κ is defined by S'(κ) =2 (1+κ)^2 - 1. There is a unique τ∈ (0,R_Λ) such that Λ'(τ)=1, and we have τ = κ/1+κ. The generating functions T and T_ have the same radius of convergence ρ=τ-Λ(τ)∈ (0,τ) and have a unique dominant singularity[For the reader who is not familiar with complex analysis, all useful definitions and results are given in <ref>. In particular, "near ρ" means "in a Δ-neighborhood of ρ", where "Δ-neighborhood" is defined in <ref>. The formal definition of (unique) dominant singularity is given in <ref> p.eq:def_Exp. ] in ρ. Their asymptotic expansions near ρ are: T(z)= τ/1 - τ - β λ √(1-zρ) + 𝒪(1-zρ), T_(z) = τ - β√(1-zρ) + 𝒪(1-zρ). where β = √(2 ρ/Λ”(τ)) and λ= 1/(1-τ)^2. In particular, T and T_ are convergent at z=ρ and T(ρ)= τ/1 - τ, T_(ρ)=τ. This type of behavior with a square-root dominant singularity is classical for series defined by an implicit equation (such as T_, which is characterized by T_(z) = z + Λ(T_(z))), that belong to the smooth implicit function schema <cit.>. This schema is defined by the existence of a solution to some characteristic equation, which in our case reduces to hypothesis (<ref>), as explained in <ref>. Our result is a special case of <cit.>, a general result for equations of the form U(z) = z +Λ(U(z)). Implicit equations of this form characterize generating functions of weighted trees counted by their number of leaves, and are also considered in <cit.>. From <ref>, Λ' is strictly increasing in the real interval (0,R_Λ). Together with the fact that Λ'(0)=0 and the assumption Λ'(R_Λ)>1 (see <ref>), this proves the existence and uniqueness of τ >0 such that Λ'(τ)=1. Setting v=u1-u in Λ', which is given by (<ref>), we have Λ'( v 1+v) = (1+v)^2(1+S'(v)) - 1. It follows that Λ'(κ/1+κ) = 1. By uniqueness of τ, we conclude τ = κ/1+κ. We now consider the expansion of T_ and deduce afterwards the one of T. From <ref>, we have T_(z)=z+Λ(T_(z)). Then Theorem 1 in <cit.> gives that T_ is analytic at 0 and has a unique dominant singularity of exponent 1 2 in ρ=τ-Λ(τ), with the expansion given in <ref>: T_(z)=τ - β√(1-zρ) + 𝒪(1-zρ). The next step is to justify that ρ∈ (0,τ). That ρ <τ follows from Λ(τ)>0 (since τ >0). Moreover, since Λ has nonnegative coefficients and no constant term, we have Λ(τ)<τΛ'(τ)=τ, so that ρ>0. Finally, we look at the series T=T_1-T_ (see <ref>). Observe that T_(ρ)=τ < R_Λ≤ 1. Consequently, the dominant singularity of T is the same as T_, i.e. 
ρ,and is still unique – indeed this singularity is reached before that the denominator vanishes; more formally, this is a particular case of subcritical composition (<ref>). The asymptotic expansion of T near ρ is obtained through the following computation:T(z)=τ - β√(1-zρ) + 𝒪(1-zρ)/1-τ + β√(1-zρ) + 𝒪(1-zρ) = τ/1 - τ - β/(1 - τ)^2√(1-zρ) + 𝒪(1-zρ). All generating functions T', T'_, T'_, T^+, T^+_, T^+_, T^-, T^-_ and T^-_ have a unique dominant singularity in ρ. They diverge at the singularity z=ρ and behave as K (1-zρ)^-1/2, where the constant K is given in the table below: [ superscript\subscript ∅not⊕not⊖; ' λ^2 λ λ; + λ; - λ; ] with = β(1-τ)^2/2ρ (recall that λ = 1/(1-τ)^2 was defined in <ref>).Note that the table is in fact a rank 1 matrix. Namely,passing from a root different from ⊕ (resp. ⊖) to a nonconditioned root always adds a factor λ, independently of the condition on the marked leaf. Similarly, removing the leaf condition always yields the same factor λ independently of the conditions on roots. By singular differentiation (see <ref>)of <ref>,we have (near ρ)T'(z)= β/2ρ(1-τ)^2(1-zρ)^-1/2 +Ø(1) = λ^2 (1-zρ)^-1/2 +Ø(1) ,T_'(z)=β/2ρ(1-zρ)^-1/2 +Ø(1)= λ (1-zρ)^-1/2 +Ø(1).Since T_=T_, we have obtained the constants in the first line of the table. We now turn to the two last lines. In order to use <ref> p.eq:tp, we first compute the expansions of all intermediate quantities appearing in these formulas.From <ref>, we obtain the following expansion near ρ:1+ W(z)= (1/1-T_(z))^2 = 1/(1-τ)^2 -2β/(1-τ)^3√(1-zρ) + 𝒪(1-zρ). We turn to the expansion of S'(T).Putting u=T_ in Λ'(u) (which is given by (<ref>)) and using <ref>, we have1+S'(T) = 1+S'(T_1-T_) = (1-T_)^2(Λ'(T_) +1).Recall that T_(ρ) = τ < R_Λ.Therefore, the composition Λ'∘ T_ is subcritical (see <ref>).This implies that Λ'∘ T_ has a unique dominant singularity at ρ,and plugging in the asymptotic expansion (<ref>) of T_ at ρ, we obtain Λ'(T_(z))= Λ'(τ) + Λ”(τ)(T_(z) - τ) + O((T_(z) - τ) ^2)= 1 - 2ρβ +O ,where we used the equalities Λ'(τ) = 1 (by definition of τ) and Λ”(τ) = 2ρβ^2 (by definition of β).Combining <ref> and <ref> into <ref>, we obtain, after simplification:1 + S'(T) = 2(1-τ)^2 - [ 2ρβ (1-τ)^2 - 4β(1-τ)] + ØThe expansion ofWS'(T) + W + S'(T) then follows from <ref> and <ref>:WS'(T) + W + S'(T) = (1+W)(1+S'(T)) - 1= 2(1- τ)^2(1- τ)^2 -1 - [4β(1- τ)^2(1-τ)^3 + 2ρ(1- τ)^2β(1- τ)^2 - 4β(1- τ)(1- τ)^2] + Ø= 1 - 1λ + Øby definition ofand λ.We can now derive the expansions of our generating functions, using <ref>. First,T^+=1/1-S'(T) -- S'(T)=λ(1-zρ)^-1/2+Ø(1).Then,T^+_=1/1+W T^+ = λ/λ(1-zρ)^-1/2+Ø(1) =(1-zρ)^-1/2+Ø(1).Since S'(T) ++ S'(T) takes value 1 at ρ, the series T^+_ has the same first-order expansion:T^+_=( S'(T) ++ S'(T)) T_^+= (1-zρ)^-1/2+Ø(1). By symmetry, we have T^-=T^+, T_^-=T_^+ and T_^-=T_^+, and this completes the proof of the proposition.§.§ Asymptotics of the generating function of marked trees with a given induced tree We recall some notation introduced in <ref>. Let t_0 be a substitution tree with k≥ 2 leaves and e edges. Let V_* (resp. V_+, V_-) be the set of nonlinear nodes (resp. nodes labeled ⊕, ⊖) in . Recall that, for v ∈t_0, d_v is the degree of v and θ_v the permutation labelingv, and that _ is the set of 𝒮-canonical trees t with k marked leaves such that these leaves induce .Denote by T_ the generating function of _ (where the size is the number of leaves, both marked and unmarked). 
The series T_ has a unique dominant singularity in ρ, with the asymptotic expansion T_ = _ (1- z ρ)^-(e+1)/2 (1+o(1)), where the constant _ is _ =ρ^k (λ^2)^e+1∏_v∈ V_*_θ_v(τ1-τ)×∏_v∈ V_+ ∪ V_- (_θ_v(τ1-τ) +(1-τ)^d_v+1).By definition, T_=∑_V_s T_,V_s, where the sum runs over sets V_s such that (,V_s) is a decorated tree. We start from the formula for T_,V_s, which is given by <ref>. From <ref>, the nine series T',…,T^-_ of trees with one marked leaf all have unique dominant singularities in ρ. This is also the case for the functions _θ_v(T) and ( 1 1-T_)^d_v by subcritical composition (see <ref>). Indeed, from <ref> the radius of convergence of _θ is at least R_S and from <ref>, T and T_ are convergent at ρ with T(ρ) = τ1-τ< R_Λ1-R_Λ = R_S and T_ (ρ) = τ < 1. As a consequence, T_ has a unique dominant singularity in ρ (see <ref>).For exact asymptotics near ρ, note that _θ_v(T) and 1 1-T_ converge respectively to _θ_v(τ1-τ) and 1 1-τ at ρ (see <ref>), while the nine series T',…,T^-_ behave as (1-zρ)^-1/2, where the constants are given in <ref>. We thus get using the notation of <ref>:z^kT^type of root = ρ^k λ^1+_ root∈ V_s (1- z ρ)^-1/2 (1+o(1));A_v = (1- z ρ)^-d_v/2(1+o(1)) ·_θ_v(τ1-τ) (λ^2)^d'_v (λ)^d_v^+ + d_v^- if v ∈ V_s;(1 1-τ)^d_v+1 (λ)^d'_v^d_v^+ + d_v^- if v ∉ V_s.The asymptotic behavior of T_,V_s near ρ is then obtained by multiplying the above expressions. The formula can be simplified by observing that ∑_v∈ (d'_v + d_v^+ + d_v^-) =∑ _v∈ d_v= eand∑_v∈d'_v +_ root∈ V_s = |V_s|+k, and we obtain:T_,V_s =(1+o(1)) (1- z ρ)^-(e+1)/2ρ^k ^e+1λ^1+k+|V_s|∏_v∈ V_sλ^d_v_θ_v(τ1-τ) ∏_v∉ V_s(1 1-τ)^d_v+1=(1+o(1)) (1- z ρ)^-(e+1)/2ρ^k (λ^2)^e+1∏_v∈ V_s_θ_v(τ1-τ) ∏_v∉ V_s[(1 1-τ)^d_v+1λ^-d_v-1].To write the second line, we have used that ∑_v∈ V_s d_v + ∑_v∉ V_s (d_v+1) +1+k+|V_s| =∑_v∈ d_v + || +k +1 = 2e+2.Now we have that T_ is the sum of T_,V_s over sets V_s such that (,V_s) is a decorated tree. By definition, such V_s can be written as V_* ∪V_s for some V_s⊂ V_+ ∪ V_- (the notation V_*, introduced right before the proposition, is the set of nonlinear nodes of ). This change of variables leads toT_ =(1+o(1)) (1- z ρ)^-(e+1)/2ρ^k (λ^2)^e+1∏_v∈ V_*_θ_v(τ1-τ) ×∑_V_s⊂ V_+ ∪ V_-( ∏_v∈V_s_θ_v(τ1-τ)) ( ∏_v∈ V_+ ∪ V_-v∉V_s[(1 1-τ)^d_v+1λ^-d_v-1]). We first observe that since λ=(1-τ)^-2, the last factor simplifies as (1-τ)^d_v+1. The proposition then follows by writing the sum of products on the second line as a product of sums. §.§ Probability of tree patterns Recall thatis the set of 𝒮-canonical trees (ie canonical trees of permutations in ⟨𝒮⟩).We take a uniform random tree with n leaves in and mark k of its leaves, also chosen uniformly at random. We denote by 𝐭^(n)_k the tree induced by the k marked leaves. Let k≥ 2, and letbe any substitution tree with k leaves. Then (𝐭^(n)_k = ) = k! 2√(π)/Γ(e(t_0)+1 2)(1-τ)^2/β_n^e()/2+1-k(1+o(1)),where _ is given by <ref>, and e() is the number of edges of . Directly from the definition, we have:(𝐭^(n)_k=) = [z^n] T_(z)/nk [z^n] T(z). The Transfer Theorem (<ref>) gives us the asymptotic behavior of [z^n] T(z) and [z^n] T_(z) from the asymptotic expansions in <ref> (<ref>) and <ref>. Deriving the result from there is a routine exercise.§.§ Back to permutationsLet π be a permutation of size k.Recall from <ref> that an expanded tree is a substitution tree where nonlinear nodes are labeled by simple permutations, while linear nodes are required to be binary. 
As in <ref>, we denote N_π the number of expanded trees of π.We know (see <ref>) that they have all the same number of linear nodes labeled ⊕ (resp. ⊖), this number being denoted by r_+ (resp. r_-) and they all contain the same r_* simple nodes, whose labels will be denoted as θ_1, …,θ_r_*.We introduce the default of binarity of the permutation π:(π)= ∑_i=1^r_* (|θ_i|-2).Observe that (π)=0 if and only if π is separable.Finally, to state the next proposition, we also need to introduce the quantities ν_+ = _12(τ/1-τ), ν_- = _21(τ/1-τ)and p = ν_+ +(1-τ)^3/ν_+ + ν_- +2(1-τ)^3,Note that this is the samep as in <ref>. Let π∈_k with k≥ 2 and let _n be auniform random permutation in ⟨⟩_n. With notation as above, we have [(π,_n)] = _πn^-(π)/2(1+o(1)),where_π =N_π k! √(π)/2^2k-2-(π)Γ(k - (π)+1 2) p^r_+(1-p)^r_-∏_i=1^r_*[ρ^-1(β(1-τ)^2)^|θ_i|_θ_i(τ1-τ)] .We denote by I a uniform random k-element subset of [n] and by t^(n) a uniform random 𝒮-canonical tree with n leaves. It holds that _n d=( t^(n)). As a consequence of <ref>, we have [(π,_n)] = (_ I(_n) = π) =((𝐭^(n)_ I) = π) =∑_ : () = π (𝐭^(n)_ I = ). After plugging in the estimate of <ref>, we get [(π,_n)] =(1+o(1))∑_:() = π k! 2√(π)/Γ(e(t_0)+1 2)(1-τ)^2/β_ n^-()/2 where () = 2k-2 -e() is the default of binarity of the tree . We claim that ifis a substitution tree of π, then ()≥(π) with equality if and only ifis an expanded tree. Indeed,() = e() + 2( k-1-e())= ∑_v ∈ (d_v -2).Moreover from<ref> any substitution tree can be obtained from an expanded tree of π by merging some internal nodes along edges connecting them and such merges always increase (strictly) the considered sum, which proves the claim.It follows that, in the sum of <ref>, only expanded trees appear asymptotically.Moreover, e() and the constant _ does not depend on the choice of an expanded treeof π.As a result, we get [(π,_n)] =(1+o(1)) _πn^-(π)/2 where _π = N_πk!2√(π)/Γ(e+1 2)(1-τ)^2/βρ^k (λ^2)^e+1(∏_i=1^r_*_θ_i(τ1-τ)) (ν_+ + (1-τ)^3 )^r_+(ν_- + (1-τ)^3 )^r_- =N_π k! 2 √(π)/Γ(e+1 2) p^r_+(1-p)^r_-(∏_i=1^r_*_θ_i(τ1-τ)) ×ρ^k(λ^2)^e+1(1-τ)^2/β(ν_+ + ν_- + 2(1-τ)^3)^r_+ +r_-Differentiating <ref> p.eq:LambdaPrime and using Λ'(τ) = 1 yield the identity Λ”(τ) =4 1-τ +1 (1-τ)^4 S”(τ1-τ). Moreover, since _12 +_21 = S”2 (see <ref>), this gives us ν_+ + ν_- + 2(1-τ)^3 = (1-τ)^4/2Λ”(τ) = (1-τ)^4 ρ/β^2 Finally, after collecting everything together, we get ρ^k(λ^2)^e+1(1-τ)^2/β(ν_+ + ν_- + 2(1-τ)^3)^r_+ +r_- =ρ^k(β/2ρ(1-τ)^2)^e+1(1-τ)^2/β(ρ (1-τ)^4/β^2)^r_+ +r_- = 1/ρ^r_* 2^e+1(β/(1-τ)^2)^2r_* +(π), where the last equality above has been obtained using that, for any expanded treeof π, we have r_+ + r_- + r_* = || = e-k+1 and (π) = 2k-2-e. This allows us to simplify <ref> and yields the desired value of _π.We can now conclude the proof of <ref>. Let _n be auniform random permutation in ⟨⟩_n. Our goal is to show that μ__n converges to the biased Brownian separable permuton of parameter p. Let π be any permutation of size k≥ 2.As a consequence of <ref> (with <ref>) and <ref>, we just have to show that [(π,_n)] n→∞→N_π/_k-1p^r_+(π) (1-p)^r_-(π).Assume first that π is not separable.In this case, we have N_π = 0. It also holds that (π)>0, and <ref> implies that[(π,_n)] → 0. Assume on the contrary that π is separable. In this case, (π) = 0, N_π = N_π and r_*=0.Therefore, from <ref> we get that [(π,_n)] n→∞→N_πp^r_+(π) (1-p)^r_-(π) k! 
√(π)/2^2k-2Γ(k-12) = N_π/_k-1p^r_+(π) (1-p)^r_-(π),where we have used the identity Γ(k-12) = 2^3-2k√(π) Γ(2k-2)/Γ(k-1) = 2^3-2k√(π)(2k-3)!/(k-2)!coming from the duplication formula of the Gamma function. This concludes the proof. §.§ Occurrences of nonseparable patternsSince [(π,_n)] tends to 0 whenever π is a nonseparable pattern and the random variabletakes only nonnegative values, (π,_n) tends to 0 in probability.Here, we discuss more precisely the asymptotic behavior of (π,_n) in this case. The first result gives the order of magnitude of its moments; we then present a consequence for the random variable itself. For π∈⟨⟩ and m≥ 1, [((π,_n))^m] = Θ (n^-(π)/2). This result does consider separable patterns π, but in this case it is a direct consequence of our main theorem. Indeed, according to <ref>, <ref> entails convergence in distribution of ((π,_n))_n to (π,Μ^(p)), jointly for all π∈, and hence of all moments and mixed moments (since those random variables are bounded by 1). Namely, we have [(π,_n)^m][(π,Μ^(p))^m].This limiting value is positive if and only if π is separable, and can be computed exactly by adapting the method exposed in <cit.>. By definition, (π,_n) =n k ^-1∑_I ⊂ [n], |I| = k__I(_n) = π, where we use k for the size of the pattern π, as usual. Consequently, [(π,_n)^m] =n k ^-m [ ∑_I_1,…, I_m ⊂ [n] ∀ i,|I_i| = k_∀ i,_I_i(_n) = π]. We split the sum according to the different possible values of K = ⋃_i I_i and j = |K|.Denoting B^K_k,m the set of possible ordered covers of K by m sets of size k, this gives [(π,_n)^m] =n k ^-m [ ∑_j=k^mk∑_K⊂ [n]|K|=j∑_(I_1,…, I_m) ∈ B^K_k,m_∀ i,_I_i(_n) = π]. Let us now remark that the unique increasing bijection between K and [j] induces a bijection between B^K_k,m and B^[j]_k,m. Let (J_i)_1≤ i≤ m denote the image of (I_i)_1≤ i ≤ m by this bijection. Clearly, _I_i(_n) = π_J_i(_K(_n)) = π. The sum can now be decomposed according to the different values of ρ =_K(_n) yielding [(π,_n)^m]=n k ^-m [ ∑_j=k^mk∑_K⊂ [n]|K|=j∑_(J_1,…, J_m) ∈ B^[j]_k,m∑_ρ∈_j__K(_n) = ρ_∀ i,_J_i(ρ)=π] = ∑_j=k^mk∑_(J_1,…, J_m) ∈ B^[j]_k,m∑_ρ∈_j ∀ i,_J_i(ρ)=π n k ^-m n j[(ρ,_n)].Since the summation index sets do not depend on n, it is enough to consider each summand separately to get the asymptotics. From <ref>, the summand n k ^-m n j[(ρ,_n)] is of order n^j-km -(ρ)/2. Whenever <ref> holds, π is a pattern of ρ =_K(_n). As a consequence, an expanded tree of ρ must have a substitution tree of π as an induced tree. Since the default of binarity may only decrease when taking induced trees, this implies that (ρ) ≥(π). Since additionally j ≤ km, we deduce that j-km -(ρ)/2 ≤ -(π)/2 which gives [(π,_n)^m] = Ø(n^-(π)/2). To prove that [(π,_n)^m] = Θ(n^-(π)/2), it is then enough to find one summand, which grows as n^-(π)/2 for large n. This is achieved considering the summand indexed byj = km;J_i = { mq + i:0 ≤ q ≤ k-1};ρ = π[1⋯ m,…,1⋯ m].Indeed in this case, (ρ)=(π), so that j-km -(ρ)/2 = -(π)/2, which concludes the proof of the proposition. For π∈⟨⟩ and> 0 small enough, ( (π,_n)> ) = Θ (n^-(π)/2), where the constant in the Θ symbol depends on . The upper bound is an immediate consequence of Markov's inequality.For the lower bound, let X be a random variable in [0,1], we have[X^2] ≤ [_(X<)X+_(X≥)] ≤[X] + (X ≥). The corollary follows by taking X=(π,_n) andsmall enough. 
§ ASYMPTOTIC ANALYSIS: THE DEGENERATE CASE S'(R_S)<2/(1+R_S)^2-1In this section, we are interested in the case where the generating function S of simple permutations in 𝒮 satisfies the following condition.The generating function S of a family 𝒮 of simple permutations is said to satisfy hypothesis (H2) if S meets the following conditions at its radius of convergence R_S>0: * S' is convergent at R_S and S'(R_S) < 2/(1+R_S)^2 -1; *S has a dominant singularity of exponent δ>1 in R_S.<ref> means that, around the singularity R_S, one hasS(z)=g_S(z) + (C_S+o(1)) (R_S-z)^δ,for some analytic function g_S and constant C_S ≠ 0. We refer to <ref> for a precise definition. Clearly, under (H2), it holds that R_S < ∞.Note also that the assumption δ >1 is redundant with the convergence of S' at R_S.§.§ Asymptotic behavior of the main seriesAs in <ref>, the first step is to derive the asymptotic behavior of all generating functions for marked trees around their common dominant singularity. In this section, we will not compute constants explicitly, but only focus on the singularity exponent. Indeed, keeping track only of singularity exponents is here sufficientto determine the limiting permuton.The function Λ is defined in (<ref>) by:Λ(u)= u^2/1-u +S( u/1-u). Assume that S satisfies hypothesis (H2). Then Λ has a unique dominant singularity of exponent δ in R_Λ:=R_S1+R_S<1. Moreover, Λ' is convergent at R_Λ and Λ'(R_Λ)<1.The first assertion follows from <ref> (Supercritical case), using also that R_S < ∞.The convergence of Λ' at R_Λ follows from that of S' at R_S.Finally, the inequality Λ'(R_Λ)<1 is a straightforward computation from (<ref>) (recall that Λ' is given in (<ref>)). Recall that from <ref> (p. eq:Tnonp) T_ is implicitly defined by the equationT_(z)=z + Λ(T_(z)).As explained in <ref>, the condition Λ'(R_Λ)<1 implies that the singularity of T_(z) is not a branch point, but is inferred from the singularity of Λ. Assume that S satisfies hypothesis (H2). Then there isa unique ρ>0 such that T_(ρ)=R_Λ. Moreover, ρ is the radius of convergence of T_ and T_ has a unique dominant singularity of exponent δ in ρ. The proof is rather technical and postponed to <ref>.Now since from <ref>: T=T_1-T_, atthe singularity ρ of T_, the denominator of T(ρ) is 1-T_(ρ)=1-R_Λ>0, hence the singularity of T is inherited from the one of T_. More precisely, from <ref> (Subcritical case), we have the following corollary. Assume that S satisfies hypothesis (H2). Then T has a unique dominant singularity of exponent δ in ρ, with T(ρ) = R_Λ1-R_Λ = R_S. We now turn to the behavior of generating function of trees with one marked leaf.Assume that S satisfies hypothesis (H2). Then each of the nine generating functions T', T^+, …, T_^-has aunique dominant singularity of exponent δ-1 in ρ. For T' (resp. T'_=T'_) this follows immediately from <ref> (resp. <ref>)by singular differentiation (<ref>).Recall that T^+ is explicitly given in <ref> asT^+=1/1-S'(T) -- S'(T)where =(11-T_)^2-1.We examineand S'(T) to determine the exponent of the singularity of T^+ at its radius of convergence (which we will prove to be ρ). Since T_(ρ)=R_Λ <1, again by subcritical composition,<ref> gives thathas a unique dominant singularity of exponent δ in ρ.Moreover, (ρ) = (11-R_Λ)^2-1. 
As for S'(T) we need to analyse S' and T and determine whether the composition is critical, supercritical or subcritical (see <ref>).* By singular differentiation (<ref>), it follows from (H2) thatS' has a dominant singularity of exponent δ -1 in R_S.In addition, S'(R_S) < 2/(1+R_S)^2 -1 by (H2). * Recall that from <ref>T has a unique dominant singularity of exponent δ in ρ, with T(ρ) = R_Λ1-R_Λ = R_S. The composition S' ∘ T is therefore critical.* Moreover, T is aperiodic since it counts a superset of the separable permutations.From <ref> (Critical case-A),we obtain that S'(T) has a unique dominant singularityof exponent δ-1 in ρ, and therefore so does S'(T) ++ S'(T) (using <ref>). In ρ, the value of the series S'(T) ++ S'(T) = ( +1) (S'(T)+1) -1 is less than(11-R_Λ)^2 2(1+R_S)^2 -1 = 1.Therefore, by subcritical composition, T^+=1/1-S'(T) -- S'(T)has a unique dominant singularity of exponent δ-1 in ρ.Since 1/1+ and ( S'(T) ++ S'(T)) have unique dominant singularities in ρ of respective exponents δ and δ - 1, the other cases follow by <ref>, using the formulas given in <ref>. §.§ Probability of given patternsRecall that the function _θ was defined in <ref> by _θ(z) = ∑_α∈𝒮occ(θ,α) z^|α|-|θ|.Unlike in the previous section, the functions _θ(z)will appear in the asymptotic behaviors, thus we need some additional assumptions on them. First, as noticed in <ref>, we have ∑_θ∈𝔖_k_θ(z)= 1/k! S^(k)(z),which, under (H2), has a dominant singularity of exponent δ-k in R_S (see <ref>, about singular differentiation). The following hypothesis is thus reasonable.Let S have a dominant singularity of exponent δ>1 in R_S. The family of simple permutations 𝒮 satisfies the hypothesis (CS) if, for each pattern θ of size k, the corresponding series _θ(z) has a dominant singularity of exponent at least δ-k in R_S. Letbe a substitution tree with k leaves.Recall thatis the set of internal nodes of ,and that for any node v ∈, d_v denotes its number of children. Recall also from <ref> (p. Subsec:DecompTrees) that _ is the family of canonical trees with k marked leaveswhich induce a tree equal to ,and that T_ is its generating function.Combining <ref> and the above results, we obtain the following. For any substitution treeof size k≥2, assuming (H2) and (CS),the series T_ has a unique dominant singularity of exponent at least e_t_0 in ρ, where * e_t_0 = ∑_v: d_v > δ (δ - d_v), if there is at least one node v such that d_v > δ; * e_t_0 = δ - max_v ∈ d_v otherwise[Note for future reference that the two expressions for e_t_0 are equal when there is a unique v such that d_v > δ.]. First recall that _ can be decomposed as a union ⋃_V_s_,V_s, where V_s are subsets of which contain all nonlinear nodes (see <ref>). It is therefore enough to prove that each series T_,V_s has a unique dominant singularity in ρ with at least the desired exponent.Recall from <ref> the following formula for T_,V_s(z):T_,V_s= z^k T^type of root∏_v∈ A_v,where each A_v is given by (<ref>) and depends on the type of v. Using hypothesis (CS), <ref> and <ref> (Critical case-A), we know that _θ_v(T) has a dominant singularity of exponent at least δ-d_v in ρ,and, from <ref>,thatitis the term with the lowest exponent and the only possibly divergent term arising in A_v.It then follows from <ref> that A_v has a dominant singularity of exponent at least δ-d_v in ρ. 
The series T_,V_s(z) is then the product of the A_v's and of some convergent series.The result of the proposition is obtained from <ref>:T_,V_s(z) has a dominant singularity of exponent at least ∑_v: d_v > δ (δ - d_v) in ρ, if this sum is nonempty (i.e., when one of the series is divergent) andof the smallest singularity exponent among them, that is δ - max_v ∈ d_votherwise, i.e., if all factors are convergent. As in <ref> (p. section:TreePatterns), we take a uniform random tree with n leaves inwith k marked leaves(chosen also uniformly at random). Denote as before by 𝐭_n^k the tree induced by the k marked leaves.Assume (H2) and (CS).For any substitution treewith k ≥ 2 leaves,the probability (𝐭^(n)_k=) tends to 0,unlesshas only one internal node.As in <ref>, we use the formula (𝐭^(n)_k=) = [z^n] T_(z)/nk [z^n] T(z) From <ref> and <ref> (and using the notation e_t_0 herein defined),we get that [z^n] T_(z) = (C̃_ +o(1)) ρ^-n n^-1-e_t_0 for some constant C̃_, possibly equal to 0.On the other hand, <ref> and <ref> imply that[z^n] T(z) = C ρ^-n n^-δ-1 (1+o(1)), for some constant C 0.Putting everything together, we obtain(𝐭^(n)_k=)=k! (C̃_+o(1))/C n^e_, where e_= δ-k-e_t_0. For any subset A of the internal nodes of a tree t with k leaves,we claimthat the following inequality holds: ∑_v ∈ A d_v ≤ |A|+k-1.It is clear when k=1. For k>1, we decompose t as a root ∅ with d≥2 subtrees t_1, …, t_d. The chosen set A of nodes of t determines a set A_i of nodes in each tree t_i that has k_i leaves. Assume that ∑_v ∈ A_i d_v≤ |A_i|+k_i-1 for all i.Then, we have ∑_v ∈ A d_v = d ·_∅∈ A + ∑_i=1^d ∑_v ∈ A_i d_v ≤ d ·_∅∈ A + ∑_i=1^d |A_i|+k_i-1= d ·_∅∈ A + |A| - _∅∈ A +k -d = |A|+k-1 + (d-1)(_∅∈ A-1),and with the observation that (d-1)(_∅∈ A-1) ≤ 0, this proves our claim.Set A={v ∈: d_v >δ}; if |A| ≥ 1, one has e_ = δ-k-δ|A| + ∑_v ∈ A d_v ≤δ -k -δ |A| + |A|+k-1 = (1-|A|) (δ -1), which is negative for |A| ≥ 2 (indeed, δ >1 by (H2)).When |A|=0 or |A|=1, we havee_t_0=δ - max_v ∈ d_v(see also <ref>)and, therefore, e_= max_v ∈ d_v -k is negative unlesshas exactly one internal node (which is then of degree k).It is now straightforward to translate this result in terms of the probability to find a given pattern in a random permutation in the set :=⟨⟩.As recalled in <ref>, the hypothesis (CS) is equivalent to the following: for every k ≥ 1 and every permutation θ of size k, there exists an analytic function g_θ and a constant C_θ (possibly equal to 0) such that, on an Δ-neighborhood of R_S, it holds that_θ(z)=g_θ(z) + (C_θ+o(1)) (R_S - z)^δ-k.The quantities C_θ are involved in the statement of the following theorem.Let _n be a uniform random permutation in ⟨𝒮⟩_n. We assume hypotheses (H2) and (CS). Then, for any k≥ 2 and for any π∈_k,lim_n→∞ [(π,_n)] = C_π/∑_θ∈_k C_θ.Consequently, there exists a random permuton Μ_ with[(π,Μ_)]= C_π/∑_θ∈ S_k C_θsuch that (μ__n)_n tends to Μ_ in distribution. The starting point is the same as in the standard case (<ref>). As before we denote by I a random uniform k-element subset of [n], and 𝐭^(n)_k is the tree of size k induced by k uniform leaves in a uniform canonical tree of size n in . From <ref>, we have ((π,_n)) = (_ I(_n) = π) =((𝐭^(n)_k) = π) = ∑_:() = π (𝐭^(n)_k = ),where the sum runs over all substitution trees encoding π. Denote by ^π the substitution tree with only one internal node labeled by π. 
When n tends to infinity, using <ref>, we know that every term in the above sum vanishes, but the term corresponding to ^π:lim_n →∞((π,_n)) =lim_n →∞ (𝐭^(n)_k = ^π).Now we can compute directly from <ref> thatT_^π = _π linearz^kT^+(1 1-T_)^k+1 (T')^k +z^k_π(T)(T')^k+1.The first term has a dominant singularity of exponent at least δ-1, while the dominant singularity exponent of the second term is at least δ-k. We therefore focus on the second term and get by an easy computation that, around z=ρ, we have _π(T(z))= g(z) + (C_π T'(ρ)^δ-k +o(1) ) (ρ-z)^δ-k,and thusT_^π(z) = h(z) +(ρ^k T'(ρ)^δ+1 C_π +o(1)) (ρ-z)^δ-k,where g and h are analytic functions. Then we can apply the Transfer Theorem (<ref>) to T and T_^π (as in the proof of <ref>)and obtain that lim_n →∞ (𝐭^(n)_k = ^π) is proportional to C_π for π∈_k.Since the left-hand side of <ref> sums to one (when summed over π∈_k, for a fixed k), this proves <ref>.The rest of the statement follows immediately, using <ref> (with <ref>). §.§ Hypothesis (CS) and convergence of uniform random simple permutationsWe may wish to replace the hypothesis (CS) with a less technical hypothesis, such as the convergence of a random simple permutation in our setto the random permuton Μ_ uniquely determined by <ref>. We show here that, though not equivalent, these hypotheses are strongly related. Remark that we do not assume (H2) here, so that the existence ofμ_ cannot be inferred from <ref> and needs a separate proof. Suppose that S has a dominant singularity of exponent δ>1 andassume condition (CS). Then there exists a permuton Μ_ such that[(π,Μ_)]= C_π/∑_θ∈ S_k C_θ,where the C_π are given by <ref> (which holds under hypothesis (CS)).Let α_n be a uniform random permutation of size n in .If (μ_α_n) converges in distribution, then its limit is Μ_.Conversely, if we assumethat S and all series _θ have a unique dominant singularity, then (μ_α_n) converges in distribution (and the limit must be Μ_, using the first part of the proposition).Before giving the proof, let us do the following observation. If both (H2) and (CS) are satisfied, then we can apply both <ref>. By comparing <ref>, we have Μ_=Μ_ in distribution (recall that the distribution of a random permuton is determined by its expected pattern densities,see <ref>). In particular, assuming that a uniform random simple permutation in the class converges in distribution to some random permuton, then this random permuton is Μ_=Μ_, that is the limit of a uniform random permutation in the class. This justifies a claim in the introduction.We start with the existence of Μ_. For every k≥ 1, let ρ_k be a random permutation in _k such that (ρ_k = π) = C_π /(∑_θ∈ S_k C_θ). By <ref>, we only needto show that (ρ_k)_k forms a consistent family. Let 1≤ k ≤ n, then for π∈_k,(_ I_n,k(ρ_n) = π) = 1/ n k ∑_θ∈ S_k C_θ∑_σ∈_n occ(π,σ) C_σ.On the other hand,the following combinatorial identity can be derived from the definition of the (_θ)_θ∈:1/(n-k)!_π^(n-k)(z) = ∑_σ∈_n occ (π,σ) _σ(z).Indeed, the left-hand side is the series of simple permutations in 𝒮 whose entries are partitoned into a set of k marked entries forming a pattern π and a set of n-k marked entries. The right-hand side counts the same object, according to the pattern σ formed by all the n marked entries. 
To distinguish the marked entries of the first set from the ones of the second set, we need to specify a subpattern π inside the pattern σ, which explains the factor occ(π,σ).We now differentiate both sides of <ref> m times so that δ - n - m <0, and replace all series with their asymptotic estimates obtainedthanks to hypothesis (CS), <ref> and singular differentiation (<Ref>) [For x∈ℂ and r∈ℕ, we denote by (x)_r the falling factorial x(x-1)⋯(x-r+1)].g_π^(m)(z) +(-1)^m+n - k(δ-k)_m+n-k(C_π+o(1))(R_S - z)^δ - m - n = (n-k)! ∑_σ∈_n occ (π,σ) [ g_σ^(m)(z) + (-1)^m(δ - n)_m (C_σ + o(1))(R_S - z)^δ - m - n]As only the singular parts diverge, taking the limit in z→ R_S allows to identify the constants, yielding ∑_σ∈_nocc (π,σ) C_σ = (-1)^n-kδ - kn-k C_π. Plugging this back in <ref> yields (_ I_n,k(ρ_n) = π) ∝ C_π, π∈_k.As probabilities sum to 1, we get (_ I_n,k(ρ_n) = π) = (ρ_k = π), proving the consistency of (ρ_k)_k. As noticed in the proof of <ref>, the convergence of (μ_α_n) to Μ_ is equivalent to the following: for any fixed k ≥ 2 and any π∈_k, the limit lim_n→∞[ (π,α_n)] exists and is proportional to C_π for π∈_k. Furthermore, by consistency, we only need to show it for large k. Directly from the definitions, we have[ (π,α_n)] = k! [z^n-k] _π(z)/[z^n-k]S^(k)(z). For the proof of the direct implication, we assume that μ_α_n converges in distribution. From <ref>, this means that [ (π,α_n)] has a limit Δ_π for every π of size k ≥ 2. Then when n goes to infinity,[z^n] _π(z) = Δ_π + o(1)/k! [z^n] S^(k)(z).As a consequence, for any fixed π and >0, there exists polynomials g_-,g_+ such that for any real z in [0,R_S), (Δ_π/k! - )S^(k)(z) + g_-(z) ≤_π(z) ≤(Δ_π/k! + )S^(k)(z) + g_+(z).Hypothesis (CS) implies that in R_S we have _π(z) = g_π(z) +(C_π+o(1))(R_S - z)^δ - k for some analytic function g_π. Also S^(k) has a dominant singularity of exponent δ-k in R_S so S^(k)(z) = g_S^(k)(z) + (C_S^(k)+o(1))(R_S - z)^δ-k for some analytic function g_S^(k) and constant C_S^(k)>0. Plugging these asymptotic estimates into (<ref>) yields(Δ_π/k! - ) [(C_S^(k) +o(1)) (R_S - z)^δ - k +g_S^(k)] +g_- ≤ (C_π +o(1))[(R_S - z)^δ- k +g_π]≤(Δ_π/k! + ) [(C_S^(k) +o(1)) (R_S - z)^δ - k +g_S^(k)] +g_+.Let k be such that δ - k < 0, so that the singular parts are the only diverging quantities when z→ R_S. After taking the limit we get |C_π - C_S^(k)/k!Δ_π| ≤ for everyand hence equality. We have proven that (Δ_π)_π∈_k is proportional to (C_π)_π∈_k for large k, as desired.For the converse, we start from <ref>. <ref> (which we can apply because of the hypotheses on S and _θ) gives the following asymptotic behavior when n→∞:[ (π,α_n)]= k!n^-k (C_π+o(1)) R_S^-n+k n^-δ+k-1/C_S R_S^-n n^-δ-1.For fixed k, the limit of the right-hand side is proportional to C_π, which concludes the proof of the proposition. § ASYMPTOTIC ANALYSIS: THE CRITICAL CASE S'(R_S)=2/(1+R_S)^2-1 The goal of this section is to describe the limiting permuton of a uniform permutation in a substitution-closed class , whose set of simple permutations satisfies the following hypothesis.A family 𝒮 of simple permutations is said to satisfy hypothesis (H3)if the generating function S meets the following conditions at its radius of convergence R_S>0: * S has a dominant singularity of exponent δ>1 in R_S;* S' is convergent at R_S and S'(R_S) = 2/(1+R_S)^2 -1. 
This hypothesis implies the following behavior of Λ near its singularity.Assume that S satisfies hypothesis (H3) and set, as before,Λ(u)= u^2/1-u +S( u/1-u).Then Λ has a unique dominant singularity of exponent δ in R_Λ:=R_S1+R_S<1. Moreover, Λ' is convergent at R_Λ and Λ'(R_Λ)=1.The statement on the singularity exponent follows from <ref> (Supercritical case). The statement on Λ' is a simple computation from (H3) and <ref>. In the following, we denote δ_*=min(δ,2). The behavior of T_ is given in the following lemma, whose proof is postponed to <ref>. Note that the exponent is different from the one observed in <ref> (the singularity comes here froma mixture of a branch point and of the singularity of Λ, as explained in <ref>). Assume that S satisfies hypothesis (H3). Then there isa unique ρ>0 such that T_(ρ)=R_Λ. Moreover, ρ is the radius of convergence of T_ and T_ has a unique dominant singularity of exponent 1/δ_* in ρ. From here, the strategy is similar as the previous sections, we therefore skip unnecessary details. As in the previous section, we deduce immediately the asymptotic behavior of all generating functions of marked trees:Assume that S satisfies hypothesis (H3) and define ρ as above. Then T has a unique dominant singularity of exponent 1/δ_* in ρ, with T(ρ) = R_S. Moreover each of the generating functions T', T_', …, T^-_ has a unique dominant singularity of exponent 1/δ_*-1 in ρ. To go further, we have to assume hypothesis (CS),i.e. that, for any θ of size k, the series _θ(z) has a unique dominant singularity of exponent at least δ-k in R_S. The composition _θ∘ T is then critical, and from <ref> (Critical case-B), _θ(T(z)) has a unique singularity of exponent at least 1δ_*min(δ-k,1) in ρ. By a straightforward computation from <ref>, we get the following singularity for the series T_,V_s. Fix a treeand a subset V_s of its internal nodes. Then the series T_,V_s has a singularity of exponent at least e_(,V_s):= (|E|+1)(1δ_*-1) + ∑_v ∈ V_s1δ_*min(δ-d_v,0). The behavior of the uniform permutation in ⟨⟩_n now depends on whether δ is smaller or larger than 2.§.§ The case δ∈ (1,2). Let 𝒮 be a family of simple permutations verifying hypothesis (H3) and (CS), with δ∈(1,2). We consider the permuton Μ_ as in <Ref> and denote for π∈_k Δ_π = [(π,μ_)] = C_π/∑_θ∈_k C_θ.Let also ν_δ,k() = k!/(δ -1) ⋯ ((k-1)δ - 1)∏_v∈(d_v - 1 - δ) ⋯ (2-δ)(δ - 1)/d_v! be the probability distribution of the induced subtree with k leavesin the δ-stable tree (see <cit.>).If _n is a uniform permutation in ⟨𝒮⟩_n, then lim_n→∞[(π,_n)] = ∑_:() = πν_δ,k()∏_v∈Δ_θ_v. As a consequence, μ__n converges in distribution to a random permuton, whose average pattern densities are determined by <ref> and depend only on δ and μ_. We call it the δ-stable permuton driven by μ_.A construction of the δ-stable permuton driven by ν for every δ∈ (1,2) and random permuton ν is given in <ref>. In this case, all possible patterns, in particular nonseparable ones, appear with positive probability in the limit (as long as they appear with positive probability in a uniform simple permutation in the class). More precisely, the proof will show the following: k random leaves in a uniform canonical tree induce substitution trees with arbitrary large node degrees, andthe first common ancestors of those leaves are all simple permutations with probability tending to 1. We start from the estimate of <ref>. In the present case δ_*=δ and δ-d_v is always negative. 
If we fix (,V_s) and apply <ref> to T and T_,V_s, we get [z^n]T_,V_s/ n k [z^n]T=(cst+o(1))n^e_(,V_s), withe_(,V_s) = 1δ - k - e_(,V_s) =1δ - k -(|E|+1)(1δ-1) +∑_v ∈ V_s(d_vδ - 1) = -∑_v ∈∖ V_s (d_vδ - 1), where the last identity uses |E|-|| -k +1=0. Therefore e_(,V_s)<0 unless V_s=,i.e. all internal nodes ofare in V_s. As a consequence, if t^(n) is a uniform canonical tree of size n and I is a uniform subset of the leaves of length k, ( t^(n)_ I = ) = ∑_V_s ⊆ t_0[z^n]T_,V_s/ n k [z^n]T= [z^n]T_,t_0/ n k [z^n]T +o(1). Now we focus on the case V_s = t_0. The series T_, has a dominant singularity of exponente_(,t_0) = 1/δ-k (see <ref>).We are left with identifying the constant in its singular expansion. In what follows, we will always denote by C_Athe constant in front of the singular part of the expansion of the analytic function A, i.e. A(z) = g_A(z) + C_A(R_A - z)^δ_A, with R_A the radius of convergence of A and g_A analytic at R_A.In particular, given that δ∈ (1,2) and k ≥ 2, we have the following expansions (recall T(ρ)=R_S):T(z) = R_S +(C_T+o(1))(ρ-z)^1/δ,which impliesR_S-T(z)=-(C_T+o(1))(ρ-z)^1/δ;T'(z)=-(C_Tδ+o(1))(ρ-z)^1/δ-1; _θ(w)= (C_θ+o(1)) (R_S-w)^δ-k.Using <Ref>, we can find the singular expansion of T_,: T_,(z) = z^k (T')^|E|+1∏_v_θ_v(T)= [ρ^k (-C_T/δ)^|E|+1∏_v∈ C_θ_v (-C_T)^δ - d_v +o(1) ] (ρ-z)^1/δ - k.From the Transfer Theorem (<ref>) applied to T and T_ we deduce ( t^(n)_ I = t_0) = Γ(-1/δ) ρ^-n +(1/δ - k)n^-(1/δ - k+1)[ρ^k (-C_T/δ)^|E|+1∏_v∈ C_θ_v (-C_T)^δ - d_v +o(1) ] / n k Γ(k-1/δ) ρ^-n +1/δn^-(1/δ +1) [C_T +o(1)] = -k! Γ(-1/δ)/Γ(k-1/δ)[ (-C_T)^|E|/δ^|E|+1∏_v∈ C_θ_v (-C_T)^δ - d_v +o(1) ] = -k!Γ(-1/δ)/Γ(k-1/δ)δ^k∏_v∈1/δC_θ_v (-C_T)^δ + o(1). The recursive property of the Gamma function gives-k! Γ(-1/δ)/Γ(k-1/δ)δ^k = k!/(δ -1) ⋯ ((k-1)δ - 1). Furthermore, by definition of Δ_θ_v, relation (<ref>) and singular differentiation of S, we get C_θ_v = Δ_θ_vC_S^(d_v)/d_v! = Δ_θ_vC_S/d_v! (-1)^d_vδ(δ - 1) … (δ - d_v + 1) = Δ_θ_vC_S/d_v!δ(δ-1)(2-δ)⋯(d_v -1 - δ). This allows us to rewrite (<ref>) aslim_n →∞( t^(n)_ I = t_0)=k!(δ -1) ⋯ ((k-1)δ - 1)∏_v∈(δ-1)(2-δ)⋯(d_v -1 - δ)/d_v!Δ_θ_vC_S (-C_T)^δ. Now the proof of <ref> (see <ref>) yields -C_T_ = C_Λ^-1/δ. We have also the following relations between the various constants (see <ref> in the appendix):C_Λ=C_S/(1-R_Λ)^2δ and C_T = C_T_/(1-R_Λ)^2. From there we deduce -C_T = C_S^-1/δ.Therefore <ref> rewriteslim_n →∞( t^(n)_ I = t_0)=k!(δ -1) ⋯ ((k-1)δ - 1)∏_v∈(δ-1)(2-δ)⋯(d_v -1 - δ)/d_v!Δ_θ_v.Summing over trees t_0 with (t_0)=π gives the theorem. §.§ The case δ>2. Let 𝒮 be a family of simple permutations verifying hypotheses (H3) and (CS), with δ >2. If _n is a uniform permutation in ⟨𝒮⟩_n, then _n converges in distribution to the biased Brownian separable permuton of parameter p, where p=(1+R_S)^3_12 (R_S)+1/(1+R_S)^3(_12 (R_S)+_21 (R_S)) +2.While the limiting permuton in this case is independent of δ>2 and is the same as in the standard case, the fine details of this convergence might be different. In particular, if π is a nonseparable pattern, the order of magnitude of [(π,_n)] depends on δ and is in general bigger than in the standard case – compare <ref> to <ref>. Let (,V_s) be a decorated tree with k leaves. Once again, applying <ref> to T and T_,V_s leads to [z^n]T_,V_s/ n k [z^n]T=(cst+o(1))n^e_(,V_s), But in this case, e_(,V_s)= 12 -k - e_(,V_s) = 12 +- k + 12(|E|+1) + 12∑_v ∈ V_s; d_v > δ (d_v-δ)≤12|E|+1-k +12∑_v ∈ (d_v-2) = |E|-||-k+1 =0. 
The above inequality is justified as follows: ∑_{v ∈ V_s; d_v > δ} (d_v - δ) ≤ ∑_{v ∈ V_s; d_v > δ} (d_v - 2) ≤ ∑_{v ∈ } (d_v - 2). The first inequality is an equality if and only if d_v ≤ δ for all v ∈ V_s (recall that δ > 2). In the second part, the equality case occurs when, for all v in , either d_v > δ or d_v = 2. This implies that if  is not binary, <ref> is a strict inequality (regardless of V_s) and ℙ(t^(n)_I = t_0) = o(1). We can show that, up to replacing τ with R_Λ and κ with R_S, the estimates of the singular parts of T, T_, T', T^+, …, T^-_ in <ref> still hold. Indeed, the proofs can be transposed verbatim up to replacing some calls to the standard case of <ref> with the critical case. <ref> does not, however, hold in its full generality anymore, because _θ is not necessarily convergent at τ/(1-τ) for every θ. It is nonetheless convergent when |θ| = 2, since the singularity of _θ has exponent δ - 2. This is enough to show that <ref> still holds for binary trees. Moreover, nonbinary trees still disappear in the limit according to <ref>. This allows us to conclude as in <ref>. § COMPLEX ANALYSIS TOOLBOX §.§ Aperiodicity and Daffodil Lemma To study the asymptotic behavior of combinatorial generating functions, it is important to locate dominant singularities. The following lemma is useful for this purpose. Recall that a function A analytic at 0 is aperiodic if there do not exist two integers r ≥ 0 and d ≥ 2 and a function B analytic at 0 such that A(z) = z^r B(z^d). Let A be a generating function (with nonnegative coefficients) analytic in |z| < R_A. If A is aperiodic, then |A(z)| < A(|z|) ≤ A(R_A) for |z| ≤ R_A and z ≠ |z|. (The case |z| = R_A can only be considered if A(R_A) < ∞.) This lemma can be found in <cit.>. Note that this reference does not consider the case of z on the circle of convergence, i.e. |z| = R_A (although this case is used later in the book, e.g. in the proof of Theorem VI.6, p. 405); the proof of the lemma in this case is similar to that for |z| < R_A. The complete statement of the Daffodil Lemma in <cit.> also deals with cases where the function A is periodic, but we do not need these cases in our work. §.§ Transfer theorem We start by defining the notion of Δ-domain. We use arg(z) for the principal determination of the argument of z in ℂ∖ℝ^-, taking its values in (-π,π). A domain Δ is a Δ-domain at 1 if there exist two real numbers R > 1 and π/2 < ϕ < π such that Δ = {z ∈ ℂ : |z| < R, z ≠ 1, |arg(1-z)| < ϕ}. By extension, for a complex number ρ ≠ 0, a domain is a Δ-domain at ρ if it is the image by the mapping z → ρz of a Δ-domain at 1. A Δ-neighborhood of ρ is the intersection of a neighborhood of ρ and a Δ-domain at ρ. We will make use of the following family of Δ-neighborhoods: for ρ ∈ ℂ, ρ ≠ 0, 0 < r < |ρ|, φ > π/2, set Δ(φ,r,ρ) = {z ∈ ℂ : |ρ-z| < r, |arg(ρ-z)| < φ}. When a function A is analytic on a Δ-domain at some ρ, the asymptotic behavior of its coefficients is closely related to the behavior of the function near the singularity ρ. The following theorem is a corollary of <cit.>. Let A be a function analytic on a Δ-domain Δ at R_A, let δ be an arbitrary real number in ℝ∖ℤ_≥0, and let C_A be a constant, possibly equal to 0. Suppose A(z) = (C_A + o(1))(1 - z/R_A)^δ when z tends to R_A in Δ. Then the coefficient of z^n in A satisfies [z^n]A(z) = (C_A + o(1)) R_A^{-n} n^{-(δ+1)}/Γ(-δ).
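As an illustration (ours, not taken from the paper), the transfer theorem can be checked numerically on the toy function A(z) = (1-z)^{1/2}, for which R_A = 1, C_A = 1 and δ = 1/2:

```python
# Numerical check of the transfer theorem on A(z) = (1 - z)^(1/2):
# the theorem predicts [z^n]A ~ n^(-(delta+1)) / Gamma(-delta) with delta = 1/2.
from scipy.special import binom, gamma

delta = 0.5
for n in (10, 100, 1000, 10000):
    exact = (-1) ** n * binom(delta, n)          # [z^n](1 - z)^delta
    predicted = n ** (-(delta + 1)) / gamma(-delta)
    print(n, exact, predicted, exact / predicted)
```

The printed ratio exact/predicted tends to 1 as n grows, illustrating the n^{-(δ+1)}/Γ(-δ) scaling.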
§.§ Singular differentiationThe next result is also useful to us.Let A be an analytic function in a Δ-neighborhood of R_A with the following singular expansion near its singularity R_A A(z)=∑_j=0^J C_j(R_A-z)^δ_j+Ø((R_A-z)^δ),where δ_j, δ∈ℂ.Then, for each k>0, the k-th derivative A^(k) is analytic in some Δ-domain at R_A andA^(k)(z)=(-1)^k ∑_j=0^J C_jδ_j (δ_j-1) ⋯ (δ_j-k+1)(R_A-z)^δ_j-k+Ø((R_A-z)^δ-k). We refer the reader to<cit.> for a proof of this theorem (this reference considers functions defined on a Δ-domain, but the proof still works with functions defined on a Δ-neighborhood).§.§ Exponents of dominant singularityIn this section, we introduce some compact terminology and easy lemmas to keep track of the exponent δ of the singularities and of the shape of the domain of analycity without computing the functions explicitly. Recall that the radius of convergence R_A of an analytic function A is the modulus of the singularities closest to the origin, called dominant singularities. Recall also that for series with positive real coefficients, by Pringsheim's theorem <cit.>, R_A is necessarily a dominant singularity. This justifies the following definition:Let δ be a real, which is not an integer.We say thata series A with radius of convergence R_A has a dominant singularity of exponent δ in R_A (resp. of exponent at least δ) if A has an analytic continuation on a Δ-neighborhood Δ_A of R_A and, on Δ_A, we haveA(z)= g_A(z) + (C_A +o(1)) (R_A - z)^δ,where g_A(z) is an analytic function on a neighbourhood of R_A (called the analytic part), and C_A a nonzero constant(resp. any constant); (C_A +o(1)) (R_A - z)^δ is sometimes referred to as the singular part. If furthermore, A has no other singularity on the disk of convergence, we say that it has a unique dominant singularity of exponent δ(resp. at least δ) in R_A. Since we assumed that A has an analytic continuation on a Δ-neighborhood Δ_A of R_A, by a standard compactness argument, this is equivalent to say that A can be extended to a Δ-domain in R_A. We make the following observation.According to the value of δ,we may move (part of) g_A(z) in the error term and write <ref> in a simpler form, still on a Δ-neighborhood of R_A. * For δ<0, g_A(z) = o((R_A - z)^δ) so A(z)= (C_A +o(1)) (R_A - z)^δ. * For 0<δ<1, considering the constant term is the Taylor series expansion of g_A(z) we find thatA(z)= A(R_A) + (C_A +o(1)) (R_A - z)^δ. * Similarly, for δ>1, we obtainA(z)= A(R_A) + A'(R_A) (z-R_A) + … + (C_A +o(1)) (R_A - z)^δ, in which the third dominant term (after the constant and the linear term)depends on how δ compares with 2.But in each case, we haveA(z)= A(R_A) + A'(R_A) (z-R_A) +Ø( (R_A - z)^δ_*),where δ_*=min(δ,2). We now record a few easy lemmas to manipulate these notions. First consider the stability by product.Let F and G be series with nonnegative coefficients and the same radius of convergenceR=R_F=R_G ∈ (0,∞). Assume they have each a dominant singularity of exponent δ_F and δ_G respectively in R. Then F · G has a dominant singularity in R of exponent δ defined by * δ=δ_F + δ_G if both δ_F and δ_G are negative;* δ=min(δ_F,δ_G) otherwise. Moreover, if both F and G have unique dominant singularities, so has F · G. The proof is easy. The analytic function F · G can be extended to the intersection of the domain of F and G. The exponent of the singular expansion around R is obtained by multiplying singular expansion of F and G: note that, if δ_F is negative, the series F is divergent and the singular part is the dominant part around R. 
On the opposite, when δ_F is positive, the dominant part of the expansion is the value F(R) of the analytic part at point R, which is always positive, since the series has nonnegative coefficients. The same holds of course for G, which explains the case distinction in the lemma. We now consider the composition F ∘ G. We should differentiate cases where G(R_G)>R_F, G(R_G)<R_F or G(R_G)=R_F (called sometimes supercritical, subcritical and critical cases <cit.>).Let F and G be series with nonnegative coefficients withradii of convergence R_F,R_G in (0,∞).Supercritical case:Assume that G(0)<R_F<G(R_G) (G(R_G) may be finite or infinite).Call ρ < R_G the unique positive number with G(ρ)=R_F.We assume that F has a dominant singularity of exponent δ_F in R_F. Then: * F ∘ G has also a dominant singularity of exponent δ_F in ρ.* Moreover, if G is aperiodic, then the dominant singularity of F ∘ G is unique.Subcritical case:Assume that G(R_G)<R_F. We assume that G has a dominant singularity of exponent δ_G in R_G. Then: * F ∘ G has also a dominant singularity of exponent δ_G in R_G.* Moreover, if the dominant singularity of G is unique,then the dominant singularity of F ∘ G is unique.Critical case-A:Assume that G(R_G)=R_F. We assume that F and G both have a dominant singularity of respective exponents δ_F and δ_G. Suppose furthermore δ_G>1. Then: * F ∘ G has also a dominant singularity of exponent min(δ_G,δ_F) in R_G.* Moreover, if G is aperiodic,then the dominant singularity of F ∘ G is unique. Critical case-B: Assume again that G(R_G)=R_F. We assume that F and G both have a dominant singularity of respective exponents δ_F and δ_G. Suppose furthermore δ_G ∈ (0,1). Then: * F∘ G has a dominant singularity of exponent min(δ_F,1)δ_G in R_G. * Moreover, if G is aperiodic, then the singularity is unique.Supercritical case:It is clear that F ∘ G is analytic around any r ∈ [0,ρ) and has nonnegative coefficients, hence it has radius of convergence at least ρ.To show that F ∘ G is defined in a Δ-neighborhood Δ of ρ, we show that G(Δ) is included in Δ_F. This follows easily from the fact that G is analytic in ρand has a derivative G'(ρ) which is a positive real number.When z is close to ρ, plugging G(z) in the expansion (<ref>) of F we obtainF(G(z))=g_F(G(z))+ (C_F+o(1))(R_F-G(z))^δ_F.The first term g_F(G(z)) is analytic at ρ. Since G(ρ)=R_F and G is differentiable at ρ we haveR_F-G(z)=(G'(ρ)+o(1))(ρ-z).Combining these two expansions yieldsF(G(z))=g_F(G(z))+ (C_FG'(ρ)^δ_F+o(1))(ρ-z)^δ_F,which proves i). Item ii) is also easy. In the case where we assume G aperiodic, we need <ref>,which ensures that |G(ζ)|<R_F for |ζ| ≤ |ρ|, ζ≠ρ. Subcritical case.Most arguments are similar to the ones of the supercritical case.Therefore we only explain the differences in the singular expansion of F(G(z)). Using the singular expansion (<ref>) of G, we haveF(G(z))=F[g_G(z)+(C_G+o(1))(R_G-z)^δ_G].Since G(R_G)<R_F<+∞, the exponent δ_G is positive and the term (R_G-z)^δ_G tends to 0 at R_G. Both G(z) and g_G(z) tend to G(R_G) as z → R_G, so that, by standard calculus arguments, we haveF(G(z)) =F(g_G(z))+F'(G(R_G)) (C_G+o(1))(R_G-z)^δ_G +o((R_G-z)^δ_G)=F(g_G(z))+(C_GF'(G(R_G)) +o(1) ) (R_G-z)^δ_G.Since F and g_G are analytic at G(R_G) and R_G respectively, this expansion is of the desired form. Critical case-A. As above, we focus on the expansion of F(G(z)). Since δ_G >1, G is differentiable at ρ=R_G and <ref> still holds. The difference is that g_F(G(z)) is not analytic anymore. 
Namely, when z is close to ρ,g_F(G(z))=g_F(g_G(z))+g_F'(g_G(R_G))(C_G+o(1))(R_G-z)^δ_G.ThenF(G(z))=g_F(g_G(z))+ (g_F'(g_G(R_G))C_G+o(1))(R_G-z)^δ_G+ (C_FG'(ρ)^δ_F+o(1))(ρ-z)^δ_F.Since g_F(g_G(z)) is analytic at ρ, theexponent of the dominant singularity of F∘ G is min(δ_F,δ_G). Note that the singular terms cannot cancel each other since when δ_F=δ_G the constants have the same sign. Critical case-B. Again, we focus on the singular expansion of F(G(z)). Now, since δ_G <1, G is not differentiable at ρ=R_G. Instead of (<ref>) we haveR_F-G(z)=-(C_G+o(1))(ρ-z)^δ_G.Eq. (<ref>) becomesF(G(z))=g_F(G(z))+ (C_F(-C_G)^δ_F+o(1))(ρ-z)^δ_Fδ_G.(In this case, C_G must be negative, as can be observed by writing the transfer theorem for the coefficients of G which are non-negative by assumption.)As for g_F(G(z)), (<ref>) still holds. We obtainF(G(z))=g_F(g_G(z))+ (g_F'(g_G(R_G))C_G+o(1))(R_G-z)^δ_G+(C_F(-C_G)^δ_F+o(1))(ρ-z)^δ_Fδ_G.We conclude that the exponent of the dominant singularity is min(δ_F,1)δ_G. We note that the above proof also yields the constant C_F ∘ Gappearing in the singular expansion of F ∘ G.The two following particular cases were used in <ref>, p. eq:CLa_CT (in particular, we assume in the discussion below that hypothesis (H3) holds). * We take F(z)=S(z) and G(u)=u/1-u. The composition is F∘ G(u)=Λ(u)-u^21-u, and u^21-u is analytic at R_Λ<1. Since u/1-u diverges at its singularity and S has a finite radius of convergence, the composition is supercritical. Extracting the constant from (<ref>), we getC_Λ=C_F ∘ G= C_FG'(R_F ∘ G)^δ_F = C_S/(1-R_Λ)^2 δ,since in this case C_F=C_S, δ_F=δ and R_F ∘ G=R_Λ.* We take F(u)=u/1-u and G(z)=T_(z). The composition isT(z)=T_(z)/1-T_(z) (<ref>). Since G(R_G)=T_(ρ)=R_Λ<1=R_F, (here we use Hypothesis (H3) and ρ is defined in <ref>) the composition is subcritical. Extracting the constant from (<ref>), we get C_T=C_GF'(G(R_G))= C_T_F'(T_(ρ))= C_T_/(1-R_Λ)^2. Finally, we state the following result, which follows from <ref>. If F has a (unique) dominant singularity of exponent (at least) δ in ρ, then its k-th derivative F^(k) has a (unique) dominant singularity of exponent (at least) δ-k in ρ.§.§ An analytic implicit function theoremThe following theorem allows to locate the dominant singularity of series defined by an implicit equation.Let F(z,w) be a bivariate function analytic at (z_0, w_0), we denote F_w=∂ F∂ w.IfF(z_0, w_0) = 0 and F_w(z_0, w_0) ≠ 0, then there exists a unique function ϕ(z) analytic in a neighbourhood of z_0 such that ϕ(z_0) = w_0 and F(z, ϕ(z)) = 0.We refer the reader to<cit.> for a proof of this result.§.§ Proof of <ref> Let R_ be the radius of convergence of T_. If T_(R_) ≥ R_Λ, thenby intermediate value theorem, we know that there exists ρ as in the lemma.We will prove this by contradiction.Assume T_(R_) < R_Λ.We apply <ref>.The bivariate function we consider is (z,w) ↦ z-w+Λ(w). It vanishes at (R_,T_(R_)) and the derivative with respect to w at that point is nonzero sinceΛ'(T_(R_)) < Λ'(R_Λ)<1.(Recall that this last inequality is equivalent to (<ref>) in Hypothesis (H2).)Therefore, T_ has an analytic continuation on a neighborhood of R_. Since it has positive coefficients, by Pringsheim's theorem <cit.>, this is in contradiction with the fact that R_ is the radius of convergence of T_. We have therefore proved that there exists ρ≤ R_ such that T_(ρ)=R_Λ. Note that it implies the relation ρ = R_Λ - Λ(R_Λ).We now consider T_ around z=ρ. 
Equation (<ref>) defining T_(z) can be rewritten asT_(z)=G(z,T_(z)), whereG(z,w)= w+1/1-Λ'(R_Λ)(z-w+Λ(w)).Since Λ has a dominant singularity of exponent δ>1 in R_Λ, Equation (<ref>), together with elementary computations, yield the following: for w in a Δ-neighborhood D_Λ of R_Λ,G(z,w)= R_Λ + z-ρ/1-Λ'(R_Λ) + Ø( (R_Λ-w)^δ_*).We now use Picard's method of successive approximants to show the existence and analycity of T_ on a Δ-neighborhood D_T of ρ. We refer to <cit.> for a synthetic description of the method in the casewhere Λ is analytic in R_Λ; we have to adapt it carefully to our setting.Define ϕ_0(z)=R_Λ and ϕ_j+1(z)=G(z,ϕ_j(z)) whenever ϕ_j(z) is in D_Λ. We have ϕ_1(z)-ϕ_0(z)=z-ρ1-Λ'(R_Λ). Also, <ref> of singular differentiationapplied to <ref> implies that∂G(z,w)∂ w= Ø( (R_Λ-w)^δ_*-1).Therefore[There is a slight subtlety here: we would like to apply the classical inequality |f(w)-f(w')| ≤f'_∞ |w - w'|, but this is not possible since the domain D_Λ is not convex. Note however that a Δ-neighborhood D is always a quasi-convex set, in the sense that we can always find a path between w and w' whose length is smaller than K|w-w'|, where K depends on the angle defining D but not on w and w'. Therefore the following weaker inequality holds: |f(w)-f(w')| ≤ K f'_∞ |w - w'|, which is good enough for our purpose (the constant K disappears in the Ø symbol).] for j ≥ 1, if ϕ_j(z) and ϕ_j+1(z) are defined and lie in D_Λ, we have ϕ_j+1(z)-ϕ_j(z) = Ø(η^δ_*-1|ϕ_j(z)-ϕ_j-1(z)|),where η=sup_w ∈ D_Λ |R_Λ-w|. Fix >0. Up to reducing the radius of D_Λ, we can therefore assume that |ϕ_j+1(z)-ϕ_j(z)| ≤ |ϕ_j(z)-ϕ_j-1(z) |. Thus, if ϕ_j(z) is in D_Λ for every i ≤ m, then ϕ_M(z) is defined and we have|(ϕ_M(z)-R_Λ) - z-ρ1-Λ'(R_Λ) | = |ϕ_M(z)-ϕ_1(z)| ≤/1- |ϕ_1(z)-ϕ_0(z)| =/1- |z-ρ1-Λ'(R_Λ)|.If we takesmall enough, the argument of ϕ_M(z)-R_Λ is close to the one of z-ρ. Furthermore if the modulus of z-ρ is small so is the one of ϕ_M(z)-R_Λ. This ensures the existence of a Δ-neighborhood D_T of ρ (not depending on M and z), such that for z∈ D_T and M≥ 1, ϕ_M(z) is in D_Λ as long as it is defined.In particular, ϕ_M+1(z) is also defined and by immediate induction,all ϕ_j are defined and analytic on D_T.<ref> also implies that ϕ_j converges locally uniformly on D_T. The limit is the unique solution w in D_Λ of the fixed point equation w=G(z,w) (the uniqueness of the solution comes from the fact that for every z∈ D_T, w ↦ G(z,w) is a contraction for w in D_Λ). This limit is therefore an analytic continuation of T_(z) to D_T. Note also that from <ref>, the following estimate holds on D_T:T_(z)-R_Λ=z-ρ1-Λ'(R_Λ) + o(|z-ρ|). Using the expansion given in <ref> of Λ around R_Λ, we have for z∈ D_T,T_(z)=ρ + (z-ρ) + g_Λ(T_(z)) + (C_Λ +o(1))(T_(z)-R_Λ)^δ.As 𝕀 - g_Λ is analytic at R_Λ with a nonzero derivative 1-Λ'(R_Λ), it can be inverted analytically around R_Λ by an analytic function h_Λ and henceT_(z) = h_Λ(z+ (C_Λ +o(1))(T_(z)-R_Λ)^δ)As T_(z)-R_Λ=1+o(1)/1-Λ'(R_Λ)(z-ρ), it follows from the Taylor expansion of h_Λ up to exponent ⌈δ⌉that T_ has a singularity of exponent exactly δ in ρ.In particular T_ has a singularity of exponent δ in ρ and hence ρ = R_.We now prove that T_ has no singularity ζ with |ζ| ≤ρ, except ζ=ρ. By a classical compactness argument (seee.g. 
<cit.>), this implies that T_ is analytic on a Δ-domain at ρ.Take such a singularity.Since T_ has nonnegative coefficients, the triangular inequality gives |T_(ζ)| ≤ T_(ρ) and since T_(z) is aperiodic, from <ref>we have a strict inequality unless ζ=ρ .Therefore, if |ζ| ≤ρ and ζ≠ρ, we have |Λ'(T_(ζ))|<Λ'(R_Λ)<1 and we can apply <ref>as above as in the second paragraph of this proof to argue that ζ cannot be a singularity.§.§ Proof of <ref>As in the proof of <ref>, the existence of ρ and the fact thatthe convergenceradius of T_ is at least ρ is straightforward. The key point is to prove that T_ has an analytic continuation to a Δ-neighborhood of ρ.By assumption, Λ is analytic on a Δ-neighborhood D_Λ=Δ(φ_Λ,r_Λ,R_Λ) of R_Λ, and the following approximation holds:Λ(w) = Λ(R_Λ) - (R_Λ-w)+ C'_Λ (R_Λ-w)^δ_⋆(1+(w)),where as before, δ_*=min(δ,2); C'_Λ is C_Λ or 12Λ”(R_Λ) depending on whether δ is smaller or bigger than 2; and (w) is an analytic function on D_Λ tending to 0 in R_Λ.Fix z in a Δ neighborhood D_T of ρ, whose parameters r_T and φ_T will be made precise later. The equation w=z + Λ(w) then rewrites asρ-z=C'_Λ (R_Λ-w)^δ_* (1 +(w)),or, as a fixed point equation w=G(z,w) forG(z,w):= R_Λ-( 1C'_Λ (ρ-z) ·1/1 + (w))^1/δ_*. We again use Picard's method of successive approximants to find an analytic solution w(z) for (<ref>), which will be the analytic continuation of T_(z) that we are looking for. For z∈ D_T, set ϕ_0(z)=R_Λ and, whenever ϕ_i(z) lies in D_Λ∪{R_Λ}, set ϕ_i+1(z)=G(z,ϕ_i(z)). In particular, R_Λ - ϕ_1(z) = ( 1C'_Λ (ρ-z) )^1/δ_*.Since 1/δ_*<1, we have (R_Λ -ϕ_1(z))=1δ_*(ρ-z).We choose the parameters defining the Δ-neighborhood D_T of ρ to be φ_T=φ_Λ and r_T=C'_Λ ( r_Λ2)^δ_*. In this way, if z is in D_T, then then ϕ_1(z) lives in D_Λ=Δ(φ_Λ,r_Λ2,R_Λ), for some φ_Λ<φ_Λ.We define an intermediate Δ-neighborhood D'_Λ = Δ(φ_Λ+φ_Λ/2,3 r_Λ4,R_Λ). This ensures that we have a constant 0<r_0<1, depending only on φ_Λ and φ_Λ, such that the circle γ_w of center w and radius r_0|R_Λ-w| is contained in D_Λ for every w ∈ D'_Λ and in D'_Λ for every w ∈D_Λ (cf. <ref>). Consider the partial derivative∂ G/∂ w(z,w)='(w) /δ_*·( 1/C'_Λ (ρ-z) )^1/δ_*·(1/1 + (w))^1/δ_*+1.We take w in the domain D'_Λ. The quantity '(w) can now be evaluated through a contour integral on γ_w⊂ D_Λ:'(w) = 1/2 π i∮_γ_w(u) du/(u-w)^2.This yields the inequality |'(w)| = Ø( sup_u ∈ D_Λ |(u)|/|R_Λ-w|).Plugging this back in <ref>, we get, for w in D'_Λ|∂ G/∂ w(z,w)|= Ø(|z-ρ|^1/δ_*·sup_u ∈ D_Λ |(u)|/|R_Λ-w|).Now we shall find a domain where we have enough control on |∂ G∂ w(z,w)| as to guarantee the stability of the iterates. A subtlety here is that this control is impossible near ϕ_0(z) = R_Λ. So we need to consider a domain around ϕ_1(z), hence that depends on z. For every z∈ D_T, we have ϕ_1(z) ∈D_Λ, so the diskΓ_z:={w: |w-ϕ_1(z)| ≤1r_0 |ϕ_1(z)-R_Λ| }is included in D'_Λ. For w in Γ_z, we have |R_Λ-w|=Θ(|ϕ_1(z)-R_Λ|) = Θ( |ρ-z|^1/δ_*),which implies after plugging back into <ref>|∂ G/∂ w(z,w)|= Ø(sup_u ∈ D_Λ |(u)|).By possibly reducing the radius r_Λ of D_Λ, we can make sup_u ∈ D_Λ |(u)| as small as wanted: for any w in Γ,|∂ G/∂ w(z,w)| ≤1/r_0+1.Similarly,|ϕ_2(z)-ϕ_1(z)|= ( 1C'_Λ |ρ-z| )^1/δ_*·| ( 1/1+(ϕ_1(z)))^1/δ_* -1 |can be made smaller than 1r_0+1 |ϕ_1(z)-R_Λ| by reducing r_Λ. In particular, ϕ_2(z) is in Γ_z.For m ≥ 2, assume that ϕ_1(z),⋯,ϕ_m(z) lie in Γ_z. 
Then for each i ≤ m, using (<ref>),|ϕ_i+1(z)-ϕ_i(z)| ≤( 1r_0+1) |ϕ_i(z)-ϕ_i-1(z)| ≤⋯≤( 1r_0+1)^i-1 |ϕ_2(z)-ϕ_1(z)|.Since ϕ_m(z) lies in Γ_z ⊂ D_Λ, the next term ϕ_m+1(z) is defined and |ϕ_m+1(z)-ϕ_1(z)|≤∑_i=1^m |ϕ_i+1(z)-ϕ_i(z)| ≤[ ∑_i=1^m ( 1r_0+1)^i-1] |ϕ_2(z)-ϕ_1(z)| ≤1r_0 |ϕ_1(z)-R_Λ|.In particular, ϕ_m+1(z) also lies in Γ_z and an immediate induction shows that this is indeed the case for all m ≥ 1.By (<ref>), the series ∑_i ≥ 0ϕ_i+1(z) -ϕ_i(z) is uniformly bounded by a geometric series and converges towards an analytic function ϕ on D_T. The limit ϕ(z) is a solution of ϕ(z)=z+Λ(ϕ(z)) and is the analytic continuation of T_(z) thatwe were looking for.A small modification of the above argument shows that, when r_T, or equivalently r_Λ, tends to 0, the quotient|ϕ_m+1(z)-ϕ_1(z)|/|ϕ_1(z)-R_Λ|also tends to 0. This proves thatϕ(z) - R_Λ= (ϕ_1(z)-R_Λ) (1+o(1)) =-( 1C'_Λ (ρ-z) )^1/δ_*(1+o(1)). The proof that T_(z) has no other singularities than ρ on the circle of convergence is similar to that of <ref>. § ON THE SIMULATIONS GIVEN IN THE INTRODUCTIONIn this appendix, we explain how the simulations in <ref> have been obtained.§.§ Biased Brownian permutonFix p in (0,1) and considera uniform binary planar tree b_n^(p) with n leaves,where each internal node is labeled ⊕ (resp. ⊖) with probability p (resp. 1-p), independently from each other. As mentioned in <ref>, τ_n^(p):=( b_n^(p)) forms a consistent family of random permutations. Therefore, from <ref>, we have the following lemma (using the notation (.,.) defined in <ref>)There exists a random permuton μ^(p),whose induced subpermutation are the τ_n^(p) ( i.e. for all n, τ_n^(p) d = (,Μ^(p))) andwe have the convergence in distributionμ_τ_n^(p)n→ +∞→μ^(p). By definition, μ^(p) is the biased Brownian separable permuton with parameter p (indeed τ_k^(p) d = (,Μ^(p)) implies <ref> of <ref>). This lemma is not needed to prove the results of this paper, but was used in the simulations. The three pictures in <ref> p. Fig:SimusPermuton are obtained by drawing the diagram of a random permutation distributed as τ_n^(p), for p=0.2 and n=11629, p=0.45 and n=12666, p=0.5 and n=17705. §.§ Stable permutons Fix δ∈ (1,2). For every k the following probability distribution on unlabeled plane trees with k leaves was introduced in <cit.>and is the distribution of the induced subtree with k leaves in the δ-stable tree:ρ_δ,k() = k!/(δ -1) ⋯ ((k-1)δ - 1)∏_v∈1_d_v≥ 2(d_v - 1 - δ) ⋯ (2-δ)(δ - 1)/d_v!.Now if we fix the distribution of a random permuton ν, for every n≥ 1, we build a random substitution tree t^(δ,ν)_n as follows: the tree is chosen according to ρ_δ,n, and conditional on that choice, all internal nodes v are independently labeled by a permutation distributed like (𝐦⃗_d_v, ν)(the notation (.,.) is defined in <ref>).Now we can define the permutations τ^(δ,ν)_n = ( t^(δ,ν)_n). This family of permutations is consistent: we omit the proof of this fact, which follows from the consistency of the family ((,ν))_k (<ref>) and from Marchal's algorithm <cit.> to generate trees of distribution ν_δ,k. We deduce the following lemma. For every δ∈(1,2) and random permuton ν, there exists a random permuton μ^(δ,ν),whose induced subpermutations are the τ_n^(δ,ν) ( i.e. for all n, τ_n^(δ,ν) d = (,Μ^(δ,ν))) and we have the convergence in distribution μ_τ_n^(δ,ν)n→ +∞→μ^(δ,ν). We call μ^(δ,ν) the δ-stable permuton driven by ν. Once again, this lemma is used in our simulations. 
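The distribution ρ_δ,k defined above is explicit enough to be evaluated directly. As a sanity check (our own illustration, independent of the authors' code), the following sketch enumerates all plane trees with k leaves whose internal nodes have at least two children and verifies that ρ_δ,k sums to 1:

```python
# Sketch: evaluate rho_{delta,k}(t) from the formula above and check
# normalization over all plane trees with k leaves (internal degree >= 2).
from math import factorial
from itertools import product

def compositions(n, parts_min=2):
    """All compositions of n into at least `parts_min` parts >= 1."""
    def comps(m):
        if m == 0:
            yield ()
        for first in range(1, m + 1):
            for rest in comps(m - first):
                yield (first,) + rest
    return [c for c in comps(n) if len(c) >= parts_min]

def trees(k):
    """All plane trees with k leaves; a tree is 'leaf' or a tuple of subtrees."""
    if k == 1:
        return ['leaf']
    out = []
    for comp in compositions(k):
        for children in product(*(trees(kk) for kk in comp)):
            out.append(children)
    return out

def weight(t, delta):
    """Product over internal nodes v of (d_v-1-delta)...(2-delta)(delta-1)/d_v!."""
    if t == 'leaf':
        return 1.0
    d = len(t)
    w = delta - 1
    for j in range(2, d):            # (2-delta)(3-delta)...(d-1-delta)
        w *= j - delta
    w /= factorial(d)
    for child in t:
        w *= weight(child, delta)
    return w

def rho(t, k, delta):
    pref = factorial(k)
    for j in range(1, k):            # (delta-1)(2 delta-1)...((k-1) delta-1)
        pref /= j * delta - 1
    return pref * weight(t, delta)

delta, k = 1.3, 5
print(sum(rho(t, k, delta) for t in trees(k)))   # ~1.0
```

For δ = 1.3 and k = 5 the printed sum is 1.0 up to rounding, consistent with ρ_δ,k being a probability distribution (one can verify by hand that for k = 3 the contributions of the two binary trees and the single ternary tree add up to 1).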
The pictures in<ref> arethe rescaled diagrams of realizations of τ_n^(δ,ν) for n=20000 and δ∈{1.1,1.5},where we have taken ν to be the (nonrandom) uniform measure on [0,1]^2. §.§ Simulations of permutations in classesThe uniform random permutations in substitution-closed classes shown on <ref> have been obtained through a Boltzmann sampler. Obtaining such a sampler is routine from the equations on the generating series given in <ref> <cit.>. The only input that we need is a Boltzmann sampler for the set of simple permutations in the class. In the case of a finite set , this is trivial. For the set of simple permutations in (321), we built a Boltzmann sampler for the whole class (321), which has an easy recursive structure, and we run it until the output is simple (this happens with probability bigger than 1/4 when the parameter of the Boltzmann sampler is close to the radius of convergence of =⟨(321)⟩). Code is available on request. § ACKNOWLEDGEMENTSThe authors are grateful to the referees for suggestions which improved the presentation of the paper.MB and VF are partially supported by the Swiss National Science Foundation, under grants number 200021-172536 and 200020-172515. LG's research is supported by ANR grants GRAAL (ANR-14-CE25-0014) and PPPP (ANR-16-CE40-0016).99AA05 M. H. Albert, M. D. Atkinson. Simple permutations and pattern restricted permutations. Discrete Mathematics, 300(1):1–15, 2005. AlbertAtkinsonBrignall3 M. H. Albert, M. D. Atkinson, R. Brignall.The enumeration of three pattern classes using monotone grid classes.Electronic Journal of Combinatorics, 19(3) #P20, 2012. AlbertAtkinsonBrignall M. H. Albert, M. D. Atkinson, R. Brignall.The enumeration of permutations avoiding 2143 and 4231.Pure Mathematics and Applications, 22(2):87–98, 2011.AlbertAtkinsonKlazar M.H.Albert, M.D. Atkinson, M. Klazar. The enumeration of simple permutations. Journal of Integer Sequences vol.6 (2003), article 03.4.4.AlbertAtkinsonVatter3 M. H. Albert, M. D. Atkinson, V. Vatter.Inflations of geometric grid classes: three case studies.Australasian Journal of Combinatorics, vol. 58 (2014), p. 27-47.AlbertAtkinsonVatter M. H. Albert, M. D. Atkinson, V. Vatter.Counting 1324, 4231-Avoiding Permutations.Electronic Journal of Combinatorics, 16(1) #R136, 2009.AlbertBrignall M. H. Albert, R. Brignall.Enumerating indices of Schubert varieties defined by inclusions.Journal of Combinatorial Theory, Series A, 123(1): 154–168, 2014. AlbertVatter M. H. Albert, V. Vatter.Generating and Enumerating 321-Avoiding and Skew-Merged Simple Permutations. Electronic Journal of Combinatorics, 20(2) #P44, 2013. AldousCRT3D. Aldous. The Continuum Random Tree III.Annals of Probability, 21(1): 248—289, 1993.AtapourMadrasM. Atapour, N. Madras. Large deviations and ratio limit theorems for pattern-avoidingpermutations. Combinatorics, Probability and Computing,23(2): 160–200, 2014.Basis_Subst_Closure M. Atkinson, N. Ruškuc, R. Smith. Substitution-closed pattern classes.J. Combin. Theory Ser. A, 118: 317–340, 2011.BrownianPermutation F. Bassino, M. Bouvel, V. Féray, L. Gerin, A. Pierrot. The Brownian limit of separable permutations, Annals of Probability, 46(4): 2134–2189, 2018.BBPR F. Bassino, M. Bouvel, A. Pierrot, D. Rossin. An algorithm for deciding the finiteness of the number of simple permutations in permutation classes. Advances in Applied Mathematics, vol. 64 (2015), p. 124-200. PinPerm F. Bassino, M. Bouvel, D. Rossin. Enumeration of Pin-Permutations.Electronic Journal of Combinatorics, vol. 18 (2011), #P57. BevanPhDD. 
Bevan. On the growth of permutation classes.PhD thesis (2015), Open University, .Billingsley P. Billingsley. Convergence of probability measures (2d edition). John Wiley & Sons (1999).Bona0 M. Bóna. Exact enumeration of 1342-avoiding permutations: a close link with labeled trees and planar maps. J. Combin. Th. Series A, 80: 257–272, 1997. Bona1 M. Bóna. The absence of a pattern and the occurrences of another. Discrete Mathematics & Theoretical Computer Science, 12(2): 89–102, 2010. Bona2 M. Bóna. Surprising Symmetries in Objects Counted by Catalan Numbers. The Electronic Journal of Combinatorics, 19(1) #62, 2012.BonaBook M. Bóna. Combinatorics of permutations (2d edition). Chapman-Hall and CRC Press (2012). MathildeMarniCyril M. Bouvel, M. Mishna, C. Nicaud. Some families of trees arising in permutation analysis. Preprint, , (2016).BrignallSurvey R. Brignall.A survey of simple permutations.S. Linton, N. Ruškuc and V. Vatter, Eds., vol. 376 of London Mathematical Society Lecture Note Series, Cambridge University Press, p. 41-65, 2010.BRV R. Brignall, N. Ruškuc, V. Vatter.Simple permutations: decidability and unavoidable substructures.Theoretical Computer Science, 391(1-2): 150-163, 2008. ChenEuFuS.-E. Cheng, S.-P. Eu, T.-S. Fu, Area of Catalan pathson a checkerboard. European Journal of Combinatorics, 28(4): 1331–1344, 2007. DokosPakT. Dokos, I. Pak.The expected shape of random doubly alternating Baxter permutations.Online Journal of Analytic Combinatorics, vol. 9 (2014), #5.Drmota M. Drmota,Random Trees, An Interplay between Combinatorics and Probability, Springer (2009). BoltzmannSampler Ph. Duchon, Ph. Flajolet, G. Louchard, G. Schaeffer. Boltzmann Samplers for the Random Generation of Combinatorial Structures.Combinatorics, Probability and Computing, 13 (4-5): 577–625, 2004.DuquesneLeGall T. Duquesne, J. F. Le Gall,Random trees, Lévy processes and spatial branching processes, vol. 281 of Astérisque, Société mathématique de France (2002). FMN3 V. Féray, P.-L. Méliot, A. Nikeghbali. Graphons, permutons and the Thoma simplex: three mod-Gaussian moduli spaces. Preprint arXiv:1712.06841, 2017.Violet Ph. Flajolet, R. Sedgewick. Analytic combinatorics. Cambridge University Press (2009).FinitelyForcible R. Glebov, A. Grzesik, T. Klimošová, D. Král. Finitely forcible graphons and permutons. J. Combin. Th., Series B, 110: 112 – 135, 2015.ParameterTesting R. Glebov, C. Hoppen, T. Klimošová, Y. Kohayakawa, D. Král, H. Liu. Densities in large permutations and parameter testing. Eur. J. Combin., 60: 89–99, 2017.HoffmanBrownian1C. Hoffman, D. Rizzolo, E. Slivken.Pattern Avoiding Permutations and Brownian Excursion Part I: Shapes and Fluctuations.Random Structures and Algorithms, 50(3): 394–419, 2017.HoffmanBrownian2C. Hoffman, D. Rizzolo, E. Slivken. Pattern Avoiding Permutations and Brownian Excursion Part II: Fixed Points.Probability Theory and Related Fields, 169: 377–424, 2016.HombergerC. Homberger, Expected patterns in permutation classes. The Electronic Journal of Combinatorics,19:3 #43, 2012.Permutons C. Hoppen, Y. Kohayakawa, C. G. Moreira, B. Rath, R. M. Sampaio. Limits of permutation sequences. Journal of Combinatorial Theory, Series B, 103(1): 93–113, 2013.JansonSurveysGWTrees S. Janson. Simply generated trees, conditioned Galton–Watson trees, random allocations and condensation. Probability Surveys, vol. 9 (2012), p. 103–252. JansonPermutationsS. Janson. Patterns in random permutations avoiding the pattern 132. Combinatorics Probability and Computing, 26(1): 24–51, 2017. 
JansonNakamuraZeilbergerS. Janson, B. Nakamura, D. Zeilberger. On the Asymptotic Statistics of the Number of Occurrences of Multiple Permutation Patterns. Journal of Combinatorics, 6(1-2): 117–143, 2015. LargeDeviations R. Kenyon, D. Král, C. Radin, and P. Winkler. Permutations with fixed pattern densities. preprint, arXiv:1506.02340, 2015.IgorI. Kortchemski. Invariance principles for Galton-Watson trees conditioned on the number of leaves. Stochastic Processes and Applications, 122(9): 3126–3172, 2012. MickaelConstruction M. Maazoun, On the Brownian separable permuton.Preprint arXiv:1711.08986.MadrasLiuN. Madras, H. Liu.Random pattern-avoiding permutations.Algorithmic, Probability and Combinatorics,vol. 520 of Contemp. Math., p. 173–194. Amer. Math. Soc., 2010. MadrasPehlivanN. Madras, L. Pehlivan.Structure of Random 312-Avoiding Permutations.Random Structures and Algorithms, 49(3): 599–631, 2016.Madras2413N. Madras, G. Yıldırım.Longest Monotone Subsequences and Rare Regions of Pattern-Avoiding PermutationsElec. J. Combin., 24(4): #P4.13, 2017.MaTa04 A. Marcus and G. Tardos. Excluded permutation matrices and the Stanley-Wilf conjecture. Journal of Combinatorial Theory, Series A, 107(1): 153–160, 2004.Marchal P. Marchal. A note on the fragmentation of a stable tree. In Fifth Colloquium on Mathematics and Computer Science, DMTCS Proceedings (2008), p. 489–500.MinerPakS. Miner, I. Pak. The shape of random pattern-avoiding permutations. Advances in Applied Mathematics, vol. 55 (2014), p. 86–130.ExpTiltedPermuton S. Mukherjee. Estimation in exponential families on permutations.Ann. Statist. 44 (2): 853–875, 2016. FixedPointsPermuton S. Mukherjee. Fixed points and cycle structure of random permutations.Elec. J. Probab., 21:1–18, 2016.Pantone J. Pantone. The Enumeration of Permutations Avoiding 3124 and 4312.Annals of Combinatorics, 21(2): 293–315, 2017. PitmanRizzoloJ. Pitman, D. Rizzolo. Schröder's problems and scaling limits of random trees. Transactions of the American Mathematical Society, 367(10): 6943–6969, 2015.GeometryPermutonM. Rahman, B. Virag, M. Vizer. Geometry of permutation limits. Preprint arXiv:1609.03891, 2016 RemyJ.-L. Rémy. Un procédé itératif de dénombrement d’arbres binaires et son application à leur génération aléatoire. RAIRO Inform. Théor.,19(2): 179–195, 1985.RudolphK. Rudolph. Pattern popularity in 132-avoiding permutations.The Electronic Journal of Combinatorics, 20:1 #8, 1985.Stankova3142Z. Stankova. Forbidden subsequences. Discrete Math. 132(1-3): 291–316, 1994. | http://arxiv.org/abs/1706.08333v3 | {
"authors": [
"Frédérique Bassino",
"Mathilde Bouvel",
"Valentin Féray",
"Lucas Gerin",
"Mickaël Maazoun",
"Adeline Pierrot"
],
"categories": [
"math.PR",
"math.CO",
"60C05, 05A05"
],
"primary_category": "math.PR",
"published": "20170626120452",
"title": "Universal limits of substitution-closed permutation classes"
} |
[email protected] di Scienze Fisiche e Chimiche,Universitá degli Studi dell'Aquila, via Vetoio, 67100 L'Aquila, ItalyLaboratoire de Chimie Théorique, Université Pierre et Marie Curie, Sorbonne Universités, CNRS, Paris, France [email protected] de Chimie Théorique, Université Pierre et Marie Curie, Sorbonne Universités, CNRS, Paris, [email protected] de Chimie Théorique, Université Pierre et Marie Curie, Sorbonne Universités, CNRS, Paris, France [email protected] de Chimie Théorique, Université Pierre et Marie Curie, Sorbonne Universités, CNRS, Paris, FranceWe propose a method for obtaining effective lifetimes of scattering electronic states for avoiding the artificially confinement of the wave function due to the use of incomplete basis sets in time-dependent electronic-structure calculations of atoms and molecules. In this method, using a fitting procedure, the lifetimes are extracted from the spatial asymptotic decay of the approximate scattering wave functions obtained with a given basis set. The method is based on a rigorous analysis of the complex-energy solutions of the Schrödinger equation. It gives lifetimes adapted to any given basis set without using any empirical parameters. The method can be considered as an ab initio version of the heuristic lifetime model of Klinkusch et al. [J. Chem. Phys. 131, 114304 (2009)]. The method is validated on the H and He atoms using Gaussian-type basis sets for calculation of high-harmonic-generation spectra. Ab initio lifetime correction to scattering states for time-dependent electronic-structure calculations with incomplete basis sets Julien Toulouse June 19, 2017 ==================================================================================================================================§ INTRODUCTION Motivated by experimental advances in attosecond science <cit.>, there is currently a lot of interest in developing time-dependent electronic-structure computational methods for studying laser-driven electron dynamics in atomic and molecular systems (see, e.g., Ref. HocHinBon-EPJST-14). Examples of such methods include time-dependent density-functional theory (TDDFT) <cit.>, time-dependent Hartree-Fock (TDHF) <cit.>, multiconfiguration time-dependent Hartree-Fock (MCTDHF) <cit.>, time-dependent configuration interaction (TDCI) <cit.>, and time-dependent coupled cluster <cit.>. These methods involve orbitals which are often expanded on basis functions such as Gaussian-type functions <cit.>, and an important question is whether the continuum scattering states which are explored at high laser intensity, e.g. in the high-harmonic generation (HHG) process, are sufficiently well described.The description of the continuum scattering states can be much improved by using specially designed Gaussian-type basis sets, such as the one proposed by Kaufmann et al. <cit.>, as demonstrated recently in Refs. coccia16a,coccia16b. However, even with these basis sets, only an incomplete discrete set of scattering states which decay too fast away from the nucleus is obtained, with the consequences that ionization processes cannot be properly described and the time-dependent wave function undergoes artificial reflections. These problems can be alleviated with ad hoc lifetime models as proposed by Klinkusch et al. <cit.> or Lopata et al. <cit.> which introduce an imaginary part to the energy of each scattering state. This has the effect of partially absorbing the time-dependent wave function which limits artificial reflections and simulates ionization. 
However, these lifetime models have the disadvantage of being empirical and of depending on adjustable parameters. Alternative methods to these lifetime models include approaches using a complex-absorbing potential (CAP) <cit.>, an absorbing mask function <cit.>, or exterior-complex scaling <cit.> to absorb the time-dependent wave-function beyond a certain distance, but all these techniques also inevitably imply some empiricism in the choice of the involved parameters. In the present work, we develop an ab initio lifetime correction to scattering states based on a rigorous analysis of the complex-energy solutions of the Schrödinger equation. This ab initio correction gives lifetimes adapted to each particular incomplete basis set without any free parameters. More specifically, we start from the exact complex-energy solutions of the Schrödinger equation of a hydrogen-like atom, obtained by relaxing the boundary conditions on the wave function, making the Hamiltonian a non-Hermitian operator. For a complex-energy state, we show that the value of the corresponding lifetime is encoded in the spatial asymptotic behavior of the associated wave function. We thus propose, for a given Gaussian-basis set, to extract an effective lifetime associated with an approximate scattering state of real energy by matching its spatial asymptotic decay with the one of the exact complex-energy wave function having the same real part of the energy. In practice, this is done with a fit of the spatial asymptotic decay of each scattering wave function and leads to parameter-free lifetimes for the one-electron scattering states, which compensate for the incompleteness of the basis set in time-dependent calculations. We then show how the procedure can be extended to many-electron atoms and molecules to define lifetimes for N-electron scattering states used in TDCI calculations. Interestingly, the lifetimes defined in the heuristic model of Klinkusch et al. <cit.> are recovered as simple approximations of our lifetimes, which clarifies the theoretical grounds of this model.The paper is organized as follows. In Section <ref>, we present in detail the theory of our ab initio lifetime correction. In Section <ref>, we give computational details for the tests performed on the H and He atoms. In Section <ref>, we give and discuss the results. In particular, we show the effect of using the ab initio lifetime correction for calculating HHG spectra and we compare with the heuristic lifetime model. Finally, Section <ref> contains our conclusions. Unless otherwise stated, Hartree atomic units are used throughout the paper.§ THEORY §.§ Schrödinger equation for a hydrogen-like atom with complex energiesConsider the time-independent Schrödinger equation for a hydrogen-like atom (with a nuclear charge Z)( -1/2∇_ṟ^2 - Z/r) ψ(ṟ) = E ψ(ṟ),with a possibly complex energy E and the associated electronic wave function ψ(ṟ)=R(r) Y_ℓ^m(θ,ϕ) written as the product of a radial part R(r) and a spherical harmonics Y_ℓ^m(θ,ϕ). The radial part is determined by the equationR”(r) + 2/r R'(r) + ( - ℓ(ℓ+1)/r^2 + 2Z/r +2 E ) R(r) = 0,for a given angular momentum ℓ. The general solution of Eq. (<ref>), without imposing any boundary conditions, is (as found, e.g., with Mathematica <cit.>; see also Ref. Mah-BOOK-09)R(r)=c_1 R_1(r) + c_2 R_2(r),where c_1 and c_2 are two arbitrary complex constants, and R_1(r)=L(ν,2ℓ+1,2√(-2E)r)r^ℓ e^-√(-2E)r,andR_2(r)=U(-ν, 2ℓ+2,2√(-2E)r)r^ℓ e^-√(-2E)r,where ν=Z/√(-2E) -ℓ-1. 
<cit.> In these expressions, L is the generalized Laguerre function and U is the Tricomi confluent hypergeometric function, both defined for possibly complex arguments. The function R_1(r) is always finite at r=0|R_1(r=0)| < ∞,and, for generic values of the complex energy E, its asymptotic behavior for r→∞ isR_1(r)r→∞∼ 1/Γ(-ν)e^√(-2E)r/r^1+Z/√(-2E),where Γ(z) is the gamma function. For generic values of E, the function R_2(r) diverges at r=0 asR_2(r)r→0∼ 1/Γ(-ν)1/r^ℓ+1,while its asymptotic behavior for r→∞ isR_2(r)r→∞∼ e^-√(-2E)r/r^1-Z/√(-2E). Let us consider first the case of real and negative energies, E = ε <0. In this case, the function R_1(r) generally diverges as e^√(-2 ε)r for r→∞ [Eq. (<ref>)] and the function R_2(r) generally diverges as 1/r^ℓ+1 at r=0 [Eq. (<ref>)]. However, when ν is a negative or zero integer, i.e. for the discrete energy values ε = -Z^2/(2n^2) where n is a positive integer with n ≥ℓ+1, the prefactor 1/Γ(-ν) goes to 0 in Eq. (<ref>) and the divergence of the function R_1(r) is avoided. In fact, Eq. (<ref>) has the same prefactor 1/Γ(-ν) and the divergence of the function R_2(r) is also avoided for these discrete energy values, as less well known <cit.>. For these particular energy values, it turns out that the functions R_1(r) and R_2(r) both become proportional to the familiar associated Laguerre polynomials, so that one is free to choose any linear combination of R_1(r) and R_2(r) to obtain proper (finite and normalizable) eigenfunctions. This case corresponds to the discrete bound states.Consider now the case of real and positive energies, E = ε > 0. In this case, it can be seen from Eqs. (<ref>) and (<ref>) that |R_1(r)| and |R_2(r)| both behave asymptotically as 1/r [multiplied by an oscillatory cosine term for R_1(r) due to an additional term not shown in Eq. (<ref>)], but the function R_2(r) diverges as 1/r^ℓ+1 at r=0 [Eq. (<ref>)]. One thus has to set c_2=0 to obtain finite (but not normalizable) eigenfunctions. This case corresponds to the continuum scattering states.Finally, let us consider complex energies, E = ε - i γ/2, with a positive real part, ε > 0, and a negative imaginary part, -γ/2 < 0. The corresponding states are interpreted as decaying states with a finite lifetime τ = 1/γ in the sense that the time-evolved wave function ψ(ṟ,t) = e^-i (ε-iγ/2) tψ(ṟ,0) has a survival probability which decays in time as |ψ(ṟ,t)|^2/|ψ(ṟ,0)|^2=e^-γ t. For determining the asymptotic behavior of the radial functions R_1(r) and R_2(r), it is convenient to define the free-electron momentumk = √(2E)= k_r + i k_i,with a positive real partk_r=√(2ε+√(4ε^2+γ^2)/2),and a negative imaginary partk_i=-√(-2ε+√(4ε^2+γ^2)/2).Using √(-2E)= i k = i k_r + |k_i| in Eq. (<ref>), we thus find that the function R_1(r) diverges exponentially for r→∞ as|R_1(r)|r→∞∼ e^|k_i| r/r^1+Z |k_i|/|k|^2,so it cannot be considered as a proper eigenfunction but it is a kind of resonant state in which the electron “escapes” at infinity (see, e.g., Refs. HatSasNakPet-PTP-08,Moi-BOOK-11). On the contrary, using Eq. (<ref>), we see that the function R_2(r) goes exponentially to zero for r→∞,|R_2(r)|r→∞∼ e^-|k_i| r/r^1-Z |k_i|/|k|^2,but still diverges as 1/r^ℓ+1 at r=0 [Eq. (<ref>)] <cit.>. Therefore, nor can it be considered as a proper eigenfunction and again it may be thought of as a kind of resonant state in which the electron “escapes” at the position of the nucleus. 
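For completeness, we record the algebra relating (k_r, k_i) to (ε, γ), which underlies the expressions above and the inversion used just below (this block is our own addition):

```latex
% Squaring k = k_r + i k_i and using k^2 = 2E = 2\varepsilon - i\gamma gives
k_r^2 - k_i^2 = 2\varepsilon , \qquad 2\, k_r k_i = -\gamma ,
% so k_i < 0 whenever \gamma > 0. Eliminating k_r through
% k_r = \sqrt{2\varepsilon + k_i^2} yields
\gamma = -2 k_r k_i = 2 |k_i| \sqrt{2\varepsilon + k_i^2},
% while solving the resulting quadratic in k_r^2 recovers the explicit
% expressions for k_r and k_i quoted above.
```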
Note that on the space of such diverging functions, the Hamiltonian is not a self-adjoint operator, which is why the eigenvalues can be complex.From the above analysis, we thus see that one useful property of a hydrogen-like electronic state with a complex energy E = ε - i γ/2 and c_1=0 is that the inverse lifetime γ of the state can be obtained from [after inverting Eq. (<ref>)]γ= 2|k_i| √(2ε+|k_i|^2),where |k_i| can be extracted from the asymptotic behavior of the radial function|k_i| = -lim_r→∞ln |R(r)|/r.In particular, in the case of a scattering state for which |R(r)| ∼ 1/r as r→∞, we correctly obtain |k_i|=0 and γ=0. For physical interpretation of Eq. (<ref>), we note that 1/(2|k_i|) can be thought of as a measure of the spatial extension of the state and √(2ε+|k_i|^2)=k_r can be interpreted as the velocity of the escaping electron <cit.>. In the following, we exploit this link between the spatial asymptotic decay of the state and its lifetime to formulate an ab initio lifetime correction to scattering states for compensating the use of incomplete basis sets.§.§ Ab initio lifetime correction to one-electron scattering states for incomplete basis setsWe still first consider a one-electron hydrogen-like atom. In standard quantum chemistry programs, the Schrödinger equation is solved using an incomplete Gaussian-type basis set. For each state p, the radial function is expanded on M basis functions {χ_μ (r)}R_p(r) = ∑_μ=1^M c_μ,p χ_μ (r),where c_μ,p are the calculated orbital coefficients. Each basis function χ_μ (r) of angular momentum ℓ_μ is generally itself a contraction of M_μ primitive Gaussian-type basis functionsχ_μ (r) = ∑_i=1^M_μ d_i,μr^ℓ_μe^-α_i,μr^2,where d_i,μ and α_i,μ are the (fixed) coefficients and exponents, respectively, of the i^th primitive in the contraction. Obviously, in addition to the bound states with negative energies ε_p <0, discrete states with positive energies ε_p >0 are also obtained and they can be considered as approximations to the exact continuum scattering states. These approximate positive-energy states usually reproduce a number of oscillations of the exact scattering states, but they go to zero much faster than 1/r for large r due to the limitation of the basis. When doing time-dependent calculations with this basis, this too fast decay of the approximate radial functions (and the fact that only a limited discrete set of states is obtained) artificially confines the electron around the nucleus, with the consequences that ionization processes cannot be properly described and the time-dependent wave function may undergo artificial reflections at the boundary of the space covered by the basis.Clearly, beyond a large enough r, the approximate radial function R_p(r) decays as e^-α_min r^2 where α_min is the smallest exponent appearing in the expansion of R_p(r). However, for an intermediate range of r, we have found that the envelope of R_p(r) can be well described by the asymptotic behavior of the complex-energy state in Eq. (<ref>), i.e.envelope[R_p(r)] ≈ A_p e^-B_p r/r^C_p forr_min < r <r_max,where A_p, B_p, C_p are constants to be determined for each positive-energy state p. Therefore, we reinterpret the (envelope of the) approximate positive-energy state R_p(r) as an approximation to a complex-energy state with spatial exponential decay, rather than an approximation to a real-energy scattering state with 1/r asymptotic behavior. Following Eq. 
(<ref>), we thus assign an inverse effective lifetime γ_p to each such approximate state R_p(r), obtained from the calculated energy of this state ε_p >0 and the decay exponent B_pγ_p = 2 B_p √(2ε_p+B_p^2).Naturally, we extract γ_p from the constant B_p since it dominates the asymptotic behavior, but we note that γ_p may also be extractable from C_p, according to Eq. (<ref>).To obtain B_p, we fit the envelope of R_p(r) for each state p. In principle, the envelope could be mathematically defined and obtained by the module of the analytic representation A[R_p(r)] of R_p(r): envelope[R_p(r)] = | A[R_p(r)]| where A[R_p(r)] = R_p(r) + iH [R_p(r)] and H [R_p(r)] is the Hilbert transform of R_p(r). However, we decide to proceed in the following simpler manner. For each positive-energy state p, we determine all the local maxima r_i of the absolute value of the oscillatory radial function |R_p(r)| and perform a linear fit of ln |R_p(r_i)| ln |R_p(r_i)| ≈ln A_p - B_p r_i -C_p ln r_i,to determine the constants ln A_p, B_p, and C_p.Equations (<ref>) and (<ref>) define an ab initio automatizable procedure for determining the lifetime correction to the one-electron scattering states for a given basis set. Once the values for γ_p are determined, the complex energies ε_p - i γ_p/2 can be used in the time propagation of the Schrödinger equation. The presence of the finite lifetimes for the scattering states leads to a partial absorption of the wave function which simulates ionization and reduces artificial reflections of the time-dependent wave function. We stress that we do not view these lifetimes as physical lifetimes (i.e., associated with a physical resonance phenomenon), but rather as artificial lifetimes compensating the missing part of the function space due to the use of an incomplete basis set (see Ref. BerGriHie-PRA-06 for how to relate lifetimes to missing degrees of freedom). In the limit of a complete basis set, the exact continuum scattering states with 1/r asymptotic behavior would be obtained, and the above procedure would lead to B_p=0 and thus γ_p=0, as it should. On the contrary, if we use a bad basis set containing basis functions which are too much localized to represent the scattering states well, then the above procedure would lead to large values for B_p and thus large values for γ_p, as we would expect.Interestingly, for small B_p (i.e., for good enough basis sets and in the lower-energy part of the continuum), we see that γ_p is proportional to √(2ε_p)γ_p ≈ 2 B_p √(2ε_p).If we set 2 B_p = 1/d̃ for all states p, where d̃ is a single parameter to be empirically chosen, Eq. (<ref>) reduces to the heuristic lifetime model of Klinkusch et al. <cit.>. In their reasoning, d̃ represents the characteristic escape length that an electron in the state p with classical velocity v_p = √(2ε_p) can travel during the lifetime 1/γ_p. Thus, Eq. (<ref>) can be considered as an extension of their heuristic model in which, for each scattering state and each basis set, the parameter d̃ is determined ab initio by setting it to 1/(2 B_p), a measure of the spatial extension of the state. It seems natural indeed that the parameter d̃ should be different for each state. In fact, we recently proposed a slightly more flexible version of the heuristic lifetime model in which two values of d̃ are used for the lower-energy part and the upper-energy part of the continuum spectrum <cit.>. The more general formula for the inverse lifetime that we propose in Eq. 
(<ref>) corresponds to using v_p = √(2ε_p+B_p^2) which indeed, as mentioned in Section <ref>, represents the velocity of the escaping electron in a complex-energy state.§.§ Extension of the ab initio lifetime correction to N-electron scattering states We discuss now the extension of our ab initio lifetime correction from one-electron hydrogen-like systems to N-electron atomic and molecular systems. The first step of an electronic-structure calculation is usually to solve an effective one-electron mean-field Schrödinger equation, i.e. the Hartree-Fock (HF) or Kohn-Sham (KS) equations,( -1/2∇_ṟ^2 + v_eff(ṟ) ) ψ_p(ṟ) = ε_p ψ_p(ṟ),where v_eff(ṟ) is an effective one-electron potential (in the case of HF, there is actually a different local effective potential for each orbital p, or equivalently an unique nonlocal effective potential, see e.g. Ref. GraKreKurGro-INC-00). For systems with a radial effective potential v_eff(r) with r=|ṟ| (i.e., atoms with spherically symmetric states), the long-range asymptotic behavior of v_eff(r) as r→∞ is known: v_eff(r) ∼ -(Z-N+1)/r for exact KS, and v_eff(r) ∼ -(Z-N)/r for the virtual orbitals in HF or KS with (semi)local density-functional approximations. Therefore, the analysis of the asymptotic behavior of the radial wave function done in Section <ref> can be applied here to the radial part of each positive-energy orbital ψ_p(ṟ) by just replacing Z with an effective nuclear charge Z_eff = Z-N+1 or Z_eff=Z-N (which may be zero). We can then straightforwardly apply the procedure of Section <ref> for each positive-energy orbital p, i.e perform the fit of Eq. (<ref>) and obtain the inverse lifetime γ_p with Eq. (<ref>). For systems with non-spherically symmetric effective potential v_eff(ṟ) (i.e., for molecules or atoms with non spherically symmetric states), for large enough r, any orbital ψ_p(ṟ) also feels an effective potential -Z_eff/r where Z_eff = ∑_I Z_I -N +1 or Z_eff = ∑_I Z_I -N (with Z_I being the charge of nucleus I) and r can be taken as the radial coordinate around the center of mass of the system. Thus, in this case as well, we can apply the procedure of Section <ref> using for example the spherical average of |ψ_p(ṟ)|^2 around the center of mass to obtain the inverse lifetime γ_p for each positive-energy orbital p.Once the one-electron orbitals have been determined, the N-electron states can be determined in a second step by a many-body electronic-structure calculation, and we would like to define now lifetimes for these states. For this, we note that attributing inverse lifetimes γ_p to the orbitals ψ_p(ṟ) (without changing them), i.e. just making the replacement ε_p →ε_p -iγ_p/2 in Eq. (<ref>), formally corresponds to adding the following nonlocal one-electron complex-absorbing potential (CAP) to the Hamiltonianv_CAP(ṟ,ṟ') = -i/2∑_p γ_pψ_p^*(ṟ') ψ_p(ṟ),where the sum is over the orthonormal positive-energy orbitals, or equivalently over all orbitals with the understanding that γ_p = 0 if ε_p <0. The CAP potential can also be conveniently expressed in second quantizationv̂_CAP = -i/2∑_p γ_pâ_p^†â_p,where â_p^† and â_p are creation and annihilation operators, respectively. We then have to include this potential in the many-body calculation. For example, we consider the case of the configuration interaction singles (CIS) method. 
Once the one-electron orbitals have been determined, the N-electron states can be determined in a second step by a many-body electronic-structure calculation, and we would like to define lifetimes for these states as well. For this, we note that attributing inverse lifetimes γ_p to the orbitals ψ_p(ṟ) (without changing them), i.e., just making the replacement ε_p → ε_p - iγ_p/2 in Eq. (<ref>), formally corresponds to adding the following nonlocal one-electron complex-absorbing potential (CAP) to the Hamiltonian:

v_CAP(ṟ, ṟ') = -(i/2) ∑_p γ_p ψ_p^*(ṟ') ψ_p(ṟ),

where the sum is over the orthonormal positive-energy orbitals, or equivalently over all orbitals with the understanding that γ_p = 0 if ε_p < 0. The CAP can also be conveniently expressed in second quantization:

v̂_CAP = -(i/2) ∑_p γ_p â_p^† â_p,

where â_p^† and â_p are creation and annihilation operators, respectively. We then have to include this potential in the many-body calculation. As an example, we consider the configuration interaction singles (CIS) method. In this method, the n-th N-electron state is written as

|Ψ_n⟩ = c_0 |Φ_0⟩ + ∑_i^occ ∑_a^vir c_i,n^a |Φ_i^a⟩,

where |Φ_0⟩ is the reference HF state, |Φ_i^a⟩ = â_a^† â_i |Φ_0⟩ is the state obtained by the single excitation from the occupied HF orbital i to the virtual HF orbital a, and the coefficients c_0 and c_i,n^a are obtained by diagonalizing the Hamiltonian in this space. In principle, one could think of rediagonalizing the Hamiltonian including the CAP. A simpler approach is to just calculate the first-order correction due to the CAP to the energy E_n of each scattering CIS state, i.e., each state such that E_n > E_0 + IP, where E_0 is the ground-state energy and IP is the ionization potential. Noting that occupied orbitals have negative energies and therefore do not contribute in Eq. (<ref>), the action of v̂_CAP on |Φ_0⟩ is zero and ⟨Φ_i^a| v̂_CAP |Φ_j^b⟩ = -(i/2) γ_a δ_ab δ_ij. We thus easily find

⟨Ψ_n| v̂_CAP |Ψ_n⟩ = -(i/2) Γ_n,

where Γ_n is given by

Γ_n = ∑_i^occ ∑_a^vir |c_i,n^a|^2 γ_a,

with again γ_a ≠ 0 only if ε_a > 0. Thus, within first order, the action of the CAP is to attribute inverse lifetimes Γ_n to the scattering CIS states, i.e., E_n → E_n - (i/2)Γ_n for E_n > E_0 + IP.

Equation (<ref>) corresponds exactly to the expression used in the heuristic lifetime model of Klinkusch et al. <cit.> for CIS states. We have thus provided a theoretical derivation of their expression, giving stronger support for it and allowing generalizations. For example, for configuration interaction singles doubles (CISD), it is easy to find that Eq. (<ref>) becomes

Γ_n = ∑_i^occ ∑_a^vir |c_i,n^a|^2 γ_a + ∑_i,j^occ ∑_a,b^vir |c_ij,n^ab|^2 (γ_a + γ_b),

and so on. Alternatively, one could use the CAP of Eq. (<ref>) or (<ref>) directly in time-dependent methods such as TDDFT or TDHF.
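In code, Eq. (<ref>) is simply a γ-weighted sum of squared singles amplitudes. The sketch below is ours, with hypothetical array names.

import numpy as np

def cis_inverse_lifetimes(C, gamma_vir, E, E0, IP):
    """C[n, i, a]: CIS coefficients c_{i,n}^a for state n; gamma_vir[a]:
    one-electron inverse lifetimes of the virtual orbitals (zero for
    eps_a < 0); E[n]: CIS total energies. Gamma_n = sum_{ia} |c|^2 gamma_a,
    applied only to scattering states with E_n > E0 + IP."""
    Gamma = np.einsum('nia,a->n', np.abs(C) ** 2, gamma_vir)
    Gamma[E <= E0 + IP] = 0.0            # bound states keep real energies
    return Gamma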
§ COMPUTATIONAL DETAILS

We test our ab initio lifetime correction on the H and He atoms. We start with standard Gaussian-type correlation-consistent polarized valence-triple-zeta Dunning basis sets <cit.>, n-fold augmented with diffuse basis functions to describe Rydberg states <cit.>, denoted n-aug-cc-pVTZ. For the atoms considered, these basis sets contain s, p, and d basis functions. For each angular momentum, we then add m Gaussian-type functions adjusted to represent low-lying continuum states, as proposed by Kaufmann et al. <cit.> and used in Refs. coccia16a, coccia16b. The resulting basis sets are referred to as n-aug-cc-pVTZ+mK, where K stands for "Kaufmann". Specifically, we consider n=6 or n=8 and m=3 or m=8 for the H atom, and n=6 and m=7 for the He atom. Note that m=8 and m=7 for H and He, respectively, are the largest numbers of Kaufmann functions that we have been able to use before running into linear-dependency problems. We have recently studied in detail the convergence of the HHG spectrum of the H atom with such basis sets and found that the 6-aug-cc-pVTZ+8K basis set with a two-parameter heuristic lifetime model already gives an HHG spectrum in good agreement with the reference grid-based one (for laser intensities up to 10^14 W/cm^2) <cit.>.

Using a development version of the Molpro software package <cit.>, we perform a Hartree-Fock calculation to obtain the orbitals with these different basis sets. To obtain the inverse lifetime γ_p for each positive-energy orbital p, we numerically determine the local maxima of the absolute value of the radial part of the orbital using a spatial grid with step 0.05 bohr extending from r_min = 0.05 bohr to r_max = 2/√(α_min,s), where α_min,s is the exponent of the most diffuse s function in the basis set <cit.>. With the largest basis sets used, we have r_max = 419 bohr with the 6-aug-cc-pVTZ+8K basis set for the H atom, and r_max = 295 bohr with the 6-aug-cc-pVTZ+7K basis set for the He atom. We then perform the fit in Eq. (<ref>). We perform a CIS calculation (for the H atom, this is of course identical to HF) to obtain CIS total energies E_n and coefficients c_i,n^a, as well as transition moments, and calculate the CIS inverse lifetimes Γ_n according to Eq. (<ref>) for the states n such that E_n > E_0 + IP, where E_0 is the HF ground-state energy. For the ionization potential, we take IP = -ε_HOMO calculated with the considered basis set, giving IP = 0.5 Ha for H and IP = 0.918 Ha for He.

To test the obtained lifetimes, we calculate HHG spectra (in the dipole form) induced by a cos^2-shape laser electric field by performing TDCIS calculations with the real-time propagation code Light <cit.>, using a time step Δt = 2.42 as (0.1 a.u.) and the same setup as in Ref. coccia16b. Specifically, for H we use a laser intensity of I = 10^14 W/cm^2 with a wavelength of λ_0 = 800 nm, and for He we use a laser intensity of I = 5 × 10^14 W/cm^2 with a wavelength of λ_0 = 456 nm <cit.>. In both cases, the time propagation is carried out for 20 optical cycles.
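To indicate where the complex energies enter, here is a minimal split-step propagation sketch in the basis of the field-free CIS states; dip (the transition-dipole matrix in that basis) and field(t) are assumed inputs, the sign convention of the coupling is illustrative, and this is not the actual Light implementation.

import numpy as np
from scipy.linalg import expm

def propagate(c, E, Gamma, dip, field, t, dt):
    """One time step for the coefficient vector c in the basis of CIS states.
    Field-free evolution uses the complex energies E_n - i*Gamma_n/2, which
    damps the scattering states; the field coupling is applied in a symmetric
    split-operator fashion (coupling sign is illustrative only)."""
    half = np.exp(-1j * (E - 0.5j * Gamma) * (dt / 2.0))
    c = half * c                                        # half step, field-free
    c = expm(1j * field(t + dt / 2.0) * dip * dt) @ c   # field kick at midpoint
    c = half * c                                        # second half step
    return c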
§ RESULTS

§.§ Hydrogen atom

We start by showing the typical radial wave functions of scattering states of the H atom that we obtain with Gaussian-type basis sets. In Fig. <ref>, we compare the radial wave function R(r) of the exact s-symmetry scattering state of energy 0.5175 Ha with the radial wave function of the approximate scattering state of closest energy obtained with the 6-aug-cc-pVTZ, 6-aug-cc-pVTZ+3K, or 6-aug-cc-pVTZ+8K basis set (0.4805 Ha, 0.4323 Ha, and 0.5175 Ha, respectively). As observed in Ref. coccia16b, the 6-aug-cc-pVTZ basis set does not reproduce the long-range oscillatory behavior of the exact scattering wave function. The situation improves when adding Kaufmann functions, i.e., with the 6-aug-cc-pVTZ+3K and 6-aug-cc-pVTZ+8K basis sets: the more Kaufmann functions are added, the more long-range oscillations are obtained in the radial wave functions. However, even with the 6-aug-cc-pVTZ+8K basis set, the amplitude of these oscillations decays much too fast at long distance in comparison with the exact 1/r behavior. It is not easy to improve the 6-aug-cc-pVTZ+8K basis set further by adding more and more Kaufmann functions because of linear dependencies. Instead, we will compensate for this wrong asymptotic behavior using our ab initio lifetime correction.

We consider now the fit of the envelope of the radial wave functions with the logarithmic expression of Eq. (<ref>). In Fig. <ref>, we compare the values ln |R_p(r_i)|, where the r_i are the maxima of |R_p(r)|, with the fitted curve ln A_p - B_p r - C_p ln r for s, p, and d scattering wave functions of similar energies (0.343 Ha, 0.306 Ha, and 0.371 Ha, respectively <cit.>) calculated with the 6-aug-cc-pVTZ+8K basis set. As mentioned in the Computational details, the fit was performed within the radial window [r_min = 0.05; r_max = 419] bohr, in which there are 11, 9, and 8 maxima of |R_p(r)| for the s, p, and d states considered, respectively. In Table <ref>, we also report the values of the parameters B_p and C_p, and the coefficients of determination R^2 of the fits, obtained with different numbers of maxima included, corresponding to using smaller values of r_max. The quality of the fit is satisfactory, with R^2 ≥ 0.96 in all cases, but the value of B_p is quite sensitive to the number of maxima included and increases significantly when r_max is reduced. The value r_max = 2/√(α_min,s) = 419 bohr chosen in this work thus gives the smallest value of B_p and consequently the smallest value of the inverse lifetime γ_p. This is in a sense a "safe" choice, since it minimizes the lifetime correction.

Let us now discuss the inverse lifetimes γ_p obtained for each scattering state with the fit of Eq. (<ref>). In Fig. <ref> we show γ_p obtained with the 6-aug-cc-pVTZ+8K and 8-aug-cc-pVTZ+8K basis sets as a function of the orbital energies ε_p. Consider first the 6-aug-cc-pVTZ+8K basis set. The inverse lifetimes of the s scattering states and those of the p and d scattering states both roughly follow a √(ε_p) trend. What is striking is that the inverse lifetimes of the p and d scattering states are much larger than those of the s scattering states. Since the inverse lifetimes should vanish in the limit of a complete basis set, this must mean that the 6-aug-cc-pVTZ+8K basis set is much worse for the p and d scattering states than for the s scattering states. Consider now the 8-aug-cc-pVTZ+8K basis set. As expected, with this improved basis set containing more diffuse functions, we obtain much smaller inverse lifetimes for all scattering states. However, the inverse lifetimes of the p and d scattering states are still larger than those of the s scattering states. We thus conclude that both basis sets are unbalanced in their description of the s scattering states versus the p and d scattering states. It is a nice feature of our ab initio lifetime correction that it clearly reveals this imbalance of the basis set for scattering states of different angular momenta. The better description of the s scattering states may be explained by the fact that the most diffuse basis functions of the n-aug-cc-pVTZ+8K basis sets are of s symmetry. To confirm this hypothesis, we constructed a new basis set starting from the 6-aug-cc-pVTZ+8K basis set and adding p and d basis functions with the smallest exponent of the s basis functions in this basis, which is α_min,s = 2.28 × 10^-5 bohr^-2. In Fig. <ref>, it is seen that the resulting basis set, denoted 6-aug-cc-pVTZ+8K+pd(α_min,s), gives of course the same inverse lifetimes for the s scattering states, but much smaller inverse lifetimes for the p and d scattering states, which are now comparable to the inverse lifetimes of the s scattering states. The 6-aug-cc-pVTZ+8K+pd(α_min,s) basis set is thus a more balanced basis set.

In Fig. <ref>, we also show the inverse lifetimes obtained from the heuristic lifetime model of Klinkusch et al. <cit.>, i.e., γ_p = √(2ε_p)/d̃, with the 6-aug-cc-pVTZ+8K+pd(α_min,s) basis set and the value d̃ = 50 bohr. This value of d̃ was empirically found in a previous work <cit.> to give a good HHG spectrum of the H atom with the aug-cc-pVTZ+8K basis set, in good agreement with the reference HHG spectrum obtained from grid calculations.
Clearly, for the s scattering states, the inverse lifetimes obtained from the heuristic lifetime model with this value of d̃ are quite similar to the inverse lifetimes determined ab initio in the present work. For the p and d scattering states, the heuristic lifetime model gives inverse lifetimes that are a bit smaller than the ab initio inverse lifetimes obtained with the 6-aug-cc-pVTZ+8K+pd(α_min,s) basis set. Therefore, we can consider that our ab initio lifetime correction provides a first-principles justification for the value of d̃ empirically chosen in Ref. coccia16b.

Finally, we test our ab initio lifetime correction by calculating the HHG spectrum of the H atom with a laser intensity of I = 10^14 W/cm^2, using the 6-aug-cc-pVTZ+8K+pd(α_min,s) basis set. We show the obtained spectrum in Fig. <ref> and compare it to the HHG spectra calculated using either no lifetimes or lifetimes from the heuristic lifetime model with d̃ = 50 bohr. All the spectra roughly present the expected aspect of an atomic HHG spectrum: a first intense peak at ω/ω_0 = 1, followed by a plateau of peaks at the odd harmonic orders up to a cutoff value beyond which the intensity of the peaks rapidly decreases. The spectrum obtained with no lifetimes is however very noisy, the signal not decreasing much between the peaks. Introducing the lifetimes results in much clearer spectra, with a lower background and sharper peaks. The spectrum obtained with the lifetimes from the ab initio procedure and the one obtained with the lifetimes from the heuristic model are very similar to each other, with the ab initio lifetimes giving a slightly lower background (less than one unit on the logarithmic scale). This test thus confirms the usefulness of introducing lifetimes, and confirms that the heuristic lifetime model can be replaced by our ab initio lifetime correction.
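For reference, a standard way to obtain such a spectrum from the time-dependent dipole signal (in the dipole form used here) is a windowed Fourier transform; this sketch is ours rather than the Light code's routine.

import numpy as np

def hhg_spectrum(dipole, dt, omega0):
    """Return harmonic orders w/omega0 and the HHG intensity |D(w)|^2,
    computed from the dipole expectation value sampled every dt (a.u.).
    A Hann window suppresses spectral leakage from the finite pulse."""
    d = np.asarray(dipole) * np.hanning(len(dipole))
    D = np.fft.rfft(d)
    w = 2.0 * np.pi * np.fft.rfftfreq(len(d), d=dt)   # angular frequencies
    return w / omega0, np.abs(D) ** 2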
§.§ Helium atom

We now apply our ab initio lifetime correction to the He atom, as a first test on a system with more than one electron. We consider first the one-electron inverse lifetimes γ_p as a function of the orbital energies ε_p, reported in the left panel of Fig. <ref> using the 6-aug-cc-pVTZ+7K and 6-aug-cc-pVTZ+7K+pd(α_min,s) basis sets. As for the H atom, the 6-aug-cc-pVTZ+7K+pd(α_min,s) basis is constructed from the 6-aug-cc-pVTZ+7K basis by adding p and d basis functions with the smallest s-basis-function exponent. With the 6-aug-cc-pVTZ+7K basis set, the inverse lifetimes of the s scattering states are much smaller than those of the p and d scattering states. As for the H atom, the 6-aug-cc-pVTZ+7K+pd(α_min,s) basis set gives a more balanced description of all the scattering states. The obtained inverse lifetimes follow a trend similar to the one observed for the H atom with a similar basis set, but tend to be a bit larger. As a consequence, if we want to roughly reproduce these ab initio lifetimes with the heuristic lifetime model, we need to choose a smaller value of the parameter: d̃ = 30 bohr.

We consider now the corresponding two-electron CIS inverse lifetimes Γ_n [Eq. (<ref>)] as a function of the CIS total energies E_n, reported in the right panel of Fig. <ref>. The CIS inverse lifetimes are overall quite similar to the one-electron inverse lifetimes. The most important difference is that, just above the continuum threshold (ε_p ≳ 0 or E_n ≳ E_0 + IP), the density of two-electron CIS states is higher than the density of one-electron HF states, and the CIS inverse lifetimes are significantly larger than the one-electron inverse lifetimes.

Finally, we test our ab initio lifetime correction by calculating the HHG spectrum of the He atom with a laser intensity of I = 5 × 10^14 W/cm^2, using the 6-aug-cc-pVTZ+7K+pd(α_min,s) basis set. Since the study of the effect of electronic correlation on the HHG spectrum <cit.> is beyond the scope of this work, we still use TDCIS even though the He atom has two electrons, i.e., we neglect double excitations. We show the obtained spectrum in Fig. <ref> and compare it to the HHG spectra calculated using either no lifetimes or lifetimes from the heuristic lifetime model with d̃ = 30 bohr. As for the H atom, the spectrum obtained with no lifetimes is very noisy, whereas the spectra obtained with the lifetimes are much clearer. Using the ab initio lifetimes gives a slightly lower background than using the lifetimes from the heuristic lifetime model. This test thus confirms the applicability of our ab initio lifetime correction to two-electron systems.

§ CONCLUSIONS

We have developed a method for obtaining effective lifetimes of scattering electronic states that avoids the artificial confinement of the wave function due to the use of incomplete basis sets in time-dependent electronic-structure calculations. In this method, using a fitting procedure, the lifetimes are systematically extracted from the spatial asymptotic decay of the approximate scattering wave functions obtained with a given basis set. The main qualities of this method are that (1) it is based on a rigorous theoretical analysis, (2) it does not involve any empirical parameters, and (3) it is adapted to each particular basis set used. Interestingly, the method can be considered as an ab initio version of the heuristic lifetime model of Klinkusch et al. <cit.>.

As first tests of our method, we have considered the H and He atoms using Gaussian-type basis sets. We have shown that reasonable lifetimes adapted to the basis set are obtained. In particular, the inverse lifetimes correctly decrease when the size of the basis set is increased. Moreover, the obtained lifetimes revealed an unbalanced description of the scattering states of different angular momenta with the standard basis sets used, which we exploited to construct more balanced basis sets. The method is therefore useful for diagnosing the quality of a basis set for describing scattering states. Finally, the obtained lifetimes have been shown to lead to much clearer HHG spectra (i.e., with a lower background and better resolved peaks) in time-dependent calculations.

Future work includes testing the method on larger systems, including molecules, calculating properties other than HHG spectra, and possibly using different types of basis sets. We believe that our approach could help adapt quantum-chemistry methods to the study of electron dynamics induced by high-intensity lasers in atoms and molecules.

§ ACKNOWLEDGEMENTS

This work was supported by the LabEx MiChem, part of French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02. EC thanks L. Guidoni for support.

cor07 P. B. Corkum and F. Krausz, Nat. Phys. 3, 381 (2007).
kra09 F. Krausz and M. Ivanov, Rev. Mod. Phys. 81, 163 (2009).
gal12 L. Gallmann, C. Cirelli, and U. Keller, Annu. Rev. Phys. Chem. 63, 447 (2012).
atto Attosecond Physics, in Springer Series in Optical Sciences, edited by L. Plaja, R. Torres, and A. Zaïr, Springer, Berlin Heidelberg, 2013.
paz15 R. Pazourek, S. Nagele, and J. Burgdörfer, Rev. Mod. Phys. 87, 765 (2015).
cal16 F. Calegari, G. Sansone, S. Stagira, C. Vozzi, and M. Nisoli, J. Phys. B: At. Mol. Opt. Phys. 49, 062001 (2016).
RamLeoNeu-ARPC-16 K. Ramasesha, S. R. Leone, and D. M. Neumark, Annu. Rev. Phys. Chem. 67, 41 (2016).
HocHinBon-EPJST-14 D. Hochstuhl, C. M. Hinz, and M. Bonitz, Eur. Phys. J. Spec. Top. 223, 177 (2014).
RunGro-PRL-84 E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984).
Kul-PRA-87b K. C. Kulander, Phys. Rev. A 36, 2726 (1987).
CaiZanKitKocKreScr-PRA-05 J. Caillat, J. Zanghellini, M. Kitzler, O. Koch, W. Kreuzer, and A. Scrinzi, Phys. Rev. A 71, 012712 (2005).
KraKlaSaa-JCP-05 P. Krause, T. Klamroth, and P. Saalfrank, J. Chem. Phys. 123, 074105 (2005).
Kva-JCP-12 S. Kvaal, J. Chem. Phys. 136, 194109 (2012).
Krause:2007dm P. Krause, T. Klamroth, and P. Saalfrank, J. Chem. Phys. 127, 034107 (2007).
lupp+12mol E. Luppi and M. Head-Gordon, Mol. Phys. 110, 909 (2012).
lupp+13jcp E. Luppi and M. Head-Gordon, J. Chem. Phys. 139, 164121 (2013).
coccia16a E. Coccia and E. Luppi, Theor. Chem. Acc. 135, 43 (2016).
white15 A. White, C. J. Heide, P. Saalfrank, M. Head-Gordon, and E. Luppi, Mol. Phys. 114, 947 (2016).
coccia16b E. Coccia, B. Mussard, M. Labeye, J. Caillat, R. Taieb, J. Toulouse, and E. Luppi, Int. J. Quant. Chem. 116, 1120 (2016).
kauf+89physb K. Kaufmann, W. Baumeister, and M. Jungen, J. Phys. B: At. Mol. Opt. Phys. 22, 2223 (1989).
Klinkusch:2009iw S. Klinkusch, P. Saalfrank, and T. Klamroth, J. Chem. Phys. 131, 114304 (2009).
lop13 K. Lopata and N. Govind, J. Chem. Theory Comput. 9, 4939 (2013).
GolSho-JPB-78 A. Goldberg and B. W. Shore, J. Phys. B 11, 3339 (1978).
LefWya-JCP-83 C. Leforestier and R. E. Wyatt, J. Chem. Phys. 78, 2334 (1983).
KosKos-JCC-86 R. Kosloff and D. Kosloff, J. Comput. Phys. 63, 363 (1986).
MugPalNavEgu-PR-04 J. G. Muga, J. P. Palao, B. Navarro, and I. L. Egusquiza, Phys. Rep. 395, 357 (2004).
GreHoPabKamMazSan-PRA-10 L. Greenman, P. J. Ho, S. Pabst, E. Kamarchik, D. A. Mazziotti, and R. Santra, Phys. Rev. A 82, 023406 (2010).
kra92 J. L. Krause, K. J. Schafer, and K. C. Kulander, Phys. Rev. A 45, 4998 (1992).
MccStrWis-PRA-91 C. W. McCurdy, C. K. Stroud, and M. K. Wisinski, Phys. Rev. A 43, 5980 (1991).
he07 F. He, C. Ruiz, and A. Becker, Phys. Rev. A 75, 053407 (2007).
tao09 L. Tao, W. Vanroose, B. Reps, T. N. Rescigno, and C. W. McCurdy, Phys. Rev. A 80, 063419 (2009).
scr10 A. Scrinzi, Phys. Rev. A 81, 053845 (2010).
TelSosRozChu-PRA-13 D. A. Telnov, K. E. Sosnova, E. Rozenbaum, and S.-I. Chu, Phys. Rev. A 87, 053406 (2013).
Math10-PROG-15 Wolfram Research, Inc., Mathematica, Version 10.3, Champaign, IL (2015).
Mah-BOOK-09 G. D. Mahan, Quantum Mechanics in a Nutshell, Princeton University Press, Princeton, 2009.
lifetimes-note2 The general solution of the Schrödinger equation given in Eqs. (<ref>)-(<ref>) is valid for Z≠0. For Z=0, R_1(r) in Eq. (<ref>) becomes identically zero, and should be replaced by a function which has the same expression as R_2(r) with the substitution √(-2E) → -√(-2E). The continuum scattering states are then obtained as a linear combination of these two functions. However, our analysis of complex-energy states remains unchanged. Thus, our procedure for determining the lifetimes is still valid for Z=0.
OthMonMar-ARX-16 A. A. Othman, M. de Montigny, and F. Marsiglio, The Coulomb potential in quantum mechanics revisited, <http://arxiv.org/abs/1612.06706>.
HatSasNakPet-PTP-08 N. Hatano, K. Sasada, H. Nakamura, and T. Petrosky, Prog. Theor. Phys. 119, 187 (2008).
Moi-BOOK-11 N. Moiseyev, Non-Hermitian Quantum Mechanics, Cambridge University Press, Cambridge, 2011.
lifetimes-note1 For ℓ=0, R_2(r) only diverges as 1/r at r=0, which is normalizable, i.e., ∫_0^∞ dr r^2 |R_2(r)|^2 < ∞. However, even in this case, the expectation value of the Hamiltonian operator is infinite. For ℓ ≥ 1, R_2(r) is not even normalizable.
KlaGil-AQC-12 S. Klaiman and I. Gilary, On resonance: A first glance into the behavior of unstable states, in Advances in Quantum Chemistry Vol. 63, edited by C. A. Nicolaides, E. Brändas, and J. R. Sabin, page 1, Academic Press, 2012.
BerGriHie-PRA-06 R. A. Bertlmann, W. Grimus, and B. C. Hiesmayr, Phys. Rev. A 73, 054101 (2006).
GraKreKurGro-INC-00 T. Grabo, T. Kreibich, S. Kurth, and E. K. U. Gross, Orbital functionals in density functional theory: the optimized effective potential method, in Strong Coulomb Correlation in Electronic Structure: Beyond the Local Density Approximation, edited by V. Anisimov, Gordon & Breach, Tokyo, 2000.
Dun-JCP-89 T. H. Dunning, J. Chem. Phys. 90, 1007 (1989).
MOLPRO_brief H.-J. Werner, P. J. Knowles, G. Knizia, F. R. Manby, M. Schütz, et al., Molpro, version 2012.1, a package of ab initio programs, 2012, see http://www.molpro.net.
ding+11jcp F. Ding, W. Liang, C. T. Chapman, C. M. Isborn, and X. Li, J. Chem. Phys. 135, 164101 (2011).
lifetimes-note3 Because we do not impose spherical symmetry in our calculations, we do not have exactly degenerate s, p, and d states.
cork93prl P. B. Corkum, Phys. Rev. Lett. 71, 1994 (1993).
lewe+pra94 M. Lewenstein, P. Balcou, M. Y. Ivanov, A. L'Huillier, and P. B. Corkum, Phys. Rev. A 49, 2117 (1994).
shi13 H. Shi-Lin and S. Ting-Yun, Chin. Phys. B 22, 013101 (2013).
| http://arxiv.org/abs/1706.08785v1 | {
"authors": [
"Emanuele Coccia",
"Roland Assaraf",
"Eleonora Luppi",
"Julien Toulouse"
],
"categories": [
"physics.comp-ph",
"physics.atom-ph",
"physics.chem-ph"
],
"primary_category": "physics.comp-ph",
"published": "20170627112117",
"title": "Ab initio lifetime correction to scattering states for time-dependent electronic-structure calculations with incomplete basis sets"
} |
Treewidth Bounds for Planar Graphs Using Three-Sided Brambles

Karen L. Collins, Dept. of Mathematics & Computer Sci., Wesleyan University, Middletown CT, [email protected]
C. Smith, Dept. of Mathematics, Yale University, New Haven CT 06520, [email protected]

June 30, 2023

Square grids play a pivotal role in Robertson and Seymour's work on graph minors as planar obstructions to small treewidth. We introduce a three-sided bramble in a plane graph called a net, which generalizes the standard bramble of crosses in a square grid. We then characterize any minimal cover of a net as a tree drawn in the plane. We use nets in an O(n^3) time algorithm that computes both upper and lower bounds on the bramble number (hence treewidth) of any planar graph. Let G be a planar graph, BN(G) be its bramble number and λ(G) be the largest order of any net in a subgraph of G. Our algorithm outputs a constant, KB, so that λ(G)/4 ≤ KB ≤ BN(G) ≤ 4KB ≤ 4λ(G). Let s(G) be the size of a side of the largest square grid minor of G. Smith (2015) has shown that λ(G) ≥ s(G). Our upper bound improves that of Grigoriev (2011) when λ(G) ≤ (5/4)s(G). We correct a lower bound of Bodlaender, Grigoriev and Koster (2008) to s(G)/5 (instead of s(G)/4), and thus the lower bound of λ(G)/4 on our approximation is an improvement.

Keywords: Treewidth · Bramble · Tree decomposition · Planar graph · Grid minor · Net

§ INTRODUCTION

The treewidth of a graph is a fundamental idea in Robertson and Seymour's pioneering work on graph minors. For any graph G, let TW(G) be the treewidth of G (see <cit.>). The base case in the proof of the Graph Minor Theorem <cit.> relies on square grids for two reasons: (1) the n × n square grid has treewidth n, so the family of square grids has unbounded treewidth, and (2) each square grid is planar and hence has genus zero. Robertson and Seymour use their well-known Grid-Minor Theorem (also called the Excluded Grid Theorem) to start an induction on the genus of a graph. We replace squares with triangles in this role. In a graph, every 4-cycle contains a 3-cycle as a minor. In this sense, searching for a triangle is more general than searching for a square.

We proceed by addressing the dual problem to treewidth, finding a graph's bramble number. Let BN(G) be the bramble number of a graph G. Seymour and Thomas first introduced brambles (originally called screens) in <cit.> to get lower bounds on the treewidth of a graph. For any graph G, TW(G) = BN(G) - 1. Bellenbaum and Diestel have a short proof of this result in <cit.>. See also Reed <cit.> for more on brambles. Bodlaender <cit.> surveys various equivalent notions to treewidth.

Our method for bounding treewidth will be to define a special class of brambles called nets. Nets were introduced in Smith's thesis <cit.>, where they were used to construct two families of planar graphs, each containing a minor-minimal obstruction to any treewidth. Nets are three-sided brambles in plane graphs. A formal definition of nets appears in Section 3. Nets can be thought of as a generalization of a natural bramble in a square grid called the bramble of crosses, described in <cit.>. Each cross contains vertices from one row and one column in the grid.
Smith <cit.> defines n-triangular grids for n ≥ 3 (see Figure <ref>), and considers brambles whose elements meet all three sides of the triangle (rather than four sides, as in the bramble of crosses in the square grid). In particular, he proves that this bramble in the n-triangular grid has order n. Thus, triangular grids are a natural three-sided analogue to square grids. In this context, a bramble of crosses becomes a bramble of trees, each with a unique root and three branches. This three-sided bramble in a triangular grid provides a canonical example of a net in a plane graph.

It is important to note that the n-triangular grid does not contain an n × n square grid. The largest square grid minor has side length at most (n+1)/√2 because, counting vertices, there are n+1 choose 2 = n(n+1)/2 vertices in an n-triangular grid and m^2 vertices in an m × m square grid. Thus, the bramble number of a triangular grid is larger than the bramble number of any of its square grid minors. These examples motivate us to improve lower bounds on treewidth for planar graphs found using square grids by finding high order three-sided nets.

We define λ(G) to be the largest order of any net in a subgraph of G. Let s(G) be the size of the side of the largest square grid minor in G. Smith <cit.> has shown that s(G) ≤ λ(G); in particular, when G is an s × s square grid, λ(G) = s. Chekuri and Chuzhoy (2016) show that there is a polynomial relationship between the treewidth and the square grid minor size of a graph <cit.>. Since λ(G) ≥ s(G), their results demonstrate a polynomial relationship between the treewidth and the net order of a graph.

Smith <cit.> has a polynomial time algorithm to compute the minimum cover of a net, giving a lower bound for BN(G). In Section 4, we present a faster such algorithm. Given a net of a planar graph G, we construct an O(n^2) time algorithm, Net-Alg, whose output is a minimum cover. We characterize a cover as a tree drawn in the plane, which may go through the faces, edges or vertices of G. Net-Alg proceeds by the shortest path algorithm of Henzinger et al. <cit.>, with inspiration from Dreyfus <cit.>, to find a Steiner tree that meets all three sides of the net.

We use nets to replace square grids in the work of Bodlaender, Grigoriev and Koster <cit.>. They construct a rooted-search-tree algorithm to find lower bounds on the treewidth of a graph. In particular, their algorithm finds a square grid minor of size LB_2. The claim is that LB_2 ≥ s(G)/4; however, as we will show in Section 6, the proof shows only that LB_2 ≥ s(G)/5. Grigoriev <cit.> writes a new algorithm, using ideas from <cit.>, to construct a tree decomposition of a graph and thus gets an upper bound on treewidth. We adapt ideas from both <cit.> and <cit.> in our rooted-search-tree algorithm, BT-Alg. Our algorithm uses nets to achieve both lower and upper bounds on bramble number, as seen in the following theorem.

Theorem 5.7. Let G be a planar graph. Then BT-Alg computes KB in O(|G|^3) time, and λ(G)/4 ≤ KB ≤ BN(G) ≤ 4KB ≤ 4λ(G).

The proof appears in Section 5. Theorem <ref> improves the upper bound of 5s(G) - 6 from <cit.> whenever λ(G) ≤ (5/4)s(G). Since λ(G) ≥ s(G), with the correction in Section 6, our lower bound is better than <cit.>.

The outline of the paper is as follows. In Section 2 we give some preliminaries. In Section 3 we define nets and give an alternative characterization of a cover of a net. In Section 4 we construct Net-Alg. In Section 5 we construct BT-Alg and prove Theorem <ref>.
In Section 6 we correct the lower bound of Bodlaender et al. We make some concluding remarks in Section 7.

§ PRELIMINARIES

In this paper, every graph will be simple; for a graph G, we let V(G) denote its vertex set and E(G) denote its edge set. For each vertex u ∈ V(G), we let N_G(u) denote its neighborhood, the set of vertices adjacent to u in G. For any subset of vertices U ⊆ V(G), let G[U] denote the subgraph of G induced by U. If G[U] is connected, then we say the vertex set U is connected in G. We denote the induced subgraph G[V(G) \ U] by G - U. We refer to the power set of vertices in G as 𝒫(V(G)). If T is a tree and s, t ∈ V(T), then let sTt denote the unique path from s to t in T. Similarly, if W = (u_0, ..., u_n) is a walk in a graph, then for any 0 ≤ i ≤ j ≤ n, let u_iWu_j denote the subwalk from u_i to u_j in W. If W is a closed walk, let (u_jWu_n, u_1Wu_i) denote the concatenated walk (u_j, u_j+1, ..., u_n, u_1, u_2, ..., u_i).

Let G be a graph, let T be a tree and let β: V(T) → 𝒫(V(G)). The pair (T, β) is a tree decomposition of G if and only if
* if u ∈ V(G), then there is some t ∈ V(T) so that u ∈ β(t),
* if uv ∈ E(G), then there is some t ∈ V(T) such that u, v ∈ β(t), AND
* if t_1, t_2, t_3 ∈ V(T) and t_2 ∈ V(t_1Tt_3), then β(t_1) ∩ β(t_3) ⊆ β(t_2).
The width of a tree decomposition is defined as width(T, β) := max{|β(t)| : t ∈ V(T)} - 1, and the treewidth of a graph G is TW(G) := min{width(T, β) : (T, β) is a tree decomposition of G}.

Using ideas from Reed in <cit.>, we present an equivalent definition for a tree decomposition by looking at the inverse mapping
β^-1: V(G) → 𝒫(V(T)), u ↦ {t ∈ V(T) : u ∈ β(t)}.

Let G be a graph, T a tree and β: V(T) → 𝒫(V(G)). Then (T, β) is a tree decomposition of G just in case both
* if u ∈ V(G), then β^-1(u) is nonempty and connected in T, AND
* if uv ∈ E(G), then β^-1(u) ∩ β^-1(v) ≠ ∅.
The proof of lemma <ref> follows directly from the definition of a tree decomposition; a direct computational check of these two conditions is sketched at the end of this section.

Now we give the precise definition of a bramble. Let G be a graph. For two subsets of vertices A, B ⊆ V(G), we say that A and B touch if either
* A ∩ B ≠ ∅, OR
* there is some uv ∈ E(G) such that u ∈ A and v ∈ B.
A finite collection ℬ = {B_i ⊆ V(G)}_i ∈ [n] of connected, mutually touching vertex sets is called a bramble in G. A set C ⊆ V(G) covers ℬ if C nontrivially intersects each vertex set in the bramble. The order of a bramble ℬ is defined as order(ℬ) := min{|C| : C covers ℬ}, and the bramble number of a graph G is BN(G) := max{order(ℬ) : ℬ is a bramble in G}.

For a simple example of a bramble in a graph, we can take the collection of vertices (as singleton sets) from a clique. The order of this bramble is the number of vertices in the clique. In particular, the bramble number of any graph is bounded below by its clique number. However, this example does not take advantage of the ability to have intersection among sets. Allowing sets in a bramble to intersect gives a much more robust class of obstructions to low treewidth.

When talking about graph embeddings, we follow the conventions of West <cit.>. In particular, we refer to a graph embedded in the plane as a plane graph, and for a plane graph G, a face F of G is a maximal region in the plane such that for any two points in F, there is a curve avoiding G connecting those points. For computational reasons, we point out that in any plane graph, if v is the number of vertices, e is the number of edges and f is the number of faces, then e ≤ 3v - 6 and f ≤ 2v - 4. Therefore, any computation that iterates over vertices, edges and faces of a graph still runs in O(v) time. With this in mind, for any planar graph we let |G| = |V(G)|.
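The two conditions of the lemma translate directly into a small validity check. The sketch below is ours and uses the networkx library; G and T are graphs and beta is an assumed mapping from each tree node to its vertex set.

import networkx as nx

def is_tree_decomposition(G, T, beta):
    """Check the characterization via the inverse map: every vertex's set of
    bags, beta^{-1}(u), must be nonempty and connected in T, and every edge
    of G must have both ends in a common bag."""
    inv = {u: {t for t in T if u in beta[t]} for u in G}
    for u, bags in inv.items():
        if not bags or not nx.is_connected(T.subgraph(bags)):
            return False
    return all(any(u in beta[t] and v in beta[t] for t in T)
               for u, v in G.edges())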
§ INTRODUCING NETS AND CHARACTERIZING NET COVERS

We are ready to define a natural family of three-sided brambles that occur in plane graphs, generalizing the bramble of crosses in a square grid graph described in the introduction.

Let G be a connected plane graph and let W = (u_0, u_1, ..., u_n) be a closed walk peripheral to the unbounded face of G, so u_0 = u_n. For any choice of j and k so that 0 ≤ j ≤ k ≤ n, we call the triple ℱ = (W, j, k) a 3-frame of G. A 3-frame decomposes W into three subwalks with overlapping endpoints. Given a 3-frame ℱ = (W, j, k), the three subwalks u_0Wu_j, u_jWu_k and u_kWu_n are called the sides of ℱ. Throughout the rest of the paper we refer to the vertex sets of the sides of the 3-frame by colors:

blue = {u_0, ..., u_j}; red = {u_j, ..., u_k}; yellow = {u_k, ..., u_n}.

Given a 3-frame ℱ, we call X ⊆ V(G) an ℱ-vine if X is connected and contains at least one vertex in each side of ℱ. That is, an ℱ-vine has at least one blue, one red and one yellow vertex. Given a 3-frame ℱ, we define the ℱ-net of G as the collection of all ℱ-vines. We denote the ℱ-net of G by 𝐍(G, ℱ).

The graph in Figure <ref> is given a frame with sides {a, b}, {b, c} and {c, d, a}. This frame defines a net whose elements are connected subsets of vertices, each of which intersects all three sides. The minimal elements of this net are {a, b}, {b, c}, {a, c, d} and {b, d, e}. Figure <ref> shows a 3-frame of a 6-triangular grid. Note that the vertex sets blue, red and yellow are not disjoint, sharing at least one vertex between pairs. In the case where W is a cycle, each pair of colors intersects on exactly one vertex, but both W and our choice of j and k may cause more intersection between the sides.

Let G be a connected plane graph with a 3-frame ℱ. If X_1, X_2 ∈ 𝐍(G, ℱ), then X_1 ∩ X_2 ≠ ∅.

For the sake of contradiction, suppose there are X_1, X_2 ∈ 𝐍(G, ℱ) such that X_1 ∩ X_2 = ∅. Define a plane graph obtained from G as follows. First, add three new vertices, x_b, x_r and x_y, to the unbounded face of G so that x_b is adjacent to each vertex in blue, x_r is adjacent to each vertex in red and x_y is adjacent to each vertex in yellow. Moreover, draw these edges so that x_b, x_r and x_y remain peripheral to the unbounded face of the graph. Then add another vertex x_0 to the unbounded face and draw edges from x_0 to x_b, x_r and x_y in such a way that preserves our proper embedding. The vertex sets {x_b}, {x_r}, {x_y}, {x_0}, X_1 and X_2 induce a minor isomorphic to K_3,3. We have a proper planar embedding of this graph, contradicting Wagner's theorem. Therefore, X_1 ∩ X_2 ≠ ∅.

The proof of the lemma demonstrates how the definition of a net takes advantage of the topology of the plane to guarantee intersection between any two vines. We will continue to use properties of our embedding to understand the order of these brambles.

In order to understand what a minimum size cover of a net looks like, we consider what would topologically prevent a vine from touching all three sides of the 3-frame. As we saw in the proof of lemma <ref>, any two vines have non-trivial intersection. That is, each vine in a net is itself a cover of the bramble. On the other hand, a sparse plane graph may not take full advantage of the space to find vines with few vertices. See Figure <ref>. For example, consider a cycle on 15 vertices, C_15 = (c_0, c_1, ..., c_15). We could evenly divide the cycle into a 3-frame, (C_15, 5, 10), and any (C_15, 5, 10)-vine would use at least five vertices. However, the set of two vertices {c_5, c_10} covers the (C_15, 5, 10)-net. We can use the embedding to understand why this set covers the bramble by adding an edge between c_5 and c_10. This additional edge would make {c_5, c_10} a (C_15, 5, 10)-vine, and any embedding of C_15 would afford us the space to make such an edge.
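The defining properties of vines and covers are easy to verify directly. The following sketch (ours, using networkx) takes the three sides as vertex sets; covers_net uses the observation that C covers the net exactly when no component of G - C meets all three sides, since such a component would itself be a vine.

import networkx as nx

def is_vine(G, X, blue, red, yellow):
    """X is an F-vine iff G[X] is connected and X meets all three sides."""
    X = set(X)
    if not X or not nx.is_connected(G.subgraph(X)):
        return False
    return all(X & side for side in (blue, red, yellow))

def covers_net(G, C, blue, red, yellow):
    """C covers N(G, F) iff G - C has no component meeting all three sides."""
    H = G.subgraph(set(G) - set(C))
    return not any(all(set(comp) & side for side in (blue, red, yellow))
                   for comp in nx.connected_components(H))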
To account for these latent vines, we need to pay attention to which connections are possible through faces of the embedding. We now define a new plane graph, obtained from a plane graph, that inherits connectivity from the bounded faces of our embedding.

Let G be a connected plane graph. For each bounded face f ∈ F(G):
* Create a vertex v_f inside the face f.
* Add an edge between v_f and each vertex peripheral to f.
We call the resulting plane graph the face graph of G, denoted G̃. Since each edge of G is contained in at most two faces of G, a greedy search on edges finds all faces in O(|G|) time. Notice that ℱ is also a 3-frame of G̃, so 𝐍(G, ℱ) ⊆ 𝐍(G̃, ℱ). The face graph is something like a combination of a plane graph with its dual, and it will give us a useful property concerning the connectedness of the graph.
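Constructing the face graph is mechanical once the bounded faces of the embedding are known. In this sketch (ours), the faces are assumed to be supplied as lists of peripheral vertices, for example as produced by a planar-embedding routine.

import networkx as nx

def face_graph(G, bounded_faces):
    """Return the face graph: a copy of G plus one new vertex per bounded
    face, joined to every vertex peripheral to that face. Face vertices are
    tagged ('face', i) so they are distinguishable from V(G)."""
    Gt = G.copy()
    for i, face in enumerate(bounded_faces):
        vf = ('face', i)
        Gt.add_node(vf)
        Gt.add_edges_from((vf, u) for u in set(face))
    return Gt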
The following definition is inspired by a similar definition used by Robertson and Seymour in <cit.>. Let G be a connected plane graph and let W = (u_0, u_1, ..., u_n) be a closed walk peripheral to G. For any quadruple 0 ≤ a ≤ b ≤ c ≤ d ≤ n, we say that (u_a, u_c) and (u_b, u_d) cross in W.

The Jordan Curve Theorem implies that for any two paths in a connected plane graph, if the endpoints of one cross the endpoints of the other in the peripheral walk, then the two paths share a common vertex. We use this fact in the following useful characterization of separating sets in face graphs.

Let G be a connected plane graph, let W be a closed walk peripheral to the unbounded face of G and let X ⊆ V(G). For any two vertices u, v ∈ W \ X, there is a (u, v)-path in G̃ - X if and only if there is no path in G[X] whose endpoints cross (u, v) in W.

We prove the forward direction by assuming there is some (u, v)-path, say P, in G̃ - X. If there were some (x, y)-path in G[X] so that (u, v) and (x, y) cross in W, then it would necessarily share a vertex with P. This is impossible since V(P) is disjoint from X.

For the reverse implication, restrict the embeddings of G and G̃ to the surface obtained by throwing out the unbounded face of G̃ (which is also the unbounded face of G). If there is no path in G[X] whose endpoints cross (u, v) in W, then u and v are contained in a single face of G[X]. By the definition of a face, there is some polygonal (u, v)-curve, say γ, contained in that face. As we follow along this curve from u to v, we obtain a finite multi-sequence, M, of vertices, edges and faces of G that intersect γ. This multi-sequence starts with the vertex u and is followed either by a face or an edge of G. In fact, any consecutive pair in M has one of six forms: (vertex, face), (vertex, edge), (edge, face), (edge, vertex), (face, edge) or (face, vertex). We will use M to find a (u, v)-walk in G̃ - X.

Obtain a multi-sequence, M', of vertices of G̃ - X as follows. For each face f of G in M, replace f with v_f ∈ V(G̃); clearly v_f ∉ X since X ⊆ V(G). For each edge e of G in M, since e is not an edge of G[X], at least one of its endpoints is in V(G) \ X. Therefore, we can replace e with one of its endpoints, v_e, that is not in X (choosing at random if both endpoints are not in X). In search of a (u, v)-walk, we consider the possible types of consecutive pairs in M. Since the edge relation is symmetric, it is enough to consider just the following three types:

Type 1: (vertex, face). If γ transitions from a vertex w ∈ V(G) \ X to a face f of G, then w must be on the boundary of f. By the definition of G̃, we know that wv_f ∈ E(G̃).

Type 2: (vertex, edge). If γ transitions from a vertex w ∈ V(G) \ X to an edge e of G, then w must be an endpoint of e. Thus, either w = v_e or wv_e ∈ E(G̃ - X).

Type 3: (edge, face). If γ transitions from an edge e of G to a face f of G, then e is on the boundary of f and so are both of its endpoints. Thus, v_ev_f ∈ E(G̃ - X).

From the case analysis, we see that M' must contain a (u, v)-walk in G̃ - X as a subsequence. Therefore, there is a (u, v)-path in G̃ - X.

We are ready to give an alternative characterization of a vertex set covering a net in a plane graph.

Let G be a connected plane graph with a 3-frame ℱ and let C ⊆ V(G). Then C covers 𝐍(G, ℱ) if and only if there is some Y ∈ 𝐍(G̃, ℱ) such that Y ∩ V(G) ⊆ C.

For the backward implication, let Y ∈ 𝐍(G̃, ℱ) with Y ∩ V(G) ⊆ C. Consider any X ∈ 𝐍(G, ℱ). Since G is a subgraph of G̃, we know that X ∈ 𝐍(G̃, ℱ), and lemma <ref> implies that X ∩ Y ≠ ∅. Because X ⊆ V(G), we have X ∩ (Y ∩ V(G)) ≠ ∅. And since X was chosen arbitrarily from 𝐍(G, ℱ), we can conclude that Y ∩ V(G), and hence C, covers 𝐍(G, ℱ).

For the forward implication, suppose C covers 𝐍(G, ℱ). Define W = (u_0, u_1, ..., u_n) and 0 ≤ j ≤ k ≤ n so that ℱ = (W, j, k). Let a be the maximum index in {0, ..., j} such that there is a path, say P_a, from some vertex in u_kWu_n to u_a in G - C. Let b be the maximum index in {j, ..., k} such that there is a path, say P_b, from some vertex in u_0Wu_j to u_b in G - C, and let c be the maximum index in {k, ..., n} such that there is a path, say P_c, from a vertex in u_jWu_k to u_c in G - C. Since C covers 𝐍(G, ℱ), we see that a < j, b < k and c < n, and u_a+1, u_b+1, u_c+1 ∈ C.

We claim that, by our choice of a and b, there is no path in G - C whose endpoints cross (u_a+1, u_b+1). For the sake of contradiction, suppose such a path, say Q, does exist. Let u_s, u_t ∈ V(G) \ C be the endpoints of Q, where a+1 < s < b+1, and either b+1 < t ≤ n or 0 < t < a+1. We consider the two possibilities for s. For example, see Figure <ref>.

Case 1: Suppose a+1 < s ≤ j. If b+1 < t ≤ k, then Q is a path from u_0Wu_j to u_t, and we contradict the maximality of b. If k < t ≤ n, then Q is a path from u_kWu_n to u_s, and we contradict the maximality of a. Finally, if 0 < t < a+1, then the endpoints of P_a cross the endpoints of Q in W, so P_a and Q are subpaths of the same component of G - C. This component has a path from a vertex in u_kWu_n to u_s, contradicting the maximality of a. Therefore, Case 1 leads to a contradiction.

Case 2: Suppose j < s < b+1. If b+1 < t ≤ k, then the endpoints of P_b cross the endpoints of Q in W. Hence, P_b and Q are subpaths of the same component of G - C, and this component has a path from a vertex in u_0Wu_j to u_t, contradicting the maximality of b. If k < t ≤ n, then the endpoints of P_b cross the endpoints of Q in W, so P_b and Q are subpaths of the same component of G - C. But this component must contain vertices in all three sides of ℱ, so it contains an ℱ-vine.
This contradicts the fact that C covers 𝐍(G, ℱ). Finally, if 0 < t < a+1, then the endpoints of P_a cross the endpoints of Q in W, so P_a and Q are subpaths of the same component of G - C. But this component must contain vertices in all three sides of ℱ, so it contains an ℱ-vine. This contradicts the fact that C covers 𝐍(G, ℱ). Therefore, Case 2 also leads to a contradiction.

Let X = V(G) \ C. We have seen that there is no path in G[X] whose endpoints cross (u_a+1, u_b+1). Lemma <ref> implies there is a (u_a+1, u_b+1)-path in G̃ - X. The same argument shows there is a (u_b+1, u_c+1)-path in G̃ - X. Therefore, V(G̃) \ X contains an ℱ-vine, say Y, in G̃. By the definition of X, Y ∩ V(G) ⊆ C, which completes the proof.

Let G be a connected plane graph with a 3-frame ℱ. Then order(𝐍(G, ℱ)) = min{|Y ∩ V(G)| : Y ∈ 𝐍(G̃, ℱ)}.

Using the topology of a planar embedding, the problem of finding a cover of a net is equivalent to finding a connected subgraph of the face graph that intersects all three sides of the 3-frame. In the next section, we algorithmically minimize such a cover using shortest paths.

§ NET-ALG: A MINIMUM SIZE COVER OF A NET

Let G be a connected plane graph with a 3-frame ℱ. In order to develop an algorithm that finds a minimum order cover of a net, define the indicator function 1_G on V(G̃) such that

1_G(v) = 1 if v ∈ V(G), and 1_G(v) = 0 if v ∈ V(G̃) \ V(G).

Let G̃_↔ denote the directed graph obtained from G̃ by replacing each edge with two directed edges of opposite orientation. Any directed path in G̃_↔ then corresponds to an undirected path of G̃ and vice versa. Moreover, two directed paths are vertex-disjoint in G̃_↔ just in case the corresponding paths in G̃ are vertex-disjoint. From 1_G, we obtain a weight function on the edges of G̃_↔, where each edge weight is given by the weight of its terminal vertex under 1_G. We take the weight of a directed path in G̃_↔ to be the sum of its edge weights. This gives the following distance function on G̃_↔:

δ_↔: V(G̃_↔) × V(G̃_↔) → {0, 1, 2, ...}, (u, v) ↦ min{ weight(P) : P is a directed (u, v)-path in G̃_↔ }.

We use this distance function to identify a specific structure possessed by any "minimum weight" ℱ-vine in G̃. This structure is essentially the three-vertex case of the Steiner tree characterization given by Dreyfus and Wagner in <cit.>.

Let G be a connected plane graph with a 3-frame ℱ. Suppose Y ∈ 𝐍(G̃, ℱ) is such that |Y ∩ V(G)| is minimum. Then G̃_↔[Y] contains a rooted tree T_↔, where T_↔ is the union of three internally disjoint shortest paths (with distance given by δ_↔), each starting at the root and terminating in one of the three sides of ℱ.

By definition, G̃[Y] contains a path with one endpoint in the blue side of ℱ and the other in the red side. Let P be such a path with |V(P) ∩ V(G)| minimum, and let u_b and u_r be the endpoints of P in blue and red, respectively. By definition, Y also contains at least one vertex in the yellow side of ℱ; let u_y ∈ Y be in yellow. Since G̃[Y] is connected, there is a path in G̃[Y] starting in P and terminating at u_y. Let Q be such a path, minimizing |V(Q) ∩ V(G)|, that is internally disjoint from P, and let v_0 be the endpoint of Q in P. Then G̃_↔[Y] contains three directed subpaths, v_0Pu_b, v_0Pu_r and Q. By definition, these paths are internally disjoint. Moreover, the minimality of |Y ∩ V(G)| implies there is no shorter path from v_0 to any side of the 3-frame, since otherwise we could use it to replace the current path to that side.
With this structural characterization in hand, we can search for a minimum size cover of a net using the single-source shortest path algorithm for planar graphs by Henzinger et al. <cit.>.

[Net-Alg]
Input: A connected plane graph G and a 3-frame of G, (W = (u_0, u_1, ..., u_n), j, k).
Idea: Using the characterization in lemma <ref>, we search through each vertex in the face graph, finding shortest paths from a "root" vertex to each of the three sides of our net. By minimizing the sum of the distances given by these paths, we obtain a (W, j, k)-vine in G̃ using the fewest possible vertices from G.
Initialization: Construct the directed plane graph G̃_↔. Let |V(G̃_↔)| = s, and give any order to the vertices, V(G̃_↔) = {v_1, ..., v_s}. Set bestcover = s and center = 0.
Iteration:
* For each i = 1, 2, ..., s:
* Run Henzinger et al.'s single-source shortest path algorithm <cit.> on G̃_↔ with source v_i, obtaining a weighted distance δ_↔(v_i, u) for each u ∈ V(G̃_↔).
* Set b(i) = min{δ_↔(v_i, u) : u ∈ V(u_0Wu_j)}, r(i) = min{δ_↔(v_i, u) : u ∈ V(u_jWu_k)}, and y(i) = min{δ_↔(v_i, u) : u ∈ V(u_kWu_n)}.
* Set d(i) = b(i) + r(i) + y(i) + 1_G(v_i).
* If d(i) < bestcover, set center = i.
* Set bestcover = min{bestcover, d(i)}.
* Run the single-source shortest path algorithm from <cit.> on G̃_↔ with source v_center to find P_b, P_r and P_y, shortest paths from v_center to V(u_0Wu_j), V(u_jWu_k) and V(u_kWu_n), respectively. Stop.
Output: Define the vertex set Y := V(P_b) ∪ V(P_r) ∪ V(P_y).

Given a connected plane graph G with a 3-frame ℱ, Net-Alg runs in O(|G|^2) time and outputs Y ⊆ V(G̃) with |Y ∩ V(G)| = order(𝐍(G, ℱ)).

By construction, Y is an ℱ-vine, and lemma <ref> implies |Y ∩ V(G)| is minimum among all such vines. Then theorem <ref> implies that Y ∩ V(G) is a minimum size cover of the ℱ-net in G. In the initialization of the algorithm, we can construct G̃ in O(|G|) time. From this graph, we can obtain G̃_↔ in O(|G|) time, since any planar graph has at most 3(|G| - 2) edges. In step 1 of the iteration, Henzinger's algorithm runs in O(|G|) time. We run this algorithm for each vertex in G̃, so step 1 completes in O(|G|^2) time. Step 2 runs in O(|G|) time. Therefore, the running time of Net-Alg is O(|G|^2).

Now that we have an algorithm for finding a minimum net cover in a particular framed plane graph, we will use it to search a graph (and subgraphs) for large order nets.
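A compact way to prototype Net-Alg is to replace the Henzinger et al. routine with three multi-source Dijkstra sweeps over the face graph, one per color class; this costs O(|G| log |G|) per sweep instead of O(|G|) but keeps the logic identical. The sketch below (ours) charges each step the 1_G-weight of the vertex being left, which reproduces δ_↔ measured toward the colored side; recovering the three paths, and hence the cover itself, would additionally require predecessor arrays in the sweep.

import heapq

def net_alg_prototype(Gt, Gvertices, blue, red, yellow):
    """Prototype of Net-Alg on the face graph Gt (a networkx graph whose
    vertex set contains V(G) = Gvertices plus face vertices). Returns the
    optimal root vertex and the order of the net, d(center)."""
    w = lambda v: 1 if v in Gvertices else 0   # the indicator 1_G

    def sweep(sources):
        # dist[v] = min over paths from a colored source to v of the total
        # 1_G-weight of all path vertices except v itself
        dist = {v: float('inf') for v in Gt}
        heap = [(0, v) for v in sources]
        for _, v in heap:
            dist[v] = 0
        heapq.heapify(heap)
        while heap:
            d, v = heapq.heappop(heap)
            if d > dist[v]:
                continue
            for u in Gt[v]:
                nd = d + w(v)              # charge the vertex being left
                if nd < dist[u]:
                    dist[u] = nd
                    heapq.heappush(heap, (nd, u))
        return dist

    db, dr, dy = sweep(blue), sweep(red), sweep(yellow)
    d = {v: db[v] + dr[v] + dy[v] + w(v) for v in Gt}
    center = min(d, key=d.get)
    return center, d[center]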
§ BT-ALG: UPPER AND LOWER TREEWIDTH BOUNDS

We are interested not only in the order of a net in a particular framing of a plane graph, but more importantly in what nets are possible in subgraphs of that graph. Any net is highly sensitive to the walk peripheral to the unbounded face of the embedding: if this walk is something simple like a 3-cycle, it severely limits the complexity of the net. However, higher order nets may be lurking in the interior of this embedding. For any plane graph G, let λ(G) denote the maximum order of any net in a subgraph of G.

Let G be a connected plane graph, let W = (u_0, ..., u_n) be a closed walk peripheral to the unbounded face of G, let ℱ = (W, j, k) be a 3-frame of G, and let Y ⊆ V(G̃) be an ℱ-vine obtained by Net-Alg. Suppose G' is a connected component of G - (Y ∩ V(G)). If u_a, u_b ∈ {u_0, ..., u_n} ∩ V(G') with a ≤ b, then either u_aWu_b or (u_bWu_n, u_1Wu_a) is a walk in G'.

We prove the statement by contradiction. Suppose there are integers s and t so that u_s ∈ V(u_aWu_b), u_t ∈ V(u_bWu_n, u_1Wu_a) and u_s, u_t ∈ Y. G̃[Y] is connected by definition, so there must be a (u_s, u_t)-path in G̃[Y]. Since (u_s, u_t) and (u_a, u_b) cross in W, any (u_a, u_b)-path in G' must contain a vertex in Y. This contradicts the hypothesis that u_a and u_b are contained in the same connected component of G - (Y ∩ V(G)).

Lemma <ref> implies that for any connected component G' of G - (Y ∩ V(G)), a closed walk peripheral to G' can be decomposed into two internally disjoint subwalks overlapping on their endpoints. One of these subwalks is a subwalk of W and the other has no interior vertices in W; that is, the interior vertices of this subwalk are uncolored in G. We are ready to present an algorithm for finding a high order net in a subgraph of G. Our algorithm is inspired by Bodlaender, Grigoriev and Koster's algorithm for finding a large square grid as a minor of a planar graph <cit.>. See Figure <ref> for an example of its output.

[BT-Alg]
Input: A connected planar graph G_0.
Initialization: Use Hopcroft and Tarjan's algorithm <cit.> to obtain a planar embedding of G_0, then define a 3-frame ℱ_0 of G_0. Set bestlow = -1.
Idea: We create a rooted tree search by associating our framed graph with a root node. We then remove a minimum order cover of the net to separate the graph into component subgraphs, each associated with a child of the root node, and we describe a consistent way in which to define a new frame on each component. We keep track of the current greatest order of any net found in this process with bestlow.
Iteration: Input the plane graph G_i and a 3-frame ℱ_i; note that i is a tuple. If |G_i| < bestlow, then let C_i = V(G_i) and proceed to the next ordered node in a breadth-first search; if no nodes remain unsearched, then stop. Otherwise |G_i| ≥ bestlow, and we do the following:
* Run Net-Alg for G_i to find a minimum size cover C_i ⊆ V(G_i) of 𝐍(G_i, ℱ_i). Set bestlow = max{bestlow, |C_i|}.
* Let G_(i,1), G_(i,2), ..., G_(i,m) be the components of G_i - C_i.
* For each q ∈ [m], we define a 3-frame ℱ_(i,q) of G_(i,q) that is consistent with the colors on the 3-frame of G_i, as follows. Every vertex that had a color in G_i retains its color in ℱ_(i,q). Lemma <ref> implies the colored vertices in ℱ_(i,q) form a subwalk of W_i containing at most two colors on its vertices; let e_1 and e_2 be the endpoints of this subwalk. Note that this colored subwalk may be a single vertex (in which case e_1 = e_2), or it may be the empty walk (in which case we let e_1 = e_2 be an arbitrary vertex peripheral to the unbounded face of G_(i,q)). Let S_(i,q) be the (e_1, e_2)-subwalk peripheral to G_(i,q) whose interior vertices are not colored in ℱ_i.
  * If exactly one color is missing in G_(i,q): assign every vertex of S_(i,q) the missing color.
  * If exactly two colors are missing in G_(i,q): choose (arbitrarily) one of the missing colors and assign every vertex of S_(i,q) that color; then choose one of e_1 or e_2 and assign it the other missing color in addition.
  * If all three colors are missing in G_(i,q): choose (arbitrarily) one of the missing colors and assign every vertex of S_(i,q) that color; then choose any vertex of S_(i,q) and assign it both of the two remaining colors in addition.
In each case, the three colors determine a 3-frame ℱ_(i,q) of G_(i,q).
* Recurse on child nodes (i,1), (i,2), ..., (i,m) in a breadth-first search.
Output: KB = bestlow.
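The control flow of BT-Alg can be sketched as a breadth-first loop over (subgraph, frame) pairs. In the sketch below (ours), the embedding-specific work is delegated to two assumed helpers: net_alg, returning a minimum net cover, and peripheral_subwalk, returning the uncolored peripheral subwalk S of a component together with its endpoints; the arbitrary choices in the recoloring cases are resolved here by always picking the endpoint e1.

from collections import deque
import networkx as nx

def bt_alg(G0, frame0, net_alg, peripheral_subwalk):
    """Skeleton of BT-Alg. frame0 maps each color to a vertex set;
    net_alg(G, frame) returns a minimum net cover; peripheral_subwalk(comp,
    frame) returns (S, e1, e2) for the uncolored peripheral subwalk of comp.
    Both helpers are assumed, embedding-aware routines."""
    COLORS = ('blue', 'red', 'yellow')
    bestlow = -1
    queue = deque([(G0, frame0)])
    while queue:
        G, frame = queue.popleft()
        if len(G) < bestlow:
            continue                         # prune: node cannot improve KB
        C = net_alg(G, frame)
        bestlow = max(bestlow, len(C))
        H = G.subgraph(set(G) - set(C))
        for comp in nx.connected_components(H):
            child = G.subgraph(comp).copy()
            f = {c: frame[c] & comp for c in COLORS}
            missing = [c for c in COLORS if not f[c]]
            if missing:
                S, e1, e2 = peripheral_subwalk(child, f)
                f[missing[0]] |= set(S)      # color the new peripheral subwalk
                if len(missing) >= 2:
                    f[missing[1]] |= {e1}    # arbitrary endpoint choice
                if len(missing) == 3:
                    f[missing[2]] |= {e1}
            queue.append((child, f))
    return bestlow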
We were careful in how we defined the 3-frames for child nodes of the search tree so that the net covers obtained at each step of the algorithm have the following nice property as separating sets in G.

Any subgraph associated to a node in the rooted search tree obtained in BT-Alg is incident with at most three cover sets found in previous iterations of step 1.

Let i be a node in the rooted search tree obtained by BT-Alg. If u ∈ V(G_i) is adjacent to some cover set, say C_i', associated to a previous node i' in the search tree, then C_i' separated u from at least one color of ℱ_i' in G_i', and u was assigned one of those missing colors, say blue. No later iteration between i' and i can separate a connected component containing G_i from the blue vertices. That is, C_i' is the unique cover set incident with G_i that separates V(G_i) from the blue side of the frame. Since there are only three colors in a 3-frame, at most three such cover sets can be incident with G_i.

For any connected planar graph G, the constant KB output by BT-Alg satisfies λ(G)/4 ≤ KB ≤ λ(G).

Since BT-Alg finds a net of order KB in a subgraph of G, we know that KB ≤ λ(G). Let 𝐌 be a net of order λ(G) in some subgraph of G. Lemma <ref> implies that every vine in 𝐌 covers 𝐌, so each vine contains at least λ(G) vertices. Consider a node i in our search tree such that V(G_i) contains a vine, say Y ∈ 𝐌, and no child of G_i contains a vine in 𝐌. Such a node must exist in our search tree since the graph associated to each leaf has fewer than λ(G) vertices. By lemma <ref>, G_i is incident with at most three net covers obtained in BT-Alg. Thus, every vine in 𝐌 that is not contained in G_i intersects at least one of these three net covers nontrivially. Furthermore, every vine in 𝐌 that is contained in G_i must intersect C_i. We can now cover 𝐌 with four covers obtained in the algorithm: at most three incident with G_i, plus C_i. By construction, each of these covers has size at most KB, so λ(G) = order(𝐌) ≤ 4KB.

Inspired by <cit.>, we use BT-Alg to construct a tree decomposition of a planar graph. This will give an upper bound on treewidth.

For any connected planar graph G, the constant KB output by BT-Alg satisfies TW(G) ≤ 4KB - 1.

We define a tree decomposition of G as follows. Let T be the tree constructed in the rooted tree search in BT-Alg. For each node i ∈ V(T), define

β(i) = C_i ∪ {u ∈ V(G) \ V(G_i) : N_G(u) ∩ V(G_i) ≠ ∅}.

That is, each vertex set β(i) contains the net cover C_i, plus all vertices outside of G_i which have a neighbor in G_i. We need to confirm that (T, β) possesses the three properties of a tree decomposition.

(1): In the algorithm, for each u ∈ V(G), there is some i ∈ V(T) such that u ∈ C_i, so u ∈ β(i).

(2): Suppose uv ∈ E(G), u ∈ C_i and v ∈ C_i'. If i = i', or if i ≠ i' and v ∈ V(G) \ V(G_i), then u, v ∈ β(i). Otherwise v ∈ V(G_i) \ C_i, implying G_i' is a subgraph of some connected component of G_i - C_i. By definition, u, v ∈ β(i').

(3): The third property is equivalent to showing that for any u ∈ V(G), β^-1(u) is connected. For any u ∈ V(G),

β^-1(u) = {i ∈ V(T) : u ∈ C_i, or (u ∉ V(G_i) and there exists v ∈ N_G(u) ∩ V(G_i))}.

If u ∈ C_j, then T[β^-1(u)] will be a subtree of T rooted at j.

We have shown that (T, β) is a tree decomposition of G. For any i ∈ V(T), lemma <ref> implies β(i) intersects at most four net covers defined in the algorithm. Therefore, |β(i)| ≤ 4KB. That is, the width of (T, β) is at most 4KB - 1.
Combining Theorems <ref> and <ref>, we get the following result on lower and upper bounds on the treewidth of a planar graph:

Let G be a planar graph. Then BT-Alg computes KB in O(|G|^3) time, and λ(G)/4 ≤ KB ≤ BN(G) ≤ 4KB ≤ 4λ(G).

We run BT-Alg on each connected component of G. Hopcroft and Tarjan's algorithm generates a planar embedding in O(|G|) time. In the iteration, Net-Alg runs in O(|G|^2) time. Moreover, the nodes in the search tree are in bijection with disjoint, nonempty cut-sets of G, so this iteration runs at most |G| times. Therefore, the rooted tree search completes in O(|G|^3) time. The upper and lower bounds on the bramble number follow directly from Theorem <ref> and Theorem <ref>, respectively.

§ CORRECTING A LOWER BOUND WITH SQUARE GRIDS

In this section, we explain the difficulties with <cit.>; in particular, with the proof of Theorem 3.2 of <cit.> and with Algorithm 𝒜_2 of <cit.>. Algorithm 𝒜_2 of <cit.> defines four sides of a plane graph and determines a tree of north-south or east-west cuts, and it inspired us to write BT-Alg. Theorem 3.2 of <cit.> finds a lower bound LB_2 using Algorithm 𝒜_2; it inspired us to prove Theorem 5.5. In the examples below, we follow their notation, using 𝒩, ℰ, 𝒮, 𝒲 as the dummy vertices adjacent to the paths North, East, South, West.

The first step of 𝒜_2 of <cit.> assigns roughly equal numbers of vertices to each of North, East, South and West. We note that throughout the algorithmic procedure, it is impossible to maintain four paths of roughly the same length. For instance, if a cut dramatically increases the number of peripheral vertices, then it is possible that one side will contain far fewer than one-quarter of the vertices in the new periphery. Thus, we cannot assume that the paths will be of approximately equal length, or that this constitutes an assignment of vertices to the new periphery. The problem with 𝒜_2 of <cit.> is that it is not specific in its assignment of new peripheral vertices to North, East, South or West at each iteration. When making the periphery assignments, there are choices that will not yield the desired result in the proof of Theorem 3.2 of <cit.>. Figure <ref> describes such a situation. Our example is a 6×6 grid with 𝒩, ℰ, 𝒮, 𝒲 each adjacent to a side of the grid. We choose C_1 to be the leftmost column that separates ℰ and 𝒲. The rule we use to assign vertices to the periphery is that we extend South to meet North, and West is atomic. Then C_i, 2 ≤ i ≤ 6, is the singleton vertex in West. This yields a connected component that is adjacent to six separations, where the expected number was four.

Next we consider 𝒜_2 of <cit.>, assuming that the assignment of vertices to the periphery occurs as in BT-Alg. We will show that there is an error in the proof of Theorem 3.2 of <cit.>, and that the proof shows only LB_2 ≥ s(G)/5, where s(G) is the size of the largest square grid minor of G. In the proof, the authors construct a rooted search tree where each node i is associated to a subgraph G_i of G. Each G_i is determined by at most two 𝒩-𝒮 vertex cuts and at most two ℰ-𝒲 vertex cuts, hence at most four vertex cuts. The children of G_i are the connected components of G_i - C_i, where C_i is an additional vertex cut, either 𝒩-𝒮 or ℰ-𝒲. The authors assume that at most four vertex cuts are enough to separate all child nodes of G_i.
They use this claim in the very last sentence of the proof of Theorem 3.2 of <cit.>, to show that one of these cuts must have size at least s(G)/4. However, it may take five vertex cuts to determine, simultaneously, all subgraphs associated to child nodes of i. Although each of these children is determined by at most four cuts, to simultaneously separate all of them we must include the four cuts that separate G_i and the cut that divides G_i into its children. Figure <ref> is an example of when five cuts are needed. If we implement Algorithm 𝒜_2 of <cit.> on the 5×5 grid, then 𝒜_2 could produce the sequence of five cut sets pictured: C_1, C_2, C_3, C_4 and C_5. After four iterations of the algorithm, there is only one component subgraph, associated to a single node in the rooted search tree. Then removing C_5 produces two component subgraphs, each of which is adjacent to only four cuts in the algorithm. However, all five cuts need to be removed from the original grid in order to obtain both component subgraphs. Thus, the bound proved in Theorem 3.2 of <cit.> shows LB_2 ≥ s(G)/5, not s(G)/4.

§ CONCLUDING REMARKS

We have shown that nets are a natural generalization of the high order brambles found in square grids. Moreover, using only three sides in their definition makes nets a more natural candidate for a high order obstruction to treewidth than the four-sided square grid minors. In practice, because a square grid minor has more structure than a net, it is harder to find. Thus we expect our algorithm to perform better than 𝒜_2 of <cit.>. It would be interesting to experimentally test BT-Alg for running time and efficiency on small graphs to see if this holds. In the experimental results in <cit.>, LB_2 is usually at least half of BN(G), so typically LB_2 is much larger than the theoretical lower bound. We expect that in practice, KB will be much greater than λ(G)/4. We expect only very specific constructions to achieve this theoretical lower bound. | http://arxiv.org/abs/1706.08581v1 | {
"authors": [
"Karen L. Collins",
"Brett C. Smith"
],
"categories": [
"math.CO",
"05C10, 05C83, 05C85"
],
"primary_category": "math.CO",
"published": "20170626201526",
"title": "Treewidth Bounds for Planar Graphs Using Three-Sided Brambles"
} |
Vladislav Pokorný (corresponding author) · Martin Žonda
Institute of Physics, The Czech Academy of Sciences, Na Slovance 2, CZ-18221 Praha 8, Czech Republic
Department of Condensed Matter Physics, Faculty of Mathematics and Physics, Charles University in Prague, Ke Karlovu 5, CZ-12116 Praha 2, Czech Republic

We study the effect of electron correlations on a system consisting of a single-level quantum dot with local Coulomb interaction attached to two superconducting leads. We use the single-impurity Anderson model with BCS superconducting baths to study the interplay between the proximity-induced electron pairing and the local Coulomb interaction. We show how to solve the model using the continuous-time hybridization-expansion quantum Monte Carlo method. The results obtained for experimentally relevant parameters are compared with results of self-consistent second-order perturbation theory as well as with the numerical renormalization group method.

Keywords: superconducting quantum dot; quantum Monte Carlo; zero-pi phase transition

§ INTRODUCTION

Conventional Josephson junctions have become standard building blocks of various electronic devices, including SQUIDs <cit.>, RSFQs <cit.>, and qubits <cit.> in quantum computing. No wonder that their tunable generalizations, the superconducting quantum dots, attract a lot of attention from both theorists and experimentalists. These hybrids, where a quantum dot is placed between two superconducting leads, promise a great deal of technological advances such as quantum supercurrent transistors <cit.>, monochromatic single-electron sources <cit.> or single-molecule SQUIDs <cit.>. Of no less importance is the fact that they are rich, and relatively easy to deal with, playgrounds for studying various physical phenomena. These include the competition between the Kondo effect and superconductivity <cit.>, the Andreev subgap transport <cit.>, as well as quantum phase transitions from an impurity spin-singlet to a spin-doublet ground state, which are in experiments signaled by the sign reversal of the supercurrent (0-π transition) and accompanied by a crossing of the subgap Andreev bound states (ABS) <cit.>.

The superconducting quantum dot can be adequately described by a single-impurity Anderson model (SIAM) coupled to BCS leads <cit.>. Consequently, a lot of different theoretical approaches have been applied to study this system. Many important results have been obtained using various (semi)analytical methods based on different perturbation approaches <cit.>. Moreover, as was shown in recent studies <cit.>, a surprisingly large portion of the parametric space of the superconducting SIAM can be reliably covered with a properly formulated second-order perturbation theory (2ndPT) in the on-dot electron Coulomb interaction. Unfortunately, this method cannot describe the π-junction behavior due to the ground-state degeneracy. None of the mentioned (semi)analytical perturbative methods can cover all experimentally relevant cases. Therefore there is a big demand for "heavy" numerical methods. A very good quantitative agreement with experiments can be obtained with the numerical renormalization group (NRG) <cit.> and quantum Monte Carlo (QMC) <cit.> methods. Although both methods have some disadvantages, including their computational demands, they have, besides parametric universality, one big practical advantage.
Namely, the existence of well-tested, versatile, open-source software packages. In the present paper we focus on the continuous-time hybridization-expansion quantum Monte Carlo (CT-HYB) <cit.> implementation for experimentally inspired parameters <cit.> representing a strong-coupling regime, which is beyond the reach of most (semi)analytical techniques. We show how to include superconductivity in the CT-HYB quantum Monte Carlo solver. Then we study various single-particle quantities as functions of the gate voltage. We discuss how they behave near the quantum phase transition and show that CT-HYB can be reliably used to obtain the phase diagram. We also use numerical analytic continuation to obtain the spectral function. We compare all obtained CT-HYB results with either 2ndPT or NRG.

§ THE MODEL HAMILTONIAN

We describe the system by the single-impurity Anderson model with BCS leads. The Hamiltonian reads

ℋ = ℋ_dot + ∑_s=L,R (ℋ^s_lead + ℋ^s_c),

where s=L,R denotes the left and right leads. The impurity Hamiltonian describes a single-level atom with local Coulomb repulsion U and on-site energy ε_σ = ε + σB, where B is the external magnetic field:

ℋ_dot = ∑_σ ε_σ d_σ^† d_σ + U d_↑^† d_↑ d_↓^† d_↓.

The Hamiltonian of the BCS leads reads

ℋ^s_lead = ∑_𝐤σ ε(𝐤) c_s,𝐤σ^† c_s,𝐤σ - Δ ∑_𝐤 (e^iΦ_s c_s,𝐤↑^† c_s,-𝐤↓^† + H.c.),

where Δe^iΦ_s is the complex gap parameter. We assume the same gap size in both leads, Δ_L = Δ_R = Δ, meaning that the leads are made from the same material, as is usual in the experimental setups. Finally, the coupling part reads

ℋ^s_c = -∑_𝐤σ t_s (c_s,𝐤σ^† d_σ + H.c.),

where t_s denotes the tunneling matrix element.

Hamiltonian (<ref>) does not conserve the electron number and therefore cannot be solved directly using the standard CT-HYB technique. To circumvent this problem we utilize a canonical particle-hole transformation in the spin-down sector,

d_↑ → d_↑, d_↓ → d_↓^†, d_↑^† → d_↑^†, d_↓^† → d_↓,
c_𝐤↑ → c_𝐤↑, c_𝐤↓ → c_-𝐤↓^†, c_𝐤↑^† → c_𝐤↑^†, c_𝐤↓^† → c_-𝐤↓,

previously used by Luitz and Assaad <cit.> to include superconductivity in continuous-time interaction-expansion (CT-INT) QMC calculations. The new quasiparticles are identical to electrons in the spin-up sector and to holes in the spin-down sector. This transformation maps our system to a SIAM with attractive interaction -U and off-diagonal hybridization of the quantum dot with the leads. The local energy levels transform as ε_σ → σε_σ. Since ε_σ = ε + σB and σ^2 = 1, this transformation maps the local energy ε onto the magnetic field B and vice versa. The dispersion and tunneling matrix elements transform in the same manner, ε(𝐤) → σε(𝐤) and t_s → σt_s. The resulting Hamiltonian conserves the total electron number and can be solved using standard CT-HYB implementations.

§ THE CT-HYB METHOD

We use the TRIQS/CTHYB Monte Carlo solver <cit.>. We consider a flat density of states in the leads of finite half-width D = 30Δ. The coupling of the quantum dot to the leads is described by tunneling rates Γ_s = π|t_s|^2/(2D). We denote Γ = Γ_R + Γ_L and consider only the symmetric coupling Γ_R = Γ_L. Any asymmetric coupling Γ_R ≠ Γ_L with the same total Γ can be easily obtained from the symmetric solution using a simple analytical relation derived in Ref. <cit.>.

Continuous-time quantum Monte Carlo belongs to a family of inherently finite-temperature methods, and the calculations are usually restricted to rather high temperatures.
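As an aside to the particle-hole mapping above: its action on the isolated dot can be checked directly in the four-state local Fock space. The following minimal sketch assumes the particle-hole symmetrized convention for the dot Hamiltonian (an assumption of this illustration, equivalent to shifting ε by U/2), in which the mapping is exactly U → -U with ε and B exchanged.

```python
import numpy as np

def dot_spectrum(eps, B, U):
    """Eigenvalues of the symmetrized dot Hamiltonian
    H = sum_s eps_s (n_s - 1/2) + U (n_up - 1/2)(n_dn - 1/2),
    with eps_s = eps + s*B, over the four occupation states."""
    levels = []
    for n_up in (0, 1):
        for n_dn in (0, 1):
            e = ((eps + B) * (n_up - 0.5) + (eps - B) * (n_dn - 0.5)
                 + U * (n_up - 0.5) * (n_dn - 0.5))
            levels.append(e)
    return np.sort(levels)

eps, B, U = 0.7, 0.2, 13.3
# n_dn -> 1 - n_dn flips the sign of (n_dn - 1/2): the transformed model
# has interaction -U and the roles of eps and B exchanged.
assert np.allclose(dot_spectrum(eps, B, U), dot_spectrum(B, eps, -U))
print(dot_spectrum(eps, B, U))
```

In the unsymmetrized convention of the Hamiltonian above, the same exchange holds up to a shift of the spin-up level by U and an overall constant.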
However, since the typical energy scale in our setup is the superconducting gap Δ ∼ 100 μeV, this allows us to easily reach the experimental range of temperatures T ∼ 10-100 mK. The biggest disadvantage of CT-HYB in comparison with NRG or 2ndPT is that the calculation is performed on the imaginary-time axis. Obtaining the spectral function from imaginary-time data is a well-known ill-defined problem. Various numerical methods are used to perform the analytic continuation, the maximum entropy method being the most common one <cit.>. However, this method fails to resolve sharp spectral features like the Andreev bound states. Therefore we use Mishchenko's stochastic optimization method (SOM) <cit.> in its recent implementation <cit.>, which is better suited to our needs.

§ RESULTS

Our calculations are inspired by the experiment of Pillet et al. <cit.>. The paper describes a tunneling spectroscopy measurement performed on a carbon nanotube connected to superconducting aluminum leads. The experimental results show the Andreev bound states as functions of gate voltage and are nicely reproduced using the NRG method. The superconducting gap is Δ = 150 μeV, the Coulomb interaction U ≈ 2 meV, and the phase difference is zero (Φ_L = Φ_R). We use these parameters in our calculations and set the magnetic field B to zero. It is worth noting that we did not encounter any fermionic sign problem during the calculation.

In Fig. <ref> we plot the diagonal (panel a) and the off-diagonal (panel b) part of the occupation matrix as functions of the shifted local energy level ε_U = ε + U/2 (ε_U = 0 represents the half-filled dot). From now on we use Δ as the energy unit. We restrict ourselves to positive values of ε_U, as the rest can be determined from symmetry. We chose parameters U = 13.3Δ and Γ_L = Γ_R = 0.45Δ, which are within the experimental range. The three solid lines are CT-HYB results calculated at inverse temperatures βΔ = 10 (green), 20 (blue) and 40 (red) that correspond to temperatures T = 175 mK, 77 mK and 44 mK, respectively. The diagonal part corresponds to the electron density n = ⟨d^†d⟩. It varies very weakly in the π phase, then changes abruptly at the phase transition. The position of the phase transition can be more easily determined from the off-diagonal part, which represents the induced gap μ = ⟨d^†d^†⟩. This parameter is negative in the π-phase and positive in the 0-phase. We see that the temperature has very little effect on the position of the phase transition, which takes place at ε_U ≈ 5.5Δ. We also included the results of the 2ndPT method, which is available only in the 0-phase. It fits the CT-HYB results very well in this phase, although it gives the phase transition at ε_U = 6.15Δ (c.a. 12% error). However, this discrepancy is expected, as we are investigating a strong-coupling regime (U/Δ ≫ 1 and U/Γ ≫ 1).

The inset of panel a shows the average perturbation order ⟨k⟩, scaled by the inverse temperature β, for the CT-HYB calculations in the main panels. This quantity is an estimator of the kinetic energy <cit.>. It scales linearly with β and exhibits a maximum just above the phase transition point. Although the zero-temperature extrapolation of the position of the maxima could in principle be used to estimate the phase transition point, safe determination of its position from ⟨k⟩ requires a rather elaborate procedure <cit.>.

Calculation of a spectral function requires much more precise QMC data than the calculation of an expectation value, due to the underlying, ill-defined analytic continuation procedure.
While expectation values with reasonable error bars can be obtained within a few CPU-hours, the calculation of a spectral function, including the SOM procedure, usually takes more than 100 CPU-hours, depending strongly on the temperature and the coupling strength Γ. In Fig. <ref> we plot the color map of the spectral function at βΔ = 40 (T = 44 mK) in the vicinity of the gap region, as it can be directly compared to the experimental data. We use the same parameters as in Fig. <ref> and compare it with the position of the ABS calculated using the NRG and 2ndPT methods at T = 0. The NRG results were obtained using the NRG Ljubljana code <cit.>. The in-gap maxima of the spectral function in the 0-phase and around the transition point are in very good agreement with the positions of the ABS calculated by NRG. In the π-phase, the position of the maxima tends to shift to higher energies. This is surprising, as the electron density and the induced gap values are in good agreement with NRG even in this region. The position of the peaks also does not depend on the temperature, and while it does depend on the width of the non-interacting band D, this dependence is weak and it affects the position of the maxima equally in the 0 and π phases; therefore it cannot explain this discrepancy.

In order to get some insight into this problem, we plot in Fig. <ref> the spectral functions calculated using the NRG and CT-HYB methods. The top panel shows results for ε_U = 7Δ, which is in the 0-phase. We see that the arrows that represent the ABS from the NRG calculation match the maxima of the spectral function obtained using the SOM procedure from CT-HYB data. We also see that the CT-HYB spectra are missing the structure just above the gap edges at ±Δ. The bottom panel shows spectral functions for ε_U = 2Δ (π-phase). The mismatch between the arrows and the maxima is clearly visible, and we do not have a satisfactory explanation of this discrepancy.

The local energy level ε is a parameter that can be easily tuned in experimental setups by changing the gate voltage. On the other hand, the tunneling rate Γ is very hard to measure and is usually obtained from numerical fits <cit.>. Studying the relation between these parameters is therefore important for the interpretation of experimental results.
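The delicacy of the analytic continuation discussed above can be made concrete with the fermionic imaginary-time kernel relating A(ω) to the QMC-measured G(τ). In the following minimal sketch (the two-peak model spectrum, grids and peak widths are our own illustrative choices, not data from the calculations above), shifting a pair of sharp in-gap peaks by 0.03Δ changes G(τ) only at the level of a few times 10^-2, which is why sharp ABS features are hard to pin down from noisy data:

```python
import numpy as np

beta = 40.0                                    # inverse temperature, units of 1/Delta
omega = np.linspace(-3.0, 3.0, 6001)           # frequency grid, units of Delta
tau = np.linspace(0.0, beta, 401)
dw = omega[1] - omega[0]

def gtau(A):
    """G(tau) = -Int dw A(w) exp(-tau*w) / (1 + exp(-beta*w))."""
    kernel = np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))
    return -kernel @ A * dw

def two_peaks(w0, gamma=0.02):
    """Normalized pair of narrow Lorentzians at +-w0 (mock Andreev bound states)."""
    A = (gamma / np.pi / ((omega - w0) ** 2 + gamma ** 2)
         + gamma / np.pi / ((omega + w0) ** 2 + gamma ** 2))
    return A / (np.sum(A) * dw)

dG = gtau(two_peaks(0.30)) - gtau(two_peaks(0.33))   # shift the peaks by 0.03*Delta
print(f"max |G1(tau) - G2(tau)| = {np.max(np.abs(dG)):.1e}")  # ~1e-2, vs |G(0)| = 0.5
```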
We presented results as functions of the gate voltage ε as this parameter is easily tunable in the experiment. We showed how the 0-π quantum phase transition point can be extracted from the behavior of the induced gap and presented the finite-temperature spectral function as well as the phase diagram in the ε-Γ plane that can be used to determine the value of the tunneling rate Γ.In summary, we showed that CT-HYB is an effective method for studying superconductingquantum dot systems, where the interaction strength is the dominant energy scale.The present formulation is sign problem free and one can access the low-temperature region using reasonable amount of computational resources. We also showed how the spectral function can be obtained using analytic continuation based on the Mishchenko's stochastic sampling method in order to study the behavior of the subgap Andreev bound states.Comparing the position of the subgap maxima with ABS frequencies from the NRG calculation shows good agreement in the 0-phase but a discrepancy in the π-phase which is of unknown origin. Furthermore, the model can be generalized to include a normal (non superconducting) electrode. As already pointed out in Ref. <cit.> where such a three-terminal device was studied,NRG and 2ndPT methods fail in this setup except special cases and CT-HYB becomes the method of choice. § ACKNOWLEDGMENTSResearch on this problem was supported by Grant No. 15-14259S of the Czech Science Foundation (V.P.) and the Grant No. DEC-2014/13/B/ST3/04451 of the National Science Centre (Poland) (M.Ž.). Access to computing and storage facilities owned by parties and projects contributing to theNational Grid Infrastructure MetaCentrum provided under the programme “Projects of LargeResearch, Development, and Innovations Infrastructures” (CESNET LM2015042), is greatly appreciated. V. P. thanks Roberto Mozara for the help with the stochastic optimization method. § REFERENCES 10 url<#>1urlprefixURL href#1#2#2 #1#1W1996 H. Weinstock, SQUID sensors: Fundamentals, fabrication and applications - introduction, in: H. Weinstock (Ed.), SQUID SENSORS: FUNDAMENTALS, FABRICATION AND APPLICATIONS, Vol. 329 of NATO ADVANCED SCIENCE INSTITUTES SERIES, SERIES E, APPLIED SCIENCES, NATO, Sci & Environm Affairs Div, 1996, pp. R13–R14, NATO Advanced Study Institute on SQUID Sensors - Fundamentals, Fabrication and Applications, ACQUAFREDDA DI MARATEA, ITALY, JUN 18-30, 1995.LS1991 K. K. Likharev, V. K. Semenov, RSFQ logic/memory family: A new Josephson-junction technology for sub-terahertz-clock-frequency digital systems, IEEE TRANSACTIONS ON APPLIED SUPERCONDUCTIVITY 1 (1) (1991) 3–28. http://dx.doi.org/10.1109/77.80745 doi:10.1109/77.80745.CW2008 J. Clarke, F. K. Wilhelm, Superconducting quantum bits, NATURE 453 (7198) (2008) 1031–1042. http://dx.doi.org/10.1038/nature07128 doi:10.1038/nature07128.Herrero-2006 P. Jarillo-Herrero, J. van Dam, L. Kouwenhoven, Quantum supercurrent transistors in carbon nanotubes, Nature 439 (7079) (2006) 953–956. http://dx.doi.org/10.1038/nature04550 doi:10.1038/nature04550.Zanten-2016 D. M. T. van Zanten, D. M. Basko, I. M. Khaymovich, J. P. Pekola, H. Courtois, C. B. Winkelmann, Single quantum level electron turnstile, Phys. Rev. Lett. 116 (2016) 166801. http://dx.doi.org/10.1103/PhysRevLett.116.166801 doi:10.1103/PhysRevLett.116.166801.Cleuziou-2006 J. P. Cleuziou, W. Wernsdorfer, V. Bouchiat, T. Ondarcuhu, M. Monthioux, Carbon nanotube superconducting quantum interference device, Nat. Nanotechnol. 1 (1) (2006) 53–59. 
http://dx.doi.org/10.1038/nnano.2006.54 doi:10.1038/nnano.2006.54.Domanski-2017 T. Domański, M. Žonda, V. Pokorný, G. Górski, V. Janiš, T. Novotný, Josephson-phase-controlled interplay between correlation effects and electron pairing in a three-terminal nanostructure, Phys. Rev. B 95 (4) (2017) 045104. http://dx.doi.org/10.1103/PhysRevB.95.045104 doi:10.1103/PhysRevB.95.045104.Rodero-2011 A. Martín-Rodero, A. Levy Yeyati, Josephson and Andreev transport through quantum dots, Adv. Phys. 60 (6) (2011) 899–958. http://dx.doi.org/10.1080/00018732.2011.624266 doi:10.1080/00018732.2011.624266.PJZG2013 J.-D. Pillet, P. Joyez, R. ŽŽitko, M. F. Goffman, Tunneling spectroscopy of a single quantum dot coupled to a superconductor: From Kondo ridge to Andreev bound states, Phys. Rev. B 88 (2013) 045101. http://dx.doi.org/10.1103/PhysRevB.88.045101 doi:10.1103/PhysRevB.88.045101.ZPJN2015 M. Žonda, V. Pokorný, V. Janiš, T. Novotný, Perturbation theory of a superconducting 0-π impurity quantum phase transition, Sci. Rep. 5 (2015) 8821. http://dx.doi.org/10.1038/srep08821 doi:10.1038/srep08821.Delagrange-2016 R. Delagrange, R. Weil, A. Kasumov, M. Ferrier, H. Bouchiat, R. Deblock, 0-π quantum transition in a carbon nanotube Josephson junction: Universal phase dependence and orbital degeneracy, Phys. Rev. B 93 (19) (2016) 195437. http://dx.doi.org/10.1103/PhysRevB.93.195437 doi:10.1103/PhysRevB.93.195437.Luitz-2012 D. J. Luitz, F. F. Assaad, T. Novotný, C. Karrasch, V. Meden, Understanding the Josephson current through a Kondo-correlated quantum dot, Phys. Rev. Lett. 108 (22) (2012) 227001. http://dx.doi.org/10.1103/PhysRevLett.108.227001 doi:10.1103/PhysRevLett.108.227001.Clerk-2000 A. A. Clerk, V. Ambegaokar, Loss of π-junction behavior in an interacting impurity Josephson junction, Phys. Rev. B 61 (13) (2000) 9109–9112. http://dx.doi.org/10.1103/PhysRevB.61.9109 doi:10.1103/PhysRevB.61.9109.Karrasch-2008 C. Karrasch, A. Oguri, V. Meden, Josephson current through a single Anderson impurity coupled to BCS leads, Phys. Rev. B 77 (2) (2008) 024517. http://dx.doi.org/10.1103/PhysRevB.77.024517 doi:10.1103/PhysRevB.77.024517.Meng-2009 T. Meng, S. Florens, P. Simon, Self-consistent description of Andreev bound states in Josephson quantum dot devices, Phys. Rev. B 79 (22) (2009) 224521. http://dx.doi.org/10.1103/PhysRevB.79.224521 doi:10.1103/PhysRevB.79.224521.ZPJN2016 M. ŽŽonda, V. Pokorný, V. Jani šš, T. Novotný, Perturbation theory for an Anderson quantum dot asymmetrically attached to two superconducting leads, Phys. Rev. B 93 (2016) 024523. http://dx.doi.org/10.1103/PhysRevB.93.024523 doi:10.1103/PhysRevB.93.024523.Yoshioka-2000 T. Yoshioka, O. Y., Numerical renormalization group studies on single impurity Anderson model in superconductivity: A unified treatment of magnetic, nonmagnetic impurities, and resonance scattering, J. Phys. Soc. Jpn. 69 (6) (2000) 1812–1823. http://dx.doi.org/10.1143/JPSJ.69.1812 doi:10.1143/JPSJ.69.1812.la2010 D. J. Luitz, F. F. Assaad, Weak-coupling continuous-time quantum Monte Carlo study of the single impurity and periodic Anderson models with s-wave superconducting baths, Phys. Rev. B 81 (2010) 024509. http://dx.doi.org/10.1103/PhysRevB.81.024509 doi:10.1103/PhysRevB.81.024509.qmc-RMP2011 E. Gull, A. J. Millis, A. I. Lichtenstein, A. N. Rubtsov, M. Troyer, P. Werner, Continuous-time Monte Carlo methods for quantum impurity models, Rev. Mod. Phys. 83 (2011) 349–404. http://dx.doi.org/10.1103/RevModPhys.83.349 doi:10.1103/RevModPhys.83.349.cthyb2016 P. Seth, I. Krivenko, M. 
cthyb2016 P. Seth, I. Krivenko, M. Ferrero, O. Parcollet, TRIQS/CTHYB: A continuous-time quantum Monte Carlo hybridisation expansion solver for quantum impurity problems, Comp. Phys. Commun. 200 (2016) 274-284. doi:10.1016/j.cpc.2015.10.023.
triqs2015 O. Parcollet, M. Ferrero, T. Ayral, H. Hafermann, I. Krivenko, L. Messio, P. Seth, TRIQS: A toolbox for research on interacting quantum systems, Comp. Phys. Commun. 196 (2015) 398-415. doi:10.1016/j.cpc.2015.04.023.
KZN2017 A. Kadlecová, M. Žonda, T. Novotný, Quantum dot attached to superconducting leads: Relation between symmetric and asymmetric coupling, Phys. Rev. B 95 (2017) 195114. doi:10.1103/PhysRevB.95.195114.
JG1996 M. Jarrell, J. E. Gubernatis, Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data, Phys. Rep. 269 (3) (1996) 133-195. doi:10.1016/0370-1573(95)00074-7.
MPSS2000 A. S. Mishchenko, N. V. Prokof'ev, A. Sakamoto, B. V. Svistunov, Diagrammatic quantum Monte Carlo study of the Fröhlich polaron, Phys. Rev. B 62 (2000) 6317-6336. doi:10.1103/PhysRevB.62.6317.
SOM-code I. Krivenko, TRIQS-based stochastic optimization method for analytic continuation, github.com/krivenko/som (2017).
H2007 K. Haule, Quantum Monte Carlo impurity solver for cluster dynamical mean-field theory and electronic structure calculations with adjustable cluster base, Phys. Rev. B 75 (2007) 155113. doi:10.1103/PhysRevB.75.155113.
HWWW2016 L. Huang, Y. Wang, L. Wang, P. Werner, Detecting phase transitions and crossovers in Hubbard models using the fidelity susceptibility, Phys. Rev. B 94 (2016) 235110. doi:10.1103/PhysRevB.94.235110.
Ljubljana-code R. Žitko, NRG Ljubljana - open source numerical renormalization group code, nrgljubljana.ijs.si (2014). | http://arxiv.org/abs/1706.08783v2 | {
"authors": [
"Vladislav Pokorný",
"Martin Žonda"
],
"categories": [
"cond-mat.mes-hall",
"cond-mat.str-el"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170627111845",
"title": "Correlation effects in superconducting quantum dot systems"
} |
These authors contributed equally to the preparation of this work.
Affiliations: Donostia International Physics Center, P. Manuel de Lardizabal 4, 20018 Donostia-San Sebastián, Spain; Department of Applied Physics II, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain; Max Planck Institute for Solid State Research, Heisenbergstr. 1, 70569 Stuttgart, Germany; Department of Condensed Matter Physics, University of the Basque Country UPV/EHU, Apartado 644, 48080 Bilbao, Spain; Department of Physics, Princeton University, Princeton, New Jersey 08544, USA; Princeton Center for Theoretical Science, Princeton University, Princeton, New Jersey 08544, USA; Max Planck Institute for Chemical Physics of Solids, 01187 Dresden, Germany; (on sabbatical) Laboratoire Pierre Aigrain, Ecole Normale Supérieure-PSL Research University, CNRS, Université Pierre et Marie Curie-Sorbonne Universités, Université Paris Diderot-Sorbonne Paris Cité, 24 rue Lhomond, 75231 Paris Cedex 05, France; (on sabbatical) Sorbonne Universités, UPMC Univ Paris 06, UMR 7589, LPTHE, F-75005, Paris, France.

Topological phases of noninteracting particles are distinguished by global properties of their band structure and eigenfunctions in momentum space. On the other hand, group theory as conventionally applied to solid-state physics focuses only on properties which are local (at high-symmetry points, lines, and planes) in the Brillouin zone. To bridge this gap, we have previously [B. Bradlyn et al., Nature 547, 298-305 (2017)] mapped the problem of constructing global band structures out of local data to a graph construction problem. In this paper, we provide the explicit data and formulate the necessary algorithms to produce all topologically distinct graphs. Furthermore, we show how to apply these algorithms to certain "elementary" band structures highlighted in the aforementioned reference, and so identify and tabulate all orbital types and lattices that can give rise to topologically disconnected band structures. Finally, we show how to use the newly developed BANDREP program on the Bilbao Crystallographic Server to access the results of our computation.

Graph Theory Data for Topological Quantum Chemistry
Barry Bradlyn

§ BACKGROUND & SUMMARY

One of the most unexpected developments in condensed matter physics was the recent discovery of noninteracting topological insulators, in which the global dependence of Bloch wavefunctions on crystal momentum is topologically distinct from that in the atomic limit. While the original classification of such topological phases incorporated only on-site symmetries such as time-reversal, charge conjugation, and particle-hole transformation<cit.>, the set of distinct topological insulators protected by crystal symmetry is much richer<cit.>.
However, the conventional application of crystal symmetry to band theory is in the local study of degeneracies at isolated high-symmetry points in the Brillouin zone, through the 𝐤·𝐩 method<cit.>. In this approach, one studies the subset of crystal symmetries that leaves a particular, isolated 𝐤-vector invariant. Electronic states at this momentum transform under irreducible representations of the little group, and Hamiltonians in the neighborhood of this wavevector can be perturbatively expanded in terms of these representations. This local approach obfuscates any connection to band topology, which requires understanding how these different localized descriptions fit together to make a global band structure. While there has been some recent work towards addressing how symmetry constrains global band structures<cit.>, a complete and constructive approach to the problem has not yet been presented.

On the other hand, we know from Heisenberg that global properties in momentum space map under the Fourier transform to local properties in position space. As such, one approach to topological band theory is to study the sets of energy bands that can arise from localized, symmetric orbitals in the atomic limit. This approach was introduced by the present authors in Ref. NaturePaper, using the theory of induced "band representations". We argued that all sets of bands induced from symmetric, localized orbitals are topologically trivial by design; furthermore, by contraposition, any group of bands that does not arise as a band representation must be topologically nontrivial. Determining whether or not a given set of bands transforms as a band representation, however, requires knowing global information about the band structure. In particular, topological phase transitions occur when isolated sets of topological bands disconnect from a band representation; cataloguing when these transitions can occur requires an understanding of the allowed connectivities of energy bands throughout the whole Brillouin zone. Thus, while the real-space approach makes the topology of Bloch bands manifest, it cannot be practically useful unless we directly address the issue of band connectivity in momentum space. In order to solve this problem, we introduced and briefly discussed in Ref. NaturePaper a mapping from the problem of patching together the local, 𝐤·𝐩 bands to form a global band structure to a problem in graph theory. As described in full theoretical detail in the subsequent Ref. GraphTheoryPaper, we map the little group representations appearing at each high-symmetry 𝐤-vector to a node in a "connectivity graph". Edges in this graph are drawn in accordance with the group-theoretic "compatibility relations" between representations, as defined in Refs. bradley, kvec, grouptheory. Given a set of irreducible representations at every 𝐤-vector – chosen for instance from an elementary band representation (EBR) – the problem of enumerating all the global ways in which these representations can be connected is equivalent to the problem of enumerating all distinct connectivity graphs. In the present work, we outline the algorithms we have developed both to construct these connectivity graphs, and to enumerate all the elementary band representations that allow for topologically disconnected band structures, both with and without spin-orbit coupling and time-reversal symmetry. In Secs.
<ref>–<ref> we present algorithms for finding the minimal set of paths through the Brillouin zone which fully determine the connectivity of a band structure. Next, after reviewing the formal aspects of the mapping of band connectivity to graph theory, we present two algorithms for constructing and identifying disconnected connectivity graphs. The first approach, given in Secs. <ref> and <ref>, involves a direct implementation of the group-theoretic constraints on the connectivity graph, and the application of spectral graph theory<cit.> to identify disconnected subgraphs (bands). The second approach, discussed in Sec. <ref>, builds disconnected subgraphs directly, growing outward from the 𝐤-vector with the fewest nodes, in the spirit of Prim's algorithm<cit.>. Throughout the discussion, we use the non-symmorphic space group P4/ncc (130) as an illustrative example. We have applied these algorithms to tabulate the allowed connectivities of all band representations. We have made the entirety of this data available in the form of end-user programs, whose output is described in Sec. <ref>. In Sec. <ref> we show how to apply our data to the physically relevant example of graphene, the prototypical (symmorphic) topological insulator; this also serves as a consistency check on our algorithms. Finally, in Sec. <ref>, we show how to access and utilize our data.

§ METHODS

§.§ Determination of minimal 𝐤-vectors and paths

To begin to approach the problem of computing all distinct connectivity graphs for each space group, we must first enumerate a minimal set of paths through the Brillouin zone that are needed for the computation, as described in Ref. GraphTheoryPaper. To that end, we have developed an algorithm which identifies a privileged set of maximal 𝐤-vectors (defined below) for all 230 space groups, and the minimal set of non-redundant connections between them needed to construct a connectivity graph. The details of the theory and definitions used in the algorithm are described in Ref. grouptheory, although we summarize the essentials here. Given a space group G and a 𝐤-vector in reciprocal space, the symmetry operations R of the point group that keep 𝐤 invariant modulo a reciprocal lattice translation – i.e., those that fulfill the relation

𝐤R = 𝐤 + 𝐊,

with 𝐊 any vector of the reciprocal lattice – belong to the little co-group G̅_𝐤 of 𝐤. Note that, to be consistent with the crystallographic conventions of Ref. grouptheory, we define the little co-group as acting on 𝐤-vectors from the right. The little co-group of a 𝐤-vector is isomorphic to one of the 32 crystallographic point groups. For each 𝐤-vector 𝐤_i, we consider the closure of the largest continuous submanifold of reciprocal space which contains 𝐤_i, such that the little co-group of every point in the manifold is isomorphic to G̅_𝐤_i. [It is important to take the closure of the manifolds defined in this way, in order to ensure that the high-symmetry points at the endpoints of lines are defined to be contained in the lines, etc.] We refer to such a submanifold – a point, line, plane, or volume of 𝐤-vectors – as a 𝐤-manifold. Each 𝐤-manifold is identified by a letter, and is specified by coordinate triplets depending on 0, 1, 2 and 3 free parameters, respectively. We emphasize the (rather tautological) fact that picking specific values for these free parameters yields a 𝐤-vector contained in the 𝐤-manifold.
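As a minimal illustration of this defining condition (our own toy implementation; the operation matrices below are a small, hand-written subset of a tetragonal point group, not the tabulated data used in the actual computation), the condition 𝐤R = 𝐤 + 𝐊 can be checked directly on reduced coordinates:

```python
import numpy as np

def is_reciprocal_lattice_vector(K, tol=1e-8):
    return np.allclose(K, np.rint(K), atol=tol)

def little_cogroup(k, point_group):
    """Operations R (acting from the right on reduced coordinates, k -> kR)
    with kR = k + K for some reciprocal lattice vector K."""
    k = np.asarray(k, dtype=float)
    return [name for name, R in point_group.items()
            if is_reciprocal_lattice_vector(k @ R - k)]

# Two rotations of a primitive tetragonal lattice, written as integer
# matrices in the reduced (reciprocal lattice) basis:
C4z = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]])
ops = {"E": np.eye(3, dtype=int), "C4z": C4z, "C2z": C4z @ C4z}

for label, k in [("Gamma", (0, 0, 0)), ("X", (0, 1/2, 0)), ("M", (1/2, 1/2, 0))]:
    print(label, little_cogroup(k, ops))
# Gamma and M retain C4z (for M, kR differs from k by a reciprocal lattice
# vector), while X retains only E and C2z.
```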
We say that two 𝐤-manifolds A = {𝐤_1(𝐮_1)} and B = {𝐤_2(𝐮_2)} are connected if, for some specific values of the free parameters 𝐮_1 and 𝐮_2, we have

𝐤_1(𝐮_1) = 𝐤_2(𝐮_2) + 𝐊

for some vector 𝐊 of the reciprocal lattice.

The star of a vector 𝐤, denoted *𝐤, is the set of vectors {𝐤R}, for all R in the point group of the space group, which are not equivalent to 𝐤 as per Eq. (<ref>). All the vectors in the same star have conjugate little co-groups, i.e. G̅_𝐤R = R G̅_𝐤 R^-1. Note that if G̅_𝐤 is a normal subgroup of the point group G̅, then it is possible for multiple vectors in *𝐤 to belong to the same labelled 𝐤-manifold. For example, in the honeycomb lattice of graphene, the vectors K [with reduced coordinates (1/3,1/3)] and K' [with reduced coordinates (2/3,2/3)] lie in the same star, and are related by the sixfold rotational symmetry of the lattice. Furthermore, the little co-groups G̅_K and G̅_K' are isomorphic normal subgroups of the point group G̅ (all subgroups of index 2 are normal<cit.>), and so both belong to the same 𝐤-manifold with coordinates {(u,u,0) | 0 < u < 2/3} (here and in the remainder of this work, we give the reduced coordinates of 𝐤-vectors in the notation of Ref. cdml).

We say that a manifold of vectors 𝐤 in the reciprocal lattice is of maximal symmetry if its little co-group is not a subgroup of the little co-group of another manifold of vectors 𝐤' connected to it. We shall see that the set of maximal 𝐤-vectors for each space group plays a special role in determining the connectivity of energy bands. We refer to a vector contained in a 𝐤-manifold of maximal symmetry as a maximal 𝐤-vector, or analogously as a 𝐤-vector of maximal symmetry. Note that 𝐤-manifolds of maximal symmetry need not be points; for example, in space groups with only a single rotation axis (such as P6mm, the space group of AA-stacked graphene in an external z-directed electric field), the maximal 𝐤-manifolds are lines directed along the rotation axis when time-reversal symmetry is neglected.

The non-equivalent (per the equivalence Eq. <ref> under reciprocal lattice translations) manifolds of 𝐤-vectors for every space group have been tabulated (see for instance Ref. cdml). An on-line database of the 𝐤-vectors for every space group, both maximal and non-maximal, is accessible via Ref. progkvecs. As discussed in Ref. grouptheory, the set of maximal 𝐤-vectors is different depending on whether or not time-reversal (TR) symmetry is considered. In some lines of 𝐤-vectors in polar space groups, for example, all the points have the same little co-group and – as a consequence – the same symmetry properties (all irreps of the little group depend smoothly on the coordinate along the line). However, when TR is considered as an extra (antiunitary) symmetry operation, some points in the line (the Γ point and points at the boundary of the first Brillouin zone) are TR-invariant; we refer to these as time-reversal invariant momentum (TRIM) points. At the TRIM points, TR symmetry then sometimes forces irreps that in principle correspond to different energy levels without TR to become degenerate. A similar issue arises in body- and face-centered space groups at points with antiunitary operations combining a rotation or reflection with TR. Aside from these caveats, our definition of a maximal 𝐤-vector coincides with the colloquial notion of a "high-symmetry" 𝐤-vector.

As an example, we give in Table <ref> the list of 𝐤-vectors in the space group P4/ncc (130), sorted into labelled manifolds sharing the same little co-group.
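Before examining that table, we note that the star and the maximality test just defined admit a compact numerical sketch (again with a hand-written toy subset of the tetragonal point group; in the actual algorithm, the connected manifolds and the full point-group matrices come from the tabulated data of Refs. cdml and progkvecs):

```python
import numpy as np

def is_lattice_vector(K, tol=1e-8):
    return np.allclose(K, np.rint(K), atol=tol)

def little_cogroup(k, ops):
    return {name for name, R in ops.items()
            if is_lattice_vector(np.asarray(k, float) @ R - k)}

def star(k, ops):
    """One representative per equivalence class of {kR} under reciprocal
    lattice translations (cf. the definition of *k above)."""
    reps = []
    for R in ops.values():
        kR = np.asarray(k, float) @ R
        if not any(is_lattice_vector(kR - q) for q in reps):
            reps.append(kR % 1.0)
    return reps

def is_maximal(k, connected_kvecs, ops):
    """k is maximal if its little co-group is not a proper subgroup of the
    little co-group of any k' connected to it. The representative connected
    vectors are assumed supplied from the tabulated k-manifold data."""
    Gk = little_cogroup(k, ops)
    return not any(Gk < little_cogroup(kp, ops) for kp in connected_kvecs)

C4z = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]])
ops = {"E": np.eye(3, dtype=int), "C4z": C4z,
       "C2z": C4z @ C4z, "C4z3": C4z @ C4z @ C4z}
print(len(star((0, 0.3, 0), ops)))                 # 4 vectors in the star
print(is_maximal((0, 0, 0), [(0, 0.3, 0)], ops))   # Gamma: True
print(is_maximal((0, 0.3, 0), [(0, 0, 0)], ops))   # point on a line: False
```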
This is a tetragonal, non-symmorphic space group, generated by the inversion {I|000}, a fourfold z-axis rotation {C_4z|00}, and a twofold screw rotation {C_2y|0} about the y-axis. The first three columns of the table show the label of the manifold containing each 𝐤-vector, the multiplicity (i.e. the number of vectors in its star), and the coordinates of a representative vector of the star in the standard setting, respectively. In the fourth column we give the symbol of the little co-group of each 𝐤-manifold. In the fifth column, we indicate whether or not each 𝐤-manifold is maximal. The last column indicates whether the TR operator keeps 𝐤 invariant. Since this is a centrosymmetric space group, the set of maximal 𝐤-vectors is the same with or without TR: adding time-reversal is equivalent to adding the composite of inversion and time-reversal to the little co-group of every 𝐤-vector. Since this does not change the group-subgroup relation of connected 𝐤-vectors, it does not change the set of maximal 𝐤-vectors as per our definition. Figure <ref> shows the region 0 ≤ k_x, k_y, k_z ≤ 1/2 of the first Brillouin zone, where the special 𝐤-vectors of Table <ref> have been indicated.

After having determined all the maximal 𝐤-vectors in a given space group (in the following we denote them as 𝐤^M), we next compute all the possible connections between each maximal 𝐤^M and all the non-maximal 𝐤-vectors. Each manifold of non-maximal 𝐤-vectors is parametrized by 1 (lines), 2 (planes) or 3 (the general 𝐤-vector) free parameters. Note that, to get all the possible connections, we must consider an equation analogous to Eq. (<ref>) for each vector in *𝐤. Continuing with our example, in Table <ref> we show all the connections between the 𝐤^M-vectors and the 𝐤-vectors of non-maximal symmetry in the space group P4/ncc (130). The first column shows the list of maximal vectors 𝐤^M, the second gives the non-maximal 𝐤-manifolds (lines or planes) connected to each 𝐤^M, the third column shows the specific values of the continuous parameters for which the points are connected, and the last column indicates the number of vectors in *𝐤 connected to each 𝐤^M, equal to the quotient |*𝐤|/|*𝐤^M| (where |·| denotes the number of elements in a set). For instance, the four vectors of the star of 𝐤 = (0,v,0) ∈ Δ are *𝐤 = {(0,v,0), (0,-v,0), (v,0,0), (-v,0,0)}, and are connected to Γ: (0,0,0) for v → 0. We have suppressed the trivial connections between the 𝐤^M-vectors and the general position GP = {(u,v,w)}.

Let us define the set of direct paths that join two maximal 𝐤-vectors 𝐤^M_1 and 𝐤^M_2 as the intersection of the sets of non-maximal 𝐤_i connected to 𝐤^M_1 and 𝐤^M_2. Using the list of possible connections between 𝐤-manifolds in a space group, we can construct the set of all direct paths between pairs of maximal 𝐤-vectors. In our example of the space group P4/ncc (130), we can construct the set of all direct paths by taking intersections of the sets of connections given in Table <ref>. Table <ref> shows the result of this analysis. The first and fourth columns give all the pairs of maximal 𝐤-vectors. The second column shows the possible direct paths that connect the two 𝐤^M-vectors. The third column gives the number of vectors in the star of the intermediate 𝐤-vectors of non-maximal symmetry connected to both 𝐤^M-vectors.
As in Table <ref>, the trivial connection through the general position GP, common to all pairs of 𝐤^M-vectors, has been omitted from the table.

§.§ Set of independent paths

The end goal of enumerating all paths through the Brillouin zone is the determination of all the possible connectivity graphs<cit.>, with a special focus on the graphs for the building blocks of band theory, the elementary band representations<cit.>. The elementary band representations are representations of infinite dimension that can be expressed – like any representation of the space group – as a direct sum of space group irreps, themselves induced from irreps of the little group of each 𝐤-vector in reciprocal space. Recall that the little group G_𝐤 is the subgroup of the space group G that leaves 𝐤 invariant, with the understanding that translations act trivially on 𝐤. The little co-group G̅_𝐤 is then the point group of G_𝐤, and the linear representations of the little group that we use here can equally well be viewed as projective representations of the little co-group<cit.>. The multiplicities of each irrep of the little group of every maximal vector 𝐤^M have been calculated for all the elementary band representations<cit.>. The multiplicities of the irreps of the little groups of 𝐤-vectors of non-maximal symmetry can be determined from these via the compatibility relations. The procedure can be briefly described as follows. When two lines of 𝐤-vectors intersect at a point (or two planes at a line), this intersection point (line) generically has higher symmetry than the points that lie on only one line (plane). The little co-group of the intersection point 𝐤_s is thus generically a supergroup of the little co-group of the line, G̅_𝐤 ⊂ G̅_𝐤_s, and the little groups also satisfy G_𝐤 ⊂ G_𝐤_s. The matrices of an irrep ρ of the little group G_𝐤_s, associated to the symmetry elements that belong to G_𝐤, form a representation of the little group of 𝐤, known as the restricted (subduced) representation ρ ↓ G_𝐤. In general, this subduced representation is reducible. The compatibility relations give the decomposition of the irreps of G_𝐤_s into irreps of G_𝐤 upon subduction.

As an example, we can examine the compatibility of little group representations between the little groups G_Γ and G_Δ in our example of space group P4/ncc. Because it is located at the origin of the Brillouin zone, representations of the little group G_Γ are insensitive to fractional lattice translations, and so are determined by representations of the little co-group G̅_Γ ≈ 4/mmm. Using data on the Bilbao Crystallographic Server<cit.>, we can focus on the 2D representation Γ_5^+ of G_Γ, characterized by the representation matrices

D_0^Γ^+_5({I|000}) = σ_0, D_0^Γ^+_5({C_4z|00}) = iσ_y, D_0^Γ^+_5({C_2y|0}) = σ_x,

where D_0^ρ(g) signifies the representation matrix of the element g in the representation ρ at 𝐤 = 0, and σ_x, σ_y, σ_z are the Pauli matrices, augmented by the two-by-two identity matrix σ_0. If we now subduce this representation onto the little group G_Δ ⊂ G_Γ, we see that G_Δ contains only {C_2y|0} and {m_z|0} = {C_4z|00}^2{I|000}.
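The reduction carried out in the next few equations can also be checked by character projection; a short numerical sketch of ours (the four elements of the subduced group and the candidate eigenvalue pairs are enumerated by hand for this illustration):

```python
import numpy as np

# Representation matrices of Gamma_5^+ at k = 0, as given above.
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
D = {"I": s0, "C4z": 1j * sy, "C2y": sx}
D["mz"] = D["C4z"] @ D["C4z"] @ D["I"]        # m_z = (C_4z)^2 * I = -s0

# Restrict to {E, C2y, mz, C2y*mz} and project the characters onto the
# 1D irreps, labelled by their (C2y, mz) eigenvalue pairs.
group = {"E": s0, "C2y": D["C2y"], "mz": D["mz"], "C2y*mz": D["C2y"] @ D["mz"]}
chi = {g: np.trace(m) for g, m in group.items()}
for lc2, lmz in [(+1, -1), (-1, -1), (+1, +1), (-1, +1)]:
    irrep = {"E": 1, "C2y": lc2, "mz": lmz, "C2y*mz": lc2 * lmz}
    mult = sum(chi[g] * irrep[g] for g in group) / len(group)
    print(f"(C2y={lc2:+d}, mz={lmz:+d}): multiplicity {mult.real:.0f}")
# Only (C2y=+1, mz=-1) and (C2y=-1, mz=-1) appear, each once.
```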
We then find, at 𝐤 = 0 (where Γ and Δ are connected), that the subduced representation η = Γ^+_5 ↓ G_Δ is determined by

D_0^η({C_2y|0}) = σ_x, D_0^η({m_z|0}) = -σ_0.

This is a reducible representation; it decomposes as a direct sum of two representations Δ_2 ⊕ Δ_3 with representation matrices

D_0^Δ_2({C_2y|0}) = 1, D_0^Δ_2({m_z|0}) = -1,
D_0^Δ_3({C_2y|0}) = -1, D_0^Δ_3({m_z|0}) = -1.

Thus we deduce

Γ^+_5 ↓ G_Δ ≈ Δ_2 ⊕ Δ_3.

Returning to our general considerations, let us consider two maximal vectors 𝐤^M_1 and 𝐤^M_2 connected through the line or plane 𝐤, and a given elementary band representation that subduces into the sets of irreps {ρ^M_1_i}, {ρ^M_2_i} and {ρ_i} of the little groups of 𝐤^M_1, 𝐤^M_2 and 𝐤, respectively. Then the compatibility relations give, on the one hand, the relations between the sets of irreps {ρ^M_1_i} → {ρ_i} and, on the other hand, between the sets of irreps {ρ^M_2_i} → {ρ_i}. As the electronic bands are continuous functions in reciprocal space, the set of irreps {ρ^M_1_i} must be connected to the set of irreps {ρ^M_2_i} through the {ρ_i}. The compatibility relations at both endpoints of the connection restrict the different possible ways to connect the irreps at 𝐤^M_1 and 𝐤^M_2. Every set of connections between every pair of maximal 𝐤-vectors that fulfills the compatibility relations defines a valid band structure, and hence a valid connectivity graph<cit.>. However, it is not necessary to consider the compatibility relations along all the possible intermediate paths that connect a pair of maximal 𝐤-vectors. In most cases, there are redundancies in the restrictions imposed by the compatibility relations along different paths. In order to arrive at a computationally tractable problem, we must minimize the set of paths considered in our calculations. In the following, we explain the different sources of redundant connections, and the algorithms used to remove them. We will continue to make use of the practical example of the space group P4/ncc (130) and Table <ref> to support our explanations.

* Paths that are a subspace of other paths

If two maximal 𝐤-vectors 𝐤^M_1 and 𝐤^M_2 are connected both through a plane 𝐤_p and through a line 𝐤_l contained in the plane, the set of compatibility relations between 𝐤^M_1, 𝐤^M_2 and 𝐤_p is redundant and can be omitted from the analysis. To see this, note that as the line 𝐤_l is contained in the plane 𝐤_p, there exist compatibility relations between the irreps of the little group G_𝐤_l of 𝐤_l and the little group G_𝐤_p of 𝐤_p, {ρ^l_i} → {ρ^p_i}. Then the sets of compatibility relations between each of 𝐤^M_1, 𝐤^M_2 and the plane 𝐤_p are completely determined by the sets of compatibility relations between 𝐤^M_1, 𝐤^M_2 and 𝐤_l, and between this line and 𝐤_p; formally, we have

ρ^M_i ↓ G_𝐤_p ≈ (ρ^M_i ↓ G_𝐤_l) ↓ G_𝐤_p

for each representation ρ^M_i of the little groups G_𝐤^M_1 and G_𝐤^M_2. Thus, compatibility along the plane 𝐤_p places no additional restrictions on connectivity beyond those obtained from the line 𝐤_l.
This means that the symmetry constraints along Γ-C-A are independent of the constraints arising from connections through multiple lines in Table <ref>, and so this connection cannot be discarded. * Paths related by a symmetry operationLet R be a rotational element that belongs to the little co-groups of 𝐤^M_1 and 𝐤^M_2, but that does not belong to the little co-group of an intermediate line 𝐤 that is connected to both 𝐤^M_1 and 𝐤^M_2. These two 𝐤^M_i are also connected through the line 𝐤R. The sets of compatibility relations between each𝐤^M_i and 𝐤Rdiffer from the compatibility relations between 𝐤^M_i and 𝐤 by conjugation by R[Note then that for G̅_𝐤 a normal subgroup of G̅, the compatibility relations are identical.], but a set of connections between the irreps of the little groups of 𝐤^M_1 and 𝐤^M_2 through 𝐤 uniquely determines the connections through all of the 𝐤R by symmetry. Therefore, a single line or plane of the star of 𝐤 gives all the independent restrictions on the connectivity graphs. In the example of Table <ref>, the 𝐤^M-vectors Z and A are connected through four lines of the star of S:(u,u,1/2),(-u,u,1/2),(u,-u,1/2) and (-u,-u,1/2) but it is only necessary to consider one of them. Note that since different representatives of the star are chosen independently for each connection, this set of paths may not lie in a single representation domain (i.e. a submanifold M of the Brillouin zone such that M contains one 𝐤-vector from every star, and MG̅∩ M =∅<cit.>). (Note that a similar construction was used by Kane and Mele to define the ℤ_2 invariant for topological insulators by considering one half of the Brillouin zone which does not map to itself under time-reversal<cit.>). * Paths that are a combination of other pathsLet 𝐤^M_1, 𝐤^M_2and 𝐤^M_3 be three maximal 𝐤-vectors. Consider a line 𝐤_l_12 connecting 𝐤^M_1 and 𝐤^M_2, a line 𝐤_l_23 connecting 𝐤^M_2 and 𝐤^M_3, and a plane 𝐤_p_13 connecting 𝐤^M_1 and 𝐤^M_3. If the plane 𝐤_p_13 contains both the lines 𝐤_l_12 and 𝐤_l_23, then set of compatibility relations between the plane and the two maximal 𝐤^M_1 and 𝐤^M_3 are not independent from the two sets of compatibility relations obtained from the lines 𝐤_l_12 and 𝐤_l_23. Therefore, the path that includes the plane can be neglected in the analysis of the connectivity graphs. For example, in Table <ref> the path Z↔ C↔ M can be omitted because it is possible to define the path Z↔ S↔ A↔ V↔ M,and the lines S and V are contained in C (see Fig. <ref>). The possible connections between Z, A and M along the lines S and V determine the possible connections between Z and M through C. While the above rules1,2, and3 are perhaps obvious from a physical picture of energy bands, we will need to impose them explicitly at the level of our graph algorithms. We have applied these rules to calculate the independent paths through the Brillouin zone of each space group. In Table <ref>, the nine bolded paths constitute the full set of independent paths to be considered in the analysis of connectivity graphs for the space group P4/ncc (130), both with and without TR. §.§ Connectivity of non-symmorphic groups. To complete our enumeration of paths through the Brillouin zone (or more precisely, from Rule # 1 above, the representation domain), we must now pay special attention to some subtleties that arise for non-symmorphic space groups. 
The connectivity of energy bands – and particularly of elementary band representations – in non-symmorphic space groups was first studied by Michel and Zak <cit.>, who pointed out the essential role of the "monodromy of little group representations"<cit.> in determining the band connectivity. We briefly review their analysis here, and then adapt it to our algorithm for determining the necessary paths and compatibility relations through momentum space.

§.§.§ Monodromy of Little Group Representations

Michel and Zak limited their analysis to spinless (i.e. single-valued) band representations, for the most part ignoring TR symmetry as well. First, they analyzed the 9 non-symmorphic space groups (Bieberbach groups<cit.>) generated by the lattice translations and a single screw axis or a glide plane: Pc, Cc, P2_1, P3_1, P3_2, P4_1, P4_3, P6_1 and P6_5. The only Wyckoff position in these groups is the general one (and so it is, by fiat, a maximal Wyckoff position<cit.>), and the unique irreducible representation of the site-symmetry group (the group which leaves a representative point of this Wyckoff position invariant, isomorphic in this case to the trivial point group 1) induces a single elementary band representation. If we allow for spin, we find that there is one double-valued (spinor) band representation as well. In general, if {R|𝐭} represents a screw axis or a glide plane, and n ∈ {2,3,4,6} is the order of R, then {R|𝐭}^n = {E|p𝐓_𝐑}, where 𝐓_𝐑 is a lattice translation and p is an integer that satisfies 0 < p < n. We will characterize a non-symmorphic space group by first finding the set of elements with the largest n, and from among those choosing the smallest p. If we consider the space groups with screw rotations from the previous list, we see that for the largest order n of operations in the point group, it is possible to choose an element such that p = 1. Thus, for these space groups we can take as basis vectors of the lattice of translations 𝐞_3 = 𝐓_R and two vectors orthogonal to the screw axis, 𝐞_1 and 𝐞_2. The corresponding basis vectors of the reciprocal lattice are those vectors 𝐞_i^* that satisfy 𝐞_i · 𝐞_j^* = 2πδ_ij. For example, the space group P6_1 contains the sixfold screw {C_6z|001/6}, which has (n,p) = (6,1). Similarly, the space group P6_5 contains the operation {C_6z^-1|001/6}, which also satisfies (n,p) = (6,1).

For 𝐤-vectors of the form 𝐤 = k_3𝐞_3^*, the little group G_𝐤 is the whole space group G, and its n irreps (before considering TR symmetry at k_3 = 0, 1/2) are one-dimensional. The n explicitly 𝐤-dependent (via the fractional lattice translation) one-dimensional matrices D_𝐤^ρ_j({R|𝐭}) of the symmetry operation {R|𝐭} in these simple irreps ρ_j, j = 0, 1, …, n-1, are

D_𝐤^ρ_j({R|𝐭}) = e^2π i(j+k_3)/n.

When k_3 varies over a full period, k_3 → k_3 + 1, we see that

D_𝐤^ρ_j({R|𝐭}) → D_𝐤^ρ_j+1({R|𝐭}).

Thus, as we move along the 𝐞_3^* direction in reciprocal space, there is a cyclic permutation of the n irreps of G_𝐤. This phenomenon is called monodromy<cit.>. Since the Bloch wavefunctions, and hence the energy bands, are periodic functions in the BZ, we deduce as a consequence of the cyclic permutation of irreps along 𝐞_3^* that all n irreps must be connected. This monodromy property also holds for the two space groups with a glide plane, Pc and Cc, as is evident from writing a glide reflection as the composition of inversion and a twofold screw rotation; n = 2 in these cases.
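This cyclic permutation can be checked directly from the phase formula above; a minimal sketch of ours, with k_3 measured in units of 𝐞_3^*:

```python
import numpy as np

n = 6                          # order of the screw, e.g. {C_6z|00 1/6} in P6_1
def D(j, k3):
    """Phase of the 1D irrep rho_j at k = k3 * e3^* (formula above)."""
    return np.exp(2j * np.pi * (j + k3) / n)

k3 = 0.17                      # a generic point on the line
for j in range(n):             # k3 -> k3 + 1 shifts j -> j + 1 (mod n)
    assert np.isclose(D(j, k3 + 1.0), D((j + 1) % n, k3))
print("monodromy permutes rho_0 -> rho_1 -> ... -> rho_{n-1} cyclically")
```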
Thus, monodromy ensures that all elementary band representations (without spin or time reversal) are connected for the 9 nontrivial Bieberbach groups listed above. Slightly different arguments were used in Ref. Michel1999 to demonstrate the full connectivity of elementary band representations in the 5 additional space groups P4_2, I4_1, P6_3, P6_2 and P6_4, for which p>1, which are not Bieberbach groups. Note that monodromy acts as a cyclic permutation of order 2 in the first three of these groups, and of order 3 in the last two groups. Therefore, not all n irreps are forced to be connected by monodromy alone. In the first three groups the irreps are necessarily connected pairwise, and in the last two groups there are sets of three irreps internally connected. Michel and Zak then explicitly computed the band representations in these groups to demonstrate that the irreps not connected by monodromy never occur in the same elementary band representations, and hence deduced the full connectivity of the (spinless) elementary band representations in these 5 groups. Finally, Michel and Zak analysed explicitly the connectivities of the space groups I2_12_12_1 and I2_13, and directly proved the connectivity of their elementary band representations<cit.>. These results, and the fact that the 14=9+5 space groups of the first two lists above contain all the types of screw-rotation and glide-reflection operations occurring in the 230 space groups, led the authors to conclude that all the (spinless) elementary band representations in non-symmorphic space groups are connected, i.e. that one can travel continuously through all energy bands. However, we have shown in certain cases that this extrapolation is not justified, due to three possibilities that were not previously considered. First, in some space groups, for example in certain elementary band representations of the space groups P4_2/m, I4_1/a or I4̅c2, the multiplicity of the irreps permuted under the monodromy operation is 2 or 3, rather than 4. It is thus possible in these cases to separate the irreps into disconnected subsets where irreps permute amongst themselves under monodromy, each irrep occurring only once per subset. An additional subtlety is that, for space groups other than P6_3 which contain 6_3 type screws (i.e. screws of the form {C_6|1/2𝐓_R}), the special logic (recapitulated above) that Michel and Zak used to prove the connectivity breaks down – in some of these groups, little group representations which are not connected by monodromy do in fact appear in the same elementary band representation. In fact, we have found that in space groups P6_3/m, P6_3/mcm and P6_3/mmc, some elementary band representations can be disconnected. Finally, due to the glide planes in the space groups Pc and Cc, the two irreps at Γ are connected. But in other space groups with glide planes, there are more than 2 irreps in the little groups of lines contained in the glide planes, as, for example, in the space group P4/ncc. Then the necessary pairwise connection of irreps does not guarantee that all the irreps are connected. For example, let us focus on the case of space group P6_3/m. In particular, the points Γ=(0,0,0) and A=(0,0,1/2) are connected by the line Δ=(0,0,w), w∈[0,1/2]. There are twelve one-dimensional single-valued representations of G_Γ, permuted pairwise under monodromy w→ 1-w as (Γ^±_1↔Γ^±_2),(Γ^±_3↔Γ^±_4),(Γ^±_5↔Γ^±_6). Additionally, there are three single-valued representations of G_A, labelled A_1, A_2, and A_3, all two-dimensional.
Along Δ, the compatibility relations force A_1 to connect to (Γ_1^±,Γ_2^±), A_2 to connect to (Γ_3^±,Γ_4^±), and A_3 to connect to (Γ_5^±,Γ_6^±); we thus deduce that each of the representations A_1, A_2, and A_3 is invariant under monodromy. Unlike in the space group P6_3 considered by Michel and Zak, the space group P6_3/m has elementary band representations with more than two bands. In particular, the elementary band representation induced from the A_g representation of the stabilizer group of the 6g Wyckoff position (isomorphic to the group 1̅) subduces the six representations Γ_1^+,Γ_2^+,Γ_3^+,Γ_4^+,Γ_5^+, and Γ_6^+ of G_Γ, and the three representations A_1, A_2, and A_3 of G_A. From Eq. (<ref>), we see that these representations can be grouped into three disconnected sets of bands. Let us now consider more generally the consequences of monodromy in non-symmorphic groups, as it pertains to band connectivity. In general, if along the 𝐤=k_3𝐞_3^* direction there are maximal 𝐤-vectors for specific values of k_3 (for example the Γ point for k_3=0, or a point at the boundary of the first Brillouin zone), equivalent points are separated along the line with periodicity one (in units of 𝐞_3^*). The compatibility relations at these equivalent points give different relations between the irreps at the maximal 𝐤-vectors and a general point in the line. In general, for a screw axis that satisfies the above relation {R|𝐭}^n={E|p𝐓_𝐑}, with n chosen to be as large as possible and subsequently p chosen to be as small as possible, as above, there are n/p distinct sets of compatibility relations at n/p different but equivalent points of the line, 𝐤_p=(k_3+j)𝐞_3^*, j∈{0,1,…,n/p-1}. For glide planes there are, in general, 2 different sets of compatibility relations. However, it can be proved that, for all glides and screws, the maximum number of independent sets of compatibility relations is 2. To see this, consider two equivalent vectors 𝐤 and 𝐤'=𝐤+𝐞_3^* on the boundary of the first Brillouin zone. There are two sets of compatibility relations at 𝐤 and 𝐤' along the line k_3𝐞_3^*, related by monodromy. However, because energy bands are periodic in the first Brillouin zone (irrespective of representation label), these two sets of compatibility relations must completely determine the band structure. This is a physical manifestation of the fact that the monodromy groups are cyclic, i.e. generated by a translation by 𝐞_3^* in reciprocal space. In our example of the space group P4/ncc (130), the lines Δ, U, Y, T, Σ and S (defined in Table <ref> and Fig. <ref>) contain in their little groups a 2-fold screw rotation and a glide reflection: for example, G_U contains the operations {C_2y|0,1/2,1/2} and {m_z|1/2,1/2,0}. Similarly, the little groups of the lines Λ, V and W contain the glide reflection {m_x|1/2,0,1/2}. Therefore, we must consider, in general, 2 different sets of compatibility relations between representations of the little groups of these lines and of the little groups of 2 distinct maximal 𝐤-vectors in the line differing by a reciprocal lattice translation. In some cases the (in-principle) different sets of compatibility relations can be equivalent. This occurs for instance along the S and Σ lines in our example: the two sets of compatibility relations at each maximal 𝐤-vector (Z-S, A-S, Γ-Σ and M-Σ) are identical. However, in all other lines in this space group, the two sets of compatibility relations are different. For instance, let us consider the path Γ-Λ-Z, where the relevant little-group operation is the glide reflection {m_x|1/2,0,1/2}.
The compatibility relations for irreps between Z:(0,0,1/2) and Λ:(0,0,w) are:

Z_1 → Λ_2⊕Λ_3
Z_2 → Λ_1⊕Λ_4
Z_3 → Λ_5
Z_4 → Λ_5

The same set of compatibility relations is obtained at all the points equivalent to Z in the line Λ. A shift of 𝐤 by 𝐞^*_3 interchanges the representations Λ_2↔Λ_3, Λ_1↔Λ_4, and Λ_5↔Λ_5. However, the compatibility relations at Γ:(0,0,0) and at the equivalent point Γ':(0,0,1) are:

        Γ:(0,0,0)→Λ:(0,0,w=0)    Γ':(0,0,1)→Λ:(0,0,w=1)
Γ_1^+ →        Λ_1                      Λ_4
Γ_1^- →        Λ_4                      Λ_1
Γ_2^+ →        Λ_2                      Λ_3
Γ_2^- →        Λ_3                      Λ_2
Γ_3^+ →        Λ_4                      Λ_1
Γ_3^- →        Λ_1                      Λ_4
Γ_4^+ →        Λ_3                      Λ_2
Γ_4^- →        Λ_2                      Λ_3
Γ_5^+ →        Λ_5                      Λ_5
Γ_5^- →        Λ_5                      Λ_5

Looking at the lists in Eq. (<ref>) we see, for example, that Γ_1^+ is necessarily connected to either Γ_1^- or Γ_3^+, and to Z_2 at the Z point, but not all the irreps at the Γ point are necessarily connected, due to the glide plane parallel to the Λ line. We show this graphically for the band structure along the Γ-Λ-Z-Λ-Γ'≡Γ line in Fig. <ref>. In the following, we distinguish the two sets of irreps at the intermediate path by a superscript. Taking as an example the irrep Γ_1^+ in Eq. (<ref>), we write the compatibility relations as Γ_1^+→Λ_1^1 and Γ_1^+→Λ_4^2. §.§.§ Monodromy and the Minimal Set of Paths Now that we understand the role monodromy plays in enforcing connectivity of energy bands, we can incorporate it into our set of rules from Sec. <ref> in order to find the smallest set of non-redundant paths through the Brillouin zone for non-symmorphic space groups. We know from the preceding analysis of Sec. <ref> that for lines L={𝐤_t} whose little group G_L contains either a screw rotation or a glide reflection, band connectivity along L is fully determined by a pair of distinct compatibility relations for the little groups G_𝐤_t and G_𝐤_-t+𝐊 of two equivalent points in L. However, referring back to Rule 1 of Sec. <ref>, we notice that these pairs of compatibility relations only impose additional constraints on the band structure when generic points on the line, 𝐤_t and 𝐤_-t+𝐊, are not related by a symmetry operation, i.e. when they do not lie in the same star. When the two points do lie in the same star, then there exists an element R in the point group such that 𝐤_tR=𝐊+𝐤_-t. The band connectivity at 𝐤_t determines the band connectivity at 𝐤_-t+𝐊 under the action of R; the monodromy of little group representations is then implemented by conjugation of the little group by R, as per Eq. (<ref>). In this case, any connectivity constraints enforced by monodromy are restricted by symmetry to occur at maximal 𝐤-vectors invariant under the action of R. We can see this clearly in our analysis of the Γ-Λ-Z-Λ-Γ' line in space group P4/ncc from Fig. <ref> above. In this space group the two connecting lines (0,0,v)∈Λ and (0,0,1-v)∈Λ (v∈[0,1/2]) are related by the action of inversion. Conjugation by inversion interchanges the little group representations Λ_1↔Λ_4, consistent with the monodromy due to the glide reflection in the little group G_Λ (see Fig. <ref>). Consequently, the connectivity of bands implied by this monodromy is forced to occur at the high-symmetry point Z, which hosts only two-dimensional representations. The full band connectivity along the line Γ-Λ-Z-Λ-Γ' is thus determined uniquely by the connectivity along Γ-Λ-Z, as we can see from the figure. A similar monodromy-enforced band crossing with the addition of TR symmetry was exploited in Ref. Hourglass.
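The pairings just described can be checked mechanically. The sketch below (Python; ASCII stand-ins for the Γ, Λ and Z irrep labels, and helper names of our own choosing) encodes the two monodromy-related compatibility sets from the relations above and recovers the statement that Γ_1^+ reaches Z_2 and can only pair with Γ_1^- or Γ_3^+:

```python
# Compatibility sets along Gamma-Lambda-Z in P4/ncc, transcribed from the
# relations above: set 1 is taken at Gamma:(0,0,0), set 2 at Gamma':(0,0,1).
GAMMA_TO_LAMBDA = {
    "G1+": ("L1", "L4"), "G1-": ("L4", "L1"),
    "G2+": ("L2", "L3"), "G2-": ("L3", "L2"),
    "G3+": ("L4", "L1"), "G3-": ("L1", "L4"),
    "G4+": ("L3", "L2"), "G4-": ("L2", "L3"),
    "G5+": ("L5", "L5"), "G5-": ("L5", "L5"),
}
Z_TO_LAMBDA = {"Z1": {"L2", "L3"}, "Z2": {"L1", "L4"},
               "Z3": {"L5"}, "Z4": {"L5"}}

def z_point(gamma):
    """Z irrep reached from `gamma` along set 1 of the relations.
    Z3 and Z4 both subduce L5; next() picks one representative,
    which is enough for a sketch."""
    lam = GAMMA_TO_LAMBDA[gamma][0]
    return next(z for z, ls in Z_TO_LAMBDA.items() if lam in ls)

def doublet_partners(gamma):
    """Gamma irreps that can pair with `gamma` inside the same 2D Z irrep:
    they must emit the *other* Lambda irrep of that Z doublet in set 1."""
    lam = GAMMA_TO_LAMBDA[gamma][0]
    others = Z_TO_LAMBDA[z_point(gamma)] - {lam} or {lam}
    return [g for g, (s1, _) in GAMMA_TO_LAMBDA.items()
            if s1 in others and g != gamma]

print(z_point("G1+"), doublet_partners("G1+"))  # Z2 ['G1-', 'G3+']
```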
Taking this into account, we have the additional rule for determining the non-redundant paths through the Brillouin zone for non-symmorphic groups: * If the little group of a line or plane {𝐤_t} contains a screw rotation or a glide reflection, and if 𝐤_t is not in the star of 𝐊+𝐤_-t, then we must take into account two sets of compatibility relations for representations of G_𝐤_t related by monodromy. In particular, note that in any space group with TR or inversion symmetry, -𝐤∈ *𝐤 for every 𝐤-vector. Hence in these space groups we need only consider a single set of compatibility relations for all connections between TRIM points. In the specific example of space group P4/ncc (130), we see then that we need only consider a single set of compatibility relations for all connections, since the little group of all maximal 𝐤-vectors contains inversion. §.§ The graph construction algorithm Armed with the sets of paths through the Brillouin zone for each space group, we would now like to solve the constraints imposed by the group-theoretic compatibility relations along those paths. These solutions take the form of groups of connected bands, with bands at high symmetry points and lines transforming under a direct sum of irreps of the space group. Any set of such bands can be realized as the spectrum of some local Hamiltonian H (including topological bands, although these cannot appear in isolation). However, since the compatibility relations are a purely group-theoretic device with meaning independent of any choice of Hamiltonian, it is useful for us to devise a more refined graph-theoretic picture of band connectivity. Our final goal is the classification of all valid band structures for all elementary band representations<cit.> in each space group. To begin, we first review some graph-theoretic terminology. §.§.§ Review of Connectivity Graphs We aim to enumerate and classify all the valid band structures consistent with the symmetry constraints in each of the 230 space groups. In order to ensure that the compatibility relations are satisfied along the non-redundant lines and planes joining all pairs of maximal 𝐤-vectors, we will employ the mapping to graph theory introduced in Ref. NaturePaper, and expanded upon in detail in Ref. GraphTheoryPaper. Here we briefly review the basic concepts of spectral graph theory, and the tools that will be needed for our algorithm. First, recall that a partition of a graph is a subset, V_0, of nodes such that no two nodes in V_0 are connected by an edge. In our construction, each partition will correspond to a high-symmetry (manifold of) 𝐤-vector(s), and irreps of the little group of each 𝐤-vector will be represented as nodes (see Fig. <ref>). Next, the degree of a node p in a graph is the number of edges that end on p. Using these ideas, we defined in Refs. NaturePaper,GraphTheoryPaper the connectivity graph for a collection of little group representations ℳ (i.e. bands) forming a (physical) band representation for a space group G as follows: Given an elementary band representation with little group representations ℳ, we construct a connectivity graph C_ℳ using the following rules: We associate a node, n^a_𝐤_i∈ C_ℳ, in the graph to each representation ρ_𝐤_i^a∈ℳ of the little group G_𝐤_i of every high-symmetry manifold (point, line, plane, and volume), 𝐤_i. If an irrep occurs multiple times in ℳ, there is a separate node for each occurrence.
The degree of each node, n^a_𝐤_i, is P_𝐤_i·dim(ρ^a_𝐤_i), where P_𝐤_i is the number of high-symmetry manifolds connected to the point 𝐤_i: dim(ρ^a_𝐤_i) edges lead to each of these other 𝐤-manifolds in the graph, one for each energy band. When the manifold corresponding to 𝐤_i is contained within the manifold corresponding to 𝐤_j, as in a high-symmetry point that lies on a high-symmetry line, their little groups satisfy G_𝐤_j⊂ G_𝐤_i. For each node n_𝐤_i^a, we compute ρ_𝐤_i^a↓ G_𝐤_j≈⊕_bρ_𝐤_j^b. We then connect each node n_𝐤_j^b to the node n_𝐤_i^a with dim(ρ^b_𝐤_j) edges. A solution to the compatibility relations is thus equivalent to a way of connecting nodes to form a connectivity graph consistent with the above definition. Therefore, using the compatibility relations, we will algorithmically construct all valid connectivity graphs for each of the 5646 elementary band representations, as well as for the 4757 independent physically elementary band representations, i.e. those band representations that are elementary in the presence of TR symmetry<cit.>. To find those connectivity graphs that separate into disconnected components, and hence to identify topologically nontrivial groups of isolated bands, we employ some standard techniques of spectral graph partitioning<cit.>. In particular, recall that to every graph with m nodes, we can associate an m× m adjacency matrix A, whose (i,j)-th entry is the number of edges connecting node i to node j. A graph is uniquely determined by its adjacency matrix. Furthermore, we introduce the degree matrix D of a graph, a diagonal matrix whose (i,i)-th entry is the degree of the node i. From these we define the Laplacian matrix L≡ D-A. The spectrum of L has the following useful property: for each connected component of a graph G, there is a zero eigenvector of the Laplacian of G. Furthermore, the components of the eigenvector with this eigenvalue can be chosen to be 1 on all nodes in the connected component, and 0 on all others. The proof of this statement is given in Refs. GraphThy,GraphTheoryPaper, and follows directly from the observation that the sum of entries in any row of the Laplacian matrix is by definition zero, coupled with the observation that if L_ij≠ 0, then nodes i and j lie in the same connected component. Next, we will use the facts above to construct our algorithm, with which we have tabulated all realizable topological phases for each elementary band representation. First, we will describe our method for constructing all connectivity graphs for a band representation, given the data of Secs. <ref>–<ref>. Then, in Sec. <ref>, we will describe some details of the implementation of the spectral graph partitioning method outlined above. §.§.§ Direct construction of connectivity graphs We now design an algorithm to construct and diagonalize the Laplacian matrix for the connectivity graphs of an elementary band representation, separating the task into two steps. We first construct all possible adjacency matrices, and then we subtract the degree matrix from them to obtain the Laplacian. Note that by the definition of the connectivity graphs, the adjacency matrix has a natural block structure: blocks are indexed by a pair (𝐤_1,𝐤_2) of 𝐤-vector partitions, and a block contains a nonzero submatrix if and only if those two 𝐤-vectors are compatible (cf. Sec. <ref>). We exploit this block structure to first build each nonvanishing submatrix separately.
We then combine the submatrices together in all distinct ways to form the adjacency matrices for all valid connectivity graphs. Let us begin with an elementary band representation ρ↑ G in a space group G. We start by noting that, as per the results of Sec. <ref>, we need only concern ourselves with the minimal set of independent paths in the BZ of G to fully constrain the connectivity graphs. As such, our adjacency matrices will have n_𝐤× n_𝐤 blocks, where n_𝐤 is the number of 𝐤-vectors appearing in the nonredundant paths through the BZ, with appropriate care taken to distinguish independent paths in non-symmorphic groups as per Sec. <ref>. Because the adjacency matrix is symmetric with vanishing diagonal blocks (no edges connect nodes in the same partition), we need only analyze the blocks above the diagonal. To this end, we order the blocks of the adjacency matrix by placing the maximal 𝐤-vectors first, followed by the non-maximal ones. In this way, all the submatrices which we analyze have rows labelled by irreps of the little group of maximal 𝐤-vectors, and columns labelled by irreps of the little group of non-maximal 𝐤-vectors. The precise representations are determined by subduction ρ↑ G↓ G_𝐤 of the band representation onto the little groups of each 𝐤-vector, as obtained in Ref. grouptheory. We next construct a valid submatrix making use of the compatibility tables<cit.> along the paths through the BZ. As discussed above, if 𝐤_1 and 𝐤_2 are not compatible 𝐤-vectors, then the only valid submatrix for a block (𝐤_1,𝐤_2) is the zero matrix. Let us consider then the block (𝐤^M_1,𝐤_t), with 𝐤^M_1 a maximal 𝐤-vector, and 𝐤_t a nonmaximal vector compatible with it. Then this block is nonzero, and the entries in the submatrix fulfill the following rules: First, we only allow one nonzero entry per column. This reflects the fact that each representation of the little group G_𝐤^M_1 subduces onto a unique sum of representations of G_𝐤_t. Second, the sum of the entries in each row must be the dimension of the corresponding little-group representation. This corresponds to the fact that subduction does not change the dimension of a representation. Given a single valid submatrix, we generate all other valid submatrices by considering column-wise permutations: given a valid submatrix with two columns labelled by isomorphic little group representations, the submatrix obtained by permuting these columns will also be a valid submatrix. As an example, let us continue with the space group P4/ncc (130). As shown in Table <ref> and discussed in Sec. <ref>, the maximal 𝐤-vectors are labelled A, Γ, M, R, X and Z. The independent connections are along the Λ, Σ, Δ, S, U, V, Y, T, and W lines/planes. Recall from Sec. <ref> that even though this space group is nonsymmorphic, we only need one set of compatibility relations along each connection in order to determine the allowed band connectivities. Let us consider the elementary band representation induced from the A̅_u (Γ̅_3) double-valued representation of the site-symmetry group of the maximal 8d Wyckoff position. Because the Wyckoff multiplicity of this site is 8, and because the A̅_u representation is one-dimensional, there are 8 bands in this band representation. For simplicity, we will consider the case without TR symmetry. From Ref.
grouptheory, we know that this band representation subduces to the A̅_5⊕A̅_5 representation at the A point, and to the V̅_6⊕V̅_6⊕V̅_7⊕V̅_7≡ 2V̅_6⊕2V̅_7 representation along the line V (here and throughout, we will indicate the repeated direct sum of a representation with itself by integer multiplication). The compatibility relations for the little groups G_A→ G_V are A̅_5→V̅_6⊕V̅_7. Using these, one valid submatrix for the (A,V) block of the adjacency matrix is shown in Table <ref>. By permuting the identically labelled V̅_6 and V̅_7 columns, we arrive at the three additional submatrices shown in Table <ref>. After constructing all nonzero submatrices for all compatible connections, we next build the full adjacency matrix row by row. In doing so, we would like to avoid overcounting configurations that differ only by a relabelling of representations along non-maximal 𝐤-vectors, such as exchanging the two identical copies of V̅_6 along the line V. For example, consider a high-symmetry line connecting two maximal 𝐤-vectors. Connections along this line are specified by two separate blocks in the adjacency matrix, each with its own set of valid submatrices. To ensure we consider only distinct connections along this line, we fix a valid submatrix for one endpoint, thus fixing the connections at one endpoint of the non-maximal line. Incorporating this constraint into our algorithm, we first sort our maximal 𝐤-vectors by number of little group representations, from highest to lowest. Going row-by-row through the adjacency matrix, the first time a non-maximal 𝐤-vector appears in a non-vanishing block, we can arbitrarily choose a fixed, valid submatrix for this block. In all subsequent blocks corresponding to this non-maximal 𝐤-vector we must consider all allowed valid submatrices. We summarize these rules in Table <ref>. As a specific example, let us consider the Γ-Λ-Z line in space group P4/ncc. We continue to work with the elementary band representation induced from the Γ̅_3 representation of the site-symmetry group of the 8d Wyckoff position. As indicated in Table <ref>, we can fix the Γ-Λ submatrix to be any solution to the compatibility relations; we choose the submatrix shown in Table <ref>. On the other hand, for the Λ-Z connection, according to Table <ref> we must consider all possible submatrices, as shown in Table <ref>. Even with this procedure to eliminate redundancies, the number of raw adjacency matrices that arise from these permutations may be computationally intractable. For instance, the example of Table <ref> defines on the order of 4^9 adjacency matrices (the exponent is the number of permuting blocks). We implement some additional filters to avoid redundancies: *Multiple-irrep filter: Consider a maximal 𝐤-vector which in a particular band representation contains only multiple copies of the same representation. Then we can fix a valid submatrix for every block corresponding to this 𝐤-vector. This is because permutations amongst these identical irreps cannot change the connectivity of the full graph. We illustrate this for the line A-V^1-M in space group P4/ncc (130) in Fig. <ref>. *Fake-Weyl filter: The inverse map from a connectivity graph to a band structure requires a choice of embedding (in the formal graph-theoretic sense, cf. Ref. GraphThy) of the connectivity graph into the Brillouin zone. In particular, all nodes in the partition labelled by 𝐤_i map onto the manifold of points 𝐤_i in the BZ.
In this embedding, edges of the connectivity graph may cross, corresponding to crossings of bands in the band structure. Generically, crossings along high-symmetry lines are only protected if the two bands carry different representations of the little group of the line. Accidental crossings of identical representations are not stable to perturbations: they will either gap, or, in the case of Weyl nodes (which require broken inversion symmetry), they can be pushed away from high-symmetry lines. Because we are interested in classifying generic, stable band structures, we will discount connectivity graphs corresponding to accidental crossings. Therefore, if we have accidental crossings we only need to consider one combination of the submatrix block. To visualize this filter, let us consider a hypothetical space group with maximal 𝐤-vectors A and B connected by a line L. We take the compatibility relations to be A_1→ L_1⊕ L_2, A_2→ L_1⊕ L_2, B_1→ L_1⊕ L_2, B_2→ L_1⊕ L_2, such that A_1, A_2, B_1 and B_2 are irreps of dimension 2, and L_1 and L_2 have dimension 1. We fix the submatrix corresponding to the A-L connection as shown in Table <ref>. The possible complementary B-L submatrices are given in Table <ref>. By examining Fig. <ref>(b), we can see the crossing of irreps corresponding to each of these B-L submatrices. Since crossings of identical irreps can be gapped, it is easy to see that both (I) and (II) in the figure give the same generic band connectivity, and so we only need to consider one of them. To implement this, our algorithm first indexes all valid submatrices for each connection. We then ensure that all submatrices that are fixed in the adjacency matrix are chosen as block matrices, with blocks corresponding to the identity matrix (cf. Table <ref>). Having done this, we identify connections where two different representations of the little group of a maximal 𝐤-vector are connected by an identical pair of non-maximal representations, as along the A-L-B connection from Eq. (<ref>). It is then straightforward to exclude submatrices that correspond to the “fake-Weyl” connections of Fig. <ref>(b). *Single-irrep filter: If one of the 𝐤-vectors has only a single little group representation in a given EBR, then all connectivity graphs for this EBR will be fully connected. This follows from the trivial observation that all bands must pass through that irrep. Therefore, as long as we are interested only in band connectivity, we do not need to calculate the connectivity graphs for such an EBR. For example, with TR symmetry, all spin-1/2 (double-valued) band representations with 8 bands in space group P4/ncc (130) are connected in this way, through an eight-fold degeneracy at the A point<cit.>. With these filters in hand, the gargantuan task of computing all valid connectivity graphs becomes algorithmically tractable[An additional type of redundancy can occur when only some of the little group representations in a block corresponding to a maximal 𝐤-vector are isomorphic (contrast this with the multiple-irrep filter, where it was all irreps in a given block). In this case, every permutation of the rows in the full adjacency matrix corresponding to the identical irreps results in an equivalent graph. To account for this, if no submatrices containing this 𝐤-vector are fixed by other means, then the first column of the first non-zero submatrix in which this maximal 𝐤-vector appears is fixed, and only the rest of the columns will be permuted.
However, we have not found it computationally necessary to implement such a filter.] §.§ Disconnected graphs from spectral graph theory We now consider all connectivity graphs consistent with the representation content of a (physically) elementary band representation, constructed using the algorithm described above. Because the band representation is elementary, we know that the connectivity graphs will have either one connected component, or will decompose into a set of disconnected components. Furthermore, from Ref. NaturePaper, all connectivity graphs with more than one connected component will then correspond to a possible topological phase. To determine which connectivity graphs are topologically disconnected in this way, we employ the spectral graph partitioning technique described in Sec. <ref>. In particular, the number of zero eigenvalues of the graph Laplacian matrix gives the number of disconnected components of the connectivity graph, and the associated eigenvectors give the little group representations in each component. Let us consider as a first example the space group P4mm (99). We focus on the physically elementary band representation (i.e. including TR symmetry) induced from the E̅ (Γ̅_5) site-symmetry representation of the 2c Wyckoff position. We find that the Laplacian matrices for this band representation are 30×30 matrices. We give one example in Table <ref>. [The matrix of Table <ref> is not reproduced here; in the source it appears as a large block of integer entries with rows and columns labelled by little-group irreps, captioned “Schematic plot of part of the Laplacian matrix of P4/ncc 8d Γ̅_3 EBR.”]
We used the NumPy package to diagonalize the matrix. In the example of Table <ref>, we find that there are two zero eigenvectors. We thus have two 30-dimensional eigenvectors, each taking 2 different values among its 30 components; this means that this connectivity graph decomposes into two disconnected components. Because there are multiple null eigenvectors, numerical diagonalization will generically return an arbitrary linear combination of them, rather than giving them in a basis where all entries are either 1 or 0 (any linear combination of degenerate eigenvectors is also an eigenvector). To find this basis, we first form the (non-square) matrix whose rows are the zero eigenvectors. We then perform Gauss-Jordan elimination on this matrix, yielding a matrix whose entries are all either zero or one. The rows of this matrix are then the eigenvectors in the appropriate basis. We can then read off the irreps in each connected component. To see how this works in detail, let us return to the Laplacian matrix in Table <ref>. The numerically determined eigenvectors are

v_1 = (0.258, -0.0098, 0.258, -0.0098, …, 0.258, -0.0098),
v_2 = (0.0098, 0.258, 0.0098, 0.258, …, 0.0098, 0.258),

with the two values alternating through all 30 components of each vector. Forming the matrix [𝐯_1; 𝐯_2] and performing Gaussian elimination yields

[𝐯_1; 𝐯_2] → [ 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 ;
             0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 ].

Comparing the rows of this matrix with the ordering of representations in the Laplacian matrix shown in Table <ref>, we find for the two connected components of this connectivity graph

A̅_6 - Γ̅_6 - M̅_6 - R̅_5 - X̅_5 - Z̅_6 - V̅_6 - C̅_3 - C̅_4 - T̅_3 - T̅_4 - Λ̅_6 - B̅_3 - B̅_4 - W̅_5,

and

A̅_7 - Γ̅_7 - M̅_7 - R̅_5 - X̅_5 - Z̅_7 - V̅_7 - C̅_3 - C̅_4 - T̅_3 - T̅_4 - Λ̅_7 - B̅_3 - B̅_4 - W̅_5,

and so the band structure corresponding to this connectivity graph has two topologically disconnected groups of bands. This procedure based on Gauss-Jordan elimination generalizes straightforwardly to Laplacian matrices with more than two zero eigenvalues. Let us now return to our previous example, the band representation of space group P4/ncc (130) induced from the Γ̅_3 representation of the stabilizer group of the 8d Wyckoff position. We find that there are approximately 15×10^6 valid 80×80 Laplacian matrices. The diagonal entries of each of these correspond to the degree matrix, and are all given by

D = diag(16,16,8,8,8,8,16,16,8,8,8,8,8,8,8,8,8,8,8,8,2,2,2,2,2,2,2,2,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,4,4,4,4,4,4,4,4).

In Table <ref> we give one of these Laplacian matrices. In the interest of space, we show only the rows above the main diagonal that are not trivially zero.
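The partitioning step is simple enough to sketch in a few lines of NumPy. The helper below is our own construction, not code from the BANDREP backend, and the irrep labels are placeholders; it builds L = D − A, extracts the null space, and row-reduces it to 0/1 indicator vectors, exactly as described above:

```python
import numpy as np

def connected_components(adjacency, labels):
    """Number of zero eigenvalues of L = D - A counts the connected
    components; Gauss-Jordan elimination of the null space yields 0/1
    indicator vectors, one per component."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A            # Laplacian: degree minus adjacency
    vals, vecs = np.linalg.eigh(L)
    M = vecs[:, np.isclose(vals, 0.0)].T.copy()  # rows = zero eigenvectors
    row = 0                                    # reduced row echelon form
    for col in range(M.shape[1]):
        pivots = np.nonzero(np.abs(M[row:, col]) > 1e-9)[0]
        if pivots.size == 0:
            continue
        M[[row, row + pivots[0]]] = M[[row + pivots[0], row]]
        M[row] /= M[row, col]
        for r in range(M.shape[0]):
            if r != row:
                M[r] -= M[r, col] * M[row]
        row += 1
        if row == M.shape[0]:
            break
    return [[labels[i] for i in np.nonzero(np.isclose(r, 1.0))[0]] for r in M]

# Toy check: two disconnected edges (4 nodes) give two components.
A = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
print(connected_components(A, ["G6", "A5", "G7", "A5'"]))
# -> [['G6', 'A5'], ['G7', "A5'"]]
```

Applied to one of the 80×80 Laplacian matrices of the P4/ncc example, the same routine would return the branches quoted below.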
Diagonalizing all valid Laplacian matrices, we find that this band representation can be disconnected in two inequivalent ways. In the first, the two disconnected components contain the representations

A̅_5 - Γ̅_8 - Γ̅_8 - M̅_5 - R̅_3 - R̅_4 - X̅_3 - X̅_4 - Z̅_5 - Z̅_7

and

A̅_5 - Γ̅_9 - Γ̅_9 - M̅_5 - R̅_3 - R̅_4 - X̅_3 - X̅_4 - Z̅_6 - Z̅_8.

Alternatively, we find that the decomposition into the components

A̅_5 - Γ̅_8 - Γ̅_9 - M̅_5 - R̅_3 - R̅_4 - X̅_3 - X̅_4 - Z̅_5 - Z̅_8

and

A̅_5 - Γ̅_8 - Γ̅_9 - M̅_5 - R̅_3 - R̅_4 - X̅_3 - X̅_4 - Z̅_6 - Z̅_7

is also possible. Since we construct the full connectivity graph for each band representation, we can also deduce the existence of symmetry-enforced band crossings in semi-metallic band structures. For example, we can see from the decomposition Eqs. (<ref>–<ref>) that at 1/4 filling (which can be achieved, for instance, by charge transfer) the band representation in space group P4/ncc (130) induced from the Γ̅_3 representation of the site-symmetry group of the 8d Wyckoff position realizes a semimetal, with nodal Fermi surfaces at the A and M points. For cases where the information about semimetals is not needed, we will now develop an algorithm in Sec. <ref> which is more efficient at determining the allowed decompositions of a connectivity graph. This algorithm determines disconnected components without explicitly constructing the full graph in the cases where it is connected. §.§ Disconnected graphs via fast search We now present a second algorithm, which directly constructs disconnected connectivity graphs. While it is based almost purely on combinatorial analysis, it is related in spirit to Prim's algorithm for finding minimal spanning forests<cit.>, in that it “grows” a disconnected connectivity graph starting from a set of seed nodes. Because the algorithm terminates once a solution is found, it has a much faster average runtime than the direct method of Sec. <ref>. We start with the (previously calculated<cit.>) multiplicities of all the irreps of the little group of every maximal 𝐤-vector in the decomposition of a band representation, the independent set of paths between them, and all the compatibility relations along these paths. We then attempt to partition the set of irreps at every maximal 𝐤-vector into subsets in such a way that a connectivity graph can be constructed with each subset corresponding to a distinct disconnected subgraph. These subgraphs can, in principle, be further disconnected into smaller connectivity subgraphs. The purpose of our algorithm is to determine all the possible ways to decompose an elementary band representation into disconnected subgraphs that are further indecomposable. We call each of these indecomposable subgraphs a branch of the connectivity graph of the band representation. Every distinct decomposition into branches represents a valid connectivity graph. Throughout, we shall describe the whole process with our example of space group P4/ncc (130); the general formalism then becomes obvious. We will focus on the elementary band representation induced from the A_g (in the notation of Ref. mulliken; Γ_1^+ in the notation of Ref. koster) irrep of the site-symmetry group of the Wyckoff position 8d (with representative coordinates (0,0,0) in the unit cell), isomorphic to the point group 1̅, generated by inversion. Table <ref> gives the subduced irreps of this band representation at every maximal 𝐤^M (see Table <ref>).
For simplicity, we will not consider TR symmetry in the example. At each maximal 𝐤-vector 𝐤^M, we calculate two parameters N(𝐤^M) and Ω(𝐤^M), defined as follows. If the elementary band representation subduces into the irreps ρ_1,ρ_2,… of the little group of 𝐤^M with multiplicities n_1,n_2,…, then

N(𝐤^M)=∑_i n_i and Ω(𝐤^M)=N(𝐤^M)!/(n_1! n_2! ⋯)

are the total number of irreps at 𝐤^M and the number of distinguishable ways to order these irreps (according to the energy associated to each irrep, for example). Next, we impose an ordering on the maximal 𝐤-vectors {𝐤^M_1,𝐤^M_2,…} of the space group. This ordering is defined by the rules

N(𝐤^M_i) ≤ N(𝐤^M_j) if i<j,
Ω(𝐤^M_i) ≤ Ω(𝐤^M_j) if i<j and N(𝐤^M_i)=N(𝐤^M_j).

If two or more 𝐤-vectors have the same values of both N and Ω, then we order them in an arbitrary way. We can thus refer to an elementary band representation by the label of the first vector 𝐤^M_1 in this ordering. In the example of the elementary band representation of Table <ref>, the maximal 𝐤-vectors are ordered according to these rules. For this elementary band representation, 𝐤^M_1≡ R, and we have N(R)=4 and Ω(R)=6. This reflects the fact that at the R point there are two distinct irreps R_1 and R_2, each with multiplicity two; these four irreps can be ordered in 6 distinguishable ways. Next, we calculate the compatibility relations between every irrep at every maximal 𝐤^M_i and the irreps at the non-redundant lines or planes connected to it, as per Sec. <ref>. For our example, Table <ref> gives these compatibility relations for the elementary band representation of Table <ref>. The first column of Table <ref> gives the maximal 𝐤-vectors 𝐤^M_i, and the second column lists, for each 𝐤^M_i, the irreps of its little group obtained by subduction of the elementary band representation (this information is also given in Table <ref>). The third and fourth columns give the dimension and the multiplicity of the irrep. The remaining columns give the compatibility relations for each irrep of each 𝐤^M_i along the paths that connect this point with the point shown in the first row of the given column. Only the independent paths, according to the discussion of subsection <ref>, are considered; we denote by "-" those paths that contain redundant information and can be omitted in the analysis. Using this data, we can calculate the possible connectivity graphs that consist of disconnected branches. We proceed with the following steps: *Step 1: We enumerate all the distinguishable ways the irreps at the first vector 𝐤^M_1 can be partitioned into different branches. *Step 2: We choose one of the potential decompositions obtained in Step 1, and determine the total dimension d of the irreps of the little group G_𝐤^M_1 in that branch. We also calculate the subduced representations of the little group of each path involving the point 𝐤^M_1. *Step 3: We combine the irreps (direct sums of irreps) at the second (in our (N,Ω) ordering) maximal 𝐤-vector 𝐤^M_2 in all the possible ways to get branches with dimension d. For these possible branches we also calculate the subduced representations of the little group of each path involving 𝐤^M_2. *Step 4: We compare the sets of irreps along the common path between 𝐤^M_1 and 𝐤^M_2. If the two sets are identical, then this connection satisfies the compatibility relations and represents part of a valid connectivity graph. If the sets of irreps are not the same, then the possible branch is discarded.
*Step 5: We return iteratively to Step 1 for each branch that is not discarded in Step 4, at each iteration adding the next maximal 𝐤-vector 𝐤^M_i in our ordering, and repeating the same calculations for all sets of connected vectors. We compare, by pairs, the sets of irreps along the common paths between the most recently introduced 𝐤^M_i and each of the previously added maximal 𝐤-vectors. Finally, we keep all the possible branches that fulfill all the compatibility relations along all the intermediate paths. This procedure either ends in a valid disconnected subgraph of the connectivity graph of the band representation, or terminates before all maximal 𝐤-vectors are added to the branch. Let us apply this method to our example in space group P4/ncc. In our example, there are 8 possible disconnected sets: the four irreps can be divided into 4 branches, {(R_1),(R_1),(R_2),(R_2)}; into three branches in three different ways, {(R_1),(R_1),(R_2,R_2)}, {(R_1),(R_1,R_2),(R_2)} and {(R_1,R_1),(R_2),(R_2)}; or into two branches in four different ways, {(R_1,R_1),(R_2,R_2)}, {(R_1,R_2),(R_1,R_2)}, {(R_1,R_1,R_2),(R_2)} and {(R_1),(R_1,R_2,R_2)}. Except for the two decompositions into 2 branches of 2 irreps, {(R_1,R_1),(R_2,R_2)} and {(R_1,R_2),(R_1,R_2)}, all decompositions include a branch with a single copy of either R_1 or R_2. Let us take a decomposition with a branch that includes R_1 at the point R, which has dimension 2. We look first along the path W. Note that the space group of the example is non-symmorphic and that the glide reflections {m_1,0,0|1/2,0,1/2} and {m_0,1,0|0,1/2,1/2} belong to the little group G_W. However, as explained in section <ref>, in this case only one set of compatibility relations is needed along every line, in particular along W. We have from the compatibility relations that R_1 is connected to W_1 and W_4. According to Table <ref>, since all irreps at the X point are 2-dimensional, exactly one of them must be included in the branch of R_1. But both X_1 and X_2 are connected to W_1 and W_3, and not to W_4. Therefore, there cannot be a branch that includes at R only the irrep R_1 in which the compatibility relations are fulfilled. The same is true for a branch that includes only the irrep R_2 at the R point. Next we check the possible decomposition {(R_1,R_1),(R_2,R_2)}. In the first branch, the compatibility relations that involve the direct sum R_1⊕R_1, with dimension 4, are R_1⊕R_1→ 2W_1⊕ 2W_4. At the X point, there is no way to get a direct sum of two irreps whose compatibility relations give the same set as for R_1⊕ R_1. Therefore the pair (R_1,R_1) cannot be the only two irreps at R of a branch in which the compatibility relations are fulfilled. Finally, we check the possible decomposition {(R_1,R_2),(R_1,R_2)}. In this case, there are solutions of the compatibility relations. Table <ref> shows the sets of irreps at each 𝐤^M-vector that form a branch fulfilling the compatibility relations. This table can be viewed as a 6×6 matrix whose (i,j)-th element gives the set of compatibility relations between the direct sum of irreps chosen at the maximal 𝐤^M_i point of reciprocal space and the irreps at the intermediate paths between the 𝐤^M_i and 𝐤^M_j maximal 𝐤-vectors. The subset of irreps chosen at the maximal 𝐤-vectors forms a branch which fulfills the compatibility relations if this matrix is symmetric. Finally, Table <ref> gives all the possible partitions of irreps into two different branches for the elementary band representation of Table <ref>.
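To illustrate the discard logic of Step 4, here is a small sketch (our own encoding; only the compatibility relations explicitly quoted above are included) showing why a branch containing R_1 alone fails along the path R-W-X:

```python
from itertools import chain

# Compatibility relations quoted in the text for P4/ncc:
# R_1 -> W_1 + W_4, and X_1, X_2 -> W_1 + W_3.
COMPAT_R = {"R1": ("W1", "W4")}
COMPAT_X = {"X1": ("W1", "W3"), "X2": ("W1", "W3")}

def subduced(irreps, table):
    """Multiset of line irreps subduced by a candidate branch."""
    return sorted(chain.from_iterable(table[i] for i in irreps))

def consistent(r_branch, x_branch):
    """Step 4: a pair of endpoint irrep sets survives only if both subduce
    the same multiset of irreps on the connecting line W."""
    return subduced(r_branch, COMPAT_R) == subduced(x_branch, COMPAT_X)

# Try to grow a branch containing only R_1 (dimension 2); since all X-point
# irreps are 2-dimensional, exactly one of them would have to join it:
for x in COMPAT_X:
    print(f"branch (R1 | {x}):", "kept" if consistent(["R1"], [x]) else "discarded")
# Both candidates are discarded, reproducing the conclusion that no branch
# can contain R_1 alone; {(R_1,R_2),(R_1,R_2)} is the only viable split.
```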
This enumerates all disconnected connectivity graphs for this band representation. § DATA RECORDS Using the algorithms described in Sec. <ref> along with the group-theoretic data computed in Ref. grouptheory, we have computed the minimal paths through the Brillouin zone for each of the 230 space groups needed to determine the connectivity of energy bands both with and without TR symmetry, as well as all possible disconnected connectivity graphs for all elementary and physically elementary band representations. We have compiled the output of the algorithms described in Sec. <ref> into different subprograms, all contained within the “BANDREP” application on the Bilbao Crystallographic Server (<www.cryst.ehu.es/cryst/bandrep>). The main input screen for this application is shown in Fig. <ref>. Here we will focus only on those features directly related to connectivity graphs. First, let us describe how to access the disconnected solutions for the connectivity graphs of a band representation. Entering the number of a space group, and clicking on either the “Elementary” or “Elementary TR” buttons, gives a table of all elementary or physically elementary band representations in the given space group, respectively. Band representations are listed according to the Wyckoff position and the irreducible representation of the site-symmetry group from which they are induced. In addition to the little group representations subduced at each maximal 𝐤-vector, for each band representation the output table contains a row labelled “Decomposable/Indecomposable,” which indicates whether or not a disconnected connectivity graph exists for the given band representation. In Fig. <ref>, we show the output of selecting “Elementary” for the space group I2_13 (199). In particular, there is one decomposable elementary band representation. It is induced from the ^2E̅ (Γ̅_3) representation of the site-symmetry group of the 12b Wyckoff position, which is isomorphic to the point group C_2. For this band representation – and more generally for any band representation with disconnected connectivity graphs – the entry in the “Decomposable/Indecomposable” row is a clickable button. The output of clicking this button is a list of all possible ways of partitioning connectivity graphs into disconnected components. This data is given in the format of Sec. <ref> and Table <ref>; each row corresponds to a different disconnected solution to the compatibility relations, and each column gives the little group representations subduced at each maximal 𝐤-vector in each branch (disconnected component). Fig. <ref> shows this output for the decomposable band representation ^2E̅↑ G induced from the 12b position in SG I2_13 (199). We see that there are three possible disconnected connectivity graphs, each with two disconnected components. To obtain the analogous information for the physically elementary band representations with TR symmetry, we can click instead the “Elementary TR” button on the main input screen. This output for space group I2_13 (199) is shown in Fig. <ref>. We see that with TR symmetry, there are now two decomposable physically elementary band representations. The first is induced from the physically irreducible E̅E̅ (Γ̅_4Γ̅_4) representation of the site-symmetry group of the 8a position, isomorphic to the point group C_3.
The second decomposable physically elementary band representation is induced from the ^1E̅^2E̅ (Γ̅_3Γ̅_4) representation of the site-symmetry group of the 12b Wyckoff position, which is isomorphic to the point group C_2. In Fig. <ref> we show the possible disconnected connectivity graphs for this latter band representation. It turns out that in this case there is only one allowed disconnected connectivity graph, with two branches. In addition to the connectivity graphs, we also give, for each space group, the minimal list of paths through the BZ and the associated compatibility relations needed to construct the full connectivity graphs from the little group representations at the maximal 𝐤-vectors. From the table of band representations accessed from either the “Elementary” or “Elementary TR” function, this data can be accessed by clicking the button labelled “Minimal set of paths and compatibility relations to analyse the connectivity”. The location of this button above the table of band representations can be seen in Figs. <ref> and <ref>. The output of this application gives two tables. The first table lists the minimal set of connections between maximal 𝐤-vectors, given in the format of Table <ref>. It has three columns: each row gives two maximal 𝐤-vectors in the first and third columns, which are connected by the non-maximal 𝐤-vector in the second column. Directly below the table of 𝐤-vectors, we display the compatibility relations along each of the listed connections. This table has five columns. The first, third, and fifth columns correspond to the first maximal, intermediate, and second maximal 𝐤-vector columns given in the table of connections, while the second and fourth columns give the compatibility relations along each connection. For each little group representation of the maximal 𝐤-vectors, the compatibility relations are given in the format of Eq. (<ref>). For those non-symmorphic groups that require two different sets of compatibility relations related by monodromy, the second set is given immediately next to the first. As an example, we show in Fig. <ref> the set of paths and compatibility relations for SG I2_13 (199) without TR symmetry, obtained by clicking the “Minimal set of paths and compatibility relations to analyse the connectivity” button in Fig. <ref>. We see that there are only three maximal 𝐤-vectors that determine the connectivity, Γ, H, and P. There are three essential connections,

Γ ↔ Δ ↔ H, Γ ↔ Λ ↔ H, Γ ↔ Λ ↔ P.

Although this group is non-symmorphic, we see from the compatibility table that only one set of compatibility relations is needed along each connection. This is due to the additional constraints imposed by the cubic threefold rotation. Clicking on the analogous button in the output of Fig. <ref> gives the minimal paths and compatibility relations for this same space group once time-reversal symmetry is included. We show these in Fig. <ref>. We see immediately that TR singles out an additional (TR-invariant) maximal 𝐤-vector, labelled N. In addition to the connections in Eq. (<ref>), we see that with TR we must also consider compatibility along the connection N ↔ D ↔ P. Once again, we see from the compatibility table that only one set of compatibility relations is needed for every connection in this space group with TR symmetry. § TECHNICAL VALIDATION Now that we have produced the data and the applications with which to access it, we will show here an example of how they may be used.
We examine the case of graphene on graphite (or on another symmetry-preserving, lattice-matched substrate which breaks only inversion symmetry), corresponding to the Kane-Mele model with inversion-symmetry breaking. This is described by the three-dimensional space group P6mm (183). We will see how we can recover the full topological phase diagram using the graph output files, and in so doing give a consistency check on our data. The relation between the topological phases of graphene and the connectivity of elementary band representations was first computed in Refs. NaturePaper,EBRtheory. Here we will show how to recover these computations using the applications we have produced. The carbon atoms in graphene sit at the 2b Wyckoff position of space group P6mm (183). The site-symmetry group of this position is isomorphic to the point group C_3v (3m), generated by a threefold rotation C_3z about the z-axis (normal to the plane) and the vertical mirror m_y. By consulting the data presented in Refs. NaturePaper,grouptheory, we can see that spinful p_z orbitals transform in the two-dimensional Γ̅_6 representation of this group. Next, we consult the BANDREP program for this space group. Since we are interested in orbitals at the 2b position only with time-reversal symmetry, we may use the “Wyckoff TR” option<cit.>. This outputs the physically elementary band representations induced only from the 2b Wyckoff position; the output is shown in Fig. <ref>. From this we see that the band representation induced from the E̅_1 (Γ̅_6) representation at the 2b site is decomposable. From the first column of the table, we see that the maximal 𝐤-vectors in this space group are labelled Γ, K, M, A, H, L. Since we are only interested in the two-dimensional system, we need only concern ourselves with the subset Γ, K, M of maximal 𝐤-vectors at k_z=0. Furthermore, the little group of each vertical line in this space group is the same as the unitary subgroup of the little group of its endpoints, and so the compatibility relations along the vertical lines are trivial. Thus, we can find all disconnected connectivity graphs for the 2D system from the BANDREP application by simply looking at the k_z=0 subgraph of every 3D connectivity graph. Knowing this, we examine the output of the “Decomposable” option for the E̅_1↑ G band representation induced from the 2b position of SG P6mm, given in Fig. <ref>. We see that there are two different disconnected connectivity graphs. In the first, the 2D system has one connected component containing the Γ̅_8, K̅_6, and M̅_5 little group representations, while the other component contains Γ̅_9, K̅_4, K̅_5 and M̅_5. The second disconnected solution has Γ̅_8 and Γ̅_9 interchanged. This matches precisely the result of Ref. NaturePaper obtained by a direct analysis of the compatibility relations. These disconnected band graphs correspond to the topologically disconnected bands of the Kane-Mele model of graphene with Rashba spin-orbit coupling. § USAGE NOTES All of the data and applications described in Sec. <ref> can be accessed via the “BANDREP” program at the Bilbao Crystallographic Server, accessible at <http://www.cryst.ehu.es/cryst/bandrep>. In conjunction with the group theory applications described in Ref. grouptheory, and hosted on the Bilbao Crystallographic Server, all information on type (1,1) topological phases<cit.> may be deduced. Additionally, the algorithms described in Sec.
<ref> may be used along with any list of little group representations to deduce the existence of type (1,2) and type (2,2) band-inversion topological insulators. BB would like to thank Ida Momennejad and Dustin Ngo for fruitful discussions. MGV would like to thank Gonzalo Lopez-Garmendia for help with computational work. BB, JC, ZW, and BAB acknowledge the hospitality of the Donostia International Physics Center, where parts of this work were carried out. JC also acknowledges the hospitality of the Kavli Institute for Theoretical Physics, and BAB also acknowledges the hospitality and support of the École Normale Supérieure and Laboratoire de Physique Théorique et Hautes Energies. The work of MGV was supported by the FIS2016-75862-P and FIS2013-48286-C2-1-P national projects of the Spanish MINECO. The work of LE and MIA was supported by the Government of the Basque Country (project IT779-13) and the Spanish Ministry of Economy and Competitiveness and FEDER funds (project MAT2015-66441-P). ZW and BAB, as well as part of the development of the initial theory and further ab-initio work, were supported by NSF EAGER Grant No. DMR-1643312, ONR-N00014-14-1-0330, ARO MURI W911NF-12-1-0461, and NSF-MRSEC DMR-1420541. The development of the practical part of the theory, tables, some of the code development, and ab-initio work was funded by Department of Energy Grant No. DE-SC0016239, a Simons Investigator Award, the Packard Foundation, and the Schmidt Fund for Innovative Research.
"authors": [
"M. G. Vergniory",
"L. Elcoro",
"Zhijun Wang",
"Jennifer Cano",
"C. Felser",
"M. I. Aroyo",
"B. Andrei Bernevig",
"Barry Bradlyn"
],
"categories": [
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mes-hall",
"published": "20170626180003",
"title": "Graph Theory Data for Topological Quantum Chemistry"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.