N. Copernicus Astronomical Centre, Polish Academy of Sciences, Bartycka 18, 00-716 Warszawa, Poland; [email protected]

This paper is a follow-up on two previous ones, in which properties of blueshifted rays were investigated in Lemaître – Tolman (L–T) and quasispherical Szekeres (QSS) spacetimes. In the present paper, an axially symmetric QSS deformation is superposed on an L–T background that was shown, in the first paper, to mimic several properties of gamma-ray bursts. The present model brings z closer to -1 than the background L–T spacetime does, and, as implied by the second paper, strong blueshifts exist in it only along two opposite directions. The QSS region is matched into a Friedmann background. The Big Bang (BB) function t_B(r), which is constant in the Friedmann region, has a gate-shaped hump in the QSS region. Since a QSS island generates stronger blueshifts than an L–T island, the BB hump can be made lower – then it is further removed from the observer and implies a smaller observed angular radius of the source. Consequently, more sources can be fitted into the sky – all these facts are confirmed by numerical computations. Null geodesics reaching present observers from different directions relative to the BB hump are numerically calculated. Patterns of redshift across the image of the source and along the rays are displayed.

Properties of blueshifted light rays in quasispherical Szekeres metrics

Andrzej Krasiński

§ MOTIVATION AND BACKGROUND

In Lemaître <cit.> – Tolman <cit.> and Szekeres <cit.> spacetimes, some of the light rays emitted at the Big Bang (BB) reach all observers with infinite blueshift (1 + z ≡ ν_e/ν_o = 0, where ν_e and ν_o are the frequencies of the emitted and observed radiation, respectively). This is in contrast to Robertson – Walker spacetimes, where all light from the BB is observed with z = ∞ <cit.>. The quantity z, traditionally called redshift, being negative (and then called blueshift) means that the observed frequency is higher than the frequency at the emission point, and z → -1 implies ν_o → ∞. The existence of blueshifts in L–T models was predicted by Szekeres in 1980 <cit.>, in a casual remark without proof, and then confirmed by Hellaby and Lake in 1984 <cit.> by explicit calculation.

Two conditions are necessary for infinite blueshift:

(1) The BB time at the emission point of the ray must have a nonzero spatial derivative in comoving-synchronous coordinates (the BB is "nonsimultaneous").

(2) The ray must be emitted at the BB in a radial direction.

Condition (2) was derived in Ref. <cit.>, but seems to have been overlooked by all later authors until Ref. <cit.>, even though it follows quite simply from the geodesic equations. The two conditions together also seem to be sufficient, but a general proof of their sufficiency does not yet exist; it is only implied by the full list of separate cases <cit.> and hinted at by numerical calculations <cit.>.

The Szekeres spacetimes <cit.>, in general, have no symmetry, and thus no radial directions. In view of condition (2), it was not clear whether any rays with infinite blueshift exist in them. This question was addressed in Ref. <cit.>. It was shown that in an axially symmetric quasispherical Szekeres (QSS) spacetime, z = -1 can possibly occur on axial rays, i.e., those that intersect every space of constant time on the symmetry axis.
It was then confirmed by a numerical calculation in an exemplary QSS model that 1 + z < 10^-5 along axial rays emitted from the BB. It was also shown, by a blind numerical search, that rays with 1 + z < 0.07, and with similar spatial profiles of z along neighbouring rays, exist in an exemplary fully nonsymmetric QSS model.

Since the L–T and Szekeres models have been proven to successfully describe several observed features of our Universe <cit.>, and they predict a possible existence of blueshifts, one must thoroughly test the implications of blueshifts in order to either find a place for them among the observed phenomena or conclude that the BB in the real Universe must have been simultaneous. With this motivation, it was shown in Ref. <cit.> that an L–T region with a gate-shaped "hump" on the BB profile, matched into a Friedmann background, can mimic some observed properties of gamma-ray bursts (GRBs), such as the frequency range (0.24 × 10^19 to 1.25 × 10^23 Hz), the existence of afterglows, and the large distances to the sources. Placing several different L–T regions in the same Friedmann background would then account for the large number of possible sources. However, the model of Ref. <cit.> was unsuccessful on two counts:

(1) The gamma-ray flashes and the afterglows lasted for too long. The model contains a parameter that should allow for controlling the durations, but insufficient numerical accuracy did not permit its actual use.

(2) The radiation was emitted isotropically, instead of being collimated into narrow beams, as the observed GRBs are supposed to be <cit.>.

Also, the model of Ref. <cit.> left some problems open. The main one was how small the humps on the BB profile could be made while still generating the right range of frequencies of the observed radiation.[It is easy to obtain a small 1 + z with a high hump on the BB, but then the radiation source is close to the observer and has a large angular diameter in the sky. With a lower hump the diameter gets smaller, but 1 + z gets larger. Keeping both the diameter and 1 + z sufficiently small is the main difficulty.]

Ref. <cit.> was the first step in improving the model of Ref. <cit.>. It showed by examples that strongly blueshifted rays in QSS spacetimes exist only along two opposite directions. That paper also proved that in a QSS model the minimum 1 + z is smaller than in an L–T model that has the same BB profile.

The present paper builds upon this last observation. The model considered here is a QSS deformation superposed on the L–T region of Ref. <cit.>. Since the QSS deformation results in a smaller 1 + z at the observer, the minimum value of 1 + z found in Ref. <cit.> can be achieved with a lower BB hump. This implies a greater distance between the source of radiation and the observer, and a smaller angular diameter of the source seen in the sky. The progress achieved with respect to Ref. <cit.> is rather moderate, but this cannot be the ultimate limit of improvement: the class of BB profiles used here was found by trial and error (see Sec. <ref>), and it is impossible that the optimal shape could have been hit upon in this way.

The L–T and Szekeres metrics are solutions of the Einstein equations with a dust source, so they cannot apply to the real Universe at such early times when pressure cannot be neglected. It is assumed that they may apply onward from the end of the last-scattering (LS) epoch. The mean mass density at LS, denoted ρ_LS, in the now-standard ΛCDM model is known <cit.>; see Sec. <ref>.
For every past-directed null geodesic in a QSS (or L–T) region, the mass density at the running point is numerically calculated. When this density becomes equal to ρ_LS, the integration is stopped. Thus, 1 + z between LS and the present time is bounded from below: z_LS ≥ z_min > -1. The computational problem is to arrange the BB profile so that it makes z_LS sufficiently near to -1 (1 + z_LS < 1.689 × 10^-5 <cit.>), but does not lead to perturbations of the CMB radiation larger than observations allow. Among other things, this implies that the model must be capable of making the angular diameter of the radiation sources smaller than the observed diameter of the GRBs (currently[Private communication in 2015 from Linda Sparke, then at NASA. The 1^∘ is the current resolution of the detectors rather than the true diameter.] ≈ 1^∘, see Sec. <ref>).

In Secs. <ref> and <ref>, the subfamily of QSS models employed here is presented. It is an axially symmetric QSS region matched into a Friedmann background with curvature index k = -0.4. In Sec. <ref>, the parameters of the background model are specified. They are different from those of the ΛCDM model <cit.> – it was convenient to keep them the same as in the earlier papers by this author <cit.>. In Sec. <ref>, the equations of null geodesics in the QSS region are presented. In Sec. <ref>, basic properties of redshift are described, and the conditions for z = -1 in an axially symmetric QSS model are spelled out. In Sec. <ref>, the equation of the Extremum Redshift Surface (ERS) is derived,[Sections <ref>, <ref>, <ref> and <ref> are partly copied from Ref. <cit.>.] on which z has maxima or minima along axial rays. In Sec. <ref>, the numerical parameters of the model used here are adapted to the GRBs of lowest frequency. In Sec. <ref>, exemplary nonaxial plane rays reaching the present observers are numerically determined. The observers are placed in three directions with respect to the QSS region: (I) in prolongation of the dipole minimum, (II) in prolongation of the dipole maximum, and (III) in prolongation of the dipole equator of the boundary of the QSS region. For each observer, the redshift profiles across the image of the radiation source are presented in tables. In Sec. <ref>, redshift profiles along the nonaxial rays reaching Observer I are displayed, to show that analogues of the ERS exist also along nonaxial directions. In Sec. <ref>, it is estimated that ≈ 11,000 radiation sources of the kind constructed in Sec. <ref> could be fitted into the celestial sphere. The necessary and possible improvements of the model are discussed in Sec. <ref>. Section <ref> contains the summary and conclusions.

The present paper is a study in the geometry of the QSS spacetimes and in the properties of their blueshifted rays. Also, it introduces methods that can be used in further refinements of the model. The observed parameters of the GRBs were used as a beacon pointing the way, but the configuration derived here needs further improvements before it can be considered a model of a GRB source; see Sec. <ref>.

Most results of numerical calculations are quoted up to 17 decimal digits. Such precision is needed to capture time intervals of ≈ 10 min at the observer, which is ≈ 2 × 10^-16 in the units used here; see Sec. <ref>. (The 10 min is a representative time during which GRBs are visible to the detectors <cit.>.)
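The unit conversion behind the last two numbers is elementary and can be checked directly; the following snippet is not part of the paper's machinery and is added here only as a convenience, using the unit definitions of Sec. <ref>:

```python
# Consistency check (not from the paper): 10 minutes expressed in the numerical
# time units (NTU) of Sec. <ref>, where 1 NTU = 9.8e10 y and 1 y = 3.156e7 s.
SECONDS_PER_YEAR = 3.156e7
YEARS_PER_NTU = 9.8e10

ten_min_in_years = 10 * 60 / SECONDS_PER_YEAR   # ~1.90e-5 y
print(ten_min_in_years / YEARS_PER_NTU)         # ~1.94e-16, i.e. ~2e-16 NTU
```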
§ QSS SPACETIMES

The metric of the QSS spacetimes is <cit.>

ds^2 = dt^2 - (Φ,_r - Φℰ,_r/ℰ)^2/(1 + 2E(r)) dr^2 - (Φ/ℰ)^2 (dx^2 + dy^2),

ℰ ≡ (S/2) [((x - P)/S)^2 + ((y - Q)/S)^2 + 1],

P(r), Q(r), S(r) and E(r) being arbitrary functions such that S ≠ 0 and E ≥ -1/2 at all r.

The source in the Einstein equations is dust (p = 0) with the velocity field u^α = δ^α_0. The surfaces of constant t and r are nonconcentric spheres, and (x, y) are stereographic coordinates on each sphere. At a fixed r, they are related to the spherical coordinates by

x = P + S cot(ϑ/2) cos φ,  y = Q + S cot(ϑ/2) sin φ.

The functions (P, Q, S) determine the centers of the spheres in the spaces of constant t (see illustrations in Ref. <cit.>). Because of the nonconcentricity, the QSS spacetimes, in general, have no symmetry <cit.>.

With Λ = 0 assumed, Φ(t,r) obeys

Φ,_t^2 = 2E(r) + 2M(r)/Φ,

where M(r) is an arbitrary function. We consider models with E > 0; then

Φ(t,r) = (M/2E) (cosh η - 1),

sinh η - η = ((2E)^3/2/M) [t - t_B(r)],

where t_B(r) is one more arbitrary function; t = t_B(r) is the BB time, at which Φ(t_B, r) = 0. We assume Φ,_t > 0 (the Universe is expanding).

The mass density implied by (<ref>) is

κρ = 2 (M,_r - 3Mℰ,_r/ℰ) / [Φ^2 (Φ,_r - Φℰ,_r/ℰ)],  κ ≡ 8πG/c^2.

This density distribution is a mass dipole superposed on a spherically symmetric monopole <cit.>. The dipole, generated by ℰ,_r/ℰ, vanishes where ℰ,_r = 0. The density is minimum where ℰ,_r/ℰ is maximum and vice versa <cit.>.

The arbitrary functions must be such that 0 < ρ < ∞ at all t > t_B(r). The conditions that ensure this are <cit.>:

M,_r/(3M) ≥ √((S,_r)^2 + (P,_r)^2 + (Q,_r)^2)/S  ∀ r,

E,_r/(2E) > √((S,_r)^2 + (P,_r)^2 + (Q,_r)^2)/S  ∀ r.

These inequalities imply <cit.>

M,_r/(3M) ≥ ℰ,_r/ℰ,  E,_r/(2E) > ℰ,_r/ℰ  ∀ r.

The extrema of ℰ,_r/ℰ with respect to (x, y) are <cit.>

ℰ,_r/ℰ|_extreme = ± √((S,_r)^2 + (P,_r)^2 + (Q,_r)^2)/S,

with + corresponding to the maximum and - to the minimum. In the following, we will call these two loci the "dipole maximum" and "dipole minimum", respectively.

The L–T models follow from the QSS models as the limit of constant (P, Q, S). Then the constant-(t, r) spheres become concentric, and the spacetime becomes spherically symmetric. The Friedmann limit is obtained when E/M^2/3 and t_B are constant (in this limit, (P, Q, S) can be made constant by a coordinate transformation). A QSS spacetime can be matched to a Friedmann spacetime across an r = constant hypersurface.

Because of p = 0, the QSS models can describe the past evolution of the Universe no further back than to the last-scattering hypersurface (LSH). See Sec. <ref> for information on how to determine it in our model.

§ THE QSS MODELS CONSIDERED IN THIS PAPER

We will consider those QSS spacetimes whose L–T limit is Model 2 of Ref. <cit.>. The r-coordinate is chosen so that

M = M_0 r^3,

and M_0 = 1 (kept in formulae for dimensional clarity) <cit.>. From this point on, the r-coordinate is unique. The function E(r), assumed in the form

2E/r^2 = -k = 0.4,

is the same as in the background Friedmann model.

The units used in numerical calculations were introduced and justified in Ref. <cit.>. Taking <cit.>

1 pc = 3.086 × 10^13 km,  1 y = 3.156 × 10^7 s,

the numerical length unit (NLU) and the numerical time unit (NTU) are defined as follows:

1 NTU = 1 NLU = 9.8 × 10^10 y = 3 × 10^4 Mpc.
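The parametric solution (<ref>) – (<ref>) is implicit in η, so Φ(t, r) has to be obtained numerically at every point where it is needed. The paper does not specify the author's inversion method; the following is a minimal sketch of one standard approach (the function names are mine), using the M(r) and E(r) defined above:

```python
# A minimal sketch (not the paper's code) of evaluating Phi(t, r) for E > 0:
# Phi = (M/2E)(cosh eta - 1), with eta fixed implicitly by
# sinh eta - eta = (2E)^(3/2) (t - t_B)/M.  Assumes t >= t_B(r).
import math

def phi_of_t(t, r, M, E, t_B, tol=1e-14):
    """Areal radius Phi(t, r), by Newton iteration on sinh(eta) - eta = rhs."""
    Mr, Er = M(r), E(r)
    rhs = (2.0 * Er) ** 1.5 * (t - t_B(r)) / Mr
    eta = max((6.0 * rhs) ** (1.0 / 3.0), 1e-8)   # small-eta guess: sinh x - x ~ x^3/6
    for _ in range(100):
        step = (math.sinh(eta) - eta - rhs) / (math.cosh(eta) - 1.0)
        eta -= step
        if abs(step) < tol * max(eta, 1.0):
            break
    return Mr / (2.0 * Er) * (math.cosh(eta) - 1.0)

# Example with the choices of this section: M = M_0 r^3, 2E = 0.4 r^2 (k = -0.4).
M0 = 1.0
Phi = phi_of_t(0.0, 0.5, lambda r: M0 * r**3, lambda r: 0.2 * r**2,
               lambda r: -0.13945554689046649)
```

Newton's method converges quickly here because sinh η - η is increasing and convex for η > 0, so after the first step the iteration approaches the root monotonically.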
The BB profile belongs to the same 5-parameter family as in Ref. <cit.>, see Fig. <ref>. It consists of two curved arcs and a straight line segment joining them. The upper-left arc, shown as a thicker line, is a segment of the curve

r^6/B_1^6 + (t - t_Bf - A_0)^6/B_0^6 = 1,

where

t_Bf = -0.13945554689046649 NTU ≈ -13.67 × 10^9 years;

see Sec. <ref> for comments on this value. The lower-right arc (also shown as a thicker line) is a segment of the ellipse

(r - B_1 - A_1)^2/A_1^2 + (t - t_Bf - A_0)^2/A_0^2 = 1.

The straight segment[It was introduced to keep t_B,_r finite everywhere.] passes through the point (r, t) = (B_1, t_Bf + A_0), where the full curves (shown as dotted lines) would meet; x_0 determines its slope.

The free parameters are A_0, A_1, B_0, B_1 and x_0. Figure <ref> does not show the values used in numerical calculations; in particular, x_0 and A_1 are greatly exaggerated. The actual values in Model 2 of Ref. <cit.> are

(A_0, B_0, A_1, B_1, x_0) = (0.000026 NTU, 0.0001 NTU, 1 × 10^-10, 0.015, 2 × 10^-13)

(A_1, B_1 and x_0 are dimensionless). This profile will be the starting point for modifications.

The QSS model used here is axially symmetric, with P(r) = Q(r) = 0 and S(r) the same as in Ref. <cit.>:

S = √(a^2 + r^2),

where a > 0 is a constant, and so

ℰ = (x^2 + y^2 + S^2)/(2S).

This S(r) obeys (<ref>) and (<ref>), which, using (<ref>) and (<ref>), both reduce to

1/r > S,_r/S.

The equation of the dipole "equator" ℰ,_r = 0 is

x^2 + y^2 = S^2;

the axis of symmetry is x = y = 0. The extrema of the dipole are, from (<ref>),

ℰ,_r/ℰ|_extreme = ± S,_r/S.

At r > r_b, where

r_b = A_1 + B_1 = 0.0150000001,

the BB profile becomes flat, and the geometry of the model becomes Friedmannian. See Sec. <ref> for remarks on the choice of coordinates in that region.

§ THE BACKGROUND MODEL

Our Friedmann background is defined by:

Λ = 0,  k = -0.4,  t_B = t_Bf,

where k is the curvature index and t_B is the BB time given by (<ref>); t = 0 is the present time. The t_Bf is the asymptotic value of the function t_B(r) in the L–T model that mimicked accelerating expansion <cit.>. This differs by ∼ 1.6 % from (-T), where T is the age of the Universe given by the Planck satellite team <cit.>:

T = 13.819 × 10^9 y = 0.141 NTU.

The density at the last-scattering time is <cit.>

κρ_LS = 56.1294161975316 × 10^9 (NLU)^-2.

This value follows from the model of the cosmological recombination process <cit.> and is independent of the after-recombination model. With (<ref>), ρ_LS implies the redshift relative to the present time

1 + z^b_LS = 952.611615159.

This differs by ∼ 12.7 % from the ΛCDM value <cit.>

z_LS = 1090.

The present temperature of the CMB radiation is directly measured, so if (<ref>) were taken for real, the temperature of the background radiation at emission would be ∼ 3380 K instead of the ∼ 3000 K dictated by current knowledge. To reconcile our model with these data, many recalculations would be required. Since our model needs other improvements anyway, we will stick to (<ref>), to be able to compare the present results with the earlier ones.
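For convenience, the BB-profile inputs of the two preceding sections can be collected in code. The sketch below is mine and is simplified: it lets the two arcs meet directly at (B_1, t_Bf + A_0), omitting the short straight segment of slope set by x_0 that keeps t_B,_r finite there.

```python
# Simplified sketch (not the paper's code) of the gate-shaped BB profile t_B(r).
import math

t_Bf = -0.13945554689046649                      # NTU, Eq. (<ref>)
A0, B0, A1, B1 = 0.000026, 0.0001, 1e-10, 0.015  # Model 2 values, Eq. (<ref>)

def t_B(r):
    """BB hump of height A0 + B0; flat (Friedmann) for r > r_b = A1 + B1."""
    if r <= B1:       # upper-left r^6-superellipse arc: top of the hump
        return t_Bf + A0 + B0 * (1.0 - (r / B1) ** 6) ** (1.0 / 6.0)
    if r <= B1 + A1:  # lower-right quarter-ellipse arc, down to the background
        return t_Bf + A0 - A0 * math.sqrt(max(0.0, 1.0 - ((r - B1 - A1) / A1) ** 2))
    return t_Bf       # Friedmann region: simultaneous BB
```

Note that t_B(0) = t_Bf + A_0 + B_0, which is the hump height quoted in Sec. <ref>.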
§ NULL GEODESICS IN THE AXIALLY SYMMETRIC QSS SPACETIMES

In an axially symmetric QSS metric, x and y can be chosen such that P = Q = 0; then x = y = 0 is the symmetry axis <cit.>. However, the loci x = ∞ and y = ∞ are coordinate singularities (they are at the pole of the stereographic projection), and numerical integration of nonaxial geodesics breaks down on crossing those sets. Therefore, we introduce the new coordinates (ϑ, φ) by

x = S_b cot(ϑ/2) cos φ,  y = S_b cot(ϑ/2) sin φ,

where S_b is S at the Szekeres/Friedmann boundary:

S_b ≡ S(r_b) = √(a^2 + r_b^2).

This changes (<ref>) and (<ref>) to

ds^2 = dt^2 - N^2 dr^2/(1 + 2E(r)) - (Φ/ℱ)^2 (dϑ^2 + sin^2 ϑ dφ^2),

ℱ ≡ (S_b/2S)(1 + cos ϑ) + (S/2S_b)(1 - cos ϑ),

N ≡ Φ,_r - Φℱ,_r/ℱ.

The dipole equator ℱ,_r = 0 is now at cot(ϑ_eq/2) = S/S_b (so ϑ_eq = π/2 at the QSS boundary). On the boundary sphere r = r_b we have ℱ = 1, and (ϑ, φ) become the spherical coordinates with the origin at r = 0.

Along a geodesic we denote

(k^t, k^r, k^ϑ, k^φ) ≡ d(t, r, ϑ, φ)/dλ,

where λ is an affine parameter. The geodesic equations for (<ref>) – (<ref>) are

dk^t/dλ + N N,_t (k^r)^2/(1 + 2E) + ΦΦ,_t [(k^ϑ)^2 + sin^2 ϑ (k^φ)^2]/ℱ^2 = 0,

dk^r/dλ + (2N,_t/N) k^t k^r + (N,_r/N - E,_r/(1 + 2E)) (k^r)^2 + (2 S,_r sin ϑ Φ/(S ℱ^2 N)) k^r k^ϑ - (Φ (1 + 2E)/(ℱ^2 N)) [(k^ϑ)^2 + sin^2 ϑ (k^φ)^2] = 0,

dk^ϑ/dλ + (2Φ,_t/Φ) k^t k^ϑ - (S,_r sin ϑ N/(S Φ (1 + 2E))) (k^r)^2 + (2N/Φ) k^r k^ϑ + (ℱ,_ϑ/ℱ) [-(k^ϑ)^2 + sin^2 ϑ (k^φ)^2] - cos ϑ sin ϑ (k^φ)^2 = 0,

dk^φ/dλ + (2Φ,_t/Φ) k^t k^φ + (2N/Φ) k^r k^φ + 2 [cos ϑ/sin ϑ - ℱ,_ϑ/ℱ] k^ϑ k^φ = 0.

The geodesics determined by (<ref>) – (<ref>) are null when

(k^t)^2 - N^2 (k^r)^2/(1 + 2E(r)) - (Φ/ℱ)^2 [(k^ϑ)^2 + sin^2 ϑ (k^φ)^2] = 0.

Note that k^φ ≡ 0 is a solution of (<ref>), while ϑ ≡ 0 and ϑ ≡ π (axial rays) are solutions of (<ref>).

To calculate k^r on nonaxial null geodesics, Eq. (<ref>) will be used, which is insensitive to the sign of k^r. A numerical program for integrating the set {(<ref>), (<ref>) – (<ref>)} will have to change the sign of k^r wherever k^r reaches zero.

There exist no null geodesics on which k^φ ≡ 0 and ϑ has any constant value other than 0 or π. This follows from (<ref>): suppose k^φ ≡ 0 everywhere and k^ϑ = 0 at a point. Then, if sin ϑ ≠ 0, the third term in (<ref>) will be nonzero (because |S Φ (1 + 2E)| < ∞, S,_r ≠ 0 from (<ref>), N ≠ 0 from the no-shell-crossing conditions <cit.> and k^r ≠ 0 from (<ref>)), and so dk^ϑ/dλ ≠ 0. Consequently, in the axially symmetric case the only analogues of radial directions are ϑ = 0 and ϑ = π. The fact reported under (<ref>) below is consistent with this.

The coefficient 1/Φ in (<ref>) and (<ref>) becomes infinite at r = 0, where Φ = 0 <cit.>, but all the suspicious-looking terms are in fact finite there <cit.>. In the present paper the only geodesics running through r = 0 will be the axial ones, on which (<ref>) and (<ref>) are obeyed identically.

Let the subscript o refer to the observation point. On past-directed rays k^t < 0, and the affine parameter along each one can be chosen such that

k^t_o = -1.

Then, from (<ref>) we have

(k_o^ϑ)^2 + sin^2 ϑ (k_o^φ)^2 ≤ (ℱ_o/Φ_o)^2;

the equality occurs when the ray is tangent to a hypersurface of constant r at the observation event, k_o^r = 0.

On the boundary r = r_b between the QSS and Friedmann regions, the coordinates on both sides must coincide. Thus, for the Friedmann region one must use the metric (<ref>) with t_B = t_Bf given by (<ref>) (E has the Friedmann form (<ref>) everywhere). The metric then becomes Friedmann with no further limitation on S. But for correspondence with Ref. <cit.>, we choose the coordinates in the Friedmann region so that

S = √(a^2 + r_b^2) = S_b.

Then ℱ = 1, and (ϑ, φ) are the spherical coordinates throughout the Friedmann region.
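On axial rays (ϑ ≡ π, k^ϑ = k^φ = 0) the set above closes into two equations, which makes the frequency shift along such rays computable with a simple marching scheme. The following is a rough sketch of my own reduction under these assumptions, not a reproduction of the paper's integrator; it reuses phi_of_t from the sketch in Sec. <ref>, and 1 + z = -k^t anticipates the next section.

```python
# Schematic marching integrator (my reduction, not the paper's code) for a
# past-directed AXIAL ray, theta == pi.  The null condition gives
# |k^r| = sqrt(1+2E)|k^t|/N, and on the inward leg (k^r < 0) the system reduces to
#     dt/dr = N/sqrt(1+2E),    d ln(1+z)/dr = -N,_t/sqrt(1+2E),
# with N = Phi,_r - Phi S,_r/S on theta = pi.  Derivatives of Phi are taken by
# central differences around phi_of_t from the earlier sketch.
import math

def N_and_Nt(t, r, M, E, t_B, a, h=1e-8):
    s_over_s = r / (a * a + r * r)               # S,_r/S for S = sqrt(a^2 + r^2)
    def N_at(tt):
        Phi_r = (phi_of_t(tt, r + h, M, E, t_B)
                 - phi_of_t(tt, r - h, M, E, t_B)) / (2 * h)
        return Phi_r - phi_of_t(tt, r, M, E, t_B) * s_over_s
    return N_at(t), (N_at(t + h) - N_at(t - h)) / (2 * h)

def one_plus_z_axial(t_obs, r_obs, M, E, t_B, a, n_steps=20000):
    """March a past-directed axial ray inward from (t_obs, r_obs); return 1 + z."""
    t, r, opz = t_obs, r_obs, 1.0
    dr = -r_obs / n_steps                        # inward: r decreases
    for _ in range(n_steps):
        N, N_t = N_and_Nt(t, r + dr / 2, M, E, t_B, a)
        root = math.sqrt(1.0 + 0.4 * r * r)      # sqrt(1+2E), 2E = 0.4 r^2
        t += dr * N / root                       # t decreases toward the past
        opz *= math.exp(-dr * N_t / root)        # z has extrema where N,_t = 0
        r += dr
    return opz
```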
§ THE REDSHIFT IN AXIALLY SYMMETRIC QSS SPACETIMES

Along a ray emitted at P_e and observed at P_o,

1 + z = (u_α k^α)_e/(u_α k^α)_o,

where u_α are the four-velocities of the emitter and of the observer, and k^α is the affinely parametrised tangent vector field to the ray <cit.>. In our case, u_α = δ^0_α for both, and then (<ref>) simplifies to 1 + z = k_e^t/k_o^t. If the affine parameter is rescaled so that (<ref>) holds, then

1 + z = -k_e^t.

Equation (<ref>) has the first integral

k^φ sin^2 ϑ Φ^2/ℱ^2 = J_0,

where J_0 is constant. When (<ref>) is substituted in (<ref>), the following results:

(k^t)^2 = N^2 (k^r)^2/(1 + 2E) + (Φ/ℱ)^2 (k^ϑ)^2 + (J_0 ℱ/(sin ϑ Φ))^2.

Equations (<ref>) and (<ref>) show that for rays emitted at the BB, where Φ = 0, the observed redshift is infinite when J_0 ≠ 0. A necessary condition for infinite blueshift (1 + z_o = 0) is thus J_0 = 0, so

(a) either k^φ = 0, i.e., the ray proceeds in a hypersurface of constant φ,

(b) or ϑ = 0, π along the ray (J_0/sin ϑ → 0 when ϑ → 0, π by (<ref>)).

Condition (b) appears to be also sufficient, but this has been demonstrated only numerically in concrete examples of QSS models (<cit.> and Sec. <ref> here).

Consider a ray proceeding from event P_1 to P_2 and then from P_2 to P_3. Denote the redshifts acquired in the intervals [P_1, P_2], [P_2, P_3] and [P_1, P_3] = [P_1, P_2] ∪ [P_2, P_3] by z_12, z_23 and z_13, respectively. Then, from (<ref>),

1 + z_13 = (1 + z_12)(1 + z_23).

In particular, for a ray proceeding to the past from P_1 to P_2, and then back to the future from P_2 to P_1,

1 + z_12 = 1/(1 + z_21).

§ THE EXTREMUM REDSHIFT SURFACE

Consider a null geodesic that stays in the surface {ϑ, φ} = {π, constant}; it obeys (<ref>) and (<ref>) identically. On it, k^r ≠ 0 at all points, because with k^ϑ = k^φ = 0 the geodesic would be timelike wherever k^r = 0, so r can be used as a parameter. Assume the geodesic is past-directed, so that (<ref>) applies. Using (<ref>) and changing the parameter to r, we obtain from (<ref>)

dz/dr = N N,_t k^r/(1 + 2E).

Since N ≠ 0 from the no-shell-crossing conditions <cit.> and k^r ≠ 0, the extrema of z on such a geodesic occur where

N,_t ≡ Φ,_tr - Φ,_t ℱ,_r/ℱ = 0.

In deriving (<ref>), ϑ = π was assumed, but φ was an arbitrary constant. Thus, the set in spacetime defined by (<ref>) is 2-dimensional; it is the Extremum Redshift Surface (ERS) <cit.>.

From (<ref>) and (<ref>) we obtain

Φ,_t = r √(2M_0 r/Φ - k),

Φ,_tr = √(2M_0 r/Φ - k) + (M_0 r^3/Φ^2) t_B,_r.

Using (<ref>), (<ref>) and (<ref>) with ϑ = π, Eq. (<ref>) becomes

√(2M_0 r/Φ - k) (1 - r S,_r/S) = -(M_0 r^3/Φ^2) t_B,_r.

To avoid shell crossings, t_B,_r < 0 must hold at all r > 0 <cit.>, <cit.>,[Refs. <cit.> and <cit.> did not spell out the condition r > 0 in deriving the no-shell-crossing conditions, but it is implicitly there.] so the right-hand side of (<ref>) is non-negative. The left-hand side is positive with S given by (<ref>). Using (<ref>) for Φ, remembering that k < 0, and denoting

χ ≡ sinh^2 (η/2),

we obtain from (<ref>)

χ^4 + χ^3 = -k^3 [r t_B,_r/(4M_0 (1 - r S,_r/S))]^2.

With k < 0, (<ref>) is solvable for χ at any r, since its left-hand side is independent of r and can vary from 0 to +∞ while the right-hand side is non-negative.

Note that where t_B,_r = 0, Eqs. (<ref>) and (<ref>) imply χ = η = 0, i.e., at those points the ERS is tangent to the BB. Also, the ERS is tangent to the BB at r = 0 unless t_B,_r → ∞ as r → 0. (This would imply ρ,_r → ∞ as r → 0, an infinitely thin peak in density at r = 0 – an unusual configuration, but not a curvature singularity <cit.>.) The model considered here will have t_B,_r = 0 at r = 0.

In the limit S,_r = 0, (<ref>) reproduces the equation of the Extremum Redshift Hypersurface (ERH) of Ref. <cit.>.
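In practice, (<ref>) is solved pointwise. A small numerical sketch of my own (not the paper's code) for locating the ERS: at each r, find the unique root χ ≥ 0 of χ^4 + χ^3 = W, where W(r) is the right-hand side of (<ref>), then convert χ = sinh^2(η/2) into a time through (<ref>):

```python
# Sketch (mine): ERS locus t(r) along theta = pi, from chi^4 + chi^3 = W with
# W(r) = -k^3 [r t_B,_r / (4 M_0 (1 - r S,_r/S))]^2, then chi -> eta -> t.
import math

def chi_of_W(W, tol=1e-14):
    """Root chi >= 0 of chi^4 + chi^3 = W; the LHS is increasing for chi >= 0."""
    if W == 0.0:
        return 0.0
    chi = max(W, W ** 0.25)          # starts above the root for any W > 0
    for _ in range(200):
        step = (chi ** 4 + chi ** 3 - W) / (4 * chi ** 3 + 3 * chi ** 2)
        chi -= step
        if abs(step) < tol * max(chi, 1.0):
            break
    return chi

def ers_time(r, t_B, t_B_r, a, M0=1.0, k=-0.4):
    """t(r) on the ERS; t_B and t_B_r are callables for t_B(r) and t_B,_r(r)."""
    SrS = r / (a * a + r * r)                        # S,_r/S
    W = -k ** 3 * (r * t_B_r(r) / (4.0 * M0 * (1.0 - r * SrS))) ** 2
    eta = 2.0 * math.asinh(math.sqrt(chi_of_W(W)))   # chi = sinh^2(eta/2)
    M, twoE = M0 * r ** 3, -k * r * r
    return t_B(r) + M * (math.sinh(eta) - eta) / twoE ** 1.5
```

Newton's iteration in chi_of_W converges monotonically because χ^4 + χ^3 is increasing and convex for χ > 0 and the starting guess lies above the root.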
Equation (<ref>) was derived for null geodesics proceeding along ϑ = π, where ℱ,_r/ℱ = S,_r/S > 0. With S given by (<ref>) we have

F_1 ≡ 1/(1 - r S,_r/S) = (r/a)^2 + 1 > 1,

so, at a given r, the ERS has a greater η (and so a greater t - t_B) than the corresponding ERH of the L–T model. Also, the extrema of z along the dipole maximum occur at a greater χ (and thus greater t - t_B) when a is smaller. This will be illustrated by Fig. <ref> in the next section.

Conversely, for a ray proceeding along the dipole minimum axis (where ϑ = 0), the factor F_1 is replaced by

F_2 ≡ 1/(1 + r S,_r/S) = (a^2 + r^2)/(a^2 + 2r^2) < 1,

and so the ERS has a smaller t - t_B than the ERH in L–T. Also here, a smaller a has a more pronounced effect.

Extrema of redshift also exist along directions other than ϑ = 0 and ϑ = π, as will be demonstrated by numerical examples in Sec. <ref>, but a general equation defining their loci remains to be derived.

§ A GENERALISED MODEL 2 OF REF. <cit.>

Along each past-directed null geodesic, the mass density is calculated using (<ref>) – (<ref>). As explained in Sec. <ref>, in any model the density at the LSH must be the same as in (<ref>). So, the instant of crossing the LSH is that at which the density becomes equal to (<ref>).

The starting point for this paper is Model 2 of Ref. <cit.>, whose functions M(r), E(r) and t_B(r) are given by (<ref>), (<ref>) and (<ref>) – (<ref>). In that model, the strongest blueshift between the LSH and the present epoch was

1 + z_maxb = 1.36167578 × 10^-5.

It was calculated by the rule (<ref>). The first factor,

1 + z_ols2 = 1.07858890707746014 × 10^-7,

was the blueshift between the LSH and r = 0, achieved on a path that will be called "Ray A". The second factor,

1 + z_po2 = 126.246039921,

was the redshift between r = 0 and the present epoch on a path going off from the same initial point as Ray A, but to the future; it will be called "Ray B".

On Model 2, axially symmetric QSS deformations given by P = Q = 0, (<ref>) and (<ref>) are superposed. Numerical experiments with rays proceeding along ϑ = π were done to improve on (<ref>) as much as possible. As explained under (<ref>), a smaller a increases the region under the ERS. So, with the parameters of (<ref>), a^2 was gradually changed from 10 through 1, 1/10, 10^-2, 10^-3 to 10^-4. For each a, the quantity

t(0) - t_B(0) ≡ Δt_c

was chosen such as to obtain a minimum 1 + z between the LSH and r = 0. This led to smaller 1 + z on Ray A only down to a^2 = 0.001. With a^2 still smaller, the ray either flew over the BB hump and crossed the LSH in the Friedmann region with a large z > 0, or dipped under the LSH still within the QSS region with a small z > 0. No intermediate value of Δt_c led to z < 0 (but this discontinuity could possibly be overcome with greater numerical precision). The best result achieved with a^2 = 0.001 was

1 + z_2 = 8.87933914173189009 × 10^-8.

In the next experiments, the slope of the straight segment of the BB profile was gradually decreased, i.e., x_0 was increased from 2 × 10^-13 through 1 × 10^-12 to 1 × 10^-11, with the other parameters unchanged. For each value of x_0, the Δt_c leading to the smallest 1 + z was determined. The best result achieved at this stage was

1 + z_1 = 6.74014204449235876 × 10^-8.

Varying A_1, B_1, B_0, and lowering the degree of (<ref>) to 4 and to 2, led to nothing better than (<ref>). So, this is taken as the best improvement over the L–T model achieved using an axially symmetric QSS deformation.
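The fine-tuning of Δt_c described above is a one-dimensional minimisation repeated for every parameter set. A schematic version of such a loop is sketched below; it is mine, not the paper's code, and `one_plus_z_rayA` is a stand-in for a full ray tracer (e.g., the axial-ray sketch of Sec. <ref>). The coarse scan before the refinement is essential because of the discontinuity in z reported above.

```python
# Sketch (mine): scan the emission offset dt_c = t(0) - t_B(0) for the value
# minimising 1 + z between the LSH and r = 0 on the axial Ray A.

def best_dtc(one_plus_z_rayA, dtc_lo, dtc_hi, n_coarse=200, n_refine=40):
    """Coarse scan to bracket the minimum, then golden-section refinement."""
    golden = 0.5 * (3.0 - 5.0 ** 0.5)                  # ~0.382
    pts = [dtc_lo + (dtc_hi - dtc_lo) * i / n_coarse for i in range(n_coarse + 1)]
    vals = [one_plus_z_rayA(p) for p in pts]
    i = min(range(len(vals)), key=vals.__getitem__)
    lo, hi = pts[max(i - 1, 0)], pts[min(i + 1, n_coarse)]
    for _ in range(n_refine):                          # golden-section step
        m1 = lo + golden * (hi - lo)
        m2 = hi - golden * (hi - lo)
        if one_plus_z_rayA(m1) < one_plus_z_rayA(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```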
Figure <ref> shows Ray A, with 1 + z_1 given by (<ref>), and the corresponding ERS and BB profiles. Curve 1 is the ERH profile of Model 2 from Ref. <cit.>, and Curve 2 is the ERS profile with a^2 = 10^-5. As stated above, a smaller a gives more space under the ERS, but when it is too small, it creates a discontinuity in z that prevents z < 0 altogether.

The ERS profile has two branches on each side of r = 0, so some rays will intersect it four times, and z along them will have two local maxima and two local minima. Examples will appear in Sec. <ref>.

On Ray B, the upward 1 + z is

1 + z_1 up = 6.39228356761256666 × 10^-3.

Thus, the total (1 + z) between the LSH and now is

1 + z_2 = (1 + z_1)/(1 + z_1 up) = 1.05441849899 × 10^-5.

This fits the lowest-frequency GRBs, for which <cit.>

1 + z_max ≈ 1.689 × 10^-5,

with a wider margin than (<ref>), so the BB hump can now be lowered to yield (1 + z) closer to (<ref>). The easiest way to do this is to decrease B_0 (see Fig. <ref>). Then Δt_c is fine-tuned to make (1 + z_ols) on Ray A as small as possible (1 + z_ols gets larger when B_0 gets smaller, so there is a limit on decreasing B_0). The B_0 that allows a sufficiently small (1 + z)_ols is

B_0 = 0.000091,

and then the smallest 1 + z on Ray A is

1 + z_ols3 = 1.11939135405414447 × 10^-7.

For Ray B corresponding to Ray A of (<ref>) (proceeding along the dipole minimum), the 1 + z between r = 0 and the present epoch is

1 + z_3 up = 7.11151887923544557 × 10^-3,

so the (1 + z) between the LSH and now along Rays A and B is

1 + z_3 = (1 + z_ols3)/(1 + z_3 up) = 1.574 × 10^-5,

and the present observer is at

r = r_obs = 0.88983013520392229.

This is larger than r_O2 = 0.88705643159726955 in Model 2 of Ref. <cit.>. Thus, a Szekeres deformation superposed on an L–T model results in moving the observer further from the radiation source, which leads to a smaller angular diameter of the source seen in the sky; see Sec. <ref>.

Rays A and B referred to above have

Δt_c = 0.00000863099500 NTU.

The corresponding results for rays propagating in the opposite direction, i.e., along the dipole minimum between the LSH and r = 0 (Ray C), and along the dipole maximum between r = 0 and the observer (Ray D), are as follows. The best value of 1 + z on Ray C is

1 + z_1 dip min = 1.73185662921682137 × 10^-7,

achieved with

Δt_c = 0.00000981550000 NTU.

Then, 1 + z calculated toward the future along the dipole maximum is

1 + z_2 dmax up = 7.26948511585012724 × 10^-3.

So, the 1 + z between the LSH and the present time is

1 + z_4 = (1 + z_1 dip min)/(1 + z_2 dmax up) = 2.382 × 10^-5.

The present time was reached by the ray at

r_obs = 0.88935629118490100.

Thus, on this ray 1 + z is larger while r_obs is smaller.

In each case the numerical calculation overshot the present time. For the ray that produced (<ref>) and (<ref>), the value of t at the endpoint was

t_end 1 = 5.75302117391131287 × 10^-11 NTU,

and for the ray that produced (<ref>) and (<ref>) it was

t_end 2 = 9.65282969667925857 × 10^-10 NTU.
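The quoted composite blueshifts can be cross-checked directly from the composition rule (<ref>), with the future-directed legs (Rays B and D) entering through (<ref>); the check below is mine, added for the reader's convenience:

```python
# Cross-check (not from the paper) of the composite 1 + z values quoted above.
z_ols3_p1, z_3up_p1 = 1.11939135405414447e-7, 7.11151887923544557e-3
z_1dmin_p1, z_2dmax_p1 = 1.73185662921682137e-7, 7.26948511585012724e-3

print(z_ols3_p1 / z_3up_p1)     # ~1.574e-5 -> 1 + z_3 (Rays A and B)
print(z_1dmin_p1 / z_2dmax_p1)  # ~2.382e-5 -> 1 + z_4 (Rays C and D)
print(1.689e-5)                 # lowest-frequency GRB bound, for comparison
```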
§ NONAXIAL PLANE RAYS

So far, rays crossing the symmetry axis of the t = constant spaces in the metric (<ref>) – (<ref>) were considered. Now, we will consider nonaxial rays (ϑ will no longer be 0 or π all along the ray) propagating in a hypersurface of constant φ. By (<ref>), J_0 = 0 along them, and they obey (<ref>) identically. Because of the axial symmetry of the model, the image will be the same for every φ.

We will consider pencils of rays flying through the vicinity of the BB hump shown in Fig. <ref> and reaching the present observer situated in three locations:

Observer I: at

(t, r, ϑ)_I = (t_end 1, r_obs, 0),

with r_obs given by (<ref>). This is the endpoint of Ray B.

Observer II: at

(t, r, ϑ)_II = (t_end 2, r_obs, π),

with r_obs given by (<ref>). This is the endpoint of Ray D.

Observer III: at

(t, r, ϑ)_III = (0, r_p, π/2),

where r_p is the arithmetic mean of the r_obs values given in (<ref>) and (<ref>). The ϑ_III is at the dipole equator on the boundary of the Szekeres region. One ray reaching Observer III will have ϑ = π/2 throughout the Friedmann region.

The equations to be integrated are, from (<ref>) – (<ref>):

dt/dλ = k^t,

dk^t/dλ = -N N,_t (k^r)^2/(1 + 2E) - ΦΦ,_t (k^ϑ)^2/ℱ^2,

dϑ/dλ = k^ϑ,

dk^ϑ/dλ = -(2Φ,_t/Φ) k^t k^ϑ + (sin ϑ S,_r N/(S Φ (1 + 2E))) (k^r)^2 - (2N/Φ) k^r k^ϑ + (sin ϑ (S^2 - S_b^2)/(2 S S_b ℱ)) (k^ϑ)^2,

dr/dλ = k^r,

k^r = ± (√(1 + 2E)/N) √ξ,

ξ ≡ (k^t)^2 - (Φ k^ϑ/ℱ)^2.

The initial values for (t, r, ϑ) will be at the observer positions specified above, the initial value for k^t is (<ref>), and the rays will be calculated backward in time from there. With k^φ = 0, Eq. (<ref>) reduces to

(k_o^ϑ)^2 ≤ (ℱ_o/Φ_o)^2.

As before, the equality occurs when k^r_o = 0.

For observers in the Friedmann region, ℱ_o = 1, as explained under Eq. (<ref>). For Observer I, Φ_o was calculated by the program that found (<ref>); it is

(Φ_o)_obs 1 = 0.40202832540890049.

The angle α between two rays at an observer can be calculated as follows. The direction of a ray is determined by the unit spacelike vector given by <cit.>

n^α = u^α - k^α/(k^ρ u_ρ),

where k^α is the tangent vector to the ray and u^α is the velocity vector of the observer; n^α u_α = 0. Since g_αβ n^α n^β = -1, the angle between two directions obeys

cos α = -g_αβ n^α_1 n^β_2.

Since u^α = δ^α_0 everywhere, and k^t = -1 at the observer by (<ref>), the components of a general n^α at the observer are

n^α_o = (0, k^r_o, k^ϑ_o, k^φ_o).

Using (<ref>), (<ref>) and assuming k^φ_o = 0, we then obtain for the angle α_RS between rays R and S

cos α_RS = √(1 - (k^ϑ_Ro Φ_o/ℱ_o)^2) √(1 - (k^ϑ_So Φ_o/ℱ_o)^2) + k^ϑ_Ro k^ϑ_So (Φ_o/ℱ_o)^2.

Both k^ϑ_o must obey (<ref>), so |cos α_RS| ≤ 1 and an α_RS obeying (<ref>) exists.

When Ray R is axial (k^ϑ_Ro = 0) and the observer lies in the Friedmann region, where ℱ_o = 1, (<ref>) becomes

cos α_RS = √(1 - (k^ϑ_So Φ_o)^2)  ⟹  sin α_RS = k^ϑ_So Φ_o.

This equation can be used to estimate the angular radius of a radiation source in the sky; then α is the angle between the direction of the central ray (going along the symmetry axis for Observers I and II) and the direction of the ray that grazes the edge of the source. The latter can be approximately determined in numerical experiments.

The redshift in the Friedmann background between the LSH and the present time, calculated numerically along a null geodesic, is

1 + z_b = 951.83531161489873.

This differs slightly from (<ref>), which was calculated from 1 + z^b_LS = R_now/R_LS, where R is the Friedmann scale factor, and also from 1 + z_comp = 951.91469714961829 calculated in Ref. <cit.>. The differences are caused by numerical inaccuracies (in particular, a different numerical step was used in <cit.>). Since all null geodesics in the following will be calculated numerically, (<ref>) will be taken as the reference value.

The figures in this section show rays that stay over or near the BB hump for some of the flight time. The initial value of k^r for each ray follows from (<ref>) after the value of k_o^ϑ is chosen. At all initial points k^r < 0, but ξ was monitored along each ray, and when it went down to or below zero, the sign of k^r was reversed.
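For quick reference, the conversion (<ref>) from k^ϑ_o to a viewing angle is a one-liner; in the sketch below (mine, not the paper's code), the value of k^ϑ_o is a made-up illustrative number, not an entry from the tables:

```python
# Viewing angle from sin(alpha) = k^theta_o * Phi_o, for an axial reference ray
# and an observer in the Friedmann region (F_o = 1).  The k^theta_o used here is
# a hypothetical illustrative value, NOT a table entry from the paper.
import math

Phi_obs1 = 0.40202832540890049          # Phi_o at Observer I, Eq. (<ref>)

def viewing_angle_deg(k_theta_o, Phi_o):
    return math.degrees(math.asin(k_theta_o * Phi_o))

print(viewing_angle_deg(0.042, Phi_obs1))   # ~0.967 deg: the scale of the source radius
```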
[Note that ξ < 0 is impossible on a null geodesic with k^φ = 0, by (<ref>). But it can happen because of numerical inaccuracy. If ξ < 0 at step n, then for this step it was replaced by (-ξ); then it should begin to grow. Along some rays the sign reversals of ξ in the vicinity of the smallest r had to be done many times.]

§.§ Rays reaching Observer I

Table <ref> lists the parameters of exemplary nonradial rays received by Observer I, with the angular radii calculated by (<ref>). The angular radius of the whole BB hump (Ray 9 in the table) is here somewhat smaller than the 1.00097^∘ in the L–T/Friedmann model of Ref. <cit.>. Decreasing this radius was one of the aims of replacing the L–T region with Szekeres.

In Figs. <ref> and <ref> the coordinates are

X = -r cos ϑ,  Y = r sin ϑ.

Figure <ref> shows the projections of the rays from Table <ref> on a surface of constant t along the flow lines of the dust in a neighbourhood of the QSS region. Figure <ref> is a close-up view of the vicinity of the BB hump. The dotted circle is at r = r_b, the r-coordinate of the edge of the BB hump. The cross marks the center r = 0 of the dotted circle; the arrow on the horizontal arm of the cross in Fig. <ref> points in the direction of the Szekeres dipole maximum. The large dots in Fig. <ref> mark the points where the rays intersect the LSH. The endpoints of the rays are where the numerical calculation determined their crossing of the BB. Figures <ref> and <ref> are nearly the same as the corresponding ones for the L–T/Friedmann model in Ref. <cit.>; there are only small quantitative differences between them. They are shown here to facilitate comparisons with the images of the rays reaching Observers II and III further on.

Ray 0 is not included in the figures because, at their scale, it would coincide with the Y = 0 axis. It is included in the table in order to show how 1 + z_LSH abruptly jumps from the near-zero value (<ref>) on an axial ray to a large positive value on a ray that is only slightly nonaxial.

The redshifts initially increase with the viewing angle. The maximum z_LSH is achieved on Ray 8, inside the image of the source, not at its edge, and it is larger than the background value (<ref>). The same thing happened in the L–T/Friedmann model <cit.>, and it will occur again for Observers II and III further on in this paper. Ray 9 just grazes the world-tube r = r_b, and z_LSH on it is close to (<ref>). Its k^ϑ_o was determined by trial and error: for each ray, the program that calculated its path determined the minimum r ≡ r_cl along it, and Ray 9 is the one for which r_cl - r_b = 0.0000000000735095811 was reasonably small.

The rays abruptly change their direction every time they come near to the surface r = r_b. The change is sharper on the second intersection with r = r_b, where the ray is closer to the BB. When the rays travel over the BB hump farther from its edge, the deflections are smaller.

The angle of deflection depends on the interval of t that the ray spends near the edge of the BB hump. Ray 1 meets r = r_b nearly head-on and does not strongly change direction on the first encounter. On the second encounter, it is closer to the BB and is forced to bend around more.

The other rays meet the r = r_b surface at smaller angles than Ray 1, so they stay near it for longer times. For Rays 3, 4 and 5, this causes a much stronger deflection than for Ray 1. For Rays 6 – 8, another effect prevails: they fly farther from the axis, so they approach the BB at larger t - t_B and stay over it for a shorter time; therefore the deflection angle decreases again.
Ray 9 does not enter the Szekeres region but only touches it, so it propagates almost undisturbed, as in the Friedmann region.

Figures <ref> and <ref> show only those rays for which k^ϑ_o > 0. The images of the rays with k^ϑ_o < 0 are mirror reflections of those shown. In fact, since ϑ = 0 is the axis of symmetry, the image will be the same for every φ, so one should imagine the complete collection of constant-φ null geodesics by rotating Figs. <ref> and <ref> around the ϑ = 0 axis.

§.§ Rays reaching Observer II

Table <ref> and Fig. <ref> are analogues of Table <ref> and Fig. <ref> for Observer II. The analogue of Ray n from Table <ref> is Ray 10 + n in Table <ref>. The k^ϑ_o are the same as in Table <ref>, with the exception of Ray 19 – see below for an explanation. The angular radii are slightly smaller here because Φ_o for Observer II is slightly smaller than (<ref>):

(Φ_o)_obs 2 = 0.40181424093371831.

But at the level of precision used in the tables, the angular radii for Rays 11 – 18 are the same as those for Rays 1 – 8. The analogue of Ray 0 is not included.

Ray 19 grazes the edge of the Szekeres region, so its k^ϑ_o determines the angular radius of the whole source by (<ref>). Since r_obs is smaller here, the angular radius for Ray 19 is larger than for Ray 9; it is

α_II = 0.9681^∘.

The values of 1 + z_LSH in Table <ref> are different from those in Table <ref>, but the general pattern is the same: z_LSH initially increases with the viewing angle, achieves a maximum inside the image of the source, then decreases to the background value at its edge. The maximum is achieved at the same k^ϑ_o as before, on Ray 18.

§.§ Rays reaching Observer III

Observer III, unlike Observers I and II, is not located on the axis of symmetry, so the (past-directed) rays going off from her position with k^ϑ_o < 0 will not be mirror images of those with k^ϑ_o > 0. Therefore, these two groups of rays are shown in separate tables and separate figures. Table <ref> and Fig. <ref> contain the rays for which k^ϑ_o ≤ 0; the rays in Table <ref> and Fig. <ref> have k^ϑ_o > 0. The set of values of |k^ϑ_o| is the same as in Table <ref> and Fig. <ref>. The analogues of Ray n from Table <ref> are Ray 20 + n in Table <ref> and Ray 30 + n in Table <ref>.

The value of Φ_o here is between the previous ones,

(Φ_o)_obs 3 = 0.40192128311507536,

while t_o = 0 does not differ significantly from (<ref>) and (<ref>), so the angular radii would also be intermediate; they are not listed in the tables.

The most conspicuous difference from the previous cases is in Ray 20, which proceeds along ϑ = π/2 in the Friedmann region: it is deflected toward larger ϑ on entry into the Szekeres region, and it bends oppositely to all the other rays on leaving it. Rays 21 and 22 get deflected so strongly that they cross the line ϑ = π/2, 3π/2 well inside the Szekeres region, unlike their analogues, Rays 1, 2, 11 and 12, which cross the ϑ = 0, π lines just before leaving the Szekeres region. Beginning with Ray 23, the paths of the rays become similar (though different in numerical detail) to the corresponding ones for Observers I and II.

The pattern of 1 + z_LSH across the image of the source is here different from those for Observers I and II: with decreasing k^ϑ_o < 0, the redshift achieves a minimum on Ray 22, then a maximum larger than in the background on Ray 28; it then drops to the background value. One ray in this family (not shown) will pass through r = 0, but with ϑ ≠ 0 and ϑ ≠ π, so it will not have z = -1 at the BB, for the reason indicated under Eq. (<ref>).
See also Ref. <cit.>, where rays passing through r = 0 were numerically integrated for the same kind of Szekeres dipole (but with a different BB profile and with a^2 = 0.1) – only those proceeding along ϑ = 0, π had z ≈ -1 near the BB.

For rays with k^ϑ_o > 0, the pattern of 1 + z_LSH is similar to that for Observer II: there is only the maximum, on Ray 38. However, the values of 1 + z_LSH differ, some of them substantially, from their counterparts in Table <ref>.

The paths of the rays are similar to those for Observers I and II, but the angle of deflection is smaller for each ray here. Also, the rays bend away from the X = 0 axis near the Y = 0 line – this effect was not visible for Observer I and barely noticeable for Observer II.

§ REDSHIFT PROFILES ALONG NONAXIAL NULL GEODESICS

The z-profiles along Rays 1 – 6 and 9 are shown in Figs. <ref> and <ref>; they are similar to those in the L–T/Friedmann model <cit.>. They show that analogues of the ERS (call them ERS') exist also along nonaxial rays. Figure <ref> shows the z(r) relation for Ray 3 in a neighbourhood of r = r_b; it is a key to reading Fig. <ref>. In segment (a) of the ray, z increases from 0 at the observer to a local maximum at r ≈ r_b, where the (past-directed) ray intersects the outer branch of the ERS' for the first time. Then, in segment (b), z decreases to a local minimum at a slightly smaller r, where the ray intersects the inner branch of the ERS' for the first time. Farther along the ray, in segment (c), z increases until it reaches the second local maximum at the second intersection of the ray with the inner branch of the ERS'. Then, in segment (d), z decreases up to the second intersection of the ray with the outer branch of the ERS', where it achieves its second and last local minimum. From then on, in segment (e), z keeps increasing, up to ∞ achieved at the BB.

Along Rays 1 and 2 in Fig. <ref>, the second minimum of z is smaller than the first maximum, so those z(r) curves self-intersect.

§ FITTING THE RADIATION SOURCES IN THE CELESTIAL SPHERE

Imagine a radiation source to be a disk on the celestial sphere of angular radius ϑ_0. How many such disks would fit into the celestial sphere at the same time?

An equivalent question is: how many non-overlapping circles of a given radius can be drawn on a sphere of a given radius? A rough answer would be obtained by dividing the surface area of the sphere by the surface area inside the circle. But this would be an overestimate – the circles cannot cover the sphere completely. A better approximation is to inscribe each circle into a quadrangle of arcs of great circles on the sphere. Such figures cannot cover the sphere either, but this method takes into account some of the area outside the circles. Details of the calculation are presented in the Appendix. The area of the sphere divided by the area of the quadrangle is

N = π/arcsin(sin^2 ϑ_0).

Taking ϑ_0 = 0.5^∘, the current resolution of the GRB detectors (see footnote <ref>), we obtain

N_0.5 ≈ 41,254.

With the ϑ_0 = 0.96767^∘ of Table <ref>, we obtain

N_0.96767 ≈ 11,014.

Finally, with ϑ_0 = 0.9681^∘, as in (<ref>), we obtain

N_0.9681 ≈ 11,005.
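These three numbers follow from (<ref>) by direct evaluation; a two-line check (mine, not part of the paper's calculations):

```python
# Evaluating N = pi/arcsin(sin^2 theta_0) for the three angular radii above.
import math

def n_disks(theta0_deg):
    t = math.radians(theta0_deg)
    return math.pi / math.asin(math.sin(t) ** 2)

for theta0 in (0.5, 0.96767, 0.9681):
    print(theta0, int(n_disks(theta0)))   # -> 41254, 11014, 11005
```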
It is instructive to compare these numbers with the number of GRBs detected in observations. This author was not able to get access to a definitive answer, but here is an estimate based on partial information. The BATSE (Burst and Transient Source Explorer) detector, which worked in the years 1991 – 2000, discovered 2704 GRBs <cit.> (it was de-orbited in 2000 <cit.>). Assuming the same rate of new discoveries, 8112 GRBs should have been detected between 1991 and now – still fewer than (<ref>).

When the angular radius is divided by f, the number of possible sources in the sky should be multiplied by f^2. Equation (<ref>) approximately confirms this, since for small ϑ_0 we have sin ϑ_0 ≈ ϑ_0 ≈ arcsin ϑ_0.

§ POSSIBLE AND NECESSARY IMPROVEMENTS OF THE MODEL

The model presented here accounts for the lowest frequency of the radiation in the observed GRBs (a model of the highest-frequency GRBs was discussed in Ref. <cit.>). The angular radius of the radiation sources seen by the present observer is twice as large as the current observations allow (nearly 1^∘ in the model vs. 0.5^∘, the resolution of the GRB detectors; see footnote <ref>). In order to decrease this angle, the BB hump that emits the radiation should be made narrower or lower; in the second case it would be farther away from the observer seeing the high-frequency flash.

The BB profile chosen in this paper cannot be the limit of improvement. The first attempt to explain the GRBs using a cosmological blueshift resulted in a model <cit.> whose hump had the height A_0 + B_0 = 0.026 NTU and the width A_1 + B_1 = 0.108. By experimenting with the parameters of the hump, the numbers in (<ref>) were achieved; i.e., the height was decreased ≈ 206 times and the width 7.2 times. The result of such a blind search cannot be the best possible. In particular, other classes of shapes of the BB hump should be tried.

To get a small 1 + z, the BB profile should be such that the blueshifted ray spends as much time as possible traveling above the LSH but below the ERS. As follows from (<ref>) and (<ref>), the room under the ERS becomes larger when t_B,_r is larger and when a is smaller. The problem with small a was described in Sec. <ref>, but it might be overcome using a greater numerical precision. A larger t_B,_r tends to make the BB hump higher. In order to keep the hump acceptably low, the large t_B,_r has to be limited to a short interval of r – this is where the steep slope of the hump in Fig. <ref> came from.

A serious limitation is the fact, mentioned in Sec. <ref>, that the ERS is tangent to the BB at r = 0. If this could be overcome, the rays would stay in the blueshift-generating region (below the ERS) for a longer time interval, and so the required 1 + z range could be achieved with a lower or narrower hump.

Further optimizations are possible. For example, the function E(r) here has the Friedmann shape (<ref>) throughout the Szekeres region – obviously, one should check what happens when it has other shapes. Friedmann backgrounds other than the one of Sec. <ref> should be tested. Szekeres dipoles other than (<ref>) should also be tested, in particular non-axially-symmetric ones. Carrying out such tests is laborious – it involves finding, by numerical shooting, the minimum of a function of several variables (in this paper these were 7 variables: the five in (<ref>), the a of (<ref>) and the Δt_c of (<ref>)).

Similarly to the L–T model of Ref. <cit.>, the model presented here implies too-long durations for the high-frequency flashes and for their afterglows. This is because, in axially symmetric models, once the observer and the source are placed on the symmetry axis, they stay there forever – the source does not drift <cit.>.
The only changes of the observed frequency and intensity may then occur because the observer receives rays emitted from different points of the BB hump along the same line of sight, so the changes occur on the cosmological time scale and are much slower than in the observed GRBs (see Ref. <cit.> for the numbers).

A nonsymmetric Szekeres model offers a new possibility. In such a model there also exist two opposite directions along which radiation is strongly blueshifted <cit.>. However, the cosmic drift <cit.> will cause an observer who was initially in the path of one of those preferred rays to be off it after a while. The time scale of this process should be short, as a consequence of the very large distance between the source and the observer and of the discontinuous change from blueshift to redshift as soon as the strongly blueshifted ray misses the observer.

One solution of the duration problem has already been tested and will be submitted for publication soon. If there is another QSS region between the radiation source and the observer, then the cosmic drift in the intervening QSS region will cause the highest-frequency ray to miss the observer after 10 minutes or less. This satisfactorily solves the problem of the duration of the high-frequency flash, but not the problem of the duration of the afterglow. The latter still awaits a solution.

§ SUMMARY AND CONCLUSIONS

In Ref. <cit.>, the existence and properties of blueshifts in exemplary simple quasispherical Szekeres models were investigated. Using that knowledge, it was investigated in the present paper whether a QSS mass dipole superposed on an L–T background would allow better mimicking of gamma-ray bursts by cosmological blueshifting than in Ref. <cit.>, where pure L–T models were used.

The axially symmetric QSS model was introduced in Secs. <ref> and <ref>. The QSS region is matched to a negative-spatial-curvature Friedmann background (Sec. <ref>), chosen for correspondence with earlier papers by this author <cit.>. After presenting definitions and preliminary information in Secs. <ref>, <ref> and <ref>, in Sec. <ref> the parameters of the QSS model are chosen such that at present the highest frequency of the blueshifted radiation agrees with the lowest frequency of the observed GRBs (this agreement requires that the blueshift between the last scattering and the present time obeys 1 + z ≤ 1.689 × 10^-5 <cit.>). The introduction of the Szekeres dipole has the consequence that the required 1 + z is achieved with a lower hump in the BB profile, which is thus at a greater distance from the observer than in the L–T model. In Sec. <ref>, the paths of nonaxial light rays reaching three different present observers are presented. The observers are placed in prolongation of the mass-dipole maximum axis, of the dipole minimum axis, and of the dipole equator. The distributions of the observed redshift across the image of the source are different for each observer, and the angular radii of the source are between 0.96767^∘ and 0.9681^∘. This is nearly twice as much as the current GRB observations allow, but the model has the potential to be improved (see Sec. <ref>). In Sec. <ref>, the redshift profiles along nonaxial rays were calculated in order to show that extrema of redshift also exist along them. In Sec. <ref>, it was estimated that, with the angular radii of the radiation sources being between 0.96767^∘ and 0.9681^∘, approximately 11,000 such sources could be simultaneously fitted into the sky of the present observer.
Finally, possible further improvements of the model were discussed in Sec. <ref>.

The models of generating the high-frequency radiation flashes discussed here and in Ref. <cit.> are subject to two kinds of tests:

1. In the future, the observers should be able to resolve the fuzzy disks they now see as GRB sources (see footnote <ref>) and measure the distribution of radiation frequencies and intensities across them. Then it will be possible to compare those distributions with model predictions. A model that predicted such a distribution correctly could then be used to get information about the sources.

2. If the gamma flashes are generated simultaneously with the CMB radiation, as proposed here and in Ref. <cit.>, then they are observed now as short-lived because their source comes into and out of the observer's view, but it has existed there since the last-scattering epoch. In this case, the central high-frequency ray should be surrounded by rays with positive redshifts smoothly blending into the CMB background at the edge of the source image, as shown in the tables in Sec. <ref>. But if a source of the radiation flash originates later than the last scattering, then it is independent of the CMB. It should black out all CMB rays within some angle around the central ray, and the redshift profile across the image of the source would not need to continuously match the CMB at the edge.

This author does not wish to question the validity of the GRB models proposed so far. The motivation for this work was the following: the history of science teaches us that if a well-tested theory predicts a phenomenon, then the prediction has to be taken seriously and checked against experiments and observations. Since general relativity clearly predicts that some of the light generated during last scattering might reach us with a strong blueshift, the consequences of this prediction have to be worked out and submitted to tests. In trying to accommodate blueshifts, the suspicion fell on the GRBs because it is generally agreed that at least some of their sources lie billions of years to the past from now <cit.>. The BB humps discussed here would lie about twice as far, at ≈ 13.6 Gyr to the past, by (<ref>). For the relativity theory, it would be interesting to know whether at least some of the observed GRBs are powered by the mechanism discussed here.

§ HOW MANY CIRCLES OF A GIVEN RADIUS CAN BE DRAWN ON A SPHERE OF A GIVEN RADIUS?

Imagine a circle K drawn on a sphere S of radius a, and a cone that intersects S along K and has its vertex at the center of S; see Figs. <ref> and <ref>. Let the opening angle of the cone be ϑ_0. Now imagine a square pyramid circumscribed on this cone. The pyramid intersects S along the curvilinear quadrangle shown in thicker lines in Fig. <ref>. The part of S inside the quadrangle has a surface area 8 times the surface area inside the curvilinear triangle ABC; see also Fig. <ref>.

Suppose the center of the sphere is at x = y = z = 0, so the equation of the sphere is x^2 + y^2 + z^2 = a^2, and the axis of the cone goes along the z axis. The metric of the sphere in the (x, y) coordinates is

dx^2 + dy^2 + dz^2 = [(a^2 - y^2) dx^2 + 2xy dx dy + (a^2 - x^2) dy^2]/(a^2 - x^2 - y^2),

and so the surface element of the sphere is

√g dx dy = (a/√(a^2 - x^2 - y^2)) dx dy.

The side AC of the triangle lies in the plane x = 0, and y on it changes from 0 to a sin ϑ_0. The side AB lies in the plane y = x.
The y-coordinate of the point B is

y_B = a sin ϑ_0/√(1 + sin^2 ϑ_0),

as is easy to calculate knowing that this point lies simultaneously on the sphere x^2 + y^2 + z^2 = a^2, in the plane y = x, and in the plane z = y cot ϑ_0 that contains the right face of the pyramid. The auxiliary point D has the same y-coordinate as B. The arc BC (which is part of the intersection of the right face of the pyramid with the sphere) obeys the equation

x = √(a^2 - y^2/sin^2 ϑ_0) ≡ x_BC(y).

The surface area of the triangle ABC is thus

S_ABC = ∫_0^{y_B} dy ∫_0^{y} a dx/√(a^2 - x^2 - y^2) + ∫_{y_B}^{a sin ϑ_0} dy ∫_0^{x_BC(y)} a dx/√(a^2 - x^2 - y^2)
      = ∫_0^{y_B} a arcsin(y/√(a^2 - y^2)) dy + ∫_{y_B}^{a sin ϑ_0} a arcsin(x_BC(y)/√(a^2 - y^2)) dy.

The two integrals in (<ref>) are

S_I = a^2 ϑ_0 sin ϑ_0/√(1 + sin^2 ϑ_0) - (1/2) a^2 arcsin(sin^2 ϑ_0),

S_II = -a^2 ϑ_0 sin ϑ_0/√(1 + sin^2 ϑ_0) + a^2 arcsin(sin^2 ϑ_0).

So, the area of the triangle ABC is (1/2) a^2 arcsin(sin^2 ϑ_0), and the area of the quadrangle in Fig. <ref> is

S_quad = 4 a^2 arcsin(sin^2 ϑ_0).

(When ϑ_0 = π/2, this gives the obvious result 2πa^2.)

Hints for the less-trivial parts of calculating the integrals: in S_I, change the variables by arcsin(y/√(a^2 - y^2)) = w and integrate by parts to get rid of the factor w under the integral. In S_II, change the variables by y = a sin ϑ_0 sin u, then integrate by parts to get rid of the arcsin under the integral, and finally use the identity arctan λ = arcsin(λ/√(1 + λ^2)).

Now an approximate answer to the question in the title can be given. The quadrangles will not cover the whole surface of the sphere, but by dividing the surface area of the sphere, 4πa^2, by S_quad, we obtain an upper bound on the number of nonoverlapping circles that can be drawn on the sphere; it is (<ref>).

In deriving the geodesic equations, the computer-algebra system Ortocartan <cit.> was used.

[Lema1933] G. Lemaître, L'Univers en expansion [The expanding Universe]. Ann. Soc. Sci. Bruxelles A53, 51 (1933); English translation: Gen. Relativ. Gravit. 29, 641 (1997); with an editorial note by A. Krasiński: Gen. Relativ. Gravit. 29, 637 (1997).
[Tolm1934] R. C. Tolman, Effect of inhomogeneity on cosmological models. Proc. Nat. Acad. Sci. USA 20, 169 (1934); reprinted: Gen. Relativ. Gravit. 29, 935 (1997); with an editorial note by A. Krasiński: Gen. Relativ. Gravit. 29, 931 (1997).
[Szek1975] P. Szekeres, A class of inhomogeneous cosmological models. Commun. Math. Phys. 41, 55 (1975).
[Szek1975b] P. Szekeres, Quasispherical gravitational collapse. Phys. Rev. D12, 2941 (1975).
[Elli1971] G. F. R. Ellis, Relativistic cosmology. In: Proceedings of the International School of Physics "Enrico Fermi", Course 47: General Relativity and Cosmology, edited by R. K. Sachs. Academic Press, 1971, pp. 104 – 182; reprinted: Gen. Relativ. Gravit. 41, 581 (2009); with an editorial note by W. Stoeger: Gen. Relativ. Gravit. 41, 575 (2009).
[PlKr2006] J. Plebański and A. Krasiński, An Introduction to General Relativity and Cosmology. Cambridge University Press, 2006, 534 pp, ISBN 0-521-85623-X.
[Szek1980] P. Szekeres, Naked singularities. In: Gravitational Radiation, Collapsed Objects and Exact Solutions, edited by C. Edwards. Springer (Lecture Notes in Physics, vol. 124), New York, pp. 477 – 487 (1980).
[HeLa1984] C. Hellaby and K. Lake, The redshift structure of the Big Bang in inhomogeneous cosmological models. I. Spherical dust solutions. Astrophys. J. 282, 1 (1984); erratum: Astrophys. J. 294, 702 (1985).
[Kras2016a] A. Krasiński, Cosmological blueshifting may explain the gamma ray bursts. Phys. Rev. D93, 043525 (2016).
D93, 043525 (2016).Kras2014d A. Krasiński, Blueshifts in the Lemaître – Tolman models. Phys. Rev. D90, 103525 (2014).Kras2016b A. Krasiński, Existence of blueshifts in quasispherical Szekeres spacetimes. Phys. Rev. D94, 023515 (2016).BoCK2011 K. Bolejko, M.-N. Célérier and A. Krasiński, Inhomogeneous cosmological models: exact solutions and their applications. Class. Quant. Grav. 28, 164002 (2011).SuGa2015 R. A. Sussman, I. D. Gaspar, Multiple non-spherical structures from the extrema of Szekeres scalars. Phys. Rev. D92, 083533 (2015).Perlwww D. Perley, Gamma-Ray Bursts. Enigmatic explosions from the distant universe. http://w.astro.berkeley.edu/dperley/pub/grbinfo.htmlPlan2014 Planck collaboration, Planck 2013 results. XVI. Cosmological parameters. Astron. Astrophys. 571, A16 (2014).Plan2014b Planck collaboration, Planck 2013 results. XV. CMB power spectra and likelihood. Astron. Astrophys. 571, A15 (2014).Hell1996 C. Hellaby, The nonsimultaneous nature of the Schwarzschild R = 0 singularity. J. Math. Phys. 37, 2892 (1996).BoST1977 W. B. Bonnor, A. H. Sulaiman and N. Tomimura, Szekeres's space-times have no Killing vectors, Gen. Relativ. Gravit. 8, 549 (1977).DeSo1985 M. M. de Souza, Hidden symmetries of Szekeres quasi-spherical solutions, Revista Brasileira de Física 15, 379 (1985).HeKr2002 C. Hellaby and A. Krasiński. You cannot get through Szekeres wormholes: Regularity, topology and causality in quasispherical Szekeres models. Phys. Rev. D66, 084011 (2002).Kras2014a A. Krasiński, Accelerating expansion or inhomogeneity? A comparison of the ΛCDM and Lemaître – Tolman models. Phys. Rev. D89, 023520 (2014) + erratum Phys. Rev. D89, 089901(E) (2014).unitconver Energy and Work Units Conversion, http://www.asknumbers.com/EnergyWorkConversion.aspxPeeb1968 P. J. E. Peebles, Recombination of the Primeval Plasma, Astrophys. J. 153, 1 (1968).ZKSu1968 Ya. B. Zeldovich, V. G. Kurt, R. A. Syunyaev, Recombination of hydrogen in the hot model of the Universe, Zhurn. Eksper. Teor. Fiz. 55, 278 (1969); Soviet Physics JETP 28, 146 (1969).recoWiki Recombination (cosmology), https://en.wikipedia.org/wiki/Recombination_(cosmology)BKHC2010 K. Bolejko, A. Krasiński, C. Hellaby and M.-N. Célérier, Structures in the Universe by exact methods – formation, evolution, interactions. Cambridge University Press 2010, 242 pp, ISBN 978-0-521-76914-3.NoDe2007 B. C. Nolan and U. Debnath, Is the shell-focusing singularity of Szekeres space-time visible? Phys. Rev. D76, 104046 (2007).KHBC2010 A. Krasiński, C. Hellaby, K. Bolejko and M.-N. Célérier, Imitating accelerated expansion of the Universe by matter inhomogeneities – corrections of some misunderstandings. Gen. Relativ. Gravit. 42, 2453 (2010).BATSE BATSE All-Sky Plot of Gamma-Ray Burst Locations, https://heasarc.gsfc.nasa.gov/docs/cgro/cgro/batse_src.htmldeorbit Gamma-Ray Astrophysics, https://gammaray.msfc.nasa.gov/batse/Krasfail A. Krasiński, Gamma ray bursts may be blueshifted bundles of the relic radiation. arXiv:1502.00506.KrBo2011 A. Krasiński and K. Bolejko, Redshift propagation equations in the β' ≠ 0 Szekeres models. Phys. Rev. D83, 083503 (2011).QABC2012 C. Quercellini, L. Amendola, A. Balbi, P. Cabella and M. Quartin, Real-time cosmology. Phys. Rep. 521, 95 – 134 (2012).KoKo2017 M. Korzyński and J. Kopiński, Optical drift effects in general relativity. J. Cosm. Astropart. Phys. 03, 012 (2018).Kras2001 A. Krasiński, The newest release of the Ortocartan set of programs for algebraic calculations in relativity. Gen. Relativ. Gravit. 
33, 145 (2001).KrPe2000 A. Krasiński and M. Perkowski, The system ORTOCARTAN – user's manual. Fifth edition, Warsaw 2000.
http://arxiv.org/abs/1704.08145v2
{ "authors": [ "Andrzej Krasiński" ], "categories": [ "gr-qc", "astro-ph.HE" ], "primary_category": "gr-qc", "published": "20170426145052", "title": "Properties of blueshifted light rays in quasispherical Szekeres metrics" }
Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, D 69117 Heidelberg, Germany Center for Advanced Studies, Peter the Great St. Petersburg Polytechnic University, 195251 St. Petersburg, Russia Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, D 69117 Heidelberg, Germany

We report calculations of the one-loop self-energy correction to the bound-electron g factor of the 1s and 2s states of light hydrogen-like ions with the nuclear charge number Z ≤ 20. The calculation is carried out to all orders in the nuclear binding strength parameter Zα. We find good agreement with previous calculations and improve their accuracy by about two orders of magnitude.

31.30.jn, 31.15.ac, 32.10.Dk, 21.10.Ky

One-loop electron self-energy for the bound-electron g factor
V. A. Yerokhin and Z. Harman
=============================================================

The bound-electron g factor in light hydrogen-like and lithium-like ions has been measured with a high accuracy, which has reached 3 × 10^-11 in the case of C^5+ <cit.>. Such measurements have yielded one of the best tests of the bound-state QED theory <cit.> and significantly improved the precision of the electron mass <cit.>. Further advance of the experimental accuracy toward the 10^-12 level is anticipated in the near future <cit.>.

One of the dominant effects in the bound-electron g factor is the one-loop electron self-energy. Its contribution to the total g factor value is so large that the effect needs to be calculated to all orders in the nuclear binding strength parameter Zα even for ions as light as carbon (Z is the nuclear charge number, α is the fine-structure constant). The numerical error in the evaluation of the electron self-energy is currently the second-largest source of uncertainty for the hydrogen-like ions (the largest error stemming from the two-loop electron self-energy <cit.>). The error needs to be decreased in order to match the anticipated experimental precision.

The numerical accuracy of the one-loop self-energy is also relevant for the determination of the electron mass <cit.>. The self-energy values actually used in the electron-mass determinations were obtained by an extrapolation of the high- and medium-Z numerical results down to Z = 6 (carbon) and 8 (oxygen). Clearly, this situation is not fully satisfactory and a direct numerical calculation would be preferable.

All-order (in Zα) calculations of the self-energy correction to the bound-electron g factor have a long history. First calculations of this correction were accomplished two decades ago <cit.>. The numerical accuracy of these evaluations was advanced in the later works <cit.>, which was crucial at the time as it brought an improvement of the electron mass determination. This correction was revisited again in Refs. <cit.>. In the present work, we aim to advance the numerical accuracy of the one-loop electron self-energy and bring it to the level required for future experiments.

We consider the one-loop self-energy correction to the g factor of an electron bound by the Coulomb field of a point-like and spinless nucleus.
This correction can be represented <cit.> as a sum of the irreducible (ir) and the vertex+reducible (vr) parts,

Δ g_SE = Δ g_ir + Δ g_vr.

The irreducible part is

Δ g_ir = 2 ⟨δ_g a| γ^0 Σ̃(ε_a) |a⟩,

where Σ̃(ε) = Σ(ε) - δm is the (renormalized) one-loop self-energy operator (see, e.g., <cit.>) and |δ_g a⟩ is the perturbed wave function

|δ_g a⟩ = ∑_{n ≠ a} |n⟩⟨n| δV_g |a⟩ / (ε_a - ε_n),

with δV_g = 2m [𝐫 × 𝛂]_z being the effective g-factor operator <cit.> that assumes that the spin projection of the reference state is m_a = 1/2. The vertex+reducible part is

Δ g_vr = (i/2π) ∫_C dω ∑_{n_1 n_2} [ ⟨n_1| δV_g |n_2⟩ ⟨a n_2| I(ω) |n_1 a⟩ / ((Δ_{a n_1} - ω)(Δ_{a n_2} - ω)) - δ_{n_1 n_2} ⟨a| δV_g |a⟩ ⟨a n_1| I(ω) |n_1 a⟩ / (Δ_{a n_1} - ω)^2 ],

where I(ω) is the operator of the electron-electron interaction (see, e.g., <cit.>), ω is the energy of the virtual photon, Δ_{ab} = ε_a - ε_b, and a proper covariant identification and cancellation of ultraviolet and infrared divergences is assumed. The integration contour C in Eq. (<ref>) is the standard Feynman integration contour; it will be deformed for a numerical evaluation as discussed below.

The vertex+reducible contribution is further divided into three parts: the zero-potential, one-potential, and many-potential contributions,

Δ g_vr = Δ g_vr^(0) + Δ g_vr^(1) + Δ g_vr^(2+).

This separation is induced by the following identity, which splits the integrand according to the number of interactions with the binding Coulomb field in the electron propagators,

G δV_g G ≡ [G^(0) δV_g G^(0)] + [G^(0) δV_g G^(1) + G^(1) δV_g G^(0)] + [G δV_g G - G^(0) δV_g G^(0) - G^(0) δV_g G^(1) - G^(1) δV_g G^(0)],

where G ≡ G(ε) ≡ ∑_n |n⟩⟨n| / (ε - ε_n) is the bound-electron propagator, G^(0) ≡ G|_{Z = 0} is the free-electron propagator, and

G^(1)(ε) ≡ Z [dG(ε)/dZ]_{Z = 0}

is the one-potential electron propagator. In the present work, we will be concerned mainly with the numerical evaluation of Δ g_vr^(2+), since all other contributions were computed to the required accuracy in our previous investigations <cit.>.

After performing integrations over the angular variables analytically as described in Ref. <cit.>, we obtain a result that can be schematically represented as

Δ g_vr^(2+) = lim_{|κ_max| → ∞} ∫_C dω ∫_0^∞ dx dy dz ∑_{|κ| = 1}^{|κ_max|} f_|κ|(ω, x, y, z),

where x, y, and z are the radial integration variables, |κ| is the absolute value of the angular momentum-parity quantum number of one of the electron propagators, and f_|κ| is the integrand. Summations over other angular quantum numbers are finite and absorbed into the definition of f_|κ|.

The approach of the present work is to split Δ g_vr^(2+) into two parts,

Δ g_vr^(2+) = Δ g_vr,a^(2+) + Δ g_vr,b^(2+) = ∫_{C_LH,a} dω ∫_0^∞ dx dy dz ∑_{|κ| = 1}^{κ_a} f_|κ|(ω, x, y, z) + lim_{κ_max → ∞} ∫_{C_LH,b} dω ∫_0^∞ dx dy dz ∑_{|κ| = κ_a + 1}^{κ_max} f_|κ|(ω, x, y, z),

where κ_a is an auxiliary parameter and C_LH,a and C_LH,b are two integration contours used for the evaluation of the two parts of Eq. (<ref>). In the present work we used κ_a = 120, which corresponds to the maximal value of |κ| used in Ref. <cit.>, with C_LH,a being the same contour as used in that work. So, the numerical evaluation of Δ g_vr,a^(2+) was mostly analogous to the one reported in Ref. <cit.>, but we had to improve the accuracy of the numerical integrations by several orders of magnitude. In the updated numerical integrations, the extended Gauss-log quadratures <cit.> were employed, along with the standard Gauss-Legendre quadratures.
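As a side illustration of the one-potential propagator G^(1) defined above, here is a finite-dimensional toy check (a sketch under stated assumptions: small random Hermitian matrices stand in for the actual Dirac-Coulomb operators; this is in no way the production QED code):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, Z, eps, h = 8, 6.0, 0.37, 1e-6

def herm(n):
    """Random Hermitian matrix, a stand-in for the real operators."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

H0, V = herm(dim), herm(dim)   # toy "free" Hamiltonian and binding potential
I = np.eye(dim)

def G(z):
    """Toy resolvent G(eps) = (eps - H0 - z V)^(-1) at coupling z."""
    return np.linalg.inv(eps * I - H0 - z * V)

G0 = G(0.0)
G1_closed = Z * (G0 @ V @ G0)              # Z [dG/dZ]_{Z=0} in closed form
G1_numeric = Z * (G(h) - G(-h)) / (2 * h)  # central finite difference
print(np.abs(G1_closed - G1_numeric).max())  # tiny residual: definitions agree
```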
We found it impossible to extend the partial-wave expansion significantly beyond the limit of κ_a = 120 within the same numerical scheme as used in Ref. <cit.>. The reason is that the integration contour C_LH,a used there, as well as in our previous works <cit.>, involved computations of the Whittaker functions of the first kind M_α,β(z) and their derivatives for large complex values of the argument z. The algorithms we use <cit.> for computing M_α,β(z) become unstable for large α (needed for large κ's) and large and complex z, even when using quadruple-precision arithmetic. For this reason, in order to compute Δ g_vr,b^(2+), we had to switch to the contour C_LH,b, which was originally introduced by P. J. Mohr in his calculations of the one-loop self-energy <cit.>. The crucial feature of this contour is that it involves the computation of the Whittaker functions M_α,β(z) and W_α,β(z) for real arguments z only. For real arguments, the computational algorithms were shown <cit.> to be stable even for very large κ's (and, hence, α's).

Specifically, the contours C_LH,a and C_LH,b consist of two parts, the low-energy and the high-energy ones. The low-energy part extends along (Δ, 0) on the lower bank of the cut of the photon propagator of the complex ω plane and along (0, Δ) on the upper bank of the cut. The high-energy part consists of the interval (Δ, Δ + i∞) in the upper half-plane and the interval (Δ, Δ - i∞) in the lower half-plane. The difference between C_LH,a and C_LH,b is only in the choice of the parameter Δ. For C_LH,a, we use Δ = ε_a (the same choice as in our previous works <cit.>), whereas for C_LH,b, we use Δ = ε_a (Mohr's choice). A detailed discussion of the integration contour and the analytical properties of the integrand can be found in the original work <cit.>.

We found that the price to pay for using the contour C_LH,b was the oscillatory behavior of the integrand as a function of the radial variables for ω ∼ ε_a. Because of this, we had to employ very dense radial grids for the numerical integrations, which made the computations rather time-consuming.

The largest error of the numerical evaluation of Eq. (<ref>) comes from the termination of the infinite summation over |κ| and the estimation of the tail of the expansion. In the present work, we performed the summation over |κ| before all integrations and stored the complete sequence of partial sums, to be used for the extrapolation performed at the last step of the calculation. The convergence of the expansion was monitored; in the cases when the series converged to the prescribed accuracy (i.e., the relative contribution of several consecutive expansion terms was smaller than, typically, 10^-11 for Δ g_vr,a^(2+) and 10^-6 for Δ g_vr,b^(2+)), the summation was terminated. This approach reduced the computation time considerably as compared to our previous scheme <cit.>, where the summation over |κ| was performed after all integrations. If the convergence of the partial-wave expansion had not been reached, the summation was extended up to the upper cutoff κ_max = 450.

The remaining tail of the series was estimated by analyzing the |κ|-dependence of the partial-wave expansion terms after all integrations. We fitted the last m expansion terms (typically, m = 20) to a polynomial in 1/|κ| with 1-3 fitting parameters,

δS_|κ| = c_0/|κ|^3 + c_1/|κ|^4 + … .

The uncertainty of the extrapolation was estimated by varying the cutoff parameter κ_max by 20% and multiplying the resulting difference by a conservative factor of 1.5.
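The fit (<ref>) and the tail summation amount to a small linear least-squares problem. The sketch below illustrates the scheme, with synthetic expansion terms standing in for the actual partial-wave data (the coefficient values and the truncation of the tail sum are illustrative choices of this sketch):

```python
import numpy as np

kappa = np.arange(1, 451, dtype=float)   # |kappa| = 1, ..., kappa_max = 450
c0_true, c1_true = -2.7e-9, 4.1e-9       # synthetic coefficients, illustration only
terms = c0_true / kappa**3 + c1_true / kappa**4   # stand-in expansion terms

m = 20                                   # fit the last m partial-wave terms
k_fit = kappa[-m:]
A = np.column_stack([1 / k_fit**3, 1 / k_fit**4])
c0, c1 = np.linalg.lstsq(A, terms[-m:], rcond=None)[0]

# Sum the fitted form over the uncomputed partial waves |kappa| > 450.
k_tail = np.arange(451.0, 2.0e6)
tail = np.sum(c0 / k_tail**3 + c1 / k_tail**4)
print(tail)
```

Repeating the fit with the cutoff shifted by 20% and comparing the resulting tails then gives the uncertainty estimate described above.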
This procedure usually led to the expansion tail being estimated with an accuracy of about 10%.

We observed an interesting feature, namely, that the tail of the expansion, with a high accuracy, is the same for the 1s and for the 2s states. E.g., for Z = 4, we find the expansion tail of δg(1s) = -1.88 (19) × 10^-12 and δg(2s) = -1.88 (19) × 10^-12; for Z = 16, we obtain δg(1s) = -3.00 (27) × 10^-11 and δg(2s) = -3.01 (27) × 10^-11. We do not know the reason for this, but such an agreement shows a high degree of consistency of our numerical calculations for the 1s and 2s states.

Our numerical results for the self-energy correction to the bound-electron g factor of the 1s and 2s states of hydrogen-like ions are presented in Table <ref>. The values for the irreducible part Δ g_ir are taken from our previous investigations (from Ref. <cit.> for Z ≤ 12 and from Ref. <cit.> otherwise). Using results of Ref. <cit.>, we introduced small corrections that accounted for a different value of the fine-structure constant used in that work. In Table <ref> we also present values of the higher-order remainder function H(Zα), obtained after separating out all known terms of the Zα expansion <cit.> from our numerical results,

Δ g_SE = (α/π) [ 1 + (Zα)^2/(6n^2) + ((Zα)^4/n^3) { (32/9) ln[(Zα)^-2] + b_40 } + ((Zα)^5/n^3) H(Zα) ],

where b_40(1s) = -10.236 524 32 and b_40(2s) = -10.707 715 60. The results for the higher-order remainder function are plotted in Fig. <ref>.

Our calculation represents an improvement in accuracy over previous works by about two orders of magnitude. Table <ref> shows the comparison of various calculations for carbon. It is gratifying to find that all results are consistent with each other within the given error bars.

In the present work, we performed direct numerical calculations for ions with Z ≥ 4. For smaller Z, numerical cancellations in determining the higher-order remainder become too large to make numerical calculations meaningful. Instead of direct calculations, we extrapolated the numerical values presented in Table <ref> for H(Zα) down towards Z → 0. Doing this, we assumed the following ansatz for H(Zα), which was inspired by the Zα expansion of the one-loop self-energy for the Lamb shift,

H(Zα) ≈ c_00 + (Zα) { ln^2[(Zα)^-2] c_12 + ln[(Zα)^-2] c_11 + c_10 } + (Zα)^2 c_20.

For the 2s-1s difference, we use the form (<ref>) with c_12 = 0, assuming the leading logarithm to be state-independent. The extrapolated results are presented in Table <ref>. The uncertainties quoted for our fitting results are obtained under the assumption that the logarithmic terms in the next-to-leading order of the Zα expansion of H(Zα) comply with Eq. (<ref>). If we introduced, e.g., a cubed logarithmic term into Eq. (<ref>), our estimates of uncertainties would increase by about a factor of 2.

In summary, we reported calculations of the one-loop self-energy correction to the bound-electron g factor of the 1s and 2s states of light hydrogen-like ions, performed to all orders in the nuclear binding strength parameter Zα. The relative accuracy of the results obtained varies from 1 × 10^-10 for Z = 4 to 3 × 10^-9 for Z = 20. Our results agree well with the previously published values, but their accuracy is higher by about two orders of magnitude.

§ ACKNOWLEDGEMENT
V.A.Y. acknowledges support by the Ministry of Education and Science of the Russian Federation Grant No. 3.5397.2017/BY.

10
sturm:14 S. Sturm, F. Köhler, J. Zatorski, A. Wagner, Z. Harman, G. Werth, W. Quint, C. H. Keitel, and K. Blaum, Nature 506, 467–470 (2014).
sturm:11 S. Sturm, A. Wagner, B. Schabinger, J. Zatorski, Z.
Harman, W. Quint, G. Werth, C. H. Keitel, and K. Blaum, Phys. Rev. Lett. 107, 023002 (2011).mohr:16:codata P. J. Mohr, D. B. Newell, and B. N. Taylor, Rev. Mod. Phys. 88, 035009 (2016).sturm:17 S. Sturm, M. Vogel, F. Köhler-Langes, W. Quint, K. Blaum, and G. Werth, Atoms 5, 4 (2017).pachucki:04:prl K. Pachucki, U. D. Jentschura, and V. A. Yerokhin, Phys. Rev. Lett. 93, 150401 (2004), [(E) ibid., 94, 229902 (2005)].pachucki:05:gfact K. Pachucki, A. Czarnecki, U. D. Jentschura, and V. A. Yerokhin, Phys. Rev. A 72, 022108 (2005).persson:97:g H. Persson, S. Salomonson, P. Sunnergren, and I. Lindgren, Phys. Rev. A 56, R2499(1997).blundell:97 S. A. Blundell, K. T. Cheng, and J. Sapirstein, Phys. Rev. A. 55, 1857 (1997).beier:00:pra T. Beier, I. Lindgren, H. Persson, S. Salomonson, P. Sunnergren, H. Häffner, and N. Hermanspahn, Phys. Rev. A 62, 032510 (2000).yerokhin:02:prl V. A. Yerokhin, P. Indelicato, and V. M. Shabaev, Phys. Rev. Lett. 89, 143001 (2002).yerokhin:04 V. A. Yerokhin, P. Indelicato, and V. M. Shabaev, Phys. Rev. A 69, 052503 (2004).yerokhin:08:prl V. A. Yerokhin and U. D. Jentschura, Phys. Rev. Lett. 100, 163001 (2008).yerokhin:10:sehfs V. A. Yerokhin and U. D. Jentschura, Phys. Rev. A 81, 012502 (2010).pachucki:14:cpc K. Pachucki, M. Puchalski, and V. Yerokhin, Comput. Phys. Commun. 185, 2913(2014).yerokhin:99:pra V. A. Yerokhin and V. M. Shabaev, Phys. Rev. A 60, 800(1999).mohr:74:a P. J. Mohr, Ann. Phys. (NY) 88, 26(1974).mohr:74:b P. J. Mohr, Ann. Phys. (NY) 88, 52(1974).
http://arxiv.org/abs/1704.08080v2
{ "authors": [ "V. A. Yerokhin", "Z. Harman" ], "categories": [ "physics.atom-ph" ], "primary_category": "physics.atom-ph", "published": "20170426124905", "title": "One-loop electron self-energy for the bound-electron $g$ factor" }
A Siamese Deep Forest Lev V. Utkin^1 and Mikhail A. Ryabinin^2 Department of Telematics Peter the Great St.Petersburg Polytechnic University St.Petersburg, Russia e-mail: ^[email protected], ^[email protected]=========================================================================================================================================================================================================== A Siamese Deep Forest (SDF) is proposed in the paper. It is based on the Deep Forest or gcForest proposed by Zhou and Feng and can be viewed as a gcForest modification. It can be also regarded as an alternative to the well-known Siamese neural networks. The SDF uses a modified training set consisting of concatenated pairs of vectors. Moreover, it defines the class distributions in the deep forest as the weighted sum of the tree class probabilities such that the weights are determined in order to reduce distances between similar pairs and to increase them between dissimilar points. We show that the weights can be obtained by solving a quadratic optimization problem. The SDF aims to prevent overfitting which takes place in neural networks when only limited training data are available. The numerical experiments illustrate the proposed distance metric method.Keywords: classification, random forest, decision tree, Siamese, deep learning, metric learning, quadratic optimization§ INTRODUCTION One of the important machine learning tasks is to compare pairs of objects, for example, pairs of images, pairs of data vectors, etc. There are a lot of approaches for solving the task. One of the approaches is based on computing a corresponding pairwise metric function which measures a distance between data vectors or a similarity between the vectors. This approach is called the metric learning <cit.>. It is pointed out by Bellet et al. <cit.> in their review paper that the metric learning aims to adapt the pairwise real-valued metric function, for example, the Mahalanobis distance or the Euclidean distance, to a problem of interest using the information provided by training data. A detailed description of the metric learning approaches is also represented by Le Capitaine <cit.> and by Kulis <cit.>. The basic idea underlying the metric learning solution is that the distance between similar objects should be smaller than the distance between different objects.Suppose there is a training set S ={(𝐱_i,𝐱_j ,y_ij), (i,j)∈ K} consisting of N pairs of examples 𝐱 _i∈ℝ^m and 𝐱_j∈ℝ^m such that a binary label y_ij∈{0,1} is assigned to every pair (𝐱 _i,𝐱_j). If two data vectors 𝐱_i and 𝐱 _j are semantically similar or belong to the same class of objects, then y_ij takes the value 0. If the vectors correspond to different or semantically dissimilar objects, then y_ij takes the value 1. This implies that the training set S can be divided into two subsets. The first subset is called the similar or positive set and is defined as𝒮={(𝐱_i,𝐱_j):𝐱_i and 𝐱_j are semantically similar and y_ij=0}.The second subset is the dissimilar or negative set. It is defined as𝒟={(𝐱_i,𝐱_j):𝐱_i and 𝐱_j are semantically dissimilar and y_ij=1}.If we have two observation vectors 𝐱_i∈ℝ^m and 𝐱_j∈ℝ^m from the training set, then the distance d(𝐱_i,𝐱_j) should be minimized if 𝐱_i and 𝐱_j are semantically similar, and it should be maximized between dissimilar 𝐱_i and 𝐱_j. 
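As a small illustration (hypothetical data and class labels, assuming numpy; the construction itself follows the definitions above), the sets 𝒮 and 𝒟 can be assembled from individually labeled examples as follows:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))        # six examples, m = 4 features
labels = np.array([0, 0, 1, 1, 2, 2])  # hypothetical class labels

S, D = [], []                          # similar and dissimilar pair sets
for i, j in combinations(range(len(X)), 2):
    y_ij = int(labels[i] != labels[j])  # y_ij = 0 for similar, 1 for dissimilar
    (S if y_ij == 0 else D).append((X[i], X[j]))
print(len(S), len(D))                  # 3 similar and 12 dissimilar pairs
```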
The most general and popular real-valued metric function is the squared Mahalanobis distance d_M^2(𝐱_i,𝐱_j), which is defined for vectors 𝐱_i and 𝐱_j as

d_M^2(𝐱_i,𝐱_j) = (𝐱_i - 𝐱_j)^T M (𝐱_i - 𝐱_j).

Here M ∈ ℝ^{m×m} is a symmetric positive semi-definite matrix. If 𝐱_i and 𝐱_j are random vectors from the same distribution with covariance matrix C, then M = C^{-1}. If M is the identity matrix, then d_M^2(𝐱_i,𝐱_j) is the squared Euclidean distance.

Given the subsets 𝒮 and 𝒟, the metric learning optimization problem can be formulated as follows:

M^∗ = arg min_M [J(M,𝒟,𝒮) + λ · R(M)],

where J(M,𝒟,𝒮) is a loss function that penalizes violated constraints; R(M) is some regularizer on M; λ ≥ 0 is the regularization parameter.

There are many useful loss functions J which take into account the condition that the distance between similar objects should be smaller than the distance between different objects. These functions define a number of learning methods. It should be noted that the learning methods using the Mahalanobis distance assume some linear structure of the data. If this is not valid, then the kernelization of linear methods is one of the possible ways of solving the metric learning problem. Bellet et al. <cit.> review several approaches and algorithms dealing with nonlinear forms of metrics. In particular, these are the Support Vector Metric Learning algorithm provided by Xu et al. <cit.>, the Gradient-Boosted Large Margin Nearest Neighbors method proposed by Kedem et al. <cit.>, and the Hamming Distance Metric Learning algorithm provided by Norouzi et al. <cit.>.

A powerful implementation of metric learning dealing with non-linear data structures is the so-called Siamese neural network, introduced by Bromley et al. <cit.> in order to solve signature verification as a problem of image matching. This network consists of two identical sub-networks joined at their outputs. The two sub-networks extract features from two input examples during training, while the joining neuron measures the distance between the two feature vectors. The Siamese architecture has been exploited in many applications, for example, in face verification <cit.>, in one-shot learning in which predictions are made given only a single example of each new class <cit.>, in inertial gesture classification <cit.>, in deep learning <cit.>, in extracting speaker-specific information <cit.>, and for face verification in the wild <cit.>. These are only a few of the successful applications of Siamese neural networks. Many modifications of Siamese networks have been developed, including fully-convolutional Siamese networks <cit.>, Siamese networks combined with a gradient boosting classifier <cit.>, and Siamese networks with the triangular similarity metric <cit.>.
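Before turning to the Siamese network in detail, here is a minimal numerical sketch of the squared Mahalanobis distance defined at the beginning of this section, with M estimated as an inverse sample covariance, as noted above (the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))  # synthetic
M = np.linalg.inv(np.cov(data, rowvar=False))   # M = C^(-1)

def d_M2(x_i, x_j):
    """Squared Mahalanobis distance (x_i - x_j)^T M (x_i - x_j)."""
    diff = x_i - x_j
    return diff @ M @ diff

print(d_M2(data[0], data[1]))
```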
In contrast to deep neural networks which require great effort in hyperparameter tuning and large-scale training data, gcForest is much easier to train and can perfectly work when there are only small-scale training data. The deep forest solves tasks of classification as well as regression. Therefore, by taking into account its advantages, it is important to modify it in order to develop a structure solving the metric learning task. We propose the so-called Siamese Deep Forest (SDF) which can be regarded as an alternative to the Siamese neural networks and which is based on the gcForest proposed by Zhou and Feng <cit.> and can be viewed as its modification. Three main ideas underlying the SDF can be formulated as follows:* We propose to modify training set by using concatenated pairs of vectors. * We define the class distributions in the deep forest as the weighted sum of the tree class probabilities where the weights are determined in order to reduce distances between semantically similar pairs of examples and to increase them between dissimilar pairs. The weights are training parameters of the SDF. * We apply the greedy algorithm for training the SDF, i.e., the weights are successively computed for every layer or level of the forest cascade. We consider the case of the weakly supervised learning <cit.> when there are no information about the class labels of individual training examples, but only information in the form of sets 𝒮 and 𝒟 is provided, i.e., we know only semantic similarity of pairs of training data. However, the case of the fully supervised learning when the class labels of individual training examples are known can be considered in the same way.It should be noted that the SDF cannot be called Siamese in the true sense of the word. It does not consist of two gcForests like the Siamese neural network. However, its aim coincides with the Siamese network aim. Therefore, we give this name for the gcForest modification.The paper is organized as follows. Section 2 gives a very short introduction into the Siamese neural networks. A short description of the gcForest proposed by Zhou and Feng <cit.> is given in Section 3. The ideas underlying the SDF are represented in Section 4 in detail. A modification of the gcForest using the weighted averages, which can be regarded as a basis of the SDF is provided in Section 5. Algorithms for training and testing the SDF are considered in Section 6. Numerical experiments with real data illustrating cases when the proposed SDF outperforms the gcForest are given in Section 7. Concluding remarks are provided in Section 8.§ SIAMESE NEURAL NETWORKS Before studying the SDF, we consider the Siamese neural network which is an efficient and popular tool for dealing with data of the form 𝒮 and 𝒟. It will be a basis for constructing the SDF.A standard architecture of the Siamese network given in the literature (see, for example, <cit.>) is shown in Fig. <ref>. Let 𝐱_i and 𝐱_j be two data vectors corresponding to a pair of elements from a training set, for example, images. Suppose that f is a map of 𝐱_i and 𝐱_j to a low-dimensional space such that it is implemented as a neural network with the weight matrix W. At that, parameters W are shared by two neural networks f(𝐱_1) and f(𝐱_2) denoted as E_1 and E_2 and corresponding to different input vectors, i.e., they are the same for the two neural networks. The property of the same parameters in the Siamese neural network is very important because it defines the corresponding training algorithm. 
By comparing the outputs 𝐡_i = f(𝐱_i) and 𝐡_j = f(𝐱_j) using the Euclidean distance d(𝐡_i,𝐡_j), we measure the compatibility between 𝐱_i and 𝐱_j. If we assume for simplicity that the neural network has one hidden layer, then there holds

𝐡 = σ(W𝐱 + b).

Here σ(z) is an activation function; W is the p×M weight matrix such that its element w_ij is the weight of the connection between unit j in the input layer and unit i in the hidden layer, i = 1,...,p, j = 1,...,M; b = (b_1,...,b_p) is a bias vector; 𝐡 = (h_1,...,h_p) is the vector of neuron activations, which depends on the input vector 𝐱.

The Siamese neural network is trained on pairs of observations by using specific loss functions, for example, the following contrastive loss function:

l(𝐱_i,𝐱_j,y_ij) = ‖𝐡_i - 𝐡_j‖_2^2, if y_ij = 0,
l(𝐱_i,𝐱_j,y_ij) = max(0, τ - ‖𝐡_i - 𝐡_j‖_2^2), if y_ij = 1,

where τ is a predefined threshold.

Hence, the total error function to be minimized is defined as

J(W,b) = ∑_i,j l(𝐱_i,𝐱_j,y_ij) + μ R(W,b).

Here R(W,b) is a regularization term added to improve the generalization of the neural network, and μ is a hyper-parameter which controls the strength of the regularization. The above problem can be solved by using the stochastic gradient descent scheme.

§ DEEP FOREST

According to <cit.>, the gcForest generates a deep forest ensemble with a cascade structure. Representation learning in deep neural networks mostly relies on the layer-by-layer processing of raw features. The representational learning ability of the gcForest can be further enhanced by the so-called multi-grained scanning. Each level of the cascade structure receives feature information processed by its preceding level and outputs its processing result to the next level. Moreover, each cascade level is an ensemble of decision tree forests. We do not consider in detail the multi-grained scanning, where sliding windows are used to scan the raw features, because this part of the deep forest is the same in the SDF. However, the most interesting component of the gcForest from the point of view of the SDF construction is the cascade forest.

Given an instance, each forest produces an estimate of the class distribution by counting the percentage of different classes of examples at the leaf node which the concerned instance falls into, and then averaging across all trees in the same forest. The class distribution forms a class vector, which is then concatenated with the original vector to be input to the next level of the cascade. The usage of the class vector as a result of the random forest classification is very similar to the idea underlying the stacking method <cit.>. The stacking algorithm trains the first-level learners using the original training data set. Then it generates a new data set for training the second-level learner (meta-learner) such that the outputs of the first-level learners are regarded as input features for the second-level learner, while the original labels are still regarded as labels of the new training data. In fact, the class vectors in the gcForest can be viewed as the meta-learners. In contrast to the stacking algorithm, the gcForest simultaneously uses the original vector and the class vectors (meta-learners) at the next level of the cascade by means of their concatenation. This implies that the feature vector becomes longer and longer after every cascade level. The architecture of the cascade proposed by Zhou and Feng <cit.> is shown in Fig. <ref>.
It can be seen from the figure that each level of the cascade consists of two different pairs of random forests which generate 3-dimensional class vectors concatenated each other and with the original input. After the last level, we have the feature representation of the input feature vector, which can be classified in order to get the final prediction. Zhou and Feng <cit.> propose to use different forests at every level in order to provide the diversity which is an important requirement for the random forest construction.§ THREE IDEAS UNDERLYING THE SDF The SDF aims to function like the standard Siamese neural network. This implies that the SDF should provide large distances between semantically similar pairs of vectors and small distances between dissimilar pairs. We propose three main ideas underlying the SDF:* Denote the set indices of all pairs 𝐱_i and 𝐱 _j as K={(i,j)} We train every tree by using the concatenation of two vectors 𝐱_i and 𝐱_j such that the class y_ij ∈{0,1} is defined by the semantical similarity of the vectors. In fact, the trees are trained on the basis of two classes and reflect the semantical similarity of pairs, but not classes of separate examples. With this concatenation, we define a new set of classes such that we do not need to know separate classes for 𝐱_i or for 𝐱_j. As a result, we have a new training set R={(𝐱_i,𝐱_j),y_ij ), (i,j)∈ K} and exploit only the information about the semantical similarity. The concatenation is not necessary when the classes of training elements are known, i.e., we have a set of labels {y_1,...,y_n}. In this case, only the second idea can be applied. * We partially use some modification of ideas provided by Xiong et al. <cit.> and Dong et al. <cit.>. In particular, Xiong et al. <cit.> considered an algorithm for solving the metric learning problem by means of the random forests. The proposed metric is able to implicitly adapt its distance function throughout the feature space. Dong et al. <cit.> proposed a random forest metric learning (RFML) algorithm, which combines semi-multiple metrics with random forests to better separate the desired targets and background in detecting and identifying target pixels based on specific spectral signatures in hyperspectral image processing. A common idea underlying the metric learning algorithms in <cit.> and <cit.> is that the distance measure between a pair of training elements 𝐱 _i,𝐱_j for a combination of trees is defined as average of some special functions of the training elements. For example, if a random forest is a combination of T decision trees {f_t(𝐱),t=1,...,T}, then the distance measure isd(𝐱_i,𝐱_j)=T^-1∑_t=1^Tf_t(ψ(𝐱 _i,𝐱_j)).Here ψ(𝐱_i,𝐱_j) is a mapping function which is specifically defined in <cit.> and <cit.>. We combine the above ideas with the idea of probability distributions of classes provided in <cit.> in order to produce a new feature vector after every level of the cascade forest. According to <cit.>, each forest of a cascade level produces an estimate of the class probability distribution by counting the percentage of different classes of training examples at the leaf node where the concerned instance falls into, and then averaging across all trees in the same forest. Our idea is to define the forest class distribution as a weighted sum of the tree class probabilities. 
At that, the weights are computed in an optimal way in order to reduce distances between similar pairs and to increase them between dissimilar points.The obtained weights are very similar to weights of the neural network connections between neurons, which are also computed during training the neural network. The trained values of weights in the SDF are determined in accordance with a loss function defining properties of the SDF or the neural network. Due to this similarity, we will call levels of the cascade as layers sometimes.It should be also noted that the first idea can be sufficient for implementing the SDF because the additional features (the class vectors) produced by the previous cascade levels partly reflect the semantical similarity of pairs of examples. However, in order to enhance the discriminative capability of the SDF, we modify the corresponding class distributions. * We apply the greedy algorithm for training the SDF that is we train separately every level starting from the first level such that every next level uses results of training at the previous level. In contrast to many neural networks, the weights considered above are successively computed for every layer or level of the forest cascade. § THE SDF CONSTRUCTION Let us introduce notations for indices corresponding to different deep forest components. The indices and their sets of values are shown in Table <ref>. One can see from Table <ref>, that there are Q levels of the deep forest or the cascade, every level contains M_q forests such that every forest consists of T_k,q trees. If we use the concatenation of two vectors 𝐱_i and 𝐱_j for defining new classes of semantically similar and dissimilar pairs, then the number of classes is 2. It should be noted that the class c corresponds to label y_ij∈{0,1} of a training example from the set R.Suppose we have trained trees in the SDF. One of the approaches underlying the deep forest is that the class distribution forms a class vector which is then concatenated with the original vector to be an input to the next level of the cascade. Suppose a pair of the original vectors is (𝐱_i ,𝐱_j), and the p_ij,c^(t,k,q) is the probability of class c for the pair (𝐱_i,𝐱_j) produced by the t-th tree from the k-th forest at the cascade level q. Below we use the triple index (t,k,q) in order to indicate that the element belongs to the t-th tree from the k-th forest at the cascade level q. The same can be said about subsets of the triple. Then, according to <cit.>, the element v_c^(k,q) of the class vector corresponding to class c and produced by the k-th forest in the gcForest is determined asv_ij,c^(k,q)=T_k,q^-1∑_t=1^T_k,qp_ij,c^(t,k,q).Denote the obtained class vector as 𝐯_ij^(k,q)=(v_ij,0 ^(k,q),v_ij,1^(k,q)). Then the concatenated vector 𝐱 _ij^(1) after the first level of the cascade is𝐱_ij^(1)=(𝐱_i,𝐱_j,𝐯 _ij^(1,1),....,𝐯_ij^(M_1,1))=(𝐱 _i,𝐱_j,𝐯_ij^(k,1),k=1,...,M_1).It is composed of the original vectors 𝐱_i, 𝐱_j and M_1 class vectors obtained from M_1 forests at the first level. In the same way, we can write the concatenated vector 𝐱_ij^(q) after the q-th level of the cascade as𝐱_ij^(q) =(𝐱_i^(q-1),𝐱 _j^(q-1),𝐯_ij^(1,q),....,𝐯_ij^(M_q,q)) =(𝐱_i^(q-1),𝐱_j^(q-1),𝐯 _ij^(k,q), k=1,...,M_q).In order to reduce the number of indices, we omit the index q below because all derivations will concern only level q, where q may be arbitrary from 1 to Q. 
We also replace notations M_q and T_k,q with M and T_k, respectively, assuming that the number of forests and numbers of trees strongly depend on the cascade level.The vector 𝐱_ij in (<ref>) has been derived in accordance with the gcForest algorithm <cit.>. However, in order to implement the SDF, we propose to change the method for computing elements v_ij,c^(k) of the class vector, namely, the averaging is replaced with the weighted sum of the form:v_ij,c^(k)=∑_t=1^T_kp_ij,c^(t,k)w^(t,k).Here w^(t,k) is a weight for combining the class probabilities of the t-th tree from the k-th forest at the cascade level q. The weights play a key role in implementing the SDF. An illustration of the weighted averaging is shown in Fig. <ref>, where we partly modify a picture from <cit.> (the left part is copied from <cit.>) in order to show how elements of the class vector are derived as a simple weighted sum. It can be seen from Fig. <ref> that two-class distribution is estimated by counting the percentage of different classes (y_ij=0 or y_ij=1) of new training concatenated examples (𝐱_i,𝐱_j) at the leaf node where the concerned example (𝐱_i,𝐱 _j) falls into. Then the class vector of (𝐱 _i,𝐱_j) is computed as the weighted average. It is important to note that we weigh trees belonging to one of the forests, but not classes, i.e., the weights do not depend on the class c. Moreover, the weights characterize trees, but not training elements. This implies that they do not depend on the vectors 𝐱_i, 𝐱_j too. One can also see from Fig. <ref> that the augmented features v_ij,0^(k) and v_ij,1^(k) or the class vector corresponding to the k-th forest are obtained as weighted sums, i.e., there holdv_ij,0^(k) =0.5· w^(1,k)+0.4· w^(2,k)+1· w^(3,k),v_ij,1^(k) =0.5· w^(1,k)+0.6· w^(2,k)+0· w^(3,k).The weights are restricted by the following obvious condition:∑_t=1^T_kw^(t,k)=1.In other words, we have the weighted averages for every forest, and the corresponding weights can be regarded as trained parameters in order to decrease the distance between semantically similar 𝐱_i and 𝐱_j and to increase the distance between dissimilar 𝐱_i and 𝐱_j. Therefore, we have to develop a way for training the SDF, i.e., for computing the weights for every forest and for every cascade level.Now we have numbers v_ij,c^(k) for every class. Let us analyze these numbers from the point of the SDF aim view.First, we consider the case when (𝐱_i,𝐱_j )∈𝒮 and y_ij=0. However, we may have non-zero v_ij,c ^(k) for both classes. It is obvious that v_ij,0^(k) (the average probability of class c=0) should be as large as possible because c=y_ij=0. Moreover, v_ij,1^(k) (the average probability of class c=1) should be as small as possible because c≠ y_ij=0.We can similarly write conditions for the case when (𝐱 _i,𝐱_j)∈𝒟 and y_ij=1. In this case, v_ij,0^(k) should be as small as possible because c≠ y_ij=1, and v_ij,1^(k) should be as large as possible because c=y_ij=1.In sum, we should increase (decrease) v_ij,c^(k) if c=y_ij (c≠ y_ij). In other words, we have to find the weights maximizing (minimizing) v_ij,c^(k) when c=y_ij (c≠ y_ij). The ideal case is when v_ij,c^(k)=1 by c=y_ij and v_ij,c^(k)=0 by c≠ y_ij. However, the vector of weights has to be the same for every class, and it does not depend on a certain class. At first glance, we could find optimal weights for every individual forest separately from other forests. 
However, we should analyze all forests simultaneously because some vectors of weights may compensate for those vectors which cannot efficiently separate v_ij,0^(k) and v_ij,1^(k).

§ THE SDF TRAINING AND TESTING

We apply the greedy algorithm for training the SDF, namely, we train every level separately, starting from the first level, such that every next level uses the results of training at the previous level. The training process at every level consists of two parts. The first part aims to train all trees by applying all pairs of training examples. This part does not significantly differ from the training of the original deep forest proposed by Zhou and Feng <cit.>. The difference is that we use pairs of concatenated vectors (𝐱_i,𝐱_j) and two classes corresponding to the semantic similarity of the pairs. The second part is to compute the weights w^(t,k), t = 1,...,T_k. This can be done by minimizing the following objective function over M unit (probability) simplices in ℝ^{T_k} denoted as Δ_k, i.e., over non-negative vectors 𝐰^(k) = (w^(1,k),...,w^(T_k,k)) ∈ Δ_k, k = 1,...,M, that sum up to one:

min_𝐰 J_q(𝐰) = min_𝐰 ∑_i,j l(𝐱_i,𝐱_j,y_ij,𝐰) + λ R(𝐰).

Here 𝐰 is a vector produced as the concatenation of the vectors 𝐰^(k), k = 1,...,M; R(𝐰) is a regularization term; λ is a hyper-parameter which controls the strength of the regularization. We define the regularization term as

R(𝐰) = ‖𝐰‖^2.

The loss function has to increase the values of the augmented features v_ij,0^(k) corresponding to the class c = 0 and to decrease the features v_ij,1^(k) corresponding to the class c = 1 for semantically similar pairs (𝐱_i,𝐱_j). Moreover, the loss function has to increase the values of the augmented features v_ij,1^(k) corresponding to the class c = 1 and to decrease the features v_ij,0^(k) corresponding to the class c = 0 for dissimilar pairs (𝐱_i,𝐱_j).

§.§ Convex loss function

Let us denote the set of vectors 𝐰 as Δ. In order to efficiently solve the problem (<ref>), the condition of convexity of J_q(𝐰) in the domain of 𝐰 should be fulfilled. One of the ways of determining the loss function l is to consider a distance d(𝐱_i,𝐱_j) between two vectors 𝐱_i and 𝐱_j at the q-th level. However, we do not have separate vectors 𝐱_i and 𝐱_j. We have one vector whose parts correspond to the vectors 𝐱_i and 𝐱_j. Therefore, this is a distance between elements of the concatenated vector (𝐱_i^(q-1),𝐱_j^(q-1)) obtained at level q-1 and augmented features 𝐯_ij^(k), k = 1,...,M, of a special form.

Let us consider the expression for the above distance in detail. It consists of M+1 terms. The first term, denoted as X_ij, is the squared Euclidean distance between the two parts of the output vector obtained at the previous level:

X_ij = ∑_{l=1}^m (x_i,l^(q-1) - x_j,l^(q-1))^2.

Here x_i,l^(q-1) is the l-th element of 𝐱_i^(q-1), and m is the length of the input vector for the q-th level, i.e., the length of the output vector of level q-1.

Let us now consider the elements v_ij,0^(k) and v_ij,1^(k). We have to make the difference between these elements as large as possible, taking into account y_ij. In particular, if y_ij = 0, then we should decrease the difference v_ij,1^(k) - v_ij,0^(k). If y_ij = 1, then we should decrease the difference v_ij,0^(k) - v_ij,1^(k). Let us introduce the variable z_ij such that z_ij = -1 if y_ij = 0, and z_ij = 1 if y_ij = 1.
Then the following expression characterizing the augmented features v_ij,0^(k) and v_ij,1^(k) can be written:

[max(0, z_ij(v_ij,0^(k) - v_ij,1^(k)))]^2.

Substituting (<ref>) into the above expression, we get the next M terms

[max(0, ∑_{t=1}^{T_k} P_ij^(t,k) w^(t,k))]^2, k = 1,...,M,

where

P_ij^(t,k) = z_ij(p_ij,0^(t,k) - p_ij,1^(t,k)).

Finally, we can write

d(𝐱_i,𝐱_j) = X_ij + ∑_{k=1}^M [max(0, ∑_{t=1}^{T_k} P_ij^(t,k) w^(t,k))]^2.

So, we have to minimize the penalty terms entering d(𝐱_i,𝐱_j) with respect to w^(t,k) under constraints (<ref>). Since X_ij does not depend on w^(t,k), we consider the following objective function

J_q(𝐰) = ∑_i,j ∑_{k=1}^M [max(0, ∑_{t=1}^{T_k} P_ij^(t,k) w^(t,k))]^2 + λ‖𝐰‖^2.

The function d(𝐱_i,𝐱_j) is convex in the interval [0,1] of w^(t,k). Then the objective function J_q(𝐰), as a sum of convex functions, is convex too with respect to the weights.

§.§ Quadratic optimization problem

Let us consider the problem (<ref>) under constraints (<ref>) in detail. Introduce a new variable ξ_ij^(k) defined as

ξ_ij^(k) = max(0, ∑_{t=1}^{T_k} P_ij^(t,k) w^(t,k)).

Then problem (<ref>) can be rewritten as

J_q(𝐰) = min_{ξ_ij^(k),𝐰} ∑_i,j ∑_{k=1}^M (ξ_ij^(k))^2 + λ‖𝐰‖^2,

subject to (<ref>) and

ξ_ij^(k) ≥ ∑_{t=1}^{T_k} P_ij^(t,k) w^(t,k), ξ_ij^(k) ≥ 0, (i,j) ∈ K, k = 1,...,M.

We have obtained a standard quadratic optimization problem with linear constraints and variables ξ_ij^(k) and w^(t,k). It can be solved by using well-known standard methods.

It is interesting to note that the optimization problem (<ref>)-(<ref>) can be decomposed into M problems of the form:

J_q(𝐰^(k)) = min_{ξ_ij,𝐰^(k)} ∑_i,j ξ_ij^2 + λ‖𝐰^(k)‖^2,

subject to (<ref>) and

ξ_ij ≥ ∑_{t=1}^{T_k} P_ij^(t,k) w^(t,k), ξ_ij ≥ 0, (i,j) ∈ K, k = 1,...,M.

Indeed, by returning to problem (<ref>)-(<ref>), we can see that the subset of variables ξ_ij^(k) and w^(t,k) for a certain k and the constraints for these variables do not overlap with the subset of similar variables for another k and the corresponding constraints. This implies that (<ref>) can be rewritten as

J_q(𝐰) = ∑_{k=1}^M min_{ξ_ij,𝐰^(k)} ∑_i,j (ξ_ij^(k))^2 + λ‖𝐰^(k)‖^2,

and the problem can be decomposed.

So, we solve the problem (<ref>)-(<ref>) for every k = 1,...,M and get M vectors 𝐰^(k) which form the vector 𝐰. The above means that the optimal weights are determined separately for individual forests.

§.§ A general algorithm for the SDF training and testing

In sum, we can write a general algorithm for training the SDF (see Algorithm <ref>). Its complexity mainly depends on the number of levels.

Having the trained SDF with the computed weights 𝐰 for every cascade level, we can make a decision about the semantic similarity of a new pair of examples 𝐱_a and 𝐱_b. First, the two vectors are concatenated. By using the trained decision trees and the weights 𝐰 for every level q, the pair is augmented at each level. Finally, we get

𝐱_ab^(Q) = (𝐱_a^(Q-1), 𝐱_b^(Q-1), 𝐯_ab).

Here 𝐯_ab is the augmented part of the vector 𝐱_ab^(Q), consisting of elements from the subvectors 𝐯_0 and 𝐯_1 corresponding to the class c = 0 and to the class c = 1, respectively. The original examples 𝐱_a and 𝐱_b are semantically similar if the sum of all elements from 𝐯_0 is larger than the sum of elements from 𝐯_1, i.e., 𝐯_0·1^T > 𝐯_1·1^T, where 1 is the vector of ones. In contrast, the condition 𝐯_0·1^T < 𝐯_1·1^T means that 𝐱_a and 𝐱_b are semantically dissimilar and y_ab = 1.
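Returning to the training stage for a moment: the per-forest problem (<ref>)-(<ref>) can equivalently be solved in its unconstrained hinge form with any constrained optimizer. The following is one possible sketch (toy data, assuming scipy; it is not the actual code used for the experiments in the next section):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, n_pairs, lam = 10, 50, 1e-2   # trees in the forest, training pairs, lambda

p0 = rng.random((n_pairs, T))    # p_ij,0^(t,k): tree probabilities of class 0
p1 = 1.0 - p0                    # p_ij,1^(t,k)
z = rng.choice([-1.0, 1.0], size=n_pairs)   # z_ij derived from y_ij
P = z[:, None] * (p0 - p1)       # P_ij^(t,k) = z_ij (p_ij,0 - p_ij,1)

def J(w):
    """Sum of squared hinge terms plus the regularizer lambda ||w||^2."""
    xi = np.maximum(0.0, P @ w)  # xi_ij = max(0, sum_t P_ij^t w^t)
    return np.sum(xi**2) + lam * np.sum(w**2)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)  # simplex: sum = 1
res = minimize(J, np.full(T, 1.0 / T), method="SLSQP",
               bounds=[(0.0, 1.0)] * T, constraints=cons)
w_opt = res.x                    # trained tree weights for this forest

# Augmented class vector of one pair: v_c = sum_t p_ij,c^(t) w^(t)
v0, v1 = p0[0] @ w_opt, p1[0] @ w_opt
print(w_opt.round(3), v0, v1)
```

Since J is convex and the simplex is a convex set, any local solver of this kind reaches the global optimum of the per-forest problem.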
We can introduce a threshold τ for more robust decision making. The examples 𝐱_a and 𝐱_b are classified as semantically similar with y_ab = 0 if 𝐯_0·1^T - 𝐯_1·1^T ≥ τ. The case 0 ≤ 𝐯_0·1^T - 𝐯_1·1^T ≤ τ can be viewed as indeterminate.

It is important to note that the gcForest, i.e., the SDF with identical weights, can be regarded as a special case of the SDF.

§ NUMERICAL EXPERIMENTS

We compare the SDF with the gcForest whose inputs are concatenated examples from several data sets. In other words, we compare the SDF having computed (trained) weights with the SDF having identical weights. The SDF has the same cascade structure as the standard gcForest described in <cit.>. Each level (layer) of the cascade structure consists of 2 complete-random tree forests and 2 random forests. Three-fold cross-validation is used for the class vector generation. The number of cascade levels is automatically determined.

Software in Python implementing the gcForest is available at https://github.com/leopiney/deep-forest. We modify this software in order to implement the procedure for computing the optimal weights and the weighted averages v_ij,c^(k). Moreover, we use pairs of concatenated examples composed of individual examples as training and testing data.

Every accuracy measure A used in the numerical experiments is the proportion of correctly classified cases on a sample of data. To evaluate the average accuracy, we perform a cross-validation with 100 repetitions, where in each run we randomly select N training data and N_test = 2N/3 test data.

First, we compare the SDF with the gcForest by using some public data sets from the UCI Machine Learning Repository <cit.>: the Yeast data set (1484 instances, 8 features, 10 classes), the Ecoli data set (336 instances, 8 features, 8 classes), the Parkinsons data set (197 instances, 23 features, 2 classes), and the Ionosphere data set (351 instances, 34 features, 2 classes). More detailed information about the data sets can be found in the respective data resources. Different values of the regularization hyper-parameter λ have been tested, choosing those leading to the best results.

In order to investigate how the number of decision trees impacts the classification accuracy, we study the SDF with different numbers of trees, namely, we take T_k = T = 100, 400, 700, 1000. It should be noted that Zhou and Feng <cit.> used 1000 trees in every forest.

Results of numerical experiments for the Parkinsons data set are shown in Table <ref>. It contains the accuracy measures obtained for the gcForest (denoted as gcF) and the SDF as functions of the number of trees T in every forest and the number N = 100, 500, 1000, 2000 of pairs in the training set. It can be seen from Table <ref> that the accuracy of the SDF exceeds the same measure of the gcForest in most cases. Moreover, the difference is rather large for small amounts of training data. In particular, the largest differences between the accuracy measures of the SDF and the gcForest are observed at T = 400, 1000 and N = 100. Similar results of numerical experiments for the Ecoli data set are given in Table <ref>. It is interesting to point out that the number of trees in every forest significantly impacts the difference between the accuracy measures of the SDF and the gcForest. It follows from Table <ref> that this difference is smallest for a large number of trees and a large amount of training data. If we look at the last row of Table <ref>, then we see that the accuracy 0.915 obtained for the SDF with T = 100 is reached by the gcForest only with T = 1000. The largest difference between the accuracy measures of the SDF and the gcForest is observed at T = 100 and N = 100. The same can be seen from Table <ref>.
This implies that the proposed modification of the gcForest allows us to reduce the training time.

Table <ref> provides accuracy measures for the Yeast data set. We can again see that the proposed SDF outperforms the gcForest in most cases. It is interesting to note from Table <ref> that an increasing number of trees in every forest may lead to reduced accuracy measures. If we look at the row of Table <ref> corresponding to N = 500 pairs in the training set, then we can see that the accuracy measures for 100 trees exceed the same measures for larger numbers of trees. Moreover, the largest difference between the accuracy measures of the SDF and the gcForest is observed at T = 1000 and N = 100. Numerical results for the Ionosphere data set are represented in Table <ref>. It follows from Table <ref> that the largest difference between the accuracy measures of the SDF and the gcForest is observed at T = 1000 and N = 500.

The numerical results for all analyzed data sets show that the SDF significantly outperforms the gcForest for small numbers of training data (N = 100 or 500). This is an important property of the SDF, which is especially efficient when the amount of training data is rather small.

It should be noted that the multi-grained scanning proposed in <cit.> was not applied to the above data sets, which have relatively small numbers of features. The above numerical results have been obtained by using only the forest cascade structure.

When we deal with large-scale data, the multi-grained scanning scheme should be used. In particular, for analyzing the well-known MNIST data set, we used the same scheme for window sizes as proposed in <cit.>, where feature windows with sizes ⌊d/16⌋, ⌊d/9⌋, ⌊d/4⌋ are chosen for d raw features. We study the SDF by applying the MNIST database, which is a commonly used large database of 28×28 pixel handwritten digit images <cit.>. It has a training set of 60,000 examples and a test set of 10,000 examples. The digits are size-normalized and centered in a fixed-size image. The data set is available at http://yann.lecun.com/exdb/mnist/. The main problem in using the multi-grained scanning scheme is that pairs of the original examples are concatenated. As a result, direct scanning leads to scanning windows covering some parts of every example belonging to a concatenated pair, which do not correspond to the images themselves. Therefore, we apply the following modification of the multi-grained scanning scheme. Two identical windows simultaneously scan the two concatenated images such that pairs of feature windows are produced by this procedure, which are concatenated for processing by means of the forest cascade. Fig. <ref> illustrates the used procedure.
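To make the paired scanning concrete, here is a minimal sketch (the window size and stride are illustrative choices of this sketch, not the settings of <cit.>):

```python
import numpy as np

def paired_windows(img_a, img_b, win=7, stride=7):
    """Scan two images with identical sliding windows and concatenate
    each corresponding pair of feature windows, as in Fig. <ref>."""
    h, w = img_a.shape
    feats = []
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            wa = img_a[r:r + win, c:c + win].ravel()
            wb = img_b[r:r + win, c:c + win].ravel()
            feats.append(np.concatenate([wa, wb]))
    return np.array(feats)              # one row per window position

a, b = np.zeros((28, 28)), np.ones((28, 28))
print(paired_windows(a, b).shape)       # (16, 98): 4x4 positions, 2*49 features
```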
One of the great advantages of the gcForest is its automatic determination of the number of cascade levels. As shown by Zhou and Feng <cit.>, the performance of the whole cascade is estimated on a validation set after training each level, and the training procedure in the gcForest terminates if there is no significant performance gain. It turns out that the value of z_ij significantly impacts the number of cascade levels when this termination procedure is applied. Moreover, we can adaptively change the values of z_ij with every level. It has been revealed that one of the best-performing updates of z_ij is z_ij^(q) = 2 z_ij^(q-1), where z_ij^(1) = -1 for y_ij = 0 and 1 for y_ij = 1. Of course, this is an empirical observation. However, it can be taken as a direction for further improving the SDF.

§ CONCLUSION

One implementation of the SDF has been presented in this paper. It should be noted that other modifications of the SDF can be obtained. First of all, we can improve the optimization algorithm by applying a more complex loss function and computing the optimal weights, for example, by means of the Frank-Wolfe algorithm <cit.>. We can use a more powerful optimization algorithm, for example, the algorithm proposed by Hazan and Luo <cit.>. Moreover, we are not restricted to convex loss functions, because there are efficient optimization algorithms for the non-convex case, for example, a non-convex modification of the Frank-Wolfe algorithm proposed by Reddi et al. <cit.>, which allow us to solve the considered optimization problems. The trees and forests can also be replaced with other classification approaches, for example, with SVMs and boosting algorithms. The above modifications can be viewed as directions for further research.

Linear combinations of weights for every forest have been used in the SDF. However, this class of combinations can be extended by considering non-linear functions of the weights. Moreover, it turns out that the weights of trees can model various machine-learning peculiarities and allow us to solve many machine-learning tasks by means of the gcForest. This is also a direction for further research.

It should be noted that the weights have been restricted by constraints of the form (<ref>), i.e., the weights of every forest belong to the unit simplex whose dimensionality is defined by the number of trees in the forest. However, numerical experiments have illustrated that it is useful to reduce the set of weights in some cases. Moreover, this reduction can be carried out adaptively by taking into account the classification error at every level. One of the ways to adaptively reduce the unit simplex is to apply imprecise statistical models, for example, the linear-vacuous mixture or imprecise ε-contaminated models proposed by Walley <cit.>. This study is also a direction for further research.

We have considered a weakly supervised learning algorithm for which there is no information about the class labels of individual training examples, and only the semantic similarity of pairs of training data is known. It is also interesting to extend the proposed ideas to the case of fully supervised algorithms, where the class labels of individual training examples are known. The main goal of fully supervised distance metric learning is to use discriminative information in distance metric learning to keep all the data samples in the same class close and those from different classes separated <cit.>.
Therefore, another direction for further research is to adapt the proposed algorithm to the case where class labels are available.

§ ACKNOWLEDGEMENT

The reported study was partially supported by RFBR, research project No. 17-01-00118.

References

[Bellet-etal-2013] A. Bellet, A. Habrard, and M. Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 28 Jun 2013.
[Berlemont-etal-2015] S. Berlemont, G. Lefebvre, S. Duffner, and C. Garcia. Siamese neural network based similarity metric for inertial gesture classification and rejection. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, volume 1, pages 1-6. IEEE, May 2015.
[Bertinetto-etal-2016] L. Bertinetto, J. Valmadre, J.F. Henriques, A. Vedaldi, and P.H.S. Torr. Fully-convolutional siamese networks for object tracking. arXiv:1606.09549v2, 14 Sep 2016.
[Bromley-etal-1993] J. Bromley, J.W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Sackinger, and R. Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):737-744, 1993.
[LeCapitaine-2016] H. Le Capitaine. Constraint selection in metric learning. arXiv:1612.04853v1, 14 Dec 2016.
[Chen-Salman-2011] K. Chen and A. Salman. Extracting speaker-specific information with a regularized siamese deep network. In Advances in Neural Information Processing Systems 24 (NIPS 2011), pages 298-306. Curran Associates, Inc., 2011.
[Chopra-etal-2005] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546. IEEE, 2005.
[Dong-Du-Zhang-2015] Y. Dong, B. Du, and L. Zhang. Target detection based on random forest metric learning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(4):1830-1838, 2015.
[Frank-Wolfe-1956] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, March 1956.
[Hazan-Luo-2016] E. Hazan and H. Luo. Variance-reduced and projection-free stochastic optimization. In Proceedings of the 33rd International Conference on Machine Learning, volume 48 of ICML'16, pages 1263-1271, 2016.
[Hu-Lu-Tan-2014] J. Hu, J. Lu, and Y.-P. Tan. Discriminative deep metric learning for face verification in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1875-1882. IEEE, 2014.
[Kedem-etal-2012] D. Kedem, S. Tyree, K. Weinberger, F. Sha, and G. Lanckriet. Non-linear metric learning. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2582-2590. Curran Associates, Inc., 2012.
[Koch-etal-2015] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 1-8, Lille, France, 2015.
[Kulis-2012] B. Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287-364, 2012.
[Leal-Taixe-etal-2016] L. Leal-Taixe, C. Canton-Ferrer, and K. Schindler. Learning by tracking: Siamese CNN for robust target association. arXiv preprint arXiv:1604.07866, 26 Apr 2016.
[LeCun-etal-1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[Lichman:2013] M. Lichman. UCI Machine Learning Repository, 2013.
[Mu-Ding-2013] Y. Mu and W. Ding. Local discriminative distance metrics and their real world applications. In 2013 IEEE 13th International Conference on Data Mining Workshops (ICDMW), pages 1145-1152. IEEE, Dec 2013.
[Norouzi-etal-2012] M. Norouzi, D. Fleet, and R. Salakhutdinov. Hamming distance metric learning. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1070-1078. Curran Associates, Inc., 2012.
[Reddi-etal-2016] S.J. Reddi, S. Sra, B. Poczos, and A. Smola. Stochastic Frank-Wolfe methods for nonconvex optimization. arXiv:1607.08254v2, July 2016.
[Srivastava-etal-2014] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
[Walley91] P. Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London, 1991.
[Wang-etal-2016] B. Wang, L. Wang, B. Shuai, Z. Zuo, T. Liu, C.K. Luk, and G. Wang. Joint learning of convolutional neural networks and temporally constrained metrics for tracklet association. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1-8. IEEE, 2016.
[Wolpert-1992] D.H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-259, 1992.
[Xiong-etal-2012] C. Xiong, D. Johnson, R. Xu, and J.J. Corso. Random forests for metric learning with implicit pairwise position dependence. arXiv:1201.0610v1, Jan 2012.
[Xu-Weinberger-Chapelle-2012] Z. Xu, K.Q. Weinberger, and O. Chapelle. Distance metric learning for kernel machines. arXiv:1208.3422, 2012.
[Zheng-etal-2016] L. Zheng, S. Duffner, K. Idrissi, C. Garcia, and A. Baskurt. Siamese multi-layer perceptrons for dimensionality reduction and face identification. Multimedia Tools and Applications, 75(9):5055-5073, 2016.
[Zhou-Feng-2017] Z.-H. Zhou and J. Feng. Deep forest: Towards an alternative to deep neural networks. arXiv:1702.08835v1, February 2017.
http://arxiv.org/abs/1704.08715v1
{ "authors": [ "Lev V. Utkin", "Mikhail A. Ryabinin" ], "categories": [ "stat.ML", "cs.LG", "68T10" ], "primary_category": "stat.ML", "published": "20170427185141", "title": "A Siamese Deep Forest" }
Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems

We consider a two-level open quantum system undergoing either pure dephasing, dissipative, or multiply decohering dynamics and show that, whenever the dynamics is non-Markovian, the initial speed of evolution is a monotonic function of the relevant physical parameter driving the transition between the Markovian and non-Markovian behaviour of the dynamics. In particular, within the considered models, a speed increase can only be observed in the presence of backflow of information from the environment to the system.

PACS numbers: 03.65.Aa, 03.65.Yz, 03.65.Ta

Role of non-Markovianity and backflow of information in the speed of quantum evolution

Gerardo Adesso

April 25, 2017

Introduction. The inevitable interaction between any system and its surroundings makes the study of open system dynamics indispensable <cit.>. This is especially true within the quantum realm, wherein the environment has in general a major detrimental effect on the quantum features of the system and thus hinders the performance of quantum technologies <cit.>. However, when the system-environment correlation time approaches any of the time-scales characterizing the system dynamics, i.e., within the so-called non-Markovian regime <cit.>, it may happen that reservoir memory effects give rise to revivals of the quantum properties of the system, a phenomenon that is known as backflow of information from the environment to the system <cit.>. The possibility to exploit the environment itself to combat decoherence is one of the many reasons that have recently attracted a tremendous interest into the characterization, detection, and quantification of non-Markovian dynamics <cit.>.

In particular, some effort has recently been devoted to investigating the role played by non-Markovianity in the speed of evolution of a quantum system <cit.>, whose control is an essential ingredient in many operational tasks <cit.>. For example, when the open quantum system is used as a quantum memory, one needs longer coherence times, and thus slowing down the noisy dynamics can be beneficial <cit.>. On the other hand, if one is performing a quantum logic gate on the system, it is instead the speeding up of the evolution that is desirable in order to reach the fastest possible computation time <cit.>. The authors of Refs. <cit.> investigated the effect of non-Markovianity on some instances of quantum speed limits holding for open quantum processes, expressed as lower bounds on the evolution time necessary to go from an initial state to a target state through a given noisy dynamics <cit.>. More specifically, they analyzed the tightness of these lower bounds, compared with an actual fixed evolution time, when changing the relevant physical parameter that determines the transition between the Markovian and non-Markovian regimes of the dynamics. In <cit.> it was shown that some examples of quantum speed limits can become less tight when increasing the degree of non-Markovianity of the dynamics of a two-level atom on resonance with a lossy cavity.
On the other hand, in <cit.> it was shown that the same quantum speed limits adopted in <cit.> become tighter when increasing the degree of non-Markovianity of the dynamics of the polarization degree of freedom of a photon undergoing pure dephasing due to the interaction with the frequency degrees of freedom of the photon itself. However, it is still not clear whether the fact that a quantum speed limit becomes, e.g., less tight with increasing degree of non-Markovianity implies that the corresponding actual evolution time also decreases, i.e., that non-Markovianity speeds up the evolution.

In this paper we analyze the behaviour of the actual speed of the evolution of a two-level quantum system undergoing paradigmatic examples of purely dephasing, dissipative and multiply decohering dynamics amenable to an analytical solution. We show that, when the dynamics is non-Markovian, the initial value of the speed of evolution is a monotonic function of the relevant physical parameter driving the transition between Markovianity and non-Markovianity of the evolution (see Table <ref>), while this need not be the case when the dynamics is Markovian. More specifically, within the aforementioned models, we show that a speed-up of the evolution can only happen in the presence of information backflow from the environment to the system. This clarifies the role of specific non-Markovian signatures in achieving speed-ups, which may have relevant implications for quantum technologies.

Non-Markovian dynamics. The evolution ρ(t) over the time interval t of any initial quantum state ρ(0) can be characterized by a one-parameter family {Λ_t | t ≥ 0, Λ_0 = 𝕀} of completely positive and trace preserving (CPTP) maps, so-called dynamical maps, as follows <cit.>:

ρ(t) = Λ_t[ρ(0)].

If the quantum system is closed, it undergoes a reversible unitary evolution, so that the corresponding family of dynamical maps is a one-parameter group, i.e.: (i) it contains the identity element; (ii) it is closed under composition of any two elements; (iii) such composition is associative; (iv) the inverse Λ_t^{-1} of every element exists and is also an element of the family. On the other hand, when the quantum system is open, it undergoes a noisy irreversible evolution, which prevents the corresponding family of dynamical maps from being a group, as property (iv) is inevitably violated: either Λ_t^{-1} does not exist for some t, or, if Λ_t^{-1} exists for every t, it is not contained within the family of dynamical maps describing the evolution.

A particular and well-known class of open evolutions is such that the corresponding family of dynamical maps forms a one-parameter semi-group <cit.>, which satisfies all the remaining properties (i), (ii) and (iii). The semi-group property of a family of dynamical maps can be succinctly characterized by the following relation:

Λ_t = Λ_{t-s} Λ_s,

holding for any 0 ≤ s ≤ t. This property means that the map can be divided into infinitely many identical steps, in such a way that the ensuing dynamics can be intuitively interpreted as being memoryless. This class of open evolutions represents the prototypical example of Markovian dynamics.

The semi-group property can be easily generalized by introducing the notion of CP-divisibility <cit.>. The dynamics {Λ_t} is said to be CP-divisible if there exists a two-parameter family {Λ̃_{t,s}} of CPTP maps, which need not be within the family {Λ_t}, such that:

Λ_t = Λ̃_{t,s} Λ_s,

for any 0 ≤ s ≤ t.
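As a concrete illustration of this definition, the following minimal sketch tests whether the intermediate map Λ̃_{t,s} of a qubit pure-dephasing channel is CPTP by checking the positivity of its Choi matrix. The dephasing map multiplies the qubit coherences by a decoherence function G(t); the specific G used here is a hypothetical example with revivals, not one of the models studied below.

```python
import numpy as np

def dephasing_choi_intermediate(G, t, s):
    """Choi matrix of the intermediate map of a qubit dephasing channel.

    The intermediate map between times s <= t multiplies the coherences by
    g = G(t)/G(s); its normalized Choi matrix has eigenvalues (1 +/- |g|)/2
    and 0, so the map is CP iff |G(t)| <= |G(s)|.
    """
    g = G(t) / G(s)
    return 0.5 * np.array([[1, 0, 0, np.conj(g)],
                           [0, 0, 0, 0],
                           [0, 0, 0, 0],
                           [g, 0, 0, 1]], dtype=complex)

def is_cp(choi, tol=1e-12):
    return np.min(np.linalg.eigvalsh(choi)) >= -tol

# Hypothetical decoherence function with revivals: |G| is non-monotonic,
# so some intermediate maps fail to be CP (CP-indivisible dynamics).
G = lambda t: np.exp(-0.2 * t) * np.cos(t)
print(is_cp(dephasing_choi_intermediate(G, t=1.0, s=0.5)))  # True: |G| decreased
print(is_cp(dephasing_choi_intermediate(G, t=4.0, s=2.0)))  # False: |G| revived
```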
Analogously to the case where the family of dynamical maps forms a semi-group, a CP-divisible dynamics can be seen as the concatenation of infinitely many other dynamical maps, and thus can be loosely interpreted as being memoryless; it is commonly considered to be Markovian.

Yet the border between Markovian and non-Markovian dynamics is still elusive, as consensus on where to draw it, and on how to quantify the non-Markovianity of maps beyond such a border, is still lacking in the current literature <cit.>. On one hand, in the CP-divisibility paradigm one may quantify the non-Markovianity degree by measuring how far the intermediate map Λ̃_{t,s} appearing in Eq. (<ref>) is from being CPTP <cit.>. On the other hand, one may consider the backflow of information from the environment to the system as a genuinely non-Markovian signature. Information manifests itself in many forms, such as quantum state distinguishability, coherence, and correlations. All these manifestations of information share a common property, i.e., being contractive under CPTP maps, which is due to the fact that CPTP maps are the mathematical counterpart of noise and thus can only produce a loss of information. However, if the dynamics is not CP-divisible, the fact that the intermediate map Λ̃_{t,s} is not CPTP may give rise to temporary revivals of information throughout the evolution. This alternative paradigm thus estimates the degree of non-Markovianity by measuring how much information flows back to the system during the entire evolution <cit.>. When considering this paradigm in the following, we will specifically adopt the indicator of backflow of information based on trace distance, as introduced in <cit.>.

Speed of quantum evolution. Information theory stands as the fundamental bridge linking the non-Markovianity of a dynamics with the speed of the corresponding evolution <cit.>. The latter can indeed be naturally introduced by resorting to any CPTP-contractive Riemannian metric g defined on the set of quantum states, which assigns to the neighbouring states ρ and ρ+dρ the squared infinitesimal distance

(ds)^2 = g_ρ(dρ, dρ).

Indeed, by using Eq. (<ref>), the speed of the quantum evolution ρ(t) = Λ_t[ρ(0)] at time t can be immediately defined as

v(t) = ds/dt = √(g(t)),

where g(t) = g_{ρ(t)}(ρ̇(t), ρ̇(t)). The Morozova-Chencov-Petz theorem states that there are infinitely many such metrics <cit.>, two paradigmatic examples of which being the Bures-Uhlmann metric <cit.>, also known as the quantum Fisher information metric, and the Wigner-Yanase metric <cit.>. In this paper we will adopt the former, for which the following useful relation holds as well <cit.>:

g(t) = -2 d^2/dt^2 F(ρ(0), ρ(t)),

where F(ρ,σ) = [Tr√(√ρ σ √ρ)]^2 is the Uhlmann fidelity between the states ρ and σ.

We now investigate the behaviour of the initial speed of evolution of a two-level quantum system undergoing typical dynamics. We will impose the initial condition ρ(0) = |ψ⟩⟨ψ|, corresponding to the qubit being in an arbitrary pure state |ψ⟩ = cos(θ/2)|0⟩ + e^{iϕ} sin(θ/2)|1⟩, with Bloch vector

n⃗(0) = {sinθ cosϕ, -sinθ sinϕ, cosθ},

where θ ∈ [0,π] and ϕ ∈ [0,2π[. We will calculate the fidelity between ρ(0) and ρ(t) via the general formula <cit.>

F(ρ(0), ρ(t)) = 1/2 [1 + n⃗(0)·n⃗(t) + √((1 - n⃗(0)·n⃗(0))(1 - n⃗(t)·n⃗(t)))],

where n⃗(t) is the Bloch vector of the evolved state ρ(t).
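Since the initial state is pure (|n⃗(0)| = 1), the square-root term in the fidelity formula vanishes, and the speed takes a compact Bloch-vector form that underlies all the model-specific expressions below. The following short derivation is a reconstruction from the definitions above, not a step spelled out in the original:

```latex
% For a pure initial state, |\vec{n}(0)| = 1, so
% F(\rho(0),\rho(t)) = \tfrac{1}{2}\left[1 + \vec{n}(0)\cdot\vec{n}(t)\right].
% Combining this with v(t)^2 = g(t) = -2\,\tfrac{d^2}{dt^2}F(\rho(0),\rho(t)) gives
\begin{equation}
  v(t)^2
  = -2\,\frac{d^2}{dt^2}\,\frac{1}{2}\Big[1 + \vec{n}(0)\cdot\vec{n}(t)\Big]
  = -\,\vec{n}(0)\cdot\ddot{\vec{n}}(t).
\end{equation}
% Inserting the model-specific Bloch vectors \vec{n}(t) reported below reproduces,
% e.g., v(t)^2 = -\mathrm{Re}\big(\ddot{G}(t)\big)\sin^2\theta for pure dephasing.
```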
Results for purely dephasing dynamics. We begin by considering a purely dephasing dynamics, described by the following time-local master equation <cit.>:

ρ̇(t) = γ(t) (σ_z ρ(t) σ_z - ρ(t)),

where γ(t) = -Ġ(t)/G(t) is the decay rate and G(t) is the decoherence function accounting for all the environmental features relevant to the system dynamics. Inserting into Eq. (<ref>) the Bloch vector of the initial state, given by Eq. (<ref>), and of the corresponding evolved state, given by

n⃗(t) = {Re(e^{iϕ} G(t)) sinθ, -Im(e^{iϕ} G(t)) sinθ, cosθ},

we get

F(ρ(0), ρ(t)) = 1/4 [3 + cos2θ + 2 Re(G(t)) sin^2θ].

Therefore, by using Eqs. (<ref>) and (<ref>), we immediately obtain that the squared speed of evolution at time t is given by

v(t)^2 = -Re(G̈(t)) sin^2θ.

We explore two particular physical instances governed by the master equation of Eq. (<ref>). We first consider a qubit interacting with a bosonic reservoir at zero temperature with Ohmic spectrum <cit.>, whose decoherence function is

G(t) = e^{-Υ(t)},

where γ(t) = ω_c [1 + (ω_c t)^2]^{-s/2} Γ[s] sin[s arctan(ω_c t)], with ω_c the cut-off frequency, s the Ohmicity parameter, and Γ[x] the Euler gamma function. This dynamics is CP-divisible when s ≤ 2, while it is CP-indivisible and manifests backflow of information for any s > 2 <cit.>. By substituting Eq. (<ref>) into Eq. (<ref>), we get that the squared initial speed of evolution is

v(0)^2 = 2 ω_c^2 Γ[s+1] sin^2θ,

which is a strictly monotonically increasing function of s for any s > 2, i.e., in the whole non-Markovian region, while it is no longer a monotonic function of s when s ≤ 2.

We then consider another physical example of purely dephasing dynamics, wherein the two-level open quantum system is implemented by the polarization degree of freedom of a photon, with its frequency degrees of freedom playing the role of the environment, which is coupled to the system via a birefringent material <cit.>. The corresponding decoherence function is now given by

G(t) = e^{-σ^2 (Δn)^2 t^2/2} (e^{iω_1 Δn t} cos^2ξ + e^{iω_2 Δn t} sin^2ξ),

where Δn is the difference between the refraction indexes of the birefringent material for a photon in the vertical and horizontal polarization, respectively, while σ, ω_1, ω_2 and ξ are the parameters characterizing the bimodal distribution representing the probability of finding the photon in a mode with a given frequency. More specifically, σ is the common width of the two peaks, which are centred at the frequencies ω_1 and ω_2, and ξ ∈ [0, π/2] is the parameter controlling the relative weight of the two peaks. This dynamics is not CP-divisible and manifests backflow of information when ξ ∈ [ξ_1, ξ_2], with ξ_1 and ξ_2 provided in <cit.>. By plugging Eq. (<ref>) into Eq. (<ref>), we get that the squared initial speed of evolution is given by

v(0)^2 = 1/2 (Δn)^2 [2σ^2 + ω_1^2 + ω_2^2 - (ω_2^2 - ω_1^2) cos2ξ] sin^2θ,

which is a strictly monotonically increasing (resp. decreasing) function of ξ for any ξ ∈ [0, π/2] when ω_2 > ω_1 (ω_1 > ω_2).

Results for dissipative dynamics. Let us now study the initial speed of evolution of a qubit undergoing amplitude damping, a paradigmatic example of dissipative evolution. This is described by the following master equation <cit.>:

ρ̇(t) = γ(t) (σ_- ρ(t) σ_+ - {σ_+ σ_-, ρ(t)}/2),

where γ(t) = -2 Re(Ġ(t)/G(t)) is the decay rate, G(t) is the decoherence function, while σ_± = σ_x ± iσ_y are the raising and lowering operators of the qubit. By imposing the initial condition Eq. (<ref>), we get that the Bloch vector of the evolved state is

n⃗(t) = {Re(e^{-iϕ} G(t)) sinθ, Im(e^{-iϕ} G(t)) sinθ, 1 - |G(t)|^2 (1 - cosθ)}.
The fidelity between the evolved state and the initial state, according to Eq. (<ref>), is thus given by

F(ρ(0), ρ(t)) = 1/2 (1 + cosθ - 2|G(t)|^2 cosθ sin^2(θ/2) + Re(G(t)) sin^2θ).

By using Eqs. (<ref>) and (<ref>), the squared speed of evolution is then

v(t)^2 = 2 cosθ sin^2(θ/2) (d^2|G(t)|^2/dt^2) - Re(G̈(t)) sin^2θ.

We now consider the Jaynes-Cummings model as a physical implementation of an amplitude-damped qubit <cit.>. This model consists of a two-level atom immersed in a lossy cavity with Lorentzian spectral density, with decoherence function

G(t) = e^{-(λ-iΔ)t/2} [cosh(Ωt/2) + ((λ-iΔ)/Ω) sinh(Ωt/2)],

where Ω = √(λ^2 - 2iλΔ - 4W^2), W = √(γ_M λ/2 + Δ^2/4), λ is the width of the reservoir spectral density, which is centred at a frequency detuned from the atomic frequency by the amount Δ, and finally γ_M is the effective coupling constant.

When the Jaynes-Cummings model is on resonance, i.e., Δ = 0, the dynamics is divisible when γ_M ≤ λ/2, while it gives rise to both CP-indivisibility and backflow of information for any γ_M > λ/2. On the other hand, by increasing the detuning Δ, the threshold value of γ_M/λ above which the dynamics is CP-indivisible decreases <cit.>. By replacing Eq. (<ref>) into Eq. (<ref>), we get that the squared initial speed of evolution is given by

v(0)^2 = 2 γ_M λ sin^4(θ/2),

which is a strictly monotonically increasing function of γ_M.

Results for multiply decohering dynamics. We finally consider an example of multiply decohering dynamics, that is, a qubit undergoing a Pauli channel <cit.>. This evolution can be described by the following time-local master equation:

ρ̇(t) = Σ_{j=1}^{3} γ_j(t) (σ_j ρ(t) σ_j - ρ(t)),

where the γ_j(t)'s are the decay rates. The solution of the above master equation is given by ρ(t) = Σ_{j=0}^{3} p_j(t) σ_j ρ(0) σ_j, where p_{0,1}(t) = 1/4 (1 + λ_1(t) ± λ_2(t) ± λ_3(t)) and p_{2,3}(t) = 1/4 (1 - λ_1(t) ± λ_2(t) ∓ λ_3(t)), with λ_j(t) = e^{-(Υ_k(t) + Υ_l(t))} (j ≠ k ≠ l ∈ {1,2,3}) and Υ_j(t) = 2 ∫_0^t γ_j(t') dt'. By imposing again the initial condition expressed in Eq. (<ref>) and assuming λ_2(t) = λ_1(t), we get that the Bloch vector of the evolved state is given by

n⃗(t) = {λ_1(t) cosϕ sinθ, -λ_1(t) sinϕ sinθ, λ_3(t) cosθ}.

The fidelity between the above evolved state ρ(t) and the initial state ρ(0) is thus simply obtained by using Eq. (<ref>), yielding:

F(ρ(0), ρ(t)) = 1/4 [2 + λ_1(t) + λ_3(t) + (λ_3(t) - λ_1(t)) cos2θ].

Therefore, by using Eqs. (<ref>) and (<ref>), we easily get that the squared speed of evolution at time t is given by

v(t)^2 = -1/2 [λ̈_1(t) + λ̈_3(t) + (λ̈_3(t) - λ̈_1(t)) cos2θ].

Let us first consider the case with decay rates given by γ_1(t) = γ_2(t) = λ/2 and γ_3(t) = -(ω/2) tanh(ωt), i.e.,

λ_1(t) = λ_2(t) = e^{-λt} cosh(ωt),  λ_3(t) = e^{-2λt},

where 0 ≤ ω ≤ λ. This dynamics is CP-divisible when ω = 0, while it is CP-indivisible for any 0 < ω ≤ λ. However, there is no backflow of information for any value of ω <cit.>. By replacing Eq. (<ref>) into Eq. (<ref>) we get that the squared initial speed of evolution is given by

v(0)^2 = -4λ^2 cos^2θ - (λ^2 + ω^2) sin^2θ,

which is a strictly monotonically decreasing function of ω.

Let us now turn to considering the case of γ_1(t) = γ_2(t) = λ/2 and γ_3(t) = (ω/2) tan(ωt), i.e.,

λ_1(t) = λ_2(t) = e^{-λt} |cos(ωt)|,  λ_3(t) = e^{-2λt},

where λ ≥ 0 and ω ≥ 0. This dynamics is CP-divisible when ω = 0, while it is both CP-indivisible and manifests backflow of information for any ω > 0 <cit.>. By replacing Eq. (<ref>) into Eq. (<ref>) we get that the squared initial speed of evolution is given by

v(0)^2 = -4λ^2 cos^2θ - (λ^2 - ω^2) sin^2θ,

which is a strictly monotonically increasing function of ω.
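As a quick numerical illustration of the backflow statement above, the following minimal sketch, an illustrative reconstruction rather than part of the original analysis, assumes the second Pauli-channel parametrization with λ_1(t) = e^{-λt}|cos(ωt)| and evaluates the trace distance between two antipodal equatorial states evolving under the channel. For this pair the trace distance equals λ_1(t), so its temporary revivals for ω > 0 signal backflow of information in the sense of the trace-distance indicator adopted above.

```python
import numpy as np

lam, omega = 1.0, 3.0            # channel parameters (illustrative values)
t = np.linspace(0.0, 5.0, 2001)

# Bloch-vector contraction factor of the Pauli channel considered above
lam1 = np.exp(-lam * t) * np.abs(np.cos(omega * t))

# For the antipodal equatorial pair n(0) = (+/-1, 0, 0), the trace distance is
# D(t) = |n1(t) - n2(t)|/2 = lam1(t): it inherits the revivals of |cos(omega t)|.
D = lam1
backflow = np.any(np.diff(D) > 1e-12)   # dD/dt > 0 somewhere => information backflow
print("backflow detected:", backflow)    # True for omega > 0, False for omega = 0
```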
The control of the speed of quantum evolution is an indispensable feature in several technological applications <cit.>. By performing an in-depth analysis, we have shown that, whenever the dynamics is non-Markovian, the initial speed of evolution of a qubit undergoing prototypical instances of purely dephasing, dissipative and multiply decohering channels, is a monotonic function of the relevant physical parameter determining the crossover between Markovianity and non-Markovianity of the evolution (see Table <ref>), which in turn may be experimentally controlled in different settings, e.g. in quantum optics <cit.>. More specifically, within the considered models, we have shown that a speed-up of the evolution can only be observed in the presence of information backflow from the environment to the system (as defined in <cit.>).This analysis reveals that the presence of information backflow, which is a specific facet of non-Markovianity attracting increasing interest <cit.>, may play a key role as an enhancer for quantum technologies relying on fast and accurate control of open system dynamics.Our study sheds further light on the interplay between non-divisibility and the speed of evolution of an open quantum evolution. While previous studies were more concerned with (not necessarily saturated) lower bounds to the speed of evolution and how they are affected in the non-Markovian regime <cit.>, this study reveals a precise connection between the actual initial speed of evolution and the relevant model parameters driving the manifestation of non-Markovianity (CP-individisility).Yet a general criterion determining exactly when (and under which physical conditions) an increase of the parameters driving the transition to non-Markovianity amounts to speeding up rather than slowing down the evolution is still missing and certainly deserves future investigation.Acknowledgements. This work is supported by the European Research Council (ERC) Starting Grant GQCOP "Genuine Quantumness in Cooperative Phenomena" (Grant No. 637352), and by the Foundational Questions Institute (fqxi.org) Physics of the Observer Programme (Grant No. FQXi-RFP-1601).apsrev
http://arxiv.org/abs/1704.08061v1
{ "authors": [ "Marco Cianciaruso", "Sabrina Maniscalco", "Gerardo Adesso" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170426112037", "title": "Role of non-Markovianity and backflow of information in the speed of quantum evolution" }
http://arxiv.org/abs/1704.08432v3
{ "authors": [ "Sunyoung Kwon", "Sungroh Yoon" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20170427050308", "title": "DeepCCI: End-to-end Deep Learning for Chemical-Chemical Interaction Prediction" }
1 Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582, Japan
2 Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), University of Tokyo, Kashiwa, Chiba 277-8583, Japan
3 Department of Astronomy, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
4 Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
5 Research Center for the Early Universe, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan
6 The Open University of Japan, Wakaba 2-11, Mihama-ku, Chiba 261-8586, Japan
7 Faculty of Natural Sciences, National Institute of Technology, Kure College, 2-2-11 Agaminami, Kure, Hiroshima 737-8506, Japan
8 Research Center for Space and Cosmic Evolution, Ehime University, Bunkyo-cho 2-5, Matsuyama 790-8577, Japan
9 National Astronomical Observatory, Mitaka, Tokyo 181-8588, Japan
10 Institute of Astronomy, National Tsing Hua University, 101 Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan
11 The Graduate University for Advanced Studies (SOKENDAI), 2-21-1 Osawa, Mitaka, Tokyo 181-8588
12 Subaru Telescope, NAOJ, 650 N Aohoku Pl., Hilo, HI 96720, USA
13 European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748, Garching bei Munchen, Germany
14 Academia Sinica, Institute of Astronomy and Astrophysics, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan
15 Department of Physics, Faculty of Science, Mahidol University, Bangkok 10400, Thailand

Based on data obtained with the Subaru Telescope. The Subaru Telescope is operated by the National Astronomical Observatory of Japan.

[email protected]

Keywords: early universe — galaxies: formation — galaxies: high-redshift

SILVERRUSH. II. First Catalogs and Properties of ∼2,000 Lyα Emitters and Blobs at z ∼ 6-7 Identified over the 14-21 deg^2 Sky

Takatoshi Shibuya^1, Masami Ouchi^1,2, Akira Konno^1,3, Ryo Higuchi^1,4, Yuichi Harikane^1,4, Yoshiaki Ono^1, Kazuhiro Shimasaku^3,5, Yoshiaki Taniguchi^6, Masakazu A. R. Kobayashi^7, Masaru Kajisawa^8, Tohru Nagao^8, Hisanori Furusawa^9, Tomotsugu Goto^10, Nobunari Kashikawa^9,11, Yutaka Komiyama^9,11, Haruka Kusakabe^3, Chien-Hsiu Lee^12, Rieko Momose^10, Kimihiko Nakajima^13, Masayuki Tanaka^9,11, Shiang-Yu Wang^14, and Suraphong Yuma^15

December 30, 2023

We present an unprecedentedly large catalog consisting of 2,230 ≳ L^* Lyα emitters (LAEs) at z = 5.7 and 6.6 on the 13.8 and 21.2 deg^2 sky, respectively, that are identified by the SILVERRUSH program with the first narrowband imaging data of the Hyper Suprime-Cam (HSC) survey. We confirm that the LAE catalog is reliable on the basis of 96 LAEs whose spectroscopic redshifts have already been determined by this program and previous studies. This catalog is also available online. Based on this catalog, we derive the rest-frame Lyα equivalent-width distributions of LAEs at z ≃ 5.7-6.6, which are reasonably explained by exponential profiles with scale lengths of ≃120-170 Å, showing no significant evolution from z ≃ 5.7 to z ≃ 6.6.
We find that 275 LAEs with a large equivalent width (LEW) of > 240 Å are candidates of young metal-poor galaxies and AGNs. We also find that the fraction of LEW LAEs to all LAEs is 21% and 4% at z ≃ 5.7 and z ≃ 6.6, respectively. Our LAE catalog includes 11 Lyα blobs (LABs), i.e., LAEs with spatially extended Lyα emission whose profiles are clearly distinguished from those of stellar objects at the ≳ 3σ level. The number density of the LABs at z = 6-7 is ∼ 10^-7-10^-6 Mpc^-3, being ∼ 10-100 times lower than those claimed for LABs at z ≃ 2-3, suggestive of disappearing LABs at z ≳ 6, albeit with the different selection methods and criteria for the low- and high-z LABs.

§ INTRODUCTION

Lyα emitters (LAEs) are one of the important populations of high-z star-forming galaxies in the paradigm of galaxy formation and evolution. Such galaxies are thought to be typically young (on the order of 100 Myr; e.g., <cit.>), compact (an effective radius of < 1 kpc; e.g., <cit.>), less massive (a stellar mass of 10^8-10^9 M_⊙; e.g., <cit.>), metal-poor (≃0.1 of the solar metallicity; e.g., <cit.>), less dusty than Lyman break galaxies (e.g., <cit.>), and possible progenitors of Milky-Way-mass galaxies (e.g., <cit.>). In addition, LAEs are used to probe the cosmic reionization, because ionizing photons escaping from the large numbers of massive stars formed in LAEs contribute to the ionization of the intergalactic medium (IGM; e.g., <cit.>).

LAEs have been surveyed by imaging observations with dedicated narrow-band (NB) filters targeting the prominent redshifted Lyα emission (e.g., <cit.>). In the large LAE samples constructed by the NB observations, two rare Lyα-emitting populations have been identified: large equivalent width (LEW) LAEs, and LAEs with spatially extended Lyα emission, Lyα blobs (LABs).

LEW LAEs are objects with a large Lyα equivalent width (EW) of ≳ 240 Å, which cannot be reproduced with the normal Salpeter (1955) stellar initial mass function (e.g., <cit.>). Such an LEW is expected to originate from complicated physical processes such as (i) photoionization by young and/or low-metallicity star formation, (ii) photoionization by an active galactic nucleus (AGN), (iii) photoionization by external UV sources (QSO fluorescence), (iv) collisional excitation due to strong outflows (shock heating), (v) collisional excitation due to gas inflows (gravitational cooling), and (vi) clumpy ISM (see e.g., <cit.>). The highly complex radiative transfer of Lyα in the interstellar medium (ISM) makes it difficult to understand the Lyα emitting mechanism (<cit.>).

LABs are spatially extended Lyα gaseous nebulae in the high-z Universe (e.g., <cit.>). The origins of LABs (LAEs with a diameter of ≃20-400 kpc) are also explained by several mechanisms: (1) resonant scattering of Lyα photons emitted from central sources in dense and extended neutral hydrogen clouds (e.g., <cit.>), (2) cooling radiation from gravitationally heated gas in collapsed halos (e.g., <cit.>), (3) shock heating by galactic superwinds originating from starbursts and/or AGN activity (e.g., <cit.>), (4) galaxy major mergers (e.g., <cit.>), and (5) photoionization by external UV sources (QSO fluorescence; e.g., <cit.>). Moreover, LABs have often been discovered in over-density regions at z ≃ 2-3 (e.g., <cit.>). Thus, such LABs could be closely related to the galaxy environments, and might be linked to the formation mechanisms of central massive galaxies in galaxy protoclusters.
During the last decades, Suprime-Cam (SCam) on the Subaru telescope has led the world in identifying such rare Lyα-emitting populations at z ≳ 6 (LEW LAEs; e.g., <cit.>; LABs; e.g., <cit.>). However, the formation mechanisms of these rare Lyα-emitting populations are still controversial due to the small statistics. While LEW LAEs and LABs at z ≃ 2-5 have been studied intensively with samples of ≳100 sources, only a few sources have been found so far at z ≳ 6. Large-area NB data are required to carry out statistical studies of LEW LAEs and LABs at z ≳ 6.

In March 2014, the Subaru telescope started a large-area NB survey using a new wide field-of-view (FoV) camera, Hyper Suprime-Cam (HSC), in a Subaru strategic program (SSP; <cit.>). In the five-year project, HSC equipped with the four NB filters of NB387, NB816, NB921, and NB101 will survey LAEs at z ≃ 2.2, 5.7, 6.6, and 7.3, respectively. The HSC SSP NB survey data consist of two layers, Ultradeep (UD) and Deep (D), covering 2 fields (UD-COSMOS, UD-SXDS) and 4 fields (D-COSMOS, D-SXDS, D-DEEP2-3, D-ELAIS-N1), respectively. The NB816, NB921, and NB101 images will be taken for the UD fields. The NB387, NB816, and NB921 observations will be conducted in 15 HSC-pointing D fields.

Using the large HSC NB data complemented by optical and NIR spectroscopic observations, we launch a research project for Lyα-emitting objects: Systematic Identification of LAEs for Visible Exploration and Reionization Research Using Subaru HSC (SILVERRUSH). The large LAE samples provided by SILVERRUSH enable us to investigate, e.g., LAE clustering (<cit.>), LEW LAEs and LABs (this work), spectroscopic properties of bright LAEs (<cit.>), Lyα luminosity functions (<cit.>), and LAE overdensity (R. Higuchi et al. in preparation). The LAE survey strategy is given by <cit.>. SILVERRUSH is one of twin programs; the other, Great Optically Luminous Dropout Research Using Subaru HSC (GOLDRUSH), is a study of dropouts that is detailed in <cit.>, <cit.>, and <cit.>.

This is the second paper of the SILVERRUSH project. In this paper, we present the LAE selection processes and machine-readable catalogs of the LAE candidates at z ≃ 5.7-6.6. Using the large LAE sample obtained with the first HSC NB data, we examine the redshift evolution of the Lyα EW distributions and of the LAB number density. This paper has the following structure. In Section <ref>, we describe the details of the SSP HSC data. Section <ref> presents the LAE selection processes. In Section <ref>, we check the reliability of our LAE selection. Section <ref> presents the Lyα EW distributions and LABs at z ≃ 6-7. In Section <ref>, we discuss the physical origins of LEW LAEs and LABs. We summarize our findings in Section <ref>.

Throughout this paper, we adopt the concordance cosmology with (Ω_m, Ω_Λ, h) = (0.3, 0.7, 0.7) (<cit.>). All magnitudes are given in the AB system (<cit.>).

§ HSC SSP IMAGING DATA

We use the HSC SSP S16A data products of the g, r, i, z, and y broadband (BB; <cit.>), NB921, and NB816 (<cit.>) images that were obtained in 2014-2016. It should be noted that this HSC SSP S16A data set is significantly larger than that of the first data release in <cit.>. The NB921 (NB816) filter has a central wavelength of λ_c = 9215 Å (8177 Å) and an FWHM of Δλ = 135 Å (113 Å), all of which are area-weighted mean values. The NB921 and NB816 filters trace the redshifted Lyα emission lines at z = 6.580 ± 0.056 and z = 5.726 ± 0.046, respectively. The NB filter transmission curves are shown in Figure <ref>.
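The redshift windows quoted above follow directly from the filter parameters; a minimal sketch, assuming only the rest-frame Lyα wavelength of 1215.67 Å and the area-weighted mean λ_c and FWHM values listed above, reproduces them:

```python
# Redshift window of Lya traced by an NB filter with area-weighted mean
# central wavelength lam_c and FWHM dlam (both in Angstroms).
LYA = 1215.67  # rest-frame Lya wavelength in Angstroms

def lya_redshift_window(lam_c, dlam):
    z_c = lam_c / LYA - 1.0      # central redshift of the filter
    dz = 0.5 * dlam / LYA        # half-width of the redshift window
    return z_c, dz

for name, lam_c, dlam in [("NB921", 9215.0, 135.0), ("NB816", 8177.0, 113.0)]:
    z_c, dz = lya_redshift_window(lam_c, dlam)
    print(f"{name}: z = {z_c:.3f} +/- {dz:.3f}")
# Output: NB921: z = 6.580 +/- 0.056 ; NB816: z = 5.726 +/- 0.046
```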
The central wavelength, FWHM, and bandpass shape of these NB filters are almost uniform over the HSC FoV. The deviations of the λ_c and FWHM values are typically within ≃0.3% and ≃10%, respectively. Thus, we use the area-weighted mean transmission curves in this study. The detailed specifications of these NB filters are given in <cit.>.

Table <ref> summarizes the survey areas, exposure times, and depths of the HSC SSP S16A NB data. The current HSC SSP S16A NB data cover UD-COSMOS, UD-SXDS, D-COSMOS, D-DEEP2-3, and D-ELAIS-N1 for z ≃ 6.6, and UD-COSMOS, UD-SXDS, D-DEEP2-3, and D-ELAIS-N1 for z ≃ 5.7. The effective survey areas of the NB921 and NB816 images are 21.2 and 13.8 deg^2, corresponding to survey volumes of ≃1.9 × 10^7 and ≃1.2 × 10^7 Mpc^3, respectively. The areas of these HSC NB fields are covered by the observations in all the BB filters. The typical limiting magnitudes of the BB filters are g ≃ 26.9, r ≃ 26.5, i ≃ 26.3, z ≃ 25.7, and y ≃ 25.0 (g ≃ 26.6, r ≃ 26.1, i ≃ 25.9, z ≃ 25.2, and y ≃ 24.4) in a 1.″5-diameter aperture at 5σ for the UD (D) fields. The FWHM size of the point spread function in the HSC images is typically ≃0.″8 (<cit.>).

The HSC images were reduced with the HSC pipeline, hscPipe 4.0.2 (<cit.>), which is a code derived from the Large Synoptic Survey Telescope (LSST) software pipeline (<cit.>). The HSC pipeline performs CCD-by-CCD reduction, calibration for astrometry, and photometric zero-point determination. The pipeline then conducts mosaic-stacking that combines reduced CCD images into a large coadd image, and creates source catalogs by detecting and measuring sources on the coadd images. The photometric calibration is carried out with the PanSTARRS1 processing version 2 imaging survey data (<cit.>). The details of the HSC SSP survey, data reduction, and source detection and photometric catalog construction are provided in <cit.>, <cit.>, and <cit.>.

In the HSC images, source detection and photometry were carried out with two methods: unforced and forced. The unforced photometry is a method that measures coordinates, shapes, and fluxes individually in each band image for an object. The forced photometry is a method that carries out photometry by fixing the centroid and shape determined in a reference band and applying them to all the other bands. The algorithm of the forced detection and photometry is similar to the double-image mode of SExtractor (<cit.>), which has been used in most of the previous studies for high-z galaxies. Depending on the magnitudes, S/N values, positions, and profiles of detected sources, one of the BB and NB filters is chosen as the reference band. For merging the catalogs of each band, the object matching radius is not a fixed value, but depends on the area of the regions with a > 5σ sky noise level. We refer to <cit.> for the detailed algorithm used to choose the reference filter and the filter priority.

In the hscPipe detection and photometry, an NB filter is basically chosen as the reference band for NB-bright and BB-faint sources such as LAEs. However, a BB filter is used as the reference band in the case that sources are bright in the BB images. The current version of hscPipe has not implemented the NB-reference forced photometry for BB-bright sources. Under this specification, we could miss BB-bright sources with a spatial offset between the centroids of BB and NB if we used only the forced photometry. Thus, we combine the unforced and forced photometry for the BB-NB colors to identify such BB-bright objects with a spatial offset between the centroids of BB and NB (e.g., <cit.>).
See Section <ref> for details of the LAE selection criteria. We use cmodel magnitudes to estimate the total magnitudes of sources. The cmodel magnitude is a weighted combination of exponential and de Vaucouleurs fits to the light profile of each object. The detailed algorithm of the cmodel photometry is presented in <cit.>. To measure the S/N values for source detections, we use 1.″5-diameter aperture magnitudes.

Table 1. Properties of the HSC SSP S16A NB Data

Field | R.A. (J2000) | Dec. (J2000) | Area (deg^2) | T_exp (hour) | m_lim (5σ, 1.″5φ) | N_LAE,ALL | N_LAE,F

NB921 (z ≃ 6.6)
UD-COSMOS | 10:00:28 | +02:12:21 | 2.05 | 11.25 | 25.6 | 338 | 116
UD-SXDS | 02:18:00 | -05:00:00 | 2.02 | 7.25 | 25.5 | 58 | 23
D-COSMOS | 10:00:60 | +02:13:53 | 5.31 | 2.75 | 25.3 | 244^a | 47^a
D-DEEP2-3 | 23:30:22 | -00:44:38 | 5.76 | 1.00 | 24.9 | 164 | 35
D-ELAIS-N1 | 16:10:00 | +54:17:51 | 6.08 | 1.75 | 25.3 | 349 | 48
Total | — | — | 21.2 | 24.00 | — | 1153 | 269

NB816 (z ≃ 5.7)
UD-COSMOS | 10:00:28 | +02:12:21 | 1.97 | 5.50 | 25.7 | 201 | 176
UD-SXDS | 02:18:00 | -05:00:00 | 1.93 | 3.75 | 25.5 | 224 | 188
D-DEEP2-3 | 23:30:22 | -00:44:38 | 4.37 | 1.00 | 25.2 | 423 | 282
D-ELAIS-N1 | 16:10:00 | +54:17:51 | 5.56 | 1.00 | 25.3 | 229 | 130
Total | — | — | 13.8 | 11.25 | — | 1077 | 776

Notes. Columns: (1) Field. (2) Right ascension. (3) Declination. (4) Survey area with the HSC SQL parameters in Table <ref>. (5) Total exposure time of the NB imaging observations. (6) Limiting magnitude of the NB image defined by the 5σ sky noise in a 1.″5-diameter circular aperture. (7) Number of LAE candidates in the ALL (unforced+forced) catalog. (8) Number of LAE candidates in the forced catalog. ^a The value of N_LAE,ALL (N_LAE,F) includes 30 (7) LAEs selected in UD-COSMOS.

Table 2. HSC SQL Parameters and Flags for Our LAE Selection

Parameter or Flag | Value | Band | Comment
detect_is_tract_inner | True | — | Object is in an inner region of a tract and not in the overlapping region with adjacent tracts
detect_is_patch_inner | True | — | Object is in an inner region of a patch and not in the overlapping region with adjacent patches
countinputs | >=3 | NB | Number of visits at a source position for a given filter
flags_pixel_edge | False | grizy, NB | Located within the images
flags_pixel_interpolated_center | False | grizy, NB | None of the central 3×3 pixels of an object is interpolated
flags_pixel_saturated_center | False | grizy, NB | None of the central 3×3 pixels of an object is saturated
flags_pixel_cr_center | False | grizy, NB | None of the central 3×3 pixels of an object is masked as a cosmic ray
flags_pixel_bad | False | grizy, NB | None of the pixels in the footprint of an object is labelled as bad

Table 3. Photometric properties of example LAE candidates

Object ID | NB (mag) | g (mag) | r (mag) | i (mag) | z (mag) | y (mag)
Notes. Columns: (1) Object ID. (2)-(7) Total magnitudes in the NB, g, r, i, z, and y bands. The 2σ limits of the total magnitudes are given for the undetected bands. (The complete machine-readable catalogs will be available on our project webpage at http://cos.icrr.u-tokyo.ac.jp/rush.html.)

UD-SXDS (NB921)
HSC J021601-041442 | 23.85±0.10 | 26.89±0.45 | 27.03±0.62 | 26.65±0.63 | 25.28±0.31 | 25.29±0.53
HSC J021754-051454 | 24.01±0.12 | >27.6 | >27.3 | >26.9 | 26.09±0.57 | 25.21±0.50
HSC J021702-050604 | 24.64±0.21 | >27.6 | >27.3 | >26.9 | >26.5 | >25.8
HSC J021638-043228 | 24.74±0.23 | >27.6 | >27.3 | >26.9 | 26.17±0.60 | >25.8
HSC J021609-050236 | 24.90±0.26 | 27.53±0.72 | 27.29±0.75 | >26.9 | 26.32±0.67 | >25.8

UD-COSMOS (NB816)
HSC J100243+024551 | 23.69±0.08 | >27.6 | >27.3 | 26.49±0.53 | >26.6 | >25.8
HSC J100239+022806 | 24.14±0.13 | >27.6 | >27.3 | 26.76±0.64 | 26.12±0.54 | >25.8
HSC J100243+015931 | 24.63±0.19 | >27.6 | >27.3 | >27.0 | >26.6 | >25.8
HSC J095936+014108 | 25.02±0.26 | >27.6 | >27.3 | >27.0 | >26.6 | >25.8
HSC J100245+021536 | 25.15±0.29 | >27.6 | >27.3 | >27.0 | >26.6 | >25.8

§ LAE SELECTION

Using the HSC data, we perform a selection for LAEs at z ≃ 6.6 and ≃5.7. Basically, we select objects showing a significant flux excess in the NB images and a spectral break at the wavelength of the redshifted Lyα emission. In this study, we create two LAE catalogs: the HSC LAE ALL (forced+unforced) catalog and the HSC LAE forced catalog. The HSC LAE ALL catalog is constructed from a combination of the forced and unforced photometry. We use this HSC LAE ALL catalog for identifying objects with a spatial offset between the centroids of BB and NB (see Section <ref>). On the other hand, the HSC LAE forced catalog consists of LAEs meeting only the selection criteria of the forced photometry. We use this HSC LAE forced catalog for statistical studies of LAEs (e.g., Lyα LFs). The HSC LAE forced catalog is a subsample of the ALL one.

Figure <ref> shows the flow chart of the LAE selection process. We carry out the following processes: (1) SQL selection, (2) visual inspections of the object images, (3) rejection of variable and moving objects with the multi-epoch images, and (4) forced selection. The details are described below.

(1) SQL selection: We retrieve the detection and photometric catalogs from PostgreSQL database tables. Using SQL scripts, we select objects meeting the following criteria of (i) magnitude and color selections and (ii) hscPipe parameters and flags.

(i) Magnitude and color selection: To identify objects with an NB magnitude excess in the HSC catalog, we apply magnitude and color selection criteria that are similar to, e.g., <cit.>:

NB921^ap_frc < NB921_5σ
&& (g_frc > g_3σ || g^ap_frc > g_3σ)
&& (r_frc > r_3σ || r^ap_frc > r_3σ)
&& (z_frc - NB921_frc > 1.0 || z_unf - NB921_unf > 1.0)
&& { [ (z_frc < z_3σ || z^ap_frc < z_3σ) && (i_frc - z_frc > 1.3 || i_unf - z_unf > 1.3) ] || (z_frc > z_3σ || z^ap_frc > z_3σ) },

for z ≃ 6.6, and

NB816^ap_frc < NB816_5σ
&& (g_frc > g_3σ || g^ap_frc > g_3σ)
&& (i_frc - NB816_frc > 1.2 || i_unf - NB816_unf > 1.2)
&& { [ (r_frc < r_3σ || r^ap_frc < r_3σ) && (r_frc - i_frc > 1.0 || r_unf - i_unf > 1.0) ] || (r_frc > r_3σ || r^ap_frc > r_3σ) },

for z ≃ 5.7, where the indices frc and unf represent the forced and unforced photometry, respectively. The subscript 5σ (3σ) indicates the 5σ (3σ) limiting magnitude for a given filter. The values with and without the superscript ap indicate the aperture and total magnitudes, respectively. These magnitudes are derived with the hscPipe software (see Section <ref>; <cit.>).
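To illustrate how these criteria translate into a catalog query, the following minimal sketch, written in Python rather than SQL for readability, applies the z ≃ 5.7 cuts of Equation (<ref>) to a catalog table. The column names and the dictionary-based catalog structure are hypothetical placeholders, not the actual HSC SSP database schema.

```python
import numpy as np

def select_z57_laes(cat, lim):
    """Boolean mask implementing the z ~ 5.7 LAE cuts (forced+unforced logic).

    cat: dict of numpy arrays with hypothetical keys such as 'nb816_ap_frc',
         'g_frc', 'g_ap_frc', 'i_frc', 'i_unf', 'nb816_frc', 'nb816_unf',
         'r_frc', 'r_ap_frc', 'r_unf'.
    lim: dict of limiting magnitudes, e.g., lim['nb816_5s'], lim['g_3s'], lim['r_3s'].
    """
    nb_detect = cat['nb816_ap_frc'] < lim['nb816_5s']
    g_undet = (cat['g_frc'] > lim['g_3s']) | (cat['g_ap_frc'] > lim['g_3s'])
    nb_excess = ((cat['i_frc'] - cat['nb816_frc'] > 1.2) |
                 (cat['i_unf'] - cat['nb816_unf'] > 1.2))
    r_det = (cat['r_frc'] < lim['r_3s']) | (cat['r_ap_frc'] < lim['r_3s'])
    r_red = (cat['r_frc'] - cat['i_frc'] > 1.0) | (cat['r_unf'] - cat['i_unf'] > 1.0)
    r_undet = (cat['r_frc'] > lim['r_3s']) | (cat['r_ap_frc'] > lim['r_3s'])
    # Spectral break: either r is detected but very red in r - i, or r is undetected.
    spectral_break = (r_det & r_red) | r_undet
    return nb_detect & g_undet & nb_excess & spectral_break
```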
The limits of the i - NB816 and z - NB921 colors are the same as those of <cit.> and <cit.>, respectively. To exploit the capability of the HSC survey to identify rare objects, we use the 3σ g- and r-band limiting magnitudes (instead of the 2σ values used in <cit.>) for the criteria of Lyman-break off-band non-detection. In process (4), we replace 3σ with 2σ in the g and r magnitude criteria for consistency with the previous studies. Note that we do not apply the flags_pixel_bright_object_[center/any] masking to the LAE ALL catalog, in order to maximize the number of LAE targets for future follow-up observations (<cit.>). These flags for the object masking are used in process (4).

(ii) Parameters and flags: Similar to <cit.>, we set several hscPipe parameters and flags in the HSC catalog to exclude, e.g., blended sources and objects affected by saturated pixels or nearby bright-source halos. We also mask regions where the exposure times are relatively short by using the countinputs parameter, N_c, which denotes the number of exposures at a source position for a given filter. Table <ref> summarizes the values and brief explanations of the hscPipe parameters and flags used for our LAE selection. The full details of these parameters and flags are presented in <cit.>. To search for LAEs over large areas of the HSC fields, we do not apply the countinputs parameter to the BB images. The number of objects selected in this process is n_SQL ≃ 121,000.

(2) Visual inspections of the object images: To exclude cosmic rays, cross-talks, compact stellar objects, and artificial diffuse objects, we perform visual inspections of the BB and NB images of all the objects selected in process (1). Most spurious sources are diffuse components near bright stars and extended nearby galaxies. The hscPipe software conducts the cmodel fit to the broad light profiles of such diffuse sources in the NB images, which enhances the BB-NB colors. For this reason, the samples constructed in the current SQL selection are contaminated by many diffuse components. Due to the clear difference in appearance between LAE candidates and diffuse components, such spurious sources can be easily excluded through the visual inspections. The number of objects selected in this process is n_vis ≃ 10,900.

The visual inspection processes are mainly conducted by one of the authors. As a reliability check, four authors of this paper have individually carried out such visual inspections for ≃5,300 objects in the UD-COSMOS NB816 field and compared the results of the LAE selection. The difference in the number of selected LAEs is within ±5 objects. Thus, we do not find a large difference in our visual inspection results.

(3) Rejection of variable and moving objects with multi-epoch images: We exclude variable and moving objects such as supernovae, AGNs, satellite trails, and asteroids using the multi-epoch NB images. The NB images were typically taken a few months to years after the BB imaging observations. For this reason, there is a possibility that sources with an NB flux excess are variable or moving objects that happened to brighten during the NB imaging observations. The NB images are created by coadding ≃10-20 and ≃3-5 frames of 15-minute exposures for the current HSC UD and D data, respectively. Using the multi-epoch images, we automatically remove the variable and moving objects as follows. First, we measure the flux in the individual epoch images, f_1epoch, for each object.
Next, we obtain an average, f_ave, and a standard deviation, σ_epoch, from the set of f_1epoch values after a 2σ flux clipping. Finally, we discard objects having at least one multi-epoch image with a significantly large f_1epoch value of f_1epoch ≥ f_ave + A_epoch × σ_epoch. Here we tune the A_epoch factor based on the depth of the NB fields. The A_epoch value is typically ≃2.0-2.5 (a minimal numerical sketch of this clipping procedure is given at the end of this section). Figure <ref> shows examples of the spurious sources. We also perform visual inspections of the multi-epoch images to remove contaminants that are not excluded by the automatic rejection above. We refer to the remaining objects after this process as the LAE ALL catalog.

(4) Forced selection: In the selection criteria of Equations (<ref>) and (<ref>), the HSC LAE ALL catalog is obtained from the combination of the forced and unforced colors. In this process, we select LAEs only with the forced color excess to create the forced LAE subsamples from the HSC LAE ALL catalog. In addition, the 3σ limit is replaced with 2σ for the criteria of the g- and r-band non-detections. Here we also adopt a new, stringent color criterion of z - NB921 > 1.8 for z ≃ 6.6 LAEs. Due to the difference between the z-band transmission curves of SCam and HSC, the criterion of z - NB921 > 1.0 in Equation (<ref>) does not allow us to select LAEs whose EW_0,Lyα is similar to those of previous SCam studies. The BB-NB color criteria in the forced selection correspond to rest-frame Lyα EWs of EW_0,Lyα > 14 Å and > 10 Å for z ≃ 6.6 and z ≃ 5.7 LAEs, respectively. These EW_0,Lyα limits are comparable to those of the previous SCam studies (e.g., <cit.>). The relation between EW_0,Lyα and the BB-NB colors is described in detail in <cit.>. Moreover, we remove the objects in masked regions defined by the flags_pixel_bright_object_[center/any] parameters (<cit.>). We refer to the set of the remaining objects after this process as the forced LAE catalog. This forced LAE catalog is used for studies of LAE statistics such as measurements of the Lyα EW scale lengths.

The LAE candidates selected in this forced selection are referred to as the forced LAEs. On the other hand, we refer to the remaining LAE candidates in the HSC LAE ALL catalog as the unforced LAEs. Examples of forced and unforced LAEs are shown in Figure <ref>. As shown in the top-right panels of Figure <ref>, the unforced LAEs have a ≃0.″2-0.″3 spatial offset between the centroids in NB and BB.

In total, we identify 2,230 and 1,045 LAE candidates in the HSC LAE ALL and forced catalogs, respectively. Table <ref> presents the numbers of LAE candidates in each field. The machine-readable catalogs of all the LAE candidates will be provided on our project webpage at http://cos.icrr.u-tokyo.ac.jp/rush.html. The photometric properties of example LAE candidates are shown in Table <ref>. As shown in Table <ref>, the number of z ≃ 5.7 LAEs in D-DEEP2-3 appears to be large compared to those of the other z ≃ 5.7 fields. This may be because the seeing of the NB816 images of D-DEEP2-3 is better than that of the other z ≃ 5.7 fields. Similarly, the small number of z ≃ 6.6 LAEs in UD-SXDS may be affected by the seeing size. The number density of LAEs is discussed in the next section. Note that the edge regions of UD-COSMOS overlap with a flanking field, D-COSMOS (<cit.>). We find that 30 (7) LAEs in UD-COSMOS are also selected in the HSC LAE ALL (forced) sample of D-COSMOS. To analyze the D fields independently in the following sections, we include the overlapped LAEs in the D-COSMOS sample.
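The following minimal sketch illustrates the multi-epoch rejection of process (3) above; the 2σ-clipped statistics and the A_epoch threshold follow the description given there, while the array layout and the function name are illustrative choices rather than the actual pipeline implementation.

```python
import numpy as np

def is_transient(epoch_fluxes, a_epoch=2.5, n_clip_sigma=2.0):
    """Flag an object whose single-epoch fluxes contain a strong outlier.

    epoch_fluxes: 1D array of fluxes f_1epoch measured in the individual
    epoch images of one object.
    """
    f = np.asarray(epoch_fluxes, dtype=float)
    # 2-sigma clipping to obtain a robust mean f_ave and scatter sigma_epoch
    f_ave, sigma = f.mean(), f.std()
    keep = np.abs(f - f_ave) <= n_clip_sigma * sigma
    f_ave, sigma = f[keep].mean(), f[keep].std()
    # Discard the object if any epoch flux exceeds f_ave + A_epoch * sigma_epoch
    return bool(np.any(f > f_ave + a_epoch * sigma))

# Example: a supernova-like brightening in one of ten 15-minute exposures
print(is_transient([1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 3.0]))  # True
```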
Figure <ref> shows the color-magnitude diagrams for the LAE candidates. The solid curves in the color-magnitude diagrams indicate the 3σ errors of the BB-NB color as a function of the NB flux, f_NB, given by

±3σ_BB-NB = -2.5 log_10 (1 ∓ 3√(f^2_1σNB + f^2_1σBB)/f_NB),

where f_1σNB and f_1σBB are the 1σ flux errors in the NB921 and z (NB816 and i) bands for z ≃ 6.6 (z ≃ 5.7), respectively. As shown in Figure <ref>, the LAE candidates have a significant NB magnitude excess.

§ CHECKING THE RELIABILITY OF OUR LAE SELECTION

Here we check the reliability of our LAE selection.

§.§ Spectroscopic Confirmations

We have conducted optical spectroscopic observations with Subaru/FOCAS and Magellan/LDSS3 for 18 bright LAE candidates with NB ≲ 24 mag. In these observations, we have confirmed 13 LAEs. By investigating our spectroscopic catalog of Magellan/IMACS, we also spectroscopically identify 8 LAEs with NB ≲ 24 mag. In addition, we find that 75 LAEs are spectroscopically confirmed in the literature (<cit.>; Higuchi et al. in preparation). In total, 96 LAEs have been confirmed by our spectroscopy and the previous studies. Using the spectroscopic subsample for which the number of observed LAEs is known, we estimate the contamination rate to be ≃0-30%. The details of the spectroscopic observations and contamination rates are given by <cit.>.

§.§ LAE Surface Number Density

Figure <ref> shows the surface number densities (SNDs) of our LAE candidates and of LAEs identified in previous Subaru/SCam NB surveys, SCam LAEs (e.g., <cit.>). We find that the SNDs of the forced LAEs are comparable to those of the SCam LAEs. On the other hand, the SNDs of the unforced LAEs at z ≃ 6.6 are higher than those of the SCam LAEs. The high SND of the unforced LAEs is mainly caused by the color criterion for the HSC LAE ALL catalog of z - NB921 > 1.0, which is less stringent than z - NB921 > 1.8 (see Section <ref>). We also identify SND humps of our forced LAEs at z ≃ 6.6 at the bright end of NB ≃ 23 mag in UD-COSMOS. The presence of such an SND hump has been reported by z ≃ 6.6 LAE studies (e.g., <cit.>). The significance of the bright-end hump in the Lyα LFs is ≃3σ, which is discussed in <cit.>. The slight declines of the SNDs at faint NB magnitudes of NB ≳ 24.5 mag would originate from the incompleteness of the LAE detection and selection. <cit.> present the SNDs corrected for the incompleteness. Figure <ref> compiles the SNDs of all the HSC UD and D fields. We find that our SNDs show a small field-to-field variation, but typically follow those of the SCam LAEs.

§.§ Matching Rate of HSC LAEs and SCam LAEs

The UD-SXDS field has been observed previously by SCam equipped with the NB921 and NB816 filters (<cit.>). We compare the catalogs of our selected HSC LAE candidates and the SCam LAEs, and calculate the object matching rates as a function of NB magnitude. The object matching radius is 1″. The object matching rate between the HSC LAEs and SCam LAEs is ≃90% at bright NB magnitudes of ≲24 mag. The high object matching rate indicates that we adequately identify LAEs in our selection processes. However, the matching rate decreases to ≃70% at a faint magnitude of ≃24.5 mag. This is due to the shallow depth of the HSC NB fields compared to the SCam ones. <cit.> discuss the detection completeness of faint LAEs.
Properties of the LABs selected in the HSC NB Data.

Object ID             α (J2000)    δ (J2000)     NB_tot  UV_tot  logL_Lyα    EW_0,Lyα      z_spec
                                                 (mag)   (mag)   (erg s^-1)  (Å)
(1)                   (2)          (3)           (4)     (5)     (6)         (7)           (8)

NB921 (z≃6.6)
HSC J100058+014815^a  10:00:58.00  +01:48:15.14  23.25   24.48   43.9^e      211±20^e      6.604^a
HSC J021757-050844^b  02:17:57.58  -05:08:44.64  23.50   25.40   43.4^e      78^+8_-6^e    6.595^b
HSC J100334+024546^c  10:03:34.66  +02:45:46.56  23.61   24.97   43.5^e      61±20^e       6.575^c

NB816 (z≃5.7)
HSC J100129+014929    10:01:29.07  +01:49:29.81  23.47   25.87   43.4        95^+40_-19    5.707^d
HSC J100109+021513    10:01:09.72  +02:15:13.45  23.13   25.77   43.6        257^+172_-76  5.712^d
HSC J100123+015600    10:01:23.84  +01:56:00.46  23.94   26.43   43.3        106^+70_-27   5.726^d
HSC J095946+013208    09:59:46.73  +01:32:08.45  24.16   26.12   43.1        52^+25_-13    —
HSC J100139+015428    10:01:39.94  +01:54:28.34  24.11   26.58   43.2        100^+66_-30   —
HSC J161927+551144    16:19:27.73  +55:11:44.70  22.88   24.86   43.7        89^+33_-20    —
HSC J161403+535701    16:14:03.82  +53:57:01.25  23.53   25.32   43.4        51^+23_-12    —
HSC J232924+003600    23:29:24.85  +00:36:00.34  23.62   26.48   43.4        55^+45_-14    —

Notes. (1) Object ID. (2) Right ascension. (3) Declination. (4) Total magnitudes in the NB921 and NB816 bands for z≃6.6 and z≃5.7, respectively. (5) Total magnitudes in the y and z bands for z≃6.6 and z≃5.7, respectively. (6) Lyα luminosity. (7) Rest-frame equivalent width of the Lyα emission line. (8) Spectroscopic redshift.
^a CR7 in <cit.>. ^b Himiko in <cit.>. ^c Spectroscopically confirmed in <cit.>. ^d Spectroscopically confirmed in <cit.>. ^e Spectroscopic measurements from the literature.

§ RESULTS Here we present the Lyα EW distributions (Section <ref>) and the LABs selected with the HSC data (Section <ref>). For consistency with previous LAE studies, we use the forced LAE sample in the following analyses, if not specified otherwise.
§.§ Lyα EW Distribution We present the Lyα EW distributions for LAEs at z≃5.7-6.6. With the method described in Section <ref>, we calculate the rest-frame Lyα EW, EW_0,Lyα, for the LAEs. The y (z) band magnitudes are used for the rest-frame UV continuum emission of z≃6.6 (z≃5.7) LAEs. Figure <ref> shows the observed Lyα EW distributions at z≃5.7-6.6 in the UD and D fields. To quantify these Lyα EW distributions we perform Monte Carlo (MC) simulations. The procedure of the MC simulations is similar to that of, e.g., <cit.>, <cit.> and <cit.>. First, we generate artificial LAEs in a Lyα luminosity range of logL_Lyα/erg s^-1 = 42-44 according to the z≃5.7-6.6 Lyα LFs of <cit.>. Next, we assign a Lyα EW and BB magnitudes to each LAE by assuming that the Lyα EW distributions are the exponential and Gaussian functions (e.g., <cit.>):
dN/dEW = N exp(-EW/W_e), and
dN/dEW = N (1/√(2πσ_g^2)) exp(-EW^2/(2σ_g^2)),
where N is the galaxy number, and W_e and σ_g are the Lyα EW scale lengths of the exponential and Gaussian functions, respectively. By changing the intrinsic W_e and σ_g values, we make samples of artificial Lyα EW distributions. We then select LAEs based on the NB and BB limiting magnitudes and the BB-NB colors corresponding to Lyα EW limits which are the same as those of our LAE selection criteria (Section <ref>). Finally, the best-fit Lyα EW scale lengths are obtained by fitting the artificial Lyα EW distributions to the observed ones. Figure <ref> presents the Lyα EW distributions obtained in the MC simulations. As shown in Figure <ref>, we find that the Lyα EW distributions are reasonably explained by the exponential and Gaussian profiles. The best-fit scale lengths are summarized in Table <ref>. The best-fit exponential (Gaussian) Lyα scale lengths are, on average of the UD and D fields, 153±18 Å and 154±15 Å (146±24 Å and 139±14 Å) at z≃5.7 and z≃6.6, respectively. As shown in Table <ref>, there is no large difference in the Lyα EW scale lengths for the UD and D fields.
This absence of a large EW_0,Lyα difference indicates that our best-fit Lyα EW scale lengths do not depend strongly on the image depths or the detection incompleteness. In Section <ref>, we discuss the redshift evolution of the Lyα EW scale lengths. We investigate large-EW (LEW) LAEs whose intrinsic Lyα EW value, EW_0,Lyα^int, exceeds 240 Å (e.g., <cit.>). To obtain EW_0,Lyα^int, we correct for the IGM attenuation for Lyα using the prescriptions of <cit.>. In the HSC LAE ALL sample, we find that 45 and 230 LAEs have a LEW of EW_0,Lyα^int > 240 Å, for z≃6.6 and z≃5.7 LAEs, respectively. These LEW LAEs are candidates for young, metal-poor galaxies and AGNs. The fraction of the LEW LAEs in the sample is 21% for z≃5.7 LAEs. The fraction of LEW LAEs at z≃5.7 is comparable to that of previous studies on z≃5.7 LAEs (e.g., ≃25% at z≃5.7 in <cit.>; ≃30-40% at z≃5.7 in <cit.>). In contrast, the fraction of LEW LAEs at z≃6.6 is 4%, which is lower than that at z≃5.7. The low fraction at z≃6.6 might be due to the neutral hydrogen IGM absorbing the Lyα emission. Out of the LEW LAEs, 32 and 150 LAEs at z≃6.6 and z≃5.7 exceed EW_0,Lyα^int = 240 Å beyond the 1σ uncertainty of EW_0,Lyα^int, respectively.
§.§ LABs at z≃5.7-6.6 We search for LABs with spatially extended Lyα emission. To identify LABs, we measure the NB isophotal areas, A_iso, for the forced LAEs. In this process, we include an unforced LAE, Himiko, which is an LAB identified in a previous SCam NB survey (<cit.>). First, we estimate the sky background level of the NB cutout images. Next, we run SExtractor with this sky background level, and obtain the A_iso values as the areas of pixels with fluxes brighter than the 2σ sky fluctuation. Note that the NB magnitudes include both the fluxes of Lyα and of the rest-frame UV continuum emission. Instead of creating Lyα images by subtracting the flux contribution of the rest-frame UV continuum emission, we here simply use the NB images for consistency with previous studies (e.g., <cit.>). Using A_iso-NB magnitude diagrams, we select LABs which are significantly extended compared to point sources. This selection is similar to that of <cit.>. Figure <ref> presents A_iso as a function of total NB magnitude. We also plot star-like point sources which are randomly selected in the HSC NB fields. The A_iso-NB magnitude selection window is defined by a 2.5σ deviation from the A_iso-NB magnitude distribution of the star-like point sources. The value of 2.5σ is applied for fair comparisons with previous studies, e.g., <cit.> and <cit.>, which used ≃2-4σ. We perform visual inspections of the NB cutout images to remove unreliable LABs which are significantly affected by, e.g., diffuse halos of nearby bright stars. In total, we identify 11 LABs at z≃5.7-6.6. Figure <ref> and Table <ref> present multi-band cutout images and properties of the LABs, respectively. As shown in Figure <ref>, these LABs are spatially extended in NB. Our HSC LAB selection confirms that CR7 and Himiko have spatially extended Lyα emission. Six out of our 11 LABs have been confirmed by our spectroscopic follow-up observations (<cit.>) and previous studies (<cit.>). In Section <ref>, we discuss the redshift evolution of the LAB number density.
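One simple way to realise the A_iso-NB selection window just described is sketched below; the magnitude binning and the minimum number of reference stars per bin are our assumptions:

import numpy as np

def select_labs(nb_lae, aiso_lae, nb_star, aiso_star, nsig=2.5, dm=0.5):
    """LAB candidates: isophotal area more than `nsig` sigma above the
    point-source A_iso-NB locus, evaluated in NB-magnitude bins."""
    is_lab = np.zeros(len(nb_lae), dtype=bool)
    for m in np.arange(nb_star.min(), nb_star.max(), dm):
        ref = aiso_star[(nb_star >= m) & (nb_star < m + dm)]
        if len(ref) < 5:                      # need enough stars per bin
            continue
        cut = ref.mean() + nsig * ref.std()   # stellar locus + 2.5 sigma
        in_bin = (nb_lae >= m) & (nb_lae < m + dm)
        is_lab |= in_bin & (aiso_lae > cut)
    return is_lab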
§ DISCUSSION
§.§ Redshift Evolution of Lyα EW Distribution We discuss the redshift evolution of the Lyα EW scale lengths in a compilation of results from the literature (<cit.>). Figure <ref> shows the redshift evolution of the Lyα EW scale lengths at z≃0-7. Our best-fit Lyα scale lengths are comparable to those of <cit.> and/or <cit.> at z≃5.7-6.6. The large Lyα EW scale lengths at high z would indicate that metal-poor and/or less-dusty galaxies with strong Lyα emission are more abundant at higher z (e.g., <cit.>). In addition, <cit.> have found that the Lyα EW scale length increases towards high z following a (1+z)-form. Our W_e and σ_g values for z≃5.7-6.6 are also roughly comparable to Zheng et al.'s (1+z)-form evolution. However, no significant evolution in the Lyα EW scale lengths from z≃5.7 to z≃6.6 is identified in our HSC LAE data, although a possible decline in σ_g in the UD fields is found. A slight decrease in both W_e and σ_g from z≃5.7 to z≃6.6 has been found by <cit.>. This decline in the Lyα scale lengths at z≃6.6 may be caused by the increasing hydrogen neutral fraction in the epoch of cosmic reionization at z≳7. Note that the Lyα EW scale length measurements would largely depend on the BB and NB depths and the Lyα EW cuts. Using deeper NB and BB images from the future HSC data release, we will examine the redshift evolution of the Lyα scale lengths accurately.
§.§ Redshift Evolution of LAB Number Density We discuss the redshift evolution of the LAB number density, N_LAB. Figure <ref> shows N_LAB at z≃0-7 measured by this study and the literature (<cit.>). For the plot of N_LAB, <cit.> have compiled N_LAB measurements down to an NB surface brightness (SB) limit of 5 × 10^-18 erg s^-1 cm^-2 arcsec^-2. The SB limits of our HSC NB data are ≃5 × 10^-18 and ≃8 × 10^-18 erg s^-1 cm^-2 arcsec^-2 for the UD and D fields, respectively. Our HSC NB images, at least for the UD fields, are comparably deep, allowing for fair comparisons with Yang et al.'s N_LAB plot. Our N_LAB values are 1.4×10^-6 and 2.9 × 10^-7 Mpc^-3 (2.6×10^-7 and 1.1×10^-7 Mpc^-3) at z≃5.7 and z≃6.6 in the UD (D) fields, respectively. The number density at z≃6-7 is ≃ 10-100 times lower than those claimed for LABs at z≃ 2-3 (e.g., <cit.>). As shown in Figure <ref>, there is an evolutionary trend in which N_LAB increases from z≃7 to ≃3 and subsequently decreases from z≃3 to ≃0. This trend of the LAB number density evolution is similar to the Madau-Lilly plot of the cosmic SFR density (SFRD) evolution (e.g., <cit.>). Similarly to <cit.>, we fit the Madau-Lilly-type formula,
N_LAB(z) = a×(1+z)^b/(1 + [(1+z)/c]^d),
where a, b, c, and d are free parameters (<cit.>), to our N_LAB evolution. For the fitting, we exclude the data point of <cit.>, which was obtained in an overdense region, SSA22. The best-fit parameters are a=9.1×10^-8, b=2.9, c=5.0, and d=11.7. The similarity of the cosmic SFRD and LAB evolutions might indicate that the origin of LABs is related to star formation activity. As described in Section <ref>, LABs are thought to be formed via physical mechanisms that are connected with star formation, e.g., cold gas accretion and galactic superwinds. The cold gas accretion could produce the extended Lyα emission powered by gravitational energy (e.g., <cit.>). On the other hand, the superwinds induced by the starbursts in the central galaxies would blow out the surrounding neutral gas and form extended Lyα nebulae (e.g., <cit.>). The cold gas accretion rate and the strength of galactic superwinds are predicted to evolve with physical quantities related to the cosmic SFRD (e.g., <cit.>). Comparisons of the cosmic SFRD and LAB evolutions would provide useful hints on whether LABs are formed in these scenarios.
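With the best-fit parameters quoted above, the fitted evolution can be evaluated directly; a minimal numpy sketch:

import numpy as np

def n_lab(z, a=9.1e-8, b=2.9, c=5.0, d=11.7):
    """Madau-Lilly-type fit to the LAB number density (Mpc^-3),
    with the best-fit parameters quoted in the text."""
    z = np.asarray(z, dtype=float)
    return a * (1.0 + z)**b / (1.0 + ((1.0 + z) / c)**d)

print(n_lab([0.0, 2.0, 3.0, 5.7, 6.6]))  # rises to z~3, then declines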
However, it should be noted that the LAB selection method is not homogeneous in our comparison of N_ LAB at z≃0-7. There is a possibility that the N_ LAB evolution from z≃7 to z≃3 is caused by the cosmological surface brightness dimming effect at high-z. The cosmological surface brightness dimming would significantly affect the detection and selection completeness for LABs at high-z. To confirm the N_ LAB evolution and quantitatively compare with the cosmic SFRD, we need to homogenize the selection method for LABs at z≃2-7 in the future HSC NB data.§ SUMMARY AND CONCLUSIONS We develop an unprecedentedly large catalog consisting of LAEs at z=5.7 and 6.6 that are identified by the SILVERRUSH program with the first NB imaging data of the Subaru/HSC survey. The NB imaging data is about an order of magnitude larger than any other surveys for z≃6-7 LAEs conducted to date. Our findings are as follows:* We identify 2,230 ≳ L^* LAEs at z=5.7 and 6.6 on the 13.8 and 21.2 deg^2 sky, respectively. We confirm that the LAE catalog is reliable on the basis of 96 LAEs whose spectroscopic redshifts are already determined by this program (<cit.>) and the previous studies (e.g., <cit.>). The LAE catalog is presented in this work, and published online. * With the large LAE catalog, we derive the rest-frame Lyα EW distributions of LAEs at z≃5.7 and ≃6.6 that are reasonably explained by the exponential profile. The best-fit exponential (Gaussian) Lyα scale lengths are, on average of the Ultradeep and Deep fields, 153±18 Åand 154±15 Å (146±24 Åand 139±14 Å) at z≃5.7 and z≃6.6, respectively, showing no significant evolution from z≃5.7 to z≃6.6. We find 45 and 230 LAEs at z≃6.6 and z≃5.7 with a LEW of EW_ 0,Lyα^ int> 240 Åcorrected for the IGM attenuation for Lyα. The fraction of the LEW LAEs to all LAEs is ≃4% and ≃21% at z≃6.6 and z≃5.7, respectively. These LEW LAEs are candidates of young-metal poor galaxies and AGNs. * We search for LABs that are LAEs with spatially extended Lyα emission whose profile is clearly distinguished from those of stellar objects at the ≳ 3σ level. In the search, we identify 11 LABs in the HSC NB images down to a surface brightness limit of ≃5-8 × 10^-18 erg s^-1 cm^-2 which is as deep as data of previous studies. The number density of the LABs at z≃6-7 is ∼ 10^-7-10^-6 Mpc^-3 that is ∼ 10-100 times lower than those claimed for LABs at z≃ 2-3, suggestive of disappearing LABs at z≳ 6, although the selection methods are different in the low and high-z LABs.It should be noted that Lyα EW scale length derivation methods and the LAB selections are not homogeneous in a redshift range of z≃0-7. Using the future z≃2.2, 5.7, 6.6, and 7.3 HSC NB data, we will systematically investigate the redshift evolution of Lyα EW scale lengths and N_ LAB at z≃2-7 in homogeneous methods.§ APPENDIX: CALCULATION OF LYΑ EW In this section, we describe the method to calculate the EW_ 0,Lyα values. The procedures and the assumption of this method are similar to those of e.g., <cit.>, <cit.>, <cit.>, <cit.>. For the calculation of EW_ 0,Lyα, we assume that LAEs have a δ function-shaped Lyα line and the flat rest-frame UV continuum emission (i.e. β_ν = 0, where β_ν is the UV spectral slope per unit frequency). 
In such an LAE spectrum, the magnitude, m, for a waveband filter with a transmission curve, T_ν, is described as follows: 48.6 + m = -2.5 ×log_10∫_0^∞ (f_c + f_l δ(ν - ν_α)) T_ν dν/∫_0^∞ T_ν dν,where f_l, f_c, δ(ν), and ν_α is a Lyα line flux, the flux density of the rest-frame UV continuum emission, the δ function, and the observed frequency of Lyα, respectively. Here we also assume that the Lyα line is located at 9215 Å (8177 Å) which is the central wavelength of the NB921 (NB816) filter, for z≃6.6 (z≃5.7) LAEs. In this study, we do not take into account the IGM transmission for Lyα, if not specified. This is because the IGM transmission for Lyα highly depends on the Lyα line velocity offset from the systemic redshift (e.g., <cit.>). The numerator of the logarithm in Equation (<ref>) corresponds to f_c ∫^∞_ν_cexp(-τ_ eff) T_ν dν + f_c ∫_0^ν_c T_ν dν + f_l T_ν (ν_α) =f_c B + f_c R + f_l T_ν (ν_α).In Equation (<ref>), we use B, R, and A that are defined by equations ofB ≡∫^∞_ν_cexp(-τ_ eff) T_ν dν, R ≡∫_0^ν_c T_ν dν, A ≡∫_0^∞ T_ν dν,where τ_ eff is the IGM optical depth calculated from analytical models of <cit.>. Using Equations (<ref>) and (<ref>), we derive the flux density of the NB and BB filters, f_ NB and f_ BB, as follows:f_ NB =10^-0.4(m_ NB + 48.6) = f_c (B_ NB + R_ NB) + f_l T_ NB (ν_α) /A_ NB, f_ BB =10^-0.4(m_ BB + 48.6) = f_c (B_ BB + R_ BB) /A_ BB.The B, R, and A values with the subscripts of NB (BB) are calculated with the transmission curves of the NB (BB) filters, T_ NB (T_ BB). In this study, we use magnitudes of the y and z band filters which do not cover the wavelength of Lyα for z≃6.6 and z≃5.7 LAEs, respectively, indicating T_ BB(ν_α)=0. In the case that m_ BB is fainter than the 1σ limit, we use the 1σ limiting magnitude for the EW_ 0,Lyα calculation. By combining the equations of f_ NB andf_ BB, we obtain f_c and f_l, f_c= A_ BBf_ BB/B_ BB + R_ BB = f_ BB,f_l=A_ NB(B_ BB + R_ BB)f_ NB - A_ BB(B_ NB + R_ NB)f_ BB/ (B_ BB + R_ BB)T_ NB (ν_α) =A_ NBf_ NB - (B_ NB + R_ NB)f_ BB/ T_ NB (ν_α) =a ×f_ NB - b ×f_ BB.Note that B_ BB + R_ BB = A_ BB due to the negligible IGM absorption at the wavelengths of the BB filters. Here we define a and b as a≡ A_ NB/T_ NB (ν_α),b≡ B_ NB + R_ NB/T_ NB (ν_α).For the HSC NB921 and NB816 filters, the sets of the values are calculated to be (a,b) ≃ (4.7, 2.3) × 10^12 and (a,b) ≃ (5.2, 2.7) × 10^12, respectively. Usingf_c and f_l, we calculate the EW_ 0,Lyα values viaEW_ 0,Lyα = f_l/f_cc/ν^21/1+z.To obtain the median values and uncertainties for EW_ 0,Lyα, we perform Monte Carlo (MC) simulations in a method similar to that of e.g., <cit.>. In the simulation, we randomly generate a flux density value, f_ MC, following a Gaussian probability distribution with an average of f and a dispersion of the 1σ sky background noise, f_ 1σ, for the NB and BB bands. Here we also randomize β_ν and ν_α in Gaussian probability distributions with 1σ dispersions of Δβ=0.2 and Δν_α= FWHM_ NB/2.35, respectively, where FWHM_ NB is the FWHM of the NB filters. The dispersion of Δβ=0.2 is typical for high-z galaxies (<cit.>). In the manner that are the same as described in this section, we calculate a EW_ 0,Lyα value using f_ MC for NB and BB. In this process, negative values of f_c, f_l, and EW_ 0,Lyα are forced to be zero. Such a process is performed 1,000 times for each object. During the iteration, a simulated EW_ 0,Lyα value is discarded in the case that a BB- NB color does not meet the selection criteria of Equations (<ref>) and (<ref>). 
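A minimal sketch of this EW computation and its sky-noise MC is given below, using the (a, b) values quoted above for NB921; for brevity it omits the β_ν and ν_α randomisation and the color-criterion cut applied during the iteration:

import numpy as np

C_AA = 2.998e18            # speed of light in Angstrom/s
LYA = 1215.67              # rest-frame Lya wavelength in Angstrom

def ew0_lya(f_nb, f_bb, lam_nb=9215.0, a=4.7e12, b=2.3e12):
    """Rest-frame Lya EW (Angstrom) from NB/BB flux densities (cgs);
    (a, b) and lam_nb are the NB921 values quoted in the text."""
    z = lam_nb / LYA - 1.0
    nu_a = C_AA / lam_nb                   # observed Lya frequency
    f_c = f_bb                             # continuum flux density, f_c = f_BB
    f_l = max(a * f_nb - b * f_bb, 0.0)    # Lya line flux, clipped at zero
    if f_c <= 0.0:
        return 0.0
    return f_l / f_c * C_AA / nu_a**2 / (1.0 + z)

def ew0_mc(f_nb, f_bb, s_nb, s_bb, n=1000, seed=0):
    """Median and 16/84 percentiles from sky-noise MC realisations."""
    rng = np.random.default_rng(seed)
    ews = [ew0_lya(rng.normal(f_nb, s_nb), rng.normal(f_bb, s_bb))
           for _ in range(n)]
    return np.percentile(ews, [16, 50, 84])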
Using the set of EW_ 0,Lyα values obtained from the MC simulations, we calculate the median values and the 16- and 84-percentile errors for EW_ 0,Lyα.We would like to thank James Bosch, Richard S. Ellis, Masao Hayashi, Robert H. Lupton, Michael A. Strauss for useful discussion and comments. We thank the anonymous referee for constructive comments and suggestions. This work is based on observations taken by the Subaru Telescope and the Keck telescope which are operated by the National Observatory of Japan. This work was supported by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, KAKENHI (23244025) and (21244013) Grant-in-Aid for Scientific Research (A) through Japan Society for the Promotion of Science (JSPS), and an Advanced Leading Graduate Course for Photon Science grant. The NB816 filter was supported by Ehime University (PI: Y. Taniguchi). The NB921 filter was supported by KAKENHI (23244025) Grant-in-Aid for Scientific Research (A) through the Japan Society for the Promotion of Science (PI: M. Ouchi). NK is supported by the JSPS grant 15H03645. SY is supported by Faculty of Science, Mahidol University, Thailand and the Thailand Research Fund (TRF) through a research grant for new scholar (MRG5980153).The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software athttp:dm.lsst.orgThe Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE) and the Los Alamos National Laboratory.Based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan.apj
http://arxiv.org/abs/1704.08140v3
{ "authors": [ "Takatoshi Shibuya", "Masami Ouchi", "Akira Konno", "Ryo Higuchi", "Yuichi Harikane", "Yoshiaki Ono", "Kazuhiro Shimasaku", "Yoshiaki Taniguchi", "Masakazu A. R. Kobayashi", "Masaru Kajisawa", "Tohru Nagao", "Hisanori Furusawa", "Tomotsugu Goto", "Nobunari Kashikawa", "Yutaka Komiyama", "Haruka Kusakabe", "Chien-Hsiu Lee", "Rieko Momose", "Kimihiko Nakajima", "Masayuki Tanaka", "Shiang-Yu Wang", "Suraphong Yuma" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20170426143112", "title": "SILVERRUSH. II. First Catalogs and Properties of ~2,000 Lya Emitters and Blobs at z~6-7 Identified over the 14-21 deg2 Sky" }
Security Protection for Magnetic Tunnel JunctionShayan Taheri and Jiann-Shiun YuanDepartment of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, U.S.A.Email: [email protected], [email protected] 30, 2023 =====================================================================================================================================================================================================================Energy efficiency is one of the most important parameters for designing and building a computing system nowadays. Introduction of new transistor and memory technologies to the integrated circuits design have brought hope for low energy very large scale integration (VLSI) circuit design. This excellency is pleasant if the computing system is secure and the energy is not wasted through execution of malicious actions. In fact, it is required to make sure that the utilized transistor and memory devices function correctly and no error occurs in the system operation. In this regard, we propose a built-in-self-test architecture for security checking of the magnetic tunnel junction (MTJ) device under malicious process variations attack. Also, a general identification technique is presented to investigate the behavior and activities of the employed circuitries within this MTJ testing architecture. The presented identification technique tries to find any abnormal behavior using the circuit current signal. Magnetic Tunnel Junction; Signal Processing for Security; Built-In-Self-Test; Emerging Technologies.§ INTRODUCTIONThe ubiquitous connectivity among computing systems is increasing and consequently significant growth is happening in the amount of data to be processed, transmitted, and stored by these systems. This situation brings a proper environment for adversaries to exploit possible backdoors in software and/or hardware to perform malicious purposes. Besides security, another design parameter that is highly critical for computing systems, especially in mobile devices, is energy. The dream of building a smart city with having millions of electronic devices around us is not possible, unless making them energy efficient.Recently, new transistor and memory technologies are introduced to the very large scale integration (VLSI) circuit design for the sake of low energy consumption, especially due to the device scaling barriers of the CMOS technology. These devices such as magnetic tunnel junction (MTJ) and tunnel field-effect transistor (TFET) are able to drop the energy consumption of electronic circuits remarkably. Although their merit is not only limited to energy reduction since they have unique features and properties applicable for security purposes <cit.>. For example, TFET can make the cryptographic processors to be more resilient toward side-channel attack. However, it should not be neglected that these properties can come to the aid of an attacker as well. An adversary may find a crack to cause performance degradation, functionality failure, acceleration of reliability issues and so forth. Therefore, development of novel VLSI testing and security checking techniques is mandatory with focus on these emerging devices. This work proposes a built-in-self-test architecture for security checking of the magnetic tunnel junction (MTJ) device under malicious process variations attack, in Section 2. 
Also, a general identification technique is presented to detect any abnormality in the behavior and activities of the employed circuitries within the MTJ testing architecture, in Section 3. We conclude the paper in Section 4.
§ TESTING AND SECURITY CHECKING OF MAGNETIC TUNNEL JUNCTION Spintronics is the foundation of the next generation of memory technologies, with superior features such as energy efficiency, speed, and density compared to the traditional memory technologies. The magnetic tunnel junction (MTJ) is the basic storage device in the Spintronics field that provides data non-volatility, fast data access, and low voltage circuit operation. These properties make this device a fitting candidate for the memory elements of IoT devices <cit.>. However, the MTJ device may suffer from reliability issues <cit.> that an attacker can exploit for malicious purposes. In this work, the impact of malicious variations of the free layer thickness (T_m) on the operation of the perpendicular magnetic anisotropy (PMA)-based MTJ device (shown in Figure <ref>) is analyzed using the SPICE models for magnetic tunnel junctions based on the mono-domain approximation <cit.>. This attack can cause logical transitions of the MTJ device earlier or later than the expected time (displayed in Figure <ref>), leading to incorrect logical state sensing and its propagation throughout the system (especially at high clock frequencies).
One solution for preventing any possible timing failure caused by these reliability-related security issues is run-time timing-error detection, and possibly correction, in order to keep the processing core performance close to its golden performance. According to this solution, a built-in-self-test module for reliability and security (BIST-RS) analysis is included inside the original design. The BIST-RS functionality can be classified into: (a) error detection; (b) error prediction; and (c) error masking. The BIST-RS functionality in "error detection" is described as monitoring the signals of logical paths for transitions after the clock edge and flagging a possible error.
Here, a BIST-RS architecture for analyzing the reliability and security of the MTJ device under a malicious process variations attack is presented, as shown in Figure <ref>. This architecture consists of three main elements: the Data Encoder, the MTJ Structure (i.e. an array of MTJ cells), and the Data Decoder. The data encoder has the responsibility of capturing the applied test pattern, calculating its fingerprint, and constructing the sender message. The MTJ structure is a physical transmission medium (i.e. the communication channel) with the functionality of correctly conveying data to the receiver. A healthy MTJ structure does not change the information and provides it to the data decoder on time. A single malicious MTJ cell (i.e. one whose free layer thickness is outside the acceptable range of variations) can change the conveyed information. The logical state of each MTJ device either stays the same or undergoes a transition, depending on its corresponding bit in the applied test pattern. The data decoder checks the received message and declares its integrity status using the error signal. If the logical state of the error signal is high, it implies that the MTJ structure is not reliable/healthy, and vice versa. The data encoder and the data decoder are demonstrated practically through a hardware implementation of the cyclic redundancy check (CRC) using TFET technology.
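Before turning to the transistor-level details, the following minimal Python sketch illustrates the encoder/channel/decoder roles played by the CRC; the short generator polynomial and the bit pattern are purely illustrative, not the hardware configuration:

def crc(bits, poly=(1, 0, 1, 1)):
    """Remainder of bits (MSB first) modulo a GF(2) generator polynomial;
    the 4-tap polynomial x^3 + x + 1 here is only an illustrative choice."""
    reg = list(bits) + [0] * (len(poly) - 1)
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[len(bits):]

def encode(pattern):          # data encoder: append the check value
    return list(pattern) + crc(pattern)

def error_signal(message):    # data decoder: high iff the message is corrupt
    return int(any(crc(message)))

msg = encode([1, 0, 1, 1, 0, 0, 1])
assert error_signal(msg) == 0          # healthy MTJ structure
msg[2] ^= 1                            # one infected MTJ flips a bit
assert error_signal(msg) == 1          # integrity failure is flagged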
We use 20 nm AlGaSb/InAs tunnel field effect transistor (TFET) technology (provided in the Universal TFET model 1.6.8 <cit.>) for implementation. TFET provides steeper sub-threshold slope (i.e. smaller than 60 mV/dec) <cit.> and is described as a gated p-i-n (i.e. the hole-dominant region, the intrinsic (pure) region, and the electron-dominant region) diode that has asymmetrical doping structure and operates under reverse-bias condition. The steeper sub-threshold slope of the TFET device helps to further down scale the supply voltage and reduce the leakage currents substantially, which makes it an excellent candidate to achieve low energy consumption for the IoT applications. The comparison between the drain-source current (I_DS) versus gate-source voltage (V_GS) curves of the n-type MOSFET and the n-type TFET is shown in Figure <ref>. For simulating this plot, both devices have the same width and length of 20 nm and are connected to the supply voltage of 0.6 V. As it can be seen from the figure, the TFET device turns ON and goes to its saturation region at a smaller value of the gate-source voltage in compare to the MOSFET device. Thus, the TFET technology is favorable for low voltage design.Cyclic redundancy check is an error detection code that is used for authentic data transmission between a source and a destination <cit.>. The input and output signals of the data encoder and the data decoder can be seen in Figure <ref>. The actual names of Y-axis labels are Clock, Data Reset, Data Enable, Output Logic 0, Output Logic 1, Check Reset, Check Enable, and Error in order. The clock signal has the period of 3 ns and the width of 6 ns. The reset mechanism of the encoder can be active before the arrival of the second clock cycle positive edge. Once it is disabled, the test pattern is applied and the data capturing signal is enabled. The constructed message is provided in the middle of the second clock cycle and around 7.5 ns. The fourth and fifth plots in Figure <ref> show the example logic zero and logic one of the encoder output signals. Resetting the decoder element can be continued until the arrival of the third clock cycle positive edge. During this period, all of the flip-flops of the receiver remainder register are set to logic one. After the arrival of the third clock cycle positive edge, the error signal goes to logic zero or stays at logic one depending on the delivered message integrity status.Each cell in the MTJ structure contains a driving and sensing circuit for its magnetic tunnel junction,which is shown in the bottom of Figure <ref>. The operation of this circuit can be described in this way: (a) converting the voltage signal of a bit in the message to the current signal; (b) applying the current signal to the magnetic tunnel junction under test; and (c) finding the absolute value of the voltage signal at the free layer terminal since the voltage polarization is different between the MTJ logic transitions; (d) eliminating the signal offset to make sure that it is symmetric; and (e) comparing it to half of the supply voltage to construct the output signal based on the corresponding voltages of the logical states. Figure <ref> indicates the circuit operation flow for zero-to-one and one-to-zero logic transitions using the inputs V_g,x and V_g,y respectively.Now, the defense mechanism of the proposed BIST-RS architecture in confronting the malicious free layer thickness (T_m) variations is discussed. 
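As a behavioural idealisation of steps (c)-(e) of the driving and sensing circuit, the sensing chain can be sketched as follows; the offset-removal convention is our assumption:

import numpy as np

def sense(v_free, vdd=0.6):
    """Software analogue of steps (c)-(e): rectify the free-layer voltage,
    centre its swing, and threshold at VDD/2 (0.6 V supply as in the text)."""
    v = np.abs(np.asarray(v_free, dtype=float))   # (c) absolute value
    v = v - v.mean() + vdd / 2.0                  # (d) remove the offset
    return (v > vdd / 2.0).astype(int)            # (e) comparator vs VDD/2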
As the first and most important step, the BIST-RS clock frequency should be set to the desired clock frequency for the main circuit exactly. Then, the BIST-RS is turned ON and different test patterns are applied to the data encoder. Next, the data decoder captures the message at the clock cycle positive edge and evaluates its integrity. The message content might be wrong due to possible transition delay fault(s) caused by the infected MTJ(s). An infected MTJ is found by notification from the data decoder error signal. The illustration of this concept is shown in Figure <ref>. As it can be observed, all the considered free layer thickness possibilities for zero-to-one and one-to-zero logic transitions are completed in the duration of 7.5 ns to 9.76 ns and 7.5 ns to 8.85 ns respectively, which are before the arrival of the third clock cycle positive edge. So, the malicious variations go undetected in this case. However, if this clock frequency is used for the original circuit all the times, then the attack doesn't have any impact on the IC functionality as well as its total performance. In reality, an IC might experience heavy workloads and high frequency computations during its lifetime. For those cases, the BIST-RS can be set to the clock frequency under test and the MTJ structure health is checked accordingly. The lack of need for including additional memory resources for testing as well as detecting faults without necessity to propagate them throughout the circuit under test are the primary privileges of this architecture over the traditional testing and verification methods. Also, implementation of the encoder and decoder modules using the TFET technology brings less energy consumption and area occupation than its CMOS counterpart. The total power consumption and area of these modules are 0.5020 μ W and 1,930,400 nm^2.§ THE BIST-RS FOR MTJ UNDER ATTACK Now, let's consider a scenario in which a malicious person is aware of the inserted BIST-RS inside the chip and aims to disrupt the testing and security checking process through manipulating the surrounding temperature or injecting a hardware Trojan inside the encryption/decryption module. For this scenario, we propose a general identification technique in this section, according to which any unusual behavior shown from the employed encryption/decryption module for MTJ testing can be discovered. Our technique performs the detection mechanism based upon the circuit analog signals rather than its digital data. Most of the detection techniques in the area of hardware security are developed in the digital domain, and the analog domain-based techniques have not been studied sufficiently <cit.>.In fact, the analog signals of an integrated circuit have unique variations, behavior, and features that can be used for detection, identification and monitoring purposes. The methodology of our technique can be divided into four steps: (a) choosing and applying a specific test pattern (i.e. based on its fault coverage capability) to the circuit under test and extracting an analog signal (i.e. the current signal in here), which is considered as the reference signal. This signal is correlated to the circuit properties. (b) automatic random selection of a certain number of test patterns (i.e. twenty in here), applying them to the circuit, and collecting all of the corresponding analog signals in order to build a dataset. Certain features may be extracted from the signals for the purpose of comparison in this step. (c) running a relational detector (i.e. 
the maximum of the absolute value of the cross correlation between the reference signal and a test signal) between the reference signal (i.e. obtained when the circuit operates in the normal condition) and all of the test signals inside the dataset, in order to construct the "Evaluation Signal"; and (d) accepting or rejecting the evaluation signal depending on the detector threshold value (i.e. the mean of the reference evaluation signal) and its sensitivity, and calculating four basic statistical metrics for analyzing the detector performance. The four metrics for the analysis of the detector performance are: True Positive (i.e. a signal is correctly rejected as not having originated from the original circuit), False Positive (i.e. a signal is wrongly rejected as not having originated from the original circuit), True Negative (i.e. a signal is correctly accepted as having originated from the original circuit), and False Negative (i.e. a signal is wrongly accepted as having originated from the original circuit).
The presented approach is examined in two experiments. In the first experiment, four datasets are collected from the cyclic redundancy check data decoder circuit using the CMOS 20 nm Predictive Technology Model (PTM) - Multi Gate (MG) technology <cit.>. The circuit operating conditions for these datasets are defined as: (1) normal condition; (2) process variations (i.e. changing the transistor length within a ±20% range); (3) temperature variations (i.e. changing the temperature from 20^∘C to 120^∘C); and (4) malicious condition (i.e. a hardware Trojan is inserted inside the circuit). The hardware Trojan designed for this circuit is activated by the output of a logical AND function whose inputs are the outputs of XOR functions executed on the CRC data decoder input pattern and the generated "Check Value"; its payload is a malfunction of the error signal. For the second experiment, only the normal condition and the malicious condition datasets are collected for the 32-bit KATAN block cipher <cit.>, whose encryption and decryption modules can also be used in the MTJ testing architecture. The Trojan inserted in the KATAN circuit flips the first and the last bits of the ciphertext, and is awakened by the output of a logical AND function whose inputs are the outputs of XOR functions executed on a portion of the key and a portion of the plaintext.
The top plot of Figure <ref> shows the comparison between the current signals of the healthy CRC data decoder circuit (i.e. the normal condition) when the reference pattern and an arbitrary test pattern are applied. As can be seen, the signals have the same trend and lie very well on each other, at least for the first 100 data points. However, the middle plot of this figure demonstrates that there are still differences between the current signals for those data points. The comparison between the current signals of the healthy and the malicious CRC data decoders (i.e. the normal and malicious conditions, respectively) when the same input pattern is applied can be observed in the bottom plot of the figure. Similarly, the signals have the same trend with minor differences, except at some data points, where the differences can be up to 40,000 times higher due to the hardware Trojan effect.
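The relational detector of steps (c)-(d) can be sketched in a few lines of numpy; the signal normalisation and the relative handling of the sensitivity are our assumptions:

import numpy as np

def evaluation_signal(reference, tests):
    """Max |cross-correlation| of each normalised test current trace
    against the reference trace (the relational detector of step (c))."""
    r = (reference - reference.mean()) / reference.std()
    vals = []
    for t in tests:
        t = (t - t.mean()) / t.std()
        vals.append(np.abs(np.correlate(r, t, mode='full')).max() / r.size)
    return np.array(vals)

def reject(eval_ref, eval_test, sensitivity=0.05):
    """Step (d): reject traces whose evaluation value strays from the
    threshold (mean of the reference evaluation signal) by > sensitivity."""
    thr = eval_ref.mean()
    return np.abs(eval_test - thr) > sensitivity * thr

Comparing the rejections with the known ground truth of each dataset then yields the True/False Positive and True/False Negative counts.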
Therefore, it can be interpreted that: (a) the circuit current signal is time and test pattern variant; and (b) the extracted current signals from applying two different input patterns have dissimilar variations at any given time, even if they have the same overall trend. In fact, these variations can cause a specific change in the level of the evaluation signal. The calculated evaluation signals for the four datasets of the CRC data decoder along with their threshold value are demonstrated in Figure <ref>.It can be comprehended from the figure that the evaluation signal of the first dataset is nearly stable and has the least amount of variations, which is due to not having remarkable variations among different applied input patterns in the circuit normal condition. The evaluation signals of the second and the third datasets have larger variations with having relatively constant behavior. The evaluation signal of the fourth dataset has the largest amount of variations and its behavior may be considered as abnormal in comparison with the other datasets. In the next step, the four basic statistical metrics for analysis of the detector performance with different levels of sensitivity are calculated. The results for these metrics using the four datasets of the CRC data decoder are presented in Table <ref>.According to the definitions of the four basic statistical metrics, the detector shows perfect performance in identification of the circuit in the normal condition as well as detection of the hardware Trojan. Also, it demonstrates a good performance in identifying the circuit when the variations of the process technology and the temperature are acceptable. Similar performance capability can be observed from the detector in identification of the KATAN block cipher circuit, which is shown in Table <ref>.§ CONCLUSION In this paper, we propose a built-in-self-test architecture for security checking of the MTJ device under malicious process variations attack. The architecture consists of three main elements: sender, physical transmission medium (i.e. an array of the MTJ cells), and receiver. A healthy array of MTJ cells doesn't change the sent information and delivers them to the receiver on time for integrity checking. The lack of need for including additional memory resources for testing as well as detecting faults without necessity to propagate them throughout the circuit under test are the primary privileges of this architecture over the traditional testing and verification methods. Also, a general identification technique is presented to discover any abnormal behavior and activity shown from the employed circuitries within the architecture. According to this technique, the existing features inside the current signal of a circuit under test can be used in order to identify it in different conditions, distinguish it from different circuits, and detect its possible infection caused by a hardware Trojan. The experimental results show that the technique has adequate performance in identifying the circuit under test in normal and malicious conditions as well as under typical process and temperature variations. IEEEtran Shayan Taheri received the B.S. degree in Electrical Engineering from the Shahid Beheshti University (National University of Iran), Tehran, Iran, and the M.S. degree in Computer Engineering from the Utah State University, Logan, UT, USA, in 2013 and 2015, respectively. He is currently pursuing the Ph.D. 
degree in Electrical Engineering at the University of Central Florida, Orlando, FL, USA. His research interests and experiences include the applications of new transistor and memory technologies in secure and low power VLSI design, hardware Trojan design and analysis for the Internet of Things (IoT) devices, leveraging signal processing in hardware security, and VLSI Testing and Verification. Jiann-Shiun Yuan received the M.S. and Ph.D. degrees from the University of Florida, Gainesville, in 1984 and 1988, respectively. In 1988 and 1989 he was with Texas Instruments Incorporated, Dallas, for CMOS DRAM design. Since 1990 he has been with the faculty of the University of Central Florida (UCF), Orlando, where he is currently a Professor and Director of NSF Multi-functional Integrated System Technology (MIST) Center. He is the author of three textbooks and 300 papers in journals and conference proceedings. He supervised twenty-three Ph.D. dissertations, thirty-two M.S. theses, and five Honors in the Major theses at UCF. Since 1990, he has been conducting many research projects funded by the National Science Foundation, Intersil, Jabil, Honeywell, Northrop Grumman, Motorola, Harris, Lucent Technologies, National Semiconductor, and state of Florida. Dr. Yuan is a member of Eta Kappa Nu and Tau Beta Pi. He is a founding Editor of the IEEE Transactions on Device and Materials Reliability and a Distinguished Lecturer for the IEEE Electron Devices Society. He was the recipient of the 1995, 2004, 2010, and 2015 Teaching Award, UCF; the 2003 Research Award, UCF; the 2003 Outstanding Engineering Award, IEEE Orlando Section, the Excellence in Research Award at the full Professor level of the College of Engineering and Computer Science in 2015, and the Pegasus Professor Award, highest academic honor of excellence at UCF, in 2016.
http://arxiv.org/abs/1704.08513v1
{ "authors": [ "Shayan Taheri", "Jiann-Shiun Yuan" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20170427112449", "title": "Security Protection for Magnetic Tunnel Junction" }
§ INTEGRABILITY In this article, we consider an analytic curve Γ in the n-dimensional complex projective space ℙ^n such that Γ̅ is an algebraic curve, and a vector field X meromorphic on Ω⊂ℙ^n, a neighbourhood of Γ̅, such that Γ is an orbit of X. So Γ is of the form
Γ={(x_1(t),…,x_n(t)), t∈ℂ}
where x(t) is a solution of X. The curve Γ is completed by taking the algebraic closure Γ̅ in ℙ^n. The points Γ̅∖Γ are limits of the solution x(t) when t goes to infinity or to a singularity of x(t), and are equilibrium and singular points of X. We want to study integrability of X in the following sense. [See <cit.>] We say that X is meromorphically (l,n-l) integrable on Ω⊂ℙ^n if there exist meromorphic vector fields Y_1,…,Y_l-1 on Ω and functions F_1,…,F_n-l meromorphic on Ω such that * The vector fields X,Y_1,…,Y_l-1 are independent over the meromorphic functions on Ω and pairwise commute.* The functions F_1,…,F_n-l are functionally independent and are common first integrals of the vector fields X,Y_1,…,Y_l-1. Let us now define variational equations. The first order variational equation VE_1 is given by
Ż=∇ X(x(t)) Z, Z∈ℂ^n,
where x(t) is the solution on Γ depending on time. This is a time-dependent linear differential equation. We will note K the differential field generated by x_1(t),…,x_n(t), which will serve as the base field for Galois group computations. This differential system always admits a solution in K, corresponding to perturbations tangential to the orbit Γ. The first order variational equation can then be quotiented by this solution, defining an (n-1)-dimensional system, called the normal variational equation NVE_1.
If we try to define variational equations of higher order similarly, by considering the series expansion of X up to order k, we end up with nonlinear equations, which is not satisfactory for Galois group computations. So we define the higher variational equation VE_k of order k as a linear differential equation on the jet space of order k near the point x(t), see <cit.>. An efficient way to build this higher variational equation is to write the series expansion
X_j(x_1(t)+Z_1,…,x_n(t)+Z_n)= (∑_i_1,…,i_n f_j,i_1,…,i_n(t) Z_1^i_1… Z_n^i_n)_j=1… n,
introduce the variables Z_i_1,…,i_n, write the system
Ż_m_1,…,m_n= ∑_j=1^n m_j Z_j^m_j-1(∏_l≠ j Z_l^m_l) ∑_i_1,…,i_n f_j,i_1,…,i_n(t) Z_1^i_1… Z_n^i_n
and then substitute the monomials on the right-hand side
Z_1^i_1… Z_n^i_n→ Z_i_1,…,i_n if i_1+…+i_n≤ k, Z_1^i_1… Z_n^i_n→ 0 if i_1+…+i_n> k.
This defines a linear system in the Z_i_1,…,i_n with coefficients in K. Its solutions are linear combinations of Z_1(t)^i_1… Z_n(t)^i_n where Z_1(t),…,Z_n(t) are solutions of the nonlinear variational equation. Furthermore, a notion of higher normal variational equations NVE_k will be defined in section 2, generalizing the first order case NVE_1. To the variational equations VE_k we can now associate differential Galois groups Gal(VE_k), defined over the base field K. The Ayoul-Zung Theorem is the following.[Ayoul-Zung <cit.>] If the vector field X is meromorphically integrable on a neighbourhood of Γ, the Galois groups of the variational equations VE_k on Γ are virtually Abelian for all k∈ℕ^*. The proof of this theorem is based on <cit.>, to which it reduces by doubling the dimension. The purpose of this article is to prove the converse of the Ayoul-Zung Theorem under some generic conditions, i.e. if all the Galoisian conditions are satisfied, then the vector field is meromorphically integrable.
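As an illustration of this construction, the following sympy sketch builds the VE_2 of a toy 2-dimensional system by the monomial substitution described above; the vector field and its coefficient a(t) are arbitrary choices:

import itertools
import sympy as sp

t = sp.symbols('t')
Z1, Z2 = sp.symbols('Z1 Z2')
a = sp.Function('a')(t)          # coefficient of the linear part along Gamma

# series of X along the orbit in the deviation variables (n = 2 here)
X = [a*Z1 + Z1*Z2, -a*Z2 + Z2**2]
k = 2                            # order of the variational equation

idx = [m for m in itertools.product(range(k + 1), repeat=2)
       if 1 <= sum(m) <= k]
jet = {m: sp.Symbol('Z_%d%d' % m) for m in idx}

def truncate(expr):
    """Substitute Z1^i1*Z2^i2 -> Z_{i1 i2} if i1+i2 <= k, drop it otherwise."""
    out = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        coeff, mono = term.as_independent(Z1, Z2)
        i1, i2 = int(sp.degree(mono, Z1)), int(sp.degree(mono, Z2))
        if 1 <= i1 + i2 <= k:
            out += coeff * jet[(i1, i2)]
    return out

for m in idx:   # d/dt Z^m = sum_j m_j Z_j^(m_j-1) (prod_{l!=j} Z_l^m_l) Xdot_j
    rhs = sum(m[j] * Z1**(m[0] - (j == 0)) * Z2**(m[1] - (j == 1)) * X[j]
              for j in range(2) if m[j] > 0)
    print(jet[m], "' =", truncate(rhs))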
Let us first remark that a variational equation can be transformed into a system with algebraic coefficients by changing the time. Let us note ℂ(Γ̅) the field of rational functions on Γ̅. We can now introduce a variable s such that (possibly up to a permutation of the coordinates)
Γ̅={(γ_1(s),…,γ_n-1(s),s), s∈ℙ}
where the γ_i are (multivalued) algebraic functions. They generate the differential field ℂ(Γ̅). The k-variational equation using the parametrization by s is of the form
(S): ∂/∂ s Z=A(s) Z
with A a matrix with coefficients in ℂ(Γ̅). We will say that a differential system (S) with coefficients in ℂ(Γ̅) is Fuchsian if all singularities are regular, i.e. all solutions of (S) have singularities with at most polynomial growth. The monodromy group Mon(S) of the differential system (S) with coefficients in ℂ(Γ̅) is the group generated by the transformations of a fundamental matrix of solutions computed along closed loops of the Riemann surface defined by ℂ(Γ̅). The Zariski closure of the monodromy group is the Galois group, see <cit.>. The identity component of the Galois group is noted Gal^0(S), and the identity component of the monodromy group is then defined by
Mon^0(S)=Mon(S) ∩ Gal^0(S).
In this article, we will only consider the case when the normal first order variational equation is Fuchsian and its monodromy group is virtually diagonal, i.e. its identity component is generated, up to a common conjugacy, by diagonal matrices.
Let G be a group of diagonal matrices. The group G is * k-resonant for j if
∏_i=1^n-1 M_ii^k_i=M_jj, ∀ M∈ G
* Non-resonant if not k-resonant for all j∈{1,…,n-1}, k∈ℕ^n-1, | k|≥ 2
* Diophantine if there exists a basis (diag(λ_i,1,…,λ_i,n-1))_i=1… p of G such that
∑_ν=1^∞ 2^-ν ln( max_ϵ_j,k≠ 0, j=1… n-1, 2≤| k|≤ 2^ν ϵ_j,k^-1)<∞ with ϵ_j,k=max_i=1… p| ∏_l=1^n-1 λ_i,l^k_l -λ_i,j|
These are the same definitions as the simultaneous Brjuno condition in <cit.> and close to <cit.>. Remark that the Diophantine property is independent of the choice of the basis of G. Indeed, a basis change is equivalent to multiplication of k by an element of GL_n-1(ℤ), thus multiplying | k| by at most a constant. This multiplies the sum by at most a positive constant, and so does not change its finiteness status. Our main results are the following.
Let X be a meromorphic vector field on a neighbourhood of Γ̅, with Γ an algebraic solution of X. Assume NVE_1 is Fuchsian and Mon^0(NVE_1) is diagonal, non-resonant and Diophantine. The vector field X is meromorphically integrable on a finite covering over a neighbourhood of Γ if and only if all variational equations near Γ have a virtually Abelian Galois group. Moreover, if integrable, the vector field X is then (l,n-l) integrable with l=dim(Gal^0(NVE_1))+1.
A finite covering over a neighbourhood of Γ is introduced because, when Gal(NVE_1) is not connected, the vector fields we will build could be finitely multivalued on a neighbourhood of Γ, with Gal(NVE_1)/Gal^0(NVE_1) inducing a finite action on the valuations. An example is given in section 5.1. Conversely, the Ayoul-Zung Theorem still applies, as lifting the vector field X to a finite covering does not change Gal^0(VE_k), on which the conditions of the Ayoul-Zung Theorem hold. The non-resonance condition is necessary to ensure that Gal^0(NVE_k) does not grow, which is pivotal in the proof. We can remove this non-resonance condition if, on the other hand, we reinforce the conditions on the Galois groups Gal^0(NVE_k).
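The Diophantine condition can be probed numerically by truncating the sum; below is a minimal sketch with hypothetical eigenvalue tuples (a bounded partial sum is only suggestive, of course, since the condition concerns the full series):

import itertools
import math

lams = [(0.7, 1.9), (1.3, 0.6)]   # hypothetical eigenvalue tuples of a
n1 = 2                            # basis of Mon^0(NVE_1), with n - 1 = 2

def eps(j, k):
    """epsilon_{j,k} = max_i | prod_l lambda_{i,l}^{k_l} - lambda_{i,j} |."""
    return max(abs(math.prod(l**e for l, e in zip(lam, k)) - lam[j])
               for lam in lams)

partial = 0.0
for nu in range(1, 7):            # truncated Brjuno-type sum
    worst = 0.0
    for k in itertools.product(range(2**nu + 1), repeat=n1):
        if 2 <= sum(k) <= 2**nu:
            for j in range(n1):
                e = eps(j, k)
                if e > 0:
                    worst = max(worst, 1.0 / e)
    partial += 2.0**(-nu) * math.log(worst)
    print(nu, partial)             # bounded growth suggests Diophantine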
Let X be a meromorphic vector field on a neighbourhood of Γ̅, with Γ an algebraic solution of X. Assume NVE_1 is Fuchsian, Gal^0(NVE_k) ≃ℂ^l-1 for all k∈ℕ^*, and Mon^0(NVE_1) is Diophantine. Then the vector field X is (l,n-l) integrable on a finite covering over a neighbourhood of Γ.
This theorem cannot be transformed into an equivalence because Gal^0(NVE_k) ≃ℂ^l-1 for all k∈ℕ^* is not a necessary condition for integrability. It is however necessary for a stronger property, linearisability.
Let X be a time-dependent vector field meromorphic on an algebraic finite covering 𝒞 of a neighbourhood of {0∈ℂ^n }×ℙ. Let us note π:𝒞→ℂ^n the projection, Γ=π^-1(0)∖ S where S are the singular points of X on π^-1(0), and Γ̅=π^-1(0). Assume X=0 on Γ, that the NVE_1 near Γ̅ is Fuchsian, and that Mon^0(NVE_1) is diagonal and Diophantine. The vector field X is holomorphically linearisable on a neighbourhood of Γ if and only if Gal^0(NVE_k) ≃ℂ^l-1 for all k∈ℕ^*.
The linearisability problem corresponds to finding a time-dependent change of coordinates, holomorphic on a neighbourhood of Γ, such that X becomes its linear part in these new coordinates. The connection between linearisability and integrability is already made in <cit.> for a neighbourhood of an equilibrium point. The linearisation of Theorem <ref> implies integrability on a finite covering of a neighbourhood of Γ. However, not all integrable vector fields X on a finite covering of a neighbourhood of Γ are linearisable, as shown by the following example
q̇_1=α/s q_1+1/s q_1^2q_2, q̇_2=-α/s q_2-1/s q_1q_2^2, ṡ=1.
The solutions of this system can be written
q_1=c_1s^(α+c_1c_2), q_2=c_2s^(-α-c_1c_2), s=t+c_3.
The system is (2,1) integrable with
q_1q_2, q_1∂/∂ q_1-q_2∂/∂ q_2, (α/s+1/s q_1q_2)q_1∂/∂ q_1-(α/s+1/s q_1q_2)q_2∂/∂ q_2+∂/∂ s.
However the system is not linearisable near 0. Indeed, 2-dimensional linear diagonal systems have solutions using at most 2 hyperexponential functions, independent of the initial conditions, and here the hyperexponential function s^(α+c_1c_2) depends on c_1,c_2. Remark that the contrapositive of Theorem <ref> is satisfied: the system is not linearisable, and the Galois group of the NVE_k grows. This growth is obtained when differentiating (<ref>) with respect to c_1,c_2, producing log terms at order 3 and higher in c_1,c_2. The plan of the article is the following * In section 2, we define several reductions of the vector fields and the higher order normal variational equations NVE_k.* In section 3, we prove formally the right-to-left implications of the main Theorems <ref>,<ref>,<ref>.* In section 4, we prove that the formal first integrals and vector fields converge on a finite covering over a neighbourhood of Γ. We then finish the proofs of Theorems <ref>,<ref>,<ref>.* In section 5, we present some generalizations under stronger conditions, in particular the completion at singular points Γ̅∖Γ and the computation of the covering of Γ on which the vector fields and first integrals are defined.
§ NORMAL HIGHER VARIATIONAL EQUATIONS §.§ Good coordinates near Γ Let us consider the following coordinates on a neighbourhood of Γ̅
x=(γ_1(s)+q_1,γ_2(s)+q_2,…,γ_n-1(s)+q_n-1,s)
whose inverse is
(q,s)=(x_1-γ_1(x_n),…,x_n-1-γ_n-1(x_n),x_n).
These coordinates become singular when the γ_i are singular. Now in these coordinates, the curve Γ̅ becomes q=0. The vector field has Γ as a solution, and thus is tangent to Γ. So the vector field can now be written in the form
q̇_1=∑_i=1^n-1 q_iX_1,i(s,q), … , q̇_n-1=∑_i=1^n-1 q_iX_n-1,i(s,q), ṡ=X_n(s,q)
where the X_i,j are meromorphic in q,s on a neighbourhood of {0}×Γ̅.
Moreover, they are analytic on Γ except possibly where the γ_i or the initial vector field X become singular. The functions X_i,j can therefore be represented by series in q with coefficients meromorphic in s∈Γ̅. Meromorphic functions on Γ̅ are in fact algebraic, and form the field ℂ(Γ̅).
Example: ẋ_1=x_2, ẋ_2=-x_1. This vector field admits x_1^2+x_2^2=1 as an algebraic solution. In the (q,s) coordinates, we have
q̇=-qs/√(1-s^2), ṡ=-q-√(1-s^2).
The singular locus of γ_1(s)=√(1-s^2) is s=± 1, which is now a ramified pole of the vector field.
§.§ Gauge reduction If the first order variational equation has a virtually Abelian Galois group, the solutions can be written using only functions of the form
e^∫ w(s) ds or ∫ w(s) ds
with w(s) an algebraic function on Γ̅. Let us define what we will call hyperexponential functions and logarithmic functions. A hyperexponential function H on Γ̅ is a multivalued function such that
∂/∂ s H(s)=h(s) H(s), s∈Γ̅,
where h(s) is an algebraic function on Γ̅. A logarithmic function L on Γ̅ is a multivalued function such that
∂/∂ s L(s)=h(s), s∈Γ̅,
where h(s) is an algebraic function on Γ̅.
Let us now recall that our reduction also added singularities in the expression of the first order variational equation. However, these singularities are only due to the change of coordinates; the base field is still ℂ(Γ̅), and thus the Galois group and the monodromy group stay unchanged. When solving the NVE_1 with a virtually diagonal Galois group, we first consider the subfield of elements of 𝒫 algebraic over ℂ(s), where 𝒫 is the Picard-Vessiot field of the NVE_1. It is an algebraic extension of the field of rational functions on Γ̅ and so defines a Riemann surface Σ above Γ̅. This subfield is then equal to ℂ(Σ), where ℂ(Σ) is the field of rational functions on Σ. Now the Picard-Vessiot field can be expressed as
𝒫=ℂ(Σ)(H_1(s),…,H_l-1(s))
where the H_i are hyperexponential functions with logarithmic derivative in ℂ(Σ). Remark that to solve the VE_1 knowing the solutions of the NVE_1, we just have to integrate some linear combination of the solutions of the NVE_1. If the VE_1 has a virtually Abelian Galois group, we will have to consider at worst an additional logarithmic function for the Picard-Vessiot field of the VE_1, but no additional algebraic extension will be necessary.
Assume Gal^0(NVE_1) is diagonal. There exists a gauge transformation of the NVE_1 with coefficients in ℂ(Σ) such that the NVE_1 is then of the diagonal form
Z'=diag(X̃_1,1,…,X̃_n-1,n-1) Z.
The gauge transformation is a linear transformation on the Z's, but can also be applied to the (nonlinear) system (<ref>). The gauge transformation of Proposition <ref> applied to the variables q_1,…,q_n-1 of equation (<ref>) defines a vector field meromorphic in q_1,…,q_n-1 with coefficients in ℂ(Σ), which we call gauge reduced. After gauge reduction, the system is of the form
q̇_1=X̃_1,1(s)q_1+∑_| i|≥ 2 f_1,i(s) q^i, …, q̇_n-1=X̃_n-1,n-1(s)q_n-1+∑_| i|≥ 2 f_n-1,i(s) q^i, ṡ=X̃_n(s,q)
with f_j,i∈ℂ(Σ). The poles of the f_j,i, X̃_i,i, X̃_n in Σ are such that their projection on Γ̅ is either a point where γ is singular (due to the coordinate system near Γ) or a point of Γ̅∖Γ (proper singularities of the initial system). In any case, they belong to a finite set 𝒮⊂Σ, not depending on j,i. Remark that gauge reduction is simply a variable change, and thus conserves the commuting vector fields and first integrals on a neighbourhood of Γ if they exist.
However, due to the introduction of the algebraic Riemann surface Σ, these vector fields and first integrals will be a priori defined on a finite covering of a neighbourhood of Γ. Thus (l,n-l) integrability on a finite covering of a neighbourhood of Γ is conserved by gauge reduction.§.§ Time reduction We want in equation (<ref>) a normalization of X_n(s,q) to 1, but this corresponds to a change of time, and commuting vector fields are not invariant by time change. To keep track of the time change, we introduce the time variable t, and the equation ṫ=1. The vector fields are now in dimension n+1, and we can make the time change, giving the systemṫ=1/X_n(s,q), ṡ=1, q̇_1=∑_i=1^n-1 q_iX_1,i(s,q)/X_n(s,q), … , q̇_n-1=∑_i=1^n-1 q_iX_n-1,i(s,q)/X_n(s,q)The last step is to consider s as the new time, removing the equation ṡ=1q̇_1=∑_i=1^n-1 q_iX_1,i(s,q)/X_n(s,q), … , q̇_n-1=∑_i=1^n-1 q_iX_n-1,i(s,q)/X_n(s,q), ṫ=1/X_n(s,q)The vector field obtained is time dependant, and the new time s lives on Σ. The equation (<ref>) defines a vector field we call the time reduction of the vector field X. The time reduction and gauge reduction can be made independently, and if done so we call the resulting vector field time and gauge reduced. Each of these reductions have drawbacks * The gauge reduction requires additional ramifications as the resulting vector field has a series expansion in q with coefficients in ℂ(Σ) instead of ℂ(Γ̅)* The time reduction does not conserve commuting vector fields. Let us remark that time reduction can produce some additional singularities. The function X_n(s,q) corresponds to the tangential part of X along Γ, and thus vanishes when X vanishes. These are the true singular points on the curve Γ̅ (and so not in Γ). However, when γ is singular, this also can produce a singularity. Still this singularity is artificial, as it is due only to the coordinates system, and thus the monodromy of the variational equations will be trivial around it. Let us now define the higher normal variational equations The higher normal variational equation NVE_k is the variational equation of (<ref>) without the equation in t. The key point to allow this definition is that t does not appear in the equations in q_i, and so restricting the system to the q's has sense. We now need to check that the classical definition of NVE_1 coincide with this definition for k=1. The above definition of NVE_k when k=1 is the VE_1 quotiented by the solution corresponding to a tangential perturbation to Γ with a change of independent variable.We begin from equation (<ref>). The curve Γ̅ is the straight line q_1=q_2=…=q_n-1=0, so the tangential perturbation is in the direction s only. The first order variational equation isŻ=([ ∂_s X_n ∂_q_1 X_n … ∂_q_n-1 X_n; 0 X_1,1 … X_1,n-1; 0 …; 0 X_n-1,1 … X_n-1,n-1; ])ZThe matrix is already in block triangular form, and the first component of Z correspond to the perturbation in s, so tangential to Γ. So the VE_1 quotiented by the solution corresponding to a tangential perturbation to Γ is given by the last n-1 coordinates, which defines a (n-1)× (n-1) matrix whose entries are X_i,j(s,0).Let us now compute the NVE_1 according to Definition <ref>. It is given byZ'=1/X_n([ X_1,1 … X_1,n-1; …; X_n-1,1 … X_n-1,n-1; ])Zwith ' the differentiation in s. 
The function X_n(s,0) appearing in front of the matrix can be removed using dt=X_n(s,0) ds, and so these matrices are the same after a change of independent variable.§ FORMAL RESULTS §.§ Formal flow From now on, the NVE_1 is assumed to have a virtually diagonal monodromy group and to be Fuchsian (and so the Galois group is also virtually diagonal). When gauge reduced, we note H_1,…,H_n-1 the hyperexponential basis of solutions of the NVE_1 with logarithmic derivatives X̃_i,i. Let us consider X a time and gauge reduced vector field and assume that Gal^0(VE_k) is Abelian ∀ k∈ℕ^*. We assume moreover at least one of the following hypotheses * Mon^0(NVE_1) is non-resonant.* Gal^0(NVE_k) ≃ℂ^l-1 ∀ k∈ℕ^*Let us consider s_0 a regular point on Σ with X(s_0,0) non singular. Then there exists formal seriesφ_j(s,c_1,…,c_n-1)=∑_i∈ℕ^n-1a_j,i_1,…,i_n-1(s) (c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1for j=1… n-1 andφ_n(s,c_1,…,c_n-1)=∑_i∈Resc_1^i_1… c_n-1^i_n-1L_i(s)+ ∑_i∈ℕ^n-1a_n,i_1,…,i_n-1(s) (c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1such that (q_1(s),…,q_n-1(s),t(s))=φ(s,c_1,…,c_n-1) is a formal solution of X for any c,a_j,0,…,0,1,0,…,0=0if the 1 is not in position j , ∀ j=1… n-1 a_j,0,…,0,1,0,…,0=1if the 1 is in position j, ∀ j=1… n-1, a_j,m_1,…,m_n-1(s_0)=0if Mon^0(NVE_1)ismresonant forj,Res the set multi-indices such that H_1(s)^i_1… H_n-1(s)^i_n-1∈ℂ(Σ), a_j,i_1,…,i_n-1∈ℂ(Σ) and L_i logarithmic functions.Remark that the condition on a_j,m_1,…,m_n-1(s_0) is non empty only when Mon^0(NVE_1) is resonant.We will first prove the existence of the formal series φ_j, j=1… n-1. We prove it by recurrence on the order k=i_1+…+i_n-1. For k=1, we consider the solutions of the NVE_1. This givesZ_0,…,0,1,0,…,0= c_jH_jwith the1in positionjAs the particular solution is q_1=q_2=… =q_n-1=0, this gives for the first orderφ_j(s,c_1,…,c_n-1)= c_jH_j, j=1… n-1 Let us now assume the formal series φ_j, j=1… n-1 exist at order k-1. We will now prove they exist at order k. The φ at order k-1 we have induces a solution of the NVE_k-1 given byZ_m_1,…,m_n(s)= ∏_j=1^n-1φ_j(s,c_1,…,c_n-1)^m_j mod<(c^i)_i_1+…+i_n=k>We now want to add terms of order k to this solution to obtain a solution of the NVE_k. Remark that a solution of NVE_k does not always leads to a series expansion of a solution of X near Γ. This is due to the linearisation process made in constructing the higher variational equations.Let us look at the variables Z_m_1,…,m_n(s) with m_1+…+m_n≥ 2. Using the formulaZ_m_1,…,m_n(s)= ∏_j=1^n-1φ_j(s,c_1,…,c_n-1)^m_j mod<(c^i)_i_1+…+i_n=k+1>we see that the unknown terms in c of order k in φ_j are not necessary to compute these expressions. This is because the total valuation in c of the φ_j,j=1… n-1 is 1, and as we have m_1+…+m_n≥ 2, any term in c of order k in φ_j after expanding the product gives terms of order at least k+1, which disappear in the modulo.We now have almost all the components of a solution of the NVE_k which comes from a series expansion of a solution of X near Γ. The only components we do not know are Z_0,…,0,1,0,…,0 with the 1 in position 1 to n-1. These component will give the expression of φ_j at order k. They can be computed using variation of constants, giving the formulaZ_0,…,0,1,0,…,0=H_j(s) ∑_2≤| i|≤ k∫ f_j,i(s)Z_i(s)/H_j(s) dswith f_j,i∈ℂ(Σ) coming from the computation of the series expansion of X near Γ. 
Now knowing that Z_m is a ℂ(Σ) linear combination of (c_1H_1(s))^i_1… (c_dH_n-1(s))^i_n-1 with 2≤| i|≤ k, we haveZ_0,…,0,1,0,…,0=H_j(s) ∑_2≤| i|≤ k∫ g_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s) dswith g_j,i∈ℂ(Σ). Using φ_j at order k-1, we already know the expression of Z_0,…,0,1,0,…,0 at order k-1 in c, givingZ_0,…,0,1,0,…,0=φ_j(s)_|order k-1+H_j(s) ∑_| i| = k∫ g_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s) ds.The VE_k has a virtually Abelian Galois group, thus Z_0,…,0,1,0,…,0 is in a virtually Abelian extension of ℂ(Σ) for any c. Then each term of the sum should be as there cannot be any cancellation between the integrals due to each term having different coefficient c^i.Let us now prove the two following Lemmas If a hyperexponential function H∈ℂ(Σ)(H_1,…,H_n-1) admits an integral in ℂ(Σ)(H_1,…,H_n-1) then it admits an integral in ℂ(Σ).H.The integral of H has at most an additive monodromy group over the base field ℂ(Σ)(H). However, when know that it is in ℂ(Σ)(H_1,…,H_n-1) which has a multiplicative group. This implies that∫ H(s) ds ∈ℂ(Σ)(H)Now two cases. If H∈ℂ(Σ), then the Lemma follows immediately as ∫ H(s) ds ∈ℂ(Σ) and then ∫ H(s) ds/H ∈ℂ(Σ). If H∉ℂ(Σ), we write∫ H(s) ds= F(s,H(s)), F∈ℂ(Σ)(x).Now as H is hyperexponential, we can act the Galois group on this equality giving∫α H(s) ds=F(s,α H(s)),α∈ℂ^*.Making a series expansion at α=0 and identifying the powers of α, we obtain ∫ H(s) ds= g(s) H(s),gℂ(Σ)which gives the Lemma.If a hyperexponential function H∉ℂ(Σ) admits an integral in virtually Abelian extension of ℂ(Σ), then it admits an integral in ℂ(Σ).H.The integral of H has at most an additive monodromy group over the base field ℂ(Σ)(H). As we also know that H∉ℂ(Σ), the monodromy acts on H multiplicativelyσ_α(H)=α HIf the monodromy group of ∫ H over ℂ(Σ)(H) is not identity, we have also a monodromy elementδ_β(∫ H)=β+∫ H.Now computing the commutator, we have[σ_α,δ_β]=δ_β (1-α)So the monodromy would not be commutative, which is not compatible with the hypotheses. This implies that the monodromy is identity, and so that ∫ H ∈ℂ(Σ)(H). We now apply the previous Lemma <ref>, giving that it admits an integral in ℂ(Σ).H.Let us first assume we have the non resonance hypothesis. We have(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s)∉ℂ(Σ)Due to the definition of Σ, we have moreover that(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s)∉ℂ(Σ)We can now use the Lemma <ref>, and we obtain a solution for Z_0,…,0,1,0,…,0 of the formZ_0,…,0,1,0,…,0=φ_j(s)_|order k-1+∑_| i| = kg̃_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1This gives an expression for φ_j at order k in c for j=1… n-1.We now assume the hypothesis Gal^0(NVE_k)≃ Gal^0(NVE_1), ∀ k∈ℕ^*. We have to integrate terms of the formg_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s)As the Galois group does not grow, we can apply Lemma <ref>, and so we know it admits an integral of the formg̃_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s),g̃_j,i∈ℂ(Σ)If the multi index i is resonant with respect to j, we moreover have that(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s)∈ℂ(Σ)So we can freely add a constant to this integral, keeping the same form, just changing the g̃_i,j(s). As s_0 is a regular point, the H do not vanish nor become singular at s_0, the function g_i,j is not singular at s_0, and thus adjusting the constant we can always assume that g̃_i,j(s_0)=0. 
Now doing this on all terms, we obtain a solution for Z_0,…,0,1,0,…,0 of the formZ_0,…,0,1,0,…,0=φ_j(s)_|order k-1+∑_| i| = kg̃_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1with moreover g̃_j,i(s_0)=0 when i is resonant with respect to j. This gives the expression for φ_j at order k in c for j=1… n-1.Now let us focus on the last case φ_n. We have∂/∂ sφ_n(s)=1/X_n(s,φ_j(s)_j=1… n-1)Expanding in series the right term, we obtain an expression of the formφ_n(s)=∫1/X_n(s,0) ds +∑_1≤| i|≤ k∫ g_n,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1 dswith g_n,i∈ℂ(Σ). The set Res is the the multi-indices such thatH_1(s)^i_1… H_n-1(s)^i_n-1∈ℂ(Σ).Remark that if i∉Res, then we moreover haveH_1(s)^i_1… H_n-1(s)^i_n-1∉ℂ(Σ)as ℂ(Σ) contains all algebraic functions in the Picard Vessiot field 𝒫. Now for each i∈ Res, we have thatg_n,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1∈ℂ(Σ)and thus its integral is a logarithmic function. And for i∉Res, Lemma <ref> applies, and gives an integral of the form∫ g_n,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1 ds=g̃_n,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1This gives the expression of φ_n of the Proposition. Let us remark that in the first case, we also proved that the Galois group does not grow, as we built the general solution (φ_j(s,·))_j=1… n-1 as a formal series with coefficients in the Picard Vessiot field of the NVE_1. Thus the solutions of higher normal variational equations also belong to this field. This proves the corollary Under the conditions of Proposition <ref>, Mon^0(NVE_1) non-resonant implies Gal^0(NVE_k) ≃ℂ^l-1 ∀ k∈ℕ^*. This also implies that Theorem <ref> implies the right to left implication of Theorem <ref>. The right to left implication of Theorem <ref> is given by Ayoul-Zung Theorem. So we still only have to prove Theorem <ref>. Proposition <ref> is also close to a more streamlined equivalent of <cit.> in the non Hamiltonian case with “small” Galois group. Indeed, our solutions series allow to find a linearisation map (see section 3.3.), which induces a linear transformation on the jet space diagonalizing the NVE_k. §.§ Formal integrabilityLet us consider X a meromorphic vector field in the neighbourhood of an algebraic curve Γ̅ with Γ a solution of X. Let us assume that Gal^0(VE_k) is Abelian ∀ k∈ℕ^* and note l=dim(Gal^0(NVE_1))+1. Assume Gal^0(NVE_k) ≃ℂ^l-1 ∀ k∈ℕ^*. Then X is formally (l,n-l) integrable on a finite covering of a neighbourhood of Γ̅.We can always assume the vector field X to be gauge reduced as this does not impact integrability on a finite covering of a neighbourhood of Γ. Let us now make the time reduction on X and apply Proposition <ref>. We build the formal series solution φ. The restricted function (φ_j(s,·))_j=1… n-1 is at first order given byφ_1=c_1H_1(s),…,φ_n-1=c_n-1H_n-1(s)Thus we can formally invert the map(φ_j)_j=1… n-1(s,·) :(c_1H_1(s),…,c_n-1H_n-1(s)) →ℂ^n-1givingc_jH_j(s)=Φ_j(q,s), q∈ℂ^n-1Now the hyperexponential solutions H_1,…,H_n-1 are possibly not algebraically independent. This is the case when l<n. In such case, we can build n-l independent non trivial relations between the H_j of the form∏_j=1^n-1 H_j(s)^k_j∈ℂ(Σ), k_i∈ℤNow replacing the H_j by Φ_j(q,s), this allows to build formally n-l first integrals F_1,…,F_n-l∈ℂ(Σ)((q)). They are functionally independent because they are independent at first order. Now first integrals are not affected by the time reduction, and thus F_1,…,F_n-l are also formal first integrals of the vector field X. We now want to build the formal vector fields. 
We consider the n functionsJ_i=ln H_i(s)- lnΦ_i(q,s) i=1… n-1 J_n=∑_i∈ResΦ_1(q,s)^i_1…Φ_n-1(q,s)^i_n-1/H_1(s)^i_1… H_n-1(s)^i_n-1 L_i(s)+ ∑_i∈ℕ^n-1a_n,i_1,…,i_n-1(s) Φ_1(q,s)^i_1…Φ_n-1(q,s)^i_n-1-tThese formal expressions are functionally independent first integrals of the vector field X after time reduction, and thus so they are before the time reduction.Let us consider the vector space V of vectors v∈ℂ(Σ)((q))^n such that∑_j=1^n-1∂_q_j F_i v_j+∂_s F_i v_n=0 ∀ i=1… n-lThe commuting vector fields we are searching should be in V as the F_i should be first integrals of them. As the F_i are independent, the dimension of V is l over ℂ(Σ)((q)), and we note v_1,…,v_l a basis of them.The logarithmic derivatives of the F_i are linear combinations of the derivatives of the J_i. So let us consider a basis B of the supplementary space of dimension l of these linear combinations, and noteJ̃_i=∑_j=1^n B_i,j J_jRemark we can assume J̃_n= J_n as J_n cannot intervene in the expressions of the first integrals F_i because of the t appearing in it.Let us now define the n× n matrixJac=([ ∂_q_1 J_1 … ∂_q_n-1 J_1 ∂_s J_1; …; ∂_q_1 J_n … ∂_q_n-1 J_n ∂_s J_1; ])and the l× l matrixJ̃ãc̃=([ ℒ_v_1J̃_1 … ℒ_v_lJ̃_1; …; ℒ_v_1J̃_l … ℒ_v_lJ̃_l; ])All the logarithmic derivatives of J_1,…,J_n-1 are in ℂ(Σ)((q)), and thus so are ℒ_v_i J_j,j=1… n-1, i=1… l. For J_n, when computing ℒ_v_i J_n, the coefficients in front of the logarithmic functions L_i(s) are first integrals, and thus their Lie derivative with respect to v_i is zero. Thus the logarithmic functions L_i(s) are always differentiated, and thus ℒ_v_iJ̃_n∈ℂ(Σ)((q)), ∀ i=1… l. So the coefficients of matrix J̃ãc̃ are in ℂ(Σ)((q)). Let us consider an invertible matrix M∈ M_n(K) where K is a differential field for the derivations in x_1,…,x_n. The vector fields ∑_i=1^n M_i,j∂/∂ x_i pairwise commute if and only if the 1-forms ∑_j=1^n (M^-1)_i,j dx_j are closed.In this proof, we will note [ ]_j for the j-th column extraction of a matrix, and ( )_k,j the k,j coefficient extraction. The closure condition for the differential forms writes[∂_q_i (M^-1)]_j =[∂_q_j (M^-1)]_i,∀ i,j[M^-1(∂_q_iM)M^-1]_j =[M^-1(∂_q_jM)M^-1]_i,∀ i,jM^-1(∂_q_iM)[M^-1]_j =M^-1(∂_q_jM)[M^-1]_i,∀ i,j(∂_q_iM)[M^-1]_j =(∂_q_jM)[M^-1]_i,∀ i,j ∑_k=1^n [∂_q_iM)]_k(M^-1)_k,j= ∑_k=1^n [∂_q_jM]_k (M^-1)_k,i,∀ i,j ∑_k=1^n ((M^⊺)^-1)_j,k [∂_q_iM)]_k = ∑_k=1^n ((M^⊺)^-1)_i,k [∂_q_jM]_k,∀ i,j ∑_k=1^n ((M^⊺)^-1)_j,k (∂_q_iM))_p,k= ∑_k=1^n ((M^⊺)^-1)_i,k (∂_q_jM)_p,k,∀ i,j,pNoting B_p=([ (∂_q_1M)_p,1… (∂_q_nM)_p,1;… …; (∂_q_1M)_p,n… (∂_q_nM)_p,n;]),this relation above rewrites((M^⊺)^-1 B_p)_i,j=((M^⊺)^-1 B_p)_j,i,∀ i,j,p(M^⊺)^-1 B_p = B_p^⊺ M^-1, ∀ p.In other words the matrix (M^⊺)^-1 B_p is symmetric for all p. Let us now write the commutation condition∑_k=1^n (M)_k,i [∂_q_k M]_j = ∑_k=1^n (M)_k,j [∂_q_k M]_i, ∀ i,j ∑_k=1^n (M)_k,i (∂_q_k M)_p,j= ∑_k=1^n (M)_k,j (∂_q_k M)_p,i, ∀ i,j,p ∑_k=1^n (M^⊺)_i,k(B_p^⊺)_k,j= ∑_k=1^n (M^⊺)_j,k (B_p^⊺)_k,i, ∀ i,j,p (M^⊺ B_p^⊺)_i,j= (M^⊺ B_p^⊺)_j,i , ∀ i,j,p M^⊺ B_p^⊺= B_p M, ∀ p.In other words the matrix B_p M is symmetric for all p.Now to conclude, multiplying the first condition by M on the right and by M^⊺ on the left, we obtain(M^⊺)^-1 B_p = B_p^⊺ M^-1⇔ B_p M= M^⊺ B_p^⊺ The matrix J̃ãc̃ is a submatrix of Jac. The matrix Jac is invertible and its lines form closed 1-forms. So we can apply Lemma <ref> with M=Jac^-1, and thus the columns of Jac^-1 form commuting vector fields. 
The vectorsY_j=∑_i=1^l (J̃ãc̃^-1)_i,j v_i, j=1… lare linear combinations of the columns of Jac^-1, and thus are commutative vector fields. Let us finally check that X is among them. We have ℒ_X J_i=0 , ∀ i=1… n-1 and ℒ_X J_n=1. These equalities rewrite in matrix formJac([ X_1; …; X_n ])=([ 0; …; 0; 1 ])and thus X is given by the last column of Jac^-1. As moreover ℒ_X F_i=0 , ∀ i=1… n-l, X belongs to the vector space V and thus is a linear combination of the Y_j. After a basis change, we can assume for example that Y_l=X.Thus X admits n-l formal first integrals F_1,…,F_n-l∈ℂ(Σ)((q)) and l formal commuting vector fields X,Y_1,…,Y_l-1 with coefficients in ℂ(Σ)((q)). Thus X is formally (l,n-l) integrable on a finite covering of a neighbourhood of Γ̅.The first integrals come from the algebraic relations between the hyperexponential functions, and can effectively appear because the non resonance condition only holds on ℕ in contrary to the relations taken into account by the Galois group which hold over ℤ. The Proposition <ref> gives a formal result for Theorem <ref>, and using Corollary <ref>, also gives a formal result for the right to left implication of Theorem <ref>. Exampleq̇=α q/s(q^3+q^2s+s),ṡ=1/q^3+q^2s+sWe make the time reduction, givingq̇=α q/s,ṫ=q^3+q^2s+swhereis now the derivation in s. The series of Proposition <ref> areq=c_1s^α , t=1/2s^2+s^2(c_1s^α)^2/2α+2 +s(c_1s^α)^3/3α+1There are no resonant terms for α≠ -1,-1/3. The Galois group for α∉ℚ of NVE_1 is ℂ^*, and so there are no meromorphic first integrals in q,s. The functions J_i areJ_1=αln s- ln q, J_2=1/2s^2+s^2(c_1s^α)^2/2α+2 +s(c_1s^α)^3/3α+1-tAs there are no first integrals, we have J̃ãc̃=Jac, and we obtainJac=([-1/q α/s; 2s^2q/2α+2+3sq^2/3α+1 s+2sq^2/2α+2+q^3/3α+1 ])Now inverting this matrix, we find for the first column up to multiplication by a constant the following commuting vector field(3α q^2s+3α^2s+q^2s+4α s+s+α q^3+q^3)q/q^3+q^2s+s∂/∂ q- q^2s(3α q+3α s+3q+s)/q^3+q^2s+s∂/∂ sWhen α∈ℚ, the system admits a first integral,F_1(q,s)=q^denom(α)s^-numer(α)This first integral is indeed the exponential of -denom(α) J_1.Remark that when ṡ=1 already before the time reduction, the equation in time is ṫ=1. Then the matrix Jac is of the form([ *;-∇_q Φ *; *;0…0* ])The vector space V of vector fields independent of X such that with respect to them the Lie derivative of the first integrals of X is 0 is sent by this matrix on LieGal(NVE_1). Thus Jac^-1.LieGal(NVE_1) is the vector space generated by commuting vector fields independent with X and with common first integrals the F_i. Now this can also be seen as the the action of LieGal(NVE_1) on φ. Indeed, differentiating the φ along LieGal(NVE_1) is simply(∇_q Φ^-1)(Φ(q,s)).LieGal(NVE_1) =-(∇_q Φ)^-1.LieGal(NVE_1) =Jac^-1.LieGal(NVE_1)the last equality being true because Jac is a block triangular matrix. This process is easier to compute and show how LieGal(NVE_1) acts on the non linear system.§.§ Formal linearisability Let us now find a formal coordinates change for Theorem <ref>.Let X be a time dependant vector field meromorphic on an algebraic finite covering 𝒞 of a neighbourhood of {0∈ℂ^n-1}×ℙ. Let us note π:𝒞↦ℂ^n-1 the projection, Γ=π^-1(0)∖ S where S is the finite set of singular points of X on π^-1(0) and Γ̅=π^-1(0). Assume X=0 on Γ, the NVE_1 near Γ̅ is Fuchsian and Mon^0(NVE_1) is diagonal and Diophantine. If Gal^0(NVE_k) ≃ℂ^l-1 for all k∈ℕ^* then the vector field X is formally linearisable on a neighbourhood of Γ̅. 
Adding the equation ṫ=1 withcorresponding to the derivative in s, the vector field X(s,q) defines a system time reduced of the from (<ref>). We make a gauge reduction of X (the vector field X) and apply Proposition <ref>. We can formally invert the map(φ_j)_j=1… n-1(s,·) :(c_1H_1(s),…,c_n-1H_n-1(s)) →ℂ^n-1givingc_jH_j(s)=Φ_j(q,s), q∈ℂ^n-1Thus in new coordinates defined by Φ, the vector field X becomes linear with associated matrix([ X̃_1,1(s) 0 … 0; …; 0 … 0 X̃_n-1,n-1(s); ])where X̃_i,i(s) are logarithmic derivatives of the H_i. The coordinates change Φ is a formal series with coefficients in ℂ(Σ).Let us note P the matrix with coefficients in ℂ(Σ) given by Φ at first order. We now consider the transformation P^-1Φ. The matrix P^-1 is a gauge transformation sending equation (<ref>) to the linear part of X. Moreover by construction, the application P^-1Φ is tangent to identity.We know that P^-1Φ∈ℂ(Σ)[[q_1,…,q_n-1]]^n-1. Let us now consider the action of G=Gal(ℂ(Σ):ℂ(Γ̅)) on it. As the coefficients of the linear part of X are in ℂ(Γ̅) (recall that X is meromorphic in a neighbourhood of Γ̅), the linear part of X is left invariant by the action of G. Now acting G on P^-1Φ gives possibly several transformations, all tangent to identity, sending X to its linear part. Composing such a transformation with the inverse of another one, we obtain a transformation stabilizing the linear vector field associated to the linear part of X, and tangent to identity. Such a transformation has to be identity.Thus G leaves P^-1Φ invariant, and thus P^-1Φ∈ℂ(Γ̅)[[q_1,…,q_n-1]]^n-1. Thus X is conjugated to its linear part by a transformation in ℂ(Γ̅)[[q_1,…,q_n-1]]^n-1, i.e. a formal transformation on a neighbourhood of Γ̅.§ THE ZIGLIN GROUP§.§ DefinitionsLet us consider X a gauge reduced vector field and a point s_0∈Γ. Let us consider a closed curve γ on Γ with s_0∈γ and note Φ_γ∈ℂ{q_1,…,q_n-1}^n-1 the germ of holomorphic map given by the flow of X along γ with initial condition s_0,q_1,…,q_n-1. The Ziglin group Zig(X) is the group of germs of holomorphic maps Φ_γ for all such curves γ. The subgroup Zig^0(X) is the group of holomorphic maps Φ_γ for all such curves γ whose lift on Σ is closed. The Jacobian matrices of the elements of the Ziglin group are the monodromy matrices of the NVE_1. The Jacobian matrices of the elements of Zig^0(X) are monodromy matrices of the NVE_1 along closed curves on Σ, i.e. elements of Mon^0(NVE_1). Due to this, the Ziglin group generalize the monodromy group of the NVE_1, which was first used by Ziglin <cit.> to prove non integrability. Let us consider X a time and gauge reduced vector field. If Gal^0(VE_k) is Abelian ∀ k∈ℕ^*, then Zig^0(X) is Abelian. MoreoverZig(X)/Zig^0(X) ≃Gal(NVE_1)/Gal^0(NVE_1) Let us consider the flow Φ with initial condition q_1^0,…,q_n-1^0 at s_0. Its series expansion at order k in the initial conditions q_1^0,…,q_n-1^0 gives a vector of functions in s, which is a solution of the NVE_k. Now computing this flow along a closed loop γ on Σ defines an element of Zig^0(X). Doing the same on the series expansion at order k defines a monodromy matrix of the NVE_k. Now knowing that γ is closed on Σ, we also know that this monodromy matrix belongs to Gal^0(NVE_k). This group is Abelian by hypothesis, and thus any pair of elements of Zig^0(X) commute up to order k. This is valid for any k∈ℕ^*, and the elements of Zig^0(X) are holomorphic maps. 
Thus Zig^0(X) is Abelian.Let us now remark that Zig(X)/Zig^0(X) is a subgroup of the group of the covering Σ over Γ̅ (which is a finite group as Σ is an algebraic Riemann surface over Γ̅). By construction of Σ, we also haveGal(ℂ(Σ):ℂ(Γ̅))=Gal(NVE_1)/Gal^0(NVE_1)Thus Zig(X)/Zig^0(X) ⊂Gal(NVE_1)/Gal^0(NVE_1) Now remark that Φ_γ at first order gives the element of Mon(NVE_1) generated by the closed curve γ. And thusZig(X)/Zig^0(X) ⊃Mon(NVE_1)/Mon^0(NVE_1)As we know that Mon(NVE_1)/Mon^0(NVE_1) is finite, we deduce it is equal to Gal(NVE_1)/Gal^0(NVE_1), which gives the Proposition. Remark that when computing Gal(NVE_k),k≥ 2, only integrals are necessary, and never algebraic extensions. Thus the number of connected components of Gal(NVE_k) does not grows and soGal(NVE_k)/Gal^0(NVE_k) ≃Gal(NVE_1)/Gal^0(NVE_1)The Proposition thus extends this result to Zig(X).§.§ ConvergenceThe formal series solution of Proposition <ref> under the additional condition that Mon^0(NVE_1) is Diophantine is convergent on a neighbourhood of c=0 with s not projecting on a singularity of X.Let us fix an s_0∈Σ which projects not on a singularity of X, and let us consider the restricted function (φ_j(s_0,·))_j=1… n-1. This defines an invertible formal map in the neighbourhood of (q_1,…,q_n-1)=0. Let us note the new coordinates c_1,…,c_n-1.Now computing (φ_j(s_0,·))_j=1… n-1 along a closed curve on Σ with base point s_0 gives in the coordinates c_1,…,c_n-1 a diagonal transformation (recall that Mon^0(NVE_1) is a diagonal multiplicative group). Thus in the coordinates c_1,…,c_n-1, the Ziglin group element Φ_γ is a linear diagonal transformation. The same hold for any closed curve γ on Σ, and thus any element of Zig^0(X) is diagonal in the coordinates c_1, …,c_n-1.Thus the coordinates change (φ_j(s_0,·))_j=1… n-1 makes a simultaneous linearisation of all elements of Zig^0(X). The linear part of Zig^0(X) are the monodromy matrices Mon^0(NVE_1) which are Diophantine by hypothesis. We can now apply Stolovitch Theorem 2.1 <cit.>. The elements of Zig^0(X) are formally linearisable, and thus holomorphically linearisable. More importantly, the linearisation map is unique provided that resonant monomials in the transformation vanish. This is indeed the case of (φ_j(s_0,·))_j=1… n-1 as given by Proposition <ref>. Thus by uniqueness, the transformation (φ_j(s_0,·))_j=1… n-1 is convergent on a neighbourhood of 0.For now, we just have proved convergence at s_0. Let us note s_1∈Σ which projects on a non singular point for X and γ a path going from s_0 to s_1. Let us consider Φ_γ the flow of X along γ. We have then(φ_j(s_1,·))_j=1… n-1=Φ_γ∘ (φ_j(s_0,·))_j=1… n-1As Φ_γ and (φ_j(s_0,·))_j=1… n-1 are holomorphic in a neighbourhood of 0, so is(φ_j(s_1,·))_j=1… n-1. Using the formula ∂/∂ sφ_n(s)=1/X_n(φ_j(s)_j=1… n-1,s)we deduce that φ_n(s) also converges. Remark that the value of φ(s_1,·) depends on the path γ chosen. Changing the path between s_0 to s_1 is equivalent to compose with an element of Zig^0(X), which is diagonal in the coordinates c_1,…,c_n-1. Thus changing the path does not affect the convergence in a neighbourhood of 0 but can change the size of this neighbourhood. We now apply the same reasoning of Proposition <ref>, but using now converging series. This gives us vector fields and first integrals as converging series. Now these are converging outside the singularities of the time and gauge reduced vector field. 
These singularities correspond to zeros and singularities of the initial vector field, and singularities of the parametrization. So when going back to the initial variables x_1,…,x_n, the singularities of the parametrization disappear and we obtain commuting vector fields and first integrals, meromorphic on a finite covering of a neighbourhood of Γ.Finally, The number of vector fields produced is exactly dim(Gal^0(NVE_1))+1, as given by Proposition <ref>.§.§ End of proof of Theorem <ref> For the right to left implication, let us first remark that if the set of singularities S is the whole Γ, then the Theorem is empty. If it is not the whole Γ, as X is meromorphic on Γ̅, then S is finite. So we can use Proposition <ref> and we have a formal linearisation. Using moreover Proposition <ref>, the formal series of Proposition <ref> are convergent, and thus so is the coordinate change for the linearisation of X of Proposition <ref>.Let us now prove the left to right implication. The system is equivalent to its linear part with a holomorphic variable change on a finite covering of a neighbourhood of Γ. In these coordinates, X becomes linear, and as Gal^0(NVE_1) is diagonal, the linear system can be furthermore gauge reduced, giving a system of the form (the time being noted s)q'=([ X̃_1,1(s) 0 … 0; …; 0 … 0 X̃_n-1,n-1(s); ])qwith X̃_i,i(s) meromorphic on a finite covering of ℙ^1. The group Gal^0(NVE_k) of such equation is (ℂ^*)^l-1 for all k∈ℕ^* and the group Mon^0(NVE_k) is multiplicative. Now the gauge reduction is a variable change holomorphic on a finite covering of a neighbourhood of Γ, and so changes the monodromy group of NVE_k with at most a finite extension (or a finite index subgroup). Thus the monodromy group of NVE_k in the original coordinates is a finite extension of a diagonal group of matrices. As the system is Fuchsian, its Zariski closure is the Galois group, and thus Gal^0(NVE_k)≃ (ℂ^*)^l-1 with the same integer l. § COMPLETION AND FINITE COVERINGS Although the Theorems <ref>,<ref> are “in spirit” inverse of the Ayoul-Zung Theorem, they are not exactly because the integrability produced is not exactly the same as in the original Ayoul-Zung Theorem * The vector fields produced are defined on a finite covering over a neighbourhood of Γ, and thus are multivalued. This is due to algebraic extensions made for Σ. Thus strictly speaking, these vector fields are not meromorphic near Γ⊂ℙ^n.* The vector field is defined near Γ̅ and Galoisian conditions are over Γ̅, however the first integrals and vector fields produced are possibly not defined on a neighbourhood of Γ̅∖Γ.§.§ Minimal finite coverings for integrability The finite covering problem can be understood at the linear level. The Theorem <ref> gives a linearisation without needing a finite covering, but the resulting matrix system is not diagonal. The construction of vectors fields using Proposition <ref> however require gauge reduction, so diagonalization of the linear part. Let X be a meromorphic vector field of a neighbourhood of Γ̅, with Γ an algebraic solution of X. Assume NVE_1 is Fuchsian, Gal^0(NVE_k) ≃ℂ^l-1 for all k∈ℕ^* and Mon^0(NVE_1) is Diophantine. Let us note 𝒥_Σ,𝒥_Γ̅ the fields of first integrals of the NVE_1 with coefficients in ℂ(Σ) and ℂ(Γ̅) respectively. The vector field is integrable on a neighbourhood of Γ if and only if 𝒥_Σ/𝒥_Γ̅≃ℂ(Σ)/ℂ(Γ̅).We can assume the vector field X to be reduced as equation (<ref>), as it does not require algebraic extensions (the base field stays the rational functions on Γ̅). 
We begin by the right to left implication. We only need to find l-1 commuting vector fields and n-l first integrals (depending on the new time s). Let us first remark that the unitary minimal polynomial of an element of 𝒥_Σ has coefficients in 𝒥_Γ̅ (i.e. an algebraic first integral has a unitary minimal polynomial whose coefficients are also first integrals).Let us note w a generator of the algebraic extension ℂ(Σ)/ℂ(Γ̅). We note w_1,…,w_p its conjugates. By hypothesis, there exists F∈𝒥_Σ such that the action of G=Gal(ℂ(Σ):ℂ(Γ̅)) on w and F are exactly the same. We also note F_1,…,F_p the conjugates of F. The Galois group G acts as permutations on the w_1,…,w_p and F_1,…,F_p. Let us note Y_1,…,Y_l-1 the vector fields obtained by Theorem <ref>. Let us note (Y_i,j)_j=1… p the conjugates of Y_i (remark they could be dependent). We now consider the vector fields(∑_j=1^p F_j^m Y_i,j)_m=0… p-1,j=1… l-1Acting an element σ∈ G on such element givesσ(∑_j=1^p F_j^m Y_i,j) =∑_j=1^p σ(F_j^m Y_i,j) = ∑_j=1^p F_τ(j)^m Y_i,τ(j) =∑_j=1^p F_j^m Y_i,jwith τ∈ S_p. So these vector fields have coefficients in ℂ(Γ̅). The dimension of the vector space generated by (Y_i,j)_j=1… p,i=1… l-1 is at least l-1. We acted on it a matrix block diagonal whose blocks are the Vandermonde matrix given by the F_i. As the F_i are all different, this Vandermonde matrix is invertible, and the the dimension of(∑_j=1^p F_j^m Y_i,j)_m=0… p-1,j=1… l-1is at least l.To conclude, we need to build also the suitable first integrals. These are the elements of 𝒥_Γ̅. By Theorem <ref>, we know there are n-l independent first integrals in 𝒥_Σ, i.e. 𝒥_Σ is of transcendence degree n-l. As 𝒥_Γ̅ is a subfield of 𝒥_Σ of finite index, it has the same transcendence degree and so also contains n-l first integrals.Let us now prove the left to right implication. Let us first remark that the inclusion𝒥_Σ/𝒥_Γ̅⊂ℂ(Σ)/ℂ(Γ̅)is immediate. So we only have to prove the other way. We use Theorem <ref> to build vector fields Y_1,…,Y_l-1,Y_l=X and first integrals I_1,…,I_n-l. By hypothesis, we have Ỹ_1,…,Ỹ_l-1 meromorphic independent commuting vector fields in a neighbourhood of Γ. So this implies we can writeỸ_i=∑_j=1^l-1 f_i,j Y_ji=1… lwith f_i,j are first integrals in 𝒥_Σ. We can invert this linear relation, expressing the Y_j in functions of the Ỹ and the first integrals.Let us consider an element σ∈Gal(ℂ(Σ):ℂ(Γ̅)), and assume that σ fixes all the first integrals in 𝒥_Σ. We want to prove that σ=id. Due to the above relation, as σ fixes the Ỹ by assumption, we have that σ fixes the Y_j. Let us now consider the system of equationsℒ_Y_j f_i=δ_i,j i,j=1… lNow recalling the proof of Proposition <ref>, we know solutions for this system as linear combinationsJ_i=ln H_i(s)- lnΦ_i(q,s) i=1… n-1 J_n=∑_i∈ResΦ_1(q,s)^i_1…Φ_n-1(q,s)^i_n-1/H_1(s)^i_1… H_n-1(s)^i_n-1 L_i(s)+ ∑_i∈ℕ^n-1a_n,i_1,…,i_n-1(s) Φ_1(q,s)^i_1…Φ_n-1(q,s)^i_n-1-tLet us note J̃_1,…,J̃_l-1,J_n such solutions of system (<ref>) (J_n is a solution for i=l as we noted Y_l=X). The exponentials of certain linear combinations of the J_i form the first integrals F_i. These F_i are the first integrals of the Y_j, and the common kernel of the ℒ_Y_j is the functions in F_1,…, F_n-l.Thus all the solutions of system (<ref>) are of the formf_i=J̃_i+ Ψ_i(F_1,…,F_n-l), i=1… lLet us now look at the action of σ on J̃_i. 
As σ(J̃_i) should be still a solution of (<ref>), we haveσ(J̃_i)=J̃_i+ Ψ_i(F_1,…,F_n-l).The element fixes the first integrals F_i (whose logs are linear combinations of the J_i), and thus the action of σ on the J_i is of the formσ(J_i)=J_i+ Ψ̃_i(F_1,…,F_n-l), i=1… n.So in other words, the action of σ on H_i(s)/Φ_j(q,s) is a multiplication by an arbitrary function of (F_1,…,F_n-l).Now σ can also be seen as a element on Gal(NVE_1), and so can be represented by an (n-1)× (n-1) matrix. Let us precise the possible structures of Gal(NVE_1). The group Gal^0(NVE_1) is constituted of diagonal matrices. Let us regroup the H_i by blocks such that on each block any matrix of Gal^0(NVE_1) is a multiple of identity. Then an element of finite order of Gal(NVE_1) can * Act as a finite group on a block.* Permute different blocks.As σ∈Gal(NVE_1), we have that σ(H_i(s)/Φ_i(q,s)) should stay in the same differential field. Thus the action of σ is a multiplication by an element of 𝒥_Σ. Thus Gal^0(NVE_1) acts the same after and before σ, and so σ does not permute different blocks.If H_i and H_j belong to the same block, thenH_i(s)Φ_j(q,s)/H_j(s)Φ_i(q,s)is a first integral. Thus the action of σ on it is identity. Thus σ can at most multiply the H_i(s)/Φ_i(q,s) by a constant, and the same one on a block.So the matrix associated to σ is diagonal, and using that it fixes the first integrals, this diagonal matrix has to be in Gal^0(NVE_1). However an element Gal^0(NVE_1) acts trivially on ℂ(Σ), and thus σ=id.Any matrix of Gal(NVE_1) commuting with Gal^0(NVE_1) will act trivially on the vector fields Y_i. So we can consider the maximal subgroup of Gal(NVE_1) commuting with Gal^0(NVE_1) and then the quotient of Gal(NVE_1) by it. This defines a finite group which is the Galois group of ℂ(Σ̃), the field generated by the coefficients of Y_i. Only the “non-commutative” part of Gal(NVE_1) produce ramifications for the vector fields Y_i, and thus ℂ(Σ̃) is typically smaller than ℂ(Σ). Example 1 q̇_1=α q_2, q̇_2=α q_1-s q_2/s^2+1The Galois group is virtually diagonal, and after gauge reduction, we diagonalize it and we find the solutionsc_1(s+√(1+s^2))^α,c_2(s-√(1+s^2))^α.So ℂ(Σ)=ℂ(s,√(1+s^2)). The only first integral with coefficients in ℂ(Σ) is q_1^2-(1+s^2)q_2^2. Thus 𝒥_Σ=𝒥_Γ̅ and so this system is not integrable on a neighbourhood of q=0. It needs a 2-covering for its commuting vector field. Example 2 q̇_1=α q_2+s q_1/2(1+s^2), q̇_2=α q_1-1/2s q_2/s^2+1The Galois group is virtually diagonal, and after gauge reduction, we diagonalize it and we find the solutionsc_1(1+s^2)^1/4(s+√(1+s^2))^α,c_2(1+s^2)^1/4(s-√(1+s^2))^α.We have ℂ(Σ)=ℂ(s,√(1+s^2)). The only first integral with coefficients in ℂ(Σ) is (q_1^2-(1+s^2)q_2^2)/√(1+s^2). Thus 𝒥_Σ/𝒥_Γ̅≃ℂ(s,√(1+s^2))/ℂ(s) and so this system is integrable on a neighbourhood of q=0. 
The vector field and first integral produced by Theorem <ref> areq_2√(1+s^2)∂/∂ q_1+q_1/√(1+s^2)∂/∂ q_2,q_1^2-(1+s^2)q_2^2/√(1+s^2)We can remove the square root of the vector field by multiplying it with the first integral, and square the first integralq_2(q_1^2-(1+s^2)q_2^2) ∂/∂ q_1+q_1(q_1^2-(1+s^2)q_2^2)/1+s^2∂/∂ q_2,(q_1^2-(1+s^2)q_2^2)^2/1+s^2 Example 3 q̇_1=sq_1/4(s^2+4)-(s-2)q_3/4(s^2+4)+1/2α q_4 q̇_2=-α q_1/s^2+4-sq_2/4(s^2+4)+α(s+2)q_3/2(s^2+4)+(s-2)q_4/4(s^2+4) q̇_3=-(s-2)q_1/4s(s^2+4)+α q_2/2s-(s^2+8)q_3/4s(s^2+4) q̇_4=α (s+2)q_1/2s(s^2+4)+(s-2)q_2/4s(s^2+4)-α q_3/s^2+4-(3s^2+8)q_4/4s(s^2+4)The Galois group is virtually diagonal, and after gauge reduction we diagonalize it and we find the solutions (2+2√(s)+s)^1/4(1+√(s)+√(2+2√(s)+s))^α, (2-2√(s)+s)^1/4(1-√(s)+√(2-2√(s)+s))^α, (2+2√(s)+s)^1/4(1+√(s)-√(2+2√(s)+s))^α,(2-2√(s)+s)^1/4(1-√(s)-√(2-2√(s)+s))^α So ℂ(Σ)=ℂ(s,√(2+2√(s)+s)). The Galois group is of dimension 2. The first integrals and vector fields produced by Theorem <ref> areF_1=√(2+2√(s)+s)/s^2+4(s^3q_4^2+s^2q_2^2-s^2q_3^2-sq_1^2+4sq_1q_3-2sq_3^2+4sq_4^2-2q_1^2+4q_2^2 +2√(s)(s^2q_2q_4-sq_1q_3+sq_3^2+q_1^2-2q_1q_3+4q_2q_4)) F_2=√(2-2√(s)+s)/s^2+4(s^3q_4^2+s^2q_2^2-s^2q_3^2-sq_1^2+4sq_1q_3-2sq_3^2+4sq_4^2-2q_1^2+4q_2^2 -2√(s)(s^2q_2q_4-sq_1q_3+sq_3^2+q_1^2-2q_1q_3+4q_2q_4)) Y_1=(q_2+√(s)q_4)√(2+2√(s)+s)∂/∂ q_1+q_1+q_3√(s)/√(2+2√(s)+s)∂/∂ q_2+ (√(s)q_4+q_2)√(2+2√(s)+s)/√(s)∂/∂ q_3+q_3√(s)+q_1/√(s)√(2+2√(s)+s)∂/∂ q_4 Y_2=(q_2-√(s)q_4)√(2-2√(s)+s)∂/∂ q_1+q_1-q_3√(s)/√(2-2√(s)+s)∂/∂ q_2+ (√(s)q_4-q_2)√(2-2√(s)+s)/√(s)∂/∂ q_3+q_3√(s)-q_1/√(s)√(2-2√(s)+s)∂/∂ q_4The coefficients of F_1,F_2 define the field ℂ(Σ), and thus𝒥_Σ/𝒥_Γ̅≃ℂ(s,√(2+2√(s)+s))So this system is integrable on a neighbourhood of q=0. The meromorphic first integrals and vector fields areF_1^2+F_2^2,(F_1^2-F_2^2)^2,F_1Y_1+F_2Y_2,(F_1^2-F_2^2)(F_1Y_1-F_2Y_2) Remark that all our examples were linear in the q's. This is not only because it is easier, but also because under the Galoisian hypothesis we make, the vector fields are linearisable using Theorem <ref>. Now the construction of such example relies on finding a polynomial P∈ℂ(s)[X] with sufficiently many linear relations on the roots, i.e. the Galois group of P can be recovered by its action on these relations. It appears that quite complicated groups are possible <cit.>. This group in particular encodes how the stable/unstable reconnect in the neighbourhood of Γ. §.§ Linearisation at singular points The problem of extending our results to points of Γ̅∖Γ necessitates to prove the convergence of the series of Proposition <ref> at singular points. In particular, these series at the limit at a singular points can give formal transformations.Let X be a meromorphic vector field of a neighbourhood of Γ̅, with Γ an algebraic solution of X. Assume NVE_1 is Fuchsian, Gal^0(NVE_k) ≃ℂ^l-1 for all k∈ℕ^*, Mon^0(NVE_1) is Diophantine. Theorem <ref> applies and gives l meromorphic commutative vector fields Y, n-l independent meromorphic first integrals F and a Riemann surface Σ above Γ̅. Let us consider s_0∈Σ projecting on an equilibrium point of X and assume the local monodromy group around s_0 is non resonant Diophantine. Then Y,F extend meromorphically on a neighbourhood of (s_0,0).Let us first make the gauge reduction of the vector field X. We can assume s_0 projects on 0∈Γ̅. After the gauge reduction, the system becomesq̇_i=A/s q_i + 1/sR(s,q)with A=diag(α_1,…,α_n-1), R(s,q) holomorphic in a neighbourhood of q=0,s∈Σ with valuation in q at least 2. 
The pole at 0 is always of order at most 1 because s=0 is singular regular.We consider the series (φ_j(s,c))_j=1… n-1 given by Proposition <ref>φ_j(s,c_1,…,c_n-1)=∑_i∈ℕ^n-1a_j,i_1,…,i_n-1(s) (c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1for j=1… n-1.Let us first prove that s=0 is not a pole of a_j,i_1,…,i_n-1(s). Recall that in proof of Proposition <ref>, the a_j,i_1,…,i_n-1(s) are computed using the formulaH_j(s)/(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1∫ g_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s) dswhere g_j,i(s) is a coefficient of the series expansion of X in q. And thus g_j,i(s) has a pole of order at most 1. We have moreover H_i(s) ∼ s^α_i near 0 (after possibly a scaling of the H_i) and the α_1,…,α_n-1 are non resonant. Now we know by hypothesis that the integral is of the forma_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s) a_j,i(s)∈ℂ(Σ) Let us now prove that a_j,i has a limit at s_0. Making a series expansion near 0 gives∫ g_j,i(s)(c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1/H_j(s) ds =C+ bs^i.α-α_j+o(s^i.α-α_j)where b is a constant depending on g_j,i and C the integration constant. If i.α-α_j is negative, then the valuation of a_j,i at s_0 is non negative, and so a_j,i converges at s_0.If i.α-α_j is positive, then either C=0 and then a_j,i converges at s_0. Or C≠ 0 and then a_j,i∼ bs^-i.α+α_j.The local monodromy group near s_0 in Mon^0(NVE_1) is generated by the diagonal matrix diag(e^2imπα_1,…,e^2imπα_n-1) where m∈ℕ^* is the ramification index of Σ at s_0. The non resonance condition writesi.mα-mα_j∉ℤ∀ i∈ℕ^n-1| i|≥ 2As a_j,i∈ℂ(Σ), it admits a Puiseux series at 0 in ℂ[[s^1/m]]. And thus -m i.α+mα_j∈ℤ, which is impossible due to the non resonance condition.We can now make a series expansion of the a_j,i at s_0 (which projects to 0∈Γ̅), givingφ_j(s,c_1,…,c_n-1)=∑_i∈ℕ^n-1∑_i_n∈ℕ a_j,i_1,…,i_n-1,i_n s^i_n/m (c_1H_1(s))^i_1… (c_n-1H_n-1(s))^i_n-1for j=1… n-1 and inverting the relation we obtainc_iH_i(s)=Φ_i(s,q)Φ_i∈ℂ[[s^1/m,q]]and thenc_is^α_i=s^α_i/H_i(s)Φ_i(s,q)∈ℂ[[s^1/m,q]]So the application(s,q) ↦(s^α_i/H_i(s)Φ_i(s,q))_i=1… n-1linearises the vector field (<ref>) near (s,q)=(0,0).Let us note s̃=s^1/m and make a variable change in (<ref>). The non resonance condition writesi.mα-mα_j∉ℤ∀ i∈ℕ^n-1| i|≥ 2and with the new time s̃, mα are the eigenvalues of the leading matrix of the system. Now changing time by multiplying all the equations by s̃, we obtain a vector field with 0 as equilibrium point, and mα_1,…,mα_n-1,1 as eigenvalues. These eigenvalues satisfy the non resonance and Diophantine condition, and thus system is holomorphically linearisable near (s̃,q)=(0,0) and the coordinates change is unique <cit.>. Thus it has to be(s̃,q) ↦(s̃^mα_i/H_i(s̃^m)Φ_i(s̃^m,q))_i=1… n-1which thus is holomorphic in s̃,q. We then deduce that Φ is holomorphic in s̃,q near (s̃,q)=(0,0) and thus that the vector fields and first integrals Y,F of X can be extended to a neighbourhood of s_0. Remark that the resonance condition we put is slightly stronger than the minimal one possible, which isi.α-α_j≠ 0 ∀ i∈ℕ^n, | i|≥ 2, α_n=1This is however necessary to ensure that a_j,i remains regular at 0. Indeed, if only the above condition is satisfied, then it is always possible to choose an a_j,i regular at 0. However ifi.α-α_j∈ℤ this choice of integration constant could not lead to the solution in ℂ(Σ), i.e. the local behaviour of a_j,i is in ℂ[[s^1/m]] but not algebraic. 
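In practice, the non-resonance condition above can be screened numerically. The following short Python sketch is our own illustration (the function name and interface are not from the original text): it searches, up to a chosen order, for multi-indices i with |i|≥2 and positions j such that m(i.α-α_j)∈ℤ. Applied to the spectrum α_1=√2, α_2=2√2+1/2 of the example below, it detects no resonance for m=1, but finds the resonance 2(2α_1-α_2)=-1 once the ramification index m=2 is taken into account.

```python
from itertools import product
import math

def resonances(alpha, m, max_order=4, tol=1e-9):
    """Return the multi-indices i with |i| >= 2 and the positions j
    for which m*(i.alpha - alpha_j) is an integer, i.e. the failures
    of the non-resonance condition for ramification index m."""
    n, hits = len(alpha), []
    for order in range(2, max_order + 1):
        for i in product(range(order + 1), repeat=n):
            if sum(i) != order:
                continue
            for j in range(n):
                v = m * (sum(ik * ak for ik, ak in zip(i, alpha)) - alpha[j])
                if abs(v - round(v)) < tol:
                    hits.append((i, j + 1, round(v)))
    return hits

alpha = [math.sqrt(2), 2 * math.sqrt(2) + 0.5]
print(resonances(alpha, m=1))  # []: non-resonant without ramification
print(resonances(alpha, m=2))  # [((2, 0), 2, -1)]: 2*(2*a1 - a2) = -1
```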
Example Consider a sequence of polynomials P_k of degree at most affine in k, and α∉ℚ, such that ∫ P_k(s) s^{3k/2} (1-s)^{α k} ds ∈ℂ(s^{1/2},(1-s)^α) This is typically the integral encountered in Proposition <ref>, and the above condition ensures that the integral belongs to the same field. Moreover, there is a unique choice of integration constant such that 1/(s^{3k/2} (1-s)^{α k})∫ P_k(s) s^{3k/2} (1-s)^{α k} ds ∈ℂ(s^{1/2}) However, in Proposition <ref> (whose non-resonance hypothesis is not satisfied), we also need this expression to be regular at 0. The condition reads ∫_0^1 P_k(s) s^{3k/2} (1-s)^{α k} ds=0 and nothing ensures it is satisfied for all k. If not, we can expect the valuation of the a_j,i to go to -∞, meaning that s=0 would be an essential singularity for our vector fields/first integrals, even though the spectrum 1,3/2 allows linearisation near s=0. Remark also that the extension depends a priori on the sheet of Σ on which s_0 lies, and not only on its projection. This is because the non-resonance condition is not the same if the ramification index m is not the same, as in the following example α_1=√(2), α_2=2√(2)+1/2 We have i.α-α_j∉ℤ, but 4α_1-2α_2∈ℤ, and thus the spectrum is resonant with ramification index m=2. So if we now want to combine this result with Proposition <ref>, we need to be able to extend the vector fields and first integrals of Proposition <ref> on all sheets. So to extend the vector fields and first integrals of Proposition <ref> near an equilibrium point s_0∈Γ̅, the local monodromy group around any point of π^-1(s_0) should be non-resonant Diophantine. § CONCLUSION We proved an inverse of the Ayoul-Zung Theorem under a small divisor condition together with either a non-resonance condition or a stronger additional Galoisian condition. This strong Galoisian condition Gal^0(NVE_k) ≃ℂ^l-1 is in fact implied by the non-resonance condition, suggesting it should not be so rare after all. It happens that the same approach allowed us to prove a linearisation Theorem <ref>. This Theorem is closely related to Theorems <ref>,<ref>, as after linearisation the construction of commuting vector fields and first integrals is possible. This suggests that similar, more general inverses of the Ayoul-Zung Theorem are related to normal forms of holomorphic vector fields depending on time on a neighbourhood of 0. The example q̇_1=α/s q_1+1/s q_1^2q_2, q̇_2=-α/s q_2-1/s q_1q_2^2 which is not linearisable but integrable suggests that more general series than in Proposition <ref> should be considered, in particular allowing formal first integrals in the exponents. This would produce resonant normal forms for the maps of the Ziglin group Zig^0(X) q→ϕ(F_1(q),…,F_p(q))q where F_1,…,F_p are invariants of the map and ϕ is a formal series. This resonant normal form does not remove all the resonant monomials, and so we would need a definition of a canonical coordinate change for uniqueness, and a theorem for the convergence of such formal coordinate changes. We can hope that such an approach would be sufficient to invert the Ayoul-Zung Theorem without requiring additional Galoisian conditions other than virtual Abelianity of the VE_k. The completion of the vector fields and first integrals Y,F of Theorems <ref>,<ref> near singular points of X remains elusive, and is probably generically not possible. Indeed, at a singular point of X, nothing seems to forbid the valuation in s of the coefficients of the series expansion of X from going to -∞, giving an essential singularity to our coordinate change map Φ.
Dynamics and thermodynamics of a central spin immersed in a spin bath Arun Kumar Pati ====================================================================== Let ℒ be a finite distributive lattice, S=K[x_α: α∈ℒ] a polynomial ring over a field K, and I=⟨ x_α x_β- x_α∨β x_α∧β : α≁β, α,β∈ℒ⟩ an ideal of S, where α≁β means that α and β are incomparable in ℒ. In this article we describe the first syzygy of the Hibi ring R[ℒ]=S/I for a planar distributive lattice ℒ. We also derive an exact formula for the first Betti number of a planar distributive lattice. We give a characterization of planar distributive lattices for which the first syzygy is linear. § INTRODUCTION Let ℒ be a finite distributive lattice and let S=K[x_α: α∈ℒ] be a polynomial ring over a field K. For α,β∈ℒ, f_(α,β)=x_αx_β- x_α∨βx_α∧β∈ S is called a diamond relation if α≁β. Let I=⟨ f_(α,β): α≁β, α,β∈ℒ⟩⊂ S be the associated ideal, called a Hibi ideal. The associated ring R[ℒ]=S/I is called a Hibi ring. In <cit.>, Hibi showed that R[ℒ] is an algebra with straightening laws on ℒ over a field K, and further, in <cit.>, Herzog-Hibi showed that the diamond relations f_(α,β) form a Gröbner basis for the lexicographic term order on S extending the order of ℒ. The ring appears in a geometric context in Lakshmibai-Gonciulea <cit.>, where it was proved that the Schubert varieties in G_d,n degenerate flatly to the toric varieties X(I_d,n) (=X_d,n as in <cit.>), which are the varieties associated to the lattices I_d,n={(i_1,…,i_d): 1 ≤ i_1 < … < i_d ≤ n}. Using the degeneration, several geometric properties of the Schubert varieties were derived in <cit.>, where the singularities of X(I_d,n) were also discussed. The question of the singularities of the algebraic variety associated to a lattice ℒ, namely Spec K[ℒ], is dealt with in Wagner <cit.> and Lakshmibai-Mukherjee <cit.>. In a series of papers, as in Brown-Lakshmibai <cit.>, <cit.>, interesting multiplicity formulas and other applications were found for several classes of distributive lattices. Also, Hibi in <cit.> proved that I is prime if and only if ℒ is distributive, and that R[ℒ] is Gorenstein when the set of join irreducible elements J={ z ∈ℒ: x ∨ y=z ⇒ x=z or y=z } is pure, i.e., all maximal chains have the same length. Furthermore, Hibi also showed in <cit.> that R[ℒ] is a Cohen-Macaulay normal domain. Hibi rings over a finite distributive lattice have been further studied by several authors, cf. Aramova-Herzog-Hibi <cit.>, Thomas <cit.>, Ene <cit.> and Ene-Herzog-Saeedi-Madani <cit.>, to mention but a few. The projective dimension and the regularity of I were found in Ene <cit.> and Ene-Herzog-Saeedi-Madani <cit.> respectively. It was also proved in Ene-Herzog-Saeedi-Madani <cit.> that, for a finite distributive lattice, reg R[ℒ]= |P| - rank P -1, where P is the poset of join irreducible elements of ℒ. This, in particular, implies that R[ℒ] has a linear resolution if and only if P is a direct sum of a chain and an isolated element, see Ene-Herzog-Saeedi-Madani <cit.>. Also, in <cit.>, the author gave an explicit minimal free resolution for monomial curves in A_K^4 using Gröbner basis techniques. In <cit.>, the author found the first syzygies of determinantal ideals using Gröbner basis theory. In this article we give a set of minimal generators of the first syzygy for a planar distributive lattice ℒ. Using these generators we also give a formula for the first Betti number of these lattices. § FIRST SYZYGY OF HIBI RING Let f_1,f_2,…,f_n be all the diamond relations in I.
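For instance (a small illustrative sketch of our own, not from the original text), the following Python code enumerates the diamond relations of a concrete planar distributive lattice, the 2×3 grid {0,1}×{0,1,2} ordered componentwise; here n=3.

```python
# a small planar distributive lattice: the 2 x 3 grid {0,1} x {0,1,2},
# ordered componentwise (isomorphic to the divisor lattice of 12)
L = [(i, j) for i in range(2) for j in range(3)]
leq  = lambda p, q: p[0] <= q[0] and p[1] <= q[1]
join = lambda p, q: (max(p[0], q[0]), max(p[1], q[1]))
meet = lambda p, q: (min(p[0], q[0]), min(p[1], q[1]))

# one diamond relation f_(a,b) = x_a x_b - x_{a v b} x_{a ^ b}
# for each incomparable pair; here there are n = 3 of them
diamonds = [(a, b) for a in L for b in L
            if a < b and not leq(a, b) and not leq(b, a)]
for a, b in diamonds:
    print(f"f_({a},{b}) = x{a}*x{b} - x{join(a, b)}*x{meet(a, b)}")
```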
Let us denote F=⊕_i=1^n S(-2), let g_1,g_2,…,g_n be the generators of F, and let ϕ: F ⟶ S, g_i ⟼ f_i; then ker(ϕ) is the first syzygy of R[ℒ]. Let us denote ker(ϕ)=Syz^1R[ℒ], and let Syz^1_1 R[ℒ] denote the submodule generated by the linear generators of the S-module Syz^1R[ℒ]. Let P={α,β,γ,δ} be the poset with order ≤ defined by α≤β,γ,δ; the lattice of the poset ideals of P is called a cube lattice. ℒ is a planar distributive lattice if and only if every join irreducible (meet irreducible) β∈ℒ is covered by (covers) at most two join irreducibles (meet irreducibles) in ℒ. ⇒: Let ℒ be a planar distributive lattice and β∈ℒ be a join-irreducible element. Let us assume that it is covered by three join-irreducible elements x,y,z in ℒ. Then x ∨ y, x ∨ z, y∨ z are pairwise incomparable. Hence there will be a cube sublattice in ℒ, which is non-planar. Hence it follows. ⇐: Let every join-irreducible β in ℒ be covered by at most two join-irreducibles in ℒ; let this property be called (*). Then every sublattice of ℒ also has the property (*). Now if ℒ is not planar, then there exists a cube sublattice in ℒ, and this sublattice violates the property (*), as the cube has three join-irreducible elements covering the minimal element, which is a join-irreducible. Hence we arrive at a contradiction. By duality we get the result with meet irreducibles. ℒ is a planar distributive lattice if and only if for all x,y,z ∈ℒ, x∨ y ∼ x ∨ z and x ∧ y ∼ x ∧ z. ⇒: Let ℒ be a planar distributive lattice. Let x,y,z ∈ℒ; then the sublattice generated by x ∧ y, x ∧ z, y ∧ z is also planar. If x ∧ y, x ∧ z, y ∧ z are not comparable to each other, then we get three join-irreducibles covering the minimal element. Therefore by Lemma <ref> we arrive at a contradiction. Hence x∧ y ∼ x ∧ z, and by duality the other relation also holds. ⇐: Let the given conditions hold. Then we have the cases (i) x∨ y ≤ x ∨ z, x∧ y ≤ x ∧ z, (ii) x ∨ y ≥ x ∨ z, x ∧ y ≤ x ∧ z, (iii) x ∨ y ≥ x ∨ z, x ∧ y ≥ x ∧ z. Then the only possibilities are the sublattices as in Figures <ref>, <ref>, <ref>, <ref>, which are all planar sublattices. Hence ℒ is planar. Suppose we have two relations f_(α_1,β_1)=x_α_1x_β_1-x_α_1 ∨β_1 x_α_1 ∧β_1 and f_(α_2,β_2)=x_α_2x_β_2-x_α_2 ∨β_2 x_α_2 ∧β_2 coming from the diamonds (α_1,β_1) and (α_2,β_2) respectively. Then with respect to the monomial ordering >, we have in(f_(α_1,β_1))=x_α_1x_β_1, in(f_(α_2,β_2))=x_α_2x_β_2. Then we have the following possibilities: * α_1=α_2 or α_1=β_2 or β_1=α_2 or β_1=β_2. * α_1,α_2,β_1,β_2 are all different. Without loss of generality, let us suppose that α_1=α_2 in case (1). Then we have in(f_(α_1,β_1))=x_α_1x_β_1, in(f_(α_1,β_2))=x_α_1x_β_2 and therefore, Lcm(f_(α_1,β_1),f_(α_1,β_2))=x_α_1x_β_1x_β_2. So, S(f_(α_1,β_1),f_(α_1,β_2))= x_β_2f_(α_1,β_1)- x_β_1f_(α_1,β_2) =x_β_1x_α_1 ∨β_2x_α_1 ∧β_2- x_β_2x_α_1 ∨β_1x_α_1 ∧β_1 Then either β_1 ∼β_2, say β_1 ≤β_2, or β_1 ≁β_2. Also, by Lemma <ref>, for α_1,β_1,β_2 we have α_1 ∧β_1 ∼α_1 ∧β_2 and α_1 ∨β_1 ∼α_1 ∨β_2. Then the only possibilities are * α_1 ∧β_1 ≤α_1 ∧β_2, α_1 ∨β_1 ≤α_1 ∨β_2. * α_1 ∧β_1 ≥α_1 ∧β_2, α_1 ∨β_1 ≤α_1 ∨β_2. * α_1 ∧β_1 ≤α_1 ∧β_2, α_1 ∨β_1 ≥α_1 ∨β_2. Now we verify all the above cases. * Case I: When β_1 ∼β_2. * Check for (1): We will get the following sublattice * Check for (2): We will get the following sublattice * Check for (3): We will get the following sublattice * Case II: When β_1 ≁β_2. * Check for (1): This subcase cannot occur, since we would get β_1 ≤β_2, which gives a contradiction.
* Check for (2): We will get the following sublattice * Check for (3): We will get the same sublattice as in Figure <ref>, with β_1 and β_2 replaced by β_2 and β_1 respectively. Now, using the conditions of the cases (for which the sublattice exists), we prove that the following are the types of the generators of the first syzygy of the Hibi ring. Let us denote 𝒟_2={(α,β): α≁β, α, β∈ℒ}. In fact, 𝒟_2=Hom(F_2,ℒ), the set of non-trivial homomorphisms, where F_2 is the free lattice generated by α and β in ℒ. (Strip-type) Let (α_1,β_1),(α_1,β_2) ∈𝒟_2. For every sublattice isomorphic to Figure <ref> with the conditions α_1 ∨β_1 ≠α_1 ∨β_2, α_1 ∧β_1=α_1 ∧β_2, β_1 ≤β_2, β_1 ≤α_1 ∨β_2, β_1 ≥α_1 ∧β_2, β_2 ≁α_1 ∨β_1, β_2 ≥α_1 ∧β_1, the following are generators of the first syzygy: * S_1=-x_β_2g_(α_1,β_1) + x_β_1g_(α_1,β_2)-x_α_1 ∧β_1g_(β_2, α_1 ∨β_1). * S_2=x_α_1 ∨β_2g_(α_1,β_1)- x_α_1 ∨β_1g_(α_1,β_2) +x_α_1g_(β_2,α_1 ∨β_1). * Let f_(α_1,β_1), f_(α_1,β_2) be the relations corresponding to (α_1,β_1),(α_1,β_2) respectively, so that f_(α_1,β_1)=x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1, f_(α_1,β_2)= x_α_1x_β_2- x_α_1 ∨β_2x_α_1 ∧β_2. Then with respect to the monomial ordering >, in(f_(α_1,β_1))=x_α_1x_β_1, in(f_(α_1,β_2))=x_α_1x_β_2. Lcm(f_(α_1,β_1),f_(α_1,β_2))=x_α_1x_β_1x_β_2. S(f_(α_1,β_1),f_(α_1,β_2))=x_β_2f_(α_1,β_1)- x_β_1f_(α_1,β_2)= x_β_1x_α_1 ∨β_2x_α_1 ∧β_2-x_β_2x_α_1 ∨β_1x_α_1 ∧β_1=- x_α_1 ∧β_1(x_β_2x_α_1 ∨β_1- x_β_1x_α_1 ∨β_2)=-x_α_1 ∧β_1f_(β_2, α_1 ∨β_1), as β_2 ≁α_1 ∨β_1. Hence -x_β_2g_(α_1,β_1) + x_β_1g_(α_1,β_2)-x_α_1 ∧β_1g_(β_2, α_1 ∨β_1) is a generator of the first syzygy. * Let f_(α_1,β_2), f_(β_2, α_1 ∨β_1) be the relations corresponding to (α_1,β_2),(β_2,α_1 ∨β_1) respectively, therefore f_(α_1,β_2)= x_α_1x_β_2- x_α_1 ∨β_2x_α_1 ∧β_2, f_(β_2, α_1 ∨β_1)=x_β_2x_α_1 ∨β_1- x_α_1 ∨β_2x_β_1. Then with respect to the monomial ordering >, in(f_(α_1,β_2))=x_α_1x_β_2, in(f_(β_2, α_1 ∨β_1))=x_β_2x_α_1 ∨β_1. Lcm(f_(α_1,β_2), f_(β_2, α_1 ∨β_1))=x_α_1x_β_2x_α_1 ∨β_1. Now, S(f_(α_1,β_2),f_(β_2, α_1 ∨β_1))=x_α_1 ∨β_1f_(α_1,β_2)- x_α_1f_(β_2, α_1 ∨β_1) = x_α_1x_β_1x_α_1 ∨β_2- x_α_1 ∨β_1x_α_1 ∨β_2x_α_1 ∧β_2 =x_α_1 ∨β_2(x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1) =x_α_1 ∨β_2f_(α_1,β_1). Hence, x_α_1 ∨β_2g_(α_1,β_1)- x_α_1 ∨β_1g_(α_1,β_2) +x_α_1g_(β_2,α_1 ∨β_1) is a generator of the first syzygy. (L-type) Let (α_1,β_1),(α_1,β_2) ∈𝒟_2. For every sublattice isomorphic to Figure <ref> with the conditions α_1 ∧β_1 ≠α_1 ∧β_2, α_1 ∨β_1 ≠α_1 ∨β_2, β_1 ≤β_2, β_1 ≤α_1 ∨β_2, β_1 ≁α_1 ∧β_2, β_2 ≁α_1 ∨β_1, β_2 ≥α_1 ∧β_1, the following is a generator of the first syzygy: L=-x_β_2g_(α_1,β_1) +x_β_1g_(α_1,β_2) + x_α_1 ∨β_2g_(β_1,α_1 ∧β_2)- x_α_1 ∧β_1g_(β_2, α_1 ∨β_1). Let f_(α_1,β_1), f_(α_1,β_2) be the relations corresponding to (α_1,β_1),(α_1,β_2) respectively, so that f_(α_1,β_1)=x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1, f_(α_1,β_2)= x_α_1x_β_2- x_α_1 ∨β_2x_α_1 ∧β_2. Then with respect to the monomial ordering >, in(f_(α_1,β_1))=x_α_1x_β_1, in(f_(α_1,β_2))=x_α_1x_β_2. Lcm(f_(α_1,β_1),f_(α_1,β_2))=x_α_1x_β_1x_β_2. S(f_(α_1,β_1),f_(α_1,β_2)) = x_β_2f_(α_1,β_1)- x_β_1f_(α_1,β_2) =x_β_1x_α_1 ∨β_2x_α_1 ∧β_2-x_β_2x_α_1 ∨β_1x_α_1 ∧β_1=-x_α_1 ∧β_1(x_β_2x_α_1 ∨β_1-x_α_1 ∨β_2x_β_2 ∧ (α_1 ∨β_1))+ x_α_1 ∨β_2( x_β_1x_α_1 ∧β_2- x_α_1 ∧β_1 x_β_1 ∨ (α_1 ∧β_2))= -x_α_1 ∧β_1f_(β_2,α_1 ∨β_1) + x_α_1 ∨β_2 f_(β_1,α_1 ∧β_2), where the third equality follows as β_1 ∨ (α_1 ∧β_2)=β_2 ∧ (α_1 ∨β_1).
Hence -x_β_2g_(α_1,β_1) +x_β_1g_(α_1,β_2) + x_α_1 ∨β_2 g_(β_1,α_1 ∧β_2)- x_α_1 ∧β_1g_(β_2, α_1 ∨β_1) is a generator of the first syzygy.

(Box-type) Let (α_1,β_1),(α_1,β_2) ∈𝒟_2. For every sublattice isomorphic to Figure <ref> with conditions α_1 ∧β_1 ≠α_1 ∧β_2, α_1 ∨β_1 ≠α_1 ∨β_2, β_1 ≁β_2, β_1 ≤α_1 ∨β_2, β_1 ≥α_1 ∧β_2, β_2 ≁α_1 ∨β_1, β_2 ≁α_1 ∧β_1, the following are generators of the first syzygy:
* B_1=-x_β_2g_(α_1,β_1) + x_β_1g_(α_1,β_2)-x_α_1 ∨β_1g_(β_2,α_1 ∧β_1)-x_α_1 ∧β_2g_(α_1 ∨β_1, β_1 ∨β_2).
* B_2=-x_β_1g_(α_1,β_2)+ x_α_1g_(β_1,β_2)+ x_β_1 ∨β_2g_(α_1,β_1 ∧β_2)+ x_α_1 ∧β_2g_(α_1 ∨β_1, β_1 ∨β_2).

* Let f_(α_1,β_1), f_(α_1,β_2) be the relations corresponding to (α_1,β_1),(α_1,β_2), respectively, so that f_(α_1,β_1)=x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1 and f_(α_1,β_2)= x_α_1x_β_2- x_α_1 ∨β_2x_α_1 ∧β_2. Then, with respect to the monomial ordering >, in(f_(α_1,β_1))=x_α_1x_β_1, in(f_(α_1,β_2))=x_α_1x_β_2 and Lcm(f_(α_1,β_1),f_(α_1,β_2))=x_α_1x_β_1x_β_2. Hence
S(f_(α_1,β_1),f_(α_1,β_2)) = x_β_2f_(α_1,β_1)- x_β_1f_(α_1,β_2) =x_β_1x_α_1 ∨β_2x_α_1 ∧β_2-x_β_2x_α_1 ∨β_1x_α_1 ∧β_1 = -x_α_1 ∨β_1(x_β_2 x_α_1 ∧β_1- x_α_1 ∧β_2 x_β_1 ∨β_2)- x_α_1 ∧β_2(x_α_1 ∨β_1x_β_1 ∨β_2- x_α_1 ∨β_2x_β_1) = -x_α_1 ∨β_1f_(β_2,α_1 ∧β_1)- x_α_1 ∧β_2f_(α_1 ∨β_1,β_1 ∨β_2).
Hence the lemma follows.

* Let f_(α_1,β_2), f_(β_1,β_2) be the relations corresponding to (α_1,β_2), (β_1,β_2), respectively. Therefore f_(α_1,β_2)=x_α_1x_β_2- x_α_1 ∨β_2 x_α_1 ∧β_2 and f_(β_1,β_2)=x_β_1x_β_2- x_β_1 ∨β_2 x_β_1 ∧β_2. Then, with respect to the monomial ordering >, in(f_(α_1,β_2))=x_α_1x_β_2, in(f_(β_1,β_2))=x_β_1x_β_2 and Lcm(f_(α_1,β_2),f_(β_1,β_2))=x_α_1x_β_1 x_β_2. Now,
S(f_(α_1,β_2),f_(β_1,β_2))= x_β_1f_(α_1,β_2)- x_α_1f_(β_1,β_2) = x_α_1x_β_1 ∨β_2x_β_1 ∧β_2- x_β_1x_α_1 ∨β_2x_α_1 ∧β_2 = x_β_1 ∨β_2(x_α_1x_β_1 ∧β_2- x_α_1 ∧β_2x_α_1 ∨β_1)+x_α_1 ∧β_2(x_α_1 ∨β_1 x_β_1 ∨β_2- x_β_1x_α_1 ∨β_2) = x_β_1 ∨β_2f_(α_1, β_1 ∧β_2)+ x_α_1 ∧β_2f_(α_1 ∨β_1,β_1 ∨β_2).
Hence the lemma follows.

Syz^1_1 R[ℒ] is generated by the types S_1, S_2, L, B_1, B_2.

Let f_(α_1,β_1)=x_α_1x_β_1-x_α_1 ∨β_1x_α_1 ∧β_1 and f_(α_2,β_2)=x_α_2x_β_2-x_α_2 ∨β_2x_α_2 ∧β_2 be the relations coming from the diamonds (α_1,β_1),(α_2,β_2). Therefore, with respect to the monomial ordering >, in(f_(α_1,β_1))=x_α_1x_β_1 and in(f_(α_2,β_2))=x_α_2x_β_2. Now, the S-polynomials are obtained from the sublattices described in Figures <ref>, <ref>, <ref>, and we proved in Lemmas <ref>, <ref>, <ref> that these are the only linear generators of the first syzygy coming from the aforesaid figures. Hence the theorem follows.

(Diamond-type) Let (α_1,β_1),(α_2,β_2)∈𝒟_2, where α_1,α_2,β_1,β_2 are all distinct, and let f_(α_1,β_1),f_(α_2,β_2) be the relations of the pairs (α_1,β_1),(α_2,β_2), respectively. If there do not exist a,b ∈ℒ such that a ∧ b=β_1 and a∨ b=β_2, then the following is a generator of the first syzygy, and we call it a diamond type:
D=(x_α_2x_β_2-x_α_2 ∨β_2x_α_2 ∧β_2)g_(α_1,β_1)- (x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1)g_(α_2,β_2).

Let f_(α_1,β_1)=x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1 and f_(α_2,β_2)=x_α_2x_β_2- x_α_2 ∨β_2x_α_2 ∧β_2. Then in(f_(α_1,β_1))=x_α_1x_β_1, in(f_(α_2,β_2))=x_α_2x_β_2 and Lcm(f_(α_1,β_1),f_(α_2,β_2))=x_α_1x_α_2x_β_1x_β_2.
So,
S(f_(α_1,β_1),f_(α_2,β_2))= x_α_2x_β_2f_(α_1,β_1)- x_α_1x_β_1f_(α_2,β_2) = x_α_1x_β_1x_α_2 ∨β_2x_α_2 ∧β_2- x_α_2x_β_2x_α_1 ∨β_1x_α_1 ∧β_1 =x_α_2 ∨β_2x_α_2 ∧β_2(x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1) - x_α_1 ∨β_1x_α_1 ∧β_1(x_α_2x_β_2- x_α_2 ∨β_2x_α_2 ∧β_2)= x_α_2 ∨β_2x_α_2 ∧β_2f_(α_1,β_1)-x_α_1 ∨β_1x_α_1 ∧β_1f_(α_2,β_2).
Hence (x_α_2x_β_2-x_α_2 ∨β_2x_α_2 ∧β_2)g_(α_1,β_1)- (x_α_1x_β_1- x_α_1 ∨β_1x_α_1 ∧β_1)g_(α_2,β_2) is a generator of the first syzygy.

Let Syz^2_1 R[ℒ] denote the S-module generated by all diamond-type syzygies of R[ℒ].

Syz^1 R[ℒ]=Syz^1_1 R[ℒ] ⊕ Syz^2_1 R[ℒ].

This follows from Theorem <ref> and Lemma <ref>.

Syz^2_1 R[ℒ]=0 if the following holds: for (α_1,β_1),(α_2,β_2) ∈𝒟_2, where α_1,α_2,β_1,β_2 are all distinct and α_1,β_1 < α_2,β_2, there exist a,b ∈ℒ such that a ∧ b=β_1, a ∨ b=β_2, α_1,α_2 ≁ a,b and α_2 ∧ b ≁α_1 ∨β_1; then the generator of the first syzygy coming from these two diamonds is a combination of the other types of generators.

The proof follows from Lemmas <ref> and <ref> below.

With the same notation as in Lemma <ref>, if a and b satisfy the given conditions, then there exist non-trivial x,y,z ∈ℒ such that (<ref>) is a sublattice of ℒ.

In Figure <ref>, we have a∧ b=β_1 and a ∨ b=β_2. We now prove that there exist x,y,z ∈ℒ such that the following claims hold.

* Claim: there exists x such that x=α_2 ∧ b=α_2 ∧β_2 ∧ b and β_1 < x < b.
Let α_2 ∧ b=x_1 and (α_2 ∧β_2)∧ b=x. Since x_1 ∨β_2= (α_2 ∧ b)∨β_2=β_2, x ∨β_2=((α_2 ∧β_2)∧ b)∨β_2=β_2, x_1 ∧β_2=(α_2 ∧ b)∧β_2=α_2 ∧ b and x ∧β_2=((α_2 ∧β_2)∧ b)=α_2 ∧ b, it follows that x=x_1, so x=(α_2 ∧β_2)∧ b=α_2 ∧ b. Moreover, if (α_2 ∧β_2)∧ b=b, then b ≤α_2 ∧β_2 ≤α_2, contradicting α_2 ≁ b. Hence β_1 < x < b.
* Claim: x ≁α_1 ∨β_1 and (α_1 ∨β_1)∧ x=β_1.
By the given conditions, we have x ≁α_1 ∨β_1, and (α_1 ∨β_1)∧ x=(α_1 ∧ x)∨(β_1∧ x)=(α_1∧β_1)∨β_1= β_1.
* Claim: a ∨ x= α_2 ∧β_2 and a ∧ x=β_1.
a ∨ x= a ∨ [(α_2 ∧β_2)∧ b]=(a ∨ b)∧ [a ∨ (α_2 ∧β_2)] =β_2 ∧ (α_2 ∧β_2)=α_2 ∧β_2, and a ∧ x= a ∧ [(α_2 ∧β_2)∧ b]=β_1 ∧ (α_2 ∧β_2)=β_1.
* Claim: y ∧ b=x, where y=(α_1 ∨β_1)∨ x.
[(α_1 ∨β_1)∨ x]∧ b=[(α_1 ∨β_1)∧ b]∨ (x ∧ b)=β_1 ∨ x=x.
* Claim: a ∨ y=α_2 ∧β_2 and a ∧ y=α_1 ∨β_1.
a ∨[(α_1 ∨β_1)∨ x]=[a ∨(α_1 ∨β_1)]∨ x=a ∨ x= α_2 ∧β_2, and a ∧ [(α_1 ∨β_1)∨ x]=(a ∧ x)∨ [a ∧(α_1 ∨β_1)] =β_1 ∨ (α_1 ∨β_1 )=α_1 ∨β_1.
* Claim: (α_2 ∧β_2)∧ z=y and (α_2 ∧β_2)∨ z=β_2, where z=(α_1 ∨β_1)∨ b.
(α_2 ∧β_2)∧ [(α_1 ∨β_1)∨ b] =[(α_2 ∧β_2)∧ (α_1 ∨β_1)]∨ [(α_2 ∧β_2)∧ b] =(α_1 ∨β_1)∨ x=y, and (α_2 ∧β_2)∨ [(α_1 ∨β_1)∨ b] =(α_2 ∧β_2)∨ b=β_2.

In the following Figure <ref>, let f_(2,3) and f_(11,12) be the relations coming from the diamonds (2,3) and (11,12), respectively. Then the diamond type arising out of these two diamonds can be expressed as a combination of L-type syzygies.

Let f_(2,3)=x_2x_3-x_1x_4 and f_(11,12)=x_11x_12-x_9x_13 be the relations coming from the diamonds (2,3) and (11,12), respectively. Then, with respect to the monomial ordering >, we have in(f_(2,3))=x_2x_3 and in(f_(11,12))=x_11x_12, so Lcm(f_(2,3),f_(11,12))=x_2x_3x_11x_12 and
S(f_(2,3),f_(11,12))=x_11x_12 f_(2,3)-x_2x_3 f_(11,12)=x_2x_3x_9x_13-x_1x_4x_11x_12=x_9x_13f_(2,3)-x_1x_4f_(11,12).
Therefore (x_11x_12-x_9x_13)g_(2,3)-(x_2x_3-x_1x_4)g_(11,12) gives an element of the first syzygy. Now this expression can be written as
(x_11x_12-x_9x_13)g_(2,3)-(x_2x_3-x_1x_4)g_(11,12)= x_11(x_12 g_(2,3)-x_6 g_(2,8)+ x_2 g_(6,8)-x_1 g_(6,10))- x_13(x_9 g_(2,3)-x_6 g_(2,5)+ x_2 g_(5,6) -x_1 g_(6,7))+x_6(x_13g_(2,5)-x_11g_(2,8)+x_2 g_(8,11)-x_1 g_(10,11))+x_2(x_13g_(5,6)- x_11g_(6,8) +x_6 g_(8,11)-x_3 g_(11,12))-x_1(x_13g_(6,7)-x_11g_(6,10) +x_6 g_(10,11)- x_4 g_(11,12)),
which shows that it is a combination of L-type elements. Hence the lemma follows for this case.

Remark: Theorem <ref> tells us when the first syzygy is linear.

§ FIRST BETTI NUMBER OF PLANAR DISTRIBUTIVE LATTICE

Let C_m+1 be the chain 1< 2<… < m+1 and let G(m,n)= C_m+1× C_n+1 be the product lattice; we call G(m,n) an m × n grid lattice. Let ℒ be a planar distributive lattice and let JM be the set of all join-meet irreducible elements of ℒ. Let D_(θ_i,θ_j)=[θ_i ∧θ_j,θ_i ∨θ_j] be the interval for θ_i ≁θ_j, θ_i, θ_j ∈ JM. One can write the lattice ℒ as a union of these intervals. Let n(θ_1,θ_2) be the number of strip-type generators in D_(θ_1,θ_2). Then we have the following.

For θ_1,θ_2 ∈ JM, n(θ_1,θ_2)=S(ht(θ_1)-ht(θ_1 ∧θ_2), ht(θ_2)-ht(θ_1 ∧θ_2)), where S(·,·) is the number of strip-type generators of the corresponding grid lattice.

Since the lattice formed by D_(θ_1,θ_2) is the grid lattice G(ht(θ_1)-ht(θ_1 ∧θ_2), ht(θ_2)-ht(θ_1 ∧θ_2)), the claim follows from Lemma <ref>.

For a planar distributive lattice ℒ, the number of strip-type generators of R[ℒ], which we denote by n(S), is n(S)= ∑_θ_i ≁θ_j, θ_i,θ_j ∈ JM n(θ_i,θ_j) - ∑_θ_i,θ_j,θ_k n(θ_i ∨(θ_j ∧θ_k),θ_j).

This follows from Figure <ref>.

Let m(θ_1,θ_2) be the number of L-type generators in D_(θ_1,θ_2). Then we have the following.

For θ_1,θ_2 ∈ JM, m(θ_1,θ_2)=L(ht(θ_1)-ht(θ_1 ∧θ_2), ht(θ_2)-ht(θ_1 ∧θ_2)), where L(·,·) denotes the number of L-type generators of the corresponding grid lattice.

This follows immediately from Lemma <ref>.

For a planar distributive lattice ℒ, the number of L-type generators of R[ℒ], which we denote by n(L), is
n(L)= ∑_θ_i ≁θ_j, θ_i,θ_j ∈ JM m(θ_i,θ_j) - ∑_θ_i,θ_j,θ_k m(r,s) + ∑_θ_i,θ_j,θ_k (ht(θ_j ∨θ_k)-ht(θ_i ∨θ_j))(ht(θ_j ∧θ_k)-ht(θ_i ∧θ_j)) \binom{r+1}{2}\binom{s+1}{2},
where r=ht(θ_i ∨θ_j)-ht(θ_j) and s=ht(θ_j)-ht(θ_j ∧θ_k).

The number of L-types for each D_(θ_i,θ_j) is m(θ_i,θ_j). For each θ_i, θ_j,θ_k there is a double counting of m(r,s), where r=ht(θ_i ∨θ_j)-ht(θ_j) and s=ht(θ_j)-ht(θ_j ∧θ_k). Also, for each θ_i,θ_j,θ_k there is an L-type (see the dotted line in Figure <ref>), and its number is (ht(θ_j ∨θ_k)-ht(θ_i ∨θ_j))(ht(θ_j ∧θ_k)-ht(θ_i ∧θ_j)) \binom{r+1}{2}\binom{s+1}{2}, since for (θ_i,θ_j),(θ_j,θ_k),(θ_k,θ_l) there is no L-type. Hence the lemma follows.

Let B(θ_1,θ_2) be the number of box types in D_(θ_1,θ_2); then we have the following.

For a planar distributive lattice ℒ, the number of box-type generators of R[ℒ], n(B), is given by n(B)= ∑_θ_i ≁θ_j, θ_i,θ_j ∈ JM B(θ_i,θ_j) - ∑_θ_i,θ_j,θ_k B(θ_i ∨ (θ_j ∧θ_k),θ_j).

For each θ_i,θ_j the number of box types is B(θ_i,θ_j), and for each θ_i, θ_j,θ_k the quantity B(θ_i ∨ (θ_j ∧θ_k),θ_j) is double counted in their intersections. Hence the lemma follows.

Definition: Let (α_1,β_1),(α_2,β_2)∈𝒟_2. We say these two diamonds are comparable if α_1 ≤α_2, α_1 ≤β_2, β_1 ≤α_2 and β_1 ≤β_2.
If at least one of these pairs is not comparable, we say (α_1,β_1),(α_2,β_2) are non-comparable.

Let (α_1,β_1),(α_2,β_2)∈𝒟_2. The diamond type arising out of these two diamonds is expressible by strip, L and box types (in fact, only by L-types) if and only if
* the diamonds are non-comparable, or
* there exists a diamond (α_3,β_3)∈𝒟_2 such that there is a sublattice <ref>.

⇐: This follows from Lemma <ref>.
⇒: Suppose there does not exist (α_3,β_3)∈𝒟_2 giving a sublattice <ref>, and suppose (α_1,β_1),(α_2,β_2) are comparable; we prove that the diamond type arising from (α_1,β_1),(α_2,β_2) is not expressible. The element (x_α_2x_β_2-x_α_2 ∨β_2x_α_2 ∧β_2)g_(α_1,β_1)- (x_α_1x_β_1-x_α_1 ∨β_1x_α_1 ∧β_1)g_(α_2,β_2) can be expressed by the other types of generators only if there exists a diagram like Figure <ref>, by Lemma <ref>. Now we show that for maximal sublattices it cannot be expressed. Suppose either the diamond (a) or (b) is missing, say (a). Then x_α_2g_(α_1,β_1) or x_β_2g_(α_1,β_1) is a term of a syzygy generator. But from the figure we see that the coefficient of g_(α_1,β_1) could only be one of the points marked in boldface in the figure, and neither x_α_2 nor x_β_2 is among them. Thus this element cannot be expressed as a combination of the other types of syzygy generators. Similarly, if the diamond (b) is missing, the element cannot be expressed in terms of the other types of generators. Hence the lemma follows.

Remark: This lemma will help us to count the diamond-type generators of a planar distributive lattice. Let n(D) be the number of diamond-type generators. Then we have the following formula for the first Betti number.

Let ℒ be a planar distributive lattice such that the condition of Lemma <ref> holds. Then the first syzygy of the Hibi ring R[ℒ] is linear.

This theorem follows from Lemma <ref>, since any diamond-type generator can then be expressed by the other (linear) types of generators.

The first Betti number β_1(ℒ) of R[ℒ] is β_1(ℒ)= n(S)+n(L)+n(B)+n(D).

This follows from Lemmas <ref>, <ref>, <ref> and <ref>.

Let k(ℒ)=#{(θ_i,θ_j): θ_i ≁θ_j, θ_i,θ_j ∈ JM}. With the above notation we have the following.

* When k(ℒ)=1, the first syzygy of R[ℒ] is linear.
* When k(ℒ)=2:
 * If the lattice is <ref>, then the first syzygy is non-linear.
 * Else, if the lattice is <ref>, then the first syzygy is linear if and only if ht(θ_2 ∧θ_3)-ht(θ_1 ∧θ_2)=1 or ht(θ_2 ∨θ_3)-ht(θ_1 ∨θ_2)=1, where θ_1,θ_2,θ_3 ∈ JM.
* If k(ℒ) ≥ 3, then the first syzygy is non-linear.

* When k(ℒ)=1, the lattice is always a grid lattice, so by Lemma <ref> the first syzygy is linear.
*
 * If the lattice is <ref>, then the first syzygy has diamond-type generators such as (x_θ_3x_θ_4-x_θ_3 ∨θ_4x_θ_3 ∧θ_4)g_(θ_1,θ_2) -(x_θ_1x_θ_2-x_θ_1 ∨θ_2x_θ_1 ∧θ_2)g_(θ_3,θ_4), which cannot be expressed by the other types of generators, by Lemma <ref>. Hence it is non-linear.
 * ⇒: If the lattice is <ref> and the first syzygy is linear, then by Lemma <ref> there is a sublattice <ref>, and therefore ht(θ_2 ∧θ_3) -ht(θ_1 ∧θ_2)=1 or ht(θ_2 ∨θ_3)-ht(θ_1 ∨θ_2)=1, where θ_1,θ_2, θ_3 ∈ JM.
 ⇐: If the condition holds, then all diamond-type elements of the first syzygy can be expressed by the other types. Hence the first syzygy is linear.
* If k(ℒ) ≥ 3, then we have a figure like <ref>, and the first syzygy elements coming from the lower diamond and the upper diamond cannot be expressed by the other types of syzygies. Hence diamond-type elements occur, and in this case the first syzygy is not linear.

§ BETTI NUMBER OF THE GRID LATTICE

The first syzygy of the Hibi ring of an m × n grid lattice is generated by strip-type, L-type and box-type elements.
We have that the generators other than strip-type, L-type, box-type and diamond-type come from non-planar lattices. But by Theorem <ref>, diamond types do not appear in the m × n grid lattice. Hence the theorem follows.

Now we give an exact formula for the first Betti number of the Hibi ring R[ℒ], where ℒ is an m× n grid distributive lattice.

Let T(n) be the number of strip-type generators of R[ℒ], where ℒ is a 1× n grid lattice. Then T(n)=2\binom{n+1}{3}.

Consider the 1× n grid lattice. We know that the strip-type generators of a lattice come from every two diamonds which share a common side. Therefore we have the recurrence T(n)=2 T(n-1)-T(n-2)+2(n-1), and so, clearly, T(n)=2\binom{n+1}{3}.

Now we count the strip-type generators of the m× n grid lattice.

For the m× n grid lattice, the total number of strip-type generators, which we denote by S(m,n), is S(m,n)=\binom{m+1}{2}T(n)+\binom{n+1}{2}T(m), where T(m) and T(n) are the numbers of strip-type generators of the 1× m and 1× n grid lattices, respectively.

The number of strip-type generators of the m × n grid lattice is (m+(m-1)+ … +1)T(n)+(n+(n-1)+ … + 1)T(m)=\binom{m+1}{2}T(n)+\binom{n+1}{2}T(m).

Now we find the number of L-type generators of the 2 × n grid lattice, and using this we will find the number of L-type generators of the m × n grid lattice.

Let L(2,n) be the number of L-type generators of the 2 × n grid lattice. Then L(2,n)=n(n^2-1)/3.

We see that L(2,n)=2 ∑_i=1^n(n+1-i)(i-1)=n(n^2-1)/3.

The total number of L-type generators of the m × n grid lattice, which we denote by L(m,n), is L(m,n)=L(2,m)L(2,n)/2.

We see that the total number is L(m,n)= ∑_i=1^m(m+1-i)(i-1)L(2,n)=(m(m^2-1)/6)L(2,n)= L(2,m)L(2,n)/2.

The total number of box-type generators of the m × n grid lattice, which we denote by B(m,n), is B(m,n)=B(2,m)B(2,n)/2.

Since in a 2 × 2 grid lattice the number of box-type generators is the same as the number of L-type generators, the lemma follows.

The first Betti number β_1 of R[ℒ] for the m × n grid lattice is β_1=S(m,n)+L(m,n)+B(m,n).

By the above lemmas, the first syzygy is generated by strip-type, L-type and box-type elements, so the first Betti number is the sum of these counts. Hence the result follows.

§ ACKNOWLEDGEMENT

The corresponding author thanks the University Grants Commission (UGC) for financial support and the Department of Mathematics of BITS Pilani, Goa campus, for hospitality.
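To make the grid-lattice counting formulas of the last two sections concrete, the following small Python sketch (ours, not part of the original paper; all function names are our own) evaluates T(n)=2\binom{n+1}{3}, S(m,n), L(m,n), B(m,n) and β_1, and checks the recurrence T(n)=2T(n-1)-T(n-2)+2(n-1) used in the proof.

```python
from math import comb

def T(n):
    # Strip-type generators of the 1 x n grid lattice: T(n) = 2*C(n+1, 3).
    return 2 * comb(n + 1, 3)

def S(m, n):
    # Strip-type generators of the m x n grid: C(m+1,2)*T(n) + C(n+1,2)*T(m).
    return comb(m + 1, 2) * T(n) + comb(n + 1, 2) * T(m)

def L2(n):
    # L-type generators of the 2 x n grid: n(n^2 - 1)/3 (always an integer).
    return n * (n * n - 1) // 3

def L(m, n):
    # L-type generators of the m x n grid: L(2,m)*L(2,n)/2.
    return L2(m) * L2(n) // 2

def B(m, n):
    # Box-type generators: B(2,k) = L(2,k) on the 2 x 2 grid, so B(m,n) = L(m,n).
    return L2(m) * L2(n) // 2

def betti1(m, n):
    # First Betti number of R[L] for the m x n grid lattice.
    return S(m, n) + L(m, n) + B(m, n)

if __name__ == "__main__":
    # Sanity checks: T(1) = 0 (a single diamond has no adjacent pair), T(2) = 2.
    assert T(1) == 0 and T(2) == 2
    # The recurrence from the proof of the strip-type lemma.
    assert all(T(n) == 2 * T(n - 1) - T(n - 2) + 2 * (n - 1) for n in range(3, 20))
    print(betti1(2, 3))
```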
Improving Facial Attribute Prediction using Semantic Segmentation Mahdi M. Kalayeh [email protected] Boqing Gong [email protected] Mubarak Shah [email protected] Center for Research in Computer Vision University of Central Florida =================================================================================================================================================================================================Attributes are semantically meaningful characteristics whose applicability widely crosses category boundaries. They are particularly important in describing and recognizing concepts where no explicit training example is given, e.g., zero-shot learning. Additionally, since attributes are human describable, they can be used for efficient human-computer interaction. In this paper, we propose to employ semantic segmentation to improve facial attribute prediction. The core idea lies in the fact that many facial attributes describe local properties. In other words, the probability of an attribute to appear in a face image is far from being uniform in the spatial domain. We build our facial attribute prediction model jointly with a deep semantic segmentation network. This harnesses the localization cues learned by the semantic segmentation to guide the attention of the attribute prediction to the regions where different attributes naturally show up. As a result of this approach, in addition to recognition, we are able to localize the attributes, despite merely having access to image level labels (weak supervision) during training. We evaluate our proposed method on CelebAand LFWA datasets and achieve superior results to the prior arts. Furthermore, we show that in the reverse problem, semantic face parsing improves when facial attributes are available. That reaffirms the need to jointly model these two interconnected tasks.§ INTRODUCTIONNowadays, state-of-the-art computer vision techniques allow us to teach machines different classes of objects, actions, scenes, and even fine-grained categories. However, to learn a certain notion, we usually need positive and negative examples from the concept of interest. This creates a set of challenges as the examples of different concepts are not equally easy to collect. Also, the number of learnable concepts is linearly capped by the cardinality of the training data. Therefore, being able to robustly learn a set of sharable concepts that go beyond rigid category boundaries is of tremendous importance. Visual attributes are one particular type of the sharable concepts. They are human describable and machine detectable. The fact that attributes are generally not category-specific suggests that one can potentially describe anexponential number of categories with various combinations of attributes. Naturally, attributes are “additive” to the objects (e.g., horn for cow). It means that an instance of an object may or may not take a certain attribute while in either case the category label is preserved (e.g., a cow with or without horn is still a cow). Hence, attributes are especially useful in problems that aim at modeling intra-category variations such as fine-grained classification. Despite their additive character, attributes do not appear in arbitrary regions of the objects (e.g., the horn, if appears, would show up on a cow's head). This notion is the basis of our work. That is, in order to detect an attribute, instead of the entire spatial domain, we should focus on the region in which that attribute naturally shows up. 
We hypothesize that the attribute prediction can benefit from localization cues. However, attribute prediction benchmarks come with holistic image level labels. In addition, sometimes it is hard to define a spatial boundary for a given attribute. For instance, it is not clear that according to which spatial region in a face one decides if a person is “attractive" or not. To tackle this challenge, we transfer localization cues from a relevant auxiliary task to the attribute prediction problem. Using bounding box to show the boundary limits of an object is a common practice in computer vision. However, regions that different attributes occupy drastically change in shape and form. For example, in a face image, one cannot effectively put a bounding box around the region associated to “hair". In fact, the shape of the region can be used as an indicative signal on the attribute. Therefore, we need an auxiliary task that learns detailed localization information without restricting the corresponding regions to be in certain pre-defined shapes. Semantic segmentation has all the aforementioned characteristics. It is the problem of assigning class labels to every pixel in an image. As a result, a successful semantic segmentation approach has to learn pixel-level localization cues which implicitly encode color, structure, and geometric characteristics in fine detail. In this work, we are interested in facial attributes. Hence, the semantic face parsing problem <cit.> is a suitable candidate to serve as an auxiliary task to spatially hint the attribute prediction methods.To perform attribute prediction, we feed an image to a fully convolutional neural network which generates feature maps that are ready to be aggregated <cit.> and passed to the classifier. However, global pooling <cit.> is agnostic to where, in spatial domain, the attribute-discriminative activations occur. Hence, instead of propagating the attribute signal to the entire spatial domain, we funnel them into the semantic regions. By doing so, our model learns where to attend and how to aggregate the feature map activations. We refer to this approach as Semantic Segmentation-based Pooling (SSP) where activations at the end of the attribute prediction pipeline are pooled within different semantic regions.Alternatively, we can incorporate the semantic segmentation into earlier layers of the attribute prediction network with a gating mechanism. Specifically, we augment the max pooling operation such that it does not mix activations that reside in different semantic regions. To do so, we gate the activation output of the last convolution layer prior to the max pooling by element-wise multiplying it with the semantic regions. This generates multiple versions of the activation maps that are masked differently and presumably discriminative for various attributes.We refer to this approach as Semantic Segmentation-based Gating (SSG).Since the semantic segmentation is not available for the attribute benchmarks, we learn to estimate it using a deep semantic segmentation network. Our approach is conceptually similar to <cit.> in which an encoder-decoder model is built using convolution and deconvolution layers. However, considering the relatively small number of available data for the auxiliary segmentation problem, we modify the network architecture in order to adapt it to our facial attribute prediction problem. 
Despite being much simpler than <cit.>, we found our semantic segmentation network to be very effective in solving the auxiliary task of semantic face parsing. Once trained, such network is able to provide localization cues in the form of semantic segmentation (decoder output) that decompose the spatial domain of an image into mutually exclusive semantic regions.We show that both SSP and SSG mechanisms outperform the existing state-of-the-art facial attribute prediction techniques while employing them together results in further improvements.§ RELATED WORKIt is fair to say that the attribute prediction literature can be divided into holistic and part-based approaches. The common theme among the holistic methods is to take the entire image into account when extracting features for attribute prediction. On the other hand, part-based methods begin with an attribute-related part detection and then use the localized parts, in isolation from the rest of the image, to extract features.Our proposed method falls between the two ends of the spectrum. While we process the image in a holistic fashion to generate feature vectors for the classifiers, we employ localization cues in the form of semantic segmentation.It has been shown that part-based models generally outperform the holistic methods. However, they are prone to the localization error as it can affect the quality of extracted features. Among earlier works we refer to <cit.> as successful examples of part-based attribute prediction approaches. More recently, in an effort to combine part-based models with deep learning, Zhang et al. <cit.> proposed PANDA, a pose-normalized convolutional neural network (CNN) to infer human attributes from images. PANDA employs poselets <cit.> to localize body parts and then extracts CNN features from the localized regions. These features will later be used to train SVM classifiers for attribute prediction. Inspired by <cit.> while seeking to also leverage the holistic cues, Gkioxari et al. <cit.> proposed a unified framework that benefits from both holistic and part-based clues while utilizing a deep version of poselets <cit.> as part detectors. Liu et al. <cit.> have taken a relatively different approach. They show that pre-training on massive number of object categories and then fine-tuning on image level attributes is sufficiently effective in localizing the entire face region. Such weakly supervised method provides them with a located region where they perform facial attribute prediction. Finally, in a part-based approach, Singh et al. <cit.> use spatial transformer networks <cit.> to locate the most relevant region associated to a given attribute. They encode such localization cue in a Siamese architecture to perform localization and ranking for relative attributes. § METHODOLOGYIn this section, we begin with the attribute prediction models assuming that the semantic regions are given. We then move on to the semantic segmentation network and provide details on how the semantic regions are generated. §.§ Attribute Prediction NetworksTo leverage the localization cues for facial attribute prediction, we propose semantic segmentation-based pooling and gating mechanisms. We describe our basic attribute prediction model. Then, we explain SSP and SSG in detail including how they are employed in the basic model, simply as new layers, to improve facial attribute prediction.§.§.§ Basic Attribute Prediction NetworkOur basic attribute prediction model is a 12-layers deep fully convolutional neural network. 
We gradually increase the number of convolution filters from 64 to 1024 filters as we proceed towards the deeper layers. Prior to any increase in the number of convolution filters, we reduce the size of the activation maps using max pooling. For such operation both the kernel size and stride values are set to 2. In our architecture, every convolution layer is followed by the Batch Normalization <cit.> and PReLU <cit.>. The kernel size and stride values of all the convolution layers are respectively set to 3 and 1. The first 8 layers of our basic attribute prediction network are similar in configuration to the encoder part of the semantic segmentation network and detailed in Table <ref>. The rest consists of 4 convolution layers of 512 and 1024 filters, two layers of each. At the end of the pipeline, we aggregate the activations of the last convolution layer using global average pooling <cit.> to generate 1024-D vector representations. These vectors are subsequently passed to the classifier for attribute prediction. We train the network using sigmoid cross entropy loss. Section <ref> provides further details on the training procedure.§.§.§ SSP: Semantic Segmentation-based PoolingWe argue that attributes usually have a natural correspondence to certain regions within the object boundary. Hence, aggregating the visual information from the entire spatial domain of an image would not capture this property. This is the case for the global average pooling <cit.> used above in our basic attribute prediction model as it is agnostic to where, in the spatial domain, activations occur. Instead of pooling from the entire activation map, we propose to first decompose the activations of the last convolution layer into different semantic regions and then aggregate only those that reside in the same region. Hence, rather than a single 1024-D vector representation, we obtain multiple features, each representing only a single semantic region. This approach has an interesting intuition behind it. In fact, SSP funnels the backpropagation of the label signals, via multiple paths, associated with different semantic regions, through the entire network. This is in contrast with global average pooling that rather equally affects different locations in the spatial domain. We later explore this by visualizing the activation maps of the final convolution layer. While we can simply concatenate the representations associated with different regions and pass it to the classifier, it is interesting to observe if attributes indeed prefer one semantic region to another. Also, whether what our model learns matches human expectation on what attribute corresponds to which region. To do so, we take a similar approach to <cit.> where Bilen and Vedaldi employed a two branch network for weakly supervised object detection. We pass the vector representations, each associated to a different semantic region, to two branches one for recognition and another for localization. We implement these branches as linear classifiers that map 1024-D vectors to the number of attributes. Hence, we have multiple detection scores for an attribute each inferred based on one and only one semantic region. To combine these detection scores, we begin by normalizing the output of the localization branch using softmax non-linearity across different semantic regions. This is a per-attribute operation, not an across-attribute one. 
We then compute the final attribute detection score by a weighted sum of the recognition branch outputs using weights generated by the localization branch. Figure <ref>, on the right, shows the SSP architecture. §.§.§ SSG: Semantic Segmentation-based GatingThe max pooling is used to compress the visual information in the activation maps of the convolution layers. Its efficacy has been proven in many computer vision tasks such as image classification and object detection. However, attribute prediction is inherently different from image classification. In image classification, we want to aggregate the visual information across the entire spatial domain to come up with a single label for the image. Unlike that, many attributes are inherently localized to image regions. Consequently, aggregating activations that reside in the “hair" region with the ones that correspond to “mouth”, would confuse the model in detecting “smiling" and “wavy hair" attributes. We propose SSG to cope with this challenge. Figure <ref> shows a standard convolution layer followed by max pooling on the left, and the SSG architecture in the middle. The latter is our proposed alternative to the former. Here we assume the convolution layer to preserve the number of input channels but it does not have to be. To gate the output activations of the convolution layer, we broadcast element-wise multiplication for each of the N=7 semantic regions with the entire activation maps. This generates N copies (totally 1,792 = 256×7 activation maps) of the activations that are masked differently. Such mechanism spatially decomposes the activation maps into copies where activations with high values cannot simultaneously occur in two semantically different regions. For example, gating with the semantic segmentation that corresponds to the mouth region, would suppress the activations falling outside its area while preserving those that reside inside it. However, the area which a semantic region occupies varies from one image to another. We observed that, directly applying the output of the semantic segmentation network results in instabilities in the middle of the network. To alleviate this, prior to the gating procedure, we normalize the semantic masks such that the values of each channel sum up to 1. We then gate the activations right after the convolution and before the Batch Normalization <cit.>. This is very important since the Batch Normalization <cit.> enforces a normal distribution on the output of the gating procedure. Then, we can apply max pooling on these gated activation maps. Since, given a channel, activations can only occur within a single semantic region, max pooling operation cannot blend activation values that reside in different semantic regions. We later restore the number of channels using a 1×1 convolution. It is worth noting that SSG can mimic the standard max pooling by learninga sparse set of weights for the 1×1 convolution. In a nutshell, semantic segmentation-based gating allows us to process the activations of convolution layers in a per-semantic region fashion, and directly learns how to combine the pooled values afterwards. §.§ Semantic Segmentation NetworkWe have previously explained the rationale behind employing semantic face parsing to improve facial attribute prediction. Our design for the semantic segmentation network follows an encoder-decoder approach, similar in concept to the deconvolution network proposed in <cit.>. 
However, considering the limited number of training data for the segmentation network, we have made different design decisions to reduce the complexity of the model while preserving its capabilities. The encoder consists of 8 convolution layers in blocks of 2, separated with 3 max pooling layers. This is much smaller than the 13 layers used in the deconvolution network <cit.>. At the end of the encoder part, rather than collapsing the spatial resolution as in <cit.>, we maintain it at the scale of one-eighth of the input size. The decoder is a mirrored version of the encoder replacing convolution layers with deconvolution and max pooling layers with upsampling. Unlike <cit.> that uses switch variables to store the max pooling locations, we simply upsample the activation maps (repetition withnearest neighbor interpolation). We increase (decrease) the number of convolution (deconvolution) filters by a factor of 2 after each max pooling (upsampling), starting from 64 (512) filters as we proceed along the encoder (decoder) path. Every convolution and deconvolution layer is followed by Batch Normalization <cit.> and PReLU <cit.>. To cope with the challenge of relatively small number of training data, we propagate the semantic segmentation loss at different depths along the decoder path. That is, before each upsampling layer, we compute the loss by predicting the semantic segmentation maps at different scales. We then aggregate these losses with equal weights prior to backpropagation.Finally, while <cit.> employs VGG16 <cit.> weights to initialize the encoder, we train our network from scratch. These design decisions allow us to successfully train the semantic segmentation network with the limited number of training data. Detailed configuration of the semantic segmentation network is shown in Table <ref>.§ EXPERIMENTAL RESULTS§.§ Training Semantic Segmentation NetworkIn this paper, we are interested in facial attribute prediction. Hence, face parsing problem <cit.> which aims at pixel-level classification of a face image into multiple semantic regions is a suitable auxiliary task for us. To train the semantic segmentation network, we begin with 11 segment label annotations per image that <cit.> provides to supplement Helen face dataset <cit.>. These labels are as follows: background, face skin (excluding ears and neck), left eyebrow, right eyebrow, left eye, right eye, nose, upper lip, inner mouth, lower lip and hair. We combine left and right eye (eyebrow) labels to create a single eye (eyebrow) label. Similarly, we aggregate upper lip, inner mouth, and lower lip to generate a single mouth label. As a result we end up with a total of 7 labels (background, hair, face skin, eyes, eyebrows, mouth and nose). Figure <ref> illustrates a few instances of the input images along with their corresponding segment label annotations. The face parsing dataset <cit.> comes with 2,330 images in three splits of 2000, 230 and 100, respectively for training, validation and test. However, for the attribute prediction task, we can use the entire dataset to train the semantic segmentation network. We train our model with softmax cross entropy loss. Section <ref> provides details on the training procedure. Figure <ref> shows a few examples of segmentation maps generated by our network. Despite very few number of training data used in its training process, the semantic segmentation network is able to successfully localize various facial regions in previously unseen images. 
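As a rough illustration of the SSP layer described above, the following PyTorch sketch performs masked average pooling of the final feature maps within each semantic region and combines the per-region attribute scores with the two-branch recognition/localization scheme. It is a minimal sketch under our own naming and shape conventions (7 regions, 40 attributes, 1024 channels), not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSP(nn.Module):
    # Semantic Segmentation-based Pooling: average-pool features inside each
    # semantic region, then combine per-region scores with a two-branch
    # (recognition + localization) weighting.
    def __init__(self, in_channels=1024, num_attributes=40):
        super().__init__()
        self.recognition = nn.Linear(in_channels, num_attributes)
        self.localization = nn.Linear(in_channels, num_attributes)

    def forward(self, feats, masks):
        # feats: (B, C, H, W) activations of the last conv layer.
        # masks: (B, N, H, W) semantic maps resized to (H, W), non-negative.
        f = feats.flatten(2)                                 # (B, C, H*W)
        m = masks.flatten(2)                                 # (B, N, H*W)
        m = m / m.sum(dim=2, keepdim=True).clamp(min=1e-6)   # per-region weights
        region_feats = torch.bmm(m, f.transpose(1, 2))       # (B, N, C)
        rec = self.recognition(region_feats)                 # (B, N, A)
        loc = self.localization(region_feats)                # (B, N, A)
        weights = F.softmax(loc, dim=1)   # softmax across regions, per attribute
        return (rec * weights).sum(dim=1) # (B, A) attribute logits

# Example: 7 semantic regions, 40 CelebA attributes, a 14x12 final feature map.
ssp = SSP(in_channels=1024, num_attributes=40)
logits = ssp(torch.randn(2, 1024, 14, 12), torch.rand(2, 7, 14, 12))
```

The logits would then be trained with a sigmoid cross entropy loss (e.g., nn.BCEWithLogitsLoss), matching the training objective of the basic attribute prediction network.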
Later, we evaluate our proposed attribute prediction model where these semantic segmentation cues are utilized to improve facial attribute prediction. §.§ Datasets and Evaluation MetricsWe mainly evaluate our proposed approach on the CelebA dataset <cit.>. CelebA consists of 202,599 images partitioned into training, validation and test splits with approximately 162K, 20K and 20K images in the respective splits. There are a total of 10K identities (20 images per identity) with no identity overlap between evaluation splits. Images are annotated with 40 facial attributes such as, “wavy hair", “mouth slightly open", “big lips", etc. In addition to the original images, CelebA provides a set of pre-cropped images. We report our results on both of these image sets. It is worth noting that Liu et al. <cit.> have used both the training and validation data in order to train different parts of their model. In particular, training data has been used to pre-train and fine-tune ANet and LNet while they train SVM classifiers using the validation data. In our experiments, we only use the training split to train our attribute prediction networks.To supplement the analyses on CelebA dataset <cit.>, we also provide experimental results on LFWA<cit.>. LFWA has a total of 13,232 images of 5,749 identities with pre-defined train and test splits which divide the entire dataset into two approximately equal partitions. Each image is annotated with the same 40 attributes used in CelebA<cit.> dataset. For the LFWA dataset <cit.>, we follow the same evaluation protocol as the one for CelebA dataset <cit.>. To evaluate the attribute prediction performance, Liu et al. <cit.> use classification accuracy/error. However, we believe that due to significant imbalance between the numbers of positive and negatives instances per attribute, such measure cannot appropriately evaluate the quality of different methods. Similar point has been raised by <cit.> as well. Therefore, in addition to the classification error, we also report the average precision of the prediction scores.§.§ Evaluation of Facial Attribute PredictionFor all the numbers reported here, we want to point out that FaceTracer <cit.> and PANDA <cit.> use groundtruth landmark points to attain face parts. Wang et al. <cit.> use 5 million auxiliary image pairs, collected by the authors, to pre-train their model. Wang et al. <cit.> also use state-of-the-art face detection and alignment to extract the face region from CelebA and LFWA images. However, we train all our models from scratch with only attribute labels and the auxiliary face parsing labels.§.§.§ Evaluation on CelebA datasetWe compare our proposed method with the existing state-of-the-art attribute prediction techniques on the CelebA dataset <cit.>. To prevent any confusion and have a fair comparison, Table <ref> reports the performances in two separate columns distinguishing the experiments that are conducted on the original image set from those where the pre-cropped image set have been used. We see that even our basic model with global average pooling, with the exception of the MOON <cit.>, outperforms previous state-of-the-art techniques. Accordingly, we can make two observations.First, a simple yet well designed architecture can be very effective. Liu et al. <cit.> combine three deep convolutional neural networks with SVM and Rudd et al. <cit.> have adopted VGG16 <cit.> topped with a novel objective function. These models are drastically larger than our basic network. 
Specifically, in <cit.>, LNet_o and LNet_s have network structures similar to AlexNet <cit.>. AlexNet has 60M parameters. Thus, only the localization part in <cit.>, not considering ANet, has a total of 120M parameters. Rudd et al. <cit.> adopt VGG16 <cit.> that has 138M parameters. Our basic attribute prediction network has only 24M parameters thanks to replacing fully connected layers with a single global average pooling.Second, <cit.> and <cit.> are built on the top of networks previously trained on massive object category (and facial identity) data while we train all our networks from scratch. Hence, we reject the necessity of pre-training on other large scale benchmarks, arguing that CelebA dataset <cit.> itself is sufficiently large for successfully training facial attribute prediction models from scratch.Experimental results indicate that under different settings and evaluation protocols, our proposed semantic segmentation-based pooling and gating mechanisms can be effectively used to boost the facial attribute prediction performance. That is particularly important given that our global average pooling baseline already beats the majority of the existing state-of-the-art methods. To see if SSP and SSG are complementary to each other, we also report their combination where the corresponding predictions are simply averaged. We observe that such process further boosts the performance.To investigate the importance of aggregating features within the semantic regions, we replace the global average pooling in our basic model with the spatial pyramid pooling layer <cit.>. We use a pyramid of two levels and refer to this baseline as SPPNet^*. While aggregating the output activations in different locations, SPPNet^* does not align its pooling regions according to the semantic context that appears in the image. This is in direct contrast with the intuition behind our proposed methods. Experimental results shown in Table <ref> confirm that simply pooling the output activations at multiple locations is not sufficient. In fact, it results in a lower performance than global average pooling. This verifies that the improvement obtained by our proposed models is due to their content aware pooling/gating mechanisms.Naive Approach A naive alternative approach is to consider the segmentation maps as additional input channels. To evaluate its effectiveness, we feed the average pooling basic model with 10 input channels, 3 for RGB colors and 7 for different semantic segmentation maps. The input is normalized using Batch Normalization <cit.>. We train the network using the same setting as other aforementioned models. Our experimental results indicate that such naive approach cannot leverage the localization cues as good as our proposed methods. Table <ref> shows that at best, the naive approach is on par with the average pooling basic model. We emphasize that feeding semantic segmentation maps along with RGB color channels to a convolutional network results in blending the two modalities in an addition fashion. Instead, our proposed mechanisms take a multiplication approach by masking the activations using the semantic regions.Semantic Masks vs. Bounding Boxes To analyze the necessity of semantic segmentation, we generate a baseline, namely BBox, which is similar to SSP. However, we replace the semantic regions in SSP with the bounding boxes on the facial landmarks. Note that we use the groundtruth location of the facial landmarks, provided in CelebA dataset <cit.>, to construct the bounding boxes. 
Hence, to some extent, the performance of BBox is the upper bound of the bounding box experiment. There are 5 facial landmarks including left eye, right eye, nose, left mouth and right mouth. We use boxes with area 20^2 (40^2 gives similar results) and 1:1, 1:2 and 2:1 aspect ratios. Thus, there are a total of 16 regions including the whole image itself. From Table <ref>, we see that our proposed models, regardless of the evaluation measure, outperform the bounding box alternative suggesting that semantic masks should be favored over the bounding boxes on the facial landmarks.Balanced Classification AccuracyGiven the significant imbalance in the attribute classes, also noted by <cit.>, we suggested using average precision instead of classification accuracy/error to evaluate attribute prediction. Instead, Huang et al. <cit.> have adopted balanced accuracy measure. To see if our proposed approach is superior to <cit.> under balanced accuracy measure, we fine-tuned our models with the weighted (∝ imbalance level) binary cross entropy loss. From Table <ref>, we observe that under balanced accuracy <cit.>, all the variations of our proposed model outperform <cit.> with large margins.§.§.§ Evaluation on LFWA dataset To better understand the effectiveness of our proposed approach, we report experimental results on the LFWA dataset <cit.> in Table <ref>. We observe that, all the models proposed in this work which exploit localization cues improve our basic model. Specifically, SSP + SSG achieves considerably better performance than the average pooling basic model with 1.86% in classification error and 2.59% in the average precision. Our best model also outperforms all other state-of-the-art methods. §.§ Facial Attributes for Semantic Face ParsingIn this work, we established how semantic segmentation can be used to improve facial attribute prediction. What if we reverse the roles. Can facial attributes improve semantic face parsing? To evaluate this, we jointly train two networks where the first 8 layers of our basic attribute prediction network share weights with the encoder part of the semantic segmentation network. We optimize w.r.t the aggregation of two losses. Specifically, the attribute prediction loss on the CelebA <cit.> dataset and the semantic segmentation loss on the Helen face <cit.> dataset using facial segment labels of <cit.>. We follow pre-defined data partitions of <cit.>, detailed in section <ref>, and use Intersection over Union (IoU) as the evaluation measure. Table <ref> shows nearly 4% boost when attributes are incorporated, indicating the positive effect of attributes in improving semantic face parsing. This shows that there exist an interrelatedness between attribute prediction and semantic segmentation. In future, we will further explore this promising direction. §.§ VisualizationsFigure <ref> illustrates per-attribute weights that the localization branch of the SSP has learned in order to combine the predictions associated with different semantic regions. We observe that attributes such as “Black Hair", “Brown Hair", “Straight Hair" and “Wavy Hair" have strong bias towards the hair region. This matches our expectation. However, attribute “Blond Hair" does not behave similarly. We suspect that it is because the semantic segmentation network does not perform as consistent on light hair colors as it does on the dark ones (refer to Figure <ref>). 
Attributes such as “Goatee", “Mouth Slightly Open", “Mustache" and “Smiling" are also showing a large bias towards the mouth region. While these are aligned with our human knowledge, “Sideburns" and “Wearing Necklace" apparently have incorrect biases. Unlike the global pooling which equally affects a rather large spatial domain, we expect SSP to generate activations that are semantically aligned. To evaluate our hypothesis, in Figure <ref>, we show the activations for the top fifty channels of the last convolution layer. Top row corresponds to our basic network with global average pooling while the bottom row is generated when we replace global average pooling with SSP. We observe that, activations generated by SSP are clearly more localized than those obtained from the global average pooling. § IMPLEMENTATION DETAILSAll of our experiments were conducted on a single NVIDIA Titan X GPU. We use AdaGrad <cit.> with mini-batches of size 32 to train the attribute prediction models from scratch. The learning rate and weight decay are respectively set to 0.001 and 0.0005. We follow the same setting for training the semantic segmentation network. We perform data augmentation by randomly flipping (horizontally) the input images. In SSP experiments, we resize the output of the semantic segmentation network at Deconv_23 layer to 14×12 (resolution of the final convolution layer). To do so, we use max and average pooling operations. Since max pooling increases the spatial support of the region, we use it for the masks associated with eyes, eyebrows, nose and mouth. This helps us to capture some context as well. We use average pooling for the remaining regions. For SSG experiments, we use the output of Deconv_33 layer, in the semantic segmentation network, as the localization cue. The attribute prediction and semantic segmentation networks are respectively trained for 40K and 75K iterations.§ CONCLUSIONAligned with the trend of part-based attribute prediction methods, we proposed employing semantic segmentation to improve facial attribute prediction. Specifically, we transfer localization cues from the auxiliary task of semantic face parsing to the facial attribute prediction problem. In order to guide the attention of our attribute prediction model to the regions which different attributes naturally show up, we introduced SSP and SSG. While SSP is used to restrict the aggregation procedure of final activation maps to regions that are semantically consistent, SSG carries the same notion but applies it to the earlier layers. We evaluated our proposed methods on CelebA and LFWA datasets and achieved state-of-the-art performance. We also showed that facial attributes can improve semantic face parsing. We hope that this work encourages future research efforts to invest more in the interrelatedness of these two problems. Acknowledgments: We thank anonymous reviewers for insightful feedback, and Amir Emad, Shervin Ardeshir and Shayan Modiri Assari for fruitful discussions. Mahdi M. Kalayeh and Mubarak Shah are partially supported by NIJ W911NF-14-1-0294. Boqing Gong is supported in part by NSF IIS #1566511 and thanks Adobe Systems for a gift. ieee
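As a companion to the SSP sketch above, here is a rough PyTorch rendering of the SSG layer: the convolution output is gated by normalized semantic masks before Batch Normalization, max pooling then acts on region-pure copies of the activations, and a 1×1 convolution restores the channel count. Shapes, names and the toy inputs are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SSG(nn.Module):
    # Semantic Segmentation-based Gating: gate conv activations with N
    # normalized semantic masks before max pooling, then restore channels
    # with a 1x1 convolution.
    def __init__(self, channels=256, num_regions=7):
        super().__init__()
        self.num_regions = num_regions
        self.bn = nn.BatchNorm2d(channels * num_regions)
        self.act = nn.PReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.restore = nn.Conv2d(channels * num_regions, channels, kernel_size=1)

    def forward(self, x, masks):
        # x: (B, C, H, W) conv output; masks: (B, N, H, W), non-negative.
        B, C, H, W = x.shape
        # Normalize each mask channel to sum to 1 (stabilizes training).
        m = masks / masks.sum(dim=(2, 3), keepdim=True).clamp(min=1e-6)
        # Broadcast multiply: N differently-masked copies of the activations.
        gated = x.unsqueeze(1) * m.unsqueeze(2)           # (B, N, C, H, W)
        gated = gated.reshape(B, self.num_regions * C, H, W)
        gated = self.act(self.bn(gated))                  # BN right after gating
        pooled = self.pool(gated)                         # region-pure max pooling
        return self.restore(pooled)                       # back to C channels

ssg = SSG(channels=256, num_regions=7)
out = ssg(torch.randn(2, 256, 56, 48), torch.rand(2, 7, 56, 48))
print(out.shape)  # torch.Size([2, 256, 28, 24])
```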
Pruning Variable Selection Ensembles

Chunxia Zhang
School of Mathematics and Statistics, Xi'an Jiaotong University, China

Yilei Wu and Mu Zhu
Department of Statistics and Actuarial Science, University of Waterloo, Canada

December 30, 2023

CZ gratefully acknowledges research support from the 973 Program of China, No. 2013CB329406; the National Natural Science Foundation of China, No. 61572393 and No. 11671317; and the China Scholarship Council. YW and MZ are partially supported by the Natural Sciences and Engineering Research Council of Canada, No. RGPIN-250419 and No. RGPIN-2016-03876.

Abstract: In the context of variable selection, ensemble learning has gained increasing interest due to its great potential to improve selection accuracy and to reduce the false discovery rate. A novel ordering-based selective ensemble learning strategy is designed in this paper to obtain smaller but more accurate ensembles. In particular, a greedy sorting strategy is proposed to rearrange the order by which the members are included into the integration process. By stopping the fusion process early, a smaller subensemble with higher selection accuracy can be obtained. More importantly, the sequential inclusion criterion reveals the fundamental strength-diversity trade-off among ensemble members. Taking stability selection (abbreviated as StabSel) as an example, experiments are conducted with both simulated and real-world data to examine the performance of the novel algorithm. Experimental results demonstrate that pruned StabSel generally achieves higher selection accuracy and lower false discovery rates than StabSel and several other benchmark methods.

Keywords: High-dimensional data; Stability selection; Ensemble pruning; Selection accuracy; False discovery rate.

§ INTRODUCTION

With the wide availability of high-dimensional data in many disciplines, linear regression models play a pivotal role in data analysis due to their simplicity and good performance. In such situations, it is often assumed that the true model is sparse, in the sense that only a few covariates have an actual influence on the response. Therefore, variable selection is particularly important for detecting these covariates, in order to enhance estimation and prediction accuracy or to improve the interpretability of the model. In this article, we primarily focus on the variable selection problem in a linear regression model,

y= x_1β_1+ x_2β_2+⋯+ x_pβ_p+ε= Xβ+ε,

where y=(y_1,y_2,⋯,y_n)^ T∈ℝ^n is the response vector, X=( x_1, x_2,⋯, x_p)∈ℝ^n× p is the design matrix, and {(y_i, x_i)}_i=1^n are n independent observations. Moreover, β=(β_1,β_2,⋯,β_p)^ T∈ℝ^p is a p-dimensional unknown coefficient vector and ε=(ε_1,ε_2,⋯,ε_n)^ T ∈ℝ^n is a normally distributed error term, namely ε∼ N( 0,σ^2 I), in which σ is unknown.
Here, the response and the covariates are assumed to be mean-corrected; there is thus no intercept term in model (<ref>).Variable selection serves two different objectives depending on whether the modelling purpose is for prediction or for interpretation <cit.>. The former aims to seek a parsimonious model so that future data can be well forecast or prediction accuracy can be maximized. But for the latter, analysts would like to identify truly important variables (i.e., those having actual influence on an outcome) from numerous candidates, or to maximize selection accuracy. Due to the significant difference between predictive models and explanatory models, the corresponding variable selection approaches are also very different. In the current paper, we will take selection accuracy (i.e., accurate identification of truly important variables) as our main target.In the literature, a large number of techniques have been developed to tackle variable selection problems under many different circumstances, such as subset selection <cit.>, coefficient shrinkage <cit.>, variable screening <cit.>, Bayesian methods <cit.>, and so on. In high-dimensional situations, much evidence <cit.> has demonstrated that some methods (e.g., subset selection, lasso) are unstable. Here, instability of a method means that small changes in data can lead to much variation of the obtained selection results. If prediction is our final objective, this may not affect the result very much because models including different covariates may have comparable prediction ability. Nevertheless, it is particularly crucial to use a stable method to identify important covariates. Take a biological application as an example, biological experts often expect to get a small but stable set of highly informative variables since they need to invest considerable time and research effort to verify them subsequently. In addition, stable results are more reliable and easier to explain. To stabilize these techniques, ensemble learning has great potential since averaging over a number of independent measures is often beneficial.Ensemble learning, a widely used and efficient technique to enhance the performance of a single learning machine (often called base learner), has had significant success in solving a large variety of tasks <cit.>. The main idea of ensemble learning is to make use of the complementarity of many base machines to better cope with a problem. With regard to most existing ensemble methods (e.g., bagging and boosting), they are developed to improve prediction, and the final models obtained can be called prediction ensembles (PEs). But for variable selection ensembles (VSEs), a phrase first coined by <cit.>, their aim is an accurate identification of covariates which are truly relevant to the response. Existing VSE algorithms include: parallel genetic algorithm (PGA) <cit.>, stability selection (StabSel) <cit.>, random lasso <cit.>, bagged stepwise search (BSS) <cit.>, stochastic stepwise ensembles (ST2E) <cit.>, and bootstrap-based tilted correlation screening learning algorithm (TCSL) <cit.>. These VSE algorithms usually combine all members to generate an importance measure for each variable. As is the case for PEs, a good strength-diversity trade-off among ensemble members is crucial to the success of a VSE <cit.>. However, there inevitably exist some redundant members which are highly correlated because by definition each member is trying to extract the same information from the same training data. 
In order to filter out these members and attain better selection results, we propose a novel ordering-based selective ensemble learning strategy to construct more accurate VSEs. The core idea is to first sort the ensemble members according to how much they decrease the overall variable selection loss, and then aggregate only those ranked near the top. In particular, the ordering phase is executed by sequentially including the members into an initially empty ensemble so that the variable selection loss of the evolving ensemble is minimized at each step. Then, only those top-ranked members (typically fewer than half of the raw ensemble) are retained to create a smaller but better ensemble. By taking StabSel as an example, our experiments carried out with both simulated and real-world data illustrate that the pruned ensemble does indeed exhibit better performance — in terms of both selecting the true model more often and reducing the false discovery rate — than the original, full ensemble as well as many other benchmark methods.

The remainder of this paper is organized as follows. Section 2 presents some related work on VSEs and selective ensemble learning. Section 3 is devoted to proposing a novel, sequential, ordering-based ensemble pruning strategy. Some theoretical insights are also offered for the sequential inclusion criterion, which is shown to balance the strength-diversity trade-off among ensemble members. Section 4 explains how to apply our pruning strategy to StabSel. Some experiments are conducted with a batch of simulations and two real-world examples to evaluate the performance of our proposed methodology, in Sections 5 and 6, respectively. Finally, Section 7 offers some concluding remarks.

§ RELATED WORKS

In this section, we review some related works and ideas. First, there is the basic idea of VSEs. Like PEs, the process of creating a VSE can generally be divided into two steps, that is, ensemble generation and ensemble integration. Most, if not all, existing VSEs utilize a simple averaging rule to assign each variable a final importance measure. Therefore, the key difference among them lies in how to produce a collection of accurate but diverse constituent members. The usual practice is either to execute a base learner (i.e., a variable selection algorithm) on slightly different data sets or to inject some randomness into the base learning algorithm. Among methods of the first type, researchers generally perform selection on a series of bootstrap samples. The representatives include StabSel <cit.>, random lasso <cit.>, BSS <cit.> and TCSL <cit.>. More recently, <cit.> systematically studied the efficiency of subsampling and bootstrapping to stabilize forward selection and backward elimination. The core idea behind methods of the second type is to use a stochastic rather than a deterministic search algorithm to perform variable selection. Approaches such as PGA <cit.> and ST2E <cit.> belong to this class.

Next, we would like to discuss StabSel in more detail, as we will use it as the main example to illustrate how our method works. StabSel <cit.> is a general method that combines subsampling with a variable selection algorithm. Its main idea is to first estimate the selection frequency of each variable by repeatedly applying a variable selection method to a series of subsamples. Afterwards, only the variables whose selection frequencies exceed a certain threshold are deemed important.
More importantly, the threshold can be chosen in such a way that the expected number of false discoveries can be theoretically bounded under mild conditions. Due to its flexibility and versatility, StabSel has enjoyed increasing popularity and has been successfully applied in many domains <cit.> since its inception.

Finally, the idea of selective ensemble learning (also known as ensemble pruning) is not new in machine learning, either. <cit.> first proved a so-called “many-could-be-better-than-all” theorem, which states that it is usually beneficial to select only some, instead of keeping all, members of a learning ensemble. Since then, a great number of ensemble pruning techniques <cit.> have been proposed. Compared with full ensembles, not only are pruned ensembles more efficient both in terms of storage and in terms of prediction speed, they typically also achieve higher prediction accuracy, a win-win situation. Among existing methods, ranking-based and search-based strategies are the two most widely used approaches to select the optimal ensemble subset. The former works by first ranking all the individuals according to a certain criterion and then keeping only a small proportion of top-ranked individuals to form a smaller subensemble. With respect to the latter, a heuristic search is generally carried out in the space of all possible ensemble subsets by evaluating the collective strength of a number of candidate subsets. However, these ensemble pruning methods are all devised for PEs. Due to the significant difference between PEs and VSEs, they cannot be directly used for VSEs.

In the literature on VSEs, to the best of our knowledge only <cit.> have made an attempt to prune VSEs. Nevertheless, our proposed method differs from theirs in several aspects. First, the strategy to sort ensemble members is quite different. Given a VSE, <cit.> used prediction error to evaluate and sort its members, whereas our algorithm sequentially looks for optimal subensembles to minimize variable selection loss. Second, their experiments showed that their subensembles typically do not perform well until they reach at least a certain size; a small ensemble formed by just a few of their top-ranked members usually does not work as well as a random ensemble of the same size. Our method does not suffer from this drawback; empirical experiments show that our subensembles almost always outperform the full ensemble, regardless of their sizes. Last but not least, while <cit.> focused on pruning VSEs from the ST2E algorithm <cit.>, in this paper we will primarily focus on applying our pruning technique to stability selection <cit.>, for which some extra tricks are needed since stability selection does not aggregate information from individual members by simple averaging.

§ AN ORDERING-BASED PRUNING ALGORITHM FOR VSES

Roughly speaking, the working mechanism of almost all VSEs can be summarized as follows. Each base machine first estimates whether a variable is important or not. By averaging the outputs of many base machines, the ensemble generates an average importance measure for each candidate variable. Subsequently, the variables are ranked according to their estimated importance to the response. To determine which variables are important, a thresholding rule can be further implemented. Essentially, the output of each ensemble member can be summarized by an importance vector with each element reflecting how important the corresponding variable is to the response, and so can the output of the ensemble.
Suppose that the true importance vector exists; then, each given member and the overall ensemble will both incur some loss due to their respective departures from the true vector. In this way, given a VSE, an optimal subset of its members can be found to minimize this variable selection loss.

To state the problem formally, we first introduce some notation. Let r^∗=(r_1^∗,r_2^∗,⋯,r_p^∗)^T (∑_j=1^p r_j^∗=1, r_j^∗≥0) denote the true importance vector, which is not available in practice. The matrix R stores the estimated importance measures, say, r_b=(r_b1,r_b2,⋯,r_bp)^T (∑_j=1^p r_bj=1, r_bj≥0) (b=1,2,⋯,B), which are produced by the B ensemble members. Let L(r^∗, r_b) denote the variable selection loss of member b. In this paper, we adopt the commonly used squared loss function to measure the loss, i.e., L(r^∗, r_b) = ∑_j=1^p (r_j^∗ - r_bj)^2 = ||r^∗ - r_b||_2^2. Then, the loss function for an ensemble of size B is

||1/B ∑_b=1^B r_b - r^∗||_2^2 = 1/B^2 ||∑_b=1^B (r_b - r^∗)||_2^2.

If we define a matrix E with its element E_ij (i,j=1,2,⋯,B) as

E_ij = (r_i - r^∗)^T (r_j - r^∗),

the loss function in (<ref>) can thus be expressed as

1/B^2 ∑_i=1^B ∑_j=1^B E_ij.

Assuming that there exists a subensemble which can achieve lower loss than the full ensemble, the process of finding the optimal subset is NP-hard, as there are altogether 2^B - 1 non-trivial candidate subsets. Despite this, we can still design an efficient, greedy algorithm that sequentially looks for the best local solution at each step. To prune the original VSE, we try to sequentially select a subensemble composed of U<B individuals {s_1,s_2,⋯,s_U} that minimizes the loss function

1/U^2 ∑_i=1^U ∑_j=1^U E_s_is_j.

First, the member which has the lowest value of E_ii (i=1,2,⋯,B) is chosen. In each subsequent step, every candidate member is tentatively included into the current ensemble and the one which minimizes the loss is selected. This process is repeated until there is no candidate member left, which yields a new aggregation order of all the ensemble members. The time complexity of this operation is of polynomial order O(B^2 p + B^3), while that of the exhaustive search is of exponential order. Algorithm 1 lists the main steps of this greedy method to sort the ensemble members of a VSE.

Algorithm 1. A greedy pruning algorithm for VSEs.

Input:
  R=(r_1, r_2,⋯, r_B): a p×B matrix storing the importance measures estimated by the B members.
  r_ref: a reference importance vector to be used in place of r^∗ in practice.
Output:
  𝒮: indices for the ordered members.
Main steps:
1. According to formula (<ref>), compute each element E_ij of the matrix E — replacing r^∗ with r_ref in practice.
2. Initialize 𝒮={b}, 𝒞={1,2,⋯,B}∖{b}, where member b is the most accurate one, i.e., b = argmin_{1≤i≤B} E_ii.
3. For u = 2,⋯,B:
   (1) Let minimum ⟵ +∞.
   (2) For k in 𝒞:
       * Let value ⟵ 1/u^2 (∑_i=1^u-1 ∑_j=1^u-1 E_s_is_j + 2∑_i=1^u-1 E_s_ik + E_kk).
       * If (value < minimum), let sel_ind = k and minimum = value.
       End For
   (3) Let 𝒮=𝒮∪{sel_ind} and 𝒞=𝒞∖{sel_ind}.
   End For
4. Output the indices of the sorted members, 𝒮.

Notice that the true importance vector r^∗ is generally unknown. Therefore, in practice we need to define a certain reference vector, say, r_ref, to be used as a surrogate of r^∗. Any rough estimate of r^∗ can be used. In fact, it is not crucial for r_ref to be in any sense a “correct” importance vector.
It is merely used as a rough guide so that we can assess the relative accuracy of r_b (in terms of its partial effect on r_ref); the final variable selection decision is still based upon the information contained in {r_1,...,r_B} rather than upon r_ref itself. In this paper, we use stepwise regression to construct such a reference vector. Given data (X, y), a linear regression model is estimated by stepwise fitting. Based on the final coefficient estimates β̂_j (j=1,2,⋯,p), the jth element of r_ref is simply computed as |β̂_j| / ∑_j=1^p |β̂_j|.

Interestingly, one can discern from this sequential algorithm the fundamental “strength-diversity trade-off” that drives all ensemble learning algorithms. In Algorithm 1, equation (<ref>) is the loss after r_k is added to the current ensemble, {r_s_1, r_s_2, ..., r_s_u-1}. Let

r_-k = 1/(u-1) ∑_j=1^u-1 r_s_j

be the current ensemble estimate, where the subscript “-k” is used to denote “prior to having added r_k”. Then, an alternative way to express (<ref>) is

||[(1-1/u) r_-k + (1/u) r_k] - r^∗||^2 = ||(1/u)(r_k - r_-k) - (r^∗ - r_-k)||^2
= 1/u^2 ||r_k - r_-k||^2 - (2/u)(r_k - r_-k)^T (r^∗ - r_-k) + ||r^∗ - r_-k||^2,

where the last term ||r^∗ - r_-k||^2 is the loss incurred by the ensemble without r_k. Clearly, r_k can further reduce the overall ensemble loss only if

(2/u)(r_k - r_-k)^T (r^∗ - r_-k) > 1/u^2 ||r_k - r_-k||^2

or, equivalently,

(r_k - r_-k)^T (r^∗ - r_-k) / (||r_k - r_-k|| ||r^∗ - r_-k||) > (1/2u) × ||r_k - r_-k|| / ||r^∗ - r_-k||.

The left-hand side of (<ref>) can be viewed as the partial correlation between r_k and r^∗ after having removed the current estimate r_-k from both; it is thus a measure of how useful candidate r_k is. The right-hand side, on the other hand, is a measure of how different candidate r_k is from the current estimate r_-k (relative to the difference between r^∗ and r_-k). Hence, one can interpret condition (<ref>) as a lower bound on the usefulness of r_k in order for it to be considered a viable candidate as the next (i.e., the u-th) member of the ensemble.

First, the bound decreases with the index u; that is, the bar of entry is steadily lowered as more and more members are added. This is necessary — since it is more difficult to improve an already sizable ensemble, a new member becomes admissible as long as it has some additional value. Second, if the new candidate r_k is very different from r_-k, then it must be very useful as well — in terms of its partial correlation with r^∗ — in order to be considered. This observation is consistent with a fundamental trade-off in ensemble learning, referred to as the “strength-diversity trade-off” by Leo Breiman in his famous paper on random forests <cit.>, which implies that something very different (diversity) had better be very useful (strength). The analysis above thus provides some crucial insight about how the accuracy and diversity of individuals in a pruned VSE work together to improve its performance.

Based on the output 𝒮 of Algorithm 1, we can create the average importance for the p variables by averaging the results of only some — say, the top U — members of the full ensemble. Then, the variables can be ordered accordingly. Ideally, the value of U should be automatically determined to maximize selection accuracy. However, variable selection accuracy is not as readily computable as prediction accuracy, since the truly important variables are unknown in practice. The easiest method is to prescribe a desired number for U. According to our experiments (refer to Section 5) as well as evidence in the study of PEs <cit.>, it often suffices to keep only the first 1/4 to 1/2 of the sorted members.
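To make Algorithm 1 concrete, here is a minimal Python sketch of the greedy ordering step. It is our illustration rather than the authors' code: the function name greedy_order is hypothetical, R is assumed to hold the importance vectors r_b as its columns, and r_ref plays the role of the surrogate reference vector discussed above.

import numpy as np

def greedy_order(R, r_ref):
    # R: p x B matrix whose column b is the importance vector r_b.
    # r_ref: length-p reference vector used in place of the unknown r*.
    p, B = R.shape
    D = R - r_ref[:, None]              # column b holds r_b - r_ref
    E = D.T @ D                         # E_ij = (r_i - r_ref)' (r_j - r_ref)
    S = [int(np.argmin(np.diag(E)))]    # start from the most accurate member
    C = set(range(B)) - set(S)
    while C:
        u = len(S) + 1
        base = E[np.ix_(S, S)].sum()    # double sum over the current members
        cross = E[S, :].sum(axis=0)     # sum_i E_{s_i, k} for every candidate k
        # loss of the u-member ensemble after tentatively adding candidate k
        k_best = min(C, key=lambda k: (base + 2.0 * cross[k] + E[k, k]) / u**2)
        S.append(k_best)
        C.remove(k_best)
    return S                            # aggregation order of all B members

Because the double sum over the current members is shared by all candidates at a given step, this sketch stays within the polynomial cost O(B^2 p + B^3) noted above.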
§ ENSEMBLE PRUNING FOR STABILITY SELECTION

While the pruning algorithm (Algorithm 1) we provided in Section <ref> can be applied to any VSE, in this paper we will use StabSel <cit.> as an example to demonstrate its application and effectiveness, in view of StabSel's popularity and flexibility in high-dimensional data analysis <cit.>.

In essence, StabSel is an ensemble learning procedure. In the generation stage, it applies the base learner lasso (or randomized lasso) repeatedly to subsamples randomly drawn from the training data. When combining the information provided by the multiple lasso learners, it employs a special strategy, as opposed to simple averaging. For each candidate value of the tuning parameter in the lasso, it first estimates the probability that a variable is identified to be important. Next, it assigns an importance measure to each variable as the maximum probability that the variable is considered important over the entire regularization region consisting of all candidate tuning parameters. Finally, the selection result is obtained by evaluating the importance measures against a given threshold. To ease presentation, we summarize the main steps of StabSel, with the lasso as its base learner, as Algorithm 2. That StabSel uses an aggregation strategy other than that of simple averaging is another reason why we have chosen to use it as our main example, because it is less obvious how our pruning algorithm should be applied.

Algorithm 2. Stability selection for variable selection.

Input:
  y: an n×1 response vector.
  X: an n×p design matrix.
  Λ: a set containing K regularization parameters for the lasso.
  B: number of base learners.
  π_thr: a pre-specified threshold value.
Output:
  ℐ: index set of the selected variables.
Main steps:
1. For b=1,2,⋯,B:
   * Randomly draw a subsample (X', y') of size ⌊n/2⌋ from (X, y), where ⌊x⌋ denotes the largest integer less than or equal to x.
   * With each regularization parameter λ_k∈Λ, execute the lasso with (X', y') and denote the selection results as 𝒮̂^λ_k_⌊n/2⌋,b (k=1,2,⋯,K).
   End For
2. Compute the selection frequencies for each variable as

   π̂_j = max_{λ_k∈Λ} {π̂_j^λ_k}, j=1,2,⋯,p,

   in which

   π̂_j^λ_k ≜ P^∗{j∈𝒮̂^λ_k} = 1/B ∑_b=1^B 𝕀{j∈𝒮̂^λ_k_⌊n/2⌋,b},

   where 𝕀(·) is an indicator function which is equal to 1 if the condition is fulfilled and to 0 otherwise.
3. Select the variables whose selection frequency is above the threshold value, i.e., ℐ={j: π̂_j ≥ π_thr}.

Let V stand for the number of variables wrongly selected by StabSel, i.e., false discoveries. Under mild conditions on the base learner, <cit.> proved that the expected value of V (also called the per-family error rate, PFER) can be bounded for π_thr ∈ (1/2,1) by

E(V) ≤ 1/(2π_thr - 1) · q_Λ^2/p,

in which q_Λ means the number of selected variables per base learner. Following guidelines provided by <cit.>, we set π_thr=0.7. Furthermore, we choose a false discovery tolerance of E(V) ≤ 4, which implies a targeted value of q_Λ=⌈(1.6p)^1/2⌉ variables to be chosen by the base learner according to (<ref>). Hence, we set the regularization region Λ for the lasso to be a set of K different values — equally spaced on the logarithmic scale — between λ_min and λ_max, where λ_max = max_j |n^-1 x_j^T y| and

λ_min = argmax_λ {|λ_max - λ| : 0 ≤ λ ≤ λ_max, q_λ = ⌈(1.6p)^1/2⌉}.

That is, starting from λ_max, we push λ_min far enough until the lasso is able to include q_Λ=⌈(1.6p)^1/2⌉ variables.
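As an illustration of this grid construction, the following rough Python sketch uses scikit-learn's Lasso as the base learner. The 0.95 shrink factor, the function name lambda_grid, and the assumption that the columns of X are standardized are our own choices, not part of StabSel itself.

import numpy as np
from sklearn.linear_model import Lasso

def lambda_grid(X, y, K=100):
    n, p = X.shape
    q_target = int(np.ceil(np.sqrt(1.6 * p)))   # q_Lambda = ceil((1.6 p)^(1/2))
    lam_max = np.max(np.abs(X.T @ y)) / n       # lambda_max = max_j |x_j' y| / n
    lam = lam_max
    while True:                                 # push lambda_min down until the
        lam *= 0.95                             # lasso selects q_target variables
        coef = Lasso(alpha=lam, max_iter=10000).fit(X, y).coef_
        if np.count_nonzero(coef) >= q_target:
            break
    return np.logspace(np.log10(lam), np.log10(lam_max), num=K)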
In our experiments, we usually took K=100.

For StabSel, the selection result of each ensemble member, 𝒮̂^λ_k_⌊n/2⌋,b (k=1,2,⋯,K) (see Algorithm 2, Step 1), can be organized as a matrix T^(b) of size p×K, rather than simply an importance vector of length p. Each entry T^(b)_jk (j=1,⋯,p; k=1,⋯,K) is a binary indicator of whether variable j is selected when the regularization parameter is λ_k. When applying Algorithm 1 to rearrange the aggregation order of the ensemble members in StabSel, each T^(b) needs to be transformed into an importance vector r_b=(r_b1,r_b2,⋯,r_bp)^T, with r_bj reflecting the importance of variable j as estimated by member b. To achieve this, r_bj is computed as r_bj = (1/K)∑_k=1^K T^(b)_jk, or

r_b = (1/K) T^(b) 1.

That is, a variable is deemed more important if it is selected over a larger regularization region. Overall, the pruned StabSel algorithm works by inserting our ensemble pruning algorithm (Algorithm 1) as an extra step into the StabSel algorithm (Algorithm 2). First, a StabSel ensemble of size B is generated “as usual” by step 1 of Algorithm 2. Then, the selection result T^(b) of each ensemble member is condensed into r_b according to (<ref>) and Algorithm 1 is utilized to sort the members and obtain the ranked list, 𝒮. Afterwards, the ensemble members are fused “as usual” by steps 2 and 3 of Algorithm 2, except that only the members corresponding to the top U elements of 𝒮 are fused rather than all of the original B members — specifically, only the top U members are used when computing π̂_j^λ_k in (<ref>).

§ SIMULATION STUDIES

In this section, some experiments are conducted with simulated data in different experimental settings to study the performance of the pruned StabSel algorithm. In particular, the effect of modifying the aggregation order of ensemble members in StabSel is first analyzed (scenario 1 below). Then (scenarios 2-5 below), pruned StabSel is examined and compared with vanilla StabSel as well as some other popular benchmark methods including the lasso <cit.>, SCAD <cit.> and SIS <cit.>.

In the following experiments, the lasso and StabSel were implemented with the glmnet toolbox <cit.> in Matlab, while SCAD and SIS were available as part of the package ncvreg <cit.> in R. In SIS, the ⌈n/log(n)⌉ variables having the largest marginal correlation with the response were first selected, and SCAD was then applied to identify the important ones. Ten-fold cross-validation was used to select the tuning parameters for the lasso and SCAD.

To extensively evaluate the performance of a method, the following five different measures were employed. Let β^∗=(β_1^∗,β_2^∗,⋯,β_p^∗)^T be the coefficient vector for the true model 𝒯, i.e., 𝒯={j: β_j^∗≠0}. To estimate an evaluation metric, we replicated each simulation M times. In the m-th replication, denote β̂_m=(β̂_1,m,β̂_2,m,⋯,β̂_p,m)^T as the estimated coefficients and 𝒮̂_m={j: β̂_j,m≠0} as the identified model. Moreover, let d_0 and (p-d_0) indicate the number of truly important and unimportant variables, respectively. Then, we define

p̅_1 = 1/(d_0 × M) ∑_m=1^M ∑_j=1^p 𝕀(β_j^∗≠0 and β̂_j,m≠0),
p̅_0 = 1/((p-d_0) × M) ∑_m=1^M ∑_j=1^p 𝕀(β_j^∗=0 and β̂_j,m≠0),
acc. = 1/M ∑_m=1^M 𝕀(𝒮̂_m=𝒯),
FDR = 1/M ∑_m=1^M [∑_j=1^p 𝕀(β_j^∗=0 and β̂_j,m≠0) / ∑_j=1^p 𝕀(β̂_j,m≠0)],
PErr = (1/σ^2) E[(ŷ - x^Tβ^∗)^2] = 1/M ∑_m=1^M [(1/σ^2)(β̂_m - β^∗)^T [E(x x^T)] (β̂_m - β^∗)].

In the formulae above, 𝕀(·) represents an indicator function.
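As a bookkeeping sketch (ours, with hypothetical names), the metrics other than PErr can be computed from the selected sets alone:

def selection_metrics(selected_runs, true_support, p):
    # selected_runs: list of M sets S_hat_m; true_support: the set T of truly
    # important indices; p: total number of candidate variables.
    M, d0 = len(selected_runs), len(true_support)
    tp = sum(len(S & true_support) for S in selected_runs)
    fp = sum(len(S - true_support) for S in selected_runs)
    p1_bar = tp / (d0 * M)                       # mean true positive rate
    p0_bar = fp / ((p - d0) * M)                 # mean false positive rate
    acc = sum(S == true_support for S in selected_runs) / M
    fdr = sum(len(S - true_support) / max(len(S), 1)   # guard: empty selection
              for S in selected_runs) / M
    return p1_bar, p0_bar, acc, fdr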
The p̅_1 and p̅_0 defined in (<ref>), respectively, correspond to the mean selection probability for the truly important and unimportant variables — i.e., the true positive and false positive rates, respectively. In general, a good method should simultaneously achieve a p̅_1 value close to 1 and a p̅_0 value close to 0. The selection accuracy (abbreviated as acc.) in (<ref>) indicates the frequency with which an algorithm exactly identifies the true model, and the false discovery rate (FDR) assesses the capacity of an approach to exclude noise variables. To evaluate the prediction ability of a method, we utilized the relative prediction error (simply abbreviated as PErr), given in (<ref>), following the practice of <cit.>. In particular, a linear regression model was built using the selected variables. Then, E(x x^T) and the relative prediction error were estimated with an independent test set composed of 10,000 instances.

§.§ Simulated data

The simulated data in the following scenarios 1-4 were generated by

y = x_1β_1 + x_2β_2 + ⋯ + x_pβ_p + ε = Xβ + ε, ε ∼ N(0, σ^2 I),

where ε is a normally distributed error term with mean zero and variance σ^2. In scenario 5, we simulated data from a logistic regression model. Although we have focused mostly on linear regression problems in this paper, all of these ideas (i.e., StabSel, SIS, etc.) can be generalized easily to other settings, and so can our idea of ensemble pruning. For logistic regression (scenario 5 here and a real-data example later in Section <ref>), rather than the relative prediction error defined in (<ref>), we simply used the average misclassification error,

PErr = 1/M ∑_m=1^M P(ŷ_m ≠ y),

with P(ŷ_m ≠ y) being estimated on an independently generated (or held out) test set, to measure prediction capacity.

Scenario 1. This is a simple scenario taken from <cit.>, which we used primarily to study the effect of our proposed re-ordering of the ensemble members (see Figures <ref> and <ref>), rather than to evaluate its performance against various benchmark algorithms. There are p=20 variables and n=40 observations. In particular, only variables x_5, x_10 and x_15 have actual influence on the response variable y, and their true coefficients are 1, 2, 3, respectively. The rest of the variables are uninformative. As in <cit.>, for the explanatory variables we considered the following 4 variations:

Variation 1: x_1, x_2,⋯, x_20 ∼ N(0, I);
Variation 2: x_1, x_2,⋯, x_19 ∼ N(0, I), x_20 = x_5 + 0.25 z, z ∼ N(0, I);
Variation 3: x_1, x_2,⋯, x_19 ∼ N(0, I), x_20 = x_10 + 0.25 z, z ∼ N(0, I);
Variation 4: x_j = z + ϵ_j, j=1,2,⋯,20, ϵ_j ∼ N(0, I), z ∼ N(0, I).

In variation 1, all covariates are independent. In variations 2 and 3, the variable x_20 is highly correlated with x_5 and with x_10, respectively, each with correlation coefficient ρ≈0.97. In variation 4, all variables are moderately correlated with each other, with ρ≈0.5. As for the standard deviation σ of ε, it was set to σ=1 for variations 1-3 and σ=2 for variation 4.

Scenario 2. This is a scenario similar to one considered by <cit.>. In this scenario, the covariates come from a normal distribution with a compound symmetric covariance matrix Σ=(Σ_i,j)_p×p in which Σ_i,j=ρ for i≠j. Five covariates are truly important to the response and their coefficients are taken as β_IV=(0.5, 1.0, 1.5, 2.0, 2.5)^T. The rest of the covariates are considered unimportant and their coefficients are all zero. By varying the values of n, p and ρ, we examined the performance of each method for n≥p and for n≪p (see Table <ref>).
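For instance, data from Scenario 2 can be simulated with a few lines of NumPy; this sketch (function name ours) is only meant to pin down the setting:

import numpy as np

def scenario2(n, p, rho, sigma=1.0, seed=None):
    rng = np.random.default_rng(seed)
    Sigma = np.full((p, p), rho)              # compound-symmetric covariance
    np.fill_diagonal(Sigma, 1.0)
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[:5] = [0.5, 1.0, 1.5, 2.0, 2.5]      # the five important covariates
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta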
Scenario 3. Here, we considered a setting in which the covariates have a block covariance structure, again similar to one used by <cit.>. In the model, the signal variables have pairwise correlation ρ_1=0.25; the noise variables have pairwise correlation ρ_2=0.75; and each pair of signal and noise variables has a correlation of ρ_3=0.50. The true coefficient vector is β=(0.5, 1.0, 1.5, 2.0, 2.5, 0_p-5)^T. We focused our attention on a high-dimensional p>n setting, with n=200, p=1000 and σ=1.

Scenario 4. In this scenario, we studied a more challenging problem based on a commonly used benchmark data set <cit.>. Here, X is generated from a multivariate normal distribution with mean zero and covariance matrix Σ=(Σ_i,j)_p×p, where Σ_i,j=ρ^|i-j| for i≠j. The true coefficient vector is β=(3, 1.5, 0, 0, 2, 0.5, 0.5, 0_p-7)^T and σ is set to 1. Again, we set n=200 and p=1000 to focus on the p>n case, and took ρ=0.50, 0.90 to evaluate the performance of each method. Other than the high correlations, what makes this scenario especially challenging is the existence of two weak signals with true coefficients equal to 0.5.

Scenario 5. Finally, we considered a logistic regression model <cit.> with data created from

logit(p_i) = log(p_i/(1-p_i)) = x_i^Tβ,

where p_i is the probability that the response variable y_i is equal to 1. The true coefficient vector is β=(3, 1.5, 0, 0, 2, 0_p-5)^T. The components of x are standard normal, where the correlation between x_i and x_j is ρ(x_i, x_j)=0.5^|i-j| ∀ i≠j. We took n=200 and examined both a relatively low-dimensional setting with p=50 and a high-dimensional one with p=1000.

§.§ Effect of changing the aggregation order

As we stated earlier, we used scenario 1 primarily to analyze our proposed reordering algorithm rather than to make general performance comparisons. First, we used it to investigate how the performance of StabSel varies if the aggregation order of its constituent members is rearranged. To evaluate the performance of a VSE, the estimated selection accuracy over 100 simulations was employed. Since the dimensionality p=20 is relatively low in scenario 1, we used a slightly different set of parameters — specifically, q_Λ=⌈0.8p⌉ and π_thr=0.6 — to run StabSel than what we recommended earlier in Section <ref> for high-dimensional problems.

Figure <ref> depicts the selection accuracy for subensembles of (regular) StabSel and of ordered StabSel as a function of their respective sizes. For (regular) StabSel, the members were aggregated gradually in the same random order as they were generated, whereas, for ordered StabSel, the members were first sorted by Algorithm 1 and then fused one by one. It can be observed from Figure 1 that the accuracy of (regular) StabSel tends to increase rapidly at the beginning as more members are aggregated. Then, it quickly reaches a nearly optimal value, after which further improvement by adding more members becomes negligible. But for ordered StabSel, the accuracy curve always reaches a maximum somewhere in the middle; afterwards, the curve steadily declines until it reaches the same level of accuracy as that of (regular) StabSel, when the two algorithms fuse exactly the same set of members. Moreover, we can see that the selection accuracy of almost any ordered subensemble would be higher than that of the full ensemble consisting of all B=300 members (i.e., the rightmost point in each subplot).
This unequivocally demonstrates the value of our ordering-based ensemble pruning and selective fusion algorithm.

§.§ Effect of the original ensemble size B

There is some evidence in the literature on selective ensemble learning for classification that increasing the size of the initial pool of classifiers (i.e., the ensemble size B) generally improves the performance of the final pruned ensemble <cit.>. To verify whether this is true for pruned VSEs as well, we used scenario 1 again to conduct the following experiments. For each variation of scenario 1, an initial StabSel ensemble of size 1000 was built. The ensemble members were then ordered, considering only the first 300, 500 and 1000 individuals of the original ensemble, respectively. Similar to the experiments of the previous section, these steps were repeated 100 times to estimate the respective selection accuracies of the full and pruned ensembles. The accuracy curves in Figure <ref> illustrate that the maximum selection accuracies achieved by re-ordering an initial pool of B = 300, 500 or 1000 base learners are almost the same.

§.§ Performance comparisons

We now proceed to general comparisons. Based on the simulations in Sections <ref> and <ref>, we used an ensemble size of B=100 in all subsequent experiments to strike a reasonable balance between performance and computational cost. Furthermore, these simulations (see Figures <ref> and <ref>) provided overwhelming evidence that keeping only a relatively small number of top-ranked ensemble members (often less than half) was usually sufficient to produce a better subensemble. As a result, in all experiments below we kept only the top 1/3 after re-ordering the initial StabSel ensemble to form our pruned ensemble. To evaluate the performance metrics, every simulation was repeated M=500 times when p≤n and M=200 times when p>n.

In scenario 2, for each ρ=0, 0.5, we compared all methods when n≥p, specifically, (n,p)=(100,50),(100,100), and when n≪p, specifically, (n,p)=(200,1000). Table <ref> reports the performance of each method, as measured by the different metrics. In each row, the number in the parentheses of the last column is the standard error of PErr from the M simulations. The following observations can be made. Firstly, all methods could detect the important variables in most cases, as shown by p̅_1, with the performance of SIS being slightly worse than the others, especially when p≥n. However, the lasso, SCAD and SIS all paid additional prices by including a relatively large number of uninformative variables, as indicated by the metrics p̅_0, acc. and FDR. Secondly, StabSel and pruned StabSel behaved significantly better than their rivals in terms of all metrics. Their advantages were more prominent in terms of acc. and FDR. More importantly, pruned StabSel often significantly improved upon StabSel in being able to correctly identify the true model; in some cases, the selection accuracy (acc.) almost doubled. Thirdly, the prediction abilities of StabSel and of pruned StabSel were comparable, and both outperformed the other benchmark algorithms. Here, the advantage of using a VSE over a single selector can also be seen clearly by comparing StabSel or pruned StabSel with the lasso, SCAD and SIS.

Table 2 summarizes the results for scenario 3. In this situation, it was difficult for any method to distinguish informative variables from uninformative ones because of their high correlation.
The results in Table 2 show that the lasso, SCAD and SIS were almost useless in this case; their selection accuracy was almost zero and their FDRs were also very high. Because SIS utilizes the correlation between each covariate and the response to achieve variable screening, the spurious correlations in this scenario caused it to behave badly. By contrast, StabSel and pruned StabSel performed much better. Moreover, by eliminating some unnecessary members in the StabSel ensemble, the pruned StabSel ensemble was able to reach much better selection results (much higher accuracy and lower FDR).

Results for scenario 4 are reported in Table 3. We can observe that StabSel and pruned StabSel both had satisfactory performances even when ρ=0.90, with pruned StabSel again significantly outperforming (regular) StabSel in terms of acc. and FDR. However, the other methods could hardly detect the true model at all due to the two weak signals.

Finally, Table 4 shows the results for scenario 5, a logistic regression problem, for both p<n and p>n. From Table 4, we can draw similar conclusions. The pruned StabSel ensemble continued to maintain its superiority over the other methods, especially in terms of selection accuracy and the FDR. The simulation results presented in this section strongly indicate that pruned StabSel is a competitive tool for performing variable selection in high-dimensional sparse models. As far as our current study is concerned, the most important message is that our ordering-based pruning algorithm (Algorithm 1) can give VSE algorithms such as StabSel a significant performance boost.

§ REAL DATA ANALYSIS

To assess how well each method behaves on real data, we took two real data sets and followed an evaluation procedure utilized by other researchers <cit.>. In particular, the design matrix X of the real data set was used with randomly generated coefficients and error terms to produce the response, so one knew beforehand whether each variable was important or not. Our first example is the Riboflavin data set from <cit.>. It is for a regression task with 111 observations and 4088 continuous covariates. Our second example is the Madelon data set from the UCI repository <cit.>. It is for a binary classification problem, which has been used as part of the NIPS 2003 feature selection challenge. There are 2600 observations and 500 variables.

For the Riboflavin data, we first drew p variables at random. Next, the number of nonzero coefficients was set to be s and their true values were randomly taken to be 1 or -1. Then, responses were created by adding error terms generated from a normal distribution N(0,σ^2), where σ^2 was determined to achieve a specific signal-to-noise ratio (snr). Finally, to evaluate the predictive performance of the selected models, these data were randomly split into a training set (90%) and a test set (10%). For any method under investigation, we first applied it to the training set to perform variable selection. Based on the selected variables, we then built a linear regression model and estimated its prediction error on the test set. The entire process was repeated 200 times. For the Madelon data, a similar process was followed, except that, instead of adding normally distributed error terms to generate the responses, we simply generated each response y_i from a binomial distribution with probability 1/[1 + exp(-x_i^Tβ)].
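In code, this response mechanism for the Madelon experiment amounts to the following short sketch (ours, with a hypothetical function name):

import numpy as np

def logistic_responses(X, beta, seed=None):
    rng = np.random.default_rng(seed)
    prob = 1.0 / (1.0 + np.exp(-(X @ beta)))   # P(y_i = 1 | x_i)
    return rng.binomial(1, prob)               # y_i ~ Bernoulli(prob_i)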
Since we were mostly interested in the behavior of each method in relatively high-dimensional (large p) situations, only 400 observations were used for training and the remaining ones were taken as the test set. Tuning parameters for each method — such as π_thr and λ_min for StabSel — were specified in the same manner as they were in the simulation studies (see Sections <ref> and <ref>).

Table 5 summarizes the results. It can be seen that the pruned StabSel ensemble again achieved the best performance in all cases as measured by p̅_0, acc. and FDR. When the ratio snr was high and the model was sparse (small s relative to p), its relative advantage over the other methods was more prominent. Although pruned StabSel generally has a slightly lower true positive rate (p̅_1), a necessary consequence of having a much reduced false positive rate (p̅_0), its overall selection accuracy tends to be much higher than that of the other methods. Finally, in terms of prediction capacity, pruned StabSel is better than or competitive with the other algorithms.

§ CONCLUSIONS

In this paper, we have investigated the idea of selective ensemble learning for constructing VSEs. In particular, we have developed a novel ordering-based ensemble pruning technique to improve the selection accuracy of VSEs. By rearranging the aggregation order of the ensemble members, we can construct a subensemble by fusing only the members ranked at the top. More specifically, each member is sequentially included into the ensemble so that at each step the loss between the resulting ensemble's importance vector and a reference vector is minimized. This novel technique can be applied to any VSE algorithm, but in our experiments with both simulated and real-world data, we have largely focused on using it to boost the performance of stability selection (StabSel), a particular VSE technique, partly because of the latter's popularity and flexibility but also because it is not directly obvious how our technique can be applied, as StabSel does not aggregate information from its members with simple averaging. Our empirical results have been overwhelmingly positive. As such, a pruned StabSel ensemble can be considered an effective alternative for performing variable selection in real applications, especially those with high dimensionality.

References

Beinrucker, A., Ü. Dogan, and G. Blanchard (2016). Extensions of stability selection using subsamples of observations and covariates. Statistics and Computing 26(5), 1058-1077.

Bin, R. D., S. Janitza, W. Sauerbrei, and A. Boulesteix (2016). Subsampling versus bootstrapping in resampling-based model selection for multivariable regression. Biometrics 72(1), 272-280.

Breheny, P., and J. Huang (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics 5(1), 232-253.

Breiman, L. (1996). Heuristics of instability and stabilization in model selection. The Annals of Statistics 24(6), 2350-2383.

Breiman, L. (2001). Random forests. Machine Learning 45(1), 5-32.

Bühlmann, P., M. Kalisch, and L. Meier (2014). High-dimensional statistics with a view towards applications in biology. Annual Review of Statistics and Its Application 1, 255-278.

Bühlmann, P., and J. Mandozzi (2014).
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison. Computational Statistics 29(3-4), 407-430.

Chung, D., and H. Kim (2015). Accurate ensemble pruning with PL-bagging. Computational Statistics and Data Analysis 83, 1-13.

Fan, J. Q., and R. Z. Li (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348-1360.

Fan, J. Q., and J. C. Lv (2008). Sure independence screening for ultrahigh dimensional feature space (with discussions). Journal of the Royal Statistical Society (Series B) 70(5), 849-911.

Fan, J. Q., and J. C. Lv (2010). A selective overview of variable selection in high dimensional feature space. Statistica Sinica 20(1), 101-148.

Guo, L., and S. Boukir (2013). Margin-based ordered aggregation for ensemble pruning. Pattern Recognition Letters 34, 603-609.

He, K., Y. Li, J. Zhu, J. E. Lee, C. I. Amos, T. Hyslop, J. Jin, H. Lin, Q. Wei, and Y. Li (2016). Component-wise gradient boosting and false discovery control in survival analysis with high-dimensional covariates. Bioinformatics 32(1), 50-57.

Hernández-Lobato, D., G. Martínez-Muñoz, and A. Suárez (2011). Empirical analysis and evaluation of approximate techniques for pruning regression bagging ensembles. Neurocomputing 74, 2250-2264.

Hofner, B., L. Boccuto, and M. Göker (2015). Controlling false discoveries in high-dimensional situations: boosting with stability selection. BMC Bioinformatics 16, 144.

Kuncheva, L. I. (2014). Combining Pattern Classifiers: Methods and Algorithms. John Wiley & Sons, Hoboken.

Lichman, M. (2013). UCI Machine Learning Repository. Irvine, University of California, School of Information and Computer Science, http://archive.ics.uci.edu/ml.

Lin, B. Q., and Z. Pang (2014). Tilted correlation screening learning in high-dimensional data analysis. Journal of Computational and Graphical Statistics 23(2), 478-496.

Lin, B. Q., Q. H. Wang, J. Zhang, and Z. Pang (2016). Stable prediction in high-dimensional linear models. Statistics and Computing, in press, doi: 10.1007/s11222-016-9494-6.

Liu, C., T. Shi, and Y. Lee (2014). Two tales of variable selection for high dimensional regression: screening and model building. Statistical Analysis and Data Mining 7(2), 140-159.

Martínez-Muñoz, G., D. Hernández-Lobato, and A. Suárez (2009). An analysis of ensemble pruning techniques based on ordered aggregation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2), 245-259.

Meinshausen, N., and P. Bühlmann (2010). Stability selection (with discussion). Journal of the Royal Statistical Society (Series B) 72(4), 417-473.

Mendes-Moreira, J., C. Soares, A. M. Jorge, and J. F. de Sousa (2012). Ensemble approaches for regression: a survey. ACM Computing Surveys 45(1), Article 10, 40 pages.

Miller, A. (2002). Subset Selection in Regression (2nd ed.). Chapman & Hall/CRC Press, New York.

Narisetty, N. N., and X. M. He (2014).
Bayesian variable selection with shrinking and diffusing priors. The Annals of Statistics 42(2), 789-817.

Nan, Y., and Y. H. Yang (2014). Variable selection diagnostics measures for high dimensional regression. Journal of Computational and Graphical Statistics 23(3), 636-656.

Qian, J., T. Hastie, J. Friedman, R. Tibshirani, and N. Simon (2013). Glmnet for Matlab. http://www.stanford.edu/~hastie/glmnet_matlab/.

Roberts, S., and G. Nowak (2014). Stabilizing the lasso against cross-validation variability. Computational Statistics and Data Analysis 70, 198-211.

Sauerbrei, W., A. Buchholz, A. Boulesteix, and H. Binder (2015). On stability issues in deriving multivariable regression models. Biometrical Journal 57(4), 531-555.

Schapire, R. E., and Y. Freund (2012). Boosting: Foundations and Algorithms. MIT Press, Cambridge.

Shah, R. D., and R. J. Samworth (2013). Variable selection with error control: another look at stability selection. Journal of the Royal Statistical Society (Series B) 75(1), 55-80.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society (Series B) 58(1), 267-288.

Wang, S. J., B. Nan, S. Rosset, and J. Zhu (2011). Random lasso. The Annals of Applied Statistics 5(1), 468-485.

Xin, L., and M. Zhu (2012). Stochastic stepwise ensembles for variable selection. Journal of Computational and Graphical Statistics 21(2), 275-294.

Zhang, C. X., J. S. Zhang, and Q. Y. Yin (2017). A ranking-based strategy to prune variable selection ensembles. Knowledge-Based Systems 125, 13-25.

Zhou, Z. H. (2012). Ensemble Methods: Foundations and Algorithms. Taylor & Francis, Boca Raton.

Zhou, Z. H., J. X. Wu, and W. Tang (2002). Ensembling neural networks: many could be better than all. Artificial Intelligence 137(1-2), 239-263.

Zhu, M., and H. A. Chipman (2006). Darwinian evolution in parallel universes: a parallel genetic algorithm for variable selection. Technometrics 48(4), 491-502.

Zhu, M., and G. Z. Fan (2011). Variable selection by ensembles for the Cox model. Journal of Statistical Computation and Simulation 81(12), 1983-1992.

Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418-1429.
http://arxiv.org/abs/1704.08265v1
{ "authors": [ "Chunxia Zhang", "Yilei Wu", "Mu Zhu" ], "categories": [ "stat.ML", "cs.LG", "62J07, 62F07" ], "primary_category": "stat.ML", "published": "20170426180110", "title": "Pruning variable selection ensembles" }
When considering binary strings, it's natural to wonder how many distinct subsequences might exist in a given string. Given that there is an existing algorithm which provides a straightforward way to compute the number of distinct subsequences in a fixed string, we might next be interested in the expected number of distinct subsequences in random strings. This expected value is already known for random binary strings where each letter in the string is, independently, equally likely to be a 1 or a 0. We generalize this result to random strings where the letter 1 appears independently with probability α ∈ [0,1]. Also, we make some progress in the case of random strings from an arbitrary alphabet as well as when the string is generated by a two-state Markov chain.

§ INTRODUCTION

This paper uses the definitions for string and subsequence provided in <cit.>. A binary string of length n is some A=a_1a_2...a_n ∈ {0,1}^n, and another string B of length m ≤ n is a subsequence of A if there exist indices i_1 < i_2 < ... < i_m such that B=a_i_1a_i_2...a_i_m. We use the notation B ≼ A when B is a subsequence of A.

Now, suppose T_n is a fixed binary string of length n. Then, define t_i to be the ith letter of T_n, T_i to be the string formed by truncating T_n after the ith letter, and ϕ(T_n) to be the number of distinct subsequences in T_n. If we let S_n be a random binary string of length n, it was shown in <cit.> that when Pr[s_i=1]=.5 (that is, when each independently generated letter in S_n is equally likely to be a 0 or a 1), then E[ϕ(S_n)] ∼ k(3/2)^n for a constant k. Later, <cit.> improved this result by finding that E[ϕ(S_n)]=2(3/2)^n-1 under the same conditions.

In Section 2, we generalize the second result and find a formula for the expected value of ϕ(S_n) when Pr[s_i=1]=α ∈ (0,1). Since the cases when Pr[s_i=1] is 0 or 1 are trivial, we will then have E[ϕ(S_n)] when α ∈ [0,1]. Our method for finding this formula is very different from that used by <cit.>. We will define a new property of a string — the number of new distinct subsequences — and then use these numbers as the entries in a binary tree. Our formula is then given as a weighted sum of the entries in this tree.

In Section 3, we produce recursions for the expected number of subsequences in two more complicated cases, namely when (i) the string of letters is independently generated in a non-uniform fashion from an arbitrary alphabet; and (ii) the binary string is Markov-dependent. We also show, using subadditivity arguments, that the expected number of distinct subsequences in the first case above is asymptotic to c^n for some c, in the sense that

(E[ϕ(S_n)])^1/n → c (n→∞).

Finally, in Section 4 we indicate some important directions for further investigation.

§ MAIN RESULT

In this section, we let Pr[s_i=1]=α ∈ [0,1]. We first consider the trivial cases, when α ∈ {0,1}.

Lemma 2.1. If α ∈ {0,1}, then ϕ(S_n)=n for any string S_n.

Proof: If α ∈ {0,1}, then either S_n=11...1 or S_n=00...0. In either case, there is exactly one distinct subsequence of each length between 1 and n, so ϕ(S_n)=n.

With those cases dispensed with, we will assume for the remainder of this section that α ∈ (0,1).
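For concreteness, one straightforward way to compute ϕ(T_n) for a fixed string is the standard dynamic program sketched below in Python (our illustration; we do not claim it is the specific algorithm cited above). It doubles the subsequence count at each new letter and subtracts the subsequences double-counted because of that letter's previous occurrence; the empty subsequence is excluded at the end to match ϕ.

def phi(T):
    n = len(T)
    dp = [0] * (n + 1)
    dp[0] = 1                      # count includes the empty subsequence
    last = {}                      # most recent position of each letter
    for i, c in enumerate(T, start=1):
        dp[i] = 2 * dp[i - 1]      # every old subsequence, with or without t_i
        if c in last:
            dp[i] -= dp[last[c] - 1]   # remove the double-counted ones
        last[c] = i
    return dp[n] - 1               # exclude the empty subsequence

print(phi("0000"), phi("10"))      # 4 and 3: phi = n for a constant string,
                                   # and the subsequences of 10 are 1, 0, 10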
Before continuing, we need to establish three new definitions. We recall that S_n is a random variable that equals a fixed string T_n according to a specified probability distribution.

Definition 2.2. The weight of T_n, a fixed binary string of length n, is ϕ(T_n) · Pr[S_n=T_n]. That is, the weight of T_n is the number of distinct subsequences in T_n times the probability that a random length-n string is T_n.

Definition 2.3. A new subsequence of T_n is a subsequence contained in T_n but not contained in T_n-1. We will let ν(T_n) be the number of distinct new subsequences in a string T_n.

Definition 2.4. The new weight of T_n is the product ν(T_n) · Pr[S_n=T_n].

It will be useful to be able to compute the number of new distinct subsequences in a string; to do so we modify a result from <cit.>.

Lemma 2.5. Given T_n, let l be the greatest number less than n such that t_l=t_n, and if no such number exists, let l=0. Then,

ν(T_n) = n, if l = 0;
ν(T_n) = ∑_i=l^n-1 ν(T_i), if l > 0.

Proof: Fix T_n and suppose l=0. Then, without loss of generality, assume that T_n consists of n-1 0s followed by a 1. The new subsequences contained in T_n are exactly those which contain a 1. There are n such subsequences: 1, 01, 001, ..., up to the string of n-1 0s followed by a 1, so ν(T_n)=n.

Now suppose l>0 and U_k is a new subsequence of length k in T_n. We assume that the last letter in U_k is t_n because otherwise U_k is clearly not new. If U_k-1 ≼ T_l-1, then we could use t_l to complete U_k, so U_k ≼ T_l and U_k would not be new in T_n. Conversely, if U_k-1 ⋠ T_l-1, then U_k cannot be completed by t_l, nor can it be completed by any other t_i before t_n, so U_k is a new subsequence in T_n. Therefore, there is one distinct new subsequence in T_n for every distinct new subsequence found in some T_i with l ≤ i ≤ n-1. Summing up all of those distinct new subsequences gives the number of distinct new subsequences in T_n.

Now, let B be a binary tree whose entries are binary strings, and let B_n,m be the mth entry in the nth row of B. The root of B is the empty string, each left child is its parent with a 1 appended, and each right child is its parent with a 0 appended. If we call the first row “row 0", then row n of this tree contains all length-n binary strings. Rows 0-3 of this tree are shown below:

                                  ()
                  1                               0
          11             10              01              00
       111   110      101   100      011   010       001   000

Next, we form the binary tree B' with B'_n,m denoting the mth entry in the nth row of B'. Then, we define each B'_n,m to be ν(B_n,m), which we can calculate using Lemma 2.5. Finally, for each child B'_n,m we assign the edge between it and its parent B'_n-1,⌈m/2⌉ a weight equal to Pr[S_n=B_n,m | S_n-1=B_n-1,⌈m/2⌉]. Thus we give each edge going to a left child the weight α and each edge going to a right child the weight 1-α.
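The recursion in Lemma 2.5 can be computed mechanically; the following Python sketch (ours, exponential without memoization but fine for small examples) does so and reproduces the entries of row 3 of B' displayed below.

def nu(T):
    # nu(T): number of new distinct subsequences created by T's last letter,
    # via the recursion of Lemma 2.5 (binary strings).
    n = len(T)
    l = 0
    for i in range(n - 1, 0, -1):          # greatest l < n with t_l = t_n
        if T[i - 1] == T[-1]:
            l = i
            break
    if l == 0:
        return n
    return sum(nu(T[:i]) for i in range(l, n))

row3 = ["111", "110", "101", "100", "011", "010", "001", "000"]
print([nu(T) for T in row3])               # -> [1, 3, 3, 2, 2, 3, 3, 1]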
Rows 0-3 of B' are shown below, with each left edge carrying weight α and each right edge carrying weight 1-α:

                                  0
                  1                               1
           1              2               2               1
        1     3        3     2         2     3         3     1

It is clear that the value of the root should be 0 and the value of its two children should be 1. Moreover, we can apply Lemma 2.5 to find that each row begins and ends with a 1. To characterize the remaining entries of B' we need two lemmas which together will give us that the portion of each row between the initial and final 1s consists of pairs of identical numbers. The numbers in every other pair are the sums of the parents of those elements, and the numbers in the remaining pairs have the same value as their parents. For instance, the first pair of the third row is a pair of 3s, and the parents of the elements of this pair are a 1 and a 2, which sum to 3. Then, the second pair of the third row is a pair of 2s, and the parents of the elements of this pair are also 2s.

Lemma 2.6. Suppose n ≥ 2 and m ≡ 2 (mod 4). Then, B'_n,m = B'_n,m+1 = B'_n-1,m/2 + B'_n-1,m/2+1.

Proof: Given these restrictions on n and m, both B_n,m and B_n,m+1 are grandchildren of the same string T_n-2. Then, B_n,m=T_n-210 and B_n,m+1=T_n-201. First, consider the case when T_n-2 consists of only 0s or only 1s, and without loss of generality, assume it consists of only 0s. Using Lemma 2.5 we get the following equalities:

ν(T_n-210) = ν(T_n-2) + ν(T_n-21) = 1 + (n-1) = n;
ν(T_n-201) = n;
ν(T_n-20) + ν(T_n-21) = 1 + (n-1) = n,

so the lemma is proved in this case. Otherwise, there exist k,l > 0 such that k is the greatest integer with t_k=0 and l is the greatest integer with t_l=1. Then, the following hold, again by Lemma 2.5:

ν(T_n-20) = ∑_i=k^n-2 ν(T_i);
ν(T_n-21) = ∑_i=l^n-2 ν(T_i);
ν(T_n-201) = ∑_i=l^n-2 ν(T_i) + ν(T_n-20) = ν(T_n-21) + ν(T_n-20);
ν(T_n-210) = ∑_i=k^n-2 ν(T_i) + ν(T_n-21) = ν(T_n-20) + ν(T_n-21).

Since T_n-21 = B_n-1,m/2 and T_n-20 = B_n-1,m/2+1, this completes the proof.

Lemma 2.7. Suppose n ≥ 2, m ≡ 0 (mod 4), and m ≠ 2^n. Then B'_n,m = B'_n,m+1 = B'_n-1,m/2 = B'_n-1,m/2+1.

Proof: We will proceed by induction on n. As a base case, note that when n=2, the hypotheses of Lemma 2.7 are never satisfied, so it is vacuously true. Now, suppose Lemma 2.7 holds for n<p, and consider B'_p,m where m satisfies the hypotheses of the lemma. Since m ≡ 0 (mod 4), it follows that B_p,m = B_p-1,m/2 0, and B_p-1,m/2 also ends in 0. Then, m+1 ≡ 1 (mod 4), so B_p,m+1 = B_p-1,m/2+1 1, and B_p-1,m/2+1 also ends in 1. Therefore, by Lemma 2.5, B'_p,m = B'_p-1,m/2 and B'_p,m+1 = B'_p-1,m/2+1. Now, B'_p-1,m/2 and B'_p-1,m/2+1 are consecutive elements and m/2 is even. If m/2 ≡ 2 (mod 4), then B'_p-1,m/2 = B'_p-1,m/2+1 by Lemma 2.6, and if m/2 ≡ 0 (mod 4), then B'_p-1,m/2 = B'_p-1,m/2+1 by the induction hypothesis. In either case, the proof is finished by induction.

Now, the original question about distinct subsequences can be reinterpreted as a question about the tree B'.
In particular, if we find the path from each node B'_n,m to the root and call the product of the weights of all the edges on that path p_n,m, we find that p_n,m = Pr[S_n=B_n,m], and that

E[ϕ(S_n)] = E[∑_i=1^n ν(S_i)] = ∑_i=1^n E[ν(S_i)] = ∑_i=1^n ∑_j=1^2^i B'_i,j · p_i,j.

The first equality comes from the fact that each subsequence in S_n is new exactly once, the second equality holds by linearity of expectation, and the third equality rewrites E[ν(S_i)] using the definition of expectation.

Now, recall the definition of new weight and note that the combined new weight of all the strings in row i of B is given by ∑_j=1^2^i B'_i,j · p_i,j. With i fixed, we define the quantity a_i to be the total new weight of left children, i.e., strings ending in 1, in row i of B, and b_i to be the total new weight of right children, i.e., strings ending in 0, in row i of B. We now find two simultaneous recurrence relations that describe a_i and b_i.

Lemma 2.8. The sequences {a_i} and {b_i} satisfy the following recurrence relations:

a_i = a_i-1 + α b_i-1;
b_i = b_i-1 + (1-α) a_i-1.

Proof: Consider the left children in row i of B'. First consider the left children whose parents are also left children. Each of these has the same value as its parent by Lemma 2.7, so the combined new weight of all of them is α · a_i-1. Now consider the left children whose parents are right children. By Lemma 2.6, each of these has a value which is the sum of the values of its parent, a right child, and this parent's sibling, a left child. In this way, each right child in row i-1 contributes its new weight times α to its left child in row i. Meanwhile, each left child in row i-1 contributes to the left child of its sibling its new weight times 1-α, since its path to the root contains one fewer edge labelled 1-α than the path from the row-i left child to the root, but both paths contain the same number of edges labelled α. Thus we get:

a_i = α a_i-1 + α b_i-1 + (1-α) a_i-1 = a_i-1 + α b_i-1.

The second recurrence relation follows by the same argument. The right children of row i-1 contribute their new weight times 1-α to their own right children, and also contribute their new weight times α to the right children of their siblings. Meanwhile, the left children of row i-1 contribute their new weight times 1-α to their right children, and so we get:

b_i = b_i-1 + (1-α) a_i-1,

which completes the proof.

Now, we solve for a_i as follows:

α b_i-1 = a_i - a_i-1;
α(1-α) a_i-1 = α b_i - α b_i-1;
α(1-α) a_i-1 = a_i+1 - a_i - (a_i - a_i-1);
a_i+1 = 2a_i - (1-α(1-α)) a_i-1.

The quadratic x^2 = 2x - (1-α(1-α)) has two real solutions: (1+√(α(1-α))) and (1-√(α(1-α))), so a_i = c_1(1+√(α(1-α)))^i + c_2(1-√(α(1-α)))^i, where c_1 and c_2 are constants. Inspecting the tree B' gives that a_1=α and a_2=2α-α^2, and it is straightforward to verify that the following is an explicit formula of the correct form satisfying the initial conditions:

a_i = 1/2 ((α-√(α(1-α)))(1-√(α(1-α)))^i-1 + (α+√(α(1-α)))(1+√(α(1-α)))^i-1).

To find an explicit formula for b_i, we simply note that if we substitute a_i for b_i, b_i for a_i, and α for 1-α, we obtain the recurrence that we just solved. Thus, making the reverse substitutions, we get that

b_i = 1/2 ((1-α-√(α(1-α)))(1-√(α(1-α)))^i-1 + (1-α+√(α(1-α)))(1+√(α(1-α)))^i-1),

and combining these two expressions gives the total new weight in row i as

a_i + b_i = 1/2 ((1-2√(α(1-α)))(1-√(α(1-α)))^i-1 + (1+2√(α(1-α)))(1+√(α(1-α)))^i-1).

As suggested by (<ref>), the final step will be to find the sum ∑_i=1^n (a_i + b_i).
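These closed forms are easy to check by brute force for small n, reusing the nu sketch from above; the following Python snippet (ours) verifies both a_i and the combined expression at α = 0.3.

from itertools import product
from math import sqrt

def row_weights(n, alpha):
    # total new weight (a_n, b_n) of length-n strings ending in 1 and in 0
    a = b = 0.0
    for bits in product("10", repeat=n):
        T = "".join(bits)
        pr = alpha ** T.count("1") * (1 - alpha) ** T.count("0")
        if T[-1] == "1":
            a += nu(T) * pr
        else:
            b += nu(T) * pr
    return a, b

alpha = 0.3
s = sqrt(alpha * (1 - alpha))
for n in range(1, 7):
    a, b = row_weights(n, alpha)
    a_closed = 0.5 * ((alpha - s) * (1 - s) ** (n - 1)
                      + (alpha + s) * (1 + s) ** (n - 1))
    ab_closed = 0.5 * ((1 - 2 * s) * (1 - s) ** (n - 1)
                       + (1 + 2 * s) * (1 + s) ** (n - 1))
    assert abs(a - a_closed) < 1e-9 and abs(a + b - ab_closed) < 1e-9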
This follows from the geometric sum formula, and we get

∑_{i=1}^n (a_i+b_i) = ((1-2√(α(1-α)))(1-(1-√(α(1-α)))^n)+(1+2√(α(1-α)))((1+√(α(1-α)))^n-1))/(2√(α(1-α))).

We have now derived the main theorem of this section.

Suppose Pr[s_i=1]=α ∈ [0,1] for all 1 ≤ i ≤ n. Then we have

E[ϕ(S_n)] = n if α = 0,1, and
E[ϕ(S_n)] = ((1-2√(α(1-α)))(1-(1-√(α(1-α)))^n)+(1+2√(α(1-α)))((1+√(α(1-α)))^n-1))/(2√(α(1-α))) if α ≠ 0,1.

Since this formula is rather unwieldy for α ∈ (0,1), we also give the following asymptotic result.

Suppose Pr[s_i=1]=α ∈ (0,1) for all 1 ≤ i ≤ n. Then there exists a constant k such that

E[ϕ(S_n)] ∼ k(1+√(α(1-α)))^n.

We take the limit of the quantity in Theorem 2.9. Since 1-√(α(1-α)) < 1, it follows that lim_{n→∞}(1-√(α(1-α)))^n = 0. Therefore

lim_{n→∞} (E[ϕ(S_n)]/(1+√(α(1-α)))^n) · (2√(α(1-α))/(1+2√(α(1-α)))) = 1,

as asserted.

§ VARIATIONS: LARGER ALPHABETS, MARKOV CHAINS, AND GROWTH RATES

In the previous section, we looked at strings on a binary alphabet generated by a random process in which the probability that any given element was 1 was fixed at α. In this section, we generalize this in two ways. First, we consider strings on the alphabet {1,2,...,d}=[d] where each letter is independently j with probability α_j for all j ∈ [d]. After that, we return to binary strings, but this time they will be generated according to a two-state Markov chain; in particular, if a letter follows a 1, then it is 1 with probability α, but if it follows a 0, then it is 1 with probability β. In both these cases, we will find recurrences for the expected new weight contributed by the nth letter, which will lead to explicit matrix equations for that expected new weight. Unfortunately, we will not be able to find a closed-form formula for the total expected number of subsequences like we did in Section <ref>.

§.§ A Larger Alphabet

In this section, we consider strings on the alphabet [d]. In Section <ref>, T_n was a fixed length-n string on the alphabet {0,1}; here we let T_n be a fixed length-n string on the alphabet [d]. Similarly, S_n is now a random length-n string on the alphabet [d] where, independently, Pr[s_i=j] = α_j for all i ∈ [n], j ∈ [d] (note that ∑_{j=1}^d α_j = 1). We begin by stating a generalization of Lemma <ref>. The first paragraph of the proof is the same as in the proof of the original lemma, but is included for completeness.

Given T_n, let l be the greatest number less than n such that t_l=t_n, and if no such number exists, let l=0. Then,

ν(T_n) = ∑_{i=1}^{n-1} ν(T_i) + 1 if l=0, and ν(T_n) = ∑_{i=l}^{n-1} ν(T_i) if l>0.

Suppose l>0 and U_k is a subsequence of length k in T_n. We assume that the last letter in U_k is t_n because otherwise U_k is clearly not new. If U_{k-1} ≼ T_{l-1}, then we could use t_l to complete U_k, so U_k ≼ T_l and U_k would not be new in T_n. Conversely, if U_{k-1} ⋠ T_{l-1}, then U_k cannot be completed by t_l, nor can it be completed by any other t_i before t_n, so U_k is a new subsequence in T_n. Therefore, there is one distinct new subsequence in T_n for every distinct new subsequence found in some T_i with l ≤ i ≤ n-1. Counting up all of those distinct new subsequences gives the number of distinct new subsequences in T_n.

Now suppose l=0 and U_k is a subsequence of length k in T_n. As before, we can assume that the last letter of U_k is t_n, but this time every subsequence of length k ending in t_n is new in T_n. Therefore, there is nearly a bijection between the subsequences which appeared by T_{n-1} and the ones which are new in T_n, given by mapping a subsequence appearing by T_{n-1} to itself with t_n appended.
This is not exactly a bijection because the single-element sequence t_n is a new subsequence in T_n even though the empty string is not counted as a subsequence of T_{n-1}. Therefore, the number of new subsequences is the sum of the number of subsequences which have appeared previously, plus one.

Just like in the previous section, we are interested in the expected new weight of S_n. As before, we find it useful to refer to a tree B, but in this case it is a d-ary tree where the root is labelled with the empty string, and the jth child of a node is labelled with the parent's label with j appended. The following figure shows rows 0-2 and part of row 3 of B when d=3.

[Figure: rows 0-2 and part of row 3 of B for d=3. The children of the root () are 3, 2, 1; the children of 3 are 33, 32, 31, with 33 having children 333, 332, 331 and 32 having children 323, 322, 321; the children of 2 are 23, 22, 21; the children of 1 are 13, 12, 11.]

Again like in the previous section, we will also use the tree B' where each node T_n of B has been replaced by ν(T_n).

[Figure: the corresponding rows of B'. The root has value 0; row 1 reads 1, 1, 1; row 2 reads 1, 2, 2 under the first child, 2, 1, 2 under the second, and 2, 2, 1 under the third; the displayed part of row 3 reads 1, 3, 3 under the node for 33 and 3, 2, 4 under the node for 32.]

We now need to generalize Lemmas <ref> and <ref>. A (j,k)-grandchild is a string whose final two letters are j and k; equivalently, a (j,k)-grandchild is a string which is formed by appending jk to its grandparent.

Suppose n ≥ 2 and T_n is a (j,k)-grandchild with j ≠ k whose grandparent is T_{n-2}; then ν(T_n)=ν(T_{n-2}j)+ν(T_{n-2}k).

As in Lemma <ref>, let l_j and l_k be the greatest numbers less than or equal to n-2 with t_{l_j}=j and t_{l_k}=k (or 0 if no such t exist). Suppose that l_k ≠ 0. By Lemma <ref>, we have that

ν(T_n) = ∑_{i=l_k}^{n-1} ν(T_i) = ν(T_{n-2}j) + ∑_{i=l_k}^{n-2} ν(T_i) = ν(T_{n-2}j) + ν(T_{n-2}k).

Otherwise, l_k = 0 and by Lemma 3.1, we have that

ν(T_n) = ∑_{i=1}^{n-1} ν(T_i) + 1 = ν(T_{n-2}j) + ∑_{i=1}^{n-2} ν(T_i) + 1 = ν(T_{n-2}j) + ν(T_{n-2}k).

This completes the proof.

Suppose n ≥ 2 and T_n is a (j,j)-grandchild whose grandparent is T_{n-2}. Then, ν(T_n) = ν(T_{n-2}j).

Since T_n=T_{n-2}jj and T_{n-1}=T_{n-2}j, the statement follows immediately from Lemma <ref>.

We want to be able to calculate the total new weight in each row of the tree B'. As in Section <ref> it will be convenient to break up that new weight by the final letter of the string it comes from. For each n ∈ ℤ^+ and j ∈ [d], let a_{j,n} be the total new weight of the length-n strings ending in j. Using Lemmas <ref> and <ref> we find that a typical element T_{n-2}j contributes its new weight times α_k to T_{n-2}jk (for all k ∈ [d]) and also contributes its new weight times α_k to T_{n-2}kj for all k ∈ [d]∖{j}. Therefore, we obtain the recurrences

a_{j,n} = ∑_{k=1}^d α_j a_{k,n-1} + ∑_{k≠j} α_k a_{j,n-1} = a_{j,n-1} + ∑_{k≠j} α_j a_{k,n-1}.

Using these d recursions and the fact that a_{j,1}=α_j for all j, we find that the vector [a_{1,n},a_{2,n},...,a_{d,n}]^T is given by the matrix equation:

[ a_{1,n}; a_{2,n}; a_{3,n}; ⋮; a_{d,n} ] = [ 1 α_1 α_1 … α_1; α_2 1 α_2 … α_2; α_3 α_3 1 … α_3; ⋮ ⋮ ⋮ ⋱ ⋮; α_d α_d α_d … 1 ]^{n-1} [ α_1; α_2; α_3; ⋮; α_d ],

and therefore the total new weight in row n is

[ 1 1 1 … 1 ] [ 1 α_1 α_1 … α_1; α_2 1 α_2 … α_2; α_3 α_3 1 … α_3; ⋮ ⋮ ⋮ ⋱ ⋮; α_d α_d α_d … 1 ]^{n-1} [ α_1; α_2; α_3; ⋮; α_d ].

Therefore, we have a way to compute the expected value of ϕ(S_n) in this general alphabet case, as expressed in the following theorem.

Let S_n be a random length-n string on the alphabet [d] where Pr[s_i=j]=α_j for all i,j. Then,

E[ϕ(S_n)] = [ 1 1 1 … 1 ] (∑_{i=0}^{n-1} [ 1 α_1 α_1 … α_1; α_2 1 α_2 … α_2; α_3 α_3 1 … α_3; ⋮ ⋮ ⋮ ⋱ ⋮; α_d α_d α_d … 1 ]^i) [ α_1; α_2; α_3; ⋮; α_d ].
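The matrix formula is easy to test against direct enumeration for small n; the following sketch (our own, with illustrative parameter values) does this for d = 3:

```python
import numpy as np
from itertools import product

def phi(s):
    # number of distinct non-empty subsequences (classic DP)
    dp, last = 1, {}
    for ch in s:
        dp, last[ch] = 2 * dp - last.get(ch, 0), dp
    return dp - 1

d, n = 3, 6
alpha = np.array([0.5, 0.3, 0.2])

# matrix of Theorem 3.5: row j holds alpha_j off the diagonal, 1 on it
M = np.tile(alpha[:, None], (1, d))
np.fill_diagonal(M, 1.0)

predicted = np.ones(d) @ sum(np.linalg.matrix_power(M, i) for i in range(n)) @ alpha

# exact expectation: enumerate all d^n strings over the alphabet {0,1,2}
exact = sum(phi(s) * np.prod([alpha[int(c)] for c in s])
            for s in (''.join(t) for t in product('012', repeat=n)))
print(predicted, exact)   # the two numbers agree up to floating-point error
```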
§.§ Strings from Markov Processes

We now return to binary strings, but this time the probability of seeing a particular letter will depend on the letter before (we assume that strings are generated from left to right). In the random string S_n, Pr[s_i=1 | s_{i-1}=1] = α and Pr[s_i=1 | s_{i-1}=0] = β. Of course, we will need some other rule for Pr[s_1=1]; one logical choice is to take Pr[s_1=1]=γ, where γ is the steady-state probability of a 1 occurring, which in this case gives γ = β/(1+β-α). Unlike in the previous subsection, we don't need any new lemmas to discuss this case. Instead, the key will be to categorize the new weight in each row by the last two letters of the strings which contribute it, rather than just by the last single letter. To that end, let a_n be the total new weight of n-letter strings ending in 11, let b_n be the total new weight of n-letter strings ending in 10, let c_n be the total new weight of n-letter strings ending in 01, and let d_n be the total new weight of n-letter strings ending in 00. We apply Lemmas <ref> and <ref> in much the same manner that we did in Section <ref> to find the recurrences:

a_{n+1} = α(a_n+c_n);
b_{n+1} = (1-α)(a_n+c_n)+α b_n+(β(1-α)/(1-β))d_n;
c_{n+1} = β(b_n+d_n)+(1-β)c_n+((1-α)β/α)a_n;
d_{n+1} = (1-β)(b_n+d_n).

Again, these recurrences lead to a matrix equation

[ a_n; b_n; c_n; d_n ] = [ α 0 α 0; 1-α α 1-α β(1-α)/(1-β); (1-α)β/α β 1-β β; 0 1-β 0 1-β ]^{n-1} [ a_1; b_1; c_1; d_1 ],

and

E[ϕ(S_n)] = [ 1 1 1 1 ] (∑_{i=0}^{n-1} [ α 0 α 0; 1-α α 1-α β(1-α)/(1-β); (1-α)β/α β 1-β β; 0 1-β 0 1-β ]^i) [ a_1; b_1; c_1; d_1 ].

While we could define Pr[s_1=1]=γ (recalling that γ = β/(1+β-α)), it will make our formula work out more nicely if we pretend that there exists a letter s_0 which does not add to any subsequences but which determines Pr[s_1=1]. If we take Pr[s_0=1]=γ, then we have a_1=γα, b_1 = γ(1-α), c_1 = (1-γ)β, d_1 = (1-γ)(1-β), and so the definition of the formula is complete.

§.§ Exponential Growth via Fekete's Lemma

In this subsection, we exhibit the fact that in the case of general alphabets, the expected number of distinct subsequences is "around" c^n, mirroring the result in Corollary 2.10. This fact is hardly surprising, but raises other questions, namely as to whether the "true" numbers contain, additionally, polynomial factors as do several Stanley-Wilf limits in the theory of pattern avoidance (note that there are no polynomial factors in Corollary 2.10). Also, in general the existence of limits is not automatic, as seen by the following example: Assume that n balls are independently thrown into an infinite array of boxes so that box j is hit with probability 1/2^j for j=1,2,…. Let π_n be the probability that the largest occupied box has a single ball in it. Then, as seen in <cit.>, lim_{n→∞} π_n does not exist, and limsup_{n→∞} π_n and liminf_{n→∞} π_n differ in the fourth decimal place! Such behavior does not however occur in our context, as we prove next.

Let s_1,s_2,… be a sequence of independent and identically distributed random variables with Pr(s_1=j)=α_j, j=1,2,…,d, and ∑_j α_j=1. Set α=(α_1,…,α_d). Let ϕ(S_n) and ϕ(S_{n+1,n+m}) be the number of distinct subsequences in S_n=(s_1,…,s_n) and (s_{n+1},…,s_{n+m}). Let ψ(n)=E(ϕ(S_n)). Then there exists c=c_{d,α} ≥ 1 such that

ψ(n)^{1/n} → c as n→∞,

where c=1 if and only if max_j α_j=1.
As in Arratia's paper <cit.> on the existence of Stanley-Wilf limits, we employ subadditivity arguments and Fekete's lemma. Assume without loss of generality that m ≤ n. Let η be a distinct subsequence of S_{m+n}, and consider the first lexicographic occurrence of η, say η_f. Then η_f|_{S_n}=η_{f_1} and η_f|_{(s_{n+1},…,s_{n+m})}=η_{f_2} are a pair of subsequences of (s_1,…,s_n) and (s_{n+1},…,s_{n+m}). Moreover, the map

η_f ⟶ (η_{f_1}, η_{f_2})

is one-to-one (note that one of the components η_{f_1}, η_{f_2} may be empty). Thus

ϕ(S_{n+m}) ≤ ϕ(S_n)·ϕ(S_{n+1,n+m}),

and thus, by independence, we get

ψ(n+m) ≤ ψ(n)ψ(n+1,n+m),

where ψ(n+1,n+m) := E(ϕ(S_{n+1,n+m})). Since the letters are identically distributed, ψ(n+1,n+m)=ψ(m), and we conclude that

ψ(n+m) ≤ ψ(n)ψ(m).

In other words, ξ(n)=log ψ(n) satisfies the subadditivity condition

ξ(n+m) ≤ ξ(n)+ξ(m),

and Fekete's lemma yields the conclusion that

ξ(n)/n → ℓ = inf_{n≥1} ξ(n)/n.

Clearly ℓ ∈ [0, log d]. We thus get

log ψ(n)/n = log[ψ(n)]^{1/n} → ℓ,

and so

ψ(n)^{1/n} → e^ℓ := c,

where c ∈ [1,d]. Clearly c=1 for any d if α_1=1. We need to show that this is the only case when this occurs. By Theorem 2.9, we know that c>1 if d=2 and max(α_1,α_2) ≠ 1. Using a monotonicity argument (for larger size alphabets, we replace all the letters 2,3,…,d by 2), it is easy to see that c>1 if d ≥ 3 and max_j(α_j)<1. This concludes the proof.

§ DISCUSSION AND OPEN QUESTIONS

In this section we list and briefly discuss some questions that seem to be quite non-trivial.

(a) One of the central questions in the Permutation Patterns community is that of packing patterns and words in larger ensembles; see, e.g., <cit.>. In a similar vein, we have the question of superpatterns, i.e., strings that contain all the patterns or words of a smaller size; see, e.g., <cit.>. A distinguished question in this area is the one posed by Alon, who conjectured (see <cit.>) that a random permutation on [n], n = (k^2/4)(1+o(1)), contains all the permutations of length k with probability asymptotic to 1 as n→∞. In the present context, a similar question might be: What is the largest k so that each element of {0,1}^k appears as a subsequence of a binary random string with high probability?

(b) The basic question studied in this paper appears to not have been considered in the context of permutations; i.e., one might ask: What is the expected number of distinct patterns present in a random permutation on [n]?

(c) Computation of the rates of exponential growth in Theorem 3.6 would, of course, be of interest, as would be, alternatively, a solution of the recurrences in Sections 3.1 and 3.2. Also, estimation of the width of the intervals of concentration of the number of distinct subsequences, around their expected values, would add significantly to our understanding of the situation.

(d) An intriguing question (which leads to a wide area for further investigation) is the following. In the baseline case of binary equiprobable letter generation, we have that E(ϕ(S_n)) ∼ 2(1.5)^n, which implies that the average number of occurrences of a subsequence is (1/2)·2^n/(1.5)^n = (1/2)(4/3)^n. Now a subsequence such as 1 occurs "just" around n/2 times, and the sequence 11…1 with n/2 ones occurs an average of \binom{n}{n/2}·(1/2)^{n/2} times, which simplifies, via Stirling's formula, to around (√2)^n, ignoring constants and polynomial factors. The same is true of any sequence of length n/2; it is, on average, over-represented. We might ask, however, what length sequences occur more-or-less an average number (1.33)^n of times. We can parametrize by setting k=xn and equating the expected number of occurrences of a k-long sequence to (1.33)^n. We seek, in other words, the solution to the equation

\binom{n}{xn}·(1/2)^{xn} = (4/3)^n.
Ignoring non-exponential terms and employing Stirling's approximation, the above reduces to

2^x x^x (1-x)^{1-x} = 0.75,

which, via Wolfram Alpha, yields the solutions x=.123… and x=.570…! In a similar fashion we see that the expected number of occurrences of a sequence of length (0.7729…)n or longer is smaller than one. Does this suggest that the solution to the Alon-like question stated in (a) above might be k=(0.7729…)(1+o(1))n?
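The numerology above is easy to reproduce without a computer-algebra system; the following root-finding sketch (our own; not part of the paper) recovers both solutions of the displayed equation as well as the threshold 0.7729…:

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# log form of 2^x * x^x * (1-x)^(1-x) = 0.75
f = lambda x: (x * math.log(2) + x * math.log(x)
               + (1 - x) * math.log(1 - x) - math.log(0.75))
print(bisect(f, 1e-9, 0.3), bisect(f, 0.3, 0.75))   # ~0.123..., ~0.570...

# length fraction where the expected number of occurrences drops to one:
# binom(n, xn) * 2^(-xn) = 1, i.e. the base-2 entropy satisfies H(x) = x
g = lambda x: x * math.log(x) + (1 - x) * math.log(1 - x) + x * math.log(2)
print(bisect(g, 0.5, 1 - 1e-9))                      # ~0.7729...
```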
http://arxiv.org/abs/1704.08661v3
{ "authors": [ "Yonah Biers-Ariel", "Anant Godbole", "Elizabeth Kelley" ], "categories": [ "math.CO", "05D40, 60C05" ], "primary_category": "math.CO", "published": "20170427171053", "title": "Expected Number of Distinct Subsequences in Randomly Generated Binary Strings" }
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM, 87501 USA

Dynamics of Voter Models on Simple and Complex Networks
S. Redner
Received: date / Revised version: date
=======================================================

§ INTRODUCTION

How do groups of people come to consensus? While it's hard to imagine a large group being able to agree on anything, there are some settings where unanimity is necessary—juries are one example. The voter model (VM) represents an idealization of this opinion evolution in which each individual, or agent, is influenced only by other members of the group; there is also no notion of a "right" or a "wrong" opinion, and there are no external influences, such as news media. In the VM, each agent, or voter, can assume one of two states (e.g., 0/1, normal/mutant, Democrat/Republican). One agent resides at each node of a lattice or an arbitrary network and updates its state at unit rate until a population of N agents necessarily reaches consensus. In detail, VM evolution is as follows (Fig. <ref>):

* Pick a random node (a voter).
* The voter adopts the state of a random neighbor.

An anthropomorphic interpretation of VM dynamics is that each agent has zero self-confidence and merely adopts a neighbor's state. We also discuss the related invasion process (IP), whose update rule is:

* Pick a random node (an invader).
* The invader exports its state to a random neighbor.

Pictorially, an invader dictatorially imposes its state on one of its neighbors; equivalently, an invader replicates and its offspring replaces a neighboring agent. While the differences between these two models appear superficially trivial, they are fundamental on complex networks. The two basic questions that we will discuss are: (i) What is the time to reach consensus? (ii) By what route is consensus achieved? We present our collaborative work on this subject with Vishal Sood and Tibor Antal <cit.>.

§ CONSERVATION LAWS

A crucial feature of VM and IP dynamics on arbitrary networks is that they satisfy conservation laws that determine their long-time behaviors. Let's develop the language to uncover these laws <cit.>. Define η as the state of the entire network, and η(x), which can equal 0 or 1, as the state of node x. In an update, the node state changes from 0 to 1 or vice versa. Let η_x denote the network state after the node at x changes state. We may succinctly write the transition probability that node x changes state as

P[η→η_x] = ∑_y (A_{xy}/(N𝒬)) [Φ(x,y)+Φ(y,x)],

where Φ(x,y) ≡ η(x)[1-η(y)], so that Φ(x,y)+Φ(y,x) equals 1 if the states at x and y differ and 0 if these states agree, and A_{xy} is the adjacency matrix. Although (<ref>) looks formidable, its meaning is simple: A_{xy}[Φ(x,y)+Φ(y,x)] is non-zero only when nodes x and y are connected and in opposite states, so that an update actually occurs. For the VM, the factor (N𝒬)^{-1} = (Nk_x)^{-1} accounts for first choosing any node x with probability 1/N, and then one of its neighbors y with probability 1/k_x, where k_x is the degree of node x. In the IP, (N𝒬)^{-1} = (Nk_y)^{-1}: first choose node y (a neighbor of x) with probability 1/N, and then choose x with probability 1/k_y.

The kernel for the evolution of the population is the average change in the state of a single node, ⟨Δη(x)⟩. This change equals the probability that η(x) changes from 0 to 1 minus the probability of a change from 1 to 0: ⟨Δη(x)⟩ = [1-2η(x)] P[η→η_x]. Summing this transition probability over all nodes gives the average change in ρ, the density of nodes in state 1:

⟨Δρ⟩ = ∑_x ⟨Δη(x)⟩ = ∑_{x,y} (A_{xy}/(N𝒬)) [η(y)-η(x)].
Since 𝒬 is constant on regular lattices, the summand on the right is antisymmetric in x and y, and ⟨Δρ⟩=0. Thus ⟨ρ⟩ is conserved. This innocuous-looking conservation law has far-reaching consequences. It immediately gives the fixation or exit probability, namely, the probability ℰ(ρ) that a finite system with an initial density ρ of 1s attains consensus of 1s. Because ρ is conserved and because the final state consists of either all 1s or all 0s, we have ρ = ℰ(ρ)·1 + [1-ℰ(ρ)]·0. Thus with no calculation the fixation probability equals ρ!

The power of this conservation law suggests looking for analogous laws for the VM and the IP on degree-heterogeneous networks. To obtain a conserved quantity, the factor 𝒬 in the denominator of the transition rate in (<ref>) must somehow be canceled out. This leads us to generalize the notion of density to the degree-weighted moments ω_m ≡ ∑_k k^m n_k ρ_k/μ_m (note that ω_0=ρ, and for simplicity we write ω_1 as ω), where ρ_k ≡ (1/N_k)∑'_x η(x) is the density of 1s on the subset of nodes of degree k, and the prime restricts the sum to nodes x of degree k. Here μ_m = ∑_k k^m n_k is the mth moment of the degree distribution of the network, with N_k (n_k) the number (density) of nodes of degree k. Repeating the calculation in Eq. (<ref>) for ⟨ω⟩ for the VM and for ⟨ω_{-1}⟩ for the IP, it is immediate to show that the conserved quantities are:

⟨ω⟩ (VM),   ⟨ω_{-1}⟩ (IP).

Since the initial value of the conserved quantity equals its value in the final unanimous state, the exit probability is

ℰ(ω)=ω (VM),   ℰ(ω_{-1})=ω_{-1} (IP).

An instructive example is the star graph, where N nodes are connected only to a single central hub. For the VM, if the hub is in state 1 and all other nodes are in state 0, then (<ref>) mandates that the probability of reaching 1 consensus is 1/2! That is, a single well-connected agent largely determines the final state. Conversely, in the IP, a mutant at the hub is very likely to be extinguished (fixation probability ∝ N^{-2}), while a mutant at the periphery is more likely to persist (fixation probability ∝ N^{-1}).

§ VOTER MODEL ON NETWORKS

§.§ Complete Graph

To understand the VM and the IP on complex networks, first consider the complete graph, where the VM and the IP are identical. In each update event, ρ→ρ±δρ, with δρ=1/N, corresponding to a voter undergoing the respective state changes 0→1 or 1→0. The probabilities for these respective events are:

𝐑(ρ) ≡ P[ρ→ρ+δρ] = (1-ρ)ρ,
𝐋(ρ) ≡ P[ρ→ρ-δρ] = ρ(1-ρ).

We term 𝐑 and 𝐋 the raising and lowering operators. We now use these transition probabilities to write the evolution equation for the average time T(ρ) to reach consensus when the fraction of agents initially in state 1 is ρ (the backward Kolmogorov equation <cit.>):

T(ρ) = δt + 𝐑(ρ)T(ρ+δρ) + 𝐋(ρ)T(ρ-δρ) + [1-𝐑(ρ)-𝐋(ρ)]T(ρ).

This simple-looking but deceptively powerful equation expresses the average consensus time as the time δt for a single update step plus the average time to reach consensus after this update. The three terms account for the transitions ρ→ρ+δρ, ρ→ρ-δρ, or ρ→ρ, respectively. Expanding Eq.
(<ref>) to second order in δρ gives

v(ρ) dT(ρ)/dρ + D(ρ) d²T(ρ)/dρ² = -1,

with drift velocity v(ρ) ∝ [𝐑(ρ)-𝐋(ρ)] and diffusivity D(ρ) ∝ [𝐑(ρ)+𝐋(ρ)]. On the complete graph, the drift term is zero and only the diffusion term, which quantifies the stochastic noise, remains. For the boundary conditions T(0)=T(1)=0 (the consensus time equals 0 if the initial state is consensus) the solution is

T(ρ) = -N[(1-ρ)ln(1-ρ) + ρ ln ρ].

For equal initial densities of each opinion, T(1/2)=N ln 2, while for a single mutant, T(1/N) ≈ ln N. The linear dependence on N represents the generic behavior for the consensus time of the VM on Euclidean lattices in spatial dimensions d ≥ 3.

§.§ Complete Bipartite Graph

An important clue to understanding how degree heterogeneity affects the dynamics is provided by studying the simplest network that contains nodes with different degrees—the complete bipartite graph K_{a,b}. In this graph, a+b nodes are partitioned into two subgraphs of size a and b (Fig. <ref>). Each node in subgraph 𝐚 links to all nodes in 𝐛, and vice versa. Thus 𝐚 nodes all have degree b, while 𝐛 nodes all have degree a. We can immediately determine the exit probability by using the conservation law from Eq. (<ref>), ⟨ω⟩ = (1/2)(ρ_𝐚+ρ_𝐛). For example, when one subgraph contains only 0s and the other only 1s, the probability to reach 1 consensus is 1/2, independent of the 𝐚 and 𝐛 subgraph sizes.

To determine the dynamical behavior, let N_𝐚,𝐛 be the respective number of voters in state 1 on each subgraph, with ρ_𝐚 = N_𝐚/a, ρ_𝐛 = N_𝐛/b the respective subgraph densities. In an update, these densities change according to the raising/lowering transition probabilities,

𝐑_𝐚 ≡ P[ρ_𝐚,ρ_𝐛 → ρ_𝐚^+,ρ_𝐛] = (a/(a+b)) ρ_𝐛(1-ρ_𝐚),
𝐋_𝐚 ≡ P[ρ_𝐚,ρ_𝐛 → ρ_𝐚^-,ρ_𝐛] = (a/(a+b)) ρ_𝐚(1-ρ_𝐛),

with ρ_𝐚^± = ρ_𝐚 ± a^{-1}. Here 𝐑_𝐚 is the probability to increase the number of 1s in subgraph 𝐚 by 1, for which we need to first choose an agent in state 0 in 𝐚 and then an agent in state 1 in 𝐛. Similarly, 𝐋_𝐚 gives the corresponding probability for reducing the number of 1s in 𝐚. Analogous definitions hold for 𝐑_𝐛 and 𝐋_𝐛 by interchanging a↔b. From these transition probabilities, the rate equations for the average subgraph densities are ρ̇_𝐚 = ρ_𝐛-ρ_𝐚 and ρ̇_𝐛 = ρ_𝐚-ρ_𝐛. Their solutions show that the subgraph densities are driven to the common value (1/2)[ρ_𝐚(0)+ρ_𝐛(0)] in a time of order 1 (Fig. <ref>(a)). Thus the total density of 1s, which evolves as ρ̇ = (aρ̇_𝐚+bρ̇_𝐛)/(a+b), becomes conserved in the long-time limit. Therefore, there is a two time-scale approach to consensus: initially, the effective bias quickly drives the system to equal subgraph densities ρ_𝐚=ρ_𝐛; subsequently, diffusive fluctuations drive the population to consensus. This dynamical picture also arises for general complex networks.

To determine the consensus time T(ρ_𝐚,ρ_𝐛), we exploit the feature that ρ_𝐚→ρ_𝐛 in the long-time limit. Then following exactly the same steps as those for the complete graph, the consensus time satisfies

ω(1-ω) ∂²T/∂ω² = -4ab/(a+b),

with solution, for T(0)=T(1)=0,

T(ω) = -(4ab/(a+b))[(1-ω)ln(1-ω) + ω ln ω].

The consensus time has the same form as in the complete graph [Eq. (<ref>)], but with an effective population N_eff = 4ab/(a+b). If both the 𝐚 and 𝐛 subgraphs have similar sizes, a,b ≈ N/2, then N_eff ≈ N. However, if, for example, a ∼ 𝒪(1) and b ≈ N, then T ∼ 𝒪(1)! One highly-connected node can promote consensus.
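A direct simulation makes the N_eff effect tangible. The sketch below (our own illustration; the parameter values are arbitrary) estimates the mean consensus time of the VM on K_{a,b} from random initial states, with one time unit corresponding to N = a+b update attempts, and compares it against N_eff ln 2:

```python
import math
import random

def consensus_time(a, b, trials=300):
    # Monte Carlo mean consensus time of the VM on K_{a,b};
    # one unit of time = N = a + b elementary update attempts
    total = 0.0
    for _ in range(trials):
        A = [random.randint(0, 1) for _ in range(a)]
        B = [random.randint(0, 1) for _ in range(b)]
        steps = 0
        while sum(A) + sum(B) not in (0, a + b):
            steps += 1
            if random.randrange(a + b) < a:      # the chosen voter sits in subgraph a
                A[random.randrange(a)] = B[random.randrange(b)]
            else:                                 # the chosen voter sits in subgraph b
                B[random.randrange(b)] = A[random.randrange(a)]
        total += steps / (a + b)
    return total / trials

for a, b in [(16, 16), (2, 30)]:
    n_eff = 4 * a * b / (a + b)
    print(a, b, round(consensus_time(a, b), 2), round(n_eff * math.log(2), 2))
```

With a = b the estimate tracks N ln 2, while the lopsided case stays close to the much smaller N_eff ln 2, in line with the T ∼ 𝒪(1) claim above.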
§.§ Complex Networks

Now we turn to VM and IP dynamics on complex networks. While the bookkeeping becomes a bit tedious, the approach is morally the same as that for the complete bipartite graph: separate the dynamics according to the degree of each node. From Eq. (<ref>), the transition probabilities for increasing and decreasing the density of voters of type 1 on nodes of fixed degree k are:

𝐑_k[{ρ_k}] ≡ P[ρ_k→ρ_k^+] = (1/N) ∑'_{x,y} (1/k_x) A_{xy} Φ(y,x),
𝐋_k[{ρ_k}] ≡ P[ρ_k→ρ_k^-] = (1/N) ∑'_{x,y} (1/k_x) A_{xy} Φ(x,y),

where ρ_k^± = ρ_k ± N_k^{-1}, and the prime restricts the sum to nodes of fixed degree k. In this equation, the densities associated with nodes of degrees k' ≠ k are unaltered. We now make the simplification of considering the mean-field configuration model (see, e.g., <cit.>). This is a network that is constructed by starting with a set of nodes that have "stubs" of specified degrees, and then connecting the ends of stubs at random until no free ends remain. By this construction, the degrees of neighboring nodes are uncorrelated. Thus we may replace A_{xy} by ⟨A_{xy}⟩ = k_x k_y/(μ_1 N) in (<ref>). Following the same steps as in the complete bipartite network, the backward Kolmogorov equation for the consensus time is

∑_k v_k ∂T/∂ρ_k + ∑_k D_k ∂²T/∂ρ_k² = -1,

with degree-dependent velocity and diffusivity (v_k, D_k). To simplify (<ref>), it is helpful to first study the time dependence of the density of voters in state 1 on nodes of fixed degree k. As seen in Fig. <ref>(b) (and as can be shown analytically), the average densities ⟨ρ_k⟩ all converge to the common value ω in a time of the order of 1. Thus at long times, v_k in (<ref>) vanishes. We also convert derivatives with respect to ρ_k to derivatives with respect to ω by

∂T/∂ρ_k = (∂T/∂ω)(∂ω/∂ρ_k) = (k n_k/μ_1) ∂T/∂ω,

to reduce (<ref>) to

(μ_2/(N μ_1²)) ω(1-ω) ∂²T/∂ω² = -1.

Defining an effective population size by N_eff = N μ_1²/μ_2, and comparing with (<ref>), the consensus time is

T_N(ω) = -N_eff[(1-ω)ln(1-ω) + ω ln ω].

This is the same form as on the complete graph and the complete bipartite network, except for the value of N_eff. To compute N_eff for a network with a power-law degree distribution, n_k ∼ k^{-ν}, is a standard exercise in extreme-value statistics <cit.>, and the final result is

T_N ∝ N_eff ∼ N for ν>3;  N^{2(ν-2)/(ν-1)} for 2<ν<3;  O(1) for ν<2,

with logarithmic corrections in the marginal cases ν=2,3. For ν<3, consensus arises quickly because N_eff is much less than N when the degree distribution is sufficiently broad. Here, a few high-degree nodes "control" many neighboring low-degree nodes, so the effective number of independent voters is less than N. Applying this same formalism to the IP, the consensus time is

T_N(ω_{-1}) = -N_eff[(1-ω_{-1})ln(1-ω_{-1}) + ω_{-1} ln ω_{-1}],

with N_eff = N μ_1 μ_{-1}. For power-law degree networks, μ_1 and μ_{-1} can be straightforwardly obtained to give

T_N ∝ N_eff ∼ N for ν>2;  N^{3-ν} for ν<2,

again with a logarithmic correction in the marginal case ν=2. Thus the consensus time in the IP is linear in N for ν>2 and superlinear in N for ν<2. Consensus arises slowly because of the difficulty in changing the opinions of agents on the very many low-degree nodes.

§ BIASED DYNAMICS

What happens when the two states are inequivalent? We may view state 1 as a mutant with fitness f>1 that invades a population of "residents" in state 0, each of which has fitness f=1. What is the fixation probability, namely, the probability that a single fitter mutant overspreads the population? Such fixation underlies many social and epidemiological phenomena (see e.g., <cit.>). We implement biased dynamics for the
VM as follows:

* Pick a voter with probability proportional to its inverse fitness.
* The voter adopts the state of a random neighbor.

Thus a "weaker" voter is more likely to be picked and be influenced by a neighbor. We may equivalently view the inverse fitness as the death rate for a given voter. Similarly, the evolution steps in the biased IP are:

* Pick an invader with probability proportional to its fitness.
* The invader exports its state to a random neighbor.

A fitter mutant is thus more likely to spread its progeny. In unbiased dynamics, we saw that high-degree nodes strongly influence the fixation probability in the VM, while low-degree nodes are more influential in the IP. This trend is confirmed by Fig. <ref>, where the fixation probability is proportional to the degree of the mutant node in the VM and proportional to the inverse of this degree in the IP.

To understand the fixation probability, let's again consider the simple example of the complete graph. The raising and lowering operators in Eq. (<ref>) now are

𝐑(ρ) ≡ P[ρ→ρ+δρ] = ρ(1-ρ),
𝐋(ρ) ≡ P[ρ→ρ-δρ] = (1/f) ρ(1-ρ).

We now write the backward Kolmogorov equation for ℰ(ρ), the fixation probability to reach consensus when the initial density of agents in state 1 equals ρ:

ℰ(ρ) = 𝐑(ρ)ℰ(ρ+δρ) + 𝐋(ρ)ℰ(ρ-δρ) + [1-𝐑(ρ)-𝐋(ρ)]ℰ(ρ),

subject to the boundary conditions ℰ(ρ=0)=0 and ℰ(ρ=1)=1. In analogy with Eq. (<ref>), this equation expresses the fixation probability as the appropriately weighted average of the fixation probabilities after a single update step. In the following, we focus on the weak selection limit, in which f=1+s, with s≪1. Expanding (<ref>) to second order in δρ gives

ρ(1-ρ)[s ∂ℰ/∂ρ + (1/N) ∂²ℰ/∂ρ²] = 0.

This coincides with the equation for the fixation probability to ρ=1 for biased diffusion on the finite interval [0,1], with solution <cit.>

ℰ(ρ;sN) ≃ (1-e^{-sNρ})/(1-e^{-sN}).

Here, we explicitly write the dependence of the fixation probability on ρ as well as on a second natural variable combination sN.

To obtain the fixation probability on a complex network, we extend the two time-scale dynamics of the unbiased VM to biased dynamics. Here the population is again quickly driven to a homogeneous state where ρ_k→ω for all k on a time scale of the order of 1. Once this homogeneous state is reached, the new feature is that consensus is driven by the bias, rather than by diffusive fluctuations. Thus we are led to study the evolution of ⟨ω⟩, which, for s>0, evolves as ⟨ω̇⟩ = s⟨ω⟩(1-⟨ω⟩). This gives ⟨ω⟩→1 on a time scale of the order of s^{-1}≫1. We now determine the fixation probability by applying the same computational approach as that for the unbiased VM: replace ρ_k by ω in all transition probabilities and the derivative ∂/∂ρ_k by (k n_k/μ_1) ∂/∂ω. With these replacements, the backward Kolmogorov equation for the fixation probability has the same form as Eq. (<ref>), but with N replaced by N_eff = N μ_1²/μ_2 and ρ by ω. The fixation probability for biased dynamics on a complex network is then given by Eq. (<ref>) with these replacements. For a single mutant initially at a node of degree k, ω = k/(N μ_1). Substituting this into (<ref>), we find generally that the fixation probability is proportional to k for all s≪1 and has the limiting behaviors (Fig. <ref>):

ℰ ≃ k/(N μ_1) for s ≪ 1/N_eff;  ℰ ≃ k(s μ_1/μ_2) for 1/N_eff ≪ s ≪ 1.

In the complementary biased IP, the fixation probability for a mutant initially on a node of degree k is inversely proportional to the node degree:

ℰ ≃ k^{-1}/(N μ_{-1}) for s ≪ 1/N;  ℰ ≃ k^{-1}(s/μ_{-1}) for 1/N ≪ s ≪ 1.
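On the complete graph, the weak-selection formula for ℰ is easy to test by simulating the biased VM update rule directly. The following Monte Carlo sketch (our own; it tracks only the mutant count k, which suffices on the complete graph, and uses arbitrary parameter values) starts from a single mutant and compares the empirical fixation fraction with ℰ(1/N; sN):

```python
import math
import random

def fixation_probability(N, s, trials=4000):
    # biased VM on the complete graph: one initial mutant of fitness f = 1+s;
    # voters are picked for update with probability proportional to 1/fitness
    f, fixed = 1.0 + s, 0
    for _ in range(trials):
        k = 1                                   # current number of mutants
        while 0 < k < N:
            if random.random() < (k / f) / (k / f + (N - k)):
                # a mutant updates and copies a random other node's state
                if random.random() < (N - k) / (N - 1):
                    k -= 1
            else:
                # a resident updates and copies a random other node's state
                if random.random() < k / (N - 1):
                    k += 1
        fixed += (k == N)
    return fixed / trials

N, s = 30, 0.1
predicted = (1 - math.exp(-s)) / (1 - math.exp(-s * N))   # E(rho = 1/N; sN)
print(fixation_probability(N, s), predicted)              # both ~0.10
```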
§ SUMMARY

The venerable voter model has played a central role in probability theory and statistical physics because it is one of the few exactly soluble many-particle interacting systems in all spatial dimensions and because of the diversity of its applications. Putting the voter model on a complex network—in which there is a broad distribution of node degrees—changes its dynamics in crucial ways. A new dynamical conservation law—the degree-weighted magnetization—gives the fixation probability for the voter model and the invasion process on finite networks. Another new feature is a two time-scale approach to consensus—first a quick approach to a homogeneous state in which the density of 1s is the same for nodes of any degree, after which diffusive fluctuations drive the consensus. Consensus is achieved quickly in the voter model when the degree distribution is sufficiently broad, as high-degree nodes effectively "control" many neighboring low-degree nodes. When one state is more fit, there is again a two time-scale approach to consensus, but with fitness selection driving ultimate consensus. As a message for evolutionary dynamics, for a mutant to infiltrate a network most effectively, it is advantageous for it to be on a high-degree node in the voter model and on a low-degree node in the invasion process.

References

[SAR] V. Sood and S. Redner, Phys. Rev. Lett. 94, 178701 (2005); T. Antal, S. Redner, and V. Sood, Phys. Rev. Lett. 96, 188104 (2006); V. Sood, T. Antal, and S. Redner, Phys. Rev. E 77, 041121 (2008).
[L] T. M. Liggett, Interacting Particle Systems (Springer-Verlag, Berlin, 2005).
[K92] P. L. Krapivsky, Phys. Rev. A 45, 1067 (1992).
[VK] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, 2nd ed. (North-Holland, Amsterdam, 1997).
[R01] S. Redner, A Guide to First-Passage Processes (Cambridge University Press, New York, 2001).
[N] M. E. J. Newman, Networks: An Introduction (Oxford University Press, 2010).
[G87] J. Galambos, The Asymptotic Theory of Extreme Order Statistics (R. E. Krieger Publishing Co., Malabar, Florida, 1987).
[M] P. A. P. Moran, The Statistical Processes of Evolutionary Theory (Clarendon Press, Oxford, 1962).
[K83] M. Kimura, The Neutral Theory of Molecular Evolution (Cambridge University Press, Cambridge, 1983).
[AM] R. M. Anderson and R. M. May, Infectious Diseases in Humans (Oxford University Press, Oxford, 1992).
[E] W. Ewens, Mathematical Population Genetics I. Theoretical Introduction (Springer-Verlag, Berlin, 2004).
[nowak] M. A. Nowak, Evolutionary Dynamics (Harvard University Press, Cambridge, MA, 2006).
[PV01] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
[W] D. J. Watts, Proc. Natl. Acad. Sci. USA 99, 5766 (2002).
http://arxiv.org/abs/1705.02249v2
{ "authors": [ "S. Redner" ], "categories": [ "physics.soc-ph", "physics.data-an" ], "primary_category": "physics.soc-ph", "published": "20170426141057", "title": "Dynamics of Voter Models on Simple and Complex Networks" }
No, This is not a Circle!
Zoltán Kovács
=========================

A unique sink orientation (USO) is an orientation of the n-dimensional cube graph (n-cube) such that every face (subcube) has a unique sink. The number of unique sink orientations is n^Θ(2^n) <cit.>. If a cube orientation is not a USO, it contains a pseudo unique sink orientation (PUSO): an orientation of some subcube such that every proper face of it has a unique sink, but the subcube itself hasn't. In this paper, we characterize and count PUSOs of the n-cube. We show that PUSOs have a much more rigid structure than USOs and that their number is between 2^{Ω(2^{n-log n})} and 2^{O(2^n)}, which is negligible compared to the number of USOs. As tools, we introduce and characterize two new classes of USOs: border USOs (USOs that appear as facets of PUSOs), and odd USOs which are dual to border USOs but easier to understand.

§ INTRODUCTION

Unique sink orientations. For more than 15 years, unique sink orientations (USOs) have been studied as particularly rich and appealing combinatorial abstractions of linear programming (LP) <cit.> and other related problems <cit.>. Originally introduced by Stickney and Watson in the context of the P-matrix linear complementarity problem (PLCP) in 1978 <cit.>, USOs have been revived by Szabó and Welzl in 2001, with a more theoretical perspective on their structural and algorithmic properties <cit.>.

The major motivation behind the study of USOs is the open question whether efficient combinatorial algorithms exist to solve PLCP and LP. Such an algorithm runs on a RAM and has runtime bounded by a polynomial in the number of input values (which are considered to be real numbers). In the case of LP, the runtime should be polynomial in the number of variables and the number of constraints. For LP, the above open question might be less relevant, because polynomial-time algorithms have existed in the Turing machine model since the breakthrough result by Khachiyan in 1980 <cit.>. For PLCP, however, no such algorithm is known, so the computational complexity of PLCP remains open.

Many algorithms used in practice for PLCP and LP are combinatorial and in fact simplex-type (or Bard-type, in the LCP literature). This means that they follow a locally improving path of candidate solutions until they either cycle (precautions need to be taken against this), or they get stuck—which in the case of PLCP and LP fortunately means that the problem has been solved. The less fortunate facts are that for most known algorithms the length of the path is exponential in the worst case, and that no polynomial bound on the path length is known for any algorithm.

USOs allow us to study simplex-type algorithms in a completely abstract setting where cube vertices correspond to candidate solutions, and outgoing edges lead to locally better candidates. Arriving at the unique sink means that the problem has been solved. The requirement that all faces have unique sinks comes from the applications, but is also critical in the abstract setting itself: without it, there would be no hope for nontrivial algorithmic results <cit.>.

On the one hand, this kind of abstraction makes a hard problem even harder; on the other hand, it sometimes allows us to see what is really going on, after getting rid of the numerical values that hide the actual problem structure. In the latter respect, USOs have been very successful. For example, in a USO we are not confined to following a path, we can also "jump around".
The fastest known deterministic algorithm for finding the sink in a USO does exactly this <cit.> and implies the fastest known deterministic combinatorial algorithm for LP if the number of constraints is twice the number of variables <cit.>. In a well-defined sense, this is the hardest case. Also, RandomFacet, the currently best randomized combinatorial simplex algorithm for LP <cit.>, actually works on acyclic USOs (AUSOs) with the same (subexponential) runtime and a purely combinatorial analysis <cit.>.

The USO abstraction also helps in proving lower bounds for the performance of algorithms. The known (subexponential) lower bounds for RandomFacet and RandomEdge—the most natural randomized simplex algorithm—were first proved on AUSOs <cit.> and only later on actual linear programs <cit.>. It is unknown which of the two algorithms is better on actual LPs, but on AUSOs, RandomEdge is strictly slower in the worst case <cit.>.

Finally, USOs are intriguing objects from a purely mathematical point of view, and this is the view that we are mostly adopting in this paper.

Pseudo unique sink orientations. If a cube orientation has a unique sink in every face except the cube itself, we call it a pseudo unique sink orientation (PUSO). Every cube orientation that is not a USO contains some PUSO. The study of PUSOs originates from the master's thesis of the first author <cit.> where the PUSO concept was used to obtain improved USO recognition algorithms; see Section <ref> below.

One might think that PUSOs have more variety than USOs: instead of exactly one sink in the whole cube, we require any number of sinks not equal to one. But this intuition is wrong: as we show, the number of PUSOs is much smaller than the number of USOs of the same dimension; in particular, only a negligible fraction of all USOs of one dimension lower may appear as facets of PUSOs. These border USOs and the odd USOs—their duals—have a quite interesting structure that may be of independent interest. The discovery of these USO classes and their basic properties, as well as the implied counting results for them and for PUSOs, are the main contributions of the paper.

Overview of the paper. Section <ref> formally introduces cubes and orientations, to fix the language. We will define an orientation via its outmap, a function that yields for every vertex its outgoing edges. Section <ref> defines USOs and PUSOs and gives some examples in dimensions two and three to illustrate the concepts. In Section <ref>, we characterize outmaps of PUSOs, by suitably adapting the characterization for USOs due to Szabó and Welzl <cit.>. Section <ref> uses the PUSO characterization to describe a USO recognition algorithm that is faster than the one resulting from the USO characterization of Szabó and Welzl. Section <ref> characterizes the USOs that may arise as facets of PUSOs. As these are on the border between USOs and non-USOs, we call them border USOs. Section <ref> introduces and characterizes the class of odd USOs that are dual to border USOs under inverting the outmap. Odd USOs are easier to visualize and work with, since in any face of an odd USO we again have an odd USO, a property that fails for border USOs. We also give a procedure that allows us to construct many odd USOs from a canonical one, the Klee-Minty cube. Based on this, Section <ref> proves (almost matching) upper and lower bounds for the number of odd USOs in dimension n. Bounds on the number of PUSOs follow from the characterization of border USOs in Section <ref>.
In Section <ref>, we mention some open problems.

§ CUBES AND ORIENTATIONS

Given finite sets A ⊆ B, the cube C = C^[A,B] is the graph with vertex set V(C) = [A,B] := {V: A ⊆ V ⊆ B} and edges between any two subsets U,V for which |U⊕V|=1, where U⊕V = (U∖V)∪(V∖U) = (U∪V)∖(U∩V) is the symmetric difference. We sometimes need the following easy fact.

(U⊕V)∩X = (U∩X)⊕(V∩X).

For a cube C = C^[A,B], dim(C) := |B∖A| is its dimension and carr(C) := B∖A its carrier. A face of C is a subgraph of the form F = C^[I,J], with A ⊆ I ⊆ J ⊆ B. If dim(F)=k, F is a k-face or k-cube. A facet of an n-cube C is an (n-1)-face of C. Two vertices U,V ∈ V(C) are called antipodal in C if U⊕V = carr(C). If A=∅, we abbreviate C^[A,B] as C^B. The standard n-cube is C^[n] with [n] := {1,2,…,n}.

An orientation of a graph G is a digraph that contains for every edge {U,V} of G exactly one directed edge (U,V) or (V,U). An orientation of a cube C can be specified by its outmap φ: V(C) → 2^carr(C) that returns for every vertex the outgoing coordinates. On every face F of C (including C itself), the outmap induces the orientation

F_φ := (V(F), {(V, V⊕{i}): V ∈ V(F), i ∈ φ(V)∩carr(F)}).

In order to actually get a proper orientation of C, the outmap must be consistent, meaning that it satisfies i ∈ φ(V)⊕φ(V⊕{i}) for all V ∈ V(C) and i ∈ carr(C). Note that the outmap of F_φ is not φ but φ_F: V(F) → 2^carr(F) defined by φ_F(V) = φ(V)∩carr(F). In general, when we talk about a cube orientation F_φ, the domain of φ may be a supercube of F in the given context. This avoids unnecessary indices that we would get in defining F_φ via its "official" outmap φ_F. However, sometimes we want to make sure that φ is actually the outmap of F, and then we explicitly say so. Figure <ref> depicts an outmap and the corresponding 2-cube orientation.

§ (PSEUDO) UNIQUE SINK ORIENTATIONS

A unique sink orientation (USO) of a cube C is an orientation C_φ such that every face F_φ has a unique sink. Equivalently, every face F_φ is a unique sink orientation.

Figure <ref> shows the four combinatorially different (pairwise non-isomorphic) orientations of the 2-cube. The eye and the bow are USOs.[The naming goes back to Szabó and Welzl <cit.>.] The twin peak is not, since it has two sinks in the whole cube (which is a face of itself). The cycle is not a USO, either, since it has no sink in the whole cube. The unique sink conditions for 0- and 1-faces (vertices and edges) are always trivially satisfied. If an orientation C_φ is not a USO, there is a smallest face F_φ that is not a USO. We call the orientation in such a face a pseudo unique sink orientation.

A pseudo unique sink orientation (PUSO) of a cube C is an orientation C_φ that does not have a unique sink, but every proper face F_φ ≠ C_φ has a unique sink.

The twin peak and the cycle in Figure <ref> are the two combinatorially different PUSOs of the 2-cube. The 3-cube has 19 combinatorially different USOs <cit.>, but only two combinatorially different PUSOs, see Figure <ref> together with Corollary <ref> below. We let uso(n) and puso(n) denote the number of USOs and PUSOs of the standard n-cube. We have uso(0)=1, uso(1)=2, as well as uso(2)=12 (4 eyes and 8 bows). Moreover, puso(0)=puso(1)=0 and puso(2)=4 (2 twin peaks, 2 cycles).

§ OUTMAPS OF (PSEUDO) USOS

Outmaps of USOs have a simple characterization <cit.>: φ: V(C) → 2^carr(C) is the outmap of a USO of C if and only if

(φ(U)⊕φ(V))∩(U⊕V) ≠ ∅

holds for all pairs of distinct vertices U,V ∈ V(C). This condition means the following: within the face C^[U∩V, U∪V] spanned by U and V, there is a coordinate that is outgoing for exactly one of the two vertices. In particular, any two distinct vertices have different outmap values, so φ is injective and hence bijective.
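The characterization is easy to test mechanically. In the following sketch (our own; vertices and outmap values are encoded as bitmasks, which is not notation used in the paper), the outmap φ(V)=V, whose orientation directs every edge toward the global sink ∅, passes the test, while the cyclic orientation of the 2-cube (the cycle) fails it on both antipodal pairs:

```python
from itertools import combinations

def is_uso(phi, n):
    # condition (1): for all distinct U, V, the set
    # (phi(U) xor phi(V)) intersected with (U xor V) must be nonempty
    return all((phi[u] ^ phi[v]) & (u ^ v)
               for u, v in combinations(range(2 ** n), 2))

n = 3
identity = {v: v for v in range(2 ** n)}   # every edge points toward the empty set
print(is_uso(identity, n))                 # True

# the cycle of the 2-cube: 00 -> 01 -> 11 -> 10 -> 00
cycle = {0b00: 0b01, 0b01: 0b10, 0b11: 0b01, 0b10: 0b10}
print(is_uso(cycle, 2))                    # False
```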
This characterization implicitly makes a more general statement: for every face F, the orientation F_φ is a USO if and only if (<ref>) holds for all pairs of distinct vertices U,V ∈ V(F). The reason is that the validity of (<ref>) only depends on the behavior of φ within the face spanned by U and V. Formally, for U,V ∈ V(F), (<ref>) is equivalent to the USO-characterizing condition (φ_F(U)⊕φ_F(V))∩(U⊕V) ≠ ∅ for the orientation F_{φ_F} = F_φ.

Let C be a cube, φ: V(C) → 2^carr(C), F a face of C. Then F_φ is a USO if and only if

(φ(U)⊕φ(V))∩(U⊕V) ≠ ∅

holds for all pairs of distinct vertices U,V ∈ V(F). In this case, the outmap φ_F of F_φ is bijective.

As a consequence, outmaps of PUSOs can be characterized as follows: (<ref>) holds for all pairs of non-antipodal vertices U,V (which always span a proper face), but fails for some pair U, V = U⊕carr(C) of antipodal vertices. As the validity of (<ref>) is invariant under replacing all outmap values φ(V) with φ'(V) = φ(V)⊕R for some fixed R ⊆ carr(C), we immediately obtain that PUSOs (as well as USOs <cit.>) are closed under flipping coordinates (reversing all edges along some subset of the coordinates).

Let C be a cube, φ: V(C) → 2^carr(C), F a face of C. Suppose that F_φ is a PUSO and R ⊆ carr(C). Consider the R-flipped orientation F_{φ'} induced by the outmap

φ'(V) := φ(V)⊕R, for all V ∈ V(C).

Then F_{φ'} is a PUSO as well.

Using this, we can show that in a PUSO, (<ref>) must actually fail on all pairs of antipodal vertices, not just on some pair, and this is the key to the strong structural properties of PUSOs.
Hence, if we traverse the n-cycle (1,π(1),π(π(1)),…), we eventually find two consecutive elements i,π(i) such that U and V differ in coordinate i but agree in coordinate π(i), meaning that i∈ ((U)⊕(V))∩ (U⊕ V), so (<ref>) holds. We conclude this section with another consequence of Theorem <ref> showing that PUSOs have a parity.Let _ be a PUSO with outmap . Then the outmap values of all vertices have the same parity, that is |(U) ⊕(V)| = 02, ∀ U,V∈|.We call the number |(∅)| 2 the parity of _. By Corollary <ref>, a PUSO of even parity has two sinks, a PUSO of odd parity has none. We first show that the outmap valus of any two distinct non-antipodal vertices U and V differ in at least two coordinates. Let V' be the antipodal vertex of V. As U is neither antipodal to V nor to V', Theorem <ref> along with (V)=(V') (Corollary <ref>) yields((U)⊕(V))∩ (U⊕ V)≠ ∅, ((U)⊕(V))∩ (U⊕ V')≠ ∅.Since U ⊕ V is disjoint from U ⊕ V', (U)⊕(V) contains at least two coordinates.Now we can prove the actual statement. Let I be the image of , I := {(V): V∈|}⊆'=^. We have |I|≥ 2^n-1, because by Lemma <ref>, _ is bijective (and henceis injective) on each facetof . On the other hand, I forms an independent set in the cube ', as any two distinct outmap values differ in at least two coordinates; The statement follows, since the only independent sets of size at least 2^n-1 in an n-cube are formed by all vertices of fixed parity. § RECOGNIZING (PSEUDO) USOSBefore we dive deeper into the structure of PUSOs in the next section, we want to present a simple algorithmic consequence of the PUSO characterization provided by Theorem <ref>.Suppose thatis an n-cube, and that an outmap :|→ 2^ is succinctly given by a Boolean circuit of polynomial size in n. Then it is -complete to decide whether _ is a USO <cit.>.[In fact, it is already -complete to decide whether _ is an orientation.]-membership is easy: every non-USO has a certificate in the form of two vertices that fail to satisfy (<ref>). Finding two such vertices is hard, though. For given vertices U and V, let us call the computation of ((U)⊕(V))∩ (U⊕ V) a pair evaluation. Then, the obvious algorithm needs Θ(4^n) pair evaluations. Using Theorem <ref>, we can improve on this. Letbe an n-cube, :|→ 2^. Using O(3^n) pair evaluations, we can check whether _ is a USO. For every faceof dimension at least 1 (there are 3^n-2^n of them), we perform a pair evaluation with an arbitrary pair of antipodal vertices U,V=∖ U. We output that _ is a USO if and only if all these pair evaluations succeed (meaning that they return nonempty sets). We need to argue that this is correct. Indeed, if _ is a USO, all pair evaluations succeed by Lemma <ref>. If _ is not a USO, it is either not an orientation (so the pair evaluation in some 1-face fails), or it contains a PUSO _ in which case the pair evaluation infails by Theorem <ref>. Using the same algorithm, we can also check whether _ is a PUSO. Which is the case if and only if the pair evaluation succeeds on every face exceptitself.§ BORDER UNIQUE SINK ORIENTATIONSLemma <ref> already implies that not every USO can occur as a facet of a PUSO. For example, let us assume that an eye (Figure <ref>) appears as a facet of a 3-dimensional PUSO. 
Then, Corollary <ref> (i) completely determines the orientation in the opposite facet: we get a “mirror orientation” in which antipodal vertices have traded outgoing coordinates; see Figure <ref>.But now, every edge between the two facets connects two vertices with the same outmap parity within their facets, and no matter how we orient the edge, the two vertices will receive different global outmap parities. Hence, the resulting orientation cannot be a PUSO by Lemma <ref>.It therefore makes sense to study the class of border USOs, the USOs that appear as facets of PUSOs. A border USO is a USO that is a facet of some PUSO. If the border USO lives on cube =^[A,B], the PUSO may live on ^[A,B∪{n}] (n∉ a new coordinate), or on ^[A∖{n},B] (n∈ A), but these cases lead to combinatorially equivalent situations. We will always think about extending border USOs by adding a new coordinate.In this section, we characterize border USOs. We already know that antipodal vertices must have outmap values of different parities; a generalization of this yields a sufficient condition: if the outmap values of distinct vertices U,V agree outside of the face spanned by U and V, then the two outmap values must have different parities.Let _ be a USO with outmap . _ is a border USO if and only if the following condition holds for all pairs of distinct vertices U,V∈|:(U)⊕(V) ⊆ U⊕ V ⇒ |(U)⊕(V)| = 12.A preparatory step will be to generalize the insight gained from the case of the eye above and show that a USO can be extended to a PUSO of one dimension higher in at most two canonical ways—exactly two if the USO is actually border.Letbe a facet of , ∖={n},and let _ be a USO with outmap .(i) There are at most two outmaps :|→ 2^ such that _ is a PUSO with _=_. Specifically, these are _i,i=0,1, with_i (V) = {[(V), V∈|,  |(V)| = i2,; (V) ∪{n}, V∈|,  |(V)| ≠ i2,; _i(C∖ V),V∉|. ]. (ii) If _ is a border USO, both __0 and __1 are PUSOs.(iii) If __i is a PUSO for some i∈{0,1}, then __1-i is a PUSO as well, and _ is a border USO. Only for =_i,i=0,1, we obtain _=_ and satisfy the necessary conditions of Corollary <ref> (pairs of antipodal vertices have the same outmap values in a PUSO), and of Lemma <ref> (all outmap values have the same parity in a PUSO). Hence, __0 and __1 are the only candidates for PUSOs extending _. This yields (i). If _ is a border USO, one of the candidates is a PUSO by definition; as the other one results from it by just flipping coordinate n, it is also a PUSO by Lemma <ref>. Part (ii) follows. For part (iii), we use that _ is a facet of __i, i=0,1, so as before, if one of the latter is a PUSO, then both are, and _ is a border USO by definition.There are 2 combinatorially different PUSOs of the 3-cube (depicted in Figure <ref>).We have argued above that an eye cannot be extended to a PUSO, so let us try to extend a bow (the front facet in Figure <ref>). The figure shows the two candidates for PUSOs provided by Lemma <ref>. Both happen to be PUSOs, so starting from the single combinatorial type of 2-dimensional border USOs, we arrive at the two combinatorial types of 3-dimensional PUSOs. Concluding this section, we prove the advertised characterization of border USOs. [Theorem <ref>] Letbe a cube with facet , ∖={n}. We show that condition (<ref>) fails for some pair of distinct vertices U,V∈| if and only if __0 is not a PUSO, with _0 as in (<ref>). 
By Lemma <ref>, this is equivalent to F_φ not being a border USO.

Suppose first that there are distinct U,V ∈ V(F) such that φ(U)⊕φ(V) ⊆ U⊕V and |φ(U)⊕φ(V)| ≡ 0 (mod 2), meaning that U and V have the same outmap parity. By definition of ψ_0, we then get

φ(U)⊕φ(V) = ψ_0(U)⊕ψ_0(V) = ψ_0(U)⊕ψ_0(V') ⊆ U⊕V,

where V' = V⊕carr(C) is antipodal to V in C. Moreover, as U⊕V is also antipodal to U⊕V', the inclusion ψ_0(U)⊕ψ_0(V) = ψ_0(U)⊕ψ_0(V') ⊆ U⊕V is equivalent to

(ψ_0(U)⊕ψ_0(V'))∩(U⊕V') = ∅.

Since U,V are distinct and non-antipodal (in C), U,V' are therefore distinct non-antipodal vertices that fail to satisfy Theorem <ref> (i), so C_{ψ_0} is not a PUSO.

For the other direction, we play the movie backwards. Suppose that C_{ψ_0} is not a PUSO. As pairs of antipodal vertices comply with Theorem <ref> (ii) by definition of ψ_0, there must be distinct and non-antipodal vertices U,V' with the offending property (<ref>). Moreover, as ψ_0 induces USOs on both F (where we have F_φ) and its opposite facet F' (where we have a mirror image of F_φ), Lemma <ref> implies that U and V' cannot both be in F, or in F'. W.l.o.g. assume that U ∈ V(F), V' ∈ V(F'), and let V ∈ V(F) be antipodal to V'. Then, as before, (<ref>) is equivalent to the inclusion ψ_0(U)⊕ψ_0(V) = ψ_0(U)⊕ψ_0(V') ⊆ U⊕V. In particular, ψ_0(U) and ψ_0(V) must agree in coordinate n, which in turn implies

φ(U)⊕φ(V) = ψ_0(U)⊕ψ_0(V) ⊆ U⊕V,

and since |ψ_0(U)⊕ψ_0(V)| ≡ 0 (mod 2) by definition of ψ_0, we have found two distinct vertices U,V ∈ V(F) that fail to satisfy (<ref>).

For an example of a 3-dimensional border USO, see Figure <ref>. In particular, we see that faces of border USOs are not necessarily border USOs: an eye cannot be a 2-dimensional border USO (Figure <ref>), but it may appear in a facet of a 3-dimensional border USO (for example, the bottom facet in Figure <ref>), since the incident edges along the third coordinate can be chosen such that (<ref>) does not impose any condition on the USO in F.

Let buso(n) denote the number of border USOs of the standard n-cube. By Lemma <ref>,

puso(n) = 2·buso(n-1), n ≥ 2.
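The cyclic construction of Lemma <ref> and the antipodal characterization of Theorem <ref> are both easy to verify computationally. The sketch below (ours; vertices and outmap values are encoded as bitmasks over coordinates 0,…,n-1, with π(i) = i+1 mod n) confirms that the construction yields a PUSO in every dimension tested:

```python
def cyclic_outmap(n):
    # phi(V) = { i : |V intersect {i, pi(i)}| = 1 } for pi(i) = i+1 mod n
    def phi(v):
        out = 0
        for i in range(n):
            j = (i + 1) % n
            if ((v >> i) & 1) != ((v >> j) & 1):
                out |= 1 << i
        return out
    return {v: phi(v) for v in range(2 ** n)}

def is_puso(phi, n):
    # Theorem: condition (1) must hold for every distinct non-antipodal
    # pair and fail for every antipodal pair
    full = 2 ** n - 1
    for u in range(2 ** n):
        for v in range(u + 1, 2 ** n):
            ok = bool((phi[u] ^ phi[v]) & (u ^ v))
            if ok == (v == (u ^ full)):
                return False
    return True

for n in range(2, 8):
    assert is_puso(cyclic_outmap(n), n)
print("cyclic construction gives a PUSO for n = 2,...,7")
```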
As this property also holds for any two distinct vertices within a face F, this implies the following.

Corollary. Let C_φ be an odd USO and F a face of C. (i) F_φ is an odd USO. (ii) If dim(F) = 2, F_φ is a bow.

Indeed, as source and sink of an eye violate (<ref>), all 2-faces of odd USOs are bows. To make the global structure of odd USOs more transparent, we develop an alternative view on them in terms of caps that can be considered as “higher-dimensional bows”.

Definition. Let C_φ be an orientation with bijective outmap φ. For a vertex W, let W̄ be the unique complementary vertex, the one whose outmap value is antipodal to φ(W); formally, φ(W) ⊕ φ(W̄) is the full coordinate set of C. C_φ is called a cap if

    |W ⊕ W̄| ≡ 1 (mod 2)  for all vertices W.

Figure <ref> illustrates this notion on three examples.

Lemma. Let C_φ be an orientation with outmap φ. C_φ is an odd USO if and only if all its faces are caps.

Proof. If all faces are caps, their outmaps are bijective, meaning that all faces have unique sinks. So C_φ is a USO. It is odd, since the characterizing property (<ref>) follows for all distinct U, V via the cap spanned by U and V. Now suppose that C_φ is an odd USO. Then every face F has a bijective outmap to begin with, by Lemma <ref>; to show that F is a cap, consider any two complementary vertices W, W̄ in F. As W and W̄ are in particular complementary in the face that they span, they have odd Hamming distance by (<ref>).

There is a “canonical” odd USO of the standard n-cube in which the Hamming distances of complementary vertices are not only odd, but in fact always equal to 1. This orientation is known as the Klee-Minty cube, as it captures the combinatorial structure of the linear program that Klee and Minty used in 1972 to show for the first time that the simplex algorithm may take exponential time <cit.>.

The n-dimensional Klee-Minty cube can be defined inductively: KM^n is obtained from KM^{n−1} by embedding an [n−1]-flipped copy of KM^{n−1} into the opposite facet C^[{n},[n]], with all connecting edges oriented towards KM^{n−1}; the resulting USO contains a directed Hamiltonian path; see Figure <ref>. As a direct consequence of the construction, KM^n is a cap: complementary vertices are neighbors along coordinate n. Moreover, it is easy to see that each k-face is combinatorially equivalent to KM^k, hence all faces are caps, so KM^n is an odd USO.

Next, we do this more formally, as we will need the Klee-Minty cube as a starting point for generating many odd USOs.

Lemma. Consider the standard n-cube and the outmap φ: 2^[n] → 2^[n] with

    φ(V) = { j ∈ [n] : |V ∩ {j, j+1, …, n}| ≡ 1 (mod 2) },  ∀ V ⊆ [n].

Then KM^n := C_φ is an odd USO that satisfies

    φ(W) ⊕ φ(W ⊕ {i}) = [i],  ∀ i ∈ [n],

for each vertex W. In particular, for all i and W ⊆ [i−1], W and W ∪ {i} are complementary in KM^i, so we recover the above inductive view of the Klee-Minty cube.

Proof. We first show that

    φ(U) ⊕ φ(V) = φ(U ⊕ V),  ∀ U, V ⊆ [n].

Indeed, j ∈ φ(U) ⊕ φ(V) is equivalent to U ∩ {j, j+1, …, n} and V ∩ {j, j+1, …, n} having different parities, which by (<ref>) is equivalent to (U ⊕ V) ∩ {j, j+1, …, n} having odd parity, meaning that j ∈ φ(U ⊕ V).

Since φ(U) ⊕ φ(V) = φ(U ⊕ V) contains the largest element of U ⊕ V, (<ref>) holds for all pairs of distinct vertices, so C_φ is a USO. Condition (<ref>) follows from φ(W) ⊕ φ(W ⊕ {i}) = φ({i}) = [i]. To show that C_φ is odd, we verify condition (<ref>) of Theorem <ref>. Suppose that U ⊕ V ⊆ φ(U) ⊕ φ(V) = φ(U ⊕ V) for two distinct vertices. Since φ(U ⊕ V) does not contain the second-largest element of U ⊕ V, the former inclusion can only hold if there is no such second-largest element, i.e. U and V have (odd) Hamming distance 1.
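A direct transcription of the Klee-Minty outmap, reusing is_uso from the first sketch and is_odd from the previous one (again with coordinate i stored as bit i−1; the helper names are ours):

    def km_outmap(n):
        # Klee-Minty outmap: j is outgoing at V iff |V ∩ {j, j+1, ..., n}| is odd
        def phi(v):
            return sum(1 << j for j in range(n)
                       if bin(v >> j).count('1') % 2 == 1)
        return {v: phi(v) for v in range(1 << n)}

    n = 4
    km = km_outmap(n)
    assert is_uso(km, n) and is_odd(km)
    # condition (<ref>): phi(W) ⊕ phi(W ⊕ {i}) = [i]
    assert all(km[w] ^ km[w ^ (1 << i)] == (2 << i) - 1
               for w in range(1 << n) for i in range(n))
    # complementary vertices are neighbors along the top coordinate
    assert all(km[w] ^ km[w ^ (1 << (n - 1))] == (1 << n) - 1
               for w in range(1 << n))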
The Klee-Minty cube has a quite special property: complementing any vertex (reversing all its incident edges) yields another odd USO. [In general, the operation of complementing a vertex will destroy the USO property.] Even more is true: any set of vertices with disjoint neighborhoods can be complemented simultaneously. Thus, if we select a set of N vertices with pairwise Hamming distance at least 3, we get 2^N different odd USOs. We will use this in the next section to get a lower bound on the number of odd USOs. The following lemma is our main workhorse.

Lemma. Let C_φ be an odd USO of the standard n-cube with outmap φ, and let W be a vertex satisfying condition (<ref>):

    φ(W) ⊕ φ(W ⊕ {i}) = [i],  ∀ i ∈ [n].

Let C_{φ'} be the orientation resulting from complementing (reversing all edges incident to) W. Formally,

    φ'(W) = φ(W) ⊕ [n],
    φ'(W ⊕ {i}) = φ(W ⊕ {i}) ⊕ {i},  i = 1, …, n,

and φ'(V) = φ(V) for all other vertices. Then C_{φ'} is an odd USO as well.

Proof. We first show that every face F_{φ'} has a unique sink, so that C_{φ'} is a USO. If W ∉ F, then F_{φ'} = F_φ, so there is nothing to show. If W ∈ F, let the coordinate set of F be {i_1, i_2, …, i_k}, i_1 < i_2 < ⋯ < i_k. Using (<ref>), condition (<ref>) yields

    φ_F(W) ⊕ φ_F(W ⊕ {i_t}) = {i_1, i_2, …, i_t},  ∀ t ∈ [k],

and further

    φ_F(W ⊕ {i_s}) ⊕ φ_F(W ⊕ {i_t}) = {i_{s+1}, i_{s+2}, …, i_t},  ∀ s, t ∈ [k], s < t.

In particular, W is complementary to W ⊕ {i_k} in F, but this is the only complementary pair among the k+1 vertices in F that are affected by complementing W. From (<ref>), it similarly follows that

    φ'_F(W) = φ_F(W) ⊕ {i_1, i_2, …, i_k} = φ_F(W ⊕ {i_k}),
    φ'_F(W ⊕ {i_1}) = φ_F(W ⊕ {i_1}) ⊕ {i_1} = φ_F(W),
    φ'_F(W ⊕ {i_t}) = φ_F(W ⊕ {i_t}) ⊕ {i_t} = φ_F(W ⊕ {i_{t−1}}),  t = 2, …, k,

where the second equality in each line uses (<ref>). This means that the k+1 affected vertices just permute their outmap values under φ_F → φ'_F. This does not change the number of sinks, so F_{φ'} has a unique sink as well.

It remains to show that F_{φ'} is a cap, so that C_{φ'} is an odd USO by Lemma <ref>. Since F_φ is a cap, it suffices to show that complementary vertices keep odd Hamming distance under φ_F → φ'_F. This can also be seen from (<ref>): for t = 2, …, k, the vertex of outmap value φ_F(W ⊕ {i_{t−1}}) moves by Hamming distance 2, namely from W ⊕ {i_{t−1}} (under φ_F) to W ⊕ {i_t} (under φ'_F). Hence it still has odd Hamming distance to its unaffected complementary vertex. The two complementary vertices of outmap values φ_F(W) and φ_F(W ⊕ {i_k}) move by Hamming distance 1 each. Vertices of other outmap values are unaffected.

As an example, if we complement the vertex Y in the Klee-Minty cube of Figure <ref>, we obtain the odd USO in Figure <ref> (left); see Figure <ref>. Vertices X and Z have moved by Hamming distance 2, while Y and its complementary vertex Ȳ have moved by Hamming distance 1 each. If we subsequently also complement W (whose neighborhood was unaffected, so Lemma <ref> still applies), we obtain another odd USO (actually, a rotated Klee-Minty cube).

Let o(n) denote the number of odd USOs of the standard n-cube. By Definition <ref>, we get

    o(n) = b(n),  ∀ n ≥ 0,

as duality (Lemma <ref>) is a bijection on the set of all USOs.
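The complementation operation of the lemma translates directly into code; the sketch below (our names) reuses km_outmap and the checkers from the earlier sketches. Since, by the previous lemma, every vertex of the Klee-Minty cube satisfies condition (<ref>), the vertex complemented in the demo is an arbitrary pick.

    def complement_vertex(phi, w, n):
        # reverse all edges at w; valid when w satisfies condition (<ref>)
        psi = dict(phi)
        psi[w] = phi[w] ^ ((1 << n) - 1)     # phi'(W) = phi(W) ⊕ [n]
        for i in range(n):                   # phi'(W ⊕ {i}) = phi(W ⊕ {i}) ⊕ {i}
            psi[w ^ (1 << i)] = phi[w ^ (1 << i)] ^ (1 << i)
        return psi

    flipped = complement_vertex(km_outmap(4), 0b0101, 4)
    assert is_uso(flipped, 4) and is_odd(flipped)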
§ COUNTING PUSOS AND ODD USOS

With characterizations of USOs, PUSOs, border USOs, and odd USOs available, one can explicitly enumerate these objects for small dimensions. Here are the results up to dimension 5 (the USO column is due to Schurr <cit.>). We remark that most numbers (in particular, the larger ones) have not independently been verified. The number of PUSOs appears to be very small, compared to the total number of USOs of the same dimension. In this section, we will show the following asymptotic results that confirm this impression.

Theorem. Let p(n) denote the number of PUSOs of the standard n-cube.

(i) For n ≥ 2, p(n) ≤ 2^{2^{n−1}}.
(ii) For n ≥ 6, p(n) < 1.777128^{2^{n−1}}.
(iii) For n = 2^k, k ≥ 2, p(n) ≥ 2^{2^{n−1}−log n+1}.

This shows that the number p(n) is doubly exponential but still negligible compared to the number u(n) of USOs of the standard n-cube: Matoušek <cit.> has shown that

    u(n) ≥ (n/e)^{2^{n−1}},

with a “matching” upper bound of u(n) = n^{O(2^n)}.

As the main technical step, we count odd USOs. We start with the upper bound.

Lemma. Let n ≥ 1. Then

(i) o(n) ≤ 2 o(n−1)^2;
(ii) for n ≥ 2 and all k < n,

    2 o(n−1) ≤ (2 o(k))^{2^{n−1−k}} = ((2 o(k))^{1/2^k})^{2^{n−1}}.

Proof. By Corollary <ref> (i), every odd USO consists of two odd USOs in two opposite facets, and edges along coordinate n, say, that connect the two facets. We claim that for every choice of odd USOs in the two facets, there are at most two ways of connecting the facets. Indeed, once we fix the direction of some connecting edge, all the others are fixed as well, since the orientation of an edge {V, V ⊕ {n}} determines the orientations of all “neighboring” edges {V ⊕ {i}, V ⊕ {i, n}} via Corollary <ref> (ii) (all 2-faces are bows). Inequality (i) follows, and (ii) is a simple induction.

The three bounds on p(n) now follow from p(n) = 2 b(n−1) (<ref>) and b(n−1) = o(n−1) (<ref>). For the bound of Theorem <ref> (i), we use (<ref>) with k = 0, and for Theorem <ref> (ii), we employ k = 5 and o(5) = 44'075'264. The lower bound of Theorem <ref> (iii) is a direct consequence of the following “matching” lower bound on the number of odd USOs.

Lemma. Let n = 2^k, k ≥ 1. Then o(n−1) ≥ 2^{2^{n−1}−log n}.

Proof. If n = 2^k, there exists a perfect Hamming code of block length n−1 and message length n−1−log n <cit.>. In our language, this is a set W of 2^{n−1−log n} vertices of the standard (n−1)-cube, with pairwise Hamming distance at least 3 and therefore disjoint neighborhoods. Hence, starting from the Klee-Minty cube KM^{n−1} as introduced in Lemma <ref>, we can apply Lemma <ref> to get a different odd USO for every subset of W, by complementing all vertices in the given subset. The statement follows.
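For small dimensions the classes counted above can be tallied by brute force, reusing the checkers from the earlier sketches. This is an illustrative census only: is_puso encodes our reading of a PUSO as an orientation whose proper faces are all USOs while the whole cube has no unique sink, and no particular output values are claimed here.

    from itertools import product

    def census(n):
        # classify all 2^(n * 2^(n-1)) orientations of the n-cube (n <= 3 feasible)
        edges = [(v, v | (1 << c)) for v in range(1 << n)
                 for c in range(n) if not v >> c & 1]
        uso = puso = odd = 0
        for bits in product((0, 1), repeat=len(edges)):
            phi = {v: 0 for v in range(1 << n)}
            for (u, w), b in zip(edges, bits):
                phi[u if b else w] |= u ^ w   # record the outgoing coordinate
            if is_uso(phi, n):
                uso += 1
                odd += is_odd(phi)
            elif is_puso(phi, n):
                puso += 1
        return uso, puso, odd

    print(census(2), census(3))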
§ CONCLUSION

In this paper, we have introduced, characterized, and (approximately) counted three new classes of n-cube orientations: pseudo unique sink orientations (PUSOs), border unique sink orientations (facets of PUSOs), and odd unique sink orientations (duals of border USOs). A PUSO is a dimension-minimal witness for the fact that a given cube orientation is not a USO. The requirement of minimal dimension induces rich structural properties and a PUSO frequency that is negligible compared to the frequency of USOs among all cube orientations.

An obvious open problem is to close the gap in our approximate counting results and determine the true asymptotics of log o(n) and hence log p(n). We have shown that these numbers are between Ω(2^{n−log n}) and O(2^n). As our lower bound construction based on the Klee-Minty cube seems to yield rather specific odd USOs, we believe that the lower bound can be improved.

Also, border USOs and odd USOs might be algorithmically more tractable than general USOs. The standard complexity measure here is the number of outmap values [provided by an oracle that can be invoked for every vertex] that need to be inspected in order to be able to deduce the location of the sink <cit.>. For example, in dimension 3, we can indeed argue that border USOs and odd USOs are easier to solve than general USOs. It is known that 4 outmap values are necessary and sufficient to locate the sink in any USO of the 3-cube <cit.>. But in border USOs and odd USOs of the 3-cube, 3 suitably chosen outmap values suffice to deduce the orientations of all edges and hence the location of the sink <cit.>; see Figure <ref>.
http://arxiv.org/abs/1704.08481v1
{ "authors": [ "Vitor Bosshard", "Bernd Gärtner" ], "categories": [ "math.CO", "cs.DM", "05A16, 68R05", "G.2.1; F.2.2" ], "primary_category": "math.CO", "published": "20170427090115", "title": "Pseudo Unique Sink Orientations" }
Compact Descriptors for Video Analysis: the Emerging MPEG Standard
Ling-Yu Duan, Vijay Chandrasekhar, Shiqi Wang, Yihang Lou, Jie Lin, Yan Bai, Tiejun Huang, Alex Chichung Kot, Fellow, IEEE, and Wen Gao, Fellow, IEEE. Ling-Yu Duan and Vijay Chandrasekhar are joint first authors.
=========================================================================================================================================================================================================================

This paper provides an overview of the ongoing Compact Descriptors for Video Analysis (CDVA) standard from the ISO/IEC Moving Picture Experts Group (MPEG). MPEG-CDVA aims at defining a standardized bitstream syntax to enable interoperability in the context of video analysis applications. During the development of MPEG-CDVA, a series of techniques aiming to reduce the descriptor size and improve the video representation ability have been proposed. This article describes the new standard that is being developed and reports the performance of these key technical contributions.

§ INTRODUCTION

Over the past decade, there has been an exponential increase in the demand for video analysis, which refers to the capability of automatically analyzing video content for event detection, visual search, tracking, classification, etc. Generally speaking, a variety of applications can benefit from automatic video analysis, including mobile augmented reality (MAR), automotive, smart city, media entertainment, etc. For instance, MAR requires object recognition and tracking in real time for accurate virtual object registration. With respect to automotive applications, robust object detection and recognition are highly desirable for collision and cross-traffic warning. The increasing proliferation of surveillance systems is also driving the development of object detection, classification and visual search technologies. Moreover, a series of new challenges have been brought forward in media entertainment, such as interactive advertising, video indexing and near-duplicate detection, which all rely on robust and efficient video analysis algorithms. For the deployment of video analysis functionalities in real application scenarios, a unique set of challenges is presented <cit.>.

Basically, it is the central server that performs automatic video analysis tasks, such that efficient transmission of the visual data via a bandwidth-constrained network is highly desired <cit.><cit.>. The straightforward way is to encode the video sequences and transmit the compressed visual data over the networks. As such, features can be extracted from the decoded videos for video analysis purposes. However, this may create high-volume data due to the pixel-level representation of the video texture. Consider, for example, that 470,000 closed-circuit television (CCTV) cameras for video acquisition are deployed in Beijing, China. Assuming that 2.5 Mbps of bandwidth [2.5 Mbps is the standard bitrate for 720p video with a standard frame rate (30 fps).] is required for each video to ensure that they can be simultaneously uploaded to the server side for analysis, in total 1.2 Tbps of video data is transmitted on the internet highway for security and safety applications. Due to the massive CCTV camera deployment in the city, it is urgently required to investigate ways to handle such large-scale video data.
As video analysis is directly performed based on extracted features instead of textures, shifting the feature extraction and representation into the camera-integrated module is highly desirable, as it directly supports the acquisition of features at the client side. As such, compact feature descriptors instead of compressed texture data can be delivered, which can completely satisfy the requirements of video analysis. Therefore, developing effective and efficient compact feature descriptor representation techniques with low complexity and memory cost is the key to such an “analyze then compress” infrastructure <cit.>. Moreover, interoperability should also be maintained to ensure that feature descriptors extracted by any device and transmitted in any network environment are fully operable at the server end. The Compact Descriptors for Visual Search (CDVS) standard <cit.><cit.>, developed by the Moving Picture Experts Group (MPEG), standardizes the descriptor bitstream syntax and the corresponding extraction operations for still images to ensure interoperability in visual search applications. It has been proven to achieve high-efficiency and low-latency mobile visual search <cit.>, and an order of magnitude data reduction is realized by only sending the extracted feature descriptors to the remote server.

However, the straightforward encoding of CDVS descriptors extracted frame by frame from video sequences cannot meet the requirements of video analysis applications. For example, with the 4K descriptor length per frame suggested by CDVS, the feature bitrate of a typical 30 fps video is approximately 1 Mbps. Obviously, this may lead to excessive consumption of storage and bandwidth. Unlike still images, a video combines a sequence of highly correlated frames to form a moving scene. To fill the gap between the existing MPEG technologies and the emerging requirements of video feature descriptor compression, a Call for Proposals (CfP) on Compact Descriptors for Video Analysis (CDVA) <cit.> was issued in 2015 by MPEG, targeting an efficient and interoperable design of advanced tools to meet the growing demand for video analysis. It is also envisioned that CDVA can achieve significant savings in memory size and bandwidth resources, and meanwhile provide hardware-friendly support for the deployment of CDVA at the application level. As such, the aforementioned video analysis applications such as MAR, automotive, surveillance and media entertainment can be flexibly supported by CDVA <cit.>, as illustrated in Fig. <ref>.

In Fig. <ref>, the framework of CDVA is demonstrated, which is comprised of keyframe/shot detection, video descriptor extraction, encoding, transmission, decoding and video analysis against a large-scale database. During the development of CDVA, a series of techniques have been developed for these modules. The key technical contributions of CDVA are reviewed in this paper, including the video structure, advanced feature representation, and the video retrieval and matching pipeline. Subsequently, the developments of the emerging CDVA standard are discussed, and the performance of the key techniques is demonstrated. Finally, we discuss the relationship between CDVS and CDVA and look into the future developments of CDVA.

§ THE MPEG CDVS STANDARD

MPEG-CDVS provides the standardized description of feature descriptors and the descriptor extraction process for efficient and interoperable still image search applications.
Basically, CDVS can serve as the frame-level video feature description, which motivates the inheritance of CDVS features in the CDVA exploration. This section discusses the compact descriptors specified in CDVS, which are capable of adapting to network bandwidth fluctuations and support scalability through the predefined descriptor lengths: 512 bytes, 1K, 2K, 4K, 8K and 16K.

§.§ Compact Local Feature Descriptor

The extraction of local feature descriptors is required to be completed with low complexity and memory cost. Obviously, this is even more desirable for videos. The CDVS standard adopts the Laplacian-of-Gaussian interest point detector. The low-degree polynomial (ALP) approach is employed to compute the local response after Laplacian-of-Gaussian filtering. Subsequently, a relevance measure is defined to select a subset of feature descriptors; it is statistically learned from several characteristics of the local features, including the scale, the peak response of the LoG, the distance to the image centre, etc.

The handcrafted SIFT descriptor is adopted in CDVS as the local feature descriptor, and a compact SIFT compression scheme, achieved by a transform followed by ternary scalar quantization, is developed to reduce the feature size. This scheme is of low complexity and hardware-friendly due to fast processing (transform, quantization and distance calculation). In addition to the local descriptors, the location coordinates of these descriptors are also compressed for transmission. In CDVS, the location coordinates are represented as a histogram consisting of a binary histogram map and a histogram counts array. The histogram map and the counts array are coded separately by a simple arithmetic coder and a sum-context-based arithmetic coder <cit.>.

§.§ Local Feature Descriptor Aggregation

CDVS adopts the scalable compressed Fisher vector (SCFV) representation for mobile image retrieval. In particular, the selected SIFT descriptors are aggregated into a Fisher vector (FV) by assigning each descriptor to multiple Gaussians in a soft-assignment manner. To compress the high-dimensional FVs, a subset of the Gaussian components in the Gaussian mixture model (GMM) is selected based on their rankings in terms of the standard deviation of each sub-vector. The number of selected Gaussian functions depends on the available coding bits, such that descriptor scalability is achieved to adapt to the available bit budget. Finally, a one-bit scalar quantizer is applied to support fast comparison with the Hamming distance.
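The one-bit quantization and Hamming comparison at the end of the aggregation stage can be illustrated in a few lines of NumPy. This is our own sketch, not the normative CDVS procedure; the vector length and the sign-based threshold are assumptions.

    import numpy as np

    def binarize(fv):
        # one-bit scalar quantization: keep only the sign of each FV component
        return (np.asarray(fv) > 0).astype(np.uint8)

    def hamming(a, b):
        # fast descriptor comparison: count of differing bits
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(0)
    fv1, fv2 = rng.standard_normal(256), rng.standard_normal(256)
    print(hamming(binarize(fv1), binarize(fv2)))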
§ KEY TECHNOLOGIES IN CDVA

Driven by the success of MPEG-CDVS, which provides a fundamental groundwork for the development of CDVA, a series of technologies have been brought forward. In CDVA, the key contributions can be categorized into the video structure, the video feature description and the video analysis pipeline. The CDVA framework specifies how the video is structured and organized for feature extraction, where key frame detection and inter feature prediction methods are presented. Subsequently, the deep learning based feature representation is reviewed, and the design philosophy and compression methods of the deep learning models are discussed. Finally, the video analysis pipeline, which serves as the server-side processing module, is introduced.

§.§ Video Structure

Video is composed of a series of highly correlated frames, such that extracting feature descriptors for each individual frame may be redundant and lead to unnecessary computational consumption. In view of this, a straightforward way is to perform key frame detection, following which only the feature descriptors of the key frames are extracted. In <cit.>, the global descriptor SCFV of CDVS is employed to compute the distance between the current frame and the previous one. In particular, if the distance is lower than a given threshold, indicating that it is not necessary to preserve the current frame for feature extraction, the current frame is dropped. However, one drawback of this method is that the SCFV has to be extracted for each frame, which brings additional computational complexity. In <cit.>, the color histogram instead of the CDVS descriptors is employed for the frame-level distance comparison. As such, the SCFV descriptors of non-key frames do not need to be extracted. Due to this advantage, the scheme has been adopted into the CDVA experimentation model (CXM) 0.2 <cit.>. In <cit.>, Bailer proposed to modify the segments produced by the color histogram: for each segment, the medoid frame is selected, and all frames within the segment whose similarity to it in terms of SCFV is lower than a given threshold are further chosen for feature extraction.

The key-frame based feature representation effectively removes the temporal redundancy of video, resulting in low-bitrate query descriptor transmission. However, this strategy largely ignores the intermediate information between two key frames. In <cit.>, it is interesting to observe that densely sampled frames can bring better video matching and retrieval performance at the expense of an increased descriptor size. In order to achieve a good balance between the feature bitrate and the video analysis performance, inter prediction techniques for the local and global descriptors of CDVS have been proposed <cit.>. Specifically, in <cit.>, the intermediate frames between two key frames are denoted as predictive frames (P-frames). In a P-frame, the local descriptors are predicted by multiple-reference-frame prediction. Those local descriptors for which no corresponding references can be found are directly written into the bitstream. For the global descriptors in a P-frame, for each component selected in both the current and the previous frame, the binarized sub-vector is copied from the corresponding one in the previous frame to save coding bits. In <cit.>, it is further demonstrated that more than a 50% compression rate reduction can be achieved by applying lossy compression to the local descriptors, without significant influence on the matching performance. Moreover, it is demonstrated that the global difference descriptors can be efficiently coded using adaptive binary arithmetic coding as well.
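A minimal sketch of color-histogram based key frame selection in the spirit of the scheme adopted in CXM 0.2; the L1 distance, the bin count and the threshold value are our assumptions, not the normative choices.

    import numpy as np

    def color_histogram(frame, bins=8):
        # frame: H x W x 3 uint8 array; joint RGB histogram, L1-normalized
        h, _ = np.histogramdd(frame.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h.ravel() / h.sum()

    def select_keyframes(frames, threshold=0.3):
        # keep a frame whenever its histogram moves far enough from the last keyframe
        keys, last = [0], color_histogram(frames[0])
        for t, frame in enumerate(frames[1:], start=1):
            hist = color_histogram(frame)
            if np.abs(hist - last).sum() > threshold:   # L1 distance
                keys.append(t)
                last = hist
        return keys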
§.§ Deep Learning Based Video Representation

Recently, due to the remarkable success of deep learning, numerous approaches have been presented that employ Convolutional Neural Networks (CNNs) to extract deep learning features for image retrieval <cit.>. In the development of CDVA, Nested Invariance Pooling (NIP) has been proposed to obtain discriminative deep invariant descriptors, and a significant video analysis performance improvement over traditional handcrafted features has been observed. In this subsection, we review the development of deep learning features in CDVA from the perspectives of deep learning based feature extraction, network compression, feature binarization, and the combination of deep learning based feature descriptors with handcrafted ones.

§.§.§ Deep Learning Based Feature Extraction

Robust video retrieval requires the features to be scale, rotation and translation invariant. CNN models incorporate local translation invariance by a succession of convolution and pooling operations. In order to further encode rotation and scale invariance into CNNs, and motivated by the invariance theory, NIP was proposed to represent each frame with a global feature vector <cit.>. In particular, the invariance theory provides a mathematically proven strategy to obtain invariant representations with CNNs. This inspires improving the geometric invariance of deep learning features through pooling operations applied to the intermediate feature maps in a nested way. Specifically, a given input frame is rotated R times, and for each rotation the pool5 feature maps (W × H × C) are extracted. Here, W and H denote the width and height of the map and C is the number of feature channels. Based on the feature maps, multi-scale uniform region-of-interest (ROI) sampling is performed, resulting in a 5-D feature representation with dimensions (R × S × W' × H' × C). Here, S is the number of sampled ROIs in the multi-scale region sampling. Subsequently, NIP performs nested pooling over translations (W' × H'), scales (S) and finally rotations (R). Therefore, a C-dimensional global CNN feature descriptor can be generated. The performance of NIP descriptors can be further boosted by PCA whitening <cit.>. To evaluate the similarity between two NIP feature descriptors, the cosine similarity function is adopted.

§.§.§ Network Compression

CNN models such as AlexNet <cit.> and VGG-16 <cit.> contain millions of neurons, which cost hundreds of MBs of storage. This creates great difficulties in video analysis, especially when the CNN models are deployed at the client side for feature extraction in the “analyze then compress” framework. Therefore, efficient compression of the neural network models is urgently required for the development of CDVA. In <cit.>, both scalar and vector quantization (VQ) techniques using the Lloyd-Max algorithm are applied to compress the NIP model. The quantized coefficients are further coded with Huffman coding. Moreover, the model is further pruned by dropping convolutional layers. It is shown that the compressed models, which are two orders of magnitude smaller than the uncompressed ones, lead to negligible loss in video analysis.
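As a rough illustration of scalar quantization of network weights, the sketch below runs a plain Lloyd iteration, standing in for the Lloyd-Max design mentioned above (our own implementation; the codebook size, iteration count and initialization are assumptions, and for realistic layer sizes the assignment step would be done in chunks). The resulting indices would subsequently be entropy coded, e.g. with Huffman coding.

    import numpy as np

    def lloyd_scalar_quantize(weights, levels=16, iters=20):
        # Lloyd iteration for a scalar codebook: alternate nearest-centroid
        # assignment and centroid update, then map each weight to its centroid
        w = weights.ravel()
        centers = np.quantile(w, np.linspace(0, 1, levels))   # initialization
        idx = np.zeros(len(w), dtype=int)
        for _ in range(iters):
            idx = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
            for k in range(levels):
                if np.any(idx == k):
                    centers[k] = w[idx == k].mean()
        return centers[idx].reshape(weights.shape), idx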
§.§.§ Feature Descriptor Compression

The deep learning based feature descriptor generated by NIP is usually in floating point, which is not efficient for the subsequent feature comparison process. As the Hamming distance can facilitate effective retrieval, especially for large video collections, NIP feature binarization has been proposed for compact feature representation <cit.>. In particular, a one-bit scalar quantizer is applied to simply binarize the NIP descriptor. As such, a much smaller memory footprint and runtime cost can be achieved with only a marginal performance loss.

§.§.§ Combination of Deep Learning Based and Handcrafted Features

Furthermore, in <cit.>, it is also revealed that there are complementary effects between CDVS handcrafted and deep learning based features for video analysis. In particular, the deep learning based features are extracted by taking the whole frame into account, while the CDVS handcrafted descriptors sparsely sample the interest points. Moreover, the handcrafted features work relatively better on richly textured blobs, while deep learning based features are more efficient in aggregating deeper and richer features for globally salient regions. Therefore, the combination of deep learning based features and CDVS handcrafted features has been further investigated in the CDVA framework <cit.>, as shown in Fig. <ref>. Interestingly, it is validated that the combination strategy achieves promising performance and outperforms either the deep learning based or the CDVS handcrafted features alone.

§.§ Video Analysis Pipeline

The compact description of videos enables two typical tasks in video analysis, namely video matching and retrieval. In particular, video matching aims at determining whether a pair of videos shares an object or scene with similar content, and video retrieval searches for videos containing a segment similar to the one in the query video.

§.§.§ Video Matching

Given the CDVA descriptors of the key frames in a video pair, pairwise matching can be achieved by comparing them in a coarse-to-fine strategy. Specifically, each keyframe in one video is first compared with all of the keyframes in the other video in terms of the global feature similarity. If the similarity is larger than a threshold, implying a possible match between the two frames, the local descriptor comparison is further performed with geometric consistency checking. The keyframe-level similarity is subsequently calculated as the product of the matching scores of the global and local descriptors. Finally, we obtain the video-level similarity by selecting the largest matching score among all keyframe-level similarities.

Another criterion in video matching is the temporal localization, which locates the video segment containing similar items of interest based on the recorded timestamps. In <cit.>, a shot-level localization scheme was adopted into CXM1.0. In particular, a shot is detected as the group of consecutive keyframes whose distance to the first keyframe of the shot is smaller than a certain threshold in terms of the color histogram comparison. If the keyframe-level similarity is larger than a threshold, the shot that contains the key frame is regarded as the matching interval. Multiple matching intervals can also be concatenated together to obtain the final interval for localization.
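The coarse-to-fine matching rule described above reduces to a short routine; in the sketch, global_sim and local_sim are placeholders for the SCFV comparison and the geometric-consistency-checked local matching, and the threshold value is an assumption.

    def video_similarity(query_keys, ref_keys, global_sim, local_sim, thr=0.5):
        # coarse-to-fine pairwise matching: global filter, then local
        # verification; keyframe score = product of global and local scores,
        # video score = maximum over all keyframe pairs
        best = 0.0
        for q in query_keys:
            for r in ref_keys:
                g = global_sim(q, r)
                if g <= thr:
                    continue
                best = max(best, g * local_sim(q, r))
        return best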
§.§.§ Video Retrieval

In contrast to video matching, video retrieval is performed in a one-to-N manner, implying that all videos in the database are visited and the top ones with the highest matching scores are selected. In particular, keyframe-level matching with global descriptors is performed to extract the top K_g candidate keyframes from the database. Subsequently, these key frames are further examined by local descriptor matching, and the keyframe candidate set is further shrunk to K_l according to the rankings in terms of the combination of global and local similarities. These key frames are reorganized into videos, which are finally ranked by the video-level similarity following the principle of the video matching pipeline.

§ EMERGING CDVA STANDARD

§.§ Evaluation Framework

The MPEG-CDVA dataset includes 9974 query and 5127 reference videos, and each video lasts from 1 s to over 1 min. In Fig. <ref>, we provide some typical examples from the MPEG-CDVA dataset. In total, 796 items of interest are depicted in those videos, which can be further divided into three categories: large objects (e.g. buildings, landmarks), small objects (e.g. paintings, books, CD covers, products) and scenes (e.g. interior scenes, natural scenes, multi-camera shots). Approximately 80% of the query and reference videos were embedded in irrelevant content (different from that used in the queries). The start and end embedding boundaries were used for temporal localization in the video matching task. The remaining 20% of the query videos were subjected to 7 modifications (text/logo overlay, frame rate change, interlaced/progressive conversion, transcoding, color-to-monochrome and contrast change, added grain, display content capture) to evaluate the effectiveness and robustness of the compact video descriptor representation technique. As such, 4,693 matching pairs and 46,930 non-matching pairs are created. In addition, for large-scale experiments, 8,476 videos with a total duration of more than 1,000 hours are involved as distractors, belonging to UGC, broadcast archival and education content.

The pairwise matching performance is evaluated in terms of the matching and localization accuracy. In particular, the matching accuracy is assessed by the Receiver Operating Characteristic (ROC) curve. The True Positive Rate (TPR) at a False Positive Rate (FPR) of 1% is also reported. When a matching pair is observed, the localization accuracy is further evaluated by the Jaccard index based on the temporal location of the item of interest within the video pair. In particular, it is calculated as

    |[T_start, T_end] ⋂ [T'_start, T'_end]| / |[T_start, T_end] ⋃ [T'_start, T'_end]|,

where [T_start, T_end] denotes the ground truth and [T'_start, T'_end] denotes the predicted start and end timestamps. The retrieval performance is evaluated by the mean Average Precision (mAP); moreover, the precision at a given cut-off rank R for query videos (Precision@R) is calculated, with R set to 100. As the ultimate goal is to achieve compact feature representation, the feature bitrate consumption is also measured.
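Both evaluation metrics are easy to state in code: the interval Jaccard index below follows the formula above, and Precision@R its standard definition (the helper names are ours).

    def jaccard(t_start, t_end, p_start, p_end):
        # temporal Jaccard index: |intersection| / |union| of the two intervals
        inter = max(0.0, min(t_end, p_end) - max(t_start, p_start))
        union = (t_end - t_start) + (p_end - p_start) - inter
        return inter / union if union > 0 else 0.0

    def precision_at(ranked_ids, relevant_ids, r=100):
        # Precision@R: fraction of the top-R retrieved items that are relevant
        top, rel = ranked_ids[:r], set(relevant_ids)
        return sum(1 for x in top if x in rel) / len(top)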
§.§ Timeline and Core Experiments

The Call for Proposals of MPEG-CDVA was issued at the 111th MPEG meeting in Geneva, in Feb. 2015, and responses were evaluated in Feb. 2016. Table 1 lists the timeline for the development of CDVA. In the current stage, there are six core experiments (CE) in the exploration of the MPEG-CDVA standard. The first CE investigates the temporal sampling strategy to better understand the impact of key frames and their densities in video analysis. The second CE targets improving the matching and retrieval performance based on the segment-level representation. CE3 exploits the temporal redundancies of feature descriptors to further reduce the bitrate of the feature representation. CE4 investigates the combination strategy of traditional handcrafted and deep learning based feature descriptors, and CE5 develops compact representation methods for the deep learning based feature descriptors. Finally, CE6 studies approaches for deep learning model compression to reduce the runtime and memory footprint of deep learning based feature extraction.

§.§ Performance Results

In this subsection, we report the performance results of the key contributions in the development of CDVA. Firstly, the performance comparisons along the evolution of the CXM models are presented. CXM 0.1 (released at MPEG-114) is the first version of the CDVA experimentation model and provides the baseline performance; subsequently, CXM0.2 (MPEG-115) and CXM1.0 (MPEG-116) have been released. To flexibly adapt to different bandwidth requirements as well as application scenarios, three operating points in terms of the feature descriptor bitrate, 16KBps, 64KBps and 256KBps, are defined. Besides, in the matching operation, an additional cross-mode 16_256KBps matching has also been considered. In Table <ref>, the performance comparisons from CXM0.1 to CXM1.0 are listed. The performance improvements from CXM0.1 to CXM0.2 are significant: more than 5% in mAP and 5% in terms of TPR@FPR are observed, which is mainly attributed to the key frame sampling based on the color histogram. Comparing CXM0.2 with CXM1.0, the retrieval performance is identical, since the changes lie in the video matching operation, which improves the localization performance by using video shots to identify the matching interval. Such a matching scheme leads to more than a 10% temporal localization performance improvement.

In Table <ref>, the performance comparisons between CXM and the deep learning based methods are provided. Compared with CXM1.0, simply using the deep learning based feature descriptors of dimension 512, without re-ranking techniques, brings about 5% improvements in both mAP and TPR. It can be seen that the performance of the NIP descriptor extracted from a compressed model suffers only a negligible loss, while the model size has been reduced from 529.2M to 8.7M using pruning and scalar quantization. To meet the demand of large-scale fast retrieval, the performance of the binarized NIP (occupying only 512 bits) and its combination with handcrafted feature descriptors are also explored. Compared with CXM1.0, the additional 512-bit deep learning based descriptor in the combination mode significantly boosts the performance from 72.1% to 79.9%. It is worth noting that the results of the deep learning methods are in the cross-checking stage, and the Ad-hoc group plans to integrate the NIP descriptor into CXM at the 119th MPEG meeting in Jul. 2017.

In Table <ref>, we list the runtime complexity of CXM1.0 and the deep learning based methods. In the experimental setup, for each kind of feature descriptor, the database is scanned once to generate the retrieval results. CXM1.0 adopts the SCFV descriptor to obtain the initial top-500 results, and then local descriptor re-ranking is applied. The fastest method is the binarized NIP, which takes 2.89 seconds to serve a video retrieval request over 13603 videos (about 1.2 million keyframes); the NIP descriptor takes 9.15 seconds to complete this task. For the handcrafted descriptors, CXM1.0 takes 38.63 seconds, including both the global ranking with SCFV and the re-ranking with local descriptors. It is worth mentioning that CDVA mainly focuses here on the performance improvement in terms of matching and retrieval accuracy.
Regarding the retrieval efficiency, some techniques that have not been standardized in CDVS, such as Multi-Block Index Table (MBIT) <cit.> indexing, which can significantly improve the retrieval speed, have not been integrated for investigation.

§ CONCLUSIONS AND OUTLOOK

The current development of CDVA treats CDVS as the groundwork, as they serve the same purpose of using compact feature descriptors for visual search and analysis. The main difference lies in that CDVS mainly focuses on still images, while CDVA extends to video sequences. Moreover, the backward compatibility of CDVA supports the feature decoding of key frames with the existing CDVS infrastructure, such that every standard-compatible CDVS decoder can reproduce the features of independently coded frames in the CDVA bitstream. This can greatly facilitate cross-modality search applications, such as using images as queries to search videos, or using videos as queries to search corresponding images.

The remarkable technological progress in video feature representation has provided a further boost to the standardization of compact video descriptors. The key frame representation and the inter feature prediction provide two granularity levels of video feature representation. The deep learning feature descriptors have also been intensively investigated, including feature extraction, model compression, compact feature representation, and the combination of deep learning based features with traditional handcrafted features. The optimization of the video matching and retrieval pipelines has also been proven to bring superior performance in video analysis.

Nevertheless, the standardization of CDVA is also facing many challenges, and more improvements are expected. In addition to video matching and retrieval, more video analysis tasks (such as action recognition, abnormality detection and video tracking) need to be investigated. This requires more advanced video representation techniques to extract the motion information, as well as sophisticated deep learning models with high generalization ability for feature extraction. Moreover, although the deep learning methods have achieved significant performance improvements, more work on deep feature compression and hashing is necessary to achieve compact representation. Finally, the fusion strategy of deep learning features and traditional handcrafted features poses new challenges to the standardization of CDVA and opens up new space for future exploration.
http://arxiv.org/abs/1704.08141v1
{ "authors": [ "Ling-Yu Duan", "Vijay Chandrasekhar", "Shiqi Wang", "Yihang Lou", "Jie Lin", "Yan Bai", "Tiejun Huang", "Alex Chichung Kot", "Wen Gao" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170426143324", "title": "Compact Descriptors for Video Analysis: the Emerging MPEG Standard" }
[email protected]
Institute of Experimental Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
Institute of Experimental Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland; Center for Polymer Studies, Boston University, Boston, MA 02215 USA

Predicting language diversity with complex networks
Tomasz Raducha and Tomasz Gubiec
December 30, 2023
===================================================

Evolution and propagation of the world's languages is a complex phenomenon, driven, to a large extent, by social interactions <cit.>. A multilingual society can be seen as a system of interacting agents <cit.>, where the interaction leads to a modification of the language spoken by the individuals <cit.>. Two people can reach the state of full linguistic compatibility through positive interactions, like the transfer of loanwords. But, on the other hand, if they speak entirely different languages, they will separate from each other. These simple observations make network science <cit.> the most suitable framework to describe and analyze the dynamics of language change <cit.>. Although many mechanisms have been explained <cit.>, we lack a qualitative description of the scaling behavior for different sizes of a population. Here we address the issue of language diversity in societies of different sizes, and we show that local interactions are crucial to capture characteristics of the empirical data. We propose a model of social interactions, extending the idea from <cit.>, that explains the growth of language diversity with the population size of a country or society. We argue that high clustering and network disintegration are the most important characteristics of models properly describing empirical data. Furthermore, we resolve the contradiction between previous models <cit.> and the Solomon Islands case. Our results demonstrate the importance of the topology of the network, and of the rewiring mechanism, in the process of language change.

Consider a system of N individuals, each using a language described by a set of F traits. Individuals are connected by links, indicating social interactions enabling language transfer. Two agents can speak very similar dialects or completely different languages, which is reflected in the q different values of every trait. Traits should be interpreted as groups of words, or grammar rules, rather than single words. During an interaction people tend to adapt their languages to each other, if they have anything in common (see FIG. <ref>). The more similar the languages they speak, the more probable is a positive interaction and learning from each other, leading to a further increase of the similarity. On the other hand, people using languages with all traits different have no possibility to communicate and will cut the connection and look for a new neighbor. After disconnecting from a neighbor, the active node will choose a new one from the set of vertices distant by two edges (see FIG. <ref>), i.e. neighbors of neighbors. This rewiring mechanism assumes only local interactions, which is intuitive for the everyday use of language. It was shown that social networks are characterized by a high value of the clustering coefficient <cit.>. This rewiring mechanism increases the value of the clustering coefficient by definition. The parameters F and q reflect the diversity of language. One trait can stand for the vocabulary in a given field.
Then, different values of q indicate different words used to describe the same objects.

It was shown that the model defined as above displays three significantly different phases <cit.>. In the first phase, for small values of q, we observe the death of most of the dialects. In this phase, when the system reaches the final configuration, almost all agents speak the same language and the graph is connected. In the second phase the network disintegrates into many small components, each with a different language. Society is polarized and different clusters use different languages. In the third phase a partial recombination occurs, but the number of languages increases further, resulting in the existence of links between individuals speaking different languages. For that reason, the first two phases are more suitable for the explanation of language change. Additionally, it is a reasonable assumption that languages can vary to a finite extent.

Despite the fact that this simple usage-based model of language manages to capture the essence of social interactions, its interpretation in terms of languages was abandoned after the very first publication <cit.>, due to a contradiction with the empirical data. An anthropological study of the Solomon Islands in the late 1970s <cit.> showed that the number of languages functioning on an island grows with the size of the island. As noted in the original paper, the results of the first model, defined on a static square lattice, were exactly opposite – the number of domains was decreasing with increasing size of the lattice. Moreover, the first adaptive model <cit.>, taking into account the coevolution of the nodes' states and the topology of the network, did not solve this issue – the number of domains was approximately constant for different sizes of the network.

In FIG. <ref> we analyze the behavior of two variants of the model – local rewiring with a uniform probability and local rewiring with preferential attachment. It is clear that the number of domains, indicating the number of languages, increases with the system size. This result is qualitatively consistent with the empirical data for the Solomon Islands given in <cit.>. It is worth noting that this dependency is also valid, yet weaker, for the different models described in <cit.>, but only for a certain range of values of the parameter q.

Based on our findings, we should expect a larger number of languages for countries with bigger populations. To validate this prediction we analyze two databases. The first one, from 1996, contains information about 6866 languages and their 9130 dialects from 209 different countries <cit.>; the second one, from 2013 (regularly updated), contains information about 2679 languages in 188 countries <cit.>. In FIG. <ref> we plot the number of languages against the size of the population for countries from six continents. The trend seems to be increasing in every example, but fluctuations obscure the picture. Obviously, language diversity on the scale of continents is driven not only by social interactions. There are many factors influencing the linguistic structure of a society, for example language policy and legislation, colonization, border changes, decimation of the population during wars or epidemics, compulsory resettlement, etc. Nevertheless, we expect our findings to hold on average. To eliminate fluctuations we aggregate data over consecutive intervals. Results are shown in FIG.
<ref>, excluding, for the sake of clarity, four countries that have either a population size (China, India) or a number of languages (Indonesia, Papua New Guinea) greater by almost an order of magnitude than the others. We obtain a growing number of languages with the population size for both databases. Moreover, this dependency is even more pronounced in the data set of dialects. Again, the results of the simulations are qualitatively consistent with the empirical data.

In our study we showed that even a complex description of the nodes' states in social networks is not sufficient to explain real-world phenomena. Furthermore, even sophisticated dynamics of the states can be insufficient when the structure of the network diverges from empirical examples. The topology and its transformations are crucial for a proper description of the language change due to social interactions. In this field, models with only local rewiring, leading to high clustering and frequent disintegration, most accurately reproduce empirical data. Here we have taken steps towards understanding the process of language change and its foundations, although its full structure remains undiscovered. Nevertheless, a comprehensive model of language should take into account the proper dynamics of the network topology.

It was shown for many real-world social networks that the degree distribution obeys a power law <cit.> and the average path length grows slowly with the system size, i.e. they exhibit the small-world property <cit.>. However, these are usually specific networks, for example networks of actors or scientific collaboration, phone calls or e-mails, or sexual contacts. Our model does not display the small-world property (see FIG. <ref>), and yet it is in agreement with empirical data. This suggests that the network of linguistic interactions, in contrast to other social networks, consists mostly of local interactions and high clustering.

§ METHODS

Algorithm. The model we use is described in detail in <cit.>. We start every simulation with a random graph with N vertices, each representing one agent. We set the number of links M to obtain a certain value of the average degree ⟨k⟩. Every node i is described by a vector of traits σ_i = (σ_i,1, σ_i,2, ..., σ_i,F). Every trait can initially adopt one of q discrete values σ_i,f ∈ {1, 2, ..., q}, f = 1, 2, ..., F, which gives q^F possible different states. At the beginning, we draw a set of F traits for each node with equal probability for every value from 1 to q. Then, every time step consists of the following rules:

* Draw an active node i and one of its neighbors j.
* Compare the vectors σ of the chosen vertices and determine the number m of identical traits (the overlap), such that σ_i,f = σ_j,f.
* If all traits are equal, i.e. m = F, nothing happens.
* If none of the traits are equal, m = 0, disconnect the edge (i, j) from node j, draw a new node l, and attach a link to it, creating an edge (i, l).
* In the other cases, with probability equal to m/F a positive interaction occurs, in which we randomly select one of the not-shared traits f' (from among F − m) and the active node i adopts its value from node j, i.e. σ_i,f' → σ'_i,f' = σ_j,f'.
* Go to the next time step.

The method of selecting new neighbors is crucial. We allow creating a new connection only within the set of nodes distant by two edges (neighbors of neighbors). Multiple connections and auto-connections are prohibited.
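A minimal Python sketch of one update step of the algorithm above, using networkx (our own implementation, covering only the uniform variant of the local rewiring; the handling of corner cases, such as an empty set of second neighbors, and the demo parameter values are assumptions). The domain notion used by count_domains is defined below in this section.

    import random
    import networkx as nx

    def init_graph(N, M, F, q):
        # random graph with N agents, M links, and random trait vectors
        G = nx.gnm_random_graph(N, M)
        for v in G:
            G.nodes[v]['sigma'] = [random.randrange(q) for _ in range(F)]
        return G

    def step(G, F):
        i = random.choice(list(G))                 # rule 1: active node
        if G.degree(i) == 0:
            return
        j = random.choice(list(G[i]))
        diff = [f for f in range(F)
                if G.nodes[i]['sigma'][f] != G.nodes[j]['sigma'][f]]
        m = F - len(diff)                          # rule 2: overlap
        if m == F:
            return                                 # rule 3: nothing happens
        if m == 0:                                 # rule 4: local rewiring
            # neighbors of neighbors, computed before the edge is removed
            candidates = {l for nb in G[i] for l in G[nb]} - {i} - set(G[i])
            G.remove_edge(i, j)
            if candidates:
                G.add_edge(i, random.choice(sorted(candidates)))
            return
        if random.random() < m / F:                # rule 5: positive interaction
            f = random.choice(diff)
            G.nodes[i]['sigma'][f] = G.nodes[j]['sigma'][f]

    def count_domains(G):
        # domains: connected groups of agents with identical trait vectors
        H = nx.Graph()
        H.add_nodes_from(G)
        H.add_edges_from((u, v) for u, v in G.edges
                         if G.nodes[u]['sigma'] == G.nodes[v]['sigma'])
        return nx.number_connected_components(H)

    F, q = 3, 30
    G = init_graph(N=500, M=2000, F=F, q=q)
    for _ in range(200000):
        step(G, F)
    print(count_domains(G))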
We analyze two possibilities: a uniform probability for every node in the set, and preferential attachment with probability P(i) ∼ (k_i + 1)^2. The simulation is run until a frozen configuration is obtained or thermalization is reached. In order to describe the behavior of the system we use several quantities and coefficients, which are defined as follows.

Component s: two vertices i and j belong to the same component s if they are connected, or if a vertex k exists such that vertex i belongs to the same component as vertex k and vertex k belongs to the same component as vertex j. By the largest component of the network we mean the biggest connected subgraph of the network.

Domain d: two vertices i and j belong to the same domain d if they are connected and share all traits, σ_i = σ_j, or if a vertex k exists such that vertex i belongs to the same domain as vertex k and vertex k belongs to the same domain as vertex j. By definition, a given domain cannot exceed the size of the component it shares nodes with. On the other hand, the number of components cannot be greater than the number of domains.

Local clustering coefficient c_i: for undirected graphs it can be defined as the number of connections between the neighbors of node i divided by k_i (k_i − 1)/2, i.e. the number of links that could possibly exist between them.

Global clustering coefficient C: it is defined as three times the number of triangles in the network divided by the number of connected triplets of vertices (one triangle consists of three connected triplets).

Average path length ⟨l⟩: the shortest distance between two vertices, averaged over all pairs of vertices in the network. If there is no path between two vertices (the network has many components), this pair is not taken into account.

Data Availability. The data about languages in different countries that support the findings of this study are available in two online databases: Ethnologue <www.ethnologue.com/13/names> <cit.> and WALS <www.wals.info> <cit.>. The data about population sizes that support the findings of this study are available from the United Nations World Population Prospects <https://esa.un.org/unpd/wpp> <cit.>.

The authors would like to thank Mateusz Wilinski for discussions and corrections.

[beckner2009language] C. Beckner, R. Blythe, J. Bybee, M. H. Christiansen, W. Croft, N. C. Ellis, J. Holland, J. Ke, D. Larsen-Freeman, and T. Schoenemann, “Language is a complex adaptive system: Position paper,” Language Learning, vol. 59, no. s1, pp. 1–26, 2009.
[mufwene2002competition] S. S. Mufwene, “Competition and selection in language evolution,” Selection, vol. 3, no. 1, pp. 45–56, 2002.
[tomasello2010origins] M. Tomasello, Origins of Human Communication. MIT Press, 2010.
[eckert2000language] P. Eckert, Language Variation as Social Practice: The Linguistic Construction of Identity in Belten High. Wiley-Blackwell, 2000.
[baxter2009modeling] G. J. Baxter, R. A. Blythe, W. Croft, and A. J. McKane, “Modeling language change: an evaluation of Trudgill's theory of the emergence of New Zealand English,” Language Variation and Change, vol. 21, no. 02, pp. 257–296, 2009.
[carro2016coupled] A. Carro, R. Toral, and M. San Miguel, “Coupled dynamics of node and link states in complex networks: a model for language competition,” New Journal of Physics, vol. 18, no. 11, p. 113056, 2016.
[lieberman2007quantifying] E. Lieberman, J.-B. Michel, J. Jackson, T. Tang, and M. A. Nowak, “Quantifying the evolutionary dynamics of language,” Nature, vol. 449, no. 7163, pp. 713–716, 2007.
[bybee2006usage] J. L. Bybee, “From usage to grammar: The mind's response to repetition,” Language, vol. 82, no. 4, pp. 711–733, 2006.
[albert2002statistical] R. Albert and A.-L. Barabási, “Statistical mechanics of complex networks,” Reviews of Modern Physics, vol. 74, no. 1, p. 47, 2002.
[castello2008modelling] X. Castelló, V. Eguíluz, M. Miguel, L. Loureiro-Porto, R. Toivonen, J. Saramäki, and K. Kaski, “Modelling language competition: bilingualism and complex social networks,” in The Evolution of Language: Proceedings of the 7th International Conference, pp. 59–66, Singapore: World Scientific Publishing Co., 2008.
[schulze2008birth] C. Schulze, D. Stauffer, and S. Wichmann, “Birth, survival and death of languages by Monte Carlo simulation,” Communications in Computational Physics, vol. 3, no. 2, pp. 271–294, 2008.
[hruschka2009building] D. J. Hruschka, M. H. Christiansen, R. A. Blythe, W. Croft, P. Heggarty, S. S. Mufwene, J. B. Pierrehumbert, and S. Poplack, “Building social cognitive models of language change,” Trends in Cognitive Sciences, vol. 13, no. 11, pp. 464–469, 2009.
[abrams2003linguistics] D. M. Abrams and S. H. Strogatz, “Linguistics: Modelling the dynamics of language death,” Nature, vol. 424, no. 6951, pp. 900–900, 2003.
[loreto2007social] V. Loreto and L. Steels, “Social dynamics: Emergence of language,” Nature Physics, vol. 3, no. 11, pp. 758–760, 2007.
[patriarca2012modeling] M. Patriarca, X. Castelló, J. Uriarte, V. M. Eguíluz, and M. San Miguel, “Modeling two-language competition dynamics,” Advances in Complex Systems, vol. 15, no. 03n04, p. 1250048, 2012.
[sutherland2003parallel] W. J. Sutherland, “Parallel extinction risk and global distribution of languages and species,” Nature, vol. 423, no. 6937, pp. 276–279, 2003.
[raducha2017coevolving] T. Raducha and T. Gubiec, “Coevolving complex networks in the model of social interactions,” Physica A: Statistical Mechanics and its Applications, vol. 471, pp. 427–435, 2017.
[axelrod1997dissemination] R. Axelrod, “The dissemination of culture: A model with local convergence and global polarization,” Journal of Conflict Resolution, vol. 41, no. 2, pp. 203–226, 1997.
[sanmiguel2007] F. Vazquez, J. C. González-Avella, V. M. Eguíluz, and M. San Miguel, “Time-scale competition leading to fragmentation and recombination transitions in the coevolution of network and states,” Phys. Rev. E, vol. 76, p. 046120, Oct 2007.
[newman2003social] M. E. Newman and J. Park, “Why social networks are different from other types of networks,” Physical Review E, vol. 68, no. 3, p. 036122, 2003.
[dorogovtsev2002evolution] S. N. Dorogovtsev and J. F. Mendes, “Evolution of networks,” Advances in Physics, vol. 51, no. 4, pp. 1079–1187, 2002.
[foster2011clustering] D. V. Foster, J. G. Foster, P. Grassberger, and M. Paczuski, “Clustering drives assortativity and community structure in ensembles of networks,” Physical Review E, vol. 84, no. 6, p. 066117, 2011.
[palla2007quantifying] G. Palla, A.-L. Barabási, and T. Vicsek, “Quantifying social group evolution,” Nature, vol. 446, no. 7136, pp. 664–667, 2007.
[terrell1977human] J. Terrell, “Human biogeography in the Solomon Islands,” Fieldiana. Anthropology, vol. 68, no. 1, pp. 1–47, 1977.
[grimes1996ethnologue] B. F. Grimes et al., Ethnologue Language Name Index. Summer Institute of Linguistics, 1996.
[wals2013] M. S. Dryer and M. Haspelmath, eds., WALS Online. Leipzig: Max Planck Institute for Evolutionary Anthropology, 2013.
[united2015world] U.N. Department of Economic and Social Affairs, World Population Prospects: The 2015 Revision. United Nations, 2015.
Recently, large samples of visually classified early-type galaxies (ETGs) containing dust have been identified using space-based infrared observations with the Herschel Space Telescope. The presence of large quantities of dust in massive ETGs is peculiar, as the X-ray halos of these galaxies are expected to destroy dust in ∼10^7 yr (or less). This has sparked a debate regarding the origin of the dust: is it internally produced by asymptotic giant branch (AGB) stars, or is it accreted externally through mergers? We examine the 2D stellar and ionised gas kinematics of dusty ETGs using IFS observations from the SAMI galaxy survey, and integrated star-formation rates, stellar masses, and dust masses from the GAMA survey. Only 8% (4/49) of visually-classified ETGs are kinematically consistent with being dispersion-supported systems. These “dispersion-dominated galaxies” exhibit discrepancies between stellar and ionised gas kinematics, either offsets in the kinematic position angle or large differences in the rotational velocity, and are outliers in star-formation rate at a fixed dust mass compared to normal star-forming galaxies. These properties are suggestive of recent merger activity. The remaining ∼90% of dusty ETGs have low velocity dispersions and/or large circular velocities, typical of “rotation-dominated galaxies”. These results, along with the general evidence of published works on X-ray emission in ETGs, suggest that rotation-dominated galaxies are unlikely to host hot, X-ray emitting gas, consistent with their low M_* when compared to dispersion-dominated galaxies. This means their dust will be long lived, and thus these galaxies do not require external scenarios for the origin of their dust content.

galaxies: kinematics and dynamics - galaxies: interactions - ISM: dust, extinction

§ INTRODUCTION

The recent launch of the Herschel Space Telescope has made it possible for astronomers to study cold dust in a wide variety of galaxies with unprecedented sensitivity. As a consequence, a number of teams have identified large samples of visually-classified early-type galaxies (ETGs) that clearly harbour massive reservoirs of cold dust <cit.>. Although dust is closely related to the formation of stars in star-forming, late-type galaxies (LTGs), this may not be the case in ETGs, where the level of on-going star formation is typically much lower (if not non-existent). Furthermore, massive ETGs are known to contain large amounts of hot, X-ray emitting gas that is inhospitable to fragile dust grains. This hot gas rapidly destroys dust through a process known as thermal sputtering, resulting in a dust lifetime of ∼10^5-10^7 yr <cit.>. The now undisputed presence of large quantities of dust in some ETGs has sparked a debate as to its origins. Many works have suggested that dust found in ETGs must have been recently accreted via mergers with gas-rich satellites <cit.>. In such a merger, the accreted dust will be embedded in a cold medium (either atomic or molecular gas) that can provide shielding from X-ray photons, resulting in a longer lifetime than for dust produced internally <cit.>. Alternatively, the dust may result from internal processes such as cooling of hot halo gas <cit.> or production in asymptotic giant-branch (AGB) stars <cit.>.
Currently, there is no clear consensus regarding the internal versus external origins of dust in ETGs, and it is possible that both play some role, with the balance between the two sources depending on the properties of individual galaxies <cit.>. Much of the recent work on dusty ETGs is based on samples selected by visual morphology. In such cases it is not clear how certain we can be that such galaxies host a hot, X-ray emitting halo. The X-ray properties of ETGs vary considerably. This X-ray emission is less dominant in lower mass ETGs <cit.>, in (apparently) younger ETGs <cit.>, and in ETGs with higher star formation <cit.>. Environment is also thought to play a role <cit.>. Clear evidence of diffuse X-ray emission is found in massive galaxy clusters as well as the most massive individual ETGs <cit.>, with X-ray luminosities (L_X) significantly larger than 10^40 ergs s^-1. For LTGs, <cit.> finds L_X < 10^40 ergs s^-1, corresponding roughly to the high L_X cut-off for X-ray binary stars <cit.>. Thus, for galaxies observed with L_X ≲ 10^40 ergs s^-1, particularly those with recent star formation, X-ray emission can be attributed to the cumulative emission from supernova remnants and X-ray binaries. Recent spectroscopic work has provided a connection between galaxy kinematics and X-ray properties. In particular, galaxies with stellar velocity dispersions (σ) larger than ∼150 km s^-1 are often found to have X-ray luminosities in excess of 10^40 ergs s^-1 <cit.>, and these galaxies appear to extend the relationship between L_X and stellar mass found in massive galaxy clusters <cit.> to lower mass systems. Below σ = 150 km s^-1, all ETGs studied by <cit.> have L_X < 10^40 ergs s^-1, in the range attributed to X-ray binaries by <cit.>. Furthermore, recent simulations by <cit.> have shown that galaxy rotation can also act to reduce L_X. This occurs because conservation of angular momentum in rotating galaxy models encourages the growth of cold gas disks, preventing large amounts of hot gas from collecting in the central region. These results suggest that kinematic observations of visually-selected, dusty ETGs may distinguish galaxies embedded in a massive halo of hot gas from those more hospitable to long lived dust reservoirs. It is also worth noting that visual morphology and kinematic classifications are not always well correlated <cit.>. This connection between X-ray emitting gas content and kinematics shows that spatially-resolved observations using integral field spectroscopy (IFS), which give a detailed description of a galaxy's kinematics, can help in understanding the origins of dust in ETGs. IFS observations of large samples of galaxies identified as dusty ETGs provide a step forward in two respects. First, because IFS observations allow global measurements of stellar σ covering most of the galaxy, they can clearly identify galaxies with large stellar σ that most likely host X-ray emitting gas. Second, IFS observations provide a strong indicator of recent merger activity through the direct comparison of ionised gas and stellar kinematics. The work of <cit.> using galaxies from the ATLAS^3D survey is an example in this vein, showing a connection between the detection of molecular gas in ETGs and misalignments between ionised gas and stellar kinematics. A scenario in which dust is produced internally is less likely to produce kinematic misalignments, particularly where dust originates directly from AGB stars.
Galaxy mergers in simulations often produce misalignments <cit.>, thus mergers represent a natural source for externally produced dust in ETGs. In this work we examine the origins of dust in ETGs using data from the SAMI Galaxy Survey <cit.>. The majority of SAMI galaxies are selected from the Galaxy And Mass Assembly survey <cit.>, therefore we focus on the samples of dusty ETGs selected from GAMA by <cit.> and <cit.>. We begin by considering those 540 GAMA galaxies observed by the SAMI survey that are found to have high quality kinematic measurements (see Section <ref>) and clearly defined visual morphologies. We choose to explore the kinematics of A13/A15 galaxies rather than other samples of dusty ETGs <cit.> as we find the largest overlap with this sample, which amounts to 49 Herschel detected and 99 non-detected galaxies. Together, <cit.> and <cit.> have a total of 4 galaxies currently observed by SAMI. This paper is structured as follows: in Sections <ref> and <ref> we present the samples and data-sets considered. Section <ref> presents our method of extracting integrated kinematic quantities from SAMI IFS observations, as well as our kinematic criteria for isolating those visually-classified dusty ETGs that are most likely to host hot, X-ray emitting gas. In Section <ref> we apply this selection to those galaxies from A13/A15 observed by SAMI. In Section <ref> we discuss the evolutionary implications of our results, and in Section <ref> we summarise our conclusions. Throughout this work we adopt a ΛCDM cosmology with Ω_m = 0.3, Ω_Λ = 0.7, and H_0 = 70 km s^-1 Mpc^-1.

§ SAMPLES

§.§ Dusty Early Type Galaxies: <cit.> and <cit.>

The parent sample of Herschel ATLAS <cit.> detected ETGs was first identified and analysed by A13. Briefly, H-ATLAS is a 550 square degree IR survey using the PACS and SPIRE instruments (targeting 100-500 μm) on the Herschel space observatory, with an expected detection of ∼250,000 galaxies. A13 began by investigating the H-ATLAS detections for a sample of galaxies identified as ETGs in the GAMA dataset through visual classification <cit.>, with active galaxies excluded based on the prescription of <cit.>. The sample of A13 is restricted to the redshift range 0.013 < z < 0.06 and absolute r-band magnitudes brighter than M_r = -17.4, providing a volume-limited sample in the r-band. They find an H-ATLAS detection rate of 29% (220/771), i.e. 29% of the visually classified ETGs in GAMA have IR detections greater than 5σ. <cit.> show that in the H-ATLAS science demonstration phase their survey data has a catalogue number density completeness of > 80%, with the remaining 20% missing due to noise and/or blending of sources. The completeness for A13/A15 galaxies should be similar to this. Among H-ATLAS detected ETGs there is a trend for the ratio of dust mass to stellar mass to increase for bluer NUV - r colour, implying that recent star formation is likely associated with an increased presence of dust.

§.§ SAMI Overlap With A13/A15

In this paper we wish to explore the resolved kinematics of the sample of dusty ETGs presented in A13 and A15. While A13 includes dust masses for 220 dusty ETGs, only 49 of these have high quality observations in the SAMI galaxy survey. Similarly, the study of A13/A15 includes 551 H-ATLAS non-detected galaxies, of which 99 have high quality SAMI survey observations.
It is important to know whether the A13/A15 galaxies for which we can investigate the resolved kinematics are representative of the original parent distribution. We expect the properties of those galaxies from A13/A15 that overlap with our SAMI observations to be fairly representative of the full sample, as the SAMI survey is selected to be representative of the GAMA survey, the parent sample of A13/A15. In Figure <ref> we show histograms of r-band magnitude, log_10(r_e), and log_10(M_*) comparing those galaxies from A13/A15 that have been observed with SAMI (in blue) with those that have not (in red). We then perform a two sample Kolmogorov-Smirnov (KS) test for each of the three galaxy properties for these two subsamples. The resulting p-values are given in the top right corner of each panel. The p-value indicates the percentage of the time we should expect to find the observed level of difference between the two samples, given their sizes, under the null hypothesis that they are randomly drawn from the same parent sample (typically the null hypothesis is not rejected where the p-value is larger than 0.01). For r-band magnitude, r_e, and M_* we find p-values of 0.605, 0.312, and 0.642, meaning we cannot reject the null hypothesis that these samples come from the same parent distribution. KS-test results for H-ATLAS non-detected A13/A15 galaxies show less agreement in properties between the full sample and those observed with SAMI, with p-values for r-band magnitude, r_e, and M_* of 0.374, 0.075, and 0.132, respectively. Although we find slightly less agreement for non-detections, the p-values suggest that, again, we cannot reject the null hypothesis that those observed by SAMI are representative of the parent sample.
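For reference, a two-sample KS comparison of this kind can be reproduced with standard tools; in the minimal sketch below the input arrays are hypothetical stand-ins for the actual GAMA-derived properties.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical stand-ins for log10(M_*) of A13/A15 galaxies with and
# without SAMI observations (the real values come from the GAMA catalogue).
logmstar_sami = rng.normal(10.4, 0.4, size=49)
logmstar_rest = rng.normal(10.4, 0.4, size=171)

stat, pvalue = ks_2samp(logmstar_sami, logmstar_rest)
# A p-value above ~0.01 means we cannot reject the null hypothesis that
# both subsamples are drawn from the same parent distribution.
print(stat, pvalue)
```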
§ DATA

§.§ SAMI Survey Data

Data analysed in this work comes from the SAMI Galaxy Survey <cit.>, which aims to observe ∼3600 galaxies using the SAMI integral field spectrograph <cit.> at the 3.9m Anglo-Australian Telescope in the redshift range 0.004 < z < 0.095. Observations using the SAMI IFS represent a step forward from more traditional IFS instruments due to the use of multiple fibre bundles <cit.>, allowing for simultaneous observations of multiple galaxies with a roughly circular, ∼14.7'' diameter coverage. Fibres are fed into the AAOmega spectrograph <cit.>, which observes two spectral ranges using a red and blue arm setup. This provides coverage of 3700-5700 Å at R=1812 resolution and of 6300-7400 Å at R=4263 resolution <cit.>. The SAMI survey is designed to be representative of the highly complete <cit.> GAMA survey, rather than complete itself, due to observational constraints. As we have mentioned, H-ATLAS detected A13/A15 galaxies should have a completeness of >80%, similar to the overall H-ATLAS survey. We have shown in Figure <ref> that H-ATLAS detected A13/A15 galaxies with reliable SAMI kinematics are representative of the overall A13/A15 sample, thus we do not expect completeness issues in the SAMI survey to affect our results. At the time this paper was written, 1094 galaxies had been observed by the SAMI galaxy survey. Of these, 753 have had stellar kinematics measurements performed as described in Section <ref>. We then perform two quality cuts on this sample of 753 galaxies. First, we utilise only those galaxy observations that include enough high signal-to-noise spaxels such that we can measure the rotation curve beyond its turnover radius (see Section <ref> for more information). Next, we remove galaxies that exhibit highly uncertain visual classifications of their morphologies <cit.>. Our cut on stellar kinematics quality removes 199 galaxies while the morphological cut removes a further 14 galaxies, resulting in a final sample of 540 SAMI survey galaxies. We use the kinematic measurements of this large sample of galaxies to determine if a given galaxy is supported by rotation or by random motions. We note that our stellar kinematic quality cut removes 3 H-ATLAS detected galaxies from A13/A15 due to large stellar velocity dispersion errors, and our morphological quality cut removes one further A13/A15 dusty ETG. All four of these galaxies, however, have relatively low velocity dispersions, thus excluding them does not affect our conclusions.

§.§.§ Stellar and Ionised Gas Kinematics

Here we briefly describe the stellar kinematics fitting process; for a more detailed description see <cit.> and van de Sande et al. (accepted for publication in ApJ). Stellar kinematics are measured using the penalised pixel-fitting <cit.> routine, which has become the standard method for use with IFS datacubes <cit.>. The pPXF method convolves spectral templates with a line-of-sight velocity distribution (LOSVD) parameterised using Gauss-Hermite polynomials. The first and second moments of this LOSVD provide the stellar velocity and velocity dispersion, respectively. In this work we are concerned only with these first two moments; see <cit.> for a detailed analysis of higher order moments. Ionised gas kinematics for SAMI galaxies are measured from emission line spectra using the LZIFU spectral fitting pipeline <cit.>. Prior to fitting, the best fitting stellar continuum model from our pPXF procedure is subtracted from the spectra in each spaxel to provide more reliable fits. Gas velocities and velocity dispersions are then extracted from Gaussian fits to ionised gas emission lines. The LZIFU pipeline provides single Gaussian fits as well as more complex fits employing two and three Gaussian components. For the analysis presented in this paper we are primarily interested in the circular velocity, V_c, and kinematic position angle (both described in Section <ref>) for the primary component of the ionised gas. Therefore we use the simple, single-component fits. For more detail on SAMI ionised gas kinematics fits see <cit.>, <cit.>, and <cit.>.
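To illustrate the principle behind these LOSVD fits, the self-contained toy below broadens a synthetic absorption-line template with a Gaussian LOSVD and recovers (V, σ) by least squares. It is a deliberate simplification (single template, purely Gaussian LOSVD, fabricated spectrum and sampling), not the pPXF or LZIFU pipeline itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

# Synthetic absorption-line template on a log-lambda pixel grid.
velscale = 30.0                        # km/s per pixel (assumed sampling)
x = np.arange(2000, dtype=float)
rng = np.random.default_rng(1)
centers = rng.uniform(100.0, 1900.0, 40)
template = 1.0 - 0.5 * np.exp(
    -0.5 * ((x[:, None] - centers) / 3.0) ** 2).sum(axis=1)

def model(pix, v, sigma):
    """Shift the template by v, then broaden by a Gaussian LOSVD of width sigma."""
    shifted = np.interp(pix - v / velscale, pix, template)
    return gaussian_filter1d(shifted, sigma / velscale)

truth = (120.0, 90.0)                  # km/s, hypothetical input kinematics
galaxy = model(x, *truth) + rng.normal(0.0, 0.01, x.size)
popt, _ = curve_fit(model, x, galaxy, p0=[0.0, 50.0],
                    bounds=([-300.0, 1.0], [300.0, 300.0]))
print(popt)  # recovered [V, sigma] in km/s
```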
§.§.§ Galaxy Morphology

Galaxy morphologies have been determined through visual classification by an internal SAMI working group based on Sloan Digital Sky Survey <cit.> Data Release 9 (DR9) RGB images for all SAMI galaxies observed at the time this paper was written. This classification, described in <cit.>, is independent of, but similar to, the scheme of <cit.> used by the GAMA survey. This involves a step-by-step procedure in which galaxies are first broadly classified as spheroid-dominated or disk-dominated, then placed into subclasses based on finer details. As SAMI galaxies in this work are selected from the GAMA survey, all galaxies here have been classified by both teams, with three key differences. First, GAMA team members performed classifications on false colour g, i, H band composite images, while SAMI team members utilise SDSS DR9 gri images. Second, the classification working groups from both surveys are composed of independent groups of classifiers, who will each have their own unique classification bias. Third, SAMI classifications include two criteria for identifying LTGs not used by <cit.>, namely the presence of spiral arms and signs of star formation (based on colour rather than purely on morphology). Discussion of differences between the classifications of the two groups can be found in Section <ref>. As noted previously, 14 galaxies determined to be “unclassified” <cit.> are excluded from our analysis, and only one of these comes from the A13/A15 samples. This galaxy, although it has an H-ATLAS detection, has a low velocity dispersion, thus excluding it does not affect our conclusions. For consistency with A13/A15, galaxies with elliptical, S0, and Sa visual classifications are defined as ETGs.

§.§ Stellar Mass, Dust Mass, and SFR

In Section <ref> we explore the dust mass scaling relations of A13/A15 galaxies considered in this work. The GAMA survey provides a number of ancillary data products, which are available for all A13/A15 galaxies observed by SAMI. Briefly, the GAMA survey is a multiwavelength survey of hundreds of thousands of low redshift galaxies. The core of the GAMA survey is a spectroscopic survey at optical wavelengths using the AAOmega instrument at the Anglo-Australian Telescope. This spectroscopic campaign is bolstered by data sharing agreements and coordination with other independent imaging surveys covering the entire electromagnetic spectrum, from X-rays to radio. For more information on the goals, target selection, and public data releases of GAMA survey data see <cit.>, <cit.>, <cit.>, and <cit.>. Using data products from the GAMA survey we explore stellar masses (M_*), dust masses (M_d), and SFRs derived from full spectral energy distribution (SED) fits to ultraviolet (UV) to far-infrared (far-IR) observations using the MAGPHYS code based on the models of <cit.>. MAGPHYS has the distinct advantage over more traditional SED fitting techniques <cit.> that the inclusion of far-IR wavelengths allows for a direct balancing of energy from young, hot stars and warm/cold dust emission, resulting in more robust SFRs as well as estimates of M_d. Preliminary results from MAGPHYS-determined values for GAMA galaxies have been explored by <cit.> and <cit.>, and full details of the analysis will be presented in Driver et al. (2017). Although all SAMI galaxies (including both H-ATLAS detected and non-detected galaxies from A13/A15) have MAGPHYS estimates of M_d, estimates for H-ATLAS non-detected galaxies are highly uncertain due to the lack of far-IR data. For this reason, we estimate upper limits to the dust masses for H-ATLAS non-detected galaxies following the procedure of A13/A15. This procedure is described in Appendix <ref>. Both M_* and SFR can more reliably be extracted in the absence of far-IR detections, thus these values are taken from the GAMA survey for both H-ATLAS detected and non-detected A13/A15 galaxies.

§ GLOBAL KINEMATICS AND KINEMATIC GALAXY SELECTION

Here we first describe our methods of extracting the global stellar kinematic quantities of rotational velocity, V_c, and flux weighted velocity dispersion, σ_mean, from our IFS observations. This is followed by a description of our method of selecting galaxies with stellar kinematics dominated by random motions.

§.§ Circular Velocity: V_c

The first step in determining the stellar V_c for each galaxy is to determine the kinematic position angle (PA) based on the observed stellar velocity map. This is achieved by running the fit_kinematic_pa code <cit.> on the SAMI stellar velocity maps.
This code determines the global kinematic position angle following the method described in Appendix C of <cit.>. Next, we use the measured stellar kinematic PA of each galaxy to extract the projected rotation curves along the kinematic major axis and, from this fit, estimate the value of V_c using a custom Python code. We briefly outline this procedure here; a more detailed description is given in Appendix <ref>. To recover V_c we trace an artificial slit of width 1.5'' across the velocity map at an angle given by the PA. The velocity as a function of position along the slit is fit by a piecewise function made up of two constant velocity sections separated by a sloped linear segment describing the central velocity gradient. This functional form, which follows <cit.>, provides two parameters: the turnover radius, r_t, and V_c. The latter is given by the constant velocity value beyond r_t. For some observations the coverage of the SAMI bundle does not extend beyond r_t, thus measured values of V_c are largely unconstrained. This is true for 199 of the 753 galaxies tested (including 3 H-ATLAS detected A13/A15 galaxies), and these galaxies are excluded from further analysis. Finally, we apply an inclination correction to V_c based on measured ellipticities and bulge-to-total ratios taken from the GAMA survey and from <cit.>, respectively (our inclination correction is described fully in Appendix <ref>).

§.§ Flux Weighted Velocity Dispersion: σ_mean

We adopt the value σ_mean, the flux weighted stellar velocity dispersion, for our global velocity dispersion measure, following previous IFS studies at various redshifts <cit.>. Prior to measuring the stellar σ_mean, we mask spaxels with large uncertainties on σ following the procedure of <cit.>. σ_mean is then defined as:

σ_mean = [∑_i ∑_j F(i,j) × σ(i,j)] / [∑_i ∑_j F(i,j)]

where F(i,j) is the flux observed in the spaxel with i and j as its spatial position, and σ(i,j) is the corresponding stellar velocity dispersion. We find this measurement is robust for all galaxies with SAMI coverage beyond r_t, as described in Section <ref>. A rough correction for the effects of beam smearing is applied following <cit.>, where the artificial σ induced by the seeing is subtracted from σ_mean in quadrature. For a detailed description of our masking and beam smearing correction procedure, see Appendix <ref>. For simplicity, all remaining references to V_c and σ_mean in this paper refer specifically to inclination-corrected and beam-smearing-corrected values, respectively.
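Both global quantities can be illustrated on synthetic inputs. In the sketch below, the piecewise rotation-curve model is a simplified symmetric version of the form described above, and the slit samples, map values, and 20 km s^-1 beam-smearing term are hypothetical; the real pipeline additionally applies the masking and inclination corrections discussed in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def rc_model(r, v_c, r_t):
    """Symmetric piecewise rotation curve: a linear ramp through the centre
    between -r_t and +r_t, constant at -v_c / +v_c outside the turnover."""
    return np.clip(v_c * r / r_t, -v_c, v_c)

rng = np.random.default_rng(2)
r = np.linspace(-7.0, 7.0, 29)                 # slit positions, arcsec
v = rc_model(r, 150.0, 2.5) + rng.normal(0.0, 8.0, r.size)
(v_c, r_t), _ = curve_fit(rc_model, r, v, p0=[100.0, 2.0],
                          bounds=([0.0, 0.1], [500.0, 7.0]))

# Flux-weighted dispersion over a map, then a beam-smearing correction
# subtracted in quadrature (20 km/s is an assumed seeing contribution).
flux = rng.uniform(0.5, 2.0, (10, 10))         # stand-in flux map
sigma_map = rng.normal(80.0, 5.0, (10, 10))    # stand-in dispersion map
sigma_mean = np.sum(flux * sigma_map) / np.sum(flux)
sigma_corr = np.sqrt(max(sigma_mean ** 2 - 20.0 ** 2, 0.0))
print(v_c, r_t, sigma_corr)
```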
§.§ Kinematic Galaxy Selection

In this Section we utilise galaxy stellar kinematics from IFS observations to select dispersion-dominated galaxies in a less ambiguous way than visual morphological classification. We would like to know how many galaxies that are visually classified as ETGs are really dispersion-dominated systems, and how many have kinematic properties more similar to rotationally-supported LTGs. The latter are typified by S0 galaxies, which, by definition, exhibit a significant disk component. Our stellar kinematic selection is depicted in Figure <ref>, where we plot σ_mean versus V_c. Plotted symbols indicate different visual morphologies taken from our SAMI classifications. We note that the velocity resolution of the SAMI survey is 70 km s^-1, which means that many of our low stellar σ_mean values will be upper limits. In particular, this will be the case for a very large fraction of visually classified LTGs at low V_c, as a low measurement of V_c often results from a nearly face-on inclination. Measurements for face-on galaxies should provide a lower value of σ when compared to an edge-on view of the same object, due to a minimal contribution from rotation and beam smearing. In their IFS study of face-on LTGs from the DiskMass Survey, <cit.> find that 77% (23/30) have line-of-sight stellar σ less than 70 km s^-1, with an average value of 56.8 km s^-1 for their entire sample. Thus we should expect low V_c (more face-on on average) galaxies to have σ_mean clustered near our σ resolution limit. We also note that we apply a larger beam smearing correction for galaxies with a large V_c (see Appendix <ref>), and the uncorrected measurements of σ_mean for these galaxies are up to 30 km s^-1 larger than pictured in Figure <ref>. Visually classified ETGs (elliptical, S0, and Sa galaxies) are found to exhibit a large amount of scatter in σ_mean in Figure <ref>, highlighting the pitfalls of assuming a one-to-one correspondence between visual morphology and kinematics, e.g. that all visually-classified ETGs have large stellar velocity dispersions. Visually classified LTGs, on the other hand, are found to be more clustered. This is due to the fact that they are easier to identify from the presence of clear spiral arms, resulting in a much cleaner selection. S0/Sa galaxies, in general, extend the high V_c end of the LTG distribution to higher σ_mean. This is consistent with the result of <cit.>, who show S0 galaxies exhibit a larger V_c than LTGs at fixed M_*. Guided by the LTGs, we produce a selection to separate out galaxies having kinematic properties consistent with those of visually selected LTGs. We initially perform a linear fit to the visual LTGs in Figure <ref>, finding a slope of -0.04±0.04. As this is consistent within errors with a flat slope, we simply employ a flat cut in σ_mean, matching the cutoff value to the highest value observed for a visually selected LTG, 108.0 km s^-1. This cut is shown in Figure <ref> by the horizontal, black dashed line. By design, this isolates 100% of LTGs in our sample; however, 31% of visually-classified elliptical galaxies also fall below this line (20/65). We are interested in galaxies for which a large fraction of the dynamical support comes from random motions, implying a large σ_mean relative to V_c. For this reason we also plot in both panels the 1-to-1 relation as a solid black line. Galaxies falling above this line have σ_mean > V_c, thus they are the most likely to derive a majority of their support from random motions <cit.>. We define galaxies falling above both this line and the black dashed line as “dispersion-dominated” galaxies. The remaining galaxies we define as “rotation-dominated” galaxies. Comparing this kinematic selection with our sample of 540 SAMI galaxies with reliable kinematics, we find that 100% of dispersion-dominated galaxies (DDGs) and 39% (181/469) of rotation-dominated galaxies (RDGs) are visually classified as ETGs. We reiterate the point from Section <ref> that our definition of ETG includes Sa galaxies, however. If we redefine ETGs more strictly as only galaxies with visual classifications earlier than Sa, we find 93% (66/71) of DDGs and 14% (66/469) of RDGs are considered ETGs.
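The resulting selection reduces to two inequalities; a minimal sketch is given below (the threshold is the value quoted above, and the example inputs are hypothetical).

```python
import numpy as np

SIGMA_CUT = 108.0  # km/s: highest sigma_mean found for a visual LTG above

def kinematic_class(v_c, sigma_mean):
    """DDG if above both the flat sigma_mean cut and the 1-to-1 line
    (sigma_mean > V_c); otherwise RDG."""
    ddg = (sigma_mean > SIGMA_CUT) & (sigma_mean > v_c)
    return np.where(ddg, "DDG", "RDG")

# Hypothetical measurements (km/s) for three galaxies:
print(kinematic_class(np.array([200.0, 60.0, 90.0]),
                      np.array([120.0, 150.0, 70.0])))
# -> ['RDG' 'DDG' 'RDG']
```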
Before moving on, we examine the relationship between the galaxy spin parameter, λ_R, and ellipticity, ϵ, for A13/A15 galaxies. Here, λ_R is calculated from our SAMI stellar kinematics maps as:

λ_R = [∑_k=1^n F_k R_k |V_k|] / [∑_k=1^n F_k R_k √(V_k^2 + σ_k^2)]

where F_k is the flux in spaxel k, and V_k and σ_k are the line-of-sight velocity and velocity dispersion in spaxel k. The value R_k is the semimajor axis of the ellipse defined by the r-band axis ratio (b/a) on which spaxel k lies (i.e. the intrinsic radius). This sum is performed using only spaxels within an ellipse defined by the galaxy effective radius, R_e, and b/a. For ATLAS^3D galaxies, <cit.> show that these parameters are useful in separating fast and slow-rotators among their sample of ETGs. λ_R versus ϵ for our sample is shown in Figure <ref>, where we plot RDGs, DDGs, and H-ATLAS detected ETGs from A13/A15. The dashed line shows the separation between slow- and fast-rotators taken from <cit.>. We find that there is a correspondence between λ_R vs ϵ and our kinematic selection, with a majority of DDGs falling at low λ_R and ϵ. This suggests that the two methods are tracing similar properties of SAMI galaxies, particularly in light of the large uncertainty in λ_R for SAMI observations (which in some cases is >0.4). Considering only H-ATLAS detected ETGs from A13/A15, we find that employing the slow- versus fast-rotator selection of <cit.> would retain only two galaxies, with one having significant rotation and a relatively large ϵ. We are able to double our sample of H-ATLAS detected ETGs by employing the kinematic selection outlined here. We stress that, within the uncertainties in our kinematic measurements, our kinematic selection and that of <cit.> trace roughly the same population. In this work, however, we are primarily interested in galaxies with a large stellar velocity dispersion, which can be used as an indication of the presence of a hot, X-ray emitting halo <cit.>. Although stellar σ is often used as a proxy for M_* <cit.>, it has been shown that even massive, X-ray halo hosting galaxies can host disks of cold gas and dust when rotating rapidly <cit.>. Using our kinematic quantities, however, we can identify those galaxies likely to host hot X-ray emitting gas, and further select only those low rotation galaxies in which the presence of this gas would hinder the formation of long-lived dust grains. Figures <ref> and <ref> show that this may not be accomplished by considering λ_R versus ϵ or by using M_* alone as an indicator of a hot interstellar medium.
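A direct implementation of the λ_R sum above is straightforward; the sketch below is a simplified illustration (random stand-in maps, the ellipse assumed centred and aligned with the x-axis, i.e. no position-angle rotation, and a hypothetical 0.5'' spaxel scale).

```python
import numpy as np

def lambda_R(flux, vel, sig, b_over_a, r_e, pixscale=0.5):
    """Spin parameter from the equation above, summing spaxels inside the
    ellipse of semimajor axis r_e and axis ratio b/a; R is the semimajor
    axis of the ellipse through each spaxel (the intrinsic radius)."""
    ny, nx = flux.shape
    y, x = np.indices((ny, nx), dtype=float)
    dx = (x - nx / 2.0) * pixscale
    dy = (y - ny / 2.0) * pixscale
    R = np.hypot(dx, dy / b_over_a)      # elliptical (intrinsic) radius
    m = R <= r_e
    num = np.sum(flux[m] * R[m] * np.abs(vel[m]))
    den = np.sum(flux[m] * R[m] * np.sqrt(vel[m] ** 2 + sig[m] ** 2))
    return num / den

rng = np.random.default_rng(3)
f = rng.uniform(0.5, 2.0, (25, 25))      # stand-ins for SAMI maps
v = rng.normal(0.0, 50.0, (25, 25))
s = rng.normal(100.0, 10.0, (25, 25))
print(lambda_R(f, v, s, b_over_a=0.7, r_e=4.0))
```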
§ RESULTS

§.§ Kinematics of A13/A15 Galaxies

Having developed a stellar kinematic selection, we now apply this to those A13/A15 galaxies that have been observed by the SAMI Galaxy Survey. The V_c versus σ_mean parameter space used to perform our kinematic selection is depicted in Figure <ref> for A13/A15 galaxies observed by SAMI, with H-ATLAS non-detections shown by small cyan circles and H-ATLAS detected galaxies shown by larger red circles. We also indicate those galaxies having kinematic irregularities (described below) by green squares and blue pentagons. From Figure <ref> it can be seen that a far larger fraction of DDGs are non-detections in the H-ATLAS survey. Indeed, considering all DDGs from A13/A15, 11% (4/35) are H-ATLAS detections, compared with 40% (45/113) of RDGs. The entire sample of A13/A15 represents 771 galaxies, with 220 of these being H-ATLAS detections, or 29%. This clearly shows that, although a low fraction of visually classified ETGs host appreciable amounts of dust, a dust detection is far more likely for galaxies with kinematics dominated by rotation. In the following Sections, we examine more closely the kinematics of H-ATLAS detected and non-detected galaxies from A13/A15.

§.§.§ Kinematics of H-ATLAS Detected ETGs

As mentioned in Section <ref>, only 49 of the 220 H-ATLAS detected ETGs of A13/A15 have kinematics maps from the SAMI survey that meet our quality cuts, and this subset is shown in our σ_mean vs V_c diagram in Figure <ref> with red circles. We find that 45/49 (∼90%) are RDGs, with 35 of these 45 having V_c > 100 km s^-1. This means that these galaxies derive a majority of their dynamical support from rotation, as expected for LTGs, particularly for S0/Sa galaxies (as shown in Figure <ref>). Galaxies such as these may host an X-ray emitting halo if they are massive enough <cit.>; however, <cit.> and <cit.> have shown that rapid rotation can allow a galaxy to host a cold gas disk even in the presence of such a hot halo. Therefore the presence of dust in these systems does not require an external origin scenario such as galaxy mergers. Next we investigate the kinematics of our sample of four H-ATLAS detected DDGs. A comparison between the kinematics of the gas and stars for these four galaxies can be seen in Figure <ref>, alongside their SAMI stellar and ionised gas kinematics and their Hα flux maps. All four galaxies exhibit discrepancies between their stellar and ionised gas kinematics, indicating that the dust in these galaxies is related to significant accretion events, such as galaxy mergers, in their evolutionary histories. Here we identify two classes of “kinematically irregular” galaxies, described in the following. The first class of kinematically irregular galaxies are those with significant misalignments between the kinematic position angles of the stars and ionised gas. These galaxies have been identified by Bryant et al. (in preparation), and here we define kinematically misaligned galaxies as those with differences between the stellar and gas kinematic position angles of >30^∘. Kinematically misaligned galaxies such as these show the clearest evidence from SAMI observations of having undergone a stochastic event, such as gas accretion, in the relatively recent past <cit.>. Bryant et al. (in preparation) find that the cut in stellar versus gas position angle of 30^∘ may not always provide an accurate descriptor of the fraction of misaligned galaxies due to observational issues (e.g. the depth of the data; see Bryant et al. in preparation); however, this will not affect our conclusions, as the kinematically misaligned galaxies included in our H-ATLAS detected DDG sample have misalignments close to 90^∘. Such a large difference between stellar and gas kinematics gives the clearest indication of recent accretion. Galaxies 551505 and 534655 fall into this first class, with both exhibiting kinematic misalignments of ∼90^∘. The two cases are not identical, however. 551505 displays rapid rotation in both stars and ionised gas measured to the edge of the SAMI fibre bundle, which may suggest that this is a polar ring galaxy, a relatively stable configuration resulting from merger activity <cit.>. Galaxy 534655, on the other hand, exhibits very slow stellar rotation with a rapidly rotating ionised gas component in the central region, possibly indicative of a nuclear starburst. From preliminary analysis of emission line ratios in the central region of this galaxy we find possible evidence of low-ionisation nuclear emission-line region (LINER) like emission (Medling et al. in preparation), consistent with this picture.
Nuclear starburst activity such as this has also been linked to merger activity in local luminous IR galaxies <cit.>. In addition to kinematically misaligned galaxies, we also identify a second class of galaxies in which the stars and gas are kinematically aligned but the gas has a significantly larger V_c than the stars. In order for a galaxy to be included in this classification we require the ratio of stellar to gas rotation, V_c,star/V_c,gas, to be <0.6, noting that values observed in LTGs as a result of asymmetric drift <cit.> fall in the range ∼0.75-0.9 <cit.>. If the origin of the ionised gas content of an ETG were closely related to the existing stellar component, we would expect the two to share similar kinematics, unlike what we see in such cases. This implies that V_c,star/V_c,gas < 0.6 galaxies have experienced an accretion event in the past related to their gas and dust content, but the difference between the time this occurred and the time at which we observe the galaxy may be significantly longer than for kinematically misaligned galaxies, depending on the dynamical relaxation time of the system. Estimates of the relaxation time of gas disks in merger remnants range from << 1 Gyr to ∼5 Gyr <cit.>; see Bryant et al. (in preparation) for discussion of the dynamical relaxation time in SAMI survey galaxies. The other two galaxies in Figure <ref>, 508180 and 511892, fall into our second class of kinematic irregularities. As noted, the typical values of V_c,star/V_c,gas seen in LTGs due to asymmetric drift are ∼0.75-0.9, whereas in galaxies 508180 and 511892 this value is roughly half that, at 0.35 and 0.49 respectively. One scenario would be a prograde minor merger, where gas is accreted with angular momentum similar to that of the accreting galaxy. Retrograde merger remnants are more likely to exhibit gas-stellar counter rotation after dynamical relaxation, particularly in cases where the primary galaxy is gas poor prior to the merger (e.g. Bassett et al., submitted to MNRAS). Of the 45 H-ATLAS detected RDGs, four also show kinematic discrepancies similar to the four dusty DDGs. This is reasonable, as minor mergers are not limited to massive, dispersion-supported galaxies. It is important to note that, although the dust content of some fraction of low-dispersion galaxies will indeed be related to accretion processes, these processes are not a necessity to account for the observed dust in the absence of a hot, X-ray halo. We also indicate with large symbols those H-ATLAS detected ETGs with log_10(M_*/M_⊙) > 10.8, which <cit.> find is the limiting mass above which galaxies show clear evidence for an X-ray emitting ISM (see Section <ref>). Three RDGs fall in this category; however, they all have V_c > 160 km s^-1. <cit.> and <cit.> show in simulations that, even in the presence of an X-ray halo, rapid rotation can allow for the presence of a cold gas disk. Furthermore, two of these massive RDGs have σ_mean ≃ 75 km s^-1, suggesting that these two galaxies do not follow the Faber-Jackson relation for massive ETGs <cit.>. This discrepancy between their large M_* and low σ_mean clearly illustrates that these galaxies must derive a significant amount of support from rotation.

§.§.§ Kinematics of H-ATLAS Non-Detected ETGs

In this Section, we briefly discuss the integrated kinematics of H-ATLAS non-detections from A13/A15. These galaxies are indicated in Figure <ref> as small cyan circles.
Similar to H-ATLAS detected ETGs, we find that non-detected galaxies occupy the same full range in σ_mean versus V_c as the 540 SAMI galaxies with reliable measurements, including a significant number of RDGs. We do find, however, that a larger percentage of non-detected galaxies fall in our DDG kinematic selection, at 31% (31/99) compared to 8% (4/49) for H-ATLAS detections. Next we examine the level of kinematic irregularity among our 31 H-ATLAS non-detected DDGs. As discussed in the previous Section, kinematically irregular galaxies are defined based on a comparison of their stellar and ionised gas kinematics. While all four of our H-ATLAS detected DDGs have strong ionised gas emission, only 35% (11/31) of H-ATLAS non-detected DDGs have ionised gas emission with a high enough signal-to-noise to evaluate this. Thus a majority (20/31) of H-ATLAS non-detected DDGs in our sample are poor in gas as well as dust, as is typical of low redshift ETGs. Among the H-ATLAS non-detected DDGs with ionised gas emission strong enough to measure rotation, 7/11 have kinematically misaligned gas and 4/11 exhibit V_c,star/V_c,gas < 0.6 (specifically 0.09, 0.10, 0.27, and 0.56). This is similar to the kinematic irregularities seen in H-ATLAS detected DDGs; therefore, the presence of dust does not impact the relative dynamics of gas compared to stars in ETGs with significant gas. We note, however, that the presence of ionised gas in the absence of a secure detection of dust emission is not inconsistent with the presence of a hot, X-ray emitting ISM in massive ETGs. We discuss this point further in Section <ref>. For completeness we note that among the full sample of DDGs in our SAMI kinematics sample, 58% (41/71) have ionised gas emission with high enough signal-to-noise to measure rotation. Among this subsample, 22/41 exhibit kinematic misalignments while 13/41 fall in the V_c,star/V_c,gas < 0.6 class. Given our small sample size, our finding that 54% of DDGs in our sample with appreciable amounts of ionised gas are kinematically misaligned agrees well with the work of Bryant et al. (in preparation). The authors find that, depending on the exact definitions, ∼40-53% of ETGs from the full SAMI survey with high signal-to-noise ionised gas emission are kinematically misaligned.

§.§ Dust Properties of H-ATLAS Detected ETGs

Having shown that a majority of the visually classified dusty ETGs from A13/A15 are consistent with being rotationally-supported, we now investigate the dust content of our sample. Any differences between members of our kinematic selections (or lack thereof) may help to further identify the most likely origin scenario for their dust content. We present in Figures <ref> and <ref> the M_d-SFR and M_d-M_* relationships for our sample, with markers indicating our kinematic V_c-σ_mean selection. We also include upper limits on M_d for H-ATLAS non-detections from A13/A15 as small black triangles. In Figure <ref>, our observations of SFR vs M_d for A13/A15 galaxies are plotted over a large sample of “normal” star-forming SDSS galaxies taken from <cit.>, which represent the z=0 star-forming main sequence, plotted as small orange dots. Also plotted in Figure <ref> is a linear fit to M_d vs SFR from <cit.>, given by the black dashed line. In Figure <ref>, M_* vs M_d for A13/A15 galaxies are plotted over galaxies from the Herschel Reference Survey <cit.>, representing a wide range of galaxy types and environments.
Visually classified LTGs from the HRS are given by light blue pentagons while orange pentagons show ETGs. Visual classifications for HRS galaxies are more reliable than those of SAMI galaxies, because HRS galaxies are extremely nearby objects. The relative proximity of HRS galaxies also means that observations of them are sensitive to much lower levels of total M_d than those of A13/A15, which is reflected in Figure <ref>, where HRS ETGs overlap with A13/A15 upper limits. This selection effect, however, will not affect our conclusions. We also plot in Figure <ref> blue horizontal lines that correspond to stellar masses of galaxies from <cit.> that show evidence of an extended X-ray emitting halo. The solid blue line is located at log_10(M_*/M_⊙) = 10.9, above which all galaxies show clear evidence of such a halo. Considering tentative detections, this can be extended to log_10(M_*/M_⊙) = 10.8, indicated by the dashed blue line. <cit.> explore the relationship between X-ray luminosity (L_X) and stellar mass down to individual galaxy masses of log_10(M_*/M_⊙) = 10.0 using stacking of X-ray observations. Below log_10(M_*/M_⊙) = 10.8 they find no dependence of L_X on M_*, suggesting that the observed L_X can be explained by SNe remnants and X-ray binaries rather than a hot gas halo. The results of <cit.> suggest that an alternative to using kinematics to select galaxies hosting X-ray gas halos is to employ a fixed stellar mass limit of log_10(M_*/M_⊙) = 10.8. Figure <ref> shows that DDGs are found to host a lower SFR at fixed M_d when compared to the bulk of RDGs; this difference is even greater when comparing to SDSS star-forming galaxies <cit.>. In Figure <ref> it can also be seen that the DDGs are among the most massive galaxies in our sample, and they host extremely small dust reservoirs given their stellar masses, consistent with the assertion that DDGs represent genuine massive, elliptical galaxies, likely to host a hot, X-ray emitting interstellar and/or intergalactic medium <cit.>. Irregularities between stellar and gas kinematics favour a merger driven explanation for the dust content of H-ATLAS detected DDGs. In this scenario, these galaxies begin as typical quiescent ellipticals hosting very little molecular gas and dust <cit.>, thus occupying the upper left of Figure <ref>. These galaxies will then undergo minor mergers with gas rich satellites containing both star-forming gas and dust that is stripped by the central galaxy. A minor merger such as this will significantly increase M_d while contributing negligibly to M_*, thus moving galaxies horizontally towards the right. This is consistent with their location in Figure <ref>, offset from HRS LTGs and RDGs. Observations have also shown that the star formation efficiency of gas stripped from galaxies can be extremely low <cit.>, consistent with the M_d versus SFR behaviour of the kinematic ETGs presented here. RDGs more closely follow the relationship for normal star-forming galaxies of <cit.> in Figure <ref> than DDGs, and have an M_d-M_* relationship consistent with HRS LTGs. Although there are examples of suppressed star formation at a wide range of dust masses, these are found to be within the scatter of the SDSS data. Recently, <cit.> examined the scaling relations for ETGs in the HRS, finding a significantly larger scatter for ETGs than that observed by <cit.>, with galaxies typically deviating to low SFR, consistent with the results presented here.
A possible explanation for the position of low SFR RDGs in Figure <ref> is morphological quenching <cit.>, where the efficiency of converting molecular gas into stars is reduced in the presence of a massive bulge. This has been seen in observations previously <cit.> and, given the known correlation between M_* and bulge-to-total ratio <cit.>, can also explain why all three RDGs with log_10(M_*/M_⊙) > 10.8 exhibit a low SFR while retaining their dust content.

§ DISCUSSION

§.§ Moving Beyond Visual Classification of Galaxies

The dusty ETGs from A13/A15 studied here have been visually classified by both the GAMA team <cit.> and the SAMI team using essentially the same classification criteria. Two of the key differences between the classifications are that they were made by independent groups of classifiers and that they used different images in the classification process. The GAMA classifications of <cit.> are based on false colour g, i, and H band composite images, while the SAMI classifications employ SDSS DR9 gri images. The use of longer wavelength data has resulted in the GAMA classifications tending somewhat towards earlier types. There is likely also an influence of the third key difference between SAMI and GAMA classifications, namely that those of SAMI include signs of star formation (based on galaxy colour rather than morphology alone) to distinguish LTGs from ETGs. This would help to explain why such a large number of A13/A15 ETGs are identified as Sa galaxies, which are morphologically difficult to separate from S0 galaxies beyond z = 0.05, but would be identified by SAMI as later types due to their blue colours. This difference is illustrated in the top panel of Figure <ref>, where we show the GAMA and SAMI classifications for the A13/A15 dusty ETGs studied here. We show classification histograms for H-ATLAS non-detections in the bottom panel of Figure <ref> and, although GAMA classifications are still slightly skewed towards earlier types, the level of agreement is improved compared to H-ATLAS detections. There is often an inherent assumption of a connection between the visual classification of a galaxy as an ETG and the presence of a hot, X-ray emitting ISM <cit.>. This may not be fully justified, and, in the case of A13/A15, the inclusion of a large number of Sa galaxies makes this connection more dubious. Comparing those GAMA ETGs containing dust with those that do not, A13 show that dusty ETGs are bluer, less concentrated, and have lower Sérsic indices. Further, A13 dusty ETGs have NUV-r colours more similar to H-ATLAS detected GAMA LTGs than to non-detected ETGs. Thus, from A13 there is already an indication that many H-ATLAS detected visual ETGs from GAMA have properties more like LTGs than giant ellipticals hosting X-ray halos. The kinematic analysis of the 49 A13/A15 H-ATLAS detections with reliable SAMI observations agrees well with this assessment. In Section <ref> we show that 45/49 of these galaxies fall in the region of the V_c versus σ_mean plane in Figure <ref> suggesting kinematics largely dominated by rotation (termed RDGs here). All of these galaxies have σ_mean below 150 km s^-1 (with only 6 above σ_mean = 100 km s^-1), the approximate value above which galaxies have X-ray luminosities exceeding that expected for the cumulative emission from supernova remnants and X-ray binaries in empirical studies <cit.>. It has also been shown by <cit.> that M_* can often be used to identify galaxies hosting an X-ray emitting ISM, with the clearest evidence found for galaxies with log_10(M_*/M_⊙) > 10.8.
In fact, it is likely that M_* is more fundamental in determining the presence of such a hot ISM, as it is the high mass concentration of these galaxies that prevents hot gas from escaping into the intergalactic medium. This means that σ_mean is a secondary indicator, arising through the relationship between M_* and σ in ETGs <cit.>. This connection occurs because massive ETGs derive their dynamical support from random motions, which is not the case for galaxies with significant rotation. Assuming M_* is a better indicator for the presence of hot, X-ray emitting gas than σ_mean in rotating galaxies, we also identify those H-ATLAS detected RDGs with log_10(M_*/M_⊙) > 10.8 in Figure <ref>. 42/45 H-ATLAS detected RDGs are found to have masses below this limit, supporting our assertion that they do not host a hot ISM. Although the remaining three RDGs are massive enough to host an X-ray emitting halo, they are also found to have rapid rotation, with V_c > 160 km s^-1 in all three cases. <cit.> and <cit.> have shown that rapid rotation allows massive galaxies to host a disk of cold gas even in the presence of an X-ray emitting halo. This means that, regardless of the properties of the ISM in these three galaxies, dust residing in their disks can be long lived, thus an external origin for their dust content is unnecessary. The visual classification of a galaxy as an ETG should therefore not be taken as clear evidence for the presence of a hot ISM that is inhospitable to dust, in agreement with previous works <cit.>. On the contrary, despite their appearance many visually-classified ETGs are actually rotationally-supported, disk-like, star-forming galaxies with a relatively normal dust content, comparable to galaxies found on the star-forming main sequence. In other words, the term early-type in this case does not imply a structural difference, but mainly a difference in colour and possibly SFR. This means that the dust content of these galaxies is likely produced internally through normal processes such as supernovae and stellar winds, without the need for external mechanisms such as cooling flows or minor mergers.

§.§ Dispersion Dominated Galaxies, Dust, and Merger Rates

From our sample of 49 dusty ETGs from A13/A15 with reliable SAMI observations, we identify 4 galaxies in Figure <ref> that are kinematically consistent with being dispersion-dominated systems. SDSS DR9 images of these galaxies are shown in Figure <ref>, alongside velocity and velocity dispersion maps from the SAMI galaxy survey. All four galaxies exhibit inconsistencies between their stellar and ionised gas kinematics, indicating recent stochastic processes such as gas accretion through merging. Galaxies 551505 and 534655 are found to have kinematic misalignments of ∼90^∘, while 508180 and 511892 have stellar to gas V_c ratios < 0.60, inconsistent with the range observed for asymmetric drift in LTGs <cit.>. Furthermore, the regular appearance of these galaxies in SDSS observations suggests that if mergers are responsible for these kinematic inconsistencies, then these mergers must either be minor, as major mergers typically result in disturbed morphologies <cit.>, or they occurred in the fairly distant past, thus having allowed significant time for dynamical relaxation. A possible caveat, however, is that observations deeper than those from SDSS may reveal disturbed morphologies apparent as low surface brightness features <cit.>.
Evidence that the dust content of these four galaxies may have been recently accreted comes from comparing M_d to other galaxy properties. First, Figure <ref> shows that DDGs have suppressed SFR compared to normal star-forming galaxies, a feature seen in simulations of wet minor mergers <cit.>. DDGs are also significantly offset above the M_d-M_* relationship for star-forming galaxies shown in Figure <ref>, similar to the dusty ETGs of <cit.>, who observe little variation in M_* with varying M_d. This lack of a correlation is suggested as further evidence of external accretion. Indeed, extremely dust poor, massive elliptical galaxies will fall far above the M_d-M_* trend for star-forming galaxies, occupying the top left region of Figure <ref>. A subsequent wet minor merger will provide a negligible increase in M_* while significantly increasing M_d, thus the merger remnants will move horizontally to the right in Figure <ref>, towards the region occupied by the kinematic ETGs discussed here. Assuming all four H-ATLAS detected DDGs in this work have acquired their dust content in a merger, how do our results compare with expectations based on the cosmological rates of mergers at low redshift? <cit.> compare the measured rate of minor mergers to the theoretical estimates of the destruction time for dust in hot gas of <cit.>. Following <cit.>, the authors make a rough prediction of the expected fraction of dusty ETGs, f_dust, based on estimates of the merger rate of R_merg = 0.07-0.2 Gyr^-1 and a dust lifetime of τ_dust < 0.02 Gyr following

f_dust = R_merg τ_dust

This gives f_dust < 0.14-0.4%, implying that a purely external accretion scenario for large samples of dusty ETGs is extremely unlikely from a statistical perspective. The results of our study provide a possible solution to this tension. First we note that the fraction of ETGs with dust quoted by <cit.> is 0.6, whereas for GAMA ETGs in A13 there is only a 29% detection rate from the H-ATLAS survey. As we have shown, however, many of these galaxies are kinematically inconsistent with the presence of a hot ISM. Among DDGs, we find an even lower H-ATLAS detection rate of 11% (4/35), a factor of ∼5 lower than the value assumed by <cit.>, yet still significantly larger than their predicted value of f_dust < 0.14%. The argument of <cit.>, however, is dependent on a number of assumptions regarding the timescales and conditions of dust accretion in ETGs. Some works have estimated a timescale for gas stripping on the order of a few times 10^8 yr <cit.>, which could further increase f_dust predictions by an order of magnitude, to ∼1.4-4.0%. The remaining tension between this estimate and the estimate of f_dust = 11% found in this work may be partially due to incompleteness as a result of our small sample size. Another possibility, though, is the recent suggestion that dust accreted while embedded in a larger cold medium may be shielded from the harsh ISM, resulting in a further significant increase in dust lifetimes <cit.>. A full understanding of just how much longer dust may survive in such a scenario is beyond the scope of this work.
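For concreteness, the simple bookkeeping behind these percentages can be reproduced in a few lines (the 0.2 Gyr case corresponds to the gas-stripping timescale mentioned above):

```python
# f_dust = R_merg * tau_dust with the rates (Gyr^-1) and lifetimes (Gyr)
# quoted in the text above.
for r_merg in (0.07, 0.2):
    for tau_dust in (0.02, 0.2):
        print(f"R={r_merg} /Gyr, tau={tau_dust} Gyr: "
              f"f_dust = {100.0 * r_merg * tau_dust:.2f}%")
# tau = 0.02 Gyr gives 0.14-0.4%; tau = 0.2 Gyr gives 1.4-4.0%.
```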
It should also be noted that, in those galaxies with V_c,star/V_c,gas < 0.6, the fact that the gas is kinematically aligned with the stars suggests that, if the presence of the dust is truly the result of a merger, then it must have had sufficient time to undergo dynamical relaxation. This process should occur on Gyr timescales <cit.>, supporting the idea that the direct, sub-galactic environment of dust in ETGs may drastically increase the lifetime of interstellar dust. We can also discuss the comparison between the stellar and ionised gas kinematics of dust-free DDGs in this study, i.e. H-ATLAS non-detections from A13/A15. Of the 31 H-ATLAS non-detected DDGs included here, 35% (11/31) show signs of kinematic discrepancies between their ionised gas and stars. Among these 11, 7/11 are kinematically misaligned while 4/11 are aligned with significantly larger gas V_c compared to that of the stars. The remaining 20 galaxies, however, do not have secure enough detections of ionised gas to provide clean ionised gas kinematics maps. Considering our entire sample of 540 SAMI galaxies with high quality stellar kinematics observations, we find a total subsample of 71 DDGs. Out of these, 49% (35/71) show kinematic irregularities; note, however, that only 58% (41/71) have strong ionised gas detections. Among the 35 kinematically irregular DDGs, 22 are kinematically misaligned while the remaining 13 have aligned ionised gas rotating significantly faster than the stars. Our finding that 54% (22/41) of DDGs with appreciable amounts of ionised gas are kinematically misaligned agrees well with the findings of Bryant et al. (in preparation), who find that, among a larger sample of SAMI galaxies, 40-53% of ETGs with high signal-to-noise ionised gas emission exhibit kinematically misaligned gas. This raises the question: how are kinematically irregular, H-ATLAS non-detected DDGs related to DDGs with H-ATLAS detections? The strongest statement we can make in this regard is that the presence of dust does not have a large impact on the relative dynamics of gas and stars in DDGs with significant gas. As we do not have strong constraints on the possible dust content of H-ATLAS non-detected galaxies, unlike H-ATLAS detections, we do not have the secondary indications of a merger as an explanation for their kinematic irregularity (e.g. M_d vs M_*). We can simply say that the presence of ionised gas is likely associated with a stochastic process that has also affected the gas kinematics in these galaxies. Unlike dust, however, ionised gas is not always directly associated with cold gas in galaxies. Indeed, a number of works have shown that old stellar populations (such as post-AGB stars), active galactic nuclei (AGN), or even interactions between warm and hot (X-ray emitting) gas phases may be the dominant sources of ionised gas emission in ETGs <cit.>. This means that the detection of ionised gas emission in the absence of a secure detection of cold dust is not inconsistent with the presence of a hot, X-ray emitting ISM. A caveat here, however, is that some H-ATLAS non-detected galaxies from A13/A15 (particularly those DDGs with strong ionised gas emission) may contain appreciable amounts of dust but have far-IR fluxes below the sensitivity limits of H-ATLAS. Assuming the presence of ionised gas is always associated with dust in our sample of SAMI galaxies would increase our estimate of the number of DDGs with dust to 58% (41/71). Thus the level of tension between merger rates and dust lifetimes in ETGs would then roughly match that seen by <cit.>. This assumption, however, is unfounded and, as we have noted, estimates of dust lifetimes in the presence of a complex, multiphase ISM in ETGs are equally uncertain.
Further study of the ISM of massive galaxies and the precise conditions of gas accretion onto these systems will be required to understand the underlying cause of this tension. Finally, we note that out of the 41 DDGs from our full SAMI sample that exhibit appreciable ionised gas emission, 5 do not show clear discrepancies between ionised gas and stellar kinematics. These galaxies, however, have not been observed by H-ATLAS, thus the presence of dust is uncertain. Given the above discussion regarding sources of ionising radiation in ETGs, unless these galaxies can be shown to host cold gas or dust, they should not be considered to be peculiar objects.§ SUMMARY In this paper we have analysed the 2D kinematics of visually classified ETGs from <cit.> and <cit.> (A13/A15) using IFS data from the SAMI Galaxy Survey. The sample of A13/A15 includes 220 H-ATLAS detected (dusty) galaxies and 551 H-ATLAS non-detected (dust-free) galaxies. We begin by measuring the stellar circular velocity, V_c, and the flux-weighted, global, stellar velocity dispersion, σ_mean, for a sample of 540 SAMI galaxies for which we can measure V_c beyond the turnover radius of the rotation curve. These values provide a kinematic selection designed to determine those visual ETGs that are consistent with having dispersion-dominated stellar kinematics, indicative of the presence of a hot, X-ray emitting ISM. This selection is then applied to visually classified ETGs from A13/A15 that have currently been observed with SAMI. Finally, we examine the dust properties of these galaxies in comparison with our kinematic selection in order to better understand the origin of the dust in these systems. Our key results are as follows: * Selecting A13/A15 ETGs based on V_c and σ_mean, we find 11% (4/35) of dispersion-dominated A13/A15 galaxies are H-ATLAS detected. This is in contrast to the 29% (220/771) detection rate for the full A13/A15 sample and 40% (45/113) for rotation-dominated galaxies. Thus the detection of dust in visually classified ETGs is 3.5× more likely in galaxies with disk-like rotation than in those with kinematics more consistent with massive elliptical galaxies.* Similarly, only 8% (4/49) of H-ATLAS detected ETGs from A13/A15 with SAMI observations are kinematically consistent with being true, dispersion-dominated galaxies. The remainder have kinematics more similar to blue, visually classified LTGs and S0/Sa galaxies.* 100% of these dispersion-dominated, dusty ETGs exhibit inconsistencies between their stellar and ionised gas kinematics suggestive of recent merger activity and an external origin for their dust content. The corresponding rate of gas versus stellar kinematic discrepancies in our full sample of dispersion-dominated SAMI galaxies is 45% (34/75).* The four dispersion-dominated, dusty ETGs in our sample are also extremely massive and thus quite likely to host a hot, X-ray emitting halo. As such, an external accretion scenario is the most viable source for their dust content. Observations of suppressed star formation in these four galaxies, typical of gas accreted onto massive galaxies <cit.>, further support this assertion.* The low velocity dispersions, as well as low masses and/or rapid rotation, of the remaining galaxies suggest that dust in these systems may be long-lived, thereby eliminating any need for an external accretion scenario for the origin of their dust content.
We have shown that these results may help to reduce the tension between expected dust lifetimes in massive ETGs <cit.>, observed merger rates <cit.>, and the observed number of visual ETGs containing dust <cit.>. A more complete understanding of the complex, multiphase ISM of massive ETGs, as well as the exact conditions through which gas is accreted onto these systems, will be necessary to understand this tension fully. RB acknowledges support under the Australian Research Council's (ARC) Discovery Projects funding scheme (DP130100664). JvdS is funded under Bland-Hawthorn's ARC Laureate Fellowship (FL140100278). SMC acknowledges the support of an Australian Research Council Future Fellowship (FT100100457). SB acknowledges the funding support from the Australian Research Council through a Future Fellowship (FT140101166). Support for AMM is provided by NASA through Hubble Fellowship grant #HST-HF2-51377 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. M.S.O. acknowledges the funding support from the Australian Research Council through a Future Fellowship (FT140100255). We would also like to thank the anonymous referee for comments and suggestions that have improved the clarity and readability of this work. The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory. The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST ATLAS Survey. The SAMI Galaxy Survey is funded by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions. The SAMI Galaxy Survey website is http://sami-survey.org/. GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT, and ASKAP, providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is http://www.gama-survey.org/.§ DUST MASS UPPER LIMITS FOR H-ATLAS NON-DETECTIONS As described in Section <ref>, M_d for H-ATLAS detected galaxies is measured by the GAMA survey team using the spectral fitting code of <cit.>. This code relies on detections in the far-IR in order to provide reliable estimates of M_d, therefore the M_d values it provides for H-ATLAS non-detected galaxies are highly uncertain. Thus, for these galaxies we estimate upper limits on the dust masses in the following manner: First we take the upper limit for the flux in the 250 μm Herschel SPIRE band, F_250, to be ∼33 mJy <cit.>. This is then converted to an upper limit on M_d using <cit.>: M_d = F_250 D_L^2 K / ((1+z) κ_250 B_250(T_d)), where D_L is the luminosity distance to each galaxy, computed from spectroscopic redshifts, κ_250 is the mass absorption coefficient, assumed to be 0.89 m^2 kg^-1 at 250 μm <cit.>, and B_250(T_d) is the Planck function at 250 μm and at a dust temperature T_d.
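As a concrete sketch of this upper-limit calculation (our own illustration, not the survey team's code; the K-correction K used here is the grey-body expression given in the next equation, the emissivity index β is left as a parameter since its adopted value is given only by reference, and the unit conversions are ours):

import numpy as np

H, KB, C = 6.626e-34, 1.381e-23, 2.998e8  # SI: Planck, Boltzmann, light speed
LAM250 = 250e-6                           # 250 um in metres

def planck_250(T_d):
    """Planck function B_nu(T_d) at 250 um, in W m^-2 Hz^-1 sr^-1."""
    nu = C / LAM250
    return 2*H*nu**3/C**2 / np.expm1(H*nu/(KB*T_d))

def kcorr(z, T_d, beta):
    """Grey-body K-correction of the next equation, with nu_rf = nu_obs*(1+z)."""
    nu_obs = C / LAM250
    nu_rf = nu_obs * (1.0 + z)
    return (nu_obs/nu_rf)**(3+beta) * np.expm1(H*nu_rf/(KB*T_d)) / np.expm1(H*nu_obs/(KB*T_d))

def mdust_upper(F250_mJy, D_L_Mpc, z, T_d, beta=2.0, kappa250=0.89):
    """Upper limit on M_d in kg from the 250 um flux limit (equation above)."""
    F250 = F250_mJy * 1e-29             # mJy -> W m^-2 Hz^-1
    D_L = D_L_Mpc * 3.086e22            # Mpc -> m
    return F250 * D_L**2 * kcorr(z, T_d, beta) / ((1+z) * kappa250 * planck_250(T_d))

# e.g. in solar masses: mdust_upper(33.0, 300.0, 0.07, T_d=22.1) / 1.989e30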
For all non-detected galaxies we simply fix T_d at 22.1 K, the average value computed through grey-body fitting for H-ATLAS detected sources from A13. Equation <ref> also includes a factor of (1+z) and a K-correction, which is given by: K = (ν_obs/ν_rf)^(3+β) (e^(hν_rf/kT_d)-1)/(e^(hν_obs/kT_d)-1), where ν_obs and ν_rf are the observed and rest-frame frequency, β is the dust emissivity index <cit.>, h is the Planck constant, and k is the Boltzmann constant. M_d upper limits are depicted alongside the dust masses for H-ATLAS detections in Section <ref>.§ ROTATION CURVE EXTRACTION AND MEASUREMENT OF V_C After measuring the kinematic PA of our kinematics maps, we use this value as input in extracting the rotation curves for each galaxy. This is done by first determining the x and y positions of the galaxy centre from the stellar flux maps, F(x,y), using: x_c = ∑_i∑_j i × F(i,j) / ∑_i∑_j F(i,j), y_c = ∑_i∑_j j × F(i,j) / ∑_i∑_j F(i,j), where x_c and y_c are the x and y positions of the galaxy centre. An artificial slit with a width of 1.5 arcsec (3 spaxels) is traced across the stellar velocity map with its position defined by the measured PA and galaxy centre. Examples of these artificial slits are shown in green in the right column of Figure <ref>. The radius, r(x,y), and velocity, v(x,y), are recorded at each spaxel within the slit. Here we define r(x,y) = √((x-x_c)^2+(y-y_c)^2) with the sign taken to match the sign of x-x_c. The choice of the definition of positive and negative radii is arbitrary, however, as this simply defines a positive or negative V_c. In the end, the final value of the circular velocity is taken as |V_c|. The stellar rotation curves extracted in this way are then used to determine the rotation velocity following the procedure of <cit.>. This is done by fitting a piecewise function of the form: V(r) = { -V_c, r ≤ -r_t; V_c(r/r_t), -r_t < r < r_t; V_c, r_t ≤ r }, where r_t is the turn-over radius of the rotation curve that defines where the flat portion of the rotation curve begins. This value is left as a free parameter. Examples of these fits for galaxies with low and high (apparent) V_c are shown in the left column of Figure <ref>. Although a fairly common feature of galaxy rotation curves is a decline in V_c beyond the turnover radius, r_t <cit.>, the coverage of our datacubes often does not extend to large enough radii to capture this behaviour. Thus we choose to use the relatively simple model given in Equation <ref>, as including more parameters is more likely to result in spurious fits, particularly for low S/N spaxels at large radii. This procedure has the inherent assumption that the ∼8.0 arcsec covered by SAMI datacubes is larger than r_t, which is not true for many galaxies. We check whether or not observations of each galaxy extend beyond r_t to test this. For each galaxy we first identify the spaxel in our traced slit that is furthest from (x_c,y_c). If the radius measured for this spaxel is larger than r_t then we flag this as a reliable measurement of V_c. We also visually inspect the rotation curve fits for each galaxy flagged as reliable for verification, and in this process a small number of galaxies were identified with spurious fits and subsequently flagged and removed from our sample. Finally, some SAMI observations include a low number of spaxels with high signal-to-noise data, which will reduce the reliability of our fitted value of V_c. We perform a test using galaxies with high fidelity rotation curves in which we incrementally reduce the artificial slit length by one spaxel and remeasure V_c.
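A minimal sketch of this slit-length test (our own illustration, not the SAMI pipeline; the piecewise model is the one of Equation <ref>, scipy's curve_fit is assumed for the fitting, and the 5% recovery tolerance is our placeholder):

import numpy as np
from scipy.optimize import curve_fit

def v_model(r, v_c, r_t):
    """Piecewise model of Equation <ref>: linear rise inside +/-r_t, flat outside."""
    return v_c * np.clip(r / r_t, -1.0, 1.0)

def fit_vc(r, v):
    """Fit (V_c, r_t) to one extracted rotation curve; return |V_c|."""
    p0 = (np.max(np.abs(v)), 0.5 * np.max(np.abs(r)))
    (v_c, r_t), _ = curve_fit(v_model, r, v, p0=p0)
    return abs(v_c)

def min_reliable_slit(r, v, tol=0.05):
    """Shorten the slit one spaxel at a time (outermost first) and report the
    smallest spaxel count that still recovers the full-slit V_c within tol."""
    order = np.argsort(np.abs(r))        # indices from innermost to outermost
    v_full = fit_vc(r, v)
    n_min = len(r)
    for n in range(len(r) - 1, 3, -1):   # drop one spaxel per iteration
        keep = order[:n]
        if abs(fit_vc(r[keep], v[keep]) - v_full) <= tol * v_full:
            n_min = n
        else:
            break
    return n_min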
We find that for slits containing more than 30 spaxels we are able to recover V_c measured from the full slit for >95% of galaxies tested. This fraction falls off rapidly below 30 spaxels, thus galaxies for which slits contain fewer than 30 spaxels are excluded from our analysis. Among the 753 galaxies tested, we find that 554 galaxies meet these requirements. We next correct our stellar V_c measurements for the effects of inclination, which causes observed V_c's to be lower than the intrinsic rotation velocity of a given galaxy. First we determine the inclination of each galaxy using cos^2 i = ((1-ϵ)^2-α^2)/(1-α^2), where i is the galaxy inclination, ϵ is the observed ellipticity, and α is the intrinsic flattening for a given galaxy. For each galaxy we provide a rough estimate of α based on the following criteria: first we separate galaxies that are strongly disk-dominated from those with a significant influence from a central bulge. The former are identified as having bulge-to-total ratios (B/T) less than 0.3 while the latter have B/T larger than 0.3, which roughly follows the findings of <cit.>. Here we take B/T from the r-band values of <cit.>, who perform 2D bulge+disk decompositions for SDSS galaxies using the GIM2D software <cit.>. For disk-dominated galaxies we fix α at 0.23, which is the average value found when comparing α values reported for spiral galaxies by <cit.> and <cit.>. Galaxies with B/T > 0.3 are then separated into pure ellipticals and S0/Sa galaxies based on SAMI morphological classifications. We assign an α of 0.55 to the S0 and Sa classes <cit.> and a value of 0.63 to elliptical galaxies, the latter being the average value found for slow rotators in the ATLAS^3D survey <cit.>. This large α value for elliptical galaxies is appropriate because these objects appear relatively round even when the viewing angle is perpendicular to the axis of rotation. Galaxies with low intrinsic rotation and a spheroidal shape, for example, would have V_c significantly overestimated if it is assumed the galaxy is much flatter. Thus adopting α=0.63 for elliptical galaxies provides a conservative V_c correction appropriate for dispersion-supported galaxies. The exact assumptions regarding α will have only a minor effect on our results, as this value is used to correct V_c while, as we will show in Section <ref>, our kinematic selection is primarily based on stellar velocity dispersion. Finally, we compute the inclination-corrected stellar V_c, V_c,corr, as V_c,corr = V_c/((1+z) sin i). This correction inherently assumes that galaxies observed face-on are perfectly circular, which is certainly not accurate for all galaxies. By construction, this process has a relatively small effect on ETGs while LTGs may have V_c underestimated by up to 270 km s^-1. This is typically the case for galaxies observed close to face-on, however, for which V_c is already quite uncertain. Among the 563 galaxies with well sampled rotation curves, we find a median increase in V_c due to our inclination correction of 4.7 km s^-1, and only 7% have an increase in V_c of more than 50 km s^-1. As we are interested in ETGs in this work, cases such as this will not affect our results. § Σ_MEAN: MASKING AND BEAM SMEARING CORRECTION In this appendix we describe in detail our methods of masking bad spaxels prior to measuring σ_mean and correcting σ_mean for the effects of seeing, commonly referred to as beam smearing. We define bad spaxels as those that do not satisfy σ_error < σ × 0.1 + 25 km s^-1.
This requires the measured error in σ in each spaxel to be smaller than a fraction of the measured σ. The inclusion of the +25 km s^-1 is needed so that we do not exclude a majority of spaxels with a low measurement of σ. Finally, we also exclude spaxels with σ < 35 km s^-1, which is the limit to which we trust our measurements (see <cit.> for more on tests of our pPXF procedure). σ_mean is then measured as the flux-weighted velocity dispersion over unmasked spaxels in our stellar velocity dispersion maps. Formally this is defined as σ_mean = ∑_i F_i σ_i / ∑_i F_i, where F_i and σ_i are the flux and velocity dispersion of the i-th unmasked spaxel. Our measurements of σ_mean employ all spaxels meeting the quality cut described above. We test the robustness of σ_mean by remeasuring this value within radii between 1.0 and 8.0 arcsec (2-16 spaxels). We find that σ_mean measurements level off beyond 1.5 arcsec, and remain unchanged out to 8.0 arcsec. This means that σ_mean measurements are robust for all galaxies that meet our V_c quality cut, i.e. velocities are measured beyond r_t. The major difficulty in estimating global velocity dispersions from IFS observations is accounting for the effects of beam smearing, which can artificially inflate σ measured in individual spaxels <cit.>. This effect is enhanced in the central regions of rapidly rotating galaxies where large velocity gradients are observed over individual spaxels. Beam smearing is a complex effect, acting in all three dimensions of IFS datacubes, and a significant ongoing effort to understand beam smearing in SAMI data is underway. In the meantime, we perform a simple beam smearing correction on σ_mean following <cit.>. For each galaxy we estimate the additional σ induced by beam smearing, σ_bs, as: σ_bs ≈ (dV/dθ) σ_θ, where dV is the velocity gradient defined by our V_c fits as V_c/r_t (see <ref>), dθ is the spaxel size of 0.5 arcsec, and σ_θ is the seeing of our observations. Seeing values for SAMI galaxies are catalogued at the time of observation, with the median seeing at the AAT site being 1.8 arcsec. We estimate a “beam smearing corrected” σ_mean as σ_m,corr = √(σ_mean^2-σ_bs^2). The typical corrections measured in this way reduce the measured σ_mean by ∼0-30 km s^-1 with a clear dependence on V_c. Below V_c = 60 km s^-1, corrections are closer to ∼0-5 km s^-1. Note that this may result in measured values of σ_m,corr below 35 km s^-1, which we regard as the lower limit to which we trust measurements of the stellar σ in individual spaxels. Prior to performing our beam smearing correction, all measured σ_mean values are larger than 40 km s^-1, thus σ_m,corr values below 35 km s^-1 result entirely from our beam smearing correction. We note that rotation curves may also be affected by beam smearing, in particular the velocity gradient in the central regions. Our V_c measure is largely constrained by the asymptotic velocities at large radii and thus will be minimally affected by beam smearing (if at all). We deem the σ_mean correction described here necessary, however, as this is a flux-weighted quantity and is therefore biased towards the central regions where beam smearing effects are at a maximum.
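A compact sketch of this masking-plus-correction chain (our own illustration; note in particular the unit assumption that r_t from the rotation-curve fit is expressed in spaxels, so that dV = V_c/r_t is a gradient per spaxel):

import numpy as np

def sigma_mean(flux, sigma, sigma_err):
    """Flux-weighted global dispersion over spaxels passing the quality cut above."""
    good = (sigma_err < 0.1 * sigma + 25.0) & (sigma >= 35.0)
    return np.sum(flux[good] * sigma[good]) / np.sum(flux[good])

def sigma_beam_smear(v_c, r_t_spax, seeing, dtheta=0.5):
    """sigma_bs ~ (dV/dtheta)*sigma_theta: dV = V_c/r_t (km/s per spaxel, r_t in
    spaxels -- our unit assumption), dtheta = 0.5 arcsec spaxels, seeing in arcsec."""
    dV = v_c / r_t_spax            # km/s per spaxel
    return dV / dtheta * seeing    # km/s

def sigma_corrected(sig_mean, sig_bs):
    """sigma_m,corr = sqrt(sigma_mean^2 - sigma_bs^2); clipped at zero, and it may
    fall below the 35 km/s per-spaxel trust limit, as noted in the text."""
    return np.sqrt(max(sig_mean**2 - sig_bs**2, 0.0))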
http://arxiv.org/abs/1704.08433v1
{ "authors": [ "Robert Bassett", "K. Bekki", "L. Cortese", "W. J. Couch", "A. E. Sansom", "J. van de Sande", "J. J. Bryant", "C. Foster", "S. M. Croom", "S. Brough", "S. M. Sweet", "A. M. Medling", "M. S. Owers", "S. P. Driver", "L. J. M. Davies", "O. I. Wong", "B. A. Groves", "J. Bland-Hawthorn", "S. N. Richards", "M. Goodwin", "I. S. Konstantopoulos", "J. S. Lawrence" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170427050327", "title": "The SAMI Galaxy Survey: Kinematics of Dusty Early-Type Galaxies" }
http://arxiv.org/abs/1704.08116v1
{ "authors": [ "Thibault Damour", "Philippe Spindel" ], "categories": [ "gr-qc", "hep-th" ], "primary_category": "gr-qc", "published": "20170426134621", "title": "Quantum Supersymmetric Cosmological Billiards and their Hidden Kac-Moody Structure" }
Semantic Autoencoder for Zero-Shot Learning Elyor Kodirov     Tao Xiang     Shaogang GongQueen Mary University of London, UK {e.kodirov, t.xiang, s.gong}@qmul.ac.ukDecember 30, 2023 ===========================================================================================================================Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the projection domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection/code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the projection function learned from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric, which enables us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to the supervised clustering problem, it also beats the state-of-the-art. § INTRODUCTION A recent endeavour of computer vision research is to scale the visual recognition problem to large scale. This is made possible by the emergence of large-scale datasets such as ImageNet <cit.> and the advances in deep learning techniques <cit.>. However, scalability remains an issue because, beyond daily objects, collecting image samples for rare and fine-grained object categories is difficult even with modern image search engines. Taking the ImageNet dataset for example, the popular large-scale visual recognition challenge (ILSVRC) <cit.> mainly focuses on the task of recognising 1K classes, a rather small subset of the full ImageNet dataset consisting of 21,814 classes with 14M images. This is because many of the 21K object classes are only composed of a handful of images, including 296 classes with only one image. Humans can identify approximately 30,000 basic object categories <cit.> and many more sub-classes, e.g. breeds of dogs and combinations of attributes and objects. Importantly, humans are very good at recognising objects without seeing any visual samples. In machine learning, this is considered as the problem of zero-shot learning (ZSL). For example, a child would have no problem recognising a “zebra” if he/she has seen horses before and also learned that a “zebra” is like a horse with black-and-white stripes. Inspired by humans' ZSL ability, there is a recent surge of interest in machine learning based ZSL for scaling up visual recognition to unseen object classes without the need for additional data collection <cit.>. Zero-shot recognition relies on the existence of a labelled training set of seen classes and the knowledge about how each unseen class is semantically related to the seen classes.
Seen and unseen classes are usually related in a high-dimensional vector space, which is called the semantic embedding space. Such a space can be a semantic attribute space <cit.> or a semantic word vector space <cit.>. In the semantic embedding space, the names of both seen and unseen classes are embedded as vectors called class prototypes <cit.>. The semantic relationships between classes can then be measured by a distance, e.g. the prototypes of zebra and horse should be close to each other. Importantly, the same space can be used to project a feature representation of an object image, making visual recognition possible. Specifically, most existing ZSL methods learn a projection (mapping) function from a visual feature space to a semantic embedding space using the labelled training visual data consisting of seen classes only. At test time for recognising unseen objects, this mapping function is then used to project the visual representation of an unseen class image into the same semantic space where both seen and unseen classes reside. The task of unseen class recognition is then realised by a simple nearest neighbour (NN) search – the test image is assigned the class label of the nearest unseen class prototype in the projected semantic space. The training seen classes and testing unseen classes are different. Although they can be considered as two overlapping domains with some degree of shared semantics, there exist significant domain differences, e.g. the visual appearance of the same attributes can be fairly different in unseen classes. Existing ZSL models mostly suffer from the projection domain shift problem <cit.>. That is, if the projection for visual feature embedding is learned only from the seen classes, the projections of unseen class images are likely to be misplaced (shifted) due to the bias of the training seen classes. Sometimes this shift can be far away from the correct corresponding unseen class prototypes, making the subsequent NN search inaccurate. In this work, we present a novel approach to zero-shot learning based on the encoder-decoder paradigm <cit.>. Specifically, an encoder projects a visual feature representation of an image into a semantic representation space such as an attribute space, similar to a conventional ZSL model. However, we also consider the visual feature projection as an input to a decoder which aims to reconstruct the original visual feature representation. This additional reconstruction task imposes a new constraint in learning the visual → semantic projection function, so that the projection must also preserve all the information contained in the original visual features, i.e. they can be recovered by the decoder <cit.>. We show that this additional constraint is very effective in mitigating the domain shift problem. This is because, although the visual appearance of attributes may change from seen classes to unseen classes, the demand for more truthful reconstruction of the visual features is generalisable across seen and unseen domains, making the learned projection function less susceptible to domain shift. More precisely, we formulate a semantic autoencoder with the simplest possible encoder and decoder model architecture (Fig. <ref>): both have one linear projection to or from a shared latent embedding/code layer, and the encoder and decoder are symmetric so that they can be represented by the same set of parameters.
Such a design choice is motivated by computational efficiency – the true potential of a ZSL model is realised when applied to large-scale visual recognition tasks where computational speed is essential. Even with this simple formulation, solving the resultant optimisation problem efficiently is not trivial. In this work, one such solver is developed whose complexity is independent of the training data size and which is therefore suitable for large-scale problems. Our semantic autoencoder differs from a conventional autoencoder <cit.> in that the latent layer has clear semantic meaning: it corresponds to the semantic space and is subject to strong supervision. Therefore our model is not unsupervised. Beyond ZSL, it can also be readily used for solving other problems where a discriminative low-dimensional representation is required to cluster visually similar data points. To demonstrate its general applicability, our SAE model is formulated for the supervised clustering problem <cit.>. Our contributions are: (1) A novel semantic encoder-decoder model is proposed for zero-shot learning. (2) We formulate a semantic autoencoder which learns a low-dimensional semantic representation of input data that can be used for data reconstruction. An efficient learning algorithm is also introduced. (3) We show that the proposed semantic autoencoder can be applied to other problems such as supervised clustering. Extensive experiments are carried out on six benchmarks for ZSL which show that the proposed SAE model achieves state-of-the-art performance on all the benchmarks. § RELATED WORK Semantic space A variety of zero-shot learning models have been proposed recently <cit.>. They use various semantic spaces. Attribute space is the most widely used. However, for large-scale problems, annotating attributes for each class becomes difficult. Recently, semantic word vector space has started to gain popularity, especially in large-scale zero-shot learning <cit.>. Better scalability is typically the motivation, as no manually defined ontology is required and any class name can be represented as a word vector for free. Beyond semantic attributes or word vectors, direct learning from textual descriptions of categories has also been attempted, e.g. Wikipedia articles <cit.>, sentence descriptions <cit.>. Visual → Semantic projection Existing ZSL models differ in how the visual space → semantic space projection function is established. They can be divided into three groups: (1) Methods in the first group learn a projection function from a visual feature space to a semantic space either using conventional regression or ranking models <cit.> or via deep neural network regression or ranking <cit.>. (2) The second group chooses the reverse projection direction, i.e. semantic → visual <cit.>. The motivation is to alleviate the hubness problem that is commonly suffered by nearest neighbour search in a high-dimensional space <cit.>. (3) The third group of methods learn an intermediate space to which both the feature space and the semantic space are projected <cit.>. The encoder in our model is similar to the first group of models, whilst the decoder does the same job as the second group. The proposed semantic autoencoder can thus be considered as a combination of the two groups of ZSL models but with the added visual feature reconstruction constraint. Projection domain shift The projection domain shift problem in ZSL was first identified by Fu et al. <cit.>.
In order to overcome this problem, a transductive multi-view embedding framework was proposed together with label propagation on a graph, which requires access to all test data at once. Similar transductive approaches are proposed in <cit.>. This assumption is often invalid in the context of ZSL because new classes typically appear dynamically and are unavailable before model learning. Instead of assuming access to all test unseen class data for transductive learning, our model is based on inductive learning and it relies only on enforcing the reconstruction constraint on the training data to counter domain shift. Autoencoder There are many variants of autoencoders in the literature <cit.>. They can be roughly divided into two groups: (1) undercomplete autoencoders and (2) overcomplete autoencoders. In general, undercomplete autoencoders are used to learn the underlying structure of data and are used for visualisation/clustering <cit.>, like PCA. In contrast, overcomplete autoencoders are used for classification based on the assumption that higher dimensional features are better for classification <cit.>. Our model is an undercomplete autoencoder since a semantic space typically has lower dimensionality than that of a visual feature space. All the autoencoders above focus on learning features in an unsupervised manner. On the contrary, our approach is supervised while keeping the main characteristic of the unsupervised autoencoders, i.e. the ability to reconstruct the input signal. Semantic encoder-decoder An autoencoder is only one realisation of the encoder-decoder paradigm. Recently deep encoder-decoders have become popular for a variety of vision problems ranging from image segmentation <cit.> to image synthesis <cit.>. Among them, a few recent works also exploited the idea of applying semantic regularisation to the latent embedding space shared between the encoder and decoder <cit.>. Our semantic autoencoder can be easily extended for end-to-end deep learning by formulating the encoder as a convolutional neural network and the decoder as a deconvolutional neural network with a reconstruction loss. Supervised clustering Supervised clustering methods exploit a labelled clustering training dataset to learn a projection matrix that is shared with a test dataset, unlike conventional clustering methods such as <cit.>. There are different approaches to learning the projection matrix: 1) metric learning-based methods that use similarity and dissimilarity constraints <cit.>, and 2) regression-based methods that use `labels' <cit.>. Our method is more closely related to the regression-based methods, because the training class labels are used to constrain the latent embedding space in our semantic autoencoder. We demonstrate in Sec <ref> that, similar to the ZSL problem, by adding the reconstruction constraint, significant improvements can be achieved by our model on supervised clustering. § SEMANTIC AUTOENCODER §.§ Linear autoencoder We first introduce the formulation of a linear autoencoder and then proceed to extend it into a semantic one. In its simplest form, an autoencoder is linear and only has one hidden layer shared by the encoder and decoder. The encoder projects the input data into the hidden layer with a lower dimension and the decoder projects it back to the original feature space, aiming to faithfully reconstruct the input data.
Formally, given an input data matrix 𝐗∈ℝ^d × N composed of N feature vectors of d dimensions as its columns, it is projected into a k-dimensional latent space with a projection matrix 𝐖∈ℝ^k × d, resulting in a latent representation 𝐒∈ℝ^k × N. The obtained latent representation is then projected back to the feature space with a projection matrix 𝐖^*∈ℝ^d × k and becomes 𝐗̂∈ℝ^d × N. We have k < d, i.e. the latent representation/code reduces the dimensionality of the original data input. We wish that the reconstruction error is minimised, i.e. 𝐗̂ is as similar as possible to 𝐗. This is achieved by optimising against the following objective: min_𝐖, 𝐖^* ‖𝐗-𝐖^*𝐖𝐗‖_F^2. §.§ Model Formulation A conventional autoencoder is unsupervised and the learned latent space has no explicit semantic meaning. With the proposed Semantic AutoEncoder (SAE), we assume that each data point also has a semantic representation, e.g., class label or attributes. To make the latent space in the autoencoder semantically meaningful, we take the simplest approach, that is, we force the latent space 𝐒 to be the semantic representation space, e.g., each column of 𝐒 is now an attribute vector given during training for the corresponding data point. In other words, the latent space is not latent any more during training. The learning objective thus becomes: min_𝐖, 𝐖^* ‖𝐗-𝐖^*𝐖𝐗‖_F^2    s.t.   𝐖𝐗 = 𝐒. To further simplify the model, we consider tied weights <cit.>, that is: 𝐖^* = 𝐖^⊤. The learning objective is then rewritten as follows: min_𝐖 ‖𝐗-𝐖^⊤𝐖𝐗‖_F^2    s.t.   𝐖𝐗 = 𝐒. Now we have only one projection matrix to estimate, instead of two (see Fig. <ref>(c)). §.§ Optimisation To optimise the objective in Eq. (<ref>), first we change Eq. (<ref>) to the following form: min_𝐖 ‖𝐗 - 𝐖^⊤𝐒‖_F^2    s.t.   𝐖𝐗 = 𝐒, by substituting 𝐖𝐗 with 𝐒. Solving an objective with a hard constraint such as 𝐖𝐗 = 𝐒 is difficult. Therefore, we relax the constraint into a soft one and rewrite the objective as: min_𝐖 ‖𝐗-𝐖^⊤𝐒‖_F^2 + λ‖𝐖𝐗 - 𝐒‖_F^2, where λ is a weighting coefficient that controls the importance of the first and second terms, which correspond to the losses of the decoder and encoder respectively. Now Eq. (<ref>) has a standard quadratic formulation; it is convex and has a global optimal solution. To optimise it, we simply take the derivative of Eq. (<ref>) and set it to zero. First, we re-organise Eq. (<ref>) using the trace properties Tr(𝐗) = Tr(𝐗^⊤) and Tr(𝐖^⊤𝐒) = Tr(𝐒^⊤𝐖): min_𝐖 ‖𝐗^⊤ - 𝐒^⊤𝐖‖_F^2 + λ‖𝐖𝐗 - 𝐒‖_F^2. Then, we can obtain the derivative of Eq. (<ref>) as follows: -𝐒(𝐗^⊤-𝐒^⊤𝐖) + λ(𝐖𝐗 - 𝐒)𝐗^⊤ = 0, i.e. 𝐒𝐒^⊤𝐖 + λ𝐖𝐗𝐗^⊤ = 𝐒𝐗^⊤ + λ𝐒𝐗^⊤. If we denote 𝐀 = 𝐒𝐒^⊤, 𝐁 = λ𝐗𝐗^⊤, and 𝐂 = (1+λ)𝐒𝐗^⊤, we have the following formulation: 𝐀𝐖 + 𝐖𝐁 = 𝐂, which is a well-known Sylvester equation that can be solved efficiently by the Bartels-Stewart algorithm <cit.>. In MATLAB, it can be implemented with a single line of code: sylvester (https://uk.mathworks.com/help/matlab/ref/sylvester.html). Importantly, the complexity of Eq. (<ref>) depends on the size of the feature dimension (𝒪(d^3)), and not on the number of samples; it thus can scale to large-scale datasets. Algorithm 1 shows a 6-line MATLAB implementation of our solver. § GENERALISATION §.§ Zero-Shot Learning Problem definition Let 𝐘 = {𝐲_1, ... , 𝐲_s} and 𝐙 = {𝐳_1, ... , 𝐳_u} denote a set of s seen and u unseen class labels, which are disjoint: 𝐘∩𝐙 = ∅. Similarly 𝐒_Y = {𝐬_1, ... , 𝐬_s}∈ℝ^s × k and 𝐒_Z = {𝐬_1, ... , 𝐬_u}∈ℝ^u × k denote the corresponding seen and unseen class semantic representations (e.g. k-dimensional attribute vectors).
Given training data with N samples 𝐗_Y = {(𝐱_i, 𝐲_i, 𝐬_i)}∈ℝ^d × N, where 𝐱_i is a d-dimensional visual feature vector extracted from the i-th training image from one of the seen classes, zero-shot learning aims to learn a classifier f: 𝐗_Z→𝐙 to predict the label of an image coming from the unseen classes, where 𝐗_Z = {(𝐱_i, 𝐳_i, 𝐬_i)} is the test data and 𝐳_i and 𝐬_i are unknown. SAE for zero-shot learning Given a semantic representation 𝐒 such as attributes, and the training data 𝐗_Y, using our SAE we first learn the encoder 𝐖 and decoder 𝐖^⊤ by Algorithm 1. Subsequently, zero-shot classification can be performed in two spaces: 1) With the encoder projection matrix 𝐖: We can embed a new test sample 𝐱_i ∈𝐗_Z into the semantic space by ŝ_i = 𝐖𝐱_i. After that, the classification of the test data in the semantic space can be achieved by simply calculating the distance between the estimated semantic representation ŝ_i and the prototypes 𝐒_Z: Φ(𝐱_i) = argmin_j D(ŝ_i, 𝐒_Z_j), where 𝐒_Z_j is the prototype attribute vector of the j-th unseen class, D is a distance function, and Φ(·) returns the class label of the sample. 2) With the decoder projection matrix 𝐖^⊤: Similarly, we can embed the prototype representations into the visual feature space by 𝐱̂_i = 𝐖^⊤𝐬_i, where 𝐬_i ∈𝐒_Z and 𝐱̂_i ∈𝐗̂_Z is the projected prototype. Then, the classification of the test data in the feature space can be achieved by calculating the distance between the feature representation 𝐱_i and the prototype projections in the feature space 𝐗̂_Z: Φ(𝐱_i) = argmin_j D(𝐱_i, 𝐗̂_Z_j), where 𝐗̂_Z_j is the j-th unseen class prototype projected into the feature space. In our experiments we found that the two testing strategies yield very similar results (see Sec. <ref>). We report results with both strategies unless otherwise specified. §.§ Supervised Clustering For supervised clustering we are given a set of training data with class labels only, and a test set that shares the same feature representation as the training data and needs to be grouped into clusters. Let 𝐘 = {𝐲_1, ... , 𝐲_s} be a set of s training class labels. Denote 𝐒_Y = {𝐬_1, ... , 𝐬_s}∈ℝ^s × k as the corresponding semantic representations. Given a training dataset with N samples 𝐗_Y = {(𝐱_i, 𝐲_i, 𝐬_i)}∈ℝ^d × N, where 𝐱_i is a d-dimensional visual feature vector extracted from the i-th training image, we aim to learn a projection function f: 𝐗_Y →𝐒_Y from the training data and then apply the same projection function to a set of test data 𝐗_Z before clustering can be carried out. Using our SAE, the projection function is our encoder 𝐖. With only the training class labels, the semantic space is the label space, that is, 𝐬_i is a one-hot class label vector with only the element corresponding to the image class assuming the value 1, and all other elements set to 0. After the test data is projected into the training label space, we use k-means clustering as in existing work <cit.> for fair comparison. The demo code of our model is available at <https://elyorcv.github.io/projects/sae>. §.§ Relations to Existing Models Relation to ZSL models Many existing ZSL models learn a projection function from a visual feature space to a semantic space (see Fig. <ref>(a)). If the projection function is formulated as linear ridge regression as follows: min_𝐖 ‖𝐖𝐗 - 𝐒‖_F^2 + λ‖𝐖‖_F^2, we can see, comparing Eq. (<ref>) with Eq. (<ref>), that this is our encoder with an additional regularisation term on the projection matrix 𝐖.
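For concreteness, here is a minimal numpy sketch of ours (not the authors' released MATLAB demo) contrasting the two closed-form learners just discussed: the ridge-regression encoder above and the SAE solver of Eq. (<ref>), the latter solved with scipy's Bartels-Stewart-based Sylvester routine:

import numpy as np
from scipy.linalg import solve_sylvester

def ridge_encoder(X, S, lam):
    """Ridge regression of the equation above: argmin_W ||WX - S||_F^2 + lam ||W||_F^2."""
    d = X.shape[0]
    return S @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

def sae(X, S, lam):
    """SAE solver: A W + W B = C with A = S S^T, B = lam X X^T,
    C = (1 + lam) S X^T, solved by Bartels-Stewart (cf. Algorithm 1)."""
    A = S @ S.T
    B = lam * (X @ X.T)
    C = (1.0 + lam) * (S @ X.T)
    return solve_sylvester(A, B, C)

# Toy usage: d = 10 visual features, k = 4 attributes, N = 50 samples (columns).
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 50))
S = rng.standard_normal((4, 50))
W = sae(X, S, lam=0.2)     # encoder; W.T acts as the decoder
s_hat = W @ X              # embed features into the semantic space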
Recently, <cit.> proposed to reverse the projection direction: they project the semantic prototypes into the feature space: min_𝐖 ‖𝐗 - 𝐖^⊤𝐒‖_F^2 + λ‖𝐖‖_F^2, so this is the decoder of our SAE, but again with the regularisation term to avoid overfitting (see Fig. <ref>(b)). Our approach can thus be viewed as the combination of both models when ridge regression is chosen as the projection function and without considering the ‖𝐖‖_F^2 regularisation. This regularisation is unnecessary in our model due to the symmetric encoder-decoder design – since 𝐖^* = 𝐖^⊤, the norm of the encoder projection matrix ‖𝐖‖_F^2 cannot be large, because it would then produce large-valued projections in the semantic space which, after being multiplied with the large-norm decoder projection matrix, would result in a poor reconstruction. In other words, the regularisation on the norm of the projection matrices has been automatically taken care of by the reconstruction constraint <cit.>. Relation to supervised clustering models Recently, <cit.> showed that regression can be used to learn a Mahalanobis distance metric for supervised clustering. Specifically, given data 𝐗 with corresponding labels, the so-called `encoded labels' 𝐒 are generated and normalised as 𝐒 = 𝐒(𝐒'𝐒)^-1/2∈ℝ^s× N, where s is the number of training labels <cit.>. Then linear regression is employed to obtain a projection matrix 𝐖 for projecting the data from the feature space to the label space. At test time, 𝐖 is applied to the test data. Then, k-means clustering is applied to the projected data. Again, these models can be considered as the encoder of our SAE. We shall show that with the decoder and the additional reconstruction constraint, the learned code and distance metric become more meaningful, yielding superior clustering performance on the test data. § EXPERIMENTS §.§ Zero-Shot Learning Datasets Six benchmark datasets are used. Four of them are small-scale datasets: Animals with Attributes (AwA) <cit.>, CUB-200-2011 Birds (CUB) <cit.>, aPascal&Yahoo (aP&Y) <cit.>, and SUN Attribute (SUN) <cit.>. The two large-scale ones are ILSVRC2010 <cit.> (ImNet-1) and ILSVRC2012/ILSVRC2010 <cit.> (ImNet-2). In ImNet-2, as in <cit.>, the 1,000 classes of ILSVRC2012 are used as seen classes, while 360 classes of ILSVRC2010, which are not included in ILSVRC2012, are used as unseen classes. The summary of these datasets is given in Table <ref>. Semantic spaces We use attributes as the semantic space for the small-scale datasets, all of which provide the attribute annotations. Semantic word vector representations are used for the large-scale datasets. We train a skip-gram text model on a corpus of 4.6M Wikipedia documents to obtain the word2vec (https://code.google.com/p/word2vec/) <cit.> word vectors. Features All recent ZSL methods use visual features extracted by deep convolutional neural networks (CNNs). In our experiments, we use GoogleNet features <cit.>, i.e. the 1024D activation of the final pooling layer, as in <cit.>. The only exception is for ImNet-1: for fair comparison with published results, we use the AlexNet <cit.> architecture and train it from scratch using the 800 seen classes, resulting in 4096D visual feature vectors computed using the FC7 layer. Parameter settings Our SAE model has only one free parameter: λ (see Eq. (<ref>)). As in <cit.>, its value is set by class-wise cross-validation using the training data. The dimension of the embedding (middle) layer always equals that of the semantic space. Only the SUN dataset has multiple splits.
We use the same 10 splits used in <cit.>, and report the average performance. Evaluation metric For the small-scale datasets, we use multi-way classification accuracy as in previous works, while for the large-scale datasets flat hit@K classification accuracy is used as in <cit.>. hit@K means that the test image is classified to a `correct label' if it is among the top K labels. We report hit@5 accuracy as in other works, unless otherwise stated. Competitors 14 existing ZSL models are selected for the small-scale datasets and 7 for the large-scale ones (far fewer existing works report results on the large-scale datasets). The selection criteria are: (1) recent work: most of them were published in the past two years; (2) competitiveness: they clearly represent the state-of-the-art; and (3) representativeness: they cover a wide range of models (see Sec. <ref>). Comparative evaluation From the results in Table <ref> we can make the following observations: (1) Our SAE model achieves the best results on all 6 datasets. (2) On the small-scale datasets, the gap between our model's results and those of the strongest competitor ranges from 3.5% to 6.5%. This is despite the fact that most of the compared models use far more complicated nonlinear models and some of them use more than one semantic space. (3) On the large-scale datasets, the gaps are even bigger: on the largest, ImNet-2, our model improves over the state-of-the-art SS-Voc <cit.> by 8.8%. (4) Both the encoder and decoder projection functions in our SAE model (SAE (𝐖) and SAE (𝐖^⊤) respectively) can be used for effective ZSL. The encoder projection function seems to be slightly better overall. Ablation study The key strength of our model comes from the additional reconstruction constraint in the autoencoder formulation. Since most existing ZSL models use more sophisticated projection functions than our linear mapping, in order to evaluate how important this additional constraint is, we consider ZSL baselines that use the same simple projection functions as our model. As discussed in Sec. <ref>, without the constraint both the encoder and decoder can be considered as conventional ZSL models with linear ridge regression as the projection function, differing only in the projection direction. Table <ref> shows that, when the projection function is the same, adding the additional reconstruction constraint makes a huge difference. Note that, compared to the state-of-the-art results in Table <ref>, simple ridge regression is competitive but clearly inferior to the best models due to its simple linear projection function. However, when the two models are combined in our SAE, we obtain a much more powerful model that beats all existing models. Generalised Zero-Shot Learning Another ZSL setting that has emerged recently is the generalised setting, under which the test set contains data samples from both the seen and unseen classes. We follow the same setting as <cit.>. Specifically, we hold out 20% of the data samples from the seen classes and mix them with the data samples from the unseen classes. The evaluation metric is now the Area Under Seen-Unseen accuracy Curve (AUSUC), which measures how well a zero-shot learning method can trade off between recognising data from seen classes and that of unseen classes <cit.>. The upper bound of this metric is 1. The results on AwA and CUB are presented in Table <ref>, comparing our model with 5 other alternatives. We can see that on AwA, our model is slightly worse than the state-of-the-art method SynC^struct <cit.>.
However, on the more challenging CUB dataset, our method significantly outperforms the competitors. Computational cost We evaluate the computational cost of our method in comparison with three linear ZSL models, SSE <cit.>, ESZSL <cit.> and AMP <cit.>, which are among the more efficient existing ZSL models. Table <ref> shows that for model training, our SAE is at least 10 times faster. For testing, our model is still the fastest, although ESZSL is close. §.§ Supervised Clustering Datasets Two datasets are used. A synthetic dataset is generated following <cit.>. Specifically, the training set is composed of 3-dimensional samples divided into 3 clusters, and each cluster has 1,000 samples. Each of these clusters is composed of two subclusters as shown in Fig. <ref>(a). What makes the dataset difficult is that the subclusters of the same cluster are closer to the subclusters from different categories than to each other when measured with the Euclidean distance. Furthermore, some samples are corrupted by noise, which puts them in the subclusters of other categories in the feature space. We generate our test dataset with similar properties (and the same number of examples N=3000) as the training set. To make clustering more challenging, the number of samples for each cluster is made different: 1000, 2000, and 4000 for the three clusters respectively. This dataset is designed to evaluate how robust the method is against the size of clusters and its ability to avoid being biased by the largest category. More details on the dataset can be found in <cit.>. We also test our algorithm with a real dataset – Oxford Flowers-17 (848 images) <cit.>. We follow exactly the same settings as <cit.>. Specifically, a ground truth foreground/background segmentation is provided for every image. To extract features, first, images are resized to a height of 100 pixels, and SIFT and colour features (Lab, RGB, and intensity) are extracted from each 8×8 patch centred at every pixel, resulting in a 135D feature vector for each pixel. Each image has about 10^4 patches, and the data matrix for the whole dataset has about 2.2 × 10^6 rows – this is thus a large-scale problem. The dataset has 5 random splits with 200 images for training, 30 for validation, and the rest for testing. Evaluation metric We calculate the clustering quality with a loss defined as Δ = ‖Ĉ-C‖^2 <cit.>, where C and Ĉ are the ground truth and predicted clustering matrices (obtained using k-means) respectively. Competitors We compare our method with the state-of-the-art methods, which all formulate the supervised clustering problem as a metric learning problem. These include Xiang <cit.>, Lajugie <cit.>, KISSME <cit.>, ITML <cit.>, LMNN <cit.>, and MLCA <cit.>. Comparative evaluation Table <ref> and Table <ref> show the synthetic data results with and without noise respectively. It can be seen that in terms of clustering accuracy, our method is much better than all compared methods. On computational cost, our model is more expensive than MLCA but much cheaper than all the others. Figure <ref> visualises the clustering results. On the real image segmentation data, Table <ref> compares our SAE with the other methods. Again, we can see that SAE achieves the best clustering accuracy. The training time for SAE is 93 seconds, while MLCA takes 39 seconds. Note that the data size is 2.2× 10^6, so both are very efficient. § CONCLUSION We proposed a novel zero-shot learning model based on a semantic autoencoder (SAE).
The SAE model uses a very simple and computationally fast linear projection function and introduces an additional reconstruction objective for learning a more generalisable projection function. We demonstrate through extensive experiments that this new SAE model outperforms existing ZSL models on six benchmarks. Moreover, the model is further extended to address the supervised clustering problem and again produces state-of-the-art performance. § ACKNOWLEDGEMENT The authors were funded in part by the European Research Council under the FP7 Project SUNNY (grant agreement no. 313243).
http://arxiv.org/abs/1704.08345v1
{ "authors": [ "Elyor Kodirov", "Tao Xiang", "Shaogang Gong" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170426204553", "title": "Semantic Autoencoder for Zero-Shot Learning" }
Periodic Anderson Model with Holstein Phonons for the Description of the Cerium Volume Collapse Enzhi Li^1,2, Shuxiang Yang^1,2, Peng Zhang^3, Ka-Ming Tam^1,2, Mark Jarrell^1,2, and Juana Moreno^1,2 December 30, 2023 =========================================================================================================== As a sequel to the author's preceding paper, we give full proofs of some explicit formulas about factorizations of K-k-Schur functions associated with any multiple k-rectangles.§ INTRODUCTION This paper is a sequel to the author's preceding paper <cit.>. In <cit.>, we investigated some factorization properties of a certain family of symmetric functions called K-k-Schur functions from the combinatorial viewpoint. See <cit.> and its references for the backgrounds of these functions and detailed definitions. In this paper we give a proof of a fundamental formula stated in <cit.>: R_t∪ R_t = R_t∑_⊂ R_t, where R_t (1≤ t≤ k) stands for the partition (t^k+1-t)=(t,t,…,t), and μ∪ν stands for the partition obtained by reordering (μ_1,…,μ_l(μ), ν_1, …, ν_l(ν)) in weakly decreasing order, for any partitions μ, ν. Let k be a positive integer. T. Ikeda suggested that R_t∪ is divisible by and raised the question of what the quotient R_t∪/ is. We have shown that, for any k-bounded partition  and any union of k-rectangles P = R_t_1^a_1∪⋯∪ R_t_m^a_m (1≤ t_1 < … < t_m ≤ k and a_1,…,a_m>0), with R_t^a = R_t∪…∪ R_t (a times), P∪ is divisible by . More precisely, we can write P∪ = P( + ∑_|μ| < || a_P,,μμ) for some coefficients a_P,,μ <cit.>. We have given explicit formulas of the coefficients a_P,,μ for some cases. Moreover, we have shown the following factorization formulas of P (<cit.> and (13) in its proof): R_t_1^a_1∪⋯∪ R_t_m^a_m = R_t_1^a_1⋯R_t_m^a_m, R_t^a = R_t(R_t∪ R_t/R_t)^a-1. This paper is devoted to the proof of R_t∪ R_t/R_t = ∑_⊂ R_t ((<ref>) in Theorem <ref>). Note that (<ref>) can be rewritten as R_t_1^a_1∪⋯∪ R_t_m^a_m = R_t_1^a_1∪…∪ R_t_m-1^a_m-1 R_t_m^a_m, and this formula and (<ref>) can be seen as special cases of (<ref>), as (<ref>) is a case without any “smaller terms” and (<ref>) is a case with every “smaller term”. As a result, we have the formula = R_t_1(∑_^(1)⊂ R_t_1^(1))^a_1-1…R_t_m(∑_^(m)⊂ R_t_m^(m))^a_m-1. Acknowledgement. The author would like to express his gratitude to T. Ikeda for suggesting the problem to the author and helping him with many fruitful discussions. He is grateful to H. Hosaka and I. Terada for many valuable comments and for pointing out mistakes and typos in the draft version of this paper. He is also grateful to the committee of the 29th international conference on Formal Power Series and Algebraic Combinatorics (FPSAC) for many valuable comments on the extended abstract version of this paper. This work was supported by the Program for Leading Graduate Schools, MEXT, Japan. The content of this paper is the second half of the author's master's thesis <cit.>.§ PRELIMINARIES In this paper we use the notations that appeared in the author's preceding paper; see <cit.> for details. Here we review some important notations. Let denote the set of all k-bounded partitions, which are partitions whose parts are all bounded by k. Let denote the set of all (k+1)-cores, which are partitions none of whose cells have a hook length equal to k+1. The bijection ↦ is defined by _i = #{j | (i,j)∈, (i,j)≤ k}, and its inverse map is denoted by ↦. We denote by R_t the partition (t^k+1-t)=(t,t,…,t) ∈ for 1≤ t ≤ k, which is called a k-rectangle. We sometimes abbreviate a removable (resp.
-addable) i-corner. In order to avoid making equations too wide, we may denote removable corner, addable corner, horizontal strip, vertical strip and weak strip briefly by rem.cor., add.cor., h.s., v.s., and w.s. For a cell c = (i, j), the residue of c is (c) = j - i mod (k+1) ∈ ℤ/(k+1)ℤ. For a partition , (i,j)∈(ℤ_>0)^2 is called -blocked if (i+1,j)∈. For partitions , μ, we denote by r_μ the number of distinct residues of -nonblocked μ-removable corners. We have employed the following “rewritten version” of Morse's Pieri rule for K-k-Schur functions as its definition. Let h_r = ∑_i_1≤ i_2≤…≤ i_r x_i_1… x_i_r (r∈ℤ_>0) be the complete symmetric functions. For ∈ and 0 ≤ r ≤ k, h_r · = ∑_s=0^r (-1)^r-s ∑_μ: (μ)/(): weak s-strip \binom{r_(μ)()}{r-s} μ. Example. Consider the case =(a,b) with k ≥ a ≥ b. Let us expand (a,b) into a linear combination of products of complete symmetric functions and K-k-Schur functions labeled by partitions with fewer rows. By using the Pieri rule (<ref>) we have (a) h_i = ((a,i) - (a,i-1)) + ((a+1,i-1) - (a+1,i-2)) + … + ((a+i-1,1) - (a+i-1,0)) + (a+i,0) (if a+i ≤ k), or … + ((k-1,a+i-k+1) - (k-1,a+i-k)) + ((k,a+i-k) - (k,a+i-k-1)) (if a+i > k), for i≤ a, and summing this over 0 ≤ i ≤ b, we have (a)(h_b+…+h_0) = (a,b) + (a+1,b-1) + ⋯ + (a+b,0) (if a+b≤ k), or ⋯ + (k,a+b-k) (if a+b≥ k); that is, (a)(h_b+…+h_0) = ∑_μ/(a): horizontal strip, |μ|=a+b, μ_1≤ k μ. Similarly we have (a+1)(h_b-1+…+h_0) = (a+1,b-1) + (a+2,b-2) + … = ∑_μ/(a+1): horizontal strip, |μ|=a+b, μ_1≤ k μ, hence (a,b) = (a)(h_b+…+h_0) - (a+1)(h_b-1+…+h_0). We employ the following notation again, which was often used in the preceding paper. Let (∅≠)∈ satisfying ⊂, where we write =(_1,_2,…,_l()-1) and = l() = l() - 1. (Here we consider R_t to be ∅ unless 1≤ t≤ k.) (Note: when l()=1, we have = 0 and = ∅ =, thus  satisfies . When l() > k+1, we have > k and ≠∅ =, thus  does not satisfy .) The following simple lemma is needed later. Throughout this paper, for a condition P we write [P]=1 if P is true and [P]=0 if P is false. For q,a,b∈ℤ, we have ∑_x=0^min(a,b) (-1)^x \binom{q-[x=b]}{a-x} = [a,b ≥ 0] \binom{q-1}{a}. Use \binom{x+1}{y+1} - \binom{x}{y} = \binom{x}{y+1} repeatedly. Note that in the case where a<b we use \binom{q}{a-a} = \binom{q-1}{a-a}. § A FACTORIZATION OF R_T^A §.§ Statements and examples In this section we would like to prove R_t∪ R_t = R_t∑_ν⊂ R_t ν ((<ref>) in Theorem <ref>). Let us illustrate the situation with an example again. Example. The case where t=k is already proved in <cit.>. Next consider the case where t=k-1. Let us do the calculation of R_k-1∪ R_k-1 explicitly when k=4. Then R_k-1 = R_3 = 3,3. We have R_3 = 3,3 = 3(3 + 2 + 1 + ∅) - 4(2 + 1 + ∅) by (<ref>). Then we consider a similar expansion for R_3∪ R_3. We have R_3∪3(3 + 2 + 1 + ∅) = R_3∪3,3 + R_3∪2,4, and R_3∪4(2 + 2·1 + 3·∅) = R_3∪2,4, by <cit.> and <cit.>. From this, or directly by <cit.>, we have R_3∪ R_3 = R_3∪3(3 + 2 + 1 + ∅) - R_3∪4(2 + 2·1 + 3·∅). Since we proved R_3∪3 = R_3(3 + 2 + 1 + ∅) and R_3∪4 = R_3·4 in <cit.>, we have R_3∪ R_3/R_3 = (3 + 2 + 1 + ∅)(3 + 2 + 1 + ∅) - 4(2 + 2·1 + 3·∅), where we call the first term (A) and the second term (B). Then using (<ref>) for (a,b)=(3,3),(3,2),(2,2),(2,1),(1,1),(1,0),(0,0) (for (A)) and for (a,b)=(4,2),(4,1),(4,0) (for (B)), we have (A) = ∑_{l(μ)≤ 2, μ_1≤ 4, |μ|≤ 6} μ, (B) = ∑_{l(μ)≤ 2, μ_1 = 4, |μ|≤ 6} μ. Hence we obtain R_3∪ R_3/R_3 = ∑_{l(μ)≤ 2, μ_1≤ 3, |μ| ≤ 6} μ = ∑_μ⊂ R_3 μ. Next let us explain how to calculate R_t∪ R_t in general. We shall write R̄_t = (t^k-t) (that is, R_t with its last row removed) and R̄_t+(1^i) = ((t+1)^i t^k-t-i).
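For instance (our own illustration of this notation, in the running example k = 4, t = 3, where i ranges over 0 ≤ i ≤ k-t = 1):

% k = 4, t = 3: R_3 = (3,3) has k+1-t = 2 rows, so removing its last row gives
\[
  \bar{R}_3 = (3^{\,k-t}) = (3), \qquad \bar{R}_3 + (1^1) = (4),
\]
% matching the partitions (3) and (4) that label the two groups of terms
% in the k = 4 example worked out above.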
Then we already know thatR_t∪ R_t = R_t∪ (t^k+1-t)= R_t∪R_t∑_i≥ 0ii h_t-i=- R_t∪ (R_t+(1^1))∑_i≥ 0i+1i h_t-1-i=+ R_t∪ (R_t+(1^2))∑_i≥ 0i+2i h_t-2-i=…= + (-1)^k-t-1R_t∪ (R_t+(1^k-t-1))∑_i≥ 0i+k-t-1i h_2t-k+1-i= + (-1)^k-tR_t∪ (R_t+(1^k-t))∑_i≥ 0i+k-ti h_2t-k-i,by applying <cit.> rewritten by using <cit.> (similarly to Remark after <cit.>) for P=R_t, μ=(t^k-t), r=t.Having calculated some examples, we may claim (and actually we shall prove later) thatR_t ∪ (R_t+(1^i)) =R_t∑_(t+1)^i⊂η⊂(R_t+(1^i))η. Now we assume this so that we have R_t∪ R_t/R_t = ∑_η⊂R_tη∑_i≥ 0ii h_t-i=- ∑_ (t+1)⊂η⊂ (R_t+(1^1))η∑_i≥ 0i+1i h_t-1-i=+ ∑_((t+1)^2)⊂η⊂(R_t+(1^2))η∑_i≥ 0i+2i h_t-2-i=…= + (-1)^k-t-1∑_((t+1)^k-t-1)⊂η⊂(R_t+(1^k-t-1))η∑_i≥ 0i+k-t-1i h_2t-k+1-i= + (-1)^k-t∑_((t+1)^k-t)⊂η⊂(R_t+(1^k-t))η∑_i≥ 0i+k-ti h_2t-k-i. Next we substitute the Pieri rule (<ref>) for each of the summations in the RHS of (<ref>),then nontrivial cancellations happen, finally we have, for each 0≤ j≤ k-t,∑_((t+1)^j)⊂η⊂(R_t+(1^j))η∑_i≥ 0i+ji h_t-j-i= ∑_ν s.t. ν⊂ (t+1)^k+1-t j≤ν'_t+1≤ j+1 |νR_t| ≤ t t-ν_k+1-t-[ν'_t+1>0]t-|νR_t|ν=+ ∑_ν s.t. ν⊂ (k^1(t+1)^k-t)ν_1>t+1 j≤ν'_t+1≤ j+1(ν)_1+ν'_t+1-1 ≤ 2t 2t-(ν)_12t-(ν)_1+1-ν'_t+1ν.(This calculation will be shown in a generalized form in Lemma <ref> later) Note that ν'_t+1=k-t+1 never happens in the summations of (<ref>) sinceit violates (ν_k+1-t+ν'_t+1=)|νR_t|≤ t or (ν_1+ν_k-t+1+ν'_t+1-1≤)(ν)_1+ν'_t+1-1 ≤ 2t.As a result, we haveR_t∪ R_t/R_t = ∑_ν⊂ (t+1)^k+1-t ν'_t+1=0 |νR_t| ≤ t t-ν_k+1-t-0t-|νR_t|ν, since all the summations in the RHS of (<ref>) except the first summation of the case ν'_t+1=j=0 are cancelled each other. Noting that |νR_t|=ν_k+1-t when ν'_t+1=0, we have = ∑_ν⊂ (t)^k+1-t=R_tt-ν_k+1-tt-ν_k+1-tν= ∑_ν⊂ R_tν,as desired.As mentioned above, though our first purpose was to calculate R_t ∪ R_t,we shall prove it in a somewhat more general form.This section is devoted to proving the following theorem.For any partition ,let =(_1,…,_i) if _i > t ≥_i+1 (we set =∅ if t≥_1). Let ,, be as in , in Section <ref>. Write v=_l(). Assume _≥ t ≥ v. Then we have R_t∪ = R_t∑_()⊂(ν)⊂()ν. In particular, we haveR_t∪ R_t = R_t∑_ν⊂ R_tν.Substituting this result into (31)in the proof of <cit.>replaced t_m with t and a_m with a, we have For 1≤ t ≤ k and a>0, we haveR_t^a=(∑_⊂ R_t)^a-1.Thus, substituting this into <cit.>we have= R_t_1(∑_^(1)⊂ R_t_1^(1))^a_1-1…R_t_n(∑_^(n)⊂ R_t_n^(n))^a_n-1. §.§ Proof of Theorem <ref>(<ref>) follows from (<ref>) with =R_t, noting thatin the condition of the summation can be dropped since ()= if ⊂ R_t.Recall the notation [P] which is 1 if P is true and 0 if P is false for a proposition P.We prove (<ref>) by induction on =l()=l()-1 ≥ 0. For the case where =0, we consider R_k+1 to be empty. Sinceis also empty, thus in this case (<ref>) follows from <cit.> If _>t, the theorem follows by <cit.>. Assume _=t.First we haveR_t∪= ∑_μ μ⊂ μ/:v.s.(-1)^|μ/|R_t∪μ∑_i≥ 0q_μ+['_t=μ'_t+1]+i-1i h_v-|μ/|-iby <cit.>and <cit.>, where we put q_=|/|+r_'' and rephrased the condition μ_≠_(=t) as '_t=μ'_t+1.=∅ if t≥_1).For μ satisfying ⊂μ⊂,we haveR_t∪μ = R_t∑_μ^∘⊂η⊂μη.by induction hypothesis.Substituting the right-hand side of (<ref>) into (<ref>), we haveR_t∪/R_t = ∑_μ μ⊂ μ/:v.s. (-1)^|μ/|∑_μ^∘⊂η⊂μη∑_i≥ 0q_μ+['_t=μ'_t+1]+i-1i h_v-|μ/|-i. Our task is to simplify the right hand side of (<ref>) into a linear combination ofν (ν∈). 
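In the special case (<ref>) of Theorem <ref>, the sum on the right-hand side runs over all partitions ν contained in the rectangle R_t = (t^{k+1-t}). This index set is easy to enumerate; the sketch below (our illustration) lists, for k=4 and t=3, the ten partitions μ with l(μ)≤2 and μ_1≤3 appearing in the example of Section 3.1:

from itertools import product

def partitions_in_rectangle(width, height):
    # all partitions nu with l(nu) <= height and nu_1 <= width,
    # returned as weakly decreasing tuples with trailing zeros stripped
    found = set()
    for rows in product(range(width + 1), repeat=height):
        if all(rows[i] >= rows[i + 1] for i in range(height - 1)):
            found.add(tuple(p for p in rows if p > 0))
    return sorted(found, key=lambda nu: (sum(nu), nu))

# k = 4, t = 3: R_3 = (3, 3) sits in a 2 x 3 rectangle
for nu in partitions_in_rectangle(width=3, height=2):
    print(nu)   # (), (1,), (1, 1), (2,), ..., (3, 3): ten partitions in total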
Since it involves long complicated calculations, we divide our task into some steps:* Step (A): Simplify ∑_μ^∘⊂η⊂μη∑_i≥ 0q_μ+['_t=μ'_t+1]+i-1i h_v-|μ/|-i into a linear combination of ν (ν∈). (See (<ref>), (<ref>) according to whether ν_1 ≤ k + 1 - or ν_1 > k + 1 - and the remark after Lemma <ref>) * Step (B): Evaluate the coefficient of ν in the RHS of (<ref>) expanded into a linear combination of {ν}_ν, which is the signed sum of the coefficients of ν computed in Step (A) with μ running. Remark. We do not need the assumption _≥ t to calculate the RHS of (<ref>) in the following two subsections, though we assumed it in order to derive (<ref>) itself.Some additional arguments are needed to find whether the equation (<ref>) holds in this more general situation. From examining some examples,it seems to be true when l()≤ k+1-t, but is not always true when l()> k+1-t.§.§ Step (A) This subsection is devoted to proving the following lemma. Note that it does not assume μ_≥ t. Let us introduce some notations: for a partitionand u∈_≥ 0,let _≤ u be a partition (_1,…,_u) and _>u be a skew shape /_≤ u, and define _≥ u and _<u similarly.Note that, in this paper we suppose the condition μ⊂ when we use the notation /μ,although, we also call μ a horizontal (resp. vertical) strip if there is at most one cell in each row (resp. column) of the difference set μ, even if not necessarily μ⊂. Assume μ⊂ and l(μ)=. Let d∈ℤ and a, e∈ℤ_≥ 0. Consider the following sum and write it as a linear combination of {ν}_ν: ∑_μ^∘⊂η⊂μη∑_i≥ 0d+ie h_a-i = ∑_ν b_νν.Then the coefficient b_ν is as follows. (Case 1) If ν_1≤ k+1-, b_ν =[μ^∘⊂ν νμ: h.s.] ∑_x 0≤ x ≤ r_νμ^∘|νμ|+x≤ a (-1)^x d+a - (|νμ|+x)er_νμ^∘x. In addition, if d=e∈ℤ_≥ 0, b_ν =[μ^∘⊂ν νμ: h.s.] [a≥ |νμ|] d+a-|νμ|-r_νμ^∘a-|νμ|. (Case 2) If ν_1>k+1-, thenwe put u=ν_1-(k+1-) and A=ν_-u+1 + |ν_≤-uμ| to avoid making the equation too wide. Then b_ν =[μ⊂ν ν∖μ: h.s.(P)] ∑_x 0≤ x ≤ r_νμ^∘ A+x≤ a (-1)^x d+a - (A+x)er_νμ^∘x. Here (P) is the condition that (P) = an empty condition (if l(μ)<+1-u),“μ_j=ν_j+1 for +1-u≤∀ j≤ l(μ)” (if l(μ)≥+1-u). In addition, if d=e∈ℤ_≥ 0, b_ν =[μ^∘⊂ν νμ: h.s.(P)] [a≥ A] d+a-A-r_νμ^∘a-A. Remark. Step (A) immediately follows from this lemma by putting d = e = q_μ + '_t = μ'_t+1 - 1 and a = v - |μ/|, noting that d+id = d+ii.Due to the Pieri rule (<ref>),the coefficient of ν in the LHS of (<ref>) isb_ν =∑_s=0^a d+a-se∑_i=0^s ∑_η s.t. μ^∘⊂η⊂μ (ν)/(η):w.s.of size i (-1)^s-ir_(ν),(η)s-i. Since η⊂μ⊂, we have (η)=η and there never exist more than one η-removable corners of the same residue. Thus r_(ν)(η)=#{(ν)-nonblocked η-removable corners}andr_(ν)(η)s-i =#{| [ η/⊂{η-removable corners},; |η/| = s-i,(ν)/: h.s. ]}.Thusb_ν = ∑_s=0^a∑_i=0^s ∑_η s.t. μ^∘⊂η⊂μ (ν)/η:w.s.of size i (-1)^s-id+a-se∑_ s.t. ⊂η η/⊂{η-rem. cor.}|η/|=s-i(ν)/: h.s. 1.Then, removing the summations ∑_s and ∑_a with paying attention to the relations i=|ν/η| and s=|ν/η|+|η/|=|ν/| and that the condition on i and s is 0≤ i ≤ s ≤ a, we haveb_ν = ∑_ (η,) (-1)^|η/|d+a-|ν/|e, summing over (η,) with conditions (a)μ^∘⊂η⊂μ,(b)(ν)/η :weak strip,(c)⊂η,(d)η/⊂{η-removable corners},(e)(ν)/ : horizontal strip,(f)|ν/|≤ a. (Note: The conditions (a) and (b) come from the conditions on η in the summation ∑_η in (<ref>), (c),(d) and (e) come from the condition to determinefrom η in the summation ∑_ in (<ref>), and (f) comes from the condition s≤ a. The conditions about the size of η/ and the weak strip (ν)/η have been removed since i runs under 0≤ i ≤ s. 
) Note that (d)η/⊂{-addable corners}η⊂, where we put = ∪{-addable corners}. Then we rewrite the summation so as to determinefirst according to the conditions (e) and (f), and then to choose η by the conditions (a)-(d). Here the conditions (a),(c),(d), together with η⊂ν which is trivially implied by (b) (recall <cit.>), can be rewritten as a single condition μ^∘∪⊂η⊂μ∩ν∩, which we denote by (g). Thus we obtainb_ν = ∑_ s.t. (e):(ν)/:h.s. (f): |ν/|≤ a∑_η s.t. (g):μ^∘∪⊂η⊂μ∩ν∩ (b):(ν)/η:w.s. (-1)^|η/|d+a-|ν/|e. _(X) Clearly b_ν=0 if ł(ν) > +1, sincemust satisfy ⊂μ⊂ and (ν)/ must be a horizontal strip. Hereafter we assume ł(ν)≤+1. Next we find conditions onfor which the sum (X) is nonzero.Case 1: ν_1≤ k+1-In this casethe condition (b)((ν)/η : weak strip)is equivalent to the condition that ν/η is a horizontal strip as explained below: by the characterization of weak strips, we have(ν)/η(=(η)) : w.s.(p): ν/η : h.s.and (q): ν/η : v.s. ν'/η'(=η) : h.s. .Since ν_1≤ k+1- we have (ν) = ν or(ν_1+ν_+1,ν_2,ν_3,…,ν_+1), and thus ν = ν'or(ν_1+ν_+1,ν_2,…,ν_)'. Therefore (p) implies (q).Besides, ν/η is always a horizontal strip if (ν)/ is a horizontal strip and ⊂η⊂ν(⊂(ν)). Therefore we can drop the condition(b) in (X).Hence, (X)=0 unless μ^∘∪ = μ∩ν∩ because η runs over the interval [μ^∘∪, μ∩ν∩] which is isomorphic to a Boolean lattice since (μ∩ν∩) / (μ^∘∪) is a subset of an antichain /,and the summands are constant up to a sign determined by η. Moreover,μ^∘∪ = μ∩ν∩ (1): max(μ_j,_j) = min(μ_j,ν_j,_j) (1≤ j≤ l(μ)), (2): _j = min(μ_j,ν_j,_j) (l(μ) < j), (1'): _j ≤μ_j ≤ν_j,_j (1≤ j≤ l(μ)), (2'): _j = min(μ_j,ν_j) (l(μ) < j), (0): μ⊂ν, (1”): _≤ l(μ) = μ∖(some rem. cor. of μ), (2'): _j = min(μ_j,ν_j)(l(μ) < j). Here,(1)(1') is obvious. (1')(0),(1”): (1') (0),_j≤μ_j≤_j (1≤ j≤ l(μ)), (0),_≤ l(μ)⊂μ,μ/_≤ l(μ)⊂{_≤ l(μ)-addable corners}, (0)and(1”). (2)(2'): since ν/ is a horizontal strip by (e), we have ν_j>_j _j-1≥ν_j>_j _j=_j+1. Hencewe have “(2) (ν_j>_j _j=μ_j)”.(2')(2): obvious. [scale=0.25](0,0) -| (17,3) -| (14,6) -| (11,9) -| (0,0);at (6,4.5) μ; (8,9) – (8,10) -| (6,12) -| (4,14) -| (0,9); [loosely dotted,thick] (4,14) -| (21,0);[loosely dotted,thick] (8,10) -| (8,15); [left] at (0,14.5) +1;(0,-0.1) to[out=-12,in=-168] node[below=1pt]k+1- (21,-0.1);(0,15) to[out=12,in=168] node[above=1pt]t (8,15); [red] (8,9.5) to [out=45, in=180] (11,12) node[right=1pt];[blue] (14.1,6.5) to [out=45, in=180] (16,9) node[right=1pt]ν;(6, 12) to [out=30,in=180] (10,15) node[right=1pt]μ; [red] (0.1,0.1) -| (16.9,2.9) -| (13.9,5.9) -| (10.9,8.9) -| (7.9,9.95) -| (5.95,10.95) -| (4.95,11.95) -| (3.95,12.95) -| (2.95,13.9) -| (0.1,0.1);[red] (10,8) rectangle (10.9,8.9);[red] (16,2) rectangle (16.9,2.9); [blue,decorate,decoration=zigzag,segment length=1mm,amplitude=.2mm] (-0.1,-0.1) -| (20,1) -| (17.1,3.1) -| (14.1,7.1) -| (11.1,9.1) -| (10.1,10.05) -| (7.95, 11.05) -| (5.05, 13.05) -| (3.05, 14.1) -| (2.01, 15) -| (-0.1,-0.1); If μ^∘∪ = μ∩ν∩, then η in (X) must be equal to μ∪. Hence we haveb_ν = ∑_ s.t. (e): (ν)/ : h.s.(f): |ν/|≤ a(0): μ⊂ν (1”),(2'):= (μ(some rem. cor. of μ)) ⊔ (ν∩(μ/μ))(-1)^|μ^∘∪/|d+a-|ν/|e. Here, the conditions (1”) and (2') mean that the choices ofcorrespond bijectively to the choices of S⊂{μ-removable corners} by =(μ S) ⊔ (ν∩(μ/μ)) = (ν∩μ) S. Then we have ν/ = ν / ((ν∩μ) S) = (νμ) ⊔ S since S⊂ν∩μ by (0). Hence (f) is equivalent to |S| + |νμ| ≤ a. Moreover, since (ν)_i=ν_i for any i≥ 2, the condition (e) is transformed as follows: (e): (ν)/: h.s. ν/: h.s.ν∖μ: h.s.and every element of S is ν-nonblocked. 
As a result, letting x be a variable corresponding to |S|, b_ν = [νμ: h.s. (0):μ⊂ν ] ∑_0≤ x≤ r_νμ (f): x≤ a-|νμ| (-1)^x d+a-|νμ|-xer_νμx. If, in addition, d=e≥0, we can obtain b_ν=[μ^∘⊂ν ν∖μ: h.s.] [a≥ |νμ|] d+a-|νμ|-r_νμ^∘a-|ν∖μ| by the following argument and the fact r_νμ≥ 0: in general for d∈ℤ_≥0 and f,r∈ℤ, ∑_0≤ x≤min(r,f)(-1)^xd+f-xdrx= [r,f≥ 0] ∑_0≤ x≤min(r,f)(-1)^xd+f-xdrx= [r,f≥ 0] ∑_0≤ x≤min(r,f)(-1)^f -d-1f-xrx= [r,f≥ 0] ∑_0≤ x≤ f(-1)^f -d-1f-xrx= [r,f≥ 0] (-1)^f r-d-1f= [r,f≥ 0] -r+d+ff. Now we have proved the lemma in Case 1.Case 2: ν_1>k+1-Similar to the above case,we shall find conditions on for which it holds that((X)=)∑_η s.t. (g): μ^∘∪⊂η⊂μ∩ν∩ (b): (ν)/η: w.s. (-1)^|η/|d+a-|ν/|e≠ 0together with (e)((ν)/ :horizontal strip) and (f)(|ν/|≤ a). Hereafter we assume (e) and (f).Since η⊂μ⊂ and ν/η is a horizontal strip by (e), it should hold that ν⊂ (k)∪. Put u = ν_1 - (k+1-). Then we have(ν)= (ν_1+ν_+1-u,ν_2,…,ν_+1), (ν)'= (ν_1+ν_+1-u,ν_2,…,ν̆_+1-u,…,ν_+1)by <cit.>. Hence(ν)/η : w.s. ν/η :h.s. and (ν)'/η : h.s.ν_1≥η_1≥…≥ν_≥η_≥ν_+1and ν_1+ν_+1-u≥η_1≥ν_2≥… ≥η_-u≥ν_+2-u≥η_+1-u≥…≥ν_+1≥η_, ν/η : h.s.,η_+1-u=ν_+2-u,⋮ η_=ν_+1. Hence we have (g): μ^∘∪⊂η⊂μ∩ν∩,(b): (ν)/η:w.s.ν/η:h.s., (μ∪)_1 ≤η_1 ≤ (μ∩ν∩)_1,⋮ (μ∪)_-u≤η_-u≤ (μ∩ν∩)_-u, (μ∪)_-u+1≤η_-u+1=ν_-u+2≤ (μ∩ν∩)_-u+1, ⋮ (μ∪)_≤η_=ν_+1≤ (μ∩ν∩)_.Similarly to Case 1, the condition “ν/η : horizontal strip” can be dropped under the conditions (g),(e), and thus we have(X)≠ 0 (Y):(μ∪)_1 = (μ∩ν∩)_1,⋮(μ∪)_-u = (μ∩ν∩)_-u, (μ∪)_-u+1≤ν_-u+2≤ (μ∩ν∩)_-u+1,⋮(μ∪)_≤ν_+1≤ (μ∩ν∩)_. Case 2-1: l(μ)<+1-uWe have(Y) (1): max(μ_j,_j) = min(μ_j,ν_j,_j) (1≤ j≤ l(μ)), (2): _j = min(μ_j,ν_j,_j) (l(μ) < j≤-u), (3): _j ≤ν_j+1≤min(μ_j,ν_j,_j) (j≥-u+1), (1'): _j ≤μ_j ≤ν_j,_j (1≤ j≤ l(μ)), (2'): _j = min(μ_j,ν_j) (l(μ) < j≤-u), (3'): _j = ν_j+1≤μ_j (j≥-u+1), (0): μ⊂ν, (1”): _≤ l(μ) = μ∖(some rem. cor. of μ), (2'): _j = min(μ_j,ν_j) (l(μ) < j≤-u), (3”): _j = ν_j+1(j≥-u+1), (4): ν_j+1≤μ_j (j≥-u+1).Here, (1)(1')(0),(1”), (2)(2') : by the same argument as Case 1.(3)(3'): since ν/ is a horizontal strip, we have _j≥ν_j+1 (∀ j). Hence (3)_j=ν_j+1 (∀ j≥+1-u). (3')(3”),(4): obvious. [scale=0.25](0,0) -| (17,3) -| (14,6) -| (11,9) -| (0,0);at (6,4.5) μ; (8,9) – (8,10) -| (6,12) -| (3,14) -| (0,9);[loosely dotted,thick] (3,14) -| (21,0);[loosely dotted,thick] (8,10) -| (8,15); [left] at (0,14.5) +1;[left] at (0,11.5) +1-u;(21,-0.1) to [out=-30,in=-150] node[below=1pt]u (24,-0.1);(0,-0.1) to[out=-12,in=-168] node[below=1pt]k+1- (21,-0.1);(0,15) to[out=12,in=168] node[above=1pt]t (8,15); [dotted,thick] (0,11) – (5,11); [red] (8,9.5) to [out=45, in=180] (11,12) node[right=1pt];[blue] (4.1,12.5) to [out=45, in=180] (6,16) node[right=1pt]ν;(6, 12) to [out=45,in=180] (10,15) node[right=1pt]μ; [red] (0.1,0.1) -| (16.9,2.9) -| (13.9,5.9) -| (10.9,8.9) -| (7.9,9.95) -| (5.90,10.90) -| (3.99,11.9) -| (2.9,12.99) -| (1.99,13.9) -| (0.1,0.1);[red] (10,8) rectangle (10.9,8.9);[red] (16,2) rectangle (16.9,2.9); [blue,decorate,decoration=zigzag,segment length=1mm,amplitude=.2mm] (-0.1,-0.1) -| (24,1) -| (17.1,3.1) -| (14.1,7.1) -| (11.1,9.1) -| (10.1,10.05) -| (7.05,11.05) -| (5.01,12.05) -| (4.01,13.01) -| (3.05,14.1) -| (2.01,15) -| (-0.1,-0.1); If (Y) holds, then η in (X) must satisfyη_i= (μ∪)_i (i≤-u),η_i= ν_i+1 = _i (i≥-u+1).Hence we haveb_ν =∑_ s.t. (e): (ν)/ : h.s. (f): |ν/|≤ a (0),(1”),(2'),(3”),(4) (-1)^|μ∖|d+a-|ν/|e. 
Similarly to Case 1, the conditions (1”), (2'), (3”) mean that the choices ofcorrespond bijectively to the choices of S⊂{μ-removable corners} by _≤-u=(ν∩μ)_≤-u S and (_-u+1,_-u+2,…)=(ν_-u+2,ν_-u+3,…). Hence, we have ν_≤-u/_≤-u = ν_≤-u / ((ν∩μ)_≤-u S) = (ν_≤-uμ)⊔ S,ν_>-u/_>-u = {(ν'_j,j) | 1≤ j≤ν_-u+1}, thus ν/ = {(ν'_j,j) | 1≤ j≤ν_-u+1}⊔ (ν_≤-uμ) ⊔ S. Hence (f) is equivalent to ν_-u+1 + |ν_≤-uμ| + |S| ≤ a. Moreover, the condition (e) is transformed as (e): (ν)/ : h.s. ν/ : h.s. (ν_>-u/_>-u) ⊔ (ν_≤-uμ) : h.s. and every element of S is ν-nonblockedμ_-u≥ν_-u+1and ν_≤-uμ : h.s. and every element of S is ν-nonblocked. Thus we have (e), (4) νμ : h.s. and every element of S is ν-nonblocked. As a result, letting x be a variable corresponding to |S|, b_ν = [νμ : h.s. (0): μ⊂ν ] =×∑_0≤ x≤ r_νμ (f): ν_-u+1+|ν_≤-uμ|+x≤ a (-1)^x d+a-(ν_-u+1 + |ν_≤-uμ|+x)er_νμx. The remaining equality (of the case d=e∈ℤ_≥ 0) can be proved in the same way as Case 1.Case 2-2: l(μ)≥+1-uWe have(Y) (1): max(μ_j,_j) = min(μ_j,ν_j,_j) (1≤ j≤-u), (2): max(μ_j,_j) ≤ν_j+1≤min(μ_j,ν_j,_j) (-u+1 ≤ j≤ l(μ)), (3): _j ≤ν_j+1≤min(μ_j,ν_j,_j) (j≥ l(μ)+1) (1'): _j ≤μ_j ≤ν_j,_j (1≤ j≤-u), (2'): _j = ν_j+1=μ_j (-u+1 ≤ j≤ l(μ)), (3'): _j = ν_j+1≤μ_j (j≥ l(μ)+1)(0): μ⊂ν, (1”): _≤-u = (μ)_≤-u∖(some rem. cor. of (μ)_≤-u), (2”): ν_j+1=μ_j (-u+1 ≤ j≤ l(μ)), (4): ν_j+1≤μ_j (j > l(μ)), (3”): _j = ν_j+1(j≥-u+1). Here, (1)(1'): obvious. (2)(2'):Since ν/ is a horizontal strip, we have _j≥ν_j+1 (∀ j). Hence(2)μ_j≤_j=ν_j+1≤ν_j,_j,μ_j _j=ν_j+1=μ_j.(3)(3'): same as Case 2-1.(2'),(3')(2”),(3”),(4): obvious.(1'),(2”)(0),(1”),(2”): obvious. [scale=0.25](0,0) -| (17,3) -| (14,6) -| (13,7) -| (12,8) -| (10,9) -| (0,0);at (6,4.5) μ; (8,9) – (8,10) -| (5,13) -| (2,14) -| (0,9);[loosely dotted,thick] (2,14) -| (21,0);[loosely dotted,thick] (8,10) -| (8,15); [left] at (0,14.5) +1;[left] at (0,6.5) +1-u;(21,-0.05) to [out=-30,in=-150] node[below=1pt]u (29,-0.05);(0,-0.05) to[out=-12,in=-168] node[below=1pt]k+1- (21,-0.05);(0,15) to[out=12,in=168] node[above=1pt]t (8,15); [dotted,thick] (0,6) – (13,6); [red] (6,9.5) to [out=45, in=180] (10,12) node[right=1pt];[blue] (4.1,12.5) to [out=45, in=180] (6,16) node[right=1pt]ν;(5, 12.5) to [out=45,in=180] (10,15) node[right=1pt]μ; [red] (0.05,0.05) -| (16.90,2.90) -| (13.90, 5.90) -| (12.90, 6.90) -| (11.90, 7.90) -| (9.90,8.90) -| (5.99,9.93) -| (4.93,10.95) -| (3.99,11.99) -| (2.99,12.93) -| (1.93,13.9) -| (0.05,0.05);[red] (13,5) rectangle (13.9,5.9);[red] (16,2) rectangle (16.95,2.95); [blue,decorate,decoration=zigzag,segment length=1mm,amplitude=.2mm] (-0.05,-0.05) -| (29,1) -| (17.05,3.05) -| (16, 4) -| (14.05,6.05) -| (13.05,8) -| (12, 9) -| (10,10) -| (6.01,11.01) -| (5.07,12.01) -| (4.01,13.07) -| (3.01,14.05) -| (2.01,15) -| (-0.05,-0.05);Hence we haveb_ν =∑_ s.t. (e): (ν)/ :h.s. (f): |ν/|≤ a (0),(1”),(2”),(3”),(4) (-1)^|μ∖|d+a-|ν/|e. Similarly to Case 1, the conditions (1”) and (3”) mean that the choices ofcorrespond bijectively to the choices of S⊂{μ_≤-u-removable corners} by _≤-u=μ_≤-u S and (_-u+1,_-u+2,…,)=(ν_-u+2,ν_-u+3,…). Furthermore, by the same way as Case 2-1, we have ν/ = {(ν'_j,j) | 1≤ j≤ν_-u+1}⊔ (ν_≤-uμ) ⊔ S. Hence (f) is equivalent to ν_-u+1 + |ν_≤-uμ| + |S| ≤ a. Moreover, the condition (e) is transformed as (e): (ν)/ : h.s. ν/ : h.s.μ_-u≥ν_-u+1,ν_≤-uμ : h.s.,every element of S is ν-nonblocked, by a similar argument to Case 2-1 and we have (e), (2”), (4) (2”),νμ :h.s.,every element of S is ν-nonblocked. As a result, letting x be a variable corresponding to |S|, we have b_ν = [νμ :h.s. 
(0): μ⊂ν (2”) ] =×∑_0≤ x≤ r_νμ ν_-u+1+|ν_≤-uμ|+x≤ a (-1)^x d+a-(ν_-u+1 + |ν_≤-uμ|+x)er_νμx. The remaining equality (of the case d=e∈ℤ_≥ 0) can be proved in the same way as Case 1. Now we have completed the proof of Lemma <ref>.§.§ Step (B)As in Step (A), we deal with a slightly more general situation that we only assume ⊂ where =l(), dropping the assumption _≥ t.Notice that q_μ-1+['_t=μ'_t+1]≥ q_μ-1=|μ/|+r_μ''-1≥ 0, since if |μ/|=0 then μ= thus r_μ''=r_>0. Substituting the result of Step (A) for the RHS of (<ref>), if we write R_t∪/R_t =∑_νa_νν, then the coefficient a_ν is as follows: Case 1: if ν_1≤ k+1-,a_ν =∑_μ s.t. μ⊂ μ/: v.s. μ^∘⊂ν ν∖μ: h.s. f(μ),where we putf(μ)= (-1)^|μ/|f_2(μ)≥ 0f_1(μ)f_2(μ), f_1(μ)= q_μ-1+['_t=μ'_t+1] + v-|νμ|-|μ/|-r_νμ^∘, f_2(μ)= v-|νμ|-|μ/|. Case 2: if ν_1> k+1-,Recall the notations u=ν_1-(k+1-)and A=ν_-u+1 + |ν_≤-uμ|.Then similarly to Case 1, we havea_ν = X + Y,whereX=∑_μ s.t. μ⊂ μ/: v.s.l(μ)<+1-u[μ^∘⊂ν ν∖μ: h.s.]g(μ) andY = ∑_μ s.t. μ⊂ μ/: v.s.l(μ)≥+1-u[μ^∘⊂ν ν∖μ: h.s. (P) ] g(μ),where we putg(μ)= (-1)^|μ/|g_2(μ)≥ 0g_1(μ)g_2(μ), g_1(μ)= q_μ-1+['_t=μ'_t+1] + v-|μ/|-A-r_νμ^∘, g_2(μ)= v-|μ/|-A.In fact Y=0, since A ≥ν_-u+1≥μ_-u+1 > t ≥ v. Moreover, in fact the condition “l(μ)<+1-u” in the summation in X can be dropped since if μ satisfies l(μ)≥+1-u then A ≥ν_-u+1≥μ_-u+1>t≥ v. Hence we have a_ν = X =∑_μ s.t. μ⊂ μ/: v.s. μ^∘⊂ν ν∖μ: h.s. g(μ). To complete these calculations of (<ref>) and (<ref>),first we simplify the conditions on μ in the above summations. First, we can easily see some necessary conditions to a_ν≠ 0. In both cases, * ν should be contained by (k)∪ since μ⊂ and νμ is a horizontal strip.* The skew shape ν (⊂(ν∖μ)⊔(μ/)) should be a ribbonsince a union of a horizontal strip and a vertical stripnever contains a 2× 2 square. Otherwise, if ν is not a ribbon, this coefficient a_ν is equal to 0.* Moreover,unless ⊂ν, there are no μ such that⊂μ and μ^∘⊂ν, hence a_ν=0.* If ν_l > v (= _l), then f_2(μ) ≤ v - |νμ| ≤ v - |ν| ≤ v - ν_l < 0 and g_2(μ) ≤ v - (ν_-u+1 + |ν_≤-uμ|) ≤ v - ν_-u+1≤ v - ν_l < 0 for any μ⊂, thus a_ν=0.Now we assume ν⊂ (k)∪, ⊂ν, ν∖ is a ribbon.ν_l ≤ v = _l. We write (ν∩)= A_1 ⊔⋯⊔ A_a so that each A_i is a connected ribbon. We putX_i ={ (r,c)∈ A_i| (r+1,c)∈ A_i}, X'_i ={ (r,c)∈ A_i| (r,c-1)∈ A_i}, y_i =(r_i,c_i) :=the most northwest cell of A_i, t_i ='_c_i-1 - ν'_c_i = '_c_i-1 - r_i (≥ 0). Then A_i=X_i⊔ X'_i ⊔{y_i}. [scale=0.3](0,6) rectangle (1,7);(0,3) rectangle (1,6);(1,3) rectangle (5,4);(4,0) rectangle (5,3);(5,0) rectangle (7,1);(0,7) |- (-3, 10) ; (0,7) to [out=70, in=-70] node[right=1pt]t_i (0,10);at (-2,7.5) ; (0.5,6.5) to [out=170, in=45] (-1.5,5) node[left] y_i;(Xi) at (1,-1) X_i;(0.5,4) to [out=-100, in=100] (Xi.north);(4.5,1) to [out=-100, in=30] (Xi.east);(Xi') at (7,5) X'_i;(4,3.5) to [out=45, in=180] (Xi'.west);(6,0.5) to [out=60, in=-90] (Xi'.south);(15,5) rectangle (16,6);(16,5) rectangle (19,6);(18,2) rectangle (19,5);(19,2) rectangle (22,3);(21,0) rectangle (22,2); (15,6) |- (12,9); (15,6) to [out=70, in=-70] node[right=1pt]t_i (15,9);at (13,6.5) ; (15.5,5.5) to [out=-100, in=100] (15.5,4) node[below] y_i;(Xi2) at (18,0) X_i;(18.5,3) to [out=-100, in=100] (Xi2.north);(21.5,1) to [out=190, in=0] (Xi2.east);(Xi'2) at (22,5) X'_i;(18,5.5) to [out=30, in=150] (Xi'2.west);(21,2.5) to [out=60, in=-90] (Xi'2.south); We can assume c_1 < … <c_b ≤ t < c_b+1 < … < c_afor 0 ≤∃ b ≤ a, without loss of generality.Moreover we put{ d_1,…,d_e } = { c | 1 < c ≤ t,ν'_c ≤'_c < '_c-1}, z_i= '_d_i - 1 - '_d_i. 
In other words, d_1,…,d_e are the column indices not greater than t in which columnthere is an addable corner ofwhich does not belong to ν, and z_i is the number of boxes which we can add on the d_i-th column of . (See the figure below) [scale=0.2] (0,0) -| (36,5) -| (31,9) -| (23,13) -| (17,16) -| (12,20) -| (7,24) -| (4,29) -| (0,0);(0,9) – (23,9); at (17,4.5) ;at (6,14) /; [loosely dotted, thick] (4,29) -| (40,0);[loosely dotted, thick] (23,13) -| (23,29); (4,24) to [out=70,in=-70] node[right=1pt]z_1 (4,29);[below=-10pt] at (4.5,24) [ ⋮; d_1 ];(12,16) to [out=70,in=-70] node[right=1pt]z_2 (12,20);[below=-10pt] at (12.5,16) [ ⋮; d_2 ]; [red] (7.1,20.1) -| (10,21) -| (8,23) -| (7.1,20.1);[red] (23.1,9.1) -| (28,10) -| (24,14) -| (17.1,13.1) -| (23.1,9.1);[red] (31.1,5.1) -| (34,6) -| (32,8) -| (31.1,5.1);[red] (36.1,0) -| (40,1) -| (37,3) -| (36.1,0);[red,below=-10pt] at (7.5,20) [ ⋮; c_1 ];[red,below=-10pt] at (17.5,13) [ ⋮; c_b ];[red,below=-10pt] at (31.5,5) [ ⋮; c_b+1 ];[red,below=-10pt] at (36.5,0) [ ⋮; c_a ];[red] (10,21) to [out=45,in=190] (14,24) node (A1)[right]A_1;[red] (18,14) to [out=45,in=190] (22,17) node (Ab)[right]A_b;[red, loosely dotted,thick] (A1.south east) – (Ab.north west); [red] (32,6) to [out=45,in=190] (36,9) node (Ab1)[right]A_b+1;[red] (39,1) to [out=45,in=190] (43,4) node (Aa)[right]A_a;[red, loosely dotted,thick] (Ab1.south east) – (Aa.north west); [blue,decorate,decoration=zigzag,segment length=1mm,amplitude=.2mm] (-0.1,-0.1) -| (44,1.1) -| (37.1,3.1) -| (36.1,5.1) -| (34.1,6.1) -| (32.1,8.1) -| (31.1,9.1) -| (28.1,10.1) -| (24.1,14.1) -| (15.1, 15.9)-| (10.1,21.1) -| (8.1,23.1) -| (3,30) -| (-0.1,-0.1);[<-,blue,decorate,decoration=zigzag,segment length=1mm,amplitude=.2mm] (3,30) to [out=30,in=190] (9,32) node[right]ν; (0,0) to [out=110, in=-110] node[left](0,29); [below=0pt] at (20,0) k+1-;(0,9) to [out=-10, in=-170] node[below] t (23,9); Then we claim that the conditions on μ are transformed as follows:Claim 1.(1) μ⊂ (2) μ/: v.s. (3) μ^∘⊂ν (4) ν∖μ: h.s.[;;;;; μ:= μ((s_1,…,s_b),S, (x_1,…,x_e));μ:= ∪⋃_1≤ i≤ a X_i;μ := ∪⋃_1≤ i≤ b{ (r_i+j,c_i) | 0 ≤ j ≤ s_i }; μ := ∪{y_i| i∈ S};μ := ∪⋃_1≤ i≤ e{ ('_d_i+j,d_i) | 1 ≤ j ≤ x_i }; for ∃((s_1,…,s_b),S, (x_1,…,x_e)) with -1≤ s_i ≤ t_i, S⊂{b+1,…,a}, 0 ≤ x_i ≤ z_i. ]Proof of Claim 1:: Every element of ν should belong to νμ or μ/ since ν⊂ (νμ) ⊔ (μ/).Since ν∖μ is a horizontal strip, X_i⊂μ/. Since μ/ is a vertical strip, X'_i⊂ν∖μ.Take an arbitrary element (r,c) of μ/.* If (r,c)∈ν, then we have (r,c)∈(ν), thus (r,c)∈⋃_1≤ i≤ a{y_i}∪ X_i. * If (r,c)∉ν: if c>t, then (r,c)∈μ⊂ν, which is contradiction. Thus we have c≤ t. Since μ/ is a vertical strip, '_c-1≥ r > '_c. * if '_c ≥ν'_c, then c∈{d_1,…,d_e} by definition of d_i. Thus (r,c) = ('_d_i + j, d_i) for ∃ i, 1≤∃ j ≤'_d_i-1-'_d_i = z_i. * if '_c < ν'_c, then ('_c+1, c) ∈ν. Thus ('_c+1, c) ∈ A_i for ∃ i. Since (r,c)∉ν, (r,c)∉⋃_i A_i. Thus (r,c) = (r_i + j, c_i) for ∃ i and 1≤ j ≤'_c_i-1-'_c_i = t_i. ⟸: (1):clear.(3): since c_1,…,c_b, d_1, …, d_e ≤ t,we haveμ = (∪⋃_1≤ i≤ a X_i ∪{y_i| i∈ S}).To show (3), use (<ref>) andthatα⊂β, (r,c)∈β (α∪{(r,c)})⊂β.(Proof: (α∪{(r,c)}) = α, α∪{(r,c)}, α∪{(r,i)| 1≤ i ≤ c}according to whether c≤ t, c>t+1, c=t+1.)(4): Since A_i is a ribbon, we have (the below cell of y_i)∉ X'_i,whence νμ⊂(ν)∪⋃ X'_i∪{ y_1,…,y_a}: horizontal strip.(2):it suffices to show that for any (r,c)∈μ/, it holds (r,c-1)∈. * If (r,c)∈ X_i, then (r+1,c)∈ A_i⊂ν, whence (r,c-1)∈ since ν is a ribbon. * (the left cell of y_i)∈ is obvious by the definition of y_i. 
* If (r,c)=(r_i+j,c_i) for 1≤ i ≤ b and 0≤ j ≤ t_i, we have r≤ r_i+t_i='_c_i-1 thus (r,c-1)∈. * If (r,c)=('_d_i+j,d_i) for 1≤ i ≤ e and 1≤ j ≤ z_i, we have r≤'_d_i+z_i='_d_i-1 thus (r,c-1)∈.Claim 1 is proved.Claim 2. Put X = ∑_1≤ i ≤ a |X_i| and write μ_min=μ((-1,…,-1),∅,(0,…,0)). For μ = μ((s_1,…,s_b),S, (x_1,…,x_e)), * |μ/| = X + ∑_1≤ i ≤ b (1+s_i) + |S| + ∑_1 ≤ j ≤ e x_j. * |νμ| = |ν| - X - |S| - ∑_1≤ i ≤ bs_i≠ -1. * r_μ'' =- ∑_1≤ i≤ bs_i=t_i - ∑_1≤ j≤ ex_j=z_j - ∑_i∈ Sthe left of y_i is -rem. cor.. where = r_μ'_min,'. * q_μ =+ X + ∑_1≤ i≤ b (1+s_i-s_i=t_i) + ∑_1≤ j≤ e (x_j-x_j=z_j) - ∑_i∈ S (1-the left of y_i is -rem. cor.). * '_t=μ'_t+1 =+ ∑_i∈ Sc_i=t+1the left of y_i is -rem. cor., where = '_t=(μ_min)'_t+1. * r_νμ =+ ∑_i∈ S (1-the left of y_i is a ν-nonblocked -rem. cor.), where = r_ν,μ_min. Moreover, if ν_1> k+1-, *A= ν_-u+1 + |ν_≤-uμ| =- ∑_i∈ Sr_i ≤ - u - ∑_1≤ i≤ bs_i≠ -1r_i≤-u, where = ν_-u+1 + |ν_≤-uμ_min|. Thus, *f_1(μ) = q_μ - 1 + '_t=μ'_t+1 + v - |νμ| - |μ/| - r_νμ=+ ∑_1≤ i ≤ b (1 - s_i=t_i - s_i = -1) - ∑_1≤ j ≤ ex_j=z_j= C_5 + ∑_i∈ S( the left of y_i is a ν-nonblocked -rem. cor.= C_5 + ∑_i∈ S( - the left of y_i is a -rem. cor.= C_5 + ∑_i∈ S( + c_i=t+1the left of y_i is a -rem. cor.), where =+ X - 1 ++ v - |ν| -. *f_2(μ)= v - |νμ| - |μ/| = v - |ν| - ∑_1≤ j≤ e x_j - ∑_1≤ i≤ b (s_i + s_i = -1). *g_1(μ) = q_μ - 1 + '_t=μ'_t+1 + v - |μ/| - A - r_νμ=+ ∑_1≤ i≤ b r_i≤-u(1 - s_i=t_i - s_i = -1) = C_4 - ∑_1≤ i≤ b r_i>-us_i=t_i - ∑_1≤ j≤ ex_j=z_j= C_4 + ∑_i∈ S( the left of y_i is a ν-nonblocked -rem. cor.= C_4 + ∑_i∈ S( - the left of y_i is a -rem. cor.= C_4 + ∑_i∈ S( + c_i=t+1the left of y_i is a -rem. cor.= C_4 + ∑_i∈ S( - r_i > -u), where =- 1 ++ v --(=g_1(μ_min)). *g_2(μ)= v - |μ/| - A = v - X -- ∑_1≤ j ≤ e x_j = - ∑_1≤ i≤ b r_i≤-u (s_i + s_i = -1) - ∑_1≤ i≤ b r_i>-u (1 + s_i) - ∑_i∈ Sr_i>-u. Proof of Claim 2:It suffices to show (1)-(7) since (8)-(11) follow from them.(1), (2), (3), (5), (7): Obvious.(4): Recall q_μ = |μ/| + r_μ''.(6): The value of r_νμ is independent of s_1,…,s_b and x_1,…,x_e since c_1,…,c_b, d_1,…,d_e ≤ t. It suffices to show thatr_νμ_T - r_νμ_T = 1 - the left of y_i is a ν-nonblocked -rem. cor. for all i∈ S, T⊂ S{i} and T = T∪{i}. Put =μ_T, β=μ_ T=∪{y_i}. Recall y_i=(r_i,c_i).Case A: if l()=l(β) i.e. c_i>t+1, then ∪{y_i}=β, whencer_νβ-r_ν= 0(if (r_i,c_i-1) is a ν-nonblocked -rem. cor.), 1(otherwise),by <cit.>. [scale=0.3](0,0) -| (12,1) -| (10,3) -| (6,7) -| (0,0); (4,7) |- (3,9) |- (1,10) |- (0,11) – (0,7);(6,3) rectangle (7,5);(6,5) rectangle (7,6);at (3,3) ; [loosely dotted, thick] (4,9) – (4,11); (0,11) to [out=20,in=160] node[above]t (4,11);(6.5,5.5) to [out=90,in=-135] (8,8) node[anchor=south west] y_i=(r_i,c_i);(6.5,4) to [out=45,in=180] (10,7) node[right] X_i;(0,0) to [out=105, in=-105] node[left]l(γ)=l(β) (0,7);Now (r_i,c_i-1) is a ν-nonblocked -rem. cor. (r_i,c_i-1) is a ν-nonblocked -rem. cor..(Proof. : since (r_i,c_i-1)∈,(r_i,c_i-1) is a -rem. cor.(r_i,c_i-1) is a -rem. cor. : Note that(r_i,c_i)∉,. Thus (r_i,c_i-1) is not a -rem. cor.(r_i+1,c_i-1)∈ (r_i+1,c_i-1)∈ν(r_i,c_i-1) is ν-blocked.)Case B: if l() + 1 = l(β), i.e. 
c_i=t+1, then β = ∪(t+1), whencer_νβ-r_ν=1.Note that in this case (r_i,c_i-1) must not be a ν-nonblocked -removable corner since (r_i,c_i-1)∉.[scale=0.3](0,0) -| (9,2) -| (7,5) -| (0,0); (4,5) rectangle (5,7);(4,7) rectangle (5,8);at (3.5,2.5) ; (4,5) |- (3,10) |- (1,11) |- (0,12) – (0,5); [loosely dotted, thick] (4,10) – (4,12); (0,12) to [out=20,in=160] node[above]t (4,12);(4.5,7.5) to [out=90,in=-135] (6,10) node[anchor=south west] y_i=(r_i,c_i);(4.5,6) to [out=45,in=180] (8,9) node[right] X_i;[loosely dotted,thick] (0,7) – (4,7);(0,0) to [out=105, in=-105] node[left]l(γ) (0,7);[loosely dotted,thick] (-4,8) – (4,8);[loosely dotted,thick] (-4,0) – (0,0);(-4,0) to [out=105, in=-105] node[left]l(β) (-4,8); Hence in both cases we haver_νβ-r_ν =[(r_i,c_i-1) is not a ν-nonblocked -rem. cor.]. Claim 2 is proved.Now we get back to the calculations of a_ν. Case 1: if ν_1 ≤ k+1-,First we prove that if b>0 then a_ν=0.Assume b>0.Fix s_2,…,s_b, S, x_1,…,x_e and consider a sum f(μ)=f(μ((s_1,…,s_b),S, (x_1,…,x_e))) of (<ref>) according to the variable s_1.By Claim 2 this sum has the form∑_s_1=-1^t_1 (-1)^+s_1 - s_1 - s_1=-1≥ 0-s_1=t_1 - s_1=-1 - s_1 - s_1=-1(for some constants ,,), which is zero by Lemma <ref>.Thus we conclude a_ν = ∑_((s_2,…,s_b),S,(x_1,…,x_e))∑_s_1 f(μ((s_1,…,s_b),S, (x_1,…,x_e))) = 0if b>0.Now we assume b=0. Next we prove that if a>0 then a_ν=0. Assume a>0.Let us fix x_1,…,x_e arbitrarily and put μ_S = μ((),S,(x_1,…,x_e))for S⊂{1,…,a}.It suffices to prove f(μ_T) + f(μ_ T) = 0 for each T⊂{2,…,a} and T={1}∪ T.For such T, it suffices to show * |μ_ T/| = |μ_T/| + 1, * f_1(μ_T)=f_1(μ_ T), * f_2(μ_T)=f_2(μ_ T). Proof of (1), (3): obviously follow from Claim 2.Proof of (2): Recall y_1=(r_1,c_1). By Claim 2, it suffices to show(r_1,c_1-1) is a ν-nonblocked -rem. cor. + c_1=t+1(r_1,c_1-1) is a -rem. cor. - (r_1,c_1-1) is a -rem. cor.= 0. Recall that A_1 ⊔…⊔ A_a = (ν)∩. When (r_1, c_1-1) is a -removable corner,(r_1+1,c_1-1)∉ν by the choice of A_1 and ν_l ≤_l ≤ t < c_1-1. Moreover,(r_1+1, c_1-1) ∉/ since c_1-1 > t, thus (r_1+1,c_1-1)∉ν/. Hence (r_1,c_1-1) is a ν-nonblocked -rem. cor.= (r_1,c_1-1) is a -rem. cor..Recalling the definition of , = c_1 > t+1(r_1,c_1-1) is a -rem. cor.,which completes the proof. Finally we assume a=b=0, namely, ν∩⊂. Note that μ_min = and ν = ν. We shall abbriviate μ((),∅,(x_1,…,x_e)) as μ(x_1,…,x_e), which is with x_i boxes added at d_i-th column for each i.Note that r_ν = r_ sincea ν-blocked -corner can exist only if ν_l ≥_ > t, which never happen since (<ref>) and (<ref>). Then we havef_1(μ(x_1,…,x_e))= C_1 + X - 1 + C_2 + v - |ν| - C_3 - ∑x_j=z_j= r_'' - 1 + '_t='_t+1 + v - |ν| - r_ν - ∑x_j=z_j= e - ∑x_j=z_j + v - |ν|.Here the last equality follows from since r_''-r_ν = r_-r_ = #{removable corner of /} = e + ['_t > '_t+1], andf_2(μ(x_1,…,x_e))= v - |ν| - ∑ x_j. Thus we havea_ν = ∑_x_1,…,x_e f(μ(x_1,…,x_e)) = ∑_x_1=0^z_1 (-1)^x_1…∑_x_e=0^z_e (-1)^x_e[v ≥∑_i=1^e x_i + |ν|] ×e - ∑_i=1^e[x_i=z_i] + v - |ν|v - ∑_i=1^e x_i - |ν|. Now we simplify the summation on x_e using Lemma <ref> (of the form ∑_x=0^z[a-x≥ 0] (-1)^x q-[x=z]a-x = [a≥ 0] q-1a), a_ν = ∑_x_1=0^z_1 (-1)^x_1…∑_x_e-1=0^z_e-1 (-1)^x_e-1[v ≥∑_i=1^e-1 x_i + |ν|] ×e - 1 - ∑_i=1^e-1[x_i=z_i] + v - |ν|v - ∑_i=1^e-1 x_i - |ν|.Then repeating this, = [v ≥ |ν|] v - |ν|v - |ν|= [v ≥ |ν|]= [v ≥ν_+1]. (by (<ref>))Note thatv ≥ν_+1 can be rephrased as()_+1≥(ν)_+1.Case 2: if ν_1>k+1-, By the same argument as Case 1, we can see that a_ν=0 unless {i| 1≤ i ≤ b, r_i ≤-u} = ∅. Thus we assume (r_1 > … >) r_b > -u hereafter. 
Next we prove a_ν=0 unless b=0. Assume b>0. Then r_b>-u and (r_b,c_b)∈ν, thus ν_-u+1≥ν_r_b > _r_b≥_+1 = v. Hence g_2(μ) ≤ v - ≤ v-ν_-u+1 < 0 for any μ=μ((s_i)_i,S,(x_j)_j), which implies a_ν=0. Thus we assume b=0 hereafter. Next we prove a_ν=0 unless a=0. Assume a>0. As Case 1, fix x_1,…,x_e arbitrarily and put μ_S = μ((),S,(x_1,…,x_e)) for S⊂{1,…,a}. It suffices to prove g(μ_T) + g(μ_ T) = 0 for each T⊂{2,…,a} and T={1}∪ T. * If r_1>-u, then l(ν)≥ r_1>-u i.e. ν_-u+1>t, and thus g(μ_S)=0 for all S since g_2(μ_S) ≤ v - A ≤ v - ν_-u+1 < 0. * If r_1≤-u, we can deduce |μ_ T/| = |μ_T/| + 1, g_1(μ_T)=g_1(μ_ T) and g_2(μ_T)=g_2(μ_ T) by the same proof as Case 1. Finally we assume a=b=0, namely, ν∩⊂. As Case 1, μ_min = and ν = ν. We use the same notation μ(x_1,…,x_e) as Case 1, then we have g_1(μ(s_1,…,s_b,x_1,…,x_e)) = C_1 - 1 + C_2 + v - C_3 - C_4 - ∑_jx_j=z_j= e + v -- ∑_jx_j=z_j and g_2(μ(x_1,…,x_e))= v -- ∑_j x_j by the same argument as Case 1. Thus, similarly to Case 1, we have a_ν = ∑_x_1,…,x_e g(μ(x_1,…,x_e)) = ∑_x_1=0^z_1 (-1)^x_1…∑_x_e=0^z_e (-1)^x_ev -- ∑_j=1^e x_j ≥ 0=×e + v -- ∑_j=1^ex_j=z_jv -- ∑_j=1^e x_j= v - ≥ 0v - v - = v - ≥ 0. Note that _1 = k+1- since ν∩⊂ and ν_1>k+1-, thus = ν_-u+1+|ν_≤-u| = ν_-u+1 + ν_1 - (k+1-). Hence v - ≥ 0 v + (k+1-) ≥ν_1 + ν_+1-u()_1 ≥(ν)_1. To summarize the results, a_ν=1 if (1) ν⊂ (k)∪ (from (<ref>)), (2) ⊂ν (from (<ref>)), (3) ν∩⊂ (from (<ref>) in Case 1 and (<ref>) in Case 2), (4) (when ν_1 ≤ k+1-) ()_+1≥(ν)_+1 (from (<ref>)). (when ν_1 > k+1-) ()_1 ≥(ν)_1 (from (<ref>)). and a_ν=0 otherwise. Note that the assumptions (<ref>) and (<ref>) can be leaded by (1)-(4).Now we have ν_i ≤_i for 2≤ i ≤ since (1) and (3). Besides, ν_i=(ν)_i and _i=()_i for 2≤ i since ν,⊂ (k)∪.In addition, (4) can be replaced by the condition ()_i ≥(ν)_i for i=1,+1:actually, (3) implies the condition ()_1 ≥(ν)_1 when ν_1 ≤ k+1-, and the condition ()_1 ≥(ν)_1 implies the condition ()_+1≥(ν)_+1 when ν_1 > k+1-.Therefore “(1),(3), and (4)” implies(ν)⊂(), and it is easy to see that the converse is also true.Moreover, (2) can be rephrased as⊂(ν) under the condition (ν) ⊂(),since (ν)⊂() implies (ν)_i = ν_i for i ≥ 2 and thus ν≠(ν)occurs only if ν_1≥ k+1-t (≥_1). Hence we have(1),(2),(3),(4) ⊂ν (ν) ⊂() ⊂(ν) ⊂(). Now ==() since we have assumed v ≤ t, thus we concludea_ν = 1(if ()⊂(ν)⊂()), 0(otherwise). Now we have completed the proof of Theorem <ref>. Lam08article author=Lam, Thomas,title=Schubert polynomials for the affine Grassmannian,journal=J. Amer. Math. Soc.,volume=21,date=2008,number=1,pages=259–281,MR1950481articleauthor=Lapointe, L.,author=Lascoux, A.,author=Morse, J.,title=Tableau atoms and a new Macdonald positivity conjecture,journal=Duke Math. J.,volume=116,date=2003,number=1,pages=103–146, MR1851953articleauthor=Lascoux, Alain,title=Ordering the affine symmetric group,conference= title=Algebraic combinatorics and applications (Gößweinstein, 1999),,book= publisher=Springer, Berlin,,date=2001,pages=219–231, MR3379711collection author=Lam, Thomas,author=Lapointe, Luc,author=Morse, Jennifer,author=Schilling, Anne,author=Shimozono, Mark,author=Zabrocki, Mike,title=k-Schur functions and affine Schubert calculus,series=Fields Institute Monographs,volume=33,publisher=Springer, New York; Fields Institute for Research inMathematical Sciences, Toronto, ON,date=2014,pages=viii+219, MR2741963article author=Lam, Thomas,author=Lapointe, Luc,author=Morse, Jennifer,author=Shimozono, Mark,title=Affine insertion and Pieri rules for the affine Grassmannian,journal=Mem. Amer. Math. 
Soc.,volume=208,date=2010,number=977,pages=xii+82,isbn=978-0-8218-4658-2,
[MR2079931] Lapointe, L.; Morse, J., Order ideals in weak subposets of Young's lattice and associated unimodality conjectures, Ann. Comb. 8 (2004), no. 2, 197–219.
[MR2167475] Lapointe, Luc; Morse, Jennifer, Tableaux on k+1-cores, reduced words for affine permutations, and k-Schur expansions, J. Combin. Theory Ser. A 112 (2005), no. 1, 44–81.
[MR2331242] Lapointe, Luc; Morse, Jennifer, A k-tableau characterization of k-Schur functions, Adv. Math. 213 (2007), no. 1, 183–204.
[MR2923177] Lam, Thomas; Shimozono, Mark, From quantum Schubert polynomials to k-Schur functions via the Toda lattice, Math. Res. Lett. 19 (2012), no. 1, 81–93.
[MR2660675] Lam, Thomas; Schilling, Anne; Shimozono, Mark, K-theory Schubert calculus of the affine Grassmannian, Compos. Math. 146 (2010), no. 4, 811–852.
[MR1354144] Macdonald, I. G., Symmetric functions and Hall polynomials, 2nd ed., Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, New York, 1995.
[Morse12] Morse, Jennifer, Combinatorics of the K-theory of affine Grassmannians, Adv. Math. 229 (2012), no. 5, 2950–2984.
[Takigiku_part1] Takigiku, Motoki, Factorization formulas of K-k-Schur functions I.
[MasterThesis] Takigiku, Motoki, On some factorization formulas of K-k-Schur functions, Master's thesis, University of Tokyo.
http://arxiv.org/abs/1704.08660v1
{ "authors": [ "Motoki Takigiku" ], "categories": [ "math.CO" ], "primary_category": "math.CO", "published": "20170427170536", "title": "Factorization formulas of $K$-$k$-Schur functions II" }
http://arxiv.org/abs/1705.00536v1
{ "authors": [ "M. Shokri", "N. Sadooghi" ], "categories": [ "nucl-th", "gr-qc", "physics.flu-dyn" ], "primary_category": "nucl-th", "published": "20170427165254", "title": "Novel self-similar rotating solutions of non-ideal transverse magnetohydrodynamics" }
Alfonso Ballon-Bayona^a, Robert Carcassés Quevedo^b, and Miguel S. Costa^b
[a] Instituto de Física Teórica, Universidade Estadual Paulista, Rua Dr. Bento Teobaldo Ferraz, 271 - Bloco II, 01140-070 São Paulo, SP, Brazil
[b] Centro de Física do Porto e Departamento de Física e Astronomia da Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal
[email protected] [email protected] [email protected]
We develop a formalism where the hard and soft pomeron contributions to high energy scattering arise as leading Regge poles of a single kernel in holographic QCD. The kernel is obtained using effective field theory inspired by Regge theory of a 5-d string theory. It describes the exchange of higher spin fields in the graviton Regge trajectory that are dual to glueball states of twist two. For a specific holographic QCD model we describe Deep Inelastic Scattering in the Regge limit of low Bjorken x, finding good agreement with experimental data from HERA. The observed rise of the effective pomeron intercept, as the size of the probe decreases, is reproduced by considering the first four pomeron trajectories. In the case of soft probes, relevant to total cross sections, the leading hard pomeron trajectory is suppressed, such that in this kinematical region we reproduce an intercept of 1.09, compatible with the QCD soft pomeron data. In the spectral region of positive Mandelstam variable t, the first two pomeron trajectories are consistent with current expectations for the glueball spectrum from lattice simulations.
http://arxiv.org/abs/1704.08280v2
{ "authors": [ "Alfonso Ballon-Bayona", "Robert Carcasses Quevedo", "Miguel S. Costa" ], "categories": [ "hep-ph", "hep-th" ], "primary_category": "hep-ph", "published": "20170426182604", "title": "Unity of pomerons from gauge/string duality" }
http://arxiv.org/abs/1704.08515v1
{ "authors": [ "Ioannis S. Stamatiou" ], "categories": [ "math.NA", "60H10, 65C20, 65L20" ], "primary_category": "math.NA", "published": "20170427112710", "title": "A note on Asymptotic mean-square stability of stochastic linear two-step methods for SDEs" }
Department of Physics & Astronomy, University College London, Gower Street, WC1E 6BT, UK [email protected] Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, D-69117 Heidelberg, GermanyCharacterization of the atmospheres of transiting exoplanets relies on accurate measurements of the extent of the optically thick area of the planet at multiple wavelengths with a precision ≲100 parts per million (ppm). Next-generation instruments onboard the James Webb Space Telescope (JWST) are expected to achieve ∼10 ppm precision for several tens of targets. A similar precision can be obtained in modeling only if other astrophysical effects, including the stellar limb-darkening, are properly accounted for. In this paper, we explore the limits on precision due to the mathematical formulas currently adopted to approximate the stellar limb-darkening, and due to the use of limb-darkening coefficients obtained either from stellar-atmosphere models or empirically. We recommend the use of a two-coefficient limb-darkening law, named “power-2”, which outperforms other two-coefficient laws adopted in the exoplanet literature in most cases, and particularly for cool stars.Empirical limb-darkening based on two-coefficient formulas can be significantly biased, even if the light-curve residuals are nearly photon-noise limited. We demonstrate an optimal strategy to fitting for the four-coefficient limb-darkening in the visible, using prior information on the exoplanet orbital parameters to break some of the degeneracies that otherwise would prevent the convergence of the fit. Infrared observations taken with the James Webb Space Telescope (JWST) will provide accurate measurements of the exoplanet orbital parameters with unprecedented precision, which can be used as priors to improve the stellar limb-darkening characterization, and therefore the inferred exoplanet parameters, from observations in the visible, such as those taken with Kepler/K2, the JWST, and other past and future instruments.§ INTRODUCTION Observations of transits offer the most accurate means of measuring exoplanet sizes and orbital inclinations, as well as mean stellar densities and, if combined with radial-velocity information, system masses. Transits are revealed through periodic drops in the stellar flux, due to the partial occultation of the stellar disk by the planet for a portion of its orbit. 
The amplitude of the flux decrement is primarily determined by the size of the planet relative to the star, but also depends on the location of the occulted area of the stellar disk and the wavelength observed, because of limb-darkening (the radial decrease in specific intensity). Inadequate treatment of limb-darkening may give rise to ≳10% errors in exoplanetary radii inferred from transits observed at UV or visible wavelengths, and accurate modeling is paramount in the study of exoplanetary atmospheres, where differences of 10–100 parts per million (ppm) in transit depths at different wavelengths can be attributed to the wavelength-dependent optical depth of the external layers of the planet, rather than to stellar properties. Stellar-atmosphere models are commonly used to predict the limb-darkening profiles, but empirical estimates are desirable, both to test the stellar models and to reduce potential biases in transit depths due to errors in the theoretical models or to other second-order effects, such as stellar activity, granulation, gravity darkening, etc. Other than for the Sun, the surface of which can be directly observed in great detail, techniques to map the stellar intensity distributions rely mainly on optical interferometry (e.g., ) and microlensing (e.g., ). The former is useful for only a very limited number of stars with large angular diameters, while the latter is limited by the low occurrence rate and non-repeatability of the microlensing events. Eclipsing binaries offer another route to mapping stellar surfaces, but accurate modeling of these systems is handicapped by complicating factors (gravity darkening, reflection effect, tidal distortion…), and a degree of redundancy between limb-darkening and radii. These issues are much reduced in most star+exoplanet systems, thanks to the smaller mass and size of the planetary companions <cit.>. In this paper, we explore the potential biases in high-precision exoplanet spectroscopy using approximate stellar limb-darkening parameterizations, with coefficients obtained either from stellar-atmosphere models or empirically.§.§ Structure of the paper Section <ref> reviews the limb-darkening laws most commonly adopted in the exoplanet literature and the proposed power-2 law, and discusses the current approaches to obtain theoretical and empirical limb-darkening coefficients. Section <ref> describes how we simulate light-curves from spherical-atmosphere models, and Section <ref> reports the results of our analyses. In particular, Section <ref> outlines the main differences between plane-parallel and spherical stellar-atmosphere models; in Sections <ref> and <ref> we analyze the precision with which different limb-darkening laws describe the intensity profile and the transit morphology, and derive the correct transit depth.
Section <ref> describes the equivalent analysis for the case of empirical limb-darkening coefficients (i.e., those allowed as free parameters in the light-curve fit). Section <ref> then focuses on the potential errors in `narrow-band exoplanet spectroscopy' over short wavelength ranges, specifically in the context of Hubble Space Telescope (HST)/WFC3 observations. Section <ref> examines the ability to fit a set of transit parameters and limb-darkening coefficients on transit light-curves, and develops an optimal strategy to maximize the accuracy in the estimated transit parameters and limb-darkening coefficients in the visible, if infrared observations are also available. Finally, Section <ref> discusses the results of our analysis, with emphasis on the synergies between the James Webb Space Telescope (JWST) and Kepler, and on future surveys. § DESCRIBING STELLAR LIMB-DARKENING §.§ Limb-darkening parameterizations In exoplanetary studies, the stellar limb-darkening profile is typically described by an analytical function I_λ( μ ), where I denotes the specific intensity, μ = cosθ, θ is the angle between the surface normal and the line of sight, and the λ subscript refers to the monochromatic wavelength or effective wavelength of the passband at which the specific intensities are given. For circular symmetry, μ = √(1-r^2), where r is the projected radial co-ordinate normalized to a reference radius. Numerous functional forms to approximate I_λ(μ) have been proposed in the literature. In the study of exoplanetary transits, the most commonly used of these limb-darkening `laws' are: * the quadratic law <cit.>, I_λ( μ ) / I_λ( 1 ) = 1 - u_1 (1-μ) - u_2 (1-μ)^2 ; * the square-root law <cit.>, I_λ( μ ) / I_λ( 1 ) = 1 - v_1 (1- √(μ)) - v_2 (1-μ); and * the four-coefficient law <cit.>, I_λ( μ ) / I_λ( 1 ) = 1 - ∑_n=1^4 a_n ( 1 - μ^n/2 ), hereinafter referred to as “claret-4”. The quadratic, square-root, and claret-4 laws rely on linear combinations of fixed powers of μ. In this paper, we advocate an alternative two-coefficient law incorporating an arbitrary power of μ which, to the best of our knowledge, has not previously been considered in the exoplanet literature (and which we initially constructed independently): * the `power-2' law <cit.>, I_λ( μ ) / I_λ( 1 ) = 1 - c ( 1-μ^α ). We find that this form offers more flexibility and a better match to model-atmosphere limb-darkening than do other two-coefficient forms (Section <ref>). The claret-4 law can provide a more accurate approximation to model-atmosphere limb-darkening than other forms, but at the expense of using a larger number of coefficients. We note that the quadratic and square-root laws are subsets of the claret-4 prescription, with a_1=a_3=0, a_2 = u_1+2u_2, a_4 = -u_2 (quadratic) and a_3=a_4=0 (square-root). The power-2 form is a subset only for α = 1/2, 1, 3/2, or 2. §.§ Intensity distributions: plane-parallel vs. spherical Theoretical limb-darkening coefficients can be obtained from stellar-atmosphere models, by fitting a parametric law (such as Equations <ref>–<ref>) to detailed numerical evaluations of I_λ(μ) using some suitable numerical technique – typically least squares, though detailed numerical results depend on both the method chosen and the data sampling (e.g., ). Tables of theoretical limb-darkening coefficients as a function of stellar parameters (usually the effective temperature, gravity, and metallicity) have been published by several authors for various photometric passbands.
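Before comparing geometries, it is convenient to have Equations (<ref>)–(<ref>) in executable form. The sketch below (our illustration, not code from any published package) evaluates each normalized profile I_λ(μ)/I_λ(1) and checks the quadratic-to-claret-4 reduction noted above:

import numpy as np

def quadratic(mu, u1, u2):
    # 1 - u1*(1 - mu) - u2*(1 - mu)^2
    return 1.0 - u1 * (1.0 - mu) - u2 * (1.0 - mu) ** 2

def square_root(mu, v1, v2):
    # 1 - v1*(1 - sqrt(mu)) - v2*(1 - mu)
    return 1.0 - v1 * (1.0 - np.sqrt(mu)) - v2 * (1.0 - mu)

def claret4(mu, a1, a2, a3, a4):
    # 1 - sum_{n=1}^{4} a_n * (1 - mu^(n/2))
    return 1.0 - sum(a * (1.0 - mu ** (n / 2.0))
                     for n, a in zip((1, 2, 3, 4), (a1, a2, a3, a4)))

def power2(mu, c, alpha):
    # 1 - c*(1 - mu^alpha)
    return 1.0 - c * (1.0 - mu ** alpha)

# the quadratic law is claret-4 with a1 = a3 = 0, a2 = u1 + 2*u2, a4 = -u2
mu = np.linspace(0.0, 1.0, 11)
u1, u2 = 0.4, 0.2
assert np.allclose(quadratic(mu, u1, u2), claret4(mu, 0.0, u1 + 2 * u2, 0.0, -u2))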
Most calculations are based on plane-parallel atmosphere models <cit.>, but some authors have considered spherical geometry, claiming, in some cases, noteworthy improvements compared to the use of plane-parallel models <cit.>.These spherical models show a characteristic steep drop-off in intensity at small, but finite μ (see, e.g., Figure <ref>). The explanation for this drop-off is straightforward (Figure <ref>).In a plane-parallel atmosphere, the optical depthalways reaches unity somewhere along the line of sight, even at grazing incidence. The limb of the star is, consequently, geometrically well-defined (and wavelength independent), and the intensity at the limb is comparableto the intensity at the center of the disk, to within a factor of a few.In spherical geometry, in contrast, there are no constant angles μ at which the characteristic rays intersect the shells; instead they vary as a function of radius. For technical reasons the emergent intensities are usually specified as functions of the angle as measured at the outer boundary of the model atmosphere, which is originally set by the modeller at an arbitrary physical radius or reference optical depth, subject only to the condition that it has a suitably small opacity and emissivity even at the cores of strong lines. The outermost layer of the model atmosphere, corresponding to μ = 0 in this reference frame, is therefore always optically thin and does not correspond to what would be observed as `the' stellar radius in investigations involving interferometric imaging, lunar occultation, or exoplanetary transits. Furthermore, the rapid changes in I_λ(μ) that arise at small μ in spherical models are, inevitably, not well approximated by any of the standard parametric laws developed to represent results of plane-parallel models, because the intensity does not converge to zero at any given radius (see, e.g., Figure 1 of ).<cit.> and <cit.> therefore suggested to re-define μ = 0 to a radius outside of which almost no flux is observed, i.e. at the inflection point of the intensity profile so that the tail-like extension originating from the optically thin outer layers is excluded from fitting the limb-darkening profile. The radius is reasonably well defined by the point at which the gradient dI(μ)/dμ, or almost equivalently |dI(r)/dr|, reaches a maximum. <cit.> found that this radius corresponded closely to the Rosseland radius, defined by a Rosseland optical depth along the normal τ_Ross(r) = 1, for the M giant models they presented. Close to the limb, however, one can expect to observe significant emission as long as the line-of-sight optical depth is at least of order unity. Additionally, in the context of modeling exoplanetary transits, the projected radius for which the total line-of-sight optical depth within the observed wavelength band becomes one should be considered. In <cit.> the differences between radial and line-of-sight optical depths at a given physical depth were relatively modest because the giant-star atmosphere they modeled has a large radial extension with corresponding large angles μ=0.3. 
Furthermore, they studied the emergent intensities in the K band, which has relatively smaller mean opacity than the Rosseland mean, thus partly cancelling the effect of the off-normal incidence. In this paper, we test empirically the choice of stellar radius, based on the ratios between best-fit and input transit depths (see Section <ref>), finding different values for the best renormalization radius in different wavelength bands (corresponding to the different band-mean opacities). But these radii are larger throughout than the respective τ_Ross(r) = 1, which correspond to μ = 0.0386, 0.049, and 0.0738 for the three models displayed in Figure <ref>, confirming the effect of the off-normal incidence on the emission near the limb.§.§ Limitations on empirical limb-darkening coefficients Empirical limb-darkening coefficients can be inferred by fitting a parametric model to an observed transit light-curve <cit.>. Two-coefficient laws are typically used for this purpose (e.g. ), as parameter degeneracies hamper convergence when fitting higher-order models (e.g., the claret-4 characterization). Measuring empirical limb-darkening coefficients is important to test the validity of the stellar-atmosphere models and, if results are sufficiently accurate, to select the best theoretical models. Furthermore, fixing limb-darkening coefficients at incorrect theoretical values can significantly bias other fitted transit parameters, leading to incorrect inferences about planetary sizes and masses, or confusing the spectral signature of a planetary atmosphere <cit.>. In active stars, the presence of dark or bright spots on the surface can change the `effective' limb-darkening coefficients relative to the unperturbed case, as well as the inferred stellar parameters adopted to compute the theoretical coefficients <cit.>. Other light-curve distortions may arise from gravity darkening in fast-rotating stars <cit.>, stellar oscillations <cit.>, granulation <cit.>, beaming <cit.>, ellipsoidal variations <cit.>, reflected light <cit.>, planetary thermal emission <cit.>, or exomoons <cit.>. The photometric amplitudes of such distortions can be up to ∼100 ppm. § SIMULATED TRANSIT LIGHT-CURVES In order to investigate the consequences of various approximations to limb-darkening, we calculated `exact' synthetic transit photometry as a reference, using new model-atmosphere intensities coupled to an accurate numerical integration scheme for the light-curves. §.§ Stellar models We generated three representative model atmospheres using the Phoenix simulator <cit.>; input parameters are summarized in Table <ref>. These models are intended to bracket the range in effective temperature of known exoplanet host stars, and embrace ∼98% of those listed in the database <cit.> as of 2016 December 12. For each stellar model, I_λ(μ) profiles were calculated in both plane-parallel and spherical geometries. In the former case, intensities were calculated at 96 values of μ, chosen as the anchor points for a Gaussian-quadrature integration; the intervals Δμ_i = | μ_i+1 - μ_i | vary in the range 7× 10^-4–1.6 × 10^-2 and are smallest for μ∼0 and ∼1. In spherical geometry, the μ values were determined by the properties of the model atmosphere, and the number of grid points is model-dependent (169–177 in the cases considered here); the limb is the most finely sampled region, down to Δμ∼ 6 × 10^-5.
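One plausible construction of such a μ grid (an assumption on our part, as the exact quadrature scheme is not spelled out above) maps the 96 Gauss–Legendre nodes from [-1, 1] onto (0, 1); the resulting node spacings agree with the quoted range to one significant figure:

import numpy as np

nodes, _weights = np.polynomial.legendre.leggauss(96)  # nodes on [-1, 1]
mu = 0.5 * (nodes + 1.0)                               # mapped to (0, 1)

dmu = np.diff(mu)          # leggauss returns the nodes in increasing order
print(f"min spacing = {dmu.min():.1e}")  # ~7e-4, near mu ~ 0 and mu ~ 1
print(f"max spacing = {dmu.max():.1e}")  # ~1.6e-2, near mu ~ 0.5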
Passband-integrated intensities were calculated for five instruments which have been widely used in the field of exoplanet spectroscopy, from the visible to mid-infrared wavelengths: the STIS/G430L, STIS/G750L and WFC3/G141 gratings onboard HST,[The Kepler passband is similar to the combined STIS passbands, and the results for the STIS passbands are therefore a good proxy for Kepler.] and the IRAC photometric channels 1 and 4 onboard Spitzer. The throughputs of these instruments are shown in Figure <ref>, and the corresponding plane-parallel and spherical-model intensities are shown in Figure <ref>.§.§ Computing transit light-curves from spherical model atmospheres We generated two sets of `exact' transit light-curves for the exoplanet-system parameters reported in Table <ref>. Each set consists of fifteen transit light-curves, one for each stellar model and instrument passband, using the spherical-geometry intensities; the sets differ only in the impact parameter (or, equivalently, the orbital inclination). Each light-curve contains 2001 data points with 8.4 s sampling time, over a ∼4.7 hr interval centered on the mid-transit (the total duration of the transits is ∼2 hr, with the central transit being ∼10 min longer).The orbital parameters determine z, the sky-projected star–planet separation in units of the stellar radius at any given time; for a circular orbitz(t) = a_R √(1 - cos^2 ( 2 π (t - t_0) /P) sin^2(i) ),where a_R is the semimajor axis in units of the stellar radius, P is the orbital period, i is the inclination relative to the sky, and t_0 is the time of conjunction.The fraction of stellar light occulted by the planet is, for a given intensity profile, a function F(p,z(t)), where p is the ratio of planet-to-star radii. Instead of using an analytical function (requiring a numerical approximation to the intensity profile), we computed the light-curve by direct integration of the occulted stellar flux, using our purpose-built `tlc' algorithm. The algorithm: * divides the sky-projected stellar disk into a user-defined number of annuli, n, with uniform radial separation, dr = 1/n;* evaluates the intensity at the central radius of each annulus, I(r_i), where r_i = (0.5 + i)/n for i = 0 … n-1, interpolating in μ from the input stellar-intensity profile (and r_i = √(1-μ_i^2));* evaluates the flux from each annulus, F_i = I(r_i) × 2 π r_i dr, and hence the total stellar flux, F_* = ∑_i=0^n-1F_i.* The occulted fluxis then calculated asF_ occ(p,z) = ∑_i=0^n-1 F_i f_z,p(r_i), where f_z,p(r_i) is the fraction of circumference of each annulus covered by the planet,given byf_p,z(r_i) =1/πarccosr_i^2 + z^2 - p^2/2zr_i|z-p| < r_i < z+p 0 r_i ≤ z-p or r_i ≥ z+p, 1r_i ≤ p-z * whence the normalized flux is F(p,z) = 1 - F_ occ/F_*.Before calculating the actual transit light-curves from the spherical model intensities, we tested the accuracy of the algorithm using a wide range of parametric intensity profiles as input, with the same grid of μ values as the spherical models, comparing the resulting light-curves to those from analytical calculations. We found that, with n=100 000 annuli, the maximum differences between tlc and analytical light-curves were <5× 10^-7 in the worst-case scenarios – negligible compared to the minimum uncertainties that can be obtained with any current or forthcoming instrument. 
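The scheme above is straightforward to re-implement. The sketch below is our own rendering of the listed steps (not the actual tlc code): transit_flux integrates a tabulated I(μ) profile over n annuli, and z_of_t is Equation (<ref>) for a circular orbit.

import numpy as np

def z_of_t(t, t0, P, a_R, inc):
    # sky-projected star-planet separation for a circular orbit
    phase = 2.0 * np.pi * (t - t0) / P
    return a_R * np.sqrt(1.0 - np.cos(phase) ** 2 * np.sin(inc) ** 2)

def transit_flux(p, z, mu_grid, intensity, n=100_000):
    # mu_grid and intensity tabulate I(mu); mu_grid must be increasing
    dr = 1.0 / n
    r = (0.5 + np.arange(n)) * dr            # central radius of each annulus
    I = np.interp(np.sqrt(1.0 - r ** 2), mu_grid, intensity)
    F_i = I * 2.0 * np.pi * r * dr           # flux from each annulus
    F_star = F_i.sum()

    f = np.zeros(n)                          # occulted fraction per annulus
    f[r <= p - z] = 1.0                      # annulus fully behind the planet
    partial = (np.abs(z - p) < r) & (r < z + p)
    if z > 0.0:
        cos_arg = (r[partial] ** 2 + z ** 2 - p ** 2) / (2.0 * z * r[partial])
        f[partial] = np.arccos(np.clip(cos_arg, -1.0, 1.0)) / np.pi

    return 1.0 - (F_i * f).sum() / F_star    # normalized flux F(p, z)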
§ MODELING TRANSIT LIGHT-CURVES

In empirical studies, it is generally convenient to analyse observed transit light-curves using parameterized models, in order to fit for the unknown transit parameters and/or limb-darkening coefficients. To mimic this observational approach we employed pylightcurve,[] our pipeline dedicated to the fast computation of model transit light-curves with a parametric limb-darkening profile. The power-2 parameterization (Equation <ref>) was implemented in the code for this work. Based on our proposal, the power-2 law has also been implemented in the batman code <cit.>.

§.§ Plane-parallel vs. spherical limb-darkening models

As outlined in Section <ref>, discrepancies between the plane-parallel and spherical limb-darkening models are larger at smaller μ (i.e., closer to the stellar limb), solely because of the manner in which μ is defined, at least in the first step, for the spherical models. The spherical models present a steep drop-off in the normalized intensity I(μ)/I(1), approaching zero at some μ > 0, while for the plane-parallel models the intensity is significantly greater than zero for all μ (see Figure <ref>). It is reasonable to suppose that the `photometric' stellar radius relevant to transit studies is better represented by the projected radius of the intensity drop-off than by μ = 0, the arbitrary uppermost layer in the atmospheric model. As a pragmatic approach, we assign this projected photometric radius to the point in the intensity distribution at which the gradient dI(μ)/dμ reaches a maximum (estimated as the mean μ value between the two consecutive μ values in the model with the maximum difference quotient, |I(μ_i+1) - I(μ_i)| / |μ_i+1 - μ_i|). The corresponding radius, r_0 = √(1-μ_0^2), hereinafter called the `apparent' radius, is the ratio between the stellar photometric radius and the radius of the outermost layer in the model. Our approach is similar to that suggested by <cit.> and <cit.>, but here we compute the photometric radius for each passband, while they calculate one wavelength-averaged photometric radius. With this working definition, the best-fit model parameters for any of our simulated transit light-curves are expected to deviate from their input values according to:

p_expected^2 ≃ (p_input / r_0)^2,
a_R,expected ≃ a_R,input / r_0,
i_expected ≃ i_input.

Table <ref> reports the ranges of r_0 over the five instrument passbands for each stellar model, the corresponding percentage variation in transit depth, p^2 = (R_p/R_*)^2, and the absolute variation evaluated at p_input = 0.15. In the analytical approximations represented by Equations <ref>–<ref>, we find the apparent stellar radius to be systematically smaller than the radius of the uppermost layer of these models by 0.05–0.1% for the M dwarfs, and up to ∼0.2% for the F0-star model; the corresponding percentage errors in transit depths are about twice as large. For the case of a transiting hot Jupiter (p = 0.15), the discrepancies in transit depth are at the level of ∼20, 40, and 100 ppm for the two M dwarfs and for the F0 model, respectively. The discrepancies measured for the F0 model currently represent practical upper limits for exoplanet host stars, given that ∼99% of the current population is cooler, and hence has less extended atmospheres (for given ).
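As an illustration of this working definition, a short sketch follows. The toy profile (a sigmoid drop-off multiplied by a linear darkening term) merely mimics a spherical-model drop-off and is not one of the Phoenix grids:

```python
import numpy as np

def apparent_radius(mu, intensity):
    """Apparent radius r0 = sqrt(1 - mu0^2), with mu0 at the maximum
    difference quotient |I(mu_{i+1}) - I(mu_i)| / |mu_{i+1} - mu_i|."""
    dq = np.abs(np.diff(intensity)) / np.abs(np.diff(mu))
    k = np.argmax(dq)
    mu0 = 0.5 * (mu[k] + mu[k + 1])   # mean of the two consecutive mu values
    return np.sqrt(1.0 - mu0**2)

# Toy profile: steep drop-off near mu ~ 0.04 times a linear darkening term
# (an illustrative stand-in, not an actual spherical-model grid).
mu = np.linspace(0.0, 1.0, 500)
I = (0.4 + 0.6 * mu) / (1.0 + np.exp(-(mu - 0.04) / 0.005))
print(apparent_radius(mu, I))   # close to sqrt(1 - 0.04^2) ~ 0.9992
```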
The wavelength-dependence of the apparent radius is negligible over the parameter space explored here, with a peak-to-peak amplitude of 11 ppm, in transit depth, from visible to mid-infrared wavelengths in the worst-case scenario (see Table <ref>).

§.§ Accuracy of the theoretical limb-darkening laws

We fitted the limb-darkening laws to the plane-parallel intensity profiles by adopting a simple least-squares method. We checked, both by using subsets of the precalculated intensity grids and by interpolating at different angles, that similar results would be obtained using a uniform sampling in μ. Figure <ref> shows the corresponding best-fit models, hereinafter referred to as "theoretical" limb-darkening models, and their residuals, for the case of the M5 V model observed in the WFC3 passband. The full list of models and the relevant residuals is reported in Figure <ref> (Appendix <ref>). The power-2 law (Equation <ref>) outperforms the other two-coefficient laws at describing the stellar limb-darkening of all stars observed at near- to mid-infrared wavelengths with the HST/WFC3 and Spitzer/IRAC instruments; in some cases, the power-2 model outperforms even the corresponding claret-4 one. At visible wavelengths, the square-root and power-2 models have comparable success, while the claret-4 models fit best. The average errors in specific intensity predicted by the power-2 models are in the range 0.1–1.0%, with a maximum error of up to ∼5–7% for the F0 V model in the visible passbands. The claret-4 models are more uniformly robust among all the configurations, with average errors in the range 0.05–0.6% and maximum errors <4%. The quadratic models are the least accurate of those tested, with average errors in the range 1–6% and maximum errors of up to 25% (for the M0 V model in the WFC3 passband).

§.§ Transit models with theoretical limb-darkening coefficients

We measured the potential biases in the model transit depths by fixing the limb-darkening coefficients at the theoretical values obtained from the plane-parallel stellar-atmosphere models and fitting the exact light-curves described in Section <ref>. The free parameters in the fit were p, the ratio of planet-to-star radii, a_R, the orbital semimajor axis in units of the stellar radius, and i, the orbital inclination. We used a Nelder–Mead minimization algorithm to find the values of these parameters which minimize the residuals between the model fits and the exact light-curves. We then carried out Markov-chain Monte-Carlo (MCMC) runs with 300,000 iterations to assess the robustness of the point estimates. Unlike previous investigations reported in the literature (e.g. ), we seek to isolate the potential biases arising from the analysis method, and particularly the use of simplified geometry and a parameterization to characterize the stellar limb-darkening. No other astrophysical sources of error are considered in this study. Figure <ref> illustrates the differences between the best-fit transit depths and input values for i = 90^∘; the expected values (from Equations <ref>–<ref>) are also indicated. For all stellar models, the results are less dependent on the parametric law at longer wavelengths; this is to be expected, since the limb-darkening is weaker at longer wavelengths. In particular, the transit depths obtained at 8 μm (IRAC channel 4) are all within 45 ppm of expected values, or within 13 ppm if adopting the power-2 or claret-4 coefficients.
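To make the least-squares law-fitting described above concrete, the following is a minimal sketch of a fit of the power-2 law, I(μ)/I(1) = 1 - c(1 - μ^α) (Equation <ref>), to a tabulated intensity profile; the synthetic data and starting values are illustrative only, not an actual model grid:

```python
import numpy as np
from scipy.optimize import curve_fit

# The power-2 law of Equation <ref>, I(mu)/I(1) = 1 - c*(1 - mu**alpha).
def power2(mu, c, alpha):
    return 1.0 - c * (1.0 - mu**alpha)

# Synthetic stand-in for a tabulated plane-parallel profile (illustration only).
mu = np.linspace(0.01, 1.0, 96)
profile = power2(mu, 0.85, 0.6) + np.random.default_rng(1).normal(0.0, 1e-3, mu.size)

(c_fit, alpha_fit), _ = curve_fit(power2, mu, profile, p0=[0.5, 1.0])
resid = profile - power2(mu, c_fit, alpha_fit)
print(c_fit, alpha_fit, np.abs(resid).mean())  # mean error in specific intensity
```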
Overall, the transit depths obtained using the claret-4 coefficients deviate by less than ∼20 ppm from expected values, other than for the M5 V model in the visible passbands, where the discrepancy reaches 34 and 80 ppm for the STIS/G750L and STIS/G430L passbands, respectively. The peak-to-peak amplitudes in best-fit transit depths over the five passbands are 94, 28, and 8 ppm, going from the coolest to the hottest model. The results obtained with the power-2 coefficients are more robust for the cooler stars, and are within 44 ppm of expected values, except for the F0 V model in the visible passbands, where the inferred transit depths are 105 and 88 ppm larger for the STIS/G750L and STIS/G430L passbands, respectively. The peak-to-peak amplitudes in best-fit transit depths over the five passbands are 47, 44, and 102 ppm, again from the coolest to the hottest model. The quadratic-law coefficients give the largest scatter in the best-fit transit depth across the different passbands for all models, with peak-to-peak amplitudes of 250, 164, and 107 ppm. Even though the true value of the transit depth is not known in a `real-world' scenario, the presence of biases can be revealed by time-correlated noise in the light-curve residuals. Figure <ref> shows the residuals between the exact light-curve and the best-fit parametric model for the M5 V star in the WFC3 passband. The full list of light-curve residuals is reported in Figure <ref>. The amplitudes of the time-correlated residuals (maximum discrepancies from zero) are in the ranges 97–456, 8–105, and 11–75 ppm with quadratic, power-2, and claret-4 models, respectively. Residuals at infrared wavelengths are typically smaller than in the visible, as expected. <cit.> report similar amplitudes for the residuals between the exact light-curves, computed with their CLIV stellar-atmosphere models, and parametric light-curve models. For comparison, residuals with ∼10 ppm root mean square (rms) amplitude have been obtained from the phase-folded Kepler photometry of several targets (e.g., ), and ∼50–200 ppm rms amplitude is typically obtained for the white light-curves observed with the HST/WFC3 (e.g., ). It is possible that better results would be obtained if the limb-darkening coefficients were fitted adopting a different sampling in μ (e.g., uniform in r rather than in μ), a different method (e.g., imposing flux conservation), and/or using spherical intensities (e.g., ). A detailed study of the different approaches is beyond the scope of this paper, but the analysis in Section <ref> provides some clear indications.

§.§ Transit models with empirical limb-darkening coefficients

§.§.§ Edge-on transits

We repeated the fits to the exact light-curves with the limb-darkening coefficients as free parameters (in addition to p, a_R, and i). The increased flexibility allows parametric models that better match the transits, as shown by the smaller time-correlated residuals in Figures <ref> and <ref> (Appendix <ref>). The residual amplitudes are in the ranges 10–63, 0–31, and 0–4 ppm with quadratic, power-2, and claret-4 models, respectively. The time-correlated residuals due to imperfections in the transit models with empirical limb-darkening coefficients are hardly detectable with current instruments. Figure <ref> shows the corresponding best-fit transit depths. Despite the very small light-curve residuals, the inferred transit depth can be significantly biased.
The bias obtained with quadratic limb-darkening is roughly linear in the logarithm of wavelength for the two M dwarfs, ranging from 27–40 ppm at 8 μm to 200–225 ppm at 0.4 μm; there is no evident trend for the F0 V model, but the transit depth is again systematically over-estimated, by 17–52 ppm. The square-root and power-2 laws have similar performances, with deviations from the expected values smaller than 45 ppm, except for three `bad' points: the STIS/G430L passband for the M5 V model, and the STIS/G430L and STIS/G750L passbands for the F0 V model, for which the transit depth estimates are smaller than the apparent values by 170 and 138, 207 and 205, and 74 and 73 ppm for the two laws, respectively. The transit depth estimates obtained when fitting the claret-4 coefficients are the most accurate, with deviations from the expected values smaller than 30 ppm (largest in the STIS/G430L passband) and peak-to-peak amplitudes of 41, 36, and 24 ppm from the coolest to the hottest model. Figure <ref> shows the empirical limb-darkening models and their residuals for the case of the M5 V model observed in the WFC3 passband. The full list of models and the relevant residuals is reported in Figure <ref> (Appendix <ref>). It appears that the empirical limb-darkening models are better constrained at larger μ values, corresponding to the inner part of the disk. We also found that the empirical coefficients can be recovered from the stellar-atmosphere models if a uniform sampling in r, rather than in μ, is used. However, if a functional form is not able to reproduce the intensity profile, the empirical model will be particularly discrepant at the limb, causing larger biases in the best-fit transit depth compared to the case of theoretical coefficients with a uniform sampling in μ. Quadratic models, especially, always overpredict the intensities at the limb, so that an apparently larger planet would be needed to occult the extra stellar flux, in agreement with the larger transit depth estimates.

§.§.§ Inclined transits

For randomly orientated orbits, the inclinations i are distributed such that the probability density of cos i is uniform between 0 and 1. For circular orbits, therefore, the impact parameter, b = a_R cos i, is uniformly distributed between 0 and a_R, the semimajor axis in units of the stellar radius. An exoplanet transits if and only if 0 ≤ b < 1 + p. We tested the ability to constrain the stellar limb-darkening profile and to measure the correct transit depth for the case b = 0.5. This study was conducted for the claret-4 and power-2 laws, as they led to more robust results than the other parameterizations. In this configuration, the area of the stellar disk with r < b - p = 0.35, or μ ≳ 0.94, is never occulted by the transiting exoplanet. Figure <ref> shows the comparison between the transit depths estimated for the cases with b = 0 and 0.5, using the claret-4 and power-2 laws. In most cases, there are no significant differences between the transit depths obtained for the cases with b = 0 and 0.5. The largest discrepancies (29–68 ppm) are registered for the three `bad' points of the power-2 law highlighted in Section <ref>. The empirical limb-darkening profiles are also very similar. Figure <ref> shows the difference for the two most discrepant cases. The parametric models obtained from the transits with b = 0.5 approximate the intensities at the limb slightly better than those obtained from the transits with b = 0, but the bias is still significant.
In general, it appears that, if a parametric law does not allow a good approximation of the limb-darkening profile, the empirical model is optimized toward the center of the disk and deviates significantly at the limb (see also the quadratic fits in Figure <ref>). In these cases, inclined transits automatically attribute slightly higher weights to the intensities at the limb, as the planet spends more time occulting areas further from the center. Even if the innermost region of the stellar disk is not sampled during the transit, this is often not a problem, at least if b ≲ 0.5, because the intensity gradient does not vary significantly near the center. The examples discussed here may suggest that inclined transits can lead to smaller biases in the inferred transit parameters and limb-darkening profiles than edge-on transits, but the improvements appear to be quite small and, more importantly, the error bars have not yet been considered.

§.§ Narrow-band (WFC3-like) exoplanet spectroscopy

The results discussed in Section <ref> may suggest that it is difficult to model the depth of a hot-Jupiter transit with an absolute precision better than ∼10–30 ppm, because of the intrinsic limitations of the stellar limb-darkening parameterizations (claret-4 being the most accurate among those currently used). This potential bias will be particularly important when analyzing visible to mid-infrared exoplanet spectra measured with the JWST, as it is comparable with the instrumental precision limit <cit.>. In this section, we investigate the potential errors in relative transit depth over multiple narrow bands within a limited wavelength range, so-called "narrow-band exoplanet spectroscopy". This kind of measurement has been performed with HST/STIS <cit.>, Spitzer/IRS <cit.>, HST/NICMOS <cit.>, HST/WFC3 <cit.>, and other space- and ground-based spectrographs (e.g. ), leading to the discovery of a long list of atomic, ionic, and molecular species in the atmospheres of exoplanets. Since the detection of the chemical species relies on their spectral features, it is not affected by a constant offset in transit depth; hence only the errors in transit depth differences at multiple wavelengths, referred to as relative errors, are important. Here, we study the case of exoplanet spectroscopy with HST/WFC3, for which the narrow-band spectra reported in the literature often have ∼20–40 ppm error bars. To fix ideas, we considered 25 wavelength bins, identical to those adopted in <cit.>, to generate one set of exact light-curves (as in Section <ref>) for each stellar model, and calculated the theoretical limb-darkening coefficients. We modeled each exact light-curve using the two approaches outlined in Sections <ref> and <ref>, i.e., with the theoretical and empirical limb-darkening coefficients, respectively. Figure <ref> shows the spectra obtained by using the most accurate claret-4 law. The spectra calculated with the theoretical limb-darkening coefficients are offset by +1, +18, and -18 ppm, on average, in excellent agreement with the measured biases for the broadband light-curves reported in Section <ref>. The relative errors, as measured by the peak-to-peak amplitudes, are 22, 14, and 8 ppm, from the coolest to the hottest model. The use of empirical limb-darkening coefficients reduces the spectral offsets, in these cases, to less than 3 ppm, and also reduces the peak-to-peak amplitudes down to 9, 5, and 4 ppm, respectively.
Note that, in all configurations, the peak-to-peak amplitudes across the WFC3 narrow bands are smaller than the corresponding amplitudes for the broadband photometry from visible to mid-infrared wavelengths reported in Sections <ref> and <ref>, and they also decrease with increasing model temperature. We remind the reader that the results discussed up to this section focus on the potential biases due to the approximate stellar limb-darkening parameterizations, in the absence of noise. The limits of the actual parameter fitting for light-curves with a low, but realistic, noise level are discussed in Section <ref>. An additional complication is the presence of temporal gaps in the transit light-curves observed with instruments onboard the HST, and from satellites operating in low orbits in general. The impact of such gaps on the retrieval of the transit parameters will be discussed in a separate paper (Karpouzas et al. 2017, in preparation).

§ LIGHT-CURVE FITTING WITH EMPIRICAL LIMB-DARKENING COEFFICIENTS

Sections <ref> and <ref> discussed the intrinsic biases due to the use of parametric models with theoretical (plane-parallel) or empirical limb-darkening coefficients. We now consider fitting the transit models with empirical limb-darkening coefficients to more realistic light-curves with noise. These light-curves are obtained by adding gaussian noise to the exact light-curves (see Section <ref>). The standard deviation of the noise was set at ∼100 ppm, which is similar to the best photon-noise limit possible for a short-cadence Kepler frame <cit.>, or for a single HST/WFC3 scan <cit.>, taking into account the different integration times. We focused on four cases: two transits with b = 0 and 0.5 across the hottest model star, at 8 μm (F0 V model, IRAC/ch4 passband), and two across the coolest model star, at ∼430 nm (M5 V model, STIS/G430L passband). The cases considered correspond to those for which the limb-darkening effects are weakest (mid-infrared) and strongest (visible).

§.§ F0 V model, IRAC/ch4 passband

Figure <ref> shows the transit depth posterior distributions for the b = 0 transit and F0 V model in the IRAC/ch4 passband, MCMC sampled with 1 500 000 iterations. Figure <ref> reports the relevant chains. Similar parameter chains are computed, in parallel, for a_R, i, the limb-darkening coefficients, a normalization factor, and the likelihood's variance. In two cases, the limb-darkening coefficients of the power-2 law are fitted; in the other two cases, those of the claret-4 law. The sampled posterior distributions of the transit depth, using the power-2 limb-darkening law, are almost identical and, in particular, the mean values and standard deviations differ by less than 1 ppm. Even considering subsets of the chains with 300 000 samples, the mean values and standard deviations are stable to better than 5 ppm. It may appear from Figure <ref> that the results are biased, given that the peak of the posterior distribution is more than 1σ away from the expected value. However, by repeating the analysis with different noise realizations, the best-fit transit depth is within 1σ of the expected value in 7 cases out of 10, consistent with the expectations for gaussian noise (see Figure <ref>).
Interestingly, the average of the individual best-fit transit depths is 9 ppm above the expected value, which is very close to the 7 ppm bias measured in the noiseless case (Section <ref>). The chains calculated with the claret-4 law present significant long-term modulations, resulting in wider posterior distributions; there are also moderately large differences between the two repetitions, indicating that they have not converged. The lack of convergence when fitting the claret-4 coefficients is not a surprise, and it is due to strong correlations between the coefficients and with the other transit parameters. Figures <ref> and <ref> (Appendix <ref>) show the posterior distribution and chains obtained for the inclined transit, with b = 0.5, using the power-2 limb-darkening law. The posterior distributions are wider than those obtained for the edge-on transit, with 1σ ≈ 20 ppm rather than 12 ppm. The estimates from the partial chains with 300 000 samples are also less robust, with the corresponding mean values scattered over a 12 ppm interval. The accuracy and precision of the empirical limb-darkening profiles are discussed in Appendix <ref>.

§.§ M5 V model, STIS/G430L passband

We conducted corresponding studies for two transits in front of the M5 V model in the STIS/G430L passband. Figure <ref> shows the transit depth posterior distributions for the edge-on transit, and Figure <ref> reports the relevant chains. The sampled posterior distributions of the transit depth, using the power-2 limb-darkening law, are in good agreement and, in particular, the mean values differ by 10 ppm, with standard deviations of 45 and 48 ppm, respectively. As expected, the error bars are larger than those obtained for the less limb-darkened case (with identical noise). The estimates from the partial chains with 300 000 samples may differ by up to 35 ppm, and the relevant standard deviations are in the range 34–51 ppm. Again, the MCMC process failed to converge when fitting the claret-4 coefficients. Figure <ref> reports the transit depth estimates obtained with 10 different noise realizations, using the power-2 law. Note that their average is significantly biased in the same direction as the bias obtained in the absence of noise (see Section <ref>), and the 1σ error bars are smaller than the bias. Figures <ref> and <ref> (Appendix <ref>) show the posterior distribution and chains obtained for the inclined transit, with b = 0.5, using the power-2 limb-darkening law. The posterior distributions are wider than those obtained for the edge-on transit, i.e. 1σ ≈ 115 ppm rather than 45 ppm. The accuracy and precision of the empirical limb-darkening profiles are discussed in Appendix <ref>.

§.§ The benefits of using prior information

The examples discussed in Sections <ref> and <ref> show that:

* if using the power-2 law, the empirical limb-darkening coefficients can be fitted together with the other transit parameters (p, a_R, i; see Section <ref>) and a normalization factor, but the results can be significantly biased, depending on the stellar model and passband;
* the analogous fits, using the claret-4 law, fail to converge (at least, using our MCMC routine with up to 1 500 000 iterations).

Unfortunately, all the two-coefficient laws are biased for some stellar types and wavelengths (see Sections <ref> and <ref>), although most of them are sufficiently accurate at infrared wavelengths. Some authors proposed fitting one or two coefficients of the claret-4 law, while keeping the other coefficients fixed (e.g., ).
We found that the validity of this approach relies on a good choice of the fixed coefficients, and thus it is not fully empirical. Our proposal is that, if the `geometric' parameters, a_R and i, are measured in the infrared, the results can be implemented as an informative prior when fitting at shorter wavelengths, thanks to their small or negligible wavelength-dependence, based on Equations <ref>, <ref> and Table <ref>. We tested fitting for p, a_R, i, the claret-4 limb-darkening coefficients, and a normalization factor on the transit light-curves obtained for the M5 V model in the STIS/G430L passband, adopting gaussian priors on a_R and i. The parameters of the gaussian priors are reported in Table <ref>. Figure <ref> shows the transit depth posterior distributions for the edge-on and the inclined transits, obtained with 1 500 000 MCMC samples. Figure <ref> shows the relevant chains. The use of gaussian priors on a_R and i leads to convergence of the MCMC fits with claret-4 coefficients. The error bars in transit depth are significantly smaller than those estimated with power-2 limb-darkening coefficients and non-informative priors on a_R and i, falling to ∼25 and ∼50 ppm (compared with ∼45 and ∼115 ppm) for the edge-on and inclined transits, respectively. The biases, averaged over 10 light-curves with different noise realizations, are also smaller (+15 and -7 ppm; Figure <ref>). As a final test, we investigated the effect of a longer integration time, similar to that of the Kepler short-cadence frames. The longer integration time is simulated by binning the transit light-curves over 7 points (7 × 8.4 = 58.8 s). The relevant transit depth posterior distributions are shown in Figure <ref>; they are almost identical to the non-binned ones. We conclude that an integration time of ∼1 min, as for the Kepler short-cadence frames, does not affect the precision (error bar) of the fitted transit depth, compared to shorter integration times.

§ DISCUSSION

§.§ Synergies between JWST and Kepler, K2, TESS

Empirical limb-darkening coefficients determined from exoplanetary transit light-curves are desirable, not only to validate the stellar-atmosphere models, but also to improve both the absolute and relative precision of inferred exoplanetary spectra. No two-coefficient limb-darkening law is accurate for all stellar types and/or wavelengths, but such laws can still give near-perfect fits to the transit light-curves, albeit with significantly biased transit parameters and limb-darkening coefficients. To overcome this issue, fitting for the claret-4 limb-darkening coefficients is necessary, but some prior knowledge of the orbital parameters a_R and i is required to enable convergence of the fitting algorithms. Such knowledge can be obtained from infrared observations, for which the effect of limb-darkening is smaller, and simple two-coefficient laws may be sufficiently accurate. The MIRI instrument onboard JWST will provide suitable observations for tens of exoplanets. A re-analysis of the Kepler and K2 targets, with the approach developed in Section <ref>, can address some of the controversies reported in the literature (e.g. ), if the only problem was the use of inadequate two-coefficient limb-darkening laws. The same approach should be used for new observations that will be obtained, in the visible, by TESS and/or other JWST instruments.
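As a concrete illustration of the prior-informed strategy described above, the following minimal log-posterior sketch shows how gaussian priors on a_R and i can enter the fit. It is our illustration, not the actual fitting code: model_flux is a placeholder for any parametric transit model (e.g. with claret-4 coefficients among theta), and the prior means and widths are assumptions, not the values of Table <ref>:

```python
import numpy as np

def log_gauss(x, mean, sigma):
    # log of a gaussian prior density
    return -0.5 * ((x - mean) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def log_posterior(theta, t, flux, sigma_flux, model_flux,
                  aR_prior=(8.8, 0.05), i_prior=(90.0, 0.2)):
    # theta = (p, a_R, i, limb-darkening coefficients...); the prior parameters
    # above are illustrative placeholders (e.g. from an infrared fit).
    p, aR, inc, *ld = theta
    resid = flux - model_flux(t, p, aR, inc, ld)
    log_like = -0.5 * np.sum((resid / sigma_flux) ** 2)
    return log_like + log_gauss(aR, *aR_prior) + log_gauss(inc, *i_prior)

# Trivial usage with a flat stand-in model, just to show the call signature.
t = np.linspace(-0.1, 0.1, 50)
flat = lambda t, p, aR, inc, ld: np.ones_like(t)
print(log_posterior([0.15, 8.8, 90.0, 0.4, 0.3, 0.2, 0.1],
                    t, np.ones_like(t), 1e-4, flat))
```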
§ CONCLUSIONS

We studied the potential biases in transit depth due to the use of theoretical stellar limb-darkening coefficients obtained from plane-parallel model atmospheres, and when fitting for empirical limb-darkening coefficients, over a range of model temperatures and instrumental passbands. We propose the use of a two-coefficient law, named "power-2", which outperforms the most common two-coefficient laws adopted in the exoplanet literature, especially for the M-dwarf models. Nevertheless, the Claret four-coefficient law is significantly more robust than any simpler one, especially at visible wavelengths. Our results indicate that an absolute precision of ≲30 ppm can be achieved in the modeled transit depth at visible and infrared wavelengths, with ≲10 ppm relative precision over the HST/WFC3 passband, depending on the stellar type. The intrinsic bias due to the use of theoretical limb-darkening coefficients obtained from plane-parallel models is also ≲30 ppm for most exoplanet host stars (F–M spectral types), but this estimate does not take into account the uncertainties in the stellar models and in the measured stellar parameters, or the effect of stellar activity and other second-order effects. Finally, we developed an optimal strategy for fitting the four-coefficient limb-darkening law in the visible, using prior information on the exoplanet orbital parameters to break some of the degeneracies. This novel approach could resolve some of the controversial results reported in the literature, which rely on empirical estimates of quadratic limb-darkening coefficients. The forthcoming JWST mission will provide accurate information on the orbital parameters of transiting exoplanets through observations performed by MIRI, enabling wide application of the approach developed in this paper.

This work was supported by STFC (ST/K502406/1) and the ERC project ExoLights (617119). D.H. is supported by Sonderforschungsbereich SFB 881 "The Milky Way System" (subproject A4) of the German Research Foundation (DFG).

§ SUPPLEMENTAL FIGURES: BEST-FIT MODELS AND RESIDUALS

This Appendix contains Figures <ref>–<ref> showing the best-fit limb-darkening and transit models for each stellar type and passband analyzed in Sections <ref>–<ref>.

§ SUPPLEMENTAL FIGURES: MCMC FITTING RESULTS FOR INCLINED TRANSITS

This Appendix contains Figures <ref>–<ref> showing the histograms and chains for the inclined transits, analogous to those presented in Sections <ref>–<ref> for the edge-on transits (Figures <ref>–<ref> and <ref>–<ref>).

§ ACCURACY AND PRECISION OF EMPIRICAL LIMB-DARKENING MODELS

In contrast to other transit parameters, the limb-darkening coefficients do not correspond directly to some physical property of the star–planet system. Also, for a given limb-darkening law, there exist sets of coefficients which are largely different but generate almost indistinguishable intensity profiles. Instead of studying their posterior distributions, it is therefore more informative to calculate the chains of specific intensities at given μ values, and then to compare them, in the case of simulations, with the input limb-darkening profile. Figure <ref> shows the residuals in specific intensities obtained from the two light-curves relative to the F0 V model in the IRAC/ch4 passband (Section <ref>), one edge-on (b = 0) and one inclined (b = 0.5) transit, using the power-2 law. The error bars (i.e., the standard deviations of the intensity chains) are smaller than 0.2% for μ > 0.4, then increase up to ≳1% near the edge of the disk.
For the inclined transit, the error bars are larger by factors of 1.0–2.4. The error bars of the predicted intensities along the steep drop-off, i.e. at μ ≲ 0.08, are not representative of the true errors, as the predictions may deviate from the input values by more than 10σ. We find that a good set of limb-darkening coefficients, which reproduces intensities close to those predicted by the intensity chains, can be obtained by taking the medians of the coefficient chains. Figure <ref> shows the intensity profiles estimated in this way, from light-curves with different noise realizations. They show, on average, the same bias obtained for the noiseless case (see Section <ref>). Figures <ref>–<ref> report the analogous results obtained for the M5 V model in the STIS/G430L passband (Section <ref>). The error bars on the specific intensities are, on average, ≳1.5 times larger than those obtained for the less limb-darkened case. Again, the bias is similar to the one obtained for the noiseless case (see Section <ref>).

[Allard et al.(2012)]allard12 Allard, F., Homeier, D. & Freytag, B.2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2765 [Aufdenberg et al.(2005)]aufdenberg05 Aufdenberg, J. P., Ludwig, H.-G., & Kervella, P.2005, , 633, 424 [Ballerini et al.(2012)]ballerini12 Ballerini, P., Micela, G., Lanza, A. F., & Pagano, I.2012, , 539, A140 [Barnes(2009)]barnes09 Barnes, J. W.2009, , 705, 683 [Barnes et al.(2011)]barnes11 Barnes, J. W., Linscott, E., & Shporer, A.2011, , 197, 10 [Beichman et al.(2014)]beichman14 Beichman, C., Benneke, B., Knutson, H., Smith, R., Lagage, P.-O., Dressing, C., Latham, D., Lunine, J., Birkmann, S., Ferruit, P., Giardino, G., Kempton, E., Carey, S., Krick, J., Deroo, P. D., Mandell, A., Ressler, M. E., Shporer, A., Swain, M., Vasisht, G., Ricker, G., Bouwman, J., Crossfield, I., Greene, T., Howell, S., Christiansen, J., Ciardi, D., Clampin, M., Greenhouse, M., Sozzetti, A., Goudfrooij, P., Hines, D., Keyes, T., Lee, J., McCullough, P., Robberto, M., Stansberry, J., Valenti, J., Rieke, M., Rieke, G., Fortney, J., Bean, J., Kreidberg, L., Ehrenreich, D., Deming, D., Albert, L., Doyon, R., & Sing, D.2014, , 126, 1134 [Borucki et al.(2009)]borucki09 Borucki, W. J., Koch, D., Jenkins, J., Sasselov, D., Gilliland, R., Batalha, N., Latham, D. W., Caldwell, D., Basri, G., Brown, T., et al.2009, Science, 325, 709 [Broomhall et al.(2009)]broomhall09 Broomhall, A. M., Chaplin, W. J., Davies, G. R., Elsworth, Y., Fletcher, S. T., Hale, S. J., Miller, B., & New, R.2009, , 396, L100 [Charbonneau et al.(2002)]charbonneau02 Charbonneau, D., Brown, T. M., Noyes, R. W., & Gilliland, R. L.2002, , 568, 377 [Chiavassa et al.(2016)]chiavassa16 Chiavassa, A., Caldas, A., Selsis, F., Leconte, J., Von Paris, P., Bordé, P., Magic, Z., Collet, R., & Asplund, M.2016, arXiv:1609.08966 [Claret(2000)]claret00 Claret, A.2000, , 363, 1081 [Claret(2004)]claret04 Claret, A.2004, , 428, 1001 [Claret(2008)]claret08 Claret, A.2008, , 482, 259 [Claret(2009)]claret09 Claret, A.2009, , 506, 1335 [Claret & Bloemen(2011)]claret11 Claret, A., & Bloemen, S.2011, , 529, A75 [Claret et al.(2012)]claret12 Claret, A., Hauschildt, P. H., & Witte, S.2012, , 546, A14 [Claret et al.(2012b)]claret12b Claret, A.2012, , 538, A3 [Claret et al.(2013)]claret13 Claret, A., Hauschildt, P. H., & Witte, S.2013, , 552, A16 [Claret et al.(2014)]claret14 Claret, A., Dragomir, D., & Matthews, J. M.2014, , 567, A3 [Crouzet et al.(2014)]crouzet14 Crouzet, N., McCullough, P.
R., Deming, D., & Madhusudhan, N.2014, , 795, 166 [Csizmadia et al.(2013)]csizmadia13 Csizmadia, S., Pasternacki, T., Dreyer, C., Cabrera, J., Erikson, A., & Rauer, H.2013, , 549, A9 [Danielski et al.(2014)]danielski14 Danielski, C., Deroo, P., Waldmann, I. P., Hollis, M. D. J., Tinetti, G., & Swain, M. R.2014, , 785, 35 [Deming et al.(2013)]deming13 Deming, D., Wilkins, A., McCullough, P., Burrows, A., Fortney, J. J., Agol, E., Dobbs-Dixon, I., Madhusudhan, N., Crouzet, N., Desert, J.-M., Gilliland, R. L., Haynes, K., Knutson, H. A., Line, M., Magic, Z., Mandell, A. M., Ranjan, S., Charbonneau, D., Clampin, M., Seager, S., & Showman, A. P.2013, , 774, 95 [Devinney(1980)]devinney80 Devinney, Jr., E. J.1980, , 12, 501 [Díaz-Cordovés & Giménez(1992)]diaz-cordoves92 Diaz-Cordoves, J., & Gimenez, A.1992, , 259, 227 [Dominik(2004)]dominik04 Dominik, M.2004, , 353, 118 [Espinoza & Jordan(2015)]espinoza15 Espinoza, N., & Jordán, A.2015, , 450, 1879 [Fields et al.(2003)]fields03 Fields, D. L., Albrow, M. D., An, J., Beaulieu, J.-P., Caldwell, J. A. R., DePoy, D. L., Dominik, M., Gaudi, B. S., Gould, A., Greenhill, J., Hill, K., Jørgensen, U. G., Kane, S., Martin, R., Menzies, J., Pogge, R. W., Pollard, K. R., Sackett, P. D., Sahu, K. C., Vermaak, P., Watson, R., Williams, A., Glicenstein, J.-F., Hauschildt, P. H.,2003, , 596, 1305[Fraine et al.(2014)]fraine14 Fraine, J., Deming, D., Benneke, B., Knutson, H., Jordán, A., Espinoza, N., Madhusudhan, N., Wilkins, A., & Todorov, K.2014, , 513, 526 [Gray & Corbally(2009)]gray09book Gray, R. O., & Corbally, C. J.2009, Stellar Spectral Classification, Princeton University Press, ISBN: 978-0-691-12510-7 [Grillmair et al.(2007)]grillmair07 Grillmair, C. J., Charbonneau, D., Burrows, A., Armus, L., Stauffer, J., Meadows, V., Van Cleve, J., & Levine, D.2007, , 658, L115 [Grillmair et al.(2008)]grillmair08 Grillmair, C. J., Burrows, A., Charbonneau, D., Armus, L., Stauffer, J., Meadows, V., van Cleve, J., von Braun, K., & Levine, D.2008, , 456, 767 [Hayek et al.(2012)]hayek12 Hayek, W., Sing, D., Pont, F. & Asplund, M.2012, , 539, A102 [Hébrard et al.(2013)]hebrard13 Hébrard, G., Almenara, J.-M., Santerne, A., Deleuil, M., Damiani, C., Bonomo, A. S., Bouchy, F., Bruno, G., Díaz, R. F., Montagnier, G., & Moutou, C.2013, , 554, A114 [Herrero et al.(2015)]herrero15 Herrero, E., Ribas, I., & Jordi, C.2015, Experimental Astronomy, 40, 695 [Hestroffer(1997)]hestroffer97 Hestroffer, D. 1997, , 327, 199 [Heyrovský(2007)]heyrovsky07 Heyrovský, D. 2007, , 656, 483 [Howarth(2011)]howarth11 Howarth, I. D.2011, , 418, 1165 [Howarth(2011b)]howarth11b Howarth, I. D.2011, , 413, 1515 [Howarth & Morello(2017)]howarth17 Howarth, I. D., & Morello, G.2017, , 470, 932 [Kipping(2009)]kipping09 Kipping, D. M. 2009, , 392, 181 [Kipping(2009b)]kipping09b Kipping, D. M. 2009, , 396, 1797 [Kipping & Tinetti(2010)]kipping10 Kipping, D. M., & Tinetti, G.2010, , 407, 2589 [Kipping & Bakos(2011)]kipping11 Kipping, D., & Bakos, G.2011, , 730, 50 [Kipping & Bakos(2011b)]kipping11b Kipping, D., & Bakos, G.2011, , 733, 36 [Kipping(2014)]kipping14 Kipping, D. M. 2014, , 444, 2263 [Kjeldsen & Bedding(2011)]kjeldsen11 Kjeldsen, H., & Bedding, T. R.2011, , 529, L8 [Knutson et al.(2014)]knutson14 Knutson, H. A., Benneke, B., Deming, D., & Homeier, D.2014, , 505, 66 [Knutson et al.(2014b)]knutson14b Knutson, H. A., Dragomir, D., Kreidberg, L., Kempton, E. M. R., McCullough, P. R., Fortney, J. J., Bean, J. L., Gillon, M., Homeier, D., Howard, A. 
W.2014, , 794, 155 [Kopal(1950)]kopal50 Kopal, Z.1950, Harvard College Observatory Circular, 454, 1 [Kreidberg et al.(2014)]kreidberg14 Kreidberg, L., Bean, J. L., Désert, J.-M., Line, M. R., Fortney, J. J., Madhusudhan, N., Stevenson, K. B., Showman, A. P., Charbonneau, D., McCullough, P. R., Seager, S., Burrows, A., Henry, G. W., Williamson, M., Kataria, T., & Homeier, D.2014, , 793, L27 [Kreidberg et al.(2014b)]kreidberg14b Kreidberg, L., Bean, J. L., Désert, J.-M., Benneke, B., Deming, D., Stevenson, K. B., Seager, S., Berta-Thompson, Z., Seifahrt, A., & Homeier, D.2014, , 505, 69 [Kreidberg et al.(2015)]kreidberg15 Kreidberg, L., Line, M. R., Bean, J. L., Stevenson, K. B., Désert, J.-M., Madhusudhan, N., Fortney, J. J., Barstow, J. K., Henry, G. W., Williamson, M. H., & Showman, A. P.2015, , 814, 66 [Kreidberg (2015)]kreidberg15b Kreidberg, L.2015, , 127, 1161 [Lane et al.(2001)]lane01 Lane, B. F., Boden, A. F., & Kulkarni, S. F.2001, , 551, L81 [Line et al.(2016)]line16 Line, M. R., Stevenson, K. B., Bean, J., Désert, J.-M., Fortney, J. J., Kreidberg, L., Madhusudhan, N., Showman, A. P., & Diamond-Lowe, H.2016, , 152, 203 [Loeb & Gaudi(2003)]loeb03 Loeb, A., & Gaudi, B. S.2003, , 588, L117 [Magic et al.(2015)]magic15 Magic, Z., Chiavassa, A., Collet, R. & Asplund, M.2015, , 573, A90 [Mandel & Agol(2002)]mandel02 Mandel, K., & Agol, E.2002, , 580, L171 [McCullough et al.(2014)]mccullough14 McCullough, P. R., Crouzet, N., Deming, D., & Madhusudhan, N.2014, , 791, 55 [Micela(2015)]micela15 Micela, G.2015, Experimental Astronomy, 40, 723 [Müller et al.(2013)]muller13 Müller, H. M., Huber, K. F., Czesla, S., Wolter, U., & Schmitt, J. H. M. M.2013, , 560, A112 [Neilson & Lester(2013a)]neilson13a Neilson, H. R., & Lester, J. B.2013, , 554, A98 [Neilson & Lester(2013b)]neilson13b Neilson, H. R., & Lester, J. B.2013, , 556, A86 [Neilson et al.(2017)]neilson17 Neilson, H. R., McNeil, J. T., Ignace, R., & Lester, J. B.2017, arXiv:1704.07376 [Pál(2008)]pal08 Pál, A.2008, , 390, 281 [Pfahl et al.(2008)]pfahl08 Pfahl, E., Arras, P., & Paxton, B.2008, , 679, 783 [Redfield et al.(2008)]redfield08 Redfield, S., Endl, M., Cochran, W. D., & Koesterke, L.2008, , 673, L87 [Reeve & Howarth(2016)]reeve16 Reeve, D. C., & Howarth, I. D.2016, , 456, 1294 [Richardson et al.(2007)]richardson07 Richardson, L. J., Deming, D., Horning, K., Seager, S., & Harrington, J.2007, , 445, 892 [Sartoretti & Schneider(1999)]sartoretti99 Sartoretti, P., & Schneider, J.1999, , 134, 553 [Scandariato & Micela(2015)]scandariato15 Scandariato, G. & Micela, G.2015, Experimental Astronomy, 40, 711 [Schneider et al.(2011)]schneider11 Schneider, J., Dedieu, C., Le Sidaner, P., Savalle, R., & Zolotukhin, I.2011, , 532, A79 [Shporer et al.(2012)]shporer12 Shporer, A., Brown, T., Mazeh, T., & Zucker, S.2012, , 17, 309 [Sing(2010)]sing10 Sing, D. K.2010, , 510, A21 [Snellen et al.(2008)]snellen08 Snellen, I. A. G., Albrecht, S., de Mooij, E. J. W., & Le Poole, R. S.2008, , 487, 357 [Snellen et al.(2009)]snellen09 Snellen, I. A. G., de Mooij, E. J. W., & Albrecht, S.2009, , 459, 543 [Southworth et al.(2004)]southworth04 Southworth, J., Maxted, P. F. L., & Smalley, B.2004, , 351, 1277 [Southworth(2008)]southworth08 Southworth, J.2008, , 386, 1644 [Stevenson et al.(2014)]stevenson14 Stevenson, K. B., Désert, J.-M., Line, M. R., Bean, J. L., Fortney, J. J., Showman, A. P., Kataria, T., Kreidberg, L., McCullough, P. R., Henry, G. W., Charbonneau, D., Burrows, A., Seager, S., Madhusudhan, N., Williamson, M.
H., & Homeier, D.2014, Science, 346, 838 [Swain et al.(2008)]swain08 Swain, M. R., Vasisht, G., & Tinetti, G.2008, , 452, 329 [Swain et al.(2009)]swain09 Swain, M. R., Vasisht, G., Tinetti, G., Bouwman, J., Chen, P., Yung, Y., Deming, D., & Deroo, P.2009, , 690, L114 [Swain et al.(2009b)]swain09b Swain, M. R., Tinetti, G., Vasisht, G., Deroo, P., Griffith, C., Bouwman, J., Chen, P., Yung, Y., Burrows, A., Brown, L. R., Matthews, J., Rowe, J. F., Kuschnig, R., & Angerhausen, D.2009, , 704, 1616 [Swain et al.(2010)]swain10 Swain, M. R., Deroo, P., Griffith, C. A., Tinetti, G., Thatte, A., Vasisht, G., Chen, P., Bouwman, J., Crossfield, I. J., Angerhausen, D., Afonso, C., & Henning, T.2010, , 463, 637 [Tinetti et al.(2010)]tinetti10 Tinetti, G., Deroo, P., Swain, M. R., Griffith, C. A., Vasisht, G., Brown, L. R., Burke, C., & McCullough, P.2010, , 712, L139 [Tsiaras et al.(2016)]tsiaras16 Tsiaras, A., Rocchetto, M., Waldmann, I. P., Venot, O., Varley, R., Morello, G., Damiano, M., Tinetti, G., Barton, E. J., Yurchenko, S. N., & Tennyson, J.2016, , 820, 99 [Tsiaras et al.(2016b)]tsiaras16b Tsiaras, A., Waldmann, I. P., Rocchetto, M., Varley, R., Morello, G., Damiano, M., & Tinetti, G.2016, , 832, 202 [Van Cleve & Caldwell(2009)]kih09 Van Cleve, J. E., & Caldwell, D. A.2009, Kepler Instrument Handbook (KSCI-19033-001) [Vidal-Madjar et al.(2003)]vidal03 Vidal-Madjar, A., Lecavelier des Etangs, A., Désert, J.-M., Ballester, G. E., Ferlet, R., Hébrard, G., & Mayor, M.2003, , 422, 143 [Vidal-Madjar et al.(2004)]vidal04 Vidal-Madjar, A., Désert, J.-M., Lecavelier des Etangs, A., Hébrard, G., Ballester, G. E., Ehrenreich, D., Ferlet, R., McConnell, J. C., Mayor, M., & Parkinson, C. D.2004, , 604, L69 [von Zeipel(1924)]vonzeipel24 von Zeipel, H.1924, , 84, 665 [Waldmann et al.(2012)]waldmann12 Waldmann, I. P., Tinetti, G., Drossart, P., Swain, M. R., Deroo, P., & Griffith, C. A.2012, , 744, 35 [Welsh et al.(2010)]welsh10 Welsh, W. F., Orosz, J. A., Seager, S., Fortney, J. J., Jenkins, J., Rowe, J. F., Koch, D., & Borucki, W. J.2010, , 713, L145 [Wilson & Devinney(1971)]wilson71 Wilson, R. E., & Devinney, E. J.1971, , 166, 605 [Witt(1995)]witt95 Witt, H. J.1995, , 449, 42 [Wittkowski et al.(2001)]wittkowski01 Wittkowski, M., Hummel, C. A., Johnston, K. J., Mozurkewich, D., Hajian, A. R., & White, N. M.2001, , 377, 981 [Wittkowski et al.(2004)]wittkowski04 Wittkowski, M., Aufdenberg, J. P., & Kervella, P.2004, , 413, 711 [Zub et al.(2011)]zub11 Zub, M., Cassan, A., Heyrovský, D., Fouqué, P., Stempels, H. C., Albrow, M.D., Beaulieu, J.-P., Brillant, S., Christie, G. W., Kains, N., Kozłowski, S., Kubas, D., Wambsganss, J., Batista, V., Bennett, D. P., Cook, K., Coutures, C., Dieters, S., Dominik, M., Dominis Prester, D., Donatowicz, J., Greenhill, J., Horne, K., Jørgensen, U. G., Kane, S. R., Marquette, J.-B., Martin, R., Menzies, J., Pollard, K. R., Sahu, K. C., Vinter, C., Williams, A., Gould, A., Depoy, D. L., Gal-Yam, A., Gaudi, B. S., Han, C., Lipkin, Y., Maoz, D., Ofek, E. O., Park, B.-G., Pogge, R. W., McCormick, J., Udalski, A., Szymański, M. K., Kubiak, M., Pietrzyński, G., Soszyński, I., Szewczyk, O., Wyrzykowski, Ł., & PLANET Collaboration2011, , 525, A15 [Zucker et al.(2007)]zucker07 Zucker, S., Mazeh, T., & Alexander, T.2007, , 670, 1326
Nian Yao School of Mathematics and Statistics, Shenzhen University, Shenzhen, Guangdong Province, 518060, China. [email protected]
Zhiming Yang School of Mathematics and Statistics, Shenzhen University, Shenzhen, Guangdong Province, 518060, China. [email protected]

Optimal excess-of-loss reinsurance and investment problem for an insurer with default risk under a stochastic volatility model
Nian Yao · Zhiming Yang
===============================================================================================================================

Abstract: In this paper, we study an optimal excess-of-loss reinsurance and investment problem for an insurer in a defaultable market. The insurer can buy reinsurance and invest in the following securities: a bank account, a risky asset with stochastic volatility and a defaultable corporate bond. We decompose the optimal investment problem into two subproblems: a pre-default case and a post-default case. We show the existence of a classical solution in the pre-default case via super-sub solution techniques and give an explicit characterization of the optimal reinsurance and investment policies that maximize the expected CARA utility of terminal wealth. We prove a verification theorem establishing the uniqueness of the solution. Numerical results are presented in the case of the Scott model, and we discuss the economic insights obtained from them.

Keywords: optimal reinsurance · optimal investment · default risk · Hamilton-Jacobi-Bellman equation · stochastic volatility model.

2010 Mathematics Subject Classification: Primary 93E20; Secondary 60H30

§ INTRODUCTION

The theory of optimal investment dates back to the seminal works of Merton (1969, 1971, 1990), who, in the setting of continuous-time models, studied the optimization problem of an agent who invests his/her wealth in a financial market to maximize the expected utility of terminal wealth. He derived a solution to this optimization problem for a complete market by employing tools of optimal stochastic control. Browne (1995) considered a risk process approximated by a Brownian motion with drift and a stock price modeled by a geometric Brownian motion, where the insurer maximizes the expected constant absolute risk aversion (CARA) utility of terminal wealth. Under this assumption, when the interest rate of a risk-free bond is zero, the optimal strategy also minimizes the ruin probability. Hipp and Plum (2000) studied a risk process following the classical Cramér–Lundberg model, in which the insurer can invest in a risky asset to minimize the ruin probability. However, the interest rate of the bond in their model is implicitly assumed to be zero. Liu and Yang (2004) extended the model of Hipp and Plum (2000) to incorporate a non-zero interest rate, but in this case a closed-form solution cannot be obtained. Yang and Zhang (2005) considered an insurer who is allowed to invest in the money market and a risky asset; they obtained a closed-form expression for the optimal strategy when the utility function is exponential. Fernández et al. (2008) considered the risk model with the possibility of investment in the money market and a risky asset modeled by a geometric Brownian motion.
Via the Hamilton-Jacobi-Bellman (HJB) approach, they found the optimal strategy when the insurer's preferences are exponential. Badaoui (2013) extended the model of Fernández et al. (2008) to a risky asset with stochastic volatility; when the insurer's preferences are exponential, they proved the existence of a smooth solution and gave an explicit form of the optimal strategy.

For the reinsurance problem, Promislow and Young (2005) obtained investment and reinsurance strategies to minimize the ruin probability for a diffusion risk model. Bai and Guo (2008) considered an optimal proportional reinsurance and investment problem with multiple risky assets for a diffusion risk model. Cao and Wan (2009) investigated the proportional reinsurance and investment problem of utility maximization for an insurance company. Zeng and Li (2011) obtained the time-consistent investment and proportional reinsurance policies under the mean-variance criterion for an insurer. Gu et al. (2010) introduced the CEV model into the optimal reinsurance and investment problem for insurers. Later, Liang et al. (2012) and Lin and Li (2011) investigated the optimal reinsurance and investment problem for an insurer with a jump-diffusion risk process under the CEV model. Li et al. (2012) began to apply the Heston model to study the reinsurance and investment problem under the mean-variance criterion. Asmussen et al. (2000) first studied the optimal dividend problem under the control of excess-of-loss reinsurance and showed that excess-of-loss reinsurance is more profitable than proportional reinsurance. Zhao and Rong (2013) considered a risk process approximated by a Brownian motion with drift, with the risky asset price following the Heston model, and obtained the optimal excess-of-loss reinsurance strategy.

For the default risk problem, Bielecki and Jang (2006) considered an agent who is allowed to invest in a bond, a risky asset and a defaultable asset with constant coefficients. Capponi and Figueroa-López (2014) considered the same problem in a market whose coefficients are modulated by a finite-state continuous-time Markov chain. In these two articles, the dynamic programming method was adopted and the optimal strategy was obtained. Jiao and Pham (2011) used a default-density modelling approach and addressed the maximization problem of the power utility of terminal wealth in a financial market with a stock exposed to counterparty risk. By decomposing the optimization problem into two sub-problems, one stated before the default time and one stated after default, they derived the optimal investment strategy by applying standard martingale approaches. Bo et al. (2010, 2013) considered a portfolio optimization problem with default risk under the intensity-based reduced-form framework, where the goal was to maximize the infinite-horizon expected discounted HARA utility of consumption, and the default risk premium and the default intensity were assumed to depend on a stochastic factor described by a diffusion process. Zhu et al. (2015) studied the optimal investment and reinsurance problem for an insurer whose investment opportunity set contains a defaultable security, and derived closed-form expressions for the optimal control strategies and the corresponding value functions. Bo et al.
(2016) considered an optimal risk-sensitive portfolio allocation problem, which explicitly accounts for the interaction between market and credit risk; they showed the existence of a classical solution to the associated system via super-sub solution techniques and gave an explicit characterization of the optimal feedback strategy.

In our paper, the insurer is allowed to purchase excess-of-loss reinsurance and to invest in a risk-free asset, a risky stock whose price follows a general stochastic volatility model, and a defaultable corporate bond. Compared with Badaoui (2013) and Zhu et al. (2015), we add excess-of-loss reinsurance and default risk to the model and generalize the Heston model to a more general stochastic volatility model. We work under the martingale invariance hypothesis. Herein, we also assume the existence of the conditional density of the default time τ. Let the surplus process of the insurer follow a compound Poisson risk model, and the dynamics of the risky stock price follow a stochastic volatility model. The insurance company's manager can dynamically choose an excess-of-loss reinsurance strategy and allocate wealth among the above three assets. The goal is to maximize the finite-horizon expected exponential utility of terminal wealth. In the spirit of Bielecki and Jang (2006), we decompose the original optimization problem into two sub-problems: a pre-default case and a post-default case. A dynamic programming principle is employed to derive the Hamilton–Jacobi–Bellman (HJB) equation. We show the existence of a classical solution in the pre-default case via super-sub solution techniques. Closed-form expressions for the optimal control strategies and the corresponding value functions are derived.

The remainder of this paper is organized as follows: In Section 2, we introduce the model and the problem under study. In Section 3, we derive the HJB equations for the pre-default and post-default cases, obtain explicit expressions for the optimal control strategies and the corresponding value functions, show the existence of a classical solution in the pre-default case via super-sub solution techniques, and provide the verification theorem. Section 4 demonstrates our results with numerical examples. In the Appendix, we give some results about partial differential equations which are important for our proofs.

§ THE MODEL

§.§ Dynamics of the reserve process

The insurer's surplus process is described by the classical risk model, i.e.,

dR_t = c dt - dC_t,

where c is the premium rate and C_t represents the cumulative claims up to time t. Suppose the premium is calculated according to the expected value principle, i.e., c = (1+η)λμ_∞, where η > 0 is the safety loading of the insurer. We assume that C_t = ∑_i=1^N_t X_i is a compound Poisson process, where N_t is a homogeneous Poisson process with intensity λ and jump times {T_i}_i≥1. The claim sizes {X_i, i≥1} are independent and identically distributed positive random variables with common distribution F(x). Denote the mean value E[X_i] = μ_∞ and D := sup{x : F(x) < 1}. Suppose that F(0) = 0, 0 < F(x) < 1 for 0 < x < D and F(x) = 1 for x ≥ D. In addition, we assume that N_t is independent of the claim sizes X_i, i ≥ 1. The insurer is allowed to purchase excess-of-loss reinsurance to reduce the underlying insurance risk. Let a be a (fixed) excess-of-loss retention level.
Then the corresponding reserve process is

dR_t = c^(a) dt - dC^(a)_t,

where c^(a) = (1+η)λμ_∞ - (1+θ)λ{μ_∞ - E[min(X_1,a)]} = (η-θ)λμ_∞ + (1+θ)λ∫_0^a F̅(x)dx, C^(a)_t = ∑_i=1^N_t min(X_i,a), θ denotes the safety loading of the reinsurer and F̅(x) = 1 - F(x). Without loss of generality, we assume that θ > η and

E[exp{∫_0^t e^-rs dC^(a)(s)}] < ∞, ∀ t < ∞.

§.§ The financial market

We assume (Ω,𝒢,ℚ) to be a complete probability space that is endowed with a reference filtration ℱ = {ℱ_t}_t≥0 satisfying the usual conditions. The probability measure ℚ is a martingale probability measure and is assumed to be equivalent to the real-world measure ℙ. Let τ be a non-negative random variable on this space, representing the first jump time of a Poisson process with constant intensity h^Q > 0. For the sake of convenience, we assume that ℚ(τ=0) = 0 and ℚ(τ>t) > 0 for every t, which implies that the default cannot occur at the initial time and can occur at any time until maturity. For t ≥ 0, define a default indicator process H = (H_t; t≥0) by H_t = 𝕀_{τ≤ t}. The enlarged filtration 𝒢 is defined by 𝒢_t = ℱ_t ⋁ σ(H_s; s ≤ t) = ℱ_t ⋁ σ(τ ⋀ t). Then, 𝒢 = (𝒢_t; t≥0) is the smallest filtration containing ℱ such that the random time τ, which is not necessarily an ℱ-stopping time, becomes a 𝒢-stopping time. Such an information structure is standard in the reduced-form approach.

Let the conditional survival probability be given by

ℚ(τ > t | ℱ_t) = e^-h^Q t,

where the risk-neutral intensity h^Q is assumed to be constant; then, the following process related to default,

M^Q_t = H_t - ∫_0^t (1-H_u) h^Q du,

is a (ℚ,𝒢)-martingale. By applying Proposition 1 in Zhu et al. (2015), the ℙ-dynamics of the defaultable bond price process p(t,T_1) are given by

dp(t,T_1) = p(t-,T_1)[r(Z_t)dt + (1-H_t)δ(1-Δ)dt - (1-H_t-)ζ dM^P_t],

where M^P_t = H_t - h^QΔ∫_0^t (1-H_u)du is a 𝒢-martingale under the real-world probability ℙ, δ = h^Qζ is the risk-neutral credit spread, ζ is the loss rate, h^P = h^QΔ is a constant and 1/Δ ≥ 1 denotes the default risk premium.

The price process of the risk-free asset is given by

dS_t^0 = S_t^0 r(Z_t)dt,

where r(·) is the interest rate function. The process Z_t can be interpreted as the behavior of some economic factor that has an impact on the dynamics of the risky asset and the bank account. For instance, the external factor can be modeled by the mean-reverting Ornstein-Uhlenbeck (O-U) process:

dZ_t = δ(κ - Z_t)dt + β dW_t, Z_0 = z,

where δ and κ are constants. Following Badaoui (2013), we assume the risky asset price satisfies the following stochastic volatility model:

dS_t = S_t(μ(Z_t)dt + σ(Z_t)dW_1t),

where S_0 = 1, W_1t is a standard Brownian motion, and μ(·) and σ(·) are respectively the return rate and volatility functions. Z is an external factor modeled as a diffusion process solving

dZ_t = g(Z_t)dt + β(ρ dW_1t + √(1-ρ^2) dW_2t),

where Z_0 = z ∈ ℝ, |ρ| ≤ 1 and β ≠ 0; W_2t is a standard Brownian motion, W_1t and W_2t are independent, and W_t = ρ W_1t + √(1-ρ^2) W_2t. For example, the risky asset price can be given by the Scott model (Fouque et al., 2000; Cont and Tankov, 2003):

dS_t = S_t(μ_0 dt + e^Z_t dW_1t), S_0 = 1.

Here, we assume that μ_0 is constant. More details about stochastic volatility models can be found in Fouque et al. (2000).

§.§ The wealth process

We assume that the insurer is allowed to purchase excess-of-loss reinsurance. The insurer has investment opportunities in a risky stock asset, a risk-free asset and a corporate bond issued by a private corporation, which may default at some random time τ, where the investment horizon is [0,T] and T < T_1.
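Before introducing reinsurance-investment strategies, the model ingredients assembled so far can be illustrated with a short Euler-type simulation. This is a sketch under assumed values: the O-U coefficients, claim intensity, retention level and exponential claim sizes are our illustrative choices (with ρ = 0), not the parameters of the numerical examples in Section 4:

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler scheme for the Scott stock price dS = S(mu_0 dt + e^Z dW_1) with an
# O-U factor Z, plus compound Poisson claims capped at the retention level a.
T, n = 1.0, 2000
dt = T / n
mu0, dlt, kap, beta = 0.08, 2.0, -1.0, 0.3    # drift and O-U parameters (assumed)
lam, a = 10.0, 1.5                            # claim intensity, retention level

S, Z = np.ones(n + 1), np.full(n + 1, -1.0)
for k in range(n):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)  # independent drivers (rho = 0)
    Z[k + 1] = Z[k] + dlt * (kap - Z[k]) * dt + beta * dW2
    S[k + 1] = S[k] * (1.0 + mu0 * dt + np.exp(Z[k]) * dW1)

N = rng.poisson(lam * T)                         # number of claims on [0, T]
claims = np.minimum(rng.exponential(1.0, N), a)  # retained claims min(X_i, a)
print(S[-1], Z[-1], claims.sum())
```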
Let π(t)=(l(t), m(t), a(t)) be the reinsurance-investment strategy followed by the insurer, where l(t) represents the amount of wealth invested in the stock market, m(t) is the amount of wealth invested in the corporate bond, and a(t) denotes the excess-of-loss retention level at time t. We assume that the corporate bond is not traded after default. Let 𝒜 denote the set of all admissible strategies. The wealth process associated with this choice is denoted by Y^π_t=Y(t,y,z,π), and its dynamics are given by
dY^π_t=(Y^π_t-l(t)-m(t))/S_t^0 dS_t^0+ l(t)/S_t dS_t+m(t)/p(t-,T_1) dp(t,T_1)+dR_t
=[r(Z_t)Y^π_t+(μ(Z_t)-r(Z_t))l(t)+c^(a)+(1-H_t)m(t)δ(1-Δ)]dt+l(t)σ(Z_t)dW_1t-m(t)(1-H_t-)ζ dM^P_t-d∑_i=1^N_tmin(X_i,a(t)).

Suppose that the insurer is interested in maximizing the CARA utility of his terminal wealth at time T. The utility function is U(y)=-e^-α y, α>0, which satisfies U^'>0 and U^''<0. We are now in a position to formulate the following optimization problem:
V(t,y,z,h)=sup_π∈𝒜 E^P[U(Y^π_T)|(Y^π_t,Z_t,H_t)=(y,z,h)].

1. The functions μ(·), σ(·) and g(·) are such that there exists a strong solution of Eqs. (<ref>) and (<ref>).
2. The function r(·) is continuous, positive, and r(z)<μ(z) for all z∈ℝ.

§ THE MAIN RESULT

Using dynamic programming techniques, we find that the corresponding HJB equation is
{ sup_π∈𝒜ℒ^πJ(t,y,z,h)=0, J(T,y,z,h)=U(y), }
where
ℒ^πJ(t,y,z,h)= J_t(t,y,z,h)+J_y(t,y,z,h)(r(z)y+l(t)(μ(z)-r(z))+c^(a)+m(t)(1-h)δ)+J_z(t,y,z,h)g(z)+1/2J_yy(t,y,z,h)l(t)^2σ(z)^2+ 1/2J_zz(t,y,z,h)β^2 +J_yz(t,y,z,h)βρσ(z)l(t)+λ E[J(t,y-min(X_1,a),z,h)-J(t,y,z,h)]+(J(t,y-m(t)ζ,z,h+1)-J(t,y,z,h))h^P(1-h).

Now we establish a verification theorem, which relates the value function V to the HJB equation (<ref>).

(Verification Theorem). Let J(t,y,z,h), (t,y,z,h)∈[0,T]×ℝ×ℝ×{0,1}, be a classical solution to the HJB equation (<ref>) with terminal condition J(T,y,z,h)=U(y) for all (y,z)∈ℝ^2. Also assume that for each π∈𝒜,
∫_0^T∫_0^∞𝔼| J(t,Y_t^π-min(x,a),Z_t,H_t)-J(t,Y_t-^π,Z_t,H_t)| ^2dF(x)dt<∞,
∫_0^T𝔼| l(t,z)J_y(t,Y_t-^π,Z_t,H_t)| ^2dt<∞,
∫_0^T𝔼| J_z(t,Y_t-^π,Z_t,H_t)| ^2dt<∞,
∀ s∈[0,T], {∫_s^v (J(t,Y_t^π-m(t)ζ,Z_t,1-H_t)-J(t,Y_t-^π,Z_t-,H_t-))dM_t^P}_v∈[s,T] is a martingale.
Then, under Hypotheses (<ref>)-(<ref>) and assumptions (<ref>)-(<ref>), for each u∈[0,T], (y,z)∈ℝ^2,
J(u,y,z,h)≥ V(u,y,z,h).
If, in addition, there exists an optimal strategy π^*, then
J(u,y,z,h)=V(u,y,z,h)=E[U(Y_T^π^*)|(Y^π^*_u,Z_u,H_u)=(y,z,h)].

We only prove the pre-default case h=0; the post-default case h=1 is treated in the same way. Let π∈𝒜.
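Before carrying out the proof, the following short digression illustrates the wealth dynamics by a crude Euler simulation with constant controls. All choices here are assumptions for illustration: a constant volatility stands in for σ(Z_t), the claims are exponential, and the parameter values are borrowed from the numerical section. The dM^P compensator is absorbed into the drift, so the bond contributes a drift (1-H)mδ and a loss mζ at default.

```python
import numpy as np

def simulate_wealth(l=1.0, m=2.0, a=5.0, T=1.0, n=2000, y0=1.0, r=0.04,
                    mu=0.3, sigma=0.25, lam=3.0, eta=7/3, theta=8/3,
                    b=2.0, hP=0.25, Delta=0.25, zeta=0.4, seed=2):
    """Crude Euler sketch of the wealth SDE for constant controls (l, m, a):
    drift r*Y + (mu - r)*l + c^(a) + (1-H)*m*delta, diffusion l*sigma*dW,
    a loss m*zeta at default, and retained claims min(X_i, a)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    delta = zeta * hP / Delta                    # credit spread delta = zeta * h^Q
    # c^(a) = (eta - theta)*lam*mu_inf + (1 + theta)*lam*int_0^a Fbar, Exp(b) claims
    ca = (eta - theta) * lam / b + (1 + theta) * lam * (1 - np.exp(-b * a)) / b
    Y, H = y0, 0
    for _ in range(n):
        Y += (r * Y + (mu - r) * l + ca + (1 - H) * m * delta) * dt \
             + l * sigma * rng.normal(0.0, np.sqrt(dt))
        if H == 0 and rng.random() < hP * dt:    # default arrival, intensity h^P under P
            Y -= m * zeta
            H = 1
        if rng.random() < lam * dt:              # claim arrival, intensity lam
            Y -= min(rng.exponential(1.0 / b), a)
    return Y

print(simulate_wealth())
```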
Itô's formula implies that for any v∈[u,T],
J(v,Y_v^u,y,z,π,Z_v,H_v) =J(u,y,z,0)+∫_u^vJ_t(t,Y_t^u,y,z,π,Z_t,H_t)dt+∫_u^vJ_y(t,Y_t^u,y,z,π,Z_t,H_t)dY_t^c+∫_u^vJ_z(t,Y_t^u,y,z,π,Z_t,H_t)dZ_t+1/2∫_u^vJ_yy(t,Y_t^u,y,z,π,Z_t,H_t)d⟨ Y^c,Y^c⟩_t+1/2∫_u^vJ_zz(t,Y_t^u,y,z,π,Z_t,H_t)d⟨ Z,Z⟩_t+ ∫_u^vJ_yz(t,Y_t^u,y,z,π,Z_t,H_t)d⟨ Y^c,Z⟩_t+∫_u^v(J(t,Y_t^u,y,z,π-m(t)ζ,Z_t,1-H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))dH_t+∫_u^v∫_0^∞ (J(t,Y_t^u,y,z,π-min(x,a),Z_t,H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))N̅(dx,dt)
=J(u,y,z,0)+∫_u^vJ_t(t,Y_t^u,y,z,π,Z_t,H_t)dt+ ∫_u^vJ_y(t,Y_t^u,y,z,π,Z_t,H_t)l(t)σ(Z_t)dW_1t+∫_u^vJ_y(t,Y_t^u,y,z,π,Z_t,H_t)[r(Z_t)Y_t^u,y,z,π+(μ(Z_t)-r(Z_t))l(t)+c^(a)+(1-H_t)m(t)δ(1-Δ)+m(t)ζ(1-H_t)h^P]dt+ ∫_u^vJ_z(t,Y_t^u,y,z,π,Z_t,H_t)g(Z_t)dt+∫_u^vJ_z(t,Y_t^u,y,z,π,Z_t,H_t)β dW̃_t+1/2∫_u^vJ_yy(t,Y_t^u,y,z,π,Z_t,H_t)l^2(t)σ^2(Z_t)dt+1/2∫_u^vJ_zz(t,Y_t^u,y,z,π,Z_t,H_t)β^2dt+ ∫_u^vJ_yz(t,Y_t^u,y,z,π,Z_t,H_t)ρβ l(t)σ(Z_t)dt+∫_u^v(J(t,Y_t^u,y,z,π-m(t)ζ,Z_t,1-H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))dH_t+∫_u^v∫_0^∞ (J(t,Y_t^u,y,z,π-min(x,a),Z_t,H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))N̅(dx,dt),
where N̅ is the Poisson random measure on ℝ_+×[0,∞[ defined by N̅=∑_n≥1δ_(X_n,T_n).

Compensating (<ref>) by
λ∫_u^v∫_0^∞ (J(t,Y_t^u,y,z,π-min(x,a),Z_t,H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))dF(x)dt
and
∫_u^v(J(t,Y_t^u,y,z,π-m(t)ζ,Z_t,1-H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))(1-H_t)h^Pdt,
we obtain the following:
J(v,Y_v^u,y,z,π,Z_v,H_v)= J(u,y,z,0)+∫_u^v ℒ^πJ(t,Y_t^u,y,z,π,Z_t-,H_t-)dt+ ∫_u^vJ_y(t,Y_t^u,y,z,π,Z_t,H_t)l(t)σ(Z_t)dW_1t+∫_u^vJ_z(t,Y_t^u,y,z,π,Z_t,H_t)β dW̃_t+ ∫_u^v(J(t,Y_t^u,y,z,π-m(t)ζ,Z_t,1-H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))dM_t^P+ ∫_u^v∫_0^∞(J(t,Y_t^u,y,z,π-min(x,a),Z_t,H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))(N̅(dx,dt)-λ dF(x)dt).

Assumptions (<ref>) and (<ref>) imply that all the stochastic integrals with respect to the Brownian motions are martingales. By assumption (<ref>),
∫_u^v∫_0^∞(J(t,Y_t^u,y,z,π-min(x,a),Z_t,H_t)-J(t,Y_t-^u,y,z,π,Z_t-,H_t-))(N̅(dx,dt)-λ dF(x)dt)
is a martingale (see Ikeda and Watanabe, 1989, p. 63). By assumption (<ref>),
∫_u^v (J(t,Y_t^π-m(t)ζ,Z_t,1-H_t)-J(t,Y_t-^π,Z_t-,H_t-))dM_t^P
is a martingale. Then, taking expectations in (<ref>) yields
E[J(v,Y_v^π,Z_v,H_v)]=J(u,y,z,0)+E[∫_u^vℒ^πJ(t,Y_t-^π,Z_t-,H_t)dt].
Since J satisfies the HJB equation (<ref>), we obtain that
E[J(v,Y_v^π,Z_v,H_v)]≤ J(u,y,z,0),
and letting v=T in (<ref>), we get that
J(u,y,z,0)≥ V(u,y,z,0).
To justify the second part of the theorem, we repeat the above calculations for the strategy given by π^*(t,Z_t-). Then we have
J(u,y,z,0)=E[U(Y_T^π^*)|(Y_u^π^*,Z_u,H_u)=(y,z,0)]≤ V(u,y,z,0),
and with the first part of the proof we get that
J(u,y,z,0)=E[U(Y_T^π^*)|(Y_u^π^*,Z_u,H_u)=(y,z,0)]= V(u,y,z,0).

§.§ Period after default

We define the pre-default and post-default value functions by
V(t,y,z,h)={ V(t,y,z,0), if h=0 (the pre-default case), V(t,y,z,1), if h=1 (the post-default case), }
and calculate the post-default case first.

When h=1, the HJB equation (<ref>) reduces to the relatively simple form
0= J_t(t,y,z,1)+sup_π∈𝒜{J_y(t,y,z,1)[r(z)y+l(t)(μ(z)-r(z))+c^(a)]+J_z(t,y,z,1)g(z)+1/2J_yy(t,y,z,1)l(t)^2σ(z)^2+ 1/2J_zz(t,y,z,1)β^2+J_yz(t,y,z,1)βρσ(z)l(t)+λ E[J(t,y-min(X_1,a),z,1)-J(t,y,z,1)]}
= J_t(t,y,z,1)+sup_l∈ℝ{J_y(t,y,z,1)[r(z)y+l(t)(μ(z)-r(z))]+J_z(t,y,z,1)g(z)+1/2J_yy(t,y,z,1)l(t)^2σ^2(z)+ 1/2J_zz(t,y,z,1)β^2+J_yz(t,y,z,1)βρσ(z)l(t)}+sup_a≥0{c^(a)J_y(t,y,z,1)+λ E[J(t,y-min(X_1,a),z,1)-J(t,y,z,1)]}
with terminal condition J(T,y,z,1)=U(y). In order to obtain a linear PDE, we consider in this work only the case where the correlation coefficient equals zero (ρ=0).
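The compensation step in the proof above replaces the jump integrals by their predictable compensators. As a minimal sanity check of this mechanism, the following Monte Carlo sketch verifies that the compensated retained-claim sum has mean zero; the exponential claim law and all numbers are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check (illustrative parameters): the compensated retained-claim sum
# sum_{i<=N_t} min(X_i, a) - lam * t * E[min(X_1, a)] has mean zero, which is the
# mechanism behind the compensation step above.
rng = np.random.default_rng(0)
lam, b, a, t, n_paths = 3.0, 2.0, 1.0, 2.0, 100_000
E_min = (1 - np.exp(-b * a)) / b        # E[min(X, a)] = int_0^a Fbar(x) dx for Exp(b)
N = rng.poisson(lam * t, n_paths)
S = np.array([np.minimum(rng.exponential(1 / b, k), a).sum() for k in N])
print(S.mean() - lam * t * E_min)       # approximately 0, up to Monte Carlo error
```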
In addition to Hypothesis <ref>, we assume the following:
1. r(z)=r is constant;
2. g is uniformly Lipschitz and bounded;
3. (μ(z)-r)^2/σ^2(z) is bounded with a bounded first derivative.

Due to the form of the utility function, we conjecture the following function as a solution to the HJB equation (<ref>):
f(t,y,z)=J(t,y,z,1)=-ξ(t,z)exp{-α ye^r(T-t)},
where ξ(t,z) is defined below as a solution to a Cauchy problem. From (<ref>), we have:
f_t(t,y,z)=(-ξ_t-α yrξ e^r(T-t))exp{-α ye^r(T-t)},
f_y(t,y,z)=αξ e^r(T-t)exp{-α ye^r(T-t)},
f_yy(t,y,z)=-α^2ξ e^2r(T-t)exp{-α ye^r(T-t)},
f_z(t,y,z)=-ξ_z exp{-α ye^r(T-t)},
f_zz(t,y,z)=-ξ_zzexp{-α ye^r(T-t)},
and
E[f(t,y-min(X_1,a),z)-f(t,y,z)]=-ξα e^r(T-t)exp{-α ye^r(T-t)}∫_0^aexp{α xe^r(T-t)}F̅(x)dx.
Then (<ref>) becomes:
0= -ξ_t-1/2β^2ξ_zz-g(z)ξ_z+sup_a≥0{c^(a)αξ e^r(T-t) -λξα e^r(T-t)∫_0^aexp{α xe^r(T-t)}F̅(x)dx}+sup_l∈ℝ{-1/2l^2σ^2(z)α^2ξ e^2r(T-t)+(μ(z)-r)lαξ e^r(T-t)}.
The first-order maximization conditions then give the maximizers
l^*(t,z)=(μ(z)-r)/ασ^2(z)e^-r(T-t), a^*(t)=e^-r(T-t)/αln(1+θ).
Now, substituting l^* and a^* of (<ref>) into (<ref>), we derive the following Cauchy problem:
{ 0=ξ_t+1/2β^2ξ_zz+g(z)ξ_z -([(η-θ)λμ_∞ +(1+θ)λ∫_0^a^*F̅(x)dx]α e^r(T-t)-λα e^r(T-t)∫_0^a^*exp{α xe^r(T-t)}F̅(x)dx +(μ(z)-r)^2/2σ^2(z))ξ,
ξ(T,z)=1. }

(Existence and Uniqueness Theorem) Assume that
∫_0^∞exp{8α xe^rT}dF(x)<∞, ∫_0^∞xexp{8α xe^rT}dF(x)<∞.
Then the Cauchy problem given by (<ref>) has a unique classical solution ξ̂, which satisfies the following conditions:
|ξ̂(t,z)|≤ C_1(1+|z|), |ξ̂_z(t,z)|≤ C_2(1+|z|),
where C_1 and C_2 are constants.

In order to prove this theorem, we first verify that the Cauchy problem given by (<ref>) satisfies the conditions of Theorem <ref> (see Appendix).

Step 1. Since β is constant, it is Lipschitz continuous and Hölder continuous, and the operator 1/2β^2∂^2_zz is uniformly elliptic. By Hypothesis <ref>, we know that g(z) is bounded and uniformly Lipschitz continuous. Now we prove that
h(t,z):=[(η-θ)λμ_∞+(1+θ)λ∫_0^a^*F̅(x)dx]α e^r(T-t) -λα e^r(T-t)∫_0^a^*exp{α xe^r(T-t)}F̅(x)dx + (μ(z)-r)^2/2σ^2(z) =: h_1(t)-h_2(t)+h_3(z)
is bounded and uniformly Hölder continuous in compact subsets of ℝ×[0,T]. By Hypothesis <ref>, it is easy to check that the last term h_3(z) is bounded. The first term h_1(t)=c^(a^*(t))α e^r(T-t) is bounded by (1+η)λμ_∞α e^rT, since c^(a)≤ c=(1+η)λμ_∞. To see that h_2(t) is bounded, observe that
α e^r(T-t)∫_0^a^*exp{α xe^r(T-t)}F̅(x)dx≤∫_0^∞(exp{α xe^rT}-1)dF(x),
so that
h_2(t)≤λ∫_0^∞exp{α xe^rT}dF(x)<∞
by assumption (<ref>). Thus h(t,z) is bounded.

Step 2. Now we prove that h(t,z) is uniformly Hölder continuous in compact subsets of ℝ×[0,T]. For h_1(t), its derivative is bounded on [0,T] (since F̅≤1 and a^*(t), da^*(t)/dt are bounded), so the mean value theorem yields, for all (t,t_0)∈[0,T]×[0,T],
|h_1(t)-h_1(t_0)|≤ C|t-t_0|
for a constant C depending only on the model parameters; hence h_1(t) is uniformly Lipschitz continuous. For h_2(t), the mean value theorem implies that there exists t_1∈[t_0,t] such that |h_2(t)-h_2(t_0)|=|h_2'(t_1)||t-t_0|, where, using exp{α a^*(t)e^r(T-t)}=1+θ,
h_2'(t)=-rh_2(t)-λα^2r e^2r(T-t)∫_0^a^*(t)xexp{α xe^r(T-t)}F̅(x)dx+λ r(1+θ)ln(1+θ)F̅(a^*(t)).
All three terms are bounded on [0,T] by Step 1 and assumptions (<ref>)-(<ref>), so that
|h_2(t)-h_2(t_0)|≤{r|h_2(t_1)|+λα^2re^2rT∫_0^∞ xexp{α xe^rT}F̅(x)dx+λ r(1+θ)ln(1+θ)}|t-t_0|<∞.
We get that h_2(t) is uniformly Lipschitz continuous on [0,T].
By Hypothesis <ref>, h^'_3(z) is bounded, so h_3(z) is uniformly Hölder continuous, i.e., for all (z,z_0)∈ℝ^2,
|h_3(z)-h_3(z_0)|≤ C|z-z_0|^1/2.
Then h(t,z) is uniformly Hölder continuous in compact subsets of ℝ×[0,T]. So the Cauchy problem (<ref>) has a unique solution ξ̂(t,z) which satisfies (<ref>) and (<ref>).

The next theorem relates the value function to the HJB equation (<ref>).

(Post-Default Strategy). If (<ref>) and (<ref>) are satisfied, then the value function (when h=1) defined by (<ref>) has the form
V(t,y,z,1)=-ξ̂(t,z)exp{-α ye^r(T-t)},
where ξ̂(t,z) is the unique solution of (<ref>), and
{ l^*(t,z)=μ(z)-r/ασ^2(z)e^-r(T-t), m^*(t)=0, a^*(t)=ln(1+θ)/αe^-r(T-t) }
is the optimal reinsurance-investment strategy.

We have already checked that
f(t,y,z)=-ξ̂(t,z)exp{-α ye^r(T-t)}
solves the HJB equation (<ref>). To prove that f(t,y,z) is the true value function, we verify that assumptions (<ref>)-(<ref>) of Theorem <ref> are satisfied by f(t,y,z).

Step 1. We first consider the case r=0. Let π∈𝒜 be an admissible strategy; then:
∫_0^∞𝔼|f(t,Y_t^π-min(x,a),Z_t)-f(t,Y_t^π,Z_t)|^2dF(x)
=∫_0^a𝔼|-ξ̂ e^-α(Y_t^π-x)+ξ̂ e^-α Y_t^π|^2dF(x)+∫_a^∞𝔼|-ξ̂ e^-α(Y_t^π-a)+ξ̂ e^-α Y_t^π|^2dF(x)
=∫_0^a(e^α x-1)^2dF(x)𝔼[ξ̂^2(t,Z_t)exp{-2α Y_t^π}] +∫_a^∞(e^α a-1)^2dF(x)𝔼[ξ̂^2(t,Z_t)exp{-2α Y_t^π}].
To get condition (<ref>), we need only obtain an estimate of
𝔼[ξ̂^2(t,Z_t)exp{-2α Y_t^π}].
We observe that
𝔼[ξ̂^2(t,Z_t)exp{-2α Y_t^π}]≤ C_1^2𝔼[(1+|Z_t|)^2exp{-2α Y_t^π}] ≤ C_1^2{𝔼[(1+|Z_t|)^4]}^1/2{𝔼[exp{-4α Y_t^π}]}^1/2,
and by Theorem A.2 in Badaoui and Fernández (2013) <cit.>, 𝔼(sup_0≤ t≤ TZ_t^4)≤ C_2(1+|z|^4). Since (1+|z|)^2≤2(1+z^2), we get that
{𝔼[(1+|Z_t|)^4]}^1/2≤{8𝔼[1+Z_t^4]}^1/2≤2√(2)(1+C_2(1+|z|^4))^1/2.
From (<ref>) we have
𝔼[exp(-4α Y_t)]≤𝔼[exp{-4α∫_0^tl(s)σ(Z_s)dW_1s +4α∑_i=1^N_tmin(X_i,a)}]=𝔼[exp{1/2L_t +16α^2∫_0^tl^2(s)σ^2(Z_s)ds +4α∑_i=1^N_tmin(X_i,a)}] ≤ e^16α^2C_4𝔼[exp{1/2L_t +4α∑_i=1^N_tmin(X_i,a)}] ≤ e^16α^2C_4{𝔼[exp{L_t}]}^1/2{𝔼[exp{8α∑_i=1^N_tmin(X_i,a)}]}^1/2,
where C_4 is a bound for ∫_0^tl^2(s)σ^2(Z_s)ds, which is finite for admissible strategies, and
L_t=-8α∫_0^tl(s)σ(Z_s)dW_1s -32α^2∫_0^tl^2(s)σ^2(Z_s)ds.
Since exp{L_t} is a martingale, we obtain:
𝔼[exp(-4α Y_t)]≤ e^16α^2C_4{𝔼[exp{8α∑_i=1^N_tmin(X_i,a)}]}^1/2= e^16α^2C_4exp{λ t/2(∫_0^ae^8α xdF(x)+e^8α aF̅(a)-1)}<∞,
which proves (<ref>).

Step 2. In order to prove conditions (<ref>) and (<ref>), we observe that
𝔼|f_y(s,Y_s^π,Z_s)|^2≤ C_5^2𝔼[(1+|Z_s|)^2exp{-2α Y_s^π}]
and
𝔼|f_z(s,Y_s^π,Z_s)|^2≤ C_6^2𝔼[(1+|Z_s|)^2exp{-2α Y_s^π}].
Then, by the same arguments as above, we get conditions (<ref>) and (<ref>).

For the case in which the interest rate r≠0, let Y̅_t^π=e^r(T-t)Y_t^π. An application of Itô's formula shows that Y̅_t^π satisfies the following SDE:
dY̅_t^π= e^r(T-t)[(η-θ)λμ_∞+(1+θ)λ∫_0^aF̅(x)dx+(μ(Z_t)-r)l(t)]dt+e^r(T-t)l(t)σ(Z_t)dW_1t-e^r(T-t)d(∑_i=1^N_tmin(X_i,a)),
and the result can be derived in a similar way as in the first part of the proof.

§.§ Period before default

In this subsection, we focus on the pre-default case. When h=0, the HJB equation (<ref>) transforms into
0= J_t(t,y,z,0)+sup_π∈𝒜{[r(z)y+l(t)(μ(z)-r(z))+c^(a)+m(t)δ]J_y(t,y,z,0)+J_z(t,y,z,0)g(z)+1/2J_yy(t,y,z,0)l(t)^2σ(z)^2+ 1/2J_zz(t,y,z,0)β^2+J_yz(t,y,z,0)βρσ(z)l(t)+λ E[J(t,y-min(X_1,a),z,0)-J(t,y,z,0)]+(J(t,y-m(t)ζ,z,1)-J(t,y,z,0))h^P}
with terminal condition J(T,y,z,0)=U(y). According to Fleming and Soner (1993), if the optimal value function V(t,y,z,0)∈ C^1,2,2([0,T]×ℝ×ℝ), then V satisfies the HJB equation (<ref>).
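Before turning to the pre-default case, the following sketch checks the post-default first-order conditions numerically. The parameter values and the exponential claim law are illustrative assumptions; the check confirms that a^* maximizes the retention part of the post-default HJB (the other controls are the closed forms l^* above and m^*=0).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Illustrative check that a*(t) = e^{-r(T-t)} ln(1+theta)/alpha maximizes the
# retention term of the post-default HJB (Exp(b) claims; hypothetical values).
r, alpha, theta, eta, lam, b, Tmt = 0.04, 0.2, 8/3, 7/3, 3.0, 2.0, 1.0
c = alpha * np.exp(r * Tmt)                        # c = alpha e^{r(T-t)}
Fbar = lambda x: np.exp(-b * x)                    # survival function of Exp(b)

def retention_term(a):                             # c^(a) - lam int_0^a e^{cx} Fbar dx
    ca = (eta - theta) * lam / b + (1 + theta) * lam * quad(Fbar, 0, a)[0]
    return ca - lam * quad(lambda x: np.exp(c * x) * Fbar(x), 0, a)[0]

res = minimize_scalar(lambda a: -retention_term(a), bounds=(1e-6, 10.0), method="bounded")
a_star = np.exp(-r * Tmt) * np.log(1 + theta) / alpha
print(res.x, a_star)                               # agree, since e^{c a*} = 1 + theta
```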
To solve the HJB equation (<ref>), take as a trial solution
f̅(t,y,z)=J(t,y,z,0)=-ξ̅(t,z)exp{-α ye^r(T-t)},
with ξ̅(T,z)=1. Then we have:
f̅_t(t,y,z)=(-ξ̅_t-α yrξ̅ e^r(T-t))exp{-α ye^r(T-t)},
f̅_y(t,y,z)=αξ̅ e^r(T-t)exp{-α ye^r(T-t)},
f̅_yy(t,y,z)=-α^2ξ̅ e^2r(T-t)exp{-α ye^r(T-t)},
f̅_z(t,y,z)=-ξ̅_z exp{-α ye^r(T-t)},
f̅_zz(t,y,z)=-ξ̅_zzexp{-α ye^r(T-t)},
and
E[f̅(t,y-min(X_1,a),z)-f̅(t,y,z)]=-ξ̅α e^r(T-t)exp{-α ye^r(T-t)}∫_0^aexp{α xe^r(T-t)}F̅(x)dx,
(f(t,y-m(t)ζ,z)-f̅(t,y,z))h^P=-ξ̂(t,z)exp{-α(y-m(t)ζ)e^r(T-t)}h^P +ξ̅(t,z)exp{-α ye^r(T-t)}h^P,
where ξ̂ is the unique classical solution of the Cauchy problem (<ref>). Substituting the formulas (<ref>)-(<ref>) into (<ref>), with ρ=0 we have
0= -ξ̅_t-1/2β^2ξ̅_zz-g(z)ξ̅_z +(η-θ)λμ_∞αξ̅e^r(T-t)+sup_l{(μ(z)-r)αξ̅ e^r(T-t)l-1/2α^2ξ̅e^2r(T-t)σ^2(z)l^2}+sup_m{mδαξ̅ e^r(T-t)+(ξ̅-e^α mζ e^r(T-t)ξ̂)h^P}+sup_a{(1+θ)λ∫_0^aF̅(x)dx αξ̅ e^r(T-t)-λαξ̅e^r(T-t)∫_0^aexp{α xe^r(T-t)}F̅(x)dx}.
Then, by the first-order conditions for a regular interior maximum in (<ref>), we obtain
{ l^*(t,z)=μ(z)-r/ασ^2(z)e^-r(T-t),
m^*(t,z)=ln(1/Δ)+lnξ̅-lnξ̂/αζe^-r(T-t),
a^*(t)=ln(1+θ)/αe^-r(T-t), }
where ξ̂ is the unique classical solution of the Cauchy problem (<ref>).

We now insert (<ref>) into (<ref>), thereby obtaining
0= ξ̅_t+1/2β^2ξ̅_zz+g(z)ξ̅_z -h^P/Δξ̅lnξ̅-{[(η-θ)μ_∞+(1+θ)∫_0^a^*(t)F̅(x)dx]λα e^r(T-t)-λα e^r(T-t)∫_0^a^*(t)exp{α xe^r(T-t)}F̅(x)dx+(μ(z)-r)^2/2σ^2(z) +(1-1/Δ+1/Δln1/Δ)h^P -h^P/Δlnξ̂}ξ̅.
We let
M(t,z) =[(η-θ)μ_∞+(1+θ)∫_0^a^*(t)F̅(x)dx]λα e^r(T-t) -λα e^r(T-t)∫_0^a^*(t)exp{α xe^r(T-t)}F̅(x)dx+(μ(z)-r)^2/2σ^2(z) +(1-1/Δ+1/Δln1/Δ)h^P -h^P/Δlnξ̂ =h(t,z)+I-h^P/Δû,
where h(t,z) is defined in the proof of Theorem <ref> and we set I:=(1-1/Δ+(1/Δ)ln(1/Δ))h^P and û:=lnξ̂. Then, according to Hypothesis <ref>, M(t,z) is bounded, and (<ref>) becomes
0= ξ̅_t+1/2β^2ξ̅_zz+g(z)ξ̅_z -h^P/Δξ̅lnξ̅-M(t,z)ξ̅.
In order to solve this PDE, we make the variable substitution u̅=lnξ̅; then u̅(T,z)=0 and we have
ξ̅=e^u̅, ξ̅_t=u̅_te^u̅, ξ̅_z=u̅_ze^u̅, ξ̅_zz=(u̅_z^2+u̅_zz)e^u̅.
Substituting the formulas (<ref>) into (<ref>), we get
{ 0=u̅_t+1/2β^2(u̅_zz+u̅_z^2)+g(z)u̅_z- h^P/Δu̅-M(t,z),
u̅(T,z)=0. }
Eq. (<ref>) is, after time reversal, a Cauchy initial value problem (CIVP). Using the same transform û=lnξ̂, the Cauchy problem (<ref>) becomes
{ 0=û_t+1/2β^2(û_zz+û_z^2)+g(z)û_z-h(t,z),
û(T,z)=0. }

In order to solve the CIVP (<ref>), note that the technical complications in the quasi-linear parabolic PDE (<ref>) are generated by the quadratic growth in the gradient. Due to this nonlinearity, we consider the so-called super-sub solution method as in Birge, Bo and Capponi (2016) (see Bebernes and Schmitt (1977) and Bebernes and Schmitt (1979) for the general theory in the parabolic case), and establish an ordered pair of lower and upper solutions to the CIVP (<ref>). The definition of lower and upper solutions to the CIVP (<ref>) is given as follows (see also Bebernes and Schmitt (1979) and Birge, Bo and Capponi (2016)).

Let
Lυ(t,z)=υ_t+1/2β^2υ_zz+g(z)υ_z- h^P/Δυ,
G(t,z,υ,p)=-1/2β^2p^2+M(t,z).
A continuous function φ:(0,T)×ℝ→ℝ is called a lower solution of the CIVP (<ref>) if φ(T,z)≤0 for z∈ℝ, and for every (t_0,z_0)∈(0,T)×ℝ there exists an open neighborhood 𝒪 of (t_0,z_0) such that for (t,z)∈𝒪∩((0,T)×ℝ),
Lφ≥ G(t,z,φ,φ_z).
If in the above expression the inequality sign is reversed, then φ is called an upper solution of the CIVP (<ref>). Let φ̅ and φ be an upper and a lower solution, respectively. If φ(t,z)≤φ̅(t,z) for all (t,z)∈[0,T]×ℝ, we call (φ,φ̅) an ordered pair of lower and upper solutions of the CIVP (<ref>). We next construct lower and upper solutions to the CIVP (<ref>).
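Before constructing them, we pause for two quick numerical checks of the ingredients just introduced: the sign of the constant I, which will order the lower and upper solutions below, and the first-order condition defining m^*. All inputs are hypothetical stand-ins (in particular, the values of ξ̅ and ξ̂ below are not solutions of the actual PDEs).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# First check: I = h^P (1 - 1/Delta + (1/Delta) ln(1/Delta)) >= 0 on Delta in (0, 1].
hP = 0.25
Delta_grid = np.linspace(0.01, 1.0, 200)
I_grid = hP * (1 - 1/Delta_grid + (1/Delta_grid) * np.log(1/Delta_grid))
assert I_grid.min() >= -1e-12          # I >= 0, with equality only at Delta = 1

# Second check: m* maximizes the sup_m bracket of the pre-default HJB.
r, alpha, zeta, Delta, Tmt = 0.04, 0.2, 0.4, 0.25, 1.0
xi_bar, xi_hat = 0.8, 0.9              # hypothetical stand-ins for xi_bar, xi_hat
delta = zeta * hP / Delta              # delta = zeta h^Q with h^Q = h^P / Delta
e = np.exp(r * Tmt)

def bond_term(m):                      # m*delta*alpha*xi_bar*e^{r(T-t)} + (...)h^P
    return m*delta*alpha*xi_bar*e + (xi_bar - np.exp(alpha*m*zeta*e)*xi_hat)*hP

res = minimize_scalar(lambda m: -bond_term(m), bounds=(0.0, 50.0), method="bounded")
m_star = (np.log(1/Delta) + np.log(xi_bar/xi_hat)) / (alpha*zeta) * np.exp(-r*Tmt)
print(res.x, m_star)                   # should coincide up to solver tolerance
```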
In Theorem <ref>, we have already proven that ξ̂ is the nonnegative classical solution of the Cauchy problem (<ref>), so û=lnξ̂ is a classical solution of the CIVP (<ref>). Let
φ̅(t,z)=û(t,z);
then we have
Lφ̅=φ̅_t+1/2β^2φ̅_zz+g(z)φ̅_z-h^P/Δφ̅ =-1/2β^2φ̅_z^2+M(t,z)-I,
G(t,z,φ̅,φ̅_z)=-1/2β^2φ̅_z^2+M(t,z).
Since 1-x≤ e^-x for any real number x, we get that I≥0, hence Lφ̅≤ G(t,z,φ̅,φ̅_z), and φ̅ is an upper solution of the CIVP (<ref>).

Let
φ(t,z)=û(t,z)-Δ/h^PI;
then
Lφ=-1/2β^2û_z^2+M(t,z),
G(t,z,φ,φ_z)=-1/2β^2û_z^2+M(t,z),
so we have Lφ= G(t,z,φ,φ_z) and φ(T,z)≤0; it follows that φ is a lower solution of the CIVP (<ref>). Moreover, since I≥0, (φ,φ̅) is an ordered pair of lower and upper solutions of the CIVP (<ref>). We are now ready to give the main result of the paper, which establishes the existence of classical solutions to the CIVP (<ref>).

(Existence Theorem) Suppose (<ref>), (<ref>) and Hypotheses <ref>-<ref> are satisfied. Then there exists a classical solution ũ to the CIVP (<ref>). Moreover, it holds that
φ(t,z)≤ũ(t,z)≤φ̅(t,z),
where φ̅ and φ are defined in (<ref>) and (<ref>), respectively. Additionally, the Cauchy problem given by (<ref>) has a classical solution ξ̃, which satisfies the following conditions:
|ξ̃(t,z)|≤ C_7(1+|z|), |ξ̃_z(t,z)|≤ C_8(1+|z|),
where C_7 and C_8 are constants.

We follow the proof of Theorem 4.2 of Birge, Bo and Capponi (2016). From the above analysis we know that (φ,φ̅) is an ordered pair of lower and upper solutions of the CIVP (<ref>). Next, if ũ is a classical solution to the CIVP (<ref>), then, using an invariance result (see, e.g., Lemma 1 of Bebernes and Schmitt (1979)), it follows that ũ(t,z)∈[φ(t,z),φ̅(t,z)] for all (t,z)∈[0,T]×ℝ. Let R>0 be an arbitrary constant and B_R:={q∈ℝ: |q|<R}. Then, for all υ∈[φ(t,z),φ̅(t,z)] and (t,z)∈[0,T]×B̅_R, we obtain that
|G(t,z,υ,p)| ≤1/2β^2p^2+|h(t,z)|+I+h^P/Δ|û(t,z)|≤ K_R(1+|p|^2),
where K_R>0 is a generic constant which depends on R. This shows that the coefficient G admits quadratic growth in p. However, G fails to satisfy a Nagumo-type condition. (See Theorem 2 of Bebernes and Schmitt (1979), where this condition is treated: it is required that |G(t,z,υ,p)|≤Φ(|p|) for some positive continuous nondecreasing function Φ such that lim_s→∞s^2/Φ(s)=∞. In our case Φ(s)=s^2 is not admissible, given that lim_s→∞s^2/Φ(s)=1.) Hence, Theorem 3 of Bebernes and Schmitt (1979) is not directly applicable to our case. To overcome this, we adopt an approximation technique used in Loc and Schmitt (2012), which extends the Nagumo conditions to Bernstein-Nagumo conditions. The latter cover the quadratic growth condition of G in p given in Eq. (<ref>). As in Loc and Schmitt (2012), for k∈ℕ we define a truncated function h_k(p) acting on p∈ℝ by
h_k(p)={ p, if |p|≤ k; kp/|p|, if |p|>k. }
Then we consider the following PDE:
(u_k)_t+1/2β^2(u_k)_zz+g(z)(u_k)_z-h^P/Δu_k-G_k(t,z,u_k,(u_k)_z)=0,
where G_k(t,z,υ,p):=-1/2β^2h_k(p)^2+M(t,z). It can easily be seen that, for each k∈ℕ and R>0, G_k(t,z,υ,p) satisfies the Nagumo growth condition in p required by Theorem 3 of Bebernes and Schmitt (1979), for all υ∈[φ(t,z),φ̅(t,z)] with (t,z)∈[0,T]×B̅_R. Then we can apply Theorem 3 of Bebernes and Schmitt (1979) and deduce that Eq. (<ref>) admits a solution ũ_k(t,z), (t,z)∈[0,T]×ℝ, in the classical sense for each k∈ℕ. Notice that G_k(t,z,υ,p)→ G(t,z,υ,p) pointwise as k→∞. Then we can extract a subsequence ũ_k_l(t,z) which converges uniformly on compact subsets of [0,T]×ℝ to a solution of the CIVP (<ref>). Moreover, the limit of the above subsequence ũ_k_l(t,z) also lies in [φ(t,z),φ̅(t,z)] for all (t,z)∈[0,T]×ℝ.
We denote the limit by ũ(t,z) and set ξ̃(t,z)=e^ũ(t,z). From (<ref>), we know that
e^-Δ/h^PIξ̂=e^φ(t,z)≤ξ̃(t,z)≤ e^φ̅(t,z)=ξ̂.
This completes the proof of the theorem.

(Pre-Default Strategy). If (<ref>) and (<ref>) are satisfied, then the value function (when h=0) defined by (<ref>) has the form
V(t,y,z,0)=-ξ̃(t,z)exp{-α ye^r(T-t)}.
The optimal investment strategy is given by π̃_t^*=π^*(t,Z_t-), where the optimal feedback control function is given as follows:
{ l^*(t,z)=μ(z)-r/ασ^2(z)e^-r(T-t),
m^*(t,z)=lnξ̃(t,z)-lnξ̂(t,z)+ln(1/Δ)/αζe^-r(T-t),
a^*(t)=ln(1+θ)/αe^-r(T-t), }
where ξ̂ is the unique solution of the Cauchy problem (<ref>) and ξ̃ is the solution of (<ref>) with terminal condition ξ̃(T,z)=1.

The proof parallels the post-default case. We have already checked that
J(t,y,z,0)=f̅(t,y,z)=-ξ̃(t,z)exp{-α ye^r(T-t)}
solves the HJB equation (<ref>). To prove that f̅(t,y,z) is the true value function, we verify that assumptions (<ref>)-(<ref>) of Theorem <ref> are satisfied by f̅(t,y,z).

Step 1. We consider the case in which r=0. Let π∈𝒜 be an admissible strategy; then:
∫_0^∞𝔼|f̅(t,Y_t^π-min(x,a),Z_t)-f̅(t,Y_t^π,Z_t)|^2dF(x)
=∫_0^a𝔼|-ξ̃ e^-α(Y_t^π-x)+ξ̃ e^-α Y_t^π|^2dF(x)+∫_a^∞𝔼|-ξ̃ e^-α(Y_t^π-a)+ξ̃ e^-α Y_t^π|^2dF(x)
=∫_0^a(e^α x-1)^2dF(x)𝔼[ξ̃^2(t,Z_t)exp{-2α Y_t^π}] +∫_a^∞(e^α a-1)^2dF(x)𝔼[ξ̃^2(t,Z_t)exp{-2α Y_t^π}].
To get condition (<ref>), we need only obtain an estimate of
𝔼[ξ̃^2(t,Z_t)exp{-2α Y_t^π}].
From (<ref>) we have
𝔼[exp(-4α Y_t^π)]≤𝔼[exp{-4α∫_0^tl(s)σ(Z_s)dW_1s +4α∫_0^t m(s)(1-H_s)ζ dM_s^P+4α∑_i=1^N_tmin(X_i,a)}].
By Step 1 in the proof of Theorem <ref>, we only need to estimate
𝔼exp{4α∫_0^t m(s)(1-H_s)ζ dM_s^P}.
Indeed, since m(s)≥0 for the optimal strategy (as shown below),
𝔼[exp(∫_0^t m(s)(1-H_s)ζ dM_s^P)] ≤𝔼[exp(∫_0^t m(s)(1-H_s)ζ dH_s)] ≤𝔼[exp(∫_0^t m(s)ζ dH_s)] ≤exp{∫_0^t(e^m(s)ζ-1)h^Pds}.
From (<ref>) in Theorem <ref>, we know that
1-Δ-ln(1/Δ)=-Δ/h^PI≤ũ-û≤0.
Then we have the following lower and upper bounds for
m^*(t,z) =lnξ̃(t,z)-lnξ̂(t,z)+ln(1/Δ)/αζe^-r(T-t) =ũ-û+ln(1/Δ)/αζe^-r(T-t):
0≤1-Δ/αζe^-r(T-t) ≤ m^*(t,z)≤ln(1/Δ)/αζe^-r(T-t),
which proves (<ref>).

Step 2. This is the same as Step 2 in the proof of Theorem <ref>, which proves (<ref>) and (<ref>).

Step 3. By Lemma <ref>, we know that
J(τ_i∧ T, Y^π^*_τ_i∧ T-m^*(τ_i∧ T)ζ, Z_τ_i∧ T, 1-H_τ_i∧ T)-J(τ_i∧ T, Y^π^*_τ_i∧ T, Z_τ_i∧ T, H_τ_i∧ T)
is uniformly integrable, which proves (<ref>).

Let τ_i be the exit time of (Y_t,Z_t,H_t) from the open set M_i, where M_i⊂ M=[0,∞)×[0,∞)×{0,1} is such that M_i⊂ M_i+1⊂ M, i∈ℕ^+, and M=∪_i M_i. Then we have
sup_iE[|J(τ_i∧ T, Y^π^*_τ_i∧ T-m^*(τ_i∧ T)ζ, Z_τ_i∧ T, 1-H_τ_i∧ T)|^2]<∞, i∈ℕ^+,
sup_iE[|J(τ_i∧ T, Y^π^*_τ_i∧ T, Z_τ_i∧ T, H_τ_i∧ T)|^2]<∞, i∈ℕ^+,
i.e.,
J(τ_i∧ T, Y^π^*_τ_i∧ T-m^*(τ_i∧ T)ζ, Z_τ_i∧ T, 1-H_τ_i∧ T)-J(τ_i∧ T, Y^π^*_τ_i∧ T, Z_τ_i∧ T, H_τ_i∧ T)
is uniformly integrable.

In view of Eq. (<ref>), the wealth process associated with the strategy π^* is
Y^π^*_t =y+∫_0^t[r(Z_s)Y^π^*_s+(μ(Z_s)-r(Z_s))l^*(s)+c^(a^*(s))+(1-H_s)m^*(s)δ(1-Δ)]ds+∫_0^t l^*(s)σ(Z_s)dW_1s-∫_0^t m^*(s)(1-H_s)ζ dM^P_s-∑_i=1^N_tmin(X_i,a^*(T_i)).
Let Y̅^*_t=e^-rtY^π^*_t. An application of Itô's formula (absorbing the dM^P compensator into the drift, so that the bond contributes a drift m^*δ and a jump -m^*ζ dH) leads to
Y̅^*_t =y+∫_0^te^-rsdY^π^*_s-r∫_0^te^-rsY^π^*_sds
=y+∫_0^te^-rs[(μ(Z_s)-r)l^*(s)+c^(a^*(s))+(1-H_s)m^*(s)δ]ds+∫_0^t e^-rsl^*(s)σ(Z_s)dW_1s-∫_0^t e^-rsm^*(s)(1-H_s)ζ dH_s-∫_0^t e^-rsd(∑_i=1^N_smin(X_i,a^*(s)))
=y+∫_0^t[e^-rT(μ(Z_s)-r)^2/ασ^2(Z_s)+e^-rsc^(a^*(s))+(1-H_s)e^-rTlnξ̃(s,Z_s)-lnξ̂(s,Z_s)+ln(1/Δ)/αζδ]ds+∫_0^t e^-rTμ(Z_s)-r/ασ(Z_s)dW_1s-∫_0^t e^-rTlnξ̃(s,Z_s)-lnξ̂(s,Z_s)+ln(1/Δ)/αζ(1-H_s)ζ dH_s-∑_i=1^N_tmin(e^-rT_iX_i,e^-rT_ia^*(T_i)).
For the case H_t=0, we have
J(s,Y_s^π^*-m^*(s)ζ,Z_s,1)=-1/Δξ̃(s,Z_s)exp{-α Y_s^π^*e^r(T-s)},
J(s,Y_s^π^*,Z_s,0)=-ξ̃(s,Z_s)exp{-α Y_s^π^*e^r(T-s)},
where the first identity follows from the formula for m^*, since exp{α m^*(s)ζ e^r(T-s)}=ξ̃/(Δξ̂). Then we need only obtain estimates of
𝔼[J^2(s,Y_s^π^*-m^*(s)ζ,Z_s,1)] =(1/Δ)^2𝔼[ξ̃^2(s,Z_s)exp{-2α Y_s^π^*e^r(T-s)}]
and
𝔼[J^2(s,Y_s^π^*,Z_s,0)] =𝔼[ξ̃^2(s,Z_s)exp{-2α Y_s^π^*e^r(T-s)}];
by the same argument as in Step 1 of the proof of Theorem <ref>, we get the result. Similarly, we have the same result for the case H_t=1. Then, by Corollary 7.8 in <cit.>, we conclude that
J(τ_i∧ T, Y^π^*_τ_i∧ T-m^*(τ_i∧ T)ζ, Z_τ_i∧ T, 1-H_τ_i∧ T)-J(τ_i∧ T, Y^π^*_τ_i∧ T, Z_τ_i∧ T, H_τ_i∧ T)
is uniformly integrable.

§.§ Numerical results

In this section, we solve the Cauchy problem (<ref>) and the CIVP (<ref>) by the finite-difference method. We assume that the claims are exponentially distributed with parameter b, and that T<(1/r)ln(b/α); the latter guarantees α e^r(T-t)<b for all t∈[0,T]. The first step is to reduce the problems to a bounded domain, i.e., ℝ is replaced by [-d,d], d<∞, and artificial boundary conditions are added. The Cauchy problem (<ref>) to solve is then the following:
{ 0=ξ_t+1/2β^2ξ_zz+g(z)ξ_z-ξ{(μ(z)-r)^2/2σ^2(z) +α e^r(T-t)[(1+η)λμ_∞-λ/bexp{(1-(b/α)e^-r(T-t))ln(1+θ)}]-λαe^r(T-t)/α e^r(T-t)-b[exp{(α e^r(T-t)-b)e^-r(T-t)/αln(1+θ)}-1]},
ξ(z,T)=1, ∀ z∈[-d,d],
ξ(± d,t)=1, ∀ t∈[0,T]. }
From Friedman (1975), we know that the solution of this problem exists and is unique. The imposed boundary conditions give a good error estimate for large values of d. Now we discretize the problem in the domain A:=[-d,d]×[0,T]. A uniform grid on A is given by:
z_i=-d+(i-1)h, i=1,...,N, h=2d/(N-1),
t_j=(j-1)k, j=1,...,M, k=T/(M-1).
The space and time derivatives are discretized using finite differences as follows:
ξ_t(z_i,t_j)≃ξ(z_i,t_j)-ξ(z_i,t_j-k)/k,
ξ_z(z_i,t_j)≃ξ(z_i+h,t_j)-ξ(z_i-h,t_j)/2h,
ξ_zz(z_i,t_j)≃ξ(z_i+h,t_j)-2ξ(z_i,t_j)+ξ(z_i-h,t_j)/h^2.
We denote by ξ_i^j:=ξ(z_i,t_j) the solution on the discretized domain. Then, substituting the derivatives by the expressions given above, the problem becomes:
ξ_i^j-ξ_i^j-1/k +1/2β^2ξ_i+1^j-2ξ_i^j+ξ_i-1^j/h^2 +g(z_i)ξ_i+1^j-ξ_i-1^j/2h-ξ_i^j{(μ(z_i)-r)^2/2σ^2(z_i)+α e^r(T-t_j)[(1+η)λμ_∞ -λ/bexp{(1-(b/α)e^-r(T-t_j))ln(1+θ)}]-λαe^r(T-t_j)/α e^r(T-t_j)-b[exp{(α e^r(T-t_j)-b)e^-r(T-t_j)/αln(1+θ)}-1]}=0.
Then, for i=2,...,N-1 and j=2,...,M, ξ_i^j satisfies the following explicit scheme:
ξ_i^j-1= (1-kβ^2/h^2-k{(μ(z_i)-r)^2/2σ^2(z_i)+α e^r(T-t_j)[(1+η)λμ_∞ -λ/bexp{(1-(b/α)e^-r(T-t_j))ln(1+θ)}]-λαe^r(T-t_j)/α e^r(T-t_j)-b[exp{(α e^r(T-t_j)-b)e^-r(T-t_j)/αln(1+θ)}-1]})ξ_i^j+(kβ^2/2h^2+k/2hg(z_i))ξ_i+1^j +(kβ^2/2h^2-k/2hg(z_i))ξ_i-1^j.
The terminal condition is given by ξ_i^M=1 for all i=1,...,N, and the imposed boundary conditions are ξ_1^j=1 and ξ_N^j=1 for all j=1,...,M-1. Similarly, u_i^j satisfies the following explicit scheme (including the central-difference discretization of the quadratic gradient term u̅_z^2 of (<ref>)):
u_i^j-1= (1-kβ^2/h^2-kh^P/Δ)u_i^j +(kβ^2/2h^2+g(z_i)k/2h)u_i+1^j +(kβ^2/2h^2-g(z_i)k/2h)u_i-1^j+kβ^2/2(u_i+1^j-u_i-1^j/2h)^2-k{(μ(z_i)-r)^2/2σ^2(z_i)+α e^r(T-t_j)[(1+η)λμ_∞ -λ/bexp{(1-(b/α)e^-r(T-t_j))ln(1+θ)}]-λαe^r(T-t_j)/α e^r(T-t_j)-b[exp{(α e^r(T-t_j)-b)e^-r(T-t_j)/αln(1+θ)}-1]+(1-1/Δ+1/Δln1/Δ)h^P-h^P/Δlnξ_i^j},
and we have
ξ̅_i^j=exp{u_i^j}.
The terminal condition is given by u_i^M=0 for all i=1,...,N, and the imposed boundary conditions are u_1^j=0 and u_N^j=0 for all j=1,...,M-1. Our algorithm, given by the explicit scheme together with the terminal and boundary conditions, marches backward in time and is explicit in space; hence the numerical solution can be computed directly.
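A minimal Python implementation of the post-default scheme is sketched below. The factor drift g(z)=δ(κ-z) and the volatility σ(z)=e^z are the illustrative choices used in the examples; the parameter values follow the first numerical example of the next section.

```python
import numpy as np

# Sketch of the explicit backward scheme for the post-default Cauchy problem
# (exponential claims; g(z) = delta0*(kappa - z) and sigma(z) = e^z are the
# illustrative choices from the examples).
r, mu, delta0, kappa = 0.04, 0.3, 0.1, 1.0
lam, alpha, beta, b = 3.0, 0.02, 0.3, 2.0
d, T, mu_inf, eta, theta = 2.0, 5.0, 0.5, 7/3, 8/3
N, M = 401, 50001
h, k = 2*d/(N - 1), T/(M - 1)
z = -d + h*np.arange(N)
g, sig2 = delta0*(kappa - z), np.exp(2*z)

def coef(t):                           # the curly bracket of the xi-scheme
    c = alpha*np.exp(r*(T - t))        # c < b is guaranteed by T < (1/r) ln(b/alpha)
    term1 = c*((1 + eta)*lam*mu_inf
               - (lam/b)*np.exp((1 - (b/alpha)*np.exp(-r*(T - t)))*np.log(1 + theta)))
    term2 = lam*c/(c - b)*(np.exp((c - b)*np.exp(-r*(T - t))*np.log(1 + theta)/alpha) - 1)
    return (mu - r)**2/(2*sig2) + term1 - term2

xi = np.ones(N)                        # terminal condition xi(z, T) = 1
for j in range(M, 1, -1):              # march backward from t_M = T to t_1 = 0
    t = (j - 1)*k
    lap = (xi[2:] - 2*xi[1:-1] + xi[:-2])/h**2
    grad = (xi[2:] - xi[:-2])/(2*h)
    xi[1:-1] += k*(0.5*beta**2*lap + g[1:-1]*grad - coef(t)[1:-1]*xi[1:-1])
    xi[0] = xi[-1] = 1.0               # imposed boundary values
```

With these grid sizes, kβ^2/h^2=0.09<1, so the explicit scheme respects the usual CFL-type stability restriction. The pre-default scheme for u_i^j is analogous: it additionally stores lnξ_i^j from this first pass and adds the squared central difference for u̅_z^2.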
(The Value Functions) Suppose:
r=0.04, μ=0.3, σ(z)=e^z, δ=0.1, κ=1, λ=3, α=0.02, β=0.3, b=2, d=2, T=5, μ_∞=1/2, η=7/3, θ=8/3, h^P=0.25, Δ=0.25, ζ=0.4, N=401, M=50001.
Using the explicit schemes above and the relation ξ̅_i^j=exp{u_i^j}, we compute the value functions before and after the corporate bond defaults; the results are shown in FIGURE 1. From FIGURE 1, we conclude the following:
(1) The value function is decreasing in time t.
(2) The value function is increasing in wealth y, as expected from the form of the utility function.
(3) The pre-default value function clearly dominates the post-default one, which indicates that the insurance company can obtain additional profit by investing part of its surplus in the defaultable bond.
The behavior of the optimal strategies π^*(t)=(l^*(t),m^*(t),a^*(t)) is presented in FIGURE 2, and the conclusions are as follows:
(1) The amount invested in the risky asset is decreasing in z and increasing in t.
(2) The amount invested in the corporate bond is increasing in t; in z, it first decreases and then increases.
(3) The retention level of the excess-of-loss reinsurance is increasing in t.
To study the effects of the loss rate ζ and the default risk premium 1/Δ, we carry out a deeper numerical analysis.
(1) In FIGURE 3(a), the external factor first decreases and then increases the optimal bond investment. At the same time, the corporate bond investment is positively correlated with the default risk premium 1/Δ: the insurance company should invest a larger proportion of its assets in a corporate bond with a higher default risk premium.
(2) In FIGURE 3(b), the insurer invests less in the corporate bond when the loss rate is higher. In a nutshell, however, a change in ζ has only a small influence on the optimal investment in the corporate bond.

Suppose:
r=0.04, μ=0.3, σ(z)=e^z, δ=0.1, κ=1, λ=3, α=0.2, β=0.3, b=2, d=2, T=50, μ_∞=1/2, η=7/3, θ=8/3, h^P=0.25, Δ=0.25, ζ=0.4, N=401, M=50001.
FIGURE 4 shows the pre-default and the post-default situations. In the pictures, the insurance company puts most of its money into the defaultable corporate bond in pursuit of a higher profit.

(The Sensitivity of the Optimal Investment in the Corporate Bond) Assume T-t=1, α=0.5, r=0.04. We then evaluate the optimal strategy for 1/Δ∈[1,10] and ζ∈[0.1,1]. Fixing the remaining parameters, we compare different values of ζ and 1/Δ. The optimal investment in the corporate bond is
m^*(t)=ln(1/Δ)/αζe^-r(T-t).
The comparisons are presented in FIGURE 5, from which we can tell:
(1) The optimal corporate bond investment is positively related to the default risk premium in FIGURE 5(a): the insurance company invests a relatively larger amount of money in a corporate bond with a higher default risk premium.
(2) There is a negative relation between the loss rate and the optimal investment in FIGURE 5(b): the insurer reduces the investment in the corporate bond as the loss rate increases.
(3) If the risk premium satisfies 1/Δ=1, the insurance company no longer invests in the corporate bond. FIGURE 5(c) depicts the joint effect.

(The Effect of the Risk-Aversion Parameter on the Optimal Reinsurance Strategy) Set T=10 and r=0.04; the reinsurance strategy is governed by the risk-aversion parameter α of the exponential utility. For t∈[0,T]=[0,10], we adopt various values of α in order to compare the resulting optimal excess-of-loss reinsurance.
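The sensitivities of the closed-form strategies in this and the following example admit a direct numerical reproduction, sketched below. The script is illustrative only; it evaluates m^*, a^* and l^* (the last two are discussed in the next example) on the stated parameter ranges.

```python
import numpy as np

# Sketch reproducing the closed-form sensitivities behind FIGURES 5-7.
r = 0.04

# FIGURE 5: m*(t) = ln(1/Delta)/(alpha*zeta) e^{-r(T-t)}, with T - t = 1, alpha = 0.5.
alpha, Tmt = 0.5, 1.0
inv_Delta = np.linspace(1.0, 10.0, 50)
zeta = np.linspace(0.1, 1.0, 50)
D, Z = np.meshgrid(inv_Delta, zeta)
m_star = np.log(D) / (alpha * Z) * np.exp(-r * Tmt)
print(m_star[:, 0].max())              # the column 1/Delta = 1 vanishes: no bond position

# FIGURES 6-7: a*(t) = ln(1+theta)/alpha e^{-r(T-t)} and
# l*(t,z) = (mu(z)-r)/(alpha*sigma^2(z)) e^{-r(T-t)}, sigma(z) = e^z, T = 10.
theta, mu, T = 8/3, 0.3, 10.0
t = np.linspace(0.0, T, 101)
for alpha in (0.2, 0.5, 1.0):          # several risk-aversion levels
    a_star = np.log(1 + theta)/alpha * np.exp(-r*(T - t))
    print(alpha, a_star[0], a_star[-1])  # increasing in t, decreasing in alpha
alpha0 = 0.5
z = -2.0 + (4.0/9.0)*np.arange(10)     # the grid z_i = -a + (i-1)h with a = 2, n = 10
l_star = (mu - r)/(alpha0*np.exp(2*z)) * np.exp(-r*(T - t[:, None]))
```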
The optimal excess-of-loss retention level is
a^*(t)=ln(1+θ)/αe^-r(T-t).
The results are shown in FIGURE 6, from which we draw the following conclusions:
(1) From FIGURE 6(a), the optimal retention level is increasing in time t.
(2) As the parameter α grows, the retention level shrinks: a more risk-averse insurer purchases more excess-of-loss reinsurance in order to reduce its risk exposure.
(3) We also compare different values of the reinsurer's safety loading θ.
(4) From FIGURE 6(b), a larger θ leads to a larger retention level: when reinsurance is more expensive, the insurer retains more of the risk and purchases less reinsurance.

(The Effect of the Risk-Aversion Parameter on the Optimal Investment Strategy) We now discuss the relationship between the amount invested in the risky asset and the risk-aversion parameter α. Suppose T=10 and r=0.04, so that t∈[0,T]=[0,10]. The optimal amount is
l^*(t)=μ(z)-r/ασ^2(z)e^-r(T-t),
where the volatility is σ(z)=e^z on the grid z_i=-a+(i-1)h, h=2a/(n-1). We further set a=2 and n=10; the results are shown in FIGURE 7 and lead to the following conclusions:
(1) A longer time to maturity results in a smaller amount invested in the risky asset. Moreover, the invested amount decreases as α increases; consequently, an insurer with a large risk-aversion parameter α holds only a limited position in the risky market.
(2) FIGURE 7(b) shows the case in which the volatility is ignored (equivalently, taken to be the same for every z); the results differ markedly from the case with state-dependent volatility.
(3) As time t increases, more and more money is invested in the risky market. Once the volatility is taken into account, however, the insurer invests less over a longer horizon: a longer horizon carries larger volatility, more uncertainty and hence larger risk, so a smaller position is preferable for a stable income. Again, when the factor α increases, the amount invested in the market decreases.

§ ACKNOWLEDGMENTS

The authors would like to thank Professor Lijun Bo for his detailed guidance and instructive suggestions. N. Yao was supported by the Natural Science Foundation of China (11101313, 113713283).

§ APPENDIX

(Friedman, 1975). We consider the following Cauchy problem
{ u_t(x,t)+ℒ u(x,t)=f(x,t) in ℝ^n×[0,T),
u(x,T)=h(x) in ℝ^n, }
where ℒ is given by:
ℒ u=1/2∑_i,j=1^na_ij(x,t)u_x_ix_j+ ∑_i=1^nb_i(x,t)u_x_i+c(x,t)u.
If the Cauchy problem (<ref>) satisfies the following conditions:
1. The coefficients of ℒ are uniformly elliptic;
2. The functions a_ij, b_i are bounded in ℝ^n×[0,T] and uniformly Lipschitz continuous in (x,t) in compact subsets of ℝ^n×[0,T];
3. The functions a_ij are Hölder continuous in x, uniformly with respect to (x,t) in ℝ^n×[0,T];
4.
The function c(x,t) is bounded in ℝ^n×[0,T] and uniformly Hölder continuous in (x,t) in compact subsets of ℝ^n×[0,T];
5. f(x,t) is continuous in ℝ^n×[0,T], uniformly Hölder continuous in x with respect to (x,t), and |f(x,t)|≤ B(1+|x|^γ);
6. h(x) is continuous in ℝ^n and |h(x)|≤ B(1+|x|^γ), with γ>0;
then there exists a unique solution u of the Cauchy problem (<ref>) satisfying:
|u(x,t)| ≤ const(1+|x|^γ)
and
|u_x(x,t)| ≤ const(1+|x|^γ).
In the original theorem of Friedman (1975), the Cauchy problem is given by
{ v_t(x,t)-ℒ v(x,t)=f(x,t) in ℝ^n×[0,T),
v(x,0)=h(x) in ℝ^n. }
Letting v(x,T-t)=u(x,t), we obtain the Cauchy problem shown in (<ref>).

AHT Asmussen, S., Hojgaard, B., Taksar, M., Optimal risk control and dividend distribution policies: example of excess-of-loss reinsurance for an insurance corporation[J]. Finance and Stochastics, 2000, 4: 299-324.
BF Badaoui, M., Fernández, B., An optimal investment strategy with maximal risk aversion and its ruin probability in the presence of stochastic volatility on investment[J]. Insurance: Mathematics and Economics, 2013, 53: 1-13.
BG Bai, L.H., Guo, J.Y., Optimal proportional reinsurance and investment with multiple risky assets and no-shorting constraint[J]. Insurance: Mathematics and Economics, 2008, 42: 968-975.
BS1 Bebernes, J.W., Schmitt, K., Invariant sets and the Hukuhara-Kneser property for systems of parabolic partial differential equations[J]. Rocky Mountain J. Math., 1977, 7: 557-567.
BS2 Bebernes, J.W., Schmitt, K., On the existence of maximal and minimal solutions for parabolic partial differential equations[J]. Proceedings of the AMS, 1979, 73: 211-218.
BR Bielecki, T.R., Rutkowski, M., Credit Risk: Modeling, Valuation and Hedging[M]. Springer, 2002: 225-227.
BJ Bielecki, T.R., Jang, I., Portfolio optimization with a defaultable security[J]. Asia-Pacific Financial Markets, 2006, 13(2): 113-127.
BBC Birge, J.R., Bo, L.J., Capponi, A., Risk sensitive asset management and cascading defaults, revised and resubmitted to Mathematics of Operations Research, 2016.
BLW Bo, L., Li, X., Wang, Y., et al., Optimal investment and consumption with default risk: HARA utility[J]. Asia-Pacific Financial Markets, 2013, 20(3): 261-281.
BWY Bo, L., Wang, Y., Yang, X., An optimal portfolio problem in a defaultable market[J]. Advances in Applied Probability, 2010, 42(3): 689-705.
BS Browne, S., Optimal investment policies for a firm with a random risk process: exponential utility and minimizing the probability of ruin[J]. Mathematics of Operations Research, 1995, 20: 937-958.
CW Cao, Y.S., Wan, N.Q., Optimal proportional reinsurance and investment based on Hamilton-Jacobi-Bellman equation[J]. Insurance: Mathematics and Economics, 2009, 45: 157-162.
CF Capponi, A., Figueroa-López, J.E., Dynamic portfolio optimization with a defaultable security and regime-switching[J]. Mathematical Finance, 2014, 24(2): 207-249.
CH Castaneda, N., Hernandez, D., Optimal consumption-investment problems in incomplete markets with stochastic coefficients[J]. SIAM Journal on Control and Optimization, 2005, 44(4): 1322-1344.
CHL Christensen, J.H.E., Hansen, E., Lando, D., Confidence sets for continuous-time rating transition probabilities[J]. Journal of Banking and Finance, 2004, 28(11): 2575-2602.
CR Cox, J.C., Ross, S.A., The valuation of options for alternative stochastic processes[J]. Journal of Financial Economics, 1976, 3: 145-166.
DPS Duffie, D., Pedersen, L.H., Singleton, K.J., Modeling sovereign yield spreads: a case study of Russian debt[J].
The Journal of Finance, 2003, 58(1): 119-159.
DS Duffie, D., Singleton, K.J., Credit Risk: Pricing, Measurement, and Management[M]. Princeton University Press, 2012.
FHM Fernández, B., Hernández, D., Meda, A., Saavedra, P., An optimal investment strategy with maximal risk aversion and its ruin probability[J]. Mathematical Methods of Operations Research, 2008, 68: 159-179.
FS Fleming, W.H., Soner, H.M., Controlled Markov Processes and Viscosity Solutions[M]. Springer, Berlin, New York, 1993.
FPS Fouque, J.-P., Papanicolaou, G., Sircar, K.R., Derivatives in Financial Markets with Stochastic Volatility[M]. Cambridge University Press, 2000.
F Friedman, A., Stochastic Differential Equations and Applications[M]. Academic Press, 1975: 139-144.
FA Friedman, A., Partial Differential Equations of Parabolic Type[M]. 1983.
GYLZ Gu, M.D., Yang, Y.P., Li, S.D., Zhang, J.Y., Constant elasticity of variance model for proportional reinsurance and investment strategies[J]. Insurance: Mathematics and Economics, 2010, 46: 580-587.
H Heston, S.L., A closed-form solution for options with stochastic volatility with applications to bond and currency options[J]. Review of Financial Studies, 1993, 6: 327-343.
HP Hipp, C., Plum, M., Optimal investment for insurers[J]. Insurance: Mathematics and Economics, 2000, 27: 215-228.
HW Hull, J., White, A., The pricing of options on assets with stochastic volatilities[J]. The Journal of Finance, 1987, 42: 281-300.
IW Ikeda, N., Watanabe, S., Stochastic Differential Equations and Diffusion Processes[M]. North-Holland, 1989.
JP Jiao, Y., Pham, H., Optimal investment with counterparty risk: a default-density model approach[J]. Finance and Stochastics, 2011, 15(4): 725-753.
KFC Klebaner, F.C., Introduction to Stochastic Calculus with Applications[M]. Imperial College Press, London, 2005.
LY Liu, C.S., Yang, H., Optimal investment for an insurer to minimize its probability of ruin[J]. North American Actuarial Journal, 2004, 8: 11-31.
LL Lin, X., Li, Y.F., Optimal reinsurance and investment for a jump diffusion risk process under the CEV model[J]. North American Actuarial Journal, 2004, 15: 417-431.
LYC Liang, Z.B., Yuen, K.C., Cheung, K.C., Optimal reinsurance-investment problem in a constant elasticity of variance stock market for jump-diffusion risk model[J]. Applied Stochastic Models in Business and Industry, 2012, 28: 585-597.
LZL Li, Z.F., Zeng, Y., Lai, Y.Z., Optimal time-consistent investment and reinsurance strategies for insurers under Heston's SV model[J]. Insurance: Mathematics and Economics, 2012, 51: 191-203.
P Pham, H., Optimal stopping of controlled jump diffusion processes: a viscosity solution approach[J]. Journal of Mathematical Systems, Estimation, and Control, 1998, 8(1): 1-27.
PY Promislow, D.S., Young, V.R., Minimizing the probability of ruin when claims follow Brownian motion with drift[J]. North American Actuarial Journal, 2005, 9: 109-128.
RP Cont, R., Tankov, P., Financial Modelling with Jump Processes[M]. Chapman and Hall, 2003.
SS Stein, E.M., Stein, J.C., Stock price distribution with stochastic volatility: an analytic approach[J]. Review of Financial Studies, 1991, 4: 727-752.
YZ Yang, H., Zhang, L., Optimal investment for insurer with jump-diffusion risk process[J]. Insurance: Mathematics and Economics, 2005, 37: 615-634.
ZT Zeng, X.D., Taksar, M., A stochastic volatility model and optimal portfolio selection[J]. Quantitative Finance, 2003, 13: 1547-1558.
ZL Zeng, Y., Li, Z.F., Optimal time-consistent investment and reinsurance policies for mean–variance insurers[J]. Insurance: Mathematics and Economics, 2011, 49: 145–154.ZDY Zhu, H., Deng, C., Yue, S., et al. Optimal reinsurance and investment problem for an insurer with counterparty risk[J]. Insurance: Mathematics and Economics, 2015, 61: 242-254.ZRZ Zhao, H., Rong, X., Zhao, Y., Optimal excess-of-loss reinsurance and investment problem for an insurer with jump–diffusion risk process under the Heston model[J]. Insurance: Mathematics and Economics, 2013, 53: 504-514.
Quasimap counts and Bethe eigenfunctions
Mina Aganagic and Andrei Okounkov
========================================

We associate an explicit equivalent descendent insertion to any relative insertion in quantum K-theory of Nakajima varieties. This also serves as an explicit formula for off-shell Bethe eigenfunctions for general quantum loop algebras associated to quivers and gives the general integral solution to the corresponding quantum Knizhnik-Zamolodchikov and dynamical q-difference equations.

§ INTRODUCTION

§.§ Overview

§.§.§ The problem solved in this paper has a representation-theoretic side and a geometric side. In representation theory of quantum affine algebras, and its applications to exactly solvable models of mathematical physics, a very important role is played by certain q-difference equations. These are the quantum Knizhnik-Zamolodchikov equations (qKZ), see FR,EFK, and the corresponding commuting dynamical equations EV1,EV,FMTV,FTV,TV,TV3. A lot of research has been focused on solving these equations by integrals of Mellin-Barnes type, see e.g. EFK,Matsuo,Resh,TV1,TV2,TV4,TV5. Such integrals, in particular, give explicit formulas for Bethe eigenvectors in the stationary phase q→ 1 limit. Here we give a general integral solution for tensor products of evaluation representations of quantum affine Lie algebras associated to quivers as in <cit.>. These include, in particular, double loop algebras of the form , which are known under many different names and play a very important role in many branches of modern mathematical physics, see <cit.> for a detailed introduction and further references.

For us, these representation-theoretic problems are reflections of certain geometric questions about enumerative K-theory of quasimaps to Nakajima quiver varieties (see <cit.> for an introduction). In mathematical physics, Nakajima varieties appear in supersymmetric gauge theories as Higgs branches of moduli of vacua, and K-theoretic quasimap counts may be interpreted as Higgs branch computations of 3-dimensional supersymmetric indices[while the Mellin-Barnes integrals may be interpreted as the equivalent Coulomb branch computations, see e.g. <cit.> for further discussion.]. Nekrasov and Shatashvili NS1,NS2 were the first to make the connection between these indices and Bethe equations, see also <cit.>.

The actual problem solved here is to associate an explicit equivalent descendent insertion to any relative insertion in enumerative K-theory of quasimaps to Nakajima varieties, see below and <cit.> for an explanation of these terms. Our results are complementary to the recent important work of Smirnov <cit.>, who associates an equivalent relative insertion to any descendent insertion in terms of a certain graphical calculus and canonical tensors associated to the quantum group. Here we allow a wider supply of descendent insertions, and get a simple formula (with an arguably simpler proof) for a map going in the opposite direction.

For quivers of affine ADE type, quasimap counts compute the K-theoretic Donaldson-Thomas invariants of threefolds fibered in ADE surfaces[Those include local curves, that is, threefolds fibered in A_0 = ^2.].
Finding an equivalence between relative and descendent insertions in Donaldson-Thomas theories of threefolds is a well-known problem of crucial technical importance for the development of the theory, see <cit.> for an early discussion and PP1,PP2 for major further progress in cohomology. Our formulas are both more explicit and work in K-theory[Equivariant K-theory is similarly the natural setting of Smirnov's formulas <cit.>.].

§.§.§ Let  be a Lie algebra associated to a quiver with a vertex set I as in <cit.>. For example, modulo center,  is the corresponding simple Lie algebra for quivers of finite ADE type and =𝔤𝔩_ℓ for the cyclic quiver A_ℓ-1 with ℓ vertices. Extending the work of Nakajima <cit.>, tensor products of fundamental evaluation representations F_i(a), i∈ I, of the corresponding quantum loop algebra _ħ() may be realized geometrically using equivariant K-groups of Nakajima quiver varieties MO,OS.

Let X=(,) be a Nakajima variety indexed by dimension vectors ,∈^I and let  be a torus of automorphisms of X. It scales the canonical symplectic form ω on X and ħ = ∈ K_() is the deformation parameter in _ħ(). We set = ħ and assume that  contains the torus
⊃{[a_i1; ⋱; a_i_i ]}⊂∏ GL(W_i) ⊂(X)
acting on the framing spaces W_i of the quiver. A certain integral form of _ħ() acts by correspondences between equivariant K-theories of Nakajima varieties so that
K_(X) ⊗_K_()≅( ⊗_i∈ I⊗_j=1^_i F_i(a_i,j)) _=
where the weight is with respect to the Cartan subalgebra ⊂ acting by linear functions of  and .
The so-called Kähler moduli space is, in the case ofNakajima varieties, a certain toric compactification ⊃.With the identification (<ref>), the qKZ and dynamical equations become thequantum difference equations in enumerative K-theory of quasimapsto X pcmi,OS.These q-difference equations shift theequivariant variables a and the Kähler variables z by thefundamental weight q of the group _q = (^1,0,∞)that acts on the moduli spaces of quasimaps (X) ={f: ^1X}/ ≅by automorphismsof the domain, see <cit.> for an introduction.§.§.§While the natural evaluation map (X)f ↦ (f(0),f(∞)) ∈×only goes to the stack quotient=[ /G] ⊃/G=X, one can impose constraints on f or modify the moduli spaces to turnenumerative counts into correspondences on X, or correspondences betweenand X. Conditions imposed at 0,∞∈^1 are customary called insertions, just like insertions in functional integrals.K-theoretic counts of quasimaps with different insertions at 0,∞∈^1 give objects of different nature as functions of a, z, and other parameters.For certain insertions, we geta fundamentalsolutions of the quantum difference equations, while for otherinsertions we get integrals ofMellin-Barnes type. §.§.§ By an integral of Mellin-Barnes type we mean an integral of the formI_αβ(z,…) = ∫_γ⊂ T_G/W_G_α(x)𝐠_β(x)(x,z)∏ϕ(x^λ_i b_i)/ϕ(x^λ_i c_i) ∏dx_k/2π i x_k up to multiplicative shift[The exactform of this multiplicative shift, which is of no importancehere, is discussed in the Appendix.] in z, where * the integration is over a middle-dimensional cycle in the quotient of a torus T_G by a finite group W_G. Concretely,T_G⊂ G is a maximal torus of the group G in (<ref>),with Weyl group W_G. Geometrically, the coordinates on T_G/W_G are the characteristic classes of the universal bundles on X. Since these are known to span the K-theory of X <cit.>, we have a naturalembedding K_(X) @^(->^ι[r] × T_G/W_G[d]_π_ finite over the torusof equivariant parameters. The variables inincluding ħ and a are parameters in (<ref>) and the integral should be viewed as an integral in the fibers ofthe projection π_.* the cycle γ extracts the residues of the integrand at q-translates of the pole at the image of ι in (<ref>).* the function ϕ(y) = ∏_n=0^∞ (1-q^n y)solves the simplest q-difference equation and replaces the reciprocal of the Γ-function in the q-world. Ratios ofthe formϕ(x^λ b)/ϕ(x^λ c) generalize complex powers of linear forms ubiquitous in hypergeometricintegrals. Instead of hyperplanes, we have translates of codimension 1 subtori in × T_G.* the weights λ_i and the shifts b_i,c_iinvolve the roots of G and the weights of× G action on theprequotient in (<ref>). For Nakajima varieties,(<ref>) is an algebraic symplectic reduction of acotangent bundle, andthe self-duality of this setupimplies {b_i,c_i} = {q t^ν_i, ħ t^ν_i}for a certain weight t^ν_i ofon the prequotient in(<ref>).* the function(x,z) = exp( (ln q)^-1∑_i,kln x_i,k ln z_i )where the coordinate x_i,k are grouped according toG = ∏_i∈ I GL(_i) solves monomialq-difference equations in x and zand makes the integral (<ref>)a q-difference analog of Fourier or Mellin transform.* the function 𝐠_β(x) is an elliptic function on x(that is, a constant, from the viewpoint of q-difference equations)regular at the location of γ. It is convenient to use asuitable basis of such functions as a mechanism to generate a basisin the K(X)-dimensional space of solutions of the quantumdifference equations. 
From the perspective of <cit.>, see in particularSection 6.2 in <cit.> andSection 5.4 in <cit.> for detailed examples, it is natural to use elliptic stable envelopes to build functions𝐠_β(x). Our focus in this paper, however, is on the functions _α(x), and their relations to K-theoreticstable envelopes. * the Bethe subscheme = {∂/∂ x =0 }⊂×× T_G/W_Gwhere[The functionis known as the Yang-Yang function.] = lim_q→ 1ln(q) ln( (x,z)∏ϕ(x^λ_i b_i)/ϕ(x^λ_i c_i)) appears as the critical points of the integral in the q→ 1 limit.It is the joint spectrum of the corresponding commuting operators onK_(X) and the map K_(X) α↦_α↦[]gives the Jordan normal form of the []-action on K_(X).The fiber ofover 0∈ is the spectrum ofK-theory of X in (<ref>). The concrete form of Bethe equations is recalled in theAppendix.The connection between Bethe equations and quiver gauge theories whose Higgs branch is X is one of the main points of a very influentialsequence of papers by Nekrasov and Shatashvili, see NS1,NS2.* finally, the function _α(x) is a rational function of x that depends linearly on α∈ K_(X) and restricts toα on the image of ι in (<ref>). It is knownunder various names including “off-shell Bethe eigenfunction”and “weight function”. This function _α(x) will bethe most important player in this paper. Partition functions of supersymmetric gauge theories can be often expressed as integrals of the general form (<ref>), see e.g. <cit.>for prominent examples of such computation. The group G in this case is the complexification of the gauge group and integration corresponds, via Weyl integration formula, to extracting invariants of constant gaugetransformations.[Alternatively, the quotient T_G/W_G isclosely related to the Coulomb branch of vacua of the theory andthe integral (<ref>) may be interpreted as an equivalent directcomputation on the Coulomb branch.] See e.g.<cit.> for and introductory mathematical discussion and an explanation of howintegralsof the form (<ref>) appear in enumerativetheory of quasimaps to X with descendent insertions.See also e.g.<cit.> for a detailed discussion of the Nekrasov-Shatashviliconnection between Betheequation and enumerative theory of quasimaps that does not makean explicit use of Mellin-Barnes integrals. §.§.§The space of possible descendent insertions at 0∈^1 { } = K_× G() = [× T_G/W_G]corresponds to all possible Laurent polynomials f_α(x)in (<ref>). A choice of 𝐠_βcorresponds to a nonsingular insertion at ∞∈^1. There is a third flavor of insertions, called relative and theytake a class α∈ K_(X) as an input. This is explained in Section <ref> and, in more details, in <cit.>.By a geometricargument, K-theoretic count of quasimaps with a relative insertionat 0 and a nonsingular insertion at ∞ gives a fundamentalsolution of the quantum difference equations, see Section 8 in <cit.> for details.§.§.§ In this paper, we will describe a linear map{ }=K_(X) α↦_α∈(× T_G/W_G)= {} that preserves K-theoretic counts, and therefore makes theMellin-Barnes integral (<ref>) a solution of the quantumdifference equations. Among all quasimaps, there aredegree zero, that is, constant quasimaps, which means_α|_K_(X) = α in the diagram (<ref>). In (<ref>), we allow only very specific denominators_α = _α/Δ_ħ ,_α∈[× T_G/W_G] , where Δ_ħ is the Koszul complex for themoment map equations for X, that is,Δ_ħ = ∑_k (-ħ)^k Λ^k (G) = ∏_i ∏_k,l (1- ħ x_i,k/x_i,l) with the coordinates x_i,k grouped as in (<ref>). 
The numerator _α of _α is such that the counts are still defined in integral, that is, nonlocalized K-theory. This integrality is crucial, and the geometric mechanism responsible for it will be explained in Section <ref>. In particular, we will make precise the mechanism of restriction (<ref>) of a rational function to a locus that may be contained in the divisor of poles.

§.§.§ The denominator in the correspondence (<ref>) is what differentiates our approach from other results in the literature, notably from a very general result of Smirnov <cit.>, who gives a map
{ }→K_(X)⊗(z,q)={ }⊗(z,q)
which preserves K-theoretic counts. Restricted to z=0, the map (<ref>) is the pullback ι^* in (<ref>), and hence any set of tautological classes that forms a basis of K_(X) can be used to write integral formulas for solutions of quantum difference equations.

§.§.§ Our main result, Theorem <ref> in Section <ref>, is an equivalence between a relative insertion α and the corresponding insertion _α in the enumerative theory of quasimaps to X. For _α we give a simple formula in terms of K-theoretic stable envelopes, see Definition <ref> in Section <ref>. A representation-theoretic translation of this formula is given in (<ref>) and (<ref>) in Section <ref>, see also Section <ref>. An introduction to K-theoretic stable envelopes may be found in <cit.>. An interesting feature of our formula for _α is that it does not depend on the variables z or q, in marked contrast to (<ref>).

As a special case, we give explicit formulas for _α for cyclic quivers _ℓ, that is, for the quantum double loop algebras , see Section <ref>. These formulas can be seen as an instance of an abelianization formula for stable envelopes in the style of <cit.>. We make the formulas particularly explicit in the important case of the Hilbert scheme of points in ^2 in Section <ref>.

§.§.§ For =𝔤𝔩_n, our formulas specialize, with a very different proof, to integrals studied by Tarasov and Varchenko TV,TV1,TV2,TV3,TV4,TV5. A connection between what they call the weight function and stable envelopes was observed, in this instance, in the papers RTV1,RTV2. These papers were an important source of inspiration for the work presented here.

For =𝔤𝔩_1, Bethe eigenvectors are obtained in <cit.> in the shuffle algebra realization. Presumably, these formulas may be extended to =𝔤𝔩_ℓ using e.g. the shuffle algebra techniques developed in <cit.>. Here we don't use any specific features of 𝔤𝔩_ℓ and solve a more general problem, namely the q-difference equations that generalize the eigenvalue problem solved in <cit.>.

§.§ Insertions in quantum K-theory

§.§.§ In enumerative geometry of regular maps f:C→ X, it is natural and important to be able to constrain the values f(c) of f at specific points c∈ C. For example, the quantum product in (X) is defined using counts of 3-pointed rational curves
f: (C,c_1,c_2,c_3) → X
such that the points (f(c_1),f(c_2),f(c_3))∈ X^3 meet 3 given cycles in X. Unlike regular maps, quasimaps may be singular at a finite set of points of C, whence the difficulties with using the rational map
_c: (X)ff(c) ∈ X
in enumerative K-theory of the moduli space (X) of stable quasimaps to X. There are at least 3 ways around this difficulty, namely:

— one can restrict to the open set (X)_ of quasimaps nonsingular at c. While the evaluation map is not proper on this subset, the equivariant counts are well defined if c∈{0,∞}⊂^1≅ C and one works equivariantly with respect to _q= (^1,0,∞).
— one can use a resolution of the evaluation map (<ref>) provided by the moduli space of quasimaps relative to the point c∈ C. The domain of a relative quasimap is allowed to sprout off a chain of rational curves joining the new evaluation point c to its old location on C. — tautological bundles _i on C are part of the quasimap data, and one can use Schur functors of their fibers at c to impose constraints on f(c). These are known as descendent insertions. Recall that Nakajima varieties are constructed as quotients by G= ∏ GL(V_i) and the natural map (sometimes called the K-theoretic analog of the Kirwan map) K_G () → K(X) is known to be surjective <cit.>. Precisely because of the singularities, the bundles _i are not pulled back from X by f and, therefore, descendent insertions do not factor through the Kirwan map (<ref>). §.§.§ Since the options listed above express in different precise languages the same intuitive idea of constraining the value f(c), one expects to have a translation between e.g. relative and descendent insertions at c. This turns out to be a highly nontrivial problem with important geometric applications, for instance, in Donaldson-Thomas theory. An early discussion of it may be found in <cit.> and, in cohomology, very important progress on this problem was achieved by Pandharipande and Pixton in PP1,PP2. Geometric representation theory provides a different and perhaps more powerful approach to these problems, as demonstrated by A. Smirnov in <cit.>. §.§.§ In a fully equivariant theory, with the action of _q included, it is possible to mix and match the types of insertions at the _q-fixed points {0,∞} of the domain C. It is natural to interpret 2-pointed counts as correspondences acting on K(X) or as correspondences between X and the stack . More precisely, one has to localize K(X) in the presence of nonsingular insertions and work in formal power series in the variables z^ f∈ that keep track of the degree of a quasimap. These are usually called the Kähler variables, as opposed to the equivariant variables, which include q and the coordinates on a maximal torus = ×_ħ⊂(X) , where ħ is the -weight of the symplectic form on X and = ħ. §.§.§ The geometric, representation-theoretic, and functional nature of the resulting operators strongly depends on the type of insertions chosen, as illustrated by the following list. In this list we indicate the type of insertion at 0 followed by the type of insertion at ∞. Obviously, the roles of 0 and ∞ may be switched by the automorphism of ^1 that permutes them and sends q to q^-1. relative/relative, also known as the glue operator , is a generalization of the longest element in the quantum dynamical Weyl group of the nonaffine subalgebra _ħ(𝔤) ⊂_ħ() , see <cit.> and also <cit.>. It does not depend on q and is a rational function of the Kähler variables z. It also does not depend on the variables a in in certain special bases of K_(X), see Section 10.3 in <cit.>. relative/nonsingular, also known as the capping operator , gives a fundamental solution to q-difference equations in both the Kähler and the equivariant variables. Difference equations with respect to z may be interpreted as the action of the lattice inside the quantum dynamical affine Weyl group of _ħ(). Difference equations with respect to a∈ are the quantum Knizhnik-Zamolodchikov equations.
descendent/nonsingular is also known as the vertex with descendents[The vertex without descendents refers to having no insertions at 0.], or the so-called big-I function in the more conventional nomenclature that goes back to Givental. Its computation by _q-localization may be converted into a Mellin-Barnes type integral over a certain middle-dimensional cycle γ in a maximal torus of G. Such integrals are a standard practice in the SUSY gauge theory literature, and can also be explained mathematically, see e.g. the Appendix in <cit.>. Descendent insertions become functions _α in (<ref>). descendent/relative, also known as the capped vertex, is the essential piece in the correspondence between descendent and relative insertions. As shown in <cit.>, quantum corrections to the capped vertex vanish for any fixed insertions and sufficiently large framing. This property is called large framing vanishing. Smirnov shows in <cit.> how to use it to obtain an explicit representation-theoretic formula for the capped vertex, which is manifestly a rational function in all variables. §.§.§ The technical crux of the paper is the analysis of the capped vertex with our specific insertions _α. This is done in Section <ref>. Just like the proof of large framing vanishing, this is fundamentally a rigidity result in the classical spirit of Atiyah, Hirzebruch, Krichever, and others AtHirz,Kr1,Kr2. The main ingredients in this analysis are the integrality established in Section <ref> and the bounds on equivariant weights from Section <ref>. §.§ Algebraic Bethe Ansatz reformulation §.§.§ In the study of vertex models of statistical physics, from which quantum groups originated, one associates a representation F of _ħ() to the lines in a 4-valent oriented planar graph and an interaction tensor R_F,F': F ⊗ F' → F ⊗ F' to the vertices of the graph, as in Figure <ref>. This tensor is the R-matrix for _ħ(), and the Yang-Baxter equation satisfied by it is central to the integrability of such models. See e.g. ChariPress,JM,KS,Slav for an introduction. In the approach of <cit.>, one first constructs geometrically a tensor structure on the K-theory of Nakajima varieties, which then yields R-matrices and the quantum group itself, see pcmi,slc for an introduction. §.§.§ Tensor structure is realized geometrically using certain correspondences called stable envelopes, and the R-matrix is computed as the composition of one stable envelope with the inverse of another. A certain triangularity inherent in stable envelopes implies that matrix elements taken between ⊗ F' and F ⊗', where ∈ F and '∈ F' are the vacuum, that is, lowest-weight vectors, satisfy R (α⊗') |_⊗ F' =^-1(α⊗') |_⊗ F' for a certain invertible operator on F'. This operator belongs to a very specific commutative algebra _0 ⊂(F') which may be identified with — the image of the quantum loop algebra _ħ() for the Cartan subalgebra 𝔥. — the algebra of multiplication operators in the geometric realization of F' as the K-theory of a certain algebraic variety. Such a realization makes F' a commutative ring and, in fact, a quotient of a ring of W_G-invariant Laurent polynomials. It is in this language that the operator is presented in (<ref>) below. — _0 is the limit of Baxter's algebra _z of commuting transfer matrices (<ref>) as the parameter z goes to 0. This is reviewed in Section <ref>. §.§.§ Our formula for _α is of the form _α = (α⊗') |_⊗⋆ where ⋆ is a specific point (<ref>) in the geometric realization of F'.
Its structure sheaf _⋆ is the unique, up to multiple, eigenvector of _0 with a certain eigenvalue computed in Section <ref>, where further details may be found. This gives _α = /Δ_ħ · , where = ∏_i∈ I∏_k=1^_i∏_l=1^_i(1-ħ x_i,k/a_i,l) and the boundary conditions for the partition function in (<ref>) are explained in Figure <ref>. In Figure <ref>, we make the fundamental representations F_i, i∈ I, evaluated at the points x_i,k, where k=1,…, _i, run along the NE-SW lines. Along the NW-SE line runs the representation in which K(X) is a weight subspace. We draw this line as a multiple line in reference to a tensor structure that this module typically possesses. As the boundary condition at the SW corner, we choose a certain specific eigenvector of _0. The eigenvector property of the boundary conditions means the following identity _α⊗_δ = . '/'|_ = δ _α where _δ is the vacuum vector of weight δ and '= ∏_i∈ I∏_k=1^_i∏_l=1^_iħ^1/2 (1-x_i,k/a_i,l). Pictorially, the eigenvalue property (<ref>) may be represented as follows: [Figure: two lattice diagrams, equal up to the factor . '/'|_ = δ; the graphics did not survive extraction.] Explicit formulas for the eigenvector _⋆ may, in turn, be given in terms of stable envelopes. This is a reflection of the basic fact that the dual of a stable envelope is again a stable envelope with opposite parameters, see <cit.>. §.§.§ Let (x_i,k) ∈[T_G/W_G] be a symmetric polynomial of x_i,k, that is, a characteristic class ({V_i}) of the tautological bundles V_i on . Formula (<ref>) means ( α,)_K_(X) =χ(α⊗({V_i})) =∫_γ_0_α (x_i,k)… , where γ_0 is the part of γ that encircles the image of ι in (<ref>) and the integration measure omitted in (<ref>) is, among other things, the specialization of the integration measure in (<ref>) to quasimaps of degree 0. Using (<ref>), we can read the operators in Figure <ref> backwards and interpret that picture as an operator formula for the off-shell Bethe eigenfunction. In the familiar context of the spin-1/2 XXZ spin chain, this becomes the classic formula =B(x_1) … B(x_) of the algebraic Bethe Ansatz, further generalized in <cit.> and countless papers since. §.§ Acknowledgments §.§.§ Our interactions with Pavel Etingof, Boris Feigin, Edward Frenkel, Davesh Maulik, Nikita Nekrasov, Nikolai Reshetikhin, and Andrei Smirnov played a very important role in the development of the ideas presented here. §.§.§ As already explained, the connection between Bethe equations and supersymmetric gauge theories (specifically, the enumerative theory of quasimaps) goes back to the pioneering work of Nekrasov and Shatashvili <cit.>. As a next step, Bethe eigenvectors found a gauge-theoretic interpretation in Nekrasov's study of orbifold defects in gauge theories, see <cit.>. §.§.§ This paper looks from a somewhat different angle at a problem which Smirnov essentially already solved in <cit.>, building on the large framing vanishing of <cit.>. Smirnov's result is used in <cit.> to solve qKZ by Mellin-Barnes integrals, and the present work was very much motivated by the desire to bring the formulas of <cit.> closer to those of Tarasov and Varchenko. In this, we were guided by the papers RTV1,RTV2 of Rimányi, Tarasov, and Varchenko, and also by the older papers of Matsuo <cit.> and Reshetikhin <cit.>. §.§.§ In this paper, we present complete integral solutions to the dynamical and qKZ equations for tensor products of evaluation representations of quantum affine algebras associated to quivers. As a special case, this includes the diagonalization of the Baxter-Bethe commuting operators acting in these spaces.
That problem goes back to a 1931 paper of Hans Bethe and is the subject of an immense body of literature both in mathematics and physics. It is unrealistic to analyze how the great many different threads present in that literature enter, implicitly or explicitly, in what we do here. We cannot attempt to survey the literature and only include those references that influenced our work. Of the many different approaches to the Bethe Ansatz, we suspect the one based on the so-called universal weight function EKP,FKPR,KP,KPT may be the closest. Stable envelopes, which we use here, give a geometric Gauss factorization of the R-matrices in the style of Khoroshkin and Tolstoy, and this is closely related to universal weight functions. §.§.§ Another paper which is particularly close to the direction of this work is <cit.>, where the authors prove a formula for Bethe eigenvectors for _ħ(𝔤𝔩_1) which is a close relative of our formula (<ref>), see Section 4 in <cit.>. Instead of taking the eigenvector boundary condition in Figure <ref>, the authors of <cit.> take the (∅,∅)-matrix element of the R-matrix as a universal map K_(X) →_ħ(), which they further compose with a shuffle algebra realization of _ħ() to get to functions of x_i,k. Our formulation bypasses the need to work with shuffles, and also solves a more general problem — the q-difference equations. For eigenvalue problems, overall factors, such as our denominators Δ_ħ, are not relevant, which explains the discrepancy with <cit.>, where the square Δ_ħ|_ħ=1 of the Vandermonde determinant appears in the denominators. §.§.§ M.A. is supported by NSF grant 1521446 and by the Berkeley Center for Theoretical Physics. Both authors are supported by the Simons Foundation as Simons Investigators. A.O. gratefully acknowledges funding by the Russian Academic Excellence Project '5-100' and RSF grant 16-11-10160. § MAIN RESULT §.§ Descendent insertions from stable envelopes §.§.§ For a given oriented framed quiver like the one in Figure <ref>, let (,) denote the linear space of quiver representations with dimension vectors and , where _i = V_i, _i = W_i. Let μ: T^*(,) →(G)^* , G = ∏ GL(V_i), be the algebraic moment map and let (,) = μ^-1(0) denote its zero locus. By definition, a Nakajima variety X is an algebraic symplectic reduction X = (,) =(,)G where a certain choice of a GIT stability condition is understood, see e.g. <cit.> for an introduction. The stability choices are parametrized by vectors θ∈^I = (G) ⊗_ , which must avoid a finite number of rational hyperplanes, up to a positive proportionality. We also consider the quotient stacks X⊂ =[ (,)/G] ⊂ =[ T^*(,)/G] obtained by forgetting the stability condition and the moment map equations, respectively. §.§.§ Our goal in this section is to construct a certain K_()-linear map K_(X) ∋ α↦_α∈ K_() such that _α is supported on ⊂ and _α|_= ι_X,* α , where ι_X: X ↪ is the inclusion. One can thus view (<ref>) as an extension of ι_X,* α to a K-theory class on . This extension is canonical once certain further choices are made. Its construction involves stable envelopes on a larger Nakajima variety (,+). §.§.§ The dependence of what follows on the stability condition (<ref>) may be summarized as follows. Let i∈ I be a vertex of the quiver and let δ_i∈^I be the delta-function at i. Consider T^* (δ_i,δ_i) = T^* (W_i,V_i) ⊕ T^* (V_i,V_i)^𝐠 where W_i = V_i=1 and 𝐠 = . The moment map equations take the form ab = 0 , a∈ (W_i, V_i), b∈ (V_i, W_i), and (δ_i,δ_i)_ = {a≠ 0} or {b≠ 0}, depending on the choice of stability.
For either choice of stability, this gives (δ_i,δ_i) ≅^2 𝐠 equivariantly with respect to Sp(2 𝐠) ⊂(,) . Since Nakajima varieties are unchanged under flips of edge orientation, we may assume that the direction of the invertible map in (<ref>) coincides with the orientation. To simplify the exposition, we will assume that the framing edges are oriented in the direction of (W_i,V_i) in Figure <ref>. It will be clear how to modify this in the general case. §.§.§ By convention, the group _ħ scales the cotangent directions of T^*(,) by ħ^-1 and thus scales the canonical symplectic form ω on X with weight ħ. Note that this splitting of the exact sequence 1 →(X,ω) →(X) → GL(ω) → 1 depends on the choice of the orientation. §.§.§ Let V'_i be a collection of vector spaces with V'_i = _i and denote G' = ∏ GL(V'_i) ≅ G. Define Y = (,+)_ / G' where the framing spaces are of the form W_i⊕ V'_i and the subscript refers to the locus of points where the framing maps V_i' → V_i , V_i → V_i', are isomorphisms, according to the orientation explained in Section <ref>. In what follows, we will assume that V'_i V_i. Clearly, (,+)_⊂(,+)_ . §.§.§ There is a G-equivariant map : T^*(,) ↪ Y which supplements the quiver maps by (ϕ, - ϕ^-1∘μ_GL(V_i))∈(V_i',V_i) ⊕(V_i,V_i') for a framing isomorphism V_i' V_i. The dependence on ϕ is precisely taken out by the quotient by G'. We denote the induced map :↪[Y/G ]=[(,+)_/G' ] by the same symbol. Formula (<ref>) implies μ_G' = - ϕ^-1∘μ_G ∘ϕ and thus ⊂ is cut out by the pullback via of the moment map equations for G'. In other words, we have a pull-back square in which maps to [0/G'] inside [(G')^*/G'], the right vertical arrow being μ_G'∘ . §.§.§ Let ≅⊂(G') be the group acting with its defining weight u on each V'_i. We have X ⊔(,) ⊂(,+ )^ and we can choose the attracting directions for so that (,) lies in the full attracting set of X. We apply the general machinery of stable envelopes, an introduction to which may be found in <cit.>, to this action of . Since commutes with × G', stable envelopes give a K_× G'(X)-linear map : K_× G'(X) → K_× G'((,+ )) that depends on two pieces of additional data, namely: — a fractional line bundle ∈(X) ⊗, called the slope. The slope should be away from the walls of a certain periodic locally finite rational hyperplane arrangement in (X) ⊗, and stable envelopes depend only on the alcove of that arrangement that contains . We fix the slope to be = ε· , 0 < ε≪ 1. This choice is not material for showing (<ref>), but will be crucial for what comes later. — a polarization T^1/2, which is a solution of the equation T^1/2 + ħ^-1 (T^1/2)^∨ = in equivariant K-theory. A polarization is an auxiliary piece of data in that stable envelopes corresponding to different polarizations differ by a shift of the slope. A polarization is also required to set up quasimap counts, see Section 6.1 in <cit.>, and so we assume that a polarization of X has been chosen and set T^1/2_(,+ ) = T^1/2 X + ∑_i ħ^-1 (V_i,V'_i), that is, we select the directions opposite to the framing maps V_i'→ V_i that are assumed to be invertible. §.§.§ Because (α) is G'-equivariant, it descends to a class on Y/G. We make the following definition. We set _α = ^*(α)∈ K_() where the slope of the stable envelope is chosen as in (<ref>) and the polarization is as in (<ref>). The class (<ref>) is supported on ⊂ and satisfies (<ref>). The moment map μ_G' for the group G' is an -invariant[that is, an equivariant map to a variety with a trivial -action] map to an affine variety. Since X in (<ref>) lies in the zero fiber of this map, the full attracting set of X does, too.
From (<ref>), we conclude that _α⊂ . Now let (X) denote the attracting manifold of X in (<ref>). It fits into the correspondence between X and (,+ ) given by (X) with the projection π_ to X and the inclusion ι_ into (,+ ), in which — the map π_ forgets the maps V'_i → V_i, — the map ι_ sets to zero the maps V_i → V'_i. Our choice of the polarization (<ref>) and the conventions for the normalization of the stable envelope explained in Section 9.1 of <cit.> imply (α) |_= ι_,* π_^*α , whence the conclusion. §.§ Restriction to the origin §.§.§ As a polynomial in the universal bundles, the insertion _α is determined by its restriction to the origin 0∈. Our next goal is to bound the G-weights that appear in this restriction. The origin is a fixed point of G and under the inclusion it corresponds to the point ⋆={V'_i V_i , =0 } which can be viewed as a point in either Y^G or (,+)^G', the isomorphism in (<ref>) giving an identification of G and G'. To bound the G-weights in _α|_0 is thus the same as to bound the G'-weights of (α)|_⋆. This is equivalent to bounding the '-weights, where '⊂ G' is a maximal torus. §.§.§ The torus ' contains . Since X⊂(,+) is fixed by the whole torus ', the triangle lemma for stable envelopes implies _'(α) = _(α) for the same slope, polarization, and a small perturbation of the 1-parameter subgroup. See Section 9.2 in <cit.> for a discussion of the triangle lemma. §.§.§ By the definition of stable envelopes, the torus weights in their restriction to fixed points are bounded in terms of the polarization, after a shift by the slope ∈(X) ⊗_ = (G)⊗_ . The identification in (<ref>) sees V_i as a line bundle on X and as a character of G. While we made a specific choice of in (<ref>), the following proposition is true for an arbitrary slope. The G-weights of the restriction of _α to the origin 0∈ are contained in +( (T^1/2_(,+))^∨_⋆)⊂⊗_ . Here the vee and the star denote the dual representation and the restriction to (<ref>), that is, to V'=V, respectively. This follows directly from the definition of stable envelopes. §.§.§ In general, a polarization of a Nakajima variety is a virtual bundle on the prequotient in which either the tangent bundle to the G-orbits or the target of the moment map equations enters with a minus sign. Note, however, that this term is precisely added back in (<ref>) after the specialization to V'=V. This means that T^1/2_(,+)|_⋆ is an actual representation of G modulo balanced classes, and thus the exterior algebra in (<ref>) is an actual G-module. Recall from <cit.> that a virtual representation of G is called balanced if it is of the form V - V^∨, for some V∈ K_G(). For balanced classes, one defines (V - V^∨) = (-1)^ V V. §.§.§ The bound in Proposition (<ref>) means that _α may be seen as a stable envelope extension of the class ι_X,* α to the stack in the sense of D. Halpern-Leistner and his collaborators, see HL,HLMO,HLS. §.§ Integrality of _α-insertions §.§.§ Let () denote the moduli space of stable quasimaps f: C ⇢ , as defined in <cit.>, see also e.g. <cit.> for an informal introduction. By definition, a point of () is a collection of vector bundles _i on C of rank _i, together with a section of the associated bundles like (_i,_j) or (_i,_j) per every arrow in the doubled quiver, where _j is a trivial bundle of rank _j. A quasimap is stable if it evaluates to a stable point of at the generic point of C. We set f = ( …, _i, …) ∈^I by definition. The image of the natural inclusion ι_(X): (X) ↪() is cut out by the moment map equations imposed pointwise.
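To make this definition concrete, here is a minimal illustration (ours, obtained by specializing the definition above; it uses only the standard ADHM description and ignores the equivariant twists): for the one-vertex quiver with = n, = 1, i.e., for the Hilbert scheme of points of Section <ref>, a stable quasimap of degree d consists of a rank-n bundle 𝒱 on C with 𝒱 = d, together with sections B_1, B_2 ∈ H^0((𝒱,𝒱)) , i ∈ H^0(𝒱) , j ∈ H^0(𝒱^∨) , subject to the moment map equation [B_1,B_2] + i j = 0 at every point of C and to the ADHM stability condition at the generic point of C.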
§.§.§ Consider the pull-back _0^*_α∈ K_×_q(()) of the class _α under the evaluation map _0: () ∋ f ↦ f(0) ∈ . By Proposition <ref>, every quasimap in the support of this class satisfies f(0)∈. Therefore, the obstruction theory for (X), restricted to the support of _0^*_α, has a trivial factor _(X)|__0^*_α→ħ^-1 ⊕(_i|_0, _i|_0) → 0, corresponding to the moment map equations at 0∈ C. We can take the kernel of (<ref>) as a new reduced obstruction theory for (X) to produce a reduced virtual fundamental class ^_(X),∈ K_×_q(()). §.§.§ The difference between the virtual fundamental class ^_(X) and its reduced version is a factor of Δ_ħ = (⊕ħ^-1(V_i,V_i) ) . We make the following definition. We set _α = Δ_ħ^-1 _α and we define the product _0^*(_α) ⊗^_(X)∈ K_×_q ((X)) by the equality of K-classes ι_(X),* (_0^*(_α) ⊗^_(X)) = _0^*(_α) ⊗^_(X), on (). In actual quasimap counting, one uses the so-called symmetrized virtual structure sheaves _, see Section 6.1 in <cit.> and (<ref>) below. Those differ from ^ by a twist by a line bundle, which is the same line bundle on both sides of (<ref>). §.§.§ The following is clear from the construction. The class (<ref>) is an integral K-theory class which equals α for (X)_=0≅ X. Integral formulas for descendent insertions generalize verbatim to (<ref>) with the insertion of the rational function (<ref>). §.§ Equivalence of descendent and relative insertions §.§.§ Our next goal is to prove the following. A relative insertion of α∈ K_(X) at 0∈ C equals the descendent insertion of _α at the same point in equivariant quasimap counts with arbitrary insertions at points away from 0∈ C. In Theorem <ref>, a certain alignment between the polarization used to define _α and the polarization required in setting up the quasimap counts is understood. Recall that a polarization T^1/2 of X induces a virtual bundle ^1/2 on the domain of the quasimap and one defines the symmetrized virtual structure sheaf by _ = _⊗(_⊗^1/2_∞/^1/2_0)^1/2 , where the subscripts denote the fibers of at 0,∞∈ C. The quasimap counts from <cit.> are defined using (<ref>). Note that they depend on the polarization only via its determinant. In Theorem <ref>, we assume that the determinants of the two polarizations are inverses of each other, up to equivariant constants. In <cit.>, equivariant correspondences are interpreted as operators from the fiber at ∞ to the fiber at 0, which is why it is natural to use dual bases for the fiber at 0. Stable envelopes, in particular, change both the slope and the polarization to the opposite (,T^1/2) ↦ (-, ħ^-1( T^1/2)^∨) under duality. It is easier to implement the flipping of the polarization in the statement of Theorem <ref> than to work with the opposite polarization throughout the paper. With this change, the localization contributions at 0 take the form _ = 1/_-(^1/2_0)^∨⊗… where the dots stand for terms with a finite limit as q^± 1→∞ and _- = ∑_k (-1)^k Λ^k . See Section 7.3 of <cit.> for details on the localization formula (<ref>). §.§.§ The proof of Theorem <ref> proceeds in several steps. As a first step, we can equivariantly degenerate C into a union C ⇝ C_1 ∪_ C_2 so that 0∈ C_1∖{} and all other insertions lie in C_2∖{}. By the degeneration formula, it is therefore enough to show that the counts in Theorem <ref> coincide when we impose a relative insertion β∈ K_(X) at ∞∈ C ≅^1. §.§.§ Since the count of quasimaps relative to 0,∞∈^1 is the glue matrix , the Theorem is equivalent to showing that the operator α↦_∞,*(_0^*(_α) ⊗_z^) ∈ K_(X)[q^± 1][[z]] equals , where is the relative evaluation map as in (<ref>).
Here we get polynomials in q because the map _∞ is proper and _q-invariant. Recall that the glue matrix does not depend on q, which can be explicitly seen from its analysis as q^± 1→ 0 as in Section 7.1 of <cit.>. This analysis is based on _q-equivariant localization and we can apply the same reasoning to (<ref>). The _q-fixed quasimaps are constant on ^1∖{0,∞} and the contributions from 0 and ∞ essentially decouple. The contributions from ∞ are literally the same as for the glue matrix. They are computed using the push-pull through K((X)^_q_): one pulls back along _0^* from K(X), tensors with _z^ on the middle stage, and pushes forward along _∞,* to K(X). The analysis in Section 7.1 of <cit.> shows → as q→ 0 and → 1 as q→∞ . The contributions from 0∈ C in the localization formula for are computed using a parallel push-pull diagram, now pulling back along _0^* from K(), which computes the so-called vertex with descendents, see Section 7.2 in <cit.>. The _q-fixed locus in (<ref>) has a concrete description as a certain space of flags of quiver representations, see Section 7.2 of <cit.> and also <cit.>. Since (<ref>) is a product of the two operators, Theorem <ref> follows from the following. The vertex with descendent _α remains bounded in the q→∞ limit and goes to α in the q→ 0 limit. Note that a vertex with descendents is a power series in z, and Proposition <ref> implies that all terms of nonzero degree in z in that series vanish in the q→ 0 limit. §.§.§ On the fixed locus, the bundles _i can be written in the form _i = ⊕_j _C(d_i,j[0]) with their natural linearization. This means that their fiber _i|_∞ at infinity is a trivial _q-module, while the _q-weights in the fiber _i|_0 at zero are {q^d_i,j}. This means that the insertion _0^* _α is a Laurent polynomial in {q^d_i,j} with coefficients in the K-theory of the fixed locus. This polynomial does not depend on the quiver maps and therefore we may assume that all quiver maps are zero. The Newton polygon of _0^* _α is thus bounded by the formula in Proposition <ref>. We find ⊂(d,) + ( (^1/2_(,+))^∨_0 ) and this inclusion is strict if (d,) ≠ 0 because it is true for an open set of . Here ^1/2_(,+) is the virtual bundle on C obtained by plugging the bundles _i and _j into the formula (<ref>) and the subscript refers to its fiber at 0∈ C. As observed in Section <ref>, the exterior algebra here is a well-defined _q-module. Also in (<ref>) we have the natural pairing of the degree of the quasimap d = (d_i) ∈^I = H_2(X,), d_i = ∑_j d_i,j with a fractional bundle ∈(X) ⊗. The moduli spaces of quasimaps of degree d are empty unless d is effective, see Section 7.2 in <cit.>, so we assume that d is effective in what follows. Since was assumed to be an ample bundle, we have (d,) = 0⇔ d = 0. From (<ref>), we have _-(^1/2_(,))^∨_0 = 1/Δ_ħ_- (^1/2_(,+))^∨_0 and therefore from (<ref>) we conclude q^-(d,)_0^* _α⊗_→ 0, q→ 0,∞ , d ≠ 0. Since was assumed to be a very small ample bundle, we have 0 < (d,) ≪ 1, d ≠ 0. Therefore for d ≠ 0 we have _0^* _α⊗_→ 0 as q→ 0, and → as q→∞ , as was to be shown. § REFORMULATIONS AND EXAMPLES §.§ R-matrices and Bethe eigenfunctions §.§.§ In the setup of Section <ref>, consider the R-matrix for the action of R: ^-1∘∈ K_× G'((,+)^)_ where the map is defined as in (<ref>) with the same choice of slope and polarization, but the opposite choice of the 1-parameter subgroup. Our next goal is to express _α in terms of the restriction of R(α) to (,) in (<ref>) and, more concretely, in terms of its restriction to the G'-fixed point ⋆∈(,) as in (<ref>). Recall that _α is completely determined by its restriction to the point ⋆.
§.§.§ By our choice of the 1-parameter subgroup, (,) was at the bottom of the attracting order among the components of the fixed locus. Since this order is reversed for , we have (β)|_(,) = β ⊗_-( N_^∨) ⊗… , for any β∈ K_T× G'((,)), where the dots stand for a certain line bundle and N_ is the repelling part of the normal bundle N to (,). We have N = ∑_i (W_i,V_i)_ + ħ^-1∑_i (V_i,W_i) _ . Fixing the line bundle in (<ref>) requires fixing a polarization of X. For simplicity, we assume that the polarization of the framing maps for X is the same as in the new framing terms in (<ref>), that is, T^1/2X = ħ^-1∑_i (V_i,W_i) + . Recall from Section <ref> that such a choice of orientation on the framing edges depends on the stability parameter θ, and that both the orientation and the polarization should be flipped if the entries of θ change sign. §.§.§ With the assumption (<ref>), the repelling directions in (<ref>) coincide with the normal directions chosen by the polarization and hence the dots in (<ref>) are trivial. In other words, (β)|_(,) = β ⊗ , where = _-( ħ∑_i (W_i,V_i) ) = ∏_i∈ I∏_k=1^_i∏_l=1^_i(1-ħ x_i,k/a_i,l) . The variables x_i,k and a_i,l in (<ref>) are the Chern roots of V_i and W_i, respectively, as in (<ref>). We deduce the following. We have _α|_0 = R(α) |_⋆ . §.§.§ It remains to characterize the fiber at ⋆, which is, abstractly, a linear form K_G'((,)) ∋ ↦ |_⋆ = χ(⊗_⋆) ∈ K_G'(), in representation-theoretic terms. The structure sheaf _⋆∈ K_G'((,)) of the G'-fixed point (<ref>) is an eigenvector of the operators of multiplication in K_G'((,)), namely ⊗_⋆ = |_⋆·_⋆ for any ∈ K_G'((,)). Following <cit.>, we recall how to express the generators of the commutative algebra of operators (<ref>) in terms of the vacuum matrix elements of R-matrices. These are operators in K_G'((,)) defined by R_,∅,∅ (β) = R(β)|_(,) , where R is our current R-matrix defined in (<ref>). Its dependence on the dimension vector is made explicit in (<ref>). Obviously R_,∅,∅ = lim_z→ 0_ (z^⊗ 1) R and so the operators (<ref>) are the limit of Baxter's commuting transfer matrices as z→ 0. In the description (<ref>) of the normal bundle, the repelling directions for are the attracting directions for and they are precisely opposite to the polarization. Therefore (β) = ' ⊗β , β∈ K_'((,)), where ' = ħ^1/4 N _-( ∑_i (W_i,V_i) ) = ∏_i∈ I∏_k=1^_i∏_l=1^_iħ^1/2 (1-x_i,k/a_i,l). From this and (<ref>) it follows that R_,∅,∅ = '/'⊗ ∈ K_G' ((,)) ⊗(') . §.§.§ Recall that '⊂ G' denotes the maximal torus. Extending the analysis of Section <ref>, it is easy to see that (,)^'_ = ∏_i (δ_i,δ_i)^_i . This is a vector space with origin ⋆. The Weyl group of G' acts on it by permutations of the factors. Since the K-theory of this fixed component is trivial, we have the following. The structure sheaf _⋆ is the unique, up to multiple, eigenvector of the operators R_,∅,∅ with eigenvalue R_,∅,∅ (_⋆) = . '/'|_x_i,k=a'_i,k _⋆ and (<ref>) is the unique, up to multiple, linear form in the dual of this eigenspace. The normalization may be fixed by e.g. (<ref>). To connect with the notation of Section <ref> of the Introduction, it suffices to make the inverse substitution a'_i,k = x_i,k. §.§ Example: §.§.§ Our goal here is to produce an explicit basis of the functions _α for quivers of cyclic type _ℓ-1 with ℓ vertices.
The corresponding Nakajima varieties are moduli spaces of framed sheaves, including Hilbert schemes of points, on the A_ℓ-1-surfaces, that is, the minimal resolutions of x y = z^ℓ , starting with the affine plane A_0=^2 for ℓ=1. In particular, K-theoretic counts of quasimaps to these Nakajima varieties are directly related to the K-theoretic Donaldson-Thomas theory of threefolds fibered in A_ℓ-1-surfaces. The Lie algebra corresponding to the cyclic quiver is the affine Lie algebra _ℓ, hence the action of a double affine algebra on the K-theories of these Nakajima varieties. Its direct link to important questions in enumerative geometry and mathematical physics makes a very interesting object of study. See in particular <cit.> for a detailed discussion and many references. As a special case, cyclic quiver varieties include quiver varieties for the linear quiver A_ℓ, for which we recover the action of _ħ(_ℓ+1) and the formulas of Tarasov and Varchenko. The connection between those formulas and stable envelopes has already been observed in RTV1,RTV2. §.§.§ For explicit formulas, it is convenient to choose a particularly symmetric polarization of X. We start with a polarization T^1/2 = ⊕_∘ → ∘'( ∘,∘') -⊕_i(V_i , V_i) obtained from an orientation of the framed quiver in Figure <ref>. The first sum in (<ref>) is over all oriented edges. The weights in the corresponding stable envelopes are then bounded by the weights in _- (T^1/2)^∨, which is a product of expressions like _- (V,V')^∨ = ∏ (1-x_i/x'_j) . Here _- is the alternating sum of exterior powers as in (<ref>) and {x_i},{x'_j} are the Chern roots of V and V', respectively. We define (V,V') = _- (V,V')⊗( V)^ V' = ((V',V)) ⊗( V)^1/2 V'⊗( V')^1/2 V = ∏ (x_i-x'_j) which is, up to a sign, symmetric in V and V'. Since (<ref>) and (<ref>) differ by a sign and a line bundle, we have T^1/2 = ± _- (T^1/2_♢)^∨ for a certain polarization T^1/2_♢. In what follows, we consider stable envelopes with this polarization; their weights are bounded by (<ref>). It is convenient to extend the definition (<ref>) by linearity in the second factor ((V,V') ⊗ M ) = (V,V' ⊗ M) = ∏_i,j,k (x_i- m_k x'_j ) where M is a multiplicity bundle and {m_k} are its Chern roots. Recall that Nakajima varieties may have nontrivial automorphisms acting on the edge multiplicity spaces. The rank of the group of such automorphisms is the first Betti number of the quiver. A review of these basic facts may be found e.g. in the introductory material in <cit.>. §.§.§ Let ⊂ denote the subtorus preserving the symplectic form ω. The torus includes a maximal torus of the framing group GL(W) and an additional _ for the loop in the quiver. In the moduli-of-sheaves interpretation, this _ acts by symplectic automorphisms of the surface. We have X^ = ⋃_∑^(ij) = ∏_i∈ I∏_j=1^_i_(^(ij), δ_i) where _ denotes the Nakajima variety corresponding to the infinite linear quiver A_∞ and the equality ∑^(ij) = in (<ref>) involves summing over the fibers of the map A_∞ → _ℓ . The fixed locus (<ref>) may be interpreted as a Nakajima variety associated to a (disconnected) fixed-point quiver Q^ = with dimension vector = (^(ij)_k), where ||= ∑_i. §.§.§ Note that _(, δ_i) is either a point or empty, where the first case means that _j counts the boxes □ of a partition with content (□) = (□) -(□) equal to j. Indeed, the nonempty moduli spaces in (<ref>) form a basis of a level-one Fock module for _∞, also known as a fundamental representation of this Lie algebra. Those are labelled by an integer i and this is the index i in the formulas above. §.§.§ Let F be a component of the fixed locus (<ref>).
It corresponds to a homomorphism ϕ_F: → G which makes all the spaces V_i and the -spaces between them -graded. In particular, the fixed locus F itself parameterizes -invariant quiver maps, modulo the action of the centralizer G^⊂ G. We choose a generic 1-parameter subgroup in to partition all nonzero weights into attracting and repelling ones. In particular, the polarization T^1/2 decomposes T^1/2|_F = (T^1/2)_⊕(T^1/2)_⊕(T^1/2)_ according to the weights. We define _F = ∑_w∈ W_G/W_G^ w ·( (T^1/2)_⊕ħ(T^1/2)_) , where the Weyl group acts by permuting the Chern roots of the bundles. Since the decomposition (<ref>) is G^-equivariant, the Weyl group W_G^ of G^ acts trivially and the summation in (<ref>) is over the cosets of W_G^. Let _♢ be the line bundle of the form _♢ = ⊗( V_i )^ε_i , 0 < ε_i ≪ 1. The following proposition may be seen as an instance of an abelianization formula for stable envelopes, see e.g. Sh, S1, ese. Closely related constructions also appear in HLMO, HLS. The functions _F for all components F of the fixed locus (<ref>) form a ()-basis of the space of functions _α for cyclic quiver varieties for the polarization (<ref>) and the slope (<ref>). Note that if the rest of the terms and the cycle of integration in (<ref>) are symmetric, then there is no need to symmetrize under the integral sign. §.§.§ For example, let X = (^2,n) be the Hilbert scheme of n points in the plane ^2, which corresponds to ℓ=1 , =1 , =n. The tori = { (t_1, t_2) } ⊃ = { (t_1, t_1^-1) } act naturally on ^2 and (^2,n), and ħ = 1/t_1 t_2 . The fixed points of and are indexed by partitions λ of n and V|_λ = ∑_□=(i,j) ∈λ t_1^1-j t_2^1-i as a -module. In particular, the -weights in V are given by minus the contents of the boxes. As a polarization, we may take T^1/2 = V + (t_1-1) (V,V) = ∑ x_i + (t_1-1) ∑_i,j x_i/x_j where {x_i} are the Chern roots of V. A fixed point is specified by an assignment of the x_i to the boxes of λ, up to permutation. If we take t_1 to be a repelling weight for , then T^1/2_≷ = ∑_c(i)≷ 0 x_i + t_1 ∑_c(i)≷ c(j)+1 x_i/x_j - ∑_c(i)≷ c(j) x_i/x_j where T^1/2_> = T^1/2_ , T^1/2_< = T^1/2_ , and c(i) is the content of the box in λ assigned to x_i. Therefore, up to an ħ multiple, we have _λ = Π_1 Π_2/Π_3 where Π_1 = ∏_c(i)<0 (1 - x_i) ∏_c(i)>0 (t_1 t_2 - x_i) and Π_2 = ∏_c(i)<c(j)+1(x_j - t_1 x_i) ∏_c(i)>c(j)+1(t_2 x_j - x_i) , Π_3 = ∏_c(i)<c(j)(x_j -x_i) ∏_c(i)>c(j)(t_1 t_2 x_j - x_i). These are formulas for the K-theoretic stable envelopes for (^2,n) with the polarization and slope as in Proposition <ref>. They are a direct K-theoretic generalization of the formulas from Sh,S1. Note that in all the cases treated by formula (<ref>) the slope is near an integral line bundle. Much more interesting functions appear at fractional slopes, but they do not seem to be required in the context of the Bethe Ansatz. §.§.§ The proof of Proposition <ref> takes several steps. As a first step, we clarify the geometric meaning of the formula (<ref>). We separate the numerator and the denominator in (<ref>) by writing (T^1/2)_⊕ħ(T^1/2)_ = ρ_+ - ρ_- as a difference of two -modules. Then ρ_+ is the numerator in (<ref>), while ρ_- is the denominator. We note that ρ_+ = ±_(F)⊗…∈ K_× P (T^*) where the dots stand for a character. Here (F) ⊂ T^* is the -attracting manifold and P⊂ G is the parabolic subgroup with = P = _ . It acts on the character in (<ref>) via the homomorphism 1 →→ P → G^→ 1 to its Levi subgroup G^=P^.
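As a quick sanity check (our computation, not part of the original text, reading the products in Π_2 and Π_3 as running over all ordered pairs, including i=j): for n=1 the only partition is λ = (1), whose single box has content c=0 and carries the only Chern root x. Then Π_1 = Π_3 = 1 , Π_2 = x - t_1 x , _λ = x(1-t_1) , and at the fixed point, where x = V|_λ = 1, this reduces to 1-t_1, a Λ^•-type factor attached to the repelling direction t_1 of ^2, as one expects from the normalization of stable envelopes up to the ħ multiple and duality conventions noted above.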
§.§.§ Formula (<ref>) illustrates two general facts. First, it is an instance of stable envelopes for abelian quotients and abelian stacks. In general, in the abelian case, stable envelopes are structure sheaves of the attracting locus, up to line bundles. The second general principle apparent in (<ref>) is summarized in the following, in which Y is an abstract variety or stack for which stable envelopes are defined. Let P, with ⊂ P ⊂(Y), be an algebraic group such that the -weights in are attracting. Stable envelopes define a map K_P (Y^) → K_P(Y) where P acts on Y^ via the projection to P^. Our assumption on P implies that it preserves the attracting manifolds. We then argue inductively using the attracting order on the components F_i of Y^. For the very bottom component, the stable envelope is the push-pull through the attracting set (F_), which projects to F_ and maps to Y, up to a line bundle pulled back from F_. For all other components F, stable envelopes are uniquely determined by having the same structure (<ref>) near F and by being orthogonal to all lower stable envelopes in the sense of <cit.>, whence the conclusion. §.§.§ By construction, ρ_- = / ⊕ ħ , where = N is the nilradical of . The second term here has the following interpretation. Since the moment map is a -equivariant map, we have μ: (F) → ^∨_ = ^⊥ . Therefore there is no need to impose the moment map equations in ^∨. Equivalently, if we are planning to multiply by the Koszul complex Δ_ħ of ħ^-1^∨ to get a class supported on , we may divide by = ±(ħ ) ⊗… , where the dots stand for an unspecified character, as before. §.§.§ The meaning of the first term in (<ref>) is the following. Given a P-equivariant sheaf on a G-variety Y, we can induce from it a G-equivariant sheaf by first making a G-equivariant sheaf on G/P × Y and then pushing it forward to Y. In the case at hand, up to a line bundle, the denominators (/) and the summation over W_G/W_G^ in (<ref>) quite precisely come from an equivariant localization on G/P. We conclude the following. The symmetric polynomial Δ_ħ _F represents a class in K_() supported on the full -attracting set of F in . §.§.§ Note that the formula (<ref>) is universal for all dimension vectors. By the logic of our Definition <ref>, to prove (<ref>) for a specific X=(,) we need to check something for a larger Nakajima variety (,+). Namely, together with X, the fixed locus F embeds in (,+) and we need to bound the -weights in the corresponding function _F, (,+). From this angle, there is nothing special about the framing dimension being increased by exactly , and we can more generally assume that an action of ≅ is defined by a decomposition of the framing spaces W = W' + u W” , in which u is the defining weight of and W',W” are trivial -modules. We have X^ = ⋃_'+”= (',') ×(”,”) and we choose the attracting directions so that components with larger ” are attracted to those with smaller ”. For the bundle _♢ from (<ref>) we have _♢|_X^ = ε·” . In the context of our Definition <ref>, — we assume that F lies in the ”=0 component of the fixed locus X^, — we also assume that the attracting directions for agree with those for ⊂, — and we need to prove that . u^- ε·”_F/T^1/2 |_X^ = O(1), u^± 1→∞ , for 0<ε_i ≪ 1. In (<ref>), we select the attracting and repelling directions in the decomposition (<ref>). Since in (<ref>) this is compared with the whole polarization, the bound (<ref>) follows from . u^- ε·”1/T^1/2_ |_X^ = O(1), u^± 1→∞ , which will now be established. In _F, the Chern classes x_ij of the universal bundles are partitioned into various groups according to their -grading.
The sizes of these groups are given by the dimension vector of the quiver (<ref>). Restricted to X^, this dimension vector further splits = ' + ” into components of weight 0 or 1 with respect to . For computations of degrees in u, it is natural to use the quadratic form associated to the quiver (<ref>). In general, for any quiver Q with dimension vector , one defines (,)_Q = ∑_i → j_i _j where i→ j means that i and j are connected by an edge of Q. Together with the corresponding dot product ·_Q ' = ∑_i∈(Q)_i '_i , the form (<ref>) enters the dimension formula for Nakajima varieties: 12 _Q(,) = (,)_Q + ·_Q ( - ) . To prove (<ref>), we consider the u→ 0 and u→∞ limits separately. In the u→ 0 limit, we have . 1/T^1/2_ |_X^ = O(u^e_0), u→ 0, where e_0 = - ”·_Q^” + (”,”)_Q^ because ”=0 by construction. Since Q^ is a union of quivers of type A_∞, the quadratic form in (<ref>), which is proportional to the Cartan-Killing form of the corresponding Lie algebra, is negative definite. Therefore e_0 < 0 whenever ”≠ 0, and the u→ 0 case of (<ref>) is established. In the opposite limit we have . 1/T^1/2_ |_X^ = O(u^e_∞), u→∞ , where e_∞ = 12 _Q^(,) - 12 _Q^(',') . From (<ref>), we conclude e_∞ = 0 and the proof of (<ref>) is complete. §.§ Appendix: Bethe equations For completeness, we recall the Bethe equations, first derived in the current context by Nekrasov and Shatashvili NS1,NS2. Here we derive them formally as equations for the critical points of the integrand in (<ref>). See e.g. <cit.> for a discussion which does not explicitly involve an integral representation. Let TX = T ( T^* (,)) - ∑_i(1+ħ^-1) (V_i) be the tangent bundle of X viewed as an element of K_× G (). This is a Laurent polynomial in x_i,k and the characters of . The negative terms in it reflect the moment map equations and the quotient by G. Let the transformation be defined by ( ∑ n_i χ_i) = ∏(χ_i^1/2 - χ_i^-1/2)^ n_i , n_i ∈ , where χ_i are weights of × G. This is a homomorphism from the group algebra of the weight lattice to rational functions on a double cover of the maximal torus. The following is a restatement of a result of Nekrasov and Shatashvili NS1,NS2. The critical points in the q→ 1 asymptotics of the integral (<ref>) satisfy the following Bethe equations ( x_i,k∂/∂ x_i,k TX ) = z_i for all i∈ I and k=1,…, _i. The exact form of the right-hand side in (<ref>) depends on the shift of the variable z, which was mentioned but not made explicit in the discussion of (<ref>). In <cit.> it is explained why it is natural to use z_# = z (-ħ^1/2)^- T^1/2 in place of z in (<ref>), see also <cit.>. It is directly related to the shift by the canonical theta-characteristic in <cit.>. With this shift, the equations (<ref>) take the stated form. Note that T^1/2 = ∏_χ∈ T^1/2Xχ is a line bundle on X and hence a cocharacter of the Kähler torus. It therefore makes sense to shift the variables z by the value of this cocharacter at -ħ^1/2. Concretely, the coordinates of (<ref>) in the lattice of cocharacters are the exponents of x_i,k in (<ref>). Note that these exponents do not depend on k. Let Φ denote the term with ϕ-functions in (<ref>). We recall from <cit.> that Φ = ∏_χ∈ T^1/2Xϕ(qχ)/ϕ (ħ χ) where the product is over the weights χ in a polarization T^1/2X of (<ref>). By the definition of a polarization, we have TX = ∑_χ∈ T^1/2X(χ + 1/ħχ). Approximating a sum by a Riemann integral gives lnϕ(qχ)/ϕ (ħ χ) ∼ 1/ln q∫_1^ħ ln (1-s χ) ds/s , q→ 1. Elementary manipulations give x ∂/∂ x∫_1^ħ ln (1-s χ) ds/s = - ( x ∂/∂ x lnχ) ln 1-χ/1-ħχ = - ln( x ∂/∂ x(χ + 1/ħχ) ) + ln(-ħ^1/2) x ∂/∂ x lnχ .
Note that, summed over χ, the first term on the second line of (<ref>) gives ln( x ∂/∂ x TX ). The other exponentially large term in (<ref>) is (x,z_#), where z_# denotes the Kähler variables z shifted by (-ħ^1/2)^- T^1/2, as above. By definition, this means x_i,k∂/∂ x_i,k ln(x,z_#) = 1/ln q(ln z_i - ln(-ħ^1/2) ∑_χ x_i,k∂/∂ x_i,k lnχ) . Summing (<ref>) and (<ref>) gives ln( x_i,k∂/∂ x_i,k TX ) = ln z_i as the equations for the critical points of the function in (<ref>), as claimed.

[afo] M. Aganagic, E. Frenkel, and A. Okounkov, Quantum q-Langlands correspondence.
[ese] M. Aganagic and A. Okounkov, Elliptic stable envelopes.
[AtHirz] M. Atiyah and F. Hirzebruch, Spin-manifolds and group actions, in: Essays on Topology and Related Topics (Mémoires dédiés à Georges de Rham), Springer, New York, 1970, pp. 18–28.
[ChariPress] V. Chari and A. Pressley, A guide to quantum groups, Cambridge University Press, Cambridge, 1994.
[CKM] I. Ciocan-Fontanine, B. Kim, and D. Maulik, Stable quasimaps to GIT quotients, J. Geom. Phys. 75 (2014), 17–47.
[EKP] B. Enriquez, S. Khoroshkin, and S. Pakuliak, Weight functions and Drinfeld currents, Comm. Math. Phys. 276 (2007), no. 3, 691–725.
[EFK] P. I. Etingof, I. B. Frenkel, and A. A. Kirillov, Jr., Lectures on representation theory and Knizhnik-Zamolodchikov equations, Mathematical Surveys and Monographs 58, American Mathematical Society, Providence, RI, 1998.
[EV1] P. Etingof and A. Varchenko, Traces of intertwiners for quantum groups and difference equations. I, Duke Math. J. 104 (2000), no. 3, 391–432.
[EV] P. Etingof and A. Varchenko, Dynamical Weyl groups and applications, Adv. Math. 167 (2002), no. 1, 74–127.
[FJMM] B. Feigin, M. Jimbo, T. Miwa, and E. Mukhin, Quantum toroidal gl(1) and Bethe ansatz.
[FMTV] G. Felder, Y. Markov, V. Tarasov, and A. Varchenko, Differential equations compatible with KZ equations, Math. Phys. Anal. Geom. 3 (2000), no. 2, 139–177.
[FTV] G. Felder, V. Tarasov, and A. Varchenko, Monodromy of solutions of the elliptic quantum Knizhnik-Zamolodchikov-Bernard difference equations, Internat. J. Math. 10 (1999), no. 8, 943–975.
[FKPR] L. Frappat, S. Khoroshkin, S. Pakuliak, and É. Ragoucy, Bethe ansatz for the universal weight function, Ann. Henri Poincaré 10 (2009), no. 3, 513–548.
[FR] I. B. Frenkel and N. Yu. Reshetikhin, Quantum affine algebras and holonomic difference equations, Comm. Math. Phys. 146 (1992), no. 1, 1–60.
[GinzNak] V. Ginzburg, Lectures on Nakajima's quiver varieties, in: Geometric methods in representation theory. I, Sémin. Congr. 24, Soc. Math. France, Paris, 2012, pp. 145–219.
[JM] M. Jimbo and T. Miwa, Algebraic analysis of solvable lattice models, CBMS Regional Conference Series in Mathematics 85, American Mathematical Society, Providence, RI, 1995.
[HL] D. Halpern-Leistner, The derived category of a GIT quotient, J. Amer. Math. Soc. 28 (2015), no. 3, 871–912.
[HLMO] D. Halpern-Leistner, D. Maulik, and A. Okounkov, in preparation.
[HLS] D. Halpern-Leistner and S. Sam, Combinatorial constructions of derived equivalences.
[KP] S. Khoroshkin and S. Pakuliak, A computation of universal weight function for quantum affine algebra U_q(gl_N), J. Math. Kyoto Univ. 48 (2008), no. 2, 277–321.
[KPT] S. Khoroshkin, S. Pakuliak, and V. Tarasov, Off-shell Bethe vectors and Drinfeld currents, J. Geom. Phys. 57 (2007), no. 8, 1713–1732.
[Kr1] I. Krichever, Obstructions to the existence of S^1-actions. Bordisms of branched coverings, Izv. Akad. Nauk SSSR Ser. Mat. 40 (1976), no. 4, 828–844, 950.
[Kr2] I. Krichever, Generalized elliptic genera and Baker-Akhiezer functions, Mat. Zametki 47 (1990), no. 2, 34–45, 158; translation in Math. Notes 47 (1990), no. 1-2, 132–142.
[KR] P. P. Kulish and N. Yu. Reshetikhin, Diagonalisation of GL(N) invariant transfer matrices and quantum N-wave system (Lee model), J. Phys. A 16 (1983), no. 16, L591–L596.
[KS] P. P. Kulish and E. K. Sklyanin, Quantum spectral transform method. Recent developments, Lecture Notes in Phys. 151, Springer, Berlin-New York, 1982, pp. 61–119.
[mnop2] D. Maulik, N. Nekrasov, A. Okounkov, and R. Pandharipande, Gromov-Witten theory and Donaldson-Thomas theory. II, Compos. Math. 142 (2006), no. 5, 1286–1304.
[MO] D. Maulik and A. Okounkov, Quantum groups and quantum cohomology.
[Matsuo] A. Matsuo, Jackson integrals of Jordan-Pochhammer type and quantum Knizhnik-Zamolodchikov equations, Comm. Math. Phys. 151 (1993), no. 2, 263–273.
[mn] K. McGerty and T. Nevins, Kirwan surjectivity for quiver varieties.
[MNS] G. Moore, N. Nekrasov, and S. Shatashvili, Integrating over Higgs branches, Comm. Math. Phys. 209 (2000), no. 1, 97–121.
[Nak3] H. Nakajima, Quiver varieties and finite-dimensional representations of quantum affine algebras, J. Amer. Math. Soc. 14 (2001), no. 1, 145–238.
[NeTh] A. Neguţ, Quantum algebras and cyclic quiver varieties.
[Ninst] N. A. Nekrasov, Seiberg-Witten prepotential from instanton counting, Adv. Theor. Math. Phys. 7 (2003), no. 5, 831–864.
[NekVid1] N. Nekrasov, Bethe states as defects in gauge theories, lecture at the Simons Center for Geometry and Physics, Oct. 1, 2013; video available online.
[NekVid2] N. Nekrasov, Bethe wavefunctions from gauged linear sigma models via Bethe/gauge correspondence, lecture at the Simons Center for Geometry and Physics, Nov. 3, 2014; video available online.
[NekPrep] N. Nekrasov, in preparation.
[NS1] N. A. Nekrasov and S. L. Shatashvili, Supersymmetric vacua and Bethe ansatz, Nuclear Phys. B Proc. Suppl. 192/193 (2009), 91–112.
[NS2] N. A. Nekrasov and S. L. Shatashvili, Quantization of integrable systems and four dimensional gauge theories, in: XVIth International Congress on Mathematical Physics, World Sci. Publ., Hackensack, NJ, 2010, pp. 265–289.
[pcmi] A. Okounkov, Lectures on K-theoretic computations in enumerative geometry.
[slc] A. Okounkov, Enumerative geometry and geometric representation theory, Proceedings of the 2015 AMS Algebraic Geometry Summer Institute.
[OS] A. Okounkov and A. Smirnov, Quantum difference equations for Nakajima varieties.
[PP1] R. Pandharipande and A. Pixton, Descendents on local curves: rationality, Compos. Math. 149 (2013), no. 1, 81–124.
[PP2] R. Pandharipande and A. Pixton, Descendent theory for stable pairs on toric 3-folds, J. Math. Soc. Japan 65 (2013), no. 4, 1337–1372.
[PSZ] P. Pushkar, A. Smirnov, and A. Zeitlin, Baxter Q-operator from quantum K-theory.
[Resh] N. Reshetikhin, Jackson-type integrals, Bethe vectors, and solutions to a difference analog of the Knizhnik-Zamolodchikov system, Lett. Math. Phys. 26 (1992), no. 3, 153–165.
[RTV1] R. Rimányi, V. Tarasov, and A. Varchenko, Trigonometric weight functions as K-theoretic stable envelope maps for the cotangent bundle of a flag variety, J. Geom. Phys. 94 (2015), 81–119.
[RTV2] R. Rimányi, V. Tarasov, and A. Varchenko, Partial flag varieties, stable envelopes, and weight functions, Quantum Topol. 6 (2015), no. 2, 333–364.
[Sh] D. Shenfeld, Abelianization of stable envelopes in symplectic resolutions, PhD thesis, Princeton, 2013.
[Slav] N. A. Slavnov, The algebraic Bethe ansatz and quantum integrable systems, Uspekhi Mat. Nauk 62 (2007), no. 4(376), 91–132; translation in Russian Math. Surveys 62 (2007), no. 4, 727–766.
[S1] A. Smirnov, Polynomials associated with fixed points on the instanton moduli space.
[S2] A. Smirnov, Rationality of capped descendent vertex in K-theory, and in preparation.
[TV] V. Tarasov and A. Varchenko, Dynamical differential equations compatible with rational qKZ equations, Lett. Math. Phys. 71 (2005), no. 2, 101–108.
[TV1] V. Tarasov and A. Varchenko, Geometry of q-hypergeometric functions as a bridge between Yangians and quantum affine algebras, Invent. Math. 128 (1997), no. 3, 501–588.
[TV2] V. Tarasov and A. Varchenko, Geometry of q-hypergeometric functions, quantum affine algebras and elliptic quantum groups, Astérisque 246 (1997).
[TV3] V. Tarasov and A. Varchenko, Difference equations compatible with trigonometric KZ differential equations, Internat. Math. Res. Notices (2000), no. 15, 801–829.
[TV4] V. Tarasov and A. Varchenko, Combinatorial formulae for nested Bethe vectors, SIGMA Symmetry Integrability Geom. Methods Appl. 9 (2013), Paper 048.
[TV5] V. Tarasov and A. Varchenko, Jackson integral representations for solutions of the Knizhnik-Zamolodchikov quantum equation, Algebra i Analiz 6 (1994), no. 2, 90–137; translation in St. Petersburg Math. J. 6 (1995), no. 2, 275–313.
[email protected] — Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Apartado Postal 70-543, Ciudad de México 04510, México. [email protected] — Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Apartado Postal 70-543, Ciudad de México 04510, México; Departamento de Física, Universidad Autónoma Metropolitana - Iztapalapa, San Rafael Atlixco 186, Ciudad de México 09340, México. [email protected] — Departamento de Física, Universidad Autónoma Metropolitana - Iztapalapa, San Rafael Atlixco 186, Ciudad de México 09340, México. The polymer representations, which are partially motivated by loop quantum gravity, have been suggested as alternative schemes to quantize the matter fields. Here we apply a version of the polymer representations to the free electromagnetic field, in a reduced phase space setting, and derive the corresponding effective (i.e., semiclassical) Hamiltonian. We study the propagation of an electromagnetic pulse and we confront our theoretical results with gamma ray burst observations. This comparison reveals that the dimensionless polymer scale must be smaller than 4× 10^-35, casting doubt on the possibility that the matter fields are quantized with the polymer representation we employed.
Bounds on the polymer scale from gamma ray bursts
Saeed Rastgoo
=================================================
Loop quantum gravity (LQG) <cit.>, which is a prominent quantum gravity candidate, has inspired alternative matter quantization methods, known as polymer representations <cit.>. These alternative methods resemble LQG in that they are nonperturbative and unitarily inequivalent to the Schrödinger representation. Also, the formal way the states and the fundamental operators are expressed in the polymer representations mimics the cylindrical functions and the holonomy-flux algebra of LQG, respectively. Moreover, the polymer representations have been considered, by themselves, as interesting alternatives to the Schrödinger quantization <cit.>. Notably, most works on the polymer representations of matter fields use scalar fields or do not make contact with experimental data <cit.>. In contrast, our goal is to study the empirical consequences of applying such a quantization scheme to the free electromagnetic field in the framework of Ref. Viqar. To that end, we polymer quantize the Maxwell theory and then use well-known methods to extract the corresponding effective dynamics. As is well known, the electromagnetic field A_ν(x), ν being a spacetime index, has a U(1) gauge symmetry and, to quantize it, we utilize a reduced phase space quantization (see, for example, Ref. Hanson). Furthermore, we work in the Minkowski spacetime with a global Cartesian coordinate frame where t represents the time index and i,j are spatial indices. We fix the gauge by taking A_t=0=∂_i A^i, which can be consistently imposed when there are no sources <cit.>. In this case the action takes the form S=1/2∫ dt d^3x[∂_tA_i∂_tA^i-∂_iA_j∂^iA^j]. In this work we use a metric with signature +2 and adopt natural units, i.e., Lorentz-Heaviside units with the additional conditions c=1=ħ. To get the Hamiltonian H, we use the spacetime foliation associated with constant t hypersurfaces and denote the canonically conjugate momenta by E^i, resulting in H= 1/2∫ d^3x [ E^iE_i+∂_iA_j∂^iA^j]. The fact that no constraints arise reflects that there is no remaining gauge freedom. To properly implement the polymer quantization we turn to Fourier space.
Notice, however, that a priori we cannot assume Lorentz invariance, and thus, we do not use the standard four-dimensional Fourier transform. Instead we only perform such a transformation on the spatial coordinates. Furthermore, to have a countable number of modes, we consider the system to be in a finite box that induces an energy cutoff Λ_c. Then, the fields can be written asA_i(𝐱,t) = ∑_𝐤,rϵ^r_i[1+i/2𝒜_r(𝐤,t) + 1-i/2𝒜_r (-𝐤,t)] × e^-i𝐤·𝐱,E^i(𝐱,t) = ∑_𝐤,rϵ^ir[1+i/2ℰ_r(𝐤,t) + 1-i/2ℰ_r (-𝐤,t)] × e^-i𝐤·𝐱, where ϵ^r_i are the polarization vectors, which satisfy ϵ^r_i k^i=0, and the polarization index r runs from 1 to 2. It can be checked that 𝒜_r and ℰ_r are real, have mass dimensions 1 and 2, respectively, and are canonically conjugate, that is,{𝒜_r(𝐤,t),ℰ_s(𝐤^',t)} =Λ_c^3δ_rsδ(𝐤-𝐤^'),with all other Poisson brackets vanishing; s is another polarization index. The factor Λ_c^3 compensates for the fact that, in contrast to the Dirac delta, the Kronecker delta is dimensionless. In terms of these fields, the Hamiltonian (<ref>) becomesH= 1/2 Λ_c^3∑_𝐤,r[ℰ_r(𝐤,t)^2+|𝐤|^2𝒜_r(𝐤,t)^2],which has the form of a harmonic oscillator for each mode 𝐤 and each polarization r.We now implement the polymer representation on this classical theory in the spirit of Ref. <cit.>. We start by recalling that the Stone–von Neumann theorem <cit.> states that, for any quantum system with finitely many degrees of freedom, any weakly continuous representation of the Weyl algebra is unitarily equivalent to the standard Schrödinger representation. There are situations, however, where the weak continuity assumption is not valid and the representation of the algebra is thus inequivalent to that of Schrödinger <cit.>.Now, to obtain the elements of the polymer quantization, it is convenient to first define the Weyl algebra for each mode 𝐤. The generators of this algebra are denoted by W(𝒜_1,𝒜_2,ℰ_1,ℰ_2) and their multiplication is given byW(𝒜_1,𝒜_2,ℰ_1,ℰ_2)W(𝒜̃_1,𝒜̃_2,ℰ̃_1,ℰ̃_2)= e^i/2ΩW(𝒜_1+𝒜̃_1,𝒜_2+𝒜̃_2,ℰ_1+ℰ̃_1,ℰ_2+ℰ̃_2), where Ω= ∑_r=1,2(ℰ_r𝒜̃_r - 𝒜_rℰ̃_r)/Λ_c^3 is the symplectic form evaluated at the corresponding phase-space point. This algebra can be used to define four groups by setting all but one of the arguments of W to zero. The most relevant, for our purposes, are V_1,ℰ_1=W(0,0,ℰ_1,0), V_2,ℰ_2= W(0,0,0,ℰ_2).Should the algebra representation be weakly continuous, there would be infinitesimal generators for all four groups defined above satisfying the canonical commutation relations. In our case, which is inspired by the holonomy-flux variables used in LQG, the weak continuity condition of the Stone–von Neumann theorem is not satisfied, and thus, there are no infinitesimal generators for V_1,ℰ_1 and V_2,ℰ_2. Therefore, the fundamental operators are ℰ_1, ℰ_2, V_1,ℰ_1 and V_2,ℰ_2, which satisfy[V_r,ℰ_r, ℰ_s] = -δ_rsℰ_r V_r,ℰ_r. We now focus on one harmonic oscillator labeled by the fixed index r. The Hilbert space of such an oscillator is H^(r)_poly = L^2(ℝ̄, dμ_Bohr[𝒜_r]), where ℝ̄ is the Bohr compactification of the real line and dμ_Bohr[𝒜_r] is the corresponding measure <cit.>. Then, the wave functions can be expressed as almost periodic functionsΨ(𝒜_r) = ∑_ℰ^(n)_rΨ_ℰ^(n)_r e^-i ℰ^(n)_r 𝒜_r/Λ_c^3,with basis elements e^-iℰ^(n)_r 𝒜_r/Λ_c^3. Such wave functions can be represented by a graph with a finite, but arbitrary, number of vertices N, with the nth vertex having a “color” ℰ^(n)_r, and n=1,2,…,N.
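Concretely, this state space is easy to emulate (a plain-Python toy of ours; dictionary keys play the role of the colors): the shift operators merely relabel colors, and the inner product quoted next is a Kronecker, not a Dirac, delta.

def inner(psi, phi):
    # <psi|phi> with the Bohr measure: only coinciding colors contribute
    return sum(a.conjugate() * phi[c] for c, a in psi.items() if c in phi)

def shift(psi, eps):
    # action of the Weyl generator V_eps: every color E_n -> E_n - eps
    return {c - eps: a for c, a in psi.items()}

psi = {0.0: 1.0, 1.0: 0.5}          # a two-vertex state on the graph E_n = n*mu, mu = 1.0
print(inner(psi, psi))               # 1.25 (norm squared)
print(inner(psi, shift(psi, 1.0)))   # 0.5  (only one shifted color matches)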
Furthermore, the inner product with respect to the measure dμ_Bohr[𝒜_r] is ⟨ e^-iℰ^(n)_r 𝒜_r/ Λ_c^3|e^-i ℰ^'(m)_r 𝒜_r / Λ_c^3⟩=δ_ℰ^(n)_r, ℰ^'(m)_r. We emphasize that the right-hand side of the last equation is a Kronecker delta. Finally, the representation of the fundamental operators isℰ_r Ψ(𝒜_r) =- i Λ_c^3δ/δ𝒜_rΨ(𝒜_r),V_r,ℰ_r e^-iℰ^(n)_r 𝒜_r/Λ_c^3=e^-i(ℰ^(n)_r -ℰ_r)𝒜_r/Λ_c^3,which correctly implements the commutators (<ref>).The next step is to write the polymer quantum Hamiltonian. Our starting point is the classical Hamiltonian (<ref>) at a fixed time, so that, when promoted to an operator, it is in the Schrödinger representation. The fact that the operator 𝒜_r does not exist creates serious obstructions in representing the classical Hamiltonian, which depends on 𝒜_r^2. This difficulty can be circumvented by replacing the operators 𝒜^2_r by a combination of Weyl generators. Specifically, we consider only regular graphs[For the polymer harmonic oscillator, the dynamics superselects equidistant graphs with polymer scale μ <cit.>. Moreover, when considering all possible shifts of a regular graph, the energy spectrum has a band structure <cit.>. However, when μ is much smaller than the oscillator characteristic length, the bands' width is extremely narrow and it produces negligible physical effects.] that have equidistant values of ℰ_r, where the separation is given by a fixed, albeit arbitrary, positive parameter μ, i.e., ℰ^(n)_r=nμ. Note that we use the same μ for all polarizations and for every Fourier mode. This is a rather common assumption in this type of polymer quantization <cit.> and μ, known as the polymer scale, is thought to be of the order of the Planck scale (see, e.g., Ref. <cit.>). Concretely, we replace 𝒜_r^2 in the Hamiltonian by𝒜_r^2 →Λ_c^6/μ^2[2 - V_r,μ -V_r,-μ],where, as can be seen from Eq. (<ref>), V_r,±μ, when applied to a basis element, produces a shift in ℰ^(n)_r by ±μ. Note that, in the formal limit μ→ 0, which only exists for regular representations, the right-hand side of Eq. (<ref>) reduces to 𝒜_r^2. Under this replacement, the quantum polymer Hamiltonian associated with Eq. (<ref>) becomesH = 1/2 Λ_c^3∑_𝐤,r[ ℰ_r^2(𝐤) +(Λ_c^3|𝐤|/μ)^2 (2 - V_r,μ - V_r,-μ)]. To derive the theoretical predictions that can be compared with available empirical data, we obtain the effective polymer Hamiltonian. This procedure is somewhat technical, and it is thus described in Appendix <ref> (see also Refs. <cit.>). It turns out that such an effective Hamiltonian can be obtained by replacing𝒜_r(𝐤)^2 →(2Λ_c^3/μ)^2sin^2(μ/2Λ_c^3𝒜_r(𝐤)),in the classical action, which leads to the effective HamiltonianH_eff= 1/2 Λ_c^3∑_𝐤,r[ℰ_r(𝐤)^2+ (2Λ_c^3 |𝐤| /μ)^2sin^2(μ𝒜_r(𝐤)/2Λ_c^3)].This Hamiltonian leads to the equations of motion d𝒜_r(𝐤,t)/dt = ℰ_r(𝐤,t), dℰ_r(𝐤,t)/dt = - Λ_c^3|𝐤|^2/μsin(μ/Λ_c^3𝒜_r(𝐤,t)). Equations (<ref>) are nonlinear, making it challenging to find wave solutions and, consequently, modified dispersion relations, as is typically done when looking for quantum gravity effects (cf. Ref. <cit.>). Still, we want to find empirical bounds on μ; hence, we solve Eqs. (<ref>) perturbatively.Note that standard electromagnetism is recovered from Eqs. (<ref>) when μ→ 0, and since this theory properly describes all (classical) experiments, μ must be extremely small. We use this fact to solve Eqs. (<ref>) perturbatively where, to have a well-defined perturbative expansion, we utilize the dimensionless polymer parameter μ̃ = μ/Λ_c^2.
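As a numerical illustration of the effective equations of motion (a sketch of ours, in units where Λ_c = 1 and with arbitrarily chosen toy parameters), a single mode can be integrated and compared with the μ̃ → 0 oscillator; the polymer correction shows up as a slow, amplitude-dependent phase drift, which is precisely what the perturbative treatment below captures.

import numpy as np
from scipy.integrate import solve_ivp

k, mu, A0 = 1.0, 0.2, 1.0                  # |k|, polymer scale, initial amplitude (toy values)

def rhs(t, y):                             # effective equations of motion for one mode
    A, E = y
    return [E, -(k**2 / mu) * np.sin(mu * A)]

t = np.linspace(0.0, 50.0, 2001)
sol = solve_ivp(rhs, (t[0], t[-1]), [A0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

linear = A0 * np.cos(k * t)                # the mu -> 0 solution with the same initial data
print(np.abs(sol.y[0] - linear).max())     # secular phase drift, of order 0.1 by t = 50 here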
To obtain the perturbative equations it is convenient to first combine Eqs. (<ref>) into a single second-order equation for 𝒜_r(𝐤,t), which, when expanded in μ̃, takes the form0= ∂^2_ta_r(𝐤,t)+|𝐤|^2 a_r(𝐤,t) +μ̃^2[∂^2_tδ a_r(𝐤,t)+|𝐤|^2 δ a_r(𝐤,t)-|𝐤|^2/6Λ_c^2 a_r^3(𝐤,t)]+O(μ̃^4),where 𝒜_r(𝐤,t)=a_r(𝐤,t)+μ̃^2δ a_r(𝐤,t)+O(μ̃^4). It can be verified that the solution to Eq. (<ref>) is a_r(𝐤,t) = 𝒜_r(𝐤,0) cos(|𝐤| t)+ℰ_r(𝐤,0) /|𝐤|sin(|𝐤|t), δ a_r(𝐤,t) = |𝐤|/6Λ_c^2∫_0^t ds a^3_r(𝐤,s)[sin(|𝐤| t) cos(|𝐤| s) -cos(|𝐤| t)sin(|𝐤| s)]. We study the propagation of particular electromagnetic pulses according to Eqs. (<ref>). Such pulses have been detected in the form of gamma ray bursts (GRBs), which are high-energy electromagnetic emissions from astrophysical sources that have played important roles in various quantum gravity phenomenology scenarios (see, for example, Refs. <cit.>). We model the GRB to be created in the form of a Gaussian pulse that propagates along the x direction, oscillates transversely in the y direction, and, at t=0, is centered at the origin and around the frequency ω. Concretely, during an infinitesimal time interval around t=0, we take𝐀(𝐱,t)=ŷ a e^-σ^2(x-t)^2/2cos[ω(x-t)],where a is the pulse amplitude and σ is the Gaussian frequency width. The pulse's profile is plotted in Fig. <ref> for the particular case where σ=ω. We present the derivation of the solution 𝐀(𝐱,t) for the initial data (<ref>) in Appendix <ref>. To compare with observations, we compute the pulse speed (in the frame we use throughout the paper). To define such a speed we follow the central pulse peak since, as we mention above, there is no dispersion relation at our disposal from which we can read off a group velocity. We find this speed by using x(t)=t+μ̃^2α(t)+ O(μ̃^4) as an ansatz for the x component of the central peak world line, and we determine α(t) by the conditions that the peak is an extremum of |𝐀(𝐱,t)|, namely, that ∇|𝐀(𝐱,t)|=0, and that, at t=0, this peak is centered at the origin (see Fig. <ref>). We present the derivation of α(t) in Appendix <ref>. Then, the pulse speed is simply dx/dt and, as we also show in Appendix <ref>, the t dependence of dx/dt drops as e^-2 σ^2 t^2/3, and thus, after a small time (with respect to σ^-1) the speed stabilizes to the large-time speed v such that1-v=a^2 μ̃^2 /96 √(3)(σ^2+3 ω^2+e^-4 ω^2/3 σ^2(3 σ^2+ω^2)/σ^2 (σ^2+ω^2))+O(μ̃^4).In Fig. <ref> we plot the difference between the pulse speed dx/dt and the large-time speed v as a function of t, for the particular case where σ=ω. For t ≳ 3/ω, such a difference becomes negligible, as is also evident from this figure. Thus, given that we are interested in comparing the theoretical predictions with astrophysical observations in which the time of flight is much larger than the time scales associated with σ, we neglect the time dependence of dx/dt and take v to describe the pulse speed. Still, the pulse speed depends on its frequency, frequency width, and amplitude.
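The large-time speed formula is simple to evaluate numerically. The sketch below (ours; GeV units) inverts it for μ̃ using round numbers of the kind quoted for GRB090510 in the following paragraphs, and reproduces the order of magnitude of the bound derived there; the choice σ = ω is our simplifying assumption, matching the case plotted in the figures.

import numpy as np

def one_minus_v(a, mu_t, sigma, omega):    # the O(mu_t^2) expression above
    num = sigma**2 + 3*omega**2 + np.exp(-4*omega**2/(3*sigma**2)) * (3*sigma**2 + omega**2)
    return a**2 * mu_t**2 * num / (96*np.sqrt(3) * sigma**2 * (sigma**2 + omega**2))

a, sigma, omega = 1e28, 30.0, 30.0         # amplitude and (assumed) width ~ peak frequency, GeV
speed_limit = 3e-18                         # |1 - v| allowed by the time-of-flight data below
print(np.sqrt(speed_limit / one_minus_v(a, 1.0, sigma, omega)))   # ~ 4e-35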
Furthermore, it has been estimated that the traveling time difference for different frequencies satisfies |Δ t|< 859 ms <cit.>. Since the traveling distance is d≈ 10^28 cm, we can conclude that the speed difference is restricted by |1-v|< 3× 10^-18.These experimental results can be compared with the pulse speed prediction given by Eq. (<ref>). The result is that, for the effective theory under consideration to properly describe the propagation of such a GRB,μ̃ < 4 × 10^-35.The stringency of this bound comes, mainly, from the enormous energy released by the GRB and the large distance traveled by the light. There are studies in which more stringent limits on Δ t are set by combining several GRB observations <cit.>, which would yield stronger bounds on μ̃.To get a sense of the stringency of the condition (<ref>), we can set Λ_c^-1∼ D, where D≈ 10^10 yr is the age of the universe <cit.>. That is, the size of the box in which we put the system to have a countable number of modes is of the order of the size of the observable universe. Under this assumption we get μ < 10^-118 GeV^2=10^-156 l^2_P/G^2, where G is Newton's constant and l_P is the Planck length. In other words, with this hypothesis, μ is restricted to be at least 156 orders of magnitude below the expected scale.To summarize, we have successfully applied a polymer quantization scheme to the free electromagnetic field in a fixed gauge. We then obtained the effective Hamiltonian, which leads to a nonlinear evolution and predicts that electromagnetic pulses propagate with subluminal speeds that depend on the pulses' frequency, frequency width, and amplitude. By comparing with the GRB data, we are able to conclude that, to reconcile the theory with observations, the polymer scale μ, when divided by the cutoff scale squared, has to be smaller than 4× 10^-35. We would like to stress that although other studies <cit.> have found obstructions on alternative matter polymer representations, our analysis is the first to use physical fields, to actually connect the predictions with existing observations, and to put a bound on the polymer scale. Importantly, the strong bound we set suggests that the polymer representation that we employed may not be directly related with the presumed quantum gravity scale, and that the method under consideration may not be the way the matter fields in nature are quantized. Finally, an interesting extension of our work which could shed light on the quantum nature of spacetime itself is to study the gravitational waves in the effective polymer description, particularly since there are experimental constraints on the speed of such waves <cit.>. In addition, it would be enlightening to study the behavior of the electromagnetic constraints when the field is quantized polymerically, in which case one cannot fix the gauge at the classical level. We thank V. Husain, H. Morales-Técotl, D. Sudarsky, and D. Vergara for useful discussions. A. G. C. thanks the University of New Brunswick's Gravity Group for their feedback and hospitality. We acknowledge the support from UNAM-DGAPA-PAPIIT Grants No. IA101116 and No. IA101818 (Y. B.) and No. IN103716 (A. G. C.), CONACyT Grants No. 237351 (S. R., A. G. C.) and No. 237503 (A. G. C.), UNAM-DGAPA postdoctoral fellowship (A. G. C.), and Red FAE CONACyT. § EFFECTIVE DYNAMICS In this appendix we derive the effective dynamics for the theory under consideration. We first study the polymer amplitude and we then take the continuum limit.
At this point it is possible to extract the semiclassical action, where the replacement (<ref>) can be justified. The polymer amplitude satisfies⟨𝒜_r, f,t_f |𝒜_r, i,t_i ⟩=⟨𝒜_r, f | e^-i (t_f - t_i) H |𝒜_r, i⟩ = [ ∏^N_n=1∫^+π𝒜^c_-π𝒜^cd 𝒜_r,n/2 π𝒜^c] ∏^N+1_k=1⟨𝒜_r,k , t_k | 𝒜_r, k-1 , t_k-1⟩,where 𝒜^c=Λ^3_c/μ and the time steps ϵ = t_k - t_k-1 are infinitesimal. This calculation cannot be done using the conventional techniques since the ℰ_r take values in discrete sets, which implies that the 𝒜_r are compact and satisfy (1/2 π𝒜^c) ∫^+π𝒜^c_-π𝒜^c d 𝒜_r | 𝒜_r⟩⟨𝒜_r | = 1.As is done in Ref. <cit.>, we first compute the infinitesimal amplitude for a vanishing Hamiltonian⟨𝒜_r,k , t_k | 𝒜_r, k-1 , t_k-1⟩^(0)= ⟨𝒜_r,k| 𝒜_r, k-1⟩=2π𝒜^c ∑_n ∈ℤδ( 𝒜_r, k - 𝒜_r, k-1 - 2 π n 𝒜^c ) = 1/2∑_n ∈ℤ∫^+∞_-∞ dφ_k e^ i φ_k ( 𝒜_r,k - 𝒜_r,k-1 - 2π n 𝒜^c ) /2 𝒜^c,where φ_k are auxiliary variables. Then, to calculate ⟨𝒜_r,k , t_k | 𝒜_r, k-1 , t_k-1⟩ we follow the derivation given in chapter 2.1 of Ref. <cit.>, which calls for the amplitude (<ref>). The result is⟨𝒜_r,k , t_k | 𝒜_r, k-1 , t_k-1⟩ =∑_n_k ∈ℤ∫^+∞_-∞dφ_k/2 e^i φ_k/2 𝒜^c( 𝒜_r,k - 𝒜_r,k-1 - 2π n_k 𝒜^c ) -i ϵ H^(k),where H^(k) is the Hamiltonian (<ref>) evaluated at 𝒜_r=2𝒜^c sin( 𝒜_r,k/2𝒜^c ) and ℰ_r=μφ_k/2. Next we substitute Eq. (<ref>) in the amplitude (<ref>) and redefine the integration variables 𝒜_r,n_k→𝒜_r,n_k - 2 π n_k 𝒜^c, which leaves only one sum (more details on this last step can be found in Appendix B of Ref. <cit.>). Then, the amplitude (<ref>) takes the form⟨𝒜_r, f,t_f |𝒜_r, i ,t_i ⟩ = ∑_l [ ∏^N_n=1∫^+∞_-∞d 𝒜_r,n/2 π𝒜^c] [ ∏^N+1_k=1∫^+∞_-∞dφ_k/2] e^∑^N+1_k=1[ i φ_k ( 𝒜_r,k - 𝒜_r,k-1 - 2π l δ_k,N+1𝒜^c )/2 𝒜^c -i ϵ H^(k)] .After integrating the auxiliary variables, the right-hand side of Eq. (<ref>) becomes∑_l [ ∏^N_n=1∫^+∞_-∞d 𝒜_r,n/2 π𝒜^c] ∏^N+1_k=1√(2 π/i ϵμ^2)exp[ - ( 𝒜_r,k - 𝒜_r,k-1 - 2π l δ_k,N+1𝒜^c )^2/2i Λ^3_c (t_k - t_k-1) - 2i (t_k - t_k-1)|𝐤|^2 (𝒜^c)^2/Λ^3_csin^2( 𝒜_r,k/2 𝒜^c) ].The last step is to take the continuum limit N→∞ in Eq. (<ref>), which implies⟨𝒜_r, f,t_f |𝒜_r, i,t_i ⟩ = ∑_l ∫^𝒜_r, f + 2 π l 𝒜^c_𝒜_r,i D𝒜_r/2 π𝒜^c e^ iS_eff/ Λ^3_c,where D𝒜_r is the formal notation for the measure and the effective action isS_eff = ∫^t_f_t_i dt [ 1/2𝒜̇^2_r - |𝐤|^2/2( 2 Λ^3_c/μ)^2 sin^2( μ𝒜_r/2 Λ^3_c)].Observe that this effective action can be obtained from Eq. (<ref>) after making the replacement (<ref>). This result justifies such a replacement as a method to get the effective limit from the polymer quantum theory. We want to emphasize that our derivation was possible because the field modes are described by quantum harmonic oscillators and, in this case, there are no ambiguities; in other theories one needs to be careful when applying similar replacements.§ DETAILED PHENOMENOLOGICAL ANALYSIS Here we present the computational details of some of the results of the phenomenological part of the paper. We first focus on the expression for 𝒜_r(𝐤,t) that is a solution with the initial data (<ref>). To put these data in the form required by the solution (<ref>), we use the inverse of Eq. (<ref>) and Eq. (<ref>); the result is 𝒜_1(𝐤,0) = a Λ_c/2 σ( e^-(|𝐤| -ω )^2/2 σ^2+e^-(-|𝐤| -ω )^2/2 σ^2), 𝒜_2(𝐤,0) = 0, ℰ_r(𝐤,0) = |𝐤| 𝒜_r(𝐤,0).We then insert these initial conditions into Eqs.
(<ref>), which, after some simplifications, lead to 𝒜_2(𝐤,t)=0 and𝒜_1(𝐤,t) = a Λ_c/2 σ(e^-(|𝐤|+ω )^2/2 σ^2+e^-(-|𝐤|+ω )^2/2 σ^2) [sin (|𝐤| t)+cos (|𝐤| t)] +μ̃^2a^3 Λ_c/768 σ^3(e^-(|𝐤|+ω )^2/2 σ^2+e^-(-|𝐤|+ω )^2/2 σ^2)^3 ×{cos (3 |𝐤| t)-(12 |𝐤| t+1) cos (|𝐤| t)+[12 |𝐤| t-2 cos (2 |𝐤| t)+14]sin (|𝐤| t) }+O(μ̃^4).It can easily be verified that, at t=0, these expressions reduce to Eqs. (<ref>). Next, we use Eq. (<ref>) to derive𝐀(𝐱,t) = ŷ a e^-1/2σ^2 (t-x)^2cos [ω (t-x)] +μ̃^2ŷ a^3 /384 √(3)σ^2 e^(-9 t^2 σ^4 - x^2 σ^4 - 6 t x σ^4 - 8 ω^2)/(6σ^2) ×{3 (4 σ^2 t^2-4 σ^2 t x+7)e^4σ^2 t (t+x)/3cos[ω/3 (t-x)]+12 t ω e^4 (σ^4 t^2+σ^4 t x+ω^2)/(3 σ^2)sin [ω (t-x)] +(4 σ^2 t^2-4 σ^2 t x+7) e^4 (σ^4 t^2+σ^4 t x+ω^2)/(3 σ^2)cos [ω (t-x)]-8 e^(4 σ^4 t^2+2σ^4 t x+4 ω^2)/(3 σ^2)cos [ω(t+x)] +e^4 ω^2/(3 σ^2)cos [ω (3 t+x)]+12 t ω e^4 σ^2 t (t+x)/3sin[ω/3 (t-x)]-24 e^2 σ^2 t (2 t+x)/3cos[ω/3 (t+x)]+3 cos(t ω +x ω/3)} +O(μ̃^4).Because of the complicated form of the above expression it is hard to do a full consistency check; however, we can verify that the above equation reduces to the corresponding initial data at t=0.We now want to find the propagation speed of the central peak, which is the physical quantity we use to compare with the experimental observations. We employ the ansatz x(t)=t+μ̃^2α(t)+O(μ̃^4) for the central peak world line. The value of α(t) can be found using that the central peak is an extremum of |𝐀(𝐱,t)|, i.e., it satisfies ∇|𝐀(𝐱,t)|=0. When we take the gradient of the norm of Eq. (<ref>) and evaluate it at x(t)=t+μ̃^2α(t), we get that, at order O(μ̃^0),∇|𝐀(𝐱,t)|_μ̃=0=𝐱̂ae^-σ^2(t-x)^2/2{σ^2(t-x)cos[ω(t-x)]+ωsin[ω(t-x)]}.This last equation clearly vanishes for x(t)|_μ̃=0=t, recovering the well-known result that, according to conventional electrodynamics, pulses propagate at the speed of light. The O(μ̃^2) contribution has two parts: one from evaluating ∇|𝐀(𝐱,t)|_μ̃=0 at μ̃^2α(t), and a second from the O(μ̃^2) part of ∇|𝐀(𝐱,t)|, which is evaluated at x(t)|_μ̃=0=t. From setting the resulting expression to zero we obtainα(t) = -a^2 /1152 √(3)σ^2 (σ^2+ω^2){12t (σ^2 +3ω^2)+12 t (ω^2 +3 σ^2) e^-4 ω^2/(3 σ^2) -8 e^-2 σ^4 t^2/(3 σ^2)[3 ωsin (2 t ω )+2 σ^2 t cos (2 t ω )] -24 e^(-2 σ^4 t^2-4 ω^2)/(3 σ^2)[ ωsin(2 t ω/3)+2 σ^2 t cos(2 t ω/3)]+e^-8 σ^4 t^2/(3 σ^2)[3 ωsin (4 t ω )+4 σ^2 t cos (4 t ω )] +3 e^(-8 σ^4 t^2-4 ω^2)/(3 σ^2)[4σ^2 t cos(4 t ω/3)+3 ωsin(4 t ω/3)]}.It can be directly verified that α(0)=0, and therefore x(0)=O(μ̃^4), which ensures that we follow the central peak of the pulse and not another extremum of |𝐀(𝐱,t)|. Importantly, the fact that there exists a solution of α(t) for all t shows that, within our perturbative approach, the central peak can be traced for all times. Whether such a peak can be traced using the unperturbed dynamics given in Eqs. (<ref>) is an open question that is left to a future analytical or numerical study. Finally, the speed of the pulse's central peak isdx/dt = 1+ μ̃^2 d α(t)/dt+O(μ̃^4)= 1-μ̃^2a^2/864 √(3)σ^2 (σ^2+ω ^2){9 (σ^2+3 ω^2)+9 (ω^2 +3 σ ^2) e^-4ω ^2/(3σ ^2) +4 e^-2 σ^2 t^2/3 [12t ωσ^2 sin (2 t ω)+(4σ^4 t^2-3σ^2-9 ω^2)cos (2 t ω )]+12 e^(-2 σ ^4 t^2-4ω ^2)/(3σ ^2)[4 σ^2 t ωsin(2 t ω/3) +(4 σ ^4 t^2-3 σ ^2-ω ^2) cos(2 t ω/3)] +e^-8 σ ^4 t^2/(3σ ^2)[-24 σ^2 t ωsin (4 t ω )+(-16 σ^4 t^2+3 σ^2+9 ω^2 ) cos (4 t ω )] + e^(-8 σ ^4 t^2-4ω ^2)/(3σ ^2)[ -24 σ^2 t ωsin(4 t ω/3) +(9 σ ^2-48 σ ^4 t^2+3 ω^2)cos(4 t ω/3)]}+O(μ̃^4).Clearly, in the limit t σ≫ 1, the last four terms are exponentially suppressed, and we get the large-time speed v given in Eq.
(<ref>).§ GRB AMPLITUDE The goal of this appendix is to infer the value of the pulse amplitude a from the reported data: the pulse's frequency, frequency width, and total released energy, which has been estimated at U≈ 6× 10^55 GeV <cit.>. This part of the analysis can be done using standard electromagnetism since, in Eq. (<ref>), a is suppressed by μ̃^2, and thus, any additional μ̃ correction lies at the order we neglect. Moreover, we assume that the GRB is well described by a three-dimensional spherical Gaussian pulse (since we are only looking for an order-of-magnitude estimation, we ignore that spherically symmetric systems do not radiate). Around the emission time t=0, such a pulse can be described by𝐀(𝐱,t)=aϕ̂ e^-σ^2(r-t)^2/2cos[ω(r-t)],where we use conventional spherical coordinates r, θ, and ϕ. This field is divergence free and, importantly, a, ω, and σ play the same roles as in Eq. (<ref>).The total energy of an electromagnetic configuration is given by the Hamiltonian (<ref>) <cit.>. This total energy for the pulse under consideration [taking into account that Eq. (<ref>) is written in Cartesian coordinates], at t=0, isU =π^3/2 a^2/4 σ^3[2 ω^2 (1-e^-ω^2/σ^2) + (3+ln 4)σ^2 (1+e^-ω^2/σ^2)].Using the particular values for the GRB under consideration (see the text for the values of ω and σ; we neglect the frequency shift due to the relative speed of the source and the detector) we find a ≈ 10^28 GeV, which is the quantity we require.
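For completeness, a two-line numerical inversion of this energy formula (our sketch; GeV units, assuming σ = ω as in the figures of the main text) reproduces the quoted amplitude:

import numpy as np

U, sigma, omega = 6e55, 30.0, 30.0          # total released energy and assumed pulse parameters (GeV)
bracket = 2*omega**2*(1 - np.exp(-omega**2/sigma**2)) \
        + (3 + np.log(4))*sigma**2*(1 + np.exp(-omega**2/sigma**2))
print(np.sqrt(4*sigma**3*U / (np.pi**1.5 * bracket)))   # ~ 1.3e28, i.e. a ~ 10^28 GeV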
Nucleon Magnetic Properties from Lattice QCD with the Background Field Method Ryan Bignell, Derek Leinweber, Waseem Kamleh, Matthias Burkardt ===========================================§ INTRODUCTION The magnetic properties of the nucleon such as the magnetic moment and polarisability can be accessed using lattice QCD with the background field method <cit.>. The magnetic polarisability is a measure of the deformation of a system of charges in an external magnetic field. This deformation causes an energy shift in the particle which can be determined by the energy-field relation <cit.> E(B) = M +μ⃗B⃗ + qe B/2 M - 4 π/2 β B^2 + O(B^3). Previous studies <cit.> have faced difficulty in extracting a reliable signal for the polarisability. This is due to the polarisability being a second-order effect and the complication of the Landau levels, which cannot be easily isolated from the polarisability. The Landau levels are a series of energy levels arising from a charge or system of charges in an external magnetic field. Hence the charged proton has a Landau level that must be accounted for. In the absence of QCD, the constituent quarks would have individual Landau levels. It is an open question as to the extent to which this effect remains (if at all) in the presence of QCD interactions. § BACKGROUND FIELD METHOD To introduce a background field on the lattice, first consider the continuum case. Here the covariant derivative is modified by the addition of an electromagnetic coupling D_μ→D_μ^' = ∂_μ +g G_μ +qe A_μ, where qe is the charge on the fermion field and A_μ is the electromagnetic four-potential. Discretising this additional term in the same way as the usual gauge fields <cit.> results in the gauge links being multiplied by an exponential phase factor U_μ(x) → U_μ(x)^(B) = e^ i a qe A_μ(x) U_μ(x). Thus far the electromagnetic gauge potential has not been specified uniquely. In order to obtain a magnetic field along the ẑ axis, a potential A_x = -B y is used over the interior of the N_x × N_y × N_z × N_t lattice. The periodic boundary conditions of the lattice require a non-trivial potential to ensure that the field is uniform over the entirety of the lattice. This requirement produces a quantisation condition on the magnetic field strength q_d e B = 2 π k_d/N_x N_y a^2, where k_d is an integer governing the field strength. § SIMULATION DETAILS The calculations detailed here use 2+1 flavour dynamical QCD configurations provided by the PACS-CS collaboration <cit.> through the International Lattice Data Grid <cit.>. These lattices have dimensions 32^3 × 64 with β = 1.9 and a physical lattice spacing of a = 0.0907(13) fm. A clover fermion action and Iwasaki gauge action are used. A single value of the light-quark hopping parameter, k_ud = 0.13754, corresponding to a pion mass of m_π = 413 MeV, is used in this study. The lattice spacing for this mass was set using the Sommer scale with r_0 = 0.49 fm. The configuration ensemble size was 450. To be able to use Eq. (<ref>) to extract the polarisabilities, correlation functions at four distinct magnetic field strengths are calculated. As the u and d quarks have charges of different sign and magnitude, separate propagators at different field strengths must be calculated for each distinct field strength. These correspond to k_d = 0,± 1,± 2, ± 3, ± 4, ± 6 in Eq. (<ref>). The configurations used in this study did not include a background field when generated. Hence the only quarks which feel the presence of the external magnetic field are the valence quarks of the hadrons. To include the background field on the configurations requires separate gauge field configurations for each field strength.
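Only the U(1) phase factors change between these field choices. The following numpy sketch (our illustration, not the production code of this study) constructs them for a single (x,y) plane: A_x = -B y in the interior plus a compensating y-link on the boundary row. Every plaquette then carries the same phase exp(i a^2 qeB) precisely when the quantisation condition holds.

import numpy as np

Nx, Ny, k_d = 32, 32, 1
phi = 2*np.pi*k_d/(Nx*Ny)                      # a^2 qe B for field quantum k_d

Ux = np.exp(-1j*phi*np.arange(Ny))[None, :].repeat(Nx, 0)   # links from A_x = -B y
Uy = np.ones((Nx, Ny), complex)
Uy[:, Ny-1] = np.exp(1j*phi*Ny*np.arange(Nx))               # boundary twist at y = N_y - 1

# plaquette P(x,y) = U_x(x,y) U_y(x+1,y) U_x*(x,y+1) U_y*(x,y), with periodic wrapping
P = Ux * np.roll(Uy, -1, axis=0) * np.conj(np.roll(Ux, -1, axis=1)) * np.conj(Uy)
print(np.allclose(P, np.exp(1j*phi)))          # True: the field is uniform over the lattice

For non-integer k_d the corner plaquette at (N_x-1, N_y-1) breaks uniformity, which is the content of the quantisation condition above.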
Generating separate dynamical ensembles for each field strength in this way would be prohibitively expensive and would also destroy the advantageous correlations between the field strengths. § MAGNETIC POLARISABILITY From correlation functions calculated with a background field in place, the magnetic polarisability can be extracted. To do this, consider the energy-field relation in Eq. (<ref>). We wish to remove the μ⃗B⃗ and M terms. This can be done by using the spin dependence of the μ⃗B⃗ term and the zero-field correlator. Taking a combination of spin orientations and field strengths produces the desired result for the energy shift Δ E_p(B)= 1/2 ( E_↑(B) + E_↓(B) - E_↑(0) - E_↓(0) ) = qe B/2 M - 4 π/2 β B^2. A superior method with which to extract this energy is to take a ratio of the correlators directly. This has the advantage of allowing correlated errors to cancel prior to fitting R_p(B,t) = ( G_↓(B+,t) + G_↑(B-,t)/ G_↓(0,t) + G_↑(0,t))( G_↓(B-,t) + G_↑(B+,t) /G_↓(0,t) + G_↑(0,t)). Here the ↑ and ↓ represent spin up and down, while B± represents magnetic fields in the positive and negative ẑ directions. From this ratio, an effective energy shift can be extracted in an analogous way to an effective mass. § QUARK PROJECTION The qe B/(2M) term in Eq. (<ref>) is a Landau level term; it corresponds to the lowest lying Landau level of the hadron. The Landau levels are a superposition of energy levels <cit.> E^2 = m^2 + qe B (2 ν+1) - q e B s + p_z^2, caused by the motion of a charged particle in an external magnetic field. Here ν = 0,1,2,…, the spin parameter is s = ± 1, and p_z is the component of momentum in the ẑ direction. The charged quarks are also in an external magnetic field and in the absence of QCD would also have Landau level energies. It is possible to obtain these Landau levels using the eigenmodes of the lattice Laplacian operator for each quark in a background magnetic field. A sample of the eigenmodes for the smallest field strength is presented in Figure <ref>. It is clear from Figure <ref> that a particle at the centre of the lattice will have little overlap with the Landau levels; hence knowledge of the eigenmodes may prove advantageous when constructing quark operators on the lattice. §.§ Eigenmode Projections Eigenmodes of the lattice Laplacian operator are calculated where no QCD effects are present; only the QED background field is present. Once the eigenmodes λ_i have been obtained, the quark propagators can be projected to the eigenmodes at both the source and the sink. Projection operators P^n_QED are defined P^n_QED(x,y) = ∑_i=1^n=3 q_f k_d⟨x|λ_i⟩ ⟨λ_i|y⟩, where q_f is the fractional quark charge. The propagator is then projected at the sink using these projection operators as S(x,y) = P_QED(x,z) S(z,y). Any combination of projection operators and smearing can be used at both the source and the sink, although only a few have been investigated here. §.§ Polarisability energy shifts The energy shift due to the polarisability is smaller than that due to the magnetic moment and it also contains contributions from the Landau levels. These features make the polarisability considerably more challenging to extract than the magnetic moment. In order to extract a polarisability from the energy shifts, a relevant function must be fitted as a function of the field strength - or the field strength quanta k_d. However, it is only sensible to do this where acceptable constant fits to the energy shift at each field strength can be adequately performed. The restrictions imposed on fitting are: * Constant fits to Eq.
(<ref>) as a function of t must be acceptable; * Relevant fits to Eq. (<ref>) as a function of B must be acceptable; * Only the same fit window across all field strengths is considered. These measures help ensure that the final fits produced are free from bias due to the selection of fit window. Best results for the neutron were found using a spatially smeared source with 100 sweeps of Gaussian smearing and a QED eigenmode projected sink. For the neutron, it is clear from Figure <ref> that a quadratic-only fit is sufficient to describe the energy shift as a function of the field strength B. That is, the neutron doesn't have a Landau level energy term. This is as expected, as the neutron is a neutrally charged particle. From the fitted curves the effective charge of the hadron and the magnetic polarisability can be extracted. The quadratic-only fit results in a value of β_n = 1.31(38) × 10^-4 fm^3 for our pion mass of 413 MeV. § MAGNETIC MOMENT The magnetic moment of a system of charged particles, μ⃗, is related to the tendency of the system to become aligned with the external magnetic field. Returning to Eq. (<ref>), the magnetic moment is a first-order term in B. This results in a much larger shift in energies than the polarisability term, making magnetic moments easier to extract. Of additional use in isolating the magnetic moment energy shift is the spin and field direction dependence. By forming combinations of spin up and down correlation functions at both positive and negative field strengths, the magnetic moment term of the energy shift can be efficiently isolated. This can be done using the ratio R_m(B,t) = ( G_↓(B+,t) + G_↑(B-,t)/ G_↓(0,t) + G_↑(0,t))( G_↓(0,t) + G_↑(0,t)/G_↓(B-,t) + G_↑(B+,t)) = ( G_↓(B+,t) + G_↑(B-,t)/G_↓(B-,t) + G_↑(B+,t) ). In an analogous way to an effective mass, a magnetic moment energy shift can be found, Δ E_m(B,t)= 1/δ t log(R_m(B,t)/R_m(B,t+δ t)) = -μ B. This formulation of the energy shift has the advantage that it removes many of the correlated errors between spin orientations. In the same manner as the magnetic polarisability, magnetic moment energy shifts have constant plateaus fitted to them. This time linear or linear + cubic terms are considered. The cubic term is appropriate as it corresponds to the next lowest order term in Eq. (<ref>). From Figures <ref> and <ref> it is clear that the cubic term is necessary in order to adequately fit the energy shifts. This suggests that the third field strength in particular is becoming too large for the energy relation in Eq. (<ref>) to fully describe the system. This could be remedied by using a larger lattice volume and correspondingly smaller field strengths. The magnetic moments for the proton and neutron linear + cubic fits are shown in Table <ref>. Good agreement is seen with results from the alternative three-point function method on the same lattices and at the same pion mass <cit.>. § CONCLUSION Through the use of Landau eigenmode projectors in the sinks of the quark propagators, we have been able to observe plateaus in the correlation functions describing the magnetic polarisability for the first time. We have also examined the utility of a Landau level projector for the proton in its final state. The results are encouraging and the refinement of creation and annihilation operators is in progress. Efforts to expand this method to excited and negative parity states are desirable, as are chiral extrapolations to enable confrontation with experiment.
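As a final illustration of the ratio method underlying these results, a toy sketch with synthetic single-exponential correlators (numbers and Zeeman sign conventions ours) shows why the combination is so clean: the mass cancels exactly and the effective shift plateaus immediately, leaving only the spin splitting.

import numpy as np

M, muB, dt = 0.7, 0.02, 1.0                 # toy ground-state mass and Zeeman shift (lattice units)
t = np.arange(0.0, 20.0, dt)
G = lambda E: np.exp(-E * t)                # ideal correlator with energy E

# E_s(B) = M + s*muB with s = +1 (up), -1 (down); numerator collects the two like-shifted terms
R = (G(M - muB) + G(M - muB)) / (G(M + muB) + G(M + muB))
dE = np.log(R[:-1] / R[1:]) / dt
print(dE[:4])                               # exact plateau at -2*muB: M has cancelled entirely

With real data the interest is in how early this plateau sets in and how much of the gauge noise cancels in the correlated ratio.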
The background field method has been shown again to be a useful tool to access magnetic properties on the lattice. This research is supported by an Australian Government Research Training Program Scholarship. This work was supported with supercomputing resources provided by the Phoenix HPC service at the University of Adelaide. This research was undertaken with the assistance of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government.
FernUniversität in Hagen {Winfried.Hochstaettler, Michael.Wilhelmi}@FernUniversitaet-Hagen.de We prove the equivalence of Kantor's Conjecture and the Sticky Matroid Conjecture due to Poljak and Turzík. Dedicated to Achim Bachem on the occasion of his 70th birthday. Sticky matroids and Kantor's Conjecture Winfried Hochstättler Michael Wilhelmi December 30, 2023 ===========================================§ INTRODUCTION The purpose of this paper is to prove the equivalence of two classical conjectures from combinatorial geometry. Kantor's Conjecture <cit.> addresses the problem whether a combinatorial geometry can be embedded into a modular geometry, i.e., a direct product of projective spaces. He conjectured that for finite geometries this is always possible if all pairs of hyperplanes are modular.The other conjecture, the Sticky Matroid Conjecture (SMC) due to Poljak and Turzík <cit.>, concerns the question whether it is possible to glue two matroids together along a common part. They conjecture that a “common part” for which this is always possible, a sticky matroid, must be modular.It is well-known (see e.g. <cit.>) that modular matroids are sticky and easy to see <cit.> that modularity is necessary for ranks up to three. Bachem and Kern <cit.> proved that a rank-4 matroid that has two hyperplanes intersecting in a point is not sticky. They also stated that a matroid is not sticky if for each of its non-modular pairs there exists an extension decreasing its modular defect. The proof of this statement had a flaw which was fixed by Bonin <cit.>.Using a result of Wille <cit.> and Kantor <cit.>, this implies that the sticky matroid conjecture is true if and only if it holds in the rank-4 case.Bonin <cit.> also showed that a matroid of rank ≥ 3 with two disjoint hyperplanes is not sticky and that non-stickiness is also implied by the existence of a hyperplane and a line that do not intersect but can be made modular in an extension.We generalize Bonin's result and show that a matroid is not sticky if it has a non-modular pair that admits an extension decreasing its modular defect. Moreover, by showing the existence of the proper amalgam of two arbitrary extensions of the matroid, we prove that in the rank-4 case this condition is also necessary for a matroid not to be sticky.As a consequence, from every counterexample to Kantor's Conjecture arises a matroid that can be extended in finitely many steps to a counterexample of the (SMC), implying the equivalence of the two conjectures. A further consequence of our results is the equivalence of both conjectures to the following: In every finite non-modular matroid there exists a non-modular pair and a single-element extension decreasing its modular defect. Finally, we present an example proving that the (SMC), like Kantor's Conjecture, fails in the infinite case. We assume familiarity with matroid theory. The standard reference is <cit.>.§ OUR RESULTS Let M be a matroid with groundset E and rank function r. We define the modular defect δ(X,Y) of a pair of subsets X,Y ⊆ E as δ(X,Y)= r(X) + r(Y) - r(X ∪ Y) - r(X ∩ Y).By submodularity of the rank function, the modular defect is always non-negative. If it equals zero, we call (X,Y) a modular pair.A matroid is called modular if all pairs of flats form a modular pair.An extension of a matroid M on a groundset E is a matroid N on a groundset F ⊇ E such that M = N|E. If N_1, N_2 are extensions of a common matroid M with groundsets F_1, F_2, E resp.
such that F_1 ∩ F_2=E, then a matroid A(N_1,N_2) with groundset F_1∪ F_2 is called an amalgam of N_1 and N_2 if A(N_1,N_2)|F_i = N_i for i=1,2. If M is a modular matroid then for any pair (N_1,N_2) of extensions of M an amalgam exists. We found a proof of this result only for finite matroids (see e.g. <cit.>). We will show that it also holds for infinite matroids of finite rank. If M is a matroid such that for all pairs (N_1,N_2) of extensions of M an amalgam exists, then M is modular. The following preliminary results concerning the (SMC) are known: Let M be a matroid. * If r(M)≤ 3 then the (SMC) holds for M.* If the (SMC) holds for all rank-4 matroids, then it is true in all ranks.* Let l be a line and H a hyperplane in M such that l ∩ H = ∅. If M has an extension M' such that r_M'(cl_M'(l)∩ cl_M'(H)) =1, then M is not sticky.* If M has two disjoint hyperplanes then M is not sticky.We will generalize the last two assertions and prove: Let M be a matroid, X and Y two flats such that δ(X,Y)>0.
Since the (SMC) is reducible to the rank-4 case, it is equivalent to the conjecture that everymatroid that is OTE is already modular.For finite matroids, this is our Conjecture <ref>, which is also reducible to the rank-4 case (see the remark after the proof of Corollary <ref>).Like Kantor's Conjecture our Conjecture <ref> is no longer true in the infinite case. This will be a consequence of the following theorem, proven in Section <ref>. Every finite matroid can be extended to a (not necessarily finite) matroid of the same rank that is OTE. Starting from, say, the Vámos matroid this yields an infinite rank-4 non-modular matroid that is OTE, hence a counterexample to the (SMC) in the infinite case.Finally, Theorem <ref> will imply that any finite counterexample to Kantor's Conjecture can be embedded into a finite non-modular matroid that is OTE. In the rank-4 case any counterexample to Kantor's Conjecture this way yields a finite counterexample to the (SMC). We wil show in Corollary <ref> that Kantor's Conjecture is reducible to the rank-4 case, hence the (SMC) implies Kantor's Conjecture. It had already been observed by Faigle (see <cit.>) and was explicitely mentioned by Bonin in <cit.> that Kantor's Conjecture implies the (SMC).The latter is now immediate from Theorem <ref> and the former establishes the equivalence of the two conjectures.Kantor's Conjecture holds true if and only if the Sticky Matroid Conjecture holds true. § PROOF OF THEOREM <REF>We start with a proposition that states that the so called Escher matroid (<cit.> Fig. 1.9) is not a matroid. For easier readability we use lattice theoretic notation here, i.e. x ∨ y for (x ∪ y), x ∧ y for (x ∩ y) and x ≤ y for cl(x) ⊆ cl(y). Letl_1,l_2,l_3 be three lines in a matroidthat are pairwise coplanar but not all lying in a plane. If l_1 and l_2 intersect in a point p, then p must also be contained in l_3. By submodularity of the rank function we haver((l_1∨ l_3) ∧ (l_2∨ l_3)) ≤(l_1∨ l_3) + (l_2∨ l_3) - (l_1∨ l_2 ∨ l_3 )= 3+3-4=2.Now l_3 ∨ p ≤ (l_1∨ l_3) ∧ (l_2∨ l_3) and hence p must lie on l_3. Probably the easiest way to prove that the (SMC) holds for rank 3 is to proceed as follows. If a rank-3 matroid M is not modular, then it has a pair of disjoint lines. We consider two extensions N_1 and N_2of M such that N_1 adds to the two lines a point of intersection and N_2 erects a Vámos-cube (V_8 in <cit.>) using the disjoint lines as base points. By Proposition <ref> the amalgam of N_1 and N_2 cannot exist (see Figure <ref>).Bonin <cit.> generalized this idea to the situation of a disjoint line-hyperplane pair in matroids of arbitrary rank. We further generalize this to a non-modular pair of a hyperplane H and a flat Fthat can be made modular by a proper extension.Our first aim is to show that such a pair exists in any matroidthat is not OTE. Again, the following is immediate: Let M be a matroid, M' an extension of M and (X,Y) a modular pair of flats in M. Then (_M'(X), _M'(Y)) is a modular pair in M'.Moreover_M'(X) ∩_M'(Y) = _M'(X ∩ Y). Let M be a matroid, ℳ a modular cut in M and M' = M+_ℳp the corresponding single-element extension. 
If M' does not contain a modular pair of flats X'=X∪ p, Y'=Y ∪ p such that X and Y are a non-modular pair in M, thenℳ':={_M'(F) | F ∈ℳ}is a modular cut in M'.Let M_0 be a matroidthat is not OTE and (X,Y) be a non-modular pair of smallest modular defect δ := δ(X,Y) such that there is a single-element extension decreasing their modular defect.Then there exists a sequence M_1,…,M_δ of matroids such that M_i is a single-element extension of M_i-1 for i=1,…, δ and δ_M_i(_M_i(X),_M_i(Y))=δ -i. In particular (_M_δ(X),_M_δ(Y)) are a modular pair in M_δ. Let ℳ denote the modular cut generated by X and Y in M_0. Inductively we conclude, that by the choice of X and Yℳ_i:={_M_i(F) | F ∈ℳ}is a modular cut in M_i for i=1,…,δ-1 implying the assertion.Let M be a matroid that is not OTE.Then there exists an intersectable non-modular pair (F,H) of smallest modular defect, where F is a minimal element in the modular cut ℳ_F,H generated by H and F, and H is a hyperplane of M. Since M is not OTE, it is not modular and hence of rank at least three.Every non-modular pair of flats in a rank-3 matroid clearly satisfies the assertion. Hence we may assume(M) ≥ 4.Let (X,Y) be a non-modular intersectable pair of flats in M of smallest modular defect δ_min and chosen such that, first, X is of minimal and, second, Y of maximal rank. We claim that F=X and H=Y are as required. Let ℳ_X,Y be the modular cut generated by these two flats. Assume, contrary to the first assertion, that there exists an F ∈ℳ_X,Y withF ⊊ X.Since the principal modular cut ℳ_X ∩ Y contains X and Y, it is a supersetof the modular cut ℳ_X,Y. Hence we obtain X ∩ Y ⊆ F.Since ℳ_X,Y contains F and Y but not X ∩ Y = F ∩ Y, the pair (F,Y) is non-modular and intersectable in M(according to Proposition <ref>).Dueto submodularity ofwe have (X) + (F ∪ Y) ≥(X ∪ Y) + (F) and hence:δ(F,Y)= (F) + (Y) - (F ∪ Y) - (F ∩ Y)≤(X) + (Y) - (X ∪ Y) - (X ∩ Y) = δ(X,Y) = δ_min,contradicting the choice of X.Next we show that (X ∪ Y) = E(M). Assume to the contrary that there exists p ∈ E(M) ∖(X ∪ Y) and let Y_1 = (Y ∪ p).Then X ∩ Y = X ∩ Y_1 and hence δ(X,Y_1) = δ(X,Y). Since ℳ_X,Y_1⊆ℳ_X,Y, the pair (X,Y_1) remains intersectable, contradicting the choice of Y, and hence verifying (X ∪ Y) = E(M).Finally, assume Y is not a hyperplane.Let Y' = (Y ∪ p) with p ∈ X ∖ Y.Thenδ(X,Y')= (X) + (Y') - (X ∪ Y') - (X ∩ Y')=(X) + (Y) + 1 - (X ∪ Y) - (X ∩ Y) - 1 = δ(X,Y).Since Y is not a hyperplane and (X ∪ Y) = E(M), we must have X ∩ Y' ⊊ X, and X being minimal in ℳ_X,Y implies X ∩ Y' ∉ℳ_X,Y. Now ℳ_X,Y'⊆ℳ_X,Y yields that X ∩ Y' ∉ℳ_X,Y' and thus by Proposition <ref> the pair (X,Y') is intersectable with δ(X,Y') = δ(X,Y) = δ_min, contradicting the choice of Y. Lemmas <ref> and <ref> now imply the following: Let M be a matroidthat is not OTE.Then there exist * a non-modular pair (F,H) where H is a hyperplane of M and * an extension N of M such that (_N(F),_N(H)) is a modular pair in N.On the other hand we also have: Let M be a matroid and (F,H) a non-modular pair of disjoint flats, where H is a hyperplane of M. Then there exists an extension N of M such that for every extension N' of N, (_N'(F),_N'(H)) is not a modular pair in N'.We follow the idea from <cit.> and Bonin's proof <cit.> anderect a Vámos-type matroid above F and H. Clearly, r := _M(M) ≥ 3 and 2 ≤_M(F) ≤ r - 1.We extend M by first adding a set A of r - 1 - _M(F) elements freely to H. 
Next, we add, first,a coloop e, and then an element f freely to the resulting matroid, yielding an extension N_0 with groundset E(M) ∪ A ∪{e,f} and of rank r+1. Note, that _N_0(H) = H ∪ A. We consider the following sets: * T_1 = F ∪ A ∪ e* T_2 = H ∪ A ∪ e * B_1 = F ∪ A ∪ f * B_2 = H ∪ A ∪ fNote that (T_1,T_2), (B_1,B_2) are non-modular pairs of hyperplanes of rank r in N_0 with the same modular defectδ(T_1,T_2)= 2r -(r+1) -(r-_M(F))= _M(F)-1=δ(B_1,B_2).Any non-modular pair of hyperplanes in a matroid is intersectable because the modular cut generated by the two hyperplanes contains additionally only the groundset of the matroid and hence is non-principal (see Proposition <ref>).In the corresponding single-element extension the modular defect of the hyperplane-pair decreases by one. If this defect is still non-zero these two hyperplanes remain intersectable.Repeating this process until they become a modular pair, the modular defect of other hyperplane-pairs stays unaffected in these extensions.This way, we obtain an extension N of the matroid N_0 of rank r+1 with groundset E(N_0) ∪ P ∪ Q where P and Q are independent sets of size _M(F) - 1 such that (_N(T_1),_N(T_2)) and (_N(B_1),_N(B_2)) are modular pairs in N and P ⊆_N(T_1) ∩_N(T_2) resp. Q ⊆_N(B_1) ∩_N(B_2). We will show now that the matroid N is as required.Assume to the contrary that there existsan extension N' of N such that (_N'(F), _N'(H)) is a modular pair.As A ⊆_N'(H) and A ∩_N'(F) = ∅ we compute_N'((_N'(F) ∩_N'(H)) ∪ A) = _N'(_N'(F) ∩_N'(H))) + |A| = _N'(_N'(F)) + |A| +_N'(_N'(H)) - _N'(_N'(F) ∪_N'(H))) =_N'(_N'(F ∪ A))+ _N'(_N'(H)) -_N'(_N'(F ∪ H)) = (r - 1) + (r - 1)- r = r - 2. Let D_1= _N'(A ∪ P∪ e) and D_2= _N'(A ∪ Q ∪ f). Proposition <ref> yields _N'(_N(T_1)) ∩_N'(_N(T_2)) = _N'(_N(T_1) ∩_N(T_2)) and it holds _N'(D_1) = _N'(D_2) = r-1. We obtain _N'((_N'(F) ∩_N'(H)) ∪ D_1) ≤_N'((_N'(F ∪ D_1) ∩_N'((H ∪ D_1))= _N'(_N'(T_1) ∩_N'(T_2)) = _N'(_N(T_1) ∩_N(T_2)) = r - 1 = _N'(D_1). This implies _N'(F) ∩_N'(H) ⊆ D_1.Similarly, using B_1 and B_2 instead of T_1 and T_2, we get _N'(F) ∩_N'(H) ⊆ D_2and conclude (_N'(F) ∩_N'(H))  ∪ A ⊆ D_1 ∩ D_2. This yields_N'(D_1 ∩ D_2) ≥_N'((_N'(F) ∩_N'(H))  ∪ A) (<ref>)= r-2.From_N'(D_1 ∪ D_2) = _N'(A ∪ P ∪ Q ∪ e ∪ f)) = r+1 we finally obtain _N'(D_1) + _N'(D_2) (<ref>)= 2r - 2 < (r+1) + (r-2)(<ref>)≤_N'(D_1 ∪ D_2) + _N'(D_1 ∩ D_2) contradicting submodularity.Summarizing the two previous theorems yields the final result of this section:Let M be a matroidthat is not OTE. Then M is not sticky.By Theorem <ref>, M has a non-modular intersectable pair of flats (F,H) such that H is a hyperplane, and there exists an extension N_1 of M such that (_N_1(F), _N_1(H)) is a modular pair.Possibly contracting (F ∩ H), and referring to Lemma 7 of <cit.>, we may assume that F and H are disjoint.Thus, by Theorem <ref>, there also exists an extension N_2 of M such that in every extension N of N_2 the pair (_N(F), _N(H)) is not modular.Hence M is not sticky. § HYPERMODULARITY AND OTE MATROIDS We collect some facts about hypermodular matroids and OTE matroids that we need for the proof of Theorem <ref> and the embedding theorems in the next section. Recall that a matroid is hypermodular if any pair of hyperplanes intersects in a coline. Modular matroids are hypermodular and hypermodular matroids of rank at most 3 must be modular. 
Thus, a contraction of a hypermodular matroid of rank n by a flat of rank n-3 is a modular matroid of rank 3.Every projective geometry P(n,q) is hypermodular and remains hypermodular if we delete up to q - 3 of its points.In the following we will focus on the case of hypermodular matroids of rank 4. Let M be a hypermodular rank-4 matroid. If M contains a disjoint line and hyperplane, then M also contains two disjoint coplanar lines. The same holds for a modular cut in M.Let (l_1,e_1) be a disjoint line-plane pair in M. Take a point p in e_1. Because of hypermodularity, the plane l_1p intersects the plane e_1 in a line l_2 in M. The lines l_1 and l_2 are coplanar and disjoint.If now l_1 and e_1 are elements of a modular cut ℳ in M then it holds also l_2 ∈ℳ. The next results are matroidal versions of similar results of Klaus Metsch (see <cit.>) for linear spaces. Let M be a hypermodular matroid of rank 4 on a groundset E. Let l_1, l_2 be two disjoint coplanar lines. Then E can be partitioned into l_1,l_2 and lines that are coplanar with l_1 and with l_2. The modular cut ℳ generated by l_1 and l_2 always contains such a line-partition of E. We set e=(l_1 ∪ l_2). Then l_p:=(l_1 ∨ p) ∧ (l_2 ∨ p) is a line for every p ∈ E ∖ e and coplanar to l_1 and l_2. By Proposition <ref> it must be disjoint from l_1 and l_2 and from e.This together with Proposition <ref> implies that for q ∈ E ∖ e with pq we must have either l_p ∧ l_q= 0 or l_p=l_q= p ∨ q.We denote the set of lines constructed this way by Δ.Now we choose a line l_p^*∈Δ and for each r ∈ e ∖ (l_1 ∪ l_2) we get a line l_r = e ∧ (l_p^*∨ r).Let Σ be the set of lines obtained in that way. It is clear that Σ is a line partition of e ∖ (l_1 ∪ l_2).Again, Proposition <ref> implies that these lines must be pairwise disjoint and disjoint from l_1,l_2, l_p^* and all lines l_q ∈Δ. Now, the set Γ = l_1 ∪ l_2 ∪Σ∪Δ is the desired set of lines partitioning E. Obviously, it holds Γ⊆ℳ. A non-trivial and non-principal modular cut in a matroid always contains a non-modular pair of flats. Proposition <ref> implies, that in a hypermodular rank-4 matroid it even must contain two disjoint coplanar lines. By Lemma <ref> we, thus, get a set of pairwise disjoint lines that partition the ground set.Moreover we have:* Under the assumptions of Lemma <ref> the following two statements are equivalent: * There exists a single-element extension M' where _M'(l_1) and _M'(l_2) intersect. * The modular cut generated by l_1 and l_2 in M contains a set of pairwise coplanar lines, l_1 and l_2 among them, partitioning the groundset E(M). * If a single-element extension M' as in (i) exists, then the restriction to M of anyline in M' is a line.* If there is no single-element extension as in (i), the matroid M contains two non-coplanar lines l_3,l_4 such that l_i and l_j are coplanar for all i ∈{1,2} and j ∈{3,4} and no three of them are coplanar, i.e., it has the Vámos matroid containing l_1 and l_2 as a restriction.(i) By Lemma <ref> the modular cut ℳ generated by l_1 and l_2 contains a set of lines partitioning the groundset E.Since any two of these lines intersect in the extension M' in the new point, they must be coplanar.On the other hand, if we have a set Γ of pairwise coplanar lines partitioning the groundset E, l_1 and l_2 among them, these lines must form the minimal elements of a modular cut. 
This is seen as follows. Consider the set ℳ of flats in M which are elements or supersets of elements of Γ. Any two lines of ℳ are disjoint and coplanar, hence they do not form a modular pair. For p∈ E let l_p denote the line in Γ containing p and let h ∈ℳ be a hyperplane containing p. Then h contains l_p or some other line l that is coplanar with l_p. Since in the second case l_p ≤ l ∨ p ≤ h, we always have l_p ≤ h. Let h_1 ≠ h_2 be two hyperplanes in ℳ, let l=h_1 ∧ h_2 and let p ≠ q be two points on l. Then l_p ≤ h_i and l_q ≤ h_i for i ∈{1,2}, implying l_p=l_q=l. Finally, consider a hyperplane h and a line l. If they are a modular pair then they must intersect in a point r, hence l=l_r and l ≤ h. Thus ℳ is a modular cut defining a single-element extension where l_1 and l_2 intersect. (ii) Let p denote the new point and l a line containing p. Let q be another point on l. Then q is contained in a line l_q in M of the partition of E(M) in lines. In M' we obtain {p,q}⊆ cl_M'(l_q). Since {p,q}⊆ l we obtain cl_M'(l_q) = l, hence the restriction of l to M is the line l_q. (iii) Let Γ = l_1 ∪ l_2 ∪Σ∪Δ be the line-partition of the groundset E from the proof of Lemma <ref>. By (i) there exist l_3 and l_4 in Γ∖{l_1,l_2} that are not coplanar and hence l_3 ∪ l_4 ⊈ cl(l_1 ∪ l_2) = e. If l_3,l_4 ∈Δ we are done, hence we may assume that l_3=l_r ∈Σ and l_4=l_q ∈Δ where l_q = (l_1 ∨ q) ∧ (l_2 ∨ q) and l_r = e ∧ (l_p^*∨ r), as in the proof of Lemma <ref>. Since l_p^∗ and l_3 are coplanar we conclude l_p^∗ ≠ l_4. If l_p^∗ and l_4 are not coplanar, we replace l_3 by l_p^∗ and are done. Hence we may assume that they are coplanar. The hyperplanes l_4 ∨ r and l_p^∗∨ r intersect in the line l_3'=(l_4 ∨ r) ∧ (l_p^∗∨ r). Assuming l_3' ≤ e yields l_3'=(l_4 ∨ r) ∧ (l_p^∗∨ r)∧(l_1 ∨ l_2)=l_r=l_3, contradicting l_3 and l_4 being not coplanar. Hence l_3' intersects e only in r. Furthermore, by Proposition <ref>, l_3' must be disjoint from l_p^∗ and l_4. Choose p' on l_3' but not on e and define l_3”:=l_p'∈Δ. We claim that l_p' must be noncoplanar with at least one of l_p^∗ or l_4. Otherwise, we would have l_3”=(l_p^∗∨ l_p') ∧ (l_4∨ l_p')= (l_p^∗∨ p') ∧ (l_4∨ p')=(l_p^∗∨ l_3') ∧ (l_4∨ l_3')=l_3', which is impossible since l_3”∈Δ is disjoint from e. The absence of a configuration as in Theorem <ref> (iii) is called the bundle condition in the literature. A matroid M of rank at least 4 satisfies the bundle condition if for any four disjoint lines l_1, l_2, l_3, l_4 of M, no three of them coplanar, the following holds: If five of the six pairs (l_i,l_j) are coplanar, then all pairs are coplanar. Since a non-modular pair of hyperplanes together with the entire groundset always forms a modular cut that is not principal, OTE matroids must be hypermodular. Hence, Theorem <ref> has the following corollary: Let M be an OTE matroid of rank 4. If the bundle-condition in M holds, then M is modular. Let M be a rank-4 OTE-matroid that is not modular. Then, because M is hypermodular and because of Proposition <ref>, it contains two disjoint coplanar lines. From Theorem <ref> (iii) it follows that the bundle-condition does not hold in M. § EMBEDDING THEOREMS With these results, we can prove a first embedding theorem. Assertion (iii) is a result of Kahn <cit.>. Let M be a hypermodular rank-4 matroid with a finite or countably infinite groundset. Then M is embeddable in an OTE matroid M̅ of rank 4 where the restriction of any line of M̅ is a line in M.
Furthermore: * M̅ is finite if and only if M is finite. * The simplification of M̅/p is isomorphic to the simplification of M/p for every p ∈ E(M). * If M fulfills the bundle condition then M̅ is modular. Let P be a list of all disjoint coplanar pairs of lines of M. Clearly, P is finite or countably infinite. We inductively define a chain of matroids M = M_0, M_1, M_2, … as follows: Let M_0 = M, and suppose M_i-1 has already been defined for some i ∈ ℕ. Let l_i1 and l_i2 denote the pair of disjoint lines in the list at index i. If l_i1 and l_i2 are not intersectable in the matroid M_i-1, set M_i = M_i-1. Otherwise, let M_i be the single-element extension of M_i-1 corresponding to the modular cut ℳ_i-1 generated by l_i1 and l_i2 in M_i-1. By Theorem <ref> (ii), the restriction of a line in M_i+1 is a line in M_i and hence is also a line in M. As a consequence, the restriction of a plane in M_i+1 is also a plane in M, hence two planes in M_i+1 intersect in a line. Thus all matroids M_i are hypermodular of rank 4. Now let M̅ be the set system (E,ℐ) where ℐ ⊆ 𝒫(E), E = ⋃_i=0^∞ E(M_i) and I ∈ ℐ if and only if I is independent in some M_i. Clearly, ℐ satisfies the independence axioms of matroid theory. We call M̅ the union of the chain of matroids. The matroid M̅ is hypermodular of rank 4 and has no new lines as well. Assume there were a modular cut ℳ in M̅ that is not principal. By Proposition <ref> it contains a pair of disjoint coplanar lines. The restriction of this pair to M is on the list, say with index i. The modular cut ℳ_i-1 generated by these two lines in M_i-1 must contain cl_M_i-1(∅), otherwise the lines would intersect in M_i, hence also in M̅. Since {cl_M̅(X) | X ∈ ℳ_i-1} ⊆ ℳ we must also have cl_M̅(∅) ∈ ℳ, a contradiction to ℳ not being principal. Thus, M̅ is OTE. If M is finite, so is the list P and hence M̅, proving (i). It suffices to show that for every point p ∈ E(M), every point q ∈ E(M̅) ∖ (E(M) ∪ p) is parallel in M̅/p to a point of M/p. As the restriction to M of the line spanned by p and q in M̅ is a line in M, it contains a point different from p, and (ii) follows. Finally, (iii) is Corollary <ref>. This embedding theorem has the following corollary: Kantor's conjecture is reducible to the rank-4 case. Assume Kantor's conjecture holds for rank-4 matroids. Let M be a finite hypermodular matroid of rank n > 4. All contractions of M by a flat of rank n-4 are finite hypermodular matroids of rank 4, hence are embeddable into a modular matroid. Using Theorem <ref>, it is easy to see that these contractions are also strongly embeddable (as defined in <cit.>, Definition 2) into a modular matroid. Hence the matroid M satisfies the assumptions of Theorem 2 in <cit.>, and thus is embeddable into a modular matroid, implying the general case of Kantor's Conjecture. Similarly, it is easy to show that our Conjecture <ref> is reducible to the rank-4 case. We have a second embedding theorem: Let M be a matroid of finite rank on a set E where E is finite or countably infinite. Then M is embeddable in an OTE matroid of the same rank. We proceed similarly to the proof of Theorem <ref>. Let P be the list of all intersectable non-modular pairs of M. We build a chain of matroids M = M_0, M_1, …, where each matroid M_i+1 is the extension of M_i in which the modular defect of the i-th pair on the list can no longer be decreased. Let M̅ be the union of the extension chain as in the proof before.
Then M̅ is a matroid of finite rank with a finite or countably infinite ground set. If there still are intersectable non-modular pairs in M̅ we repeat the process and obtain M̅_1. This yields a chain of matroids M̅, M̅_1, M̅_2, …. Let M̿ be the union of that extension chain. Clearly, M̿ is a matroid. We claim it is OTE. For assume it had a non-trivial modular cut generated by a non-modular pair of intersectable flats f_1, f_2. Since their rank is finite, there exists an index k such that the matroid M̅_k contains a basis of f_1 as well as of f_2. But then in the matroid M̅_k+1 the pair would not be intersectable anymore and we get a contradiction. Thus, M̿ is an OTE matroid. We have a similar result for hypermodular matroids: Every matroid M of finite rank r with finite or countably infinite groundset is embeddable in an infinite hypermodular matroid M̅ of rank r. The proof mimics the one of Theorem <ref>, except that we have only the non-modular pairs of hyperplanes in the list. This generalizes the technique of free closure of rank-3 matroids, and it is not difficult to show (see e.g. Kantor <cit.>, Example 5) that if M is non-modular (hence r ≥ 3), every contraction of M̅ by a flat of rank r-3 in M̅ is an infinite projective non-Desarguesian plane, and hence M̅ must be infinite, too. § ON THE NON-EXISTENCE OF CERTAIN MODULAR PAIRS IN EXTENSIONS OF OTE MATROIDS In order to prove that the proper amalgam exists for any two extensions of a finite rank-4 OTE matroid we need some technical lemmas. We will show that certain modular pairs cannot exist in extensions of rank-4 OTE matroids. We need some preparations for that. Let M be a matroid with groundset T, let (X,Y) be a modular pair of subsets of T and let Z ⊆ X ∖ Y. Then (X ∖ Z, Y) is a modular pair, too. Submodularity implies r(X ∪ Y) - r(X) ≤ r((X ∖ Z) ∪ Y) - r(X ∖ Z). Using modularity of (X,Y) we find r(X ∖ Z) + r(Y) = r(X ∪ Y) + r((X ∖ Z) ∩ Y) - r(X) + r(X ∖ Z) ≤ r((X ∖ Z) ∪ Y) + r((X ∖ Z) ∩ Y), and another application of submodularity implies the assertion. By (D) we abbreviate the following list of assumptions: * M is a matroid with groundset T and rank function r. * M' is an extension of M with rank function r' and groundset E'. * X and Y are subsets of E' such that X ∩ T = l_X and Y ∩ T = l_Y are two disjoint coplanar lines in M. * X ∩ Y is a flat in M'. Assume (D) and, furthermore, that X ∖ T ⊆ Y and that (X,Y) is a modular pair of sets in M'. Then x ∉ cl_M'(Y) for all x ∈ l_X. Assume to the contrary that there exists x ∈ l_X with x ∈ cl_M'(Y). Then coplanarity of l_X and l_Y implies X ∩ T = l_X ⊆ l_X ∨ l_Y = x ∨ l_Y ⊆ cl_M'(Y). Hence X ⊆ cl_M'(Y), implying r'(Y) = r'(X ∪ Y), and modularity of (X,Y) yields r'(X) = r'(X ∩ Y), a contradiction, because X ∩ Y is a flat in M' and a proper subset of X. Assume (D) and that M is of rank 4 (the rank of M' may be larger) and, furthermore, * (X,Y) is a modular pair of sets in M' with X ∖ T ⊆ Y and T ⊈ cl_M'(X ∪ Y) and * l' ⊆ T is a line disjoint from and coplanar to l_X and l_Y, not lying in l_X ∨ l_Y. Then X' = (X ∖ T) ∪ l' implies r'(X') = r'(X). Choose x ∈ l_X and x' ∈ l' = X' ∩ T. Because l_X and l_Y are coplanar and X ∖ T ⊆ Y we conclude cl_M'(x ∪ Y) = cl_M'(X ∪ Y). Similarly, we get cl_M'(x' ∪ Y) = cl_M'(X' ∪ Y). By assumption M, being of rank 4, is spanned by l', l_X and l_Y, and hence T ⊆ cl_M'({x, x'} ∪ Y). If we had x' ∈ cl_M'(x ∪ Y), then this would imply that T ⊆ cl_M'(x ∪ Y) = cl_M'(X ∪ Y), contradicting the assumptions; thus x' ∉ cl_M'(x ∪ Y). In particular x' ∉ cl_M'(X). Proposition <ref> yields x ∉ cl_M'(Y).
If we had x ∈ cl_M'(x' ∪ Y), then using the exchange axiom of the closure operator we would find x' ∈ cl_M'(x ∪ Y), which is impossible. Hence we obtain x ∉ cl_M'(x' ∪ Y) = cl_M'(X' ∪ Y). In particular x ∉ cl_M'(X'). The choice of x and x' implies cl_M(l_X ∪ x') = cl_M(l' ∪ x), and using X ∖ T = X' ∖ T we obtain cl_M'(X ∪ x') = cl_M'(X' ∪ x). We conclude r'(X') + 1 = r'(X' ∪ x) = r'(X ∪ x') = r'(X) + 1, hence r'(X') = r'(X). Assume (D), M is a rank-4 OTE matroid and X ∖ T ⊆ Y, Y ∖ T ⊆ X and T ⊈ cl_M'(X ∪ Y). Then (X,Y) is not a modular pair in M'. OTE matroids are hypermodular, hence M is hypermodular, OTE and of rank 4. By Theorem <ref> (iii), it has two lines l_1 and l_2 that span M but are both disjoint from and coplanar to l_X and l_Y, and disjoint from l_X ∨ l_Y. Assume that (X,Y) were a modular pair in M'. Let X' = (X ∖ T) ∪ l_1 and Y' = (Y ∖ T) ∪ l_2. Then by Lemma <ref>, r'(X') = r'(X) and r'(Y') = r'(Y). Since T ⊆ cl_M(l_1 ∪ l_2) ⊆ cl_M'(X' ∪ Y') and T ⊈ cl_M'(X ∪ Y) we get r'(X ∪ Y) < r'(X ∪ Y ∪ T) = r'(X' ∪ Y' ∪ T) = r'(X' ∪ Y'). By definition X' ∩ Y' = (X ∖ T) ∩ (Y ∖ T) = X ∩ Y, and hence by submodularity r'(X ∪ Y) + r'(X ∩ Y) < r'(X' ∪ Y') + r'(X' ∩ Y') (by (<ref>)) ≤ r'(X') + r'(Y') = r'(X) + r'(Y) (by (<ref>)), contradicting (X,Y) being a modular pair. We come to the main result of this section. Let M be a rank-4 OTE matroid with groundset T and M' an extension of M with ground set E'. Let X, Y ⊆ E' be sets such that X ∩ Y is a flat in M' and the restrictions l_X = X ∩ T and l_Y = Y ∩ T are disjoint coplanar lines in M. If T ⊈ cl_M'(X ∪ Y) then (X, Y) is not a modular pair in M'. Assume to the contrary that (X,Y) were a modular pair in M'. Let X' = (X ∩ T) ∪ (X ∩ Y) and Y' = (Y ∩ T) ∪ (X ∩ Y). Applying Proposition <ref> twice, we find that the pair (X',Y') is modular in M', too, and satisfies the assumptions of Lemma <ref>, yielding the required contradiction. By contraposition we get: Let M be a rank-4 OTE matroid with groundset T and M' an extension of M. Let (X,Y) be a modular pair of flats in M' such that (X ∩ T, Y ∩ T) is a non-modular pair in M. Then T ⊆ cl_M'(X ∪ Y). Regarding the case that (X ∩ T, Y ∩ T) is a disjoint line-plane pair, we show the following. Let M be a rank-4 OTE matroid with groundset T and rank function r, and let M' be an extension of M with groundset E' and rank function r'. Assume that X, Y ⊆ E' are sets such that X ∩ T = e_X is a plane, Y ∩ T = l_Y a line disjoint from e_X in M, and that X ∩ Y is a flat in M'. Assume that there exists a line l_X ⊆ e_X coplanar with l_Y such that r'((X ∩ Y) ∪ e_X) = r'((X ∩ Y) ∪ l_X) + 1. Then (X, Y) is not a modular pair in M'. Assume, for a contradiction, that (X,Y) were a modular pair in M' and let X' = (X ∩ Y) ∪ e_X. Since X' = X ∖ Z with Z = X ∖ (Y ∪ e_X) ⊆ X ∖ Y, we find by Proposition <ref> that (X', Y) is a modular pair in M', too. Let X'' = (X ∩ Y) ∪ l_X. By assumption r'(X') = r'(X'') + 1, and X'' ∩ T is a line disjoint from and coplanar to l_Y. Moreover X'' ∩ Y = X ∩ Y, thus X'' ∩ Y is a flat in M'. Furthermore, submodularity implies r'(X' ∪ Y) ≤ r'(X'' ∪ Y) + 1. Because (X',Y) is a modular pair we obtain: r'(X'' ∪ Y) + 1 + r'(X'' ∩ Y) ≥ r'(X' ∪ Y) + r'(X' ∩ Y) = r'(X') + r'(Y) = r'(X'') + 1 + r'(Y), and again submodularity of r' implies that equality must hold throughout. Hence (X'',Y) is a modular pair and r'(X'' ∪ Y) + 1 = r'(X' ∪ Y) = r'(X' ∪ Y ∪ T) = r'(X'' ∪ Y ∪ T), implying T ⊈ cl_M'(X'' ∪ Y). The pair (X'',Y) now contradicts Theorem <ref>. § THE PROPER AMALGAM We prove Theorem <ref> by constructing the proper amalgam of two given extensions of a rank-4 OTE matroid.
In this section we define this amalgam and we analyse some of its properties. Throughout, if not mentioned otherwise, we assume the following situation. Let M be a matroid with groundset T and rank function r, and let M_1 and M_2 be extensions of M with groundsets E_1 resp. E_2 and rank functions r_1 resp. r_2, where E_1 ∩ E_2 = T and E_1 ∪ E_2 = E. All matroids are of finite rank with finite or countably infinite ground set. We define two functions η: 𝒫(E) → ℤ and ξ: 𝒫(E) → ℤ by η(X) = r_1(X ∩ E_1) + r_2(X ∩ E_2) - r(X ∩ T) and ξ(X) = min{η(Y) : Y ⊇ X}. The following is immediate: The function ξ is subcardinal, finite and monotone. That is, R1: 0 ≤ ξ(X) ≤ |X| for all X ⊆ E. R1a: For all X ⊆ E there exists an X' ⊆ X, |X'| < ∞, such that ξ(X) = ξ(X'). R2: For all X_1 ⊆ X_2 ⊆ E we have ξ(X_1) ≤ ξ(X_2). Moreover, ξ(X) ≤ η(X) for all X ⊆ E. If ξ is submodular on 𝒫(E), then ξ is the rank function of an amalgam of M_1 and M_2 along M (see e.g. <cit.>, Proposition 11.4.2). This amalgam, if it exists, is called the proper amalgam of M_1 and M_2 along M. Now let ℒ(M_1,M_2) be the set of all subsets X of E such that X ∩ E_1 and X ∩ E_2 are flats in M_1 resp. M_2. Then it is easy to see that ℒ(M_1,M_2) with the inclusion ordering is a complete lattice of subsets of E. Let ∧_ℒ and ∨_ℒ be the meet resp. the join of this lattice. Clearly, for two sets X, Y ∈ ℒ(M_1,M_2) we have X ∧_ℒ Y = X ∩ Y and X ∨_ℒ Y ⊇ X ∪ Y. We need two results from <cit.>. For all X ⊆ E, ξ(X) = min{η(Y) : Y ∈ ℒ(M_1,M_2) and Y ⊇ X}. Let Y ⊆ E and let Z be the smallest element of ℒ(M_1,M_2) containing Y; then η(Z) ≤ η(Y) holds. The proof of Lemma 11.4.6 in <cit.> must be slightly modified in the end in order to make it work for matroids of finite rank but infinite groundset as well. As in <cit.>, for all X ⊆ E we define ϕ_1(X) = cl_1(X ∩ E_1) ∪ (X ∩ E_2) and ϕ_2(X) = (X ∩ E_1) ∪ cl_2(X ∩ E_2). Following <cit.> we derive η(ϕ_i(X)) ≤ η(X) for all X ⊆ E and i = 1,2. Now let Z be the minimal element in ℒ(M_1,M_2) such that Y ⊆ Z, and choose Y ⊆ W ⊆ Z maximal with η(W) ≤ η(Y). From Y ⊆ W ⊆ ϕ_i(W) ⊆ Z and η(ϕ_i(W)) ≤ η(W) it follows that ϕ_i(W) = W for i = 1,2, and hence W = Z ∈ ℒ(M_1,M_2), and Lemma <ref> follows, also implying Lemma <ref>. Note that the proof of this lemma and part (R1a) of Proposition <ref> imply that Theorem <ref> holds for infinite matroids of finite rank as well. Now we generalize a result of Ingleton (cf. <cit.>, Theorem 11.4.7): Assume that for any pair (X,Y) of sets of ℒ(M_1,M_2) the inequality defining submodularity is satisfied for at least one of η or ξ. Then ξ is submodular on 𝒫(E) and the proper amalgam of M_1 and M_2 along M exists. Let X_1, X_2 ⊆ E. By Lemma <ref> we find Y_i ∈ ℒ(M_1,M_2) such that X_i ⊆ Y_i and ξ(X_i) = η(Y_i) for i = 1,2. From η(Y_i) = ξ(X_i) ≤ ξ(Y_i) ≤ η(Y_i) we conclude that ξ(X_i) = ξ(Y_i) = η(Y_i). By assumption either η or ξ or both are submodular on the pair of flats (Y_1,Y_2). Furthermore, X_1 ∩ X_2 ⊆ Y_1 ∩ Y_2 = Y_1 ∧_ℒ Y_2 and X_1 ∪ X_2 ⊆ Y_1 ∪ Y_2 ⊆ Y_1 ∨_ℒ Y_2. Hence, by Proposition <ref>, ξ(X_1 ∩ X_2) + ξ(X_1 ∪ X_2) ≤ ξ(Y_1 ∧_ℒ Y_2) + ξ(Y_1 ∨_ℒ Y_2). Thus, if η is submodular on (Y_1,Y_2), ξ(X_1 ∩ X_2) + ξ(X_1 ∪ X_2) ≤ η(Y_1 ∧_ℒ Y_2) + η(Y_1 ∨_ℒ Y_2) ≤ η(Y_1) + η(Y_2) = ξ(X_1) + ξ(X_2), and otherwise ξ(X_1 ∩ X_2) + ξ(X_1 ∪ X_2) ≤ ξ(Y_1) + ξ(Y_2) = ξ(X_1) + ξ(X_2). Hence ξ is submodular on 𝒫(E) and the proper amalgam exists. Lemma <ref> immediately yields: If X, Y are in ℒ(M_1,M_2), then η(X ∪ Y) ≥ η(X ∨_ℒ Y). Moreover, we have ξ(X ∪ Y) = ξ(X ∨_ℒ Y). We finish this section with a small lemma. Additionally to the assumptions from the second paragraph of this section, let M be of rank 4.
Let X ∈ ℒ(M_1,M_2) with r(X ∩ T) ≥ 2. Then ξ(X) = η(X). Assume there exists Y ⊇ X such that ξ(X) = η(Y) < η(X). Then r(Y ∩ T) > r(X ∩ T). Hence there exists an element t ∈ (Y ∩ T) ∖ X, and because X ∩ E_1, X ∩ E_2 and X ∩ T are flats we get η(X ∪ t) = r_1((X ∪ t) ∩ E_1) + r_2((X ∪ t) ∩ E_2) - r((X ∪ t) ∩ T) = r_1(X ∩ E_1) + 1 + r_2(X ∩ E_2) + 1 - r(X ∩ T) - 1 = η(X) + 1. But since M is of rank 4 and r((X ∪ t) ∩ T) ≥ 3, the decrease of η on supersets of X ∪ t is bounded by 1, and thus η(Y) ≥ η(X ∪ t) - 1 = η(X), a contradiction. § PROOF OF THEOREM <REF> Our proof of Theorem <ref> may be considered as a generalization of the proof of Proposition 11.4.9 in <cit.>. Oxley refers to unpublished results of A.W. Ingleton. We start with a lemma. Let M be a rank-4 OTE matroid with ground set T. Let M_1 and M_2 be two extensions of M with ground sets E_1, E_2 and rank functions r_1, r_2. Let E_1 ∩ E_2 = T and E_1 ∪ E_2 = E, and let η, ξ and ℒ(M_1,M_2) be defined as in Section <ref>. Let (X,Y) be a pair of elements of ℒ(M_1,M_2) that violates the submodularity of η. Then: * η(X) + η(Y) - η(X ∩ Y) - η(X ∪ Y) = δ(X ∩ E_1, Y ∩ E_1) + δ(X ∩ E_2, Y ∩ E_2) - δ(X ∩ T, Y ∩ T) = -1. * (X ∩ E_i, Y ∩ E_i) is a modular pair in M_i for i = 1,2. * (X ∩ T, Y ∩ T) are two disjoint coplanar lines or a disjoint line-plane pair in M. * η(X) = ξ(X) and η(Y) = ξ(Y). For part (i), a straightforward computation yields the first equality. The second one follows from the fact that OTE matroids are hypermodular and that the modular defect in a hypermodular rank-4 matroid is bounded by 1. Parts (ii) and (iii) are immediate from (i), and part (iv) follows from Lemma <ref>. Under the assumptions of Lemma <ref>, let (X,Y) be a pair of elements of ℒ(M_1,M_2) such that the submodularity of η in ℒ(M_1,M_2) is violated, and either ξ(X ∪ Y) < η(X ∪ Y) or ξ(X ∩ Y) < η(X ∩ Y). Then ξ is submodular for (X,Y) in ℒ(M_1,M_2). Recall that ξ(X ∪ Y) ≤ η(X ∪ Y) and ξ(X ∩ Y) ≤ η(X ∩ Y), and that ξ(X ∩ Y) = ξ(X ∧_ℒ Y) as well as ξ(X ∪ Y) = ξ(X ∨_ℒ Y) by Lemma <ref>. Moreover, by Lemma <ref> (iv), η(X) = ξ(X) and η(Y) = ξ(Y). Altogether this implies ξ(X) + ξ(Y) - ξ(X ∧_ℒ Y) - ξ(X ∨_ℒ Y) = ξ(X) + ξ(Y) - ξ(X ∩ Y) - ξ(X ∪ Y) > η(X) + η(Y) - η(X ∩ Y) - η(X ∪ Y) = -1, proving the assertion. We are now ready to tackle the proof of Theorem <ref>, which is an immediate consequence of the following: Let M be a rank-4 OTE matroid. Then for any pair of extensions of M the proper amalgam exists. Let T denote the ground set of M and let M_1, M_2 be two extensions of M with ground sets E_1, E_2 and rank functions r_1, r_2, such that E_1 ∩ E_2 = T and E_1 ∪ E_2 = E. We show that for these two extensions the proper amalgam exists. Let η and ξ be defined as in the previous section. By Lemma <ref> it suffices to show that for each pair (X,Y) of elements of ℒ(M_1,M_2) either η or ξ is submodular. By cases, we check all possible pairs (X,Y) of sets of ℒ(M_1,M_2) where the submodularity of η could be violated, and show that ξ(X ∪ Y) < η(X ∪ Y) or ξ(X ∩ Y) < η(X ∩ Y), and hence (by Lemma <ref>) ξ is submodular on (X,Y). By Lemma <ref>, (X ∩ E_i, Y ∩ E_i) are modular pairs of flats in M_i for i = 1,2 and (X ∩ T, Y ∩ T) is a pair of disjoint coplanar lines or a disjoint line-plane pair. Disjoint coplanar lines: Assume X ∩ T = l_X and Y ∩ T = l_Y are two disjoint coplanar lines. By Corollary <ref>, the fact that (X ∩ E_i, Y ∩ E_i) are modular pairs for i = 1,2 implies that T ⊆ cl_M_i((X ∪ Y) ∩ E_i) for i = 1,2. Let t ∈ T ∖ cl_M(l_X ∪ l_Y).
Then η(X ∪ Y ∪ t) = r_1((X ∪ Y ∪ t) ∩ E_1) + r_2((X ∪ Y ∪ t) ∩ E_2) - r((X ∪ Y ∪ t) ∩ T) = r_1((X ∪ Y) ∩ E_1) + r_2((X ∪ Y) ∩ E_2) - r((X ∪ Y) ∩ T) - 1 = η(X ∪ Y) - 1. Hence ξ(X ∪ Y) < η(X ∪ Y). Disjoint line-plane pair: Assume X ∩ T = e_X is a plane and Y ∩ T = l_Y is a line disjoint from e_X. By Lemma <ref>, for every line l ⊆ e_X such that r(l ∨ l_Y) = 3 we must have r_i((X ∩ Y ∩ E_i) ∪ e_X) = r_i((X ∩ Y ∩ E_i) ∪ l) for i = 1,2. Choose a point p_1 ∈ e_X. Since M is hypermodular, l_X = e_X ∧ (l_Y ∨ p_1) is a line in M and p_1 ∈ l_X. Since Y ∩ E_1 is a flat in M_1 not containing p_1 and X ∩ Y ∩ E_1 is a flat in M_1 disjoint from T, we have r_1((Y ∪ p_1) ∩ E_1) = r_1(Y ∩ E_1) + 1 and r_1((X ∩ Y ∩ E_1) ∪ p_1) = r_1(X ∩ Y ∩ E_1) + 1. Choose a second point p_2 ∈ l_X such that p_2 ≠ p_1. Since l_X and l_Y are coplanar, we obtain p_2 ∈ l_X ⊆ cl_M(p_1 ∪ l_Y) = cl_M(p_1 ∪ (Y ∩ T)) ⊆ cl_M_1(p_1 ∪ (Y ∩ E_1)), and thus r_1((Y ∪ l_X) ∩ E_1) = r_1((Y ∪ {p_1,p_2}) ∩ E_1) = r_1((Y ∪ p_1) ∩ E_1). Furthermore, since {p_1,p_2} ⊆ l_X ⊆ X: r_1((X ∪ Y ∪ {p_1,p_2}) ∩ E_1) = r_1((X ∪ Y) ∩ E_1). Using these equations and the modularity of (X ∩ E_1, Y ∩ E_1) in M_1 we compute r_1(X ∩ E_1) + r_1((Y ∪ {p_1,p_2}) ∩ E_1) = r_1(X ∩ E_1) + r_1((Y ∪ p_1) ∩ E_1) (by (<ref>)) = r_1(X ∩ E_1) + r_1(Y ∩ E_1) + 1 (by (<ref>)) = r_1((X ∪ Y) ∩ E_1) + r_1(X ∩ Y ∩ E_1) + 1 (by modularity) = r_1((X ∪ Y ∪ {p_1,p_2}) ∩ E_1) + r_1(X ∩ Y ∩ E_1) + 1 (by (<ref>)) = r_1((X ∪ Y ∪ {p_1,p_2}) ∩ E_1) + r_1((X ∩ Y ∩ E_1) ∪ p_1) (by (<ref>)) ≤ r_1((X ∪ Y ∪ {p_1,p_2}) ∩ E_1) + r_1((X ∩ Y ∩ E_1) ∪ {p_1,p_2}). By submodularity of r_1 the last inequality must hold with equality, and hence r_1((X ∩ Y ∩ E_1) ∪ l_X) = r_1((X ∩ Y ∩ E_1) ∪ p_1). By symmetry, (<ref>) and (<ref>) are also valid for r_2 and E_2. Recalling that X ∩ Y ∩ T = ∅, we compute η((X ∩ Y) ∪ e_X) = [∑_i=1^2 r_i((X ∩ Y ∩ E_i) ∪ e_X)] - r(e_X) = [∑_i=1^2 r_i((X ∩ Y ∩ E_i) ∪ l_X)] - 3 (by (<ref>)) = [∑_i=1^2 r_i((X ∩ Y ∩ E_i) ∪ p_1)] - 3 (by (<ref>)) = [∑_i=1^2 (r_i(X ∩ Y ∩ E_i) + 1)] - r(X ∩ Y ∩ T) - 3 (by (<ref>)) = η(X ∩ Y) - 1. Hence ξ(X ∩ Y) < η(X ∩ Y). § CONCLUSION Now if we put the embedding theorems together with Theorem <ref>, we get the equivalence of three conjectures: The following statements are equivalent: * All finite sticky matroids are modular. (SMC) * Every finite hypermodular matroid is embeddable in a modular matroid. (Kantor's Conjecture) * Every finite OTE matroid is modular. (i) ⇒ (ii): These two statements can be reduced to the rank-4 case (see Theorem <ref> and Corollary <ref>). Now consider a finite hypermodular rank-4 matroid M. Because of Theorem <ref>, it can be embedded into a finite rank-4 OTE matroid M' that is sticky due to Theorem <ref>. If (i) holds, then M' is modular, and M can be embedded into a modular matroid, so (ii) holds. (ii) ⇒ (iii): Let M be a finite OTE matroid. It is also hypermodular. If (ii) holds, it is embeddable into a modular matroid. Since M is OTE, it must itself already be modular. (iii) ⇒ (i): Let M be a finite sticky matroid. Because of Theorem <ref> it must be an OTE matroid and, if (iii) holds, it must be modular, so (i) holds. A slightly weaker conjecture than the (SMC) in the finite case, which could also hold in the infinite case, is the generalization of Theorem <ref> to arbitrary rank: A matroid is sticky if and only if it is an OTE matroid.
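As a concrete illustration of the configuration in Theorem <ref> (iii), the bundle condition can be checked mechanically from a rank oracle. The following minimal sketch (in Python; the helper names are ours, and the hard-coded rank function is the standard one of the Vámos matroid) verifies that exactly five of the six pairs among the four defining lines of the Vámos matroid are coplanar, so the bundle condition fails there:

```python
from itertools import combinations

# The four "lines" (rank-2 flats) of the Vamos matroid are the pairs below.
PAIRS = {"A": {"a1", "a2"}, "B": {"b1", "b2"}, "C": {"c1", "c2"}, "D": {"d1", "d2"}}
# Exactly five unions of two pairs are dependent planes; C u D stays independent.
PLANES = [PAIRS[x] | PAIRS[y] for x, y in ("AB", "AC", "AD", "BC", "BD")]

def rank(S):
    # Rank function of the Vamos matroid: every set of at most 3 points is
    # independent, and the only 4-point circuits are the five special planes.
    S = set(S)
    if len(S) <= 3:
        return len(S)
    if len(S) == 4:
        return 3 if S in PLANES else 4
    return 4  # every 5-point set contains a basis

def coplanar(l1, l2):
    return rank(l1 | l2) == 3

lines = list(PAIRS.values())
flags = [coplanar(l1, l2) for l1, l2 in combinations(lines, 2)]
# Five of the six pairs are coplanar but not all six: the bundle condition fails.
print(sum(flags), all(flags))  # prints: 5 False
```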
Our proof of Theorem <ref> frequently uses the fact that we are dealing with rank-4 matroids. We think there is a way to avoid Lemma <ref>, but the case checking in the proof of Theorem <ref> seems to become tedious even for ranks only slightly larger than 4. Moreover, we need a generalization of Theorem <ref> (iii) in order to generalize Lemma <ref>. § ACKNOWLEDGEMENT The authors are grateful to an anonymous referee who carefully read the paper, and whose comments helped to improve its readability.
http://arxiv.org/abs/1704.08478v2
{ "authors": [ "Winfried Hochstättler", "Michael Wilhelmi" ], "categories": [ "math.CO", "05B35, 05B25, 06C10, 51D20" ], "primary_category": "math.CO", "published": "20170427084346", "title": "Sticky matroids and Kantor's Conjecture" }
Communication complexity of approximate maximum matching in the message-passing modelZengfeng HuangUniversity of New South Wales, Sydney, Australia. Email: [email protected] Bozidar RadunovicMicrosoft Research, Cambridge, United Kingdom. Email: [email protected] Milan VojnovicDepartment of Statistics, London School of Economics (LSE), London, United Kingdom. Email: [email protected] Qin ZhangComputer Science Department, Indiana University, Bloomington, USA. Email: [email protected] ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ We consider the communication complexity of finding an approximate maximum matching in a graph in a multi-party message-passing communication model. The maximum matching problem is one of the most fundamental graph combinatorial problems, with a variety of applications. The input to the problem is a graph G that has n vertices and the set of edges partitioned over k sites, and an approximation ratio parameter α. The output is required to be a matching in G that has to be reported by one of the sites, whose size is at least factor α of the size of a maximum matching in G. We show that the communication complexity of this problem is Ω(α^2 k n) information bits. This bound is shown to be tight up to a log n factor, by constructing an algorithm, establishing its correctness, and an upper bound on the communication cost. The lower bound also applies to other graph combinatorial problems in the message-passing communication model, including max-flow and graph sparsification. § INTRODUCTIONComplex and massive volume data processing requires to scale out to parallel and distributed computation platforms. Scalable distributed computation algorithms are needed that make efficient use of scarce system resources such as communication bandwidth between compute nodes in order to avoid the communication network becoming a bottleneck. A particular interest has been devoted to studying scalable computation methods for graph data, which arises in a variety of applications including online services, online social networks, biological, and economic systems.In this paper, we consider the distributed computation problem of finding an approximate maximum matching in an input graph whose edges are partitioned over different compute nodes (we refer to as sites). Several performance measures are of interest including the communication complexity in terms of the number of bits or messages, the time complexity in terms of the number of rounds, and the storage complexity in terms of the number of bits. In this paper we focus on the communication complexity. Our main result is a tight lower bound on the communication complexity for approximate maximum matching. We assume a multi-party message-passing communication model <cit.>, we refer to as message-passing model, which is defined as follows. The message-passing model consists of k≥ 2 sites p^1, p^2, …, p^k. The input is partitioned across k sites, with sites p^1, p^2, …, p^k holding pieces of input data x^1, x^2, …, x^k, respectively. 
The goal is to design a communication protocol for the sites to jointly compute the value of a given function f:𝒳^k →𝒴 at the point (x^1,x^2, …, x^k). The sites are allowed to have point-to-point communications between each other. At the end of the computation, at least one site should return the answer. The goal is to find a protocol that minimizes the total communication cost between the sites. For technical convenience, we introduce another special party called the coordinator. The coordinator does not have any input. We require that all sites can only talk with the coordinator, and at the end of the computation, the coordinator should output the answer. We call this model the coordinator model. See Figure <ref> for an illustration. Note that we have essentially replaced the clique communication topology with a star topology, which increases the total communication cost only by a factor of 2 and thus does not affect the order of the asymptotic communication complexity. The edge partition of an input graph G = (V,E) over k sites is defined by a partition of the set of edges E into k disjoint sets E^1, E^2, …, E^k, and assigning each set of edges E^i to site p^i. For bipartite graphs with a set of left vertices and a set of right vertices, we define an alternative way of an edge partition, referred to as the left vertex partition, as follows: the set of left vertices is partitioned into k disjoint parts, and all the edges incident to one part are assigned to a unique site. Note that the left vertex partition is more restrictive, in the sense that any left vertex partition is an instance of an edge partition. Thus, lower bounds that hold in this model are stronger, as designing algorithms might be easier in this restrictive setting. Our lower bound is proved for the left vertex partition model, while our upper bound holds for an arbitrary edge partition of any graph. §.§ Summary of results We study the approximate maximum matching problem in the message-passing model, which we refer to as Distributed Matching Reporting (DMR), defined as follows: given as input is a graph G = (V,E) with |V| = n vertices and a parameter 0 < α ≤ 1; the set of edges E is arbitrarily partitioned into k ≥ 2 subsets E^1, E^2, ⋯, E^k such that E^i is assigned to site p^i; the coordinator is required to report an α-approximation of the maximum matching in graph G. In this paper, we show the following main theorem. For every 0 < α ≤ 1 and number of sites 1 < k ≤ n, any α-approximation randomized algorithm for DMR in the message-passing model with error probability at most 1/4 has communication complexity Ω(α^2 k n) bits. Moreover, this communication complexity lower bound already holds for bipartite graph instances. In this paper we are more interested in the case when k ≫ log n, since otherwise the trivial lower bound of Ω(n log n) bits (the number of bits to describe a maximum matching) is already near-optimal. For DMR, a seemingly weaker requirement is that, at the end of the computation, each site p^i outputs a set of edges M^i ⊆ E^i such that M^1 ∪ M^2 ∪ ⋯ ∪ M^k is a matching whose size is at least a factor α of the size of a maximum matching. However, given such an algorithm, each site might just send M^i to the coordinator after running the algorithm, which increases the total communication cost by at most an additive O(n log n) bits. Therefore, our lower bound also holds for this setting. A simple greedy distributed algorithm solves DMR for α = 1/2 with a communication cost of O(k n log n) bits. This algorithm is based on computing a maximal matching in graph G.
A maximal matching is a matching whose size cannot be enlarged by adding one or more edges. A maximal matching is computed using a greedy sequential procedure defined as follows. Let G(E') be the graph induced by a subset of edges E' ⊆ E. Site p^1 computes a maximal matching M^1 in G(E^1), and sends it to p^2 via the coordinator. Site p^2 then computes a maximal matching M^2 in G(E^1 ∪ E^2) by greedily adding edges in E^2 to M^1, and then sends M^2 to site p^3. This procedure is continued, and it is completed once site p^k has computed M^k and sent it to the coordinator. Notice that M^k is a maximal matching in graph G, hence it is a 1/2-approximation of a maximum matching in G. The communication cost of this protocol is O(k n log n) bits, because the size of each M^i is at most n edges and each edge's identifier can be encoded with O(log n) bits. This shows that our lower bound is tight up to a log n factor. This protocol is essentially sequential and takes O(k) rounds in total. We show that Luby's classic parallel algorithm for maximal matching <cit.> can be easily adapted to our model with O(log n) rounds of computation and O(k n log^2 n) bits of communication. In Section <ref>, we show that our lower bound is also tight with respect to the approximation ratio parameter α for any 0 < α ≤ 1/2 up to a log n factor. It was shown in <cit.> that many statistical estimation problems and graph combinatorial problems require Ω(kn) bits of communication to obtain an exact solution. Our lower bound shows that for DMR even computing a constant approximation requires this amount of communication. The lower bound established in this paper applies more generally to a broader range of graph combinatorial problems. Since a maximum matching in a bipartite graph can be found by solving a max-flow problem, our lower bound also holds for approximate max-flow. Our lower bound also implies a lower bound for the graph sparsification problem; see <cit.> for the definition. This is because in our lower bound construction (see Section <ref>), the bipartite graph under consideration contains many cuts of size Θ(1) which have to be included in any sparsifier. By our construction, these edges form a good approximate maximum matching, and thus any good sparsifier recovers a good matching. In <cit.>, it was shown that there is a sketch-based O(1)-approximate graph sparsification algorithm with a sketch size of Õ(n) bits, which directly translates to an approximation algorithm with Õ(kn) communication in our model. Thus, our lower bound is tight up to a poly-logarithmic factor for the graph sparsification problem. We briefly discuss the main ideas and techniques of our proof of the lower bound for DMR. As a hard instance, we use a bipartite graph G = (U, V, E) with |U| = |V| = n/2. Each site p^i holds a set of r = n/(2k) vertices, and these sets form a partition of the set of left vertices U. The neighborhood of each vertex in U is determined by a two-party set-disjointness instance (DISJ, defined formally in Section <ref>). There are in total rk = n/2 DISJ instances, and we want to perform a direct-sum type of argument on these n/2 DISJ instances. We show that, due to symmetry, the answer of DISJ can be recovered from a reported matching, and then use information complexity to establish the direct-sum theorem. For this purpose, we use a new definition of the information cost of a protocol in the message-passing model. We believe that our techniques would prove useful to establish the communication complexity of other graph combinatorial problems in the message-passing model.
The reason is that for many graph problems whose solution certificates "span" the whole graph (e.g., connected components, vertex cover, dominating set, etc.), it is natural that a hard instance would look like the one for the maximum matching problem, i.e., each of the k sites would hold roughly n/k vertices and the neighbourhood of each vertex would define an independent instance of a two-party communication problem. §.§ Related work The problem of finding an approximate maximum matching in a graph has been studied for various computation models, including the streaming computation model <cit.>, the MapReduce computation model <cit.>, and a traditional distributed computation model known as the 𝖫𝖮𝖢𝖠𝖫 computation model. In <cit.>, the maximum matching was presented as one of the open problems in the streaming computation model. Many results have been established since then by various authors <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Many of the studies were concerned with a streaming computation model that allows for Õ(n) space, referred to as the semi-streaming computation model. The algorithms developed for the semi-streaming computation model can be directly applied to obtain a constant-factor approximation of a maximum matching in a graph in the message-passing model at a communication cost of Õ(kn) bits. For the approximate maximum matching problem in the MapReduce model, <cit.> gave a 1/2-approximation algorithm which requires a constant number of rounds and uses Õ(m) bits of communication, for any input graph with m edges. The approximate maximum matching has been studied in the 𝖫𝖮𝖢𝖠𝖫 computation model by various authors <cit.>. In this computation model, each processor corresponds to a unique vertex of the graph and edges represent bidirectional communications between processors. The time advances over synchronous rounds. In each round, every processor sends a message to each of its neighbours, and then each processor performs a local computation using as input its local state and the received messages. Notice that in this model, the input graph and the communication topology are the same, while in the message-passing model the communication topology is essentially a complete graph, which is different from the input graph and, in general, sites do not correspond to vertices of the topology graph. A variety of graph and statistical computation problems have been recently studied in the message-passing model <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. A wide range of graph and statistical problems has been shown to be hard in the sense of requiring Ω(kn) bits of communication, including graph connectivity <cit.>, exact counting of distinct elements <cit.>, and k-party set-disjointness <cit.>. Some of these problems have been shown to be hard even for random order inputs <cit.>. In <cit.>, it has been shown that the communication complexity of the k-party set-disjointness problem in the message-passing model is Ω(kn) bits. This work was independent of and concurrent to ours. Incidentally, it uses a similar but different input distribution to ours. Similar input distributions were also used in previous work such as <cit.> and <cit.>. This is not surprising because of the nature of the message-passing model. There may exist a reduction between the k-party set-disjointness and DMR, but showing this is non-trivial and would require a formal proof.
The proof of our lower bound is different in that we use a reduction of the k-party DMR problem to a two-party set-disjointness using a symmetrisation argument, while <cit.> uses a coordinate-wise direct-sum theorem to reduce the k-party set-disjointness to a k-party 1-bit problem. The approximate maximum matching has been recently studied in the coordinator model under the additional condition that the sites send messages to the coordinator simultaneously and only once, referred to as the simultaneous-communication model. The coordinator then needs to report the output that is computed using as input the received messages. It has been shown in <cit.> that for the vertex partition model, our lower bound is achievable by a simultaneous protocol for any α ≤ 1/√(k) up to a poly-logarithmic factor. The communication/round complexity of approximate maximum matching has been studied in the context of finding efficient economic allocations of items to agents, in markets that consist of unit-demand agents in a distributed information model where agents' valuations are unknown to a central planner, which requires communication to determine an efficient allocation. This amounts to studying the communication or round complexity of approximate maximum matching in a bipartite graph that defines preferences of agents over items. In a market with n agents and n items, this amounts to approximate maximum matching in the n-party model with a left vertex partition. <cit.> and <cit.> studied this problem in the so-called blackboard communication model, where messages sent by agents can be seen by all agents. For one-round protocols, <cit.> established a tight trade-off between message size and approximation ratio. As indicated by the authors in <cit.>, their randomized lower bound is actually a special case of ours. In a follow-up work, <cit.> obtained the first non-trivial lower bound on the number of rounds for general randomized protocols. §.§ Roadmap In Section <ref> we present some basic concepts of probability and information theory, communication and information complexity that are used throughout the paper. Section <ref> presents the lower bound and its proof, which is the main result of this paper. Section <ref> establishes the tightness of the lower bound up to a poly-logarithmic factor. Finally, in Section <ref>, we conclude. § PRELIMINARIES §.§ Basic facts and notation Let [q] denote the set {1, 2, …, q}, for a given integer q ≥ 1. All logarithms are assumed to have base 2. We use capital letters X, Y, … to denote random variables and the lower case letters x, y, … to denote specific values of the respective random variables X, Y, …. We write X ∼ μ to mean that X is a random variable with distribution μ, and x ∼ μ to mean that x is a sample from distribution μ. For a distribution μ on a domain 𝒳 × 𝒴 and (X,Y) ∼ μ, we write μ(x|y) to denote the conditional distribution of X given Y = y. For any given probability distribution μ and positive integer t ≥ 1, we denote with μ^t the t-fold product distribution of μ, i.e.
the distribution of t independent and identically distributed random variables, each with distribution μ. We will use the following distances between two probability distributions μ and ν on a discrete set 𝒳: (a) the total variation distance, defined as d(μ,ν) = (1/2) ∑_x∈𝒳 |μ(x) - ν(x)| = max_S⊆𝒳 |μ(S) - ν(S)|, and (b) the Hellinger distance, defined as h(μ,ν) = √((1/2) ∑_x∈𝒳 (√(μ(x)) - √(ν(x)))^2). The total variation distance and the Hellinger distance satisfy the following relation: For any two probability distributions μ and ν, d(μ,ν) ≤ √(2) h(μ,ν). With a slight abuse of notation, for two random variables X ∼ μ and Y ∼ ν, we write d(X,Y) and h(X,Y) in lieu of d(μ,ν) and h(μ,ν), respectively. We will use the following two well-known inequalities. Hoeffding's inequality: Let X be the sum of t ≥ 1 independent and identically distributed random variables that take values in [0,1]. Then, for any s ≥ 0, Pr[X - E[X] ≥ s] ≤ e^-2s^2/t. Chebyshev's inequality: Let X be a random variable with variance σ^2 > 0. Then, for any s > 0, Pr[|X - E[X]| ≥ s] ≤ σ^2/s^2. §.§ Information theory For two random variables X and Y, let H(X) denote the Shannon entropy of the random variable X, and let H(X|Y) = E_y[H(X|Y=y)] denote the conditional entropy of X given Y. Let I(X;Y) = H(X) - H(X|Y) denote the mutual information between X and Y, and let I(X;Y|Z) = H(X|Z) - H(X|Y,Z) denote the conditional mutual information given Z. The mutual information between any X and Y is non-negative, i.e. I(X;Y) ≥ 0, or equivalently, H(X|Y) ≤ H(X). We will use the following relations from information theory: Chain rule for mutual information: For any jointly distributed random variables X^1, X^2, …, X^t, Y and Z, I(X^1, X^2, …, X^t; Y|Z) = ∑_i=1^t I(X^i; Y | X^1, …, X^i-1, Z). Data processing inequality: If X and Z are conditionally independent random variables given Y, then I(X;Y|Z) ≤ I(X;Y) and I(X;Z) ≤ I(X;Y). Super-additivity of mutual information: If X^1, X^2, …, X^t are independent random variables, then I(X^1, X^2, …, X^t; Y) ≥ ∑_i=1^t I(X^i; Y). Sub-additivity of mutual information: If X^1, X^2, …, X^t are conditionally independent random variables given Y, then I(X^1, X^2, …, X^t; Y) ≤ ∑_i=1^t I(X^i; Y). §.§ Communication complexity In the two-party communication complexity model, two players, Alice and Bob, are required to jointly compute a function f: 𝒳 × 𝒴 → 𝒵. Alice is given x ∈ 𝒳 and Bob is given y ∈ 𝒴, and they want to jointly compute the value of f(x,y) by exchanging messages according to a randomized protocol Π. We use Π_xy to denote the random transcript (i.e., the concatenation of messages) when Alice and Bob run Π on the input (x,y), and Π(x,y) to denote the output of the protocol. When the input (x,y) is clear from the context, we will simply use Π to denote the transcript. We say that Π is a γ-error protocol if for every input (x,y), the probability that Π(x,y) ≠ f(x,y) is not larger than γ, where the probability is over the randomness used in Π. We will refer to this type of error as worst-case error. An alternative and weaker type of error is the distributional error, which is defined analogously for an input distribution, and where the error probability is over both the randomness used in the protocol and the input distribution. Let |Π_xy| denote the length of the transcript in information bits. The communication cost of Π is max_x,y |Π_xy|.
The γ-error randomized communication complexity of f, denoted by R_γ(f), is the minimal cost of any γ-error protocol for f. The multi-party communication complexity model is a natural generalization to k ≥ 2 parties, where each party has a part of the input, and the parties are required to jointly compute a function f: 𝒳^k → 𝒵 by exchanging messages according to a randomized protocol. For more information about communication complexity, we refer the reader to <cit.>. §.§ Information complexity The communication complexity quantifies the number of bits that need to be exchanged by two or more players in order to compute some function together, while the information complexity quantifies the amount of information about the inputs that must be revealed by the protocol. The information complexity has been extensively studied in the last decade, e.g., <cit.>. There are several definitions of information complexity. In this paper, we follow the definition used in <cit.>. In the two-party case, let μ be a distribution on 𝒳 × 𝒴; we define the information cost of Π measured under μ as IC_μ(Π) = I(X,Y; Π_XY | R), where (X,Y) ∼ μ and R is the public randomness used in Π. For notational convenience, we will omit the subscript of Π_XY and simply use I(X,Y; Π | R) to denote the information cost of Π. It should be clear that IC_μ(Π) is a function of μ for a fixed protocol Π. Intuitively, this measures how much information about X and Y is revealed by the transcript Π_XY. For any function f, we define the information complexity of f parametrized by μ and γ as IC_μ,γ(f) = min_γ-error Π IC_μ(Π). §.§ Information complexity and coordinator model We can indeed extend the above definition of information complexity to the k-party coordinator model. That is, let X^i be the input of player i with (X^1, …, X^k) ∼ μ and let Π be the whole transcript; then we could define IC_μ(Π) = I(X^1, X^2, …, X^k; Π | R). However, such a definition does not fully exploit the point-to-point communication feature of the coordinator model. Indeed, the lower bound we can prove using such a definition is at most what we can prove under the blackboard model, and our problem admits a simple algorithm with communication O(n log n + k) in the blackboard model. In this paper we give a new definition of information complexity for the coordinator model, which allows us to prove higher lower bounds compared with the simple generalization. Let Π^i be the transcript between player i and the coordinator, thus Π = Π^1 ∘ Π^2 ∘ … ∘ Π^k. We define the information cost for a function f with respect to input distribution μ and error parameter γ ∈ [0,1] in the coordinator model as IC_μ,γ(f) = min_γ-error Π ∑_i=1^k I(X^1, X^2, ⋯, X^k; Π^i). R_γ(f) ≥ IC_μ,γ(f) for any distribution μ. For any γ-error protocol Π, the expected size of its transcript is (we abuse the notation by using Π also for the transcript) E[|Π|] = ∑_i=1^k E[|Π^i|] ≥ ∑_i=1^k H(Π^i) ≥ ∑_i=1^k I(X^1, X^2, ⋯, X^k; Π^i) ≥ IC_μ,γ(f). The theorem then follows because the worst-case communication cost is at least the average-case communication cost. If Y is independent of the random coins used by the protocol Π, then IC_μ,γ(f) ≥ min_Π ∑_i=1^k I(X^i, Y; Π^i). The statement directly follows from the data processing inequality, because given X^1, X^2, …, X^k, Π is fully determined by the random coins used, and is thus independent of Y. § LOWER BOUND The lower bound is established by constructing a hard distribution for the input bipartite graph G = (U, V, E) such that |U| = |V| = n/2.
We first discuss the special case when the number of sites k is equal to n/2, and each site is assigned one unique vertex in U together with all its adjacent edges. We later discuss the general case. A natural approach to approximately compute a maximum matching in a graph is to randomly sample a few edges from each site, and hope that we can find a good matching using these edges. To rule out such strategies, we construct random edges as follows. We create a large number of noisy edges by randomly picking a small set of nodes V_0 ⊆ V of size roughly α n/10 and connecting each node in U to each node in V_0 independently at random with a constant probability. Note that there are Θ(α n^2) such edges and the size of any matching that can be formed by these edges is at most α n/10, which we will show to be asymptotically (α/2) OPT, where OPT is the size of a maximum matching. We next create a set of important edges between U and V_1 = V ∖ V_0 such that each node in U is adjacent to at most one random node in V_1. These edges are important in the sense that although there are only Θ(|U|) = Θ(n) of them, the size of a maximum matching they can form is large, of the order OPT. Therefore, to compute a matching of size at least α OPT, it is necessary to find and include Θ(α OPT) = Θ(α n) important edges. We then show that finding an important edge is in some sense equivalent to solving a set-disjointness (DISJ) instance, and thus we have to solve Θ(n) DISJ instances. The concrete implementation of this intuition is via an embedding argument. In the general case, we create n/(2k) independent copies of the above random bipartite graph, each with 2k vertices, and assign n/(2k) vertices to each site (one from each copy). We then prove a direct-sum theorem using information complexity. In the following, we introduce the two-party AND problem and the two-party DISJ problem. These two problems have been widely studied and tight bounds are known (e.g. <cit.>). For our purpose, we need to prove stronger lower bounds for them. We then give a reduction from DMR to DISJ and prove an information cost lower bound for DISJ in Section <ref>. §.§ The two-party AND problem In the two-party AND communication problem, Alice and Bob hold bits a and b respectively, and they want to compute the value of the function AND(a,b) = a ∧ b. Next we define input distributions for this problem. Let A, B be random variables corresponding to the inputs of Alice and Bob, respectively. Let p ∈ (0,1/2] be a parameter. Let τ_q denote the probability distribution of a Bernoulli random variable that takes value 0 with probability q or value 1 with probability 1-q. We define two input probability distributions ν and μ for (A,B) as follows. ν: Sample w ∼ τ_p, and then set the value of (a,b) as follows: if w = 0, let a ∼ τ_1/2 and b = 0; otherwise, if w = 1, let a = 0 and b ∼ τ_p. Thus, we have (A,B) = (0,0) with probability p(3-2p)/2, (A,B) = (0,1) with probability (1-p)^2, and (A,B) = (1,0) with probability p/2. μ: Sample w ∼ τ_p, and then choose (a,b) as above (i.e. sample (a,b) according to ν). Then, reset the value of a to be 0 or 1 with equal probability (i.e. set a ∼ τ_1/2). Here w is an auxiliary random variable to break the dependence of A and B; as we can see, A and B are not independent, but they are conditionally independent given w.
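For concreteness, the following minimal sketch (in Python; the function names are ours) implements the two sampling procedures and empirically checks that the probability of the input (1,1) under μ is (1-p)^2/2, the quantity denoted δ below:

```python
import random

def tau(q):
    # tau_q: value 0 with probability q, value 1 with probability 1 - q.
    return int(random.random() >= q)

def sample_nu(p):
    # nu: sample w ~ tau_p; if w = 0 then a ~ tau_{1/2} and b = 0,
    # otherwise a = 0 and b ~ tau_p.
    if tau(p) == 0:
        return tau(0.5), 0
    return 0, tau(p)

def sample_mu(p):
    # mu: sample (a, b) ~ nu, then reset a to an unbiased coin flip.
    _, b = sample_nu(p)
    return tau(0.5), b

p, trials = 0.25, 200000
hits = sum(1 for _ in range(trials) if sample_mu(p) == (1, 1))
print(hits / trials, (1 - p) ** 2 / 2)  # the two values should be close
```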
Let δ be the probability that (A,B) = (1,1) under distribution μ, which is (1-p)^2/2. For the special case p = 1/2, it was shown in <cit.> that, for any private coin protocol Π with worst-case error probability 1/2 - β, the information cost satisfies I(A,B; Π | W) ≥ Ω(β^2), where the information cost is measured with respect to ν and W is the random variable corresponding to w. Note that the above mutual information is different from the definition of information cost; it is referred to as the conditional information cost in <cit.>. It is smaller than the standard information cost by the data processing inequality (Π and W are conditionally independent given A, B). For a fixed protocol Π, the joint probability distribution of (A,B,Π,W) is determined by the distribution of (A,B,W), and so is I(A,B; Π | W). Therefore, when we say the (conditional) information cost is measured w.r.t. ν, it means that the mutual information I(A,B; Π | W) is measured under the joint distribution of (A,B,Π,W) determined by ν.
However, we cannot apply them to lower bound the term I(B; Π_0B) when p < 1/2 because then the distribution of B is not τ_1/2. To lower bound the term I(B; Π_0B), we will use the following well-known property, whose proof can be found in the book <cit.> (Theorem 2.7.4).Let (X,Y)∼ p(x,y)=p(x)p(y|x). The mutual information I(X,Y) is a concave function of p(x) for fixed p(y|x). Hence, the mutual information I(B;Π_0B) is a concave function of the distribution τ_p of B, since the distribution of Π_0B is fixed given B. Recall that τ_p is the probability distribution that takes value 0 with probability p and takes value 1 with probability 1-p. Note that τ_p can be expressed as a convex combination of τ_1/2 and τ_0 (always taking value 1) as follows: τ_p = 2p τ_1/2 + (1-2p) τ_0. (Recall that p is assumed to be smaller than 1/2.) Let B_0∼τ_1/2 and B_1 ∼τ_0. Then, using Lemma <ref>, we have I(B; Π_0B)≥2p· I(B_0; Π_0B_0)+(1-2p)· I(B_1; Π_0B_1) ≥ 2p· h^2(Π_00,Π_01)where the last inequality holds by (<ref>) and non-negativity of mutual information. Thus, we haveI(A,B; Π_AB | W)=p· I(A; Π_A0)+(1-p)· I(B; Π_0B) ≥p · h^2(Π_00,Π_10)+(1-p)2p· h^2(Π_00,Π_01)≥p ·( h^2(Π_00,Π_10)+ h^2(Π_00,Π_01))where the last inequality holds because p≤ 1/2.We next show that if Π is a protocol with error probability smaller than or equal to δ-β under distribution μ, thenh^2(Π_00,Π_10)+ h^2(Π_00,Π_01) = Ω((β/δ)^2),which together with other above relations implies the first part of the theorem.By the triangle inequality,h(Π_00,Π_10)+ h(Π_00,Π_01)≥h(Π_01,Π_10)= h(Π_00,Π_11)where the last equality is from the cut-and-paste lemma in <cit.> (Lemma 6.3).Thus, we haveh(Π_00,Π_10)+ h(Π_00,Π_01)≥ 1/2h(Π_00,Π_10)+1/2(h(Π_00,Π_10)+ h(Π_00,Π_01))≥ 1/2(h(Π_00,Π_10)+h(Π_00,Π_11))≥ 1/2 h(Π_10,Π_11)where the last inequality is by the triangle inequality.Similarly,it holds thath(Π_00,Π_10)+ h(Π_00,Π_01)≥1/2 h(Π_01,Π_11).From (<ref>), (<ref>) and (<ref>), for any positive real numbers a, b, and c such that a+b+c = 1, we haveh(Π_00,Π_10)+ h(Π_00,Π_01)≥ 1/2 (a · h(Π_00,Π_11) + b· h(Π_01,Π_11)+ c· h(Π_10,Π_11)). Let p^e denote the error probability of Π and p^e_xy denote the error probability of Π conditioned on that the input is (x,y). Recall δ=μ(1,1)≤ 1/2. We have p^e =μ(0,0) p^e_00+μ(1,0)p^e_10+μ(0,1)p^e_01+δ p^e_11≥ δ( μ(0,0) p^e_00+μ(1,0)p^e_10+μ(0,1)p^e_01/1-δ+ p^e_11) =δ(a^*(p^e_00+p^e_11)+b^*(p^e_01+p^e_11)+c^*(p^e_10+p^e_11))where a^* = μ(0,0)/1-δ, b^* = μ(0,1)/1-δ andc^* = μ(1,0)/1-δ,and clearly a^*+b^*+c^*=1. Let Π(x,y) be the output of Π when the input is (x,y), which is also a random variable. Note that p^e_00+p^e_11 = [Π(0,0)=1]+[Π(1,1)=0]= 1-([Π(0,0)=0]-[Π(1,1)=0])≥1-d(Π_00,Π_11)where d(X,Y) denote the total variation distance between probability distributions of random variables X and Y. Using Lemma <ref>, we havep^e_00+p^e_11≥ 1 - √(2)h(Π_00,Π_11).By the same arguments, we also have p^e_01+p^e_11≥ 1-√(2)h(Π_01,Π_11)andp^e_10+p^e_11≥ 1-√(2)h(Π_10,Π_11). Combining (<ref>), (<ref>) and (<ref>) with (<ref>) and the assumption that p^e ≤δ - β, we obtaina^* h(Π_00,Π_11) + b^* h(Π_10,Π_11)+ c^* h(Π_01,Π_11)≥ β/√(2)δ. By (<ref>), we haveh(Π_00,Π_10)+ h(Π_00,Π_01)≥β/2√(2)δ.From the Cauchy-Schwartz inequality, it follows h^2(Π_00,Π_10)+ h^2(Π_00,Π_01)≥ 1/2 (h(Π_00,Π_10)+ h(Π_00,Π_01))^2. Hence, we haveh^2(Π_00,Π_10)+ h^2(Π_00,Π_01) ≥β^2/16δ^2which combined with (<ref>) establishes the first part of the theorem.We now go on to prove the second part of the theorem. 
Assume Π has a one-sided error 1-β, i.e., it outputs 1 with probability at least β if the input is (1,1), and always outputs correctly otherwise. To boost the success probability, we can run m parallel instances of the protocol and answer 1 if and only if there exists one instance which outputs 1. Let Π' be this new protocol; it is easy to see that it has a one-sided error of (1-β)^m. By setting m = O(1/β), this is at most 1/10, and thus the (two-sided) distributional error of Π' under μ is smaller than δ/10. By the first part of the theorem, we know I(A,B; Π' | W) = Ω(p). We also have I(A,B; Π' | W) = I(A,B; Π_1, Π_2, …, Π_m | W) ≤ ∑_i=1^m I(A,B; Π_i | W) = m I(A,B; Π | W), where the inequality follows from sub-additivity and the fact that Π_1, Π_2, …, Π_m are conditionally independent of each other given A, B and W. Thus, we have I(A,B; Π | W) ≥ Ω(p/m) = Ω(pβ). §.§ The two-party DISJ communication problem In the two-party DISJ communication problem, the two players, Alice and Bob, hold strings of bits x = (x_1, x_2, …, x_k) and y = (y_1, y_2, …, y_k), respectively, and they want to compute DISJ(x,y) = AND(x_1,y_1) ∨ ⋯ ∨ AND(x_k,y_k). By interpreting x and y as indicator vectors that specify subsets of [k], DISJ(x,y) = 0 if and only if the two sets represented by x and y are disjoint. Note that this accommodates the AND problem as a special case when k = 1. Let A = (a_1, a_2, …, a_k) be Alice's input and B = (b_1, b_2, …, b_k) be Bob's input. We define two input distributions ν_k and μ_k for (A,B) as follows. ν_k: For each i ∈ [k], independently sample (a_i, b_i) ∼ ν, and let w_i be the corresponding auxiliary random variable (see the definition of ν). Define w = (w_1, w_2, ⋯, w_k). μ_k: Let (a,b) ∼ ν_k, then pick d uniformly at random from [k], and reset a_d to be 0 or 1 with equal probability. Note that (a_d, b_d) ∼ μ, and the probability that DISJ(A,B) = 1 is equal to δ. We define the one-sided error for DISJ similarly: A protocol has a one-sided error γ for DISJ if it is always correct when DISJ(x,y) = 0, and is correct with probability at least 1-γ when DISJ(x,y) = 1. Let Π be any public coin protocol for DISJ with error probability δ - β on input distribution μ_k, where β ∈ (0,δ), and let R denote the public randomness used by the protocol. Then I(A,B; Π | W, R) = Ω(kp (β/δ)^2), where the information is measured w.r.t. μ_k. If Π has a one-sided error 1-β, then I(A,B; Π | W, R) = Ω(kpβ). We first consider the two-sided error case. Let Π be a protocol for DISJ with distributional error δ - β under μ_k. Consider the following reduction from AND to DISJ. Alice has input u, and Bob has input v. They want to decide the value of u ∧ v. They first publicly sample j ∈ [k], and embed u, v in the j-th position, i.e. set a_j = u and b_j = v. Then they publicly sample w_j' according to τ_p for all j' ≠ j. Let w_-j = (w_1, …, w_j-1, w_j+1, …, w_k). Conditional on w_j', they sample (a_j', b_j') such that (a_j', b_j') ∼ ν for each j' ≠ j. Note that this step can be done using only private randomness, since, in the definition of ν, a_j' and b_j' are independent given w_j'. Then they run the protocol Π on the input (a,b) and output whatever Π outputs. Let Π' denote this protocol for AND. Let U, V, A, B, W, J be the random variables corresponding to u, v, a, b, w, j, respectively. It is easy to see that if (U,V) ∼ μ, then (A,B) ∼ μ_k, and thus the distributional error of Π' is δ - β under μ. The public coins used in Π' include J, W_-J and the public coins R of Π. We first analyze the information cost of Π under (A,B) ∼ ν_k.
We first analyze the information cost of Π' under (A,B)∼ν_k. We have

1/k I(A,B;Π | W,R) ≥ 1/k ∑_j=1^k I(A_j,B_j;Π | W_j, W_-j, R) = 1/k ∑_j=1^k I_ν(U,V;Π' | W_j, J=j, W_-j, R) = I(U,V;Π' | W_J, J, W_-J, R) = Ω(p(β/δ)^2),

where (<ref>) is by the super-additivity of mutual information, (<ref>) holds because when (U,V)∼ν the conditional distribution of (U,V,Π,W_j,W_-j,R) given J = j is the same as the distribution of (A_j,B_j,Π,W_j,W_-j,R), and (<ref>) follows from Theorem <ref> using the fact that Π' has error δ-β under μ. We have established that when (A,B)∼ν_k, it holds that

I(A,B;Π | W,R) = Ω(kp(β/δ)^2).

We now consider the information cost when (A,B)∼μ_k. Recall that to sample from μ_k, we first sample (a,b)∼ν_k, and then pick d uniformly at random from [k] and reset a_d to 0 or 1 with equal probability. Let ξ be the indicator random variable of the event that the last step does not change the value of a_d. We note that for any jointly distributed random variables X, Y, Z and W,

I(X;Y|Z) ≥ I(X;Y|Z,W) - H(W).

To see this, note that by the chain rule for mutual information we have I(X,W;Y|Z) = I(X;Y|Z) + I(W;Y|X,Z) and I(X,W;Y|Z) = I(W;Y|Z) + I(X;Y|W,Z). Combining the above two equalities, (<ref>) follows from the facts that I(W;Y|Z) ≥ 0 and I(W;Y|X,Z) ≤ H(W|X,Z) ≤ H(W). Let (A,B)∼μ_k and (A',B')∼ν_k. We have

I(A,B;Π | W,R) ≥ I(A,B;Π | W,R,ξ) - H(ξ) = 1/2 I(A,B;Π | W,R,ξ=1) + 1/2 I(A,B;Π | W,R,ξ=0) - 1 ≥ 1/2 I(A',B';Π | W,R) - 1 = Ω(kp(β/δ)^2),

where the first inequality is from (<ref>) and the last equality is by (<ref>). The proof for the one-sided error case is the same, except that we use the one-sided error lower bound Ω(pβ) of Theorem <ref> to bound (<ref>).

§.§ Proof of Theorem <ref>

Here we provide a proof of Theorem <ref>. The proof is based on a reduction from DISJ to the matching problem. We first define the hard input distribution that we use for the matching problem. The input graph G is assumed to be a random bipartite graph that consists of r=n/(2k) disjoint, independent and identically distributed random bipartite graphs G^1, G^2, …, G^r. Each bipartite graph G^j = (U^j, V^j, E^j) has the set U^j = {u^j,i : i∈[k]} of left vertices and the set V^j = {v^j,i : i∈[k]} of right vertices, both of cardinality k. The sets of edges E^1, E^2, …, E^r are defined by a random variable X that takes values in {0,1}^r×k×k, such that whether or not (u^j,i, v^j,l) is an edge in E^j is indicated by X_l^j,i.

The distribution of X is defined as follows. Let Y^1, Y^2, …, Y^r be independent and identically distributed random variables with distribution μ_k(b), where μ_k(b) denotes the marginal distribution of b under the joint distribution μ_k. Then, for each j∈[r], conditioned on Y^j = y^j, let X^j,1, X^j,2, …, X^j,k be independent and identically distributed random variables with distribution μ_k(a|y^j), where μ_k(a|y^j) is the conditional distribution of a given b=y^j. Note that for every j∈[r] and i∈[k], (X^j,i, Y^j)∼μ_k. We will use the following notation:

X^i = (X^1,i, X^2,i, …, X^r,i) for i∈[k], and X = (X^1, X^2, …, X^k),

where each X^j,i∈{0,1}^k, and X^j,i_l is the l-th bit. In addition, we will also use the notation

X^-i = (X^1, …, X^i-1, X^i+1, …, X^k) for i∈[k], and Y = (Y^1, Y^2, …, Y^r).

Note that X is the input to the matching problem; Y is not part of the input, but it is used to construct X. The edge partition of the input graph G over the k sites p^1, p^2, …, p^k is defined by assigning all edges incident to the vertices u^1,i, u^2,i, …, u^r,i to site p^i, or equivalently, p^i gets X^i. See Figure <ref> for an illustration.

Input Reduction. Let a ∈{0,1}^k be Alice's input and b ∈{0,1}^k be Bob's input for DISJ.
We will first construct an input for the matching problem from (a,b), which has the above hard distribution. In this reduction, in each bipartite graph G^j, we carefully embed k instances of DISJ. The output of a DISJ instance determines whether or not a specific edge in the graph exists. This amounts to a total of k r = n/2 DISJ instances embedded in the graph G. The original input of Alice and Bob is embedded at a random position, and the other n/2 - 1 instances are sampled by Alice and Bob using public and private random coins. We then argue that if the original DISJ instance is solved, then with a sufficiently large probability, at least Ω(n) of the embedded DISJ instances are solved. Intuitively, if a protocol solves a DISJ instance at a random position with high probability, then it should solve many instances at other positions as well, since the input distribution is completely symmetric. We will see that the original DISJ instance can be solved by using any protocol solving the matching problem, the correctness of which also relies on this symmetry property.

Alice and Bob construct an input X for the matching problem as follows:
* Alice and Bob use public coins to sample an index I from a uniform distribution on [k]. Alice constructs the input X^I for site p^I, and Bob constructs the inputs X^-I for the other sites (see Figure <ref>).
* Alice and Bob use public coins to sample an index J from a uniform distribution on [r].
* G^J is sampled as follows: Alice sets X^J,I = a, and Bob sets Y^J = b. Bob privately samples (X^J,1, …, X^J,I-1, X^J,I+1, …, X^J,k) ∼ μ_k(a|Y^J)^k-1.
* For each j∈ [r]∖{J}, G^j is sampled as follows:
* Alice and Bob use public coins to sample W^j = (W_1^j, W_2^j,…, W_k^j) ∼ τ_p^k.
* Alice and Bob privately sample X^j,I and Y^j from ν_k(a|W^j) and ν_k(b|W^j), respectively. Bob privately and independently samples (X^j,1, …, X^j,I-1, X^j,I+1, …, X^j,k) ∼ μ_k(a|Y^j)^k-1.
* Alice privately draws an independent sample d from a uniform distribution on [k], and resets X^j,I_d to 0 or 1 with equal probability. As a result, (X^j,I,Y^j) ∼ μ_k. For each i∈ [k]∖{I}, Bob privately draws a sample d from a uniform distribution on [k] and resets X^j,i_d to a sample from τ_1/2.

Note that the input X^I of site p^I is determined by the public coins, Alice's input a and her private coins. The inputs X^-I are determined by the public coins, Bob's input b and his private coins. Let ϕ denote the distribution of X when (a,b) is chosen according to the distribution μ_k. Let α be the approximation ratio parameter. We set p = α/20 ≤ 1/20 in the definition of μ_k.

Given a protocol Π' for the matching problem that achieves an α-approximation with error probability at most 1/4 under ϕ, we construct a protocol for DISJ that has a one-sided error probability of at most 1 - α/10, as follows.

Protocol:
* Given input (A, B) ∼ μ_k, Alice and Bob construct an input X ∼ ϕ for the matching problem as described by the input reduction above. Let Y = (Y^1,Y^2, …, Y^r) be the samples used for the construction of X. Let I, J be the two indices sampled by Alice and Bob in the reduction procedure.
* With Alice simulating site p^I and Bob simulating the other sites and the coordinator, they run Π' on the input defined by X. Any communication between site p^I and the coordinator is exchanged between Alice and Bob. Any communication among the other sites and the coordinator is simulated by Bob without any actual communication. At the end, the coordinator, that is, Bob, obtains a matching M.
* Bob outputs 1 if, and only if, for some l ∈ [k], (u^J,I, v^J,l) is an edge in M such that Y_l^J ≡ B_l = 1, and 0 otherwise.
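The final decision step of this protocol is simple enough to state in code. A minimal sketch follows; the vertex encoding ('u', j, i) / ('v', j, l) is an illustration, not the paper's notation.

```python
def decide_disj(M, I, J, B):
    """Given the matching M returned by the matching protocol, the public
    indices (I, J) and Bob's input B, return the answer for DISJ(A, B):
    output 1 iff M matches u^{J,I} to some v^{J,l} with B_l = 1."""
    for left, right in M:
        if left == ('u', J, I) and right[0] == 'v' and right[1] == J:
            l = right[2]
            if B[l] == 1:   # an important edge: A_l = B_l = 1
                return 1
    return 0
```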
Correctness. Suppose that DISJ(A,B)=0, i.e., A_l = 0 or B_l = 0 for all l ∈ [k]. Then, for each l ∈ [k], we must either have Y_l^J ≡ B_l = 0 or X^J,I_l ≡ A_l = 0, but X^J,I_l=0 means that (u^J,I, v^J,l) is not an edge in M. Thus, the protocol will always answer correctly when DISJ(A,B)=0, i.e., it has a one-sided error.

Now suppose that A_l = B_l = 1 for some l ∈ [k]. Note that there is at most one such l according to our construction, which we denote by L. The output of the protocol is correct if (u^J,I, v^J,L) is an edge in M. We next bound the probability of this event. For each G^j and for z∈{0,1}, we let

U_z^j = {u^j,i ∈ U^j : DISJ(X^j,i, Y^j) = z}, V_z^j = {v^j,i ∈ V^j : Y_i^j = z}

and

U_z = ∪_j∈[r] U_z^j and V_z = ∪_j∈[r] V_z^j.

Intuitively, the edges between the vertices U_0 ∪ U_1 and V_0 can be regarded as noisy edges, because the total number of such edges is large but the maximum matching they can form is small (Lemma <ref> below). On the other hand, the edges between the vertices U_1 and V_1 can be regarded as important edges, because the maximum matching they can form is large although the total number of such edges is small. Note that there is no edge between the vertices U_0 and V_1. See Figure <ref> for an illustration. To find a good matching we must choose many edges from the set of important edges. A key property is that all important edges are statistically identical, that is, each important edge is equally likely to be the edge (u^J,I, v^J,L). Thus, (u^J,I, v^J,L) will be included in the matching returned by Π' with a large enough probability. Using this, we can answer whether X^J,I and Y^J intersect or not, thus solving the original DISJ problem.

Recall that we set p=α/20 ≤ 1/20 and δ=(1-p)^2/2. Thus, 9/20 < δ < 1/2. In the following, we assume α ≥ c/√n for some constant c, since otherwise the Ω(α^2 kn) lower bound is dominated by the trivial lower bound of k. (Since none of the sites can see the messages sent by the other sites to the coordinator, unless this is communicated by the coordinator, each site needs to communicate with the coordinator at least once to determine the status of the protocol.)

With probability at least 1-1/100, V_0 ≤ 2pn. Indeed, note that each vertex in ∪_j∈[r] V^j is included in V_0 independently with probability p(2-p). Hence, E[V_0] = p(2-p)n/2, and by Hoeffding's inequality we have

Pr[V_0 ≥ 2pn] ≤ Pr[V_0 - E[V_0] ≥ pn] ≤ e^-2p^2 n ≤ 1/100.

Notice that Lemma <ref> implies that, with probability at least 1-1/100, the size of a maximum matching formed by the edges between the vertices V_0 and U_0 ∪ U_1 is at most 2pn.

With probability at least 1-1/100, the size of a maximum matching in G is at least n/5. To see this, consider the size of a matching in G^j for an arbitrary j∈[r]. For each i ∈ [k], let L^i be the index l∈[k] such that X_l^j,i = Y_l^j = 1 if such an l exists (note that by our construction at most one such index exists), and let L^i be NULL otherwise. We use a greedy algorithm to construct a matching between the vertices U^j and V^j. For i∈[k], we connect u^j,i and v^j,L^i if L^i is not NULL and v^j,L^i is not connected to some u^j,i' for i' < i. The size of the matching constructed in this way is equal to the number of distinct non-NULL elements in {L^1, L^2, …, L^k}, which we denote by R. We next establish the following claim:

Pr[R ≥ k/4] ≥ 1 - O(1/k).

By our construction, we have E[U_1^j] = δk and E[V_1^j] = (1-p)^2 k. By Hoeffding's inequality, with probability 1 - e^-Ω(k),

V_1^j ≥ (9/10) E[V_1^j] ≥ (4/5)k and U_1^j ≥ (9/10) E[U_1^j] ≥ (2/5)k.
It follows that, with probability 1 - e^-Ω(k), R is at least as large as the quantity R' defined as follows. Consider a balls-into-bins process with s balls and t bins: throw each ball into a bin sampled uniformly at random from the set of all bins, and let Z be the number of non-empty bins at the end of this process. Then it is straightforward to observe that the expected number of non-empty bins is

E[Z] = t(1-(1-1/t)^s) ≥ t(1-e^-s/t).

By Lemma 1 in <cit.>, for 100 ≤ s ≤ t/2, the variance of the number of non-empty bins satisfies (the constants used here are slightly different from <cit.>)

Var[Z] ≤ 5s^2/t.

Let R' be the number of non-empty bins in the balls-into-bins process with s = 2k/5 balls and t = 4k/5 bins. Then we have

E[R'] ≥ (4/5)k(1-1/√e) and Var[R'] ≤ 5(2k/5)^2/(4k/5) = k.

By Chebyshev's inequality,

Pr[R' < E[R'] - k/20] ≤ Var[R']/(k/20)^2 ≤ 400/k.

Hence, with probability 1 - O(1/k), R ≥ R' ≥ k/4, which proves the claim in (<ref>).

It follows that for each G^j we can find a matching in G^j of size at least k/4 with probability 1 - O(1/k). If r = n/(2k) = o(k), then by the union bound it holds that, with probability at least 1-1/100, the size of a maximum matching in G is at least n/4 ≥ n/5. Otherwise, let R^1, R^2, …, R^r be the sizes of the matchings that are independently computed using the greedy matching algorithm described above for the respective input graphs G^1, G^2, …, G^r. Let Z_j = 1 if R^j ≥ k/4, and Z_j = 0 otherwise. Since R^j ≥ kZ_j/4 for all j∈[r] and E[Z_j] = 1 - O(1/k), by Hoeffding's inequality we have

Pr[∑_j=1^r R^j < n/5] ≤ Pr[∑_j=1^r Z_j < 4n/(5k)] ≤ e^-Ω(r).

Hence, the size of a maximum matching in G is at least n/5 with probability at least 1-e^-Ω(r) ≥ 1-1/100.

If Π' is an α-approximation algorithm with error probability at most 1/4, then by Lemma <ref>, with probability at least 3/4 - 1/100 ≥ 2/3, Π' outputs a matching M that contains at least αn/5 - 2pn important edges; we denote this event by ℱ. We know that there are at most n/2 important edges and that the edge (u^J,I, v^J,L) is one of them. We say that (i,j,l) is important for G if (u^j,i, v^j,l) is an important edge in G. Given an input G, the algorithm cannot distinguish between any two important edges. We can apply the principle of deferred decisions to decide the value of (I,J) after the matching has already been computed, which means that, conditioned on ℱ, the probability that (u^J,I, v^J,L) ∈ M is at least (αn/5 - 2pn)/(n/2) = α/5, where p = α/20. Since ℱ happens with probability at least 2/3, we have Pr[(u^J,I, v^J,L) ∈ M] ≥ α/10. To sum up, we have shown that the constructed protocol solves DISJ correctly with one-sided error of at most 1 - α/10.
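The balls-into-bins estimates used in the claim above are easy to check numerically. The following sketch is an illustration, not part of the proof; it compares the empirical mean and variance of the number of non-empty bins with the exact expectation t(1-(1-1/t)^s) and with the variance bound 5s^2/t, for s = 2k/5 and t = 4k/5.

```python
import numpy as np

rng = np.random.default_rng(1)

def nonempty_bins(s, t, trials=2000):
    """Throw s balls into t bins uniformly at random, `trials` times,
    and return the empirical mean and variance of the number of
    non-empty bins."""
    counts = [len(set(rng.integers(0, t, size=s).tolist()))
              for _ in range(trials)]
    return float(np.mean(counts)), float(np.var(counts))

k = 2000
s, t = 2 * k // 5, 4 * k // 5
mean, var = nonempty_bins(s, t)
print(mean, t * (1 - (1 - 1 / t) ** s))  # empirical mean vs exact expectation
print(var, 5 * s ** 2 / t)               # empirical variance vs the bound
```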
Information cost. We analyze the information cost of the constructed protocol. Let Π = Π^1 ∘ Π^2 ∘ ⋯ ∘ Π^k be the best protocol for the matching problem with respect to the input distribution ϕ and the one-sided error probability 1 - α/10. Let W^-J = (W^1, …, W^J-1, W^J+1, …, W^r), and W = (W^1, W^2, …, W^r). Let W_A,B ∼ τ_p^k denote the random variable used to sample (A,B) from μ_k. Recall that in our input reduction I, J, W^-J are public coins used by Alice and Bob. We have the following:

2/n · IC_ϕ,δ ≥ 1/(rk) ∑_i=1^k I(X^i, Y; Π^i) ≥ 1/(rk) ∑_i=1^k I(X^i, Y; Π^i | W) ≥ 1/(rk) ∑_i=1^k ∑_j=1^r I(X^j,i, Y^j; Π^i | W^-j, W^j) = 1/(rk) ∑_i=1^k ∑_j=1^r I(A, B; Π^i | I = i, J = j, W^-j, W_A,B) = I(A, B; Π^I | I, J, W^-J, W_A,B) ≥ I(A, B; Π^* | W_A,B, R) = Ω(α^2 k),

where (<ref>) is by Lemma <ref>, (<ref>) is by the data processing inequality, (<ref>) is by the super-additivity property, (<ref>) holds because the distribution of W^j is the same as that of W_A,B, and the conditional distribution of (X^j,i, Y^j, Π^i) given W^-j, W^j is the same as the conditional distribution of (A, B, Π^i) given I = i, J = j, W^-j, W_A,B; in (<ref>), Π^* is the best protocol for DISJ with one-sided error probability at most 1 - α/10 and R is the public randomness used in Π^*; and (<ref>) holds by Theorem <ref>, where we recall that we have set p = α/20.

We have thus shown that IC_ϕ,1/4 ≥ Ω(α^2 kn). Since, by Theorem <ref>, R_1/4 ≥ IC_ϕ,1/4, it follows that

R_1/4 ≥ Ω(α^2 kn),

which proves Theorem <ref>.

§ UPPER BOUND

In this section we present an α-approximation algorithm for the distributed matching problem whose communication complexity matches the lower bound for any α ≤ 1/2 up to poly-logarithmic factors. In Section <ref> we described a simple greedy algorithm that computes a maximal matching and hence guarantees a 1/2-approximation, at a communication cost of O(kn log n) bits. The communication cost of the algorithm presented in this section is O(α^2 kn log n) bits. If 1/8 < α ≤ 1/2, we simply apply the greedy 1/2-approximation algorithm with communication cost O(kn log n) bits. Therefore, we assume that α ≤ 1/8 in the rest of this section. We next present an α-approximation algorithm that uses the greedy maximal matching algorithm as a subroutine.

Algorithm: The algorithm consists of two steps:
* The coordinator sends a message to each site asking it to compute a local maximum matching, and each site then reports back to the coordinator the size of its local maximum matching. The coordinator sends a message to a site that holds a local maximum matching of maximum size, and this site then responds by sending back to the coordinator at most αn edges from its local maximum matching. Then the algorithm proceeds to the second step.
* The coordinator selects each site independently with probability q, where q is set to 8α (recall that we assume α ≤ 1/8), and computes a maximal matching by applying the greedy maximal matching algorithm to the selected sites.
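A minimal sketch of the two steps follows; communication is only simulated, and a greedy maximal matching stands in for the local maximum matchings of Step 1 (an assumption made to keep the sketch short: a maximal matching is within a factor 2 of a maximum one).

```python
import random

def greedy_maximal(edges):
    """Greedy maximal matching over a list of edges (x, y)."""
    matched, M = set(), []
    for x, y in edges:
        if x not in matched and y not in matched:
            matched.update((x, y))
            M.append((x, y))
    return M

def two_step_matching(sites, n, alpha):
    """Sketch of the two-step algorithm; `sites` is the list of per-site
    edge lists.  Returns the larger of the Step 1 and Step 2 matchings."""
    # Step 1: the site with the largest local matching reports at most
    # alpha * n of its edges to the coordinator.
    local = [greedy_maximal(E) for E in sites]
    m1 = max(local, key=len)[: int(alpha * n)]
    # Step 2: sample each site independently with probability q = 8*alpha
    # and run the greedy maximal matching over the sampled sites' edges.
    q = 8 * alpha
    sampled = [e for E in sites if random.random() < q for e in E]
    m2 = greedy_maximal(sampled)
    return max(m1, m2, key=len)
```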
It is readily observed that the expected communication cost of Step 1 is at most O((k+αn)log n) bits, and that the communication cost of Step 2 is at most O((k+α^2 kn)log n) bits. We next show the correctness of the algorithm.

Correctness of the algorithm. Let X_i be a random variable indicating whether or not site p^i is selected in Step 2. Note that E[X_i] = q and Var[X_i] = q(1-q). Let M be a maximum matching in G and let m denote its size. Let m_i be the number of edges in M which belong to site p^i. Hence, we have ∑_i=1^k m_i = m, because the edges of G are assumed to be partitioned disjointly over the k sites. We can assume that m_i ≤ αm for all i ∈ [k]; otherwise, the coordinator has already obtained an α-approximation from Step 1. Let Y be the size of the maximal matching that is output in Step 2. Recall that any maximal matching is at least 1/2 of any maximum matching. Thus we have Y ≥ X/2, where X = ∑_i=1^k m_i X_i. Note that we have E[X] = qm and Var[X] = q(1-q)∑_i=1^k m_i^2. Under the constraint m_i ≤ αm for all i∈[k], we have

∑_i=1^k m_i^2 ≤ αm ∑_i=1^k m_i = αm^2.

Hence, combining with the assumption q = 8α, it follows that Var[X] ≤ 8α^2 m^2. By Chebyshev's inequality, we have

Pr[|X - qm| ≥ 6αm] ≤ 8/36 < 1/4.

Since q = 8α, it follows that X ≥ 2αm with probability at least 3/4. Combining with Y ≥ X/2, we have that Y ≥ αm with probability at least 3/4. We have shown the following theorem.

For every α ≤ 1/2, there exists a randomized algorithm that computes an α-approximation of a maximum matching with probability at least 3/4 at a communication cost of O((α^2 kn + αn + k)log n) bits.

Note that Ω(αn) is a trivial lower bound, simply because the size of the output could be as large as Ω(αn). Obviously, Ω(k) is a lower bound, because the coordinator has to send at least one message to each site. Thus, together with the lower bound Ω(α^2 kn) of Theorem <ref>, the upper bound above is tight up to a log n factor. One can see that the above algorithm needs O(αk) rounds, as we use a naive algorithm to compute a maximal matching among αk sites. If k is large, say n^β for some constant β ∈ (0,1), this may not be acceptable. Fortunately, Luby's parallel algorithm <cit.> can easily be adapted to our model, using only O(log n) rounds at the cost of increasing the communication by at most a log n factor. The details are provided in Appendix <ref>.

§ CONCLUSION

We have established a tight lower bound on the communication complexity of the approximate maximum matching problem in the message-passing model. An interesting open problem is the complexity of the counting version of the problem, i.e., the communication complexity if we only want to compute an approximation of the size of a maximum matching in a graph. Note that our proof of the lower bound relies on the fact that the algorithm has to return a certificate of the matching. Hence, in order to prove a lower bound for the counting version of the problem, one may need new ideas, and it is also possible that a better upper bound exists. In a recent work <cit.>, the counting version of the matching problem was studied in the random-order streaming model; the authors proposed an algorithm that uses one pass and poly-logarithmic space, and computes a poly-logarithmic approximation of the size of a maximum matching in the input graph. A general interesting direction for future research is to investigate the communication complexity of other combinatorial problems on graphs, for example, connected components, minimum spanning tree, vertex cover and dominating set. The techniques used for approximate maximum matching in the present paper could be of use in addressing these other problems.

§ LUBY'S ALGORITHM IN THE COORDINATOR MODEL

Luby's algorithm <cit.>: Let G=(V,E) be the input graph, and let M be a matching initialized to ∅. Luby's algorithm for maximal matching is as follows.
* If E is empty, return M.
* Randomly assign a unique priority π_e to each e∈E.
* Let M' be the set of edges in E with higher priority than all of their neighboring edges. Delete M' and all the neighboring edges of M' from E, add M' to M, and go to step <ref>.
It is easy to verify that the output M is a maximal matching. The number of iterations before E becomes empty is O(log n) in expectation <cit.>. Next we briefly describe how to implement this algorithm in the coordinator model. Let E^i be the edges held by site p^i.
* For each i, if E^i is empty, p^i halts.
Otherwise, p^i randomly assigns a unique priority π_e to each e∈E^i.
* Let M'^i be the set of edges in E^i with higher priority than all of their neighboring edges in E^i. Then p^i sends M'^i together with their priorities to the coordinator.
* The coordinator gets W = M'^1 ∪ M'^2 ∪ ⋯ ∪ M'^k. Let M' be the set of edges in W with higher priority than all of their neighboring edges in W. The coordinator adds M' to M and then sends M' to all sites.
* Each site p^i deletes all neighboring edges of M' from E^i, and goes to step <ref>.
* After all the sites halt, the coordinator outputs M.
It is easy to see that the above algorithm simulates Luby's algorithm. Therefore, the correctness follows from the correctness of Luby's algorithm, and the number of rounds is the same, which is O(log n). The communication cost in each round is at most O(kn log n) bits because, in each round, each site sends a matching to the coordinator, and the coordinator sends back another matching. Hence, the total communication cost is O(kn log^2 n) bits.

AG11 Ahn, K.J., Guha, S.: Laminar families and metric embeddings: Non-bipartite maximum matching problem in the semi-streaming model. CoRR abs/1104.4058 (2011). <http://arxiv.org/abs/1104.4058>
AG11b Ahn, K.J., Guha, S.: Linear programming in the semi-streaming model with application to the maximum matching problem. Inf. Comput. 222, 59–79 (2013)
AGM12b Ahn, K.J., Guha, S., McGregor, A.: Analyzing graph structure via linear measurements. In: Proceedings of the Twenty-third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '12, pp. 459–467 (2012)
AGM12 Ahn, K.J., Guha, S., McGregor, A.: Graph sketches: Sparsification, spanners, and subgraphs. In: Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS '12, pp. 5–14 (2012)
AMS99 Alon, N., Matias, Y., Szegedy, M.: The space complexity of approximating the frequency moments. Journal of Computer and System Sciences 58(1), 137–147 (1999)
ANRW15 Alon, N., Nisan, N., Raz, R., Weinstein, O.: Welfare maximization with limited interaction. In: Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pp. 1499–1512 (2015)
AKLY16 Assadi, S., Khanna, S., Li, Y., Yaroslavtsev, G.: Maximum matchings in dynamic graph streams and the simultaneous communication model, chap. 93, pp. 1345–1364 (2016)
BYJKS02 Bar-Yossef, Z., Jayram, T., Kumar, R., Sivakumar, D.: Special issue on FOCS 2002: An information statistics approach to data stream and communication complexity. Journal of Computer and System Sciences 68(4), 702–732 (2004)
barak2010compress Barak, B., Braverman, M., Chen, X., Rao, A.: How to compress interactive communication. SIAM Journal on Computing 42(3), 1327–1363 (2013)
braverman2012interactive Braverman, M.: Interactive information complexity. SIAM Journal on Computing 44(6), 1698–1739 (2015)
BEOPV13 Braverman, M., Ellen, F., Oshman, R., Pitassi, T., Vaikuntanathan, V.: A tight bound for set disjointness in the message-passing model. In: Proceedings of the 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS '13, pp. 668–677 (2013)
Chakrabarti01 Chakrabarti, A.: Informational complexity and the direct sum problem for simultaneous message complexity. In: Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, FOCS '01, pp. 270– (2001)
cover2006 Cover, T., Thomas, J.: Elements of Information Theory. Wiley-Interscience (2006)
DNO14 Dobzinski, S., Nisan, N., Oren, S.: Economic efficiency requires interaction.
In: Proceedings of the 46th Annual ACM Symposium on Theory of Computing, STOC '14, pp. 233–242. ACM, New York, NY, USA (2014). 10.1145/2591796.2591815. <http://doi.acm.org/10.1145/2591796.2591815> ELMS11 Epstein, L., Levin, A., Mestre, J., Segev, D.: Improved approximation guarantees for weighted matching in the semi-streaming model. SIAM Journal on Discrete Mathematics 25(3), 1251–1265 (2011) GSZ11 Goodrich, M.T., Sitchinava, N., Zhang, Q.: Sorting, searching, and simulation in the mapreduce framework. Algorithms and Computation 7074 of the series Lecture Notes in Computer Science, 374–383 (2011) israeli1986fast Israeli, A., Itai, A.: A fast and simple randomized parallel algorithm for maximal matching. Information Processing Letters 22(2), 77–80 (1986) kane10:_optim Kane, D.M., Nelson, J., Woodruff, D.P.: An optimal algorithm for the distinct elements problem. In: Proceedings of the Twenty-ninth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS '10, pp. 41–52 (2010) K13 Kapralov, M.: Better bounds for matchings in the streaming model, chap. 121, pp. 1679–1697 KKS14 Kapralov, M., Khanna, S., Sudan, M.: Approximating matching size from random streams, chap. 55, pp. 734–751 KSV10 Karloff, H., Suri, S., Vassilvitskii, S.: A model of computation for mapreduce. In: Proceedings of the Twenty-first Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '10, pp. 938–948 (2010) KNPR13 Klauck, H., Nanongkai, D., Pandurangan, G., Robinson, P.: The distributed complexity of large-scale graph processing. CoRR abs/1311.6209 (2013) Konrad15 Konrad, C.: Maximum Matching in Turnstile Streams, pp. 840–852. Springer Berlin Heidelberg, Berlin, Heidelberg (2015) KonradMM12 Konrad, C., Magniez, F., Mathieu, C.: Maximum Matching in Semi-streaming with Few Passes, pp. 231–242. Springer Berlin Heidelberg, Berlin, Heidelberg (2012) kushilevitz1997communication Kushilevitz, E., Nisan, N.: Communication Complexity. Cambridge University Press LMSV11 Lattanzi, S., Moseley, B., Suri, S., Vassilvitskii, S.: Filtering: A method for solving graph problems in mapreduce. In: Proceedings of the Twenty-third Annual ACM Symposium on Parallelism in Algorithms and Architectures, SPAA '11, pp. 85–94 (2011) LPP08 Lotker, Z., Patt-Shamir, B., Pettie, S.: Improved distributed approximate matching. In: Proceedings of the Twentieth Annual Symposium on Parallelism in Algorithms and Architectures, SPAA '08, pp. 129–136 (2008) LPR07 Lotker, Z., Patt-Shamir, B., Rosen, A.: Distributed approximate matching. In: Proceedings of the Twenty-sixth Annual ACM Symposium on Principles of Distributed Computing, PODC '07, pp. 167–174 (2007) luby1986simple Luby, M.: A simple parallel algorithm for the maximal independent set problem. SIAM journal on computing 15(4), 1036–1053 (1986) M05 McGregor, A.: Finding Graph Matchings in Data Streams, pp. 170–181. Springer Berlin Heidelberg, Berlin, Heidelberg (2005) open McGregor, A.: Question 16: Graph matchings: Open problems in data streams and related topics. In: Workshop on algorithms for data streams (2006). <http://www.cse.iitk.ac.in/users/sganguly/data-stream-probs.pdf> PVZ12 Phillips, J.M., Verbin, E., Zhang, Q.: Lower bounds for number-in-hand multiparty communication complexity, made easy. SIAM Journal on Computing 45(1), 174–196 (2016) WW04 Wattenhofer, M., Wattenhofer, R.: Distributed Weighted Matching, pp. 335–348. Springer Berlin Heidelberg, Berlin, Heidelberg (2004) WZ12 Woodruff, D.P., Zhang, Q.: Tight bounds for distributed functional monitoring. 
In: Proceedings of the Forty-fourth Annual ACM Symposium on Theory of Computing, STOC '12, pp. 941–960 (2012) WZ14 Woodruff, D.P., Zhang, Q.: An Optimal Lower Bound for Distinct Elements in the Message Passing Model, chap. 54, pp. 718–733 (2014) WZ13 Woodruff, D.P., Zhang, Q.: When distributed computation is communication expensive. Distributed Computing pp. 1–15 (2014) Z12 Zelke, M.: Weighted matching in the semi-streaming model. Algorithmica 62(1), 1–20 (2012)
http://arxiv.org/abs/1704.08462v1
{ "authors": [ "Zengfeng Huang", "Bozidar Radunovic", "Milan Vojnovic", "Qin Zhang" ], "categories": [ "cs.DS" ], "primary_category": "cs.DS", "published": "20170427074913", "title": "Communication complexity of approximate maximum matching in the message-passing model" }
We prove that if an analytic map f:=(f_1,… ,f_n):U⊂ℂ^n→ℂ^n admits an algebraic addition theorem then there exists a meromorphic map g:=(g_1,… ,g_n):ℂ^n⇢ℂ^n admitting an algebraic addition theorem such that f_1,… ,f_n are algebraic over ℂ(g_1,… ,g_n) on U (this was proved by K. Weierstrass in dimension 1). Furthermore, (g_1,… ,g_n) admits a rational addition theorem.

An extension result for maps admitting an algebraic addition theorem
E. Baro, J. de Vicente, M. Otero
=======================================================================

§ INTRODUCTION

The aim of this paper is to study maps admitting an algebraic addition theorem, maps whose coordinate functions can be viewed as limiting (degenerate) cases of abelian functions. Let 𝕂 be ℂ or ℝ and let M_𝕂,n be the quotient field of 𝒪_𝕂,n, the ring of power series in n variables with coefficients in 𝕂 that are convergent in a neighborhood of the origin.

Definition. Let u and v be variables of ℂ^n. We say (ϕ_1,… ,ϕ_n)∈M_𝕂,n^n admits an algebraic addition theorem (AAT) if ϕ_1,… ,ϕ_n are algebraically independent over 𝕂 and if each ϕ_i(u+v), i=1,… ,n, is algebraic over 𝕂(ϕ_1(u),… ,ϕ_n(u),ϕ_1(v),… ,ϕ_n(v)).
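The classical one-dimensional example is the Weierstrass ℘-function. The following is a sketch recalling well-known identities, not a result of this paper:

```latex
% \wp satisfies (\wp')^2 = 4\wp^3 - g_2\wp - g_3, so \wp' is algebraic
% over \mathbb{C}(\wp).  The classical addition formula
\wp(u+v) \;=\; \frac{1}{4}
  \left(\frac{\wp'(u)-\wp'(v)}{\wp(u)-\wp(v)}\right)^{2} - \wp(u) - \wp(v)
% expresses \wp(u+v) rationally in \wp(u),\wp'(u),\wp(v),\wp'(v), hence
% \wp(u+v) is algebraic over \mathbb{C}(\wp(u),\wp(v)): \wp admits an AAT.
```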
The concept of AAT was introduced by K. Weierstrass during his lectures on abelian functions in Berlin in 1870 (see <cit.>). He stated that the coordinate functions of a global meromorphic map admitting an AAT are either abelian functions or degenerate abelian functions. He proved it for dimension 1, and F. Severi in <cit.> (see also Y. Abe <cit.>) for dimension n. Weierstrass also proved the following extension result: the germ of an analytic function admitting an AAT can be transformed algebraically into the germ of a global function admitting an AAT; and he stated, without a proof, an n-dimensional version of it. As far as we know, no such proof existed in the literature so far. We prove it here as a consequence (Corollary <ref>) of the main result of the paper, which we now state.

Let ϕ:=(ϕ_1,… ,ϕ_n)∈M_𝕂,n^n admit an AAT. Then there exist ψ:=(ψ_1,… ,ψ_n)∈M_𝕂,n^n admitting an AAT and algebraic over 𝕂(ϕ), and an additional meromorphic series ψ_0∈M_𝕂,n algebraic over 𝕂(ψ), such that:
(1) for each f(u)∈𝕂(ψ_0(u),… ,ψ_n(u)),
(a) f(u+v)∈𝕂(ψ_0(u),… ,ψ_n(u),ψ_0(v),… ,ψ_n(v)) and
(b) f(-u)∈𝕂(ψ_0(u),… ,ψ_n(u));
(2) each of ψ_0,… ,ψ_n is the quotient of two convergent power series whose complex domain of convergence is ℂ^n.

Any ϕ∈M_𝕂,n^n admitting an AAT is algebraic over 𝕂(ψ) for some ψ∈M_𝕂,n^n admitting an AAT and whose coordinate functions are quotients of two convergent power series whose complex domain of convergence is ℂ^n.

We point out that this theorem gives not only an extension result, but also a uniform rational version of the AAT. In fact, given a ϕ admitting an AAT, we obtain the rational version in Theorem <ref> (1a) through the coefficients of the polynomial associated to each ϕ_i(u+v). Then, we obtain the extension result of Theorem <ref> (2) by considering the rational expression obtained in Theorem <ref> (1a). In particular, this shows that any ϕ admitting an AAT can be analytically extended to a multivalued analytic map with a finite number of branches. Thus, we provide a new way of proving Weierstrass' extension result in dimension 1, whose classical proofs go the other way around (and do not provide a rational counterpart): first the finiteness of the number of branches of the extension of such an analytic ϕ is proved, and then, making use of the coefficients of the relevant polynomials, a global single-valued meromorphic function admitting an AAT is constructed.

The motivation for the results of this paper is the study of abelian locally 𝕂-Nash groups, for 𝕂=ℝ or ℂ. Charts at the identity of such groups admit an AAT. Locally Nash groups (i.e. for 𝕂=ℝ) were studied by J.J. Madden and C.M. Stanton <cit.> and M. Shiota <cit.>, mainly in dimension 1. In particular, the Extension Theorem will allow us to reduce the study of simply connected abelian locally Nash groups to those whose charts are restrictions of (global) meromorphic functions admitting an AAT (see <cit.>). The results of this paper are part of the second author's Ph.D. dissertation.

§ THE EXTENSION THEOREM

For each ϵ>0, let U_𝕂,n(ϵ):={a∈𝕂^n : ||a||<ϵ}. We will only consider convergence over open subsets of ℂ^n; let U_n(ϵ):=U_ℂ,n(ϵ). We say that (ϕ_1,… ,ϕ_m)∈M_𝕂,n^m is convergent in U_n(ϵ) if each of ϕ_1,… ,ϕ_m is the quotient of two power series convergent on U_n(ϵ). As usual, by the identity principle for analytic functions, we identify 𝒪_𝕂,n with the ring of germs of analytic functions at 0, and M_𝕂,n with its quotient field. We will use without mention properties of 𝒪_𝕂,n; see e.g. R.C. Gunning and H. Rossi <cit.> and J.M. Ruiz <cit.>.

Let ϵ>0. Let ϕ:=(ϕ_1,… ,ϕ_m)∈M_𝕂,n^m be convergent on U_n(ϵ), let a∈U_𝕂,n(ϵ) and let (u,v):=(u_1,… ,u_n,v_1,… ,v_n) be a 2n-tuple of variables. We will use the following notation:

ϕ_(u,v) := (ϕ_1(u),… ,ϕ_m(u),ϕ_1(v),… ,ϕ_m(v)) ∈ M_𝕂,2n^2m,
ϕ_u+v := (ϕ_1(u+v),… ,ϕ_m(u+v)) ∈ M_𝕂,2n^m,
ϕ_u+a := (ϕ_1(u+a),… ,ϕ_m(u+a)) ∈ M_𝕂,n^m.

Given ϕ∈M_𝕂,p^n and ψ∈M_𝕂,p^m, we say that the tuple ϕ is algebraic over 𝕂(ψ):=𝕂(ψ_1,… ,ψ_m) if each component ϕ_1,… ,ϕ_n is algebraic over 𝕂(ψ). Thus, ϕ∈M_𝕂,n^n admits an algebraic addition theorem (AAT) if ϕ_1,… ,ϕ_n are algebraically independent over 𝕂 and ϕ_u+v is algebraic over 𝕂(ϕ_(u,v)). Note that if ϕ∈M_ℝ,n admits an AAT then ϕ also admits an AAT when considered as an element of M_ℂ,n. We first prove two properties of maps admitting an AAT.

Let ϵ>0 and let ϕ∈M_𝕂,n^n be convergent on U_n(ϵ). If ϕ admits an AAT then ϕ_u+a is algebraic over 𝕂(ϕ), for each a∈U_𝕂,n(ϵ). Fix j∈{1,… ,n} and let f(u,v):=ϕ_j(u+v). By hypothesis, there exists P∈𝕂[X_1,… ,X_2n][Y] such that P(ϕ(u),ϕ(v);Y)≠0 and P(ϕ(u),ϕ(v);f(u,v))=0. For any a∈U_𝕂,n(ϵ) such that P(ϕ(u),ϕ(a);Y) is not identically zero, we clearly obtain that f(u,a) is algebraic over 𝕂(ϕ). We have to consider those a∈U_𝕂,n(ϵ) such that P(ϕ(u),ϕ(a);Y) is identically zero. We first check that there exists an open dense subset U of U_𝕂,n(ϵ) such that for each a∈U, P(X_1,… ,X_n,ϕ(a);Y)∈𝕂[X_1,… ,X_n][Y] is a non-zero polynomial. Let W be an open dense subset of U_𝕂,n(ϵ) such that W⊂{a∈U_𝕂,n(ϵ) : ϕ(a)∈𝕂^n} and ϕ:W→𝕂^n is analytic. Let U:={a∈W : P(X_1,… ,X_n,ϕ(a);Y)≠0}. Since W is an open dense subset of U_𝕂,n(ϵ), it is enough to show that W∖U is closed and nowhere dense in W. Clearly W∖U is closed in W because ϕ is continuous on W.
To prove the density, we note that if W∖ U contains an open subset of W then{ a∈ U_𝕂,n(ϵ ) P(ϕ (u),ϕ (a);Y)∈M_𝕂,n+1 and P(ϕ (u),ϕ (a);Y)=0}contains an open subset of U_𝕂,n(ϵ ) and therefore P(ϕ (u),ϕ (v);Y)=0, a contradiction.To finish the proof we will show that for each a∈ U_𝕂,n(ϵ ), there existsQ_a∈𝕂[X_1,… ,X_n][Y] such that Q_a (ϕ (u);Y) is not identically zero and Q_a (ϕ (u);f(u,a))=0. We follow the proof of <cit.>. For each a∈ U, where U is as above, letP_a(X_1,… ,X_n;Y)=∑ _i,μ≤ N b_i,μ ,a X_1^μ _1… X_n^μ _nY ^idenote the polynomial P(X_1,… ,X_n,ϕ (a);Y). We have that U is dense in U_𝕂,n(ϵ ) and P_a≠ 0 for all a∈ U. For each a∈ U, we defineE(P_a):= ∑ _i,μ≤ N b_i,μ ,a ^2.We note that E(P_a)>0, for all a∈ U. For each a∈ U, let Q_a(X_1,… ,X_n;Y):=∑ _i,μ≤ N c_i,μ ,a X_1^μ _1… X_n^μ _nY ^i,wherec_i,μ ,a:=b_i,μ ,a/√(E(P_a)).Hence, for each a∈ U, we have that Q_a(ϕ (u);Y) is not identically zero, Q_a(ϕ (u);f(u,a))=0 and E(Q_a)=1. We definev⃗(a):=(c_i,μ ,a)_i,μ≤ N∈{ z∈𝕂^(N+1)^(n+1)z=1 }.Take a∈ U_𝕂,n(ϵ)∖ U. Since U is an open dense subset of U_𝕂,n(ϵ ), there exists a sequence {a_k}_k ∈ℕ⊂ Uthat converges to a. For each a_k, the identity Q_a_k(ϕ (u) ;f(u,a_k))=0 holds, therefore∑ _i,μ≤ N c_i,μ ,a_kϕ _1( u) ^μ _1…ϕ _n(u)^μ _n f (u,a_k)^i=0.By hypothesis there are α ,β∈𝒪_𝕂,2n, β≠ 0, convergent onU_2n(ϵ), such that f(u,v) =α (u,v)/β (u,v) and β (u,a)≠ 0 forall a∈ U_𝕂,n(ϵ ). In particular∑ _i,μ≤ N c_i,μ ,a_kϕ _1(u) ^μ _1…ϕ _n(u)^μ _nα (u,a_k)^iβ (u,a_k)^N-i=0.Since { z∈𝕂^(N+1)^(n+1) z=1 } is compact, taking a suitable subsequence we can assume that thesequence {v⃗(a_k)}_k∈ℕ is convergent. For each i,μ≤ N, we definec_i,μ ,a:=lim _k →∞ c_i,μ ,a_k.Since α and β are continuous, when k tends to infinity equation (<ref>) becomes∑ _i,μ≤ N c_i,μ ,aϕ _1(u) ^μ _1…ϕ _n(u)^μ _nα (u,a)^iβ (u,a)^N-i=0.So dividing by β (u,a)^N, we also have∑ _i,μ≤ N c_i,μ ,aϕ _1(u) ^μ _1…ϕ _n(u)^μ _n f(u,a)^i=0and hence the polynomialQ_a(X_1,… ,X_n;Y):= ∑ _i,μ≤ N c_i,μ ,aX_1 ^μ _1… X_n^μ _n Y^isatisfies Q_a(ϕ (u) ,f(u,a))=0. We note that E(Q_a)=lim _k→∞ E(Q_a_k) =1, so Q_a≠ 0. Since ϕ _1,… ,ϕ _n are algebraically independent over 𝕂 and Q_a(X_1,… ,X_n,Y)≠ 0, we haveQ_a(ϕ (u), Y) is not identically zero.Let ϕ ,ψ∈M_𝕂,n^n and suppose that ϕ is algebraic over 𝕂(ψ ). If ϕ admits an AAT then ψ admits an AAT. The converse is also true, provided ϕ_1,… ,ϕ _n are algebraically independent over 𝕂. Assume that ϕ admits an AAT, hence ψ _1,… ,ψ _n are algebraically independent over 𝕂 becauseϕ is algebraic over𝕂(ψ ). To check that ψ _ u+ v is algebraic over 𝕂(ψ _(u,v)) it is enough to show thatψ _ u+ v is algebraic over 𝕂(ϕ _ u+ v), ϕ _ u+ v is algebraic over𝕂(ϕ _(u,v)) and ϕ _(u,v) is algebraic over 𝕂(ψ _(u,v)). The three conditions above are trivially satisfied because ϕ admits an AAT and both ϕ is algebraic over𝕂(ψ) and ψ is algebraic over 𝕂(ϕ). The converse follows by symmetry because if ϕ _1,… ,ϕ _n are algebraically independent over 𝕂 thenψ is algebraic over 𝕂(ϕ ). Now, we adapt to our context a result on AAT due to H.A.Schwarz, see <cit.> for details. Let ϵ >0 and let ϕ∈M_𝕂,n^n be convergent on U_n(ϵ ) such that it admits an AAT. Then, there exist a finite subset 𝒞⊂ U_𝕂,n(ϵ), with 0∈𝒞 and𝒞=-𝒞, and ϵ ' ∈ (0, ϵ] satisfying: each element of𝕂(ϕ _u+a a∈𝒞) is convergent on U_n(2ϵ '), and there exist A_0,… ,A_N∈𝕂(ϕ _(u+a,v+a) a∈𝒞) convergent on U_2n(2ϵ ') such that ϕ _u+v is algebraic over 𝕂(A_0,… , A_N)and, for each j∈{0,… ,N}, A_j(u,v)=A_j(u+a,v-a), for alla∈ U_𝕂,n(ϵ ').Fix i∈{1,… ,n}. Let 𝒮_0:={ 0} and 𝕂_0:=𝕂(ϕ _(u,v)). 
LetP_0(X)=X^ℓ _0+1+∑ _j=0^ℓ _0 A_0,j( u, v) X^jbe the minimal polynomial of ϕ _i(u+v) over 𝕂_0. If each A_0,j satisfies equation (<ref>) for ϵ '=2^-1ϵ then we are done for this i lettingϵ ':=2^-1ϵ, 𝒞:=𝒮_0 and A_j:=A_0,j, for each 0≤ j ≤ℓ _0. Otherwise, there exists a_1∈ U_𝕂,n(2^-1ϵ ) such thatQ_0(X):= X^ℓ _0+1+∑ _j=0^ℓ _0 A_0,j( u, v)X^j -X^ℓ _0+1-∑ _j=0^ℓ _0 A_0,j(u+a_1,v-a_1)X^jis not zero. Since u+v=(u+a_1)+(v-a_1), we deduce that ϕ _i(u+v) is a root of Q_0(X). Let 𝒮_1:=𝒮_0∪{a_1,-a_1} and𝕂_1:=𝕂(ϕ _u+a,v+a a∈𝒮_1). By definition 𝕂_0⊂𝕂_1. Let P_1(X)=X^ℓ _1+1+∑ _j=0^ℓ _1 A_1,j(u,v) X^jbe the minimal polynomial of ϕ _i(u+v) over 𝕂_1. We note that the elements of 𝕂_1 are convergent on U_2n(2^-1ϵ ). If each A_1,j satisfies equation (<ref>) for ϵ '=2^-2ϵ then we are done for this i lettingϵ ':=2^-2ϵ, 𝒞:=𝒮_1 and A_j:=A_1,j, for each 0≤ j ≤ℓ _1. Otherwise, we can repeat the process to obtain sets 𝒮_2, 𝒮_3 and so on, where the set𝒮_k is obtained from the set 𝒮_k-1 as𝒮_k:=𝒮_k-1∪{a+a_ka∈𝒮_k-1}∪{a-a_ka∈𝒮_k-1},for some a_k∈ U_𝕂,n(2 ^-kϵ ) such that Q_k-1 is not 0.Similarly, we obtain 𝕂_k:=𝕂(ϕ_u+a,v+a a∈𝒮_k) whose elements are convergent onU_2n(2^-kϵ). Since in the k repetition the degree of P_k is smaller than that of P_k-1, this process eventually stops, say at step s. Letting ϵ ':=2^-s-1ϵ, 𝒞:=𝒮_s and A_j:=A_s,j, for each 0≤ j ≤ℓ _s,we are done for this i. The elements A_0,… ,A_ℓ _s are convergent on U_2n(2ϵ ') because they are elements of 𝕂_s.For each i, 1≤ i ≤ n, denote by ϵ '_i, 𝒞_i and A_0^i,… ,A_N_i^i the elementsϵ ', 𝒞 and A_1,… ,A_ℓ _s previously obtained for that choice of i. To complete the proof, take 𝒞:=⋃ _i 𝒞_i, ϵ ' :=min _i {ϵ '_i}, and let{A_0,… ,A_N} be the union of the sets {A_0^i,… ,A_N_i^i}. We need two additional lemmas before proving the Extension Theorem. Let ϕ∈M_𝕂,n^n admit an AAT.Then, ϕ (-u) is algebraic over 𝕂(ϕ (u)). Take ϵ >0 such that ϕ∈M_𝕂,n^n is convergent on U_n(ϵ ). Since ϕ admits an AAT, we know that ϕ (u+v) is algebraic over 𝕂(ϕ (u),ϕ (v)). Taking into account transcendence degrees, it follows that ϕ (v) is algebraic over 𝕂(ϕ (u+v),ϕ (u)). For some a∈ U_𝕂,n(ϵ ), we may substitute v by -u+a, so ϕ (-u+a) is algebraic over 𝕂(ϕ (u)). By Lemma <ref>, ϕ (-u) is algebraic over 𝕂(ϕ (-u+a)) and hence over 𝕂(ϕ (u)).Let ϵ >0. Let ϕ∈M_𝕂,n^n be convergent on U_n(ϵ ) such that it admits an AAT. Then there exist ϵ _1∈ (0,ϵ ] and Ψ :=(ψ _0,… ,ψ _n)∈M_𝕂,n^n+1convergent on U_n(ϵ _1) and algebraic over𝕂(ϕ ) satisfying ψ :=(ψ _1,… ,ψ _n) admits an AAT,ψ _0 is algebraic over 𝕂(ψ ) and, for each f ∈𝕂(Ψ ), f(-u)∈𝕂(Ψ (u)), there exists δ∈ (0, ϵ _1] such that for each a∈ U_𝕂,n(δ ), f_u+a∈𝕂(Ψ)and f_u+a is convergent on U_n(ϵ _1). We will define a field 𝕃 generated over 𝕂 by certain elements of M_𝕂,n, next we willprove that each f∈𝕃 satisfy the conclusion of the lemma and finally we find a primitive element Ψ such that𝕃=𝕂(Ψ).Let ϵ ' ∈ (0, ϵ], 𝒞⊂ U_𝕂,n(ϵ ) andA_0,… ,A_N ∈𝕂(ϕ _(u+c,v+c) c∈𝒞)be the ones provided by Lemma <ref> for ϕ. Let U be an open dense subset of U_𝕂,n(ϵ ') such thatU⊂{a∈ U_𝕂,n(ϵ ') ϕ (a+c)∈𝕂^nfor all c∈𝒞}andU⊂{ a∈ U_𝕂,n(ϵ ')A_0 (u,a),… ,A_N (u,a) ∈M_𝕂,n}.In particular, U⊂{a∈ U_𝕂,n(ϵ ') ϕ (a)∈𝕂^n} because 0∈𝒞. Since U is open there exist b∈ U and ϵ”∈ (0,ϵ '-b] such that V:={a∈ U_𝕂,n(ϵ ') a-b < ϵ”}⊂ U.Fix such b. Then, for each a ∈ U_𝕂,n(ϵ” ), each A_j(u,a+b), j=1,…, N is an element ofM_𝕂,n. We note that since each A_j (u,v) is convergent on U_2n(2ϵ ') and by definition of b and ϵ”,each A_j(u,a+b) is convergent on U_n(ϵ '), for each a∈ U_𝕂,n(ϵ” ). 
Also, since each A_j satisfies the equation (<ref>) of Lemma <ref>,A_j(u,a+b)=A_j(u+a,b)for all a∈ U_𝕂,n(ϵ” ).For each j∈{0,… ,N}, we define B_j(u):=A_j(u,b). Let𝕃_1 :=𝕂((B_j)_u+a a∈ U_𝕂,n(ϵ” ), 0≤ j ≤ N).Since, for each a∈ U_𝕂,n(ϵ” ), each A_j(u,a+b) is convergent on U_n(ϵ '),by equation (<ref>) all the elements of 𝕃_1 are convergent on U_n(ϵ ') andin particular in U_n(ϵ”). Let𝕃_2 :=𝕂((B_j)_-u+a a∈ U_𝕂,n(ϵ” ), 0≤ j ≤ N).Note that all the elements of 𝕃_2 are also convergent on U_n(ϵ”). Hence, if we define 𝕃 :=𝕂((B_j)_u+a,(B_j)_-u+a a∈ U_𝕂,n(ϵ” ), 0≤ j ≤ N),all the elements of 𝕃 are also convergent on U_n(ϵ”).Let us show that 𝕃⊂𝕂(ϕ _u+c,ϕ _-u+cc∈𝒞)and that each element of 𝕃 is algebraic over 𝕂(ϕ ).We begin proving that 𝕃_1⊂𝕂(ϕ _u+c c ∈𝒞)and that each element of 𝕃_1 is algebraic over 𝕂(ϕ ). Fix j∈{0,… ,N} and a∈ U_𝕂,n(ϵ” ). We recall from Lemma <ref> that A_j(u,v) is convergent on U_2n(2ϵ ') andA(u,v)∈𝕂(ϕ_(u+c,v+c) c∈𝒞). Hence we can evaluate A_j(u,v) at v=a+b to deduce that A_j(u,a+b)∈𝕂(ϕ _u+c c∈𝒞). Thus, by equation (<ref>), A_j(u+a,b)∈𝕂(ϕ _u+c c∈𝒞). Hence, 𝕃_1⊂𝕂(ϕ _u+c c∈𝒞) and therefore, by Lemma <ref>,each element of 𝕃_1 is algebraic over 𝕂(ϕ ). By symmetry of 𝒞, 𝕃_2⊂𝕂(ϕ _-u+c c∈𝒞) andeach element of 𝕃_2 is algebraic over 𝕂(ϕ (-u)). Therefore 𝕃⊂𝕂(ϕ_u+c,-u+c c∈𝒞) and, since by Lemma <ref> we have thatϕ(-u) is algebraic over 𝕂(ϕ(u)), we deduce that each element of 𝕃 is algebraic over 𝕂(ϕ(u)),as required. Next, we show that ϕ _1(u+b),… ,ϕ _n(u+b) are algebraically independent over 𝕂. Let P∈𝕂[X_1,… ,X_n] be such that P(ϕ _u+b)=0. By notation, for each a∈ U_𝕂,n(ϵ”), we have that P(ϕ _u+b(a))=0 if and only if P(ϕ (a+b))=0. Hence,V ⊂{a∈ U_𝕂,n(ϵ ) ϕ (a)∈𝕂 andP(ϕ (a))=0}.Since V is open in U_𝕂,n(ϵ ), P(ϕ )=0 by the identity principle. Since ϕ_1,… ,ϕ _n are algebraically independent over 𝕂, P=0 and we are done.Next, we show that 𝕃 is finitely generated over 𝕂 and its transcendence degree is n. Firstly, we note that ϕ is algebraic over 𝕂(ϕ _u+b) becausethe coordinate functions of ϕ _u+b are algebraically independent over𝕂 and ϕ _u+b is algebraic over 𝕂(ϕ ) by Lemma <ref>. Since ϕ _ u+ v is algebraic over 𝕂(A_0,… ,A_N), evaluating each A_j(u,v) at v=b we deduce thatϕ _u+b is algebraic over 𝕂(B_0,… ,B_N). Therefore, ϕ is algebraic over 𝕂( B_0,… ,B_N). On the other hand, 𝕂(B_0,… ,B_N) is a subset of 𝕂(ϕ _u+c c∈𝒞) and thelatter field is algebraic over 𝕂(ϕ ) by Lemma <ref>. Hence the three fields have transcendence degree n over 𝕂. Recall that 𝒞=-𝒞, so𝕂(ϕ _-u+c c∈𝒞)=𝕂(ϕ _-u-c c ∈𝒞). We also note that ϕ (-u) is algebraic over𝕂(ϕ (u)), so 𝕂(ϕ _u+c, ϕ _-u-c c∈𝒞) has transcendence degree n over 𝕂. Now, 𝒞 is finite and 𝕂(B_0(u),… ,B_N(u))⊂𝕃⊂𝕂(ϕ _u+c,ϕ _-u-c c∈𝒞),therefore, 𝕃 is finitely generated over 𝕂 and its transcendence degree is n.Fix f ∈𝕃 and let us check that f(-u)∈𝕃 and that there exists δ >0 such that foreverya∈ U_𝕂,n(δ ), f_u+a∈𝕃 and f_u+a is convergent on U_n(ϵ”).Since f ∈𝕃, there exist m,m'∈ℕ, j(1),… ,j(m+m')∈{0,… ,N}and a_1,… , a_m+m'∈ U_𝕂,n(ϵ” ) such that f is a rational function of (B_j(1))_u+a_1, … ,(B_j(m))_u+a_m,(B_j(m+1))_-u+a_m+1, … ,(B_j(m+m'))_-u+a_m+m'.In particular, f(-u) is a rational function of (B_j(1))_-u+a_1, … ,(B_j(m))_-u+a_m, (B_j(m+1))_u+a_m+1, … ,(B_j(m+m'))_u+a_m+m',so f(-u)∈𝕃. Take δ >0 such that δ <ϵ” -max{ a_1,… ,a_m+m'}. Then, for all a∈ U_𝕂,n(δ ), f _u+a∈𝕃 and f_u+a is convergent on U_n(ϵ”).Finally, take ψ _1,… ,ψ _n∈𝕃 algebraically independentover 𝕂 and ψ _0 algebraic over 𝕂(ψ _1,… ,ψ _n) such that𝕃=𝕂(ψ _0,ψ _1,… ,ψ _n). 
Now, since all the elements of 𝕃 are algebraic over 𝕂(ϕ), ψ:=(ψ_1,… ,ψ_n) admits an AAT by Lemma <ref>.

We now have all the ingredients to prove our main result. Let ϕ:=(ϕ_1,… ,ϕ_n)∈M_𝕂,n^n admit an AAT. Take ϵ>0 such that ϕ is convergent on U_n(ϵ). Applying Lemma <ref> we obtain ϵ_1∈(0,ϵ] and Ψ:=(ψ_0,… ,ψ_n)∈M_𝕂,n^n+1 as in the lemma. We next check that this Ψ satisfies the conditions of the theorem.

(1) By Lemma <ref>, if f∈𝕂(Ψ) then f(-u)∈𝕂(Ψ), so we only have to check f(u+v)∈𝕂(Ψ_(u,v)). Fix a non-constant f∈𝕂(Ψ) and δ∈(0,ϵ_1] such that f_u+a∈𝕂(Ψ), for each a∈U_n(δ), as in Lemma <ref>. Let 0<ε<δ be such that f_u+v is convergent on U_2n(ε). Let U be an open connected subset of U_n(ε) such that Ψ(u) is analytic on U. In particular, Ψ_(u,v) is analytic on U×U. On the other hand, if g(u,v):=f(u+v) were not analytic at (a,a) for each a∈U, then f(u) would not be analytic on an open subset of U_n(ε), a contradiction. Therefore, shrinking U, we can assume that g(u,v):=f(u+v) is also analytic on U×U. By Lemma <ref>, we have that g(u,a)∈𝕂(Ψ(u)) and g(a,v)∈𝕂(Ψ(v)), for each a∈U. Hence, by Bochner <cit.>, g(u,v)∈ℂ(Ψ_(u,v)) on U×U. Since U×U is an open subset of U_2n(ε), it follows that g(u,v)∈ℂ(Ψ_(u,v)) on U_2n(ε). Moreover, clearly g(u,v)∈𝕂(Ψ_(u,v)) on U_2n(ε), since both Ψ∈M_𝕂,n^n+1 and f∈𝕂(Ψ). This concludes the proof of (1).

(2) We may assume that ψ_0≠0. Fix i∈{0,… ,n}. We have already shown that ψ_i(u+v)∈𝕂(Ψ_(u,v)). Let A(u,v):=ψ_i(u+v). By Lemma <ref>, and taking a smaller ϵ>0 if necessary, we may assume that Ψ is convergent on U_n(ϵ) and 𝕂(Ψ_u+a)⊂𝕂(Ψ), for each a∈U_𝕂,n(ϵ). Let us show that there exists c∈U_𝕂,n(ϵ) such that A(u+c,u-c)∈M_𝕂,n. Take α,β∈𝒪_𝕂,2n, β≠0, such that A(u,v)=α(u,v)/β(u,v). Suppose by contradiction that β(u+c,u-c)=0 for all c∈U_𝕂,n(ϵ). Then

β((a+b)/2+(a-b)/2, (a+b)/2-(a-b)/2) = 0,

for all a,b∈U_𝕂,n(ϵ/2). So β(a,b)=0, for all a,b∈U_𝕂,n(ϵ/2), that is, β=0, which is a contradiction. Consequently,

ψ_i(2u)=A(u+c,u-c)∈𝕂(Ψ_u+c(u), Ψ_u-c(u))⊂𝕂(Ψ(u)).

By induction we deduce that ψ_0(u),… ,ψ_n(u)∈𝕂(Ψ(2^-Nu)), for each N∈ℕ. Hence, since Ψ(2^-Nu) is convergent on U_n(2^Nϵ), Ψ is also convergent on U_n(2^Nϵ). Thus each ψ_i is the quotient of two power series convergent in all of ℂ^n (by Poincaré's problem <cit.>).

Let ϕ∈M_𝕂,n^n admit an AAT. By Theorem <ref>, there exists ψ∈M_𝕂,n^n admitting an AAT, whose coordinate functions are quotients of two power series whose complex domain of convergence is ℂ^n, and such that ψ is algebraic over 𝕂(ϕ). Since the coordinate functions of ψ are algebraically independent, ϕ is algebraic over 𝕂(ψ).

§ ACKNOWLEDGEMENTS

The second author thanks E. Pantelis for the support to attend the “Summer School in Tame Geometry”, Konstanz, July 18-23, 2016, where the results of this paper were presented. The authors also would like to thank José F. Fernando for helpful suggestions on an earlier version of this paper and Mark Villarino for his comments.
http://arxiv.org/abs/1704.08514v3
{ "authors": [ "E. Baro", "J. de Vicente", "M. Otero" ], "categories": [ "math.CV", "32A20 (Primary), 33E05, 14P20 (Secondary)" ], "primary_category": "math.CV", "published": "20170427112701", "title": "An extension result for maps admitting an algebraic addition theorem" }
http://arxiv.org/abs/1704.08637v2
{ "authors": [ "Sebastian Enqvist", "Fatemeh Seifan", "Yde Venema" ], "categories": [ "cs.LO" ], "primary_category": "cs.LO", "published": "20170427160619", "title": "An expressive completeness theorem for coalgebraic modal mu-calculi" }
Nicolas Honnorat, Christos Davatzikos
Center for Biomedical Image Computing and Analytics
University of Pennsylvania, USA
=================================================================================================================================

Many neuroimaging studies focus on the cortex, in order to benefit from better signal-to-noise ratios and reduced computational burden. Cortical data are usually projected onto a reference mesh, where subsequent analyses are carried out. Several multiscale approaches have been proposed for analyzing these surface data, such as spherical harmonics and graph wavelets <cit.>. As far as we know, however, the hierarchical structure of the template icosahedral meshes used by most neuroimaging software has never been exploited for cortical data factorization. In this paper, we demonstrate how the structure of the ubiquitous icosahedral meshes can be exploited by data factorization methods such as sparse dictionary learning, and we assess the optimization speed-up offered by extrapolation methods in this context. By testing different sparsity-inducing norms, extrapolation methods, and factorization schemes, we compare the performances of eleven methods for analyzing four datasets: two structural and two functional MRI datasets obtained by processing the data publicly available for the hundred unrelated subjects of the Human Connectome Project. Our results demonstrate that, depending on the level of detail requested, a speedup of several orders of magnitude can be obtained.

§ INTRODUCTION

Several modalities, such as EEG and MEG, are not able to image deep brain structures. MRI modalities benefit from better signal-to-noise ratios at the surface of the brain. For these practical reasons, many neuroimaging studies focus on the cortex. Cortical data are usually processed independently for the two hemispheres. The data of each hemisphere are projected onto a reference mesh, where subsequent analyses are carried out. Several multiscale approaches have been proposed for analyzing these surface data, such as spherical harmonics and graph wavelets <cit.>. As far as we know, however, these tools have not been exploited for accelerating data factorization schemes, such as nonnegative matrix factorization <cit.> and sparse dictionary learning <cit.>.

In this paper, we accelerate cortical data factorization by exploiting the hierarchical structure of icosahedral meshes and by investigating the effects of a novel extrapolation scheme adapted to nonnegative factorizations. Our results demonstrate that, depending on the level of detail requested, a speedup of several orders of magnitude can be obtained.

The remainder of the paper is organized as follows. Section 2 presents the four factorization schemes considered for this work, explains how the mesh structure was exploited, and describes how the factorizations were initialized and accelerated by extrapolation. Section 3 presents our experimental results, obtained with two structural and two functional datasets generated from the data available for the hundred unrelated HCP subjects <cit.>. Discussions conclude the paper.
§ METHODS

§.§ Factorizations

In this work, we compare four factorization schemes: a variant of sparse dictionary learning <cit.>, two non-negative matrix factorizations (NNMF) <cit.> and a projected non-negative matrix factorization (PNMF) <cit.>. Our goal is to decompose a data matrix X of size n_f × n_s, containing n_f positive measurements acquired for n_s subjects, as a product BC between n_d basis vectors, stored in a matrix B of size n_f × n_d, and loadings C of size n_d × n_s. We assume that the locations of the n_f measures are in bijection with the faces of an icosahedral mesh M generated from the icosahedron Mo as explained in the next section.

In the context of neuroimaging, the number of subjects n_s is usually of the order of a few hundred. The dimension n_f is generally much larger, for instance 327680 faces for the largest HCP and FreeSurfer cortical meshes <cit.>. This large dimension significantly slows down the computations and multiplies the local minima which could trap the alternating minimization scheme commonly used for solving factorization problems. In this work, we propose to reduce the spatial dimension by introducing a positive design matrix D of size n_f × n_k and decomposing X as follows:

X ≈ DBC

D can be interpreted as an additional set of positive cortical basis maps. We start with a restricted number of coarse cortical maps, which we gradually refine to let the optimization focus on the cortical regions where decomposition errors are large.

For sparse dictionary learning, the L1 norm of the matrices B and C is penalized at the same time as the L2 decomposition error. Starting from a random initialization, B and C are iteratively updated by an alternating proximal gradient descent <cit.>. Under the following notations

L = X^T D,  K = D^T D,  M = L^T L,

the parametric dictionary learning problem solved and the alternating minimization scheme, known as PALM <cit.>, write:

(DL)  min (1/2)||X-DBC||_2^2 + λ(||B||_1+||C||_1)
B ← S(B-η(KBCC^T-L^TC^T), λη)
C ← S(C-η(B^TKBC-B^TL^T), λη)

where λ is a constant sparsity parameter, we set η=10^-1/||L||_2, and the proximal operator S applies a soft thresholding to the matrix components:

S(z,α)_ij = sign(z_ij) max(0, |z_ij|-α)

NNMF frameworks impose a positivity constraint on B and C and are usually optimized via multiplicative updates <cit.>. This constraint is often sufficient for generating sparse and non-overlapping basis maps <cit.>. We compared two “parametric” NNMF frameworks: a framework where the square of the Frobenius norm of B and C is penalized to alleviate the ambiguity problem <cit.>, and a framework generating sparse B and C <cit.>:

(PNNMF)  min ||X-DBC||_2^2 + λ(||B||_2^2+||C||_2^2)  subject to  B ≥ 0, C ≥ 0
B ← B ⊙ [L^TC^T-λB]_+ ⊘ [KBCC^T]
C ← C ⊙ [B^TL^T-λC]_+ ⊘ [B^TKBC]

(SPNNMF)  min (1/2)||X-DBC||_2^2 + λ(||B||_1+||C||_1)  subject to  B ≥ 0, C ≥ 0
B ← B ⊙ [L^TC^T-λ1(n_k,n_d)]_+ ⊘ [KBCC^T]
C ← C ⊙ [B^TL^T-λ1(n_d,n_s)]_+ ⊘ [B^TKBC]

where ⊙ denotes the entrywise product, ⊘ the entrywise division, [x]_+ = max(x,0), and 1(n,m) is the matrix of ones of size n × m.

In addition, we considered a projected NMF (PNMF) scheme <cit.>. PNMF generates the loadings by projecting the data onto the basis. As a result, only B needs to be determined and no penalization is required for handling ambiguity issues.
We obtained the following parametric scheme after reducing the amplitude of the updates by half to stabilize the optimization <cit.>:

(PPNMF)  min ||X-DBB^TD^TX||_2^2  subject to  B ≥ 0
B ← B ⊙ ( 1(n_k,n_d)/2 + [MB] ⊘ [(KBB^TM+MBB^TK)B] )

§.§ Hierarchical Optimization on Icosahedral Meshes

Icosahedral meshes are generated by iteratively subdividing the faces of an icosahedron into four smaller triangles, as illustrated in figure <ref>. This procedure generates almost perfectly regular meshes. Spherical harmonics and graph wavelets allow one to exploit the hierarchical structure of these meshes <cit.>. In order to preserve positivity when factorizing cortical data, we adopted an approach more primitive than, but closely related to, spherical wavelets <cit.>. More precisely, we initialize D by concatenating twenty rotated versions of the same positive cortical map, centered on the faces of the original icosahedron. A thousand times, we initialize B and C randomly and run a thousand iterations of our factorization method. The best pair (B,C) is then gradually refined by a procedure preserving the association between the columns of D and the faces of a gradually refined icosahedral mesh.

Each refinement consists of two steps. An average local error is first computed for each column of D by projecting the reconstruction error X-DBC using D:

e = [[L^T-KBC] ⊙ [L^T-KBC]] 1(n_s,1)

Then, the faces associated with the worst errors are subdivided, D is updated by rotating the removed columns and scaling their support by half accordingly, the matrices K, L, M required for the factorization are updated, and B is refined by dividing by four and replicating four times the rows corresponding to the columns removed from D. The local errors e and the B updates are not exact because D is not an orthogonal basis. However, these updates are very efficient and constitute good approximations when the overlap between the cortical maps in D is limited.

§.§ Initialization and Design Matrices

For dictionary learning, B and C were initialized by sampling from N(0,1). For the PPNMF schemes, B was then scaled by the inverse of the Frobenius norm of L and replaced by its absolute value. For the NMF schemes, C was initialized uniformly with (1/n_d)1(n_d,n_s) and B was generated by averaging random selections of five columns of L <cit.>. The following positive functions

f(x) = exp(-cos^-1(⟨x_o,x⟩)/(πσ))  if cos^-1(⟨x_o,x⟩)/(πσ) ≤ τ, and f(x) = 0 otherwise,

where ⟨.,.⟩ is the standard inner product, were evaluated for (σ=0.015, τ=3), for each of the n_f face centers x of M and for each center x_o of the twenty original Mo icosahedron faces. We built an initialization D_O for D by concatenating these twenty cortical maps, which means that the optimization started at n_k=20.

§.§ Extrapolation

Extrapolation was introduced for accelerating the convergence of iterative soft-thresholding algorithms <cit.>. Extrapolation exploits previous gradient steps for extending the current one, as shown in figure <ref>. For NMF schemes, an additional projection is necessary to preserve positivity. However, this projection generates zeros which can trap the optimization. For this reason, we investigated the log-extrapolation illustrated in figure <ref>. This novel extrapolation, which was bounded in amplitude and run after ten standard updates to prevent instability, corresponds to an extrapolation of the logarithm of the matrix components.
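As an illustration of the multiplicative PNNMF update and of this log-extrapolation, here is a minimal Python sketch; the ε guards, the extrapolation step and its bound are assumptions used to keep the sketch self-contained, not values from the paper.

```python
import numpy as np

def pnnmf_step(B, C, K, L, lam):
    """One multiplicative PNNMF update (Section 2.1), with K = D^T D
    and L = X^T D; a small eps guards the entrywise divisions."""
    eps = 1e-12
    B *= np.maximum(L.T @ C.T - lam * B, 0) / (K @ B @ (C @ C.T) + eps)
    C *= np.maximum(B.T @ L.T - lam * C, 0) / (B.T @ K @ B @ C + eps)
    return B, C

def log_extrapolate(Y, Y_prev, step=1.0, bound=2.0):
    """Extrapolate log(Y) along the last multiplicative update, with a
    bound on the amplitude of the extrapolation."""
    eps = 1e-12
    ratio = np.clip((Y + eps) / (Y_prev + eps), 1.0 / bound, bound)
    return Y * ratio ** step
```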
The outliers of each subject's myelin and cortical thickness maps were removed by limiting the absolute difference from the median to 4.4478 median absolute deviations. We generated a regional homogeneity map (reHo) <cit.> for each subject by first bandpass-filtering, between 0.05 and 0.1 Hz, the rs-fMRI data processed with the ICA+FIX pipeline with MSMAll registration <cit.>. The time series obtained were then normalized to zero mean and unit L2 norm and concatenated, and reHo was measured for neighborhoods with a radius of three edges <cit.>. The amplitude of low frequency fluctuations (ALFF) <cit.> was, by contrast, measured for each subject scan separately, for the frequency band 0.05–0.1 Hz. All these positive maps were projected onto the fsaverage5 mesh using the transformation provided on the Caret website <cit.>. For all the experiments, the sparsity level λ was set to 5 for dictionary learning and to 1/2 for the sparse NMF schemes. λ was set to 1/||L||_2 to alleviate ambiguity issues with the non-sparse NMF schemes.

§.§ Computational Time and Extrapolation

We measured the maximal speedup by comparing the computational time required for running the first million optimization steps with the small design matrix D_O against the computational time required for running the same algorithm at the original fsaverage5 resolution. The results presented in figure <ref> correspond to speedups between 2.4 × 10^3 and 6.25 × 10^5 for the most time-consuming projected NMF schemes. These speedups of three to six orders of magnitude make it possible to generate a good initial pair (B,C) by running a large number of random initializations.

We compared the extrapolation strategies by measuring the reconstruction error ||X-DBC||_2^2 when running our eleven algorithms ten times to factorize the left-hemisphere myelin data. D was set to D_O and the algorithms were run for a thousand iterations. The median errors reported in figure <ref> demonstrate that both extrapolation approaches significantly speed up the convergence. However, the log extrapolation is less likely to get trapped in local minima, and always reaches a slightly lower energy than the state-of-the-art extrapolation.

§.§ Methods Comparison

Figure <ref> presents the L1 norms of B and C obtained for all our algorithms and datasets. Ten steps of refinement were conducted. At each step, five faces were subdivided and the algorithms were run for ten optimization steps. Our results suggest that the tradeoff between the sparsity of B and C differs for the parameters selected. Because projected NMF does not control the sparsity of the loadings, the PNMF bases tend to be very sparse but the projected loadings are not. The other factorization schemes balanced the L1 norms of B and C. For the parameters selected, the bases generated by dictionary learning were slightly sparser. We illustrate in figure <ref> the results obtained when decomposing the myelin data using LE-PNNMF for a larger number of iterations. The bases obtained nicely decompose the map of large data variability into weakly overlapping components.
The refinement focused on these regions accordingly.

§ DISCUSSION

In this paper we exploit the structure of the icosahedral meshes commonly used in neuroimaging to accelerate optimization tasks such as data factorization. We compare four factorization schemes and investigate the use of extrapolation for further reducing computational time. Our experiments with structural and functional data acquired by the Human Connectome Project demonstrate that our approach is particularly interesting for processing massive datasets.

§ PROOFS

All the optimization schemes implement an alternating gradient descent. They alternately minimize over the basis B and the loadings C by performing a gradient descent which depends on the scheme. For the dictionary learning, a proximal gradient step is performed <cit.>, as explained in the next section. For the non-negative schemes, a multiplicative update is performed <cit.>. This update adapts the step size of the gradient for each matrix component independently, so that the matrix components remain positive. More precisely, let Y denote B or C and G the gradient of the differentiable part of the objective function. G is first decomposed into its positive and negative parts:

G=G^+-G^-     G^+ ≥ 0 ,  G^- ≥ 0

A standard gradient update with positive step size α would be, for any component Y_i,j:

Y_i,j ← Y_i,j - α(G^+_i,j-G^-_i,j)

For non-negative factorization, the positive step size α is set independently for each component as follows <cit.>:

α_i,j = Y_i,j/G^+_i,j

which yields

Y_i,j ← Y_i,j - (Y_i,j/G^+_i,j)(G^+_i,j-G^-_i,j) = (Y_i,j/G^+_i,j) G^-_i,j

This update maintains the positivity of Y and can be expressed in a more elegant form using componentwise products and divisions:

Y ← Y ⊙ G^- ⊘ G^+

All the gradients were derived using Gâteaux derivatives, as explained in the next sections. We introduce the notation ⟨.,.⟩ for the standard inner product, hence:

||Y||_2^2=⟨ Y,Y ⟩

We recall the following notations introduced in the paper:

L = X^T D      K = D^T D      M = L^T L = D^T X X^T D

§ DICTIONARY LEARNING

The differentiable part of the objective of the parametric dictionary learning problem

(DL)     min 1/2||X-DBC||_2^2 + λ(||B||_1+||C||_1)

can be expressed as follows:

e=1/2[ ⟨ X,X ⟩+⟨ DBC,DBC ⟩-2⟨ X,DBC ⟩ ]

As a result, for B, the Gâteaux derivative of e in the direction F writes:

∂ e(B+τ F)/∂τ|_τ=0 = 1/2 ∂/∂τ[ ⟨ X,X ⟩+⟨ DBC,DBC ⟩-2⟨ X,DBC ⟩+2τ⟨ DFC,DBC ⟩ + τ^2 ⟨ DFC,DFC ⟩ -2τ⟨ X,DFC ⟩ ]|_τ=0
= ⟨ F,D^TDBCC^T ⟩-⟨ F,D^TXC^T ⟩
= ⟨ F,KBCC^T - L^TC^T ⟩

Hence, a gradient descent step of size η on B writes:

B ← B - η(KBCC^T - L^TC^T)

Applying a componentwise soft thresholding S by ηλ <cit.> leads to the optimization step described in Section 2.1. The derivation for C is very similar:

∂ e(C+τ F)/∂τ|_τ=0 = 1/2 ∂/∂τ[ ⟨ X,X ⟩+⟨ DBC,DBC ⟩-2⟨ X,DBC ⟩+2τ⟨ DBF,DBC ⟩ + τ^2 ⟨ DBF,DBF ⟩ -2τ⟨ X,DBF ⟩ ]|_τ=0
= ⟨ F,B^TD^TDBC ⟩-⟨ F,B^TD^TX ⟩
= ⟨ F,B^TKBC-B^TL^T ⟩

As a result, our parametric dictionary learning scheme writes:

(DL)     min 1/2||X-DBC||_2^2 + λ(||B||_1+||C||_1)
B ← S(B-η(KBCC^T-L^TC^T), λη)
C ← S(C-η(B^TKBC-B^TL^T), λη)
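The multiplicative update Y ← Y ⊙ G^- ⊘ G^+ used by the non-negative schemes in the following sections can be sketched directly; a minimal version, where the small eps guard against division by zero is our own addition:

```python
import numpy as np

def multiplicative_update(Y, G_plus, G_minus, eps=1e-12):
    # Y <- Y (.) G^- (/) G^+ : a gradient step with the per-component step
    # size alpha_ij = Y_ij / G+_ij, which keeps Y non-negative.
    return Y * G_minus / (G_plus + eps)

# For instance, the PNNMF update for B folds the penalty into the clipped
# negative part of the gradient: G^- = [L^T C^T - lam B]_+, G^+ = K B C C^T:
# B = multiplicative_update(B, K @ B @ C @ C.T,
#                           np.maximum(L.T @ C.T - lam * B, 0.0))
```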
§ PPNMF

For our parametric PNMF scheme <cit.>

(PPNMF)     min ||X-DBB^TD^TX||_2^2          B ≥ 0

the Gâteaux derivative in the direction F writes:

∂ e(B+τ F)/∂τ|_τ=0 = -2⟨ X,DFB^TD^TX ⟩+2⟨ DFB^TD^TX,DBB^TD^TX ⟩ -2⟨ X,DBF^TD^TX ⟩+2⟨ DBF^TD^TX,DBB^TD^TX ⟩
= -2⟨ F,D^TXX^TDB ⟩+2⟨ F,D^TDBB^TD^TXX^TDB ⟩ -2⟨ XX^TDF,DB ⟩+2⟨ DB,DBB^TD^TXX^TDF ⟩
= ⟨ F,-4D^TXX^TDB ⟩+⟨ F,2D^TDBB^TD^TXX^TDB ⟩ +⟨ F,2D^TXX^TDBB^TD^TDB ⟩
= ⟨ F,-4MB+2(KBB^TM+MBB^TK)B ⟩

which leads to the following multiplicative update:

B ← B ⊙ 2MB ⊘ [(KBB^TM+MBB^TK)B]

Shrinking the amplitude of this update by two as follows:

B ← (1/2)B + (1/2)[B ⊙ 2MB ⊘ [(KBB^TM+MBB^TK)B]]

leads to the update presented in Section 2.1:

(PPNMF)     min ||X-DBB^TD^TX||_2^2          B ≥ 0
B ← B ⊙ (1(n_k,n_d)/2 + [MB] ⊘ [(KBB^TM+MBB^TK)B])

§ PNNMF

For the parametric NMF scheme

(PNNMF)     min ||X-DBC||_2^2 + λ(||B||^2_2+||C||^2_2)      B ≥ 0    C ≥ 0

the Gâteaux derivative with respect to B in the direction F writes:

∂ e(B+τ F)/∂τ|_τ=0 = 2⟨ DBC,DFC ⟩ - 2⟨ X,DFC ⟩ + 2λ⟨ F,B ⟩
= 2⟨ F,D^TDBCC^T ⟩ - 2⟨ F,D^TXC^T ⟩ + 2λ⟨ F,B ⟩
= 2⟨ F,KBCC^T - L^TC^T + λB ⟩

Following <cit.>, we join the gradient term originating from the penalty with the negative part of the gradient, and we project the components of the matrix back to the positive real numbers. These operations yield the following multiplicative update:

B ← B ⊙ [L^TC^T-λB]_+ ⊘ [KBCC^T]

The same derivation yields for C:

∂ e(C+τ F)/∂τ|_τ=0 = 2⟨ DBC,DBF ⟩ - 2⟨ X,DBF ⟩ + 2λ⟨ F,C ⟩
= 2⟨ F,B^TD^TDBC ⟩ - 2⟨ F,B^TD^TX ⟩ + 2λ⟨ F,C ⟩
= 2⟨ F,B^TKBC - B^TL^T + λC ⟩

hence the update:

C ← C ⊙ [B^TL^T-λC]_+ ⊘ [B^TKBC]

and we obtain the PNNMF scheme presented in Section 2.1.

§ SPNNMF

The derivation of the SPNNMF scheme follows exactly that of the PNNMF. Since B and C are constrained to be non-negative, the derivative of their L1 norms is a matrix of ones of the same size as the updated matrix.
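As a sanity check on these derivations, the PPNMF gradient can be compared against a central finite difference of its objective in a random direction; a small self-contained sketch with arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_f, n_k, n_d, n_s = 30, 8, 3, 12
X = rng.random((n_f, n_s))
D = rng.random((n_f, n_k))
B = rng.random((n_k, n_d))
K = D.T @ D
M = D.T @ X @ X.T @ D                 # M = L^T L

def objective(B):
    # Frobenius-norm-squared PPNMF objective ||X - D B B^T D^T X||^2.
    return np.linalg.norm(X - D @ B @ B.T @ D.T @ X) ** 2

# Gradient from the Gateaux derivative: -4MB + 2(K B B^T M + M B B^T K) B.
grad = -4 * M @ B + 2 * (K @ B @ B.T @ M + M @ B @ B.T @ K) @ B

F = rng.standard_normal(B.shape)       # random test direction
h = 1e-6
fd = (objective(B + h * F) - objective(B - h * F)) / (2 * h)
print(np.isclose(fd, np.sum(grad * F), rtol=1e-4))  # expect True
```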
Consensus measure of rankings

Zhiwei Lin^#1, Yi Li^*2, Xiaolian Guo^#3

^# School of Computing and Mathematics, Ulster University, Northern Ireland, United Kingdom
^1 [email protected]   ^3 [email protected]
^* Division of Mathematics, SPMS, Nanyang Technological University, 21 Nanyang Link, Singapore
^2 [email protected]

A ranking is an ordered sequence of items, in which an item with a higher ranking score is more preferred than the items with lower ranking scores. In many information systems, rankings are widely used to represent the preferences over a set of items or candidates. The consensus measure of rankings is the problem of how to evaluate the degree to which the rankings agree. The consensus measure can be used to evaluate rankings in many information systems, as quite often there is no ground truth available for evaluation. This paper introduces a novel approach for measuring the consensus of rankings by using graph representation, in which the vertices or nodes are the items and the edges represent the relationships between items in the rankings. Such a representation leads to various algorithms for consensus measure in terms of different aspects of rankings, including the number of common patterns, the number of common patterns with a fixed length, and the length of the longest common patterns. The proposed measure can be adopted for various types of rankings, such as full rankings, partial rankings and rankings with ties. This paper demonstrates how the proposed approaches can be used to evaluate the quality of rank aggregation and the quality of top-k rankings from the Google and Bing search engines.

§ INTRODUCTION

In many information systems, rankings are widely used to represent the preferences over a set of items or candidates, ranging from information retrieval and recommender to decision making systems <cit.>, in order to improve the quality of the services provided by the systems. For example, in a search engine, the list of terms suggested after a user's few keystrokes is a typical ranking, and such ranking services, widely adopted nowadays, have great impact on the user's search experience; it is also recognized that the list of search results returned after a query is issued is a ranking. A ranking is an ordered sequence of items, in which an item with a higher ranking score is more preferred than the items with lower ranking scores. The consensus of rankings is the degree to which the rankings agree according to certain common patterns. The consensus measure can be used in many information systems in order to uncover how close or related the rankings are. For example, in group decision making, a group of experts express their preferences over a set of candidates by using rankings, and the measure of the degree of consensus is very useful for reaching consensus <cit.>. In many information systems with a large volume of items, such as search engines, it is hard to clearly define what the ground truth is, which makes it more difficult to evaluate and compare the rankings returned from the systems.
The consensus measure of rankings, as a tool for understanding how related or close the rankings are, will help engineers and researchers to discern what aspects of a ranking system need to be improved and to detect outliers <cit.>.

For a set of rankings R={π_1, …, π_n}, one approach to understanding the degree to which the rankings agree is to use a rank correlation or similarity function by pairwise comparison <cit.>. The notable functions include the Kendall index τ(π_i,π_j) and the Spearman index ρ(π_i,π_j) <cit.>, which however do not have a weighting scheme so that less important items can be penalized. It is common that in information retrieval, the documents (items) at the top of a ranking list are more important than those at the bottom <cit.>. As such, it makes sense to reduce the impact of the bottom items with a weighting scheme. For example, the variation of the τ index with average precision, denoted by τ_ap, is able to give greater weight to the top items of the ranking lists <cit.>. These methods assume rankings are conjoint, meaning that the items in the rankings completely overlap. Undoubtedly, they cannot be used for partial rankings, in which the items may not be mutually overlapping. As a similarity function for two partial rankings, the RBO (rank-biased overlap) proposes to weight the number of common items according to the depth of the rankings <cit.>, but it does not take into account the order of items in the rankings. When one of the correlation or similarity functions is used for the consensus measure of a set of n rankings in R, we can aggregate the pairwise comparison values across all n(n-1)/2 pairs of rankings. Since the pairwise comparison is based on the degree of commonality in two rankings, with respect to features or patterns (e.g., the common items, or the concordant pairs against the discordant pairs), the aggregated result is not informative enough to tell the extent to which the rankings agree in R, according to the study by Elzinga et al. <cit.>. Also, as rankings can be full or partial, especially top-k <cit.>, the existing measures fail to meet the requirements for handling different types of rankings.

In order to effectively evaluate and compare rankings, which could be full or partial and in which some items may need to be weighted, this paper proposes a new approach based on graph representation. The novelty of this paper lies in the fact that the newly proposed consensus measure of rankings does not need pairwise comparison, which is significantly different from the pairwise approaches using similarity or correlation functions. The contributions of the paper include:

* we introduce a directed acyclic graph (DAG) to represent the relationship between items in the rankings, so that such representation can be used to induce efficient algorithms for the consensus measure of rankings;
* the proposed DAG representation enables us to approach the consensus measure of rankings in terms of different aspects of the common features or patterns hidden in the rankings, including κ(R) – the number of common patterns, κ_p(R) – the number of common patterns with a fixed length p, and ℓ(R) – the length of the longest common patterns;
* the proposed DAG representation is extended to allow the edges in the graph to have weights, so that more "important" features or patterns are assigned higher values and the features or patterns with less "importance" are penalized.
* we also demonstrate that the consensus measure of rankings with graph representation can be extended to calculate the consensus measure for duplicate rankings, for rankings with ties, and for rankings whose top items need to be weighted;
* we show that our approach can be used for different types of rankings, including full rankings and top-k rankings.

The rest of the paper is organized as follows. Section <ref> introduces the important notation and concepts used in the paper, followed by a review of related work in Section <ref>. Section <ref> presents a directed graph representation approach for consensus measure. Section <ref> shows how the proposed approaches can be used to evaluate rank aggregation and to compare top-k rankings. The paper is concluded in Section <ref>.

§ PRELIMINARIES

This section introduces the notation and concepts of graphs, ranking sequences, and consensus measure that will be used in the rest of the paper.

§.§ Directed graph

A directed graph is a pair G=(V,E), where V is the set of nodes (or vertices) and E is the set of directed edges. A directed edge (x,y) means that the edge leaves node x∈V and enters node y∈V. An edge (x,x) is called a loop, which leaves node x and returns to itself. Given a graph G=(V,E) with n=|V| nodes, the matrix 𝐀=(A_ij)_n×n is used to denote the adjacency matrix of the graph G=(V,E), where A_ij=1 if there exists an edge (x_i,x_j)∈E, and A_ij=0 otherwise. The adjacency matrix assumes that all the edges have identical weights of 1, and this can be relaxed in a weighted directed graph. A weighted directed graph G=(V,E,W) is a directed graph in which W is a set of weights on the edges and each edge (x_i,x_j)∈E is assigned a non-zero weight w(i,j)∈W. Then, the adjacency matrix for G=(V,E,W) is defined by A_ij=w(i,j) if (x_i,x_j)∈E and A_ij=0 otherwise. A path from node x_i to x_j is a sequence of distinct non-loop edges (x_i,x_k_1), (x_k_1,x_k_2), …, (x_k_p,x_j) connecting nodes x_i and x_j.

§.§ Ranking sequences

A ranking π is an ordered sequence π=(σ_i_1,σ_i_2,…,σ_i_m) of m distinct items drawn from a universe Σ={σ_1,⋯,σ_n}, where m≤n and σ_i_j is more preferred than σ_i_k if j<k. The length of π is denoted by |π|. For notational simplicity, we shall simply write a ranking as a sequence π=σ_i_1σ_i_2⋯σ_i_m in the rest of the paper. For a ranking π=r_1⋯r_k, where r_j∈Σ for 1≤j≤k, we can define the embedded patterns with respect to subsequences. A sequence π'=r'_1⋯r'_m is called a subsequence of π, denoted by π'⊑π, if π' can be obtained by deleting k-m items from π. We denote by π'⋢π that π' is not a subsequence of π. For example, bde ⊑ abcde, and bac ⋢ abcde. A ranking sequence with no items is an empty sequence. We use S(π) to denote the set of all possible non-empty subsequences of π. S(π) can be partitioned into subsets S_p(π), where S_p(π) consists of all subsequences of length p. For example, if π=abcde, then S_3(π)={abc, abd, abe, acd, ace, ade, bcd, bce, bde, cde}, in which each subsequence has length 3.

The degree to which rankings agree lies in the common patterns or features which are embedded in the rankings. For ranking sequences, the subsequences are the patterns or features. Given a set of N rankings R={π_1,…,π_N}, consider S(R) = S(π_1)∩⋯∩S(π_N); each element x∈S(R) is a common subsequence of π_1,…,π_N, for which we also use the notation x⊑R. Similar to S_p(π), we also define S_p(R) to denote the subset of all common subsequences of length p. Therefore, it holds that S(R)=⋃_1≤p≤l S_p(R), where l=min{|π| : π∈R}.
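For toy examples, these sets are small enough to enumerate directly; the sketch below builds S_p(π) and the common-subsequence set by brute force (the four rankings are the running example used later in Section <ref>):

```python
from itertools import combinations

def subsequences(ranking, p=None):
    # All non-empty subsequences of a ranking of distinct items,
    # optionally restricted to length p.
    lengths = range(1, len(ranking) + 1) if p is None else [p]
    return {"".join(c) for n in lengths for c in combinations(ranking, n)}

print(sorted(subsequences("abcde", p=3)))   # the 10 subsequences listed above
print(len(subsequences("abcde")))           # 2**5 - 1 = 31 non-empty subsequences

# Common subsequences of a set of rankings, by direct intersection:
R = ["abcdef", "bdcefa", "bcdeghijkf", "badefc"]
common = set.intersection(*(subsequences(r) for r in R))
print(len(common), max(len(s) for s in common))  # 17 common patterns; longest is 4
```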
In a special case, for two rankings π_i and π_j, we write S_p(π_i,π_j) to denote the set of p-long common subsequences between π_i and π_j. It is clear that S(R) accommodates all common features (subsequences) which are subsumed by each ranking π∈R. Let κ(R) denote the number of all common subsequences of R, i.e.,

κ(R)=|S(R)|.

The more common features S(R) has, or the bigger κ(R) is, the higher the degree of consensus R has. We also define κ_p(R)=|S_p(R)| in order to measure the consensus in R with respect to the number of common subsequences of a given length p. The length of the longest common subsequences of the rankings in R is denoted by ℓ(R), or simply ℓ. Then, ℓ=max{|z| : z∈S(R)}. Therefore, we have the following properties:

* For a set with only one ranking R={π}, where n=|π|, κ(R)=2^n-1;
* For a set of two rankings R={π_x,π_y}, where m=|π_x| and n=|π_y|, we have 0≤κ(R)≤2^min{m,n}-1;
* For two sets of rankings R_x and R_y, if R_x⊆R_y, then κ(R_y)≤κ(R_x).

§.§ Consensus measure of rankings in feature spaces

For a set of n rankings R, we can form a set of features F=⋃_π∈R S(π). Let m=|F| and F={y_1,…,y_m}. Each ranking π can be represented by a feature vector with a mapping function ϕ: R→{0,1}^m:

ϕ(π)= (f_π(y_1), …, f_π(y_m)),   where f_π(y_k)= 1 if y_k⊑π and f_π(y_k)=0 if y_k⋢π.

It is clear that κ(R), defined in Equation (<ref>), can be rewritten using the inner product on n-inner product spaces <cit.> as

κ(R) = ⟨ϕ(π_1), …, ϕ(π_n)⟩ = ∑_k=1^m ∏_π∈R f_π(y_k)

With the generalized inner product, we find that κ(R) is a kernel function <cit.> when |R|=2. The rewritten κ(R) relies on the definition of f_π(y_k) as given in Equation (<ref>), whose co-domain is {0,1}. It is computationally expensive to enumerate all the features and to form F. Later in the paper, we transform the relationship between items into a graph, so that efficient algorithms can be found without enumerating the features explicitly, similar to the kernel trick for kernel functions <cit.>.

§ RELATED WORK

A ranking can be a full or partial ranking, depending on the number of items from Σ being ranked. A ranking π is a full ranking if |Σ|=|π|. A ranking is called a partial ranking if its items form a proper subset of Σ. A top-k ranking is a sub-ranking of a full ranking containing only the top k items. Rankings with ties occur when some items share an identical ranking score, which happens very often in decision making or voting processes <cit.>. For example, in the ranking π={a}{bc}{d}, both items b and c are assigned an identical ranking score.

Evaluation or comparison of rankings is an important task in many ranking-related systems, including decision making, information retrieval, voting and recommender systems <cit.>. One approach to evaluating rankings is to use rank correlation between two rankings. The widely used Kendall τ index <cit.> is a measure of rank correlation between two rankings π_i and π_j over n items that takes into account the 2-long common subsequences between them, and can be formulated as

τ(π_i,π_j)=(|S_2(π_i,π_j)|-|S_2(π̄_i,π_j)|) / (n(n-1)/2)

where π̄_i is the reverse ranking of π_i. In rank aggregation, one could also use the Kendall distance d_τ(π_i,π_j), a variation of the Kendall τ:

d_τ(π_i,π_j)=|S_2(π̄_i,π_j)| / (n(n-1)/2)

The Spearman ρ index is another measure of rank correlation that does not utilize the 2-long common subsequences but instead takes into account each item's positions in π_i and π_j <cit.>. It is defined as follows:

ρ(π_i,π_j)=1-6∑_σ∈Σ (η_i(σ)-η_j(σ))^2 / (n(n^2-1))

where n=|Σ| and η_i(σ) denotes the position of σ in π_i.
The Spearman footrule distance d_ρ is an L_1 distance, which is a variation of the Spearman ρ:

d_ρ(π_i,π_j)=∑_σ∈Σ |η_i(σ)-η_j(σ)| / (n(n^2-1))

Compared with the Spearman ρ, the Kendall τ ignores the items' positions, which are in many cases very important factors, e.g., for top-k rankings. Again, the Spearman ρ cannot be used for sensitivity detection and analysis, as studied in <cit.>. Both Kendall and Spearman can only be used for full rankings; they cannot be used for partial or top-k rankings. Even for full rankings, both of them lack weighting schemes and are not flexible enough for rankings whose items at the top are more important than the items at the bottom <cit.>. Therefore, it is necessary to reduce the impact of the bottom items with a down-weighting scheme for those items. For example, the variation of the τ index with average precision, denoted by τ_ap, is able to give greater weight to the top items of the ranking lists <cit.>. Shieh also developed a weighted metric τ_w based on the Kendall τ by adding weighting factors to the 2-long subsequences <cit.>. For full rankings with ties, τ_t was proposed based on the Kendall index <cit.>. One extension ρ_w to the Spearman index by Iman et al. assigns higher weights to the items at the top <cit.>.

The above methods assume that rankings are full rankings, meaning that the items in the rankings completely overlap; therefore, they cannot be used for partial rankings. In information retrieval, it is more interesting to compare rankings based on their top-k items. Fagin et al. proposed two measures τ_k and ρ_k by adapting the Kendall τ and Spearman ρ for top-k rankings <cit.>. As a similarity function for two partial rankings, the RBO (rank-biased overlap) proposes to weight the number of common items according to the depth of the rankings <cit.>, but it does not take into account the order of items in the rankings.

These functions perform pairwise comparison, and they can be turned into a consensus measure for a set of n rankings R by aggregating the pairwise distance values across all rankings. For example, one can use ∑_i=1^n ∑_j=1, i≠j^n τ(π_i,π_j) if the Kendall index is preferred. However, this aggregated result is not informative enough to tell the extent to which the rankings agree in R, according to the study by Elzinga et al. <cit.>.

We summarize the popularly used indices in Table <ref>, and we show that our approach of κ_p(R) and κ(R) is more flexible for various types of rankings, which will be demonstrated in the next section. Also, the existing indices shown in Table <ref> cannot be used for sensitivity detection in the consensus measure, while our approach has the ability to discern how the rankings come to agree by varying the parameters for the gaps and positions of items, as pointed out in Section <ref> and as verified in Section <ref>.

§ GRAPH REPRESENTATION FOR CONSENSUS MEASURE OF RANKINGS

This section introduces a graph approach to the consensus measure of rankings by calculating κ(R) and κ_p(R).

§.§ A motivating example

Consider a set of rankings R={π_1=abcdef, π_2=bdcefa, π_3=bcdeghijkf, π_4=badefc}. Without loss of generality, we randomly pick π_1∈R (note that |π_1|=6) and form a lower triangular matrix 𝐀=(A_ij)_6×6, where for i≥j, A_ij=1 if the i^th item and the j^th item of π_1 both occur in the same order in all rankings in R, and A_ij=0 otherwise.
Then we obtain the matrix 𝐀:

𝐀_6×6 =
       a  b  c  d  e  f
   a   0  0  0  0  0  0
   b   0  1  0  0  0  0
   c   0  1  1  0  0  0
   d   0  1  0  1  0  0
   e   0  1  0  1  1  0
   f   0  1  0  1  1  1

With matrix 𝐀, we can induce a weighted directed graph G=(V,E_ℓ∪E_e) on the diagonal elements of 𝐀, where V={A_11,…,A_66} is the set of vertices, E_ℓ is the set of loops and E_e is the set of non-loop edges. Later, we may use V={a,b,c,d,e,f} interchangeably without confusion, as each A_ii stands for an item. Hereinafter in this paper, we shall distinguish loops from non-loop edges, and abuse the notation by simply calling the latter edges. The edges are drawn according to the following rule: for 1≤i,j≤|π_1|, an edge from A_ii to A_jj is added if the following conditions all hold: (1) i<j; (2) A_ii=A_jj=1; (3) A_ji≠0. We also add dashed loops on the diagonal elements of value 1. Figure <ref> shows the weighted directed graph for the matrix 𝐀, in which there are seven directed (solid) edges, i.e., E_e={(A_22,A_33), (A_22,A_44), (A_22,A_55), (A_22,A_66), (A_44,A_55), (A_44,A_66), (A_55,A_66)}, or simply E_e={(b,c),(b,d),(b,e),(b,f),(d,e),(d,f),(e,f)}. These edges are the 2-long common subsequences bc, bd, be, bf, de, df, ef, all of which occur in π_1, π_2, π_3 and π_4. As such, κ_2(R)=|E_e|=7. Similarly, paths (recall that our definition of path excludes loop edges) of length 3 correspond to common subsequences of length 3. We find that κ_3(R)=4, with common subsequences bde, bdf, bef and def. Next, κ_4(R)=1, with the common subsequence bdef. There are no longer common subsequences, since the length of the longest path in G is 4. In Figure <ref>, the five dashed loops correspond to the five singletons b, c, d, e, f. As a result, κ_1(R)=5. Therefore, we obtain κ(R)=κ_1(R)+κ_2(R)+κ_3(R)+κ_4(R)=5+7+4+1=17.

This process of finding patterns with graph representation not only allows us to calculate κ_p(R), but also makes it easy to calculate the number of all common patterns κ(R) and the length of the longest common subsequences ℓ(R).

§.§ Consensus measure by graph representation

The above example, shown in Fig. <ref>, presents an approach with graph representation for the consensus measure of rankings when f_π(y_k)∈{0,1}. This section extends the graph representation to the consensus measure of rankings, by calculating κ(R) and κ_p(R), when f_π(y_k)∈[0,1]. In Equation (<ref>), the definition of f_π(y_k)∈{0,1} assumes that the features in F are all assigned an equal weight of 1. However, this is not true in many cases, where some features or items in F are more important than others <cit.>. The definition of f_π(y_k)∈{0,1} is not flexible enough to differentiate the importance of the features. As such, we extend it to f_π(y_k)∈[0,1] if y_k⊑π, and f_π(y_k)=0 if y_k⋢π, so that "important" features receive higher values of f_π(y_k) while features of less importance are "penalized" with lower f_π(y_k). Therefore, we rewrite Equation (<ref>) as

κ(R) = ∑_k=1^m ∏_π∈R f_π(y_k)   for f_π(y_k)∈[0,1].

In the DAG shown in Figure <ref>, we assumed that the weights on the edges equal 1, which does not reflect the nature of how each subsequence is embedded in the original rankings. Considering the four rankings in R={π_1=abcdef, π_2=bdcefa, π_3=bcdeghijkf, π_4=badefc}, item f occurs at different positions in the rankings, as shown in the following table:

                 π_1   π_2   π_3   π_4
 position of f    6     5    10     5

The position of f in π_3 is 10, which deviates substantially from the positions of f in the other rankings.
In order to incorporate those factors which may affect the degree of consensus, we relax the assumption that the weights are identically 1, and generalize the induced weighted DAG by introducing two functions θ(σ) and ψ(σ_i,σ_j). Figure <ref> shows the new DAG, where each edge is associated with a weight ψ(σ_i,σ_j) and each loop is assigned a weight θ(σ), with ψ(σ_i,σ_j)∈(0,1] and θ(σ)∈(0,1]. We will illustrate how the two functions reflect those factors in the following sections, and how they are related to f_π(y_k)∈[0,1] for Equation (<ref>). For simpler presentation of our algorithm, we introduce the (left-continuous) Heaviside function

H(x)= 1 if x>0; 0 otherwise.

Now we present the following theorem for measuring the consensus of rankings.

Theorem. Given a set R={π_1,⋯,π_N} of N rankings over a universe Σ, where each ranking π_k=r_k_1⋯r_k_m is naturally associated with a map η_k: Σ→{0,1,…,|Σ|} defined as

η_k(σ)= 0 if σ⋢π_k;  j if σ=r_k_j,

let π_x=r_x_1 r_x_2⋯r_x_n be an arbitrary ranking from R, n=|π_x|, and let 𝐀=(A_ij)_n×n be the adjacency matrix of a graph, where

A_ij =
  0,   if i<j;
  θ(r_x_i) ∏_k=1^N H(η_k(r_x_i)),   if i=j;
  ψ(r_x_i,r_x_j) ∏_k=1^N H(η_k(r_x_i)-η_k(r_x_j)) H(A_ii) H(A_jj),   if i>j,

and let 𝐋=(L_ij)_n×n be the strictly lower triangle of 𝐀, and 𝐳=(1,…,1)^T be the vector of all ones. Then,

κ_p(R)= tr(𝐀) if p=1;  𝐳^T 𝐋^p-1 𝐳 if p>1.

Proof. Note that 𝐳^T 𝐌 𝐳 gives the sum of all entries in a matrix 𝐌. By the definition of 𝐀, we know that A_ii>0 if r_x_i is contained in every ranking of R, and A_ii=0 otherwise. It follows that κ_1(R) accumulates the weights of the common single items, which equals, noting that all other entries on the diagonal are 0, the sum of the diagonal entries of 𝐀, or tr(𝐀). For p≥2, it is a classical inductive argument that (𝐋^p-1)_ij = N_p(i,j) when i>j, where N_p(i,j) is the number of common sequences of length p which begin with r_x_j and end with r_x_i. The advertised result then follows from the fact that all rankings have distinct items and thus the common subsequences also have distinct items.

Theorem <ref> shows that individual items are weighted by θ(σ) and the edges between any two items by ψ(σ_i,σ_j), reflecting the strength of the relationship between two items σ_i and σ_j.

§.§.§ θ(σ) – weighting by the deviation of an item's positions

The position of item σ in a ranking is an indication of the strength of preference. To capture the importance of the position of σ, we define μ(σ), the average of the positions of σ over the rankings π_k, as follows:

μ(σ)= -∞ if σ⋢R;  (1/N)∑_k=1^N η_k(σ) if σ⊑R.

If an item is placed in a small range of positions throughout all rankings, it is assumed that this item is preferred consistently at the same level by all rankings. On the other hand, if an item has a low position η_i(σ) in one ranking π_i while having a high position η_j(σ) in another ranking π_j, the big difference between the positions |η_j(σ)-η_i(σ)| indicates an inconsistency of the preferences over this item. To take the differences of an item's positions into account in the consensus measure, we define

θ(σ)=γ^d, where d=(1/N)∑_k=1^N |η_k(σ)-μ(σ)| and 0<γ≤1,

in order to weigh the item using the positions η_k(σ). In fact, when the feature y_k is a singleton (i.e., y_k=σ), it is clear that θ(σ)=∏_π_k∈R f_π_k(σ), where f_π_k(σ)=γ^|η_k(σ)-μ(σ)|/N for Equation (<ref>).

§.§.§ ψ(σ_i,σ_j) – weighting by gaps

The gap between items has been used for pairwise kernel functions and sensitivity detection <cit.>. We now extend this to the set-wise consensus measure κ_p(R).
The weighted DAG in Figure <ref> shows that the edges (b,f) and (b,c), i.e., the subsequences bf and bc, are quite different in terms of the distance between b and f, and between b and c, in π_1. There are no items between b and c; however, b and f are separated by three other items c, d and e, which means that c is much more preferred than f. Therefore, for each π_k and every 2-long subsequence σ_iσ_j, we define the gap ϖ_k(σ_i,σ_j)=η_k(σ_j)-η_k(σ_i), which indicates how much σ_i is more preferred than its successor σ_j in the 2-long subsequence σ_iσ_j with respect to the original ranking sequence π_k. The following table shows the gaps for bf and bc with respect to the example rankings:

              π_1   π_2   π_3   π_4
 ϖ_k(b,c)      1     2     1     5
 ϖ_k(b,f)      4     4     9     4

Clearly, the accumulated gap for b and f is much bigger than that for b and c. This suggests that f is less likely to be preferred over c. To take this likelihood into account, we define

ψ(σ_i,σ_j)=λ^g, where 0<λ≤1 and g=(1/N)∑_k=1^N |ϖ_k(σ_i,σ_j)|,

so that any subsequence σ_iσ_j with bigger average gaps is "penalized". Now we relate ψ(σ_i,σ_j) to f_π(y_k) for Equation (<ref>). For p>1, a p-long subsequence y_k=(σ_k_1,σ_k_2,…,σ_k_p)∈S(R) is represented by a (p-1)-long path (σ_k_1,σ_k_2),…,(σ_k_p-1,σ_k_p) in the graph. As each edge (σ_k_i,σ_k_j) has weight ψ(σ_k_i,σ_k_j), defining

f_π(y_k)=∏_i=1^p-1 ψ(σ_k_i,σ_k_i+1)

makes Equation (<ref>) consistent with Equation (<ref>).

§.§.§ An example for ψ(σ_i,σ_j)=1 and θ(σ)=1

Comparing Fig. <ref> and Fig. <ref>, the example in Section <ref> is obviously a special case of Theorem <ref> with ψ(σ_i,σ_j)=1 and θ(σ)=1. Now we can use Theorem <ref> to calculate the consensus score for the example in Section <ref>. Consider R={π_1=abcdef, π_2=bdcefa, π_3=bcdeghijkf, π_4=badefc}; based on the matrix 𝐀 in Equation (<ref>), we have

𝐋 =
( 0 0 0 0 0 0
  0 0 0 0 0 0
  0 1 0 0 0 0
  0 1 0 0 0 0
  0 1 0 1 0 0
  0 1 0 1 1 0 )

𝐋^2 =
( 0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 1 0 0 0 0
  0 2 0 1 0 0 )

𝐋^3 =
( 0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 1 0 0 0 0 )

𝐋^4 =
( 0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0
  0 0 0 0 0 0 )

Then,

κ_1(R)=tr(𝐀)=5,   κ_2(R)=𝐳^T𝐋𝐳=7,
κ_3(R)=𝐳^T𝐋^2𝐳=4,   κ_4(R)=𝐳^T𝐋^3𝐳=1.

§.§ ℓ(R) and κ(R)

In Example <ref>, we observe that 𝐋^p=0 for p≥4, which implies that there are no common subsequences of length more than 4, and hence the length of the longest common subsequences of R is 4. Based on this fact, the following corollary of Theorem <ref> provides an algorithm for calculating ℓ(R).

Corollary. Under the assumptions in Theorem <ref>, the length ℓ(R) of the longest subsequences in S(R) can be obtained as max{p : κ_p(R)>0}.

Corollary. Under the assumptions in Theorem <ref>,

κ(R)=κ_1(R)+𝐳^T(𝐈-𝐋)^-1𝐳 - n

where n=|π_x| and 𝐈 is the identity matrix of size n×n. Consequently, κ(R) can be computed in O(n+|E|) time.

Proof. Since the longest possible length of a common subsequence is n, Theorem <ref> implies that κ(R) = κ_1(R) + ∑_p=2^n κ_p(R) = tr(𝐀) + ∑_i=1^n-1 𝐳^T 𝐋^i 𝐳. Invoking the identity (𝐈-𝐋)(𝐈+𝐋+𝐋^2+⋯+𝐋^n-1) = 𝐈-𝐋^n and the observation that 𝐋^n=0 since 𝐋 is strictly lower triangular, we obtain that κ(R)= tr(𝐀) + 𝐳^T((𝐈-𝐋)^-1-𝐈)𝐳 = tr(𝐀) + 𝐳^T(𝐈-𝐋)^-1𝐳 - n.

Now we discuss the runtime. Since 𝐋 is strictly lower triangular, 𝐈-𝐋 is lower triangular. Note that computing 𝐓^-1 u is equivalent to solving 𝐓v=u and, if 𝐓 is lower triangular, can be done efficiently in O(nnz(𝐓)+n) time using forward elimination (degenerate Gaussian elimination), where nnz(𝐓) denotes the number of non-zero entries in 𝐓.
Therefore (𝐈-𝐋)^-1𝐳 can be computed in O(n+|E|) time, and thus κ(R) in O(n+|E|) time.

We remark that the runtime in Corollary <ref> is significantly faster, by a factor of n, than the naïve algorithm that sums κ_p(R) over p, which would take O(n^2+n|E|) time.

§.§ Remarks

For efficiency purposes, we can choose the ranking with the least number of items. Algorithm <ref> is the pseudocode for Theorem <ref>. Generating 𝐀 (Lines <ref>–<ref>) takes O(Nn^2) time. Computing κ_1(R)=tr(𝐀) (Line <ref>) takes O(n) time. For each p≥2, the matrix-vector multiplication 𝐋𝐲 in Line <ref> takes O(n+|E|) time, since 𝐋 has at most O(|E|) non-zero entries. In fact, |E|=κ_2(R). Line 20 takes O(n) time. Overall, computing κ_p(R), after generation of 𝐀, takes O(p(n+|E|)) time for p≥2.

§.§.§ Sensitivity detection by gaps and positions of items

There are some differences among the κ_p(R). Note that κ_1(R) is controlled by γ and d in Equation (<ref>), where d is a factor reflecting the deviation of each item's positions in the rankings. Higher disagreement of the items' positions in the rankings results in a lower κ_1(R). Hence κ_1(R) provides sensitivity detection for the variation of item positions. For p>1, the measure κ_p(R) takes into account the strength of the relationship between two items by incorporating λ and g in Equation (<ref>). We demonstrate this ability for sensitivity detection in terms of the gaps and positions of items in Section <ref>.

§.§.§ Duplicate rankings

In the above example of R={π_1=abcdef, π_2=bdcefa, π_3=bcdeghijkf, π_4=badefc} and its adjacency matrix 𝐀 in Equation (<ref>), we assumed that there are only distinct rankings in R. However, this is not always the case: especially in the group decision making process, there may be duplicate rankings produced by the experts. For example, we may have a multi-set R'={π_1=abcdef, π_2=bdcefa, π_3=bcdeghijkf, π_4=badefc, π_5=badefc, π_6=badefc, π_7=badefc}, which contains the duplicate ranking badefc. Obviously, R' has a higher degree of consensus than R. However, κ_p(R')=κ_p(R); that is, κ_p(·) cannot discriminate between R' and R. In order to distinguish the difference, we let o(π) be the number of occurrences of ranking π in R', and define κ̂(R')=κ(R')+|R'|·s/|R̃|, where s=max{o(π) : π∈R'} and R̃ is the set of all distinct rankings in R'.

§.§.§ Rankings with ties

Rankings with ties occur when the preference scores over some items are identical. Let π_k=b_k_1…b_k_n be a ranking with ties, where b_k_i is a set of items with an identical ranking score, b_k_i∩b_k_j=∅ if i≠j, and for i<j, every x∈b_k_i is more preferred than all y∈b_k_j. If we replace Equation (<ref>) with

η_k(σ)= j if σ∈b_k_j; 0 otherwise,

then Theorem <ref> can be used, without any modification, to measure the consensus of rankings with ties.

§.§.§ Weighting scheme for top-k items

The measure κ_p(R) is a subsequence-based consensus measure and has the ability to handle top-k rankings. In the case where the top-k items of the rankings need to be weighted more heavily and the items after the top k are not important (see, e.g., <cit.>), a slight revision to Equation (<ref>) will accommodate this. Given a cut-off value ζ for item positions, we can weigh the items with

κ_1(R)= ∑_σ∈Σ H(ζ-μ(σ)-d) p^μ(σ) γ^d

where 0<p<1. Clearly, if an item is ranked after position ζ in one of the rankings, H(ζ-μ(σ)-d) p^μ(σ) γ^d = 0.

§ EXPERIMENTS

This section presents how the proposed κ_p(·) and κ(·) can be applied to evaluate rank aggregation and to compare search engine rankings. The source code and the experimental data used are available on the github repository [<https://github.com/zhiweiuu/secs>].
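As a compact reference for the experiments that follow, the whole pipeline of Theorem <ref> and Corollary <ref> fits in a short sketch; with γ=λ=1 it reproduces Example <ref> (κ_1..κ_4 = 5, 7, 4, 1 and κ=17). This is our own illustrative implementation, not the repository code:

```python
import numpy as np

def build_A(rankings, gamma=1.0, lam=1.0):
    # Adjacency matrix of Theorem 1, built on an arbitrary reference ranking.
    # gamma and lam are the bases of theta and psi; gamma = lam = 1 gives the
    # unweighted matrix of the motivating example.
    ref, N = rankings[0], len(rankings)
    n = len(ref)
    # eta_k(sigma): 1-based position of sigma in ranking k, 0 if absent.
    eta = [{s: i + 1 for i, s in enumerate(r)} for r in rankings]
    A = np.zeros((n, n))
    for i, s in enumerate(ref):                      # diagonal: theta(s)
        ps = np.array([e.get(s, 0) for e in eta], float)
        if ps.min() > 0:                             # item common to all rankings
            A[i, i] = gamma ** np.mean(np.abs(ps - ps.mean()))
    for j in range(n):                               # strict lower part: psi
        for i in range(j + 1, n):
            gaps = [e.get(ref[i], 0) - e.get(ref[j], 0) for e in eta]
            if A[i, i] > 0 and A[j, j] > 0 and min(gaps) > 0:
                A[i, j] = lam ** np.mean(gaps)
    return A

def kappas(A, p_max):
    # kappa_1 = tr(A); kappa_p = z^T L^(p-1) z for p > 1.
    L, z = np.tril(A, -1), np.ones(A.shape[0])
    out, v = [np.trace(A)], z.copy()
    for _ in range(p_max - 1):
        v = L @ v
        out.append(z @ v)
    return out

def kappa_total(A):
    # Corollary 2: kappa = kappa_1 + z^T (I - L)^(-1) z - n.
    n = A.shape[0]
    L, z = np.tril(A, -1), np.ones(n)
    return np.trace(A) + z @ np.linalg.solve(np.eye(n) - L, z) - n

A = build_A(["abcdef", "bdcefa", "bcdeghijkf", "badefc"])
print(kappas(A, 4))      # [5.0, 7.0, 4.0, 1.0]
print(kappa_total(A))    # 17.0
```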
§.§ Evaluation of rank aggregation of full rankings

Rank aggregation, with wide applications in decision making systems, machine learning and social science, is the problem of how to combine many rankings in order to obtain one consensus ranking <cit.>. Dwork et al. proved that, even for |R|=4, obtaining an optimal aggregation with the Kendall τ index is NP-hard <cit.>. Another problem with rank aggregation is the lack of ground truth for evaluation. Here, we show how to use the proposed κ_p(·) to evaluate rank aggregation results.

The rankings used in this experiment are the seven rankings (shown in Table <ref>) of 10 clustering algorithms with respect to 7 different validation measures when the algorithms are used to cluster microarray data into five clusters <cit.>. The details of the 7 validation measures (APN, AD, ADM, FOM, connectivity, Dunn and Silhouette) and of the 10 clustering algorithms (SM, FN, KM, PM, HR, AG, CL, DI and MO) can be found in <cit.>. Here, we use R_c to denote the set of the 7 rankings.

Two rank aggregation algorithms, the Cross-Entropy Monte Carlo algorithm (CE) and the Genetic algorithm (GA), have been used to aggregate the 7 rankings in <cit.>, and the aggregated rankings are shown in Table <ref>. We create two sets of rankings R_GA and R_CE by adding each aggregated ranking into R_c: R_GA=R_c∪{GA}, R_CE=R_c∪{CE}. By the property in Equation (<ref>), clearly κ(R_GA)≤κ(R_c) and κ(R_CE)≤κ(R_c). Therefore, we would reasonably expect that a better aggregated ranking results in a smaller decline of κ(·) when the aggregated ranking is added to R_c.

Table <ref> shows the experimental results for κ(R_GA) and κ(R_CE), obtained by varying the values of γ and λ from 1 to 0.45. Figure <ref> shows the changes of κ_p(·) for both GA and CE. We find that the aggregated ranking produced by the CE algorithm is more sensitive to γ, while the aggregated ranking produced by the GA algorithm is more sensitive to λ.

§.§ Evaluation of consensus for top-k rankings

In this section, we show how the proposed consensus measure can be used to evaluate top-k rankings <cit.>. Here, we are interested in the top-25 items from Google and Bing searches. The twelve top-25 rankings used in this experiment are the search rankings from Google and Bing for 6 related keywords: "Bond films", "Bond Movies", "007 films", "007 movies", "James Bond films" and "James Bond movies", with 6 rankings from Google and 6 rankings from Bing. Table <ref> shows the twelve rankings. As these keywords refer to an identical concept from a human perspective, we want to know how close or related the resulting rankings are, which can be evaluated by the proposed consensus measures κ(R) and κ_p(R) in terms of "relatedness" or "closeness". More details about the extracted rankings can be found on the github repository.

The κ(R) values for the Bing and Google rankings are shown in Table <ref>. The results show that, for the given keywords, Google has consistently higher κ(R) values than Bing when both λ and γ vary from 1 to 0.45. However, it is difficult to discern which search engine's results are more sensitive to γ and λ. Therefore, we show the changes of κ_p(R) in terms of γ and λ in Figure <ref>. In Figure <ref>, though Bing has 8 links in common (κ_1(R)=8) and Google has only 7 links in common (κ_1(R)=7), Bing's search results are more sensitive to γ, which shows that the deviation of the link positions in Bing's search results is bigger than that in Google's. In particular, when γ≤0.85, the κ_1(R) for Google's rankings is in fact higher than that for Bing's rankings.
Also, as shown in Figure <ref>, both κ_2(R) and κ_3(R) suggest that Google's rankings have a higher degree of consensus than Bing's rankings.

§ CONCLUSION AND FUTURE WORK

This paper introduces a novel approach for measuring the consensus of rankings by using graph representation, in which the vertices are the items and the edges represent the relationships between items in the rankings. Such representation leads to various algorithms for consensus measure in terms of different aspects of the rankings, including the number of common patterns, the number of common patterns with a fixed length, and the length of the longest common patterns. We present how the proposed approaches can be used to evaluate rank aggregation and to compare search engine rankings. In future work, we will look into the property shown in Equation (<ref>) and use it to define a new objective function for rank aggregation, so that the proposed approaches can be used to develop elastic algorithms for rank aggregation. A challenging task for the future is to extract the common patterns of rankings and use these common patterns to define a probabilistic model for evaluating rankings generated from different systems.
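As a closing illustration of how the experiments above are driven, the γ/λ sweeps reduce to repeated calls of the Theorem <ref> sketch given earlier; the rankings below are toy stand-ins, not the Table <ref> data:

```python
# Sweep gamma and lambda with build_A and kappas from the earlier sketch.
# The three top-5 rankings here are hypothetical stand-ins for top-25 URL lists.
R_toy = ["abcde", "abdce", "abcef"]
for gamma in (1.0, 0.85, 0.45):
    for lam in (1.0, 0.85, 0.45):
        A = build_A(R_toy, gamma=gamma, lam=lam)
        print(gamma, lam, [round(k, 3) for k in kappas(A, 3)])
```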
Hardware for Dynamic Quantum Computing

Thomas A. Ohki
Raytheon BBN Technologies, Cambridge, MA 02138, USA
Corresponding author: [email protected]

We describe the hardware, gateware, and software developed at Raytheon BBN Technologies for dynamic quantum information processing experiments on superconducting qubits. In dynamic experiments, real-time qubit state information is fed back or fed forward within a fraction of the qubits' coherence time to dynamically change the implemented sequence. The hardware presented here covers both control and readout of superconducting qubits. For readout, we created a custom signal processing gateware and software stack on commercial hardware to convert pulses in a heterodyne receiver into qubit state assignments with minimal latency, alongside data-taking capability. For control, we developed custom hardware with gateware and software for pulse sequencing and steering information distribution that is capable of arbitrary control flow within a fraction of superconducting qubit coherence times. Both readout and control platforms make extensive use of FPGAs to enable tailored qubit control systems in a reconfigurable fabric suitable for iterative development.

§ INTRODUCTION

Building a large-scale quantum information processor is a daunting technology integration challenge. Most current experiments demonstrate static circuits, where a pre-compiled sequence of gates is terminated by qubit measurements. In some cases, conditional control flow is emulated by postselecting data on certain measurement outcomes <cit.>, or by gating duplicate hardware behind a switch to handle a single branch in a pulse program <cit.>. However, because of the need for quantum error correction <cit.>, fault-tolerant quantum computation is inevitably an actively controlled process. This active control may manifest as: continuous entropy removal from the system via active reset <cit.>, active error correction after decoding syndrome measurements, Pauli frame updates for subsequent pulses after state injection <cit.>, or non-deterministic "repeat-until-success" <cit.> gates. The community is now tackling the challenge of dynamically steering an experiment within the coherence time of the qubits <cit.>. For superconducting qubits this coherence time, although continuously improving, is currently 50–100 μs. To achieve control fidelities compatible with expected thresholds for fault-tolerant quantum computation <cit.>, the feedback/feedforward time must be less than 1% of this coherence time, or on the order of a few hundred nanoseconds.

Superconducting qubit control systems send a coordinated sequence of microwave pulses, with durations from tens to hundreds of nanoseconds, down the coaxial lines of a dilution refrigerator to implement both control and readout of the qubits. Currently, the microwave pulses are produced and recorded at r.f. frequencies by mixing up or down with a microwave carrier, allowing commonly available ≈1 GS/s digital-to-analog (DAC) and analog-to-digital (ADC) converters to be used. In the circuit quantum electrodynamics (QED) platform <cit.>, the qubit state is encoded in the amplitude and phase of a measurement pulse that interacts with a microwave cavity coupled dispersively to the qubit. This microwave pulse is typically captured with a room temperature receiver, then converted into a qubit state assignment via a digital signal processing (DSP) pipeline.
Programming the control sequences for dynamic experiments also requires a supporting framework in the pulse sequencing language and hardware. Conventional arbitrary waveform generator (AWG) sequence tables are far too restrictive to support control flow beyond simple repeated sections. The desired control flow requires conditional execution, loops with arbitrary nesting, and subroutines for code reuse. The required timescale for active control is beyond the capabilities of a software solution running on a general purpose operating system; however, it is within reach of custom gateware running on field programmable gate arrays (FPGAs) directly connected to analog ↔ digital converters for both qubit control and measurement. Many groups in superconducting and ion trap quantum computing have turned to this approach and started to build a framework of controllers and actuators. For trapped ions, the Advanced Real-Time Infrastructure for Quantum physics (ARTIQ) <cit.> is a complete framework of hardware, gateware, and software for controlling quantum information experiments. However, ARTIQ's control flow architecture uses general purpose CPUs implemented in FPGA fabric, so-called soft-core CPUs, which cannot maintain the event rate required by superconducting qubits (gates are 1–2 orders of magnitude slower in ion traps). Researchers at UCSB/Google <cit.>, ETH Zurich <cit.>, TU Delft <cit.>, and Yale <cit.> have also built superconducting qubit control and/or readout platforms using FPGAs, and have even explored moving them to the cryogenic stages <cit.>, but have generally not made these tools available to the broader quantum information community.

In this work, we introduce the QDSP framework and the Arbitrary Pulse Sequencer 2 (APS2) for qubit readout and control, respectively. QDSP implements state assignment and data recording in FPGA gateware for a commercially available receiver/exciter system (the Innovative Integration X6-1000M, also used in the Yale work <cit.>). We show how latency can be minimized for rapid qubit state decisions by consolidating many of the conventional DSP stages into one. The APS2, shown in Fig. <ref>, has gateware designed to naturally support arbitrary control flow in quantum circuit sequences on superconducting qubits. For circuits involving multiple qubits, state information from many qubits must be collated and synthesized into a steering decision by a controller. To this end we designed the Trigger Distribution Module (TDM) to capture up to eight channels of qubit state information, execute arbitrary logic on an FPGA, and then distribute steering information to APS2 output modules over low-latency serial data links. All the systems presented here are either commercially available or have full source code for gateware and drivers posted under a permissive open-source license.

To validate the developed gateware and hardware, we demonstrate multi-qubit routines and quantum gates that require feedback and feedforward: active qubit initialization, entanglement generation through measurement, and measurement-based logic gates. Although these are specific examples, they are implemented in a general framework that enables arbitrary steering of quantum circuits. Furthermore, with appropriate quantum hardware, the different circuits are all achieved without re-wiring the control systems, but simply by executing different programs on the APS2 and TDM.

§ QUBIT STATE DECISIONS IN HARDWARE

The first requirement for quantum feedback is extracting qubit state decisions with minimal latency.
Typical superconducting qubit measurements involve sending a microwave pulse to a readout resonator, recording the reflected/transmitted signal, filtering noise and other out-of-band signals, and reducing the record to a binary decision about the qubit state. Conventionally, this is accomplished with a superheterodyne transmitter and receiver operating with an intermediate frequency (IF) of tens of MHz, which allows the IF stages to be handled digitally. Since many measurement channels may be frequency multiplexed onto the same line, the DSP chain involves several stages of filtering to channelize the signal. This involves mixing the captured record with a continuous wave (CW) IF signal produced by a numerically controlled oscillator (NCO), followed by several low-pass filtering and decimation stages, to recover a baseband complex-valued phasor as a function of time (Fig. <ref>). This complex time series is then integrated with a kernel, which may be a simple boxcar filter or optimized to maximally distinguish the qubit states <cit.>. A final qubit state is determined by thresholding the integrated value. These receiver functions, which have frequently been implemented in software, are ideally suited to the DSP resources available in modern FPGAs. Moving these functions into custom gateware has additional benefits: parallel processing of simultaneous measurements, reduced CPU load on the control PC, and greatly reduced latency of qubit state decisions.

§.§ Filter design

The design of the channel filter for qubit readout is the result of balancing several considerations:

* bandwidth of the channel, which should be some small multiple of the resonator bandwidth, κ;
* stopband attenuation sufficient to remove channel crosstalk;
* numerical stability, particularly when implemented with either single-precision or fixed-point representation;
* latency;
* computational resources.

Some of these criteria are in competition with each other. For instance, one may decrease channel crosstalk by using a higher-order filter, but this comes at the expense of increased latency and computational cost. Qubit devices used in our lab have typical resonator bandwidths of 1–3 MHz. In the high-fidelity QND readout regime we have noticed harmonic content in the readout signal at multiples of the dispersive shift, χ, which extends the signal bandwidth by roughly a factor of 2. Consequently, we have designed channel filters with 10 MHz bandwidth. The downconversion structure of Fig. <ref> selects symmetric channels around the IF frequency; thus, a 10 MHz channel corresponds to a filter with a 3 dB bandwidth of 5 MHz. We also want sufficient stopband attenuation to limit channel crosstalk. We have chosen the stopband attenuation such that a full-scale signal in an adjacent channel is suppressed below the least significant bit of the selected channel. Given the signed 12-bit ADCs on our target platform, this requires 20 log_10(1/2^11) ≈ 66 dB stopband attenuation.

The relatively narrow bandwidth of the readout channels compared to the 1 GS/s sampling rate of the ADC leads to numerical stability problems in fixed-point or single-precision designs. Re-expressed as a relative bandwidth, the f_3dB = 5 MHz channel described above has n_3dB = 0.01. However, it is difficult to construct stable filters with normalized bandwidth n_3dB < 0.1. This may be solved by cascading several polyphase decimating filters to boost the normalized 3 dB bandwidth of the later stages; this brings the additional benefit of reducing the computational resources.
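To see why the decimating cascade helps, one can track the normalized bandwidth through the stages; a small sketch, where the per-stage decimation factors are hypothetical and chosen only to illustrate crossing the n_3dB ≈ 0.1 stability guideline:

```python
# Track n_3dB = f_3dB / (fs / 2) through a decimating filter cascade.
f3db = 5e6            # 3 dB bandwidth of a 10 MHz readout channel
fs = 1e9              # ADC sample rate
for decimation in (1, 4, 4, 4):       # hypothetical per-stage factors
    fs /= decimation
    print(f"fs = {fs / 1e6:7.1f} MHz, n_3dB = {f3db / (fs / 2):.3f}")
```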
§.§ Fast Integration Kernels

While the complete time trace of the measurement record is a useful debugging tool for observing and understanding the cavity response from the two (or more) qubit states, a conventional channelizer with multiple stages of signal processing (NCO mixing, filtering and integrating) forces an undesirable latency. Take a typical example of 10 MHz channels spaced 20 MHz apart. A Parks-McClellan <cit.> designed FIR low-pass filter for a 250 MHz sampling rate with a pass band from 0–5 MHz and a stop band from 15–125 MHz with 60 dB suppression requires at least 86 taps. At a typical FPGA clock speed of 250 MHz this results in 100s of nanoseconds of latency. However, the qubit state decision reduces the time dimension to a single value with a kernel integrator. The intermediate filtering stage is thus superfluous if we can construct an appropriate frequency-selective kernel. This crucial insight enables us to drive the signal processing latency down to a few clock cycles.

More formally, consider the discrete-time measurement record v(t_l) with a total of L samples. Applying the DSP chain of Fig. <ref>, the final complex-valued qubit state signal (before thresholding, and ignoring decimation for simplicity) is

q = ∑_l=0^L k_l [ ∑_n=0^N b_n e^-iω t_l-n v(t_l-n) ],

where the demodulation frequency is ω and e^-iω t_l-n v(t_l-n) is the mixed-down record, the channel is selected by the inner sum, an N-tap FIR filter with coefficients b_n, and the outer sum applies a final kernel integration k_l over the length of the record L. The nested sum and product can be expanded and the terms collected into a single kernel integration with a modified kernel:

q = ∑_l=0^L k_l' v(t_l);   k_l' = e^-iω t_l ∑_n=0^N k_l+n b_n.

Thus, the three-stage pipeline of Fig. <ref> is reduced to a single-stage pipeline consisting solely of the kernel integration step.

This reduction of the pipeline to a single stage has substantial advantages for DSP latency. In particular, the FIR filter block of the three-stage pipeline has a minimum latency of N clock cycles for an N-tap filter. As discussed above, this can be 100s of nanoseconds, and this single filter stage consumes the entire latency budget in a single step. By contrast, the DSP pipeline of Eq. <ref> can be achieved with 1–3 clock cycles of latency on the FPGA, or ≤ 15 ns.

While equations <ref> and <ref> demonstrate the mathematical equivalence of the one-stage and three-stage DSP pipelines, in practice it is not necessary to transform a baseband integration kernel via Eq. <ref>. Instead, one can use the average unfiltered (IF) response at the ADC after preparing a qubit in |0⟩ and |1⟩ to construct a matched filter <cit.>. The frequency response of the resulting filter will match that of the measurement pulse itself. Consequently, as long as the measurement pulse is itself band-limited, which should always be the case with an appropriately designed dispersive cavity measurement, the resulting matched filter will also optimally "channelize" the ADC input and suppress interference from other multiplexed qubit measurement channels.

§.§ Hardware Implementation

To minimize overall latency, we implement our qubit readout system in custom FPGA gateware (QDSP, <github.com/BBN-Q/BBN-QDSP-X6>) and software drivers (libx6, <github.com/BBN-Q/libx6/>) for a commercially available hardware platform (Innovative Integration X6-1000M). The X6 hardware provides two 12-bit 1 GS/s ADCs and four 16-bit 500 MS/s DACs.
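Before describing the gateware further, the kernel-folding identity of the previous subsection is easy to verify numerically. The sketch below builds an arbitrary filter and kernel, runs both pipelines on a random record, and checks that the results agree (boundary terms with l-n < 0 are simply dropped in both); all parameter choices here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
L_rec, N, omega = 256, 32, 2 * np.pi * 0.02   # record length, taps, IF (rad/sample)
v = rng.standard_normal(L_rec)                 # measurement record v(t_l)
b = np.hamming(N) / np.hamming(N).sum()        # FIR channel filter taps b_n
k = np.ones(L_rec)                             # boxcar integration kernel k_l
t = np.arange(L_rec)

# Three-stage pipeline: mix down, FIR filter, then kernel integrate.
mixed = np.exp(-1j * omega * t) * v
filtered = [sum(b[n] * mixed[l - n] for n in range(N) if l - n >= 0)
            for l in range(L_rec)]
q_three_stage = np.sum(k * np.array(filtered))

# Single-stage pipeline: fold filter and mix-down into a modified kernel k'.
k_prime = np.exp(-1j * omega * t) * np.array(
    [sum(k[l + n] * b[n] for n in range(N) if l + n < L_rec)
     for l in range(L_rec)])
q_single_stage = np.sum(k_prime * v)

print(np.allclose(q_three_stage, q_single_stage))  # True
```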
Although the gateware focuses on the receiver application, it also provides basic AWG functionality to drive the DACs for simple waveforms such as measurement pulses. A block diagram of the receiver section of the gateware is shown in Fig. <ref>. The structure includes a fast path for low-latency qubit state decision output, as well as a conventional receiver chain for debugging and calibration. The gateware and drivers allow users to tap the data stream at several points for data recording or debugging. The raw values from each ADC are presented to the FPGA four samples wide at 250 MHz when sampling at 1 GS/s (we sample at the maximum rate to minimize noise aliasing). We immediately decimate by a factor of 4 by summing the four values so that subsequent processing deals with only one sample per clock. This is mainly for convenience: the raw integrators could run in parallel and the data could be serialized for the subsequent filtering. The data is copied to N IF kernel integrators for multiplexed readout. The outputs of these fast integrators are connected to variable thresholders which drive digital outputs to make fast qubit state decisions available to the pulse sequencing hardware for feedback. These values are also available in software as complex values. For more conventional downconversion, each raw stream is also broadcast to a channelizer module. The module consists of a numerically controlled oscillator (NCO) that generates cosine and sine at the chosen frequency. The incoming ADC data is multiplied with the NCO outputs in a complex multiplier. The mixed signal is then low-pass filtered by a two-stage decimating finite-impulse response (FIR) filter chain. Polyphase FIR filters are chosen for each stage to minimize use of specialized DSP hardware on the FPGA. The FIR filters are equiripple, with the coefficients designed by the Remez exchange algorithm <cit.>. The number of taps was chosen to optimally fit onto the DSP blocks of the FPGA (with reuse from hardware oversampling) and to suppress the stopband by 60 dB, nearly down to the bit level of the 12-bit ADCs. The low-pass filtered and decimated stream is useful for observing and debugging the cavity response. Finally, a decision engine using a baseband kernel integrator is attached to the demodulated stream to complete the conventional DSP chain. § DYNAMIC ARBITRARY PULSE SEQUENCING: APS2 There are demanding requirements on bandwidth, latency and noise for dynamic pulse sequencing with superconducting qubits. The sequencer should naturally represent the quantum circuit being applied, i.e., it should be able to apply a sequence of ≈ 20ns pulses (typical single qubit gate times) rather than treating the entire sequence as a waveform. Simply concatenating waveforms together to create a sequence places extreme demands on the size of waveform memory, and transferring and compiling sequences to the AWG becomes an experimental bottleneck. The sequencer should be able to respond to real-time information from qubit measurement results to make dynamic sequence selections within some small fraction of the relaxation time of the qubits. Finally, the sequencer output should have sufficiently low noise not to limit gate fidelity. Typical AWGs rely on a precalculated list of sequences played out in a predetermined manner, or at best, loops of segments with simple jump responses to an event trigger. Dynamic sequences that implement quantum algorithms require more sophisticated control flow with conditional logic and branching in response to measurement results.
In addition to dynamic control flow, the sequencer should also support code reuse through function calls and looping constructs to keep memory requirements reasonable for long verification and validation experiments such as randomized benchmarking <cit.> or gate set tomography <cit.>. Figure <ref> shows some elementary circuits that require fast feedback or feedforward. A simple and immediately useful primitive is the active reset of a qubit, shown in Fig. <ref>(a). This can remove entropy from the system by refreshing ancilla qubits or simply improve the duty cycle of an experiment in comparison to waiting several multiples of T_1 for the qubit to relax to the ground state. With appropriate control flow instructions, reset with a maximum number of tries is naturally expressed as a looping construct with conditional branching for breaking out of the loop. Indeed, the entire routine could be wrapped as a function call to be reused at the beginning of every sequence. Entanglement generation by measurement, shown in Fig. <ref>(b), is another useful primitive for resource state production that relies on feedforward. The circuit is also a useful testbench as it is very similar to the circuits for syndrome measurement in error correcting codes. Finally, Fig. <ref>(c) shows a more sophisticated use of feedforward. Implementing T gates will most likely dominate the run time of an error-corrected quantum circuit <cit.>. However, if the circuit can be probabilistic, then the average T gate depth can be reduced. These “repeat-until-success” circuits <cit.> bring in one or more ancilla qubits and perform a series of gates and interactions. Then, conditional on the result of measuring the ancilla, either the desired gate or an identity operation has been applied to the data qubit. In the identity case, the gate can be attempted again by repeating the circuit with a refreshed ancilla. The APS2 was constructed to satisfy all these criteria by tailored design of the sequencer. The sequencing engine processes an instruction set that provides full arbitrary control flow and can play a new waveform every 6.66ns (two FPGA clock cycles) to naturally and compactly represent any superconducting qubit circuit with feedback or feedforward. Realtime state information is fed in via high-speed serial links from the TDM. A cache controller mediates access to deep memory for longer experiments. We now discuss in detail some of the design choices. §.§ Arbitrary Control Flow Arbitrary control flow can be achieved with three concepts: sequences, loops (repetition) and conditional execution. We add to this set the concept of subroutines because of their value in structured programming and memory re-use. The gateware implements a control unit state machine with four additional resources: a loadable incrementing program counter indicating the current address in instruction memory; a loadable decrementing repeat counter; a stack that holds the repeat and program counter values; and a comparison register that holds the last comparison boolean result. The specific instruction set supported is shown in Table <ref>. The output instructions enable analog and digital output and are immediately dispatched to output execution engines (see sections <ref> and <ref> below). The next two instructions enable synchronization both between output engines on the same APS2 and between APS2 modules (see section <ref> below).
The next set of instructions provides arbitrary control flow: dedicated repeat instructions enable looping constructs; a comparison instruction enables access to the real-time steering information fed from the TDM; conditional jump instructions enable branching; and call and return instructions allow for subroutines and recursion, enabling, for example, nested loops without multiple loop counters. Finally, although not directly related to control flow, a prefetch instruction gives hints to the cache controller to avoid cache misses. §.§.§ Super-scalar Architecture Each APS2 module has multiple outputs driven by individual execution engines: two analog channels and four marker channels. We use dispatch from a single instruction stream to simplify synchronization of control flow across multiple output engines (Fig. <ref>). Since each execution engine has its own internal FIFO buffer, this also allows the decoder/dispatcher to greedily look ahead and process instructions (contingent on deterministic control flow) and potentially dispatch to the execution units. The look-ahead strategy absorbs the pipelining latency due to an instruction counter address jump after a control-flow instruction. The superscalar approach has to accept some additional complexity in order to convert a serial instruction stream into potentially simultaneous operations in the execution engines. The APS2 provides two mechanisms to solve this synchronization task. The first mechanism is a wait-for-trigger instruction that stalls the execution engines until a trigger signal arrives. While the engines are stalled, the control flow unit/dispatcher continues to load instructions into the output engine buffers. The execution engines respond synchronously to trigger signals, so in this mechanism an external signal provides simultaneity and a method to synchronize multiple modules. The second mechanism, a synchronization instruction, acts as a fence or barrier to ensure that all execution engines are at the same point by stalling processing of instructions until all engines' execution queues are empty. This is also useful for resynchronizing after a non-deterministic wait time, e.g., an uncertain delay before a measurement result is valid. §.§.§ Output Engines Each analog and digital output channel is sequenced by a waveform or marker “output engine” that takes a more limited set of instructions. Waveform Engine The waveform engines create analog waveforms from the following set of instructions:
* play a waveform starting at a given address for a given count;
* stall playback until a trigger arrives;
* stall until the main decoder indicates all engines are synchronized;
* fill a page of the waveform cache from deep memory (see section <ref> for further details).
Typically, each waveform instruction corresponds to a pulse implementing a gate, and so it is important that the waveform engine be fed and be able to process instructions on a timescale commensurate with superconducting qubit control pulses. The main decoder can dispatch a waveform instruction every 3.33ns and the waveform engine can jump to a new pulse every 6.66ns. In addition, typical pulse sequences contain idle periods of zero or constant output. Rather than inefficiently storing repeated values in waveform memory, the instruction in this case is “play this waveform value for n samples” <cit.>. We refer to these as time-amplitude (TA) pairs, and any waveform command can be marked as such. Marker Engine The marker engines create digital outputs from the following set of instructions:
* play a marker with a given state for a given count;
* stall playback until a trigger arrives;
* stall until the main decoder indicates all engines are synchronized.
The natural sample rate for the marker commands is in terms of the sequencer FPGA clock, which runs at a quarter of the analog output rate. To provide single-sample resolution we route the marker outputs through dedicated serializer hardware (Xilinx OSERDESE2). For all but the last sample, the 4 marker samples are simply copies of the desired output state. However, the last word is programmable as part of the instruction to provide full 833ps resolution of the marker rising/falling edge. §.§.§ Modulation Engine An APS2 module is typically used to drive the I and Q ports of an I/Q mixer to modulate the amplitude and phase of a microwave carrier, thus producing the control or readout signal. To improve the on/off ratio, the carrier is typically detuned from the qubit or cavity frequency and the I/Q waveforms modulated at the difference frequency with an appropriate phase shift to single-sideband (SSB) modulate the carrier up or down to the qubit/cavity frequency. Qubit control is defined in a rotating frame at the qubit frequency, so the phase of the modulation has to track the detuning frequency. Z-rotations are implemented as frame updates that shift the phase of all subsequent pulses <cit.>. For deterministic sequences, the modulation and frame changes can be pre-calculated and stored as new waveforms in the pulse library. However, for conditional execution or for experiments with non-deterministic delays, this is not possible and the modulation and frame changes must be done in real time. To support both SSB modulation and dynamic frame updates, the APS2 includes a modulation engine which phase-modulates the waveform output and can be controlled via sequence instructions. The modulation engine contains multiple NCOs to enable merging multiple “logical” channels at different frequencies onto the same physical channel pair. For example, to control two qubits, two NCOs can be set to the detuning frequencies of each qubit, and control pulses can be sent to either qubit with the appropriate NCO selection, while the hardware tracks the other qubit's phase evolution. The phase applied to each pulse is the sum of the accumulated phase increment (for frequency detuning), a fixed phase offset (e.g. for setting an X or Y pulse), and an accumulated frame (to implement Z-rotations). The modulation engine supports the following instructions:
* stall until a trigger is received;
* stall until the main decoder indicates all engines are synchronized;
* reset the phase and frame of the selected NCO(s);
* set the phase offset of the selected NCO(s);
* set the phase increment of the selected NCO(s);
* update the frame of the selected NCO(s);
* select an NCO for a given number of samples.
All NCO phase commands are held until the next instruction boundary, which is the end of the currently playing waveform command or the receipt of a synchronization signal. The commands are held to allow them to occur with effectively no delay: for example, the phase should be reset when the trigger arrives, or a Z rotation should happen instantaneously between two pulses. In addition, I/Q mixers have imperfections that can be compensated for by appropriate adjustments to the waveforms. In particular, carrier leakage may be minimized by adjusting DC offsets, and amplitude/phase imbalance compensated with a 2x2 correction matrix applied to the I/Q pairs. The APS2 includes correction matrix and offset blocks after the modulator to effect these adjustments, as shown in Fig. <ref>. §.§ Caching Strategies Some qubit experiments, e.g.
calibration and characterization, require long sequences and/or many waveform variants. Supporting such sequences requires an AWG with deep memory. However, AWG sequencers immediately run into a well-known depth/speed trade-off for memory: SDRAM with many gigabytes of memory has random access times of 100s of nanoseconds, whereas SRAM, or on-board FPGA block RAM, can have access times of only a few clock cycles but is typically limited to only a few megabytes. This memory dichotomy drives some of the sequencing characteristics of commercial AWGs. For example, the Tektronix 5014B requires 400ns to switch sequence segments and the Keysight M8190A requires a minimum sequence segment length of 1.37ms. These delay times are incompatible with the typical gate times of 10s of nanoseconds for superconducting qubits. However, it is possible to borrow from CPU design and hide this latency by adding instruction and waveform caches to the memory interface. The APS2 has 1GB of DDR3 SDRAM to dynamically allocate to a combination of sequence instructions and waveforms. This corresponds to up to 128 million sequence instructions or 256 million complex waveform points, sufficient for most current experiments. The sequencer and waveform engines interface with this deep memory through a cache controller with access to FPGA block RAM. If the requested data is in the cache, then it can be returned deterministically within a few clock cycles, whereas if there is a cache miss the sequencer stalls while the data is fetched from SDRAM. Cache misses during a sequence are generally catastrophic given superconducting qubit coherence times. However, with heuristics and prefetch hints from the compiler, the cache controller can ensure data has been preloaded into the block RAM before it is requested and avoid any cache stalls. §.§.§ Instruction Cache The APS2 instruction cache is split into two parts to support two different heuristics about how sequences advance through the instruction stream—see Fig. <ref>(a-b). We chose cache line sizes of 128 instructions or 1 kB, which is significantly larger than those used in a typical CPU (Intel/AMD processors typically have cache lines of only 64 bytes) but reflects the lack of a nested cache hierarchy and the more typical linear playback of quantum gate sequences. The first cache is a circular buffer centered around the current instruction address that supports the notion that the most likely direction is forward motion through the instructions, with potential local jumps to recently played addresses when looping. The controller greedily prefetches additional cache lines ahead of the current address but leaves a buffer of previously played cache lines for looping. Function calls, or subroutines, require random access, so the second instruction cache is fully associative. The associative cache lines are filled in round-robin fashion with explicit prefetch instructions. This first-in-first-out replacement strategy for the associative cache ignores any information about cache line usage. Since the cache controller tracks access, a simple extension would be a Least Recently Used (LRU) or pseudo-LRU algorithm. It also places a significant burden on the compiler to insert the prefetch instructions and group subroutines into cache lines. However, given the severe penalty of a cache miss, it is difficult to envisage a hardware-implemented cache controller that can alleviate that burden. §.§.§ Waveform Cache In the use cases we have examined, waveform access does not have the nearly linear structure of sequence instructions.
Rather, a sequence tends to require random access to a small library of short waveforms, where that library may change over time due to calibration or feedback signals, or the desire to scan a range of waveforms. The APS2 has a waveform cache of 128 ksamples to support fast access to a large waveform library. For scenarios demanding that the library change over time, the cache is split into two pages of 64 ksamples—see Fig. <ref>(c). The cache is composed of dual-port block RAM, and so a sequence can be actively playing waveforms from one page while the second page is filled from SDRAM. The two pages' roles can then alternate, supporting total waveform library sizes up to the limit of the SDRAM. For this mode of operation we do not expect to change the waveform library within a single sequence. Filling an entire waveform cache page takes ∼180μs, meaning that at typical repetition rates of 10s of kHz we can exchange the waveform library every few sequences. § SYNTHESIZING AND DISTRIBUTING STEERING INFORMATION As we move beyond simple single qubit feedback circuits we need to synthesize steering decisions from multiple qubit measurement results, and then communicate the steering decision to multiple sequencers. We have designed a dedicated hardware module, the Trigger Distribution Module (TDM), to take in up to eight qubit state decisions and send steering information to up to nine pulse sequencers—see Fig. <ref> for a block diagram. There are eight SMA inputs that feed variable comparators for reading in qubit measurement results from the readout system, with one input used as a data-valid strobe. The TDM can communicate to all the APS2 modules in an enclosure via a high-speed serial connection over SATA cables. The star distribution network also allows us to use the distribution module for synchronization. A reserved symbol acts as a trigger that can be broadcast to all APS2 modules in an enclosure for synchronous multi-module output. There is one additional SATA serial link that can be used for inter-crate communication with other TDMs for future larger circuits that cannot be controlled with a single crate. The baseline TDM gateware (<github.com/BBN-Q/APS2-TDM>) currently broadcasts the measurement results to all APS2 modules. As a result, every APS2 must allow a sequence branch for each result, even when the controlled qubit is not affected by that particular measurement. A more flexible decision logic and sequence steering will become critical in larger circuits. Since all measurement results flow through the TDM, it is natural to consider it orchestrating the entire experiment. For example, in error correction, syndrome decoding could be implemented by the TDM and the required qubit corrections sent to the relevant APS2s only. We see the TDM as a testbed for building out a more scalable qubit control platform with a hierarchy of controllers, where the TDM assumes the role of routing measurement results and steering the computation. § LATENCY With all the pieces in place we can examine the latency budget of a closed feedback loop and highlight potential areas for improvement. A detailed listing is provided in Table <ref>. The total latency from the end of a measurement pulse to the next conditional pulse coming out of the APS2 is ≈430ns. Our test setup incurs an additional ≈110ns of latency from cabling to/from the qubit device in the dilution refrigerator, as well as analog filtering.
The total latency is comparable to 1% of the qubit relaxation time and to our measurement time, and is not the limiting factor in our circuit implementation fidelities. However, there are a few areas amenable to improvement. The APS2 design prioritized instruction throughput and waveform cache size. This required significant buffering and pipelining. Optimizing instead for latency could trade off those capabilities for reduced latency of an APS2 address jump. The serial link between the TDM and APS2 is slow due to FIFOs that manage data transfer through asynchronous clock domains. However, synchronizing the TDM and APS2 to a common 10MHz reference creates a stable phase relationship between clock domains, which would allow these FIFOs to be removed and save ≈ 100ns. Modest benefit could be obtained by integrating the readout system into the TDM, saving two data transfer steps. While not listed in the table, the delays from cabling and analog filtering are also non-negligible. Since we digitize data at 1GS/s, minimal analog low-pass filtering after mixing down to the IF is necessary, except to prevent overloading amplifiers or the ADC. Moving the hardware physically closer to the top of the dilution refrigerator would save ≈20ns. The reduction in cable delays is one potential benefit of cryogenic control systems, but is only a fraction of the total latency budget. § FEEDBACK AND FEEDFORWARD IN CIRCUIT QED The integration of the readout system and APS2/TDM modules into a circuit QED apparatus enables a variety of qubit experiments requiring feedback or feedforward. Feedback indicates that measurements modify control of the measured qubit, while in feedforward the conditional control acts on different qubits. Here we present some examples of simultaneous dynamic control of up to three qubits. We emphasize that the hardware system was designed for flexible multi-qubit experiments, allowing different experiments to be programmed in software with minimal or no hardware changes. The quantum processor used here, first introduced in Ref. Riste2017, is a five-qubit superconducting device housed in a dilution refrigerator at ≈10mK. The wiring inside the refrigerator is very similar to the reference above, with the exception of the addition of a Josephson parametric amplifier (JPA) <cit.> to boost the readout fidelity of one qubit. The control flow of qubit instructions, previously a pre-orchestrated sequence of gates and measurements, is now steered in real time by a TDM. This module receives the digital qubit measurements from the readout system's digital outputs, and distributes the relevant data to the APS2 units, which then conditionally execute sequences. §.§ Fast qubit initialization As a first test of our control hardware, we start with the simplest closed-loop feedback scheme — fast qubit reset <cit.>. A reliable way to initialize qubit registers is one of the prerequisites for quantum computation <cit.>. Conventionally, initialization of superconducting qubits is accomplished by passive thermalization of the qubit to the near zero-temperature environment. However, with a characteristic relaxation time T_1 = 40μs (see Table <ref> for relaxation time details), the necessary waiting constitutes the majority of the experiment wall-clock time.
Furthermore, passive initialization slows re-use of ancilla qubits during a computation, a capability that would relieve the need for a continuous stream of fresh qubits in a fault-tolerant system <cit.>. Feedback-based reset aims to remove entropy on demand using measurement and a conditional bit-flip gate (Fig. <ref> inset) <cit.>. This operation ideally resets the qubit state to |0⟩ if the measurement result is 1, or leaves it unchanged if 0, giving an unconditional output state |0⟩. The effect of reset is evident when comparing the initialization success probability to the case of no reset, i.e., passive initialization (Fig. <ref>). As the initialization time is decreased to T_1 or lower, passive initialization becomes increasingly faulty, while active reset is largely unaffected. We extend this protocol to reset a register of three qubits simultaneously. This is accomplished with no additional hardware beyond that already required for the open-loop control of the same number of qubits. We exploit frequency multiplexing to combine two readout signals, so that all signal processing can be accomplished with the two analog inputs of a single X6-1000M. The control flow simply replicates the conditional bit-flip logic across the three qubits |A⟩, |B⟩, |C⟩ (Fig. <ref>a). We assess the performance of the three-qubit reset by measuring the success probabilities for resetting each individual qubit starting from the eight computational input states (Fig. <ref>b). The deviation in success probabilities is largely due to the difference in readout fidelities (Table <ref>), as only qubit |C⟩ is equipped with a JPA. §.§ Measurement-based S and T gates Our hardware is also readily applicable to feedforward scenarios, where the result of a measurement conditions the control of different qubits. A first example is the realization of measurement-based gates. In an error-corrected circuit, gates on a logical qubit can be made fault-tolerant by applying them transversally to all the underlying physical qubits. However, for any given code, the gates of a universal set cannot all be implemented transversally <cit.>. For instance, in the surface code, all Pauli operations X, Y, Z are transversal, but partial rotations such as Z(π/2) are not. To fill this gap, fault-tolerant gates can be constructed with interactions with ancilla qubits and control conditioned on measurement results <cit.>. Here we demonstrate the basic principle of measurement-based gates, implementing partial Z rotations on a physical qubit, using an ancilla and feedforward operations. The initial state of the ancilla, which can be prepared offline to the computation, determines the rotation angle θ. Typical gates are denoted with S (θ = π/2) and T (θ = π/4). An S gate can be decomposed into an ancilla measurement and a conditional Z(π) gate <cit.>, which is transversal in the surface code (Fig. <ref>a). Starting with the ancilla in a superposition state, |+⟩ = (|0⟩ + |1⟩)/√(2), the result of the ancilla measurement determines whether the final state approximates the desired S|+⟩ = (|0⟩ + i|1⟩)/√(2) (Fig. <ref>d), or the π-shifted ZS|+⟩ (e). In the latter case, a corrective Z, applied as a frame update (see Sec. <ref>), gives the intended state S|+⟩ deterministically (f). The reduced coherence, indicated by the length of the arrow, is mainly due to the measurement time (0.9μs), with the addition of ∼ 0.54μs decision latency in (f). Similarly, a T gate can be implemented with a different ancilla preparation and a conditional S gate (Fig. <ref>b).
However, as seen before, the S gate cannot be applied transversally, so it is in turn decomposed into the feedforward sequence above. The result is a nested feedforward loop with up to two ancilla measurements and conditional sequences (Fig. <ref>c). We reuse the same ancilla in the second round, taking advantage of the first measurement to initialize it in a known state. By using the CLEAR protocol <cit.>, we reduce the latency before we can reuse the ancilla (Fig. <ref>g-i). §.§ Entanglement generation through measurement With three qubits, feedforward control can be used to generate entanglement by measurement. Two qubits separately interact with a third ancilla qubit to implement a parity measurement of the first two qubits (Fig. <ref>a). With the first two qubits starting in an equal superposition state, the parity measurement projects them onto either an even or odd Bell state, with the ancilla measurement result containing the information about which one (Fig. <ref>b-c). This parity measurement scenario, with ancillas and feedforward, is also relevant for syndrome extraction in quantum error correction schemes <cit.> and has been experimentally demonstrated in post-selected form <cit.>. With our hardware we can go one step further and deterministically create the odd state by converting the projected even state into an odd one with a conditional bit-flip on one of the data qubits (Fig. <ref>d). This deterministic protocol has also been realized in Ref. Riste2013, but with the ancilla qubit replaced by a cavity mode. § CONCLUSION The APS2 and readout platforms described here are a complete hardware solution for dynamic quantum computing systems. They achieve this with tailored gateware and hardware that enable flexible, low-latency manipulation, thus allowing users to program generic quantum circuits without hardware reconfiguration. We have proved this hardware in situ with a superconducting quantum processor, showing a variety of novel dynamic circuits utilizing feedback and feedforward. To further improve this platform we intend to integrate control and readout into a unified hardware system, investigate improvements to the APS2 analog output chain, and generalize system synchronization. Upconversion systems generically require a multitude of components and suffer from various mixer imperfections, leading to instability and a spectrum polluted by mixer product spurs. Future hardware revisions may solve these issues by moving to faster RF DACs that can directly generate microwave tones with a cleaner spectrum <cit.>. Direct RF output provides greater frequency agility, allowing channel re-use for both control and measurement. New DACs with sampling rates from 4–6GS/s support output modes that direct power into higher Nyquist zones, removing pressure for ultra-high clock speeds. Future FPGAs may include many on-chip RF DACs <cit.>, potentially drastically increasing channel densities in control systems. The typical way to achieve system synchronization is by building trigger fanout trees. This strategy becomes increasingly cumbersome and fragile as system sizes grow. A more scalable approach consists of sharing frequency and time between all devices, so that all modules in the system have a synchronous copy of a global counter. To achieve this, future hardware revisions may incorporate a time distribution protocol such as White Rabbit <cit.>.
Sharing time changes the synchronization paradigm from “go on trigger” to “go at time t”. Finally, we are exploring methods to combine real-time computation with dynamic control flow on the individual APS2 modules. For example, a controller of a system of logical qubits must combine information from a logical decoder with program control flow. A softcore CPU running on the TDM would enable rapid development of realtime infrastructure. Schematic capture and PC board layout for the APS2 and TDM were done by Ray Zeller and Chris Johnson of ZRL Inc., Bristol, RI. Nick Materise developed an initial prototype of the readout system in VHDL. This was converted into a Simulink model and tested with MATLAB HDL Coder before finally being converted back into pure VHDL. The data analysis for the experimental section was performed using code written in Julia <cit.>, and the figures were made with Seaborn <cit.> and matplotlib <cit.>. We used Scipy <cit.> to construct the filter coefficients for the readout system. The authors would like to thank George A. Keefe and Mary B. Rothwell for device fabrication, and Nissim Ofek for discussions about AWG instruction sets. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office contract No. W911NF-10-1-0324 and No. W911NF-14-1-0114. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government.
http://arxiv.org/abs/1704.08314v1
{ "authors": [ "Colm A. Ryan", "Blake R. Johnson", "Diego Ristè", "Brian Donovan", "Thomas A. Ohki" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170426193619", "title": "Hardware for Dynamic Quantum Computing" }
Abrupt disappearance and reemergence of the SU(2) and SU(4) Kondo effects due to population inversion Yigal Meir December 30, 2023 =====================================================================================================§ INTRODUCTION Exclusive jet processes, i.e. those with a fixed number of hard signal jets in the final state, play a crucial role in the Large Hadron Collider (LHC) physics program. Many important processes, such as Higgs or W/Z boson production or diboson production, are measured in different exclusive jet bins. Furthermore, jet substructure techniques have become increasingly important both in Standard Model and in new physics analyses, and the associated observables often exploit the properties of a fixed number of subjets. Theoretical predictions at increasingly high precision are needed to match the increasing precision of the data. Compared to color-singlet final states, the presence of jets makes perturbative QCD calculations more challenging and the singularity structure more complicated. Furthermore, a fixed number of jets is imposed through a jet veto, which restricts the phase space for additional collinear and soft emissions, and generates large logarithms that often need to be resummed to obtain predictions with the best possible precision.Soft Collinear Effective Theory (SCET) <cit.> provides a framework to systematically carry out the resummation of logarithms to higher orders by factorizing the cross section into hard, collinear, and soft functions, and then exploiting their renormalization group evolution. Schematically, the cross section for pp→ N jets factorizes for many observables in the singular limit asσ_N=H_N×[B_aB_b∏_i=1^NJ_i]⊗ S_N,where the hard function H_N contains the virtual corrections to the partonic hard scattering process, the beam functions B_a,b contain parton distribution functions and describe collinear initial-state radiation. The jet functions J_i describe final-state radiation collinear to the direction of the hard partons, and the soft function S_N describes wide-angle soft radiation. The resummation of large logarithms is achieved by evaluating each component at its natural scale and then renormalization-group evolving all components to a common scale. For an interesting class of observables, the jet and beam functions are of the inclusive type and do not depend on the precise definition of the jet regions. They are known for a variety of jet and beam measurements, typically at one loop or beyond <cit.>. Hard functions are also known for many processes at one loop or beyond (see e.g. ref. <cit.> and references therein). In this paper, we focus on determining the soft functions that appear for a wide class of jet algorithms and jet measurements. The resummation at NLL^' and NNLL requires the soft function at one loop. Compared to the beam and jet functions, the perturbative calculation of the soft function generally requires a more sophisticated setup, since it depends not only on the measurements made in the jet and beam regions, but also on the angles between all jet and beam directions and the precise definition of the jet boundaries.N-jettiness <cit.> is a global event shape that allows one to define exclusive N-jet cross sections in a manner that is particularly suitable for higher-order analytic resummation. The calculation of the one-loop soft function for exclusive N-jet processes using N-jettiness has been carried out for arbitrary N in ref. <cit.>. 
There, N-jettiness is used both as the algorithm to partition the phase space into jet and beam regions and as the measurement performed on those regions. To simplify the calculation, the version of N-jettiness used in ref. <cit.> was taken to be linear in the constituent four-momenta p^μ_i, the thrust-like N-jettiness: Τ_N=∑_i min_m{2q_m· p_i/Q_m}= ∑_i min_m {n_m· p_i/ρ_m} . This is essentially a generalization of beam thrust <cit.> to the case of N jets. In Nj the sum runs over the four-momenta p^μ_i of all particles that are part of the hadronic final state, and the minimization over m runs over the beams and N jets identified by the reference momenta q^μ_m=E_m n_m^μ or lightlike vectors n_m^μ= (1,n̂_m), where E_m is the jet energy. The directions n̂_m for the beams are fixed along the beam axis and for the jets are predetermined by a suitable procedure. Finally, the Q_m or ρ_m=Q_m/(2E_m) are dimension-one or dimension-zero measure factors. The minimization in Nj assigns each particle to one of the axes, thus partitioning the phase space into N jet regions and 2 beam regions. This definition of N-jettiness depends only on the choices of jet directions n̂_m and measure factors ρ_m, which determine the precise partitioning and in particular the size of the jet and beam regions. For the cross section with a measurement of Τ_N, the Τ_N→ 0 singular region is fully described by a factorization formula of the form in simpleFact with inclusive jet and beam functions <cit.>. As Τ_N→ 0, different choices of jet axes often differ only by power-suppressed effects in the cross section. N-jettiness can also be used more generally as a means of defining an exclusive jet algorithm, which partitions the particles in an event into a beam region and a fixed number of N jet regions <cit.>. Here particle i is assigned to region m for which some generic distance measure d_m(p_i) is minimal. These regions are defined by region m = {particles i: d_m(p_i) < d_j(p_i) for all j ≠ m } . This partitioning can be obtained from a generalized version of N-jettiness defined by Τ_N({n̂_m}) = ∑_i p_Ti min{ d_1(p_i) , …, d_N(p_i) , d_a(p_i), d_b(p_i) } . Here the d_m jet measures depend on a pre-defined jet axis n̂_m, while the beam measures d_a and d_b are defined with fixed beam axes along ±ẑ. Infrared safety requires that all particles in the vicinity of the axis n_m^μ =(1,n̂_m) are assigned to the respective mth region. More precisely, the measures have to satisfy d_m(p_i) < d_j(p_i) for all j ≠ m in the limit p^μ_i → E_i n^μ_m. Different choices of the d_m correspond to different N-jettiness partitionings, and include for example the Geometric, Conical, and XCone measures <cit.>. The measure in Nj corresponds to taking p_Ti d_m(p_i) = (n_m· p_i)/ρ_m. The two beam regions can be combined into a single one by defining the common beam measure d_0(p_i) = min{ d_a(p_i), d_b(p_i) } . Given a common beam region with a single beam measure d_0(p_i), we can always divide it into two separate beam regions for η>0 and η<0 by taking for example d_a(p_i)=[1+θ(-η_i)] d_0(p_i) and d_b(p_i)=[1+θ(η_i)] d_0(p_i). Constructing a full jet algorithm requires in addition to the partitioning an infrared-safe method to determine the jet axes n̂_m. This could be done by simply taking the directions of the N hardest jets obtained from a different (inclusive) jet algorithm. For a standalone N-jettiness based jet algorithm, the axes can be obtained by minimizing N-jettiness itself over all possible axes, Τ_N = min_n̂_1, …, n̂_N Τ_N({n̂_m}), as in refs.
<cit.>. For the calculations in this paper, we consider a very general set of distance measures for determining the partitioning into jet and beam regions as in Nj2, and a different set of fairly general infrared-safe observables measured on these regions. We explore and compare properties of different jet partitionings in jetregions. For the measured observables we consider the generic version of N-jettiness variables, Τ^(m), given by Τ^(m) = ∑_i ∈ region m f_m(η_i,ϕ_i) p_Ti . Here, η_i, ϕ_i, and p_Ti denote the pseudorapidity, azimuthal angle, and transverse momentum of particle i in region m. The dimensionless functions f_m encode the angular dependence of the observable and in the collinear limit behave like an angularity, see jetmeasure. When considering a single beam region we have a common beam measurement Τ^(0) = Τ^(a) + Τ^(b). Earlier analytic calculations of N-jettiness cross sections have all been done for the case where the observable and partitioning measure coincide, f_m = d_m, in which case the total N-jettiness used for the partitioning is equal to the sum over the individual measurements, Τ_N = ∑_m Τ^(m). The exact definition of the axes n̂_m is irrelevant for the calculation of the soft function. For our purposes we can therefore separate the jet-axes finding from the partitioning and measurement, and we will assume predetermined axes obtained from a suitable algorithm. However, one should make sure to use recoil-free axes <cit.> for angularities to avoid SCET_II-type perpendicular momentum convolutions between soft and jet functions. This is ensured if one defines the axes through a global minimization as in minNj2. In this paper, we determine factorization theorems, which describe the singular perturbative contributions in the Τ_N→ 0 limit for these generic versions of N-jettiness. We then establish a generalized hemisphere decomposition for computing the corresponding one-loop soft function. We carry out the computations explicitly for a number of interesting cases. As the underlying hard process we consider color-singlet plus jet production, and we discuss results for generic angularities as jet measurements. For the beam measurement we discuss different types of jet vetoes, including beam thrust, beam C parameter, and a jet-p_T veto. We also discuss different partitionings, including anti-k_T <cit.> and XCone <cit.>. We find that the one-loop soft function can be written in terms of universal analytic contributions and a set of numerical integrals, which explicitly depend on the partitioning and observable (i.e. the specific definitions of the d_m and f_m). We show that fully analytical results can be obtained in the limit of small jet radius R. Furthermore, we show that the small-R expansion works remarkably well for the soft function even for moderate values of R, if one includes corrections up to R^2. The rest of the paper is organized as follows. In genTau, we discuss in more detail the generalized definition of N-jettiness, jet algorithms, and relevant factorization theorems. In GenHemiDecomp, we discuss the generalized hemisphere decomposition to calculate the one-loop soft function. In OneJetCase, we discuss the explicit results for the case of single-jet production. We conclude in conclusions. Details of the calculations are given in anal_soft_pieces and num_evaluation, and results for dijet production are discussed in dijets. § JET MEASUREMENTS AND JET ALGORITHMS In this section, we discuss the general properties we assume for the jet measurements and for the jet algorithms (partitioning).
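Before specifying these properties, the interplay between the partitioning measures d_m and the measurement functions f_m can be made concrete with a minimal Python sketch (our own illustration, not code from an existing package). We assume the conical measure for the partitioning and, as example observables, a beam-thrust weight f_0 = e^{-|η|} in the beam region and a thrust-like (β = 2) angularity about each jet axis; the axes and particle list are arbitrary:

import math

# Predetermined jet axes (eta_m, phi_m); axis finding is treated separately.
axes = [(0.0, 0.0), (1.0, 2.5)]

def measures(eta, phi, R):
    # Conical measure: d_0 = 1 for the beam region, d_m = R_im^2/R^2 for
    # each jet; other distance measures could be substituted here.
    d = {0: 1.0}
    for m, (eta_m, phi_m) in enumerate(axes, start=1):
        dphi = (phi - phi_m + math.pi) % (2 * math.pi) - math.pi
        d[m] = ((eta - eta_m) ** 2 + dphi ** 2) / R ** 2
    return d

def f(m, eta, phi):
    # Example observables: beam thrust (gamma = 2) in the beam region and a
    # thrust-like angularity (beta = 2, c_m = 1) around each jet axis.
    if m == 0:
        return math.exp(-abs(eta))
    eta_m, phi_m = axes[m - 1]
    dphi = (phi - phi_m + math.pi) % (2 * math.pi) - math.pi
    return ((eta - eta_m) ** 2 + dphi ** 2) / (2 * math.cosh(eta_m))

def njettiness(particles, R=1.0):
    # Assign each (pT, eta, phi) to the region of minimal d_m and accumulate
    # the observables T^(m) = sum_i f_m(eta_i, phi_i) pT_i.
    tau = {m: 0.0 for m in range(len(axes) + 1)}
    for pT, eta, phi in particles:
        d = measures(eta, phi, R)
        m = min(d, key=d.get)
        tau[m] += f(m, eta, phi) * pT
    return tau

print(njettiness([(30.0, 0.1, 0.2), (25.0, 1.1, 2.4), (5.0, -2.0, 1.0)]))

The separation into measures(), which only controls the region assignment, and f(), which only controls the measured value, mirrors the decoupling of partitioning and observable emphasized above.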
We consider the cross section for events with at least N hard jets in the final state with transverse momenta p^J_T,m≥1 ∼ p_T^J ∼ Q, where Q denotes the center-of-mass energy of the hard process. In jetmeasure we define the generalized form of N-jettiness measurements, in jetregions we discuss and compare different jet algorithms, and in factorization we present the form of the factorization theorems for different choices of jet and beam measurements. §.§ Generalized N-jettiness measurements Assuming a partitioning of the phase space into N jet regions (m=1,…,N) and two beam regions (m=a,b), the observable that we will study is defined in each region m by the sum over all particle momenta (but excluding the color-singlet final state),[We consider only cases without unconstrained phase space domains, i.e. no regions with nonzero area in (η,ϕ) coordinates where f_m=0.] Τ^(m) = ∑_i ∈ region m Τ^(m)(p_i) with Τ^(m)(p_i) = f_m(η_i,ϕ_i) p_Ti . Here η_i and ϕ_i denote the pseudorapidity and azimuthal angle of the particle i. The associated jet and beam axes are normalized lightlike directions, and are given in terms of these coordinates by n^μ_m≥1 = 1/coshη_m (coshη_m, cosϕ_m, sinϕ_m, sinhη_m), n^μ_a,b = (1,0,0,±1) . The f_m in Tauidef are dimensionless functions encoding the angular dependence of the observable. To satisfy infrared safety, we require that Τ^(m)(p_i) → 0 for soft and n_m-collinear emissions, implying in particular that lim_η_i →∞ f_a(η_i,ϕ_i) e^-η_i = 0, lim_η_i → -∞ f_b(η_i,ϕ_i) e^η_i = 0, lim_η_i→η_m, ϕ_i →ϕ_m f_m≥1(η_i,ϕ_i) =0. For definiteness we will consider the case that the asymptotic behavior of Τ^(m) in the vicinity of its axis is given by an angularity measurement, which holds for all common single-differential observables, i.e., Τ^(m)(p_i) p^μ_i → E_i n^μ_m⟶ c_m (n_m · p_i)^β_m/2 (n̅_m · p_i)^1-β_m/2 , with β_m>0 and some normalization factors c_m. Defining γ≡β_a = β_b, this is equivalent to f_a(η_i,ϕ_i) η_i →∞⟶ c_a e^(1-γ)η_i , f_b(η_i,ϕ_i) η_i → -∞⟶ c_b e^-(1-γ)η_i, f_m≥1(η_i,ϕ_i) (η_i,ϕ_i)→ (η_m, ϕ_m)⟶ c_m (2coshη_m)^1-β_m [(η_i-η_m)^2 + (ϕ_i-ϕ_m)^2]^β_m/2 . We will discuss several examples in GenHemiDecomp and OneJetCase. The behavior of f_m determines whether the associated collinear and soft sectors are described by a SCET_I-type or SCET_II-type theory. The case γ=β_m =2 corresponds to the standard situation with a thrust-like measurement Τ^(m)(p_i) ∼ n_m · p_i. §.§ Jet algorithms Given a set of jet and beam axes {n_m}, the partitioning of the phase space into jet and beam regions is determined by the distance measures d_m(p_i). As shown in partitioning, particle i is assigned to region m if d_m(p_i) < d_j(p_i) for all j≠ m, i.e., when it is closest to the mth axis. For m≥1, the distance measures d_m(p_i)≡ d_m(R,n_m,p_T,m^J,η_i,ϕ_i) can depend on the jet size parameter R and the jet transverse momentum p_T,m^J. In factorization, we will show that for Τ_N ≪ p_T^J and for well-separated jets and beams and sufficiently large jet radii, the differential cross section in the Τ^(m) can be factorized into hard, collinear, and soft contributions. This requires a jet algorithm which exhibits soft-collinear factorization, such that n_m-collinear emissions are sufficiently collimated to not be affected by the other distance measures d_j≠ m and do not play a role for the partitioning of the event.
Furthermore, the recoil on the location of the jet axes due to soft emissions is power suppressed for the description of the soft dynamics.[Note that for angularities with β_m≤ 1 the recoil due to soft radiation does matter for the description of the collinear dynamics <cit.>.] Thus the partitioning of soft radiation in the event can be obtained by comparing the distance measures d_m for soft emissions with respect to N+2 fixed collinear directions, independently of the axes finding and the jet and beam measurements. We consider the following examples of partitionings for comparisons of numerical results: I: Conical Measure (equivalent to anti-k_T for isolated jets) <cit.>: d_0(p_i)=1, d_m≥ 1(p_i)= R_im^2/R^2. II: Geometric-R Measure <cit.>: d_0(p_i)= e^-|η_i| , d_m≥ 1(p_i)= n_m · p_i/ρ_τ(R,η_m)p_T i = 1/ρ_τ(R,η_m) ℛ^2_im/2coshη_m. III: Modified Geometric-R Measure <cit.>: d_0 (p_i)= 1/2coshη_i , d_m≥ 1(p_i)= n_m · p_i/ρ_C(R,η_m) p_T i = 1/ρ_C(R,η_m) ℛ^2_im/2coshη_m. IV: Conical Geometric Measure (XCone default) <cit.>: d_0 (p_i)=1, d_m≥ 1 (p_i)= 2coshη_m (n_m · p_i)/R^2 p_T i = ℛ_im^2 /R^2 , where ρ_τ and ρ_C are discussed below, and the distances in azimuthal angle and rapidity are given by R_im ≡ √((η_i -η_m)^2 +(ϕ_i-ϕ_m)^2) , ℛ_im ≡ √(2cosh(η_i -η_m) - 2cos(ϕ_i-ϕ_m)) . Since these measures only depend on η_i and ϕ_i, we can obtain explicit jet regions in the η-ϕ plane. The jet regions for an isolated jet with R=1 at different jet rapidities and different R at central rapidity are shown in jetregions. For small R all distance metrics approach a conical partitioning, which means in particular that the deviations from this shape are suppressed by powers of R. For isolated jets the conical distance measure includes all soft radiation within a distance R in η-ϕ coordinates from the jet axis into the jet. Thus, in this case the soft partitioning is equivalent to the one obtained in the anti-k_T algorithm <cit.>, which first clusters collinear energetic radiation before clustering soft emissions into the jets (thus allowing for soft-collinear factorization <cit.>). As explained above, the algorithm for the jet-axes finding is irrelevant for the description of the soft dynamics and the soft function depends only on the soft partitioning with respect to fixed collinear axes. Thus, the soft functions for anti-k_T jets and N-jettiness jets with the conical measure are identical for isolated jets. For overlapping jets, the anti-k_T and N-jettiness partitionings differ. The distance metrics in the anti-k_T algorithm between soft and the clustered collinear radiation depend also on the transverse momenta of the jets, which starts to matter in the singular region Τ_N ≪ p_T^J once two jets start to overlap, i.e. for R_lm < 2R. In this case, anti-k_T assigns soft radiation in the overlap region to the more energetic jet, while the N-jettiness partitioning remains purely geometric. This is illustrated in ThreeJetAlgorithms, for three jets with different transverse momenta that share common jet boundaries. When the distance between two clusters of energetic collinear radiation drops below R, anti-k_T clustering will merge these into a single jet, while the N-jettiness partitioning still gives two close-by jets, thus exhibiting a very different behavior. The (modified) geometric-R measures in d_GeometricR and d_ModGeometricR have the feature that p_Ti d_m(p_i) ∼ n_m· p_i is linear in the particle momenta p_i, as for the pure geometric measure in Nj from which they are derived. The geometric-R measure was first used in ref.
<cit.> to study the jet mass for pp → H +1 jet, taking advantage of the fact that the soft function for this type of measure was computed in ref. <cit.>. The parameters ρ_τ(R,η_m) and ρ_C(R,η_m) are determined by requiring the area in the η-ϕ plane for an isolated jet with rapidity η_m to be π R^2, i.e. by solving ∫_-π^π dϕ ∫_-∞^∞ dη θ[d_0(η) - d_m(ρ,η_m,η,ϕ)] = π R^2 . The solution for ρ in terms of η_m and R can be computed analytically in an expansion for small R, which gives ρ_τ(R,η_m) = R^2 (1+tanh|η_m|)/2 {1+ 2R/π θ(R-|η_m|) [√(1-η_m^2/R^2) - (|η_m|/R) arccos(η_m/R)] + 𝒪(R^2)} , ρ_C(R,η_m) = R^2 {1+ R^2/4 (1-3tanh^2η_m) + 𝒪(R^4)} . Note that the kink at η_m=0 leads to 𝒪(R) corrections for ρ_τ for |η_m|<R. The full R dependence is obtained numerically. In rho, we show ρ_τ and ρ_C as functions of R for η_m=0 and as functions of η_m for R=1. Compared to the conical measure, the shapes of the jet regions are more irregular for the geometric-R measures, as seen in jetregions. In particular, the beam thrust measure in d_GeometricR has a cusp at η =0 due to the absolute value in the beam distance measure, which is not present for the smooth beam C-parameter measure in d_ModGeometricR. Furthermore, we also see a distortion from the circular shape for large jet rapidities towards an elongated shape, which is common to both measures since their beam distance measures become identical in the forward region. Finally, the conical geometric measure was introduced in ref. <cit.> and corresponds to the XCone default measure. It is designed to combine the linear dependence of p_Ti d_m≥ 0(p_i) on the particle momenta of the geometric measures with a nearly conical shape, as can be seen in jetregions. One can show that deviations from the circular shape are only of 𝒪(R^4) and still independent of the jet rapidity, since the distance measures in d_ConicalGeometric only depend on the differences with respect to the jet coordinates. The jet area is π R^2 up to very small corrections of 𝒪(R^6), which reach only ≈ 1% even for large R=1.2. §.§ Factorization for different observable choices In this section we display the form of the factorized cross section for pp → L+ N jets, where L denotes a recoiling color-singlet state, with generic observables in the limit Τ_N ≪ p_T^J. The observables can be categorized according to their parametric behavior close to the jet and beam axes into SCET_I-type and SCET_II-type cases. For notational simplicity we assume that the same observable is measured in each jet region (which asymptotically behaves like Taum_coll with a common β ≡ β_m≥1). We will mainly focus on the properties of the relevant soft function, which also encodes all dependence of the singular cross section on the distance measure used for the partitioning. The scaling of the modes in the effective theory follows in general from the constraints on radiation imposed by the N-jettiness measurements Τ^(m) in Tauidef with m=a,b,1,…,N, the jet boundaries determined by the distance measures in TauN, and potential hierarchies in the hard kinematics. We work in a parametric regime with Τ^(m) ≪ p_T^J and without additional hierarchies in the jet kinematics (which corresponds to a generic setup), i.e. assuming hard jets with p_T,m^J ∼ Q, large jet radii R∼ 1, well-separated collinear directions n_l · n_m ∼ 1, and nonhierarchical measurements in the different regions, Τ^(l) ∼ Τ^(m).
The parametric scaling of the collinear and soft modes is then given by n_a,b-collinear: p_n_a,b^μ ∼ p_T^J (λ^4/γ,1,λ^2/γ)_n_a,b, n_m≥1-collinear: p_n_m^μ ∼ p_T^J (λ^4/β,1,λ^2/β)_n_m, soft: p_s^μ ∼ p_T^J (λ^2,λ^2,λ^2) , where we adopt the scaling λ^2 ∼ Τ_N/p_T^J, and give momenta in terms of lightcone coordinates p^μ =(n · p, n̅· p, p_⊥)_n with respect to the lightcone direction n = (1, n̂) and n̅ = (1, -n̂). The properties of the factorization formulas depend on the values of β and γ and the resulting invariant mass hierarchies between the soft and collinear modes. If β,γ≠ 1 the associated collinear fluctuations live at a different invariant mass scale than the soft modes, leading to a SCET_I-type description. Otherwise at least one collinear mode is separated from the soft modes only in rapidity, giving rise to a SCET_II-type theory involving rapidity divergences for the individual bare quantities and a dependence on an associated rapidity RG scale ν in the renormalized quantities <cit.>. Being fully differential in the hard kinematic phase space Φ_N and all N-jettiness observables Τ^(m), the factorization formulae for the four cases with β, γ=1 and β, γ≠ 1 read:[We do not include effects from Glauber gluon exchange here. For active-parton scattering their perturbative contributions start at O(α_s^4) and can be calculated and included using the Glauber operator framework of ref. <cit.>. For proton initial states the factorization formulae also do not account for spectator forward scattering effects, since the Glauber Lagrangian of ref. <cit.> has been neglected.] A) γ≠ 1, β≠ 1 (SCET_I beams and jets): (n ∈ {a,b,1,…,N}) dσ_κ(Φ_N)/dΤ^(a)⋯dΤ^(N) = ∫(∏_n dk_n) tr[ H_N^κ(Φ_N,μ) S_N^κ( {Τ^(m)-c_m k_m},{n_m},{d_m},μ) ] × ω_a^γ-1 B_a(ω_a^γ-1 k_a , x_a, μ) ω_b^γ-1 B_b(ω_b^γ-1 k_b, x_b, μ) ∏_j=1^N ω_j^β-1 J_j (ω_j^β-1 k_j,μ). B) γ = 1, β≠1 (SCET_II beams and SCET_I jets): dσ_κ(Φ_N)/dΤ^(a)⋯dΤ^(N) = ∫(∏_n dk_n) tr[ H_N^κ(Φ_N,μ) S_N^κ( {Τ^(m)-c_m k_m},{n_m},{d_m},μ,ν/μ) ] × B_a(k_a , x_a, μ,ν/ω_a) B_b(k_b , x_b, μ,ν/ω_b) ∏_j=1^N ω_j^β-1 J_j (ω_j^β-1 k_j,μ). C) γ≠1, β = 1 (SCET_I beams and SCET_II jets): dσ_κ(Φ_N)/dΤ^(a)⋯dΤ^(N) = ∫(∏_n dk_n) tr[ H_N^κ(Φ_N,μ) S_N^κ( {Τ^(m)-c_m k_m},{n_m},{d_m},μ,ν/μ) ] × ω_a^γ-1 B_a(ω_a^γ-1 k_a , x_a, μ) ω_b^γ-1 B_b(ω_b^γ-1 k_b, x_b, μ) ∏_j=1^N J_j(k_j,μ,ν/ω_j) . D) γ = 1, β = 1 (SCET_II beams and jets): dσ_κ(Φ_N)/dΤ^(a)⋯dΤ^(N) = ∫(∏_n dk_n) tr[ H_N^κ(Φ_N,μ) S_N^κ( {Τ^(m)-c_m k_m},{n_m},{d_m},μ,ν/μ) ] × B_a(k_a , x_a, μ,ν/ω_a) B_b(k_b , x_b, μ,ν/ω_b) ∏_j=1^N J_j(k_j,μ,ν/ω_j). In eqs. (<ref>)–(<ref>) the hard function H^κ_N encodes the hard interaction process for the partonic channel κ_a(q_a)κ_b(q_b) → κ_1(q_1)κ_2(q_2) … κ_N(q_N) + L(q_L), κ ={κ_a,κ_b;κ_1,…, κ_N} in terms of the massless (label) momenta q_m^μ =ω_m n_m^μ/2, which satisfy partonic (label) momentum conservation q_a^μ + q_b^μ = q_1^μ + … + q_N^μ + q_L^μ , where q_L^μ is the total momentum of the recoiling color-singlet final state. The x_a,b and label momenta for the initial states are defined via q_a,b^μ = ω_a,b n_a,b^μ/2 ≡ x_a,b E_cm n_a,b^μ/2 . The jet functions J_m≥ 1 and beam functions B_a, B_b describe the final-state and initial-state collinear dynamics, respectively, and S^κ_N denotes the soft function. H^κ_N and S^κ_N are matrices in color space. The c_m are the normalization factors of the observable as defined in Taum_coll. Due to the requirement Τ^(m)≪ p_T^J the collinear modes do not resolve the jet boundaries, such that the jet functions are of the inclusive type and have been computed at one loop in ref. <cit.> for arbitrary values β>0.[For β=2 they have been computed before in refs. <cit.>.]
Note that in the jet functions, for cases C and D (β=1), a rapidity regularization in close correspondence to refs. <cit.> leads to an additional dependence on the scale ratio ν/ω_m. The factorization for the pure SCET_I case, β = γ = 2, is well studied in the literature <cit.> and has been applied to phenomenological predictions for single-jet production <cit.>. Also, both cases A and B have been studied in ref. <cit.> (with the focus on β=2). In this work, we present for the first time cases C and D, and we will focus on those in the following discussion. These represent a generalization of the previous cases, and assume that the jet and beam axes are insensitive to effects due to mutual recoil or to recoil from soft emissions. The recoil of the jet axis due to collinear radiation can be relevant for β>1 (see e.g. ref. <cit.>), but as discussed in ref. <cit.>, is avoided by properly aligning the jet axes. For β ≤ 1, the jet axis can in addition recoil against soft radiation, leading to nontrivial perpendicular momentum convolutions between the jet, beam, and soft functions for recoil-sensitive axes (see e.g. refs. <cit.>). Recoil-free jet axes avoiding this issue can be defined, e.g., through a global minimization of N-jettiness,
Τ_N = min_{n_1,…,n_N} ∑_i ∑_{m=a,b,1,…,N} Τ^(m)(p_i) = min_{n_1,…,n_N} ∑_i ∑_{m=a,b,1,…,N} f_m(η_i,ϕ_i) p_{Ti} ∏_{l ≠ m} θ(d_l(p_i) - d_m(p_i)).
Other sets of axes deviating by only a sufficiently small amount, i.e. by an angle ≪ λ^{2/β}, yield the same result up to power corrections. The measurement in the beam region requires a separate discussion, as the beam axes are fixed by the collider setup. However, one can still avoid transverse momentum convolutions by making a less granular measurement of the jet energies or transverse momenta, with a procedure analogous to the one discussed in ref. <cit.>. Momentum conservation in the direction transverse to the beam implies
k_T^μ ≡ p_{T,a}^μ + p_{T,b}^μ = q_{T,L}^μ + ∑_{m=1}^N p_{T,m}^μ,
where p_{T,m} is the transverse component of the m-th jet momentum, so that measurements of the jet transverse momenta (or of the p_T of a recoiling leptonic state) within a bin size Δp_T^J ≫ p_T^J λ^{2/γ} for γ > 1 and Δp_T^J ≫ p_T^J λ^2 for γ ≤ 1 allow one to integrate over the unresolved transverse momenta and eliminate residual transverse momentum convolutions. This leads to the appearance of the common beam functions, which are known at one loop for γ=1 and γ=2 <cit.>. The soft function, which we are primarily interested in here, depends on the measurements Τ^(m) in the different regions, the angles between any collinear directions n_l·n_m, and the distance measures d_m involving the jet radius. If either a jet or beam measurement is SCET_II-type, it also involves a dependence on the rapidity renormalization scale ν besides the invariant mass scale μ. The (bare) soft matrix element is defined as
S^κ_N({k_m},{n_l},{d_m}) = ⟨0| Y_κ^†({n_l}) ∏_m δ(k_m - Τ̂^(m)) Y_κ({n_l}) |0⟩ .
Here Τ̂^(m) denotes the operator that measures Τ^(m) on all particles in region m, i.e.
Τ̂^(m) |X_s⟩ = ∑_{i ∈ X_s} Τ^(m)(p_i) ∏_{l ≠ m} θ[d_l(p_i) - d_m(p_i)] |X_s⟩.
The color matrix Y_κ({n_l}) is a product of N+2 soft Wilson lines pointing in the collinear directions n_a, n_b, n_1, …, n_N. For a given partonic channel, each of these is given in the color representation of the associated external parton with the appropriate path-ordering prescription.
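To make the partitioning and measurement structure concrete, the following Python sketch assigns each particle to the region with the smallest distance measure and accumulates its contribution f_m(η_i,ϕ_i) p_{Ti}. It is purely illustrative: the function names are hypothetical, the conical jet distance is normalized so that the jet region is ℛ_iJ < R (only the ordering of the d_m matters), and a beam-thrust veto together with a jet angularity is assumed for the measurements.

```python
import numpy as np

def delta_R(eta, phi, eta_J, phi_J):
    """Distance in the eta-phi plane, with phi wrapped to (-pi, pi]."""
    dphi = (phi - phi_J + np.pi) % (2.0 * np.pi) - np.pi
    return np.hypot(eta - eta_J, dphi)

def measure_tau(particles, jet_axes, R, beta=2.0):
    """particles: iterable of (pT, eta, phi); jet_axes: list of (eta_J, phi_J).
    Returns the beam and jet contributions to Tau_N for a beam-thrust veto
    f_B = exp(-|eta|) and jet angularities f_J = (Delta R)^beta."""
    tau = {"B": 0.0, **{f"J{k}": 0.0 for k in range(len(jet_axes))}}
    for pT, eta, phi in particles:
        # conical distance measures: only the ordering of the d_m matters
        d = {"B": 1.0}
        for k, (eJ, pJ) in enumerate(jet_axes):
            d[f"J{k}"] = (delta_R(eta, phi, eJ, pJ) / R) ** 2
        m = min(d, key=d.get)  # region with the smallest distance wins
        if m == "B":
            tau["B"] += pT * np.exp(-abs(eta))
        else:
            k = int(m[1:])
            tau[m] += pT * delta_R(eta, phi, *jet_axes[k]) ** beta
    return tau

# tiny usage example: two emissions, jet axis at (eta_J, phi_J) = (1, 0)
print(measure_tau([(5.0, 1.1, 0.2), (3.0, -2.0, 2.5)], [(1.0, 0.0)], R=1.0))
```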
In the following, we use a normalization such that the tree-level result for S^κ_N is diagonal in color space, S^{κ(0)}_N = 𝟙 ∏_m δ(k_m). The full one-loop soft function for processes with at least one final-state jet is so far only known for specific cases. In ref. <cit.> it has been computed for the thrust-like N-jettiness with β=γ=2, using the same measure simultaneously for the measurement and the partitioning as in Nj. In ref. <cit.> the one-loop soft function for angularities with β>1 in e^+e^- collisions has been calculated, also for a common measurement and partitioning. In the following we will extend these calculations to arbitrary angularity measurements (including jet mass) and jet vetoes (including a standard transverse momentum veto) at pp colliders with the separate partitionings as described in genTau (including the anti-k_T case). At one loop, our results with a global measurement in the beam region are identical to those for the corresponding jet-based vetoes.

§ GENERAL HEMISPHERE DECOMPOSITION AT ONE LOOP

The Feynman diagrams for the computation of the one-loop soft function are displayed in NjetSoneloop. The virtual diagrams vanish in pure dimensional regularization, and the real-radiation contributions associated with only one collinear direction vanish in Feynman gauge due to n_i^2=0. Thus the one-loop expression is given as a sum over real-radiation contributions from different color dipoles, each associated with two external hard partons,
S^bare_N({k_m},{n_m},{d_m}) = ∑_{i<j} 𝐓_i·𝐓_j S_ij({k_m},{d_m})
with i,j = a,b,1,…,N and
S_ij({k_m},{d_m}) = -2g^2 (e^{γ_E} μ^2/4π)^ϵ ∫ (d^d p/(2π)^d) (ν/2p^0)^η [n_i·n_j/((n_i·p)(n_j·p))] 2π δ(p^2) θ(p^0) F({k_m},{d_m},p).
We have included the factor (ν/2p^0)^η to account for the regularization of possible rapidity divergences. Since (ν/(2p^0))^η → (ν/(n̅_i·p))^η for p^μ → (n̅_i·p) n^μ_i/2, the common expressions for the rapidity-regularized jet and beam functions can be used. By contrast, naively applying the Wilson-line regulator in refs. <cit.> for every single collinear direction would give the factor
(ν/|n̅_i·p - n_i·p|)^{η/2} × (ν/|n̅_j·p - n_j·p|)^{η/2},
which for p^μ → (n̅_i·p) n^μ_i/2 becomes (ν/(n̅_i·p))^η (1/|n̂_i·n̂_j|^{η/2}). The additional factor |n̂_i·n̂_j|^{-η/2} leads to different finite 𝒪(η^0) terms, which would lead to a hard function that differs from the standard result, and hence we chose not to use this regulator here. While refs. <cit.> chose the spatial p_3-component for the regularization, in particular to preserve analyticity properties for virtual corrections, we choose here to introduce a regulator only for the real-radiation corrections, for which the energy component is suitable.[Rapidity regulators that only act on the real-radiation contributions have been used earlier in the literature <cit.> (the regulator we use for our multijet situation differs from theirs). An alternative would be a rapidity regulator for the dipole that preserves analyticity, and hence can be used for both real and virtual corrections in S_ij, of the form (ν n_i·n_j/(2|n_i·p - n_j·p|))^η. This regulator does not have an obvious interpretation as coming from the soft Wilson lines.] This is related to a moment of the exponential rapidity regulator used in ref. <cit.>. The function F incorporates the phase-space constraints on the single soft real emission.
In terms of the N-jettiness measurements m(p) with given distance measures d_m(p) for m=a,b,1, … N it readsF({k_m}, {d_m},p)=∑_mδ(k_m - m(p)) ∏_lmδ(k_l)θ(d_l(p) - d_m(p)) .To compute the integral in Sijbare_generalN for arbitrary (one-dimensional) measurements and a general phase-space partitioning we generalize the hemisphere decomposition employed in ref. <cit.>. Our method is based on the fact that the full (IR, UV, rapidity) divergent structure of the soft function contribution S_ij is reproduced using arbitrary (IR safe) measurements Τ̃^(i), Τ̃^(j) that asymptotically satisfy Taum_coll, and using arbitrary distance measures {d̃_k}, with the only requirement that emissions in the vicinity of the axes n_i and n_j have to be assigned to regions i and j, respectively. Having found a combination of measures that allows for an analytic calculation one can then compute the mismatch to the correct measurement and phase-space partitioning in terms of finite (numerical) integrals.The most straightforward choice to enable an analytic calculation with the same singular structure as the full result is to employ directly angularities as measurements in the regions i, j which are defined by thrust hemispheres, i.e. to useΤ̃^(i)(p) =c_i (n_i · p)^β_i/2 (n̅_i · p)^1-β_i/2,Τ̃^(j)(p) =c_j (n_j · p)^β_j/2 (n̅_j · p)^1-β_j/2with the distance measures d̃_i(p) = n_i · p/ρ_i, d̃_j(p)= n_j · p/ρ_j, d̃_k ≠ i,j(p) = ∞.We have included factors ρ_i,ρ_j to allow for the possibility of nonequal hemisphere regions i and j, which we will exploit in OneJetCase to analytically calculate the result in the small-R limit. Taking into account the difference to the actual jet boundaries and measurement, we decompose the measurement function F for the dipole correction S_ij as F({k_l}, {d_l},p)= F̃_i<j({k_l}, p) + Δ F_i<j({k_l},p)+ F̃_j<i({k_l}, p) + Δ F_j<i({k_l},p)+ ∑_m=a,b,1,…,N F_ij^m({k_l}, {d_l},p),with all indices distinguishing separate beam regions a,b andF̃_i<j({k_l}, p) =δ(k_i- Τ̃^(i)(p))θ(n_j · p/ρ_j - n_i · p/ρ_i)∏_liδ(k_l) , Δ F_i<j({k_l}, p)= [ δ(k_i - Τ^(i)(p)) - δ(k_i-Τ̃^(i)(p))] θ(n_j · p/ρ_j - n_i · p/ρ_i) ∏_liδ(k_l) , F_ij^i({k_l}, {d_n},p) = [ δ(k_i - Τ^(i)(p)) δ(k_j) - δ(k_j -Τ^(j)(p)) δ(k_i)] ×θ( n_i · p/ρ_i-n_j · p/ρ_j)θ(d_j(p)-d_i(p)) ∏_l ≠ i,jθ(d_l(p)-d_i(p))δ(k_l) , F_ij^m ≠ i,j({k_l}, {d_n},p) = [ δ(k_m - Τ^(m)(p)) δ(k_i) - δ(k_i -Τ^(i)(p)) δ(k_m)] ×θ(n_j · p/ρ_j - n_i · p/ρ_i) θ(d_i(p)-d_m(p)) ∏_l ≠ iθ(d_l(p)-d_m(p))δ(k_l) +(i ↔ j) . The terms F̃_j<i, Δ F_j<i, and F^j_ij in FNdecomp are defined in analogy by replacing i ↔ j in these expressions for F̃_i<j, Δ F_i<j and F^i_ij. A specific example for this hemisphere decomposition is illustrated in GeneralHemiDecompRect. The F̃_i<j denote the measurement of Τ̃^(i) in the hemisphere i, which can be computed analytically and encodes all divergences. The measurement contribution Δ F_i<j is present if i is not identical to the angularity Τ̃^(i). It corrects for this mismatch within the hemisphere boundaries and therefore does not depend on the final partitioning. Since i and Τ̃^(i) yield the same collinear and rapidity divergences and also the soft divergences cancel in the difference of the two IR-safe observables this is a finite correction. The remaining pieces F^k_ij correct the measurement with the hemisphere boundaries to the actual partitioning given in terms of the distance measures {d_h}. Here the superscript m indicates that the measurement of m instead of i or j needs to be performed in the associated phase space region where d_m is minimal. 
For m= i and m=j this corresponds to the boundary mismatch corrections between the regions i and j. The only singularities in the phase space mismatch regions are soft IR divergences which cancel between two IR safe measurements, such that the corresponding correction to the soft function is also finite and can be calculated numerically in terms of finite (observable and partitioning dependent) integrals. We decompose the contribution of the ij dipole to the soft function in direct correspondence with FNdecomp S_ij ({k_l}, {n_k},{d_m})=S̃_i<j({k_l},ŝ_ij) + Δ S_i<j({k_l},ŝ_ij)+ S̃_j<i({k_l},ŝ_ij)+ Δ S_j<i({k_l},ŝ_ij) +∑_m=a,b,1,…, N S_ij^m({k_l}, {d_n},ŝ_ij) , where the terms on the right-hand side distinguish between two beam regions with separate measurements.The expressions for the individual terms follow by replacing the measurement F({k_l}, {d_n}, p) in SN_color by the corresponding term in FNdecomp. The hemisphere corrections to the soft function S̃_i<j and S̃_j<i have been calculated analytically for β_i=2 in <cit.>. For β_i≠ 1 the result has been given in ref. <cit.> in terms of a finite numerical integral. The latter can be evaluated analytically and vanishes for ρ_i=ρ_j. This yields the bare resultS̃_i<j^β_i ≠ 1({k_l},ŝ_ij)= α_s/4π1/β_i-1 ∏_l ≠ iδ(k_l){8/μ ξ_i<j ℒ_1(k_i/μ ξ_i<j) - 4/1/μ ξ_i<j ℒ_0(k_i/μ ξ_i<j) +δ(k_i) [2/^2-π^2/6-(β_i-2)(β_i-1) θ(ρ_i/ρ_jŝ_i̅ j -1) ln^2 (ρ_i/ρ_jŝ_i̅ j)] +𝒪(ϵ)},with the rescaling factor ξ_i<j given in terms of the angular term ŝ_ij, withξ_i<j≡ c_i (ρ_i/ρ_jŝ_ij)^β_i-1/2,ŝ_ij≡n_i · n_j/2 = 1-cosθ_ij/2, ŝ_i̅j≡n̅_i· n_j/2 = 1+cosθ_ij/2.The plus distributions _n are defined as_n(y) ≡[θ(y) ln^n y/y]_+.For β_i =1 the computation is carried out in anal_soft_pieces which gives the resultS̃_i<j^β_i = 1({k_l},ŝ_ij)= α_s/4π ∏_l≠ iδ(k_l) {8/μ c_i ℒ_1(k_i/μ c_i) - 8/μ c_i ℒ_0(k_i/μc_i) [1/η+ln(ν/μ√(ρ_i/ρ_jŝ_ij))]+δ(k_i) [4/η -2/^2+4/ ln( ν/μ√(ρ_i/ρ_jŝ_ij))+π^2/6+ θ(ρ_i/ρ_jŝ_i̅ j -1) ln^2 (ρ_i/ρ_jŝ_i̅ j)] +𝒪(η,ϵ)}.The hemisphere results S̃_j<i are given by simply replacing i ↔ j in S_hemiS_hemi_b1.We will now explicitly display the corrections to the hemisphere results in S_hemiS_hemi_b1 in terms of finite integrals that can be computed numerically. Depending on the specific partitioning and N-jettiness measurement, different integration variables can be appropriate, e.g. the rapidity η and azimuthal angle ϕ in the lab frame (i.e. coordinates with respect to the beam axis) or the relative rapidity η' and azimuthal angle ϕ' in a boosted frame where the collinear directions n_i and n_j are back-to-back. The former is usually more convenient for the conical (anti-k_T) distance measure in d_Conical since the integration boundaries are just circles in the η-ϕ plane, while the geometric measures in eqs. (<ref>)–(<ref>) involve naturally the momentum projections n_i · p, n_j · p for which the variables η', ϕ' are usually more practical (seerefs. <cit.>). For definiteness we use here beam coordinates, since our general N-jettiness measurements for pp → N jets in Tauidef and also the distance measures in eqs. (<ref>)–(<ref>) are displayed in terms of those, and since our main focus will be the anti-k_T case. First we write the momentum projections in eqs. 
(<ref>), (<ref>) and (<ref>) as n_k · p = p_Tg_k(η,ϕ) , n̅_k · p = p_Tg_k̅(η,ϕ) withg_a(η,ϕ)≡ g_0(η,ϕ) = e^-η, g_b(η,ϕ)≡ g_0̅(η,ϕ) = e^η, g_m>0 (η,ϕ) =cosh (η -η_m) -cos (ϕ -ϕ_m)/coshη_m, g_m̅>0 (η,ϕ) = cosh (η +η_m) +cos (ϕ -ϕ_m)/coshη_m.Keeping only the ϵ-dependence in the phase space integration of Sijbare_generalN which is required to regulate the soft singularities, we can write the correction terms as Δ S_i<j ({k_l},ŝ_ij) =- α_s/π^2 μ^2∫_0^∞ p_T/p_T^1+2∫_-π^πϕ∫_-∞^∞η ŝ_ij/g_i(η,ϕ) g_j(η,ϕ) Δ F_i<j({k_l},p)+, and similarly for S^m_ij. We can then use thatμ^2 ϵ∫_0^∞ p_T/p_T^1+2 ϵ[ δ(k_i)δ(k_m - p_T f_m(η,ϕ)) - δ(k_i - p_T f_i(η,ϕ))δ(k_m)]= δ(k_i)1/μ_0 ( k_m/μ) - 1/μ _0 ( k_i/μ)δ(k_m) - ln( f_m(η,ϕ)/f_i(η,ϕ))δ(k_i) δ(k_m) +𝒪(ϵ).To obtain the correction Δ S_i<j we replace in abpTintegral k_m → k_i, f_m →f̃_i = c_ig^β_i/2_i g^1-β_i/2_i̅ givingΔ S_i<j({k_l},ŝ_lm) = α_s/π I_1,i<j(f_i,ŝ_ij)∏_lδ(k_l)in terms of the angle dependent integral I_1,i<j which depends only on the observable Τ^(i) (via f_i) and the angle ŝ_ij,I_1,i<j (f_i,ŝ_ij)= ŝ_ij/π∫_-π^πϕ∫_-∞^∞η ln( f_i(η,ϕ)/c_i [g_i(η,ϕ)]^β_i/2 [g_i̅(η,ϕ)]^1-β_i/2)1/g_i(η,ϕ) g_j(η,ϕ)×θ(g_j(η,ϕ)/ρ_j-g_i(η,ϕ)/ρ_i) .Similar expressions appear also in ref. <cit.> in computations of soft corrections for general event shapes in e^+ e^--collisions.Finally, the non-hemisphere correction S^m_ij can be written as (see also refs. <cit.>)S^m_ij({k_l}, {d_n},ŝ_ij) = α_s/π{[δ(k_m)1/μ_0 ( k_i/μ) - 1/μ_0 ( k_m/μ)δ(k_i)] I^m_0,ij({d_l},ŝ_ij) ∏_l≠ i,mδ(k_l)+ I^m_1,ij({d_l},f_i,f_m,ŝ_ij) ∏_lδ(k_l)} +(i ↔ j) ,in terms of the integrals I_0,ij^m (and I_0,ji^m), which depends on the partitioning and the angle ŝ_ij, and the integrals I_1,ij^m (and I_1,ji^m), which in addition depend on the measurements Τ^(i) (Τ^(j)) and Τ^(m). These are given byI^m_0,ij({d_l},ŝ_ij) = ŝ_ij/π∫_-π^πϕ∫_-∞^∞η 1/g_i(η,ϕ) g_j(η,ϕ)×θ(g_j(η,ϕ)/ρ_j-g_i(η,ϕ)/ρ_i) ∏_l ≠ mθ(d_l(η,ϕ)-d_m(η,ϕ)) ,I^m_1,ij({d_l},f_i,f_m,ŝ_ij) = ŝ_ij/π∫_-π^πϕ∫_-∞^∞η ln( f_m(η,ϕ)/f_i(η,ϕ))1/g_i(η,ϕ) g_j(η,ϕ)×θ(g_j(η,ϕ)/ρ_j-g_i(η,ϕ)/ρ_i) ∏_l ≠ mθ(d_l(η,ϕ)-d_m(η,ϕ)) .The above expressions allow for a determination of the N-jet soft function at one-loop for arbitrary measurements and distance measures. In practice, evaluating these integrals can be quite tedious, since the phase-space constraints can lead to slow or unstable numerical evaluations. For the one-jet case and distance measures we consider next we solve for the integration limits allowing for fast and precise numerical integrations.§ L+1 JET PRODUCTION AT HADRON COLLIDERS §.§ SetupAs a concrete example for the comparison of numerical results we discuss the case pp → L + 1 jet. Choosing ϕ_J = 0 without loss of generality the lightcone direction of the jet is given by n^μ_J=(1,n̂_J) = (1,1/coshη_J,0, tanhη_J),In this case we partition the phase space only into a single jet and a beam region and the observable is given by Τ_1 = ∑_i {Τ_B(p_i),for d_B(p_i) < d_J(p_i) , Τ_J(p_i),for d_J(p_i) < d_B(p_i). . For Τ_B ≡0 and Τ_J ≡1 we use the parameterizations in Tauidef to specify the observable. As jet observables we consider angularities defined by Angularity Τ_J^β: f^β_J(η_i,ϕ_i) = ℛ_iJ^β where ℛ_iJ denotes the distance of the emission i with respect to the jet axis as defined in DeltaRdef. Among these is for β=2 the observable Τ_J^β=2(p_i)=2coshη_J(n_J · p_i) corresponding directly to the measurement of the jet mass, m_J^2 ≃ p_T^J Τ_J^β=2, as exploited in refs. <cit.>. 
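The identification Τ_J^{β=2} = 2coshη_J (n_J·p_i) holds up to corrections of higher order in the angular distance ℛ_iJ, which is sufficient at leading power. The following sketch (illustrative, with hypothetical function names) checks this numerically using the projection n_J·p = p_T [cosh(η-η_J) - cos(ϕ-ϕ_J)]/coshη_J from gm.

```python
import numpy as np

# Compare the beta = 2 angularity pT * (DeltaR)^2 with 2 cosh(eta_J) * (n_J.p).
# The two expressions agree in the collinear limit DeltaR -> 0.

def tau_angularity(pT, eta, phi, eta_J, phi_J=0.0, beta=2.0):
    return pT * np.hypot(eta - eta_J, phi - phi_J) ** beta

def tau_projection(pT, eta, phi, eta_J, phi_J=0.0):
    nJ_p = pT * (np.cosh(eta - eta_J) - np.cos(phi - phi_J)) / np.cosh(eta_J)
    return 2.0 * np.cosh(eta_J) * nJ_p

if __name__ == "__main__":
    pT, eta_J = 30.0, 1.0
    for r in (0.3, 0.1, 0.03, 0.01):
        eta, phi = eta_J + r / np.sqrt(2.0), r / np.sqrt(2.0)
        ratio = tau_angularity(pT, eta, phi, eta_J) / tau_projection(pT, eta, phi, eta_J)
        print(f"DeltaR = {r:5}: ratio = {ratio:.6f}")  # -> 1 as DeltaR -> 0
```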
In contrast to Tau_hemi, which is the more common definition in e^+e^- collisions, we have defined the angularities in a way which is invariant under boosts along the beam direction and corresponds to the measurement for the Conical Geometric case in ref. <cit.> with the specification γ = 1 (including the XCone default and the Recoil-Free default). For β=1 the definition in fJchoices also corresponds to the default way to study N-subjettiness <cit.>. As measurements of the beam region observable (or jet vetoes) we discuss
beam thrust Τ_B^τ (γ=2): f_B^τ(η) = e^{-|η|},
C-parameter Τ_B^C (γ=2): f_B^C(η) = 1/(2coshη),
transverse energy Τ_B^{p_T} (γ=1): f_B^{p_T}(η_i) = 1 .
These choices include both SCET_I-type observables (beam thrust and C-parameter) and SCET_II-type observables (transverse energy). Thus, with the various choices for Τ_B and Τ_J, we cover all possible combinations of observable types for which the factorization was discussed in factorization.

§.§ Computation of the soft function

The color space for the soft function S^κ_1 with three external collinear directions is one-dimensional, and we write the one-loop expression in analogy to SN_color as
S^κ_1({k_j},{d_j},η_J) = 𝐓_a·𝐓_b S_ab({k_j},{d_j},η_J) + 𝐓_a·𝐓_J S_aJ({k_j},{d_j},η_J) + 𝐓_b·𝐓_J S_bJ({k_j},{d_j},η_J) ,
where S_bJ can be inferred from S_aJ due to symmetry,
S_bJ({k_j},{d_j},η_J) = S_aJ({k_j},{d_j},-η_J) .
For a pure gluonic channel κ = {g,g;g} the color factors are 𝐓_a·𝐓_b = 𝐓_a·𝐓_J = 𝐓_b·𝐓_J = -C_A/2, while for the channel κ = {g,q;q} (and in analogy for its permutations)
𝐓_a·𝐓_b = 𝐓_a·𝐓_J = -C_A/2, 𝐓_b·𝐓_J = C_A/2 - C_F .
The expressions for the Feynman diagrams of the corrections S_ab and S_aJ are given by Sijbare_generalN with N=1. Following the hemisphere decomposition in GenHemiDecomp, for the beam-beam dipole correction S_ab the full hemisphere corrections, i.e. without considering the jet region, can be computed analytically for the measurements in fBchoices. Thus the contributions F̃_a<b, F̃_b<a, ΔF_a<b and ΔF_b<a in FNdecomp can be represented by a single function F_B^whole encoding the full measurement of the beam region observable Τ_B in the whole phase space. We therefore write the measurement function F as[Compared to GenHemiDecomp we perform here the decomposition for a single beam region.]
F({k_j},{d_j},η_J, p) = F_B^whole({k_j},p) + F_ab^J({k_j},{d_j},η_J, p),
F_B^whole({k_j},p) = δ(k_B - p_T f_B(η)) δ(k_J),
F_ab^J({k_j},{d_j},η_J,p) = [ δ(k_B) δ(k_J - p_T f_J(η,ϕ)) - δ(k_B - p_T f_B(η)) δ(k_J) ] × θ(d_B(η) - d_J(η,ϕ)) ,
which is illustrated in Small_R_Decomp_Beam. The analytic corrections S_ab^whole corresponding to F_B^whole can be easily obtained from S_hemi and S_hemi_b1 (and using I1_hemi for the C-parameter), see also e.g. refs. <cit.>,
S_ab^{whole,τ}({k_j}) = α_s/4π δ(k_J) {(16/μ) ℒ_1(k_B/μ) - (8/ϵ)(1/μ) ℒ_0(k_B/μ) + [4/ϵ^2 - π^2/3] δ(k_B) + 𝒪(ϵ)},
S_ab^{whole,C}({k_j}) = α_s/4π δ(k_J) {(16/μ) ℒ_1(k_B/μ) - (8/ϵ)(1/μ) ℒ_0(k_B/μ) + [4/ϵ^2 - π^2] δ(k_B) + 𝒪(ϵ)},
S_ab^{whole,p_T}({k_j}) = α_s/4π δ(k_J) {(16/μ) ℒ_1(k_B/μ) - (16/μ) ℒ_0(k_B/μ) [1/η + ln(ν/μ)] + δ(k_B) [8/η - 4/ϵ^2 + (8/ϵ) ln(ν/μ) + π^2/3] + 𝒪(η,ϵ)}.
The remaining correction S_ab^J due to the angularity measurement in the jet region is of 𝒪(R^2), i.e.
the jet area, and is given by
S_ab^J({k_j},{d_j},η_J) = α_s/π { I^J_{0,ab}({d_j},η_J) [δ(k_J) (1/μ) ℒ_0(k_B/μ) - (1/μ) ℒ_0(k_J/μ) δ(k_B)] + I^J_{1,ab}({d_j},{f_j},η_J) δ(k_B) δ(k_J) } ,
I^J_{0,ab}({d_j},η_J) = (1/π) ∫_{-π}^{π} dϕ ∫_{-∞}^{∞} dη θ(d_B(η) - d_J(η,ϕ)) ,
I^J_{1,ab}({d_j},{f_j},η_J) = (1/π) ∫_{-π}^{π} dϕ ∫_{-∞}^{∞} dη ln( f_J(η,ϕ)/f_B(η) ) θ(d_B(η) - d_J(η,ϕ)) .
I^J_{0,ab} corresponds just to the jet area in the η-ϕ plane and is identical to R^2 for the conical and the geometric-R measures, while for the conical geometric measure there are deviations of 𝒪(R^6). In order to compute the integrals for the beam-jet dipoles, one can follow the hemisphere decomposition as presented in GenHemiDecomp, which yields numerical corrections of 𝒪(1) and logarithmically enhanced terms for small R. However, we will present here a more efficient adaptation of this decomposition, exploiting the fact that for the measurements considered in this section the soft function can be computed analytically in an expansion in the jet radius R. As already discussed in ref. <cit.> this provides a fairly good approximation for not too large values of R. In the following we will compute numerically only deviations from these results, such that the numerical integrals will scale with powers of R, thus avoiding large cancellations for R ≪ 1.[We have checked that the numerical results from the two alternative decompositions agree.] First, we can choose in deltaF the parameter ρ_J such that for R ≪ 1 it yields a conical shape for the jet region with an active area π R^2. In this limit all distance measures considered here lead to the same partitioning as shown in jetregions, with deviations being suppressed by R. Using g_m, the associated condition for the parameter ρ_J reads for the aJ-dipole (with ρ_a = 1)
∫_{-π}^{π} dϕ ∫_{-∞}^{∞} dη θ[ρ_J e^{-η} - ℛ_iJ^2/(2coshη_J)] = π R^2 .
Expanding the phase-space constraint in the small-R limit gives an analytic relation for ρ_J,
ρ_J(R) = ρ_J^R [1 + 𝒪(R)] with ρ_J^R = R^2 (1+tanhη_J)/2.
The soft function corrections due to the measurement of angularities in the jet hemisphere can be computed analytically. If the corrections due to the measurement of the beam-region observable in the beam hemisphere can also be computed analytically, all remaining numerical corrections will automatically be small for R ≪ 1. This is the case for the transverse energy veto, where S_hemi_b1 provides an exact hemisphere result for arbitrary ρ. However, for a general veto (including beam thrust and C-parameter) we have not obtained an analytic hemisphere result. To avoid large numeric corrections from the term ΔF_{a<J} in deltaF, we can instead decompose the hemisphere measurement function F_{a<J} into a piece without constraints due to a jet region and its measurement, calculated analytically in ref. <cit.>, and a subtraction term in the jet hemisphere (with the measurement of the beam-region observable), which can be computed in a series expansion in R.
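A quick Monte-Carlo check of this area condition (a sketch; we use the unexpanded hemisphere constraint n_J·p/ρ_J < n_a·p rather than its small-R form, and the statistics parameters are arbitrary choices):

```python
import numpy as np

# Check that rho_J = R^2 (1 + tanh(eta_J)) / 2 yields a jet-hemisphere area of
# pi R^2 up to corrections suppressed by R. In beam coordinates the constraint
# n_J.p / rho_J < n_a.p reads
# (cosh(eta - eta_J) - cos(phi)) / cosh(eta_J) < rho_J * exp(-eta).

def hemisphere_area(rho_J, eta_J, n_samples=4_000_000, half_width=6.0, seed=1):
    rng = np.random.default_rng(seed)
    eta = eta_J + half_width * (2.0 * rng.random(n_samples) - 1.0)
    phi = np.pi * (2.0 * rng.random(n_samples) - 1.0)
    lhs = (np.cosh(eta - eta_J) - np.cos(phi)) / np.cosh(eta_J)
    frac = np.mean(lhs < rho_J * np.exp(-eta))
    return frac * (2.0 * half_width) * (2.0 * np.pi)   # box volume * hit fraction

if __name__ == "__main__":
    eta_J = 1.0
    for R in (0.1, 0.3, 0.6, 1.0):
        rho = R**2 * (1.0 + np.tanh(eta_J)) / 2.0
        # ratio -> 1 + O(R); small R needs more samples for decent statistics
        print(R, hemisphere_area(rho, eta_J) / (np.pi * R**2))
```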
For the correction S_aJ we thus write F asF({k_l},{d_n},η_J, p) = F_a<J({k_l},R,η_J,p) + F_J<a({k_j},R,η_J,p) + ∑_m=J,B F^m_aJ({k_l}, {d_n},η_J,p)= F_B^whole({k_l},p) -F̃_J<a^B({k_l},R,η_J,p)+ F_J<a({k_l},R,η_J,p) + Δ F_J<a^B({k_l},R,η_J,p) + ∑_m=J,B F^m_aJ({k_l}, {d_n},η_J,p) ,where F_B^whole({k_l},η_J,p) =δ(k_B - p_T f_B(η))δ(k_J) ,F̃_J<a^B({k_l},R,η_J,p) = δ(k_B - p_T f̃_B(η-η_J))δ(k_J)θ(n_a · p -n_J · p/ρ_J^R ) , F_J<a({k_l},R,η_J,p) =δ(k_B)δ(k_J - p_T f_J(η,ϕ))θ(n_a · p -n_J · p/ρ_J^R) ,Δ F_J<a^B({k_l},R,η_J,p) =[ δ(k_B - p_T f̃_B(η-η_J)) - δ(k_B - p_T f_B(η)) ] δ(k_J)×θ(n_a · p -n_J · p/ρ_J^R ) , F^B_aJ({k_l}, {d_n},η_J,p) = [δ(k_B - p_T f_B(η)) δ(k_J)- δ(k_B)δ(k_J - p_T f_J(η,ϕ)) ] ×θ(d_J(η,ϕ)-d_B(η))θ(n_a · p-n_J · p/ρ_J^R ) , F^J_aJ({k_l}, {d_n},η_J,p) = [ δ(k_B)δ(k_J - p_T f_J(η,ϕ)) - δ(k_B - p_T f_B(η)) δ(k_J) ] ×θ(d_B(η) - d_J(η,ϕ))θ(n_J · p/ρ_J^R -n_a · p). Here the expanded measurement of the beam region observable in the jet region is denoted by Τ̃_B=p_T f̃_B(η-η_J) withf̃_B(η-η_J)≡ f_B(η_J)e^η_J-η =n_a · p/p_Tf_B(η_J) e^η_J.The corresponding decomposition of the soft function is given byS_aJ({k_l},{d_n},η_J, p)= S_aJ^whole({k_l},η_J,p) -S̃_J<a^B({k_l},η_J)+ S_J<a({k_l},R,η_J) + Δ S_J<a^B({k_l},R,η_J) + ∑_m=J,B S_aJ^m({k_l}, {d_n}, η_J),where each individual term is given by replacing the measurement F({k_l}, {d_n}, p) in SN_color by the corresponding term in aJFdecomp. This decomposition is illustrated in Small_R_Decomp_Rect. We now discuss the different pieces in turn, giving the associated results.The term F_B^whole corresponds to the measurement of the beam observable within the complete phase space without constraints due to the jet region. In the context of pp → L +1 jet this correction was calculated in <cit.> for the measurements in fBchoices and denoted by S_B therein.[For an energy veto at e^+e^- collisions the associated “inclusive" correction to the one-loop soft function has been first computed in <cit.>. For pp → dijets also the correction from the jet-jet dipole can be calculated for a p_T-veto <cit.>.] The bare corrections are given byS^ whole,τ_aJ ({k_l},η_J)= α_s/4 π δ(k_J) { 16 η_Jθ(-η_J)1/μ _0(k_B/μ)+ δ(k_B) [-8 η_J/ θ(-η_J) -4 _2(e^-2|η_J|)-8 η_J^2θ(-η_J) ]+}, S^ whole,C_aJ({k_l},η_J) = α_s/4 π δ(k_J) { 8 ln(1+tanhη_J/2) 1/μ _0(k_B/μ) + δ(k_B)[-4/ln(1+tanhη_J/2)+4 _2(1+tanhη_J/2)+2 ln^2(1-tanhη_J/2) -8 ln^2(2coshη_J)-2π^2/3]+}, S^ whole,p_T_aJ({k_l},η_J) = α_s/4 π δ(k_J){1/μ _0(k_B/μ)[-8/η +4/-8ln(ν e^-η_J/μ)]+ δ(k_B)[4/η -4/^2+4/ln(ν e^-η_J/μ)+π^2/3]+η,}. The measurement of the beam region observable leads to a different divergent behavior for radiation collinear to the jet axis than for the jet measurement. This requires the computation of the analytic piece -F̃_J<a^B (in the jet hemisphere) to correct for this mismatch. For its calculation we employ a measurement Τ̃_B which is linear in the momentum component n_a · p and identical to the beam observable Τ_B in the vicinity of n_J (i.e. for η→η_J), see tilde_fB. In dimensional regularization the associated correction gives just the result for the hemisphere contribution in <cit.> (with an appropriate rescaling factor),S̃^B_J<a ({k_l},R,η_J)= α_s/4 π δ(k_J){8R/μ f_B(η_J) _1 ( k_BR/μf_B(η_J)) -4/ϵ R/μ f_B(η_J) _0 ( k_B R/μ f_B(η_J))+[2/ϵ^2- π^2/6] δ(k_B)}. The term F_J<a corresponds to the measurement of the jet observable in the rescaled jet hemisphere. 
The results for the angularities defined in fJchoices can be obtained analytically from the hemisphere results in S_hemiS_hemi_b1 and a finite correction coming from I1_hemi. The latter accounts for the difference of the boost invariant jet angularity in fJchoices from the generic definition in Tau_hemi and is calculated in anal_soft_pieces. In total we obtainS^β≠ 1_J<a({k_l},R,η_J)= α_s/4π δ(k_B)/β-1{8/μ R^β-1 ℒ_1(k_J/μ R^β-1) -4/1/μ R^β-1 ℒ_0(k_J/μ R^β-1) +δ(k_J) (2/^2-π^2/6-2(β-1)(β-2) θ(R-1)ln^2 R) +𝒪(ϵ)},S^β = 1_J<a({k_l},R,η_J)= α_s/4π δ(k_B) {8/μ ℒ_1(k_J/μ) - 8/μ ℒ_0(k_J/μ) [1/η+ln(ν R/2μcoshη_J)] +δ(k_J) (4/η -2/^2+4/ϵln(ν R/2μcoshη_J)+π^2/6+2 θ(R-1) ln^2 R)+𝒪(η,ϵ)}. The analytic contributionsin the small R limit are given byS_aJ ({k_l},R,η_J)=S^ whole_aJ ({k_l},η_J)+S̃^B_J<a ({k_l},R,η_J) +S_J<a ({k_l},R,η_J) +O(R^1,2)where the displayed terms are O(R^0) corrections and depend only logarithmically on R. They are independent of the specific partitioning (jet definition), and for R ≪ 1 yield the full result up to power corrections.In the context of an effective theory for a small jet radius the soft radiation is factorized into different types of soft modes <cit.>. The measurement F_B^whole applies to wide-angle soft radiation, which does not resolve the jet region but depends on the Wilson line of the jet. The corrections S̃^B_J<a and S_J<a correspond to the results for the matrix elements of “soft-collinear" and “collinear-soft" modes, respectively, in the nomenclature of ref. <cit.>. These are boosted and constrained by the jet boundary. In the limit R ≪ 1the beam-jet dipoles give the same results, S_aJ=S_bJ, and the Wilson lines from the beams a and b fuse giving a total color factor _J · (_a+_b) = -^2_J <cit.>.The measurement corrections Δ F_J<a^B, F^B_aJ and F^J_aJ can be in general not computed analytically, but are again finite corrections that allow for a numerical evaluation. The term Δ F_J<a^B corrects the subtraction in the jet hemisphere from the measurement in the beam region with f̃_B to the correct observable f_B. As in GenHemiDecomp we can write this correction in terms of an integral in η-ϕ coordinates, Δ S^B_J<a ({k_i},R,η_J)= α_s/π Δ I^B_1,aJ(f_B,R,η_J)δ(k_B)δ(k_J) , with Δ I^B_1,aJ(f_B,R,η_J) = 1/2π∫_-π^πϕ∫_-∞^∞η e^η - η_J/cosh(η-η_J) -cosϕ ln( e^η_J f_B(η_J)/e^η f_B(η))θ(n_a · p - n_J · p/ρ_J^R)= θ(R-1)[∫^R-1_0 x h_1(f_B,η_J,x)+ ∫^R+1_R-1 xh_2(f_B,R,η_J,x)]+ θ(1-R)∫^R+1_1-R xh_2(f_B,R,η_J,x) ,where we have defined the integration variable x ≡ e^η-η_J and h_1(f_B,η_J,x) = 2x/x^2-1 ln( f_B(η_J)/x f_B(η_J + ln x)) , h_2(f_B,R,η_J,x) = [1-2/πarctan(|x-1|/x+1√((1+x)^2-R^2/R^2-(x-1)^2)) ] h_1(f_B,η_J,x).This correction depends also only on the specific shape of the hemisphere for a given value of R, but not on the general partitioning. Since the full integrand does not exhibit singular behavior close to the jet axis (i.e. for η→η_J and ϕ→ 0), it scales with the jet area for a smooth measurement in the beam region, i.e. Δ I^B_1,aJ is 𝒪(R^2).[We have checked numerically that for the transverse momentum veto with f_B(η)=1 the integral Δ I^B_1,aJ vanishes for R≤ 1 and gives -4 ln^2 R for R>1 as implied by the full analytic hemisphere result in S_hemi_b1.]The terms F^B_aJ and F^J_aJ correct for the difference between the actual jet definition (through the partitioning) and the employed jet hemisphere with scaling parameter ρ_J^R. Their contribution to the soft function directly corresponds to S_lm^n. 
S^B_aJ is given by S^B_aJ({k_l},{d_n},η_J)= α_s/π{ I^B_0,aJ({d_n},η_J) [δ(k_B)1/μ_0 ( k_J/μ) - 1/μ_0 ( k_B/μ)δ(k_J) ]+I^B_1,aJ({d_n},{f_n},η_J) δ(k_B)δ(k_J) } , where the relevant integrals depend now on the specific distance measures and are given by I^B_0,aJ({d_n},η_J) = 1/2π∫_-π^πϕ∫_-∞^∞η e^η - η_J/cosh(η-η_J) -cosϕ ×θ(d_J(η,ϕ) - d_B(η)) θ(R^2 e^η_J-η-2cosh(η-η_J) +2cosϕ), I^B_1,aJ({d_n},{f_n},η_J)= 1/2π∫_-π^πϕ∫_-∞^∞η e^η - η_J/cosh(η-η_J) -cosϕ ln( f_B(η)/f_J(η,ϕ)) ×θ(d_J(η,ϕ) - d_B(η)) θ(R^2 e^η_J-η-2cosh(η-η_J) +2cosϕ) . In analogy, S^J_aJ is given by S^J_aJ({k_l},{d_n},η_J)= α_s/π{ I^J_0,aJ({d_n},η_J) [δ(k_J)1/μ_0 ( k_B/μ) - 1/μ_0 ( k_J/μ)δ(k_B) ]+I^J_1,aJ({d_n},{f_n},η_J) δ(k_B)δ(k_J) }, withI^J_0,aJ({d_n},η_J) = 1/2π∫_-π^πϕ∫_-∞^∞η e^η - η_J/cosh(η-η_J) -cosϕ ×θ(d_B(η,ϕ) - d_J(η)) θ(2cosh(η-η_J) -2cosϕ-R^2 e^η_J-η), I^J_1,aJ({d_n},{f_n},η_J)= 1/2π∫_-π^πϕ∫_-∞^∞η e^η - η_J/cosh(η-η_J) -cosϕ ln( f_J(η,ϕ)/f_B(η,ϕ)) ×θ(d_B(η,ϕ) - d_J(η)) θ(2cosh(η-η_J) -2cosϕ-R^2 e^η_J-η) .These integrals scale individually as 𝒪(R), but yield in total 𝒪(R^2) contributions, as explained in scalingR.[This holds only for a smooth measurement in the beam region. For the beam thrust veto and |η_J|<R the resulting total correction is of 𝒪(R) due to the kink at η=0.] We will discuss in num_evaluation how the numerical evaluation of these integrals can be carried out efficiently by explicitly determining the integration domains. While a full analytic calculation of these does not seem feasible in general, it is possible to compute them in an expansion for R ≪ R_0 (where R_0 denotes the generic convergence radius where the expansion breaks down). We calculate the terms at 𝒪(R^2) in R_expansion. Such an expansion has been also applied in <cit.> for the inclusive jet mass spectrum where it was found that 𝒪(R^4) corrections have a negligible impact for phenomenologically relevant values of R. §.§ Summary of corrections To give a transparent overview of all corrections we display in the following the structure of the full (renormalized) soft functions for all combinations β≠ 1, β=1 and γ=1,2. Since S_hemiS_hemi_b1 encode the full μ- and ν-dependence of the soft function, one can directly read off the counterterms for the soft function absorbing all 1/ϵ- and 1/η-divergences. These result in the well-known one-loop anomalous dimensions for the associated soft function defined by μ/μ S^κ_1 ({k_i},{d_i}, η_J,μ,ν)= ∫ k_B' k_J'γ^κ_S_1 ({k_i-k_i'},η_J,μ,ν)S^κ_1 ({k_i'},{d_i}, η_J,μ,ν) ,ν/ν S^κ_1 ({k_i},{d_i}, η_J,μ,ν) =∫ k_B' k_J'γ^κ_S_1,ν ({k_i-k_i'},μ)S^κ_1 ({k_i'},{d_i}, η_J,μ,ν) .The ν-anomalous dimension is only present for β=1 or γ=1. The explicit one-loop expressions for all cases readγ^κ(1)_S_1,β≠ 1, γ =2 ({k_i},η_J,μ)=α_s(μ)/4π2Γ_0{_J^21/β-1 1/μ _0(k_J/μ) δ (k_B)+ (_a^2+ _b^2)1/μ _0(k_B/μ) δ(k_J)+(_a^2-_b^2) η_J δ(k_J) δ(k_B) },γ^κ(1)_S_1,β≠ 1, γ =1 ({k_i},η_J,μ,ν)= α_s(μ)/4π2Γ_0{_J^2 1/β-1 1/μ _0(k_J/μ)δ (k_B)+[-(_a^2+_b^2)ln(ν/μ)+(_a^2-_b^2) η_J] δ(k_J) δ(k_B) },γ^κ(1)_S_1,β =1, γ =2({k_i},η_J,μ,ν) = α_s(μ)/4π2Γ_0{(_a^2+ _b^2)1/μ _0(k_B/μ) δ(k_J) +[-_J^2ln(ν/2μcoshη_J)+(_a^2-_b^2) η_J] δ(k_J) δ(k_B)},γ^κ(1)_S_1,β =1, γ =1 ({k_i},η_J,μ,ν) = α_s(μ)/4π2Γ_0 δ(k_J) δ(k_B){-(_a^2+_b^2+_J^2)ln(ν/μ) +_J^2ln(2 coshη_J)+(_a^2-_b^2) η_J },for the μ-anomalous dimensions with Γ_0 =4 being the coefficient of the one-loop cusp anomalous dimension. 
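The integrals above lend themselves to a simple Monte-Carlo estimate over the η-ϕ plane; the sketch below transcribes the theta functions for the conical (anti-k_T) partitioning, for which d_J - d_B changes sign on the circle ℛ_iJ = R. The integrands are bounded in the mismatch regions, so no special treatment of the jet axis is needed. The beam-thrust veto and the β=2 angularity used as defaults are illustrative choices.

```python
import numpy as np

# Monte-Carlo estimate of the boundary-mismatch integrals I^B_{0,aJ}, I^B_{1,aJ},
# I^J_{0,aJ}, I^J_{1,aJ} for the conical (anti-kT) partitioning. The jet
# hemisphere is R^2 e^(eta_J - eta) - 2 cosh(eta - eta_J) + 2 cos(phi) > 0 and
# the conical jet region is (eta - eta_J)^2 + phi^2 < R^2 (phi_J = 0).

def mismatch_integrals(R, eta_J, beta=2.0,
                       f_B=lambda eta: np.exp(-np.abs(eta)),
                       n_samples=8_000_000, half_width=6.0, seed=2):
    rng = np.random.default_rng(seed)
    eta = eta_J + half_width * (2.0 * rng.random(n_samples) - 1.0)
    phi = np.pi * (2.0 * rng.random(n_samples) - 1.0)
    dR2 = (eta - eta_J) ** 2 + phi ** 2
    hemi = R**2 * np.exp(eta_J - eta) - 2.0 * np.cosh(eta - eta_J) + 2.0 * np.cos(phi)
    in_B = (dR2 > R**2) & (hemi > 0.0)   # beam region, inside the jet hemisphere
    in_J = (dR2 < R**2) & (hemi < 0.0)   # jet region, outside the jet hemisphere
    # box volume divided by the overall 1/(2 pi) prefactor and the sample size
    norm = (2.0 * half_width) * (2.0 * np.pi) / (2.0 * np.pi) / n_samples
    results = {}
    for tag, mask in (("B", in_B), ("J", in_J)):
        w = np.exp(eta[mask] - eta_J) / (np.cosh(eta[mask] - eta_J) - np.cos(phi[mask]))
        # ln(f_B/f_J) with f_J = (DeltaR)^beta; sign flipped for the J region
        log_ratio = np.log(f_B(eta[mask])) - (beta / 2.0) * np.log(dR2[mask])
        sign = 1.0 if tag == "B" else -1.0
        results[f"I0_{tag}"] = norm * np.sum(w)
        results[f"I1_{tag}"] = norm * np.sum(w * sign * log_ratio)
    return results

print(mismatch_integrals(R=1.0, eta_J=0.0))
```

The same sampling can be reused for ΔI^B_{1,aJ} by replacing the logarithm with ln(e^{η_J} f_B(η_J)/(e^η f_B(η))) and the mask with the full jet hemisphere, hemi > 0.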
The ν-anomalous dimensions are given byγ^κ(1)_S_1,ν,β≠ 1, γ =1 ({k_i},μ)= α_s(μ)/4π2Γ_0 (_a^2+_b^2) 1/μ _0(k_B/μ) δ(k_J) , γ^κ(1)_S_1,ν,β = 1, γ =2 ({k_i},μ)= α_s(μ)/4π2Γ_0_J^2 1/μ _0(k_J/μ) δ(k_B) ,γ^κ(1)_S_1,ν,β = 1, γ =1 ({k_i},μ)= α_s(μ)/4π2Γ_0{(_a^2+_b^2) 1/μ _0(k_B/μ) δ(k_J)+_J^2 1/μ _0(k_J/μ) δ(k_B) }.For β≠ 1 and γ =2, i.e. jet and beams, the renormalized result for the one-loop soft function readsS^κ(1)_1,β≠ 1,γ=2 ({k_i},{d_i}, η_J,μ) = α_s(μ)/4π{_a ·_b[16/μ _1(k_B/μ)(k_J) + s_ab,B({d_i},η_J) (1/μ_0(k_B/μ) δ(k_J) - 1/μ_0(k_J/μ)δ(k_B)) +s_ab,δ({d_i},{f_i},η_J) δ(k_B)(k_J)] + _a ·_J[ 1/β-1 8/μ _1(k_J/μ)δ(k_B)+ 8/μ _1(k_B/μ)δ(k_J)+ s_aJ,B({d_i},η_J)1/μ _0(k_B/μ)δ(k_J) + s_aJ,J({d_i},η_J) 1/μ _0(k_J/μ)δ(k_B) + s_aJ,δ({d_i},{f_i},η_J)δ(k_J)δ(k_B)] + _b ·_J[η_J↔ -η_J] }, For β≠ 1 and γ =1, i.e. a jet and beams, the result readsS^κ(1)_1,β≠ 1,γ=1 ({k_i},{d_i}, η_J,μ,ν) = α_s(μ)/4π{_a ·_b[16/μ _1(k_B/μ)δ(k_J)-16/μ _0(k_B/μ) ln(ν/μ)(k_J) + s_ab,B({d_i},η_J) (1/μ_0(k_B/μ) δ(k_J) - 1/μ_0(k_J/μ)δ(k_B)) +s_ab,δ({d_i},{f_i},η_J) δ(k_B)(k_J)] + _a ·_J [1/β-1 8/μ _1(k_J/μ)δ(k_B) + 8/μ _1(k_B/μ)δ(k_J)-8/μ ℒ_0(k_B/μ) ln(ν/μ)δ(k_J) + s_aJ,B({d_i},η_J) 1/μ _0(k_B/μ)δ(k_J)+ s_aJ,J({d_i},η_J) 1/μ _0(k_J/μ)δ(k_B) + s_aJ,δ({d_i},{f_i},η_J)δ(k_J)δ(k_B)]+ _b ·_J[η_J↔ -η_J] },For β = 1 and γ =2, i.e. a jet and beams, the result reads S^κ(1)_1,β =1,γ=2 ({k_i},{d_i}, η_J,μ,ν) = α_s(μ)/4π{_a ·_b[16/μ _1(k_B/μ)(k_J) + s_ab,B({d_i},η_J) (1/μ_0(k_B/μ) δ(k_J) - 1/μ_0(k_J/μ)δ(k_B)) +s_ab,δ({d_i},{f_i},η_J) δ(k_B)(k_J)] + _a ·_J[8/μ ℒ_1(k_J/μ)δ(k_B) - 8/μ ℒ_0(k_J/μ) ln(ν/2μcoshη_J)δ(k_B) + 8/μ _1(k_B/μ)δ(k_J)+s_aJ,B({d_i},η_J) 1/μ _0(k_B/μ)δ(k_J) +s_aJ,J({d_i},η_J) 1/μ _0(k_J/μ)δ(k_B) + s_aJ,δ({d_i},{f_i},η_J)δ(k_J)δ(k_B)] + _b ·_J[η_J↔ -η_J] }, For β = 1 and γ =1, i.e. jet and beams, the result readsS^κ(1)_1,β = 1,γ=1 ({k_i},{d_i}, η_J,μ,ν) = α_s(μ)/4π{_a ·_b[16/μ _1(k_B/μ)δ(k_J)-16/μ _0(k_B/μ) ln(ν/μ)(k_J) + s_ab,B({d_i},η_J) (1/μ_0(k_B/μ) δ(k_J) - 1/μ_0(k_J/μ)δ(k_B)) +s_ab,δ({d_i},{f_i},η_J) δ(k_B)(k_J)] + _a ·_J[8/μ ℒ_1(k_J/μ)δ(k_B) - 8/μ ℒ_0(k_J/μ) ln(ν/2μcoshη_J)δ(k_B)+ 8/μ _1(k_B/μ)δ(k_J)-8/μ ℒ_0(k_B/μ) ln(ν/μ) δ(k_J)+ s_aJ,B({d_i},η_J) 1/μ _0(k_B/μ)δ(k_J) + s_aJ,J({d_i},η_J) 1/μ _0(k_J/μ)δ(k_B) + s_aJ,δ({d_i},{f_i},η_J)δ(k_J)δ(k_B)] + _b ·_J[η_J↔ -η_J] },Using the analytic results in eqs. (<ref>), (<ref>), (<ref>) and (<ref>) the coefficients of the distributions are given by s_ab,B ({d_i},η_J)= 4I^J_0,ab ({d_i},η_J) ≃ 4R^2, s_ab,δ ({d_i},f^τ_B,f_J^β,η_J)=-π^2/3 + 4 I^J_1,ab ({d_i},f^τ_B,f_J^β,η_J), s_ab,δ({d_i},f^C_B,f_J^β,η_J)= -π^2 +4 I^J_1,ab ({d_i},f^C_B,f_J^β,η_J), s_ab,δ ({d_i},f^p_T_B,f_J^β,η_J) = π^2/3 +4 I^J_1,ab ({d_i},f_B^p_T,f_J^β,η_J), s_aJ,B({d_i},η_J)= 8 (η_J+ ln R) - 4 I^B_0,aJ({d_i},η_J) +4 I^J_0,aJ({d_i},η_J) , s_aJ,J({d_i},η_J)= -8ln R+4 I^B_0,aJ({d_i},η_J) - 4I^J_0,aJ({d_i},η_J), s_aJ,δ ({d_i},f^τ_B,f_J^β,η_J) = -4 _2(e^-2|η_J|)+4 η_J^2 [θ(η_J) - θ(-η_J)] +2ln^2 R[2β-(β-2)θ(R-1)]+ 8 |η_J| ln R- π^2/6 β/β-1 δ_β≠ 1+4Δ I_1,aJ^B(f^τ_B,R,η_J)+4 ∑_m=B,JI^m_1,aJ({d_i},f^τ_B,f_J^β,η_J), s_aJ,δ ({d_i},f^C_B,f_J^β,η_J) = 4_2(1+tanhη_J/2) - 2 ln^2(1+tanhη_J/2) +4η_J^2 + 8 ln Rln (2coshη_J)+2ln^2 R[2β-(β-2)θ(R-1)] - π^2/6[4+β/β-1 δ_β≠ 1]+4Δ I_1,aJ^B(f^C_B,R,η_J)+4∑_m=B,J I^m_1,aJ({d_i},f^C_B,f_J^β,η_J), s_aJ,δ ({d_i},f^p_T_B,f_J^β,η_J) =2ln^2 R[2β-(β-2)θ(R-1)] +π^2/6[2-β/β-1 δ_β≠ 1]+4∑_m=B,J I^m_1,aJ({d_i},f^p_T_B,f_J^β,η_J),where δ_β≠1=1 for β≠ 1 and zero otherwise. 
The numerical integrals I^J_{0,ab} and I^J_{1,ab} are defined in abnum, I^B_{0,aJ} and I^B_{1,aJ} are defined in Iedge, I^J_{0,aJ} and I^J_{1,aJ} are defined in Iedge2, and ΔI^B_{1,aJ}(f_B,R,η_J) is given in I1diff. As one can see from soft_coeffs, the soft function contains Sudakov double logarithms of R and of e^{η_J}, which deteriorate the perturbative expansion of the soft function for a small jet radius and for forward jets, and may require an all-order resummation. This can be achieved by an additional factorization of the soft function in the framework of SCET_+ theories, as discussed e.g. in refs. <cit.>.

§.§ Full numerical results

We now compare the contributions to the soft function, shown through plots of the various coefficients s_ab, s_aJ of the distributions defined in soft_coeffs. Our main focus is on the jet mass measurement (β=2), but we also show a few results for a jet angularity measurement with β=1 in S_beta1. We consider the various partitionings described in jetregions and the beam-region observables in fBchoices. The contributions from the beam-beam dipole s_ab,δ are shown in Sab12 for η_J=0 and |η_J|=1 as a function of R, and in Sab1_eta for R=1 as a function of η_J. The results deviate from the 𝒪(R^0) result away from R=0, in particular also for the phenomenologically relevant values R ∼ 0.5. However, including the 𝒪(R^2) corrections, the analytic contributions agree very well with the exact results for central rapidities, even for values as large as R ∼ 1. These 𝒪(R^2) corrections are the same for all distance measures, which explains why they behave very similarly, and they are enhanced by logarithms of the jet radius, as can be seen from sab_delta_R2 and saj_delta_R. For the transverse momentum beam measurement with a conical anti-k_T jet (red curves in the right panels of Sab12 and Sab1_eta), there are in fact no higher-order R corrections beyond 𝒪(R^2) for s_ab,δ. Otherwise, the next corrections are 𝒪(R^4), except for the beam thrust case with |η_J| ≲ R, where they are 𝒪(R^3) due to the kink at η=0. This explains the larger deviation between the analytic 𝒪(R^2) beam thrust result and the exact result for η_J=0, as seen in the top-left panel of Sab12. At large jet rapidities there are sizable differences between the geometric-R measures and the conical (and conical geometric) measure, which is due to the different jet shapes illustrated in jetregions. Results for the beam-jet dipole coefficients s_aJ,B and s_aJ,J are shown in SaJ0; these coefficients are independent of the measurements in the beam and jet regions. For central rapidities both coefficients differ very little between the different distance measures. Away from η_J=0 there are noticeable differences between the geometric-R, modified geometric-R, and conical (anti-k_T and XCone) measures, as can be seen in the right panel of SaJ0. In SaJ12 we plot s_aJ,δ for η_J = -1, 0, 1 as a function of R, and in SaJ1_eta for R=1 as a function of η_J. Once again results are shown for the beam-thrust, C-parameter, and p_T-measurements and β=2. Compared to the beam-beam dipole, the coefficients are no longer symmetric under η_J → -η_J. Furthermore, the 𝒪(R^2) corrections are not universal for different partitionings, which can lead to sizable deviations for R ∼ 1, especially for forward jets. This is clearly visible for s_aJ,J, as shown in the right panel of SaJ0, or e.g. for s_aJ,δ with η_J=1, shown in the middle row of SaJ12. The analytic results including 𝒪(R^2) corrections that are shown correspond to the conical partitioning.
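For reference, the 𝒪(R^2) pieces entering these analytic curves can be packaged as below; this is a transcription of the anti-k_T expansions derived in R_expansion (sab_delta_R2 and saj_delta_R), with the veto derivatives obtained by central finite differences (an implementation choice, not from the original). Note that for beam thrust the kink at η=0 invalidates the smooth-veto formula for |η_J| ≲ R, where the additional Δs term of the appendix is needed.

```python
import numpy as np

# O(R^2) pieces of the soft-function coefficients for anti-kT jets, transcribed
# from eqs. (sab_delta_R2) and (saj_delta_R); f_B is the (smooth) veto function.

def _d(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2.0 * h)

def _dd(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def s_ab_delta_R2(R, eta_J, beta, f_B):
    return 2.0 * R**2 * (beta * (2.0 * np.log(R) - 1.0) - 2.0 * np.log(f_B(eta_J)))

def s_aJ_J_R2(R):
    return -R**2          # and s_aJ_B_R2 = +R^2

def s_aJ_delta_R2(R, eta_J, beta, f_B):
    fp, fpp, f0 = _d(f_B, eta_J), _dd(f_B, eta_J), f_B(eta_J)
    return R**2 * (beta * np.log(R) - beta / 2.0 - np.log(f0)
                   - (2.0 * fp + fpp) / f0 + (fp / f0) ** 2)

# example: C-parameter veto, jet mass (beta = 2)
f_C = lambda eta: 1.0 / (2.0 * np.cosh(eta))
print(s_ab_delta_R2(0.5, 1.0, 2.0, f_C), s_aJ_delta_R2(0.5, 1.0, 2.0, f_C))
```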
The difference with respect to the exact result is very small up to values of R ∼ 2 for all measurements in the beam region, suggesting that the effective expansion parameter is R/R_0 with R_0 ≳ 2. For the geometric-R measures the corresponding 𝒪(R^2) corrections (not shown) are also close to the full results for R ≲ 1, but deviate much more strongly for large values of R. In general, the results for anti-k_T and XCone jets are almost identical for isolated jets and reasonable values of the jet radius, as expected from the very similar shapes displayed in jetregions. This will be different when the distance between jets becomes less than 2R, as illustrated in ThreeJetAlgorithms. Furthermore, since the shape of isolated anti-k_T and XCone jets is invariant under boosts along the beam axis, the results for the corresponding soft function coefficients s_ab,B, s_ab,δ, s_aJ,B, s_aJ,J and s_aJ,δ do not depend on the jet rapidity when using the (boost-invariant) p_T-measurement in the beam region. For different values of β the qualitative behavior looks similar. To illustrate this, we display the coefficients s_ab,δ and s_aJ,δ for β=1 and the p_T-measurement in S_beta1. The most noticeable differences between the distance measures are again between the (modified) geometric-R and the conical measures away from central rapidity.

§ CONCLUSIONS

In this paper we worked out a general setup to calculate one-loop soft functions for exclusive N-jet processes at hadron colliders. The method applies to any jet algorithm that satisfies soft-collinear factorization and to generic infrared- and collinear-safe jet measurements and jet vetoes, as long as they reduce to an angularity in the limit where they approach the jet/beam axis. The soft function is calculated using a hemisphere decomposition of the phase space, extending the approach that was used in ref. <cit.> to calculate the N-jettiness soft function. The divergences are extracted analytically, such that numerical computations arise only for the finite terms. We also demonstrated how the method works in practice, providing explicit expressions for single-jet production pp → L + 1 jet for several cases: angularities as jet measurements; beam thrust, C-parameter, and transverse momentum as jet vetoes; and anti-k_T and XCone as jet algorithms. We optimized our method by expanding the finite corrections in the jet radius R, obtaining a fully analytical result in the limit R ≪ 1. It turns out that the remaining (numerical) contributions are rather small, even for relatively large values of R, thus improving the stability. With the soft functions discussed in this paper, one can calculate resummed cross sections at NNLL or NLL′ accuracy for exclusive jet processes at the LHC. The same soft function also enters in jet substructure calculations, see e.g. the 2-jettiness calculation of ref. <cit.>, and the subtraction techniques could prove useful for other jet substructure calculations, as found in ref. <cit.>. P.P. would like to thank Bahman Dehnadi for pointing out some typos in the draft. This work was supported by the German Science Foundation (DFG) through the Emmy-Noether Grant No. TA 867/1-1, and the Collaborative Research Center (SFB) 676 Particles, Strings and the Early Universe, by the Office of Nuclear Physics of the U.S. Department of Energy under the Grant No. DE-SC0011090, Grant No. DE-AC02-05CH11231, Grant No.
DE-AC52-06NA25396, and through the Los Alamos National Lab LDRD Program, by the Simons Foundation through the Investigator grant 327942, by the European Research Council under Grant No. ERC-STG-2015-677323, by the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW), and by a Global MISTI Collaboration Grant from MIT. We also thank the Erwin Schrödinger Institute's program “Challenges and Concepts for Field Theory and Applications in the Era of the LHC Run-2", where portions of this work were completed. § ANALYTIC CONTRIBUTIONS FOR PP → L +1 JET In this appendix we collect some details about the analytic calculation of several soft function corrections for pp → L +1 jet discussed in OneJetCase. We discuss the jet hemisphere correction to the soft function for angularity measurements in hemi_soft, and compute the analytic results for the 𝒪(R^2) terms of the soft function coefficients in soft_coeffs for anti-k_T in R_expansion. §.§ Hemisphere soft function correction We perform the calculation of the jet hemisphere correction for the boost-invariant angularities defined in fJchoices, i.e. S_a<J in SaJ_decomp. It is given by the integralS_J<a({k_l},ρ,η_J) = -2 (μ^2 e^γ_E/4π)^ g^2∫^d p/(2π)^d n_a· n_J/(n_a· p)(n_J· p)2πδ(p^2)θ(p^0)F_J<a({k_m},ρ,η_J,p),with the size of the hemisphere adjusted by the parameter ρ and the measurement given byF_J<a({k_m},ρ,η_J,p) =δ(k_J-p_Tℛ_J^β) θ(n_a· p-n_J· p/ρ) δ(k_B), in analogy to F_onejet. Here ℛ_J≡ℛ_sJ denotes the distance of the soft emission with momentum p^μ with respect to the jet direction in azimuth-rapidity space as defined in DeltaRdef. Let us define the momentum projection p_k along a generic light-like direction n_k and the angular distance between two light-like directions ŝ_ij asp_k ≡ n_k· p , ŝ_ij≡n_i· n_j/2 =1-cosθ_ij/2.For any ij-dipole, the gluon four-momentum can be decomposed asp^μ=p_i/2ŝ_ijn_j^μ+p_j/2ŝ_ijn_i^μ+p_⊥_ij^μ,with the integration measure given by^4-2p=p_⊥_ij^1-2/2ŝ_ijp_ip_j p_⊥_ij Ω_2-2.The boost-invariant jet angularity can be expressed in this basis, by first writing p_Tℛ_J^β=(2p_jcoshη_j)^β/2(p_T)^1-/̱2,and then substituting p_T= p_⊥_ij𝒢(q,ϕ)/q withq=p_j/p_⊥_ij.The function 𝒢(q,ϕ) is given in general by𝒢(q,ϕ)= (ŝ_aj+ŝ_ai/ŝ_ijq^2-2 √(ŝ_ajŝ_ai/ŝ_ij) q cosϕ )^1/2(ŝ_bj+ŝ_bi/ŝ_ijq^2-2 √(ŝ_bjŝ_bi/ŝ_ij) q cos (ϕ-Δϕ_ij) )^1/2.Here ϕ is the azimuthal angle in the two-dimensional ⊥_ij-space, and Δϕ_ij is the difference in azimuth (with respect to the beam axis) between the dipole directions i and j. Thus the jet angularity can be written asp_Tℛ_J^β=p_⊥_ijq^-̱1 [𝒢(q,ϕ)]^1-/̱2 (2coshη_J)^/̱2.Let us specialize to the case i=a and j=J. The hemisphere phase space is given byHemisphere J<a:θ(q_0-q), q_0=√(ρ ŝ_aJ),with ŝ_aJ = e^-η_J/(2coshη_J). For the case >̱1, dimensional regularization regulates all the divergences. Using the basis of basis, after the trivial integrations and changing variable from p_J to q, Sij readsS_J<a ({k_l},ρ,η_J) =-2g^2/(2π)^3-2(μ^2 e^γ_E/4π)^ (2coshη_J)^/k_J^1+2 δ(k_B) ×∫Ω_2-2∫_0^q_0 q /q^1-2(-̱1)[𝒢(q,ϕ)]^(2-)̱. Performing the integrals and expanding in ,S_J<a({k_l},ρ,η_J) = α_s/4π δ(k_B)/β-1[8/μ r^β-1 ℒ_1(k_J/μ r^β-1) - 4/1/μ r^β-1 ℒ_0(k_J/μ r^β-1) +δ(k_J) (2/^2-π^2/6-2(β-1)(β-2) ℐ) +𝒪(ϵ)] , where r=(2coshη_Je^-η_Jρ)^1/2 and ℐ=1/π∫_-π^πϕ∫_0^q_0 q/qln[2coshη_J𝒢(q,ϕ)]=θ(r-1)ln^2 r . Setting ρ =ρ_R^J (R,η_J) as defined in leadingrhoRJ yields r=R and thus the result in S_hemi2. 
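The finite integral ℐ can also be checked numerically. For the a–J dipole one has ŝ_aa = 0 and ŝ_ab = 1, so the product in 𝒢 collapses to 𝒢^2 = s_0^2 + q^2 - 2 s_0 q cosϕ with s_0 ≡ √(ŝ_aJ ŝ_bJ) = 1/(2coshη_J), and q_0/s_0 = r; the ϕ integral then reduces to the mean-value formula for ln|1 - z e^{iϕ}|, reproducing ℐ = θ(r-1) ln^2 r. A direct quadrature sketch (our own cross-check, not part of the original calculation):

```python
import numpy as np
from scipy.integrate import dblquad

# Check I = (1/pi) Int dphi Int_0^{q0} dq/q ln[2 cosh(eta_J) G(q,phi)] against
# theta(r-1) ln^2 r, with G^2 = s0^2 + q^2 - 2 s0 q cos(phi), s0 = 1/(2 cosh eta_J)
# and q0 = r * s0 for the a-J dipole.

def I_numeric(r, eta_J):
    s0 = 1.0 / (2.0 * np.cosh(eta_J))
    q0 = r * s0
    def integrand(q, phi):  # integrable log singularity at (q, phi) = (s0, 0)
        G2 = s0**2 + q**2 - 2.0 * s0 * q * np.cos(phi)
        return 0.5 * np.log(G2 / s0**2) / q
    val, _ = dblquad(integrand, -np.pi, np.pi, 0.0, q0)
    return val / np.pi

if __name__ == "__main__":
    for r in (0.5, 0.9, 1.5, 2.5):
        exact = np.log(r) ** 2 if r > 1.0 else 0.0
        print(f"r = {r}: numeric = {I_numeric(r, 1.0):+.5f}, "
              f"theta(r-1) ln^2 r = {exact:+.5f}")
```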
For the case =̱1, one can see from Sij1 that an additional rapidity regulator is needed as q→ 0, which can be chosen to be (ν/2p^0)^η, as discussed below Sijbare_generalN. Following a similar procedure, one obtains the result of S_hemi2_b1.Alternatively, one can get the hemisphere soft function for boost-invariant angularities by adding the finite correction in I1_hemi to S_hemiS_hemi_b1, which correspond to the standard angularities in e^+ e^--collisions defined in Tau_hemi. Using the same variables defined above, one gets 4 I_1,J<a =-2(β-2)/π∫_-π^πϕ∫_0^q_0 q/qln[2coshη_J 𝒢(q,ϕ)/1+e^2η_Jq^2-2e^η_Jqcosϕ] =-2(β-2)[θ(R-1)ln^2R-2 θ(R e^η_J/2coshη_J-1)ln^2 (R e^η_J/2coshη_J)].By adding this correction to S_hemiS_hemi_b1, with c_J=(2coshη_J)^β-1, one recovers again the results in S_hemi2S_hemi2_b1.§.§ Corrections at 𝒪(R^2) Here we outline the analytic calculation of the soft function corrections in soft_coeffs at 𝒪(R^2) in the small jet radius expansion. A similar computation has been performed in ref. <cit.> for a jet mass measurement in dijet processes close to the kinematic threshold. We give the results for a conical (anti-k_T) jet with the measurement of arbitrary jet angularities and general smooth jet vetoes (including in addition the beam thrust case).First, we consider the contributions from the beam-beam dipole. Here the 𝒪(R^2) corrections are the leading contributions that account for the jet region. Since the deviations between the jet boundaries for different partitionings are in addition power suppressed by the jet radius all sets of distance measures discussed in genTau lead to the same result at 𝒪(R^2).The term s_ab,B in soft_coeffs corresponds to the jet area giving s_ab,B =4 R^2.The coefficient s_ab,δ is given by the integral in abnum, which yields at 𝒪(R^2)s_ab,δ = 4/π∫_-∞^∞Δη∫_-π^πϕ (β/2ln[(Δη)^2 + ϕ^2] -ln f_B(η_J)) θ(R^2- (Δη)^2 -ϕ^2) +𝒪(R^4) = 2 R^2 [β(2ln R-1)-2ln f_B(η_J)] +𝒪(R^4).In fact, for conical jets and a transverse momentum veto, i.e. f_B(η)=1, any higher order corrections in R vanish, so that sab_delta_R2 provides already the exact one-loop result for this case.Next, we discuss the contributions from the beam-jet dipole, which in general differ for different partitionings. The corrections for real radiation inside the jet region can be written asS_aJ^(J)= - α_s/2π e^γ_E ϵμ^2ϵ√(π)/Γ(1/2-ϵ) 1/k_J^1+2ϵ [I_aJ, R≪ 1^(J) + (I_aJ^(J) - I_aJ, R≪ 1^(J) )_=Δ I_aJ^(J)] ,where I_aJ, R≪ 1^(J) denotes the leading small-R result at 𝒪(1) and Δ I_aJ^(J) contains all corrections which are suppressed by the jet size. 
The latter term can be expanded in ϵ and is given up to 𝒪(ϵ) byΔ I_aJ^(J) =1/π∫_-∞^∞Δη∫_-π^πϕθ[d_B(η_J+Δη) -d_J(η_J+Δη,ϕ,R)]×[(e^Δη/coshΔη - cosϕ-2/(Δη)^2+ϕ^2) +2ϵ(e^Δη(β/2ln[2coshΔη - 2cosϕ]-lnsinϕ)/coshΔη - cosϕ-βln[(Δη)^2+ ϕ^2] -2 lnϕ/(Δη)^2+ ϕ^2)] .Expanding the integrand in R yields for conical jets Δ I_aJ^(J, k_T)|_𝒪(R^2) = R^2 [1/2+ϵ(7/6-β/2+(β-1)ln R+ ln 2)] .The corrections for real radiation inside the beam region can be similarly written asS_aJ^(B)= - α_s/2π e^γ_E ϵμ^2ϵ√(π)/Γ(1/2-ϵ) 1/k_B^1+2ϵ [I_aJ, R≪ 1^(B) + (I_aJ^(B) - I_aJ, R≪ 1^(B) )_=Δ I_aJ^(B)].Here Δ I_aJ^(B) acts as a subtractive contribution inside the jet region and is given byΔ I_aJ^(B) =-1/π∫_-∞^∞Δη∫_-π^πϕθ[d_B(η_J+Δη) -d_J(η_J+Δη,ϕ,R)]×[(e^Δη/coshΔη - cosϕ-2/(Δη)^2+ϕ^2) +2ϵ(e^Δη(ln f_B(η)-lnsinϕ)/coshΔη - cosϕ-2(lnf_B(η_J) - lnϕ)/(Δη)^2+ ϕ^2)] .Expanding the integrand in R yields for conical jets and a smooth function f_B(η)Δ I_aJ^(B,k_T) =-R^2[1/2+ϵ(7/6- ln R + ln f_B(η_J) + 2 f_B'(η_J) + f_B”(η_J)/f_B(η_J)-(f_B'(η_J) /f_B(η_J))^2+ln2)] .Using eqs. (<ref>), (<ref>), (<ref>) and (<ref>) the soft function coefficients at 𝒪(R^2) for the beam-jet dipole contributions read for anti-k_T jetss^(k_T)_aJ,J|_𝒪(R^2) =- s^(k_T)_aJ,B|_𝒪(R^2)= - R^2 , s^(k_T)_aJ,δ|_𝒪(R^2)=R^2 [ βln R -β/2 -ln f_B(η_J)- 2 f_B'(η_J) + f_B”(η_J)/f_B(η_J)+(f_B'(η_J) /f_B(η_J))^2 ] .Since the beam thrust veto has a kink at η=0, saj_delta_R does not fully determine all power suppressed terms up to 𝒪(R^2) if |η_J|<R. In this case the next-to leading correction is of 𝒪(R) and the additional contribution with respect to saj_delta_R readsΔ s^(k_T)_aJ,δ = θ(1-|x|){16R/π[√(1-x^2) + x (ln(2x)-1) arccosx -x/2Cl_2 (arccos(1 - 2 x^2)) ] + 4R^2/π[π/2(θ(-x) -θ(x)) +3x √(1-x^2)+ arcsin(x)- 2 xxarccosx) ]+𝒪(R^3)},where x ≡η_J/R and the Cl_2(θ) ≡ Im[_2(e^i θ)].Results for jet regions from a different partitioning can be obtained by considering deviations from the circular jet shape in addition. For the conical geometric distance measure in d_ConicalGeometric corresponding to a XCone default jet the results at 𝒪(R^2) are the same as for the conical measure (i.e. for an anti-k_T jet). § NUMERICAL EVALUATION OF SOFT FUNCTION INTEGRATIONS We discuss the numerical evaluation of the boundary mismatch integrals I_aJ^B and I_aJ^J in Iedge for pp → L +1 jet. To compute them efficiently we need to determine the integration bounds. These depend on the relations between the distance measures d_B(p) and d_J(p) and between the projections n_a · p and n_J · p/ρ_J used for the analytic calculation of the hemisphere results. We discuss here the explicit boundaries only for the most important case, the conical (anti-k_T) measure. For the geometric measures (including the conical geometric XCone measure) one can follow a strategy similar to <cit.> using coordinates based on the lightcone projections n_a · p and n_J · p. Furthermore, we also explain why the integrals encoding the corrections to the small R limit give only a moderate numerical impact, even for sizable values of the jet radius. §.§ Integration bounds for the conical measureFor the conical measure the integration boundaries can be most easily obtained in beam coordinates η, ϕ. 
The conditions from the measurement functions in F_onejet read F^B_aJ: R^2< (Δη)^2 + ϕ^2andρ_J e^-η_Jcoshη_J < e^Δη(coshΔη - cosϕ) , F^J_aJ: R^2> (Δη)^2 + ϕ^2andρ_J e^-η_Jcoshη_J > e^Δη(coshΔη - cosϕ) .We use the value ρ_J= ρ_J^R in leadingrhoRJ, which eliminates the dependence on the jet rapidity η_J (in favor of the jet radius R) in the second relation and leads to integrals which are power suppressed in R. (The computation for arbitrary ρ_J can be carried out similarly.) The associated hemisphere mismatch regions are displayed in jetregions_R. For F^J_aJ the integration boundaries read ∫_-∞^∞Δη∫_-π^πϕ θ(R^2-(Δη)^2-ϕ^2) θ(n_J · p/ρ^R_J-n_a · p)= ∫_η_0(R)^η^ max_ hemi(R)Δη∫_ϕ^ max_ hemi(Δη,R)^√(R^2-(Δη)^2)ϕ + ∫_η^ max_ hemi(R)^R Δη∫_0^√(R^2-(Δη)^2)ϕ+(ϕ↔ - ϕ),where we have definedϕ^ max_ hemi(Δη,R) =arccos(e^Δη+(1-R^2)e^-Δη/2),η^ max_ hemi(R) =ln(1+R),and η_0(R) is the solution of the transcendental equation[η_0(R)]^2 +[ϕ^ max_ hemi(η_0(R),R)]^2 = R^2 .For F^B_aJ we get ∫_-∞^∞Δη∫_-π^πϕ θ((Δη)^2+ϕ^2-R^2) θ(n_a · p -n_J · p/ρ^R_J) = θ(R ≤ 1) [∫_η^ min_ hemi(R)^-RΔη∫_0^ϕ^ max_ hemi(Δη,R)ϕ + ∫_-R^η_0(R)Δη∫_√(R^2-(Δη)^2)^ϕ^ max_ hemi(Δη,R)ϕ]+ θ(R_π> R >1)[∫_-∞^η_π(R)Δη∫_0^πϕ + ∫_η_π(R)^-RΔη∫_0^ϕ^ max_ hemi(Δη,R)ϕ +∫_-R^η_0(R)Δη∫_√(R^2-(Δη)^2)^ϕ^ max_ hemi(Δη,R)ϕ]+ θ(R ≥ R_π) [∫_-∞^-RΔη∫_0^πϕ + ∫^η_π(R)_-RΔη∫_√(R^2-(Δη)^2)^πϕ + ∫_η_π(R)^η_0(R)Δη∫_√(R^2-(Δη)^2)^ϕ^ max_ hemi(Δη,R)ϕ]+(ϕ↔ -ϕ),where we have definedη^ min_ hemi(R) =ln(1-R),η_π(R) = ln(R-1),and R_π≈ 1.28 is the solution of the transcendental equationη_π(R_π) = -R_π.With these explicit limits the integrals can be evaluated efficiently.§.§ Power suppression of boundary integrals We have seen in jetregions_R that for a small jet radius the jet region from the hemisphere decomposition with ρ^R_J and the actual conical partitioning largely overlap giving small results for the non-hemisphere corrections. However, for R ∼ 1 the areas in the η-ϕ plane begin to differ very significantly, which might suggest that the associated corrections become very large in this regime and the results for the small R-expansion do not provide a good approximation. As we have seen in num_softs this turns out not to be the case since the deviations of the jet areas in the beam coordinates are not representative for the size of the associated corrections. Instead it is more meaningful to compare the jet areas in the boosted frame where the jet and beam direction are back-to-back and soft radiation from the beam-jet dipole aJ is uniform in the respective rapidity-azimuth coordinates η̃, ϕ̃. The associated transformation rules between the sets of coordinates are explicitly given in ref. <cit.>. In jetregions_boost_R we display the jet regions in these coordinates for the conical measure (red) and for the hemisphere decomposition with ρ_J=ρ_J^R for different values of R. The areas which do not overlap correspond directly to the integrals I_0,aJ^B and I_0,aJ^J, respectively, while I_1,aJ^B and I_1,aJ^J are (logarithmic) moments in these regions. These areindividually of ∼𝒪(R), which can be also confirmed by an analytic expansion indicated by the black, dotted line. In total the contributions from F_aJ^B andF_aJ^J cancel each other at this order leading to a net contribution to the soft function of 𝒪(R^2).[For the corrections s_aJ,B ands_aJ,J in soft_coeffs this is obvious since only the difference between the two mismatch areas in jetregions_boost_R enters. 
For the correction s_aJ,δ this holds for measurements which are continuous functions in η, ϕ due to the fact that at leading order in R the integrands are constant in these areas.]§ ANALYTIC CORRECTIONS FOR PP → DIJETS Beyond single jet production, pp → dijets is another process of phenomenological relevance for measurements like jet mass. The full computation of the associated soft function corrections for arbitrary jet and beam measurements and partitionings can be carried out following the hemisphere decompositions discussed in secs. <ref> and <ref>. Here we compute the analytic corrections for pp → dijets (j_1,j_2) in a small R expansion up to terms at 𝒪(R^2), whereas the full R dependence can be determined numerically but now including a jet-jet dipole. For definiteness and simplicity we consider conical jets with a jet mass measurement (i.e. angularity in defined in fJchoices with β=2) and a p_T jet veto.For generic R<π/2 we can write the renormalized one-loop soft function as[For R<π/2 (i.e. as long as the jet regions do not share a common boundary) the measurements and partitioning are invariant under boosts along the beam axis,such that this correction mainly depends on the relative rapidity of the jets Δη_12 and the jet radius. Since the rapidity regularization breaks boost invariance, there is, however, also a residual dependence on the individual jet rapidities appearing in s_a1,B.]S^κ(1)_2 ({k_i},R,η_1,η_2,μ,ν) = α_s(μ)/4π{_a ·_b[16/μ( _1(k_B/μ) - _0(k_B/μ) ln(ν/μ) ) δ(k_1)δ(k_2) + s_ab,B(R) (2/μ _0(k_B/μ) δ(k_1)δ(k_2)- 1/μ _0(k_1/μ) δ(k_B) δ(k_2) - 1/μ _0(k_2/μ) δ(k_B) δ(k_1)) +s_ab,δ(R) δ(k_B)(k_1) (k_2)] + _1 ·_2 [8/μ _1(k_1/μ)δ(k_2) δ(k_B) +8/μ _1(k_2/μ)δ(k_1) δ(k_B) + s_12,J(R,Δη_12) (1/μ _0(k_1/μ)δ(k_2) +1/μ _0(k_2/μ)δ(k_1) )δ(k_B)+ s_12,B(R,Δη_12) 1/μ _0(k_B/μ)δ(k_1) δ(k_2) + s_12,δ(R,Δη_12)δ(k_1)δ(k_2)δ(k_B)]+ _a ·_1 [8/μ _1(k_1/μ)δ(k_B)δ(k_2) + 8/μ _1(k_B/μ)δ(k_1)δ(k_2)-8/μ ℒ_0(k_B/μ) ln(ν/μ)δ(k_1)δ(k_2) +s_a1,1(R) 1/μ _0(k_1/μ)δ(k_B)δ(k_2) + s_a1,2(R,Δη_12) 1/μ _0(k_2/μ)δ(k_B)δ(k_1)+ s_a1,B(R,η_1,Δη_12) 1/μ _0(k_B/μ)δ(k_1)δ(k_2) + s_a1,δ(R,Δη_12)δ(k_B)δ(k_1)δ(k_2) ]+ _b ·_1[Δη_12→ -Δη_12] + _b ·_2[(k_1,k_2,η_1)→ (k_2,k_1,η_2)]+ _a ·_2[(k_1,k_2,η_1,Δη_12)→ (k_2,k_1,η_2,-Δη_12)] },where Δη_12≡η_1-η_2 is the difference between the rapidities of the two jets and R_1=R_2 ≡ R <π/2. The replacements in the last line are always with respect to the terms with the color factor _a ·_1.The contributions from the beam-beam dipole are equivalent to the case of single production given in soft_coeffs and R_expansion, i.e.s_ab,B(R)=4R^2 +𝒪(R^4) ,s_ab,δ(R)= -π^2/3 +4 R^2 (2 ln R-1) + 𝒪(R^4) . The contributions from the beam-jet dipoles are also closely related to the ones for single production given in soft_coeffs and R_expansion with the difference that starting at 𝒪(R^2) there is now also a correction due to emissions into the phase space region of the second jet, which concerns the coefficients s_a1,2, s_a1,B and s_a1,δ and can be easily computed analytically in analogy to R_expansion. 
We get
s_a1,1(R) = -8 ln R - R^2 + 𝒪(R^4),
s_a1,2(R,Δη_12) = -R^2 e^-Δη_12/cosh^2(Δη_12/2) + 𝒪(R^4),
s_a1,B(R,η,Δη_12) = 8 ln R + 8η + R^2 [1 + e^-Δη_12/cosh^2(Δη_12/2)] + 𝒪(R^4),
s_a1,δ(R,Δη_12) = 4 ln^2 R (2 - θ(R-1)) + R^2 (2 ln R - 1)[1 + e^-Δη_12/cosh^2(Δη_12/2)] + 𝒪(R^4).
We demonstrate in Sa12 that including the terms up to 𝒪(R^2) gives a very good approximation of the full results, even for R ∼ 1. The only remaining ingredient is the correction from the jet-jet dipole. The leading small-R results have been computed in ref. <cit.>, which we have reproduced.[Reference <cit.> considers a p_T-veto with a rapidity cutoff η_cut. For the jet-jet dipole the effect due to η_cut is power suppressed in 1/e^η_cut, while for the other dipole contributions it leads to different results than those given above.] The 𝒪(R^2) corrections can be computed following R_expansion. This gives
s_12,J(R,Δη_12) = -8 ln R - R^2 tanh^2(Δη_12/2) + 𝒪(R^4),
s_12,B(R,Δη_12) = -16 ln(2 cosh(Δη_12/2)) - 2 s_12,J(R,Δη_12),
s_12,δ(R,Δη_12) = 16 ln^2 R - 8 ln^2(2 cosh(Δη_12/2)) + 2(Δη_12)^2 - π^2/3 + R^2 [2(2 ln R - 1) tanh^2(Δη_12/2)] + 𝒪(R^4).
In S12 we compare the full numeric results for these coefficients to the analytic expressions. Again the small-R expansion provides an excellent approximation of the full result for the jet-jet dipole contribution. Together with the findings for the beam-beam and beam-jet dipole corrections, this indicates that keeping terms up to 𝒪(R^2) is likely sufficient for phenomenological purposes. We remark that for jet vetoes which are not boost invariant, all of the dipoles, in particular also the jet-jet dipole, depend on the individual jet rapidities. For multijet processes or an additional recoiling color-singlet state the soft function depends in addition on the separation of the jets in azimuth. The analytic computation for these cases is significantly more involved.
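As a practical illustration of how the explicit limits and coefficients above can be used, the following minimal Python sketch (ours, not part of the paper; it assumes NumPy and SciPy) solves the two transcendental boundary equations by bracketed root finding and tabulates the quoted 𝒪(R^2) jet-jet dipole coefficients. The bracket for η_0(R) uses the fact that the circle and the hemisphere boundary cross between Δη = 0 and η^max_hemi(R) = ln(1+R).

    import numpy as np
    from scipy.optimize import brentq

    def phi_max_hemi(deta, R):
        # arccos((e^deta + (1 - R^2) e^-deta)/2), clipped for numerical safety
        arg = 0.5 * (np.exp(deta) + (1.0 - R**2) * np.exp(-deta))
        return np.arccos(np.clip(arg, -1.0, 1.0))

    def eta0(R):
        # root of eta^2 + phi_max_hemi(eta, R)^2 = R^2 on (0, ln(1+R))
        g = lambda e: e**2 + phi_max_hemi(e, R)**2 - R**2
        return brentq(g, 0.0, np.log1p(R))

    # R_pi solves eta_pi(R_pi) = ln(R_pi - 1) = -R_pi
    R_pi = brentq(lambda R: np.log(R - 1.0) + R, 1.0 + 1e-12, 3.0)
    print(f"R_pi = {R_pi:.4f}")          # ~1.28, as quoted in the text
    print(f"eta0(0.4) = {eta0(0.4):.4f}")

    # O(R^2) jet-jet dipole coefficients from the expansion above
    def s12_J(R, d12):
        return -8.0 * np.log(R) - R**2 * np.tanh(d12 / 2.0)**2

    def s12_B(R, d12):
        return -16.0 * np.log(2.0 * np.cosh(d12 / 2.0)) - 2.0 * s12_J(R, d12)

    def s12_delta(R, d12):
        return (16.0 * np.log(R)**2 - 8.0 * np.log(2.0 * np.cosh(d12 / 2.0))**2
                + 2.0 * d12**2 - np.pi**2 / 3.0
                + 2.0 * R**2 * (2.0 * np.log(R) - 1.0) * np.tanh(d12 / 2.0)**2)

    print(s12_J(0.4, 1.0), s12_B(0.4, 1.0), s12_delta(0.4, 1.0))

The argument values (R = 0.4, Δη_12 = 1.0) are purely illustrative; the same functions can be tabulated over any kinematic range of interest.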
http://arxiv.org/abs/1704.08262v3
{ "authors": [ "Daniele Bertolini", "Daniel Kolodrubetz", "Duff Neill", "Piotr Pietrulewicz", "Iain W. Stewart", "Frank J. Tackmann", "Wouter J. Waalewijn" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170426180010", "title": "Soft Functions for Generic Jet Algorithms and Observables at Hadron Colliders" }
From chiral NN(N) interactions to giant and pygmy resonances via extended RPA
Presented at the Zakopane Conference on Nuclear Physics "Extremes of the Nuclear Landscape", Zakopane, Poland, August 28 – September 4, 2016
Panagiota Papakonstantinou^1, Richard Trippel^2, Robert Roth^2
^1Rare Isotope Science Project, Institute for Basic Science, Daejeon 34047, S. Korea
^2Institut für Kernphysik, T.U. Darmstadt, 64283 Darmstadt, Germany
Received; accepted
======================================================================
The properties of giant and pygmy resonances are calculated starting from chiral two- and three-nucleon interactions. The aim is to assess the predictive power of modern Hamiltonians and especially the role of the three-nucleon force. Methods based on the random-phase approximation (RPA) provide an optimal description of the modes of interest with minimal computational requirements. Here we discuss the giant resonances (GRs) of the ^40,48Ca isotopes and their low-energy dipole response. A comparison with previous results obtained with a transformed Argonne V18 two-nucleon potential points to certain improvements.
21.60.Jz; 24.30.Cz; 21.30.Fe; 13.75.Cs
§ INTRODUCTION
A starting point for nuclear structure theory ideally involves realistic two-plus-three nucleon (NN+NNN) potentials and, most consistently, nuclear Hamiltonians derived from quantum chromodynamics. Starting from these interactions, unitary transformations can be employed, e.g. the Similarity Renormalization Group (SRG), to pre-diagonalize the Hamiltonian and to improve the convergence behavior of many-body methods. This approach has been applied successfully to light and medium-mass nuclei using interactions from chiral effective field theory (χEFT) in the framework of the No-Core Shell Model and in Coupled-Cluster Theory and related methods. In order to reach heavier, computationally more demanding nuclei as well as higher-lying collective excitations, we have been exploring the performance of pre-diagonalized interactions within the Random Phase Approximation (RPA) and extensions thereof, in particular the Second RPA (SRPA). A two-body Hamiltonian based on the Argonne V18 potential was used before in large-scale SRPA calculations <cit.> with promising results for giant resonances (GRs), notwithstanding the insufficient treatment of three-body effects. Since then, NN+NNN χEFT interactions have become available <cit.>. They can be utilized in a two-body formalism, by performing a normal ordering and neglecting the three-nucleon residual interaction, a truncation whose validity has been demonstrated <cit.>. Thanks to the above advances, we are now in a position to study collective phenomena with realistic potentials and with reasonable computational effort. As a linear-response theory, RPA would be the obvious many-body method of choice. There are two main reasons to go beyond first-order RPA to SRPA: the traditional phenomenologist's goal is to describe the resonances' fragmentation due to collisional damping; the many-body theorist's goal, applicable here, is convergence with respect to the model space when the functional or interaction is not fitted at the mean-field and RPA level. This contribution focuses on recent results within RPA and SRPA <cit.> and the relevance of utilizing a realistic three-nucleon interaction.
§ GIANT RESONANCES
In the past we employed the SRPA with the Argonne V18 potential transformed via the unitary correlation operator method (AV18+UCOM) and looked at the giant monopole resonance (GMR), giant dipole resonance (GDR), and giant quadrupole resonance (GQR) <cit.>. Within a 15-shell model space, a very good and almost-converged description of the GDR and GQR was obtained, including some very interesting applications in the observed fragmentation of the GQR <cit.>, but the energy of the GMR was underestimated. Overall, the energetic discrepancies ranged from approximately -10 MeV (GMR) to 0 MeV (GQR). The calculated charge radii were also too small. We attributed the discrepancies to missing NNN effects. One is now in a position to use NN+NNN interactions determined in a systematic way. The interaction used at present is a chiral NN (at N^3LO) plus NNN (at N^2LO) interaction with a cutoff at 400 MeV, evolved within the SRG (χEFT+SRG). The NNN interaction is rewritten in normal-ordered form. The one- and two-body operators are kept, whereas the NNN residual interaction is neglected. Then the convenient two-nucleon formalism can be used. Fig. 1(a),(b) shows results for the GRs of the ^40,48Ca isotopes obtained within a 13-shell model space. Previous results with the AV18+UCOM potential in the same space but within SRPA0 are shown for comparison, as well as experimental data. SRPA0 stands for the diagonal approximation <cit.>, whereby the couplings amongst the 2p2h configurations are neglected. It has been found very good whenever tested against full SRPA. The energetic discrepancies with respect to data observed with the two potentials are of a different quality: when using the AV18+UCOM the energies are underestimated, while with the use of the χEFT+SRG they are overestimated. The latter results could be ameliorated further if we extend the harmonic-oscillator basis. The new results on GRs therefore constitute an improvement with respect to AV18+UCOM. However, the radii are still too small. In particular, the obtained values for the root-mean-square charge radii of ^16O, ^40Ca and ^48Ca are 2.41, 2.98 and 2.6 fm, to be compared with the measured values 2.70, 3.48 and 3.48 fm, respectively. Next we may consider other versions of chiral interactions, for example the SAT family <cit.>, or the new two-body Daejeon16 interaction <cit.>, which promise improved radii.
§ ON THE LOW-ENERGY DIPOLE SPECTRUM
Another interesting benchmark, especially because it is qualitative, is the low-energy isovector (IV) dipole response of the Ca isotopes and the nature of the low-energy isoscalar dipole state (IS-LED) <cit.>. The AV18+UCOM potential yields extremely strong low-energy transitions for ^40,48Ca and predicts a neutron-skin oscillation for ^48Ca. This result qualitatively contradicts observations <cit.>. Simple phenomenological corrections <cit.> could not improve this stiff result. Numerical results for the IS-LED in Ca with the new χEFT+SRG NN+NNN interaction are shown in Fig. 1(c). The new results constitute an improvement of an order of magnitude. Furthermore, the IS-LED is not predicted to be a veritable neutron-skin oscillation in ^48Ca. Whether the properties of a realistic NNN force are responsible for this outcome will have to be investigated using different NN(+NNN) potentials and different test nuclei.
§ PROSPECTS
The correct description of giant and pygmy resonances is a potential benchmark for chiral and other modern nuclear interactions. Results with a chiral NN+NNN potential are promising.
Next we shall examine the SAT family of chiral interactions and the two-nucleon Daejeon16 interaction. In the spirit of ab initio nuclear structure, we aim for a theoretical description of nuclear linear response based on chiral and, in general, microscopic interactions, for more unbiased results and predictive power.
Acknowledgements: PP's work is supported by the Rare Isotope Science Project of the Institute for Basic Science funded by the Ministry of Science, ICT and Future Planning and the National Research Foundation (NRF) of Korea (2013M7A1A1075764). RT's and RR's work is supported by the DFG through the SFB 1245 and the BMBF through contract 05P15RDFN1.
PaR2009 P. Papakonstantinou, R. Roth, Phys. Lett. B 671, 356 (2009).
MaE2011 R. Machleidt, D.R. Entem, Physics Reports 503, 1 (2011).
Rot2012 R. Roth, S. Binder, K. Vobig, A. Calci, J. Langhammer, P. Navrátil, Phys. Rev. Lett. 109, 052501 (2012).
TPR2016XX R. Trippel, P. Papakonstantinou, R. Roth (in preparation).
Usm2011 I. Usman, et al., Phys. Lett. B 698, 191 (2011).
Usm2016 I. T. Usman, et al., Phys. Rev. C 94, 024308 (2016).
Eks2015 A. Ekström, et al., Phys. Rev. C 91, 051301 (2015).
Shi2016 A.M. Shirokov, I.J. Shin, Y. Kim, M. Sosonkina, P. Maris, J.P. Vary, Phys. Lett. B 761, 87 (2016).
PHP2012 P. Papakonstantinou, H. Hergert, V.Yu. Ponomarev, R. Roth, Phys. Lett. B 709, 270 (2012).
Der2014 V. Derya, et al., Phys. Lett. B 730, 288 (2014).
GPR2014 A. Günther, P. Papakonstantinou, R. Roth, Journal of Physics G: Nuclear and Particle Physics 41, 115107 (2014).
http://arxiv.org/abs/1704.08429v1
{ "authors": [ "P. Papakonstantinou", "R. Trippel", "R. Roth" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170427042248", "title": "From chiral NN(N) interactions to giant and pygmy resonances via extended RPA" }
Center for Theoretical Physics and Department of Physics, Columbia University, New York, NY, 10027, USA Center for Theoretical Physics and Department of Physics, Columbia University, New York, NY, 10027, USA Center for Theoretical Physics and Department of Physics, Columbia University, New York, NY, 10027, USA
We apply a recently developed effective string theory for vortex lines to the case of two-dimensional trapped superfluids. We do not assume a perturbative microscopic description for the superfluid, but only a gradient expansion for the long-distance hydrodynamical description and for the trapping potential. For any regular trapping potential, we compute the spatial dependence of the superfluid density and the orbital frequency and trajectory of an off-center vortex. Our results are fully relativistic, and in the non-relativistic limit reduce to known results based on the Gross-Pitaevskii model. In our formalism, the leading effect in the non-relativistic limit arises from two simple Feynman diagrams in which the vortex interacts with the trapping potential through the exchange of hydrodynamical modes.
Vortex precession in trapped superfluids from effective field theory
Angelo Esposito, Rafael Krichevsky, Alberto Nicolis
====================================================================
§ INTRODUCTION
Vortex lines in superfluids are topological string-like objects with quantized circulation and a microscopic thickness given by the superfluid healing length. They are the only degrees of freedom that can carry vorticity, and the velocity field far from their core is irrotational but non-trivial—for a textbook treatment see <cit.>. Indirect evidence for the presence of these objects was obtained over half a century ago in superfluid helium <cit.>, and was later followed by direct observations <cit.>. In this paper we focus on the peculiar behavior of an off-axis vortex in a non-rotating, two-dimensional trapped superfluid. The vortex is observed to orbit the center of the trapped particle cloud. If the cloud is circular, so is the orbit. If the cloud is elliptical, so is the orbit, with the same aspect ratio, as in Fig. <ref>. The first observation of this phenomenon (known as precession) was achieved in <cit.> using a nearly spherical condensate of ^87Rb containing a superposition of two internal components. More recently, the authors of <cit.> performed a careful analysis of the motion and precession frequency of a vortex, using a superposition of the two lowest states of ^6Li confined in an elliptical cloud. See also <cit.> for a similar study. There has already been extensive theoretical study of precession <cit.> in the regime where the Gross-Pitaevskii equation holds, and the solution is often treated numerically. Here we show that the same problem can be successfully tackled using the effective field theory (EFT) methods first introduced in <cit.> and recently generalized and developed in <cit.> (see also <cit.>). The main idea behind EFT is to focus directly on the long distance/low frequency degrees of freedom of a given system: one parametrizes their dynamics in terms of the most general effective action or Hamiltonian compatible with the symmetries, organized as a power expansion in energy and momentum, that is, time derivatives and spatial gradients.
One of the advantages of this approach is that the validity of perturbation theory relies only on the long distance/low frequency degrees of freedom being weakly coupled, regardless of how strongly coupled the microscopic constituents of the system are. Another advantage is that one is able to derive universal predictions that follow purely from the symmetries and are insensitive to the system's microscopic physics, which might be unknown or strongly coupled, and hence intractable. In our particular case, we will write our EFT for a generic relativistic superfluid, because we find it easier to impose the relevant spacetime symmetries in the relativistic case (see also a related discussion in <cit.>). However, taking the non-relativistic (NR) limit will be a simple matter of neglecting certain terms in the action explicitly suppressed by powers of c (speed of light), and in that limit our derivation of the precession effect will simplify. Of course, this approach also allows us to compute the relativistic corrections to the non-relativistic result, which in principle could be important for, e.g., neutron star physics. In our EFT language, the phenomenon of vortex precession is entirely due to long-distance/low-energy physics and is insensitive to the microscopic physics of the superfluid or vortex core. The vortex interacts with the superfluid hydrodynamical modes, which in turn interact with the trapping potential. This leads to an indirect interaction between the vortex and the trapping potential. The symmetries of the system are so powerful that they constrain the structure of this interaction up to a few free macroscopic parameters, which can be measured experimentally.
Conventions: Throughout the paper we will set ħ=1. We will start with c=1 as well, but we will later reinstate explicit factors of c in order to make the non-relativistic limit straightforward and the comparison with data easier. We will adopt a metric signature η_μν = diag(-,+,+,+). The indices μ,ν,… run over all space-time coordinates, i,j,… run over spatial coordinates only, and α, β, … run over worldsheet coordinates.
§ THE UNTRAPPED ACTION
We now review the main aspects of the effective theory presented in <cit.>, which will require some familiarity with high energy ideas. The reader who is unfamiliar or uninterested in these concepts may refer directly to Eqs. (<ref>)–(<ref>) below, where we report the simple action for the interaction of superfluid modes with vortex lines. The EFT description of a superfluid with vortices involves a two-form field A_μν(x) for the bulk degrees of freedom and an embedding position field X^μ(τ, σ) for each vortex, where τ and σ are arbitrary worldsheet coordinates, such as proper time and physical length along the vortex. To lowest order in derivatives and in the case of a single vortex, the action reads <cit.>
S = S_bulk + S_KR + S_NG^',
S_bulk ≡ ∫ d^4x G(Y), Y = -F_μ F^μ,
S_KR ≡ λ ∫ dτ dσ A_μν ∂_τ X^μ ∂_σ X^ν,
S_NG^' ≡ -∫ dτ dσ √(-det g) 𝒯(g^αβ h_αβ, Y).
Here F^μ = 1/2 ϵ^μνρσ ∂_ν A_ρσ is the gauge-invariant field strength for A, G is an a priori arbitrary function, λ is a coupling constant, 𝒯 is a generalized tension, and
g_αβ = η_μν ∂_α X^μ ∂_β X^ν, h_αβ = (F_μ F_ν/Y) ∂_α X^μ ∂_β X^ν
are two independent induced worldsheet metrics.
The local values of the superfluid number density n and (relativistic) energy density ρ are related to Y and G by
Y = n^2, G(Y) = -ρ,
so that the superfluid equation of state ρ = ρ(n) uniquely determines the bulk action G(Y). A homogeneous superfluid at rest with number density n̅ corresponds to a background field A_ij = -(1/3) n̅ ϵ_ijk x^k. If one is interested in studying superfluid configurations close to such a state, one can expand the above action in powers of perturbations of A about its background and apply standard perturbative field theory techniques. Since at some point we will be taking the NR limit, it is useful to be explicit about powers of c. Hence, we parametrize the perturbations A⃗ and B⃗ of A_μν as
A_0i = n̅ A_i(x)/c, A_ij = n̅ ϵ_ijk(-(1/3)x^k + B^k(x)).
In this way, they both have regular propagators in the c→∞ limit. Indeed, choosing the τ = t = X^0/c gauge, the expansion of the action above reads <cit.>
S → (w̅/c^2) ∫ d^3x dt [ (1/2)(∇⃗×A⃗)^2 + (1/2)(Ḃ⃗̇^2 - c_s^2(∇⃗·B⃗)^2) + (1/2)(1 - c_s^2/c^2) ∇⃗·B⃗ (Ḃ⃗̇ - ∇⃗×A⃗)^2 ] - ∫ dt dσ [ (1/3) n̅λ ϵ_ijk X^k ∂_t X^i ∂_σ X^j + T_(00)|∂_σ X⃗| ] + ∫ dt dσ [ n̅λ(A_i ∂_σ X^i + ϵ_ijk B^k ∂_t X^i ∂_σ X^j) + |∂_σ X⃗|(2T_(01) ∇⃗·B⃗ + 2T_(10)(Ḃ⃗̇ - ∇⃗×A⃗)·v⃗_⊥/c^2) ],
where w̅ is the background relativistic enthalpy density (≃ n̅ m c^2 in the NR limit), c_s is the sound speed, the Ts are effective couplings obtained from the generalized string tension by
T_(mn) = a^m b^n (∂^m/∂a^m)(∂^n/∂b^n) 𝒯(a,b),
evaluated on the background, and v⃗_⊥ is the local string's velocity in the direction orthogonal to the string itself. For non-relativistic superfluids, the constant λ is related to the vortex's circulation Γ by λ = mΓ, where m is the mass of the superfluid's microscopic constituents. We stopped the expansion at cubic order in the bulk and at linear order on the worldsheet, since we will not need higher order terms. The cubic term that we have kept (second line of Eq. (<ref>)) is known to play a role in the classical running of T_(01) <cit.>, and it will be important for us as well. We also implicitly chose a gauge fixing term for A_μν that makes A⃗ purely transverse and B⃗ purely longitudinal—again, see <cit.>. B⃗ can thus be identified with the phonon field, while A⃗ is a constrained field playing a role similar to that of the Coulomb potential of electrodynamics: it does not feature propagating wave solutions, but it can mediate long-range interactions between sources (vortex lines, in this case). It has been dubbed the hydrophoton <cit.>. For convenience for what follows, we organized the expanded action in this way: the first integral (Eq. (<ref>)) collects the terms that make up the action for the bulk A⃗ and B⃗ fields, i.e. the action describing the superfluid in the absence of vortices. The associated propagators are
G^ij_A(k) = (c^2/w̅) i(δ^ij - k̂^i k̂^j)/k^2, G_B^ij(k) = (c^2/w̅) i k̂^i k̂^j/(ω^2 - c_s^2 k^2),
where the iϵ prescription is understood. The second integral (Eq. (<ref>)) describes the motion of a free vortex line in an unperturbed superfluid. The last integral (Eq. (<ref>)) collects all the interaction terms between the bulk fields and the string. In the units that we are using, λ is dimensionless, all the Ts have units of tension (energy per unit length), n̅ is a number density, and w̅/c^2 is a mass density. All these coupling constants are finite in the c→∞ limit, so the only suppressed term is the last one, which involves an explicit 1/c^2 factor.
§ MODELING TRAPPING
Now that we have set up the formalism, we can perform our analysis.
Let us forget for the moment about the presence of the vortex and focus on the superfluid only. To describe the spatial confinement of the superfluid, we introduce a position-dependent action term for the superfluid modes. In line with the EFT approach, we write the most general trapping term that, at lowest order in derivatives, is compatible with the symmetries of our system:
S_tr = -∫ d^3x dt E(√(Y), u⃗, x⃗),
where
u⃗ = (Ḃ⃗̇ - ∇⃗×A⃗)/(1 - ∇⃗·B⃗)
is the superfluid velocity field, and for now E(√(Y), u⃗, x⃗) is a generic function of its arguments, with units of energy density. Noting that
Y = n̅^2[(1 - ∇⃗·B⃗)^2 - (1/c^2)(Ḃ⃗̇ - ∇⃗×A⃗)^2]
and expanding in perturbations of Y and u⃗, we get new interaction terms for A⃗ and B⃗ of the form
S_tr → ∫ d^3x dt { n̅ V(x⃗)[∇⃗·B⃗ + (1/2c^2)(Ḃ⃗̇ - ∇⃗×A⃗)^2] - (1/2)ρ_ij(x⃗)(Ḃ⃗̇ - ∇⃗×A⃗)^i(Ḃ⃗̇ - ∇⃗×A⃗)^j },
where
V(x⃗) ≡ ∂E/∂√(Y), ρ_ij(x⃗) ≡ ∂^2E/∂u^i ∂u^j,
both evaluated on the background (√(Y) = n̅, u⃗ = 0). Notice that V has units of energy, and ρ_ij has units of mass density. We are assuming that the trapping mechanism does not involve a breaking of time reversal, and we are thus setting to zero terms with odd powers of u⃗. If this assumption is violated—say, as in the case of magnetic trapping of charged particles—then the second line in Eq. (<ref>) should be replaced with a linear term in u⃗. Lastly, we have kept only the lowest order terms for any combination of derivatives (time or space) and fields (A or B). This truncation is all we need to compute lowest-order results in perturbation theory. In the standard Gross-Pitaevskii approach, the confinement of the superfluid is modeled with an interaction between a trapping potential V(x⃗) and the superfluid density only. This amounts to considering the particular case of E(√(Y), u⃗, x⃗) = V(x⃗)√(Y). Our trapping action (<ref>) is a more general starting point. Notice, however, that to lowest order in perturbation theory and if we neglect the velocity dependence of E, the two approaches coincide. To first order in V and ρ_ij, the action (<ref>) provides an external source for the B⃗ field. This is given by
J⃗_B(x) = -n̅ ∇⃗V(x⃗).
From standard Green's function theory, the expectation value for B⃗ in the presence of a source is
⟨B^i(x)⟩ = ∫ (d^3k dω/(2π)^4) i G_B^ij(k) J_B^j(k) e^ik·x,
and one easily finds
⟨∇⃗·B⃗⟩ = (n̅ c^2/w̅ c_s^2) V(x⃗).
Since there are no linear sources for A⃗, this implies that the superfluid density in the presence of (weak) trapping is
n(x⃗) = √(Y) = n̅(1 - (n̅ c^2/w̅ c_s^2) V(x⃗)).
It is interesting to note that, to this order in perturbation theory, the geometry of the density field is the same as that of the trapping potential, in the sense that the two have the same level surfaces. This is due to the derivatives entering J⃗_B and Y: they compensate for the non-locality of the propagator, effectively turning the interaction with the trap into a contact term. Our result in Eq. (<ref>) is completely general, valid for any relativistic superfluid. In the non-relativistic limit one has w̅ ≃ m n̅ c^2, where m is the mass of the microscopic constituents of the superfluid, and the expression above simplifies to
n(x⃗) → n̅(1 - V(x⃗)/m c_s^2),
which matches the standard result obtained in the Thomas-Fermi approximation—see e.g. <cit.>.[In the Gross-Pitaevskii approximation one also has c_s^2 = μ/m and μ = nU_0, where μ is the chemical potential and U_0 is the coupling of the non-linear interaction.] Since we have been working to first order in the trapping potential, Eqs.
(<ref>) and (<ref>) can be trusted only when V(x⃗) can be treated as a small perturbation, which is certainly not the case close to the edge of the cloud. However, the spatial point about which we are expanding is arbitrary, and so our expansion is really an expansion in small variations of V, or, equivalently, small gradients (in units of the healing length). Following standard renormalization group (RG) logic, we can thus rewrite Eqs. (<ref>) and (<ref>) as differential equations, the non-linear solutions of which are valid to all orders in V. They formally correspond to resumming an infinite series of tree-level diagrams, which for the case at hand are those of Fig. <ref>. In particular, in the non-relativistic case one can rewrite Eq. (<ref>) as
c_s^2(n) dn/n = -dV/m.
If one knows the equation of state of the superfluid, and thus c_s^2(n), one can integrate both sides and find the fully non-linear relationship between n(x⃗) and V(x⃗). For instance, in Gross-Pitaevskii theory one has c_s^2 ∝ n, and thus the linearized solution (<ref>) is in fact valid to all orders in V, in agreement with standard results. For more general (and realistic) equations of state, there can be sizable nonlinear corrections.
§ VORTEX PRECESSION IN TWO DIMENSIONS
We will now study the motion of a single vortex in a trapped superfluid. In the field theoretical approach, this is done by integrating out the superfluid's bulk modes (the A and B fields). This results in an effective interaction between the vortex line and the trapping potential, which, in our formalism, is at the origin of the observed precession of the vortex. For simplicity we consider the cylindrical case only, that is, a three-dimensional superfluid trapped only along the (x,y) ≡ x⃗_⊥ directions. We parametrize our vortex as a straight line, X⃗(t,z) ≡ (X(t), Y(t), z), and we assume that its distance from the center is much smaller than the typical transverse size of the cloud. Moreover, we will work in the non-relativistic limit, which is the relevant one for experimental questions. (The EFT language allows one to compute relativistic corrections with little extra work; we present such corrections in the Appendix.) To this end, we will assume that the second line of Eq. (<ref>) is a relativistic correction, i.e. it is secretly suppressed by inverse powers of c, since it describes a direct coupling of the trap to the superfluid velocity; this can come, for example, from Doppler-like effects <cit.>, which are indeed suppressed by inverse powers of c. Thus, we assume
ρ_ij(x⃗) = (n̅/c^2) V_ij(x⃗), V_ij ∼ V.
We emphasize, however, that for magnetic trapping of charged particles this assumption should be lifted, and in fact the second line in Eq. (<ref>) should be replaced by a linear coupling to u⃗. So, in the c→∞ limit, the only surviving interaction terms for A and B in Eqs. (<ref>), (<ref>) and (<ref>) are the cubic vertex as well as the sources
J⃗_A(x) = n̅λ δ^2(x⃗_⊥ - X⃗) ẑ,
J⃗_B(x) = [(n̅λ ϵ_ab Ẋ^b - 2T_(01)∂_a) δ^2(x⃗_⊥ - X⃗) - n̅ ∂_a V(x⃗_⊥)] x̂_⊥^a,
where from now on the indices a,b run over x,y. Now consider integrating out A and B along the lines of the field theoretical methods of <cit.>. At tree level this amounts to replacing them in Eq. (<ref>) with the solutions to their classical equations of motion.
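As a brief aside on the resummed density–potential relation just discussed, here is a minimal numerical sketch (ours, not the paper's code; it assumes NumPy and SciPy) integrating c_s^2(n) dn/n = -dV/m for a model equation of state c_s^2(n) = c_0^2 (n/n̄)^ν. The exponent ν = 1 recovers the linear Gross-Pitaevskii profile exactly, as stated above, while ν ≠ 1 produces the nonlinear corrections mentioned there; all parameter values are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, c0, nbar, nu = 1.0, 1.0, 1.0, 0.5   # model parameters (arbitrary units)

    def cs2(n):                            # model sound speed, c_s^2(n)
        return c0**2 * (n / nbar)**nu

    # treat V as the independent variable: dn/dV = -n / (m c_s^2(n))
    sol = solve_ivp(lambda V, n: -n / (m * cs2(n)),
                    t_span=(0.0, 0.4 * m * c0**2), y0=[nbar],
                    dense_output=True, rtol=1e-10, atol=1e-12)

    V = np.linspace(0.0, 0.4 * m * c0**2, 5)
    n_numeric = sol.sol(V)[0]
    n_closed = nbar * (1.0 - nu * V / (m * c0**2))**(1.0 / nu)  # closed form
    print(np.allclose(n_numeric, n_closed, rtol=1e-6))          # True

For this power-law model the integration can also be done by hand, giving (n/n̄)^ν = 1 - νV/(m c_0^2), which is what the closed-form line above checks against.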
The corresponding corrections to the vortex effective action are of the form
S_eff^(A)[X⃗] = ∫ (d^3k dω/(2π)^4) J_A^i(-k) i G_A^ij(k) J_A^j(k),
S_eff^(B)[X⃗] = ∫ (d^3k dω/(2π)^4) J_B^i(-k) i G_B^ij(k) J_B^j(k).
These will both have mixed terms that make the vortex interact with the trapping potential, corresponding to the diagrams in Fig. <ref>. To compute these, we can neglect the time-derivative term in J⃗_B, since there is already a one-derivative kinetic term for the string in Eq. (<ref>), and thus any O(V) corrections to it can be neglected in first approximation. On the other hand, there is no position-dependent potential for the string in the free string action—the only breaking of translational invariance comes from the trapping potential—so whatever position-dependent O(V) non-derivative term we get will be the leading source of "forces" for the string. In conclusion, we can simply restrict to the (Fourier-space) sources
J⃗_A(k) = ẑ n̅λ (2π)^2 δ(ω) δ(k_z) e^-ik⃗_⊥·X⃗,
J⃗_B(k) = -i(2π)^2 δ(ω) δ(k_z) k⃗_⊥ [2T_(01) e^-ik⃗_⊥·X⃗ + n̅ V(k⃗_⊥)].
If the source for B is substituted into Eq. (<ref>), it produces the following contribution:
S_eff^(B)[X⃗] ⊃ (2T_(01)/m c_s^2) ∫ dt dz V(X⃗).
(We used the fact that for a non-relativistic superfluid w̅ ≃ n̅ m c^2.) Once again, the derivatives entering the interactions of B⃗ cancel the non-locality of its propagator, and the net result is a purely local interaction between the vortex and the trapping potential.[This is not a general phenomenon: when integrating out gapless modes, one in general expects to get long-range interactions.] The cubic interaction (∇⃗·B⃗)(∇⃗×A⃗)^2 of Eq. (<ref>) in the presence of a non-trivial V can instead be thought of as a modification to the hydrophoton propagator (see the second diagram in Fig. <ref>), which plugged into Eq. (<ref>) gives
S_eff^(A)[X⃗] ⊃ (n̅^3 λ^2 c^4/8π^2 w̅^2 c_s^2)(1 - c_s^2/c^2) ∫ dt dz d^2x_⊥ V(x⃗_⊥+X⃗)/x_⊥^2.
Up to terms that do not depend on the vortex position, the complete NR vortex effective action is therefore
S^(NR)_eff[X⃗] = ∫ dt dz [ (n̅λ/3) ϵ_ab X^a Ẋ^b + (2T_(01)/m c_s^2) V(X⃗) + (n̅ λ^2/8π^2 m^2 c_s^2) ∫ d^2x_⊥ V(x⃗_⊥+X⃗)/x_⊥^2 ],
and studying the motion of the vortex is now reduced to a straightforward problem of point particle mechanics in two dimensions. Note that the last term is non-local, in the sense that it involves values of V away from X⃗. It is effectively a long-range interaction between the trapping potential and the vortex. The equations of motion read
(2n̅λ/3) ϵ_ab Ẋ^b - ∂_a V_eff(X⃗) = 0,
where the vortex's effective potential energy (per unit length) is
V_eff(X⃗) ≡ -(2T_(01)/m c_s^2) V(X⃗) - (n̅ λ^2/8π^2 m^2 c_s^2) ∫ d^2x_⊥ V(x⃗_⊥+X⃗)/x_⊥^2.
Things become more transparent if we consider a vortex close to the center of the cloud. In that case, we can expand V_eff for small X⃗ and, to quadratic order, we get
V_eff(X⃗) ≃ -(1/2) X^a X^b [ (2T_(01)/m c_s^2) ∂_a ∂_b V(0) + (n̅ λ^2/8π^2 m^2 c_s^2) ∫ d^2x_⊥ ∂_a ∂_b V(x⃗_⊥)/x_⊥^2 ].
(We are assuming that the linear terms vanish—that is for us what defines the "center" of the cloud.) There are now two qualitatively different cases, depending on whether ∂_a ∂_b V(0) vanishes or not.
§.§ Harmonic trapping (∂_a ∂_b V(0) ≠ 0)
In the case of approximately harmonic trapping, V(x⃗_⊥) is quadratic close to the center of the cloud, so ∂_a ∂_b V(0) does not vanish. Then the first line in Eq. (<ref>) is nonzero, and the integral in the second line has a logarithmic divergence at x⃗_⊥ = 0:
∫ d^2x_⊥ ∂_a ∂_b V(x⃗_⊥)/x_⊥^2 = -∂_a ∂_b V(0) · 2π log a + …,
where a is an arbitrary UV cutoff length, and the dots stand for terms that are finite for a → 0.
As usual with UV log divergences, by dimensional analysis they must be accompanied by the log of a physical infrared scale. In the integral above, the only (implicit) candidate for such a scale is the typical transverse size of the cloud R_⊥; our perturbation theory breaks down there, thus making the extrapolation of such integrals beyond that point nonsensical. We find
∫ d^2x_⊥ ∂_a ∂_b V(x⃗_⊥)/x_⊥^2 = ∂_a ∂_b V(0) · 2π log(R_⊥/a),
so the second line of Eq. (<ref>) can be thought of as a renormalization of the first. In particular, following standard RG ideas, we can parametrize V_eff in terms of a running coupling T_(01)(q) evaluated at a typical momentum q ∼ 1/R_⊥:
V_eff(X⃗) ≃ -(T_(01)(1/R_⊥)/m c_s^2) ∂_a ∂_b V(0) X^a X^b,
where
T_(01)(q) = -(n̅ λ^2/8π m) log(q ℓ),
and ℓ is a physical microscopic scale, which we expect to be of the order of the healing length, but the precise value of which has to be determined from experiments. Notice that this result matches precisely the running of T_(01) found in <cit.> via somewhat different methods. If we parametrize the harmonic trapping potential in the usual elliptical form,
V(x⃗_⊥) = (m/2)(ω_x^2 x^2 + ω_y^2 y^2) + 𝒪(r^4),
the equations of motion (<ref>) reduce to
Ẋ(t) = ω_p (ω_y/ω_x) Y(t), Ẏ(t) = -ω_p (ω_x/ω_y) X(t),
the solutions of which are elliptical orbits with the same orientation and aspect ratio as the trapping potential, with angular frequency
ω_p ≡ (3Γ/8π c_s^2) ω_x ω_y log(R_⊥/ℓ).
(We used that λ = mΓ in the NR limit.) This matches the standard results derived by more traditional methods <cit.>. Nonetheless, we emphasize the generality of our result: it does not rely on the Gross-Pitaevskii model or the Hartree approximation. In fact, one can go further and make Eq. (<ref>) completely predictive. Recall that at this level ℓ is a free parameter the value of which has to be determined by experiment. However, if we consider the combination
χ ≡ ω_p/ω_x ω_y
and compare the values it takes for different trapping potentials (`1' and `2'), the ℓ dependence cancels out and we get
χ_1 - χ_2 = (3Γ/8π c_s^2) log(R_⊥,1/R_⊥,2).
§.§ Flatter trapping potentials (∂_a ∂_b V(0) = 0)
The case of flatter trapping potentials—i.e. such that ∂_a ∂_b V(0) = 0—is easier to study: the first line in Eq. (<ref>) is zero, and the integral in the second line is convergent at x⃗_⊥ = 0. Consider then parametrizing the trapping potential as
V(x⃗_⊥) = m c_s^2 f(x⃗_⊥/R_⊥),
where R_⊥ is again the typical transverse size of the cloud, f is a dimensionless function generically with order 1 coefficients (but vanishing second derivatives at the origin), and the overall prefactor follows from consistency with Eq. (<ref>). For the integral in (<ref>), we now simply have
∫ d^2x_⊥ ∂_a ∂_b V(x⃗_⊥)/x_⊥^2 = (m c_s^2/R_⊥^2) f_ab,
where f_ab is a constant symmetric tensor with order 1 entries. If we align the x and y axes with the eigenvectors of f_ab, the equations of motion (<ref>) read
Ẋ(t) = ω_p √(f_yy/f_xx) Y(t), Ẏ(t) = -ω_p √(f_xx/f_yy) X(t),
the solutions of which now are elliptical orbits with aspect ratio √(f_yy/f_xx) and angular frequency
ω_p = (3/16π^2)(Γ/R_⊥^2) √(f_xx f_yy).
This is in perfect agreement with the results recently found in <cit.> by more traditional methods. Notice that for the harmonic potential (<ref>), the typical transverse size is R_⊥ ∼ c_s/ω_x ∼ c_s/ω_y (assuming ω_x ∼ ω_y), so Eqs. (<ref>) and (<ref>) scale in the same way with R_⊥ and Γ, but the harmonic case (<ref>) has a logarithmic enhancement that the anharmonic case lacks. Notice also that the f_ab tensor defined in Eq.
(<ref>) depends not only on the specific function f that defines the trapping potential, Eq. (<ref>), but also on how the integral in Eq. (<ref>) is cut off at x⃗_⊥ ∼ R_⊥. At the edge of the cloud, our perturbative approximations break down, so we cannot predict at present what is the most physical way to implement this cutoff. For the time being, we therefore leave f_ab undetermined; it is plausible that RG ideas like those that led us to Eq. (<ref>) might help us understand how an integral like Eq. (<ref>) is made finite in the IR.
§ DISCUSSION
We close with some comments on our results and possible generalizations. First, notice that our effective action for the vortex line, Eq. (<ref>), is valid for any trapping potential. This means that our result can also be applied to homogeneous superfluids "in a box," such as those realized in <cit.>. Our analysis in Sec. <ref> indicates that even for potentials that are arbitrarily flat near the center, the vortex will still exhibit precession with an angular frequency scaling as ω_p ∼ Γ/R_⊥^2, which is a well-known result (see e.g. <cit.>). Moreover, the orbits near the center are always elliptical (or circular), regardless of the shape of the cloud. Second, notice that the effective string potential (<ref>) is negative definite (on general grounds the coupling T_(01) is expected to be positive <cit.>). This can lead to instabilities once the interactions of the vortex with phonons are taken into account; for instance, at non-zero temperature we expect the phonon thermal bath to create an effective friction for the precessing vortex, making it slowly migrate to regions of lower and lower effective potential, that is, away from the center of the cloud—see e.g. <cit.>. It would be interesting to use our EFT to quantify the effect. Lastly, one could generalize our results to systems with trapping along the z-direction as well. There are experimental indications that vortices in that case get bent by the trap <cit.>, and it would be interesting to see how that effect arises within our EFT. We are grateful to A. Morales for introducing us to this interesting phenomenon, and to G. Iwata, R. McNally, and K. Wenz for useful discussions on trapping techniques. We also thank E. A. Cornell, S. Endlich, A. Pilloni, M. Zwierlein, and especially R. Carretero, P. Kevrekidis, R. Penco, and S. Will for enlightening discussions. This work has been supported by the US Department of Energy grant DE-SC0011941.
§ RELATIVISTIC CORRECTIONS
Using the EFT methods outlined in the main text, with a minimal amount of extra work we can compute the relativistic corrections to the results of Sect. <ref>. Neglecting terms involving time derivatives and the string's velocity for the same reasons as before, the only new terms we should consider in the action read (see Eqs. (<ref>) and (<ref>))
S ⊃ ∫ d^3x dt (n̅/2c^2) U_ij(x⃗)(∇⃗×A⃗)^i(∇⃗×A⃗)^j,
with
U_ij(x⃗) ≡ V(x⃗) δ_ij - V_ij(x⃗),
which, combined with the source (<ref>) for A, can give a vortex/trapping potential interaction mediated by A as in the diagram of Fig. <ref>. After straightforward algebra, the new contribution is found to be
S_eff[X⃗] ⊃ -(n̅^3 λ^2 c^2/2w̅^2) ∫ d^3x dt ϵ^ab ϵ^cd U_ac(x⃗_⊥+X⃗) ∫ (d^2p_⊥ d^2q_⊥/(2π)^4) e^-i(p⃗_⊥+q⃗_⊥)·x⃗_⊥ p^b_⊥ q^d_⊥/(p_⊥^2 q_⊥^2) = (n̅^3 λ^2 c^2/8π^2 w̅^2) ∫ dt dσ ∫ d^2x_⊥ ϵ^ab ϵ^cd U_ac(x⃗_⊥+X⃗) x_⊥^b x_⊥^d/x_⊥^4.
Together with Eq.
(<ref>) and replacing m → w̅/(n̅ c^2), this new term gives the complete relativistic vortex action
S_eff[X⃗] = ∫ dt dσ [ (n̅λ/3) ϵ_ab X^a Ẋ^b + (2T_(01) n̅ c^2/w̅ c_s^2) V(X⃗) + (n̅^3 λ^2 c^4/8π^2 w̅^2 c_s^2) ∫ d^2x_⊥ V(x⃗_⊥+X⃗)/x_⊥^2 - (n̅^3 λ^2 c^2/8π^2 w̅^2) ∫ d^2x_⊥ ϵ^ab ϵ^cd V_ac(x⃗_⊥+X⃗) x_⊥^b x_⊥^d/x_⊥^4 ].
It is interesting to note that, in the absence of a coupling of the trapping to the superfluid velocity, i.e. for V_ab = 0, the relativistic result is formally equal to the one in Eq. (<ref>), since the relativistic correction in Eq. (<ref>) cancels exactly. The same considerations of Sec. <ref> apply to this action.
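To make the precession dynamics described in the main text concrete, the following minimal sketch (ours, not the paper's code; it assumes NumPy and SciPy) integrates the non-relativistic vortex equations of motion for a harmonic trap, Ẋ = ω_p(ω_y/ω_x)Y and Ẏ = -ω_p(ω_x/ω_y)X, and verifies that the orbit closes after one period 2π/ω_p with amplitude ratio |Y|/|X| = ω_x/ω_y, i.e. the trap's own aspect ratio. The frequency values are illustrative.

    import numpy as np
    from scipy.integrate import solve_ivp

    wx, wy, wp = 1.0, 2.0, 0.3        # trap frequencies and precession rate

    def rhs(t, s):
        X, Y = s
        return [wp * (wy / wx) * Y, -wp * (wx / wy) * X]

    T = 2.0 * np.pi / wp
    sol = solve_ivp(rhs, (0.0, T), [0.1, 0.0], dense_output=True,
                    rtol=1e-10, atol=1e-12)

    t = np.linspace(0.0, T, 400)
    X, Y = sol.sol(t)
    print(np.allclose(sol.sol(T), [0.1, 0.0], atol=1e-6))  # orbit closes
    print(np.ptp(Y) / np.ptp(X))                           # = wx/wy = 0.5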
http://arxiv.org/abs/1704.08267v2
{ "authors": [ "Angelo Esposito", "Rafael Krichevsky", "Alberto Nicolis" ], "categories": [ "hep-th", "cond-mat.quant-gas" ], "primary_category": "hep-th", "published": "20170426180122", "title": "Vortex precession in trapped superfluids from effective field theory" }
Aix Marseille University, CNRS, PIIM, Marseille, France; Institut für Materialphysik im Weltraum, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Oberpfaffenhofen, Germany; Joint Institute for High Temperatures, Russian Academy of Sciences, Moscow, Russia
The Grüneisen parameter is evaluated for three-dimensional Yukawa systems in the strongly coupled regime. A simple analytical expression is derived from thermodynamic considerations and its structure is analysed in detail. Possible applications are briefly discussed.
52.27.Lw, 52.27.Gr, 05.20.Jj
Grüneisen parameter for strongly coupled Yukawa systems
Sergey A. Khrapak
=======================================================
§ INTRODUCTION
An equation of state (EoS) in the form of a relation between the pressure and internal energy of a substance (often referred to as the Grüneisen or Mie-Grüneisen equation) has been proven very useful in describing condensed matter under extreme conditions. Central to this form of EoS is the Grüneisen parameter, whose thermodynamic definition is <cit.>
γ_G = V(∂P/∂T)_V/(∂E/∂T)_V = (V/C_V)(∂P/∂T)_V,
where V is the system volume, P is the pressure, T is the temperature, E is the internal energy, and C_V = (∂E/∂T)_V is the specific heat at constant volume. Under the assumption that γ_G is independent of P and E one can write <cit.>
PV = γ_G(ρ)E + C(ρ)V,
where C(ρ) is the "cold pressure", which depends only on the density ρ = N/V. The Grüneisen parameter depends considerably on the substance in question as well as on the thermodynamic conditions (location on the corresponding phase diagram). In most metals and dielectrics in the solid phase, γ_G is in the range from ≃ 1 to ≃ 4. <cit.> For fluids it is usually somewhat smaller, typically ranging from ≃ 0.2 to ≃ 2. <cit.> The focus of this paper is on Yukawa model systems, which are often applied as a first approximation to complex (dusty) plasmas, representing a collection of highly charged particles immersed in a neutralizing environment. <cit.> In the context of complex plasmas, the Grüneisen parameter can be useful in describing shock wave phenomena observed in various complex plasma experiments. <cit.> Therefore, it is desirable to have a practical approach allowing one to estimate the Grüneisen parameter and related quantities under different experimental conditions (an attempt to estimate γ_G has been previously reported in Ref. UsachevNJP2014). In this paper we evaluate the Grüneisen parameter for strongly coupled three-dimensional (3D) one-component Yukawa systems. To be precise, the Yukawa systems studied in this work represent a collection of point-like charged particles, which interact via the pairwise repulsive potential of the form
V(r) = (Q^2/r)exp(-r/λ),
where Q is the particle charge (assumed constant), λ is the screening length, and r is the distance between a pair of particles. The thermodynamics of the considered Yukawa systems is fully characterized by two dimensionless parameters. The first is the coupling parameter, Γ = Q^2/aT, where a = (4πρ/3)^-1/3 is the characteristic interparticle separation (Wigner-Seitz radius) and T is the temperature (in energy units). The second is the screening parameter, κ = a/λ. In the limit κ → 0, the interaction potential tends to the unscreened Coulomb form, and Yukawa systems approach the one-component plasma (OCP). <cit.> Note, however, that in the OCP limit a uniform neutralizing background should be applied to keep the thermodynamic quantities finite.
Thermodynamic properties of Yukawa systems have received considerable attention. In particular, accurate data for the internal energy and compressibility obtained using Monte Carlo (MC) and molecular dynamics (MD) numerical simulations have been tabulated for a wide (but discrete) range of state variables Γ and κ. <cit.> Various integral theory approaches to the equation of state have also been used to describe strongly coupled Yukawa systems. <cit.> Recently, a shortest-graph method has been applied to accurately describe the thermodynamics of Yukawa crystals. <cit.>
Simple and reliable analytical expressions for the energy and pressure of strongly coupled Yukawa fluids have been proposed in Refs. KhrapakPRE02_2015,KhrapakJCP2015. These expressions are based on the Rosenfeld-Tarazona (RT) scaling <cit.> of the thermal component of the excess internal energy when approaching the freezing transition. These expressions demonstrate relatively good accuracy <cit.> and are very convenient for practical applications. In this paper they are employed to estimate the Grüneisen parameter of strongly coupled 3D Yukawa fluids. In this way very simple analytical expressions are obtained and analysed.
§ THERMODYNAMIC PROPERTIES
The total system energy E and pressure P are the sums of kinetic and potential contributions. For 3D systems we can write
E = (3/2)NT + U = (3/2)NT + NT u_ex, PV = NT + W = NT + NT p_ex,
where U is the potential energy and W is the configurational contribution to the pressure, or virial. These are expressed in terms of the conventional reduced (dimensionless) excess energy u_ex and excess pressure p_ex, respectively. We now briefly recall how the excess energy u_ex and pressure p_ex of one-component Yukawa fluids can be evaluated. We only provide the expressions required in subsequent calculations; further details can be found in Refs. KhrapakPRE02_2015,KhrapakJCP2015,KhrapakPPCF2016. The reduced excess energy of a strongly coupled Yukawa fluid can be approximated with good accuracy by the expression
u_ex = M_f Γ + δ(Γ/Γ_m)^2/5.
Here the first term corresponds to the static energy contribution within the ion sphere model (ISM). <cit.> The quantity M_f is referred to as the fluid Madelung constant <cit.> and is given by
M_f(κ) = κ(κ+1)/[(κ+1) + (κ-1)e^2κ].
The second term in Eq. (<ref>) is the thermal contribution to the excess energy, which scales universally with respect to Γ/Γ_m, where Γ_m is the coupling parameter at the fluid-solid (freezing) phase transition. This scaling holds for various soft repulsive particle systems, including the present case of Yukawa repulsion, provided the screening is not too strong. <cit.> Regarding the dependence Γ_m(κ), it can be well described by the simple approximation <cit.>
Γ_m(κ) ≃ 172 exp(ακ)/[1 + ακ + (1/2)α^2κ^2],
where the constant α = (4π/3)^1/3 ≃ 1.612 is the ratio of the mean interparticle distance Δ = ρ^-1/3 to the Wigner-Seitz radius a. The value of the constant δ in Eq. (<ref>) is δ = 3.1, as suggested in Ref. KhrapakJCP2015. Using this approximation for the excess energy, the reduced pressure can be readily obtained as <cit.>
p_ex = p_0 + (δ/3)(Γ/Γ_m)^2/5 f_Z(ακ).
Here p_0 is the static component of the pressure (associated with the static component of the internal energy),
p_0 = κ^4 Γ/{6[κ cosh(κ) - sinh(κ)]^2},
and the function f_Z is defined as
f_Z(x) = (2 + 2x + x^2 + x^3)/(2 + 2x + x^2).
The model described by Eqs. (<ref>)-(<ref>) demonstrated excellent performance <cit.> in the regime κ ≲ 5 and Γ/Γ_m ≳ 0.1, which will be considered in this work.
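For reference, the expressions above are simple enough to evaluate directly. The following minimal Python sketch (ours, not part of the paper; it assumes NumPy) implements u_ex, p_ex, M_f, Γ_m and f_Z, and reproduces the value of the density scaling exponent quoted in the next section for the Veldhorst state point; the Γ, κ arguments in the final lines are illustrative.

    import numpy as np

    ALPHA = (4.0 * np.pi / 3.0)**(1.0 / 3.0)   # ratio Delta/a ~ 1.612
    DELTA = 3.1                                # thermal-energy constant

    def M_f(kappa):      # fluid Madelung constant (ion sphere model)
        return kappa * (kappa + 1.0) / ((kappa + 1.0) + (kappa - 1.0) * np.exp(2.0 * kappa))

    def Gamma_m(kappa):  # coupling parameter at freezing
        x = ALPHA * kappa
        return 172.0 * np.exp(x) / (1.0 + x + 0.5 * x**2)

    def f_Z(x):
        return (2.0 + 2.0 * x + x**2 + x**3) / (2.0 + 2.0 * x + x**2)

    def p0(Gamma, kappa):  # static pressure component
        return kappa**4 * Gamma / (6.0 * (kappa * np.cosh(kappa) - np.sinh(kappa))**2)

    def u_ex(Gamma, kappa):
        return M_f(kappa) * Gamma + DELTA * (Gamma / Gamma_m(kappa))**0.4

    def p_ex(Gamma, kappa):
        return p0(Gamma, kappa) + (DELTA / 3.0) * (Gamma / Gamma_m(kappa))**0.4 * f_Z(ALPHA * kappa)

    print(p_ex(100.0, 2.0) / u_ex(100.0, 2.0))  # excess pressure-to-energy ratio
    print(f_Z(ALPHA * 4.30) / 3.0)              # ~2.07 at kappa = 4.30 (see below)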
§ RELATIONS BETWEEN PRESSURE AND ENERGY
§.§ Excess pressure-to-energy ratio
Using the approximation of Eqs. (<ref>)-(<ref>), important relationships between the pressure and internal energy of Yukawa fluids can be investigated. We start by simply evaluating the ratio of the virial W to the potential energy U, which is equal to the ratio p_ex/u_ex. This ratio has been previously evaluated for 2D Yukawa fluids. <cit.> The calculation for 3D Yukawa fluids, using the thermodynamic functions described above, is presented in Figure <ref>. We note that the excess pressure-to-excess energy ratio is not very sensitive to the reduced coupling parameter Γ/Γ_m. On the other hand, the ratio exhibits a strong dependence on the screening parameter κ (it increases with κ).
§.§ OCP limit
An important observation in Fig. <ref> is that p_ex/u_ex → 1 as κ → 0. At first glance, this seems perhaps counter-intuitive, because one would naturally expect p_ex/u_ex = 1/3 as in the OCP limit in 3D. We remind that, for inverse-power-law (IPL) interactions of the form V(r) ∝ r^-n in 3D, the general relationship p_ex = (n/3)u_ex holds (n is referred to as the IPL exponent). The difference should be attributed to the presence of the uniform neutralizing background in the OCP limit, which is absent in one-component Yukawa systems. Let us prove this mathematically. In the limit of very soft interaction, the energy and pressure at strong coupling (Γ ≫ 1) are dominated by their static contributions. The series expansion of the fluid Madelung energy [Eq. (<ref>)] and the corresponding static pressure [Eq. (<ref>)] in the limit κ → 0 yields
M_f(κ)Γ ≃ -9Γ/10 + κΓ/2 + 3Γ/(2κ^2) + 𝒪(κ^2Γ),
and
p_0(κ) ≃ -3Γ/10 + 3Γ/(2κ^2) + 𝒪(κ^2Γ).
In the absence of an explicit thermodynamic contribution from the neutralizing medium (that is, for one-component Yukawa systems), both M_f and p_0 are divergent at κ → 0, but their ratio remains finite and we have p_ex/u_ex = 1. The contribution from the neutralizing medium to the excess energy (in the linear approximation) is <cit.>
u_m = -3Γ/(2κ^2) - κΓ/2.
Similarly, the contribution of the neutralizing medium to the excess pressure is <cit.>
p_m = -3Γ/(2κ^2).
Adding these contributions we get the familiar results for the OCP within the ISM model: u_ex ≃ -(9/10)Γ and p_ex ≃ -(3/10)Γ, which implies p_ex/u_ex = 1/3. This consideration demonstrates that Yukawa systems in the limit κ → 0 are not fully equivalent to Coulomb (OCP) systems with a neutralizing background. A similar observation has recently been reported in relation to 2D Yukawa fluids. <cit.>
§.§ Density scaling exponent
Let us now consider correlations between the configurational components of the energy U and pressure W in more detail. The density scaling exponent can be defined as <cit.>
γ = (∂W/∂T)_V/(∂U/∂T)_V.
Substituting W and U and making use of the identity T ∂/∂T = -Γ ∂/∂Γ, the density scaling exponent becomes
γ = [p_ex - Γ(∂p_ex/∂Γ)]/[u_ex - Γ(∂u_ex/∂Γ)].
When substituting the expressions for u_ex and p_ex into Eq. (<ref>), the terms linear in Γ cancel out and a very simple result is obtained:
γ = (1/3) f_Z(ακ).
This simple expression agrees with the expected behaviour. In the limit κ → 0 we get the expected OCP limiting value γ = 1/3, corresponding to the unscreened Coulomb interaction. For the "Veldhorst state point" with κ = 4.30 and Γ = 4336.3 (using the definitions of κ and Γ adopted in this paper) Eq. (<ref>) yields γ = 2.07, in good agreement with the result obtained from a direct MD simulation, <cit.> γ = 2.12. Let us also consider another possible derivation of the density scaling exponent γ.
For an arbitrary potential V(r) an effective IPL exponent (or inverse effective softness parameter) can be introduced using ratios of derivatives of the potential, <cit.>
n_eff^(p) = -Δ V^(p+1)(Δ)/V^(p)(Δ) - p,
where V^(p) is the p-th derivative of the potential, and Δ characterizes the mean separation between the particles. For IPL potentials, V(r) ∝ r^-n, we get n_eff^(p) ≡ n for any p and Δ. Moreover, for IPL potentials the density scaling exponent is trivially related to n: γ = n/3 (in 3D). For other potentials, the effective IPL exponent will generally depend on p and also on the exact definition of Δ. Previously, Δ = ρ^-1/3 with p = 0 and p = 1 were used to identify universalities in the melting and freezing curves of various simple systems (Yukawa, IPL, Lennard-Jones, generalized Lennard-Jones, Gaussian Core Model, etc.). <cit.> It was, however, argued that the choice p = 2 is more physically justified. <cit.> Indeed, it is straightforward to verify that, for the Yukawa potential, Eq. (<ref>) with p = 2 yields n_eff^(2) = f_Z(ακ), that is, γ = n_eff^(2)/3, similarly to the conventional IPL result. Thus, identical results for the density scaling exponent γ can be obtained using two seemingly very different routes: (i) the thermodynamic approach based on explicit knowledge of the system pressure and internal energy, and (ii) the effective IPL exponent consideration, which operates only with the third and second derivatives of the interaction potential evaluated at the mean interparticle separation. An interesting related question, whether this is a special property of the Yukawa interaction or perhaps a more general result, requires careful consideration and will not be discussed here.
§.§ Grüneisen parameter
Because the density scaling exponent does not depend on the temperature, the Grüneisen parameter can be easily expressed using γ as
γ_G = (1/c_V)[1 + γ(c_V - 3/2)],
where c_V = C_V/N is the reduced heat capacity at constant volume. The derivation is straightforward; for details see e.g. Ref. SchroderJCP2009. The Grüneisen parameter evaluated using Eq. (<ref>) is plotted in Figure <ref>. Clearly, γ_G is not independent of temperature. Let us discuss the main trends observed. In the limit of very weak coupling (ideal gas limit) we have c_V = 3/2 and hence γ_G = 2/3, as expected for an ideal gas in 3D. <cit.> As the coupling becomes stronger, we can apply the RT scaling to get c_V ≃ 3/2 + (3δ/5)(Γ/Γ_m)^2/5. Assuming that the ideal gas contribution to c_V exceeds that due to strong coupling effects (this is justified for Γ ≲ 0.5Γ_m), the following estimate is obtained:
γ_G ≃ 2/3 + [(6γ - 4)/15] δ(Γ/Γ_m)^2/5.
This expression indicates that γ_G can either increase or decrease compared to the ideal gas value of 2/3. The bifurcation occurs at γ = 2/3, that is, at κ ≃ 1.4 for Yukawa systems. This behaviour is further illustrated in Fig. <ref>, which shows the dependence of γ_G on the reduced coupling strength Γ/Γ_m [calculated from Eq. (<ref>)] for four different screening parameters. In particular, Fig. <ref> documents the existence of a range of screening parameters near the transitional value κ ≃ 1.4, where the Grüneisen parameter remains close to its ideal-gas limiting value even in the strongly coupled regime. For κ ≳ 1.4 the Grüneisen parameter increases with coupling; for κ ≲ 1.4 the tendency is opposite. On approaching the fluid-solid phase transition from the fluid side, c_V reaches values slightly above 3. <cit.> In the OCP limit, the accurate analytical EoS <cit.> predicts c_V ≃ 3.4. <cit.>
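The trends just described can be checked numerically in a few lines; the following is again a hedged illustration of ours, reusing the functions f_Z, ALPHA and DELTA from the sketch above. It reproduces the ideal-gas limit γ_G = 2/3 and the opposite strong-coupling trends on the two sides of κ ≃ 1.4.

    def c_V(G_over_Gm):              # RT heat capacity, 3/2 + (3*delta/5)(Gamma/Gamma_m)^{2/5}
        return 1.5 + (3.0 * DELTA / 5.0) * G_over_Gm**0.4

    def gamma_G(G_over_Gm, kappa):   # Grueneisen parameter
        g = f_Z(ALPHA * kappa) / 3.0  # density scaling exponent
        cv = c_V(G_over_Gm)
        return (1.0 + g * (cv - 1.5)) / cv

    print(gamma_G(1e-12, 1.0))                   # ~2/3 (weak coupling)
    print(gamma_G(0.5, 0.5), gamma_G(0.5, 3.0))  # decrease vs. increase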
The same estimate is obtained using the RT scaling (with δ = 3.1, as adopted here). This corresponds to the following approximation of γ_G for 3D Yukawa melts:
γ_G^m ≃ 0.56γ + 0.29.
The minimum value, γ_G^m ≃ 0.48, occurs in the OCP limit with κ → 0 and γ → 1/3. As κ increases, the density scaling exponent increases monotonously, and so does the Grüneisen parameter; see Fig. <ref>. Finally, deep in the solid phase, the harmonic approximation is appropriate and we have c_V ≃ 3 (Dulong-Petit law). In this regime γ_G^s ≃ γ/2 + 1/3, comparable to the result for the Yukawa melt, Eq. (<ref>).
§ CONCLUSION
In this paper simple analytical expressions for the density scaling exponent and the Grüneisen parameter of strongly coupled Yukawa fluids in three dimensions have been derived and analysed. It turns out that identical results for the density scaling exponent γ can be obtained using the thermodynamic approach (based on explicit knowledge of the system pressure and internal energy) as well as from an effective IPL exponent consideration (which requires only the third and second derivatives of the interaction potential, evaluated at the mean interparticle separation). The Grüneisen parameter evaluated here can potentially be useful in the context of shock-wave experiments in complex (dusty) plasmas. It appears in the expressions relating the pressure and density jumps across a shock wave front (known as the Hugoniot equations). For a relevant example of experimental analysis and a previous estimate of the Grüneisen gamma the reader is referred to Ref. UsachevNJP2014. The results obtained can be useful provided (i) shock waves are excited in three-dimensional particle clouds, (ii) the Yukawa potential is a reasonable representation of the actual interactions between the charged particles under these conditions, (iii) there is no or weak dependence of the particle charge on the particle density (in the theory described here the particle charge is constant), and (iv) the screening length is not much smaller than the mean interparticle separation. These conditions can (at least partially) be met in complex plasma experiments under microgravity conditions, e.g. in the PK-4 laboratory, currently operational onboard the International Space Station. This work was supported by the A*MIDEX project (Nr. ANR-11-IDEX-0001-02) funded by the French Government "Investissements d'Avenir" program managed by the French National Research Agency (ANR).
http://arxiv.org/abs/1704.08517v1
{ "authors": [ "Sergey Khrapak" ], "categories": [ "physics.plasm-ph", "cond-mat.soft" ], "primary_category": "physics.plasm-ph", "published": "20170427113533", "title": "Grüneisen parameter for strongly coupled Yukawa systems" }
Leiden Observatory, University of Leiden, PO Box 9513, 2300 RA, Leiden, The Netherlands; [email protected] Department of Earth and Space Sciences, Chalmers University of Technology, Onsala Space Observatory, 439 92, Onsala, Sweden Department of Geology and Geophysics, University of Hawaii at Manoa, Honolulu, HI 96822, USA Dipartimento di Fisica, Università di Torino, via Pietro Giuria 1, I-10125, Torino, Italy Landessternwarte Königstuhl, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, D-69117 Heidelberg, Germany Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstrasse 2, D-12489 Berlin, Germany Department of Earth and Planetary Sciences, Tokyo Institute of Technology, Meguro-ku, Tokyo, Japan Rheinisches Institut für Umweltforschung an der Universität zu Köln, Aachener Strasse 209, 50931 Köln, Germany Lund Observatory, Department of Astronomy and Theoretical Physics, Lund University, 22100, Lund, Sweden Instituto de Astrofísica de Canarias, 38205 La Laguna, Tenerife, Spain Departamento de Astrofísica, Universidad de La Laguna, 38206 La Laguna, Tenerife, Spain Thüringer Landessternwarte Tautenburg, Sternwarte 5, 07778 Tautenburg, Germany Center for Astronomy and Astrophysics, TU Berlin, Hardenbergstr. 36, D-10623 Berlin, Germany Department of Astronomy and McDonald Observatory, University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712, USA Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA Princeton University, Department of Astrophysical Sciences, 4 Ivy Lane, Princeton, NJ 08544 USA Department of Astronomy, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan Astrobiology Center, NINS, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan National Astronomical Observatory of Japan, NINS, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Subaru Telescope, National Astronomical Observatory of Japan, 650 North Aohoku Place, Hilo, HI 96720, USA
The star EPIC 210894022 has been identified from a light curve acquired through the K2 space mission as possibly being orbited by a transiting planet. Our aim is to confirm the planetary nature of the object and derive its fundamental parameters. We analyse the light curve variations during the planetary transit using packages developed specifically for exoplanetary transits. Reconnaissance spectroscopy and radial velocity observations have been obtained using three separate telescope and spectrograph combinations. The spectroscopic synthesis package SME has been used to derive the stellar photospheric parameters that were used as input to various stellar evolutionary tracks in order to derive the parameters of the system. The planetary transit was also validated to occur on the assumed host star through adaptive optics imaging and statistical analysis. The star is found to be located in the background of the Hyades cluster, at a distance at least 4 times further away from Earth than the cluster itself. The spectrum and the space velocities of EPIC 210894022 strongly suggest it to be a member of the thick disk population. The co-added high-resolution spectra show that it is a metal-poor ([Fe/H] = -0.53±0.05) and α-rich, somewhat evolved solar-like star of spectral type G3. We find T_eff = 5730±50 K, log g = 4.15±0.1 (cgs), and derive a radius of R_⋆ = 1.3±0.1 R_⊙ and a mass of M_⋆ = 0.88±0.02 M_⊙.
The currently available radial velocity data confirm a super-Earth class planet with a mass of and a radius of . A second, more massive object with a period longer than about 120 days is indicated by a long-term radial velocity drift. The radial velocity detection, together with the imaging, confirms with a high level of significance that the transit signature is caused by a planet orbiting the star EPIC 210894022. This planet is also confirmed in the radial velocity data. A second, more massive object (planet, brown dwarf, or star) has been detected in the radial velocity signature. With an age of ≳ 10 Gyr, this system is one of the oldest in which planets have hitherto been detected. Further studies of this planetary system are important since it contains information about the planetary formation process during a very early epoch of the history of our Galaxy.
Fridlund et al. The Super-Earth planet EPIC 210894022b - A short period super-Earth transiting a metal poor, evolved old star Malcolm Fridlund1,2 Eric Gaidos3 Oscar Barragán4 Carina M. Persson2 Davide Gandolfi4 Juan Cabrera6 Teruyuki Hirano7 Masayuki Kuzuhara19,20 Sz. Csizmadia6 Grzegorz Nowak10,11 Michael Endl14 Sascha Grziwa8 Judith Korth8 Jeremias Pfaff13 Bertram Bitsch9 Anders Johansen9 Alexander J. Mustill9 Melvyn B. Davies9 Hans Deeg10,11 Enric Palle10,11 William D. Cochran14 Philipp Eigmüller6 Anders Erikson6 Eike Guenther12 Artie P. Hatzes12 Amanda Kiilerich15 Tomoyuki Kudo21 Phillip MacQueen14 Norio Narita18,19,20 David Nespral10,11 Martin Pätzold8 Jorge Prieto-Arranz10,11 Heike Rauer6,13 Vincent Van Eylen1 January 2017 ==========
§ INTRODUCTION Exoplanetary transits give valuable information about the planetary size in terms of the host star. Very high precision transit photometry, preferably carried out from space, gives us access to the orbital parameters, which, combined with either radial velocity (RV) data and/or transit timing variations (TTVs), enables the measurement of the planetary fundamental parameters, most notably the planet's radius, mass, and mean density <cit.>. Determination of the fundamental parameters of exoplanets, and of their host stars, is necessary in order to study the internal structure, composition, dynamical evolution, tidal interactions, architecture of planetary systems, and the atmosphere of exoplanets <cit.>. The successful CoRoT and Kepler space missions <cit.> have found large numbers of transiting exoplanets of different types, have led to the discovery and measurements of the fundamental parameters of the first rocky exoplanets, CoRoT-7b and Kepler-10b <cit.>, and have introduced detailed modelling to the field of exoplanetary science <cit.>. One of the most important results of these missions is the realisation of how diverse exoplanets are.
Later discoveries, primarily by the Kepler mission, have led to the understanding that small and dense planets ("super-Earths") are quite common <cit.>, and that they may even have formed early in our Galaxy's evolution <cit.>. The repurposed K2 space mission provides long-timeline, high-precision photometry for exoplanet and astrophysics research. It is the new name given to NASA's Kepler mission after the failure of one of its non-redundant reaction wheels in May 2013, which caused the pointing precision of the telescope to be non-compliant with the original mission. The mission was resumed in early 2014 by adopting a completely different observing strategy <cit.>. The key difference of this new strategy with respect to the original one is that the telescope can now only be pointed towards the same field in the sky for a period of a maximum of ∼80 days, and has to be confined to regions close to the ecliptic. K2 is thus limited to detecting planets with much shorter orbital periods than Kepler. K2 observes stars that are on average 2-3 magnitudes brighter than those targeted by the original Kepler mission <cit.>, in fields (designated "campaigns") re-targeted every ∼80 days along the ecliptic. This entails an opportunity to gain precious knowledge on the mass of small exoplanets via ground-based radial velocity follow-up observations. By observing almost exclusively brighter stars than the previous missions, the quality of the necessary ground-based follow-up observations (e.g., spectroscopic characterisations and radial velocity measurements) has improved significantly. The approximately 10,000 – 15,000 objects observed in each field are listed in the Ecliptic Plane Input Catalog (EPIC) of the K2 mission[https://archive.stsci.edu/k2.]. The capability of K2 to detect small (down to super-Earth size) transiting planets in short-period orbits around such stars has recently been demonstrated <cit.>. As part of our ongoing studies of individual exoplanetary candidates from the K2 mission, and using methods <cit.> we have developed for the interpretation of these data, as well as for the expected upcoming missions <cit.>, we have confirmed a short-period transiting super-Earth that, together with a larger body with a significantly longer period, orbits the solar-like star EPIC 210894022[The star was a target of three programs during K2 Campaign 4, GO4007, GO4033 and GO4060.]. This star was previously designated as a false positive <cit.>. As is true in this case, and as was learned during the Kepler mission, it is quite common that automatic analysis methods give false positives for true detections, and the evolution of the pipeline software during a space mission may motivate further analyses. It should also be stressed in this context that different algorithms may give differing results. The star is a metal-poor, high-velocity object indicative of an old age. Planets orbiting such stars are very rare and important, since they provide information about the earliest phases of planetary formation in our Galaxy. In this paper we describe our follow-up study of this object, carried out to confirm the planetary nature of the transits and to model the evolution and age of the system, as well as the formation process. The paper is organised in the following way: in Sect. 2 we present the K2 photometry, and in Sect. 3 we present the ground-based follow-up with spectral classification and validation of the planetary signal with a calculation of the false positive probability. In Sect. 4 and 5 we classify the host star kinematically, determine its distance, and derive the stellar mass, radius and age of the system.
In Sect. 6 we then carry out the transit and radial velocity curve modelling and determine the exoplanetary physical parameters, the results of which make it increasingly probable that there is a second body in this system. In Sect. 7 we model the orbital dynamics of the system and finally, in Sect. 8, we discuss and summarise the results.
§ PHOTOMETRY OF THE TRANSIT SIGNAL Observations of the K2 Field 4 took place between February 7 and April 23, 2015. This campaign included the Hyades, Pleiades, and NGC 1647 clusters. This was intentional, and most selected targets were members of these clusters. A total of 15 847 long cadence (30 minute integration time) and 122 short cadence (1 minute integration time) targets were observed, and the data were made publicly available on September 4, 2015. The part of the light curve containing the actual primary (and possibly also a secondary) transit provides significant information about both the transiting object and the host star <cit.>. The actual light curve is, however, contaminated with noise caused by a number of instrumental and natural effects and needs to be processed before it can be interpreted. We used two different and independent methods to produce cleaned and interpretable light curves for all 15 969 targets. The first technique follows the methodology outlined in <cit.>. The K2 target pixel files were analyzed for stellar targets, and a mask for each target was calculated and assigned. After the light curve extraction, disturbances produced by the drift[This drift is caused by the fact that operating the K2 spacecraft using only two reaction wheels requires a combination of carefully balanced solar radiation pressure and the fine adjustment thrusters in order to stabilize the spacecraft around the third axis. This results in a periodic rotation of the spacecraft about the bore sight of the telescope <cit.>.] of the telescope over the sky were corrected by computing the rotation of the telescope's CCDs[The focal plane of Kepler is equipped with an array of 21 individual CCDs covering an area of ∼116 deg^2 on the sky.]. After these corrections we then used the pipeline of <cit.> in order to separate stellar variability and discontinuities and to search for transit signals in the resulting light curves. In the second method, we used circular apertures to extract the light curves. An optimal aperture size was selected in order to minimize the noise. The background was estimated by calculating the median value of the target pixel file after the exclusion of all pixels brighter than a threshold value that may belong to a source. The resulting light curves were de-correlated using the movement of the centroid, as described in <cit.>. For more details we refer to <cit.>. We then used the Détection Spécialisée de Transits (DST) algorithm <cit.>, originally developed for the CoRoT mission, to search for transit signals in the resulting light curves. Both algorithms have been applied extensively to both CoRoT <cit.> and Kepler data <cit.>.
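As a concrete illustration of this kind of transit search, the short sketch below runs a standard box least squares (BLS) periodogram, via astropy, on a synthetic K2-like light curve. This is a generic stand-in, not the DST or wavelet pipelines used here; the cadence, noise level, and injected 5.35-day, ∼0.014% signal are placeholder numbers chosen only to echo the values quoted in this paper.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

# Synthetic, detrended light curve: ~75-day campaign at 30-minute cadence.
rng = np.random.default_rng(1)
t = np.arange(0.0, 75.0, 30.0 / 1440.0)
flux = 1.0 + 1e-4 * rng.standard_normal(t.size)
# Inject a crude box-shaped transit: period 5.351 d, duration 0.1 d,
# depth 1.4e-4 (~0.014 %), roughly as reported in the text.
flux[(t % 5.351) < 0.1] -= 1.4e-4

bls = BoxLeastSquares(t, flux)
periodogram = bls.autopower(0.1)  # trial transit duration of 0.1 d
best = np.argmax(periodogram.power)
print(f"best period: {periodogram.period[best]:.4f} d, "
      f"depth: {periodogram.depth[best]:.2e}")
```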
Transit detection algorithms of this kind search for a pattern in the data and use statistics to decide whether a signal is present in the data or not; a classic example is the box-fitting least squares (BLS) algorithm <cit.>. DST uses an optimized transit shape, with the same number of free parameters as BLS, and an optimized statistic for signal detection. The second search pipeline uses a combination of a wavelet-based filter technique <cit.> and BLS; this filtering was originally developed to remove or reduce the impact of stellar variability and discontinuities in space-mission light curves. When applied, both pipelines resulted in the discovery of a shallow transit signature in the light curve of the star designated EPIC 210894022, occurring every ∼5.35 days. The depth of the signal (∼0.014%), shown in Fig. <ref>, is compatible with a super-Earth-size planet transiting a solar-like star. Table <ref> lists the main designations, optical and infrared magnitudes, and proper motion of EPIC 210894022. The detection and characterisation of the planet were then confirmed using the light curves of <cit.>[https://www.cfa.harvard.edu/~avanderb/k2.html] and the EVEREST light curves <cit.>. Together with our own pipelines, we obtained consistent parameters (e.g., period, depth, duration) within the uncertainties. The analysis of the light curve extracted with Vanderburg's pipeline revealed a transit-like feature close to phase 0.5 in the folded light curve with a significance of 3.6 sigma. Depending on the circumstances, the presence of secondary eclipses in the folded light curve of a planetary candidate can be a clear sign of contamination by background eclipsing binaries. Ruling out the presence of such secondary eclipses is a mandatory step in the photometric confirmation of planetary candidates. It was found that the transit-like feature was not consistent with the expected duration and dilution factor of a secondary eclipse by a background eclipsing binary. The duration and depth of the transit-like feature actually depended on the binning chosen in the folding process, which is typically not the case for genuine astrophysical signals. We concluded that the transit-like feature was either some residual of correlated noise in the light curve or simply a statistical fluctuation without astrophysical origin.
§ GROUND-BASED FOLLOW-UP §.§ High resolution spectroscopy In November 2015 we obtained 4 reconnaissance high-resolution (R ≈ 60 000) spectra of EPIC 210894022 using the Coudé Tull spectrograph <cit.> at the 2.7-m telescope at McDonald Observatory (Texas, USA). The spectra have a signal-to-noise ratio (SNR) of ∼25-40 per resolution element at 5500 Å. We reduced the data using standard routines and derived preliminary spectroscopic parameters using the code of <cit.>, and radial velocities via cross-correlation with the RV standard star HD 50692. The results from all 4 spectra are nearly identical and reveal a star with an effective temperature of 5 778 ± 60 K, a surface gravity of 4.19 ± 0.2 dex, a metallicity of [M/H] = -0.3 ± 0.1 dex, and a slow projected rotational velocity of 3.7 ± 0.3 km s^-1. The spectra show no significant radial velocity variation at a level of ∼150 m/s. We started the high-precision RV follow-up of EPIC 210894022 using the FIbre-fed Echelle Spectrograph (FIES) <cit.> mounted at the 2.56-m Nordic Optical Telescope (NOT) of the Roque de los Muchachos Observatory (La Palma, Spain). We collected 6 high resolution spectra (R ≈ 67 000) in November 2015, as part of the CAT observing program 35-MULTIPLE-2/15B. The exposure time was set to 2400s – 3600s, leading to a SNR of 40 – 60 per pixel at 5500 Å.
In order to remove cosmic ray hits, we split each exposure into 3 consecutive sub-exposures of 800s – 1200s. Following the observing strategy outlined in <cit.> and <cit.>, we traced the RV drift of the instrument by acquiring long-exposure (T_exp ≈ 35s) ThAr spectra immediately before and after the three sub-exposures. The data were reduced following standard routines. Radial velocities were extracted via SNR-weighted, multi-order cross-correlation with the RV standard star HD 50692, which was observed with the same instrument set-up as the target. Twelve additional high-resolution spectra (R ≈ 115 000) were obtained with the HARPS-N spectrograph <cit.> mounted at the 3.58-m Telescopio Nazionale Galileo (TNG) of the Roque de los Muchachos Observatory (La Palma, Spain). The observations were performed between November 2015 and January 2016 as part of CAT and OPTICON programs 35-MULTIPLE-2/15B, 15B/79 and 15B/064. We set the exposure time to 1800s and monitored the sky background using the second fibre. The data reduction was performed with the dedicated HARPS-N pipeline. The extracted spectra have a SNR of 20 - 60 per pixel at 5500 Å. Radial velocities were extracted by cross-correlation with a G2 numerical mask <cit.>. The FIES and HARPS-N RVs are listed in Table <ref>, along with the full-width at half maximum (FWHM) and the bisector span (BIS) of the cross-correlation function (CCF). Time stamps are given in barycentric Julian day in barycentric dynamical time (BJD_TDB). The FIES and HARPS-N RVs show a ∼2-σ significant RV variation in phase with the K2 ephemeris, superimposed on a long negative linear trend (γ̇ = -0.217 ± 0.077 m s^-1 d^-1, with a ∼3-σ significance level), as discussed in Sect. <ref>. In order to assess whether the observed RV variation is caused by a distortion of the spectral line profile – unveiling the presence of activity-induced RV variations and/or of a blended eclipsing binary system – we searched for possible correlations between the RV and the BIS and FWHM measurements. The linear correlation coefficient between the RV and FWHM measurements is 0.14 (p-value = 0.79) for the FIES data, and -0.13 (p-value = 0.70) for the HARPS-N data; the correlation coefficient between the RV and BIS measurements is -0.14 (p-value = 0.79) for FIES, and 0.15 (p-value = 0.64) for HARPS-N. The lack of significant correlations suggests that the observed RV variations are Doppler shifts induced by the orbiting companions. We can therefore confirm the transiting planetary candidate, with a mass of , and find support for the presence of a secondary body with a significantly longer period.
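The correlation test above is straightforward to reproduce. The sketch below shows the standard Pearson-r calculation with scipy; the arrays are randomly generated stand-ins for the RV, BIS and FWHM columns of the table (the real measurements are in the paper), so the printed coefficients will not match the quoted values.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder HARPS-N-like measurements (12 epochs); values are assumptions.
rng = np.random.default_rng(2)
rv   = rng.normal(-16.3, 0.005, 12)   # km/s, ~5 m/s scatter
bis  = rng.normal(0.02, 0.01, 12)     # CCF bisector span, km/s
fwhm = rng.normal(7.0, 0.02, 12)      # CCF FWHM, km/s

for name, activity in (("BIS", bis), ("FWHM", fwhm)):
    r, p = pearsonr(rv, activity)
    # A small |r| with a large p-value argues against activity-induced RVs.
    print(f"RV vs {name}: r = {r:+.2f}, p-value = {p:.2f}")
```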
§.§ Spectral classification The most useful method to determine the fundamental stellar parameters (e.g., the effective temperature, the surface gravity, and the stellar age), required for the interpretation of the exoplanet data, is to analyse the high resolution spectra obtained in order to prepare the RV curve used for the planetary mass determination. After correcting for the RV variation, the FIES and HARPS-N spectra were co-added to produce spectra with a high signal-to-noise ratio (SNR). This resulted in one spectrum with SNR ∼120 per pixel at 5 500 Å for the co-added FIES data, and another with SNR ∼150 at 5 500 Å for the HARPS-N data. To determine the effective temperature, the profile of either of the strong Balmer line wings is then fitted to appropriate stellar spectrum models <cit.>. This fitting procedure has to be carried out carefully, since the determination of the level of the adjacent continuum can be difficult for modern high-resolution Echelle spectra, where each order can only contain a limited wavelength band <cit.>. A suitable part of the Balmer line core is excluded, since this part of the line profile originates in layers above the actual photosphere and would thus contribute to a different value of the effective temperature. The analysis was then carried out as follows. We fitted the observed spectra to a grid of theoretical model atmospheres from <cit.>. We selected parts of the observed spectrum that contained spectral features sensitive to the required parameters. We used the empirical calibration equations for Sun-like stars from <cit.> and <cit.> in order to determine the micro-turbulent and macro-turbulent velocities, respectively. The projected stellar rotational velocity was measured by fitting the profiles of about 100 clean and unblended metal lines. In order to find the model that best fits the different parameters, we made use of the spectral analysis package SME <cit.>. SME calculates, for a set of given stellar parameters, synthetic spectra and fits them to observed high-resolution spectra using a χ^2 minimization procedure. We used SME version 4.43 and a grid of model atmospheres <cit.>, a set of 1-D models applicable to solar-like stars. The final adopted values are listed in Table <ref>. We report the individual abundances of some elements in Table <ref>. We find an effective temperature of 5730±50 K, a surface gravity of 4.15±0.1 (cgs), and an iron abundance of [Fe/H] = -0.53±0.05 dex. <cit.> obtained a spectrum using the HIRES spectrograph and Specmatch. They find an effective temperature of 5788 ± 71 K and a surface gravity of 4.224 ± 0.078, in agreement with our values. Based on an average of the Ca, Si and Ti abundances (excluding the abundance of Mg, since that is based on just two lines), we find [α/Fe] = +0.2 ± 0.05; EPIC 210894022 is thus iron-poor and moderately α-rich. Using the <cit.> calibration scale for dwarf stars, the effective temperature and surface gravity of EPIC 210894022 define the spectral type of this object as an early G-type. The low value of the surface gravity suggests that the star is evolving off the main sequence, indicating a high age, consistent with the high space velocities as well as the low iron abundance.
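SME itself performs a continuous optimization over interpolated model grids; as a toy illustration of the underlying idea only, the snippet below does a brute-force χ² minimization over a precomputed grid of synthetic spectra. All names and grid nodes here are placeholders, not SME internals.

```python
import numpy as np

def chi2(obs_flux, obs_err, model_flux):
    """Standard chi-square between observed and synthetic fluxes."""
    return np.sum(((obs_flux - model_flux) / obs_err) ** 2)

def best_grid_point(obs_flux, obs_err, grid):
    """grid maps (teff, logg, feh) -> synthetic flux sampled on the same
    wavelength pixels as the observation; returns the minimum-chi2 node."""
    return min(grid, key=lambda params: chi2(obs_flux, obs_err, grid[params]))

# Toy usage with a fake two-node grid of flat "spectra":
n_pix = 100
truth = np.ones(n_pix)
grid = {(5700, 4.1, -0.5): truth,
        (6000, 4.3, -0.3): truth * 1.01}
obs = truth + 1e-3 * np.random.default_rng(0).standard_normal(n_pix)
print(best_grid_point(obs, np.full(n_pix, 1e-3), grid))
```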
§.§ Validation of the transiting planet §.§.§ High resolution imaging Transit signals such as this one, which appear to be planetary in origin, may actually be false positives arising from the diluted signal of a fainter, unresolved eclipsing binary (EB) that is either an unrelated background system or a companion to the primary star. In order to identify such potential false alarm sources, we searched for faint stars close to the target in images acquired with high spatial resolution. EPIC 210894022 was first observed on November 18, 2015 with the FastCam lucky imaging camera <cit.> at the 1.52-m Carlos Sánchez Telescope at Teide Observatory, Tenerife. We acquired ten "cubes" of 1,000 images through an I-band filter, each with 50 ms exposure time. Due to the 1.5'' seeing and the relative faintness of the target, only four of these cubes could be processed successfully with the 'shift and add' technique. Two processing attempts were made, using in one case the 1% and in the other the 10% of the images with the smallest point spread function. In neither of the processed combined images, which cover an area of ≈ 5'' × 5'' centred on the target, could any further stars be discerned, down to 4 magnitudes fainter than the target. In order to further check whether an unresolved eclipsing binary mimics the planetary transits, we also performed adaptive-optics (AO) imaging with the HiCIAO instrument on the Subaru 8.2-m telescope <cit.> on December 31, 2015. Employing AO188 <cit.> and the Direct Imaging (DI) mode, we observed EPIC 210894022 in the H band with 3-point dithering. To search for possible faint companions, we set each exposure to 15s × 10 coadds and let the target saturate, with a saturation radius of ∼0.08''. For the flux calibration, we also obtained an unsaturated image with an exposure time of 1.5s × 5 coadds for each of the three dithering points, using a 9.74% neutral density (ND) filter. The total integration times were 900s for the saturated image and 22.5s for the unsaturated one. We reduced the HiCIAO images following the procedure described in <cit.> and <cit.>. The raw images were first processed to remove the correlated read-out noise (so-called "stripes"). The hot pixels were masked, and the resulting images were flat-fielded and distortion-corrected by comparing images of the globular cluster M5 with data taken by the Hubble Space Telescope. All images in each category (saturated and unsaturated) were finally aligned and median combined. The combined unsaturated image shows that the full width at half maximum (FWHM) of EPIC 210894022 after the AO correction is 0.052''. With a visual inspection of the combined saturated image (see the inset of Fig. <ref>), we did not find any bright companion candidate up to 5'' from the target. Two neighboring faint objects were found to the north-east of the target at a separation of ∼8.5''. These objects are, however, only partially inside the photometric aperture, and too faint (flux contrasts less than 4 × 10^-5 in the H band) to be a source of transit-like signals in the K2 light curve. To draw a flux contrast curve around the target, we convolved the combined saturated image with an aperture equivalent to the FWHM of the object. The standard deviation of the flux counts of the convolved image was computed within an arbitrary annulus as a function of separation from the target. After carrying out aperture photometry of the combined unsaturated image, using an aperture radius equal to the FWHM of the point spread function, and applying a correction for the integration times and the transmittance of the neutral density (ND) filter, we measured the 5σ contrast from the target. The solid line of Fig. <ref> plots the measured 5σ contrast as a function of separation from the target in arcseconds; the 5-σ contrast is less than 3 × 10^-4 at 1''. Given the transit depth of ΔF/F = 1.8 × 10^-4, we can exclude the presence of false alarm sources further than 1'' away from EPIC 210894022.
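The contrast-curve procedure just described is, in essence, a few lines of array manipulation. The sketch below is a minimal version, assuming the image has been reduced and the stellar flux measured; a square smoothing box of FWHM width stands in for the circular aperture, and all function and parameter names are illustrative, not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def contrast_curve_5sigma(image, center, fwhm_pix, star_flux, radii, width=2.0):
    """5-sigma contrast vs separation: smooth with an FWHM-sized box and
    take the standard deviation of the flux in annuli around the star."""
    smoothed = uniform_filter(image, size=max(int(round(fwhm_pix)), 1))
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - center[0], yy - center[1])   # radius map in pixels
    out = []
    for r0 in radii:
        ring = (r > r0 - width / 2) & (r < r0 + width / 2)
        out.append(5.0 * smoothed[ring].std() / star_flux)
    return np.array(out)
```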
§.§.§ False Positive Probability To further exclude the possibility of a false positive due to a faint, blended eclipsing binary, we performed a Bayesian calculation based on the stellar background. This simulation does not include the probability that such a star is actually a binary on an eclipsing orbit, only the probability that an appropriate star is at the location of EPIC 210894022; it is thus an upper limit on the False Positive Probability (FPP). The procedure is described in detail in <cit.> and summarized here. The Bayesian prior is based on a model of the background stellar population, and the likelihoods are based on observational constraints. A background stellar population equivalent to 10 square degrees (to improve counting statistics) was constructed at the location of EPIC 210894022 using Version 1.6 of a Galactic population synthesis model <cit.>. The background was computed to K_p = 22, fainter than the faintest EB (K_p ≈ 20) that could produce the signal. The likelihood for a hypothetical background star is the product of the probabilities that (a) it can produce the observed transit depth; (b) its mean density is consistent with the observed transit duration; and (c) it does not appear in our Subaru HiCIAO H-band imaging of the target (Sect. <ref>). More advanced FPP calculations can take into account the precise shape of the transit, but we show that such refinement is not needed in this case. The calculation was performed by random sampling of the synthetic background population, placing the stars in a uniformly random distribution over a region with a 15'' radius centred on EPIC 210894022. Stars that exceeded the AO contrast ratio constraint (condition c) were excluded. Given the known orbital period and the mean density of the synthetic star, the probability that a binary would have an orbit capable of producing the observed transit duration (condition b) was calculated assuming a Rayleigh distribution of orbital eccentricities with a mean of 0.1. (Binaries on short-period orbits should quickly circularize.)[The eclipse duration calculation uses the formula for a "small" occulting object and so is only approximate.] To determine whether a background star could produce the observed transit signal with an eclipse depth < 50% (condition a), we determined its relative contribution to the flux of EPIC 210894022, assuming a 7 × 7 pixel photometric aperture and using bilinear interpolations of the pixel response function for detector channel 48 with the tables provided in the Supplement to the Kepler Instrument Handbook (E. Van Cleve & D. A. Caldwell, KSCI-19033). The calculations were performed in a series of 1000 Monte Carlo iterations, and a running average was used to monitor convergence. We found a FPP of ≈ 2 × 10^-7. We estimated the probability that the transit signal could be due to a companion EB or transiting planet system by using the 99.9% upper limit of the stellar density derived from the fitting of the transit light curve, but without spectroscopic priors. This yields a minimum mass and radius and, via a stellar isochrone, the absolute brightness of a hypothetical companion with the same age and metallicity as EPIC 210894022. The contrast ratio between the hypothetical stellar companion and EPIC 210894022 can then be established via the photometric distance. We then used an 11.5 Gyr, [Fe/H] = -0.5 isochrone (see Sect. <ref>) generated by the Dartmouth Stellar Evolution Program <cit.> to put lower limits on the companion effective temperature and mass (T_eff > 5 900 K and M > 0.79 M_⊙) and faint limits on the magnitudes, K_s < 10.3 and K_p < 11.5, using a photometric distance of 230 pc. The predicted K-band contrast is < 0.9 magnitudes, and the AO imaging performed with Subaru (Sect. <ref>) limits any such companion to within 0.095'' (Fig. <ref>), or about 22 AU. Such a companion would have a typical projected RV difference of at least a few km s^-1 and, because of the relatively modest contrast, we would have expected to resolve a second set of lines in our FIES and HARPS-N spectra, which we do not. If the companion exists and hosts the transiting object, the object must be smaller than our estimate (and thus still a planet), because the star is hotter and thus its surface brightness is higher than that of EPIC 210894022.
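A stripped-down version of conditions (a) and (c) above is sketched below, assuming synthetic background stars are characterised only by their Kepler magnitude. The dilution geometry, the AO contrast cut expressed as a single Δm threshold, and the uniform magnitude draw are simplifying assumptions for illustration; the real calculation also applies condition (b) and a separation-dependent contrast curve.

```python
import numpy as np

def can_cause_false_positive(bg_kp, target_kp=11.1, depth=1.8e-4, ao_dkp_limit=8.0):
    """Keep synthetic background stars that (a) could produce the observed
    transit depth with an eclipse depth < 50%, and (c) evade the AO imaging
    (here idealised as a single magnitude-contrast threshold)."""
    flux_ratio = 10.0 ** (-0.4 * (bg_kp - target_kp))  # background/target flux
    dilution = flux_ratio / (1.0 + flux_ratio)         # fraction of total flux
    deep_enough = depth <= 0.5 * dilution              # condition (a)
    undetected = (bg_kp - target_kp) > ao_dkp_limit    # condition (c)
    return deep_enough & undetected

# e.g. magnitudes drawn from a population model scaled to the aperture area:
bg_kp = np.random.default_rng(3).uniform(14.0, 22.0, 1000)
print("surviving false-positive candidates:", can_cause_false_positive(bg_kp).sum())
```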
§ THE STAR, ITS DISTANCE AND SPACE VELOCITIES The object EPIC 210894022 is a relatively bright star (Table <ref>). Based on colours and proper motion measurements, <cit.> suggested that EPIC 210894022 is a G0 star and probably a member of the Hyades open cluster. <cit.> found, based on the proper motions, the object to be a likely member of the Hyades, but with incompatible photometry and radial velocities. The final conclusion of those authors was that the star is not a member of the cluster. Our observations and analysis are definitely not compatible with Hyades membership. Instead we find an old, low-metallicity, early G-type star (Sect. <ref>). The low iron abundance of -0.53±0.05 dex is not in agreement with measurements of Hyades stars, and the apparent magnitude is also not consistent with that expected for a main sequence early G star in the Hyades cluster. Radial velocity measurements of EPIC 210894022 (-16.3 km s^-1) also support that it is not a Hyades star, since such stars on average have radial velocities of about +40 km s^-1. Considering the V magnitude of 11.137 and the colour index B-V = 0.659 mag, and assuming no or very little reddening and a main sequence star of (bolometric) absolute magnitude M_V = 4.75 mag, indicative of an early G-type main sequence star, we find a lower limit to the distance of ∼190 pc. Figure <ref> shows our HARPS spectrum of the Na D doublet of EPIC 210894022, where three separate components are clearly seen in each Na line: the stellar absorption profile and two (overlapping) interstellar absorption lines at different radial velocities. This is also a strong indication that the star must lie at a distance much larger than that of the Hyades cluster (45 pc). We can correct the observed B-V = 0.659 ± 0.05 for reddening using the absorption by the intervening neutral Na I along the line of sight as a measure, and the relationship of <cit.> between the total equivalent width of Na I absorption in both the D1 and D2 resonant lines (0.50±0.05 Å) and the E(B-V) reddening. This relation predicts E(B-V) = 0.055 ± 0.014, corresponding to an A_V of 0.17±0.04, slightly less than the upper limit of 0.18 one would expect from the H I column density map of <cit.>. We can also estimate the interstellar reddening towards EPIC 210894022 following the method outlined in <cit.>. Briefly, we assume R_V = 3.1 and adopt an extinction law <cit.>. We fit the spectral energy distribution using synthetic colours calculated "ad hoc" from the BT-NEXTGEN low resolution model spectrum <cit.> with the same parameters as we find for the star (see Sect. <ref>), resulting in a value for A_V of 0.15 ± 0.03 mag, similar to what we find from the Na D lines. An A_V of 0.15 would be consistent with a distance of 210 pc if the star had the same absolute (bolometric) magnitude as the Sun. It appears, however, from our spectroscopic analysis that the star is somewhat evolved (log g = 4.15) and therefore brighter. Using the stellar parameters derived from our high resolution, high signal-to-noise spectroscopy (see Sect. <ref>), we have an effective temperature of 5 730 K, which is representative of a spectral type of G3. If we then apply the equations for mass and radius derived empirically by <cit.>, we can derive an upper limit to the intrinsic luminosity of 1.9 L_⊙. Using the reddening derived above, this translates into a maximum distance of ∼230 pc. We therefore conclude that the distance to this object is 190 pc to 230 pc, with a most likely distance of 210 ± 20 pc. Applying that distance to the velocity components of the star (see Table <ref>) demonstrates that EPIC 210894022 is a very fast-moving object, quite similar to the object Kepler-444 studied by <cit.>.
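The photometric distance bounds above follow directly from the distance modulus with extinction; a quick numerical check, using the magnitudes quoted in the text, is sketched below. The second estimate is deliberately naive (it converts the 1.9 L_⊙ luminosity limit to M_V without a bolometric correction), so it only approximately reproduces the ∼230 pc upper bound.

```python
import numpy as np

def photometric_distance_pc(m_v, abs_m_v, a_v=0.0):
    """Distance from m - M = 5 log10(d / 10 pc) + A_V."""
    return 10.0 ** (0.2 * (m_v - abs_m_v - a_v) + 1.0)

# Lower limit quoted in the text: V = 11.137, M_V = 4.75, no reddening:
print(photometric_distance_pc(11.137, 4.75))                  # ~190 pc
# Brighter, slightly evolved star (L ~ 1.9 L_sun) and A_V = 0.15:
m_v_bright = 4.75 - 2.5 * np.log10(1.9)                       # ~4.05 mag
print(photometric_distance_pc(11.137, m_v_bright, a_v=0.15))  # ~240 pc
```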
Assuming a distance of 210 pc, we find that the individual velocities with respect to the local standard of rest are (U_LSR, V_LSR, W_LSR) = (130.6±2.6, -35.2±1.5, -16.3±0.5) km s^-1. Correcting for the Sun's peculiar motion, this is equivalent to a space velocity of 143.8 ± 3 km s^-1, almost the same as the peculiar velocity found for Kepler-444. Contrary to that object, EPIC 210894022, being of higher mass, is evolving, and is therefore presumably an old object. Based on the kinematics of EPIC 210894022, and following <cit.> and <cit.>, we can calculate the probabilities of membership in the different populations of the Galaxy (a minimal version of this calculation is sketched below). We find that these are:
* Thick disk = 96.2%
* Halo = 3.8%
* Thin disk < 0.1%
Kinematically, therefore, it is most likely that EPIC 210894022 belongs to the thick disk population.
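The sketch below implements the usual Gaussian velocity-ellipsoid membership scheme. The dispersions, asymmetric drifts and local normalisations are quoted from memory after Bensby et al. (2003) and should be treated as assumptions; with them, the exact percentages will differ in detail from those listed above.

```python
import numpy as np

# (sigma_U, sigma_V, sigma_W) in km/s, asymmetric drift V_asym, and local
# normalisation X for each population -- assumed values, see lead-in.
POPS = {
    "thin disk":  dict(su=35.0,  sv=20.0, sw=16.0, vasym=-15.0,  x=0.94),
    "thick disk": dict(su=67.0,  sv=38.0, sw=35.0, vasym=-46.0,  x=0.06),
    "halo":       dict(su=160.0, sv=90.0, sw=90.0, vasym=-220.0, x=0.0015),
}

def membership(u, v, w):
    """Relative kinematic membership probabilities from Gaussian ellipsoids."""
    f = {}
    for name, p in POPS.items():
        norm = 1.0 / ((2.0 * np.pi) ** 1.5 * p["su"] * p["sv"] * p["sw"])
        f[name] = p["x"] * norm * np.exp(
            -u ** 2 / (2 * p["su"] ** 2)
            - (v - p["vasym"]) ** 2 / (2 * p["sv"] ** 2)
            - w ** 2 / (2 * p["sw"] ** 2))
    total = sum(f.values())
    return {name: val / total for name, val in f.items()}

print(membership(130.6, -35.2, -16.3))  # UVW_LSR from the text, km/s
```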
§ THE STELLAR MASS, RADIUS AND AGE OF THE SYSTEM We can infer stellar parameters, including the age, by comparing the observed parameters to those predicted by the Dartmouth Stellar Evolution Program (DSEP) <cit.>. We selected isochrones for [Fe/H] = -0.5 and [α/Fe] = +0.2 and +0.4, and compared the predicted parameters to the observed B-V, density ρ_*, and spectroscopic effective temperature and surface gravity, via a standard χ^2 function, which is minimized. Applying the correction for reddening quoted in Sect. <ref>, we plot the reddening-corrected B-V versus the density in Fig. <ref>, compared to the DSEP predictions for [Fe/H] = -0.5 and [α/Fe] = +0.4. The dark points have predicted T_eff within 50 K of the spectroscopic value of 5 730 K; the others are outside this range. The best-fit (χ^2 = 2.56) isochrone of 12.5 Gyr is plotted as the heavy curve. The stellar mass is 0.88 M_⊙, the radius is 1.23 R_⊙, and the surface gravity is 4.21, which is reasonably consistent with the parameters derived from the stellar spectrum (Sect. <ref>). The 68% confidence intervals (based on Δχ^2) for the posterior parameter values are: T_eff = 5 750-5 814 K, log g = 4.20-4.25 dex, a mass of 0.87-0.91 M_⊙, a radius of 1.13-1.33 R_⊙, and an age of 11.5 - 13 Gyr (the upper limit of the isochrone models). There is a slight tension between the spectroscopically derived parameters and the other parameters, i.e. the errors do not overlap (Fig. <ref>). Using an [α/Fe] = +0.2 grid, the minimum χ^2 increases, as does the discrepancy, and the model age increases beyond 13 Gyr. On the other hand, a slightly higher T_eff and log g would reconcile these estimates and yield a slightly younger age. Regardless, these comparisons suggest a model-dependent age of at least 10 - 11 Gyr, i.e. at least as old as the Galactic disk itself <cit.>. EPIC 210894022 has a V magnitude of 11.137 ± 0.040 (Table <ref>). Applying the interstellar extinction of 0.150 ± 0.025 mag found in Sect. <ref>, the de-reddened V magnitude is 10.987 ± 0.047 mag. In order to calculate the stellar parameters, including the age, we apply the Bayesian PARAM 1.3 tool <cit.>[http://stev.oapd.inaf.it/cgi-bin/param_1.3]. This tool accepts as input the stellar effective temperature, the metallicity, the de-reddened visual magnitude, and the parallax. Using the de-reddened V magnitude and the distance range determined in Sect. <ref> (converting those distances to parallaxes), we ran three separate models using our observed effective temperature and [Fe/H] (Sect. <ref>). We find ages between 8.8 Gyr and 11 Gyr, masses of 0.8 – 0.89 M_⊙, radii between 0.85 R_⊙ and 1.6 R_⊙, and log g between 4.46 and 3.96 (Table <ref>). We then compare with the observed log g (Sect. <ref>), in order to assess which of the 3 distances better matches the spectroscopic parameters. Our data indicate log g = 4.15 ± 0.1 dex. This is indicative of a distance of 210 pc. The age would in this case be 10.770 Gyr, and the mass of the star would be 0.9 M_⊙, but with a slightly larger radius of 1.3 R_⊙. We note here, however, that the error bars in this particular model are large. If we use the stellar parameters derived from our model of the observed spectrum (effective temperature, surface gravity and [Fe/H]; Sect. <ref>) as input to derive the mass and radius based only on the equations of <cit.>, we find higher values, a mass of 1.0 ± 0.07 M_⊙ and a radius of 1.4 ± 0.14 R_⊙. These equations of <cit.> are based on the observed high precision masses and radii of 95 eclipsing binary stars of different luminosity classes, where the masses and radii are known to better than 3%, leading to a numerical relation based on the stellar parameters. It is, however, difficult to know how well these relations specifically describe EPIC 210894022. The number of stars entering the numerical relation is small, and certainly not enough to generate "empirical" isochrones, so parameters derived in this way have to be treated with care. Specifically, the ages derived from the DSEP and PARAM 1.3 models indicate that a 1 M_⊙ star would already be evolving towards the white dwarf stage, so the mass of EPIC 210894022 must be lower. On the other hand, our observation of a lower value of log g than would be expected for a star with a mass below 1 M_⊙ indicates that the radius of EPIC 210894022 should be larger than 1 R_⊙. Based on the above, we conclude that all known facts are consistent with EPIC 210894022 being a 0.86 M_⊙ star that has begun to evolve off the main sequence, with a radius of 1.2-1.3 R_⊙, and thus with a very high age. Our models are consistent with an age ≳ 10 Gyr, most likely 10.8 Gyr or somewhat larger.
§ TRANSIT AND RV JOINT MODELING We performed the joint fit of the photometric and RV data using the code pyaneti, a Python/Fortran software suite based on Markov chain Monte Carlo (MCMC) simulations (Barragán et al., in preparation). The K2 photometry we analyzed is a subset of the K2 light curve extracted by <cit.>. We selected ∼7 hours of data points around each of the 13 transits observed by K2 and de-trended each transit using a second order polynomial fitted to the out-of-transit data points. The RV data set includes the 6 FIES and 12 HARPS-N measurements presented in Sect. <ref>. We used the equations of <cit.> to fit the transit light curves and a Keplerian orbit to model the RV measurements. We adopted the Gaussian likelihood described by the equation

ℒ = [ ∏_i=1^n_tot ( 2 π σ_i^2 )^-1/2 ] exp{ - ∑_i=1^n_tot ( D_i - M_i )^2 / (2 σ_i^2) },

where n_tot = n_rv + n_tr is the total number of RV and transit points, σ_i is the error associated with each data point D_i, and M_i is the model value associated with a given D_i.
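In log form this likelihood is a one-liner; a minimal sketch (not the actual fitting code) is:

```python
import numpy as np

def log_likelihood(data, model, sigma):
    """ln L for the Gaussian likelihood above:
    L = [prod_i (2 pi sigma_i^2)^(-1/2)] exp{-sum_i (D_i - M_i)^2 / (2 sigma_i^2)}."""
    data, model, sigma = map(np.asarray, (data, model, sigma))
    return -0.5 * np.sum(np.log(2.0 * np.pi * sigma ** 2)
                         + (data - model) ** 2 / sigma ** 2)
```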
We fit the same parameters as in <cit.> to the light curve. For the orbital period (P_orb), mid-time of first transit (T_0), impact parameter (b), planet-to-star radius ratio (R_p/R_⋆), RV semi-amplitude variation (K), and systemic velocity, we set uniform, uninformative priors, i.e., we adopted rectangular distributions over given ranges of the parameter spaces. The ranges are T_0 = [7067.9708, 7067.9786] days for the mid-time of first transit, P_orb = [5.3503, 5.3514] days for the orbital period, b = [0, 1] for the impact parameter, R_p/R_⋆ = [0, 1] for the planet-to-star radius ratio, K = [0, 1000] m s^-1 for the RV semi-amplitude variation, and γ_FIES = [-17, -15] km s^-1 and γ_HARPS-N = [-17, -15] km s^-1 for the systemic velocities as measured with FIES and HARPS-N, respectively. Given the limited number of available RV measurements and their error bars, we assumed a circular orbit (e = 0). We adopted a quadratic limb darkening law and followed the parametrization described in <cit.>. To account for the long integration time of K2 (∼30 minutes), we integrated the transit models over 10 steps. The shallow transit and K2's long cadence data do not enable a meaningful determination of the scaled semi-major axis (a_p/R_⋆) and the limb darkening coefficients u_1 and u_2. We thus set Gaussian priors on the stellar mass and radius (Sect. <ref>) and constrain the scaled semi-major axis using Kepler's third law. We also used the online applet[Available at <http://astroutils.astronomy.ohio-state.edu/exofast/limbdark.shtml>.] written by <cit.> to interpolate the limb darkening tables of <cit.> to the spectroscopic parameters of the host star (Sect. <ref>) and set Gaussian priors on the limb darkening coefficients u_1 and u_2, adopting conservative 20% error bars. We explored the parameter space with 500 chains created randomly inside the prior ranges. The chain convergence was analyzed using the Gelman-Rubin statistic. After the number of iterations required for the Markov chains to converge (the "burn-in phase"), we ran 25,000 more iterations with a thin factor of 50, so that the posterior distribution of each parameter contains 250,000 independent data points. We searched for evidence of an outer companion in the RV measurements by adding a linear trend γ̇ to the Keplerian model fitted to the RV data. The best fitting solution provides a linear trend of γ̇ = -0.217 ± 0.077 m s^-1 d^-1 with a ∼3-σ significance level. To assess whether this model is better, we have to compare it with the model without a linear trend. When comparing models, the one with the largest likelihood is to be preferred; at the same time, we have to check that we are not overfitting the number of parameters, using the Bayesian information criterion (BIC). This is defined as BIC = k ln(n) - 2 ln ℒ, where n is the number of data points and k is the number of fitted parameters; the BIC thus penalizes models with more fitted parameters. When comparing models with different numbers of parameters, the one with the smallest BIC is to be preferred <cit.>. For our RV measurements, the model with a linear trend has ln ℒ_RV = 78 and BIC_RV = -144, while the model without it gives ln ℒ_RV = 74 and BIC_RV = -139. We therefore conclude that the model with a linear trend is favored.
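The arithmetic of this comparison is easy to verify; with n = 18 RV points (6 FIES + 12 HARPS-N) the quoted BIC values are recovered if the two RV models have 4 and 3 free parameters, respectively (the parameter counts are our inference, not stated in the text):

```python
import numpy as np

def bic(ln_like, k, n):
    """Bayesian information criterion: BIC = k ln(n) - 2 ln L."""
    return k * np.log(n) - 2.0 * ln_like

# With trend (k = 4, assumed): ~ -144; without trend (k = 3): ~ -139.
print(bic(78.0, 4, 18), bic(74.0, 3, 18))  # smallest BIC -> trend favored
```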
The final parameters are given in Table <ref>. They are defined as the median and 68% credible interval of the posterior distribution of each parameter. We show the folded transit light curve in Fig. <ref> and the RV curves in Figs. <ref> and <ref>.
§ ORBITAL DYNAMICS The mass, orbital period and eccentricity of the body responsible for the RV trend can be constrained by requiring that the system is dynamically stable: bodies too close, too massive, or on too eccentric orbits would result in an unstable system. In Fig. <ref> we show, for given periods of the outer body, the allowed mass ranges that are (a) large enough to generate the observed RV trend with P > 120 d (above the solid lines); and (b) small enough to avoid dynamical instability (below the dotted lines). For an outer body on a circular orbit we use the criterion of <cit.>, while for eccentric outer bodies we use <cit.>. We show results for four values of the outer body's eccentricity. If the outer body is on a circular orbit, it must be a gas giant planet or more massive, and the system is stable even for stellar-mass companions. If it is on a highly-eccentric orbit, gas giant planets at P ≲ 1 yr are ruled out by dynamical stability. In this case, the outer companion may be a lower-mass planet on a close orbit (P ≲ 1 yr) or a gas giant on a wider orbit (P > 2 yr). Note that an eccentric orbit permits lower masses for the outer body, but this requires a specific alignment of the orbit with respect to the observer (an edge-on orbit with the pericentre pointing along the line of sight). In general, one can also place limits on what additional planets could exist in a system between two known ones. For example, if the second planet is a Jupiter at 1 AU on a circular orbit, the separation is roughly 20 mutual Hill radii, meaning that one (or more) additional planets could be accommodated between the two planets. We include a line in Fig. <ref> that shows the final masses and orbital periods of planets formed in the planet formation model of <cit.>. This model makes use of the accelerated core accretion rates obtained through pebble accretion <cit.> and incorporates planet migration, meaning that the planets move through the disc as they form. Here, we use a simple power-law disc model (with an alpha viscosity parameter of 0.001) for the surface density and temperature, following <cit.> for Sun-like stars, to calculate the evolution of the planets. We also make use of the metallicity measurements and evolve our planetary growth using a metallicity of [Fe/H] = -0.5. The dashed line marks the final mass of planets as a function of their period, as predicted by our simulations of planet formation. The vertical part of the line indicates that planets spanning a broad range of masses have migrated to the inner edge of the disc, where they stop their accretion. Our model here predicts that the core of EPIC 210894022b formed around 6 AU, i.e., beyond the water ice line. The results from the simulations also indicate that the potential other companion in the system should be between 20 - 50 Earth masses, provided the planets evolved independently (i.e., they did not influence each other's growth and orbits). Follow-up observations of the planetary system can thus provide a deeper insight into the formation process of the planets in this system.
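The "roughly 20 mutual Hill radii" estimate above can be checked with a few lines. In the sketch below the inner planet's mass (~9 Earth masses) is an assumption, since its measured value is elided in the text; with these inputs the separation comes out at ~25 mutual Hill radii, of the same order as quoted.

```python
import numpy as np

def mutual_hill_separation(a1, a2, m1, m2, mstar):
    """Orbital separation in units of the mutual Hill radius,
    R_H = [(m1 + m2) / (3 M_*)]^(1/3) * (a1 + a2) / 2  (all masses in M_sun)."""
    r_hill = ((m1 + m2) / (3.0 * mstar)) ** (1.0 / 3.0) * 0.5 * (a1 + a2)
    return (a2 - a1) / r_hill

M_EARTH, M_JUP = 3.0e-6, 9.54e-4                       # in solar masses
a_b = (0.88 * (5.351 / 365.25) ** 2) ** (1.0 / 3.0)    # Kepler III: ~0.057 au
print(mutual_hill_separation(a_b, 1.0, 9 * M_EARTH, M_JUP, 0.88))  # ~25
```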
§ DISCUSSION AND SUMMARY The EPIC 210894022 system is demonstrated to be a rare and important object among the plethora of transiting exoplanet systems that have been discovered by space missions in the last decade. Using adaptive optics imaging and statistical methods, and also detecting the RV signature of this planet, we have confirmed the presence of a planet in a 5.35 d orbit as the origin of the K2 transit signature. We find that the planet has a mass of . The periodic RV signal is overlaid on a trend that we identify with a second, more massive object. The evidence for the planet is strong enough for us to say that it is confirmed, while we would require more data in order to also confirm the second body. We believe this planet to be extremely old, for the following reasons: a) the low but α-rich metal content of EPIC 210894022; b) the very high space velocity of the star, 145 km s^-1, making it a likely member of the thick disk population; c) the modelling of the measured stellar parameters in Sect. <ref>. The best fit to the data is for a 0.86 M_⊙ star with a most likely age of 10.8 ± 1.5 Gyr. The star appears to be beginning to move off the main sequence, as indicated by both the low value of log g and the model radii, which are most likely around 1.25 ± 0.2 R_⊙. Different populations in the Galaxy can be traced through the abundances of the α elements O, Mg, Si, S, Ca, and Ti. In this context we note that there are similarities between EPIC 210894022 and the planet host star Kepler-444. The latter object is a metal-poor, low-mass solar-like star and one of the brightest stars to be observed with Kepler. By following this object during the 4 years of that mission, <cit.> succeeded in detecting 5 transiting sub-Earth-size planets in a compact system. They could also record the asteroseismic signature of the host star. Interpreting the seismic data allowed these authors a high precision determination of the mass (0.76 M_⊙), radius (0.75 R_⊙) and age (11.23 ± 1 Gyr) of the host star. Kepler-444 has very similar space velocities (see Sect. <ref>) and α element abundances as EPIC 210894022 does, which indicates that both stars are bona-fide members of the thick disk population. It has also been suggested that Kepler-444 is a member of the Arcturus stream, a group of older iron-poor stars that possibly originates from outside the Milky Way Galaxy. Data exist on a handful of other small (super-Earth or Neptune class) planets for which there are also indications of high age. Kepler-10b and c <cit.>, the first small planets confirmed by the Kepler mission, have been determined (asteroseismologically) to have an age of 11.9 ± 4.5 Gyr. This system has been suggested to belong to the halo population <cit.>. The metallicity of the star is, however, higher than that of EPIC 210894022, at [Fe/H] = -0.15 ± 0.03. Also, the error bars on the age are large, and no proper motions are available to kinematically determine the population of the star. The recently confirmed Kepler-510 system <cit.> has a host star with a metallicity of [Fe/H] = -0.35 ± 0.1 and an asteroseismic age of 11.8 Gyr <cit.>. While the planet (orbital period 19.6 d) has a radius of ∼2.2 R_⊕, no mass has as yet been determined for this object. We point out in this context that future releases of the Gaia astrometric catalogue will alleviate this situation and allow for a kinematical determination of old host star populations. There is also the case of Kapteyn's star (GJ 191, LHS 29 or HD 33793), an M1 sub-dwarf <cit.> with [Fe/H] = -0.86 ± 0.05. It is kinematically classified as a halo star and is in fact the closest such object, at a distance of only 3.91 ± 0.01 pc. Two planets were detected in radial velocity measurements <cit.>, with periods of 48.6 d and 121.5 d and m_p sin i of 4.8 and 7.0 M_⊕, respectively. The age of the star is very likely older than 10 Gyr because of the low metallicity and the kinematics, but exactly how old it is cannot be determined at this time. <cit.> used a somewhat different data set, almost as large as that of <cit.>, and concluded that the RV signature of Kapteyn b was very likely caused by an activity signal coming from the star. <cit.> analysed this latter data set and came to the conclusion that there is no activity signal, but that instead the bona-fide planet b is most likely a real planet. This demonstrates the difficulty of working at the limit of the sensitivity of one's instrumentation. While only the three objects Kepler-444, Kepler-510 and EPIC 210894022 have both confirmed planets and relatively well secured ages, very old stars appear to be as likely to possess planetary systems as younger stars, a not too surprising result. It is, however, more interesting to ask what kind of planets form in early low-metallicity systems, as compared to more recently formed systems, where the metallicity would generally be higher. It is clear that EPIC 210894022 and its planet(s) are a welcome addition to Kepler-444 and Kepler-510.
That EPIC 210894022 is abundant in α elements is interesting, since the bulk of rocky planets consists of those elements <cit.>. Together with the 5 planets in the Kepler-444 system, Kepler-510, and possibly the other exoplanet systems described above, EPIC 210894022b and the possible companion suggested here are among the oldest planets known to date. Assuming a radius of , the planet has an average density of g cm^-3, placing it, as far as geometrical size is concerned, in the same class as CoRoT-7b and Kepler-10b. In this context it is indeed a super-Earth, and the planetary density appears similar to that of Venus and the Earth itself. The errors in ρ_p are, however, at the moment large enough to allow compositions that deviate from being truly "Earth-like", and more observations are required. The planet would have formed together with a star having a low metallicity, and, more importantly, at a very early epoch of our Galaxy. Although EPIC 210894022 is also iron-poor, it is moderately α-rich, in common with the planet host Kepler-444, which could be favourable for the formation of an Earth-like body. But we also have indications of a more massive planet in the same system. A number of studies have so far pointed out a correlation whereby metal-rich stars are more likely to harbour gas giants (e.g. <cit.>), while this correlation appears to be missing for the sample of small planets discovered by Kepler <cit.>. Having formed 5-6 Gyr before the birth of the Solar System, EPIC 210894022 and its system carry information about the early stages of stellar and planetary formation in the Galaxy. It would therefore be very interesting to continue to study this system, primarily to confirm the presence of the second, more massive planet and to find its period. Finding more systems similar to EPIC 210894022 and Kepler-444 would allow us to begin to determine the implications for planetary formation as a function of galactic age. We acknowledge the very constructive comments of an anonymous referee, which have improved our paper. We thank the McDonald, NOT, TNG and Subaru staff members for their unique and superb support during the observations. Based on observations obtained a) with the Nordic Optical Telescope (NOT), operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos (ORM) of the Instituto de Astrofísica de Canarias (IAC); b) with the Italian Telescopio Nazionale Galileo (TNG), also operated at the ORM (IAC) on the island of La Palma by the INAF - Fundación Galileo Galilei. This research made use of data acquired with the Carlos Sánchez Telescope, operated at Teide Observatory on the island of Tenerife by the Instituto de Astrofísica de Canarias. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2013-2016) under grant agreement No. 312430 (OPTICON). M.F. and C.M.P. gratefully acknowledge the support of the Swedish National Space Board. M.F. acknowledges the hospitality of the Instituto de Astrofísica de Canarias, where the paper was written during a 2-month stay under a Jesus Serra Foundation fellowship. D.G. gratefully acknowledges the financial support of the Programma Giovani Ricercatori – Rita Levi Montalcini – Rientro dei Cervelli (2012) awarded by the Italian Ministry of Education, Universities and Research (MIUR). W.D.C., M.E. and P.M.Q. were supported by NASA grants NNX15AV58G, NNV16AE70G and NNX16AJ11G to The University of Texas at Austin. Sz.Cs.
thanks the Hungarian National Research, Development and Innovation Office for the NKFIH-OTKA K113117 grant. This work has made use of the SME package, which benefits from the continuing development work by J. Valenti and N. Piskunov, and we gratefully acknowledge their continued support. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna <cit.>.
http://arxiv.org/abs/1704.08284v1
{ "authors": [ "Malcolm Fridlund", "Eric Gaidos", "Oscar Barragán", "Carina M. Persson", "Davide Gandolfi", "Juan Cabrera", "Teruyuki Hirano", "Masayuki Kuzuhara", "Sz. Csizmadia", "Grzegorz Nowak", "Michael Endl", "Sascha Grziwa", "Judith Korth", "Jeremias Pfaff", "Bertram Bitsch", "Anders Johansen", "Alexander J. Mustill", "Melvyn B. Davies", "Hans Deeg", "Enric Palle", "William D. Cochran", "Philipp Eigmüller", "Anders Erikson", "Eike Guenther", "Artie P. Hatzes", "Amanda Kiilerich", "Tomoyuki Kudo", "Phillip MacQueen", "Norio Narita", "David Nespral", "Martin Pätzold", "Jorge Prieto-Arranz", "Heike Rauer", "Vincent Van Eylen" ], "categories": [ "astro-ph.EP", "astro-ph.SR" ], "primary_category": "astro-ph.EP", "published": "20170426183310", "title": "EPIC 210894022b - A short period super-Earth transiting a metal poor, evolved old star" }
We characterize the contribution from accreted material to the galactic discs of the Auriga Project, a set of high resolution magnetohydrodynamic cosmological simulations of late-type galaxies performed with the moving-mesh code AREPO. Our goal is to explore whether a significant accreted (or ex-situ) stellar component in the Milky Way disc could be hidden within the near-circular orbit population, which is strongly dominated by stars born in-situ. One third of our models shows a significant ex-situ disc, but this fraction would be larger if constraints on orbital circularity were relaxed. Most of the ex-situ material (≳ 50%) comes from single massive satellites (> 6 × 10^10 M_⊙). These satellites are accreted with a wide range of infall times and inclination angles (up to 85^∘). Ex-situ discs are thicker, older and more metal-poor than their in-situ counterparts. They show a flat median age profile, which differs from the negative gradient observed in the in-situ component. As a result, the likelihood of identifying an ex-situ disc in samples of old stars on near-circular orbits increases towards the outskirts of the disc. We show three examples that, in addition to ex-situ discs, have a strongly rotating dark matter component. Interestingly, two of these ex-situ stellar discs show an orbital circularity distribution that is consistent with that of the in-situ disc. Thus, they would not be detected in typical kinematic studies. Galaxy: disc – Galaxy: evolution – galaxies: evolution – galaxies: interactions – galaxies: kinematics and dynamics – methods: numerical.
§ INTRODUCTION According to the currently favored cosmological model, Λ cold dark matter (ΛCDM), galaxies like our own merge and interact with companions of widely different masses throughout their history <cit.>. The quantification and characterization of the merger activity that a galaxy has undergone can thus be used to constrain galaxy formation models. Mergers and interactions with very low mass satellites (masses ≲ 10^7 M_⊙) are difficult to detect, since such satellites are expected to possess only a small number of stars, if any <cit.>. The detection of truly "dark" satellites would be extremely rewarding, as it could put stringent constraints on the nature of DM <cit.>. Nonetheless, an undisputed detection of this type of substructure is yet to be made. The identification of mergers associated with intermediate mass satellites (i.e. masses < 10% of the host mass) is significantly less challenging. As such satellites interact with the host gravitational potential, they are tidally disrupted, leaving behind debris in the form of stellar streams. Depending on the time since disruption, this debris can be detected either in real space, in the form of extended cold streams (recent disruption) <cit.>, or as clumps in the space of quasi-conserved integrals of motion (well after disruption) <cit.>. In the Milky Way several streams have been identified and, in some cases, their progenitors have been characterized. However, a robust quantification of our Galaxy's merger activity is still lacking. The main reason for this has been the lack of sufficiently large and accurate full phase-space catalogs that could unveil debris from early accretion events, which would generally be deposited in the inner Galactic regions.
Thanks to the astrometric satellite Gaia <cit.>, in combination with previous and upcoming spectroscopic surveys <cit.>, this will soon be possible <cit.>. Indeed, the first Gaia data release has started to uncover previously unknown substructure in the Galactic stellar halo <cit.>. Isolating debris from more massive mergers (i.e. masses ≥ 10% of the host mass) is, however, more challenging. As discussed by <cit.>, the reason for this is two-fold. First, dynamical friction is most efficient for these massive objects: they are quickly dragged to small radii, where the mixing time scales are short. Second, debris from these satellites is kinematically hotter than that from smaller mass objects and thus mixes faster. Relatively massive mergers can be detected indirectly, by searching for perturbations in the local velocity field of the Galactic disc, both in-plane <cit.> and perpendicular to the Galactic plane <cit.>. Substructure in the local disc velocity field has already been identified <cit.>. However, perturbations from the Galactic bar or spiral arms, as well as debris from significantly less massive satellites, may explain much of this substructure <cit.>. The addition of extra dimensions to the analysis, based on chemical abundance patterns and stellar ages, is thus crucial to isolate debris coming from individual progenitors <cit.>. Indeed, <cit.> (R15) analyzed a sample of ∼5000 stars from the Gaia-ESO survey with full phase-space, [Mg/Fe] and [Fe/H] measurements to search for signatures of the most massive merger events our Galaxy has undergone. Their efforts were focused on the identification of an accreted or ex-situ disc component. Such an ex-situ disc is expected to arise during massive mergers at low inclination angles with respect to the plane of the main disc <cit.>. A DM disc may also form at the same time, which could have important consequences for direct DM detection experiments <cit.>. To select ex-situ disc star candidates, R15 used a chemodynamical template first introduced by R14. As they discussed, for [Fe/H] > -1.3, stellar populations of surviving dwarf galaxies generally have [Mg/Fe] < 0.3, which is lower than for typical MW stars at the same [Fe/H]. To further isolate ex-situ disc candidates, R14 focused on stars co-rotating with the disc on orbits with significant eccentricity. Debris from massive mergers is expected to lie preferentially on such orbits. Both R14 and R15 found no evidence of a significant prograde ex-situ disc component of this type. They concluded that the MW has no significant ex-situ stellar disc, and thus possesses no significant DM disc formed by a merger. In this paper we study the formation of ex-situ stellar discs in the Auriga simulations, a set of high-resolution magnetohydrodynamic simulations of disc galaxy formation from ΛCDM initial conditions <cit.>. Our goal is to explore whether a significant ex-situ stellar disc component could be hidden within the near-circular orbit population, which is strongly dominated by in-situ stars. In Section <ref> we introduce the Auriga suite. We define our ex-situ discs in Section <ref> and quantify the number of models with a significant component of this kind in Section <ref>. In Section <ref> we show how these ex-situ discs are formed, and we characterize their main stellar population properties in Section <ref>. In Section <ref> we discuss how these discs might be detected with upcoming observational campaigns. We discuss the implications of our findings for possible ex-situ stellar and DM discs in our Galaxy in Section <ref>.
§ THE AURIGA SIMULATIONS
In this paper we analyze a subsample of the cosmological magnetohydrodynamic simulations of the Auriga suite <cit.>. In what follows we summarize the main characteristics of these simulations. For a more detailed description we refer the reader to GR17. The Auriga suite is composed of 30 high-resolution cosmological zoom-in simulations of the formation of late-type galaxies within Milky Way-sized haloes. The haloes were selected from a lower-resolution dark-matter-only simulation from the Eagle Project <cit.>, a periodic box of side 100 Mpc. Each halo was chosen to have, at z = 0, a virial mass in the range of 10^12 – 2 × 10^12 M_⊙ and to be more distant than nine times the virial radius from any other halo of mass more than 3% of its own mass. Each halo was run at multiple resolution levels. The typical dark matter particle and gas cell mass resolutions for the simulations used in this work (Auriga level 4) are ∼ 3 × 10^5 M_⊙ and ∼ 5 × 10^4 M_⊙, respectively. The gravitational softening length used for stars and DM grows with scale factor up to a maximum of 369 pc, after which it is kept constant in physical units. The softening length of gas cells scales with the mean radius of the cell, but is never allowed to drop below the stellar softening length. A resolution study across three resolution levels (GR17) shows that many galaxy properties, such as surface density profiles, orbital circularity distributions, star formation histories and disc vertical structures, are already well converged at the resolution level used in this work. The simulations were carried out using the N-body + moving-mesh magnetohydrodynamics code AREPO <cit.>. A ΛCDM cosmology, with parameters Ω_m = Ω_dm + Ω_b = 0.307, Ω_b = 0.048, Ω_Λ = 0.693, and Hubble constant H_0 = 100 h km s^-1 Mpc^-1 = 67.77 km s^-1 Mpc^-1, was adopted. The baryonic physics model used in these simulations is a slightly updated version of that in <cit.>. It follows a number of processes that play a key role in the formation of late-type galaxies, such as gas cooling/heating, star formation, mass return and metal enrichment from stellar evolution, the growth of supermassive black holes, and feedback both from stellar sources and from black hole accretion. In addition, magnetic fields were implemented as described in <cit.>. The effect of these magnetic fields on the global evolution of the Auriga galaxies is discussed in detail in <cit.>. The parameters that regulate the efficiency of each physical process were chosen by comparing the results obtained in simulations of cosmologically representative regions to a wide range of observations of the galaxy population <cit.>. From now on, we will refer to these simulations as Au-i, with i enumerating the different initial conditions. We will focus on the subset of 26 models that, at the present day, show a well-defined stellar disc. The main properties of each simulated galaxy are listed in Table <ref>. The disc/bulge decomposition is made by simultaneously fitting exponential and Sersic profiles to the face-on stellar surface density profiles. A detailed description of how these parameters were obtained is given in GR17.
§ EX-SITU DISC DEFINITION
The goal of this work is to characterize the contribution from ex-situ formed (accreted)[In this work the terms ex-situ and accreted are used interchangeably.] material to the Auriga stellar discs. In what follows we will designate as ex-situ all star particles formed within the potential well of a self-bound satellite galaxy prior to its disruption.
At the present day, such star particles can either belong to the main host, after being tidally stripped from their progenitor, or they can still be bound to this progenitor. Conversely, following <cit.>, all particles formed within the potential well of the main host halo will be referred to as in-situ star particles. Note that, contrary to <cit.>, this definition includes star particles formed within the host virial radius out of gas recently stripped from satellites. These stellar particles are not found in the galactic disc and thus do not affect our results. To define star particles that are in the galactic disc we perform a kinematic decomposition based on the circularity parameter, ϵ. Following <cit.>, this parameter is defined as
ϵ = L_z / L_z^max(E),
where L_z is the Z-component of the angular momentum of a given star particle and L_z^max(E) is the maximum angular momentum allowed for its orbital energy, E. Before computing the star particle's angular momentum, each galaxy is re-oriented as described in GR17, i.e., the Z-axis direction is defined through the semi-minor axis of the moment of inertia tensor of the star particles within 0.1 R_200. All star particles that satisfy i) ϵ ≥ 0.7, ii) |Z| < 10 kpc and iii) R < R_opt are considered to be disc star particles, independently of birth location. Here R is the galactocentric cylindrical distance and R_opt is the galactic optical radius, defined as the 25 mag arcsec^-2 B-band isophotal radius <cit.>. To minimize contamination from stellar halo populations, for significantly lopsided galaxies R_opt is defined as the minimum cylindrical radius where μ_B = 25 mag arcsec^-2.
§ EX-SITU DISC QUANTIFICATION
In this Section we quantify the number of Auriga galaxies that possess a significant ex-situ disc. To identify such discs we show in Figure <ref> the ratio of total ex-situ to in-situ disc mass, η = M_exsitu^tot / M_insitu^tot, for all our galactic discs. Interestingly, 31% (8) of our disc sample show a significant ex-situ disc, defined as η ≳ 0.05 (dashed line). The two largest ex-situ discs, Au-8 and Au-20, have η of ∼ 0.15 and ∼ 0.3, respectively. In general, the value of η rapidly decreases as we increase the circularity threshold from ϵ = 0.7 to 0.9. This is not surprising, since the orbital eccentricity of ex-situ disc stars is expected to be, on average, larger than that of their in-situ counterparts <cit.>. Figure <ref> shows the surface density ratio, μ = Σ_exsitu / Σ_insitu, as a function of galactocentric distance. To allow a direct comparison, distances are normalized by the corresponding R_opt. Note that, in general, μ has a rising profile. This indicates that the relative contribution of ex-situ material to the disc rises as we move towards the outer disc regions. In the most extreme case, Au-14, μ varies by approximately two orders of magnitude within R_opt. Two-thirds of our disc sample show either a very small or a negligible fraction of ex-situ material. Thus, in what follows, we will focus on the subsample of galaxies with significant ex-situ discs (η > 0.05).
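To make the selection and the ratio η concrete, the following minimal sketch (our own illustration, not the Auriga pipeline itself; the particle arrays, the interpolant for L_z^max(E) and the ex-situ flags are assumed inputs) computes circularities and the ex-situ to in-situ disc mass ratio:

```python
import numpy as np

def circularity(pos, vel, energy, Lz_max_of_E):
    """Circularity eps = L_z / L_z^max(E) for star particles.

    pos, vel : (N, 3) arrays in the frame where Z is the disc axis;
    energy   : (N,) orbital energies;
    Lz_max_of_E : callable giving the maximum L_z at each energy,
                  e.g. built by interpolating over circular orbits.
    """
    Lz = pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]  # Z-component of r x v
    return Lz / Lz_max_of_E(energy)

def disc_mass_ratio(pos, vel, energy, mass, is_exsitu, Lz_max_of_E,
                    R_opt, eps_cut=0.7, z_cut=10.0):
    """eta = M_exsitu / M_insitu for disc particles, i.e. those with
    eps >= eps_cut, |Z| < z_cut kpc and cylindrical R < R_opt."""
    eps = circularity(pos, vel, energy, Lz_max_of_E)
    R = np.hypot(pos[:, 0], pos[:, 1])
    disc = (eps >= eps_cut) & (np.abs(pos[:, 2]) < z_cut) & (R < R_opt)
    return mass[disc & is_exsitu].sum() / mass[disc & ~is_exsitu].sum()
```

Raising eps_cut from 0.7 to 0.9 in such a computation directly reproduces the decrease of η noted above.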
Figure <ref> shows the B-band surface brightness maps of these galaxies, obtained considering only ex-situ star particles. In this figure we include all ex-situ star particles that at the present day belong to the main host, independently of their circularity parameter. Thus, stellar populations that belong to the galactic spheroid, or stellar halo, are also included. In general, the ex-situ material shows a mildly oblate distribution, flattened along the Z-direction <cit.>. Only galaxies Au-2, Au-8 and Au-24 show a significantly flattened distribution that visually resembles the structure expected for a stellar disc. In some cases, such as Au-14, Au-19 and Au-20, clear signatures of cold substructure can be observed. This is debris from recent or ongoing disruption events, which crosses the inner galactic regions but has not yet had time to fully mix. Figure <ref> shows B-band surface brightness maps of the same galaxies, now obtained using only ex-situ star particles that satisfy the condition ϵ ≥ 0.7. This figure exposes, in all cases, a clear ex-situ disc component composed of star particles on near-circular orbits. As discussed before, some of these ex-situ discs (e.g. Au-14 and Au-19) show several spatially coherent stellar streams, associated with recent accretion events. On the other hand, discs such as Au-2 show a very smooth spatial distribution. As we show below, the Au-2 ex-situ disc formed as a result of two ∼ 1:10 mergers that took place 8 Gyr ago, giving the resulting debris enough time to fully mix. In Figure <ref> we show the circularity (ϵ) distribution of all star particles (black lines) that are located within our spatial disc selection box, i.e. R < R_opt and |Z| < 10 kpc, obtained from galaxies with significant ex-situ disc components. The blue and red lines show the contribution from the in-situ and ex-situ stellar populations, respectively. Interestingly, in half of our sample the circularity distribution of the ex-situ component peaks at values of ϵ < 0.7 (Au-4, Au-7, Au-19 and Au-20). However, for the remaining half it peaks at ϵ ≥ 0.7. Note that, as discussed in Section <ref>, previous studies that attempted to identify an ex-situ component in the Milky Way disc have focused their analysis on stellar samples with 0.2 < ϵ < 0.8 (R14, R15). This selection criterion is clearly justified by the complexity of detecting an ex-situ component on in-situ dominated near-circular orbits (ϵ > 0.7). However, this figure shows that most stars of a hypothetical accreted Milky Way disc could be buried in this region.
§ FORMATION OF EX-SITU DISCS
In this section we explore how and when these ex-situ discs are formed. In particular, we are interested in characterizing the number, the orbital properties, and the total and stellar mass spectrum of the satellites that have contributed to their formation. In Figure <ref> we decompose the total ex-situ disc mass into the contributions of different accreted satellites. Satellites are ranked according to their fractional mass contribution in decreasing order (i.e. the larger the contributor, the smaller the rank assigned). Interestingly, we find that only a few satellites are needed to account for the bulk of the mass of the ex-situ discs. The number of significant contributors, defined as the number of satellites required to account for 90% of the mass, ranges from 1 to 3, with a median of 2. This is smaller than the number of significant contributors that build up the accreted stellar haloes of these galaxies, which ranges between 3 and 8, with a median of 5 <cit.>. Note as well that, in all cases, there is a dominant contributor that accounts for ≳ 50% of the total mass. In Table <ref> we list some of the main properties of the most significant contributor to each ex-situ disc. Second contributors are listed in those cases where their contribution is also significant (≳ 20% of the ex-situ disc mass). In several cases, we find that the most significant contributors are massive satellites.
Their peak total masses, i.e. the maximum instantaneous mass that these satellites have reached, are M_sat^peak > 10^11 M_⊙. Thus, they are associated with large merger events. However, we also find cases in which these satellites have relatively low mass. The peak masses of the significant contributors range from 0.6 × 10^11 M_⊙ (Au-24) to 5.3 × 10^11 M_⊙ (Au-4). As expected, second significant contributors are associated with less massive satellites. We note that, in all these cases, M_peak is achieved just prior to infall through the host virial radius. Table <ref> also lists the lookback time, t_cross, at which each satellite crosses the host virial radius, R_vir, for the first time. The dispersion in t_cross values is large, ranging from early infall events with t_cross = 9.1 Gyr (Au-14) to very late infall events with t_cross = 3.1 Gyr (Au-4). Au-4 is an interesting case in which the host undergoes a major merger of mass ratio M_sat / M_host ≈ 0.67 approximately 3 Gyr ago. The host disc is strongly perturbed but survives the interaction and quickly regrows (within 2 Gyr) to reach a present-day optical radius of R_opt = 24.5 kpc. Finally, Table <ref> lists the satellite's infall angle, θ_infall, defined as the angle between the disc angular momentum and that of the satellite's orbit, both measured at t_cross. Again we find a large spread in θ_infall, with values that range from 15^∘ (Au-20) to 85^∘ (Au-7). Significant ex-situ discs are expected to form from merger events in which massive satellites are accreted at low grazing angles, such as Au-20. Ex-situ discs formed as a result of mergers with massive satellites that are accreted with large θ_infall are more interesting, and thus we study them in more detail. In the top panels of Figure <ref> we show with red (blue) lines the time evolution of the angle between the disc angular momentum vector and the orbital angular momentum vector of the most significant (second most significant) contributor. Only four representative examples are shown, but similar results are found for the remaining galaxies. It is interesting to notice how these massive satellites start to align with the host disc as soon as, or even before, they cross the host R_vir. This is particularly clear in Au-2. The angles between the disc and the two most significant contributors before they cross R_vir, ∼ 9 Gyr ago, are ∼ 60^∘ and 70^∘, respectively. For reference, we show in the bottom panels of Fig. <ref> the time evolution of the satellites' galactocentric distances, R_s1 and R_s2, and of the host virial radius, R_vir. In many cases, it takes only ∼ 2 Gyr for these satellites to become almost perfectly aligned with the host disc. A very similar situation can be seen in the remaining examples. Note that the rapid alignment between these two angular momentum vectors is due not only to changes in the orbits of the satellite galaxies, but also to a strong response of the host galactic discs. This can be seen in the top panels of Fig. <ref>, where we show, with black lines, the time evolution of the disc's angular momentum vector orientation with respect to its orientation at the present day. In general, the discs start to tilt rapidly as soon as the satellites cross R_vir, and this continues until the satellites are fully merged. For example, the Au-2 disc tilts by ∼ 60^∘ during the merger of the two most significant contributors to its ex-situ disc. As discussed in <cit.>, even low-mass satellites that penetrate the outer regions of a galaxy can significantly perturb and tilt a host galactic disc. This is due not only to direct tidal perturbation <cit.> but also to the generation of asymmetric features in the DM halo that can be efficiently transmitted to its inner regions, thereby affecting the deeply embedded disc.
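The infall angle θ_infall used here is a simple operation on two vectors; a short sketch (ours, with the disc and orbital angular momentum vectors at t_cross assumed as inputs) reads:

```python
import numpy as np

def infall_angle(L_disc, L_orbit):
    """Angle (degrees) between the host disc angular momentum and the
    orbital angular momentum of an infalling satellite, both measured
    at the virial-radius crossing time t_cross."""
    cos_theta = np.dot(L_disc, L_orbit) / (
        np.linalg.norm(L_disc) * np.linalg.norm(L_orbit))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example: a satellite on a nearly polar orbit relative to the disc.
print(infall_angle(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.2])))
```

Evaluating the same quantity at every snapshot, rather than only at t_cross, is what produces the alignment tracks shown in the top panels of Fig. <ref>.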
Note that the two most significant contributors to the ex-situ disc in Au-2 are accreted onto the host as a group, but they are not bound to each other. This can be seen from their very similar t_cross and θ_infall values (Table <ref>). In the remaining galaxies with two significant contributors, the satellites are accreted independently.
§ CHARACTERIZATION OF EX-SITU DISCS
In this Section we characterize the main stellar population properties and the vertical distribution of each ex-situ disc. The left panels of Figure <ref> show median [Fe/H] profiles as a function of galactocentric distance. The solid lines are the profiles obtained from the ex-situ stellar populations, whereas the dashed lines are those obtained from their in-situ counterparts. In all cases, the in-situ [Fe/H] profiles show clear negative radial gradients, associated with the inside-out formation of the main disc <cit.>. In general, ex-situ discs also exhibit [Fe/H] gradients. These are a reflection of the [Fe/H] gradients of the most significant contributors prior to their disruption. Note that the galaxy with the most metal-rich ex-situ disc component and the steepest [Fe/H] gradient is Au-4. As previously discussed, the largest contributor to this ex-situ disc is a ∼ 5.3 × 10^11 M_⊙ satellite that crossed the host R_vir just ∼ 3 Gyr ago. A detailed analysis of the chemical evolution of the Auriga galaxies will be presented in a forthcoming work. Here we are mainly interested in the differences between the in-situ and ex-situ [Fe/H] profiles. We find that, in all cases and at all radii, ex-situ discs are significantly more metal-poor than their in-situ counterparts. Differences in median metallicity can be as large as ∼ 0.5 dex (Au-2, Au-19). The middle panels of Figure <ref> show the median age profiles as a function of galactocentric distance for both stellar populations. In all cases, ex-situ discs (solid lines) show approximately flat age profiles, reflecting the median age of the stellar populations of the most significant satellite contributor. Note that ex-situ discs with younger populations accreted their most significant contributor later on (Au-4, Au-7 and Au-20). Median ages of ex-situ disc populations range between 6 and 9 Gyr. Conversely, the median age profiles of the in-situ stellar discs show, in general, negative gradients, reflecting their inside-out formation. In addition, this component is significantly younger than the ex-situ disc, with differences in median age as large as ∼ 6 Gyr (e.g. Au-14). Finally, the right panels of Figure <ref> show the ratio of the ex-situ to in-situ mass-weighted vertical velocity dispersions, σ_Z, as a function of galactocentric distance. This quantity provides a measure of disc thickness. In general, we find ex-situ discs to be thicker than the in-situ components, with typical values of σ_Z^exsitu / σ_Z^insitu ∼ 1.5. Some galaxies show a clear negative gradient in this ratio in the outer disc regions (R ≳ 0.5 R_opt, e.g. Au-4, Au-7 and Au-14). As shown by GR17 and <cit.>, the in-situ component of these Auriga discs shows strong flaring, warping and bending in the outer regions. This causes σ_Z^insitu to rise steeply at galactocentric distances where the disc stops being strongly cohesive due to its weak self-gravity.
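The binned comparison underlying these profiles is straightforward; a small sketch (ours; per-particle radii and the quantity of interest, such as [Fe/H] or age, are assumed inputs) is:

```python
import numpy as np

def median_profile(R, quantity, R_opt, nbins=20):
    """Median of `quantity` in radial bins of R / R_opt, as used for the
    [Fe/H] and age profiles; returns bin centres and binned medians."""
    edges = np.linspace(0.0, 1.0, nbins + 1)
    x = R / R_opt
    centres = 0.5 * (edges[:-1] + edges[1:])
    med = np.full(nbins, np.nan)
    for i in range(nbins):
        sel = (x >= edges[i]) & (x < edges[i + 1])
        if sel.any():
            med[i] = np.median(quantity[sel])
    return centres, med

# e.g. compare the two populations for one galaxy:
# r, feh_ex = median_profile(R[is_exsitu], feh[is_exsitu], R_opt)
# r, feh_in = median_profile(R[~is_exsitu], feh[~is_exsitu], R_opt)
```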
The significant differences that the ex-situ and in-situ disc stellar populations exhibit could be used to define indicators with which to identify ex-situ discs. We discuss this further in Section <ref>.
§ IDENTIFICATION OF EX-SITU DISCS
In the previous Section we have shown that the ex-situ disc stellar populations are older and more metal-poor than their in-situ counterparts. Here we explore how these two characteristics could be used to search for this galactic component in our own Galactic disc. In Figure <ref> we show the ex-situ to in-situ mass ratio, ν, obtained from subsets of stellar particles located in different regions of the age–[Fe/H] space. As before, we only show four representative examples, but similar results are found for the remaining galaxies with significant ex-situ discs. To generate these two-dimensional histograms we first selected all disc stellar particles (recall, ϵ ≥ 0.7, R < R_opt and |Z| < 10 kpc) located within three different cylindrical shells defined as (0.2 ± 0.1, 0.5 ± 0.1, 0.9 ± 0.1) R_opt. On each cylinder we gridded the (Age, [Fe/H]) space with an N × N regular mesh of bin size (0.65 Gyr, 0.15 dex). Finally, we computed the ratio ν considering only the stellar particles that are located within each (Age, [Fe/H]) bin. The colour bar in Fig. <ref> indicates different values of ν. Regions of the (Age, [Fe/H]) space that are dominated by in-situ stellar populations, i.e. 0 ≤ ν < 1, are shown in dark blue. It is evident that, at all galactocentric distances, the in-situ disc dominates in regions with young and metal-rich stellar populations. Close to the galactic centre, at 0.2 R_opt, ex-situ stellar populations are found to dominate at very old ages (≳ 8 Gyr) and low [Fe/H] (≲ -0.5 dex). Nonetheless, an interesting pattern arises when larger galactocentric distances are considered. It is clear from this figure that the regions dominated by the ex-situ disc gradually grow as we move further out. In a few examples (Au-14 and Au-24), regions with ages > 6 Gyr are mainly dominated by ex-situ stellar populations already at 0.5 R_opt. Note that, assuming an optical radius of 19 kpc for the Milky Way <cit.>, these regions can be regarded as Solar Neighborhood analogs. The trend continues at larger galactocentric distances (0.9 R_opt). This indicates that the likelihood of identifying an ex-situ disc in samples of old stars increases towards the outskirts of the galactic discs. The reason for this was already discussed in Section <ref>. In general, the discs in this subset of Auriga galaxies show inside-out formation (see Fig. <ref>). The in-situ stellar populations become, on average, younger with galactocentric distance. Conversely, we find that the age distribution of the ex-situ population remains nearly constant with galactocentric distance. Thus, as the in-situ populations recede towards regions with younger ages, the ex-situ population takes over. This can be seen more clearly in Figure <ref>, where we show Gaussian kernel histograms of the stellar age distribution for the four disc examples previously discussed. Note that no cut in [Fe/H] has been imposed to generate these histograms. Again, we can see that close to the galactic centre, i.e. ∼ 0.2 R_opt, the in-situ stellar populations (blue lines) dominate these distributions at all ages. As we move outwards, the ex-situ populations (red lines) start to take over at old ages. In all cases, the old tail of these distributions (age ≳ 7 Gyr) is dominated by ex-situ star particles at galactocentric distances of ∼ 0.9 R_opt. It is important to highlight that the presence of an old tail in this distribution does not necessarily imply the presence of an ex-situ disc component. For example, in-situ old stars can be found in the outer regions of a galactic disc as a result of processes such as radial migration <cit.>. To unambiguously identify ex-situ stars, additional information based on chemical tagging should be used (R14, R15).
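A hedged sketch of the gridding procedure described above (our own; the column arrays for disc-selected particles in one cylindrical shell are assumed inputs, and the bin sizes mirror those quoted in the text):

```python
import numpy as np

def nu_map(age, feh, mass, is_exsitu, age_bin=0.65, feh_bin=0.15):
    """Ex-situ to in-situ mass ratio nu on a regular (Age, [Fe/H]) grid.

    Inputs are arrays for disc particles already selected within one
    cylindrical shell (eps >= 0.7, R < R_opt, |Z| < 10 kpc)."""
    age_edges = np.arange(0.0, age.max() + age_bin, age_bin)
    feh_edges = np.arange(feh.min(), feh.max() + feh_bin, feh_bin)
    m_ex, _, _ = np.histogram2d(age[is_exsitu], feh[is_exsitu],
                                bins=(age_edges, feh_edges),
                                weights=mass[is_exsitu])
    m_in, _, _ = np.histogram2d(age[~is_exsitu], feh[~is_exsitu],
                                bins=(age_edges, feh_edges),
                                weights=mass[~is_exsitu])
    with np.errstate(divide="ignore", invalid="ignore"):
        return m_ex / m_in  # NaN/inf where a bin holds no in-situ mass
```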
§ DISCUSSION
In this work we have shown that massive satellites can be disrupted in a plane that is well aligned with that of the host disc, depositing material on near-circular orbits that are dynamically indistinguishable from those of stars born in-situ. An ex-situ disc would not only be relevant for probing the merger history of the Milky Way, but would also hint at the presence of an underlying DM disc <cit.>. The quantification and characterization of the DM discs in the Auriga simulations will be presented in another paper (Schaller et al., in prep). Here we merely show that some of the Auriga galaxies with significant ex-situ discs also have a significant rapidly rotating DM component. In Figure <ref> we show three examples of the velocity distribution of dark matter particles located within a cylinder defined by |Z| ≤ 5 kpc and 6 ≤ R ≤ 10 kpc. In all cases we can see that both the radial, V_rad, and the vertical, V_z, velocity components can be well fitted with a single Gaussian centred at 0 km s^-1. However, it is clear that a single Gaussian centred at 0 km s^-1 cannot describe the azimuthal velocity distributions, V_rot. Following <cit.>, we used a double Gaussian to describe these distributions. As in <cit.>, one of the Gaussians is centred at V_rot = 0 km s^-1. Very similar results are obtained if the centres of both Gaussians are left as free parameters. The blue lines show the result of such fits, while the red and green lines show the contribution from each individual Gaussian. In all cases we find the second Gaussian to be centred at high V_rot, with values of ∼ 133, 124 and 126 km s^-1 for Au-2, Au-8 and Au-20, respectively. Following <cit.>, we estimate the amount of DM that the secondary Gaussian contributes to these cylinders by evaluating its integral. We find contributed mass fractions of 32%, 35% and 50%, respectively. In general, galaxies with significant ex-situ discs show V_rot distributions that cannot be described with a single Gaussian (Schaller et al., in prep). The Au-2 case is particularly interesting. It possesses a significant rotating DM component and, as previously discussed, the circularity distribution of its ex-situ component (within the spatially defined disc) peaks at ϵ ≥ 0.7 (see also Au-8). The most significant contributors were accreted ∼ 8.5 Gyr ago and quickly merged with the host. More importantly, this galaxy has a smoothly rising age–vertical velocity dispersion relation during the last 8 Gyr of evolution <cit.>, and thus its behaviour is qualitatively consistent with that observed in the MW <cit.>. Recently, <cit.> combined Gaia data release 1 astrometry with Sloan Digital Sky Survey (SDSS) images to measure proper motions of old stars in the MW stellar halo. They find a gently rotating prograde signal, which shows little variation with Galactocentric radius out to 50 kpc. As discussed by D17, some Auriga galaxies with significant ex-situ discs (Au-2, Au-4, Au-7, Au-19 and Au-24) also show mildly rotating old stellar haloes, consistent with these observations. An ex-situ component with the characteristics found in e.g. Au-2, and its associated rotating DM component, would not have been detected to date in the MW disc.
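The decomposition of V_rot can be reproduced with a standard least-squares fit; the following sketch (ours; it assumes an array of azimuthal DM velocities within the cylinder and fixes one Gaussian at 0 km s^-1, as in the text) also returns the rotating mass fraction as a ratio of Gaussian integrals:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(v, a1, s1, a2, mu2, s2):
    """Non-rotating Gaussian fixed at 0 km/s plus a rotating component."""
    g1 = a1 * np.exp(-0.5 * (v / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((v - mu2) / s2) ** 2)
    return g1 + g2

def rotating_fraction(v_rot, bins=80):
    """Fit the azimuthal velocity histogram; return the centre of the
    rotating Gaussian and its fractional contribution."""
    counts, edges = np.histogram(v_rot, bins=bins, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), 100.0, 0.5 * counts.max(), 120.0, 50.0]
    (a1, s1, a2, mu2, s2), _ = curve_fit(double_gaussian, centres, counts, p0=p0)
    frac = (a2 * abs(s2)) / (a1 * abs(s1) + a2 * abs(s2))  # integrals scale as a*s
    return mu2, frac
```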
§ CONCLUSIONS
We have studied the formation of ex-situ discs in model galaxies with masses similar to that of the MW, simulated in a fully cosmological context. An important goal of this study was to explore whether a significant ex-situ stellar component could be buried within the near-circular orbit population of the MW disc, which is strongly dominated by in-situ stars. For this purpose, we focused our analysis on star particles with large circularity parameter, ϵ ≥ 0.7. This differs from the strategy in observational studies, such as those presented by R14 and R15, which attempted to identify an ex-situ disc in the Milky Way by focusing on stars with 0.2 < ϵ < 0.8. Our study shows that approximately one third of our sample (8 out of 26 models) contains a significant ex-situ disc. These galaxies show an ex-situ to in-situ disc (ϵ ≥ 0.7) mass ratio η > 0.05. Note, however, that the fraction of models with significant ex-situ discs would be larger if we were to relax our circularity threshold to lower values. In fact, as shown in Figure 1 of R15, the circularity distributions of the ex-situ stellar discs presented in <cit.> peak at ϵ ∼ 0.5. We find that, in general, the ex-situ to in-situ disc mass fraction rises as we move towards the outer disc regions. We have characterized the circularity distribution of all stellar particles that are spatially located within regions associated with the galactic discs (i.e. R < R_opt and |Z| < 10 kpc). Half of the ex-situ disc sample have a distribution that is consistent with those shown in R15, peaking at values of ϵ < 0.7. Interestingly, for the remaining half we find a circularity distribution that peaks at values of ϵ ≥ 0.7. Such discs would not have been detected in existing observational studies. In general, ex-situ discs are formed from the debris of at most three massive satellite galaxies, but most of their mass (> 50%) always comes from a single significant contributor. The peak total mass of this dominant contributor ranges between 6 × 10^10 M_⊙ and 5.3 × 10^11 M_⊙. Both the virial-radius crossing time and the infall angle of these satellites have a very large scatter, with values ranging between 3.1 and 9.1 Gyr and between 15^∘ and 85^∘, respectively. We highlight that significant ex-situ discs can arise from merger events with massive satellites that are accreted at infall angles as large as 85^∘ <cit.>. In these cases we find that the disc and satellite angular momentum vectors rapidly align. This is not purely due to an evolution of the infalling satellite's orbit, but also to a strong response of the host galactic disc. We find that host discs start to tilt as soon as these massive satellites cross R_vir. This tilt can be driven both by direct tidal perturbations <cit.> and by the generation of asymmetric features in the host DM halo that can extend into the inner regions, affecting the deeply embedded disc <cit.>. It is important to note that the response of a disc depends, among other things, on its vertical rigidity. A disc tilts as a whole only in regions where it is strongly cohesive thanks to its self-gravity <cit.>.
Thus, the frequency and properties of ex-situ discs may be misrepresented in simulations of Milky Way-like galaxies if these are not able to reproduce correctly the vertical structure of the MW disc <cit.>. This could be an issue for the Auriga simulations, since our final stellar discs are thicker than observed (h_z ∼ 1 kpc, see GR17). Hence, their vertical rigidity may be significantly lower than that of the MW disc. Ex-situ discs tend to be thicker than in-situ discs, with vertical velocity dispersion ratios σ_Z^exsitu / σ_Z^insitu ∼ 1.5. In all cases, and at all radii, the ex-situ disc component is significantly more metal-poor than the in-situ disc. Differences in the median [Fe/H] can be as large as 0.5 dex. Ex-situ discs are also significantly older than their in-situ counterparts, with age differences that can be as large as 6 Gyr. Their median age profiles are flat and reflect the median age of the stellar populations of the most significant contributors. In contrast, the median age profiles of the in-situ discs show, in general, negative gradients, reflecting the inside-out formation of these stellar discs. We have shown that the different properties that the in-situ and ex-situ stellar populations exhibit could be used to isolate ex-situ star candidates on near-circular orbits in the Milky Way disc (recall, ϵ ≥ 0.7). By gridding the Age–[Fe/H] space, and computing the ex-situ to in-situ mass ratio within each bin, we find that the regions dominated by the ex-situ disc gradually grow as we move towards the outer disc, while in the inner galactic regions (i.e. R ∼ 0.2 R_opt) in-situ stellar populations dominate nearly everywhere. In a few cases we find that regions with ages > 6 Gyr are dominated by ex-situ disc stars already at galactocentric distances R = 0.5 R_opt. The likelihood of identifying an ex-situ disc in samples of old stars increases towards the outskirts of the disc. However, the presence of an old tail in the age distribution may not uniquely imply the presence of an ex-situ disc component. To unambiguously identify such an ex-situ component in the MW, additional information based on chemical tagging should be considered.
§ ACKNOWLEDGEMENTS
We are grateful to Adrian Jenkins and David Campbell for the selection of the sample and making the initial conditions. RG and VS acknowledge support through the DFG Research Centre SFB-881 'The Milky Way System' through project A1. VS and RP acknowledge support by the European Research Council under ERC-StG grant EXAGAL-308037. This work was supported by the Science and Technology Facilities Council (grant number ST/L00075X/1) and the European Research Council (grant number GA 267291, 'Cosmiway'). This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. SB acknowledges support from the International Max-Planck Research School for Astronomy and Cosmic Physics of Heidelberg (IMPRS-HD) and financial support from the Deutscher Akademischer Austauschdienst (DAAD) through the program Research Grants – Doctoral Programmes in Germany (57129429).
http://arxiv.org/abs/1704.08261v1
{ "authors": [ "Facundo A. Gómez", "Robert J. J. Grand", "Antonela Monachesi", "Simon D. M. White", "Sebastian Bustamante", "Federico Marinacci", "Rüdiger Pakmor", "Christine M. Simpson", "Volker Springel", "Carlos S. Frenk" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170426180007", "title": "Lessons from the Auriga discs: The hunt for the Milky Way's ex-situ disc is not yet over" }
Lassi Paunonen, Department of Mathematics, Tampere University of Technology, PO Box 553, 33101 Tampere, Finland ([email protected])
David Seifert, St John's College, St Giles, Oxford OX1 3JP, United Kingdom ([email protected])
This paper investigates the asymptotic behaviour of solutions of periodic evolution equations. Starting with a general result concerning the quantified asymptotic behaviour of periodic evolution families, we go on to consider a special class of dissipative systems arising naturally in applications. For this class of systems we analyse in detail the spectral properties of the associated monodromy operator, showing in particular that it is a so-called Ritt operator under a natural 'resonance' condition. This allows us to deduce from our general result a precise description of the asymptotic behaviour of the corresponding solutions. In particular, we present conditions for rational rates of convergence to periodic solutions in the case where the convergence fails to be uniformly exponential. We illustrate our general results by applying them to concrete problems, including the one-dimensional wave equation with periodic damping.
[2010] 35B40, 47D06 (35B10, 47A10, 35L05)
This work was carried out while the first author was visiting Oxford from January to June 2017. The visit was hosted by Professor C.J.K. Batty. L. Paunonen is funded by the Academy of Finland grant number 298182.
Asymptotics for periodic systems
================================
§ INTRODUCTION
The aim of this paper is to study stability properties of solutions to non-autonomous periodic evolution equations. An important motivating example is the one-dimensional damped wave equation,
{ z_tt(s,t) = z_ss(s,t) − b(s,t) z_t(s,t), (s,t) ∈ Ω_+,
  z(0,t) = z(1,t) = 0, t > 0,
  z(s,0) = u(s), z_t(s,0) = v(s), s ∈ (0,1). }
Here Ω_+ = (0,1) × (0,∞), b is a suitable non-negative function and the initial data satisfy u ∈ H_0^1(0,1) and v ∈ L²(0,1). It is well known that if b is not the zero function but is independent of t, then the energy
E(t) = (1/2) ∫_0^1 (|z_s(s,t)|² + |z_t(s,t)|²) ds, t ≥ 0,
associated with any solution satisfies E(t) ≤ M e^{−βt} (‖u‖²_{H_0^1} + ‖v‖²_{L²}), t ≥ 0, for some constants M, β > 0 which are independent of the initial data; see for instance <cit.>. Similarly, it has recently been observed <cit.> that for periodically time-dependent systems the energy of the solutions decays at a uniform exponential rate provided the region in (s,t)-space where the damping coefficient b is strictly positive satisfies a certain Geometric Control Condition (GCC). A similar phenomenon occurs in the context of wave equations with autonomous damping on higher-dimensional spatial domains, where uniform exponential energy decay is in fact characterised by the validity of the GCC; see <cit.>. For autonomous damped wave equations there is moreover a rich literature investigating the situation where the GCC is violated, showing in particular that it is possible even in this case to obtain rates of energy decay for solutions corresponding to particular initial data; see for instance <cit.>. To date, however, nothing is known about such non-uniform rates of decay in the non-autonomous case. Our principal aim in the present work is to narrow this gap.
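Although the analysis below is operator-theoretic, the motivating phenomenon is easy to reproduce numerically. The following minimal sketch (our own illustration, not taken from the paper; the damping profile b, the grid and all parameters are choices made here) integrates the damped wave equation above with a 2-periodic damping coefficient by an explicit finite-difference scheme and records the energy E(t):

```python
import numpy as np

# Sketch: z_tt = z_ss - b(s,t) z_t on (0,1) with Dirichlet boundary
# conditions; b is 2-periodic in t (an arbitrary illustrative choice).
J = 200                       # spatial grid points
ds = 1.0 / J
dt = 0.5 * ds                 # CFL-stable step for unit wave speed
s = np.linspace(0.0, 1.0, J + 1)

def b(t):
    # damping switched on away from a moving strip; 2-periodic in t
    return np.where(np.abs(s - 0.5 * (1 + np.cos(np.pi * t))) > 0.3, 1.0, 0.0)

z_prev = np.sin(np.pi * s)    # initial displacement, zero initial velocity
z = z_prev.copy()
energies = []
for n in range(int(20.0 / dt)):
    lap = np.zeros_like(z)
    lap[1:-1] = (z[2:] - 2 * z[1:-1] + z[:-2]) / ds**2
    bb = b(n * dt)
    # the damping term is treated implicitly, which keeps the scheme stable
    z_next = (2 * z - (1 - 0.5 * dt * bb) * z_prev + dt**2 * lap) / (1 + 0.5 * dt * bb)
    z_next[0] = z_next[-1] = 0.0
    z_prev, z = z, z_next
    zt = (z - z_prev) / dt
    zs = np.diff(z) / ds
    energies.append(0.5 * (np.sum(zs**2) + np.sum(zt**2)) * ds)
print(f"E at start: {energies[0]:.4f}, E at t = 20: {energies[-1]:.4f}")
```

Varying the damping region in this sketch already exhibits the dichotomy discussed below: rapid energy decay when the moving strip is controlled geometrically, and stagnation near a non-zero level in resonant configurations.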
In fact, in the non-autonomous setting the energy of the solutions of (<ref>) generally no longer decays at a uniform exponential rate, even in the presence of a significant amount of damping; see for instance <cit.>. Indeed, as our examples in Section <ref> demonstrate, there is no reason to expect energy decay at all if the period of the damping coincides with the period of the undamped wave equation; instead, in this resonant case, which will be of particular interest in what follows, one merely obtains convergence to periodic solutions, which have constant but possibly non-zero energy. One of our main objectives is to obtain statements about the rate at which this convergence takes place, both when the GCC holds and when it is violated. To investigate this problem we begin by viewing the damped wave equation (<ref>) as a non-autonomous abstract Cauchy problem of the form
{ ż(t) = A(t) z(t), t ≥ 0,  z(0) = x. }
Here x = (u,v)^T ∈ H_0^1(0,1) × L²(0,1) is the initial data and the operators A(t) are of the form A(t) = A_0 − B(t)B(t)^*, t ≥ 0, where A_0 is the wave operator corresponding to the undamped wave equation and the periodic operator-valued function B captures the effect of the damping. In Section 2, we introduce a general framework for the study of rates of convergence for periodic non-autonomous systems of the form (<ref>). Our approach is based on studying the associated evolution family. The main result of the section, Theorem <ref>, characterises the quantified asymptotic behaviour of the solutions of (<ref>) in terms of the properties of the so-called monodromy operator U(τ,0), where τ > 0 is the period of the function B. This result may be viewed as a quantified version of several earlier results concerning the stability of periodic evolution families; see <cit.> and also <cit.> and the references therein. Then in Section 3 we introduce a class of dissipative systems which includes the damped wave equation. For this class of systems we obtain precise upper and lower bounds for the 'energy' of solutions in terms of natural quantities associated with the family {A(t) : t ≥ 0}. We moreover analyse the spectral properties of the associated monodromy operator, showing among other things that U(τ,0) is a so-called Ritt operator under the natural resonance condition that the period τ of the damping coincides with the period of the group generated by A_0. Based on these results we then provide, in the form of Theorem <ref>, a detailed description of the asymptotic behaviour of the corresponding solutions. This result shows in particular that there is a rich supply of initial data for which the solution converges (faster than) polynomially to a periodic solution even when uniform exponential convergence is ruled out. Finally, in Sections <ref> and <ref>, we apply our general theory to specific periodic partial differential equations in one space dimension, namely the transport equation and the damped wave equation (<ref>). These examples demonstrate that in many natural cases involving substantial damping at any given time, the solution of the non-autonomous system may well converge to a non-zero periodic solution, which, as discussed above, is in stark contrast to the situation for autonomous systems. The examples also show how, depending on the precise nature of the damping function b, different initial values can lead to different rates of convergence. The notation we use is more or less standard throughout.
In particular, we write X for a generic complex Hilbert space, or occasionally for a general Banach space. We write ℒ(X) for the space of bounded linear operators on X, and given T ∈ ℒ(X) we write Ker(T) for the kernel and Ran(T) for the range of T. We let Fix T = Ker(I − T). If A is an unbounded operator on X then we denote its domain by D(A). Furthermore, we write σ(T) for the spectrum and σ_p(T) for the point spectrum of T. The spectral radius of an operator T is denoted by r(T), and for λ ∈ ℂ∖σ(T) we write R(λ,T) for the resolvent operator (λ − T)^{−1}. We occasionally make use of standard asymptotic notation, such as 'little o'. Finally, we denote by 𝕋 the unit circle {λ ∈ ℂ : |λ| = 1} and by 𝔻 the open unit disc {λ ∈ ℂ : |λ| < 1}.
§ ASYMPTOTICS FOR GENERAL PERIODIC SYSTEMS
Let X be a Hilbert space. An evolution family (U(t,s))_{t≥s≥0} is a family {U(t,s) ∈ ℒ(X) : t ≥ s ≥ 0} of bounded linear operators on X such that U(t,t) = I for all t ≥ 0, U(t,r)U(r,s) = U(t,s) for t ≥ r ≥ s ≥ 0, and the map (t,s) ↦ U(t,s)x is continuous on {(t,s) : t ≥ s ≥ 0} for all x ∈ X. We say that the evolution family (U(t,s))_{t≥s≥0} is bounded if sup_{t≥s≥0} ‖U(t,s)‖ < ∞. Evolution families arise naturally in the context of non-autonomous Cauchy problems of the form
{ ż(t) = A(t) z(t), t ≥ 0,  z(0) = x, }
where A(t), t ≥ 0, are closed and densely defined linear operators and the initial condition x ∈ X is given. Indeed, if the family {A(t) : t ≥ 0} is sufficiently well behaved then there exists an evolution family (U(t,s))_{t≥s≥0} associated with the problem (<ref>) with the property that the function z : ℝ_+ → X defined by z(t) = U(t,0)x, t ≥ 0, satisfies (<ref>) in an appropriate sense, at least for certain initial values x ∈ X. As has been explained in Section <ref>, we shall be interested only in a rather particular type of family {A(t) : t ≥ 0}, to be introduced formally in Section <ref> below, for which the evolution family (U(t,s))_{t≥s≥0} is related to the family {A(t) : t ≥ 0} through a certain variation of parameters formula and the function z : ℝ_+ → X defined in (<ref>) solves (<ref>) in a natural weak sense. We point out, however, that in general the relationship between the family {A(t) : t ≥ 0} and the associated evolution family is a rather delicate matter; see for instance <cit.>, <cit.> and <cit.>. This is in contrast with the autonomous case where A(t) = A for all t ≥ 0 and A is the generator of a C_0-semigroup (T(t))_{t≥0}. Here we may take U(t,s) = T(t−s) for t ≥ s ≥ 0, and the function z(t) = T(t)x, t ≥ 0, is then the mild solution of (<ref>) in the usual sense, and it is a classical solution if and only if x ∈ D(A); see <cit.>. The main result in this section, Theorem <ref> below, may be viewed as a theorem about the asymptotic behaviour of orbits of evolution families. However, motivated by the particular class of problems to be introduced in Section <ref>, we refer to the function z : ℝ_+ → X defined in (<ref>) as the solution of (<ref>), and consequently the evolution family (U(t,s))_{t≥s≥0} is said to be associated with the non-autonomous Cauchy problem (<ref>). We are particularly interested in evolution families (U(t,s))_{t≥s≥0} which are τ-periodic for some τ > 0 in the sense that U(t+τ, s+τ) = U(t,s) for all t ≥ s ≥ 0. This situation will arise in our concrete setting of Section <ref> if the family {A(t) : t ≥ 0} is τ-periodic. In this case it is natural to consider the so-called monodromy operator T = U(τ,0), and in particular the large-time asymptotic behaviour of the solution operators U(t,0) as t → ∞ is determined by the behaviour as n → ∞ of the powers T^n of the monodromy operator; see for instance <cit.>.
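In finite dimensions this reduction is easy to make explicit; the following sketch (ours, with an arbitrary 2×2 τ-periodic matrix family chosen purely for illustration) computes the monodromy matrix by propagating the identity over one period and reads off the large-time behaviour from its powers:

```python
import numpy as np
from scipy.integrate import solve_ivp

tau = 2.0  # period of the family {A(t)}

def A(t):
    # an arbitrary tau-periodic family (illustrative choice only)
    return np.array([[0.0, 1.0], [-1.0, -0.5 * (1 + np.cos(np.pi * t))]])

def monodromy(A, tau):
    """Monodromy matrix T = U(tau, 0), obtained by propagating the
    identity matrix over one period of z'(t) = A(t) z(t)."""
    def rhs(t, y):
        return (A(t) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, tau), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

T = monodromy(A, tau)
print("spectrum of T:", np.linalg.eigvals(T))
# z(n*tau) = T^n z(0): decay of the powers governs the long-time behaviour
x = np.array([1.0, 0.0])
for n in (1, 5, 25):
    print(n, np.linalg.norm(np.linalg.matrix_power(T, n) @ x))
```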
We say that a function z : ℝ_+ → X is asymptotically periodic if there exists a periodic function z_0 : ℝ_+ → X such that ‖z(t) − z_0(t)‖ → 0 as t → ∞, and we say that the convergence is superpolynomially fast if ‖z(t) − z_0(t)‖ = o(t^{−γ}) as t → ∞ for all γ > 0. We say that the system (<ref>) is asymptotically periodic if the solution z(t), t ≥ 0, is asymptotically periodic for all initial values x ∈ X, and we say that the system is stable if ‖z(t)‖ → 0 as t → ∞ for all initial values x ∈ X. Recall that for any power-bounded operator T ∈ ℒ(X), the operator I − T is sectorial of angle (at most) π/2, so that the fractional powers (I−T)^γ are well defined for all γ ≥ 0; see <cit.> for details. Our first result is a quantified asymptotic result in the spirit of <cit.>.

Consider the non-autonomous Cauchy problem (<ref>) on a Hilbert space X, and suppose that the evolution family (U(t,s))_{t≥s≥0} associated with this problem is bounded and τ-periodic for some τ > 0. Let T = U(τ,0) be the monodromy operator, and suppose that σ(T) ∩ 𝕋 ⊆ {1} and that
‖R(e^{iθ}, T)‖ = O(|θ|^{−α}), θ → 0,
for some α ≥ 1. Then X = Fix T ⊕ Z, where Z denotes the closure of Ran(I − T), and if we let P denote the projection onto Fix T along Z, then for any initial value x ∈ X the solution z : ℝ_+ → X of (<ref>) satisfies
‖z(t) − z_0(t)‖ → 0, t → ∞,
where z_0 : ℝ_+ → X is the τ-periodic solution of (<ref>) with initial condition z_0(0) = Px. In particular, the system (<ref>) is asymptotically periodic, and it is stable if and only if Fix T = {0}. Moreover,
‖z(t) − z_0(t)‖ ≤ M e^{−βt} ‖x‖, t ≥ 0, x ∈ X,
for some M, β > 0 if and only if Ran(I − T) is closed. In any case, if x ∈ X is such that x − Px ∈ Ran((I−T)^γ) for some γ > 0 then
‖z(t) − z_0(t)‖ = o(t^{−γ/α}), t → ∞.
Furthermore, there exists a dense subspace X_0 of X such that for all x ∈ X_0 the convergence in (<ref>) is superpolynomially fast.

Since the evolution family (U(t,s))_{t≥s≥0} is assumed to be bounded, there exists C > 0 such that ‖U(t,s)‖ ≤ C for t ≥ s ≥ 0. In particular, sup_{n≥0} ‖T^n‖ ≤ C, so the monodromy operator T is power-bounded. It follows from the mean ergodic theorem <cit.> that X = Fix T ⊕ Z. Since σ(T) ∩ 𝕋 ⊆ {1}, we have ‖T^n(I − T)‖ → 0 as n → ∞ by the Katznelson–Tzafriri theorem <cit.>. Hence ‖T^n x‖ → 0 as n → ∞ for all x ∈ Ran(I−T), and by power-boundedness of T the statement extends to all x ∈ Z. In particular, we deduce that
‖T^n x − Px‖ = ‖T^n(I−P)x‖ → 0, n → ∞,
for all x ∈ X. Given t ≥ 0, if we let n ≥ 0 be the unique integer such that t − nτ ∈ [0, τ), then by periodicity and boundedness of (U(t,s))_{t≥s≥0} we have
‖z(t) − z_0(t)‖ = ‖U(t − nτ, 0)(T^n x − Px)‖ ≤ C ‖T^n x − Px‖,
which implies (<ref>). Now let S denote the restriction of T to the invariant subspace Z and recall that X = Fix T ⊕ Z. Then σ(S) ⊆ σ(T), and hence σ(S) ⊆ 𝔻 ∪ {1}. Moreover, the operator I − S maps Z bijectively onto Ran(I−T), so by the inverse mapping theorem we have 1 ∈ ℂ∖σ(S) if and only if Ran(I−T) is closed. Thus if Ran(I−T) is closed then r(S) < 1, and we may take r ∈ (r(S), 1) and find a constant K > 0 such that ‖S^n‖ ≤ K r^n for all n ≥ 0. It follows from (<ref>) that for t ≥ 0 and n ≥ 0 such that t − nτ ∈ [0, τ) we have ‖z(t) − z_0(t)‖ ≤ CK r^n ‖x‖, so (<ref>) holds for M = CK/r and β = (1/τ) log(1/r). On the other hand, if (<ref>) holds for some M, β > 0, then for x ∈ Z we have ‖S^n x‖ = ‖z(nτ)‖ ≤ M e^{−βnτ} ‖x‖, n ≥ 0, and in particular ‖S^n‖ < 1 for sufficiently large n ≥ 0. Hence r(S) < 1 and Ran(I−T) is closed. For the last part, note that by <cit.> condition (<ref>) implies that for x ∈ Ran(I−T) we have ‖T^n x‖ = o(n^{−1/α}) as n → ∞.
By iterating this result and applying the moment inequality <cit.> to the sectorial operator I − T it is now straightforward to obtain (<ref>). Finally, consider the spaces X_k = Fix T ⊕ Ran((I−T)^k), k ≥ 1, and let X_0 = ⋂_{k≥1} X_k. Since X_k is dense in X for each k ≥ 1, it follows from a straightforward application of the Esterle–Mittag-Leffler theorem <cit.> that X_0 is also dense in X. By construction, the convergence in (<ref>) is superpolynomially fast for each x ∈ X_0, so the proof is complete.

* We remark that the restriction in the statement of Theorem <ref> that α ≥ 1 is natural, since if 1 ∈ σ(T) then the standard lower bound ‖R(λ,T)‖ ≥ (dist(λ, σ(T)))^{−1}, λ ∈ ℂ∖σ(T), implies that no smaller values of α can arise.
* In the case where Ran(I−T) is not closed we can in fact say more. Indeed, in this case r(S) = 1 and it follows from <cit.> that for every sequence (r_n)_{n≥0} of non-negative terms converging to zero there exists x ∈ Z such that ‖S^n x‖ ≥ r_n for all n ≥ 0. A simple argument as in the first part of the proof of <cit.> now shows that the convergence in (<ref>) is in fact arbitrarily slow, in the sense that for any function r : ℝ_+ → [0,∞) such that r(t) → 0 as t → ∞ there exists x ∈ X such that ‖z(t) − z_0(t)‖ ≥ r(t) for all t ≥ 0. So we have a dichotomy for the rate of decay: either it is uniformly exponentially fast, or it is arbitrarily slow for suitable initial values.
* It follows from (<ref>) that the projection P onto Fix T along Z satisfies ‖P‖ ≤ sup_{n≥0} ‖T^n‖. In particular, if T is a contraction then the projection P is orthogonal.

It is straightforward, for any α ≥ 1, to construct examples of families {A(t) : t ≥ 0} of suitable multiplication operators to which Theorem <ref> can be applied. In the next section we consider a special class of operators A(t), t ≥ 0, which are useful in applications and to which Theorem <ref> can be applied with α = 1.
§ A CLASS OF DISSIPATIVE SYSTEMS
We now restrict our attention to the case where
A(t) = A_0 − B(t)B(t)^*, t ≥ 0,
with D(A(t)) = D(A_0), t ≥ 0. Here A_0 is assumed to be the infinitesimal generator of a unitary group (T_0(t))_{t∈ℝ} on X and B ∈ L²_loc(ℝ_+; ℒ(V,X)) for some Hilbert space V. In particular, the operators A(t), t ≥ 0, are dissipative. It follows from the Lumer–Phillips theorem and the results in <cit.>, and in particular from <cit.>, that there exists an evolution family (U(t,s))_{t≥s≥0} of contractions associated with (<ref>) in the sense that the function z : ℝ_+ → X defined by z(t) = U(t,0)x, t ≥ 0, satisfies the variation of parameters formula
z(t) = T_0(t)x − ∫_0^t T_0(t−s) B(s) B(s)^* z(s) ds, t ≥ 0,
and hence may be viewed as a mild solution of (<ref>). As is easily verified, this mild solution can moreover be thought of as a weak solution of (<ref>) in the sense that for every y ∈ D(A_0^*) the map t ↦ (z(t), y) is absolutely continuous on ℝ_+ and
d/dt (z(t), y) = (z(t), A(t)^* y)
for almost all t ≥ 0. We begin with a simple lemma which will be useful in studying the asymptotic behaviour of the solution of (<ref>).

Let A(t), t ≥ 0, be as in (<ref>) and let τ > 0. Then
(‖x‖² − ‖U(τ,0)x‖²)/2 = ∫_0^τ ‖B(t)^* U(t,0)x‖² dt, x ∈ X.

If B is constant on (0,τ) then the identity follows from the fundamental theorem of calculus for x ∈ D(A_0), and by density it then holds for all x ∈ X. A similar argument applies when B is a step function. Since B ∈ L²_loc(ℝ_+; ℒ(V,X)), a standard approximation argument yields the same identity in the general case.

Let τ > 0 and x ∈ X. Then by Lemma <ref> and the variation of parameters formula (<ref>) we have that U(τ,0)x = x if and only if U(t,0)x = T_0(t)x for all t ∈ [0,τ]. Let B^* ∈ L²_loc(ℝ_+; ℒ(X,V)) be the function defined by B^*(t) = B(t)^*, t ≥ 0.
Given a subset Z of X and a constant τ > 0, we say that the pair (B^*, A) is approximately Z-observable on (0,τ) if for all x ∈ Z the condition
∫_0^τ ‖B(t)^* U(t,0)x‖² dt = 0
implies that x = 0, and we say that (B^*, A) is exactly Z-observable on (0,τ) if there exists a constant κ > 0 such that
∫_0^τ ‖B(t)^* U(t,0)x‖² dt ≥ κ² ‖x‖²
for all x ∈ Z. If Z = X we simply call the pair (B^*, A) approximately or exactly observable on (0,τ). For further discussion of observability and related concepts for non-autonomous systems see for instance <cit.>.

Let A(t), t ≥ 0, be as in (<ref>) and let τ > 0. Then for all x ∈ X we have
(1/c_τ²) ∫_0^τ ‖B(t)^* T_0(t)x‖² dt ≤ ∫_0^τ ‖B(t)^* U(t,0)x‖² dt ≤ ∫_0^τ ‖B(t)^* T_0(t)x‖² dt,
where c_τ = 1 + ‖B‖²_{L²(0,τ)}. In particular, given any subset Z of X, the pair (B^*, A) is approximately (respectively, exactly) Z-observable on (0,τ) if and only if (B^*, A_0) is approximately (respectively, exactly) Z-observable on (0,τ).

Consider the operators Φ_τ, Ψ_τ ∈ ℒ(X, L²(0,τ;V)) given, for x ∈ X and t ∈ (0,τ), by
(Φ_τ x)(t) = B(t)^* T_0(t)x and (Ψ_τ x)(t) = B(t)^* U(t,0)x.
We show that there exists an isomorphism Q_τ ∈ ℒ(L²(0,τ;V)) such that Φ_τ = Q_τ ∘ Ψ_τ, and that moreover ‖Q_τ‖ ≤ c_τ and ‖Q_τ^{−1}‖ ≤ 1. Indeed, a straightforward calculation using (<ref>) shows that
(Ψ_τ x)(t) = (Φ_τ x)(t) − ((R_τ ∘ Ψ_τ)x)(t)
for all x ∈ X and almost all t ∈ (0,τ), where
(R_τ y)(t) = ∫_0^t B(t)^* T_0(t−s) B(s) y(s) ds
for y ∈ L²(0,τ;V) and almost all t ∈ (0,τ). Thus Φ_τ = Q_τ ∘ Ψ_τ, where Q_τ = I + R_τ, and a simple estimate gives Q_τ ∈ ℒ(L²(0,τ;V)) with ‖Q_τ‖ ≤ c_τ. We now show that R_τ ≥ 0. Let y ∈ L²(0,τ;V). Then
(R_τ y, y) = ∫_0^τ ∫_0^t (T_0(−t)B(t)y(t), T_0(−s)B(s)y(s)) ds dt.
Using Fubini's theorem to interchange the order of integration, we may rewrite the double integral to obtain
(R_τ y, y) = ∫_0^τ ∫_t^τ (T_0(−t)B(t)y(t), T_0(−s)B(s)y(s)) ds dt.
Adding these two identities gives
(R_τ y, y) = (1/2) ‖∫_0^τ T_0(−t)B(t)y(t) dt‖² ≥ 0,
as required. We now show that Q_τ is invertible. Indeed, Ran Q_τ is dense because if z ∈ L²(0,τ;V) is such that (Q_τ y, z) = 0 for all y ∈ L²(0,τ;V), then in particular ‖z‖² ≤ ‖z‖² + (R_τ z, z) = (Q_τ z, z) = 0, so z = 0. Moreover,
‖y‖² ≤ (Q_τ y, y) ≤ ‖Q_τ y‖ ‖y‖
for all y ∈ L²(0,τ;V), which shows that Ran Q_τ is closed and that Q_τ is invertible with ‖Q_τ^{−1}‖ ≤ 1. This completes the proof.

Recall that an operator T on a Banach space X is said to be a Ritt operator if r(T) ≤ 1 and
‖R(λ,T)‖ ≤ C/|λ−1|, |λ| > 1,
for some constant C > 0; see <cit.>. It is shown in <cit.> that T is a Ritt operator if and only if T is power-bounded and ‖T^n(I−T)‖ = O(n^{−1}) as n → ∞. It is also known that a power-bounded operator is a Ritt operator if and only if σ(T) ∩ 𝕋 ⊆ {1} and (<ref>) holds with α = 1; see <cit.>. The next result provides the type of spectral information required in Theorem <ref>; see <cit.> for a related result on eigenvalues of monodromy operators.

Let A(t), t ≥ 0, be as in (<ref>) and suppose that T_0(τ) = I for some τ > 0. Moreover, let T = U(τ,0). Then T is a Ritt operator and
Fix T = {x ∈ X : ∫_0^τ ‖B(t)^* T_0(t)x‖² dt = 0}.
In particular, σ(T) ∩ 𝕋 ⊆ {1}, and we have 1 ∉ σ_p(T) if and only if (B^*, A_0) is approximately observable on (0,τ).

We show first that σ(T) ∩ 𝕋 ⊆ {1}. Indeed, since T is a contraction we know that r(T) ≤ 1, and hence if λ ∈ σ(T) ∩ 𝕋 then λ must be an approximate eigenvalue of T. In particular, we may find vectors x_n ∈ X, n ≥ 1, such that ‖x_n‖ = 1 for all n ≥ 1 and ‖Tx_n − λx_n‖ → 0 as n → ∞. By Lemma <ref> we have
∫_0^τ ‖B(t)^* U(t,0)x_n‖² dt = (‖x_n‖² − ‖Tx_n‖²)/2 → 0, n → ∞.
Since T_0(τ) = I, it follows from (<ref>) that
‖Tx_n − x_n‖² ≤ ‖B‖²_{L²(0,τ)} ∫_0^τ ‖B(t)^* U(t,0)x_n‖² dt → 0, n → ∞,
and hence
|1 − λ| ≤ ‖Tx_n − x_n‖ + ‖Tx_n − λx_n‖ → 0, n → ∞,
so that λ = 1, as required. Next we establish that T is a Ritt operator. To this end, let x ∈ X with ‖x‖ = 1 and let θ ∈ [−π, π].
We first observe that
|e^{iθ} − 1| ≤ ‖e^{iθ}x − Tx‖ + |(Tx − x, x)|.
Using (<ref>) and the fact that T_0(τ) = I, so that in particular T_0(τ−t)^* = T_0(t) for t ∈ [0,τ], we have
|(Tx − x, x)|² = |∫_0^τ (T_0(τ−t)B(t)B(t)^* U(t,0)x, x) dt|² ≤ (∫_0^τ ‖B(t)^* U(t,0)x‖² dt)(∫_0^τ ‖B(t)^* T_0(t)x‖² dt).
By Lemma <ref> and the reverse triangle inequality we see that
∫_0^τ ‖B(t)^* U(t,0)x‖² dt = (‖x‖² − ‖Tx‖²)/2 ≤ ‖e^{iθ}x − Tx‖,
and hence by Lemma <ref>
∫_0^τ ‖B(t)^* T_0(t)x‖² dt ≤ c_τ² ‖e^{iθ}x − Tx‖.
Combining (<ref>) and (<ref>) in the previous estimate we find that |(Tx − x, x)| ≤ c_τ ‖e^{iθ}x − Tx‖, and hence (<ref>) gives
|e^{iθ} − 1| ≤ (1 + c_τ) ‖e^{iθ}x − Tx‖.
It follows that ‖R(e^{iθ}, T)‖ = O(|θ|^{−1}) as θ → 0, so T is a Ritt operator. In order to characterise the set Fix T, note first that if Tx = x then by Lemmas <ref> and <ref> we have
∫_0^τ ‖B(t)^* T_0(t)x‖² dt = 0.
Conversely, suppose that x ∈ X is such that (<ref>) holds. Then Lemma <ref> shows that
∫_0^τ ‖B(t)^* U(t,0)x‖² dt = 0,
and it follows from (<ref>) that
‖Tx − x‖² ≤ ‖B‖²_{L²(0,τ)} ∫_0^τ ‖B(t)^* U(t,0)x‖² dt = 0,
and hence Tx = x. In particular, we obtain (<ref>), and hence 1 ∉ σ_p(T) if and only if (B^*, A_0) is approximately observable on (0,τ).

Note that even without the assumption T_0(τ) = I, approximate observability of (B^*, A_0) on (0,τ) implies that σ_p(T) ∩ 𝕋 = ∅ for T = U(τ,0). Indeed, if (B^*, A_0) is approximately observable on (0,τ) then so is (B^*, A) by Lemma <ref>. Hence by Lemma <ref> we have ‖Tx‖ < ‖x‖ for all x ∈ X∖{0}, and in particular σ_p(T) ∩ 𝕋 = ∅. It follows from the Arendt–Batty–Lyubich–Vũ theorem <cit.> that if the evolution family is τ-periodic then the system (<ref>) is stable whenever (B^*, A_0) is approximately observable on (0,τ) and the boundary spectrum σ(T) ∩ 𝕋 is countable. The next result establishes a connection between exact observability and the spectral radius of certain restrictions of the monodromy operator.

Let A(t), t ≥ 0, be as in (<ref>) and suppose that B is τ-periodic for some τ > 0. Moreover, let T = U(τ,0) and suppose that Z is a closed T-invariant subspace of X. Then r(T|_Z) < 1 if and only if (B^*, A_0) is exactly Z-observable on (0, nτ) for some n ∈ ℕ. If T_0(τ) = I then r(T|_Z) < 1 if and only if (B^*, A_0) is exactly Z-observable on (0,τ).

Let S = T|_Z. Then r(S) < 1 if and only if there exists n ∈ ℕ such that ‖S^n‖ = ‖U(nτ,0)|_Z‖ < 1. Suppose that n ∈ ℕ is such that ‖S^n‖ < 1 and let x ∈ Z. By Lemma <ref> we have
∫_0^{nτ} ‖B(t)^* U(t,0)x‖² dt = (‖x‖² − ‖S^n x‖²)/2 ≥ ((1 − ‖S^n‖²)/2) ‖x‖²,
and hence (B^*, A) is exactly Z-observable on (0,nτ). By Lemma <ref> the same is true of (B^*, A_0). Now suppose conversely that (B^*, A_0) is exactly Z-observable on (0,nτ) for some n ∈ ℕ. By Lemma <ref> the same is true of (B^*, A), and hence there exists a constant κ > 0 such that
∫_0^{nτ} ‖B(t)^* U(t,0)x‖² dt ≥ κ² ‖x‖², x ∈ Z.
Using Lemma <ref> we deduce that
‖S^n x‖² = ‖x‖² − 2 ∫_0^{nτ} ‖B(t)^* U(t,0)x‖² dt ≤ (1 − 2κ²) ‖x‖², x ∈ Z,
and in particular ‖S^n‖ < 1. Hence r(S) < 1. Suppose finally that T_0(τ) = I and that r(S) < 1. If n ∈ ℕ is such that ‖S^n‖ < 1, then by the first part we know that (B^*, A_0) is exactly Z-observable on (0,nτ). Hence there exists a constant κ > 0 such that
∫_0^{nτ} ‖B(t)^* T_0(t)x‖² dt ≥ κ² ‖x‖², x ∈ Z.
Since both B and T_0 are τ-periodic, we have
∫_0^{nτ} ‖B(t)^* T_0(t)x‖² dt = n ∫_0^τ ‖B(t)^* T_0(t)x‖² dt.
Thus
∫_0^τ ‖B(t)^* T_0(t)x‖² dt ≥ (κ²/n) ‖x‖², x ∈ Z,
so (B^*, A_0) is exactly Z-observable on (0,τ), as required.

We now formulate a variant of Theorem <ref> for operators A(t), t ≥ 0, which are of the form given in (<ref>).

Suppose that the operators A(t), t ≥ 0, are as in (<ref>) and that B is τ-periodic for some τ > 0. Suppose also that T_0(τ) = I and let T = U(τ,0) be the monodromy operator. Furthermore, let P denote the orthogonal projection onto the closed subspace
Y = {x ∈ X : ∫_0^τ ‖B(t)^* T_0(t)x‖² dt = 0}
of X and let Z = Y^⊥.
Then for any initial value x ∈ X the solution z : ℝ_+ → X of (<ref>) satisfies
‖z(t) − z_0(t)‖ → 0, t → ∞,
where z_0 : ℝ_+ → X is the τ-periodic solution of (<ref>) with initial condition z_0(0) = Px. In particular, the system (<ref>) is asymptotically periodic. The system is stable if and only if (B^*, A_0) is approximately observable on (0,τ). Moreover,
‖z(t) − z_0(t)‖ ≤ M e^{−βt} ‖x‖, t ≥ 0, x ∈ X,
for some M, β > 0 if and only if (B^*, A_0) is exactly Z-observable on (0,τ). In any case, if x ∈ X is such that x − Px ∈ Ran((I−T)^γ) for some γ > 0 then
‖z(t) − z_0(t)‖ = o(t^{−γ}), t → ∞.
Furthermore, there exists a dense subspace X_0 of X such that for all x ∈ X_0 the convergence in (<ref>) is superpolynomially fast.

The result follows immediately from Theorem <ref>, Proposition <ref> and Proposition <ref>. Indeed, Proposition <ref> shows that the monodromy operator T is a Ritt operator, so that σ(T) ∩ 𝕋 ⊆ {1} and (<ref>) holds for α = 1, and moreover that Y = Fix T, so that the system is stable if and only if (B^*, A_0) is approximately observable on (0,τ). Note that by Remark <ref>(<ref>) the closure of Ran(I−T) coincides with the orthogonal complement Z of Fix T. From the proof of Theorem <ref> it is clear that Ran(I−T) is closed if and only if the restriction S = T|_Z of the monodromy operator T to Z satisfies r(S) < 1. Hence by Proposition <ref> the estimate in (<ref>) holds for some M, β > 0 if and only if (B^*, A_0) is exactly Z-observable on (0,τ). The result now follows from Theorem <ref>.

In the setting of Theorem <ref>, the system (<ref>) is stable if and only if (B^*, A_0) is approximately observable on (0,τ). Moreover,
‖z(t)‖ ≤ M e^{−βt} ‖x‖, t ≥ 0, x ∈ X,
for some M, β > 0 if and only if (B^*, A_0) is exactly observable on (0,τ).
§ THE TRANSPORT EQUATION
Let Ω = (0,1) × ℝ and Ω_+ = (0,1) × (0,∞), and consider the following initial-value problem for the transport equation subject to periodic boundary conditions:
{ z_t(s,t) = z_s(s,t) − b(s,t) z(s,t), (s,t) ∈ Ω_+,
  z(0,t) = z(1,t), t > 0,
  z(s,0) = x(s), s ∈ (0,1), }
where x ∈ L²(0,1) and b ∈ L^∞(Ω) are given. We suppose that the damping term b is 1-periodic in t and that b(s,t) ≥ 0 for almost all (s,t) ∈ Ω. The problem can be cast in the form of (<ref>) with A(t), t ≥ 0, as in (<ref>) by letting X = L²(0,1), A_0 x = x' for x ∈ D(A_0) = {x ∈ H¹(0,1) : x(0) = x(1)} and B(t)x = b(·,t)^{1/2} x for t ≥ 0 and x ∈ X. Notice in particular that A_0 is the generator of the unitary group (T_0(t))_{t∈ℝ} given by T_0(t)x = x(· + t) for x ∈ X and t ∈ ℝ. Here and in the remainder of this section any function on (0,1) is identified with its 1-periodic extension to ℝ. In particular, we have T_0(1) = I. The unique mild solution z : ℝ_+ → X of (<ref>) in the sense of (<ref>) is given by
z(s,t) = x(s+t) exp(−∫_0^t b(s+t−r, r) dr), (s,t) ∈ Ω_+.
Hence the monodromy operator T = U(1,0) of the evolution family associated with the family {A(t) : t ≥ 0} is the multiplication operator corresponding to the function m ∈ L^∞(0,1) given by m(s) = exp(−a(s)), where
a(s) = ∫_0^1 b(s−r, r) dr, s ∈ (0,1).
Define, modulo null sets,
I_a = {s ∈ (0,1) : a(s) > 0} and J_a = {s ∈ (0,1) : a(s) = 0}.
Then Fix T = L²(J_a) and the orthogonal projection P onto Fix T is given simply by Px = 1_{J_a} x, x ∈ X. Similarly, the orthogonal complement Z = (Fix T)^⊥ of Fix T is given by Z = L²(I_a). In particular, (B^*, A_0) is approximately observable on (0,1) if and only if J_a is a null set, and (B^*, A_0) is exactly Z-observable on (0,1) if and only if a(s) ≥ c for almost all s ∈ I_a and some c > 0. For γ > 0 we have Ran((I−T)^γ) = {x ∈ Z : (1−m)^{−γ} x ∈ X} = {x ∈ Z : a^{−γ} x ∈ X}.
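Everything in this setting is explicit, so the predicted asymptotics can be checked in a few lines; the sketch below (our own, with an illustrative choice of b related to the examples that follow) approximates a(s) by quadrature and iterates the multiplication operator T:

```python
import numpy as np

# Monodromy multiplier for the damped transport equation:
# a(s) = int_0^1 b(s-r, r) dr and m(s) = exp(-a(s)), with all functions
# on (0,1) treated as 1-periodic.
J, K = 400, 400
s = (np.arange(J) + 0.5) / J
r = (np.arange(K) + 0.5) / K

def b(s, t):
    # an illustrative damping coefficient, 1-periodic in t (our choice)
    return np.where((np.mod(s, 1.0) < 0.5) & (np.mod(t, 1.0) < 0.5), 1.0, 0.0)

a = np.array([np.mean(b(si - r, r)) for si in s])  # midpoint rule in r
m = np.exp(-a)

x = np.sin(2 * np.pi * s) + 1.0                    # initial profile
Px = np.where(a < 1e-12, x, 0.0)                   # projection onto Fix T = L^2(J_a)
for n in (1, 10, 100):
    err = np.sqrt(np.mean(np.abs(m**n * x - Px) ** 2))
    print(f"||T^n x - P x|| at n = {n}: {err:.3e}")
```

Since this particular a vanishes at the endpoints of its support, the printed errors decay only polynomially in n, in line with the non-uniform rates discussed below.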
For any initial value x∈X the solution z:ℝ_+→X of the problem (<ref>) corresponding to (<ref>) satisfies

‖z(t) - (1_{J_a}x)(·+t)‖ → 0, t→∞.

In particular, the system (<ref>) is asymptotically periodic and it is stable if and only if J_a is a null set. Moreover,

‖z(t) - (1_{J_a}x)(·+t)‖ ≤ Me^-βt‖x‖, t≥0, x∈X,

for some M, β>0 if and only if a(s)≥c for almost all s∈I_a and some c>0. In any case, if x∈X is such that a^-γ1_{I_a}x∈X for some γ>0 then

‖z(t) - (1_{J_a}x)(·+t)‖ = o(t^-γ), t→∞.

Furthermore, if x lies in the dense subspace of functions satisfying a^-k1_{I_a}x∈X for all k≥1 then the convergence in (<ref>) is superpolynomially fast.

We illustrate Theorem <ref> in the case where b=1_ω is an indicator function of some measurable subset ω of Ω. To ensure periodicity of our system we assume that ω is translation invariant in the t-direction, so that ω+{(0,1)}=ω. Thus b is completely described by the set ω_0=ω∩(0,1)^2.

Suppose that

ω_0 = {(s,t)∈(0,1)^2 : |s-1/2|+|t-1/2|<δ}

for some δ∈[0,1/2]. Then a=(δ/2)1_{(1-2δ,1)}, so I_a=(1-2δ,1) and J_a=(0,1-2δ). Thus the system (<ref>) corresponding to (<ref>) is stable if and only if δ=1/2, and in any case (<ref>) holds for some M,β>0.

Suppose that

ω_0 = {(s,t)∈(0,1)^2 : 0<s,t<δ}

for some δ∈[0,1]. For δ∈[0,1/2) we have

a(s) = { s for 0<s<δ; 2δ-s for δ<s<2δ; 0 for 2δ<s<1 },

and for δ∈[1/2,1] we have

a(s) = { 2δ-1 for 0<s<2δ-1; s for 2δ-1<s<δ; 2δ-s for δ<s<1 }.

Thus for δ∈[0,1/2) we have I_a=(0,2δ) and J_a=(2δ,1), while for δ∈[1/2,1] we have I_a=(0,1) and J_a=∅, so the system (<ref>) corresponding to (<ref>) is stable if and only if δ∈[1/2,1]. When δ∈(1/2,1] the solution z:ℝ_+→X satisfies ‖z(t)‖ ≤ Me^-βt, t≥0, for some M,β>0, whereas for δ=1/2 no such constants exist. In this case, however, we have ‖z(t)‖ = o(t^-γ) as t→∞ for γ>0 provided

∫_0^1 |x(s)|^2/(min{s,1-s})^2γ ds < ∞,

and the convergence is superpolynomially fast if (<ref>) holds for all γ∈ℕ. This is the case in particular if there exists ε>0 such that x(s)=0 for almost all s∈(0,ε)∪(1-ε,1). For δ∈[0,1/2) the system fails to be stable but it is still asymptotically periodic, and in fact ‖z(t)-(1_{J_a}x)(·+t)‖→0 as t→∞. The convergence is not uniformly exponentially fast, but for γ>0 and x∈X such that

∫_0^2δ |x(s)|^2/(min{s,2δ-s})^2γ ds < ∞,

we have ‖z(t)-(1_{J_a}x)(·+t)‖ = o(t^-γ) as t→∞, and the convergence is superpolynomially fast if (<ref>) holds for all γ∈ℕ, as is the case in particular if there exists ε>0 such that x(s)=0 for almost all s∈(0,ε)∪(2δ-ε,2δ).

§ THE TIME-DEPENDENT DAMPED WAVE EQUATION

We return finally to the time-dependent damped wave equation introduced in Section <ref>. Let Ω=(0,1)×ℝ and Ω_+=(0,1)×(0,∞), and assume that b∈L^∞(Ω) with b(s,t)≥0 for almost all (s,t)∈Ω. Then the problem can be written in the form of (<ref>) with operators A(t), t≥0, as in (<ref>) by choosing X=H_0^1(0,1)×L^2(0,1), A_0x=(v,u'')^T for x=(u,v)^T∈D(A_0)=(H^2(0,1)∩H_0^1(0,1))×H_0^1(0,1) and B(t)x=(0,b(·,t)^1/2v)^T for x=(u,v)^T∈X and t≥0. Note in particular that the unitary group (T_0(t))_{t∈ℝ} generated by A_0 satisfies T_0(2)=I, since solutions of the undamped wave equation on (0,1) are 2-periodic. Indeed, the undamped wave equation with initial data x=(u,v)^T∈X can be solved explicitly using d'Alembert's formula, which in this case gives

z(s,t) = (ũ(s+t)+ũ(s-t))/2 + (1/2)∫_{s-t}^{s+t} ṽ(r) dr, (s,t)∈Ω,

where ũ and ṽ are the odd 2-periodic extensions to ℝ of u and v, respectively. Furthermore, the energy of the solution z:ℝ_+→X of (<ref>) satisfies E(t) = (1/2)‖z(t)‖^2, t≥0.

Consider the special case where b=1_ω and suppose that ω is (up to a null set) an open subset of Ω. Given τ>0, let ω_τ=ω∩((0,1)×(0,τ)).
We say that ω satisfies the geometric control condition (GCC) on (0,τ) if every characteristic ray intersects ω_τ. It follows from <cit.> that (B^*,A_0) is exactly observable on (0,τ) provided ω satisfies the GCC on (0,τ). Now suppose that ω is τ-translation-invariant in the sense that ω+{(0,τ)}=ω. It follows from Kronecker's theorem and a simple compactness argument that if τ is irrational then ω satisfies the GCC on (0,nτ) for some n∈ℕ. Hence by Proposition <ref> our system is necessarily uniformly exponentially stable for such τ. Since our main interest here is in non-uniform rates of convergence, we restrict our attention to the case where τ∈ℚ. In fact, replacing τ by nτ for suitable n∈ℕ we may further assume that we are in the resonant case where T_0(τ)=I. We therefore assume henceforth, without essential loss of generality, that τ=2. It then follows from Proposition <ref> that the associated monodromy operator U(2,0) is a Ritt operator. Letting Ω_0={(s,t)∈Ω : 0<t<2}, we obtain the following version of Theorem <ref>.

Consider the system (<ref>) corresponding to the damped wave equation. Suppose that b is 2-periodic in t and let Y denote the closed subspace

Y = {x∈X : ∬_{Ω_0} b(s,t)|v(s,t;x)|^2 d(s,t) = 0}

of X, where v(·,·;x) is the velocity component of the solution to the undamped wave equation on (0,1) with initial data x∈X. Let Z=Y^⊥ and let P denote the orthogonal projection onto Y. Then for any initial value x∈X the solution z:ℝ_+→X of (<ref>) satisfies

‖z(t)-z_0(t)‖ → 0, t→∞,

where z_0:ℝ_+→X is the solution of the undamped wave equation with initial condition z_0(0)=Px. In particular, the system is asymptotically periodic, and it is stable if and only if Y={0}. Moreover,

‖z(t)-z_0(t)‖ ≤ Me^-βt‖x‖, t≥0, x∈X,

for some M, β>0 if and only if

∬_{Ω_0} b(s,t)|v(s,t;x)|^2 d(s,t) ≥ κ^2‖x‖^2

for all x∈Z and some κ>0. If b=1_ω for some open 2-translation-invariant subset ω of Ω, then the estimate in (<ref>) is satisfied for all x∈X provided ω satisfies the GCC on (0,2). In any case, there exists a dense subspace X_0 of X such that for all x∈X_0 the convergence in (<ref>) is superpolynomially fast.

We conclude with two simple examples illustrating the way Theorem <ref> can be applied in the case where b=1_ω for a 2-translation-invariant subset ω of Ω. We introduce a novel approach to analysing exponential convergence to periodic orbits by studying uniform exponential stability of a related problem with a `collapsed' damping region. The collapsing technique in particular allows us to focus our attention on the complement Z of the initial values resulting in non-trivial periodic orbits, and to deduce exact Z-observability of the original wave equation by verifying the GCC for the modified problem.

Let δ∈[0,1] and let p:[0,2]→[0,1] be (part of) the characteristic ray passing through the points (1/2,0), (0,1/2), (1/2,1), (1,3/2) and (1/2,2). Suppose that ω_0=ω∩Ω_0 is given by

ω_0 = {(s,t)∈Ω_0 : |s-p(t)|>1/2-δ}.

If δ≥1/2 then up to a null set ω_0 equals Ω_0. In particular, Y={0} and ω satisfies the GCC on (0,2), so we have stability and uniform exponential convergence. Suppose now that δ∈[0,1/2) and let I_δ=(δ,1-δ). Then we have the orthogonal decomposition

Y = ⟨(u_δ, v_δ)^T⟩ ⊕ {(w, w')^T∈X : w∈H_0^1(I_δ)},

where

u_δ(s) = { s for 0<s<δ; δ(1-2s)/(1-2δ) for δ<s<1-δ; s-1 for 1-δ<s<1 },
v_δ(s) = { 0 for 0<s<δ; -1/(1-2δ) for δ<s<1-δ; 0 for 1-δ<s<1 }.

In particular, the system is asymptotically periodic but not stable. In this example the orthogonal projection P onto Y can be computed explicitly.
Indeed, for δ≤s≤1-δ let ϕ_{δ,s}:X→ℂ be the functional given by

ϕ_{δ,s}(x) = (u(s)-u(δ))/2 + (1/2)∫_δ^s v(r) dr, x=(u,v)^T∈X,

and let ψ_δ=ϕ_{δ,1-δ}. Then

Px = -(2ψ_δ(x)/(1+2δ))(u_δ, v_δ)^T + (w, w')^T, x=(u,v)^T∈X,

where

w(s) = ϕ_{δ,s}(x) - ((s-δ)/(1-2δ))ψ_δ(x), s∈I_δ.

Note that the orthogonal complement Z of Y is given by

Z = {(u,v)^T∈X : u'+v=0 on I_δ}.

We now show that the convergence to the periodic solution is exponentially fast, and this is achieved by `collapsing' the phase plane in such a way that the resulting damping region satisfies the GCC for the wave equation on a shorter interval. Indeed, let J_δ=(0,1/2+δ) and Ω_0'=J_δ×(0,2). Moreover, let

ω_0' = {(s,t)∈Ω_0' : 1/2<s+t<3/2+2δ, t<3/2+δ}.

For x∈Z it follows from a calculation based on d'Alembert's formula (<ref>) that there exists y∈H_0^1(J_δ)×L^2(J_δ) such that ‖y‖=‖x‖ and

∬_{Ω_0} 1_{ω_0}|v(s,t;x)|^2 d(s,t) = ∬_{Ω_0'} 1_{ω_0'}|v(s,t;y)|^2 d(s,t),

where v(·,·;x) and v(·,·;y) denote the velocity components of the undamped wave equation on (0,1) with initial data x and on J_δ with initial data y, respectively. Since ω_0' satisfies the GCC on (0,2) for the wave equation on J_δ, it follows that (<ref>) holds for some κ>0 and all x∈Z. In particular, we have uniform exponential convergence to the periodic solution.

Let δ∈[0,1] and suppose that ω_0=ω∩Ω_0 is given by

ω_0 = ((1-δ,1)×(0,1)) ∪ ((0,δ)×(1,2)).

This can be viewed as a model of a wave equation with switched damping. Note that if δ>0 then separately the damping in each of the two time intervals would lead to uniform exponential decay for all solutions. However, the periodically switched system is stable if and only if δ≥1/2, since Y={0} for precisely these values of δ. If δ>1/2 then (the interior of) ω satisfies the GCC on (0,2), so (<ref>) holds for some M,β>0. For δ=1/2 it is easy to see, by considering initial values of the form x=(u,u')^T for u∈H_0^1(0,1) with support concentrated near the point s=1/2, that (<ref>) does not hold, and hence nor does (<ref>).

Now suppose that δ∈(0,1/2), noting that δ=0 corresponds to the uninteresting case of the undamped wave equation. Letting I_δ=(δ,1-δ), the spaces Y, Z and the projection P are the same as in Example <ref>. By considering initial data x∈Z of the form x=(u,u')^T with u∈H_0^1((0,1)∖I_δ) having support concentrated near the points s=δ and s=1-δ, it is easy to see as before that (<ref>) again fails to hold, and hence (<ref>) does not hold either. By Remark <ref>(<ref>) the convergence to the periodic solution is in fact arbitrarily slow in this case. On the other hand, if x∈X is of the form x=y+z, where y∈Y and z=(u,v)^T∈Z is such that u'+v=0 on an open interval strictly containing I_δ, then by a similar `collapsing' argument to the one in Example <ref> we in fact have exponentially fast convergence to the periodic solution. The fact that the monodromy operator is not known explicitly in this case makes it difficult to give a precise description of those initial values x∈X which lead to, say, polynomial rates of convergence to the periodic solution.

§ REFERENCES

W. Arendt and C. J. K. Batty. Tauberian theorems and stability of one-parameter semigroups. Trans. Amer. Math. Soc., 306:837–841, 1988.
W. Arendt, C.J.K. Batty, M. Hieber, and F. Neubrander. Vector-valued Laplace transforms and Cauchy problems. Birkhäuser, Basel, second edition, 2011.
C. Bardos, G. Lebeau, and J. Rauch. Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary. SIAM J. Control Optim., 30(5):1024–1065, 1992.
C.J.K. Batty, R. Chill, and Y. Tomilov.
Strong stability of bounded evolution families and semigroups. J. Funct. Anal., 193(1):116–139, 2002.
C.J.K. Batty and T. Duyckaerts. Non-uniform stability for bounded semi-groups in Banach spaces. J. Evol. Equ., 8:765–780, 2008.
C.J.K. Batty, W. Hutter, and F. Räbiger. Almost periodicity of mild solutions of inhomogeneous periodic Cauchy problems. J. Differential Equations, 156(2):309–327, 1999.
A. Bensoussan, G. Da Prato, M.C. Delfour, and S.K. Mitter. Representation and Control of Infinite Dimensional Systems. Birkhäuser, Boston, second edition, 2007.
N. Burq. Décroissance de l'énergie locale de l'équation des ondes pour le problème extérieur et absence de résonance au voisinage du réel. Acta Mathematica, 180(1):1–29, 1998.
N. Burq and P. Gérard. Condition nécessaire et suffisante pour la contrôlabilité exacte des ondes. C. R. Acad. Sci. Paris Sér. I Math., 325(7):749–752, 1997.
C. Castro, N. Cîndea, and A. Münch. Controllability of the linear one-dimensional wave equation with inner moving forces. SIAM J. Control Optim., 52(6):4027–4056, 2014.
G. Chen, S. A. Fulling, F. J. Narcowich, and S. Sun. Exponential decay of energy of evolution equations with locally distributed damping. SIAM Journal on Applied Mathematics, 51(1):266–301, 1991.
G. Cohen and M. Lin. Remarks on rates of convergence of powers of contractions. J. Math. Anal. Appl., 436(2):1196–1213, 2016.
C.M. Dafermos. Asymptotic behavior of solutions of evolution equations. In Nonlinear evolution equations (Proc. Sympos., Univ. Wisconsin, Madison, Wis., 1977), volume 40 of Publ. Math. Res. Center Univ. Wisconsin, pages 103–123. Academic Press, New York-London, 1978.
K.-J. Engel and R. Nagel. One-Parameter Semigroups for Linear Evolution Equations. Springer, New York, 2000.
J. Esterle. Mittag-Leffler methods in the theory of Banach algebras and a new approach to Michael's problem. In Proceedings of the conference on Banach algebras and several complex variables (New Haven, Conn., 1983), volume 32 of Contemp. Math., pages 107–129. Amer. Math. Soc., Providence, RI, 1984.
M. Haase and Y. Tomilov. Domain characterizations of certain functions of power-bounded operators. Studia Math., 196(3):265–288, 2010.
A. Haraux. Asymptotic behavior of trajectories for some nonautonomous, almost periodic processes. J. Differential Equations, 49(3):473–483, 1983.
Y. Katznelson and L. Tzafriri. On power bounded operators. J. Funct. Anal., 68:313–328, 1986.
U. Krengel. Ergodic Theorems. Walter de Gruyter, Berlin, 1985.
Y. Latushkin, T. Randolph, and R. Schnaubelt. Exponential dichotomy and mild solutions of nonautonomous equations in Banach spaces. Journal of Dynamics and Differential Equations, 10(3):489–510, 1998.
G. Lebeau. Équation des ondes amorties. In Algebraic and geometric methods in mathematical physics (Kaciveli, 1993), volume 19 of Math. Phys. Stud., pages 73–109. Kluwer Acad. Publ., Dordrecht, 1996.
Y. Lyubich. Spectral localization, power boundedness and invariant subspaces under Ritt's type condition. Studia Math., 134(2):153–167, 1999.
V. Müller. Local spectral radius formula for operators in Banach spaces. Czechoslovak Math. J., 38(4):726–729, 1988.
B. Nagy and J. Zemánek. A resolvent condition implying power boundedness. Studia Math., 134(2):143–151, 1999.
A. Pazy. Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York, 1983.
J. Rauch and M. Taylor.
Exponential decay of solutions to hyperbolic equations in bounded domains. Indiana Univ. Math. J., 24:79–86, 1974.
J. Le Rousseau, G. Lebeau, P. Terpolilli, and E. Trélat. Geometric control condition for the wave equation with a time-dependent observation domain. Anal. PDE, 10(4):983–1015, 2017.
R. Schnaubelt. Feedbacks for nonautonomous regular linear systems. SIAM J. Control Optim., 41(4):1141–1165, 2002.
D. Seifert. Rates of decay in the classical Katznelson-Tzafriri theorem. J. Anal. Math., 130(1):329–354, 2016.
J.M.A.M. van Neerven. The Asymptotic Behaviour of Semigroups of Linear Operators. Birkhäuser, Basel, 1996.
Q.P. Vũ. Stability and almost periodicity of trajectories of periodic processes. J. Differential Equations, 115(2):402–415, 1995.
We observe for the first time the process e^+e^-→η h_c with data collected by the BESIII experiment. Significant signals are observed at the center-of-mass energy √(s)=4.226 GeV, and the Born cross section is measured to be (9.5^+2.2_-2.0± 2.7) pb. Evidence for η h_c is observed at √(s)=4.358 GeV with a Born cross section of (10.0^+3.1_-2.7± 2.6) pb, and upper limits on the production cross section at other center-of-mass energies between 4.085 and 4.600 GeV are determined.

PACS numbers: 13.25.Gv, 13.66.Bc, 14.40.Pq, 14.40.Rt

Observation of e^+e^-→η h_c at center-of-mass energies from 4.085 to 4.600 GeV

M. Ablikim^1, M. N. Achasov^9,d, S. Ahmed^14, X. C. Ai^1, O. Albayrak^5, M. Albrecht^4, D. J. Ambrose^45, A. Amoroso^50A,50C, F. F. An^1, Q. An^47,38, J. Z. Bai^1, O. Bakina^23, R. Baldini Ferroli^20A, Y. Ban^31, D. W. Bennett^19, J. V. Bennett^5, N. Berger^22, M. Bertani^20A, D. Bettoni^21A, J. M. Bian^44, F. Bianchi^50A,50C, E. Boger^23,b, I. Boyko^23, R. A. Briere^5, H. Cai^52, X. Cai^1,38, O. Cakir^41A, A. Calcaterra^20A, G. F. Cao^1,42, S. A. Cetin^41B, J. Chai^50C, J. F. Chang^1,38, G. Chelkov^23,b,c, G. Chen^1, H. S. Chen^1,42, J. C. Chen^1, M. L. Chen^1,38, S. Chen^42, S. J. Chen^29, X. Chen^1,38, X. R. Chen^26, Y. B. Chen^1,38, X. K. Chu^31, G. Cibinetto^21A, H. L. Dai^1,38, J. P. Dai^34,h, A. Dbeyssi^14, D. Dedovich^23, Z. Y. Deng^1, A. Denig^22, I. Denysenko^23, M. Destefanis^50A,50C, F. De Mori^50A,50C, Y. Ding^27, C. Dong^30, J. Dong^1,38, L. Y. Dong^1,42, M. Y. Dong^1,38,42, Z. L. Dou^29, S. X. Du^54, P. F. Duan^1, J. Z. Fan^40, J. Fang^1,38, S. S. Fang^1,42, X. Fang^47,38, Y. Fang^1, R. Farinelli^21A,21B, L. Fava^50B,50C, F. Feldbauer^22, G. Felici^20A, C. Q. Feng^47,38, E. Fioravanti^21A, M. Fritsch^22,14, C. D. Fu^1, Q. Gao^1, X. L. Gao^47,38, Y. Gao^40, Z. Gao^47,38, I. Garzia^21A, K. Goetzen^10, L. Gong^30, W. X. Gong^1,38, W. Gradl^22, M. Greco^50A,50C, M. H. Gu^1,38, Y. T. Gu^12, Y. H. Guan^1, A. Q. Guo^1, L. B. Guo^28, R. P. Guo^1, Y. Guo^1, Y. P. Guo^22, Z. Haddadi^25, A. Hafner^22, S. Han^52, X. Q. Hao^15, F. A. Harris^43, K. L. He^1,42, F. H. Heinsius^4, T. Held^4, Y. K. Heng^1,38,42, T. Holtmann^4, Z. L. Hou^1, C. Hu^28, H. M. Hu^1,42, T. Hu^1,38,42, Y. Hu^1, G. S. Huang^47,38, J. S. Huang^15, X. T. Huang^33, X. Z. Huang^29, Z. L. Huang^27, T. Hussain^49, W. Ikegami Andersson^51, Q. Ji^1, Q. P. Ji^15, X. B. Ji^1,42, X. L. Ji^1,38, L. W. Jiang^52, X. S. Jiang^1,38,42, X. Y. Jiang^30, J. B. Jiao^33, Z. Jiao^17, D. P. Jin^1,38,42, S. Jin^1,42, T. Johansson^51, A. Julin^44, N. Kalantar-Nayestanaki^25, X. L. Kang^1, X. S. Kang^30, M. Kavatsyuk^25, B. C. Ke^5, P. Kiese^22, R. Kliemt^10, B. Kloss^22, O. B. Kolcu^41B,f, B. Kopf^4, M. Kornicer^43, A. Kupsc^51, W. Kühn^24, J. S. Lange^24, M. Lara^19, P. Larin^14, H. Leithoff^22, C. Leng^50C, C. Li^51, Cheng Li^47,38, D. M. Li^54, F. Li^1,38, F. Y. Li^31, G. Li^1, H. B. Li^1,42, H. J. Li^1, J. C. Li^1, Jin Li^32, K. Li^33, K. Li^13, Lei Li^3, P. R. Li^42,7, Q. Y. Li^33, T. Li^33, W. D. Li^1,42, W. G. Li^1, X. L. Li^33, X. N. Li^1,38, X. Q. Li^30, Y. B. Li^2, Z. B. Li^39, H. Liang^47,38, Y. F. Liang^36, Y. T. Liang^24, G. R. Liao^11, D. X. Lin^14, B. Liu^34,h, B. J. Liu^1, C. X. Liu^1, D. Liu^47,38, F. H. Liu^35, Fang Liu^1, Feng Liu^6, H. B. Liu^12, H. H. Liu^16, H. H. Liu^1, H. M. Liu^1,42, J. Liu^1, J. B. Liu^47,38, J. P. Liu^52, J. Y. Liu^1, K. Liu^40, K. Y. Liu^27, L. D. Liu^31, P. L. Liu^1,38, Q. Liu^42, S. B. Liu^47,38, X. Liu^26, Y. B. Liu^30, Y. Y. Liu^30, Z. A. Liu^1,38,42, Zhiqing Liu^22, H.
Loehner^25, Y.  F. Long^31, X. C. Lou^1,38,42, H. J. Lu^17, J. G. Lu^1,38, Y. Lu^1, Y. P. Lu^1,38, C. L. Luo^28, M. X. Luo^53, T. Luo^43, X. L. Luo^1,38, X. R. Lyu^42, F. C. Ma^27, H. L. Ma^1, L. L.  Ma^33, M. M. Ma^1, Q. M. Ma^1, T. Ma^1, X. N. Ma^30, X. Y. Ma^1,38, Y. M. Ma^33, F. E. Maas^14, M. Maggiora^50A,50C, Q. A. Malik^49, Y. J. Mao^31, Z. P. Mao^1, S. Marcello^50A,50C, J. G. Messchendorp^25, G. Mezzadri^21B, J. Min^1,38, T. J. Min^1, R. E. Mitchell^19, X. H. Mo^1,38,42, Y. J. Mo^6, C. Morales Morales^14, N. Yu. Muchnoi^9,d, H. Muramatsu^44, P. Musiol^4, Y. Nefedov^23, F. Nerling^10, I. B. Nikolaev^9,d, Z. Ning^1,38, S. Nisar^8, S. L. Niu^1,38, X. Y. Niu^1, S. L. Olsen^32, Q. Ouyang^1,38,42, S. Pacetti^20B, Y. Pan^47,38, M. Papenbrock^51, P. Patteri^20A, M. Pelizaeus^4, H. P. Peng^47,38, K. Peters^10,g, J. Pettersson^51, J. L. Ping^28, R. G. Ping^1,42, R. Poling^44, V. Prasad^1, H. R. Qi^2, M. Qi^29, S. Qian^1,38, C. F. Qiao^42, L. Q. Qin^33, N. Qin^52, X. S. Qin^1, Z. H. Qin^1,38, J. F. Qiu^1, K. H. Rashid^49,i, C. F. Redmer^22, M. Ripka^22, G. Rong^1,42, Ch. Rosner^14, X. D. Ruan^12, A. Sarantsev^23,e, M. Savrié^21B, C. Schnier^4, K. Schoenning^51, W. Shan^31, M. Shao^47,38, C. P. Shen^2, P. X. Shen^30, X. Y. Shen^1,42, H. Y. Sheng^1, W. M. Song^1, X. Y. Song^1, S. Sosio^50A,50C, S. Spataro^50A,50C, G. X. Sun^1, J. F. Sun^15, S. S. Sun^1,42, X. H. Sun^1, Y. J. Sun^47,38, Y. Z. Sun^1, Z. J. Sun^1,38, Z. T. Sun^19, C. J. Tang^36, X. Tang^1, I. Tapan^41C, E. H. Thorndike^45, M. Tiemens^25, I. Uman^41D, G. S. Varner^43, B. Wang^30, B. L. Wang^42, D. Wang^31, D. Y. Wang^31, K. Wang^1,38, L. L. Wang^1, L. S. Wang^1, M. Wang^33, P. Wang^1, P. L. Wang^1, W. Wang^1,38, W. P. Wang^47,38, X. F.  Wang^40, Y. Wang^37, Y. D. Wang^14, Y. F. Wang^1,38,42, Y. Q. Wang^22, Z. Wang^1,38, Z. G. Wang^1,38, Z. H. Wang^47,38, Z. Y. Wang^1, Z. Y. Wang^1, T. Weber^22, D. H. Wei^11, P. Weidenkaff^22, S. P. Wen^1, U. Wiedner^4, M. Wolke^51, L. H. Wu^1, L. J. Wu^1, Z. Wu^1,38, L. Xia^47,38, L. G. Xia^40, Y. Xia^18, D. Xiao^1, H. Xiao^48, Z. J. Xiao^28, Y. G. Xie^1,38, Y. H. Xie^6, Q. L. Xiu^1,38, G. F. Xu^1, J. J. Xu^1, L. Xu^1, Q. J. Xu^13, Q. N. Xu^42, X. P. Xu^37, L. Yan^50A,50C, W. B. Yan^47,38, W. C. Yan^47,38, Y. H. Yan^18, H. J. Yang^34,h, H. X. Yang^1, L. Yang^52, Y. X. Yang^11, M. Ye^1,38, M. H. Ye^7, J. H. Yin^1, Z. Y. You^39, B. X. Yu^1,38,42, C. X. Yu^30, J. S. Yu^26, C. Z. Yuan^1,42, Y. Yuan^1, A. Yuncu^41B,a, A. A. Zafar^49, Y. Zeng^18, Z. Zeng^47,38, B. X. Zhang^1, B. Y. Zhang^1,38, C. C. Zhang^1, D. H. Zhang^1, H. H. Zhang^39, H. Y. Zhang^1,38, J. Zhang^1, J. J. Zhang^1, J. L. Zhang^1, J. Q. Zhang^1, J. W. Zhang^1,38,42, J. Y. Zhang^1, J. Z. Zhang^1,42, K. Zhang^1, L. Zhang^1, S. Q. Zhang^30, X. Y. Zhang^33, Y. Zhang^1, Y. Zhang^1, Y. H. Zhang^1,38, Y. N. Zhang^42, Y. T. Zhang^47,38, Yu Zhang^42, Z. H. Zhang^6, Z. P. Zhang^47, Z. Y. Zhang^52, G. Zhao^1, J. W. Zhao^1,38, J. Y. Zhao^1, J. Z. Zhao^1,38, Lei Zhao^47,38, Ling Zhao^1, M. G. Zhao^30, Q. Zhao^1, Q. W. Zhao^1, S. J. Zhao^54, T. C. Zhao^1, Y. B. Zhao^1,38, Z. G. Zhao^47,38, A. Zhemchugov^23,b, B. Zheng^48,14, J. P. Zheng^1,38, W. J. Zheng^33, Y. H. Zheng^42, B. Zhong^28, L. Zhou^1,38, X. Zhou^52, X. K. Zhou^47,38, X. R. Zhou^47,38, X. Y. Zhou^1, K. Zhu^1, K. J. Zhu^1,38,42, S. Zhu^1, S. H. Zhu^46, X. L. Zhu^40, Y. C. Zhu^47,38, Y. S. Zhu^1,42, Z. A. Zhu^1,42, J. Zhuang^1,38, L. Zotti^50A,50C, B. S. Zou^1, J. H. 
Zou^1 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 Beijing Institute of Petrochemical Technology, Beijing 102617, People's Republic of China^4 Bochum Ruhr-University, D-44780 Bochum, Germany^5 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^6 Central China Normal University, Wuhan 430079, People's Republic of China^7 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^8 COMSATS Institute of Information Technology, Lahore, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^9 G.I. Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^10 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^11 Guangxi Normal University, Guilin 541004, People's Republic of China^12 Guangxi University, Nanning 530004, People's Republic of China^13 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^14 Helmholtz Institute Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^15 Henan Normal University, Xinxiang 453007, People's Republic of China^16 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^17 Huangshan College, Huangshan 245000, People's Republic of China^18 Hunan University, Changsha 410082, People's Republic of China^19 Indiana University, Bloomington, Indiana 47405, USA^20 (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN and University of Perugia, I-06100, Perugia, Italy^21 (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara, I-44122, Ferrara, Italy^22 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^23 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^24 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^25 KVI-CART, University of Groningen, NL-9747 AA Groningen, The Netherlands^26 Lanzhou University, Lanzhou 730000, People's Republic of China^27 Liaoning University, Shenyang 110036, People's Republic of China^28 Nanjing Normal University, Nanjing 210023, People's Republic of China^29 Nanjing University, Nanjing 210093, People's Republic of China^30 Nankai University, Tianjin 300071, People's Republic of China^31 Peking University, Beijing 100871, People's Republic of China^32 Seoul National University, Seoul, 151-747 Korea^33 Shandong University, Jinan 250100, People's Republic of China^34 Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China^35 Shanxi University, Taiyuan 030006, People's Republic of China^36 Sichuan University, Chengdu 610064, People's Republic of China^37 Soochow University, Suzhou 215006, People's Republic of China^38 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^39 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^40 Tsinghua University, Beijing 100084, People's Republic of China^41 (A)Ankara University, 06100 Tandogan, Ankara, Turkey; (B)Istanbul Bilgi University, 34060 Eyup, Istanbul, Turkey; (C)Uludag University, 16059 Bursa, Turkey; (D)Near East University, Nicosia, North Cyprus, Mersin 10, Turkey^42 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^43 University of Hawaii, Honolulu, Hawaii 96822, USA^44 University of Minnesota, Minneapolis, Minnesota 55455, USA^45 University of Rochester, Rochester, New York 14627, USA^46 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^47 University of Science and Technology of China, Hefei 230026, People's Republic of China^48 University of South China, Hengyang 421001, People's Republic of China^49 University of the Punjab, Lahore-54590, Pakistan^50 (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^51 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^52 Wuhan University, Wuhan 430072, People's Republic of China^53 Zhejiang University, Hangzhou 310027, People's Republic of China^54 Zhengzhou University, Zhengzhou 450001, People's Republic of China^a Also at Bogazici University, 34342 Istanbul, Turkey^b Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^c Also at the Functional Electronics Laboratory, Tomsk State University, Tomsk, 634050, Russia^d Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^e Also at the NRC "Kurchatov Institute", PNPI, 188300, Gatchina, Russia^f Also at Istanbul Arel University, 34295 Istanbul, Turkey^g Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^h Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^i Government College Women University, Sialkot - 51310. Punjab, Pakistan. 
December 30, 2023
§ INTRODUCTION

The spectroscopy of charmonium states below the open charm threshold is well established, but the situation above the threshold is more complicated. From the inclusive hadronic cross section in e^+e^- annihilation, several vector charmonium states, the ψ(3770), ψ(4040), ψ(4160) and ψ(4415), are known with properties as expected in the quark model <cit.>. However, besides these states, several new vector states, namely the Y(4260), Y(4360) and Y(4660), have been discovered experimentally <cit.>. In addition, some new states with other quantum number configurations have been found in experiment, such as the X(3872), Z_c(3900) and Z_c(4020) states <cit.>. The common properties of these states are their relatively narrow widths for decaying into a pair of charmed mesons and their strong coupling to hidden-charm final states. It is therefore hard to explain all these resonances as charmonia, and they are collectively named `charmonium-like states'. Several unconventional explanations, such as hybrid charmonium <cit.>, tetraquark <cit.>, hadronic molecule <cit.>, diquarks <cit.> or kinematical effects <cit.>, have been suggested. See also Ref. <cit.> and references therein for a recent review.

To understand the nature of these charmonium-like states, it is mandatory to investigate both open and hidden charm decays. Most of the observed vector charmonium-like states decay to spin-triplet charmonium states at a large rate, since the spin alignment of the c and c̅ quarks does not need to be changed between initial and final states. However, the spin-flip process e^+e^-→ππ h_c has also been observed by the CLEO <cit.> and BESIII <cit.> experiments, and the large cross section exceeds theoretical expectations <cit.>. Furthermore, two new structures have been reported in e^+e^-→π^+π^- h_c <cit.>. This may suggest the existence of hybrid charmonium states with a cc̅ pair in a spin-singlet configuration, which couples easily to an h_c final state. Consequently, a search for the process e^+e^-→η h_c will provide more information about the spin-flip transition, and the structures observed in e^+e^-→ππ h_c may also appear in the η h_c process. In addition, the transition Υ(4S)→η h_b has been observed in the bottomonium system <cit.>. The analogous process in the charmonium system is worth searching for to understand the dynamics of η transitions between heavy quarkonia.
The CLEO Collaboration observed evidence for e^+e^-→η h_c with a significance of about 3σ, based on 586 pb^-1 of data taken at √(s)=4.17 GeV <cit.>; the measured cross section is (4.7± 2.2) pb. In comparison, BESIII has collected data samples of about 4.7 fb^-1 in total at √(s)>4.0 GeV. In this paper, a search is performed for the process e^+e^-→η h_c with h_c→γη_c based on data samples collected with the BESIII detector at center-of-mass (c.m.) energies from 4.085 to 4.600 GeV, as listed in Table <ref>. The integrated luminosities of these data samples are measured by analyzing large-angle Bhabha scattering events with an uncertainty of 1.0% <cit.>, and the c.m. energies are measured using the di-muon process <cit.>. In the analysis, the η_c is reconstructed in 16 hadronic final states: pp̅, 2(π^+ π^-), 2(K^+ K^-), K^+ K^- π^+ π^-, pp̅π^+ π^-, 3(π^+ π^-), K^+ K^- 2(π^+ π^-), K^+ K^- π^0, p p̅π^0, K^0_S K^±π^∓, K^0_S K^±π^∓π^±π^∓, π^+ π^- η, K^+ K^- η, 2(π^+ π^-) η, π^+ π^- π^0 π^0, and 2(π^+π^-) π^0 π^0, in which the K_S^0 is reconstructed from its π^+π^- decay, and the π^0 and η from their γγ final states.

§ DETECTOR AND DATA SAMPLES

BEPCII is a two-ring e^+e^- collider designed for a peak luminosity of 10^33 cm^-2s^-1 at a beam current of 0.93 A per beam. The cylindrical core of the BESIII detector consists of a helium-gas-based main drift chamber (MDC) for charged-particle tracking and particle identification (PID) through the specific energy loss dE/dx, a plastic scintillator time-of-flight (TOF) system for additional PID, and a 6240-crystal CsI(Tl) electromagnetic calorimeter (EMC) for electron identification and photon detection. These components are all enclosed in a superconducting solenoidal magnet providing a 1-T magnetic field. The solenoid is supported by an octagonal flux-return yoke instrumented with resistive-plate-counter muon detector modules interleaved with steel. The geometrical acceptance for charged tracks and photons is 93% of 4π, and the momentum resolution for charged tracks at 1 GeV/c is 0.5%. The photon energy resolutions in the barrel and end-cap regions are 2.5% and 5%, respectively. More details on the features and capabilities of BESIII are provided in Ref. <cit.>.

A Monte Carlo (MC) simulation is used to determine the detection efficiency and to estimate physics background. The detector response is modelled with a geant4-based <cit.> detector simulation package. Signal and background processes are generated with specialized models that have been packaged and customized for BESIII. 40,000 MC events are generated for each decay mode of η_c at each c.m. energy with kkmc <cit.> and besevtgen <cit.>. The events are generated with an h_c mass of 3525.28 MeV/c^2 and a width of 1.0 MeV. The E1 transition h_c→γη_c is generated with an angular distribution of 1+cos^2θ^*, where θ^* is the angle of the E1 photon with respect to the h_c helicity direction in the h_c rest frame. Multi-body η_c decays are generated uniformly in phase space. In order to study potential backgrounds, inclusive MC samples with the same size as the data are produced at √(s)=4.23, 4.26 and 4.36 GeV. They are generated using kkmc, which includes the decay of Y(4260), ISR production of the vector charmonium states, charmed meson production, QED events, and continuum processes.
The known decay modes of the resonances are generated with besevtgen, with branching fractions set to the world average values <cit.>. The remaining charmonium decays are generated with lundcharm <cit.>, while other hadronic events are generated with pythia <cit.>.

§ EVENT SELECTION AND STUDY OF BACKGROUND

According to the MC simulation of e^+e^-→η h_c with h_c→γη_c at √(s)=4.226 GeV, the energy of the photon emitted in the E1 transition h_c→γη_c is expected to be in the range (400, 600) MeV in the laboratory frame. Therefore, a signal event is required to have one E1 photon candidate with an energy in the expected range and one η candidate with a recoil mass in the region (3480, 3600) MeV/c^2. We define the η recoil mass M_recoil(η) via

M_recoil(η)^2 c^4 ≡ (E_cm - E_η)^2 - |p⃗_cm - p⃗_η|^2 c^2,

where (E_cm, p⃗_cm) and (E_η, p⃗_η) are the four-momenta of the e^+e^- system and of the η in the e^+e^- rest frame. Since the E1 photon energy distribution in the laboratory frame broadens with increasing c.m. energy, the energy window is enlarged to (350, 650) MeV for the data sets collected at √(s)>4.416 GeV. The η_c candidate is reconstructed from the hadronic system of the corresponding decay mode, and the invariant mass of the hadronic system is required to be within the range (2940, 3020) MeV/c^2. For the selected candidates, we apply a fit to the distribution of the η recoil mass to obtain the signal yield.

Charged tracks in BESIII are reconstructed from MDC hits within a fiducial range of |cosθ|<0.93, where θ is the polar angle of the track. We require that the point of closest approach (POCA) to the interaction point (IP) is within 10 cm in the beam direction and within 1 cm in the plane perpendicular to the beam direction. A vertex fit constrains all the charged tracks to a common production vertex, the position of which is determined run by run. Since the K_S^0 has a relatively long lifetime, it travels a certain distance in the detector before decaying into its daughter particles. The requirements on the track POCA and the vertex fit mentioned above are therefore not applied to its daughter particles. The TOF and dE/dx information are combined to form PID confidence levels (C.L.) for the pion, kaon, and proton hypotheses; both PID and kinematic fit information are used to determine the particle type of each charged track, as discussed below.

Electromagnetic showers are reconstructed by clustering EMC crystal energies. Efficiency and energy resolution are improved by including energy deposits in nearby TOF counters. A photon candidate is defined as a shower detected in the EMC exceeding a threshold of 25 MeV in the barrel region (|cosθ|<0.8) or of 50 MeV in the end-cap region (0.86<|cosθ|<0.92). Showers in the transition region between the barrel and the end-cap are excluded because of the poor reconstruction there. Moreover, EMC cluster timing requirements are used to suppress electronic noise and energy deposits unrelated to the event.

Candidates for π^0 (η) mesons are reconstructed from pairs of photons with an invariant mass M(γγ) satisfying |M(γγ)-m_π^0(η)|<15 MeV/c^2. A one-constraint (1C) kinematic fit with M(γγ) constrained to the π^0 (η) nominal mass m_π^0 (m_η) <cit.> is performed to improve the energy resolution. We reconstruct K^0_S→π^+π^- candidates from pairs of oppositely charged tracks with an invariant mass satisfying |M(ππ)-m_K_S|<20 MeV/c^2. Here, m_K_S denotes the nominal mass of the K_S^0 <cit.>.
A vertex fit constrains the charged tracks to a common decay vertex, and the corrected track parameters are used to calculate the invariant mass. To reject random π^+π^- combinations, a kinematic constraint between the production and decay vertices, called a secondary-vertex fit, is employed <cit.>, and the decay length is required to be more than twice the vertex resolution.

The η_c candidate is reconstructed in its decay to one of the 16 decay modes mentioned earlier. After the above selection, a four-constraint (4C) kinematic fit imposing overall energy-momentum conservation is performed for each event, and χ^2_4C is required to be less than 25 to suppress background events with different final states. If multiple η_c candidates are found in an event, only the one with the smallest χ^2 ≡ χ^2_4C + χ^2_1C + χ^2_pid + χ^2_vertex is retained, where χ^2_1C is the χ^2 of the 1C fit for the π^0 (η), χ^2_pid is the sum over all charged tracks of the χ^2 of the PID hypotheses, and χ^2_vertex is the χ^2 of the K_S^0 secondary-vertex fit. If more than one η candidate with a recoil mass in the h_c signal region (3480 < M_recoil(η) < 3600 MeV/c^2) is found, the one which leads to an η_c candidate mass closest to the η_c nominal mass m_η_c is selected.

The requirements on χ^2_4C and on the mass (energy) windows for the η, η_c and E1 photon reconstruction are determined by maximizing the figure of merit FOM = N_S/√(N_S+N_B), where N_S represents the number of signal events determined by MC simulation, and N_B represents the number of background events obtained from the h_c sidebands in the data sample. The cross section of e^+e^-→η h_c measured by CLEO <cit.> and the η_c branching ratios given by the Particle Data Group (PDG) <cit.> are used to scale the number of signal events in the optimization.

After applying all the criteria to the data sample taken at √(s)=4.226 GeV, the events cluster in the signal region of the two-dimensional distribution shown in Fig. <ref>(a). When the two-dimensional histogram is projected onto each axis, clear η_c and h_c signals are found in the expected regions, as shown in Fig. <ref>(b) and (c). Meanwhile, no structure is observed in the events from the η_c (h_c) sideband regions. To further understand the background shape, events located in the η sideband regions are also investigated; they are shown by the green shaded area in Fig. <ref>(d) and are well described by a smooth distribution.

In addition, inclusive MC samples generated at √(s)=4.23 GeV are analyzed to study the background components. Here, the ratios among different components are fixed according to theoretical calculations or experimental measurements, except for the Bhabha process. A sample of 1.0×10^7 Bhabha events (about 2% of the Bhabha events in real data) is generated with the Babayaga generator <cit.> for background estimation. From this study, based on the MC truth information, the dominant background sources are found to be continuum processes, while Y(4260) decays give only a small contribution to the total background. Most background events from resonance decays are ππ J/ψ, ωχ_c0 and open charm production. A similar conclusion can be drawn for the data samples taken at other c.m. energies. From the study above, we conclude that the background shape in the η recoil mass can be described by a linear function.
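The figure-of-merit optimization described earlier in this section lends itself to a compact numerical illustration. The following sketch, written under stated assumptions rather than taken from the BESIII analysis code, scans a χ^2_4C requirement together with an η_c mass-window half-width on toy samples and reports the combination that maximizes FOM = N_S/√(N_S+N_B); the toy distributions and the normalizations N_SIG_EXP and N_BKG_EXP (standing in for the scaling by the CLEO cross section and the sideband yield) are illustrative assumptions.

import numpy as np

# Toy stand-ins (assumptions): chi2_4C values and eta_c candidate masses
# (GeV/c^2) for "signal MC" and for "background" taken from h_c sidebands.
rng = np.random.default_rng(7)
sig_chi2 = rng.chisquare(df=4, size=40000)
sig_mass = rng.normal(2.984, 0.012, size=40000)
bkg_chi2 = rng.chisquare(df=10, size=20000)
bkg_mass = rng.uniform(2.90, 3.06, size=20000)

# Assumed expected yields used to scale the toy samples.
N_SIG_EXP, N_BKG_EXP = 60.0, 400.0
w_sig = N_SIG_EXP / len(sig_chi2)
w_bkg = N_BKG_EXP / len(bkg_chi2)

def fom(chi2_cut, half_window, m0=2.984):
    """Figure of merit N_S/sqrt(N_S+N_B) for one choice of requirements."""
    in_sig = (sig_chi2 < chi2_cut) & (np.abs(sig_mass - m0) < half_window)
    in_bkg = (bkg_chi2 < chi2_cut) & (np.abs(bkg_mass - m0) < half_window)
    n_s = w_sig * in_sig.sum()
    n_b = w_bkg * in_bkg.sum()
    return n_s / np.sqrt(n_s + n_b) if n_s + n_b > 0 else 0.0

# Grid scan over the two requirements; the grids are illustrative.
best = max(((c, w, fom(c, w))
            for c in np.arange(5.0, 60.0, 1.0)
            for w in np.arange(0.010, 0.080, 0.005)),
           key=lambda t: t[2])
print("chi2_4C < %.0f, |M - m0| < %.3f GeV/c^2, FOM = %.2f" % best)

In the analysis proper, the analogous scan runs over the η, η_c and E1-photon windows simultaneously, with the signal normalization fixed by the CLEO cross section and the PDG branching fractions as described above.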
§ FIT TO THE RECOIL MASS OF Η

To obtain the h_c yield for each η_c decay channel, the 16 η recoil mass distributions are fitted simultaneously using an unbinned maximum likelihood method. In the fit, the signal shape is determined by the MC simulation and the background shape is described by a linear function. The total signal yield of the 16 channels is denoted N_obs, which is a common parameter for all sub-samples and is required to be positive. N_obs × f_i is the signal yield of the i-th channel. Here, f_i is the weight factor f_i ≡ ℬ_iϵ_i/∑_iϵ_iℬ_i, in which ℬ_i denotes the branching fraction of η_c decays to the i-th final state and ϵ_i represents the corresponding efficiency. The efficiency for two-body η_c decays is about 20%, for three- or four-body decays about 10%, and for six-body decays about 6%. The signal and the background normalization for each mode are free parameters in the fit. The mode-by-mode and summed fit results are shown in Figs. <ref> and <ref>, respectively. The χ^2 per degree of freedom (dof) for this fit is χ^2/dof = 17.2/15 = 1.15, where sparsely populated bins are combined so that there are at least 7 counts per bin in the χ^2 calculation. The total signal yield is 41±9 with a statistical significance of 5.8σ.

With the same method, evidence for e^+e^-→η h_c is found in the data sample taken at √(s)=4.358 GeV, as shown in Fig. <ref>, but no obvious signals are observed for the data sets taken at other c.m. energies.

§ BORN CROSS SECTION MEASUREMENT

The Born cross section is calculated using the following formula:

σ^Born(e^+e^-→η h_c) = N_obs / [ℒ (1+δ) |1+Π|^2 ℬ(η→γγ) ℬ(h_c→γη_c) ∑_iϵ_iℬ_i].

Here, ℒ is the integrated luminosity of the data sample taken at each c.m. energy. (1+δ) is the radiative correction factor, which is defined as

(1+δ) = ∫σ(s(1-x)) F(x,s) dx / σ(s),

where F(x,s) is the radiator function, which is known from a QED calculation with an accuracy of 0.1% <cit.>. Here, s is the squared c.m. energy, and s(1-x) is the squared c.m. energy after emission of the ISR photons. σ(s) is the energy-dependent Born cross section in the range [4.07, 4.6] GeV. The radiative correction depends on the Born cross section from the production threshold up to the e^+e^- collision energy, which is precisely the quantity to be measured in this analysis. Therefore, the final Born cross section is obtained in an iterative way. The efficiencies from a set of signal MC samples without any radiative correction are used to calculate a first approximation to the observed cross section. Then, by taking the observed cross sections as inputs, new MC samples are generated with radiative correction, and the efficiencies as well as (1+δ) are updated. After that, the cross sections are recalculated accordingly. The iterations are repeated until a stable result is obtained. The values of (1+δ) from the last iteration are shown in Table <ref>.

The term |1+Π|^2 is the vacuum-polarization (VP) correction factor, which includes leptonic and hadronic contributions. This factor is calculated with the package provided in Ref. <cit.>. The package provides leptonic and hadronic VP both in the space-like and time-like regions. For the leptonic VP the complete one- and two-loop results and the known high-energy approximation for the three-loop corrections are included. The hadronic contributions are given in tabulated form in the subroutine hadr5n <cit.>. The |1+Π|^2 values are also shown in Table <ref>.
Table <ref> and Fig. <ref> show the energy-dependent Born cross sections from this measurement. Taking into account the CLEO measurement at √(s)=4.17 GeV <cit.>, the cross section from 4.085 to 4.600 GeV is parameterized as the coherent sum of three Breit-Wigner (BW) functions, as shown by the solid line in Fig. <ref>. In the fit, the parameters of the BW function around 4.36 GeV are fixed to those of the Y(4360) <cit.>, while the parameters of the other two BW functions are left free. The fitted parameters of the two free BW functions are M_1 = (4204±6) MeV/c^2, Γ_1 = (32 ± 22) MeV and M_2 = (4496±26) MeV/c^2, Γ_2 = (104 ± 69) MeV, where the uncertainties are statistical.

§ SYSTEMATIC UNCERTAINTIES

In this section, the study of the systematic uncertainty of the cross section measurement at √(s)=4.226 GeV is described. The same method is applied at the other c.m. energies. The main contributions to the systematic uncertainty come from the luminosity measurement, the fit method, ℬ(h_c→γη_c)ℬ(η→γγ), the ISR correction, the VP correction and ∑_iϵ_iℬ(η_c→X_i). The systematic uncertainties from the different sources are listed in Table <ref>. All sources are treated as uncorrelated, so the total systematic uncertainty is obtained by summing them in quadrature. The following subsections describe the procedures and assumptions that lead to these estimates.

§.§ Luminosity

The integrated luminosity is measured using Bhabha events, with an uncertainty of 1.0% <cit.>.

§.§ Signal shape

In the fit procedure, a discrepancy in the mass resolution between data and MC simulation, as well as the choices of background shape and fit range, introduces uncertainties in the results. Since the statistical fluctuations in the data sets are large, a stable and reliable estimate cannot be obtained by simply comparing two fits with different choices. To avoid the influence of statistical fluctuations, ensembles of simulated data samples (toy MC samples) are generated according to an alternative fit model with the same statistics as the data, and then fitted with both the nominal model and the alternative model. These trials are performed 500 times, and the deviation of the mean values in the two sets of trials is taken as the systematic uncertainty. The data samples taken at √(s)= 4.226, 4.258, 4.358, and 4.416 GeV are used to obtain an average uncertainty.

A discrepancy in the mass resolution and mass scale between data and MC simulation affects the fit result. To estimate this uncertainty, the signal shape is smeared and shifted by convolving it with a Gaussian function with a mean value of -1.2 MeV and a standard deviation of 0.04 MeV, which are obtained from the study of a control sample of e^+e^-→η J/ψ. Toy MC samples are generated according to the smeared MC shape and fitted with the smeared and unsmeared signal shapes. The average deviation determined from the four data samples is 7.5% and is taken as the systematic uncertainty.

§.§ Background shape

Similarly, to estimate the uncertainty due to the background shape, the sum of the signal shape and a second-order polynomial function, with parameters determined from the fit to data, is used to generate toy MC samples; these are then fitted with models containing a first-order and a second-order polynomial background, respectively. The average deviation from the four data samples is found to be 6.3% and is taken as the systematic uncertainty.

§.§ Fitting range

The systematic uncertainty associated with the fit range is determined by randomly varying the fit range 400 times.
§.§ Fitting range

The systematic uncertainty due to the fit range is determined by varying the fit range randomly 400 times. The standard deviation of the fit results is taken as the systematic uncertainty, which is determined to be 2.8% from the four data samples.

§.§ ℬ(h_c→γη_c)ℬ(η→γγ)

The branching fraction of h_c→γη_c is taken from Ref. <cit.>. The uncertainty of this measurement is 15.7%, and the uncertainty of ℬ(η→γγ) is 0.5% <cit.>. These uncertainties propagate to the cross section measurement.

§.§ ISR correction

To obtain the ISR correction factor, the energy-dependent cross section is parameterized with the coherent sum of three BW functions fitted to the cross sections measured in this analysis and the CLEO value at 4.17 GeV <cit.>. The uncertainty of the input cross section is estimated using two alternative models. First, the energy-dependent cross sections are fitted with the sum of a BW and a second-order polynomial function. Second, the cross sections are fitted with a second-order polynomial function only. The maximum difference in the ISR correction factor and detection efficiency among these hypotheses is taken as the systematic uncertainty due to the ISR correction.

§.§ Vacuum polarization correction

To investigate the uncertainty due to the vacuum polarization factor, we use the two available VP parameterizations <cit.>. The difference between them is 0.3% and is taken as the systematic uncertainty.

§.§ ∑_i ϵ_i ℬ(η_c→X_i)

The branching ratios ℬ(η_c→X_i) are taken from BESIII measurements <cit.>, and the uncertainty of each channel is given in Table <ref>. The systematic uncertainties associated with the efficiency include several items: tracking, photon and PID efficiency; K^0_S, π^0, η and η_c reconstruction; the kinematic fit; cross feed; and the size of the MC sample. The procedure to estimate each item is described below, and the results are also listed in Table <ref>.

* Charged track, photon reconstruction and PID efficiencies
Both the tracking and PID efficiency uncertainties for charged tracks from the interaction point are determined to be 1% per track, using the control samples J/ψ→π^+π^-π^0, J/ψ→pp̅π^+π^- and J/ψ→K_S^0K^+π^- + c.c. <cit.>. The uncertainty due to the reconstruction of photons is 1% per photon, determined from studies of e^+e^-→γμ^+μ^- control samples <cit.>.

* K_S^0 efficiency
The uncertainty caused by the K_S^0 reconstruction is studied with the processes J/ψ→K^*±K^∓ and J/ψ→ϕK^0_SK^±π^∓. The discrepancy of the K_S^0 reconstruction efficiency between data and MC simulation is found to be 1.2% and is taken as the systematic uncertainty.

* η/π^0 efficiency
To estimate the uncertainty due to the resolution difference in M(γγ) between data and MC simulation in the η and π^0 candidate selection, the MC shape of the η (π^0) is smeared by convolving it with a Gaussian function that represents the resolution discrepancy and is determined from the study of an e^+e^-→η J/ψ control sample. The difference of the reconstruction efficiencies with and without smearing is taken as the systematic uncertainty.

* η_c decay model
We use phase space to simulate η_c decays in our analysis. To estimate the systematic uncertainty due to neglecting intermediate states in these decays, we study the intermediate states in η_c decays from ψ(3686)→γη_c, η_c→X_i and generate MC samples accordingly. For channels with well-understood intermediate states, MC samples with these intermediate states are generated according to the relative branching ratios given by the PDG <cit.>. The spreads of the efficiencies obtained from the phase-space and the alternative MC samples are taken as the systematic uncertainties.
* η_c line shape
The uncertainties of the η_c line shape originate from the η_c model and from the errors of its resonant parameters. In the current MC generator, the η_c line shape is described by a BW function. However, for the E1 transition h_c→γη_c, a cubic photon-energy term with a damping factor at higher energies is introduced into the signal shape because of the transition matrix element and the phase space factor. To estimate this uncertainty, toy MC samples generated according to the model that takes the E1 photon-energy dependence into account are analyzed to obtain the efficiency difference. The uncertainties due to the η_c resonant parameters are assessed by varying m_η_c and Γ_η_c in the MC simulation within their errors given by the PDG <cit.>. The sum of these two items added in quadrature is taken as the systematic uncertainty due to the η_c line shape.

* Kinematic fit
For the signal MC samples, corrections to the track helix parameters and the corresponding covariance matrix of all charged tracks are applied to obtain improved agreement between data and MC simulation <cit.>. The difference between the efficiencies obtained with and without this correction is taken as the systematic uncertainty due to the kinematic fit.

* Cross feed
To check the contamination among the 16 decay modes of η_c, 40,000 MC events for each channel are used to test for event misidentification between the channels.

* Size of the MC sample
The efficiency of each channel is obtained by MC simulation; its statistical uncertainty is calculated according to a binomial distribution.

In the fit procedure, ϵ_i ℬ(η_c→X_i) / ∑_j ϵ_j ℬ(η_c→X_j) is used to constrain the relative strengths of the different η_c decay modes, so the uncertainty of ϵ_i ℬ(η_c→X_i) affects the fit results. In this case, we cannot simply add the uncertainty of ϵ_i ℬ(η_c→X_i) in quadrature with the other uncertainties. To account for the uncertainties of ϵ_i ℬ(η_c→X_i) and their influence on the simultaneous fit, we vary the ϵ_i ℬ(η_c→X_i) within their errors and refit the data sample; the resulting change of the cross section is taken as the systematic uncertainty. In this procedure, the systematic uncertainties are divided into two categories: the correlated part, which includes tracking, photon efficiency, PID efficiency, π^0/η/K_S^0 efficiency, the η_c line shape and the kinematic fit; and the uncorrelated part, which includes the η_c decay model, cross feed, the size of the MC samples and ℬ(η_c→X). These uncertainties are assumed to follow Gaussian distributions. The uncertainties of the correlated part are varied coherently (increasing or decreasing at the same time for all channels), while the uncertainties of the uncorrelated part are varied independently. We vary the uncertainties (both the correlated and the uncorrelated parts) with a Gaussian constraint and refit the data set 500 times. The cross sections obtained in these trials are fitted with a Gaussian function, whose standard deviation is taken as the systematic uncertainty. To obtain a conservative estimate, the maximum deviation of 16.7%, found among the data samples at √(s)= 4.226, 4.258, 4.358 and 4.416 GeV, is adopted as the systematic uncertainty from ∑_i ϵ_i ℬ(η_c→X_i) for all the data sets.

§ UPPER LIMIT WITH SYSTEMATIC UNCERTAINTY

For the data sets in which no significant η h_c signal is observed, an upper limit at the 90% C.L. on the cross section is set using a Bayesian method, assuming a flat prior in σ. In this method, the probability density function of the measured cross section σ, P(σ), is determined using a maximum likelihood fit. The 90% confidence limit L is then calculated by solving the equation 0.1 = ∫_L^∞ P(σ) dσ. To include the multiplicative systematics, P(σ) is convolved with a probability distribution function of the sensitivity, which refers to the denominator of Eq. (<ref>) and is assumed to be a Gaussian with central value Ŝ and standard deviation σ_s <cit.>:

P'(σ) = ∫_0^∞ P(S/Ŝ · σ) exp[-(S-Ŝ)^2 / (2σ_s^2)] dS.

Here, P(σ) is the likelihood distribution obtained from the fit, parameterized as a double Gaussian. By integrating P'(σ) we obtain the 90% C.L. upper limit taking the systematic uncertainties into account.
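A numerical sketch of this prescription is given below: a placeholder double-Gaussian likelihood (not the actual fit output) is smeared with the Gaussian sensitivity on a grid, and the 90% quantile of the normalized integral gives the limit. The grid ranges and the values of Ŝ and σ_s are assumptions for illustration.

```python
import numpy as np

def p_sigma(x):
    """Placeholder double-Gaussian likelihood P(sigma); real parameters
    would come from the maximum likelihood fit described above."""
    g = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return 0.7 * g(4.0, 1.5) + 0.3 * g(5.0, 3.0)

sig = np.linspace(0.0, 30.0, 3001)       # cross-section grid (pb)
S_hat, sigma_s = 1.0, 0.15               # relative sensitivity and its error
S = np.linspace(S_hat - 5 * sigma_s, S_hat + 5 * sigma_s, 401)
dS = S[1] - S[0]

# P'(sigma) = int P(S/S_hat * sigma) * exp(-(S - S_hat)^2 / (2 sigma_s^2)) dS
W = np.exp(-0.5 * ((S - S_hat) / sigma_s) ** 2)
P_smeared = (p_sigma(np.outer(S / S_hat, sig)) * W[:, None]).sum(axis=0) * dS

cdf = np.cumsum(P_smeared) / P_smeared.sum()   # normalized over sigma >= 0
limit = sig[np.searchsorted(cdf, 0.9)]         # 90% C.L. upper limit
print(f"90% C.L. upper limit: {limit:.2f} pb")
```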
§ RESULTS AND DISCUSSION

In this study, the Born cross section of e^+e^-→η h_c and its upper limits are measured, with statistical and systematic uncertainties, at c.m. energies from 4.085 to 4.600 GeV; the results are listed in Table <ref>. A clear signal of e^+e^-→η h_c is observed at √(s)=4.226 GeV for the first time, and the Born cross section is measured to be (9.5^+2.2_-2.0 ± 2.7) pb. We also observe evidence for the signal process at √(s)=4.358 GeV, with a cross section of (10.0^+3.1_-2.7 ± 2.6) pb. For the other c.m. energies considered, no significant signals are found, and upper limits on the cross section at the 90% C.L. are determined. The cross sections measured in this analysis and by CLEO <cit.> are modeled with a coherent sum of three BW functions (as shown in Fig. <ref>) to calculate the ISR correction factors. Comparing with the process e^+e^-→η J/ψ <cit.>, and supposing that both processes proceed through higher-mass vector charmonia, the ratio Γ(ψ→η h_c)/Γ(ψ→η J/ψ) is determined to be 0.20±0.07 at √(s)=4.23 GeV and 1.79±0.84 at 4.36 GeV. These results are larger than the theoretical expectations Γ(ψ(4160)→η h_c)/Γ(ψ(4160)→η J/ψ) = 0.07887 and Γ(ψ(4415)→η h_c)/Γ(ψ(4415)→η J/ψ) = 0.06736 <cit.>. Comparing with the cross section of e^+e^-→π^+π^- h_c <cit.>, we find that the cross section of e^+e^-→η h_c is smaller; however, due to the limited statistics, we cannot precisely determine the line shape of the energy-dependent cross section.

The BESIII collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by National Key Basic Research Program of China under Contract No. 2015CB856700; National Natural Science Foundation of China (NSFC) under Contracts Nos. 11235011, 11322544, 11335008, 11425524, 11635010; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); the Collaborative Innovation Center for Particles and Interactions (CICPI); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contracts Nos. U1232201, U1332201, U1532257, U1532258; CAS under Contracts Nos. KJCX2-YW-N29, KJCX2-YW-N45; 100 Talents Program of CAS; National 1000 Talents Program of China; INPAC and Shanghai Key Laboratory for Particle Physics and Cosmology; German Research Foundation DFG under Contracts Nos. Collaborative Research Center CRC 1044, FOR 2359; Istituto Nazionale di Fisica Nucleare, Italy; Koninklijke Nederlandse Akademie van Wetenschappen (KNAW) under Contract No. 530-4CDP03; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Natural Science Foundation of China (NSFC) under Contract No. 11505010; the Swedish Research Council; U.S. Department of Energy under Contracts Nos. DE-FG02-05ER41374, DE-SC-0010118, DE-SC-0010504, DE-SC-0012069; U.S.
National Science Foundation; University of Groningen (RuG) and the Helmholtzzentrum fuer Schwerionenforschung GmbH (GSI), Darmstadt; WCU Program of National Research Foundation of Korea under Contract No. R32-2008-000-10155-0.

Ablikim:2007gd M. Ablikim et al. [BES Collaboration], eConf C 070805, 02 (2007) [Phys. Lett. B 660, 315 (2008)]; T. Barnes, S. Godfrey and E. S. Swanson, Phys. Rev. D 72, 054026 (2005).
Aubert:2005rm B. Aubert et al. [BaBar Collaboration], Phys. Rev. Lett. 95, 142001 (2005).
He:2006kg Q. He et al. [CLEO Collaboration], Phys. Rev. D 74, 091104 (2006).
Yuan:2007sj C. Z. Yuan et al. [Belle Collaboration], Phys. Rev. Lett. 99, 182004 (2007).
Ablikim:2015tbp M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 115, 112003 (2015).
Aubert:2007zz B. Aubert et al. [BaBar Collaboration], Phys. Rev. Lett. 98, 212001 (2007).
Wang:2007ea X. L. Wang et al. [Belle Collaboration], Phys. Rev. Lett. 99, 142002 (2007).
Choi:2003ue S. K. Choi et al. [Belle Collaboration], Phys. Rev. Lett. 91, 262001 (2003).
Ablikim:2013mio M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 110, 252001 (2013).
Liu:2013dau Z. Q. Liu et al. [Belle Collaboration], Phys. Rev. Lett. 110, 252002 (2013).
Ablikim:2013xfr M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 112, 022001 (2014).
Ablikim:2015gda M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 115, 222002 (2015).
Ablikim:2013wzq M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 111, 242001 (2013).
Ablikim:2013emm M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 112, 132001 (2014).
Ablikim:2014dxl M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 113, 212002 (2014).
Ablikim:2015vvn M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 115, 182002 (2015).
Close:2005iz F. E. Close and P. R. Page, Phys. Lett. B 628, 215 (2005).
Zhu:2005hp S.-L. Zhu, Phys. Lett. B 625, 212 (2005); E. Kou and O. Pene, Phys. Lett. B 631, 164 (2005); X. Q. Luo and Y. Liu, Phys. Rev. D 74, 034502 (2006) [Phys. Rev. D 74, 039902 (2006)].
Chen:2016ejo Y. Chen, W. F. Chiu, M. Gong, L. C. Gui and Z. Liu, Chin. Phys. C 40, 081002 (2016).
Ebert:2005nc D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Lett. B 634, 214 (2006).
Maiani:2005pe L. Maiani, V. Riquer, F. Piccinini and A. D. Polosa, Phys. Rev. D 72, 031502 (2005).
TWChiu T. W. Chiu et al. [TWQCD Collaboration], Phys. Rev. D 73, 094510 (2006).
Liu:2005ay X. Liu, X.-Q. Zeng and X.-Q. Li, Phys. Rev. D 72, 054023 (2005).
CFQiao C. F. Qiao, Phys. Lett. B 639, 263 (2006).
Yuan:2005dr C. Z. Yuan, P. Wang and X. H. Mo, Phys. Lett. B 634, 399 (2006).
Chen:2015dig H.-X. Chen, L. Maiani, A. D. Polosa and V. Riquer, Eur. Phys. J. C 75, 550 (2015).
Padmanath:2015era M. Padmanath, C. B. Lang and S. Prelovsek, Phys. Rev. D 92, 034501 (2015).
Bugg:2008wu D. V. Bugg, J. Phys. G 35, 075005 (2008).
Chen:2011xk D. Y. Chen and X. Liu, Phys. Rev. D 84, 034032 (2011).
Wang:2013cya Q. Wang, C. Hanhart and Q. Zhao, Phys. Rev. Lett. 111, 132003 (2013).
Swanson:2014tra E. S. Swanson, Phys. Rev. D 91, 034009 (2015).
Wang:2013kra Q. Wang, M. Cleven, F. K. Guo, C. Hanhart, U. G. Meißner, X. G. Wu and Q. Zhao, Phys. Rev. D 89, 034001 (2014).
Lebed:2016hpi R. F. Lebed, R. E. Mitchell and E. S. Swanson, Prog. Part. Nucl. Phys. 93, 143 (2017).
CLEO:2011aa T. K. Pedlar et al. [CLEO Collaboration], Phys. Rev. Lett. 107, 041803 (2011).
BESIII:2016adj M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 118, 092002 (2017).
Voloshin:2004mh M. B. Voloshin, Phys. Lett. B 604, 69 (2004).
Tamponi:2015xzb U. Tamponi et al. [Belle Collaboration], Phys. Rev.
Lett. 115, 142001 (2015).
Ablikim:2015nan M. Ablikim et al. [BESIII Collaboration], Chin. Phys. C 39, 093001 (2015).
Ablikim:2015zaa M. Ablikim et al. [BESIII Collaboration], Chin. Phys. C 40, 063001 (2016).
ref:bes3 M. Ablikim et al. [BESIII Collaboration], Nucl. Instrum. Meth. A 614, 345 (2010).
Agostinelli:2002hh S. Agostinelli et al. [GEANT4 Collaboration], Nucl. Instrum. Meth. A 506, 250 (2003); Geant4 version: v09-03p0; Physics List simulation engine: BERT; Physics List engine packaging library: PACK 5.5.
Allison:2006ve J. Allison et al., IEEE Trans. Nucl. Sci. 53, 270 (2006).
ref:kkmc S. Jadach, B. F. L. Ward and Z. Was, Comput. Phys. Commun. 130, 260 (2000); S. Jadach, B. F. L. Ward and Z. Was, Phys. Rev. D 63, 113009 (2001).
ref:bes3gen R. G. Ping, Chin. Phys. C 32, 599 (2008); D. J. Lange, Nucl. Instrum. Meth. A 462, 152 (2001).
Olive:2016xmw C. Patrignani et al. [Particle Data Group], Chin. Phys. C 40, 100001 (2016).
Chen:2000tv J. C. Chen, G. S. Huang, X. R. Qi, D. H. Zhang and Y. S. Zhu, Phys. Rev. D 62, 034003 (2000).
Sjostrand:2001yu T. Sjöstrand et al., Comput. Phys. Commun. 191, 159 (2015).
ref::ks0-reconstruction M. Xu et al., Chin. Phys. C 33, 428 (2009).
Balossini:2006wc G. Balossini, C. M. Carloni Calame, G. Montagna, O. Nicrosini and F. Piccinini, Nucl. Phys. B 758, 227 (2006).
Kuraev:1985hb E. A. Kuraev and V. S. Fadin, Yad. Fiz. 41, 733 (1985) [Sov. J. Nucl. Phys. 41, 466 (1985)].
Eidelman:1995ny S. Eidelman and F. Jegerlehner, Z. Phys. C 67, 585 (1995).
Jegerlehner:2011mw F. Jegerlehner, Nuovo Cim. C 034S1, 31 (2011).
Ablikim:2010rc M. Ablikim et al. [BESIII Collaboration], Phys. Rev. Lett. 104, 132002 (2010).
Actis:2010gg S. Actis et al. [Working Group on Radiative Corrections and Monte Carlo Generators for Low Energies Collaboration], Eur. Phys. J. C 66, 585 (2010).
Ablikim:2012ur M. Ablikim et al. [BESIII Collaboration], Phys. Rev. D 86, 092009 (2012).
Ablikim:2011kv M. Ablikim et al. [BESIII Collaboration], Phys. Rev. D 83, 112005 (2011).
Prasad:2015bra V. Prasad, C. Liu, X. Ji, W. Li, H. Liu and X. Lou, Physics 174, 577 (2016).
Ablikim:2012pg M. Ablikim et al. [BESIII Collaboration], Phys. Rev. D 87, 012002 (2013).
KStenson K. Stenson, arXiv:physics/0605236.
Ablikim:2015xhk M. Ablikim et al. [BESIII Collaboration], Phys. Rev. D 91, 112005 (2015).
Muhammad M. N. Anwar, Y. Lu, B. S. Zou, arXiv:1612.05396.
A computation-oriented representation of uncertain kinetic systems is introduced and analysed in this paper. It is assumed that the monomial coefficients of the ODEs belong to a polytopic set, which defines a set of dynamical systems for an uncertain model. An optimization-based computation model is proposed for the structural analysis of uncertain models. It is shown that the so-called dense realization containing the maximum number of reactions (directed edges) is computable in polynomial time, and that it forms a super-structure among all the possible reaction graphs corresponding to an uncertain kinetic model, assuming a fixed set of complexes. The set of core reactions present in all reaction graphs of an uncertain model is also studied. Most importantly, an algorithm is proposed to compute all possible reaction graph structures for an uncertain kinetic model.

Keywords: reaction networks, uncertain models, reaction graphs, algorithms, convex optimization

§ INTRODUCTION

Kinetic models in the form of nonlinear ordinary differential equations are widely used for describing time-varying physico-chemical quantities in (bio-)chemical environments <cit.>. Moreover, the kinetic system class is dynamically rich enough to characterize general nonlinear behaviour in other application fields as well, particularly where the state variables are nonnegative and the model has a networked structure, such as in the modelling of process systems, population or disease dynamics, or even transportation processes <cit.>. In biochemical applications, the exact values (or even sharp estimates) of the model parameters are often not known, making the models uncertain <cit.>. Even when we have measurements of sufficient quantity and quality, the lack of structural or practical identifiability may result in highly uncertain models even with the most sophisticated estimation methods <cit.>. This inherent uncertainty was a key factor in the development of Chemical Reaction Network Theory (CRNT), where (among other goals) a primary interest is to study the relations between the network structure and the qualitative properties of the corresponding dynamics, preferably without precise knowledge of the model parameters. Among the earlier results of CRNT, we mention the well-known Deficiency One and Deficiency Zero Theorems <cit.>, which opened the way towards a structure-based (essentially parameter-free) dynamical analysis of biological networks. Particularly important recent findings in this area are the identification of biologically plausible structural sources of absolute concentration robustness <cit.>, and the proof of the Global Attractor Conjecture <cit.>.

The efficient treatment of uncertain quantitative models is a fundamental task in mathematics, physics, (bio)chemistry and in related engineering fields <cit.>. An important early result is <cit.>, where the solutions of linear compartmental systems are studied with uncertain flow rates that are assumed to belong to known intervals. In <cit.> a probabilistic framework is proposed for the representation and analysis of uncertain kinetic systems. In <cit.> an analytical expression is computed for the temperature dependence of the uncertainty of reaction rate coefficients, and a method is proposed for computing the covariance matrix and the joint probability density function of the Arrhenius parameters.
A recent outstanding result is <cit.>, where a deterministic computational interpolation scheme for uncertain reaction network models is proposed, which is able to handle large-scale models with hundreds of species and kinetic parameters.

The description of model uncertainties using convex sets is often a computationally appealing way of solving model analysis, estimation or control problems <cit.>. From the numerous applications, we mention here only a few selected works from different fields. In <cit.>, a stabilization scheme was given for nonlinear control system models, where the uncertain coefficients of smooth basis functions in the system equations are assumed to form a polytopic set. An interval representation of fluxes in metabolic networks was introduced in <cit.>, which enables the computation of the α-spectrum even from an uncertain flux distribution. In <cit.>, a nonlinear feedback design method is proposed which is able to robustly stabilize parametrically uncertain kinetic systems using the convexity of the constraint ensuring the complex balance property. Recently, a new approach was given for the stability analysis of general Lotka-Volterra models with polytopic parameter uncertainties in <cit.>.

It is known from the fundamental dogma of chemical kinetics that the reaction graph structure corresponding to a kinetic ODE model is generally non-unique, even when the rate coefficients are assumed to be known <cit.>. This property is usually called dynamical equivalence, macro-equivalence or confoundability in the literature <cit.>. The first solution to the inverse problem, namely the construction of one possible reaction network (called the canonical network) for a given set of kinetic differential equations, was described in <cit.>. The notion of dynamical equivalence was extended by introducing linear conjugacy of kinetic systems in <cit.>, allowing a positive diagonal transformation between the solutions of the kinetic differential equations. The simple factorization of kinetic models containing the Laplacian matrix of the reaction graph allows the development of efficient methods in various optimization frameworks for computing reaction networks realizing or linearly conjugate to a given dynamics with preferred properties such as density/sparsity <cit.>, weak reversibility <cit.>, complex or detailed balance <cit.>, and minimal or zero deficiency <cit.>. Using the superstructure property of the so-called dense realizations, it is possible to algorithmically generate all possible reaction graph structures corresponding to linearly conjugate realizations of a kinetic dynamics <cit.>.

Even if the monomials of a kinetic system are known, the parameters (i.e., the monomial coefficients) are often uncertain in practice. For example, one may consider the situation when a kinetic polynomial ODE model with fixed structure is identified from noisy measurement data. In such a case, using the covariance matrix of the estimates and the nonnegativity/kinetic constraints for the system model, we can define a simple interval-based (see, e.g. <cit.>), or a more general (e.g., polytopic or ellipsoidal) uncertain model <cit.>.
Based on the above, the goal of this paper is to extend and illustrate previously introduced notions, computational models and algorithms for kinetic systems with polytopic uncertainty.

§ NOTATIONS AND COMPUTATIONAL BACKGROUND

In this section we summarize the basic notions of kinetic polynomial systems and of the generalized model defined with uncertain parameters. The applied general notations are listed below:

ℝ — the set of real numbers
ℝ_+ — the set of nonnegative real numbers
ℕ — the set of natural numbers
H^n×m — the set of matrices having entries from a set H, with n rows and m columns
[M]_ij — the entry of matrix M with row index i and column index j
[M]_.j — the jth column of matrix M
R_j — the jth coordinate of vector R
0^n — the null vector in ℝ^n
1^n — the vector in ℝ^n with all coordinates equal to 1
e_i^n — the vector in ℝ^n whose ith coordinate is 1 and all other coordinates are zero

§.§ Kinetic polynomial systems and their models

Nonnegative polynomial systems are defined in the following general form:

ẋ = M · φ(x)

where x: ℝ → ℝ_+^n is a nonnegative valued function, M ∈ ℝ^n×p is a coefficient matrix and φ: ℝ_+^n → ℝ_+^p is a monomial-type vector-mapping. The invariance of the nonnegative orthant with respect to the dynamics (<ref>) can be ensured by prescribing sign conditions for the entries of matrix M, depending on the exponents of φ, see <cit.>.

In this paper, we treat kinetic models as a general nonlinear system class that is suitable for the description of biochemical reaction networks. Hence, we do not require that all models belonging to the studied class are actually chemically realizable. Several physically or chemically relevant properties, such as component mass conservation and detailed or complex balance, can be ensured by adding further constraints to the computations (see, e.g. <cit.>).

A chemical reaction network (CRN) can be characterized by three sets <cit.>:

species: 𝒮 = {X_i | i ∈ {1,…,n}}
complexes: 𝒞 = {C_j = ∑_i=1^n α_ji X_i | α_ji ∈ ℕ, j ∈ {1,…,m}, i ∈ {1,…,n}}
reactions: ℛ ⊆ {(C_i,C_j) | C_i, C_j ∈ 𝒞}

For all i,j ∈ {1,…,m}, i ≠ j, the reaction C_i → C_j is represented by the ordered pair (C_i,C_j), and it is described by a nonnegative real number k_ij ∈ ℝ_+ called the reaction rate coefficient. The reaction C_i → C_j is present in the reaction network if and only if k_ij is strictly positive. The relation between species and complexes is described by the complex composition matrix Y ∈ ℝ^n×m, the columns of which correspond to the complexes, i.e.

[Y]_ij = α_ji,  i ∈ {1,…,n}, j ∈ {1,…,m}.

The presence of the reactions in the CRN is defined through the rate coefficients as the off-diagonal entries of the Kirchhoff matrix A_k ∈ ℝ^m×m, which is a Metzler compartmental matrix with zero column sums. Its entries are defined as

[A_k]_ij = k_ji if i ≠ j, and [A_k]_ii = -∑_l=1, l≠i^m k_il,  i,j ∈ {1,…,m}.

According to this notation, the reaction C_i → C_j takes place in the reaction network if and only if [A_k]_ji is positive, and [A_k]_ji = 0 implies that (C_i,C_j) ∉ ℛ.
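These definitions translate directly into code. The sketch below assembles a Kirchhoff matrix from a set of rate coefficients and evaluates the mass-action right-hand side Y · A_k · ψ^Y(x); the composition matrix is the one that reappears in Example 1 below, while the chosen rate coefficients and state are merely illustrative.

```python
import numpy as np

Y = np.array([[0, 3, 2],     # column j encodes complex C_j:
              [3, 0, 1]])    # C_1 = 3X_2, C_2 = 3X_1, C_3 = 2X_1 + X_2

def kirchhoff(m, rates):
    """rates: dict {(i, j): k_ij} for reactions C_i -> C_j (1-based indices)."""
    A = np.zeros((m, m))
    for (i, j), k in rates.items():
        A[j - 1, i - 1] = k          # off-diagonal entry [A_k]_ji = k_ij
    A -= np.diag(A.sum(axis=0))      # diagonal chosen so that column sums are zero
    return A

def psi(x, Y):
    """Monomial vector of the CRN: psi_j(x) = prod_i x_i ** [Y]_ij."""
    return np.prod(np.power(x[:, None], Y), axis=0)

A_k = kirchhoff(3, {(1, 2): 1.0, (2, 3): 0.5})   # two illustrative reactions
x = np.array([0.2, 1.3])
xdot = Y @ A_k @ psi(x, Y)                        # right-hand side of the kinetic ODE
print(xdot)
```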
Since a chemical reaction network is uniquely characterized by the matrices Y and A_k, we refer to a CRN by the corresponding pair (Y,A_k). If mass action kinetics is assumed, the equations governing the dynamics of the concentrations of the species in the CRN, described by the function x: ℝ → ℝ_+^n, can be written in the form

ẋ = Y · A_k · ψ^Y(x)

where ψ^Y: ℝ_+^n → ℝ_+^m is the monomial function of the CRN with coordinate functions

ψ^Y_j(x) = ∏_i=1^n x_i^[Y]_ij,  j ∈ {1,…,m}.

The nonnegative polynomial system (<ref>) is called a kinetic system if there exists a reaction network (Y,A_k) such that its dynamics satisfies the equation <cit.>

M · φ(x) = Y · A_k · ψ^Y(x).

As mentioned in the Introduction, reaction networks with different sets of complexes and reactions may be governed by the same dynamics. If Equation (<ref>) is fulfilled, then the CRN (Y,A_k) is called a dynamically equivalent realization of the kinetic system (<ref>). The description of the polynomial system (<ref>) can be transformed so that the monomial function φ is equal to ψ^Y (and p = m holds) while the described dynamics remains the same. After this transformation and a simplification based on the properties of polynomials, Equation (<ref>) reduces to

M = Y · A_k.

Reaction networks have another representation, which is more suitable for illustrating the structural properties. It is a weighted directed graph G(V,E) called the Feinberg-Horn-Jackson graph, or reaction graph for brevity <cit.>. The complexes are represented by the vertices and the reactions by the edges. Let the vertices v_i and v_j correspond to the complexes C_i and C_j, respectively. Then there is a directed edge (v_i, v_j) ∈ E(G) with weight k_ij if and only if the reaction C_i → C_j takes place in the CRN.

§.§ Uncertain kinetic systems

For the uncertainty modelling, we assume that the monomial coefficients in matrix M are constant but uncertain, and that they belong to an n·m dimensional polyhedron. In the previous sections the set of uncertain parameters was referred to as a polytope or a polytopic set, but from now on we also use the notion of a polyhedron. The former is defined as the convex hull of its vertices, while the latter is an intersection of halfspaces, and the two definitions are not equivalent in general. However, in the examined problems it is assumed that the parameters of the kinetic models are bounded, and a bounded polyhedron is equivalent to a bounded polytope.

We represent the matrix M as a point denoted by M in the Euclidean space ℝ^nm. In the uncertain model it is assumed that the possible points M are exactly the points of a closed convex polyhedron 𝒫, which is defined as the intersection of q halfspaces. The boundaries of the halfspaces are hyperplanes with normal vectors n_1,…,n_q ∈ ℝ^nm and constants b_1,…,b_q ∈ ℝ. Applying these notations, the polyhedron 𝒫 can be described by the linear inequality system

𝒫 = {M ∈ ℝ^nm | M^⊤ · n_i ≤ b_i, 1 ≤ i ≤ q}.

For the characterization of the polyhedron 𝒫 not only the possible values of the parameters should be considered, but also the kinetic property of the polynomial system.
This can be ensured (see <cit.>) by prescribing the sign pattern of the matrix M as follows:

[Y]_ij = 0 ⟹ [M]_ij ≥ 0,  i ∈ {1,…,n}, j ∈ {1,…,m}.

These constraints are of the same form as the inequalities in Equation (<ref>); for example, the constraint M_j ≥ 0 can be written by choosing the normal vector n_i to be the unit vector -e_j^nm and the constant b_i to be zero.

We note that there is a special case when the possible values of the parameters of the polynomial system are given as intervals, and the polyhedron 𝒫 is a cuboid.

It is possible to define a set L of finitely many additional linear constraints on the variables to characterize a special property of the realizations, for example a set of reactions to be excluded, or mass conservation on a given level, see e.g. <cit.>. These constraints can affect not only the entries of the coefficient matrix M but also the Kirchhoff matrix of the realizations. If the Kirchhoff matrix A_k of the realization is represented by the point A_k ∈ ℝ^m^2-m storing the off-diagonal elements, and r is the number of constraints in the set L, then the constraints can be written in the form

M^⊤ · α_i + A_k^⊤ · β_i ≤ d_i

where α_i ∈ ℝ^nm, β_i ∈ ℝ^m^2-m and d_i ∈ ℝ hold for all i ∈ {1,…,r}. These constraints do not change the general properties of the model and, as it will be shown in Section <ref>, they can be handled within a linear programming problem. In the case of the uncertain model, we examine realizations assuming a fixed set of complexes. Therefore, the known parameters are the polyhedron 𝒫, the set L of constraints and the matrix Y. Hence a constrained uncertain kinetic system is referred to as the triple [𝒫,L,Y], but we will call it an uncertain kinetic system for brevity.

A reaction network (Y,A_k) is called a realization of the uncertain kinetic system [𝒫,L,Y] if there exists a coefficient matrix M ∈ ℝ^n×m such that the equation M = Y · A_k holds, the point M is in the polyhedron 𝒫, and the entries of the matrices M and A_k fulfil the set L of constraints. Since the matrix Y is fixed but the coefficients of the polynomial system can vary, such a realization is referred to as the matrix pair (M,A_k).

§.§ Computational model

Assuming a fixed set of complexes, a realization (M,A_k) of an uncertain kinetic system [𝒫,L,Y] can be computed using a linear optimization framework. In the constraint satisfaction or optimization model, the variables are the entries of the matrix M and the off-diagonal entries of the matrix A_k. The constraints regarding the realizations of the uncertain model can be written as follows:

M^⊤ · n_i ≤ b_i,  i ∈ {1,…,q}
M = Y · A_k
[A_k]_ij ≥ 0,  i ≠ j,  i,j ∈ {1,…,m}
∑_i=1^m [A_k]_ij = 0,  j ∈ {1,…,m}

Equations (<ref>) ensure that the parameters of the dynamics correspond to a point of the polyhedron 𝒫. Dynamical equivalence is defined by Equation (<ref>), while Equations (<ref>) and (<ref>) are required for the Kirchhoff property of the matrix A_k to be fulfilled. Moreover, the constraints in the set L can be written in the form of Equation (<ref>). The objective function of the optimization model can be defined according to the desired properties of the realization; for example, in order to examine whether the reaction C_i → C_j can be present in the reaction network or not, the objective can be defined as max [A_k]_ji. We apply the representation of the realizations of the uncertain model as points of the Euclidean space ℝ^m^2-m+nm.
The coordinates with indices i ∈ {1,…,m^2-m} characterize the Kirchhoff matrix of the realization, and the remaining coordinates j ∈ {m^2-m+1,…,m^2-m+nm} define the coefficient matrix M of the polynomial system. Due to the linearity of the constraints in the computational model, the set of possible realizations of an uncertain kinetic system [𝒫,L,Y] is a convex bounded polyhedron, denoted by 𝒬.
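As a concrete illustration of the optimization model (in the special cuboid case of 𝒫), the following sketch encodes the variables, the dynamical-equivalence equalities and the Kirchhoff sign constraints for scipy's linprog, and maximizes one off-diagonal entry of A_k to test whether the corresponding reaction can occur. The matrix Y is the one from Example 1 below; the box bounds on M are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

Y = np.array([[0., 3., 2.],
              [3., 0., 1.]])
n, m = Y.shape
pairs = [(i, j) for j in range(m) for i in range(m) if i != j]  # [A_k]_ij, i != j
na, nM = len(pairs), n * m

# Equalities: for each (s,j), sum_{i != j} (Y[s,i] - Y[s,j]) * a_ij - M[s,j] = 0,
# where the diagonal of A_k was eliminated using the zero-column-sum property.
A_eq = np.zeros((nM, na + nM))
for s in range(n):
    for j in range(m):
        r = s * m + j
        for e, (i, jj) in enumerate(pairs):
            if jj == j:
                A_eq[r, e] = Y[s, i] - Y[s, j]
        A_eq[r, na + r] = -1.0
b_eq = np.zeros(nM)

# Hypothetical cuboid P: a +/-20% box around a nominal coefficient matrix.
M_lo = np.array([[2.4, -2.4, 0.0], [-3.6, 1.6, 0.0]]).ravel()
M_hi = np.array([[3.6, -1.6, 0.0], [-2.4, 2.4, 0.0]]).ravel()
bounds = [(0, None)] * na + list(zip(M_lo, M_hi))   # [A_k]_ij >= 0 off-diagonal

c = np.zeros(na + nM)
c[pairs.index((1, 0))] = -1.0    # maximize [A_k]_21, i.e. the rate of C_1 -> C_2
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
if res.status == 0:              # feasible: a realization with this reaction exists
    print("max k_12 =", res.x[pairs.index((1, 0))])
```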
It follows from Proposition <ref> that the structure of the dense realization is unique.If there were two different dense realizations, then the reaction graphs representing them would contain eachother as subgraphs, which implies that these graphs are structurally identical.The dense and sparse realizations are useful for checking the structural uniqueness of the uncertain model.The dense and sparse realizations of an uncertain kinetic system [𝒫,L,Y] have the same number ofreactions if and only if all realizations of the model are structurally identical.According to the definitions if in the dense and sparse realizations there is the samenumber of reactions, then in all realizations there must be the same number of reactions. Since the structureof the dense realization is unique, there cannot be two realizations with the maximalnumber of reactions but different structures, therefore all realizations must be structurally identical tothe dense realization. The converse statement is trivial: If all the realizations of the model are structurally identical, then thedense and sparse realizations must have identical structures, too.§.§ Polynomial-time algorithm to determine dense realizationsA dense realization of the uncertain kinetic system can be computed by the application of arecursive polynomial-time algorithm. The basic principle of the method is similar to the one presented in<cit.>: To each reaction a realization is assigned where the reaction takes place, if it is possible.In general, the same realization can be assigned to several reactions. Therefore, there is no need to performa separate computation step for each reaction. The convex combination of the assigned realizations is also a realization of the uncertain model.If all the coefficients of the convex combination are positive then all reactions that take place inany of the assigned realizations are present in the convex combination as well. Consequently, the obtained realization represents a dense realization, where all reactions arepresent that are possible.The computation can be performed in polynomial time since it requires at most m^2-m steps of LP optimizationand some minor computation.It follows from the operation of the algorithm that if there are at least two realizationsassigned to reactions as defined, then there are infinitely many dense realizations, since at least onecoefficient of the convex combination can be chosen arbitrarily from the interval (0,1). In the algorithm the assigned realizations are represented as points in ℝ^m^2-m+nm andare determined using the following procedure:FindPositive([𝒫,L,Y],H) returns a pair (R,B). The point R ∈𝒬 represents a realization of the uncertain model [𝒫,L,Y] for which the value of theobjective function ∑_j ∈ H R_j considering a set H ⊆{1, …, m^2-m} of indices ismaximal. The other returned object is a set B of indices where k ∈ B if and only if Q_k >0.If there is no realization fulfilling the constraints then the pair (0,∅) is returned.In the algorithm we apply the arithmetic mean as convex combination, i.e. if the number ofthe assigned realizations is k then all the coefficients of the convex combination are 1/k. The realization returned by Algorithm 1 is a dense realization of the uncertain kineticsystem. 
Since the set of all possible solutions can be represented as a convex polyhedron,the point Result computed as the convex combination of realizations is indeed a realization of theuncertain kinetic system [𝒫,L,Y].Let us assume by contradiction that the returned point Result does not represent the dense realization. Thenthere is a reaction (C_i,C_j) which is present in the dense realization but it does not take place inResult. By the operation of the algorithm it follows that there must be a realization assigned to thereaction (C_i,C_j), consequently this reaction takes place in the realization computed as the convexcombination of the assigned realizations as well. This is a contradiction.§.§ Core reactions of uncertain modelsA reaction is called core reaction of a kinetic system if it is present in every realization of thekinetic system <cit.>. It is possible that there are no core reactions, but there can be several of them as well. If all the realizations are structurally identical, then by Proposition <ref> it follows thateach reaction is a core reaction. The notion of core reactions can be extended to the case of uncertain models in a straightforward way. A reaction C_i → C_j is called a core reaction of the uncertain kinetic system[𝒫,L,Y]if it is present in each realization of the model, considering all possible coefficientmatrices M for which M∈𝒫 holds. Let [𝒫,L,Y] and [𝒫',L,Y] be two uncertain kinetic systems considering the same sets ofcomplexes and additional linear constraints so that the polyhedron 𝒫' is a subset of𝒫. If the sets of core reactions inthe models are denoted as C_𝒫 and C_𝒫', respectively, thenC_𝒫⊆ C_𝒫' must hold.This property holds even if 𝒫' is a single point in ℝ^nm and [𝒫',L,Y] isa kinetic system defined as an uncertain kinetic system.The set of core reactions of an uncertain kinetic system can be computed using a polynomial-time algorithm. This method has been first published in <cit.> for a special case, where the coefficients of thepolynomial system have to be in predefined intervals, therefore the polyhedron 𝒫 is a cuboid.Since the model applies only the property that all the constraints characterizing the model are linear, it canbe applied without any modification to uncertain kinetic systems as well.The question whether a certain reaction is a core reaction of a kinetic model or not, can be answered bysolving a linear optimization problem. If this question has to bedecided for all possible reactions, thecomputation can be done more effectively than doing separate optimization steps for every reaction. The ideais to minimize the sum of variables representing the off-diagonal entries of the Kirchhoff matrix. Generally, several variables in the minimized sum are zero in the computed realization, which means that thereactions corresponding to these variables are not core reactions. This step is repeated with the remainingset of variables until the computation does not return any non-core reactions. Finally, the remaining variables need to bechecked one-by-one.In the algorithm we refer to sets of indices corresponding to the off-diagonal entries of theKirchhoff matrix A_k by their characteristic vectors. 
The set B ⊆{1, …, m^2-m} represented by the vector b ∈{0,1}^m^2-m, which is defined as b_i= 1 ifi ∈ B0 ifi ∉ B The procedure applied during the computation is more formally the following: FindNonCore([𝒫,L,Y],b) computes a realization of the uncertain kinetic system[𝒫,L,Y] represented as a point R ∈ℝ^m^2-m+nm, for which the sum of the coordinateswith indices in the set B ∈{1, …, m^2-m} is minimal.The procedure returns the vector c, the characteristic vector of the set C whichcontains the indices corresponding to zero entries of the Kirchhoff matrix of the realization R, i.e.C ⊆{1, …, m^2-m} and [ i ∈ C⟺ R_i=0 ]. We also need to utilize some operations on the sets represented by their characteristic vectors:b*c represents the set B ∩ C, i.e. it is an element-wise `logical and'c represents the complement of the set C,i.e. it is an element-wise negation.Algorithm 2 computes the set of core reactions of an uncertain kinetic system [𝒫,L,Y] in polynomial time.Let us assume by contradiction that the algorithm does not return the proper set of core reactions. There can be two different types of error: a) Let us assume that there is an index i for which the corresponding reaction is a core reaction, butaccording to the algorithm it is not. In this case there must be a realization R computed by the algorithmso that R_i is zero. This is a contradiction.b) Let us assume that there is an index j for which the corresponding reaction is not a core reaction butthe algorithm returns the opposite answer. Consequently, after the while loop of the computation (from line 8) thecoordinate b_j must be equal to 1. Then the remaining possible core reactions are examined one by one,therefore the procedure FindNonCore([𝒫,L,Y],e_j^m^2-m) is also applied. According to the assumption therealization R computed by the procedure must be so that R_j is zero, which also yields a contradiction.The computation according to the algorithm can be performed in polynomial time, since it requires the solutionof at most m^2-m LP optimization problems and some additional minor computation steps. § ALGORITHM TO DETERMINE ALL POSSIBLE REACTION GRAPH STRUCTURES OF UNCERTAIN MODELS In this section we introduce an algorithm for computing all possible reaction graph structures of anuncertain kinetic system [𝒫,L,Y]. The proposed method is animproved versionof the algorithm published in <cit.>, where all the optimization steps can be doneparallelly. We also give a proof of the correctness of the presented method. Before presenting the pseudocode of the algorithm, we give a brief explanation of its data structures and operating principles.We represent reaction graph structures by binary sequences, where each entry encodes the presence or lack of areaction. During the algorithm, all data (i.e. 
the Kirchhoff and the coefficient matrices) of the realizations arecomputed, but only the binary sequences encoding the directed graph structures are stored and returned asresults.According to the superstructure property described in Proposition<ref>,only the reactions belonging to the dense realization need representation and storage.Moreover, if there are core reactions as well, then the coordinates corresponding to these can also be omitted.Both sets can be computed in polynomial time as it has been presented in Sections <ref> and<ref>.Let us refer to the set of reactions in the dense realization and the set of core reactions in the uncertainkinetic system [𝒫,L,Y] as D_𝒫 and C_𝒫, respectively.Then a realization of the uncertain model [𝒫,L,Y] can be represented by a binarysequence R of length z, where z is the size of the set D_𝒫∖ C_𝒫 ofnon-core reactions in the dense realization. To define the binary sequence R it is necessary to fix an ordering on the set of non-core reactions. The coordinate R_i is equal to 1 if and only if the ith non-core reaction ispresent in the realization, otherwise it is zero.It is easy to see that knowing its structure, a realization can be determined in polynomial time:For each reaction C_i → C_j which is known not to be present in the realization the constraint[A_k]_ji=0 needs to be added to the constraint set L, and a dense realization of the (constrained) modelhas to be computed.Since it is known that there exists a realization where all non-excluded reactions take place, all of themhave to be present in the computed constrained dense realization, consequently it will have exactly theprescribed structure. During the computation the initial substrings of the binary sequences have a special role. Therefore, for allk ∈{1, … z} a special equivalence relation =_k is defined on the binary sequences. We say that R =_k W holds if for alli ∈{1, … k} the coordinate R_i is equal to W_i.The equivalence class of the relation =_k that contains the sequence R as a representative is referred toas C_k(R). (We note that in general there are several representatives of an equivalence class.) The elements of an equivalence class C_k(R) can be characterized by a set of linear constraints added to themodel. According to this property and Proposition <ref>, the dense realization in C_k(R)determines a superstructure among all the realizations in the same set. The procedure FindRealization applied during the algorithm computes dense realizations of the uncertainmodel determined by the initial substrings. A realization is referred to as a pair (R,k) if the corresponding realization represents the dense realization in C_k(R). The realizations represented by such pairs get stored for some time in a stack S, the command `push (R,k) into S' puts the pair (R,k) into the stack and `pop from S' takes a pair out of the stack and returns it.The number of elements in the stack S is denoted by size(S).The result of the entire computation is collected in a binary array called Exist, where all the computedgraph structures are stored. The indices of the elements are the sequences asbinary numbers, and the value of element Exist[R] is equal to 1 if and only ifa realization with the structure encoded by R has been found. Considering the data structures, the main difference between the proposed method and the algorithm presentedin <cit.> is that the sequences encoding the reaction graph structures are stored in only one stack inour current solution. 
Furthermore, the optimization steps using the sequences popped from this stack can be run in parallel. However, in this case the use of the binary array Exist is necessary.Within the algorithm we repeatedly apply two subroutines:FindRealization((R,k),i) computes a dense realization of the uncertain kinetic system[𝒫,L,Y], for which the representing binary sequence W is in C_k(R), and for every indexj ∈{k+1,…, i} the coordinate W_j is zero. It is possible thatamong the first k coordinates there are more zeros than required, therefore the computed sequence W iscompared to the sequence R. The procedure returns the sequence W only if W=_k R holds, otherwise -1 is returned. If the optimization task is infeasible then the returned object is also -1.FindNextOne((R,k)) returns the smallest index ifor which k<i and R_i=1 hold. Ifthere is no such index, i.e. R_j is zero for all k<j, then it returns z+1, where we recall that z is the length of the sequences that encode the graph structures.Let the sequence D=1 represent the dense realization. Then the pseudocode of the algorithm for computing all possible graph structures can be given as follows. Using the description of the algorithm, we can give formal results about its main properties. Algorithm 3 computes all possible reaction graph structures representing realizations of an uncertain kinetic system [𝒫,L,Y].Let us assume by contradiction that there is a realization of the uncertain kinetic system [𝒫,L,Y]represented by the sequence V which is not returned by Algorithm 3.Let R be another sequence that was stored in the stack S as (R,p) at some point during the computation, for which V =_p R holds and p is the greatest such number.If p=0 then D is suitable to be R, and by the operation of the algorithm it follows that p<z holds. (Ifp were equal to z, then V would be equivalent to R which is a contradiction.)There is a point during the computation when (R,p) is popped out from the stack S.Let us assume that FindNextOne(R,p) returns i and FindNextOne(V,p) returns j.In this case i ≤ j must hold since R represents the superstructure in C_p(R) and if i were equal toj then p would not be maximal.For the examination of sequence R, the procedure FindRealization((R,p),i) is applied first(line 10), and it must return a valid sequence W_1, since its constrains are fulfilled by the realizationV as well.If FindNextOne(W_1,p) is j_1 then j_1 ≤ j must hold, since W_1 represents the superstructure in C_i(W_1) and V is also in C_i(W_1). If j_1 was equal to j then pwould not be maximal. Otherwise,the computation can be continued by calling the procedure FindRealization((R,p),j_1). It mustreturn a valid sequence W_2 for which we get that FindNextOne(W_2,p)=j_2 ≤ j holds by applyingsimilar reasoning as above. These steps must lead to contradiction either by p not being maximal or by creating an infinite increasingsequence of integers that has an upper bound.It follows that every possible reaction graph structure that represents a realization of the uncertain kineticsystem [𝒫,L,Y] is returned by the algorithm. Since the calculations of procedure FindRealization((R,k),i) are independent of theresults of previous calls of the same procedure, the order of the calls is irrelevant regarding the resultof the entire computation.The proof of Proposition 3.2 in <cit.> can be applied for verifying the propertythat during the computation according to Algorithm 3 every reaction graphstructure is returned only once. 
We can also give an upper bound on the number of required optimization steps by considering the realizations (R,k) with respect to k. For all k the number of possible realizations R stored in the stack S is at most 2^k. When such a realization is popped from the stack, the number of required optimization steps is at most z-k. Consequently, a rough upper bound on the number of optimization steps required during Algorithm 3 can be given as ∑_k=0^z-1 2^k (z-k).

§ ILLUSTRATIVE EXAMPLES

In this section we demonstrate the operation of the algorithms presented in this paper on two examples, in the case of different degrees and types of uncertainties, and also in the case of additional linear constraints.

§.§ Example 1: a simple kinetic system

The model that serves as a basis for this example was presented previously in <cit.>. The uncertain model is generated using the kinetic system

ẋ_1 = 3c_1 · x_2^3 - c_2 · x_1^3
ẋ_2 = -3c_1 · x_2^3 + c_2 · x_1^3,

where c_1, c_2 > 0. We consider realizations on a fixed set 𝒞={C_1, C_2, C_3} of complexes, where the complexes C_1 = 3X_2, C_2 = 3X_1, C_3 = 2X_1+X_2 are formed of the species X_1 and X_2. It follows that the characterizing matrices Y and M of the kinetic system referred to as [M,Y] are

Y = [ 0 3 2; 3 0 1 ]    M = [ 3c_1 -c_2 0; -3c_1 c_2 0 ]

During the numerical computations the parameter values c_1 = 1 and c_2 = 2 were applied.

A. Uncertainty defined by independent intervals

This model represents a special case in the class of uncertain kinetic systems defined in Section <ref>, since the possible values of every coefficient of the kinetic system are determined by independent upper and lower bounds that are defined as relative distances. Let us represent the entry [M]_ij of the coefficient matrix M by the coordinate M_l of the point M ∈ℝ^6. Moreover, let the relative distances of the upper and lower bounds of M_l be given by the real constants γ_l and ρ_l from the interval [0,1], respectively. Then the equations defining the polyhedron 𝒫_A ⊂ℝ^6 of the uncertain parameters can be written in terms of the coordinates M_l as

M^⊤· e_l^6 ≤ (1+γ_l) · [M]_ij
M^⊤· (-e_l^6) ≤ (ρ_l - 1) · [M]_ij

In the examined uncertain kinetic system [𝒫_A,L,Y] no additional linear constraints are considered, i.e. L = ∅. In <cit.> all possible reaction graphs – with the indication of the reaction rate constants defined as functions of the parameters c_1 and c_2 – representing dynamically equivalent realizations of the kinetic system [M,Y] have been presented. Obviously, these structures must appear among the realizations of the uncertain kinetic model [𝒫_A,∅,Y] as well, but there might be additional possible structures among the realizations of the uncertain kinetic system. Interestingly, the result of the computation was that in the case of any degree of uncertainty (γ_l, ρ_l ∈ [0,1) for all l ∈{1, …, 6}), the sets of possible reaction graph structures of the uncertain model [𝒫_A,∅,Y] and that of the non-uncertain system [M,Y] are identical. This result might be contrary to expectations, but for this small example it is easy to prove that the obtained graph structures are indeed correct for all positive values of the parameters c_1 and c_2. For this, we divide the computation into smaller steps. It has been shown in <cit.> that in the case of dynamically equivalent realizations the computation can be done column-wise (since the jth column of matrix A_k depends only on the jth column of matrix M).
These computations can be performed separately, and all the possible reaction graph structures can be constructed by choosing a column structure for every index j ∈{1, …, m} and building the Kirchhoff matrix A_k of the realization from them. Consequently, if in the case of the jth column the number of different structures is p_j, then the number of structurally different realizations is ∏_j=1^m p_j. First the original kinetic system [M,Y] is examined. To make the notations less complicated, the entries of the Kirchhoff matrix are denoted by the corresponding reaction rate coefficients, i.e. [A_k]_ij = k_ji for all i,j ∈{1,2,3}, i ≠ j. In the case of the first column we get:

Y · [ -k_12-k_13; k_12; k_13 ] = [ 3c_1; -3c_1 ],  k_12, k_13 ∈ℝ^+  ⟹  k_12 ∈ [0, c_1],  k_13 = (3/2)c_1 - (3/2)k_12

It can be seen that for every positive value of the parameter c_1 the two corresponding reaction rates can realize 3 of the 2^2 = 4 possible structurally different solutions. Both can be positive, or either one can be positive while the other one is zero. (Possible outcomes are for example: k_12 = (1/2)c_1, k_13 = (3/4)c_1 or k_12 = 0, k_13 = (3/2)c_1 or k_12 = c_1, k_13 = 0.) The fourth case, when both k_12 and k_13 are zero, is possible only when [M]_.1 = [0  0]^⊤, which requires the corresponding parameters of uncertainty ρ_l to be at least one. In the case of the second column, 3 of the 4 possible outcomes can be realized and a similar reasoning can be applied:

Y · [ k_21; -k_21-k_23; k_23 ] = [ -c_2; c_2 ],  k_21, k_23 ∈ℝ^+  ⟹  k_21 ∈ [0, c_2/3],  k_23 = c_2 - 3k_21

In the third column there is no uncertainty because there are only zero entries in [M]_.3. Consequently, in the case of [A_k]_.3 only 2 solutions are possible. The two corresponding reactions can either be both present or both missing.

Y · [ k_31; k_32; -k_31-k_32 ] = [ 0; 0 ],  k_31, k_32 ∈ℝ^+  ⟹  k_31 ∈ℝ^+,  k_32 = 2k_31

It follows from the above computations that the number of possible reaction graph structures is 3 · 3 · 2 = 18, and the generated structures are identical to the ones presented in <cit.>. This number could be larger only if all the reaction rates in the first or second column of A_k can be zero, but this requires the entries in the corresponding column [M]_.1 or [M]_.2 to be zero.

B. Uncertainty defined as a general polyhedron

Now we examine the uncertain kinetic system that was also generated from the kinetic system [M,Y], but the set 𝒫_B of possible coefficients is defined as a more general polyhedron. If the matrix M of coefficients is represented by the point M ∈ℝ^6 so that M^⊤ = [M_11, M_12, M_13, M_21, M_22, M_23], then let the equations determining the polyhedron 𝒫_B be the following:

M^⊤· (-e_1^6) ≤ 0
M^⊤· (-e_5^6) ≤ 0
M^⊤· e_3^6 = 0
M^⊤· e_6^6 = 0
M^⊤· [1,1,0,1,1,0]^⊤ = 0
M^⊤· [0,-1,0,-1,0,0]^⊤ ≤ 7
M^⊤· [-1,0,0,0,1,0]^⊤ ≤ -1

In this case, again, no additional linear constraints are considered in the uncertain model, i.e. we examine the uncertain model [𝒫_B, ∅, Y].
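Before turning to the results of the computation, the constraints of 𝒫_B can be transcribed directly into matrix form. The short NumPy check below (a sketch; the constraint rows are copied from the list above) verifies that the coefficient point of the original system with c_1 = 1, c_2 = 2 indeed lies in the polyhedron:

import numpy as np

# P_B in the form G m <= h and E m = 0,
# with m = (M_11, M_12, M_13, M_21, M_22, M_23).
G = np.array([[-1, 0, 0, 0, 0, 0],    # -M_11          <= 0
              [ 0, 0, 0, 0,-1, 0],    # -M_22          <= 0
              [ 0,-1, 0,-1, 0, 0],    # -M_12 - M_21   <= 7
              [-1, 0, 0, 0, 1, 0]])   # -M_11 + M_22   <= -1
h = np.array([0, 0, 7, -1])
E = np.array([[0, 0, 1, 0, 0, 0],     #  M_13           = 0
              [0, 0, 0, 0, 0, 1],     #  M_23           = 0
              [1, 1, 0, 1, 1, 0]])    #  M_11 + M_12 + M_21 + M_22 = 0

def in_P_B(m):
    return bool(np.all(G @ m <= h) and np.allclose(E @ m, 0.0))

m1 = np.array([3, -2, 0, -3, 2, 0])   # M for c_1 = 1, c_2 = 2
print(in_P_B(m1))                     # True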
The computation of all possible reaction graph structures shows that, in addition to the structures realizing the non-uncertain kinetic system [M,Y], there are 6 more possible structures, presented in Figure <ref>. It can be seen that the point M_1^⊤ = [3,-2,0,-3,2,0] corresponding to the original kinetic system is in the polyhedron 𝒫_B; therefore, the 18 structures determined by its realizations must be among the realizations of the uncertain kinetic system. Then we can apply a reasoning similar to that in Section 5.1.A. Since the entries in column [M]_.3 are all zero in every point of 𝒫_B, only the two outcomes that appear in the case of the original kinetic system [M,Y] are possible in the case of this column. The uncertain model can have more realizations than the original kinetic system only if all the reaction rates in at least one of the columns [A_k]_.1 or [A_k]_.2 can be zero. This is possible only if all the entries in [M]_.1 or [M]_.2 are zero. From the constraints of the polyhedron 𝒫_B it follows that [M]_11 ≥ 1; consequently, the column [M]_.1 cannot be zero. But [M]_.2 can have only zero entries; for example, the point M_2 = [3,0,0,-3,0,0]^⊤∈𝒫_B satisfies this property. For the columns of the matrices M and M_2 the following hold: [M_2]_.1 = [M]_.1 and [M_2]_.3 = [M]_.3. Therefore, for the first and third columns of A_k there are 3 and 2 possible outcomes, respectively. Since in the case of the second column there is one additional possible outcome, the number of further reaction graph structures (compared to the original kinetic system [M,Y]) is 3 · 2 = 6. It is easy to see that these are exactly the ones presented in Figure <ref> with the indicated reaction rate coefficients for an arbitrary p > 0.

§.§ Example 2: G-protein network

The G-protein (guanine nucleotide-binding protein) cycle has a key role in several intracellular signalling transduction pathways. The G-protein located on the intracellular surface of the cell membrane is activated by the binding of specific ligand molecules to the G-protein coupled receptor of the extracellular membrane surface. The activated G-protein dissociates into different subunits which take part in intracellular signalling pathways. After the termination of the signalling mechanisms, the subunits become inactive and bind to each other <cit.>. We examined the structural properties of the yeast G-protein cycle using the model published in <cit.>. The model involves a so-called heterotrimeric G-protein containing three different subunits. In response to the extracellular ligand binding, the protein dissociates to G-α and G-βγ subunits, where the active and inactive forms of the G-α subunit can also be distinguished.
The reaction network model involves the following species: R and L represent the receptor and the corresponding ligand, respectively, RL refers to the ligand-bound receptor, G is the G-protein located on the intracellular membrane surface, G_a and G_d denote the active and the inactive forms of the G-α subunit and G_bg is the G-βγ subunit. The model can be characterized as a chemical reaction network (Y,A_k), where the structures of the complexes and the reactions are defined by the complex composition matrix Y ∈ℝ^7 × 10 and the Kirchhoff matrix A_k ∈ℝ^10 × 10 as follows:

Y = [ 1 0 1 0 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0 0; 0 1 0 0 0 0 1 0 0 0; 0 0 0 1 0 0 1 0 0 0; 0 0 0 0 1 0 0 1 0 0; 0 0 0 0 0 0 0 1 1 0; 0 0 0 0 0 1 0 0 1 0 ]

A_k = [ -0.4 0 0 0 0 0 0 0 0 4000;
        0 -14 0.322 0 0 0 0 0 0 0;
        0 10 -0.322 0 0 0 0 0 0 0;
        0 0 0 0 0 0 0 0 1000 0;
        0 0 0 0 -11000 0 0 0 0 0;
        0 0 0 0 11000 0 0 0 0 0;
        0 0 0 0 0 0 -0.01 0 0 0;
        0 0 0 0 0 0 0.01 0 0 0;
        0 0 0 0 0 0 0 0 -1000 0;
        0.4 4 0 0 0 0 0 0 0 -4000 ]

The kinetic system that is realized by the model is ẋ = M ·ψ^Y = Y · A_k ·ψ^Y, i.e. M = Y · A_k ∈ℝ^7 × 10. The reaction graph structure of the G-protein model can be seen in Figure <ref> with the indication of the linkage classes. (The linkage classes are the undirected connected components of the reaction graph.) The computation of all possible reaction graph structures and the solution of the linear equations shows that the heterotrimeric G-protein cycle with the given parametrization is not just structurally but also parametrically unique. Thus the prescribed dynamics without uncertainty cannot be realized by any other set of reactions or different reaction rate coefficients using the given set of complexes.

A. Uncertainty defined with independent relative distance intervals

We have examined the uncertain kinetic systems defined by relative parameter uncertainty as it was presented in Section <ref>. First we examined the uncertain model [𝒫_0.1,∅,Y], where the uncertainty coefficients γ_l and ρ_l for all l ∈{1, …, 70} are 0.1 and there are no additional linear constraints in the model. By computing all possible reaction graph structures and the set of core reactions of this uncertain kinetic system, we obtained that all the reactions in the original G-protein cycle are core reactions. Moreover, in the dense realization there are 10 further reactions, and these can be present in the realization independently of each other. Consequently, the total number of different graph structures is 2^10 = 1024. Figure <ref> shows the number of possible reaction graph structures with different numbers of reactions. The dense realization for this case is shown in Figure <ref>. If we increase the relative uncertainty to 0.2, we obtain the uncertain kinetic system [𝒫_0.2,∅,Y] with γ_l = ρ_l = 0.2 for l = 1, …, 70. In this case, the reaction RL → 0 is no longer a core reaction, and it can also be added or removed independently of all other reactions (which remain independent of each other). Therefore, the number of possible structures becomes 2^11 = 2048.

B. Constrained uncertain model

We have also examined the possible structures in the case of constrained uncertain models. The set L_1 of constraints prohibits every reaction between different linkage classes.
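As a consistency check on the matrices given at the beginning of this example (with the entries of A_k as printed above), the following sketch verifies the Kirchhoff property – vanishing column sums – and assembles the coefficient matrix M = Y · A_k:

import numpy as np

Y = np.array([[1,0,1,0,0,0,0,0,0,0],
              [0,0,1,0,0,0,0,0,0,0],
              [0,1,0,0,0,0,1,0,0,0],
              [0,0,0,1,0,0,1,0,0,0],
              [0,0,0,0,1,0,0,1,0,0],
              [0,0,0,0,0,0,0,1,1,0],
              [0,0,0,0,0,1,0,0,1,0]])

# Reactions C_j -> C_i of the realization above, keyed as (i, j): rate
rates = {(10, 1): 0.4, (1, 10): 4000, (3, 2): 10, (2, 3): 0.322,
         (10, 2): 4, (6, 5): 11000, (8, 7): 0.01, (4, 9): 1000}
A_k = np.zeros((10, 10))
for (i, j), k in rates.items():
    A_k[i - 1, j - 1] = k        # off-diagonal: reaction rate coefficient
    A_k[j - 1, j - 1] -= k       # diagonal: minus the column's total outflow

assert np.allclose(A_k.sum(axis=0), 0.0)  # Kirchhoff: column sums vanish
M = Y @ A_k                               # x' = M psi^Y = Y A_k psi^Y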
It can be seen in Figure <ref> that the dense realization of the uncertain kinetic system [𝒫_0.1,L_1,Y] has 3 reactions, which are exactly the ones that are present in the dense realization of [𝒫_0.1,∅,Y] and do not connect different linkage classes. These reactions are independent of each other; therefore, the number of structurally different realizations is 2^3 = 8 in the case of the uncertain kinetic system [𝒫_0.1,L_1,Y] and 2^4 = 16 for the model [𝒫_0.2,L_1,Y]. The sets of core reactions are the same as in the case of the unconstrained model for both degrees of uncertainty. The independence of non-core reactions is a special property of the studied uncertain model. As a consequence of this and the superstructure property of the dense realization, the dense realization of the constrained model will contain each reaction of the unconstrained model that is not excluded by the constraints. We emphasize that the dense realizations in the above example contain all mathematically possible reactions that can be compatible with the studied uncertain models. If, using prior knowledge, the biologically non-plausible reactions are excluded and/or certain relations between model parameters are ensured via linear constraints, then the described methodology is still suitable for checking the structural uniqueness of the resulting uncertain kinetic model.

§ CONCLUSION

The set of reaction graph structures realizing uncertain kinetic models was studied in this paper. For this, an uncertain polynomial model class was introduced, where the coefficients of monomials belong to a polytopic set. Thus, an uncertain kinetic model includes a set of kinetic ordinary differential equations. Using the convexity of the parameter set, it was proved that the unweighted dense reaction graph containing the maximum number of reactions corresponding to an uncertain model forms a superstructure among the possible realizations assuming a fixed complex set. This means that any unweighted reaction graph realizing any kinetic ODE within an uncertain model is a subgraph of the unweighted directed graph of the dense realization. To search through the possible graph structures, an optimization-based computational model was introduced, where the decision variables are the reaction rate coefficients and the entries of the monomial coefficient matrix. It was shown that the dense realization can be computed in polynomial time using linear programming steps. An algorithm was proposed to compute those `invariant' reactions (called core reactions) of uncertain models that are present in any realization of the uncertain model. Most importantly, an algorithm with correctness proof was also proposed in the paper for enumerating all possible reaction graph structures for an uncertain kinetic model. The theoretical results and proposed algorithms were illustrated on two examples. The examples show that the proposed approach is suitable for the structural uniqueness analysis of uncertain kinetic models.

§ ACKNOWLEDGEMENTS

This project was developed with the support of the PhD program of the Roska Tamás Doctoral School of Sciences and Technology, Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest. The authors gratefully acknowledge the support of grants PPKE KAP-1.1-16-ITK, and K115694 of the National Research, Development and Innovation Office - NKFIH.

Anderson2011 D. F. Anderson. A proof of the Global Attractor Conjecture in the single linkage class case. SIAM Journal on Applied Mathematics, 71:1487–1508, 2011.
http://arxiv.org/abs/1101.0761.
Badri2017 V. Badri, M. J. Yazdanpanah, and M. S. Tavazoei. On stability and trajectory boundedness of Lotka–Volterra systems with polytopic uncertainty. IEEE Transactions on Automatic Control, pages available online, DOI 10.1109/TAC.2017.2663839, 2017.
Boyd1994 S. Boyd, L. El-Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in Systems and Control Theory. SIAM Books, Philadelphia, PA, 1994.
Briggs2016 W. Briggs. Uncertainty: The Soul of Modeling, Probability & Statistics. Springer, 2016.
Chellaboina2009 V. Chellaboina, S. P. Bhat, W. M. Haddad, and D. S. Bernstein. Modeling and analysis of mass-action kinetics – nonnegativity, realizability, reducibility, and semistability. IEEE Control Systems Magazine, 29:60–78, 2009.
Chen2012 W. W. Chen, M. Niepel, and P. K. Sorger. Classic and contemporary approaches to modeling biochemical reactions. Genes & Development, 24:1861–1875, 2012.
Chis2011 O. T. Chis, J. R. Banga, and E. Balsa-Canto. Structural identifiability of systems biology models: a critical comparison of methods. PLOS One, 6(11):e27755, 2011.
Chis2016 O. T. Chis, A. F. Villaverde, J. R. Banga, and E. Balsa-Canto. On the relationship between sloppiness and identifiability. Mathematical Biosciences, 282:147–161, 2016.
Craciun2015 G. Craciun. Toric differential inclusions and a proof of the global attractor conjecture. arXiv:1501.02860 [math.DS], January 2015.
Craciun2008 G. Craciun and C. Pantea. Identifiability of chemical reaction networks. Journal of Mathematical Chemistry, 44:244–259, 2008.
Feinberg:79 M. Feinberg. Lectures on chemical reaction networks. Notes of lectures given at the Mathematics Research Center, University of Wisconsin, 1979.
Feinberg1987 M. Feinberg. Chemical reaction network structure and the stability of complex isothermal reactors - I. The deficiency zero and deficiency one theorems. Chemical Engineering Science, 42 (10):2229–2268, 1987.
Guy2015 T. V. Guy, M. Karny, and D. H. Wolpert, editors. Decision Making: Uncertainty, Imperfection, Deliberation and Scalability. Springer, 2015.
Haddad2010 W. M. Haddad, V. S. Chellaboina, and Q. Hui. Nonnegative and Compartmental Dynamical Systems. Princeton University Press, 2010.
Harrison1979 G. W. Harrison. Compartmental models with uncertain flow rates. Mathematical Biosciences, 43:131–139, 1979.
Horn1972 F. Horn and R. Jackson. General mass action kinetics. Archive for Rational Mechanics and Analysis, 47:81–116, 1972.
Hars1981 V. Hárs and J. Tóth. On the inverse problem of reaction kinetics. In M. Farkas and L. Hatvani, editors, Qualitative Theory of Differential Equations, volume 30 of Coll. Math. Soc. J. Bolyai, pages 363–379. North-Holland, Amsterdam, 1981.
Johnston2011conj M. D. Johnston and D. Siegel. Linear conjugacy of chemical reaction networks. Journal of Mathematical Chemistry, 49:1263–1282, 2011.
Johnston2012a M. D. Johnston, D. Siegel, and G. Szederkényi. Dynamical equivalence and linear conjugacy of chemical reaction networks: new results and methods. MATCH Commun. Math. Comput. Chem., 68:443–468, 2012.
Johnston2012 M. D. Johnston, D. Siegel, and G. Szederkényi. A linear programming approach to weak reversibility and linear conjugacy of chemical reaction networks. Journal of Mathematical Chemistry, 50:274–288, 2012.
Johnston2013 M. D. Johnston, D. Siegel, and G. Szederkényi. Computing weakly reversible linearly conjugate chemical reaction networks with minimal deficiency. Mathematical Biosciences, 241:88–98, 2013.
Liebermeister2005 W. Liebermeister and E. Klipp.
Biochemical networks with uncertain parameters. IEE Proceedings Systems Biology, 152:97–107, 2005.
Liptak2015 G. Lipták, G. Szederkényi, and K. M. Hangos. Computing zero deficiency realizations of kinetic systems. Systems & Control Letters, 81:24–30, 2015.
Liptak2016 G. Lipták, G. Szederkényi, and K. M. Hangos. Kinetic feedback design for polynomial systems. Journal of Process Control, 41:56–66, 2016.
Ljung1999 L. Ljung. System Identification - Theory for the User. Prentice Hall, 1999.
Llaneras2007 F. Llaneras and J. Picó. An interval approach for dealing with flux distributions and elementary modes activity patterns. Journal of Theoretical Biology, 246:290–308, 2007.
Lodish2000 H. F. Lodish. Molecular cell biology. W.H. Freeman, New York, 2000.
Nagy2011 T. Nagy and T. Turányi. Uncertainty of Arrhenius parameters. International Journal of Chemical Kinetics, 43:359–378, 2011.
Schillings2015 C. Schillings, M. Sunnaker, J. Stelling, and C. Schwab. Efficient characterization of parametric uncertainty of complex biochemical networks. PLOS Computational Biology, 11(8):e1004457, 2015.
Schnell2006 S. Schnell, M. J. Chappell, N. D. Evans, and M. R. Roussel. The mechanism distinguishability problem in biochemical kinetics: The single-enzyme, single-substrate reaction as a case study. Comptes Rendus Biologies, 329:51–61, 2006.
Shinar2010 G. Shinar and M. Feinberg. Structural sources of robustness in biochemical reaction networks. Science, 327:1389–1391, 2010.
Son:2001 E. Sontag. Structure and stability of certain chemical networks and applications to the kinetic proofreading model of T-cell receptor signal transduction. IEEE Transactions on Automatic Control, 46:1028–1047, 2001.
Szederkenyi2009b G. Szederkényi. Computing sparse and dense realizations of reaction kinetic systems. Journal of Mathematical Chemistry, 47:551–568, 2010.
Szederkenyi2011c G. Szederkényi, J. R. Banga, and A. A. Alonso. Inference of complex biological networks: distinguishability issues and optimization-based solutions. BMC Systems Biology, 5:177, 2011.
Szederkenyi2011a G. Szederkényi and K. M. Hangos. Finding complex balanced and detailed balanced realizations of chemical reaction networks. Journal of Mathematical Chemistry, 49:1163–1179, 2011.
Szederkenyi2011 G. Szederkényi, K. M. Hangos, and T. Péni. Maximal and minimal realizations of reaction kinetic systems: computation and properties. MATCH Commun. Math. Comput. Chem., 65:309–332, 2011.
Szederkenyi2016a G. Szederkényi, B. Ács, and G. Szlobodnyik. Structural analysis of kinetic systems with uncertain parameters. In 2nd IFAC Workshop on Thermodynamic Foundations for a Mathematical Systems Theory - TFMST, IFAC Papersonline, volume 49, pages 24–27, 2016.
Tak:96 Y. Takeuchi. Global Dynamical Properties of Lotka-Volterra Systems. World Scientific, Singapore, 1996.
Tuza2015 Z. A. Tuza and G. Szederkényi. Computing core reactions of uncertain polynomial kinetic systems. In 23rd Mediterranean Conference on Control and Automation (MED), June 16-19, 2015, Torremolinos, Spain, pages 1187–1194, 2015.
Weinmann1991 A. Weinmann. Uncertain Models and Robust Control. Springer-Verlag, 1991.
Wu2006 J-L. Wu. Robust stabilization for single-input polytopic nonlinear systems. IEEE Transactions on Automatic Control, 2006.
Yi2003 T. M. Yi, H. Kitano, and M. I. Simon. A quantitative characterization of the yeast heterotrimeric G protein cycle. Proc. Natl. Acad. Sci. USA, 100(19):10764–10769, 2003.
Acs2017 B. Ács, G. Szederkényi, and D. Csercsik.
A new efficient algorithm for determining all structurally different realizations of kinetic systems. MATCH Commun. Math. Comput. Chem., 77:299–320, 2017.
Acs2015 B. Ács, G. Szederkényi, Z. A. Tuza, and Z. Tuza. Computing linearly conjugate weakly reversible kinetic structures using optimization and graph theory. MATCH Commun. Math. Comput. Chem., 74:481–504, 2015.
Acs2016 B. Ács, G. Szederkényi, Zs. Tuza, and Z. A. Tuza. Computing all possible graph structures describing linearly conjugate realizations of kinetic systems. Computer Physics Communications, 204:11–20, 2016.
Erdi1989 P. Érdi and J. Tóth. Mathematical Models of Chemical Reactions. Theory and Applications of Deterministic and Stochastic Models. Manchester University Press, Princeton University Press, Manchester, Princeton, 1989.
http://arxiv.org/abs/1704.08633v1
{ "authors": [ "Bernadett Ács", "Gergely Szlobodnyik", "Gábor Szederkényi" ], "categories": [ "math.DS", "math.NA", "q-bio.MN", "80A30, 90C35", "G.1.7; G.2.2; J.2" ], "primary_category": "math.DS", "published": "20170427155301", "title": "A computational approach to the structural analysis of uncertain kinetic systems" }
[email protected] di Ingegneria e Fisica dei Materiali, Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy National Research Nuclear University MEPhI (Moscow Engineering Physics Institute), Kashira Hwy 31, Moskva 115409, RussiaIstituto di Ingegneria e Fisica dei Materiali, Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, ItalyDonostia International Physics Center (DIPC), 20018 San Sebastian/Donostia, Basque Country, SpainDonostia International Physics Center (DIPC), 20018 San Sebastian/Donostia, Basque Country, Spain St. Petersburg State University, 199034, St. Petersburg, Russian Federation Departamento de Fisica de Materiales, Facultad de Ciencias Quimicas, UPV/EHU, Apdo. 1072, 20080 San Sebastian/Donostia, Basque Country, Spain Centro de Fisica de Materiales CFM - Materials Physics Center MPC, Centro Mixto CSIC-UPV/EHU, 20018 San Sebastian/Donostia, Basque Country, Spain.Karlsruher Institut für Technologie, Institut für Festkörperphysik, D-76021 Karlsruhe, Germany We calculate the effect of a static electric field on the critical temperature of a s-wave one band superconductor in the framework of proximity effect Eliashberg theory. In the weak electrostatic field limit the theory has no free parameters while, in general, the only free parameter is the thickness of the surface layer where the electric field acts. We conclude that the best situation for increasing the critical temperature is to have a very thin film of a superconducting material with a strong increase of electron-phonon (boson) constant upon charging. 74.45.+c, 74.62.-c,74.20.Fg Proximity Eliashberg theory of electrostatic field-effect-doping in superconducting films R. Heid December 30, 2023 =========================================================================================§ INTRODUCTIONIn the last decade, electric double layer (EDL) gating has come to the forefront of solid state physics due to its capability to tune the surface carrier density of a wide range of different materials well beyond the limits imposed by solid-gate field-effect devices. The order-of-magnitude enhancement in the gate electric field allows this technique to reach doping levels comparable to those of standard chemical doping. This is possible due to the extremely large specific capacitance of the EDL that builds up at the interface between the electrolyte and the material under study <cit.>.EDL gating was first exploited to control the surface electronic properties of relatively low-carrier density systems, where the electric field effect is more readily observable. Field-induced superconductivity was first demonstrated in strontium titanate <cit.> and zirconium nitride chloride <cit.>, and subsequently on other insulating systems such as perovskites <cit.> and layered transition-metal dichalcogenides <cit.>. Significant effort was also invested in the control of the superconducting properties of cuprates <cit.>, although in this class of materials the mechanism behind the carrier density modulation is still debated <cit.>.More recently however, several experimental studies have been devoted to the exploration of field effect in superconductors <cit.> with a large (≳ 1 · 10^22 cm-3) native carrier density. The interplay between two different ground states, namely superconductivity and charge density waves, was explored in titanium and niobium diselenides <cit.>. 
The thickness and gate voltage dependence of a high-temperature superconducting phase were studied in iron selenide, both in thin-film <cit.> and thin flake <cit.> forms. The effect of the ultrahigh interface electric fields achievable via EDL gating was also probed in standard BCS superconductors, namely niobium <cit.> and niobium nitride <cit.>. With the exception of the work of Ref. XiPRL2016 on niobium diselenide, all of these studies have been performed on relatively thick samples (≳ 10 nm) with a thickness larger than the electrostatic screening length. These systems are thus expected to develop a strong dependence of their electronic properties along the z direction (z being perpendicular to the sample surface). As a first approximation, this dependence can be conceptualized by schematizing the system as the parallel of a surface layer (where the carrier density is modified by the electric field) and an unperturbed bulk. The two electronic systems can be expected to couple via superconducting proximity effect, resulting in a complicated response to the applied electric field that goes well beyond a simple modification of the superconducting properties of the surface layer alone <cit.> and is strongly dependent on both the electrostatic screening length and the total thickness of the film. So far, the only quantitative assessment of this phenomenon has been reported in the framework of the strong-coupling limit of the BCS theory of superconductivity <cit.>. A proper theoretical treatment for field effect on more complex materials, which can be described only by means of the more complete Eliashberg theory <cit.>, is lacking. Given the rising interest in the control of the properties of superconductors by means of surface electric fields, the development of such a theoretical approach would be very convenient not only to quantitatively describe the results of future experiments, but also to determine beforehand the experimental conditions (e.g., device thickness) most suitable for an optimal control of the superconducting order via electric fields. In this work, we use the Eliashberg theory of proximity effect to describe a junction composed of the perturbed surface layer (T_c=T_c,s), where the carrier density is modulated (with a doping level per unit cell x), and the underlying unperturbed bulk (T_c=T_c,b). Here s and b indicate “surface" and “bulk" respectively (see Fig. <ref>). Under the application of an electric field, T_c,s≠ T_c,b and the material behaves like a junction between a superconductor and a normal metal in the temperature range bounded by T_c,s and T_c,b. If the application of the electric field increases (decreases) T_c,s, then the surface layer will be the superconductor (normal metal) and the bulk will be the normal metal (superconductor). We perform the calculation for lead since all the input parameters of the theory are well-known in the literature for this strong-coupling superconductor <cit.>.

§ MODEL: PROXIMITY ELIASHBERG EQUATIONS

In general, a proximity effect at a superconductor/normal metal junction is observed as the opening of a finite superconducting gap in the normal metal together with its reduction in a thin region of the superconductor close to the junction. In our model we use the one-band s-wave Eliashberg equations <cit.> with proximity effect to calculate the critical temperature of the system.
In this case we have to solve four coupled equations for the gaps Δ_s,b(iω_n) and the renormalization functions Z_s,b(iω_n), where ω_n are the Matsubara frequencies. The imaginary-axis equations with proximity effect <cit.> are:

ω_n Z_b(iω_n) = ω_n + π T ∑_m Λ^Z_b(iω_n,iω_m) N^Z_b(iω_m) + Γ_b N^Z_s(iω_n)

Z_b(iω_n) Δ_b(iω_n) = π T ∑_m [Λ^Δ_b(iω_n,iω_m) - μ^*_b(ω_c)] Θ(ω_c-|ω_m|) N^Δ_b(iω_m) + Γ_b N^Δ_s(iω_n)

ω_n Z_s(iω_n) = ω_n + π T ∑_m Λ^Z_s(iω_n,iω_m) N^Z_s(iω_m) + Γ_s N^Z_b(iω_n)

Z_s(iω_n) Δ_s(iω_n) = π T ∑_m [Λ^Δ_s(iω_n,iω_m) - μ^*_s(ω_c)] Θ(ω_c-|ω_m|) N^Δ_s(iω_m) + Γ_s N^Δ_b(iω_n)

where μ^*_s(b) are the Coulomb pseudopotentials in the surface and in the bulk respectively, Θ is the Heaviside function and ω_c is a cutoff energy at least three times larger than the maximum phonon energy. Thus we have

Λ_s(b)(iω_n,iω_m) = 2 ∫_0^+∞ dΩ Ω α^2_s(b)F(Ω) / [(ω_n-ω_m)^2+Ω^2]

Γ_s(b) = π |t|^2 A d_b(s) N_b(s)(0)

and thus Γ_s/Γ_b = d_b N_b(0) / [d_s N_s(0)],

N^Δ_s(b)(iω_m) = Δ_s(b)(iω_m) / √(ω^2_m+Δ^2_s(b)(iω_m))

and

N^Z_s(b)(iω_m) = ω_m / √(ω^2_m+Δ^2_s(b)(iω_m))

where α^2_s(b)F(Ω) are the electron-phonon spectral functions, A is the junction cross-sectional area, |t|^2 is the transmission coefficient, equal to one in our case because the material is the same, d_s and d_b are the surface and bulk layer thicknesses respectively, such that d_s + d_b = d, where d is the total film thickness, and N_s(b)(0) are the densities of states at the Fermi level E_F,s(b) for the surface and bulk material. The electron-phonon coupling constants are defined as

λ_s(b) = 2 ∫_0^+∞ dΩ α^2_s(b)F(Ω)/Ω

and the representative energies as

ln(ω_ln,s(b)) = (2/λ_s(b)) ∫_0^+∞ dΩ ln(Ω) α^2_s(b)F(Ω)/Ω

The solution of these equations requires eleven input parameters: the two electron-phonon spectral functions α^2_s(b)F(Ω), the two Coulomb pseudopotentials μ^*_s(b), the values of the normal density of states at the Fermi level N_s(b)(0), the shift of the Fermi energy Δ E_F = E_F,s - E_F,b that enters in the calculation of the surface Coulomb pseudopotential (as shown later), the thickness of the surface layer d_s, the film thickness d and the junction cross-sectional area A. The values of d and A are experimental data. The exact value of d_s, in particular in the case of very strong electric fields at the surface of a thin film, is in general difficult to determine a priori <cit.>. Thus, we leave it as a free parameter of the model, and we perform our calculations for different reasonable estimations of its value. Typically, the bulk values of α^2_bF(Ω), μ^*_b, N_b(0) and E_F,b are known and can be found in the literature. Thus, we assume that we need to determine only their surface values. In the next Section, we will use density functional theory (DFT) to calculate α^2_sF(Ω), Δ E_F and N_s(0). The value of the Coulomb pseudopotential in the surface layer μ^*_s can be obtained in the following way: in the Thomas-Fermi theory, the dielectric function is <cit.> ε(q) = 1 + k^2_TF/q^2, and the bare Coulomb pseudopotential μ is the angular average of the screened electrostatic potential:

μ = [1/(4π^2 ħ v_F)] ∫_0^2k_F [V(q)/ε(q)] q dq

Since V(q) = 4π e^2/q^2 <cit.>, it turns out that μ = [k^2_TF/(8 k^2_F)] ln(1 + 4k^2_F/k^2_TF). Hence we write

μ_b = (a^2_b/2) ln(1 + 1/a^2_b)

with a_b = k_TF,b/(2k_F,b). Since a_b can be calculated by numerically solving Eq. 13,
and by remembering that the square of the Thomas-Fermi wave number k_TF,b(s) is proportional to N_b(s)(0), we have

a^2_s = a^2_b (N_s(0)/N_b(0)) / (1 + Δ E_F/E_F,b)

and thus

μ_s = (a^2_s/2) ln(1 + 1/a^2_s).

The new Coulomb pseudopotential <cit.> in the surface layer is thus

μ^*_s(ω_c) = μ_s / [1 + μ_s ln((E_F,b + Δ E_F)/ω_c)]

We note that, usually, the effect of electrostatic doping on μ^* is very small and can be neglected. We can quantify the effect on T_c of this small modulation of μ^* by computing it in the case of maximum doping x=0.40 and a very thin film (d=5 nm), i.e. when the effect is largest. As discussed in the next Section, the unperturbed Coulomb pseudopotential is μ^*(x=0)=0.14164, while for the maximum doping Eqs. 12-16 give μ^*(x=0.4)=0.14048. If we use d_s=d_TF we find respectively T_c=7.3770 K for the bulk (unperturbed) value of the Coulomb pseudopotential and T_c=7.3768 K for the surface value of the Coulomb pseudopotential. Thus, if we consider the Coulomb pseudopotential to be doping-independent we make an error of only Δ T_c|_Δμ^* = -0.0002 K (Δ T_c|_Δμ^*/T_c = 0.0027 percent). However, a possible critical situation can appear when the applied electric field is very strong and the Thomas-Fermi approximation does not hold anymore. In such a case, μ^* becomes ill-defined as the Thomas-Fermi dielectric function is no longer strictly valid for very large electric fields. Nevertheless, the true dielectric function ε(q) should still be a function of the ratio k_TF/k_F <cit.>, which in the free-electron model is independent of the normal density of states at the Fermi level. Thus, Eq. <ref> should still be able to describe the behavior of the system as a first approximation.
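The chain of Eqs. 13-16 above is simple enough to be evaluated in a few lines. The following Python sketch uses placeholder inputs – a_b^2, the ratio N_s(0)/N_b(0), Δ E_F and E_F,b are illustrative numbers, not the values of the paper's Table – and makes the smallness of the doping-induced change of μ^* apparent:

import numpy as np

def mu_of_a2(a2):
    # Eq. 13 / Eq. 15: mu = (a^2/2) ln(1 + 1/a^2)
    return 0.5 * a2 * np.log(1.0 + 1.0 / a2)

def mu_star(mu, E_F, omega_c):
    # Eq. 16: Morel-Anderson rescaling of the bare pseudopotential
    return mu / (1.0 + mu * np.log(E_F / omega_c))

# Placeholder inputs (illustrative only); energies in eV
a_b2, E_Fb, omega_c = 1.0, 9.47, 0.060
ratio_N, dE_F = 1.05, 0.2            # N_s(0)/N_b(0) and Fermi-level shift

a_s2 = a_b2 * ratio_N / (1.0 + dE_F / E_Fb)            # Eq. 14
print(mu_star(mu_of_a2(a_b2), E_Fb, omega_c))          # bulk mu*
print(mu_star(mu_of_a2(a_s2), E_Fb + dE_F, omega_c))   # surface mu*

With these numbers the two pseudopotentials differ by only about 10^-5, in line with the smallness of the effect quoted above.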
§ CALCULATION OF α^2_sF(Ω), Δ E_F AND N_s(0)

DFT calculations are performed within the mixed-basis pseudopotential method (MBPP) <cit.>. For lead a norm-conserving relativistic pseudopotential including 5d semicore states and partial core corrections is constructed following the prescription of Vanderbilt <cit.>. This provides both scalar-relativistic and spin-orbit components of the pseudopotential. Spin-orbit coupling (SOC) is then taken into account within each DFT self-consistency cycle (for more details on the SOC implementation see <cit.>). The MBPP approach utilizes a combination of local functions and plane waves for the basis set expansion of the valence states, which reduces the size of the basis set significantly. One d-type local function is added at each lead atomic site to efficiently treat the deep 5d potential. Sufficient convergence is then achieved with a cutoff energy of 20 Ry for the plane waves. The exchange-correlation functional is treated in the local density approximation (LDA) <cit.>. Brillouin zone (BZ) integrations are performed on regular k-point meshes in conjunction with a Gaussian broadening of 0.2 eV. For phonons, 16 × 16 × 16 meshes are used, while for the calculations of density of states and electron-phonon coupling (EPC) even denser 32 × 32 × 32 meshes are employed. Phonon properties are calculated with the density-functional perturbation theory <cit.> as implemented in the MBPP approach <cit.>, which also provides direct access to the electron-phonon coupling matrix elements. The procedure to extract the Eliashberg function is outlined in Ref. <cit.>. Dynamical matrices and corresponding EPC matrix elements are obtained on a 16 × 16 × 16 phonon mesh. These quantities are then interpolated using standard Fourier techniques to the whole BZ, and the Eliashberg functions are calculated by integration over the BZ using the tetrahedron method on a 40 × 40 × 40 mesh. SOC is consistently taken into account in all calculations, including lattice dynamical and EPC properties. It is well known from previous work that SOC is necessary for a correct quantitative description of both the phonon anomalies and EPC of undoped bulk lead <cit.>. Charge induction is simulated by adding an appropriate number of electrons during the DFT self-consistency cycle, compensated by a homogeneous background charge to retain overall charge neutrality. Electronic structure properties and the Eliashberg function are calculated for face-centred cubic (fcc) lead with the lattice constant a=4.89 Å as obtained by optimization for the undoped case. For the doping levels considered here, we found that to a good approximation charge induction does not change the band structure but merely results in a shift of E_F. In a previous study, the variation of the EPC was studied as a function of the averaging energy <cit.>. The present approach goes beyond this analysis by taking into account explicitly the effect of charge induction on the screening properties, which modifies both the phonon spectrum and the EPC matrix elements. Finally, we point out that, since this DFT approach simulates the effect of the electric field by adding extra charge carriers to the system together with a uniform compensating counter-charge (jellium model <cit.>), it is unable to describe the inhomogeneous carrier distributions caused by the screening of the electric field itself. A more complete approach has been developed in Ref. Brumme, and requires the self-consistent solution of the Poisson equation; however, this method is currently unable to compute the phonon spectrum of the gated material, making it unsuitable for the application of the proximity Eliashberg formalism.

§ RESULTS AND DISCUSSION

We start our calculations by fixing the input parameters for bulk lead according to the established literature. We set T_c,b to its experimental value <cit.> T_c,b = 7.22 K. The undoped α^2_bF(Ω) gives a corresponding electron-phonon coupling λ_b=1.5596. Assuming a cutoff energy ω_c = 60 meV and a maximum energy ω_max = 70 meV in the Eliashberg equations, we are thus able to determine the bulk Coulomb pseudopotential to be μ^*_b = 0.14164, which reproduces the exact experimental critical temperature T_c,b. In Fig. <ref>a we show the electron-phonon spectral functions α^2F(Ω) calculated for increasing doping level x. Specifically, we plot the curves corresponding to x=0.000, 0.075, 0.150, 0.300, 0.400 e^-/unit cell. We calculate the spectral functions up to x=0.4 e^-/unit cell because for larger doping values an instability emerges in the calculations. We can see the phonon softening evidenced by a reduction of ω_ln with increasing doping level. The increase of the carrier density gives rise to two competing effects: the value of ω_ln (i.e. the representative phonon energy) decreases while the value of the electron-phonon coupling constant λ increases (see Fig. <ref>b). Since the critical temperature is an increasing function of both ω_ln and λ, in general this could result in either a net enhancement or suppression of T_c, depending on which of the two contributions is dominant.
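The competition between the two trends can be made quantitative even before solving the full Eliashberg equations. The sketch below uses the McMillan/Allen-Dynes formula – only a rough stand-in for the full Eliashberg solution employed in the paper, known to underestimate T_c in the strong-coupling regime – with λ_b = 1.5596 and μ^*_b = 0.14164 from above; the ω_ln values and the doped λ values are illustrative, not the computed ones:

import numpy as np

def tc_allen_dynes(lam, omega_ln, mu_star):
    # McMillan / Allen-Dynes estimate of T_c in K (omega_ln in meV)
    k_B = 0.08617  # Boltzmann constant in meV/K
    return (omega_ln / (1.2 * k_B)) * np.exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam)))

mu_star_b = 0.14164
# (lambda, omega_ln): softening phonons vs. strengthening coupling
for lam, w_ln in [(1.5596, 5.0), (1.75, 4.7), (1.90, 4.4)]:
    print(f"lambda = {lam:.4f}, omega_ln = {w_ln} meV -> "
          f"T_c = {tc_allen_dynes(lam, w_ln, mu_star_b):.2f} K")

Despite the roughly 10 percent phonon softening assumed here, the estimated T_c grows monotonically along the sequence, illustrating why the increase of λ can dominate over the reduction of ω_ln.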
Consequently, the ideal situation for obtaining the largest critical temperature in an electric-field-doped material is a concurrent strong increase of both λ and ω_ln. In the case of lead the contribution from the increase of λ is dominant over that from the reduction of ω_ln, giving rise to a net increase of the superconducting critical temperature (as we report in Fig. <ref>c). In addition, in Table <ref> we summarize all the input parameters of the proximity Eliashberg equations as obtained from the DFT calculations. Having determined the response of the superconducting properties of a homogeneous lead film to a modulation of its carrier density, we can now consider the behavior of the more realistic junction between the perturbed surface layer and the unperturbed bulk. In order to do so, however, it is now mandatory to select a value for the thickness of the perturbed surface layer. Close to T_c, the superfluid density is small <cit.> and the screening is dominated by unpaired electrons. Thus, a very rough approximation would be to set d_s to the Thomas-Fermi screening length d_TF, which for lead can be estimated to be 0.15 nm <cit.>. However, we have recently shown <cit.> that this assumption might not be satisfactory in the presence of the very large electric fields that build up in the electric double layer. Indeed, our experimental findings on niobium nitride indicated that the screening length increases for very large doping values <cit.>. However, it is reasonable to assume that the exact magnitude of this increase is specific to each material. Thus, while the qualitative behavior can be expected to be general, the exact values of d_s determined for niobium nitride cannot be directly applied to lead. In order not to lose the generality of our approach, we calculate the behavior of our system for three different choices of the behavior of d_s. We start by expressing d_s = d_TF[1 + mΘ(x - x_0)], where m is a dimensionless parameter indicating how much d_s expands beyond the Thomas-Fermi value for large doping levels, and x_0 is the specific doping value upon which this increase in d_s takes place. By selecting x_0 = 0.2, we allow the upper half of our doping values to go beyond the Thomas-Fermi approximation. We then perform proximity-coupled Eliashberg calculations for m = 0, 1, 4 and five different film thicknesses d = 5, 10, 20, 30, 40 nm, always assuming the junction area to be A = 10^-7 m^2. Note that the case m = 0 of course corresponds to the case where the material satisfies the Thomas-Fermi model for any value of doping: in this case, the model has no free parameters. In Fig. <ref> we plot the evolution of T_c upon increasing electron doping and assuming that the Thomas-Fermi model always holds (m = 0 and d_s = d_TF), for different values of film thickness. The calculations show that the qualitative increase in T_c with increasing doping level that we observed in the homogeneous case is retained also in proximized films of any thickness (see Fig. <ref>a). However, the presence of a coupling between surface and bulk induced by the proximity effect gives rise to a key difference with respect to the homogeneous case, namely, a strong dependence of T_c on film thickness in the doped films. Indeed, the magnitude of the T_c shift with respect to the homogeneous case is heavily suppressed already in films as thin as 5 nm. This behavior is best seen in Fig. <ref>b, where we plot the same data as a function of the total film thickness for all doping levels.
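For reference, the geometrical inputs entering these calculations can be collected in a small helper; d_TF = 0.15 nm, x_0 = 0.2 and the step parametrization of d_s are the values and expressions quoted above, while the dimensionless ratio Γ_s/Γ_b follows the proximity-coupling relation of the Model section. This is only a bookkeeping sketch:

import numpy as np

def d_s(x, m, x0=0.2, d_TF=0.15):
    # Surface layer thickness d_s = d_TF [1 + m Theta(x - x0)], in nm.
    # The step at x = x0 is taken as strict (x > x0) in this sketch.
    return d_TF * (1.0 + m * (x > x0))

def gamma_ratio(d, x, m, Ns_over_Nb=1.0, x0=0.2):
    # Gamma_s / Gamma_b = d_b N_b(0) / [d_s N_s(0)], with d_b = d - d_s
    # for a film of total thickness d (nm).
    ds = d_s(x, m, x0)
    return (d - ds) / (ds * Ns_over_Nb)

print(d_s(0.3, m=4))                   # 0.75 nm = 5 d_TF, the case m = 4
print(gamma_ratio(d=5.0, x=0.3, m=0))  # thin film, Thomas-Fermi limit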
As we can see, the increase of the critical temperature drops dramatically with increasing film thickness. We have not calculated the critical temperature for monolayer films since the approximations of the model would no longer apply in this case: in particular, the unperturbed electron-phonon spectral function would differ from the bulk-like one we employed in our calculations <cit.>. We now consider the effect of different degrees of confinement of the induced charge carriers at the surface of the films. We do so by allowing the perturbed surface layer to spread further into the depth of the film for large electron doping, i.e. by increasing the m parameter in the definition of d_s. In Fig. <ref> we plot the evolution of T_c with increasing electron doping and for different film thicknesses, in the two cases m = 1 (d_s is allowed to expand up to 2d_TF = 0.3 nm) and m = 4 (d_s is allowed to expand up to 5d_TF = 0.75 nm). We can first observe that a different value of d_s does not change the qualitative behavior of the films. The evolution of T_c with increasing electron doping is still comparable to both the homogeneous case and the proximized films in the Thomas-Fermi limit. The suppression of the T_c increase with increasing film thickness is also similar to the latter case. However, the magnitude of the T_c shift for the same values of film thickness and doping level per unit cell is clearly enhanced for larger values of d_s. This is to be expected, as larger values of d_s increase the fraction of the film that is perturbed by the application of the electric field and reduce the damping of the T_c shift operated by the proximity effect. In principle, for values of m large enough (or film thickness d small enough) one could reach the limit value d_s ≃ d and recover the homogeneous case where the T_c shift is maximum. All the calculations we performed so far assumed that one could directly control the induced carrier density per unit volume, x, in the surface layer, without an explicit upper limit. However, this is not an experimentally achievable goal in a field-effect device architecture. In this class of devices, the polarization of the gate electrode allows one to tune the electric field at the interface and thus the induced carrier density per unit surface, Δ n_2D, required to screen it; within our model, Δ n_2D = ∫_0^d_s Δ n_3D dz is distributed within a layer of thickness d_s. In general, the determination of the exact depth profile of this distribution requires the self-consistent solution of the Poisson equation <cit.>; however, as a first approximation we can consider this distribution to be constant, obtaining an effective doping level per unit volume simply as x = Δ n_2D/d_s. This procedure allows one to employ the same DFT-Eliashberg formalism we developed before in order to simulate a field effect experiment on a superconducting thin film. In addition, in the previous calculations we supposed that d_s can only take on two values as a function of x, depending on the threshold value x_0. When we consider the field-effect architecture, however, the parameters m and x_0 in the expression d_s = d_TF[1 + mΘ(x - x_0)] are no longer independent as in the previous case. Moreover, according to our recent experimental findings on niobium nitride <cit.>, d_s is a monotonically increasing function of Δ n_2D.
We include this behavior in our calculations in the following way: once the maximum doping level x_0 is selected, m = m(Δ n_2D) is automatically determined by the requirement d_s(Δ n_2D) = Δ n_2D/x_0, so that x = x_0 whenever the induced carrier density would otherwise exceed x_0. Fig. <ref>a shows the resulting dependence of the doping per unit volume x and surface layer thickness d_s on the induced carrier density per unit surface Δ n_2D, for two different values of the maximum doping level x_0 = 0.3 and x_0 = 0.4 e^-/unit cell. When Δ n_2D is small enough so that x < x_0, the Thomas-Fermi screening holds, d_s = d_TF is constant, and x increases linearly with Δ n_2D. As soon as Δ n_2D exceeds the threshold value Δ n_2D(x_0) at which x reaches x_0, the Thomas-Fermi screening is no longer valid: x = x_0 remains constant and d_s increases linearly with Δ n_2D. In Fig. <ref>b and <ref>c we plot the resulting modulation of T_c for five different film thicknesses in the cases x_0 = 0.4 and 0.3 e^-/unit cell, respectively. In both cases we can readily distinguish between two regimes of Δ n_2D. When Δ n_2D≲Δ n_2D(x_0), Thomas-Fermi screening holds and we reproduce the behavior we observed in Fig. <ref>a. In this regime, the induced carrier density directly modulates x and thus the electron-phonon spectral function α^2 F(Ω). The T_c modulation is thus a result of a direct modification of the material properties at the surface, with the proximity effect simply operating a “smoothing” that becomes stronger for larger film thicknesses. On the other hand, when Δ n_2D > Δ n_2D(x_0), the surface properties (α^2 F(Ω)) are no longer modified by the extra charge carriers, and the further modulation of T_c originates entirely from the proximity effect as determined by the increase in d_s. We can also compare the T_c shifts for different maximum doping levels x_0. Fig. <ref>d shows the difference between the T_c corresponding to x_0 = 0.4 and 0.3 e^-/unit cell as a function of the total film thickness, for different values of Δ n_2D. We can clearly see that T_c is always larger for the films with larger x_0, for any value of film thickness, even if the associated values of d_s are always smaller. This indicates that the maximum achievable value of x_0 is dominant with respect to the increase of d_s in determining the final value of T_c, also in the doping regime Δ n_2D > Δ n_2D(x_0). Of course, in a real sample we do not expect the transition between the two regimes to be so clear-cut, as the saturation of x to x_0 would occur over a finite range of Δ n_2D. In this intermediate region, the modulation of α^2 F(Ω) and d_s would both contribute in a comparable way to the final value of T_c in the film. We stress, however, that in both regimes the proximity effect is fundamental in determining the T_c of the gated film. We also note that the proximized Eliashberg equations are able to account for a non-uniform scaling of the T_c shift for different values of film thickness, unlike the models that use approximated analytical equations for T_c.

§ CONCLUSIONS

In this work, we have developed a general method for the theoretical simulation of field-effect-doping in superconducting thin films of arbitrary thickness, and we have benchmarked it on lead as a standard strong-coupling superconductor. Our method relies on ab-initio DFT calculations to compute how the increasing doping level x per unit volume modifies the structural and electronic properties of the material (shift of Fermi level Δ E_F, density of states N(0), and electron-phonon spectral function α^2 F(Ω)).
The Coulomb pseudopotential μ^* is determined by simple calculations from some of these parameters. The properties of the pristine thin film (critical temperature T_c, device area A and total film thickness d) can be obtained either from the literature or experimentally from standard transport measurements. For doping values where the Thomas-Fermi theory of screening is satisfied, the perturbed surface layer thickness is constant (d_s = d_TF) and the theory has no free parameters. Once all the input parameters are known, our method allows one to compute the transition temperature T_c for arbitrary values of film thickness d and electron doping in the surface layer x by solving the proximity-coupled Eliashberg equations in the surface layer and unperturbed bulk. On the other hand, if no reliable estimations of the surface layer thickness d_s are available, our method allows one to determine d_s(x) by reproducing the experimentally-measured T_c(x). This allows one to probe deviations from the standard Thomas-Fermi theory of screening in the presence of very large interface electric fields. We also show how, even in the case where the Thomas-Fermi approximation breaks down and the doping level x can no longer be increased, the transition temperature T_c of a thin film can still be indirectly modulated by the electric field by changing the surface layer thickness d_s. As far as artificial enhancements of T_c in superconducting thin films are concerned, we conclude that very thin films (d ≲ d_s, in order to minimize the smoothing operated by the proximity effect) of a superconducting material characterized by a strong increase of the electron-phonon (boson) coupling upon changing its carrier density are required to optimize the effectiveness of the field-effect-device architecture. Finally, our calculations indicate that sizable T_c enhancements of the order of ∼ 0.5 K should be achievable in thin films of a standard strong-coupling superconductor such as lead, for easily realizable thicknesses of ∼ 10 nm and doping levels routinely induced via EDL gating in metallic systems. These features may open the possibility for superconducting switchable devices and electrostatically reconfigurable superconducting circuits above liquid helium temperature.

The work of G. A. U. was supported by the Competitiveness Program of NRNU MEPhI.

FujimotoReview2013 T. Fujimoto and K. Awaga, Phys. Chem. Chem. Phys. 15, 8983 (2013)
UenoReview2014 K. Ueno, H. Shimotani, H. T. Yuan, J. T. Ye, M. Kawasaki, and Y. Iwasa, J. Phys. Soc. Jpn. 83, 032001 (2014)
GoldmanReview2014 A. M. Goldman, Annu. Rev. Mater. Res. 44, 45 (2014)
SaitoReview2016 Y. Saito, T. Nojima, and Y. Iwasa, Supercond. Sci. Technol. 29, 093001 (2016)
UenoNatureMater2008 K. Ueno, S. Nakamura, H. Shimotani, A. Ohtomo, N. Kimura, T. Nojima, H. Aoki, Y. Iwasa, and M. Kawasaki, Nat. Mater. 7, 855 (2008)
YeNatureMater2010 J. T. Ye, S. Inoue, K. Kobayashi, Y. Kasahara, H. T. Yuan, H. Shimotani, and Y. Iwasa, Nat. Mater. 9, 125 (2010)
SaitoScience2015 Y. Saito, Y. Kasahara, J. T. Ye, Y. Iwasa, and T. Nojima, Science 350, 409 (2015)
UenoNatureNano2011 K. Ueno, S. Nakamura, H. Shimotani, H. T. Yuan, N. Kimura, T. Nojima, H. Aoki, Y. Iwasa, and M. Kawasaki, Nat. Nanotech. 6, 408 (2011)
YeScience2012 J. T. Ye, Y. J. Zhang, R. Akashi, M. S. Bahramy, R. Arita, and Y. Iwasa, Science 338, 1193 (2012)
JoNanoLett2015 S. Jo, D. Costanzo, H. Berger, and A. F. Morpurgo, Nano Lett. 15, 1197 (2015)
LuScience2015 J. M. Lu, O. Zheliuk, I. Leermakers, N. F. Q. Yuan, U. Zeitler, K. T. Law, and J. T.
Ye, Science 350, 1353 (2015)
ShiSciRep2015 W. Shi, J. T. Ye, Y. Zhang, R. Suzuki, M. Yoshida, J. Miyazaki, N. Inoue, Y. Saito, and Y. Iwasa, Sci. Rep. 5, 12534 (2015)
YuNatNano2015 Y. Yu, F. Yang, X. F. Lu, Y. J. Yan, Y.-H. Cho, L. Ma, X. Niu, S. Kim, Y.-W. Son, D. Feng, S. Li, S.-W. Cheong, X. H. Chen, and Y. Zhang, Nat. Nanotechnol. 10, 270 (2015)
CostanzoNatNano2016 D. Costanzo, S. Jo, H. Berger, and A. F. Morpurgo, Nat. Nanotechnol. 11, 399 (2016)
SaitoNatPhys2016 Y. Saito, Y. Nakamura, M. S. Bahramy, Y. Kohama, J. T. Ye, Y. Kasahara, Y. Nakagawa, M. Onga, M. Tokunaga, T. Nojima, Y. Yanase, and Y. Iwasa, Nat. Phys. 12, 144 (2016)
bollinger11 A. T. Bollinger, G. Dubuis, J. Yoon, D. Pavuna, J. Misewich, and I. Božović, Nature 472, 458 (2011)
LengPRL2011 X. Leng, J. Garcia-Barriocanal, S. Bose, Y. Lee, and A. M. Goldman, Phys. Rev. Lett. 107, 027001 (2011)
LengPRL2012 X. Leng, J. Garcia-Barriocanal, B. Yang, Y. Lee, J. Kinney, and A. M. Goldman, Phys. Rev. Lett. 108, 067004 (2012)
MaruyamaAPL2015 S. Maruyama, J. Shin, X. Zhang, R. Suchoski, S. Yasui, K. Jin, R. L. Greene, and I. Takeuchi, Appl. Phys. Lett. 107, 142602 (2015)
JinSciRep2016 K. Jin, W. Hu, B. Zhu, J. Yuan, Y. Sun, T. Xiang, M. S. Fuhrer, I. Takeuchi, and R. L. Greene, Sci. Rep. 6, 26642 (2016)
FeteAPL2016 A. Fête, L. Rossi, A. Augieri, and C. Senatore, Appl. Phys. Lett. 109, 192601 (2016)
BurlachkovPRB1993 L. Burlachkov, I. B. Khalfin, and B. Ya. Shapiro, Phys. Rev. B 48, 1156 (1993)
GhinovkerPRB1995 M. Ghinovker, V. B. Sandomirsky, and B. Ya. Shapiro, Phys. Rev. B 51, 8404 (1995)
walter16 J. Walter, H. Wang, B. Luo, C. D. Frisbie, and C. Leighton, ACS Nano 10, 7799 (2016)
libro P. Lipavsky, J. Kolacek, K. Morawetz, E. H. Brandt, and T.-J. Yang, Bernoulli Potential in Superconductors: How the Electrostatic Field Helps to Understand Superconductivity, Lect. Notes Phys. 733, Springer, Berlin Heidelberg (2008)
LiNature2016 L. J. Li, E. C. T. O'Farrell, K. P. Loh, G. Eda, B. Özyilmaz, and A. H. Castro Neto, Nature 529, 185 (2016)
YoshidaAPL2016 M. Yoshida, J. T. Ye, T. Nishizaki, N. Kobayashi, and Y. Iwasa, Appl. Phys. Lett. 108, 202602 (2016)
XiPRL2016 X. X. Xi, H. Berger, L. Forró, J. Shan, and K. F. Mak, Phys. Rev. Lett. 117, 106801 (2016)
ShiogaiNaturePhys2015 J. Shiogai, Y. Ito, T. Mitsuhashi, T. Nojima, and A. Tsukazaki, Nat. Phys. 12, 42 (2016)
LeiPRL2016 B. Lei, J. H. Cui, Z. J. Xiang, C. Shang, N. Z. Wang, G. J. Ye, X. G. Luo, T. Wu, Z. Sun, and X. H. Chen, Phys. Rev. Lett. 116, 077002 (2016)
HanzawaPNAS2016 K. Hanzawa, H. Sato, H. Hiramatsu, T. Kamiya, and H. Hosono, Proc. Natl. Acad. Sci. USA 113, 3986 (2016)
ChoiAPL2014 J. Choi, R. Pradheesh, H. Kim, H. Im, Y. Chong, and D. H. Chae, Appl. Phys. Lett. 105, 012601 (2014)
PiattiJSNM2016 E. Piatti, A. Sola, D. Daghero, G. A. Ummarino, F. Laviano, J. R. Nair, C. Gerbaldi, R. Cristiano, A. Casaburi, and R. S. Gonnelli, J. Supercond. Novel Magn. 29, 587 (2016)
PiattiNbN E. Piatti, D. Daghero, G. A. Ummarino, F. Laviano, J. R. Nair, R. Cristiano, A. Casaburi, C. Portesi, A. Sola, and R. S. Gonnelli, Phys. Rev. B 95, 140501 (2017)
carbibastardo J. P. Carbotte, Rev. Mod. Phys. 62, 1027 (1990)
ummarinorev G. A. Ummarino, Eliashberg Theory. In: Emergent Phenomena in Correlated Matter, edited by E. Pavarini, E. Koch, and U. Schollwöck, Forschungszentrum Jülich GmbH and Institute for Advanced Simulations, pp. 13.1-13.36 (2013) ISBN 978-3-89336-884-6
Mc W. L. McMillan, Phys. Rev. 175, 537 (1968)
Carbi1 E. Schachinger and J. P. Carbotte, J. Low Temp. Phys. 54, 129 (1984)
Carbi2 H. G. Zarate and J. P.
Carbotte, Phys. Rev. B 35, 3256, (1987)Carbi3 H. G. Zarate and J. P. Carbotte, Physica B+ C 135, 203 (1985)kresin V. Z. Kresin, H. Morawitz, and S. A. Wolf, Mechanisms of Conventional and High Tc Superconductivity, Oxford University Press (1999)UmmaC60 G. A. Ummarino and R. S. Gonnelli, Phys. Rev. B 66, 104514 (2002)Morel P. Morel and P. W. Anderson, Phys. Rev. 125, 1263 (1962)pastore G. Grosso and G. Pastori Parravicini, Solid state Physics, Academic Press (2014), ISBN 9780123850300Grimvall G. Grimvall, The Electron-phonon Interaction in Metals, North Holland, Amsterdam (1981); G. A. Ummarino, Physica C 423, 96102 (2005)Thomas F. Stern, Phys. Rev. Lett. 18, 546 (1967); E. Canel, M. P. Matthews, and R. K. P. Zia, Phys. Kondens. Mater. 15, 191 (1972); J. Lee and H. N. Spector, J. Appl. Phys. 54, 6989 (1983)meyer B. Meyer, C. Elsässer, and M. Fähnle, FORTRAN90 Program for Mixed-Basis Pseudopotential Calculations for Crystals, Max-Planck-Institut für Metallforschung, Stuttgart (unpublished)vanderbilt D. Vanderbilt, Phys. Rev. B 32, 8412 (1985)hedin L. Hedin and B. I. Lundqvist, J. Phys. C 4, 2064 (1971)heid99 R. Heid and K. P. Bohnen, Phys. Rev. B 60, R3709 (1999)heid10 R. Heid, K.-P. Bohnen, I. Yu. Sklyadneva, and E. V. Chulkov, Phys. Rev. B 81, 174527 (2010)baroni S. Baroni, S. de Gironcoli, A. Dal Corso, and P. Giannozzi, Rev. Mod. Phys. 73, 515 (2001)zein N. E. Zein, Fiz. Tverd. Tela (Leningrad) 26, 3028 (1984); N. E. Zein, Sov. Phys. Solid State 26, 1825 (1984)sklyadneva I. Yu. Sklyadneva, R. Heid, P. M. Echenique, K.-B. Bohnen, and E. V. Chulkov, Phys Rev. B 85, 155115 (2012)Pines D. Pines and P. Nozieres, The theory of quantum liquids, Benjamin, New York (1966)Basavaiah S. Basavaiah, J. M. Eldridge, and J. Matisoo, J. of Appl. Phys. 45, 457 (1974)Pratappone S. Bose, C. Galande, S. P. Chockalingam, R. Banerjee, P. Raychaudhuri, and P. Ayyub, J. Phys.: Condens. Matter 21, 205702 (2009)Brumme T. Brumme, M. Calandra, and F. Mauri, Phys Rev. B 91, 155436 (2012)HirschPRB2004 J. E. Hirsch, Phys. Rev. B 70, 226504 (2004)
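To illustrate how T_c responds to a doping-dependent electron-phonon coupling, the following minimal Python sketch uses the simplified Allen-Dynes (McMillan-type) formula as a lightweight stand-in for the full proximity-coupled Eliashberg solution described above. The lead-like numbers and the linear λ(x) model are illustrative placeholders, not the inputs used in this work.

import numpy as np

def tc_allen_dynes(lam, mu_star, omega_log):
    """Simplified Allen-Dynes / McMillan estimate of T_c (in K).

    lam       : electron-phonon coupling constant lambda
    mu_star   : Coulomb pseudopotential mu*
    omega_log : logarithmic phonon frequency in K
    """
    denom = lam - mu_star * (1.0 + 0.62 * lam)
    if denom <= 0.0:
        return 0.0  # coupling too weak to overcome the Coulomb repulsion
    return (omega_log / 1.2) * np.exp(-1.04 * (1.0 + lam) / denom)

# Illustrative, lead-like parameters (placeholders, not the paper's inputs):
omega_log = 56.0            # K, typical omega_log for Pb
mu_star = 0.11
lam0, dlam_dx = 1.55, 0.5   # hypothetical linear increase of lambda with doping x

for x in (0.0, 0.05, 0.10):  # electrons added per surface atom
    lam = lam0 + dlam_dx * x
    print(f"x = {x:.2f}:  lambda = {lam:.2f},  T_c = {tc_allen_dynes(lam, mu_star, omega_log):.2f} K")

With these placeholder numbers, an increase of λ by ≈0.05 in the surface layer already shifts T_c by a few tenths of a kelvin, i.e., the same order of magnitude as the ∼0.5 K enhancement quoted above.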
http://arxiv.org/abs/1704.08159v1
{ "authors": [ "G. A. Ummarino", "E. Piatti", "D. Daghero", "R. S. Gonnelli", "Irina Yu. Sklyadneva", "E. V. Chulkov", "R. Heid" ], "categories": [ "cond-mat.supr-con", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.supr-con", "published": "20170426151700", "title": "Proximity Eliashberg theory of electrostatic field-effect-doping in superconducting films" }
Modelling the electronic properties of GaAs polytype nanostructures: impact of strain on the conduction band character

Oliver Marquardt, Manfred Ramsteiner, Pierre Corfdir, Lutz Geelhaar, and Oliver Brandt

Paul-Drude-Institut für Festkörperelektronik, Hausvogteiplatz 5–7, 10117 Berlin

We study the electronic properties of GaAs nanowires composed of both the zincblende and wurtzite modifications using a ten-band 𝐤·𝐩 model. In the wurtzite phase, two energetically close conduction bands are of importance for the confinement and the energy levels of the electron ground state. These bands form two intersecting potential landscapes for electrons in zincblende/wurtzite nanostructures. The energy difference between the two bands depends sensitively on strain, such that even small strains can reverse the energy ordering of the two bands. This reversal may already be induced by the non-negligible lattice mismatch between the two crystal phases in polytype GaAs nanostructures, a fact that was ignored in previous studies of these structures. We present a systematic study of the influence of intrinsic and extrinsic strain on the electron ground state for both purely zincblende and wurtzite nanowires as well as for polytype superlattices. The coexistence of the two conduction bands and their opposite strain dependence results in complex electronic and optical properties of GaAs polytype nanostructures. In particular, both the energy and the polarization of the lowest intersubband transition depend on the relative fraction of the two crystal phases in the nanowire.

§ INTRODUCTION

GaAs can be considered the prototype compound semiconductor material and is used for a wide range of electronic and optoelectronic applications including high electron mobility transistors, solar cells, and infrared laser diodes. <cit.> Consequently, the material properties of GaAs have been extensively studied and are known with higher accuracy than for any other compound semiconductor.<cit.> This statement, however, only applies to the equilibrium zincblende (ZB) modification of GaAs, whereas the material properties of the metastable wurtzite (WZ) phase are poorly known. This lack of knowledge results from the fact that WZ GaAs cannot be obtained in bulk form or by conventional heteroepitaxy.<cit.> As a consequence, there has been no need to be concerned with the properties of a metastable phase that escaped investigation in any case. However, this situation has radically changed with the advent of GaAs nanowires (NWs), in which the WZ phase is regularly observed to coexist with the ZB phase in the form of multiple ZB and WZ segments along the NW axis, i. e., ⟨ 111 ⟩_ZB or ⟨ 0001 ⟩_WZ.<cit.> The NWs thus constitute polytype heterostructures that are interesting in their own right. However, to unambiguously extract the material properties of bulk WZ GaAs from experiments on these NWs is beset with many difficulties.
As a consequence, even fundamental properties of WZ GaAs, such as its band gap and the nature of the lowest conduction band (CB), are still controversially discussed.<cit.> While the ZB phase is characterized by a single CB of Γ_6c symmetry with a light effective mass, two energetically close CBs exist in the WZ phase: the Γ_7c band, the equivalent of Γ_6c of the ZB phase with a comparably light effective mass, and the Γ_8c band, which has no equivalent in the ZB phase, but originates from folding the L valley of the ZB band structure to the center of the Brillouin zone and thus exhibits a heavy and anisotropic effective mass.<cit.> To our knowledge, all available studies agree that the energy difference between the two CBs in the WZ phase is small (<0.1 eV), but differ concerning the ordering of the two bands, namely, whether the Γ_8c band is energetically below the Γ_7c band or vice versa.<cit.>

The study of Cheiwchanchamnangij and Lambrecht<cit.> has shown that the ordering of these two bands depends sensitively on strain. In particular, for a uniaxial strain parallel to the NW axis, the deformation potentials of the Γ_7c and Γ_8c bands were found to be of opposite signs such that the bands cross for small uniaxial strains ϵ_zz, with the exact magnitude depending on the equilibrium lattice constants used for the calculation. This theoretical result was experimentally confirmed by Signorello et al.,<cit.> who performed experiments on single NWs to which an external uniaxial strain was applied. Signorello et al.<cit.> observed the Γ_7c/Γ_8c crossover at ϵ_zz = -0.14%. We note that a strain of this magnitude may also be introduced unintentionally upon dispersal of the NWs on a substrate.<cit.> In addition to these extrinsic sources of strain, an intrinsic source exists that has so far been ignored in studies of the electronic structure of GaAs polytype NWs: the non-negligible lattice mismatch between ZB and WZ GaAs. High-resolution x-ray diffraction experiments demonstrate that the in-plane lattice constant a of WZ GaAs is smaller than the equivalent interatomic distance on the (111) plane of the ZB phase by -(0.27 ± 0.05)%.<cit.> Considering the sensitivity of the band structure of WZ GaAs to strain of this magnitude, it is obviously essential to take this lattice mismatch into account for the interpretation of experiments performed on polytypic GaAs NWs. As a consequence, it is imperative for any such interpretation to rely on a model that includes both the Γ_7c and Γ_8c bands in WZ GaAs explicitly.

In the present work, we employ and evaluate a ten-band 𝐤·𝐩 model suitable for describing polytype heterostructures represented by two intersecting potentials formed by the Γ_6c (ZB) and Γ_7c (WZ) bands as well as by the Γ_8c (WZ) band, which has no equivalent in the ZB phase. The model treats the Γ_7c and Γ_8c bands on an equal footing and thus allows us to take into account strain both from the lattice mismatch between the ZB and the WZ phase and from external influences. Parameters for (111)-oriented ZB systems are transformed to their respective WZ counterparts such that both crystal phases can be described within the same Hamiltonian. We compute the electronic properties of pure ZB and WZ GaAs NWs as well as of polytypic GaAs NW heterostructures.
We show that strain-induced modifications of the two CBs in the WZ phase have a decisive influence on both the character and the confinement of electrons in polytype GaAs heterostructures.

§ FORMALISM AND PARAMETERS

Our simulations employ a 𝐤·𝐩 Hamiltonian based on the eight-band model for strained WZ semiconductors developed by Chuang and Chang,<cit.> expanded to ten bands with the parabolic Γ_8c band under the influence of strain.<cit.> This simple approach captures the fundamental feature of the potentials formed by two uncoupled, coexisting CBs in the WZ phase. All parameters employed for the calculations are compiled in Table <ref> and were taken from Ref. ChLa11 unless indicated otherwise. We have chosen the lattice constants computed within the local density approximation (LDA), since these values are much closer to the experimentally obtained lattice constants<cit.> than the ones obtained via the generalized gradient approximation (GGA).<cit.> As a result, E(Γ_8c) < E(Γ_7c) at zero strain, contrary to the ordering reported in Ref. SiLo14, in which the GGA values were used. This difference reflects the present uncertainty in the parameters. In any case, the energy difference between the two bands is small, and the bands cross for uniaxial strains of the same magnitude (but opposite signs). The notation of the deformation potentials follows that of Ref. SiLo14. Lattice, elastic, and piezoelectric constants for the ZB crystal along the ⟨111⟩ direction and for the WZ phase were obtained from the respective ZB values via the transformation relations given in Ref. ScCa11. As there is no equivalent of the WZ Γ_8c band in the ZB phase, we assigned a barrier of 1.5 eV to it, which is approximately the energy separation between the Γ_6c and the next higher CB in the ZB phase. We have assigned the same electron effective masses as in the WZ phase for this band in the ZB segment, since the employed ten-band model requires the consistent treatment of the Γ_8c band in both crystal phases. The Hamiltonian (see Appendix) was implemented within the generalized multiband 𝐤·𝐩 module of the S/PHI/nX software library. <cit.>

Figure <ref> shows the bulk band structure as well as the response of the band edges at the Γ point to an external uniaxial strain ϵ_zz, obtained with the parameters listed in Table <ref> for both the ZB and the WZ modifications of GaAs. The familiar band structure of ZB GaAs in Fig. <ref>(a) is different from the band structure of WZ GaAs displayed in Fig. <ref>(b) not only for the valence bands (VBs), but particularly for the CBs. The energy splitting of the two CBs close to the Γ point is visualized in the inset of Fig. <ref>(b). Figures <ref>(c) and <ref>(d) illustrate the influence of an external uniaxial strain on the Γ-point CB and VB energies. For the ZB phase, the VBs are degenerate at zero strain and split at any finite strain value. For the WZ phase, the VBs are already split at zero strain, and their order does not change within the intervals of strain considered here. However, the character of the lowest CB changes from Γ_8c to Γ_7c at ϵ_zz = 0.12%. This change has important consequences: not only does the energy of the optical transition change, but also the oscillator strength. <cit.>

§ PURE ZINCBLENDE AND WURTZITE NANOWIRES

We start with a discussion of the electronic properties of pure ZB and WZ NWs under the influence of strain and radial confinement.
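Before examining the NW results, the opposite strain response of the two WZ CB edges [Fig. <ref>(d)] can be made concrete with a minimal linearized sketch. The zero-strain splitting and the effective uniaxial deformation potentials below are illustrative placeholders chosen only to produce a crossover near ϵ_zz ≈ 0.1%; they are not the values of Table <ref>.

import numpy as np

# Linearized CB edges around zero strain (placeholder numbers, in eV):
#   E_7c(eps) = E7_0 + a7 * eps_zz
#   E_8c(eps) = E8_0 + a8 * eps_zz
# The two effective deformation potentials have opposite signs,
# which is what produces the band crossover.
E7_0, a7 = 0.040, -20.0   # hypothetical: Gamma_7c 40 meV above Gamma_8c, moves down under tension
E8_0, a8 = 0.000, +10.0   # hypothetical: Gamma_8c reference, moves up under tension

eps_cross = (E7_0 - E8_0) / (a8 - a7)   # solves E_7c(eps) = E_8c(eps)
print(f"crossover at eps_zz = {100 * eps_cross:.2f} %")  # ~0.13 % for these numbers

for eps in np.linspace(-0.01, 0.01, 5):  # eps_zz from -1% to +1%
    lowest = "Gamma_7c" if E7_0 + a7 * eps < E8_0 + a8 * eps else "Gamma_8c"
    print(f"eps_zz = {100 * eps:+.1f} %:  lowest CB is {lowest}")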
Figure <ref> shows the energy difference between electron and hole ground states relative to the band gap of the corresponding phase as a function of the diameter of NWs that are subject to a uniaxial strain ϵ_zz of up to 1%. For an unstrained ZB NW [cf. Fig. <ref>(a)], the energy decreases with increasing diameter due to a reduced radial confinement and converges towards the unstrained ZB band gap. Note that dielectric confinement <cit.> is not considered in this model. For finite tensile strain, the energy is reduced for all diameters. The electron state has a Γ_6c character in all cases, as this band is energetically well separated from any other band. The hole ground state is subject to strong band mixing for all NW diameters and strains considered. The character of the hole state thus changes continuously such that no abrupt change of the hole energy is observed. The contribution of the light hole (Γ_7v-) decreases with decreasing diameter and larger strain ϵ_zz.

The situation changes entirely when considering a pure WZ NW [Fig. <ref>(b)]. For the parameter set employed in the present work, the energetically lower band for the unstrained NW is the Γ_8c band, which exhibits a heavier effective electron mass compared to the Γ_7c band. Hence, the electron ground state is of Γ_8c character regardless of the NW diameter. We furthermore consider the two CBs to be uncoupled, as shown for bulk WZ GaAs,<cit.> so that no band mixing occurs. Under the influence of tensile uniaxial strain, the Γ_7c band is lowered energetically and crosses the Γ_8c band at ϵ_zz = 0.12% [cf. Fig. <ref>(d)]. For larger strains, the electron ground state is of Γ_8c character up to a certain diameter due to the large effective mass of this band. For larger diameters, the ground state changes its character to Γ_7c since the influence of quantum confinement diminishes. Since the two bands are not electronically coupled, this change of character is abrupt, in marked contrast to the behavior known from VB states in ZB GaAs NWs. Hence, WZ GaAs NWs of slightly different diameter or experiencing slightly different strain may exhibit drastically different optical properties in terms of polarization selection rules and oscillator strength. The coexistence of the Γ_7c and Γ_8c bands thus has important consequences for the interpretation of experimental results obtained from single NWs.

§ POLYTYPE SUPERLATTICES

In this section, we address the electronic properties of WZ/ZB polytype heterostructures as computed in the framework of our ten-band 𝐤·𝐩 model. Since we are interested here in the influence of axial confinement, we restrict the following discussion to NW diameters for which radial confinement can be safely neglected, i. e., to diameters larger than 50 nm. Assuming further that other radial contributions to the potential landscape, such as surface potentials induced by Fermi level pinning, can also be excluded, we may approximate GaAs NWs consisting of WZ and ZB segments by a planar polytype heterostructure.

§.§ Spatially direct and indirect transitions

It is generally accepted that WZ/ZB heterostructures from III-V semiconductors represent type II heterostructures with the CB minimum in the ZB phase and the VB maximum in the WZ phase.<cit.> Consequently, electrons and holes are expected to be spatially separated in these structures. This view, however, is too simplistic in that it neglects the coexistence of two CBs in the WZ phase.
In fact, the Γ_6c,7c and the Γ_8c bands form two intersecting but not interacting potentials for electrons. Figure <ref> illustrates that, depending on the length of the segments, both spatially indirect and direct optical transitions are possible in a WZ/ZB heterostructure. In Fig. <ref>(a), the electron ground state is located in the comparatively long ZB segment due to the potential offset between the Γ_6c band in the ZB phase and the equivalent Γ_7c band in the WZ phase [cf. Tab. <ref>]. Since the hole ground state always resides in the WZ segment due to the Γ_8v/Γ_9v potential offset between the ZB and the WZ phase, the optical transitions are spatially indirect in this case. The situation may change for thin ZB segments, as shown in Fig. <ref>(b). Here, the quantized state in the ZB segment is at an energy higher than the Γ_8c band in the WZ segment. This band has no equivalent in the ZB segment, which thus represents a high energy barrier for an electron in the WZ segment. For thin ZB segments, the electron ground state is thus confined in the potential well formed by the Γ_8c band in the WZ segment. Spatially direct transitions between these electrons and holes in the Γ_9v VB are allowed for a polarization perpendicular to the ⟨0001⟩ direction, with a small but nonzero dipole matrix element. The green dash-dotted line in Fig. <ref>(b) indicates the first electron state that is confined in the ZB segment, which is energetically above the ground state confined in the WZ segment.

§.§ Intrinsic strain and polarization

The above qualitative considerations show that it is essential to treat both CBs in the WZ phase on an equal footing. For quantitative results, it is important to note that the electronic properties of ZB and WZ segments in GaAs NWs are modified by strain as well as by spontaneous and piezoelectric polarization potentials, P_sp and P_pz, respectively. The in-plane lattice constants of WZ and ⟨ 111 ⟩-oriented ZB crystals differ by about 0.3%. Polytype NWs will adopt an average lattice constant that depends on the overall fraction of ZB and WZ segments. The ZB and WZ segments are thus under compressive and tensile biaxial strain ε_ij (i,j = x,y,z), respectively, which in turn induces a corresponding piezoelectric polarization. In addition, WZ GaAs exhibits a spontaneous polarization of P_sp = -2.3 × 10^-3 C/m^2 along the ⟨0001⟩ direction. <cit.> The total polarization discontinuity at the ZB/WZ interfaces gives rise to a polarization potential in polytype NWs composed of ZB and WZ segments. For the following calculations, we consider a superlattice consisting of a ZB and a WZ segment with a total length of 40 nm and individual lengths between 1 and 39 nm. We first evaluate the influence of internal strain and built-in electric fields on the electronic properties of this WZ/ZB superlattice in the absence of additional external strain. Figure <ref> shows the energy difference between electron and hole ground states relative to the band gaps of unstrained ZB and WZ GaAs as a function of the length of the WZ segment. The intrinsic biaxial strain ε_ij within the segments was computed assuming that the equilibrium in-plane lattice constant is given by an average of the ZB and WZ lattice constants weighted by the respective segment length. <cit.> If both the lattice mismatch and the polarization potentials are neglected [cf. curve a in Fig.
<ref>], the energy difference between the electron and the hole ground states first drops abruptly due to decreasing hole confinement in the WZ segment, reaches a minimum at a length of 7 nm, and increases for longer WZ segments due to the increasing electron confinement in the ZB segment. The electron remains confined in the ZB segment with a Γ_6c,7c character (dash-dotted line) up to a WZ segment length of 36 nm. For even longer segments, the electron ground state becomes confined in the WZ segment and its character changes to Γ_8c (solid line). For all segment lengths, the energy difference between electron and hole with respect to the ZB band gap remains negative, i. e., the energy of optical transitions would be below the ZB band gap due to the VB offset between ZB and WZ GaAs.

When we include spontaneous polarization in our simulations, as shown in curve b in Fig. <ref>, the overall energy redshift becomes larger with increasing length of the WZ segment. At the minimum of the curve, at a length of 20 nm, the energy shift amounts to 90 meV as compared to curve a. Considering, in addition, the lattice mismatch and the resulting biaxial strain and piezoelectric polarization potentials [cf. curve c], significant differences are observed with respect to curve b both for short and long WZ segments. In particular, for WZ segments longer than 36 nm, the energy difference between the Γ_8c electron and the Γ_9v hole states exceeds the band gap of ZB GaAs. Note, however, that we never reach or even exceed the band gap of WZ GaAs, which is a consequence of the presence of internal electrostatic fields in the heterostructure.

§.§ Influence of external strain

We next study the influence of an additional uniaxial strain ϵ_zz on the electronic properties of WZ/ZB GaAs superlattices. We focus here on the case of a superlattice with ϵ_zz < 0, for which the interplay of spatially direct and indirect transitions (cf. Fig. <ref>) is most clearly seen. Figure <ref>(a) shows the energy difference between electron and hole ground states as a function of the length of the WZ (ZB) segment for different values of ϵ_zz. The intrinsic biaxial strain due to the lattice mismatch as well as the spontaneous and piezoelectric polarization are taken into account. Upon the application of the external uniaxial strain, the character of the electron ground state changes to Γ_8c already for shorter WZ segments (for example, 30 nm at ϵ_zz = -0.2%, 15 nm at -0.6%, 5 nm at -1%), as compared to the previously discussed case where the external strain was absent (cf. Fig. <ref>). This change of character can also be seen when examining the charge carrier overlap 𝒪 between the electron and hole ground state as defined in Ref. MaHa13. The overlap is below 10^-5 between Γ_8c electron and Γ_9v hole states for WZ segments longer than 10 nm, despite the fact that both particles are confined within the same segment, implying spatially direct transitions as schematically depicted in Fig. <ref>. The origin of this unexpected behavior is the polarization potential, which results in a strong confinement of electrons and holes at the opposite facets of the WZ segment. In contrast, the overlap between Γ_6c electrons confined in the ZB segment and Γ_9v holes in the WZ segment is much larger (10^-4 to 10^-1) thanks to the weak confinement of the light Γ_6c electrons. However, for ϵ_zz ≤ -0.8% and short WZ segments, 𝒪 increases drastically for Γ_8c electrons.
In these cases, strain reduces the Γ_8c CB energy by such an amount that the electron remains confined in the WZ segment even for very short segments. To illustrate this behavior, Fig. <ref> shows the charge density of the electron ground state together with the potentials formed by the Γ_6c,7c and the Γ_8c bands for WZ segments of 10 nm [cf. Figs. <ref>(a), (c), (e)] and 30 nm [cf. Figs. <ref>(b), (d), (f)] length and different values of ϵ_zz. For ϵ_zz = -0.2%, Ψ_el is confined in the ZB segment in both cases [cf. Figs. <ref>(a) and <ref>(b)], but the wavefunction penetrates into the WZ segment and the confinement of the electron is rather weak. For a strain of -0.4%, the electron remains weakly confined in the ZB segment for a WZ length of 10 nm, but is transferred to the WZ segment, and thus changes its character to Γ_8c, for a WZ length of 30 nm [cf. Figs. <ref>(c) and <ref>(d)]. Evidently, the confinement of the electron in the WZ segment is much stronger due to the large effective mass of the Γ_8c band along the ⟨0001⟩ direction, so that tunneling into the ZB segment is negligible. Finally, for a strain of -0.8%, the electron is strongly confined within the WZ segment for both the short and the long WZ segment [cf. Figs. <ref>(e) and <ref>(f)].

§ SUMMARY AND CONCLUSIONS

Our findings show that the description of the electronic properties of GaAs polytype nanostructures requires the explicit consideration of the two energetically lowest CBs. We find that the intrinsic strain ε_ij that arises from the lattice mismatch between the two polytypes as well as the piezoelectric and spontaneous polarization have a significant influence on the electronic properties of WZ/ZB GaAs heterostructures and must not be neglected. In particular, both the character of the electron ground state and its energy depend sensitively on the polytype fraction in a given NW. These properties are furthermore affected by external uniaxial strain acting on the NW. Over the range -1% < ϵ_zz < 1%, the energy difference between the two relevant CBs of the WZ phase varies between -250 and +200 meV. The significant influence of comparatively small strains on the optical properties of polytype GaAs NWs is a possible explanation for the controversial experimental results regarding the character of the lowest CB and the energy of the corresponding band gap that were reported in the past. We finally note that many of the parameters employed for our simulations are not known with high accuracy. However, as long as the energy difference between the Γ_7c and the Γ_8c CB of the WZ segment is small (as is the case not only in GaAs, but also in GaSb <cit.>), the character of the electron ground state will depend on the strain state and the dimensions of the WZ/ZB heterostructure, such that the explicit treatment of the two CBs is required for any simulation of its electronic properties.

The authors thank Friedhelm Bechstedt for his help and valuable suggestions and Lutz Schrottke for a critical reading of the manuscript. P. C. acknowledges funding from the Fonds National Suisse de la Recherche Scientifique through project 161032.
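The intrinsic strain model used in Sec. IV (a superlattice relaxing to the length-weighted average in-plane lattice constant) can be written down in a few lines. In the minimal sketch below, only the -0.27% ZB/WZ lattice mismatch is taken from the text; the absolute lattice constant is an arbitrary placeholder since only ratios enter.

def segment_strains(len_zb_nm, len_wz_nm, mismatch=-0.0027):
    """In-plane biaxial strain of the ZB and WZ segments of a polytype
    superlattice that relaxes to the length-weighted average lattice constant.

    mismatch = (a_WZ - a_ZB(111)) / a_ZB(111), about -0.27% for GaAs.
    """
    a_zb = 1.0                       # placeholder reference (arbitrary units)
    a_wz = a_zb * (1.0 + mismatch)
    a_avg = (len_zb_nm * a_zb + len_wz_nm * a_wz) / (len_zb_nm + len_wz_nm)
    eps_zb = (a_avg - a_zb) / a_zb   # compressive (< 0) for the ZB segment
    eps_wz = (a_avg - a_wz) / a_wz   # tensile (> 0) for the WZ segment
    return eps_zb, eps_wz

# 40 nm superlattice period as in Sec. IV:
for l_wz in (10, 20, 30):
    e_zb, e_wz = segment_strains(40 - l_wz, l_wz)
    print(f"WZ length {l_wz} nm:  eps_zb = {100*e_zb:+.3f} %,  eps_wz = {100*e_wz:+.3f} %")

For equal segment lengths, this gives roughly ∓0.14% in the two segments, i.e., strains of exactly the magnitude for which the CB ordering is sensitive.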
§ APPENDIX

The Hamiltonian employed is based on an eight-band model by Chuang and Chang,<cit.> where the additional Γ_8 CB is added:

\hat{H}^{10\times10} =
\begin{pmatrix}
C & 0 & 0 & 0 & R & 0 & 0 & 0 & 0 & 0 \\
0 & C & 0 & 0 & 0 & 0 & 0 & 0 & 0 & R \\
0 & 0 & S & 0 & -V & U & V^{*} & 0 & 0 & 0 \\
0 & 0 & 0 & S & 0 & 0 & 0 & -V & U & V^{*} \\
R & 0 & -V^{*} & 0 & F & -M^{*} & -K^{*} & 0 & 0 & 0 \\
0 & 0 & U & 0 & -M & \lambda & M^{*} & \Delta & 0 & 0 \\
0 & 0 & V & 0 & -K & M & G & 0 & \Delta & 0 \\
0 & 0 & 0 & -V^{*} & 0 & \Delta & 0 & G & -M^{*} & -K^{*} \\
0 & 0 & 0 & U & 0 & 0 & \Delta & -M & \lambda & M^{*} \\
0 & R & 0 & V & 0 & 0 & 0 & -K & M & F
\end{pmatrix}.

The entries of the matrix are the operators

S = E_cb + A_1' ∂_z^2 + A_2' (∂_x^2 + ∂_y^2),
F = Δ_1 + Δ_2 + λ + θ,   G = Δ_1 - Δ_2 + λ + θ,
λ = (ħ^2/2m_0) (Ã_1 ∂_z^2 + Ã_2 [∂_x^2 + ∂_y^2]) + E_vb,
θ = (ħ^2/2m_0) (Ã_3 ∂_z^2 + Ã_4 [∂_x^2 + ∂_y^2]),
K = (ħ^2/2m_0) Ã_5 (∂_x + i∂_y)^2,   M = (ħ^2/2m_0) Ã_6 ∂_z (∂_x + i∂_y),
U = i ∂_z P_1,   V = i (∂_x + i∂_y) P_2,   Δ = √2 Δ_3,

with

A_1' = ħ^2/2m_e^∥ - P_1^2/E_G,   A_2' = ħ^2/2m_e^⊥ - P_2^2/E_G,
Ã_1 = A_1 + (2m_0/ħ^2) P_2^2/E_G,   Ã_2 = A_2,
Ã_3 = A_3 - (2m_0/ħ^2) P_2^2/E_G,   Ã_4 = A_4 + (2m_0/ħ^2) P_1^2/E_G,
Ã_5 = A_5 + (2m_0/ħ^2) P_1^2/E_G,   Ã_6 = A_6 + (√2 m_0/ħ^2) P_1 P_2/E_G,
P_1^2 = (ħ^2/2m_0) (m_0/m_e^⊥ - 1) [(E_G + Δ_1 + Δ_2)(E_G + 2Δ_2) - 2Δ_3^2]/(E_G + 2Δ_2),
P_2^2 = (ħ^2/2m_0) (m_0/m_e^∥ - 1) E_G [(E_G + Δ_1 + Δ_2)(E_G + 2Δ_2) - 2Δ_3^2]/[(E_G + Δ_1 + Δ_2)(E_G + Δ_2) - Δ_3^2],
Δ_1 = Δ_cr   and   Δ_2 = Δ_3 = (1/3) Δ_so.

E_cb and E_vb denote the conduction and valence band edges, E_G = E_cb - E_vb is the band gap, and m_0 is the bare electron mass. m_e^∥ and m_e^⊥ are the electron effective masses of the Γ_6 (ZB) and Γ_7 (WZ) CB, and Δ_cr and Δ_so denote the crystal-field and the spin-orbit splitting parameter, respectively. A_1 to A_6 are the Luttinger-like parameters. The Γ_8 band is added via the term

C = E_cb + ΔE(Γ_8, Γ_7) + (ħ^2/2m_8,∥) ∂_z^2 + (ħ^2/2m_8,⊥) (∂_x^2 + ∂_y^2).

Here, ΔE(Γ_8, Γ_7) denotes the energy splitting between the two bands at the Γ point, and m_8,∥ and m_8,⊥ denote the effective masses along the [0001] direction and perpendicular to it, respectively. R ≈ 0 denotes the small, but dipole-allowed, coupling of the Γ_8 CB and the Γ_9v VB. Strain enters the Hamiltonian via the additional contribution

\hat{H}_\mathrm{strain} =
\begin{pmatrix}
c & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & c & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & s & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & s & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & f & -h^{*} & -k^{*} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -h & \lambda_\epsilon & h^{*} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -k & h & f & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & f & -h^{*} & -k^{*} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -h & \lambda_\epsilon & h^{*} \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & -k & h & f
\end{pmatrix},

where

c = (Ξ_d,h - Ξ_b,h) τ ϵ_zz + Ξ_d,u (1 - τ) ϵ_zz,
s = (Ξ_b,h - D_1 - 2D_2) τ ϵ_zz + D_3 (1 - τ) ϵ_zz,
τ = (1 - 2ν)/3   and   ν = C_12/(C_12 + C_11),
λ_ϵ = D_1 ϵ_zz + D_2 (ϵ_xx + ϵ_yy),
θ_ϵ = D_3 ϵ_zz + D_4 (ϵ_xx + ϵ_yy),
f = λ_ϵ + θ_ϵ,
k = D_5 (ϵ_xx + 2iϵ_xy - ϵ_yy),
h = D_6 (ϵ_zx + iϵ_yz).

References:

Wa88 O. Wada, Opt. Quantum Electron. 20, 441 (1988).
MoJa09 S. Mokkapati and C. Jagadish, Mater. Today 12, 22 (2009).
Bl82 J. S. Blakemore, J. Appl. Phys. 53, R123 (1982).
Ad94 S. Adachi, GaAs and Related Materials: Bulk Semiconducting and Superlattice Properties (World Scientific, Singapore, 1994).
VuMe01 I. Vurgaftman, J. R. Meyer, and L. R. Ram-Mohan, J. Appl. Phys. 89, 5815 (2001).
McNe05 M. I. McMahon and R. J. Nelmes, Phys. Rev. Lett. 95, 215505 (2005).
SoCi05 I. P. Soshnikov, G. É. Cirlin, A. A. Tonkikh, Yu. B. Samsonenko, V. G. Dubrovskii, V. M. Ustinov, O. M. Gorbenko, D. Litvinov, and D. Gerthsen, Phys. Solid State 47, 2213 (2005).
ZaCo09 I. Zardo, S. Conesa-Boj, F. Peiro, J. R. Morante, J. Arbiol, E. Uccelli, G. Abstreiter, and A. Fontcuberta i Morral, Phys. Rev. B 80, 245324 (2009).
HeCo11 M. Heiß, S. Conesa-Boj, J. Ren, H.-H. Tseng, A. Gali, A. Rudolph, E. Uccelli, F. Peiró, J. R. Morante, D. Schuh, E. Reiger, E. Kaxiras, J. Arbiol, and A. Fontcuberta i Morral, Phys. Rev. B 83, 045303 (2011).
DePr10 A. De and C. Pryor, Phys. Rev. B 81, 155210 (2010).
BePa12 A. Belabbes, C. Panse, J. Furthmüller, and F. Bechstedt, Phys. Rev. B 86, 075208 (2012).
GrCo13 A. M. Graham, P. Corfdir, M. Heiss, S. Conesa-Boj, E. Uccelli, A. Fontcuberta i Morral, and R. T. Phillips, Phys. Rev. B 87, 125304 (2013).
BeBe13 F. Bechstedt and A. Belabbes, J. Phys.: Condens. Matter 25, 273201 (2013).
TrKi99 P. Tronc, Y. E. Kitaev, G. Wang, M. F. Limonov, A. G. Panfilov, and G. Neu, phys. status solidi (b) 216 (1999).
MuNa94 M. Murayama and T. Nakayama, Phys. Rev. B 49, 4710 (1994).
ChLa11 T. Cheiwchanchamnangij and W. R. L. Lambrecht, Phys. Rev. B 84, 035203 (2011).
SiLo14 G. Signorello, E. Lörtscher, P. A. Khomyakov, S. Karg, D. L. Dheeraj, B. Gotsmann, H. Weman, and H. Riel, Nat. Commun. 5, 3655 (2014).
CoFe15 P. Corfdir, F. Feix, J. K. Zettler, S. Fernández-Garrido, and O. Brandt, New J. Phys. 17, 033040 (2015).
TcHa06 M. Tchernycheva, J. C. Harmand, G. Patriarche, L. Travers, and G. E. Cirlin, Nanotechnology 17, 4025 (2006).
BiBr12 A. Biermanns, S. Breuer, A. Trampert, A. Davydok, L. Geelhaar, and U. Pietsch, Nanotechnology 23, 305703 (2012).
JoYa15 D. Jacobsson, F. Yang, K. Hillerich, F. Lenrick, S. Lehmann, D. Kriegner, J. Stangl, L. R. Wallenberg, K. A. Dick, and J. Johansson, Cryst. Growth Des. 15, 4795 (2015).
ChCh96 S. L. Chuang and C. S. Chang, Phys. Rev. B 54, 2491 (1996).
ScCa11 S. Schulz, M. A. Caro, E. P. O'Reilly, and O. Marquardt, Phys. Rev. B 84, 125312 (2011).
BoFr11 S. Boeck, C. Freysoldt, A. Dick, L. Ismer, and J. Neugebauer, Comput. Phys. Commun. 182, 543 (2011).
MaBo14 O. Marquardt, S. Boeck, C. Freysoldt, T. Hickel, S. Schulz, J. Neugebauer, and E. P. O'Reilly, Comput. Mater. Sci. 95, 280 (2014).
BeZu06 G. Bester, A. Zunger, X. Wu, and D. Vanderbilt, Phys. Rev. B 74, 081305(R) (2006).
ClSe16 J. I. Climente, C. Segarra, F. Rajadell, and J. Planelles, J. Appl. Phys. 119, 125705 (2016).
Ad85 S. Adachi, J. Appl. Phys. 58, R1 (1985).
ScWi07 A. Schliwa, M. Winkelnkemper, and D. Bimberg, Phys. Rev. B 76, 205324 (2007).
ZeCo16 J. K. Zettler, P. Corfdir, C. Hauswald, E. Luna, U. Jahn, T. Flissikowski, E. Schmidt, C. Ronning, A. Trampert, L. Geelhaar, H. T. Grahn, O. Brandt, and S. Fernández-Garrido, Nano Lett. 16, 973 (2016).
Va89 C. G. Van de Walle, Phys. Rev. B 39, 1871 (1989).
MaHa13 O. Marquardt, C. Hauswald, M. Wölz, L. Geelhaar, and O. Brandt, Nano Lett. 13, 3298 (2013).
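As a cross-check of the appendix expressions, the strain contribution \hat{H}_strain can be assembled numerically in a few lines. The sketch below implements only the formulas given above; the deformation potentials (D_1...D_6, Ξ terms) and elastic constants in the example call are freely invented placeholders standing in for the entries of Table <ref>.

import numpy as np

def h_strain(eps, D, Xi_dh, Xi_bh, Xi_du, C11, C12):
    """10x10 strain Hamiltonian of the appendix; eps is the 3x3 strain tensor,
    D = (D1, ..., D6) are the valence-band deformation potentials."""
    D1, D2, D3, D4, D5, D6 = D
    nu = C12 / (C12 + C11)
    tau = (1.0 - 2.0 * nu) / 3.0
    exx, eyy, ezz = eps[0, 0], eps[1, 1], eps[2, 2]

    c = (Xi_dh - Xi_bh) * tau * ezz + Xi_du * (1.0 - tau) * ezz       # Gamma_8 CB shift
    s = (Xi_bh - D1 - 2.0 * D2) * tau * ezz + D3 * (1.0 - tau) * ezz  # Gamma_6/7 CB shift
    lam = D1 * ezz + D2 * (exx + eyy)                                 # lambda_eps
    f = lam + D3 * ezz + D4 * (exx + eyy)                             # lambda_eps + theta_eps
    k = D5 * (exx + 2j * eps[0, 1] - eyy)
    h = D6 * (eps[2, 0] + 1j * eps[1, 2])

    vb = np.array([[f, -np.conj(h), -np.conj(k)],
                   [-h, lam, np.conj(h)],
                   [-k, h, f]], dtype=complex)
    H = np.zeros((10, 10), dtype=complex)
    H[0, 0] = H[1, 1] = c
    H[2, 2] = H[3, 3] = s
    H[4:7, 4:7] = vb      # first valence-band block
    H[7:10, 7:10] = vb    # second (time-reversed) valence-band block
    return H

# Placeholder call: +0.135% biaxial in-plane strain with Poisson relaxation along z;
# all potentials (eV) and elastic constants (GPa) are illustrative, not Table 1 values.
eps = np.diag([1.35e-3, 1.35e-3, -1.0e-3])
H = h_strain(eps, D=(-1.0, -2.0, 1.0, -1.5, -2.5, -3.0),
             Xi_dh=-1.0, Xi_bh=-2.0, Xi_du=3.0, C11=118.8, C12=53.8)
print(H.shape, np.allclose(H, H.conj().T))  # (10, 10) True -- Hermitian by construction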
http://arxiv.org/abs/1704.08499v1
{ "authors": [ "Oliver Marquardt", "Manfred Ramsteiner", "Pierre Corfdir", "Lutz Geelhaar", "Oliver Brandt" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170427104219", "title": "Modelling the electronic properties of GaAs polytype nanostructures: impact of strain on the conduction band character" }
Tailoring the SiC surface - a morphology study on the epitaxial growth of graphene and its buffer layer

^1 Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116 Braunschweig, Germany
^2 Institute of Semiconductor Technology of Technische Universität Braunschweig, Hans-Sommer-Straße 66, 38106 Braunschweig, Germany
^3 Laboratory for Emerging Nanometrology (LENA), TU Braunschweig

E-mail: [email protected], [email protected]

July 2017

Keywords: epitaxial graphene, buffer layer growth, polymer-assisted, giant step bunching, hydrogen etching

We investigate the growth of the graphene buffer layer and the involved step bunching behavior of the silicon carbide substrate surface using atomic force microscopy. The formation of local buffer layer domains is identified as the origin of undesirably high step edges, in excellent agreement with the predictions of a general model of step dynamics. The applied polymer-assisted sublimation growth method demonstrates that the key principle to suppress this behavior is the uniform nucleation of the buffer layer. In this way, the silicon carbide surface is stabilized such that ultra-flat surfaces can be conserved during graphene growth on a large variety of silicon carbide substrate surfaces. The analysis of the experimental results describes different growth modes which extend the current understanding of epitaxial graphene growth by emphasizing the importance of buffer layer nucleation and critical mass transport processes.

§ INTRODUCTION

Clean crystal surfaces at high temperatures reveal characteristics that may be described by "annealing", "etching" and "growth" <cit.>. In the case of the two-component crystal SiC, these mechanisms may coincide since here the partial pressures of silicon- and carbon-containing vapor species need to be considered separately to describe the equilibrium conditions at the surface. Especially below temperatures of 2000 °C, the low vapor pressure of carbon-containing species compared to that of silicon species is the reason why, for example, thermal etching of the terrace edges may be accompanied by the growth of carbon domains <cit.>. In the case of annealing, thermally activated species reorganize on the surface without ultimately leaving the surface and enable the formation of an energetically preferred configuration <cit.>. In the case of clean SiC (0001) surfaces at temperatures of about 1400 °C in an argon atmosphere, this restructuring process leads to the formation of new surface steps but also to the formation of carbon domains due to preferred silicon desorption <cit.>. The first carbon layer is the so-called buffer layer that saturates about 1/3 of the dangling silicon bonds of the underlying substrate surface, forming the (6√(3) × 6√(3))R30° reconstruction <cit.>. Depending on the process parameters and substrate properties, these morphological changes are often accompanied by the formation of terrace structures with heights of several nanometers, so-called giant steps. Especially substrates with a larger miscut angle (≥0.2°) or those that were initially hydrogen-etched are known to support giant step bunching <cit.>. Graphene formation across giant steps is typically described by step-flow growth, leading to the formation of monolayer domains on the terraces but also to continuous multilayer domains along the edges <cit.>.
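As an aside, the role of the 1 bar argon ambient invoked below can be estimated from elementary kinetic theory. The following minimal sketch computes the mean free path of a desorbing species in argon, λ = k_B T/(√2 π d² p); the collision diameter is a rough placeholder. The result of a few hundred nanometers is consistent with the Knudsen-layer thickness quoted in the following discussion.

import math

def mean_free_path(T_K, p_Pa, d_m):
    """Kinetic-theory mean free path: lambda = k_B T / (sqrt(2) pi d^2 p)."""
    k_B = 1.380649e-23  # J/K
    return k_B * T_K / (math.sqrt(2) * math.pi * d_m**2 * p_Pa)

# 1 bar argon at a typical annealing temperature; d ~ 0.36 nm is a rough
# effective collision diameter (placeholder) for Ar and Si vapor species.
lam = mean_free_path(T_K=1400 + 273.15, p_Pa=1.0e5, d_m=0.36e-9)
print(f"mean free path ~ {lam * 1e9:.0f} nm")  # roughly 400 nm, i.e., a few hundred nm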
Most of the published experiments on epitaxial graphene growth solely focus on the development of the graphene layer and presume, to explain their proposed models, a SiC surface that is already reconstructed by a buffer layer <cit.>. However, this cannot adequately describe the dependency of giant step formation on process parameters such as the pressure of the ambient inert gas, heating rate, temperature, and the properties of the starting surface <cit.>. Some recent studies show that substrates may be processed such that low steps (e.g., 0.75 nm) are conserved by employing substrates with a small miscut angle (≤0.1°), by suitable annealing sequences, or by polymer deposition for improved buffer layer growth <cit.>.

While the driving force of faceting of 6H-SiC is known to increase as a function of the miscut angle <cit.>, understanding the reasons for step bunching during epitaxial graphene growth requires further analysis of the role of carbon layers and the involved mass transport mechanisms. The present work focuses on the morphological changes that occur during the restructuring process of the surface to identify critical aspects which determine the quality of the final graphene sample. Figure <ref> shows typical mass transport mechanisms at the interface of the SiC crystal surface during the initial stage of surface restructuring and buffer layer formation. Following the descriptions in the literature, the equilibrium conditions at the surface are typically described by local reservoirs and sinks leading to growth and etching <cit.>. Here, step edges, as well as the vapor/crystal interface, are assigned as silicon and carbon reservoirs, while nuclei and the ambient atmosphere may represent sinks. The basic idea of the model is to differentiate between local mass transport processes by surface diffusion (1a, 1b) and so-called global mass transport processes via desorption and adsorption of silicon and carbon species through the vapor phase (2a, 2b). For epitaxial graphene growth, these mechanisms are often manipulated by affecting the mean free path length and partial pressures at the crystal/vapor interface using an inert argon gas atmosphere <cit.> or by confinement control <cit.>. The red-shaded layer in Figure <ref> represents the Knudsen layer, which is the highly non-equilibrium interface region in which direct interactions between gas molecules and the surface dominate <cit.>. In the case of 1 bar argon pressure, the thickness of the Knudsen layer is estimated to be a few hundred nanometers <cit.>. The partial pressures of mostly Si and small quantities of Si_2C and SiC_2 species forming above the SiC surface at temperatures below 2000 °C are known to significantly shift the growth temperature of the buffer layer and graphene towards higher values <cit.>.

The morphology study presented in this work identifies decisive mass transport and nucleation mechanisms that determine the step bunching behavior as well as the uniformity of the epitaxially grown carbon layers. The first two sets of experiments (Figure <ref>(a-b)) focus on the influence of the substrate miscut angles. Both surfaces are "as-delivered" with terrace structures of single SiC bilayers and corresponding step heights of 0.25 nm. The third set of experiments applies a "hydrogen-etched" surface (Figure <ref>(c)) with predefined steps and terraces having regular heights of 0.75 nm, which is a stable configuration of the SiC surface for the applied process parameters.
In the last set of experiments (Figure <ref>(d)), hydrogen-etched substrates are treated with a polymer to investigate the impact of an additional carbon source on the growth behavior.

§ SAMPLE PREPARATION

The substrates were cut from semi-insulating epi-ready 6H-SiC (0001) wafers. The two wafers had different miscut angles of 0.05° (small-miscut) and 0.37° (large-miscut) with respect to the (0001) crystal plane. The substrates used in the experiments presented in Figure <ref>-<ref> were initially prepared by hydrogen etching at temperatures of 1400 °C and 1200 °C. Etching of the Si-face at 1200 °C leads to step heights of 0.25 nm and 0.5 nm. This is only slightly higher than the step height of "as-delivered" surfaces (Figure <ref>(a)) and a noticeable improvement compared to the 0.75 nm steps that are typically obtained by the standard etching procedure at 1400 °C (Figure <ref>(a)). After etching, the samples were subjected to post-annealing at 1175 °C for at least 30 min to desorb adsorbed hydrogen from the SiC substrate, which is accompanied by a further restructuring of the surface, as shown in Figure <ref>(a). Additional information about hydrogen etching and post-annealing is given in the supplementary information. Polymer-assisted sublimation growth (PASG) was applied to support uniform buffer layer nucleation <cit.>. The deposition of polymer adsorbates was realized by liquid phase deposition (LPD) of AZ5214E photoresist (see ref. <cit.>). To control the size distribution of the adsorbate with heights ≤2 nm shown in Figure <ref>(a), the sample was cleaned in an ultrasonic bath of isopropanol. The higher adsorbate density used for the experiment given in Figure <ref>(c) was realized by applying the procedure for the moderate density in the first step and by spin-coating of a weak solution of AZ5214E photoresist (6000 rpm; four droplets from the pipette dissolved in 50 ml of isopropanol) in the second step. The samples were introduced into the inductively heated hot-wall reactor, which is typically evacuated to a base pressure of ≤10^-6 mbar before starting the process. The buffer layer samples were processed by annealing at 1400 °C in argon at atmospheric pressure. To initiate graphene growth, the temperature was further increased to 1750 °C for 6 minutes.

§ GIANT STEP BUNCHING INDUCED BY THE BUFFER LAYER RECONSTRUCTION

In the first experiment, the beginning of giant step formation on the SiC surface during high-temperature annealing (1400 °C, 15 min) in a 1 bar argon atmosphere is studied on as-delivered substrates with a relatively large miscut angle, as sketched in Figure <ref>(a). The atomic force microscopy (AFM) images in Figure <ref> show three different kinds of terraces. In the magnified topographic images in Figure <ref>(a-b), one can identify narrow-stepped regions corresponding to terraces with a width of ≈100 nm and a height of 0.75 nm (see profile in Figure <ref>(b)) surrounding giant steps of different shapes. While the configuration of the narrow steps corresponding to repeated bunches of three SiC crystal layers is stable during the initial phase of the restructuring, giant step formation is also observed at some sites. The larger terraces are up to 20 nm high, a few micrometers wide, and several tens of micrometers long. Less developed giant steps such as those shown in Figure <ref>(b) are a few micrometers long and less than 0.5 μm wide. Here, the step formation seems to be connected to a particle-like topographic signal in the center.
This situation with terraces at different development stages is the beginning of completely giant-stepped surfaces, as observed after graphene growth, Figure <ref>(b). The AFM phase contrast in the inset of Figure <ref>(b) and in Figure <ref>(c) indicates a clear correlation between the light contrast and giant step formation, while the darker contrast corresponds to regions with narrow steps. Raman measurements at such sites were performed with a spot size smaller than 1 μm to identify the reason for the distinctive signals. The two Raman spectra in the inset of Figure <ref>(c) are difference spectra, which were obtained by subtracting the spectrum of an unprocessed SiC reference sample. The measurements reveal the characteristic signal of the buffer layer on broad terraces (red spectrum), while narrow-stepped terrace regions show no Raman signal (blue spectrum) other than that of clean SiC. This surface condition describes the initiation of giant step bunching at 1400 °C and demonstrates that uncovered narrow steps become unstable once local buffer layer nucleation occurs.

This behavior is explained by the general model developed by Jeong and Weeks. It states that the energetically favorable state of a new surface reconstruction is an important driving mechanism for step enlargement <cit.>. Our experimental results show that in the case of the SiC surface this reconstruction is represented by the buffer layer. Figure <ref> depicts the top view of representative terraces and the corresponding side view of the step profiles created from the profile lines P1 (across narrow steps) and P2 (across the giant step). The drawings are derived from the AFM measurements marked in Figure <ref>(a). The depicted terrace regions covered by the buffer layer (red-shaded area) visualize the correlation between the locally formed buffer layer and step enlargement. The process of giant step bunching can be divided into two stages: the initial phase of step nucleation (Figure <ref>(a)) and the developed phase of continued growth of giant steps (Figure <ref>(b)). Despite the long annealing time of 15 min, many of the evolving broader terraces did not reach the stage of fully developed giant steps, which suggests that the mechanisms in the initial growth stage are relatively slow. Regarding the model of Jeong and Weeks, this is typically the case as long as a terrace is not wide enough to create a so-called "critical nucleus" <cit.>. On the SiC surface, one needs to distinguish between (i) a growing buffer layer domain that slowly forces a widening of the terrace in the first stage (Figure <ref>(a)) and (ii) the relatively fast growth of giant steps in the second stage once the critical width is reached (Figure <ref>(b)). Both processes (buffer layer and giant step formation) have their individual critical nucleus size. Critical nuclei for forming local buffer layer domains (leading to stage (i)) are expected to be sometimes related to surface contamination, e.g., particles remaining after cleaning (derived from Figure <ref>(b)). A giant step of critical size as depicted in Figure <ref>(b) has an estimated step height of at least 3 unit cell heights (4.5 nm), which corresponds to a width of ≥0.6 μm in the case of the large-miscut substrate (derived from Figure <ref>).
This is understandable since the much higher vapor pressure of silicon supports enhanced silicon desorption, while the low vapor pressure of carbon species implies a relatively low desorption rate, leading to carbon enrichment. Thus, the surface conditions during buffer layer growth may be described by nucleation in the presence of a "sea" of diffusing carbon adatoms. The growth of the buffer layer islands causing giant step bunching seems to suppress the formation of others in their vicinity within a distance of several micrometers (approximately 3 to 4 μm in Figure <ref>). In the literature, this behavior is described as being typical when a single nucleus locally reduces the level of supersaturation, thus preventing the formation of other stable domains in the nearby region <cit.>. Considering a mean terrace width of about 0.1 μm, this implies transport of carbon species across several step edges.

The nucleation behavior and the uncorrelated and correlated motion of steps during giant step formation suggest the presence of mass transport processes that match the description of the general model presented in Figure 1. Depending on the development stage of the SiC terraces, the characteristics of both local and global transport mechanisms were identified for the samples annealed in an argon atmosphere. Evidence of global transport via the crystal/vapor interface is given by the step/terrace structure during the initial phase of giant step formation, as depicted in Figure <ref>(a). Here, the growth of a surface-reconstruction-induced giant step is uncorrelated with the shape of neighboring terraces due to mass transport from distant terraces via desorption and adsorption. The surface profiles P1 (blue profile) across the narrow steps and P2 (green line) across the giant step on the right side of Figure <ref>(a) indicate that a substantial amount of material is transported towards the nucleation site via the vapor phase, denoted by ṁ. The material to create the giant step is indicated by the green-shaded area, which results from the overlap of P1 and P2. A different situation was identified in the case of developed giant steps, as depicted in Figure <ref>(b). Here, the shape and width of uncovered neighboring terraces correlate with the growth of the giant step, which is characteristic of local mass transport processes. In this case, the mass transport ṁ mainly occurs due to surface diffusion between neighboring step edges. Due to different retraction velocities, the narrow terraces close to the giant step become broader than more distant ones. This situation can also be understood from the corresponding profile lines P1 (blue line) across the narrow steps and P2 (green line) across the giant step shown on the right side of Figure <ref>(b). In this case, the material to create the volume of the giant step (green shade) is obtained via surface diffusion from neighboring uncovered terraces (blue shade). Note that for other profile positions (not shown) the blue-shaded area was also slightly smaller than the green area, which implies an additional contribution of global transport processes in the case of developed steps.
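The critical nucleus sizes quoted above follow directly from the step geometry of a vicinal surface; a minimal plausibility check, assuming nothing beyond w = h/tan(α), reproduces the widths derived from the AFM data:

import math

def terrace_width(step_height_nm, miscut_deg):
    """Terrace width (in um) of a step of given height on a vicinal surface:
    w = h / tan(alpha)."""
    return step_height_nm * 1e-3 / math.tan(math.radians(miscut_deg))

# Critical nuclei observed in this work:
print(terrace_width(4.5, 0.37))   # ~0.70 um: large-miscut substrate, 3 unit cell heights
print(terrace_width(2.25, 0.05))  # ~2.6 um: small-miscut substrate, 1.5 unit cell heights

Both values agree with the critical widths of ≥0.6 μm (above) and about 2 μm (discussed below for the hydrogen-etched small-miscut substrate).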
Further details about giant step formation on small-miscut substrates are given in Figure <ref>(b).

§ STEP-NUCLEATION ON SMALL-MISCUT SUBSTRATE - A COMPARISON TO LARGER MISCUT ANGLES

One approach to slow the decomposition of the SiC surface and to prevent giant step bunching is to predefine continuous buffer layer domains using substrates with very low miscut angles <cit.>. In this experiment, the annealing procedure at 1400 °C in an argon atmosphere was applied using an as-delivered SiC substrate with a small miscut angle. The AFM image in Figure <ref>(a) shows the starting surface configuration with single SiC bilayer steps. After annealing, regular steps with heights of 0.75 nm evolve; see Figure <ref>(b). The phase image reveals that, in addition to SiC (dark contrast), small buffer layer domains (light contrast) form along the energetically preferred lower side of the edges. Steps lower than 0.75 nm were not observed since they are not stable and decompose quickly during the initial phase of surface restructuring.

The results imply that during annealing at 1400 °C in an argon atmosphere, silicon and carbon species may be exchanged between terraces, enabling the formation of a new step configuration and only partly contributing to the buffer layer growth. While this also holds true for the large-miscut substrate, the nucleation of the buffer layer is significantly different. This demonstrates that the miscut angle is a critical parameter that strongly influences the dynamics of the restructuring processes.

The simplified schematics of surface profiles of large (left) and small (right) miscut surfaces in Figure <ref>(c) describe the thermally activated conversion of the starting surface configuration (black outline, 0.25 nm steps) into the energetically preferred stable configuration (blue outline, 0.75 nm steps). The comparison shows that, as a result of the broader terraces on a small-miscut surface, a larger net mass transfer (gray-shaded area) is involved in completing the conversion. The higher amount of released carbon is expected to be the reason for the increased buffer layer coverage and the reduced tendency to form giant steps.

§ GIANT STEP BUNCHING AND BUFFER LAYER GROWTH ON HYDROGEN-ETCHED SUBSTRATE

In the following set of experiments, the initial phase of surface restructuring is skipped by using hydrogen-etched samples that were processed at the standard etching temperature of 1400 °C, as sketched in Figure <ref>(c). The corresponding topography in Figure <ref>(a) shows that in this way the starting surface of the "as-delivered" epi-ready wafer (see Figure <ref>(a)) is converted into regular bunches of three SiC bilayers before the buffer layer growth is initiated. By skipping the restructuring process, its influence on the buffer layer formation can be analyzed. First, a low-temperature post-annealing process step is necessary to desorb remaining hydrogen from the substrate after the hydrogen etching process. Without this process, any subsequent buffer layer or graphene growth step is significantly delayed or can be very nonuniform; see the supplementary data. The etching and post-annealing procedure results in an extremely clean, (1x1) reconstructed SiC surface with uniform 0.75 nm high steps, in agreement with AFM (Figure <ref>(a)) and low-energy electron diffraction (LEED) (data not shown).

Figure <ref>(b) shows the buffer layer growth on the hydrogen-etched substrate.
While most of the terraces remain uncovered and preserve their heights, nucleation of the buffer layer occurs locally at a few randomly distributed sites that are separated by several micrometers. The formation of these sites is accompanied by local step enlargement and further giant step bunching once the step height reaches about 1.5 unit cell heights (2.25 nm). This corresponds to an estimated critical width of about 2 μm for the small-miscut substrate, which is nearly a factor of four wider than the critical width identified on large-miscut substrates. From the comparison with the as-delivered small-miscut substrate without hydrogen etching (Figure <ref>), one can conclude that skipping the restructuring process of the SiC surface leads to non-uniform buffer layer nucleation during annealing at 1400 °C in an argon atmosphere. One possible reason for this effect is the absence of nuclei due to the extreme cleanness of the etched surface. Additionally, less carbon is released since the step configuration is already stable and no initial restructuring takes place. Therefore, the first domains must have formed by spontaneous nucleation once a critical level of carbon supersaturation was reached. As already found in the case of the large-miscut substrate (Figure <ref>), the formation of each stable domain seems to suppress the formation of others in its vicinity by locally reducing the level of supersaturation.

§ BUFFER LAYER NUCLEATION AND SUPPRESSION OF GIANT STEP FORMATION BY SURFACE POLYMER TREATMENT

The following set of experiments is a direct comparison to the results shown in Figure <ref>, using the same annealing procedure as well as hydrogen-etched SiC surfaces. The samples presented in Figure <ref> were additionally treated with a polymer adsorbate, which supports the buffer layer formation. The principle of this so-called polymer-assisted sublimation growth (PASG) method <cit.> is to control the amount of available carbon and related nuclei, as described in the sample preparation. The starting surface of the first sample shown in Figure <ref>(a) was modified with a moderate adsorbate density, resulting in increased structure heights of ≈2 nm (Figure <ref>(a)).

Indeed, after annealing at 1400 °C in an argon atmosphere, the topography of the two PASG buffer layer samples (Figure <ref>(b-c)) showed no giant step bunching, in contrast to the hydrogen-etched substrate without polymer treatment (Figure <ref>(b)). However, the results are remarkably similar to those observed on the as-delivered small-miscut substrate shown in Figure <ref>(b), even though the starting surfaces were different before annealing and only the etched surface was treated with the polymer. On both surfaces, uniformly distributed buffer layer domains formed along the lower side of each step edge, where they act as preferred nucleation sites. In a second experiment (Figure <ref>(c)), a hydrogen-etched sample was treated with a higher adsorbate density, as described in the sample preparation. Due to the larger amount of available carbon, the buffer layer coverage significantly increased.

These results show that the carbon which is released during the restructuring process of the SiC substrate may be substituted by carbon from the deposited polymer, leading to enhanced nucleation and forming a high density of buffer layer domains.
§ BUFFER LAYER NUCLEATION AND SUPPRESSION OF GIANT STEP FORMATION BY SURFACE POLYMER TREATMENT

The following set of experiments is a direct comparison to the results shown in Figure <ref>, using the same annealing procedure as well as hydrogen-etched SiC surfaces. The samples presented in Figure <ref> were additionally treated with a polymer adsorbate which supports the buffer layer formation. The principle of this so-called polymer-assisted sublimation growth (PASG) method <cit.> is to control the amount of available carbon and related nuclei, as described in the sample preparation. The starting surface of the first sample shown in Figure <ref>(a) was modified with a moderate adsorbate density, resulting in increased structure heights of ≈2 nm (Figure <ref>(a)).

Indeed, after annealing at 1400 °C in an argon atmosphere, the topography of the two PASG buffer layer samples (Figure <ref>(b-c)) showed no giant step bunching, in contrast to the hydrogen-etched substrate without polymer treatment (Figure <ref>(b)). However, the results are remarkably similar to those observed on the as-delivered small-miscut substrate shown in Figure <ref>(b), even though the starting surfaces were different before annealing and only the etched surface was treated with the polymer. On both surfaces uniformly distributed buffer layer domains formed along the lower side of each step edge and act as preferred nucleation sites. In a second experiment (Figure <ref>(c)), a hydrogen-etched sample was treated with a higher adsorbate density, as described in the sample preparation. Due to the larger amount of available carbon, the buffer layer coverage significantly increased.

These results show that the carbon released during the restructuring process of the SiC substrate may be substituted by carbon from the deposited polymer, leading to enhanced nucleation and a high density of buffer layer domains. The seeded growth prevents the spontaneous formation of large and separated domains and thus the concomitant giant step bunching.

§ STEP-EDGE AND TERRACE-NUCLEATION MODELS

From the different shapes and locations of the buffer layer domains at terrace or edge sites one can distinguish between two different mechanisms of 2D-island growth (Figure <ref>(b) and Figure <ref>(b-c)). Depending on the amount of available carbon, a transition from one type to the other can be obtained. The first is called "step-nucleation" (Figure <ref>(a-b)) and describes the case where small domains form along the lower side of step edges. Based on the theory of heteroepitaxy, nucleation at step edges implies near-equilibrium growth conditions with a diffusion length λ_dif larger than the terrace width (w_terrace ≈ 1 µm), such that the species have sufficient time to migrate on the terrace until a stable edge position is found <cit.>. The comparison of the morphology described in Figure <ref>(a) and that of the AFM measurements in Figure <ref>(b) shows that increasing the annealing time by 15 min leads to the formation of continuous buffer layer stripes (light phase contrast) along the lower side of the edges, from where they continue to grow onto the terrace. The measurements indicate that the growing domains are fed from decomposing step edges as well as from crystal layers underneath, since they are typically located slightly lower than the height level of the SiC surface, as indicated in the step profiles.

The second type of 2D-layer growth, shown in Figure <ref>(c-d), is called "terrace-nucleation" and describes the case where buffer layer domains form on the terraces and not along the step edges. Based on theory, nucleation on the terrace is obtained under supersaturated conditions once the effective diffusion length λ_dif becomes smaller than the terrace width <cit.>. On the SiC surface, the reduction of λ_dif leading to a transition from step-nucleation to terrace-nucleation of buffer layer nuclei is achieved by increasing the amount of deposited carbon. Figure <ref>(c) depicts the morphology identified in Figure <ref>(c) with randomly distributed domains. The sketched surface profiles (below) and the AFM images in Figure <ref>(d) show that for longer annealing times the distributed islands coalesce and form continuous domains which align along the upper side of the terrace edges.

§ GRAPHENE FORMATION ON HYDROGEN-ETCHED POLYMER-TREATED SURFACES

The significance of the predefined buffer layer for the graphene growth process can be well demonstrated using simultaneously processed hydrogen-etched surfaces. Compared to the etched surface in Figure <ref>(a), the step height of 0.75 nm was reduced by applying a lower etching temperature of 1200 °C, such that a sequence of two steps with heights of 0.25 nm and 0.5 nm is obtained, as shown in Figure <ref>(a). This incompletely restructured surface is expected to support the buffer layer growth, in agreement with the previous experiments. However, graphene growth without polymer treatment (Figure <ref>(b)) led to broad terrace structures with heights of up to 10 nm. Compared to the result after buffer layer growth (Figure <ref>(b)), giant step bunching is significantly enhanced during graphene growth at T = 1750 °C. Thus, the slightly reduced step heights and the buffer layer formed during annealing in argon atmosphere alone cannot sufficiently stabilize the surface. This behavior is typical for hydrogen-etched substrates <cit.>.
The previous experiments in Figure <ref>(b-c) showed that the application of polymer adsorbates successfully circumvents giant step bunching at 1400 °C. The PASG graphene sample shown in Figure <ref>(c) proves that this also holds true at high temperatures. The slightly reduced initial step heights of the etched surface of 0.25 nm and 0.5 nm lead to small but noticeable improvements in the final topography of the graphene sample, e.g., higher terrace uniformity and reduced step heights compared to a hydrogen-etched starting surface with exclusively 0.75 nm high steps (not shown). This is expected to be a consequence of the incomplete restructuring of the surface before the buffer layer and graphene growth processes are initiated. The realization of large-area growth of monolayer graphene with step heights ≤ 0.75 nm on hydrogen-etched substrates demonstrates the versatility of the PASG method and underlines the high importance of the buffer layer formation.

§ CONCLUSION

Understanding the morphological changes that occur during the restructuring of the SiC surface leads to significant improvements in the concept of large-area graphene growth. The identified critical point is the conversion of the starting surface into the next stable configuration, which is usually accompanied by 2D-island growth of the buffer layer. Three different growth mechanisms were identified, namely (i) surface-reconstruction-induced giant step formation, as well as buffer layer formation by (ii) step-nucleation or (iii) terrace-nucleation. Their occurrence is closely connected to the properties of the surface and the selected process conditions, e.g., the carbon supply. Surface-reconstruction-induced giant step bunching usually occurs when the initial restructuring of the starting surface involves only a small amount of carbon, as was demonstrated using clean as-delivered large-miscut or hydrogen-etched substrates. Here, the high-mobility mass transport between adjacent terraces (involving surface diffusion) as well as between distant terraces (involving desorption and adsorption) favors the formation of separated buffer layer domains. The observed mechanisms reveal that the widening of covered terraces is due to the locally created, energetically favorable surface configuration of the buffer layer reconstruction. This behavior can be prevented throughout the whole graphene formation process if a high density of small buffer layer domains is grown using suitable substrates and an additional carbon source. A moderate carbon supply favors step-nucleation along the lower side of step edges and usually results in a low coverage. Increasing the amount of available carbon, however, induces a transition from step-nucleation to terrace-nucleation. Terrace-nucleation under highly supersaturated conditions was identified as the ideal growth mode since it makes it possible to conserve the lowest possible step heights throughout the whole graphene process and extends the range of suitable substrates. The manipulation of the growth dynamics by seeded buffer layer growth suppresses critical step bunching mechanisms and makes uniform buffer layer growth possible also for hydrogen-etched substrates.
These results are of particular importance for the large-area growth of monolayer graphene on SiC with ultra-low steps.

§ ACKNOWLEDGEMENTS

We gratefully acknowledge funding by the School for Contacts in Nanosystems (NTH nano) and the support by the Braunschweig International Graduate School of Metrology (B-IGSM) and NanoMet.
http://arxiv.org/abs/1704.08078v3
{ "authors": [ "Mattias Kruskopf", "Klaus Pierz", "Davood Momeni Pakdehi", "Stefan Wundrack", "Rainer Stosch", "Andrey Bakin", "Hans W. Schumacher" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170426124509", "title": "Tailoring the SiC surface - a morphology study on the epitaxial growth of graphene and its buffer layer" }
ICNet for Real-Time Semantic Segmentation on High-Resolution Images

Hengshuang Zhao^1, Xiaojuan Qi^1, Xiaoyong Shen^2, Jianping Shi^3, Jiaya Jia^1,2
^1 The Chinese University of Hong Kong, ^2 Tencent Youtu Lab, ^3 SenseTime Research
{hszhao,xjqi,leojia}@cse.cuhk.edu.hk, [email protected], [email protected]

We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications, yet poses the fundamental difficulty of reducing a large portion of the computation needed for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve high-quality segmentation. Our system yields real-time inference on a single GPU card with decent-quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.

§ INTRODUCTION

Semantic image segmentation is a fundamental task in computer vision. It predicts dense labels for all pixels in the image, and is regarded as a very important task that can help deep understanding of scenes, objects, and humans. The development of recent deep convolutional neural networks (CNNs) has brought remarkable progress on semantic segmentation <cit.>. The effectiveness of these networks largely depends on sophisticated model design regarding depth and width, which has to involve many operations and parameters.

CNN-based semantic segmentation mainly exploits fully convolutional networks (FCNs). It is common wisdom now that an increase in accuracy almost always means more operations, especially for pixel-level prediction tasks like semantic segmentation. To illustrate this, we show in Fig. <ref>(a) the accuracy and inference time of different frameworks on the Cityscapes <cit.> dataset.

§.§.§ Status of Fast Semantic Segmentation

Contrary to the extraordinary development of high-quality semantic segmentation, research on making semantic segmentation run fast without sacrificing too much quality lags behind. This line of work is similarly important since it can inspire or enable many practical tasks in, for example, automatic driving, robotic interaction, online video processing, and even mobile computing, where running time becomes a critical factor in evaluating system performance.

Our experiments show that the high-accuracy methods ResNet38 <cit.> and PSPNet <cit.> take around 1 second to predict a 1024 × 2048 high-resolution image on one Nvidia TitanX GPU card during testing. These methods fall into the area illustrated in Fig. <ref>(a) with high accuracy and low speed. The recent fast semantic segmentation methods ENet <cit.> and SQ <cit.>, contrarily, take quite different positions in the plot. The speed is much improved, but accuracy drops: the final mIoUs are lower than 60%. These methods are located in the lower-right part of the figure. (In Fig. <ref>(a), blue entries are tested with downsampled images. Inference speed is reported for a single network forward pass, while the accuracy of several mIoU-aimed approaches (like PSPNet^⋆) may involve testing tricks like multi-scale inference and flipping, resulting in much more time.
See supplementary material for detailed information.)

§.§.§ Our Focus and Contributions

In this paper, we focus on building a practically fast semantic segmentation system with decent prediction accuracy. Our method is the first of its kind to be located in the top-right area shown in Fig. <ref>(a) and is one of the only two available real-time approaches. It achieves a decent trade-off between efficiency and accuracy.

Different from previous architectures, we give comprehensive consideration to the two seemingly conflicting factors of speed and accuracy. We first make an in-depth analysis of the time budget in semantic segmentation frameworks and conduct extensive experiments to demonstrate the insufficiency of intuitive speedup strategies. This motivates the development of the image cascade network (ICNet), a high-efficiency segmentation system with decent quality. It exploits the efficiency of processing low-resolution images and the high inference quality of high-resolution ones. The idea is to let low-resolution images go through the full semantic perception network first for a coarse prediction map. Then the proposed cascade feature fusion unit and cascade label guidance strategy integrate medium- and high-resolution features, which refine the coarse semantic map gradually. We make all our code and models publicly available[https://github.com/hszhao/ICNet]. Our main contributions and performance statistics are the following.

* We develop a novel and unique image cascade network for real-time semantic segmentation. It efficiently utilizes semantic information in low resolution along with details from high-resolution images.
* The developed cascade feature fusion unit together with cascade label guidance can recover and refine the segmentation prediction progressively with a low computation cost.
* Our ICNet achieves a 5× speedup of inference time, and reduces memory consumption by 5×. It can run at the high resolution of 1024 × 2048 at a speed of 30 fps while accomplishing high-quality results. It yields real-time inference on various datasets including Cityscapes <cit.>, CamVid <cit.> and COCO-Stuff <cit.>.

§ RELATED WORK

Traditional semantic segmentation methods <cit.> adopt handcrafted features to learn the representation. Recently, CNN-based methods have largely improved the performance.

§.§.§ High Quality Semantic Segmentation

FCN <cit.> is the pioneering work that replaced the last fully-connected layers in classification networks with convolution layers. DeepLab <cit.> and <cit.> used dilated convolution to enlarge the receptive field for dense labeling. Encoder-decoder structures <cit.> can combine the high-level semantic information from later layers with the spatial information from earlier ones. Multi-scale feature ensembles are also used in <cit.>. In <cit.>, conditional random fields (CRF) or Markov random fields (MRF) were used to model spatial relationships. Zhao et al. <cit.> used pyramid pooling to aggregate global and local context information. Wu et al. <cit.> adopted a wider network to boost performance. In <cit.>, a multi-path refinement network combined multi-scale image features. These methods are effective, but preclude real-time inference.

§.§.§ High Efficiency Semantic Segmentation

In object detection, speed became an important factor in system design <cit.>. The recent YOLO <cit.> and SSD <cit.> are representative solutions. In contrast, high-speed inference in semantic segmentation is under-explored. ENet <cit.> and <cit.> are lightweight networks.
These methods greatly raise efficiency at a notable sacrifice in accuracy.

§.§.§ Video Semantic Segmentation

Videos contain redundant information across frames, which can be utilized to reduce computation. The recent Clockwork approach <cit.> reuses feature maps given stable video input. Deep feature flow <cit.> is based on a small-scale optical flow network to propagate features from key frames to others. FSO <cit.> performs structured prediction with a dense CRF applied on optimized features to get temporally consistent predictions. NetWarp <cit.> utilizes the optical flow of adjacent frames to warp internal features across time in video sequences. We note that when a good-accuracy fast image semantic segmentation framework comes into existence, video segmentation will also benefit.

§ IMAGE CASCADE NETWORK

We start by analyzing the computation time budget of the different components of the high-performance segmentation framework PSPNet <cit.> with experimental statistics. Then we introduce the image cascade network (ICNet) as illustrated in Fig. <ref>, along with the cascade feature fusion unit and cascade label guidance, for fast semantic segmentation.

§.§ Speed Analysis

In convolution, a transformation function is applied to the input feature map V ∈ ℝ^c × h × w to obtain the output map U ∈ ℝ^c' × h' × w', where c, h and w denote the number of feature channels, the height and the width, respectively. The transformation V → U is achieved by applying c' 3D kernels K ∈ ℝ^c × k × k, where k × k (e.g., 3 × 3) is the kernel's spatial size. Thus the total number of operations O in a convolution layer is c'ck^2h'w'. The spatial size of the output map, h' and w', is related to the input size through the stride parameter s as h' = h/s, w' = w/s, making O ≈ c'ck^2hw/s^2.

The computation complexity is thus associated with the feature map resolution (e.g., h, w, s), the number of kernels and the network width (e.g., c, c'). Fig. <ref>(b) shows the time cost of two input resolutions in PSPNet50. The blue curve corresponds to high-resolution input with size 1024 × 2048 and the green curve to input with resolution 512 × 1024. Computation grows quadratically with image resolution. For either curve, feature maps in stage4 and stage5 have the same spatial resolution, i.e., 1/8 of the original input, but the computation in stage5 is four times heavier than that in stage4. This is because convolutional layers in stage5 double the number of kernels together with the number of input channels.
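To make this cost model concrete, the following back-of-the-envelope sketch (in Python; the channel numbers are illustrative assumptions, not the exact PSPNet50 configuration) evaluates O ≈ c'ck^2hw/s^2 and reproduces both observations: halving the input resolution quarters the cost of every layer, and doubling both channel counts between stages quadruples it.

```python
def conv_ops(c_in, c_out, k, h, w, s):
    """Approximate multiply-accumulate count of one convolution layer:
    c_out * c_in * k^2 * (h/s) * (w/s)."""
    return c_out * c_in * k * k * (h // s) * (w // s)

# Halving the input resolution divides the cost of every layer by four.
full = conv_ops(256, 256, 3, 1024, 2048, 8)
half = conv_ops(256, 256, 3, 512, 1024, 8)
print(full / half)    # -> 4.0

# Doubling kernels and input channels (as from stage4 to stage5) quadruples the cost.
stage5 = conv_ops(512, 512, 3, 1024, 2048, 8)
print(stage5 / full)  # -> 4.0
```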
§.§ Network Architecture

According to the above time budget analysis, we adopt intuitive speedup strategies in the experiments detailed in Sec. <ref>, including downsampling the input, shrinking feature maps and conducting model compression. The corresponding results show that it is very difficult to keep a good balance between inference accuracy and speed: the intuitive strategies are effective in reducing running time, but they yield very coarse prediction maps. On the other hand, directly feeding high-resolution images into a network is computationally prohibitive.

Our proposed image cascade network (ICNet) does not simply choose either way. Instead, it takes cascade image inputs (i.e., low-, medium- and high-resolution images), adopts the cascade feature fusion unit (Sec. <ref>) and is trained with cascade label guidance (Sec. <ref>). The new architecture is illustrated in Fig. <ref>. The input image with full resolution (e.g., 1024 × 2048 in Cityscapes <cit.>) is downsampled by factors of 2 and 4, forming the cascade inputs to the medium- and low-resolution branches.

Segmenting the high-resolution input directly with classical frameworks like FCN is time-consuming. To overcome this shortcoming, we obtain the semantic extraction using the low-resolution input, as shown in the top branch of Fig. <ref>. A 1/4-sized image is fed into PSPNet with downsampling rate 8, resulting in a 1/32-resolution feature map. To get high-quality segmentation, the medium- and high-resolution branches (middle and bottom parts in Fig. <ref>) help recover and refine the coarse prediction. Though some details are missing and blurry boundaries are generated in the top branch, it already harvests most semantic parts. Thus we can safely limit the number of parameters in both the middle and bottom branches. Lightweight CNNs (green dotted box) are adopted in the higher-resolution branches; the output feature maps of different branches are fused by the cascade feature fusion unit (Sec. <ref>) and trained with cascade label guidance (Sec. <ref>).

Although the top branch is based on a full segmentation backbone, the input resolution is low, resulting in limited computation. Even for PSPNet with 50+ layers, inference time and memory are 18 ms and 0.6 GB for the large images in Cityscapes. Because weights and computation (in 17 layers) can be shared between the low- and medium-resolution branches, only 6 ms is spent to construct the fusion map. The bottom branch has even fewer layers; although the resolution is high, inference only takes 9 ms. Details of the architecture are presented in the supplementary file. With all three branches, our ICNet becomes a very efficient and memory-friendly architecture that can achieve good-quality segmentation.

§.§ Cascade Feature Fusion

[Figure: Cascade feature fusion.]

To combine cascade features from different-resolution inputs, we propose a cascade feature fusion (CFF) unit as shown in Fig. <ref>. The input to this unit contains three components: two feature maps F_1 and F_2 with sizes C_1 × H_1 × W_1 and C_2 × H_2 × W_2 respectively, and a ground-truth label with resolution 1 × H_2 × W_2. F_2 has double the spatial size of F_1.

We first apply upsampling rate 2 on F_1 through bilinear interpolation, yielding the same spatial size as F_2. Then a dilated convolution layer with kernel size C_3 × 3 × 3 and dilation 2 is applied to refine the upsampled features. The resulting feature has size C_3 × H_2 × W_2. This dilated convolution combines feature information from several originally neighboring pixels. Compared with deconvolution, upsampling followed by dilated convolution only needs small kernels to harvest the same receptive field: to keep the same receptive field, deconvolution needs a larger kernel size than upsampling with dilated convolution (i.e., 7 × 7 vs. 3 × 3), which causes more computation.

For feature F_2, a projection convolution with kernel size C_3 × 1 × 1 is utilized to project F_2 so that it has the same number of channels as the output of F_1. Then two batch normalization layers are used to normalize the two processed features, as shown in Fig. <ref>. After an element-wise `sum' layer and a `ReLU' layer, we obtain the fused feature F_2' of size C_3 × H_2 × W_2. To enhance the learning of F_1, we use auxiliary label guidance on the upsampled feature of F_1.
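The CFF unit can be summarized in a few lines of code. The sketch below uses PyTorch purely for illustration (the paper's implementation is in Caffe); the channel sizes C_1, C_2, C_3 and the number of classes are free parameters, and the auxiliary classifier realizes the label guidance attached to the upsampled F_1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CascadeFeatureFusion(nn.Module):
    """Sketch of the CFF unit: bilinearly upsample the low-resolution branch,
    refine it with a dilated 3x3 convolution, project the high-resolution
    branch with a 1x1 convolution, then add and apply ReLU."""
    def __init__(self, c1, c2, c3, num_classes):
        super().__init__()
        self.conv_low = nn.Conv2d(c1, c3, kernel_size=3, padding=2, dilation=2, bias=False)
        self.bn_low = nn.BatchNorm2d(c3)
        self.conv_high = nn.Conv2d(c2, c3, kernel_size=1, bias=False)
        self.bn_high = nn.BatchNorm2d(c3)
        # Auxiliary classifier on the upsampled low-res feature (label guidance).
        self.aux_cls = nn.Conv2d(c1, num_classes, kernel_size=1)

    def forward(self, f1, f2):
        f1_up = F.interpolate(f1, scale_factor=2, mode='bilinear', align_corners=True)
        fused = F.relu(self.bn_low(self.conv_low(f1_up)) + self.bn_high(self.conv_high(f2)))
        aux_logits = self.aux_cls(f1_up)   # supervised by a downscaled label map
        return fused, aux_logits
```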
§.§ Cascade Label Guidance

To enhance the learning procedure in each branch, we adopt a cascade label guidance strategy. It utilizes different-scale (e.g., 1/16, 1/8, and 1/4) ground-truth labels to guide the learning stages of the low-, medium- and high-resolution inputs. Let there be 𝒯 branches (i.e., 𝒯 = 3) and 𝒩 categories. In branch t, the predicted feature map ℱ^t has spatial size 𝒴_t × 𝒳_t. The value at position (n,y,x) is ℱ^t_n,y,x. The corresponding ground-truth label for 2D position (y,x) is n̂. To train ICNet, we append a weighted softmax cross-entropy loss in each branch with loss weight λ_t. Thus we minimize the loss function ℒ defined as

ℒ = - ∑_t=1^𝒯 λ_t (1/(𝒴_t 𝒳_t)) ∑_y=1^𝒴_t ∑_x=1^𝒳_t log( e^ℱ^t_n̂,y,x / ∑_n=1^𝒩 e^ℱ^t_n,y,x ).

In the testing phase, the low- and medium-resolution guidance operations are simply abandoned and only the high-resolution branch is retained. This strategy makes gradient optimization smoother for easy training. With more powerful learning ability in each branch, the final prediction map is not dominated by any single branch.
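Assuming per-branch prediction maps and an integer label map, the equation above amounts to a λ_t-weighted sum of per-pixel softmax cross-entropy terms, one per branch, each computed against the ground truth resized to that branch's resolution. A minimal PyTorch-style sketch (the branch weights follow the values used later in the implementation details):

```python
import torch
import torch.nn.functional as F

def cascade_loss(branch_logits, label, lambdas=(0.4, 0.4, 1.0)):
    """Weighted softmax cross-entropy summed over branches.
    branch_logits: list of (N, num_classes, Y_t, X_t) tensors;
    label: (N, H, W) integer ground-truth map."""
    total = 0.0
    for logits, lam in zip(branch_logits, lambdas):
        # Nearest-neighbour downsampling of the label map to the branch resolution.
        y = F.interpolate(label[:, None].float(), size=logits.shape[-2:], mode='nearest')
        total = total + lam * F.cross_entropy(logits, y[:, 0].long())
    return total
```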
§ STRUCTURE COMPARISON AND ANALYSIS

Now we illustrate the difference of ICNet from existing cascade architectures for semantic segmentation. Typical structures in previous semantic segmentation systems are illustrated in Fig. <ref>. Our proposed ICNet (Fig. <ref>(d)) is by nature different from the others. Previous frameworks all require relatively intensive computation given the high-resolution input. In our cascade structure, by contrast, only the lowest-resolution input is fed into the heavy CNN, with much reduced computation, to get the coarse semantic prediction. The higher-resolution inputs are designed to progressively recover and refine the prediction with respect to blurred boundaries and missing details, and are thus processed by lightweight CNNs. The newly introduced cascade feature fusion unit and cascade label guidance strategy integrate medium- and high-resolution features to refine the coarse semantic map gradually. With this special design, ICNet achieves high-efficiency inference with reasonable-quality segmentation results.

§ EXPERIMENTAL EVALUATION

Our method is effective for high-resolution images. We evaluate the architecture on three challenging datasets: the urban-scene understanding dataset Cityscapes <cit.> with image resolution 1024 × 2048, CamVid <cit.> with image resolution 720 × 960, and the stuff understanding dataset COCO-Stuff <cit.> with image resolution up to 640 × 640. There is a notable difference between COCO-Stuff and the object/scene segmentation datasets VOC2012 <cit.> and ADE20K <cit.>. In the latter two sets, most images are of low resolution (e.g., 300 × 500) and can already be processed quickly, while in COCO-Stuff most images are larger, making it more difficult to achieve real-time performance.

In the following, we first show intuitive speedup strategies and their drawbacks, then reveal our improvement with quantitative and visual analysis.

§.§ Implementation Details

We conduct experiments based on the Caffe platform <cit.>. All experiments are run on a workstation with Maxwell TitanX GPU cards under CUDA 7.5 and cuDNN V5. Our testing uses only one card. To measure the forward inference time, we use the tool `Caffe time' and set the number of repeated iterations to 100 to eliminate accidental errors during testing. All the parameters in batch normalization layers are merged into the neighboring front convolution layers. For the training hyper-parameters, the mini-batch size is set to 16. The base learning rate is 0.01 and the `poly' learning rate policy is adopted with power 0.9, together with the maximum iteration number set to 30K for Cityscapes, 10K for CamVid and 30K for COCO-Stuff. Momentum is 0.9 and weight decay is 0.0001. Data augmentation contains random mirroring and random resizing between 0.5 and 2. The auxiliary loss weights are empirically set to 0.4 for λ_1 and λ_2 and 1 for λ_3 in Eq. <ref>, as adopted in <cit.>. For evaluation, both the mean of class-wise intersection over union (mIoU) and the network forward time (Time) are used.

§.§ Cityscapes

We first apply our framework to the recent urban scene understanding dataset Cityscapes <cit.>. This dataset contains high-resolution 1024 × 2048 images, which makes it a big challenge for fast semantic segmentation. It contains 5,000 finely annotated images split into training, validation and testing sets with 2,975, 500, and 1,525 images, respectively. The dense annotation contains 30 common classes of road, person, car, etc., 19 of which are used in training and testing.

§.§.§ Intuitive Speedup

According to the time complexity shown in Eq. (<ref>), we attempt intuitive speedups in three aspects, namely downsampling the input, downsampling features, and model compression.

Downsampling Input. Image resolution is the most critical factor that affects running speed, as analyzed in Sec. <ref>. A simple approach is to use a small-resolution image as input. We test downsampling the image with ratios 1/2 and 1/4, feeding the resulting images into PSPNet50, and directly upsampling the prediction results to the original size. This approach empirically has several drawbacks, as illustrated in Fig. <ref>. With scaling ratio 0.25, although the inference time is reduced by a large margin, the prediction map is very coarse, missing many small but important details compared to the higher-resolution prediction. With scaling ratio 0.5, the prediction recovers more information than in the 0.25 case. Unfortunately, the person and traffic light far from the camera are still missing and object boundaries are blurred. To make things worse, the running time is still too long for a real-time system.

Downsampling Feature. Besides directly downsampling the input image, another simple choice is to scale down the feature map by a large ratio in the inference process. FCN <cit.> downsampled it by a factor of 32 and DeepLab <cit.> by a factor of 8. We test PSPNet50 with downsampling ratios 1:8, 1:16 and 1:32 and show the results in the left of Table <ref>. A smaller feature map can yield faster inference at the cost of sacrificing prediction accuracy. The lost information is mostly detail contained in low-level layers. Also, even with the smallest resulting feature map under ratio 1:32, the system still takes 131 ms for inference.

Model Compression. Apart from the above two strategies, another natural way to reduce network complexity is to trim the kernels in each layer. Model compression has become an active research topic in recent years due to high demand. The solutions <cit.> can reduce a complicated network to a lighter one under user-controlled accuracy loss. We adopt the recent effective classification-model compression strategy presented in <cit.> for our segmentation models. For each filter, we first calculate the sum of its kernel ℓ_1-norms. Then we sort these sums in descending order and keep only the most significant filters. Disappointingly, this strategy does not meet our requirement either, given the compressed models listed in the right of Table <ref>. Even by keeping only a quarter of the kernels, the inference time is still too long. Meanwhile, the corresponding mIoU is intolerably low – it already cannot produce reasonable segmentation for many applications.
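The ℓ_1-norm filter ranking used in this compression baseline is straightforward to express. The sketch below (numpy; the weight layout (c_out, c_in, k, k) is an assumption) keeps the top fraction of filters of a single layer; note that in a full network the pruned output channels must also be removed from the input channels of the following layer.

```python
import numpy as np

def prune_filters_l1(weight, keep_ratio):
    """Rank filters by the L1-norm of their kernels and keep the most
    significant fraction, as in the compression baseline above."""
    scores = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    order = np.argsort(scores)[::-1]                    # descending L1-norm
    kept = np.sort(order[: int(len(order) * keep_ratio)])
    return weight[kept], kept                           # pruned layer + surviving indices

w = np.random.randn(64, 32, 3, 3)
w_pruned, idx = prune_filters_l1(w, keep_ratio=0.25)    # keep a quarter of the kernels
print(w_pruned.shape)                                   # -> (16, 32, 3, 3)
```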
§.§.§ Cascade Branches

We perform an ablation study on the cascade branches; the results are shown in Table <ref>. Our baseline is the half-compressed PSPNet50, which yields 170 ms inference time with mIoU reduced to 67.9%. This indicates that model compression alone has almost no chance of achieving real-time performance while keeping decent segmentation quality. Based on this baseline, we test our ICNet with different branches. To show the effectiveness of the proposed cascade framework, we denote the outputs of the low-, medium- and high-resolution branches as `sub4', `sub24' and `sub124', where the numbers stand for the information used. The setting `sub4' only uses the top branch with the low-resolution input; `sub24' and `sub124' contain the top two and all three branches, respectively.

We test these three settings on the validation set of Cityscapes and list the results in Table <ref>. With just the low-resolution input branch, although the running time is short, the result quality drops to 59.6%. Using two and three branches, we increase mIoU to 66.5% and 67.7%, respectively, while the running time only increases by 7 ms and 8 ms. Note that our segmentation quality stays nearly the same as the baseline, and yet inference is 5.2× faster. The memory consumption is significantly reduced, by 5.8×.

§.§.§ Cascade Structure

We also perform ablation studies on the cascade feature fusion unit and the cascade label guidance. The results are shown in Table <ref>. Compared to deconvolution layers with 3 × 3 and 5 × 5 kernels, the cascade feature fusion unit achieves higher mIoU at similar inference efficiency. Compared to a deconvolution layer with a larger 7 × 7 kernel, the mIoU performance is close, while the cascade feature fusion unit yields faster processing. Without the cascade label guidance, the performance drops a lot, as shown in the last row. [Table note: a single network forward pass costs 1288 ms (with TitanX Maxwell; 680 ms on Pascal), while mIoU-aimed testing for boosting performance (81.2% mIoU) costs 51.0 s.]

§.§.§ Methods Comparison

We finally list the mIoU performance and inference time of our proposed ICNet on the test set of Cityscapes. It is trained on the training and validation sets of Cityscapes for 90K iterations. Results are included in Table <ref>. The reported mIoUs and running times of the other methods are taken from the official Cityscapes leaderboard. For fairness, we do not include methods that do not report running time. Many of these methods may have adopted time-consuming multi-scale testing for the best result quality.

Our ICNet yields an mIoU of 69.5%. It is even quantitatively better than several methods that do not care about speed, and it is about 10 points higher than ENet <cit.> and SQ <cit.>. Training with both fine and coarse data boosts the mIoU to 70.6%. ICNet is a 30 fps method on 1024 × 2048 resolution images using only one TitanX GPU card. A video example can be accessed through the link[https://youtu.be/qWl9idsCuLQ].

§.§.§ Visual Improvement

Figs. <ref> and <ref> show visual results of ICNet on Cityscapes. With the proposed gradual feature fusion steps and cascade label guidance structure, we produce decent prediction results. Intriguingly, the output of the `sub4' branch can already capture most of the semantically meaningful objects, but the prediction is coarse due to the low-resolution input.
It misses a few small but important regions, such as poles and traffic signs. With the help of medium-resolution information, many of these regions are re-estimated and recovered, as shown in the `sub24' branch. It is noticeable that objects far from the camera, such as a few persons, are still missing and object boundaries are blurry. The `sub124' branch with full-resolution input helps refine these details – the output of this branch is undoubtedly the best. This manifests that the different-resolution information is properly exploited in this framework.

§.§.§ Quantitative Analysis

To further understand the accuracy gain in each branch, we quantitatively analyze the predicted label maps based on connected components. For each connected region R_i, we calculate the number of pixels it contains, denoted as S_i. Then we count the number of pixels correctly predicted in the corresponding map as s_i. The predicted region accuracy p_i in R_i is thus s_i/S_i. According to the region size S_i, we project these regions onto a histogram ℋ with interval 𝒦 and average the related region accuracies p_i as the value of the current bin.

In the experiments, we set the number of histogram bins to 30 and the interval 𝒦 to 3,000. The histogram thus covers region sizes S_i between 1 and 90K; we ignore regions with size exceeding 90K. Fig. <ref> shows the accuracy change in each bin. The blue histogram stands for the difference between `sub24' and `sub4', while the green histogram shows the difference between `sub124' and `sub24'. For both histograms, the large differences are mainly in the front bins, with small region sizes. This manifests that small-region objects like traffic lights and poles are much improved in our framework. The front changes are large positives, proving that `sub24' can restore much information on small objects on top of `sub4'. `sub124' is also very useful compared to `sub24'.
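One reading of this procedure is sketched below with numpy/scipy: take the connected components of the label map class by class, compute each region's accuracy p_i = s_i/S_i, and average the accuracies inside size bins of width 𝒦 = 3,000. Whether regions are extracted from the prediction or the ground truth, and the exact pixel connectivity, are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def region_accuracy_histogram(pred, gt, bin_width=3000, num_bins=30):
    """Average per-region accuracy p_i = s_i / S_i, binned by region size S_i."""
    sums, counts = np.zeros(num_bins), np.zeros(num_bins)
    for cls in np.unique(gt):
        regions, n = ndimage.label(gt == cls)      # connected components of one class
        for r in range(1, n + 1):
            mask = regions == r
            S = int(mask.sum())
            if S >= bin_width * num_bins:          # ignore regions larger than 90K pixels
                continue
            sums[S // bin_width] += (pred[mask] == cls).mean()
            counts[S // bin_width] += 1
    return sums / np.maximum(counts, 1)            # mean accuracy per size bin
```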
§.§ CamVid

The CamVid dataset <cit.> contains images extracted from high-resolution video sequences with resolution up to 720 × 960. For easy comparison with prior work, we adopt the split of Sturgess et al. <cit.>, which partitions the dataset into 367, 100, and 233 images for training, validation and testing, respectively. 11 semantic classes are used for evaluation.

The testing results are listed in Table <ref>; our base model is the uncompressed PSPNet50. ICNet achieves much faster inference than the other methods at this high resolution, reaching a real-time speed of 27.8 fps, 5.7 times faster than the second-fastest method and 5.1 times faster than the base model. Apart from its high efficiency, it also accomplishes high-quality segmentation. Visual results are provided in the supplementary material.

§.§ COCO-Stuff

COCO-Stuff <cit.> is a recently labeled dataset based on MS-COCO <cit.> for stuff segmentation in context. We evaluate ICNet following the split in <cit.>, in which 9K images are used for training and another 1K for testing. This dataset is much more complex, with multiple categories – up to 182 classes are used for evaluation, including 91 thing and 91 stuff classes.

Table <ref> shows the testing results. ICNet still performs satisfyingly regarding common thing and stuff understanding. It is more efficient and accurate than modern segmentation frameworks such as FCN and DeepLab. Compared to our baseline model, it achieves a 5.4× speedup. Visual predictions are provided in the supplementary material.

§ CONCLUSION

We have proposed a real-time semantic segmentation system, ICNet. It incorporates effective strategies to accelerate network inference without sacrificing much performance. The major contributions include the new framework for saving operations across multiple resolutions and the powerful fusion unit. We believe the optimal balance of speed and accuracy makes our system important, since it can benefit many other tasks that require fast scene and object segmentation. It greatly enhances the practicality of semantic segmentation in other disciplines.
http://arxiv.org/abs/1704.08545v2
{ "authors": [ "Hengshuang Zhao", "Xiaojuan Qi", "Xiaoyong Shen", "Jianping Shi", "Jiaya Jia" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170427130249", "title": "ICNet for Real-Time Semantic Segmentation on High-Resolution Images" }
§ INTRODUCTION

Probing the variation of nature's fundamental constants (such as the fine-structure constant, α) through the analysis of absorption spectra is one of the most direct ways of testing the universality of physical laws. Interactive methods for analysing high-resolution quasar spectra of heavy element absorption systems are complex and require considerable expertise. Recently, we presented a new "artificial intelligence" method for the analysis of high-resolution absorption spectra <cit.>. Our new method unifies three established numerical methods: a genetic algorithm (GVPFIT); non-linear least-squares optimisation with parameter constraints (VPFIT); and Bayesian Model Averaging (BMA). This method requires evaluation before being applied to the analysis of large sets of absorption spectra. In particular, it is unknown how the accuracy of GVPFIT and BMA is affected by the complexity of an absorption system's velocity structure.

We investigate the performance of GVPFIT and BMA over a broad range of velocity structure complexities using synthetic spectra. This is the first time a sample of synthetic spectra has been used to investigate how we analyse quasar absorption spectra. Using synthetic spectra, we can provide stringent tests of the modelling process. When analysing spectral data, one cannot uniquely determine the velocity structure of the absorbing cloud and the physical parameters are unknown. In contrast, with synthetic spectra, the underlying (real) velocity structure and input parameters are uniquely determined. By directly comparing our models, parameter estimates, and statistical uncertainties with the underlying (real) velocity structures and input values, we can establish the stability, precision and accuracy of our approach over a broad range of complexity levels in the velocity structure. Such an investigation was previously infeasible due to the time-consuming nature of the interactive method of absorption spectra analysis.

§ METHOD

We previously applied GVPFIT and BMA to the analysis of a high signal-to-noise, high spectral resolution and complex absorption system at z_abs = 1.839 towards J110325-264515 <cit.>. In that analysis, GVPFIT was iterated for over 80 generations and generated a large database of candidate models over a broad range of model complexity. From this large database, we selected 37 models, each corresponding to the minimum-AICc model with 1 through 37 velocity components, where AICc is the Akaike Information Criterion corrected for small sample size (see <cit.> and Equation (<ref>)). We go up to 37 because a 37-component model corresponded to the minimum-AICc model for the real data.

For each of these 37 models, we utilised the Voigt profile parameters to generate a synthetic spectrum, with Δα/α set to zero. The appropriate VPFIT[R. F. Carswell and J. K. Webb, 2015, <http://www.ast.cam.ac.uk/ rfc/vpfit.html>.] output was applied to generate the synthetic models, and hence we convolved the synthetic spectra using the same instrumental profile as the real spectra. Using the actual error arrays from the real spectra, we assigned a Gaussian standard deviation to each pixel and used the Box–Muller transform approach <cit.> to add noise to the synthetic spectra. The real spectra comprise multiple observations at different epochs and instrumental settings (see <cit.>, Table 2); we generated synthetic spectra corresponding to all of these for each of the selected 37 models. Thus, the synthetic spectra emulate the characteristics of the real spectra of this absorption system.
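For concreteness, the noise step can be written down directly from the description above: each pixel of the noiseless synthetic flux receives a Gaussian deviate, generated with the Box–Muller transform, scaled by that pixel's entry in the real error array. A minimal numpy sketch (variable names are ours):

```python
import numpy as np

def add_noise_box_muller(flux, sigma, seed=0):
    """Add per-pixel Gaussian noise to a synthetic spectrum.
    flux: noiseless model flux on the instrument's pixel grid;
    sigma: per-pixel standard deviations from the real error array."""
    rng = np.random.default_rng(seed)
    u1 = 1.0 - rng.uniform(size=flux.shape)        # in (0, 1], avoids log(0)
    u2 = rng.uniform(size=flux.shape)
    gauss = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)  # N(0, 1) deviates
    return flux + sigma * gauss
```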
We then treated the synthetic spectra described above as if they were real spectra, following <cit.>. To each spectrum we applied GVPFIT, generating a large set of candidate models. The synthetic spectra were both created and fitted using turbulent b-parameters and the same atomic data. We then estimated Δα/α using BMA, with AICc providing the relative likelihood used to weight the contribution of each model. In this method, AICc provides a measure of the relative quality of a model, based on a balance of goodness-of-fit (chi-squared) against the complexity (number of components compared to the number of data points) of each model. We define AICc in the normal way (<cit.>):

AICc_j = χ^2_j + 2k + 2k(k+1)/(n-k-1),

where k is the number of free parameters and n is the number of data points. Statistical uncertainties are determined from the diagonal terms of the covariance matrix at the best-fitting solution.

§ RESULTS

The new "artificial intelligence" method, GVPFIT and BMA, results in excellent fits to the synthetic spectra. As an example, Figures <ref> and <ref> illustrate the BMA model for the most complex synthetic spectrum we analysed, with 37 underlying (real) velocity components. These figures show that the residuals are well behaved and there are no discrepancies between the data and the model. The BMA model is determined by summing over all models for each pixel in the data, with the contribution of each model being weighted by its relative likelihood using AICc (using Equations (7) and (13) from <cit.>):

ω(AICc_j) = ℒ(AICc_j)/∑_l=1^S ℒ(AICc_l) = e^(-AICc_j/2)/∑_l=1^S e^(-AICc_l/2),

such that ω(AICc_j) is the weight of model j. Similarly, the relative likelihood of velocity components at each pixel is determined by summing the probability density function of each redshift parameter from each component in all models, weighted by relative likelihood using AICc (Equation (<ref>)).
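The model-averaging step is compact enough to sketch in code. The snippet below (numpy; the helper names are ours) computes AICc, turns AICc differences into the weights ω above, and forms the BMA estimate of Δα/α. Subtracting the minimum AICc before exponentiating leaves the weights unchanged but avoids numerical underflow. The combined uncertainty shown here, mixing per-model variances with between-model scatter, is one common convention and is an assumption rather than necessarily the estimator used in the paper.

```python
import numpy as np

def aicc(chi2, k, n):
    """AICc of one candidate model (equation above): k free parameters, n data points."""
    return chi2 + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def bma_estimate(values, sigmas, aicc_scores):
    """Akaike-weighted model average of a parameter such as d(alpha)/alpha."""
    d = aicc_scores - aicc_scores.min()   # shifting by the minimum cancels in the ratio
    w = np.exp(-d / 2.0)
    w /= w.sum()
    mean = np.sum(w * values)
    var = np.sum(w * (sigmas**2 + (values - mean) ** 2))  # one common BMA variance choice
    return mean, np.sqrt(var)
```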
For the most complex synthetic spectrum, the 37 underlying (real) velocity components represent a total of 148 Voigt profile parameters, with each component contributing four: the FeII and MgII column densities, the redshift, and the Doppler broadening b-parameter. When we compared the minimum-AICc model to the underlying (real) model, we found that 136 parameters, or 91.9%, were identified. GVPFIT failed to identify three velocity components, and inaccurately estimated (discrepancies of >3σ in at least one Voigt profile parameter, using the statistical uncertainties determined from the diagonal terms of the covariance matrix at the best-fitting solution) a further three velocity components. This is illustrated in Figure <ref>. The missing components are among the weakest components in the underlying model and are surrounded by stronger components, while the inaccurately estimated components are weak compared to the surrounding velocity components and occur in regions of dense absorption. This trend is repeated throughout the entire set of 37 synthetic spectra, with GVPFIT identifying 653, or 92.9%, of the 703 underlying (real) velocity components. Additionally, four spurious (extra) weak velocity components were introduced in the GVPFIT process that were not present in the original models.

GVPFIT recovered the underlying (real) Δα/α for the synthetic spectra in our sample. Figure <ref> illustrates the Δα/α estimates of all models generated by GVPFIT for each of the synthetic spectra with 34, 35, 36 and 37 underlying velocity components. A clear plateau is seen at Δα/α = 0, the underlying (real) value. At lower generations, i.e., when the models are under-fitted, we see conspicuous departures from zero.

Figure <ref> plots the BMA estimates of Δα/α for the sample of 37 synthetic spectra. The inverse-variance weighted mean is Δα/α = 0.04 ± 0.20 × 10^-6. This is consistent with zero, as expected given that the underlying (real) value of Δα/α is zero for these synthetic spectra, and hence we found no evidence of a systematic bias. Figure <ref> also shows that the statistical uncertainties grow as the absorption system complexity increases, as would be expected; this is consistent with previous analyses of absorption systems with similar quality spectral data and numbers of components <cit.>. For example, the statistical uncertainty from the analysis of the (real) spectral data for this system in King et al. (2012) <cit.> is 4.0 × 10^-6 (with 14 velocity components and using less spectral data and different transitions) and in Bainbridge and Webb (2017) <cit.> is 2.9 × 10^-6 (with 37 velocity components).
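The quoted sample mean combines the 37 per-spectrum BMA estimates with inverse-variance weights; as a short sketch:

```python
import numpy as np

def inverse_variance_mean(x, sigma):
    """Weighted mean of per-spectrum estimates with weights 1/sigma^2,
    and the corresponding uncertainty 1/sqrt(sum of weights)."""
    w = 1.0 / sigma**2
    return np.sum(w * x) / np.sum(w), 1.0 / np.sqrt(np.sum(w))
```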
§ DISCUSSION

We found that the method described in <cit.>, GVPFIT and BMA, recovers the velocity structures of absorption systems and accurately estimates Δα/α over a broad range of velocity structures. GVPFIT recovered almost all of the underlying (real) Voigt profile parameters from the synthetic spectra (see Figure <ref>). The velocity components that GVPFIT missed or inaccurately estimated are weak and occur in locations of dense absorption. We believe it is unlikely that a human interactively fitting this set of synthetic spectra would perform better than GVPFIT.

Figure <ref> shows interesting characteristics in the evolution of Δα/α, similar to those seen by <cit.> in the real spectra of the z_abs = 1.839 absorption system towards J110325-264515. There appears to be an underlying linear trend in the evolution of Δα/α, with occasional conspicuous departures (see Figure <ref>). These conspicuous departures exhibit a dramatic shift in Δα/α over a small change in complexity. Previous interactive methods, relying on a single "best-fit" model, lack this broad picture of how Δα/α evolves with velocity structure and may lead to a spurious estimate of Δα/α.

These results also highlight the importance of having an accurate spectral error estimate. The spectral error estimate heavily influences the statistics of the fitting process, as an incorrect spectral error can artificially increase or decrease the chi-squared "goodness-of-fit" statistic for a model and influence AICc or any similar statistical criterion. This can lead to incorrectly estimating the number of components required to adequately fit the data and, as we have shown, can have a large impact on the final estimate of Δα/α.

Future work will increase the sample size, include a more diverse set of velocity structures and refine the method used to generate noise for the synthetic spectra. The sample of synthetic spectra used in this paper is small. Ideally, a study of this type would consist of thousands of synthetic spectra, and the automated nature of the new "artificial intelligence" method lends itself to analysing large samples. A larger sample will allow us to increase the precision of our analysis by reducing the uncertainty on our weighted mean and to probe for any smaller systematic bias. For example, a similar sample consisting of 1000 synthetic spectra should allow us to estimate the weighted mean below 1 × 10^-8. In addition, we would like to include synthetic spectra based on a broader range of real absorption systems, to show that this method generalizes to a larger range of velocity structures, data qualities and combinations of species.

Furthermore, we expect that a more refined analysis will allow us to optimise our approach. In this work, we generate Gaussian noise using the error array from (real) spectral data. However, in spectral data the noise is not Gaussian near zero flux and the noise in adjacent pixels is not independent. At the current level of precision with which α is being probed, these effects may become important.
However, we believe that this work is an important contribution, giving initial indications that this new method is accurate and unbiased. The size of this sample is adequate to show that there is no evidence of bias in Δα/α, when using our method, at the 2 × 10^-7 level under ideal circumstances (correct spectral error, high signal-to-noise and high resolution). This level of precision is two orders of magnitude smaller than the systematic uncertainty estimated in previous analyses of real spectral data (for example, in <cit.> σ_rand is approximately 0.9 × 10^-5, using almost eight times the number of absorption systems). In addition, although GVPFIT is automated, the analysis still requires time and computing resources; time and resources which could otherwise be used to analyse (real) spectral data instead of synthetic spectra. Future work will extend these results, consider non-ideal circumstances and apply this approach to (real) spectral data.

Studies such as this one are required to test the new method of GVPFIT and BMA before it is applied to the analysis of large sets of data. This is the first time that synthetic spectra have been utilised to evaluate how we analyse absorption spectra. One of the main limiting factors in the use of absorption spectra to probe fundamental physics is the human interaction required during the interactive modelling process. This human interaction involves many complex decisions, requires considerable expertise, and can be very time-consuming for even a single moderately complex absorption system, such as a typical damped Lyman-α absorption system (e.g., <cit.>). Furthermore, the end result can be somewhat unreliable, with the literature providing many examples of fits to absorption systems which are clearly inadequate. Much time is devoted to echelle spectroscopy of quasars on large optical telescopes, and considerable amounts of spectra exist in telescope archives which remain unpublished or have only partially been analysed, representing a great deal of valuable scientific information. With new instruments constantly being developed, such as ESPRESSO, the quality and quantity of available quasar echelle spectra are only going to increase.

Since the new method presented in Bainbridge and Webb (2017) <cit.> removes the previously required human interaction, we can begin to analyse the ever-increasing number of quasar echelle spectra more efficiently and undertake projects that were previously unrealistic. One example is modelling both thermally and turbulently broadened models for each absorption system independently, allowing a more reliable comparison between models and data. The development and testing of this new "artificial intelligence" method (GVPFIT and BMA) are key to moving past the limiting factor of human interaction and open the way for such projects.

This research used the ALICE High Performance Computing Facility at the University of Leicester. M.B.B. conceived, designed and performed the experiment, analyzed the data, and wrote the paper. J.K.W. contributed to the design of the project and the writing of the paper. M.B.B. and J.K.W. invented GVPFIT and first demonstrated the application and advantages of this method. All authors commented on the manuscript at all stages and approved the final version to be published.
The authors declare no conflict of interest.

The following abbreviations are used in this manuscript:
GVPFIT — Genetic Voigt Profile FITting software
VPFIT — Voigt Profile FITting software
BMA — Bayesian Model Averaging
AICc — Akaike Information Criterion corrected for small sample size

§ REFERENCES

[Bainbridge and Webb (2017)] Bainbridge, M.B.; Webb, J.K. Artificial intelligence applied to the automatic analysis of absorption spectra. Objective measurement of the fine structure constant. Mon. Not. R. Astron. Soc. 2017, 468, 1639–1670. arXiv:1606.07393.

[Akaike (1973)] Akaike, H. Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory; Petrov, B.N., Csaki, F., Eds.; Akademia Kiado: Budapest, Hungary, 1973; pp. 267–281.

[Hurvich and Tsai (1989)] Hurvich, C.M.; Tsai, C.L. Regression and time series model selection in small samples. Biometrika 1989, 76, 297–307.

[Box and Muller (1958)] Box, G.E.P.; Muller, M.E. A note on the generation of random normal deviates. Ann. Math. Stat. 1958, 29, 610–611.

[Webb et al. (1999)] Webb, J.K.; Flambaum, V.V.; Churchill, C.W.; Drinkwater, M.J.; Barrow, J.D. Search for time variation of the fine structure constant. Phys. Rev. Lett. 1999, 82, 884–887. arXiv:astro-ph/9803165.

[Murphy et al. (2004)] Murphy, M.T.; Flambaum, V.V.; Webb, J.K.; Dzuba, V.A.; Prochaska, J.X.; Wolfe, A.M. Constraining variations in the fine-structure constant, quark masses and the strong interaction. In Astrophysics, Clocks and Fundamental Constants; Karshenboim, S.G., Peik, E., Eds.; Lecture Notes in Physics; Springer: Berlin, Germany, 2004; Volume 648, pp. 131–150. arXiv:astro-ph/0310318.

[Webb et al. (2011)] Webb, J.K.; King, J.A.; Murphy, M.T.; Flambaum, V.V.; Carswell, R.F.; Bainbridge, M.B. Indications of a spatial variation of the fine structure constant. Phys. Rev. Lett. 2011, 107, 191101. arXiv:1008.3907.

[King et al. (2012)] King, J.A.; Webb, J.K.; Murphy, M.T.; Flambaum, V.V.; Carswell, R.F.; Bainbridge, M.B.; Wilczynska, M.R.; Koch, F.E. Spatial variation in the fine-structure constant – new results from VLT/UVES. Mon. Not. R. Astron. Soc. 2012, 422, 3370–3414. arXiv:1202.4758.

[Riemer-Sørensen et al. (2015)] Riemer-Sørensen, S.; Webb, J.K.; Crighton, N.; Dumont, V.; Ali, K.; Kotuš, S.; Bainbridge, M.; Murphy, M.T.; Carswell, R. A robust deuterium abundance; re-measurement of the z = 3.256 absorption system towards the quasar PKS 1937-101. Mon. Not. R. Astron. Soc. 2015, 447, 2925–2936. arXiv:1412.4043.
http://arxiv.org/abs/1704.08710v1
{ "authors": [ "Matthew B. Bainbridge", "John K. Webb" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20170427183549", "title": "Evaluating the New Automatic Method for the Analysis of Absorption Spectra Using Synthetic Spectra" }
Decremental Data Structures for Connectivity and Dominators in Directed Graphs

[Accepted to the 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017).]

Loukas Georgiadis (University of Ioannina; work partially done while visiting University of Rome Tor Vergata)
Thomas Dueholm Hansen (Aarhus University; work partially done while visiting University of Rome Tor Vergata; supported by the Carlsberg Foundation, grant no. CF14-0617)
Giuseppe F. Italiano (University of Rome Tor Vergata; partially supported by MIUR, the Italian Ministry of Education, University and Research, under Project AMANDA (Algorithmics for MAssive and Networked DAta))
Sebastian Krinninger (University of Vienna; work partially done while visiting University of Rome Tor Vergata and while at Max Planck Institute for Informatics, Saarland Informatics Campus, Germany)
Nikos Parotsidis (University of Rome Tor Vergata)

We introduce a new dynamic data structure for maintaining the strongly connected components (SCCs) of a directed graph (digraph) under edge deletions, so as to answer a rich repertoire of connectivity queries. Our main technical contribution is a decremental data structure that supports sensitivity queries of the form "are u and v strongly connected in the graph G ∖ w?", for any triple of vertices u, v, w, while G undergoes deletions of edges. Our data structure processes a sequence of edge deletions in a digraph with n vertices in O(mn log n) total time and O(n^2 log n) space, where m is the number of edges before any deletion, and answers the above queries in constant time. We can leverage our data structure to obtain decremental data structures for many more types of queries within the same time and space complexity, for instance for edge-related queries, such as testing whether two query vertices u and v are strongly connected in G ∖ e, for some query edge e. As another important application of our decremental data structure, we provide the first nontrivial algorithm for maintaining the dominator tree of a flow graph under edge deletions. We present an algorithm that processes a sequence of edge deletions in a flow graph in O(mn log n) total time and O(n^2 log n) space. For reducible flow graphs we provide an O(mn)-time and O(m + n)-space algorithm.
We give a conditional lower bound that provides evidence that these running times may be tight up to subpolynomial factors.

§ INTRODUCTION

Dynamic graph algorithms have been extensively studied for several decades, and many important results have been achieved for dynamic versions of fundamental problems, including connectivity, 2-edge and 2-vertex connectivity, minimum spanning tree, transitive closure, and shortest paths (see, e.g., the survey in <cit.>). We recall that a dynamic graph problem is said to be fully dynamic if it involves both insertions and deletions of edges, incremental if it only involves edge insertions, and decremental if it only involves edge deletions.

The decremental strongly connected components (SCCs) problem asks us to maintain, under edge deletions in a directed graph G, a data structure that given two vertices u and v answers whether u and v are strongly connected in G. We extend this problem to sensitivity queries of the form "are u and v strongly connected in the graph G ∖ w?", for any triple of vertices u, v, w; i.e., we additionally allow the query to temporarily remove a third vertex w. We show that this extended decremental SCC problem can be used to quickly answer a rich repertoire of connectivity queries, and we present a new and efficient data structure for the problem. In particular, our data structure for the extended decremental SCC problem can be used to support edge-related queries, such as maintaining the strong bridges of a digraph, testing whether two query vertices u and v are strongly connected in G ∖ e, and reporting the SCCs of G ∖ e, or the largest and smallest SCCs in G ∖ e, for any query edge e. Furthermore, using our framework, it is possible to maintain the 2-vertex- and 2-edge-connected components of a digraph under edge deletions. All of these extensions can be handled with the same time and space bounds as for the extended decremental SCC problem. (Most of these reductions have been deferred to the full version of the paper.)

A naive approach to solving the extended decremental SCC problem is to maintain separately the SCCs in every subgraph G ∖ w of G, for all vertices w. After an edge deletion we then update the SCCs of all these n subgraphs, where n is the number of vertices in G. If we simply perform a static recomputation after each deletion, then we, for example, obtain decremental algorithms with O(m^2 n) total time and O(n^2) space by recomputing the SCCs in each G ∖ w <cit.>, or with O(m^2 + mn) total time and O(m + n) space by constructing a more suitable static connectivity data structure <cit.>, respectively. Here m denotes the initial number of edges. The current fastest (randomized) decremental SCC algorithm by Chechik et al. <cit.> trivially gives O(mn^3/2 log n) total update time and O(mn) space for our extended decremental SCC problem.

Our main technical contribution is a data structure for the extended decremental SCC problem with O(mn log n) total update time that uses O(n^2 log n) space, and that answers queries in constant time. We obtain this data structure by extending Łącki's decremental SCC algorithm <cit.>. His algorithm maintains the SCCs of a graph under edge deletions by recursively decomposing the SCCs into smaller and smaller subgraphs; we therefore refer to his data structure as an SCC-decomposition. His total update time is O(mn) and the space used is O(m+n). We observe that the naive algorithm based on SCC-decompositions can be implemented in such a way that most of the work performed is redundant.
We obtain our data structure by merging n SCC-decompositions into one joint data structure, which we refer to as a joint SCC-decomposition. Our data structure, like that of Łącki, is deterministic. Using completely different techniques, Georgiadis et al. <cit.> showed how to answer the same sensitivity queries in O(mn) total time in the incremental setting, i.e., when the input digraph undergoes edge insertions only. The extended SCC problem is related to the so-called fault-tolerant model. Here, one wishes to preprocess a graph G into a data structure that is able to quickly answer certain sensitivity queries, i.e., given a failed vertex w (resp., failed edge e), compute a specific property of the subgraph G ∖ w (resp., G ∖ e) of G. Our data structure supports sensitivity queries when a digraph G undergoes edge deletions, which gives an aspect of decremental fault-tolerance. This may be useful in scenarios where we wish to find the best edge whose deletion optimizes certain properties (fault-tolerant aspect) and then actually perform this deletion (decremental aspect). This is, e.g., done in the computational biology applications considered by Mihalák et al. <cit.>. Their recursive deletion-contraction algorithm repeatedly finds the edge of a strongly connected digraph whose deletion maximizes quantities such as the number of resulting SCCs or minimizes their maximum size. As another important application of our joint SCCs data structure, we provide the first nontrivial algorithm for maintaining the dominator tree of a flow graph under edge deletions. A flow graph G=(V,E,s) is a directed graph with a distinguished start vertex s ∈ V, w.l.o.g. containing only vertices reachable from s. A vertex w dominates a vertex v (w is a dominator of v) if every path from s to v includes w. The immediate dominator of a vertex v, denoted by d(v), is the unique vertex that dominates v and is dominated by all dominators of v. The dominator tree D is a tree with root s in which each vertex v has d(v) as its parent. Dominator trees can be computed in linear time <cit.>. The problem of finding dominators has been extensively studied, as it occurs in several applications, including program optimization and code generation <cit.>, constraint programming <cit.>, circuit testing <cit.>, theoretical biology <cit.>, memory profiling <cit.>, fault-tolerant computing <cit.>, connectivity and path-determination problems <cit.>, and the analysis of diffusion networks <cit.>. In particular, the dynamic dominator problem arises in various applications, such as data flow analysis and compilation <cit.>. Moreover, the results of Italiano et al. <cit.> imply that dynamic dominators can be used for dynamically testing 2-vertex connectivity, and for maintaining the strong bridges and strong articulation points of digraphs.
The decremental dominator problem appears in the computation of maximal 2-connected subgraphs in digraphs <cit.>. The problem of updating the dominator relation has been studied for a few decades (see, e.g., <cit.>). For the incremental dominator problem, there are algorithms that achieve a total O(mn) running time for processing a sequence of edge insertions in a flow graph with n vertices, where m is the number of edges after all insertions <cit.>. Moreover, they can answer dominance queries, i.e., whether a query vertex w dominates another query vertex v, in constant time. Prior to our work, to the best of our knowledge, no decremental algorithm with total running time better than O(m^2) was known for general flow graphs. In the special case of reducible flow graphs (a class that includes acyclic flow graphs), Cicerone et al. <cit.> achieved an O(mn) update bound for the decremental dominator problem. Both the incremental and the decremental algorithms of <cit.> require O(n^2) space, as they maintain the transitive closure of the digraph. Our algorithm is the first to improve the trivial O(m^2) bound for the decremental dominator problem in general flow graphs. Specifically, our algorithm can process a sequence of edge deletions in a flow graph with n vertices and initially m edges in O(mn log n) time and O(n^2 log n) space, and after processing each deletion can answer dominance queries in constant time. For the special case of reducible flow graphs, we give an algorithm that matches the O(mn) running time of Cicerone et al. while improving the space usage to O(m+n). We remark that the reducible case is interesting for applications in program optimization since one notion of a “structured” program is that its flow graph is reducible. (The details about this result appear in the full version of the paper.) Finally, we complement our results with a conditional lower bound, which suggests that it will be hard to substantially improve our update bounds. In particular, we prove that there is neither an incremental nor a decremental algorithm for maintaining the dominator tree (or more generally, a dominance data structure) that has total update time O((mn)^{1-ϵ}) (for some constant ϵ > 0) unless the conjecture of <cit.> fails. The same lower bound applies to the extended decremental SCC problem. Unlike the update time, it is not clear that the O(n^2 log n) space used by our joint SCC-decomposition is near-optimal. We leave it as an open problem to improve this bound.

§ NOTATION AND TERMINOLOGY

For a given directed graph G = (V,E), we denote the set of vertices by V(G) = V and the set of edges by E(G) = E. We let m and n be the number of edges and vertices, respectively, of G. Two vertices u and v are strongly connected in G if there is a path from u to v as well as a path from v to u in G, and G is strongly connected if every vertex is reachable from every other vertex. The strongly connected components (SCCs) of G are its maximal strongly connected subgraphs. The SCCs of a graph can be computed in O(m+n) time <cit.>. We denote by G ∖ S (resp., G ∖ (u, v)) the graph obtained after deleting a set S of vertices (resp., an edge (u, v)) from G. Additionally, we let G[S] be the subgraph of G induced by the set of vertices S. For a strongly connected graph H, we say that deleting an edge (u, v) breaks H if H ∖ (u, v) is not strongly connected.
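As a small illustration of this terminology (reusing the strongly_connected_components helper from the sketch in the introduction), the following hypothetical functions test whether deleting an edge breaks a strongly connected graph:

```python
def is_strongly_connected(vertices, adj):
    """True iff all vertices lie in a single SCC."""
    comp = strongly_connected_components(vertices, adj)
    return len(set(comp.values())) <= 1

def deletion_breaks(vertices, adj, u, v):
    """Does deleting the edge (u, v) break the strongly connected graph
    H = (vertices, adj)?  One static SCC computation, i.e., O(m + n)."""
    adj2 = {x: [y for y in adj[x] if (x, y) != (u, v)] for x in vertices}
    return not is_strongly_connected(vertices, adj2)
```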
An edge (resp., a vertex) of G is a strong bridge (resp., a strong articulation point) if its removal increases the number of SCCs. Let G be a strongly connected graph. We say that G is 2-edge-connected (resp., 2-vertex-connected) if it has no strong bridges (resp., at least three vertices and no strong articulation points). For a set of vertices C ⊆ V, its induced subgraph G[C] is a maximal 2-edge-connected subgraph (resp., maximal 2-vertex-connected subgraph) of G if G[C] is a 2-edge-connected (resp., 2-vertex-connected) graph and no superset of C has this property. Two vertices u and w are 2-edge-connected (resp., 2-vertex-connected) if there are two edge-disjoint (resp., internally vertex-disjoint) paths from u to w and two edge-disjoint (resp., internally vertex-disjoint) paths from w to u. (Note that a path from u to w and a path from w to u need not be edge-disjoint or internally vertex-disjoint.) A 2-edge-connected (resp., 2-vertex-connected) component of G is a maximal subset of vertices such that any pair of distinct vertices is 2-edge-connected (resp., 2-vertex-connected). We denote by G^R the reverse graph of G, i.e., the graph which has the same vertices as G and contains an edge (v, u) for every edge (u, v) of G. If D is the dominator tree of G, then D^R denotes the dominator tree of G^R. A spanning tree T of G is a tree with root s that contains a path from s to v for all reachable vertices v. Given a rooted tree T, we denote by T(v) the subtree of T rooted at v (we also view T(v) as the set of descendants of v). Let G=(V,E,s) be a flow graph with start vertex s, and let D be the dominator tree of G. We represent G by adjacency lists In(v) = { u : (u,v) ∈ E } and Out(v) = { w : (v,w) ∈ E }. We represent D by storing parent and child pointers, i.e., each vertex v stores its parent d(v) in D and the list of children C(v). Let T be a tree rooted at s with vertex set V(T) ⊆ V, and let t(v) denote the parent of a vertex v ∈ V(T) in T. If v is an ancestor of w, T[v, w] is the path in T from v to w. In particular, D[s,v] consists of the vertices that dominate v. If v is a proper ancestor of w, T(v, w] is the path to w from the child of v that is an ancestor of w. Tree T is flat if its root is the parent of every other vertex. For any vertex v ∈ V, we denote by C(v) the set of children of v in D.

§ A DATA STRUCTURE FOR MAINTAINING JOINT SCC-DECOMPOSITIONS

For a given initial graph G, the decremental SCC problem asks us to maintain a data structure that allows edge deletions and can answer whether (arbitrary) pairs (u,v) of vertices are in the same SCC. The goal is to update the data structure as quickly as possible while answering queries in constant time. In this paper we present a data structure for the extended decremental SCC problem, in which a query provides an additional vertex w and asks whether u and v are in the same SCC when w is deleted from G. We maintain this information under edge deletions, and our data structure relies on Łącki's SCC-decomposition <cit.> for doing so.

§.§ Review of Łącki's SCC Decomposition

An SCC-decomposition recursively partitions the graph G into smaller strongly connected subgraphs. This generates a rooted tree T, whose root r represents the entire graph, and where the subtree rooted at each node ϕ represents some vertex-induced strongly connected subgraph G_ϕ (we refer to vertices of T as nodes to distinguish T from G). Every non-leaf node ϕ is a vertex of G_ϕ, and the children of ϕ correspond to SCCs of G_ϕ ∖ ϕ.
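The formal definition and the procedure Build-SCC-Decomposition(G,S) are given next; as a preview, here is a minimal sketch of the recursive construction, again reusing the SCC helper and defaultdict import from the earlier block. The class name DecompNode and the external flag are our illustrative choices, not notation from the paper.

```python
class DecompNode:
    """Node of an SCC-decomposition tree.  Internal nodes hold a single
    vertex of G; external leaves may hold a whole strongly connected set."""
    def __init__(self, vertices, external=False):
        self.vertices = set(vertices)
        self.children = []
        self.external = external

def build_scc_decomposition(vertices, adj, S):
    """Sketch of Build-SCC-Decomposition(G, S) on a strongly connected
    vertex set: vertices of S are picked first and become internal nodes;
    once none remains, the current set becomes an external leaf."""
    pick = next((v for v in vertices if v in S), None)
    if pick is None:
        return DecompNode(vertices, external=True)
    node = DecompNode([pick])
    rest = set(vertices) - {pick}
    sub_adj = {v: [w for w in adj[v] if w in rest] for v in rest}
    comp = strongly_connected_components(rest, sub_adj)
    groups = defaultdict(list)
    for v, c in comp.items():
        groups[c].append(v)
    for group in groups.values():   # children: the SCCs of G_phi minus phi
        node.children.append(build_scc_decomposition(group, sub_adj, S))
    return node
```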
The concept was introduced by Łącki <cit.> and was slightly extended by Chechik et al. <cit.> to allow partial SCC-decompositions where leaves represent strongly connected subgraphs rather than single vertices. We adopt the notation from <cit.>.Let G=(V, E) be a strongly connected graph. An SCC-decomposition of G is a rooted tree T, whose nodes form a partition of V. For a node ϕ of T we define G_ϕ to be the subgraph of G induced by the union of all descendants of ϕ (including ϕ). Then, the following properties hold: * Each internal node ϕ of T is a single-element set.[In this case, we sometimes abuse notation and assume that ϕ is the vertex itself.]* Let ϕ be any internal node of T, and let H_1,…,H_t be the SCCs of G_ϕ∖ϕ. Then the node ϕ has t children ϕ_1,…,ϕ_t, where G_ϕ_i = H_i for all i ∈{1,…,t}.An SCC-decomposition of a graph G that is not strongly connected is a collection of SCC-decompositions of the SCCs of G. We say that T is a partial SCC-decomposition when the leaves of T are not required to be singletons.Observe that for each node ϕ, the graph G_ϕ is strongly connected. Moreover, the subtree of T rooted at ϕ is an SCC-decomposition of G_ϕ. Also, for a leaf ϕ we have that ϕ = V(G_ϕ). To build an SCC-decomposition T of a strongly connected graph G we pick an arbitrary vertex v, put it in the root of T, then recursively build SCC-decompositions of SCCs of G ∖{v} and make them the children of v in T. This procedure is described in Build-SCC-Decomposition(G,S). Note that since the choice of v is arbitrary, there are many ways to build an SCC-decomposition of the same graph. The procedure Build-SCC-Decomposition(G,S) takes as input a set of vertices S and returns a partial SCC-decomposition whose internal nodes are the vertices of S, i.e., these vertices are picked first and therefore appear at the top of the constructed tree. We refer to the vertices in S as internal nodes and the remaining nodes as external nodes. Note that all external nodes appear in the leaves of T, while internal nodes can be both leaves and non-leaves. This distinction is helpful when describing our algorithm. We therefore let Internal(T) be the nodes of T from S and External(T) be the nodes of T that are not from S. In particular, External(T) is a subset of the leaves of T.Łącki <cit.> showed that the total initialization and update time under edge deletions of an SCC-decomposition is O(mγ), where γ is the depth of the decomposition.§.§ Towards a Joint SCC-Decomposition Recall that the extended decremental SCC problem asks us to maintain under edge deletions a data structure for a graph G such that we can answer whether u and v are strongly connected in G ∖{w} when given u,v,w ∈ V(G). A naive algorithm does this by maintaining n SCC-decompositions, each with a distinct vertex w as its root. The children of w in an SCC-decomposition that has w as its root are then exactly the SCCs of G ∖{w}. Hence, u and v are in the same SCC if and only if they appear in the same subtree below w. The total update time of this data structure is however O(mn^2), which is undesirable. With a more refined approach, we improve the time bound to O(mnlog n).Observe that the external nodes of a partial SCC-decomposition T produced by the procedure Build-SCC-Decomposition(G,S) exactly correspond to the SCCs of G ∖ S. This is true regardless of the order in which vertices from S are picked by the procedure. 
If two SCC-decompositions are built using the same set S, but with vertices being picked in a different order, then the nodes below S represent the same SCCs, which means that they can be shared by the two SCC-decompositions. Our algorithm is based on this observation. We essentially construct the n SCC-decompositions of the naive algorithm described above such that large parts of their subtrees are shared, and such that we do not need to maintain multiple copies of these subtrees. The idea is to partition the set S into two subsets S_1 and S_2 of equal size (we assume for simplicity that n is a power of 2), and then construct half of the SCC-decompositions with S_1 at the top and the other half with S_2 at the top. The procedure is repeated recursively on the top part of both halves. We refer to the bottom part, i.e., nodes that are not from S_1 and S_2, respectively, as the extension of the top part. Note that we eventually get a distinct vertex as the root of each of the n SCC-decompositions. The following definition formalizes the idea. A joint SCC-decomposition J is a recursive structure. It is either a regular SCC-decomposition T (the base case), or a pair of joint SCC-decompositions J_1, J_2 with the same set of internal nodes S and a shared set of external nodes Φ. In the second case we refer to J as the tuple (J_1,J_2,S,Φ). A joint SCC-decomposition J = (J_1,J_2,S,Φ) is balanced on S if it has one of the following two properties: * S is a singleton and J is a regular (partial) SCC-decomposition T with the vertex from S as root and no other internal nodes (the base case). * S can be partitioned into two equally sized halves S_1 and S_2, and J consists of two joint SCC-decompositions J_1 = (J_{1,1},J_{1,2},S_1,Φ_1) and J_2 = (J_{2,1},J_{2,2},S_2,Φ_2) that are balanced on S_1 and S_2, respectively. Also, each external node ϕ in Φ_1 and Φ_2 is extended with an associated SCC-decomposition T_ϕ for G_ϕ whose internal nodes are those of ϕ ∩ S. The combined set of external nodes of T_ϕ for all ϕ ∈ Φ_1 is equal to the combined set of external nodes of T_ϕ' for all ϕ' ∈ Φ_2, and these nodes are the external nodes Φ of J. The procedure Build-Joint-SCC-Decomposition(G,S) describes how we build a balanced joint SCC-decomposition. G is the graph that we wish to decompose, and S is the set of vertices that we wish to place at the top. Initially S is the set of all vertices. If S only contains a single vertex r, then we make r the root of a regular SCC-decomposition. Note that in this case the vertex r is the only internal node of the partial SCC-decomposition returned by Build-SCC-Decomposition(G,S). If S contains more than one vertex, then we split it into two equal halves S_1 and S_2 and recursively compute a joint SCC-decomposition for each half. The procedure Build-Joint-SCC-Decomposition(G,S_i) only uses vertices from S_i as internal nodes, and we therefore compute regular SCC-decompositions for the remaining vertices from S for each of the resulting external nodes. This gives us two structures that both have the vertices from S as internal nodes, and since their external nodes are shared they form a joint SCC-decomposition. We add the external nodes to a list Φ that is used as an interface between the different SCC-decompositions.
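The following sketch mirrors this construction, building on the previous blocks. One deliberate simplification, flagged here: instead of sharing each external node between the two halves, the sketch builds the extensions twice; the real data structure shares them, which is precisely the source of its improved time and space bounds.

```python
def external_nodes(J):
    """Current external leaves of a (joint) SCC-decomposition sketch."""
    frontier = J[3] if isinstance(J, tuple) else [J]
    out = []
    for node in frontier:
        if node.external:
            out.append(node)
        else:
            for child in node.children:
                out.extend(external_nodes(child))
    return out

def build_joint_scc_decomposition(vertices, adj, S):
    """Sketch of Build-Joint-SCC-Decomposition(G, S).  For |S| = 1 this is
    an ordinary partial SCC-decomposition; otherwise S is halved, both
    halves are decomposed recursively, and every external node of either
    half is extended with an SCC-decomposition over the other half."""
    S = list(S)
    if len(S) == 1:
        return build_scc_decomposition(vertices, adj, set(S))
    half = len(S) // 2                  # assumes |S| is a power of two
    halves = [set(S[:half]), set(S[half:])]
    result = []
    for mine, other in (halves, halves[::-1]):
        J = build_joint_scc_decomposition(vertices, adj, mine)
        for phi in external_nodes(J):   # phi is an SCC of G minus `mine`
            extension = build_scc_decomposition(phi.vertices, adj, other)
            phi.external = False        # phi is now extended, not a leaf
            phi.children.append(extension)
        result.append(J)
    J1, J2 = result
    Phi = external_nodes(J1)            # leaves: the SCCs of G minus S
    return (J1, J2, set(S), Phi)
```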
Recall that each node is a subset of the vertices in G, and observe that the nodes in Φ form a partition of V(G) ∖ S.

A balanced joint SCC-decomposition for a graph G with n vertices consists of O(n) SCC-decompositions.

Observe that the procedure Build-Joint-SCC-Decomposition(G,S) constructs a number of SCC-decompositions that is given by the following simple recurrence, where |E(G)| = m and |S| = s: g(m,s) = 2g(m,s/2) + 2 if s > 1, and g(m,s) = 1 otherwise. Since g(m,s) = O(s), the lemma follows.

The following lemma shows that a joint SCC-decomposition achieves a much more compact representation than the naive algorithm described at the beginning of the section. We will later use the lemma in our analysis.

Let J = (J_1,J_2,S,Φ) be a balanced joint SCC-decomposition of a graph G such that S = V(G). Then the total number of nodes of J is O(n log n), where n = |V(G)|.

The proof is by induction. Our induction hypothesis says that the total number of internal nodes of a balanced joint SCC-decomposition J = (J_1,J_2,S,Φ), counting not only S but also recursively the number of internal nodes of J_1 and J_2, is |S| · (1+log |S|). In the base case, J = (J_1,J_2,S,Φ) is an SCC-decomposition with a single internal node, and the induction hypothesis is clearly satisfied. For the induction step we count separately the total number of internal nodes of J_1 = (J_{1,1},J_{1,2},S_1,Φ_1) and J_2 = (J_{2,1},J_{2,2},S_2,Φ_2), and add the number of internal nodes of the SCC-decompositions T_ϕ for ϕ ∈ Φ_1 and ϕ ∈ Φ_2, i.e., the extensions of J_1 and J_2 to S. Since |S_1| = |S_2| = |S|/2, it follows from the induction hypothesis that both J_1 and J_2 have |S|/2 · log |S| internal nodes in total. The internal nodes of T_ϕ for ϕ ∈ Φ_1 are exactly S_2, and the internal nodes of T_ϕ for ϕ ∈ Φ_2 are exactly S_1. Hence the number of internal nodes in the extensions is |S_1|+|S_2| = |S|. It follows that the total number of internal nodes of J is |S| · (1+log |S|) as desired. It remains to count the external nodes of J. Note that external nodes of J_1 and J_2 correspond to internal nodes of J, i.e., they are roots of the SCC-decompositions that extend J_1 and J_2. Therefore there are at most as many external nodes inside the recursion as there are internal nodes in total. There are at most O(n) external nodes in the extensions of J_1 and J_2 to S, and we therefore conclude that the total number of nodes when S=V(G) is at most O(n log n).

The procedure Build-Joint-SCC-Decomposition(G,S) constructs a joint SCC-decomposition in time O(mn log n).

We already argued that Build-Joint-SCC-Decomposition(G,S) constructs a joint SCC-decomposition with vertices from S as internal nodes. It thus remains to show that this is done in O(mn log n) time. Recall that Łącki <cit.> showed that an SCC-decomposition of depth γ can be initialized in time O(mγ). This follows straightforwardly from the fact that the SCCs of a graph can be computed in O(m+n) time <cit.>. The depth of an SCC-decomposition is at most equal to the number of internal nodes plus one. Since Build-SCC-Decomposition(G,S) only uses vertices from S as internal nodes, it runs in time O(|E(G)| · (|S|+1)). The running time of Build-Joint-SCC-Decomposition(G,S) is dominated by the calls that it makes to the procedure Build-SCC-Decomposition(G,S). In particular, it is not difficult to merge the leaves on line <ref> in linear time.
It follows that the running time of Build-Joint-SCC-Decomposition(G,S) when |E(G)| = m and |S| = s is upper bounded by the following recurrence for some constant c: f_c(m,s) = 2f_c(m,s/2) + cms if s > 1, and f_c(m,s) = c·m otherwise. Since f_c(m,s) = ∑_{i=0}^{log s} cms = O(ms log s), the claim follows. Alternatively, the lemma can be proved by observing that Lemma <ref> shows that the total number of nodes in a joint SCC-decomposition is O(n log n), which implies that the combined depth of all the SCC-decompositions that make up a joint SCC-decomposition is at most O(n log n).

To answer queries for the extended decremental SCC problem in constant time, we also construct and maintain an n × n matrix A such that A[u,w] is the index of the SCC of G ∖ {w} that contains u. Two vertices u and v are in the same SCC of G ∖ {w} if and only if A[u,w] = A[v,w]. To avoid cluttering the pseudo-code we describe separately how A is maintained. In Build-Joint-SCC-Decomposition(G,S) we initialize A in the base case when we compute an SCC-decomposition T for a singleton S = {w}. Indeed, in this case w is the root of T, and the external nodes are exactly the SCCs of G ∖ {w}. Hence, for every vertex u ∈ V(G) ∖ {w} we make A[u,w] equal to the index of the SCC it is part of in G ∖ {w}. Note that storing the matrix A takes space O(n^2). The time spent initializing A is however dominated by the other work performed by the algorithm.

§.§ Deleting Edges from a Joint SCC-Decomposition

We next show how to maintain a joint SCC-decomposition under edge deletions. It is again instructive to consider the work performed by the naive algorithm that maintains n SCC-decompositions with distinct roots. If these are constructed as described in Section <ref>, then the SCC-decompositions will share many identical subtrees, and the work performed on these subtrees will be the same. In the joint SCC-decomposition such subtrees are shared, but otherwise the work performed is the same as the work performed for individual SCC-decompositions. We therefore use Łącki's algorithm <cit.> to delete edges from the individual SCC-decompositions, and we introduce a new procedure for handling the interface between the SCC-decompositions. We next briefly sketch Łącki's algorithm. We refer to <cit.> for a more comprehensive presentation. Recall that each node ϕ of an SCC-decomposition T represents a strongly connected subgraph G_ϕ induced by the vertices in the subtree rooted at ϕ. If ϕ is an internal node of T, then the children of ϕ are the SCCs of G_ϕ ∖ ϕ. Łącki uses the following two operations to compactly represent edges among ϕ and its children. Let G be a graph. The condensation of G, denoted by cond(G), is the graph obtained from G by contracting all its SCCs into single vertices. Let v ∈ V(G). By split(G, v) we denote the graph obtained from G by splitting v into two vertices: v_in and v_out. The in-edges of v are connected to v_in and the out-edges to v_out. The two operations are often used together, and to simplify notation we use the shorthand G^cs_v = cond(split(G, v)). The graph G^cs_ϕ is stored with every internal node ϕ of the SCC-decomposition T. This introduces at most three copies of every vertex v of G: the two vertices v_in and v_out in G^cs_v, and possibly a third vertex in the condensed graph of the parent of v in T. Moreover, every edge (u, v) appears in exactly one condensed graph, namely that of the lowest common ancestor of u and v in T, which we denote by lca(u,v).
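To make the two operations concrete, here is a sketch of cond, split, and the shorthand G^cs_v on an edge-list representation (an illustrative choice; the SCC helper from the earlier block is reused):

```python
def split(vertices, edges, v):
    """split(G, v): replace v by v_in (receiving v's in-edges) and v_out
    (carrying v's out-edges)."""
    v_in, v_out = (v, 'in'), (v, 'out')
    new_vertices = [u for u in vertices if u != v] + [v_in, v_out]
    new_edges = [((v_out if a == v else a), (v_in if b == v else b))
                 for (a, b) in edges]
    return new_vertices, new_edges

def cond(vertices, edges):
    """cond(G): contract every SCC of G into a single vertex."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
    comp = strongly_connected_components(vertices, adj)
    new_vertices = sorted(set(comp.values()))
    new_edges = sorted({(comp[a], comp[b]) for (a, b) in edges
                        if comp[a] != comp[b]})
    return new_vertices, new_edges

def condensed_split(vertices, edges, v):
    """The shorthand G^cs_v = cond(split(G, v)) used in the text."""
    return cond(*split(vertices, edges, v))
```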
The combined space used for storing all the condensed graphs is thus O(m+n). To delete an edge (u, v), Łącki <cit.> locates ϕ = lca(u,v), and deletes (u', v') from G^cs_ϕ, where u' and v' are the vertices whose subtrees contain u and v. (He uses O(m) space to store a pointer from every edge (u, v) to lca(u,v), enabling him to find the lowest common ancestor in constant time.) To preserve connectivity, he then checks whether u' and v' have non-zero out- and in-degrees, respectively, in G^cs_ϕ. If this is not the case, then he repeatedly removes vertices with out- or in-degree zero and their adjacent edges from G^cs_ϕ. All such vertices can be located, starting from u' and v', in time that is linear in the number of edges adjacent to the removed vertices. The corresponding children of ϕ are then moved up one level in T and are made siblings of ϕ. They are also inserted into G^cs_par(ϕ), where par(ϕ) is the parent of ϕ, and their edges and the edges of ϕ in G^cs_par(ϕ) are updated correspondingly. This can again be done in time linear in the number of edges in the original graph that are adjacent to vertices in the subtrees that are moved. The procedure is then repeated in G^cs_par(ϕ). Since every vertex increases its level at most γ times, where γ is the initial depth of T, it follows that the total update time of the algorithm is at most O(mγ). We let Delete-Edge-from-SCC-decomposition(T,u,v) be the procedure described above for deleting an edge (u, v) from an SCC-decomposition T. We also denote the recursive procedure for moving nodes ϕ_1,…,ϕ_k from being children of ϕ to being siblings of ϕ in T after an edge (u, v) is deleted by Fix-SCC-decomposition(T,u,v,ϕ,{ϕ_1,…,ϕ_k}). Both procedures return the resulting SCC-decomposition, or a collection of SCC-decompositions in case the graph is not strongly connected. Armed with these two procedures, we are now ready to describe how our algorithm updates a joint SCC-decomposition when an edge (u, v) is deleted.

Procedure Delete-Edge-from-SCC-decomposition(T,u,v)
Input: An SCC-decomposition T for a strongly connected graph G, and an edge (u, v) to be deleted from G.
Output: The updated collection of SCC-decompositions {T_1,…,T_k} for G ∖ (u, v).
1. Let ϕ = lca(u,v) be the lowest common ancestor of u and v in T.
2. Let u' and v' be the vertices of G^cs_ϕ that are ancestors of u and v, respectively, in T.
3. Remove one copy of (u', v') from G^cs_ϕ.
4. Starting from u', find all vertices ϕ'_1,…,ϕ'_r in G^cs_ϕ that cannot reach ϕ_in.
5. Starting from v', find all vertices ϕ''_1,…,ϕ''_s in G^cs_ϕ that are unreachable from ϕ_out.
6. Remove ϕ'_1,…,ϕ'_r,ϕ''_1,…,ϕ''_s and their adjacent edges from G^cs_ϕ, and remove the corresponding subtrees from T.
7. Return Fix-SCC-decomposition(T,u,v,ϕ,{ϕ'_1,…,ϕ'_r,ϕ''_1,…,ϕ''_s}).

Procedure Fix-SCC-decomposition(T,u,v,ϕ,{ϕ_1,…,ϕ_k})
Input: A broken SCC-decomposition T for a graph G; the two endpoints u and v of the edge whose removal broke T; a node ϕ of T; and a set {ϕ_1,…,ϕ_k} of nodes that are to be made siblings of ϕ.
Output: A collection of valid SCC-decompositions {T_1,…,T_ℓ} for G ∖ (u, v).
1. If ϕ has no parent, or k = 0, return {T, T_{ϕ_1},…,T_{ϕ_k}}.
2. Let ϕ' be the parent of ϕ in T. Add ϕ_1,…,ϕ_k as children of ϕ' in T, and add ϕ_1,…,ϕ_k to G^cs_ϕ'.
3. Let E' be the edges of G_ϕ' for which one endpoint is part of G_{ϕ_i}, for some i ∈ {1,…,k}, and the other endpoint is not part of G_{ϕ_i}.
4. For each (u', v') ∈ E': if u' = ϕ', let u'' = ϕ'_out; otherwise let u'' be the vertex of G^cs_ϕ' whose subtree contains u'. If v' = ϕ', let v'' = ϕ'_in; otherwise let v'' be the vertex of G^cs_ϕ' whose subtree contains v'.
Then add (u'', v'') to G^cs_ϕ'.
5. Starting from the vertex in G^cs_ϕ' that contains u, find all vertices ϕ'_1,…,ϕ'_r in G^cs_ϕ' that cannot reach ϕ'_in.
6. Starting from the vertex in G^cs_ϕ' that contains v, find all vertices ϕ''_1,…,ϕ''_s in G^cs_ϕ' that are unreachable from ϕ'_out.
7. Remove ϕ'_1,…,ϕ'_r,ϕ''_1,…,ϕ''_s and their adjacent edges from G^cs_ϕ', and remove the corresponding subtrees from T.
8. Return Fix-SCC-decomposition(T,u,v,ϕ',{ϕ'_1,…,ϕ'_r,ϕ''_1,…,ϕ''_s}).

In a joint SCC-decomposition, vertices and edges may appear in multiple nodes as part of smaller SCC-decompositions. We therefore need to find every occurrence of the edge that we wish to delete. We introduce a procedure Delete-Edge(J,u,v) that does that by recursively searching through the nested joint SCC-decompositions and deleting (u,v) from the relevant SCC-decompositions. The procedure also handles the interface between SCC-decompositions. Note that deleting (u,v) from an SCC-decomposition T may cause the SCC corresponding to the root ϕ of T to break. The procedure Delete-Edge-from-SCC-decomposition(T,u,v) will in this case return a collection of SCC-decompositions {T_1,…,T_k}, one for each new SCC. Suppose J = (J_1,J_2,S,Φ). If T extends J_1 (resp. J_2), then it is an SCC-decomposition of the subgraph G_ϕ associated with some external node ϕ of J_1 (resp. J_2). ϕ is then itself a leaf of an SCC-decomposition T' in J_1 (resp. J_2). Moreover, when the SCC corresponding to ϕ breaks, then this leaf must be split into multiple leaves of T', one for each new SCC. Note however that the levels in T' of the involved vertices do not change after the split. We therefore cannot charge the work performed when splitting ϕ to the analysis by Łącki <cit.>. Let ϕ_1,…,ϕ_k be the roots of the SCC-decompositions T_1,…,T_k that are created when the deletion of (u,v) breaks the SCC G_ϕ. As mentioned above, we need to replace ϕ in T' by ϕ_1,…,ϕ_k, which means that ϕ_1,…,ϕ_k should replace ϕ in G^cs_par(ϕ), where par(ϕ) is the parent of ϕ in T'. To efficiently reconnect ϕ_1,…,ϕ_k in G^cs_par(ϕ) we identify the vertex ϕ_i whose associated graph G_{ϕ_i} has the most vertices, and we then scan through all the vertices in the other graphs G_{ϕ_1},…,G_{ϕ_{i-1}},G_{ϕ_{i+1}},…,G_{ϕ_k} and reconnect their adjacent edges in G^cs_par(ϕ) when relevant. The work performed is exactly the same as when Łącki fixes an SCC-decomposition after an edge is removed. We can therefore call Fix-SCC-decomposition(T',u,v,ϕ,{ϕ_1,…,ϕ_{i-1},ϕ_{i+1},…,ϕ_k}). Note that this makes ϕ_i take over the role of ϕ. Also note that we provide the procedure with the endpoints u and v of the edge that was deleted, since u and v are used as starting points for the search for disconnected vertices when propagating the update further up the tree. Finally, observe that splitting the leaf ϕ of the SCC-decomposition T' may propagate all the way to the root of T' and break the SCC corresponding to T'. We therefore use a recursive procedure, Split-Leaf(J,u,v,ϕ,{ϕ_1,…,ϕ_k}), to perform the split. Here u and v are the endpoints of the edge that was deleted and caused the need for the joint SCC-decomposition to be updated. We include them in the function call since they are used by Fix-SCC-decomposition to initiate the search for vertices that are no longer strongly connected to ancestors of ϕ.

Procedure Delete-Edge(J,u,v)
Input: A balanced joint SCC-decomposition J = (J_1,J_2,S,Φ) for a graph G, and an edge (u, v) to be deleted from G.
Output: The edge (u, v) is removed from G, and the joint SCC-decomposition J is updated correspondingly.
1. If {u,v} ⊆ V(G_ϕ) for some ϕ ∈ Φ:
   (a) Let T_ϕ be the SCC-decomposition for G_ϕ.
   (b) {T_1,…,T_k} = Delete-Edge-from-SCC-decomposition(T_ϕ,u,v).
   (c) Let ϕ_1,…,ϕ_k be the roots of T_1,…,T_k.
   (d) If k > 1: call Split-Leaf(J_1,u,v,ϕ,{ϕ_1,…,ϕ_k}) and Split-Leaf(J_2,u,v,ϕ,{ϕ_1,…,ϕ_k}), and replace ϕ by ϕ_1,…,ϕ_k in Φ.
2. Otherwise: call Delete-Edge(J_1,u,v) and Delete-Edge(J_2,u,v).
3. Return J.

Procedure Split-Leaf(J,u,v,ϕ,{ϕ_1,…,ϕ_k})
Input: A balanced joint SCC-decomposition J = (J_1,J_2,S,Φ') for a graph G; two vertices u and v; a node ϕ in the extension of J; and a collection of nodes {ϕ_1,…,ϕ_k}, where u and v are the endpoints of an edge whose deletion causes ϕ_1,…,ϕ_k to break off from ϕ.
Output: The node ϕ is split by adding ϕ_1,…,ϕ_k to J, and J is updated to correctly reflect the deletion of (u, v).
1. Let ϕ' ∈ Φ' be the external node of J for which the SCC-decomposition T_ϕ' contains ϕ as a leaf.
2. Let i = argmax_{j ∈ {1,…,k}} |V(G_{ϕ_j})| be the index of the node ϕ_i whose associated graph G_{ϕ_i} contains the most vertices.
3. If ϕ has a parent par(ϕ): remove from G^cs_par(ϕ) all edges that are adjacent to vertices from G_{ϕ_1},…,G_{ϕ_{i-1}},G_{ϕ_{i+1}},…,G_{ϕ_k}.
4. {T'_1,…,T'_ℓ} = Fix-SCC-decomposition(T_ϕ',u,v,ϕ,{ϕ_1,…,ϕ_{i-1},ϕ_{i+1},…,ϕ_k}).
5. If ℓ > 1: let ϕ'_1,…,ϕ'_ℓ be the roots of T'_1,…,T'_ℓ; call Split-Leaf(J_1,u,v,ϕ',{ϕ'_1,…,ϕ'_ℓ}) and Split-Leaf(J_2,u,v,ϕ',{ϕ'_1,…,ϕ'_ℓ}); and replace ϕ' by ϕ'_1,…,ϕ'_ℓ in Φ'.

Before analyzing the running time of our algorithm, we first describe a small implementation detail that was left out in the pseudo-code. Recall from Lemma <ref> that a balanced joint SCC-decomposition for a graph G with n vertices consists of O(n) SCC-decompositions. For each SCC-decomposition T we maintain an array indexed by the vertices of the original graph G, such that T⟨v⟩ = True if v appears in T, and T⟨v⟩ = False otherwise. This allows us to check in constant time whether {u,v} ⊆ V(T_ϕ) on line <ref> of Delete-Edge(J,u,v). Storing these arrays takes up O(n^2) space, and they are updated when the SCC of the root of an SCC-decomposition breaks.

The total update time spent by Delete-Edge(J,u,v) in order to maintain a balanced joint SCC-decomposition under edge deletions is O(mn log n).

The time spent by Delete-Edge(J,u,v) consists of three parts: * Checking whether {u,v} ⊆ V(G_ϕ) for some ϕ ∈ Φ on line <ref>. * The work performed by Delete-Edge-from-SCC-decomposition(T_ϕ,u,v). * The work performed by Split-Leaf(J_i,u,v,ϕ,{ϕ_1,…,ϕ_k}), for i ∈ {1,2}. As described above, we can check in constant time whether {u,v} ⊆ V(G_ϕ) by checking whether T_ϕ⟨u⟩ = True and T_ϕ⟨v⟩ = True. Recall that Łącki <cit.> showed that the total initialization and update time of an SCC-decomposition is O(mγ), where γ is the depth of the decomposition. It follows from an argument identical to the proof of Lemma <ref> that Delete-Edge-from-SCC-decomposition(T_ϕ,u,v) spends O(mn log n) time in total. Alternatively, Lemma <ref> shows that the total number of nodes of the SCC-decompositions is O(n log n), which again implies that their combined depth is O(n log n). It remains to analyze the time spent on Split-Leaf(J_i,u,v,ϕ,{ϕ_1,…,ϕ_k}). Recall that Split-Leaf finds the SCC-decomposition T_ϕ' of J that contains ϕ as an external node and splits ϕ into ϕ_1,…,ϕ_k, after which it fixes T_ϕ' if necessary. If ϕ has no parent then we simply replace ϕ by ϕ_1,…,ϕ_k as independent SCCs without changing the edges adjacent to ϕ_1,…,ϕ_k. Otherwise we replace ϕ by ϕ_1,…,ϕ_k in G^cs_par(ϕ) and update the edges adjacent to ϕ_1,…,ϕ_k. Note that before doing the split, ϕ represents a graph that contains all the graphs G_{ϕ_1},…,G_{ϕ_k} as subgraphs.
In particular, all connections to the rest of G_par(ϕ) are already present in the graph. The split causes ϕ to break, and the relevant edges to be transferred to the new vertices. For one of the vertices we can however reuse the connections from ϕ. In particular, our algorithm reuses the connections for the vertex ϕ_i ∈ {ϕ_1,…,ϕ_k} whose associated graph G_{ϕ_i} contains the most vertices. The time it takes to split a vertex is therefore proportional to the number of edges adjacent to vertices of all the graphs G_{ϕ_1},…,G_{ϕ_{i-1}},G_{ϕ_{i+1}},…,G_{ϕ_k}. Note that all the new SCCs, excluding G_{ϕ_i}, contain at most half as many vertices as G_ϕ originally did. Consider one of the SCC-decompositions T that make up J. The external nodes of T are a collection of disjoint subsets of V(G), i.e., they contain at most n vertices from the original graph. Since a split moves vertices to new nodes of half the size, each vertex v can only be moved O(log n) times in T by Split-Leaf. Each move takes time proportional to the number of edges adjacent to v, so the total time spent splitting leaves of T is at most O(m log n). Since, by Lemma <ref>, there are only O(n) SCC-decompositions in J, it follows that the total time spent splitting leaves is O(mn log n). It remains to consider the time that Split-Leaf spends on fixing the SCC-decomposition in the call Fix-SCC-decomposition(T_ϕ',u,v,ϕ,{ϕ_1,…,ϕ_{i-1},ϕ_{i+1},…,ϕ_k}). Here the work can however be charged to the depth reduction of the vertices that are moved, as was the case in Delete-Edge-from-SCC-decomposition(T_ϕ,u,v). This completes the proof. (Note that charging the work to depth reduction was not possible when splitting a leaf, since the depth did not change for the vertices that were moved.)

We have skipped until now the details of how the matrix A for answering queries is updated. As in the initialization of the joint SCC-decomposition, this is done when updating the topmost SCC-decompositions that each only contain a single internal node. Let w be the internal node of such an SCC-decomposition T_w. When Fix-SCC-decomposition or Split-Leaf adds a new child ϕ to w in T_w, A[u,w] is updated for all the vertices u ∈ V(G_ϕ) to be equal to the index of this new child. A[u,w] is similarly updated when G_w breaks and a new SCC is formed, i.e., this SCC is independent of whether w is deleted from G or not. Note that updating A does not affect the running time of our algorithm, since this work is dominated by the work performed by Fix-SCC-decomposition and Split-Leaf, i.e., we anyway run through all vertices of new SCCs of G ∖ {w} to update edges to the rest of the condensed graph G^cs_w.

As described briefly in Section <ref>, Łącki's SCC-decomposition can be implemented such that it uses O(m+n) space <cit.>. Since a balanced joint SCC-decomposition consists of O(n) SCC-decompositions (Lemma <ref>), it follows that a naive implementation of our data structure uses O(mn) space. In the next section we show how to obtain an alternative bound of O(n^2 log n).

§.§ Space-Efficient Representation of SCC-Decompositions for Dense Graphs

Recall that when we delete an edge (u, v), we search for vertices with no outgoing or incoming edges in the condensed graph G^cs_ϕ, where ϕ = lca(u,v) is the lowest common ancestor of u and v. The key to improving the space bound is to observe that we do not need to explicitly store the edges to retrieve this information; it is enough to store the in- and out-degrees of the vertices of G^cs_ϕ.
The same is true when nodes are moved up in the tree of an SCC-decomposition by Fix-SCC-decomposition. For a vertex v ∈ V(G^cs_ϕ), we therefore let in-deg(v) and out-deg(v) be the in- and out-degree of v, respectively. When we search for vertices that should be removed from a condensed graph, we repeatedly visit neighbors of vertices whose in- or out-degree has been reduced to zero. We here make use of the edges in G^cs_ϕ, but Łącki's analysis actually allows us to spend time linear in the number of edges adjacent to the corresponding vertices in G. We use this observation to remove the edges from G^cs_ϕ, and instead operate with the original edges in G. For every node ϕ of every SCC-decomposition in the joint SCC-decomposition, we therefore maintain a list vertices(ϕ) of all the vertices in the subgraph G_ϕ. We also maintain a double-array contains⟨ϕ, v⟩ such that contains⟨ϕ, v⟩ is the vertex of G^cs_ϕ that contains v, or contains⟨ϕ, v⟩ = Null if v does not appear in G_ϕ. Since the total number of nodes of all SCC-decompositions is O(n log n) (Lemma <ref>), the space needed to store vertices and contains is O(n^2 log n). The information in vertices and contains is updated when nodes are moved in the SCC-decompositions. Note that we only update a single condensed graph at a time, and since the analysis allows us to spend time linear in the number of edges adjacent to vertices that are moved, it is straightforward to update vertices and contains within the same time bound. Suppose we wish to delete an edge (u, v) with ϕ = lca(u,v), and with u' and v' being the vertices of G^cs_ϕ that contain u and v, respectively. We then reduce out-deg(u') and in-deg(v') by 1 each, and check whether either of them is reduced to zero. If, e.g., in-deg(v') is reduced to zero, then we scan through all vertices in vertices(v') and collect all edges of G that leave these vertices. For each such edge (v, w), we retrieve w' = contains⟨ϕ, w⟩ and reduce in-deg(w') by 1. The process is repeated if in-deg(w') becomes zero. We can thus use the information in vertices and contains instead of the edges from G^cs_ϕ. The information about edges in the condensed graphs dominates the space used to represent a joint SCC-decomposition, and by replacing the edges with the lists vertices and the double-array contains, we reduce the space to O(n^2 log n). Recall that Łącki <cit.> finds the lowest common ancestor of the endpoints of an edge (u, v) by storing a pointer from the edge to lca(u,v). This takes O(m) space per SCC-decomposition, so we need another way to find lca(u,v) that uses less space. Since a joint SCC-decomposition only contains O(n log n) nodes in total (Lemma <ref>), we actually have time to visit all the nodes when looking for lowest common ancestors after being asked to delete an edge (u, v). We can therefore locate the lowest nodes that contain u and v, and move from there toward the root until finding their lowest common ancestor. The process can be simplified by storing for each vertex v and each SCC-decomposition T a pointer to the lowest node ϕ of T whose subgraph G_ϕ contains v. Since there are O(n) SCC-decompositions this takes O(n^2) space.
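A sketch of this degree-based deletion step follows; phi stands for a node of one SCC-decomposition, with attributes indeg, outdeg, vertices_of (the lists vertices(·)), and contains (the double-array contains⟨ϕ,·⟩), while G_out and G_in are the adjacency lists of the original graph. All attribute names are our own shorthand, not the paper's.

```python
def delete_edge_at_node(phi, u, v, G_out, G_in):
    """Process the deletion of (u, v) in the condensed graph at phi using
    only degree counters and the original edges of G (no condensed edges
    are stored).  Returns the condensed vertices that fall out of phi's
    SCC; their subtrees are then moved up by Fix-SCC-decomposition."""
    u_c, v_c = phi.contains[u], phi.contains[v]
    phi.outdeg[u_c] -= 1
    phi.indeg[v_c] -= 1
    queue = [x for x in (u_c, v_c)
             if phi.indeg[x] == 0 or phi.outdeg[x] == 0]
    removed = set()
    while queue:
        x = queue.pop()
        if x in removed:
            continue
        removed.add(x)
        # Rescan the original edges of the vertices inside x; this work is
        # charged to Lacki's analysis exactly as described in the text.
        for w in phi.vertices_of[x]:
            for y in G_out[w]:          # successors of x lose in-degree
                y_c = phi.contains.get(y)
                if y_c is not None and y_c != x and y_c not in removed:
                    phi.indeg[y_c] -= 1
                    if phi.indeg[y_c] == 0:
                        queue.append(y_c)
            for y in G_in[w]:           # predecessors of x lose out-degree
                y_c = phi.contains.get(y)
                if y_c is not None and y_c != x and y_c not in removed:
                    phi.outdeg[y_c] -= 1
                    if phi.outdeg[y_c] == 0:
                        queue.append(y_c)
    return removed
```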
Combining the above observations with Theorem <ref> gives us the following theorem.

A balanced joint SCC-decomposition can be maintained under edge deletions using O(mn log n) total update time and O(n^2 log n) space.

§ STRONG CONNECTIVITY UNDER VERTEX FAILURES

In this section we use the decremental joint SCC-decomposition in order to devise decremental algorithms for various connectivity notions defined with respect to vertex failures.

§.§ Decrementally reporting SCCs in G ∖ {v}

In Section <ref> we showed that together with a joint SCC-decomposition, in a total of O(mn log n) time and O(n^2 log n) space, we can also maintain an n × n matrix A such that A[u,v] is the index of the SCC of G ∖ {v} that contains u. This gives us an easy interface to test whether two vertices u and w are strongly connected in G ∖ {v}, by simply testing whether A[u,v] = A[w,v]. We additionally maintain a data structure that can report each SCC in time proportional to its size, once its ID is specified. That is, for each vertex v, we maintain an array L_v of doubly linked lists, such that L_v[i] contains the vertices in the SCC with ID i in G ∖ {v}. These lists can be easily updated as the matrix A is updated, i.e., whenever an entry A[i,j] changes from α to β, we remove i from L_j[α] and add it to L_j[β]. It is straightforward to additionally monitor the sizes of those lists. Using this data structure, we can report the vertices of the SCC with index i in G ∖ {v} in time proportional to its size. Notice that whenever an SCC C in G ∖ {v} breaks into several SCCs C_1, …, C_k, we can report all the SCCs except the SCC containing the most vertices in time proportional to their size. Whenever C breaks, the entries A[u,v] for some vertices u ∈ C change. The IDs of the resulting SCCs are the initial value of A[u,v], for any u ∈ C, and the new indexes that are assigned to the resulting SCCs. Therefore, we can collect the resulting SCCs and report all of their vertices, except the vertices contained in the resulting SCC with the most vertices. We can further report all resulting SCCs that do not contain a specific vertex w in time proportional to their size, by reporting all the resulting SCCs except the one with index A[w,v].

§.§ Maintaining decrementally the dominator tree

Let G be a directed graph and let s be the starting vertex from which we wish to maintain the dominator tree. We first produce a flow graph G_s from G by adding an edge from each vertex v ∈ V ∖ {s} to s. The addition of those edges has the following property. If a vertex v is not strongly connected to s in G_s, then there is no path from s to v in G. Conversely, if a vertex v is not strongly connected to s in G_s ∖ {x}, while s and v are strongly connected in G_s, then all paths from s to v in G contain x. That is, x is a dominator of v in G. Dominance queries of the form “does x dominate v?” can therefore be answered by simply testing whether s and v are strongly connected in G_s ∖ {x}. That is, we test if A[v,x] = A[s,x], and if the answer is negative then x dominates v. In this section we show how to additionally maintain an explicit O(n)-size representation of all the dominance relations, that is, the dominator tree D of G_s. We assume that all vertices that become unreachable from s are ignored for the rest of the algorithm. Moreover, deletions of edges whose tail is unreachable from s are ignored.
We assume that we have access to a data structure that reports all vertices that become unreachable from s in G_s. This can be achieved by running independently a decremental algorithm for single-source reachability, i.e., in O(mn) total update time and linear space <cit.>. Let (x,y) be the edge to be deleted. We denote by D' the resulting dominator tree after the deletion of (x,y), and we describe how to compute D'. We denote by depth(v) the depth of v in the dominator tree D, that is, the number of edges on the path from s to v in D. Analogously, we define depth'(v) to be the depth of v in the dominator tree D' (the dominator tree after the edge deletion). As shown in Section <ref>, the vertices that are no longer strongly connected to s in G_s ∖ {v}, for any v, can be reported in time proportional to their number. These are exactly the vertices that are dominated by v after the deletion of (x,y) but were not dominated by v before it, as we explained above. Let N(v) = D'(v) ∖ D(v) be the set of vertices w that are dominated by vertex v in G'_s but were not dominated by v in G_s. In this case, v becomes an ancestor of w in D'. We can compute the depth in D' for each vertex w that acquired new dominators after the deletion by setting depth'(w) ← depth(w) + |{ v : w ∈ N(v) }|. Next, we need to locate the parent d'(w) in D' of each vertex w. Notice that all vertices { v : w ∈ N(v) } become ancestors of w in D'. The parent d'(w) in D' of w is the vertex v with maximum depth'(v) such that w ∈ N(v). To perform the above computations efficiently, we process each N(v) set as computed by the corresponding SCC-decomposition data structure. First, we increase the depth of each vertex w ∈ N(v) by one. After processing all the sets N(v) in this way we will have found the new depths of the vertices in D'. Finally, we perform a second pass over the sets N(v) in order to locate the unique parent d'(w) for each vertex w. To that end, for each w we maintain a temporary variable d̂(w), initialized as d̂(w) ← d(w). When we process a set N(v) we update d̂(w) ← v for all vertices w ∈ N(v) such that depth'(v) > depth'(d̂(w)). At the end of the second pass we will have d̂(w) = d'(w) for all w, as desired. Now we bound the total time required to maintain the dominator tree. Recall that our data structure can report each N(v) set in O(|N(v)|) time. Hence, the running time of the above procedure, during the whole sequence of deletions, is bounded by the total size of all the sets N(v). Note that any vertex can appear in a specific N(v) set at most once during the deletion sequence. Hence, the total size of all the sets N(v) is O(n^2).

The dominator tree of a directed graph G with start vertex s can be maintained decrementally in O(mn log n) total update time and O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices.

§.§ Answering decrementally strong connectivity queries under vertex failures

In this section we show how to answer various strong connectivity queries under single-vertex failures in optimal time while we maintain a directed graph decrementally. More specifically, under any sequence of edge deletions, we consider answering the following types of queries: (i) Report the total number of SCCs in G ∖ {v}, for a query vertex v ∈ V. (ii) Report the size of the largest and of the smallest SCC in G ∖ {v}, for a query vertex v ∈ V. (iii) Report all the SCCs of G ∖ {v}, for a query vertex v ∈ V.
(iv) Test if two query vertices u and w are strongly connected in G ∖ {v}, for a query vertex v. (v) For query vertices u and w that are strongly connected in G, report all vertices v such that u and w are no longer strongly connected in G ∖ {v}. For static strongly connected graphs, it was shown that after linear-time preprocessing one can answer all of the above queries in optimal time <cit.>. Here, we show how to preserve asymptotically optimal query time on a graph subject to edge deletions. Before proving the main result of this section we need the following two technical lemmas. By the definition of dominators, all paths from s to vertices in D(v), for any v, contain v. Our first lemma shows that this property holds not only for the paths from s but also for the paths from all vertices that are not in D(v).

Let G be a digraph and let D be the dominator tree of the corresponding flow graph G_s, for an arbitrary start vertex s. Suppose v is a vertex such that D(v) ∖ {v} ≠ ∅. Then there is a path from a vertex w ∉ D(v) to v in G that contains no vertex in D(v) ∖ {v}. Moreover, all simple paths in G from w to any vertex in D(v) contain v.

Let u be a strong articulation point that is a separating vertex for vertices x and y. Then u must appear in at least one of the paths D[s,x], D[s,y], D^R[s,x], and D^R[s,y].

Now we are ready to prove the main result of this section.

We can maintain a digraph G decrementally in O(mn log n) total update time and O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices, so that after each edge deletion we can answer in asymptotically optimal time the following types of queries: (i) Report in O(1) time the total number of SCCs in G ∖ {v}, for a query vertex v ∈ V. (ii) Report in O(1) time the size of the largest and of the smallest SCC in G ∖ {v}, for a query vertex v ∈ V. (iii) Report in O(n) worst-case time all the SCCs of G ∖ {v}, for a query vertex v ∈ V. (iv) Test in O(1) time if two query vertices u and w are strongly connected in G ∖ {v}, for a query vertex v. (v) For query vertices u and w that are strongly connected in G, report all vertices v such that u and w are not strongly connected in G ∖ {v}, in asymptotically optimal worst-case time, i.e., in time O(k), where k is the number of separating vertices. (For k=0, the time is O(1).)

The queries (i), (iii), and (iv) can be answered by maintaining the matrix A, as shown in Section <ref>. As we already mentioned in Section <ref>, we can maintain for each G ∖ {v} a list of SCCs. We also showed how to maintain the size of each SCC in G ∖ {v}. In order to have fast access to the minimum and the maximum size of the SCCs, we store the sizes of the SCCs in a min-heap and a max-heap. Those heaps can be updated in total time O(n log n) for each subgraph G ∖ {v}, as follows. Whenever an SCC breaks, we add the IDs of the newly created SCCs together with their sizes into the heaps, and we also update the size of the SCC that kept the same ID. Since each time an SCC breaks at least two vertices are no longer in the same SCC, this can happen at most n-1 times. Moreover, there can be at most n SCCs in a graph. Therefore, at most O(n) insertions and updates are executed on each heap. This implies that the total time spent on maintaining each heap for each G ∖ {v} is O(n log n), which sums up to O(n^2 log n) over all v. Given one min-heap and one max-heap, we can answer type (ii) queries in constant time. It remains to show how to answer queries of type (v); before turning to them, the sketch below illustrates how the matrix A, the SCC lists, and the size heaps support queries (i)–(iv).
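A minimal sketch of this query layer, assuming the matrix A, the lists L_v, and the two size heaps are kept up to date (with stale heap entries purged) by the decremental data structure; all names are illustrative.

```python
class VertexFailureQueries:
    """Query layer for queries (i)-(iv) over the maintained structures."""
    def __init__(self, A, L, heaps):
        self.A = A          # A[u][v]: id of the SCC of u in G minus {v}
        self.L = L          # L[v][i]: vertices of the SCC with ID i in G minus {v}
        self.heaps = heaps  # heaps[v] = (min_heap, max_heap); the max-heap
                            # stores negated sizes, stale entries purged

    def num_sccs(self, v):                     # query (i), O(1)
        return len(self.L[v])

    def min_max_scc_size(self, v):             # query (ii), O(1)
        min_heap, max_heap = self.heaps[v]
        return min_heap[0], -max_heap[0]

    def report_sccs(self, v):                  # query (iii), O(n)
        return [list(scc) for scc in self.L[v].values()]

    def connected_avoiding(self, u, w, v):     # query (iv), O(1)
        return self.A[u][v] == self.A[w][v]
```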
For queries of type (v) we will assume that we maintain the dominator tree D of the graph and the dominator tree D^R of the reverse graph. By Lemma <ref>, all separating vertices for u and w are either ancestors of u or w in D or ancestors of u or w in D^R. We only show how to report the separating vertices for u and w that are ancestors of u or w in D, since the procedure for D^R is completely analogous. By Lemma <ref>, notice that if there exists a vertex z such that u ∈ D(z) and w ∉ D(z), or such that w ∈ D(z) and u ∉ D(z), then z is a separating vertex for u and w. That means that all vertices on the path from nca_D(u,w) (the nearest common ancestor of u and w in D) to d(u) and from nca_D(u,w) to d(w) are separating vertices for u and w. We can find and report all those vertices in asymptotically optimal time by simply following the parents of u and w in D until they meet at nca_D(u,w) (here we assume that also the depth of each vertex in D is available; otherwise we can compute the depth of all vertices in O(n) time after each edge deletion). Next, we show that all vertices z ∉ D(nca_D(u,w)) that separate u and w appear on a path of D. More specifically, there is a vertex z ∈ (V ∖ D(nca_D(u,w))) ∪ {∅} such that all vertices on the path from z to nca_D(u,w) in D are separating vertices for u and w. Let z be the first vertex on the path from s to nca_D(u,w) in D that separates u and w. If there is no such vertex z, then none of the vertices on the path from s to nca_D(u,w) is a separating vertex for u and w, and we are done. We can verify the existence of such a z by testing if nca_D(u,w) is a separating vertex for u and w (this can be done in constant time by executing one type (iv) query). Assume now that z ≠ ∅. By Lemma <ref>, either all paths from u to w contain z or all paths from w to u contain z. Assume, without loss of generality, that all paths from u to w contain z. Since z ∉ D(nca_D(u,w)), all paths from z to w contain all vertices on the path from z to w in D (including all vertices on the path from z to nca_D(u,w)). This allows us to efficiently identify and report all separating vertices z ∉ D(nca_D(u,w)) for u and w as follows. If z ≠ ∅, we start testing the vertices on the path from nca_D(u,w) to s in D, reporting all vertices that are separating vertices for u and w, and once we find a vertex that is not a separating vertex for u and w we stop (as we proved, there are no further separating vertices on the path from s to nca_D(u,w)). Notice that we only spend time proportional to the number of vertices that we report, plus the computation of nca_D(u,w) and a single type (iv) query that does not report a vertex. We also spend the same time on the dominator tree D^R of the reverse graph. We only need to be careful not to report the same vertex twice, which can be trivially implemented within the claimed time bounds.

§.§ Maintaining decrementally the vertex-resilient components

In this section we show how we can maintain the vertex-resilient components of a directed graph. By definition, two vertices u and w are vertex-resilient if and only if there is no vertex v such that u and w are not strongly connected in G ∖ {v}. In our algorithm we will be testing this property after every edge deletion, and whenever we identify a vertex-resilient component B containing vertices from different SCCs in G ∖ {v}, for some v, we refine the vertex-resilient component according to these SCCs. Assume that a vertex-resilient component B breaks after an edge deletion. That is, there is a vertex v such that vertices of B lie in different SCCs in G ∖ {v}.
Let C_1, C_2, …, C_k be the SCCs in G ∖ {v}; then we replace B by (B ∩ C_1) ∪ {v}, (B ∩ C_2) ∪ {v}, …, (B ∩ C_k) ∪ {v}. These refinements can be easily carried out in O(n) time, and therefore we spend total time O(n^2) for this part. Now we show how to efficiently detect whether two vertex-resilient vertices appear in different SCCs in G ∖ {v}, for some v. Whenever an SCC C in G ∖ {v}, for some v, breaks into many SCCs C_1,…,C_k, the vertices of all SCCs except one can be listed in time proportional to the number of their edges, as shown in Section <ref>. Without loss of generality, let C_1, …, C_{k-1} be those SCCs. For each SCC C_i, 1 ≤ i ≤ k-1, we examine whether the vertex-resilient components containing subsets of vertices of C_i are entirely contained in C_i. This can be easily done in time proportional to |C_i|. Notice that we do not examine the vertices in C_k. We claim that if we do not find a vertex-resilient pair that is disconnected in G ∖ {v} by the searches in C_i, 1 ≤ i ≤ k-1, then there is no such pair. Indeed, assume that there is a pair of vertex-resilient vertices x, y such that x ∈ C_k and y ∉ C_k. By the fact that x and y were vertex-resilient before the edge deletion it follows that y ∈ C, and therefore y ∈ C_i for some 1 ≤ i ≤ k-1, in which case we would find this pair by searching in C_i. If we detect a vertex-resilient component whose vertices lie in different SCCs in G ∖ {v}, for some v, then we perform the refinement phase in O(n) time. Notice that all the tests that we described above (excluding the time for the refinement operations) can be executed in time proportional to the number of vertices of a broken SCC in G ∖ {v} that are not contained in the largest resulting SCC. Each vertex can appear at most log n times in an SCC of G ∖ {v} that is not the largest resulting SCC when an SCC breaks. That means that we spend O(n log n) time on the aforementioned tests for each graph G ∖ {v}, and thus O(n^2 log n) in total. Thus, we have the following lemma.

The vertex-resilient components of a directed graph G can be maintained decrementally in O(mn log n) total update time and O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices.

§.§ Maintaining decrementally the maximal 2-vertex-connected subgraphs

In this section we show how to maintain the maximal 2-vertex-connected subgraphs of a directed graph decrementally. Recall from the introduction that a graph is 2-vertex-connected if it has at least three vertices and for each v ∈ V it holds that G ∖ {v} remains strongly connected. We will allow for degenerate maximal 2-vertex-connected subgraphs consisting of two mutually adjacent vertices. Maximal 2-vertex-connected subgraphs do not induce a partition of the vertices of the graph. More specifically, two maximal 2-vertex-connected subgraphs might share at most one common vertex. First, we describe a simple-minded algorithm that computes the maximal 2-vertex-connected subgraphs of a directed graph, and later we show how to dynamize this algorithm. The simple-minded algorithm proceeds as follows. Compute for each vertex v ∈ V the SCCs of G ∖ {v} and remove all edges between the different SCCs that are created. The algorithm is repeated until no more edges are removed from the graph after examining all vertices. Let G' be the resulting graph obtained at the end of this process. The vertex-resilient components of G' of size more than 2 are the non-degenerate maximal 2-vertex-connected subgraphs of G, as the following lemma shows.
The vertex-resilient components of G' of size more than 2 are the non-degenerate maximal 2-vertex-connected subgraphs of G, as the following lemma shows.

Graph G' has the same 2-vertex-connected subgraphs as G (including the degenerate 2-vertex-connected subgraphs), and there is no edge between vertices that are not in the same 2-vertex-connected subgraph. Moreover, the vertex-resilient components of G' are equal to its maximal 2-vertex-connected subgraphs.

We begin with the first part of the lemma; that is, we show that G' has the same 2-vertex-connected subgraphs as G. First, notice that G' cannot contain a 2-vertex-connected subgraph that is not a 2-vertex-connected subgraph of G, since G' is a subgraph of G. Now we show that each 2-vertex-connected subgraph of G is a 2-vertex-connected subgraph of G'. To do so, we show that no edge (x,y) is removed such that x and y are in the same 2-vertex-connected subgraph C. Assume, by contradiction, that this is not true, and let (x,y) be the first edge removed from C. This implies that, at some point during the procedure that removes edges, there is a current graph G and a vertex w such that x and y are in different SCCs in G∖{w}. This cannot happen if w=x or w=y, since in those cases x and y do not both appear in G∖{x} or G∖{y}, respectively. If w∉ C, then this cannot happen either, as C is strongly connected and we assumed that (x,y) is the first edge deleted from C. If, on the other hand, w∈ C, then x and y must be in the same SCC in G∖{w} by the definition of the 2-vertex-connected subgraph C, clearly a contradiction.

Now we prove the second part of the lemma, namely that there is no edge between vertices that are not in the same 2-vertex-connected subgraph. In particular, we show that for any edge (x,y) of G', x and y are in the same 2-vertex-connected subgraph. Since there is no vertex w in the graph disconnecting x and y, x and y are vertex-resilient. Let C be the set of all the vertices that are vertex-resilient to both x and y in G'. Now we show that for each vertex v∈ C, all vertices of C∖{v} remain strongly connected in G'[C∖{v}]. Assume not: then there is a pair of vertices u,w ∈ C that are strongly connected in G'∖{v} (which follows from the fact that they are vertex-resilient) but not in G'[C∖{v}]. Then either all paths from u to w or all paths from w to u in G'∖{v} contain vertices of V∖ C. Without loss of generality, let P be any such path from u to w, and let z∈ P ∩ (V∖ C). Then, by the definition of vertex-resilient components, there exists a vertex q such that u (as well as w) and z are not strongly connected in G'∖{q}. As a result, the procedure that removes edges and generates G' eliminates all paths between z and u and all paths between w and z that do not contain q (since u and w are not in the same SCC with z in G'∖{q}). This implies that P is not a simple path, as it contains u, then q, then z, then again q, and finally w. Therefore, there is a path from u to q and a path from q to w avoiding z (and also avoiding v, since P is a path in G'∖{v}). If q∈ V∖ C, then we repeat the same argument with z=q. If we apply the same argument for every path from u to w and for each vertex z ∉ C, it follows that there exists a path from u to w in G'∖{v} avoiding all vertices in V∖ C. This contradicts the assumption that all paths from u to w in G'∖{v} contain vertices in V∖ C. Therefore, for each vertex v∈ C, all vertices of C∖{v} remain strongly connected in G'[C∖{v}]. Then C satisfies the definition of a 2-vertex-connected subgraph, which contradicts the fact that x and y are not in the same 2-vertex-connected subgraph.
Thus, all edges of G' connect vertices that belong to the same vertex-resilient component of G'. Notice that we showed the above for an arbitrary edge (x,y); hence we also proved that every vertex-resilient component is fully contained in a 2-vertex-connected subgraph. By definition, each (maximal) 2-vertex-connected subgraph of G' is fully contained in a vertex-resilient component of G'. Thus, the vertex-resilient components of G' are equal to the maximal 2-vertex-connected subgraphs of G'.

We now present our decremental algorithm for maintaining the maximal 2-vertex-connected subgraphs of G. Our algorithm is a simple extension of the simple-minded static algorithm for computing the maximal 2-vertex-connected subgraphs. More specifically, we maintain G' decrementally, and we additionally run the decremental vertex-resilient components algorithm on G' simultaneously. In order to do so, we might delete from the maintained graph more edges than dictated by the sequence of edge deletions on G.

First, we initialize a joint SCC-decomposition, as described in Section <ref>. We maintain a subgraph G' of the current version of G; by the current version of G we mean the initial graph minus the edges that have been deleted from it. Let v be the root of an SCC-decomposition. Initially, we collect all edges among different SCCs in G'∖{v} and remove them from G' by executing them as additional edge deletions. We assume that these edges are marked and inserted into a global set data structure L, and after the necessary updates in the joint SCC-decomposition they are deleted from G' one at a time, making sure every edge appears at most once in L. Whenever an SCC in G'∖{v} breaks into several SCCs C_1,…, C_k after an edge deletion, we collect all the unmarked edges between different SCCs and add them to the global set data structure L, so that they will be deleted from the graph. If some vertices in G' are no longer strongly connected to v, then we simply omit them from the SCC-decomposition rooted at v. (Notice that v can no longer affect the strong connectivity between vertices that are not strongly connected to v.) We do the same for each v ∈ V. The above procedure maintains G' under any sequence of edge deletions, since for each vertex v we only remove edges between different SCCs in G'∖{v}, and when the auxiliary edge deletions end there are no edges between different SCCs in G'∖{v}.

In order to maintain the maximal 2-vertex-connected subgraphs decrementally, we additionally run the algorithm from Section <ref> for maintaining decrementally the vertex-resilient components of size more than 2 on the side. To implement the size restriction, we simply ignore all vertex-resilient components of size at most 2. The correctness of the algorithm follows from Lemma <ref>.

Now we bound the running time of the algorithm. The time for handling all edge deletions in the maintained graph G' is bounded by O(mn log n) by Lemma <ref>. Moreover, the total time spent to maintain the vertex-resilient components of G' is O(mn log n) by Lemma <ref>. We only need to bound the time spent collecting all edges among the different resulting SCCs after some SCC in G'∖{v} breaks, for any v. We do that as follows. Whenever an SCC C breaks into several SCCs C_1,…, C_k in G'∖{v}, for some v, we only need to identify the edges among C_1, …, C_k, since the algorithm has previously removed all edges from C to other SCCs in G'∖{v}. Without loss of generality, let C_k be the largest SCC among C_1,…, C_k.
For each C_i, 1≤ i ≤ k-1, we iterate over all the unmarked edges incident to its vertices and test whether their endpoints are in different SCCs in G'∖{v}; if so, we mark them and insert them into the global set data structure L in order to delete them from the graph. Notice that whenever the algorithm iterates over the edges of a vertex, that vertex is contained in an SCC in G'∖{v} that is at most half the size of its previous SCC. That means each vertex will be listed at most log n times. Therefore, for each v, we consider the edges of each vertex at most log n times, and thus at most n log n times in total over all v. Hence we spend at most O(mn log n) time to collect all edges among the different resulting SCCs after some SCC in G'∖{v} breaks, for any v. Thus, we have the following lemma.

The maximal 2-vertex-connected subgraphs of a directed graph G can be maintained decrementally in O(mn log n) total update time and O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices.

§ STRONG CONNECTIVITY UNDER EDGE FAILURES
In this section we consider several applications of our joint SCC-decomposition that are related to strong connectivity under single edge failures and to 2-edge-connectivity. In order to devise efficient algorithms for the problems that we consider, we first need to be able to maintain all the strong bridges of a graph (the edges whose deletion affects the strong connectivity of the graph). To achieve the latter, we in turn first show how to maintain both an in- and an out-dominator tree rooted at some vertex s, where in our case we choose s uniformly at random for reasons of efficiency.

§.§ Maintaining decrementally an in-out dominator tree in each SCC
Let D be a dominator tree of a directed graph G and D^ be a dominator tree of the reverse graph G^, both rooted at the same starting vertex s. We call such a pair of dominator trees an in-out dominator tree rooted at s. In this section we show how to maintain an in-out dominator tree in each SCC of G decrementally. More specifically, we present a randomized algorithm that runs in O(mn log n) expected total update time and maintains an in-out dominator tree in each SCC, rooted at a vertex chosen uniformly at random.

Recall that the algorithm from Section <ref> maintains decrementally a dominator tree rooted at an arbitrary root in O(mn log n) total time and O(n^2 log n) space. Given a graph G and a starting vertex s, we decrementally maintain an in-out dominator tree from s as follows. We maintain a dominator tree D of G and a dominator tree D^ of G^, both rooted at s, using the algorithm from Section <ref>. That is, we apply each edge deletion to both instances of the decremental dominators algorithm. The two algorithms might run on different sets of vertices throughout their execution, since the set of vertices reachable from s might differ from the set of vertices that reach s. This requires O(mn log n) time and O(n^2 log n) space in total, by the fact that we simply run two instances of the decremental dominators algorithm simultaneously. Below, we refer to the maintenance of an in-out dominator tree of a graph without getting into the details of the two instances that we maintain.

Our algorithm works as follows. For each SCC C of G, we randomly choose a starting vertex s∈ C and maintain an in-out dominator tree rooted at s.
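A minimal sketch of this pairing, assuming a decremental dominator-tree structure DecrementalDominators with a delete_edge operation (the class and method names are hypothetical placeholders for the algorithm of Section <ref>):

    import random

    class InOutDominatorTree:
        """Maintain a dominator tree of G and of its reverse G^R, both rooted
        at the same randomly chosen vertex of an SCC (a sketch)."""

        def __init__(self, vertices, edges):
            self.root = random.choice(sorted(vertices))   # uniform random root
            rev = [(y, x) for (x, y) in edges]
            # Two independent instances of the (assumed) decremental
            # dominators algorithm, one per direction.
            self.D = DecrementalDominators(vertices, edges, self.root)
            self.D_rev = DecrementalDominators(vertices, rev, self.root)

        def delete_edge(self, x, y):
            # Every deletion is applied to both instances; each instance may
            # operate on a different vertex set (reachable from s vs. reaching s).
            self.D.delete_edge(x, y)
            self.D_rev.delete_edge(y, x)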
Whenever an SCC C, with starting vertex s, breaks into several SCCs C_1,…, C_l, we proceed as follows. Let, without loss of generality, C_l be the new resulting SCC that contains s. We update the in-out dominator tree rooted at s by removing from G[C] all edges (u,v)∈ C_i × C_j, i≠ j. The resulting in-out dominator tree is valid for G[C_l]. Furthermore, for each SCC C_i, 1≤ i ≤ l-1, we randomly choose a starting vertex s_i and maintain decrementally an in-out dominator tree of G[C_i] rooted at s_i.

The correctness of the aforementioned algorithm follows from the correctness of the decremental dominators algorithm from Section <ref>. We analyze the running time of the algorithm in the following lemma. We use the type of analysis that was used in <cit.>, where the authors presented an algorithm with O(mn) total expected running time for the problem of decrementally maintaining the SCCs of a directed graph.

Let G be a directed graph. An in-out dominator tree in each SCC of G, rooted at a vertex chosen uniformly at random, can be maintained in O(mn log n) total expected time against an oblivious adversary, using O(n^2 log n) space.

By Lemma <ref>, the algorithm for maintaining the in-out dominator tree of a directed graph G with respect to an arbitrary root runs in O(mn log n) total time and O(n^2 log n) space. In order to simplify our analysis, we charge all of its running time at the beginning of the algorithm. The algorithm can be easily extended to maintain the in-out dominator tree of the SCC C containing s, as follows. We initiate the algorithm to run on G[C]. Whenever C breaks after an edge deletion, we remove from the graph all edges that have at least one endpoint not contained in the new SCC of s. Notice that these additional edge deletions do not affect the total running time of the algorithm, as we already charged the algorithm with any possible sequence of edge deletions. The total space requirement is O(n^2 log n), since the total number of vertices in all SCCs remains n, independently of the number of SCCs. In order to simplify the analysis, we assume that the running time of the algorithm for maintaining an in-out dominator tree is exactly mn log n.

Now let f(m,n) be the expected running time of the algorithm for maintaining the in-out dominator tree in a strongly connected graph (if the graph is not strongly connected, we refer to any SCC of the graph), rooted at a vertex s chosen uniformly at random. We have

f(m,n) = mn log n + ∑_i=1^l f(m_i,n_i) - ∑_i=1^l (n_i/n) m_i n_i log n_i

for some l≥ 2, where l is the number of SCCs when the graph is no longer strongly connected, and m_1, …, m_l and n_1, …, n_l refer to the numbers of edges and vertices of those SCCs, respectively. Obviously, n_i ≥ 1 and m_i ≥ 0 for 1≤ i ≤ l; moreover, ∑_i=1^l m_i < m (since at least one edge must have had endpoints in two different SCCs of the initial graph) and ∑_i=1^l n_i = n. The term mn log n represents the total time spent by the algorithm for maintaining the in-out dominator tree rooted at s in its SCC. The term ∑_i=1^l f(m_i,n_i) represents the total time spent by the decremental dominators algorithm in each of the l SCCs that are created once the graph is no longer strongly connected. However, the first term already covers the work done in all future SCCs containing s, as we can keep using the same instance of the algorithm. Therefore, the time spent in the new SCC of s after the graph breaks into several SCCs is already covered, and we have to subtract it, since we accounted for it in the second term.
Notice that s is in the i-th SCC with probability n_i/n, as it was chosen uniformly at random; therefore, with probability n_i/n the algorithm does not spend m_i n_i log n_i time to maintain the in-out dominator tree in C_i. We point out that the fact that the graph breaks into the specific l SCCs (or that it breaks in the first place) does not depend on any choice made by the algorithm, including the random choice of the starting vertex s. Thus, the aforementioned formula correctly captures the expected running time of the algorithm.

We now show that f(m,n) ≤ 2mn log n. Our proof is by induction. The base case is n=1, m=0, for which we have f(m,n)=0. Assume now that the claim holds for every (m',n') where m'<m and n'<n. Our goal is to show that ∑_i=1^l 2 m_i n_i log n_i - ∑_i=1^l (n_i/n) m_i n_i log n_i ≤ mn log n. We set x_i=m_i/m, y_i=n_i/n, z_i = log n_i / log n, for which it holds that 0≤ x_i < 1, 0 < y_i < 1, 0≤ z_i < 1. Moreover, ∑_i=1^l x_i < 1, and since z_i<1 for all 1≤ i ≤ l, it also holds that ∑_i=1^l x_i z_i < 1. We divide both sides of the inequality by mn log n, and we are left to show that

2∑_i=1^l x_i y_i z_i - ∑_i=1^l x_i y^2_i z_i ≤ ∑_i=1^l x_i z_i < 1.

Moving everything to the right-hand side, we get

∑_i=1^l x_i z_i - 2∑_i=1^l x_i y_i z_i + ∑_i=1^l x_i y^2_i z_i = ∑_i=1^l x_i z_i (1-y_i)^2 ≥ 0,

which obviously holds.

§.§ Maintaining decrementally the strong bridges of a digraph
Let G be a digraph. We show how to maintain decrementally the strong bridges in each SCC of G. Note that edges whose endpoints are not both in the same SCC cannot be strong bridges, since their removal cannot affect the strong connectivity of G. Our algorithm uses the decremental algorithm for maintaining an in-out dominator tree of a subgraph induced by an SCC, presented in Section <ref>. We only use this algorithm in order to have access to the dominator trees D and D^ of each SCC (rooted at a randomly chosen vertex), and to the set of affected vertices (the vertices that change parent in the dominator tree) in each of the dominator trees after every edge deletion. Note that the latter can be easily made available even without modifying the algorithm to report the affected vertices, as one can obtain this information by simply comparing the dominator tree before and after an edge deletion in O(n) time. The following lemma shows that all strong bridges of a strongly connected graph G appear as edges of either the dominator tree D of G or the dominator tree D^ of G^, both rooted at the same arbitrary start vertex s.

Let G be a strongly connected graph, and let D and D^ be the dominator trees of G and G^, respectively, both rooted at the same arbitrary start vertex s. An edge (u,v) is a strong bridge of G only if u=d(v) or d^(u)=v.

Given the dominator trees D and D^ of the strongly connected digraph G, rooted at an arbitrary start vertex s, we can compute the strong bridges of G in O(m+n) time by testing for each vertex the condition given by the following lemma.

Let G be a strongly connected graph, and let D and D^ be the dominator trees of G and G^, respectively, both rooted at the same arbitrary start vertex s. An edge (d(v),v) is a strong bridge of G if and only if for all w∈{q:(q,v) ∈ E, q ≠ d(v)} it holds that w∈ D(v).
An edge (d^(v),v) is a strong bridge of G if and only if for all w∈{q:(q,v) ∈ E^, q ≠ d^(v)} it holds that w∈ D^(v).

Our algorithm for maintaining the strong bridges of each SCC of a directed graph uses a simple modification of the condition of Lemma <ref>, which we are able to dynamize. Let (x,y) be the edge to be deleted from the graph. We denote by D' the dominator tree D after the deletion of (x,y). In general, we use the notation f' to refer to a relation f after the deletion of (x,y).

Let G be a strongly connected graph, or an SCC of a directed graph. Moreover, let D and D^ be the dominator trees of G and G^, respectively, rooted at an arbitrary start vertex s. We maintain for each vertex w a counter c(w) counting the number of incoming edges (z,w) to w in G such that z∉ D(w). Analogously, we maintain for each vertex w a counter c^(w) counting the number of incoming edges (z,w) to w in G^ such that z∉ D^(w). Lemma <ref> suggests that an edge (d(w),w) (resp., (w,d^(w))) is a strong bridge if and only if c(w)=1 (resp., c^(w)=1). Our goal is to update those counters as the graph undergoes edge deletions. We only describe the process of updating the counters c(w) for each vertex w; the analogous process is used to update the counters c^(w).

In order to simplify our algorithm, we recompute the counters c(w) from scratch for each vertex w whenever the SCC of s breaks. Assume that s was chosen as the root when s was part of an SCC with n' vertices and m' edges. The in-out dominator tree of the SCC of s, rooted at s, is maintained in O(m'n' log n') total update time. Notice that we can compute the counters c(w) for all w in O(m'+n') time, by simply traversing all the incoming edges of each vertex and applying Lemma <ref>. Moreover, the SCC of s can break at most O(n') times. Therefore, these recomputations require total time O(m'n'), which we can charge to the algorithm for maintaining the in-out dominator tree rooted at s. Thus, in what follows we can assume that after the deletion of an edge (x,y), both x and y are still strongly connected to s.

Now we show how to update the counters c(w) for each w after the deletion of an edge (x,y). First, we collect in S all the affected vertices in D (that is, the vertices that change parent in D) and their descendants in D. For each vertex z∈ S and for each edge (z,w) such that w∉ S and z∉ D(w), we set c(w)=c(w)-1. Notice that the vertices in V∖ S retain their parent-descendant relations in D'. Therefore, the counters c(w) correctly count the incoming edges (z,w) to w such that z∉ S and z∉ D'(w). We are left to add to c(w) the incoming edges (z,w) to w such that z∈ S and z∉ D'(w). We do that as follows. For each vertex z∈ S and for each edge (z,w) such that w∉ S and z∉ D'(w), we set c(w)=c(w)+1. This completes the update of the counter c(w) for each vertex w∉ S. For the vertices w∈ S, we simply iterate over the incoming edges of w and execute the test of Lemma <ref>. Now an edge (d(w),w) is a strong bridge if and only if c(w)=1. Notice that each time a vertex is affected in D or is a descendant of an affected vertex, its depth in the dominator tree increases. Therefore, a vertex can be included in S at most O(n) times. That means we iterate over the edges of a vertex at most O(n) times. Hence, we spend O(mn) time in total to update the counters c(w) for all vertices w. We analogously update the counters c^(w) for each vertex w.
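A sketch of this counter update for a single edge deletion; here S is the set of affected vertices and their descendants in D, and in_subtree/in_subtree_new stand for descendant tests in D and D', respectively (all helper names are ours):

    def update_counters(c, S, out_edges, in_edges, in_subtree, in_subtree_new):
        """Update c(w) = number of edges (z, w) with z outside D(w), after a
        deletion whose affected-plus-descendants set is S (a sketch)."""
        for z in S:
            for w in out_edges[z]:
                if w not in S:
                    if not in_subtree(z, w):       # z was outside D(w) before
                        c[w] -= 1
                    if not in_subtree_new(z, w):   # z is outside D'(w) now
                        c[w] += 1
        # For vertices inside S, recompute the counter from scratch.
        for w in S:
            c[w] = sum(1 for z in in_edges[w] if not in_subtree_new(z, w))
        # (d(w), w) is a strong bridge if and only if c[w] == 1.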
Thus, we have the following lemma.

The strong bridges of a directed graph G, with m edges and n vertices, can be maintained decrementally in O(mn log n) total expected time against an oblivious adversary, using O(n^2 log n) space.

§.§ Maintaining decrementally the SCCs in G∖ e for each e
We show how to maintain the SCCs in G∖ e, for each strong bridge e, under any sequence of edge deletions. Notice that if an edge e is not a strong bridge, then the SCCs of G∖ e are equal to the SCCs of G. The following lemma shows the relation between the SCCs of G∖ e, for a strong bridge e=(u,v), and the SCCs of G∖{u} and G∖{v}. This will be helpful since we have already shown (in Section <ref>) how to maintain the SCCs of G∖{v} for all vertices v.

Let G be a strongly connected graph and let e=(u,v) be a strong bridge of G. Two vertices are strongly connected in G∖ e if and only if they are strongly connected in either G∖{u} or G∖{v}. Moreover, let C_u, C_v be the SCCs of G∖ e containing u and v, respectively. All SCCs of G∖ e, except C_u and C_v, are SCCs of both G∖{u} and G∖{v}.

If two vertices w and z are not strongly connected in G∖ e, then all paths from w to z (or all paths from z to w) contain e, and therefore both its endpoints u and v. Therefore, w and z are strongly connected in neither G∖{u} nor G∖{v}. First, note that u and v are not strongly connected in G∖ e. Now assume that w and z are strongly connected in G∖ e, and let C be the SCC of G∖ e containing w and z. Clearly, C cannot contain both u and v, since (u,v) is a strong bridge. If C contains neither u nor v, then C is an SCC of both G∖{u} and G∖{v}, since all the other vertices of C and the edges among them remain in both G∖{u} and G∖{v}. (Notice that this also proves the second part of the claim.) If, on the other hand, C contains one of the two vertices, say v, then C is an SCC of G∖{u}, since all vertices of C and the edges among them remain in G∖{u}. We have proven both directions of the necessary and sufficient condition, and the lemma follows.

Notice that Lemma <ref> implies that the SCC of v in G∖ e is an SCC of G∖{u}, while the SCC of u in G∖ e is an SCC of G∖{v}. As suggested by Lemma <ref>, we can obtain the SCCs of G∖ e as follows. Let C be the SCC of G∖{u} that contains v. Then C is an SCC of G∖ e. All SCCs of G∖{v} that do not contain any vertex of C are SCCs of G∖ e (in particular, we need to test only one arbitrary vertex, since by Lemma <ref> the SCCs of G∖{v} and those of G∖{u} are either nested or disjoint). Recall that we denote by A[u,v] the index of the SCC of G∖{v} that contains u.

In order to dynamize the above algorithm, we handle edge deletions as follows. Assume that an SCC C of G∖{v}, for some v, breaks into several SCCs C_1, …, C_l. Without loss of generality, let C_l be the largest SCC that is created. Recall from Section <ref> that only the SCCs C_1,…, C_l-1 are listed, since we cannot afford to list them all. Therefore, for each strong bridge e=(u,v) or e=(v,u), we test for an arbitrary vertex z∈ C_i, for each 1 ≤ i ≤ l-1, whether A[v,u] = A[z,u]. If the condition holds, we remove the vertices of C_i from C and add C_i as an SCC of G∖ e; otherwise, we remove the vertices of C_i from C but do not add C_i as an SCC of G∖ e. We note that the tests A[v,u] = A[z,u] refer to the indices of the SCCs after all the necessary updates following the corresponding edge deletion.
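For concreteness, the static assembly rule implied by Lemma <ref> can be sketched as follows (a sketch under the stated assumptions, with the SCCs given as lists of vertex sets):

    def sccs_without_edge(sccs_without_u, sccs_without_v, u, v):
        """Assemble the SCCs of G \ e for a strong bridge e = (u, v) from
        the SCCs of G \ {u} and of G \ {v} (a sketch of the static rule)."""
        C = next(S for S in sccs_without_u if v in S)   # SCC of v in G \ {u}
        result = [C]
        for S in sccs_without_v:
            # Testing one arbitrary vertex suffices: the SCCs of G \ {u}
            # and G \ {v} are either nested or disjoint.
            z = next(iter(S))
            if z not in C:
                result.append(S)
        return result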
Now we show the correctness of the above algorithm. The SCCs of G∖ e that contain neither u nor v are SCCs of G∖{u}. As implied by Lemma <ref>, the SCC of G∖ e containing v also appears as an SCC of G∖{u} (since there is no SCC of G∖{v} containing v). Thus, if an SCC of G∖ e does not contain u, then it appears as an SCC of G∖{u}. Since the SCC of G∖ e containing u is an SCC of G∖{v}, we can test whether a vertex z is in the same SCC as u in G∖ e by simply testing whether it is in the same SCC as u in G∖{v}. Therefore, the SCCs of G∖ e that do not contain u are correctly updated. The SCCs of G∖ e that do not contain v are correctly updated when we examine the SCCs of G∖{v}. Hence, we correctly update all the SCCs of G∖ e.

Let G be a digraph. Throughout any sequence of edge deletions, at most 2(n-1) strong bridges can appear in G.

Once an edge becomes a strong bridge, it remains a strong bridge until its endpoints are separated into different SCCs. Therefore, it suffices to bound the number of strong bridges whose endpoints end up in different SCCs after some SCC breaks into smaller SCCs. Assume an SCC C breaks into k new SCCs C_1, …, C_k. Consider the SCC C before it breaks, and an arbitrary vertex s∈ C. Let P be any path in G[C] from s to a vertex in C_i; without loss of generality, assume that only the last vertex on the path belongs to C_i. Let e_1=(u,v) be the last edge on P. Then there is no other edge e_2∈ C_j × C_i, e_2 ≠ e_1, in G[C] that disconnects a vertex w∈ C_i from s, since the path P followed by any path from v to w in G[C_i] avoids e_2. Therefore, for each SCC there is at most one incoming edge disconnecting its vertices from s. If we apply the same argument to the reverse graph, we get that each SCC has at most one outgoing edge disconnecting its vertices from s. Since, by Lemma <ref>, all the strong bridges of a strongly connected graph are edges that disconnect vertices from an arbitrary vertex s, it follows that there are at most 2(k-1) strong bridges whose endpoints lie in different SCCs C_i, C_j. Therefore, we charge at most 2(k-1) strong bridges every time the number of SCCs of the graph increases by k-1. The lemma follows from the fact that the number of SCCs can only increase, and that after all edges are removed from the graph there are n SCCs left.

Now we bound the total running time of the algorithm. By Lemma <ref>, at most O(n) strong bridges appear in the graph throughout the execution of the algorithm. We show that for each strong bridge (u,v), our algorithm spends at most O(n) time. Indeed, whenever an SCC in G∖{u} or in G∖{v} breaks, we execute O(k+k') constant-time queries, where k and k' are the numbers of SCCs resulting after an SCC breaks in G∖{u} and in G∖{v}, respectively. Consider the number of queries that are executed whenever SCCs of G∖{u} break. After an SCC breaks and k SCCs are created, the number of SCCs increases by k-1. Since the number of SCCs can only increase, and there can be at most O(n) cases where an SCC breaks, it follows that we execute at most O(n) queries for the strong bridge (u,v) when SCCs of G∖{u} break. The same holds when SCCs of G∖{v} break. Moreover, whenever an SCC of G∖{u}, for some vertex u, breaks into several SCCs, we spend time proportional to the size of all the newly created SCCs, excluding the largest one, to move the vertices from the SCC that broke to the new SCCs. With an analysis similar to that of Section <ref>, we can show that we move each vertex at most log n times for each strong bridge, which results in O(n^2 log n) total time spent moving vertices.
Thus, we execute in total O(n^2) constant-time queries over the course of the algorithm for maintaining the SCCs of G∖ e, for each strong bridge e. We can moreover build a data structure that reports each SCC in time proportional to its size, once its ID is specified. This can be done similarly to Section <ref> by using doubly linked lists. We also assume that we maintain the list of IDs of the SCCs in G∖ e, for each edge e.
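A minimal sketch of such a structure, with intrusive doubly linked lists so that moving a vertex between SCCs takes O(1) time and reporting an SCC takes time proportional to its size (the class and method names are ours):

    class SCCRegistry:
        """SCCs as doubly linked lists of vertices (a sketch)."""

        def __init__(self, vertices, scc_id):
            self.prev, self.next = {}, {}
            self.head = {}            # SCC ID -> first vertex (or None)
            self.of = {}              # vertex -> SCC ID
            for v in vertices:
                self._push(scc_id[v], v)

        def _push(self, cid, v):
            h = self.head.get(cid)
            self.prev[v], self.next[v] = None, h
            if h is not None:
                self.prev[h] = v
            self.head[cid] = v
            self.of[v] = cid

        def move(self, v, new_cid):
            p, n = self.prev[v], self.next[v]   # unlink v in O(1)
            if p is not None:
                self.next[p] = n
            else:
                self.head[self.of[v]] = n
            if n is not None:
                self.prev[n] = p
            self._push(new_cid, v)

        def report(self, cid):
            out, v = [], self.head.get(cid)
            while v is not None:
                out.append(v)
                v = self.next[v]
            return out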
§.§ Answering decrementally strong connectivity queries under edge failures
In this section we show how to answer various strong connectivity queries under single edge failures in asymptotically optimal time, while we maintain a directed graph G=(V,E) decrementally. More specifically, under any sequence of edge deletions, we consider answering the following types of queries: (i) report the total number of SCCs in G∖ e, for a query edge e∈ E; (ii) report the size of the largest and of the smallest SCC in G∖ e, for a query edge e∈ E; (iii) report all the SCCs of G∖ e, for a query edge e∈ E; (iv) test whether two query vertices w and z are strongly connected in G∖ e, for a query edge e∈ E; (v) for query vertices w and z that are strongly connected in G, report all edges e such that w and z are no longer strongly connected in G∖ e. In Section <ref> we showed how to answer the same queries in asymptotically optimal time with respect to vertex failures. For static strongly connected graphs, it is known that after linear-time preprocessing one can answer all of the above queries in optimal time <cit.>. Before proving the main result of this section, we need two supporting lemmas. By the definition of strong bridges, an edge e=(d(v),v), for some v, is a strong bridge if all paths from s to v contain e. Since all paths from s to a vertex in D(v) contain v, it follows that all paths from s to vertices in D(v) contain e. The first supporting lemma shows that this is not only true for s but for all vertices that are not in D(v).

(<cit.>) Let G be a strongly connected digraph, let D be its dominator tree rooted at an arbitrary start vertex s, and let e=(u,v) be a strong bridge of G such that d(v)=u. Then there is a path from any vertex w∉ D(v) to v in G that does not contain any vertex in D(v). Moreover, all simple paths in G from w to a vertex in D(v) must contain e.

Let e=(u,v) be a strong bridge that is a separating edge for vertices w and z. Then e must appear in at least one of the paths D[s,w], D[s,z], D^[s,w], and D^[s,z].

Now we are ready to prove the main result of this section.

We can maintain a digraph G decrementally in O(mn log n) total expected update time against an oblivious adversary, using O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices, and between any two edge deletions answer in asymptotically optimal time the following types of queries under edge failures: (i) report in O(1) time the total number of SCCs in G∖ e, for a query edge e∈ E; (ii) report in O(1) time the size of the largest and of the smallest SCC in G∖ e, for a query edge e∈ E; (iii) report in O(n) worst-case time all the SCCs of G∖ e, for a query edge e∈ E; (iv) test in O(1) time whether two query vertices w and z are strongly connected in G∖ e, for a query edge e∈ E; (v) for query vertices w and z that are strongly connected in G, report all edges e such that w and z are not strongly connected in G∖ e, in optimal worst-case time, i.e., in time O(k), where k is the number of separating edges (for k=0, the time is O(1)).

Queries (i), (iii), and (iv) can be answered by maintaining the labels A[u,v], for each u,v∈ V, as shown in Section <ref>. As we also mentioned in Section <ref>, we can maintain for each G∖ e a list of its SCCs. This list can be easily extended to maintain the size of each SCC. In order to have fast access to the minimum and the maximum size of the SCCs, we store the sizes of the SCCs in a min-heap and a max-heap. These heaps can be updated in total time O(n log n) for each subgraph G∖ e, for each e, as follows. Whenever an SCC breaks, we add the IDs of the newly created SCCs together with their sizes into the heaps, and we also update the size of the SCC that kept the same ID. Since at most n SCCs can be created, and moreover there can be at most n occurrences of a broken SCC, there are at most O(n) insertions and updates to each heap. That means the total time spent maintaining each heap for each G∖ e is O(n log n), which sums up to O(n^2 log n) over all strong bridges e (since, by Lemma <ref>, the number of strong bridges that appear throughout the algorithm is O(n)). Given one min-heap and one max-heap, we can answer type (ii) queries in constant time.

We are left to show how to answer queries of type (v). For this type of query we assume that we maintain the dominator tree D of the graph and the dominator tree D^ of the reverse graph. We assume that after each edge deletion we compute, for each vertex q, the last strong bridge on the path from s to q in D, denoted by ℓ(q). If no such edge exists for some vertex q, we let ℓ(q)=∅. This can be easily done in O(n) time. By Lemma <ref>, all separating edges for w and z are either ancestors of w or z in D or ancestors of w or z in D^. We only show how to report the separating edges for w and z that are ancestors of w or z in D, since the procedure for D^ is completely analogous. By Lemma <ref>, notice that if there exists an edge e'=(t,q) such that w∈ D(q) and z∉ D(q), or such that z∈ D(q) and w∉ D(q), then e' is a separating edge for w and z. This means that all strong bridges on the path in D from _D(w,z) (the nearest common ancestor of w and z in D) to d(w) and from _D(w,z) to d(z) are separating edges for w and z. We can simply report all strong bridges on the path in D from _D(w,z) to w and on the path in D from _D(w,z) to z. These strong bridges can be reported in output-sensitive time, plus O(1) time, as follows. We first check whether for ℓ(w)=(x,y) it holds that x∈ D(_D(w,z)), and if so we report ℓ(w); otherwise we stop. We continue in the same way by testing ℓ(x), and so on. We repeat the same process to report all strong bridges on the path from _D(w,z) to z. Next, we show that all edges e'=(t,q) with t,q∉ D(_D(w,z)) that separate w and z appear as consecutive strong bridges on a path in D. More specifically, there is a vertex t∈ V∪{∅}∖ D(_D(w,z)) such that all strong bridges on the path from t to _D(w,z) in D are separating edges for w and z. Let (t,q) be the first edge on the path from s to _D(w,z) in D that separates w and z. If there is no such edge (t,q), then none of the edges on the path from s to _D(w,z) is a separating edge for w and z, and we are done. We can verify the existence of such an edge by testing whether ℓ(_D(w,z)) ≠ ∅ and ℓ(_D(w,z)) is a separating edge for w and z (the test can be executed in constant time as one type (iv) query). Assume now that such a separating edge (t,q) exists. By Lemma <ref>, either all paths from w to z contain (t,q) or all paths from z to w contain (t,q).
Assume, without loss of generality, that all paths from w to z contain (t,q). Since q∉ D(_D(w,z)), all paths from q to z contain all strong bridges on the path from q to z in D (including all strong bridges on the path from q to _D(w,z)). This allows us to efficiently identify and report all separating edges (t,q) with t,q∉ D(_D(w,z)) for w and z as follows. If there exists one such separating edge, we start testing the edges on the path from _D(w,z) to s in D (following the pointers ℓ, as before), reporting all strong bridges that are separating edges for w and z; once we find an edge that does not separate w and z, we stop (as we proved, there are no further separating edges on the path from s to _D(w,z)). Notice that we only spend time proportional to the number of edges that we report, plus the computation of _D(w,z) and a single type (iv) query that does not report an edge. We spend the analogous time on the dominator tree D^ of the reverse graph. We only need to be careful not to report the same edge twice, which can be trivially implemented within the claimed time bound. The lemma follows.

§.§ Maintaining decrementally the 2-edge-connected components
In this section we show how to maintain the 2-edge-connected components of a directed graph. By definition, two vertices w and z are 2-edge-connected if and only if there is no edge e such that w and z are not strongly connected in G∖ e. Therefore, a simple-minded algorithm for computing the 2-edge-connected components is the following. We start with the trivial partition 𝒫 of the vertices that is equal to the set of SCCs of the graph. For every strong bridge e, we compute the SCCs C_1,…, C_k of G∖ e and refine the maintained partition 𝒫 according to the partition induced by the SCCs C_1, …, C_k. After we execute all the refinements on 𝒫, two vertices are in the same set if and only if we did not find an edge that separates them, which is exactly the definition of the 2-edge-connected components.

Our algorithm is a dynamic version of the aforementioned simple-minded algorithm. That is, we execute the refinement operations after every edge deletion, and only when necessary. More specifically, we maintain decrementally the SCCs of G∖ e for each strong bridge e. Whenever we identify a 2-edge-connected component B containing vertices from different SCCs in G∖ e, for some strong bridge e, we refine the 2-edge-connected components according to these SCCs. Assume that a 2-edge-connected component B breaks after an edge deletion; that is, there is a strong bridge e such that the vertices of B lie in different SCCs in G∖ e. Let C_1,C_2,…, C_k be the SCCs in G∖ e. Then we replace B by {B∩ C_1}, …, {B∩ C_k}. Notice that we can afford to spend up to O(m) time every time we detect that a 2-edge-connected component should be refined, as this can happen at most O(n) times (this follows from the fact that each time at least two vertices stop being 2-edge-connected). However, these refinements can be easily executed in O(n) time, and therefore we spend total time O(n^2) for all refinements throughout the algorithm.
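A sketch of one refinement step; comp maps each vertex to the ID of its 2-edge-connected component, and fresh_id() is a hypothetical supplier of unused IDs:

    def refine_component(comp, broken_id, sccs_without_e, fresh_id):
        """Replace the 2-edge-connected component with ID broken_id by its
        intersections {B ∩ C_1}, ..., {B ∩ C_k} with the SCCs of G \ e
        (a sketch; runs in time proportional to the total size of the SCCs)."""
        for C in sccs_without_e:
            new_id = fresh_id()
            for v in C:
                if comp[v] == broken_id:
                    comp[v] = new_id     # v moves into {B ∩ C}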
In order to make our algorithm efficient, we need to specify how to detect whether two 2-edge-connected vertices appear in different SCCs in G∖ e, for some edge e. Whenever an SCC C in G∖ e breaks into several SCCs C_1,…, C_k, the vertices of all SCCs except one can be listed in time proportional to their edges, as shown in Section <ref>. Without loss of generality, let C_1, …, C_k-1 be those SCCs. For each SCC C_i, 1≤ i ≤ k-1, we examine whether the 2-edge-connected components containing subsets of vertices of C_i are entirely contained in C_i. This can be easily done in time proportional to |C_i|, by simply testing the IDs of their 2-edge-connected components. Notice that we do not need to examine the vertices in C_k, since if we do not find a 2-edge-connected pair that is disconnected in G∖ e by the searches in C_i, 1≤ i≤ k-1, then there is no such pair, as we now explain. Assume that there is a pair of 2-edge-connected vertices w,z such that w∈ C_k and z∉ C_k. Since w and z were 2-edge-connected before the edge deletion, it follows that z∈ C, and therefore z∈ C_i for some 1≤ i≤ k-1, in which case we would detect the pair by searching in C_i. If we detect a 2-edge-connected component whose vertices lie in different SCCs in G∖ e, for some strong bridge e, then we execute the refinement phase in O(n) time. Notice that all the necessary tests described above can be executed in time proportional to the number of vertices of a broken SCC in G∖ e, for some strong bridge e, that are not contained in the largest resulting SCC. Each vertex can appear at most log n times in an SCC of G∖ e, for some strong bridge e, that is not the largest resulting SCC after a big SCC breaks. That means we spend O(n log n) time on the aforementioned queries for each graph G∖ e, for some strong bridge e, and therefore O(n^2 log n) in total.

The 2-edge-connected components of a directed graph G can be maintained decrementally in O(mn log n) total expected time against an oblivious adversary, using O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices.

§.§ Maintaining decrementally the maximal 2-edge-connected subgraphs
A strongly connected graph G=(V,E) is 2-edge-connected if for each e∈ E it holds that G∖ e remains strongly connected. In this section we show how to maintain the maximal 2-edge-connected subgraphs of a directed graph decrementally. The maximal 2-edge-connected subgraphs induce a partition of the vertices of the graph. A simple-minded algorithm for computing the maximal 2-edge-connected subgraphs iteratively removes one strong bridge from each SCC until there are no more strong bridges in any SCC. Clearly, the remaining SCCs of the resulting graph are 2-edge-connected subgraphs, since they do not contain any strong bridges. Moreover, all maximal 2-edge-connected subgraphs of the initial graph remain intact, since no removed edge can separate their vertices into different SCCs (including the edges inside a 2-edge-connected subgraph).

We now present our decremental algorithm for maintaining the maximal 2-edge-connected subgraphs of G. Our algorithm is a simple extension of the simple-minded static algorithm for computing the maximal 2-edge-connected subgraphs. That is, we maintain the graph G' that results from iteratively removing all strong bridges. Notice that the maximal 2-edge-connected subgraphs can only be further partitioned, so we can ignore all the edges between the different SCCs of G'. In order to do so, we simply remove them from G' once we discover such edges. Therefore, after an edge deletion we only need to detect whether the deletion introduces new strong bridges inside the SCCs of G'. If such a strong bridge is discovered, we simply remove it from G' and also remove all edges among the different resulting SCCs.
Throughout this process we continue searching for new strong bridges in the maintained SCCs, and we repeat the process. Now we show how to implement the aforementioned decremental algorithm efficiently. We maintain the SCCs in G'∖ e for each strong bridge e, as shown in Section <ref>. Whenever an SCC in G'∖ e, for some strong bridge e, breaks into several SCCs C_1,…, C_k after an edge deletion, we collect all edges between different SCCs and remove them from G' (we keep all edges removed by this process in a global set data structure L, and they are executed as normal edge deletions from our data structure). The above procedure maintains G' under any sequence of edge deletions, since for each strong bridge e we only remove edges between different SCCs in G'∖ e, and when the additional edge deletions end there are no edges between different SCCs in G'∖ e.

Now we bound the running time of the algorithm. The time for handling all edge deletions in the maintained graph G' is bounded by O(mn log n), using O(n^2 log n) space, by Lemma <ref>. We maintain the SCCs in G'∖ e for each strong bridge e in total O(n^2 log n) time for all strong bridges, as shown in Section <ref>. We only need to bound the time we spend collecting all edges among the different resulting SCCs after some SCC in G'∖ e breaks, for any strong bridge e. We do that as follows. Whenever an SCC C breaks into several SCCs C_1,…, C_k in G'∖ e, for some strong bridge e, we only need to identify the edges among C_1, …, C_k, since the algorithm has previously removed all edges from C to other SCCs in G'∖ e. Without loss of generality, let C_k be the largest SCC among C_1,…, C_k. For each C_i, 1≤ i ≤ k-1, we iterate over all the unmarked edges incident to its vertices and test whether their endpoints are in different SCCs in G'∖ e; if so, we mark them and insert them into the global set data structure L in order to delete them from the graph. Notice that whenever the algorithm iterates over the incident edges of a vertex, that vertex is contained in an SCC in G'∖ e that is at most half the size of its previous SCC. That means each vertex will be listed at most log n times. Therefore, for each strong bridge e, we consider the edges of each vertex at most log n times, and therefore at most n log n times in total, over all strong bridges. Hence we spend at most O(mn log n) time to collect all edges among the different resulting SCCs after some SCC in G'∖ e breaks, for any strong bridge e.
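A sketch of the edge-collection step just described, with the halving argument visible in the comments (scc_of, adj, marked, and L are our names for the structures described above):

    def collect_cross_edges(small_sccs, scc_of, adj, marked, L):
        """For every new SCC except the largest one, scan the unmarked
        incident edges and move those crossing between different SCCs of
        G' \ e into the deletion set L (a sketch). Each vertex scanned here
        now lives in an SCC at most half the size of its previous one, so
        it is scanned O(log n) times in total per strong bridge."""
        for C in small_sccs:              # all resulting SCCs but the largest
            for v in C:
                for (x, y) in adj[v]:     # edges incident to v (both ends)
                    if (x, y) not in marked and scc_of[x] != scc_of[y]:
                        marked.add((x, y))
                        L.append((x, y))  # scheduled as a normal deletion

Thus, we have the following lemma.

The maximal 2-edge-connected subgraphs of a directed graph G can be maintained decrementally in O(mn log n) total expected update time against an oblivious adversary, using O(n^2 log n) space, where m is the number of edges in the initial graph and n is the number of vertices.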
§ DECREMENTAL DOMINATORS IN REDUCIBLE GRAPHS
In this section we give a specialized solution for maintaining the dominator tree under edge deletions in reducible flow graphs. The algorithm has a total update time of O(mn) and uses O(m+n) space. A reducible flow graph <cit.> is one in which every strongly connected subgraph S has a single entry vertex v such that every path from s to a vertex in S contains v. There are many equivalent characterizations of reducible flow graphs <cit.>, and there are algorithms to test reducibility in near-linear <cit.> and truly linear <cit.> time. A flow graph is reducible if and only if it becomes acyclic when every edge (v,w) such that w dominates v is deleted <cit.>. We refer to such an edge as a back edge. Deleting such edges does not change the dominator tree, since no such edge can be on a simple path from s. Deleting these edges thus reduces the problem of computing dominators on a reducible flow graph to the same problem on an acyclic graph. Such a graph has a topological order (a total order such that if (x,y) is an edge, then x is ordered before y) <cit.>.

It is well known that the dominator tree D of an acyclic flow graph G can be computed by the following simple algorithm, which builds D incrementally <cit.>. Fix a topological order of G (for the vertices reachable from s). Initially, D consists only of its root s. We process the vertices in topological order, and for each vertex v we compute the nearest common ancestor u of 𝐼𝑛(v) in D. Then we set d(v) ← u.
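A sketch of this incremental construction, assuming the vertices reachable from s are given in topological order together with their incoming-edge lists In(v):

    def dominator_tree_dag(topo, In, s):
        """Build the dominator tree of an acyclic flow graph rooted at s by
        processing vertices in topological order: d(v) is the nearest common
        ancestor of In(v) in the tree built so far (a sketch)."""
        d = {s: None}
        depth = {s: 0}

        def nca(a, b):
            while a != b:                  # walk the deeper vertex upward
                if depth[a] < depth[b]:
                    a, b = b, a
                a = d[a]
            return a

        for v in topo:
            if v == s:
                continue
            preds = iter(In[v])
            u = next(preds)
            for p in preds:
                u = nca(u, p)
            d[v] = u
            depth[v] = depth[u] + 1
        return d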
§.§ Preliminary Observations
§.§.§ The Parent and Sibling Properties
Let T be a rooted tree whose vertex set V(T) consists of the vertices reachable from s. Tree T has the parent property if for all (v,w)∈ E with v and w reachable, v is a descendant of t(w) in T. Tree T has the sibling property if v does not dominate w for all siblings v and w in T. The parent and sibling properties are necessary and sufficient for a tree to be the dominator tree.

(<cit.>) A tree T has the parent and sibling properties if and only if T = D.

§.§.§ Derived Edges and Derived Graphs
Derived graphs, first defined in <cit.>, reduce the problem of finding dominators to the case of a flat dominator tree. By the parent property of D, if (v,w) is an edge of G, the parent d(w) of w is an ancestor of v in D. Let (v,w) be an edge of G, with w not an ancestor of v in D. Then the derived edge of (v,w) is the edge (v̄,w), where v̄=v if v=d(w), and otherwise v̄ is the sibling of w that is an ancestor of v. If w is an ancestor of v in D, then the derived edge of (v,w) is null. Note that a derived edge (v̄,w) may not be an original edge of G. For any vertex w∈ V such that C(w)≠∅, we define the derived flow graph of w, denoted by G_w = (V_w, E_w, w), as the flow graph with start vertex w, vertex set V_w = C(w)∪{w}, and edge set E_w = {(ū,v) | v∈ V_w and (u,v)∈ E}, where (ū,v) denotes the derived edge of (u,v). By definition, G_w has a flat dominator tree; that is, w is the only proper dominator of any vertex v∈ V_w∖{w}.

(<cit.>) Given the dominator tree D of a flow graph G=(V,E,s) and a list of edges S⊆ E, we can compute the derived edges of S in O(|V|+|S|) time.

§.§.§ Affected Vertices
Now consider the effect that a single edge deletion has on the dominator tree D. Let (x,y) be the deleted edge. We let G' and D' denote the flow graph and its dominator tree after the update. Similarly, for any function f on V, we let f' be the function after the update. In particular, d'(v) denotes the parent of v in D'. By definition, D'≠ D only if x is reachable before the update. We say that a vertex v is affected by the update if d'(v)≠ d(v). (Note that we can have 𝐷𝑜𝑚'(v)≠𝐷𝑜𝑚(v) even if v is not affected.) If v is affected, then d'(v) does not dominate v in G. Suppose that x is reachable and y remains reachable after the deletion of (x,y). The deletion of an edge does not violate the parent property of the dominator tree but may violate the sibling property. Since the effect of an edge deletion is the reverse of an edge insertion, <cit.> and <cit.> give the following result:

Suppose x is reachable and y does not become unreachable after the deletion of (x,y). Then the following statements hold: (a) a vertex v is affected only if d(v)=d(y) and there is a path π_yv from y to v such that 𝑑𝑒𝑝𝑡ℎ(d(v)) < 𝑑𝑒𝑝𝑡ℎ(w) for all w∈ π_yv; (b) all affected vertices become descendants in D' of a child c of d(y); (c) after the deletion, each affected vertex v becomes a child of a vertex on D'[c,y].

See Figure <ref>. We refer to D'[c,y] as the critical path of the deletion. Notice that the above lemma only provides a necessary condition for a vertex to be affected.

§.§ Decremental Algorithm
Let G=(V,E,s) be the input reducible flow graph. Before we begin executing our decremental algorithm, we compute the dominator tree D of G and delete its back edges. Henceforth, we assume that G is acyclic. Throughout the execution of our algorithm we need to test the ancestor-descendant relation between pairs of vertices in D, which we do by applying a simple O(1)-time test <cit.>. Specifically, after each edge update, we perform a DFS traversal of D in which we number the vertices from 1 to n in preorder and compute the number of descendants of each vertex v. We denote these numbers by 𝑝𝑟𝑒𝑜𝑟𝑑𝑒𝑟(v) and 𝑠𝑖𝑧𝑒(v), respectively. Then v is a descendant of u if and only if 𝑝𝑟𝑒𝑜𝑟𝑑𝑒𝑟(u) ≤ 𝑝𝑟𝑒𝑜𝑟𝑑𝑒𝑟(v) < 𝑝𝑟𝑒𝑜𝑟𝑑𝑒𝑟(u) + 𝑠𝑖𝑧𝑒(u). The next lemma follows from <cit.>.

Suppose x is reachable and y does not become unreachable after the deletion of (x,y). Then y is affected if and only if (d(y),y) is not an edge of G∖(x,y) and all edges (v,y)∈ E∖(x,y) correspond to the same derived edge (v̄,y)=(c,y) of G.

Our goal is to apply Lemma <ref> in order to locate the affected vertices in some topological order of G. For each vertex v we maintain a count 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(v), which corresponds to the number of distinct siblings w of v such that (w,v) is a derived edge. We also maintain the lists 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(v) of the derived edges (v,u) leaving each vertex v. As we locate each affected vertex v, we find its new parent in the dominator tree and update the counts 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠 for the siblings of v. We compute the updated 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠 counts and 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡 lists in a postprocessing step.

Let (x,y) be the deleted edge. The first step is to test whether y is affected after the deletion, as suggested by Lemma <ref>. Specifically, we compute the nearest common ancestor z of all vertices in 𝐼𝑛(y). From Lemma <ref> we have that y is affected if and only if z≠ d(y). In this case, by Lemma <ref>, z is a descendant of a sibling c of y in D. Note that we can locate z and c in O(n) time, using the parent function d.

Let G be an acyclic flow graph, and let (x,y) be an edge of G that is not a bridge such that x is reachable. Then, after the deletion of (x,y), no vertex on D'[c,d'(y)] is affected.

The lemma is true if y is not affected, so suppose that y is affected. Suppose, for contradiction, that there is an affected vertex v on D'[c,d'(y)]. Then v is a dominator of y in G', so all paths from s to y in G' contain v. Let π_sy be such a path, and let π_vy be the part of π_sy from v to y. Since G' is a subgraph of G, the path π_vy also exists in G. From the fact that v is affected and from Lemma <ref>, we have that G also contains a path π_yv from y to v. But then G is not acyclic, a contradiction.

Algorithm <ref> gives the outline of our algorithm to update the dominator tree after an edge deletion. To identify the affected vertices, we update the counters 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠 using Procedure <ref> and apply Lemma <ref>. We store the affected vertices in a queue Q and process them as they are extracted from Q.
To process an affected vertex w≠ y, we first need to locate the position of d'(w) on the critical path D'[c,d'(y)]. This is handled by Procedure <ref>. Let 𝑑𝑒𝑔𝑟𝑒𝑒_0(v) denote the initial degree (indegree plus outdegree) of a vertex v∈ V. We define the potential ϕ(v) of a vertex v as ϕ(v) = 𝑑𝑒𝑝𝑡ℎ(v)·𝑑𝑒𝑔𝑟𝑒𝑒_0(v) if v is reachable, and ϕ(v) = n·𝑑𝑒𝑔𝑟𝑒𝑒_0(v) otherwise. The flow graph potential Φ is the sum of the vertex potentials. Note that vertex potentials are nondecreasing. Also, n-1 ≤ Φ < 2nm. For a set of vertices S, we let 𝑑𝑒𝑔𝑟𝑒𝑒_0(S) = ∑_v∈ S 𝑑𝑒𝑔𝑟𝑒𝑒_0(v) and Φ(S) = ∑_v∈ S ϕ(v).

After we have found a new affected vertex w, the next step is to update the 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠 counters for the siblings u of w in D that have an entering edge from D'(w). Since we discover the affected vertices in topological order, none of these siblings of w has been inserted into Q yet. Moreover, by Lemma <ref> and Corollary <ref>, no descendant of w in D is affected, so D'(w)=D(w). Hence, the siblings of w that have an entering edge from D'(w) are precisely those in 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(w). We can then update the counters as shown in Procedure <ref>.

    Procedure UpdateInSiblings(w)
      foreach vertex q ∈ 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(w) do
        if q ∈ 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(c) then
          set 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(q) ← 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(q) - 1
          if 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(q) = 1 and d(q) ∉ 𝐼𝑛(q) then insert q into Q
        else
          𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(c) ← 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(c) ∪ {q}
      set 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(w) ← ∅

We say that a vertex is scanned if it is visited in line 21 of Algorithm <ref>. Hence, every scanned vertex is a descendant of an affected vertex in D. Notice that a vertex v can be scanned at most once per edge deletion, and when this happens an ancestor w of v in D is affected. We maintain this information in a variable 𝐴𝑓𝑓𝑒𝑐𝑡𝑒𝑑𝐴𝑛𝑐𝑒𝑠𝑡𝑜𝑟(v) for each vertex v.

Now we describe how to find the new parent of each affected vertex. Let w be the next affected vertex extracted from Q. Lemma <ref> and Corollary <ref> imply that d'(w) is located on the critical path D'[c,d'(y)]. To locate the position of d'(w), we find the deepest vertex u on the critical path such that setting d'(w) ← u satisfies the parent property. See Procedure <ref>. We test the vertices u∈ D'[c,d'(y)] in top-down order, because this allows us to charge the cost of these tests to the increase of ϕ(w). (See Theorem <ref> below.)

    Procedure LocateNewParent(w)
      foreach vertex u ∈ D'(c,d'(y)] in top-down order do
        if there is an edge (v,w) ∈ E' such that v ∉ D'(u) then
          set d'(w) ← d(u) and stop

Consider an affected vertex w and a vertex u on the critical path D'[c,d'(y)]. Let (v,w) be an edge in E'. To test whether v is a descendant of u in D', we consider two cases:
* If v is not a scanned vertex, then the relative location of v and u has not changed. That is, v∈ D'(u) if and only if v∈ D(u).
* If v is a scanned vertex, then it becomes a descendant of a vertex q on D'[c,d'(y)]. Let p=𝐴𝑓𝑓𝑒𝑐𝑡𝑒𝑑𝐴𝑛𝑐𝑒𝑠𝑡𝑜𝑟(v). Then q=d'(p), and since we locate the affected vertices in topological order, q is already known. So we have v∈ D'(u) if and only if q∈ D(u).
Therefore, in both cases we can use the O(1)-time test for the ancestor-descendant relation in D, using the arrays 𝑝𝑟𝑒𝑜𝑟𝑑𝑒𝑟 and 𝑠𝑖𝑧𝑒.

Algorithm <ref> is correct.

We argue that the affected vertices are processed in topological order. The correctness of our algorithm follows from this fact, because each execution of Procedure <ref>(w) sets d'(w) to be the nearest common ancestor of 𝐼𝑛(w).
This means that our algorithm updates the dominator tree by mimicking the incremental construction of the dominator tree <cit.>. To prove our claim that the affected vertices are processed in topological order, we first note that by Lemma <ref> there is a path in G' from y to each affected vertex. So y is the first affected vertex in topological order. A vertex w is inserted into Q if and only if 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(w)=1 and (d(w),w) is not an edge of G'. Suppose that w is inserted into Q. Since w≠ y, we have (d(w),w) ≠ (x,y). Hence, (d(w),w) is not an edge of G, and 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(w)>1 before the deletion. Thus 𝐼𝑛𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(w) was decreased by Procedure <ref>. Then, the condition in line 3 of that procedure implies that, the moment w is inserted into Q, all vertices in 𝐼𝑛(w) have become descendants of c. This proves the claim, so the lemma follows.

Algorithm <ref> maintains the dominator tree of a reducible flow graph G with n vertices through a sequence of edge deletions in O(mn) total time, where m is the number of edges in G before all deletions.

Consider the deletion of an edge e=(x,y). If e is a bridge, then y becomes unreachable in G' and we compute D' from scratch in O(m) time. Throughout the whole sequence of deletions, such an event can happen at most n-1 times, so all deletions that result in newly unreachable vertices are handled in O(mn) total time. Now we consider the cost of executing Algorithm <ref> when x is reachable and (x,y) is not a bridge. Lines 8–12 can be implemented in O(n) time. Also, we can compute z and c in lines 14 and 16, remove the affected vertices from 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(c) in line 24, and compute the arrays 𝑝𝑟𝑒𝑜𝑟𝑑𝑒𝑟' and 𝑠𝑖𝑧𝑒' in line 28 in O(n) time. So, not accounting for lines 25–27 and for the total running time of Procedures <ref> and <ref>, Algorithm <ref> executes the sequence of deletions in O(mn) time. It remains to bound the running time of lines 25–27 and of Procedures <ref> and <ref> by O(mn).

We first bound the time of Procedure <ref>(w). Line 1 takes O(|D(w)| + 𝑑𝑒𝑔𝑟𝑒𝑒_0(D(w))) = O(𝑑𝑒𝑔𝑟𝑒𝑒_0(D(w))) time. The set 𝑆𝑖𝑏𝑙𝑖𝑛𝑔𝑠(w) contains at most 𝑑𝑒𝑔𝑟𝑒𝑒_0(D(w)) vertices. To perform the test in line 3 in O(1) time, we mark each vertex in 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡(c), since this list only grows during the execution of the algorithm for the deletion of (x,y). (We unmark all vertices after the deletion.) We can also test whether d(u)∈ 𝐼𝑛(u) in constant time, by maintaining, throughout the whole sequence of deletions, a Boolean array 𝐷𝑜𝑚𝐸𝑑𝑔𝑒 such that 𝐷𝑜𝑚𝐸𝑑𝑔𝑒[u] = 𝑡𝑟𝑢𝑒 if and only if (d(u),u)∈ E. Finally, inserting a vertex into Q and into a 𝐷𝑒𝑟𝑖𝑣𝑒𝑑𝑂𝑢𝑡 list takes O(1) time. Thus, Procedure <ref>(w) is executed in O(𝑑𝑒𝑔𝑟𝑒𝑒_0(D(w))) time. Since the depth of each vertex in D(w) increases by at least one, this running time is at most β(Φ'(D(w)) - Φ(D(w))), for an appropriate constant β.

Now we bound the time of Procedure <ref>(w). Each execution of the foreach loop takes O(𝑑𝑒𝑔𝑟𝑒𝑒_0(w)) time. If the loop is executed k times, then the depth of w increases by k-1. Hence the running time is bounded by β(ϕ'(w) - ϕ(w)). We conclude that the total running time of Procedures <ref> and <ref> is bounded by the total increase in the potential, which is O(mn).

Finally, we turn to lines 25–27. Let A be the set of affected vertices. The set S consists of at most μ = 𝑑𝑒𝑔𝑟𝑒𝑒_0(A) edges. By Lemma <ref>, we can compute S in O(n+μ) time. Again we note that for each affected vertex v, 𝑑𝑒𝑔𝑟𝑒𝑒_0(v) ≤ ϕ'(v)-ϕ(v), so μ ≤ Φ'(A) - Φ(A), which implies that the total running time for lines 25–27 is O(mn).
§ CONDITIONAL LOWER BOUND

In the following we give a conditional lower bound for the partially dynamic dominator tree problem. We show that there is no incremental or decremental algorithm for maintaining the dominator tree that has total update time O((mn)^{1-ϵ}) (for some constant ϵ > 0) unless the OMv Conjecture <cit.> fails. This also holds for algorithms that do not explicitly maintain the tree, but are able to answer parent-queries. Formally, this section contains the proof of the following statement.

For any constant δ ∈ (0, 1/2] and any n and m = Θ(n^{1/(1-δ)}), there is no algorithm for maintaining a dominator tree under edge deletions/insertions allowing queries of the form "is x the parent of y in the dominator tree" that uses polynomial preprocessing time, total update time u(m,n) = (mn)^{1-ϵ}, and query time q(m) = m^{δ-ϵ} for some constant ϵ > 0, unless the OMv Conjecture fails.

Under this conditional lower bound, the running time of our algorithm is optimal up to sub-polynomial factors. We give the reduction for the decremental version of the problem. Hardness of the incremental version follows analogously.

§.§ Hardness Assumption

In the online Boolean matrix-vector multiplication (OMv) problem we are first given a Boolean n × n matrix M to preprocess. After the preprocessing, we are given a sequence of n-dimensional Boolean vectors v^(1), …, v^(n) one by one. For each 1 ≤ t ≤ n, we have to return the result of the matrix-vector multiplication M v^(t) before we are allowed to see the next vector v^(t+1). The OMv Conjecture states that there is no algorithm that computes each matrix-vector product correctly (with high probability) and in total spends time O(n^{3-ϵ}) for some constant ϵ > 0.

We will not use the OMv problem directly as the starting point of our reduction. Instead we consider the following online vector-matrix-vector (OuMv) problem (for a fixed γ > 0) and parameters n_1, n_2, and n_3 such that n_1 = ⌊n_2^γ⌋: We are first given a Boolean n_1 × n_2 matrix M to preprocess. After the preprocessing, we are given a sequence of pairs of Boolean vectors (u^(1), v^(1)), …, (u^(n_3), v^(n_3)) one by one, where each u^(t) is n_1-dimensional and each v^(t) is n_2-dimensional. For each 1 ≤ t ≤ n_3, we have to return the result of the Boolean vector-matrix-vector multiplication (u^(t))^⊺ M v^(t) before we are allowed to see the next pair of vectors (u^(t+1), v^(t+1)). It has been shown <cit.> that under the OMv Conjecture as stated above, there is no algorithm for this problem that has polynomial preprocessing time and for processing all vectors spends total time O(n_1^{1-ϵ_1} n_2^{1-ϵ_2} n_3^{1-ϵ_3}) such that all ϵ_i are ≥ 0 and at least one ϵ_i is a constant > 0.

§.§ Reduction

We now give the reduction from the OuMv problem with γ = δ/(1-δ) to the decremental dominator tree problem. In the following we denote by v_i the i-th entry of a vector v and by M_{i,j} the entry at row i and column j of a matrix M. Consider an instance of the OuMv problem with parameters n_1 = m^{1-δ}, n_2 = m^δ, and n_3 = m^{1-δ}. We preprocess the matrix M by constructing a graph G^(0) with the set of vertices

V = {s, x_1, …, x_{n_3}, x_{n_3+1}, y_1, …, y_{n_1}, z_1, …, z_{n_2}}

and the following edges:

* an edge (s, x_1), and, for every 1 ≤ t ≤ n_3, an edge (x_t, x_{t+1}) (i.e., a path from s to x_{n_3+1});

* for every 1 ≤ j ≤ n_2, an edge (x_{n_3+1}, z_j);

* for every 1 ≤ t ≤ n_3 and every 1 ≤ i ≤ n_1, an edge (x_t, y_i) (i.e., the complete bipartite graph between {x_1, …, x_{n_3}} and {y_1, …, y_{n_1}});

* for every 1 ≤ i ≤ n_1 and every 1 ≤ j ≤ n_2, an edge (y_i, z_j) if and only if M_{i,j} = 1 (i.e.,
a bipartite graph between {y_1, …, y_{n_1}} and {z_1, …, z_{n_2}} encoding the matrix M in the natural way).

Whenever the algorithm is given the next pair of vectors (u^(t), v^(t)), we first create a graph G^(t) by performing the following edge deletions in G^(t-1): If t ≥ 2, we first delete all outgoing edges of x_{t-1}, except the one to x_t. Then (for any value of t), for every i such that u^(t)_i = 0 we delete the edge from x_t to y_i. Thus, for every 1 ≤ i ≤ n_1, there will be an edge from x_t to y_i in G^(t) if and only if u^(t)_i = 1. Having created G^(t), we now, for every j such that v^(t)_j = 1, check whether x_t is the parent of z_j in the dominator tree. If this is the case for at least one j, we return that (u^(t))^⊺ M v^(t) is 1; otherwise we return 0.

Correctness. The correctness of our reduction follows from the following lemma.

For every 1 ≤ t ≤ n_3, the j-th entry of (u^(t))^⊺ M is 1 if and only if x_t is the immediate dominator of z_j in G^(t).

If the j-th entry of (u^(t))^⊺ M is 1, then there is an i such that u^(t)_i = 1 and M_{i,j} = 1. Thus, G^(t) contains the edges (x_t, y_i) and (y_i, z_j) and consequently a path from s to z_j, namely ⟨s, x_1, …, x_t, y_i, z_j⟩. Vertices that are not on this path cannot be dominators of z_j. Furthermore, y_i also cannot be a dominator of z_j because there is a path from s to z_j not containing y_i, namely ⟨s, x_1, …, x_{n_3}, x_{n_3+1}, z_j⟩. For every 1 ≤ t' ≤ t-1, the vertex x_{t'} has only one outgoing edge, which goes to x_{t'+1}, as all of its other outgoing edges are no longer present in G^(t). Thus, all paths from s to z_j necessarily contain the vertices s, x_1, …, x_t, in this order. Therefore x_t is the immediate dominator of z_j in G^(t).

If the j-th entry of (u^(t))^⊺ M is 0, then there is no i such that u^(t)_i = 1 and M_{i,j} = 1. This implies that there is no path (of length 2) from x_t to z_j avoiding x_{t+1} (via some vertex y_i). Thus, every path from s to z_j contains x_{t+1}, and in particular x_{t+1} appears after x_t on such a path. Thus, x_t cannot be the immediate dominator of z_j in G^(t).

Note that (u^(t))^⊺ M v^(t) is 1 if and only if there is a j such that both the j-th entry of (u^(t))^⊺ M and the j-th entry of v^(t) are 1. Furthermore, x_t is the parent of z_j in the dominator tree if and only if x_t is the immediate dominator of z_j in the current graph. Therefore the lemma establishes the correctness of the reduction.

Complexity. The initial graph G^(0) has n := Θ(n_1 + n_2 + n_3) = Θ(m^δ + m^{1-δ}) = Θ(m^{1-δ}) vertices and Θ(n_1 n_2 + n_2 n_3) = Θ(m) edges. The total number of parent-queries is O(n_1 n_3) = m^{2(1-δ)}. Suppose the total update time of the decremental dominator tree algorithm is O(u(m,n)) = (mn)^{1-ϵ} and its query time is O(q(m)) = m^{δ-ϵ}. Using the reduction above, we can thus solve the OuMv problem for the parameters n_1, n_2, n_3 with polynomial preprocessing time and total update time

O(u(m,n) + m^{2(1-δ)} q(m)) = O(u(m, m^{1-δ}) + m^{2(1-δ)} q(m)) = O(m^{2-δ-ϵ}).

Since n_1 n_2 n_3 = m^{2-δ}, this means we would get an algorithm for the OuMv problem with polynomial preprocessing time and total update time O(n_1^{1-ϵ_1} n_2^{1-ϵ_2} n_3^{1-ϵ_3}) where at least one ϵ_i is a constant > 0. This contradicts the OMv Conjecture.
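To make the bookkeeping of the reduction concrete, the following sketch builds G^(0) and processes one round (our own illustrative Python; the oracle object with delete_edge, delete_all_outgoing_except, and is_parent methods is a hypothetical stand-in for the assumed decremental dominator-tree structure, not an existing API):

def build_initial_edges(M, n1, n2, n3):
    """Edge set of G^(0) for a Boolean n1 x n2 matrix M (0/1 entries)."""
    E = {('s', ('x', 1))}
    E |= {(('x', t), ('x', t + 1)) for t in range(1, n3 + 1)}      # path
    E |= {(('x', n3 + 1), ('z', j)) for j in range(1, n2 + 1)}
    E |= {(('x', t), ('y', i)) for t in range(1, n3 + 1)           # complete
                               for i in range(1, n1 + 1)}          # bipartite
    E |= {(('y', i), ('z', j)) for i in range(1, n1 + 1)           # encodes M
                               for j in range(1, n2 + 1) if M[i - 1][j - 1]}
    return E

def answer_round(oracle, t, u, v, n1, n2):
    """Process the pair (u^(t), v^(t)) and return (u^t)^T M v^t as 0/1."""
    if t >= 2:
        oracle.delete_all_outgoing_except(('x', t - 1), ('x', t))
    for i in range(1, n1 + 1):
        if not u[i - 1]:
            oracle.delete_edge(('x', t), ('y', i))
    return int(any(oracle.is_parent(('x', t), ('z', j))
                   for j in range(1, n2 + 1) if v[j - 1]))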
http://arxiv.org/abs/1704.08235v1
{ "authors": [ "Loukas Georgiadis", "Thomas Dueholm Hansen", "Giuseppe F. Italiano", "Sebastian Krinninger", "Nikos Parotsidis" ], "categories": [ "cs.DS" ], "primary_category": "cs.DS", "published": "20170426174420", "title": "Decremental Data Structures for Connectivity and Dominators in Directed Graphs" }
[email protected] Department of Mathematical Sciences, University of Nevada Las Vegas, Las Vegas, NV 89154. Kontorovich is partially supported by an NSF CAREER grant DMS-1455705, an NSF FRG grant DMS-1463940, and a BSF grant. [email protected] Department of Mathematics, Rutgers University, New Brunswick, NJ 08854. MSC [2010]: 51N20, 01A20

Efficiently Constructing Tangent Circles

Arthur Baragar and Alex Kontorovich December 30, 2023
=========================================

§ INTRODUCTION

The famous Problem of Apollonius is to construct a circle tangent to three given ones in a plane. The three circles may also be limits of circles, that is, points or lines, and "construct" of course refers to straightedge and compass. In this note, we consider the problem of constructing tangent circles from the point of view of efficiency. By this we mean using as few moves as possible, where a move is the act of drawing a line or circle. (Points are free, as they do not harm the straightedge or compass, and all lines are considered endless, so there is no cost to "extending" a line segment.) Our goal is to present, in what we believe is the most efficient way possible, a construction of four mutually tangent circles. (Five circles of course cannot be mutually tangent in the plane, for their tangency graph, the complete graph K_5, is non-planar.) We first present our construction before giving some remarks comparing it to others we found in the literature.

§ BABY CASES: ONE AND TWO CIRCLES

Constructing one circle obviously costs one move: let A and Z be any distinct points in the plane and draw the circle O_A with center A and passing through Z. Given O_A, constructing a second circle tangent to it costs two more moves: draw the line AZ, and put an arbitrary point B on this line (say, outside O_A). Now draw the circle O_B with center B and passing through Z; then O_A and O_B are obviously tangent at Z, see fig:1. It should be clear that one cannot do better than two moves, for otherwise one could draw the circle O_B immediately; but this requires knowledge of a point on O_B.

§ WARMUP: THREE CIRCLES

Given fig:1, that is, the two circles O_A and O_B, tangent at Z, and the line AB, how many moves does it take to construct a third circle tangent to both O_A and O_B? We encourage readers at this point to stop and try this problem themselves.

Given fig:1, a circle tangent to both O_A and O_B is constructible in at most five moves.

We first give the construction, then the proof that it works.

§.§ The Construction

Draw an arbitrary circle O_Z centered at Z (this is move 1), and let it intersect AB at F and G, say, with A and F on the same side of Z. Next draw the circle centered at A and passing through G (move 2), and the circle centered at B through F (move 3); see fig:2. Let these two circles intersect at C. Construct the line AC (move 4) and let it intersect O_A at Y. Finally, draw the circle O_C centered at C and passing through Y (move 5); then O_C is tangent to O_B at X, say.

§.§ The Proof

It is elementary to verify that the above construction works, and that the radius of O_C is the same as that of O_Z. Note in fact that the locus of all centers C of circles O_C tangent to both O_A and O_B forms a hyperbola with foci A and B. Indeed, let the circles O_A, O_B, and O_C have radii a, b, and c, resp.; then |AC| = a+c and |BC| = b+c, so |AC| - |BC| = a-b is constant for any choice of c.
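As a quick numerical sanity check of the warmup construction (our own throwaway script, with an arbitrary choice of radii), one can verify the tangency conditions |AC| = a + c and |BC| = b + c directly:

import math

def circle_intersections(p0, r0, p1, r1):
    """The two intersection points of circles (p0, r0) and (p1, r1)
    (standard two-circle intersection formula)."""
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(r0**2 - a**2)
    xm, ym = x0 + a * (x1 - x0) / d, y0 + a * (y1 - y0) / d
    return ((xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d),
            (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d))

# O_A has center A = (0, 0) and radius a; O_B has center B = (a+b, 0) and
# radius b, so the two are tangent at Z = (a, 0).  O_Z has radius r, giving
# G = (a+r, 0) and F = (a-r, 0) on the line AB.
a, b, r = 1.0, 1.7, 0.6
A, B = (0.0, 0.0), (a + b, 0.0)
# Moves 2-3: the circle about A through G has radius a+r, and the circle
# about B through F has radius b+r; C is one of their intersections.
C = circle_intersections(A, a + r, B, b + r)[0]
# Tangency checks: |AC| = a + c and |BC| = b + c with c = r, so O_C is
# tangent to both O_A and O_B, and its radius equals that of O_Z.
assert abs(math.dist(A, C) - (a + r)) < 1e-12
assert abs(math.dist(B, C) - (b + r)) < 1e-12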
§ MAIN THEOREM: THE FOURTH CIRCLE

Finally we come to the main event, the fourth tangent circle, which we call the Apollonian circle.[Many objects in the literature are named after Apollonius though he had nothing to do with them, such as the Apollonian gasket and the Apollonian group (see, e.g., <cit.>). The fourth tangent circle really is due to him, though most authors refer to it as the "Soddy" circle.] We are given three mutually tangent circles, O_A, O_B and O_C, the lines AB and AC, and the points of tangency X, Y, and Z; that is, we are given the already constructed objects in fig:2.

An Apollonian circle tangent to O_A, O_B and O_C in fig:2 is constructible in at most seven moves.

§.§ The Construction

Draw the line XZ (this is move 1) and let it intersect AC at B'. Draw the circle O_B' centered at B' and passing through Y (move 2). It intersects O_B at Q and Q'; see fig:3. We repeat this procedure: draw the line XY, let it intersect AB at C', draw the circle O_C' with center C' and passing through Z, and let O_C' intersect O_C at R and R' (with R on the same side as Q). This repetition used two more moves. Next we extend BQ and CR (now up to move 6) and let them meet at S. Finally, use the seventh move to draw the desired Apollonian circle O_S centered at S and passing through Q; see fig:4.

If a pair of lines, e.g., AC and XZ, are parallel (so B' is at infinity), then use the line BY in lieu of O_B' (the former is the limit of the latter as B' → ∞).

§.§ The Proof

There is a unique circle such that inversion through it fixes O_B and sends O_A to O_C; we claim that O_B' is this circle. Indeed, such an inversion must send X to Z, so its center must lie on XZ. Its center also lies on the line perpendicular to O_A and O_C, which is the line AC; thus its center is B' = XZ ∩ AC. Finally, the point Y is fixed by this inversion, giving the claim. Next it is easy to see that the point of tangency of O_B and the Apollonian circle O_S must also lie on this inversion circle O_B' (in which case this point must be Q as constructed). Indeed, since the inversion preserves the initial configuration of three circles, it must also fix O_S, and hence also its point of tangency with O_B. Finally, since O_B and O_S are tangent at Q, their centers are collinear with Q; that is, S lies on the line BQ. The rest is elementary.

The second solution O_S' to the Apollonian problem can now be constructed in a further three moves. Indeed, the extra points of tangency Q' and R' are already on the page. Extend BQ' and CR' (two more moves); these intersect at S', and drawing the circle O_S' centered at S' and passing through Q' costs a third move.

Let A' = BC ∩ YZ be constructed similarly to B' and C'. Note that the triangles ABC and XYZ are perspective from the Gergonne point.[See, e.g., Wikipedia for any (standard) terms not defined here and below.] By Desargues's theorem, they are therefore perspective from a line, which is A'B'C', the so-called Gergonne line; see Oldknow <cit.>, who seems to have been just shy of discovering the construction presented here.

§ OTHER CONSTRUCTIONS

Apollonius's own solution did not survive antiquity <cit.> and we only know of its existence through a "mathscinet review" by Pappus half a millennium later; perhaps we have simply rediscovered his work.
Viète's original solution through inversion (see, e.g., <cit.>) is logically extremely elegant but takes countless elementary moves. There are many others, but we highlight two in particular.

§.§ Gergonne

Gergonne's own solution to the general Apollonian problem (that is, when the given circles are not necessarily tangent) is perhaps closest to ours (but of course the problem he is solving is more complicated). He begins by constructing the radical circle O_I for the initial circles O_A, O_B, and O_C, and identifies the six points X, X', Y, Y', Z, and Z', where it intersects the three original circles. Those points are taken in order around O_I, with Y' and Z on O_A, Z' and X on O_B, and X' and Y on O_C. In our configuration, the radical circle is the incircle of triangle ABC and X = X', Y = Y', and Z = Z'. Every pair of circles can be thought of as being similar to each other via a dilation through a point. In general, there are two such dilations. This gives us six points of similarity, which lie on four lines, the four lines of similitude. Each line generates a pair of tangent circles. In our configuration, the point B' is the center of the dilation that sends O_A to O_C. Since O_A and O_C are tangent, there is only one dilation, so we get only one line of similitude, the Gergonne line. The radical circle of O_B, O_I, and a pair of tangent circles is centered on the line of similitude, so is where XZ' intersects that line. In our configuration, that gives us B'. The radical circle is the one that intersects O_I perpendicularly, so in our configuration it goes through Y.

§.§ Eppstein

The previously simplest solution to our problem seems to have been that of Eppstein <cit.>, which used eleven elementary moves to draw O_S. His construction finds the tangency point Q by first dropping the perpendicular to AC through B, and then connecting a second line from Y to one of the two points of intersection of this perpendicular with O_B. This second line intersects O_B at Q (or Q', depending on the choice of intersection point). Note that constructing a perpendicular line is not an elementary operation, costing 3 moves. The second line is elementary, so Eppstein can construct Q in 4 moves, then R in 4 more, then two more lines BQ and CR to get the center S, and finally the circle O_S, in a total of 11 moves. To construct the other solution, O_S', using his method would cost another five moves (as opposed to our three; see rmk:second), since one needs to draw two more lines to produce Q' and R' (whereas our construction gives these as a byproduct).

§.§ Challenge:

Construct (a generic configuration of) four mutually tangent circles in the plane using fewer than 15 (= 1+2+5+7) moves. Or prove (as we suspect) that this is impossible!
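For readers experimenting with this challenge, any candidate fourth circle can be cross-checked numerically via the Descartes circle theorem, a standard fact not used in the constructions above: if k_i = 1/r_i are the curvatures of three mutually tangent circles, the two tangent solutions have curvature k_4 = k_1 + k_2 + k_3 ± 2√(k_1 k_2 + k_2 k_3 + k_3 k_1). A minimal sketch:

import math

def descartes_fourth_curvatures(k1, k2, k3):
    """The two solutions of the Descartes circle theorem: curvatures of the
    circles tangent to three mutually tangent circles with curvatures k1,
    k2, k3.  A negative curvature corresponds to an internally tangent
    (enclosing) circle."""
    s = k1 + k2 + k3
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# Example: three mutually tangent unit circles.
k_inner, k_outer = descartes_fourth_curvatures(1.0, 1.0, 1.0)
print(1.0 / k_inner)   # ~0.1547, the small circle in the middle
print(1.0 / k_outer)   # ~-2.1547, the enclosing circle (negative sign)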
http://arxiv.org/abs/1704.08747v1
{ "authors": [ "Arthur Baragar", "Alex Kontorovich" ], "categories": [ "math.HO", "math.MG", "51N2, 01A20" ], "primary_category": "math.HO", "published": "20170427211731", "title": "Efficiently constructing tangent circles" }
[email protected] Ohio State University

We report on laser-based ion acceleration using freely suspended liquid crystal film targets, formed with thicknesses varying from 100 nm to 2 μm for this experiment. Optimization of Target Normal Sheath Acceleration (TNSA) of protons is shown using a 1 × 10^20 W/cm^2, 30 fs laser with intensity contrast better than 10^-7:1. The optimum thickness was near 700 nm, resulting in a proton energy maximum of 24 MeV. Radiochromic film (RCF) was employed on both the laser and target normal axes, revealing minimal laser axis signal but a striking ring distribution in the low energy target normal ion signature that varies with liquid crystal thickness. Discussion of this phenomenon and a comparison to similar observations on other laser systems is included.

Study of accelerated ion energy and spatial distribution with variable thickness liquid crystal targets

P. L. Poole, C. Willis, C. D. Andereck, L. Van Woerkom, and D. W. Schumacher December 30, 2023

§ INTRODUCTION

As the repetition rate of ultrashort (30 fs), ultra-intense (I > 10^21 W/cm^2) laser systems increases to 10 Hz and beyond,<cit.> several applications are being pursued with renewed interest. Among them is hadron cancer therapy,<cit.> which requires both higher energy ions and higher repetition rate target delivery than have currently been demonstrated. For this application the proton energy determines the depth of energy deposition of the ion beam; therefore, understanding how to control the energy of laser-accelerated ions is critical. Investigations into the fundamental physics behind the various acceleration mechanisms promise other applications such as neutron beam generation, laboratory astrophysics, and positron production.<cit.>

Currently studied ion acceleration mechanisms can be distinguished roughly by two experimental parameters: the target thickness and the laser intensity contrast. Target Normal Sheath Acceleration (TNSA)<cit.> typically dominates for targets thicker than 1 μm and for lasers with moderate to poor intensity contrast. The laser pre-pulses create a plasma at the front of the target from which electrons are accelerated by the main pulse; the resulting electric field at the target rear surface can accelerate ions at this location to tens of MeV/nucleon energies in the target normal direction.

Higher contrast lasers and thinner targets can enable other mechanisms for relativistically intense lasers. Radiation Pressure Acceleration (RPA)<cit.> and the Break-Out Afterburner (BOA)<cit.> involve penetration of the laser past the classically expected plasma critical surface due to a relativistic modification to the electron plasma frequency. This relativistic transparency<cit.> can potentially accelerate the entire target volume along the laser axis, but only for sufficiently high contrast pulses. These newer acceleration methods have been studied in various simulations and experiments,<cit.> but details of their underlying physics and interplay with TNSA are still under investigation.

In addition to their energy spectrum, the spatial distribution of accelerated protons is also of interest, both for enabling applications and because it reveals physical processes governing the laser interaction and subsequent target evolution. For example, a ring of proton signal has been observed centered around the target normal direction for low energy ions, with several explanations as to the origin.
It has been suggested that this feature is due to front surface generated shocks that accelerate ions from that location<cit.> or that penetrate to the rear target in time for the low energy proton acceleration,<cit.> or to an interaction between early accelerated protons and heavier ions accelerated via relativistic transparency effects pushing them from behind.<cit.>Discussed here are the energy spectra and spatial distributions obtained during a study comparing the target normal and laser axis accelerated ions from a variety of target thicknesses using thin films of liquid crystal. This material can be drawn into freely suspended films with thicknesses from 10 nm to over 40 μ m;<cit.> this range includes (and extends well above) that which is necessary for accessing any of the currently studied ion acceleration processes. The target formation technique will be presented along with experimental energy and spatial distribution data, and discussed in comparison to related results. § LIQUID CRYSTAL FILM FORMATION The formation process for freely suspending liquid crystal films utilizes the smectic mesophase of 4-octyl-4'-cyanobiphenyl (8CB), which intrinsically forms in stacked molecular layers, allowing thickness variation, and with sufficient surface tension to form planar films within rigid apertures.<cit.> Creating uniform thickness films and then controlling that thickness can be done by tuning film formation parameters such as temperature (near 28.5 ^∘C) and volume (on the order of 100 nL). For the experiment described here films were formed in 4 mm diameter circular holes punched into copper plates roughly 5 mm × 10 mm × 1 mm in dimension. These plate target frames were initially placed into a copper block heated by resistive heaters (25 W maximum), and a type T thermocouple was used for temperature control and monitoring. Liquid crystal volume was applied to each frame individually with a precision syringe pump (Harvard Apparatus), allowed to equilibrate over several seconds, and then drawn across the aperture with a teflon-coated razor blade to minimize scratching to the copper surface. Film thickness was determined with a Filmetrics F20 multi-wavelength interference device, which measures thickness to within 2 nm. Films were formed repeatedly within a single frame until a desired thickness was achieved. If the correct volume and temperature parameters were used during the initial formation, the films could then be stored indefinitely: once the frame returned to room temperature the film would remain “frozen” at the formed thickness regardless of frame orientation, air currents, or even gross mechanical motion.§ EXPERIMENTAL SETUP The experiment was performed on the Scarlet laser facility, which optimally yields 400 TW from 12 J in 30 fs, and is routinely capable of intensities above 5 × 10^21 W/cm^2 due to a 2 μ m FWHM focal spot produced from an F/2 off-axis parabola. <cit.> For this experiment, reduced energy (5 J) and a defocused spot resulted in an intensity of ∼ 1 × 10^20 W/cm^2. The experimental chamber setup is shown in Fig. <ref>a, indicating the alignment and experimental diagnostics to be described below. The liquid crystal films were shot with p polarization at an angle of incidence of 22.5^∘. 
This was critical to determine the ion acceleration mechanism, since TNSA ions will travel along the target normal axis while mechanisms that require thinner targets are expected to produce ions that propagate along the laser axis.

Previous experiments on liquid crystals<cit.> revealed some difficulty in target alignment due to the uniform and transparent nature of the films. As such, a target alignment procedure involving confocal microscopy was developed.<cit.> Here an alignment laser was tightly focused at normal incidence onto the film, and the reflection was imaged onto a 10 μm single core fiber that served as a pinhole for the confocal system. Improper z alignment would result in an increased spot size once the alignment beam was relayed back to the fiber, such that the transmitted light signal was reduced. A variable gain photodiode was used to measure the reflected signal from thin, mostly transparent liquid crystal films as a function of target position. In this way the target z position could be determined with ± 1 μm accuracy.

The primary experimental diagnostics were a compact-design Thomson parabola spectrometer<cit.> placed along the target normal axis, and radiochromic film (RCF) stacks placed along both the target normal and laser axes. Here the target normal RCF had a 4 mm hole punched in it to allow line of sight from the target to the Thomson parabola. RCF stacks utilized EBT, MDV2, and MDV3 films (Gafchromic/Ashland Specialty Ingredients) with appropriate thicknesses of copper and aluminum foil in between to allow measurement of protons from 1-30 MeV. The details of the RCF energy deposition curves and chamber placement are shown in Fig. <ref>b and c. Energy deposition plots were calculated in the CSDA approximation in SRIM <cit.> using stoichiometric and thickness data from the film manufacturer.<cit.> When extracting data from films, the energy dose was corrected for under-response at the Bragg peak due to high linear energy transfer according to an empirical fit.<cit.> The diagnostic utilized four or six RCF layers, spanning ion energy ranges up to 11 or 14 MeV, respectively. The laser axis RCF was mounted to a structure above the objective of the focal spot alignment camera, such that lowering this objective to prevent damage during a shot brought the RCF into its appropriate position.

Liquid crystal films were formed in the copper frames (shown in the inset of Fig. <ref>a) on a setup bench to the desired thicknesses. They were then transported to and installed within the experimental chamber target positioner. After chamber evacuation and target alignment, the film thickness was verified with a Filmetrics device that had been set up outside a chamber port and optically relayed to the film. Not only did films typically survive installation, chamber evacuation, and target alignment, but they also maintained their original formation thickness.

§ RESULTS

Figure <ref> shows the maximum proton energy along the target normal axis recorded on the Thomson parabola spectrometer as a function of target thickness. Most of these shots are from 8CB films, with a few 100 nm Si_3N_4 and 2 μm copper foil targets for comparison. A maximization of the TNSA proton energy can be seen around 700 nm target thickness, with protons reaching 24 MeV, a factor of 2 optimization over the cutoff energy from thicknesses only 200 nm different.
Additionally, this proton energy from only 5 J incident on target constitutes a high TNSA energy for a Ti:sapphire based system.<cit.>

The decrease in proton energy for thicknesses below 700 nm is attributed at least in part to insufficiently clean laser contrast. There was a pre-pulse 160 ps prior to the main pulse, seven orders of magnitude lower in intensity; the pre-plasma resulting from this would have affected thinner targets more strongly. The highest energy protons coming from 700 nm thick targets likely reflect a trade-off: the target must be thick enough to survive the initial prepulse, yet thin enough that electrons accelerated from the resulting pre-plasma have less time to diverge, and therefore make a stronger sheath field at the target rear surface, and also spend more of their refluxing path outside the target, where they can contribute to the surface sheath fields.<cit.>

RCF data was also collected on the target normal and laser axes. No appreciable laser axis signal was seen for any thickness, which is possibly attributable to the laser contrast inhibiting thin target acceleration. However, nearly every liquid crystal shot resulted in a ring of high density signal for the low energy protons, roughly centered on the target normal axis. An example RCF stack featuring this is shown in Fig. <ref>. This feature was not prominent in the 100 nm Si_3N_4 targets, was not seen at all in the 2 μm Cu targets, and was not affected by varying the pulse duration from 30 to 100 fs.

Other experiments have shown this sort of annular feature in low energy protons onto RCF, with varying explanations. One possibility is that the ring of protons comes from a different mechanism than TNSA, for example from front-surface collisionless shock acceleration.<cit.> However, this effect would be enhanced with a longer pulse, which would drive the front-side acceleration longer, but no such effect was observed when the pulse duration was increased here. Another possibility is that prepulse target heating causes an expansion of the rear surface,<cit.> especially when the prepulse arrives > 100 ps prior to the main pulse, as was the case here. This shaped rear surface would give a variation in the target normal vector, pointing radially outward from the laser spot location, and one which evolved over time as the target continued to expand. In this case, the largest rear surface deformation should occur at late times, which is when the slowest ions are accelerated; one would then expect the high energy ions to all lie within the ring on later layers of the RCF, since they are accelerated before this deformation has occurred. This was not the case for the high energy ions here, as will be shown below. Finally, recent work<cit.> suggests this ring is an artifact of relativistic transparency onset, as was shown on the 200 J VULCAN laser for 40 nm targets. While relativistic transparency is reasonable for those conditions, it is not expected for 700 nm thick targets at the 5 J energies used in this experiment.

To investigate the origin of this ring feature and its dependence on target parameters like thickness, a detailed RCF analysis was performed to combine the information recorded on the individual stacks into spatial maps of spectral information.
This was done by binning the ions at energies corresponding to the peak response of each RCF layer. Due to the broad high energy tail in the RCF response functions, the observed dose of each film can be considered as a sum of ions from its own bin and all higher energy bins, weighted by the film response function at each bin energy. A weighted subtraction of the RCF film data then isolates the ion flux at each energy bin. For more closely spaced energy bins, recent unfolding techniques from Schollmeier et al.<cit.> provide more accurate ion spectra but require optimization, relying on minimization of higher order derivatives to avoid unphysical oscillation in the calculated spectra. For the 4-6 energy bins in this experiment, the approximation used here avoids this oscillatory behavior, resulting in a more physically accurate spectral fit. As the bin energies are set at the response function maxima, results calculated with this method represent lower bounds.

A resulting spectral unfold plot is shown in Fig. <ref>a, where the angle of the highest dose region on each RCF layer is recorded for a variety of thicknesses. Here 0^∘ corresponds to the target normal axis, as measured to be the center of the through hole punched to allow the Thomson parabola spectrometer line of sight to target chamber center. All liquid crystal shots show the same slope of ring angle increasing at higher energies (deeper RCF layers). Figure <ref>b plots the ring angle as observed on the first non-saturated RCF layer as a function of thickness, revealing that thicker targets generate rings with less divergence, possibly due to being less susceptible to prepulse-generated rear target expansion.<cit.>

Additionally, the individual RCF layers were combined into an integrated ion flux spatial map, as shown in Fig. <ref>a. This was done by applying the spectral unfold described above as a function of position. The dose-weighted average ion slope temperature of Fig. <ref>b was defined to be

T_slope = √( -𝒩 / (2 ⟨∂N(ϵ)/∂ϵ⟩_𝒩) ),   ⟨∂N(ϵ)/∂ϵ⟩_𝒩 = ( ∫ (∂N(ϵ)/∂ϵ) N(ϵ) dϵ ) / ( ∫ N(ϵ) dϵ ),

where N(ϵ) is the extracted ion spectral function expressed in terms of ion energy ϵ, and 𝒩 is the total integrated ion flux at a given position. This expression allows an estimate of the slope temperature calculated using the highest flux ions, critically preventing lower-dose electron and x-ray background from having a dramatic effect on the calculated temperature. This dose-weighted approximate slope temperature reduces to the real value in the case of a true Boltzmann spectrum input. The result shows that the highest temperature ions are outside the ring feature.

This analysis, showing an increase of the ring angle with increasing ion energy, is unexpected given the previous explanations. A possible reconciliation is to assume that the lowest energy protons are those that can most easily be caught by heavier ions accelerated at later times of TNSA (expected because of their lower charge to mass ratio). The fastest carbon species, accelerated centrally from the target rear normal, could snowplow the slowest protons both forward and outward, coupling these two motions such that the most affected protons are both more energetic and diverging to a larger radius. The necessity of two ion species would explain why this result was seen for the liquid crystal targets but not for copper, which has a much lower population of carbon ions to affect protons (these coming only from the hydrocarbon contaminant layer on the back of the metal foil).
This sort of multi-species interaction has been seen in some recent simulations,<cit.> with the key difference here being that relativistic effects were likely not present at this laser energy and contrast. Further simulations are required to ascertain the exact nature of the rear surface expansion and multi-species acceleration.

§ CONCLUSION

We have demonstrated the optimization of TNSA protons recorded along the target normal direction with both a Thomson parabola spectrometer and RCF, obtaining 24 MeV with 5 J of incident energy. Fine thickness variation was possible with liquid crystal films, and techniques adapted after this experiment now allow film formation in-situ at repetition rates surpassing once per minute.<cit.> The proton spatial distribution has been analyzed from liquid crystal shots, showing possible evidence for the interplay between both target rear surface deformation and multi-species interaction during acceleration. Additional experiments and simulations are underway to investigate these findings.

§ ACKNOWLEDGMENTS

This work was supported by the DARPA PULSE program through a grant from AMRDEC and by the US Department of Energy under contract DE-NA0001976.
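Although the analysis pipeline itself is not reproduced in this paper, the weighted-subtraction unfold and the dose-weighted slope temperature described above admit a short illustration (our own sketch; the response matrix and spectra below are placeholders, not measured values):

import numpy as np

def unfold_bins(dose, R):
    """Weighted subtraction of RCF layer doses.
    dose[k] : observed dose on layer k (bins ordered low -> high energy);
    R[k, j] : response of layer k to an ion in bin j (nonzero for j >= k,
              since each layer also responds to all higher-energy bins).
    Solves R @ flux = dose by back-substitution, highest bin first."""
    n = len(dose)
    flux = np.zeros(n)
    for k in range(n - 1, -1, -1):
        flux[k] = (dose[k] - R[k, k + 1:] @ flux[k + 1:]) / R[k, k]
    return flux

def slope_temperature(eps, N):
    """Dose-weighted slope temperature T = sqrt(-Ntot / (2 <dN/deps>_N)),
    which reduces to the true temperature for a Boltzmann spectrum."""
    dNde = np.gradient(N, eps)
    Ntot = np.trapz(N, eps)
    avg = np.trapz(dNde * N, eps) / Ntot
    return np.sqrt(-Ntot / (2.0 * avg))

# Toy self-checks:
R = np.triu(np.ones((4, 4)))           # each layer sees its own and higher bins
dose = R @ np.array([5.0, 3.0, 2.0, 1.0])
print(unfold_bins(dose, R))            # recovers [5, 3, 2, 1]
eps = np.linspace(0.1, 30.0, 2000)
print(slope_temperature(eps, np.exp(-eps / 3.0)))   # ~3 for a T = 3 spectrum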
http://arxiv.org/abs/1704.08287v1
{ "authors": [ "P. L. Poole", "C. Willis", "C. D. Andereck", "L. Van Woerkom", "D. W. Schumacher" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20170426184112", "title": "Study of accelerated ion energy and spatial distribution with variable thickness liquid crystal targets" }
School of Physics, University of the Witwatersrand, Private Bag 3, WITS-2050, Johannesburg, South Africa. [email protected], [email protected]

The recent Madala hypothesis, a conjecture that seeks to explain anomalies within Large Hadron Collider (LHC) data (particularly in the transverse momentum of the Higgs boson), is interesting for more than just a statistical hint at unknown and unpredicted physics. This is because the model itself contains additional new particles that may serve as Dark Matter (DM) candidates. These particles interact with the Standard Model via a scalar mediator boson S. More interesting still, the conjectured mass range for the DM candidate (65 - 100 GeV) lies within the region of models that are viable for explaining the recent Galactic Centre (GC) gamma-ray excess seen by the Fermi Large Area Telescope (Fermi-LAT) and the High Energy Stereoscopic System (HESS). Therefore, assuming S decays promptly, it should be possible to check what constraints are imposed upon the effective DM annihilation cross-section in the Madala scenario by hunting for signatures of S decay that follows DM annihilation within dense astrophysical structures. In order to make use of existing data, we use the Reticulum II dwarf galaxy and the galactic centre gamma-ray excess data sets from Fermi-LAT, and compare these to the consequences of various decay paths for S in the aforementioned environments. We find that, based on this existing data, we can limit the τ lepton, quark, direct gamma-ray, and weak boson channels to levels below the canonical relic cross-section. This allows us to set new limits on the branching ratios of S decay, which can rule out a Higgs-like decay branching for S in the case where the Madala DM candidate is assumed to comprise all DM.

§ INTRODUCTION

The unknown nature of dark matter (DM) sits as a major eye-sore on the completeness of the concordance cosmology. Multiple methods are being pursued in order to catch a glimpse of this elusive substance. In particular, collider searches like the LHC ATLAS <cit.> and CMS <cit.> experiments can probe numerous models of theoretical interest <cit.>. However, there is one promising candidate to be found within the Madala hypothesis <cit.>. This hypothesis was put forward to explain anomalies in existing LHC data but also conveniently provides a DM candidate (see Section <ref> for details). This candidate is especially interesting because kinematic considerations place it in the mass range 65 to 100 GeV <cit.>. This region is significant as it lies within the collection of models that might be compatible with explaining the gamma-ray excess seen in the galactic centre <cit.> by Fermi-LAT <cit.> and HESS <cit.>, as well as anti-particle excesses, according to the consideration of modelling uncertainties performed by <cit.>.

For these reasons we will examine the consequences of the annihilation of DM introduced by the Madala hypothesis within both the galactic centre and the promising dwarf galaxy target Reticulum II. Using Fermi-LAT gamma-ray data from these targets <cit.>, we can formulate limits on possible annihilation cross-sections for multiple channels connecting the Madala DM to the Standard Model, by comparing the existing data to gamma-ray fluxes expected from DM annihilation within these astrophysical structures. In addition to this, we will examine the scalar S that mediates between the DM particle and the Standard Model, comparing the limits we can place on the decay branching ratios of S to the assumption that S is Higgs-like <cit.>.
This will be performed under the assumption that Madala DM constitutes all DM in the universe (and thus has an annihilation cross-section set to the canonical relic value <cit.>).

This paper is structured as follows: in Section <ref> we provide more detail on the galactic centre gamma-ray excess as well as observations of Reticulum II. In Section <ref> we will detail the relevant aspects of the Madala hypothesis and its bearing on the problem of DM. In Section <ref> we will detail the formalism used to calculate gamma-ray emissions from the GC and Reticulum II. Finally, in Section <ref> we present and discuss our results.

§ THE GALACTIC CENTRE GAMMA-RAY EXCESS AND RETICULUM II

The galactic centre has been a prime target for DM hunts in gamma-rays ever since the discovery of unexpected gamma-ray excesses around 1 - 10 GeV <cit.>. This is evinced by the numerous works dedicated to using DM to explain the excess <cit.>. In our work here we will make use of the spectrum for the excess within the region of interest (ROI) between 1^∘ and 20^∘ from the galactic centre found by <cit.>. This choice of ROI is a standard when analysing galactic centre gamma-ray data (see <cit.> and references therein) in order to avoid the emissions of the powerful Sagittarius A^* complex within the galactic core region.

Reticulum II is a faint dwarf galaxy recently found by the Dark Energy Survey (DES) project <cit.>. It is notable because it is calculated to possess a very large J-factor <cit.>, a parameter that tracks the density of dark matter within halos. The density is abetted by the fact that Reticulum II is very close to Earth (30 kpc away <cit.>), thus reducing inverse-square flux attenuation. This particular dwarf was the source of some speculation, as there appeared a small excess in its gamma-ray spectrum observed by Fermi-LAT <cit.> (attributed by the aforementioned authors, at the 2σ confidence level, to a WIMP with relic cross-section and a mass around 60 GeV). However, subsequent Fermi collaboration analysis revealed that the excess could not be associated with DM when considered against other dwarf galaxy targets <cit.>. This was reinforced by an analysis in <cit.>, which concluded that the Reticulum II DM model would produce unacceptable excesses in the radio and gamma-ray spectra of other targets. Despite this, Reticulum II makes an ideal test-bed for producing constraints on DM models, as a large J-factor means large DM-induced fluxes from annihilation, which can then be compared to the Fermi-LAT upper limits. We make use of the upper limits established on gamma-ray fluxes from Reticulum II as used in <cit.>.

§ MADALA HYPOTHESIS AND DARK MATTER

The Madala hypothesis is one that sets out to explain anomalies in LHC data concerning the transverse momentum of the Higgs boson within LHC collisions (among others; see <cit.>). It does so through the introduction of a set of particles: a large scalar "Madala" boson H (∼ 270 GeV mass) which is Higgs-like, and a scalar S with a mass range 130 - 200 GeV which couples a dark particle χ to the standard model and to H. This scalar S acts as a mediator between the standard model and the dark particles proposed in the Madala hypothesis (as seen in Fig. <ref>). Therefore, in order to explore what kinds of indirect astrophysical signatures might be expected from DM that results from this conjecture, one must concentrate upon the possible couplings, and thus decay paths, between S and the Standard Model.
Such limits may be of particular interest in determining which couplings to the standard model are permissible for S given current astrophysical data.

§ DARK MATTER HALOS AND GAMMA-RAY FLUX

In a given DM halo, the differential gamma-ray flux resulting from annihilation can be specified by

ϕ(E_γ, ΔΩ, l) = (1/4π) (⟨σV⟩ / 2m_χ^2) (dN_γ/dE_γ) ∫_ΔΩ ∫_l ρ^2(r) dl' dΩ',

where E_γ is the gamma-ray energy, m_χ is the mass of the WIMP, ρ is the DM halo spatial density profile, ⟨σV⟩ is the velocity-averaged thermal annihilation cross-section, and dN_γ/dE_γ is the γ-ray yield from S decay following WIMP annihilations (sourced from PYTHIA <cit.> routines in DarkSUSY <cit.> as well as <cit.>). In this work we will assume S decays promptly; thus we will place a limit on the effective annihilation cross-section for the process χχ → S → SM.

The expression Eq. (<ref>) can be simplified by splitting it into two factors. The first is the astrophysical "J"-factor, which encompasses the above two integrals,

J(ΔΩ, l) = ∫_ΔΩ ∫_l ρ^2(r) dl' dΩ',

with the integral being extended over the line of sight l, and ΔΩ is the observed solid angle. The second factor is determined only by particle physics:

ψ(E_γ) = (1/4π) (⟨σV⟩ / 2m_χ^2) (dN_γ/dE_γ).

Thus the flux will be found from

ϕ(E_γ) = ψ(E_γ) × J(ΔΩ, l).

For the Reticulum II dwarf we take the J-factor to be 2.0 × 10^19 GeV^2 cm^-5 <cit.>. However, for the galactic centre we follow the methodology of <cit.> and use a contracted NFW profile <cit.>

ρ_N(r, r_s, η) = ρ_s / [ (r/r_s)^η (1 + r/r_s)^{3-η} ],

where we take the scale radius r_s = 20 kpc following <cit.>, and ρ_s is defined by ensuring that ρ = 0.4 GeV cm^-3 at a radius of 8.5 kpc. We then calculate the average J-factor for a profile with η = 1.2 for the ROI between 1^∘ and 20^∘ (as explained in Section <ref>) from the galactic centre using formula Eq. (<ref>).

§ RESULTS AND DISCUSSION

In Figure <ref> we display the limits derived from gamma-ray fluxes in the galactic centre and Reticulum II on the effective annihilation cross-section into a variety of standard model particles (assuming a branching fraction of b_f = 1 for each channel individually). The most significant region of the plot is the purple Madala mass band, where the DM masses correspond with the mass range expected for the boson S. Here we see that, for the channels τ^+τ^-, qq̅, and direct gamma-ray production, we can use both targets to explore the region of the parameter space in which the Madala DM constitutes all the DM in the universe (relic band). In the case of Reticulum II we can also do this for the Higgs channel. For the weak boson channels we can probe below the relic band of cross-sections only in Reticulum II. We note that neither of these targets can be used to rule out the entire region of the Madala band that overlaps the galactic centre excess region from <cit.>. However, we stress that Reticulum II has only extant upper-limits on its gamma-ray flux, thus we might expect these already strong constraints to improve with further observations.

The significance of these limits can be understood as follows: if the cross-section can be constrained below the relic level then we rule out the DM model as a candidate for all DM, as its present abundance would be too great to match cosmological constraints.
However, since we assume b_f = 1 in each case, we can instead derive a limit on the decay branching for S should the Madala DM particle constitute all DM (assuming ⟨σV⟩ = 3 × 10^-26 cm^3 s^-1 for all channels). The results of this analysis are shown in Table <ref>. Any entry with a dash signifies that no constraint can be derived (we leave out the light leptons as this is true for all masses). The final column shows the branching ratios for the Standard Model Higgs boson with mass equal to that of S in each case (as S is assumed Higgs-like <cit.>).

What is evident from these results is that decay of S into quarks, into gamma-rays, and into W and Z bosons (not shown in the table) can all be constrained below ∼ 30%. For W bosons this means that the limit on the branching of S rules out the Higgs-like case (which ranges from ∼ 0.5 to ∼ 0.75 in the suggested S mass range <cit.>), while for qq̅ our constraint rules out the Higgs-like range for S masses below ≲ 145 GeV. Our constraints for direct photon and Z decays cannot rule out a Higgs-like S. Limits on the decay of S into Higgs bosons show large variability with WIMP mass. This arises from the movement of the gamma-ray resonant peak within the data domain for each source. Due to the hardness of the resulting gamma-ray spectrum <cit.>, the τ lepton channel is subject to weaker constraints, with a ∼ 70% limit being possible only for the largest masses considered; thus this channel does not affect the Higgs-like S case. The branching ratios of S into light leptons cannot be constrained at all.

This leaves considerable room for a Madala WIMP to constitute all DM, but raises serious doubts about a Higgs-like S if the Madala hypothesis is to account for > 50% of DM. Finally, future radio frequency searches with the Square Kilometre Array <cit.> can probe significantly lower cross-sections <cit.> (even in the presence of large radio background fluxes) and will thus be integral in further constraint, or dismissal, of the Madala hypothesis and its attendant particles.

§ ACKNOWLEDGMENTS

S.C. acknowledges support by the South African Research Chairs Initiative of the Department of Science and Technology and National Research Foundation and by the Square Kilometre Array (SKA). G.B. acknowledges support from a post-doctoral grant through the same initiative and institutions.

§ REFERENCES

[atlas-docs] G. Aad et al., JINST 3, S08003 (2008).
[cms-docs] M. Della Negra, A. Petrilli, A. Herve, & L. Foa, CMS Physics Technical Design Report Volume I: Software and Detector Performance (2008), <http://doc.cern.ch//archive/electronic/cern/preprints/lhcc/public/lhcc-2006-001.pdf>.
[lhc1] H. M. Lee, M. Park, & W. Park, Phys. Rev. D 86, 103502 (2012).
[lhc2] J. Brooke, M. R. Buckley, P. Dunne, B. Penning, J. Tamanas, & M. Zgubič, Phys. Rev. D 93, 113013 (2016).
[lhc3] F. Bishara & J. J. Zupan, JHEP 1601, 010 (2016).
[madala1] S. von Buddenbrock, N. Chakrabarty, A. S. Cornell, D. Kar, M. Kumar, T. Mandal, B. Mellado, B. Mukhopadhyaya, & R. G. Reed, preprint, arXiv:1506.00612 [hep-ph] (2015).
[madala2] S. von Buddenbrock, N. Chakrabarty, A. S. Cornell, D. Kar, M. Kumar, T. Mandal, B. Mellado, B. Mukhopadhyaya, R. G. Reed, & X. Ruan, Eur. Phys. J. C 76, 580 (2016), arXiv:1606.01674 [hep-ph].
[hessexcess] M. Ackermann et al. (Fermi-LAT collaboration), Phys. Rev. D 82, 092004 (2010), arXiv:1008.3999.
[fermiexcess] F. Aharonian et al. (HESS collaboration), Astron. Astrophys. 425, L13 (2004).
[fermi-docs] W. B. Atwood et al. (Fermi-LAT collaboration), Astrophys. J. 697, 1071 (2009), arXiv:0902.1089 [astro-ph].
[hess-details] <https://www.mpi-hd.mpg.de/hfm/HESS/>.
[calore2014] F. Calore, I. Cholis, C. McCabe, & C. Weniger, Phys. Rev. D 91, 063003 (2015), arXiv:1411.4647 [astro-ph].
[Fermidwarves2015] A. Drlica-Wagner et al. (Fermi-LAT collaboration) & T. Abbott et al. (DES collaboration), ApJ 809, L4 (2015), arXiv:1503.02632 [astro-ph].
[daylan2016] T. Daylan, D. P. Finkbeiner, D. Hooper, T. Linden, S. K. N. Portillo, N. L. Rodd, & T. L. Slatyer, Physics of the Dark Universe 12, 1 (2016), arXiv:1402.6703.
[jungman1996] G. Jungman, M. Kamionkowski, & K. Griest, Phys. Rep. 267, 195 (1996).
[dmgc1] L. Goodenough & D. Hooper, preprint, arXiv:0910.2998 (2009).
[dmgc2] D. Hooper & L. Goodenough, Phys. Lett. B 697, 412 (2011), arXiv:1010.2752.
[dmgc3] A. Boyarsky, D. Malyshev, & O. Ruchayskiy, Phys. Lett. B 705, 165 (2011), arXiv:1012.5839.
[dmgc4] D. Hooper & T. Linden, Phys. Rev. D 84, 123005 (2011), arXiv:1110.0006.
[dmgc5] K. N. Abazajian & M. Kaplinghat, Phys. Rev. D 86, 083511 (2012), arXiv:1207.6047.
[dmgc6] C. Gordon & O. Macias, Phys. Rev. D 88, 083521 (2013), arXiv:1306.5725.
[dmgc7] K. N. Abazajian, N. Canac, S. Horiuchi, & M. Kaplinghat, Phys. Rev. D 90, 023526 (2014), arXiv:1402.4090.
[des] <http://www.darkenergysurvey.org>.
[desdwarf] K. Bechtol et al. (DES collaboration), ApJ 807, 50 (2015), arXiv:1503.02584 [astro-ph.GA].
[bonnivard2015] V. Bonnivard, C. Combet, D. Maurin, A. Geringer-Sameth, S. M. Koushiappas, M. G. Walker, M. Mateo, E. Olszewski, & J. I. Bailey III, ApJ 808, L36 (2015).
[geringer-sameth2015] A. Geringer-Sameth, M. G. Walker, S. M. Koushiappas, S. E. Koposov, V. Belokurov, G. Torrealba, & N. W. Evans, Phys. Rev. Lett. 115, 081101 (2015), arXiv:1503.02320 [astro-ph].
[beck2016] G. Beck & S. Colafrancesco, JCAP 05, 013 (2016).
[pythia] T. Sjöstrand, Comput. Phys. Commun. 82, 74 (1994).
[darkSUSY] P. Gondolo, J. Edsjo, P. Ullio, L. Bergstrom, M. Schelke, & E. A. Baltz, JCAP 0407, 008 (2004).
[ppdmcb1] M. Cirelli, G. Corcella, A. Hektor, G. Hütsi, M. Kadastik, P. Panci, M. Raidal, F. Sala, & A. Strumia, JCAP 1103, 051 (2011); Erratum: JCAP 1210, E01 (2012), arXiv:1012.4515.
[ppdmcb2] P. Ciafaloni, D. Comelli, A. Riotto, F. Sala, A. Strumia, & A. Urbano, JCAP 1103, 019 (2011), arXiv:1009.0224.
[nfw1996] J. F. Navarro, C. S. Frenk, & S. D. M. White, ApJ 462, 563 (1996).
[smhiggs] S. Heinemeyer (ed.) et al. for the LHC Higgs Cross Section Working Group, Handbook of LHC Higgs Cross Sections: 3. Higgs Properties (CERN, 2013), arXiv:1307.1347 [hep-ph].
[ska] P. Dewdney, W. Turner, R. Millenaar, R. McCool, J. Lazio, & T. Cornwell, SKA baseline design document (2012), <http://www.skatelescope.org/wp-content/uploads/2012/07/SKA-TEL-SKO-DD-001-1_BaselineDesign1.pdf>.
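As an illustration of the halo formalism of the gamma-ray flux section, the following sketch (our own code, not the analysis pipeline used for the results above) evaluates the line-of-sight integral of the contracted NFW profile and averages it over the 1^∘ - 20^∘ ROI; the 100 kpc integration cutoff and the per-steradian normalization are our assumptions for demonstration:

import numpy as np
from scipy.integrate import quad

KPC_CM = 3.0857e21            # cm per kpc
d, r_s, eta = 8.5, 20.0, 1.2  # Sun-GC distance and scale radius in kpc

def shape(r):
    """Dimensionless contracted NFW shape, rho_N(r)/rho_s."""
    return 1.0 / ((r / r_s) ** eta * (1.0 + r / r_s) ** (3.0 - eta))

rho_s = 0.4 / shape(d)        # GeV cm^-3, so that rho(8.5 kpc) = 0.4

def rho(r):
    return rho_s * shape(r)

def J_los(theta):
    """Line-of-sight integral of rho^2 at angle theta from the GC,
    in GeV^2 cm^-5 (the 'l' part of the J-factor)."""
    f = lambda s: rho(np.sqrt(d**2 + s**2 - 2.0*d*s*np.cos(theta)))**2
    val, _ = quad(f, 0.0, 100.0, limit=200)   # integrate out to 100 kpc
    return val * KPC_CM

# Solid-angle average over the 1-20 degree ROI (weight ~ sin(theta)).
th = np.radians(np.linspace(1.0, 20.0, 100))
Jbar = (np.trapz([J_los(t) * np.sin(t) for t in th], th)
        / np.trapz(np.sin(th), th))
print(f"<J> ~ {Jbar:.2e} GeV^2 cm^-5 sr^-1")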
http://arxiv.org/abs/1704.08031v1
{ "authors": [ "Geoff Beck", "Sergio Colafrancesco" ], "categories": [ "astro-ph.CO", "hep-ph" ], "primary_category": "astro-ph.CO", "published": "20170426092907", "title": "What Can Gamma-rays from Space tell us About the Madala Hypothesis?" }
More than six hundred new families of Newtonian periodic planar collisionless three-body orbits

Xiaoming Li^1 and Shijun Liao^1,2,*

^1 School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiaotong University, China
^2 Ministry-of-Education Key Laboratory in Scientific and Engineering Computing, Shanghai 200240, China
^* The corresponding author: [email protected]

The famous three-body problem can be traced back to Isaac Newton in the 1680s. In the 300 years since this "three-body problem" was first recognized, only three families of periodic solutions had been found, until 2013 when Šuvakov and Dmitrašinović [Phys. Rev. Lett. 110, 114301 (2013)] made a breakthrough to numerically find 13 new distinct periodic orbits, which belong to 11 new families of the Newtonian planar three-body problem with equal mass and zero angular momentum. In this paper, we numerically obtain 695 families of Newtonian periodic planar collisionless orbits of the three-body system with equal mass and zero angular momentum in the case of initial conditions with isosceles collinear configuration, including the well-known Figure-eight family found by Moore in 1993, the 11 families found by Šuvakov and Dmitrašinović in 2013, and more than 600 new families that have never been reported, to the best of our knowledge. With the definition of the average period T̅ = T/L_f, where L_f is the length of the so-called "free group element", these 695 families suggest that there should exist the quasi Kepler's third law T̅^* ≈ 2.433 ± 0.075 for the considered case, where T̅^* = T̅ |E|^{3/2} is the scale-invariant average period and E is the total kinetic and potential energy of the system. The movies of these 695 periodic orbits in the real space and the corresponding closed curves on the "shape sphere" can be found via the website: <http://numericaltank.sjtu.edu.cn/three-body/three-body.htm>

PACS numbers 45.50.Jf, 05.45.-a, 95.10.Ce

§ INTRODUCTION

The famous three-body problem <cit.> can be traced back to Isaac Newton in the 1680s. According to Poincaré <cit.>, a three-body system is not integrable in general. Besides, orbits of the three-body problem are often chaotic <cit.>, say, sensitive to initial conditions <cit.>, although there exist periodic orbits in some special cases. In the 300 years since this "three-body problem" <cit.> was first recognized, only three families of periodic solutions had been found, until 2013 when Šuvakov and Dmitrašinović <cit.> made a breakthrough to find 13 new distinct periodic collisionless orbits belonging to 11 new families of the Newtonian planar three-body problem with equal mass and zero angular momentum. Before their elegant work, only three families of periodic three-body orbits were found: (1) the Lagrange-Euler family, discovered by Lagrange and Euler in the 18th century; (2) the Broucke-Hadjidemetriou-Hénon family <cit.>; (3) the Figure-eight family, first discovered numerically by Moore <cit.> in 1993 and rediscovered by Chenciner and Montgomery <cit.> in 2000, and then extended to the rotating case <cit.>.
In 2014, Li and Liao <cit.> studied the stability of the periodic orbits in <cit.>. In 2015, Hudomal <cit.> reported 25 families of periodic orbits, including the 11 families found in <cit.>. Some studies on the topological dependence of Kepler's third law for the three-body problem were recently reported <cit.>.

Recently, Šuvakov and Dmitrašinović <cit.> specifically illustrated the numerical strategies used in <cit.> for their 11 families of periodic orbits with periods T ≤ 100. They suggested that more new periodic solutions are expected to be found when T ≥ 100. In this paper, we used a different numerical approach to solve the same problem, i.e. the Newtonian planar three-body problem with equal mass and zero angular momentum, but gained 695 families of periodic orbits without collision, i.e. 229 families within T ≤ 100 and 466 families within 100 < T ≤ 200, including the well-known Figure-eight family found by Moore <cit.>, the 11 families found by Šuvakov and Dmitrašinović <cit.>, the 25 families mentioned in <cit.>, and especially more than 600 new families that have never been reported, to the best of our knowledge.

§ NUMERICAL APPROACHES

The motions of a Newtonian planar three-body system are governed by Newton's second law and the law of gravitation,

r̈_i = ∑_{j=1, j≠i}^{3} G m_j (r_j - r_i) / |r_i - r_j|^3,

where r_i and m_j are the position vector and mass of the ith body (i = 1,2,3), G is the Newtonian gravity coefficient, and the dot denotes the derivative with respect to the time t, respectively. Like Šuvakov and Dmitrašinović <cit.>, we consider a planar three-body system with zero angular momentum in the case of G = 1, m_1 = m_2 = m_3 = 1, and the initial conditions in the case of the isosceles collinear configurations:

r_1(0) = (x_1, x_2) = -r_2(0),   r_3(0) = (0, 0),
ṙ_1(0) = ṙ_2(0) = (v_1, v_2),   ṙ_3(0) = -2 ṙ_1(0),

which are specified by the four parameters (x_1, x_2, v_1, v_2). Write y(t) = (r_1(t), ṙ_1(t)). A periodic solution with the period T_0 is a root of the equation y(T_0) - y(0) = 0, where T_0 is unknown. Note that x_1 = -1 and x_2 = 0 correspond to the normal case considered in <cit.>, which regards r_1(0) = (-1, 0) as fixed. However, unlike Šuvakov and Dmitrašinović <cit.>, we regard x_1 and x_2 as variables. So, mathematically speaking, we search for the periodic orbits of the same three-body problem using a larger degree of freedom than Šuvakov and Dmitrašinović <cit.>.

First, like Šuvakov and Dmitrašinović <cit.>, we use the grid search method to find candidates of the initial conditions y(0) = (x_1, x_2, v_1, v_2) for periodic orbits. As is well known, the grid search method suffers from the curse of dimensionality. In order to reduce the dimension of the search space, we set the initial positions x_1 = -1 and x_2 = 0. Then, we search for the initial conditions of periodic orbits in the two dimensional plane: v_1 ∈ [0,1] and v_2 ∈ [0,1]. We set 1000 points in each dimension and thus have one million grid points in the square search plane. With these different 10^6 initial conditions, the motion equations (<ref>) subject to the initial conditions (<ref>) are integrated up to the time t = 100 by means of the ODE solver dop853 developed by Hairer et al. <cit.>, which is based on an explicit Runge-Kutta method of order 8(5,3) in double precision with adaptive step size control.
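A compact sketch of this first, grid-search stage is given below (our own illustration using SciPy's DOP853 integrator, which implements the same Hairer 8(5,3) scheme; the grid is drastically coarsened for brevity, and the candidate threshold of 10^-1 anticipates the return proximity function defined in the next paragraph):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    """Planar three-body equations of motion with G = m_1 = m_2 = m_3 = 1;
    y = (x1, y1, x2, y2, x3, y3, vx1, vy1, ..., vy3)."""
    r = y[:6].reshape(3, 2)
    a = np.zeros((3, 2))
    for i in range(3):
        for j in range(3):
            if i != j:
                dvec = r[j] - r[i]
                a[i] += dvec / np.linalg.norm(dvec) ** 3
    return np.concatenate([y[6:], a.ravel()])

def state0(v1, v2, x1=-1.0, x2=0.0):
    """Isosceles collinear initial condition specified by (x1, x2, v1, v2)."""
    return np.array([x1, x2, -x1, -x2, 0.0, 0.0,
                     v1, v2, v1, v2, -2.0 * v1, -2.0 * v2])

def proximity_minimum(v1, v2, t_max=100.0):
    """Minimum over the trajectory of |y(t) - y(0)|, with y = (r_1, rdot_1);
    small values flag candidate periodic orbits."""
    y0 = state0(v1, v2)
    sol = solve_ivp(rhs, (0.0, t_max), y0, method="DOP853",
                    max_step=0.05, rtol=1e-10, atol=1e-10)
    if not sol.success:              # e.g., a near-collision stalled the step
        return np.inf
    mask = sol.t > 1.0               # skip early times, trivially near y(0)
    d = sol.y[[0, 1, 6, 7]][:, mask] - y0[[0, 1, 6, 7], None]
    return np.sqrt((d ** 2).sum(axis=0)).min()

# Coarse scan of the (v1, v2) square (the paper uses 1000 x 1000 and finer).
for v1 in np.linspace(0.1, 0.9, 9):
    for v2 in np.linspace(0.1, 0.9, 9):
        if proximity_minimum(v1, v2) < 0.1:
            print(f"candidate: v1 = {v1:.3f}, v2 = {v2:.3f}")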
The corresponding initial conditions and the period T_0 are chosen as candidates when the return proximity function

|y(T_0) - y(0)| = √(∑_{i=1}^{4} (y_i(T_0) - y_i(0))^2)

is less than 10^-1. Secondly, we refine these candidate initial conditions by means of the Newton-Raphson method <cit.>. At this stage, the equations of motion are solved numerically by means of the same ODE solver dop853 <cit.>. A periodic orbit is found when the level of the return proximity function (<ref>) is less than 10^-6. Note that, differently from the numerical approach in <cit.>, not only the initial velocity ṙ_1(0) = (v_1, v_2) but also the initial position r_1(0) = (x_1, x_2) is modified. In other words, our numerical approach allows r_1(0) = (x_1, x_2) to deviate from its initial guess (-1, 0). With these additional degrees of freedom, our approach gives 137 families of periodic orbits, including the well-known Figure-eight family <cit.>, 10 of the 11 families found by Šuvakov and Dmitrašinović <cit.>, and many completely new families that have never been reported before. However, one family reported in <cit.> was not among these 137 periodic orbits, so at least one periodic orbit was lost at this stage. This is not surprising, since the three-body problem is not integrable in general <cit.> and might be rather sensitive to initial conditions, i.e. the butterfly effect <cit.>. For example, Hoover et al. <cit.> compared numerical simulations of a chaotic Hamiltonian system given by five symplectic and two Runge-Kutta integrators in double precision, and found that “all numerical methods are susceptible”, which “severely limits the maximum time for which chaotic solutions can be accurate”, although “all of these integrators conserve energy almost perfectly”. In fact, there exist many examples which suggest that numerical noises have a great influence on chaotic systems.
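For the refinement stage, a minimal sketch of a Newton-Raphson iteration on the five unknowns (x_1, x_2, v_1, v_2, T_0) is given below; it reuses rhs() and initial_state() from the previous sketch. Since the return condition gives four equations for five unknowns, the sketch takes the minimum-norm Newton step via least squares; the finite-difference Jacobian and all tolerances are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.integrate import solve_ivp

def residual(p):
    """Return y(T_0) - y(0) for the 4-vector y = (r_1, dot r_1)."""
    x1, x2, v1, v2, T0 = p
    y0 = initial_state(x1, x2, v1, v2)
    sol = solve_ivp(rhs, (0.0, T0), y0, method="DOP853",
                    rtol=1e-13, atol=1e-13)
    return (sol.y[:, -1] - y0)[[0, 1, 6, 7]]

def refine(p, tol=1e-6, h=1e-8, itmax=50):
    """Drive the return proximity below tol, starting from a candidate p."""
    p = np.asarray(p, dtype=float)
    for _ in range(itmax):
        F = residual(p)
        if np.linalg.norm(F) < tol:
            return p                      # periodic orbit found
        # 4 x 5 Jacobian by forward differences; the system is
        # underdetermined, so take the minimum-norm correction.
        J = np.empty((4, 5))
        for k in range(5):
            dp = np.zeros(5); dp[k] = h
            J[:, k] = (residual(p + dp) - F) / h
        p -= np.linalg.lstsq(J, F, rcond=None)[0]
    raise RuntimeError("Newton iteration did not converge")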
Recently, some numerical approaches have been developed to gain reliable results for chaotic systems over a long (but finite) interval of time. One of them is the so-called “Clean Numerical Simulation” (CNS) <cit.>, which is based on the arbitrary-order Taylor series method <cit.> in arbitrary precision <cit.> and, more importantly, on a verification of the solution (in a given interval of time) obtained by comparing two simulations computed with different levels of numerical noise. First, we checked the 137 periodic orbits by means of the high-order Taylor series method in 100-digit precision with truncation errors less than 10^-70, and confirmed that they are indeed periodic orbits. Moreover, we found 27 additional families of periodic orbits (with periods less than 100) by using the Newton-Raphson method <cit.> for the modification of the initial conditions together with the high-order Taylor series method (in 100-digit precision with truncation errors less than 10^-70) for the evolution of the equations of motion (<ref>), instead of the ODE solver dop853 <cit.> based on the Runge-Kutta method in double precision. In addition, we used the CNS with an even smaller round-off error (120-digit precision) and truncation error (less than 10^-90) to guarantee the reliability of these 27 families. It is found that one of them belongs to the 11 families found by Šuvakov and Dmitrašinović <cit.>. Similarly, we found 165 families within the period 100 < T_0 < 200, including 119 families gained by the ODE solver dop853 <cit.> in double precision and 46 additional families by the CNS in multiple precision. Obviously, more periodic orbits can be found within a larger period. It is interesting that more periodic orbits can also be found by means of finer search grids. Using 2000 × 2000 grids for the candidates of the initial conditions, we gained in a similar way 498 periodic orbits in total within 0 < T_0 < 200: 163 families within 0 ≤ T_0 ≤ 100 by the ODE solver dop853 <cit.> in double precision, 33 additional families within 0 ≤ T_0 ≤ 100 by the CNS in multiple precision, 182 families within 100 < T_0 ≤ 200 by the ODE solver dop853 in double precision, and 120 additional families within 100 < T_0 ≤ 200 by the CNS in multiple precision. Similarly, using 4000 × 4000 grids, we gained 695 periodic orbits in total within 0 < T_0 < 200: 192 families within 0 ≤ T_0 ≤ 100 by the ODE solver dop853 <cit.> in double precision, 37 additional families within 0 ≤ T_0 ≤ 100 by the CNS in multiple precision, 260 families within 100 < T_0 ≤ 200 by the ODE solver dop853 in double precision, and 206 additional families within 100 < T_0 ≤ 200 by the CNS in multiple precision.
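The CNS itself is beyond a short example, but its two key ingredients, an arbitrary-order Taylor series integrator in arbitrary precision and a verification by comparing runs at two noise levels, can be imitated schematically with mpmath's odefun, which implements a Taylor series method at the current working precision mp.dps. The sketch below only illustrates the verification idea: taylor_run() and check_orbit() are our names, the tolerance handling is mpmath's default, and a run to T_0 ≈ 100 at 100 digits is far slower than the authors' dedicated CNS code.

from mpmath import mp, mpf, odefun, sqrt

def taylor_run(p, dps):
    """Integrate the 12-dimensional system at dps decimal digits."""
    mp.dps = dps
    x1, x2, v1, v2, T0 = [mpf(str(v)) for v in p]
    y0 = [x1, x2, -x1, -x2, 0, 0, v1, v2, v1, v2, -2*v1, -2*v2]

    def F(t, y):
        r = [(y[0], y[1]), (y[2], y[3]), (y[4], y[5])]
        acc = []
        for i in range(3):
            ax = ay = mpf(0)
            for j in range(3):
                if i != j:
                    dx = r[j][0] - r[i][0]; dy = r[j][1] - r[i][1]
                    d3 = sqrt(dx*dx + dy*dy) ** 3
                    ax += dx / d3; ay += dy / d3
            acc += [ax, ay]
        return list(y[6:]) + acc

    return odefun(F, 0, y0)(T0)       # state at t = T_0

def check_orbit(p):
    """CNS-style check: two runs at different noise levels must agree."""
    a = taylor_run(p, 100)            # ~100 significant digits
    b = taylor_run(p, 120)            # independent, more precise run
    return max(abs(x - y) for x, y in zip(a, b))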
Thus, the finer the search grid, the more periodic orbits can be found. It is indeed a surprise that there exist many more families of periodic orbits of the three-body problem than we had thought a few years ago! It should be emphasized that, in the case of the 4000 × 4000 search grid, we found 243 more periodic orbits by means of the CNS <cit.> in multiple precision than with the ODE solver dop853 <cit.> in double precision. This indicates that numerical noise might lead to a great loss of periodic orbits of the three-body system.

§ PERIODIC ORBITS OF THE THREE-BODY SYSTEM

Montgomery's topological identification and classification method <cit.> is used here to identify these periodic orbits. The positions r_1, r_2 and r_3 of the three bodies correspond to a unit vector n in the so-called “shape sphere” with the Cartesian components

n_x = 2ρ·λ/R^2,  n_y = (λ^2 - ρ^2)/R^2,  n_z = 2(ρ×λ)·e_z/R^2,

where ρ = (r_1 - r_2)/√2, λ = (r_1 + r_2 - 2r_3)/√6 and the hyper-radius R = √(ρ^2 + λ^2). A periodic orbit of the three-body system gives a closed curve on the shape sphere, which can be characterized by its topology with respect to the three punctures (two-body collision points). With one of the punctures as the “north pole”, the sphere can be mapped onto a plane by a stereographic projection. A closed curve is thus mapped onto a plane with two punctures, and its topology can be described by the so-called “free group element” (word) with the letters a (a clockwise loop around the right-hand puncture), b (a counter-clockwise loop around the left-hand puncture) and their inverses a^-1 = A and b^-1 = B. For details, please refer to <cit.>.

The periodic orbits can be divided into different classes according to their geometric and algebraic symmetries <cit.>. There are two types of geometric symmetries in the shape space: (I) reflection symmetries about two orthogonal axes, namely the equator and the zeroth meridian passing through the “far” collision point; (II) a central reflection symmetry about one point, namely the intersection of the equator and the aforementioned zeroth meridian. Besides, Šuvakov and Dmitrašinović <cit.> mentioned three types of algebraic exchange symmetries for the free group elements: (A) the free group elements are symmetric under a ↔ A and b ↔ B; (B) the free group elements are symmetric under a ↔ b and A ↔ B; (C) the free group elements are not symmetric under either (A) or (B). The 695 families of periodic collisionless orbits can be divided into five classes: I.A, II.A, I.B, II.B and II.C, as listed in Tables S.III-XXX in the Supplementary material <cit.>. Note that the class II.A was not included in <cit.>. Here, we regard all periodic orbits (and their satellites) with the same free group element as one family, so the “moth I” orbit and its satellite “yarn” orbit in <cit.> belong to one family in this paper. These 695 families include the Figure-eight family <cit.>, the 11 families found by Šuvakov and Dmitrašinović <cit.> (see Table S.I) and the 25 families reported in <cit.> (11 of which were given in <cit.>, see Table S.II). In Tables S.I and S.II, the superscript i.c. indicates the case of initial conditions with an isosceles collinear configuration, since periodic orbits exist in many other cases as well.
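Under the conventions of the shape-sphere formulas above, the mapping from the three planar position vectors to the unit vector n takes only a few lines of NumPy; shape_sphere_point() is our name for this hypothetical helper, not part of the authors' code.

import numpy as np

def shape_sphere_point(r1, r2, r3):
    """Map planar positions r_1, r_2, r_3 to the unit shape-sphere vector n."""
    r1, r2, r3 = map(np.asarray, (r1, r2, r3))
    rho = (r1 - r2) / np.sqrt(2.0)
    lam = (r1 + r2 - 2.0 * r3) / np.sqrt(6.0)
    R2 = rho @ rho + lam @ lam                    # hyper-radius squared
    nx = 2.0 * (rho @ lam) / R2
    ny = (lam @ lam - rho @ rho) / R2
    nz = 2.0 * (rho[0] * lam[1] - rho[1] * lam[0]) / R2   # (rho x lam) . e_z
    return np.array([nx, ny, nz])                 # |n| = 1 by construction

Sampling n(t) along one period traces the closed curve on the shape sphere whose topology, encoded as a word in the letters a, b, A, B, labels the family.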
Note that Rose <cit.> recently reported 90 periodic planar collisionless orbits in the same case of the isosceles collinear configurations (<ref>), which include the Figure-eight family <cit.> and many families reported in <cit.>. Even taking these into account, more than 600 of our 695 families are new. Note that the initial positions r_1 = (x_1, x_2) in Tables S.III-XVI in the Supplementary material <cit.> depart slightly from (-1, 0). However, it is well known that if r_i(t) (i = 1, 2, 3) denotes a periodic orbit with period T of a three-body system, then

r'_i(t') = α r_i(t),  v'_i(t') = v_i(t)/√α,  t' = α^{3/2} t,

is also a periodic orbit, with period T' = α^{3/2} T, for arbitrary α > 0. Thus, through a coordinate transformation followed by this scaling of the spatial and temporal coordinates, we can always enforce (-1, 0), (1, 0) and (0, 0) as the initial positions of bodies 1, 2 and 3, respectively, with the initial velocities ṙ_1(0) = ṙ_2(0) and ṙ_3(0) = -2ṙ_1(0), corresponding to zero angular momentum. This is the reason why we choose -1 and 0 as the initial guesses of x_1 and x_2 in our search approach. The corresponding initial velocities of the 695 periodic orbits are listed in Tables S.XVII-XXX in the Supplementary material <cit.>. The scatterplot of the initial velocities of the 695 periodic orbits is shown in FIG. <ref>. Note that two very close initial conditions can give completely different periodic orbits. This explains well why more periodic orbits can be found by means of finer search grids. The so-called “free group elements” of these 695 families are listed in Tables S.XXXI-LIV in the Supplementary material <cit.>. Due to the limited length of this paper, only six newly found ones are listed in Table <ref>, and their real-space orbits are shown in FIG. <ref>. In addition, the real-space orbits of a few families are shown in FIG. S.1. The movies of these 695 periodic orbits in real space and the corresponding closed curves on the shape sphere can be found at the website: <http://numericaltank.sjtu.edu.cn/three-body/three-body.htm>

For a two-body system, there exists the well-known Kepler's third law r_a ∝ T^{2/3}, where T is the period and r_a is the semi-major axis of the periodic orbit. For a three-body system, Šuvakov and Dmitrašinović <cit.> mentioned that there should exist the relation T^{2/3} |E| = constant, where E denotes the total kinetic and potential energy of the three-body system. But they pointed out that “the constant on the right-hand side of this equation is not universal”, and may depend on “both of the family of the three-body orbit and its angular momentum” <cit.>. However, with the definition of the average period T̅ = T/L_f, where L_f is the length of the free group element of a periodic orbit of the three-body system, the 695 families of periodic planar collisionless orbits approximately satisfy the generalised Kepler's third law R̅ ∝ |E|^{-1}, with R̅ ≈ 0.56 T̅^{2/3}, as shown in FIG. <ref>, where R̅ is the mean hyper-radius of the three-body system. In other words, the scale-invariant average period T̅^* = T̅ |E|^{3/2} should be approximately equal to a universal constant, i.e.

T̅^* ≈ 2.433 ± 0.075,

for the three-body system with equal mass and zero angular momentum in the case of initial conditions with the isosceles collinear configuration (<ref>).
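The scaling (<ref>) and the claimed invariance of T^* = T |E|^{3/2} are easy to check numerically. The short sketch below, with our illustrative helper names energy() and rescale(), prints the same value of T^* for every α: the kinetic energy scales as v^2 ∝ 1/α and the potential energy as 1/r ∝ 1/α, so |E|^{3/2} ∝ α^{-3/2} exactly cancels T ∝ α^{3/2}.

import numpy as np

def energy(r, v):
    """Total energy E = kinetic + potential for G = m_i = 1."""
    kin = 0.5 * np.sum(v ** 2)
    pot = sum(-1.0 / np.linalg.norm(r[i] - r[j])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

def rescale(r, v, T, alpha):
    """r' = alpha r, v' = v / sqrt(alpha), T' = alpha^(3/2) T."""
    return alpha * r, v / np.sqrt(alpha), alpha ** 1.5 * T

# Arbitrary (not necessarily periodic) test data: positions, velocities, period.
r = np.array([[-1.2, 0.1], [1.2, -0.1], [0.0, 0.0]])
v = np.array([[0.3, 0.5], [0.3, 0.5], [-0.6, -1.0]])
T = 10.0
for alpha in (0.5, 1.0, 2.0):
    ra, va, Ta = rescale(r, v, T, alpha)
    print(Ta * abs(energy(ra, va)) ** 1.5)   # identical for every alpha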
Note that the scale-invariant period T^* = T |E|^{3/2} and L_f (the length of the free group element) are invariant under the scaling (<ref>) of the spatial and temporal coordinates for arbitrary α > 0. So they are two characteristics with important physical meaning for each family, since each family contains an infinite number of periodic orbits corresponding to different scaling parameters α > 0 in (<ref>).

§ CONCLUDING REMARKS

In this paper, we gain 695 families of periodic orbits of the three-body system with equal mass, zero angular momentum and initial conditions in the isosceles collinear configuration r_1 = (-1, 0), r_2 = (+1, 0), r_3 = (0, 0). These 695 families include the Figure-eight family <cit.>, the 11 families found by Šuvakov and Dmitrašinović <cit.> (see Table S.I) and the 25 families reported in <cit.> (11 of which were given in <cit.>, see Table S.II). Especially, more than six hundred of them are completely new and, to the best of our knowledge, have never been reported before. It should be emphasized that 243 more periodic orbits are found by means of the CNS <cit.> in multiple precision than with the ODE solver dop853 <cit.> in double precision. This indicates the great potential of the CNS for complicated nonlinear dynamic systems. It should also be emphasized that, for the considered initial conditions with the isosceles collinear configuration, more and more periodic planar three-body orbits can be found by means of finer search grids within a larger period. Similarly, a large number of periodic orbits can be gained in other cases of three-body systems. Thereafter, a database of periodic orbits of the three-body problem could be built, which would be of benefit to a better understanding of three-body systems, a very famous problem that can be traced back to Isaac Newton in the 1680s.

§ ACKNOWLEDGMENT

This work was carried out on TH-2 at the National Supercomputer Centre in Guangzhou, China. It is partly supported by the National Natural Science Foundation of China (Approval No.
11432009).* Supplementary information for “More than six hundreds new families of Newtonianperiodic planar collisionless three-body orbits"Xiaoming Li^1 andShijun Liao^1, 2, *^1 School of Naval Architecture, Ocean and Civil Engineering, Shanghai Jiaotong University, China ^2 Ministry-of-Education Key Laboratory in Scientific and Engineering Computing, Shanghai 200240, ChinaThe corresponding author:[email protected] The free group elements for the periodic three-body orbits.1 1ptClass & numberfree group element I.A^i.c._1 BabAI.A^i.c._2 BAbabaBAI.A^i.c._3 BaBabAbAI.A^i.c._4 BabaBABabABAbabAI.A^i.c._5 BAbabABAbabaBABabaBAI.A^i.c._6 BabaBAbaBABabABAbaBAbabAI.A^i.c._7 BAbaabABBAbabaBAABabbaBAI.A^i.c._8 BabaBABabaBABabABAbabABAbabAI.A^i.c._9 BababABAbabABABabABABabaBABababAI.A^i.c._10 BAbabABAbabABAbabaBABabaBABabaBAI.A^i.c._11 BabABAbaBAbabABabABabaBAbaBABabAI.A^i.c._12 BabAAbaBAbaBAbbABabABaaBAbaBAbaBBabAI.A^i.c._13 BabaBAbabABAbaBABabABAbaBABabaBAbabAI.A^i.c._14 BAbabABABababABAbabaBABababABABabaBAI.A^i.c._15 BabaBABabaBABabaBABabABAbabABAbabABAbabAI.A^i.c._16 BabABAbaBAbaBAbabABabABabaBAbaBAbaBABabAI.A^i.c._17 BabABaaBAbaBAbaBBabABabABabAAbaBAbaBAbbABabAI.A^i.c._18 BabaBAbabABabABAbaBABabABAbaBABabABabaBAbabAI.A^i.c._19 BAbabABAbabABAbabABAbabaBABabaBABabaBABabaBAI.A^i.c._20 BabAAbaBAbaBAbaBAbbABabABaaBAbaBAbaBAbaBBabAI.A^i.c._21 BAbabABABabaBABababABAbabaBABababABAbabABABabaBAI.A^i.c._22 BabABabaBAbaBAbaBABabABabABabABAbaBAbaBAbabABabAI.A^i.c._23 BabaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbabAI.A^i.c._24 BabaBAbaBABabABabaBAbaBABabABAbaBAbabABabABAbaBAbabAI.A^i.c._25 BabABaaBAbaBAbaBAbaBBabABabABabAAbaBAbaBAbaBAbbABabAI.A^i.c._26 BAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBAI.A^i.c._27 BAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBAI.A^i.c._28 BabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabAI.A^i.c._29 BabaBABabaBAbabABAbaBABabaBABabABAbabABAbaBABabaBAbabABAbabAI.A^i.c._30 BabABabAAbaBAbaBAbaBAbbABabABabABabABaaBAbaBAbaBAbaBBabABabAI.A^i.c._31 BabaBABabABAbabABAbabABabaBABabABAbabABabaBABabaBABabABAbabAI.A^i.c._32 BAbabABABabaBABAbabaBABababABAbabaBABababABAbabaBABAbabABABabaBAI.A^i.c._33 BabaBABabaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbabABAbabAI.A^i.c._34 BabaBAbabABabaBAbaBABabABAbaBABabABAbaBABabABAbaBAbabABabaBAbabAI.A^i.c._35 BAbabaBABababABABababABABabaBABAbabaBABAbabABABababABABababABAbabaBAI.A^i.c._36 BAbabABAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBABabaBAI.A^i.c._37 BabAAbaBAbaBBabABabABaaBAbaBAbbABabABaaBAbaBAbbABabABabAAbaBAbaBBabAI.A^i.c._38 BAbabABAbabABABabaBABababABAbabABAbabaBABabaBABababABAbabABABabaBABabaBAI.A^i.c._39 BabaBABabaBAbabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabaBAbabABAbabAI.A^i.c._40 BAbabABABabaBABAbabABAbabaBABababABAbabaBABababABAbabaBABabaBABAbabABABabaBAI.A^i.c._41 BabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabAI.A^i.c._42 BAbabABABababABAbabaBABAbabABABababABAbabaBABababABABabaBABAbabaBABababABABabaBAI.A^i.c._43 BAbabABAbabABAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.A^i.c._44 BAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBAI.A^i.c._45 BAbabaBABAbabABABababABABababABAbabaBABAbabaBABAbabaBABababABABababABABabaBABAbabaBAI.A^i.c._46 BAbabABAbabaBABababABAbabABABabaBABAbabABAbabaBABabaBABAbabABABabaBABababABAbabaBABabaBAI.A^i.c._47 BAbabABABabaBABAbabABABababABAbabaBABababABAbabaBABababABAbabaBABababABABabaBABAbabABABabaBAI.A^i.c._48 BAbabABabaBABabABAbabaBABabABAbabABabaBAI.A^i.c._49 
BababABAbabaBABAbabABABabABABabaBABAbabaBABababAI.A^i.c._50 BabAAbaBAbaBAbaBAbaBAbbABabABaaBAbaBAbaBAbaBAbaBBabA 1pt0pt The free group elements for the periodic three-body orbits.1 1ptClass & number free group element I.A^i.c._51 BAbaBABabaBAbabABAbaBABabaBAbabaBAbabABAbaBABabaBAbabABAbaBAI.A^i.c._52 BabABAbaBAbaBABabABabaBAbaBAbabABabABabaBAbaBAbabABabABAbaBAbaBABabAI.A^i.c._53 BabaBABabaBABabaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbabABAbabABAbabAI.A^i.c._54 BabABabABaaBAbaBAbaBAbaBAbaBBabABabABabABabABabAAbaBAbaBAbaBAbaBAbbABabABabAI.A^i.c._55 BabABAbaBAbaBABabABabABabaBAbaBAbabABabABabaBAbaBAbabABabABabABAbaBAbaBABabAI.A^i.c._56 BabaBAbabABabaBAbabABAbaBABabABAbaBABabABAbaBABabABAbaBABabaBAbabABabaBAbabAI.A^i.c._57 BabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabABabABabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabAI.A^i.c._58 BabABAbaBAbaBAbabABabABabABAbaBAbaBAbabABabABabaBAbaBAbaBABabABabABabaBAbaBAbaBABabAI.A^i.c._59 BabaBAbabABabaBAbabABabABAbaBABabABAbaBABabABAbaBABabABAbaBABabABabaBAbabABabaBAbabAI.A^i.c._60 BabABabaBAbaBAbabABabABabABAbaBAbaBABabABabABabABAbaBAbaBABabABabABabaBAbaBAbabABabAI.A^i.c._61 BabaBABabaBABabaBABabaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbabABAbabABAbabABAbabAI.A^i.c._62 BabaBAbaBABabABAbaBABabABabaBAbabABabaBAbaBABabABAbaBAbabABabaBAbabABabABAbaBABabABAbaBAbabAI.A^i.c._63 BabABaaBAbaBAbaBAbbABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABaaBAbaBAbaBAbbABabAI.A^i.c._64 BabAAbaBAbaBBabABabAAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBBabABabAAbaBAbaBBabAI.A^i.c._65 BAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.A^i.c._66 BabaBAbabABabaBAbabABabaBABabABAbaBABabABAbaBABabABAbaBABabABAbaBABabABAbabABabaBAbabABabaBAbabAI.A^i.c._67 BabaBABabaBABabaBAbabABAbabABAbaBABabaBABabaBABabABAbabABAbabABAbaBABabaBABabaBAbabABAbabABAbabAI.A^i.c._68 BAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBAI.A^i.c._69 BabABaaBAbaBAbaBAbbABabABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABabABaaBAbaBAbaBAbbABabAI.A^i.c._70 BabABabaBAbaBAbaBAbabABabABabABAbaBAbaBAbaBABabABabABabABAbaBAbaBAbaBABabABabABabaBAbaBAbaBAbabABabAI.A^i.c._71 BabAAbaBAbaBBabABabAAbaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbaBBabABabAAbaBAbaBBabAI.A^i.c._72 BAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBAI.A^i.c._73 BabABAbaBAbaBABabABabABAbaBAbabABabABabaBAbaBAbabABabABabaBAbaBAbabABabABabaBAbaBABabABabABAbaBAbaBABabAI.A^i.c._74 BabaBABabaBAbabABAbabABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABabaBABabaBAbabABAbabAI.A^i.c._75 BabaBABabABAbabABAbabABabaBABabABAbabABAbabABabaBABabABAbabABabaBABabaBABabABAbabABabaBABabaBABabABAbabAI.A^i.c._76 BabaBABabaBAbabABAbaBABabaBABabaBAbabABAbaBABabaBABabABAbabABAbaBABabaBAbabABAbabABAbaBABabaBAbabABAbabAI.A^i.c._77 BAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.A^i.c._78 BAbabABABabaBABAbabABABabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabABABabaBABAbabABABabaBAI.A^i.c._79 BabaBAbabABabaBAbabABabaBAbaBABabABAbaBABabABAbaBABabABAbaBABabABAbaBABabABAbaBAbabABabaBAbabABabaBAbabAI.A^i.c._80 BAbabABAbabABAbabABABabaBABabaBABababABAbabABAbabABAbabaBABabaBABabaBABababABAbabABAbabABABabaBABabaBABa baBAI.A^i.c._81 BabABAbaBAbaBABabABabABAbaBAbaBAbabABabABabaBAbaBAbabABabABabaBAbaBAbabABabABabaBAbaBAbaBABabABabABAbaBA baBABabAI.A^i.c._82 
BabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAba bABAbabAI.A^i.c._83 BabaBAbabABabABAbaBABabABAbaBAbabABabaBAbabABabABAbaBABabABAbaBABabABabaBAbabABabaBAbaBABabABAbaBABabABa baBAbabAI.A^i.c._84 BAbabaBABababABABababABAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABABabaBABAbabaBABababABABababABAbabaBAI.A^i.c._85 BAbabABAbabaBABabaBABababABAbabABABabaBABabaBABAbabABAbabaBABabaBABAbabABAbabABABabaBABababABAbabABAbabaBABabaBAI.A^i.c._86 BabaBABabaBAbabABAbabABabaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbabABabaBABabaBAbabABAbabAI.A^i.c._87 BAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.A^i.c._88 BabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabAI.A^i.c._89 BAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBAI.A^i.c._90 BabAAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabABabAAbaBAbbABabABaaBAbaBBabABabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBBabA 1pt0pt The free group elements for the periodic three-body orbits.1 1ptClass & number free group element I.A^i.c._91 BAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBAI.A^i.c._92 BAbabABAbabABAbabABABabaBABabaBABabaBABababABAbabABAbabABAbabaBABabaBABabaBABababABAbabABAbabABAbabABABabaBABabaBABabaBAI.A^i.c._93 BAbabABABabaBABAbabABABabaBABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABAbabABABabaBABAbabABABabaBAI.A^i.c._94 BabaBAbabABabaBAbabABabaBAbaBABabaBAbaBABabABAbaBABabABAbaBABabABAbaBABabABAbaBABabABAbaBAbabABAbaBAbabABabaBAbabABabaBAbabAI.A^i.c._95 BabABabABAbaBAbaBAbaBAbabABabABabABabABAbaBAbaBAbaBAbabABabABabABabABabaBAbaBAbaBAbaBABabABabABabABabaBAbaBAbaBAbaBABabABabAI.A^i.c._96 BAbabaBABababABAbabaBABAbabABABababABAbabaBABAbabABABabaBABAbabaBABAbabABABabaBABAbabaBABababABABabaBABAbabaBABababABAbabaBAI.A^i.c._97 BabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabABabABabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabAI.A^i.c._98 BAbabABABababABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABababABABabaBAI.A^i.c._99 BAbabABAbabABABabaBABabaBABAbabABAbabaBABabaBABababABAbabABAbabaBABabaBABababABAbabABAbabaBABabaBABAbabABAbabABABabaBABabaBAI.A^i.c._100 BabAAbaBAbaBBabABabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabABabAAbaBAbaBBabAI.A^i.c._101 BabaBABabaBABabABAbabABAbaBABabaBABabaBAbabABAbabABabaBABabaBABabABAbabABAbabABabaBABabaBAbabABAbabABAbaBABabaBABabABAbabABAbabAI.A^i.c._102 BAbabABAbabaBABababABAbabABABabaBABababABAbabABABabaBABAbabABAbabaBABabaBABAbabABABabaBABababABAbabABABabaBABababABAbabaBABabaBAI.A^i.c._103 BAbabaBABababABABababABAbabaBABAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABABabaBABAbabaBABAbabaBABababABABababABAbabaBAI.A^i.c._104 BabaBABabaBABabaBABabaBAbabABAbabABAbabABAbaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbaBABabaBABabaBABabaBAbabABAbabABAbabABAbabAI.A^i.c._105 BAbabaBABAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBABAbabaBAI.A^i.c._106 BAbabABABabaBABababABABabaBABAbabABAbabaBABababABABabaBABababABAbabaBABababABAbabABABababABAbabaBABabaBABAbabABABababABAbabABABabaBAI.A^i.c._107 
BabAAbaBAbaBBabABabAAbaBAbaBBabABabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabABabAAbaBAbaBBabABabAAbaBAbaBBabAI.A^i.c._108 BAbabaBABAbabaBABababABABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABAbabaBABAbabaBAI.A^i.c._109 BAbabABAbabABABabaBABabaBABAbabABAbabABAbabaBABabaBABababABAbabABAbabaBABabaBABababABAbabABAbabaBABabaBABabaBABAbabABAbabABABabaBABabaBAI.A^i.c._110 BAbabABABababABAbabaBABababABAbabaBABAbabABABabaBABAbabABABababABAbabaBABababABABabaBABAbabABABabaBABAbabaBABababABAbabaBABababABABabaBAI.A^i.c._111 BabABaaBAbaBAbaBAbbABabABabABaaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbbABabABabABaaBAbaBAbaBAbbABabAI.A^i.c._112 BabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabABAbaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbaBABabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabAI.A^i.c._113 BAbabABAbabABAbabABAbabABABabaBABabaBABabaBABababABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABababABAbabABAbabABAbabABABabaBABabaBABabaBABabaBAI.A^i.c._114 BAbabaBABababABABababABABabaBABAbabaBABAbabaBABababABABababABABabaBABAbabaBABAbabABABababABABababABAbabaBABAbabaBABAbabABABababABABababABAbabaBAI.A^i.c._115 BAbabABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABAbabABAbabABAbabaBABabaBABabaBABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABabaBA 1pt0pt The free group elements for the periodic three-body orbits.1 1ptClass & number free group element I.A^i.c._116 BAbabABABabaBABAbabaBABabaBABAbabABABababABAbabaBABabaBABAbabaBABababABAbabaBABababABAbabaBABAbabABAbabaBABababABABabaBABAbabABAbabaBABAbabABABabaBAI.A^i.c._117 BabAAbaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbbABabABabAAbaBAbaBAbbABabABaaBAbaBAbaBBabABabABaaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbaBBabAI.A^i.c._118 BabaBABabaBAbabABAbaBABabaBABabaBAbabABAbaBABabaBABabaBAbabABAbaBABabaBABabABAbabABAbaBABabaBAbabABAbabABAbaBABabaBAbabABAbabABAbaBABabaBAbabABAbabAI.A^i.c._119 BabaBABabaBAbabABAbabABabaBABabaBAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBAbabABAbabABabaBABabaBAbabABAbabAI.A^i.c._120 BAbabaBABAbabaBABAbabABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABabaBABAbabaBABAbabaBAI.A^i.c._111 BabABaaBAbaBAbaBAbbABabABabABaaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbbABabABabABaaBAbaBAbaBAbbABabAI.A^i.c._112 BabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabABAbaBABabaBABabaBABabaBABabABAbabABAbabABAbabABAbaBABabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabAI.A^i.c._113 BAbabABAbabABAbabABAbabABABabaBABabaBABabaBABababABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABababABAbabABAbabABAbabABABabaBABabaBABabaBABabaBAI.A^i.c._114 BAbabaBABababABABababABABabaBABAbabaBABAbabaBABababABABababABABabaBABAbabaBABAbabABABababABABababABAbabaBABAbabaBABAbabABABababABABababABAbabaBAI.A^i.c._115 BAbabABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABAbabABAbabABAbabaBABabaBABabaBABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABabaBAI.A^i.c._116 BAbabABABabaBABAbabaBABabaBABAbabABABababABAbabaBABabaBABAbabaBABababABAbabaBABababABAbabaBABAbabABAbabaBABababABABabaBABAbabABAbabaBABAbabABABabaBAI.A^i.c._117 BabAAbaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbbABabABabAAbaBAbaBAbbABabABaaBAbaBAbaBBabABabABaaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbaBBabAI.A^i.c._118 BabaBABabaBAbabABAbaBABabaBABabaBAbabABAbaBABabaBABabaBAbabABAbaBABabaBABabABAbabABAbaBABabaBAbabABAbabABAbaBABabaBAbabABAbabABAbaBABabaBAbabABAbabAI.A^i.c._119 
BabaBABabaBAbabABAbabABabaBABabaBAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBAbabABAbabABabaBABabaBAbabABAbabAI.A^i.c._120 BAbabaBABAbabaBABAbabABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABabaBABAbabaBABAbabaBAI.A^i.c._120 BAbabaBABAbabaBABAbabABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABabaBABAbabaBABAbabaBAI.A^i.c._121 BabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBBabABabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabABabAAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabAI.A^i.c._122 BAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBAI.A^i.c._123 BabaBABabaBABabABAbabABAbabABAbaBABabaBABabaBAbabABAbabABAbabABabaBABabaBABabABAbabABAbabABabaBABabaBABabaBAbabABAbabABAbaBABabaBABabaBABabABAbabABAbabAI.A^i.c._124 BAbabABABababABAbabaBABAbabABABabaBABAbabaBABababABAbabaBABAbabABABababABAbabaBABababABABabaBABAbabaBABababABAbabaBABAbabABABabaBABAbabaBABababABABabaBAI.A^i.c._125 BAbabABABabaBABababABAbabaBABabaBABababABABabaBABabaBABAbabABABabaBABababABAbabaBABababABAbabABABabaBABAbabABAbabABABababABAbabABAbabaBABababABAbabABABabaBAI.A^i.c._126 BAbabaBABababABABabaBABAbabaBABababABABababABABabaBABAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBABAbabABABababABABababABAbabaBABAbabABABababABAbabaBAI.A^i.c._127 BAbabaBABAbabaBABAbababABABababABABababABABababABABababABABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABABababABABababABABababABABababABABababaBABAbabaBABAbabaBAI.A^i.c._128 BAbabABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBABAbabABAbabaBABabaBABAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABabaBAI.A^i.c._129 BAbabaBABAbabABABababABABababABAbabaBABAbabaBABAbabABABababABABababABAbabaBABAbabaBABAbabaBABababABABababABABabaBABAbabaBABAbabaBABababABABababABABabaBABAbabaBAI.A^i.c._130 BAbabABABabaBABAbabaBABababABAbabaBABababABABabaBABAbabABABabaBABAbabaBABababABAbabaBABababABAbabaBABAbabABABabaBABAbabABABababABAbabaBABababABAbabaBABAbabABABabaBA 1pt 0pt The free group elements for the periodic three-body orbits.1 1ptClass & number free group element I.A^i.c._131 BAbabaBABAbabaBABAbabABABababABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABababABABabaBABAbabaBABAbabaBAI.A^i.c._132 BAbabABABabaBABababABAbabaBABababABAbabABABababABAbabABABabaBABAbabABABabaBABababABAbabaBABababABAbabABABabaBABAbabABABabaBABababABABabaBABababABAbabaBABababABAbabABABabaBAI.A^i.c._133 BAbabaBABAbabaBABAbababABABababABABababABABababABABababABABababABABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABABababABABababABABababABABababABABababABABababaBABAbabaBABAbabaBAI.A^i.c._134 BAbabABABabaBABAbabABABabaBABAbabABABabaBABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABAbabABABabaBABAbabABABabaBABAbabABABabaBAI.A^i.c._135 BAbabaBABababABAbabaBABababABABabaBABAbabaBABabaBABAbabaBABababABABabaBABAbabABABabaBABAbabaBABAbabABABabaBABAbabABABababABAbabaBABAbabABAbabaBABAbabABABababABAbabaBABababABAbabaBAI.A^i.c._136 BAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBAI.A^i.c._137 
BAbabABABabaBABababABABabaBABAbabABAbabaBABababABABabaBABAbabABAbabaBABababABABabaBABababABAbabaBABababABAbabABABababABAbabaBABabaBABAbabABABababABAbabaBABabaBABAbabABABababABAbabABABabaBAI.A^i.c._138 BAbabaBABAbabABABababABAbabaBABAbabaBABababABABababABABabaBABAbabaBABAbabABABababABAbabaBABAbabaBABAbabaBABababABABabaBABAbabaBABAbabABABababABABababABAbabaBABAbabaBABababABABabaBABAbabaBAI.A^i.c._139 BAbabABABabaBABababABAbabaBABabaBABababABAbabaBABabaBABAbabABABabaBABabaBABAbabABABabaBABababABAbabaBABababABAbabABABabaBABAbabABAbabABABabaBABAbabABAbabaBABababABAbabABAbabaBABababABAbabABABabaBAI.A^i.c._140 BAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBAI.A^i.c._141 BAbabaBABababABAbabaBABababABABababABAbabaBABAbabABAbabaBABAbabABABababABABabaBABAbabABABabaBABAbabaBABAbabABABabaBABAbabABABababABABabaBABAbabaBABabaBABAbabaBABababABABababABAbabaBABababABAbabaBAI.A^i.c._142 BAbabABAbabABABabaBABAbabABAbabABABabaBABabaBABAbabABAbabaBABabaBABababABAbabABAbabaBABababABAbabABAbabaBABabaBABababABAbabaBABabaBABababABAbabABAbabaBABabaBABAbabABAbabABABabaBABabaBABAbabABABabaBABabaBAI.A^i.c._143 BAbabABABabaBABAbabABABabaBABAbabaBABabaBABAbabABABababABAbabaBABabaBABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABAbabABAbabaBABababABABabaBABAbabABAbabaBABAbabABABabaBABAbabABABabaBAI.A^i.c._144 BAbabABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabABABabaBABabaBABAbabABAbabABAbabaBABabaBABabaBABAbabABAbabABABabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABabaBAI.A^i.c._145 BAbabABAbabABABabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabABABabaBABabaBAI.A^i.c._146 BAbabaBABAbabaBABAbabaBABAbababABABababABABababABABababABABababABABababABABababABABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABABababABABababABABababABABababABABababABABababABABababaBABAbabaBABAbabaBABAbabaBAI.A^i.c._147 BAbabaBABababABABabaBABAbabABABababABABabaBABAbabaBABAbabABAbabaBABAbabaBABababABABababABAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBABababABABababABAbabaBABAbabaBABabaBABAbabaBABAbabABABababABABabaBABAbabABABababABAbabaBAI.A^i.c._148 BababABAbabABAbabABAbabABABabABABabaBABabaBABabaBABababAI.A^i.c._149 BababABAbabaBABabaBABabaBABAbabABABabABABabaBABAbabABAbabABAbabaBABababAI.A^i.c._150 BAbabaBABababABABabaBABababABABabaBABAbabaBABAbabABABababABAbabABABababABAbabaBAI.A^i.c._151 BabaBAbaBABabABabaBAbaBABabABabaBAbaBABabABAbaBAbabABabABAbaBAbabABabABAbaBAbabAI.A^i.c._152 BabABabABAbaBAbaBAbaBAbaBAbaBAbabABabABabABabABabaBAbaBAbaBAbaBAbaBAbaBABabABabAI.A^i.c._153 BabABAbaBAbaBAbaBABabABabaBAbaBAbaBAbabABabABabaBAbaBAbaBAbabABabABAbaBAbaBAbaBABabAI.A^i.c._154 BabABabABaaBAbaBAbaBAbaBAbaBAbaBBabABabABabABabABabAAbaBAbaBAbaBAbaBAbaBAbbABabABabAI.A^i.c._155 BabaBAbabABAbaBABabABAbabABabaBAbabABAbaBABabABAbaBABabaBAbabABabaBABabABAbaBABabaBAbabAI.A^i.c._156 BabABabaBAbaBAbaBAbaBABabABabABabaBAbaBAbaBAbaBABabABabABabABAbaBAbaBAbaBAbabABabABabABAbaBAbaBAbaBAbabABabA1pt 0pt The free group elements for the periodic three-body orbits.1 1ptClass & number free group element I.A^i.c._157 BabABabAAbaBAbaBAbaBAbaBBabABabABabABaaBAbaBAbaBAbaBAbbABabABabABabABaaBAbaBAbaBAbaBAbbABabABabABabAAbaBAbaBAbaBAbaBBabABabAI.A^i.c._158 
BabaBAbabABAbaBABabABAbaBABabABAbabABabaBAbabABabaBAbabABAbaBABabABAbaBABabaBAbabABabaBAbabABabaBABabABAbaBABabABAbaBABabaBAbabAI.A^i.c._159 BabABabAAbaBAbaBAbaBAbaBBabABabABabABabABaaBAbaBAbaBAbaBAbbABabABabABabABaaBAbaBAbaBAbaBAbbABabABabABabABabAAbaBAbaBAbaBAbaBBabABabAI.A^i.c._160 BAbabABABabaBABAbabABABabaBABAbabABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABabaBABAbabABABabaBABAbabABABabaBAI.A^i.c._161 BAbabABABabaBABAbabABABabaBABababABABabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabABABababABAbabABABabaBABAbabABABabaBAI.A^i.c._162 BabAAbaBAbaBAbbABabABaaBAbaBAbbABabABabAAbaBAbaBBabABabAAbaBAbaBAbbABabABaaBAbaBAbaBBabABabAAbaBAbaBBabABabABaaBAbaBAbbABabABaaBAbaBAbaBBabAI.A^i.c._163 BAbabaBABAbabaBABABababABABababABABababABABababABABababaBABAbabaBABAbabaBABAbabaBABAbababABABababABABababABABababABABababABABAbabaBABAbabaBAI.A^i.c._164 BAbabaBABababABABabaBABAbabABABababABABababABAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBABababABABababABABabaBABAbabABABababABAbabaBAI.A^i.c._165 BAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBAI.A^i.c._166 BAbabABAbabaBABabaBABababABAbabABABabaBABababABAbabABABabaBABabaBABAbabABAbabaBABabaBABAbabABAbabABABabaBABababABAbabABABabaBABababABAbabABAbabaBABabaBAI.A^i.c._167 BabaBABabaBAbabABAbabABabaBABabaBAbabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabABAbabABAbaBABabaBABabaBAbabABAbabABabaBABabaBAbabABAbabAI.A^i.c._168 BAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBAI.A^i.c._169 BAbabABAbabABABabaBABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABababABAbabABAbabaBABabaBABababABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABAbabABABabaBABabaBAI.A^i.c._170 BAbabABAbabABABabaBABababABAbabABAbabaBABabaBABAbabABAbabABABabaBABababABAbabABAbabaBABabaBABababABAbabABABabaBABabaBABAbabABAbabaBABabaBABababABAbabABABabaBABabaBAI.A^i.c._171 BAbabABABababABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABababABABabaBAI.A^i.c._172 BAbabaBABababABABababABAbabaBABAbabaBABababABABabaBABAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABAbabaBABAbabaBABababABABababABAbabaBAI.A^i.c._173 BAbabABAbabABAbabaBABabaBABabaBABababABAbabABAbabABABabaBABabaBABabaBABAbabABAbabABAbabaBABabaBABabaBABAbabABAbabABAbabABABabaBABabaBABababABAbabABAbabABAbabaBABabaBABabaBAI.A^i.c._174 BAbabABABababABABababABAbabaBABAbabaBABAbabABAbabaBABAbabaBABAbabABABababABABababABAbabaBABababABABababABABabaBABAbabaBABAbabaBABabaBABAbabaBABAbabaBABababABABababABABabaBAI.A^i.c._175 BAbabaBABAbabABABababABABababABABabaBABAbabaBABAbabaBABababABABababABABababABAbabaBABAbabaBABAbabaBABababABABababABABababABAbabaBABAbabaBABAbabABABababABABababABABabaBABAbabaBAI.A^i.c._176 BAbabABAbabABABabaBABAbabABAbabABABabaBABAbabABAbabaBABababABAbabABAbabaBABababABAbabABAbabaBABabaBABababABAbabaBABabaBABababABAbabaBABabaBABAbabABABabaBABabaBABAbabABABabaBABabaBAI.A^i.c._177 BAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABababABAbabABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABabaBABababABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBAI.A^i.c._178 
BAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBAI.A^i.c._179 BAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBAI.A^i.c._180 BAbabaBABababABABababABAbabaBABAbabaBABababABABababABABabaBABAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABABababABAbabaBABAbabaBABababABABababABAbabaBA1pt0pt The free group elements for the periodic three-body orbits.1 1ptClass & numberfree group element I.A^i.c._181 BAbabABABabaBABAbabABABabaBABAbabABABabaBABAbabABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABababABAbabaBABabaBABAbabABABabaBABAbabABABabaBABAbabABABabaBAI.A^i.c._182 BAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBAI.A^i.c._183 BAbabaBABAbabaBABababABABababABABababABABababABABababABABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABABababABABababABABababABABababABAbabaBABAbabaBAI.A^i.c._184 BAbabaBABababABABabaBABAbabaBABababABABababABAbabaBABAbabABABababABABabaBABAbabaBABababABABabaBABAbabaBABAbabABABababABAbabaBABAbabABABababABABabaBABAbabaBABababABABababABAbabaBABAbabABABababABAbabaBAI.A^i.c._185 BAbabaBABAbabaBABAbabaBABABababABABababABABababABABababABABababABABababABABababaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbababABABababABABababABABababABABababABABababABABababABABAbabaBABAbabaBABAbabaBAI.A^i.c._186 BAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABabaBABAbabaBAI.A^i.c._187 BAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABAbabABAbabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabaBABabaBABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBAI.A^i.c._188 BAbabABAbabABAbabaBABabaBABabaBABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABabaBABAbabABAbabABAbabaBABabaBABabaBABAbabABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABAbabABAbabABAbabaBABabaBABabaBAI.A^i.c._189 BAbabaBABababABABabaBABababABABababABAbabaBABAbabaBABabaBABAbabaBABAbabABABababABABabaBABababABABabaBABAbabaBABAbabABABababABAbabABABababABABabaBABAbabaBABAbabABAbabaBABAbabaBABababABABababABAbabABABababABAbabaBAI.A^i.c._190 BAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABababABABababABABabaBABAbabaBABAbabaBABAbabaBAI.B^i.c._1 BabaBAbabAI.B^i.c._2 BAbabABABabaBAI.B^i.c._3 BAbaabABABabbaBAI.B^i.c._4 BabAAbaBAbaBBabAI.B^i.c._5 BabABAbaBAbaBABabAI.B^i.c._6 BabaBABabaBAbabABAbabAI.B^i.c._7 BabABaaBAbaBAbaBAbbABabAI.B^i.c._8 BabABabaBAbaBAbaBAbabABabAI.B^i.c._9 BAbabABAbabABABabaBABabaBAI.B^i.c._10 BAbabaBABababABABababABAbabaBAI.B^i.c._11 BabABabAAbaBAbaBAbaBAbaBBabABabAI.B^i.c._12 BabaBABabaBABabaBAbabABAbabABAbabAI.B^i.c._13 BabABabABAbaBAbaBAbaBAbaBABabABabAI.B^i.c._14 BabaBAbaBABabABabaBAbabABabABAbaBAbabAI.B^i.c._15 BAbabABAbabABAbabABABabaBABabaBABabaBA 1pt 0pt The free group elements for the periodic three-body orbits.1 1ptClass & number free group element I.B^i.c._16 
BabABabABaaBAbaBAbaBAbaBAbaBAbbABabABabAI.B^i.c._17 BAbabaBABAbabABABababABABababABABabaBABAbabaBAI.B^i.c._18 BabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabAI.B^i.c._19 BabABabABabAAbaBAbaBAbaBAbaBAbaBAbaBBabABabABabAI.B^i.c._20 BAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBAI.B^i.c._21 BabaBAbabABAbabABAbabABabaBAbabABabaBABabaBABabaBAbabAI.B^i.c._22 BAbabABAbabaBABababABAbabABABabaBABababABAbabaBABabaBAI.B^i.c._23 BabAAbaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbaBBabAI.B^i.c._24 BAbabABABababABAbabaBABAbabABABabaBABAbabaBABababABABabaBAI.B^i.c._25 BabaBAbaBABabABAbaBABabABabaBAbabABabABAbaBABabABAbaBAbabAI.B^i.c._26 BAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBAI.B^i.c._27 BabaBAbabABAbaBABabABAbabABabaBAbabABabaBABabABAbaBABabaBAbabAI.B^i.c._28 BabABAbaBAbaBAbabABabABabABAbaBAbaBABabABabABabaBAbaBAbaBABabAI.B^i.c._29 BabABaaBAbaBAbaBBabABabABaaBAbaBAbaBAbbABabABabAAbaBAbaBAbbABabAI.B^i.c._30 BAbabABAbabABABabaBABAbabABAbabABABabaBABabaBABAbabABABabaBABabaBAI.B^i.c._31 BAbabABAbabaBABabaBABababABAbabABABabaBABababABAbabABAbabaBABabaBAI.B^i.c._32 BAbabaBABAbabaBABABababABABababABABababABABababABABAbabaBABAbabaBAI.B^i.c._33 BabaBABabaBABabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabABAbabABAbabAI.B^i.c._34 BAbabABABababABABabaBABAbabaBABAbabABABabaBABAbabaBABAbabABABababABABabaBAI.B^i.c._35 BAbabABAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBABabaBAI.B^i.c._36 BAbabaBABababABABabaBABAbabaBABababABABababABAbabaBABAbabABABababABAbabaBAI.B^i.c._37 BabaBABabaBABabABAbabABAbaBABabaBABabaBAbabABAbabABAbaBABabaBABabABAbabABAbabAI.B^i.c._38 BAbabaBABAbabaBABAbabABABababABABababABABababABABababABABabaBABAbabaBABAbabaBAI.B^i.c._39 BAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBAI.B^i.c._40 BAbabABAbabaBABAbabABAbabABABababABAbabABABabaBABababABABabaBABabaBABAbabaBABabaBAI.B^i.c._41 BAbabaBABAbabaBABAbababABABababABABababABABababABABababABABababaBABAbabaBABAbabaBAI.B^i.c._42 BAbabABAbabABAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.B^i.c._43 BAbabABABababABAbabaBABababABAbabaBABAbabABABabaBABAbabaBABababABAbabaBABababABABabaBAI.B^i.c._44 BAbabABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABabaBAI.B^i.c._45 BAbabABABababABABababABAbabaBABAbabaBABAbabABABabaBABAbabaBABAbabaBABababABABababABABabaBAI.B^i.c._46 BAbabaBABAbabABABababABAbabaBABAbabABABababABABababABABabaBABAbabaBABababABABabaBABAbabaBAI.B^i.c._47 BAbabABAbabaBABabaBABAbabABABabaBABababABAbabABABabaBABababABAbabABABabaBABAbabABAbabaBABabaBAI.B^i.c._48 BAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBAI.B^i.c._49 BAbabaBABAbabaBABAbabaBABABababABABababABABababABABababABABababABABababABABAbabaBABAbabaBABAbabaBAI.B^i.c._50 BAbabABABababABAbabaBABAbabABABababABAbabaBABAbabABABabaBABAbabaBABababABABabaBABAbabaBABababABAB abaBAI.B^i.c._51 BAbabABAbabABABabaBABAbabABAbabABABabaBABAbabABAbabABABabaBABabaBABAbabABABabaBABabaBABAbabABABaba BABabaBAI.B^i.c._52 BAbabaBABAbabABABababABABabaBABAbabaBABAbabABABababABABababABABabaBABAbabaBABAbabABABababABABaba BABAbabaBAI.B^i.c._53 BAbabaBABababABABababABABabaBABAbabaBABAbabaBABababABABababABAbabaBABAbabaBABAbabABABababABABabab ABAbabaBAI.B^i.c._54 BAbabaBABAbabaBABAbabaBABAbabABABababABABababABABababABABababABABababABABababABABabaBABAbabaBABA babaBABAbabaBAI.B^i.c._55 BAbabaBABAbabaBABAbabaBABAbababABABababABABababABABababABABababABABababABABababABABababaBABAbaba BABAbabaBABAbabaBA 1pt0pt The free group elements for the periodic 
three-body orbits.1 1ptClass & number free group element I.B^i.c._56 BabABabABabaBAbaBAbaBAbaBAbaBAbabABabABabAI.B^i.c._57 BabABabABabABaaBAbaBAbaBAbaBAbaBAbaBAbaBAbbABabABabABabAI.B^i.c._58 BabaBABabaBABabaBABabaBABabaBAbabABAbabABAbabABAbabABAbabAI.B^i.c._59 BAbabaBABAbabaBABababABABababABABababABABababABAbabaBABAbabaBAI.B^i.c._60 BababABAbabABAbabABAbabABAbabABAbabaBAbabaBABabaBABabaBABabaBABabaBABababAI.B^i.c._61 BabaBAbabABabABAbaBABabABAbaBAbabABabaBAbabABabaBAbaBABabABAbaBABabABabaBAbabAI.B^i.c._62 BabABaaBAbaBAbaBAbaBBabABabABabABaaBAbaBAbaBAbbABabABabABabAAbaBAbaBAbaBAbbABabAI.B^i.c._63 BabaBAbabABAbaBABabABAbaBABabABAbabABabaBAbabABabaBABabABAbaBABabABAbaBABabaBAbabAI.B^i.c._64 BabABAbaBAbabABabABabaBAbaBAbabABabABAbaBAbaBABabABabaBAbaBAbabABabABabaBAbaBABabAI.B^i.c._65 BabaBAbaBABabABAbaBAbabABabABAbaBABabABabaBAbabABabABAbaBABabABabaBAbaBABabABAbaBAbabAI.B^i.c._66 BabAAbaBAbaBAbbABabABaaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbbABabABaaBAbaBAbaBBabAI.B^i.c._67 BabABabAAbaBAbaBAbaBAbbABabABabABabAAbaBAbaBAbaBAbaBBabABabABabABaaBAbaBAbaBAbaBBabABabAI.B^i.c._68 BabaBABabaBABabABAbabABAbabABAbaBABabaBABabaBAbabABAbabABAbaBABabaBABabaBABabABAbabABAbabAI.B^i.c._69 BabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabAI.B^i.c._70 BAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.B^i.c._71 BabaBAbabABabABAbaBABabABAbaBABabABAbaBAbabABabaBAbabABabaBAbaBABabABAbaBABabABAbaBABabABabaBAbabAI.B^i.c._72 BabABAbaBAbaBAbabABabABabaBAbaBAbabABabABabABAbaBAbaBABabABabABabaBAbaBAbabABabABabaBAbaBAbaBABabAI.B^i.c._73 BAbabABAbabABAbabaBABabaBABabaBABababABAbabABAbabABABabaBABabaBABababABAbabABAbabABAbabaBABabaBABabaBAI.B^i.c._74 BabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabAI.B^i.c._75 BabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBBabABabAAbaBAbaBBabABabAAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabAI.B^i.c._76 BAbabABAbabaBABabaBABAbabABAbabABABabaBABababABAbabABABabaBABababABAbabABABabaBABabaBABAbabABAbabaBABabaBAI.B^i.c._77 BababABAbabaBABabaBABababABAbabaBABabaBABababABAbabaBAbabaBABababABAbabABAbabaBABababABAbabABAbabaBABababAI.B^i.c._78 BabaBAbaBABabABAbaBAbabABabaBAbabABabABAbaBABabABabaBAbabABabABAbaBABabABabaBAbabABabaBAbaBABabABAbaBAbabAI.B^i.c._79 BAbabABABabaBABababABAbabaBABababABAbabaBABabaBABAbabABABabaBABAbabABAbabaBABababABAbabaBABababABAbabABABabaBAI.B^i.c._80 BAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.B^i.c._81 BabABaaBAbaBAbaBBabABabABabAAbaBAbaBAbaBBabABabABaaBAbaBAbaBAbbABabABabAAbaBAbaBAbaBBabABabABabAAbaBAbaBAbbABabAI.B^i.c._82 BababABAbabABAbabABABabaBABabaBABabaBABAbabABAbabABAbabaBAbabaBABabaBABabaBABAbabABAbabABAbabABABabaBABabaBABababAI.B^i.c._83 BAbabABABabaBABAbabaBABababABAbabaBABababABABabaBABAbabABABabaBABAbabABABababABAbabaBABababABAbabaBABAbabABABabaBAI.B^i.c._84 BabABAbaBAbaBABabABabaBAbaBAbabABabABabaBAbaBABabABabABAbaBAbaBABabABabABAbaBAbabABabABabaBAbaBAbabABabABAbaBAbaBABabAI.B^i.c._85 BAbabaBABababABABabaBABAbabaBABababABABabaBABAbabaBABababABABababABAbabaBABAbabABABababABAbabaBABAbabABABababABAbabaBAI.B^i.c._86 BAbabABAbabABABabaBABababABAbabABAbabaBABabaBABAbabABAbabABABabaBABabaBABAbabABAbabaBABabaBABababABAbabABABabaBABabaBAI.B^i.c._87 BabAAbaBAbaBBabABabABaaBAbaBAbbABabABaaBAbaBAbaBBabABabAAbaBAbaBBabABabAAbaBAbaBAbbABabABaaBAbaBAbbABabABabAAbaBAbaBBabAI.B^i.c._88 
BAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABAbabABABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBABabaBAI.B^i.c._89 BabaBABabaBAbabABAbabABAbaBABabaBABabABAbabABAbabABabaBABabaBAbabABAbabABabaBABabaBABabABAbabABAbaBABabaBABabaBAbabABAbabAI.B^i.c._90 BAbabABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabABABabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABabaBA 1pt 0pt The free group elements for the periodic three-body orbits.1 1ptClass & numberfree group element I.B^i.c._91 BababABAbabaBABababABAbabABABabaBABAbabABAbabaBABababABAbabaBAbabaBABababABAbabaBABabaBABAbabABABabaBABababABAbabaBABababAI.B^i.c._92 BAbabaBABAbabABABababABABababABAbabaBABAbabaBABAbabABABababABABababABABabaBABAbabaBABAbabaBABababABABababABABabaBABAbabaBAI.B^i.c._93 BAbabABAbabABAbabABAbabaBABabaBABabaBABababABAbabABAbabABAbabABABabaBABabaBABabaBABababABAbabABAbabABAbabaBABabaBABabaBABabaBAI.B^i.c._94 BabaBAbaBABabABAbaBABabABabaBAbabABabaBAbaBABabABAbaBABabABabaBAbabABabABAbaBABabABAbaBAbabABabaBAbabABabABAbaBABabABAbaBAbabAI.B^i.c._95 BabaBABabaBABabaBABabABAbabABAbabABAbabABAbaBABabaBABabaBABabaBAbabABAbabABAbabABAbaBABabaBABabaBABabaBABabABAbabABAbabABAbabAI.B^i.c._96 BAbabaBABAbabaBABAbabaBABAbabaBABababABABababABABababABABababABABababABABababABABababABABababABAbabaBABAbabaBABAbabaBABAbabaBAI.B^i.c._97 BabAAbaBAbaBAbbABabABaaBAbaBAbaBBabABabABaaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbbABabABabAAbaBAbaBAbbABabABaaBAbaBAbaBBabAI.B^i.c._98 BAbabaBABababABAbabaBABababABABabaBABAbabaBABababABAbabaBABababABABababABAbabaBABababABAbabaBABAbabABABababABAbabaBABababABAbabaBAI.B^i.c._99 BAbabABABababABAbabaBABababABABabaBABAbabaBABababABAbabaBABAbabABABabaBABAbabaBABababABAbabaBABAbabABABababABAbabaBABababABABabaBAI.B^i.c._100 BAbabaBABAbabaBABAbabaBABAbabaBABABababABABababABABababABABababABABababABABababABABababABABababABABAbabaBABAbabaBABAbabaBABAbabaBAI.B^i.c._101 BabaBABabaBABabABAbabABAbaBABabaBABabaBABabABAbabABAbaBABabaBABabaBAbabABAbabABAbaBABabaBABabABAbabABAbabABAbaBABabaBABabABAbabABAbabAI.B^i.c._102 BabaBAbabABAbaBABabABAbaBABabaBAbabABabaBABabABAbaBABabABAbabABabaBAbabABabaBABabABAbaBABabABAbabABabaBAbabABAbaBABabABAbaBABabaBAbabAI.B^i.c._103 BabaBABabaBABabABAbabABAbabABabaBABabaBAbabABAbabABAbaBABabaBABabaBAbabABAbabABAbaBABabaBABabaBAbabABAbabABabaBABabaBABabABAbabABAbabAI.B^i.c._104 BabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABabAAbaBAbaBAbbABabABabAAbaBAbaBBabABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabABabABaaBAbaBAbaBBabAI.B^i.c._105 BabAAbaBAbaBBabABaaBAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBBabABabAAbaBAbaBBabABabAAbaBAbbABabABaaBAbaBAbbABabABaaBAbaBAbbABabAAbaBAbaBBabAI.B^i.c._106 BAbabABAbabABAbabABAbabaBABabaBABabaBABabaBABababABAbabABAbabABAbabABABabaBABabaBABabaBABababABAbabABAbabABAbabABAbabaBABabaBABabaBABabaBAI.B^i.c._107 BAbabABABabaBABAbabABAbabaBABababABAbabaBABababABAbabABABabaBABAbabABABabaBABAbabABABabaBABababABAbabaBABababABAbabaBABabaBABAbabABABabaBAI.B^i.c._108 BabaBABabaBAbabABAbaBABabaBABabaBAbabABabaBABabaBABabABAbabABabaBABabaBAbabABAbabABabaBABabABAbabABAbabABabaBAbabABAbabABAbaBABabaBAbabABAbabAI.B^i.c._109 BAbabABABabaBABAbabaBABababABAbabaBABababABAbabaBABababABABabaBABAbabABABabaBABAbabABABababABAbabaBABababABAbabaBABababABAbabaBABAbabABABabaBAI.B^i.c._110 BAbabABAbabABABabaBABabaBABababABAbabABAbabaBABabaBABabaBABAbabABAbabABABabaBABabaBABAbabABAbabABAbabaBABabaBABababABAbabABAbabABABabaBABabaBAI.B^i.c._110 
[Tables: "The free group elements for the periodic three-body orbits." Columns: class & number | free group element. The tables list the free group words (long strings in the letters a, b, A, B) for the orbit classes I.B^i.c._111 through I.B^i.c._191, II.A^i.c._1 through II.A^i.c._4, II.B^i.c._1 through II.B^i.c._10, and II.C^i.c._1 through II.C^i.c._300.]
http://arxiv.org/abs/1705.00527v4
{ "authors": [ "Xiaoming Li", "Shijun Liao" ], "categories": [ "nlin.CD", "astro-ph.EP", "physics.comp-ph" ], "primary_category": "nlin.CD", "published": "20170426221737", "title": "More than six hundreds new families of Newtonian periodic planar collisionless three-body orbits" }
[email protected] School of Physical Science and Technology, Southwest University, Chongqing 400715, China

By using the relations between the slow-roll parameters and the power spectrum for single field slow-roll inflation, we derive the scalar spectral tilt n_s and the tensor to scalar ratio r for the constant slow-roll inflation and obtain the constraint on the slow-roll parameter η from the Planck 2015 results. The inflationary potential for the constant slow-roll inflation is then reconstructed in the framework of both general relativity and scalar-tensor theory of gravity, and compared with the recently reconstructed E model potential. In the strong coupling limit, we show that the η attractor is reached.

1704.08559

Reconstruction of constant slow-roll inflation
Qing Gao
December 30, 2023
==============================================

§ INTRODUCTION

The observational result on the scalar spectral tilt, n_s = 0.9645 ± 0.0049 (68% CL) <cit.>, implies that n_s - 1 ≈ -2/N, where N is the number of e-folds before the end of inflation and we choose N=60. It is therefore natural to parameterize the observables n_s and r in terms of N. The parametrization of the slow-roll parameter ϵ by N <cit.> was used to discuss the observables n_s and r and the sub-Planckian field excursion <cit.>. Mukhanov used the simple power-law parametrization ϵ(N)=β/(N+1)^α to reconstruct the corresponding class of inflationary potentials <cit.>, and the reconstruction of inflationary potentials was then discussed by many researchers <cit.>. In this reconstruction method, the observables n_s and r are derived straightforwardly once the parametrization is specified, and the parameters can be constrained by observational data even before the potentials are derived <cit.>. Furthermore, the class of potentials is reconstructed in full form, not just as the first few terms of a Taylor expansion <cit.>. Since the parametrization in terms of N works on observable scales only, the reconstruction method has some shortcomings <cit.>.

On the other hand, the attractor n_s=1-2/N and r=12/N^2 can be derived from the T model <cit.>, the E model <cit.>, the Higgs inflation with the nonminimal coupling ξψ^2 R in the strong coupling limit ξ≫ 1 <cit.>, the more general potential λ^2 f^2(ψ) with the nonminimal coupling ξ f(ψ)R for an arbitrary function f(ψ) in the strong coupling limit <cit.>, and the Starobinsky model R+R^2 <cit.>. This attractor was also generalized to the so-called α attractor with n_s=1-2/N and r=12α/N^2 <cit.>. Due to the arbitrary nonminimal coupling Ω(ϕ)=1+ξ f(ϕ) and the conformal transformation between Jordan frame and Einstein frame, general scalar-tensor theories of gravity in Jordan frame can be brought into Einstein gravity plus a canonical scalar field minimally coupled to gravity in Einstein frame. Therefore, in general, it is possible to obtain any attractor from general scalar-tensor theories of gravity <cit.>.

In this paper, we discuss the constant slow-roll inflationary model <cit.>. If the slow-roll parameter ϵ is constant, then the other slow-roll parameter η is 0 and inflation would never end. So the constant slow-roll inflationary model means that η is a constant, where η can be the slow-roll parameter defined either by the Hubble parameter or by the potential. We reconstruct the class of inflationary potentials for the constant slow-roll inflation in the framework of both general relativity and scalar-tensor theory of gravity. We also fit the parameter η to the observational data given by the Planck observations <cit.>.
The paper is organized as follows. In Sec. II, we give the general formula and procedure for the reconstruction of the potentials with constant η and compare the potential with the reconstructed E model potential in <cit.>. We conclude the paper in Sec. III.

§ THE CONSTANT SLOW-ROLL INFLATIONARY MODEL

For the constant slow-roll inflationary model <cit.>, η is a constant with |η|<1,

η = (1/V) d^2V/dϕ^2,

where the reduced Planck mass M_pl=√(1/(8π G))=1. It is easy to see that the potential takes the form

V(ϕ) = A e^{√(η) ϕ} + B e^{-√(η) ϕ},   for 0<η<1,
V(ϕ) = A + Bϕ,   for η=0,
V(ϕ) = A cos(√(-η) ϕ) + B sin(√(-η) ϕ),   for -1<η<0.

In the following, we use the reconstruction method to determine the integration constants A and B.

From the relation

2η = d ln ϵ/dN + 4ϵ,

we get the solution

ϵ(N) = η e^{2η N} / (Dη + 2e^{2η N}),

where D is an integration constant. In this paper, we assume the constant parametrization is valid during the whole of inflation. At the end of inflation, N=0 and ϵ(N=0) ≈ 1, so D=1-2/η. The slow-roll parameter (<ref>) becomes

ϵ(N) = η e^{2η N} / (η-2+2e^{2η N}),

and it has only one parameter, η. Note that the sign of the denominator η-2+2e^{2η N} in Eq. (<ref>) is the same as that of η. The tensor to scalar ratio is

r = 16ϵ = 16η e^{2η N} / (η-2+2e^{2η N}),

and the scalar spectral tilt is

n_s = 1+2η-6ϵ = 1 + 2η(η-2-e^{2η N}) / (η-2+2e^{2η N}).

To get the constraint on the slow-roll parameter η, we compare the results obtained from Eqs. (<ref>) and (<ref>) with the Planck 2015 observations <cit.>; the results are shown in Fig. <ref>. For N=60, the 1σ constraint is -0.018<η<-0.0067, the 2σ constraint is -0.021<η<0.0015, and the 3σ constraint is -0.023<η<0.01. For N=50, the 1σ constraint is -0.014<η<-0.0039, the 2σ constraint is -0.018<η<0.0068, and the 3σ constraint is -0.02<η<0.0168. Since the denominator in Eq. (<ref>) becomes zero when η=0, i.e., η=0 is a singular point, we Taylor expand ϵ(N) around η=0,

ϵ ≈ 1/(1+4N).

Plugging the result (<ref>) into Eqs. (<ref>) and (<ref>), we get

n_s ≈ 1 - 6/(1+4N),   r ≈ 16/(1+4N).

When η=0, (n_s, r) equal (0.97, 0.08) for N=50 and (0.975, 0.067) for N=60, respectively.

From the definition of the slow-roll parameter ϵ, we get

dϕ = (1/V)(dV/dϕ) dN = ∓√(2ϵ) dN,

where the sign ∓ depends on the sign of the first derivative of the potential, and the scalar field is normalized by the reduced Planck mass M_pl=1. So

ϕ - ϕ_e = ±∫_0^N √(2ϵ(N)) dN.

Substituting Eq. (<ref>) into Eq. (<ref>), we get

ϕ = (1/√(η)) arctanh(√(1-(2-η)e^{-2η N}/2)),   for η>0,
ϕ = (1/√(-η)) arctan(√(-1+(2-η)e^{-2η N}/2)),   for η<0,

and the value ϕ_e of the scalar field at the end of inflation,

ϕ_e = (1/√(η)) arctanh(√(η/2)),   for η>0,
ϕ_e = (1/√(-η)) arctan(√(-η/2)),   for η<0.

From the definition of the slow-roll parameter and the relation (<ref>), we get <cit.>

ϵ = (1/2)(1/V)(dV/dϕ)(dϕ/dN) = (1/2) d ln V/dN.

Plugging Eq. (<ref>) into Eq. (<ref>), we get

V(N) = Ṽ_0 √(|2-η-2e^{2η N}|).

Combining Eqs. (<ref>) and (<ref>), we get the reconstructed potential in general relativity,

V(ϕ) = V_0 sin(√(-η) ϕ),

where V_0=Ṽ_0/√(-η). Note that if η>0, the sine function is replaced by the hyperbolic sine.

From Fig. <ref>, we find that η>0 is inconsistent with the observations at the 1σ level, so in the following we consider η<0 only. From Eqs. (<ref>) and (<ref>), we get the field excursion

Δϕ = ϕ_* - ϕ_e = (1/√(-η)) [arctan(√(-1+(2-η)e^{-2η N}/2)) - arctan(√(-η/2))].
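Since n_s and r depend only on η and N, the quoted numbers are straightforward to verify. The following minimal Python sketch (our own illustration; the function names are not from the paper) evaluates ϵ(N), n_s and r from the formulas above, switching to the Taylor limit ϵ ≈ 1/(1+4N) near the singular point η=0:

```python
import numpy as np

def epsilon(eta, N):
    """Slow-roll parameter eps(N) for constant eta; Taylor limit near eta = 0."""
    if abs(eta) < 1e-8:
        return 1.0 / (1.0 + 4.0 * N)
    return eta * np.exp(2 * eta * N) / (eta - 2.0 + 2.0 * np.exp(2 * eta * N))

def observables(eta, N):
    """Scalar spectral tilt n_s = 1 + 2 eta - 6 eps and tensor-to-scalar ratio r = 16 eps."""
    eps = epsilon(eta, N)
    return 1.0 + 2.0 * eta - 6.0 * eps, 16.0 * eps

for eta in (0.0, -0.015):
    for N in (50, 60):
        ns, r = observables(eta, N)
        print(f"eta = {eta:+.3f}, N = {N}: n_s = {ns:.3f}, r = {r:.3f}")
# eta = 0 reproduces (n_s, r) = (0.97, 0.08) for N = 50 and (0.975, 0.067) for N = 60;
# eta = -0.015, N = 60 gives (0.961, 0.024), the attractor values quoted below.
```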
From Eq. (<ref>), if we take η=-0.015 and N=60, we get the super-Planckian field excursion Δϕ=8.715, which is bigger than the Lyth bound <cit.> N√(r/8)=3.256.

Now we compare the potential V(ϕ)=V_0 sin(√(-η)ϕ) with the reconstructed E model potential <cit.>,

V(ϕ) = V_0 (1-e^{d_2ϕ/d_1})^2,

where

d_1 = (1/2)√(r/2),   d_2 = (1/3)[9r/16 - (3/2)(1-n_s)].

Taking η=-0.015, we show the reconstructed potentials (<ref>) and (<ref>) in Fig. <ref>. It is clear that the reconstructed potentials are consistent on the observable scale.

For the scalar field minimally coupled to gravity in Einstein frame, after the conformal transformation between the metric g̃_μν and the scalar field ψ in Jordan frame and the metric g_μν and the scalar field ϕ in Einstein frame,

g_μν = Ω(ψ) g̃_μν,

dϕ^2 = [3(dΩ/dψ)^2/(2Ω^2(ψ)) + ω(ψ)/Ω(ψ)] dψ^2,

we get the action for scalar-tensor theory in Jordan frame,

S = ∫ d^4x √(-g̃) [(1/2)Ω(ψ)R̃(g̃) - (1/2)ω(ψ)g̃^μν∇_μψ∇_νψ - V_J(ψ)],

where V_J(ψ)=Ω^2(ψ)V(ϕ). If the conformal factor satisfies the strong coupling condition

Ω(ψ) ≪ 3(dΩ(ψ)/dψ)^2 / (2ω(ψ)),

then we get

ϕ ≈ √(3/2) ln Ω(ψ),   Ω(ψ) ≈ e^{√(2/3) ϕ}.

For simplicity, we take Ω(ψ)=1+ξ f(ψ) and use the above approximate relations (<ref>) in the strong coupling limit to reconstruct the potential V_J[ψ(ϕ)] in Jordan frame. For this specific choice of Ω(ψ) with f(ψ)=ψ^k, the strong coupling conditions (<ref>) and (<ref>) become <cit.>

ξ ≫ (2/(3k^2))^{k/2} (e^{√(2/3) ϕ}-1)^{1-k} exp(√(1/6) kϕ).

Therefore, in the strong coupling limit, we get the reconstructed potential of the constant slow-roll inflation in the framework of scalar-tensor theory of gravity,

V_J(ψ) = V_0 Ω^2(ψ) sin(√(-3η/2) ln Ω(ψ)).

Note that the function Ω(ψ)=1+ξ f(ψ) is arbitrary, so we obtain the constant slow-roll inflationary attractor (<ref>) and (<ref>) from the above potential (<ref>) in the strong coupling limit (<ref>); we call this attractor the η attractor. In Fig. <ref>, we take ω(ψ)=1, N=60, η=-0.015 and f(ψ)=ψ^k with k=1/5, 2/3, 1, 3/2 and 5 as examples to show the η attractor n_s=0.961 and r=0.024 in the strong coupling limit. From Eq. (<ref>), we find that the strong coupling limit requires ξ≫ 1933 for k=1/5 and ξ≫ 0.0013 for k=5; the dependence of n_s and r on the coupling constant ξ is shown in Figs. <ref> and <ref>. The results confirm the strong coupling condition (<ref>).
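The η attractor rests on the approximation ϕ ≈ √(3/2) ln Ω(ψ) becoming exact at strong coupling. As a minimal numerical sketch (our own check, assuming ω(ψ)=1 and Ω(ψ)=1+ξψ^k as in the text, with scipy used for the quadrature), one can integrate the exact kinetic relation for dϕ/dψ and watch it approach √(3/2) ln Ω as ξ grows:

```python
import numpy as np
from scipy.integrate import quad

def phi_exact(psi, xi, k, omega=1.0):
    """Exact Einstein-frame field: dphi/dpsi = sqrt(3 Omega'^2/(2 Omega^2) + omega/Omega).
    The p -> 0 endpoint singularity of Omega' for k < 1 is integrable."""
    Omega = lambda p: 1.0 + xi * p**k
    dOmega = lambda p: xi * k * p**(k - 1.0)
    integrand = lambda p: np.sqrt(1.5 * (dOmega(p) / Omega(p))**2 + omega / Omega(p))
    value, _ = quad(integrand, 0.0, psi, limit=200)
    return value

psi = 1.0
for k in (0.2, 1.0, 5.0):
    for xi in (1e0, 1e2, 1e4):
        exact = phi_exact(psi, xi, k)
        approx = np.sqrt(1.5) * np.log(1.0 + xi * psi**k)  # phi = sqrt(3/2) ln(Omega)
        print(f"k = {k}, xi = {xi:.0e}: exact phi = {exact:.3f}, approx = {approx:.3f}")
```

The residual difference comes from the ω/Ω term, which only matters near ψ=0 where Ω ≈ 1; this is why the Einstein-frame potential, and hence (n_s, r), becomes independent of f(ψ) in the strong coupling limit.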
Finally, we use the conformal transformation between the Jordan and Einstein frames to reconstruct the class of extended inflationary potentials, and the η attractor is reached in the strong coupling limit as shown in Fig. <ref>. We also use the strong coupling condition, Eq. (<ref>), to derive the constraint on the coupling constant ξ. The derived analytical results are supported by the numerical results as shown in Figs. <ref> and <ref>.

The author thanks Professor Yungui Gong for helpful discussions. This research was supported in part by the National Natural Science Foundation of China under Grant No. 11605061 and the Fundamental Research Funds for the Central Universities.

Adam:2015rua Adam R, et al. Planck 2015 results. I. Overview of products and scientific results. Astron Astrophys, 2016, 594: A1
Ade:2015lrj Ade P A R, et al. Planck 2015 results. XX. Constraints on inflation. Astron Astrophys, 2016, 594: A20
Huang:2007qz Huang Q G. Constraints on the spectral index for the inflation models in string landscape. Phys Rev D, 2007, 76: 061303
Lyth:1996im Lyth D H. What would we learn by detecting a gravitational wave signal in the cosmic microwave background anisotropy? Phys Rev Lett, 1997, 78: 1861–1863
Gong:2014cqa Gao Q, Gong Y. The challenge for single field inflation with BICEP2 result. Phys Lett B, 2014, 734: 41–43
Gao:2014yra Gao Q, Gong Y, Li T, et al. Simple single field inflation models and the running of spectral index. Sci China Phys Mech Astron, 2014, 57: 1442–1448
Gao:2014pca Gao Q, Gong Y, Li T. Modified Lyth bound and implications of BICEP2 results. Phys Rev D, 2015, 91(6): 063509
Huang:2015xda Huang Q G. Lyth bound revisited. Phys Rev D, 2015, 91(12): 123532
Linde:2016hbb Linde A. Gravitational waves and large field inflation. JCAP, 2017, 1702(02): 006
Mukhanov:2013tua Mukhanov V. Quantum Cosmological Perturbations: Predictions and Observations. Eur Phys J C, 2013, 73: 2486
Roest:2013fha Roest D. Universality classes of inflation. JCAP, 2014, 1401(01): 007
Garcia-Bellido:2014eva Garcia-Bellido J, Roest D, Scalisi M, et al. Can CMB data constrain the inflationary field range? JCAP, 2014, 1409: 006
Garcia-Bellido:2014wfa Garcia-Bellido J, Roest D, Scalisi M, et al. Lyth bound of inflation with a tilt. Phys Rev D, 2014, 90(12): 123539
Garcia-Bellido:2014gna Garcia-Bellido J, Roest D. Large-N running of the spectral index of inflation. Phys Rev D, 2014, 89(10): 103527
Boubekeur:2014xva Boubekeur L, Giusarma E, Mena O, et al. Phenomenological approaches of inflation and their equivalence. Phys Rev D, 2015, 91(8): 083006
Creminelli:2014nqa Creminelli P, Dubovsky S, López Nacir D, et al. Implications of the scalar tilt for the tensor-to-scalar ratio. Phys Rev D, 2015, 92(12): 123528
Barranco:2014ira Barranco L, Boubekeur L, Mena O. A model-independent fit to Planck and BICEP2 data. Phys Rev D, 2014, 90(6): 063007
Gobbetti:2015cya Gobbetti R, Pajer E, Roest D. On the Three Primordial Numbers. JCAP, 2015, 1509(09): 058
Chiba:2015zpa Chiba T. Reconstructing the inflaton potential from the spectral index. Prog Theor Exp Phys, 2015, 2015(7): 073E02
Binetruy:2014zya Binetruy P, Kiritsis E, Mabillard J, et al. Universality classes for models of inflation. JCAP, 2015, 1504(04): 033
Pieroni:2015cma Pieroni M. β-function formalism for inflationary models with a non minimal coupling with gravity. JCAP, 2016, 1602(02): 012
Binetruy:2016hna Binétruy P, Mabillard J, Pieroni M. Universality in generalized models of inflation.
arXiv: 1611.07019 [gr-qc]
Cicciarella:2016dnv Cicciarella F, Pieroni M. Universality for quintessence. arXiv: 1611.10074 [gr-qc]
Barbosa-Cendejas:2015rba Barbosa-Cendejas N, De-Santiago J, German G, et al. Tachyon inflation in the Large-N formalism. JCAP, 2015, 1511: 020
Lin:2015fqa Lin J, Gao Q, Gong Y. The model independent reconstruction of inflationary potentials. Mon Not Roy Astron Soc, 2016, 459: 4029–4037
Yi:2016jqr Yi Z, Gong Y. Nonminimal coupling and inflationary attractors. Phys Rev D, 2016, 94(10): 103527
Gao:2017uja Gao Q, Gong Y. Reconstruction of extended inflationary potentials for attractors. arXiv: 1703.02220 [gr-qc]
Odintsov:2017qpp Odintsov S D, Oikonomou V K. Inflation with a Smooth Constant-Roll to Constant-Roll Era Transition. arXiv: 1704.02931 [gr-qc]
Nojiri:2017qvx Nojiri S, Odintsov S D, Oikonomou V K. Constant-roll Inflation in F(R) Gravity. arXiv: 1704.05945 [gr-qc]
Hodges:1990bf Hodges H M, Blumenthal G R. Arbitrariness of inflationary fluctuation spectra. Phys Rev D, 1990, 42: 3329–3333
Copeland:1993jj Copeland E J, Kolb E W, Liddle A R, et al. Reconstructing the inflation potential, in principle and in practice. Phys Rev D, 1993, 48: 2529–2547
Liddle:1994cr Liddle A R, Turner M S. Second order reconstruction of the inflationary potential. Phys Rev D, 1994, 50: 758
Lidsey:1995np Lidsey J E, Liddle A R, Kolb E W, et al. Reconstructing the inflation potential: An overview. Rev Mod Phys, 1997, 69: 373–410
Ma:2014vua Ma Y Z, Wang Y. Reconstructing the Local Potential of Inflation with BICEP2 data. JCAP, 2014, 1409(09): 041
Peiris:2006ug Peiris H, Easther R. Recovering the Inflationary Potential and Primordial Power Spectrum With a Slow Roll Prior: Methodology and Application to WMAP 3 Year Data. JCAP, 2006, 0607: 002
Norena:2012rs Norena J, Wagner C, Verde L, et al. Bayesian Analysis of Inflation III: Slow Roll Reconstruction Using Model Selection. Phys Rev D, 2012, 86: 023505
Choudhury:2014kma Choudhury S, Mazumdar A. Reconstructing inflationary potential from BICEP2 and running of tensor modes. arXiv: 1403.5549 [hep-th]
Martin:2016iqo Martin J, Ringeval C, Vennin V. Shortcomings of New Parametrizations of Inflation. Phys Rev D, 2016, 94(12): 123521
Kallosh:2013hoa Kallosh R, Linde A. Universality Class in Conformal Inflation. JCAP, 2013, 1307: 002
Kallosh:2013maa Kallosh R, Linde A. Non-minimal Inflationary Attractors. JCAP, 2013, 1310: 033
Kaiser:1994vs Kaiser D I. Primordial spectral indices from generalized Einstein theories. Phys Rev D, 1995, 52: 4295–4306
Bezrukov:2007ep Bezrukov F L, Shaposhnikov M. The Standard Model Higgs boson as the inflaton. Phys Lett B, 2008, 659: 703–706
Kallosh:2013tua Kallosh R, Linde A, Roest D. A universal attractor for inflation at strong coupling. Phys Rev Lett, 2014, 112: 011303
starobinskyfr Starobinsky A A. A New Type of Isotropic Cosmological Models Without Singularity. Phys Lett B, 1980, 91: 99–102
Kallosh:2013yoa Kallosh R, Linde A, Roest D. Superconformal Inflationary α-Attractors. JHEP, 2013, 1311: 198
Brooker:2017vyi Brooker D J. How to Produce an Arbitrarily Small Tensor to Scalar Ratio. arXiv: 1703.07225 [astro-ph.CO]
Jinno:2017jxc Jinno R, Kaneta K. Hillclimbing inflation. arXiv: 1703.09020 [hep-ph]
Galante:2014ifa Galante M, Kallosh R, Linde A, et al.
Unity of Cosmological Inflation Attractors. Phys Rev Lett, 2015, 114(14): 141302
Martin:2012pe Martin J, Motohashi H, Suyama T. Ultra Slow-Roll Inflation and the non-Gaussianity Consistency Relation. Phys Rev D, 2013, 87(2): 023514
Motohashi:2014ppa Motohashi H, Starobinsky A A, Yokoyama J. Inflation with a constant rate of roll. JCAP, 2015, 1509(09): 018
Motohashi:2017aob Motohashi H, Starobinsky A A. Constant-roll inflation: confrontation with recent observational data. Europhys Lett, 2017, 117(3): 39001
DiMarco:2017sqo Di Marco A, Cabella P, Vittorio N. Reconstruction of α-attractor supergravity models of inflation. Phys Rev D, 2017, 95(2): 023516
Generalized G-estimation and Model Selection

Michael P. Wallace^1, Erica E. M. Moodie^2, and David A. Stephens^3

^1 Department of Statistics and Actuarial Science, University of Waterloo
^2 Department of Epidemiology, Biostatistics, and Occupational Health, McGill University
^3 Department of Mathematics and Statistics, McGill University

Dynamic treatment regimes (DTRs) aim to formalize personalized medicine by tailoring treatment decisions to individual patient characteristics. G-estimation for DTR identification targets the parameters of a structural nested mean model, known as the blip function, from which the optimal DTR is derived. Despite considerable work deriving such estimation methods, there has been little focus on extending G-estimation to the case of non-additive effects or non-continuous outcomes, or on model selection. We demonstrate how G-estimation can be more widely applied through the use of iteratively-reweighted least squares procedures, and illustrate this for log-linear models. We then derive a quasi-likelihood function for G-estimation within the DTR framework, and show how it can be used to form an information criterion for blip model selection. These developments are demonstrated through application to a variety of simulation studies as well as data from the Sequenced Treatment Alternatives to Relieve Depression study.

Keywords: Adaptive treatment strategies; Dynamic treatment regimes; Iteratively-reweighted least squares; Quasi-likelihood Information Criterion; Structural nested models.

§ INTRODUCTION

Dynamic treatment regimes (DTRs) - sequences of decision rules that take patient information as input and output recommended treatments - are part of a rapidly expanding literature on personalized medicine <cit.>. By tailoring treatments to individual patient characteristics, DTRs are able to improve long-term outcomes for a population when compared with more traditional non-tailored approaches. Identification of the optimal regime (which maximizes expected outcome) is a major challenge due to, for example, delayed treatment effects and covariate-dependent treatment assignment.

Numerous methods have been proposed for optimal DTR estimation. A general class of DTR estimation approaches relies on structural nested mean models (SNMMs, <cit.>). In our formulation, the SNMM parameterizes the difference between the conditional expectation of the outcome following observed treatment with that of a counterfactual outcome under a (potentially unobserved) treatment regime. By estimating the parameters of this model we are then able to identify the optimal DTR, i.e. the sequence of treatment decisions that maximizes the expected outcome across all patients. This general approach of parameterizing and estimating components of the outcome mean model is used in a variety of specific DTR estimation methods, including Q-learning <cit.>, dynamic weighted least squares <cit.>, and G-estimation <cit.>, the last of which is the focus of this paper.

Almost all of the methodological developments for DTR estimation have focused on continuous outcomes and additive effects of treatment on the expected counterfactual outcome, with time-to-event outcomes included as a special case. Estimation for discrete outcomes, or for effects of treatment on non-additive scales, has received little attention. A recent exception is the work of <cit.>, who used generalized additive models to apply Q-learning in this setting. <cit.>, meanwhile, considered a likelihood-based approach in the case of a binary outcome.
Such examples are rare, however, and typically grounded in methods that do not offer a great deal of flexibility or robustness in modeling. The presentation (and implementation) of G-estimation primarily for continuous outcomes therefore represents an important limitation of the approach. Even for continuous outcomes, there has been little focus on model selection in the context of G-estimation. The methods listed above all implicitly assume that the SNMMs upon which they rely are correctly (or possibly over-) specified. Very little work has been published related to the problem of choosing between a set of candidate models or model checking. Exceptions include the diagnostic plots of <cit.> and the method of <cit.> that exploits the so-called double-robustness property (discussed below) for model assessment. Neither of these, however, assesses the component of the model quantifying the effect of treatment (i.e., the blip model) alone, and each can at best assess the validity of both the blip model and another component model simultaneously.

In this paper, we present two generalizations of G-estimation. First, we derive and illustrate how iteratively-reweighted least squares (IRLS) may be used to implement G-estimation in a discrete-outcome scenario using log-linear models. We then present a new approach to model selection when using G-estimation for DTRs based on a Quasi-likelihood Information Criterion (QIC). Our QIC formulation is applicable to G-estimation procedures in general, but we demonstrate how the QIC can be applied to G-estimation in the DTR setting, encompassing multiple stages of treatment for both the cases of continuous and count outcomes.

§ DTRS AND G-ESTIMATION

We establish notation by considering G-estimation in its conventional form, where effects of exposure are additive and a linear model is presumed for the counterfactual outcomes. We consider a cohort of subjects on whom data are gathered at fixed intervals (such as visits to a physician) or at fixed clinical decision points (diagnosis, remission, and so on), with a treatment decision made at each of these time points. Our objective is to identify the sequence of treatment decision rules (the DTR) which maximizes a subject's long-term expected outcome (defined such that larger values are preferred). We assume there are a total of J successive treatment decisions (or stages): y denotes observed patient outcome; a_j denotes the stage j treatment decision (j = 1, ..., J), with a_j^0 denoting "no treatment" (such as a control or standard care); h_j denotes the covariate matrix containing patient information (history) prior to the j^th treatment decision. The history can include previous treatments a_1,...,a_j-1 along with non-treatment information x_j. In addition, over- and underline notation is used to indicate the past and future, respectively. For example a_j denotes the vector of treatment decisions up to and including the stage j decision, while a_j+1 denotes the last J-j decisions (from stage j+1 up to and including stage J). The optimal treatment at any given stage is denoted a_j^opt.

The (stage j) optimal blip-to-reference (or simply blip) function is defined as

γ_j(h_j,a_j) = E[Y(a_j-1,a_j,a_j+1^opt) - Y(a_j-1,a_j^0,a_j+1^opt) | h_j],

which is the expected difference in outcome between receiving treatment a_j and the reference treatment a_j^0 at stage j, in subjects with history h_j who receive optimal treatment across the remaining J-j intervals (a_j+1^opt). The optimal treatment at stage j maximizes the blip.
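As a concrete illustration (ours, not taken from the paper), for a binary treatment with a linear blip a_j(ψ_j0 + ψ_j1 x_j), maximizing the blip reduces to treating exactly when the estimated blip at a_j = 1 is positive:

# Illustration (ours): the optimal rule implied by a linear blip
# a * (psi0 + psi1 * x) for a binary treatment.
opt_rule <- function(x, psi) as.numeric(psi[1] + psi[2] * x > 0)
opt_rule(c(-1, 0.5, 2), psi = c(0.5, -0.5))  # returns 1 1 0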
Under additive local rank preservation (see Section 2.1.3 of <cit.>), we can decompose the expectation of the observed potential outcome as

E[Y(a_J)] = E[Y^opt] - ∑_j=1^J [γ_j(h_j,a_j^opt) - γ_j(h_j,a_j)],

where Y^opt can be thought of as the optimal outcome that would be observed if the optimal treatment was followed at every stage. The observed outcome y is then equal in expectation to the optimal outcome minus the difference in outcome between optimal and observed treatment at each stage. In practice, we assume γ_j(.) takes a known parametric form γ_j(h_j,a_j;ψ_j) with parameters ψ_j. We then estimate ψ_j, and identify the optimal treatment regime by choosing, for each subject, the treatment that maximizes the estimated blip.

G-estimation is one method which may be used to estimate ψ_j, and relies on two standard assumptions: the stable unit treatment value assumption and the assumption of no unmeasured confounding (or sequential randomization). The former means that a subject's outcome is not influenced by other subjects' treatment allocation <cit.> and that the counterfactual outcome under a particular treatment is equal to the observed outcome under that treatment; the latter states that the treatment received at stage j is independent of any future (potential) covariate or outcome, conditional on history h_j.

Writing ψ_j = (ψ_j,ψ_j+1,...,ψ_J) for the collection of blip parameters from stage j onwards (underline notation again denoting the present and future stages), we define for each j

G_j(ψ_j) = y_j - γ_j(h_ψ j,a_j;ψ_j),

where

y_j = y + ∑_k=j+1^J [γ_k(h_ψ k,a_k^opt;ψ_k) - γ_k(h_ψ k,a_k;ψ_k)]

can be viewed as a pseudo-outcome which we compute at each stage based on those ψ̂_k (k > j), and hence â_k^opt, already estimated. Therefore

G_j(ψ_j) = y - γ_j(h_j,a_j;ψ_j) + ∑_k=j+1^J [γ_k(h_k,a_k^opt;ψ_k) - γ_k(h_k,a_k;ψ_k)],

and we can regard G_j as being equal to the expected outcome with the effects of stage j treatment `removed' and the difference between optimal and observed treatment thereafter `added'. Under the above assumptions we have that E[G_j(ψ_j)|h_j] = E[Y(a_j-1,a_j^0,a_j+1^opt)|h_j], which represents the expected outcome for a subject who receives treatment history a_j-1 up to stage j-1, no treatment at stage j, and optimal treatment thereafter. We refer to G_j as the stage j treatment-free outcome.

To estimate the blip parameters ψ_j, G-estimation considers the set of functions

U_j(ψ_j;β_j;α_j) = {S_j(A_j) - E[S_j(A_j)|h_j;α_j]}{G_j(ψ_j) - E[G_j(ψ_j)|h_j;β_j]},

where typically S_j(A_j) = a_jh_j. A fully efficient form of S_j(A_j) has been proposed <cit.>, but requires knowledge of the variance of G_j(ψ_j), which is rarely available in practice. The estimating functions require the specification of a number of models, namely the stage j blip model: γ_j(h_ψ j,a_j;ψ_j); the stage j treatment-free model: E[G_j(ψ_j)|h_β j;β_j]; and the stage j treatment model: E[A_j|h_α j;α_j] (or, more generally, E[S_j(A_j)|h_j;α_j]), where h_ψ j, h_β j and h_α j are subsets of patient history that feature in the blip, treatment-free, and treatment models, respectively. An important property of G-estimation is its double-robustness: if the blip is correctly specified, then as long as at least one of the treatment and treatment-free models is also correctly specified the resulting blip parameter estimators will be consistent.

Because the functions G_j(ψ_j) depend on the observed outcome y and each blip model from stage j onwards, G-estimation proceeds recursively, starting at the final stage J and working backwards to stage 1.
At each stage the above models are specified, and then the following three steps are carried out:

* Estimate the treatment model parameters α̂_j by regressing the stage j treatment a_j on the treatment model covariates h_α j.

* Estimate the treatment-free model parameters β_j by `regressing' G_j(ψ_j) on h_β j, where rather than conducting a standard least squares regression, we instead solve the corresponding least squares equation to give β̂_j(ψ_j,ψ̂_j+1) in terms of the stage j blip parameters ψ_j and the estimated blip parameters (ψ̂_j+1) from the previously analyzed stages (j+1,...,J).

* Using the estimates α̂_j and β̂_j(ψ_j,ψ̂_j+1) from steps 1 and 2, solve the equation E_n[U_j(ψ_j;β̂_j,α̂_j)] = 0 to estimate ψ_j, where E_n denotes the mean over all subjects.

We can then use the resulting blip parameter estimates ψ̂_j to estimate the optimal stage j treatment a_j^opt for each subject, and hence the function G_j-1(ψ_j-1), and repeat the above steps until estimates are obtained for every stage of the analysis.

§ G-ESTIMATION FOR GENERALIZED LINEAR MODELS

The framework in section <ref> is standard for G-estimation applications. It assumes a continuous outcome, and that the treatment modifies the expected outcomes additively, that is, the blip acts additively on the original outcome scale. The construction can be modified to be applicable to discrete outcomes, but relaxing the assumption of additivity of the treatment effect needs more care. In this section, we demonstrate how G-estimation may be generalized to handle other effect types, and show how estimation can be achieved using standard computational approaches. Specifically, we will apply G-estimation for generalized linear models by using iteratively-reweighted least squares.

§.§ G-estimation for multiplicative effects

For an arbitrary counterfactual outcome Y(a), the effect of exposure may be framed in terms of the average potential outcome E[Y(a)], and contrasts comparing this average for different exposures. For example, for a binary exposure we might consider the ratio of expectations E[Y(1)]/E[Y(0)] rather than the expected ratio E[Y(1)/Y(0)]; this focuses on population- rather than individual-level contrasts and avoids identifiability issues associated with attempting to specify a joint model for {Y(0),Y(1)}. For G-estimation, consider for illustration the two interval case; our approach will focus on constructing models for E[Y(a_1,a_2)] using the decomposition

E[Y(a_1,a_2)] = E[Y(a_1^opt,a_2^opt)] × {E[Y(a_1,a_2^opt)|h_1]/E[Y(a_1^opt,a_2^opt)|h_1]} × {E[Y(a_1,a_2)|h_2]/E[Y(a_1,a_2^opt)|h_2]},

that is, using a multiplicative modification of the optimal outcome, and making a multiplicative rank preserving assumption. In this context, the blip function γ_j(h_ψ j,a_j) may be defined as the ratio of expected counterfactual outcomes

γ_j(h_ψ j,a_j) = E[Y(a_j-1,a_j,a_j+1^opt)|h_ψ j] / E[Y(a_j-1,a_j^0,a_j+1^opt)|h_ψ j],

and the expected counterfactual outcome may be computed as

E[Y(a_J)|h_J] = E[Y^opt] ∏_j=1^J [γ_j(h_ψ j,a_j)/γ_j(h_ψ j,a_j^opt)],

or equivalently

log(E[Y(a_J)|h_J]) = log(E[Y^opt]) - ∑_j=1^J [log(γ_j(h_ψ j,a_j^opt)/γ_j(h_ψ j,a_j))],

giving rise to a stage-j pseudo-outcome analogous to that in the continuous outcome, linear model setting as

y_j = y × ∏_k=j+1^J [γ_k(h_ψ k,a_k^opt;ψ_k)/γ_k(h_ψ k,a_k;ψ_k)],

and hence G_j(ψ_j) = y_j/γ_j(h_ψ j,a_j;ψ_j).
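For instance, with the log-linear blip form introduced in the next subsection, the stage 1 pseudo-outcome in a two-stage problem can be computed as in the following sketch (ours; psi2 denotes the estimated stage 2 blip parameters):

# Sketch (ours): stage 1 pseudo-outcome y_1 = y * gamma_2(opt)/gamma_2(obs)
# under a multiplicative blip gamma_2 = exp(a2 * (psi2[1] + psi2[2] * x2)).
pseudo_y1 <- function(y, a2, x2, psi2) {
  blip2 <- psi2[1] + psi2[2] * x2
  a2opt <- as.numeric(blip2 > 0)     # estimated optimal stage 2 treatment
  y * exp((a2opt - a2) * blip2)      # = y * gamma_2(opt) / gamma_2(obs)
}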
We then propose log-linear models for the treatment-free and blip components:

log{E[G_j (ψ_j)|h_β j;β_j]} = h_β jβ_j,   logγ_j(h_ψ j,a_j;ψ_j) = a_jh_ψ jψ_j,

from which, via some rearrangement, the G-estimating functions become

U_j(ψ_j;β_j;α_j) = {S_j(A_j) - E[S_j(A_j)|h_j;α_j]}{G_j (ψ_j) - E[G_j (ψ_j)|h_β j;β_j]}
= {a_j - E[A_j|h_α j;α_j]}{y_j/exp(a_jh_ψ jψ_j) - exp(h_β jβ_j)}h_ψ j.

Again suppressing stage-specific notation, and introducing subscript-i notation for subject i, G-estimation at each stage thus solves

0 = ∑_i=1^n d_i h_ψ i (y_i - μ_i(β,ψ)),

with d_i = a_i - E[A_i|h_α i;α] and μ_i(β,ψ) = exp(h_β iβ + a_i h_ψ iψ). In section <ref>, we present an IRLS algorithm to estimate blip parameters at each stage in the usual recursive manner.

Note that when the observed y value is zero, the pseudo-outcomes in (<ref>) will also be zero unless a further adjustment is made. A simple approach to this issue is to assume that when y=0, it is drawn from a Poisson distribution with mean 0.001 (or some other small value), and replace y with its expectation. For example, for stage J-1, after the parameters ψ_J are estimated, the usual adjustment y_J-1 = y ×(γ_J(h_ψ J,a_J^opt;ψ_J)/γ_J(h_ψ J,a_J;ψ_J)) becomes

y_J-1 = 0.001 ×(γ_J(h_ψ J,a_J^opt;ψ_J)/γ_J(h_ψ J,a_J;ψ_J))

if y is zero. An alternative uses the approximation exp(x) ≈ 1+x for small x, under which the adjustment for a zero outcome instead becomes

y_J-1 = logγ_J(h_ψ J,a_J^opt;ψ_J) - logγ_J(h_ψ J,a_J;ψ_J).

Under either adjustment y_J-1 is guaranteed non-negative, since the optimal treatment maximizes the blip.

§.§ Iteratively-reweighted least squares

We now demonstrate how G-estimation may proceed for log-linear models using IRLS. Without loss of generality we consider a single-stage example, allowing us to suppress stage-specific notation. The G-estimating equations, as written in (<ref>), are of a standard form from which IRLS may be used to estimate ψ. Suppose y has a mean function μ modeled using link function g(·) and linear predictor vector η = h_ββ + ah_ψψ such that μ = g^-1(η). Denote the variance function V(μ). We can then estimate ψ via IRLS using the following algorithm:

1. Set initial parameters β^(0), ψ^(0) and compute for each subject the initial linear predictor η^(0) = h_ββ^(0) + a h_ψψ^(0) and mean value μ^(0) = g^-1(η^(0)).

2. Set

z_i^(1) = η_i^(0) + (y_i - μ_i^(0)) ġ(μ_i^(0)),   w_i^(1) = w_i/[{ġ(μ_i^(0))}^2 V(μ_i^(0))],

where w_i is a prior weight (taken to be 1 throughout). Denote by W^(1) the diagonal matrix with (i,i) element w_i^(1), by D the diagonal matrix with (i,i) element d_i, and by A the diagonal matrix with (i,i) element a_i, the observed treatment for subject i.

3. Apply the G-estimation procedure to re-estimate β and ψ:

ψ^(1) = [h_ψ^⊤ D W^(1)(𝐈_n - h_β W)Ah_ψ]^-1[h_ψ^⊤ D W^(1)(𝐈_n - h_β W)z^(1)],
β^(1) = (h_β^⊤W^(1)h_β)^-1h_β^⊤W^(1)(z^(1) - Ah_ψψ^(1)),

where h_β W = h_β (h_β^⊤W^(1)h_β)^-1h_β^⊤W^(1) is the weighted hat matrix, and 𝐈_n is the n × n identity matrix.

4. Define vectors η^(1) = h_ββ^(1) + Ah_ψψ^(1) and μ^(1) = g^-1(η^(1)).

5. Return to 2 and iterate through 2-5 using μ^(1) and η^(1) as the updated starting values, obtaining (μ^(2),η^(2)), then repeat to generate (μ^(3),η^(3)), and so on.

6. Repeat until μ^(t) and η^(t) satisfy |μ^(t)-μ^(t-1)| < ϵ_μ and/or |η^(t)-η^(t-1)| < ϵ_η for tolerances ϵ_μ and ϵ_η.

The sequence of estimates produced by this algorithm converges to the solution of the G-estimating equations.
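To make the algorithm concrete, the following R sketch (ours, not the authors' released code) implements one stage of the log-linear procedure with g(μ) = log μ and V(μ) = μ, solving the working-response versions of the treatment-free and G-estimating equations jointly at each iteration; y, a, the design matrices hb and hp, and Ea (the fitted values of E[A|h] from the treatment model) are assumed inputs.

# Minimal sketch (ours): one-stage G-estimation via IRLS for a log link
# with Poisson-type variance. At convergence the solution satisfies
# sum_i h_b,i (y_i - mu_i) = 0 and sum_i d_i h_psi,i (y_i - mu_i) = 0.
gest_irls <- function(y, a, hb, hp, Ea, tol = 1e-8, maxit = 100) {
  d <- a - Ea                        # treatment residuals
  beta <- rep(0, ncol(hb)); psi <- rep(0, ncol(hp))
  eta <- drop(hb %*% beta + a * (hp %*% psi))
  for (it in seq_len(maxit)) {
    mu <- exp(eta)
    z  <- eta + (y - mu) / mu        # working response for g(mu) = log(mu)
    w  <- mu                         # working weights 1/[g'(mu)^2 V(mu)]
    Xb <- hb; Xp <- a * hp           # designs for beta and psi
    # Joint linear system: treatment-free normal equations (weights w)
    # stacked with the G-estimating equations (weights d * w)
    M <- rbind(cbind(crossprod(Xb, w * Xb),     crossprod(Xb, w * Xp)),
               cbind(crossprod(Xp, d * w * Xb), crossprod(Xp, d * w * Xp)))
    b <- c(crossprod(Xb, w * z), crossprod(Xp, d * w * z))
    theta <- solve(M, b)
    beta <- theta[seq_len(ncol(hb))]; psi <- theta[-seq_len(ncol(hb))]
    eta_new <- drop(hb %*% beta + a * (hp %*% psi))
    if (max(abs(eta_new - eta)) < tol) { eta <- eta_new; break }
    eta <- eta_new
  }
  list(psi = psi, beta = beta)
}

Solving the two sets of per-iteration equations jointly is algebraically equivalent to the two-step substitution in step 3 above, and avoids explicitly forming the weighted hat matrix.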
§ G-ESTIMATION AND QUASI-LIKELIHOOD

Inference for SNMMs using G-estimation, unlike inference for more conventional models such as generalized linear models (GLMs), is not likelihood-based, and so established model selection approaches such as Akaike's Information Criterion (AIC, <cit.>) cannot be directly used within the DTR framework. However, we shall reframe the preceding presentations of G-estimation to illustrate how quasi-likelihood theory may be applied.

§.§ Linear case

We assume the treatment-free model is linear in h_β j, i.e. that E[G_j (ψ_j)|h_β j;β_j] = h_β jβ_j. Then by ordinary least squares, we may estimate β_j as

β̂_j(ψ_j,ψ̂_j+1) = [h_β j^⊤h_β j]^-1h_β j^⊤(y_j - A_jh_ψ jψ_j).

For convenience we again suppress the subscript-j notation (all of what follows may be applied on a stage-by-stage basis) and write ĥ_β = h_β[h_β^⊤h_β]^-1h_β^⊤. Substituting the estimate β̂(ψ), we may rewrite the estimating function vector as

U(ψ) = (Dh_ψ)^⊤(y - h_ββ̂(ψ) - Ah_ψψ)
= (Dh_ψ)^⊤(y - ĥ_β (y - Ah_ψψ) - Ah_ψψ)
= (Dh_ψ)^⊤[(𝐈_n - ĥ_β) (y - Ah_ψψ)]
= h_ψ^⊤W (y - Ah_ψψ),

where W = D^⊤ (𝐈_n - ĥ_β). From here, the estimation of ψ follows by

ψ̂ = (h_ψ^⊤WAh_ψ)^-1h_ψ^⊤Wy.

The form of this estimator is straightforward (and is almost identical to a standard weighted ordinary least squares estimator). This affords greater simplicity in implementation, as well as giving a clear indication that quasi-likelihood methods may be easily applied. We follow <cit.> in defining the quasi-likelihood of ψ by writing μ = Ah_ψψ and solving

∂ Q/∂ψ = (∂ Q/∂μ)(∂μ/∂ψ) = h_ψ^⊤W (y - μ),

yielding

Q(ψ) = ψ^⊤h_ψ^⊤Wy - (1/2)ψ^⊤h_ψ^⊤WAh_ψψ = ψ^⊤m - (1/2)ψ^⊤Mψ,

where m = h_ψ^⊤Wy, M = h_ψ^⊤WAh_ψ = h_ψ^⊤D(𝐈_n - ĥ_β)Ah_ψ, and we ignore the constant term. Because 𝐈_n - ĥ_β is positive semi-definite, M is positive semi-definite in expectation, and thus, provided n is large, this quasi-likelihood is uniquely maximized at ψ̂. Furthermore, given the stage-by-stage, recursive nature of the G-estimation approach within the DTR setting, we may derive this quasi-likelihood at each stage of an analysis.

§.§ Log-linear case

In the log-linear case we first reformulate (<ref>), dividing through by exp(h_ββ) to give

0 = ∑_i=1^n d_i h_ψ i (y_i^* - μ_i^*(ψ)),

where y_i^* = y_i exp(-h_β iβ) and μ_i^*(ψ) = exp(a_i h_ψ iψ). This moves the nuisance parameters β into a pseudo-outcome y^*, framing the estimating equations more explicitly in terms of the target blip parameters ψ, as in the linear case. This allows us to return to the theory of Wedderburn and proceed as before by solving

∂ Q/∂ψ = (∂ Q/∂μ^*)(∂μ^*/∂ψ) = h_ψ^⊤D (y^* - μ^*).

Were D the identity, this would integrate to y^* log(μ^*) - μ^*, resembling the log-likelihood for a Poisson regression; with the treatment residuals in D present, however, this does not yield a quasi-likelihood in a simple way. Instead, by appealing to the IRLS procedure, and a recursive calculation, we compute a quasi-likelihood suitable for model comparison by considering the sequence of linear approximations to the log-linear estimating equations implied by (<ref>). The IRLS procedure produces a solution to (<ref>) by utilizing a quadratic approximation to the actual quasi-likelihood at the maximizing value; by standard theory the solution is an o(1) approximation to the actual maximizing value of the quasi-likelihood. This strategy appeals to the common approach of defining a quasi-likelihood from estimating equations by considering the dual quadratic minimization problem (see, for example, <cit.>).
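The closed forms above are straightforward to compute directly. The following self-contained R sketch (ours, purely illustrative; all object names are our own) simulates a one-stage example and evaluates ψ̂ and Q(ψ̂):

# Illustration (ours): closed-form linear-case G-estimate and its
# quasi-likelihood Q(psi) = psi' m - 0.5 psi' M psi. True blip here is
# a * (0.5 - 0.5 * x); the treatment model is correctly specified.
set.seed(1)
n  <- 500
x  <- rnorm(n)
a  <- rbinom(n, 1, plogis(x))
y  <- x + a * (0.5 - 0.5 * x) + rnorm(n)
hb <- cbind(1, x); hp <- cbind(1, x)
Ea <- fitted(glm(a ~ x, family = binomial))  # treatment model E[A | h]
D  <- a - Ea                                 # treatment residuals
Hb <- hb %*% solve(crossprod(hb), t(hb))     # hat matrix for h_beta
W  <- D * (diag(n) - Hb)                     # W = D'(I - hat(h_beta))
M  <- t(hp) %*% W %*% (a * hp)
m  <- t(hp) %*% W %*% y
psi_hat <- solve(M, m)                       # approx. (0.5, -0.5)
Q <- function(psi) drop(t(psi) %*% m - 0.5 * t(psi) %*% M %*% psi)
Q(psi_hat)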
§.§ The quasi-likelihood information criterion

We now address selection of the blip model. Based on the preceding derived quasi-likelihoods, we propose an information criterion whose general form builds on standard likelihood theory, where the Kullback-Leibler divergence between a proposed model and the true, data-generating model is minimized. Under standard regularity conditions on the quasi-likelihood function Q(.), inference proceeds in the usual way for misspecified models.

Let f(y) denote the true, data-generating distribution, and let γ(y;ψ_(m)) denote a proposed blip model which, combined with treatment and treatment-free models, fully specifies the G-estimating quasi-likelihood, Q(y;ψ_(m)), and the corresponding density f_m(y) ≡ f(y;ψ_(m)). The proposed blip model is taken from a class of candidate models, ℳ(m) = {γ(y;ψ_(m)) | ψ_(m) ∈ Ψ(m)}, with fitted models γ(y;ψ̂_(m)). The divergence between f(y) and f_m(y) estimated using the observed data and ψ̂_(m) is given (up to an additive constant) by δ(ψ̂_(m)) = E[-2Q(Y;ψ)]|_ψ_(m) = ψ̂_(m), computed with ψ̂_(m) fixed, and the expected divergence is given by Δ(m) = E[δ(ψ̂_(m))]; in the latter expression, the expectation is over the distribution of the estimator ψ̂_(m). All expectations are taken with respect to the true distribution f(y) by considering independent copies of the data.

Let ψ_(m,*) = argmin_ψ_(m)∈Ψ(m) E[-2Q(Y;ψ_(m))]. If the true blip function is parametric and contained in ℳ(m), then ψ_(m,*) is the "true" parameter; if the set of candidate models does not contain the true blip, then ψ_(m,*) is the value such that Q(y;ψ_(m,*)) provides the best approximation to f(y) in the sense of minimizing the expected Kullback-Leibler divergence. Under standard regularity conditions on Q(.), we have that ψ̂_(m) is consistent for ψ_(m,*), and

√(n) (ψ̂_(m) - ψ_(m,*)) d⟶ Normal(0,𝒱(ψ_(m,*))),

where 𝒱 is a positive definite matrix given by 𝒱(ψ) = ℐ(ψ)^-1𝒥(ψ)ℐ(ψ)^-1, in which, for ψ' ∈ 𝒩, an open neighborhood of ψ_(m,*),

ℐ(ψ') = E[-∂^2 Q_1(ψ)/∂ψ∂ψ^⊤]|_ψ = ψ',   𝒥(ψ') = E[{∂ Q_1(ψ)/∂ψ}{∂ Q_1(ψ)/∂ψ}^⊤]|_ψ = ψ',

and

∂ Q_1(ψ)/∂ψ = D_1 A_1 h_ψ 1 (Y_1^* - μ_1^*(ψ))

is the G-estimating function inspired by (<ref>) for the first data point.

Theorem: Suppose that Q(ψ) is twice continuously differentiable with bounded expectation of its second derivative in an open neighborhood 𝒩 of ψ_(m,*). Then, under the stable unit treatment value and no unmeasured confounding assumptions (detailed in Subsection 2.1), the expected divergence Δ(m) can be approximated as

Δ(m) = E[-2Q(ψ_(m,*))] + 2 tr{𝒥(ψ_(m,*)) ℐ(ψ_(m,*))^-1} + o(1),

which is consistently estimated by

QIC_G(m) = -2Q(ψ̂_(m)) + 2 tr{J(ψ̂_(m)) I(ψ̂_(m))^-1},

where I(.) and J(.) are the observed (empirical) versions of ℐ and 𝒥. Thus, the model selection procedure that chooses a model by minimizing QIC_G(m) across ℳ(m) identifies the model that minimizes Δ(m) with probability 1 as n ⟶ ∞.

Our developments closely follow the derivations by <cit.> and <cit.>. First, and taking all subsequent expectations with respect to the data generating distribution, we define the discrepancy between a candidate blip model (with parameter estimates ψ̂) and the true model (parameterized by ψ_*) as Δ(ψ̂,ψ_*) = E[δ(ψ̂,ψ_*)], where δ(ψ̂,ψ_*) = E[-2Q(ψ)]|_ψ = ψ̂. Now, since E[-∂ Q(ψ)/∂ψ|_ψ = ψ_*] = 0 and, by the above, E[-∂^2 Q(ψ)/∂ψ∂ψ^⊤|_ψ = ψ_*] is positive semidefinite, we have Δ(ψ̂,ψ_*) ≥ Δ(ψ_*,ψ_*), with equality if and only if ψ̂ = ψ_*.
We therefore aim to choose the model which minimizes Δ(ψ̂,ψ_*), and as such wish to derive an estimate of this quantity for our model selection criterion. We do so via the following:

Theorem: Assume Q(ψ) is twice continuously differentiable with bounded expectation of its second derivative near ψ_*. Then, under the stable unit treatment value and no unmeasured confounding assumptions (detailed in Subsection 2.1), we have

Δ(ψ̂,ψ_*) = Δ(ψ_*,ψ_*) + 2tr{nV(ψ̂)H(ψ_*)} + o(1),

where H(ψ_*) = -(1/n)E[∂^2Q(ψ_*)/∂ψ_* ∂ψ_*^⊤].

Proof: see Supplementary Material.

This result gives rise to our quasi-likelihood information criterion, which, in terms of the estimator of the asymptotic variance 𝒱, V(ψ̂) = n I(ψ̂)^-1J(ψ̂) I(ψ̂)^-1, may be written

QIC_G = -2Q(ψ̂) + 2tr{I(ψ̂) V(ψ̂)}.

In their derivation of a related criterion, <cit.> use a direct sandwich estimator for 𝒱. However, while this allows a slight simplification of expression (<ref>), we note that this approach fails to accommodate all sources of uncertainty. The estimation of the parameters α of the treatment model at each interval should be acknowledged, and as we move through stages recursively the estimation of all previous parameters should be similarly accommodated; this is achieved through the application of Taylor expansions to the estimating function U(ψ) <cit.>, although in our experience such corrections make little difference to the resulting variance estimates. Our derivation of the QIC also differs from that of <cit.> in two substantial ways. First, the estimation of β̂ is corrected for automatically by its substitution in the estimation of ψ̂, that is, using the implicit forms β(ψ) in the linear model or the IRLS recursion for the log-linear model. That is, we do not estimate the treatment-free model parameters β in a separate calculation using only the untreated individuals. Secondly, our derivation of the quasi-likelihood matches that of <cit.> in the linear case; however, their approach cannot be extended to the log-linear case.

The form of (<ref>) is typical of information criterion-style approaches <cit.>, and writing K = tr{I(ψ̂) V(ψ̂)} we may present it as QIC_G = -2Q(ψ̂) + 2K to more clearly evoke this similarity. This criterion may be applied at each stage of the G-estimation process, with the blip model returning the lowest criterion value being recommended, as in a more typical analysis. Note, however, that it is necessary to assume that at all but the first stage of treatment an at-worst overspecified blip model is contained within the set of candidate models, as otherwise poor parameter estimation can have a cumulative effect. Similarly, we must assume that at least one of the treatment or treatment-free models is correctly specified, so that the resulting blip parameter estimators are consistent. These assumptions are necessary for any recursive procedure.

The above theory extends to the case of continuous treatments. The primary complication is that the blip function is extended to include a quadratic treatment term, so that the optimal treatment at any given stage may lie inside the range of possible values it may take <cit.>. After this modification, we can proceed to define an equivalent quasi-likelihood (and quasi-likelihood information criterion) at each stage of treatment.
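To illustrate how the criterion can be assembled in practice, we can extend the linear-case sketch above (ours; plug-in empirical I and J matrices are used, and the corrections for treatment-model and earlier-stage estimation discussed above are omitted for brevity):

# Continuing the illustrative snippet above (ours): plug-in QIC_G with
# I = h_psi' W A h_psi (= M) and J = sum of per-subject score outer
# products, so QIC_G = -2 Q(psi_hat) + 2 tr(J I^{-1}).
r    <- drop((diag(n) - Hb) %*% (y - a * drop(hp %*% psi_hat)))
S    <- (D * r) * hp               # per-subject score contributions
Jhat <- crossprod(S)               # sum_i s_i s_i'
Ihat <- M
QICG <- -2 * Q(psi_hat) + 2 * sum(diag(Jhat %*% solve(Ihat)))
QICG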
Full details are included in the Supplementary Material.

§ ANALYSIS

In this section, we first use simulations to demonstrate the IRLS approach to G-estimation for a count outcome and to demonstrate the performance of the QIC_G in a continuous outcome scenario. We then proceed to apply both IRLS and the quasi-likelihood information criterion to an empirical analysis, performing analyses which treat the (discrete) outcome as either continuous or as a count.

§.§ Simulation study: IRLS for a log-linear SNMM

First, we present an illustration of the IRLS algorithm for G-estimation in the case of a log-linear outcome model. Simulating a two-stage example, we generate data as follows:

* stage 1 patient information: X_1 ∼ N(0,1);
* stage 1 treatment: a_1 ∈{0,1}, P(A_1 = 1 | h_1) = expit(x_1);
* stage 2 patient information: X_2 ∼ N(a_1,1);
* stage 2 treatment: a_2 ∈{0,1}, P(A_2 = 1 | h_2) = expit(x_2);
* stage j blip: γ_j(a_j,x_j) = a_j(ψ_j0 + ψ_j1 x_j) such that a_j^opt = 1_{ψ_j0 + ψ_j1 x_j > 0};
* outcome: P(Y = k) = λ^k e^-λ/k!, with λ = exp[β_0 + log(|x_1|) - ∑_j=1^2[γ_j(a_j^opt,x_j) - γ_j(a_j,x_j)]].

For all simulations we set (ψ_j0,ψ_j1) = (0.5,-0.5), j = 1, 2. As described in section <ref>, one concern in extending G-estimation to discrete outcomes is that of zero values in the response, and the effect they can have on the stage-specific pseudo-outcomes. Replacing the 0 with 0.001 when computing the pseudo-outcome, we investigate the performance of the algorithm in three sets of simulations with varying values for β_0, chosen to yield outcomes with approximately 5%, 10% and 20% zeros. Our analyses correctly specified the treatment model, but mis-specified the treatment-free model, supposing it was linear in x_1 in contrast to the true log(|x_1|) term.

Initial simulation runs had unacceptably high rates of failure to converge. To address this we adjusted the IRLS algorithm detailed above slightly, introducing step-halving whereby the initial parameter estimates at each iteration were the mean of the estimates from the previous two iterations, and ignoring the termination condition dependent on |μ^(t) - μ^(t-1)|, instead terminating only when |η^(t) - η^(t-1)| dropped below a given tolerance. This reduced convergence failure rates to 3% or lower.

We generated 1000 simulated datasets per setup, setting the tolerance ϵ_η to 0.001 and limiting the number of iterations at each stage to 1000. Exploratory analyses of smaller simulation runs with lower tolerances and larger iteration limits did not yield substantially different results for parameter estimates or failure rates. Results are summarized below (Table <ref>), where stage 1 estimates for the covariate-by-treatment interaction are slightly (though not statistically significantly) biased in small samples. Bias does not appear to be related to the probability of zero-outcomes, although standard errors appear to increase with it.

Table: Mean blip parameter estimates from the log-linear simulations (true values ψ_j0 = 0.5, ψ_j1 = -0.5).

n     P(Y = 0)   ψ_10    ψ_11     ψ_20    ψ_21
100   5%         0.503   -0.427   0.497   -0.475
100   10%        0.501   -0.428   0.496   -0.470
100   20%        0.520   -0.419   0.504   -0.483
200   5%         0.502   -0.472   0.500   -0.483
200   10%        0.503   -0.472   0.497   -0.483
200   20%        0.510   -0.471   0.499   -0.483
500   5%         0.499   -0.485   0.500   -0.495
500   10%        0.497   -0.486   0.501   -0.494
500   20%        0.502   -0.485   0.498   -0.494

§.§ Simulation study: QIC_G

Next, we demonstrate the use of QIC_G in the DTR framework with a variety of simulated two-stage examples from the continuous outcome setting (we present results for the discrete-outcome setting in the Supplementary Material).
We generate data as follows:* stage 1 patient information: X_1k∼ N(0,1) for k = 1, 2, 3;* stage 1 treatment: a_1 ∈{0,1}, P(A_1 = 1 | h_1) = expit(x_11+x_12+x_13);* stage 2 patient information: X_2k∼ N(a_1,1) for k = 1, 2, 3;* stage 2 treatment: a_2 ∈{0,1}, P(A_2 = 1 | h_2) = expit(x_21+x_22+x_23);* stage j blip: γ_j(a_j,h_j) = a_j(1 + ψ_j1 x_j1 + ψ_j2 x_j2 + ψ_j3 x_j3) such that a_j^opt = 1_{1 + ψ_j1 x_j1 + ψ_j2 x_j2 + ψ_j3 x_j3 > 0};* outcome: Y = -∑_j=1^2[γ_j(a_j^opt,h_j) - γ_j(a_j,h_j)] + ϵ, with ϵ∼log-normal(0,1) - e^0.5; where expit(x) = [1+exp(-x)]^-1 is the expit or inverse-logit function. We have used skewed errors in our generation of the outcome (centralized to have mean zero) to better illustrate the potential benefits of the QIC_G approach. Results using normal errors are included in the Supplementary Material for reference. In our first analyses, we consider datasets of size n = 50, 100 and 200, and set the blip parameters to (ψ_j1,ψ_j2,ψ_j3) = (1,0,0), (1,1,0) or (1,1,1) giving a range of models including one, two, or all three variables at each stage.We conducted a G-estimation analysis of 1000 simulated datasets considering eight different blip models corresponding to each of the possible combinations of the predictors at each stage (that is, using none, one, two or all three). The treatment models were always correctly specified (and modeled using logistic regression), while the treatment-free models E[G_j (ψ_j)|h_β j;β_j] were linear with covariates (1,x_11,x_12,x_13) at stage 1 and (1,x_11,x_12,x_13,a_1x_11,a_1x_12,a_1x_13,x_21,x_22,x_23) at stage 2 (i.e. using all available covariates).Using QIC_G, we performed forward and backward selection within the set of candidate models as in a standard AIC-type stepwise analysis. For comparison we also conducted forward and backward selection based on Wald test p-values at a 0.05 significance level. For these initial simulations, stage 1 results are based on analysis carried out following fitting of the correct model at stage 2. Results (Table <ref>) indicate QIC_G outperforms the Wald-type approaches except for the smallest true models (although even then it largely remains competitive). Furthermore, while QIC_G shows a slight tendency to overfit, the Wald-type approaches show considerably more bias towards underfitting. In addition, we note a slightly greater consistency between the forward and backward QIC_G results than between the Wald test results, suggesting QIC_G may be more robust to choice of selection direction.In these multi-stage simulations, all methods perform better in selection of the first stage blip model rather than the second due to the first stage being a simpler model fitting problem: at stage 2 both first and second stage covariates affect analysis. We note however that the recursive nature of the G-estimation approach means that the analysis of the first stage happens subsequent to analysis of the second stage, using estimates obtained from that stage which we might expect to affect stage 1 model assessment. It seems that any impact this aspect of the estimation process might have is small compared to the inherent simplicity that earlier stages involve fewer covariates.Moreover, in the results for stage 1 model assessment we have ignored the problem of stage 2 model selection, and instead fixed the stage 2 model as the correct one. While in reality this is an implicit assumption we must make, it seems prudent to investigate what impact this may have on model selection. 
We conduct analyses identical to those above (limited to n = 100) but with two other approaches to model selection. In addition to the `best-case' scenario of selecting the correct stage 2 model, we also investigate the consequences of choosing the model recommended by the model selection procedures themselves, and a `worst-case' scenario where intercept-only models are used (Table <ref>). These non-optimal approaches do result in a drop in performance, but this is not particularly dramatic (and not statistically significant) even in the worst-case scenario. We note, however, that in more complex setups it is likely mis-specification of the stage 2 models could have more dramatic consequences for stage 1 model selection. We also investigated the impact of other aspects of our data generation setup. Weaker effect sizes (simulated by setting the blip parameters to 0.1 and 0.5 instead of 1 as above) predictably resulted in lower success rates across all the methods under consideration. However, the quasi-likelihood approach was much more resilient to these effects. Introducing a correlation structure among the non-treatment covariates also resulted in worse model selection, but again the quasi-likelihood approach appeared to be slightly more robust to these changes. Results of these additional scenarios are included in the Supplementary Material.These results were aggregated across the various model setups at each stage to afford greater simplicity in the presentation of our results. For example, the results corresponding to the stage 1 blip model which only included x_11 were taken from simulations across the three different stage 2 blip models. Non-aggregated results (see Supplementary Material) show little evidence of stage 2 model complexity affecting stage 1 selection or vice-versa. §.§ Investigating the trace term We have framed our QIC_G in terms of the quasi-likelihood and a `penalty term' defined in terms of the trace K = tr{I(ψ̂)V(ψ̂)}. In the likelihood-based setting it is known that this trace term may, if the model under consideration is a good approximation of the truth, be approximated by the dimension of that model <cit.>. Here, the quasi-likelihood is grounded in estimating equation theory and so, if the error terms in the outcome generating model were normally distributed, we might expect a similar result. This is complicated by our estimation of two other models besides the blip, as well as the recursive multi-stage nature of the G-estimation framework.We summarize estimates of the term K from our first set of simulations above with n = 100 in a figure in the supplement, with the modification that we generate the error term ϵ in the outcome from a standard normal distribution rather than log-normal. In general, the estimates appear to be similar to the dimension of the corresponding model, particularly at the second stage, when the correct model was used. If two models of the same dimension are compared then on average an incorrect model will result in a slightly larger trace term than that from a correct model. Stage 1 estimates of the trace in general seem to be slightly lower. These results assume correct specification of the treatment model, ensuring consistency of our estimators as per the double-robustness property of G-estimation. We have found (results omitted) that when both treatment and treatment-free models are mis-specified the resulting trace terms can be much larger. 
In addition, if the distribution of the error term ϵ is skewed, as in the previously reported results, then the trace term is again much larger. This suggests the possibility of comparing the trace term with candidate model dimension to investigate the validity of the treatment and treatment-free models, following use of residual plots (or similar techniques) to assess the normality of the residuals. We note finally that the use of bootstrap procedures to estimate the trace (or penalty) term tr{I(ψ̂)V(ψ̂)} has been recommended in preference to the use of termwise plug-in versions of the matrices I(.) and J(.) (see <cit.>). We have not investigated this possibility in our analysis as the plug-in procedure appears to work well in our examples, and due to the additional computational burden. Bootstrap procedures are valid for inference in the regular G-estimation setting (see for example <cit.>), and are straightforward, although computationally expensive, to implement.

§.§ Sequenced Treatment Alternatives to Relieve Depression study

We now illustrate application of our proposed IRLS and QIC_G approaches to real data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study. STAR*D was a multi-stage randomized controlled trial designed to compare different treatment regimes for patients with major depressive disorder <cit.>. The study was split into 4 levels (one of which was itself split into two sub-levels), with patients receiving a different treatment or combination of treatments within each level. At study entry (level 1) patients were prescribed citalopram and followed up at regularly scheduled clinic visits. Those whose depression did not enter remission - defined as a Quick Inventory of Depressive Symptomatology (QIDS) score less than or equal to 5 - could proceed to a second level of treatment where seven treatment options were available. The second level of treatment was characterized by `switching' from citalopram to one of four new treatments, or `augmenting' the current treatment by receiving citalopram alongside one of three other treatment options. Patients who received cognitive therapy at level 2 (either alone or combined with citalopram) were eligible to enter the sublevel 2A where they received one of the treatments available at level 2. All patients without remission could then proceed to level 3 (and, if their depression persisted, a further level 4) where again their previous treatment was either switched to or augmented with a number of options. Full details of the study design and treatment options are described elsewhere <cit.>.

An important aspect of the study is that patients were asked for their treatment preference and would then be randomized to one of the treatment options consistent with their preference. This is typically characterized as patients choosing to `switch' from their current treatment to, or `augment' it with, a different one, although the reality was slightly more complex. In particular, when moving from level 1 to level 2, patients were asked about their preference for switching to, or augmenting their current treatment with, cognitive therapy separately from their preference for switching to, or augmenting with, a pharmacological (i.e., drug-based) treatment.

We conduct an analysis following <cit.>, who investigated the dichotomy between treatments that were, or included, a selective serotonin reuptake inhibitor (SSRI) and those that did not.
We restrict attention to two stages of the study and consider level 2 (including level 2A) and level 3 as our first and second stages of treatment, respectively. Treatment at each stage was coded as 1 if an SSRI was received, either alone or in combination, with level 2A treatments (both of which were non-SSRI) combined with level 2 treatments for this purpose. Treatment was coded as 0 if no SSRI was received throughout a stage. Of 1,027 total patients, only 273 entered level 3. Our outcome is defined as negative QIDS score at end of treatment (i.e., at the end of stage 2 if a patient entered level 3, and at the end of stage 1 otherwise). By taking the negative, larger values are preferred, and we therefore seek a DTR that maximizes this outcome. We pursue an analysis analogous to those undertaken by previous authors, viewing QIDS score as a continuous outcome, and then use our new IRLS-based approach and the associated QIC_G to apply a log-linear model.We consider the following tailoring variables: QIDS score measured at the start of the corresponding level (denoted q_j for stage j), the change in QIDS score divided by time across the previous level (QIDS slope, denoted s_j), and patient preference prior to receiving treatment (p_j). Patient preference is binary and coded as 1 if the patient rejected all treatments consistent with switching to a different pharmacological treatment, and 0 otherwise. The assumed treatment model at each stage was fit by logistic regression of observed treatment on preference only, while the treatment-free models are specified as:* stage 1: E[G_1 (ψ_1)|h_β 1;β_1] = β_10 + q_1β_11 + s_1β_12 + p_1 β_13; and* stage 2: E[G_2 (ψ_2)|h_β 2;β_2] = β_20 + q_2β_21 + s_2β_22 + p_2β_23 + a_1 β_24. We consider the `full' blip models* stage 1: γ_1(h_ψ 1,a_1;ψ_1) = a_1(ψ_10 + q_1ψ_11 + s_1ψ_12 + p_1ψ_13); and* stage 2: γ_2(h_ψ 2,a_2;ψ_2) = a_2(ψ_20 + q_2ψ_21 + s_2ψ_22), and investigate all sub-models (every covariate combination from the full models, including intercept-only models). There are eight candidate models at stage 1, and four at stage 2.Rather than proceed directly to a stepwise procedure, we instead fit all possible models as dimensionality was low. G-estimation was therefore first performed four times: once for each of the candidate stage 2 blip models, with an intercept-only blip model specified at stage 1 (this choice not affecting stage 2 analysis). The lowest value of QIC_G was found when the intercept-only model was used, suggesting it is the best choice of stage 2 blip model (based on either a forward or backward selection procedure). This result was reinforced by application of both Wald-type approaches of the previous section.We then repeated the analysis for each of the eight candidate stage 1 blip models using the recommended stage 2 blip model, and found that the model containing stage 1 preference only returned the lowest QIC_G (and would be recommended by either a forward or backward procedure). The Wald-type approaches also both recommended the preference-only model.Our log-linear analyses were based on a similar setup with treatment-free and blip models exponentiated, adding 27 to all outcome measures to ensure they were positive. Applying G-estimation via IRLS and computing QIC_G for every blip model we found the same models recommended at each stage. 
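As an illustration of how this all-subsets comparison might be organized (our sketch only; gest_qicg is a hypothetical wrapper that runs G-estimation with the given stage 1 blip covariates and returns the corresponding QIC_G):

# Illustrative all-subsets loop over candidate stage 1 blip models (ours).
# gest_qicg() is hypothetical; q1, s1, p1 are baseline QIDS, QIDS slope,
# and patient preference.
vars <- c("q1", "s1", "p1")
subsets <- c(list(character(0)),   # intercept-only blip model
             unlist(lapply(1:3, function(k) combn(vars, k, simplify = FALSE)),
                    recursive = FALSE))
qic <- vapply(subsets, function(v) gest_qicg(stage = 1, blip_covs = v),
              numeric(1))
subsets[[which.min(qic)]]  # preference only, in the analysis reported here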
We observe that in all analyses the same model was recommended at stage 1 regardless of which stage 2 model was chosen. Overall, these results are broadly consistent with the analysis of Chakraborty et al. who, despite using the somewhat different approach of Q-learning, found that no stage 2 blip covariates were statistically significant, while stage 1 preference alone was significant in the stage 1 blip model. It is encouraging that QIC_G for both modeling setups indicated the preference term should be included. We also find that at both stages both estimation processes predict the same optimal treatment for every patient.

Finally, we can compare observed outcomes among patients based on how consistent their observed treatments were with the optimal ones recommended by our models. Among patients who entered stage 2, mean improvement in QIDS score among those who received optimal treatment at both stages, one stage, or no stages was, respectively, 4.67 (sd = 3.92, n = 15), 3.80 (sd = 6.08, n = 75), and 2.56 (sd = 5.19, n = 183). Among patients who did not enter stage 2, mean improvement was 6.13 (sd = 4.72, n = 145) for those who received optimal treatment, and 5.49 (sd = 4.84, n = 609) for those who did not.

§ DISCUSSION

Personalized medicine and the development of dynamic treatment regimes are an important frontier in biostatistical research. This has been reflected in a rapidly expanding literature focusing on DTR estimation techniques, but more practical concerns have received comparatively little attention. In this paper we have presented two extensions to the G-estimation framework. First, we have demonstrated how G-estimation may be applied for log-linear models via iteratively-reweighted least squares; this provides a relatively straightforward route to the use of G-estimation in a greater variety of contexts. Further, we have presented an approach to model selection for SNMMs within the DTR framework. By demonstrating how G-estimation in its typical application (for continuous outcomes) may be reduced to a relatively simple form, we have derived a quasi-likelihood for each stage of a multi-stage, recursive analysis process. We have then extended the work of <cit.> and <cit.> to derive a general quasi-likelihood information criterion for DTR estimation using G-estimation. Furthermore, while we have focused on the binary treatment setting, the theory extends to the case of a continuous treatment, dramatically increasing its applicability.

Our simulations involving log-linear SNMMs indicate how G-estimation may be implemented for discrete outcomes with relative ease, especially if extant IRLS routines in standard software packages can be used for this purpose. We note that there may be further room to improve on the approximation used to handle the zero-outcomes, which at present are set to a very small number only when the non-optimal treatment was received. This concern extends to the binary outcome setting, where estimation is even more problematic: the `blip' cannot be separated from the treatment-free component of the mean using a logit transformation, and the use of a log-linear model only provides reasonable (unbiased) estimators in a small range of settings. The question of how to construct pseudo-outcomes that better address this issue is an avenue for further research.
Through simulation studies we have shown that our quasi-likelihood information criterion performs as well as or better than simpler Wald-type approaches for continuous outcomes, particularly when sample or effect sizes are small, or there is correlation between candidate covariates. In addition, we found greater agreement between the forward and backward stepwise approaches when using QIC_G than with the Wald-type approach, a potentially attractive feature in practice. We note, however, that QIC_G does seem to overfit, and as such slight modifications may lead to more balanced results. We have experimented with ad hoc corrections inspired by the Bayesian Information Criterion and the corrected AIC, which have yielded promising results. Moreover, while one may often argue that, for the purposes of an explanatory analysis, overfitting is a less serious error than underfitting, this preference is perhaps even more justified in a multi-stage setting, where mis-specification by underfitting can have more severe knock-on effects on the analysis. When the outcome was generated using normal errors, rather than the skewed log-normal errors discussed here, the Wald-type approaches become more competitive, but still badly underfit (full results are included in the Supplementary Material). As in any simulation-based analysis we appreciate that the results presented here cannot possibly be comprehensive, and so would encourage further experimentation (and analysis) to assess the properties of these various criteria. Of particular interest is a more extensive investigation of the QIC_G for discrete outcomes, with our preliminary analyses providing encouraging results.

The trace term, K, warrants additional research. In our simulations we observed that when errors were normally distributed, and the blip model correctly specified, K was approximately equal to the dimension of the model (as is the case in the likelihood-based setting). This was not generally the case, however, especially when both the treatment and treatment-free models were badly mis-specified or errors were non-normal. From a practical perspective, the quasi-likelihood criteria presented may tend to underfit when models are mis-specified (as the corresponding penalty term is large), but if the researcher has doubts about the legitimacy of parameter estimators (due to mis-specification of both the treatment and treatment-free models), then model selection is a secondary concern. In our analysis of the STAR*D data we found that for the various candidate models, the trace term was somewhat larger than the dimension of the associated blip function, while residual plots from the proposed models were consistent with normal errors. This could indicate an inadequacy in the treatment or treatment-free models which might merit further investigation.

Our QIC formulation has focused exclusively on selection of the blip, or contrast, component of the outcome mean model.
However, the approach can easily be adapted to select the entire mean model (the contrast and treatment-free models simultaneously), allowing the use of our information criterion for G-estimation of static treatment sequences or G-estimation for mediation, as well as in binary outcome settings.

§ SUPPLEMENTARY MATERIAL Appendix: Appendix containing additional theory, and simulation results. R code: R code for the simulation studies of section <ref> and the Appendix.

Appendix: Supplementary Material for Manuscript titled “Generalized G-estimation and Model Selection”

§ THE DERIVATION OF THE QUASI-LIKELIHOOD For random variable Y, a quasi-likelihood may be deduced from the unbiased estimating function

U(y) = (y-μ)/V(μ)

by integrating with respect to μ ≡ E[Y]:

Q(y) = ∫ (y-μ)/V(μ) dμ .

We consider a one-interval quasi-likelihood for simplicity. In the context of the structural mean model of interest, we have that the linear predictor η is given by η = h_ββ + a h_ψψ. In the linear model, η ≡ μ, and we can consider the reduced form of the model and the G-estimating function

U(ψ) = (Dh_ψ)^⊤(𝐈_n - ĥ_β)(y - Ah_ψψ) = h_ψ^⊤W(y - Ah_ψψ) ,

say, where D and A are the diagonal matrices with (i,i) elements a_i - E[A_i|h] and a_i respectively, ĥ_β is the hat matrix ĥ_β = h_β(h_β^⊤h_β)^-1h_β^⊤, and W = D^⊤(𝐈_n - ĥ_β). This is an unbiased estimating equation based on y^' = Wy, where E[Y^'] = WAh_ψψ = μ^'.

§ QUASI-LIKELIHOOD FOR CONTINUOUS TREATMENTS In the binary treatment setting we showed that our estimating equations U(ψ) may be reduced to

U(ψ) = (Dh_ψ)^⊤[(𝐈_n - ĥ_β)(ỹ - Ah_ψψ)] .

We now extend to the case of a continuous treatment and include a quadratic term in our blip such that

γ(h_ψ,a;ψ_1,ψ_2) = ah_ψ_1ψ_1 + a^2 h_ψ_2ψ_2 ,

where we have compartmentalized our blip parameters ψ = (ψ_1,ψ_2) and history design matrix h_ψ = (h_ψ_1,h_ψ_2) depending on whether they are associated with the linear or quadratic term of a in our blip. Writing D_1 and D_2 for the diagonal matrices with (i,i)^th entry a_i - E[A_i|H_i] and a_i^2 - E[A_i^2|H_i], respectively, our estimating equations become <cit.>

U(ψ) = ( [ D_1 h_ψ_1; D_2 h_ψ_2 ])^⊤[(𝐈_n - ĥ_β)(ỹ - Ah_ψ_1ψ_1 - A^2 h_ψ_2ψ_2)] ,

which yields a quasi-likelihood of the form

Q(ψ) = ψ^⊤( [ D_1 h_ψ_1; D_2 h_ψ_2 ])^⊤[(𝐈_n - ĥ_β)ỹ] - 1/2 ψ^⊤( [ D_1 h_ψ_1; D_2 h_ψ_2 ])^⊤( [ Ah_ψ_1 , A^2 h_ψ_2 ])^⊤ψ = ψ^⊤m_c - 1/2 ψ^⊤M_c ψ ,

where m_c and M_c may be thought of as the continuous treatment analogs to m and M derived in the main paper for the binary case.

§ PROOF OF DISCREPANCY THEOREM Recall the theorem from section 2.4 of the associated paper:

Theorem: Suppose that Q(ψ) is twice continuously differentiable with bounded expectation of its second derivative in a neighbourhood 𝒩 of ψ_(m,*). Then, under the stable unit treatment value and no unmeasured confounding assumptions (detailed in section 2.1), the expected divergence Δ(m) can be approximated

Δ(m) = E[-2Q(ψ_(m,*))] + 2 tr{𝒥(ψ_(m,*)) ℐ(ψ_(m,*))^-1} + o(1) ,

which is consistently estimated by

QIC_G(m) = Δ̂(m) = -2Q(ψ̂_(m)) + 2 tr{J(ψ̂_(m)) I(ψ̂_(m))^-1} ,

where I(.) and J(.) are the observed (empirical) versions of ℐ and 𝒥.
Thus, the model selection procedure that chooses a model by minimizing QIC_G(m) across ℳ(m) identifies the model that minimizes Δ(m) with probability 1 as n ⟶∞.

Proof: Following <cit.> – see also <cit.> – our quasi-likelihood information criterion (QIC) is based on an estimate of Δ(m), a function of the Kullback-Leibler discrepancy between the data generating model and model m. Consider a decomposition of Δ(m) = E[δ(ψ̂_(m))], where δ(ψ̂_(m)) = E[-2Q(Y;ψ)]|_ψ = ψ̂_(m), given by

Δ(m) = { E[δ(ψ̂_(m))] - E[-2Q(ψ_(m,*))] } + { E[-2Q(ψ_(m,*))] - E[-2Q(ψ̂_(m))] } + E[-2Q(ψ̂_(m))] .

We consider first an expansion of Q(.) around the “true" blip parameter for model m, ψ_(m,*): we have that

Q(ψ̂_(m)) = Q(ψ_(m,*)) + Q̇(ψ_(m,*))(ψ̂_(m)-ψ_(m,*)) + 1/2 (ψ̂_(m)-ψ_(m,*))^⊤Q̈(ψ_(m,*))(ψ̂_(m)-ψ_(m,*)) + o_p(1) ,

so taking expectations with respect to the data generating model with ψ̂_(m) fixed, we have

E[-2Q(ψ̂_(m))] = E[-2Q(ψ_(m,*))] - (ψ̂_(m)-ψ_(m,*))^⊤ℐ(ψ_(m,*))(ψ̂_(m)-ψ_(m,*)) + o(1) ,

where, recall, ℐ(ψ^') = E[-∂^2 Q_1(ψ)/∂ψ∂ψ^⊤]|_ψ=ψ^', and hence

E[-2Q(ψ_(m,*))] - E[-2Q(ψ̂_(m))] = (ψ̂_(m)-ψ_(m,*))^⊤ℐ(ψ_(m,*))(ψ̂_(m)-ψ_(m,*)) + o(1) .

Using the conventional theory of misspecified models, we have that

√(n)(ψ̂_(m)-ψ_(m,*)) = {ℐ(ψ_(m,*))}^-1 × 1/√(n) ∑_i=1^n Q̇(Y_i;ψ_(m,*)) + o_p(1) ,

where, by construction of the quasi-likelihood function, we have Q̇(y;ψ) ≡ U(y;ψ). Hence

√(n)(ψ̂_(m) - ψ_(m,*)) d⟶ Normal(0, ℐ(ψ_(m,*))^-1 𝒥(ψ_(m,*)) ℐ(ψ_(m,*))^-1)

as n ⟶∞, where 𝒥(ψ^') = E[{∂ Q(ψ)/∂ψ}{∂ Q(ψ)/∂ψ}^⊤]|_ψ=ψ^', and

I(ψ^') = -1/n ∂^2 Q(ψ)/∂ψ∂ψ^⊤|_ψ=ψ^' ,  J(ψ^') = 1/n [∂ Q(ψ)/∂ψ {∂ Q(ψ)/∂ψ}^⊤]_ψ=ψ^' .

By standard convergence results

(ψ̂_(m) - ψ_(m,*))^⊤I(ψ_(m,*))(ψ̂_(m) - ψ_(m,*)) = (ψ̂_(m) - ψ_(m,*))^⊤ℐ(ψ_(m,*))(ψ̂_(m) - ψ_(m,*)) + o_p(1) ,

and by a standard result for quadratic forms

E[(ψ̂_(m) - ψ_(m,*))^⊤ℐ(ψ_(m,*))(ψ̂_(m) - ψ_(m,*))] = tr{𝒥(ψ_(m,*))[ℐ(ψ_(m,*))]^-1} + o(1) .

Under standard regularity conditions on Q(.), we have that ψ̂_(m) p⟶ ψ_(m,*) as n ⟶∞, and hence for large n, E[-2Q(ψ̂_(m))] = E[-2Q(ψ_(m,*))] + o(1), so

Δ(m) = E[-2Q(ψ_(m,*))] + E[(ψ̂_(m) - ψ_(m,*))^⊤ℐ(ψ_(m,*))(ψ̂_(m) - ψ_(m,*))] + o(1) ≡ E[-2Q(ψ_(m,*))] + 2 tr{𝒥(ψ_(m,*))[ℐ(ψ_(m,*))]^-1} + o(1) .

This completes the proof.

The asymptotic variance of the estimator ψ̂_(m) may itself be estimated by V(ψ̂_(m)) = n I(ψ̂_(m))^-1 J(ψ̂_(m)) I(ψ̂_(m))^-1.

§ INVESTIGATING THE TRACE TERM We examine in simulation whether the trace term is a good approximation to the true dimension of the underlying model when that model is fitted, as described in section 5.3 of the main paper. Figure <ref> displays estimates of the stage 1 (top) and stage 2 (bottom) trace term K from 1,000 simulations with n = 100 for different candidate models (y-axis). The panels correspond to true blip models containing an intercept term along with stage j covariates x_j1 (left), x_j1,x_j2 (middle), x_j1,x_j2,x_j3 (right), and gray boxes correspond to simulations where the true blip model was fit. In expectation, the trace term matches the dimension of the data generating model even in this relatively small sample setting.

§ SIMULATION RESULTS §.§ Discrete Outcome Next, we demonstrate the use of QIC_G in the DTR framework for a discrete outcome case.
We generate data as follows:

* stage 1 patient information: X_11 ∼ N(1,1), X_12 ∼ N(-1,1), X_13 ∼ N(1,1);
* stage 1 treatment: a_1 ∈ {0,1}, P(A_1 = 1 | h_1) = expit(x_11);
* stage 2 patient information: X_21 ∼ N(a_1,1), X_22 ∼ N(-1,1), X_23 ∼ N(1,1);
* stage 2 treatment: a_2 ∈ {0,1}, P(A_2 = 1 | h_2) = expit(x_21);
* stage j blip: γ_j(a_j,h_j) = a_j(0.5 + ψ_j1 x_j1 + ψ_j2 x_j2 + ψ_j3 x_j3) such that a_j^opt = 1 if 0.5 + ψ_j1 x_j1 + ψ_j2 x_j2 + ψ_j3 x_j3 > 0 and 0 otherwise;
* outcome: P(Y = k) = λ^k e^{-λ}/k!, with λ = exp[β_0 - ∑_j=1^2 [γ_j(a_j^opt,h_j) - γ_j(a_j,h_j)]], where we vary β_0 so that for the various ψ_jk we consider, P(Y = 0) = 0.1.

We set the blip parameters to (ψ_j1,ψ_j2,ψ_j3) = (0.5,0,0), (0.5,0.5,0) or (0.5,0.5,0.5), giving a range of models including one, two, or all three variables at each stage. Given the computational requirements for the IRLS algorithm, our simulations are somewhat more limited. We restrict ourselves to sample sizes of n = 200, and only consider the four blip models containing no covariates, x_j1 only, x_j1 and x_j2 only, or all three covariates, and choose whichever model resulted in the lowest QIC_G. As with our simulations in the main paper (Table 1), we correctly specify our treatment model, and mis-specify the treatment-free model, supposing it is linear in x_11 at stage 1 and linear in x_21 at stage 2. Results are summarized in Table <ref>, where the stage 1 results are based on analyses where the stage 2 blip models were correctly specified. The IRLS algorithm was implemented with an iteration limit of 1,000 and a tolerance limit (between successive log-mean function estimates) of 0.001.

The presented results summarize over the simulation runs where all four IRLS algorithms converged. This corresponds to all 1,000 simulated datasets for the simplest models (with only one covariate in the true blip model at each stage), 996 (stage 2) and 994 (stage 1) datasets when two covariates were included, and 904 (stage 2) and 863 (stage 1) datasets where all three were included. If we instead presume model selection is based on the lowest QIC only for those candidate blip models where IRLS converged (as might occur in a real-life analysis), results were near-identical with the slight exception of model selection for the most complex blip model at both stages. In this case, the correct model selection rate drops from 0.921 to 0.833 for stage 2, and from 0.919 to 0.877 for stage 1, with the largest candidate model most likely to result in failure.

§.§ Continuous Outcome Additional simulation results referred to in the main paper. Table <ref> contains results for varying effect sizes, Table <ref> contains results for varying correlation strength between covariates, and Table <ref> contains non-aggregated results. Tables <ref>-<ref> contain analogous results to Tables 1-2 in the main paper and Tables <ref>-<ref> in this appendix, but with standard normal errors in the generation of the outcome Y, rather than log-normal errors. In all tables bold indicates the most successful method for each scenario.
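For reference, a minimal sketch of the discrete-outcome data-generating process described above (our illustration in Python; the accompanying supplementary code is in R). Here β_0 is kept as a fixed constant rather than calibrated to P(Y = 0) = 0.1 as in the paper:

```python
import numpy as np

rng = np.random.default_rng(2017)
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

def simulate(n=200, psi=(0.5, 0.5, 0.0), beta0=1.0):
    """One simulated dataset following the design above.

    psi gives (psi_j1, psi_j2, psi_j3), used at both stages;
    beta0 is an assumed constant (the paper calibrates it instead)."""
    p1, p2, p3 = psi
    x11, x12, x13 = rng.normal(1, 1, n), rng.normal(-1, 1, n), rng.normal(1, 1, n)
    a1 = rng.binomial(1, expit(x11))                  # P(A1 = 1 | h1) = expit(x11)
    x21, x22, x23 = rng.normal(a1, 1), rng.normal(-1, 1, n), rng.normal(1, 1, n)
    a2 = rng.binomial(1, expit(x21))                  # P(A2 = 1 | h2) = expit(x21)
    blip1 = 0.5 + p1 * x11 + p2 * x12 + p3 * x13
    blip2 = 0.5 + p1 * x21 + p2 * x22 + p3 * x23
    a1opt, a2opt = (blip1 > 0).astype(int), (blip2 > 0).astype(int)
    # regret = sum_j [gamma_j(a_j_opt, h_j) - gamma_j(a_j, h_j)]
    regret = (a1opt - a1) * blip1 + (a2opt - a2) * blip2
    y = rng.poisson(np.exp(beta0 - regret))           # Poisson outcome
    return dict(x1=(x11, x12, x13), a1=a1, x2=(x21, x22, x23), a2=a2, y=y)
```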
[email protected] Paul-Drude-Institut für Festkörperelektronik, Hausvogteiplatz 5–7, 10117 Berlin, Germany We investigate the optical properties of InAs quantum dots grown by molecular beam epitaxy on GaAs(110) using Bi as a surfactant. The quantum dots are synthesized on planar GaAs(110) substrates as well as on the {110} sidewall facets of GaAs nanowires. At 10 K, neutral excitons confined in these quantum dots give rise to photoluminescence lines between 1.1 and 1.4 eV. Magneto-photoluminescence spectroscopy reveals that for small quantum dots emitting between 1.3 and 1.4 eV, the electron-hole coherence length in and perpendicular to the (110) plane is on the order of 5 and 2 nm, respectively. The quantum dot photoluminescence is linearly polarized, and both binding and antibinding biexcitons are observed, two findings that we associate with the strain in the (110) plane. This strain leads to piezoelectric fields and to a strong mixing between heavy and light hole states, and offers the possibility to tune the degree of linear polarization of the exciton photoluminescence as well as the sign of the binding energy of biexcitons. Fine structure of excitons in InAs quantum dots on GaAs(110) planar layers and nanowire facets Pierre Corfdir, Ryan B. Lewis, Lutz Geelhaar, and Oliver Brandt==============================================================================================

§ INTRODUCTION The discrete energy levels of semiconductor quantum dots (QDs) have made possible new classes of solid-state-based quantum devices such as single photon and entangled photon pair emitters<cit.> as well as all-optical logic gates.<cit.> A prerequisite for these applications is a high degree of control not only on the dimensions of the QDs, but also on their symmetry. For instance, entangled photon pairs can be produced in highly symmetrical QDs for which the anisotropic exchange splitting of the bright states of the exciton is smaller than the radiative linewidth.<cit.> This goal is difficult to achieve using QDs on GaAs(001), where the different adatom mobilities along the [110] and [11̅0] directions lead to elongated QDs whose symmetry is thus reduced from D_2d to C_2v, inducing a finite fine structure splitting of the bright exciton states.<cit.> In contrast, group-III-arsenide-based QDs grown on GaAs(111) substrates patterned with pyramidal recesses exhibit a C_3v symmetry resulting in a fine structure splitting of zero.<cit.>

The above example illustrates that the fundamental electronic and optical properties of QDs as well as their specific technological applications are intimately related to their symmetry. It is thus of considerable interest to investigate QD systems with symmetries that differ from C_2v and C_3v. Accordingly, the growth of GaAs-based QDs on high-index surfaces such as (113) and (115) has been explored.<cit.> More recently, we reported that a Bi-surfactant-induced morphological instability can enable the growth of InAs 3D islands on GaAs(110), a surface on which 3D islands do not normally form.<cit.> Due to the inequivalent [11̅0] and [001] in-plane directions, these InAs(110) QDs are of C_s<cit.> symmetry, and their optical properties are expected to differ significantly from those of C_2v and C_3v QDs.
In particular, it has been predicted that the strong in-plane piezoelectric fields in (In,Ga)As(110) QDs modify their electronic structure, and that the light emission associated with the ground state bright exciton is linearly polarized.<cit.>

Here, we use photoluminescence (PL) spectroscopy to investigate the optical properties of InAs QDs grown by molecular beam epitaxy on GaAs(110) substrates as well as on the {110} sidewall facets of GaAs nanowires. For this surface, the formation of QDs is usually inhibited since two-dimensional Frank-van der Merwe growth prevails regardless of the thickness of the strained InAs film, and strain relaxation occurs by the formation of misfit dislocations.<cit.> To induce the Stranski-Krastanov growth of three-dimensional islands, we have used Bi as a surfactant.<cit.> From magneto-PL experiments, we show that for small InAs QDs emitting between 1.3 and 1.4 eV, the electron-hole coherence length of the exciton in these QDs is on the order of 5 and 2 nm in the in- and out-of-plane directions, respectively. As a result of the low C_s symmetry of InAs(110) QDs, strain in the (110) plane leads to a strong mixing between heavy and light holes, as well as to strong piezoelectric fields. While the former is promising for the fabrication of single photon emitters with a high degree of linear polarization, the latter is of interest to obtain QDs with zero biexciton binding energy for the emission of entangled photon pairs.

Figure: (a) PL spectra acquired at 10 K from the planar InAs/GaAs(110) sample grown with (triangles) and without (squares) the Bi surfactant. The spectra have been normalized and shifted vertically for clarity. The transitions related to the GaAs substrate, the WL, and the InAs QDs are indicated on the figure. (b) PL spectrum at 4.2 K of the sample grown with a Bi surfactant. (c) Enlarged view of the transition highlighted by a red rectangle in (b). The solid line is a Gaussian fit yielding a full width at half maximum of 140 µeV.

§ EXPERIMENTAL DETAILS InAs QDs were synthesized by molecular beam epitaxy on planar GaAs(110) substrates and on the {11̅0} sidewalls of GaAs nanowires grown on Si(111) substrates. For the planar samples, about 1.5 monolayers of InAs were deposited on a 150 nm thick GaAs buffer layer and subsequently capped with 50 nm of GaAs. For the nanowire samples, an equivalent amount of InAs was deposited onto the sidewalls of GaAs nanowire cores of about 60 nm diameter, 7 µm length, and a density of 0.3 µm^-2. The nanowires were then clad by a GaAs/AlAs/GaAs multishell structure with respective thicknesses of 5/10/5 nm. For both planar and nanowire samples, a Bi flux (with a beam equivalent pressure of 2× 10^-6 mbar) was present during the InAs deposition, which was carried out at 420 °C. The presence of the Bi surfactant modifies the surface energies, inducing the formation of three-dimensional InAs islands by a process resembling Stranski-Krastanov growth. Comparison samples were also grown without the Bi flux, and in these samples QDs did not form. For the planar sample, the rotation of the substrate was stopped during InAs growth, leading to a gradient in the density of the QDs over the wafer. Further details of the growth process of the planar and nanowire samples can be found in Refs. Lewis2017a,Lewis2017b, respectively.

All PL experiments have been carried out using a Ti:Sapphire laser emitting at 790 nm as the excitation source.
For measurements at 10 K, the samples were mounted on the coldfinger of a continuous-flow He cryostat, and the laser beam was focused using microscope objectives with numerical apertures of 0.25, 0.55 or 0.7. The PL was dispersed with a monochromator equipped with a 900 lines/mm grating and detected with a liquid N_2-cooled (In,Ga)As array or a charge-coupled device. The analysis of the polarization of the PL signal was performed using a polarizer followed by a half-waveplate. For measurements at 4.2 K, the samples were kept at liquid He temperature in a confocal setup, and the laser beam was focused using a microscope objective with a numerical aperture of 0.82. The PL signal was collected using the same objective and coupled to a single mode fiber, whose core acted as a confocal hole (the core diameter of the fiber was 4.4 µm). The signal was dispersed using a 900 lines/mm grating and detected with a liquid N_2-cooled charge-coupled device. With this setup, magnetic fields B with a strength between 0 and 8 T could be applied in Faraday configuration, i. e., B||[110] and B||[111] for the planar and the nanowire samples, respectively.§ RESULTS AND DISCUSSION Figure <ref>(a) shows a PL spectrum taken at 10 K from the sample with InAs QDs on planar GaAs(110). A spectrum of a sample grown without Bi surfactant, while keeping other growth conditions the same is also displayed. Both samples exhibit emission lines centered at 1.513, 1.495 and 1.459 eV, originating from the GaAs bound exciton and the GaAs band-to-carbon transition along with its phonon replica, respectively. For both samples, the intensities of these lines are almost independent of the position of the excitation spot. For the sample grown without the Bi surfactant, a strong PL band at 1.362 eV is observed, which we attribute to charge carrier recombination in the quantum well formed by the two-dimensional InAs layer. In contrast, for the sample grown with Bi, two bands are observed on the lower energy side of GaAs-related PL lines. While the band at 1.422 eV exhibits a full width at half maximum of 17 meV, the band with a peak energy of 1.19 eV is much broader and exhibits an asymmetric lineshape. Following the result in Ref. Lewis2017a, we attribute the bands at 1.422 and 1.19 eV to carrier recombination in the InAs wetting layer (WL) and QDs, respectively. As shown in Fig. <ref>(b), the emission band related to the QDs consists of tens of narrow lines, each associated with an individual QD. These lines exhibit a full width at half maximum as narrow as 140 µeV [see Fig. <ref>(c)], corresponding to the spectral resolution of the setup. As the substrate rotation was stopped during the InAs deposition, the spatial density of QDs and the corresponding spectral density of lines at 1.3 eV vary over the sample. In the following, all experiments have been carried out on a region of the sample where the density of QDs emitting between 1.3 and 1.4 eV is sufficiently low to facilitate single QD spectroscopy.Figure <ref> shows a PL spectrum taken at 10 K on as-grown core-multishell nanowires. We estimate that about 5 nanowires are probed simultaneously in this experiment. The spectrum consists of a broad band centered at 1.44 eV with tens of narrow transitions on its lower energy side. As shown in the enlarged spectrum in inset, the linewidth of the latter transitions ranges typically from 140 µeV (resolution limit of our setup) to a few hundreds of µeV. 
Polytypism in InAs shells was previously shown to cause photoluminescence lines broader than those observed in Fig. <ref>.<cit.> Since these transitions were only observed for samples grown using the Bi surfactant,<cit.> they are associated with Bi-induced InAs islands. In analogy to the planar case, we attribute them to emission from single QDs that form on the sidewalls of the nanowires due to the presence of the Bi surfactant, while we attribute the band at 1.44 eV to the InAs WL. The increased width of these lines with respect to the resolution limit presumably results from spectral diffusion due to fast electrostatic fluctuations in the vicinity of the QDs.<cit.> In contrast to the planar sample, no emission lines related to GaAs could be observed, indicating that the capture by the QDs of carriers photoexcited in the GaAs core is highly efficient. This result is a consequence of the core-shell geometry of our nanowires and has been reported previously for nanowires with (In,Ga)As shell quantum wells.<cit.>

Figure <ref>(a) shows a confocal PL spectrum taken on the planar QD sample at 4.2 K with an excitation power of 290 nW. Thanks to the increased spatial resolution, only a few QDs are excited, and individual transitions are well resolved. Figure <ref>(b) presents a series of spectra recorded with different excitation powers of the line at 1.3187 eV in Fig. <ref>(a). At high excitation powers, an additional line at 1.3210 eV appears in the spectra. Figure <ref>(c) shows the dependence of the emission intensity of the lines at 1.3187 and 1.3210 eV on excitation power. Below a power of 10 µW, the intensity of the line at 1.3187 eV increases linearly with increasing excitation power, demonstrating that it arises from the recombination of a neutral exciton. For higher powers, the intensity of the neutral exciton transition saturates and eventually decreases, a behavior already reported for other QD systems.<cit.> In contrast, the intensity of the line at 1.3210 eV increases almost quadratically with increasing excitation power. This line is therefore related to an antibinding biexciton with a binding energy of -2.3 meV.

For our InAs QDs on planar GaAs(110), the biexciton transition energy was systematically 2–3 meV larger than that of the neutral exciton. To verify whether this observation is a general result for small InAs(110) QDs, we have performed PL experiments with varying excitation power also for the InAs QDs on the {11̅0} sidewalls of GaAs nanowires. Figure <ref>(d) shows a typical spectrum of a single QD. With the excitation power increasing from 0.42 to 2.7 µW, the intensity of the transition at 1.3536 eV increases linearly, indicating that it arises from a neutral exciton. The intensity of this line saturates at higher powers. In contrast to the planar case [Fig. <ref>(b)], a biexciton transition appears for higher excitation powers at the lower energy side of the neutral exciton, i.e., the biexciton in Fig. <ref>(d) has a binding energy of +2.1 meV. For InAs QDs on GaAs(001), the biexciton binding energy decreases with decreasing QD size.<cit.> However, this relation seems not to apply here, since the QD with a binding biexciton in Fig. <ref>(d) emits at an energy higher than the QD with an antibinding biexciton in Fig. <ref>(b).
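The exciton and biexciton assignments above rest on these measured power laws. As a minimal illustration (ours, with made-up numbers), the exponent α of I ∝ P^α can be extracted from a log-log fit, with α ≈ 1 expected for an exciton and α ≈ 2 for a biexciton below saturation:

```python
import numpy as np

def power_law_exponent(power, intensity):
    """Return the slope alpha of log(I) = alpha * log(P) + const."""
    alpha, _ = np.polyfit(np.log(power), np.log(intensity), 1)
    return alpha

# illustrative values only: excitation powers (uW) and integrated intensities
P = np.array([0.1, 0.2, 0.5, 1.0, 2.0])
print(power_law_exponent(P, 40.0 * P**1.0))  # ~1.0 -> neutral exciton X
print(power_law_exponent(P, 15.0 * P**1.9))  # ~1.9 -> biexciton XX
```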
We note that binding biexcitons were also reported for small InAs QDs deposited either on AlAs(110) using the cleaved edge overgrowth technique<cit.> or on the {110} facets of GaAs/AlAs core-shell nanowires.<cit.>

Figure: (a) PL spectrum of a few InAs QDs on planar GaAs(110) taken at 4.2 K with an excitation power of 290 nW. (b) PL spectra from the QD emitting at 1.3187 eV in (a) for different excitation powers in µW as indicated in the figure. The spectra have been normalized and shifted vertically for clarity. (c) Intensity of the transitions at 1.3187 and 1.3210 eV (squares and triangles, respectively) as a function of excitation power. The blue and red lines show fits yielding the slopes indicated in the figure. (d) PL spectra of a QD in a nanowire for different excitation powers. The excitation power in µW is specified on the left of each spectrum. For excitation powers up to 2.7 µW, the intensity of the lines labeled X and XX increases linearly and quadratically with increasing excitation power, respectively.

To clarify the origin of the different character of the biexciton state, the actual shape and dimensions of the QDs need to be elucidated.<cit.> Atomic force micrographs of uncapped QDs are of limited relevance for this purpose, as significant in- and out-of-plane In segregation may occur during the QD overgrowth by GaAs.<cit.> The three-dimensional shape of a single embedded QD can be reconstructed by atom probe<cit.> or electron tomography,<cit.> but it is difficult to achieve results of statistical significance with these techniques. Magneto-PL is an alternative technique to obtain an estimate of the size of embedded InAs QDs or, rather, the spatial extent of the confining potential. In the presence of an external magnetic field B and neglecting exchange splittings, the energy E_X of a neutral exciton in a QD is given by:<cit.>

E_X = E_X^0 ± (1/2) g μ_B B + γ_2 B^2 ,

with the exciton energy E_X^0 at B=0, the Bohr magneton μ_B, the exciton Landé factor g and the diamagnetic coefficient γ_2. The diamagnetic coefficient is proportional to the square of the electron-hole coherence length L_eh in the plane perpendicular to the magnetic field:<cit.>

γ_2 = e^2 L_eh^2/(8 μ) ,

with the reduced mass μ of the exciton in the plane perpendicular to the magnetic field. While large three-dimensional InAs islands on GaAs(110) are elongated along [11̅0], possibly due to different adatom diffusivities along the [11̅0] and [001] directions,<cit.> the shape anisotropy for smaller QDs is negligible.<cit.> If we suppose that, as depicted in the inset of Figs. <ref>(c) and <ref>(d), the InAs(110) QDs in planar and nanowire samples are lens-shaped with a diameter d and a height h, and that L_eh for strongly confined excitons is given by the QD size, then L_eh = d for InAs QDs grown on planar GaAs(110). For the nanowire sample measured in Faraday geometry, γ_2 is proportional to the coherence length of the exciton in the plane perpendicular to the nanowire axis. Therefore, L_eh depends on both d and h and may be written approximately as L_eh = √(d h).

Figure <ref>(a) shows the magnetic field dependence of the emission of a neutral exciton in an InAs QD on planar GaAs(110). For finite magnetic fields, the PL line is observed to split into two transitions, which we attribute to the two bright states of the exciton. QDs on GaAs(110) have C_s symmetry, for which the magnetic field is expected to mix dark and bright exciton states,<cit.> hence giving rise to four distinct transitions.
The fact that only two lines are observed even at 8 T [see inset in Fig. <ref>(a)] suggests that the bright and dark exciton states have similar g factors. In InAs QDs, the electron g factor is known to depend only weakly on the QD size and shape <cit.> and to have a value between -0.2 and -0.5.<cit.> A similar g factor for bright and dark excitons thus implies a small hole g factor for the InAs/GaAs(110) QDs under investigation. To obtain the energy of the exciton bright states as a function of B, the two lines observed in each PL spectrum are fit by Gaussians [see inset in Fig. <ref>(a)]. The resulting transition energies are subsequently fit by Eq. <ref>, as shown by the solid lines in Fig. <ref>(a), yielding |g| = 2.8 and γ_2 = 7.5 µeV/T^2. Figure <ref>(b) shows the electron-hole coherence lengths L_eh deduced from the values of γ_2 measured for 14 different QDs emitting between 1.27 and 1.34 eV. As an average, we obtain d = L_eh = (5.4 ± 1.2) nm, a value similar to that reported for Stranski-Krastanov InAs/GaAs(001) QDs emitting in the same spectral range.<cit.>

The magnetic field dependence of the emission of a neutral exciton in an InAs QD on a {110} sidewall of a GaAs nanowire is shown in Fig. <ref>(b). Similar to the planar sample in Fig. <ref>(a), the PL line of the exciton in the sidewall InAs QDs splits into two transitions in the magnetic field. The energy of these lines follows a parabolic dependence on B as well. Note that in contrast to the planar case, the PL lines at 8 T exhibit an asymmetric lineshape, which may be due to a contribution from the dark states. The g factor and γ_2 values deduced from these experiments are smaller than those obtained for the planar sample [Fig. <ref>(a)]. The former finding suggests that the exciton g factor is anisotropic, most probably due to some anisotropy in the hole g factor.<cit.> Measuring smaller γ_2 values for B || [111] than for B || [110] indicates that h < d. To confirm this result, we have measured γ_2 for 15 different InAs QDs on nanowire sidewalls. The corresponding values for L_eh are shown in Fig. <ref>(d), from which we arrive at an average L_eh = (3.8 ± 0.7) nm. Therefore, the strong confinement direction for these InAs(110) QDs is the [110] direction perpendicular to the surface. Assuming that d is equal for QDs in the planar and nanowire samples, this result yields h = 2.5 nm, in good agreement with the result of the atomic force microscopy study of uncapped QDs in Ref. Lewis2017a.

Together with the similar transition energies [Figs. <ref>(b,d)], the magneto-PL measurements in Fig. <ref> imply that the QDs in the planar and nanowire samples are of similar shape and size. Hence, we suggest that the different character of the biexciton state in these samples is not only related to the size of the QD, but also to the strain state. In fact, strain results in piezoelectric fields in the (110) plane of the QD <cit.> and, depending on the magnitude of these built-in fields, biexcitons in (In,Ga)As(110) QDs can be binding or antibinding.<cit.> Moreover, it has been predicted that the exact strain state in an InAs QD on a {11̅0} nanowire sidewall differs from that of a QD on planar GaAs(110). In the latter case, the substrate is rigid and both the WL and the QD base are in perfect registry with the unstrained GaAs lattice. In the former, however, the nanowire geometry results in enhanced elastic strain relaxation in all three dimensions.
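The values of |g|, γ_2 and L_eh quoted above follow from fits of this kind; a minimal sketch (ours) is given below, where the reduced exciton mass μ = 0.08 m_e is an assumed value, not one taken from the paper:

```python
import numpy as np

MU_B = 5.7883818060e-5   # Bohr magneton (eV/T)
E = 1.602176634e-19      # elementary charge (C)
M_E = 9.1093837015e-31   # electron mass (kg)

def fit_zeeman_diamagnetic(B, E_hi, E_lo):
    """Fit the two bright-state branches E = E0 +/- g*MU_B*B/2 + gamma2*B^2.

    B in tesla, branch energies in eV; returns |g|, gamma2 (eV/T^2), E0 (eV)."""
    # the half-difference isolates the linear Zeeman term
    g = abs(np.polyfit(B, (E_hi - E_lo) / 2, 1)[0]) / (0.5 * MU_B)
    # the mean isolates E0 + gamma2*B^2 (linear coefficient should be ~0)
    gamma2, _, E0 = np.polyfit(B, (E_hi + E_lo) / 2, 2)
    return g, gamma2, E0

def coherence_length(gamma2, mu_rel=0.08):
    """Invert gamma2 = e^2 L_eh^2 / (8 mu): gamma2 in eV/T^2, L_eh in metres.

    mu_rel = 0.08 is an assumed reduced exciton mass in units of m_e."""
    return np.sqrt(8.0 * mu_rel * M_E * gamma2 * E) / E

# gamma2 = 7.5 ueV/T^2 (planar sample) gives L_eh of roughly 5 nm
print(coherence_length(7.5e-6))
```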
It is thus expected that the strain state of QDs in nanowires will differ from that of QDs on planar GaAs(110).<cit.> In particular, since the core and shell materials assume the same lattice constant along the nanowire axis, they both experience a uniaxial strain. Since the nanowire strain state depends on the entire structure, tuning the strain by adjusting the nanowire core-shell structure could provide a way to tune the biexciton binding energy. This feature could, for instance, allow one to fabricate QDs with zero biexciton binding energy, which would be suitable for the generation of pairs of entangled photons.<cit.>

As mentioned in the introduction, the growth of InAs QDs on GaAs(110) rather than on GaAs(001) has profound consequences resulting from the inequivalent [11̅0] and [001] in-plane directions of InAs(110). In contrast to the C_2v symmetry of InAs QDs on GaAs(001), these inequivalent directions result in a C_s symmetry with the (11̅0) plane acting as a reflection plane.<cit.> Theoretically, the bright exciton states in InAs(110) QDs are expected to exhibit a comparatively large anisotropic exchange splitting, and the corresponding transitions should be polarized along the [11̅0] and [001] directions.<cit.> Figure <ref>(a) shows the polarization dependence of the transitions from several QDs on planar GaAs(110) between 1.28 and 1.30 eV. This measurement was carried out at zero magnetic field and with an excitation power low enough to ensure that most transitions arise from the recombination of neutral excitons. An angle of 0° corresponds to light polarized along [001]. Evidently, while intense QD transitions are observed for light polarized along [11̅0], the signal is too weak for light polarized along [001] to allow a reliable measurement of the anisotropic exchange splitting between the two bright exciton states. We note that such a measurement would be even more complex for QDs in nanowires, since the nanowire geometry leads to an antenna effect that results in different extraction efficiencies for light polarized along the [11̅0] and [001] directions.<cit.>

The strong intensity difference between the bright exciton states for [11̅0] and [001] polarization visible in Fig. <ref>(a) could arise from an anisotropic exchange splitting for InAs(110) QDs that is so large that only the state at lower energy is occupied at low temperatures.<cit.> This explanation can be safely excluded here. As shown in Fig. <ref>(a), the energies of the exciton bright states for the planar sample follow a clear parabolic dependence for fields larger than about 2 T. In other words, the Zeeman splitting of the exciton bright states at 2 T is much larger than their anisotropic exchange splitting. Using |g| = 2.8 [Fig. <ref>(a)], we estimate that the anisotropic exchange splitting for InAs(110) QDs is smaller than 100 µeV.

Using atomistic calculations, the authors of Ref. Singh2013 showed that the strain in InAs(110) QDs may lead to a mixing between heavy-hole and light-hole states. In contrast to InAs(001) QDs with a C_2v symmetry, the mixing is strong for the [110] orientation since heavy and light holes belong to the same irreducible representation for the C_s point group. As a result of this mixing, the PL signal from excitons in InAs(110) QDs may be linearly polarized. To obtain the polarization degree for the bright excitons in our InAs QDs on planar GaAs(110), we plot in Fig. <ref>(b) the polarization dependence of the intensity summed over all QDs emitting between 1.28 and 1.30 eV in Fig. <ref>(a).
For comparison, we also show the polarization dependence of the near band edge emission from the GaAs substrate, which should be entirely unpolarized.<cit.> Evidently, the polarization dependence of our setup can be neglected for the present analysis. The QD PL signal is clearly linearly polarized along [11̅0], in agreement with the data in Fig. <ref>(a). With the degree of linear polarization ρ = (I_001 - I_11̅0)/(I_001 + I_11̅0), where I_001 and I_11̅0 are the PL intensities for light polarized along [001] and [11̅0], respectively, we obtain ρ = -0.68 as the average polarization for QDs emitting between 1.28 and 1.30 eV. This polarization degree is not only opposite in sign compared to that computed for InAs QDs on GaAs(110) by <cit.> (ρ = 0.35), but also significantly larger. Apparently, the much smaller base diameter of our InAs QDs [cf. Fig. <ref>(b)] in comparison to the 25 nm diameter considered in Ref. Singh2013 enhances the heavy-hole/light-hole mixing and thus amplifies the associated polarization of the bright exciton state. In any case, the findings in Fig. <ref> demonstrate the possibility of fabricating linearly polarized single photon emitters with a deterministic polarization axis determined by the crystallographic axes.

§ SUMMARY AND CONCLUSIONS We have studied the fine structure of excitons confined in InAs(110) QDs grown by molecular beam epitaxy. Employing a morphological instability induced by the surfactant Bi, these QDs form on planar GaAs(110) as well as on the {110} sidewall facets of GaAs nanowires. Light emission associated with the radiative decay of excitons in the QDs has been observed in PL spectra between 1.1 and 1.42 eV. From magneto-PL experiments, we have shown that the strong confinement axis is the [110] direction normal to the surface, and we have estimated that the smallest QDs have a height of about 2.5 nm. Despite their reduced symmetry compared to InAs QDs grown on (001) or (111) surfaces, these QDs constitute a versatile system for quantum optics applications in the near infrared spectral range. First, the binding energy of biexcitons in (110) InAs QDs not only depends on the dimensions of the QD, but also on the strength of the piezoelectric fields in the (110) plane. We have observed both binding and antibinding biexcitons, suggesting that one could produce QDs with zero biexciton binding energy by tuning the QD shape and strain state. As a practical means for tuning the strain, we propose to vary the thickness of the GaAs core and outer shell of GaAs/InAs core-multishell nanowires. Alternatively, variation in the biexciton energy could be achieved by mechanically driving the nanowire. Despite a nonzero anisotropic exchange splitting, such QDs could be used as entangled photon pair emitters via the time reordering scheme.<cit.> Furthermore, the strain in the (110) plane mixes the heavy and light hole states. As a result of this mixing, the photoluminescence signal of InAs QDs is polarized along the [11̅0] or the [001] direction, depending on the exact shape and In content of the QD. The high degree of linear polarization of InAs QDs along a well-defined axis opens the possibility to generate linearly polarized single photons, which is of interest for applications in quantum key distribution.<cit.>

We are grateful to Gerd Paris and Manfred Ramsteiner for technical assistance with the confocal optical setup, and to Jesús Herranz for a careful reading of the manuscript. P. C.
acknowledges funding from the Fonds National Suisse de la Recherche Scientifique through project No. 161032. R. B. L. acknowledges support from the Alexander von Humboldt foundation.
§ INTRODUCTION In the tensionless limit string theory is expected to exhibit a large underlying symmetry that is believed to lie at the heart of many special properties of stringy physics <cit.>. In flat space, the tensionless limit is somewhat subtle since there is no natural length scale relative to which the (dimensionful) string tension may be taken to zero. The situation is much better in the context of string theory on an AdS background, since the cosmological constant of the AdS space defines a natural length scale. This is also reflected by the fact that higher spin theories – they are believed to capture the symmetries of the leading Regge trajectory at the tensionless point <cit.> – appear naturally in AdS backgrounds <cit.>. In the context of string theory on AdS_3 concrete evidence for this picture was recently obtained in <cit.>. More specifically, it was shown that the CFT dual of string theory on AdS_3× S^3 ×T^4, the symmetric orbifold of T^4, see <cit.> for a review, contains the CFT dual of the supersymmetric higher spin theories constructed in <cit.>.[Superconformal higher spin theories were first constructed in <cit.>. The duality is the natural supersymmetric analogue of the original bosonic proposal of <cit.>, see <cit.> for a review; various properties of the N=4 duality were further analysed in <cit.>.] While this indirect evidence is very convincing, it would be very interesting to have more direct access to the higher spin sub-symmetry in string theory. This symmetry is only expected to emerge in the tensionless limit of string theory, in which the string is very floppy and usual supergravity methods are not reliable. Thus we should attempt to address this question using a worldsheet approach.

Worldsheet descriptions of string theory on AdS backgrounds are notoriously hard, but in the context of string theory on AdS_3, the background with pure NS-NS flux admits a relatively straightforward worldsheet description in terms of a WZW model based on the Lie algebra 𝔰𝔩(2,R) <cit.>. In this paper we shall use this approach to look for signs of a higher spin symmetry among these worldsheet theories. More concretely, we shall combine the WZW model corresponding to 𝔰𝔩(2,R) with an 𝔰𝔲(2) WZW model, describing strings propagating on S^3, as well as four free fermions and bosons corresponding to T^4. The complete critical worldsheet theory then describes strings on AdS_3× S^3 ×T^4. The worldsheet description of these WZW models contains one free parameter, the level k of the N=1 superconformal WZW models associated to 𝔰𝔩(2,R) and 𝔰𝔲(2), respectively — these two levels have to be the same in order for the full theory to be critical. Geometrically, these levels correspond to the size of the AdS_3 space (and the radius of S^3) in string units. The tensionless limit should therefore correspond to the limit where k is taken to be small. In this paper we analyse systematically the string spectrum of the worldsheet theory for k small.[See e.g. <cit.> and references therein for other worldsheet approaches to this problem which do not focus on the spectrum itself, but rather on the symmetry structures that are presumed to emerge in the tensionless limit of string theory.]
As we shall show, the only massless spin fields that emerge in this limit are those associated to the supergravity multiplet, while all the higher spin fields remain massive, except in the extremal case where the level is taken to be k=1 — this is strictly speaking an unphysical value for the level since then the bosonic 𝔰𝔲(2) model has negative level; however, as argued in <cit.>, some aspects of the theory may still make sense. (We should also mention that in the context of the WZW model based on AdS_3× S^3 × S^3 × S^1 <cit.> the theory with k=1 is not singular since it is compatible with the levels of the two superconformal 𝔰𝔲(2) models being k^+=k^-=2, leading to vanishing bosonic levels for the two 𝔰𝔲(2) algebras.)[We thank Lorenz Eberhardt for a useful discussion about this point.] For k=1, the bosonic 𝔰𝔩(2,R) algebra has level k_bos=3, and as in <cit.>, an infinite tower of massless higher spin fields arises from the long string subsector (the spectrally flowed continuous representations). These higher spin fields are part of a continuum and realise quite explicitly some of the speculations of <cit.>.

For more generic values of the level, we also explain the sense in which a `leading Regge trajectory' emerges, and we give an explicit description of these states. In particular, we show that the relevant states form the spectrum of a specific N=4 higher spin theory of Vasiliev that was recently analysed in detail by one of us <cit.>. (More specifically, this higher spin theory consists of one N=4 multiplet for each even spin; the fact that the leading Regge trajectory in closed string theory only consists of states (or multiplets) of even spin is also familiar from flat space, see the discussion around eq. (<ref>).) For spins that are small relative to the size of the AdS space, the states on the leading Regge trajectory are described by physical states coming from the (unflowed) discrete representations of 𝔰𝔩(2,R); as the spin gets larger, the corresponding classical strings become longer until they hit the boundary of the AdS space where they become part of the spectrally flowed continuous representations, describing the continuum of long strings, see Figure <ref>. This picture fits in nicely with expectations from <cit.>, see also <cit.>.

The fact that among these backgrounds with NS-NS flux no conventional higher spin symmetry emerges also has a natural interpretation in terms of the structure of the classical sigma model. Indeed, as explained in <cit.>, the tension of the string is of the form <cit.>

T = √( Q_NS^2 + g_s^2 Q_RR^2 ) ,

where Q_NS and Q_RR are quantized, and g_s is the string coupling constant. This formula therefore suggests that the tensionless limit is only accessible in the situation with pure R-R flux (and in the limit g_s → 0).

The paper is organized as follows. We explain the basics of the worldsheet theory (and set up our notation) in Section 2. In Section 3 we prove that the spectrum of this family of worldsheet theories does not contain any massless higher spin fields among the unflowed representations (describing short strings). In Section 4 we start with identifying the states that comprise the leading Regge trajectory. We first analyse the states of low spin that arise from the unflowed discrete representations. We also comment on the structure of the subleading Regge trajectory, as well as the situation for the case where T^4 is replaced by K3. The rest of the leading Regge trajectory that is part of the continuous spectrum is then identified in Section 5.
We also comment there on the massless higher spin fields arising from the spectrally flowed continuous representation at k=1, and explain how they fit in with the expectations from <cit.>. Section 6 contains our conclusions, and there are three appendices where we have collected some of the more technical arguments that are referred to at various places in the body of the paper.

§ WORLDSHEET STRING THEORY ON ADS_3 We want to study the spectrum of type IIB strings propagating on backgrounds of the form AdS_3× S^3× X, where X is either T^4 or K3 so that the resulting theory has N=4 spacetime supersymmetry. We shall concentrate on the background with pure NS-NS flux for which the AdS_3× S^3 theory can be described by a (non-compact) SL(2,R)× SU(2) WZW model[Strictly speaking, we always consider the universal cover of the SL(2,R) group, because the timelike direction of AdS_3 is taken to be non-compact.] that can be studied by conventional CFT methods. The bosonic version of this theory was discussed in some detail in the seminal papers <cit.>; in what follows we extend, following <cit.>, some aspects of their analysis to the supersymmetric case. The symmetry algebras of the supersymmetric WZW models are the N=1 superconformal affine algebras 𝔰𝔩(2,R)_k ⊕ 𝔰𝔲(2)_k' that will be described in more detail below. Their central charges equal

c(𝔰𝔩(2,R)_k) = 3( (k+2)/k + 1/2 ) ,  c(𝔰𝔲(2)_k') = 3( (k'-2)/k' + 1/2 ) ,

and the condition that the total central charge adds up to c=9 (as befits a 6-dimensional supersymmetric background) requires then that k=k'. For this choice of levels, the naive N=1 worldsheet supersymmetry of the model is enhanced to N=2 <cit.>. This enhancement can also be understood from the fact that the AdS_3× S^3 theory can be described as a non-linear sigma model on the supergroup PSL(2|2) (see, e.g., <cit.>).

§.§ The AdS_3 WZW model In our conventions, the algebra describing superstrings on AdS_3 reads

[J^+_m,J^-_n] = -2J^3_m+n + kmδ_m,-n
[J^3_m,J^±_n] = ±J^±_m+n
[J^3_m,J^3_n] = -(k/2) mδ_m,-n
[J^±_m,ψ^3_r] = ∓ψ^±_m+r
[J^3_m,ψ^±_r] = ±ψ^±_m+r
[J^±_m,ψ^∓_r] = ∓2ψ^3_m+r
{ψ^+_r,ψ^-_s} = kδ_r,-s
{ψ^3_r,ψ^3_s} = -(k/2)δ_r,-s .

The dual Coxeter number is h^∨_𝔰𝔩(2,R) = -2. As detailed in appendix <ref>, the shifted currents

𝒥^+ = J^+ + (2/k)(ψ^3ψ^+) ,  𝒥^- = J^- - (2/k)(ψ^3ψ^-) ,  𝒥^3 = J^3 + (1/k)(ψ^-ψ^+)

decouple from the fermions, [𝒥^a_n,ψ^b_r] = 0, and satisfy the same algebra as the J^a with level κ = k+2. The Sugawara stress tensor and supercurrent are

T = (1/2k)(𝒥^+𝒥^- + 𝒥^-𝒥^+ - 2𝒥^3𝒥^3 - ψ^+∂ψ^- - ψ^-∂ψ^+ + 2ψ^3∂ψ^3)
G = (1/k)(𝒥^+ψ^- + 𝒥^-ψ^+ - 2𝒥^3ψ^3 - (2/k)ψ^+ψ^-ψ^3) ,

where every composite operator in the above expressions is understood to be normal-ordered. These generators satisfy the N=1 superconformal algebra (<ref>)–(<ref>) with central charge (see eq. (<ref>)) c = 3( (k+2)/k + 1/2 ). The holographic dictionary implies that the global charges in the spacetime theory are given by <cit.>

L^CFT_0 = J^3_0 ,  L^CFT_1 = J^-_0 ,  L^CFT_-1 = J^+_0 ,

with analogous expressions for the right-movers. In particular, the spacetime conformal dimension (which we henceforth refer to as the energy E) is given by the eigenvalue of J^3_0 + J̅^3_0, while the spacetime helicity s equals J^3_0 - J̅^3_0.[In the following, we shall refer to s as the spin — this is what it is from the view of the two-dimensional spacetime conformal field theory.] Since we want to keep track of these quantum numbers, it will prove convenient to describe the representation content with respect to the coupled currents J^a.
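Returning to the decoupled currents 𝒥^a introduced above: the following short computation (our addition, not in the original text) verifies the decoupling [𝒥^a_n,ψ^b_r] = 0 in one representative case, using only the (anti)commutators listed above together with the identity [AB,C] = A{B,C} - {A,C}B for fermionic A, B, C:

```latex
\begin{aligned}
[\mathcal{J}^3_m,\psi^+_r]
  &= [J^3_m,\psi^+_r] + \tfrac{1}{k}\,[(\psi^-\psi^+)_m,\psi^+_r] \\
  &= \psi^+_{m+r} + \tfrac{1}{k}\sum_{s}
     \Bigl(\psi^-_{m-s}\,\{\psi^+_s,\psi^+_r\}
           - \{\psi^-_{m-s},\psi^+_r\}\,\psi^+_s\Bigr) \\
  &= \psi^+_{m+r} + \tfrac{1}{k}\,\bigl(0 - k\,\psi^+_{m+r}\bigr) = 0\,,
\end{aligned}
```

since {ψ^+_s,ψ^+_r} = 0, while {ψ^-_{m-s},ψ^+_r} = kδ_{m-s,-r} picks out the single term s = m+r.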
In addition to the symmetry algebra, the actual worldsheet conformal field theory is characterised by the spectrum, i.e., by the set of representations that appear in the theory. For the bosonic case, a proposal for what this spectrum should be was made in <cit.>, and the same arguments also apply here once we decouple the fermions. Recall that a highest weight representation of a (bosonic) affine Kac-Moody algebra is uniquely characterised by the representation of the zero mode algebra (in our case 𝔰𝔩(2,R)) acting on the `ground states' — these are the states that are annihilated by the modes J^a_n with n>0. For the case at hand, the relevant representations of 𝔰𝔩(2,R) that appear <cit.> are the so-called principal discrete representations (corresponding to short strings), as well as the principal continuous representations — together they form a complete basis of square-integrable functions on AdS_3. Furthermore, since the no-ghost theorem truncates the set of these representations to a finite number (depending on k) <cit.>, additional representations corresponding to their spectrally flowed images appear <cit.>; these describe the long strings. In each case, the representation on the ground states is the same for left- and right-movers — this theory is therefore the natural analogue of the `charge-conjugation' modular invariant, see also <cit.>.

In the supersymmetric case we are interested in, we consider the above 𝔰𝔩(2,R) affine theory for the decoupled bosonic currents 𝒥^a, and tensor to it a usual free fermion theory (where the fermions will either all be in the NS or in the R sector). Note that this will lead to a modular invariant spectrum since both factors are separately modular invariant.

In the following we shall study the spacetime spectrum of this worldsheet theory with a view towards identifying the states on the leading Regge trajectory. We shall first concentrate on the unflowed discrete representations, from which the low-lying states of the leading Regge trajectory — those whose spin satisfies s ⪅ k/2 — originate. The remaining states of the Regge trajectory are part of the continuum of long strings that is described by the (spectrally flowed) continuous representations; they will be analysed in Section <ref>.

§.§.§ The NS sector In the NS sector we label the ground states by |j,m⟩, where m is the eigenvalue of J_0^3, while j labels the spin,

C_2|j,m⟩ = -j(j-1)|j,m⟩ ,  J_0^3|j,m⟩ = m|j,m⟩ ,

and C_2 is the quadratic Casimir of 𝔰𝔩(2,R)

C_2 = 1/2(J_0^+J_0^- + J_0^-J_0^+) - J_0^3J_0^3 .

The condition to be ground states, i.e., to satisfy

J^a_n|j,m⟩ = 0 ∀ n ≥ 1 and ψ^a_r|j,m⟩ = 0 ∀ r ≥ 1/2

implies, in particular, that the coupled and decoupled bosonic modes with n ≥ 0 agree on the ground states,

𝒥^a_n |j,m⟩ = J^a_n|j,m⟩ ,  n ≥ 0 ;

the correction terms involve positive fermionic modes that annihilate the ground states. (Thus it makes no difference whether we label the ground states in terms of the decoupled or coupled spins). Furthermore, the ground states are annihilated by

L_n|j,m⟩ = 0 for n ≥ 1 and G_r|j,m⟩ = 0 for r ≥ 1/2 ,

as follows from eqs. (<ref>) and (<ref>).

The discrete lowest weight representations 𝒟^+_j — in <cit.> they are called `positive energy' — are characterised by the conditions

J_0^+|j,m⟩ = |j,m+1⟩ ,  J_0^-|j,m⟩ = (-j(j-1)+m(m-1))|j,m-1⟩ .

Note that the state |j,j⟩ has the lowest J_0^3 eigenvalue and is therefore annihilated by J_0^-. In particular, it follows from (<ref>) that

𝒟^+_j: J_0^-|j,j⟩ = 0 ⇒ L_1^CFT|j,j⟩ = 0 ,

as appropriate for a quasiprimary state in the dual 2d CFT.
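As a small consistency check (our addition), the action (<ref>) reproduces the Casimir eigenvalue on the lowest-weight state: writing C_2 = J_0^-J_0^+ + ½[J_0^+,J_0^-] - J_0^3J_0^3 and using [J_0^+,J_0^-] = -2J_0^3 from the algebra above,

```latex
\begin{aligned}
C_2\,|j,j\rangle
  &= \bigl(J_0^- J_0^+ - J_0^3 - J_0^3 J_0^3\bigr)\,|j,j\rangle \\
  &= \bigl(2j - j - j^2\bigr)\,|j,j\rangle
   = -j(j-1)\,|j,j\rangle\,,
\end{aligned}
```

where J_0^-J_0^+|j,j⟩ = (-j(j-1)+(j+1)j)|j,j⟩ = 2j|j,j⟩ follows from (<ref>) with m = j+1, in agreement with the Casimir eigenvalue quoted above.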
The representation of the full affine algebra is obtained by the action of the negative modes J^a_-n and ψ^a_-r, acting on these ground states. With respect to the global 𝔰𝔩(2,R) algebra, all of these states will then also sit in discrete lowest weight representations of 𝔰𝔩(2,R), and the quasiprimary states of the dual CFT will always correspond to the lowest weight states of these discrete representations.

§.§.§ The R sector The analysis in the Ramond sector is slightly more subtle since there are fermionic zero modes. The ground states are therefore characterised in addition by an irreducible spinor representation of the Clifford algebra in (2+1)-dimensions, spanned by the fermionic zero modes — this representation is two-dimensional and can be described by |s_0⟩, with s_0 = ±1. The full set of ground states is therefore labelled by |j,m;s_0⟩.

The presence of the fermionic zero modes implies that, unlike (<ref>), the action of the decoupled and coupled bosonic zero modes differs on the ground states. In particular,

𝒥_0^3 = J_0^3 - (1/k)(ψ^+ψ^-)_0 ,

where on the ground states (see eq. (<ref>))

(ψ^+ψ^-)_0 |j,m;s_0⟩ = (1/2)[ψ_0^+, ψ_0^-] |j,m;s_0⟩ = (k/2)σ^3 |j,m;s_0⟩ = (k s_0/2) |j,m;s_0⟩ ,

and consequently

J_0^3 |j,m;s_0⟩ = ( m + s_0/2 ) |j,m;s_0⟩ .

Effectively, this can be interpreted as shifting the spin j (with respect to the coupled algebra) of the R sector representation by ±1/2 relative to the decoupled algebra.

We are interested in organising the descendants of these ground states in terms of representations of the (coupled) 𝔰𝔩(2,R) zero modes since they have a direct interpretation in terms of the dual CFT, see eq. (<ref>). Since the creation generators — the negative bosonic and fermionic modes — transform in the adjoint representation of this 𝔰𝔩(2,R), the spins that arise will be of the form j+ℓ, where j is the spin of the (decoupled) ground states while ℓ ∈ Z in the NS sector and ℓ ∈ Z+1/2 in the R-sector — here we have absorbed the above shift by 1/2 into the definition of ℓ. A similar consideration applies for the right-movers where the resulting spin will be j+ℓ̅ for the same j (and with the same restrictions on ℓ̅). Thus the total energy and spin of such a descendant will be

E = 2j + ℓ + ℓ̅ ,  s = ℓ - ℓ̅ .

Note that in the NS-NS and R-R sectors the spacetime spin s will be integer, while in the NS-R and R-NS sectors it will be half-integer.

§.§ The compact directions The remaining spacetime directions are described by S^3 × T^4. Supersymmetric strings propagating on S^3 can be described by a WZW model based on 𝔰𝔲(2), for which our conventions are

[K^+_m,K^-_n] = 2K^3_m+n + kmδ_m,-n
[K^3_m,K^±_n] = ±K^±_m+n
[K^3_m,K^3_n] = (k/2) mδ_m,-n
[K^±_m,χ^3_r] = ∓χ^±_m+r
[K^3_m,χ^±_r] = ±χ^±_m+r
[K^±_m,χ^∓_r] = ±2χ^3_m+r
{χ^+_r,χ^-_s} = kδ_r,-s
{χ^3_r,χ^3_s} = (k/2)δ_r,-s .

The dual Coxeter number is h^∨_𝔰𝔲(2) = +2. As for the case of 𝔰𝔩(2,R), we can decouple the bosons from the fermions by defining

𝒦^3 = K^3 - (1/k)(χ^+χ^-) ,  𝒦^± = K^± ∓ (2/k)(χ^3χ^±) ,

so that [𝒦^a_m,χ^b_n] = 0. The decoupled currents satisfy again the same algebra as the K^a, but with level (k-2) instead. We will therefore mostly restrict ourselves to k ≥ 2 in this paper, see however the discussion in Section <ref>.[It is potentially interesting to study the model for certain smaller values of k such as k=0, see e.g. <cit.> and references therein for attempts in this direction (in bosonic setups).
While the k=0 worldsheet theory is not a standard CFT, it may be related to an integrable theory such as the principal chiral model <cit.>.]

The ground states of the corresponding WZW models will transform in the same representation for left- and right-movers with respect to the decoupled 𝔰𝔲(2) algebras (i.e., with respect to the zero modes of (<ref>)). These representations are labeled by a spin j' with j' = 0, 1/2, 1, 3/2, …, and their states are described by m' = -j', -j'+1, …, j'-1, j', as is well-known for 𝔰𝔲(2) representations. We choose the convention that the Casimir of the global decoupled algebra (i.e., of the zero modes of (<ref>)) on the representation j' equals

𝒞^𝔰𝔲(2)_2 |j',m'⟩_S^3 = j'(j'+1) |j',m'⟩_S^3 .

The decoupled and coupled bosonic zero modes agree in the NS-sector, while in the R-sector they differ by a fermionic contribution, and as a consequence, the K^3_0 eigenvalues in the R-sector are shifted by ±1/2 relative to those in the NS-sector, cf., the discussion around eq. (<ref>) above.

Finally, the T^4 theory corresponds to four free bosons Y^i and four free fermions λ^i (i=1,2,3,4). The ground states in this sector are characterised by a momentum vector |p⃗⟩ with

(∂Y^i)_0 |p⃗⟩ = p^i |p⃗⟩  and  L_0^T^4 |p⃗⟩ = (1/2)∑_i=1^4 (p^i)^2 |p⃗⟩ .

For a compact torus the left- and right-moving momenta need not agree — they can differ by winding numbers. However, for our purposes, i.e., for identifying the leading Regge trajectory, we will always work in the zero momentum sector p⃗ = 0⃗, both for left- and right-movers. The multiplicity of the Ramond sector ground states is accounted for as usual by introducing two labels (s_2, s_3), with s_2,3 = ±.

§.§ GSO projection As usual in a NS-R worldsheet string theory, one must impose an appropriate GSO projection in order to remove tachyonic modes and guarantee supersymmetry of the spacetime theory. In the NS sector the worldsheet parity operator is defined to be odd on the ground states,

(-1)^F |0⟩_NS = -|0⟩_NS .

Let us denote by N the (integer or half-integer) excitation number in the 𝔰𝔩(2,R) sector, while N' is the corresponding number for 𝔰𝔲(2), and N” for the T^4 excitations. On a state with excitation numbers (N,N',N”) the total worldsheet parity is then

(-1)^F = -(-1)^2N+2N'+2N” .

The GSO projection (-1)^F = (-1)^F̅ = +1 in the NS-sector thus requires that either one or all three excitation numbers are half-integer, and this has to be imposed both for left- and right-movers. In order to describe this compactly we introduce the number

n ≡ N + N' + N” - ν ,  where ν = 1/2 in the NS sector and ν = 0 in the R sector.

The above considerations imply that n has to be an integer in the NS sector, both for left- and right-movers. Obviously, the same is true in the R sector since there all excitation numbers are integers anyway.

In the R sector, the GSO projection involves also a contribution from the fermionic zero modes corresponding to s_0s_1s_2s_3. Thus we can, for any descendant, satisfy the GSO projection by changing s_3, if necessary.
§.§ Physical state conditions

The 𝔰𝔩(2,ℝ) WZW model contains a time-like direction, and as a consequence the theory is non-unitary. As usual in worldsheet string theory, the corresponding negative-norm states are removed upon imposing the Virasoro constraints. In our context, the physical state conditions are

L^tot_0 - ν = L̄^tot_0 - ν̄ = 0 ,

where ν, ν̄ = 0, 1/2 in the R and NS sectors, respectively, and

L^tot_0 = L_0^{𝔰𝔩(2,ℝ)} + L_0^{𝔰𝔲(2)} + L_0^{T^4} .

We parameterise the contributions from each component as

L_0^{𝔰𝔩(2,ℝ)} = -j(j-1)/k + N , L̄_0^{𝔰𝔩(2,ℝ)} = -j(j-1)/k + N̄ ,
L_0^{𝔰𝔲(2)} = j'(j'+1)/k + N' , L̄_0^{𝔰𝔲(2)} = j'(j'+1)/k + N̄' ,
L_0^{T^4} = h^T + N'' , L̄_0^{T^4} = h^T + N̄'' .

Here, j, j' and h^T label the spins (resp. the conformal dimension) of the corresponding ground states; for the case of 𝔰𝔩(2,ℝ) and 𝔰𝔲(2) the relevant spins are defined with respect to the decoupled currents. Furthermore, physical states satisfy the super-Virasoro constraints

L^tot_m |phys⟩ = 0 for m > 0 , G^tot_r |phys⟩ = 0 for r > 0 ,

where again L^tot and G^tot denote the total worldsheet currents, receiving contributions from all three sectors of the theory. The no-ghost theorem <cit.> (adapted here to the supersymmetric setup, see also <cit.>) shows that the Virasoro constraints (<ref>) remove the negative-norm states from the spectrum provided the unitarity bound

0 ≤ j ≤ (k+2)/2

is satisfied. This condition is the k-dependent bound on the spin j that we mentioned before, see the discussion at the end of Section <ref>. It was argued in <cit.>, based on the structure of the spectrally flowed representations, that in fact the bound on j should be slightly stronger and take the form

1/2 < j < (k+1)/2 .

For most of the following the (weaker) unitarity bound will suffice, but for some arguments, in particular the analysis of the spectrally flowed representations, the stronger Maldacena-Ooguri (MO) bound (<ref>) will be required.

Next, we write the first equation in the on-shell condition (<ref>) as

-j(j-1)/k + j'(j'+1)/k + h^T + n = 0 ,

where n was defined above in eq. (<ref>). In addition, we get the same equation with n̄ in place of n from the second condition of (<ref>), where n̄ is defined analogously for the right-movers. We therefore conclude that n = n̄. Furthermore, as was noted above, n is always a non-negative integer after the GSO-projection. We can use eq. (<ref>) to solve for j as[We have taken here the positive square root since j > 0 for unitarity.]

j = (1/2)(1 + √((2j'+1)² + 4k(n + h^T))) .

Note that for fixed n, the Virasoro levels of the physical states satisfy N, N', N'' ≤ n + ν, as follows from (<ref>), and similarly in the barred sector. Since each excitation mode can raise the J_0^3 eigenvalue by at most one (and since each fermionic ψ^±_{-1/2} mode can be applied at most once), we conclude that in the NS sector the J_0^3 eigenvalue m of the physical states will lie between j-n-1 ≤ m ≤ j+n+1, while in the R sector it will lie between j-n-1/2 ≤ m ≤ j+n+1/2. This implies that the spacetime states labeled by n have spin s bounded as |s| ≤ 2n+2.
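To illustrate this, the on-shell value of j and the resulting spacetime quantum numbers (using the energy and spin assignments spelled out in the next paragraph) can be tabulated in a few lines; this is a sketch of ours, not part of the derivation:

```python
import math

def spin_j(k, n, jprime=0.0, hT=0.0):
    """Positive root of -j(j-1)/k + j'(j'+1)/k + h^T + n = 0."""
    return 0.5 * (1.0 + math.sqrt((2 * jprime + 1) ** 2 + 4 * k * (n + hT)))

def diamond(k, n, jprime=0.0, hT=0.0):
    """Spacetime (E, s) of |j+r-n-1> (x) |j+rbar-n-1>, 0 <= r, rbar <= 2n+2,
    using E = 2j + r + rbar - 2n - 2 and s = r - rbar (given below)."""
    j = spin_j(k, n, jprime, hT)
    return sorted({(2 * j + r + rb - 2 * n - 2, r - rb)
                   for r in range(2 * n + 3) for rb in range(2 * n + 3)})

# n = 0 reproduces the supergravity window |s| <= 2 with j = 1:
assert spin_j(6, 0) == 1.0
assert max(abs(s) for _, s in diamond(6, 0)) == 2
```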
More explicitly, the relevant states are of the form[From what we have said so far, it is not yet clear that all these states will indeed be physical, but this will turn out to be the case, see the discussion below in Section <ref>. Furthermore, some of these states will appear with higher multiplicity. For the arguments of the next section it is however enough to know that only these charges can appear among the physical states.]

|j + r - n - 1⟩ ⊗ |j + r̄ - n - 1⟩ , with 0 ≤ r, r̄ ≤ 2n+2 ,

where r and r̄ are non-negative integers in the NS sector, and positive half-integers in the R sector — these parameters are simply related to (ℓ, ℓ̄), see the paragraph above (<ref>), by a shift in order to make them non-negative. The spacetime energy and spin of these states is then given by

E = 2j + r + r̄ - 2n - 2 , s = r - r̄ ,

which in particular implies E = s + 2(j + r̄ - n - 1).

Finally, it is worth pointing out how the AdS_3 × S^3 × T^4 supergravity spectrum is obtained from the worldsheet description. The supergravity states all arise for n=0, which leads to fields of spin |s| ≤ 2. Furthermore, this condition restricts the excitation levels as N, N', N'' ≤ ν, and it follows that the supergravity spectrum is obtained from the level-1/2 descendants in the NS sector, as well as the R ground states. Crucially, from (<ref>) (with no momentum in the T^4 directions) we deduce that the 𝔰𝔩(2,ℝ) and 𝔰𝔲(2) spins are related by

SUGRA: j = j' + 1 ,

so that j is now an integer or half-integer (with j ≥ 1). We have explicitly checked that the corresponding physical states precisely reproduce the supergravity spectrum, as derived in e.g. <cit.>. In particular, one finds that the j'=0 (j=1) sector contains the (massless) graviton supermultiplet, while the representations with j' > 0 give rise to a tower of massive BPS multiplets.[Indeed, it is easy to see from (<ref>) that chiral (E=s) states in the n=0 sector can only exist for j=1 (and r̄=0).]

§ NO MASSLESS HIGHER SPIN STATES FROM SHORT STRINGS

With these preparations at hand, we now want to analyse whether the string spectrum possesses massless higher spin states, at least for some value of the level. As we shall show in this section, this is not the case for the short strings coming from the unflowed (discrete) representations. Recall first the standard holographic relation between the mass of an AdS_3 (bulk) excitation and the conformal dimension E and spin s of the dual operator in the boundary 2d CFT <cit.>,

m_bulk² = (E - |s|)(E + |s| - 2) ,

where E = h + h̄ and s = h - h̄ in the usual CFT notation. As expected, massless higher spin fields are dual to conserved currents of dimension greater than two, which in the present context satisfy E = |s| (with |s| > 2). Hence, massless higher spin states are characterised by the property that either the J_0^3 eigenvalue or the J̄_0^3 eigenvalue vanishes.

Let us concentrate, for concreteness, on the case J̄_0^3 = 0. Then it follows from (<ref>) that we need to have

conserved current ⇒ j = n + 1 - r̄ .

Then the on-shell condition (<ref>) implies that

k = (n - r̄ - j')(n + 1 - r̄ + j') / (n + h^T) .

As discussed above, the case n=0 corresponds to (supergravity) states that have |s| ≤ 2 and are therefore not of higher spin. We may therefore assume that n ≥ 1. Our strategy will be to show that unitarity implies n + r̄ ≤ 1, contradicting the assumption n ≥ 1, except for n=1 and r̄=0. The latter case is then excluded by the stronger MO-bound (or by noticing that the relevant state is null). First, from (<ref>) we note that the unitarity bound j ≥ 0 implies n + 1 - r̄ ≥ 0. Since j' ≥ 0 by definition, from (<ref>) we find that positivity of k requires

conserved current + (k > 0) ⟹ n - r̄ > j' ≥ 0 .

Next, we use the unitarity bound (<ref>), which translates for j = n + 1 - r̄ into

n - r̄ ≤ (n - r̄ - j')(n + 1 - r̄ + j') / (2(n + h^T)) .

Together with (<ref>), this requirement is equivalent to

h^T + j'(j'+1)/(2(n - r̄)) ≤ (1 - n - r̄)/2 .

Since the quantity on the left-hand side is greater than or equal to zero (recall that n - r̄ > 0 and h^T ≥ 0, j' ≥ 0 by unitarity), we conclude n + r̄ ≤ 1. Finally, for n=1 and r̄=0 we have j=2, and hence from (<ref>), k ≤ 2, which is only compatible with unitarity for k=2 (and incompatible with the stronger MO-bound (<ref>) even in that case). Actually, the corresponding state J̄^-_{-1} ψ̄^-_{-1/2} |j=2⟩ is null at k=2, as has to be the case since it saturates the unitarity bound. Summarizing, we have shown that the only conserved currents that exist in the unflowed discrete representations appear in the supergravity spectrum (n=0), and thus have spin s ≤ 2.
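This conclusion can also be confirmed by brute force. The following sketch (ours) scans the conserved-current condition over a grid of quantum numbers, setting h^T = 0, which is the weakest case since increasing h^T only lowers k; it recovers exactly the boundary case excluded above:

```python
from fractions import Fraction as F

hits = []
for n in range(1, 30):                       # n >= 1: beyond supergravity
    for rbar in [F(i, 2) for i in range(0, 2 * n + 5)]:
        for jp in [F(i, 2) for i in range(0, 8)]:
            if n - rbar <= jp:               # k > 0 needs n - rbar > j' >= 0
                continue
            # level forced by j = n + 1 - rbar (with h^T = 0):
            k = (n - rbar - jp) * (n + 1 - rbar + jp) / n
            j = n + 1 - rbar
            if k > 0 and j <= (k + 2) / 2:   # unitarity bound
                hits.append((n, rbar, jp, k))

print(hits)   # -> only the null boundary case (n, rbar, j', k) = (1, 0, 0, 2)
```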
Our analysis holds for all values of the level k > 0; thus, among the WZW backgrounds there is no radius at which the theory develops a higher spin symmetry from the short string spectrum. This is in line with the arguments of the Introduction, see eq. (<ref>). It is also in accord with the results of <cit.>, where evidence was found that the symmetric orbifold point (which exhibits a large higher spin symmetry) is dual to a background with R-R flux. The long string sector (which is described by the spectrally flowed representations) will be discussed in Section <ref>. As we shall explain there, for k=1 a stringy tower of higher spin fields appears from the spectrally flowed continuous representations, mirroring the bosonic analysis of <cit.>. Since these massless higher spin fields arise from long strings, they describe a qualitatively different higher spin symmetry from the usual tensionless limit <cit.>.

§ REGGE TRAJECTORIES AND THEIR 𝒩=4 STRUCTURE

Next we want to identify the leading Regge trajectory states in the string spectrum and compare them to the W_∞ symmetry that was found in <cit.>. In order to identify the leading (and subleading) Regge trajectory states, we first need to study the actual physical states in more detail. In this section we concentrate again on the states from the unflowed discrete representations; the spectrally flowed representations will be discussed in Section <ref>.

§.§ General discrete spectrum

Recall from our discussion in Section <ref> that physical states in a representation built from an AdS_3 ground state labeled by j take the form (<ref>), with the corresponding spacetime energy and spin given by (<ref>). We now want to show that, for all choices of r, r̄ in 0 ≤ r, r̄ ≤ 2n+2, physical states with these quantum numbers exist. In addition, we want to determine their multiplicities.

Let us start with some general comments about the string spectrum. One should expect that the physical states are obtained by applying eight transverse oscillators to the ground states — of the ten oscillators, one linear combination is eliminated by the Virasoro condition, and a second one leads to spurious states, i.e., gauge degrees of freedom. In the current context, it is natural to take the light-cone directions to be a linear combination of the time-like AdS_3 direction and one direction on the T^4. The transverse (physical) excitations then correspond to the ± modes from AdS_3, all three oscillators from the S^3 factor, and three of the four oscillators from the T^4.
Thus the physical descendants of the ground states of the chiral NS and R sectors are expected to be counted by the following characters — here j and j' label the spins of the 𝔰𝔩(2,ℝ) and 𝔰𝔲(2) ground state representations (taken with respect to the decoupled currents), respectively[As far as we are aware, this formula was first written down in <cit.>, generalizing the corresponding bosonic formula from <cit.> and building on <cit.>. These formulae are correct for sufficiently large values of k, for which there are no non-trivial null-vectors.]:

χ^NS(q,z,y) = q^{h(j)+h'(j')+h^T} · (y^j/(1-y)) · ((z^{j'+1} - z^{-j'})/(z-1)) × ∏_{n=1}^∞ [(1+yq^{n-1/2})(1+y^{-1}q^{n-1/2})(1+zq^{n-1/2})(1+z^{-1}q^{n-1/2})(1+q^{n-1/2})^4] / [(1-yq^n)(1-y^{-1}q^n)(1-zq^n)(1-z^{-1}q^n)(1-q^n)^4] ,

χ^R(q,z,y) = 2 q^{h(j)+h'(j')+h^T} · (y^j(y^{1/2}+y^{-1/2})/(1-y)) · ((z^{j'+1} - z^{-j'})(z^{1/2}+z^{-1/2})/(z-1)) × ∏_{n=1}^∞ [(1+yq^n)(1+y^{-1}q^n)(1+zq^n)(1+z^{-1}q^n)(1+q^n)^4] / [(1-yq^n)(1-y^{-1}q^n)(1-zq^n)(1-z^{-1}q^n)(1-q^n)^4] ,

where h^T is the ground state conformal dimension of the T^4 theory, while for the 𝔰𝔩(2,ℝ) and 𝔰𝔲(2) factors we have

h(j) = -j(j-1)/k , h'(j') = j'(j'+1)/k .

Here y and z are the chemical potentials with respect to 𝔰𝔩(2,ℝ) and 𝔰𝔲(2), respectively, and we have used that the corresponding characters are of the form

χ_j(y) = y^j/(1-y) , χ_{j'}(z) = ∑_{m'=-j'}^{j'} z^{m'} = (z^{j'+1} - z^{-j'})/(z-1) .

Furthermore, q keeps track of the total Virasoro eigenvalue, which has to equal q^ν for the actual physical states, see eq. (<ref>). (We are here describing one chiral sector; the results for left- and right-movers then have to be combined.) The first line in each of (<ref>)–(<ref>) accounts for the contribution of the ground state representations, while the second line describes the contributions of the non-zero oscillators. The overall multiplicity of 2 in the R-sector reflects the multiplicity that remains after the GSO projection, see the discussion after eq. (<ref>). We have checked this prediction in some detail (by solving the physical state conditions explicitly, at least for some low-lying states), and we have found complete agreement. We should mention, though, that there are some subtleties with the counting for j=1; this is discussed in more detail in Appendix <ref>.

We note that this formula in particular implies that, for all 0 ≤ r ≤ 2n+2, physical states with these quantum numbers exist. In order to see this, we solve for j (in terms of n, j' and h^T) using eq. (<ref>); the overall power of q^ν then comes from taking the term with q^{n+ν} from the oscillator product in the second line. In the NS sector, r=0 corresponds to the situation where the J^3_0 eigenvalue is j-n-1. This can be achieved by taking from the numerator the term y^{-1}q^{1/2}, as well as n powers of y^{-1}q from the geometric series expansion of the denominator term (1-y^{-1}q). The corresponding state is thus of the form

|j-n-1⟩ = (𝒥_{-1}^-)^n ψ_{-1/2}^- |j⟩ .

Similarly, the case r=2n+2 corresponds to having J^3_0 eigenvalue j+n+1, in which case the relevant powers are yq^{1/2} from the numerator, and n powers of yq from the geometric series expansion of the denominator term (1-yq). Schematically, the corresponding state is thus of the form

|j+n+1⟩ = [(𝒥_{-1}^+)^n ψ_{-1/2}^+ |j⟩ + ⋯] ,

where the dots stand for additional terms that make it a lowest weight state with respect to the 𝔰𝔩(2,ℝ) algebra. In either case it is easy to see that these representations appear with multiplicity one — these are the `extremal' cases that can only be obtained in one way.
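The extremal multiplicities can be cross-checked by a direct count of the charged oscillators: only the 𝔰𝔩(2,ℝ) ± modes shift the y-grading, so the neutral S^3 and T^4 modes may be ignored for this purpose. The following combinatorial sketch is ours and only illustrates the counting:

```python
from collections import Counter

CUT = 3   # track oscillator levels up to CUT + 1/2 (stored as twice the level)
TOP = 2 * CUT + 1

# charged NS oscillators: (twice_level, charge, is_fermion)
modes = []
for n in range(1, CUT + 2):
    modes += [(2 * n, +1, False), (2 * n, -1, False),        # J^±_{-n}
              (2 * n - 1, +1, True), (2 * n - 1, -1, True)]  # ψ^±_{-(n-1/2)}

states = Counter({(0, 0): 1})   # (twice_level, charge) -> multiplicity
for lv, ch, ferm in modes:
    new = Counter(states)
    occ = 1
    while occ * lv <= TOP and (occ == 1 or not ferm):
        for (l, c), mult in states.items():
            if l + occ * lv <= TOP:
                new[(l + occ * lv, c + occ * ch)] += mult
        occ += 1
    states = new

# the extremal charge ±(n+1) at level n + 1/2 is reached in exactly one way:
for n in range(CUT + 1):
    print(n, states[(2 * n + 1, n + 1)], states[(2 * n + 1, -(n + 1))])  # 1, 1
```

Reaching charge ±(n+1) at level n+1/2 forces the unique combination ψ^±_{-1/2}(𝒥^±_{-1})^n, in agreement with the argument above.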
On the other hand, the intermediate cases 0 < r < 2n+2 can be obtained in more than one way, but from the above analysis it is clear that all of these terms will indeed arise. Incidentally, we should note that it follows from the explicit formula that (apart from the overall y^j/(1-y) term) the partition function is symmetric under y ↔ y^{-1}. As a consequence, the multiplicities of the representations corresponding to r and 2n+2-r will be the same.

Combining left- and right-movers, the full spacetime spectrum (in terms of energy and spin) forms a diamond in the (E,s) plane for fixed n, depicted in Figure <ref>. Here the corner points have multiplicity one, but the other points have higher multiplicity. On general grounds it is clear that we must be able to organise the spectrum in terms of (small) 𝒩=(4,4) representations, see Appendix <ref> for a brief review of their structure. In the (E,s) plane, 𝒩=(4,4) multiplets form small diamonds with edges spanning two units of energy and two units of spin.

For example, the right-most vertex of the diamond in Figure <ref>, which is characterised by (r,r̄) = (2n+2, 0), has multiplicity one (since both r and r̄ take their extremal values), and corresponds to the chiral states with h = j+n+1 and h̄ = j-n-1. This state is then the top (h = h_0+2) component of the left-moving long N=4 multiplet whose bottom component has h_0 = j+n-1 and transforms in the representation m of 𝔰𝔲(2), where m = 2j'+1, see Table <ref> of Appendix <ref>. Similarly, with respect to the right-movers, the state is the bottom component of a similar N=4 multiplet with h̄_0 = j-n-1. The relevant states in the full multiplet then give rise to states in the dashed diamond in Figure <ref>. (Here we have also included the R sector states that are needed to complete the multiplets.)

Once the states that sit in this multiplet have been accounted for, we look at the remaining states and proceed iteratively. For example, the `extremal' R sector states that contribute to this multiplet have h = j+n+1/2 and/or h̄ = j-n-1/2. Concentrating on the first case, it follows from (<ref>) that there will be 8m states of this form, transforming as 4·(m+1) and 4·(m-1) — one factor of 2 is the overall factor in eq. (<ref>), while the other factor of 2 comes from the fact that we can either use one fermionic (-1) mode in the R-sector or none. Furthermore, the two different representations come from tensoring with the spin-1/2 representation described by the factor (z^{1/2} + z^{-1/2}) in the first line. Two copies of each of these two representations are part of the long N=4 multiplet, see Table <ref>, while the other two will generate two pairs of new N=(4,4) multiplets, whose bottom components will transform as (m+1) and (m-1), respectively. (The second dot along the r̄=0 edge in Figure <ref> represents states in these multiplets.) Proceeding in this manner, we find that the multiplicity of the N=4 multiplets along the r̄=0 edge (i.e., only considering states whose bottom component has h̄_0 = j-n-1) is as described in Table <ref>. For future reference, in Table <ref> we also give the multiplicity of the N=4 multiplets along the r̄=0 edge for j'=0, i.e., m=1 — in this case, only δm ≥ 0 is possible and some of the multiplicities are reduced.

§.§ Leading Regge trajectory

Having discussed the general structure of the discrete string spectrum, we can now identify the states on the leading Regge trajectory. These are the states that should have the lowest energy for a given spin, together with their N=(4,4) descendants.
We want to argue that they are precisely described by the dashed diamond in Figure <ref>, where n takes the values n = 0, 1, 2, etc. First we note that the leading Regge trajectory states will be associated to states with j'=0 (and h^T=0) — for fixed n, as well as (r,r̄), the choice of j' and h^T only enters via j as defined in (<ref>), and j in turn only contributes to E, but not to s, see eq. (<ref>). Choosing j' or h^T to be non-trivial increases j and hence E, but does not modify the spin s. The states of lowest energy (for fixed spin) therefore arise for j' = h^T = 0. Similarly, by construction, the states with lowest energy for given spin lie (for fixed n and hence j — recall that j' = h^T = 0) on the lower edges of the representation diamond. Without loss of generality, focusing on positive helicity modes, we can then restrict our attention to the r̄ = 0 edge. The energies of these states satisfy the linear dispersion relation

E_Regge(s) = s - 2n - 1 + √(1 + 4kn) , for 2n < s ≤ 2n+2 ,

where the inequality 2n < s arises because the lowest energy state with spin s = 2n is obtained from the diamond corresponding to ñ = n-1 — this is a consequence of the inequality

√(1 + 4kn) ≥ 2 + √(1 + 4k(n-1)) ,

which, after squaring twice, is equivalent to (k+2) ≥ 4n; in turn this follows from the unitarity bound, see eq. (<ref>), using the expression for j from eq. (<ref>) with j' = h^T = 0. The conformal dimensions of the leading Regge trajectory states for small values of the spin are plotted (for k=200) in Figure <ref>.

As a side remark, we should note that the states with dispersion relation (<ref>) would formally become chiral if k took the value k = n+1. However, this choice is not allowed by the unitarity bound, except for the supergravity states with n=0 and the special solution n=1 that was already discussed after eq. (<ref>). [The latter case corresponds to n=1 and k=2 and is incompatible with the MO bound (<ref>).]

Since r̄ = 0, the right-moving states are the `extremal' states with N̄' = N̄'' = 0, so that the right-moving (barred) 𝔰𝔲(2) representation is always trivial. Furthermore, the leading term with r = 2n+2 is also trivial with respect to the left-moving 𝔰𝔲(2) algebra, and it is the top state of an N=(4,4) multiplet with 𝔰𝔲(2) ⊕ 𝔰𝔲(2) quantum numbers (1,1). We now want to argue that the leading Regge trajectory consists of just the first multiplet of Table <ref> for each n. This is natural since there is only a single multiplet with these quantum numbers; its top component is obtained by tensoring the representations (<ref>) and (<ref>) for the left- and right-moving sector, respectively. (The terms with r < 2n+2, on the other hand, lead in general to N=(4,4) multiplets for which the left-moving 𝔰𝔲(2) spin is not trivial.) Furthermore, these states always define the states with smallest energy for the given spin, independently of k.[The situation is in general more complicated for the other states, see the discussion in the next subsection.] In order to see this, it is enough to show that E(n, s=2n+2) < E(p, s=2n+2) for any p > n — note that a state with this spin can only appear for p ≥ n. Without loss of generality it is enough to concentrate on the case p = n+1, since any p > n can be reached iteratively in this manner. Furthermore, we may assume that the relevant state in the p-th (i.e., (n+1)-th) diamond sits on the lower edge, i.e., has energy described by eq. (<ref>). Then the inequality we need to prove is simply

√(1 + 4kp) ≥ √(1 + 4k(p-1)) + 2 ,

which upon squaring both sides (after subtracting 2) leads to 1 + k ≥ √(1 + 4kp). This identity is now a direct consequence of the unitarity bound, see eq. (<ref>) with n = p.
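Numerically the minimisation is immediate; here is a small sketch (ours), using the edge dispersion relation with j' = h^T = 0:

```python
import math

def E_edge(k, n, s):
    """Energy on the rbar = 0 edge of the (n, j'=h^T=0) diamond:
    E = s - 2n - 1 + sqrt(1 + 4*k*n)."""
    return s - 2 * n - 1 + math.sqrt(1 + 4 * k * n)

k = 200
for n in range(0, 10):            # unitarity requires 4n <= k + 2
    s = 2 * n + 2
    competitors = [E_edge(k, p, s) for p in range(n, n + 5)]
    assert min(competitors) == competitors[0]   # the p = n diamond wins
    print(f"s = {s:2d}:  E_Regge = {competitors[0]:.4f}")
```

For n = 0 this reproduces the chiral graviton value E = s = 2, and for all higher spins the p = n diamond indeed provides the minimum.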
We note that these states carry exactly the same quantum numbers as the generators of the even spin N=4 W_∞ algebra that was analysed in <cit.>. This is the minimal version of the N=4 higher spin symmetry, and it has a nice AdS_3 dual that is also discussed in some detail in <cit.>. On the other hand, while the string spectrum also contains multiplets with odd integer spin, there does not seem to be any natural candidate for which of the 7 singlet multiplets at spin 2n+1, see Table <ref>, should be added to the even spin W_∞ algebra in order to generate the full N=4 W_∞ algebra of <cit.> (or the extended algebra of <cit.>, where also the charged bilinears are included in the higher spin algebra). Incidentally, the fact that the leading Regge trajectory should only be identified with the fields (or multiplets) of even spin is also expected from bosonic closed string theory in flat space. There the states of the leading Regge trajectory are associated to the worldsheet states of the form

α_{-1}^{μ_1} ⋯ α_{-1}^{μ_n} ᾱ_{-1}^{ν_1} ⋯ ᾱ_{-1}^{ν_n} |p⟩ ,

where the level-matching condition requires that the number of transverse oscillators on the left and right is the same. As a consequence, this only leads to fields of even spin s = 2n.

§.§ Subleading 𝒩=4 trajectory

Unlike the leading Regge trajectory, the identification of the subleading trajectory turns out to be somewhat less clean, and in particular it depends on the value of k. For 2n < s ≤ 2n+2 there are a priori three kinds of states competing to be the subleading trajectory. These are: the states in the interior of the (n, j'=0) diamond; the states on the edge of the (n, j'=1/2) diamond; and the states on the edge of the (n+1, j'=0) diamond. Denoting the energies of these three sets by E_n^*(j'=0), E_n(j'=1/2), and E_{n+1}(j'=0), respectively, we find that their explicit values for the relevant spins are as given in Table <ref>. It turns out that among these states, the one with the smallest energy is

E_{n+1}(j'=0) if k ≤ k^*_n = 7/4 + 2n + √(4n² + 7n + 4) ,
E_n(j'=1/2) if k ≥ k^*_n = 7/4 + 2n + √(4n² + 7n + 4) .

A few remarks are in order. First, the states of smallest energy always lie on the edge of some diamond. Second, for fixed k, the choice between the two diamonds is n- and therefore s-dependent. Nevertheless, the existence of a minimum value for n (which is n=0) implies that we can make the states of eq. (<ref>) the subleading ones for all possible values of n, and thus for all higher spin states, by tuning k to be small enough. This happens for k ≤ 15/4. Note that since k must be an integer, this allows for the two solutions k=2 and k=3.

We should also note that the 𝔰𝔲(2) ⊕ 𝔰𝔲(2) quantum numbers are different for these two sets of competing representations, as detailed in Table <ref>. In particular, the states of the second column are non-trivial with respect to the right-moving 𝔰𝔲(2) algebra. Unfortunately, there does not seem to be any particularly simple pattern among these representations, and they do not seem to be naturally in correspondence with the subleading Regge trajectory of <cit.>.[We should mention that among the above states one should expect that some do not become chiral at the symmetric orbifold point, i.e., do not belong to the stringy higher spin symmetry, but remain massive even at that point in moduli space.] Obviously, there is no fundamental reason why such a correspondence should exist — the two descriptions refer to different points in moduli space.
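Although the explicit energies appear only in the table, they follow from the edge formula above: the first interior point of the (n, j'=0) diamond costs two extra units of energy, while for j'=1/2 the spin becomes j = (1 + 2√(1+kn))/2. Under these assumptions (the energy expressions below are our reconstruction from the formulas above), the crossover at k*_n can be verified numerically:

```python
import math

def E_interior(k, n, s):   # first interior point of (n, j'=0): rbar = 1
    return s - 2 * n + 1 + math.sqrt(1 + 4 * k * n)

def E_half(k, n, s):       # rbar = 0 edge of the (n, j'=1/2) diamond
    return s - 2 * n - 1 + 2 * math.sqrt(1 + k * n)

def E_next(k, n, s):       # rbar = 0 edge of the (n+1, j'=0) diamond
    return s - 2 * (n + 1) - 1 + math.sqrt(1 + 4 * k * (n + 1))

def k_star(n):             # claimed crossover k*_n
    return 7 / 4 + 2 * n + math.sqrt(4 * n**2 + 7 * n + 4)

for n in range(0, 5):
    ks, s = k_star(n), 2 * n + 1
    # just below k*_n the (n+1, j'=0) edge wins; just above, the (n, j'=1/2) edge:
    assert E_next(ks - 0.1, n, s) < E_half(ks - 0.1, n, s) < E_interior(ks - 0.1, n, s)
    assert E_half(ks + 0.1, n, s) < E_next(ks + 0.1, n, s)
    print(f"n = {n}: k*_n = {ks:.4f}")
```

In particular n = 0 gives k*_0 = 15/4, reproducing the bound quoted above.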
§.§ AdS_3 × S^3 × K3

One may hope that the situation becomes a bit simpler for the case of AdS_3 × S^3 × K3, since then the spectrum contains fewer states. Let us consider the case when K3 can be described as a T^4/ℤ_2 orbifold. This ℤ_2 orbifold can be easily implemented in the worldsheet description, since it simply acts as a minus sign on each of the four bosonic and fermionic oscillators associated to the T^4. For each n, the surviving states organise themselves into N=4 multiplets as shown in Table <ref>. Unfortunately, there is still a fairly large multiplicity (namely 3 = 5 - 2 — the subtraction of 2 arises as in the passage from Table <ref> to Table <ref>) for the first odd spin `leading' Regge trajectory states, and again the most natural interpretation is that the leading Regge trajectory has just the even spin multiplets, as before. Similarly, the situation for the subleading Regge trajectory also does not seem to improve significantly.

§ SPECTRALLY FLOWED SECTORS AND LONG STRINGS

In the previous section we have identified the low-lying states of the leading Regge trajectory that originate from the unflowed discrete representations. More specifically, these states have spin s = 2n+2, with n = 0, 1, 2, …, bounded by n ≤ (k+2)/4, where the upper bound comes from eq. (<ref>), which in turn is a consequence of the unitarity bound (<ref>). If we impose the slightly stronger MO-bound (<ref>), we find instead s < k/2 + 2 - 1/(2k). In either case, we only get finitely many states in this manner.

In this section we look for the remaining states of the leading Regge trajectory. As we shall see, they arise from the continuous representations describing long strings. This also makes intuitive sense, since the leading Regge trajectory states correspond to longer and longer strings that get closer to the boundary of AdS_3, until they finally merge with the continuum of long strings.

We start by describing the rest of the full string spectrum, which corresponds to the spectrally flowed continuous and discrete representations. For each class of representation we then identify the states of lowest mass for a given spin. We will see that the states from the unflowed discrete representations are indeed the lightest states of a given spin for small spin; furthermore, for s ≈ k/2, the spectrally flowed continuous representations take over.

The spectrally flowed representations are obtained from the discrete and continuous representations upon applying the automorphism of 𝔰𝔩(2,ℝ) defined by

J̃^3_n = J^3_n + (k/2) ω δ_{n,0} , J̃^±_n = J^±_{n∓ω} , ψ̃^3_r = ψ^3_r , ψ̃^±_r = ψ^±_{r∓ω} , L̃_n = L_n - ω J^3_n - (k/4) ω² δ_{n,0} .

Here ω is an integer, and the same automorphism (with the same value of ω) is applied to both left- and right-movers. We characterise the spectrally flowed representations by using the same underlying vector space, but letting the J̃^a_m modes act on it (rather than the J^a_m modes), and similarly for the fermions. In order for the resulting representation to decompose into lowest weight representations of 𝔰𝔩(2,ℝ) we need, in particular, that J̃^-_0 |j,m⟩ = J^-_ω |j,m⟩ = 0. Thus we take ω to be a positive integer (or zero). Note that the J̃^3_0 eigenvalue of the states is then

J̃^3_0 |j,m⟩ = (m + (k/2)ω) |j,m⟩ ,

where m is the actual J^3_0 eigenvalue of the state in question. Since ω is positive, this eigenvalue is always positive (at least on the ground states).
Using the explicit form of L̃_0 (cf. (<ref>)), the on-shell condition in the NS-sector and for general ω reads

-j(j-1)/k - ω(m + (k/4)ω) + N_tot = 1/2 ,

where N_tot = N + N' + N'' is the total excitation number, and we have set j' = h^T = 0. A similar condition also applies to the right-movers, and we have the level-matching condition

N_tot - N̄_tot = ω s ,

since (<ref>) involves the term ω m. Finally, we need to impose the GSO-projection. It is natural to assume — and this leads to the correct BPS spectrum of <cit.> — that the correct GSO projection is the one that takes the same form in all representations, including the spectrally flowed ones. In terms of the original vector space description we are using here, this then translates into the condition

N_tot + (ω+1)/2 ∈ ℕ ,

since we only flow in the 𝔰𝔩(2,ℝ) factor, and hence the fermion number of the ground state changes by one for each unit of spectral flow, see also <cit.>.[There is a similar spectral flow automorphism for 𝔰𝔲(2), but since this does not lead to new representations, it is a matter of convenience whether we include this flow or not. In our context, it is simpler not to flow in this sector.]

§.§ Spectrally flowed representations — the continuous case

According to <cit.>, the spectrum of string theory on AdS_3 contains representations whose ground states transform in continuous representations of 𝔰𝔩(2,ℝ). The states of the continuous representation C_j^α are labelled by |j,m,α⟩, where j = 1/2 + ip with p real, and m takes all values of the form m = α + ℤ. These representations are neither highest nor lowest weight with respect to 𝔰𝔩(2,ℝ). Their Casimir is given by

𝒞_2 |j,m,α⟩ = -j(j-1) |j,m,α⟩ , with -j(j-1) = 1/4 + p² .

In particular, they can therefore only satisfy the mass-shell condition (<ref>) in the NS-sector with N_tot = N̄_tot = 0. Since this is incompatible with the GSO projection, there are no physical states in the unflowed continuous representations.[This should better be so, since otherwise the dual CFT would have an unbounded L_0 spectrum.] However, after spectral flow, these representations give rise to interesting physical states, as we shall now describe.

Because of (<ref>) (applied to |j,m,α⟩), the spectrally flowed continuous representations are lowest weight with respect to 𝔰𝔩(2,ℝ) if ω > 0. Plugging j = 1/2 + ip into the mass-shell condition (<ref>) and solving for m leads to

m = -kω/4 + (1/ω)(N_tot - 1/2 + p²/k + 1/(4k)) ,

and similarly for m̄. (Remember that j, i.e., p, and ω are the same for both left- and right-moving representations.) Using (<ref>), the spacetime energy of the state is then

E_cont = m + kω/2 + m̄ + kω/2 = kω/2 + (1/ω)(N_tot + N̄_tot - 1 + 2p²/k + 1/(2k)) .

It is clear that the lowest energy for any given quantum numbers is achieved by putting p = 0, as also expected classically. Furthermore, using level-matching (<ref>) to solve for the spin, we can rewrite this as

E_cont(s) = s + kω/2 + (1/ω)(2N̄_tot + 1/(2k) - 1) .

Thus the minimum energy for a given s is achieved by putting N̄_tot = 0 if ω is odd, or N̄_tot = 1/2 if ω is even (as required by the GSO projection, eq. (<ref>)). For any k > √6/2 - 1 ≃ 0.22 and any even ω ≥ 2, the continuous (ω-1) sector then has lower energy. Hence, in what follows we focus on the case of odd ω. Since ω s = N_tot (with ω ≥ 1), setting N̄_tot = 0 is only valid for s ≥ 0. (Analogously, the lowest energy for s ≤ 0 is achieved by putting N_tot = 0, so that ω s = -N̄_tot.) Thus we conclude that the spectrally flowed continuous representations contain states with the dispersion relation

E_cont(s) = |s| + kω/2 + (1/ω)(1/(2k) - 1) (ω odd)

for any spin s. This energy is a growing function of ω ∈ ℕ for any k > -1 + √2 ≃ 0.41. Since there is no constraint on the set of spins, the lowest energy for any spin is achieved by putting ω = 1, for which we then find

E_cont(s) = |s| + k/2 + 1/(2k) - 1 .
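The minimisation over ω is easily made explicit; in the following sketch (ours) the dispersion relation is evaluated for both parities of ω, with N̄_tot fixed by the GSO projection as just discussed:

```python
def E_cont(k, s, w, Nbar_tot):
    """E = |s| + k*w/2 + (2*Nbar_tot + 1/(2k) - 1)/w for the flowed
    continuous representations at p = 0."""
    return abs(s) + k * w / 2 + (2 * Nbar_tot + 1 / (2 * k) - 1) / w

def E_min(k, s, w):
    # GSO forces Nbar_tot = 0 for odd w and Nbar_tot = 1/2 for even w
    return E_cont(k, s, w, 0.0 if w % 2 else 0.5)

k, s = 20, 6
energies = {w: E_min(k, s, w) for w in range(1, 8)}
print(energies)                       # monotonically increasing in w
assert min(energies, key=energies.get) == 1   # w = 1 is optimal
# an even w is always beaten by w - 1 once k > sqrt(6)/2 - 1:
assert E_min(k, s, 2) > E_min(k, s, 1)
```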
§.§.§ Massless higher spin fields for k=1

We should note that for k=1, (<ref>) describes massless higher spin states. For this value (k = ω = 1), the mass-shell condition (<ref>) (and its right-moving analogue) become simply

m = N_tot - 1/2 , m̄ = N̄_tot - 1/2 ,

while the conformal dimensions of the dual CFT are

h = m + 1/2 = N_tot , h̄ = m̄ + 1/2 = N̄_tot ,

and the GSO-projection (<ref>) now requires that both N_tot and N̄_tot are integers. Since there are eight transverse oscillators, there is a stringy growth of massless higher spin fields. This phenomenon is the exact analogue of what was found in the bosonic case, where the corresponding phenomenon happens for k_bos = 1 + 2 = 3 in <cit.>. (In particular, k=1 is also the minimum value for which the massless graviton that arises from the discrete representation with j=1 is allowed by the MO-bound (<ref>).)

The theory with k=1 describes strings scattering off a single NS5-brane; while this is formally an ill-defined theory — the level of the bosonic 𝔰𝔲(2) algebra is negative, k'_bos = 1 - 2 = -1, although this conclusion could be avoided if we considered, instead of AdS_3 × S^3 × T^4, the background AdS_3 × S^3 × S^3 × S^1, see the comments at the bottom of page 2 — it was argued in <cit.> that at least some aspects of the theory still make sense. Note that the gap of the spectrum was predicted in <cit.> to be

Δ_0 = (k-1)²/(4k) ,

see eq. (4.26) of <cit.>, and this is reproduced exactly (as in the bosonic case of <cit.>) in our analysis from the mass-shell condition (<ref>) for p=0, ω=1 and N_tot = 0. It was furthermore argued there that the dual CFT should correspond to a symmetric orbifold associated to ℝ^4 × T^4. (Here the ℝ^4 arises from the S^3 together with the radial direction of AdS_3, which becomes effectively non-compact in this limit.) This is nicely in line with our finding of the massless higher spin fields. In particular, given that the symmetric orbifold involves an 8-dimensional free theory, the single-particle generators have the same growth behaviour as found above in (<ref>), see <cit.>. On the other hand, this tensionless limit is different in nature from what one expects at the symmetric orbifold of T^4, see <cit.> for a discussion of this point. In particular, one may expect that these massless higher spin states get lifted upon switching on R-R flux. It would be interesting to confirm this using the techniques of <cit.>.

§.§ Spectrally flowed representations — the discrete case

For discrete flowed representations, it follows from the analysis of <cit.> that j satisfies the MO-bound

1/2 < j < (k+1)/2 .

Writing m = j + r, and solving the on-shell condition (<ref>) for j, we find

j = (1/2)[1 - kω + √(1 + 4k(N_tot - rω - (ω+1)/2))] .

In addition, we must impose the constraints r ≥ -N for ω odd, and r ≥ -N - 1/2 for ω even. We should note that ω s = N_tot - N̄_tot and s = r - r̄, so that N_tot - rω = N̄_tot - r̄ω. Then j is indeed the same for the left- and right-moving sectors. We first note that j is a decreasing function of r.
The unitarity constraint j ≥ 0, together with the fact that there is a minimum value that r can take as a function of N, leads to the existence of a minimum value for the levels N_tot, N̄_tot in a given ω sector, which is of the form

ω odd: N_tot, N̄_tot ≥ (kω² + 2)/(4ω + 4) ,
ω even: N_tot, N̄_tot ≥ (kω² + 2)/(4ω + 4) - 2ω/(4ω + 4) .

Let us then define

N_min(k,ω) = (kω² + 2)/(4ω + 4) + b if ω is odd , N_min(k,ω) = (kω² + 2)/(4ω + 4) - 2ω/(4ω + 4) + b if ω is even ,

where 0 ≤ b < 1 is a bookkeeping device that rounds up to the closest integer if ω is odd, or to the closest half-integer if ω is even, as required by the GSO-projection of eq. (<ref>). Note that there is, on the other hand, no upper bound on the levels. Furthermore, N_min(k,ω) is an increasing function of both k and, more importantly, ω. This means that the lowest allowed levels appear for ω = 1.

As for the spectrally flowed continuous representations (analysed in Section <ref>), the lowest energy states are those for which either N_tot or N̄_tot (or both) attain their lowest possible value. Let us first fix N̄_tot = N_min(k,ω). Then by level-matching the spin s is positive or zero. Furthermore, we fix r̄ = -N_min(k,ω) - 1/2 if ω is even, and r̄ = -N_min(k,ω) if ω is odd. (Note that this is only possible if N̄' = N̄'' = 0, i.e., the internal CFT is not excited; this condition will lead to the analogue of the even spin lowest energy states in the unflowed case.) This uniquely determines j to be

j = (1/2)(1 - kω + √(4bk(ω+1) + (kω - 1)²))

for both even and odd ω. We see that when b = 0 we indeed get j = 0, as expected. The energy is then given by

E_disc(s) = s + kω/2 + (1/ω)(-2j(j-1)/k + 2N_min(k,ω) - 1) = s + √(4bk(ω+1) + (kω-1)²) - 2b - ω(kω-2)/(2(ω+1))

for odd ω, with a similar expression for even ω. As for the continuous case, for any even ω ≥ 2 the discrete (ω-1)-sector has lower energy. Hence we restrict our attention to the case of odd ω, with energy given by (<ref>). We find that the lowest energy states of positive helicity s are then given by

ω even: N_tot = N_min(k,ω) + ωs , m = j - N_min(k,ω) - 1/2 + s , N̄_tot = N_min(k,ω) , m̄ = j - N_min(k,ω) - 1/2 ;
ω odd: N_tot = N_min(k,ω) + ωs , m = j - N_min(k,ω) + s , N̄_tot = N_min(k,ω) , m̄ = j - N_min(k,ω) .

We should note that the left-moving states do not saturate the value of m for the given value of N_tot, and thus may have multiplicities greater than one. The right-moving states, on the other hand, saturate it, and hence will be unique. Finally, in the above we have assumed s > 0; the corresponding lightest states with negative helicity are obtained upon exchanging the roles of left- and right-movers.

Even though it is perhaps not evident, the leading energy (<ref>) is an increasing function of ω, a fact which we have confirmed numerically. Therefore, the lowest energy states for any given spin come from the ω = 1 sector. Their energy is

E_disc(s) = |s| + √(8bk + (k-1)²) - 2b - k/4 + 1/2 .

We emphasise that the parameter b introduced above is uniquely fixed by k and does not depend on the spin s. As a result, this dispersion relation is linear in the spin s. The same is also true for the states from the spectrally flowed continuous representations, see eq. (<ref>).
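For ω = 1 these formulas can be evaluated directly; the sketch below (ours; the ceiling implements the rounding encoded in b) computes E_disc and anticipates the comparison of the next subsection:

```python
import math
from fractions import Fraction as F

def b_and_Nmin(k, w=1):
    """For odd w, N_min = (k*w**2 + 2)/(4*w + 4) rounded up to the next
    integer (GSO); b in [0, 1) is the amount of rounding."""
    raw = F(k * w * w + 2, 4 * w + 4)
    Nmin = -((-raw.numerator) // raw.denominator)   # ceiling of raw
    return float(Nmin - raw), Nmin

def E_disc(k, s):   # omega = 1 flowed discrete states
    b, _ = b_and_Nmin(k)
    return abs(s) + math.sqrt(8 * b * k + (k - 1) ** 2) - 2 * b - k / 4 + 0.5

def E_cont(k, s):   # omega = 1 flowed continuous states, p = 0
    return abs(s) + k / 2 + 1 / (2 * k) - 1

for k in range(2, 12):
    assert all(E_cont(k, s) < E_disc(k, s) for s in range(0, 30))
print("flowed continuous states lie below the flowed discrete ones")
```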
This behaviour ties in nicely with the observation of <cit.>, see also <cit.>, about the behaviour of classical strings for large spin s. In particular, it is argued in <cit.>, see eq. (6.0.8) there, that the log s correction term to the linear dispersion relation vanishes for pure NS-NS flux.[These claims are somewhat in tension with the analysis of <cit.>, where a (log s)² correction term was found for the case of pure NS-NS flux. Our findings seem to support the conclusion of <cit.>. We thank Arkady Tseytlin for drawing our attention to the work of <cit.>.]

§.§ Comparison of the different sectors

We can now compare the dispersion relations coming from the different sectors. Recall from the analysis of Section <ref>, see eq. (<ref>), that the dispersion relation for the leading Regge trajectory states from the unflowed discrete representations is

E_Regge(s) = 1 + √(1 + 2k(s-2)) ,

where we have set s = 2n+2 in eq. (<ref>) — this corresponds to the top component of the corresponding N=4 multiplet — and expressed n in terms of s. These states are only available for spins up to s < k/2 + 2 - 1/(2k), see eq. (<ref>). (Note that, for k ≥ 2, the right-hand side of this inequality is not an integer and hence cannot be attained.[For k=1 it gives s = ±2.]) It is easy to see that, in this range of spins, E_Regge(s) from eq. (<ref>) is smaller than both E_cont(s) from eq. (<ref>) and E_disc(s) from eq. (<ref>); in fact, precisely at the (unphysical) value s = k/2 + 2 - 1/(2k) we have

E_Regge(s = k/2 + 2 - 1/(2k)) = 1 + k = E_cont(s = k/2 + 2 - 1/(2k)) .

Thus the states from the unflowed discrete representations describe the leading Regge states for spins s < k/2 + 2 - 1/(2k).
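A minimal numerical check of this matching (ours):

```python
import math

def E_regge(k, s):   # unflowed discrete, leading Regge trajectory
    return 1 + math.sqrt(1 + 2 * k * (s - 2))

def E_cont(k, s):    # flowed continuous, omega = 1, p = 0
    return abs(s) + k / 2 + 1 / (2 * k) - 1

for k in [3, 20, 200]:
    s_star = k / 2 + 2 - 1 / (2 * k)
    # both branches meet at E = 1 + k at the (unphysical) crossover spin:
    assert abs(E_regge(k, s_star) - (1 + k)) < 1e-9
    assert abs(E_cont(k, s_star) - (1 + k)) < 1e-9
    # below the crossover the short-string branch has the lower energy:
    assert E_regge(k, s_star - 1) < E_cont(k, s_star - 1)
    print(f"k = {k}: crossover at s* = {s_star:.3f}, E = {1 + k}")
```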
For larger spins, on the other hand, the relevant states must come from the spectrally flowed representations. As we have seen in Sections <ref> and <ref>, for both the spectrally flowed continuous and discrete representations the lowest energy states always appear for ω = 1, and in either case they give rise to states of arbitrarily high spin. Comparing the relevant dispersion relations, it is fairly straightforward to see from eqs. (<ref>) and (<ref>) that

E_cont(s) < E_disc(s)

for all spins. Thus it follows that the remaining states of the leading Regge trajectory are part of the spectrally flowed continuous representations. In order to get a sense of the qualitative picture, we have plotted the relevant states in Figure <ref> for one representative value of k (k=20).

The picture that emerges is thus that the lowest energy states arise in the unflowed discrete sector for as high a spin as allowed by the MO-bound. Once the MO-bound is reached, the continuous ω = 1 representations take over; this makes intuitive sense, since the leading Regge trajectory states come from highly spinning strings that get longer and longer as the spin is increased. As they hit the boundary of AdS_3, they merge into the continuum of long strings <cit.>, and thus the leading Regge trajectory states of higher spin arise from that part of the spectrum, i.e., from the spectrally flowed continuous representations.

§ CONCLUSIONS

In this paper we have studied string theory on the background AdS_3 × S^3 × T^4 with pure NS-NS flux, using the WZW model worldsheet description, with a view to exhibiting the emergence of a higher spin symmetry in the tensionless (small level) limit. As we have shown in Section 3, this part of the moduli space does not contain a conventional tensionless point where small string excitations become massless and give rise to a Vasiliev higher spin theory. However, for k=1, a stringy massless higher spin spectrum emerges from the spectrally flowed continuous representations (corresponding to long strings). These higher spin fields are of a different nature than those arising in the symmetric orbifold of T^4 <cit.>, but they realise nicely some of the predictions of <cit.>. For generic values of k we could also identify quite convincingly the states that make up the leading Regge trajectory, and we saw that they comprise the spectrum of a Vasiliev higher spin theory with N=4 superconformal symmetry.

It would be very interesting to try to repeat the above analysis using the worldsheet description of <cit.> that allows for the description of the theory with pure R-R flux (where one would expect the actual higher spin symmetry to emerge, see the arguments of the Introduction). Among other things, one should expect that the massless higher spin fields that arise from the long string spectrum at k=1 acquire a mass, since the long string spectrum is believed to be a specific feature of the pure NS-NS background. On the other hand, the leading Regge trajectory states should become massless as one flows to the theory with pure R-R flux. It would be very interesting to confirm these expectations. It would also be very interesting to analyse to which extent the leading Regge trajectory forms a closed subsector of string theory in the tensionless limit.

It is a pleasure to thank Marco Baggio, Shouvik Datta, Jan de Boer, Lorenz Eberhardt, Diego Hofman, Chris Hull, Wei Li, Charlotte Sleight, Massimo Taronna, and in particular Rajesh Gopakumar, for useful discussions. MRG thanks the Galileo Galilei Institute for Theoretical Physics (GGI) for the hospitality and INFN for partial support during the completion of this work, within the program "New Developments in AdS3/CFT2 Holography". JIJ thanks the University of Amsterdam String Theory Group, and the Nordic Institute for Theoretical Physics (NORDITA) within the program "Black Holes and Emergent Spacetime", for their kind hospitality during the course of this work. This research was also (partly) supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation.

§ SUPERSYMMETRIC CURRENT ALGEBRAS: CONVENTIONS AND USEFUL FORMULAE

The N=1 superconformal WZW model is generated by a bosonic Kac-Moody algebra 𝔤, coupled to fermions in the adjoint representation:

J^a(z)J^b(w) ∼ i f^{ab}_c J^c(w)/(z-w) + k η^{ab}/(z-w)² , J^a(z)ψ^b(w) ∼ i f^{ab}_c ψ^c(w)/(z-w) , ψ^a(z)ψ^b(w) ∼ k η^{ab}/(z-w) .

In terms of modes,

[J^a_m, J^b_n] = i f^{ab}_c J^c_{m+n} + k m η^{ab} δ_{m,-n} , [J^a_m, ψ^b_r] = i f^{ab}_c ψ^c_{m+r} , {ψ^a_r, ψ^b_s} = k η^{ab} δ_{r,-s} .

The structure constants satisfy f^{ab}_c = -f^{ba}_c by definition; moreover, f^{abc} can be chosen to be completely antisymmetric (for a semi-simple Lie algebra). In addition, the Jacobi identity reads

f^{ab}_d f^{dc}_e + f^{ca}_d f^{db}_e + f^{bc}_d f^{da}_e = 0 ,

and the dual Coxeter number h^∨ may be defined through

f^d_{bc} f^{abc} = 2h^∨ η^{ad} .

We can decouple the fermions from the bosons by defining the shifted currents 𝒥^a as

𝒥^a = J^a + (i/2k) f^a_{bc}(ψ^b ψ^c) ,

where the round brackets denote normal ordering. Because of the antisymmetry of the structure constants, this is trivial in the NS-sector, but there is a subtle contribution in the R-sector, since we have
— we follow the conventions of <cit.>, see in particular eq. (3.1.43) there —

(ψ^a ψ^b)_p = (1/2)[ψ^a_0, ψ^b_p] + ∑_{m ≤ -1} ψ^a_m ψ^b_{p-m} - ∑_{m ≥ 1} ψ^b_{p-m} ψ^a_m .

With this definition, the OPEs become

𝒥^a(z) 𝒥^b(w) ∼ i f^{ab}_c 𝒥^c(w)/(z-w) + κ η^{ab}/(z-w)² , 𝒥^a(z) ψ^b(w) ∼ 0 ,

where κ ≡ k - h^∨, and h^∨ is the dual Coxeter number defined in (<ref>). Equivalently, in terms of modes we find

[𝒥^a_m, 𝒥^b_n] = i f^{ab}_c 𝒥^c_{m+n} + κ m η^{ab} δ_{m,-n} , [𝒥^a_m, ψ^b_r] = 0 .

Hence, the algebra is isomorphic to the direct (commuting) sum of a bosonic affine algebra at the shifted level κ, and dim(𝔤) free fermions. Using the above shifted currents we obtain the stress tensor and a dimension-3/2 supercurrent via the Sugawara construction,

T = (1/2k) η_{ab}[(𝒥^a 𝒥^b) - (ψ^a ∂ψ^b)] , G = (1/k)[η_{ab} 𝒥^a ψ^b - (i f_{abc}/6k)(ψ^a ψ^b ψ^c)] ,

where the round brackets denote normal-ordering, and the triple product is defined recursively, i.e., (ψ^a ψ^b ψ^c) ≡ (ψ^a(ψ^b ψ^c)).[Because of the total antisymmetry of the structure constants, normal ordering is again trivial in the NS-sector, but there is a contribution coming from the commutator term in (<ref>).] These fields satisfy the N=1 superconformal algebra

T(z)T(w) ∼ (c/2)/(z-w)⁴ + 2T(w)/(z-w)² + ∂T(w)/(z-w) ,
T(z)G(w) ∼ (3/2)G(w)/(z-w)² + ∂G(w)/(z-w) ,
G(z)G(w) ∼ (2c/3)/(z-w)³ + 2T(w)/(z-w) ,

with central charge

c = dim(𝔤)((k - h^∨)/k + 1/2) = dim(𝔤)(κ/(κ + h^∨) + 1/2) .

In terms of modes, in the NS sector we have

[L_m, L_n] = (m-n)L_{m+n} + (c/12)m(m²-1)δ_{m,-n} , [L_n, G_r] = (n/2 - r)G_{n+r} , {G_r, G_s} = 2L_{r+s} + (c/3)(r² - 1/4)δ_{r,-s} .

Due to the non-trivial R-sector normal ordering term, see eq. (<ref>), for the above definition of the normal ordered modes we obtain in the R-sector instead the algebra

[L_m, L_n] = (m-n)L_{m+n} + m (dim(𝔤)/8)(m² - (2h^∨/3k)(m²-1))δ_{m,-n} , [L_n, G_r] = (n/2 - r)G_{r+n} , {G_r, G_s} = 2L_{r+s} + (c/3)s² δ_{r,-s} + (h^∨ dim(𝔤)/12k) δ_{r,-s} .

Note in particular that the Virasoro commutator [L_m, L_n] does not have the standard form. If so desired, this can be rectified by shifting the zero mode of the stress tensor as

L_n → L_n^R = L_n + (dim(𝔤)/16) δ_{n,0} ,

so that the superconformal algebra then reads

[L^R_m, L^R_n] = (m-n)L^R_{m+n} + (c/12)m(m²-1)δ_{m,-n} , [L^R_n, G_r] = (n/2 - r)G_{n+r} , {G_r, G_s} = 2L^R_{r+s} + (c/3)(r² - 1/4)δ_{r,-s} ,

which is exactly as in the NS sector. The price one pays for this redefinition is that the fermionic Ramond vacuum |0⟩_R (which is annihilated by all the positive modes of the fermions) is no longer annihilated by L_0, but rather satisfies

L_0^R |0⟩_R = (dim(𝔤)/16) |0⟩_R

instead. Finally, it is interesting to note that if we simultaneously consider supersymmetric 𝔰𝔩(2,ℝ) and 𝔰𝔲(2) algebras (which have h^∨ equal and opposite in sign), as appropriate to AdS_3 × S^3, the h^∨ terms in (<ref>)–(<ref>) drop out of the algebra of the total currents.
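As a simple consistency check of these formulae, the worldsheet central charges of the AdS_3 × S^3 × T^4 theory should add up to the critical value 15 for any k; a two-line verification (ours, using sympy):

```python
import sympy as sp

k = sp.symbols('k', positive=True)

def c_susy_wzw(dim_g, h_vee, level):
    """c = dim(g) * ((k - h_vee)/k + 1/2) for the N=1 WZW model."""
    return dim_g * ((level - h_vee) / level + sp.Rational(1, 2))

c_sl2 = c_susy_wzw(3, -2, k)            # sl(2,R): h_vee = -2
c_su2 = c_susy_wzw(3, +2, k)            # su(2):   h_vee = +2
c_t4 = 4 * (1 + sp.Rational(1, 2))      # four free bosons + four free fermions

assert sp.simplify(c_sl2 + c_su2 + c_t4 - 15) == 0   # criticality for all k
```

The k-dependent pieces cancel between the two current algebras, which is the mode-algebra statement made in the last sentence above.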
§ LOW MOMENTA SUBTLETIES

The only subtlety concerning the counting of physical states given by eqs. (<ref>) and (<ref>) arises for j=1 and j'=0. Then the mass-shell condition requires that the physical states appear at excitation number N = 1/2, and in particular, the state that is excited by ψ^-_{-1/2} has j=0. For j=0 the general character formula for 𝔰𝔩(2,ℝ) representations (<ref>) breaks down, since the L_{-1} = J^+_0 descendant of the state with j=0 is null. As a consequence, we have the identity

y^0/(1-y) = 1 + y/(1-y) = χ_{j=0} + χ_{j=1} ,

i.e., the character on the left-hand side is actually not an irreducible character, but rather splits up into the contributions of two different irreducible 𝔰𝔩(2,ℝ) representations (namely the ones with j=0 and j=1).

This phenomenon also has a microscopic origin: for j=1 and j'=0 there are three 𝔰𝔩(2,ℝ) descendants that define physical states, namely

|1/2; 2, 2⟩ = ψ^-_{-1/2}|1,3⟩ - 6ψ^3_{-1/2}|1,2⟩ + 6ψ^+_{-1/2}|1,1⟩ ,
|1/2; 1, 1⟩ = ψ^-_{-1/2}|1,2⟩ - 2ψ^3_{-1/2}|1,1⟩ ,
|1/2; 0, 0⟩ = ψ^-_{-1/2}|1,1⟩ ,

and one easily confirms that all three of them are physical. (In fact, |1/2; 1, 1⟩ = J^+_0 |1/2; 0, 0⟩, reflecting the null-vector structure mentioned before.) Since one of these states (|1/2; 0, 0⟩) is the vacuum state of the spacetime CFT, these states describe the chiral states of the spacetime CFT. (Recall that the above discussion is chiral; the vacuum state for the right-movers, say, then appears together with the above states.) In particular, the j=h=2 state is the Virasoro field, and at j=h=1 we get, in addition to the state |1/2; 1, 1⟩, six j=h=1 states from the excitations associated to the S^3 × T^4 directions. Altogether, they give rise to an 𝔰𝔲(2) current algebra (coming from the S^3 excitations), as well as four h=1 bosons — these are the familiar bosons of the T^4. (Similarly, in the R-sector we get four h=1/2 fields and four h=3/2 fields — they describe the four free fermions of the T^4, as well as the four supercharges of the N=4 superconformal algebra.)

§ THE STRUCTURE OF SMALL 𝒩=4 MULTIPLETS

The (small) N=4 superconformal algebra is generated by a Virasoro algebra with modes L_n, an affine 𝔰𝔲(2) algebra with modes T^a_n (where a = ±, 3), as well as four supercharges Q^{i±}, where i = 1, 2. (The supercharges transform as two doublets with respect to the 𝔰𝔲(2) algebra.) The commutation relations are of the form

[L_m, L_n] = (m-n)L_{m+n} + (c/12)m(m²-1)δ_{m+n,0}
[L_m, T^a_n] = -n T^a_{m+n}
[L_m, Q^{i±}_n] = (m/2 - n)Q^{i±}_{m+n}
[T^3_m, T^±_n] = ±T^±_{m+n}
[T^3_m, T^3_n] = (c/12) m δ_{m+n,0}
[T^+_m, T^-_n] = 2T^3_{m+n} + (c/6) m δ_{m+n,0}
[T^3_m, Q^{i±}_n] = ±(1/2) Q^{i±}_{m+n} , [T^±_m, Q^{i±}_n] = 0
[T^±_m, Q^{1∓}_n] = -Q^{1±}_{m+n} , [T^±_m, Q^{2∓}_n] = Q^{2±}_{m+n}
{Q^{i±}_m, Q^{i±}_n} = {Q^{i±}_m, Q^{i∓}_n} = 0
{Q^{1±}_m, Q^{2∓}_n} = 2L_{m+n} ± 2(m-n)T^3_{m+n} + (c/3)(m² - 1/4)δ_{m+n,0}
{Q^{1±}_m, Q^{2±}_n} = -2(m-n)T^±_{m+n} ,

where the central charge is c = 6k. We denote the corresponding right-moving generators by L̄_n, Q̄^{i±}, and T̄^a_n.

With these preparations we can now describe the structure of the supermultiplets. We shall first concentrate on the chiral (say left-moving) algebra and describe the representations of the (small) 𝒩=4 superconformal algebra, keeping track of the 𝔰𝔲(2) quantum numbers. Following the notation of <cit.>, we label 𝔰𝔲(2) representations by their dimension m = 2j' + 1. A generic (long) multiplet then has the form given in Table <ref>. For small values of m, namely m=1 and m=2, there are various shortenings; more specifically, we find for m=1 and m=2 the shorter multiplets listed in Table <ref>.

If we denote the highest weight state by |h; j'⟩, the 1/4 BPS bound for the above algebra is obtained by demanding that Q^{i+}_{-1/2}|h; j'⟩ = 0 for one choice of i ∈ {1,2}. Using that

{Q^{2-}_{1/2}, Q^{1+}_{-1/2}} = {Q^{1-}_{1/2}, Q^{2+}_{-1/2}} = 2(L_0 - T^3_0) ,

we see that every 1/4 BPS state is automatically 1/2 BPS, i.e., if Q^{i+}_{-1/2}|h; j'⟩ = 0 for one choice of i, it is actually zero for both i = 1, 2. Furthermore, the BPS bound is explicitly

BPS bound: (L_0 - T^3_0)|h; j'⟩ = 0 ⟹ h = j' .

The resulting short multiplet is described in Table <ref>.
As usual, for small values of j' (or m) there are further shortenings; in particular, for m=1 the whole multiplet consists of just the vacuum itself (h = j' = 0), while for m=2 the whole multiplet truncates to 2 (h=1/2) ⊕ 2·1 (h=1). The corresponding multiplets of the full (4,4) theory are then obtained by tensoring these chiral multiplets together. For example, if both the left- and right-moving multiplets are long (corresponding to m and m̄), the total number of states is 256 × m · m̄.

References

[Gross:1988ue] D.J. Gross, "High-Energy Symmetries of String Theory", Phys. Rev. Lett. 60 (1988) 1229.
[Witten:1988zd] E. Witten, "Space-time and Topological Orbifolds", Phys. Rev. Lett. 61 (1988) 670.
[Moore:1993qe] G.W. Moore, "Symmetries and symmetry breaking in string theory", in International Workshop on Supersymmetry and Unification of Fundamental Interactions (SUSY 93), Boston, Massachusetts, March 29-April 1, 1993, pp. 540-552; hep-th/9308052.
[Sundborg:2000wp] B. Sundborg, "Stringy gravity, interacting tensionless strings and massless higher spins", Nucl. Phys. Proc. Suppl. 102 (2001) 113-119; hep-th/0103247.
[Mikhailov:2002bp] A. Mikhailov, "Notes on higher spin symmetries", hep-th/0201019.
[Vasiliev:2003ev] M.A. Vasiliev, "Nonlinear equations for symmetric massless higher spin fields in (A)dS(d)", Phys. Lett. B567 (2003) 139-151; hep-th/0304049.
[Gaberdiel:2014cha] M.R. Gaberdiel and R. Gopakumar, "Higher Spins & Strings", JHEP 11 (2014) 044; arXiv:1406.6103.
[David:2002wn] J.R. David, G. Mandal and S.R. Wadia, "Microscopic formulation of black holes in string theory", Phys. Rept. 369 (2002) 549-686; hep-th/0203048.
[Gaberdiel:2013vva] M.R. Gaberdiel and R. Gopakumar, "Large 𝒩=4 Holography", JHEP 09 (2013) 036; arXiv:1305.4181.
[Prokushkin:1998bq] S.F. Prokushkin and M.A. Vasiliev, "Higher spin gauge interactions for massive matter fields in 3-D AdS space-time", Nucl. Phys. B545 (1999) 385; hep-th/9806236.
[Prokushkin:1998vn] S.F. Prokushkin and M.A. Vasiliev, "3-d higher spin gauge theories with matter", hep-th/9812242.
[Gaberdiel:2010pz] M.R. Gaberdiel and R. Gopakumar, "An AdS_3 Dual for Minimal Model CFTs", Phys. Rev. D83 (2011) 066007; arXiv:1011.2986.
[Gaberdiel:2012uj] M.R. Gaberdiel and R. Gopakumar, "Minimal Model Holography", J. Phys. A46 (2013) 214002; arXiv:1207.6697.
[Beccaria:2014jra] M. Beccaria, C. Candu and M.R. Gaberdiel, "The large N=4 superconformal W_∞ algebra", JHEP 06 (2014) 117; arXiv:1404.1694.
[Gaberdiel:2014yla] M.R. Gaberdiel and C. Peng, "The symmetry of large 𝒩=4 holography", JHEP 05 (2014) 152; arXiv:1403.2396.
[Maldacena:2000hw] J.M. Maldacena and H. Ooguri, "Strings in AdS_3 and the SL(2,ℝ) WZW model. The Spectrum", J. Math. Phys. 42 (2001) 2929-2960; hep-th/0001053.
[Maldacena:2000kv] J.M. Maldacena, H. Ooguri and J. Son, "Strings in AdS_3 and the SL(2,ℝ) WZW model. Euclidean black hole", J. Math. Phys. 42 (2001) 2961-2977; hep-th/0005183.
[Maldacena:2001km] J.M. Maldacena and H. Ooguri, "Strings in AdS_3 and the SL(2,ℝ) WZW model. Part 3. Correlation functions", Phys. Rev. D65 (2002) 106006; hep-th/0111180.
[Isberg:1993av] J. Isberg, U. Lindstrom, B. Sundborg and G. Theodoridis, "Classical and quantized tensionless strings", Nucl. Phys. B411 (1994) 122-156; hep-th/9307108.
[Sagnotti:2003qa] A. Sagnotti and M. Tsulaia, "On higher spins and the tensionless limit of string theory", Nucl. Phys. B682 (2004) 83-116; hep-th/0311257.
[Bagchi:2016yyf] A. Bagchi, S. Chakrabortty and P. Parekh, "Tensionless Superstrings: View from the Worldsheet", JHEP 10 (2016) 113; arXiv:1606.09628.
[Bagchi:2015nca] A. Bagchi, S. Chakrabortty and P. Parekh, "Tensionless Strings from Worldsheet Symmetries", JHEP 01 (2016) 158; arXiv:1507.04361.
[Seiberg:1999xz] N. Seiberg and E. Witten, "The D1/D5 system and singular CFT", JHEP 04 (1999) 017; hep-th/9903224.
[Elitzur:1998mm] S. Elitzur, O. Feinerman, A. Giveon and D. Tsabar, "String theory on AdS_3 × S^3 × S^3 × S^1", Phys. Lett. B449 (1999) 180-186; hep-th/9811245.
[Eberhardt:2017fsi] L. Eberhardt, M.R. Gaberdiel, R. Gopakumar and W. Li, "BPS spectrum on AdS_3 × S^3 × S^3 × S^1", JHEP 03 (2017) 124; arXiv:1701.03552.
[GGH] M.R. Gaberdiel, R. Gopakumar and C. Hull, "Stringy AdS_3 from the Worldsheet", to appear.
[Ferreira:2017zbh] K. Ferreira, "Even spin 𝒩=4 holography", arXiv:1702.02641.
[Berkovits:1999im] N. Berkovits, C. Vafa and E. Witten, "Conformal field theory of AdS background with Ramond-Ramond flux", JHEP 03 (1999) 018; hep-th/9902098.
[Giveon:1998ns] A. Giveon, D. Kutasov and N. Seiberg, "Comments on string theory on AdS_3", Adv. Theor. Math. Phys. 2 (1998) 733-780; hep-th/9806194.
[Pakman:2003cu] A. Pakman, "Unitarity of supersymmetric SL(2,ℝ)/U(1) and no ghost theorem for fermionic strings in AdS(3) × N", JHEP 01 (2003) 077; hep-th/0301110.
[Israel:2003ry] D. Israel, C. Kounnas and M.P. Petropoulos, "Superstrings on NS5 backgrounds, deformed AdS(3) and holography", JHEP 10 (2003) 028; hep-th/0306053.
[Raju:2007uj] S. Raju, "Counting giant gravitons in AdS(3)", Phys. Rev. D77 (2008) 046012; arXiv:0709.1171.
[Ivanov:1994ec] I.T. Ivanov, B.-b. Kim and M. Rocek, "Complex structures, duality and WZW models in extended superspace", Phys. Lett. B343 (1995) 133-143; hep-th/9406063.
[Gerigk:2012lqa] S. Gerigk, "Superstring theory on AdS_3 × S^3 and the PSL(2|2) WZW model", PhD thesis, ETH Zürich, 2012.
[Hwang:1990aq] S. Hwang, "No ghost theorem for SU(1,1) string theories", Nucl. Phys. B354 (1991) 100-112.
[Evans:1998qu] J.M. Evans, M.R. Gaberdiel and M.J. Perry, "The no ghost theorem for AdS_3 and the stringy exclusion principle", Nucl. Phys. B535 (1998) 152-170; hep-th/9806024.
[Lindstrom:2003mg] U. Lindstrom and M. Zabzine, "Tensionless strings, WZW models at critical level and massless higher spin fields", Phys. Lett. B584 (2004) 178-185; hep-th/0305098.
[Bakas:2004jq] I. Bakas and C. Sourdis, "On the tensionless limit of gauged WZW models", JHEP 06 (2004) 049; hep-th/0403165.
[Bershadsky:1999hk] M. Bershadsky, S. Zhukov and A. Vaintrob, "PSL(n|n) sigma model as a conformal field theory", Nucl. Phys. B559 (1999) 205-234; hep-th/9902180.
[Gotz:2006qp] G. Gotz, T. Quella and V. Schomerus, "The WZNW model on PSU(1,1|2)", JHEP 03 (2007) 003; hep-th/0610070.
[Balog:1988jb] J. Balog, L. O'Raifeartaigh, P. Forgacs and A. Wipf, "Consistency of String Propagation on Curved Space-Times: An SU(1,1) Based Counterexample", Nucl. Phys. B325 (1989) 225.
[Petropoulos:1989fc] P.M.S. Petropoulos, "Comments on SU(1,1) string theory", Phys. Lett. B236 (1990) 151-158.
[Dixon:1989cg] L.J. Dixon, M.E. Peskin and J.D. Lykken, "N=2 Superconformal Symmetry and SO(2,1) Current Algebra", Nucl. Phys. B325 (1989) 329-355.
[deBoer:1998ip] J. de Boer, "Six-dimensional supergravity on S^3 × AdS_3 and 2-D conformal field theory", Nucl. Phys. B548 (1999) 139-166; hep-th/9806104.
[Aharony:1999ti] O. Aharony, S.S. Gubser, J.M. Maldacena, H. Ooguri and Y. Oz, "Large N field theories, string theory and gravity", Phys. Rept. 323 (2000) 183-386; hep-th/9905111.
[Gaberdiel:2015uca] M.R. Gaberdiel, C. Peng and I.G. Zadeh, "Higgsing the stringy higher spin symmetry", JHEP 10 (2015) 101; arXiv:1506.02045.
[Argurio:2000tb] R. Argurio, A. Giveon and A. Shomer, "Superstrings on AdS(3) and symmetric products", JHEP 12 (2000) 003; hep-th/0009242.
[Giribet:2007wp] G. Giribet, A. Pakman and L. Rastelli, "Spectral Flow in AdS(3)/CFT(2)", JHEP 06 (2008) 013; arXiv:0712.3046.
[Gaberdiel:2015wpo] M.R. Gaberdiel and R. Gopakumar, "String Theory as a Higher Spin Theory", arXiv:1512.07237.
[Gaberdiel:2015mra] M.R. Gaberdiel and R. Gopakumar, "Stringy Symmetries and the Higher Spin Square", J. Phys. A48 (2015) 185402; arXiv:1501.07236.
[AndrejStepanchuk:2015wsq] A. Stepanchuk, "Aspects of integrability in string sigma-models", PhD thesis, Imperial College London, 2015.
[Banerjee:2015qeq] A. Banerjee and A. Sadhukhan, "Multi-spike strings in AdS_3 with mixed three-form fluxes", JHEP 05 (2016) 083; arXiv:1512.01816.
[David:2014qta] J.R. David and A. Sadhukhan, "Spinning strings and minimal surfaces in AdS_3 with mixed 3-form fluxes", JHEP 10 (2014) 049; arXiv:1405.2687.
[Fuchs:1992nq] J. Fuchs, "Affine Lie algebras and quantum groups: An Introduction, with applications in conformal field theory", Cambridge University Press, 1995.
http://arxiv.org/abs/1704.08667v1
{ "authors": [ "Kevin Ferreira", "Matthias R. Gaberdiel", "Juan I. Jottar" ], "categories": [ "hep-th" ], "primary_category": "hep-th", "published": "20170427172926", "title": "Higher spins on AdS$_{3}$ from the worldsheet" }
firstpage–lastpage Imagingbackscattering in graphene quantum point contacts B. Szafran==========================================================The observation of counter rotation in galaxies (i.e. gas that rotates in the opposite direction to the stellar component or two co-spatial stellar populations with opposite rotation) is becoming more commonplace with modern integral field spectroscopic surveys. In this paper we explore the emergence of counter-rotation (both stellar and gaseous) in S0 galaxies from smoothed-particle hydrodynamics simulations of 1/10 mass ratio minor mergers between a ∼10^10.8 M_⊙ disk galaxy with a bulge-to-total ration of 0.17 and a gas rich companion (gas-to-stellar mass fraction of 5.0). These simulations include a self-consistent treatment of gas dynamics, star formation, the production/destruction of H_2 and dust, and the time evolution of the interstellar radiation field. We explore the effect of retrograde versus prograde obits, gas and bulge mass fractions of the primary galaxy, and orbital parameters of the companion. The key requirement for producing counter rotation in stars or gas in a merger remnant is a retrograde primary, while the relative spin of the companion affects only the radial extent of the accreted gas. We also find that including a significant amount of gas in the primary can prevent the emergence of counter-rotating gas, although accreted stars retain counter-rotation. Bulge mass and orbit have a secondary effect, generally influencing the final distribution of accreted stars and gas within the framework outlined above. In addition to our primary focus of counter-rotating components in galaxies, we also make some predictions regarding the SFRs, H_2 distributions, and dust in minor merger remnants.galaxies: interactions – galaxies: kinematics and dynamics § INTRODUCTION Prior to the 1980's the concept of counter-rotating matter in galaxies, galaxies which contain multiple dynamical components (stars and/or gas) rotating in different directions, was primarily theoretical <cit.>. The firstexample of counter-rotation in an observation was shown by <cit.> who observed an SB0 galaxy to have gas rotating inretrograde orbits relative to the stars. Observation of two counter-rotating stellar components in E7/S0 galaxy NGC 4550 by <cit.> came a few years later. In retrospect, it is not surprising that these discoveries were both made for S0 galaxies as subsequent studies have shown that the percentage of S0 galaxies that exhibit counter-rotation is 20-40% <cit.> compared to < 8% for Sa-Sbc galaxies <cit.>. For a more thorough historical review on the topic of multispin galaxies see <cit.> and <cit.>. Having been established as a concrete reality, counter-rotating components ofgalaxies next required a viable formation mechanism.Another important development occuring contemporaneously to the discovery of such systems was the recognition of the importance of galaxy interactions in the emerging hierarchical model of galaxy evolution <cit.>. A number of numerical simulations have shown some galaxy mergers do indeed produce counter-rotation in galaxies <cit.> and, as they are a cosmological necessity, mergers represent the prime candidate for the origin of counter-rotation. This option is also attractive in that mergers have also been shown as a mechanism that can contribute to the formtion of S0 galaxies <cit.>, which may explain the prevalence of counter-rotation in intermediate morphological types. 
Among studies of the emergence of counter-rotation in simulations only <cit.> and <cit.> demonstrate counter-rotation in minor merger remnants. These simulations are able to preserve the disk of the central galaxy resulting in S0, rather than elliptical, remnants. While the work of <cit.> is focused on the emergence of counter-rotation, the authors tested only three minor merger simulations with only the dark matter content of the accreted satellite varying between simulations. These simulations are also dark matter only, N-body simulations, thus an update using state-of-the-art hydrodynamical simulations is warranted. The more recent work of <cit.> was focused on building gas rings in S0 galaxies through mergers with the emergence of counter-rotation being a secondary trait that is only briefly discussed. Thus, a large parameter space is yet to be explored in regards to the primary factors required to produce counter-rotating components through minor mergers.In this work we perform a series of minor merger simulations using our original chemodynamical code for galaxy evolution <cit.>. Our code represents a major step forward compared to the sticky particle simulations of <cit.>, allowing us to track the complex evolution of different stellar components in our galaxies as well as their multiphase interstellar medium. Another improvement in this study when compared to <cit.> and <cit.> is that we perform a large number of merger simulations with the specific focus of understanding how the initial conditions affect the emergence of counter-rotation. Thus, our study represents the first systematic study of the formation of S0 galaxies hosting counter-rotating gas and/or stars through minor mergers.Our suite of simulations allows us to investigate the effects of the spin orientation of the merging galaxies relative to their orbit, the gas fraction of the primary galaxy, bulge mass of the central galaxy, as well as the orbital parameters themselves. Simulations such as these can help to pin down the statistics of how many galaxy mergers are likely to result in kinematic misalignments easily identifiable by IFS observations. Furthermore, for those simulations resulting in co-rotation of gas and stars, we aim to identify observables that may be strongly indicative of past galaxy interactions in the absence of obvious peculiarities in photometry and/or kinematics. This paper is laid out as follows: In Section <ref> we describe our simulation code and the initial conditions of our merger simulations, in Section <ref> we briefly describe our method of producing two dimensional maps from out simulated data, in Section <ref> we describe the outcomes of our mergers and possible observational signatures, in Section <ref> we provide some discussion and comparison with previous works, and in Section <ref> we briefly summarise our conclusions.§ THE MODEL §.§ A simulation code We employ the simulation code that has been recentlydeveloped in our previous works(, B13; , B14) to investigate the structure and kinematics of gas, dust, and stars in galaxy mergers. Since the details of the code are already given in B13 and B14, we describe them briefly here. Gravitational calculations can be done on GPUs whereas other calculations (e.g., gas dynamics, dust evolution, and hydrodynamics) are performed using CPUs, thus the code can be run on GPU-based clusters. The hydrodynamical part of the code is based on the smoothed-particle hydrodynamics (SPH) method for following the time evolution of gas dynamics in galaxies. 
The code allows us to investigategas dynamics, star formation from gas, H_2 formation on dust grains, formation of dust grains in the stellar winds of supernovae (SNe) and asymptotic giant branch (AGB) stars,time evolution of interstellar radiation field (ISRF), growth and destruction processes of dust in the interstellar medium (ISM), and H_2 photo-dissociation due to far ultra-violet (FUV) light in a self-consistent manner. The new simulation code does not includethe effects of feedback of active galactic nuclei (AGN) on ISM and the growth of supermassive black holes (SMBHs) in galaxies. Since our main purpose is not to investigate the AGN feedback effects onthe formation of counter-rotating gas rings, we considerour adoption of the code for the investigation of gas and dust properties in S0 galaxies is appropriate.§.§ Merger progenitor disk galaxy §.§.§ Basic Description The smaller (`companion') and larger (`primary') galaxies in a minor merger are represented by disk galaxies with the latter also including a central bulge component. The total masses of dark matter halo, stellar disk, gas disk, and bulge of a disk galaxy are denoted as M_ h, M_ s, M_ g, and M_ b, respectively. The gas mass fraction is denoted as f_ g and considered to be a key parameter that determines the final kinematics of gas. The evolution of gas in minor mergers strongly depends onf_ g of the primary and the companion (f_ g, p and f_ g, c, respectively), and accordingly we investigate models with different f_ g, p and f_ g, c. We adopt the density distribution of the NFW halo <cit.> suggested from CDM simulations to describe the initial density profile of dark matter halo in a disk galaxy:ρ(r)=ρ_0/(r/r_ s)(1+r/r_ s)^2,Here,r, ρ_0, and r_ s are the spherical radius,the characteristicdensity of a dark halo,and the scale length of the halo, respectively. The c-parameter (c=r_ vir/r_ s, where r_ vir is the virial radius of a dark matter halo) and r_ vir are chosen appropriately for a given dark halo mass (M_ dm) by using recent predictions from cosmological simulations<cit.>. The bulge of a disk galaxyis represented by the Hernquist density profile with the bulge mass fraction (M_ b/M_ s) as a free parameter ranging from 0 to 1. For the companion galaxy in a minor merger, f_ b is set to be 0 (bulge-lessdisk galaxy) while the primary galaxy, in most cases, contains a non-rotating bulge with f_ b = 0.17. The radial (R) and vertical (Z) density profiles of the stellar disk are assumed to be proportional to exp (-R/R_0) with scale length R_0 = 0.2R_ sand to sech^2 (Z/Z_0) with scale length Z_0 = 0.04R_ s, respectively. The gas disk with a sizeof R_ g has theradial and vertical scale lengths of 0.2R_ g and 0.02R_ g, respectively. In the present model,the exponential disk has R_ s=17.5 kpc for the primary and 5.5 kpc for the companion. The primary and companion galaxies are assumed to have R_ g/R_ s=1 and 3, respectively. In addition to the rotational velocity caused by the gravitational field of the disk, bulge, and dark halo components, the initial radial and azimuthal velocity dispersions are assigned to the disc component according to the epicyclic theory with Toomre's parameter Q = 1.5. The vertical velocity dispersion at a given radius is set to be 0.5 times as large as the radial velocity dispersion at that point.In this work we describe the initial morphology of the primary galaxy as being a late-type. In particular, these simulated galaxies represent the early end of the late-type galaxy sequence (i.e. 
Sa galaxies) for two reasons: first they host large bulges and second the majority begin the simulations free of gas. From an observational perspective, it may be equally valid to describe our primary galaxies as initially having S0 morphologies though this simply reflects the often ambiguous nature of visual classifications for S0-Sa galaxies <cit.>. In simulations, the assertion that a gas-free, disk-like galaxies should all be considered S0 in morphology is not clear cut, with examples of simulated gas-free galaxies hosting spiral arms readily available in the literature <cit.>. For this reason we choose to describe the initial morphologies as late-type, although it can easily be argued that an S0 classification is also appropriate. Regardless, we test here the conditions required such that gas-rich, minor mergers involving massive disks (whether Sa or S0) produce S0 remnants containing counter-rotating gas and/or stars.The initial gravitational softening lengths for dark matter halo, stellar disk, gas disk, and bulge in the central galaxy are set to be 2.1 kpc, 0.2 kpc, 0.08 kpc, and 0.2 kpc, respectively. Those for the dark matter halo, the stellar disk, and the gas disk of the companion galaxy are set to be 0.88 kpc, 0.08 kpc, and 0.08 kpc, respectively. We employ a variable softening length for dark matter and stellar particles residing in dense regions, with a minimum value of 0.08 kpc, matching that of the gas particles. Using these multiple and varying softening lengths for different components, we are able to resolve the 100-1000pc scale distribution of cold neutral and molecular gas in simulated S0s in this present study. The main results of this work focus on the kpc (and greater) scale kinematics of different components of our merger remnants. Furthermore, as described in Section <ref>, we also employ a 2D, 1 kpc Gaussian smoothing kernal while producing kinematics maps of our simulations. Thus, our conclusions based on large scale kinematics should not change were we to rerun our simulations at a much higher resolution (although such simulations may differ significantly on sub-kpc scales). Other properties of the galaxies in our models are chosen to roughly match those observed in the local universe. First, the initial stellar mass of the primary galaxies of log_10(M_*) = 10.85 (M_⊙) is near the peak in stellar mass for S0 galaxies observed in the GAMA survey <cit.>. As we mentioned previously, we consider our primary galaxies to begin as late-type galaxies, however we match their initial mass to those of low z S0's in order that the remnants will have masses appropriate for an S0 galaxy (as there is little mass growth during the simulation). The corresponding mass of the satellite galaxies give a merger mass ratio of 1/10, which falls in the observed range for low redshift minor mergers <cit.>. Finally, the gas mass fractions of the companion galaxies of 0.5 are consistent with observed values for dwarf galaxies from the ALFA survey presented in <cit.>. We note that the sample of <cit.> will be biased towards gas-rich systems, particularly at low stellar mass, thus they likely represent the upper end in gas fraction compared to the bulk population of dwarf galaxies. Regardless, gas-rich galaxies similar to our simulated companions are known to exist and, although likely rare, gas-rich minor mergers similar to our simulations will occur and have been shown to be necessary to reproduce observations <cit.>. 
Following the observed mass-metallicity relation in disk galaxies <cit.>, we allocate a metallicity to each disk galaxy. We do not assume a metallicity gradient in the present study in orderto avoid introducing another model parameters that can hamper theinterpretation of the simulation results. Metallicity-dependent radiative cooling is included, and the initial gas temperature is set to be 10,000 K for the primary and secondary galaxies in the mergers. Gas-to-dust-ratio is a function of metallicity and we assume a dust-to-metal ratio of 0.4 for all models. The details of the model for gas dynamics with dust is given in B13.§.§.§ Possible Limitations We are limited in the exact types of mergers we can study due to the extremely large parameter space represented by the wide variety of progenitor galaxies in minor mergers. First we point out that, although this work represents a step forward towards a systematic analysis of the emergence of counter-rotation in minor merger remnants, we primarily focus on mergers involving gas poor (or gas free) primary galaxies with a fixed bulge-to-total ratio of 0.17 and an extremely gas rich companion with f_g,c = 5.0. We focus on mergers such as these because the large gas content of the merger remnants will be representative of those galaxies hosting the most easily detectable counter-rotating gas disks. While we do include few examples of mergers that vary these basic parameters, we do not explore varying these values in a large range of with a large range of initial conditions (e.g. low f_g,c as well as f_b = 1.0).Recently, <cit.> showed that 50-65% of S0 galaxies in the local universe exhibit stellar bars. Although we find that bars form in a number of our simulations (see Section <ref>), bars are not included in the initial conditions for either the primary or companion galaxy. Bars can significantly influence the angular momentum transfer in galaxies, and thus may have an impact on the way material is accreted during a minor merger. We expect any differences that may be induced by an initial bar will be on sub-galactic scales and will not affect the overall direction of rotation, and by extension our classification of a given component as co- or counter-rotating. Finally, we emphasise that the bulge of the primary galaxy is initially spherical and non-rotating. This is typical of bulges in hydrodynamical simulations, but may not always be the case for observed galaxies that can host flattened and/or rotating bulges <cit.>. Although the relative kinematics of primary and companion stars and gas can have an effect on the final distribution <cit.>, the fact that bulges in our simulations are relatively small with f_b = 0.17 means this can only have a minor effect, if any, on our results. If anything, the presence of a non-rotating component in the primary galaxy would act to confuse signatures of counter-rotation by adding a random velocity component to those stars of the primary galaxy disk. Thus, if the bulges in our primary galaxies are initiated with rotation matching that of the primary disk, the counter-rotation signature would only be enhanced.§.§ Star formationWe adopt the `H_2-dependent' SF recipe (B13) in which SFR is determined by local molecular fraction (f_ H_2) foreach gas particle in the present study. 
A gas particle can be converted into a new star if the following three conditions are met for each particle: (i) the local dynamical time scale is shorter than the sound crossing time scale (mimicking the Jeans instability) , (ii) the local velocity field is identified as being consistent with gravitationally collapsing (i.e., div v<0), and (iii) the local density exceeds a threshold density for star formation (ρ_ th). We also adopt theKennicutt-Schmidt law, which is described asSFR∝ρ_ g^α_ sf <cit.>, where α_ sf is the power-law slope. A reasonable value of α_ sf=1.5 is adopted for all models. The threshold gas density for star formation (ρ_ th) isset to be 1 cm^-3 for all modelsin the present study.Each SN is assumed to eject the feedback energy (E_ sn) of 10^51 erg and 90% and 10% of E_ sn are used for the increase of thermal energy (`thermal feedback') and random motion (`kinematic feedback'), respectively. The thermal energy is usedfor the `adiabatic expansion phase', where each SN can remain adiabatic for a timescale of t_ adi. This timescale is set to be 10^6 yr. A canonical stellar initial mass function (IMF) proposed by <cit.>, which has three different slopes at different mass ranges is adopted. and the IMF is assumed to be fixed at the canonical one. Therefore, chemical evolution, SN feedback effects, and dust formation and evolution is determined by the fixed IMF. §.§ Dust and metals Chemical enrichment through star formation and metal ejection from SNIa, II, and AGB stars is self-consistently included in the chemodynamical simulations. The time evolution of the 11 chemical elements of H, He, C, N, O, Fe, Mg, Ca, Si, S, and Ba is investigated to predictboth chemical abundances and dust properties in the present study, though we do not discuss these. We consider the time delay between the epoch of star formation and thoseof supernova explosions and commencement of AGB phases (i.e., non-instantaneous recycling of chemical elements). We adopt the nucleosynthesis yields of SNe II and Ia from<cit.> and AGB stars from <cit.> in order to estimate chemical yields in the present study.The dust model adopted in the present study isthe same as those in B13 and B14: Thetotal mass of jth component (j=C, O, Mg, Si, S, Ca, and Fe) of dust from kth type of stars (k = I, II, and AGB for SNe Ia, SNe II, and AGB stars, respectively) are derived based on the methods described in B13. Dust can grow through accretion of existing metals onto dust grains with a timescale of τ_g. Dust grains can be destroyed though supernova blast waves in the ISM of galaxies and the destruction process is parameterised by the destruction time scale (τ_ d). We consider the models with τ_ g=0.25 Gyr and τ_ d=0.5 Gyr, and the reason for this selection is discussed in B13.§.§ H_2 formation and dissociation The details of the new model forH_2 formation on dust grains in galaxy-scale simulations have already been provided in B14, therefore, we summarise only briefly here themodel for H_2 formation and dissociation in the present study. The present chemodynamical simulations include bothH_2 formation on dust grains and H_2 dissociation by FUV radiation self-consistently. The temperature (T_ g), hydrogen density (ρ_ H),dust-to-gas ratio (D) of a gas particle and the strength of the FUV radiation field (χ) around the gas particle are calculated at each time step so that the fraction of molecular hydrogen (f_ H_2) for the gas particle can be derived based on the H_2 formation/destruction equilibrium conditions. 
The SEDs of stellar particles around each i-th gas particles(thus ISRF) are firstestimated from ages and metallicities of the stars by using stellar population synthesis codes for a given IMF <cit.>. Then the strength of the FUV-part of the ISRF is estimated from the SEDs so that χ_i can be derived for the i-th gas particle. Based on χ_i, D_i, andρ_ H,i of the gas particle, we can derive f_ H_2,i (See Fig. 1 in B13a). Thus, each gas particle has f_ H_2,i, metallicity ([Fe/H]), and gas density, and the total dust, metal, and H_2 masses are estimated from these properties. §.§ Galaxy mergersBoth companion and the primary galaxiesin a galaxy merger are represented by the disk galaxy model described above. Although the mass-ratio of the companion to the primary canbea free parameter represented by m_2, we investigateonly for the models with m_2=0.1. This is because the simulated merger remnants can haveS0-like morphology without spiral arms.In all the simulations of minor mergers,the orbit of the two disks is set to be initially on the xy plane and the distance between the center of mass of the two disks is a free parameter (R_ i). The pericenter distance, represented by r_ pand the orbital eccentricity (e_ o)are free parameters too. The spin of the primary and companiongalaxies in a merger pair is specified by two angle θ and ϕ (in units of degrees), where θ is the angle between the z axis and the vector of the angular momentum of a disk and ϕ is the azimuthal angle measured from x axis to the projection of the angular momentum vector of a disk on the xy plane. In the present model, ϕ is set to be 0 for thetwo disks. The values of θ can be different for the two: θ_1 and θ_2 are θ for the companion and the primary galaxies, respectively: we used this notation of θ_1 for the companion, because we focus on the gas dynamics of the companion.We mainlyinvestigate the models with the following four merger orbital configurations:(i) prograde-prograde (`PP')model with θ_1 = 35, θ_2 = 45, (ii) prograde-prograde (`PR')model with θ_1 = 35, θ_2 = 135, (iii) retrograde-prograde (`RP')model with θ_1 = 35, θ_2 = 45, and (iv) retrograde-retrograde (`RR')model with θ_1 = 145, θ_2 = 135. We investigate two representative cases of orbits, bound orbit with e_ o=0.7 and hyperbolic one with e_ o=1. The results of the two cases are not so different in terms of counter-rotating gas formation in S0s. The model parameters for each modelare summarized in Table 1.§ ANALYSIS§.§ Mapping of Simulated Data Here we briefly describe our methods for producing two dimensional (2D) maps from our three dimensional (3D) SPH simulations. This process is essential in presenting a clear picture of our simulations, however the specific details can have some bearing over the interpretation of our results. Note that, while gas and stars are often treated separately in our analysis, the mapping process for both components is identical. First, we perform a coordinate transformation of the positions and velocities of each particle for a given timestep of a given simulation in order to achieve the desired projection. Typically, this is either edge on or nearly face on (inclined at 15-20^∘) with respect to the disk of the central galaxy. The reason we do not present perfectly face on projections is because this will result in an apparent lack of rotation in the velocity maps for rotating galaxies. 
This is key to clearly showing counter-rotating components in our simulated galaxy mergers.Next, we create a 2D, 100×100 kpc grid with a pixel size of 0.5 kpc. At each grid point we select all particles within 4 kpc of the pixel centre. For each selected particle we calculate a weighting using a Gaussian kernel with a FWHM of 1 kpc multiplied by the particle mass. From this we produce maps of mass surface density by performing a weighted sum of the selected particles and dividing by our pixel size of 0.25 kpc^2. For velocity maps, we take the weighted average of the z-component of the velocity vector (projecting into or out of the 2D plane defined by our grid) for particles selected about each pixel. A caveat to our kinematic measurements is that we will only trace the dominant kinematic component at a given spatial location, while information on multiple, co-located kinematic components is lost. This smoothing is, however, significantly larger than our softening length, thus we can be sure that our results will not change were we to run the same simulations at much higher spatial resolution.Maps produced using this technique, including surface mass density and kinematics of different components, are shown in Sections <ref> and <ref>. Although we do not suffer from the effects of noise and limits on our sensitivity to low surface mass density regions, these maps provide a useful qualitative comparison to observational data products such as those produced from IFS instruments and even radio interferometry. §.§ Integrated Quantities In this section we briefly describe a number of physical quantities measured in each of our simulations. These include the final half-mass radii and concentrations of each baryonic component, the percentage of gas consumed by star-formation, and the percentage change in angular momentum (L) for gas and primary galaxy stars. These values are summarised in the appendix in Table <ref>, and we describe below our methods for measuring these values.First, half-mass radii and concentration index are measured. We measure the spherical radius defining spheres in which contain 50 and 90% of the total stellar mass giving r_50 and r_90, respectively. The concentration index, c, is then defined as the ratio of these values, r_90/r_50. This value is similar to the concentration index employed on observational data, which <cit.> and <cit.> show that c can be a usefultool in separating between early- and late-type galaxies. The typical value separating morphologies is c = 2.85, with early-type galaxies being more concentrated and late-type galaxies being less concentrated. We present these values for gas, primary stars, companion stars, and newly formed stars. An illustration of this procedure for m2 and m4 is given in Figure <ref>. The next integrated quantity is the percentage of the initial gas mass that is consumed by star formation. This quantity is measured by first subtracting the final gas mass from the initial gas mass, then dividing by the initial gas mass. This gives a decimal value that is then converted to a percentage.Finally, we calculate the change in angular momentum about the primary galaxy of the gas component and the stars initially contained within the primary galaxy for each model. This is done by calculating the cross product of the radial and velocity vector of each particle in a given component, taking the mass weighted center and average velocity as the zero points of the system. 
This is then multiplied by the mass of the given particle, and each angular momentum vector is summed giving L⃗_⃗t⃗o⃗t⃗, and the final value is the magnitude of this vector. This process is performed at the beginning and end of the simulations and the percentage change is calculated in a similar manner to the percentage change in gas mass. § RESULTS§.§ Isolated Disk Galaxy Model Before examining our minor merger simulations we first briefly explore the properties of our isolated model m1. This simulation begins as a featureless disk of gas and stars with a central bulge having a bulge mass fraction of 0.17. An edge-on view of the initial setup and a face-on view after 1.4 Gyr of evolution in isolation are shown in the top two panels of the first column of Figure <ref>. From the view of the final gas and stellar distributions, we find that this galaxy has developed a well defined spiral arm pattern meaning this simulation can be clearly classified as a late-type galaxy (LTGs). In this work, we consider the presence or absence of spiral arms to be the key factor separating LTGs and S0 galaxies. We also note that the final c value (see Table <ref>) for the stellar component is < 2.85, consistent with this being a LTG. Below the stellar and gas density maps we show the final kinematics maps of the old stars, new stars, and gas, which are found to share a very similar disk-like rotation. At large radii, the kinematics of the old stars exhibit a fairly random distribution, however this is simply due to the presence of a pressure supported bulge. The compact profile of this galaxy component has a broad, low density wing of stars resulting in a small fraction of bulge stars being found beyond the full extent of the disk, particularly in directions perpendicular to the disk distribution. As mentioned, these starts are initially non-rotating, thus in these regions where we primarily find bulge stars, kinematics are relatively random. In our isolated model we have found a strong spiral structure and co-rotation among all components of our simulated galaxy. Thus, we can say with confidence that the absence of these features in our simulations of minor mergers are the result of the galaxy interraction rather than secular processes. It can be argued that the presence of a massive gas disk in our isolated model means that this is a somewhat unfair comparison to our initially gas-free primary galaxies, however previous works have shown that spiral arms can be formed in simulations of isolated, gas-free disk galaxies <cit.>. §.§ Conditions for Producing Counter-Rotating Gas Disks in S0 GalaxiesIn this Section we explore the effects of the relative spins and gas fractions of galaxies in our merger simulations on the gas versus stellar kinematics of the merger remnants. Specifically, we are interested in identifying the key initial conditions that result in a counter-rotating gas disk. For a quick reference, we provide in the second column of Table 2 a list of components that are counter rotating relative to stars originally in the primary galaxy for each model.Before presenting our results, we must explicitly state the definition of counter rotation used in this work. We consider the primary baryonic component of these galaxies to be the stars initially belonging to the central galaxy as these stars make up a majority of the mass budget of our merger remnants. 
Thus we define a given component at “counter rotating” if it exhibits rotation in the opposite direction to those stars initially belonging to the primary galaxy. In IFS observations of merger remnants such as these, stars initially in the primary galaxy will represent the primary spectral component. In particular, for IFS analyses in which only a single stellar component is assumed, only the most massive kinematic component will be analysed. This is typically true of shallow observations, while observations with longer exposure times can reliably extract multiple stellar kinematic components. In star-forming galaxies, gas kinematics are also often straightforward to measure from the roughly Gaussian emission lines. Thus gas versus stellar counter-rotation is often the easiest to identify from an observational perspective.§.§.§ Dependence on Initial Angular Momentum Here we examine the stellar versus gas kinematics of four simulated mergers with f_g,p=0.0, models m2-m5, beginning with very similar initial conditions. We fix the initial orbits, M_* of both galaxies, f_g,c, and rotational speed of both galaxies. The only difference in each simulation are the direction of the spin of each galaxy with respect to the spin of the orbit, either retrograde or prograde. Thus, each simulation may be distinguished by the relative spin of the companion and the primary galaxies, e.g. retrograde-prograde. For the remainder of this work, we will refer to the relative spins in a given simulation in this manner, giving first the spin of the companion followed by the primary (e.g. “RP” denotes a retrograde companion and a prograde primary). These scenarios are illustrated in the columns 2-5 of Figure <ref> where we show from left to right models m2 (RR), m3 (PR), m4 (RP), and m5 (PP). The first result of these four simulations is that mergers resulting in counter-rotating gaseous and stellar components require that the primary galaxy be rotating retrograde to the merger orbit.This is consistent with a number of previous works focusing on both dissipative and dissipationless major mergers <cit.>. In regards to counter- versus co-rotation of accreted material, however, the spin of the companion galaxy has no effect. This is due to the fact that the companion galaxy is completely destroyed in the merger and its gas and stellar content is redistributed in the disk of the primary galaxy. In this way the initial internal kinematics of the companion galaxy are lost and the orbital direction of the accreted material is inhereted by the orbital direction of the merger. Although the spin of the companion galaxy does not seem to affect the spin of the gas and new stars of the merger remnant, it does have a clear effect on the radial extent of the accreted gas. For f_g,p = 0 mergers in which the rotation of the companion galaxy is retrograde, RR (m2) and RP (m4), the gas content of the merger remnant appears in Figure <ref> to be less extended than for our PR (m3) and PP (m5) mergers. This agrees with the r_f50,g values in Table <ref> with m2 and m4 having values of 3.1 and 10.4 kpc, compared to 13.1 and 21.6 kpc for models m3 and m5. Furthermore, the concentration indices of m2 and m4 are ∼47% larger than m3 and m5, meaning the former two have significantly more compact gas distributions. The relative spin of the primary galaxy has a the same effect on the gas extent, however the effect is not as large. 
For example, r_f50,g of our RP merger is 3.4× larger than for our RR, and gas in our PR merger is 4.2× more extended than in our RR remnant. Therefor we find that the merger remnant among m2-m5 with the smallest gas disk is our RR merger while the merger with the largest gas disk is PP. We discuss further the varying morphologies of each component in Section <ref>.§.§.§ Dependence on Central Gas Mass Next we explore how the properties of the merger remnant depend on the gas content of the primary galaxy. In this Section we take our f_g,p=0, RR merger, model m2, from Section <ref> as our base model and run an additional three simulations with f_g,p increasing each time. We perform mergers with primary gas fractions of 0.01, 0.05, and 0.10, which correspond to 0.2 (m6), 1.0 (m7), and 2.0 (m8) times the total gas mass of the companion galaxy. In each simulation the gas in the primary galaxy is arranged in an exponential disk with a scale radius and scale height matched to that of the stellar disc component of the primary galaxy. The initial and final gas/stellar distributions and final kinematics are shown in columns 6-8 of Figure <ref>. The main effect of adding gas to the primary galaxy is that gas accreted from the companion during a retrograde merger collides with gas in the primary. In all four simulations the gas is accreted in such a way that its bulk motion is opposite to the direction of rotation of the primary disk. Thus, the rotation of the accreted gas, which follows the orbital angular momentum of the companion's orbit, will be decreased relative to the gas free case. This can be seen Figure <ref>, comparing to bottom rows of columns 2 and 6 showing the final gas kinematics. The base f_g,p=0 model, m2, shows clear gas-stars counter rotation with a complex gas velocity field. The same is true of m6 with f_g,p=0.01, however the irregularities in the gas velocity have been smoothed out. Our two models in which the primary galaxy begins with the same or more gas than the companion, m7 and m8, have co-rotating gas and primary stars in their merger remnants. In general, accreted gas is swept up by any preexisting gas in theprimary galaxy. The larger the gas mass initially in the primary, the easier it is for accreted gas to change its orbital direction. We also find that both the gas and primary stars of merger remnants with non-zero f_g,p are more extended than in the f_g,p=0 case. Increasing f_g,p results in an increase of r_f50,g, with models m6, m7, and m8 having r_f50,g = 6.1, 7.6, and 10.6 compared to 3.1 for m2. Adding even a small amount of gas to the primary galaxy is found to also significantly reduce c_g, however, beyond this initial drop, adding more gas does not seem to further decrease this value. Models m6-m8 all have similar c_g values at ∼1.6 compared to c_g=2.2 found for m2. We will discuss further the relative morphologies of different components of each model further in Section <ref>. Considering the kinematics of stars initially belonging to the companion galaxy, row 5 of Figure <ref>, we find that including gas in the primary galaxy haslittle to no effect. There is no appreciable difference between models m6-m8, which are also quite similar to those of m2, the f_g,p=0 case. This is because stars merge without dissipation and thus retain the motion of the companion's orbit about the primary galaxy. The kinematics of the new stars shown in row 4 of Figure <ref>, on the other hand, are somewhat more complex. 
In the central regions we find that newly formed stars are co-rotating with the primary galaxy stars. This is even true for model m6, although the co-rotating region is extremely small with a radius of ∼3.5 kpc. This co-rotating region increases in size with increasing f_g,p. Beyond this inner co-rotating region, the new stars are found to counter rotate. The reason for this complex behavior is that stars in the inner region form from gas that is either initially belonging to the co-rotating gas disk of the primary or from accreted gas that has reached smaller radii and is thus more likely to have been swept up by the preexisting gas. Counter rotating new stars at large radii formed from gas initially in the companion galaxy before this gas has been swept up by the gas of the primary. Thus, these new stars retain the counter-rotating orbits by merging dissaptionlessly.Finally, we also explore the effect of the initial gas content of the primary galaxy in a PR merger. This was shown in the previous section to be the only other configuration resulting in counter-rotating gas for models with gas free primary galaxies (m3). We show the final stellar and gas velocity maps for PR mergers with and without initial gas in the primary galaxy in column 3 of Figure <ref> and column 1 of Figure <ref>, respectively. The overall spatial distributions of gas and stars, as well as the stellar kinematics, in the two models are quite similar, however there is a clear difference in the gas kinematics. In m9 we find that in the central region gas and stars are co-rotating while gas accreted at larger radii is counter-rotating as in m6. This is in contrast to nonzero f_g,p models with RR orbits, m6-m8 discussed above, which exhibit smooth velocity maps with no kinematic flips. As we noted in the previous section, mergers with a prograde satellite rotation result in a larger final gas disk. In the case of m6-m8 all the accreted gas falls to small radii and is swept up by the initial gas content of the primary galaxy, while in m9 gas rapidly stripped from the companion collects at radii beyond the central gas disk and is thus able to retain the counter-rotating kinematics inherited from the orbit of the merger. §.§.§ The Effect of Orbit, Bulge Mass, and f_g,c Next we briefly investigate the effect of the orbital distance and eccentricity, primary bulge mass, and f_g,c on the final distribution and kinematics of gas and stars in our merger simulations. We show the initial setup, the near face-on view of the final merger remnant, and the final near face-on stellar and gas kinematics of these models in the columns 2-8 ofFigure <ref>.Considering merger orbit, we present two RR simulations that both have orbits with initial and pericenter separations between the two galaxies twice that of our previous models. These models are m10 and m11 in Table 1, and the gas fractions of the primary galaxies are 0 and 0.01 respectively. We also produce a PP merger with an initial separation 2× larger than in previous sections and with f_g,p = 0.05, m12. Finally, we present model m13, which has an orbital eccentricity of 1.0, a parabolic orbit, requiring a significantly larger initial separation to achieve the same pericentre distance as our other models. For models m10 and m11, the resulting merger remnants are quite similar to m2, exhibiting counter-rotating gas in the central regions. In contrast to m2, however, both models have a more prominent ring structure and long tidal streams of gas with a slightly different orbital plane. 
These tidal streams appear to have inherited their orbits from the initial orbit of the incoming companion galaxy, and in the final kinematic maps their rotation is offset by ∼10-30^∘. This is consistent with the inclination of the orbit we input as initial conditions. As with previous simulations, the kinematics of new stars tend to follow those of the gas, and the kinematics of stars initially in the companion are largely unaffected appearing quite similar to models m2-m8. Model m12, our PP merger, has resulting gas/stellar distributions and kinematics that are extremely similar to the f_g,p=0, PP merger, m5. Both m5 and m12 are among the models with the largest r_f50,g at 21.6 and 16.9 kpc, respectively. The radial distribution of the gas is similar as well, with c_g values found to be 1.50 and 1.68. Furthermore, all components in these models are found to be co-rotating with fairly regular velocity maps. This suggests that PP mergers may be among the most difficult to identify based on the kinematics of their remnant galaxies. In contrast to all models discussed to this point, model m13 (shown in column 5 of Figure <ref>, which has a parabolic merger orbit, exhibits chaotic kinematics in the accreted stars and gas. The final distribution of gas has a centrally concentrated region surrounded by a handful of long streams that roughly mirror the distributions of accreted stars. These streams exhibit distinct kinematics that are apparent in the kinematics maps of the new stars and companion stars. Relative to these two kinematic maps, the kinematics of the accreted gas is fairly regular, particularly in the central regions where it counter rotates relative to the primary stars (which have relatively regular rotation). From an observational perspective, this galaxy would likely be identified as having gas versus stellar counter rotation, thus this suggests a parabolic orbit will not affect the emergence of counter rotation. We note, however that the random motions of the accreted stellar components could complicate their identification in a spectral decomposition of the stellar continuum. We also test the effect of significantly increasing the bulge mass of the primary galaxy in m14, thus testing the effect of a PR wet, minor merger with an elliptical galaxy. This is also depicted in Figure <ref> in the 6^th column. The strong gravitational potential of the massive bulge results in much more efficient stripping of gas and stars from the incoming companion galaxy. Thus, the merger remnant of m17 is among the most extended gas disks in our our simulations with r_f50,g=21.1 kpc. While the final kinematics of this model are quite complex, in the central regions, where IFS observations are typically focused, the gas is predominantly counter-rotating. In the inner-most 2 kpc we find a region of co-rotating gas, however this gas is the remnant of the initial gas disk included in the primary galaxy in this model. As in previous models, m6-m8, tidal interactions cause this initial gas disk to collapse inward and be rapidly depleted through star formation. Beyond the inner regions of the galaxy, the gas exhibits a kinematic twist that manifests as a spiral pattern in the velocity map. This is also reflected in the outer stellar kinematics, which will again be dominated by newly formed stars similar to models m10 and m11. 
Similar to previous models, these stellar motions will be difficult to detect, but the kinematic twist seen in the gas content could possibly be detected using radio interferometry for very nearby galaxies. Finally, we test the effects of lowering f_g,c on the resulting properties of our merger simulations. In simulations discussed thus far we have employed f_g,c = 5.0. Although low mass galaxies with gas fractions this large have been observed, they represent the most gas rich dwarf systems at low redshift <cit.>. As such, wet, minor mergers with companion galaxies of lower f_g,c are more common. We have performed two additional RR, minor merger simulations, m15 and m16, with orbits matched to m2 and f_g,c = 1.0 and 0.5, respectively. These are shown in columns 7 and 8 of Figure <ref>. Overall the resulting merger remnants are quite similar to that of m2, featuring counter-rotating gas and companion/new stars. The radial extent of the gas for all three models is comparable with m2, m15, and m16 having r_f50,g = 3.1, 5.4, and 6.4 respectively. Models m15 and m16, however, have lower c_g than m2 (∼1.4 vs 2.2). It is not clear, though, if this is due to the change in f_g,c, or a result of a truncation of the gas disks of m15 and m16 due to the combination of our mass resolution and the low space density of gas at large radii in these models. Using a larger value of f_g,c allows us to better map the final distribution of gas, particularly at large radii, and should not have a major effect on our main conclusions regarding the emergence of counter-rotation.§.§ Merger Remnant Morphologies In all of our simulations, the merging process results in a S0 merger remnant, i.e. a disk galaxy devoid of spiral arms. This is in contrast to many previous works on simulations of counter-rotating gas and stellar components in merger remnants focusing on major mergers as these simulations typically result in elliptical morphologies <cit.>. In this sense, the mergers presented here provide an important step forward in understanding the formation mechanism for the 20-40% of S0 galaxies found to host counter-rotating components <cit.>. We note that in some models the images of the final morphologies shown in the second rows of Figures <ref> and <ref> appear as late types, models m12 and m14 in particular. This appearance is driven by the gas distributions at large radii, rather than stars. To show this we create simulated SDSS r-band images of model m17 placed at a redshift of 0.05. First an artificial datacube was created using the simple stellar population spectral models from the PÉGASE-HR library <cit.>, created from observations of Milky Way stars. In each spaxel of our simulated cubes we determine the age of each star particle based on their time of birth and the time elapsed in the simulation. Star particles present at the beginning of the simulation are initiated with an age of 1 Gyr. We then create a mass weighted spectrum in each spaxel where each star particle contributes a component taken as the PÉGASE-HR model with an age closest to that of the particle. We next blur the image using a Gaussian kernel with a full-width half max of 3.61 pixels, corresponding to the average seeing of the SDSS survey. Finally, we apply the SDSS r-band filter to this cube and add Gaussian noise at a level that produces a simulated image typical of SDSS observations at z=0.05.Face-on and edge-on simulated r-band images for model m17 are shown in Figure <ref>. 
This shows that the spiral arms apparent in Figure <ref> are not present in the stellar distribution, which has a lenticular appearance. The gas component of these galaxies at large radii is almost entirely HI, thus star-forming regions will not be present. This means that even for photometry at shorter wavelengths, e.g. the u-band, will not provide imaging of the gas.In a majority of our merger simulations, the concentration index of the primary stars in our remnant, c_p*, is larger than 2.85. The value of c = 2.85 is found by <cit.> and <cit.> to reliably separate visually-classified LTGs (c < 2.85) and ETGs (c > 2.85). The concentration of new stars, c_n*, also tends to be quite high in most cases as a majority of stars are formed in the inner most regions of the primary galaxy. These new stars are extremely bright relative to older stellar populations and will thus contribute to increasing the overall c of the full stellar distribution. We note, however that in no model is c_g or c_c* larger than 2.85, meaning that these accreted components are preferentially arranged in a disk structure. Thus, from a quantitative point of view, merger remnants with c_p* < 2.85 may remain as LTGs rather than transforming into S0's. From our simple prescription in which an S0 is identified as such based on the lack of spiral arms, however, all galaxy mergers result in S0 remnants as shown by the second rows of Figure <ref> and <ref>. Next we examine in detail the spatial distribution of the various components of the gas in our simulated merger remnants. We show in the rows of Figures <ref> and <ref> the face on views of the surface density, Σ, of, from top to bottom, HI, H_2, mass of dust, mass of metals, and stars tagged as initially belonging to the disk component of the primary galaxy. We observe both inner and outer rings in oursimulations, and our large range of initial conditions allow us to discuss the likely formation mechanism for these structures. The occurance of rings in each of our models is summarised in Table 2 for reference. Inner rings have typical sizes comparable to inner galaxy structures such as bars while outer rings are two or more times larger <cit.>. In this work we employ a fixed radial cut of 10 kpc to separate inner and outer rings, i.e. rings with radii < 10 kpc are inner rings and those with radii > 10 kpc are outer rings. In our isolated fiducial model, no bar emerges through secular processes. Thus, we can say with confidence that bars evident in our merger simulations are indeed induced by the merging process rather than internal/secular evolution.First we examine the formation of inner rings, which occur in eleven out of sixteen simulations presented in Figures <ref> and <ref>; m2, m3, m4, m6, m8, m10, m11, m12, m13, m14, m15, and m16. This includes simulations in which the primary galaxy initially contains no gas as well simulations in which the primary galaxy contains gas. Comparing the morphologies of the gas and stars in our merger remnants, it appears that the occurrence of inner rings in mergers with f_g,p is related to the generation of a bar in the disk component of the primary galaxy, as seen in the bottom row of Figures <ref> and <ref>. This bar formation is due to tidal torques on the disk stars during the merging process and the inner gas rings in these simulations form at radii consistent with the lengths of the stellar bars. 
Inner rings appear to always contain HIWe also find that including large amounts of gas in the primary galaxy suppresses merger induced bar formation, and thus the emergence of an inner gas ring at the outer edge of the bar. The apparent ring structures in models m6, m8, m12, and m14 which initially contain gas in the primary galaxy, are significantly more compact than those seen in m2, m3, and m4. This may suggest that the ring-like appearance in these simulations emerges due to the rapid depletion of gas in the inner-most regions rather than bar driven resonances. Outer gas rings are only prominent in four out of sixteen of these simulations, always occurring in those simulations in which the rotation of the companion galaxy is prograde relative to its orbit about the primary (RP and PP mergers). As we noted in Section <ref>, the gas initially in the companions in these mergers is more efficiently stripped resulting in a much more extended gas distribution when compared with mergers with retrograde companion rotation. One difference between our RP and PP mergers is that we find a merger induced bar in the RP mergers, which is responsible for the generation of an inner ring in addition the the outer ring (m3 and m9). In the PP case, we find neither a bar nor an inner ring. We do find gas well beyond the outer ring, however, whereas in our RP mergers the gas is rapidly truncated beyond the outer ring. This may be the result of angular momentum and material being transported inwards due to the influence of the central bar. The gas in m3 loses 47.9% of its angular momentum while for m5 this value is only 12.3% in agreement with this assessment (see Table <ref>). A similar comparison between m9 and m14, however, is complicated by a 10× difference in f_g,p.The size of the outer ring in all cases isroughly the same, and this may be related to the orbit of the companion galaxy, which does not change between these simulations. Further simulations with a larger variation in orbital parameters will be required to test this. Finally, we note that all of the outer rings observed in our simulations are almost entirely devoid of H_2. Thus, these regions will have a large hydrogen mass with relatively low levels of star-formation, and should therefore represent outliers from the kennicutt-schmidt relation <cit.>.Although we do not observe spiral arms in any of our merger remnants, we do observe arm-like tidal streams in three of our simulations. Specifically these are models m4, m5, and m12, which are also those three simulations with a prograde primary galaxy. Furthermore, these are also the only three galaxies that exhibit no counter-rotating components in their merger remnants. Although observed low surface brightness features such as these often requires long exposure times, these may be useful in identifying merger remnants with prograde primary galaxies that will not display clear kinematic signatures.Before leaving the subject of merger remnant morphologies we also highlight another peculiarity that emerges in models m2 and m3. These models are both performed with retrograde primary galaxies and f_g,p = 0, resulting in the two strongest stellar bars among all of our models (see the bottom row of Figure <ref>). We find that these models also exhibit bars in their distributions of gas, dust, and metals, and this bar is significantly shorter and perpendicular to the stellar bar. 
These so-called “nested” bars were originally proposed as a mechanism for fueling active galactic nuclei by <cit.> and have subsequently been observed in around 1/3 of barred galaxies <cit.>. Work on simulations of nested bars is on-going, with some groups manually creating these structures to explore their influence on the host galaxy evolution <cit.> or, notably, <cit.>, who demonstrate the emergence of nested bar formation in an isolated disk galaxy. Our simulation is unique in that this nested bar emerges serendipitously and appears to be induced by a galaxy merger. Merger-induced nested bars will be the topic of future research, and we simply make note of them here. §.§ Other Galaxy Properties Here we present the evolution of star-formation and dust scaling relations of our merger simulations. While not directly related to the emergence of counter-rotation in S0 galaxies, these provide signatures that can be reliably compared with current observations. §.§.§ Star-Formation The time evolution of SFR and H_2 fraction (M_H2/M_HI) is shown for m2-m8 in Figure <ref>. The top row shows the evolution of SFR and the bottom row shows the evolution of H_2 fraction. The horizontal dashed line in the top row indicates the SFR of the z=0 star-forming main sequence at the mass of our merger simulations <cit.>. First we focus on our models with f_g,p = 0, shown in the left column of Figure <ref>. We find that all four mergers peak in their SFR around 2 Gyr, which corresponds to the final coalescence of our mergers, and this peak is roughly consistent with the SFR expected for main-sequence galaxies of this mass. Beyond 2 Gyr, the SFR gradually tapers off in most cases; however, m5, our PP merger, experiences a rapid truncation in SFR at around 3.5 Gyr. The H_2 fractions of our models gradually increase to 2 Gyr and then, in all cases other than m5, the H_2 fraction levels off for the remainder of these simulations. This suggests a coevolution of HI and H_2 in these mergers, where the H_2 consumed by star formation is continuously replenished by the HI reservoir. Model m5, on the other hand, was found to exhibit a much larger spatial distribution of gas due to more efficient stripping during the merger as a consequence of the PP configuration. Indeed, Table <ref> shows that r_f50,g for m5 is 21.1 kpc, significantly larger than the values of 2.1, 13.1, and 10.4 found for m2, m3, and m4. Gas at large radii remains stable due to rapid rotation, preventing the HI from collapsing to form H_2. The rapid truncation of SFR seen at 3.5 Gyr is the result of the depletion of the inner H_2 reservoir, which is no longer replenished from the large HI disk. This is also reflected in the significantly lower H_2 fraction seen in this galaxy beyond 2 Gyr. In the right column of Figure <ref> we show the time evolution of SFR and H_2 fraction for our series of RR mergers with increasing primary gas fraction, m6-m8. Here, model m2 is plotted again as a reference. In the cases where gas is included in the primary galaxy we find an earlier peak in the SFR that can be slightly above the average SFR of the main sequence. This occurs because the initial gas disk in the primary galaxy experiences a tidal torque due to the incoming companion that causes it to collapse inward. This compaction of the gas causes an increase in the H_2 fraction, and as a result the initial gas in the primary galaxy is rapidly converted into stars.
In the case of m6, there is significantly less initial gas in the primary and thus the SFR peak is lower than in models m7 and m8. In models m7 and m8, however, we see a rapid truncation of the SFR reminiscent of that seen in our PP merger, m5. Similar to m5, m7 and m8 result in co-rotating remnants that have a significantly larger final gas disk. Thus, the truncation of SFR can again be attributed to stable HI gas at large radii. This is again reflected in the lower H_2 fraction when compared with models m2 and m6. Unlike m5, however, m7 and m8 eventually increase their SFR again to roughly match the level seen in m2 and m6 at the end of our simulations. The presence of remaining gas from the primary galaxy provides a mechanism for angular momentum transfer, allowing accreted gas to continue to move inwards and form stars. SFR and H_2 fraction evolution for models m9-m16 are also shown in Figure <ref>. The same overall trends observed in models m2-m8 are also at play here. In general, models in which gas is found to be trapped at large radii, whether in a disk (e.g. m12 and m14) or in tidal streams (e.g. m10 and m11), exhibit a suppression of both SFR and H_2 fraction compared to m2 at late times. Again this is due to the fact that gas at large radii is stable and prevented from converting from HI to H_2. Also similar to m2-m8, models in Figure <ref> that begin with a large value of f_g,p (e.g. m9 and m12) are found to have an early peak in SFR related to the rapid depletion of gas from the primary galaxy. We include Figure <ref> for completeness; however, as star formation is not the focus of this work, we leave in-depth analysis for future work. §.§.§ Dust Mass Relations Finally, we explore the dust mass, M_d, scaling relations of our minor-merger remnants. Relations between M_d and quantities such as SFR and M_* have been explored by a number of authors. In particular, works such as <cit.> and <cit.> use samples of massive, visually classified ETGs known to contain dust based on detection at infrared wavelengths. Galaxies from <cit.> are further classified as being largely dispersion supported (“dispersion dominated galaxies”, DDGs) based on IFS observations from the SAMI Galaxy Survey <cit.>. The presence of dust in these systems is peculiar, as these galaxies are likely to host a hot, X-ray emitting halo that is inhospitable to long-lived dust grains <cit.>. Galaxies from <cit.> and <cit.> are found to be outliers in M_d vs SFR and M_d vs M_*, which is taken as evidence that their dust content was recently accreted through mergers. As we have described, our models carefully track the evolution of M_d, allowing us to test the connection between the M_d-SFR and M_d-M_* relations and minor-merger activity. We show in Figure <ref> the relationship between M_d and SFR for our minor-merger simulations in comparison with observations. The relationship for “normal” star-forming galaxies from the Sloan Digital Sky Survey (SDSS), taken from <cit.>, is shown as small orange circles, and a linear fit to this data is indicated by the black dashed line. Dusty elliptical galaxies from <cit.> and DDGs from <cit.> are shown as red triangles and red circles, respectively. We show the values for our merger simulations using green circles at two different time steps. Smaller circles show the position at the halfway point of our simulations, slightly after final coalescence, and the larger circles show the final positions.
Both our simulated galaxies and the dusty ETGs of <cit.> and <cit.> exhibit significantly lower SFRs at a fixed M_d when compared to the bulk of star-forming galaxies from SDSS. We also find that the SFR of our simulations is typically higher at the halfway point, which should be expected given the declining star-formation histories of our models shown in Figures <ref> and <ref>. It can be seen that a non-negligible number of star-forming SDSS galaxies also occupy the low-SFR region below the dashed line, however, which suggests that mergers are not the only mechanism that can cause galaxies to fall below the bulk relation. The fact that all of our simulated data points fall below this relation is nonetheless intriguing, suggesting that observations of M_d vs SFR can provide an indication of merging activity; further observations will be required to confirm this. Indeed, IFS observations of the four galaxies from <cit.> show these galaxies to have kinematic signatures of mergers such as offsets between the kinematic position angles of gas and stars <cit.>. We also show in Figure <ref> the M_d-M_* relationship of our simulations. We again plot our values over those for ETGs from <cit.> and <cit.>. These points are now compared with galaxies from the Herschel Reference Survey <cit.>, which is an infrared survey of very nearby galaxies representing a large range in galaxy type and environments. The proximity of this sample provides high confidence in visual morphological classifications as well as allowing for measurements of M_d to very low masses. LTGs and ETGs from HRS are shown in Figure <ref> as blue and orange pentagons, respectively. Similar to Figure <ref>, our simulated galaxies are found to occupy a similar region to the dusty ETGs of <cit.> and <cit.>, following a roughly horizontal sequence around M_* ≃ 10^11 M_⊙. This sequence appears to extend the HRS ETGs to higher M_d, while HRS LTGs, on the other hand, are found to follow a sequence of increasing M_* with increasing M_d (albeit with relatively large scatter compared to M_d-SFR). A relationship between M_* and M_d is a common feature of star-forming galaxy samples <cit.>, attributed to the fact that dust is produced by evolved stars, thus more stars will lead to more dust. The lack of correlation seen for the dusty ETGs of <cit.> and <cit.> is possible further evidence for a merger-driven origin for their dust content. In such a scenario, the amount of dust in these galaxies appears to be dictated by the dust content of the companion galaxy rather than the stellar mass of the primary. Our simulated values agree well with this hypothesis, as we find nearly an order of magnitude variation in M_d over an extremely narrow range of M_*. § DISCUSSION §.§ The Emergence of Counter-Rotation A number of previous works have investigated the emergence of gas-versus-stellar counter-rotation in galaxy merger simulations. The general result of these simulations is that the primary galaxy must rotate in the opposite direction of the merger orbit, i.e. a retrograde merger <cit.>. This has also been shown by <cit.> for gas-free, sticky-particle, N-body simulations of minor mergers. In this paper we have found a similar result using state-of-the-art SPH simulations including realistic stellar and gas physics, as well as tracking the production (and destruction) of new stars, dust, and metals.
In addition to confirming the necessity of a retrograde orbit in producing a counter-rotating gas disk, we have also explored the effects of including significant amounts of gas in the primary galaxy. In such a merger, the accreted gas collides with the gas initially in the primary galaxy, which can have a significant impact on the final gas kinematics. We compare this with the results of <cit.>, who performed simulations with smooth, filamentary accretion of gas onto a gas-rich primary galaxy. Similar to <cit.>, smooth accretion such as this onto a gas-free primary galaxy will also result in a counter-rotating gas disk. By gradually adding gas to the central galaxy, <cit.> found that once the gas content of the primary was equal to the mass of gas accreted, the final gas disk would co-rotate with the stellar component. For the first time, we have shown that this is also true for the discrete accretion of a single, gas-rich companion galaxy. This observation may help explain the discrepancy between the fraction of S0 galaxies exhibiting gas-versus-stellar counter-rotation <cit.> and the fraction seen in later types <cit.>. Galaxies with Sa and later morphologies are far more likely to contain large amounts of gas, thus minor mergers, even with extremely gas-rich companions, will have little, if any, effect on the observed kinematics post-merger. It should also be noted, however, that S0 galaxies and LTGs have typically experienced significantly different formation histories. The well-established morphology-density relation <cit.> shows that S0 galaxies are more common in higher density, group environments than LTGs, where minor mergers are more common <cit.>. This means that LTGs have experienced fewer mergers, on average, and their gas content inhibits the emergence of counter-rotating components, as we have shown. Thus, the morphology-density relation provides an additional reason for the large discrepancy between the fractions of S0s and LTGs exhibiting counter-rotating gas. From an observational point of view, studies of gas-versus-stellar counter-rotation in galaxies are becoming more commonplace due to the proliferation of integral field spectroscopy <cit.>. These studies identify not only those galaxies with gas and stellar kinematic misalignments of 180^∘, those hosting counter-rotating gas disks, but also galaxies with misalignments between 0^∘ and 180^∘. The gas content of these galaxies, which is most likely externally accreted, is expected to relax into either co- or counter-rotation with the stellar content of the primary galaxy within ∼5.0 dynamical times <cit.>. In this framework, the distribution of kinematic misalignment angles should be strongly peaked around 0^∘ (this peak includes all normal/non-disturbed galaxies), relatively flat between 0^∘ and 180^∘, and weakly peaked at 180^∘. This is roughly the result observed for ATLAS^3D galaxies by <cit.>. <cit.> attempt to match the distribution in kinematic offsets found by <cit.> using a simple toy model. Without significantly increasing the relaxation time above 5.0 dynamical times, they typically find the peak at 180^∘ is much larger than the observed distribution.
We have shown here that in cases where massive ETGs contain gas prior to an accretion event, the emergence of a counter-rotating gas disk is suppressed, an effect not accounted for in the work of <cit.>. Recently it has been shown that ∼10-40% of ETGs contain significant amounts of molecular gas <cit.>. Properly accounting for this could suppress the number of counter-rotating gas disks to match observations without invoking long gas relaxation times. We note that ATLAS^3D galaxies are, on average, of an earlier morphological type than galaxies simulated in this work. It is reasonable to suggest this may limit the appropriateness of a comparison between our study and the works of <cit.> and <cit.>. First we point out that the ATLAS^3D sample includes a large number of S0 galaxies at a broad range of stellar masses, comparable to the simulated galaxies studied here. We have also shown in model m14 that increasing the bulge fraction of the primary galaxy alters the efficiency of gas stripping of the companion, but does not change the general result regarding co- versus counter-rotating gas disks. Thus, given the simplicity of the model presented in <cit.>, for a large number of ATLAS^3D galaxies (at lower masses in particular) we consider this comparison to be appropriate. More generally, we expect our results regarding the dependence of the emergence of counter-rotating gas disks on merger orbits and primary gas fraction to hold for all ETGs in a comparable mass range. Another interesting aspect of retrograde mergers in which the primary galaxy initially contains significant amounts of gas is that, while the gas is found to co-rotate with the bulk of the stars, we also identify clear cases of stellar-versus-stellar counter-rotation. This is shown in Figure <ref> for model m8, where we find that stars initially belonging to the accreted companion galaxy end up in counter-rotating orbits relative to those stars initially belonging to the primary galaxy. Similar types of orbital segregation have previously been shown in major merger simulations by <cit.> and <cit.>. The reason for this behavior is the collisionless nature of star-particles, which is in stark contrast to the behavior of the accreted gas. We also find that the newly formed stars have the most complex kinematic structure, but this is due to the fact that these stars form from two, initially distinct, gas reservoirs. Mergers such as this could explain examples of galaxies with stellar-versus-stellar counter-rotation from observations, such as the S0 galaxy NGC 1366 <cit.>. Identifying galaxies with stellar-stellar counter-rotation such as these at higher redshift will be significantly more difficult, particularly using observations from modern IFS surveys such as SAMI. This is due to the fact that the spectral signature of accreted stars will have a much lower flux density than that of the stars of the primary galaxy. A possible alternative method to determine if S0 galaxies have counter-rotating stellar populations would be to observe individual globular clusters in nearby S0s, using techniques demonstrated by the SLUGGS Survey <cit.>. Multiple kinematic populations of globular clusters could be evidence of galaxies hosting both globular cluster systems formed in-situ and a population of accreted globular clusters. §.§ Gas Rings and Bar Formation Apparent in a number of our minor merger simulations is the emergence of gas rings. Rings in the inner regions are quite common in disk galaxies and can often be associated with resonances and/or bars <cit.>.
Although bars have been found to form through instabilities in simulations of isolated disk galaxies <cit.>, in our isolated disk test-case we did not observe spontaneous bar formation. This gives us confidence that bars emerging in our simulations are primarily the result of galaxy-galaxy interaction. Bar formation through galaxy interactions has long been suspected; for example, <cit.> showed that a large fraction of galaxies in the centre of the Coma cluster exhibit bars. Recent works studying simulations of galaxy interactions, flybys in particular, have shown that tidal forces can indeed induce bar formation, even in cases where a secular bar would not normally appear <cit.>. In our simulations, inner rings (with radii < 10 kpc) appear to emerge primarily in cases where a bar has also formed, suggesting that these two structures are closely related. Outer rings in galaxies (with radii > 10 kpc), on the other hand, are difficult to connect with secular evolutionary processes in galaxies and are often attributed to external accretion of gas <cit.>. We find outer rings in a number of our simulations, including models m3, m5, m9, m12, and m14 (see Figures <ref> and <ref>). The key similarity between these simulations is that the companion galaxy in each case experiences more efficient tidal stripping of gas during infall. In each case this is due to the prograde rotation of the satellite relative to the merger orbit. This has been known to enhance tidal stripping since the very early simulations of <cit.>. For model m14, this process is further enhanced by the presence of a massive bulge in the central galaxy, which produces a deep and extended gravitational potential well. A number of authors have explored the incidence of ring structures in S0 galaxies, finding, in general, that these are quite frequent, occurring in ∼25-90% of galaxies (depending on sample selection and wavelength targeted). Observations have focused on indicators of star formation such as UV <cit.> or Hα emission <cit.> indirectly related to H_2 regions, HI <cit.>, and even stellar light <cit.>. There are, though, relatively few observational examples of directly observed molecular gas rings such as that of NGC 4477 shown in <cit.>. This is partly due to technological limitations; however, our simulations suggest that a majority of H_2 rings formed through mergers are compact inner rings, reducing our ability to identify them in observations. This means that current facilities will be able to resolve molecular gas rings only in the most nearby S0 galaxies. Newer interferometric facilities such as ALMA are thus the most likely to observe molecular gas rings in galaxies in the near future. § CONCLUSIONS In this paper we have presented the morphologies and kinematics of a series of minor-merger remnants formed from two disk galaxies. These mergers result in S0 remnants with a variety of kinematic signatures including both co- and counter-rotation in accreted gas and stars as well as kinematically decoupled cores and kinematic twists, which we have presented using 2D projected maps. These maps share many similarities with IFS data products, and, although we do not suffer from noise and surface brightness limitations, they provide a useful comparison to maps of galaxies from modern IFS surveys.
The relatively large number of simulations presented here has allowed us to perform a systematic study of the initial conditions responsible for the various kinematic signatures, and we summarise our main results as follows: * The key factor necessary for producing counter-rotating gas and stellar populations in a given merger remnant is that the orbit of the merger must be retrograde with respect to the primary galaxy. Thus, accreted material is brought in counter to the rotational direction of the stellar content of the primary galaxy.* The relative spin of the companion does not affect the above result; however, it does affect the final spatial distribution of gas. Encounters with prograde companion galaxy rotation result in more extended gas distributions than retrograde ones due to more efficient tidal stripping.* If the primary galaxy contains as much or more gas than the companion, the accreted gas is swept up by the gas of the primary, resulting in co-rotating gas and primary stars in the remnant. In this case, however, stars accreted from the companion remain counter-rotating due to their collisionless nature. This observation can help to explain the difference in the fraction of counter-rotating S0 galaxies (20-40%) when compared to LTGs (< 8%). From a practical standpoint, these conclusions show that, although gas-versus-stellar counter-rotation is the easiest to observe, the lack of such a signature does not immediately rule out recent, wet, minor mergers. Examples of galaxies with co-rotating gas and stars may host a secondary, counter-rotating stellar component. Such a component may be difficult to identify in some observations, but we propose that the observation of counter-rotating planetary nebulae systems and globular cluster systems in S0 galaxies <cit.> may provide evidence of these systems. Finally, we also find three examples of prograde mergers with no counter-rotating components that will be the hardest merger remnants to identify. These three models do exhibit clear stellar tidal streams at large radii, which may be used to identify them as merger remnants, although observing these will typically require extremely long exposure times. In addition to these major conclusions, we observe a number of other intriguing features in our galaxy merger simulations. * We find merger-induced bars in a number of our simulations; in particular, retrograde mergers with gas-free primaries (m2 and m3) exhibit the strongest bars (as well as secondary, nested bars). Including gas in the primary galaxy tends to suppress bar formation.* We also find inner and outer rings in some simulations. Inner rings appear to be connected to bar-driven resonances while outer rings are associated with tidally stripped gas that collects at large radii.* Simulations in which remnants have large gas disks (m5 in particular) have a correspondingly large angular momentum that seems to prevent HI from converting into H_2. This is associated with a significantly suppressed SFR and H_2 mass fraction.* We compare the M_d-SFR and M_d-M_* distributions of our simulated galaxies to observed samples of “normal” star-forming galaxies as well as massive ETGs containing dust. Our results support the connection between the dust content of these galaxies and merger activity proposed by <cit.> and <cit.>. We note that our suite of simulations still represents only a small portion of the possible parameter space for minor galaxy mergers.
Parameters that have not been varied here include the mass ratio, dark matter fraction, and orbital inclination, to name a few. An obvious next step that can address this issue is to apply a similar analysis to simulated galaxy mergers from the GalMer database <cit.>. Regardless, the results presented here are a useful step forward in our understanding of the formation of counter-rotating stellar and gaseous components in S0 galaxies. RB acknowledges support under the Australian Research Council's Discovery Projects funding scheme (DP130100664). We also wish to thank the anonymous referee for helpful comments that have helped to clarify this manuscript. § TABLE OF INTEGRATED VALUES Here we present Table <ref>, which gives the integrated quantities calculated as described in Section <ref>. These include the change in gas mass, the final half-mass radii and concentrations of each component, and the change in angular momentum of gas and primary stars.
http://arxiv.org/abs/1704.08434v1
{ "authors": [ "Robert Bassett", "Kenji Bekki", "Luca Cortese", "Warrick J. Couch" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170427050328", "title": "The Formation of S0 Galaxies with Counter-Rotating Neutral and Molecular Hydrogen" }
PITT-PACC-1620 Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, PA 15260, USA Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, PA 15260, USA Pittsburgh Particle Physics, Astrophysics, and Cosmology Center, Department of Physics and Astronomy, University of Pittsburgh, PA 15260, USA We investigate the feasibility of the indirect detection of dark matter in a simple model using the neutrino portal. The model is very economical, with right-handed neutrinos generating neutrino masses through the Type-I seesaw mechanism and simultaneously mediating interactions with dark matter. Given the small neutrino Yukawa couplings expected in a Type-I seesaw, direct detection and accelerator probes of dark matter in this scenario are challenging. However, dark matter can efficiently annihilate to right-handed neutrinos, which then decay via active-sterile mixing through the weak interactions, leading to a variety of indirect astronomical signatures. We derive the existing constraints on this scenario from Planck cosmic microwave background measurements, Fermi dwarf spheroidal galaxy and Galactic Center gamma-ray observations, and AMS-02 antiproton observations, and also discuss the future prospects of Fermi and the Cherenkov Telescope Array. Thermal annihilation rates are already being probed for dark matter lighter than about 50 GeV, and this can be extended to dark matter masses of 100 GeV and beyond in the future. This scenario can also provide a dark matter interpretation of the Fermi Galactic Center gamma ray excess, and we confront this interpretation with other indirect constraints. Finally, we discuss some of the exciting implications of extensions of the minimal model with large neutrino Yukawa couplings and Higgs portal couplings. Indirect Detection of Neutrino Portal Dark Matter Barmak Shams Es Haghi ================================================== § INTRODUCTION A wide array of gravitational phenomena over a range of cosmological scales strongly supports the hypothesis of dark matter (DM) <cit.>. There is, however, no firm evidence that DM couples to ordinary matter other than through gravity, and the search for such non-gravitational DM interactions has become one of the main drivers in particle physics today. Neutrinos (ν) in the Standard Model (SM) may be identified as a component of DM, since they are color-singlet, electrically neutral cosmic relics. However, the smallness of the lightest neutrino mass makes them relativistic at freeze-out in the early universe, and thus incompatible with current observations to account for the majority of the cold DM. One therefore must seek a solution beyond the SM. Since we do not know how DM couples (if at all) to the SM, it is important to explore a variety of models to understand in a comprehensive manner how non-gravitational DM interactions may manifest <cit.>. Since DM is presumably electrically neutral, it may be either the neutral component of an electroweak multiplet, as in the well-motivated weakly interacting massive particle (WIMP) paradigm, or alternatively it may be a Standard Model (SM) gauge singlet state. In the latter case of gauge singlet DM, an economical and predictive mechanism for mediating DM interactions to the SM is provided by the so-called “portals”: renormalizable interactions of DM through gauge singlet SM operators.
There are only three such portals in the SM: the Higgs portal <cit.>, the vector portal <cit.>, and the neutrino portal <cit.>. As applied to DM, the Higgs portal <cit.> and the vector portal <cit.> have been extensively investigated, while the neutrino portal option has received comparatively little attention, despite the strong motivation due to its connection to neutrino masses. In this paper we will examine a minimal model of neutrino portal DM in the simplest setup of a Type-I seesaw scenario <cit.>. The neutrino portal to DM relies on DM interactions being mediated by the right-handed neutrinos (RHNs). Since the RHNs are responsible for generating neutrino masses, one may typically expect the DM interaction strength with the SM to be very small since it is governed by the neutrino Yukawa coupling. In this case it is challenging to probe neutrino portal DM in accelerator experiments or in direct detection experiments. On the other hand, the DM coupling to the RHN can be sizable, thereby facilitating the efficient annihilation of DM to pairs of RHNs. This allows DM to be produced thermally in the early universe with the observed relic abundance and furthermore presents an opportunity to test the scenario through a variety of indirect detection channels. In this work we investigate the indirect detection signatures of neutrino portal DM. The scenario investigated here was first proposed in Ref. <cit.> and falls into the class of “secluded” DM scenarios. Some aspects of the thermal cosmology were investigated in Ref. <cit.>. Regarding indirect detection signatures, Ref. <cit.> explored a possible interpretation of the Fermi Galactic Center gamma ray excess <cit.> in terms of the DM annihilation to RHNs. Recently, Ref. <cit.> investigated the limits from gamma ray observations on DM annihilation to RHNs, although it did not explore the implications for specific particle physics models. Extensions of the simplest scenario, which include additional states and/or interactions, have also been discussed in Refs. <cit.>. Our work provides a comprehensive and updated analysis of the indirect detection phenomenology of neutrino portal DM. In particular, we present constraints from Planck cosmic microwave background (CMB) measurements, Fermi dwarf spheroidal galaxy and Galactic Center gamma-ray studies, and AMS-02 antiproton observations, and also describe the future prospects for Fermi and the Cherenkov Telescope Array. Thermal relic annihilation rates are already constrained for DM masses below about 50 GeV. This scenario can also provide a DM interpretation of the Fermi Galactic Center gamma ray excess, although we demonstrate that such an interpretation faces some tension from dSph and antiproton constraints. We also describe extensions of this scenario beyond the minimal model, including scenarios with large Yukawa and Higgs portal couplings, and highlight the potentially rich physics implications in cosmology, direct detection, and collider experiments. Besides these probes, there is also the interesting possibility of a hard gamma-ray spectral feature that arises from the radiative decays of N, which could place complementary constraints in the region m_χ∼ m_N, m_N ≲ 50 GeV. We will comment on this possibility below, and we refer the reader to Ref. <cit.> for a detailed study. The outline of the paper is as follows. In Section <ref> we describe a minimal neutrino portal DM model, outline the expected range of couplings and masses, and discuss the cosmology.
The primary analysis and results concerning the indirect detection limits and prospects are discussed in Section <ref>. In Section <ref> we describe several features and phenomenological opportunities present in non-minimal neutrino portal DM scenarios. Our conclusions are presented in Section <ref>. § NEUTRINO PORTAL DARK MATTER The simplest construction beyond the Standard Model to account for the neutrino masses is the introduction of right-handed neutrinos (RHN). Besides the usual Dirac mass terms arising from the Yukawa interactions, the RHN can also have a Majorana mass term since it is a SM gauge singlet. This is the traditional Type-I seesaw mechanism <cit.>. For the same reason of its singlet nature, N can serve as a mediator to the dark sector via the neutrino portal. A simple model of neutrino portal DM based on the Type-I seesaw contains three new fields, N, χ, ϕ, where N and χ are two-component Weyl fermions and ϕ is a real scalar field. They are charge-neutral with respect to the SM gauge interactions. The fermion N is identified as a RHN. We will assume that χ is lighter than ϕ, and they are charged under a Z_2 symmetry, which renders χ stable and a potential DM candidate. The Lagrangian has the following new mass terms and Yukawa interactions: L ⊃ -1/2 m_ϕ^2 ϕ^2 - [1/2 m_N NN + 1/2 m_χ χχ + y LHN + λ Nϕχ + h.c.], where L and H are the SM SU(2)_L lepton and Higgs doublets, respectively. There are two central features of this model. First, the RHN field N serves as a mediator between the dark sector fields χ, ϕ and the SM fields, due to the couplings λ and y. This mediation allows for non-gravitational signatures of the DM and a thermal DM cosmology. Second, after the Higgs obtains a vacuum expectation value, ⟨ H ⟩ = v/√(2) with v = 246 GeV, a small mass for the light SM-like neutrinos is generated via the seesaw mechanism: m_ν ∼ y^2 v^2/(2 m_N). Given the observed neutrino masses[In principle, we would need at least two right-handed states to generate the observed neutrino mass pattern. For our current interest, we will only focus on the lower-lying state N.], the Yukawa coupling y depends on the RHN mass, m_N. For instance, fixing m_ν ∼ √((Δ m_ν^2)^atm) ∼ 0.05 eV suggests a small neutrino Yukawa coupling of order y ≃ 10^-6 (m_N/v)^1/2. As we will discuss in more detail shortly, the requirement of thermal freeze-out of the DM puts an upper bound on the DM and RHN masses of less than 20 TeV. Therefore, the Yukawa couplings that we will be interested in will generally be quite small. It will thus be extremely difficult to produce the DM at accelerators, or directly detect it through its scattering with SM particles. However, there is an opportunity to probe this type of DM via indirect detection, and this will be the primary focus of this paper. As alluded to already, we will be interested in DM that is thermally produced in the early universe. The RHN mediator allows the dark sector to couple to the SM thermal bath in the early universe. Then, provided that m_χ > m_N and that all of the particles are sufficiently light, say below O(10 TeV), the DM can efficiently annihilate to RHNs, χχ → NN, and achieve the correct relic abundance. The process in Eq. (<ref>) is governed by the coupling λ, which is a priori a free parameter. The thermally averaged annihilation cross section is ⟨σ v⟩ = [Re(λ)^2 (m_χ+m_N) + Im(λ)^2 (m_χ-m_N)]^2/(16 π [m_ϕ^2 + m_χ^2 - m_N^2]^2) (1 - m_N^2/m_χ^2)^1/2. We observe that the annihilation cross section Eq. (<ref>) depends on the coupling λ and the masses m_χ, m_N, m_ϕ.
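To make the role of λ explicit, the short sketch below (the function name and unit conversion are ours) inverts Eq. (<ref>) for a real coupling, given a target cross section such as the thermal value discussed below; it also reproduces the ≈ 20 TeV unitarity bound quoted shortly:

import numpy as np

GEV2_TO_CM3S = 1.167e-17          # 1 GeV^-2 expressed in cm^3 s^-1

def lambda_required(m_chi, m_N, m_phi, sigv_cm3s):
    # invert Eq. (<ref>) for real lambda; all masses in GeV
    sigv = sigv_cm3s / GEV2_TO_CM3S                 # to natural units, GeV^-2
    beta = np.sqrt(1.0 - (m_N / m_chi) ** 2)        # final-state velocity factor
    lam4 = 16.0 * np.pi * (m_phi**2 + m_chi**2 - m_N**2)**2 * sigv \
           / ((m_chi + m_N)**2 * beta)
    return lam4 ** 0.25

sigv_th = 2.2e-26                                   # thermal value, cm^3 s^-1
print(lambda_required(100.0, 20.0, 100.0, sigv_th))            # ~0.2
print(np.sqrt(np.pi / (4.0 * sigv_th / GEV2_TO_CM3S)) / 1e3)   # ~20 (TeV)

For weak-scale masses the required coupling is O(0.1) and comfortably perturbative; for fixed masses and cross section, the coupling is minimized for m_ϕ = m_χ.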
However, the indirect detection signatures that we will investigate will depend in a detailed way only on the size of the annihilation cross section ⟨σ v⟩, which determines the rate, as well as the masses m_χ and m_N, which will affect the energy spectrum of the SM annihilation products. Thus, it will be more convenient to simply work with the three parameters {⟨σ v⟩, m_χ, m_N}. Note that for a given set of masses, one can always obtain the desired cross section by an appropriate choice of the coupling λ through Eq. (<ref>), provided the coupling remains perturbative. We will discuss this point in detail shortly. We can restrict the parameter space further if we demand that the DM saturates the observed relic density. For Majorana fermion DM the observed relic abundance is obtained for <cit.> ⟨σ v⟩_thermal = 2.2 × 10^-26 cm^3 s^-1. Once we fix the annihilation cross section to saturate the observed relic abundance, then all of the physics can be characterized in terms of the two masses m_χ and m_N. Parameter choices that predict cross sections smaller than (<ref>) overproduce the DM. We now discuss the expected range of masses and couplings of the new states in the model. A first constraint comes from demanding that the coupling λ be perturbative and thus the theory be predictive. Assuming m_N ≪ m_χ, the partial-wave perturbative unitarity bound for the DM annihilation amplitude requires that λ < √(4π). The over-closure and perturbative unitarity constraints lead to the bound m_χ ≲ √(π/(4⟨σ v⟩_thermal)) ≈ 20 TeV, which is in broad agreement with the general analysis of Ref. <cit.>. Furthermore, there are a variety of limits on the right-handed neutrino N, which depend on its mass and mixing angle with active neutrinos. In particular, for seesaw-motivated mixing angles, the lifetime of N is typically longer than O(1 s) for m_N ≲ 1 GeV, and is thus constrained by Big Bang Nucleosynthesis <cit.>. Then, considering m_χ > m_N in order to obtain an efficient DM annihilation cross section, we will consider in this paper masses in the range 1 GeV < m_N < m_χ ≲ 20 TeV. The discussion above assumes a standard thermal history for the DM particle χ, which relies on χ being in equilibrium with the plasma. Since the dark sector particles χ and ϕ have no direct couplings to the SM, it is the RHN that is ultimately responsible for keeping χ and ϕ in equilibrium. It is therefore important that N remain in equilibrium with the SM during the freezeout process. The relevant processes to consider are the decay and inverse decays of N to the SM. This question has been investigated recently in Ref. <cit.>[See Ref. <cit.> for a similar discussion in the context of right-handed sneutrino DM.]. For Yukawa couplings dictated by the naive seesaw relation, these processes are very efficient when m_N ≳ m_W, since N decays through a two-body process. However, if N is light, m_N ≲ m_W, the three-body decays of N become inefficient and N can fall out of equilibrium. As a consequence, an annihilation cross section larger than the canonical thermal relic value by some order-one factor is required in the early universe to efficiently deplete the χ abundance, as explored in detail in Ref. <cit.>. A detailed investigation of the cosmology is beyond the scope of this paper, but we will take the standard thermal value for the annihilation cross section as a motivated benchmark. Besides the terms in Eq. (<ref>), an additional Higgs portal coupling, ϕ^2 |H|^2, is allowed in the model.
This interaction provides an alternative means to keep ϕ, χ, and N in thermal equilibrium with the SM. We will assume for now that this coupling is small so that the phenomenology is dictated by the minimal neutrino portal interaction. However, a large Higgs portal coupling can lead to a variety of interesting effects, and we will discuss this topic in Section <ref>. § INDIRECT DETECTION CONSTRAINTS AND PROSPECTS We now come to the main subject of this work: the constraints and prospects for indirect detection of neutrino portal DM. We will investigate several indirect signatures of DM annihilation in this scenario, including observations of the CMB, gamma rays, and antiprotons. For each of these indirect probes the relevant underlying reaction is DM annihilation to RHNs as in Eq. (<ref>), followed by the weak decays of the RHNs to SM particles due to mixing. We will therefore require the energy spectrum dN/dE per DM annihilation in the photon, electron, and antiproton channels as an input to our further analysis below. To compute these spectra we first simulate the decay of RHNs to SM particles in the N rest frame using <cit.> in conjunction with the model files <cit.>. These partonic events are then passed to <cit.> for showering and hadronization, thereby yielding the prediction for the resulting photon, electron, and antiproton spectra coming from the N decay, dN'_i/dE' for i = γ, e^-, p̅. These events are then boosted to the DM rest frame according to the formula (see, e.g., Refs. <cit.> for the case of massless particles): dN_i/dE = ∫_γ(E-β√(E^2-m^2))^γ(E+β√(E^2-m^2)) dE'/(2βγ√(E'^2-m^2)) dN'_i/dE',    γ = (1-β^2)^-1/2 = m_χ/m_N, where m is the mass of the boosted particle, i.e., photons, electrons, or antiprotons; see Appendix A for a derivation of Eq. (<ref>). This gives the prediction for the required spectrum in each channel. We note that spin correlations are not accounted for in our simulation, but these are expected to have only a modest effect on the broad continuum spectra of interest to us (see Ref. <cit.> for an explicit example where this expectation is borne out). We display in Figure <ref> examples of the predicted continuum γ-ray, electron, and antiproton spectra (E_i^2 dN_i/dE_i versus E_i for i = γ, e^-, p̅), where we have fixed the DM mass to be m_χ = 200 GeV and chosen three values for the RHN masses: m_N = 20 GeV (solid), 50 GeV (dashed), 100 GeV (dotted). Here we have assumed that N couples solely to the first generation (electron-type) lepton doublet. In the case of the γ-ray and antiproton spectra, one observes a broad spectrum that peaks in the O(10 GeV) range. The location of the peak is largely dictated by the DM mass, which controls the total injected energy. There is a mild sensitivity to the RHN mass, with harder spectra resulting from a larger mass gap between the DM and RHN. For the electron case, in addition to the continuum component, there is a hard component resulting from the primary N → W e decay, which is clearly seen in Figure <ref>. In this work we will restrict ourselves to the case in which N couples to the electron-type lepton doublet, but it is worth commenting on the cases of couplings to muon and/or tau flavor. In these cases, we have checked that the continuum spectra are very similar to the electron-flavor case, as is expected since these particles dominantly originate from decays of the electroweak bosons. The primary difference for muon or tau-flavor couplings will be the absence of the hard electron component from the primary N decay.
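A minimal numerical transcription of the boost in Eq. (<ref>) is given below (a sketch: dNdE_rest stands for an interpolation of the binned rest-frame spectrum from the simulation, and kinematic edge cases are not handled):

import numpy as np
from scipy.integrate import quad

def boosted_spectrum(dNdE_rest, E, m, m_chi, m_N):
    # boost the N rest-frame spectrum dN'/dE' to the DM frame, Eq. (<ref>);
    # requires m_chi > m_N and E >= m
    gamma = m_chi / m_N
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    p = np.sqrt(E * E - m * m)                      # daughter momentum
    lo, hi = gamma * (E - beta * p), gamma * (E + beta * p)
    kern = lambda Ep: dNdE_rest(Ep) / (2.0 * beta * gamma
                                       * np.sqrt(Ep * Ep - m * m))
    return quad(kern, lo, hi, limit=200)[0]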
The electron spectrum will be used below as an input to the CMB bounds, so one may expect a mild difference in the resulting limits in the case of muon or tau flavor couplings. We now present in turn the constraints on neutrino portal DM from the Planck cosmic microwave background measurements, Fermi observations of gamma rays from the Galactic Center and from dwarf spheroidal galaxies, and AMS-02 observations of antiprotons. A summary of these constraints, as well as a discussion of other indirect searches not considered here, and an analysis of the future prospects, is presented below in Section <ref>. §.§ Cosmic Microwave Background The Cosmic Microwave Background (CMB) provides a sensitive probe of DM annihilation around the epoch of recombination. In particular, if the annihilation products include energetic electrons and photons, the photon-baryon plasma can undergo significant heating and ionization as these particles are injected into the bath, modifying the ionization history and altering the temperature and polarization anisotropies. Using precise measurements of the CMB by a number of experiments, including WMAP <cit.>, SPT <cit.>, ACT <cit.>, and Planck <cit.>, robust, model-independent constraints on DM annihilation have been derived by several groups <cit.>. The relevant quantity of interest for DM annihilation during recombination is the energy absorbed by the plasma per unit volume per unit time at redshift z, dE/dV dt = ρ_c^2 Ω_χ^2 (1+z)^6 [f(z) ⟨σ v⟩/m_χ], where ρ_c is the critical density of the Universe today and Ω_χ is the DM density parameter today. The production of neutrinos as daughter particles, and the free-streaming of electrons and photons after creation until their energy is completely deposited into the intergalactic medium (IGM) (via photoionization, Coulomb scattering, Compton processes, bremsstrahlung, and recombination), affect the efficiency of energy deposition. This is accounted for in Eq. (<ref>) by the efficiency factor, f(z), which gives the fraction of the injected energy that is deposited into the IGM at redshift z and depends on the spectrum of photons and electrons arising from DM annihilations. Furthermore, since the CMB data are sensitive to energy injection over a narrow range of redshift, i.e., z ≈ 600-1000, f(z) can be well-approximated by a constant parameter f_eff. The additional energy injection from DM annihilation in Eq. (<ref>) alters the free electron fraction (the abundance ratio of free electrons to hydrogen), which in turn affects the ionization history. These effects are quantitatively accounted for with new terms in the Boltzmann equation describing the evolution of the free electron fraction. The additional terms are added to the baseline ΛCDM code and used to derive limits on the energy release from DM annihilation. Planck sets a limit on the particle physics factors in Eq. (<ref>), f_eff(m_χ)⟨σ v⟩/m_χ < 4.1 × 10^-28 cm^3 s^-1 GeV^-1, which is obtained from temperature and polarization data (TT,TE,EE+lowP) <cit.>. To apply the Planck constraints of Eq. (<ref>) to the neutrino portal DM model, it remains to compute the efficiency factor f_eff(m_χ) in our model. We use the results of Ref.
<cit.>, which provides f^γ (e^-)_eff(E) curves for photons and electrons, to compute a weighted average with the photon/electron spectrum (dN/dE)_γ, e^- predicted in our model according to f_eff(m_χ) = 1/(2 m_χ) ∫_0^m_χ dE E [2f_eff^e^-(E)(dN/dE)_e^- + f_eff^γ(E)(dN/dE)_γ]. The photon and electron spectra for each DM and RHN mass point are computed with the Monte Carlo simulation described at the beginning of this section and are displayed for a few benchmarks in Figure <ref>. Using these spectra and Eqs. (<ref>) and (<ref>), we obtain a limit on the annihilation cross section from the CMB as a function of m_χ and m_N. These limits are displayed in Figure <ref> as contours of the 95% C.L. upper limit on log_10[⟨σ v⟩/(cm^3 s^-1)] (black curves) from the CMB from Planck <cit.> in the m_χ - m_N plane. The thick (red) line indicates the region where the cross section limit is equal to the thermal relic value of Eq. (<ref>). The constraints on the annihilation cross section are translated to limits on the minimum value of the coupling constant λ (which occurs for m_ϕ = m_χ) as shown by the vertical (blue) lines. The shaded (blue) region indicates where the perturbative unitarity bound is violated, λ > √(4π). Since the efficiency factor f_eff is essentially constant over a broad range of m_χ, Eq. (<ref>) implies that the limit on ⟨σ v⟩ scales with m_χ irrespective of the value of m_N, and this feature is clearly present in Figure <ref>. We observe that Planck is able to constrain the thermal relic value of Eq. (<ref>) for DM masses below about 20 GeV. A small feature in the limit contour is apparent in the region near m_W ≲ m_N ≲ m_Z. This is a consequence of the dominance of the two-body decay N → W ℓ in this small mass window. §.§ Gamma rays from the Galactic Center One of the primary signatures of DM annihilation is high-energy gamma rays. In comparison to other cosmic ray signatures involving electrically charged particles, gamma rays are essentially unperturbed by magnetic fields and the astrophysical environment as they travel to us from their source, yielding information about both the energy and location of the underlying DM reaction. One can search for both gamma ray line signatures and a continuum signal. While a line signature is unfortunately not present in the neutrino portal DM model, there can be a distinct continuum gamma ray signal, and this will be the subject of investigation here. Significant advances in our study of the gamma-ray sky have been achieved over the past several years by the Fermi Gamma Ray Space Telescope, and data from the Fermi collaboration can be used to probe DM annihilation over a wide range of models and DM masses. In this section we will consider gamma ray signatures from the center of the Milky Way. The Galactic Center has long been recognized as the brightest source of DM-induced gamma rays, a consequence of its proximity and the rising DM density in this region. At the same time, extracting a signal from this region is challenging due to significant and not well-understood astrophysical backgrounds. Below we will also investigate gamma ray signals from dwarf spheroidal galaxies, which provide a cleaner, albeit dimmer, source of gamma rays.
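As a brief aside before turning to the Galactic Center data: the CMB pipeline of the previous subsection reduces to a weighted quadrature followed by a rescaling. A sketch, where feff_e, feff_g, dNdE_e, and dNdE_g stand for interpolations of the published efficiency curves and of our simulated spectra (illustrative names):

import numpy as np

def f_eff(m_chi, E, feff_e, feff_g, dNdE_e, dNdE_g):
    # weighted average of Eq. (<ref>) on a grid E spanning (0, m_chi)
    w = E * (2.0 * feff_e(E) * dNdE_e(E) + feff_g(E) * dNdE_g(E))
    return np.trapz(w, E) / (2.0 * m_chi)

def sigv_limit_cmb(m_chi, feff):
    # 95% C.L. Planck bound of Eq. (<ref>), in cm^3 s^-1 (m_chi in GeV)
    return 4.1e-28 * m_chi / feff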
The quantity of interest for gamma ray signals of DM annihilation is the gamma ray flux per unit energy per unit solid angle in a given direction, Φ_γ(E,n̂), where E is the energy and n̂ is a unit vector along the path of the line of sight. The gamma ray flux can be written as Φ_γ(E,n̂) = 1/(4π) [⟨σ v⟩/(2m_χ^2) dN_γ/dE] J(n̂). The term in square brackets in Eq. (<ref>) above depends only on the underlying particle physics properties of the DM model, including m_χ, ⟨σ v⟩, and the spectrum of photons emitted per DM annihilation, dN_γ/dE. This spectrum is shown in Figure <ref> for the channel χχ → NN for several choices of χ and N masses. The quantity J(n̂) in Eq. (<ref>), also called the J-factor, depends only on astrophysics and involves an integral over the DM density profile ρ_χ(r) that runs along the path of the line of sight defined by n̂: J(n̂) = ∫_l.o.s. ρ_χ^2(r) dl. In practice, the J-factor is averaged over a particular region of interest relevant for the analysis. The J-factor depends sensitively on the DM distribution and can vary by several orders of magnitude depending on this assumption, which translates into a substantial uncertainty in the derived annihilation cross section limit. At present, there is no consensus on the expected DM halo profile. Cuspy profiles such as NFW <cit.> or Einasto <cit.> find support from N-body simulations <cit.>. These simulations only involve DM, and the inclusion of baryonic processes may significantly impact the shape of the profile, especially towards the inner region of the Milky Way. However, even the qualitative nature of the resulting DM distribution is a matter of debate, and it is possible that the resulting profile is either steepened <cit.> or flattened <cit.> due to baryonic effects. Besides the assumption of the DM distribution, a separate, smaller O(1) uncertainty arises from the overall normalization of the profile, which is fixed to match the local DM density ρ_0 <cit.>. The current situation regarding the observed gamma ray flux from the Galactic Center is somewhat murky. A number of analyses, starting from the works of Goodenough and Hooper <cit.> and culminating most recently in the Fermi analysis <cit.>, have found a broad excess of gamma rays from the Galactic Center, which peaks in the 1-3 GeV range. All analyses conclude that there is a highly statistically significant excess above the currently accepted diffuse background models (see for example Refs. <cit.>). However, the origin of these gamma rays is still not clear. While there has been a significant effort devoted to possible DM interpretations, recently it has been argued that the excess is more likely to be a new population of unresolved point sources, which would disfavor the simplest DM interpretations <cit.> (see however <cit.>). It is certainly interesting to speculate on a possible DM origin, and we will carry out this exercise below in Section <ref>. Here we will instead take a conservative approach and use the Fermi data to place limits on DM annihilation. To obtain limits on the neutrino portal DM scenario, we use the model-independent results of Ref. <cit.>. In that work, four years of data from the Fermi Large Area Telescope was used to construct maps of the gamma ray flux in the region around the Galactic Center in four energy bins in the range from 300 MeV to 100 GeV. Background templates from known point sources and emission from the Galactic Disk are then subtracted to yield the residual flux.
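As an aside, the line-of-sight integral of Eq. (<ref>) is straightforward to evaluate numerically; the sketch below assumes an NFW profile with scale radius r_s = 20 kpc, normalized to ρ_0 = 0.3 GeV cm^-3 at r_⊙ = 8.5 kpc (the parameter values and function names are illustrative):

import numpy as np
from scipy.integrate import quad

R_SUN, R_S, KPC_CM = 8.5, 20.0, 3.086e21   # kpc, kpc, cm per kpc

def rho_nfw(r):
    # NFW profile, normalized to 0.3 GeV cm^-3 at the solar radius
    x, x0 = r / R_S, R_SUN / R_S
    return 0.3 * x0 * (1.0 + x0)**2 / (x * (1.0 + x)**2)

def J(psi, l_max=100.0):
    # J(psi) = int rho^2 dl along a line of sight at angle psi from the GC
    r = lambda l: np.sqrt(R_SUN**2 + l**2 - 2.0*R_SUN*l*np.cos(psi))
    return quad(lambda l: rho_nfw(r(l))**2, 0.0, l_max, limit=200)[0] * KPC_CM

print(J(np.radians(5.0)))   # GeV^2 cm^-5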
Assuming that DM annihilation accounts for the remaining emission, the authors then place limits on DM annihilation for several choices of halo profiles. This procedure yields conservative limits since it is expected that additional background sources, such as the central supermassive black hole, unresolved point sources, and cosmic ray interactions with the gas, also contribute significantly to the residual emission. Limits on the particle physics factor that governs the gamma ray flux, (⟨σ v⟩/m_χ^2)∫ dE dN_γ/dE, are provided in Ref. <cit.>. For the neutrino portal DM model, we can use these results to derive a limit on the annihilation cross section for the process χχ → NN as a function of the DM and RHN masses. In Figure <ref> we show contours of the 95% C.L. upper limit on the annihilation cross section in the m_χ - m_N plane, labelled by the black curves. These limits are derived under the assumption of an NFW profile and a local DM density ρ_0 = 0.3 GeV cm^-3. We see that under these assumptions, the Fermi data probes the thermal relic cross sections of Eq. (<ref>) for m_χ ≲ 10 GeV (thick red contour). The constraints on the annihilation cross section are again translated to limits on the minimum value of the coupling constant λ, as shown by the vertical (blue) lines. The shaded (blue) region indicates the perturbative unitarity bound. However, we again emphasize that there are significant uncertainties associated with the halo profile, and the limits will become stronger (weaker) by a factor of a few to 10 (depending of course on the detailed shape) if one assumes a contracted (cored) DM distribution <cit.>. We observe a small feature near m_W ≲ m_N ≲ m_Z where the two-body decay N → W ℓ dominates. §.§ Gamma rays from dwarf spheroidal galaxies Gamma ray observations of dwarf spheroidal satellite galaxies (dSphs) of the Milky Way offer a promising and complementary indirect probe of DM annihilation. There are several reasons to consider dSphs. They are DM-dominated, having mass-to-light ratios in the 10-2000 range. Being satellites of the Milky Way, the dSphs are nearby. There are many of them, O(40), allowing for a joint analysis to increase statistics. And, crucially, while the Galactic Center provides a significantly brighter source of DM, the dSphs are known to have substantially smaller astrophysical gamma-ray backgrounds in comparison to the Galactic Center, making them very clean sources for indirect searches. The Fermi-LAT collaboration has analyzed 6 years of gamma ray data from Milky Way dSphs, finding no significant excess above the astrophysical backgrounds <cit.>. Here we will discuss the implications of these null results for the neutrino portal DM scenario. The Fermi analysis <cit.> is based on a joint maximum likelihood analysis of 15 dSphs for gamma ray energies in the 500 MeV - 500 GeV range. The quantity of interest in the likelihood analysis is the energy flux, φ_k,j = ∫_E_j,min^E_j,max E Φ_γ,k(E) dE, for the kth dwarf and jth energy bin. For each dwarf and energy bin, Fermi provides the likelihood, L_k,j, as a function of φ_k,j. The likelihood function accounts for instrument performance, the observed counts, exposure, and background fluxes. For a given DM annihilation channel, the energy flux depends on m_χ, ⟨σ v⟩, and J_k (the J-factor of the dSph – see Eq. (<ref>)) according to Eqs. (<ref>,<ref>,<ref>), i.e., φ_k,j = φ_k,j(m_χ, ⟨σ v⟩, J_k).
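Concretely, combining Eqs. (<ref>), (<ref>), and (<ref>), the predicted energy flux entering the likelihood is a product of a particle-physics factor and the J-factor; a minimal sketch (dNdE is the interpolated photon spectrum; units as indicated in the comments):

import numpy as np
from scipy.integrate import quad

def energy_flux(m_chi, sigv, J_k, dNdE, E_lo, E_hi):
    # phi_{k,j} of Eq. (<ref>): sigv in cm^3 s^-1, m_chi in GeV,
    # J_k in GeV^2 cm^-5; returns GeV cm^-2 s^-1
    pref = sigv * J_k / (8.0 * np.pi * m_chi**2)   # 1/(4 pi) * sigv/(2 m^2)
    return pref * quad(lambda E: E * dNdE(E), E_lo, E_hi)[0]

In the joint fit described next, J_k is shifted about its measured central value when the J-factors are profiled over as nuisance parameters.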
The likelihood for a given dwarf, L_k, is L_k(m_χ, ⟨σ v⟩, J_k) = L N(J_k|J̅_k,σ_k) ∏_j L_k,j(φ_k,j(m_χ, ⟨σ v⟩, J_k)), where L N accounts for statistical uncertainty in the J-factor determination (from the stellar kinematics in the dSphs), incorporated as a nuisance parameter in the likelihood. The Fermi-LAT collaboration employs a log-normal distribution parameterized by J̅_k, σ_k: L N(J_k|J̅_k,σ_k) = 1/(ln(10) J_k √(2π) σ_k) e^-(log_10(J_k)-log_10(J̅_k))^2/(2σ_k^2), where J_k is the true value of the J-factor and J̅_k is the measured J-factor with error σ_k on the quantity log_10 J̅_k. The combined likelihood for all the dwarfs is then L(m_χ, ⟨σ v⟩, {J_i}) = ∏_k L_k(m_χ, ⟨σ v⟩, J_k), where {J_i} is the set of J-factors. Given that no significant excess is observed, a delta-log-likelihood method is used to set limits on DM model parameters, treating the J-factors as nuisance parameters. The delta-log-likelihood Δln L is given by Δln L(m_χ, ⟨σ v⟩) = ln L(m_χ, ⟨σ v⟩, {Ĵ_i}) - ln L(m_χ, ⟨σ v⟩_b.f., {J_i^b.f.}), where ⟨σ v⟩_b.f. and {J_i^b.f.} are the values of ⟨σ v⟩ and {J_i} that jointly maximize the likelihood at the given m_χ, and {Ĵ_i} = {Ĵ_i(m_χ,⟨σ v⟩)} are the values of the J-factors that maximize the likelihood for a given m_χ and ⟨σ v⟩. A 95% C.L. upper limit is then defined by demanding -Δln L(m_χ, ⟨σ v⟩) ≤ 2.71/2. We follow a similar approach to the Fermi prescription defined above, with one minor modification to speed up the numerical optimization. Rather than optimize over each of the 15 nuisance J-factors for each dSph, we introduce a single parameter, δ, which represents the deviation of the J-factors of the dwarfs from their central values according to log_10(J_k) = log_10(J̅_k) + δ σ_k. Since no gamma-ray excess is observed in any individual dSph, it is reasonable to expect that the fit tends to move all J-factors up or down simultaneously depending on the assumed values of m_χ and ⟨σ v⟩, an effect that is captured well by our δ prescription. As a validation, we have checked that our prescription reproduces the Fermi limits on DM annihilation in the b b̅ channel <cit.> at the 10-20% level throughout the entire mass range. Using the gamma ray spectra produced with the Monte Carlo simulation described at the beginning of this section (examples are shown in Figure <ref>), we derive limits on the neutrino portal DM model for the channel χχ → NN. In Figure <ref> we show contours of the 95% C.L. upper limit on the annihilation cross section in the m_χ - m_N plane. The Fermi data from the Milky Way dSphs are able to probe thermal relic cross sections (<ref>) for m_χ ∼ 40 - 80 GeV, as shown by the thick (red) line, depending on the mass of the RHN [Our annihilation cross section limits are weaker than those derived in Ref. <cit.> by roughly a factor of two. We have not been able to find the source of the discrepancy, although it is perhaps possible to attribute the difference to the uncertainties in the dSph J-factors. We are grateful to Farinaldo Queiroz for correspondence on this issue.]. The vertical (blue) lines and the associated numbers show the limits on the minimum value of the coupling constant λ. The shaded (blue) region indicates the perturbative unitarity bound. In the region m_W ≲ m_N ≲ m_Z the two-body decay N → W ℓ opens up and saturates the branching ratio, which is clearly seen in Figure <ref>. §.§ Antiprotons Antiprotons (p̅) have long been recognized as a promising indirect signature of DM.
While DM annihilation typically produces equal numbers of protons and antiprotons, the astrophysical background flux of antiprotons is very small in comparison to that of protons. On the other hand, describing the production and propagation of these charged hadrons is a challenging task, and any statement regarding DM annihilation rests on our ability to understand the associated astrophysical uncertainties. The Alpha Magnetic Spectrometer (AMS-02) experiment has provided the most precise measurements of the cosmic ray proton and antiproton flux to date <cit.>, and here we will explore the implications of these data for our neutrino portal DM scenario. Since DM annihilates to RHNs, which subsequently decay via W, Z, and Higgs bosons, the resulting cascade decay, showering and hadronization produce a variety of hadronic final states, including antiprotons. AMS-02 therefore provides an important probe of the model. The propagation of antiprotons through the galaxy to earth is described by a diffusion equation for the distribution of antiprotons in energy and space (see, e.g., Ref. <cit.> and references therein). The transport is modeled in a diffusive region taken to be a cylindrical disk around the galactic plane and is affected by several physical processes. These include diffusion of the antiprotons through the turbulent magnetic fields, convective winds that impel antiprotons outward, energy loss processes, solar modulation, and a source term describing the production and loss of antiprotons. The source term accounts for astrophysical sources, such as secondary and tertiary antiprotons and antiproton annihilation with the interstellar gas, as well as primary antiprotons produced through DM annihilation. The propagation depends on a number of input parameters, and a set of canonical models, called MIN, MED and MAX, is often employed <cit.>. The diffusion equation is solved assuming the steady state condition to find the flux of antiprotons from DM annihilation at earth, Φ_p̅,χ(K) = (v_p̅/4π) (ρ_0/m_χ)^2 R(K) (1/2)⟨σ v⟩ dN_p̅/dK, where dN_p̅/dK is the kinetic energy (K) spectrum of antiprotons per DM annihilation, v_p̅ is the antiproton velocity, and ρ_0 is the local DM density. The propagation function R(K) accounts for the astrophysics of production and propagation, and we use the parameterization provided in Ref. <cit.>. AMS-02 has provided precise measurements of the proton flux, Φ_p(K) <cit.>, and the antiproton-to-proton flux ratio, r(K) <cit.>, which can be used to place constraints on DM annihilation. To proceed, we require an estimate of the secondary background antiproton flux originating from astrophysical sources. For this purpose we use the best-fit secondary flux, Φ_p̅,bkg(K), from <cit.>, which provides an acceptable fit to the AMS-02 data. With the total antiproton flux, Φ_p̅,tot(K,m_χ,⟨σ v⟩) = Φ_p̅,bkg(K) + Φ_p̅,χ(K,m_χ,⟨σ v⟩), and the measured proton flux from AMS-02, Φ_p(K), in hand, we form the ratio of these two fluxes and fit it to the observed ratio. The test statistic is χ^2(m_χ,⟨σ v⟩) = ∑_i [r(K_i) - (Φ_p̅,tot(K_i,m_χ,⟨σ v⟩)/Φ_p(K_i))]^2/σ_i^2, where i runs over energy bins and σ_i is the reported uncertainty of the flux ratio <cit.>. Following Ref. <cit.>, we define a limit on ⟨σ v⟩ as a function of m_χ and m_N according to the condition χ^2(m_χ,⟨σ v⟩) - χ_0^2 ≤ 4, where χ_0^2 is the best-fit chi-squared statistic assuming no primary DM antiproton source, taken from Ref. <cit.>. The limit is derived under the assumption of an Einasto profile and using the MED propagation scheme.
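Schematically, this limit condition can be implemented in a few lines once the fluxes are tabulated. The sketch below is ours, not the analysis code of the cited references; it uses the fact that the DM antiproton flux above scales linearly with ⟨σ v⟩, and all input arrays (observed ratio, uncertainties, proton flux, background and reference DM fluxes evaluated at the AMS-02 energies) are assumed to be precomputed.

    import numpy as np
    from scipy.optimize import brentq

    SV_REF = 1e-26  # cm^3 s^-1; the DM flux scales linearly with <sigma v>

    def chi2(sigma_v, r_obs, sig_r, phi_p, phi_bkg, phi_dm_ref):
        """chi^2 of the pbar/p ratio; phi_dm_ref is the DM flux at SV_REF."""
        phi_tot = phi_bkg + (sigma_v / SV_REF) * phi_dm_ref
        return np.sum((r_obs - phi_tot / phi_p)**2 / sig_r**2)

    def sigma_v_limit(r_obs, sig_r, phi_p, phi_bkg, phi_dm_ref, chi2_0):
        """Upper limit on <sigma v> from chi^2 - chi2_0 = 4."""
        f = lambda sv: chi2(sv, r_obs, sig_r, phi_p, phi_bkg,
                            phi_dm_ref) - chi2_0 - 4.0
        return brentq(f, 1e-32, 1e-20)  # bracket assumed to contain the crossing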
Contours of the limit on the annihilation cross section in the m_χ-m_N plane are displayed in Figure <ref>. For DM masses in the range of 20 - 80 GeV, AMS-02 is able to probe the thermal cross section, Eq. (<ref>), as indicated by the thick (red) line. The vertical (blue) lines show the limits on the minimum value of the coupling constant λ. The shaded (blue) region indicates the perturbative unitarity bound. It is important to note again that there are significant uncertainties associated with the DM halo profile and the propagation scheme, which can lead to a variation in the cross section limits by one order of magnitude or more <cit.>. Note that for a fixed m_χ, the limits in Figure <ref> become stronger as m_N is increased. This is because, for fixed m_χ, heavier RHNs tend to produce more low energy antiprotons (see Figure <ref>), while the measured ratio r(K) agrees well with the astrophysical background model at low values of the kinetic energy K and shows only a slight excess at larger K; softer antiproton spectra are therefore more tightly constrained, which explains the behavior seen in Figure <ref>.

§.§ Summary of limits and future prospects

In Figure <ref> we show the combined limits on the neutrino portal DM model for the case in which the annihilation cross section is fixed to the thermal value, ⟨σ v⟩ = 2.2 × 10^-26 cm^3 s^-1. Constraints from Planck CMB measurements, Fermi observations of gamma-rays from the Galactic Center and dSphs, and AMS-02 antiproton measurements are shown. We remind the reader that the Fermi Galactic Center limits are derived for the choice of an NFW halo profile, while the AMS-02 antiproton limits are based on an Einasto profile and the MED propagation scheme. Under the stated assumptions, we conclude that thermal annihilation is constrained for DM masses up to 50-70 GeV, depending on the RHN mass. AMS-02 provides the best probe in the case m_N ≲ m_χ, while Fermi dSphs provide the superior constraint for m_N ≪ m_χ. We have also illustrated the impact of astrophysical uncertainties on the antiproton and dSphs limits in Figure <ref>. For the antiproton constraints, we show a Burkert profile with MED propagation (green dotted line) and an Einasto profile with MAX propagation (green dashed line). For the dSphs, we show log_10(J_k) = log_10(J̅_k) - 2σ_k (blue dotted line) and log_10(J_k) = log_10(J̅_k) + 2σ_k (blue dashed line). There are several other notable indirect DM searches that we wish to comment on here. AMS-02 has provided detailed measurements of the cosmic ray positron spectrum <cit.>. Much attention has been paid to these results (and those of its forerunner PAMELA <cit.>) due to the observation of a striking rise in the fractional positron flux, which potentially points to a new primary source of positrons. While it is true that DM annihilation in our scenario produces a significant positron flux, the cross section limits from Fermi dSphs gamma rays and AMS-02 antiproton observations are expected to be stronger than those from AMS-02 positron measurements by an order of magnitude or more, and thus we have chosen to focus on these stronger tests. Another well-known indirect DM probe is high energy neutrinos from DM annihilation in the sun, which can be probed with the IceCube experiment <cit.>. But under the minimal assumption of typical seesaw values for the neutrino Yukawa coupling (see Eq.
(<ref>)) the DM-nucleon scattering rate will be too small to allow for the efficient capture of DM in the sun, so we do not consider this possibility further. Along with the continuum gamma-ray signatures studied here, there is also the possibility of a harder gamma-ray spectral feature that arises from the radiative decay N →γν <cit.>. This signature will be relevant in the region m_χ∼ m_N, m_N ≲ 50 GeV. For the benchmark thermal relic cross section, there are already relevant limits in this region from AMS-02 (see Figure <ref>), which however are subject to sizable astrophysical uncertainties. In that regard, the spectral “triangle” feature would provide a complementary probe. On the one hand, the hard spectral feature has the advantage of being more easily discernible over the power law background, while at the same time the overall rate is expected to be significantly smaller than the gamma-ray continuum signal due to its radiative origin. A full quantitative study of this signature goes beyond our scope here and we refer the reader to Ref. <cit.> for further details. As we have demonstrated, the data collected so far by Fermi-LAT already lead to stringent limits on the DM parameter space, and the sensitivity will improve significantly in the coming years. The projected sensitivities for 10 and 15 years of data taking have been studied in detail by the collaboration in Ref. <cit.>. The rapid discovery of new dSphs is the primary upcoming change in dSph-targeted DM searches. The identification of new dSph candidates by the Dark Energy Survey (DES) <cit.> over the past two years, if confirmed, will double the number of known dSphs. Following on important discoveries of the Sloan Digital Sky Survey (SDSS) <cit.>, which covered 1/3 of the sky and discovered 15 ultra-faint dSphs, surveys like DES and especially the Large Synoptic Survey Telescope (LSST) <cit.> will cover complementary regions of the sky and are expected to discover potentially O(100) dSphs. Ref. <cit.> takes 60 total dSphs as an estimate of the number of dSphs that can be used for LAT searches, and finds that the sensitivity of searches targeting dwarf galaxies will improve faster than the square root of the observing time. Following Ref. <cit.>, we expect an improvement in the cross section limit from 15 years of Fermi-LAT dSph observations by a factor of a few, which will probe thermal relic DM with masses m_χ≳ 100 GeV in the neutrino portal DM scenario. Due to their large effective areas, ground-based imaging air Cherenkov telescopes (IACTs), such as H.E.S.S. <cit.>, VERITAS <cit.>, and MAGIC <cit.>, in the future CTA <cit.>, as well as the water Cherenkov array HAWC <cit.>, are well suited to search for higher energy gamma rays originating from heavy DM annihilation. In particular, H.E.S.S. has presented a search for DM annihilation towards the Galactic Center using 10 years of data <cit.>. Assuming a cuspy NFW or Einasto profile, the search sets the strongest limits on TeV mass DM that annihilates to WW or quarks, and almost reaches thermal annihilation rates. Taken at face value, the H.E.S.S. limits are indeed stronger than the Fermi dSphs limits for DM masses above a few hundred GeV, but they are less robust due to the inherent astrophysical uncertainties associated with the central region of the Milky Way, both in terms of conventional gamma-ray sources and the DM distribution.
The H.E.S.S. data are not publicly available, so unfortunately we are not able to properly recast their limit. However, for a fixed DM mass, the continuum photon spectrum produced in our model from χχ→ NN is qualitatively similar to the spectrum produced by χχ→ WW. We can therefore obtain a rough estimate of the H.E.S.S. sensitivity by translating their limits in the WW channel to our parameter space. The H.E.S.S. limits are approaching the canonical thermal relic annihilation rate for DM masses around 1 TeV. In the future, the Cherenkov Telescope Array (CTA) will be able to further probe heavy TeV-scale DM annihilation, with the potential to improve by roughly an order of magnitude in cross section sensitivity over current instruments, depending on the annihilation mode and DM mass. Here we estimate the sensitivity of future CTA gamma-ray observations of the Galactic Center using the “Ring” method <cit.>. Our projections are based on a simplified version of the analysis carried out in Ref. <cit.>, which we now briefly describe. The analysis begins with the definition of signal (referred to as “ON”) and background (“OFF”) regions. A binned Poisson likelihood function is constructed in order to compare the DM model μ to a (mock) data set n: L(μ|n) = ∏_i,j μ_ij^n_ij/n_ij! e^-μ_ij, where μ_ij is the predicted number of events for a given model μ in the ith energy bin and the jth region of interest, corresponding to the ON (j=1) and OFF (j=2) regions. These model predictions are compared to the corresponding observed counts n_ij. We use 15 logarithmically-spaced energy bins, extending from 25 GeV to 10 TeV. The number of gamma-ray events predicted by each model consists of three components: a DM annihilation signal, an isotropic cosmic-ray (CR) background, and the Galactic diffuse emission (GDE) background: μ_ij = μ_ij^DM + μ_ij^CR + μ_ij^GDE. The details of the regions of interest that have been used in our analysis, including the corresponding solid angles and J-factors, can be found in Ref. <cit.>. Furthermore, we have used the effective area produced by the MPIK group <cit.> and fixed the time of observation to be 100 hours. We account for differential acceptance uncertainties (i.e. acceptance variations across different energy bins and regions of interest) by rescaling the predicted signals μ_ij by parameters α_ij and profiling the likelihood over their values. Following Ref. <cit.>, we assume Gaussian nuisance likelihoods for all α with respective variance σ_α^2, independent of i and j. Our limits correspond to differential acceptance uncertainties of 1%. The mock data n we employ include a fixed isotropic cosmic-ray background component in all bins and no signal from DM annihilation. We derive 95% CL upper limits (sensitivity) on the annihilation cross section ⟨σ v⟩ in the usual way by requiring -Δln L ≤ 2.71/2. Our projections are shown in Figure <ref>. We have not included systematic uncertainties for the background components, which can be as large as order one and thus significantly degrade the CTA sensitivity. However, this can be partially overcome through a more sophisticated morphological analysis, which leverages the shape differences between the galactic diffuse emission and the DM signal <cit.>. In the end, we expect that Figure <ref> provides a reasonable ballpark estimate of the CTA sensitivity, which can improve over H.E.S.S. by a factor of a few to ten in the 100 GeV - TeV DM mass range. We expect Fermi dSphs observations to provide superior limits for lower mass DM, m_χ≲ 100 GeV.
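The statistical step of this projection is simple enough to sketch explicitly. The fragment below is a minimal illustration in our own notation: it omits the acceptance nuisance parameters α_ij, takes hypothetical count arrays of shape (energy bin, region), scales a reference signal linearly in ⟨σ v⟩, and applies the -Δln L ≤ 2.71/2 criterion.

    import numpy as np
    from scipy.special import gammaln

    def lnL(mu, n):
        """Binned Poisson log-likelihood over energy bins and ON/OFF regions."""
        return np.sum(n * np.log(mu) - mu - gammaln(n + 1.0))

    def sv_sensitivity(sv_grid, mu_dm_ref, mu_bkg, n, sv_ref=1e-26):
        """Largest <sigma v> on the grid satisfying -Delta lnL <= 2.71/2.

        mu_dm_ref : predicted DM counts at the reference cross section sv_ref
        mu_bkg    : CR + GDE background counts, same shape as the mock data n
        """
        lnl = np.array([lnL((sv / sv_ref) * mu_dm_ref + mu_bkg, n)
                        for sv in sv_grid])
        allowed = (lnl.max() - lnl) <= 2.71 / 2.0
        return sv_grid[allowed].max()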
§ GALACTIC CENTER GAMMA RAY EXCESS INTERPRETATION

As mentioned in Section <ref>, various analyses of Fermi-LAT data show a spherically symmetric excess of gamma rays coming from the central region of the Milky Way, peaking in the 1-3 GeV energy range <cit.>. Since DM annihilation to RHNs abundantly produces gamma rays, it is interesting to explore a possible interpretation of this excess in the context of the neutrino portal DM model. In fact, this possibility was previously investigated in Ref. <cit.>, which found that DM annihilation to RHNs could indeed provide a good fit to the Galactic Center excess. Here we will additionally confront this interpretation with existing constraints from other indirect probes, notably Fermi gamma-ray observations of dSphs and AMS-02 antiproton observations. We fit the neutrino portal DM model parameters to the Galactic Center excess spectrum given in Ref. <cit.>. We adopt a Navarro-Frenk-White (NFW) profile with γ=1.2. Following <cit.>, we define the χ^2 as χ^2(Θ) = ∑_ij [Φ_i(Θ) - (Φ_i)_obs] (Σ^-1)_ij [Φ_j(Θ) - (Φ_j)_obs], where Θ = {⟨σ v⟩, m_χ, m_N}, Φ_i ((Φ_i)_obs) is the predicted (observed) γ-ray flux (see Eq. (<ref>)) in the i^th energy bin, and Σ is the covariance matrix. We find that the best-fit point is {⟨σ v⟩ = 3.08 × 10^-26 cm^3 s^-1, m_χ = 41.3 GeV, m_N = 22.6 GeV} with χ^2 = 14.12 for 23 degrees of freedom. Figure <ref> displays the 1σ, 2σ, and 3σ CL regions in the m_N - m_χ parameter space. We see that neutrino portal DM can provide an acceptable fit over a significant range of mass parameters. Next, we would like to confront this interpretation with the other constraints derived in Section <ref>. To this end, we perform the Galactic Center excess fit while fixing the annihilation cross section to its thermal value, and overlay the limits derived from Planck CMB, Fermi dSphs, and AMS-02 antiproton observations. The result is displayed in the right panel of Figure <ref>. We see that this interpretation faces some tension with the limits from dSphs and antiprotons. However, it is too early to conclude from this analysis that the DM interpretation of the excess is not viable, given the significant astrophysical uncertainties in the local DM density, the dSph DM densities, and the modeling of the antiproton propagation.

§ BEYOND THE MINIMAL SCENARIO

We have explored what is perhaps the simplest scenario of neutrino portal DM. The primary probe of this model comes from indirect detection, and we have presented a comprehensive picture of the current constraints. However, it is possible that the neutrino mass model is more complex than the simplest Type-I seesaw, or that there are additional interactions of the scalar mediator with the Higgs, in which case a much richer phenomenology is possible. In this section we will highlight some of these possibilities.

§.§ Large neutrino Yukawa coupling

Taking the naive seesaw relation in Eq. (<ref>) as a guide, one generally expects very small active-sterile mixing angles, θ∼√(m_ν/m_N)≃ 10^-6×(m_N/100 GeV)^-1/2, suggesting poor prospects for direct detection and accelerator experiments. However, the neutrino Yukawa coupling and active-sterile mixing angle can be much larger if one goes beyond the simplest Type-I seesaw. For example, in the inverse seesaw model <cit.>, the RHNs are pseudo-Dirac fermions, with a splitting governed by a small Majorana mass. The SM neutrino masses are light due to the same small Majorana mass, while the Yukawa coupling can in principle be as large as y ∼ 0.1 and remain compatible with experimental constraints.
Such large Yukawa couplings not only offer increased chances to probe the RHNs directly (see, e.g., Ref. <cit.> for a review), but also enhance the detection prospects of the DM sector. For instance, one can induce sizable DM couplings to the Z and Higgs boson at one loop that mediate large scattering rates with nuclei, which is relevant for direct detection experiments and the capture of DM in the sun. One can also potentially produce the RHNs directly in accelerator experiments. This also opens up the possibility for the RHN to be heavier than the dark sector particles, while still having a thermal cosmology. Due to the large mixing angle, it is possible for DM to annihilate efficiently into light active neutrinos, and furthermore the DM may annihilate to other SM particles through the loop-induced Z and h couplings. We refer the reader to Refs. <cit.> for recent investigations of these issues.

§.§ Higgs portal coupling

The scalar particle ϕ can couple to the Higgs portal at the renormalizable level, L ⊃ (λ_ϕH/2) ϕ^2 |H|^2. We have so far assumed that this coupling is small. We have made this assumption primarily for simplicity, as then the phenomenology and cosmology are solely dictated by the neutrino portal link to the SM. However, this assumption can certainly be questioned. Restricting to the fields and interactions of our scenario in Eq. (<ref>), we observe that the Higgs portal coupling (<ref>) will be induced at one loop with strength of order λ_ϕH ∼ λ^2 y^2/16π^2, which is very small due to the small neutrino Yukawa coupling. Still, one may expect unknown UV physics to generically induce a larger coupling. This is because there is no enhanced symmetry in the limit λ_ϕH → 0, and so even though the operator (<ref>) is marginal, we cannot rely on technical naturalness to ensure a small value without further information about the UV physics. That being said, one can certainly imagine completions in which the Higgs portal coupling is suppressed. For example, if ϕ is a composite scalar state of some new strong dynamics, then the Higgs portal operator would fundamentally be a higher dimension operator and could therefore be naturally suppressed. Another good reason to consider the Higgs portal operator is that it provides additional opportunities to probe the dark sector in experiment. A one loop coupling of the DM to the Higgs will be induced, and this can mediate scattering of DM with nuclei, or invisible decays of the Higgs to DM <cit.>. An even more distinctive signature at colliders can arise if the Higgs could decay into a pair of light scalars, h→ϕϕ. These scalars, once produced, would then cascade decay via ϕ→ Nχ. The resulting RHN N, being lighter than the W boson, will have a macroscopic decay length and could leave a striking displaced vertex signal (see, e.g., <cit.>). The signature would thus be an exotic Higgs decay with two displaced vertices.

§ SUMMARY AND OUTLOOK

In this paper, we have investigated a simple model of neutrino portal DM, in which the RHNs simultaneously generate light neutrino masses via the Type-I seesaw mechanism and mediate interactions of DM with the SM. The model, presented in Section <ref>, is quite minimal and contains a dark sector composed of a fermion χ (the DM candidate) and a scalar ϕ, along with the RHN N. Given the generic expectation of tiny neutrino Yukawa couplings, testing this model with direct detection or accelerator experiments is likely to be challenging.
However, it is possible in this model that DM efficiently annihilates to RHNs, which allows for a number of indirect probes of this scenario. We have carried out an extensive characterization of the indirect detection phenomenology of the neutrino portal DM scenario in Section <ref>. Restricting to an experimentally and theoretically viable mass range, 1 GeV ≲ m_N < m_χ≲ 10 TeV, we have derived the constraints on the χχ→ NN annihilation cross section from Planck CMB measurements, Fermi gamma-ray observations of the Galactic Center and of dSphs, and AMS-02 antiproton observations. Currently, the dSphs and antiproton measurements constrain DM masses below 50 GeV for thermal annihilation rates. In the future, Fermi dSphs observations will be able to probe DM masses above the 100 GeV range for thermal cross sections, while CTA will be able to approach thermal cross section values for DM masses in the 100 GeV – 1 TeV range. This model can also provide a DM interpretation of the Fermi Galactic Center gamma ray excess, as discussed in Section <ref>. We have verified that the predicted spectrum of gamma rays is compatible with the observed excess for RHN and DM masses in the 20 - 60 GeV range and annihilation rates close to the thermal value. However, we have also shown that this interpretation faces some tension with the existing constraints from Fermi dSphs and AMS-02 antiprotons (subject, of course, to various astrophysical uncertainties). It will be interesting to see how this situation develops as Fermi and AMS-02 collect more data. However, at least in the simplest model explored here, it will be challenging to find complementary probes outside of indirect detection. It is possible that the neutrino mass generation mechanism is more intricate than the simplest Type-I seesaw, as discussed in Section <ref>. If so, the implications for neutrino portal DM could be dramatic, particularly if the neutrino Yukawa coupling is large, as this could lead to direct detection prospects, accelerator probes, and new annihilation channels. Additionally, it is possible in this scenario for additional Higgs portal couplings to be active, which could yield further phenomenological handles. Portals provide a simple and predictive theoretical framework to characterize the allowed renormalizable interactions between the SM and DM. Furthermore, the existence of neutrino masses already provides a strong hint that the neutrino portal itself operates in nature. These two observations provide a solid motivation for testing the neutrino portal DM scenario, both through the generic indirect detection signals investigated in this paper, and also through the additional signals present in more general models. It is worthwhile to broadly explore these scenarios and their associated phenomenology in detail, and we look forward to further progress in this direction in the future.

§.§.§ Acknowledgements

We thank David McKeen and Satyanarayan Mukhopadhyay for helpful discussions. We are also grateful to Farinaldo Queiroz for correspondence, and to Roberto Ruiz de Austri for pointing out an inconsistency in our antiproton spectrum in the first version of this paper. The work of BB and BSE is supported in part by the U.S. Department of Energy under grant No. DE-SC0015634, and in part by PITT PACC. The work of TH and BSE is supported in part by the Department of Energy under Grant No.
DE-FG02-95ER40896, and in part by PITT PACC. We would also like to thank the Aspen Center for Physics for hospitality, where part of the work was completed. The Aspen Center for Physics is supported by the NSF under Grant No. PHYS-1066293.

§ BOOSTED SPECTRUM FOR MASSIVE PARTICLES

Consider first a particle of mass m with a normalized monoenergetic and isotropic spectrum f_0(E) in frame O with energy E_0, i.e., f_0(E) = δ(E - E_0), ∫_m^∞ dE f_0(E) = 1. We wish to find the spectrum in a boosted frame O'. In general there will be an angle θ between the boost velocity β and the particle momentum, such that the energy E' in O' is related to the energy E in O as E' = γ(E - β p cosθ), where p = √(E^2 - m^2). Using Eq. (<ref>) and averaging over the angle θ under the assumption of isotropy, one can show that the energies are uniformly distributed in O' according to the “box” spectrum: f'_0(E') = 1/(2βγ p_0) θ[E' - γ(E_0 - β p_0)] θ[γ(E_0 + β p_0) - E'], where θ[·] denotes the Heaviside step function and p_0 = √(E_0^2 - m^2). We can use this result (<ref>) to boost a general isotropic energy spectrum f(E) observed in O, which in particular need not be monoenergetic. Starting from the normalization condition, we have 1 = ∫_m^∞ dE f(E) = ∫_m^∞ dE_0 f(E_0) [∫_m^∞ dE δ(E - E_0)], where in the last step we have inserted the identity and changed the order of integration. The quantity in brackets is simply the monoenergetic spectrum with energy E_0 that was already considered above. Using Eq. (<ref>), it is straightforward to derive the boosted spectrum in the frame O': f'(E') = ∫_γ(E' - β√(E'^2 - m^2))^γ(E' + β√(E'^2 - m^2)) dE/(2βγ√(E^2 - m^2)) f(E).

Jungman:1995dfG. Jungman, M. Kamionkowski and K. Griest,Phys. Rept.267, 195 (1996) doi:10.1016/0370-1573(95)00058-5 [hep-ph/9506380]. Bergstrom:2000pnL. Bergström,Rept. Prog. Phys.63, 793 (2000) doi:10.1088/0034-4885/63/5/2r3 [hep-ph/0002126]. Bertone:2004pzG. Bertone, D. Hooper and J. Silk,Phys. Rept.405, 279 (2005) doi:10.1016/j.physrep.2004.08.031 [hep-ph/0404175]. Feng:2010gwJ. L. Feng,Ann. Rev. Astron. Astrophys.48, 495 (2010) doi:10.1146/annurev-astro-082708-101659 [arXiv:1003.0904 [astro-ph.CO]].Silveira:1985rkV. Silveira and A. Zee,Phys. Lett.161B, 136 (1985). doi:10.1016/0370-2693(85)90624-0 Patt:2006fwB. Patt and F. Wilczek,hep-ph/0605188. Galison:1983paP. Galison and A. Manohar,Phys. Lett.136B, 279 (1984). doi:10.1016/0370-2693(84)91161-4 Holdom:1985agB. Holdom,Phys. Lett.166B, 196 (1986). doi:10.1016/0370-2693(86)91377-8 seesaw P. Minkowski, Phys. Lett. B67, 421 (1977); T. Yanagida, in Proc. of the Workshop on Grand Unified Theory and Baryon Number of the Universe, KEK, Japan, 1979; M. Gell-Mann, P. Ramond and R. Slansky in Sanibel Symposium, February 1979, CALT-68-709 [retroprint arXiv:hep-ph/9809459], and in Supergravity, eds. D. Freedman et al. (North Holland, Amsterdam, 1979); S. L. Glashow in Quarks and Leptons, Cargese, eds. M. Levy et al. (Plenum, 1980, New York), p. 707; R. N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. 44, 912 (1980); J. Schechter and J. W. F. Valle, Phys. Rev. D 22, 2227 (1980).McDonald:1993exJ. McDonald,Phys. Rev. D 50, 3637 (1994) doi:10.1103/PhysRevD.50.3637 [hep-ph/0702143 [HEP-PH]]. Burgess:2000yqC. P. Burgess, M. Pospelov and T. ter Veldhuis,Nucl. Phys. B 619, 709 (2001) doi:10.1016/S0550-3213(01)00513-2 [hep-ph/0011335]. Assamagan:2016azcK. Assamagan et al.,arXiv:1604.05324 [hep-ph]. Pospelov:2007mpM. Pospelov, A. Ritz and M. B. Voloshin,Phys. Lett. B 662, 53 (2008) doi:10.1016/j.physletb.2008.02.052 [arXiv:0711.4866 [hep-ph]]. ArkaniHamed:2008qnN. Arkani-Hamed, D. P. Finkbeiner, T. R. Slatyer and N. Weiner,Phys. Rev.
D 79, 015014 (2009) doi:10.1103/PhysRevD.79.015014 [arXiv:0810.0713 [hep-ph]]. Alexander:2016alnJ. Alexander et al.,arXiv:1608.08632 [hep-ph].Tang:2016sibY. L. Tang and S. h. Zhu,arXiv:1609.07841 [hep-ph]. Tang:2015cooY. L. Tang and S. h. Zhu,JHEP 1603, 043 (2016) doi:10.1007/JHEP03(2016)043 [arXiv:1512.02899 [hep-ph]].TheFermi-LAT:2015kwaM. Ajello et al. [Fermi-LAT Collaboration],Astrophys. J.819, no. 1, 44 (2016) doi:10.3847/0004-637X/819/1/44 [arXiv:1511.02938 [astro-ph.HE]]. Goodenough:2009gkL. Goodenough and D. Hooper,arXiv:0910.2998 [hep-ph]. Hooper:2010mqD. Hooper and L. Goodenough,Phys. Lett. B 697, 412 (2011) doi:10.1016/j.physletb.2011.02.029 [arXiv:1010.2752 [hep-ph]]. Daylan:2014rsaT. Daylan, D. P. Finkbeiner, D. Hooper, T. Linden, S. K. N. Portillo, N. L. Rodd and T. R. Slatyer,Phys. Dark Univ.12, 1 (2016) doi:10.1016/j.dark.2015.12.005 [arXiv:1402.6703 [astro-ph.HE]]. Calore:2014xkaF. Calore, I. Cholis and C. Weniger,JCAP 1503, 038 (2015) doi:10.1088/1475-7516/2015/03/038 [arXiv:1409.0042 [astro-ph.CO]].Campos:2017odjM. D. Campos, F. S. Queiroz, C. E. Yaguna and C. Weniger,arXiv:1702.06145 [hep-ph].Falkowski:2009yzA. Falkowski, J. Juknevich and J. Shelton,arXiv:0908.1790 [hep-ph]. Kang:2010haZ. Kang and T. Li,JHEP 1102, 035 (2011) doi:10.1007/JHEP02(2011)035 [arXiv:1008.1621 [hep-ph]]. Falkowski:2011xhA. Falkowski, J. T. Ruderman and T. Volansky,JHEP 1105, 106 (2011) doi:10.1007/JHEP05(2011)106 [arXiv:1101.4936 [hep-ph]]. Cherry:2014xraJ. F. Cherry, A. Friedland and I. M. Shoemaker,arXiv:1411.1071 [hep-ph]. Macias:2015cnaV. Gonzalez Macias and J. Wudka,JHEP 1507, 161 (2015) doi:10.1007/JHEP07(2015)161 [arXiv:1506.03825 [hep-ph]]. Gonzalez-Macias:2016vxyV. González-Macías, J. I. Illana and J. Wudka,JHEP 1605, 171 (2016) doi:10.1007/JHEP05(2016)171 [arXiv:1601.05051 [hep-ph]]. Escudero:2016tzxM. Escudero, N. Rius and V. Sanz,arXiv:1606.01258 [hep-ph]. Escudero:2016ksaM. Escudero, N. Rius and V. Sanz,arXiv:1607.02373 [hep-ph]. Allahverdi:2016fvlR. Allahverdi, Y. Gao, B. Knockel and S. Shalgar,arXiv:1612.03110 [hep-ph]. Bertoni:2014mvaB. Bertoni, S. Ipek, D. McKeen and A. E. Nelson,JHEP 1504, 170 (2015) doi:10.1007/JHEP04(2015)170 [arXiv:1412.3113 [hep-ph]]. Ibarra:2016fcoA. Ibarra, S. Lopez-Gehler, E. Molinaro and M. Pato,Phys. Rev. D 94, no. 10, 103003 (2016) doi:10.1103/PhysRevD.94.103003 [arXiv:1604.01899 [hep-ph]].Steigman:2012nbG. Steigman, B. Dasgupta and J. F. Beacom,Phys. Rev. D 86, 023506 (2012) doi:10.1103/PhysRevD.86.023506 [arXiv:1204.3622 [hep-ph]]. Griest:1989wdK. Griest and M. Kamionkowski,Phys. Rev. Lett.64, 615 (1990). doi:10.1103/PhysRevLett.64.615Boyarsky:2009ixA. Boyarsky, O. Ruchayskiy and M. Shaposhnikov,Ann. Rev. Nucl. Part. Sci.59, 191 (2009) doi:10.1146/annurev.nucl.010909.083654 [arXiv:0901.0011 [hep-ph]].Ruchayskiy:2012siO. Ruchayskiy and A. Ivashko,JCAP 1210, 014 (2012) doi:10.1088/1475-7516/2012/10/014 [arXiv:1202.2841 [hep-ph]]. Bandyopadhyay:2011qmP. Bandyopadhyay, E. J. Chun and J. C. Park,JHEP 1106, 129 (2011) doi:10.1007/JHEP06(2011)129 [arXiv:1105.1652 [hep-ph]]. Alwall:2014hcaJ. Alwall et al.,JHEP 1407, 079 (2014) doi:10.1007/JHEP07(2014)079 [arXiv:1405.0301 [hep-ph]]. Alva:2014gxaD. Alva, T. Han and R. Ruiz,JHEP 1502, 072 (2015) doi:10.1007/JHEP02(2015)072 [arXiv:1411.7305 [hep-ph]]. Degrande:2016ajeC. Degrande, O. Mattelaer, R. Ruiz and J. Turner,Phys. Rev. D 94, no. 5, 053002 (2016) doi:10.1103/PhysRevD.94.053002 [arXiv:1602.06957 [hep-ph]].Sjostrand:2007gsT. Sjostrand, S. Mrenna and P. Z. Skands,Comput. Phys.
Commun.178, 852 (2008) doi:10.1016/j.cpc.2008.01.036 [arXiv:0710.3820 [hep-ph]].Mardon:2009rcJ. Mardon, Y. Nomura, D. Stolarski and J. Thaler,JCAP 0905, 016 (2009) doi:10.1088/1475-7516/2009/05/016 [arXiv:0901.2926 [hep-ph]].Agrawal:2014ohaP. Agrawal, B. Batell, P. J. Fox and R. Harnik,JCAP 1505, 011 (2015) doi:10.1088/1475-7516/2015/05/011 [arXiv:1411.2592 [hep-ph]].Elor:2015tvaG. Elor, N. L. Rodd and T. R. Slatyer,Phys. Rev. D 91, 103531 (2015) doi:10.1103/PhysRevD.91.103531 [arXiv:1503.01773 [hep-ph]]. Elor:2015bhoG. Elor, N. L. Rodd, T. R. Slatyer and W. Xue,JCAP 1606, no. 06, 024 (2016) doi:10.1088/1475-7516/2016/06/024 [arXiv:1511.08787 [hep-ph]].Hinshaw:2012akaG. Hinshaw et al. [WMAP Collaboration],Astrophys. J. Suppl.208, 19 (2013) doi:10.1088/0067-0049/208/2/19 [arXiv:1212.5226 [astro-ph.CO]]. Story:2012wxK. T. Story et al.,Astrophys. J.779, 86 (2013) doi:10.1088/0004-637X/779/1/86 [arXiv:1210.7231 [astro-ph.CO]]. Hou:2012xqZ. Hou et al.,Astrophys. J.782, 74 (2014) doi:10.1088/0004-637X/782/2/74 [arXiv:1212.6267 [astro-ph.CO]]. Sievers:2013icaJ. L. Sievers et al. [Atacama Cosmology Telescope Collaboration],JCAP 1310, 060 (2013) doi:10.1088/1475-7516/2013/10/060 [arXiv:1301.0824 [astro-ph.CO]]. Ade:2015xuaP. A. R. Ade et al. [Planck Collaboration],Astron. Astrophys.594, A13 (2016) doi:10.1051/0004-6361/201525830 [arXiv:1502.01589 [astro-ph.CO]]. Padmanabhan:2005esN. Padmanabhan and D. P. Finkbeiner,Phys. Rev. D 72, 023508 (2005) doi:10.1103/PhysRevD.72.023508 [astro-ph/0503486]. Zhang:2006frL. Zhang, X. L. Chen, Y. A. Lei and Z. G. Si,Phys. Rev. D 74, 103519 (2006) doi:10.1103/PhysRevD.74.103519 [astro-ph/0603425]. Galli:2009zcS. Galli, F. Iocco, G. Bertone and A. Melchiorri,Phys. Rev. D 80, 023505 (2009) doi:10.1103/PhysRevD.80.023505 [arXiv:0905.0003 [astro-ph.CO]]. Slatyer:2009yqT. R. Slatyer, N. Padmanabhan and D. P. Finkbeiner,Phys. Rev. D 80, 043526 (2009) doi:10.1103/PhysRevD.80.043526 [arXiv:0906.1197 [astro-ph.CO]]. Kanzaki:2009hfT. Kanzaki, M. Kawasaki and K. Nakayama,Prog. Theor. Phys.123, 853 (2010) doi:10.1143/PTP.123.853 [arXiv:0907.3985 [astro-ph.CO]]. Hisano:2011dcJ. Hisano, M. Kawasaki, K. Kohri, T. Moroi, K. Nakayama and T. Sekiguchi,Phys. Rev. D 83, 123511 (2011) doi:10.1103/PhysRevD.83.123511 [arXiv:1102.4658 [hep-ph]]. Hutsi:2011vxG. Hutsi, J. Chluba, A. Hektor and M. Raidal,Astron. Astrophys.535, A26 (2011) doi:10.1051/0004-6361/201116914 [arXiv:1103.2766 [astro-ph.CO]]. Galli:2011rzS. Galli, F. Iocco, G. Bertone and A. Melchiorri,Phys. Rev. D 84, 027302 (2011) doi:10.1103/PhysRevD.84.027302 [arXiv:1106.1528 [astro-ph.CO]]. Finkbeiner:2011dxD. P. Finkbeiner, S. Galli, T. Lin and T. R. Slatyer,Phys. Rev. D 85, 043522 (2012) doi:10.1103/PhysRevD.85.043522 [arXiv:1109.6322 [astro-ph.CO]]. Slatyer:2012yqT. R. Slatyer,Phys. Rev. D 87, no. 12, 123513 (2013) doi:10.1103/PhysRevD.87.123513 [arXiv:1211.0283 [astro-ph.CO]]. Galli:2013dnaS. Galli, T. R. Slatyer, M. Valdes and F. Iocco,Phys. Rev. D 88, 063502 (2013) doi:10.1103/PhysRevD.88.063502 [arXiv:1306.0563 [astro-ph.CO]]. Lopez-Honorez:2013lcmL. Lopez-Honorez, O. Mena, S. Palomares-Ruiz and A. C. Vincent,JCAP 1307, 046 (2013) doi:10.1088/1475-7516/2013/07/046 [arXiv:1303.5094 [astro-ph.CO]]. Madhavacheril:2013cnaM. S. Madhavacheril, N. Sehgal and T. R. Slatyer,Phys. Rev. D 89, 103508 (2014) doi:10.1103/PhysRevD.89.103508 [arXiv:1310.3815 [astro-ph.CO]]. Slatyer:2015jlaT. R. Slatyer,Phys. Rev. D 93, no. 2, 023527 (2016) doi:10.1103/PhysRevD.93.023527 [arXiv:1506.03811 [hep-ph]].Slatyer:2015klaT. R. Slatyer,Phys. 
Rev. D 93, no. 2, 023521 (2016) doi:10.1103/PhysRevD.93.023521 [arXiv:1506.03812 [astro-ph.CO]]. Navarro:1995iwJ. F. Navarro, C. S. Frenk and S. D. M. White,Astrophys. J.462, 563 (1996) doi:10.1086/177173 [astro-ph/9508025]. Navarro:1996gjJ. F. Navarro, C. S. Frenk and S. D. M. White,Astrophys. J.490, 493 (1997) doi:10.1086/304888 [astro-ph/9611107]. EinastoJ. Einasto Trudy. 1965. Inst.Astrofiz.Alma-Ata.,5,87.Navarro:2008kcJ. F. Navarro et al.,Mon. Not. Roy. Astron. Soc.402, 21 (2010) doi:10.1111/j.1365-2966.2009.15878.x [arXiv:0810.1522 [astro-ph]]. Springel:2008ccV. Springel et al.,Mon. Not. Roy. Astron. Soc.391, 1685 (2008) doi:10.1111/j.1365-2966.2008.14066.x [arXiv:0809.0898 [astro-ph]]. Blumenthal:1985qyG. R. Blumenthal, S. M. Faber, R. Flores and J. R. Primack,Astrophys. J.301, 27 (1986). doi:10.1086/163867 Ryden:1987skaB. S. Ryden and J. E. Gunn,Astrophys. J.318, 15 (1987). doi:10.1086/165349 Gnedin:2004cxO. Y. Gnedin, A. V. Kravtsov, A. A. Klypin and D. Nagai,Astrophys. J.616, 16 (2004) doi:10.1086/424914 [astro-ph/0406247].Gnedin:2011ujO. Y. Gnedin, D. Ceverino, N. Y. Gnedin, A. A. Klypin, A. V. Kravtsov, R. Levine, D. Nagai and G. Yepes,arXiv:1108.5736 [astro-ph.CO]. Governato:2012faF. Governato et al.,Mon. Not. Roy. Astron. Soc.422, 1231 (2012) doi:10.1111/j.1365-2966.2012.20696.x [arXiv:1202.0554 [astro-ph.CO]]. Iocco:2011jzF. Iocco, M. Pato, G. Bertone and P. Jetzer,JCAP 1111, 029 (2011) doi:10.1088/1475-7516/2011/11/029 [arXiv:1107.5810 [astro-ph.GA]]. Lee:2014mzaS. K. Lee, M. Lisanti and B. R. Safdi,JCAP 1505, no. 05, 056 (2015) doi:10.1088/1475-7516/2015/05/056 [arXiv:1412.6099 [astro-ph.CO]]. Bartels:2015aeaR. Bartels, S. Krishnamurthy and C. Weniger,Phys. Rev. Lett.116, no. 5, 051102 (2016) doi:10.1103/PhysRevLett.116.051102 [arXiv:1506.05104 [astro-ph.HE]]. Lee:2015feaS. K. Lee, M. Lisanti, B. R. Safdi, T. R. Slatyer and W. Xue,Phys. Rev. Lett.116, no. 5, 051103 (2016) doi:10.1103/PhysRevLett.116.051103 [arXiv:1506.05124 [astro-ph.HE]]. McDermott:2015ydvS. D. McDermott, P. J. Fox, I. Cholis and S. K. Lee,JCAP 1607, no. 07, 045 (2016) doi:10.1088/1475-7516/2016/07/045 [arXiv:1512.00012 [astro-ph.HE]]. Horiuchi:2016zwuS. Horiuchi, M. Kaplinghat and A. Kwa,JCAP 1611, no. 11, 053 (2016) doi:10.1088/1475-7516/2016/11/053 [arXiv:1604.01402 [astro-ph.HE]]. Hooper:2012srD. Hooper, C. Kelso and F. S. Queiroz,Astropart. Phys.46, 55 (2013) doi:10.1016/j.astropartphys.2013.04.007 [arXiv:1209.3015 [astro-ph.HE]]. Ackermann:2015zuaM. Ackermann et al. [Fermi-LAT Collaboration],Phys. Rev. Lett.115, no. 23, 231301 (2015) doi:10.1103/PhysRevLett.115.231301 [arXiv:1503.02641 [astro-ph.HE]]. Aguilar:2016kjlM. Aguilar et al. [AMS Collaboration],Phys. Rev. Lett.117, no. 9, 091103 (2016). doi:10.1103/PhysRevLett.117.091103 Boudaud:2014qraM. Boudaud, M. Cirelli, G. Giesen and P. Salati,JCAP 1505, no. 05, 013 (2015) doi:10.1088/1475-7516/2015/05/013 [arXiv:1412.5696 [astro-ph.HE]]. Donato:2005myF. Donato,Nucl. Phys. Proc. Suppl.138, 303 (2005). doi:10.1016/j.nuclphysbps.2004.11.068Cirelli:2010xxM. Cirelli et al.,JCAP 1103, 051 (2011) Erratum: [JCAP 1210, E01 (2012)] doi:10.1088/1475-7516/2012/10/E01, 10.1088/1475-7516/2011/03/051 [arXiv:1012.4515 [hep-ph]].Aguilar:2015ooaM. Aguilar et al. [AMS Collaboration],Phys. Rev. Lett.114, 171103 (2015). doi:10.1103/PhysRevLett.114.171103 Giesen:2015ufaG. Giesen, M. Boudaud, Y. Génolini, V. Poulin, M. Cirelli, P. Salati and P. D. Serpico,JCAP 1509, no.
09, 023 (2015) doi:10.1088/1475-7516/2015/09/023, 10.1088/1475-7516/2015/9/023 [arXiv:1504.04276 [astro-ph.HE]]. Aguilar:2014mmaM. Aguilar et al. [AMS Collaboration],Phys. Rev. Lett.113, 121102 (2014). doi:10.1103/PhysRevLett.113.121102 Adriani:2013udaO. Adriani et al. [PAMELA Collaboration],Phys. Rev. Lett.111, 081102 (2013) doi:10.1103/PhysRevLett.111.081102 [arXiv:1308.0133 [astro-ph.HE]].Aartsen:2016zhmM. G. Aartsen et al. [IceCube Collaboration],arXiv:1612.05949 [astro-ph.HE]. Charles:2016pgzE. Charles et al. [Fermi-LAT Collaboration],Phys. Rept.636, 1 (2016) doi:10.1016/j.physrep.2016.05.001 [arXiv:1605.02016 [astro-ph.HE]].Abbott:2005biT. Abbott et al. [DES Collaboration],astro-ph/0510346. York:2000gkD. G. York et al. [SDSS Collaboration],Astron. J.120, 1579 (2000) doi:10.1086/301513 [astro-ph/0006396].Ivezic:2008feZ. Ivezic et al. [LSST Collaboration],arXiv:0805.2366 [astro-ph].Abdallah:2016ygiH. Abdallah et al. [HESS Collaboration],Phys. Rev. Lett.117, no. 11, 111301 (2016) doi:10.1103/PhysRevLett.117.111301 [arXiv:1607.08142 [astro-ph.HE]].Smith:2013ttaA. W. Smith et al.,arXiv:1304.6367 [astro-ph.HE]. Aleksic:2013xeaJ. Aleksić et al.,JCAP 1402, 008 (2014) doi:10.1088/1475-7516/2014/02/008 [arXiv:1312.1535 [hep-ph]]. Doro:2012xxM. Doro et al. [CTA Consortium],Astropart. Phys.43, 189 (2013) doi:10.1016/j.astropartphys.2012.08.002 [arXiv:1208.5356 [astro-ph.IM]]. Abeysekara:2014ffgA. U. Abeysekara et al. [HAWC Collaboration],Phys. Rev. D 90, no. 12, 122002 (2014) doi:10.1103/PhysRevD.90.122002 [arXiv:1405.1730 [astro-ph.HE]].Moulin:2013lmaE. Moulin [CTA Consortium], Silverwood:2014yzaH. Silverwood, C. Weniger, P. Scott and G. Bertone,JCAP 1503, no. 03, 055 (2015) doi:10.1088/1475-7516/2015/03/055 [arXiv:1408.4131 [astro-ph.HE]].Bernlohr:2012weK. Bernlöhr et al.,Astropart. Phys.43, 171 (2013) doi:10.1016/j.astropartphys.2012.10.002 [arXiv:1210.3503 [astro-ph.IM]].Mohapatra:1986awR. N. Mohapatra,Phys. Rev. Lett.56, 561 (1986). doi:10.1103/PhysRevLett.56.561 Atre:2009rgA. Atre, T. Han, S. Pascoli and B. Zhang,JHEP 0905, 030 (2009) doi:10.1088/1126-6708/2009/05/030 [arXiv:0901.3589 [hep-ph]]. Deppisch:2015qwaF. F. Deppisch, P. S. Bhupal Dev and A. Pilaftsis,New J. Phys.17, no. 7, 075019 (2015) doi:10.1088/1367-2630/17/7/075019 [arXiv:1502.06541 [hep-ph]].Izaguirre:2015pgaE. Izaguirre and B. Shuve,Phys. Rev. D 91, no. 9, 093010 (2015) doi:10.1103/PhysRevD.91.093010 [arXiv:1504.02470 [hep-ph]].
The survey volume of a proper motion-limited sample is typically much smaller than that of a magnitude-limited sample. This is because of the noisy astrometric measurements from detectors that are not dedicated to astrometric missions. In order to apply an empirical completeness correction, existing works limit the survey depth to the shallower parts of the sky, which hampers the maximum potential of a survey. The number of epochs of measurement is a discrete quantity that cannot be interpolated across the projected plane of observation, so that the survey properties change in discrete steps across the sky. This work proposes a method to dissect the survey into small parts with Voronoi tessellation, using candidate objects as generating points, such that each part defines a `mini-survey' that has its own properties. Coupled with a maximum volume density estimator, the new method is demonstrated to be unbiased and recovers ∼20% more objects than the existing method in a mock catalogue of a white dwarf-only solar neighbourhood with Pan–STARRS 1-like characteristics. Null geodesics reaching present observers can then be treated consistently across the survey. Towards the end of this work, we demonstrate one way to increase the tessellation resolution with artificial generating points, which would be useful for the analysis of rare objects with small number counts.

methods: data analysis – surveys – proper motions – stars: luminosity function, mass function – white dwarfs – solar neighbourhood.

§ INTRODUCTION

A number of types of transient, variable and moving sources are not rare, but their detection requires repeated observations of the same part of the sky. This was not possible over a large sky area until the era of digital astronomy. Highly automated observing runs and efficient digital detectors allow efficient data collection, while faster processors and automated data reduction pipelines allow the production of a high volume of output. The earliest attempts at such automation digitised the photographic plates from large sky area surveys, where measurements were made objectively with a computer, as opposed to manually. Large scale projects of this kind include the PPM catalogue <cit.>, the Automated Plate Machine Project <cit.>, USNO A 1.0, A 2.0 and B 1.0 <cit.>, SuperCOSMOS <cit.>, UCAC 1, 2, 3 and 4 <cit.> and SUPERBLINK <cit.>, all of which had several epochs and simple tiling strategies. In the current era of digital astronomy, some surveys continue to use simple tiling patterns where multiple pawprints are combined immediately to produce full coverage over a sky cell; for example, in the UKIDSS <cit.> four pawprints can cover a cell, while VISTA employs six <cit.>. In other cases, SDSS Stripe 82 had nine epochs on average <cit.>, ALLWISE has a coverage from 12 to over 200 frames <cit.>, and the Pan-STARRS1 (PS1) 3π Steradian Survey (3SS) typically has 60 epochs <cit.>. The Dark Energy Survey will scan ∼5,000 deg^2 10 times <cit.>, Gaia will have on average 81 transits, with over 140 in the most-visited parts of the sky, at the end of the 5 yr nominal mission <cit.>, and LSST will provide close to 1,000 epochs for half of the sky towards the end of the 10 yr survey mission <cit.>[https://github.com/LSSTScienceCollaborations/].
The key survey characteristics (depth and epoch coverage) vary on small scales and in complex ways because, unlike the situation with large-format photographic plates, the tiling strategies/overlapping patterns of these surveys are made extremely complicated by the need to maximize coverage despite losses from, for example, chip gaps between CCDs and unfavourable observing conditions. This in turn complicates the analysis of any survey sample culled from them, because optimal techniques like V_max require the precise survey characteristics. The problem becomes even more complex when different surveys are combined to expand the wavelength coverage and/or the maximum epoch difference; for example, when SDSS is combined with USNO-B 1.0 to derive proper motions <cit.>, the combined survey typically has five epochs (four from USNO-B and one from SDSS). Existing methods tackle inhomogeneity by limiting the analysis to the shallower part of the survey and by applying a global correction in order not to run into unaccountable incompleteness (e.g. <cit.>). In view of this problem, when deriving the white dwarf luminosity function (WDLF) with a maximum volume density estimator, <cit.> measured the empirical photometric and astrometric uncertainty for different skycells as defined by the tiling of the photographic plates. <cit.> improved the completeness correction due to kinematic selections. The methods described in these works allow an analysis to probe deeper, but in order to maximize the use of the data, we propose a new method based on Voronoi tessellation that can further maximize the analytical survey volume within which completeness and other biases can be corrected. Voronoi tessellation has been employed in defining simulation grids, clustering analysis, visualization, etc. However, its ability to partition sources into well-defined cells has not been exploited in all domains of astrophysics. By dividing the sky through Voronoi tessellation into a number of cells equal to the number of candidate sources, we can treat each cell as a mini-survey that has well-defined local properties. In Section <ref> we describe the mathematical framework, and in Section <ref> the construction of the simulated solar neighbourhood. The new method is applied to the simulated data in Section <ref> under different selection criteria, and the bias due to the choice of model is briefly discussed. In the final section, we describe one possible way to increase the resolution of the analysis and conclude this work.

§ MATHEMATICAL FRAMEWORK OF THE VORONOI METHOD

The maximum volume density estimator <cit.> tests the observability of a source by finding the maximum volume in which it can be observed by a survey (e.g. at a different part of the sky at a different distance). It is proven to be unbiased <cit.> and can easily combine multiple surveys <cit.>. In a sample of proper motion sources, we need to consider both the photometric and astrometric properties (see LRH15 for details). The number density is found by summing the number of sources weighted by the inverse of their maximum volumes. For surveys with small variations in quality from field to field and from epoch to epoch, or with small survey footprint areas, the survey limits can be defined easily. However, in modern surveys, the variations are not small; this is especially true for ground-based observations.
Therefore, properties have to be found locally to analyse the data most accurately. Through the use of Voronoi tessellation, sources can be partitioned into individual 2D cells, within which we assume the sky properties are defined by the governing source. Each of these cells has a different area depending on the projected density of the population. An important assumption for using the Voronoi method is that the distributions of the observing parameters of the cells at different resolutions are very similar to each other, so that the integrated maximum volume is approximately equal to the exact solution. In the rest of the article, cell will be used to denote a Voronoi cell, h-pixel a HEALPix pixel <cit.> and pixels those on a detector.

§.§ Voronoi Tessellation

A Voronoi tessellation is made by partitioning a plane with n points into n convex polygons such that each polygon contains exactly one point. For a Voronoi tessellation using the Euclidean distance, D_E = √((x_1-x_2)^2 + (y_1-y_2)^2), any position in a given polygon (cell) is closer to its generating point than to any other. For use in astronomy, such a tessellation has to be done on a spherical surface (two-sphere). In the following work, the tessellation is constructed with the SciPy package spatial.SphericalVoronoi, where each polygon is given a unique ID that is combined with its vertices to form a dictionary. The areas are calculated by first decomposing the polygons into spherical triangles with the generating points and their vertices[https://github.com/tylerjereddy/py_sphere_Voronoi] and then by using L'Huilier's theorem to find the spherical excess. For a unit sphere, the spherical excess is equal to the solid angle of the triangle, and the sum over the constituent spherical triangles provides the solid angle of each cell.
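For readers wishing to reproduce this step, we note that recent SciPy releases (from version 1.5 onwards, to the best of our knowledge) provide a built-in area calculation for SphericalVoronoi that can replace the explicit L'Huilier decomposition; a minimal sketch in our own notation:

    import numpy as np
    from scipy.spatial import SphericalVoronoi

    def cell_solid_angles(ra_deg, dec_deg):
        """Solid angles (sr) of the Voronoi cells generated by the sources."""
        ra, dec = np.radians(ra_deg), np.radians(dec_deg)
        xyz = np.column_stack((np.cos(dec) * np.cos(ra),
                               np.cos(dec) * np.sin(ra),
                               np.sin(dec)))
        sv = SphericalVoronoi(xyz, radius=1.0)
        # on a unit sphere the area of a cell equals its solid angle
        return sv.calculate_areas()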
§.§ Cell Properties

For a Voronoi cell j, the properties of the cell are assumed to be represented by its generating source i. Both i and j are indexed from 1 to 𝒩, but since each source has to be tested for observability in each cell to calculate the maximum volume, i and j cannot be contracted to a single index. Furthermore, the cells do not need to be defined by the sources alone: arbitrary points can be used for the tessellation, in which case i and j will not have a one-to-one mapping. The epoch of the measurement is labelled by k. When a source is tested for observability, epoch-wise information is essential in calculating the photometric and proper motion uncertainties as functions of distance. The major difference in the following approach is that the proper motion uncertainty is found from the formal propagation of errors instead of measuring the empirical form as a function of magnitude, σ_μ(mag), which limits the survey to the worst part of a tile (RH11). This new approach does not need to take into account the scatter in σ_μ(mag) due to different local sky properties and different colours of the sources and their neighbours. Different types of source can differ by up to a few magnitudes in optical/infrared colours, so two sources with similar magnitudes in one filter can have very different proper motion uncertainties if one is close to the detection limit in another filter. The modelling of the photometric uncertainties from CCD detectors is much simpler than that for photographic plates, because the photometric response of modern detectors is much more linear at both the faint and bright ends. Thus, the uncertainties can be estimated with relatively simple equations.

§.§.§ Photometric Uncertainty

When a source is being tested for observability, it is `placed' at a different distance, so the apparent brightness changes as a consequence. The background and other instrumental noises are constant, but the Poisson noise from the source changes with the measured flux; hence the photometric uncertainties are functions of distance. The total noise, N, of a photometric measurement can be estimated by N = √((F + d + s) × t + r^2), where F is the instrumental flux per unit time, d is the dark current per unit time, s is the sky background flux per unit time, t is the exposure time and r is the read noise. Among these quantities, d, s, t and r are fixed in a given epoch; only the flux varies as a function of distance. We use F as the measured flux and ℱ(D) as the flux at an arbitrary distance D. Therefore, in a Voronoi cell j at epoch k, the photometric noise of source i is N_i,j,k(D) = √((ℱ_i(D) + d_j,k + s_j,k) × t_j,k + r^2), where the flux at D is calculated by applying the inverse square law to the observed flux F_i and observed distance D_i, ℱ_i(D) = F_i × (D_i/D)^2. The random photometric uncertainty of a source at an arbitrary distance in a given epoch is the inverse signal-to-noise ratio, δℱ_i,j,k(D) = N_i,j,k(D)/ℱ_i(D). The total photometric uncertainty of the source as a function of distance, combined with the systematic uncertainty, σ_s, coming from the absolute calibration of the detector, is therefore σ_i,j,k(D) = √(δℱ^2_i,j,k(D) + σ^2_s), which represents the photometric uncertainty as a function of the distance to the source.

§.§.§ Astrometric Uncertainty

The least squares solution for the proper motion in one direction for source i can be expressed in the following matrix form, where the epochs are labelled by subscripts from 1 to M(j), with M(j) the number of epochs in cell j:

( [ 1/σ_1, Δt_1/σ_1; ⋮, ⋮; 1/σ_M(j), Δt_M(j)/σ_M(j) ] )_𝐀 × ( [ α; μ_α ] ) = ( [ Δα_1/σ_1; ⋮; Δα_M(j)/σ_M(j) ] ),

where Δt_k is the time difference between epoch k and the mean epoch, Δα_k is the positional offset from the mean position α, and μ_α is the proper motion in the direction of right ascension. The associated uncertainties can be found from the diagonal terms of the normal matrix,

𝖠^T𝖠 = [ [ ∑_k (1/σ_k)^2, ∑_k (Δt_k/σ_k^2); ∑_k (Δt_k/σ_k^2), ∑_k (Δt_k/σ_k)^2 ] ],

so for each cell, 1/σ^2_μ_αcosδ = ∑_k (Δt_k/σ_k)^2, and the total proper motion uncertainty is σ_μ = √(σ^2_μ_αcosδ + σ^2_μ_δ) = √(2) σ_μ_αcosδ. The uncertainties in the α and δ directions are symmetrical in a four-parameter astrometric solution (two positions and two proper motions). In the case of a five-parameter solution, where the parallax is also solved for, and of a seven-parameter solution, where in addition the acceleration terms in both directions are solved for, the uncertainties will not be symmetrical owing to the parallactic term, so the off-diagonal terms, which would otherwise be negligible compared to the diagonal terms, have to be taken into account. However, for a variance-weighted mean epoch, the off-diagonal terms are exactly zero by definition.
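This error propagation amounts to only a few lines of code. The sketch below (our own naming) assumes that the single-epoch positional uncertainties have already been derived from the photometric uncertainties of the source placed at the trial distance D, and returns σ_μ for one cell:

    import numpy as np

    def proper_motion_sigma(t, sigma_pos):
        """sigma_mu (arcsec/yr) from epochs t (yr) and per-epoch positional
        uncertainties sigma_pos (arcsec) at the trial distance."""
        w = 1.0 / sigma_pos**2
        t_mean = np.sum(w * t) / np.sum(w)   # variance-weighted mean epoch
        dt = t - t_mean                      # off-diagonal terms now vanish
        sigma_1d = 1.0 / np.sqrt(np.sum((dt / sigma_pos)**2))
        return np.sqrt(2.0) * sigma_1d       # combine alpha and delta directions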
§.§ Consequence to V_max Calculation

There is only one minor adjustment to the volume integral – the lower proper motion limit. Instead of finding the limit by measuring from a number of nearby sources, which would include mostly sources with different colours, the limit is defined by the properties of the Voronoi cell, which come only from the generating source of the cell. A different pixelization, by HEALPix and denoted by l, is used for the line-of-sight tangential velocity completeness correction (see RH11 for a detailed description). For source i, the maximum volume has to be tested in each cell j; the expression is almost identical to that in LRH15, except for the j and l indexes:

V_max = ∑_j Ω_j ∫_D_min,j^D_max,j ρ(D)/ρ_⊙ × D^2 × [ ∫_a(D)^b(D) P_l(j)(v_T) dv_T ] dD,

where ρ(D)/ρ_⊙ is the density normalized by that at the solar neighbourhood, P_l(j) is the tangential velocity distribution, l(j) denotes the h-pixel mapped from cell j, Ω_j is the solid angle of cell j, v_T is the tangential velocity, D_min and D_max are the minimum and maximum photometric distances, a(D) and b(D) are the lower and upper tangential velocity limits, and σ_μ(D) is the proper motion uncertainty as a function of the distance to the source. The latter models the change in the proper motion uncertainty with varying apparent magnitude (i.e. at greater distance the proper motion uncertainty is larger, because the source becomes fainter, which increases the single-epoch positional uncertainty). The lower tangential velocity limit in the inner integral is a(D) = max[v_min, 4.74 × s × σ_μ(D) × D], where the factor of 4.74 comes from the unit conversion from arcsec yr^-1 to km s^-1 at distance D, v_min is the global lower tangential velocity limit and s is the significance of the proper motions. The expression is identical to that in LRH15, but σ_μ(D) is calculated in a completely different way. The inner integral can vanish before reaching the distance limits, so the integrator must either use a small step size or calculate explicitly the distances at which the inner integral vanishes. The cell ID j and h-pixel ID l can be set as a one-to-one mapping by calculating the tangential velocity distribution for each of the Voronoi cells. However, the Voronoi tessellation depends on the sample, while the tangential velocity correction is fixed on the sky, so using a precomputed look-up table for the latter can significantly reduce the computation time.

§ SIMULATED DATA SET

To demonstrate the power of the Voronoi method described in Section <ref>, we apply it to catalogues of simulations of the solar neighbourhood. This section details the construction of the Monte Carlo simulation. We generated snapshots of white dwarf (WD)-only populations in the solar neighbourhood containing six-dimensional phase space information. The procedure is very similar to that described in LRH15; however, we introduce changes to the noise model of the system and include epoch-wise information. The volume probed is assumed to be small, such that the simulation is performed in Cartesian space instead of a plane polar system centred at the Galactic Centre. The Galaxy has three distinct kinematic components: a thin disc, a thick disc and a stellar halo, all of which we model with no density variations along the co-planar directions of the Galactic plane. The vertical structures of the discs follow exponential profiles, with scale heights H_thin and H_thick, such that ρ(D)/ρ_⊙ = exp(-|z|/H) = exp(-|D sin b|/H), where z is the vertical distance from the Galactic plane and b the Galactic latitude. None of the three components are tilted relative to each other. The velocity components, U, V and W, of each WD are drawn from Gaussian distributions described by the measured means and standard deviations of the three sets of kinematics that describe the three populations in the solar neighbourhood (Table <ref>).
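Drawing a mock population from this toy Galaxy is straightforward; the fragment below is schematic only, with placeholder numbers standing in for the scale height and the kinematic parameters of Table <ref>:

    import numpy as np

    rng = np.random.default_rng()

    def draw_disc(n, H=250.0, uvw_mean=(0.0, -20.0, 0.0),
                  uvw_sigma=(35.0, 25.0, 18.0)):
        """Vertical positions z (pc) and U, V, W (km/s) for n disc WDs.
        H and the kinematics are placeholders for the values of Table <ref>."""
        # rho ~ exp(-|z|/H): draw |z| from an exponential, then a random sign
        z = rng.exponential(scale=H, size=n) * rng.choice([-1.0, 1.0], size=n)
        uvw = np.column_stack([rng.normal(m, s, size=n)
                               for m, s in zip(uvw_mean, uvw_sigma)])
        return z, uvw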
§ SIMULATED DATA SET

To demonstrate the power of the Voronoi method described in Section <ref>, we apply it to catalogues of simulations of the solar neighbourhood. This section details the construction of the Monte Carlo simulation. We generated snapshots of white dwarf (WD)-only populations in the solar neighbourhood containing six-dimensional phase space information. The procedure is very similar to that described in LRH15; however, we introduce changes to the noise model of the system and include epoch-wise information. The volume probed is assumed to be small, such that the simulation is performed in a Cartesian space instead of a plane polar system centred at the Galactic Centre. The Galaxy has three distinct kinematic components: a thin disc, a thick disc and a stellar halo, all of which we model with no density variations along the co-planar directions of the Galactic plane. The vertical structures of the discs follow exponential profiles, with scale heights H_thin and H_thick such that

ρ(D)/ρ_⊙ = exp( -| z | / H ) = exp( - | D sin b | / H ),

where z is the vertical distance from the Galactic plane and b the Galactic latitude. None of the three components is tilted relative to the others. The velocity components, U, V and W, of each WD are drawn from the Gaussian distributions described by the measured means and standard deviations of the three sets of kinematics that describe the three populations in the solar neighbourhood (Table <ref>).

Theoretical WDLFs are used as the probability distribution functions (pdfs) in the simulations. The normalizations of the pdfs are taken from the WD densities found in RH11. The input parameters for a WDLF are the star formation rate (SFR), the initial mass function (IMF), the MS evolution model and the WD cooling model. The standard equation for modelling the WDLF with those four inputs is

Φ(M_bol) = ∫_ℳ_l^ℳ_u (dt_cool/dM_bol) ψ( t_0 - t_cool - t_MS ) ϕ( ℳ ) dℳ,

where Φ(M_bol) is the number density of WDs at magnitude M_bol. The derivative inside the integral is the characteristic cooling time of the WDs, ψ(t) is the SFR at time t and ϕ is the IMF. The input parameters are assumed to be invariant with time and are summarized in Table <ref>. The integral also depends on the lifetimes of the MS progenitors, t_MS, as a function of mass and metallicity. We have adopted the stellar evolution tracks from the Padova group (PARSEC; <cit.>) with a metallicity of Z=0.019 and Y=0.30 <cit.>. Assuming a fixed surface gravity log g = 8.0 and a pure hydrogen atmosphere (DA)[http://www.astro.umontreal.ca/∼bergeron/CoolingModels/], the WD cooling time, t_cool, is a function of mass and luminosity <cit.>, and t_0 is the total time since the onset of star formation. The integral is over all MS masses that have had time to produce WDs at the present day, with the magnitude-dependent lower limit, ℳ_l, corresponding to the solution of

t_0 = t_cool( M_bol, ξ(ℳ_l) ) + t_MS( ℳ_l, Z )

and the upper limit for WD production being ℳ_u ≈ 8 ℳ_⊙. The initial–final mass relation, ξ, which relates the MS progenitor mass to the mass of the WD, is adopted from <cit.> (excluding globular clusters from their analysis); the final WD mass can be expressed as

ξ(ℳ_i) = ℳ_f(ℳ_i) = 0.101 ℳ_i + 0.463.

The thin and thick disc populations are assigned constant SFRs since look-back times τ = 8 Gyr and τ = 10 Gyr respectively, while the halo has a 1 Gyr starburst at τ = 13 Gyr. The IMF has an exponent of -2.3 in the mass range of interest <cit.>.
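The WDLF integral above can be sketched numerically as follows. Every physics ingredient here (SFR, IMF, cooling and MS-lifetime callables) is a toy stand-in assumption, with only the initial–final mass relation taken verbatim from the text; the implicit lower mass limit ℳ_l is handled by zeroing the integrand wherever the progenitor would not yet have produced a WD.

```python
import numpy as np
from scipy import integrate

def ifmr(m_i):
    """Initial-final mass relation from the text: M_f = 0.101 M_i + 0.463."""
    return 0.101 * m_i + 0.463

def wdlf(m_bol, t0, sfr, imf, t_ms, t_cool, dtcool_dmbol, m_upper=8.0):
    """Number density Phi(M_bol); the guard below plays the role of the
    magnitude-dependent lower mass limit M_l solved implicitly in the text."""
    def integrand(m_i):
        m_wd = ifmr(m_i)
        t_formed = t0 - t_cool(m_bol, m_wd) - t_ms(m_i)
        if t_formed < 0.0:
            return 0.0  # progenitor has not had time to produce this WD
        return dtcool_dmbol(m_bol, m_wd) * sfr(t_formed) * imf(m_i)
    phi, _ = integrate.quad(integrand, 0.6, m_upper, limit=200)
    return phi

# Toy ingredients: constant SFR over the disc's history, IMF slope of -2.3
sfr = lambda t: 1.0                                  # arbitrary normalization
imf = lambda m: m ** -2.3
t_ms = lambda m: 10.0 * m ** -2.5                    # rough MS lifetime, Gyr
t_cool = lambda m_bol, m_wd: 10 ** (0.3 * (m_bol - 10.0))  # toy cooling, Gyr
dtcool_dmbol = lambda m_bol, m_wd: np.log(10) * 0.3 * t_cool(m_bol, m_wd)
print(wdlf(12.0, t0=8.0, sfr=sfr, imf=imf, t_ms=t_ms,
           t_cool=t_cool, dtcool_dmbol=dtcool_dmbol))
```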
From the pdfs of the kinematics, distance and bolometric magnitude, we calculate the apparent magnitudes in the PS1 g_P1, r_P1, i_P1, z_P1 and y_P1 filters[http://panstarrs.stsci.edu/] <cit.>. The line-of-sight and projected velocities can be derived from the given 3D kinematics and 3D positions. The uncertainties in the five filters are calculated from the sky background flux, exposure time, dark current and read noise that are representative of the PS1 3SS at Processing Version 2 (PV2). The sky background flux is drawn from a Gaussian distribution measured from the 3SS in each of the filters (Table <ref>). The means and standard deviations[1.4826 times the median absolute deviation is used as a robust estimate of the standard deviation.] were measured from 100 fields drawn randomly across the survey footprint. To simulate the variations in the observing properties of the sky, HEALPix is used to pixelate the sky at a resolution of n_side = 256, i.e. each pixel has a size of ∼0.0534 deg^2, which is sufficiently small given the projected density of white dwarfs of less than 1 deg^-2 (e.g. RH11). The HEALPix resolution is on a much finer scale than the Voronoi tessellation used for determining the maximum volume, in order to test the accuracy later (Section <ref>). One feature of this approach is that when the analysis is done at a higher resolution, the new set of cells is guaranteed to land on a different set of h-pixels, and as such provides a self-consistency check.

There is no switching from Voronoi cells to HEALPix pixels in the analysis: there is simply a look-up table matching a given cell to the h-pixel on which the cell-generating source lands. Each h-pixel is given a sky background noise for each epoch of the measurement. When the drawn sky brightness is below the lower limit – a sky brightness fainter than 99.73% of the measured values – the background noise is resampled until it is above the limit. An ADU of 1 e^- per photon is assumed. The 3SS has 12 epochs on average in each of the filters, so the number of epochs for each source in the simulation is drawn from a distribution[This study only focuses on tessellation; the effect of non-detection is another huge step in the optimization of the analysis.] that follows 1+P(11), where P(11) is a Poisson distribution with a mean of 11; the epochs are drawn from a random distribution over a period of 3 yr, with 6 h on either side of the source masked out to simulate seasonal observing. When sources are distributed over the sky, they take the set of values defined by the nearest pixel. Using the treatments from Section <ref>, with a dark current of 0.2 e^- s^-1, exposure times of 43, 40, 35, 30 and 30 s, zero-point magnitudes of 24.563, 24.750, 24.611, 24.250 and 23.320 mag in the five filters, and a constant read noise of 5.5 e^- <cit.>, each source is assigned a proper motion uncertainty using equation (<ref>). These inputs produce an all-sky survey that has 10σ detection limits (and their standard deviations) in g_P1, r_P1, i_P1, z_P1 and y_P1 at 21.98 ± 0.04, 21.53 ± 0.05, 21.12 ± 0.04, 20.54 ± 0.05 and 19.59 ± 0.04. These are similar to the PV2 values, but the distributions are much narrower because the noise model is itself noiseless (e.g. the sources are not affected by diffraction spikes, optical ghosts, cosmic rays or other effects that lead to larger scatter).
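The quoted 10σ depths follow from inverting the noise equation given earlier for the flux at which the signal-to-noise ratio equals 10. The sketch below does this for a single filter; the sky rate is a made-up placeholder, since the measured Gaussian sky values live in a table not reproduced here.

```python
import numpy as np

def limiting_mag(zero_point, t_exp, sky, dark=0.2, read=5.5, snr=10.0):
    """Solve F*t / sqrt((F + d + s)*t + r^2) = snr for F (a quadratic in F),
    then convert to a magnitude with the filter zero-point (rates in e-/s)."""
    background = (dark + sky) * t_exp + read**2
    flux = (snr**2 + np.sqrt(snr**4 + 4.0 * snr**2 * background)) / (2.0 * t_exp)
    return zero_point - 2.5 * np.log10(flux)

# g_P1-like inputs from the text; with the assumed sky rate this lands
# near, though not exactly at, the quoted 21.98 mag depth
print(limiting_mag(zero_point=24.563, t_exp=43.0, sky=40.0))
```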
§ APPLICATION TO WDLFS

This section describes the application of our survey dissection method (presented in Section <ref>) to the simulated PS1-like WD catalogues generated with the recipe described in Section <ref>. The bright limits in all filters are set at magnitude 15. The faint limits are at 21.5, 21.0, 20.5, 20.0 and 19.5 in the g_P1, r_P1, i_P1, z_P1 and y_P1 filters respectively, which are the typical magnitudes at which the 3SS is complete. The lower proper motion limit is set to five times the proper motion uncertainty, σ_μ, unless specified otherwise, and the upper proper motion limit is set at 0.08438 and 0.4219 arcsec yr^-1 for the cases using lower tangential velocity limits of 40 and 200 km s^-1, respectively. The two limits correspond to a minimum distance of 100 pc; this is to avoid any bias coming from the unaccounted parallax signature of very nearby sources. The upper tangential velocity limits are different in each analysis. Photometric parallaxes are not derived; the true distances and bolometric magnitudes are used. The volume and the maximum volume are found by integrating equation (<ref>) from D_min to D, and from D_min to D_max, respectively.

§.§ Comparison with the RH11 selection

The RH11 method increases the survey volume by restricting shallow survey depths only over areas that are severely limited by a small number of poor observations. In this section we illustrate how the Voronoi method can further increase the number of sources that can be recovered while rigorous completeness correction is still performed. In Fig. <ref>, under the selection criteria 40 < v_tan < 60 km s^-1, proper motion less than half an arcsec yr^-1 and a minimum distance of 100 pc, the number of sources recovered by the Voronoi method is plotted as a solid line, that of the RH11 method as a dashed line and that of the global 95th percentile as a dotted line. The ratio between the RH11 and Voronoi methods is plotted as a thick black solid line, while that between a global lower proper motion limit and the Voronoi method is plotted as a thick grey solid line. With the Voronoi method, more sources can be recovered. The ratio between the RH11 and Voronoi methods slowly decreases as the absolute bolometric magnitude increases; the ratio with the global limit simply plummets, given that the upper proper motion limit is only 0.5 arcsec yr^-1. Fig. <ref> shows that the ratios stay fairly constant until the faint limits are approached. The sources lost with the RH11 treatment are due to a combination of (1) the reassignment of proper motion uncertainties based on empirical observations, whereby 95 per cent of all sources are given larger uncertainty values than their measured ones, and (2) the loss of the deepest areas of the survey. Due to the simplistic noise model of the simulation (i.e. no bad pixels, saturation, diffraction, optical ghosts or other effects that significantly affect the photometric and astrometric precision), the distribution of the proper motion uncertainties in the simulation is typically narrower than in real measurements. Nevertheless, at the 5σ level the Voronoi method can recover ∼15% more sources than the RH11 method and many more sources than applying a global lower proper motion limit (Fig. <ref>).

§.§ Thin Disc and Combined Discs

The study of the thin disc WDLF requires a selection of the low-velocity population in order to minimize contamination from older populations, which typically possess higher velocities. In this section, we show the observed WDLFs from a thin disc-only simulation and from a mixed thin disc, thick disc and halo simulation. The WDLF comparison plots are displayed with the WDLF in the top panel, the differences between the input and calculated WDLFs in the middle panel, and ⟨V/V_max⟩ as a function of bolometric magnitude in the bottom panel.

§.§.§ Thin disc-only sample

In an analysis selecting only thin-disc WDs, the observed WDLF agrees very closely with the input function down to M_bol ≈ 15, where the number of sources drops significantly (Fig. <ref>). From the ⟨V/V_max⟩(mag) distribution, the derived solution is very stable throughout, except at the brightest and faintest ends, where the ⟨V/V_max⟩ method is known to become less reliable as the number of sources decreases. In theory, ⟨V/V_max⟩ = 0.5, because this is the expected value of a uniform distribution between 0 and 1. Statistically, it is expected that the ⟨V/V_max⟩ lies within the error bar only ∼60% of the time. The uncertainty in ⟨V/V_max⟩ is 1/√(12N).
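The ⟨V/V_max⟩ statistic and its error bar can be checked in a few lines; the uniform draws below are synthetic stand-ins for real V/V_max values, used only to illustrate the expected behaviour of an unbiased, complete sample.

```python
import numpy as np

def v_over_vmax_test(v, vmax):
    """Return <V/Vmax> and the expected uncertainty 1/sqrt(12N) of a
    uniform distribution on (0, 1)."""
    ratio = np.asarray(v) / np.asarray(vmax)
    return ratio.mean(), 1.0 / np.sqrt(12.0 * ratio.size)

# Synthetic, complete sample: V/Vmax is uniform on (0, 1) by construction
rng = np.random.default_rng(0)
mean, err = v_over_vmax_test(rng.uniform(0.0, 1.0, 500), np.ones(500))
print(f"<V/Vmax> = {mean:.4f} +/- {err:.4f}")  # consistent with 0.5
```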
The small oscillation about the line at ⟨V/V_max⟩(mag) = 0.5 is a good indication that the sample is unbiased over a large dynamic range of magnitudes. The outliers at the extreme ends result from the application of the density estimator to a small number of sources, and so likely do not represent the true values. Taking 40 and 60 km s^-1 as the lower and upper tangential velocity limits of the inner integral (equations <ref> & <ref> and the equivalent pair for the upper limit), the total integrated number density of this work is 3.65 ± 1.03 × 10^-3 pc^-3, compared to the input value of 3.10 × 10^-3 pc^-3. The overall ⟨V/V_max⟩ = 0.4944 ± 0.0031, which is very close to 0.5, indicating an unbiased sample.

§.§.§ Mixed Population (40-60)

The modification to the density estimator itself is small – only the lower limit of the inner integral is changed (equation <ref>) – and the effect of contamination is not expected to differ from the previous analysis in LRH15. The extra depth enabled by the new method could have led to a significant increase in the measured density due to a combination of two effects: (1) an increase in the contamination fraction, as the thin disc contribution drops rapidly with distance (at the Galactic poles, the thin disc and thick disc densities equate at 525 pc); and (2) the kinematic completeness correction applied to contaminants, which are more common at fainter magnitudes. The kinematics of the two discs are well measured; however, the relative density of WDs in them is much less studied – there is only one measurement on record (RH11). To understand the effects of contamination, a better understanding of the two populations is needed. Nevertheless, we can compare the WDLFs from the last section to a mixed population with the same set of upper and lower tangential velocity limits (40 and 60 km s^-1; Fig. <ref>). The total integrated number density is 4.00 ± 1.03 × 10^-3 pc^-3, as compared to 3.10 × 10^-3 pc^-3 for the thin disc and 0.64 × 10^-3 pc^-3 for the thick disc, which sum to 3.74 × 10^-3 pc^-3. If the sample is treated as a pure thin disc WDLF, there is a roughly constant overestimation of 0.1 dex at all magnitudes. When both discs are considered, the small over-density (0.26 × 10^-3 pc^-3) is due to contamination from the thick disc: using a thin disc scale height for these sources leads to an overestimation of the maximum volume. In this simulation, 16.0% of the data are from the thick disc. The ⟨V/V_max⟩ distribution is very similar to that of the clean sample, and for the entire sample ⟨V/V_max⟩ = 0.4984 ± 0.0028, which is within one standard deviation of the ideal value of 0.5. We believe this velocity range is a good choice for deriving an upper limit on the thin disc white dwarf density in the solar neighbourhood.

§.§.§ Mixed Population (40-120)

In M17, 40 and 120 km s^-1 are used as the tangential velocity limits to study both discs together, with a scale height of 300 pc instead of 250 pc. From H06, it is known that the effect of the scale height is larger at the bright end, because sources can be seen at larger distances and hence the density correction in equation (<ref>) is larger. The effect on the total normalization is small, because faint sources dominate after the density and completeness corrections. However, studies of, for example, the star formation history <cit.> or high-energy exotic particles <cit.> are sensitive to the whole range of magnitudes, so this effect cannot simply be assumed to be negligible. Fig. <ref> investigates it by comparing the cases of 250 and 300 pc scale heights. It shows that the choice of scale height has almost no effect on the WDLFs except at the brightest magnitudes, and the differences in the distribution of ⟨V/V_max⟩ are negligible. However, the absolute normalizations from using the two scale heights are consistently overestimated by ∼0.1 dex. In this simulation, 23.1% of the sources in the 40-120 km s^-1 range are from the thick disc and the halo; in comparison, only 15.5% of the sources are not from the thin disc in the 40-60 km s^-1 selection.

§.§ Halo

The study of the halo WDLF requires a selection of the high-velocity population in order to minimise contamination from the thick disc.
In this section, we show the observed WDLFs from a halo-only simulation and from a mixed thin disc, thick disc and halo simulation.

§.§.§ Halo-only sample

In the halo-only simulation, the observed WDLF agrees very well with the input function (Fig. <ref>). From the ⟨V/V_max⟩(mag) distribution, the derived solution is stable throughout, except in the faintest bin, where there are only two sources. The small oscillation about the line at ⟨V/V_max⟩ = 0.5 is a good indication that the sample is unbiased over the entire range of magnitudes. The lower and upper tangential velocity limits are set at 200 and 500 km s^-1, which define the limits of the inner integral (equations <ref> & <ref> and the equivalent pair for the upper limit); the total integrated number density of this work is 1.77 ± 0.10 × 10^-4 pc^-3, compared to the input value of 1.90 × 10^-4 pc^-3 (the integrated density up to M_bol = 15 is 1.68 × 10^-4 pc^-3), and ⟨V/V_max⟩ = 0.5116 ± 0.0200.

§.§.§ Mixed Population (200-500)

In 10 realizations of the mixed population simulation, under the 200-500 km s^-1 selection, there is a mean contamination rate of 7.3% (minimum 3.6%, maximum 11.9%). In the thin disc analysis, where the contamination is over 15%, we do not observe any significant bias in the WDLF or the ⟨V/V_max⟩(mag) distribution. We believe these fractions of contamination have little effect on the analysis, so the samples should be representative of the halo; we therefore do not consider them further.

§.§ Sensitivity to resolution

The purpose of this new method is to tackle a complex survey with small-scale variations that reduce the maximum survey volume available to a study. The way this method divides the sky avoids a detailed treatment of the survey and approximates it with the properties of the sources that enter the analysis, which raises the concern of whether the resolution would be sufficient for a small sample without causing significant systematic bias. We suggest a method to increase the resolution by a factor of ∼3, which can be repeated to increase the resolution further if needed; this would be useful if there are only a few sources over the sky. The higher resolution is achieved by using the cell vertices as new generating points (Fig. <ref>). However, an increased resolution requires much more computation time, because the time complexity of the Voronoi method is O( 𝒩^2 ); there is a trade-off between accuracy and computing time. The properties of the new cells can be approximated by those carried by the nearest sources (or they can be extracted directly from the raw data of the field).
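A minimal sketch of this refinement, using scipy's spherical Voronoi implementation on unit vectors (assuming the sources have already been projected onto the celestial sphere), shows why one pass roughly triples the cell count.

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

def refine(points_xyz):
    """One refinement pass: add the cell vertices of the current
    tessellation as extra generating points."""
    sv = SphericalVoronoi(points_xyz, radius=1.0)
    new_pts = np.vstack([points_xyz, sv.vertices])
    # re-normalise the combined set back onto the unit sphere
    return new_pts / np.linalg.norm(new_pts, axis=1, keepdims=True)

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(len(refine(pts)))  # roughly 3x the original generator count
```

For a triangulated sphere the number of Voronoi vertices is roughly 2𝒩 - 4, so one pass yields ≈ 3𝒩 generators, matching the factor of ∼3 quoted above.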
Fig. <ref> demonstrates, with 10 halo analyses, that for a very well-behaved survey (small differences in survey depths), a resolution of the order of ∼30 deg^2 per cell, compared to ∼100 deg^2 per cell, leads to an increase of <1% in number density (top panel). The increase arises from the deeper parts of the survey, which the standard resolution always underestimates (as it is statistically less likely to land on the deeper parts). To understand the effect at lower resolutions, we simulate this by using one in three sources to generate the Voronoi tessellation. The change in number density is only in the range of a few percentage points (bottom panel). Over a large number of simulations, the ratios should average to 1. The asymmetric distribution comes from the inverse proportionality between the maximum volume and the number density.

§ CONCLUSION

In this work, we have demonstrated that the use of Voronoi tessellation can increase the survey volume and thus more optimally retrieve sources from a large-area, multi-epoch survey. The underlying assumption is not ideal, but it is not possible to take an average value of the number of epochs and their properties for each cell. Further subdivisions would take much more time to compute the volumes, as this algorithm scales as x × 𝒩^2, where x is the number of subdivisions of each cell and 𝒩 is the number of sources. Nevertheless, this method is one big step towards optimal sampling of the survey footprint with limited computing power.

From a mixed population simulation, we find that, under the framework of our Galactic models, the new method recovers ∼10-15% more sources than the RH11 method under a typical lower proper motion selection. When considering a restricted tangential velocity selection (40-60 km s^-1), we do not observe any bias in the WDLFs brighter than M_bol = 15. A similar result is observed for the 40-120 km s^-1 sample. However, this conclusion is valid only up to M_bol = 15; the result should not be extrapolated to fainter magnitudes, where the thin disc contribution to the number density drops significantly compared to the thick disc and the halo. This work has deliberately removed sources closer than 100 pc to avoid bias caused by parallactic displacements, which as a consequence removed all the faint sources that can be seen only from a small distance. In the high-velocity regime, at the given thick disc-to-halo density ratio, a 200-500 km s^-1 selection will contain only a small fraction of contaminants, so it is a good sample for studying the halo.

We have demonstrated one way to increase the resolution of the tessellation, and it shows that for a well-behaved survey, a low resolution only limits the volume by ∼1%. An adaptive scheme that subdivides only cells larger than a certain solid angle can provide a grid of cells with similar areas, should that be more useful in certain scenarios. When applying this method to real surveys, careful treatment of the boundaries is needed, because the area of the survey is important: leaving the boundaries untreated, one will always end up with 4π steradians of sky area. In order to have a correct boundary that defines the survey, artificial points have to be added to create a layer of bounding cells surrounding the survey area, such that the boundaries of the second-to-last layer of cells overlap the survey footprint. One can identify the artificial points by using the survey boundary as a cell boundary, and then locating a generating point that reproduces the correct boundary geometry. However, one should note that a typical survey boundary is given by a small circle on the celestial sphere, while the Voronoi tessellation constructs cell boundaries along great circles. The total area described by the Voronoi cells is therefore always going to be slightly different from the true survey area (unless it is a full-sky survey). However, the difference in area is very small, comparable to the unaccounted non-perfect fill factor or dead pixels of the detector.

In future surveys, the complexity of the tiling patterns, scanning strategies and detector arrays at the focal plane will only increase. There is an increasing need for a more optimal analytic tool to make maximal use of the available data.
The Voronoi method can include the faintest objects that would otherwise be neglected because of unaccountable incompleteness.§ ACKNOWLEDGMENTSThe Pan–STARRS 1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan–STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE).We thank the PS1 Builders and PS1 operations staff for construction and operation of the PS1 system and access to the data products provided. ML acknowledges financial support from the STFC Consolidated Grant of the Institute for Astronomy, University of Edinburgh. ML also wishes to thank Dr. Nigel Hambly and the referee Dr. Floor van Leeuwen for helpful comments and suggestions that have led to major improvements in the clarity and presentation of the article.
Source: Marco C. Lam, "Maximizing Survey Volume for Large-Area Multi-Epoch Surveys with Voronoi Tessellation", http://arxiv.org/abs/1704.08745v1 (astro-ph.IM, 2017).
§ INTRODUCTION

The nature of dark matter (DM) is one of the primary mysteries in modern physics and cosmology. Despite the character of DM proving elusive, γ-ray telescope projects like Fermi-LAT <cit.> and HESS <cit.> continue to be effective in probing the parameter space of DM composed of Weakly Interacting Massive Particles (WIMPs) <cit.>. Of the two aforementioned projects, HESS has devoted far less time to this search, as its sensitivity interval is largely unsuited to the study of the kind of low-mass WIMPs that might be consistent with the excess γ-rays from the Galactic Centre, as well as with the PAMELA <cit.> and AMS anti-particle experiments <cit.>. However, both HESS and the upcoming Cherenkov Telescope Array (CTA) <cit.> are admirably suited to probing high-mass WIMPs. In this regard, we aim to examine how far into the high-mass parameter space these projects can reach, in a manner similar to earlier works on the topic <cit.>. This is of particular significance because Fermi-LAT is less effective in constraining high-mass WIMPs, particularly those annihilating via τ-lepton states (or other channels that produce harder γ-ray spectra). This is very effectively illustrated in Figure <ref>, which shows that the γ-ray spectra of high-mass WIMPs annihilating via τ leptons fall within the maximum HESS sensitivity region but well outside that of Fermi-LAT. In this work we make use of the sensitivity calculations from <cit.> for HESS and CTA respectively; we note that these may be less accurate than a full mock analysis of the telescope performance.

The environments we use for this study are the Dark Energy Survey (DES) dwarf galaxy candidates <cit.> Tucana II, Reticulum II, and the newly discovered Tucana III <cit.>. This choice of environment is motivated by the extremely large J-factors of these DES dwarf galaxies, which result in larger DM-induced γ-ray fluxes than those of previously studied classical dwarf galaxies. This is of particular importance as it has recently been shown that systematic errors in dwarf galaxy J-factor determination may be considerably under-estimated <cit.>. The method of analysis is to produce integrated γ-ray fluxes for each of the chosen targets; these are then compared to the sensitivities of the HESS and CTA experiments for 100 hours of observation time. As a point of comparison, we also perform the same analysis on the Ophiuchus galaxy cluster, which is visible from the southern hemisphere. Additionally, we perform this exercise considering both DM fluxes boosted by halo substructure, following the method from <cit.>, and more conservative cases where such boost effects are absent.

In addition to this, we explore the consequences of DM particle annihilations producing Higgs boson pairs, which then subsequently decay. This is based on the recently proposed "Madala" boson <cit.>, a 270 GeV particle hypothesised on the basis of multiple anomalies in the LHC Run 1 data (anomalies not subsequently removed by further data). This new boson has Higgs-like couplings within the Standard Model but also couples to a "hidden"-sector particle via a mediating scalar. This extra hidden particle is motivated as a candidate for DM, as it does not couple to the Standard Model directly. The mass of the mediating scalar has been tentatively suggested to be greater than 140-160 GeV <cit.>, with the DM mass being half the mass of the mediator. The main decay paths of this mediator are through W and Higgs boson pairs.
However, we focus specifically on the Higgs channel, as it has particularly interesting features. The resulting γ-ray spectra from the Tucana III dwarf are compared to the HESS and CTA sensitivities to find null constraints. These are contrasted with actual Fermi-LAT upper limits <cit.> on the Reticulum II dwarf galaxy. The possible null constraints from high-energy observations are then compared to those of the upcoming SKA phase 1 <cit.>, in order to determine the possible role of southern-hemisphere γ-ray telescopes in a multi-frequency DM search that incorporates upcoming, highly sensitive radio experiments like the SKA.

Thus, we argue that high-energy experiments like HESS and CTA have a particularly important niche role within the hunt for DM, by examining how effectively they can constrain the regions of the parameter space least touched by other experiments. This paper is structured as follows: in Sect. <ref> we detail the models used for the halo properties and flux calculations, the results are then presented in Sect. <ref>, and final conclusions are drawn in Sect. <ref>.

§ DARK MATTER HALOS AND Γ-RAY FLUX

The γ-ray flux produced by a DM halo, integrated over a given energy interval, is written as

ϕ(E_min, E_max, ΔΩ, l) = (1/4π) (⟨σ V⟩ / 2m_χ^2) ∫_E_min^E_max (dN_γ/dE_γ) dE_γ ∫_ΔΩ ∫_l ρ^2(r) dl′ dΩ′,

where m_χ is the WIMP mass, ρ is the DM halo density profile, ⟨σ V⟩ is the velocity-averaged annihilation cross-section, and dN_γ/dE_γ is the γ-ray yield from WIMP annihilations (sourced from PYTHIA <cit.> routines in DarkSUSY <cit.>). The astrophysical J-factor encompasses the last two of the above integrals,

J(ΔΩ, l) = ∫_ΔΩ ∫_l ρ^2(r) dl′ dΩ′,

with the integral extending over the line of sight l, and ΔΩ being the observed solid angle. In this work we calculate the particle physics factor for an energy interval,

ψ(> E_min) = (1/4π) (⟨σ V⟩ / 2m_χ^2) ∫_E_min^∞ (dN_γ/dE_γ) dE_γ.

Thus the flux is found from

ϕ(> E_min) = ψ(> E_min) × J(ΔΩ, l).

In the case of the Ophiuchus cluster we calculate the J-factor as the volume integral

J_vir = 4π ∫_0^R_vir ρ^2(r) r^2 dr,

with R_vir being the virial radius of the cluster. In order to do this, we use the following DM density profiles:

ρ_N(r) = ρ_s / [ (r/r_s) (1 + r/r_s)^2 ],    ρ_B(r) = ρ_s′ / [ (1 + r/r_s′) (1 + (r/r_s′)^2) ],

where r_s and r_s′ are the scale radii, ρ_s and ρ_s′ are the characteristic halo densities, and ρ_N and ρ_B are the NFW and Burkert halo profiles respectively <cit.>. For Ophiuchus we use r_s = 0.611 Mpc and ρ_s = 4.31 × 10^3 ρ_c (for NFW), with ρ_c being the critical density. These are found by using a virial radius of R_vir = 2.97 Mpc with a corresponding mass of M_vir = 1.5 × 10^15 M_⊙, and using the relations <cit.>

r_s = R_vir / c_vir,    ρ_s(c_vir)/ρ_c = (Δ_c/3) c_vir^3 / [ ln(1 + c_vir) - c_vir/(1 + c_vir) ],

where c_vir(M_vir) is found according to the method described in <cit.>. The quantity r_s′ is related to r_s by a factor of ∼1.52, and ρ_s′ is found by enforcing the normalization 4π ∫_0^R_vir ρ_B(r) r^2 dr = M_vir. The density contrast parameter at collapse, Δ_c, is given in a flat cosmology by the approximate expression <cit.>

Δ_c ≈ 18π^2 - 82x - 39x^2,

with x = 1.0 - Ω_m(z), where Ω_m(z) is the matter density parameter at redshift z, given by

Ω_m(z) = 1 / [ 1 + (Ω_Λ(0)/Ω_m(0)) (1 + z)^-3 ].
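The chain from (R_vir, c_vir) to the NFW parameters and the volume-integrated J-factor can be sketched as follows. The cosmological parameters, units (Mpc, M_⊙) and the value of ρ_c here are illustrative assumptions, and c_vir is simply taken as R_vir/r_s from the quoted Ophiuchus numbers rather than from the concentration-mass method cited above.

```python
import numpy as np
from scipy import integrate

def omega_m(z, om0=0.3, ol0=0.7):  # assumed flat cosmology
    return 1.0 / (1.0 + (ol0 / om0) * (1.0 + z) ** -3)

def delta_c(z):
    x = 1.0 - omega_m(z)
    return 18 * np.pi**2 - 82 * x - 39 * x**2

def nfw_scalings(r_vir, c_vir, rho_c, z=0.0):
    """(r_s, rho_s) from the relations quoted above."""
    r_s = r_vir / c_vir
    rho_s = rho_c * (delta_c(z) / 3.0) * c_vir**3 / (
        np.log(1 + c_vir) - c_vir / (1 + c_vir))
    return r_s, rho_s

def j_vir(r_s, rho_s, r_vir):
    """J_vir = 4 pi * integral of rho_NFW(r)^2 r^2 dr out to R_vir."""
    rho = lambda r: rho_s / ((r / r_s) * (1 + r / r_s) ** 2)
    value, _ = integrate.quad(lambda r: rho(r) ** 2 * r**2, 0.0, r_vir)
    return 4 * np.pi * value

# Ophiuchus-like inputs; rho_c ~ 1.4e11 M_sun Mpc^-3 is an assumed value.
# This recovers rho_s of order 4e3 rho_c, close to the quoted 4.31e3.
r_s, rho_s = nfw_scalings(r_vir=2.97, c_vir=2.97 / 0.611, rho_c=1.4e11)
print(r_s, rho_s / 1.4e11, j_vir(r_s, rho_s, 2.97))
```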
The J-factors found for the targets considered in this work are reported in Table <ref>. It has recently been shown <cit.> that the systematic errors in the calculation of dwarf galaxy J-factors, introduced through the treatment of the spherical Jeans equation and the scaling assumptions for tracers of the inner-halo density, have been greatly under-estimated in the literature. In order to account for this systematic uncertainty, we show all results with the maximum J-factor uncertainty found in <cit.>: this corresponds to a J-factor (and thus flux) reduction by a factor of 4.

Finally, we note that DM halos may contain substructure in the form of sub-halos of various masses. These tend to be of higher concentration than their parent halo and thus enhance the resulting flux from DM annihilation <cit.>. In order to calculate this amplification (boost) factor we follow the formulation derived in <cit.>: the boost is defined as the luminosity increase obtained by integrating over the sub-halo luminosities, which are determined by the virial mass and by halo concentration parameters found numerically according to the method discussed in <cit.>; we note that a similar method is provided in <cit.>. This results in a boosting factor (which multiplies the γ-ray flux) of b ∼ 36 for Ophiuchus, while dwarf galaxies with masses ∼10^7 M_⊙ have values b ∼ 3-4.

§ RESULTS

Figure <ref> displays the 3σ null-constraints that can be derived from 100 hours of observation of the targets we consider here via HESS and CTA, comparing these to existing bounds from <cit.>. In this figure, the case without substructure boosting is shown. The top-left panel demonstrates the sensitivity of the high-energy γ-ray constraints to the hardness of the target DM spectrum: the bb̅ annihilation channel produces a softer spectrum than that of τ leptons, and thus HESS cannot improve on the Fermi-LAT constraints with 100 hours of observation of the chosen targets, while CTA can make minor improvements within 100 hours of observation of the highest J-factor dwarf galaxies (Tucana III and Reticulum II). The top-right panel shows an improvement in the constraints, as the W^+W^- channel results in a slight spectral hardening. In this case, both HESS and CTA can improve upon the Fermi-LAT constraints for high-mass WIMPs, above 8 and 5 TeV respectively, with 100 hours of observing time; for CTA this improvement is far more significant, reaching nearly an order of magnitude over Fermi-LAT for a 10 TeV WIMP. The bottom panel shows the most significant result, for the case of annihilation via τ^+τ^- intermediate states. In this scenario, HESS can achieve nearly an order of magnitude improvement on its previous results between 3 and 10 TeV <cit.> with 100 hours of observation of the high J-factor DES dwarf galaxies, with CTA able to better this by an additional factor of 2 at all masses shown. Additionally, HESS and CTA can use these targets to probe the parameter space region favoured by AMS positron excess models (shown in orange), which are currently not ruled out in this annihilation channel by either the Fermi-LAT or Planck results <cit.>. The J-factor uncertainty introduced by the analysis in <cit.> does not greatly affect the ability of HESS and CTA to set new limits on the cross-section.
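For orientation, the inversion underlying all of these comparisons, a telescope sensitivity ϕ_sens plus a J-factor giving a 3σ cross-section limit, is a one-liner once the yields are known; units must be kept consistent, and the numbers below (including the photon count above threshold, which would come from the PYTHIA/DarkSUSY yields mentioned above) are purely illustrative assumptions.

```python
import numpy as np

def sigma_v_limit(phi_sens, m_chi, n_gamma_above, j_factor, boost=1.0):
    """Invert phi(>E_min) = b * J * <sigma v>/(8 pi m^2) * N_gamma(>E_min)
    for the largest cross-section consistent with a null observation."""
    return phi_sens * 8.0 * np.pi * m_chi**2 / (boost * j_factor * n_gamma_above)

# Illustrative only: a 10 TeV WIMP against a dwarf-like J-factor
print(sigma_v_limit(phi_sens=1e-13,      # photons cm^-2 s^-1 (assumed)
                    m_chi=1e4,           # GeV
                    n_gamma_above=10.0,  # photons per annihilation (assumed)
                    j_factor=1e20,       # GeV^2 cm^-5 (assumed)
                    boost=4.0))
```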
It is worth noting that, without any substructure boosting, the dwarf galaxies make far better targets than a galaxy cluster such as Ophiuchus, even before possible background emission has been considered.

Figure <ref> shows the 3σ null-constraints that can be derived through 100 hours of observation via HESS and CTA, comparing these to existing bounds from <cit.>; in this figure, the case including the substructure boosting effect is shown. As expected, given the dwarf boosting factor of ∼4, the potential null-constraints are substantially improved, to the point where even the bb̅ channel provides a candidate for extending the current HESS limits within 100 hours of observing time. Of particular note is that a galaxy cluster like Ophiuchus is still an unviable target for HESS, despite its larger boosting factor of ∼36. For CTA, Ophiuchus can provide marginal improvements in the τ-lepton channel, but these are likely insignificant should there be any background γ-ray flux from non-DM sources in the cluster. Coincidentally, it is also worth noting that the maximum dwarf J-factor uncertainty from <cit.> is very similar to the boost gained from the substructure effect in dwarf galaxies around 10^7 M_⊙.

In Figure <ref> we display the potential constraints derivable with the SKA from DM-induced synchrotron radiation within reference targets like the Draco dwarf galaxy, the M81 galaxy, and the Coma cluster, as considered in a recent study by <cit.> (see the discussion and references therein for full details). These constraints were derived by assuming a power-law background spectrum in each target, normalized using the available data; the minimal value of the annihilation cross-section is then found such that the DM synchrotron emission profile can be extracted from the background given the capabilities of the SKA. The weakness of the Draco results can be largely attributed to the available data covering only a small region in the centre of the dwarf galaxy. The great power of the SKA to probe the WIMP parameter space with just 100 hours of observation is clearly evident in these results. However, the curves of particular interest are those for τ-lepton annihilations (dashed lines). It is clear that, in both the NFW and Burkert density profile cases, these potential constraints are substantially weaker than their b-quark counterparts. Of particular importance is the fact that these constraints become either weaker than the existing HESS results above 3 TeV, or very similar to them. Thus, the potential constraints from HESS and CTA, displayed in Figs <ref> and <ref>, can play a role in expanding our ability to constrain the WIMP parameter space, acting as an important supplement to experiments like the SKA and Fermi-LAT that are more sensitive to softer spectra.

Finally, the consequences of DM annihilation via Higgs bosons are shown in Figure <ref>. The γ-ray yield data <cit.> for this channel show a large increase at the mass of the DM particle, resulting in a line-like feature in the emission spectrum from a DM halo. The maximum of this feature was used to constrain the cross-sections that would be allowed by 100-hour null constraints on the Tucana III dwarf galaxy. This results in very "spiky" constraint curves, as the method is highly sensitive to the shape of the experiment's differential sensitivity profile. The constraints from Fermi-LAT data on Reticulum II <cit.> are also shown for comparison.
It is clear that Fermi-LAT can probe the low-mass spectrum more effectively, while CTA becomes competitive soon after the lower bound on the DM particle mass from the Madala hypothesis, and displays great potential to produce constraints well below the thermal relic limit for particles with both intermediate and very large masses, these being more severe than even the best-case scenario with heavy-lepton annihilations. HESS shows a similar ability to place stringent constraints on this annihilation channel with 100 hours of observation, being able to probe down to the thermal relic level even for particles of mass 1 TeV and higher.

§ CONCLUSIONS

This work has shown that there is great potential for the current iteration of HESS to extend the Fermi-LAT constraints on high-mass WIMPs, particularly in the region of the parameter space with the weakest constraints. This is found to be independent of the use of a substructure boosting factor, although its presence provides stronger constraints. The upcoming CTA experiment will have an even greater ability to probe the high-mass region of the WIMP parameter space, additionally being more sensitive to the softer W-boson and b-quark annihilation channels. Of particular note is the fact that CTA can improve on Fermi-LAT by nearly an order of magnitude for WIMP masses between 3 and 10 TeV in the τ-lepton annihilation channel. The dwarf galaxies Tucana II & III, as well as Reticulum II, are highlighted as being of particular interest to the continuing search for DM due to their exceptionally large J-factors. Furthermore, we demonstrated that HESS and CTA DM searches can play a role in supplementing future experiments with high DM sensitivity like the SKA, which are very sensitive to lower WIMP masses <cit.> but place weaker constraints on high-mass WIMPs with hard annihilation spectra. Finally, we showed that very powerful constraints, well below those of Fermi-LAT for masses above 100 GeV, can be derived using HESS and CTA for WIMPs associated with the recently hypothesized "Madala" boson, which was motivated by anomalies in LHC Run 1 data. This means that HESS and CTA have the potential to rule out the Madala-associated candidate particle as a major component of DM, provided its annihilations predominantly result in the production of Higgs boson pairs. These arguments fully motivate a multi-frequency search, incorporating Southern-African experiments, with the potential to greatly advance the search for WIMP DM across the mass spectrum. This is of particular significance as radio and γ-ray observations are sensitive to differing astrophysical uncertainties and cosmic backgrounds.

[fermi-docs] Atwood, W. B., et al. (Fermi-LAT collaboration), 2009, Astrophys. J., 697, 1071, arXiv:0902.1089 [astro-ph].
[hess-details] <https://www.mpi-hd.mpg.de/hfm/HESS/>
[hess-perf] HESS Collaboration, Aharonian, F., et al., 2006, Astron. Astrophys., 457, 899.
[funk-cta2013] Funk, S. & Hinton, J., 2013, APh, 43, 348.
[hessdwarves2014] Abramowski, A., et al. (HESS collaboration), 2014, arXiv:1410.2589 [astro-ph].
[Fermidwarves2014] Ackermann, M., et al. (Fermi-LAT collaboration), 2014, Phys. Rev. D, 89, 042001, arXiv:1310.0828 [astro-ph.HE].
[Fermidwarves2015] Drlica-Wagner, A., et al. (Fermi-LAT collaboration) & Abbott, T., et al. (DES collaboration), 2015, arXiv:1503.02632 [astro-ph].
[pamela-docs] Picozza, P., et al., 2007, Astropart. Phys., 27(4), 296.
[hooper2014] Hooper, D., Linden, T. & Mertsch, P., 2015, JCAP, 03, 021, arXiv:1410.1527 [astro-ph].
[hooper2011] Hooper, D.
& Linden, T., 2011, Phys. Rev. D, 84, 123005.
[calore2014] Calore, F., Cholis, I., McCabe, C. & Weniger, C., 2015, Phys. Rev. D, 91, 063003, arXiv:1411.4647 [astro-ph].
[cholis2013] Cholis, I. & Hooper, D., 2013, Phys. Rev. D, 88, 023013, arXiv:1304.1840 [astro-ph].
[cta-docs] <https://portal.cta-observatory.org/Pages/Home.aspx>
[doro2012] Doro, M., et al., 2013, Astroparticle Physics, 43, 189.
[carr2016] Carr, J., et al., 2015, in Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), arXiv:1508.06128.
[Fermidetails] <http://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm>
[des] <http://www.darkenergysurvey.org>
[desdwarf] Bechtol, K., et al. (DES Collaboration), accepted to ApJ (2015), arXiv:1503.02584 [astro-ph.GA].
[des2015] Drlica-Wagner, A., et al. (DES), 2015, Astrophys. J., 813, 109, arXiv:1508.03622.
[ullio2016] Ullio, P. & Mauro, V., 2016, JCAP, 07, 025.
[prada2013] Sanchez-Conde, M. & Prada, F., 2014, MNRAS, 442 (3), 2271, arXiv:1312.1729 [astro-ph].
[madala1] von Buddenbrock, S., et al., 2015, arXiv:1506.00612 [hep-ph].
[madala2] von Buddenbrock, S., et al., 2016, arXiv:1606.01674 [hep-ph].
[madala3] Mellado, B., 2016, presentation at the University of the Witwatersrand.
[ska2012] Dewdney, P., Turner, W., Millenaar, R., McCool, R., Lazio, J. & Cornwell, T., 2012, SKA baseline design document, <http://www.skatelescope.org/wp-content/uploads/2012/07/SKA-TEL-SKO-DD-001-1_BaselineDesign1.pdf>
[pythia] Sjöstrand, T., 1994, Comput. Phys. Commun., 82, 74.
[darkSUSY] Gondolo, P., Edsjo, J., Ullio, P., et al., 2004, JCAP, 0407, 008.
[nfw1996] Navarro, J. F., Frenk, C. S. & White, S. D. M., 1996, ApJ, 462, 563.
[Burkert1995] Burkert, A., 1995, ApJ, 447, L25.
[ludlow2013] Ludlow, A. D., et al., 2013, MNRAS, 432, 1103L.
[prada2012] Prada, F., et al., 2012, MNRAS, 423 (4), 3018, arXiv:1104.5130 [astro-ph].
[bryan1998] Bryan, G. & Norman, M., 1998, ApJ, 495, 80.
[bonnivard2015] Bonnivard, V., et al., 2015, ApJ, 808, L36.
[pieri2011] Pieri, L., et al., 2011, Phys. Rev. D, 83, 023518.
[Bullock2001] Bullock, J. S., et al., 2001, MNRAS, 321, 559.
[ng2014] Ng, K., et al., 2014, Phys. Rev. D, 89, 083001, arXiv:1310.1915 [astro-ph.CO].
[beck2016] Beck, G. & Colafrancesco, S., 2016, JCAP, 05, 013.
[ppdmcb1] Cirelli, M., et al., 2011, JCAP, 1103, 051; erratum: 2012, JCAP, 1210, E01, arXiv:1012.4515.
[ppdmcb2] Ciafaloni, P., et al., 2011, JCAP, 1103, 019, arXiv:1009.0224.
[gsp2015] Colafrancesco, S., Marchegiani, P. & Beck, G., 2015, JCAP, 02, 032C.
[Colafrancesco2015] Colafrancesco, S., et al., 2015, Probing the nature of dark matter with the SKA, Proceedings of Science: Advancing Astrophysics with the SKA, arXiv:1502.03738 [astro-ph].
Source: Geoff Beck and Sergio Colafrancesco, "Multi-frequency search for Dark Matter: the role of HESS, CTA, and SKA", http://arxiv.org/abs/1704.08029v2 (astro-ph.CO, 2017).
Privacy technologies have become extremely prevalent in recent years, from secure communication channels to the Tor network. These technologies were designed to provide privacy and security for users, but those same ideals have also led to increased criminal use of the technologies. Privacy and anonymity are always sought after by criminals, making these technologies the perfect vehicles for committing crimes on the Internet. This paper analyzes the rising threat of subverting privacy technologies for criminal or nefarious use. It looks at recent research in this area and ultimately reaches conclusions on the seriousness of this issue.

§ INTRODUCTION

In recent years, privacy technologies have flourished and received more attention. With the growth of the Internet, more and more users have sought out the ability to communicate privately and securely. These technologies are designed to provide anonymity and privacy for their users, including whistleblowers and others who need to be able to communicate anonymously. Along with these new technologies came a rise in cybercrime. Anonymity is something that cybercriminals always desire, and these technologies provide an easy way to achieve it. This led to an explosion in untraceable ransomware, Tor botnets, and other crimes that cannot easily be tracked back to an attacker. With the rise of Bitcoin, payments can also be made anonymously and cannot be tracked back to the attacker. The privacy infrastructure was designed to help users, but it also led to this steep rise in cybercrime and a steep increase in the difficulty of catching cybercriminals. This paper looks at how this privacy infrastructure is subverted for attacks and how serious a threat this poses to the Internet as a whole. Section II outlines a few privacy technologies, Section III outlines the research analyzed within this paper, and Section IV presents conclusions drawn from this research.

§ PRIVACY TECHNOLOGIES

As usage of the Internet has exploded, the need for security and privacy when using the Internet has also increased dramatically. To meet these rising needs, many new privacy technologies have been created, from secure protocols like TLS/SSL, to Tor, a network that allows for anonymous Internet usage, to Bitcoin, which allows anonymous payments to be sent over the Internet. These technologies were all created to help provide security and privacy over the Internet in order to allow services to be provided online. Whistleblowers and individuals living under totalitarian regimes can share information, individuals can anonymously browse the Internet, and individuals can check their bank accounts and make purchases online. The ideals that these technologies aimed to provide were positive, but they were also very useful for cybercriminals, leading to many crimes and attacks that subvert these privacy technologies for nefarious means.

§.§ SSL/TLS

SSL/TLS is a technology designed to allow secure communication across the Internet. It is a protocol built on top of TCP that involves a session between a client and a server, and then connections associated with that session.
It allows for one- or two-way authentication, confidentiality, and message integrity, all of which make the communication more secure and less vulnerable to attacks. SSL/TLS has been widely deployed in order to provide these features to users, especially through HTTPS, which allows for secure browsing of the Internet and the ability to securely make payments over the Internet.

§.§ Tor

Tor is a network that aims to provide anonymous Internet browsing and communication. It is built on the idea of onion routing, which involves messages passing through multiple servers in such a way that each node only knows information about the previous and next nodes. In this infrastructure, the first node knows only the user and the second node, not the destination; the last node knows only the destination and the previous node, not the sender; and intermediate nodes know only about the previous and next nodes. This is done by encrypting the message with a key for each node, so that as the message moves through the network, each layer of encryption is peeled off. Tor also allows decoupling of the IP address from the address used to visit a website, so that the server hosting the site cannot easily be tracked down. Tor does not provide end-to-end security, as it only encrypts the information within the network; if end-to-end security is needed, it is provided by the end server. There have been a few recorded vulnerabilities in the Tor network, but it is widely considered to be very secure. The foundation of the Tor network led to the ability to browse the Internet and communicate anonymously, with the latter being very important to government and military entities. Along with this, however, came an increase in sophisticated cybercrime, like ransomware and secure botnets, and the ability to create websites that offer illicit drugs, murder for hire, and many other illegal activities.
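The layered encryption described above can be illustrated with a short toy sketch. This is not how Tor actually works internally: real Tor negotiates per-hop session keys and uses its own cell format, while the Fernet recipe and three fixed relays here are simplifying assumptions chosen purely for brevity.

```python
from cryptography.fernet import Fernet

# One symmetric key per relay; in real Tor these are negotiated per circuit
keys = {name: Fernet.generate_key() for name in ("entry", "middle", "exit")}

def build_onion(message: bytes, path=("exit", "middle", "entry")) -> bytes:
    """The client wraps the message innermost-first: exit, middle, entry."""
    for hop in path:
        message = Fernet(keys[hop]).encrypt(message)
    return message

def relay(onion: bytes, hop: str) -> bytes:
    """Each relay peels exactly one layer and sees only the next hop's blob."""
    return Fernet(keys[hop]).decrypt(onion)

cell = build_onion(b"GET / HTTP/1.1")
for hop in ("entry", "middle", "exit"):  # traverse the circuit
    cell = relay(cell, hop)
print(cell)  # b'GET / HTTP/1.1'
```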
§.§ Bitcoin

Bitcoin is the first successful cryptocurrency. It is a form of currency that is completely electronic and is not backed by some other means, like a government. By design, it allows Bitcoins to be transferred anonymously but also securely. It is based on a chain of transactions, which is used to ensure that nobody is lying about a transaction and which allows Bitcoins to be transferred. Bitcoins are also tied only to an address, called a wallet, which is not tied directly to an individual's identity. Since the currency is not connected to an identity, a transaction cannot be tracked back to see who it came from. The ability to make anonymous payments over the Internet can be very useful, but it can also be used in many crimes. One major crime that became popular with the rise of Bitcoin is ransomware, which is malware that infects a device, encrypts important files, and then requires payment in order to get the key that will unlock the files. With the advent of Bitcoin, attackers can require that payment be made in the form of Bitcoins, making the payment untraceable. Bitcoin also led to an increase in other crimes, like purchasing illegal drugs and even hiring hitmen, since the payments cannot be traced.

§ RESEARCH ON SUBVERTING PRIVACY TECHNOLOGIES

§.§ Botnet over Tor: The Illusion of Hiding [1]

This research was performed by Casenove and Miraglia from Vrije Universiteit in Amsterdam, The Netherlands. It analyzes how botnets try to guarantee the anonymity of the Command and Control node, addresses the problems with Tor-based botnets, and shows how full anonymity is never achieved. The researchers found that as botnets began to make use of the Tor network, they typically used it to hide, as well as possible, the connection from infected devices to the command and control node. This was successful in its intended purpose of hiding the communication, but the act of using the Tor network often produced abnormal traffic that revealed the fact that a device was infected with a botnet. This was ultimately counterproductive to the overall purpose of the botnet: while the control servers were more difficult to find and take down, it was more obvious that the infected devices were infected, which makes them easier to fix. The researchers then discuss the fact that botnets that make use of Tor are still susceptible to many attacks that affect non-Tor-based botnets. Crawling the address space to determine how many hosts are infected is still feasible, and so is using traffic analysis at the exit nodes to determine the address of a command and control node. Using Tor in this manner has made it more difficult to track down the command server, but it does not solve all issues relating to botnets; as these botnets evolve and make better use of Tor, however, they will become increasingly difficult to undermine.

This research demonstrates that botnets that make use of privacy technologies, like Tor, are already here and are becoming more and more sophisticated. The bots analyzed in this research were only making minimal use of Tor and were not attempting to hide the fact that Tor was running on the infected devices, yet they were still successful while only using privacy technologies minimally. As these botnets become more sophisticated, they will become more difficult to track and take down. They will be better able to hide their communications and hide themselves on the infected devices. While botnets that use Tor are relatively new, they will continue to improve and make use of more privacy features, highlighting the threat that comes from subverting privacy technologies and the need to get ahead of these types of attacks.

§.§ OnionBots: Subverting Privacy Infrastructure for Cyber Attacks [2]

This research was performed by Sanatinia and Noubir from Northeastern University in Boston, MA. The goal of this research was to introduce a method by which a botnet could operate using the Tor infrastructure, and then to propose a method for neutralizing bots that use the Tor network. The proposed method for creating a botnet that uses the Tor network relies on a self-healing, low-degree, and low-diameter graph of bots. New nodes join the botnet by contacting a list of current bots and then generating connections between themselves and those bots. This is done in such a way that the bots have a small number of neighbors, but the shortest path to any other bot is also small. These nodes also maintain the network by healing the graph when a node leaves it. Since these nodes use the Tor network, there is a decoupling between the IP address of a node and its .onion address. This allows the nodes to continually change their .onion addresses while still allowing neighbors to communicate with them and allowing the Command and Control node to control them.
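A toy version of such a self-healing, low-degree overlay is easy to simulate; the join and heal rules below are guesses at the flavour of the construction, not the paper's actual protocol, and the degree cap and bootstrap clique are arbitrary choices.

```python
import random
import networkx as nx

MAX_DEGREE = 4

def join(G, new_node, advertised):
    """A joining bot links to a few advertised peers, preferring
    low-degree ones so the overlay keeps its degree and diameter small."""
    peers = sorted(set(advertised), key=G.degree)
    G.add_edge(new_node, peers[0])  # always obtain at least one link
    for peer in peers[1:]:
        if G.degree(new_node) >= MAX_DEGREE:
            break
        if G.degree(peer) < 2 * MAX_DEGREE:
            G.add_edge(new_node, peer)

def heal(G, leaving):
    """When a bot vanishes, its orphaned neighbours re-link pairwise."""
    orphans = list(G.neighbors(leaving))
    G.remove_node(leaving)
    for a, b in zip(orphans, orphans[1:]):
        if G.degree(a) < MAX_DEGREE and G.degree(b) < MAX_DEGREE:
            G.add_edge(a, b)

G = nx.complete_graph(5)  # a small bootstrap clique
for n in range(5, 50):
    join(G, n, random.sample(sorted(G.nodes), 5))
heal(G, leaving=0)
print(nx.is_connected(G), max(dict(G.degree).values()))
if nx.is_connected(G):
    print("diameter:", nx.diameter(G))  # stays small for this toy overlay
```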
The research shows that this botnet would be able to maintain a low degree and diameter while also not becoming partitioned if many nodes drop out of the network. It would also be difficult to track and take down, since it is built on the Tor network and the devices can frequently change their .onion addresses to make finding the infected devices more difficult.

The research then proposed a method for defeating these so-called OnionBots. The technique is called the Sybil Onion Attack Protocol (SOAP). The basis of the technique is that a defender-controlled node that is part of the botnet is able to get a neighboring node to drop all of its other links and connect only to nodes the defender has set up. The defender does this by creating new nodes under its control (replicas) that then connect to the target node. Part of the design of the botnet states that a node should connect to the neighboring nodes with the lowest degree, and drop its neighbors with the highest degree in order to do so. The defender can exploit this by having the new nodes connect to the target node and report having very few neighbors. This causes the target node to drop its honest neighbors and connect to the defender-controlled nodes. Once the targeted node is connected only to the defender-owned nodes, it is disconnected from the botnet and neutralized; a toy simulation of this neighbor-replacement process is sketched below.

The goal of this research was to show that it is possible to create a botnet that operates using the Tor network for privacy and anonymity, and to show that there are ways to counteract this type of botnet. The threat posed by botnets operating over the Tor network is severe. Botnets have already been used in many crimes and attacks across the Internet, including the DDoS attack launched in October 2016 that targeted Dyn and prevented access to numerous websites. Botnets are already difficult to track down and stop, but botnets that make use of the Tor network would be extremely difficult to take down and could degrade service within the Tor network as a whole, leading to issues for its intended users. As the previous research showed, Tor-based botnets already exist, and while they are not very sophisticated yet, this research shows that this class of botnets can continue to improve and become more successful. Ultimately, the research argues that these issues should be dealt with preemptively, including making changes to these privacy technologies if needed. Subverting these privacy technologies is a significant threat to any user of the Internet, and a preemptive strategy is the best strategy that can be adopted at this point, as it would protect against these technologies being used for crimes while still providing their services to users with non-criminal intentions. If the threat of subverting Tor for botnet use is not dealt with preemptively and quickly, it could become an enormous issue for the entire Internet and lead to a significant increase in untrackable crimes.
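The sketch below, with the degree rule simplified to "drop the highest-degree honest neighbour whenever a lower-degree replica appears", is an assumption-laden illustration of SOAP rather than the authors' implementation.

```python
import networkx as nx

def soap(G, target, max_replicas=10):
    """Replicas advertise very low degree, so an OnionBot that prefers
    low-degree neighbours swaps honest links for defender-controlled
    ones until it is cut off from the rest of the botnet."""
    for i in range(max_replicas):
        honest = [n for n in G.neighbors(target)
                  if not str(n).startswith("replica")]
        if not honest:
            break  # the target now sees only defender nodes
        G.add_edge(target, f"replica{i}")
        # per the degree rule, the highest-degree honest neighbour is dropped
        G.remove_edge(target, max(honest, key=G.degree))
    return all(str(n).startswith("replica") for n in G.neighbors(target))

overlay = nx.random_regular_graph(4, 20, seed=7)  # stand-in for the botnet
print(soap(overlay, target=0))  # True -> the node has been neutralised
```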
§.§ Preparing for Malware that Uses Covert Communication Channels: The Case of Tor-based Android Malware [3]

This research was performed by Kioupakis and Serrelis from AMC Metropolitan College in Amaroussio, Greece. It proposes a methodology for creating Android-based malware that uses the Tor network to extract and steal information from the device, and then proposes a method to mitigate this type of malware. The researchers first discuss the fact that there are already two known cases of Android-based malware that made use of the Tor network in some capacity, although it was typically used to retrieve information from the Command and Control bot. After analyzing these previous botnets, they proposed a design, including requirements and module implementations, for Android-based malware that makes use of the Tor network to steal information from the device. This malware targets the most popular version of Android, Jelly Bean, and would not be caught by currently existing anti-malware implementations. It would be able to sit on Android devices, undetected, and siphon information that is then transmitted back to a central server, possibly compromising personal and confidential information contained in or sent from the device.

After describing in detail how the malware is built, they propose an anti-malware system that can help to detect and stop malware that makes use of the Tor network. This anti-malware system would sniff packets off the network and shut down an interface if it is sending Tor traffic and is not in a list of devices approved to use the Tor network; a sketch of this detection idea follows below. Current malware mitigations would not be able to detect the malware they designed, so new techniques were required that could detect and stop this type of malware. They propose that this type of anti-malware be developed preemptively in order to detect and stop future malware variants that will make use of Tor.

The research demonstrates that Tor can be used for many purposes within the broad scope of cybercrime, and the security community needs to take note and get ahead of these uses. If malware is able to run on Android devices undetected and anonymously send data back to a Command and Control server, then this is a very large threat to an enormous number of users. Android has one of the largest user bases in the world, so any malware that affects these devices could be extremely effective and lucrative. A reactive approach to this issue could lead to the loss of personal information for millions of users before this malware is identified and stopped. Tor has many useful purposes, but the protections it provides to cybercriminals must be minimized in order to protect the future use of Tor and the Internet, even if that means making modifications to the Tor network or the devices that use it. Malware that uses privacy technologies is already here and will continue to become more stealthy and secure, demonstrating the need to attack the issue now before it becomes even more prevalent.
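A bare-bones version of that network-side check might look like the following. The relay list, the approved-device list and the "shut down the interface" action are all placeholders (the two relay entries are documentation addresses, not real Tor relays), and a real deployment would parse the downloaded Tor consensus rather than hard-code addresses.

```python
from scapy.all import IP, TCP, sniff  # pip install scapy

# Assumed inputs: relay addresses (e.g. from a Tor consensus) and a
# whitelist of devices that are allowed to use Tor on this network.
TOR_RELAYS = {"192.0.2.10", "198.51.100.7"}
APPROVED = {"10.0.0.5"}

def check(pkt):
    if IP in pkt and TCP in pkt and pkt[IP].dst in TOR_RELAYS:
        src = pkt[IP].src
        if src not in APPROVED:
            print(f"unapproved Tor traffic: {src} -> {pkt[IP].dst}")
            # a real system would disable the offending interface here

sniff(filter="tcp", prn=check, store=False)
```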
§.§ Ransomware: Emergence of the Cyber-Extortion Menace [4]

This research was performed by Hampton and Baig from the Security Research Institute at Edith Cowan University in Perth, Australia. The goal of this research was to trace the trends in the history of ransomware and to argue that security professionals need to get ahead of these issues instead of constantly reacting to them.

The researchers analyzed twenty-nine variants of ransomware belonging to nine different families. For each variant, they looked for twenty-two different features these variants could possess, to see which features have persisted and which trends are emerging. They found that cryptographically secure encryption was not used until around 2013, and that recent years have seen increasing use of Tor and of cryptocurrencies like Bitcoin.

The researchers caution that the limited data makes it difficult to make predictions, but history has shown that future generations of malware learn from the successes and failures of past generations, and, as of now, using Tor and other privacy technologies appears to be successful. Their analysis suggests that future variants of ransomware will make more use of privacy technologies in order to protect the cybercriminals and make it that much more difficult to track them down.

Similar to the research described above, they argue that security researchers need to get ahead of the trends instead of reacting as they occur. This is not possible in every situation, but if researchers can look toward the future and try to predict what might occur, a problem can be mitigated before it becomes larger. They also note that vulnerability analysis has become a systematic and well-vetted process, with numerous databases detailing almost every imaginable vulnerability, yet malware analysis, and ransomware analysis in particular, has not had the same prominence and success. Ransomware is currently a massive business, bringing in millions of dollars per year for cybercriminals, and it affects individuals, businesses, and governments alike. The researchers ultimately advocate creating a formal approach for documenting and analyzing malware, especially ransomware, so that security professionals can prevent more malware variants from springing up, as has been happening rapidly in the past few years. Once such a process is in place, security researchers can begin to counteract these strains of malware, including reducing their ability to use privacy technologies for nefarious means.

§ CONCLUSIONS

As all of this research has shown, the possibility of subverting privacy technologies for nefarious means is very real. These attacks are not just a future possibility; they are here now and appear to be extremely successful. These technologies, like Tor and Bitcoin, were designed to provide users with a sense of privacy and anonymity when using the Internet. Tor was designed to help people communicate in an anonymous and untraceable way, which is valuable for military use or for whistleblowers who are afraid of persecution. Bitcoin was developed to allow people to make monetary transactions that are not tied to their identity and cannot be traced back to them. These technologies were designed and implemented with the intent to help people, but they are also being used to commit crimes and even to help terrorists.

Addressing these issues is a real concern that must be met with proactive measures. Far too often, especially in security, we are reactive to situations. Many protocols, and indeed the foundations of the Internet itself, were designed without security in mind; security was added only after abuses began to appear, with fixes following swiftly after the fact. In order to stop the viral spread of cybercrime, made much easier by these technologies, there needs to be a proactive plan. As the research from Hampton and Baig stated, we need to create a system of standardization and understanding when it comes to malware and cybercrime research. We also need to continue to push for more research like that presented here, which looked at future possibilities and explored what may be coming in order to suggest how it can be stopped now.
If we, as a community, can look ahead and understand how issues may arise in the future, we can fix them now, halting the current rise of cybercrime and preventing new variants from appearing later. The privacy technologies discussed in this paper were designed for good reasons and provide many useful services, but for them to be truly successful and usable, mitigations need to be in place that prevent cybercriminals from exploiting the protections they provide. The research above discusses future threats that subvert privacy technologies like Tor and Bitcoin, but the threat of subverting privacy technologies for criminal means is here now and must be dealt with proactively in order to stop these crimes and, to the greatest extent possible, prevent them from reappearing in the future.

References

c1 M. Casenove and A. Miraglia, "Botnet over Tor: The illusion of hiding," 2014 6th International Conference on Cyber Conflict (CyCon 2014), Tallinn, 2014, pp. 273-282. doi: 10.1109/CYCON.2014.6916408

c2 A. Sanatinia and G. Noubir, "OnionBots: Subverting Privacy Infrastructure for Cyber Attacks," in Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN'15), 2015.

c3 F. E. Kioupakis and E. Serrelis, "Preparing for Malware that Uses Covert Communication Channels: The Case of Tor-based Android Malware," 2014.

c4 N. Hampton and Z. A. Baig, "Ransomware: Emergence of the Cyber-Extortion Menace," 2015.
http://arxiv.org/abs/1707.01142v1
{ "authors": [ "Craig Ellis" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20170426221957", "title": "Analysis of the Rising Threat of Subverting Privacy Technologies" }
http://arxiv.org/abs/1704.08266v2
{ "authors": [ "Jenna M. Nugent", "Xinyu Dai", "Ming Sun" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20170426180116", "title": "Suzaku Measurements of Hot Halo Emission at Outskirts for Two Poor Galaxy Groups: NGC 3402 and NGC 5129" }
Physics Department, via P. Giuria 1, I-10125 Turin, Italy [email protected]

A superbubble which advances in a symmetric Navarro–Frenk–White density profile or in an auto-gravitating density profile generates a thick shell with a radius that can reach 10 kpc. The application of the symmetric and asymmetric image theory to this thick 3D shell produces a ring in the 2D intensity map and a characteristic 'U' shape in a 1D cut of the intensity. Such a ring originating from a superbubble is compared with an Einstein ring. A Taylor approximation of order 10 for the angular diameter distance is derived in order to deal with high values of the redshift.

Keywords: Cosmology; Observational cosmology; Gravitational lenses and luminous arcs

The Ring Produced by an Extra-Galactic Superbubble in Flat Cosmology
L. Zaninetti
=========================================================================

§ INTRODUCTION

A first theoretical prediction of the existence of gravitational lenses (GL) is due to Einstein <cit.>, where the formulae for the optical properties of a gravitational lens for stars A and B were derived. A first sketch, which dates back to 1912, is reported at p. 585 in <cit.>. The historical context of the GL is outlined in <cit.>, and online information can be found at <http://www.einstein-online.info/>.

After 43 years, a first GL was observed in the form of a close pair of blue stellar objects of magnitude 17 with a separation of 5.7 arcsec at redshift 1.405, 0957+561 A, B; see <cit.>. This double system is also known as the "Twin Quasar", and a figure reporting a 2014 Hubble Space Telescope (HST) image of objects A and B is available at <https://www.nasa.gov/content/goddard/hubble-hubble-seeing-double/>. At the moment of writing, the GL is routinely used as an explanation for lensed objects; see the following reviews <cit.>. As an example of current observations, 28 gravitationally lensed quasars have been observed by the Subaru Telescope (see <cit.>), where a mass model was derived for each system. Another example is given by the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) (see <cit.>), where 13 two-image quasar lenses have been observed and the corresponding Einstein radii reported in arcsec.

A first classification separates strong lensing, such as an Einstein ring (ER) and the arcs, from weak lensing, such as the shape deformation of background galaxies. Strong lensing occurs when the light from a distant background source, such as a galaxy or quasar, is deflected into multiple paths by an intervening galaxy or a cluster of galaxies, producing multiple images of the background source: examples are the ER and the multiple arcs in clusters of galaxies; see <https://apod.nasa.gov/apod/ap160828.html>. In the case of weak lensing the lens is not strong enough to form multiple images or arcs, but the source can be distorted: both stretched (shear) and magnified (convergence); see <cit.> and <cit.>. The first cluster of galaxies observed through the weak lensing effect is reported in <cit.>.

We now introduce supershells, which were unknown when the GL was postulated. Supershells started to be observed first in our galaxy by <cit.>, where 17 expanding H I shells were classified, and then in external galaxies; see, as an example, <cit.>, where many supershells were observed in NGC 1569.
In order to model such complex objects, the term superbubble (SB) has been introduced; unfortunately, astronomers often associate SBs with sizes of ≈ 10–100 pc and supershells with ring-like structures with sizes of ≈ 1 kpc. At the same time, an application of the theory of the image explains the limb-brightening visible in the intensity maps of SBs and allows associating the observed filaments with otherwise undetectable SBs; see <cit.>.

This paper derives, in Section <ref>, an approximate solution for the angular diameter distance in flat cosmology. Section <ref> briefly reviews the existing knowledge of ERs. Section <ref> derives an equation of motion for an SB in a Navarro–Frenk–White (NFW) density profile. Section <ref> adopts a recursive equation in order to model the asymmetric motion of an SB in an auto-gravitating density profile. Section <ref> applies the symmetrical and the asymmetrical image theory to the advancing shell of an SB.

§ THE FLAT COSMOLOGY

Following eq. (2.1) of <cit.>, the luminosity distance in flat cosmology, D_L, is
D_L(z; c, H_0, Ω_M) = (c/H_0)(1+z) ∫_{1/(1+z)}^{1} da / √(Ω_M a + (1−Ω_M) a⁴),
where H_0 is the Hubble constant expressed in km s⁻¹ Mpc⁻¹, c is the velocity of light expressed in km s⁻¹, z is the redshift, a is the scale factor, and
Ω_M = 8πG ρ_0 / (3 H_0²),
where G is the Newtonian gravitational constant and ρ_0 is the mass density at the present time. An analytical solution for the luminosity distance exists in the complex plane; see <cit.>. Here we deal with an approximate solution for the luminosity distance in the framework of a flat universe, adopting the same cosmological parameters as <cit.>: H_0 = 72 km s⁻¹ Mpc⁻¹, Ω_M = 0.26 and Ω_Λ = 0.74. An approximate solution for the luminosity distance, D_{L,10}(z), is given by a Taylor expansion of order 10 about a = 1 of the argument of the integral (<ref>):
D_{L,10}(z) = 4163.78 (1+z) (2.75 + 0.12882(1+z)^{−10} − 1.34123(1+z)^{−9} + 6.23877(1+z)^{−8} − 17.0003(1+z)^{−7} + 29.8761(1+z)^{−6} − 35.1727(1+z)^{−5} + 28.2558(1+z)^{−4} − 16.5327(1+z)^{−3} + 9.26107(1+z)^{−2} − 6.46(1+z)^{−1}).
More details on the analytical solution for the luminosity distance in the case of flat cosmology can be found in <cit.>, and Figure <ref> reports the comparison between the above analytical solution and the Taylor expansions of order 10, 8 and 2. The goodness of the Taylor approximation is evaluated through the percentage error δ,
δ = |D_L(z) − D_{L,10}(z)| / D_L(z) × 100.
As an example, Table <ref> reports the percentage error at z = 4 for three orders of expansion; the progressive decrease of the percentage error with increasing order of expansion is clear. Another useful distance is the angular diameter distance, D_A,
D_A = D_L / (1+z)²,
see <cit.>, and the Taylor approximation for the angular diameter distance is
D_{A,10} = D_{L,10} / (1+z)².
As a practical example of the above equation, the angular scale of 1 arcsec is 7.73 kpc at z = 3.042, whereas <cit.> quotes 7.78 kpc: this means a percentage error of 0.63% between the two values. Another check can be done with Ned Wright's Cosmology Calculator <cit.>, available at <http://www.astro.ucla.edu/~wright/CosmoCalc.html>: it quotes a scale of 7.775 kpc arcsec⁻¹, which means a percentage error of 0.57% with respect to our value.

In this section we have derived the cosmological scaling that allows us to fix the dimension of the ER.
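Both the integral above and the order-10 polynomial are easy to evaluate numerically, so the quoted accuracies can be checked directly. The sketch below is ours (simple trapezoidal quadrature, the paper's parameters); the function names are illustrative.

```python
import math

C, H0, OMEGA_M = 299792.458, 72.0, 0.26   # km/s, km/s/Mpc, matter density

def d_l(z, steps=20000):
    """Luminosity distance in Mpc: quadrature of the flat-cosmology integral."""
    a0 = 1.0 / (1.0 + z)
    h = (1.0 - a0) / steps
    f = lambda a: 1.0 / math.sqrt(OMEGA_M * a + (1.0 - OMEGA_M) * a ** 4)
    s = 0.5 * (f(a0) + f(1.0)) + sum(f(a0 + i * h) for i in range(1, steps))
    return C / H0 * (1.0 + z) * s * h

def d_l10(z):
    """Order-10 Taylor approximation quoted in the text."""
    u = 1.0 + z
    coeffs = [(0.12882, -10), (-1.34123, -9), (6.23877, -8), (-17.0003, -7),
              (29.8761, -6), (-35.1727, -5), (28.2558, -4), (-16.5327, -3),
              (9.26107, -2), (-6.46, -1)]
    return 4163.78 * u * (2.75 + sum(c * u ** e for c, e in coeffs))

z = 3.042                                 # redshift of SDP.81
print(f"delta = {abs(d_l(z) - d_l10(z)) / d_l(z) * 100:.3f} %")   # ~0.6 %
kpc_per_arcsec = d_l(z) / (1 + z) ** 2 * math.pi / (180 * 3600) * 1000
print(f"scale: {kpc_per_arcsec:.2f} kpc/arcsec")                   # ~7.7
```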
§ THE ER

This section reviews the simplest version of the ER and reports the observations of two recent ERs.

§.§ The theory

In the case of a circularly symmetric lens, and when the source and the lens are on the same line of sight, the ER radius in radians is
θ_E = √( (4 G M(θ_E)/c²) · D_ds/(D_d D_s) ),
where M(θ_E) is the mass enclosed inside the ER radius; D_d, D_s and D_ds are the lens, source and lens–source distances, respectively; G is the Newtonian gravitational constant; and c is the velocity of light; see eq. (20) in <cit.> and eq. (1) in <cit.>. The mass of the ER can be expressed in units of the solar mass, M_⊙:
M(θ_E) = 1.228 × 10⁸ θ²_{E,arcsec} (D_{d,Mpc} D_{s,Mpc} / D_{ds,Mpc}) M_⊙,
where θ_{E,arcsec} is the ER radius in arcsec and the three distances are expressed in Mpc.

§.§ The galaxy–galaxy lensing system SDP.81

The ring associated with the galaxy SDP.81 (see <cit.>) is generally explained by a GL. In this framework we have a foreground galaxy at z = 0.2999 and a background galaxy at z = 3.042. This ring has been studied with the Atacama Large Millimeter/submillimeter Array (ALMA) by <cit.>. The system SDP.81 as analysed by ALMA presents 14 molecular clumps along the two main lensed arcs. We can therefore speak of the ring's appearance as a 'grand design', and we now test the circular hypothesis. In order to test the departure from a circle, an observational percentage of reliability is introduced that uses both the size and the shape,
ϵ_obs = 100 (1 − Σ_j |R_obs − R_ave|_j / Σ_j R_obs,j),
where R_obs is the observed radius in arcsec and R_ave is the averaged radius in arcsec, which is R_ave = 1.54 arcsec. Figure <ref> reports the astronomical data of SDP.81, and the percentage of reliability is ϵ_obs = 92.78%.

§.§ The Canarias ER

The object IAC J010127-334319 has been detected in the optical region with the Gran Telescopio CANARIAS; the radius of the ER is θ_E = 2.16 arcsec; see <cit.>. As an example, inserting this radius, D_{s,Mpc} = 1192 Mpc, D_{ds,Mpc} = 498 Mpc and D_{d,Mpc} = 951 Mpc in Eq. (<ref>), we obtain a mass for the foreground galaxy of M(θ_E) = 1.3 × 10¹² M_⊙.
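The mass formula above is convenient for quick estimates; a minimal check (the function name is ours) reproducing the Canarias value quoted above:

```python
def einstein_mass(theta_e_arcsec, d_d_mpc, d_s_mpc, d_ds_mpc):
    """Lens mass inside the Einstein radius, in solar masses (distances in Mpc)."""
    return 1.228e8 * theta_e_arcsec ** 2 * d_d_mpc * d_s_mpc / d_ds_mpc

# IAC J010127-334319: theta_E = 2.16", D_d = 951, D_s = 1192, D_ds = 498 Mpc
print(f"M = {einstein_mass(2.16, 951.0, 1192.0, 498.0):.2e} M_sun")   # ~1.3e12
```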
§ THE EQUATION OF MOTION OF A SYMMETRICAL SB

The density is assumed to have a Navarro–Frenk–White (NFW) dependence on r in spherical coordinates:
ρ(r; r_0, b, ρ_0) = ρ_0 r_0 (b+r_0)² / (r (b+r)²),
where b represents the scale; see <cit.> for more details. The piecewise density is
ρ(r; r_0, b, ρ_0) = ρ_0 if r ≤ r_0, and ρ_0 r_0 (b+r_0)² / (r (b+r)²) if r > r_0.
The total mass swept, M(r; r_0, b, ρ_0), in the interval [0, r] is
M(r; r_0, b, ρ_0) = (4π/3) ρ_0 r_0³ + 4π ρ_0 r_0 (b+r_0)² [ ln((b+r)/(b+r_0)) + b/(b+r) − b/(b+r_0) ].
The conservation of momentum in spherical coordinates in the framework of the thin layer approximation states that
M_0(r_0) v_0 = M(r) v,
where M_0(r_0) and M(r) are the swept masses at r_0 and r, and v_0 and v are the velocities of the thin layer at r_0 and r. The velocity is therefore
dr/dt = v_0 r_0² / ( r_0² + 3(b+r_0)² [ ln((b+r)/(b+r_0)) + b/(b+r) − b/(b+r_0) ] ).
The integration of the above first-order differential equation gives the following nonlinear equation:
(1/(r_0² v_0)) { 6(b+r_0)² (b + r/2) ln((b+r)/(b+r_0)) − 6(r−r_0)(b² + (3/2) b r_0 + (1/3) r_0²) } = t − t_0.
The above nonlinear equation does not have an analytical solution for the radius r as a function of time.

The astrophysical units are pc for length and yr for time. With these units, the initial velocity is v_0(km s⁻¹) = 9.7968 × 10⁵ v_0(pc yr⁻¹). The energy-conserving phase of an SB in the presence of constant density allows setting up the initial conditions, and the radius is
R = 111.552 (N* E_51 / n_0)^{1/5} t_7^{3/5} pc,
where t_7 is the time expressed in units of 10⁷ yr, E_51 is the energy expressed in units of 10⁵¹ erg, n_0 is the number density expressed in particles cm⁻³ (density ρ_0 = n_0 m, where m = 1.4 m_H), and N* is the number of SN explosions in 5.0 × 10⁷ yr and therefore a rate; see eq. (10.38) in <cit.>. The velocity of an SB in this phase is
v_0 = 0.416324 × 5^{2/5} × 14^{4/5} (N* E_51 / n_0)^{1/5} t_7^{−2/5} km s⁻¹.
The initial conditions for r_0 and v_0 are thus fixed by the energy-conserving phase of an SB evolving in a medium of constant density. The free parameters of the model are reported in Table <ref>; Figure <ref> reports the law of motion and Figure <ref> the behaviour of the velocity as a function of time.

Once the standard radius of SDP.81 is fixed at r = 11.39 kpc, we evaluate the pairs of values of b (the scale) and t (the time) that allow such a value of the radius; see Figure <ref>. The pairs of values of n_0 (the initial number density) and t (the time) which produce the standard value of the radius are reported in Figure <ref>; Figure <ref> conversely reports the actual velocity of the SB associated with SDP.81 as a function of n_0. The swept mass can be expressed in numbers of solar masses, M_⊙, and, with parameters as in Table <ref>, is
M = 3732.709 n_0 M_⊙.
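Since the integrated relation above gives the travel time in closed form, the law of motion can be tabulated without solving the ODE. The sketch below is ours; the parameter values (r_0 = 44 pc, b = 1 pc, a 100 km s⁻¹ initial velocity) are placeholders rather than the values of Table <ref>.

```python
import math

def t_of_r(r, r0, b, v0, t0=0.0):
    """Travel time of the symmetric SB shell from the closed-form relation above.
    Lengths in pc, v0 in pc/yr, returned time in yr."""
    log_part = 6.0 * (b + r0) ** 2 * (b + 0.5 * r) * math.log((b + r) / (b + r0))
    lin_part = 6.0 * (r - r0) * (b * b + 1.5 * b * r0 + r0 * r0 / 3.0)
    return t0 + (log_part - lin_part) / (r0 ** 2 * v0)

r0, b = 44.0, 1.0                     # pc (placeholder values)
v0 = 100.0 / 9.7968e5                 # 100 km/s converted to pc/yr
for r in (100.0, 1000.0, 11390.0):    # the last value is the radius of SDP.81
    print(f"r = {r:8.0f} pc  ->  t = {t_of_r(r, r0, b, v0):.3e} yr")
```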
§ THE EQUATION OF MOTION OF AN ASYMMETRICAL SB

In order to simulate an asymmetric SB we briefly review a numerical algorithm developed in <cit.>. We assume a number density distribution
n(z) = n_0 sech²(z/(2h)),
where n_0 is the density at z = 0, h is a scaling parameter, and sech is the hyperbolic secant (<cit.>). We now analyze the case of an expansion that starts from a given galactic height z, denoted by z_OB, which represents the OB associations. It is not possible to find r analytically, and a numerical method must be implemented. The following two recursive equations are found when momentum conservation is applied:
r_{n+1} = r_n + v_n Δt,
v_{n+1} = v_n ( M_n(r_n) / M_{n+1}(r_{n+1}) ),
where r_n, v_n and M_n are the temporary radius, velocity and total mass, respectively, Δt is the time step, and n is the index. The advancing expansion is computed in a 3D Cartesian coordinate system (x, y, z) with the center of the explosion at (0, 0, 0). The explosion is better visualized in a 3D Cartesian coordinate system (X, Y, Z) in which the galactic plane is given by Z = 0. The following translation, T_OB, relates the two Cartesian coordinate systems:
T_OB: X = x, Y = y, Z = z + z_OB,
where z_OB is the distance in parsec of the OB associations from the galactic plane.

The physical units for the asymmetrical SB have not yet been specified: parsecs for length and 10⁷ yr for time are perhaps an acceptable astrophysical choice. With these units, the initial velocity v_0 = ṙ_0 is expressed in units of pc/(10⁷ yr) and should be converted into km s⁻¹; this means that v_0 = 10.207 v_1, where v_1 is the initial velocity expressed in km s⁻¹.

We are now ready to present the numerical evolution of the SB associated with SDP.81 when z_OB = 100; see Fig. <ref>. The degree of asymmetry can be evaluated by introducing the radius along the upward polar direction, r_up, along the downward polar direction, r_down, and along the equatorial direction, r_eq. In our model these three radii are all different; see Table <ref>. We can evaluate the radius and the velocity as functions of the direction by plotting them in the Y–Z (X = 0) plane; see Figures <ref> and <ref>.
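A minimal one-direction implementation of the recursion is sketched below. The conical-sector geometry, the seed-mass normalization, and all parameter values (h = 90 pc, z_OB = 100 pc, r_0 = 44 pc, 100 km s⁻¹) are our illustrative assumptions, not the paper's adopted values.

```python
import math

def sech2(z, h):
    return 1.0 / math.cosh(z / (2.0 * h)) ** 2

def expand_along(theta, z_ob, h, r0, v0, dt=1.0e4, n_steps=2000):
    """Thin-shell recursion along one polar direction theta, treating the shell
    sector as a cone of unit solid angle (a simplifying assumption).
    Lengths in pc, v0 in pc/yr, dt in yr."""
    r, v = r0, v0
    mass = r0 ** 3 / 3.0 * sech2(z_ob, h)           # seed mass per unit solid angle
    for _ in range(n_steps):
        r_new = r + v * dt
        z = z_ob + r_new * math.cos(theta)           # galactic height of the sector
        dm = r_new ** 2 * (r_new - r) * sech2(z, h)  # newly swept mass
        v *= mass / (mass + dm)                      # v_{n+1} = v_n M_n / M_{n+1}
        mass += dm
        r = r_new
    return r, v

h, z_ob, r0 = 90.0, 100.0, 44.0       # pc (illustrative)
v0 = 100.0 / 9.7968e5                 # 100 km/s in pc/yr
for deg in (0, 90, 180):              # up, equatorial plane, down
    r, v = expand_along(math.radians(deg), z_ob, h, r0, v0)
    print(f"theta = {deg:3d}: r = {r:8.1f} pc, v = {v * 9.7968e5:6.2f} km/s")
```

Even this toy version shows the qualitative behaviour described above: the sector climbing away from the galactic plane sweeps less mass and therefore runs farther and faster than the equatorial and downward sectors.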
§ THE IMAGE

We now briefly review the radiative transfer equation, the conversion of the flux of energy into luminosity, and the symmetric and asymmetric theory of the image.

§.§ Radiative transfer equation

The transfer equation in the presence of emission and absorption (see, for example, eqn. (1.23) in <cit.>, eqn. (9.4) in <cit.> or eqn. (2.27) in <cit.>) is
dI_ν/ds = −k_ν ζ I_ν + j_ν ζ,
where I_ν is the specific intensity or spectral brightness, s is the line of sight, j_ν the emission coefficient, k_ν a mass absorption coefficient, ζ the mass density at position s, and the index ν denotes the frequency of emission. The solution to Eq. (<ref>) is
I_ν(τ_ν) = (j_ν/k_ν)(1 − e^{−τ_ν(s)}),
where τ_ν is the optical depth at frequency ν,
dτ_ν = k_ν ζ ds.
We now continue analysing the case of an optically thin layer, in which τ_ν is very small (or k_ν very small) and the density ζ is replaced by the number density of particles, n(s). In the following, the emissivity is taken to be proportional to the number density,
j_ν ζ = K n(s),
where K is a constant. The intensity is therefore
I_ν(s) = I_0 + K ∫_{s_0}^{s} n(s') ds',
where I_0 is the intensity at the point s_0. The MKS units of the intensity are W m⁻² Hz⁻¹ sr⁻¹. The increase in brightness is proportional to the number density integrated along the line of sight; in the case of constant number density, it is proportional only to the length of the line of sight. As an example, synchrotron emission has an intensity proportional to l, the dimension of the radiating region, in the case of a constant number density of the radiating particles; see formula (1.175) of <cit.>.

§.§ The source of luminosity

The ultimate source of the observed luminosity is assumed to be the rate of kinetic energy, L_m,
L_m = (1/2) ρ A V³,
where A is the considered area, V the velocity, and ρ the density in the advancing layer of a spherical SB. In the case of the spherical expansion of an SB, A = 4πr², where r is the instantaneous radius of the SB, which means
L_m = (1/2) ρ 4πr² V³.
The units of the luminosity are W in MKS and erg s⁻¹ in CGS. The astrophysical version of the rate of kinetic energy, L_ma, is
L_ma = 1.39 × 10²⁹ n_1 r_1² v_1³ erg s⁻¹,
where n_1 is the number density expressed in units of 1 particle cm⁻³, r_1 is the radius in parsecs, and v_1 is the velocity in km s⁻¹. As an example, according to Figure <ref>, inserting r_1 = 11.39 × 10³, n_1 = 0.1 and v_1 = 26.08 in the above formula, the maximum available mechanical luminosity is L_ma = 3.2 × 10⁴⁰ erg s⁻¹. The spectral luminosity, L_ν, at a given frequency ν is
L_ν = 4π D_L² S_ν,
where S_ν is the observed flux density at frequency ν, with MKS units W m⁻² Hz⁻¹. The observed luminosity at frequency ν can be expressed as
L_ν = ε L_ma,
where ε is a conversion constant from the mechanical luminosity to the observed luminosity. More details on the synchrotron luminosity and the connected astrophysical units can be found in <cit.>.

§.§ The symmetrical image theory

We assume that the number density of the emitting matter n is variable; in particular, it rises from 0 at r = a to a maximum value n_m, remains constant up to r = b, and then falls again to 0. This geometrical description is shown in Figure <ref>. The length of the line of sight, when the observer is situated at the infinity of the x-axis, is the locus parallel to the x-axis which crosses position y in a Cartesian x–y plane and terminates at the external circle of radius b. The locus length is
l_{0a} = 2 (√(b² − y²) − √(a² − y²)) for 0 ≤ y < a,
l_{ab} = 2 √(b² − y²) for a ≤ y < b.
When the number density of the emitting matter n_m is constant between two spheres of radii a and b, the intensity of the radiation is
I_{0a} = K_I × n_m × 2 (√(b² − y²) − √(a² − y²)) for 0 ≤ y < a,
I_{ab} = K_I × n_m × 2 √(b² − y²) for a ≤ y < b,
where K_I is a constant. The ratio between the theoretical intensity at the maximum (y = a) and at the minimum (y = 0) is
I(y=a)/I(y=0) = √(b² − a²)/(b − a).
The parameter b is identified with the external radius, i.e. the advancing radius of an SB. The parameter a can be found from
a = b ( (I(y=a)/I(y=0))²_obs − 1 ) / ( (I(y=a)/I(y=0))²_obs + 1 ),
where (I(y=a)/I(y=0))_obs is the observed ratio between the maximum intensity at the rim and the intensity at the center. The distance Δy after which the intensity decreases by a factor f in the region a ≤ y < b is
Δy = ( 2√(b²f² + a² − b²) − √(a²f⁴ − b²f⁴ + 2a²f² + 2b²f² + a² − b²) ) / (2f).
We can now evaluate, by analogy with the Gaussian profile, the half-width at half-maximum HWHM_U, obtained from the previous formula upon inserting f = 2:
HWHM_U = (1/2)√(a² + 3b²) − (1/4)√(25a² − 9b²).
In the above model b is associated with the radius of the outer region of the observed ring; a, conversely, can be deduced from the observed HWHM_U:
a = (1/21) √( 441b² + 464 HWHM_U² − 32 √(441 b² HWHM_U² + 100 HWHM_U⁴) ).
As an example, inserting in the above formula b = 1.54 arcsec and HWHM_U = 0.1 arcsec, we obtain a = 1.46 arcsec. A cut in the theoretical intensity of SDP.81 (see Section <ref>) is reported in Figure <ref>, and a theoretical image in Figure <ref>. The effect of the insertion of a threshold intensity, I_tr, which is connected with the observational techniques, is now analysed. The threshold intensity can be parametrized by I_max, the maximum value of intensity characterizing the ring: a typical image with a hole is visible in Figure <ref> when I_tr = I_max/fac, where fac is a parameter which allows matching theory with observations.
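The 'U' profile and the HWHM relation are straightforward to evaluate. The sketch below (our naming, with K_I n_m set to 1) reproduces the a = 1.46 arcsec example quoted above and tabulates the rim-brightened profile.

```python
import math

def intensity(y, a, b):
    """Line-of-sight length through the shell a <= r <= b (K_I * n_m = 1)."""
    if abs(y) < a:
        return 2.0 * (math.sqrt(b * b - y * y) - math.sqrt(a * a - y * y))
    if abs(y) < b:
        return 2.0 * math.sqrt(b * b - y * y)
    return 0.0

def a_from_hwhm(b, hwhm):
    """Inner radius from the observed half-width at half-maximum (formula above)."""
    root = math.sqrt(441 * b ** 2 * hwhm ** 2 + 100 * hwhm ** 4)
    return math.sqrt(441 * b ** 2 + 464 * hwhm ** 2 - 32 * root) / 21.0

b = 1.54                                  # arcsec, outer ring radius of SDP.81
a = a_from_hwhm(b, 0.1)
print(f"a = {a:.2f} arcsec")              # ~1.46, as quoted above
for y in (0.0, 0.8, 1.2, a, 1.5):         # intensity rises steeply toward y = a
    print(f"I({y:4.2f}) = {intensity(y, a, b):.3f}")
```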
A comparison between the theoretical intensity and the theoretical flux can be made through formula (<ref>): since the conversion constant ε is assumed to be constant over the whole astrophysical image, the theoretical intensity and the theoretical flux scale in the same way. The theoretical flux profile for IAC J010127-334319 (see Section <ref>) is reported in Figure <ref>. The linear relation between the distance in pc and the projected distance on the sky in arcsec allows us to state the following: the 'U' profile of a cut in theoretical flux for a symmetric ER is independent of the exact value of the angular diameter distance.

§.§ The asymmetrical image theory

We now explain a numerical algorithm which allows us to build the complex image of an asymmetrical SB.

* An empty (value = 0) memory grid ℳ(i,j,k) which contains NDIM³ pixels is considered.
* We first generate an internal 3D surface by rotating the section of 180° around the polar direction, and a second, external surface at a fixed distance Δr from the first surface. As an example, we fixed Δr = 0.03 r_max, where r_max is the maximum radius of expansion. The points on the memory grid which lie between the internal and the external surfaces are memorized on ℳ(i,j,k) with a variable integer number according to formula (<ref>), with density ρ proportional to the swept mass.
* Each point of ℳ(i,j,k) has spatial coordinates x, y, z which can be represented by the following 1 × 3 matrix A:
A = [x; y; z].
The orientation of the object is characterized by the Euler angles (Φ, Θ, Ψ) and therefore by a total 3 × 3 rotation matrix E; see <cit.>. The rotated point is represented by the following 1 × 3 matrix B:
B = E · A.
* The intensity map is obtained by summing the points of the rotated images along a particular direction.
* The effect of the insertion of a threshold intensity, I_tr, given by the observational techniques, is now analysed. The threshold intensity can be parametrized by I_max, the maximum value of intensity which characterizes the map; see <cit.>.

An ideal image of the intensity of the Canarias ring is shown in Fig. <ref>. The theoretical flux, which is here assumed to scale as the flux of kinetic energy represented by eqn. (<ref>), is reported in Figure <ref>. The percentage of reliability which characterizes the observed and the theoretical variations in intensity of the above figure is ϵ_obs = 92.7%.
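The grid-based algorithm condenses to a few numpy lines. This is only the skeleton, under our own simplifications: an ad hoc axially squashed shell stands in for the computed SB surface, and the Euler rotation is omitted (identity orientation).

```python
import numpy as np

def shell_map(ndim=101, r_out=0.9, dr_rel=0.03, axis_ratio=0.8):
    """Skeleton of the algorithm above: mark the voxels between the internal and
    external surfaces, then sum the grid along one index to get the intensity map."""
    c = np.linspace(-1.0, 1.0, ndim)
    x, y, z = np.meshgrid(c, c, c, indexing="ij")
    r = np.sqrt(x ** 2 + y ** 2 + (z / axis_ratio) ** 2)  # direction-dependent radius
    shell = (r >= r_out * (1.0 - dr_rel)) & (r <= r_out)
    return shell.sum(axis=0)                              # line-of-sight summation

img = shell_map()
i_tr = img.max() / 2.0                      # threshold I_tr = I_max / fac, fac = 2
img_holed = np.where(img >= i_tr, img, 0)   # reproduces the 'image with a hole'
print(img.shape, img.max(), int((img_holed > 0).sum()))
```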
§ CONCLUSIONS

Flat cosmology: In order to have a reliable evaluation of the radius of SDP.81, we have provided a Taylor approximation of order 10 for the luminosity distance in the framework of flat cosmology. The percentage error between the analytical solution and the approximate solution at z = 3.04 (the redshift of SDP.81) is δ = 0.588%.

Symmetric evolution of an SB: The motion of an SB advancing in a medium with spherically symmetric decreasing density is analyzed. The density profile adopted here is an NFW profile, which has three free parameters, r_0, b and ρ_0. The available astronomical data do not allow us to close the equations at r = 11.39 kpc (the radius of SDP.81). A numerical relationship which connects the number density with the lifetime of an SB is reported in Figure <ref>; an approximation of this relationship is t/10⁷ yr = 67.36 n_0^{0.26} when b = 1 pc and r_0 = 44 pc.

Symmetric image theory: The transfer equation for the luminous intensity in the case of an optically thin layer reduces, in the case of spherical symmetry, to the evaluation of a length between the lower and the upper radius along the line of sight; see eqn. (<ref>). The cut in intensity has a characteristic 'U' shape, see eqn. (<ref>), which also characterizes the image of the ER associated with the galaxy SDP.81.

Asymmetric image theory: The layer between a complex 3D advancing surface with radius r_a, a function of two angles in polar coordinates (the external surface), and radius r_a − Δr (the internal surface) is filled with N random points. After a rotation characterized by three Euler angles, which aligns the 3D layer with the observer, the image is obtained by summing a 3D visitation grid over one index; see Figure <ref>. The variations of the flux counts (in ADU) for the Canarias ER as a function of the angle can be modeled because the radius, the velocity, and therefore the flux of kinetic energy are different in each chosen direction; see Figure <ref>.

§ ACKNOWLEDGMENTS

The real data of Figure <ref> were kindly provided by Margherita Bettinelli. The real data of Figure <ref> were digitized using WebPlotDigitizer, a Web-based tool to extract data from plots, available at <http://arohatgi.info/WebPlotDigitizer/>.

REFERENCES

Einstein1936 Einstein A 1936 Lens-Like Action of a Star by the Deviation of Light in the Gravitational Field Science 84, 506
Einstein_3_1994 Einstein A 1994 The Collected Papers of Albert Einstein, Volume 3: The Swiss Years: Writings, 1909-1911 (Princeton, NJ: Princeton University Press)
Valls-Gabaud2006 Valls-Gabaud D 2006 The conceptual origins of gravitational lensing in J M Alimi and A Füzfa, eds, Albert Einstein Century International Conference vol 861 of American Institute of Physics Conference Series pp 1163–1163 (Preprint 1206.1165)
Walsh1979 Walsh D, Carswell R F and Weymann R J 1979 0957 + 561 A, B - Twin quasistellar objects or gravitational lens 279, 381
Blandford1992 Blandford R D and Narayan R 1992 Cosmological applications of gravitational lensing 30, 311
Mellier1999 Mellier Y 1999 Probing the Universe with Weak Lensing 37, 127 (Preprint astro-ph/9812172)
Refregier2003 Refregier A 2003 Weak Gravitational Lensing by Large-Scale Structure 41, 645 (Preprint astro-ph/0307212)
Treu2010 Treu T 2010 Strong Lensing by Galaxies 48, 87 (Preprint 1003.5567)
Rusu2016 Rusu C E, Oguri M, Minowa Y, Iye M, Inada N, Oya S, Kayo I, Hayano Y, Hattori M, Saito Y, Ito M, Pyo T S, Terada H, Takami H and Watanabe M 2016 Subaru Telescope adaptive optics observations of gravitationally lensed quasars in the Sloan Digital Sky Survey 458, 2 (Preprint 1506.05147)
More2016 More A, Oguri M, Kayo I, Zinn J, Strauss M A, Santiago B X, Mosquera A M, Inada N, Kochanek C S, Rusu C E, Brownstein J R, da Costa L N, Kneib J P, Maia M A G, Quimby R M, Schneider D P, Streblyanska A and York D G 2016 The SDSS-III BOSS quasar lens survey: discovery of 13 gravitationally lensed quasars 456, 1595 (Preprint 1509.07917)
Kilbinger2015 Kilbinger M 2015 Cosmology with cosmic shear observations: a review Reports on Progress in Physics 78(8) 086901 (Preprint 1411.0115)
DePaolis2016 De Paolis F, Giordano M, Ingrosso G, Manni L, Nucita A and Strafella F 2016 The Scales of Gravitational Lensing Universe 2, 6 (Preprint 1604.06601)
Wittman2001 Wittman D, Tyson J A, Margoniner V E, Cohen J G and Dell'Antonio I P 2001 Discovery of a Galaxy Cluster via Weak Lensing 557, L89 (Preprint astro-ph/0104094)
heiles1979 Heiles C 1979 H I shells and supershells 229, 533
Sanchez2015 Sánchez-Cruces M, Rosado M, Rodríguez-González A and Reyes-Iturbide J 2015 Kinematics of Superbubbles and Supershells in the Irregular Galaxy, NGC 1569 799 231
Zaninetti2012g Zaninetti L 2012 Evolution of superbubbles in a self-gravitating disc Monthly Notices
of the Royal Astronomical Society 425, 2343 ISSN 1365-2966Adachi2012 Adachi M and Kasai M 2012 An Analytical Approximation of the Luminosity Distance in Flat Cosmologies with a Cosmological Constant Progress of Theoretical Physics 127, 145Zaninetti2015b Zaninetti L and Ferraro M 2015 On Non-Poissonian Voronoi Tessellations Applied Physics Research 7, 108Tamura2015 Tamura Y, Oguri M, Iono D, Hatsukade B, Matsuda Y and Hayashi M 2015 High-resolution ALMA observations of SDP.81. I. The innermost mass profile of the lensing elliptical galaxy probed by 30 milli-arcsecond images67 72 (Preprint 1503.07605)Zaninetti2016b Zaninetti L 2016 An analytical solution in the complex plane for the luminosity distance in flat cosmology Journal of High Energy Physics, Gravitation and Cosmology 2, 581Etherington1933 Etherington I M H 1933 On the Definition of Distance in General Relativity. Philosophical Magazine 15Wright2006 Wright E L 2006 A Cosmology Calculator for the World Wide Web118, 1711 (Preprint astro-ph/0609593)Narayan1996 Narayan R and Bartelmann M 1996 Lectures on Gravitational Lensing ArXiv Astrophysics e-prints (Preprint astro-ph/9606001)Bettinelli2016 Bettinelli M, Simioni M, Aparicio A, Hidalgo S L, Cassisi S, Walker A R, Piotto G and Valdes F 2016 The Canarias Einstein ring: a newly discovered optical Einstein ring461, L67 (Preprint 1605.03938)Eales2010 Eales S, Dunne L, Clements D and Cooray A 2010 The Herschel ATLAS122, 499 (Preprint 0910.4279)ALMA2015 ALMA Partnership, Vlahakis C, Hunter T R and Hodge J A 2015 The 2014 ALMA Long Baseline Campaign: Observations of the Strongly Lensed Submillimeter Galaxy HATLAS J090311.6+003906 at z = 3.042808 L4 (Preprint 1503.02652)Rybak2015 Rybak M, Vegetti S, McKean J P, Andreani P and White S D M 2015 ALMA imaging of SDP.81 - II. A pixelated reconstruction of the CO emission lines453, L26 (Preprint 1506.01425)Hatsukade2015 Hatsukade B, Tamura Y, Iono D, Matsuda Y, Hayashi M and Oguri M 2015 High-resolution ALMA observations of SDP.81. II. Molecular clump properties of a lensed submillimeter galaxy at z = 3.04267 93 (Preprint 1503.07997)Wong2015 Wong K C, Suyu S H and Matsushita S 2015 The Innermost Mass Distribution of the Gravitational Lens SDP.81 from ALMA Observations811 115 (Preprint 1503.05558)Hezaveh2016 Hezaveh Y D, Dalal N and Marrone D P 2016 Detection of Lensing Substructure Using ALMA Observations of the Dusty Galaxy SDP.81823 37 (Preprint 1601.01388)Navarro1996 Navarro J F, Frenk C S and White S D M 1996 The Structure of Cold Dark Matter Halos462, 563 (Preprint astro-ph/9508025)McCray1987 McCray R A 1987 Coronal interstellar gas and supernova remnants in A Dalgarno & D Layzer, ed, Spectroscopy of Astrophysical Plasmas (Cambridge, UK: Cambridge University Press) pp 255–278Spitzer1942 Spitzer Jr L 1942 The Dynamics of the Interstellar Medium. III. Galactic Distribution.95, 329Rohlfs1977 Rohlfs K, ed 1977 Lectures on density wave theory vol 69 of Lecture Notes in Physics, Berlin Springer VerlagBertin2000 Bertin G 2000 Dynamics of Galaxies (Cambridge: Cambridge University Press.)Padmanabhan_III_2002 Padmanabhan P 2002 Theoretical astrophysics. Vol. 
III: Galaxies and Cosmology (Cambridge, UK: Cambridge University Press)rybicki Rybicki G and Lightman A 1991 Radiative Processes in Astrophysics (New-York: Wiley-Interscience)Hjellming1988 Hjellming, R M 1988 Radio stars IN Galactic and Extragalactic Radio Astronomy(New York: Springer-Verlag)Condon2016 Condon J J and Ransom S M 2016 Essential radio astronomy (Princeton, NJ: Princeton University Press)lang Lang K R 1999 Astrophysical formulae. (Third Edition) (New York: Springer)Goldstein2002 Goldstein H, Poole C and Safko J 2002 Classical mechanics (San Francisco: Addison-Wesley)Zaninetti2012b Zaninetti L 2012 On the spherical-axial transition in supernova remnants Astrophysics and Space Science 337, 581 (Preprint 1109.4012)
http://arxiv.org/abs/1704.08541v1
{ "authors": [ "L. Zaninetti" ], "categories": [ "astro-ph.CO", "astro-ph.GA" ], "primary_category": "astro-ph.CO", "published": "20170427124849", "title": "The Ring Produced by an Extra-Galactic Superbubble in Flat Cosmology" }
Quasistationary solutions of scalar fields around collapsing self-interacting boson stars José A. Font December 30, 2023 =========================================================================================§ INTRODUCTION Some problems in nonlinear elasticity (including, for instance, those involvinghyperelastic materials) reduce tothat of minimizing the total energy functional.In this situation, and in contrast to the case of linear elasticity, the integrand is almost always nonconvex, while the functional is nonquadratic. This renders the standard variational methods inapplicable. Nevertheless, for a sufficiently large class of applied nonlinear problems, we may replace convexity with certain weaker conditions, i.e.polyconvexity <cit.>.Denote by ^m× nthe set of m× n matrices. Recall thata functionWΩ×^3× 3→, Ω⊂^3,is called polyconvex if there exists a convex function G(x,·)^3× 3×^3 × 3×_+→ such thatG(x, F,F,F)=W(x,F) for allF∈^3× 3 withF > 0,almost everywhere (henceforth abbreviated as a.e.) in  Ω.LetΩ be a bounded domain in^3 which boundary ∂Ω satisfies the Lipschitz condition. Ball's method <cit.> is to consider a sequence {φ_k}_k∈ℕ minimizing the total energy functionalI(φ)=∫_ΩW(x,Dφ) dx.over the set of admissible deformations_B={φ∈ W^1_1(Ω), I(φ) < ∞, J(x,φ) > 0 a.e. in Ω,φ|_∂Ω=φ|_∂Ω},where φ are Dirichlet boundary conditions andJ(x,φ)stands for the Jacobian of φ, J(x,φ) =Dφ (x). Furthermore, it is assumed that the coercivity inequalityW(x,F)≥α (|F|^p+| F|^q+ ( F)^r) + g(x)holds for almost all x∈Ω and all F∈^3× 3, F > 0, where p≥ 2, q≥p/p-1, r>1 andg∈ L_1(Ω), F denotes the adjoint matrix, i.e. a transposed matrix of(2 × 2)-subdeterminants ofF. Moreover, the stored-energy function  W is polyconvex. By coercivity, it follows that the sequence (φ_k, Dφ_k,Dφ_k) is bounded in the reflexive Banach space W^1_p(Ω)× L_q(Ω)× L_r(Ω). Relying on the relation betweenp andq,one can conclude thatthere exists a subsequence converging weakly to an element (φ_0, Dφ_0,Dφ_0). For the limit φ_0 to belong to the class _B of admissible deformations, we need to impose the additional condition: W(x,F)→∞ asF → 0_+(see <cit.> for more details).This condition is quite reasonablesince it fits in with the principle that “extreme stress must accompany extreme strains”. Another important property of this approach is the sequentially weakly lower semicontinuity of the total energy functional,I(φ)≤_k→∞ I(φ_k),which holds because the stored-energy function is polyconvex. It is also worth noting that Ball's approach admits the nonuniqueness of solutions observed experimentally (see <cit.> for more details).One of the most important requirements of continuum mechanics isthat interpenetration of matter does not occur, from which it follows that any deformation has to be injective. Global injectivity of deformations has been established by J. Ball <cit.> within the existence theory based on minimization of the energy<cit.>. More precisely, ifφΩ→ℝ^n, Ω⊂^n, is a mapping in W^1_p(Ω), p > n,coinciding on the boundary∂Ω with a homeomorphism φ and J(x, φ) > 0a.e. inΩ, φ(Ω) is Lipschitz, and if for someσ > n∫_Ω |(D φ(x))^-1|^σ J(x, φ) dx = ∫_Ω| D φ(x)|^σ/J(x, φ)^σ-1dx < ∞,thenφis a homeomorphism ofΩ onφ(Ω) and φ^-1∈ W^1_σ(φ(Ω)).To apply this result to nonlinear elasticity it is required that some additional conditions onthe stored-energy function be imposed in order to obtaininvertibility of deformations. 
Thus, in <cit.> (see also <cit.>) one considers a domain Ω ⊂ ℝ³ with Lipschitz boundary ∂Ω and a polyconvex stored-energy function W. Suppose that there exist constants α > 0, p > 3, q > 3, r > 1, and m > 2q/(q−3), as well as a function g ∈ L_1(Ω), such that
W(x,F) ≥ α (|F|^p + |adj F|^q + (det F)^r + (det F)^{−m}) + g(x)
for almost all x ∈ Ω and all F ∈ ℝ^{3×3}, det F > 0. Take a homeomorphism φ̄: Ω → Ω' in W^1_p(Ω) with J(x,φ̄) > 0 a.e. in Ω. Then there exists a mapping φ: Ω → Ω' minimizing the total energy functional (<ref>) over the set of admissible deformations (<ref>), which is a homeomorphism due to (<ref>), with φ^{-1} ∈ W^1_σ(Ω'), σ = q(1+m)/(q+m) > 3.

In this article we obtain the injectivity property (Theorem <ref>) on the basis of the boundedness of the composition operator φ^*: L^1_p(Ω') → L^1_q(Ω). The boundedness of these operators is intimately related to a condition of finite distortion. Recall that a W^1_{1,loc}-mapping f: Ω → ℝ^n with nonnegative Jacobian, J(x,f) ≥ 0 a.e., is called a mapping with finite distortion if
|Df(x)|^n ≤ K(x) J(x,f)
for almost all x ∈ Ω, where 1 ≤ K(x) < ∞ a.e. in Ω. The function
K_O(x,φ) = |Dφ(x)|^n / J(x,φ)
is called the outer distortion coefficient [It is assumed that K_O(x,φ) = 1 if J(x,φ) = 0.]. It is worth noting that mappings with finite distortion arise in nonlinear elasticity from geometric considerations: it is desirable that the deformation be continuous, map sets of measure zero to sets of measure zero, and be one-to-one, and that the inverse map have "good" properties. Hence, many research groups all over the world have worked on this issue (see <cit.> and many others).

It is known that in the planar case (Ω, Ω' ⊂ ℝ²) a homeomorphism φ ∈ W^1_{1,loc}(Ω) has an inverse homeomorphism φ^{-1} ∈ W^1_{1,loc}(Ω') if and only if φ is a mapping with finite distortion <cit.>. In the spatial case, W^1_{n,loc}-regularity of the inverse mapping was shown for W^1_{q,loc}-homeomorphisms, q > n−1, with integrable inner distortion [Here K_I(x,φ) = 1 if |adj Dφ(x)| = 0, and K_I(x,φ) = ∞ if |adj Dφ(x)| ≠ 0 and J(x,φ) = 0.]
K_I(x,φ) = |adj Dφ(x)|^n / J(x,φ)^{n−1}.
Moreover, the relaxation of (<ref>) to the case σ = n,
∫_{Ω'} |Dφ^{-1}(y)|^n dy = ∫_Ω |adj Dφ(x)|^n / J(x,φ)^{n−1} dx = ∫_Ω K_I(x,φ) dx,
holds <cit.>.

In <cit.> the authors study W^1_n-homeomorphisms φ: Ω → Ω' between two bounded domains in ℝ^n with finite energy and consider the behavior of such mappings. In general, the weak W^1_n-limit of a sequence of homeomorphisms may lose injectivity. However, if the norms ‖K_I(·,φ) | L_1(Ω)‖ of the inner distortion are totally bounded and some additional requirements hold, then the limit map is a homeomorphism. The main idea behind the proof of existence and global invertibility is to investigate the admissible deformations φ_k together with their inverses φ_k^{-1} along a minimizing sequence {φ_k}. This is possible [See <cit.> for another proof of this property under weaker assumptions.] due to the integrability of the inner distortion, as it ensures the existence and W^1_n-regularity of the inverse map. Note that the authors of these papers include the requirement of integrability of the inner distortion coefficient in the coercivity inequality. The authors of the present paper prefer to include this condition in the class of admissible deformations, so as to obtain a finer gradation of deformations. We also emphasize that the aforementioned regularity properties of the inverse homeomorphism (including the case q = n−1) can be obtained using the technique of the theory of bounded operators on Sobolev spaces.
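The integral identity quoted above reduces to a single change of variables; we record the one-line computation for the reader's convenience, assuming, as in the cited results, that φ is a Sobolev homeomorphism with J(x,φ) > 0 a.e., so that Dφ^{-1}(φ(x)) = (Dφ(x))^{-1} = adj Dφ(x)/J(x,φ) a.e. and the change-of-variables formula applies:

```latex
\int_{\Omega'} |D\varphi^{-1}(y)|^{n}\,dy
  = \int_{\Omega} \left|\frac{\operatorname{adj} D\varphi(x)}{J(x,\varphi)}\right|^{n} J(x,\varphi)\,dx
  = \int_{\Omega} \frac{|\operatorname{adj} D\varphi(x)|^{n}}{J(x,\varphi)^{\,n-1}}\,dx
  = \int_{\Omega} K_{I}(x,\varphi)\,dx .
```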
Putting p = σ(n−1)/(σ−1), p' = σ, q = n−1, q' = ∞, ϱ = σ in Theorem <ref> <cit.>, we derive the aforementioned result of <cit.>. Taking p = p' = ϱ = n, q = n−1, q' = ∞ in the same theorem, one obtains the regularity of the inverse mapping from <cit.>.

Since in this article we deal with W^1_n-mappings with finite distortion, we reduce the coercivity conditions on the stored-energy function to
W(x,F) ≥ α |F|^n + g(x).
For given constants p, q ≥ 1 and M > 0, and the total energy I defined by (<ref>), we define the class of admissible deformations
𝒜(p,q,M) = { φ: Ω → Ω' is a homeomorphism with finite distortion, φ ∈ W^1_1(Ω), I(φ) < ∞, J(x,φ) ≥ 0 a.e. in Ω, K_O(·,φ) ∈ L_p(Ω), ‖K_I(·,φ) | L_q(Ω)‖ ≤ M },
where K_O(x,φ) and K_I(x,φ) are the outer and the inner distortion coefficients. We prove an existence theorem in the following formulation (see the precise requirements in Section <ref>).

[Theorem <ref> below] Let Ω, Ω' ⊂ ℝ^n be bounded domains with Lipschitz boundaries. Given a polyconvex function W(x,F) satisfying the coercivity inequality (<ref>) and a nonempty set 𝒜(n−1,s,M) with M > 0, s > 1, there exists at least one homeomorphic mapping φ_0 ∈ 𝒜(n−1,s,M) such that
I(φ_0) = inf{ I(φ): φ ∈ 𝒜(n−1,s,M) }.

The existence theorem is also obtained for classes of mappings with prescribed boundary values and lying in the same homotopy class as a given mapping, and in some cases it covers s = 1 (Section <ref>). Note that the class of admissible deformations of <cit.> is related to the classes considered in the present paper (see Remark <ref>). For the same reason, the elasticity result of <cit.> can be derived from the result of the present paper. Indeed, the integrability of the distortion coefficient follows from Hölder's inequality and (<ref>) with s = σr/(rn + σ − n), where σ = q(1+m)/(q+m) (see Section <ref>).

Some important properties of mappings of these classes can be found in <cit.>. Note also that the property of a mapping to be sense-preserving in the topological sense follows from the property that the required deformation is a mapping with bounded (n,q)-distortion if q > n−1 <cit.>.

Additionally, there is a different approach to injectivity, proposed by P. Ciarlet and J. Nečas in <cit.>. This approach rests upon the additional injectivity condition
∫_Ω J(x,φ) dx ≤ |φ(Ω)|
on the admissible deformations, where Ω ⊂ ℝ^n is a bounded open set with C¹-smooth boundary, φ ∈ W^1_p(Ω), p > n, and J(x,φ) > 0 a.e. in Ω. Under these assumptions, the minimization problem for the energy functional can be restricted to a.e. injective deformations.

In the three-dimensional case, relation (<ref>) under the weaker hypothesis p > n−1 was studied in <cit.>. In this case φ may no longer be continuous, and the inverse mapping φ^{-1} has only the regularity BV_loc(φ(Ω), ℝ^n). Local invertibility properties of mappings φ ∈ W^1_p(Ω), p ≥ n, under the condition J(x,φ) > 0 a.e. can be found in <cit.>. The case p > n−1 is considered in the recent paper <cit.>, whose approach uses the topological degree as an essential tool and is based on some ideas of <cit.>. Some other studies of local and global invertibility in the context of elasticity can be found in <cit.>. Also, see <cit.> for a general review of research in elasticity theory.
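Before outlining the paper, we sketch the Hölder computation behind the exponent s = σr/(rn + σ − n) in the remark above; this is our reading of the step, assuming σ > n (which holds in the cited setting, where σ > 3 = n). With A := |adj Dφ|^σ / J^{σ−1} ∈ L_1(Ω), by Ball's condition, and J^r ∈ L_1(Ω), by the coercivity inequality, write K_I = A^{n/σ} J^{(σ−n)/σ} and apply Hölder's inequality with exponents σ/(ns) and σ/(σ−ns):

```latex
\int_\Omega K_I(x,\varphi)^{s}\,dx
  = \int_\Omega A^{\,ns/\sigma}\, J^{\,s(\sigma-n)/\sigma}\,dx
  \le \Bigl(\int_\Omega A\,dx\Bigr)^{\!ns/\sigma}
      \Bigl(\int_\Omega J^{\frac{s(\sigma-n)}{\sigma-ns}}\,dx\Bigr)^{\!\frac{\sigma-ns}{\sigma}} .
```

Choosing s so that s(σ−n)/(σ−ns) = r makes the last factor finite, and solving this relation for s gives exactly s = σr/(rn + σ − n).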
We will now give an outline of the paper. The first section contains general auxiliary facts and some facts about mappings with finite distortion. The second section is devoted to the injectivity-almost-everywhere property (Theorem <ref>). This property follows from the joint boundedness of the pullback operators defined by a sequence of homeomorphisms φ_k and the uniform convergence of the inverse homeomorphisms ψ_k (Lemma <ref>). Moreover, as a consequence, we obtain the strict inequality J(x,φ_0) > 0 a.e. (Lemma <ref>). The third section is dedicated to the existence theorem. In the fourth section we give two examples illustrating the advantages of our method. The Appendix contains a discussion of the geometry of domains which does not bear directly on the subject of this paper but is of independent interest.

Some ideas of this article were announced in the note <cit.>.

§ MAPPINGS WITH FINITE DISTORTION

Mappings with finite distortion are a natural generalization of mappings with bounded distortion. The reader not familiar with mappings with bounded distortion may consult <cit.>; for a closer look at the theory, see the monographs <cit.>. In this section we present some notions and statements needed in what follows.

On a bounded domain Ω ⊂ ℝ^n, i.e. a nonempty, connected, open set, we define in the standard way (see <cit.>, for instance) the space C_0^∞(Ω) of smooth functions with compact support, the Lebesgue spaces L_p(Ω) and L_{p,loc}(Ω) of integrable functions, and the Sobolev spaces W^1_p(Ω) and W^1_{p,loc}(Ω), 1 ≤ p ≤ ∞. A mapping f ∈ L_{1,loc}(Ω) belongs to the homogeneous Sobolev class L^1_p(Ω), p ≥ 1, if it has weak first-order derivatives and its differential Df(x) belongs to L_p(Ω).

We say that a bounded domain Ω ⊂ ℝ^n has a Lipschitz boundary if for each x ∈ ∂Ω there exists a neighborhood U such that the set Ω ∩ U is represented by the inequality ξ_n < f(ξ_1, …, ξ_{n−1}) in some Cartesian coordinate system ξ, with a Lipschitz continuous function f: ℝ^{n−1} → ℝ. Domains with Lipschitz boundary are sometimes said to have the strong Lipschitz property, whereas Lipschitz domains are defined through quasi-isometric mappings; see the detailed discussion in Appendix <ref>.

Recall that, for topological spaces X and Y, a continuous mapping f: X → Y is discrete if f^{-1}(y) is a discrete set for all y ∈ Y, and f is open if it takes open sets onto open sets.

Given an open set Ω ⊂ ℝ^n, a mapping f: Ω → ℝ^n of class W^1_{1,loc}(Ω) is called a mapping with finite distortion whenever |Df(x)|^n ≤ K(x)|J(x,f)| for almost all x ∈ Ω, where 1 ≤ K(x) < ∞ a.e. in Ω [Some authors include the condition J(x,f) ≥ 0 in Definition <ref>. We do not use the nonnegativity of the Jacobian, as it is unnecessary in the context of the theory of composition operators; see the details in <cit.>.]. In other words, the finite distortion condition amounts to the vanishing of the partial derivatives of f ∈ W^1_{1,loc}(Ω) almost everywhere on the zero set of the Jacobian, Z = {x ∈ Ω: J(x,f) = 0}. Similarly, the finite codistortion condition means that adj Df(x) = 0 a.e. on the set Z. If K ∈ L_∞(Ω), the mapping f is called a mapping with bounded distortion (or a quasiregular mapping).

For a mapping with finite distortion with J(x,f) ≥ 0 a.e., the functions
K_O(x,f) = |Df(x)|^n / J(x,f) and K_I(x,f) = |adj Df(x)|^n / J(x,f)^{n−1}
when 0 < J(x,f) < ∞, and K_O(x,f) = K_I(x,f) = 1 otherwise, are called the outer and the inner distortion coefficients of f at the point x. It is easy to see that
K_I^{1/(n−1)}(x,f) ≤ K_O(x,f) ≤ K_I^{n−1}(x,f) for a.e. x ∈ Ω.

In 1967 Yu. Reshetnyak proved strong topological properties of mappings with bounded distortion: continuity, openness, and discreteness <cit.>. Theorem 2.3 of <cit.> shows that a W^1_{n,loc}-mapping with finite distortion and nonnegative Jacobian, J(x,f) ≥ 0 a.e., is continuous.
In recent years much research has been devoted to finding the sharp assumptions for these topological properties in the class of mappings with finite distortion; see, for example, <cit.>.

Let f: Ω → ℝ^n, n ≥ 2, be a non-constant mapping with finite distortion satisfying J(x,f) ≥ 0 a.e., f ∈ W^1_{n,loc}(Ω), K_O(·,f) ∈ L_{n−1,loc}(Ω) and K_I(·,f) ∈ L_{s,loc}(Ω) for some s > 1. Then f is discrete and open.

On the other hand, mappings with finite distortion are closely related to the boundedness of composition operators on Sobolev spaces. Recall that a measurable mapping φ: Ω → Ω' induces a bounded composition operator φ^*: L^1_p(Ω') → L^1_q(Ω), 1 ≤ q ≤ p < ∞, by the composition rule if the operator φ^*: L^1_p(Ω') ∩ Lip_loc(Ω') → L^1_q(Ω) with φ^*(f) = f ∘ φ, f ∈ L^1_p(Ω') ∩ Lip_loc(Ω'), is bounded.

If a measurable mapping φ induces a bounded composition operator φ^*: L^1_p(Ω') → L^1_q(Ω), 1 ≤ q ≤ p ≤ ∞, then φ has finite distortion.

We now consider a generalization of the inner and outer distortion functions which is better suited to dealing with composition and pullback operators. Following <cit.>, for a mapping f: Ω → Ω' of class W^1_{1,loc}(Ω) define the (outer) distortion operator function
K_{f,p}(x) = |Df(x)| / |J(x,f)|^{1/p} for x ∈ Ω ∖ Z, and 0 otherwise,
and the (inner) distortion operator function
𝒦_{f,p}(x) = |adj Df(x)| / |J(x,f)|^{(n−1)/p} for x ∈ Ω ∖ Z, and 0 otherwise,
where Z is the zero set of the Jacobian J(x,f). Note that K_O(x,f) = K_{f,n}^n(x) and K_I(x,f) = 𝒦_{f,n}^n(x) if x ∈ Ω ∖ Z. Hence K_O(·,f) ∈ L_{n−1}(Ω) results in K_{f,n}(·) ∈ L_{n(n−1)}(Ω), and ‖K_I(·,f) | L_s(Ω)‖ ≤ M implies ‖𝒦_{f,n}(·) | L_ϱ(Ω)‖ ≤ M^{1/n} for ϱ = ns.

The following theorem shows the regularity properties which ensure that the direct and the inverse homeomorphisms belong to the corresponding Sobolev classes.

Let φ: Ω → Ω' be a homeomorphism with the following properties:
* φ ∈ W^1_{q,loc}(Ω), n−1 ≤ q ≤ ∞;
* the mapping φ has finite codistortion;
* 𝒦_{φ,p} ∈ L_ϱ(Ω), where 1/ϱ = (n−1)/q − (n−1)/p, n−1 ≤ q ≤ p ≤ ∞ (ϱ = ∞ for q = p).
Then the inverse homeomorphism φ^{-1} has the following properties:
* φ^{-1} ∈ W^1_{p',loc}(Ω'), where p' = p/(p−n+1) (p' = 1 for p = ∞);
* φ^{-1} has finite distortion (J(y,φ^{-1}) > 0 a.e. for n ≤ q);
* K_{φ^{-1},q'} ∈ L_ϱ(Ω'), where q' = q/(q−n+1) (q' = ∞ for q = n−1).
Moreover, ‖K_{φ^{-1},q'}(·) | L_ϱ(Ω')‖ = ‖𝒦_{φ,p}(·) | L_ϱ(Ω)‖.

If we replace condition 2 by "the mapping φ has finite distortion" and condition 3 by the corresponding condition on the outer distortion operator function, "K_{φ,p} ∈ L_ϰ(Ω), where 1/ϰ = 1/q − 1/p, n−1 ≤ q ≤ p ≤ ∞ (ϰ = ∞ for q = p)", then the conclusion of the theorem remains valid with the estimate ‖K_{φ^{-1},q'}(·) | L_ϱ(Ω')‖ ≤ ‖K_{φ,p}(·) | L_ϰ(Ω)‖^{n−1} (see <cit.>).

A homeomorphism φ: Ω → Ω' induces a bounded composition operator φ^*: L^1_p(Ω') → L^1_q(Ω), 1 ≤ q ≤ p < ∞, where φ^*(f) = f ∘ φ for f ∈ L^1_p(Ω'), if and only if [Necessity is proved in <cit.> (see also the earlier work <cit.>), and sufficiency in Theorem 6 of <cit.>.] the following conditions hold:
* φ ∈ W^1_{q,loc}(Ω);
* the mapping φ has finite distortion;
* K_{φ,p}(·) ∈ L_ϰ(Ω), where 1/ϰ = 1/q − 1/p, 1 ≤ q ≤ p < ∞ (and ϰ = ∞ for q = p).
Moreover, ‖φ^*‖ ≤ ‖K_{φ,p}(·) | L_ϰ(Ω)‖ ≤ C‖φ^*‖ for some constant C.

Assume that a homeomorphism φ: Ω → Ω' induces a bounded composition operator φ^*: L^1_p(Ω') → L^1_q(Ω) for n−1 ≤ q ≤ p ≤ ∞, where φ^*(f) = f ∘ φ for f ∈ L^1_p(Ω') (and in the case p = ∞ the mapping φ has finite codistortion). Then the inverse mapping φ^{-1} induces a bounded composition operator φ^{-1*}: L^1_{q'}(Ω) → L^1_{p'}(Ω'), where q' = q/(q−n+1) and p' = p/(p−n+1), and has finite distortion.
Moreover, ‖φ^{-1*}‖ ≤ ‖K_{φ^{-1},q'}(·) | L_ρ(Ω')‖ ≤ ‖K_{φ,p}(·) | L_ϰ(Ω)‖^{n−1}, where 1/ρ = 1/p' − 1/q'.

Recall that a differential (n−1)-form ω on Ω' is defined as
ω(y) = Σ_{k=1}^n a_k(y) dy_1 ∧ dy_2 ∧ … ∧ \widehat{dy_k} ∧ … ∧ dy_n,
where the hat means that the factor dy_k is omitted. A form ω with measurable coefficients a_k belongs to ℒ_p(Ω', Λ^{n−1}) if
‖ω | ℒ_p(Ω', Λ^{n−1})‖ = ( ∫_{Ω'} (Σ_{k=1}^n a_k²(y))^{p/2} dy )^{1/p} < ∞.
Let f = (f_1, …, f_n): Ω → Ω' belong to W^1_{q(n−1),loc}(Ω) and let ω be a smooth (n−1)-form. Then the pullback f^*ω can be written as
f^*ω(x) = Σ_{k=1}^n a_k(f(x)) df_1 ∧ … ∧ \widehat{df_k} ∧ … ∧ df_n.
For any ω ∈ ℒ_p(Ω', Λ^{n−1}) the pullback operator f̃^*ω(x) is defined by continuity <cit.>:
f̃^*ω(x) = f^*ω(x) if x ∈ Ω ∖ (Z ∪ Σ), and 0 otherwise.

As a consequence of <cit.> we obtain:

A homeomorphism f: Ω → Ω' induces a bounded pullback operator f̃^*: ℒ_p(Ω', Λ^{n−1}) → ℒ_q(Ω, Λ^{n−1}), 1 ≤ q ≤ p ≤ ∞, if and only if:
* f: Ω → Ω' has finite codistortion;
* 𝒦_{f,p(n−1)} ∈ L_ϰ(Ω), where 1/ϰ = 1/q − 1/p.
Moreover, the norm of the operator f̃^* is comparable with ‖𝒦_{f,p(n−1)} | L_ϰ(Ω)‖.

Assume that a homeomorphism φ: Ω → Ω' belongs to W^1_{n−1,loc}(Ω) and induces a bounded pullback operator φ̃^*: ℒ_p(Ω', Λ^{n−1}) → ℒ_q(Ω, Λ^{n−1}) for 1 ≤ q ≤ p ≤ ∞. Then the inverse mapping φ^{-1} ∈ W^1_{1,loc}(Ω') induces a bounded pullback operator φ^{-1*}: ℒ_{q'}(Ω, Λ^1) → ℒ_{p'}(Ω', Λ^1), where q' = q/(q−1) and p' = p/(p−1). Moreover, the norm of the operator φ^{-1*} is comparable with the norm of φ̃^*.

§ ALMOST-EVERYWHERE INJECTIVITY

It is well known that the limit of homeomorphisms need not be a homeomorphism, or even an injective mapping. This is illustrated by the simple example of the mappings φ_k(x) = |x|^{k−1}x on the punctured unit ball: the limit mapping is φ_0(x) ≡ 0, and injectivity is lost.

Recall that a mapping φ: Ω → ℝ^n is called injective almost everywhere whenever there exists a negligible set S outside which φ is injective.

The sequence of homeomorphisms φ_k = (φ_{k,1}, φ_{k,2}): [−1,1]² → [−1,1]² of class W^1_2([−1,1]²) with integrable distortion, defined by
φ_{k,1}(x_1, x_2) = 2x_1 ξ_k(x_2) if x_1 ∈ [0, 1/2], and φ_{k,1}(x_1, x_2) = 2(1 − ξ_k(x_2))x_1 − (1 − 2ξ_k(x_2)) if x_1 ∈ (1/2, 1],
with φ_{k,1}(−x_1, x_2) = −φ_{k,1}(x_1, x_2), φ_{k,1}(x_1, −x_2) = φ_{k,1}(x_1, x_2), and φ_{k,2}(x_1, x_2) = x_2, where ξ_k(t) = (1 + (k−1)t)/(2k), shows that injectivity almost everywhere can be lost as well.

Let Ω, Ω' ⊂ ℝ^n be bounded domains with Lipschitz boundaries. Consider a sequence of homeomorphisms φ_k mapping Ω onto Ω', with φ_k ∈ W^1_{n−1,loc}(Ω) and J(x,φ_k) ≥ 0 a.e., such that:
* φ_k → φ_0 weakly in W^1_{n−1,loc}(Ω) with J(x,φ_0) ≥ 0 a.e. in Ω;
* every mapping φ_k induces a bounded pullback operator φ̃_k^*: ℒ_{n/(n−1)}(Ω', Λ^{n−1}) → ℒ_{r/(n−1)}(Ω, Λ^{n−1}) for some n−1 ≤ r ≤ n;
* the norms of the operators φ̃_k^* are totally bounded.
Then the mapping φ_0 is injective almost everywhere.

By Theorem <ref>, conditions 2 and 3 of Theorem <ref> can be replaced by the total boundedness of the inner distortion operator functions 𝒦_{φ_k,n} in L_ϱ with ϱ = rn/((n−1)(n−r)) ≥ n.

Let Ω, Ω' ⊂ ℝ^n be bounded domains with Lipschitz boundaries.
Consider a sequence of homeomorphisms of finite distortion φ_k mapping Ω onto Ω', with φ_k ∈ W^1_{n−1,loc}(Ω) and J(x,φ_k) ≥ 0 a.e., such that:
* φ_k → φ_0 weakly in W^1_{n−1,loc}(Ω) with J(x,φ_0) ≥ 0 a.e. in Ω;
* the norms ‖𝒦_{φ_k,n}(·) | L_ϱ(Ω)‖ of the inner distortion operator functions are totally bounded for some ϱ ≥ n.
Then the mapping φ_0 is injective almost everywhere.

Taking into account ‖𝒦_{φ,n}(·) | L_{ns}(Ω)‖ = ‖K_I(·,φ) | L_s(Ω)‖^{1/n} by Remark <ref>, we derive the next assertion.

Let Ω, Ω' ⊂ ℝ^n be bounded domains with Lipschitz boundaries. Consider a sequence of homeomorphisms of finite distortion φ_k mapping Ω onto Ω', with φ_k ∈ W^1_{n−1,loc}(Ω) and J(x,φ_k) ≥ 0 a.e., such that:
* φ_k → φ_0 weakly in W^1_{n−1,loc}(Ω) with J(x,φ_0) ≥ 0 a.e. in Ω;
* the norms ‖K_I(·,φ_k) | L_{ns}(Ω)‖ of the inner distortion functions are totally bounded for some s ≥ 1 [The exponent r from Theorem <ref> can then be expressed as r = n(n−1)s/(ns + 1 − s) ≥ n−1.].
Then the mapping φ_0 is injective almost everywhere.

As will become clear from the subsequent arguments, the theorem remains valid provided that the composition operators φ_k^*: L^1_n(Ω') → L^1_ρ(Ω), 1 ≤ ρ < n, and ψ_k^*: L^1_{r'}(Ω) → L^1_n(Ω'), n ≤ r' ≤ ∞, are bounded. We combine both conditions into the boundedness of the pullback operators φ̃_k^*: ℒ_{n/(n−1)}(Ω', Λ^{n−1}) → ℒ_{r/(n−1)}(Ω, Λ^{n−1}), n−1 ≤ r ≤ n. Indeed, if a homeomorphism φ induces a bounded pullback operator φ̃^*: ℒ_{n/(n−1)}(Ω', Λ^{n−1}) → ℒ_{r/(n−1)}(Ω, Λ^{n−1}), then by Theorem <ref> the inverse mapping ψ = φ^{-1} has finite distortion and induces a bounded pullback operator ψ̃^*: ℒ_{r'}(Ω, Λ^1) → ℒ_n(Ω', Λ^1) for r' = r/(r−n+1) ≥ n. Moreover, ‖ψ̃^*‖ ∼ ‖φ̃^*‖ [a ∼ b means that there exist constants C_1, C_2 > 0 such that C_1 a ≤ b ≤ C_2 a.]. Since this is the case of 1-forms, the latter is the same as the boundedness of the composition operator ψ^*: L^1_{r'}(Ω) → L^1_n(Ω'), and ‖ψ̃^*‖ = ‖ψ^*‖. Further, in accordance with Theorem <ref>, the inverse homeomorphism φ = ψ^{-1} has finite distortion and induces a bounded composition operator φ^*: L^1_n(Ω') → L^1_ρ(Ω) for ρ = r/((n−1)² − r(n−2)) ≥ 1, with ‖φ^*‖ ∼ ‖ψ^*‖^{n−1} ∼ ‖φ̃^*‖^{n−1}.

With this background, the first step is to verify that the limit mapping φ_0 induces a bounded composition operator φ_0^*: L^1_n(Ω') ∩ Lip_loc(Ω') → L^1_ρ(Ω).

If the conditions of Theorem <ref> are fulfilled, then the mapping φ_0 induces a bounded composition operator φ_0^*: L^1_n(Ω') ∩ Lip_loc(Ω') → L^1_ρ(Ω), ρ = r/((n−1)² − r(n−2)) ≥ 1.

Consider u ∈ L^1_n(Ω') ∩ Lip_loc(Ω'). Since ‖φ_k^*‖ ≤ C by Remark <ref>, the sequence w_k = φ_k^* u = u ∘ φ_k is bounded in L^1_ρ(Ω). Using the Poincaré inequality and the compactness of the embedding of Sobolev spaces (see <cit.>, for instance), we obtain a subsequence with w_k → w_0 in L_t(Ω), where 1 < t < nρ/(n−ρ). From this sequence, in turn, we can extract a subsequence which converges almost everywhere in Ω. The same argument ensures that φ_k → φ_0 a.e. Then w_0(x) = u ∘ φ_0(x) for almost all x ∈ Ω. On the other hand, since w_k converges weakly to w_0 in L^1_ρ(Ω), we have
‖u ∘ φ_0 | L^1_ρ(Ω)‖ = ‖w_0 | L^1_ρ(Ω)‖ ≤ liminf_{k→∞} ‖w_k | L^1_ρ(Ω)‖ = liminf_{k→∞} ‖φ_k^*(u) | L^1_ρ(Ω)‖ ≤ liminf_{k→∞} ‖φ_k^*‖ · ‖u | L^1_n(Ω')‖ ≤ C ‖u | L^1_n(Ω')‖.
Thus, φ_0 induces a bounded composition operator φ_0^*: L^1_n(Ω') ∩ Lip_loc(Ω') → L^1_ρ(Ω) and, moreover, ‖φ_0^*‖ ≤ C.

Similarly, we can obtain the boundedness of the pullback operator φ̃_0^*: ℒ_{n/(n−1)}(Ω', Λ^{n−1}) → ℒ_{r/(n−1)}(Ω, Λ^{n−1}).

If the conditions of Theorem <ref> are fulfilled, then the mapping φ_0 induces a bounded pullback operator φ̃_0^*: ℒ_{n/(n−1)}(Ω', Λ^{n−1}) → ℒ_{r/(n−1)}(Ω, Λ^{n−1}).

We now examine a regularity property of a sequence {φ_k}_{k∈ℕ} satisfying the requirements of Theorem <ref>.

Let the conditions of Theorem <ref> be fulfilled, and define a sequence of continuous mappings ψ_k: Ω' → Ω by ψ_k = φ_k^{-1}. Then there exist a subsequence {ψ_{k_l}}_{l∈ℕ} and a continuous mapping ψ_0: Ω' → Ω such that ψ_{k_l} → ψ_0 locally uniformly.

Notice that the sequence ψ_k is uniformly bounded since the domain Ω is bounded. On the other hand, since ψ_k ∈ W^1_{n,loc}(Ω') (by Remark <ref> and Theorem <ref>), we obtain the estimate (a corollary of <cit.>)
osc(ψ_k, S(y',r)) ≤ L (ln(r_0/r))^{−1/n} ( ∫_{B(y',r_0)} |Dψ_k(y)|^n dy )^{1/n},
where S(y',r) is the sphere of radius r < r_0/2 centered at y' and B(y',r_0) ⊂ Ω' is the ball of radius r_0 centered at y'.
This implies the equicontinuity of the family of functions {ψ_k}_{k∈ℕ} on every compact part of Ω'. Hölder's inequality, Theorem <ref> and Theorem <ref> yield

∫_{B(y',r_0)} |Dψ_k(y)|^n dy ≤ ∫_{B(y',r_0)} (|Dψ_k(y)|^n / J(y,ψ_k)^{n/r'}) J(y,ψ_k)^{n/r'} dy ≤ ( ∫_{Ω'} ( |Dψ_k(y)|^n / J(y,ψ_k)^{n/r'} )^{ϱ'/n} dy )^{n/ϱ'} ( ∫_{B(y',r_0)} J(y,ψ_k)^{(n/r')·(ϱ'/(ϱ'−n))} dy )^{(ϱ'−n)/ϱ'} ≤ ‖K_{ψ_k,r'}(·) | L_{ϱ'}(Ω')‖^n |ψ_k(B(y',r_0))|^{(ϱ'−n)/ϱ'} ≤ C̃^n |Ω|^{(ϱ'−n)/ϱ'},

where r' = r/(r−n+1), 1/ϱ' = 1/n − 1/r', and we used that (n/r')·(ϱ'/(ϱ'−n)) = 1. Thus, the family {ψ_k}_{k∈ℕ} is equicontinuous and uniformly bounded. By the Arzelà–Ascoli theorem there exists a subsequence {ψ_{k_l}} converging uniformly to a mapping ψ_0 as k_l → ∞.

Now we verify that the set of points x ∈ Ω with φ(x) ∈ ∂Ω' is negligible. The proof of this statement is based on some properties of an additive function Φ defined on open bounded sets. For the proof of Lemma <ref> below we modify the method of proof of <cit.>.

Given a bounded open set A' ⊂ ℝ^n, define the class of functions ∘L^1_p(A') as the closure of the subspace C_0^∞(A') in the seminorm of L^1_p(A'). In general, a function f ∈ ∘L^1_p(A') is defined only on the set A', but, extending it by zero, we may assume that f ∈ L^1_p(ℝ^n).

Let us recall that a mapping Φ, defined on open subsets of ℝ^n and taking nonnegative finite values, is called monotone if Φ(V) ≤ Φ(U) for V ⊂ U, and a countably additive set function (see <cit.>) if for any countable collection U_i ⊂ U ⊂ ℝ^n, i = 1, 2, …, of pairwise disjoint open sets the equality

∑_{i=1}^∞ Φ(U_i) = Φ(⋃_{i=1}^∞ U_i)

takes place.

Assume that the mapping φ: Ω → Ω' induces a bounded composition operator φ^*: L^1_p(Ω') ∩ (Ω') → L^1_q(Ω), 1 ≤ q < p ≤ ∞. Then

Φ(A') = sup_{f ∈ ∘L^1_p(A') ∩ (A')} ( ‖φ^* f | L^1_q(Ω)‖ / ‖f | L^1_p(A' ∩ Ω')‖ )^σ, with σ = pq/(p−q) for p < ∞ and σ = q for p = ∞,

is a bounded monotone countably additive function defined on the open bounded sets A' with A' ∩ Ω' ≠ ∅. If f ∈ ∘L^1_p(A') ∩ (A') and A' ⊄ Ω', we consider the composition φ^* f where it is well defined.

It is obvious that Φ(A'_1) ≤ Φ(A'_2) whenever A'_1 ⊂ A'_2. Take disjoint sets {A'_i}_{i∈ℕ} in Ω' and put A'_0 = ⋃_{i=1}^∞ A'_i. Consider functions f_i ∈ ∘L^1_p(A'_i) ∩ (A'_i) such that the conditions

‖φ^* f_i | L^1_q(Ω)‖ ≥ (Φ(A'_i)(1 − ε/2^i))^{1/σ} ‖f_i | ∘L^1_p(A'_i)‖

and

‖f_i | ∘L^1_p(A'_i)‖^p = Φ(A'_i)(1 − ε/2^i) for p < ∞ (‖f_i | ∘L^1_p(A'_i)‖^p = 1 for p = ∞)

hold simultaneously, where 0 < ε < 1. Putting f_N = ∑_{i=1}^N f_i ∈ L^1_p(Ω') ∩ (Ω'), and applying Hölder's inequality in the case of equality [let us recall that for a_i, b_i ≥ 0, 1/k + 1/k' = 1, we have |∑ a_i b_i| = (∑ a_i^k)^{1/k} (∑ b_i^{k'})^{1/k'} if and only if a_i^k and b_i^{k'} are proportional], we obtain

‖φ^* f_N | L^1_q(Ω)‖ ≥ ( ∑_{i=1}^N (Φ(A'_i)(1 − ε/2^i))^{q/σ} ‖f_i | ∘L^1_p(A'_i)‖^q )^{1/q} = ( ∑_{i=1}^N Φ(A'_i)(1 − ε/2^i) )^{1/σ} ‖f_N | ∘L^1_p(⋃_{i=1}^N A'_i)‖ ≥ ( ∑_{i=1}^N Φ(A'_i) − εΦ(A'_0) )^{1/σ} ‖f_N | ∘L^1_p(⋃_{i=1}^N A'_i)‖,

since the sets A_i, on which the functions ∇φ^* f_i are nonvanishing, are disjoint. This implies that

Φ(A'_0)^{1/σ} ≥ sup ‖φ^* f_N | L^1_q(Ω)‖ / ‖f_N | ∘L^1_p(⋃_{i=1}^N A'_i)‖ ≥ ( ∑_{i=1}^N Φ(A'_i) − εΦ(A'_0) )^{1/σ},

where we take the sharp upper bound over all functions f_N ∈ ∘L^1_p(⋃_{i=1}^N A'_i) ∩ (⋃_{i=1}^N A'_i), f_N = ∑_{i=1}^N f_i, with the f_i of the form indicated above. Since N and ε are arbitrary,

∑_{i=1}^∞ Φ(A'_i) ≤ Φ(⋃_{i=1}^∞ A'_i).

The reverse inequality can be verified directly from the definition of Φ.

For estimating Φ through the multiplicity of a covering, we need the following corollary of the Besicovitch theorem (see <cit.> for instance).
For every open set U ⊂ ℝ^n with U ≠ ℝ^n, there exists a countable family ℬ = {B_j} of balls such that:
* ⋃_j B_j = U;
* if B_j = B_j(x_j, r_j) ∈ ℬ then dist(x_j, ∂U) = 12 r_j;
* the families ℬ = {B_j} and 2ℬ = {2B_j}, where the symbol 2B stands for the ball of doubled radius centered at the same point, constitute a covering of U of finite multiplicity;
* if the balls 2B_j = B_j(x_j, 2r_j), j = 1, 2, intersect, then (5/7) r_1 ≤ r_2 ≤ (7/5) r_1;
* we can subdivide the family {2B_j} into finitely many tuples so that in each tuple the balls are disjoint, and the number of tuples depends only on the dimension n.

Take a monotone countably additive function Φ defined on the bounded open sets A' with A' ∩ Ω' ≠ ∅. For every set A' there exists a sequence of balls {B_j}_{j∈ℕ} such that:
* the families {B_j}_{j∈ℕ} and {2B_j}_{j∈ℕ} constitute a covering of U of finite multiplicity;
* ∑_{j=1}^∞ Φ(2B_j) ≤ ζ_n Φ(U), where the constant ζ_n depends only on the dimension n.

In accordance with Lemma <ref>, construct two sequences {B_j}_{j∈ℕ} and {2B_j}_{j∈ℕ} of balls and subdivide the latter into ζ_n subfamilies {2B_{1j}}_{j∈ℕ}, …, {2B_{ζ_n j}}_{j∈ℕ} so that in each tuple the balls are disjoint: 2B_{ki} ∩ 2B_{kj} = ∅ for i ≠ j and k = 1, …, ζ_n. Consequently,

∑_{j=1}^∞ Φ(2B_j) = ∑_{k=1}^{ζ_n} ∑_{j=1}^∞ Φ(2B_{kj}) ≤ ∑_{k=1}^{ζ_n} Φ(U) = ζ_n Φ(U).

Mappings inducing a bounded composition operator are known to satisfy the Luzin 𝒩^{-1}-property <cit.>.

Take two open sets Ω and Ω' in ℝ^n with n ≥ 1. If a measurable mapping φ: Ω → Ω' induces a bounded composition operator φ^*: L^1_p(Ω') ∩ C^∞(Ω') → L^1_q(Ω), 1 ≤ q ≤ p ≤ n, then φ has the Luzin 𝒩^{-1}-property, i.e. |φ^{-1}(A)| = 0 whenever |A| = 0, A ⊂ Ω'.

Theorem 4 of <cit.> is stated for a mapping φ: Ω → Ω' generating a bounded composition operator φ^*: L^1_p(Ω') → L^1_q(Ω) with 1 ≤ q ≤ p ≤ n. Observe that only smooth test functions are used in its proof, which therefore also justifies Theorem <ref>. Here we obtain the following generalization of Theorem <ref>.

If a measurable mapping φ: Ω → Ω' induces a bounded composition operator φ^*: L^1_p(Ω') ∩ (Ω') → L^1_q(Ω), 1 ≤ q ≤ p ≤ n, then |φ^{-1}(E)| = 0 whenever |E| = 0, E ⊂ Ω'.

If E ⊂ Ω' then the statement of the theorem follows by Theorem <ref>. Consider the cut-off function η ∈ C_0^∞(ℝ^n) equal to 1 on B(0,1) and vanishing outside B(0,2). By Lemma <ref> the function f(y) = η((y − y_0)/r) satisfies

‖φ^* f | L^1_q(Ω)‖ ≤ C_1 Φ(2B)^{1/σ} |B|^{1/p − 1/n},

where B ∩ Ω' ≠ ∅ (put Φ(2B)^{1/σ} = ‖φ^*‖ for any ball B if p = q). Take a set E ⊂ ∂Ω' with |E| = 0. Since φ is a mapping with finite distortion <cit.>, φ^{-1}(E) ≠ Ω (otherwise J(x,φ) = 0 and, consequently, Dφ(x) = 0, that is, φ is a constant mapping). Hence, there is a cube Q ⊂ Ω such that 2Q ⊂ Ω and |Q ∖ φ^{-1}(E)| > 0 (here 2Q is the cube with the same center as Q and the edges stretched by a factor of two compared to Q). Since φ is a measurable mapping, by Luzin's theorem there is a compact set T ⊂ Q ∖ φ^{-1}(E) of positive measure such that φ: T → Ω' is continuous. Then the image φ(T) ⊂ Ω' is compact and φ(T) ∩ E = ∅. Consider an open set U ⊃ E with φ(T) ∩ U = ∅ and U ∩ Ω' ≠ ∅. Choose a tuple {B(y_i, r_i)}_{i∈ℕ} of balls in accordance with Lemma <ref>: {B(y_i, r_i)}_{i∈ℕ} and {B(y_i, 2r_i)}_{i∈ℕ} are coverings of U, and the multiplicity of the covering {B(y_i, 2r_i)}_{i∈ℕ} is finite (B(y_i, 2r_i) ⊂ U for all i ∈ ℕ). Then the function f_i associated to the ball B(y_i, r_i) enjoys φ^* f_i = 1 on φ^{-1}(B(y_i, r_i)) and φ^* f_i = 0 outside φ^{-1}(B(y_i, 2r_i)); in particular, φ^* f_i = 0 on T. In addition, we have the estimate

‖φ^* f_i | L^1_q(2Q)‖ ≤ ‖φ^* f_i | L^1_q(Ω)‖ ≤ C_1 Φ(B(y_i, 2r_i))^{1/σ} |B(y_i, r_i)|^{1/p − 1/n}.
By the Poincaré inequality (see <cit.> for instance), for every function g ∈ W^1_{q,loc}(Q) with q < n vanishing on T, we have

( ∫_Q |g|^{q^*} dx )^{1/q^*} ≤ C_2 l(Q)^{n/q^*} ( ∫_{2Q} |∇g|^q dx )^{1/q},

where q^* = nq/(n−q) and l(Q) is the edge length of Q. Applying the Poincaré inequality to the function φ^* f_i and using the last two estimates, we obtain

|φ^{-1}(B(y_i, r_i)) ∩ Q|^{1/q − 1/n} ≤ C_3 Φ(B(y_i, 2r_i))^{1/σ} |B(y_i, r_i)|^{1/p − 1/n}.

Note that the constant C_3 may depend on the cube Q. In turn, Hölder's inequality guarantees that

( ∑_{i=1}^∞ |φ^{-1}(B(y_i, r_i)) ∩ Q| )^{1/q − 1/n} ≤ C_3 ( ∑_{i=1}^∞ Φ(B(y_i, 2r_i)) )^{1/σ} ( ∑_{i=1}^∞ |B(y_i, r_i)| )^{1/p − 1/n}.

As the open set U is arbitrary, this estimate yields |φ^{-1}(E) ∩ Q| = 0. Since the cube Q ⊂ Ω is arbitrary, it follows that |φ^{-1}(E)| = 0.

The sequence {φ_k}_{k∈ℕ} converges weakly in W^1_{r,loc}(Ω). Therefore, by the embedding theorem, passing to a subsequence if necessary, we may assume that φ_0 is an almost-everywhere pointwise limit of the homeomorphisms φ_k: Ω → Ω'. In this case the images of some points x ∈ Ω may belong to the boundary ∂Ω'. Denote by S ⊂ Ω a negligible set on which the convergence φ_k(x) → φ_0(x) as k → ∞ fails. If x ∈ Ω ∖ S with φ_0(x) ∈ Ω', then the injectivity follows from the uniform convergence of ψ_k = φ_k^{-1} on Ω' (see Lemma <ref>) and the identity

ψ_k ∘ φ_k(x) = x, x ∈ Ω ∖ S.

Passing to the limit as k → ∞, we infer that

ψ_0 ∘ φ_0(x) = x, x ∈ Ω ∖ S.

Hence, we deduce that if φ_0(x_1) = φ_0(x_2) ∈ Ω' for two points x_1, x_2 ∈ Ω ∖ S, then x_1 = x_2. Since for the domain Ω' with Lipschitz boundary we have |∂Ω'| = 0, Lemmas <ref> and <ref> imply Theorem <ref>.

Let us mention another interesting corollary of Theorem <ref>. Recall that a mapping f: Ω → Ω' is said to be approximatively differentiable at x ∈ Ω with approximative derivative Df(x) if there is a set A ⊂ Ω of density one at x [i.e. lim_{r→0} |A ∩ B(x,r)|/|B(x,r)| = 1] such that

lim_{y→x, y∈A} |f(y) − f(x) − Df(x)(y − x)| / |y − x| = 0.

It is well known that Sobolev functions are approximatively differentiable a.e. (see <cit.> for more details).

If an almost everywhere injective mapping φ: Ω → Ω' with φ ∈ W^1_1(Ω) and J(x,φ) ≥ 0 a.e. in Ω has the Luzin 𝒩^{-1}-property, then J(x,φ) > 0 for almost all x ∈ Ω.

Let E be a set outside which the mapping φ is approximatively differentiable and has the Luzin 𝒩^{-1}-property. Since φ ∈ W^1_1(Ω), we have |E| = 0 (see <cit.>). In addition, we may assume that {x ∈ Ω ∖ E | J(x,φ) = 0} is contained in a Borel set Z of the same measure. Put σ = φ(Z). By the change-of-variable formula <cit.>, taking the injectivity of φ into account, we obtain

∫_{Ω∖Σ} χ_Z(x) J(x,φ) dx = ∫_{Ω∖Σ} (χ_σ ∘ φ)(x) J(x,φ) dx = ∫_{Ω'} χ_σ(y) dy.

By construction, the integral on the left-hand side vanishes; consequently, |σ| = 0. On the other hand, since φ has the Luzin 𝒩^{-1}-property, we have |Z| = 0.

§ ELASTICITY

The goal of this section is to prove an existence theorem for the problem of minimizing the energy functional over the classes (n−1, s, M; φ̄), where s ∈ [1, ∞]. Our proof works for all values of the parameter s. It is worth noting that for s = 1 some results of this section resemble statements of the paper <cit.>. In our proof we use different arguments, such as the boundedness of composition operators, which makes it possible to apply them to new classes of deformations. Naturally, the proof of our main result differs substantially from previous works and is based crucially on the results and methods of <cit.>. For a comparison of our results with those of other papers, see Remark <ref> and Section <ref>.

§.§ Polyconvexity

Let F = [f_{ij}]_{i,j=1,…,n} be an (n × n)-matrix.
For every pair of ordered tuples I = (i_1, i_2, …, i_l), 1 ≤ i_1 < … < i_l ≤ n, and J = (j_1, j_2, …, j_l), 1 ≤ j_1 < … < j_l ≤ n, define the l × l-minor of the matrix F:

F_{IJ} = | [ f_{i_1 j_1} ⋯ f_{i_1 j_l}; ⋮ ⋱ ⋮; f_{i_l j_1} ⋯ f_{i_l j_l} ] |.

Notice that the n × n-minor is the determinant of F. Let F_# be an ordered list of all minors of F, and let F_# ∈ D ⊂ ℝ^N for sufficiently large N (N = \binom{2n}{n}), where D is a convex set on which the n × n-minor is nonnegative.

A function W: 𝕄^{n×n} → ℝ is polyconvex if there exists a convex function G: D → ℝ such that G(F_#) = W(F).

Examples of polyconvex but not convex functions are W(F) = det F and W(F) = det(F^T F) = (det F)^2 (see, for example, <cit.>). It is known that for a hyperelastic material with experimentally known Lamé coefficients one can construct a stored-energy function of an Ogden material (see <cit.> for more details). On the other hand, the well-known Saint Venant–Kirchhoff material is not polyconvex <cit.>.

§.§ Existence theorem

Let Ω, Ω' ⊂ ℝ^n be two bounded domains with Lipschitz boundaries. Recall that a mapping G: Ω × ℝ^m → ℝ enjoys the Carathéodory conditions whenever G(x, ·) is continuous on ℝ^m for almost all x ∈ Ω, and G(·, a) is measurable on Ω for all a ∈ ℝ^m.

Consider a functional

I(φ) = ∫_Ω W(x, Dφ(x)) dx,

where W: Ω × ℝ^{n×n} → ℝ is a stored-energy function with the following properties:

(a) polyconvexity: there exists a convex function G: Ω × D → ℝ, meeting the Carathéodory conditions, such that for all F ∈ ℝ^{n×n} with det F ≥ 0 the equality G(x, F_#) = W(x, F) holds almost everywhere in Ω;

(b) coercivity: there exist a constant α > 0 and a function g ∈ L_1(Ω) such that W(x, F) ≥ α|F|^n + g(x) for almost all x ∈ Ω and all F ∈ ℝ^{n×n} with det F ≥ 0.

Given constants p, q ≥ 1 and M > 0, define the class of admissible deformations

(p, q, M) = {φ: Ω → Ω' is a homeomorphism with finite distortion, φ ∈ W^1_1(Ω), I(φ) < ∞, J(x,φ) ≥ 0 a.e. in Ω, K_O(·,φ) ∈ L_p(Ω), ‖K_I(·,φ) | L_q(Ω)‖ ≤ M},

where K_O(x,φ) and K_I(x,φ) are the outer and the inner distortion functions defined by (<ref>). For these families of admissible deformations we have the natural embeddings

(p, q_2, M_2) ⊂ (p, q_1, M_1) if q_1 ≤ q_2 and M_2 |Ω|^{1/q_1 − 1/q_2} ≤ M_1.

If p_1 ≤ p_2, then (p_2, q, M) ⊂ (p_1, q, M) also holds.

Suppose that conditions (a) and (b) on the function W(x, F) are fulfilled and the set (n−1, s, M) is nonempty, M > 0, s > 1. Then there exists at least one homeomorphic mapping φ_0 ∈ (n−1, s, M) such that I(φ_0) = inf{I(φ), φ ∈ (n−1, s, M)}.

If there is a homeomorphic Dirichlet datum φ̄: Ω → Ω', φ̄ ∈ W^1_n(Ω), J(x, φ̄) > 0 a.e. in Ω, ‖K_I(·,φ̄) | L_q(Ω)‖ ≤ M, and I(φ̄) < ∞, then we can define the classes of admissible deformations

(p, q, M; φ̄) = {φ ∈ (p, q, M), φ|_{∂Ω} = φ̄|_{∂Ω} a.e. on ∂Ω}.

Because of Theorem <ref> and the compactness of the trace operator (see <cit.> for instance), one easily obtains the following existence theorem with respect to the Dirichlet boundary condition φ|_{∂Ω} = φ̄|_{∂Ω} a.e. on ∂Ω.

Suppose that conditions (a) and (b) on the function W(x, F) are fulfilled and the set (n−1, s, M; φ̄) is nonempty, M > 0, s ≥ 1. Then there exists at least one mapping φ_0 ∈ (n−1, s, M; φ̄) such that I(φ_0) = inf{I(φ), φ ∈ (n−1, s, M; φ̄)}.

In some cases it is more convenient to consider deformations of the same homotopy class as a given homeomorphism φ̄ instead of deformations with prescribed boundary values. In this case we can define the following class of admissible deformations:

(p, q, M; φ̄, hom) = {φ ∈ (p, q, M), φ belongs to the same homotopy class as φ̄}.

Suppose that conditions (a) and (b) on the function W(x, F) are fulfilled and the set (n−1, s, M; φ̄, hom) is nonempty, M > 0, s ≥ 1. Then there exists at least one mapping φ_0 ∈ (n−1, s, M; φ̄, hom) such that I(φ_0) = inf{I(φ), φ ∈ (n−1, s, M; φ̄, hom)}.
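For completeness, the first of the embeddings of admissible classes stated above is a one-line consequence of Hölder's inequality on the bounded domain Ω (we only spell out the computation; the notation is that of the definitions above): for q_1 ≤ q_2,

‖K_I(·,φ) | L_{q_1}(Ω)‖ ≤ |Ω|^{1/q_1 − 1/q_2} ‖K_I(·,φ) | L_{q_2}(Ω)‖ ≤ |Ω|^{1/q_1 − 1/q_2} M_2 ≤ M_1,

so every deformation admissible in (p, q_2, M_2) is admissible in (p, q_1, M_1).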
Note that we can omit the condition that φ is a homeomorphism in the definition of (n−1, s, M; φ̄, hom) (and of (n−1, s, M; φ̄)) if s > 1. Since φ ∈ (n−1, s, M; φ̄, hom) belongs to W^1_n(Ω), with K_O(·,φ) ∈ L_{n−1}(Ω) and K_I(·,φ) ∈ L_s(Ω), s > 1, the mapping φ is continuous, open and discrete (Theorem <ref> and <cit.>). Also, it is known that a continuous open discrete mapping φ in the same homotopy class as a given homeomorphism φ̄ ∈ W^1_n(Ω) is itself a homeomorphism of Ω onto Ω' (see <cit.> for instance). In addition, if boundary conditions are imposed, we do not need any restriction on K_O(x,φ) (see Remark <ref> for details). Therefore, for s ≥ 1, instead of (n−1, s, M; φ̄) and (n−1, s, M; φ̄, hom) we can consider the classes

(s, M; φ̄) = {φ ∈ (s, M), φ|_{∂Ω} = φ̄|_{∂Ω} a.e. on ∂Ω}

and

(s, M; φ̄, hom) = {φ ∈ (s, M), φ belongs to the same homotopy class as φ̄},

where

(s, M) = {φ: Ω → Ω' is a homeomorphism with finite distortion, φ ∈ W^1_1(Ω), I(φ) < ∞, J(x,φ) ≥ 0 a.e. in Ω, ‖K_I(·,φ) | L_s(Ω)‖ ≤ M}.

Note that for a mapping of the class (1, M) we impose the same requirements as those in the paper <cit.>.

§.§ Proof of the existence theorem

In this section we prove the existence of a minimizing mapping for the functional Ĩ(φ) = I(φ) − ∫_Ω g(x) dx. Observe now that the coercivity (<ref>) of the function W and a corollary of the Poincaré inequality (see <cit.> for instance) ensure the existence of constants c > 0 and d ∈ ℝ such that

Ĩ(φ) = I(φ) − ∫_Ω g(x) dx ≥ c ‖φ | W^1_n(Ω)‖^n + d

for every mapping φ of the class (n−1, s, M) defined by (<ref>). Take a minimizing sequence {φ_k} for the functional Ĩ. Then

lim_{k→∞} Ĩ(φ_k) = inf Ĩ(φ),

the infimum being taken over the class. By (<ref>) and the assumption inf Ĩ(φ) < ∞, the sequence {φ_k}_{k∈ℕ} is bounded in W^1_n(Ω).

Recall that the Sobolev space W^1_n has the “continuity” property of minors: the rank-l minors of Dφ_k converge weakly if φ_k belongs to W^1_p with p ≥ l, 1 ≤ l < n <cit.>. In the case l = n there is no weak convergence, but something close to it <cit.>. To achieve weak convergence of the Jacobians, it is necessary to impose some additional conditions, for instance, nonnegativity of the Jacobians almost everywhere <cit.>. Here the following formulation of this assertion, which can be found in <cit.>, will be convenient for us.

Let Ω be a domain in ℝ^n and let a sequence f_k: Ω → ℝ^n, k = 1, 2, …, converge weakly in W^1_{n,loc}(Ω) to a mapping f_0. For l-tuples 1 ≤ i_1 < … < i_l ≤ n and 1 ≤ j_1 < … < j_l ≤ n, the equality

lim_{k→∞} ∫_Ω θ ∂(f_k^{i_1}, …, f_k^{i_l})/∂(x_{j_1}, …, x_{j_l}) dx = ∫_Ω θ ∂(f_0^{i_1}, …, f_0^{i_l})/∂(x_{j_1}, …, x_{j_l}) dx

holds for every θ in ∘L_{n/(n−l)}(Ω), the space of functions in L_{n/(n−l)}(Ω) with compact support in Ω, and the corresponding l × l minors [i.e. determinants of the matrix formed by taking the elements of the original matrix from the rows whose indices are in (i_1, i_2, …, i_l) and the columns whose indices are in (j_1, j_2, …, j_l)] of Df_k and Df_0, l = 1, 2, …, n−1. Moreover, if in addition J(x, f_k) ≥ 0 a.e. in Ω, the equality (<ref>) holds for l = n.

Hence there exists a minimizing sequence fulfilling the conditions

φ_k → φ_0 weakly in W^1_n(Ω), Dφ_k → Dφ_0 weakly in L_{n/(n−1),loc}(Ω), …, J(·,φ_k) → J(·,φ_0) weakly in L_{1,loc}(Ω)

as k → ∞, where φ_0 delivers the sharp lower bound Ĩ(φ_0) = inf Ĩ(φ). It remains to verify that φ_0 belongs to the admissible class. To this end, we need the properties of the mappings of this class.

The limit mapping φ_0 satisfies J(·,φ_0) ≥ 0 a.e. in Ω.

The inequality J(·,φ_0) ≥ 0 follows directly from the weak convergence of J(·,φ_k) in L_1(K) for every K ⋐ Ω.
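Spelled out, the argument behind this lemma is the standard one: for every compact K ⋐ Ω and every bounded nonnegative test function θ ∈ L_∞(K),

∫_K θ(x) J(x, φ_0) dx = lim_{k→∞} ∫_K θ(x) J(x, φ_k) dx ≥ 0,

since J(·,φ_k) ≥ 0 a.e.; as this holds for all such θ and K, we conclude that J(x,φ_0) ≥ 0 for almost every x ∈ Ω.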
Among other things, the nonnegativity of the Jacobian can also be established by using weak convergence (see <cit.>).

Now, by Corollary <ref> the mapping φ_0 is almost-everywhere injective (moreover, according to the proof of Theorem <ref>, injectivity can be lost only if points go to the boundary). Furthermore, φ_0 ∈ W^1_n(Ω) has finite distortion (by Lemma <ref> and Lemma <ref>), and if K_O(·,φ_0) ∈ L_{n−1}(Ω) and K_I(·,φ_0) ∈ L_s(Ω), s > 1, then the mapping φ_0 is continuous, discrete and open by Theorem <ref>. Therefore φ_0 is a homeomorphism. Moreover, the proof of Theorem <ref> yields the Luzin 𝒩^{-1}-property for φ_0 (see Lemma <ref>). Then Lemma <ref> implies that the limit mapping φ_0 satisfies the strict inequality J(x,φ_0) > 0 a.e. in Ω.

Theorem <ref> is not known if s = 1. However, we can include the case s = 1 for the classes (n−1, s, M; φ̄) and (n−1, s, M; φ̄, hom) ((s, M; φ̄) and (s, M; φ̄, hom)). Indeed, since both φ_k and ψ_k belong to the Sobolev spaces W^1_n(Ω) and W^1_n(Ω'), the same arguments as in Lemma <ref> ensure that there are a sequence of homeomorphisms {φ_k}_{k∈ℕ} and a sequence of inverse homeomorphisms {ψ_k}_{k∈ℕ} which converge locally uniformly to φ_0 and ψ_0, respectively. Then φ_0 and ψ_0 are continuous and ψ_0 ∘ φ_0(x) = x, φ_0 ∘ ψ_0(y) = y, if φ_0(x) ∉ ∂Ω' and ψ_0(y) ∉ ∂Ω. Since φ_0 coincides with the given homeomorphism φ̄ on the boundary (or is in the same homotopy class), the degree satisfies μ(y, Ω, φ_0) = 1 for y ∉ φ_0(∂Ω). Therefore for y ∈ Ω' there is x ∈ Ω such that φ_0(x) = y ∈ Ω'. Passing to the limit in ψ_k ∘ φ_k(x) = x, we obtain ψ_0(y) = x ∈ Ω. Similarly we obtain φ_0(x) = y ∈ Ω' for x ∈ Ω.

In order to make sure that φ_0 is admissible, it remains to verify that K_O(·,φ_0) ∈ L_{n−1}(Ω) and ‖K_I(·,φ_0) | L_s(Ω)‖ ≤ M. This follows from the semicontinuity property of the distortion coefficients <cit.>, <cit.> (see this property under weaker assumptions, and some generalizations, in <cit.>). In order to complete the proof, it remains to verify the lower semicontinuity of the functional,

∫_Ω W(x, Dφ_0) dx ≤ lim inf_{k→∞} ∫_Ω W(x, Dφ_k) dx,

using the conventional technique for the polyconvex case (see, for example, <cit.>).

§ EXAMPLES

As our first example, consider an Ogden material with the stored-energy function W_1 of the form

W_1(F) = a (tr(F^T F))^{p/2} + b (tr adj(F^T F))^{q/2} + c (det F)^r + d (det F)^{−m},

where a > 0, b > 0, c > 0, d > 0, p > 3, q > 3, r > 1, and m > 2q/(q−3). Then W_1(F) is polyconvex and the coercivity inequality holds <cit.>:

W_1(F) ≥ α(|F|^p + |adj F|^q) + c (det F)^r + d (det F)^{−m}.

We have to solve the minimization problem

I_1(φ_B) = inf{I_1(φ) : φ ∈ _B},

where I_1(φ) = ∫_Ω W_1(Dφ(x)) dx and the class of admissible deformations

_B = {φ ∈ W^1_1(Ω), I_1(φ) < ∞, J(x,φ) > 0 a.e. in Ω, φ|_{∂Ω} = φ̄|_{∂Ω} a.e. on ∂Ω}

is defined by (<ref>) for a homeomorphic boundary condition φ̄: Ω → Ω', φ̄ ∈ W^1_p(Ω), J(x, φ̄) > 0 a.e. in Ω, and I_1(φ̄) < ∞. The result of John Ball <cit.> ensures that there exists at least one solution φ_B ∈ _B to this problem, which is in addition a homeomorphism.

Denote inf_{φ ∈ _B} I_1(φ) + m = M for any m > 0 and consider the class, defined by (<ref>),

(s, M; φ̄) = {φ: Ω → Ω' is a homeomorphism with finite distortion, φ ∈ W^1_1(Ω), I_1(φ) < ∞, J(x,φ) ≥ 0 a.e. in Ω, ‖K_I(·,φ) | L_s(Ω)‖ ≤ M, φ|_{∂Ω} = φ̄|_{∂Ω} a.e. on ∂Ω}.

It is easy to check that φ ∈ _B is a homeomorphism (by <cit.>), has finite distortion (as J(x,φ) ≥ 0 a.e.) and satisfies ‖K_I(·,φ) | L_s(Ω)‖ ≤ M by Hölder's inequality for s = σr/(rn + σ − n) > 1, where σ = q(1+m)/(q+m) > n. It means that _B ∩ (s, M; φ̄) ≠ ∅. Moreover, a minimizing sequence {φ_k} ⊂ _B of the problem (<ref>) belongs to (s, M; φ̄) as well. On the other hand, for the functions of the form (<ref>) Theorem <ref> holds.
Indeed, W_1(F) is polyconvex and satisfies W_1(F) ≥ α|F|^3 − α, where the constant −α plays the role of the function g(x) in (<ref>). When we consider the same boundary condition φ̄: Ω → Ω' and solve the minimization problem

I_1(φ_0) = inf{I_1(φ) : φ ∈ (s, M; φ̄)},

Lemma <ref> and Remark <ref> yield a solution φ_0 ∈ (s, M; φ̄) which is a homeomorphism.

Let us discuss another example. Here the stored-energy function is of the form

W_2(F) = a (tr(F^T F))^{3/2}.

This function is polyconvex and satisfies W_2(F) ≥ α|F|^3, but violates the inequality of the form (<ref>). Moreover, W_2(F) violates the asymptotic condition

W_2(x, F) → ∞ as det F → 0_+,

which plays an important role in <cit.> and other articles. Nevertheless, for the stored-energy function W_2 there exists a solution to the minimization problem I_2(φ_0) = inf I_2(φ) in the class of homeomorphisms φ ∈ (n−1, s, M), s > 1, where I_2(φ) = ∫_Ω W_2(Dφ(x)) dx.

§ APPENDIX: GEOMETRY OF DOMAINS

It is known that the concepts of a domain “with Lipschitz boundary” and a “domain with quasi-isometric boundary” are used in different senses. To avoid ambiguity, we present in this section precise definitions of such domains, as used in this work, and establish their equivalence. It is evident that every bi-Lipschitz mapping is also quasi-isometric. The converse implication is not valid, but the following assertion is true: every quasi-isometric mapping is a locally bi-Lipschitz one (see Lemma <ref> below). Hence Ω is a domain with Lipschitz boundary (Definition <ref>) if and only if it is a domain with quasi-isometric boundary (Definition <ref>). Note that if the constant M in Definition <ref> is allowed to depend on x and z, then a domain with quasi-isometric boundary may have neither the Lipschitz property nor the cone property (see <cit.>).

A homeomorphism φ: U → U' of two open sets U, U' ⊂ ℝ^n is called a quasi-isometric mapping if the inequalities

lim sup_{y→x} |φ(y) − φ(x)|/|y − x| ≤ M and lim sup_{y→z} |φ^{-1}(y) − φ^{-1}(z)|/|y − z| ≤ M

hold for all x ∈ U and z ∈ U', where M is some constant independent of the choice of the points x ∈ U and z ∈ U'.

A mapping φ: U → U' of two open sets U, U' ⊂ ℝ^n is a bi-Lipschitz mapping if the inequality

l |y − x| ≤ |φ(y) − φ(x)| ≤ L |y − x|

holds for all x, y ∈ U, where l and L are some constants independent of the choice of the points x, y ∈ U.

A domain Ω ⊂ ℝ^n is called a domain with quasi-isometric boundary whenever for every point x ∈ ∂Ω there are a neighborhood U_x ⊂ ℝ^n and a quasi-isometric mapping ν_x: U_x → B(0, r_x) ⊂ ℝ^n, where the number r_x > 0 depends on U_x, such that

ν_x(U_x ∩ ∂Ω) ⊂ {y ∈ B(0, r_x) | y_n = 0} and ν_x(U_x ∩ Ω) ⊂ {y ∈ B(0, r_x) | y_n > 0}.

Let d_E(u, v) denote the intrinsic metric in the domain E, defined as the infimum of the lengths of all rectifiable curves in E with endpoints u and v. It is well known that a mapping is quasi-isometric if and only if the lengths of a rectifiable curve in the domain and of its image are comparable. The latter property means the following: a given mapping φ: Ω → Ω' is quasi-isometric if and only if

L^{-1} d_B(x, y) ≤ d_{φ(B)}(φ(x), φ(y)) ≤ L d_B(x, y)

for all x, y ∈ B.

Let φ: Ω → Ω' be a quasi-isometric mapping. Then for any fixed ball B ⋐ Ω the inequality

d_{φ(B)}(φ(x), φ(y)) ≤ L |φ(x) − φ(y)|

holds for all points x, y ∈ B with some constant L depending only on the choice of B.

Take an arbitrary function g ∈ W^1_∞(φ(B)). Then φ^*(g) = g ∘ φ ∈ W^1_∞(B) and, by a Whitney-type extension theorem (see for instance <cit.>), there is a bounded extension operator ext_B: W^1_∞(B) → W^1_∞(ℝ^n). Multiply ext_B(φ^*(g)) by a cut-off function η ∈ C_0^∞(Ω) such that η(x) = 1 for all points x ∈ B.
Then the product η · ext_B(φ^*(g)) belongs to W^1_∞(Ω), equals 0 near the boundary ∂Ω, and its norm in W^1_∞(Ω) is controlled by the norm ‖g | W^1_∞(φ(B))‖. It is clear that (φ^{-1})^*(η · ext_B(φ^*(g))) belongs to W^1_∞(Ω'), equals 0 near the boundary ∂Ω', and its norm in W^1_∞(Ω') is controlled by the norm ‖g | W^1_∞(φ(B))‖. Extending (φ^{-1})^*(η · ext_B(φ^*(g))) by 0 outside Ω', we obtain a bounded extension operator ext_{φ(B)}: W^1_∞(φ(B)) → W^1_∞(ℝ^n). It is well known (see for example <cit.>) that a necessary and sufficient condition for the existence of such an operator is the equivalence of the interior metric in φ(B) to the Euclidean one: the inequality d_{φ(B)}(u, v) ≤ L |u − v| holds for all points u, v ∈ φ(B) with some constant L.
http://arxiv.org/abs/1704.08022v5
{ "authors": [ "A. O. Molchanova", "S. K. Vodop'yanov" ], "categories": [ "math.FA", "math.AP" ], "primary_category": "math.FA", "published": "20170426091030", "title": "Injectivity almost everywhere and mappings with finite distortion in nonlinear elasticity" }
http://arxiv.org/abs/1705.02393v1
{ "authors": [ "Malik Hassanaly", "Stephen Voelkel", "Venkat Raman" ], "categories": [ "physics.flu-dyn", "math.DS" ], "primary_category": "physics.flu-dyn", "published": "20170427234253", "title": "Classification and Simulation of Anomalous Events in Turbulent Combustion" }
Water abundance in four of the brightest water sources in the southern sky

Bing-Ru Wang, Lei Qian, Di Li, Zhi-Chen Pan

Received; accepted

Neural machine translation (NMT) heavily relies on an attention network to produce a context vector for each target word prediction. In practice, we find that context vectors for different target words are quite similar to one another and are therefore insufficient for discriminatively predicting target words. The reason for this might be that context vectors produced by the vanilla attention network are just a weighted sum of source representations that are invariant to decoder states. In this paper, we propose a novel GRU-gated attention model (GAtt) for NMT which enhances the degree of discrimination of context vectors by enabling source representations to be sensitive to the partial translation generated by the decoder. GAtt uses a gated recurrent unit (GRU) to combine two types of information: it treats a source annotation vector originally produced by the bidirectional encoder as the history state, and the corresponding previous decoder state as the input to the GRU. The GRU-combined information forms a new source annotation vector. In this way, we can obtain translation-sensitive source representations which are then fed into the attention network to generate discriminative context vectors. We further propose a variant that regards a source annotation vector as the current input and the previous decoder state as the history. Experiments on NIST Chinese-English translation tasks show that both GAtt-based models achieve significant improvements over the vanilla attention-based NMT. Further analyses of attention weights and context vectors demonstrate the effectiveness of GAtt in improving the discrimination power of representations and in handling the challenging issue of over-translation.

§ INTRODUCTION

Neural machine translation (NMT), as a large, single and end-to-end trainable neural network, has attracted wide attention in recent years <cit.>. Currently, most NMT systems use an encoder to read a source sentence into a vector and a decoder to map the vector into the corresponding target sentence. What makes NMT outperform conventional statistical machine translation (SMT) is the attention mechanism <cit.>, an information bridge between the encoder and the decoder that produces context vectors by dynamically detecting the source words relevant for predicting the next target word. Intuitively, different target words would be aligned to different source words, so that the generated context vectors should differ significantly from one another across different decoding steps. In other words, these context vectors should be discriminative enough for target word prediction; otherwise the same target words might be generated repeatedly (a well-known issue of NMT, over-translation; see Section <ref>). However, this is often not true in practice, even when the “attended” source words are quite relevant. We observe (see Section <ref>) that the context vectors are very similar to each other, and that the variance in each dimension of these vectors across different decoding steps is very small. This indicates that the vanilla attention mechanism suffers from an inadequacy in distinguishing different translation predictions.
The reason behind this, we conjecture, lies in the architecture of the attention mechanism, which simply calculates a linearly weighted sum of source representations that are invariant across decoding steps. Such invariance in the source representations may lead to the undesirably small variance of the context vectors. In order to handle this issue, in this paper we propose a novel GRU-gated attention model (GAtt) for NMT. The key idea is that we can increase the degree of variance in the context vectors by refining the source representations according to the partial translation generated by the decoder. The refined source representations are composed of the original source representations and the previous decoder state at each decoding step. We show the overall framework of our model and highlight the difference between GAtt and the vanilla attention in Figure <ref>. GAtt significantly extends the vanilla attention by inserting a gating layer between the encoder and the vanilla attention network. Specifically, we model this gating layer with a GRU unit <cit.>, which takes the original source representations as its history and the corresponding previous decoder state as its current input. In this way, GAtt can produce translation-sensitive source representations so as to improve the variance of the context vectors and therefore their discrimination ability in target word prediction.

As the GRU is able to control the information flow between the history and the current input through its reset and update gates, we further propose a variant of GAtt that, instead, regards the previous decoder state as the history and the original source representations as the current inputs. Both models are simple yet efficient in training and decoding. We test GAtt on Chinese-English translation tasks. Experimental results show that both GAtt-based models significantly outperform the vanilla attention-based NMT. We further analyze the generated attention weights and context vectors, showing that the attention weights are more accurate and the context vectors are more discriminative for target word prediction.

§ RELATED WORK

Our work contributes to the development of the attention mechanism in NMT. Originally, NMT did not have an attention mechanism and mainly relied on the encoder to summarize all source-side semantic details into a fixed-length vector <cit.>. Bahdanau et al. <cit.> found that using a fixed-length vector is not adequate to represent a source sentence and proposed the popular attention mechanism, enabling the model to automatically search for the parts of a source sentence that are relevant to the next target word. From then on, the attention mechanism has gained extensive interest. Luong et al. <cit.> explore several effective approaches to the attention network, introducing the local and global attention models. Tu et al. <cit.> introduce a coverage vector to keep track of the attention history so that the attention network can pay more attention to untranslated source words. Mi et al. <cit.> leverage well-trained word alignments to directly supervise the attention weights in NMT. Yang et al. <cit.> bring a recurrence along the context vector to help adjust future attention. Cohn et al. <cit.> incorporate several structural biases, such as position bias, Markov conditioning and fertilities, into the attention-based neural translation model.
However, all these models mainly focus on how to make the attention weights more accurate. As we mentioned, even with well-designed attention models, the context vectors may lack discrimination ability for target word prediction.

Another closely related work is the interactive attention model <cit.>, which treats the source representations as a memory and models the interaction between the decoder and this memory during translation via reading and writing operations. To some extent, our model can also be regarded as a memory network that includes only the reading operation. However, our reading operation differs significantly from that in the interactive attention model: we employ the GRU unit for composition, while they merely use content-based addressing. Compared with the interactive attention model, our GAtt, without the writing operation, is more efficient in both training and decoding.

The gate mechanism in our GAtt is built on the GRU unit. The GRU usually acts as a recurrent unit that leverages a reset gate and an update gate to control how much information flows from the history state and the current input, respectively <cit.>. It is an extension of the vanilla recurrent neural network (RNN) unit with the advantage of alleviating the vanishing and exploding gradient problems during training <cit.>, and also a simplification of the LSTM model <cit.> with the advantage of efficient computation. The idea of using the GRU as a gate mechanism, to the best of our knowledge, has never been investigated before. Additionally, our model is also related to the tree-structured LSTM <cit.>, where the LSTM is adapted to compose a varying number of children nodes and the current input node in a dependency tree into the current hidden state. GAtt differs significantly from the tree-structured LSTM in that the latter employs a sum operation to deal with the vary-sized representations, while our model leverages the attention mechanism.

§ BACKGROUND

In this section, we briefly review the vanilla attention-based NMT <cit.>. Unlike conventional SMT, NMT directly maps a source sentence 𝐱 = {x_1, …, x_n} to its target translation 𝐲 = {y_1, …, y_m} using an encoder-decoder framework. The encoder reads the source sentence 𝐱 and encodes the representation of each word 𝐡_i by summarizing the information of neighboring words. As shown in blue in Figure <ref>, this is achieved by a bidirectional RNN, specifically the bidirectional GRU model. The decoder is a conditional language model which generates the target sentence word by word using the following conditional probability (see the yellow lines in Figure <ref> (a)):

p(y_j | 𝐱, 𝐲_{<j}) = softmax(g(E_{y_{j-1}}, 𝐬_j, 𝐜_j)),

where 𝐲_{<j} = {y_1, ⋯, y_{j-1}} is the partial translation, E_{y_{j-1}} ∈ ℝ^{d_w} is the embedding of the previously generated target word y_{j-1}, 𝐬_j ∈ ℝ^{d_h} is the j-th target-side decoder state, and g(·) is a highly non-linear function. Please refer to <cit.> for more details. What we are concerned with in this paper is 𝐜_j ∈ ℝ^{2d_h}, the translation-sensitive context vector produced by the attention mechanism.

Attention Mechanism acts as a bridge between the encoder and the decoder, which makes them tightly coupled. The attention network aims at recognizing which source words are relevant to the next target word and giving high attention weights to these words when computing the context vector 𝐜_j. This is based on the encoded source representations 𝐇 = {𝐡_1, ⋯, 𝐡_n} and the previous decoder state 𝐬_{j-1} (see the purple color in Figure <ref> (a)).
Formally,

𝐜_j = Att(𝐇, 𝐬_{j-1}),

where Att(·) denotes the whole process. It first computes an attention weight α_{ji} to measure the degree of relevance of a source word x_i for predicting the target word y_j via a feed-forward neural network:

α_{ji} = exp(e_{ji}) / ∑_k exp(e_{jk}).

The relevance score e_{ji} is estimated via an alignment model as in <cit.>: e_{ji} = v_a^T tanh(W_a 𝐬_{j-1} + U_a 𝐡_i). Intuitively, the higher the attention weight α_{ji} is, the more important the word x_i is for the next word prediction. Therefore, Att(·) generates 𝐜_j by directly weighting the source representations 𝐇 with their corresponding attention weights {α_{ji}}_{i=1}^n:

𝐜_j = ∑_i α_{ji} 𝐡_i.

Although this vanilla attention model is very successful, we find that, in practice, the resulting context vectors {𝐜_j}_{j=1}^m are very similar to one another. In other words, these context vectors are not discriminative enough. This is undesirable because it makes the decoder (Eq. (<ref>)) hesitate in deciding which target word should be predicted. We attempt to solve this problem in the next section.

§ GRU-GATED ATTENTION FOR NMT

The problem mentioned above reveals some shortcomings of the vanilla attention mechanism. Let us revisit the generation of 𝐜_j in Eq. (<ref>). Although different target words may be aligned to different source words, so that the attention weights of source words vary across decoding steps, the source representations 𝐇 always remain the same, no matter how the attention weights vary; i.e., they are decoding-invariant. This invariance limits the discrimination power of the generated context vectors. Accordingly, we attempt to break this invariance by refining the source representations before they are input to the vanilla attention network at each decoding step. To this end, we propose the GRU-gated attention (GAtt), which, similar to the vanilla attention, can be formulated in the following form:

𝐜_j = GAtt(𝐇, 𝐬_{j-1}).

The gray color in Figure <ref> (b) highlights the major difference between GAtt and the vanilla attention. Specifically, GAtt consists of two layers: a gating layer and an attention layer.

Gating Layer. This layer aims at refining the source representations according to the previous decoder state 𝐬_{j-1} so as to compute translation-relevant source representations. Formally,

𝐇^g_j = Gate(𝐇, 𝐬_{j-1}).

The Gate(·) function should be capable of dealing with the complex interactions between the source sentence and the partial translation, and of freely controlling the semantic match and information flow between them. Instead of using a conventional gating mechanism <cit.>, we directly choose the whole GRU unit to perform this task. For a source representation 𝐡_i, the GRU treats it as the history representation and refines it using the current input, i.e. the previous decoder state 𝐬_{j-1}:

𝐳_{ji} = σ(W_z 𝐬_{j-1} + U_z 𝐡_i + b_z),
𝐫_{ji} = σ(W_r 𝐬_{j-1} + U_r 𝐡_i + b_r),
h̃_{ji} = tanh(W 𝐬_{j-1} + U [𝐫_{ji} ⊙ 𝐡_i] + b),
𝐡^g_{ji} = (1 − 𝐳_{ji}) ⊙ 𝐡_i + 𝐳_{ji} ⊙ h̃_{ji},

where σ(·) is the sigmoid function and ⊙ denotes element-wise multiplication. Intuitively, the reset gate 𝐫_{ji} and the update gate 𝐳_{ji} measure the degree of the semantic match between the source sentence and the partial translation. The former determines how much of the original source information is used in combination with the partial translation, while the latter defines how much of the original source information is kept around. As a result, 𝐡^g_{ji} becomes translation-sensitive rather than decoding-invariant, which is desired to strengthen the discrimination power of 𝐜_j.
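To make the gating computation concrete, the following NumPy sketch implements one decoding step of the gating layer together with the vanilla attention it feeds into. This is only an illustrative re-implementation of the equations above, not the authors' code; all function names, matrix shapes and parameter names are ours.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gru_gate(H, s_prev, P):
    # Refine each source annotation h_i with the previous decoder state.
    # H: (n, 2*dh) source annotations; s_prev: (dh,) previous decoder state.
    Wz, Uz, bz, Wr, Ur, br, W, U, b = P
    sigm = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigm(s_prev @ Wz + H @ Uz + bz)        # update gates, shape (n, 2*dh)
    r = sigm(s_prev @ Wr + H @ Ur + br)        # reset gates,  shape (n, 2*dh)
    h_tilde = np.tanh(s_prev @ W + (r * H) @ U + b)
    return (1.0 - z) * H + z * h_tilde         # gated annotations H^g_j

def attention(Hg, s_prev, Wa, Ua, va):
    # Vanilla attention over (possibly gated) annotations.
    e = np.tanh(s_prev @ Wa + Hg @ Ua) @ va    # relevance scores e_{j,.}
    alpha = softmax(e)                         # attention weights
    return alpha @ Hg, alpha                   # context vector c_j, weights

# One GAtt step: c_j, _ = attention(gru_gate(H, s, P), s, Wa, Ua, va)

The GAtt-Inv variant described below simply exchanges which argument plays the history and which the current input inside the gate, with the weight shapes changing accordingly.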
Attention Layer. This layer is the same as the vanilla attention mechanism:

𝐜_j = Att(𝐇^g_j, 𝐬_{j-1}).

Att(·) in Eq. (<ref>) denotes the same procedure as in Eq. (<ref>). However, instead of paying attention to the original source representations 𝐇, this layer relies on the gate-refined source representations 𝐇^g_j. Notice that 𝐇^g_j is adaptive during decoding, as indicated by the subscript j. Ideally, we expect 𝐇^g_j to be decoding-specific enough that 𝐜_j can vary significantly across different target words.

Notice that Gate(·) is not a multi-step RNN. It is simply a composition function, or a one-step RNN. Therefore, it is computationally efficient. To train our model, we employ the standard training objective, i.e. maximizing the log-likelihood of the training data, and optimize the model parameters using a standard stochastic gradient algorithm.

Model Variant. We refer to the above model as GAtt; it regards the source representations as the history and the previous decoder state as the current input. Which information is treated as input and which as history does not matter much, especially for the GRU unit, since the GRU is able to control the information flow freely. We can also use the previous decoder state as the history and the source representations as the current input. We refer to this model as GAtt-Inv. Formally,

𝐜_j = GAtt-Inv(𝐇, 𝐬_{j-1}),

with

𝐜_j = Att(𝐇^{g'}_j, 𝐬_{j-1}), 𝐇^{g'}_j = Gate(𝐬_{j-1}, 𝐇).

The major difference lies in the order of the inputs to Gate(·), since the inputs to the GRU are directional. We verify both model variants in the following experiments.

§ EXPERIMENTS

§.§ Setup

We evaluated the effectiveness of our model on Chinese-English translation tasks. Our training data consist of 1.25M sentence pairs, with 27.9M Chinese words and 34.5M English words respectively[This data is a combination of LDC2002E18, LDC2003E07, LDC2003E14, the Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.]. We chose the NIST 2005 dataset as the development set for model selection, and the NIST 2002, 2003, 2004, 2006 and 2008 datasets as our test sets. There are 878, 919, 1788, 1082 and 1664 sentences in the NIST 2002, 2003, 2004, 2005 and 2006 datasets, respectively. We evaluated translation quality using the case-insensitive BLEU-4 metric <cit.>[https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl] and the TER metric <cit.>[http://www.cs.umd.edu/~snover/tercom/]. We performed paired bootstrap sampling <cit.> for statistical significance tests using the script in Moses[https://github.com/moses-smt/mosesdecoder/blob/master/scripts/analysis/bootstrap-hypothesis-difference-significance.pl].

§.§ Baselines

We compared our proposed model against the following two state-of-the-art SMT and NMT systems:

* Moses <cit.>: an open-source state-of-the-art phrase-based SMT system.
* RNNSearch <cit.>: a state-of-the-art attention-based NMT system using the vanilla attention mechanism. We further feed the information of y_{j-1} to the attention network, and implemented the decoder with two GRU layers, following the suggestions in dl4mt[https://github.com/nyu-dl/dl4mt-tutorial/tree/master/session3].

For Moses, we trained a 4-gram language model on the target portion of the training data using the SRILM[http://www.speech.sri.com/projects/srilm/download.html] toolkit with modified Kneser-Ney smoothing. The word alignments were obtained with GIZA++ <cit.> on the training corpora in both directions, using the “grow-diag-final-and” strategy <cit.>.
All other parameters were kept at their default settings.

For RNNSearch, we limited the vocabulary of both source and target languages to the most frequent 30K words, covering approximately 97.7% and 99.3% of the two corpora respectively. Words that do not appear in the vocabulary were mapped to a special token “UNK”. We trained our model on the sentences of length up to 50 words in the training data. Following the settings in <cit.>, we set d_w = 620 and d_h = 1000. We initialized all parameters randomly according to a normal distribution (μ = 0, σ = 0.01), except the square matrices, which were initialized with random orthogonal matrices. We used the Adadelta algorithm <cit.> for optimization, with a batch size of 80 and a gradient norm clipped at 5. The model parameters were selected according to the maximum BLEU points on the development set. Additionally, during decoding, we used the beam-search algorithm and set the beam size to 10.

For GAtt, we randomly initialized its parameters as in RNNSearch. All other settings are the same as for RNNSearch. All NMT systems were trained on a GeForce GTX 1080 using the computational framework Theano. In one hour, the RNNSearch system processes about 2769 batches, while GAtt processes 1549 batches.

§.§ Translation Results

The results are summarized in Table <ref>. Both GAtt and GAtt-Inv outperform both Moses and RNNSearch. Specifically, GAtt yields 35.70 BLEU and 56.06 TER scores on average, with improvements of 4.59 BLEU and 1.61 TER points over Moses, and 1.66 BLEU and 2.12 TER points over RNNSearch; GAtt-Inv achieves 35.70 BLEU and 55.99 TER scores on average, with gains of 4.59 BLEU and 1.68 TER points over Moses, and 1.66 BLEU and 2.19 TER points over RNNSearch. All improvements are statistically significant. It seems that GAtt-Inv obtains very slightly better performance than GAtt in terms of TER on average. However, these improvements are neither significant nor consistent. In other words, GAtt is as effective as GAtt-Inv. This is reasonable, since the difference between GAtt and GAtt-Inv lies in the order of the inputs to the GRU, and the GRU is able to control its information flow from each input through its reset and update gates.

§.§ Effects of Model Ensemble

We further test whether an ensemble of our models and RNNSearch can yield better performance than any single system. We ensemble different systems by simply averaging their predicted target word probabilities at each decoding step, as suggested in <cit.>. We show the results in Table <ref>. Not surprisingly, all the ensemble systems achieve significant improvements over the best single system, and the ensemble of “RNNSearch+GAtt+GAtt-Inv” produces the best results, 38.64 BLEU and 54.15 TER scores on average. This demonstrates that these neural models are complementary and beneficial to each other.

§.§ Translation Analysis

In order to gain a deeper understanding of how the proposed models work, we examined the translated sentences of the different neural systems. Table <ref> shows an example. All the neural models generate very fluent translations. However, RNNSearch only translates the rough meaning of the source sentence, ignoring the important sub-phrases “重新 融入 社会” and “临时”. These missing translations resonate with the finding of Tu et al. <cit.>. In contrast, GAtt and GAtt-Inv are able to capture these two sub-phrases, generating the key translations “integration” and “interim”. To find the underlying reason, we investigated the generated attention weights.
Rather than using the generated target sentences, we feed the same reference translations into RNNSearch and GAtt to make a fair comparison[We do not analyze GAtt-Inv because it is very similar to GAtt.]. Figure <ref> visualizes the attention weights. Both RNNSearch and GAtt produce very intuitive attention, e.g. “refugees” is aligned to “难民” and “government” is aligned to “政府”. However, compared against those of RNNSearch, the attention weights learned by GAtt are more focused and accurate. In other words, the refined source representations in GAtt help the attention mechanism concentrate its weights on translation-related words. To verify this point, we evaluated the quality of the word alignments induced from the different neural systems in terms of the alignment error rate (AER) <cit.> and the soft version of AER (SAER), following Tu et al. <cit.>.[Notice that we used the same dataset and evaluation script as Tu et al. <cit.>. We refer the readers to <cit.> for more details.] Table <ref> displays the evaluation results for word alignment. We find that both GAtt and GAtt-Inv significantly outperform RNNSearch in terms of both AER and SAER. Specifically, GAtt obtains a gain of 7.91 SAER and 7.3 AER points over RNNSearch. As we obtain word alignments by connecting target words to the source words with the highest alignment probabilities computed from their attention weights, the consistent improvements of our model over RNNSearch in AER score indicate that our model indeed learns more accurate attention.

Another very important question is whether GAtt enhances the discrimination of the context vectors. We answer this question by visualizing these vectors, as shown in Figure <ref>. We can observe that the heatmap of RNNSearch is very smooth, varying only slightly across different decoding steps (the horizontal axis). This means that these context vectors are very similar to one another and thus lack discrimination. In contrast, there are obvious variations for GAtt. Statistically, the mean variance of the context vectors across different dimensions is 0.0057 in RNNSearch, while it is 0.0365 in GAtt, six times larger. Additionally, across different decoding steps, the mean variance is 0.0088 in RNNSearch, while it is 0.0465 in GAtt. All these observations strongly suggest that our model makes the context vectors more discriminative across different target words.

§.§ Over-Translation Evaluation

Over-translation, i.e. repeatedly predicting the same target words <cit.>, is a challenging problem for NMT. We conjecture that the reason behind the over-translation issue is partially the small differences among the context vectors learned by the vanilla attention mechanism. As the proposed GAtt can improve the discrimination power of context vectors, we hypothesize that our model can deal better with the over-translation issue than the vanilla attention network. To test this hypothesis, we introduce a metric called the N-Gram Repetition Rate (N-GRR), which calculates the portion of repeated n-grams in a sentence:

N-GRR = (1/(CR)) ∑_{c=1}^C ∑_{r=1}^R (|N-grams_{c,r}| − u(N-grams_{c,r})) / |N-grams_{c,r}|,

where |N-grams_{c,r}| denotes the total number of n-grams in the r-th translation of the c-th sentence of the test corpus, and u(N-grams_{c,r}) is the number of n-grams that remain after duplicate n-grams are removed. In our test sets, there are C = 6606 sentences, with R = 4 and R = 1 translations for the Reference and the NMT systems, respectively.
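As a concrete reading of this definition, the following sketch computes N-GRR for a tokenized corpus. It is an illustration of ours, not the authors' evaluation script; in particular, sentences shorter than n tokens are simply skipped here, which is our own choice.

def n_grr(corpus, n):
    # corpus: list over C sentences, each a list over R translations,
    # each a list of tokens. Returns the N-Gram Repetition Rate.
    ratios = []
    for translations in corpus:            # c = 1, ..., C
        for tokens in translations:        # r = 1, ..., R
            ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
            if ngrams:                     # skip sentences shorter than n
                ratios.append((len(ngrams) - len(set(ngrams))) / len(ngrams))
    return sum(ratios) / len(ratios) if ratios else 0.0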
If we compare the N-GRR scores of machine-generated translations against those of the reference translations, we can roughly see how serious the over-translation problem is. We show the N-GRR results in Table <ref>. Compared with the reference translations (Reference), RNNSearch yields significantly higher scores, indicating that RNNSearch generates redundant repeated n-grams in its translations and that the over-translation problem in RNNSearch is therefore serious. In contrast, both GAtt and GAtt-Inv achieve considerable improvements over RNNSearch in terms of N-GRR. In particular, we find that GAtt-Inv performs better than GAtt on all n-grams, which is in accordance with the translation results in Table <ref>. These N-GRR results strongly suggest that the proposed models are able to alleviate the over-translation issue, and that generating more discriminative context vectors makes NMT suffer less from it.

§ CONCLUSION

In this paper, we have presented a novel GRU-gated attention model (GAtt) for NMT. Instead of using decoding-invariant source representations, GAtt produces new source representations that vary across decoding steps according to the partial translation, so as to improve the discrimination of context vectors for translation. This is achieved by a gating layer that regards the source representations and the previous decoder state as the history and the input of a gated recurrent unit. Experiments on Chinese-English translation tasks demonstrate the effectiveness of our model. In-depth analysis further reveals that our model is able to significantly reduce repeated, redundant translations (over-translations).

In the future, we would like to apply our model to other sequence learning tasks, as it is easily adapted to any sequence-to-sequence task (e.g. document summarization, neural conversation models, speech recognition, etc.). Additionally, besides the GRU unit, we will explore different end-to-end neural architectures, such as convolutional neural networks and the LSTM unit, since the gate mechanism plays a very important role in our model. Finally, we are interested in adapting our GAtt model as a tree-structured unit to compose different nodes in a dependency tree.
http://arxiv.org/abs/1704.08430v2
{ "authors": [ "Biao Zhang", "Deyi Xiong", "Jinsong Su" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20170427042541", "title": "A GRU-Gated Attention Model for Neural Machine Translation" }
That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

Anders Sandberg (Future of Humanity Institute, University of Oxford, Littlegate House, Suite 1, 16/17 St. Ebbe's Street, Oxford OX1 1PT, United Kingdom), Stuart Armstrong (Future of Humanity Institute), Milan Ćirković (Future of Humanity Institute and Astronomical Observatory of Belgrade, Volgina 7, 11000 Belgrade, Serbia)

December 30, 2023

Conventional wireless power transfer systems consist of a microwave power generator and transmitter located in one place and a microwave power receiver located at a distance. Here we show that wireless power transfer can be realized as a single “distributed” microwave generator with an over-the-air feedback, so that the microwave power is generated directly at the place where the energy needs to be delivered. We demonstrate that the use of this paradigm increases efficiency and dramatically reduces sensitivity to misalignments, variations in load and power, and the possible presence of obstacles between the source and receiver.

§ INTRODUCTION

In conventional wireless power transfer systems <cit.> (developing since the time of Nikola Tesla), the first stage of wireless power transport is a microwave generator which transforms DC or 50/60 Hz power into microwave oscillations. This power transformation is necessary because wireless power links can operate only at reasonably high frequencies. At this stage, any type of microwave generator can be used. Obviously, as in any device which transforms one energy form into another, some power loss is inevitable, although modern microwave generators can be quite efficient. Next, the microwave power available from this generator is sent into space using some kind of antenna. Part of this energy is received by another antenna at the receiving end, and the energy is finally converted back to the DC or 50/60 Hz form and used there. Some energy is inevitably lost in the internal resistance of the generator and in the ohmic resistances of the two antennas. Finally, not all the radiated energy can be captured by the receiving antenna, and some energy escapes into the surrounding space.

This conventional paradigm is illustrated in Fig. <ref>. Of course, the two antennas do not have to be electric dipoles: the wire dipoles are used as a generic example of an arbitrary antenna. One of the most important (and unavoidable) reasons for power loss in this system is the dissipation in the internal resistance of the source (Z_1). Actually, if the goal is to maximize the power in the load, the source and load impedances should be conjugate matched, and in this case one half of the power is lost in Z_1. Moreover, as a matter of fact, in conventional systems there are two such parasitic source resistances, because the microwave generator V is itself fed by a DC or mains source, and there are losses also in the internal resistance of the battery or mains generator.
In this presentation we will describe an alternative paradigm of wireless power transfer, which completely eliminates parasitic losses in the internal resistance of the microwave power source, because in this new scenario the microwave power is generated directly in the load. One can say that the internal resistance of the microwave generator becomes the load resistance. Moreover, in these new self-oscillating systems, the signal frequency and strength are automatically adjusted by the generator feedback, minimizing the sensitivity to misalignments and variations of the load impedance.

§ SELF-OSCILLATING SYSTEMS

Let us consider the conventional wireless power transfer system illustrated in Fig. <ref>. Assuming that the wireless link is reciprocal, the mutual impedances are equal, and we can denote Z_12 = Z_21 = −Z_m. The current in the load circuit reads:

I_2 = Z_m V / ((Z_1 + Z_in,1)(Z_2 + Z_in,2) − Z_m^2),

and the power delivered to the load is equal to P = |I_2|^2 R_2, where R_2 = Re(Z_2). To maximize the amplitude of the current in the load, one brings the circuit to resonance, so that the reactive impedances in the denominator cancel out and the denominator is a real number. In the optimal, idealistic scenario, the mutual resistance between the two antennas nearly cancels out the radiation resistances [Re(Z_in,1 Z_in,2 − Z_m^2) → 0], which corresponds to the non-radiating mode of two coupled antennas (in the example of two parallel dipole antennas shown in Fig. <ref>, this regime corresponds to the mode I_2 = −I_1 of closely positioned antennas). Note that the difference Re(Z_in,1 Z_in,2 − Z_m^2) is always positive and can only approach zero. Finally, the delivered power can be maximized by matching the load and source resistances, but it is always limited by the losses in Z_1.

Now let us assume that the internal resistance of the source, Re(Z_1), can be negative. In this scenario, the absolute value of the denominator in (<ref>) can be zero, meaning that, in the assumption of linear response, the load current is unbounded. Obviously, this corresponds to a self-oscillating circuit, and we no longer need the voltage source V. The energy is delivered directly to the load from a primary power source which creates a negative (active) resistance at the active side of the link. This scenario is illustrated in Fig. <ref>. As is well known from the theory of generators based on negative-resistance devices, the self-oscillation regime is established “automatically”, provided that the initial conditions are appropriate. The oscillation level is determined by the non-linearity of the negative-resistance element, while the oscillation frequency corresponds to the resonance of the system. Thus, there is no need for any adjustments of the link if, for instance, the position of the receiver changes. Such a change corresponds to changed reactances of the system, which means that if the receiver is moved, the oscillation frequency will change so that the resonance of the system holds, and the delivered power will be optimized automatically. Naturally, the receiver position and the load impedance can be changed only within some limits, ensuring that the self-oscillations remain possible.
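A simple numeric illustration of this threshold behaviour can be obtained directly from the load-current formula above. In the sketch below (ours; all impedance values are arbitrary illustrative numbers, taken real to represent the circuit at resonance), the denominator of the load current shrinks as Re(Z_1) is made more negative and vanishes at the self-oscillation threshold, where the linear model predicts an unbounded load current:

def load_current(V, Z1, Z2, Zin1, Zin2, Zm):
    # Load current of the two-port link: I2 = Zm V / ((Z1+Zin1)(Z2+Zin2) - Zm^2)
    return Zm * V / ((Z1 + Zin1) * (Z2 + Zin2) - Zm**2)

# Illustrative values at resonance (all reactances cancelled, ohms):
Zin1 = Zin2 = 10.0   # radiation resistances of the two antennas
Zm = 9.0             # mutual resistance close to Zin: nearly non-radiating mode
Z2 = 5.0             # load resistance

print(abs(load_current(1.0, 5.0, Z2, Zin1, Zin2, Zm)))   # finite current, 0.0625 A

# Self-oscillation threshold: (Z1 + Zin1)(Z2 + Zin2) = Zm^2
Z1_threshold = Zm**2 / (Z2 + Zin2) - Zin1                # = -4.6 ohm here
for Z1 in (5.0, 0.0, Z1_threshold):
    D = (Z1 + Zin1) * (Z2 + Zin2) - Zm**2
    print(f"Re(Z1) = {Z1:+6.2f} ohm -> denominator = {D:7.2f}")

Below this threshold the linear description breaks down and, as discussed above, the oscillation amplitude is set by the non-linearity of the active element.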
The presence of absorbing obstacles between the source and receiver, as well as parasitic radiation leakage, is modelled by a complex admittance sheet Y positioned at an arbitrary distance between the source and the load. This model set-up is shown in Fig. <ref>. The self-oscillating regime is initiated by an electric-current sinc pulse at z=0. The calculated results are shown in Fig. <ref>. Here, the wireless coupling between the source and the receiver is due to plane-wave propagation between the active sheet R_1 and the receiving resistive sheet R_2. The surface admittance of the obstacle is normalized to the free-space wave impedance η_0. The quantity represented in Fig. <ref> is the time-averaged electric field (measured in V/m) at the receiver location. The field is averaged over a long period far from t=0. The Fourier transform of the initiating pulse field is a gate function (a low-frequency sinc pulse in the time domain). Its amplitude is 1 (measured in V/(m·s)) for -ω_0 < ω < ω_0 and 0 elsewhere (indicatively, ω_0/(2π) = 3 GHz). Time-averaged electric field strength at the receiver position is shown by the color distribution. We see that the delivered power does not change if we introduce an obstacle and increase its conductance up to a certain threshold. A similar property is also observed if we change the position of the obstacle or the receiver, as will be shown in the presentation. These results confirm the robustness of the proposed system to variations of the environment. Note that the proposed system resembles recently conceptualized parity-time (PT) symmetric systems, where gain is compensated by loss in symmetrically positioned active and lossy elements. Although in our proposed devices ideal PT-symmetry is not required, it is interesting to observe similarities with energy teleportation through nearly opaque screens in PT-symmetric systems <cit.>.

§ DISCUSSION AND CONCLUSION

Although we have introduced and explained the main idea of self-oscillating wireless power transfer based on the use of negative-resistance circuits, the same principle can be realized using other self-oscillating circuits. For example, consider a generator formed by a microwave amplifier with an appropriate positive feed-back loop. In this alternative scenario the wireless link can be a part of the feed-back circuit of a generator, which creates microwave oscillations directly where the power is needed. Conceptually, to convert a conventional generator into a wireless power delivery system, one can let a part of the feed-back signal propagate in space and insert the object to which we want to deliver power into the field of the feed-back electromagnetic wave.

In conclusion, we have described an alternative paradigm of wireless power transfer, where microwave energy is generated directly at the location where it is needed. The wireless link is a part of the feed-back loop of a microwave self-oscillating circuit. In this scenario, the whole system is a single microwave generator, which directly converts DC or mains power into microwave power at the position of the receiver.

References

[Science] A. Kurs, A. Karalis, R. Moffatt, J. D. Joannopoulos, P. Fisher, and M. Soljacic, "Wireless Power Transfer via Strongly Coupled Magnetic Resonances," Science, 317, 2007, pp. 83–86.
[Bred1] S. Y. R. Hui, W. Zhong, and C. K. Lee, "A Critical Review of Recent Progress in Mid-Range Wireless Power Transfer," IEEE Trans. Power Electr., 29, 2016, pp. 4500–4511.
[Bred] J. Lee and S.
Nam, "Fundamental Aspects of Near-Field Coupling Small Antennas for Wireless Power Transfer," IEEE Trans. Antennas Propag., 58, 2010, pp. 3442–3449.
[teleport] Y. Ra'di, D. L. Sounas, A. Alù, and S. A. Tretyakov, "Parity-time-symmetric Teleportation," Phys. Rev. B, 93, 2016, p. 235427.
{ "authors": [ "Sergei A. Tretyakov", "Constantin R. Simovski", "Constantinos A. Valagiannopoulos", "Younes Ra'di" ], "categories": [ "physics.app-ph", "physics.class-ph" ], "primary_category": "physics.app-ph", "published": "20170427153129", "title": "Self-Oscillating Wireless Power Transfer Systems" }
Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments

Carmela Troncoso, IMDEA Software Institute ([email protected]); Marios Isaakidis, University College London ([email protected]); George Danezis, University College London ([email protected]); Harry Halpin, INRIA ([email protected])

Proceedings on Privacy Enhancing Technologies, DOI 10.1515/popets-2017-0052

Decentralized systems are a subset of distributed systems where multiple authorities control different components and no authority is fully trusted by all. This implies that any component in a decentralized system is potentially adversarial. We review fifteen years of research on decentralization and privacy, and provide an overview of key systems, as well as key insights for designers of future systems. We show that decentralized designs can enhance privacy, integrity, and availability, but also require careful trade-offs in terms of system complexity, properties provided, and degree of decentralization. These trade-offs need to be understood and navigated by designers. We argue that a combination of insights from cryptography, distributed systems, and mechanism design, aligned with the development of adequate incentives, is necessary to build scalable and successful privacy-preserving decentralized systems.

§ INTRODUCTION: THE LONG ROAD FROM 2001 TO 2016

The successful adoption of decentralized systems such as BitTorrent <cit.>, Tor <cit.>, and Bitcoin <cit.>, and the revelations of mass surveillance against centralized cloud services <cit.>, have contributed to the wide belief that decentralized architectures are beneficial to privacy. Yet, there does not exist a foundational treatment or even an established common definition of decentralization. In this paper we aim at defining decentralization and systematizing the ways in which a system can be decentralized, and, by presenting the key design decisions in decentralized systems, bring forth past lessons that can inform a new generation of decentralized privacy-enhancing technologies.

This is not the first time there has been a surge of interest in decentralization. As Cory Doctorow noted at the 2016 Decentralized Web Summit: "It's like being back at the O'Reilly P2P conference in 1999," which signaled a peak of interest around decentralized architectures at the turn of the millennium <cit.>. The `hype' around decentralization was followed in the early 2000s by research and deployment activity around decentralized systems.

To some extent, decentralization was originally a response to the threat of censorship. Perhaps the first rallying cry for decentralization was the Eternity Service <cit.>.
Anderson created this system in response to the success of the Church of Scientology at closing down the anon.penet.fi remailer <cit.> “as a means of putting electronic documents beyond the censor's grasp.” This motivation of censorship resistance is clear in more modern systems: Tor using a decentralized network of anonymous relays and a DHT-based hidden services naming infrastructure; Bitcoin emerging as a censorship-resistant way to transfer funds to organizations like Wikileaks after the centralized e-Gold <cit.> online currency had been shut down by the Department of Justice; or BitTorrent succeeding as a peer-to-peer (P2P) file sharing service using Mainline DHT <cit.> rather than having a central indexing service like Napster that could be subject to requests to keep track of file copying <cit.>. In each of these cases, decentralization arose as a response to the shutdown of a centralized authority, aiming to remove that single natural point of failure.Despite the millennial fervour for decentralization, the 2000s witnessed the rise of massively distributed, but not decentralized, data centers and systems as the dominant technical paradigm embodied by the Cloud computing capabilities offered by Google, Facebook, Microsoft, and others. Eventually, users were diverted away from software running locally on their machines, which essentially is a form of decentralization, towards cloud applications that enabled an unprecedented aggregation of user data by the providers. Snowden's revelations in 2013 on mass surveillance programs leveraging the centralized nature of these services gave credence to long-standing privacy concerns brought about by the rise and popularity of centralized services. The desire to preserve privacy, liberty, and the autonomous control of infrastructure and services have led to a call to “re-decentralize” the Internet <cit.>. As a result, in the 2010s we are observing an upsurge of alternatives to centralized infrastructures and services, although most alternatives to Cloud-based applications are still under development.It is important for system designers to neither be nostalgic about past systems nor fatalistic about future ones. Today's networking and computing environments are vastly different from those in 2000: Smart-phones have placed a powerful computer in people's pockets; users are usually connected to the Internet over fast connections without time or bandwidth caps; clients, such as web browsers, are now mature end-used platforms with P2P communications enabled and cryptographic capabilities; and mobile code, in the form of Javascript, is ubiquitous. Even though the design space for modern decentralized systems is less restricted than in the past, fundamental challenges remain. Our key objective is to support future work on decentralized privacy systems by systematizing the past 15 years of research, between O'Reilly's publication of “Peer-to-Peer: Harnessing the Power of Disruptive Technologies” <cit.> in 2001, and 2016. We aim at highlighting key findings in classic designs, and also the important problems faced by designers of past systems, so as to inform the choices made by engineers pursuing decentralization today.§ EPISTEMOLOGYScope. There is a wide use of the term `decentralized'. In this paper, we restrict ourselves to discussing systems that support privacy properties using decentralized architectures. 
We draw a distinction between decentralized and distributed architectures, as follows:

Distributed system: A system with multiple components that have their behavior co-ordinated via message passing. These components are usually spatially separated and communicate using a network, and may be managed by a single root of trust or authority.

Distribution is beneficial to support robustness against single component failure, scalability beyond what a single component could handle, high availability and low latency under distributed loads, and ecological diversity to prevent systemic failures. Developments led by Google, ranging from BigTable <cit.> to MapReduce <cit.>, are good examples of distributed systems.

Decentralized system: A distributed system in which multiple authorities control different components and no single authority is fully trusted by all others.

Following Baran <cit.>, systems are conceived of as networks of interconnected components: all the components of a system form a graph, where the nodes of the graph are the components and the edges the connections between them (see Fig. <ref>). Due to this analogy with graphs, the terms "decentralized network" and "decentralized system" tend to be used interchangeably. However, decentralized systems are not just network topologies, but systems that exist to fulfill some function or set of functions, otherwise called `operations.' These operations are accomplished by passing messages between a sender and a receiver node, with other nodes serving as proxies to relay the message <cit.> (right graph in Fig. <ref>). On the contrary, in centralized systems messages and operations are orchestrated by a central trusted authority (depicted as an orange circle in the left graph in Fig. <ref>). Centralized systems may be distributed, typically for efficiency or scaling, but not for privacy, and so the underlying components are fundamentally trusted. Only external entities are considered adversarial.

Widely deployed systems such as Bitcoin, BitTorrent, and Tor are on the other hand decentralized. Contrary to generic distributed systems, in decentralized systems participating parties may choose their relationships of trust autonomously, including the case where one may not trust any other component. This has profound implications in terms of security and privacy: there is no single entity that can act as a trusted computing base (TCB) <cit.> to enforce a global security or privacy policy. Any internal component of the system may be adversarial, in addition to external parties, requiring defences in depth.

In terms of security and privacy we adopt the following broad definitions, which we refine in the corresponding sections when the context requires more precision.

Security: We consider the security aspects of a system to be those that encompass traditional information security properties. This includes, of course, confidentiality, integrity, and authentication, but also less traditional ones such as availability, accountability, authorization, non-repudiation, and non-equivocation.

Privacy: We consider the privacy aspects of a system to be those related to the protection of users' data (identities, actions, etc.). This protection is usually formalized in terms of privacy properties (anonymity, pseudonymity, unlinkability, unobservability), for which we follow the definitions by Pfitzmann and Hansen <cit.>. These definitions are extended in the privacy-oriented discussion in Section <ref>.

Methods & Model.
To systematize knowledge in decentralized privacy-preserving systems we performed a systematic literature review of all papers published in the top 4 computer security conferences (IEEE S&P, ACM CCS, Usenix Security, NDSS) as well as the specialized conferences (PETS, WPES and IEEE P2P) that propose or analyze decentralized systems with privacy properties, from the years 2000 to 2016.

Our first analysis resulted in 165 papers (28 from IEEE S&P, 56 from ACM CCS, 18 from Usenix Security, 11 from NDSS, 11 from PETS, 10 from WPES, and 31 from IEEE P2P). In the end, the paper contains only 90 references from these venues (13 from IEEE S&P, 32 from ACM CCS, 10 from Usenix Security, 11 from NDSS, 9 from PETS, 6 from WPES, and 9 from IEEE P2P); 19 are well-known deployed systems that do not have an associated peer-reviewed publication, and the rest come from an additional pool of 30 conferences and workshops (among them FOCI, WEIS, NSDI, SIGCOMM, SIGSAC, or CRYPTO). The selection was done on the basis of highlighting design decisions that reflect a key lesson worthy of future reference.

Due to the vast number of identified designs, by necessity we do not describe each system in detail, but instead show how each system exemplifies a property or design choice. We do, though, expand upon Tor, BitTorrent, and Bitcoin as they are heavily deployed and have substantial academic analysis. As illustrated in Figure <ref>, we study the pool of selected designs with the intention to determine: * How is the system decentralized? (Section <ref>)* What advantages do we get from decentralizing? (Section <ref>)* How does decentralization support privacy? (Section <ref>)* What are the disadvantages of decentralizing? (Section <ref>)* What implicit centralized assumptions remain? (Section <ref>)* What can we learn from existing designs? (Section <ref>)

Insights.

* The key difference between distributed systems and decentralized systems is one of authority and trust between components. Differences in architecture and use of security and privacy controls stem from it.
* Decentralized systems embody a complex set of relationships of trust between parties managing different aspects of the system. Untrusted insiders are common, and security controls must be deployed taking into account adversaries within the system.
* In distributed, but not decentralized, systems the existence of a single authority that provisions and manages all trusted components enables simple security designs, often based on dedicated trusted components that act as roots of trust.
* In decentralized systems no single authority can provision a root of trust or trusted computing base, making security mechanisms that rely on such roots (such as central access control or traditional public key infrastructures) inapplicable.

§ DECENTRALIZATION AND PRIVACY

This section addresses the key questions posed above with regard to the current state of affairs in decentralized systems. Table <ref> provides a summary of the different design decisions and the properties achieved as a result.

§.§ How Is Decentralization Achieved?

We review key architectural decisions: how to orchestrate the infrastructure of the network, how to route messages, and how to distribute trust between nodes.

§.§.§ Infrastructure

A first key choice concerns the distribution of tasks needed for maintaining a service within the system. The provisioning of infrastructure impacts the design in terms of trust and message routing.
User-based Infrastructure. Some decentralized system consist solely of nodes that are users and there is no additional infrastructure. They rely solely on users to collectively contribute resources (bandwidth, storage) in order to provide a service. The advantage of this design is that by nature it does not require a third-party centralized authority. This user-based design can support services such as hosting of encrypted data, e.g. in Freenet <cit.> and Cachet <cit.>.A disadvantage is that user-based infrastructure may lead to poor performance due to evolving into sparsely connected topologies, and to “churn” caused by peers constantly joining and leaving the network.User-independent Infrastructure. Here, the functions of the decentralized system are realized by nodes that are not users. A set of third-parties that are not necessarily trusted may provide all or part of the functionality to users. This design pattern underlies classic open federated protocols such as SMTP <cit.> and XMPP <cit.> based on a client-server model. The advantages of user-independent infrastructure include increased availability of the service, a reduced attack surface, and immunity to user churn. Servers do not necessarily threaten user privacy. The Eternity Service <cit.>, as realized in systems like Tahoe-LAFS <cit.>, combined encryption with the use of several servers controlled by different non-collaborating authorities for the private storage and replication of files. Other examples of systems that rely on user-independent infrastructure include DP5 <cit.> and Riposte <cit.> in terms of Private Information Retrieval <cit.> or anonymous communication systems like mix networks <cit.> or DC-nets <cit.>. Hybrid Systems. Functions may be shared between users and nodes run by third-parties. An example is Tor, where relays are mainly run by volunteers but Directory Authorities are operated by a closed `known' group of servers.In terms of privacy and security, new elements such as distributed ledgers decentralize traditionally centralized cryptographic protocols in these hybrid systems. For example, computations can be locally and securely recorded to the blockchain with the support of multi-party computation protocols <cit.>, even without a trusted third party <cit.>, or using a small number of stable entities to ensure reliability and low-latency, as in the Sharemind MPC system <cit.>.§.§.§ Network Topology When considering a decentralized system, there are two distinct topologies. The first, network topology describes the connections between nodes used to route traffic; and the second, authority topology describes the power relations between the nodes. Thus, the network routing structure does not necessarily have to mirror how authority is decentralized in a system, although it often does. That can greatly affect the security and privacy properties of the system <cit.>. It must be noted that components of traditional network routing is done in a hierarchical manner, including spanning tree protocols such as in BGP <cit.> in the current Internet as well as `next generation' designs like SCION <cit.>.Mesh. Mesh topologies are unstructured. Nodes can route messages to every other node they are connected with. One advantage is that mesh networks function in settings with no stable connections to other nodes to guarantee service in the presence of massive churn and changing connectivity, such as in mobile ad-hoc networking and file sharing in early versions of Gnutella <cit.>. 
A particularly popular communication mechanism in mesh topologies <cit.> is the gossip protocol. In gossiping, as opposed to flooding, a random subset of the nodes in the network is chosen to receive the messages. These nodes then continue to broadcast the message via another independently selected random subset of the network. The reliability of message delivery under load is questionable and information propagation experiences delays. Historically, mesh networking does not preserve the privacy of its users, but recent secure messaging systems such as Briar <cit.> use this topology to remain functional during Internet blackouts.

Distributed Hash Tables (DHT). DHTs are network topologies where each node maintains a small routing table of its neighbours, and messages are passed greedily to known nodes that are `closer' to the intended recipient. Although efficient and decentralized, DHTs do not by themselves provide strong security, privacy, and anonymity properties: Tran et al. <cit.> show that low-latency anonymity systems based on DHTs such as Salsa <cit.> are vulnerable to having large amounts of traffic captured by adversaries controlling a fraction of the relays. DHT nodes may, however, be grouped into byzantine quorums to defeat adversaries that control a minority of nodes <cit.>.

Super-nodes. Super-nodes are nodes that are endowed with more, and contribute more, resources to the system. This may be in terms of computation power, storage, or network connectivity, stability and up time. In terms of routing, such super-nodes may be used to mediate operations requiring higher network throughput. They can be arranged in structured topologies, designed to leverage them; or they may emerge naturally in unstructured topologies, as a result of some nodes committing more resources. Most P2P systems such as BitTorrent eventually rely on super-nodes <cit.>. These super-nodes have serious implications for availability and integrity, as they may become targets for attack, and for privacy, as they mediate, and are in a privileged position to observe, a larger fraction of activities.

Stratified. Some of the more complex decentralized systems use a stratified design where nodes have specialized roles in terms of routing, or other functions. A paradigmatic example is the Tor network. Tor users autonomously form circuits from an open-ended set of Tor relays, in layers of entry guards, middle nodes and exit nodes. A high-integrity global list of these relays is maintained through consensus by a closed group of specialized Directory Authorities. Simultaneously, Tor hidden services are resolved through a Hidden Service Directory maintained by a simple DHT topology. We note that, on some level, Tor has also evolved to use super-nodes in its topology, and the distribution of traffic sent through Tor relays is far from uniform <cit.>. Cascades are a particular case of stratified topologies in anonymous communications, in which paths are pre-defined. The advantages and disadvantages of such a choice as opposed to free routes have been discussed in <cit.>.

§.§.§ Authority

We now consider the relation among nodes in terms of authority, and describe mechanisms to mitigate the effects of power disparity that could potentially harm the security and privacy of users.

Ad-hoc: Nodes Interact Directly. In ad-hoc designs there is no relationship of authority among nodes.
Nodes directly interact with each other without the participation of other nodes, and they do so for the benefit of the involved parties only. In terms of routing, ad-hoc requires a mesh topology where nodes do not carry traffic for other nodes. However, note that mesh topologies do not always have a ad-hoc (lack of) authority relations, such as routing based on gossip. An example of this type of system would be point-to-point communication in Briar <cit.>.For purposes of privacy, direct interaction bypasses possibly compromised nodes, but not network adversaries. As for confidentiality, communications can be encrypted between the two nodes, and can be extended to group communication using group key agreement protocols <cit.>. P2P: Nodes Assist Other Nodes. P2P designs have no central authority. Unlike ad-hoc interaction, nodes provide services and resources to other nodes, such as routing messages or storing blocks of data. Nodes have equal authority and so each node may equally compel any other node, although services and resources are usually provided according to their capacity. In other words, P2P systems self-organize and all nodes are responsible for carrying out operations for all other nodes, rather than having any pre-configured special position of authority. Since nodes are not motivated by authority to help each other, mechanisms should instead be in place to provide `incentives' for collaborative behaviour.There are clear advantages for the security and privacy properties in P2P systems. Information about peers is not centralized and interaction typically remains local to a few nodes, so it is difficult for an adversary to obtain a global view of the system. Yet, relying on peers for functionality poses an additional threat to privacy, since requests may be served by adversarial nodes. These nodes can passively collect information on other nodes or they may actively disrupt the integrity of operations by forging messages or replay attacks that are hard to detect. Furthermore, since P2P systems are usually open, without any admissions control, adversaries may purposely inject a large number of Sybil nodes, to increase their chances of a successful attack <cit.>. P2P systems are not a silver bullet for decentralization: there is no clear and definite solution to Sybil attacks in P2P networks, although such an attack can be mitigated using reputation <cit.> or trust <cit.>. Social-based: Nodes Assist Friends. These designs take advantage of pre-existing decentralized relationships, such as “friendship”.In terms of applicability of security mechanisms this approach maintains most advantages of a P2P system. It is less vulnerable to Sybil attacks as adversarial nodes can be excluded from participating in the network or may be easier to detect <cit.>, as it is harder to infiltrate a social network than a network. The downside is that, without cover traffic, a global passive adversary can discover the underlying social graph by monitoring network communications and violate privacy properties such as unobservability and unlinkability. This in turn may lead to user deanonymization <cit.>, and techniques such as perturbation of the underlying graph may not be robust enough to prevent this <cit.>.A number of systems implement social-based communication to resist Sybil attacks. For instance Drac <cit.> and Pisces <cit.> use social-networks to support routing of messages. 
X-Vine <cit.> is a mechanism that, applied to distributed hash tables, helps resist denial of service via Sybil attacks at the cost of higher latency. Tribler <cit.> uses social-based trust relations, exploiting similarity between peers to improve performance, content discovery, and downloading in file sharing; and Nasir et al.'s socially-aware DHT <cit.> reduces latency and improves the reliability of the communication.

Federated: Providers Assist Users. In federated designs, users are associated with provider nodes, which they trust and which act as authorities. Each provider is responsible only for its own users but collaborates with other providers in order to provide a service. No single provider has authority over other providers, and thus there is a "federation" of providers. Federated authorities typically use user-independent infrastructure and act as super-nodes in terms of routing. This combination of design choices typically leads to high availability as long as the provider is accessible and not compromised, but the provider is a central point of attack for violating security properties, and the provider itself can violate the privacy of nodes.

The primary weakness of federated systems is the assumption that federated service providers largely act honestly. Some techniques can relax strong trust assumptions in the provider. End-to-end encryption can maintain confidentiality even when messages pass through providers <cit.>. Computation can be obscured using secret sharing <cit.> or differential privacy-based solutions <cit.>.

Accountability: Transparency Assists Users. Transparency can be used to make an authority accountable in order to establish trust. It promotes integrity of operations by monitoring the correct behavior of nodes, e.g. a transparent log of a provider's operations in a federated system audited by users or other providers acting in lieu of their associated users. The nature of this auditor's authority is very different from the aforementioned types of authority relations, and critically relies on the non-collusion of the auditor and the audited authority, e.g., Bitcoin consensus over its blockchain using proof-of-work. Other alternatives, such as Certificate Transparency <cit.>, rely on a set of services and auditors to keep track of X.509 certificates and quickly detect potentially rogue or hacked certificate authorities. Similarly, electronic election protocols <cit.> achieve robustness through proofs of correct shuffling of votes, e.g., Helios <cit.>. Yet naïve designs of audit logs may violate the privacy of decentralized nodes by learning too much information. While decentralized accountability can have clear advantages regarding integrity, there are difficulties in maintaining privacy in any distributed log. This disadvantage can nevertheless be reduced, as shown by Zerocash <cit.>, which uses zero-knowledge proofs in order to maintain unlinkability in auditing relationships; or CONIKS <cit.>, which shows that auditing the consistency of a name-key binding through time enables verification of user public keys by the end users collectively and by other providers, while concealing the identities and the number of users at each provider using Verifiable Random Functions.

Insights.

* Decentralization encompasses a large space of designs, from decentralized ad-hoc mesh to federated super-node networks, not just peer-to-peer. These offer a variety of privacy and systems (e.g., availability, or reliability) properties.
Developer instincts may often be incorrect about their trade-offs.
* Despite being separate parts of the design, the network topology in decentralized systems often mirrors the authorities' trust relationships. However, a strict mapping between authority, infrastructure and networking topology is not necessary, and may come at the cost of harming privacy or availability.
* Centralization in terms of federated and super-nodes leads to better availability and system performance. However, it introduces single points of failure that impact availability and privacy. P2P models are by design more resilient to unstable routing and compromises, but entail higher engineering complexity.
* All networking topologies suffer under node churn, and pure P2P topologies must address this effectively to be applicable at all.
* Decentralization does not imply the absence of any infrastructure. However, the infrastructure itself needs to be decentralized by being provided by a plurality of authorities. Such infrastructure may enhance performance by offering super-nodes or dedicated high-availability operations.
* De-facto super-nodes may emerge naturally in decentralized designs, as a result of different node capabilities and efficiency in centralizing certain operations. If this occurs outside the context of careful design, those super-nodes become a single point of failure, and may lead to de facto re-centralization.
* Lack of relationships of authority implies that nodes must be willing to provide services to each other on a different basis. Designers of decentralized systems must carefully engineer such incentives, to ensure that natural (non-adversarial) selfishness does not lead to dysfunction. Monetary incentives, reputation, and reciprocity can be the basis of such incentives, but off-the-shelf such mechanisms are often central points of failure.

§.§ The Advantages of Decentralization

In this section we discuss a number of perceived intrinsic architectural advantages of decentralized designs that make them appealing compared to their centralized counterparts.

§.§.§ Flexible Trust Models

An intrinsic advantage of decentralized architectures relates to the existence of multiple independent authorities. These create a distributed trusted computing base that ensures that a subset of rogue nodes, at least up to a certain threshold, cannot compromise the overall security properties of the whole system.

Distributed Trust. Decentralized systems leverage multiple independent authorities into a security assumption: for example, all forms of threshold cryptography <cit.> assure that if some fraction of participants are honest, some security property can be guaranteed. This principle can also be applied to secure multi-party computation, distributed key generation, public randomness, and threshold-based decryption and signing. One such privacy system is Vanish <cit.>, which guarantees deletion of data after a pre-set expiry date. It illustrates how a multi-authority system implements properties otherwise impossible, or implausible, when implemented by a single entity. However, the system was in practice defeated by a Sybil attack that the security properties of its DHT did not take into account <cit.>. Reliance on multiple authorities to regain a degree of privacy has also been proposed for commercial cloud storage in case some providers are dishonest <cit.>.

No Natural Central Authority. In some settings there exists no central authority, and thus a decentralized architecture is a natural choice.
This setting has been traditionally studied in the contexts of decentralized access control, as in TAOS <cit.> and SDSI <cit.>, and `trust management', such as Keynote <cit.>. In such systems a set of distributed principals make claims about users and each other, and those claims need to be assembled and used to resolve access control decisions. Bauer et al. <cit.> show that the task of resolving access control decisions in a decentralized setting is faster than doing so centrally. Leveraging Existing Trust Networks. In some cases a decentralized infrastructure embeds or expresses a pre-existing set of trust relationships that a system may reuse to support security properties. Systems may use the underlying social trust structure to build overlay privacy-friendly social network services, as surveyed by Paul et al. <cit.>. As an example, the Frientegrity system <cit.> provides a social network platform using untrusted providers seeing only encrypted data, where users can exchange information with `friends' protected by cryptographic access control. This use of encryption to defend against the providers themselves is not the case for systems like Diaspora <cit.>, an open-source project that takes a different approach: users connect to a provider they trust – that gains full visibility of their activity – and delegate the access control on the content they share with their social circles to that provider.§.§.§ Distributed Allocation of Resources Assists with Ease of Deployment A central premise of P2P networks is that nodes contribute spare resources, and doing away with a central authority that is forced to bear the full costs (such as Google's server costs). This reduces costs and helps ease deployment by spreading these demands amongst multiple parties. Costs are lowered as spare capacity in the existing infrastructure is used, e.g., underutilized resources given by users such as the early SETI@home project <cit.> and the use of users' storage in Freenet <cit.>. In terms of availability, decentralized architectures exhibit fewer correlated failures by virtue of being distributed. As an example the Cachet system <cit.> uses a pool of untrusted peers as a storage back end of a decentralized Online Social Network. §.§.§ Resilience Against Formidable AdversariesLocation Diversity. Decentralization provides properties that are inherently difficult to centralize, such as the network location diversity needed for Tor bridges <cit.> to bypass censorship both on the network and legal levels. A number of designs take advantage of this, like Publius <cit.>, in order to resist censorship, although censorship resistance itself is a separate field with many centralized, as well as decentralized, solutions. Survivability. Decentralized architectures can be designed to survive catastrophic attempts to take them down or inflict crippling damage, in a way that centralized systems cannot resist <cit.>. This property has been used to build highly robust botnets using a peer-to-peer architecture <cit.>. Although these bot-nets are decentralized on the technical level, they of course maintain central but covert command and control (C&C). Those botnets have demonstrably been harder to take down using conventional techniques, but are also vulnerable to new threats that result from their decentralization, such as poisoning and enumeration of nodes. A further discussion of wider `Darknet' survivability is provided by Zhou et al. <cit.>.Separation of Development from Operations. 
Decentralized architectures clearly separate the authorities that provide public code – and that have no access to operational data and secrets – and those that run the code. Users and nodes, deploying software, can audit any such open source code for integrity, and chose whether to deploy it. The core development team maintains the code, that is publicly visible and auditable, but upgrading is up to independent relay operators. This model is followed by both Tor and Bitcoin. As a result, attempts to coerce the Tor development team can only have an indirect and possibly highly visible effect – rendering such attempts less effective. Similarly in Ethereum, the exploitation of a vulnerability in the DAO smart-contract, led to the core developers proposing a “hard fork”, and this fork was voluntarily adopted by the majority of the Ethereum mining node operators. Publicly Verifiable Integrity. Due to the availability of multiple independent authorities, decentralized systems can implement accountability mechanisms to publicly verify integrity. Adversaries are disincentivised to compromise nodes, by ensuring attacks have an observable effect so that cheating can ideally be discovered before it has a negative effect. Verifiable logs can be used to help enable privacy as ensuring that actions are transparent enables users to know what happened with their data, as when Pulls et al. <cit.> use decentralization to support transparent audits of personal data accesses. Auditability is also a key feature of secure electronic election systems such as the Helios system <cit.>. Such systems rely on the existence of multiple authorities in a number of ways in e-voting: threshold cryptography is used for parameter and ballot generation, with privacy enforced via threshold decryption.Insights.* Real-world relationships of trust and authority are personal, complex and localized, and rarely hierarchical or all-or-nothing. Decentralized systems offer flexible trust models that can leverage those relationships to support security and privacy properties. * When it comes to high-availability and survivability against powerful adversaries – particularly with legal authority – decentralized designs are not just best, but sometimes the only available option. Designs that allow operations to continue despite some authorities being adversarial or not available, are necessary to support these properties.* Decentralization's fundamental advantage in terms of security stems from an attacker having to compromise a set of independent authorities in order to disrupt or weaken the security properties of a system. Decentralized systems that do not offer this property may be more fragile than centralized equivalents.* Decentralized designs decouple development from operations and have a multistakeholder governance model, where node operators influence the entire system based on the software configuration they choose to deploy.* Decentralized systems can leverage public accountability to detect and exclude compromised or misbehaving authorities. Such accountability architectures may be used instead of more complex or expensive prevention techniques, but need to ensure that auditing will be effective and eventually acted upon. * Leveraging spare resources of nodes allows decentralized system to scale, and ease deployment. However, this by itself opens the door to high-churn and cannot be a substitute for robust incentives to participate as the system scales or nodes are asked to take on real costs. 
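Before turning to privacy properties, here is a toy sketch of the threshold pattern behind the "distributed trust" designs above (such as Vanish), and behind the m-of-n data "shares" discussed in the next section: Shamir secret sharing over a prime field. It is a didactic example only; the field size and parameters are arbitrary assumptions, and real systems use vetted cryptographic libraries rather than hand-rolled code.

```python
import random

# Toy Shamir secret sharing: split a secret among n authorities so that
# any m of them can reconstruct it, while fewer than m learn nothing.
P = 2**127 - 1          # a Mersenne prime used as the field modulus (illustrative)

def split(secret, n, m):
    """Split `secret` into n shares with reconstruction threshold m."""
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    shares = []
    for x in range(1, n + 1):
        # Evaluate the degree-(m-1) polynomial at x, modulo P.
        y = sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from m shares."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem).
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, n=5, m=3)   # 5 authorities, threshold 3
print(reconstruct(shares[:3]))                # any 3 shares suffice
print(reconstruct(shares[1:4]))               # a different 3 also work
```

The security assumption is exactly the one this section describes: an adversary must compromise at least m independent authorities, rather than a single trusted party, to learn the secret.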
§.§ How Does Decentralization Support Privacy?

In this section we survey the privacy properties obtained through mechanisms that are inherent to decentralized architectures. We limit ourselves to the analysis of technical properties that may be obtained in decentralized systems. We acknowledge that decentralized systems may offer both greater user privacy and autonomous control of the infrastructure. As such, they are a possible technological solution to the legally-binding, but often technologically unenforced, demands of data protection laws <cit.>, which are usually addressed by involving a central authority, the data controller <cit.>. How decentralized systems relate to the law and business models is out of the scope of this paper.

Confidentiality from Third Parties. Some designs employ a decentralized architecture on the grounds that the lack of centralized components, which have full access to user data and can surveil their actions, is beneficial to confidentiality and unobservability. Such systems may use threshold encryption <cit.> in order to trade off information confidentiality and information availability, as in the PASIS <cit.> architecture. This scheme splits the data into n "shares" and distributes them among peers in such a way that recovering m shares allows one to recover the data, but having fewer pieces provides no information. Similar solutions are provided by POTSHARDS <cit.> or Plutus <cit.>.

Confidentiality from Peers. In P2P architectures, nodes must interact with other nodes, but they want their communications or actions to remain confidential. For example, nodes need to perform a joint computation, but trust neither each other nor a third party with their data. In this case, decentralization enables them to exchange encrypted data and obtain the sought-after result without relying on any particular entity to preserve their privacy. The P4P framework <cit.> is such a system, in which zero-knowledge proofs are further integrated to protect computations against malicious users. More recent blockchain-backed systems, such as Enigma <cit.>, rely more heavily on transparency to achieve this goal. In terms of message-passing, systems that pass end-to-end encrypted messages across untrusted federated servers achieve peer confidentiality.

Anonymity. Due to the distribution of resources in decentralized networks, it is expensive for one entity to observe all actions in the network and track all activities of a user. Many systems <cit.> leverage this approach to provide anonymous communication, although the precise properties provided in terms of anonymity differ. Some decentralized systems fail to provide full anonymity but instead provide pseudonymity, which is weaker <cit.>: it allows multiple anonymous actions to be linked, providing weaker privacy, but enabling functionality such as detecting returning users and reducing the complexity of the system. For example, in Bitcoin every transaction is linked to a pseudonym and stored in the blockchain. This makes it possible to trace money flows and avoid double-spending; but on the downside, if a pseudonym is ever deanonymized (e.g. <cit.>), all actions of that person are revealed. A number of decentralized systems, ranging from mix-nets <cit.>, to DC-nets <cit.>, to Tor <cit.>, provide some degree of anonymity.

Deniability. Deniability enables a subject to safely and believably deny having originated an action, so as to shield her from responsibility associated with performing that action.
The fact that actions cannot be linked back to a user (i.e. “unlinkability” <cit.>), equips users with freedom to perform actions without fear of retaliation. For instance, in Freenet <cit.> requests are hard to link to their originator, thus users can freely search for information without revealing their preferences. Plausible deniability is crucial in facilitating anonymous and censorship-resistant publishing, and may be implemented using cryptographic techniques allowing of `repudiation'. This was the motivation behind the original Eternity service <cit.> and well-known designs such as Publius <cit.>or Tangler <cit.>.Covertness. Some systems protect even the act of participation of nodes in the decentralized network from outside observers (“unobservability”<cit.> if the items of interest is the existence of users). In addition to more well-known work like Tor pluggable transports <cit.>, the Membership Concealing Overlay Network (MCON) <cit.> leverages this to provide strong forms of covertness. All nodes in MCON only have links with trusted friends, and a complex overlay network is jointly created that allows all nodes to communicate indirectly with all nodes. As any node only connects to other locally trusted peers, the system defends against attempts to enumerate all users by malicious nodes.Insights. * The key bet of decentralized systems in terms of privacy is that a local adversary may not observe all communications, data, or actions. However, global adversaries are increasingly realistic. Thus decentralized systems that rely solely on dispersion of information to provide confidentiality are fragile.* Decentralization can harm privacy: Distributing trust and resource contribution to multiple authorities may provide adversarial nodes with extended visibility of user data and network traffic. Thus, naive decentralization designs may in fact create more, not fewer, attack points to breach privacy.* Decentralization alone cannot balance the needs for privacy, integrity and availability. It is only combined with the use of advanced cryptography that decentralized architectures obtain those properties. In particular, the reliance on others to perform actions, may naturally expose personal information to other nodes without the use of cryptography. However, naive encryption alone may not be sufficient to support the integrity of operations that are more complex than end-to-end messaging.* Decentralized networks can provide privacy properties like anonymity and even covertness. Yet, most real-world decentralized systems do not use the advanced cryptography and traffic analysis resistance necessary for that purpose as it increases design, implementation, operations and coordination costs.§.§ The Disadvantages of DecentralizationSadly, there is no free lunch in decentralization. While decentralizing has many advantages, there is no guarantee that the properties and features of centralized systems are maintained in the process. This section summarizes problems emerging when decentralizing designs. A further critique of decentralized systems, focusing on personal data, is provided by Narayanan <cit.>.§.§.§ Increased Attack SurfaceDecentralizing systems across different nodes inherently augments the number of points (attack vectors) that an adversary could use to launch an attack or to observe the users' traffic.Internal Adversaries. In centralized systems, system components can be monitored and evaluated by a trusted entity to detect malicious insiders. 
In a decentralized system it is easier to insert a malicious node undetected. A number of such attacks have been documented against decentralized systems: the predecessor attack <cit.> uncovers communication partners in many anonymous communication schemes <cit.>, or the Sybil attack which can be used to bias reputation scores <cit.> or corrupt the information exchanged in collaborative decentralized systems <cit.>. Furthermore, when messages are relayed through other nodes, e.g., to gain anonymity, their content is exposed to potential adversaries, as in Crowds <cit.> for Web transactions or in Yacy <cit.> for searching information.Traffic Analysis. Decentralization inherently implies that information will traverse a network. Even in the presence of encryption, metadata is available to external adversaries. For instance, in anonymous communications networks it has been repeatedly shown that both passive local <cit.> or (partially) global <cit.>, as well as active adversaries <cit.>, can reduce or break anonymity by looking at traffic patterns.Inconsistent Views. Decentralization typically implies that nodes have a partial, thus non-consistent, view of the network which can have an impact on integrity. These non-consistent views allow adversaries to “cheat” without being detected. For instance, in Bitcoin adversaries can perform double spending by forcing non-consistency through fast operations <cit.>, or eclipse attacks <cit.> in which the adversary gains control over all connections of a target node thus isolating her from the rest of the network. Furthermore, the lack of global information results in users not necessarily making the optimal choices with respect to optimizing their privacy, as studied both in the context of anonymous communications <cit.> and location privacy <cit.>.§.§.§ Cumbersome ManagementAn obvious problem of decentralization is that no entity has a global vision of the system, and there is no central authority to direct nodes in making optimal decisions with regard to software updates, routing, or solving consensus. This makes the availability of a decentralized network more difficult to maintain, a factor significant enough to contribute in the failure of a system, as pointed out by the Mojo Nation developers <cit.>.It is very common that nodes in a decentralized system have hugely varying capabilities (bandwidth, computation power, etc.) <cit.>, making super-nodes attractive targets <cit.>. Finally, decentralized systems need to overcome the shortcomings of underlying technologies (such as NAT <cit.>), that favor the client-server paradigm over peer-to-peer networking.Defense Difficulties. The lack of central management hinders the establishment of effective protection mechanisms. For instance, the non-consistent view of the network not only enables attacks, but also hampers the use of collaborative approaches to detect incorrect information <cit.>. Similarly, it becomes extremely difficult to prevent Sybil attacks, and defenses must either leverage local information, for example defenses based on social networks <cit.>, or collaborative approaches that combine information from several nodes <cit.>. Routing Difficulties. A straightforward consequence of the lack of centralized control is an increased complexity in routing. Nodes do not have an overview of the network and its capabilities <cit.> and consequently cannot globally optimize routing decisions <cit.>, falling back to inefficient flooding or gossiping methods in mesh topologies. 
This is made harder by highly diverse nodes <cit.>, the existence of churn <cit.> and the reliance on possibly malicious nodes <cit.>. Solutions to these problems include using complex routing algorithms to enable secure and private discovery of nodes <cit.>, or avoiding the use of a centralized directory via next-generation DHTs.The lack of centralized routing information in decentralized topologies also impacts performance as it hinders the selection of optimal routes or load balancing. We find two approaches to alleviate this problem: using local estimations to improve performance <cit.>, or providing means for users to make better decisions about routing individually <cit.>. The latter is known to be prone to attacks <cit.>.§.§.§ Lack of ReputationDecentralization is also an obstacle to the implementation of accountability and reputation mechanisms.The negative effect is amplified when privacy and anonymity mechanisms are in place, as it becomes even more difficult to identify misbehaving nodes such as Sybils <cit.>. An effect of this lack of reputation is that nodes have no incentive to behave correctly and can misbehave to obtain advantages within the system (e.g., better performance). This problem has been identified in many settings such as P2P file sharing <cit.>, multicast communication <cit.>, or reputation <cit.>. In particular, the presence of churn, which make nodes short-lived and difficult to track over time, makes the establishment of reputation to guarantee veracity a very challenging problem <cit.>, even more if privacy has to be preserved <cit.>.Poor Incentives. Without reputation, reciprocity and retaliation it is hard to establish incentive schemes for nodes to not be selfish, in particular in a privacy preserving manner. A solution to this problem is increasing transparency of actions, e.g. by having witnesses to report on malicious nodes in a privacy-preserving manner <cit.>. However, the most popular approach is the use of (anonymous) payments that incentivize good and collaborative behavior that benefits all users in the network <cit.>. In contrast, one example of negative reinforcement is the tit-for-tat strategy to encourage users to share blocks to incentivize sharing, as in BitTorrent.Insights. * Decentralized designs may prevent conventional attacks but also introduce new ones. Unless they are carefully designed, they may expose personal information to more, rather than fewer parties; and the need to perform joint computation across many authorities introduces threats to integrity. * Decentralized systems are particularly susceptible to traffic analysis, compared with centralized designs, since their distributed operations are mediated through networks and adversarial nodes that may use meta-data to compromise privacy.* Decentralized systems by nature require complex management of routing, naming and consistent state – due to the lack of a central coordinator. Conventional defences against network attacks, like denial of service, require centralization and cannot be straightforwardly applied.* Sybil attacks are the great unsolved problem of decentralized systems that allow open and dynamic participation. Solutions based on social networks rely on fragile social assumptions; admission control through identification or payment re-introduce centralization. 
Proof-of-work defences increase the cost of participation.§.§ What Is Still Centralized in Decentralized Designs?Even when systems claim to be decentralized, usually there are “hidden” centralized assumptions and parts of the design that need to be centralized to operate correctly. These are often implicit.§.§.§ Centralization of Network Information & Computations In any decentralized system routing packets across the network is a challenge for both operational and privacy reasons. Typically routing can be divided in two main task. The first task is how to find candidate nodes to relay traffic, and second task is how to select among these nodes. While as detailed in Sect. <ref>, there are many decentralized algorithms to choose the route, actually finding candidate nodes is difficult, as highlighted in Sect. <ref>.Centralized Directories. A common solution for the first problem is to assume that there exists a centralized directory that knows all network members. The most prominent example is the Domain Name System (DNS) that resolves easy-to-remember domain names to associated IP addresses in order to allow finding hosts in the largest known decentralized system: the Internet. Though distributed, this centralized service has serious security implications, e.g. for privacy <cit.> or availability <cit.>, and thus several alternatives are being proposed <cit.> and deployed <cit.>. Another example are Tor Directory authorities <cit.> that provide Tor clients with the full list of onion routers. These directories solve the discovery problem but have become a bottleneck for the scalability of the system <cit.>.How to decentralize these authorities in an efficient, privacy-preserving manner is an active area of research. Solutions are based on having multiple copies of the publicly verifiable directory kept consistent via consensus protocol and distributed via gossiping, although it risks covertness; or to use friend-of-a-friend discovery and routing <cit.>. Path Selection. Once routing alternatives are known the question remains: Which route to choose? Thus typically, a centralized server is considered that can “rank” routing options to allow for path optimization with respect to adversaries <cit.>, performance <cit.>, or with respect to users' reputation <cit.>. Such a centralized ranking approach has been shown to be vulnerable to attacks <cit.>. Typically DHTs are the possible solution, although only a few have the necessary security and privacy properties for use in decentralized systems <cit.>. Distributed Computations.A number of decentralized systems are designed with the assumption that there is a central entity that performs computations on the data collected by the nodes in the system. Paradigmatic examples of this behavior are decentralized sensor networks <cit.> where the challenge is to send decentralized measurements to a “master” node, but there exist other applications such as distributed network monitoring for intrusion detection <cit.>, anonymous surveys <cit.>, or private statistics <cit.> in which, even though nodes perform decentralized computations, interaction with a central authority is needed to produce the final result. §.§.§ Trust EstablishmentA challenge when decentralizing networks is to ensure that nodes can be trusted to perform the actions they are assigned or can authenticate themselves as the intended receiver of a message. 
§.§.§ Trust Establishment

A challenge when decentralizing networks is to ensure that nodes can be trusted to perform the actions they are assigned, or can authenticate themselves as the intended receiver of a message. Often, to avoid dealing with this problem, a set of trusted servers is implicitly assumed to exist, such as in Dissent <cit.> or the Directory Authorities in Tor. Decentralized trust establishment is still an open problem, though some of the excitement around mining in Bitcoin is precisely due to its attempt to avoid this problem and so build a `trustless' decentralized system.

Authentication. In general, certificate infrastructures such as PKI are not decentralized. Therefore, some decentralized systems rely on centralized certification authorities to authenticate nodes for secure routing <cit.>, for user authentication <cit.>, or to enrol users in the system in the context of anonymous credentials <cit.>, a privacy-preserving alternative for authentication that does not require user identification. Such centralized authorities simplify deployment and usability, but become a single point of failure, as pointed out by Lesueur et al. in <cit.>. They also introduce an imbalance of power that is unnatural for decentralized environments, since they allow a single entity to revoke peers' authentication credentials. Many decentralized designs do not address authentication (e.g. <cit.>; see <cit.> for more details), although a line of work from TAOS <cit.> and SDSI <cit.> onwards has moved in this direction <cit.>. Authentication is useful to prevent Sybil attacks, and work on decentralized and privacy-preserving authentication via threshold cryptography is one promising solution <cit.>, as is the use of zero-knowledge systems for anonymous credentials <cit.> (a minimal secret-sharing sketch is given below).

Authorization. Assuming the existence of a centralized entity is also common when it comes to storing and enforcing authorization policies, as highlighted by numerous efforts to decentralize policy management and enforcement, from SDSI <cit.> to more recent systems <cit.>. OAuth was designed to be federated in terms of authorization, but in practice only a few large providers use this standard <cit.>. So if an adversary compromises a user's single authentication method, such as a password, it can compromise the user across multiple decentralized systems. Work descending from SDSI <cit.> to limited-time authorization via pseudonyms and blind signatures presents one way forward to decentralize authorization <cit.>.

Abuse Prevention. As mentioned in Sect. <ref>, accountability is a challenge in decentralized systems. Hence, existing abuse-prevention schemes end up relying on centralized parties, often determining global reputation scores. Solutions based on blacklistable credentials (anonymous credentials for which authorization can be selectively revoked) use a centralized authority for enrollment <cit.> or to store blacklists <cit.>. Similarly, identity escrow <cit.> or revocable anonymous communication solutions <cit.>, which allow for re-identification of misbehaving users, require a centralized party that stores those identities. In practice, spam prevention in federated email systems also uses centralized lists of known spammers. Typically, these are built from pre-existing trusted social networks, and only recently have reputation systems such as AnonRep (based on homomorphic encryption and verified shuffles) allowed reputation to be maintained in a privacy-preserving and decentralized manner <cit.>.
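The threshold-cryptography direction mentioned above can be illustrated by the arithmetic at its core. The sketch below is a bare (t, n) Shamir secret sharing over a prime field: any t of n authorities can jointly reconstruct a key (or open a blacklist entry), while t - 1 colluders learn nothing. It illustrates the primitive only; the prime, the parameters, and everything around it (verifiable or proactive sharing, secure channels) are assumptions, not the design of any system cited here.

import random

P = 2 ** 127 - 1                     # a Mersenne prime as the field modulus

def share(secret, t, n):
    # random degree t-1 polynomial with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 suffice
assert reconstruct(shares[1:4]) == 123456789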
Payment Systems. In many applications of decentralized services, it could be desirable to count on a payment system to reward peers for their contributions. While many alternatives specifically aimed at peer-to-peer systems have been presented in the literature, e.g. <cit.>, they inherently rely on a centralized authority that opens accounts (the bank), and sometimes even on other authorities that can act as "arbiters" in case of dispute <cit.>, or on authorities that record transactions to help taxation of the operations run in the system, even if the transactions are anonymized <cit.>. Decentralized crypto-currencies can help ameliorate this problem.

Trusted Developer Community. All decentralized systems work by virtue of having the nodes communicate via the same protocol. Thus, the actual software can be a centralized point of failure if the protocol is flawed. If the protocol is standardized or otherwise uniformly specified, the implementation of the protocol itself may be a point of failure. Furthermore, the developers themselves could be compromised. This danger is augmented by the software monoculture prevalent in deployed systems: a bug in a popular platform can compromise a large set of nodes or authorities. One solution is to force public transparency and auditing of the integrity of the development process. Open-source development, done in public repositories, is increasingly required. Integrity is ensured via deterministic builds <cit.>, so that everybody can verify the genuine binary, and the authority to run new versions of the software remains in the hands of the operators (a toy verification sketch follows below). This approach is already followed by Tor and increasingly by Bitcoin, where the choice to deploy particular open-source code is up to miners.
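A hedged sketch of the verification step that deterministic builds enable: independent builders compile the same tagged source, publish digests of the resulting artifact, and a client accepts a binary only if enough of them agree. The quorum rule and file handling are illustrative choices, not the actual procedure of Tor or Bitcoin.

import hashlib
from collections import Counter

def digest(path):
    # stream the file so large binaries do not need to fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(binary_path, attested_digests, quorum):
    """Accept iff at least `quorum` independent builders attested our digest."""
    votes = Counter(attested_digests)
    return votes[digest(binary_path)] >= quorum

# attested = fetch_digests_from_independent_builders()  # hypothetical source
# print(verify("app-1.0.bin", attested, quorum=2))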
Insights.
* Many decentralized systems implicitly rely on centralized components to hold network information for efficient routing or for establishing trust and defending against Sybil attacks.
* Essential user-facing infrastructure, from authentication to authorization, is centralized even in decentralized systems. Developing alternatives seems to be an open problem, with no clear established design. For payments, Bitcoin has recently provided a decentralized solution, but it suffers from a number of scalability, privacy, and financial volatility problems.
* The developer community of a system is usually an implicit centralized authority, making social attacks on the developer community itself one of the largest dangers to any decentralized system.

§.§ Systematization of Existing Designs

Selected decentralized privacy systems evaluated on how they achieve decentralization, the privacy properties they provide (3rd-party confidentiality, peer confidentiality, anonymity, deniability, unobservability), and implicit centralized assumptions (centralized directories, central trust establishment, user anonymity).

System | Infrastructure | Network Topology | Authority
Tor <cit.> | Hybrid | Stratified | P2P
Mixnets <cit.> | User-independent | Super-Node | P2P
I2P <cit.> | User-based | DHT | P2P
Crowds <cit.> | User-based | Mesh | P2P
MCON <cit.> | User-based | Mesh | Social
File Sharing/Censorship Resistance:
BitTorrent <cit.> | User-based | Super-Node | P2P
Freenet <cit.> | User-based | DHT | P2P
Gnutella <cit.> | User-based | Super-Node | P2P
Publius† <cit.> | User-independent | Mesh | Federated
Eternity <cit.> | User-independent | Super-Node | Federated
Tribler <cit.> | User-based | DHT | Social
Vanish† <cit.> | User-based | DHT | P2P
Tangler <cit.> | User-independent | Super-Node | Federated
Tahoe-LAFS <cit.> | User-independent | Stratified | Federated
Cryptocurrencies:
Bitcoin <cit.> | User-based | Super-Node | Ad-hoc±
Zerocash <cit.> | User-based | Super-Node | Ad-hoc±
MojoNation <cit.> | User-based | Mesh | P2P
Ethereum <cit.> | User-based | Super-Node | Ad-hoc±
Secure Messaging:
SMTP+PGP <cit.> | User-independent | Stratified | Federated
XMPP+OTR <cit.> | User-independent | Stratified | Federated
Briar† <cit.> | User-based | Mesh | Ad-hoc
DP5† <cit.> | User-independent | Stratified | Federated
Riposte <cit.> | User-independent | Stratified | Federated
Dissent/Buddies <cit.> | User-independent | Stratified | Federated
Drac <cit.> | User-based | Mesh | Social
ShadowWalker <cit.> | User-based | DHT | P2P
Social Applications:
Diaspora <cit.> | User-based | Stratified | Federated
X-Vine <cit.> | User-based | DHT | Social
Auditable Systems:
CONIKS† <cit.> | User-independent | Stratified | Federated±
Enigma† <cit.> | User-based | Super-Node | Federated±
Certificate Transparency <cit.> | User-independent | Stratified | Federated±
Helios <cit.> | User-independent | Super-Node | Federated±
† = prototype implemented; ± = publicly auditable.

Table <ref> presents a systematic analysis of decentralized designs, clustered based on their principal goal. The columns (infrastructure, network topology, authority relations, privacy properties) closely follow the definitions of the previous subsections. We applied some level of simplification to complex systems with multiple components or multiple use-cases. The systematization focuses on the parts of each system relevant to its main use-case, as used in prototype or deployment.

Insights.
* Many systems that provide good coverage of privacy properties and decentralization (usually via DHTs) have not been widely deployed.
* Widely deployed systems are either user-independent federated systems or user-based DHT-based systems, both without advanced privacy properties.
* Hybrid and stratified systems such as Tor provide advanced privacy properties at the cost of centralized assumptions.
* The space of ad-hoc, mesh, and covert designs is under-explored.
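One way to make comparisons over such a systematization reproducible is to encode each row as a structured record and query it. The sketch below does this for a handful of rows; the boolean flags are illustrative placeholders standing in for the table's full property columns, not the paper's actual codings.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    infrastructure: str   # user-based / user-independent / hybrid
    topology: str         # mesh / DHT / super-node / stratified
    authority: str        # P2P / federated / social / ad-hoc
    deployed: bool
    anonymity: bool       # one of the evaluated privacy properties

TABLE = [
    System("Tor",        "hybrid",           "stratified", "P2P",       True,  True),
    System("BitTorrent", "user-based",       "super-node", "P2P",       True,  False),
    System("Diaspora",   "user-based",       "stratified", "federated", True,  False),
    System("Drac",       "user-based",       "mesh",       "social",    False, True),
]

# Insight 2 above, as a query: deployed systems lacking anonymity.
deployed_without_anonymity = [s.name for s in TABLE if s.deployed and not s.anonymity]
print(deployed_without_anonymity)   # ['BitTorrent', 'Diaspora']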
§ FUTURE RESEARCH LINES

§.§ Address Decentralization's Shortcomings

To build the next generation of decentralized systems, good will, slogans, and demands are not enough. What is needed is a clear research plan. A number of the designs we review consider decentralization a goal and virtue in itself and do too little to address the inherent challenge of maintaining privacy properties and deployment with high availability. In particular, we studied in Section <ref> a number of those challenges: an increased attack surface, with corrupt insiders; susceptibility to peers violating privacy; vulnerability to traffic analysis, integrity, and consistency attacks; expensive and fragile routing; potential degradation in performance; loss of central choke points to enforce security controls; peer diversity; and lack of incentives. These are serious and real threats, and not acknowledging them and confronting them head-on leads to weak systems that cannot credibly compete with centralized solutions. This is demonstrated by the failure of Ethereum to promptly address the DAO vulnerability <cit.>. Indeed, decentralization in the style of early BitTorrent simply ends up being an inefficient way to achieve redundancy and availability without a centralized authority, and with no credible privacy properties. Likewise, Bitcoin and Ethereum provide this style of decentralization with the addition of integrity, but their simplistic accountability designs harm privacy. Therefore, more research is required that looks at systems such as Tor and Bitcoin as platforms rather than purely as channels, including understanding their interfaces, performance, quality-of-service guarantees, and privacy properties as a whole system.

Availability without centralization is a key promise of decentralized systems, but it often fails when the system grows. The most important engineering challenge of those reviewed is that decentralized systems often do not scale and are inefficient in comparison to centralized systems. In practice, in a world with limited resources and investment, inefficient decentralization leads to a failure of decentralization. This problematic dynamic is built into decentralized designs: maintaining high integrity requires a majority to honestly participate in decisions. Although one could point to Bitcoin as a success, the larger the Bitcoin network of miners grows, the less it scales, as all miners need to detect and verify new blocks and transactions. Even worse, Ethereum smart contracts are executed on each node in the network. In both Bitcoin and Ethereum, as the number of nodes grows, the system gets slower. Due to this unfortunate design flaw, Bitcoin and Ethereum will face serious scaling issues without major design changes, which accountability as such does not address. We can be assured that the current generation of attempts to "re-decentralize" the Internet will fail without more research on how to scale efficiently.

Finally, there has to be a deeper acceptance that even honest users and peers in decentralized systems will have to be incentivised to participate and behave cooperatively. This is particularly true when stronger privacy protections are implemented and reputation based on repeated and iterated interactions cannot be leveraged.
In those cases, standard platforms must be developed to prevent Sybil attacks and establish privacy-preserving reputation to curtail abuse; accounting and payment mechanisms need to be devised to ensure that those that do work are rewarded, to sustain their operations. Systems that do not provide incentives for participation in the infrastructure will fall foul of the tragedy of the commons and will remain mere proofs of concept.

Even with motivated users, human fallibility must be addressed realistically. Decentralization advocates desire users to return to a `lost golden age' of self-hosting services, as in the `re-decentralize' project <cit.>. However, the popularity of services like Facebook and Gmail shows that most do not have the time or skills to host decentralized nodes unless a powerful incentive exists, such as file sharing. Worse, users may not be qualified to protect their own systems, when even the most skilled professional administrators struggle to do so. Building successful decentralized systems that do not betray the security and privacy of their users is hard, and entails much more than tacking a blockchain or P2P network onto a pre-existing problem; it also has to take into account platform security and ease of operation for users.

§.§ Develop Design and Evaluation Strategies

Systems that claim to be decentralized today often simply use the adjective in an informal manner, resulting in decentralized "snake oil", as is the case for some blockchain-based start-ups. Unlike formal security definitions, information-theoretic definitions of anonymity, and differential privacy, there are no coherent quantitative metrics to characterize decentralization. Aside from having a common definition of the privacy and security properties, decentralization engineering also requires the development of design strategies that measure both decentralization and its effect on the properties systematized earlier.

More often than not, these properties are neglected, rarely mentioned or evaluated, including the impact of decentralization on availability. Section <ref>, for instance, illustrates the variety of options in this design space. Beyond the impact of decentralization on availability, a key missing piece is a systematic means of evaluating the privacy and security properties provided by a given decentralized system. As we evidence, decentralization can support privacy in many ways (Section <ref>), as well as other properties (Section <ref>). We observe that systems are often designed with one particular privacy goal in mind, which is frequently redefined to suit the design, and system designers tend to resort to ad-hoc evaluation. A particular case in which the lack of systematic evaluation has great impact on understanding the protection provided by a decentralized system is that of compound systems (i.e., systems that combine different schemes to try to improve overall protection), or of systems deployed in environments with characteristics different from those assumed in their design. In decentralized systems, it is not guaranteed that the protection of the whole is greater than or equal to the sum of its parts. In fact, the inverse may hold: combining different decentralized systems with different assumptions may violate the properties each system guarantees by itself.
For example, while a user may assume that using BitTorrent over Tor provides anonymity for file sharing, in fact the reverse holds: Tor provides no anonymity to UDP-based systems like BitTorrent, and users can even be deanonymized by virtue of running BitTorrent <cit.>. In other words, systems do not exist in a vacuum. Their analysis and evaluation need to account for interactions with their environment and other systems.

A similar trend is observed in terms of measuring the severity of the disadvantages introduced by decentralization. Though, as we show in Section <ref>, many weaknesses arise from decentralizing, few works evaluate their implications, or they do so in a design-specific way that is difficult to extrapolate to other systems. As a result, it is extremely difficult to compare systems and find promising new directions. This slows the development of robust decentralized systems by obscuring good design decisions. For example, in many systems there is a trade-off between privacy and availability.

Further work is also required to radically simplify the deployment and management of "real-world" decentralized applications, either on larger platforms or as stand-alone distributed systems. Deployability and usable application life-cycle support are at the heart of the current centralized cloud-based `dev-ops' revolution, and have made centralized app stores and Web applications as popular as they are. Yet there are no equivalent tools or technologies to facilitate the deployment, management, and monitoring of decentralized systems, let alone their continuous updates, application life-cycle management, and telemetry. This gap negatively affects developers' productivity and makes the engineering and maintenance of decentralized systems very expensive. Building toolchains that support easy management, without introducing any central control, is largely an open research problem. Successful projects such as Tor and Bitcoin have developed best practices and running code in that space, such as open-source development and reproducible builds <cit.>, to address security concerns, and these practices may be generalized.

Key Research Questions for Decentralization.
* Are there generalized techniques to provide privacy and integrity properties for decentralized systems without damaging availability?
* Can we develop systematic techniques to evaluate decentralized systems both in isolation and when they are deployed in different environments?
* How can human users be incentivised to work in a decentralized manner?
* How do real-world deployments of decentralization lead to scalability challenges that change the desired properties and defeat decentralization?
* Can we develop a mathematical metric to define degrees of decentralization?

In the next section we will provide provisional answers to these questions to guide future research. These answers will be based on the observations built up in previous sections.
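As a provisional gesture toward the metric question above, one could treat decentralization as dispersion of control over a resource (relays, mining power, hosting) and report a normalized entropy: 1.0 when control is evenly spread, 0.0 when a single party controls everything. The sketch below is a toy proposal with made-up shares, not a validated metric.

import math

def decentralization_index(shares):
    """Normalized Shannon entropy of a distribution of control shares."""
    total = sum(shares)
    probs = [s / total for s in shares if s > 0]
    if len(probs) <= 1:
        return 0.0
    entropy = -sum(p * math.log2(p) for p in probs)
    return entropy / math.log2(len(probs))

print(decentralization_index([1] * 1000))         # ~1.0: evenly spread
print(decentralization_index([900, 25, 25, 50]))  # ~0.31: near-monopoly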
§ CONCLUSIONS: TOWARDS FULL DECENTRALIZATION

Availability, Privacy, and Integrity. Our analysis points to a fundamental trade-off between availability, privacy, and integrity in decentralized systems: a good design pattern for one is an unsafe design pattern for another. Systems use a wide variety of infrastructure, network topology, and authority relation choices (as systematized in Table <ref>). Three widely deployed decentralized systems each demonstrate a different set of design goals. Bitcoin comes with high integrity at the cost of a public ledger with little privacy. Tor routers provide high privacy at the cost of having no available or correct collective statistics to ensure the integrity of the entire system. BitTorrent provides high availability in downloading files, but fails to provide privacy to its users against powerful adversaries. We believe that, by virtue of advanced cryptographic techniques, the trade-off between privacy, availability, and integrity in decentralized systems is not pre-ordained. Unlike Bitcoin, Zerocash <cit.> combines both privacy and integrity using zero-knowledge proofs. Likewise, many academic systems, such as Drac <cit.>, tackle traffic analysis to defend privacy in a P2P network. Simply put, advanced techniques for providing everything from dummy traffic for anonymity to succinct zero-knowledge proofs are not yet part of the toolbox of many decentralized system engineers.

Interdisciplinarity. Reviewing the literature reveals that to build good, secure, privacy-preserving decentralized systems, one needs:
* Expertise in building distributed systems, as decentralized systems are by definition distributed.
* Knowledge of modern cryptography, as complex cryptographic protocols are necessary to simultaneously achieve privacy, integrity, and availability.
* An understanding of mechanism design, game theory, and sociology to motivate cooperation amongst possibly selfish actors.

The focus on social incentive structures is usually left out, and thus most decentralized systems do not gain wide real-world deployment. In general, the involvement of nodes in decentralized systems varies, and this is usually mirrored in the power allowed to authorities, as well as in inter-node relationships that reflect social behavior. Some designs assume centralized components, for better availability and performance. Others push for sheer decentralization, in pursuit of resilience to censorship and network outages. These design choices are often social or political rather than technical. Most designs, though, fall somewhere in the middle, and generally employ cryptographic techniques and rely on real-world dynamics to defend against adversarial nodes. Certainly, the way decentralization is achieved affects the privacy of the users and thus their behavior. It falls upon decentralized system designers to achieve satisfactory performance and deployability, while taking into account not just the technical but also the necessary social structure of the system.

Real-world Scalability. From our study of the literature, we have shown that a number of key functions of decentralized systems often fall back to centralized models in practice for scalability, even when unnecessary. First, network directories, key management, and naming often remain centralized. Thus, there is a need to design collective, high-integrity, and re-usable infrastructures to support directories, node discovery, and key exchange. These mechanisms need to scale up and remain decentralized, while not being open to corruption or inconsistencies. Second, reputation and abuse control often require either centralized entities, or building on pre-existing social networks in user-based infrastructure. Even advanced privacy-preserving techniques, such as anonymous blacklisting, assume that centralized services will issue and bind identities, and e-cash protocols rely on a bank to issue coins and prevent double spending.
More work is required on establishing reputation in decentralized systems and preventing abuse without resorting to central points of control. Third, it is important to make credible assumptions about the platform security and computing environment of end-users or other devices. It is too facile to rely heavily on end-user systems keeping secret keys and data, and to ignore that they are often compromised. Achieving perfect end-point security is an ambitious goal in and of itself; it is needed, but lies beyond the strict remit of building secure decentralized systems. Decentralized architectures that make compromises visible or limit their effect, and which may `heal' and recover privacy properties following hacks, should be preferred to those that fail catastrophically or silently under those conditions.

Defining Decentralization. In general, decentralized systems are networks. Yet, as shown by the difference between network topologies for routing and the relationships of authority, a decentralized network is not simply a single network, but multiple kinds of networks connected on different levels of abstraction. Worse, the overly simplified models of decentralization presented in many papers and research prototypes do not take into account the changes produced by real-world usage. As shown by BitTorrent, simple decentralized networks tend to evolve from P2P into super-node systems. In general, as a system scales there is a tendency towards distribution, but not decentralization, in order to maintain efficiency. Using network science, one can show that simple models such as random graphs, under basic mechanisms such as preferential attachment, evolve into small-world systems over time, and these systems often simply transform into a federated client-server architecture or a simple centralized distributed system (a toy simulation is sketched at the end of this section). In order to maintain decentralization as an emergent property, it appears that advanced hybrid and stratified systems, e.g. Tor, are necessary to "unnaturally" maintain decentralization and the relevant privacy properties. Yet the Tor network has many centralized technical assumptions (complete network information held by directory authorities) and social assumptions (control by a core group of developers). The key point of a real measure of decentralization should be to take these more stratified designs into account. An ideal decentralized system would remove all centralized assumptions while maintaining the needed security and privacy properties. The ultimate bet of decentralized systems is still open: is being vulnerable to a (possibly random) subset of decentralized authorities better than being vulnerable to a single centralized authority? Decentralization seems to be the result of a breakdown in trust in centralized institutions, but we do not yet understand how to build decentralized social institutions to support decentralized technical systems, despite the promise of Bitcoin to produce algorithmic monetary policy, or the promise of Ethereum to support modern civilization with scripts of dubious security properties. Decentralization is a hard problem, but the fact that it is technically amenable to advanced techniques from distributed systems and cryptography should indicate that the social questions at the heart of decentralization are not unsolvable.
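The network-science claim above, that growth with preferential attachment concentrates links on a few early nodes, can be checked with a toy simulation; the parameters below are arbitrary illustrative choices.

import random

def grow(n_nodes, links_per_node=2, seed=0):
    random.seed(seed)
    targets = [0, 1]          # endpoint list: start from a single link 0--1
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        degree[new] = 0
        for _ in range(links_per_node):
            # choosing an endpoint from the link list is proportional to
            # degree: the rich get richer
            old = random.choice(targets)
            targets += [new, old]
            degree[new] += 1
            degree[old] += 1
    return degree

deg = grow(10_000)
top10 = sorted(deg.values(), reverse=True)[:10]
share = sum(top10) / sum(deg.values())
print(f"top 10 of 10000 nodes hold {share:.1%} of all links")

Even this crude model shows a handful of de-facto super-nodes emerging from a nominally flat design, which is the drift towards centralization the text describes.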
Acknowledgements. The authors would like to thank the reviewers for insightful comments that helped improve the paper, in particular Prateek Mittal for acting as shepherd. This work is supported by the EU H2020 project NEXTLEAP (GA 688722).
http://arxiv.org/abs/1704.08065v3
{ "authors": [ "Carmela Troncoso", "Marios Isaakidis", "George Danezis", "Harry Halpin" ], "categories": [ "cs.CR", "cs.DC" ], "primary_category": "cs.CR", "published": "20170426114055", "title": "Systematizing Decentralization and Privacy: Lessons from 15 Years of Research and Deployments" }
Three-Dimensional Structure of the GMF

Department of Physics and Astronomy, University of Calgary, Calgary, Alberta, T2N 1N4, Canada
[email protected]
Dominion Radio Astrophysical Observatory, Herzberg Programs in Astronomy and Astrophysics, National Research Council Canada, PO Box 248, Penticton, BC V2A 6J9, Canada

We present Rotation Measures (RM) of the diffuse Galactic synchrotron emission from the Canadian Galactic Plane Survey (CGPS) and compare them to RMs of extragalactic sources in order to study the large-scale reversal in the Galactic magnetic field (GMF). Using Stokes Q, U and I measurements of the Galactic disk collected with the Synthesis Telescope at the Dominion Radio Astrophysical Observatory, we calculate RMs over an extended region of the sky, focusing on the low longitude range of the CGPS (ℓ=52^∘ to ℓ=72^∘). We note the similarity in the structures traced by the compact sources and the extended emission and highlight the presence of a gradient in the RM map across an approximately diagonal line, which we identify with the well-known field reversal of the Sagittarius-Carina arm. We suggest that the orientation of this reversal is a geometric effect resulting from our location within a GMF structure arising from current sheets that are not perpendicular to the Galactic plane, as is required for a strictly radial field reversal, but that have at least some component parallel to the disk. Examples of models that fit this description are the three-dimensional dynamo-based model of Gressel et al. <cit.> and a Galactic-scale Parker spiral <cit.>, although the latter may be problematic in terms of Galactic dynamics. We emphasize the importance of constructing three-dimensional models of the GMF to account for structures like the diagonal RM gradient observed in this dataset.

Three-dimensional structure of the magnetic field in the disk of the Milky Way
A. Ordog^1, J.C. Brown^1, R. Kothes^2, T.L. Landecker^2

§ INTRODUCTION

The Galactic magnetic field (GMF) is recognized as an essential constituent of the interstellar medium. The field lines in the Galactic disk are approximately aligned with the material spiral arms, and are typically modelled as logarithmic spirals. Estimates of the pitch angle vary between 0^∘ <cit.> and -30^∘ (which includes the halo field; <cit.>), with the most commonly cited value being around -11.5^∘ <cit.>. Furthermore, the pitch angle likely varies with radius <cit.>. The general consensus has the predominant direction of the large-scale field clockwise, as viewed from the North Galactic pole, with one known reversed region directed counterclockwise <cit.>. By Ampère's law, such a magnetic "shear", or more commonly "reversal", requires the presence of a current sheet at the interface between the two magnetic regions. In a galactic disk, if the direction of the field is purely a function of radius, a magnetic field reversal would imply a current sheet perpendicular to the disk, for which two-dimensional modelling is sufficient. To date, there is no satisfactory explanation of the source mechanism for a large-scale reversal <cit.>, nor a resolution to the question of how many reversals exist in the Galaxy. Some models suggest a single reversal <cit.>, while another suggests as many as one at each arm-interarm boundary <cit.>.
Alternatively, the observed reversal may not be a large-scale feature, but rather a close-up view of a more local effect <cit.>. Further complicating the problem is the fact that large-scale reversals are not observed in external galaxies <cit.>.

We examine a previously unnoted characteristic of the large-scale reversal in the GMF in Rotation Measures (RM) of the extended polarised emission data from the Canadian Galactic Plane Survey (CGPS) and discuss its implications for understanding the structure of the field reversal. In doing so, we also highlight the usefulness of this particular dataset in the context of extracting information about GMF structures.

§ THE DATA

When a linearly polarised electromagnetic wave passes through a magnetized electron gas such as the interstellar medium, its plane of polarisation rotates, an effect known as "Faraday Rotation":

Δϕ = ϕ - ϕ_∘ = λ^2 ( 0.812 ∫ n_e B · dl ) = λ^2 RM  [rad].

Here ϕ_∘ and ϕ are the polarisation angles at the source and observer respectively, λ is the wavelength, n_e is the electron density, dl is the path length increment along the line of sight (LOS) directed from the source to the observer, and B is the magnetic field in the region. The integral defines the Rotation Measure (RM), in rad m^-2 for n_e in cm^-3, B in μG and dl in pc, and can be determined for a given polarised source as the slope of ϕ versus λ^2. Positive (negative) RMs indicate an average magnetic field pointing toward (away from) the observer. RMs for many lines of sight can therefore be used to probe the GMF.

We use data from the CGPS <cit.>, collected using the Synthesis Telescope at the Dominion Radio Astrophysical Observatory (DRAO), which has four 7.5-MHz bands within a 35-MHz window centred at 1420 MHz <cit.>. Simultaneous observations of Stokes I, Q and U allow for unambiguous determination of RMs. The CGPS has the highest density of compact extragalactic (EG) source RMs in the Galactic disk to date <cit.>, with more than 1 source per square degree. However, little exploration of the CGPS extended emission (XE; diffuse synchrotron emission) RMs has been carried out. These data are presented in Figs. <ref> and <ref>.

The XE RM dataset has two possible limitations. First, while EG sources dominate the emission along their lines of sight, the XE originates from a range of depths, leading to significant depth depolarisation, whereby polarisation of light emitted at different points along the LOS (experiencing different amounts of Faraday rotation) averages out. Beyond a distance known as the Polarisation Horizon (PH; <cit.>; Kothes et al., in prep.), polarised emission from the XE is not detectable. Even within the PH, XE can be Faraday-thick if both emission and rotation occur within the same volume, resulting in different amounts of Faraday Rotation for emission produced at different distances <cit.>. This in turn can lead to nonlinearities in polarisation angle as a function of λ^2, potentially making simple RM calculations unreliable. Rotation Measure Synthesis <cit.> has been used to gain information on the Faraday depth structure of XE or other sources having multiple Faraday components, but the CGPS observations do not have the required dense sampling of λ^2 space for this technique to be employed.

Second, the synthesis array that obtained these data is inherently limited to measuring structures on angular scales smaller than about 30 arcminutes <cit.>.
Our data are less sensitive to the larger polarisation structures that are part of the diffuse Galactic emission, and complementary single-antenna data are required to incorporate this information into the images. Single-antenna data from the John A. Galt telescope are available for total and polarised intensity (PI; see Fig. <ref>, 2nd and 3rd panels), but not in the four separate bands required for RM calculations <cit.>.

We counter these concerns as follows. First, if we observe some degree of spatial structure in the RM maps obtained under the assumption of a linear relationship between the polarisation angle and the wavelength squared, then it is reasonable to assume that there is meaningful information in those RMs. They may be interpreted as characteristic RMs of the lines of sight for the given wavelength range, even if they lack detail about the full Faraday structure <cit.>. Furthermore, the PH that limits the LOS distance makes the XE RMs ideal for probing the local field structure. Although the XE RM data miss the largest angular structures, the interferometer is sensitive to high spatial frequencies, and hence detects sharp gradients across angular scales, such as that produced by a large-scale field reversal. Therefore, although we cannot be certain of where the zero levels of emission are in the Stokes Q and U maps used for calculating RMs, we can see where sharp changes in RM (and therefore in the magnetic field) occur, and we argue that this is valuable information.

§ ANALYSIS

For this analysis, we focus on the lowest 20^∘ longitude range of the CGPS (52^∘ < ℓ < 72^∘; Figs. <ref> and <ref>), where we are looking toward the inner Galaxy, into the Sagittarius-Carina arm. Here the magnetic field component orthogonal to the LOS is sufficient to produce observable polarised synchrotron emission (Fig. <ref>, 2nd and 3rd panels), and the parallel component is sufficient to produce relatively large RMs in both the EG point sources (Fig. <ref>, 1st panel) and the XE (Fig. <ref>, 2nd panel). The RMs for both the XE and EG sources are calculated using Eq. (1), as the slope of polarisation angle versus λ^2 (a minimal sketch of such a fit is given below), and any data points with a signal-to-noise ratio less than 5 for the XE RMs are discarded. The top panel of Fig. <ref> shows the total intensity map for comparison.

We note that despite the potential drawbacks outlined above, there is a significant degree of spatial structure present in the XE RM map, indicating that this is a useful dataset to study. Furthermore, we observe a striking degree of similarity between the PI map without single-antenna data (Fig. <ref>, 2nd panel) and the PI map including single-antenna data (Fig. <ref>, 3rd panel), indicating that much of the structure corresponds to the higher spatial frequencies that are observable with the DRAO Synthesis Telescope. This lends support to the reliability of interferometer-only data in this region.

§.§ Observed RM distribution

The most remarkable feature of these maps is the gradient in RM across a diagonal boundary, observable in both the EG RMs and the XE RMs. The boundary, identified by the dashed lines in Fig. <ref>, extends from around ℓ = 67^∘, b = 4^∘, to ℓ = 56^∘, b = -2^∘, above which the RMs are predominantly positive and below which they are predominantly negative. This indicates that above (below) the boundary the LOS magnetic field is directed toward (away from) us, which we interpret as being counterclockwise (clockwise), viewed from the North Galactic pole.
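As a concrete illustration of the fit just described, the sketch below recovers an RM as the slope of polarisation angle against λ^2 over four bands. The band centres and input angles are illustrative values only, not the CGPS pipeline, and the n·π ambiguity in polarisation angle and noise weighting are omitted.

import numpy as np

C = 2.998e8                                             # speed of light (m/s)
freqs_mhz = np.array([1407.2, 1414.1, 1427.7, 1434.6])  # assumed band centres
lam2 = (C / (freqs_mhz * 1e6)) ** 2                     # wavelength squared (m^2)

def rotation_measure(chi_rad, lam2=lam2):
    """Slope of polarisation angle vs lambda^2, i.e. RM in rad m^-2."""
    slope, _intercept = np.polyfit(lam2, chi_rad, 1)
    return slope

# Synthetic check: a source with RM = +120 rad m^-2 and chi_0 = 0.3 rad.
chi = 0.3 + 120.0 * lam2
print(rotation_measure(chi))                            # recovers ~120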
Comparing the EG RMs (1st panel) to the XE RMs (2nd panel), we observe that both RM tracers appear to follow a similar trend, although slightly higher-magnitude RMs appear in the EG sources than in the XE sources. This strong resemblance between RM maps derived from EG and XE data, over an extended area, has not been remarked on previously. Since the RMs agree so well in sign (and reasonably well in magnitude) we conclude that they are tracing the same magnetic field configuration, namely a large-scale field reversal.

The resemblance between the XE and EG RMs is surprising when we consider that the two sources likely probe very different spatial volumes. The assumption is that XE RMs probe the LOS only as far as the PH, which is closer than 2 kpc for 50^∘ < ℓ < 120^∘ (<cit.>; Kothes et al., in prep.). The EG RMs, on the other hand, probe the LOS out to the edge of the Galaxy. If this assumption is correct, one might expect the magnitudes of the EG RMs to be significantly larger than those of the XE RMs, depending on the field configuration. To comment on this, we examine the differences between the RMs of these two datasets by binning the data into 1^∘ longitude bins (bin sizes of 1^∘ were chosen to ensure that a statistically significant number of EG sources fall within each bin) and observing the variations between the two datasets as a function of longitude. The bottom panel of Fig. <ref> shows the binned XE and EG RMs, in which we can see the similarity in the general trends between the datasets. The EG RMs are indeed predominantly larger than their XE counterparts, but the largest difference is only around 150 rad m^-2. With an average electron density of 0.1 cm^-3 and a large-scale GMF that decreases as r^-1 from a local strength of 2 μG, this difference would accumulate over a distance of less than 1 kpc, much smaller than the difference in depths assumed to be probed by the EG and XE RMs.

§.§ Interpretation

The small differences that do exist between the magnitudes of the EG and XE RM values are likely due to depth depolarisation, which is more significant for the XE than the EG sources. One possible explanation for the smaller-than-expected difference is that the magnetic field strength and electron density actually decay quickly enough with galactocentric distance beyond a few kpc that the EG sources undergo most of their observed rotation nearby. A second possibility is that reversed regions of the magnetic field may also exist in the outer Galaxy and contribute to reducing the magnitudes of EG RMs so that they are comparable to the XE RMs that probe the nearby field. This could be verified by examining RMs of pulsars located between the XE PH and the edge of the Galaxy, but that is beyond the scope of this paper and would ideally require a much higher density of pulsars with known distances than is presently available. The known pulsars in this region are shown in the top panel of Fig. <ref>, for comparison with the EG RMs. Their RMs are consistent with the pattern seen in the EG and XE datasets, but their paucity and the uncertainties in their distances do not allow definite conclusions about the number or location of field reversals. A third possibility is that the PH near the reversal is considerably further than the assumed 2 kpc, since depth depolarisation reduces substantially in regions where the magnetic field drops to zero.

Four decades of RM observations have led to the interpretation of a magnetic field reversal between the local and Sagittarius-Carina arms.
These observations would suggest a current sheet between the two spiral arms that is perpendicular to the disk. However, our new data indicate that the field reversal is actually diagonal rather than vertical across the Galactic disk, opening up the possibility of a current sheet contained within the disk, rather than perpendicular to it. This is contrary to the large-scale analysis of the GMF by Kronberg & Newton-McGee <cit.>, who concluded that no reversals occur across the Galactic plane. However, Fig. 4 of Kronberg & Newton-McGee does show a slight deviation from the symmetric model in the present region of interest (52^∘ < ℓ < 72^∘), which we are able to see in more detail with our much higher density of EG sources per square degree. A current sheet contained within the disk is physically preferable, as there is more material in the disk to support the required current. However, a source or driving mechanism would need to exist in order for such a current to be sustained.

To further examine this diagonal reversal, we plot RMs as a function of angular displacement perpendicular to the boundary we identified between the regions of positive and negative RMs, as shown in Fig. <ref>, with XE and EG RMs averaged into 1^∘ bins. For the XE RMs the gradient has a slope of 38.3 rad m^-2 degree^-1. For an infinitesimally thin current sheet, the gradient would be a step function. Instead, our observations suggest a current "slab" of finite thickness, where the slope depends on the thickness and current density of the slab. For the EG RMs the gradient is less steep, with a slope of 27.0 rad m^-2 degree^-1. This could be attributed to the longer lines of sight for the EG sources. If the current sheet defining the boundary between opposing magnetic field directions is tilted not only in the plane of the sky projection but also along the LOS, then RMs resulting from longer lines of sight would have more latitudinal variation, which would cause the gradient to be smeared out instead of sharp. Such an effect could be present in both XE and EG datasets even in the case of a very thin current sheet.

Determining the thickness of the current slab from Ampère's law would require knowledge of the current density in addition to the magnetic field gradient. Assuming a magnetic field strength of 2 μG on either side of the current slab, with opposing directions on the two sides, and a uniform current density, the thickness of the slab multiplied by the current density would be roughly 3×10^-4 A m^-1 (see the worked estimate below). This corresponds to either a low current density or a thin slab. Observational techniques for determining the thickness of the slab would include comparison of pulsar RMs (with known distances) on and near the identified boundary to EG and XE RMs, along with a more accurate determination of the distance to the PH. Further investigations of the field reversal in this region will involve modelling the current slab with varying thickness, inclination and current density, and fitting such models to the RM datasets.
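The figure quoted above follows from the standard magnetostatic boundary condition across a current sheet; a short check of the arithmetic, a sketch assuming only the stated 2 μG fields of opposite direction on the two sides:

ΔB = 2 × 2 μG = 4 μG = 4×10^-10 T,
K = J·d = ΔB / μ_0 = (4×10^-10 T) / (4π×10^-7 T m A^-1) ≈ 3.2×10^-4 A m^-1,

where d is the slab thickness, J the (assumed uniform) current density within it, and K = J·d the equivalent sheet current per unit length. The estimate fixes only the product J·d, which is why the RM data alone cannot separate a thin, dense slab from a thick, tenuous one.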
It is clear that three-dimensional modelling of the GMF is indispensable for describing this diagonal boundary between opposing field directions. A new model that accounts for this type of structure is the hybrid dynamo model of Gressel et al. <cit.>, a three-dimensional global simulation of the disk magnetic field that allows for evolution of the system, in contrast to static solutions. This model also includes magneto-rotational instabilities, which can result in vertical undulations of an antisymmetric, azimuthal field component about the Galactic mid-plane. As Gressel et al. point out, this would lead to apparent radial field reversals viewed from near the mid-plane, which would be difficult to detect in external galaxies, particularly if the amplitude of the field undulation is small compared to the scale height of the Galactic disk, as appears to be the case for the particular model presented in Fig. 10 of Gressel et al. <cit.>. The authors note that it would be worthwhile to investigate whether such a model is consistent with all-sky RM data. Although our present analysis covers only a small segment of the sky, our observations do lend support to the possibility of convective instability-induced, undulating field reversals in the disk.

Another model describing large-scale reversals in the magnetic fields of spiral galaxies is the spiral potential model of Dobbs et al. <cit.>. In these simulations the reversals in the galactic disk are associated with large changes in the velocity field across spiral shocks, as well as changes between inward gas flows along the arms and outward radial flows in inter-arm regions. While this description predicts realistic locations for field reversals, it does not address any latitudinal variation of the reversal location.

An alternative model that could account for the observed diagonal field reversal is the Galactic Parker spiral model of Akasofu & Hakamada <cit.>, in which a dipolar magnetic field at the Galactic centre is carried outwards by a Galactic wind, analogous to the solar wind. An offset between the axes of the Galaxy and its magnetic dipole would cause a vertical oscillation of azimuthal and radial field components about the mid-plane similar to that described by Gressel et al. <cit.>. The Parker spiral provides a mathematical description that can easily fit our data (see Fig. <ref>), while also describing other features of the GMF. It explains the absence of observed field reversals in other galaxies, and predicts a decline in the pitch angle of the spiral field pattern with increasing galactocentric distance, although the latter may also be attributed to flaring of the outer disk <cit.>. The Parker model was rejected almost immediately <cit.>, but many of the bases for that rejection are no longer valid, given presently available data. The Parker spiral is a convenient mathematical tool, though it does have difficulty generating the flow of material from the Galactic centre, orthogonal to the spiral arms, that is needed to sustain the field in this configuration. Nevertheless, it is a simple model that is able to explain many observed GMF features. Considering similarities between Galactic and solar magnetohydrodynamics is likely to be informative even if the two systems are not exactly alike. Further investigations into a Parker-spiral-based model for the GMF will need to consider how the geometry changes due to sub-virial outflow, the orbital motion of the interstellar medium, and field generation through a Galactic-scale dynamo.

§ SUMMARY

We have demonstrated that RMs of diffuse synchrotron emission from within our Galaxy, as observed with the Synthesis Telescope at the DRAO, can provide useful information on large-scale magnetic field structures.
Although the XE data, which lack a single-antenna component, miss the largest angular structures, the interferometer is sensitive to high angular frequencies, and hence detects sharp changes in the image plane very well. This allowed us to examine more carefully the well-established magnetic field reversal between our local arm and the Sagittarius arm. What we found was a diagonal boundary separating positive and negative RMs in the lower longitude region of the CGPS, suggesting the presence of a diagonally oriented current sheet, and highlighting the need for three-dimensional modelling of the GMF, in contrast with two-dimensional models of the disk that can only account for strictly radial reversals. We have noted, as well, the strong resemblance between RM maps derived from EG and XE data over an extended area, which will require additional investigation, including modelling and comparison to pulsar RMs, as the similarity is not expected given the difference in length scales being probed. Future work will include the addition of broad spatial structure to the RM maps by combining the four-band CGPS data with corresponding frequency bands of the Global Magneto-Ionic Medium Survey (Wolleben et al. 2009). New observations complementary to the CGPS at higher latitudes are currently underway to study the continuation of the RM gradient above and below the Galactic disk. Understanding the observed diagonal boundary will contribute to more accurate three-dimensional models of the GMF structure.

We gratefully acknowledge L. Nicolic, T. Foster, B. Jackel, D. Knudsen, B. Gaensler and J. Dickey for enlightening conversations. We thank the anonymous referee whose comments and suggestions have improved this manuscript. The Dominion Radio Astrophysical Observatory is a National Facility operated by the National Research Council Canada. The Canadian Galactic Plane Survey is a Canadian project with international partners, and is supported by the Natural Sciences and Engineering Research Council (NSERC).
http://arxiv.org/abs/1704.08663v2
{ "authors": [ "A. Ordog", "J. C. Brown", "R. Kothes", "T. L. Landecker" ], "categories": [ "astro-ph.GA" ], "primary_category": "astro-ph.GA", "published": "20170427172038", "title": "Three-Dimensional Structure of the Magnetic Field in the Disk of the Milky Way" }
Mathematisches Institut, Universität Bonn, Endenicher Allee 60, Office 303, 53115 Bonn, Germany
[email protected]

We prove that every compact Kähler threefold X of Kodaira dimension κ = 0 or 1 has a ℚ-factorial bimeromorphic model X' with at worst terminal singularities such that for each curve C ⊂ X', the pair (X',C) admits a locally trivial algebraic approximation such that the restriction of the deformation of X' to some neighborhood of C is a trivial deformation. As an application, we prove that every compact Kähler threefold with κ = 0 or 1 has an algebraic approximation. We also point out that in order to prove the existence of algebraic approximations of a compact Kähler threefold with κ = 2, it suffices to prove that of an elliptic fibration over a surface.

Algebraic approximations of compact Kähler threefolds of Kodaira dimension 0 or 1
Hsueh-Yung Lin

§ INTRODUCTION

From the point of view of Hodge theory, compact Kähler manifolds can be considered as a natural generalization of smooth complex projective varieties. While an arbitrarily small deformation as a complex variety of a smooth complex projective variety might no longer be projective, a sufficiently small deformation of a Kähler manifold remains Kähler. The so-called Kodaira problem asks whether it is possible to obtain all compact Kähler manifolds through (arbitrarily small) deformations of projective varieties.

[Kodaira problem] Given a compact Kähler manifold X, does X always admit an (arbitrarily small) deformation to some projective variety?

In dimension 1, compact complex curves are already projective. For surfaces, Problem <ref> is known to have a positive answer, first due to Kodaira using the classification of compact complex surfaces <cit.>, then to N. Buchdahl <cit.>, who proved that any compact Kähler surface has an algebraic approximation using M. Green's density criterion (cf. Theorem <ref>). We refer to <cit.> for other positive results.

As for negative answers, C. Voisin constructed in each dimension ≥ 4 examples of compact Kähler manifolds which do not have the homotopy type of a smooth projective variety <cit.>, thus in particular answering the Kodaira problem negatively. Later on, she constructed in each even dimension ≥ 8 examples of compact Kähler manifolds all of whose smooth bimeromorphic models are homotopically obstructed to being a projective variety <cit.>.

For threefolds, the Kodaira problem remains open at present. There are nevertheless positive results concerning a bimeromorphic variant of the Kodaira problem. Let X be a compact Kähler threefold of Kodaira dimension κ = 0 or 1. There exists a ℚ-factorial bimeromorphic model X' of X with at worst terminal singularities such that X' has a locally trivial algebraic approximation. In order to prove Theorem <ref>, thanks to the minimal model program (MMP) for Kähler threefolds <cit.>, we can choose X' to be a minimal model of X, and this is what we did in most of the cases.
Geometric descriptions of these varieties X' can be obtained as an output of the abundance conjecture <cit.> applied to X', which is enough to prove the existence of a locally trivial algebraic approximation for X'. The aim of this article is to prove the following stronger version of Theorem <ref> by further exploiting the geometry of X'. We refer to Section <ref> for the terminologies used in the statement of Theorem <ref>.

Let X be a compact Kähler threefold of Kodaira dimension κ = 0 or 1. There exists a ℚ-factorial bimeromorphic model X' with at worst terminal singularities such that whenever C ⊂ X' is a curve or empty, the pair (X',C) has a locally trivial and C-locally trivial algebraic approximation.

We will also prove a result relating the type of algebraic approximation that X' has in Theorem <ref> and the algebraic approximation of X.

Let X be a compact Kähler threefold and X' a normal bimeromorphic model of X. If (X',C) has a locally trivial and C-locally trivial algebraic approximation whenever C ⊂ X' is a curve or empty, then X has an algebraic approximation. We refer to Corollary <ref> for a more general statement. Since ℚ-factorial varieties are normal by definition, putting Proposition <ref> together with Theorem <ref> yields immediately the following result.

Every compact Kähler threefold of Kodaira dimension 0 or 1 has an algebraic approximation.

As for threefolds of Kodaira dimension 2, since minimal models of such varieties are elliptic fibrations, the existence of algebraic approximations of these varieties is related to the following question.

Let f : Y → B be an elliptic fibration where Y is a compact Kähler variety and the base B is smooth and projective. Assume that the locus D ⊂ B parameterizing singular fibers of f is normal crossing. Does Y have an algebraic approximation?

We will see that a positive solution of Question <ref> will eventually solve the Kodaira problem for threefolds of Kodaira dimension 2.

If Question <ref> has a positive answer in the case where B is a surface, then every compact Kähler threefold of Kodaira dimension 2 has an algebraic approximation.

In view of <cit.> and <cit.>, it is plausible that Question <ref> has a positive answer. Work toward an answer to Question <ref> is in progress by Claudon and Höring.

The article is organized as follows. We will first introduce in Section <ref> some deformation-theoretic terminologies, including those appearing in Theorem <ref>, then prove some general results. In particular, we will prove Corollary <ref> and deduce Proposition <ref> from it. Next, we will turn to describing minimal models of a compact Kähler threefold of Kodaira dimension 0 or 1 in Section <ref>. According to these descriptions, we will choose some threefolds X and prove in Section <ref> that whenever C ⊂ X is a curve or empty, the pair (X,C) always has a C-locally trivial algebraic approximation. Based on these results, the proof of Theorem <ref> will be concluded in Section <ref>, where we also prove Proposition <ref>.

§ DEFORMATIONS

Terminologies. Let X be a complex variety. A deformation of X is a surjective flat holomorphic map π : 𝒳 → Δ containing X as a fiber. We say that a deformation π : 𝒳 → Δ is locally trivial if for every x ∈ 𝒳, there exists a neighborhood x ∈ 𝒰 ⊂ 𝒳 of x such that if U ≔ π^-1(π(x)) ∩ 𝒰, then 𝒰 is isomorphic to U × π(𝒰) over π(𝒰). In this article, a fibration is a surjective holomorphic map f : X → B with connected fibers.
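For concreteness, a standard example separating flatness from local triviality (included here purely as an illustration): consider the family of affine conics

𝒳 = { (x,y,t) ∈ ℂ^2 × Δ : xy = t },  π(x,y,t) = t.

Each fiber X_t with t ≠ 0 is a smooth hyperbola, while X_0 is the union of the two coordinate axes. The map π is flat, hence a deformation of X_0, but it is not locally trivial: no neighborhood of the node (0,0,0) in 𝒳 is isomorphic to a product over the base, since nearby fibers are smooth there while X_0 is singular.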
A deformation π : 𝒳 → Δ of X is called strongly locally trivial with respect to the fibration structure f : X → B if π has a factorization of the form π = pr_1 ∘ q, where q : 𝒳 → Δ × B is a holomorphic map and pr_1 : Δ × B → Δ is the projection onto the first factor, such that the restriction of q to X onto its image coincides with f, and that for every (t,b) ∈ Δ × B, there exist neighborhoods b ∈ U ⊂ B and t ∈ V ⊂ Δ such that q^-1(V × U) is isomorphic to q^-1({t} × U) × V over V.

Let X be a complex variety and C ⊂ X a subvariety of X. A C-locally trivial deformation of (X,C) is a deformation (𝒳,𝒞) → Δ of the pair (X,C) such that the deformation (𝒳,𝒞) → Δ restricted to some neighborhood 𝒰 ⊂ 𝒳 of 𝒞 is isomorphic to the trivial deformation (U × Δ, C × Δ) → Δ with U ≔ 𝒰 ∩ X. An algebraic approximation of the pair (X,C) is a deformation (𝒳,𝒞) → Δ of (X,C) such that there exists a sequence of points (t_i)_{i ∈ ℕ} in Δ parameterizing algebraic members and converging to o, the point which parameterizes (X,C). If X is endowed with a G-action where G is a group and C is a G-invariant subvariety, then a G-equivariant deformation of the pair (X,C) is a deformation (𝒳,𝒞) → Δ of (X,C) such that the G-action on X extends to an action on 𝒳 preserving each fiber of 𝒳 → Δ and preserving 𝒞.

Locally trivial deformations and bimeromorphic transformations. The following lemma concerns the behaviour of C-locally trivial deformations of a pair (X,C) under bimeromorphic transformations.

Let f : X → Y be a map between complex varieties and assume that there exists a subvariety C ⊂ Y such that f maps X ∖ D isomorphically onto Y ∖ C, where D ≔ f^-1(C). Then for every C-locally trivial deformation π : (𝒴,𝒞) → Δ of Y, there exists a D-locally trivial deformation (𝒳,𝒟) → Δ of the pair (X,D) together with a map F : 𝒳 → 𝒴 over Δ such that F^-1(𝒞) = 𝒟 and that the restriction of F to 𝒳 ∖ 𝒟 is an isomorphism onto 𝒴 ∖ 𝒞.

Let 𝒱 ⊂ 𝒴 be a neighborhood of 𝒞 such that there exists an isomorphism over Δ of the pairs

(𝒱, 𝒞) ≃ (U × Δ, C × Δ),

where U ≔ 𝒱 ∩ Y. So we can write

𝒴 ≃ ((𝒴 ∖ 𝒞) ⊔ (U × Δ)) / ∼,

where ∼ glues the two pieces of the union using isomorphism (<ref>). Isomorphism (<ref>) also implies that, since f maps X ∖ D isomorphically onto Y ∖ C, we have over Δ

𝒱 ∖ 𝒞 ≃ f^-1(U ∖ C) × Δ.

We define

𝒳 ≔ ((𝒴 ∖ 𝒞) ⊔ (f^-1(U) × Δ)) / ∼  and  𝒟 ≔ D × Δ ⊂ 𝒳,

where ∼ glues the two pieces of the union using isomorphism (<ref>). One easily checks that 𝒳 is Hausdorff, so that 𝒳 is a complex variety. The map π' : 𝒴 ∖ 𝒞 → Δ and the projection π″ : f^-1(U) × Δ → Δ give rise to a map π_X : (𝒳, 𝒟) → Δ which, by construction, is a D-locally trivial deformation of the pair (X,D). Finally, the restriction of f to f^-1(U) defines an obvious map F : 𝒳 → 𝒴 satisfying the property that F^-1(𝒞) = 𝒟 and that the restriction of F to 𝒳 ∖ 𝒟 is an isomorphism onto 𝒴 ∖ 𝒞.

We can also show that, given a D-locally trivial deformation (𝒳,𝒟) → Δ of the pair (X,D), there exists a C-locally trivial deformation (𝒴,𝒞) → Δ of the pair (Y,C) together with a map F : 𝒳 → 𝒴 over Δ such that F^-1(𝒞) = 𝒟 and that the restriction of F to 𝒳 ∖ 𝒟 is an isomorphism onto 𝒴 ∖ 𝒞. This can be proven by exchanging the roles of C and D in the proof of Lemma <ref>.

Let X be a compact Kähler manifold. Assume that X is bimeromorphic to a compact Kähler variety Y. After a sequence of blow-ups of X along smooth centers, we obtain a resolution

X ⟵^η Z ⟶^ν Y

of the bimeromorphic map X ⇢ Y. Let C ⊂ Y be the image of the exceptional set of ν. The following lemma shows in particular that a C-locally trivial deformation of the pair (Y,C) always induces a deformation of X.

Suppose that π : (𝒴,𝒞) → Δ is a C-locally trivial deformation of the pair (Y,C).
Then up to shrinking , the deformation π induces a deformation[cramped, row sep = 0, column sep = 20][r] [l]of (<ref>).Since ν maps ν^-1(YC) isomorphically onto YC and since (,) → is a C-locally trivial deformation of the pair (Y,C), by Lemma <ref> there exists a deformation → of Z and a map F: → overwhose restriction to the central fiber is ν : Z → Y. As η_*_Z≃_X and R^1η_* _Z = 0 since η is a composition of blow-ups along smooth centers, by <cit.> the deformation → of Z induces a deformation → of the morphism Z → X overup to shrinking . The following is an immediate consequence of Lemma <ref>. With the same notation as above, if Y has a C-locally trivial algebraic approximation, then X also has an algebraic approximation. In particular, if Y is normal and satisfies the property that for every subvariety C ⊂ Y whose irreducible components are all of codimension ≥ 2, the pair (Y,C) has a C-locally trivial algebraic approximation, then X also has an algebraic approximation.Let → be a C-locally trivial algebraic approximation of Y and let[cramped, row sep = 0, column sep = 20][r] [l]be the induced deformation of (<ref>) as in Lemma <ref>. Up to shrinkingwe can suppose that for each t ∈, the fibers _t →_t and _t →_t of the maps → and → over tare both bimeromorphic. Therefore if over a point t ∈ the variety _t is algebraic, then _t is also algebraic.For the last statement of Corollary <ref>, the normality of Y implies that each irreducible component of the image in Y of the exceptional set E of ν is of codimension ≥ 2. Thus (Y,ν(E)) has a ν(E)-locally trivial algebraic approximation by assumption. We conclude by the first part of Corollary <ref> that X has an algebraic approximation. Assume that X' satisfies the hypothesis made in the proposition. Let C(C_0 ⊔ C_1) ⊂ X' be a subvariety of dimension ≤ 1 where C_i denotes the union of the irreducible components of C of dimension i. Since C_0 = 0, a locally trivial deformation of X' induces in particular a C_0-locally trivial deformation of (X',C_0). Hence by assumption, the pair (X',C) has a C-locally trivialalgebraic approximation. It follows from the second part of Corollary <ref> that X has an algebraic approximation. G-equivariant locally trivial deformations The following lemma shows that given a G-equivariant C-locally trivial deformation (,) → of (X,C), there always exists a G-equivariant trivialization of some neighborhood of . This will imply that the quotient (/G,/G) → is a C/G-locally trivial deformation of (X/G,C/G). Let X be a smooth complex variety and G a finite group acting on X. Let C be a G-invariant subvariety of X and assume that there exists a G-equivariant deformation of π : → of X over a one-dimensional base .Assume also that there exists an open subset ⊂ and an isomorphism ≃ V × overwhere V ∩ X such that V contains C (this hypothesis holds for instance, when π induces a G-equivariant C-locally trivial deformation of (X,C)), then up to shrinking , there exist ⊂, a G-invariant neighborhoodof , and a G-equivariant isomorphism(,) ≃ (U ×, C ×)overwhere U ∩ X. In particular, π: (, ) → is a G-equivariant C-locally trivial deformation of (X,C) and the quotient (/G,/G) → is a locally trivial and C/G-locally trivial deformation of (X/G,C/G). Before proving Lemma <ref>, let us first prove a technical lemma. Let G be a finite group acting on a variety X and let π: → be a G-equivariant deformation of X over a one-dimensional base. Let ⊂ be an open subset such that there exists an isomorphism ≃ V × overwhere V ∩ X. Let ^G ⋂_g ∈ G g(). 
Then for every G-invariant relatively compact subset U ⊂ V^G ^G ∩ X, up to shrinkingthere exists a G-invariant subsetof ^G and a G-equivariant isomorphism ≃ U × over .We may assume that V^G ∅. Since ^G is open by finiteness of G, after shrinkingwe can also assume that the restriction of π to ^G is surjective and thatis isomorphic to the open unit disc B(0,1) ⊂ such that 0 parameterizes the central fiber X. Fix a generator / t of the space of constant vector fields Γ(,T_)_≃ on . For z ∈, let z / t∈Γ(,T_)_ denote the corresponding vector field. By identifying ^G with a subset of V × through the isomorphism ≃ V ×, we can define the homomorphism of Lie algebrasξ :→Γ(^G , T_^G) z↦∑_g ∈ G g^* χ(z)_| ^G,where χ (z ) is the vector field on V × which projects to z/ t inand to 0 in V. By <cit.> (see also <cit.>), there exists a local group action Φ : →^G ofon ^G inducing ξ, where ⊂×^G is a neighborhood of {0}×^G. We recall that the meaning of a local group action is the following.*For all x ∈^G, the subset ∩×{x} is connected.* Φ(0,∙) is the identity map on ^G.* Φ(gh,x) = Φ(g,Φ(h,x)) whenever it is well-defined.*The morphism of Lie algebras →Γ(^G , T_^G) induced by Φ coincides with ξ.Sincethe vector field ξ(z) is G-invariant for all z ∈ by construction, the map Φ is also G-equivariant (where G acts trivially on ). Also since G acts on ^G → in a fiber-preserving way, the projection of ξ(z) in Γ(^G, π^*T_) equals |G| · p_2^* z/ t. Hence if Φ_ denotes the local group action ondefined byΦ_ : (_×π)() →(x,b)↦ b + |G|· xthen we have the following commutative diagram. [cramped, column sep = 20] [r, "Φ"] [d]^G [d, "π"] (_×π)() [r, "Φ_"]By the relative compactness of U inside V^G, there exists > 0 such that B(0,) × U ⊂. The restriction of Φ tois isomorphic onto its image. We verify easily with the help of (<ref>) and the properties ii) and iii) that the inverse of Φ : →Φ() is Ψ : Φ()→v↦π(v)/|G|, Φ-π(v)/|G|, v.Let ΦB0,/|G|× U⊂^G. We have U ∩ X byii) and up to replacingby B(0,),we have thus by construction an isomorphism U × ∼(x,t)↦Φt/|G|, x ,over , which is moreover G-equivariant since Φ is G-equivariant.Since C is G-invariant and since the subset ^G ⋂_g ∈ G g() is a finite intersection so is an open subset, V^G ^G ∩ X is a G-invariant neighborhood of C. Let U ⊂ V^G be a G-invariant neighborhood of Y which is relatively compact in V^G. By applying Lemma <ref> toand to U, we deduce that up to shrinking , there exists a G-invariant subset ⊂^G together with a G-equivariant isomorphism U ×≃ over . As C is a G-invariant subset of U, the image ⊂ of C × under the above isomorphism is also G-invariant. This proves that the G-equivariant isomorphism ≃ U × induces a G-equivariant isomorphism of the pairs (,) ≃ (U ×, C ×), which is the main statement of the lemma.It follows by definition that π : ( , ) → is a G-equivariant C-locally trivial deformation of (X,C). Since X is smooth, up to further shrinkingwe can assume that → is a smooth deformation, so that the quotient /G → is a locally trivial deformation <cit.>. As( / G , / G) ≃(U/ G) ×,(C/ G) × over , the deformation ( / G , / G) → of the pair (X/G,C/G) is C/G-trivial.Thefollowing lemmais a special case of Lemma <ref>. Let f : X → B be a G-equivariant fibration where G is a finite group. Let [cramped, row sep = 20, column sep = 40] [r, "q"] [d, "π", swap]× Bdl_1be a G-equivariant strongly locally trivial deformation of f over a one-dimensional base . 
Suppose that C is a G-invariant subvariety of X and that f(C) is a finite set of points; then the deformation π : 𝒳 → Δ induces a G-equivariant C-locally trivial deformation (𝒳,𝒞) → Δ of the pair (X,C). Let {p_1,…,p_n} := f(C) ⊂ B. By definition, up to shrinking Δ, for each i there exists a neighborhood p_i ∈ V_i ⊂ B of p_i such that the restriction of π : 𝒳 → Δ to 𝒳_i := q^-1(Δ × V_i) is isomorphic to (𝒳_i ∩ X) × Δ over Δ. Up to shrinking the V_i's, we can assume that they are pairwise disjoint, so that 𝒱 := ⊔_{i=1}^n 𝒳_i is isomorphic to V × Δ over Δ where V := 𝒱 ∩ X. Applying Lemma <ref> to the G-equivariant deformation π : 𝒳 → Δ, the G-invariant subvariety C, and 𝒱 yields Lemma <ref>. For simplicity, Lemma <ref> is stated and proven under the assumption that dim Δ = 1, and so are Lemma <ref> and Lemma <ref>, which will be enough for the purpose of this article. All these lemmata could have been stated without assuming that dim Δ = 1. § BIMEROMORPHIC MODELS OF NON-ALGEBRAIC COMPACT KÄHLER THREEFOLDS The reader is referred to <cit.> for a survey of the minimal model program (MMP) for Kähler threefolds. Let X be a compact Kähler threefold with non-negative Kodaira dimension κ(X). By running the MMP on X, we obtain a ℚ-factorial bimeromorphic model X_min of X with at worst terminal singularities (which are isolated, since dim X = 3) whose canonical line bundle K_X_min is nef. Such a variety X_min is called a minimal model of X. By the abundance conjecture, which is known to be true for Kähler threefolds, there exists m ∈ ℤ_{>0} such that mK_X_min is base-point free and that the surjective map f : X_min → B defined by the linear system |mK_X_min| is a fibration satisfying dim B = κ(B) = κ(X). The fibration f : X_min → B is called the canonical fibration of X_min and a general fiber F of f satisfies O_F(mK_F) ≃ O_F by the adjunction formula. The aim of this section is to describe minimal models of non-algebraic compact Kähler threefolds of Kodaira dimension κ = 0 or 1. Let us start from varieties with κ = 0. Let X be a non-algebraic compact Kähler threefold with κ(X) = 0 and let X_min be a minimal model of X. Then X_min is isomorphic to a quotient X̃/G by a finite group G where X̃ is either a 3-torus or a product of a K3 surface and an elliptic curve. Since κ(X) = 0, there exists m ∈ ℤ_{>0} such that O_X_min(mK_X_min) ≃ O_X_min. Let π : X̃_min → X_min be the index 1 cover of X_min: this is a finite cyclic cover étale over X_min ∖ Sing X_min such that K_X̃_min ≃ O_X̃_min <cit.>. As X_min has at worst terminal singularities, by <cit.>, the variety X̃_min also has at worst terminal singularities. Since X is assumed to be non-algebraic, by <cit.> X̃_min is smooth. Thus by the Beauville-Bogomolov decomposition theorem <cit.>, there exists a finite étale cover X' → X̃_min such that X' is either a 3-torus or a product of a K3 surface and an elliptic curve (as X' is non-algebraic, X' cannot be a Calabi-Yau threefold); let τ : X' → X_min denote the composition of X' → X̃_min with π. The finite map τ is étale over X_min ∖ Sing X_min. Let X^∘ → X' ∖ Z → X_min ∖ Sing X_min be the Galois closure of τ_|X'∖Z where Z := τ^-1(Sing X_min) and let G := Gal(X^∘/(X_min ∖ Sing X_min)). Since Sing X_min and hence Z are finite sets of points, we have π_1(X' ∖ Z) ≃ π_1(X'). It follows that X^∘ → X' ∖ Z extends to X̃ → X', which is the finite étale cover associated to the subgroup π_1(X^∘) < π_1(X' ∖ Z) ≃ π_1(X'). The variety X̃ is still a 3-torus or a product of a K3 surface and an elliptic curve. As X̃ ∖ X^∘ is a set of isolated points, the G-action on X^∘ extends to a G-action on X̃ whose quotient is X_min. The group G constructed in the proof of Proposition <ref> acts freely outside of a finite set of points of X̃.
For quotients (S × E)/G of the product of a non-algebraic K3 surface S and an elliptic curve E, we can show that the G-action is necessarily diagonal. Let G be a group acting on S × E where S is a non-algebraic K3 surface and E is an elliptic curve. Then this G-action is the product of a G-action on S and a G-action on E. For each g ∈ G and each fiber F of the second projection p_2 : S × E → E, since h^0,1(F) < h^0,1(E), it follows that g(F) is still a fiber of p_2. So the G-action on S × E induces a G-action on E. Suppose that there exist g ∈ G and a fiber E_t of the first projection p_1 : S × E → S such that g(E_t) is not contracted by p_1; then if we vary t ∈ S, we obtain a two-dimensional covering family of curves {E'_t := p_1(g(E_t))}_{t ∈ S} on S, generically of geometric genus 1. Since algebraic equivalence coincides with linear equivalence for curves on a K3 surface and since there are only one-dimensional families of curves of geometric genus 1 in each linear system, {E'_t}_{t ∈ S} is in fact a one-dimensional family of curves, say parameterized by some proper curve T. As S is non-algebraic, the family {E'_t}_{t ∈ T} is an elliptic fibration and there exists t ∈ T such that the normalization Ẽ'_t of E'_t is ℙ^1. Let C ⊂ S be a curve such that for each p ∈ C, we have E'_p = E'_t. Since the curves g(E_p) ⊂ S × E are mutually disjoint for p ∈ C, their strict transforms in the normalization Ẽ'_t × E of E'_t × E are also disjoint from each other. It follows that [g(E_p)]^2 = 0 in H^4(Ẽ'_t × E, ℤ) and since Ẽ'_t ≃ ℙ^1, the curve g(E_p) has to be a fiber of Ẽ'_t × E → Ẽ'_t. The latter is in contradiction with the assumption that g(E_p) is not contracted by p_1. Next we turn to varieties with κ = 1. Let X be a non-algebraic compact Kähler threefold with κ(X) = 1. Let X_min be a minimal model of X and X_min → B the canonical fibration of X_min. Then X_min → B satisfies one of the following descriptions: i) If a general fiber F of X_min → B is algebraic, then F is either an abelian surface or a bielliptic surface; ii) If F is not algebraic, then F is either a K3 surface or a 2-torus, and there exists a finite Galois cover B̃ → B of B and a smooth fibration X̃ → B̃ whose fibers are all isomorphic to F, such that X̃ is bimeromorphic to X_min ×_B B̃ over B̃. Moreover, the monodromy action of π_1(B̃) on F preserves the holomorphic symplectic form. Finally, if either F is a K3 surface or X_min contains a curve which dominates B, then there exists a finite Galois base change as above such that X̃ → B̃ is isomorphic to the standard projection F × B̃ → B̃. Since X_min has only isolated singularities, a general fiber F of X_min → B is a connected smooth surface. As K_F is torsion, the classification of surfaces shows that F is either a K3 surface, an Enriques surface, a 2-torus, or a bielliptic surface. Since X, and thus X_min, is non-algebraic, if F is algebraic then by Fujiki's result <cit.> F is irregular, so F can only be an abelian surface or a bielliptic surface, which proves i). Assume that F is not algebraic; then F is either a K3 surface or a 2-torus and by <cit.>, the fibration X_min → B is isotrivial. By <cit.>, there exists some finite map B̃ → B of B and a smooth fibration X̃ → B̃ all of whose fibers are isomorphic to F, such that X̃ is bimeromorphic to X_min ×_B B̃ over B̃. Up to taking the Galois closure of B̃ → B, we can assume that B̃ → B is Galois. Since f̃ is smooth and isotrivial, the fundamental group π_1(B̃) acts on F by monodromy transformations. Since X is assumed to be non-algebraic, we have H^0(X̃, Ω_X̃^2) ≠ 0.
Hence by the global invariant cycle theorem, the π_1(B̃)-action on F is symplectic. As X̃ is Kähler, again by the global invariant cycle theorem there exists a Kähler class on F fixed by the induced monodromy action on H^2(F, ℝ). It follows that the map π_1(B̃) → Aut(F)/Aut_0(F) has finite image, where Aut_0(F) denotes the identity component of Aut(F) <cit.>. In the case where F is a K3 surface, Aut_0(F) is trivial, so π_1(B̃) acts as a finite group on F. Accordingly, after some finite base change of f̃ : X̃ → B̃, the fibration f̃ becomes trivial. Now assume that F is a 2-torus and that X_min contains a curve dominating B. After another finite base change of f̃ : X̃ → B̃ we can assume that f̃ has a section B̃ → X̃, namely f̃ is a Jacobian fibration. Recall that π_1(B̃) → Aut(F)/Aut_0(F) has finite image, so after a further finite base change of f̃ : X̃ → B̃, we can assume that the monodromy action of π_1(B̃) on H^1(F, ℤ) is trivial. As f̃ : X̃ → B̃ is a Jacobian fibration, we conclude that X̃ ≃ F × B̃ and that f̃ is isomorphic to the projection F × B̃ → B̃. As before, both in the case where F is a K3 surface and in the case of a 2-torus, up to taking the Galois closure of B̃ → B we can assume that B̃ → B is Galois. § EQUIVARIANT ALGEBRAIC APPROXIMATIONS OF PAIRS In this section, we will prove for some compact Kähler threefolds X endowed with a G-action that for every G-invariant curve C ⊂ X, there exists a G-equivariant C-locally trivial algebraic approximation of the pair (X,C). Results in Section <ref> show that the quotients X/G of these varieties cover all compact Kähler threefolds of Kodaira dimension 0 or 1 up to bimeromorphic transformations, hence will allow us to conclude the proof of Theorem <ref> in Section <ref>. Before dealing with threefolds, we start by proving analogous statements concerning the existence of a G-equivariant C-locally trivial algebraic approximation for fibrations admitting a strongly locally trivial algebraic approximation and for surfaces in the next two subsections. Fibrations admitting a strongly locally trivial algebraic approximation Let X be a non-algebraic compact Kähler variety and f : X → B a surjective map onto a curve with algebraic fibers. Suppose that X has a strongly locally trivial algebraic approximation π : 𝒳 → Δ with respect to f; then for any subvariety C ⊂ X, up to shrinking Δ the deformation π induces a C-locally trivial algebraic approximation of (X,C). If moreover there exists a finite group G acting f-equivariantly on X and on B and the algebraic approximation of X in the assumption above is G-equivariant, then the induced C-locally trivial algebraic approximation is also G-equivariant for every G-invariant subvariety C. Since X is non-algebraic and since the base and the fibers of f are algebraic, by Campana's criterion <cit.> every subvariety of X (in particular C) is contained in a finite number of fibers of f. We can thus apply Lemma <ref> to conclude. Let X be a non-algebraic compact Kähler variety and f : X → B a surjective map onto a curve. Let G be a finite group acting f-equivariantly on X and on B. Assume that a general fiber of f is an abelian variety; then for every G-invariant subvariety C ⊂ X, the pair (X,C) has a G-equivariant C-locally trivial algebraic approximation. By <cit.>, the fibration f has a G-equivariant strongly locally trivial algebraic approximation. Hence Corollary <ref> follows from Lemma <ref>. Surfaces with a finite group action First we recall some Hodge-theoretic criteria for the existence of an algebraic approximation. Let π : 𝒳 → B be a family of compact Kähler manifolds over a smooth base.
If a fiber X = π^-1(b) satisfies the property that the composition of the Kodaira-Spencer map with the cup product with some Kähler class [ω] ∈ H^1(X, Ω_X^1) and with the contraction T_X ⊗ Ω_X^1 → O_X, μ_[ω] : T_B,b → H^1(X, T_X) → H^2(X, T_X ⊗ Ω_X^1) → H^2(X, O_X), is surjective, then there exists a sequence of points in B parameterizing algebraic members which converges to b. The following is a variant of Theorem <ref> when the variety X is endowed with a finite group action. Let X be a compact Kähler manifold with an action of a finite group G. Suppose that the universal deformation space of X is smooth. If there exists a G-invariant Kähler class [ω] ∈ H^1(X, Ω_X^1) such that the composition of maps μ_[ω] : H^1(X, T_X) → H^2(X, T_X ⊗ Ω_X^1) → H^2(X, O_X), given by the cup product with [ω] followed by the contraction, is surjective, then X has a G-equivariant algebraic approximation. The following is an easy application of Theorem <ref>. Let S be a non-algebraic compact Kähler surface and G a finite group acting on S. If K_S ≃ O_S, namely if S is either a K3 surface or a 2-torus, then S has a G-equivariant algebraic approximation. Since S is a surface with trivial K_S, the universal deformation space of S is smooth. Also, we have the isomorphism T_S ≃ Ω_S^1 defined by the contraction with a fixed holomorphic symplectic form. So for a G-invariant Kähler class [ω], the map μ_[ω] defined in Theorem <ref> with 𝒳 → B replaced by the family of K3 surfaces 𝒮_U → U has the factorization μ_[ω] : H^1(S, T_S) ≃ H^1(S, Ω_S^1) → H^2(S, Ω_S^2) ≃ H^2(S, O_S), the middle arrow being the cup product with [ω]. Since [ω]^2 ≠ 0, the map μ_[ω] is non-zero. Moreover, since h^2(S, O_S) = 1, the map μ_[ω] has to be surjective. Hence Lemma <ref> is a consequence of Theorem <ref>. Lemmas <ref> and <ref> concern C-locally trivial algebraic approximations of a pair (S,C) for K-trivial surfaces. Let S be a non-algebraic 2-torus and let G be a finite group acting on S. Let C ⊂ S be a G-invariant curve. Then the pair (S,C) has a G-equivariant C-locally trivial algebraic approximation. Since S is a non-algebraic 2-torus containing a curve, it is a smooth isotrivial elliptic fibration f : S → B and the only curves of S are fibers of f. As the G-action sends curves to curves, the fibration f is G-equivariant. We thus conclude by Corollary <ref> that (S,C) admits a G-equivariant C-locally trivial algebraic approximation. Let S be a non-algebraic K3 surface and let G be a finite group acting on S. Let C ⊂ S be a G-invariant curve. Then (S,C) has a G-equivariant C-locally trivial algebraic approximation. When the algebraic dimension a(S) of S is zero, more precisely, the deformation 𝒮 → 𝒩 of S over the Noether-Lefschetz locus 𝒩 preserving the classes of each irreducible component of C, taken inside the universal deformation of S preserving the G-action, is a G-equivariant C-locally trivial algebraic approximation. First we note that since H^2(S, ℚ)^G is a sub-ℚ-Hodge structure of H^2(S, ℚ) of weight 2, if the G-action does not preserve the holomorphic symplectic form, then H^2(S, ℚ)^G is concentrated in bi-degree (1,1). As the intersection of H^1,1(S)^G with the Kähler cone 𝒦_S ⊂ H^2(S, ℝ) is non-empty, we deduce that H^2(S, ℚ)^G contains a Kähler class, which is in contradiction with the hypothesis that S is non-algebraic.
We deduce that the G-action preserves the holomorphic symplectic form of S.Since S is assumed to be non-algebraic, according to whether a(S) = 0 or 1 only two situations can happen:* a(S) = 0: every curve in S is a disjoint union of trees of smooth (-2)-curves intersecting transversally;* a(S) = 1:S is an elliptic fibration f : S → B and the G-action sends fibers to fibers.In the second situation, we can apply Corollary <ref> to get a G-equivariant C-locally trivial algebraic approximation of (S,C) as we did in the proof of Lemma <ref>. In the first situation, let us write C = ∪_i ∈ I C_i where the C_i's are irreducible components of C. Since the universal deformation space of S is smooth, its locus preserving the G-action can be identified with an open subset of H^1(S,T_S)^G. As the group action G on S is symplectic, the isomorphism T_S ≃_S^1 defined by the contraction with a fixed holomorphic symplectic form induces an isomorphism H^1(S,T_S)^G ≃ H^1(S,_S^1)^G.Under this identification, the universal deformation spaceof S preserving the G-action and the curve classes [C_i] can be identified with an open subset U of VH^1(S,_S^1)^G∩[C_i] ^⊥_i∈ Iwhere [C_i] _i∈ I denotes the linear subspace of H^1(S,_S^1) spanned by the classes [C_i] and the orthogonality is defined with respect to the cup product. Since [C_i] _i∈ I is G-invariantand since the G-action preserves the cup product, the orthogonal [C_i] ^⊥_i∈ I is also G-invariant. Therefore V =[C_i] ^⊥_i∈ I.Since S is not algebraic, the curve classes [C_i] cannot generate the whole H^1(S,_S^1), hence V0 and let v be a non-zero element in V. As C_i^2 < 0 for all i, by the Hodge index theoremv^2 >0. If [] is a Khler class, then again by the Hodge index theorem we have v ·[]0. Using the factorization (<ref>), we see again that sinceh^2(S,_S) = 1, the map μ_[] defined in Theorem <ref> with → B replaced by the G-equivariant deformation → of S over the Noether-Lefschetz locus , is surjective. Therefore by Theorem <ref>, → is an algebraic approximation of S. Since the curve classes [C_i] ∈ H^2(S,) remains of type (1,1), → inducesfor each i, a deformation (,_i) of the pair (S,C_i). It remains to show that (,∪_i ∈ I_i) →Δ is a C-locally trivial deformation. Let us decompose C = ⊔_i=1^m C'_i into its connected components. As we mentioned before,each C'_i is a tree of smooth (-2)-curves intersecting transversally. Therefore up to shrinking , if = ⊔_i=1^m '_i denotes the decomposition ofinto its connected components, then up to reordering the indices i, each fiber of '_i → is still a tree of (-2)-curves isomorphic to C'_i.Since a tree of smooth (-2)-curve on a surface can be contracted to a rational double point, there exists a bimeromorphic morphism ν : →' oversuch that for each fiber _t of →, the restriction of ν to _t is the contraction of '_i ∩_t to a rational double point <cit.>. Since fibers of '_i → are all isomorphic, the singularity type of ν('_i ∩_t) ⊂_t does not depend on t ∈. As the germs of a rational double point of a fixed type on a surface are all isomorphic, up to shrinkingthere exists a neighborhood _i ⊂' of ν('_i) such that the pair _i, ν('_i) is isomorphic overto the trivial product U_i ×, ν(C'_i) with U_i _i ∩ν(S). It follows that ',ν()→ is a ν(C)-locally trivial deformation of ν(S), ν(C), hence (, ) → is C-locally trivial by Lemma <ref>. For the sake of completeness, we conclude the present subsection by the following proposition which will not be used latter in the article. 
It is the generalization of <cit.> in the G-equivariant setting. Let S be a compact Khler surface and G a finite group acting on S. Whenever C ⊂ S is a curve or empty, the pair (S,C) has a G-equivariant C-locally trivial algebraic approximation.We may assume that S is non-algebraic. If the algebraic dimension a(S) of S is 1, then S is an elliptic fibration and we can use Corollary <ref> to conclude. If a(S) = 0, then the minimal model S' of S is either a 2-torus or a K3 surface and the map ν : S → S' is G-equivariant. By Lemma <ref>, <ref>, and <ref>, the pair (S', ν(C)) has a G-equivariant ν(C)-locally trivial algebraic approximation. Hence byLemma <ref>, (S, C) has a G-equivariant C-locally trivial algebraic approximation. K3 fibrations Let XS × B where S is a non-algebraic K3 surface and B is a smooth projective curve. Let G be a finite group actingon B andon S and let G act on X by the product action. Whenever C ⊂ X is a G-invariant curve or empty, the pair (X,C) has a G-equivariant C-locally trivial algebraic approximation. Let p_1 : S × B → S denote the first projection. As the G-action on S × B is a product action, the image C'p_1(C) is a G-invariant curve. By Lemma <ref> and <ref>, there exists a G-equivariant C'-locally trivial algebraic approximation π: (,') → of the pair (S,C'). Let ⊂ be a neighborhood of ' such that there exists an isomorphism ≃ U × over , so U × B ×≃× B over . Since U × B is a neighborhood of C and since C is G-invariant, Lemma <ref> implies that the algebraic approximation Π : × B → of X defined by the composition of π with the first projection × B → induces a C-locally trivial algebraic approximation of (X,C). 2-torus fibrationsBefore we study the existence of (C-)locally trivial algebraic approximations of a pair (X,C) in the case of 2-torus fibrations, let us first prove a statement concerning the existence of multisections of a torus fibration via strongly locally trivial perturbation. Let f : X → B be a smooth torus fibration whose total space X is compact Khler. There exists an arbitrarily small strongly locally trivial deformation f' : X' → B of f such that f' has a multisection. Moreover if f is endowed with an f-equivariant G-action for some finite group G, then one can choose the above deformation to be G-equivariant.The construction of an arbitrarily small deformation of f possessing a multi-section already appeared in <cit.>. We will recall how this deformation is constructed and prove that it is strongly locally trivial along the way.Let J → B be the Jacobian fibration associated to f andits sheaf of sections. The sheaf can be defined by the exact sequence[cramped, row sep = 5, column sep = 40] 0 [r]_[r] [r][r] 0where _ R^2g-1f_* and / ^g,g-1 R^2g-1f_* / R^g-1f_*^g_X/B.To each isomorphism class of J-torsor g : Y → B, one can associate in a biunivocal way, an element η(g) ∈H^1(B,) satisfying the property that g has a multisection if and only if η(g) is torsion (cf. <cit.>). Moreover, ifexp : VH^1(B ,) → H^1(B, ) denotes the morphism induced by the quotient →, then there exists a family[cramped, row sep = 20, column sep = 40] [r, "q"] [d, "π", swap] V × Bdl_1Vof J-torsor such that for each v ∈ V, the element in V associated to the J-torsor π^-1(v) → B is η(f) + exp(v). Concretely, the above family is constructed as follows. The mapV→H^1(B, ) v↦η(f) + exp(v),defines an elementη^V ∈V, H^1(B,)≃ H^0(V,_V) ⊗ H^1(B,) ≃ H^1(V × B, _2^*)where V, H^1(B,) denotes the space of holomorphic maps between V and H^1(B,). 
So one can find a covering ∪_i=1^nU_i = B of B by open subsets such that η^V represents a Čech 1-cocycleη^V_ij∈Γ(V × U_ij, _2^*) ≃V, Γ(U_ij, )where U_ij U_i ∩ U_j. Let us write X_if^-1(U_i) andX_ij f^-1(U_ij) for all i and j. The 1-cocycle (η^V_ij)_i,j defines the transition maps V × X_ij→ V × X_ij which are translations by η^V_ij andthe family → V × B is obtained by glueing (V × X_i→ V × U_i )_i together using thesetransition maps. Since q^-1(V × U_i) ≃ V × X_i over V for all i, the family π : → V is strongly locally trivial. If f : X → B is endowed with an f-equivariant G-action for some finite group G, then this G-action induces an action onand on . The restriction to the G-invariant subspace V^G ⊂ V of (<ref>) is a deformation of the J-torsor f : X → B preserving the equivariant G-action <cit.>. The proof that V^G contains a dense subset of points parameterizing J-torsors having a multi-section is contained in the proof of <cit.>, which we sketch now and provide necessary references for the detail. By Deligne's theorem, WH^1(B, _) is a pure Hodge structure of degree 2g and concentrated in bi-degrees (g-1,g+1), (g,g), and (g+1,g-1) <cit.>. Let W_KW ⊗ K for any field K. If F^∙ W_ denotes the Hodge filtration, then V is isomorphic to W_/F^gW_ <cit.>. Let μ : W_→ V denote the composition μ : W_ W_→ V.Using the Hodge theory we see easily that μ is surjective, so μ(W_) is dense in V. Since G is finite, we have μ(W_^G) ⊗ = μ(W_)^G ⊗ = V^G.Therefore μ(W_^G) is dense in V^G. Using the assumption that X is Khler, one can prove that the image of the G-equivariant class η_G(f) ∈ H^1_G(B,) associated to X (which is a refinement of η(f), cf. <cit.>) under the connection morphismH^1_G(B,) → H^2_G(B,_)is torsion <cit.>. It follows that there exists m ∈_>0 and v_0 ∈ V^G such that mη(f) = exp(v_0). Therefore η(f) + expv - v_0/m is torsion for each v ∈μ(W_^G), so each of the fibrations _v → B parameterized by the subset μ(W_^G) - v_0/m⊂ V^Gin the family (<ref>) has a multisection. As we saw that μ(W_^G) ⊂ V^G is dense, we conclude that the restriction of (<ref>) to V^G is a deformation of f: X → B containing a dense subset of members having a multisection. Let f: X → B be a smooth isotrivial 2-torus fibration over a smooth projective curve B. Let G be a finite group acting f-equivariantly on X and on B such that X → B coincides with the base change of X/G → B/G by B → B/G. Whenever C ⊂ X is a G-invariantcurve or empty, the pair (X/G,C/G) has a C/G-locally trivial algebraic approximation.First we assume that f does not have any multisection. In particular, the curve C is contained in a finite union of fibers of f. Using Lemma <ref> there exists an arbitrarily small strongly locally trivial, so in particular C-locally trivial, deformation of f to some fibration which has a multisection. Thus up to replacing f by this arbitrarily small deformation, we can assume that f has a multisection.Since f has a multisection, there exists a finite base change f̃ : X→B of X → B such that X≃ S ×B where S is a fiber of f and that f̃ is the second projection. After base changing with the Galois closure of B→ B/G, we can assume that B→ B/G is Galois whose Galois group is denoted by G acting on B and on S by monodromy transformations.Let C be the pre-image of C under the map X→ X, which is G-invariant by assumption. 
By Lemma <ref>, it suffices to show that the pair (X,C) has a G-equivariant C-locally trivial algebraic approximation.Since S ×B→B is isomorphic to the base change of X/G →B/G by B→ B/G, the G-action on S ×B induces a G-action on S such that the first projection p_1 : S ×B→ S is G-equivariant. As C is G-invariant, the curve C'p_1(C) is also G-invariant. By Lemma <ref> and <ref>, there exists a G-equivariant C'-locally trivial algebraic approximation (,') → of the pair (S,C'). By repeating the same argument as in the proof of Lemma <ref>, we conclude that the deformationΠ : ×B→ induces a G-equivariant C-locally trivial algebraic approximation of the pair (X,C). Non-algebraic 3-tori Let X be a non-algebraic 3-torus and G a finite group acting on X. Then for every G-invariant curve C ⊂ X, the pair (X,C) has a G-equivariant algebraic approximation.First we assume that there exists a generically injective morphism ν : C' → X from a smooth curve of geometric genus ≥ 2 to X. Since ν factorizes through C'→ J(C') j X where J(C') denotes the Jacobian associated to C', the 3-torus X contains an abelian variety of dimension ≥ 2 which isj(J(C')) ⊂ X. As X is non-algebraic, we have j(J(C')) = 2 and hence X is a smooth isotrivial fibration f : X → B in abelian surfaces. As X is assumed to be non-algebraic, the G-action on X preserves the fibers of f. Hence we can apply Corollary <ref> to conclude that (X,C) has a G-equivariant algebraic approximation. Now assume that X does not contain any curve of geometric genus ≥ 2, then C is a union of smooth elliptic curves. It follows that X is a smooth isotrivial elliptic fibration f: X → S. Moreover, the fibration f does not have any proper curve other than the fibers of f. Indeed, if such a curve C' exists, then for any fiber F of f the image of : C' × F → X defined by (x,y)x + y is an algebraic surface, so necessarily contains a curve of geometric genus ≥ 2 which is in contradiction with our assumption. Since the only curves of X are fibers of f, the curve C is a union of fibers of f. It also follows that the G-action preserves the fibers of f, so induces a G-action on S. By Lemma <ref>, there exists an arbitrarily small G-equivariant strongly locally trivial, hence C-locally trivial, deformation f' : (X',C) → B of f having a multisection. For such an X', we already saw that X' contains at least one curve of geometric genus 2, so that (X',C), and hence (X,C), have a G-equivariant C-locally trivial algebraic approximation. § ALGEBRAIC APPROXIMATIONS OF COMPACT KHLER THREEFOLDS We can now conclude the proof of Theorem <ref>.Let X be a non-algebraic compact Khler threefold and let X' be a bimeromorphic model of X for which we wish to prove that whenever C ⊂ X' is a curve or empty, the pair (X',C) has a locally trivial and C-locally trivial algebraic approximation. If the choice of X' is isomorphic to the quotient X/G of some smooth variety X by a finite group G, then first of all X/G is -factorial. To prove that (X/G,C) has a locally trivial and C-locally trivial algebraic approximation, it suffices by Lemma <ref> to prove that the pair (X,C) has a G-equivariant C-locally trivial algebraic approximation (,) → where C is the pre-image of C under the quotient map X→X/G.If (X) = 0, we choose X' to be a minimal model of X. In particular, X' is -factorial and has at worst terminal singularities. 
By Proposition <ref>, the variety X' is a quotient X̃/G by a finite group where X̃ is either a non-algebraic 3-torus or the product of a non-algebraic K3 surface and an elliptic curve. If C = ∅, since X' is minimal, the existence of a locally trivial algebraic approximation of X' results from <cit.>. If C is a curve, the existence of a G-equivariant C̃-locally trivial algebraic approximation of (X̃,C̃) is a consequence of Lemma <ref>, or of Lemma <ref> together with Lemma <ref>, according to whether X̃ is a 3-torus or the product of a non-algebraic K3 surface and an elliptic curve. If κ(X) = 1, then by Theorem <ref> there are two cases to be distinguished. If we are in the first case of Theorem <ref>, with the same notation therein we take X' = X_min, so in particular X' is ℚ-factorial with at worst terminal singularities. Since the canonical fibration X_min → B has a strongly locally trivial algebraic approximation <cit.>, we can apply Lemma <ref> and deduce that (X',C) has a locally trivial and C-locally trivial algebraic approximation for every curve C ⊂ X'. If we are in the second case of Theorem <ref>, with the same notation therein we take X' = X̃/G where G := Gal(B̃/B). By <cit.>, the variety X' has at worst terminal singularities. The existence of a G-equivariant C̃-locally trivial algebraic approximation of (X̃,C̃) is a consequence of Lemma <ref> or Lemma <ref>, according to whether X̃ → B̃ is a fibration in K3 surfaces or in 2-tori. As was mentioned in the introduction, the combination of Proposition <ref> and Theorem <ref> proves Theorem <ref>, the existence of an algebraic approximation of any compact Kähler threefold of Kodaira dimension 0 or 1. Finally we prove Proposition <ref>, which concerns threefolds of Kodaira dimension 2. As an output of the Kähler MMP for threefolds and the abundance theorem (cf. the beginning of Section <ref>), a compact Kähler threefold X with κ(X) = 2 is bimeromorphic to an elliptic fibration X' → B' with X' being normal and B' a projective surface. Let X ← Y' → X', with μ : Y' → X and ν : Y' → X', be a resolution of the bimeromorphic map X ⇢ X' where μ is bimeromorphic. Since X' is normal, there exists a subvariety C ⊂ X' of dimension at most 1 such that the restriction of ν to Y' ∖ ν^-1(C) is an isomorphism onto X' ∖ C. Accordingly, the composition f' : Y' → X' → B' is still an elliptic fibration. Let D' ⊂ B' denote the locus parameterizing singular fibers of f' and let (B,D) → (B',D') be a log-resolution of the pair (B',D'). Let ν' : Y → Y' ×_B' B be a desingularization of Y' ×_B' B. As U := Y' ×_B' (B ∖ D) → B ∖ D is smooth, and hence U is smooth, we can assume that the restriction of ν' to the Zariski open set ν'^-1(U) is an isomorphism onto U. It follows that Y → B is an elliptic fibration whose locus of singular fibers is contained in the normal crossing divisor D. Let η : Y → Y' ×_B' B → Y' → X denote the composition, which is bimeromorphic. Since both Y and X are smooth, we have η_* O_Y = O_X and R^1 η_* O_Y = 0. We can therefore apply <cit.> as in the proof of Lemma <ref> to conclude that if Question <ref> has a positive answer for the elliptic fibration Y → B, then X has an algebraic approximation by <cit.>. § ACKNOWLEDGEMENT The author is supported by the SFB/TR 45 "Periods, Moduli Spaces and Arithmetic of Algebraic Varieties" of the DFG (German Research Foundation). He would like to thank F. Gounelas, C.-J. Lai, S. Schreieder, A. Soldatenkov, and C. Voisin for questions, remarks, and general discussions on various subjects related to this work.
http://arxiv.org/abs/1704.08109v2
{ "authors": [ "Hsueh-Yung Lin" ], "categories": [ "math.AG", "math.CV" ], "primary_category": "math.AG", "published": "20170426134218", "title": "Algebraic approximations of compact Kähler threefolds of Kodaira dimension 0 or 1" }
http://arxiv.org/abs/1704.08311v2
{ "authors": [ "Artyom V. Astashenok", "Alvaro de la Cruz-Dombriz", "Sergei D. Odintsov" ], "categories": [ "gr-qc", "astro-ph.CO" ], "primary_category": "gr-qc", "published": "20170426192216", "title": "The realistic models of relativistic stars in f(R) = R + alpha R^2 gravity" }
The deceiving simplicity of problems with infinite charge distributions in electrostatics Marcin Kościelecki^1, Piotr Nieżurawski^2^1Department of Mathematical Methods in Physics, Faculty of Physics, University of Warsaw ul. Pasteura 5, 02-093 Warsaw, Poland ^2Institute of Experimental Physics, Faculty of Physics, University of Warsaw ul. Pasteura 5, 02-093 Warsaw, Poland [email protected], [email protected] December 30, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================We show that for an infinite, uniformly charged plate no well defined electric field exists in the framework of electrostatics, because it cannot be defined as a mathematically consistent limit of a solution for a finite plate. We discuss an infinite wire and an infinite stripe as examples of infinite charge distributions for which the electric field can be determined as a limit in a formal, mathematical way. We also propose a didactic framework that can help students understand subtleties related to the problems of limits in electrostatics. The framework consists of heuristic tools (claims) that help to align intuitions in the spirit of a rigorous definition of an integral. We thoroughly discuss to what degree the solution for a finite plate agrees with the traditional but unfortunately ill-defined solution for an infinite plate. Physics is a science of approximations. One can ask why the use of mathematically ill-defined formulae and objects should be forbidden if they make life simpler. In our opinion, approximations should have solid physical and mathematical foundations. § INTRODUCTION In this paper, we discuss conceptual problems related to teaching electrostatics to college students. Many exercises involve sophisticated integrating over bounded or unbounded domains. However, the problem of the existence of integrals over unbounded domains is rarely discussed. Generally, the teaching process focuses on the application of symmetry as a leading heuristic rule, but a mathematical perspective on validity and the drawbacks of such an approach are not presented, even in standard textbooks (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). The absence of such a discussion is permanent and hard to accept. Students who attend lectures have completed at least a basic calculus course and should be capable of understanding explanations related to the existence of limits, the Riemann integral over an unbounded domain and the integral in the Cauchy principal value sense. More than fifty years ago R. Shaw <cit.> expressed his frustration in the following words: Presumably not unconnected with this uncritical acceptance of arguments based on symmetry is the fact that false, or at best incomplete, arguments of this type are quite common in elementary textbooks on electricity. We will show that the symmetry heuristics in electrostatics do more harm than good and do not agree with the formal mathematical definition of limit. Even the Cauchy principal value, sometimes presented as a mathematical representation of a symmetry heuristics, does not work in the long run as it clashes with invariance under translations. 
We understand that a heuristic is necessary to frame student intuition and give a general feeling of the subject <cit.>. Attempts to “associate meaning with certain structures” in the case of the definite integral in the context of electrostatics are presented in <cit.>. However, we did not find any discussion about a “concept image” related to integration over an unbounded domain. Therefore, we propose a new leading concept for the case of charge distributions over unbounded domains. Unbounded distributions are problematic in various aspects. Here we focus on the existence of electric field integrals. However, other approaches are present in the literature. For example, the authors of <cit.> discuss asymptotic conditions of an unbounded charge distribution necessary to obtain the assumed asymptotics of the potential. We show our ideas in action discussing a few examples of unbounded charge distributions: the infinite wire, the infinite stripe, a quarter of the infinite plate, and the infinite plate. We disagree with the popular opinion that calculating the electric field of the infinite plate is the simplest and correct way to obtain the approximation of the field of a big but finite plate. Let us assume that we somehow convince a student that for a large plate, far from its edges, the field should be nearly uniform and nearly perpendicular to the plate. The student uses textbook procedures and receives the result. This approach has three significant flaws. First, the student has no idea how precise the result is. What is the error of the result? Is it 10% or 10^-6%? (For a detailed discussion see Section <ref>.) Second, this approach strengthens the conviction that the field of the infinite plate exists as – intuitively but not mathematically – the limit of the enlarging procedure. Third, from the beginning the student is exposed to dirty tricks dressed up as fundamental principles. § INTEGRALS OVER UNBOUNDED DOMAINS IN ELECTROSTATICS §.§ The didactic challenge The electric field of uniformly charged infinite objects such as an infinite wire and a plate is one of the standard topics present in introductory courses in electrostatics. Given some specific volumetric distribution of charges ρ(r⃗) confined in some finite volume (domain) V⊂ℝ^3, the electric field at point r⃗ is given by the formula: E⃗(r⃗)=k∫_V ρ(r⃗')(r⃗-r⃗')/|r⃗-r⃗'|^3 dV' where k=1/4πε_0. In the case of infinite volume, the integral of the electric field over a non-compact domain should be computed as a limit: E⃗_∞(r⃗)=lim_V→ℝ^3 k∫_V ρ(r⃗')(r⃗-r⃗')/|r⃗-r⃗'|^3 dV' The existence of the limit (<ref>) is treated as a default in textbooks. Authors of textbooks (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>) implicitly assume that integrals over unbounded domains are computable, and they focus on presenting the most effective ways to calculate the limit (<ref>) (often using Gauss's law), so the discussion has a technical and not an existential nature – for more details see Appendix <ref>. Unfortunately, a discussion about the existence of limit (<ref>) is unavoidable, even in the case of such a standard problem of electrostatics as a charged infinite plate. The absence of such a discussion is difficult to understand. One of the pessimistic explanations can be found in <cit.>. However, we optimistically believe that the authors could not find a satisfying way to explain all the subtleties to students. Indeed, comments like <cit.> (p.
181) do not help: A double integral ∫ f(x,y) dx dy over an infinite region R can be defined by taking a sequence of regions {R_n} such that, for any part of R, this part is included in all R_n for n greater than some m. If the double integral over R_n has a unique limit for all such sequences, this limit can be taken as the definition of the integral over R. Improper double integrals may be defined similarly. It appears, however, that unless the same process gives a unique value when |f(x,y)| is substituted for f(x,y) the value of the limit will depend on the shapes of the regions R_n, and consequently a non-absolutely convergent double integral has no meaning unless these are specified. However true, these thoughts are convoluted enough to present a didactic challenge. Unfortunately, the over-abundant symmetry heuristics presented as obvious in textbooks makes a detailed discussion about the existence of limit (<ref>) more difficult. The didactic challenge is solvable, but to do this the symmetry argument should not be used as a leading idea in electrostatics. A concise presentation of problems related to limit (<ref>) could involve the following steps: * Downgrade symmetry intuitions as they do not help with the nuances of calculations over unbounded domains. * Find intuitions/heuristics that help to understand the mathematical subtleties of limit (<ref>). * Check which classical problems of electrostatics can be computed directly from definition (<ref>). * Accept the fact that some problems become ill-posed when extended to an unbounded domain. * Discuss the finite domain solutions for non-extendable problems. §.§ Drawbacks of symmetry intuitions We present simple examples of how intuition built on the symmetry argument conflicts with strict mathematical definitions. We believe that the typical second-year student is capable of understanding the examples that follow. §.§.§ Limits To show that the limit lim_x→+∞ cos(x) does not exist (see also: <cit.>, p. 66), it is enough to show a counterexample – for two different sequences: x_n=2π n and y_n=π+2π n, n∈ℕ, the limit (<ref>) gives two different results: lim_n→+∞ cos(2π n)=1, lim_n→+∞ cos(π+2π n)=-1. One would get into serious trouble during a calculus exam arguing that lim_x→+∞ cos(x)=0, using the “let's take the average” or “symmetry with respect to the x-axis” argument, even if lim_n→+∞ cos(π/2+π n)=0. The truth is that not every sequence has a limit. §.§.§ Integrals Imagine one has to compute the integral of a real function f(x) over ℝ. The existence of such an integral, by definition, is related to the existence of two independent limits: ∫_-∞^+∞ f(x) dx := lim_A→-∞ lim_B→+∞ ∫_A^B f(x) dx In this spirit the integral ∫_-∞^+∞ sin(x) dx = lim_A→-∞ lim_B→+∞ ∫_A^B sin(x) dx = lim_B→+∞ (-cos(B)) - lim_A→-∞ (-cos(A)) does not exist, because neither limit of cos exists, as we have shown in (<ref>). Why cannot one use the argument of symmetry and claim that the integral (<ref>) is equal to zero because sin(x) is an odd function? The symmetry argument applied here essentially means that we treat the variables A and B as not independent – now we impose an additional constraint A=-B and we want to solve problem (<ref>) by computing: lim_B→+∞ ∫_-B^B sin(x) dx = lim_B→+∞ (-cos(B))-(-cos(-B)) = 0. However, from a mathematical point of view, the value of integral (<ref>) is equal to (<ref>) only when (<ref>) exists in the sense of (<ref>)!
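The coupling dependence can be demonstrated with a few lines of code. A minimal Python sketch (the endpoint couplings below are our own illustrative choices; nothing here is specific to electrostatics):

import math

def integral_sin(A, B):
    # exact value of the integral of sin(x) over [A, B]: cos(A) - cos(B)
    return math.cos(A) - math.cos(B)

for B in [10.0, 100.0, 1000.0, 10000.0]:
    symmetric = integral_sin(-B, B)            # coupling A = -B: identically zero
    shifted = integral_sin(-B + math.pi, B)    # coupling A = -B + pi: equals -2*cos(B)
    print(f"B={B:>8}: A=-B gives {symmetric:+.6f}, A=-B+pi gives {shifted:+.6f}")

# The A = -B column is always 0, while the A = -B + pi column keeps oscillating
# in [-2, 2], so the value assigned to the integral depends on how A and B are
# linked, i.e., the two-sided limit does not exist.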
The nuance lies in the implication: if the integral defined in (<ref>) exists, then the result does not depend on the way we link the A and B values, say A=-B^2 or A=-2B, etc. But the converse is not true. To save the symmetry argument one could abandon formal definitions and say that every integral in electrostatics should be understood in the symmetric sense: v.p.∫_-∞^+∞ f(x) dx := lim_A→+∞ ∫_-A^A f(x) dx where v.p. means the Cauchy principal value (see also: <cit.>, p. 45, example 11). Unfortunately, such an approach also clashes with the symmetry heuristic when one tries to apply it to symmetric functions such as cos(x): lim_A→+∞ ∫_-A^A cos(x) dx = lim_A→+∞ sin(A)-sin(-A) = lim_A→+∞ 2sin(A) It is easy to show, as we did for (<ref>), that the last limit in (<ref>) does not exist. The conflict also manifests itself at the level of intuitions. Physicists like the idea of translational invariance as much as symmetry. On the computational level this means that the integral (in the principal value sense as well) over an unbounded domain should not change if we shift the graph of the function by π/2, so the result for sin(x) should be the same as for cos(x). § A CONCEPTUAL FRAMEWORK FOR UNDERSTANDING ELECTROSTATICS We would like our students to possess the ability to first think about whether a problem has a solution before going into the technical nuances of finding the best shortcut for solving it. The first step should not involve a discussion about the possible symmetries of the problem, because such a discussion takes the existence of a solution for granted. We need a leading idea that focuses on the nuances of the existence of limit (<ref>) and at the same time could be accepted on the heuristic level as it relates to physical objects. Therefore we propose two equivalent claims: The property of a system should not depend on the method of dividing the system into subsystems. The property of the system should not depend on the method of constructing the system from subsystems. The above claims capture two important facts related to limit (<ref>): 1) The existence of the limit for an unbounded region means that all possible ways to fill up that region must lead to the same result. 2) If the result of (<ref>) is not independent of the choice of division into smaller parts, the limit does not exist. Claim <ref> represents a static approach to the system while Claim <ref> focuses on its dynamical aspect. Both should appeal to different mathematical and physical intuitions of our students. In light of the above claims students would be less surprised to see that the integral (<ref>) in the case of an infinite charged plate gives different values depending on the particular prescription of extending the volume V (in a two-dimensional case) to infinity. Students can check that such a field, understood as a unique solution of (<ref>), does not exist and has the same meaningless status as limit (<ref>). In the next sections we will revisit standard problems of electrostatics and use Claims <ref> and <ref> as the leading ideas. § CLASSICAL PROBLEMS OF ELECTROSTATICS REVISITED §.§ The didactic challenge, part II We aim to show that the application of Claims <ref> and <ref> can lead to interesting results or can at least provoke refreshing discussions with students. We examine the existence of the electric field for a uniformly charged infinite wire, an infinite stripe and an infinite plate by computing appropriate limits of solutions for a finite wire and a rectangle.
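Such limits can also be probed numerically before any closed-form work. The following minimal Python sketch (an illustration of ours with assumed values kλ = 1 and z = 1; it evaluates the closed-form finite-wire components derived in the next subsection) sends the endpoints to infinity in several unrelated ways; both components settle to the same values for every coupling, which is exactly the behaviour Claims <ref> and <ref> require of a well-posed problem.

import math

def finite_wire_field(a, b, z=1.0):
    # closed-form E_x and E_z of a uniformly charged wire on [a, b],
    # evaluated at the point (0, 0, z), with k*lambda set to 1
    ex = 1.0 / math.hypot(b, z) - 1.0 / math.hypot(a, z)
    ez = (b / math.hypot(b, z) - a / math.hypot(a, z)) / z
    return ex, ez

for b in [1e2, 1e4, 1e6]:
    for a in [-b, -2.0 * b, -b * b]:  # three different ways of filling up the line
        ex, ez = finite_wire_field(a, b)
        print(f"a={a:>10.0e} b={b:.0e}: E_x={ex:+.2e} E_z={ez:.6f}")

# For every coupling of a and b, E_x -> 0 and E_z -> 2 (= 2*k*lambda/z):
# the limit for the infinite wire exists.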
For linear and surface charge distributions, we use the following variants of formula (<ref>) E⃗(r⃗)=k∫_L λ(r⃗')(r⃗-r⃗')/|r⃗-r⃗'|^3 dL' where λ(r⃗) is a linear charge distribution along some finite-length curve L, and E⃗(r⃗)=k∫_S σ(r⃗')(r⃗-r⃗')/|r⃗-r⃗'|^3 dS' where σ(r⃗) is a surface charge distribution on some finite-area surface S. We do not discuss how to derive (<ref>) and (<ref>) from (<ref>) by treating the charge density in the rigorous, distributional sense. Such an approach, however preferable, would pose another didactic challenge as first-year students are not familiar with the theory of distributions. §.§ From finite to infinite straight wire First we consider a one-dimensional, straight, uniformly charged wire with linear charge density λ. We start with a wire L of finite length extending from point a to point b on the X axis. We determine the electric field at point r⃗=[0,y,z], assuming y≠0 or z≠0. Using Coulomb's law and superposing contributions from infinitesimal charge elements λ dx' at point r⃗'=[x', 0, 0] one obtains: E⃗(r⃗) = kλ∫_L (r⃗-r⃗')/|r⃗-r⃗'|^3 dx' where ∫_L (r⃗-r⃗')/|r⃗-r⃗'|^3 dx' = ê_x∫_L -x'/|r⃗-r⃗'|^3 dx' + ê_y∫_L y/|r⃗-r⃗'|^3 dx' + ê_z∫_L z/|r⃗-r⃗'|^3 dx' and r⃗-r⃗' = [-x',y,z], |r⃗-r⃗'| = √(x'^2+y^2+z^2). As the y component of the electric field is analogous to the z component, for simplicity we continue the calculation of the field at point r⃗=[0, 0,z], on the Z axis (assuming z≠0). In this case E_y=0. We calculate the x and z components of E⃗: E_x=kλ∫_a^b -x'/(x'^2+z^2)^{3/2} dx' = kλ(1/√(b^2+z^2)-1/√(a^2+z^2)) E_z=kλ∫_a^b z/(x'^2+z^2)^{3/2} dx' = kλ (1/z)(b/√(b^2+z^2)-a/√(a^2+z^2)) §.§.§ Discussion Our goal is to obtain the formula for the electric field of an infinite wire. First, we cannot assume that a solution in the sense of (<ref>) exists. Therefore, we cannot set a=-b and calculate the limit b→+∞ for E_x and E_z in (<ref>). We cannot assume only from symmetry that the field component parallel to the wire, E_x, is zero, as is usually done in approaches using Gauss's law. Only after we prove that a solution exists – which means that we have to compute the limits a→-∞ and b→+∞ independently – may any symmetry-inspired methods or other shortcuts be used; they will then give the same result. These considerations may seem superfluous, but such nuances play a crucial role in the case of the infinite plate. The results for E_x and E_z are independent of the order in which the limits a→-∞ and b→+∞ are calculated and in agreement with textbooks: lim_b→+∞ lim_a→-∞ E_x=0 lim_b→+∞ lim_a→-∞ E_z=2kλ/z §.§ From the rectangle to the infinite plate In analogy to the case of the finite wire, we start with a finite rectangle and analyse what happens if the sides of the rectangle are independently extended to infinity. It will be shown that in some cases the integral (<ref>) does not exist. §.§.§ The rectangle Let us consider a two-dimensional, uniformly charged rectangle P=[a,b]×[c,d] on the XY-plane. The choice of coordinates is shown in Fig. <ref>, where σ denotes a constant surface charge density. We determine the components of the electric field at point r⃗=[0, 0,z] on the Z axis, assuming z≠0. Details are presented in Appendix <ref>. The x-component of the electric field is equal to E_x=kσ ln[ (d+√(b^2+d^2+z^2))/(c+√(b^2+c^2+z^2)) · (c+√(a^2+c^2+z^2))/(d+√(a^2+d^2+z^2)) ] As the result for E_y can be easily obtained after a change of variables in equation (<ref>) E_y=kσ ln[ (b+√(d^2+b^2+z^2))/(a+√(d^2+a^2+z^2)) · (a+√(c^2+a^2+z^2))/(b+√(c^2+b^2+z^2)) ] we limit our considerations to E_x only.
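The closed form for E_x can be cross-checked by integrating the defining surface integral numerically. A minimal sketch in Python (assuming SciPy is available; kσ = 1 and the rectangle parameters below are arbitrary sample values):

from scipy.integrate import dblquad
import math

a, b, c, d, z = -1.0, 2.0, -0.5, 1.5, 0.7   # an arbitrary finite rectangle and height

# direct numerical integration of the x-component of the integrand over P
numeric, _ = dblquad(lambda y, x: -x / (x*x + y*y + z*z)**1.5,
                     a, b, lambda x: c, lambda x: d)

# the closed-form expression for E_x quoted above
def F(p, q):
    return math.log(q + math.sqrt(p*p + q*q + z*z))

closed_form = (F(b, d) - F(b, c)) - (F(a, d) - F(a, c))

print(numeric, closed_form)   # both values agree to quadrature accuracy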
The z-component of the electric field is equal to E_z = kσ{arctan[bd/(z√(b^2+d^2+z^2))] - arctan[bc/(z√(b^2+c^2+z^2))] - arctan[ad/(z√(a^2+d^2+z^2))] + arctan[ac/(z√(a^2+c^2+z^2))]} These results will be used in the following sections to calculate the electric field of infinite charge distributions. §.§.§ From the rectangle to the infinite stripe We extend the rectangle to the infinite stripe by setting d→+∞ and c→-∞. A discussion about the limits would be identical to the one from Section <ref>. After computing the limits independently for d and c one obtains (see Appendix <ref>) well-defined components of the field E_x stripe = kσ ln[(a^2+z^2)/(b^2+z^2)] E_y stripe = 0 E_z stripe = 2kσ{arctan[b/z] - arctan[a/z]} §.§.§ From the stripe to the infinite plate This procedure breaks down if we “extend” the infinite straight stripe to the infinite plate, calculating E_x plane = lim_b→+∞ lim_a→-∞ E_x stripe = lim_b→+∞ lim_a→-∞ kσ ln[(a^2+z^2)/(b^2+z^2)] We aim to show that such a limit does not exist, using a method similar to the case of limit (<ref>). To prove that various procedures lead to different results, let us assume that a=-ξ b where ξ is an arbitrary constant, ξ>0. Then: E_x plane = lim_b→+∞ E_x stripe = kσ ln ξ^2 It is clear that any result is obtainable. For example, if we set ξ=1 then E_x plane=0. But for ξ=e one obtains E_x plane=2kσ. Similar reasoning shows that the y-component also does not exist. To help students, we can use our claims and explain the mathematical fact of non-existence of a limit on the level of intuition: the electric field of the infinite plate depends on the way the plate is built, because different methods for extending the stripe to infinity give different results. This means that the electric field of the infinite plate does not exist. Problems with E_x plane and E_y plane do not influence the existence of the third limit, for E_z plane: E_z plane = (z/|z|) σ/(2ε_0) The last result is presented in standard textbooks as the z component of the electric field of the infinite plate; the remaining components are set to zero as a result of symmetry. However, with the help of formula (<ref>) we see that E_x plane and E_y plane can be arbitrary, so we cannot talk about the vector quantity E⃗_plane in a meaningful way, as two of its components are undefined. §.§.§ A quarter of ℝ^2 Another aspect of the asymptotics of the electric field of the rectangle from Section <ref> will be revealed if, instead of extending opposite sides, one extends the rectangle to the first quarter of the XY-plane by extending the adjacent sides. We set a=0, c=0, b→+∞ and d→+∞. Then the limit of the argument of the logarithm in equation (<ref>) for E_x equals zero: lim_b→+∞ lim_d→+∞ [ (d+√(b^2+d^2+z^2))/√(b^2+z^2) · √(z^2)/(d+√(d^2+z^2)) ] = 0 Thus one has E_x ℝ_+^2 = lim_b→+∞ lim_d→+∞ E_x = -∞ One obtains the same result for E_y ℝ_+^2 in equation (<ref>) by calculating the same limit. Once more two components of the electric field are undefined. The z-component of the electric field is equal to E_z ℝ_+^2 = (z/|z|) σ/(8ε_0) which is a quarter of the standard solution for the z-component of the field from an infinite plate. One could try to build the solution for an infinite plate out of four such quarters. Unfortunately, the vector E⃗_ℝ_+^2 is undefined, and the existence of a well-defined system made from four undefined subsystems cannot be accepted in either a mathematical or an intuitive sense. We showed that the solution for the infinite wire exists, but there is no solution for the infinite plate. We did not find such a discussion in any textbook. For example, in <cit.> (problem 33, p.
1014) students are encouraged only to calculate the field of a half of an infinite wire. This result could be used to verify the existence of a solution for the infinite wire. The next, natural step would be to calculate the field of a half of an infinite plate. That would necessarily lead to a discussion of the existence of a solution. § HOW IMPORTANT ARE FINITE SIZE AND ASYMMETRY Although the problem of the existence of a solution for an infinite plate is fundamental, it may be treated as just another academic curiosity. A more practical question is: how much does the field of a finite plate differ from the widely used, standard textbook values: σ/2ε_0 for the perpendicular component and zero for the parallel component? The results (<ref>), (<ref>), and (<ref>) for a uniformly charged rectangle can be used to analyse the ratios E_z/(σ/2ε_0) and E_x/E_z. If there is good agreement, one expects the first ratio to be approximately equal to 1, and the second to 0. As we show in the following simple examples, for a wide range of parameter values the ratio E_z/(σ/2ε_0) is around 0.95, as expected. However, the ratio E_x/E_z can reach any value. It is clear that the parallel field component cannot be neglected, especially in calculations in which all components of the electric field E⃗ are important.We demonstrate the behaviour of these ratios in two simple cases:(a) An extending stripe. The field is calculated at the point (0, 0,z), where z>0 (Fig. <ref>). To be at a reasonable distance from the edges, we set the width of the rectangle to be 20 times larger than the distance z. Thus, we set three sides at d=b=10z and c=-10z. The length of the rectangle, i.e. the position of the fourth side, we relate to the asymmetry parameter ξ>0 by setting a=-10zξ. For example, for ξ=1 a square is obtained. The dependences of E_z/(σ/2ε_0) and E_x/E_z on ξ in this case are shown in Fig. <ref>; note that E_y=0. It is clear that E_x cannot be neglected; it is a significant component of the electric field: |E_x/E_z|≳20% for ξ smaller than 0.5 or greater than 3.It is worthwhile to comment on the asymptotic behaviour: there are finite, non-zero limits of E_z and E_x as ξ→∞ (this describes a stripe that is infinitely long on one side, here – on the negative part of the X axis).(b) An extending square. The field is calculated at the point (0, 0,z), where z>0 (Fig. <ref>). To be at a reasonable distance from the edges, we set the distance to the top and right edges of the rectangle to be 10 times the distance z by setting d=b=10z. Both the length and the width of the rectangle we relate to the asymmetry parameter ξ>0 by setting a=c=-10zξ. In this case, two sides of the resulting square “move away” as the asymmetry parameter ξ increases. The dependences of E_z/(σ/2ε_0) and E_x/E_z on ξ in this case are shown in Fig. <ref>. It should be noted that E_y=E_x. For ξ>2 or ξ<0.5, the E_x component of the field is greater than around 20% of E_z. For ξ>200, the E_x component of the field alone is greater than E_z; in fact, already for ξ>30 the field component parallel to the plate, √(E_x^2+E_y^2)=√(2)|E_x|, is greater than E_z.The asymptotic behaviour is different from that of the extending stripe. Only the perpendicular component, E_z, is bounded. The parallel component is unbounded, lim_ξ→∞E_x=∞ and lim_ξ→∞E_y=∞, as in the case discussed in section <ref>.For completeness we show how the field E_z above the centre of the extending square varies. The field is calculated at the point (0, 0,z), where z>0.
To be above the centre of the square, we set b=d=η z and a=c=-η z, where η>0. Thus, the length of a side of the square is equal to 2η z. The dependence of E_z/(σ/2ε_0) on the ratio η in this case is shown in Fig. <ref>. It should be noted that here E_y=E_x=0. If η=1, which means that the length of a side of the square is equal to 2z, the z-component of the field is only around 35% of σ/2ε_0. The field magnitude reaches 95% of σ/2ε_0 for η=20 (the length of a side of the square is equal to 40z). The field at a distance of 5 cm above the centre of a square with a side of length 60 cm (η=6) would be equal to around 85% of σ/2ε_0.§ CONCLUSIONS We showed that for an infinite, uniformly charged plate no well-defined electric field exists in the framework of electrostatics. We propose heuristic tools (the claims) that help to align intuitions with the rigorous definition of an integral. We want students to first consider the existence of the solution. We demonstrated that, unfortunately, some classical problems present in textbooks cannot be defined in a meaningful way – it is hard to talk about an electric field when only one component of the vector quantity is well-defined. Such problems seem to be very simple, but their simplicity is deceptive.The good news is that a discussion of the applicability of finite-plate solutions to the infinite-plate problem is relatively simple. The transition from a rectangle to an infinite plate can lead through an infinite stripe or a quarter of ℝ^2 and helps to understand where the solution ceases to exist. As we showed, a more rigorous discussion during classes is possible. Moreover, it may be interesting for students as a working example of the advantages of taking a closer look at the definitions of mathematical objects. The didactic challenge can be overcome.The authors would like to thank Kazimierz Napirkowski, Andrzej Majhofer and Robin & Tad Krauze for fruitful discussions and valuable comments. Dobbs E. R. Dobbs. Basic Electromagnetism. Springer-Science+Business Media, B.V., 1993.Feynman2013 Richard P Feynman, Robert B Leighton, and Matthew Sands. The Feynman Lectures on Physics, Desktop Edition Volume I, volume 1. Basic Books, 2013.Griffiths2013electrodynamics David Jeffrey Griffiths. Introduction to Electrodynamics, fourth edition. Pearson, 2013.halliday2014fundamentals David Halliday, Robert Resnick, and Jearl Walker. Fundamentals of Physics Extended, 10th edition, volume 1. John Wiley & Sons, 2014.halliday2014instructors David Halliday, Robert Resnick, and Jearl Walker. Fundamentals of Physics Extended, 10th edition, instructor's solutions manual, volume 1. John Wiley & Sons, 2014.herbert1991introductory J Herbert and P Neff. Introductory Electromagnetics. John Wiley & Sons, 1991.MIT Physics Department Faculty, Lecturers, and Technical Staff: Boleslaw Wyslouch, Brian Wecht, Bruce Knuteson, Erik Katsavounidis, Gunther Roland, John Belcher, Joseph Formaggio, Peter Dourmashkin and Robert Simcoe. 8.02 Physics II: Electricity and Magnetism. Massachusetts Institute of Technology: MIT OpenCourseWare, <https://ocw.mit.edu>. https://ocw.mit.edu/courses/physics/8-02-physics-ii-electricity-and-magnetism-spring-2007/class-activities/chapte4gauss_law.pdf Link to file, Spring 2007 (3.11.2016).Prytz2015 Kjell Prytz. Electrodynamics: The Field-Free Approach: Electrostatics, Magnetism, Induction, Relativity and Field Theory. Springer, 2015.Shaw-Symmetry-Uniq Ronald Shaw. Symmetry, Uniqueness, and the Coulomb Law of Force. Am. J.
Phys., 33(4), 300-305, 1965.Sherin-How-students Bruce L. Sherin. How students understand physics equations. Cognition and Instruction, 19(4), 479-541, 2001.Meredith-Context Dawn C. Meredith and Karen A. Marrongelle. How students use mathematical resources in an electrostatics context. Am. J. Phys., 76(6), 570-578, 2008.Doughty-cues Leanne Doughty, Eilish McLoughlin, Paul van Kampen. What integration cues, and what cues integration in intermediate electromagnetism. Am. J. Phys., 82(11), 1093-1103, 2014.Palma G. Palma, R. Oyarzun, U. Raff. Generalization of the electrostatic potential function for an infinite charge distribution. Am. J. Phys., 71(8), 813-815, 2003.Bohren2009 Craig F Bohren. Physics textbook writing: Medieval, monastic mimicry. Am. J. Phys., 77(2), 101-103, 2009.Jeffreys Bertha Swirles Jeffreys and Harold Jeffreys. Methods of Mathematical Physics, 2nd edition. Cambridge University Press, 1950.Bourchtein L. Bourchtein, A. Bourchtein. CounterExamples: From Elementary Calculus to the Beginnings of Analysis. CRC Press, Taylor & Francis Group, 2015.Gelbaum Bernard R. Gelbaum and John M. Olmsted. Counterexamples in Analysis. Dover Publications, Inc., Mineola, New York, 2003.§ UNIFORMLY CHARGED RECTANGLE Let us consider a two-dimensional, uniformly charged – with constant surface charge density σ – rectangle P=[a,b]×[c,d] on the XY-plane. The choice of coordinates is shown in Fig. <ref>. We determine the electric field at the point r⃗=[0, 0,z] on the Z axis, assuming z≠0. Using Coulomb's law and superposing contributions from infinitesimal charge elements σ dS' at the point r⃗'=[x',y', 0], one obtains: E⃗(r⃗) = kσ∫_P(r⃗-r⃗')/|r⃗-r⃗'|^3 dS' where ∫_P(r⃗-r⃗')/|r⃗-r⃗'|^3 dS'=ê_x∫_P-x'/|r⃗-r⃗'|^3 dS'+ê_y∫_P-y'/|r⃗-r⃗'|^3 dS'+ê_z∫_Pz/|r⃗-r⃗'|^3 dS' and r⃗-r⃗' = [-x', -y',z], |r⃗-r⃗'| =√(x'^2+y'^2+z^2), with x'∈[a,b] and y'∈[c,d] at z'=0. Let us focus on the x-component of E⃗: E_x=kσ∫_c^d∫_a^b-x'/√((x'^2+y'^2+z^2)^3) dx' dy' After the first integration one obtains ∫_a^b-x'/√((x'^2+y'^2+z^2)^3) dx' =.1/√(x'^2+y'^2+z^2)|_a^b =1/√(b^2+y'^2+z^2)-1/√(a^2+y'^2+z^2) The second integration leads to ∫_c^d1/√(b^2+y'^2+z^2) dy' =.ln|y'+√(b^2+y'^2+z^2)||_c^d =ln|d+√(b^2+d^2+z^2)/c+√(b^2+c^2+z^2)| Finally, the x-component of the electric field is equal to: E_x=kσln(d+√(b^2+d^2+z^2)/c+√(b^2+c^2+z^2) c+√(a^2+c^2+z^2)/d+√(a^2+d^2+z^2)) The result for E_y can be easily obtained after a change of variables in equation (<ref>).To fully describe the electric field of the uniformly charged rectangle, we calculate the z component of E⃗: E_z=kσ∫_c^d∫_a^bz/√((x'^2+y'^2+z^2)^3) dx' dy' The first integration: ∫_a^b1/√((x'^2+y'^2+z^2)^3) dx' =.x'/(y'^2+z^2)√(x'^2+y'^2+z^2)|_a^b =b/(y'^2+z^2)√(b^2+y'^2+z^2)-a/(y'^2+z^2)√(a^2+y'^2+z^2) The next integral is more complicated: ∫_c^db/(y'^2+z^2)√(b^2+y'^2+z^2) dy' =.1/zarctan[by'/z√(b^2+y'^2+z^2)]|_c^d =1/z{arctan[bd/z√(b^2+d^2+z^2)]-arctan[bc/z√(b^2+c^2+z^2)]} Finally, we obtain E_z= kσ{arctan[bd/z√(b^2+d^2+z^2)]-arctan[bc/z√(b^2+c^2+z^2)]-arctan[ad/z√(a^2+d^2+z^2)]+arctan[ac/z√(a^2+c^2+z^2)]} It is simple to show that lim_b→+∞lim_a→-∞lim_d→+∞lim_c→-∞E_z=(z/|z|)kσ{π/2+π/2+π/2+π/2}=(z/|z|)σ/2ε_0 This limit for E_z is equal to the result well known from textbooks.
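The closed forms derived in this appendix are easy to cross-check numerically. The following is a minimal sketch, assuming numpy and scipy; we set k=σ=1, so the fields are expressed in units of kσ, and the last lines reproduce the ≈85% figure for the centred square with η=6 quoted in the main text.

```python
import numpy as np
from scipy.integrate import dblquad

def Ex_closed(a, b, c, d, z):
    r = lambda u, v: np.sqrt(u**2 + v**2 + z**2)
    return np.log((d + r(b, d)) / (c + r(b, c)) * (c + r(a, c)) / (d + r(a, d)))

def Ez_closed(a, b, c, d, z):
    t = lambda u, v: np.arctan(u * v / (z * np.sqrt(u**2 + v**2 + z**2)))
    return t(b, d) - t(b, c) - t(a, d) + t(a, c)

def E_direct(a, b, c, d, z, comp):
    # Direct integration of Coulomb's law over the rectangle [a,b] x [c,d]
    num = (lambda x, y: -x) if comp == "x" else (lambda x, y: z)
    val, _ = dblquad(lambda y, x: num(x, y) / (x**2 + y**2 + z**2)**1.5,
                     a, b, c, d)
    return val

a, b, c, d, z = -1.0, 2.0, -0.5, 1.5, 0.3
print(Ex_closed(a, b, c, d, z), E_direct(a, b, c, d, z, "x"))  # should agree
print(Ez_closed(a, b, c, d, z), E_direct(a, b, c, d, z, "z"))  # should agree

# Square centred on the axis with side 2*eta*z: E_z/(sigma/2eps0) = E_z/(2*pi)
eta, z = 6.0, 1.0
print(Ez_closed(-eta*z, eta*z, -eta*z, eta*z, z) / (2 * np.pi))  # ~0.85
```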
§ FROM A RECTANGLE TO AN INFINITE STRIPE We “extend” the rectangle to the infinite stripe by setting d→+∞ and c→-∞: lim_c→-∞lim_d→+∞(d+√(b^2+d^2+z^2)/c+√(b^2+c^2+z^2) c+√(a^2+c^2+z^2)/d+√(a^2+d^2+z^2)) =lim_c→-∞(c+√(a^2+c^2+z^2)/c+√(b^2+c^2+z^2)) =a^2+z^2/b^2+z^2 One obtains a well-defined x-component of the field: E_x stripe=kσln[(a^2+z^2)/(b^2+z^2)] § EXAMPLES OF INCONSISTENCIES We are aware that it is a risky task to pinpoint inconsistencies in well-established textbooks. However, as university teachers who have to explain the issue to confused students every year, we would be more than satisfied to be able to recommend a textbook in which the authors present a consistent approach to problems with infinite charge distributions. Unfortunately, we did not find a mathematically correct treatment of such cases. To show that the problem is widespread, we present an illustrative list of a few introductory courses in electrostatics in which the existence of the electric field or the force due to a uniformly charged infinite object is taken for granted.* In <cit.> (Cancelling Components, pp. 639-640) the authors explain that in the case of a uniformly charged ring the components perpendicular to the ring axis cancel. This result is used as well in the case of a uniformly charged disk (pp. 643-644). However, at the end of this section the authors obtain the electric field of an infinite plate by extending the radius of the disk to infinity. There is no discussion of the existence of the presented integral when the radius of the disk is infinite. Thus, the components perpendicular to the axis of the disk are obtained on the same basis as result (<ref>). Further on (p. 673), and in <cit.> (p. 13), the field of an infinite sheet is calculated using Gauss' law, with the same assumption that the field parallel to the plate is zero. As we show in section <ref> or <ref>, this field does not exist in the framework of electrostatics. * In <cit.> (p. 51) the infinite sheet is built from infinite wires. The author observes that the integrand is an odd function, thus the result must be zero. Once more, students may think about sin(x) as the integrand (see Eq. <ref>) and wonder why physics lectures are not compatible with mathematical ones. * We find in <cit.> (section 13-4, pp. 13-13 to 13-14) that in the case of an infinite plate only the perpendicular component of the gravitational or the electric field is considered. * In <cit.> (problem 33, p. 1052) and in <cit.> (section 4.8, p. 31) we have examples of the standard superposition of the fields from infinite plates. This amounts to superposing undefined quantities – such are the field components parallel to an infinite sheet. * In <cit.> (problem 37, p. 1053) the authors instruct students on how they should think: ”THINK To calculate the electric field at a point very close to the center of a large, uniformly charged conducting plate, we replace the finite plate with an infinite plate having the same charge density. Planar symmetry then allows us to apply Gauss' law to calculate the electric field.”* In <cit.> (p. 53) we read “In some textbook problems the charge itself extends to infinity (we speak, for instance, of the electric field of an infinite plane, or the magnetic field of an infinite wire). In such cases the normal boundary conditions do not apply, and one must invoke symmetry arguments to determine the fields uniquely.”This suggests that the authors do not doubt that electrostatics is able to describe the case.
The only problem would be how to change the rules of the game to prove the result one believes in. * In <cit.> (pp. 45-46) the field components parallel to an infinite sheet are calculated and zero values are obtained. The author integrates first over the azimuthal angle; as the result is zero, the subsequent integration over the radius is not necessary. However, students who already know Fubini's theorem may try to integrate first over the radius, which leads them to infinity (see the sketch after this list)! In all these cases the existence of the solution is assumed, and the authors' main goal is to obtain a mathematical formula, usually via some technical shortcut. A discussion of the existence of the solution would be beneficial for the didactic process, and is likely to lead to the correct result.
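The order-of-integration pitfall mentioned in the last item can be made explicit with a computer algebra system. This is a sketch assuming sympy, and it is our illustration rather than an excerpt from any of the cited textbooks: the integrand of the parallel component is odd in x, but it is not absolutely integrable, so Fubini's theorem does not apply and the result depends on how the plate is built.

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
z, xi, b = sp.symbols("z xi b", positive=True)

# Integrand of E_x over the plate (with k*sigma = 1)
f = -x / (x**2 + y**2 + z**2)**sp.Rational(3, 2)

inner_y = sp.integrate(f, (y, -sp.oo, sp.oo))  # -> -2*x/(x**2 + z**2)
print(sp.simplify(inner_y))

# Not absolutely integrable in x: a one-sided integral already diverges
print(sp.integrate(2*x / (x**2 + z**2), (x, 0, sp.oo)))  # -> oo

# Cutting off at a = -xi*b and letting b -> oo reproduces E_x = k*sigma*ln(xi^2)
F = sp.integrate(inner_y, (x, -xi*b, b))
print(sp.simplify(sp.limit(F, b, sp.oo)))  # -> 2*log(xi), i.e. log(xi**2)
```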
1Astronomical Institute, Tohoku University, Aramaki, Aoba, Sendai 980-8578, Japan
2Dept. of Astrophysical Sciences, Princeton University, 4 Ivy Lane, Princeton, NJ 08544, USA
3Faculty of Business Administration, Tokyo Keizai University, Kokubunji, Tokyo, 185-8502, Japan
4Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8582, Japan
5Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo, 113-0033, Japan
6National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588, Japan
7Department of Astronomy, School of Science, SOKENDAI (The Graduate University for Advanced Studies), Mitaka, Tokyo 181-8588, Japan
8Department of Economics, Management and Information Science, Onomichi City University, Hisayamada 1600-2, Onomichi, Hiroshima 722-8506, Japan
9Subaru Telescope, NAOJ, 650 N Aohoku Pl, Hilo, HI 96720, USA
10Research Center for Space and Cosmic Evolution, Ehime University, Matsuyama, Ehime 790-8577, Japan
11Faculty of Education, Bunkyo University, Koshigaya, Saitama 343-8511, Japan
12Graduate School of Science and Engineering, Ehime University, Bunkyo-cho 2-5, Matsuyama 790-8577, Japan
13Institute for Advanced Research, Nagoya University, Chikusaku, Nagoya 464-8602, Japan
14Research Center for the Early Universe, University of Tokyo, Tokyo 113-0033, Japan
15Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), University of Tokyo, Chiba 277-8582, Japan
16High Energy Accelerator Research Organization and The Graduate University for Advanced Studies, Oho 1-1, Tsukuba, Ibaraki 305-0801, Japan
17Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan
[email protected]
Keywords: large-scale structure of universe — cosmology: observations — galaxies: evolution — galaxies: high-redshift — galaxies: active
Clustering of quasars in a wide luminosity range at redshift 4 with Subaru Hyper Suprime-Cam wide field imaging
Wanqiu He1, Masayuki Akiyama1, James Bosch2, Motohiro Enoki3, Yuichi Harikane4,5, Hiroyuki Ikeda6, Nobunari Kashikawa6,7, Toshihiro Kawaguchi8, Yutaka Komiyama6,7, Chien-Hsiu Lee9, Yoshiki Matsuoka10,6, Satoshi Miyazaki6,7, Tohru Nagao10, Masahiro Nagashima11, Mana Niida12, Atsushi J Nishizawa13, Masamune Oguri14,5,15, Masafusa Onoue6,7, Taira Oogi15, Masami Ouchi4, Andreas Schulze6, Yuji Shirasaki6, John D. Silverman15, Manobu M.
Tanaka16, Masayuki Tanaka6, Yoshiki Toba17, Hisakazu Uchiyama7, Takuji Yamashita10 We examine the clustering of quasars over a wide luminosity range by utilizing 901 quasars at z_ phot∼3.8 with -24.73<M_ 1450<-22.23 photometrically selected from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) S16A Wide2 data release, and 342 more luminous quasars at 3.4<z_ spec<4.6 with -28.0<M_ 1450<-23.95 from the Sloan Digital Sky Survey (SDSS) that fall in the HSC survey fields. We measure the bias factors of the two quasar samples by evaluating the cross-correlation functions (CCFs) between the quasar samples and 25790 bright z∼4 Lyman break galaxies (LBGs) with M_ 1450<-21.25 photometrically selected from the HSC dataset. Over an angular scale of 10.0” to 1000.0”, the bias factors are 5.93^+1.34_-1.43 and 2.73^+2.44_-2.55 for the less-luminous and luminous quasars, respectively, indicating no luminosity dependence of quasar clustering at z∼4. It is noted that the bias factor of the luminous quasars estimated from the CCF is smaller than that estimated from the auto-correlation function (ACF) over a similar redshift range, especially on scales below 40.0”. Moreover, the bias factor of the less-luminous quasars implies that the minimal mass of their host dark matter halos (DMHs) is 0.3-2×10^12h^-1M_⊙, corresponding to a quasar duty cycle of 0.001-0.06.§ INTRODUCTION It is our current understanding that every massive galaxy is likely to have a supermassive black hole (SMBH) at its center <cit.>. Active Galactic Nuclei (AGNs) are thought to be associated with the growth phase of the BHs through mass accretion. Being the most luminous members of the AGN population, quasars may be the progenitors of the SMBHs in the local universe. Observations over the last decade or so have been establishing a series of scaling relations between the SMBH mass and the properties of their host galaxies (for a review see <cit.>). A similar scaling relation, involving the mass of the SMBH, is reported even with the host dark matter halo (DMH) mass <cit.>. As a result, SMBHs may play an important role in galaxy formation and evolution. However, the physical mechanism behind the scaling relations is still unclear. Clustering analysis of AGNs is commonly used to investigate SMBH growth and galaxy evolution in DMHs. Density peaks in the underlying dark matter distribution are thought to evolve into DMHs (e.g., <cit.>), in which the entire structure is gravitationally bound with a density 300 times higher than the mean density of the universe. More massive DMHs are formed from rarer density peaks in the early universe, and are more strongly clustered (e.g. <cit.>; <cit.>). If we focus on the large-scale clustering, i.e.
the two-halo term, the mass of the quasar host halos can be inferred by estimating the clustering strength of the quasars relative to that of the underlying dark matter, i.e. the bias factor. How the bias factor of quasars depends on redshift and luminosity provides further information on the relation between SMBHs and galaxies within their shared DMH.Many studies, based on the two-point correlation function (2PCF) of quasars, have been conducted by utilizing large databases of quasars, such as the 2dF Quasar Redshift Survey (e.g., <cit.>) and the Sloan Digital Sky Survey (e.g., <cit.>; <cit.>; <cit.>). The redshift evolution of the auto-correlation function (ACF) indicates that quasars are more strongly biased at higher redshifts. For example, luminous SDSS quasars with -28.2<M_1450<-25.8 at z∼4 show strong clustering with a bias factor of 12.96±2.09, which corresponds to a host DMH mass of ∼10^13 h^-1 M_⊙ <cit.>. It is suggested that such high-luminosity quasar activity needs to be preferentially associated with the most massive DMHs in the early universe <cit.>. If we consider the low number density of such massive DMHs at z=4, the fraction of halos with luminous quasar activity is estimated to be 0.03∼0.6 (<cit.>) or up to 0.1-1 <cit.>. The clustering strength of quasars can also be measured from the cross-correlation function (CCF) between quasars and galaxies. When the size of a quasar sample is limited, the clustering strength of the quasars can be constrained with higher accuracy by using the CCF rather than the ACF, since galaxies are usually more numerous than quasars. Enhanced clustering and overdensities of galaxies around luminous quasars are expected from the strong auto-correlation of the SDSS quasars at z∼4. However, observational searches for such overdensities around quasars at high redshifts have not been conclusive. While some luminous z>3 quasars are found to be in over-dense regions (e.g., <cit.>; <cit.>; <cit.>; <cit.>; <cit.>; <cit.>), a significant fraction of them do not show any surrounding overdensity compared to the field galaxies, and it is suggested that the large-scale (∼10 comoving Mpc) environment around the luminous z>3 quasars is similar to that of the Lyman-break galaxies (LBGs), i.e. typical star-forming galaxies, in the same redshift range (e.g., <cit.>; <cit.>; <cit.>; <cit.>).To investigate the quasar environment at z∼4, the clustering of quasars with lower luminosity, M_UV≳-25, i.e. typical quasars, which are more abundant than the luminous SDSS quasars, is crucial, as it can constrain the growth of SMBHs inside galaxies in the early universe <cit.>. At low redshifts (z≲3), the clustering of quasars is found to have no or only a weak luminosity dependence (e.g., <cit.>; <cit.>; <cit.>; <cit.>). At z>3, <cit.> examined the CCF of 25 less-luminous quasars in the COSMOS field. However, since the sample size is small, the clustering strength of the less-luminous quasars has still not been well constrained, and their correlation with galaxies remains unclear. The wide and deep multi-band imaging dataset of the Subaru Hyper Suprime-Cam Strategic Survey Program (HSC-SSP; <cit.>) provides us a unique opportunity to examine the clustering of galaxies around high-redshift quasars over a wide luminosity range. Based on an early data release of the survey (S16A; <cit.>), a large sample of less-luminous z∼4 quasars (M_ UV<-21.5) was constructed for the first time <cit.>. They cover the luminosity range around the knee of the quasar luminosity function, i.e.
they are typical quasars in this redshift range. Additionally, more than 300 luminous SDSS quasars at z∼4 fall within the HSC survey area thanks to its wide field of 339.8 deg^2. Likewise, the five bands of HSC imaging are deep enough to construct a sample of galaxies in the same redshift range through the Lyman-break method <cit.>. Here, we examine the clustering of galaxies around z∼4 quasars over a wide luminosity range of -28.0<M_ 1450<-22.23 by utilizing the HSC-SSP dataset. By comparing the clustering of the luminous and less-luminous quasars, we can further evaluate the luminosity dependence of the quasar clustering. The outline of the paper is as follows. Section <ref> describes the samples of z∼4 quasars and LBGs. Section <ref> reports the results of the clustering analysis, and we discuss the implications of the observed clustering strength in section <ref>. Throughout this paper, we adopt a ΛCDM model with cosmological parameters of H_0=70 km s^-1 Mpc^-1 (h=0.7), Ω_m=0.3, Ω_Λ=0.7 and σ_8=0.84. All magnitudes are given in the AB magnitude system.§ DATA§.§ HSC-SSP Wide-layer dataset We select the candidates of z∼4 quasars and LBGs from the Wide-layer catalog of the HSC-SSP <cit.>. HSC is a wide-field mosaic CCD camera attached to the prime focus of the Subaru telescope (<cit.>; <cit.>). It covers a FoV of 1.5 deg diameter with 116 fully depleted CCDs, which have a high sensitivity up to 1μm. The Wide layer of the survey is designed to cover 1,400 deg^2 in the g, r, i, z and y bands with 5σ detection limits of 26.8, 26.4, 26.4, 25.5 and 24.7, respectively, over the 5 year survey <cit.>. In this analysis, we use the S16A Wide2 internal data release <cit.>, which covers 339.8 deg^2 in the 5 bands, including edge regions where the depth is shallower than the final depth. The data are reduced with hscPipe-4.0.2 <cit.>. The astrometry of the HSC imaging is calibrated against the Pan-STARRS 1 Processing Version 2 (PS1 PV2) data <cit.>, which cover all HSC survey regions to a reasonable depth with a similar set of bandpasses <cit.>. The RMS offset between the HSC and PS1 positions of stellar objects is ∼40 mas. Extended galaxies have additional offsets with an RMS of ∼30 mas relative to the stellar objects <cit.>.Following the description in sections 2.1 and 2.4 in <cit.>, we construct a sample of objects with reliable photometry (referred to as clean objects hereafter). We require, in all of the 5 bands,
(1) flags_pixel_edge = Not True
(2) flags_pixel_saturated_center = Not True
(3) flags_pixel_cr_center = Not True
(4) flags_pixel_bad = Not True
(5) detect_is_primary = True
These parameters are included as standard output products of the SSP pipeline. Criteria (1)-(4) remove objects detected at an edge of the CCDs, affected by saturation within their central 3×3 pixels, affected by cosmic-ray hits within their central 3×3 pixels, or flagged with bad pixels. The final criterion picks out objects after the deblending process for crowded objects. We apply additional masks (for details see section 2.4 in <cit.>) to remove junk objects. Patches, defined as the minimum unit of a sub-region with an area of about 10.0' by 10.0', having color offsets in the stellar sequence larger than 0.075 in any of the g-r vs. r-i, r-i vs. i-z and i-z vs. z-y color-color planes are removed (see section 5.8.4 in <cit.>). Tract 8284 is also removed due to unreliable calibration.
Moreover, we remove objects close to bright objects by requiring that flags_pixel_bright_object_center in all 5 bands is “Not True”. Regions around objects brighter than 15 in the Guide Star Catalog version 2.3.2 or i=22 in the HSC S16A Wide2 database are also removed with the masks described in <cit.>. After the masking process, the effective survey area is 172.0 deg^2.We use PSF magnitudes for stellar objects and CModel magnitudes for extended objects. PSF magnitudes are determined by fitting a model PSF, while CModel magnitudes are determined by fitting a linear combination of exponential and de Vaucouleurs profiles convolved with the model PSF at the position of each object. We correct for galactic extinction in all 5 bands based on the dust extinction maps by <cit.>. Only objects that have magnitude errors in the r and i bands smaller than 0.1 are considered. §.§ Samples of z∼4 quasars We select candidates of z∼4 quasars from the stellar clean objects. In order to separate stellar objects from extended objects, we apply the same criteria as described in <cit.>,
(6) i_hsm_moments_11 / i_hsm_psfmoments_11 < 1.1;
(7) i_hsm_moments_22 / i_hsm_psfmoments_22 < 1.1.
i_hsm_moments_11 (22) is the second-order adaptive moment of an object in the x (y) direction determined with the algorithm described in <cit.>, and i_hsm_psfmoments_11 (22) is that of the model PSF at the object position. The i-band adaptive moments are adopted since the i-band images are selectively taken under good seeing conditions <cit.>. Objects whose adaptive moment is "nan" are removed. Since stellar objects should have an adaptive moment consistent with that of the model PSF, we set the above stellar/extended classification criteria. The selection completeness and the contamination were examined by <cit.>. At i<23.5, the completeness is above 80% and the contamination from extended objects is lower than 10%. At fainter magnitudes (i>23.5), the completeness rapidly declines to less than 60% and the contamination sharply increases to greater than 10% (see the middle panel of figure 1 in <cit.>). To avoid severe contamination by extended objects, we limit the faint end of the quasar sample to i=23.5.We apply the Lyman-break selection to identify quasars at z∼4. The selection utilizes the spectral property that the continuum blueward of the Lyα line (λ_ rest=1216 Å) is strongly attenuated by absorption due to the intergalactic medium (IGM). The Lyα line of an object at z=4.0 is redshifted to 6075 Å in the observed frame, in the middle of the r-band; as a result, such an object has a red g-r color. We apply the same color selection criteria as described in <cit.>. In total, 1023 z∼4 quasar candidates in the magnitude range 20.0<i<23.5 are selected. We limit the bright end of the sample considering the effects of saturation and non-linearity. Even though we include edge regions with a shallow depth in the sample selection, we do not find a significant difference between the number densities in the edge and central regions. Therefore, we conclude that the larger photometric uncertainties and the higher number density of junk objects in the shallower regions do not result in a higher contamination of the quasars there. The i-band magnitude distribution of the sample is shown by the red histogram in the left panel of figure <ref>. The completeness of the color selection is examined with the 3.5<z_ spec<4.5 SDSS quasars with i>20.0 within the HSC coverage <cit.>.
Among 92 SDSS quasars with clean HSC photometry, 61 pass the color selection, resulting in a completeness of 66%. Since the sample is photometrically selected, it can be contaminated by galactic stars and compact galaxies that meet the color selection criteria. The contamination rate is evaluated by using mock samples of galactic stars and galaxies; it is less than 10% at i<23.0, and increases to more than 40% at i∼23.5. This causes an excess of HSC quasars in the faint magnitude bins (23.2<i<23.5), as shown in the left panel of figure <ref>. Since the contamination rate sharply increases at i>23.5, we limit the sample at this magnitude. For the bright end, as the luminous SDSS quasar sample primarily includes quasars brighter than i=21.0, we take the HSC quasars fainter than i=21.0 to constitute the less-luminous quasar sample. Finally, 901 quasars from the HSC are selected in the magnitude range of 21.0<i<23.5. Here, we convert the i-band apparent magnitude to the UV absolute magnitude at 1450 Å using the average quasar SED template provided by <cit.> at z∼4, which results in a magnitude range of -24.73<M_ 1450<-22.23. In <cit.>, a best-fit analytic formula for the contamination rate as a function of the i-band magnitude is provided. If we apply it to the less-luminous quasar sample, it is expected that 90 out of the 901 candidates are contaminating objects, i.e. the contamination rate of the z∼4 less-luminous quasar sample is 10.0%.The redshift distribution of the sample of z∼4 less-luminous quasar candidates is shown in figure <ref> with the red histogram. For the 32 candidates with spectroscopic redshift information, we adopt their spectroscopic redshifts; otherwise the redshifts are estimated with a Bayesian photometric redshift estimator using a library of mock quasar templates <cit.>. Most of the quasars are in the redshift range between 3.4 and 4.6. The average and standard deviation of the redshift distribution are 3.8 and 0.2, respectively.In order to examine the luminosity dependence of the quasar clustering, a sample of luminous z∼4 quasars is constructed based on the 12th spectroscopic data release of the Sloan Digital Sky Survey (SDSS) <cit.>. We select quasars with criteria on the object type (“QSO”), the reliability of the spectroscopic redshift (“z_warning” flag = 0), and the estimated redshift error (smaller than 0.1). Only quasars within the coverage of the HSC S16A Wide2 data release are considered. We limit the redshift range to between 3.4 and 4.6, following the redshift distribution of the HSC z∼4 LBG sample (which will be discussed in section <ref>). In the coverage of the HSC S16A Wide2 data release, there are 342 quasars that meet the selection criteria. Their redshift distribution is shown by the gray filled histogram in figure <ref>. The average and standard deviation of the redshift distribution are 3.77 and 0.26, respectively. Although the redshift distribution of the SDSS sample shows an excess around z∼3.5 compared to the HSC sample, the averages and standard deviations are close to each other. The i-band magnitude distribution of the SDSS quasars is plotted by the black histogram in the left panel of figure <ref>. To determine their i-band magnitudes in the HSC photometric system, we match the sample to the HSC clean objects using a search radius of 1.0”. Out of the 342 SDSS quasars, 296 have a corresponding object among the clean objects, while the others are saturated in the HSC imaging data.
For the remaining 46 quasars, we convert their r- and i-band magnitudes in the SDSS system to the i-band magnitude in the HSC system following the equations in section 3.3 of <cit.>. As can be seen from the distributions, the SDSS quasar sample covers a magnitude range about 2 magnitudes brighter than the HSC quasar sample. Their corresponding UV absolute magnitudes at 1450 Å are in the range of -28.0 to -23.95, evaluated by the same method as for the less-luminous quasar sample. §.§ Sample of z∼4 LBGs from the HSC dataset We select candidates of z∼4 LBGs from the S16A Wide2 dataset in a similar way as we select the z∼4 quasar candidates. Differently from the quasars, we select candidates from the extended clean objects instead of the stellar objects, i.e. we pick out the clean objects that do not meet either of equations (6) or (7) as extended objects. As shown in figure 9 of <cit.>, extended galaxies at z>3 are distinguishable from stellar quasars with these criteria, thanks to the good image quality of the i-band HSC Wide-layer images, which have a median seeing size of 0.61” <cit.>. While the stellar/extended classification is ineffective at i>23.5, the contamination of stellar objects in the LBG sample is negligible, because the extended objects outnumber the stellar objects by ∼30 times at 23.5<i<25.0.We determine the color selection criteria of z∼4 LBGs based on the color distributions of a library of model LBG spectral energy distributions (SEDs), because the sample of z∼4 LBGs with a spectroscopic redshift at the depth of the HSC Wide-layer is limited. The model SEDs are constructed with the stellar population synthesis model by <cit.>. We assume a Salpeter initial mass function <cit.> and the Padova evolutionary track for stars (<cit.>; <cit.>) of solar metallicity. Following a typical star-formation history of z∼4 LBGs derived from an optical-NIR SED analysis (e.g. <cit.>; <cit.>; <cit.>), we adopt an exponentially declining star-formation history with ψ(t)=τ^-1exp(-t/τ), where τ=50 Myr and t=300 Myr. In addition to the stellar continuum component, we also consider the Lyα emission line at 1216 Å with an EW_Lyα randomly distributed within the range between 0 and 30 Å, which is determined to follow the Lyα EW distribution of luminous LBGs in the UV absolute magnitude range of -23.0 ∼ -21.5 <cit.>. We apply extinction as a dust screen with the dust extinction curve of <cit.>. We assume that E(B-V) has a Gaussian distribution with a mean of 0.14 and 1σ of 0.07, following that observed for z∼3 UV-selected galaxies <cit.>. In order to reproduce the observed scatter of the g-r color of galaxies at z∼3 (see figure <ref>), the scatter of the color excess is doubled to σ=0.14. In total, 3,000 SED templates are constructed. Each template is redshifted to z=2.5-5.0 with an interval of 0.1. Attenuation by the intergalactic medium is applied to the redshifted templates. We follow the updated number density of the Lyα absorption systems in <cit.>, and consider the scatter in the number density of the systems along different lines of sight with the Monte Carlo method used in <cit.> (Inoue, private communication). In figure <ref>, we compare the distributions of the g-r and r-z colors of the templates with those of the galaxies with spectroscopic redshifts in the HSC-SSP catalogs of the Ultra-Deep layer. The color distribution of the mock LBGs as a function of redshift reproduces that of the galaxies with spectroscopic redshifts around 3.
At z>3.5, it is hard to judge the consistency due to the limited number of galaxies with available spectroscopic redshifts.Considering the color distributions of the mock LBGs and the LBGs with a spectroscopic redshift, we determine the color selection criteria on the g-r vs. r-z color-color diagram as shown in figure <ref> with the blue dashed lines. Gray dots and blue crosses represent the colors of galaxies with a spectroscopic redshift at 0.2<z<0.8 and 0.8<z<3.5, respectively, in the HSC Wide-layer photometry. Red stars are galaxies at 3.5<z<4.5. We plot the color track of the model LBG with the black solid line, and mark the colors at z=2.5, 3.0, 3.5, 4.0 and 4.4 with the 1σ scatter. The pink shaded region represents the 1σ scatter of the r-z color along the model track. The selection criteria are
(8) 0.909(g-r)-0.85 > (r-z);
(9) (g-r) > 1.3;
(10) (g-r) < 2.5.
We determine the selection criteria so as to enclose a large part of the color distribution of the models while preventing severe contamination from low-redshift galaxies. The third criterion limits the upper redshift range of the sample, and is adjusted to match the expected redshift distribution of the less-luminous z∼4 quasars. In order to reduce the contamination by low-redshift red galaxies and objects with unreliable photometry, we consider two additional criteria
(11) (i-z) < 0.2;
(12) (z-y) < 0.2
following figure 3 in <cit.>. Because the contamination by low-redshift galaxies is severe at magnitudes fainter than i=24.5, we limit the sample at this magnitude. Finally, we select 25790 z∼4 LBG candidates at i<24.5. The i-band magnitude distribution of the candidates is shown in the right panel of figure <ref>. The brightest candidate is at i=21.87, but there are only 4 candidates at i<22; thus we plot the distribution from i=22. The corresponding UV absolute magnitudes of the candidates at 1450 Å are evaluated to be in the range of -23.88<M_ 1450<-21.25 using the model LBG at z∼4. It should be noted that there is a difference in the sky coverage between the two quasar samples and the i<24.5 LBGs, because of the edge regions with shallow depth where only the quasars are selected reliably. Such selection effects are taken into consideration when constructing the random sample (section <ref>). §.§ Redshift distribution and contamination rate of the z∼4 LBG sample The redshift distribution of the LBG sample is evaluated by applying the same selection criteria to a sample of mock LBGs, which are constructed in the redshift range between 3.0 and 5.0 with a 0.1 redshift bin. At each redshift bin, we randomly select LBG templates from our library of SEDs and normalize them to have 22.0<i<24.5 following the LBG UV luminosity function at z∼3.8 <cit.>. We convert the apparent i-band magnitude to the absolute UV magnitude based on the selected templates. It should be noted that an object with a fixed apparent magnitude has a higher luminosity and a smaller number density in the luminosity function at higher redshifts. We also consider the difference in comoving volume at each redshift bin. For each redshift bin, we then place the mock LBGs at random positions in the HSC Wide-layer images with a density of 2,000 galaxies per deg^2, and apply the same masking process as for the real objects. We calculate the expected photometric error at each position using the relation between the flux uncertainty and the value of the image variance. This relation is determined empirically from the flux uncertainty of real objects as a function of the PSF and object size.
The variance is measured within 1”×1” at each point. The size of the model PSF at the position is evaluated with the model PSF of the nearest real object in the database. In order to reproduce the photometric error associated with the real LBGs, we use the relation for a size of 1.5”. After calculating the photometric error with this method, we add a random photometric error assuming a Gaussian distribution. Finally, we apply the color selection criteria and remove mock LBGs with a magnitude error in either the i- or r-band larger than 0.1. The ratio of the recovered mock LBGs to the full random mock LBGs is evaluated as the selection completeness at each redshift bin. We find that the selection completeness is ∼10.0-30.0% in the redshift range between 3.5 and 4.2, but smaller than 5% at other redshifts. These low rates are due to the fact that we set stringent constraints to prevent severe contamination from low-redshift galaxies. Based on a selection completeness of 20.0% at 3.5<z<4.2, we calculate an expected number of 35988 LBGs with 22<i<24.5 in the HSC-SSP S16A Wide layer from the LBG UV luminosity function at z∼3.8 <cit.>, which is larger than the actual LBG sample size (25790) in this work, since we include the edge regions that have a shallow depth. The effect of the shallow depth is considered in the construction of the random objects (section <ref>).The redshift distribution is measured by multiplying the completeness ratio by the number of mock LBGs at each redshift, and is shown in figure <ref> with the blue histogram. The average and 1σ of the distribution are 3.71 and 0.30, respectively. The redshift distribution of the LBGs is similar to that of the luminous quasar sample, but slightly more extended toward lower redshifts than that of the less-luminous quasar sample. It is likely that the extension is due to the higher number density of LBGs with 22.0<i<24.5 at 3.3<z<3.5. The LBG sample can be contaminated by low-redshift red galaxies which have photometric properties similar to the z∼4 LBGs. We evaluate the contamination rate of the LBG selection using the HSC photometry in the COSMOS region and the COSMOS i-band selected photometric redshift catalogue, which is constructed by a χ^2 template-fitting method with 30 broad, intermediate, and narrow bands from the UV to the mid-IR in the 2-deg^2 COSMOS field <cit.>. In the HSC-SSP S15B internal database, three stacked images in the COSMOS region, simulating good, median, and bad seeing conditions, are provided. Since the i-band images of the Wide layer are selectively taken under good or median seeing conditions <cit.>, we match the catalogs from the median stacked image, which has a FWHM of 0.70”, with galaxies in the photometric redshift catalog within an angular separation of 1.0”. As examined by <cit.>, the photometric redshift uncertainty of galaxies with a COSMOS i'-band magnitude brighter than 24.0 is estimated to be smaller than 0.02 at z<1.25. For galaxies in the same luminosity range at higher redshifts, 1.25<z<3, the uncertainty is significantly higher but roughly below 0.1. Thus we only include objects with a photometric redshift uncertainty less than 0.02 at z<1.25 and less than 0.1 at z>1.25 in the matched catalog. We apply the color selection criteria (8)-(12) to the matched catalog. Among 700 matched galaxies with 3.5<z_phot<4.5, 117 galaxies pass the selection criteria, resulting in a completeness of 17%, which is consistent with that examined with the mock LBGs.
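For reference, a crossmatch of this kind can be sketched in a few lines; the following assumes astropy, and the coordinate arrays are mock placeholders rather than the actual column names of the HSC or COSMOS catalogs.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(0)  # mock positions in a COSMOS-like footprint
hsc_ra, hsc_dec = rng.uniform(149.5, 150.5, 1000), rng.uniform(1.5, 2.5, 1000)
cos_ra, cos_dec = rng.uniform(149.5, 150.5, 5000), rng.uniform(1.5, 2.5, 5000)

hsc = SkyCoord(ra=hsc_ra * u.deg, dec=hsc_dec * u.deg)
cosmos = SkyCoord(ra=cos_ra * u.deg, dec=cos_dec * u.deg)

# Nearest photo-z object for every HSC object, kept if within 1.0"
idx, sep2d, _ = hsc.match_to_catalog_sky(cosmos)
matched = sep2d < 1.0 * u.arcsec
print(matched.sum(), "matches within 1 arcsec")
```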
Meanwhile, we investigate the contamination via the ratio of galaxies at z<3 or z>5 among those passing the selection criteria in each 0.1 magnitude bin. It is found that the contamination rate is 10% to 30% in the magnitude range of i=23.5-24.5, and sharply increases to >50% at i=25.0. In total, all contaminating sources are found to be at z<3, and 95% of them are at z<1. We multiply the contamination rate as a function of the i-band magnitude by the number counts of the LBG candidates in each 0.1 magnitude bin to estimate the total number of contaminating sources in the sample. Among the 25790 LBG candidates, 5886 are expected to be contaminating objects at z<3, i.e. the contamination rate is 22.8%.Furthermore, we also check the photometric redshifts of the LBG candidates determined with the 5-band HSC Wide-layer photometry by the MIZUKI photometric redshift code, which uses Bayesian photometric redshift estimation <cit.>. Among the 25790 z∼4 LBG candidates, 25749 have a photometric redshift from the MIZUKI code, and 4091 of them have a photometric redshift lower than z=3.0. The contamination rate is evaluated to be 15.9%, which is similar to the one evaluated in the COSMOS region. Since the COSMOS photometric redshift catalog is based on the 30-band photometry with a wider wavelength coverage, we adopt the contamination rate evaluated in the COSMOS region in the later clustering analysis.§.§ Constructing random objects for the clustering analysis The clustering strength is evaluated by comparing the number of pairs of real objects with that of mock objects distributed randomly in the survey area. Therefore it is necessary to construct a sample of mock objects that are distributed randomly within the survey area and are selected with the same selection function as the real sample. From z=3 to 5, we construct 3000 mock LBG SEDs, normalized to have i=24.5, at each 0.1 redshift bin. Then we place the mock LBGs randomly over the survey region with a surface number density of 2,000 LBGs per deg^2, with photometric errors as described in section <ref>. After applying the same color selection and magnitude error criteria as for the real objects, we create a sample of 150,756 random LBGs, which reproduce the global distribution of the real LBGs, including the edge of the survey region where the depth is shallower. Therefore, the clustering analysis is not affected by the discrepancy in the sky coverage between the quasars and LBGs.§ CLUSTERING ANALYSIS §.§ Cross-correlation functions of the less-luminous and luminous quasars at z∼4 We evaluate the CCFs of the z∼4 quasars and LBGs with the projected two-point angular correlation function, ω(θ), since most of the quasar and LBG candidates do not have spectroscopic redshifts. We use the estimator from <cit.>,ω(θ)=DD(θ)/DR(θ)-1,where DD(θ)=⟨ DD ⟩/N_ QSON_ LBG and DR(θ)=⟨ DR ⟩/N_ QSON_ R are the normalized quasar-LBG pair counts and quasar-random LBG pair counts in an annulus between θ-Δθ and θ+Δθ, respectively. Here, ⟨ DD ⟩ and ⟨ DR ⟩ are the numbers of quasar-LBG and quasar-random LBG pairs in the annulus, and N_ QSO, N_ LBG and N_ R are the total numbers of quasars, LBGs and random LBGs, respectively. We set 14 logarithmic bins from 1.0" to 1000.0". The CCFs of the quasars and LBGs for the less-luminous and luminous quasars are plotted in the left and right panels of figure <ref>, respectively, and summarized in table <ref> along with the pair counts in each bin.
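Schematically, the estimator amounts to the following computation; this is a simplified flat-sky sketch in Python (assuming numpy), shown for illustration only and not the pipeline used for the measurements in this paper.

```python
import numpy as np

def pair_counts(ra1, dec1, ra2, dec2, bins):
    """Cross pair counts in angular bins [arcsec]; small-angle approximation."""
    dra = (ra1[:, None] - ra2[None, :]) * np.cos(np.deg2rad(dec1))[:, None]
    ddec = dec1[:, None] - dec2[None, :]
    theta = np.hypot(dra, ddec) * 3600.0  # deg -> arcsec
    return np.histogram(theta, bins=bins)[0]

def ccf(qso, lbg, rand, bins):
    dd = pair_counts(*qso, *lbg, bins) / (len(qso[0]) * len(lbg[0]))
    dr = pair_counts(*qso, *rand, bins) / (len(qso[0]) * len(rand[0]))
    with np.errstate(divide="ignore", invalid="ignore"):
        w = dd / dr - 1.0
    return np.where(dr > 0, w, np.nan)  # guard empty random bins

bins = np.logspace(0.0, 3.0, 15)  # 14 logarithmic bins from 1" to 1000"
rng = np.random.default_rng(1)    # mock (ra, dec) tuples in degrees
qso = (rng.uniform(0, 1, 50), rng.uniform(0, 1, 50))
lbg = (rng.uniform(0, 1, 2000), rng.uniform(0, 1, 2000))
rand = (rng.uniform(0, 1, 10000), rng.uniform(0, 1, 10000))
print(ccf(qso, lbg, rand, bins))
```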
The uncertainty of the CCFs is evaluated through jackknife resampling <cit.>. We separate the survey area into N=22 subregions of similar size. In the i-th resampling, we ignore one of the subregions, construct a new set of samples of quasars, LBGs, and random LBGs, and estimate their correlation function, ω_i. We evaluate the uncertainty only with the diagonal elements of the covariance matrix Cov(ω_i,ω_j)=(N-1)/N∑_k=1^N(ω^k_i-ω̄_i)(ω^k_j-ω̄_j), where ω̄_i is the mean of ω_i over the N jackknife samples, because the diagonal elements are sufficient to recover the true uncertainty <cit.>. The ω_i at each radius bin are consistent with the CCFs of the whole samples of the less-luminous and luminous quasars. The resulting uncertainty from the jackknife resampling is about 1.5-2 times larger than the Poisson error (σ(θ)=(1+ω(θ))/√(N_pair)) on scales beyond 500.0”, but the two error estimators are consistent with each other on scales within 300.0”. On scales smaller than 20.0”, due to the limited quasar-LBG pair count, the Poisson error can be even larger than the jackknife one if we evaluate the Poisson uncertainty with the Poisson statistics for a small sample <cit.>. Here, since we do not consider the small scales within 10.0” in the fitting process, we adopt the jackknife error for the CCF beyond 10.0”. For the scales within 10.0”, if the jackknife estimator fails to give a value due to a missing ⟨ DD ⟩ or ⟨ DR ⟩ pair count in one of the subsamples, we show the Poisson error following the Poisson statistics for a small sample <cit.> in table <ref> and figure <ref>.The binned CCF is fitted through χ^2 minimization with a single power-law modelω(θ)=A_ωθ^-β- IC.We apply a β of 0.86, which is determined from the ACF of the LBGs in the following section <ref>. IC is the integral constraint, a negative offset due to the restricted area of an observation <cit.>. As described in <cit.>, the integral constraint can be estimated by integrating the true ω(θ) over the total survey area Ω asIC=1/Ω^2∫∫ω(θ)dΩ_1dΩ_2.We calculate the integral constraint using random LBG-random LBG pairs over the entire survey area throughIC=∑[RR(θ)A_ωθ^-β]/∑RR(θ),following <cit.>. Since the survey area is wide and the scales of interest are within 1000.0”, IC/A_ω is small compared to the observed CCFs and the IC term can be neglected in the fitting process. In this study, we focus on the large-scale clustering between two halos, i.e. the two-halo term. Thus the excess within an individual halo (the one-halo term) is not considered in the fitting process. The radial scale of the region dominated by the one-halo term is estimated to be 0.2-0.5 comoving h^-1Mpc (e.g. <cit.>; <cit.>). At redshift 4, the corresponding angular separation is ∼10.0”-20.0”. Thus we fit the binned CCF with A_ω on scales larger than 10.0”. The best-fit A_ω is summarized in table <ref>, where the upper and lower limits correspond to Δχ^2=1 from the minimum χ^2. Here, the χ^2 fitting fails to fit the CCF of the SDSS luminous quasars, which has negative bins, due to the limited luminous quasar sample size.Another fitting method, the maximum likelihood (ML) method, which does not require a specific binning, is applied to the CCFs, since the χ^2 fitting to the binned CCFs can be highly affected by the negative bins.
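The jackknife covariance above reduces to a short computation once the leave-one-out measurements are collected; a schematic numpy sketch (with the array w_jk holding one CCF per resampling) is:

```python
import numpy as np

def jackknife_covariance(w_jk):
    """w_jk: array of shape (N_sub, N_bins), one correlation function per
    leave-one-subregion-out resampling."""
    n = w_jk.shape[0]
    diff = w_jk - w_jk.mean(axis=0)
    return (n - 1.0) / n * diff.T @ diff  # full covariance matrix

# Mock input: 22 subregions, 14 angular bins; only the diagonal is used here
w_jk = np.random.default_rng(2).normal(size=(22, 14))
sigma = np.sqrt(np.diag(jackknife_covariance(w_jk)))
print(sigma)
```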
As described in <cit.>, if we assume that the pair counts in each bin follow the Poisson distribution, we can define the likelihood of obtaining the observed pair sample from a model of the correlation function asℒ=∏_i=1^N_bins e^-h(θ_i)h(θ_i)^⟨ DD(θ_i) ⟩/⟨ DD(θ_i) ⟩!,where h(θ)=(1+ω(θ))⟨ DR(θ)⟩ is the expected mean object-object pair count evaluated from the object-random object pair counts within a small interval around θ. Here, ω(θ) is the power-law model (equation (15)). Then, we can define a function for minimization, S∼-2lnℒ, as S=2∑_i^N_bins[h(θ_i)-⟨ DD(θ_i) ⟩ln h(θ_i)],where only terms dependent on the model parameters are kept. Assuming that S follows a χ^2 distribution with one degree of freedom, the parameter range with ΔS=1 from the minimum value corresponds to a 68% confidence range of the parameter. The ML fitting is applied to the CCFs in the range between 10.0” and 1000.0” with an interval of 0.5”. The interval is set to keep the object-object pair count in each bin small enough that the bins are independent of each other. The best-fit parameters are summarized in table <ref>. The ML method yields a slightly higher A_ω than the χ^2 fitting, but still consistent within the 1σ uncertainty. However, in a range containing several negative bins, the best ML fitting model can lie below the positive bins of the binned CCF, as can be seen in the right panel of figure <ref>. It is reported that the assumption that pair counts follow the Poisson statistics (i.e., that clustering is negligible) underestimates the uncertainty of the fitting <cit.>. We find that the scatter of the ML fitting is only slightly smaller than that of the χ^2 fitting. Therefore, we adopt the ML fitting results hereafter for both of the CCFs, since both of them have negative bins in the binned CCFs. The contamination rates of the HSC quasar and LBG samples are taken into account by A'_ω=A^ fit_ω/[(1-f^ QSO_c)(1-f^ LBG_c)],where f^ QSO_c and f^ LBG_c are the contamination rates of the less-luminous quasar and LBG samples estimated in sections <ref> and <ref>, respectively. Since we do not know the redshift distributions and clustering properties of the contaminating sources, we simply assume that they are randomly distributed in the survey area. The A_ω after correcting for the contamination is listed in table <ref>. We note that the contaminating galaxies or galactic stars can have their own spatial distributions. For example, it is reported that galactic stars cause a measurable deviation from the true correlation function only on scales of a degree or more due to their own clustering (e.g. <cit.>; <cit.>). Therefore the correction in this work only gives an upper limit on the true A_ω, and we rely on the values without the correction in the discussion. §.§ Auto-correlation function of z∼4 LBGs In order to derive the bias factor of the quasars from the strength of the quasar-LBG CCFs, we need to evaluate the bias factor of the LBGs from the LBG ACF. The binned ACF of the z∼4 LBGs is derived in the same way as the quasar-LBG CCF. We use the estimatorω(θ)=DD(θ)/DR(θ)-1,where DD(θ)=⟨ DD ⟩/(N_LBG(N_LBG-1)/2) and DR(θ)=⟨ DR ⟩/N_LBGN_R are the normalized LBG-LBG and LBG-random LBG pair counts in the annulus between θ-Δθ and θ+Δθ, respectively. Here, ⟨ DD ⟩ and ⟨ DR ⟩ are the numbers of LBG-LBG and LBG-random LBG pairs in the annulus, and N_LBG and N_R are the total numbers of LBGs and random LBGs, respectively. We set 14 logarithmic bins from 1.0" to 1000.0".
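The ML fit of the CCFs described above can be illustrated with a simple grid search over A_ω; this is a schematic sketch with mock pair counts (assuming numpy), not our actual measurement.

```python
import numpy as np

def S_of_A(A, theta, dd, dr, beta=0.86):
    # S = 2*sum[h_i - <DD_i>*ln h_i], with h_i = (1 + A*theta_i**-beta)*<DR_i>
    h = (1.0 + A * theta**(-beta)) * dr
    return 2.0 * np.sum(h - dd * np.log(h))

def ml_fit(theta, dd, dr, A_grid):
    S = np.array([S_of_A(A, theta, dd, dr) for A in A_grid])
    best = A_grid[np.argmin(S)]
    inside = A_grid[S <= S.min() + 1.0]  # Delta S = 1: 68% confidence range
    return best, inside.min(), inside.max()

# Mock data: bin centres at 0.5" intervals from 10" to 1000", Poisson DD counts
theta = np.arange(10.0, 1000.0, 0.5)
dr = 0.5 * theta                        # toy smooth quasar-random counts
dd = np.random.default_rng(3).poisson((1 + 40.0 * theta**-0.86) * dr)
print(ml_fit(theta, dd, dr, np.linspace(1.0, 100.0, 400)))  # recovers A ~ 40
```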
The LBG ACF is shown in figure <ref> and table <ref> along with the pair counts. Thanks to the large sample of LBGs, the LBG-LBG pair count is large enough to constrain the ACF even in the smallest bin. We adopt the jackknife error, which is about two times larger than the Poisson error in all bins. Most of the bins have a clustering signal above 3σ.We fit the raw LBG ACF with a single power-law model ω(θ)=A_ωθ^-β- IC by χ^2 minimization on scales from 10.0” to 1000.0”. The integral constraint is negligible. Thanks to the small uncertainty of the LBG ACF, the power-law index can be constrained tightly to β=0.86^+0.07_-0.06, as shown in figure <ref>. As already mentioned in section 3.1, we adopt this power-law index throughout this paper. The best-fit parameters are listed in table <ref>.The effect of the contamination is evaluated withA'_ω=A^ fit_ω/(1-f^ LBG_c)^2.The results are listed in table <ref>. We do not consider the contamination when fitting the power-law index β, because it would not be affected by a random contamination.§ DISCUSSION §.§ Clustering bias from the correlation length One of the parameters representing the clustering strength is the spatial correlation length, r_0 (h^-1 Mpc), which appears in the spatial correlation function of power-law form, ξ(r)=(r/r_0)^-γ,where γ is related to the index of the projected correlation function through γ=1+β. The spatial correlation function can be projected to the angular correlation function through Limber's equation (<cit.>). We ignore the redshift evolution of the clustering strength within the covered redshift range. Then the spatial correlation length of the ACF can be derived from the amplitude of the angular correlation function, A_ω, asr_0=[ A_ω(c/H_0)(1/H_γ)[∫ N(z)dz]^2/∫ N^2(z)χ(z)^1-γE(z)dz ]^1/γ,where H_γ=Γ(1/2)Γ((γ-1)/2)/Γ(γ/2), E(z)=[Ω_m(1+z)^3+Ω_Λ]^1/2, χ(z)=(c/H_0)∫_0^z 1/E(z')dz',and N(z) is the redshift distribution of the sample. For the CCF, the same relation can be modified to <cit.>r_0=[ A_ω(c/H_0)(1/H_γ)∫ N_ QSO(z)dz∫ N_ LBG(z)dz/∫ N_ QSO(z)N_ LBG(z)χ(z)^1-γE(z)dz ]^1/γ.Applying the redshift distributions of the less-luminous quasars, the luminous quasars and the LBGs at z∼4, estimated in section <ref> for N_ QSO(z) and section <ref> for N_ LBG(z), respectively, we evaluate r_0 from A_ω with and without the contamination correction, as summarized in table <ref>. Although the contamination rates of the less-luminous quasars and the LBGs are not high, the correlation lengths of the less-luminous quasar-LBG CCF and the LBG ACF are significantly increased after correcting for the contamination. Meanwhile, the r_0 of the luminous quasar-LBG CCF varies only slightly, because the SDSS quasar sample is not affected by contamination. The measurement of r_0 is sensitive to the assumed redshift distribution of the sample. For example, r_0 will be smaller if we assume a narrower redshift distribution, even for the same A_ω. As discussed in section <ref>, the redshift distribution of the LBGs is estimated to be more extended than those of both the less-luminous and luminous quasar samples. If we assume the redshift distribution of the LBGs to be the same as that of the less-luminous quasars, the r_0 of the CCF between the LBGs and the less-luminous quasars decreases to 5.52^+0.77_-0.87 h^-1Mpc, which is 23% lower than that estimated originally, because the fraction of the LBGs contributing to the projected correlation function in the overlapping redshift range increases, yielding a weaker correlation strength, i.e.
The bias factor is defined through the ratio of the clustering strength of real objects to that of the underlying dark matter at the scale of 8 h^-1Mpc, b=√(ξ(8,z)/ξ_DM(8,z)). The clustering strength of the underlying dark matter can be evaluated based on the linear structure formation theory under the cold dark matter model <cit.> as ξ_DM(8,z)=((3-γ)(4-γ)(6-γ)2^γ/72)[σ_8 (g(z)/g(0)) (1/(1+z))]^2, where g(z)=(5Ω_mz/2) [Ω^4/7_mz-Ω_Λ z+(1+Ω_mz/2)(1+Ω_Λ z/70) ]^-1, and Ω_mz=Ω_m(1+z)^3/E(z)^2, Ω_Λ z=Ω_Λ/E(z)^2. We derive the bias factors b_ LBG and b_ QG from the spatial correlation lengths of the LBG ACF and the quasar-LBG CCF, respectively. Following <cit.>, the quasar bias factor is then evaluated from the bias factor of the CCF by b_ QSOb_ LBG∼ b^2_ QG. We list the LBG ACF bias factors in table <ref>. The estimated b_ LBG with and without the contamination correction are consistent with <cit.> and with the brightest bin at M_UV∼-21.3 in <cit.>, respectively. The quasar bias factors derived from the CCF are summarized in table <ref>. §.§ Bias factor from comparing with the HALOFIT power spectrum The bias factors can also be derived by directly comparing the observed clustering with the predicted clustering of the underlying dark matter from the power spectrum Δ^2(k,z) (e.g., <cit.>). The spatial correlation function derived from Δ^2(k,z) can be projected with Limber's equation into the angular correlation function ω_ DM(θ) as ω_ DM(θ)=π∫∫ (Δ^2(k,z)/k) J_0[kθχ(z)] N^2(z) (dz/dχ) F(χ) (dk/k) dz, where J_0 is the zeroth-order Bessel function, χ is the radial comoving distance, N(z) is the normalized redshift distribution function, dz/dχ=H_z/c=H_0[Ω_m(1+z)^3+Ω_Λ]^1/2/c, and F(χ)=1 for a flat universe. We evaluate the non-linear evolution of the power spectrum Δ_NL^2(k, z) in the redshift range between z=3 and 5 with the HALOFIT code <cit.>, adopting the cosmological parameters used throughout this paper. The bias parameters are derived by fitting b^2ω_ DM(θ) to the observed correlation functions, ω_ obs(θ). For the LBG ACF, ω_ DM(θ) is directly compared to ω_ obs(θ) through χ^2 minimization. For the CCFs, the redshift distribution in equation <ref> is replaced by the product of those of the quasars and the LBGs as ω_ DM-CCF(θ)=π∫∫ (Δ^2(k,z)/k) J_0[kθχ(z)] N_ QSO(z)N_ LBG(z) (dz/dχ) F(χ) (dk/k) dz. On scales from 10.0” to 1000.0”, both the χ^2 and the ML fittings are applied to the less-luminous quasar CCF, while only the ML fitting works for the luminous quasar CCF. The bias factors of the quasar samples are derived from the CCF and the LBG ACF through equation (33). The best fit bias factors are summarized in table <ref> and table <ref>. They are consistent with those derived from the power-law fitting within the 1σ uncertainty. Thus the power-law approximation with an index of β=0.86 can reproduce the underlying dark matter distribution well on scales larger than 10.0”. On scales below 10.0”, the underlying dark matter model becomes flat, since we do not consider the one-halo term. If we compare the observed correlation functions with the best-fit power-spectrum models, there is an obvious overdensity of galaxies on those scales in figure <ref>, which is consistent with the one-halo term of the LBG ACF at z∼4 (e.g., <cit.>). The left panel of figure <ref> also shows an overdensity of galaxies within 10.0” around the less-luminous quasars, although the error bars are large.
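For concreteness, the following minimal sketch (ours) carries out the projection and the one-parameter amplitude fit just described; Delta2 stands for a tabulated (e.g. HALOFIT) dimensionless power spectrum supplied by the user, and all array names are assumptions.

import numpy as np
from scipy.special import j0

# w_DM(theta) = pi * int int Delta^2(k,z)/k * J0(k theta chi) N^2(z) (dz/dchi) dk/k dz
# for a flat universe (F(chi) = 1), followed by a least-squares fit of b^2 w_DM.
def w_dm(theta_rad, z, N, chi, Ez, Delta2, k=np.logspace(-3.0, 2.0, 600)):
    dz_dchi = 100.0*Ez/2.998e5                     # H(z)/c in (h^-1 Mpc)^-1
    w = np.empty_like(theta_rad)
    for i, th in enumerate(theta_rad):
        inner = np.trapz(Delta2(k[None, :], z[:, None])/k[None, :]**2
                         * j0(k[None, :]*th*chi[:, None]), k, axis=1)
        w[i] = np.pi*np.trapz(inner*N**2*dz_dchi, z)
    return w

def fit_bias(w_obs, err, w_model):
    # amplitude b^2 minimising chi^2 for w_obs ~ b^2 w_model
    wgt = 1.0/err**2
    b2 = np.sum(wgt*w_obs*w_model)/np.sum(wgt*w_model**2)
    return np.sqrt(max(b2, 0.0))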
Interestingly, we find that the luminous quasars show no pair counts within 10.0” in the right panel of figure <ref>. It should be noted that the best fit model on scales larger than 10.0” predicts only 1 SDSS quasar - HSC LBG pair within 10.0”, which is consistent with the observed absence of pairs. Thus the lack of pairs may simply be caused by the limited size of the SDSS quasar sample, though we cannot exclude the possibility that there is a real deficit of galaxies around luminous quasars within 10.0”. We account for the contamination by modifying the normalization of the redshift distribution, ∫_0^∞N(z)dz∼1-f_c, for the less-luminous quasars and the LBGs, respectively. We simply assume that the contaminants do not contribute to the underlying dark matter correlation function. The modified underlying dark matter correlation functions are plotted in figures <ref> and <ref>. Since the shape of the redshift distribution is unchanged after considering the contamination, only the amplitude of the underlying dark matter correlation function changes. The bias factors with contamination are listed in table <ref> and table <ref>; they are consistent with those derived from fitting with the power-law model after correcting for the contamination. §.§ Redshift and luminosity dependence of the bias factor First, we discuss the luminosity dependence of the bias factors of the luminous and less-luminous quasars in this work. The bias factor of the less-luminous quasars is 5.93^+1.34_-1.43, derived by fitting the CCF with the underlying dark matter model on scales from 10.0” to 1000.0” with the ML fitting. It is consistent, within the 1σ uncertainty, with that of the luminous quasars, 2.73^+2.44_-2.55, obtained from their CCF through the same method. If we consider the possible effect of the contamination, the bias factor of the less-luminous quasars increases to 6.58^+1.49_-1.58, which is still consistent with that of the luminous quasars within the uncertainty. Thus no, or only a weak, luminosity dependence of the quasar clustering is detected between the two samples. In order to discuss the redshift dependence of the quasar clustering, we compare the bias factors with those in the literature in the left panel of figure <ref>. The bias factors in previous studies show a trend that quasars at higher redshifts are more strongly biased, indicating that quasars preferentially reside in DMHs within a mass range of 10^12∼10^13h^-1M_⊙ from z∼0 to z∼4. There is no discrepancy between the bias factors estimated with the ACF and the CCF at z≲3. In this work, the bias factor of the less-luminous quasars at z∼4 follows the trend, while the bias factor of the luminous quasars is similar to or even smaller than those at z∼3. The luminosity dependence of the quasar bias factors at z∼3-4 is summarized in the right panel of figure <ref>. The bias factors of the less-luminous quasars, both with and without the contamination correction, are consistent with, but slightly higher than, those evaluated with the CCF of 54 faint quasars in the magnitude range of -25.0<M_UV<-19.0 at 1.6<z<3.7 measured by <cit.>, the CCF of 58 faint quasars in the magnitude range of -26.0<M_UV<-20.0 at 2.8<z<3.8 measured by <cit.>, and the CCF of 25 faint quasars in the magnitude range of -24.0<M_UV<-22.0 at 3.1<z<4.5 measured by <cit.>, which suggests a slightly increasing or no evolution from z=3 to z=4.
Meanwhile, for the clustering of the luminous quasars, the bias factor in this work is consistent with the CCF of 25 bright quasars in the magnitude range of -30.0<M_UV<-25.0 at 1.6<z<3.7 measured by <cit.> and the ACF of 24724 bright quasars in the magnitude range of -27.81<M_UV<-22.9 mag at 2.64<z<3.4 measured by <cit.>. In contrast to the case of the less-luminous quasars, the clustering of the luminous quasars suggests no, or a declining, evolution from z∼3 to z∼4. The bias factor of the luminous quasars in this work shows a large discrepancy with the ACF of 1788 bright quasars in the magnitude range of -28.2<M_UV<-25.8 (converted from M_i(z=2) with equation (3) in <cit.>) at 3.5<z<5.0 measured by <cit.>. They give two values for the bias factor: the higher one is obtained by considering only the positive bins, while the lower one considers all of the bins in the ACF. The bias factor from another subsample of bright quasars covering -28.0<M_UV<-23.95 at 2.9<z<3.5 in <cit.> is also shown in the panel. The z∼4 quasar bias factors in <cit.> show a large discrepancy from the bias factors of the luminous quasars in this work and in <cit.>, which have similar magnitude and redshift coverage. In the right panel of figure <ref>, we plot the expected CCF with b_ QG∼√(b_ QSOb_ LBG)=9.83 as the orange dash-dot-dotted line. We adopt the higher b_ QSO in <cit.> and the b_ LBG with the contamination correction to obtain an upper limit on b_ QG. Although the expected CCF is consistent with some bins within the 1σ uncertainty, it predicts much stronger clustering than both the best-fit power-law and dark matter models. In order to quantitatively examine the discrepancy, we plot the minimization function S of the ML fitting for the luminous quasars with the HALOFIT power spectrum as a function of the bias factor in figure <ref>. Both of the bias factors at 3.5<z<5 in <cit.> lie beyond the 1σ uncertainty, corresponding to a low probability. Meanwhile, the bias factor in <cit.>, whose uncertainty is small thanks to the large sample, also shows a large discrepancy from those in <cit.>. <cit.> suspect the discrepancy is mainly caused by a difference in the large-scale bins (>30h^-1Mpc). We further investigate the effect of the fitting scale, as shown in table <ref> and the right panel of figure <ref>. On scales of 40.0” to 160.0”, we find a strong CCF of the luminous quasars and the LBGs, which is consistent with the ACF of the luminous quasars. On scales below 40.0”, the ML fitting suggests a b_ QG of 0. On larger scales, the ML fitting is not effective, since the pair counts in each bin are too large to fulfill the assumption that the bins are independent of each other, even when choosing a small bin width of 0.5”. Therefore we expand the ML fitting scale only up to 2000.0”. If we consider the power-law model, the b_ QG obtained by fitting in the range of 40.0” to 1000.0” is 24.7% and 7.6% higher than that estimated in the range of 10.0” to 1000.0” and of 40.0” to 2000.0”, respectively, which suggests that the deficit of luminous quasar-LBG pairs on small scales weakens the CCF more severely than including scales larger than 1000.0” does. Such a deficit may be an indication of feedback from the luminous quasars.
Since the fitting of the luminous quasar CCF strongly depends on the scale, especially on small scales, we focus throughout the discussion on the results obtained on scales of 10.0” to 1000.0”, to remain consistent with the LBG ACF and the less-luminous quasar CCF. Quasar clustering models based on semi-analytic galaxy models predict no luminosity dependence of the quasar clustering at redshift 4 (e.g. <cit.>; <cit.>). Although these models contain a relation between the masses of the SMBHs and their DMHs, SMBHs in a wide mass range contribute to quasars at a fixed luminosity; thus there is no relation between the luminosity of model quasars and the mass of their DMHs. The predicted quasar bias factor at redshift 4 in <cit.> is 3.0∼5.0, which is consistent with the quasar bias factors in this work. No luminosity dependence is predicted either in the continuous SMBH growth model of <cit.>, which assumes Eddington-limited SMBH growth until redshift 2. However, the predicted bias factor is much larger than the results in this work. On the other hand, there are models which predict a stronger luminosity dependence of the quasar clustering at higher redshifts (e.g. <cit.>; <cit.>). These models predict that SMBHs in a narrow mass range contribute to the luminous quasars. In order to clarify the luminosity and redshift dependences of the quasar clustering, we need to understand the cause of the discrepancy between the quasar ACF and the quasar-LBG CCF for the luminous quasars at z∼4. The quasar-LBG CCF could be affected by the suppression of galaxy formation due to feedback from luminous quasars (e.g. <cit.>; <cit.>; <cit.>). The weak cross-correlation could also be induced by a discrepancy between the redshift distributions of the quasars and the LBGs. We need to further determine the redshift distribution through spectroscopic follow-up observations of the LBGs. §.§ DMH mass The bias factor of a population of objects is directly related to the typical mass of their host DMHs, because more massive DMHs are more strongly clustered and biased in the structure formation under the ΛCDM model <cit.>. The relation between M_ DMH and the bias factor is derived based on an ellipsoidal collapse model calibrated against an N-body simulation as b(M,z) = 1+(1/(√(a)δ_crit))[√(a)(aν^2)+√(a)b(aν^2)^(1-c) -(aν^2)^c/((aν^2)^c+b(1-c)(1-c/2))], where ν=δ_crit/(σ(M)D(z)) and the critical density is δ_crit=1.686 <cit.>. We adopt the updated parameters a=0.707, b=0.35, c=0.80 in <cit.>. The rms mass fluctuation σ(M) on a mass scale M at redshift 0 is given by σ^2(M)=∫Δ^2(k)W̃^2(kR)dk/k, and M(R)=4πρ_0 R^3/3, where R is the comoving radius, W̃(kR)=3(sin(kR)-(kR)cos(kR))/(kR)^3 is the top-hat window function in Fourier space and ρ_0=2.78×10^11Ω_mh^2M_⊙ Mpc^-3 is the mean density in the current universe. The linear power spectrum Δ^2(k) at redshift 0 is obtained from the HALOFIT code <cit.>. The growth factor D(z) is approximated by D(z)∝g(z)/(1+z) following <cit.>. Assuming that the quasars and LBGs are associated with DMHs in a narrow mass range, we can infer the mass of the quasar host DMHs through the above relations. The evaluated halo masses of the less-luminous quasars and the luminous quasars are 1∼2×10^12h^-1M_⊙ and <10^12h^-1M_⊙, respectively, as summarized in table <ref>. Since the bias factor of the luminous quasars has a large uncertainty, we can only set an upper limit on M_ DMH. We note that the halo mass strongly depends on the amplitude of the power spectrum on the scale of 8 h^-1 Mpc, σ_8.
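To make this mapping and its σ_8 sensitivity concrete, the following sketch (ours) evaluates σ(M) and the Sheth, Mo & Tormen bias; the toy power-spectrum shape and the σ_8=0.8 normalisation are placeholder assumptions standing in for the actual HALOFIT output.

import numpy as np

a_p, b_p, c_p, delta_c = 0.707, 0.35, 0.80, 1.686
rho0 = 2.78e11*0.3                            # mean density for Om=0.3 [h^2 Msun Mpc^-3]
k = np.logspace(-4, 3, 2000)                  # h Mpc^-1
Delta2 = (k/0.25)**4/(1.0 + (k/0.25)**6.4)    # toy z=0 spectrum shape (illustration only)

def sigma_R(R):
    x = np.outer(np.atleast_1d(R), k)
    W = 3.0*(np.sin(x) - x*np.cos(x))/x**3    # top-hat window
    s = np.sqrt(np.trapz(Delta2*W**2/k, k, axis=-1))
    return s if np.ndim(R) else float(s[0])

Delta2 *= (0.8/sigma_R(8.0))**2               # normalise to an assumed sigma_8 = 0.8

def sigma_M(M):
    R = (3.0*M/(4.0*np.pi*rho0))**(1.0/3.0)
    return sigma_R(R)

def bias(M, Dz):
    # Sheth, Mo & Tormen (2001) bias; Dz = D(z)/D(0) ~ g(z)/((1+z) g(0))
    nu2 = (delta_c/(sigma_M(M)*Dz))**2
    an2 = a_p*nu2
    return 1.0 + (np.sqrt(a_p)*an2 + np.sqrt(a_p)*b_p*an2**(1.0-c_p)
                  - an2**c_p/(an2**c_p + b_p*(1.0-c_p)*(1.0-c_p/2.0)))/(np.sqrt(a_p)*delta_c)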
If we adopt σ_8=0.9, the host DMH mass of the less-luminous quasars will be 4-6×10^12h^-1M_⊙ with the same bias factor. §.§ Minimum halo mass and duty cycle In the above discussion, we assume that quasars are associated with DMHs in a specific mass range, but it may be more physical to assume that quasars are associated with DMHs with masses above a critical mass, M_ min. In this case, the effective bias for a population of objects randomly associated with DMHs above M_ min can be expressed as b_ eff=∫_M_ min^∞b(M)n(M)dM/∫_M_ min^∞n(M)dM, where n(M) is the mass function of DMHs and b(M,z) is the bias factor of DMHs with mass M at z. We adopt the DMH mass function from the modified Press-Schechter theory <cit.> as n(M,z) = -A√(2a/π)(ρ_0/M)(δ_c(z)/σ^2(M))(dσ(M)/dM) {1+[σ^2(M)/(aδ_c^2(z))]^p}exp[-aδ_c^2(z)/(2σ^2(M))], where A=0.3222, a=0.707, p=0.3 and δ_c(z)=δ_crit/D(z). If we follow the above formulation, M_ min is estimated to be ∼0.3-2×10^12h^-1M_⊙ and <5.62×10^11h^-1M_⊙ with the bias factors of the less-luminous quasars and the luminous quasars, respectively. Comparing the number density of the DMHs above M_ min with that of the less-luminous and luminous quasars, we can infer the duty-cycle of the quasar activity among the DMHs in that mass range by f=n_QSO/∫_M_min^∞n(M)dM, assuming that one DMH contains one SMBH. The co-moving number density of z∼4 less-luminous quasars is estimated with the HSC quasar sample <cit.>. Integrating the best-fit luminosity function of z∼4 quasars from M_ 1450∼-24.73 to M_ 1450∼-22.23, we estimate the total number density of the less-luminous quasars to be 1.07×10^-6h^3Mpc^-3, which is about 2.5 times higher than that of the luminous quasars with -28.00<M_ 1450<-23.95 (4.21×10^-7h^3Mpc^-3). If we adopt the n(M) in equation (41), the duty-cycle is estimated to be 0.001∼0.06 and <8×10^-4 for M_ min from the less-luminous and the luminous quasar CCF, respectively. If we use the bias factor estimated by considering the effect of the possible contamination, the duty cycle of the less-luminous quasars is estimated to be 0.003∼0.175, which is higher than the estimate above. We compare the duty-cycles with those evaluated for quasars at 2<z<4 in the literature in figure <ref>. The estimated luminosity dependence of the duty-cycles is similar to that estimated for quasars in a similar luminosity range at z∼2.6 <cit.>, although the duty-cycles at z∼4 are one order of magnitude smaller than those at z∼2.6. The estimated duty-cycle corresponds to a duration of the less-luminous quasar activity of 1.5∼90.8 Myr, which is broadly consistent with the quasar lifetime range of 1∼100 Myr estimated in previous studies (for a review see <cit.>). It needs to be noted that the estimated duty-cycle is sensitive to the measured strength of the quasar clustering. A small variation in the bias factor can result in as much as an order of magnitude difference in the duty-cycle, because of the non-linear relation between b and M_ DMH and the sharp cut-off of n(M) at the high-mass end. Furthermore, the duty-cycle is also sensitive to the assumed value of σ_8 <cit.>.
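The following companion sketch (ours) turns these two formulas into a duty-cycle estimate; sigma and D are assumed to be provided as in the previous sketch, and n_qso is the measured quasar number density in h^3 Mpc^-3.

import numpy as np

A_st, a_st, p_st = 0.3222, 0.707, 0.3

def st_mass_function(M, z, sigma, D, rho0=2.78e11*0.3, dlnM=1e-3):
    # Sheth & Tormen mass function of equation (41); dc = delta_crit / D(z)
    dc = 1.686/D(z)
    s = sigma(M)
    dsdM = (sigma(M*(1+dlnM)) - sigma(M*(1-dlnM)))/(2*M*dlnM)  # d sigma / dM < 0
    return (-A_st*np.sqrt(2*a_st/np.pi)*(rho0/M)*(dc/s**2)*dsdM
            *(1 + (s**2/(a_st*dc**2))**p_st)*np.exp(-a_st*dc**2/(2*s**2)))

def duty_cycle(n_qso, M_min, z, sigma, D):
    # f = n_QSO / int_{M_min}^inf n(M) dM, integrating in ln M
    lnM = np.linspace(np.log(M_min), np.log(1e16), 400)
    M = np.exp(lnM)
    n = np.array([st_mass_function(m, z, sigma, D) for m in M])
    return n_qso/np.trapz(n*M, lnM)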
§ SUMMARY We examine the clustering of a sample of 901 less-luminous quasars with -24.73<M_ 1450<-22.23 at 3.1<z<4.6 selected from the HSC S16A Wide2 catalog and of a sample of 342 luminous quasars with -28.00<M_ 1450<-23.95 at 3.4<z_ spec<4.6 within the HSC S16A Wide2 coverage from the 12th data release of SDSS. We investigate the quasar clustering through the CCF between the quasars and a sample of 25790 bright LBGs with M_ 1450<-21.25 in the same redshift range from the HSC S16A Wide2 data release. The main results are as follows. 1. The bias factor of the less-luminous quasars is 5.93^+1.34_-1.43, derived by fitting the CCF with the dark matter power-spectrum model through the ML method, while that of the luminous quasars is 2.73^+2.44_-2.55, obtained in the same manner. If we consider the contamination rates of 22.7% and 10.0% estimated for the LBG and the less-luminous quasar samples, respectively, the bias factor of the less-luminous quasars can increase to 6.58^+1.49_-1.58, under the assumption that the contaminating objects are distributed randomly. 2. The CCFs of the luminous and less-luminous quasars do not show a significant luminosity dependence of the quasar clustering. The bias factor of the less-luminous quasars suggests that the environment around them is similar to that of the luminous LBGs used in this study. The luminous quasars do not show a strong association with the luminous LBGs on scales of 10.0” to 1000.0”, especially on scales smaller than 40.0”. The bias factor of the luminous quasars is smaller than that derived from the ACF of the SDSS quasars at z∼4 <cit.>. This may partly be due to the deficit of pairs on small scales, which may reflect strong feedback from the SMBHs. 3. The bias factor of the less-luminous quasars corresponds to a DMH mass of ∼1-2×10^12h^-1M_⊙. The minimum host DMH mass for the quasars can also be inferred from the bias factor. Combining the halo number density above that mass threshold and the observed quasar number density, the fraction of halos which are in the less-luminous quasar phase is estimated to be 0.001∼0.06 from the CCF. The corresponding quasar lifetime is 1.5∼90.8 Myr. The correlation analysis in this work is conducted in the projected plane, and accurate information on the redshift distribution of the samples and on the contamination rates is necessary to obtain reliable constraints on the clustering of the z∼4 quasars. Spectroscopic follow-up observations are expected to provide this information. Additionally, the full HSC Wide survey plans to cover 1400 deg^2 in 5 years, which will significantly enhance the sample size; the statistical significance of the current results can then be greatly improved. We would like to thank Dr. A.K. Inoue, who kindly provided us with the IGM model data. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), the Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at http://dm.lsst.org. The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE). [Adams et al.(2015)]adams2015 Adams, S. M., Martini, P., Croxall, K. V., Overzier, R. A., & Silverman, J. D. 2015, MNRAS, 448, 1335. [Adelberger&Steidel(2005)]AS2005 Adelberger, K. L., & Steidel, C. C. 2005, ApJ, 630, 50. [Aihara et al.(2017a)]aihara2017a Aihara, H., Armstrong, R., Bickerton, S., Bosch, J., Coupon, J., Furusawa, H., ... & Kawanomoto, S., 2017a, submitted to PASJ. [Aihara et al.(2017b)]aihara2017b Aihara, H., Armstrong, R., Bickerton, S., Bosch, J., Coupon, J., Furusawa, H., ... & Kawanomoto, S., 2017b, submitted to PASJ. [Akiyama et al.(2017)]akiyama2017 Akiyama, M., .... 2017, submitted to PASJ. [Alam et al.(2015)]sdss2015 Alam, S. et al. 2015, ApJS, 219, 12. [Allen et al.(2005)]Allen2005 Allen, P. D. et al. 2005, MNRAS, 360, 1244. [Ando et al.(2006)]ando2006 Ando, M., Ohta, K., Iwata, I., Akiyama, M., Aoki, K., & Tamura, N. 2006, ApJS, 645, 9. [Bañados et al.(2013)]banados2013 Bañados, E., Venemans, B., Walter, F., Kurk, J., Overzier, R., & Ouchi, M.
2013, ApJ, 773, 178.[Bruzual&Charlot(2003)]BC2003 Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000.[Calzetti et al.(2000)]calzetti2000 Calzetti, D., Armus, L., Bohlin, R. C., Kinney, A. L., Koornneef, J., & Storchi-Bergmann, T. 2000, ApJ, 533, 682.[Capak et al.(2011)]Capak2011 Capak, P. L. et al. 2011, Nature, 470, 233.[Carroll et al.(1992)]Carroll1992 Carroll, S. M., Press, W. H., & Turner, E. L. 1992, ARAA, 30, 499.[Conroy & White(2012)]CW2012 Conroy, C., & White, M. 2012, ApJ, 762, 70.[Croft et al.(1997)]Croft1997 Croft, R. A., Dalton, G. B., Efstathiou, G., Sutherland, W. J., & Maddox, S. J. 1997, MNRAS, 291, 305.[Croom & Shanks(1999)]CS1999 Croom, S. M., & Shanks, T. 1999, MNRAS, 303, 411.[Croom et al.(2005)]croom2005 Croom, S. M.et al. 2005, MNRAS, 356, 415.[Davis & Peebles(1983)]DP1983 Davis, M., & Peebles, P. J. E., 1983, ApJ, 267, 465.[Dawson et al.(2012)]boss Dawson, K. S. et al. 2012, ApJ, 145, 10.[Eftekharzadeh et al.(2015)]Eftekharzadeh2015 Eftekharzadeh, S. et al. 2015, MNRAS, 453, 2779.[Fagotto et al.(1994a)]Fagotto1994a Fagotto, F., Bressan, A., Bertelli, G., & Chiosi, C. 1994, A&AS, 104.[Fagotto et al.(1994b)]Fagotto1994b Fagotto, F., Bressan, A., Bertelli, G., & Chiosi, C. 1994, A&AS, 105.[Fanidakis et al.(2013)]Fanidakis2013 Fanidakis, N., Macciò, A. V., Baugh, C. M., Lacey, C. G., & Frenk, C. S. 2013, MNRAS, 436, 315.[Ferrarese (2002)]Ferrarese2002 Ferrarese, L. 2002, ApJ, 578, 90.[Francke et al.(2007)]Francke2007 Francke, H. et al. 2007, ApJL, 673, 13.[Garcia-Vergara et al.(2017)]GV2017 Garcia-Vergara, C., Hennawi, J. F., Barrientos, L. F., & Rix, H. W. 2017, submitted to ApJ.[Gehrels (1986)]neil1986 Gehrels, N. 1986, ApJ, 303, 336.[Groth&Peebles(1977)]GP1977 Groth, E. J., & Peebles, P. J. E. 1977, ApJ, 217, 385.[Gunn&Stryker(1983)]Gunn1983 Gunn, J. E., & Stryker, L. L. 1983, ApJS, 52, 121.[Hirata & Seljak(2003)]HSM2003 Hirata, C., & Seljak, U. 2003, MNRAS, 343, 459.[Hildebrandt et al.(2009)]Hildebrandt2009 Hildebrandt, H. et al. 2009, A&A, 498, 725.[Hopkins et al.(2006)]Hopkins2006 Hopkins P. F. et al. 2006, ApJS, 163, 1.[Hopkins et al.(2007)]hopkins2007 Hopkins, P. F., Lidz, A., Hernquist, L., Coil, A. L., Myers, A. D., Cox, T. J., & Spergel, D. N., 2007, ApJ, 662, 110.[Huband et al.(2013)]Hub2013 Husband, K., Bremer, M. N., Stanway, E. R., Davies, L. J. M., Lehnert, M. D., & Douglas, L. S. 2013, MNRAS, 642.[Ikeda et al.(2015)]ikeda2015 Ikeda, H. et al. 2015, ApJ, 809, 138.[Ilbert et al.(2008)]Ilbert2008 Ilbert, O., Capak, P., Salvato, M., Aussel, H., McCracken, H. J., Sanders, D. B., ... & Mobasher, B. 2008, ApJ, 690, 1236.[Inoue & Iwata(2008)]inoue2008 Inoue, A. K., & Iwata, I. 2008, MNRAS, 387, 1681.[Inoue et al.(2014)]inoue2014 Inoue, A. K., Shimizu, I., Iwata, I., & Tanaka, M. 2014, MNRAS, 442, 1805.[Bosch et al.(2017)]Bosch2017 James, Bosch., et al. 2017, submitted to PASJ.[Kashikawa et al.(2007)]kashikawa2007 Kashikawa, N., Kitayama, T., Doi, M., Misawa, T., Komiyama, Y., & Ota, K. 2007, ApJ, 663, 765.[Kayo & Oguri(2012)]KO2012 Kayo, I., & Oguri, M., 2012, MNRAS, 424, 1363.[Kim et al.(2009)]kim2009 Kim, S. et al. 2009, ApJ, 695, 809.[Kormendy & Ho(2013)]KHo2013 Kormendy, J. & Ho, L. C. 2013, ARAA, 51, 511.[Kormendy & Richstone(1995)]KR1995 Kormendy, J. & Richstone, D. 1995, ARAA, 33, 581.[Krumpe et al.(2010)]krumpe2010 Krumpe, M., Miyaji, T., & Coil, A. L. 2010, ApJ, 713, 558.[Leauthaud et al.(2007)]Leauthaud2007 Leauthaud, A., Massey, R., Kneib, J. P., Rhodes, J., Johnston, D. E., Capak, P., ... 
& Mellier, Y., 2007, ApJS, 172, 219.[Limber(1953)]limber1953 Limber, D. N. 1953, ApJ, 117, 134.[Magnier et al.(2013)]Magnier2013 Magnier, E. A., Schlafly, E., Finkbeiner, D., Juric, M., Tonry, J. L., Burgett, W. S., ... & Morgan, J. S. 2013, ApJS, 205, 20.[Martini & Weinberg(2001)]MW2001 Martini, P., & Weinberg, D. H. 2001, ApJ, 547, 12.[Martini(2004)]martini2004 Martini, P. (2004). Coevolution of Black Holes and Galaxies.[Miyazaki et al.(2012)]miyazaki2012 Miyazaki, S. et al. 2012, Proc. SPIE, 8446, 0.[Miyazaki et al.(2017)]miyazaki2017 Miyazaki, S. ... 2017, submitted to PASJ.[Mo et al.(2010)]Mo2010 Mo, Houjun, Frank Van den Bosch, & Simon White. 2010, Galaxy formation and evolution. [Mountrichas et al.(2009)]Mountrichas2009 Mountrichas, G., Sawangwit, U., Shanks, T., Croom, S. M., Schneider, D. P., Myers, A. D., & Pimbblet, K. 2009, MNRAS, 394, 2050.[Myers et al.(2006)]Myers2006 Myers, A. D. et al. 2006, ApJ, 638, 622.[Myers et al.(2007)]Myers2007 Myers, A. D., Brunner, R. J., Nichol, R. C., Richards, G. T., Schneider, D. P., & Bahcall, N. A. 2007, ApJ, 658, 85.[Nonino et al.(2009)]nonino2009 Nonino, M. et al. 2009, ApJS, 183, 244.[Oogi et al.(2016)]oogi2016 Oogi, T., Enoki, M., Ishiyama, T., Kobayashi, M. A., Makiya, R., & Nagashima, M. 2016, MNRAS, 456, 30.[Ouchi et al.(2004)]ouchi2004 Ouchi, M., Shimasaku, K., Okamura, S., Furusawa, H., Kashikawa, N., Ota, K., ... & Miyazaki, M. 2004, ApJ, 611, 685.[Ouchi et al.(2005)]ouchi2005 Ouchi, M.et al. 2005, ApJL, 635, 117.[Press&Schechter(1974)]PS1974 Press, W. H., & Schechter, P. 1974, ApJ, 187, 425.[Reddy et al.(2008)]reddy2008 Reddy, N. A., Steidel, C. C., Pettini, M., Adelberger, K. L., Shapley, A. E., Erb, D. K., & Dickinson, M. 2008, ApJS, 175, 48.[Richards et al.(2006)]richards2006 Richards, G. T., Strauss, M. A., Fan, X., Hall, P. B., Jester, S., Schneider, D. P., ... & Gray, J. 2006, AJ, 131, 2766.[Roche et al.(2002)]Roche2002 Roche, N. D., Almaini, O., Dunlop, J., Ivison, R. J., & Willott, C. J. 2002, MNRAS, 337, 1282.[Salpeter(1955)]salp1955 Salpeter, E. E. 1955, ApJ, 121, 161.[Schlegel et al.(1998)]Schlegel1998 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525.[Shapley et al.(2001)]Shapley2001 Shapley, A. E., Steidel, C. C., Adelberger, K. L., Dickinson, M., Giavalisco, M., & Pettini, M. 2001, ApJ, 562, 95.[Shen et al.(2007)]shen2007 Shen, Y. et al. 2007, AJ, 133, 2222.[Shen et al.(2009a)]shen2009 Shen, Y. et al. 2009a, ApJ, 697, 1656.[Shen(2009b)]shen2009b Shen, Y. 2009b, ApJ, 704, 89.[Sheth&Torman(1999)]ST1999 Sheth, R. K., & Tormen, G. 1999, MNRAS, 308, 119.[Sheth et al.(2001)]sheth2001 Sheth, R. K., Mo, H. J., & Tormen, G. 2001, MNRAS, 323, 1.[Shirasaki et al.(2011)]Shirasaki2011 Shirasaki, Y., Tanaka, M., Ohishi, M., Mizumoto, Y., Yasuda, N., & Takata, T. 2011, PASJ, 63, 469.[Siana et al.(2008)]siana2008 Siana, B., del Carmen Polletta, M., Smith, H. E., Lonsdale, C. J., Gonzalez-Solares, E., Farrah, D., ... & Fang, F., 2008, ApJ, 675, 49.[Smith et al.(2003)]smith2003 Smith, R. E. et al. P. 2003, MNRAS, 341, 1311.[Steidel et al.(1996)]Steidel1996 Steidel, C. C., Giavalisco, M., Pettini, M., Dickinson, M., & Adelberger, K. L. 1996, ApJL, 462, 17.[Tanaka(2017)]MIZUKI Tanaka et al. 2017, PASJ submitted[Tinker et al.(2005)]Tinker2005 Tinker, J. L., Weinberg, D. H., Zheng, Z., & Zehavi, I. 2005, ApJ, 631, 41.[Uchiyama et al.(2017)]Uchiyama2017 Hisakazu Uchiyama, Jun Toshikawa, Nobunari Kashikawa, ... 
submitted to PASJ.[Utsumi et al.(2010)]Utsumi2010 Utsumi, Y., Goto, T., Kashikawa, N., Miyazaki, S., Komiyama, Y., Furusawa, H., & Overzier, R. 2010, ApJ, 721, 1680.[van der Burg et al.(2010)]Burg2010 van der Burg, R. F., Hildebrandt, H., & Erben, T., 2010, A&A, 523, 74.[White et al.(2008)]white2008 White, M., Martini, P., & Cohen, J. D. 2008, MNRAS, 1080[White et al.(2012)]white2012 White, M. et al. 2012, MNRAS, 424, 933.[Yabe et al.(2009)]Yabe2009 Yabe, K., Ohta, K., Iwata, I., Sawicki, M., Tamura, N., Akiyama, M., & Aoki, K. 2009, ApJ, 693, 507.[Yu&Tremaine(2002)]YT2002 Yu, Q. & Tremaine, S. 2002, MNRAS, 335, 965.[Zehavi et al.(2005)]Zehavi2005 Zehavi, I., Zheng, Z., Weinberg, D. H., Frieman, J. A., Berlind, A. A., Blanton, M. R., ... & Suto, Y. 2005, ApJ, 630, 1.[Zheng et al.(2006)]Zheng2006 Zheng, W. et al. 2006, ApJ, 640, 574.
http://arxiv.org/abs/1704.08461v1
{ "authors": [ "Wanqiu He", "Masayuki Akiyama", "James Bosch", "Motohiro Enoki", "Yuichi Harikane", "Hiroyuki Ikeda", "Nobunari Kashikawa", "Toshihiro Kawaguchi", "Yutaka Komiyama", "Chien-Hsiu Lee", "Yoshiki Matsuoka", "Satoshi Miyazaki", "Tohru Nagao", "Masahiro Nagashima", "Mana Niida", "Atsushi J Nishizawa", "Masamune Oguri", "Masafusa Onoue", "Taira Oogi", "Masami Ouchi", "Andreas Schulze", "Yuji Shirasaki", "John D. Silverman", "Manobu M. Tanaka", "Masayuki Tanaka", "Yoshiki Toba", "Hisakazu Uchiyama", "Takuji Yamashita" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20170427074352", "title": "Clustering of quasars in a wide luminosity range at redshift 4 with Subaru Hyper Suprime-Cam wide field imaging" }
Mathematics Institute, Imperial College London [email protected] An analyst's take on the BPHZ theorem M. Hairer===================================== We provide a self-contained formulation of the BPHZ theorem in the Euclidean context, which yields a systematic procedure to “renormalise” otherwise divergent integrals appearing in generalised convolutions of functions with a singularity of prescribed order at their origin. We hope that the formulation given in this article will appeal to an analytically minded audience and that it will help to clarify to what extent such renormalisations are arbitrary (or not). In particular, we do not assume any background whatsoever in quantum field theory and we stay away from any discussion of the physical context in which such problems typically arise. § INTRODUCTION The BPHZ renormalisation procedure, named after Bogoliubov, Parasiuk, Hepp and Zimmermann <cit.> (but see also the foundational results by Dyson and Salam <cit.>), provides a consistent way to renormalise probability amplitudes associated to Feynman diagrams in perturbative quantum field theory (pQFT). The main aim of this article is to provide an analytical result, Theorem <ref> below, which is a general form of the “BPHZ theorem” in the Euclidean context. To a large extent, this theorem has been part of the folklore of mathematical physics since the publication of the abovementioned works (see for example the article <cit.> which gives rather sharp analytical bounds and is close in formulation to our statement, as well as the series of articles <cit.> which elucidate some of the algebraic aspects of the theory, but focus on dimensional regularisation which is not available in the general context considered here), but it seems difficult to find precise analytical statements in the literature that go beyond the specific context of pQFT. One reason seems to be that, in the context of the perturbative expansions arising in pQFT, there are three related problems. The first is to control the small-scale behaviour of the integrands appearing in Feynman diagrams (the “ultraviolet behaviour”), the second is to control their large scale (“infrared”) behaviour, and the final problem is to show that the renormalisation required to deal with the first problem can be implemented by modifying (in a scale-dependent way) the finitely many coupling constants appearing in the Lagrangian of the theory at hand, so that one still has a physical theory. The approach we take in the present article is purely analytic and completely unrelated to any physical theory, so we do not worry about the potential physical interpretation of the renormalisation procedure. We do however show in Section <ref> that it has a number of very nice mathematical properties so that the renormalised integrals inherit many natural properties from their unrenormalised counterparts. We also completely discard the infrared problem by assuming that all the kernels (“propagators”) under consideration are compactly supported. For the reader who might worry that this could render our main result all but useless, we give a simple separate argument showing how kernels with algebraic decay at infinity can be dealt with as well. Note also that, contrary to much of the related theoretical and mathematical physics literature, all of our arguments take place in configuration space, rather than in Fourier space.
In particular, the analysis presented in this article shares similarities with a number of previous works, see for example <cit.> and references therein. The approach taken here is informed by some results recently obtained in the context of the analysis of rough stochastic PDEs in <cit.>. Indeed, the algebraic structure described in Sections <ref> and <ref> below is very similar to the one described in <cit.>, with the exception that there is no “positive renormalisation” in the present context. In this sense, this article can be seen as a perhaps gentler introduction to these results, with the content of Section <ref> roughly parallel to <cit.>, while the content of Section <ref> is rather close to that of <cit.>. In particular, Section <ref> is rather algebraic in nature and allows to conceptualise the structure of the counterterms appearing in the renormalisation procedure, while Section <ref> is rather analytical in nature and contains the multiscale analysis underpinning our main continuity result, Theorem <ref>. Finally, in Section <ref>, we deal with kernels exhibiting only algebraic decay at infinity. While the conditions given in this section are sharp in the absence of any cancellations in the large-scale behaviour, we do not introduce an analogue of the “positive renormalisation” of <cit.>, so that the argument remains relatively concise. §.§ Acknowledgements The author would like to thank Ajay Chandra and Philipp Schönbauer for several useful discussions during the preparation of this article. Financial support through ERC consolidator grant 615897 and a Leverhulme leadership award is gratefully acknowledged. § AN ANALYTICAL FORM OF THE BPHZ THEOREM Fix a countable set 𝔏 of labels, a degree map deg: 𝔏→ℝ, and an integer dimension d > 0. We assume that the set of labels has a distinguished element which we denote by δ∈𝔏 satisfying deg δ = -d and that, for every multiindex k, there is an injective map 𝔱↦𝔱^(k) on 𝔏 with 𝔱^(0) = 𝔱 and such that [e:propLab] (𝔱^(k))^(ℓ) = 𝔱^(k+ℓ) , deg 𝔱^(k) = deg 𝔱 - |k| . We also set 𝔏_⋆ = 𝔏∖{δ^(k) : k ∈ℕ^d} and we assume that there is a finite set 𝔏_0 ⊂𝔏 such that every element of 𝔏 is of the form 𝔱^(k) for some k ∈ℕ^d and some 𝔱∈𝔏_0. We then give the following definition. A Feynman diagram is a finite directed graph Γ = (𝒱,ℰ) endowed with the following additional data: * An ordered set of distinct vertices 𝒱_L = {[1],…,[k]}⊂𝒱 such that each vertex in 𝒱_L has exactly one outgoing edge (called a “leg”) and no incoming edge, and such that each connected component of Γ contains at least one leg. We will frequently use the notation 𝒱_⋆ = 𝒱∖𝒱_L, as well as ℰ_⋆⊂ℰ for the edges that are not legs. * A decoration 𝔱: ℰ→𝔏 of the edges of Γ such that 𝔱(e) ∈𝔏_⋆ if and only if e ∈ℰ_⋆. [Figure: A Feynman diagram.] We will always use the convention of <cit.> that e_- and e_+ are the source and target of an edge e, so that e = (e_- → e_+). We also label legs in the same way as the corresponding element in 𝒱_L, i.e. we call the unique edge incident to the vertex [j] the jth leg of Γ.
An example of a Feynman diagram with 3 legs is shown in Figure <ref>, with legs drawn in red and decorations suppressed. We do not draw the arrows on the legs since they are always incoming by definition. In this example, |𝒱| = 7 and |𝒱_⋆| = 4. Write now 𝐃 = ℝ^d, and assume that we are given a kernel K_𝔱: 𝐃∖{0}→ℝ for every 𝔱∈𝔏_⋆, such that K_𝔱 exhibits a behaviour of order deg 𝔱 at the origin but is smooth otherwise. For simplicity, we also assume that these kernels are all compactly supported, say in the unit ball. More precisely, we assume that for every 𝔱∈𝔏_⋆ and every d-dimensional multiindex k there exists a constant C such that one has the bound [e:propKer] |D^k K_𝔱(x)| ≤ C |x|^(deg 𝔱 - |k|) 𝟙_(|x| ≤ 1) , ∀ x ∈𝐃 . We also extend K to all of 𝔏 by using the convention that K_δ = δ, a Dirac mass at the origin, and we impose that for every multiindex k and label 𝔱∈𝔏, one has [e:propK] K_(𝔱^(k)) = D^k K_𝔱 . Note that (<ref>) is compatible with (<ref>) so that non-trivial (i.e. not just vanishing or smooth near the origin) kernel assignments do actually exist. To some extent it is also compatible with the convention K_δ = δ and deg δ = -d since the “delta function” on ℝ^d is obtained as a distributional limit of functions satisfying a uniform bound of the type (<ref>) with deg 𝔱 = -d. Given all this data, we would now like to associate to each Feynman diagram Γ with k legs a distribution ΠΓ on 𝐃^k by setting [e:eval] (ΠΓ)(ϕ) = ∫_(𝐃^𝒱) ∏_(e ∈ℰ) K_𝔱(e)(x_e_+ - x_e_-) ϕ(x_[1],…,x_[k]) dx . Note that of course ΠΓ does not just depend on the combinatorial data Γ = (𝒱,ℰ,𝒱_L,𝔱), but also on the analytical data (K_𝔱)_(𝔱∈𝔏_⋆). We sometimes suppress the latter dependency in our notation in order to keep it light, but it will be very useful later on to also allow ourselves to vary the kernels K_𝔱. We call the map Π a “valuation”. The problem is that, on the face of it, the definition (<ref>) does not always make sense. The presence of the (derivatives of) delta functions is not a problem: writing v_i ∈𝒱_⋆ for the unique vertex such that ([i] → v_i) ∈ℰ and ℓ_i for the multiindex such that the label of this leg is δ^(ℓ_i), we can rewrite (<ref>) as [e:evalBis] (ΠΓ)(ϕ) = ∫_(𝐃^(𝒱_⋆)) ∏_(e ∈ℰ_⋆) K_𝔱(e)(x_e_+ - x_e_-) (D_1^(ℓ_1)⋯D_k^(ℓ_k) ϕ)(x_(v_1),…,x_(v_k)) dx . The problem instead is the possible lack of integrability of the integrand appearing in (<ref>). For example, the simplest nontrivial Feynman diagram with two legs is the one consisting of a single internal edge with label 𝔱 and one leg attached to each of its two endpoints, which, by (<ref>), should be associated to the distribution [e:evalKernel] (ΠΓ)(ϕ) = ∫_(𝐃^2) K_𝔱(y_1 - y_0) ϕ(y_0,y_1) dy . If it happens that deg 𝔱 < -d, then K_𝔱 is non-integrable in general, so that this integral may not converge. It is then natural to modify our definition, but “as little as possible”. In this case, we note that if the test function ϕ happens to vanish near the diagonal y_1 = y_0, then the singularity of K_𝔱 does not matter and (<ref>) makes perfect sense. We would therefore like to find a distribution ΠΓ which agrees with (<ref>) on such test functions but still yields finite values for every test function ϕ. One way of achieving this is to set [e:renormSimple] (ΠΓ)(ϕ) = ∫_(𝐃^2) K_𝔱(y_1-y_0) (ϕ(y_0,y_1) - ∑_(|k| + deg 𝔱 ≤ -d) ((y_1 - y_0)^k/k!) D_2^k ϕ(y_0,y_0) ) dy . At first glance, this doesn't look very canonical since it seems that the variables y_0 and y_1 no longer play a symmetric role in this expression. However, it is an easy exercise to see that the same distribution can alternatively also be written as (ΠΓ)(ϕ) = ∫_(𝐃^2) K_𝔱(y_1-y_0) (ϕ(y_0,y_1) - ∑_(|k| + deg 𝔱 ≤ -d) ((y_0 - y_1)^k/k!) D_1^k ϕ(y_1,y_1) ) dy .
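To record why the two subtracted expressions agree (the “easy exercise”; this short derivation is ours), set F(h) = ∫ ϕ(y_0, y_0+h) dy_0, so that the substitution y = y_0 + h also gives F(h) = ∫ ϕ(y-h, y) dy. Differentiating the two representations under the integral sign yields

D^k F(0) = ∫ D_2^k ϕ(y_0,y_0) dy_0 = (-1)^|k| ∫ D_1^k ϕ(y,y) dy ,

so that both renormalised expressions coincide with the manifestly symmetric quantity

(ΠΓ)(ϕ) = ∫_𝐃 K_𝔱(h) ( F(h) - ∑_(|k| + deg 𝔱 ≤ -d) (h^k/k!) D^k F(0) ) dh ,

in which y_0 and y_1 play interchangeable roles.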
The BPHZ theorem is a far-reaching generalisation of this construction. To formalise what we mean by this, write 𝒦^-_∞ for the space of all smooth kernel assignments as above (compactly supported in the unit ball and satisfying (<ref>)). When endowed with the system of seminorms given by the minimal constants in (<ref>), its completion 𝒦^-_0 is a Fréchet space. With these notations, a “renormalisation procedure” is a map K ↦Π^K turning a kernel assignment K ∈𝒦^-_0 into a valuation Π^K. The purpose of the BPHZ theorem is to argue that the following question can be answered positively. Main question: Is there a consistent renormalisation procedure such that, for every Feynman diagram, ΠΓ can be interpreted as a “renormalised version” of (<ref>)? As stated, this is a very loose question since we have not specified what we mean by a “consistent” renormalisation procedure and what properties we would like a valuation to have in order to be a candidate for an interpretation of (<ref>). One important property we would like a good renormalisation procedure to have is the continuity of the map K ↦Π^K. In this way, we can always reason on smooth kernel assignments K ∈𝒦^-_∞ and then “only” need to show that the procedure under consideration extends continuously to all of 𝒦^-_0. Furthermore, we would like Π^K to inherit as many properties as possible from its interpretation as the formal expression (<ref>). Of course, as already seen, the “naïve” renormalisation procedure given by (<ref>) itself does not have the required continuity property, so we will have to modify it. §.§ Consistent renormalisation procedures The aim of this section is to collect and formalise a number of properties of (<ref>) which then allows us to formulate precisely what we mean by a “consistent” renormalisation procedure. Let us write ℋ for the free (real) vector space generated by all Feynman diagrams. This space comes with a natural grading and we write ℋ_k ⊂ℋ for the subspace generated by diagrams with k legs. Note that ℋ_0 ≈ℝ since there is exactly one Feynman diagram with 0 legs, which is the empty one. Write 𝒟'_k for the space of all distributions on 𝐃^k that are translation invariant in the sense that, for η∈𝒟'_k, h ∈𝐃, and any test function ϕ, one has η(ϕ) = η(ϕ∘τ_h) where τ_h(y_1,…,y_k) = (y_1 + h,…,y_k+h). We will write 𝒟'^(c)_k ⊂𝒟'_k for the subset of “compactly supported” distributions in the sense that there exists a compact set 𝔎⊂𝐃^k/𝐃 such that η(ϕ) = 0 as soon as the support of ϕ, viewed in 𝐃^k/𝐃, satisfies supp ϕ∩𝔎 = ∅. Compactly supported distributions can be tested against any smooth function ϕ with the property that for any x ∈𝐃^k, the set {h ∈𝐃 : ϕ(τ_h(x)) ≠ 0} is compact. Note that 𝒟'_1 ≈ℝ since translation invariant distributions in one variable are naturally identified with constant functions. We will use the convention 𝒟'_0 ≈ℝ by identifying “functions in 0 variables” with ℝ. We also set 𝒟' = ⊕_(k ≥ 0) 𝒟'_k, so that a valuation Π can be viewed as a linear map Π: ℋ→𝒟' which respects the respective graduations of these spaces. Note that the symmetric group 𝔖_k in k elements acts naturally on ℋ_k by simply permuting the order of the legs. Similarly, 𝔖_k acts on 𝒟'_k by permuting the arguments of the test functions. Given two Feynman diagrams Γ_1 ∈ℋ_k and Γ_2 ∈ℋ_ℓ, we then write Γ_1 ∙Γ_2 ∈ℋ_(k+ℓ) for the Feynman diagram given by the disjoint union of Γ_1 and Γ_2. Here, we renumber the ℓ legs of Γ_2 in an order-preserving way from k+1 to k+ℓ, so that although Γ_1 ∙Γ_2 ≠Γ_2 ∙Γ_1 in general, one has Γ_1 ∙Γ_2 = σ_(k,ℓ)(Γ_2 ∙Γ_1), where σ_(k,ℓ)∈𝔖_(k+ℓ) is the permutation that swaps (1,…,ℓ) and (ℓ+1,…,ℓ+k).
Given distributions η_1 ∈𝒟'_k and η_2 ∈𝒟'_ℓ, we write η_1 ∙η_2 ∈𝒟'_(k+ℓ) for the distribution such that (η_1 ∙η_2)(ϕ_1 ⊗ϕ_2) = η_1(ϕ_1)η_2(ϕ_2). Similarly to above, one has η_1 ∙η_2 = σ_(k,ℓ)(η_2 ∙η_1). We extend ∙ by linearity to all of ℋ and 𝒟' respectively, thus turning these spaces into (non-commutative) algebras. This allows us to formulate the first property we would like to retain. Property 1. A consistent renormalisation procedure should produce valuations Π that are graded algebra morphisms from ℋ to 𝒟' and such that, for every Feynman diagram Γ with k legs and every σ∈𝔖_k, one has Πσ(Γ) = σ(ΠΓ). Furthermore ΠΓ∈𝒟'^(c)_k if Γ is connected with k legs. Similarly, consider a Feynman diagram Γ with k ≥ 2 legs such that the label of the kth leg is δ and such that the connected component of Γ containing [k] contains at least one other leg. Let 𝔡_k Γ be the Feynman diagram identical to Γ, but with the kth leg removed. If the label of the kth leg is δ^(m) with m ≠ 0, we set 𝔡_k Γ = 0. If we write ι_k for the natural injection of smooth functions on 𝐃^(k-1) to functions on 𝐃^k given by (ι_k ϕ)(x_1,…,x_k) = ϕ(x_1,…,x_(k-1)), we have the following property of (<ref>) which is very natural to impose on our valuations. Property 2. A consistent renormalisation procedure should produce valuations Π such that for any connected Γ with k legs, one has (Π𝔡_k Γ)(ϕ) = (ΠΓ)(ι_kϕ) for all compactly supported test functions ϕ on 𝐃^(k-1). (Note that the right hand side is well-defined by Remark <ref> even though ι_k ϕ is no longer compactly supported.) To formulate our third property, it will be useful to have a notation for our test functions. We write 𝒞_k for the set of all 𝒞^∞ functions on 𝐃^k with compact support. It will be convenient to consider the following subspaces of 𝒞_k. Let 𝒜 be a collection of subsets of {1,…,k} such that every set A ∈𝒜 contains at least two elements. Then, we write 𝒞_k^(𝒜)⊂𝒞_k for the set of such functions ϕ which vanish in a neighbourhood of the set Δ_k^(𝒜)⊂𝐃^k given by [e:defDiagonal] Δ_k^(𝒜) = {y∈𝐃^k : ∃ A ∈𝒜 with y_i = y_j ∀ i,j ∈ A} . Because of this definition, we also call a collection 𝒜 as above a “collision set”. Note that in particular one has 𝒞_k^(∅) = 𝒞_k. A first important question to address then concerns the conditions under which the expression (<ref>) converges. A natural notion then is that of the degree of a subgraph of a Feynman diagram. In this article, we define a subgraph Γ̅⊂Γ to be a subset ℰ̅ of the collection ℰ_⋆ of internal edges and a subset 𝒱̅⊂𝒱_⋆ of the internal vertices such that 𝒱̅ consists precisely of those vertices incident to at least one edge in ℰ̅. (In particular, isolated nodes are not allowed in Γ̅.) Given such a subgraph Γ̅, we then set [e:degreeSubgraph] deg Γ̅ = ∑_(e ∈ℰ̅) deg 𝔱(e) + d(|𝒱̅| - 1) . We define the degree of the full Feynman diagram Γ in exactly the same way, with ℰ̅ and 𝒱̅ replaced by ℰ_⋆ and 𝒱_⋆. One then has the following result initially due to Weinberg <cit.>. See also <cit.> for the proof of a slightly more general statement which is also notationally closer to the setting considered here. Theorem. If Γ is a Feynman diagram with k legs such that deg Γ̅ > 0 for every subgraph Γ̅⊂Γ, then the integral in (<ref>) is absolutely convergent for every ϕ∈𝒞_k. We will henceforth call a subgraph Γ̅⊂Γ divergent if deg Γ̅≤ 0.
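To illustrate this on the example of (<ref>) (a worked example of ours, not from the text): for the two-leg diagram with a single internal edge of label 𝔱, the only subgraph is that edge itself, with |𝒱̅| = 2, so that

deg Γ̅ = deg 𝔱 + d(2-1) = deg 𝔱 + d .

Weinberg's criterion therefore yields absolute convergence of (<ref>) precisely when deg 𝔱 > -d, matching the fact that the subtraction in (<ref>) is non-trivial exactly when some multiindex k satisfies |k| + deg 𝔱 ≤ -d, i.e. when deg 𝔱 ≤ -d.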
A virtually identical proof actually yields the following refined statement, which tells us very precisely where exactly there is a need for renormalisation. Proposition. Let Γ be a Feynman diagram with k legs and let 𝒜 be a collision set such that, for every connected divergent subgraph Γ̅⊂Γ, there exists A ∈𝒜 such that every leg in A is adjacent to Γ̅. Then (<ref>) is absolutely convergent for every ϕ∈𝒞_k^(𝒜). Here and below we say that an edge e is adjacent to a subgraph Γ̅⊂Γ (possibly itself consisting only of a single edge) if e is not an edge of Γ̅, but shares a vertex with such an edge. Since the main idea will be useful in the general result, we sketch it here. Note first that we can assume without loss of generality that, for every A ∈𝒜, the vertices of 𝒱_⋆ to which the legs in A are attached are all distinct, since otherwise (<ref>) vanishes identically for ϕ∈𝒞_k^(𝒜). The key remark is that, for every configuration of points x ∈𝐃^(𝒱_⋆) we can find a binary tree T with leaves given by 𝒱_⋆ and a label 𝔫_u ∈ℤ for every inner vertex u of T in such a way that 𝔫 is increasing when going from the root to the leaves of T and, for any v, v̅∈𝒱_⋆, one has [e:boundDist] C^-1 2^(-𝔫_u) ≤ |x_v - x_v̅| ≤ C 2^(-𝔫_u) , where u = v ∧v̅ is the least common ancestor of v and v̅ in T. Here, the constant C only depends on the size of 𝒱_⋆. (Simply take for T the minimal spanning tree of the point configuration.) Writing 𝐬 = (T,𝔫) for this data, we then let D_𝐬 ⊂𝐃^(𝒱_⋆) be the set of configurations giving rise to the data 𝐬. By analogy with the construction of <cit.>, we call D_𝐬 a “Hepp sector”. While the type of combinatorial data (T,𝔫) used to index Hepp sectors is identical to that appearing in “Gallavotti-Nicolò trees” <cit.> and the meaning of the index 𝔫 is similar in both cases, there does not appear to be a direct analogy between the terms indexed by this data in both cases. Thanks to the tree structure of T, the quantity d_𝐬 given by d_𝐬(v,v̅) = 2^(-𝔫_u) as above is an ultrametric. Writing 𝔫(e) for the value of 𝔫_(e^↑), with e^↑ = e_- ∧ e_+, the integrand of (<ref>) is then bounded by some constant times ∏_(e ∈ℰ_⋆) 2^(-𝔫(e) deg 𝔱(e)). Identifying T with its set of internal nodes, one can also show that the measure of D_𝐬 is bounded by ∏_(u ∈ T) 2^(-d𝔫_u). Finally, by the definition of 𝒞_k^(𝒜), there exists a constant N_0 such that the integrand vanishes on sets D_𝐬 such that sup_(A∈𝒜) 𝔫_(A^↑) ≥ N_0, where A^↑ is the least common ancestor in T of the collection of elements of 𝒱_⋆ incident to the legs in A. Writing 𝒩 = {(T,𝔫) : sup_(A∈𝒜) 𝔫_(A^↑) < N_0} , we conclude that (<ref>) is bounded by some constant multiple of [e:bigSum] ∑_(𝐬∈𝒩) ∏_(u ∈ T) 2^(-𝔫_u η_u) , η_u = d + ∑_(e ∈ℰ_⋆ : e^↑ = u) deg 𝔱(e) . We now note that the assumption on 𝒜 guarantees that, for every node u ∈ T, one has either ∑_(v ≥ u) η_v > 0, or there exists some A ∈𝒜 such that u ≤ A^↑. In the latter case, 𝔫_u is bounded from above by N_0. Furthermore, as a consequence of the fact that each connected component of Γ has at least one leg and the kernels K_𝔱 are compactly supported, (<ref>) vanishes on all Hepp sectors with some 𝔫_u sufficiently negative. Combining these facts, and performing the sum in (<ref>) “from the leaves inwards” as in <cit.>, it is then straightforward to see that it does indeed converge, as claimed.
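The spanning-tree construction used in this proof is easy to experiment with numerically; the following minimal sketch (ours, assuming pairwise-distinct points) extracts the merge tree T and the scale labels 𝔫_u from a configuration via Kruskal's algorithm.

import numpy as np
from itertools import combinations

# Kruskal's algorithm on all pairwise distances yields the minimal spanning
# tree; the successive merges define a binary tree whose node labels we take
# as n_u = floor(-log2(merge distance)).  The ultrametric d(v, w) = 2^{-n_u},
# u the least common ancestor, then obeys the two-sided bound of the text up
# to a configuration-independent constant.
def hepp_sector(x):
    """x: (m, d) array of the points x_v, v in V_star (assumed distinct)."""
    m = len(x)
    parent = list(range(m))                  # union-find over current clusters

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    pairs = sorted(combinations(range(m), 2),
                   key=lambda p: np.linalg.norm(x[p[0]] - x[p[1]]))
    merges = []                              # (cluster_a, cluster_b, label n_u)
    for i, j in pairs:                       # shortest edges first
        a, b = find(i), find(j)
        if a != b:
            r = np.linalg.norm(x[i] - x[j])
            merges.append((a, b, int(np.floor(-np.log2(r)))))
            parent[a] = b
    return merges                            # m-1 merges encode the tree T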
In other words, Proposition <ref> tells us that the only region in which the integrand of (<ref>) diverges in a non-integrable way consists of an arbitrarily small neighbourhood of those points x for which there exists a divergent subgraph Γ̅ = (𝒱̅, ℰ̅) such that x_u = x_v for all vertices u,v ∈𝒱̅. It is therefore very natural to impose the following. Property 3. A consistent renormalisation procedure should produce valuations Π that agree with (<ref>) for test functions and Feynman diagrams satisfying the assumptions of Proposition <ref>. Finally, a natural set of relations of the canonical valuation Π given by (<ref>) which we would like to retain is those given by integration by parts. In order to formulate this, it is convenient to introduce the notion of a half-edge. A half-edge is a pair (e,v) with e∈ℰ and v ∈{e_+,e_-}. It is said to be incoming if v = e_+ and outgoing if v = e_-. Given an edge e, we also write e_← and e_→ for the two half-edges (e,e_-) and (e,e_+). Given a Feynman diagram Γ, a half-edge (e,v), and k ∈ℕ^d, we then write ∂_(e,v)^k Γ for the element of ℋ obtained from Γ by replacing the decoration 𝔱(e) of the edge e by 𝔱(e)^(k) and then multiplying the resulting Feynman diagram by (-1)^|k| if the half-edge (e,v) is outgoing. We then write 𝒥 for the smallest subspace of ℋ such that, for every Feynman diagram Γ, every i ∈{1,…,d} and every inner vertex v ∈𝒱_⋆ of Γ, one has [e:IBP] ∑_(e ∼ v) ∂_(e,v)^(δ_i) Γ∈𝒥 , where e ∼ v signifies that the edge e is incident to the vertex v and δ_i is the ith canonical element of ℕ^d. By integration by parts, it is immediate that if the kernels K_𝔱 are all smooth, then the canonical valuation (<ref>) vanishes on 𝒥. It is therefore natural to impose the following. Property 4. A consistent renormalisation procedure should produce valuations Π that vanish on 𝒥. Setting ℋ̂ = ℋ/𝒥, we can therefore consider a valuation as a map Π: ℋ̂→𝒟'. Note that since 𝒥 is an ideal of ℋ which respects its grading, ℋ̂ is again a graded algebra. Furthermore, since 𝒥 is invariant under the action of the symmetric group, 𝔖_k acts naturally on ℋ̂_k. In particular, Property <ref> can be formulated in ℋ̂ rather than ℋ and it is not difficult to see that the deletion operation 𝔡_k introduced in Property <ref> also makes sense on ℋ̂. This motivates the following definition. Definition. A valuation Π: ℋ̂→𝒟' is consistent for the kernel assignment K if it satisfies Properties <ref>, <ref> and <ref>. §.§ Some algebraic operations on Feynman diagrams In order to satisfy Property <ref>, we will consider valuations that differ from the canonical one only by counterterms of the same form, but with some of the factors of (<ref>) corresponding to divergent subgraphs replaced by a suitable derivative of a delta function, just like what we did in (<ref>). These counterterms can again be encoded into Feynman diagrams with the same number of legs as the original diagram, multiplied by a suitable weight. We are therefore looking for a procedure which, given a smooth kernel assignment K ∈𝒦^-_∞, builds a linear map M^K: ℋ→ℋ such that if we define a “renormalised” valuation Π̂^K by [e:renormPi] Π̂^K Γ = Π^K M^K Γ , with Π^K the canonical valuation given by (<ref>), then K ↦Π̂^K is a renormalisation procedure which extends continuously to all of 𝒦^-_0. We would furthermore like M^K to differ from the identity only by terms of the form described above, obtained by contracting divergent subgraphs to a derivative of a delta function. The procedure (<ref>) is exactly of this form with [e:Msimple] M^K Γ = Γ - ∑_(|k| + deg 𝔱 ≤ -d) c_k ·Γ_k , c_k = (1/k!) ∫_(ℝ^d) x^k K_𝔱(x) dx , where Γ denotes the two-leg diagram of (<ref>) and Γ_k the diagram consisting of a single vertex with the two legs attached to it carrying the labels δ and δ^(k). Note that the condition deg 𝔱 ≤ -d, which is required for M^K to differ from the identity, is precisely the condition that the subgraph consisting of the single 𝔱-labelled internal edge is divergent, which then guarantees that this example satisfies Property <ref>. It is natural to index the constants appearing in the terms of such a renormalisation map by the corresponding subgraphs that were contracted. These subgraphs then have no legs anymore, but may require additional decorations describing the powers of x appearing in the expression for c_k above. We therefore give the following definition, where the choice of terminology is chosen to be consistent with the QFT literature. Definition. A vacuum diagram consists of a Feynman diagram Γ = (𝒱,ℰ) with exactly one leg per connected component, endowed additionally with a node decoration 𝔫: 𝒱_⋆→ℕ^d. We also impose that each leg has label δ. We say that a connected vacuum diagram is divergent if deg Γ≤ 0, where deg Γ = ∑_(e ∈ℰ_⋆) deg 𝔱(e) + ∑_(v ∈𝒱_⋆) |𝔫(v)| + d(|𝒱_⋆| - 1) . We extend this to arbitrary vacuum diagrams by imposing that deg(Γ_1 ∙Γ_2) = deg Γ_1 + deg Γ_2. One should think of a connected vacuum diagram Γ as encoding the constant [e:PiK] Π_-^K Γ = ∫_(𝐃^(𝒱_⋆∖{v_⋆})) ∏_(e ∈ℰ_⋆) K_𝔱(e)(x_e_+ - x_e_-) ∏_(w ∈𝒱_⋆) (x_w - x_(v_⋆))^𝔫(w) dx , where v_⋆ is the element of 𝒱_⋆ that has the unique leg attached to it. This is then extended multiplicatively to all vacuum diagrams. In view of this, it is also natural to ignore the ordering of the legs for vacuum diagrams, and we will always do this from now on. Write now 𝒢_- for the algebra of all vacuum diagrams such that each connected component has at least one internal edge and ℋ_- ⊂𝒢_- for the subalgebra generated by those diagrams such that each connected component is divergent. Since we ignored the labelling of legs, the product ∙ turns 𝒢_- into a commutative algebra. Note that if we write 𝒢_+ ⊂𝒢_- for the ideal generated by all vacuum diagrams Γ with deg Γ > 0, then we have a natural isomorphism ℋ_- ≈𝒢_- / 𝒢_+ . Similarly to above, it is natural to identify vacuum diagrams related to each other by integration by parts, but also those related by changing the location of the leg(s). In order to formalise this, we reinterpret a connected vacuum diagram as above as a Feynman diagram “with 0 legs”, but with one of the vertices being distinguished, which is of course completely equivalent, and we write it as (Γ,v_⋆,𝔫). With this notation, we define 𝒥_- as the smallest ideal of 𝒢_- such that, for every connected (Γ,v_⋆,𝔫) one has the following. * For every vertex v ∈𝒱∖{v_⋆} and every i∈{1,…,d}, one has [e:IBPbis] ∑_(e ∼ v) (∂_(e,v)^(δ_i) Γ,v_⋆,𝔫) + 𝔫(v)_i (Γ, v_⋆,𝔫 - δ_i 𝟙_v) ∈𝒥_- , where 𝟙_v denotes the indicator function of {v}. * One has [e:IBProot] ∑_(e ∼ v_⋆) (∂_(e,v_⋆)^(δ_i) Γ,v_⋆,𝔫) - ∑_(v∈𝒱) 𝔫(v)_i (Γ, v_⋆,𝔫 - δ_i 𝟙_v) ∈𝒥_- . * For every vertex v ∈𝒱, one has [e:moveLeg] (Γ,v_⋆,𝔫) - ∑_(𝔪: 𝒱→ℕ^d) (-1)^|𝔪| binom(𝔫,𝔪) (Γ,v,𝔫-𝔪+ (Σ𝔪)𝟙_(v_⋆)) ∈𝒥_- , where Σ𝔪 = ∑_u 𝔪(u) and we use the convention 𝔫! = ∏_(u∈𝒱)∏_(i=1)^d 𝔫(u)_i! to define the binomial coefficients binom(𝔫,𝔪) = 𝔫!/(𝔪!(𝔫-𝔪)!), with the additional convention that the coefficient vanishes unless 𝔪≤𝔫 everywhere. One can verify that if K ∈𝒦^-_∞ and Π_-^K is given by (<ref>), then Π_-^K vanishes on 𝒥_-. In the case of (<ref>) and (<ref>), this is because the integrand is then a total derivative with respect to (x_v)_i and (x_(v_⋆))_i respectively. In the case of (<ref>), this can be seen by writing (x_w - x_(v_⋆))^𝔫(w) = ((x_w - x_v) - (x_(v_⋆) - x_v))^𝔫(w) and applying the multinomial theorem. The expressions (<ref>) and (<ref>) are consistent with (<ref>) in the special case 𝔫 = 0.
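To see numerically why such counterterms are needed, the following illustration (ours; the mollified kernel and test function are arbitrary choices) works in the reduced one-variable form of the simple example with d = 1 and deg 𝔱 = -1: the counterterm c_0 diverges logarithmically as the mollification is removed, while the subtracted integral converges.

import numpy as np

# K_eps(x) = (x^2 + eps^2)^(-1/2) on [-1, 1] mimics a kernel of degree -1 = -d.
x = np.linspace(-1.0, 1.0, 200001)
phi = np.exp(-x**2)                           # sample test function

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    K = (x**2 + eps**2)**(-0.5)
    c0 = np.trapz(K, x)                       # counterterm: diverges like log(1/eps)
    renorm = np.trapz(K*(phi - phi[len(x)//2]), x)   # subtracted pairing: stays finite
    print(f"eps={eps:7.0e}  c0={c0:8.3f}  renormalised={renorm:9.5f}")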
Considering the case v = v_⋆ in (<ref>), it is also straightforward to verify that (Γ, v_⋆, ) ∈ ∂̂_- as soon as (v_⋆) ≠ 0. As before, we then write _- as a shorthand for _- / ∂̂_-, and similarly for _-. (This is well-defined since ∂̂_- does not mix elements of different degree.) As a consequence of Remark <ref>, we see that every K ∈ ^-_∞ yields a character Π_-^K of _- and therefore also of _-. [Figure: example of a Feynman diagram with a subgraph shaded in grey and its boundary half-edges indicated in green.] Given a Feynman diagram Γ and a subgraph Γ̅ ⊂ Γ, we can (and will) identify Γ̅ with an element of _-, obtained by setting all the node decorations to 0. By (<ref>) we do not need to specify where we attach leg(s) to Γ̅, since these elements are all identified in _-. We furthermore write ∂Γ̅ for the set of all half-edges adjacent to Γ̅. Figure <ref> shows an example of a Feynman diagram with a subgraph Γ̅ shaded in grey and ∂Γ̅ indicated in green. Legs can also be part of ∂Γ̅, as is the case in our example, but they cannot be part of Γ̅ by our definition of a subgraph. Note also that the edge joining the two vertices at the top appears as two distinct half-edges in ∂Γ̅. Given furthermore a map ℓ: ∂Γ̅ → ^d (canonically extended to vanish on all other half-edges of Γ), we then define the following two objects. * A vacuum diagram (Γ̅, πℓ), which consists of the graph Γ̅ endowed with the edge decoration inherited from Γ, as well as the node decoration  = πℓ given by (πℓ)(v) = ∑_{e : (e,v) ∈ ∂Γ̅} ℓ(e,v). * A Feynman diagram Γ / (Γ̅, ℓ), obtained by contracting the connected components of Γ̅ to nodes and applying ℓ to the resulting diagram in the sense that, for edges e ∈  ∖  adjacent to ∂Γ̅ and with label (in Γ) given by , we replace their label by ^(ℓ(e_←) + ℓ(e_→)). In the example of Figure <ref>, where non-zero values of ℓ are indicated by small labels, (Γ̅, ℓ) is the shaded subgraph endowed with node decorations 1 at the three marked vertices, while Γ / (Γ̅, ℓ) is the contracted diagram in which one edge decoration is raised by 2 and another by 1; here a label k on an edge means that if it had a decoration  in Γ, then it now has a decoration ^(k). Given a map ℓ: ∂Γ̅ → ^d as above, we also write ℓ_→ as a shorthand for the restriction of ℓ to outgoing half-edges. With these notations at hand, we define a map Δ: → _- ⊗  by [e:coaction] ΔΓ = ∑_{Γ̅ ⊂ Γ} ∑_{ℓ: ∂Γ̅ → ^d} ((-1)^{|ℓ_→|}/ℓ!) (Γ̅, πℓ) ⊗ Γ / (Γ̅, ℓ), where we use the same conventions for factorials as in (<ref>).
Note that since the right hand side is identified with an element of _- ⊗ , this sum is effectively finite: unless (Γ̅, πℓ) ∈ _-, which only happens for finitely many choices of ℓ, the corresponding factor is identified with 0 in _-. Indeed, for any fixed Γ there are only finitely many subgraphs and, for large enough ℓ, (Γ̅, πℓ) is no longer in _-. The factor (-1)^{|ℓ_→|} appearing here encodes the fact that, for an edge e, having ℓ(e,u) = k means that in the resulting Feynman diagram Γ / (Γ̅, ℓ) one would like to replace the factor K_(x_{e_+} - x_{e_-}) by its kth derivative with respect to x_u, which is precisely what happens when one replaces the corresponding connected component of Γ̅ by a derivative of a delta function. In the case when u = e_-, namely when the half-edge is outgoing, this is indeed the same as (-1)^{|k|} (D^k K_)(x_{e_+} - x_{e_-}), while the factor (-1)^{|k|} is absent for incoming half-edges. It turns out that one has the following. The map Δ is well-defined as a map from  to _- ⊗ . Before we start our proof, recall the following version of the Chu–Vandermonde identity. Given finite sets S, S̅ and maps π: S → S̅ and ℓ: S → ^d, we define π_⋆ℓ: S̅ → ^d by π_⋆ℓ(x) = ∑_{y ∈ π^{-1}(x)} ℓ(y). Then, for every finite set S and every k: S → ^d, one has the identity ∑_{ℓ : π_⋆ℓ fixed} \binom{k}{ℓ} = \binom{π_⋆k}{π_⋆ℓ}, where the sum runs over all possible choices of ℓ such that π_⋆ℓ is fixed. We first show that for Γ ∈  the right hand side of (<ref>) is well-defined as an element of _- ⊗ , which is a priori not obvious since we did not specify where the legs of (Γ̅, ℓ) are attached. Our aim therefore is to show that, for any fixed L ∈ ^d, the expression [e:startExpr] ∑_{ℓ: ∂Γ̅ → ^d, Σℓ = L} ((-1)^{|ℓ_→|}/ℓ!) (Γ̅, v, πℓ) ⊗ Γ / (Γ̅, ℓ) is independent of v ∈  in _- ⊗ . By Remark <ref>, we can restrict the sum over ℓ to those values such that ℓ vanishes on the set A_v of all half-edges incident to v, since (Γ̅, v, ℓ) = 0 in _- for those ℓ for which this is not the case. Fixing some arbitrary u ≠ v and using (<ref>) as well as Lemma <ref>, we then see that (<ref>) equals ∑_{ℓ: ∂Γ̅ ∖ A_v → ^d, Σℓ = L} ∑_{m: ∂Γ̅ → ^d} ((-1)^{|ℓ|+|m|}/ℓ!) \binom{ℓ}{m} (Γ̅, u, πℓ - πm + Σm _v) ⊗ Γ / (Γ̅, ℓ). Writing k = ℓ - m, we rewrite this expression as ∑_{k: ∂Γ̅ ∖ A_v → ^d, Σk ≤ L} ∑_{m: ∂Γ̅ ∖ A_v → ^d, Σm = L - Σk} ((-1)^{|k|+2|m|}/(k!\,m!)) (Γ̅, u, πk + Σm _v) ⊗ Γ / (Γ̅, k + m). At this stage we note that, as a consequence of (<ref>), we have for every subset A ⊂ ∂Γ̅ and every M ∈ ^d the identity ∑_{m: ∂Γ̅ ∖ A → ^d, Σm = M} ((-1)^{|m|}/m!) Γ / (Γ̅, k + m) = ∑_{n: A → ^d, Σn = M} ((-1)^{|n|+|n|}/n!) Γ / (Γ̅, k + n). Inserting this into the above expression and noting that for functions n supported on A_v one has πn = Σn _v, we conclude that it equals ∑_{k: ∂Γ̅ ∖ A_v → ^d, Σk ≤ L} ∑_{n: A_v → ^d, Σn = L - Σk} ((-1)^{|k|+|n|}/(k!\,n!)) (Γ̅, u, πk + πn) ⊗ Γ / (Γ̅, k + n). Setting ℓ = k + n and noting that k!\,n! = (k+n)! since k and n have disjoint support, we see that this is indeed equal to (<ref>) with v replaced by u, as claimed. It remains to show that Δ is well-defined on , namely that Δτ = 0 in _- ⊗  for τ ∈ . Choose a Feynman diagram Γ, an inner vertex v ∈ _⋆, an index i ∈ {1,…,d}, and a subgraph Γ̅ ⊂ Γ. Writing A̅_v for the half-edges in Γ̅ adjacent to v and A_v for the remaining half-edges adjacent to v (so that A_v ⊂ ∂Γ̅), it suffices to show that [e:wantedVanish] ∑_{h ∈ A_v} ∑_{ℓ: ∂Γ̅ → ^d, (Γ̅, ℓ) ∈ _-} ((-1)^{|ℓ|}/ℓ!) (Γ̅, πℓ) ⊗ ∂_h^{δ_i} Γ / (Γ̅, ℓ) + ∑_{h ∈ A̅_v} ∑_{ℓ: ∂Γ̅ → ^d, ∂_{i,v}(Γ̅, ℓ) ∈ _-} ((-1)^{|ℓ|}/ℓ!)
∂_h^{δ_i} (Γ̅, πℓ) ⊗ Γ / (Γ̅, ℓ) = 0 in _- ⊗ , where we used the shorthand notation ∂_{i,v}(Γ̅, ℓ) ∈ _- for the condition ∂_h^{δ_i}(Γ̅, ℓ) ∈ _-, which is acceptable since this condition does not depend on which half-edge h one considers. If v is not contained in Γ̅, then the second term vanishes and A_v consists exactly of all edges adjacent to v in Γ / (Γ̅, ℓ), so that the first term vanishes as well by (<ref>). If v is contained in Γ̅, then we attach the leg of the corresponding connected component Γ̅_0 of Γ̅ to v itself, so that in particular the sum over ℓ can be restricted to values supported on ∂Γ̅ ∖ A_v. By (<ref>), the second term is then equal to ∑_{h ∈ ∂Γ̅_0 ∖ A_v} ∑_{ℓ: ∂Γ̅ ∖ A_v → ^d, ∂_{i,v}(Γ̅, ℓ) ∈ _-} ((-1)^{|ℓ|}/ℓ!) ℓ(h)_i (Γ̅, v, π(ℓ - δ_i _h)) ⊗ Γ / (Γ̅, ℓ), which can be rewritten as ∑_{h ∈ ∂Γ̅_0 ∖ A_v} ∑_{ℓ: ∂Γ̅ → ^d, (Γ̅, ℓ) ∈ _-} ((-1)^{|ℓ| + δ_h}/ℓ!) (Γ̅, v, πℓ) ⊗ Γ / (Γ̅, ℓ + δ_i _h). Inserting this into (<ref>), we conclude that this expression equals ∑_{h ∈ ∂Γ̅_0} ∑_{ℓ: ∂Γ̅ → ^d, (Γ̅, ℓ) ∈ _-} ((-1)^{|ℓ|}/ℓ!) (Γ̅, πℓ) ⊗ ∂_h^{δ_i} Γ / (Γ̅, ℓ), which vanishes in _- ⊗  by (<ref>), since the half-edges in ∂Γ̅_0 are precisely all the half-edges adjacent in Γ / (Γ̅, ℓ) to the node that Γ̅_0 was contracted to. For any element g: _- →  of the dual of _-, we now have a linear map M^g: →  given by M^g Γ = (g ⊗ 𝕀) ΔΓ, which leads to a valuation Π^K_g: →  by setting [e:defPig] Π^K_g = Π^K ∘ M^g as in (<ref>), with Π^K the canonical valuation (<ref>). Note that this is well-defined since Π^K vanishes on the ideal , as already remarked. In particular, we can also view Π^K_g as a map from  to . For any choice of g (depending on the kernel assignment K), such a valuation then automatically satisfies Properties <ref> and <ref>, since these were encoded in the definition of the space , as well as Property <ref>, since the action of Δ commutes with the operation of “amputation of the kth leg” on the subspace on which the latter is defined. In general, such a valuation may fail to satisfy Property <ref>, but if we restrict ourselves to elements g: _- →  that are also characters, one has M^g (Γ_1 ∙ Γ_2) = (M^g Γ_1) ∙ (M^g Γ_2). Since  is an ideal, this implies that the valuation Π^K_g is multiplicative as a map from  to , as required by Property <ref>. We have therefore shown the following. For every character g: _- → , the valuation Π^K_g is consistent for K in the sense of Definition <ref>. Writing _- for the space of characters of _-, it is therefore natural to define a “consistent renormalisation procedure” as a map : ^-_∞ → _- such that the map [e:defVal] K ↦ Π̂^K = Π^K ∘ M^{(K)}, where Π^K denotes the canonical valuation given by (<ref>), extends continuously to all of ^-_0. Our question now turns into the question whether such a map exists. We certainly do not want to impose that the map  itself extends continuously to all of ^-_0, since this would then imply that Π^K extends to all of ^-_0, which is obviously false.
§.§ A Hopf algebra
In this subsection, we address the following point. We have seen that every character g of _- allows us to build a new valuation Π_g from the canonical valuation Π associated to a smooth kernel assignment. We can then take a second character h and build a new valuation Π_g ∘ M^h. It is natural to ask whether this would give us a genuinely new valuation or whether this valuation is again of the form Π_g̅ for some character g̅.
In other words, does _- have a group structure, so that g ↦ M^g is a left action of this group on the space of all valuations? In order to answer this question, we first define a map : _- → _- ⊗ _- in a way very similar to the map Δ, but taking into account the additional labels : [e:coprod] (Γ, v_⋆, ) = ∑_{Γ̅ ⊂ Γ} ∑_{ℓ̅: ∂Γ̅ → ^d} ∑_{: → ^d} ((-1)^{|ℓ̅|}/ℓ̅!) (Γ̅,  + πℓ̅) ⊗ (Γ, v_⋆,  - ) / (Γ̅, ℓ̅). Here, we define (Γ, v_⋆, ) / (Γ̅, ℓ̅) similarly to before, with the node-label of the quotient graph obtained by summing over the labels of all the nodes that get contracted to the same node. If Γ̅ completely contains one (or several) connected components of Γ, then this definition could create graphs that contain isolated nodes, which is forbidden by our definition of _-. Given (<ref>), it is natural to identify isolated nodes with vanishing node-label with the empty diagram, while we identify those with non-vanishing node-labels with 0. In particular, it follows that τ = τ ⊗ 𝟏 + 𝟏 ⊗ τ + (remaining terms), where each of the remaining terms is such that both factors contain at least one edge. Note the strong similarity with <cit.>, which looks formally almost identical, but with graphs replaced by trees. As before, one then has: The map  is well-defined both as a map _- → _- ⊗ _- and as a map _- → _- ⊗ _-. It follows immediately from the definitions that  is multiplicative. What is slightly less obvious is that it also has a nice coassociativity property, as follows. The identities [e:coassoc] ( ⊗ 𝕀) Δ = (𝕀 ⊗ Δ) Δ, ( ⊗ 𝕀)  = (𝕀 ⊗ )  hold between maps → _- ⊗  ⊗  for  =  in the case of the first identity and for  ∈ {_-, _-} in the case of the second one. We only verify the second identity, since the first one is essentially a special case of the second one. The difference is the presence of legs, which are never part of the subgraphs appearing in the definition of Δ, but otherwise play the same role as a “normal” edge. Fix now a Feynman diagram Γ as well as two subgraphs Γ_1 and Γ_2 with the property that each connected component of Γ_1 is either contained in Γ_2 or vertex-disjoint from it. We also write Γ̅ = Γ_1 ∪ Γ_2 and Γ_{1,2} = Γ_1 ∩ Γ_2. There is then a natural bijection between the terms appearing in ( ⊗ 𝕀)  and those appearing in (𝕀 ⊗ ) , obtained by noting that first extracting Γ̅ from Γ and then extracting Γ_1 from Γ̅ is the same as first extracting Γ_1 from Γ and then extracting Γ_2 / Γ_{1,2} from Γ / Γ_1. It therefore remains to show that the labellings and combinatorial factors appearing for these terms are also the same. This in turn is a consequence of a generalisation of the Chu–Vandermonde identity and can be obtained in almost exactly the same way as <cit.>. If we write 𝟏 for the empty vacuum diagram and 𝟏^* for the element of _- that vanishes on all non-empty diagrams, then we see that (_-, , ∙, 𝟏, 𝟏^*) is a bialgebra. Since it is also graded (by the number of edges of a diagram) and connected (the only diagram with 0 edges is the empty one), it is a Hopf algebra, so that _- is indeed a group with product f ∘ g = (f ⊗ g)  and inverse g^{-1} = g ∘ , where  is the antipode. The first identity in (<ref>) then implies that the map g ↦ M^g = (g ⊗ 𝕀)Δ does indeed yield a group action on the space of valuations, thus answering positively the question asked at the start of this section.
§.§ Twisted antipodes and the BPHZ theorem
An arbitrary character g of _- is uniquely determined by its values on connected vacuum diagrams Γ with Γ ≤ 0.
Comparing (<ref>) with (<ref>), this would suggest that a natural choice of renormalisation procedure is given by simply setting (K)Γ = - Π_-^K Γ, as this would indeed reproduce the expression (<ref>). Unfortunately, while this choice does yield valuations that extend continuously to all kernel assignments in ^-_0 for a class of “simple” Feynman diagrams, it fails to do so for all of them. Following <cit.>, a more sophisticated guess would be to set (K)Γ = Π_-^K Γ, with  the antipode of _- endowed with the Hopf algebra structure described in the previous section. The reason why this identity also fails to do the trick can be illustrated with the following example. Consider the case d = 1 and two labels with |_1| = -1/3 and |_2| = -4/3. Drawing edges decorated with _1 in black and edges decorated with _2 in blue, we then consider Γ = [triangle], a one-leg triangle with two black edges and one blue edge, which has degree Γ = 0. Since Γ has only one leg, the naive valuation Π^K Γ can be identified with the real number Π^K Γ = (K_1 * K_2 * K_1)(0), where we wrote K_i = K_{_i} and * denotes convolution. Since this might diverge for a generic kernel assignment in ^-_0, even if K_2 is replaced by its renormalised version, there appears to be no good canonical renormalised value for Π̂^K Γ, so we would expect to just have Π̂^K Γ = 0. Let us see what happens instead if we choose the renormalisation procedure (K)Γ = Π_-^K Γ. It follows from the definition of Δ that [e:exDelta] ΔΓ = 𝟏 ⊗ [triangle] + [line] ⊗ [loop] + [bare triangle] ⊗ [leg], since [line] (the blue edge) and [bare triangle] (the triangle as a vacuum diagram) are the only subgraphs of negative degree, but their degree remains above -1 so that no node-decorations are added; here [loop] denotes the one-leg diagram obtained by contracting the blue edge and [leg] the trivial single-leg diagram. Note furthermore that in _- one has the identities [line] = [line] ⊗ 𝟏 + 𝟏 ⊗ [line], [bare triangle] = [bare triangle] ⊗ 𝟏 + 𝟏 ⊗ [bare triangle]. The reason why there is no additional term analogous to the middle term of (<ref>) appearing in the second identity is that the corresponding factor would be of positive degree and therefore vanishes when viewed as an element of _-. As a consequence, we have τ = -τ in both cases, so that the first and last terms of (<ref>) cancel out and we are eventually left with Π̂^K Γ = - (K_1 * K_1)(0) · K_2(0), which is certainly not desirable since it might diverge as well. The way out of this conundrum is to define a twisted antipode : _- → _-, which is defined by a relation very similar to that defining the antipode, but this time guaranteeing that the renormalised valuation vanishes on those diagrams that encode “potentially diverging constants” as above. Here, the renormalised valuation is defined by setting [e:properRenorm] (K)Γ = Π_-^K Γ, where Π_-^K is defined by (<ref>). Writing : _- ⊗ _- → _- for the product, we define  to be such that [e:twisted] ( ⊗ 𝕀) Γ = 0 for every non-empty connected vacuum diagram Γ ∈ _- with Γ ≤ 0. At first sight, this looks exactly like the definition of the antipode. The difference is that the map  in the above expression goes from _- to _- ⊗ _-, so that no projection onto diverging diagrams takes place on the right factor. If we view _- as a subspace of _-, then the antipode satisfies the identity ( ⊗ π_-) Γ = 0, where π_-: _- → _- is the projection given by quotienting by the ideal _+ generated by diagrams with strictly positive degree. We have the following simple lemma. There exists a unique map : _- → _- satisfying (<ref>). Furthermore, the map Π^K_ given by (<ref>) with (K) = Π_-^K is indeed a valuation. The existence and uniqueness of  is immediate by performing an induction over the number of edges.
Defining _k: _- → _-^{⊗(k+1)} inductively by _0 = ι and then _{k+1} = (_k ⊗ 𝕀) ι, where ι: _- → _- is the canonical injection, one obtains the (locally finite) Neumann series [e:reprAhat]  = ∑_{k ≥ 0} (-1)^{k+1} ^{(k)} _k, where ^{(k)}: _-^{⊗(k+1)} → _- is the multiplication operator. The uniqueness also immediately implies that  is multiplicative, so that (K) as defined above is indeed a character for every K ∈ ^-_∞. We call the renormalisation procedure defined by (K) = Π_-^K  the “BPHZ renormalisation”. It follows from (<ref>) that in the above example the twisted antipode satisfies [bare triangle] = - [bare triangle] + [bare loop] [line], so that ( ⊗ 𝕀)ΔΓ = 𝟏 ⊗ [triangle] - [line] ⊗ [loop] - [bare triangle] ⊗ [leg] + [bare loop] [line] ⊗ [leg], which makes it straightforward to verify that indeed Π^K_ Γ = 0. The following general statement should make it clear that this is indeed the “correct” way of renormalising Feynman diagrams. The BPHZ renormalisation is characterised by the fact that, for every k ≥ 1 and every connected Feynman diagram Γ with k legs and Γ ≤ 0, there exists a constant C such that if ϕ is a test function on ^k of the form ϕ = ϕ_0 · ϕ_1 such that ϕ_1 depends only on x_1 + … + x_k, ϕ_0 depends only on the differences of the x_i, and there exists a polynomial P with P + Γ ≤ 0 and [e:propPhi0] ϕ_0(x_1,…,x_k) = P(x_2 - x_1, …, x_k - x_1) for |x| ≤ C, then (Π^K_ Γ)(ϕ) = 0. One way to interpret this statement is that, once we have defined Π^K_ Γ for test functions in _k^() with  = {{1,…,k}}, the canonical way of extending it to all test functions is to subtract from it the linear combination of derivatives of delta functions which has precisely the same effect when testing against all polynomials of degree at most - Γ. The statement follows more or less immediately from the following observation. Take a valuation of the form Π^K_g as in (<ref>) for some K ∈ ^-_∞ and some g ∈ _-. Fixing the Feynman diagram Γ from the statement, we write ∂Γ = {[1],…,[k]} for its k legs, and we fix a function : ∂Γ → ^d with || + Γ ≤ 0. Write furthermore : ∂Γ → ^d for the function such that the ℓth leg has label δ^{(([ℓ]))}. We assume without loss of generality that ([1]) = 0, since we can always reduce ourselves to this case by (<ref>). Let then P be given by P(x) = P_(x) = ∏_{i=[2]}^{[k]} (x_i - x_{[1]})^{(i)}, let ϕ_0 be as in (<ref>), and let ϕ_1 be a test function depending only on the sums of the coordinates and integrating to 1. We then claim that, writing Γ̅ ⊂ Γ for the maximal subgraph where we only discarded the legs and v_⋆ for the vertex of Γ incident to the first leg, one has (Π_g^K Γ)(ϕ) = (g ⊗ Π_-^K) (Γ̅, v_⋆, π( - )). (In particular (Π^K_g Γ)(ϕ) = 0 unless  ≤ .) Indeed, comparing (<ref>) to (<ref>), it is clear that this is the case when g = 𝟏^*, noting that [e:derPoly] D_2^{([2])} ⋯ D_k^{([k])} P_ = P_{-}. The general case then follows by comparing the definitions of Δ and , noting that by (<ref>) the effect of the label  in (<ref>) is exactly the same as that of the components of ℓ supported on the “legs” in (<ref>). In other words, when comparing the two expressions one should set ℓ̅(h) = ℓ(h) for the half-edges h that are not legs, and (v) = ∑ ℓ(e,v), where the sum runs over all legs (if any) adjacent to v. The claim now follows immediately from the definition of the twisted antipode and the BPHZ renormalisation: (Π^K_ Γ)(ϕ) = (Π_-^K  ⊗ Π_-^K) (Γ̅, v_⋆, π( - )) = Π_-^K ( ⊗ 𝕀) (Γ̅, v_⋆, π( - )) = 0, since the degrees of Γ and of (Γ̅, v_⋆, π( - )) agree (and are negative) by definition.
§ STATEMENT AND PROOF OF THE MAIN THEOREM
We now have all the definitions in place in order to be able to state the BPHZ theorem.
The valuation Π^K_ is consistent for K and extends continuously to all K ∈ ^-_0. By Proposition <ref>, we only need to show the continuity part of the statement. Before we turn to the proof, we give an explicit formula for the valuation Π^K_ instead of the implicit characterisation given by (<ref>). This is nothing but Zimmermann's celebrated “forest formula”.
§.§ Zimmermann's forest formula
So what are these “forests” appearing in the eponymous formula? Given any Feynman diagram Γ, the set _Γ^- of all connected vacuum diagrams Γ̅ ⊂ Γ with Γ̅ ≤ 0 is endowed with a natural partial order given by inclusion. A subset  ⊂ _Γ^- is called a “forest” if any two elements of  are either comparable in _Γ^- or vertex-disjoint as subgraphs of Γ. Given a forest  and a subgraph Γ̅ ∈ , we say that Γ̅_1 is a child of Γ̅ if Γ̅_1 < Γ̅ and there exists no Γ̅_2 ∈  with Γ̅_1 < Γ̅_2 < Γ̅. Conversely, we then say that Γ̅ is Γ̅_1's parent. (The forest structure of  guarantees that its elements have at most one parent.) An element without children is called a leaf, and one without a parent a root. If we connect parents to their children in , then it does indeed form a forest, with arrows pointing away from the roots and towards the leaves. We henceforth write _Γ^- for the set of all forests for Γ. Given a diagram Γ, we now consider the space _Γ generated by all diagrams Γ̂ such that each connected component has either at least one leg or a distinguished vertex v_⋆, but not both. We furthermore endow Γ̂ with an ^d-valued vertex decoration  supported on the leg-less components and, most importantly, with a bijection τ: →  between the edges of Γ̂ and those of Γ, such that legs get mapped to legs. The operation of discarding τ yields a natural injection _Γ ↪ _- ⊗  by keeping the components with a distinguished vertex in the first factor and those with legs in the second factor. (The space _Γ itself however is not a tensor product, due to the constraint that τ is a bijection, which exchanges information between the two factors.) We can also define _Γ analogously to (<ref>) and (<ref>)–(<ref>), so that _Γ = _Γ / _Γ naturally injects into _- ⊗ . Given a connected subgraph γ ⊂ Γ, we then define a contraction operator _γ acting on _Γ in the following way. Given an element (Γ̂, ) ∈ _Γ, we write γ̂ for the subgraph of Γ̂ such that τ is a bijection between the edges of γ̂ and those of γ. If γ̂ is not connected, then we set _γ (Γ̂, ) = 0. Otherwise, we set as in (<ref>) [e:contraction] _γ (Γ̂, ) = ∑_{ℓ̅: ∂γ̂ → ^d} ∑_{: _γ → ^d} ((-1)^{|ℓ̅|}/ℓ̅!) 𝟙_{(γ̂,  + πℓ̅) ≤ 0} (γ̂,  + πℓ̅) · (Γ̂,  - ) / (γ̂, ℓ̅), with the obvious bijections between the edges of γ̂ · Γ̂/γ̂ and those of Γ. This time we explicitly include the restriction to terms such that (γ̂,  + πℓ̅) ≤ 0, which replaces the projection to _- in (<ref>). An important fact is then the following. Let γ_1, γ_2 be two subgraphs of Γ that are vertex-disjoint and let Γ̂ ∈ _Γ be such that γ̂_1 and γ̂_2 are vertex-disjoint. Then _{γ_1} _{γ_2} Γ̂ = _{γ_2} _{γ_1} Γ̂. We will use the natural convention that ∅ ∈ _Γ^-. For any  ∈ _Γ^-, we then write _ Γ for the element of _Γ defined recursively in the following way. For  = ∅, we set _∅ Γ = Γ. For non-empty , we write ρ() ⊂  for the set of roots of  and we set recursively _ Γ = _{∖ρ()} ∏_{γ ∈ ρ()} _γ Γ. The order of the product does not matter by Lemma <ref>, since the roots of  are all vertex-disjoint.
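Since only the nesting/disjointness structure matters for the combinatorics of forests, the set _Γ^- can be enumerated mechanically. The following toy sketch (in Python; the collection of divergent subgraphs is hypothetical and stands in for _Γ^-, each subgraph being represented simply by its vertex set) lists all forests for a small example:

from itertools import combinations

# hypothetical divergent subgraphs of some diagram, given by their vertex sets
F_minus = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 2, 3, 4})]

def compatible(a, b):
    # two subgraphs may coexist in a forest iff nested or vertex-disjoint
    return a <= b or b <= a or not (a & b)

forests = [F for r in range(len(F_minus) + 1)
           for F in combinations(F_minus, r)
           if all(compatible(a, b) for a, b in combinations(F, 2))]

for F in forests:
    print(sorted(sorted(g) for g in F))   # includes the empty forest

In this toy example, {1,2} and {2,3} overlap without being nested, so no forest contains both — this is exactly the situation of “overlapping divergences” discussed below, in which _Γ^- fails to be a forest itself.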
With these notations at hand, Zimmermann's forest formula <cit.> then reads as follows. The BPHZ renormalisation procedure is given by the identity [e:forest] ( ⊗ 𝕀)ΔΓ = ∑_{ ∈ _Γ^-} (-1)^{||} _ Γ, where we implicitly use the injection _Γ ↪ _- ⊗  for the right hand side. This follows from the representation (<ref>). Another way of seeing it is to first note that the right hand side is indeed of the form ( ⊗ 𝕀)ΔΓ for some : _- → _-, and to then make use of the characterisation (<ref>) of the twisted antipode . This implies that it suffices to show that Γ = 0 for every connected Γ with a distinguished vertex and a node-labelling such that Γ ≤ 0. The idea is to observe that _Γ^- can be partitioned into two disjoint sets that are in bijection with each other: those forests that contain Γ itself and the complement _Γ^- of those forests that don't. Furthermore, it follows from the definition that _Γ Γ = Γ, so that ∑_{ ∈ _Γ^-} (-1)^{||} _ Γ = ∑_{ ∈ _Γ^-} (-1)^{||} (_ Γ - _{∪{Γ}} Γ) = ∑_{ ∈ _Γ^-} (-1)^{||} (_ Γ - _ Γ), which vanishes, thus completing the proof. In order to analyse (<ref>), it will be very convenient to have ways of resumming its terms in order to make cancellations more explicit. These resummations are based on the following trivial identity. Given a finite set A and operators X_i with i ∈ A, one has [e:prod] ∏_{i ∈ A} (𝕀 - X_i) = ∑_{B ⊂ A} (-1)^{|B|} ∏_{j ∈ B} X_j, provided that the order in which the operators are composed is the same in each term and that the empty product is interpreted as the identity. The right hand side of this expression is clearly reminiscent of (<ref>), while the left hand side encodes cancellations if the X_i are close to the identity in some sense. If _Γ^- itself happens to be a forest, then _Γ^- consists simply of all subsets of _Γ^-, so that one can indeed write [e:simpleCase] ( ⊗ 𝕀)ΔΓ = _{_Γ^-} Γ, where _ Γ is defined by _∅ Γ = Γ and then via the recursion [e:defRGamma] _ Γ = _{∖ρ()} ∏_{γ ∈ ρ()} (𝕀 - _γ) Γ. In general however this is not the case, and this is precisely the problem of “overlapping divergences”. In order to deal with this, we introduce the following variant of (<ref>) which still works in the general case. To formulate it, we introduce the notion of a “forest interval”  for Γ, which is a subset of _Γ^- of the form [,] in the sense that it consists precisely of all those forests  ∈ _Γ^- such that  ⊂  ⊂ . An alternative description of  is that there is a forest δ() =  ∖  disjoint from  and such that  consists of all forests of the type  ∪  with  ⊂ δ(). Given a forest interval , we define an operation _ which renormalises all subgraphs in δ() and contracts those subgraphs in . In other words, we set _ = _^, where _^ is defined recursively by _^ Γ = _^{∖ρ()} ∏_{γ ∈ ρ()} _γ^♯ Γ, with _γ^♯ = 𝕀 - _γ if γ ∈ δ() and _γ^♯ = -_γ otherwise. This definition is consistent with (<ref>) in the sense that one has _ = _ for  = [∅, ]. Combining Proposition <ref> with (<ref>), we then obtain the following alternative characterisation of our renormalisation map. Let Γ be a Feynman diagram and let  be a partition of _Γ^- consisting of forest intervals. Then, one has the identity ( ⊗ 𝕀)ΔΓ = ∑_{ ∈ } _ Γ.
§.§ Proof of the BPHZ theorem, Theorem <ref>
We now have all the ingredients in place to prove Theorem <ref>. We only need to show that for every (connected) Feynman diagram Γ there are constants C_Γ and N_Γ such that for every test function ϕ with compact support in the ball of radius 1 one has the bound [e:mainBound] |(Π^K_ Γ)(ϕ)| ≤ C_Γ ∏_{e ∈} |K_(e)|_{N_Γ} sup_{|k| ≤ N_Γ} ‖D^{(k)} ϕ‖_{L^∞}, where |K_|_N denotes the smallest constant C such that (<ref>) holds for all |k| ≤ N.
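The resummation identity (<ref>) is elementary, but worth checking mechanically once, since the fixed composition order is exactly what makes it valid for non-commuting operators. A quick sketch for |A| = 3 (assuming sympy is available; symbols declared non-commutative):

from itertools import combinations
from functools import reduce
from sympy import Symbol, Integer, expand

X = [Symbol(f"X{i}", commutative=False) for i in range(3)]
one = Integer(1)

lhs = expand(reduce(lambda a, b: a*b, [one - x for x in X]))
rhs = sum((-1)**len(B) * reduce(lambda a, b: a*b, [X[j] for j in B], one)
          for r in range(4) for B in combinations(range(3), r))
print(expand(lhs - rhs) == 0)   # True: every term keeps the order X0, X1, X2

Here combinations() returns index sets in increasing order, so each product on the right is composed in the same order as on the left, as required in the statement.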
The proof of (<ref>) follows the same lines as that of the main result in <cit.>, but with a number of considerable simplifications: * There is no “positive renormalisation” in the present context, so that we do not need to worry about overlaps between positive and negative renormalisations. As a consequence, we also do not make any claim on the behaviour of (<ref>) when rescaling the test function. In general, it is false that (<ref>) obeys the naive power-counting when ϕ is replaced by ϕ^λ and λ → 0 as in <cit.>. * The BPHZ renormalisation procedure studied in the present article is directly formulated at the level of graphs. In <cit.> on the other hand, it is formulated at the level of trees (which are the objects indexing a suitable family of stochastic processes) and then has to be translated into a renormalisation procedure on graphs which, depending on how trees are glued together in order to form these graphs, creates additional “useless” terms. * We only consider kernels with a single argument, corresponding to “normal” edges in our graphs, while <cit.> deals with non-Gaussian processes which then give rise to Feynman diagrams containing some “multiedges”. We therefore only give an overview of the main steps, but we hope that the style of our exposition is such that the interested reader will find it possible to fill in the missing details without undue effort. As in the proof of Proposition <ref>, we break the domain of integration into Hepp sectors D_ and we estimate terms separately on each sector. The main trick is then to resum the terms as in Lemma <ref>, but by using a partition _ that is adapted to the Hepp sector  in such a way that the occurrences of (𝕀 - _γ) create cancellations that are useful on D_. In order to formulate this, it is convenient to write all the terms appearing in the definition of Π^K_ Γ as integrals over the same set of variables. For this, we henceforth fix a connected Feynman diagram Γ once and for all, together with an arbitrary total order on its vertices. We then define the space _Γ generated by connected Feynman diagrams Γ̅ with edges and vertices in bijection with those of Γ via a map τ: (, ) → (, ), together with a vertex labelling , as well as a map c: →  which vanishes on all legs of Γ̅. The goal of this map is to allow us to keep track of which parts of Γ were contracted, as well as the structure of nested contractions: c measures how “deep” a given edge lies within nested contractions. In particular, it is natural to impose that c vanishes on legs, since they are never contracted. We furthermore impose that for every j > 0, every connected component γ̂ of c^{-1}(j) has the following two properties. * The highest vertex v_⋆(γ̂) of γ̂ has an incident edge e with c(e) < j. (Here, “highest” refers to the total order we fixed on vertices of Γ, which is transported to Γ̅ by the bijection between vertices of Γ and Γ̅.) * All edges e incident to a vertex of γ̂ other than v_⋆(γ̂) satisfy c(e) ≥ j. Writing ^c ⊂  for those vertices v with at least one edge e incident to v such that c(e) > 0, we also impose that (v) = 0 for v ∉ ^c. We view Γ itself as an element of _Γ by setting c ≡ 0. Note that this data defines a map v ↦ v_⋆ from ^c to ^c such that v ↦ v_⋆(γ̂), for γ̂ the connected component of c^{-1}(j) with the lowest possible value of j containing v. For γ ⊂ Γ as above, we then define maps _γ on _Γ similarly to (<ref>). This time however, we set _γ Γ̅ = 0 unless the following conditions are met. * The graph τ^{-1}(γ) ⊂ Γ̅ is connected.
* For every edge e adjacent to τ^{-1}(γ), one has c(e) ≤ inf_{ê ∈} c(ê). We also restrict the sum over labels ℓ to those supported on edges with c(e) = inf_{ê ∈} c(ê). In order to remain in _Γ, instead of extracting γ̂ = τ^{-1}(γ), we reconnect the edges of Γ̅ adjacent to γ̂ to the highest vertex v̂ of γ̂ and we increase c(e) by 1 on all edges e of γ̂. We similarly define elements _ Γ as above, with every instance of _γ replaced by _γ. We also view Γ itself as an element of _Γ by setting both c and  to 0. Let us illustrate this by taking for Γ the diagram of Figure <ref> and for γ the triangle shaded in grey. In this case, assuming that the order on our vertices is such that the first vertex is the leftmost one and that the degree of γ is above -1 so that no node-decorations are needed, _γ maps this diagram to the one in which the edges adjacent to the shaded triangle are reconnected to its highest vertex (drawn in green), with c equal to 1 on the shaded region. The green node then denotes the element v_⋆ for all the nodes v in that region. This time, it follows in virtually the same way as in the proof of Proposition <ref> that if γ_1 and γ_2 are either vertex-disjoint or such that one is included in the other, then the operators _{γ_1} and _{γ_2} commute. In particular, we can simply write [e:defRM] _ Γ = (∏_{γ ∈ δ()} (𝕀 - _γ) ∏_{γ̅ ∈} (-_γ̅)) Γ, without having to worry about the order of the operations as in (<ref>). For every K ∈ ^-_∞ and every test function ϕ, we then have a linear map ^K: _Γ → ^∞(^{_⋆}) given by (^K Γ̅)(x) = ∏_{e ∈ _⋆} K_(e)(x_{τ(e_+)} - x_{τ(e_-)}) ∏_{v ∈ _⋆} (x_{τ(v)} - x_{τ(v_⋆)})^{(v)} × (D_1^{ℓ_1} ⋯ D_k^{ℓ_k} ϕ)(x_{v_1}, …, x_{v_k}), where τ:  ∪  →  ∪  is the bijection between edges and vertices of Γ̅ and those of Γ, the v_i are the vertices to which the k legs of Γ are attached, and the ℓ_i are the corresponding multiindices as in (<ref>).
With this notation, our definitions show that, for every partition  of _Γ^- into forest intervals, one has (Π^K_ Γ)(ϕ) = ∑_{ ∈ } ∫_{^{_⋆}} (^K _ Γ)(x) dx. We bound this rather brutally by [e:terms] |(Π^K_ Γ)(ϕ)| ≤ ∑_ ∑_{ ∈ _} ∫_{D_} |(^K _ Γ)(x)| dx ≤ ∑_ ∑_{ ∈ _} sup_{x ∈ D_} |(^K _ Γ)(x)| ∏_{u ∈ T} 2^{-d _u}. At this stage, we would like to make a smart choice for the partition _ which allows us to obtain a summable bound for this expression. In order to do this, we would like to guarantee that a cancellation (𝕀 - _γ) appears for all of the subgraphs γ that are such that the length of all adjacent edges (as measured by the quantity |x_{τ(e_+)} - x_{τ(e_-)}|) is much greater than the diameter of γ (measured in the same way). In order to achieve this, we first note that by Proposition <ref> and (<ref>) below, we can restrict ourselves in (<ref>) to the case where _ is a partition of the subset _Γ^- ⊂ _Γ^- of all forests containing only subgraphs that are full in Γ. (Recall that a subgraph γ̅ ⊂ Γ is full in Γ if it is induced by a subset of the vertices of Γ, in the sense that it consists of all edges of Γ connecting two vertices of the subset in question.) We then consider the following construction. For any forest  ∈ _Γ^-, write _ Γ for the Feynman diagram obtained by performing the contractions of _ Γ. (So that _ Γ is a linear combination of terms obtained from _ Γ by adding node-labels and the corresponding derivatives on incident edges.) As above, write τ for the corresponding bijection between edges and vertices of _ Γ and those of Γ. Given a Hepp sector  = (T, ) for Γ and an edge e of Γ, we then write _^(e) = (v_e), where v_e = τ(τ^{-1}(e)_-) ∧ τ(τ^{-1}(e)_+) is the common ancestor in T of the two vertices incident to e, but when viewed as an edge of _ Γ. (Since we only consider forests consisting of full subgraphs, τ^{-1}(e)_- and τ^{-1}(e)_+ are distinct, so this is well-defined.) Given γ ∈ , we then set _^(γ) = inf_{e ∈ _γ^} _^(e), _^(γ) = sup_{e ∈ _γ^} _^(e), where _γ^ denotes the edges belonging to γ, but not to any of the children of γ in , while _γ^ denotes the edges adjacent to γ and belonging to the parent (γ) of γ in  (with the convention that if γ has no parent, then (γ) = Γ). With these notations, we then make the following definition. Fix a Hepp sector . Given a forest  ∈ _Γ^-, we say that γ ∈  is safe in  if _^(γ) ≥ _^(γ), and that it is unsafe in  otherwise. Given a forest  and a subgraph γ ∈ _Γ^-, we say that γ is safe / unsafe for  if  ∪ {γ} ∈ _Γ^- and γ is safe / unsafe in  ∪ {γ}. Finally, we say that a forest  is safe if every γ ∈  is safe in . The following remark is then crucial. Let _s ∈ _Γ^- be a safe forest and write _u for the collection of all γ ∈ _Γ^- that are unsafe for _s. Then, one has _s ∪ _u ∈ _Γ^-, and furthermore every γ in _s / _u is safe / unsafe in _s ∪ _u. Fix _s and write again τ for the corresponding bijection between edges and vertices of _{_s} Γ and those of Γ. For each γ ∈ _s, write _γ^{_s} ⊂  for the set of vertices of the form τ(τ^{-1}(e)_±) for e ∈ _γ^{_s}, as well as v_{⋆,γ}^{_s} ∈ _γ^{_s} for the highest one of these vertices. (This is the vertex that edges outside of γ were reconnected to by the operation _{_s}.) We also write _γ^{_s} ⊂  for all vertices of the form τ(τ^{-1}(e)_±) for e ∈ _γ^{_s} that are not in _γ^{_s}. With this notation, _^(γ) = ((_γ^{_s})^↑), and there exists a vertex w ∈ _γ^{_s}, for (γ) the parent of γ in _s (with the convention as above), such that _^(γ) = (v_{⋆,γ}^{_s} ∧ w). Since both (_γ^{_s})^↑ and v_{⋆,γ}^{_s} ∧ w lie on the path connecting the root of T to v_{⋆,γ}, it follows from the definition of a safe forest that one necessarily has v_{⋆,γ}^{_s} ∧ w > (_γ^{_s})^↑. Let now γ̅ ∈ _Γ^- ∖ _s be such that _s ∪ {γ̅} ∈ _Γ^-, and set _γ̅ = _γ̅^{_s ∪ {γ̅}} as well as _γ̅ = _γ̅^{_s ∪ {γ̅}}.
It follows from the definitions that γ̅ ∈ _u if and only if none of the descendants of _γ̅^↑ in T belongs to _γ̅. As a consequence of this characterisation, any two graphs γ_1, γ_2 ∈ _u are either vertex-disjoint, or one of them is included in the other one. Indeed, assume by contradiction that neither is included in the other one and that their intersection γ_∩ contains at least one vertex. Writing γ̂_∩ for one of the connected components of γ_∩, there exist edges e_i in γ_i that are adjacent to γ̂_∩: otherwise, since the γ_i are connected, one of them would be contained in γ̂_∩. Write v_i for the vertex of e_i that does not belong to γ̂_∩. Such a vertex exists since otherwise it would not be the case that γ̂_∩ is full in γ^↑ = (γ_1) = (γ_2). Since γ_1 is unsafe, it follows that v_2 is not a descendant of (_{γ̂_∩} ∪ {v_1})^↑, so that in particular, for every vertex v ∈ γ̂_∩, one has v_1 ∧ v > v_2 ∧ v. The same argument with the roles of γ_1 and γ_2 reversed then leads to a contradiction. This shows that _s ∪ _u is indeed again a forest, so that it remains to show the last statement. We will show a slightly stronger statement, namely that, given an arbitrary forest , the property of γ ∈  being safe or unsafe does not change under the operation of adding to  a graph γ̅ that is unsafe for . Given the definitions, there are three potential cases that could affect the “safety” of γ: either γ̅ ⊂ γ, or γ ⊂ γ̅, or γ̅ ⊂ (γ) and there exists an edge e adjacent to both γ and γ̅. We consider these three cases separately and we write  =  ∪ {γ̅}. In the case γ̅ ⊂ γ, it follows from the ultrametric property and the fact that γ̅ is unsafe that _^(γ) = _^(γ), whence the desired property follows. In the case γ ⊂ γ̅, it is _^(γ) which could potentially change, since _γ^ becomes smaller when adding γ̅. Note however that, by the ultrametric property combined with the fact that γ̅ is unsafe, the edges e in _γ^ ∖ _γ^ satisfy _^(e) = _^(e). Furthermore, again as a consequence of γ̅ being unsafe, one has _^(e) < _^(e̅) for every edge e̅ in γ̅ which is not in γ, so in particular for e̅ ∈ _γ^. This shows again that _^(γ) = _^(γ), as required. The last case can be dealt with in a very similar way, thus concluding the proof. As a corollary of the proof, we see that the definition of the notion of “safe forest”, as well as the construction of _u given a safe forest _s, only depend on the topology of the tree T and not on the specific scale assignment. It also follows that, given an arbitrary  ∈ _Γ^-, there exists a unique way of writing  = _s ∪ _u with _s a safe forest and _u being unsafe for _s (and equivalently for ). In particular, writing _Γ^{(s)}(T) for the collection of safe forests for the tree T, the collection _ = {[_s, _s ∪ _u] : _s ∈ _Γ^{(s)}(T)}, where, for any _s, the forest _u is defined as in Lemma <ref>, forms a partition of _Γ^- into forest intervals. It then follows from (<ref>) that |(Π^K_ Γ)(ϕ)| ≤ ∑_T ∑_{_s ∈ _Γ^{(s)}(T)} ∑_ sup_{x ∈ D_} |(^K _{[_s, _s ∪ _u]} Γ)(x)| ∏_{v ∈ T} 2^{-d _v}, where  runs over all monotone integer labels for T and the construction of _u given _s and T is as above. We note that the first two sums are finite, so that as in the proof of Proposition <ref> it is sufficient, for any given choice of T and safe forest _s, to find a collection of real-valued functions {η_i}_{i ∈ I} (for some finite index set I) on the interior vertices of T such that [e:wantedboundeta] ∑_ sup_{x ∈ D_} |(^K _{[_s, _s ∪ _u]} Γ)(x)| ∏_{v ∈ T} 2^{-d _v} ≤ ∑_{i ∈ I} ∑_ ∏_{v ∈ T} 2^{-η_i(v) _v}, and such that [e:wantedeta] ∑_{w ≥ v} η_i(w) > 0 for all v ∈ T and all i ∈ I, which then guarantees that the above expression converges.
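The role of condition (<ref>) can be seen on a toy example: for a linear tree with two interior vertices u_1 < u_2 and monotone labels 0 ≤ n_1 ≤ n_2, the geometric sum ∑ 2^{-η(u_1) n_1 - η(u_2) n_2} converges precisely when η(u_2) > 0 and η(u_1) + η(u_2) > 0, i.e. when all the partial sums in (<ref>) are positive. A small numeric sketch (hypothetical values of η, truncating the sums at N):

def S(eta1, eta2, N=300):
    # sum over monotone labels 0 <= n1 <= n2 < N
    return sum(2.0**(-eta1*n1 - eta2*n2)
               for n1 in range(N) for n2 in range(n1, N))

print(S(1.5, 0.5),  S(1.5, 0.5, 600))   # stable: both partial sums positive
print(S(-0.5, 1.0), S(-0.5, 1.0, 600))  # stable: eta2 > 0 and eta1+eta2 > 0
print(S(1.0, -0.2), S(1.0, -0.2, 600))  # keeps growing: partial sum at u2 fails

Note that η may well be negative at an individual vertex, as in the second line; only the sums along the paths towards the leaves need to be positive.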
[Figure: structure of _Γ — nested coloured shapes representing the pieces (γ), with parents drawn in lighter shades than their children and the connecting vertices drawn in red.] Before we turn to the construction of the η_i, let us examine in a bit more detail the structure of the graph _ Γ = (_, _). Writing τ for the bijection between _ Γ and Γ, every γ ∈  yields a subgraph (γ) = (_γ, _γ) of _ Γ whose edge set is given by the preimage under τ of the edge set of γ ∖ ⋃ (γ), where (γ) denotes the set of all children of γ in . Furthermore, (γ) is connected by exactly one vertex to (γ̅), for γ̅ ∈ (γ) ∪ {(γ)}, and it is disconnected from (γ̅) for all other elements γ̅ ∈ . This is also the case if γ is a root of , so that (γ) = Γ by our usual convention, if we set (Γ) to be the preimage in _ Γ of the complement of all roots of . We henceforth write v_⋆(γ) for the unique vertex connecting (γ) to ((γ)), and we write _γ^⋆ = _γ ∖ {v_⋆(γ)}, so that one has a partition _ = _Γ ⊔ ⨆_{γ ∈} _γ^⋆. In this way, the tree structure of  is reflected in the topology of _ Γ, as illustrated in Figure <ref>, where each (γ) is stylised by a coloured shape, with parents having lighter shades than their children and connecting vertices drawn in red. Recall that we also fixed a total order on the vertices of Γ (and therefore on those of _ Γ) and that the construction of _ Γ implies that the corresponding order on {v_⋆(γ)}_{γ ∈} is compatible with the partial order on  given by inclusion. For e ∈ _, write M_e ⊂ {+,-} for those ends such that τ(e)_∙ ≠ τ(e_∙) for ∙ ∈ M_e, and set _^∙ = {(e,∙) : e ∈ _^m, ∙ ∈ M_e}. Then, by the construction of _ Γ, for every ∙ ∈ M_e there exists a unique γ_∙(e) ∈  and a vertex e_∘ ∈ _ such that [e:propse] e_∙ = v_⋆(γ_∙(e)), e_∘ = τ^{-1}(τ(e)_∙) ∈ _{γ_∙(e)}, e ∈ _{(γ_∙(e))}. Given ℓ: _^∙ → ^d, we then define a canonical basis element _^ℓ Γ ∈ _Γ by _^ℓ Γ = (_ Γ, ^{(ℓ)}, πℓ), where ^{(ℓ)} is the edge-labelling given by ^{(ℓ)}(e) = (τ(e)) + ∑_{∙ ∈ M_e} ℓ(e,∙), with  the original edge-labelling of Γ, and πℓ is the node-labelling given by πℓ(v) = ∑ {ℓ(e,∙) : e_∘ = v}. Given γ ∈  and ℓ as above, we also set ℓ(γ) = ∑ {|ℓ(e,∙)| : γ_∙(e) = γ}. We now return to the bound (<ref>) and first consider the special case when _s is a safe forest such that _u = ∅. By (<ref>) and (<ref>), _{_s} Γ can then be written as [e:hatRF] _{_s} Γ = (-1)^{|_s|} ∑_{ℓ: _{_s}^∙ → ^d} ((-1)^{ℓ_-}/ℓ!) _{_s}^ℓ Γ, where ℓ_- = ∑ {|ℓ(e,∙)| : ∙ = -} and the sum in (<ref>) is restricted to those choices of ℓ such that, for every γ ∈ _s, one has γ + ℓ(γ) ≤ 0. In this case, we take as the index set I appearing in (<ref>) all those functions ℓ appearing in the sum (<ref>) (recall that the sum is restricted to finitely many such functions) and we set η_ℓ(u) = d + ∑_{e ∈ _{_s}} ^{(ℓ)}(e) _{e^↑}(u) + ∑_{(e,∙) ∈ _{_s}^∙} |ℓ(e)| _{(e,∙)^↑}(u), where, for e ∈ _{_s}, e^↑ denotes the node of T given by τ(e_-) ∧ τ(e_+) and, for (e,∙) ∈ _{_s}^∙, (e,∙)^↑ denotes the node τ(e_∘) ∧ τ(e_∙). It follows from the definition of ^K that this choice does indeed satisfy (<ref>). We now claim that, as a consequence of the fact that _s is such that _u = ∅, it also satisfies (<ref>). Assume by contradiction that there exists a node u of T and a labelling ℓ such that a = ∑_{v ≥ u} η_ℓ(v) ≤ 0. Write _0 ⊂ _{_s} for the vertices v such that τ(v) ≥ u in T, and Γ_0 = (_0, _0) ⊂ _{_s} Γ for the corresponding subgraph. In general, Γ_0 does not need to be connected, so we write Γ_0^{(i)} = (_0^{(i)}, _0^{(i)}) for its connected components.
We then set a_i = |_0^{(i)}| - 1 + ∑_{e ∈ _0^{(i)}} ^{(ℓ)}(e) + ∑_{(e,∙) ∈ _{_s}^∙} |ℓ(e)| 𝟙_{{e_∙, e_∘} ⊂ _0^{(i)}}, so that ∑_i a_i ≤ a, with equality if Γ_0 happens to be connected. Since a ≤ 0, there exists i such that a_i ≤ 0. Furthermore, i can be chosen such that |_0^{(i)}| ≥ 2, since |_0| ≥ 2 and we would otherwise have a = |_0| - 1 ≥ 1. Set _{0,γ} = _0 ∩ _γ and let _s^{(i)} ⊂ _s ∪ {Γ} be the subtree consisting of those γ such that either _γ ∩ _0^{(i)} ≠ ∅ or v_⋆(γ) ∈ _0^{(i)} (or both). We also break a_i into contributions coming from each γ ∈ _s^{(i)} by setting [e:defagamma] a_{i,γ} = |_{0,γ}| - 1 + ∑_{e ∈ _γ ∩ _0} ^{(ℓ)}(e) + ∑_{(e,∙) ∈ _{_s}^∙ : γ_∙(e) = γ} |ℓ(e)| 𝟙_{{e_∙, e_∘} ⊂ _0^{(i)}}. We claim that ∑_γ a_{i,γ} = a_i: recalling that one always has Γ ∈ _s^{(i)} by definition, the only part which is not immediate is that ∑_γ (|_{0,γ}| - 1) = |_0^{(i)}| - 1. This is a consequence of the fact that in the sum ∑_γ |_{0,γ}| each “connecting vertex” is counted twice. Since _s^{(i)} is a tree, the number of these equals |_s^{(i)}| - 1, whence the claim follows. We introduce the following terminology. An element γ ∈ _s ∪ {Γ} is said to be “full” if _γ ∩ _0^{(i)} = _γ, “empty” if _γ ∩ _0^{(i)} = ∅, and “normal” otherwise. We also set a_{i,γ} = 0 for all empty γ with _{0,γ} = ∅. Recall furthermore the definition of γ for γ ∈ _s given in (<ref>) and the definition of ℓ(γ) given above. With this terminology, we then have the following. A full subgraph γ cannot have an empty parent, and one has a_{i,γ} = γ + ℓ(γ) - ∑_{γ̅ ∈ (γ)} (γ̅ + ℓ(γ̅)) if γ is full, a_{i,γ} = 0 if γ is empty, and a_{i,γ} > - ∑_{γ̅ ∈ _⋆(γ)} (γ̅ + ℓ(γ̅)) if γ is normal, where _⋆(γ) consists of those children γ̅ of γ such that v_⋆(γ̅) ∈ _0^{(i)}. Before we proceed to prove Lemma <ref>, let us see how this leads to a contradiction. By (<ref>), one has γ̅ + ℓ(γ̅) ≤ 0 for every γ̅ ∈ _s, and a fortiori γ̅ < 0. Furthermore, since |_0^{(i)}| ≥ 2, there exists at least one subgraph γ which is either full or normal. Since full subgraphs can only have parents that are either full or normal, and since Γ itself cannot be full (since legs are never contained in _0^{(i)}), we have at least one normal subgraph. Since each of the negative terms γ + ℓ(γ) appearing in the right hand side of the bound on a_{i,γ} for γ full is compensated by a corresponding term in its parent, and since we use the strict inequality appearing for normal γ at least once, we conclude that one has indeed ∑_γ a_{i,γ} > 0, as required. Let us first show that the bounds (<ref>) hold. If γ is empty, one has either γ ∉ _s^{(i)}, in which case _{0,γ} = ∅ and a_{i,γ} = 0 by definition, or _{0,γ} = {v_⋆(γ)}, in which case a_{i,γ} = 0 by (<ref>). If γ is full, then it follows immediately from the definition of γ that one would have a_{i,γ} = γ - ∑_{γ̅ ∈ (γ)} γ̅ if it weren't for the presence of the labels ℓ. If γ is full then, whenever (e,∙) is such that γ_∙(e) = γ, one also has {e_∙, e_∘} ⊂ _0^{(i)} by (<ref>) and the definition of being full. Similarly, one has e ∈ _γ ∩ _0 whenever γ_∙(e) ∈ (γ). The first identity in (<ref>) then follows from the fact that each edge with γ_∙(e) = γ contributes |ℓ(e)| to the last term in (<ref>), while each edge with γ_∙(e) ∈ (γ) contributes -|ℓ(e)| to the penultimate term. Regarding the last identity in (<ref>), given a normal subgraph γ, write γ̂ for the subgraph of Γ with edge set given by [e:defgammahat]  = τ(_γ ∩ _0^{(i)}) ∪ ⋃_{γ̅ ∈ _⋆(γ)} (γ̅). In exactly the same way as for a full subgraph, one then has a_{i,γ} ≥ γ̂ - ∑_{γ̅ ∈ _⋆(γ)} (γ̅ + ℓ(γ̅)). The reason why this is an inequality and not an equality is that we may have additional positive contributions coming from those ℓ(e) with γ_∙(e) = γ and such that e_∘ ∈ _0^{(i)}, while we do not have any negative contributions from those ℓ(e) with γ_∙(e) ∈ _⋆(γ) but e ∉ _0^{(i)}.
The claim then follows from the fact that one necessarily has γ̂ > 0 by the assumption that _u = ∅. Indeed, it follows from its definition and the construction of the Hepp sector T that the subgraph Γ_0 satisfies _^{_s}(e) > _^{_s}(e̅) for every edge e ∈ _0 and every edge e̅ adjacent to Γ_0 in _{_s} Γ, so that one would have γ̂ ∈ _u otherwise. It remains to show that if γ is a full subgraph, then it cannot have an empty parent. This follows in essentially the same way as above, noting that if it were the case that γ has an empty parent, then it would be unsafe in _s, in direct contradiction with the fact that _s is a safe forest. In order to complete the proof of Theorem <ref>, it remains to consider the general case when _u ≠ ∅. In this case, setting  = [_s, _s ∪ _u], we have [e:defRMgen] _ Γ = (-1)^{|_s|} ∑_{ℓ: _{_s}^∙ → ^d} ((-1)^{ℓ_-}/ℓ!) (∏_{γ ∈ _u} (𝕀 - _γ)) _{_s}^ℓ Γ, with the sum over ℓ restricted in the same way as before. Again, we bound each term in this sum separately, so that our index set I consists again of the subset of functions ℓ: _{_s}^∙ → ^d such that γ + ℓ(γ) < 0 for every γ ∈ _s, but this time each of these summands still comprises several terms generated by the action of the operators _γ for the “unsafe” graphs γ. For any γ ∈ _u, we define a subgraph (γ) of _{_s} Γ as before, with the children of γ being those in _s ∪ {γ}, not those in all of _s ∪ _u. The definition of γ being “unsafe” then guarantees that there exists a vertex γ^↑ in T such that τ(_γ) = {v ∈ : v ≥ γ^↑}. We furthermore define γ^ = sup {e^↑ : e ∈ _{(γ)} and e ∼ (γ)}, with “∼” meaning “adjacent to”, which is well-defined since all of the elements appearing under the sup lie on the path joining γ^↑ to the root of T. In particular, one has γ^↑ > γ^. We also set N(γ) = 1 + ⌊- γ⌋, with the convention that N(γ) = 0 for γ ∉ _u. We claim that this time, if we set [e:fulleta] η_ℓ(u) = d + ∑_{e ∈ _{_s}} ^{(ℓ)}(e) _{e^↑}(u) + ∑_{(e,∙) ∈ _{_s}^∙} |ℓ(e)| _{(e,∙)^↑}(u) + ∑_{γ ∈ _u} N(γ) (_{γ^↑}(u) - _{γ^}(u)), then η_ℓ does indeed satisfy the required properties, which then concludes the proof. As before, we assume by contradiction that there is u such that a = ∑_{v ≥ u} η_ℓ(v) ≤ 0 and we define, for each connected component Γ_0^{(i)} of Γ_0, a_i = |_0^{(i)}| - 1 + ∑_{e ∈ _0^{(i)}} ^{(ℓ)}(e) + ∑_{(e,∙) ∈ _{_s}^∙} |ℓ(e)| 𝟙_{{e_∙, e_∘} ⊂ _0^{(i)}} + ∑_{γ ∈ _u} N(γ) 𝟙_{(γ) = Γ_0^{(i)} ∩ ((γ))}. It is less obvious than before to see that ∑ a_i ≤ a, because of the presence of the last term. Given γ ∈ _u, there are two possibilities regarding the corresponding term in (<ref>). If γ^↑ < u in T, then it does not contribute to a at all. Otherwise, τ^{-1}(γ) is included in Γ_0 and we distinguish two cases. In the first case, one has (γ) = ((γ)) ∩ Γ_0. In this case, since the inclusion γ ⊂ (γ) is strict, there is at least one edge in ((γ)) adjacent to (γ). Since this edge is also adjacent to Γ_0, it follows that in this case γ^ < u, so that we have indeed a contribution N(γ) to a. In the remaining case, the corresponding term may or may not contribute to a, but if it does, then its contribution is necessarily positive, so we can discard it and still have ∑ a_i ≤ a as required. As before, we then write a_i = ∑_{γ ∈ _s^{(i)}} a_{i,γ} with a_{i,γ} = |_{0,γ}| - 1 + ∑_{e ∈ _γ ∩ _0} ^{(ℓ)}(e) + ∑_{(e,∙) ∈ _{_s}^∙ : γ_∙(e) = γ} |ℓ(e)| 𝟙_{{e_∙, e_∘} ⊂ _0^{(i)}} + ∑_{γ̅ ∈ _u} N(γ̅) 𝟙_{(γ̅) = Γ_0^{(i)} ∩ (γ)}. We claim that the statement of Lemma <ref> still holds in this case. Indeed, the only case that requires a slightly different argument is that when γ is “normal”. In this case, defining again γ̂ as in (<ref>), we have a_{i,γ} ≥ γ̂ + N(γ̂) - ∑_{γ̅ ∈ _⋆(γ)} (γ̅ + ℓ(γ̅)), since the last term in (<ref>) contributes precisely when γ̂ ∈ _u, and then only the term with γ̅ = γ̂ is selected by the indicator function.
The remainder of the argument, including the fact that this then yields a contradiction with the assumption that a ≤ 0, is then identical to before, since one always has γ̂ + N(γ̂) > 0. In order to complete the proof of our main theorem, it thus remains to show that the choice of η_ℓ given in (<ref>) allows us to bound from above the contribution of the Hepp sector indexed by T, in the sense that the bound (<ref>) holds. The only non-trivial part of this is the presence of a term N(γ) (_{γ^↑}(u) - _{γ^}(u)) for each factor of (1 - _γ) in (<ref>). This will be a consequence of the following bound. Let K_i: →  be kernels satisfying the bound (<ref>) with  = -α_i < 0 for i ∈ I, with I a finite index set, and write I_⋆ = I ⊔ {⋆}. Let furthermore x_i, y_i ∈  be such that |x_i - x_j| ≤ δ < Δ ≤ |x_i - y_j| for all i,j ∈ I_⋆, and let N ≥ 0 be an integer. Then, one has the bound |∏_{i ∈ I} K_i(x_i - y_i) - ∑_{|ℓ| < N} (1/ℓ!) ∏_{i ∈ I} (x_i - x_⋆)^{ℓ_i} (D^{ℓ_i} K_i)(x_⋆ - y_i)| ≲ δ^N Δ^{-N} ∏_{i ∈ I} |y_i - x_i|^{-α_i}. The proof is a straightforward application of Taylor's theorem to the function x ↦ ∏_{i ∈ I} K_i(x_i) defined on ^I. For example, the version given in <cit.> shows that for every ℓ̃: I → ^d with |ℓ̃| = N, there exist measures _ℓ̃ on ^I with total variation (1/ℓ̃!) ∏_i |(x_i - x_⋆)^{ℓ̃_i}| ≲ δ^N and support in the ball of radius Kδ around (x_⋆, …, x_⋆) (for some K depending only on |I| and d) such that ∏_{i ∈ I} K_i(x_i - y_i) - ∑_{|ℓ| < N} (1/ℓ!) ∏_{i ∈ I} (x_i - x_⋆)^{ℓ_i} (D^{ℓ_i} K_i)(x_⋆ - y_i) = ∑_{|ℓ̃| = N} ∫ ∏_{i ∈ I} (D^{ℓ̃_i} K_i)(z_i - y_i) _ℓ̃(dz). If Δ > (K+1)δ, then the claim follows at once from the fact that |(D^{ℓ̃_i} K_i)(z_i - y_i)| ≲ |z_i - y_i|^{-α_i - |ℓ̃_i|} ≲ Δ^{-|ℓ̃_i|} |x_i - y_i|^{-α_i}. If Δ ≤ (K+1)δ on the other hand, each term on the left hand side of (<ref>) already satisfies the required bound individually. It now remains to note that each occurrence of (1 - _γ) in (<ref>) produces precisely one factor of the type considered in Lemma <ref>, with the set I consisting of the edges in (γ) adjacent to γ, δ = 2^{-(γ^↑)} and Δ = 2^{-(γ^)}. The additional factor δ^{N(γ)} Δ^{-N(γ)} produced in this way precisely corresponds to the additional term N(γ) (_{γ^↑}(u) - _{γ^}(u)) in our definition of η. The only potential problem that could arise is when some edges are involved in the renormalisation of more than one subgraph. The explicit formula (<ref>) however shows that this is not a problem. The proof of Theorem <ref> is complete.
§.§ Properties of the BPHZ valuation
In this section, we collect a few properties of the BPHZ valuation Π^K_. In order to formulate the main tool for this, we first introduce a “gluing operator” : _- → _- such that Γ is the connected vacuum diagram obtained by identifying all the marked vertices of Γ; for example, applied to the product of a marked triangle and a marked loop, it yields the diagram in which the two marked vertices are identified (the marked vertices being indicated in green in the original figures). It follows from the definition (<ref>) that the linear map Π_-^K satisfies the identity [e:contr] Π_-^K τ = Π_-^K τ for all τ ∈ _-. We claim that the same also holds for Π_-^K π, where π: _- → _- is the canonical projection.
One has Π_-^K π τ = Π_-^K π τ for all τ ∈ _-. By induction on the number of connected components, and since Π_-^K, , and π are all multiplicative, it suffices to show that, for every element τ of the form τ = γ_1 γ_2, where the γ_i are connected and non-empty, one has the identity Π_-^K π τ = Π_-^K π γ_1 · Π_-^K π γ_2 = Π_-^K (π γ_1 · π γ_2). In particular, one has Π_-^K τ = 0 for every τ with τ ≤ 0 of the form (γ_1 γ_2), as soon as one of the factors has strictly positive degree. We will use the fact that, as a consequence of (<ref>) combined with the definition of , one has for connected σ = (Γ, v_⋆, ) with σ ≤ 0 the identity [e:idenAhat] σ = - σ - ∑_{Γ̅ ⊂ Γ, Γ̅ ∉ {∅, Γ}} (π ⊗ 𝕀) _Γ̅ σ, where we made use of the operators _Γ̅ σ = ∑_{ℓ̅: ∂Γ̅ → ^d} ∑_{: → ^d} ((-1)^{|ℓ̅|}/ℓ̅!) (Γ̅, ⋆,  + πℓ̅) ⊗ (Γ, v_⋆,  - ) / (Γ̅, ℓ̅), and ⋆ denotes some arbitrary choice of distinguished vertex. (Here,  stands for “extract”.) Note that the nonvanishing terms in (<ref>) are always such that the degree of Γ̅ (not counting node-decorations) is negative. The proof of the lemma now goes by induction on the number of edges of τ = γ_1 γ_2. In the base case, each of the γ_i has one edge and there are two non-trivial cases. In the first case, γ_i ≤ 0 for both values of i. In this case, it follows from the above formula that, since v_⋆ is the vertex in τ at which both edges are connected and since γ_i = - γ_i, one has τ = - τ + 2 τ, so that the claim follows from (<ref>), combined with the fact that π γ_i = -γ_i. In the second case, one has γ_1 ≤ 0 and γ_2 > 0, but γ_1 + γ_2 ≤ 0, so that π τ = τ. In this case, the only subgraph of τ of negative degree is γ_1, so that τ = - τ + τ, thus yielding Π_-^K τ = 0 as required. We now write Γ for the graph associated to τ and Γ_i ⊂ Γ for the subgraphs associated to each of the factors γ_i. Writing _Γ for the set of all non-empty proper subgraphs of Γ, we then have a natural bijection _Γ = _{Γ_1} ⊔ {γ̅_1 ⊔ Γ_2 : γ̅_1 ∈ _{Γ_1}} ⊔ _{Γ_2} ⊔ {γ̅_2 ⊔ Γ_1 : γ̅_2 ∈ _{Γ_2}} ⊔ {γ̅_1 ⊔ γ̅_2 : γ̅_1 ∈ _{Γ_1}, γ̅_2 ∈ _{Γ_2}} ⊔ {Γ_1, Γ_2}. Take now an element of the form γ̅_1 ⊔ Γ_2 from this decomposition. As before, there are no edges in Γ adjacent to Γ_1 other than those incident to v_⋆. Furthermore, γ̅_1 ⊔ Γ_2 has strictly fewer edges than Γ, so we can apply our induction hypothesis, yielding Π_-^K (π ⊗ 𝕀) _{γ̅_1 ⊔ Γ_2} τ = Π_-^K (π _{γ_2} ⊗ 𝕀) _{γ̅_1} γ_1 = Π_-^K (π γ_2 · (π ⊗ 𝕀) _{γ̅_1} γ_1), where _{γ_2}: γ ↦ (γ · γ_2). In a similar way, we obtain the identities Π_-^K (π ⊗ 𝕀) _{γ̅_1} τ = Π_-^K (γ_2 · (π ⊗ 𝕀) _{γ̅_1} γ_1), Π_-^K (π ⊗ 𝕀) _{γ̅_1 ⊔ γ̅_2} τ = Π_-^K ((π ⊗ 𝕀) _{γ̅_1} γ_1 · (π ⊗ 𝕀) _{γ̅_2} γ_2), Π_-^K (π ⊗ 𝕀) _{γ_1} τ = Π_-^K (γ_2 · π γ_1), as well as the corresponding identities with 1 and 2 exchanged. Inserting these identities into (<ref>) (with the sum broken up according to (<ref>)), we obtain Π_-^K τ = - Π_-^K τ - ∑_{γ̅_1 ∈ _{Γ_1}} Π_-^K ((γ_2 + π γ_2) · (π ⊗ 𝕀) _{γ̅_1} γ_1) - ∑_{γ̅_2 ∈ _{Γ_2}} Π_-^K ((γ_1 + π γ_1) · (π ⊗ 𝕀) _{γ̅_2} γ_2) - ∑_{γ̅_1 ∈ _{Γ_1}} ∑_{γ̅_2 ∈ _{Γ_2}} Π_-^K ((π ⊗ 𝕀) _{γ̅_1} γ_1 · (π ⊗ 𝕀) _{γ̅_2} γ_2) - Π_-^K (γ_2 · π γ_1) - Π_-^K (γ_1 · π γ_2). At this stage, we differentiate again between the case in which γ_i ≤ 0 for both i and the case in which one of the two has positive degree. (The case in which both have positive degree is again trivial.) In the former case, π γ_i = γ_i and one has γ_2 + π γ_2 = - ∑_{γ̅_2 ∈ _{Γ_2}} (π ⊗ 𝕀) _{γ̅_2} γ_2. In particular, the second and third terms are the same as the fourth, but with opposite sign, and one has Π_-^K τ = ∑_{γ̅_1 ∈ _{Γ_1}} ∑_{γ̅_2 ∈ _{Γ_2}} Π_-^K ((π ⊗ 𝕀) _{γ̅_1} γ_1 · (π ⊗ 𝕀) _{γ̅_2} γ_2) - Π_-^K (γ_1 · γ_2) - Π_-^K (γ_2 · γ_1) - Π_-^K (γ_1 · γ_2) = Π_-^K ((γ_1 + γ_1) · (γ_2 + γ_2)) - Π_-^K (γ_1 · γ_2) - Π_-^K (γ_2 · γ_1) - Π_-^K (γ_1 · γ_2) = Π_-^K (γ_1 · γ_2), as claimed. Consider now the case γ_1 > 0.
Then, the two terms containing πγ_1 vanish and we obtain similarly Π_-^K τ = - Π_-^K τ- ∑_γ̅_1 ∈_Γ_1Π_-^K(γ_1 ·(π⊗𝕀) _γ̅_2γ_2)- Π_-^K(γ_1·γ_2)=- Π_-^K (γ_1·γ_2) + Π_-^K (γ_1 · (γ_2+γ_2)) - Π_-^K(γ_1·γ_2) = 0 ,as claimed, thus concluding the proof. As a consequence of this result, we have the following. Recall that_k^(c)is the space oftranslation invariant compactly supported (modulo translations) distributions inkvariables. Givenx ∈^k,y ∈^ℓ, we also writex⊔y = (x_1,…,x_k,y_1,…,y_ℓ) ∈^k+ℓ. For anyk,ℓ≥1, wethen have a bilinear “convolution operator”⋆_k^(c) ×_ℓ^(c) →_k+ℓ-2^(c)obtained by setting (η⋆ζ)(x ⊔ y) = ∫_η(x ⊔ z)ζ(z ⊔ y) dz , x ∈^k-1, y ∈^ℓ-1 ,wheneverηandζare represented by continuous functions. It is straightforward to seethat this extends continuously to all of_k^(c) ×_ℓ^(c), and that it coincides with the usual convolution in the special casek = ℓ= 2.Similarly, we have a convolution operator⋆_k ×_ℓ→_k+ℓ-2obtained in the following way. LetΓ∈_kandΓ̅∈_ℓbe Feynman diagrams such that the label of thekth leg ofΓand the first leg ofΓ̅are both given byδ.We then defineΓ⋆Γ̅∈_k+ℓ-2to be the Feynman diagram withk+ℓ-2legs obtained by removing thekth leg ofΓas well as the first leg ofΓ̅, and identifying the two vertices these legs were connected to. (We also need to relabel the legs ofΓ̅accordingly.) This operation extends to all of_k ×_ℓby noting that given a Feynman diagramΓ∈_k, there always existsΓ_n ∈_kwithΓ_n =Γin_kwhich is a linear combination of diagrams with labelδon thenth leg: if thenth leg ofΓhas labelδ^(m)withm ≠0,one obtainsΓ_nby performing|m|“integrations by parts” using (<ref>). We then define in generalΓ⋆Γ̅by settingΓ⋆Γ̅Γ_k ⋆Γ̅_0and we can check that this is indeed well-defined in_k+ℓ-2. We then have the following consequence of Lemma <ref>. The BPHZ valuation satisfies Π_(Γ⋆Γ̅) = Π_Γ⋆Π_Γ̅.Write ^⋆⊗→ for the convolution operator introduced above and note that the canonical valuation Π (we suppress the dependence on K) does satisfy the property of the statement. It therefore suffices to show that one has the identity [e:wanted] (Π_-⊗𝕀)Δ^⋆ = ^⋆((Π_-⊗𝕀)Δ⊗ (Π_-⊗𝕀)Δ)between maps ⊗→.Suppose that Γ∈_k and Γ̅∈_ℓ, write v for the vertex of Γ adjacent to the kth leg, and let v̅ be thevertex of Γ̅ adjacent to its first leg. Fix furthermore an arbitrary map σ (_⋆⊔_⋆)/{v,v̅}→ which is injective and such that σ(v) = 0. Since internal edges of Γ⋆Γ̅ are in bijection with the disjoint union of the internal edges of Γ and those of Γ̅, we have an obvious bijection between subgraphs γ of Γ⋆Γ̅and pairs (γ_1,γ_2) of subgraphs of Γ and Γ̅. We also have a natural choice of distinguished vertex for each connected subgraph ofΓ, Γ̅ or Γ⋆Γ̅ by choosing the vertex with the lowest value of σ.If we then write Δ̂τ∈_- ⊗ for the right hand side of (<ref>) with this choice of distinguished vertices, then we see that (⊗𝕀)Δ̂(Γ⋆Γ̅) = (⊗^⋆)(𝕀⊗τ⊗𝕀) (Δ̂Γ⊗Δ̂Γ̅) ,where τ_- ⊗→⊗_- is the map that exchangesthe two factors. Applying Π_-π to both sides and making use ofLemma <ref>, the required identity (<ref>) follows at once. 
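For distributions represented by continuous functions, the operator ⋆ is concrete enough to test numerically. The sketch below (entirely our own illustration, in d = 1, with Gaussian profiles chosen only so that the answer is known in closed form) checks the claim that for k = ℓ = 2, where translation invariance forces η(x_1,x_2) = f(x_2-x_1) and ζ(x_1,x_2) = g(x_2-x_1), the operation reduces to the usual convolution (f∗g)(y-x):

import numpy as np

# Translation-invariant bivariate distributions eta(x1, x2) = f(x2 - x1) and
# zeta(x1, x2) = g(x2 - x1).  For k = l = 2 the text's operator gives
# (eta * zeta)(x, y) = int f(z - x) g(y - z) dz = (f conv g)(y - x).

def f(u):
    return np.exp(-u**2)

def g(u):
    return np.exp(-2.0 * u**2)

z = np.linspace(-20.0, 20.0, 4001)
dz = z[1] - z[0]

def star(x, y):
    """(eta * zeta)(x ⊔ y) = ∫ eta(x ⊔ z) zeta(z ⊔ y) dz, on a grid."""
    return np.sum(f(z - x) * g(y - z)) * dz

# exact Gaussian convolution: (f conv g)(v) = sqrt(pi/3) exp(-2 v^2 / 3)
for x, y in [(0.0, 1.0), (-0.5, 2.0)]:
    v = y - x
    exact = np.sqrt(np.pi / 3.0) * np.exp(-2.0 * v**2 / 3.0)
    print(star(x, y), exact)   # agree to grid accuracy

The same discretisation makes the identity Π_(Γ⋆Γ̅) = Π_Γ ⋆ Π_Γ̅ testable on explicit diagrams, at least before renormalisation enters.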
[Figure: a generalised self-loop. The subgraph γ (small blob, top right) is attached to the rest of the diagram Γ_0 (large blob) through a single boundary vertex; that vertex is drawn in green and legs are drawn in red. Caption: Generalised self-loop.]
Consider the situation of a Feynman diagram Γ containing a vertex v and a subgraph γ which is a “generalised self-loop at v” in the sense that
*The vertex v is the only vertex of γ that is adjacent to any edge not in γ.
*No leg of Γ is adjacent to any vertex of γ, except possibly for v.
We then obtain a new diagram Γ_0 by collapsing all of γ onto the vertex v, as illustrated in Figure <ref>, where the vertex v is indicated in green and legs are drawn in red. As a consequence of Proposition <ref>, we conclude that in such a situation there exists a constant c_γ such that Π_Γ = c_γ Π_Γ_0, and that furthermore c_γ = 0 as soon as γ ≤ 0 as a consequence of Proposition <ref>. One particularly important special case is that of actual self-loops, where γ consists of a single edge connecting v to itself, thus showing that Π_Γ = 0 for every Γ containing self-loops, since the degree of a self-loop of a given type equals the degree of that type, which is always negative.
Finally, it would also appear natural to restrict the sums in (<ref>) and (<ref>) to subgraphs Γ̅ that are c-full in Γ (in the sense that each connected component of Γ̅ is a full subgraph of Γ), especially in view of the proof of the BPHZ theorem where we saw that the “dangerous” connected subgraphs are always the full ones. We can then perform the exact same steps as before, including the construction of a corresponding twisted antipode and the verification of the forest formula. Writing _Γ^- for the subset of _Γ^- consisting of forests such that each γ ∈ is a full subgraph of its parent (γ) (as usual with the convention that the parent of the maximal elements is Γ itself), it is therefore natural in view of (<ref>) to define a valuation
[e:forestFormulaFull] Π_^Γ = (Π_-⊗Π)∑_∈_Γ^- (-1)^||_Γ ,
where Π and Π_- are the canonical valuations associated to some K ∈^-_∞. It turns out that, maybe not so surprisingly in view of Proposition <ref>, this actually yields the exact same valuation:
One has Π_^ = Π_.
In order to show that (Π_-⊗Π)∑_∈_Γ^-∖_Γ^- (-1)^||_Γ = 0, we will partition _Γ^-∖_Γ^- into sets such that the above sum vanishes when restricted to any of the sets in the partition. In order to formulate our construction, given γ ∈_Γ^-, we write γ^ ∈_Γ^- for the “closure” of γ in Γ, i.e. the full subgraph of Γ with the same vertex set as γ. For ∈_Γ^-∖_Γ^-, we then have a unique decomposition = ^∪^p such that each γ ∈^ is full in Γ, no element of ^ is contained in an element of ^p, and no root of ^p is full in Γ. Write ^p_ for the set of roots of ^p and set ^p = {γ^ : γ∈^p_}. In general, one may have ^∩^p ≠ ∅, so we also set ^_∘ = ^∖^p. If we write ↦ (^p,^_∘), then we see that the preimage of (^p,^_∘) under consists of all forests of the form ^p ∪^_∘∪, where is an arbitrary subset of ^p.
Furthermore, _Γ^-∖_Γ^- consists precisely of those forestssuch that ^p ≠∅.Since ∑_⊂^p (-1)^|| = 0, it thus remains to show thatthe quantity [e:quantityB] (Π_-⊗Π) _^p ∪^_∘∪Γ is independent of ⊂^p.To see that this is the case, consider the space _Γ and the operators _ as in the proof of the BPHZ theorem and denote by Π̂_Γ→ the composition of Π→ with the natural injection _Γ↪. One then has for every forestthe identity(Π_-⊗Π) _Γ = Π̂_Γ ,_∏_γ∈_γ .(As already pointed out before, the order of the operations does not matter here.) Let now γ∈_Γ^- and considerthe elements _γΓ and _γ_γ^Γ. It follows from the definition of the operators _γ thatall the terms appearing in both expressions consist of the same graph where edges in Γ∖γ^adjacent to γ^ are reconnected to the distinguished vertex v_⋆ of γ and the edges in γ^ that are not in γ are turned into self-loops for v_⋆. Regarding the edge and vertex-labels ℓ andgenerated by these operations, a straightforward application of the Chu-Vandermonde theorem shows that they yield the exact same terms in both cases. The only difference is that the function $̧ is equal to1onγin the first case, while it equals2onγand1on edges ofγ^that are not inγin the second case. This however would only make a difference if we were to compose this with an operator of the type_γ̅for someγ̅withγ⊂γ̅⊂γ^. In our case however, we only use this in order to compare_^p ∪^_∘∪to_^p ∪^_∘, so thatwe consider the situationγ∈^p_. Since these graphs are all vertex-disjoint,it follows that(∏_γ∈^p__γ)Γand(∏_γ∈^p_∪_γ)Γonly differ by the value of$̧in the way described above.Our construction of the sets _∘^ and ^p then guarantees that this discrepancy is irrelevant when further applying _γ̅ for γ̅∈_∘^∪ (^p ∖^p_), so that (<ref>) is indeed independent ofas claimed. § LARGE-SCALE BEHAVIOUR We now consider the case of kernelsK_that don't have compact support. In order to encode their behaviour at infinity, we assign to each label∈a second degree_∞→_- ∪{-∞}with_∞δ^(k) = -∞and satisfying this time the consistency condition_∞^(k) = _∞.[It would have looked more natural to impose the stronger condition _∞^(k) = _∞ - |k| as before. One may further think that in this case one would be able to extend Theorem <ref> to all diagrams Γ, not just those in _+. This is wrong in general, although we expect it to be true after performing a suitable form of positive renormalisation as in <cit.>. This is not performed here, and as a consequence we are unable to take advantage of the additional large-scale cancellations that the stronger condition _∞^(k) = _∞ - |k| would offer. ] We furthermore assume that we are given a collection of smooth kernelsR_^d →for∈_⋆satisfying the bounds [e:boundInfinity] |D^k R_(x)| ≲ (2+|x|)^_∞ ,for all multiindicesk, uniformly over allx ∈^d, and such that [e:compatR] R_^(k) = D^kR_ .Similarly to before, we extend this toby using the conventionR_δ^(m) ≡0and we write^+_∞for the set of all smooth compactly supported kernelassignments↦R_, as well as^+_0for its closure under the system of seminorms defined by (<ref>).Consider then the formal expression (<ref>), but with each instance ofK_replaced byG_= K_+ R_.The aim of this section is to exhibit a sufficient condition onΓwhichguarantees that this expression can also be renormalised, using the same procedure asin the previous sections. The conditions we require in Theorem <ref> below can be viewed as a large-scale analogue to the conditions of Weinberg's theorem. 
They are required because, unlike in <cit.>, we do not perform any “positive renormalisation” in the present article. To formulate our main result, we introduce the following construction. Given a Feynman diagram Γ with at least one edge, consider a partition _Γ of its inner vertex set, i.e. elements of _Γ are non-empty subsets of _⋆ and ⋃_Γ = _⋆. We always consider the case where the partition _Γ consists of at least two subsets, in other words |_Γ| ≥ 2. Given such a partition, we then set
_∞_Γ := ∑_e ∈(_Γ) (e) + d(|_Γ|-1) ,
where (_Γ) consists of all internal edges e ∈_⋆ such that both ends e_+ and e_- are contained in different elements of _Γ. Note the strong similarity to (<ref>), which is of course not a coincidence. We will call a partition _Γ “tight” if there exists one single element A ∈_Γ containing all of the vertices v_i,⋆ ∈_⋆ that are connected to legs of Γ.
Given K and R in _∞^- and _∞^+ respectively, we furthermore define a valuation Π^K,R by setting as in (<ref>)
[e:PiKR] (Π^K,RΓ)(ϕ) = ∫_^_⋆ ∏_e ∈_⋆ G_(e)(x_e_+ - x_e_-) (D_1^ℓ_1⋯ D_k^ℓ_k ϕ)(x_v_1,…,x_v_k) dx ,
where we used again the notation G_ = K_ + R_. We then have the following result, which is the analogue in this context of Proposition <ref>.
Let Γ be such that every tight partition _Γ of its inner vertices satisfies _∞_Γ < 0. Then, the map (K,R) ↦ Π^K,RΓ extends continuously to all of (K,R) ∈_∞^-×_0^+.
This is a corollary of Theorem <ref> below: given (<ref>) and given that we restrict ourselves to K ∈_∞^-, it suffices to note that Π^K,R = Π^0,K+R.
The reason why it is natural to restrict oneself to tight partitions can best be seen with the following very simple example. Consider the case Γ = [figure: a chain with inner vertices v_1, v_2, v_3, internal edges of types _1 (from v_1 to v_2) and _2 (from v_2 to v_3), and one leg of label 0 attached at each end]. Writing G_i = K__i + R__i and identifying functions with distributions as usual, one then has (Π^K,RΓ)(x,y) = (G_1 ⋆ G_2)(y-x). If the G_i are smooth functions, then this is of course well-defined as soon as their combined decay at infinity is integrable, which naturally leads to the condition _∞_1 + _∞_2 < -d, which corresponds indeed to the condition _∞_Γ < 0 for _Γ = {{v_1,v_3},{v_2}}, the only tight partition of the inner vertices of Γ. Considering instead all partitions would lead to the condition _∞_i < -d for i = 1,2, which is much stronger than necessary.
Note now the following two facts.
*The condition of Proposition <ref> is compatible with the definition of the space in the sense that if it is satisfied for one of the summands in the left hand side of (<ref>), then it is also satisfied for all the others, as an immediate consequence of the fact that _∞^(k) = _∞. In particular, we have a well-defined subspace _+ ⊂ on which the condition of Proposition <ref> holds and therefore Π^K,RΓ is well-defined for (K,R) ∈_∞^-×_0^+.
*If Γ satisfies the assumption of Proposition <ref>, then it is also satisfied for all of the Feynman diagrams appearing in the second factor of the summands of ΔΓ, so that _+ is invariant under the action of _- on .
This suggests that if we define a BPHZ renormalised valuation on _+ by
[e:fullBPHZ] Π_^K,R = (Π^K_- ⊗Π^K,R)Δ ,
then it should be possible to extend it to kernel assignments exhibiting self-similar behaviour both at the origin and at infinity.
This is indeed the case, as demonstrated by the main theorem of this section.The map (K,R) ↦Π_^K,RΓ extends continuously to(K,R) ∈_0^-×_0^+ for all Γ∈_+.Consider the spacedefined as the vector space generated by the set ofpairs (Γ, ), where Γ is a Feynman diagram as before and⊂_⋆ is a subset of its internal edges.We furthermore define a linear map→ by Γ = ∑_⊂_⋆ (Γ, ), and we define a valuation onby setting (Π̃^K,R (Γ, ))(ϕ)= ∫_^_⋆∏_e ∈_⋆∖ K_(e)(x_e_+ - x_e_-) ∏_e ∈ R_(e)(x_e_+ - x_e_-) ×(D_1^ℓ_1⋯ D_k^ℓ_kϕ)(x_v_1,…,x_v_k) dx , so that Π^K,R = Π̃^K,R. Similarly to before, we define ̣̃ by the analogue of (<ref>) and we set = / ̣̃, noting that Π̃^K,R is well-defined on .We also define a map Δ̃→_- ⊗ in the same way as (<ref>), but with the sum restricted to subgraphs γ whose edge sets are subsets of _⋆∖. (This condition guarantees that can naturally be identified with a subset of the quotient graph Γ / γ.) With this definition, one has the identity Δ̃ = (𝕀⊗)Δ ,as a consequence of the fact that the set of pairs (,γ) such that⊂_⋆ and γ is a subgraph of Γ containing only edges in _⋆∖ is the same as the set of pairs such thatγ is an arbitrary subgraph of Γ andis a subset of the edges of Γ / γ. This in turn implies that one has the identity [e:BPHZglobal] Π_^K,RΓ = (Π^K_- ⊗Π^K,R)ΔΓ = (Π^K_- ⊗Π̃^K,R)Δ̃Γ .Let now _+ be the subspace ofconsisting of pairs (Γ, )such that _∞ < 0 for every tight partitionwith () ⊂. Again, this defines a subspace _+ ⊂ invariant under the action of _- by Δ̃ andmaps _+ (defined as in the statement of the theorem)into _+, so that it remains to show that (Π^K_- ⊗Π̃^K,R)Δ̃ extends to kernels (K,R) ∈_0^-×_0^+ on all of _+.For this, we now fix τ = (Γ, ) ∈_+ and we remark that for R ∈_∞^+ we can interpret the factor ∏_e ∈ R_(e)(x_e_+ - x_e_-) in (<ref>) as being part of the test function. More precisely, we set ϕ⊗_τ R = ϕ(x_1,…,x_k)∏_e ∈ R_(e)(x_[e]_+ - x_[e]_-) ,where [·]_±→{k+1,…,k+2||} is an arbitrary but fixed numbering of the half-edges of . We then have (Π̃^K,Rτ)(ϕ) = (Π^Kτ)(ϕ⊗_τ R), where τ∈_+ is the Feynman diagram obtained by cutting each of the edges e ∈ open, replacing them by two legs with label δ and numbers given by [e]_±.It is immediate from the definitions and the condition (<ref>) that this is compatible with theactions of Δ̃ and Δ in the sense that one has ((g ⊗Π̃^K,R) Δ̃τ)(ϕ)= ((g ⊗Π^K) Δτ)(ϕ⊗_τ R) ,∀ g ∈_- .Inserting this into (<ref>), we conclude that (Π_^K,RΓ)(ϕ) = ∑_⊂_⋆(Π_^K (Γ,))(ϕ⊗_(Γ,) R) ,so that it remains to bound separately each of the terms in this sum. For this, we write _d = ^d for the discrete analogue of our state space = ^d, we set N = k+2||, and we write 1 = ∑_x ∈_d^NΨ_x for a partition of unity with the property thatΨ_x(y) = Ψ_0(y-x) and that Ψ_0 is supported in a cube of sidelength 2 centredat the origin, so that it remains to show that∑_x ∈_d^NS_x , S_x (Π_^K (Γ,))((ϕ⊗_(Γ,) R)Ψ_x) ,is absolutely summable. It then follows from Theorem <ref> that the summand in the above expression is bounded by |S_x| ≲∏_e ∈ (1 + |x_[e]_+ - x_[e]_-|)^_∞(e) ,for all (K,R) ∈_0^-×_0^+. This expression is not summable in general, so we need to exploit the fact that there are many terms that vanish. For instance, since the test function ϕ is compactly supported, there exists C such that S_x = 0 as soon as |x_i| ≥ C for some i ≤ k. 
Similarly, since the kernels K_ are compactly supported, there exists C such that S_x = 0 as soon as there are two legs [i] and [j] of _τ attached to the same connected component and such that |x_i-x_j| ≥ C. Let now be the finest tight partition for Γ with () ⊂ and let L ∈ denote the (unique) set which contains all the vertices adjacent to the legs of Γ. We conclude from the above consideration that one has
[e:wantedBoundLS] ∑_x ∈_d^N|S_x| ≲ ∑_y ∈_d^ _{y_L = 0} ∏_e ∈() (1 + |y_[e_+] - y_[e_-]|)^_∞(e) ,
where [v] ∈ denotes the element of containing the vertex v. At this stage, the proof is virtually identical to that of Weinberg's theorem, with the difference that we need to control the large-scale behaviour instead of the small-scale behaviour. We define Hepp sectors D_ ⊂_d^ for  = (T,) in exactly the same way as before, the difference being that this time no two elements can be at distance less than 1, so that we can restrict ourselves to scale assignments with _v ≤ 0 for every inner vertex of T. Also, in view of (<ref>), the leaves of T are this time given by elements of . In the same way as before, the number of elements of D_ is of the order of ∏_u ∈ T 2^-d _u, so that one has again a bound of the type
[e:goodbound] ∑_x ∈_d^N|S_x| ≲ ∑_ ∏_u ∈ T 2^-_uη_u , η_u = d + ∑_e ∈() 1_{e^↑ = u} _∞(e) ,
where e^↑ denotes the common ancestor in T of the two elements of containing the two endpoints of e. Our assumption on Γ now implies that for every initial segment T_i of T [i.e. T_i is such that if u ∈ T_i and v ≤ u, then v ∈ T_i], one has ∑_u ∈ T_iη_u < 0. This is because one has ∑_u ∈ T_iη_u = _∞_T_i, where _T_i is the coarsest coarsening of such that for every edge e ∈(_T_i) one has e^↑ ∉ T_i.
We claim that any such η satisfies
S(T,η) := ∑_ ∏_u ∈ T 2^-_uη_u < ∞ ,
where again the sum is restricted to negative scale assignments that are monotone on T. This can be shown by induction over the number of leaves of T. If T has only two leaves, then this is a converging geometric series and the claim is trivial. Let now T be a tree with m ≥ 3 leaves and assume that the claim holds for all trees with m-1 leaves. Pick an inner vertex u of T which has exactly two descendants (such a vertex always exists since T is binary) and write T̃ for the new tree obtained from T by deleting u and coalescing its two descendants into one single leaf. Write furthermore u^↑ for the parent of u in T, which exists since T has at least three leaves. The following example illustrates this construction:
[Figure: T = a binary tree in which the inner vertex u carries two leaves and hangs below its parent u^↑  ⇒  T̃ = the same tree with u deleted and its two leaves coalesced into a single leaf.]
Since the condition on η is open and since S(T,η) increases when increasing η_u, we can assume without loss of generality that η_u ≠ 0. There are then two cases:
*If η_u < 0, we have ∑__u > _u^↑ 2^-_uη_u ≈ 1, so that
[e:induction] S(T,η) ≈ S(T̃,η̃) ,
where η̃ is just the restriction of η to the tree T̃. Since initial segments of T̃ are also initial segments of T and since η̃ = η on them, we can make use of the induction hypothesis to conclude.
*If η_u > 0, we have ∑__u > _u^↑ 2^-_uη_u ≈ 2^-_u^↑η_u, so that (<ref>) holds again, but this time η̃_u^↑ = η_u^↑ + η_u and η̃_v = η_v otherwise. We conclude in the same way as before since the only “dangerous” case is that of initial segments T̃_i containing u^↑, but these are in bijection with the initial segments T_i = T̃_i ∪{u} of T such that ∑_v ∈T̃_iη̃_v = ∑_v ∈ T_iη_v, so that the induction hypothesis still holds.
Applying this to (<ref>) completes the proof of the theorem.
While the definition of Π_^K,R is rather canonical, given kernel assignments K and R, the decomposition G = K + R is not. Using the fact that _- is a group, it is however not difficult to see that, for any two choices (K,R), (K̅, R̅) ∈_0^-×_0^+ such that K_ + R_ = K̅_ + R̅_ for all ∈_⋆, there exists an element g ∈_- such that Π_^K̅,R̅ = (g ⊗Π_^K,R)Δ.
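The non-uniqueness of the split G = K + R is worth seeing once in the concrete: any cutoff separating a compactly supported singular part from a smooth decaying remainder is admissible, and by the remark above the resulting BPHZ valuations differ only by the action of some g ∈ _-. A minimal sketch (the kernel, the exponents and the cutoff are all our own illustrative choices):

import numpy as np

# A kernel with the two behaviours discussed in this section:
# |x|^(-alpha) at the origin and power-law decay at infinity.
alpha, beta = 0.5, 3.0

def G(x):
    r = np.abs(x)
    return r**(-alpha) * (1.0 + r)**(alpha - beta)

def bump(x, L):
    """C^1 cutoff: 1 for |x| <= L, 0 for |x| >= 2L (a genuinely smooth
    bump would be used in practice)."""
    t = np.clip((np.abs(x) - L) / L, 0.0, 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

x = np.linspace(0.01, 20.0, 2000)
for L in (1.0, 5.0):                 # two equally admissible decompositions
    K = G(x) * bump(x, L)            # compactly supported, keeps the singularity
    R = G(x) - K                     # vanishes near 0, decays like G at infinity
    print(L, K[-1], R[0])            # K dies at large |x|; R dies at the origin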
http://arxiv.org/abs/1704.08634v2
{ "authors": [ "Martin Hairer" ], "categories": [ "math-ph", "math.CA", "math.MP" ], "primary_category": "math-ph", "published": "20170427155642", "title": "An analyst's take on the BPHZ theorem" }
Our previous paper outlined the general aspects of the theory of radio light curve and polarization formation for pulsars. We predicted a one-to-one correspondence between the tilt of the linear polarization position angle and the sign of the circular polarization. However, some of the radio pulsars indicate a clear deviation from that correlation. In this paper we apply the theory of radio wave propagation in the pulsar magnetosphere to the analysis of individual effects leading to these deviations. We show that within our theory the circular polarization of a given mode can switch its sign, without the need to introduce a new radiation mode or other effects. Moreover, we show that the generation of different emission modes at different altitudes can explain pulsars that presumably have the X-O-X light-curve pattern, which differs from what we predict. General properties of radio emission within our propagation theory are also discussed. In particular, we calculate the intensity patterns for different radiation altitudes and present light curves for different observer viewing angles. In this context we also study the light curves and polarization profiles for pulsars with interpulses. Further, we explain the characteristic width of the position angle curves by introducing the concept of a wide emitting region. Another important feature of radio polarization profiles is the shift of the position angle from the center, which in some cases demonstrates a weak dependence on the observation frequency. Here we demonstrate that propagation effects do not necessarily imply a significant frequency-dependent change of the position angle curve.
polarization – stars: neutron – pulsars: general.
§ INTRODUCTION
During almost fifty years of study, from the very beginning in 1967 when radio pulsars were first observed <cit.>, major progress was achieved in understanding the structure of neutron star magnetospheres and the origin of their activity <cit.>. However, some key questions, including the mechanism of the coherent radio emission generation, still remain unexplained. The mass M, the period P, and the braking factor of the pulsar Ṗ can be determined directly with good accuracy, but, on the other hand, such an important parameter as the inclination angle α between the magnetic and rotational axes can be found only using the so-called rotating vector model (RVM) of the position angle swing along the mean radio profile <cit.>. However, this measurement is usually unreliable, as the main assumption of the RVM, namely that the polarization of radio emission is formed in the generation region at distances ∼ 10-30 R from the stellar surface (with R here and further being the radius of the neutron star), is not justified. This is mainly because radio emission interacts with plasma created in magnetospheric discharges, and the radiation polarization characteristics start to deviate from the simple prediction of the RVM. In order to make correct theoretical predictions of the radio polarization, all propagation effects, e.g.
magnetospheric plasma birefringence <cit.>, cyclotron absorption  <cit.>, and limiting polarization  <cit.>, should be accurately taken into account.This is the second paper dedicated to the study of polarization characteristics based on the quantitative theory of the radio waves propagation in the pulsar magnetosphere. In Paper I <cit.> the theoretical aspects of the polarization formation based on <cit.> approach were studied, and the numerical simulation method was proposed. It allowed us to describe the general properties of mean profiles such as the position angle of the linear polarization p.a. and the circular polarization for the realistic structure of the magnetic field in the pulsar magnetosphere. We confirmed the main theoretical prediction found by <cit.>, i.e., the correlation of signs of the circular polarization, V, and derivative of the position angle with respect to pulsar phase, dp.a./ dϕ for both emission modes. In most cases it gave us the possibility to recognize the orthogonal mode, ordinary or extraordinary, playing the main role in the formation of the mean profile.On the other hand, there are some pulsars for which observations were in disagreement with our predictions. The detailed statistical analysis of polarization characteristics that support the predicted O-X-O light-curve model will be presented in Paper III (Jaroenjittichai et al. in preparation), while in a current paper we focus on more detailed analysis of the wave propagation in the pulsar magnetosphere. In Sect. <ref> and <ref> we briefly discuss the propagation model and other theoretical assumptions used in our simulations. We compare our model with the broadly used geometric models, namely, the hollow cone model and the rotating vector model with the aberration/retardation effects, to emphasize the features that are different from that simplified model.In consequent sections we discuss the results obtained using our technique. Sect. <ref> is dedicated to the above mentioned deviations from our predictions. In Sect. <ref> we explain the switch of the circular polarization sign of a single mode, that was not predicted previously, but is clearly visible for some pulsars. The predicted O-X-O mode sequence is seen to be broken in some of the pulsars' profiles. We show in Sect. <ref> that if the two modes are being generated at different heights, one can indeed explain this anomalous behavior. In Sect. <ref> we briefly discuss the possibility to explain the central hump in the position angle curves of some of the pulsars. It is demonstrated, that there is no need to introduce a complicated altitude profile of the radiation.In Sect. <ref> we focus on some general properties of the formation of light curves and polarization profiles, namely for ordinary pulsars (Sect. <ref>) and for the pulsars with interpulses (Sect. <ref>). For both cases we show the intensity pattern in the picture plane and the Stokes parameter map at a given altitude and explain how different profiles can be formed in this context. In Sect. <ref> we explain the width of the position angle curves. Finally, we discuss the shift of the position angle on different frequencies with the propagation effects taken into account in Sect. <ref>.§ PROPAGATION THEORYIn this section we remember general assumptions about the radiation generation and propagation effects that we use for our calculations. We also discuss some important results obtained in Paper I. 
§.§ Hollow cone model For a long time it was known, that there are two orthogonal modes propagating in pulsars' magnetosphere: the extraordinary X-mode and the ordinary O-mode <cit.>. While the X-mode propagates along the straight line without any refraction, the O-mode is being deflected from the magnetic axis <cit.>. This led to the idea of the modification of the hollow-cone model for the directivity pattern generation, where there is an inner cone — straightly propagating X-mode, and the outer one is the O-mode that is deflected from the magnetic axis <cit.>. Radiation in the central region of the cone is suppressed due to large curvature radius of the magnetic field lines, as well as outside the edges of the polar cap region, where there are no open field lines. Various pulsar profiles correspond to different intersections of the line of sight and the directivity pattern from the hollow-cone model. Note, that in Sect. <ref> we show, that in fact the directivity pattern can be more complex depending on various plasma parameters. Remember that the 'hollow cone' model assumes the magnetic field to be dipolar with the radiation propagating along a straight line. The polarization characteristics themselves are formed exactly in the same region, where the radiation is generated, i.e., deep near the stellar surface. This assumptions allow us to analytically calculate the so-called Rotating Vector Model (RVM) curve for the p.a. plot along the rotation phase ϕ <cit.>,p.a.=arctan(sinαsinϕ/sinαcosζcosϕ-sinζcosα),where ζ is the angle between the rotation axis and the line of sight. Equation (<ref>) can be obtained considering the linear polarization that rigidly follows the magnetic field direction. Note, that in this paper, dissimilar to Paper I, the impact angle β is chosen as β = ζ - α, i.e., negative β corresponds to the line of sight closer to the rotation axis, than the magnetic moment. On the other hand, similar to Paper I, the sign of the p.a. is conventionally (astronomically) chosen, see, e.g., <cit.>. In this case the Θ_1 variable from <cit.> equations (see below) corresponds exactly to the position angle of the linear polarization. §.§ Propagation effectsWhile neglecting the propagation effects can work for some pulsars, most of them, however, appear to poorly correspond to this simplified approach. First of all, the p.a. curves of some profiles appear to be shifted from the center of the profile (clearly breaking the RVM-curve) and some of them, e.g., PSR J1022+1001, expose anomalous humps in the center <cit.>. This problem is usually solved by considering the so-called aberration/retardation effects (later A/R) and by the assumption that the radiation is generated at some particular altitude <cit.>. The A/R effect allows to determine the shift of the p.a. curve as Δϕ≈ 4r_ emΩ/c and hence deduce the radiation origin height r_ em. The general agreement from this simplified technique, which is in a good consistency with geometric conclusions, is that the radiation originates in the deep regions near (10-100) stellar radius. However, it is clear, that to address to this problem self-consistently, one must take into accountpropagation effects in the neutron star magnetosphere. On the other hand, some profiles expose a nontrivial circular polarization and even a polarization sign reversal, not only in the core emission, but in the conal part as well (see ). 
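Since everything below is measured against this geometric baseline, a short script evaluating the RVM expression (<ref>) is useful to have at hand. The sketch below (our own; the angles are arbitrary examples) reproduces the classic S-shaped swing and checks numerically that its steepest slope, reached at ϕ = 0, equals sinα/|sinβ|:

import numpy as np

deg = np.pi / 180.0

def pa_rvm(phi, alpha, beta):
    """Position angle of the RVM expression above; beta = zeta - alpha
    as in this paper's convention."""
    zeta = alpha + beta
    num = np.sin(alpha) * np.sin(phi)
    den = (np.sin(alpha) * np.cos(zeta) * np.cos(phi)
           - np.sin(zeta) * np.cos(alpha))
    return np.arctan2(num, den)        # p.a. is only defined modulo pi

alpha, beta = 60.0 * deg, -5.0 * deg   # arbitrary illustrative geometry
phi = np.linspace(-30.0, 30.0, 601) * deg
pa = np.unwrap(2.0 * pa_rvm(phi, alpha, beta)) / 2.0  # undo mod-pi jumps

slope = np.gradient(pa, phi)
print(np.abs(slope).max())                    # steepest swing, at phi = 0
print(np.sin(alpha) / abs(np.sin(beta)))      # = sin(alpha)/|sin(beta)|

Propagation effects enter as departures from this curve; in particular, the A/R effect quoted above moves the p.a. curve by Δϕ ≈ 4 r_em Ω/c relative to the profile center.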
The early papers <cit.> proposed a possible explanation of the circular polarization by assuming a wave propagation at a nonzero angle to the finite magnetic field line. However, for the radiation formation in deep regions, where the magnetic field is high enough, these explanations failed to work.A more accurate consideration of the circular polarization formation in the limiting polarization region in ultrarelativistic highly-magnetized magnetosphere by <cit.> provides a possible explanation, however circular polarization is not yet explained quantitatively without sticking to a particular radiation mechanism (see, e.g., ).As it was already stressed, the importance of the propagation effects was shown by <cit.>. First, the refraction of the O-mode takes place in the region r < r_O, wherer_O∼10^2R·λ_4^1/3γ_100^1/3B_12^1/3ν_GHz^-2/3P^-1/5.Here and below R is the stellar radius, λ is the multiplicity parameterλ = n_ e/n_ GJ,i.e., the electron-positron number density normalized to Goldreich-Julian one (λ_4 = λ/10^4), γ_100 is the characteristic Lorentz-factor of secondary plasma normalized by 10^2, B_12 is the polar cap magnetic field B_0 in 10^12 G, ν_GHz is the frequency in GHz and P is the period of rotation in seconds.On the other hand, as the number density n_ e quickly decreases far from the star surface, the ray transits from the region of the dense plasma where the linear polarization follows the external magnetic field, to the region of rarefied plasma where the external magnetic field cannot affect the polarization of a ray. As a result, the polarization freezes at some distance r_ esc (so-called limiting polarization, see ). For ordinary pulsars one can obtain <cit.>r_ esc∼ 10^3R·λ_4^2/5γ_100^-6/5B_12^2/5ν_ GHz^-2/5P^-1/5.As we see, for ordinary radio pulsars the escape region is located well inside the light cylinder R_ L = c/Ω≈ 10^4 R, but much higher than the radiation domain. Thus, one should consider the evolution of polarization characteristics from the generation region r_ em up to the altitude r = r_ esc at which the polarization freezes.In Paper I the numerical approach with the method of <cit.> equation was proposed that describes the evolution of polarization characteristics along the line of sight on complex angle Θ=Θ_1+iΘ_2, with Θ_1, in agreement with p.a. determination (<ref>), being the p.a. andwhere V/I is the relative level of circular polarization. For small Θ_2 ≪ 1 one can approximate the level of circular polarization as V/I∝d(β_B+δ)/dl/cos[2(p.a.-β_B-δ)],where the derivative is taken near the r_ esc. Here the angle β_B corresponds to orientation of the external magnetic field in the picture plane and the additional phase δ appears due to the external electric field resulting in electric drift motion of particles in the pulsar magnetosphere tanδ =-cosθU_y/c/sinθ - U_x/c.Here U_x and U_y are two components of the E×B drift velocity, and θ is the angle between wave vector k and external magnetic field B. It is the phase δ that is responsible for the aberration in this approach. Note, that if the propagation effects are neglected, we have the position angle following the direction of the magnetic field, i.e., p.a.=β_B for ordinary and p.a.≈β_B + π/2 for extraordinary mode. According to (<ref>), it gives opposite signs for Stokes parameter V for two orthogonal modes.§.§ Cyclotron absorptionCyclotron absorption takes place in the region where the resonance condition ω_ B = γω̃ holds. Here ω̃ is the shifted frequency, i.e., ω̃=ω-k·v. 
The distance from the stellar surface at which the resonance takes place can be found as <cit.>r_ abs≈ 1.8× 10^3R·ν_ GHz^-1/3γ_100^-1/3B_12^1/3θ_ abs^-2/3.Here θ_ abs is the angle between the propagation line and local magnetic field. As a result, the intensity of a ray at large distance I_∞ can be expressed through its initial intensity I_0 by clear connection I_∞=I_0 e^-τ with the optical depth τ=2ω/c_r_ em^> r_ absIm[n]dl.Here r_ em is the generation height and n is the refractive index, found by averaging the dielectric tensor over the plasma distribution function. For a given choice of the energy distribution function F(γ) we obtain <cit.>τ≈πω/c_r_ em^>r_ absω_ p^2/ω^2F(|ω_B|/ω̃)dl. It is also useful to write down the approximate simple expression <cit.>τ≈λ(1-cosθ_ abs)r_ abs/R_ L.As will be shown below, cyclotron absorption plays one of the main roles in formation of the mean profile of radio pulsars.To summarize, let us enumerate the main propagation effects in terms of distances from the neutron star that are to be taken into account (see Figure <ref>). * The radiation will origin at some level r =r_ em, which is a free parameter in our consideration. * For r<r_ O (<ref>) the refraction of O-mode takes place; for ordinary pulsarsr_ O∼(20 - 50)R. As this level depends on the frequency ν, the final directivity pattern of the O-mode depends essentially on the radius-to-frequency mapping. E.g., for frequency-independent radiation radius r_ em one can obtain for the frequency dependence of the mean pulse window width w_ O∝ν^-0.14 <cit.>. * The polarization evolves until the region of limiting polarization r∼ r_ esc (<ref>), and to reproduce it correctly, one should integrate the Kravtsov-Orlov system at least until this height.* For most of the pulsars the light cylinder radius R_ L=c/Ω is large enough and polarization usually forms before reaching this region, i.e., r_ esc < R_ L. However for millisecond pulsars or for pulsars with high plasma multiplicity λ the limiting polarization region r_ esc can be comparable or even exceed R_ L. In this case it is important to take into account the quasi-monopole component of the magnetic field. § NEW EFFECTS §.§ Sign switch of the circular polarizationAs one can see from (<ref>), V∝d(β_B+δ)/dl meaning a one-to-one correspondence between the sign of the circular polarization and the derivative of (β_B+δ). On the other hand, as is shown in Figure <ref>, the derivatives dβ_B/dl and dδ/dl along the ray have different signs: while at lower altitudes the first term prevails <cit.>, at higher altitudes β_B is nearly constant, and the sign is dictated by the derivativeIn Paper I we found that in most cases the sign of the derivative d(β_B+δ)/dl is governed by δ, i.e., V∝dδ/dl. On the other hand, as was shown by <cit.>, the sign of the derivative dβ_B/dl coincides with the sign of the observable derivative dp.a./dϕ. This leads to conclusion that the signs of V and dp.a./dϕ are correlated: for X-mode they are the same and opposite for O-mode. However, in general the sign of V is sensitive to the escape radius r_ esc. If this radius is well above the extremum of β_B+δ (see Figure <ref>), then the sign of V is fixed during the whole profile. However, when the r_ esc is near the extremum, polarization can be formed slightly below or slightly above this altitude, since plasma density along the ray can vary with phase. This results in different signs of V for different phases.Note, that in Fig. 
<ref> the polarization is being formed at different heights for phases ϕ=-5 and ϕ=5, resulting in different signs of V (see Fig. <ref>). This can be the case for high Lorentz-factors of the secondary plasma, since r_esc is most sensitive to γ_0 (<ref>). As will be shown in Paper III, it is this point that helps us to explain some of the exceptional profiles, for example, PSR J2048-1616.
§.§ Deviations from the predicted mode sequence
If two orthogonal modes are detected in the three-component mean profile, they are presumably in the O-X-O sequence. In Paper III we critically confront this prediction with polarization data collected by <cit.> and <cit.> and demonstrate general agreement between the predictions of the theory and observations. However, some pulsars demonstrate polarization profiles that fit poorly to this simplified model. Namely, at high frequencies PSR J2048-1616 has three peaks following (as judged by the sign of the circular polarization) the X-O-X pattern, while the p.a. curve shows only one orthogonal mode. On the other hand, in PSR J0738-4042 the p.a. data indicate two orthogonal modes while the circular polarization V does not change its sign. In Figure <ref> and Figure <ref> we show the simulated profiles and observation data for PSR J2048-1616 and PSR J0738-4042. In these calculations we assume that the O-mode in both cases is being generated deep enough near the stellar surface, resulting in the core component of the three-peaked profile, while the X-mode is emitted further above at higher altitudes, forming the edges of the pattern. In this case, for specific radiation altitudes, one would expect to have an X-O-X pattern. Thus, we model the radiation in two distinct, widely separated regions. The altitude parameters for both pulsars are given in Table <ref>. As one can see, the anomalous polarization profiles can be qualitatively explained using this technique. In fact, the same approach can be used to explain the profile and polarization curve of another pulsar, PSR J1146-6030, which exposes a similar polarization pattern. However, it is important to note that the absolute intensity of the radiation from a given radius is an open parameter that in this case was adjusted empirically to fit the profiles.
§.§ Central hump
As it was mentioned above, some two-peaked pulsars demonstrate a strange p.a. behaviour in the central region (see Figure <ref>, right panel). This hump behaviour was previously discussed by <cit.>, where the authors assumed that radiation is generated at various heights. As Δ p.a. ∼ 4Ω r_em/c resulting from the A/R effect, they solved the inverse problem and reconstructed the complex radiation altitude profile. In this paper we show that there is no need to assume an anomalous altitude profile to explain this property. Indeed, the deviation of the p.a. curve from the standard RVM curve (<ref>) is only as strong as the density of secondary plasma along the ray. Thus, in the regions where the plasma density is suppressed (i.e., in the central region of the 'hollow cone') our curve will tend to be closer to the RVM one, resulting in a hump in the center of the p.a. curve. This phenomenon can also be observed in some two-peaked profiles near the central region <cit.>. However, due to the suppression of radiation in that region, it is hard to detect the p.a. value there. In Figure <ref> we demonstrate this effect in a simulated two-peaked pulsar (left panel) in comparison with real observational data for PSR J1022+1001 <cit.>. As we see, the central hump can be easily reproduced as well.
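Before moving on to the general properties, it is convenient to attach numbers to the hierarchy of altitudes underlying all the effects of this section. The script below (a convenience sketch of ours, using the scalings for r_O and r_esc quoted above and assuming R = 10 km) confirms the ordering r_O < r_esc < R_L for fiducial parameters of an ordinary pulsar:

import numpy as np

# Characteristic altitudes, in units of the stellar radius R = 10 km, from
# the order-of-magnitude scalings quoted earlier in the text.
lam4, gam100, B12, nu_GHz, P = 1.0, 1.0, 1.0, 1.0, 1.0  # fiducial parameters

r_O   = 1e2 * lam4**(1/3) * gam100**(1/3)  * B12**(1/3) * nu_GHz**(-2/3) * P**(-1/5)
r_esc = 1e3 * lam4**(2/5) * gam100**(-6/5) * B12**(2/5) * nu_GHz**(-2/5) * P**(-1/5)
R_L   = (3e10 * P / (2.0 * np.pi)) / 1e6   # light cylinder c/Omega over R

print(f"r_O ~ {r_O:.0f} R,  r_esc ~ {r_esc:.0f} R,  R_L ~ {R_L:.0f} R")
# -> r_O ~ 100 R, r_esc ~ 1000 R, R_L ~ 4775 R: O-mode refraction ends well
#    below the limiting-polarization region, which itself sits inside R_L.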
§ GENERAL PROPERTIES §.§ Directivity patternAt first, let us consider the effect of cyclotron absorption on the directivity pattern, i.e., the mean intensity of the profile for various emission radii r_ em. As was already shown, cyclotron absorption takes place in the region of weak magnetic field far away from the stellar surface, where the relation ω̃=ω_B/γ holds <cit.>. As in Paper I, we model the optical depth τ (<ref>) by the particle distribution function F(γ)=6γ_0/2^1/6πγ^4/(2γ^6+γ_0^6),where γ_0 corresponds to mean Lorentz factor of secondary plasma. Such a distribution reproduces good enough the results of numerical simulations <cit.>, e.g., power-law dependence F(γ) ∝γ^-2 for γ≫γ_0.As a result, two main plasma parameters affecting the strength of the resonance are the multiplicity λ (<ref>) and the mean Lorentz-factor γ_0. In Figure <ref> and Figure <ref> we show the directivity pattern for various emission altitudes r_ em (in star radii), where again ϕ is the phase and β is the impact angle (minimum angle between the magnetic axis and the line of sight)[These patterns are well consistent with ones obtained by <cit.>, where propagation effects were taken into account as well.].As was found in Paper I, the high multiplicity implies a strong absorption of the trailing peak. But the trailing peak reappears when we have high enough Lorentz factors. This is due to the fact, that the absorption radius r_ abs∝γ_0^-1/3 (<ref>) and hence τ∝λγ_0^-1/3 (<ref>). In addition, this picture clearly shows, that the number of peaks of pulsar's profile is not a purely geometric property, but can also be a consequence of a strong synchrotron absorption. The only justified approach to distinguish between those two cases is to analyze the polarization curves.In Figure <ref> and Figure <ref> we present two geometrically different cases, where one obtains single peak. In the first case, the line of sight crosses the directivity pattern near its center, but the trailing peak is suppressed due to high multiplicity λ=5000, resulting in the only peak. In the second case we have small multiplicity and large Lorentz factor, so the directivity pattern is a hollow circle. But now the line of sight crosses pattern near its boundary (β=12).It is clear, that for the first case the p.a. curve is to be smooth and even close to flat, as we cross just one part of the directivity pattern and the projection of the magnetic field onto the picture plane does not change strongly.The pulsars J1739-3023 and J1709-4429  <cit.> as well as B0656+14, B0950+08, and B1929+10 <cit.> definitely belong to this class. However, if the position angle jumps as the line of sight passes through the center of the profile, as in J1224-6407, J1637-4553, J1731-4744, and J1824-1945 <cit.>, one can be sure that we meet the second case. One can also note in Figure <ref>, that the circular polarization level V/I is much larger for the leading peak, than for the trailing one, although the resonant suppression is strong for the first one. This is due to the fact, that the escape height for the leading peak lies in the region of lower plasma density, than for the trailing one.§.§ Pulsars with interpulses In most cases the interpulses, i.e., distinct radiation features separated from the main pulse by the phase ϕ close to 180 <cit.>, are thought to originate from the pulsar's opposite pole <cit.>. 
Thus, those pulsars are believed to have an inclination angle close to 90 and hence their analysis is important in the context of obliquity angle evolution <cit.> and the directivity pattern and polarization formation <cit.>.Here we present the modeled profiles and polarization curves of pulsars with interpulses, both for the main pulse (MP) and the interpulse (IP). As the line of sight crosses actually the same directivity pattern with two different impact angles (see Figure <ref>), we are able to model both the MP and the IP on the same directivity pattern. In Figure <ref>-<ref> the two cases are presented with different geometric parameters. The perturbations in the directivity pattern, given by the density profile (<ref>), cause the formation of two emitting regions, separated by the suppressed density gap on the magnetic field lines intersecting neutron star surface where the condition Ω·B≈ 0 is satisfied. Dashed and dotted lines represent respectively the line of sight path for the main pulse and the interpulse. In Figure <ref> we demonstrate the case α≈ 85, while in Figure <ref> the orthogonal geometry α≈ 90 is presented. As one can see, there are clearly two distinct "hot" regions, and while in the second case they are symmetric, generating IP with roughly the same amplitude (>50%), in the first case the lower region is suppressed, as it is located in the rarified plasma region, and this results in a large difference in the intensity of the MP and IP (10-30%). In the first two pictures (Figure <ref>) we show the MP and IP generating along the same region (corresponding impact angle β=3), which results in roughly the same p.a. curve and similar circular polarization level (see, e.g., PSR J1722-3712). On the other hand, for β=-2 the MP and IP are generated in different regions of the directivity pattern, having the opposite run of the position angle (see, e.g., PSR J1549-4848). In the second case (the leading part of the directivity pattern is suppressed due to large λ (see Sect. <ref>), as the inclination angle is close to 90, MP and IP always cross the opposite "hot" regions of the directivity pattern (see Figure <ref>). As these regions are close to symmetric, we end up having similar amplitudes for MP and IP and opposite run of the p.a. and circular polarization. §.§ Width of the emitting regionUnderstanding the size and width of the region where the radio emission originates is a crucial step towards a construction of a self-consistent radiation theory. Although there are no direct methods of determining the actual altitude of that region, there are several naive approaches that may help to do rough estimates. Namely, geometrical 'hollow cone' model together with the A/R effects <cit.> showed that the radiation can originate in the region from 10 to 100 stellar radius. However, this approach is not physically-motivated if propagation effects are significant.In this paper we propose a method of conducting the radius-to-frequency mapping that allows us to evaluate the height and characteristic depth of the radiation region using polarization characteristics. To do that, we compare the results of our simulation of the p.a. with the corresponding observational plots obtained by <cit.> who presented the p.a. curve with characteristic distinct scatter points. 
Such a scatteringcan be explained if we assume that the radiation originates not from one particular radius, but from a rather wide shell.In Figure <ref> we present the results of such analysis for the two-peaked pulsar PSR B0301+19 compared to the observational curves presented in Figure <ref>. We approximated the scatter curve with the parameters shown in Table <ref> where Δ p.a. is the rough scatter dispersion of the position angle data points. As we see, for double-peaked mean profile (which we connect with the O-mode) the width of the p.a. curve is slimmer in the center of a profile and wider near the pulse edges. This common property which is observed in all double-peak O-mode pulsars can be easily explained. Indeed, as is shown on Figures <ref> and <ref>, the central 'hole' of the directivity pattern increases in size with the generation radius r_ em. Thus, only the very deep parts of the radiation domain give the observable radiation in the central part of the mean pulse. As to pulse edges, they will be radiated from all the generation domain. In addition, as for higher frequency the thickness of the p.a. curve (which is directly corresponded to the radiation region size) is smaller, one can conclude that the radiating shell is smaller as well, which is due to the fact that higher frequencies are generated in the deep regions close to the stellar surface. On the other hand, for the single-peaked X-mode pulsar PSR B0540+23 (Figure <ref>) the p.a. curve is wider in the center of integrated profile (due to the intensity suppression near the edges), which is also in a good agreement with observational data presented on Figure <ref>. The estimated upper boundaries for the altitudes in this case are also presented in Table <ref>. Basically the same trend holds here: higher frequencies are generated on lower altitudes and have a narrower radiation region. This fact, however, does not appear to be universal as for some pulsars the higher frequencies may have a wider radiation region (that can be estimated from Δ p.a.), while still originating from the deep altitudes (e.g., PSR B0943+10, PSR B1133+16, and PSR B2020+28). As a result, the key parameters, i.e., the inclination angle α and the impact angle β, as well as the approximate multiplicity parameterλ, the characteristic gamma-factor γ_0 and the radiation height r_ em can be determined from the mean profile shape and circular polarization V. Those approximate values can further be corrected in the comparison with polarization data. After that we are left with only one parameter: Δ r_ em, i.e., the characteristic depth of that region. In Table <ref> we present the parameters of the above mentioned pulsars that were used in our simulations. This estimations are rough and are based exclusively on mean profile I, circular polarization V and position angle p.a. curves that we compared with the observations. The period P and magnetic field B_12 = B_0/(10^12G) was taken from <cit.> and <cit.>. To conclude, one can say that our approach, together with the observational scatter data for p.a. (such as in catalog by ) provides a strong instrument for estimating the upper bounds for the radiation region altitudes. §.§ Position angle shiftThe maximum of the p.a. derivative (dp.a./dϕ)_ max, i.e., the center of the p.a. curve, is shifted to the right relative to the center of the profile. 
This is a well known observational effect and it was a subject of study for a long time: see, e.g., PSR J0729-1448, J0742-2822, and J1105-6107 from <cit.> andJ0631+1036, J0659+1414, J0729–1448,J0742–2822, J0908–4913, and J1057–5226 from <cit.>. As it was mentioned above, usually the p.a. shift is assumed to be the consequence of the A/R effects. In this case, the position angle shift can be estimated as Δϕ_p.a.≈4r_ emΩ/c, and this dependence is usually used to carry out the radius-to-frequency mapping, comparing the position angle shifts on various frequencies <cit.>. The results are mostly consistent with the fact, that higher frequencies are generated closer to the stellar surface. It was also shown by <cit.> that in fact for some pulsars the p.a. shift on two frequencies (1.4 and 3.1 GHz) is effectively the same, hence implying the weak dependence of the p.a. shift on the frequency. To study the dependence of the shift in Figure <ref> we show the position angle curves for two distinct frequencies (0.4 and 1.4 GHz). Multiplicity, on the other hand, models how magnetospheric plasma affects radiation. The error bars are modeled by the radiation originating from various altitudes, as in observations we are not able to distinguish the emission heights. In Figure <ref> we study the dependence of the position angle shifts difference at two frequencies on the plasma multiplicity.One should note two important things here. First, the higher the multiplicity, the more different are the curves on various frequencies. This provides a possible restriction for the multiplicity from multifrequency observations. On the other hand, despite the distinct frequencies, the curves are close to each other (see blue points to the right) and when taking into account the scattering due to emission height, they mostly overlap (even for large λ). This fact demonstrates, that the propagation effects do not necessarily imply a strong dependence of the p.a. shift on frequency, especially if the radiation is generated in a wide range of heights. § DISCUSSIONS AND CONCLUSION In this paper we demonstrate that complex behaviour of pulsar light-curves and polarization profiles can be explained with a propagation theory assuming a different loci in the parameter space: pulsar inclination geometry, emission region, plasma multiplicity and mean Lorentz-factor. Some general properties of the mean profile formation are also discussed. At first, in Sect. <ref> we explain how the sign of the circular polarization of a single mode can change over a profile. For usual parameters of the magnetospheric plasma the polarization is being formed high enough, so the sign of the V is governed by the derivative of the phase δ (appearing due to nonzero electric field in the pulsar magnetosphere) and, thus, is fixed. In some cases, however, when the altitude at which polarization becomes frozen is low, one can have a sign that depends on the rotation phase.In Sect. <ref> the possible explanations for complex directivity patterns of some pulsars are discussed. While most of the pulsars follow the simple hollow cone model directivity pattern, some clearly contradict with it. We show that assuming the generation of X and O modes on various altitudes one can easily explain this behaviour. On the other hand, in Sect. <ref> we have shown that there is no need to assume anomalous altitude profile of radiation for some two-peaked pulsars, that have a hump in the center of the profile (as was done by ). 
Such effect can be easily explained by the suppression of the plasma density near the center of the directivity pattern, as in this case we will have a weaker shift from the RVM curve.We further discussed the more general properties. In Sect. <ref> the directivity patterns for various multiplicities λ and mean Lorentz-factors of the secondary plasma γ_0 were presented. The role of plasma cyclotron absorption in formation of the mean profiles was also discussed. It was shown, that one can obtain single-peaked pulsars for different impact angles β, and that the only reliable way to distinguish between those cases is to analyze the polarization curves. On top of that in Sect. <ref> the pulsars with interpulses are discussed. We demonstrate the directivity patterns for various obliquity angles and show the formation of the main pulse and interpulse and their polarization curves for various impact angles. Further, in Sect. <ref> we discuss the possible explanation of the position angle curve width for two characteristic pulsars, showing the possibility to determine the altitudes and sizes of the emitting region. It is shown that the altitudes are in a good agreement with the results obtained within the simple geometric and A/R effects considered by <cit.> and <cit.>. But upon that, our method provides an additional information about the width of the radiating region.Finally, we analyze the frequency dependence of the shift of the position angle curve from the center of the mean profile. One of the key arguments against the importance of the propagation effects in the magnetosphere is that for some pulsars the position angle curve does not strongly depend on the observation frequency <cit.>. In Sect. <ref> we analyze the position angle curves for various plasma multiplicity factors on distinct frequencies (with error bars due to generation in a wide shell of altitudes). We demonstrate that in fact even a high multiplicity does not necessarily imply a strong dependence of position angle curve on frequency, and the curves for two frequencies mostly coincide.In Paper III we will confront the predictions of our model with observational data. We assume that further development of the self-consistent technique discussed above will allow us to make a powerful tool to estimate the plasma parameters for individual pulsars. § ACKNOWLEDGMENTSThe observational data for PSR J0738-4042 (at 1375 MHz) and PSR J1022+1001 (at 728 MHz) were obtained from the EPN database under the Creative Commons Attribution 4.0 International licence. The numerical code that supports the plots and diagrams within this paper as well as the relevant parameters of modeling are available from the authors upon reasonable request.We thank Ya.N. Istomin, B. Stappers and P. Weltevrede for their interest and useful discussions and the anonymous referee for instructive comments which helped us to improve the manuscript. This work was partially supported by Russian Foundation for Basic Research (Grant no. 14-02-00831). AAP is supported by Porter Ogden Jacobus Fellowhip, awarded by the graduate school of Princeton University.mnras § MODIFIED PLASMA DENSITY PROFILEIn Paper I the axisymmetric distribution of outflowing plasma within polar cap was assumed. In general this assumption is incorrect and this fact may be important for almost orthogonal radio pulsars, when Goldreich-Julian charge densityρ_ GJ = -Ω·B/2π c changes the sign within the polar cap. 
Indeed, near this line the potential drop through the gap (which is proportional to the Goldreich–Julian charge density ρ_GJ) is too low to create pairs. For this reason the axisymmetric density profile considered in Paper I is adjusted by an empirical Gaussian factor that depends on the polar angle θ measured from the rotation axis Ω. As a result, within the polar cap in the vicinity of the neutron star surface we obtain (see Figure <ref>) g(r_m,φ_m) = exp(-r_m^4/R_0^4)/(1+(r_0/r_m)^5) · (1 - exp[-(π/2-α+θ_m)^2/(2(δθ)^2)]). Here r_m < R_0 and φ_m are the polar coordinates, R_0 = (Ω R/c)^1/2 R is the polar cap radius, r_0 determines the size of the 'hole' in the hollow cone, θ_m = 2/3 (r_m/R) sin φ_m, and δθ is the empirical angular width of the gap near the Ω·B=0 line. The first factor in (<ref>) models the suppression of secondary plasma generation near the magnetic axis, r_m ≲ r_0, where the magnetic field lines have a large curvature radius, while the second one corresponds to the zero line Ω·B=0. It is important that at the stellar surface the line Ω·B=0 lies below the magnetic pole. This implies that the corresponding region of the directivity pattern (which is formed at distances r ≫ R) can lie below the equator θ = π/2. As a result, the interpulse can be connected with the same radiation domain as the main pulse. § MAGNETIC FIELD STRUCTURE As was demonstrated in Paper I, the structure of the pulsar magnetosphere obtained numerically by many authors, at first within the force-free approximation <cit.> and later within MHD <cit.> and even PIC simulations <cit.>, can be modelled well enough by the rotating dipole magnetic field and the radial quasi-monopole analytical solutions obtained by <cit.> and <cit.>. The transition between these two asymptotic behaviours takes place in the vicinity of the light cylinder R_L. It is this magnetic field structure that was used in our previous simulations. As a result, for ordinary pulsars (P ∼ 1 s), for which Eqn. (<ref>) gives r_esc ≪ R_L, the polarization characteristics are determined by the domain with an almost dipolar magnetic field. In this case the mean profiles are well described by the 'hollow cone' model with the S-shaped dependence of the p.a. on the phase ϕ. On the other hand, according to (<ref>), especially for millisecond pulsars (r_esc ≫ R_L) the dipole magnetic field in the polarization formation domain should be supplemented by the quasi-radial (and, hence, homogeneous) wind component. As will be shown in Paper III, more than half of the millisecond pulsars have an approximately constant p.a. within the main pulse. On the other hand, as was recently obtained by <cit.>, for large enough inclination angles α > 30° the angular structure of the radial wind differs drastically from the Michel–Bogovalov 'split-monopole' solution. For this reason below we use the following expressions for the magnetic field in the wind domain B_r = Ψ_tot/(2π r^2) · (cos^2 α + π sin θ cos φ sin^2 α), B_φ = -Ψ_tot/(2π r R_L) · (sin θ cos^2 α + π sin^2 θ cos φ sin^2 α), where Ψ_tot = π f_* R^2 (Ω R/c) B_0 is the total magnetic flux in the wind and 1.592 < f_*(α) < 1.96 is the dimensionless polar cap area <cit.>. Here the first terms correspond to the analytical "split monopole" solution and the second ones correspond to the orthogonal magnetic structure obtained numerically by <cit.>.
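Since the profile (<ref>) is fully explicit, it is straightforward to evaluate numerically. The following minimal Python sketch (our own illustration; the parameter values in the example are purely illustrative, not fitted to any pulsar) evaluates the modified density profile g(r_m, φ_m):

```python
import numpy as np

def plasma_density_profile(r_m, phi_m, R, Omega, alpha, r0, dtheta, c=3e10):
    """Evaluate g(r_m, phi_m): the hollow-cone profile (first factor)
    suppressed near the Omega.B = 0 line of a nearly orthogonal rotator
    (second factor).  CGS units; requires 0 < r_m < R0."""
    R0 = np.sqrt(Omega * R / c) * R                 # polar cap radius
    theta_m = (2.0 / 3.0) * (r_m / R) * np.sin(phi_m)
    cone = np.exp(-r_m**4 / R0**4) / (1.0 + (r0 / r_m)**5)
    gap = 1.0 - np.exp(-(np.pi / 2 - alpha + theta_m)**2 / (2.0 * dtheta**2))
    return cone * gap

# Illustrative example: P = 1 s, R = 10 km, inclination alpha = 80 degrees.
R, Omega = 1e6, 2.0 * np.pi
R0 = np.sqrt(Omega * R / 3e10) * R
print(plasma_density_profile(0.5 * R0, np.pi / 2, R, Omega,
                             np.deg2rad(80.0), 0.1 * R0, 0.05))
```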
http://arxiv.org/abs/1704.08743v1
{ "authors": [ "Hayk Hakobyan", "Vasily Beskin", "Alexander Philippov" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170427205627", "title": "On the mean profiles of radio pulsars II: Reconstruction of complex pulsar light-curves and other new propagation effects" }
A Note on McGee's ω-Inconsistency Result[After I had finished a draft of this note I discovered that basically the same point has been made in <cit.>. This put an end to the idea of publishing the note. Since in the present note the point is made in a slightly different and, I believe, crisper way, I decided nonetheless to make the note publicly available. I wish to thank Matteo Zichetti, who drew my attention to a rather important typo in an earlier version of this note.] Johannes Stern [email protected] January 2017 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== In this note we show that McGee's ω-inconsistency result can be derived from Löb's theorem. In his paper "How Truthlike Can a Predicate Be?" <cit.> showed the ω-inconsistency of a broad family of theories of truth. The purpose of this note is to highlight the connection between McGee's result and Löb's theorem. Once this connection is made explicit, McGee's ω-inconsistency result may be viewed as a variant of Gödel's second incompleteness theorem. For expository purposes we start by providing McGee's result, roughly following his original derivation.[On notation: ℒ is a standard arithmetical language with the exception that we assume the existence of certain function symbols in the language. In particular we assume the existence of the function symbol f^∙ (cf. below). ℒ_T (ℒ_P) is the extension of ℒ by a unary predicate T (P). We assume some standard coding scheme for the expressions of the languages under consideration and denote the name of the code of an expression η by ⌜η⌝. The numeral of a natural number n is denoted by n̄. Finally, we write ⌜ϕ(ẋ)⌝ to denote the function that with n as argument provides the code of the formula ϕ(n̄).] [McGee] Let Γ be a theory extending Q in the language ℒ_T which is closed under the rule (T-Intro) ϕ / T⌜ϕ⌝ and proves (Cons) T⌜¬ϕ⌝ → ¬T⌜ϕ⌝ (T-Imp) T⌜ϕ→ψ⌝ → (T⌜ϕ⌝ → T⌜ψ⌝) (UInf) ∀x T⌜ϕ(ẋ)⌝ → T⌜∀v ϕ(v)⌝ for all ϕ,ψ∈𝖲𝖾𝗇𝗍_ℒ_𝖳. Then Γ is ω-inconsistent. A theory is ω-inconsistent if there exists a formula ϕ(x) such that the theory proves ¬∀x ϕ and ϕ(n̄) for all n∈ω. In other words, if we allow for one application of the ω-rule, inconsistency will arise. The ω-rule allows us to infer ∀x ϕ if we have derived ϕ(n̄) for all n∈ω. The crucial observation by McGee was that there is a two-place primitive recursive function f which, when applied to a natural number n and the code, i.e., the Gödel number, of a sentence ϕ, provides the code of the sentence T⌜… T⌜ϕ⌝ …⌝ with n occurrences of the truth predicate T. This allowed McGee to define an ω-truth predicate T^ω x := ∀y T f^∙(y,x), where f^∙ is a function symbol representing f in Γ.[For ease of exposition we avail ourselves of certain function symbols, such as f^∙, in the language. In this we follow the presentation in <cit.>. <cit.> presented his result without assuming such function symbols.] A sentence ϕ is ω-true iff each sentence resulting from ϕ by any finite number of applications of the truth predicate is true.
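To make the role of f concrete, the following toy Python sketch (ours; it manipulates sentences as strings rather than Gödel codes) implements the iteration underlying f, and hence the scheme quantified over by T^ω:

```python
def f(n: int, phi: str) -> str:
    """Toy string analogue of McGee's primitive recursive function f:
    return (the 'code' of) phi prefixed by n copies of the truth
    predicate T.  Strings stand in for Goedel numbers here."""
    for _ in range(n):
        phi = 'T⌜' + phi + '⌝'
    return phi

# T^ω⌜γ⌝ asserts that T applied to each of these is true:
print([f(n, 'γ') for n in range(3)])   # ['γ', 'T⌜γ⌝', 'T⌜T⌜γ⌝⌝']
```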
By first-order logic and (UInf) one can easily prove the following characteristic properties of T^ω: (A1) T^ω⌜ϕ⌝ → T⌜T^ω⌜ϕ⌝⌝ (A2) T^ω⌜ϕ⌝ → T⌜ϕ⌝ With these prerequisites we can provide a crisp version of McGee's original proof. We start with an application of the diagonal lemma: * γ ↔ ¬T^ω⌜γ⌝ * T⌜γ⌝ ↔ T⌜¬T^ω⌜γ⌝⌝ 1, (T-Intro), (T-Imp) * T⌜γ⌝ → ¬T⌜T^ω⌜γ⌝⌝ 2, (Cons) * T⌜γ⌝ → ¬T^ω⌜γ⌝ 3, (A1) * T^ω⌜γ⌝ → T⌜γ⌝ (A2) * ¬T^ω⌜γ⌝ 4, 5 * γ 1, 6 From line 7 and (T-Intro) we may derive T⌜γ⌝ (= T f^∙(0,⌜γ⌝)), T⌜T⌜γ⌝⌝ (= T f^∙(1,⌜γ⌝)), T⌜T⌜T⌜γ⌝⌝⌝ (= T f^∙(2,⌜γ⌝)), … By the ω-rule this yields ∀x T f^∙(x,⌜γ⌝), that is, T^ω⌜γ⌝, which contradicts line 6 above. McGee's result adds to a family of inconsistency results, such as Tarski's undefinability result (<cit.>) or Montague's theorem (<cit.>), that point to severe limitations on how truthlike predicates can be. However, as we have seen, we have to go beyond the resources of classical first-order logic to turn the ω-inconsistency result into an inconsistency result proper. But if these resources, that is, the ω-rule, are made available, McGee's result turns out to be a direct consequence of Löb's theorem and as such a variant of Gödel's second incompleteness theorem.[See <cit.> for a discussion of the relation between Löb's theorem and Gödel's second incompleteness theorem.] The reason is that in a theory of truth that proves (T-Imp), (UInf) and is closed under the rule (T-Intro), we can derive the three Löb derivability conditions for the ω-truth predicate T^ω if we allow for applications of the ω-rule.[There exist ω-consistent theories of this kind, thus this observation is non-trivial in the sense that the application of the ω-rule does not lead to inconsistency, i.e., to the explosion of the derivability relation.] This implies that we can derive Löb's theorem for T^ω, which directly contradicts the principle (Cons), because (Cons) forces the ω-truth predicate to be provably consistent, that is, we can prove ¬T^ω⌜0=1⌝. Let us make this observation explicit. <cit.> showed that if for a theory Λ extending Q the following conditions are satisfied for a predicate P (D1) Λ⊢ϕ ⇒ Λ⊢P⌜ϕ⌝ (D2) Λ⊢ P⌜ϕ→ψ⌝ → (P⌜ϕ⌝ → P⌜ψ⌝) (D3) Λ⊢ P⌜ϕ⌝ → P⌜P⌜ϕ⌝⌝ for all ϕ,ψ∈𝖲𝖾𝗇𝗍_ℒ_𝖯, then (L1) Λ⊢ P⌜ϕ⌝→ϕ ⇒ Λ⊢ϕ (L2) Λ⊢ P⌜P⌜ϕ⌝→ϕ⌝ → P⌜ϕ⌝ for all ϕ∈𝖲𝖾𝗇𝗍_ℒ_𝖯. (L1) is known as Löb's theorem, whereas (L2) is the so-called formalized Löb's theorem. In a theory Σ that is just like the theory Γ of Theorem <ref>, with the exception that (Cons) is no longer assumed, we may establish both versions of Löb's theorem for the predicate T^ω. [ω-Löb] Let Σ be a theory extending Q in the language ℒ_T which is closed under the rule (T-Intro) ϕ / T⌜ϕ⌝ and proves (T-Imp) T⌜ϕ→ψ⌝ → (T⌜ϕ⌝ → T⌜ψ⌝) (UInf) ∀x T⌜ϕ(ẋ)⌝ → T⌜∀v ϕ(v)⌝ for all ϕ,ψ∈𝖲𝖾𝗇𝗍_ℒ_𝖳. Let T^ω be defined as above. Then (i) Σ⊢_ω T^ω⌜ϕ⌝→ϕ ⇒ Σ⊢_ω ϕ (ii) Σ⊢_ω T^ω⌜T^ω⌜ϕ⌝→ϕ⌝ → T^ω⌜ϕ⌝. The derivability relation ⊢_ω signifies the closure of the classical derivability relation under non-embedded applications of the ω-rule.[As a consequence we do not appeal to full ω-logic for establishing Theorem <ref>. In other words, to avoid triviality we only need to assume the ω-consistency of Σ rather than the existence of standard models of Σ.] As is to be expected, Theorem <ref> is established by showing that the three Löb derivability conditions can be proved in Σ for the predicate T^ω. We state this claim in the following lemma. Let Σ be as in Theorem <ref>. Then for all ϕ∈𝖲𝖾𝗇𝗍_ℒ_𝖳 (M1) Σ⊢_ω ϕ ⇒ Σ⊢_ω T^ω⌜ϕ⌝ (M2) Σ⊢_ω T^ω⌜ϕ→ψ⌝ → (T^ω⌜ϕ⌝ → T^ω⌜ψ⌝) (M3) Σ⊢_ω T^ω⌜ϕ⌝ → T^ω⌜T^ω⌜ϕ⌝⌝. (M1) follows directly from the rule (T-Intro), the ω-rule and the definition of T^ω. (M2) follows from (T-Imp), (T-Intro), the ω-rule and again the definition of T^ω. For (M3) we observe that by (A1) we have T^ω⌜ϕ⌝ → T⌜T^ω⌜ϕ⌝⌝, i.e., T^ω⌜ϕ⌝ → T f^∙(0,⌜T^ω⌜ϕ⌝⌝).
By (T-Intro) and (T-Imp) we obtain T⌜T^ω⌜ϕ⌝⌝ → T⌜T⌜T^ω⌜ϕ⌝⌝⌝. Then by (A1) we derive T^ω⌜ϕ⌝ → T⌜T⌜T^ω⌜ϕ⌝⌝⌝, that is, T^ω⌜ϕ⌝ → T f^∙(1,⌜T^ω⌜ϕ⌝⌝). Clearly, we may repeat this process and therefore, by an application of the ω-rule, derive T^ω⌜ϕ⌝ → ∀x T f^∙(x,⌜T^ω⌜ϕ⌝⌝). By the definition of T^ω this is the desired (M3). By Lemma <ref>, Theorem <ref> is a direct corollary of Löb's original result. Moreover, as we have already mentioned, McGee's ω-inconsistency result proves to be a direct corollary of this result. Since the theory Γ proves (Cons) and thus Γ⊢_ω T^ω⌜¬ϕ⌝ → ¬T^ω⌜ϕ⌝, we have Γ⊢ T^ω⌜0=1⌝ → 0=1. By Theorem <ref> Löb's theorem holds for Γ. This yields the contradiction. This observation establishes a firm link between McGee's theorem and Löb's theorem. This connection is of course not too surprising. In their paper "Possible World Semantics for Modal Notions Conceived as Predicates" <cit.> already remark: "In the end all limitative results can be derived from Löb's theorem." Their observation, however, depends on rather involved semantic considerations, while the present note might serve as an accessible illustration of this general fact.[In the terminology of <cit.>, the connection between McGee's result and Löb's theorem is roughly as follows: a given possible world frame ⟨ W,R⟩ admits a valuation only if the frame ⟨ W,R^∗⟩ does, where R^∗ is the transitive closure of R. This is precisely due to the fact that we can define an ω-truth predicate. Indeed R^∗ is the relevant accessibility relation for T^ω. Moreover, the valuation on the frame ⟨ W,R^∗⟩ must be such that Löb's theorem is true at each world, which immediately yields all limitative results. The question which possible world frames allow for a valuation may therefore be rephrased as the question: for which transitive frames can we find a valuation such that Löb's theorem is true at each world of the model?]
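For the reader's convenience we recall the standard derivation of (L1) from (D1)–(D3); replacing P by T^ω, Λ by Σ and ⊢ by ⊢_ω, the very same steps establish part (i) of Theorem <ref> from (M1)–(M3). Suppose Λ⊢ P⌜ϕ⌝→ϕ and pick ψ by the diagonal lemma: * ψ ↔ (P⌜ψ⌝ → ϕ) diagonal lemma * P⌜ψ⌝ → P⌜P⌜ψ⌝→ϕ⌝ 1, (D1), (D2) * P⌜ψ⌝ → (P⌜P⌜ψ⌝⌝ → P⌜ϕ⌝) 2, (D2) * P⌜ψ⌝ → P⌜P⌜ψ⌝⌝ (D3) * P⌜ψ⌝ → P⌜ϕ⌝ 3, 4 * P⌜ψ⌝ → ϕ 5, assumption * ψ 1, 6 * P⌜ψ⌝ 7, (D1) * ϕ 6, 8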
http://arxiv.org/abs/1704.08283v1
{ "authors": [ "Johannes Stern" ], "categories": [ "math.LO" ], "primary_category": "math.LO", "published": "20170426183230", "title": "A Note on McGee's ω-Inconsistency Result" }
AMS 2000 Mathematics Subject Classification: 05C80, 60C05, 90B15 The phase transition in bounded-size Achlioptas processes Oliver Riordan Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK. E-mail: [email protected] Lutz Warnke School of Mathematics, Georgia Institute of Technology, Atlanta GA 30332, USA; and Peterhouse, Cambridge CB2 1RD, UK. E-mail: [email protected] April 27, 2017 ============================================================================================================================================================================================================================================================================================================================================= Perhaps the best understood phase transition is that in the component structure of the uniform random graph process introduced by Erdős and Rényi around 1960. Since the model is so fundamental, it is very interesting to know which features of this phase transition are specific to the model, and which are `universal', at least within some larger class of processes (a `universality class'). Achlioptas processes, a class of variants of the Erdős–Rényi process that are easy to define but difficult to analyze, have been extensively studied from this point of view. Here, settling a number of conjectures and open problems, we show that all `bounded-size' Achlioptas processes share (in a strong sense) all the key features of the Erdős–Rényi phase transition. We do not expect this to hold for Achlioptas processes in general.§ INTRODUCTION §.§ Summary In this paper we study the percolation phase transition in Achlioptas processes, which have become a key example for random graph processes with dependencies between the edges. Starting with an empty graph on n vertices, in each step two potential edges are chosen uniformly at random. One of these two edges is then added to the evolving graph according to some rule, where the choice may only depend on the sizes of the components containing the four endvertices. [Here we are describing Achlioptas processes with `size rules'. This is by far the most natural and most studied type of Achlioptas process, but occasionally more general rules are considered.] For the widely studied class of bounded-size rules (where all component sizes larger than some constant K are treated the same), the location and existence of the percolation phase transition are nowadays well understood. However, despite many partial results during the last decade (see, e.g., <cit.>), our understanding of the finer details of the phase transition has remained incomplete, in particular concerning the size of the largest component.
Our main results resolve the finite-size scaling behaviour of percolation in all bounded-size Achlioptas processes. We show that for any such rule the phase transition is qualitatively the same as that of the classical Erdős–Rényi random graph process in a very precise sense: the width of the `critical window' (or `scaling window') is the same, and so is the asymptotic behaviour of the size of the largest component above and below this window, as well as the tail behaviour of the component size distribution throughout the phase transition. In particular, when ε = ε(n) → 0 as n →∞ but ε^3 n →∞, we show that, with probability tending to 1 as n →∞, the size of the largest component after i steps satisfies L_1(i) ∼ C ε^-2 log(ε^3 n) if i = t_c n - εn, and L_1(i) ∼ cε n if i = t_c n + εn, where t_c,C,c>0 are rule-dependent constants (in the Erdős–Rényi case we have t_c=C=1/2 and c=4). These and our related results for the component size distribution settle a number of conjectures and open problems from <cit.>. In the language of mathematical physics, they establish that all bounded-size Achlioptas processes fall in the same `universality class' (we do not expect this to be true for general Achlioptas processes). Such strong results (which fully identify the phase transition of the largest component and the critical window) are known for very few random graph models. Our proof deals with the edge-dependencies present in bounded-size Achlioptas processes via a mixture of combinatorial multi-round exposure arguments, the differential equation method, PDE theory, and coupling arguments. This eventually enables us to analyze the phase transition via branching process arguments. §.§ Background and outline results In the last 15 years or so there has been a great deal of interest in studying evolving network models, i.e., random graphs in which edges (and perhaps also vertices) are randomly added step-by-step, rather than generated in a single round. Although the original motivation, especially for the Barabási–Albert model <cit.>, was more realistic modelling of networks in the real world, by now evolving models are studied in their own right as mathematical objects, in particular to see how they differ from static models. Many properties of these models have been studied, starting with the degree distribution. In many cases one of the most interesting features is a phase transition where a `giant' (linear-sized) component emerges as a density parameter increases beyond a critical value. One family of evolving random-graph models that has attracted a great deal of interdisciplinary interest (see, e.g., <cit.>) is that of Achlioptas processes, proposed by Dimitris Achlioptas at a Fields Institute workshop in 2000. These `power of two choices' variants of the Erdős–Rényi random graph process can be described as follows. Starting with an empty graph on n vertices and no edges, in each step two potential edges e_1,e_2 are chosen uniformly at random from all n(n-1)/2 possible edges (or from those not already present). One of these edges is selected according to some `decision rule' and added to the evolving graph. Note that the distribution of the graph G^ℛ_n,i after i steps depends on the rule ℛ used, and that always adding e_1 gives the classical Erdős–Rényi random graph process (exactly or approximately, depending on the precise definitions). Figure <ref> gives a crude picture of the phase transition for a range of rules.
In general, the study of Achlioptas processes is complicated by the fact that there are non-trivial dependencies between the choices in different rounds. Indeed, this makes the major tools and techniques for studying the phase transition unavailable (such as tree-counting <cit.>, branching processes <cit.>, or random walks <cit.>), since these crucially exploit independence. The non-standard features of Achlioptas processes have made them an important testbed for developing new robust methods in the context of random graphs with dependencies, and for gaining a deeper understanding of the phase transition phenomenon. Here the class of bounded-size rules has received considerable attention (see, e.g., <cit.>): the decision of these rules is based only on the sizes c_1, …, c_4 of the components containing the endvertices of the two potential edges e_1 and e_2, with the restriction that all component sizes larger than some given cut-off K are treated in the same way (i.e., the rule only `sees' the truncated sizes min{c_i,K+1}). Perhaps the simplest example is the Bohman–Frieze process (BF), the bounded-size rule with cut-off K=1 in which the edge e_1 is added if and only if c_1=c_2=1 (see, e.g., <cit.>). Figure <ref> suggests that while the BF rule delays percolation compared to the classical Erdős–Rényi random graph process (ER), it leaves the essential nature of the phase transition unchanged. In this paper we make this rigorous for all bounded-size rules, by showing that these exhibit Erdős–Rényi-like behaviour (see Theorem <ref>). Although very few rigorous results are known for rules which are not bounded-size (see <cit.>), as suggested in Figure <ref> these seem to have very different behaviour in general. The study of bounded-size Achlioptas processes is guided by the typical questions from percolation theory (and random graph theory). Indeed, given any new model, the first question one asks is whether there is a phase transition in the component structure, and where it is located. This was answered in a pioneering paper by Spencer and Wormald <cit.> (and for a large subclass by Bohman and Kravitz <cit.>) using a blend of combinatorics, differential equations and probabilistic arguments. They showed that for any bounded-size rule ℛ there is a rule-dependent critical time t_c = t_c^ℛ ∈ (0,∞) at which the phase transition happens, i.e., at which the largest component goes from being of order O(log n) to order Θ(n). More precisely, writing, as usual, L_j(G) for the number of vertices in the jth largest component of a graph G, Spencer and Wormald showed that for any fixed t ∈ [0,∞), whp (with high probability, i.e., with probability tending to 1 as n →∞) we have[Here and throughout we ignore the irrelevant rounding to integers in the number of edges, writing G^ℛ_n,tn for G^ℛ_n,⌊tn⌋.] L_1(G^ℛ_n,tn) = O(log n) if t < t_c, Θ(n) if t > t_c, where t_c = t_c^ℛ is given by the blowup point of a certain finite system of differential equations (in the Erdős–Rényi process (<ref>) holds with t_c=1/2, see also Remark <ref>). Let N_k(G) denote the number of vertices of G which are in components of size k, and let S_r(G) = ∑_C |C|^r/n = ∑_k ≥ 1 k^r-1 N_k(G)/n, where the first sum is over all components C of G and |C| is the number of vertices in C. Thus S_r+1(G) is the rth moment of the size of the component containing a randomly chosen vertex. The susceptibility S_2(G) is of particular interest since in many classical percolation models its analogue diverges precisely at the critical point.
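For concreteness, all the statistics just defined are cheap to compute in simulations. The following minimal Python sketch (our own illustration, not code from any of the cited papers) runs the Bohman–Frieze process in its vertex-sampling form, maintaining component sizes with a union–find structure, and reports L_1 and S_2:

```python
import random

def bohman_frieze(n, steps):
    """Bohman-Frieze process (bounded-size rule, cut-off K = 1):
    add e1 = {v1,v2} iff both its endpoints are isolated; else add e2."""
    parent, size = list(range(n)), [1] * n

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]

    for _ in range(steps):
        v1, v2, v3, v4 = (random.randrange(n) for _ in range(4))
        if size[find(v1)] == 1 and size[find(v2)] == 1:
            union(v1, v2)
        else:
            union(v3, v4)

    comps = [size[v] for v in range(n) if find(v) == v]
    return max(comps), sum(c * c for c in comps) / n   # L_1 and S_2

# t = 1.2 is supercritical for BF (numerical work places t_c^BF near 0.977):
print(bohman_frieze(n=100_000, steps=120_000))
```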
Spencer and Wormald <cit.> showed that this holds also for bounded-size Achlioptas processes: the n→∞ limit of S_2(G^ℛ_n,tn) diverges at the critical time t_c. Once the existence and location of the phase transition have been established, one typically asks about finer details of the phase transition, in particular about the size of the largest component. For the Bohman–Frieze process this was addressed in an influential paper by Janson and Spencer <cit.>, using a mix of coupling arguments, the theory of inhomogeneous random graphs, and asymptotic analysis of differential equations. They showed that there is a constant c=c^BF>0 such that we whp have linear growth of the form lim_n→∞ L_1(G^BF_n, t_c n + εn)/n = (c+o(1))ε as ε ↘ 0, which resembles the Erdős–Rényi behaviour (where t_c=1/2 and c=4). Using work of the present authors <cit.> and PDE theory, this was extended to certain BF-like rules by Drmota, Kang and Panagiotou <cit.>, but the general case remained open until now. Regarding the asymptotics in (<ref>), note that ε is held fixed as n→∞; only after taking the limit in n do we allow ε → 0. The next questions one typically asks concern the `finite-size scaling', i.e., behaviour as a function of n, usually with a focus on the size of the largest component as ε=ε(n) → 0 at various rates. For the `critical window' ε = λ n^-1/3 (with λ∈ℝ) of bounded-size rules this was resolved by Bhamidi, Budhiraja and Wang <cit.>, using coupling arguments, Aldous' multiplicative coalescent, and inhomogeneous random graph theory. However, the size of the largest component outside this window has surprisingly remained open, despite considerable attention. For example, two papers <cit.> were solely devoted to the study of L_1(G^ℛ_n,i) in the usually easier subcritical phase (i = t_c n - εn with ε^3 n →∞), but both obtained suboptimal upper bounds (a similar remark applies to the susceptibility, see <cit.>). In contrast, there is no rigorous work about L_1(G^ℛ_n,i) in the more interesting weakly supercritical phase (i = t_c n + εn with ε → 0 but ε^3 n →∞), making the size of the largest component perhaps the most important open problem in the context of bounded-size rules. Of course, there are many further questions that one can ask about the phase transition, and here one central theme is: how similar are Achlioptas processes to the Erdős–Rényi reference model? For example, concerning vertices in `small' components of size k, tree counting shows that in the latter model we have N_k(G^ER_n, t_c n ± εn)/n ≈ k^-3/2 e^-(2+o(1))ε^2 k/√(2π) as ε → 0 and k →∞ (ignoring technicalities), where t_c=1/2. Due to the dependencies between the edges explicit formulae are not available for bounded-size rules, which motivates the development of new robust methods that recover the tree-like Erdős–Rényi asymptotics in such more complicated settings. Here Kang, Perkins and Spencer <cit.> presented an interesting PDE-based argument for the Bohman–Frieze process, but this contains an error (see their erratum <cit.>) which does not seem to be fixable. Subsequently, partial results have been proved by Drmota, Kang and Panagiotou <cit.> for a restricted class of BF-like rules. In this paper we answer the percolation questions discussed above for all bounded-size Achlioptas processes, settling a number of open problems and conjectures concerning the phase transition. We first present a simplified version of our main results, writing L_j(i)=L_j(G^ℛ_n,i), S_r(i)=S_r(G^ℛ_n,i) and N_k(i)=N_k(G^ℛ_n,i) to avoid clutter.
In a nutshell, (<ref>)–(<ref>) of Theorem <ref> determine the finite-size scaling behaviour of the largest component, the susceptibility and the small components. Informally speaking, all these key statistics have, up to rule-specific constants, the same asymptotic behaviour as in the Erdős–Rényi process, including the same `critical exponents' (in ER we have t_c=C=1/2, c=4, B_r=(2r-5)!!2^-2r+3, A=1/√(2π) and a=2, see also Remark <ref>). In particular, (<ref>)–(<ref>) show that the unique `giant component' initially grows at a linear rate, as illustrated by Figure <ref>. Let ℛ be a bounded-size rule with critical time t_c = t_c^ℛ > 0 as in (<ref>). There are rule-dependent positive constants a,A,c,C,γ and (B_r)_r ≥ 2 such that the following holds for any ε=ε(n) ≥ 0 satisfying ε → 0 and ε^3 n →∞ as n →∞. -0.75em *(Subcritical phase) For any fixed j ≥ 1 and r ≥ 2, whp we have L_j(t_c n - εn) ∼ C ε^-2 log(ε^3 n), S_r(t_c n - εn) ∼ B_r ε^-2r+3. *(Supercritical phase) Whp we have L_1(t_c n + εn) ∼ cε n, L_2(t_c n + εn) = o(εn). *(Small components) Suppose that k=k(n)≥ 1 and ε=ε(n)≥ 0 satisfy k ≤ n^γ, ε^2 k ≤ γ log n, k→∞ and ε^3 k→ 0. Then whp we have N_k(t_c n ± εn) ∼ A k^-3/2 e^-aε^2 k n. (Here we do not assume that ε^3 n→∞.) In each case, what we actually prove is stronger (e.g., relaxing ε → 0 to ε ≤ ε_0); see Section <ref>. To the best of our knowledge, analogous precise results, giving sharp estimates for the size of the largest component in the entire sub- and super-critical phases (see also Remark <ref>), are known only for the Erdős–Rényi model <cit.>, random regular graphs <cit.>, the configuration model <cit.>, and the (supercritical) hypercube <cit.>. Here and throughout we use the following standard notation for probabilistic asymptotics, where (X_n) is a sequence of random variables and f(n) a function. `X_n∼ f(n) whp' means that there is some δ(n)→ 0 such that whp (1-δ(n))f(n)≤ X_n≤ (1+δ(n))f(n). This is equivalent to X_n=(1+o_p(1))f(n), where in general o_p(f(n)) denotes a quantity that, after dividing by f(n), tends to 0 in probability. Similarly, `X_n=o(f(n)) whp' simply means X_n=o_p(f(n)). `X_n=O_p(f(n))' means that X_n/f(n) is bounded in probability. Finally, we use X = a ± b as shorthand for X ∈ [a-b,a+b].
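To fix ideas, here is a small numerical sanity check of the three asymptotics above in the Erdős–Rényi special case, using the constants t_c=C=1/2, c=4, A=1/√(2π), a=2 quoted above (orders of magnitude only; the choices of n, ε and k are arbitrary):

```python
import math

n, eps = 10**9, 1e-2          # eps^3 * n = 10^3, i.e. outside the critical window
L1_sub = 0.5 * eps**-2 * math.log(eps**3 * n)   # (i):  C eps^-2 log(eps^3 n)
L1_sup = 4.0 * eps * n                          # (ii): c eps n
k = 10**3                                       # eps^2 k = 0.1, eps^3 k = 10^-3
N_k = k**-1.5 * math.exp(-2.0 * eps**2 * k) * n / math.sqrt(2 * math.pi)  # (iii)
print(f"L1 sub ~ {L1_sub:.3g}, L1 sup ~ {L1_sup:.3g}, N_k ~ {N_k:.3g}")
```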
In the supercritical phase i=n/2+ n there is a unique `giant component' of size L_1(G^ER_n,i) ∼ 4 n, whereas all other components are much smaller,similar to (<ref>)–(<ref>).In 1990, Łuczak <cit.> sharpened the assumptions of <cit.> to the optimal condition =(n) → 0 and ^3 n →∞ (also used by Theorem <ref>), thus fully identifying the phase transition picture. Indeed, a separation between the sub- and super-critical phases requires ^-2log(^3n) = o( n), which is equivalent to ^3 n →∞ (see also Remark <ref>).Informally speaking, Theorem <ref> shows that the characteristic Erdős–Rényi features arerobust in the sense that they remain valid for all bounded-size Achlioptas processes.One main novelty of our proof approach is a combinatorial multi-round exposure argument around the critical point .From a technical perspective this allows us to avoid arguments where the process is approximated (in some time interval) by a simpler process, which would introduce various error terms. Such approximations are key in all previous work on this problem <cit.>.Near the critical i ≈ n we are able to track the exact evolution of our bounded-size Achlioptas process (G^_n,i)_i ≥ 0. This more direct control is key for our very precise results, in particular concerning the finite-size scaling behaviour as =(n) → 0. In this context our high-level proof strategy for step i= n +nis roughly as follows:(i) we first track the evolution of (G^_n,i)_0 ≤ i ≤ i_0 up to step i_0=(-σ)n for some tiny constant σ > 0,(ii) we then reveal information about the steps (-σ) n, …, (+) n in two stages (a type of a two-round exposure),and (iii) we analyze the second exposure round using branching process arguments. The key is to find a suitable two-round exposure method in step (ii). Of course, even having found this, since there are dependencies between the edges, the technicalities ofour approach are naturallyquite involved (based on a blend of techniques, including the differential equation method, PDE theory, and branching process analysis);see Section <ref> for a detailed overview of our arguments. So far we have discussed bounded-size rules. One of the first concrete rules suggested was the product rule (PR), where we select the potential edge minimizing the product of the sizes of the components it joins.This rule belongs to the class of size rules, which make their decisions based only on the sizes c_1, …, c_4 of the components containing the endvertices of e_1,e_2 (note that PR is not a bounded-size rule). The original question of Achlioptas from around 2000 was whether one can delay the phase transition beyond ^ER = 1/2 using an appropriate rule, and Bollobás quickly suggested the product rule as most likely to do this.In fact, this question (which with hindsight is not too hard) was answered affirmatively by Bohman and Frieze <cit.> using a much simpler rule (a minor variant of the BF rule).Under the influence of statistical mechanics, the focus quickly shifted from the location of the critical time to the qualitative behaviour of the phase transition (see, e.g., <cit.>).In this context the product rule has received considerable attention;the simulation-based Figure <ref> shows why: for this rule the growth of the largest component seems very abrupt, i.e., much steeper than in the Erdős–Rényi process. 
In fact, based on extensive numerical data, Achlioptas, D'Souza and Spencer conjectured in Science <cit.> that, for the product rule, the size of the largest component whp `jumps' from o(n) to Θ(n) in o(n) steps of the process, a phenomenon known as `explosive percolation'. Although this claim was supported by many papers in the physics literature (see the references in <cit.>), we proved in <cit.> that no Achlioptas process can `jump', i.e., that they all have continuous phase transitions. Nevertheless, the product rule (like other similar rules) still seems to have an extremely steep phase transition; we believe that L_1(G^PR_n, t_c n + εn) ∼ cε^β n for some β∈ (0,1), in contrast to the `linear growth' (<ref>) of bounded-size rules; see also <cit.>. Despite much attention, general size rules have largely remained resistant to rigorous analysis; see <cit.> for some partial results. Our simulations and heuristics <cit.> strongly suggest that L_1(tn)/n can even be nonconvergent in some cases. §.§ Organization The rest of the paper is organized as follows. In Section <ref> we define the class of models that we shall work with, which is more general than that of bounded-size Achlioptas processes. Then we give our detailed results for the size of the largest component, the number of vertices in small components, and the susceptibility (these imply Theorem <ref>). In Section <ref> we give an overview of the proofs, highlighting the key ideas and techniques – the reader mainly interested in the ideas of the proofs may wish to read this section first. In Section <ref> we formally introduce the proof setup, including the two-round exposure, and establish some preparatory results. Sections <ref> and <ref> are the core of the paper; here we relate the component size distribution of G^ℛ_n,i to a certain branching process, and estimate the first two moments of N_k(i). In Section <ref> we then establish our main results for L_1(i), N_k(i) and S_r(i), by exploiting the technical work of Sections <ref>–<ref> and the branching process results proved with Svante Janson in <cit.>. In Section <ref> we discuss some extensions and several open problems. Finally, Appendix <ref> contains some results and calculations that are omitted from the main text, and Appendix <ref> gives a brief glossary of notation. §.§ Acknowledgements The authors thank Costante Bellettini and Luc Nguyen for helpful comments on analytic solutions to PDEs, and Svante Janson for useful feedback on the branching process analysis contained in an earlier version of this paper (based on large deviation arguments using a uniform local limit theorem together with uniform Laplace method estimates). We also thank Joel Spencer for his continued interest and encouragement. § STATEMENT OF THE RESULTS In this section we state our main results in full, and also give further details of the most relevant earlier results for comparison. In informal language, we show that in any bounded-size Achlioptas process, the phase transition `looks like' that in the Erdős–Rényi reference model (with respect to many key statistics). In mathematical physics jargon this loosely says that all bounded-size rules belong to the same `universality class' (while certain constants may differ, the behaviour is essentially the same). In a nutshell, our three main contributions are as follows, always considering an arbitrary bounded-size rule.
(1) We determine the asymptotic size of the largest component in the sub- and super-critical phases, i.e., step i = t_c n ± εn with |ε|^3 n →∞ and |ε| ≤ ε_0 (see Theorems <ref> and <ref>), and show uniqueness of the `giant' component in the supercritical phase. We recover the characteristic Erdős–Rényi features showing, for example, that whp we have L_2(t_c n + εn) ≪ L_1(t_c n + εn) ∼ ρ(t_c+ε) n for some (rule-dependent) analytic function ρ with ρ(t_c+ε) ∼ cε as ε ↘ 0. (2) We determine the whp asymptotics of the number of vertices in components of size k as approximately N_k(t_c n ± εn) ≈ A k^-3/2 e^-(a+o(1))ε^2 k n (see Theorems <ref> and <ref>). Informally speaking, in all bounded-size rules the number of vertices in small components thus exhibits Erdős–Rényi tree-like behaviour, including polynomial decay at criticality (the case ε=0). (3) We determine the ε=ε(n) → 0 whp asymptotics of the subcritical susceptibility as S_r(t_c n - εn) ∼ B_r ε^-2r+3 (see Theorem <ref>). Thus the `critical exponents' associated to the susceptibility are the same for any bounded-size rule as in the Erdős–Rényi case. So far we have largely ignored that Achlioptas processes evolve over time. Indeed, Theorem <ref> deals with the `static behaviour' of some particular step i=i(n), i.e., whp properties of the random graph G^ℛ_n,i. However, we are also (perhaps even more) interested in the `dynamic behaviour' of the evolving graph, i.e., whp properties of the random graph process (G^ℛ_n,i)_i ≥ 0. Our results accommodate this: Theorems <ref>, <ref> and <ref> apply simultaneously to every step outside of the critical window, i.e., every step i = t_c n ± εn with |ε|^3 n →∞ and |ε| ≤ ε_0. When comparing our statements with results for the classical Erdős–Rényi process, the reader should keep in mind that G^ER_n,i corresponds to the uniform random graph model with i edges. In particular, results for step i=n/2 ± εn should be compared to the binomial model G_n,p with edge probability p=(1±2ε)/n. Occasionally we write a_n ≪ b_n for a_n = o(b_n), and a_n ≫ b_n for a_n = ω(b_n). For I⊆ℝ, we say that a function f:I→ℝ is (real) analytic if for every x_0∈ I there is an r>0 and a power series g(x)=∑_j≥ 0 a_j (x-x_0)^j with radius of convergence at least r such that f and g coincide on (x_0-r,x_0+r)∩ I. This implies that f is infinitely differentiable, but not vice versa. A function f defined on some domain including I is (real) analytic on I if f|_I is analytic. The definition for functions of several variables is analogous. §.§ Bounded-size ℓ-vertex rules All our results apply to (the bounded-size case of) a class of processes that generalize Achlioptas processes. As in <cit.> we call these ℓ-vertex rules. Informally, in each step we sample ℓ random vertices (instead of two random edges), and according to some rule ℛ we then add one of the ℓ(ℓ-1)/2 possible edges between them to the evolving graph. Formally, an ℓ-vertex size rule ℛ yields for each n a random sequence (G^ℛ_n,i)_i≥ 0 of graphs with vertex set [n]={1, …, n}, as follows. G^ℛ_n,0 is the empty graph with no edges. In each step i≥ 1 we draw ℓ vertices 𝐯_i=(v_i,1,…,v_i,ℓ) from [n] independently and uniformly at random, and then, writing 𝐜_i=(c_i,1,…,c_i,ℓ) for the sizes of the components containing v_i,1,…,v_i,ℓ in G^ℛ_n,i-1, we obtain G^ℛ_n,i by adding the edge v_i,j_1v_i,j_2 to G^ℛ_n,i-1, where the rule ℛ deterministically selects the edge (between the vertices in 𝐯_i) based only on the component sizes. Thus we may think of ℛ as a function 𝐜_i ↦ {j_1,j_2} from ℕ^ℓ to the set of unordered pairs {j_1,j_2} ⊆ [ℓ]. Note that G^ℛ_n,i may contain loops and multiple edges; formally, it is a multigraph.
However, there will be rather few of these and they do not affect the component structure, so the reader will lose nothing thinking of G^ℛ_n,i as a simple graph. As the reader can guess, a bounded-size ℓ-vertex rule ℛ with cut-off K is then an ℓ-vertex size rule where all component sizes larger than K are treated in the same way. Following the literature (and to avoid clutter in the proofs), we introduce the convention that a component has size ω if it has size at least K+1. We define the set Ω = Ω_K := {1, …, K, ω} of all `observable' component sizes. Thus any bounded-size rule ℛ with cut-off K corresponds to a function from Ω_K^ℓ to the set of unordered pairs {j_1,j_2} ⊆ [ℓ]. Of course, ℛ is a bounded-size ℓ-vertex rule if it satisfies the definition above for some K. For the purpose of this paper, results for ℓ-vertex rules routinely transfer to processes with small variations in the definition (since we only consider at most, say, 9n steps, exploiting that t_c ≤ 1 by <cit.>). As in <cit.> this includes, for example, each time picking an ℓ-tuple of distinct vertices, or picking (the ends of) ℓ/2 randomly selected (distinct) edges not already present. We thus recover the original Achlioptas processes as 4-vertex rules where ℛ always selects one of the pairs e_i,1={v_i,1,v_i,2} and e_i,2={v_i,3,v_i,4}. Since we are aiming for strong results here, one has to be a little careful with this reduction; an explicit argument is given in Appendix <ref>. In the results below, a number of rule-dependent constants and functions appear. To avoid repetition, we briefly describe the key ones here. Firstly, for each rule ℛ there is a set 𝒮 ⊆ ℕ^+ of component sizes that can be produced by the rule. For large enough k, we have k∈𝒮 if and only if k is a multiple of the period 𝔭 of the rule; see Section <ref>. For all Achlioptas processes, 𝒮=ℕ^+ and 𝔭=1, so the indicator functions 1_k∈𝒮 appearing in many results play no role in this case. Secondly, for each rule ℛ there is a function ψ(t)=ψ^ℛ(t) describing the exponential rate of decay of the component size distribution at time t (step tn), as in Theorem <ref>. This function is (real) analytic on a neighbourhood (t_c-ε_0, t_c+ε_0) of t_c, with ψ(t_c)=ψ'(t_c)=0 and ψ''(t_c) > 0. Hence ψ(t_c±ε)=Θ(ε^2) as ε → 0. §.§ Size of the largest component In this subsection we discuss our results for the size of the largest component in bounded-size rules, which are much in the spirit of the pioneering work of Bollobás <cit.> and Łuczak <cit.> for the Erdős–Rényi model. Here Theorem <ref> is perhaps our most important single result: it establishes the asymptotics of L_1(i)=L_1(G^ℛ_n,i) in the sub- and super-critical phases (i.e., all steps i = t_c n ± εn with |ε|^3 n →∞ and |ε| ≤ ε_0). These asymptotics are as in the Erdős–Rényi case, up to rule-specific constants.
One of the most basic questions about the phase transition in any random graph model is: how large is the largest component just after the transition? From the general continuity results of <cit.> it follows for bounded-size rules that L_1(tn)/n converges to a deterministic `scaling limit' (see also <cit.>). More concretely, there exists a continuous function ρ=ρ^ℛ:[0,∞) → [0,1] such that for each t ≥ 0 we have L_1(tn)/n →_p ρ(t), where →_p denotes convergence in probability (i.e., for every η>0 whp we have |L_1(tn)/n-ρ(t)| ≤ η). In fact, it also follows that ρ(t) = 0 for t ≤ t_c and ρ(t)>0 otherwise, see <cit.>. Of course, due to our interest in the size of the largest component, this raises the natural question: what are the asymptotics of the scaling limit ρ=ρ^ℛ? For some bounded-size rules, Janson and Spencer <cit.> and Drmota, Kang and Panagiotou <cit.> showed that ρ(t_c+ε) ∼ cε as ε ↘ 0, where c=c^ℛ>0. However, for Erdős–Rényi random graphs much stronger properties are known: ρ=ρ^ER is analytic on [1/2,∞). In particular, it has a power series expansion of the form ρ^ER(1/2+ε) = 4ε + ∑_j ≥ 2 a_j ε^j for ε ≥ 0 having positive radius of convergence (in fact, 1-ρ(t)=e^-2tρ(t)). Our next theorem shows that all bounded-size rules have these typical Erdős–Rényi properties (up to rule specific constants), confirming natural conjectures of Janson and Spencer <cit.> and Borgs and Spencer <cit.> (and the folklore conjecture that ρ(t_c+ε) ∼ cε, i.e., that the giant component initially grows at a linear rate). Let ℛ be a bounded-size ℓ-vertex rule. Let the critical time t_c>0 and the function ρ=ρ^ℛ be as in (<ref>) and (<ref>). Then the function ρ is analytic on [t_c, t_c+δ) for some δ>0, with ρ(t_c)=0 and the right derivative of ρ at t_c strictly positive. In particular, there are constants ε_0 > 0 and (a_j)_j ≥ 1 with a_1>0 such that for all ε ∈ [0,ε_0] we have ρ(t_c+ε) = ∑_j ≥ 1 a_j ε^j. Informally, (<ref>) and (<ref>) show that the initial growth of the largest component is linear for any bounded-size rule, i.e., roughly that L_1(t_c n + εn) ≈ cεn for some rule-dependent constant c=a_1>0 (see also Figure <ref>). The proof shows that for t ∈ [t_c-ε_0, t_c+ε_0] we have ρ(t)=ℙ(|𝔛_t|=∞) for a certain branching process 𝔛_t defined in Section <ref>. The convergent power series expansion (<ref>) improves and extends results of Janson and Spencer <cit.> for the Bohman–Frieze process (with a O(ε^4/3) second order error term), and one of the main results of Drmota, Kang and Panagiotou <cit.> for a restricted class of bounded-size rules (they establish ρ(t_c+ε) = cε + O(ε^2) for BF-like rules). Theorem <ref> also shows that ρ'(t) is discontinuous at t_c (recall that ρ(t)=0 for t ≤ t_c), which in mathematical physics is a key feature of a `second order' phase transition. Unfortunately, the convergence result (<ref>) tells us very little about the size of the largest component just before the phase transition (recall that ρ(t)=0 for t ≤ t_c). For the Erdős–Rényi process it is well-known that in the subcritical phase, for any j ≥ 1 we roughly have L_j(n/2 - εn) = Θ(ε^-2 log(ε^3 n)) whenever ε^3 n →∞, see, e.g., <cit.>. For bounded-size rules there are several partial results <cit.> for L_1(t_c n - εn), but none are as strong as the aforementioned Bollobás–Łuczak results from <cit.>. For the subcritical phase, our next theorem establishes the full Erdős–Rényi-type behaviour for all bounded-size rules (in a strong form). Theorem <ref> confirms a conjecture of Kang, Perkins and Spencer <cit.>, and resolves a problem of Bhamidi, Budhiraja and Wang <cit.>, both concerning upper bounds of the form L_1(t_c n - εn) ≤ D ε^-2 log n.
Let ℛ be a bounded-size ℓ-vertex rule with critical time t_c>0 as in (<ref>). There is a constant ε_0>0 such that the following holds for any integer r ≥ 1. For any i=i(n) ≥ 0 such that ε=t_c-i/n satisfies ε ∈ (0,ε_0) and ε^3 n →∞ as n →∞, we have L_r(i) = ψ(t_c-ε)^-1 (log(ε^3 n) - (5/2) loglog(ε^3 n) + O_p(1)), where the rule-dependent function ψ(t) is as in Theorem <ref> below. Here, as usual, X_n=O_p(1) means that for any δ > 0 there are C_δ,n_0>0 such that ℙ(|X_n| ≤ C_δ) ≥ 1-δ for n ≥ n_0. See Remark <ref> for the interpretation of the function ψ. Since ψ(t_c-ε)=Θ(ε^2), this result shows in crude terms that the largest O(1) components have around the same size L_r(t_c n - εn) ≈ L_1(t_c n - εn) ≈ aε^-2 log(ε^3 n) for some rule-dependent constant a >0. Theorem <ref> is best possible: the assumption ε^3 n →∞ cannot be relaxed – inside the critical window the sizes of the largest components are not concentrated, see <cit.>. Furthermore, as discussed in <cit.>, the O_p(1) error term is sharp in the Erdős–Rényi case (where t_c=1/2 and ψ(t_c-ε) ∼ 2ε^2 as ε → 0). Theorem <ref> improves several previous results on the size of the largest subcritical component due to Wormald and Spencer <cit.>, Kang, Perkins and Spencer <cit.>, Bhamidi, Budhiraja and Wang <cit.> and Sen <cit.>. Most notably, in the weakly subcritical phase i = t_c n - εn with ε=ε(n) → 0 the main results of <cit.> are as follows: for bounded-size rules <cit.> establishes bounds of the form L_1(i) ≤ D ε^-2 (log n)^4 for ε=ε(n) ≥ n^-1/4, which in <cit.> was sharpened for the Bohman–Frieze rule to L_1(i) ≤ D ε^-2 log n for ε=ε(n) ≥ n^-1/3. The key difference is that (<ref>) provides matching bounds (the harder lower bounds were missing in previous work) all the way down to the critical window. Here the difference between log n and log(ε^3 n) matters. For the supercritical phase, (<ref>) and Theorem <ref> apply only for fixed ε > 0, showing roughly that L_1(t_c n + εn) ≈ ρ(t_c+ε) n. Of course, it is much more interesting to `zoom in' on the critical point t_c, and study the size of the largest component when ε=ε(n) → 0. Despite being very prominent and interesting in the classical Erdős–Rényi model, see, e.g., <cit.>, the weakly supercritical phase of bounded-size rules has remained resistant to rigorous analysis for more than a decade. The following theorem closes this gap in our understanding of the phase transition, establishing the typical Erdős–Rényi characteristics for all bounded-size rules. Informally, (<ref>)–(<ref>) state that (whp) we have L_2(t_c n + εn) ≪ L_1(t_c n + εn) ≈ ρ(t_c+ε) n whenever ε=ε(n) satisfies ε^3 n→∞ and 0 < ε ≤ ε_0 (note that τ→ 0 as n →∞). In particular, in view of (<ref>) this means that, in all bounded-size rules, just after the critical window the unique `giant component' already grows with linear rate (i.e., that whp L_1(t_c n + εn) ≈ cεn, see Figure <ref>). Let ℛ be a bounded-size ℓ-vertex rule. Let the critical time t_c>0 and the functions ρ,ψ be as in (<ref>) and Theorems <ref> and <ref>. There is a constant ε_0>0 such that the following holds for any function ω=ω(n) with ω→∞ as n →∞ [As usual we assume ω>0, but we do not write this as it is not formally needed: any statements involving ω that we prove are asymptotic (for example any `whp' statement), and so we need consider only n large enough that ω(n)>0.] and any fixed r≥ 1. Setting τ=τ(n):=(logω)^-1/2, whp the following inequalities hold in all steps i=i(n) ≥ 0. -0.75em * (Subcritical phase) If ε=t_c-i/n satisfies ε^3 n ≥ω and ε ≤ ε_0, then L_r(i) = (1±τ) ψ(t_c-ε)^-1 log(ε^3 n). * (Supercritical phase) If ε=i/n-t_c satisfies ε^3 n ≥ω and ε ≤ ε_0, then we have L_1(i) = (1 ±τ) ρ(t_c+ε) n, L_2(i) ≤ τ L_1(i).
Note that we consider step i = t_c n - εn in (<ref>) and step i = t_c n + εn in (<ref>)–(<ref>). This parametrization may look strange, but it allows us to conveniently make whp statements about the random graph process (G^ℛ_n,i)_i ≥ 0. Indeed, (<ref>)–(<ref>) whp hold simultaneously in every step outside of the critical window, i.e., every step i = t_c n ± εn with |ε|^3 n →∞ and |ε| ≤ ε_0. This is much stronger than a whp statement for some particular step i=i(n), i.e., for the random graph G^ℛ_n,i. For this reason, the subcritical part of Theorem <ref> does not follow from Theorem <ref>. With this discussion in mind, one can argue that Theorem <ref> describes the `dynamic behaviour' of the phase transition in bounded-size Achlioptas processes. §.§ Small components During the last decade, a widely used heuristic for many `mean-field' random graph models is that most `small' components are trees (or tree-like). The rigorous foundations of this heuristic can ultimately be traced back to the classical Erdős–Rényi model, where it has been key in the discovery (and study) of the phase transition phenomenon, see <cit.>. As there are explicit counting formulae for trees, by exploiting the (approximate) independence between the edges this easily gives the asymptotics of N_k(t_c n ± εn) in the Erdős–Rényi process, see (<ref>). For bounded-size rules, the classical tree-counting approach breaks down due to the dependencies between the edges. However, Spencer and Wormald <cit.> already observed around 2001 that N_k(tn)=N_k(G^ℛ_n,tn) can be approximated via the differential equation method <cit.>; their proof implicitly exploits that the main contribution again comes from trees, see also <cit.>. In particular, for the more general class of bounded-size ℓ-vertex rules it is nowadays routine to prove that for any t ∈ [0,∞) and k ≥ 1 we have N_k(tn)/n →_p ρ_k(t), where the functions ρ_k=ρ_k^ℛ:[0,∞) → [0,1] are the unique solution of an associated system of differential equations (ρ'_k depends only on ρ_j with 1 ≤ j ≤ max{k,K}, see Lemmas <ref> and <ref>). In fact, a byproduct of <cit.> is that the ρ_k(t) have exponential decay of the form ρ_k(t) ≤ A_t e^-a_t k for t < t_c, with a_t,A_t>0. To sum up, in view of (<ref>) and (<ref>) the precise asymptotics of ρ_k(t_c ± ε) is an interesting problem (for the special case ε=0 this was asked by Spencer and Wormald as early as 2001, see <cit.>). This requires the development of new proof techniques, which recover the Erdős–Rényi tree-asymptotics in random graph models with dependencies. This challenging direction of research was pursued by Kang, Perkins and Spencer <cit.> and Drmota, Kang and Panagiotou <cit.>, who obtained some partial results for bounded-size rules, using PDE-theory and an auxiliary result from <cit.>. However, they only recovered the exponential rate of decay (i.e., that log(ρ_k(t_c±ε)) ≈ -(a+o(1))ε^2 k for small ε and large k) for a restricted class of rules which are Bohman–Frieze like. We sidestep both shortcomings by directly relating ρ_k(t) to an associated branching process, see Remark <ref>. Indeed, the next theorem completely resolves the asymptotic behaviour of ρ_k(t) for all bounded-size ℓ-vertex rules. Note that below we have ψ(t_c±ε) = aε^2 + O(ε^3) and θ(t_c±ε) = A + O(ε) for rule-dependent constants a,A>0, so (<ref>) qualitatively recovers the full Erdős–Rényi tree-like behaviour of (<ref>).
A more quantitative informal summary of (<ref>) and (<ref>) is that whp N_k(t_c n ± εn) ≈ A k^-3/2 e^-(a+o(1))ε^2 k n for large k and small ε (ignoring technicalities). Let ℛ be a bounded-size ℓ-vertex rule. Let the critical time t_c>0 and the functions (ρ_k)_k ≥ 1 be as in (<ref>) and (<ref>), and the set 𝒮 of reachable component sizes as in (<ref>). There exist a constant ε_0>0 and non-negative analytic functions θ(t) and ψ(t) on I=[t_c-ε_0, t_c+ε_0] such that ρ_k(t) = (1+O(1/k)) 1_k∈𝒮 k^-3/2 θ(t) e^-ψ(t) k, uniformly in k≥ 1 and t∈ I, with θ(t_c),ψ''(t_c) > 0 and ψ(t_c)=ψ'(t_c)=0. Furthermore, ρ_k(t) ∈ [0,1] and ∑_k ≥ 1 ρ_k(t) + ρ(t)=1 for t ∈ I and ρ as in (<ref>) and Theorem <ref>. There is a constant B >0 such that ∑_j ≥ k ρ_j(t_c) = (1+O(1/k)) B k^-1/2 for all k ≥ 1. The proof shows that for t ∈ I we have ρ_k(t)=ℙ(|𝔛_t|=k) for a certain branching process 𝔛_t defined in Section <ref>. The above multiplicative error 1+O(1/k) is best possible for Erdős–Rényi (where t_c=1/2, ψ(t)=-log(2te^(1-2t))=2t-1-log(2t) and θ(t_c)=1/√(2π), so that ψ(1/2±ε) ∼ 2ε^2 and θ(1/2±ε) ∼ 1/√(2π) as ε → 0). Moreover, the detailed asymptotics of (<ref>) resolves conjectures of Kang, Perkins and Spencer <cit.> and Drmota, Kang and Panagiotou <cit.>. The indicator 1_k∈𝒮 may look somewhat puzzling; its presence is due to the generality of ℓ-vertex rules – see Remark <ref> and Section <ref>. In the Achlioptas process case we have 𝒮=ℕ^+, i.e., all component sizes are possible, and so the indicator may be omitted. Although (<ref>) is very satisfactory for the `idealized' component size distribution (ρ_k)_k ≥ 1, we cannot simply combine it with (<ref>) to obtain the results we would like for the component size distribution (N_k)_k ≥ 1 of the random graph process (G^ℛ_n,i)_i ≥ 0, which is of course our main object of interest. The problem is that (<ref>) only applies for k=O(1) and fixed t = t_c ± ε, whereas we would like to consider k →∞ and ε → 0. In other words, we would like variants of (<ref>) which allow us (i) to study large component sizes with k = k(n) →∞, and (ii) to `zoom in' on the critical point t_c, i.e., study t=t_c±ε with ε=ε(n) → 0. The next theorem accommodates both features: it shows that N_k(i) ∼ ρ_k(i/n) n holds for a wide range of sizes k and steps i. Note that there is some γ = γ(β,a,ε_0)>0 such that the assumptions on k below, and hence (<ref>)–(<ref>), hold for any 1 ≤ k ≤ γ log n, with the allowed range of k increasing as ε=ε(n) → 0. (Aiming at simplicity, here we have not tried to optimize the range; see also Theorem <ref>, Corollary <ref> and Section <ref>. Note that we allow for ε=0.)
Indeed, whp, for all steps (-_0) n ≤ i ≤ (+_0) n and sizes 1 ≤ k ≤ n^1/10satisfying 10a( -i/n)^2k ≤log n, wehave[In this and similar formulae, the implicit constant is uniform over the choice of i=i(n) and k=k(n).]N_k(i) = (1+O(1/k)) k ∈ k^-3/2θ(i/n) e^-ψ(i/n) k n .Furthermore, combining (<ref>) with Remark <ref>,we see that, whp, for all 1 ≤ k ≤ n^1/10 we haveN_≥ k( n) = (1+O(1/k)) B k^-1/2 n .Thus, at criticalilty we have polynomial decay of the tail of the component size distribution, which is a prominent hallmark of the critical window.For bounded-size rules the Bk^-1/2n asymptotics of (<ref>) answers a question of Spencer and Wormald from 2001, see <cit.>.§.§ Susceptibility The susceptibility S_2(tn) is a key statistic of the phase transition, which has been widely studied in a range of random graph models (see, e.g. <cit.>). For example, in classical percolation theory the critical density coincides with the point where (the infinite analogue of) the susceptibility diverges, and in the Erdős–Rényi process it is folklore thatfor t < ^ER=1/2 we have S_2(G^ER_n,tn) 1/1-2t. More importantly, in the context of bounded-size Achlioptas processes the location  of the phase transition is determined by the critical time where the susceptibility diverges, see <cit.>.This characterization is somewhat intuitive, since S_2(tn)=S_2(G^_n,tn) is the expected size of the component containing a randomly chosen vertex from G^_n,tn, see (<ref>).Of course, since L_1(G)^2/n ≤ S_2(G)≤ L_1(G), bounds on one of L_1(i) and S_2(i) imply bounds on the other. (For example, S_2(i) = O(1) implies L_1(i) = O(√(n)) = o(n).) However, one only obtains weak results this way; proving that whp L_1(tn)=Ω(n) after the point at which S_2(tn) blows up is far from trivial. Turning to the susceptibility in bounded-size rules, using the differential equation method <cit.> and ideas from <cit.> it is nowadays routine to prove that for each t ∈ [0,) and r ≥ 2 we haveS_r(tn)s_r(t) ,where the functions s_r=s_r^:[0,) → [1,∞) are the unique solution of a certain system of differential equations (involving also ρ_1,…,ρ_K),with lim_t ↗ s_r(t) = ∞. (Recall that S_r+1 denotes the rth moment of the size of the component containing a random vertex.) Motivated by `critical exponents' from percolation theory and statical physics, the focus has thus shifted towards the finer behaviour of the susceptibility, i.e., the question at what rate s_r(-) blows up as ↘ 0 (in the Erdős–Rényi case we have s_2(-) ∼ (2)^-1, see (<ref>) and <cit.>).Using asymptotic analysis of differential equations, Janson and Spencer <cit.> determined the scaling behaviour of s_2, s_3 and s_4 for the Bohman–Frieze rule.For s_2 and s_3 their argument was generalized by Bhamidi, Budhiraja and Wang <cit.> to all bounded-size rules.Based on branching process arguments,the next theorem establishes the asymptotic behaviour of s_r for any r ≥ 2 (for the larger class of bounded-size ℓ-vertex rules). To avoid clutter below, we adopt the convention that the double factorial x!!=∏_0 ≤ j < x/2(x-2j) is equal to 1 for x ≤ 0. Recall from Remark <ref> that for an Achlioptas processes =1.Letbe a bounded-size ℓ-vertex rule. Let the critical time >0 and the functions (s_r)_r ≥ 2 be as in (<ref>) and (<ref>).Let B_r := (2r-5)!! ·√(2π)θ()/·ψ”()^-r+3/2 ,where ≥ 1 is defined in Section <ref>, and the functions θ(t) and ψ(t) are as in Theorem <ref>. 
Then there exists a constant ε_0 > 0 such that, for all r ≥ 2 and ε ∈ (0, ε_0), we have B_r > 0 and
s_r(t_c - ε) = (1+O(ε)) B_r ε^{-2r+3}.

In the language of mathematical physics, (<ref>) and (<ref>) loosely say that, as ε ↘ 0, all bounded-size rules have the same susceptibility-related `critical exponents' as the Erdős–Rényi process (where the constant is B_r = (2r-5)!! 2^{-2r+3}, since θ(t_c) = 1/√(2π) and ψ''(t_c) = 4 by folklore results). The proof shows that for t ∈ [t_c - ε_0, t_c) we have s_r(t) = 𝔼|X_t|^{r-1} for a certain branching process X_t defined in Section <ref>.

Next we `zoom in' on the critical point t_c, i.e., discuss the behaviour of the susceptibility S_r(t_c n - εn) when ε = ε(n) → 0. Here the subcritical phase in the Erdős–Rényi case was resolved by Janson and Luczak <cit.>, using martingale arguments, differential equations and correlation inequalities. For bounded-size rules Bhamidi, Budhiraja and Wang <cit.> used martingale arguments and the differential equation method to prove results covering only part of the subcritical phase. In particular, for i = t_c n - εn with ε = ε(n) → 0 their results apply only to S_2(i) and S_3(i), and only in the restricted range ε ≥ n^{-1/5}. Using very different methods, the next theorem resolves the scaling behaviour of the susceptibility S_r(i) in the entire subcritical phase. In particular, our result applies for any r ≥ 2 all the way up to the critical window, i.e., we only assume ε^3 n → ∞. Note that γ_{r,n,ε} = o(1) when ε = o(1), so (<ref>) intuitively states that whp S_r(t_c n - εn) ≈ B_r ε^{-2r+3}.

Let ℛ be a bounded-size ℓ-vertex rule with critical time t_c > 0 as in (<ref>), and define B_r > 0 as in (<ref>). There are positive constants ε_0 > 0 and (A_r)_{r≥2} such that the following holds for any function ω = ω(n) with ω → ∞ as n → ∞. For any integer r ≥ 2, whp
S_r(i) = (1 ± γ_{r,n,ε}) B_r ε^{-2r+3}
holds in all steps i = i(n) ≥ 0 such that ε = t_c - i/n satisfies ε^3 n ≥ ω and ε ≤ ε_0, where γ_{r,n,ε} := A_r(ε + (ε^3 n)^{-1/4}).

The assumption r ≥ 2 cannot be relaxed, since S_1(i) = 1 holds deterministically, cf. (<ref>). In (<ref>) we have not tried to optimize the error term for ε = Θ(1), since our main interest concerns the ε ↘ 0 behaviour. The supercritical scaling of the susceptibility is less informative and interesting, since S_2(i) is typically dominated by the contribution from the largest component. In particular, for i = t_c n + εn with ε^3 n → ∞ we believe that whp S_2(i) ∼ L_1(i)^2/n for any bounded-size rule (for fixed ε ∈ (0, ε_0) this follows from Theorem <ref>), but we have not investigated this.

§ PROOF OVERVIEW

In this section we give an overview of the proof, with an emphasis on the structure of the argument. Loosely speaking, one of the key difficulties is that there are non-trivial dependencies between the choices in different rounds. To illustrate this, let us do the following thought experiment. Suppose that we change the vertices offered to the rule at one step, and as a consequence, the rule adds a different edge to the graph. This results in a graph with different component sizes. Hence, whenever the process samples vertices from these components in subsequent steps, the rule is presented with different component sizes. This may alter the decision of the rule, and hence the edge added, which can change further subsequent decisions, and so on. In other words, changes can propagate throughout the evolution of the process, which makes the analysis challenging.
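As a concrete illustration of such a process (not used anywhere in the proofs), the following minimal Python sketch simulates the Bohman–Frieze rule – the prototypical bounded-size rule with ℓ = 4 and K = 1, which joins the first offered pair if both of its vertices are isolated and the second pair otherwise – using a union–find structure; all names and parameter values below are ours, not from the paper.

```python
import random

def bohman_frieze(n, steps, seed=0):
    """Simulate the Bohman-Frieze rule: in each step four random vertices
    (v1,v2,v3,v4) are offered; the edge v1v2 is added if both v1 and v2 lie
    in singleton components, and otherwise the edge v3v4 is added."""
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n  # component sizes, valid at the roots

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            if size[rx] < size[ry]:
                rx, ry = ry, rx
            parent[ry] = rx
            size[rx] += size[ry]

    for _ in range(steps):
        v = [rng.randrange(n) for _ in range(4)]
        if size[find(v[0])] == 1 and size[find(v[1])] == 1:
            union(v[0], v[1])  # the rule only inspects capped component sizes
        else:
            union(v[2], v[3])
    return sorted((size[r] for r in range(n) if find(r) == r), reverse=True)

# Run close to the rule's critical time (numerically about 0.977):
comps = bohman_frieze(n=200_000, steps=int(0.977 * 200_000))
print("L_1 =", comps[0], "  second largest =", comps[1])
```

Re-running such a simulation with a single offered tuple altered typically yields a different component structure from that step onward, which is exactly the propagation effect described above.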
For bounded-size rules we overcome this difficulty via the following high-level proof strategy. First, we track the evolution of the entire component size distribution during the initial i_0 = (t_c - σ)n steps, where σ > 0 is a small constant. Second, using the graph after i_0 steps as an anchor, for i_1 = (t_c + σ)n we reveal information about the steps i_0, …, i_1 via a two-round exposure argument (not the classical multi-round exposure used in random graph theory). We engineer this two-round exposure in a way that eventually allows us to analyze the component size distribution in steps i_0 ≤ i ≤ i_1 via a neighbourhood exploration process which closely mimics a branching process. Intuitively, this allows us to reduce most questions about the component size distribution to questions about certain branching processes. These branching processes are not of a standard form, but we are nevertheless able to analyze them (with some technical effort). This close coupling with a branching process is what allows us to obtain such precise results. In this argument the restriction to bounded-size rules is crucial, see Sections <ref> and <ref>.

In the following subsections we further expand on the above ideas, still ignoring a number of technical details and difficulties. In Section <ref> we outline our setup and the two-round exposure argument. Next, in Section <ref> we explain the analysis of the component size distribution via exploration and branching processes. Finally, in Section <ref> we turn to the key statistics L_1(i), N_k(i) and S_r(i), and briefly discuss how we eventually adapt approaches used to study the Erdős–Rényi model to bounded-size Achlioptas processes.

§.§ Setup and two-round exposure

In this subsection we discuss the main ideas used in our two-round exposure; see Section <ref> for the technical details. Throughout we fix a bounded-size ℓ-vertex rule ℛ with cut-off K (as defined in Section <ref>). Using the methods of <cit.>, we start by tracking the evolution of the entire component size distribution up to step i_0 = (t_c - σ)n. More precisely, we show that the numbers N_k(i_0) of vertices in components of size k can be approximated by deterministic functions (see Theorem <ref> and Lemma <ref>). Conditioning on the graph G_{i_0} = G^ℛ_{n,i_0} after i_0 steps, i.e., regarding it as given, we shall reveal information about steps i_0+1, …, i_1 = (t_c + σ)n in two rounds. We assume (as we may, since the variables N_k(i_0) are concentrated) that each N_k(i_0) is close to its expectation.

We partition the vertex set of G_{i_0} into V_S ∪ V_L, where V_S contains all vertices that in the graph G_{i_0} are in components of size at most K (the labels S and L refer to `small' and `large' component sizes). Note that in any later step i ≥ i_0, since G_i ⊇ G_{i_0}, every vertex v ∈ V_L is in a component of G_i = G^ℛ_{n,i} with size larger than K, i.e., with size ω as far as the rule ℛ is concerned. Hence, when a vertex v in V_L is offered to ℛ, in order to know the decision made by ℛ we do not need to know which vertex in V_L we are considering – as far as ℛ is concerned, all such vertices have the same component size ω. In our first exposure round we reveal everything about the vertices offered to ℛ in all steps i_0 < i ≤ i_1, except that whenever a vertex in V_L is chosen, we do not reveal which vertex it is; as just observed, this information tells us what decisions ℛ will make.
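The bounded-size property is what makes this work: the rule's decision is a function of the capped size vector alone. The following toy sketch (ours, purely illustrative – the helper names and the example rule are not from the paper) shows what the first round reveals about a single step, and why that suffices to compute the decision:

```python
OMEGA = "omega"  # stands for any component size exceeding the cut-off K

def round_one_view(offered, in_VL, comp_size, K):
    """Information revealed about one step in the first exposure round: for a
    vertex in V_L only its membership of V_L (hence size OMEGA) is revealed,
    while for a vertex in V_S the vertex itself is revealed, so its current
    component size (capped at OMEGA once it exceeds K) is known."""
    view = []
    for v in offered:
        if in_VL[v]:
            view.append(OMEGA)
        else:
            s = comp_size(v)
            view.append(s if s <= K else OMEGA)
    return tuple(view)

def toy_rule(view):
    """A toy bounded-size rule with l = 4: join the first offered pair if both
    vertices lie in singleton components, otherwise the second pair. By
    definition it depends only on the capped sizes in `view`."""
    return (0, 1) if view[0] == view[1] == 1 else (2, 3)

# Vertices 0,1 in V_S (components of sizes 1 and 2), vertices 7,9 in V_L:
in_VL = {0: False, 1: False, 7: True, 9: True}
view = round_one_view((0, 1, 7, 9), in_VL, {0: 1, 1: 2}.get, K=2)
print(view, "->", toy_rule(view))  # which V_L vertices were drawn is irrelevant
```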
The first exposure round thus allows us to track: (i) the edges added inside V_S, (ii) the V_S–endvertices of the edges added connecting V_S to V_L, and (iii) the number of edges added inside V_L. (Formally this can be done via the differential equation method <cit.> and branching process techniques, see Sections <ref>–<ref>; note that (i)–(ii) track the evolution of the `V_S-graph' beyond the critical time t_c.) After this first exposure round we have revealed a subgraph H_i of G_i (called the `partial graph' in Figure <ref>), consisting of all edges in G_{i_0}, together with all edges added in steps between i_0 and i with both ends in V_S. Furthermore, we know that G_i consists of H_i with certain edges added: a known number of V_S–V_L edges whose endpoints in V_S are known, and a known number of V_L–V_L edges.

In the second exposure round the vertices in V_L (corresponding to (ii) and (iii) above) are now chosen independently and uniformly at random from V_L; see the proof of Lemma <ref> for the full details. Hence, after conditioning on the outcome of the first exposure round, the construction of G_i from the `partial graph' H_i described above has a very simple form (see Figure <ref> and Lemma <ref>). Indeed, for each V_S–V_L edge the so-far unknown V_L–endpoint is replaced with a uniformly chosen random vertex from V_L. Furthermore, we add a known number of uniformly chosen random edges to V_L. This setup, consisting of many independent uniform random choices, is ideal for neighbourhood exploration and branching process techniques.

§.§ Component size distribution

To get a handle on the component size distribution of the graph G_i = G^ℛ_{n,i} after i_0 < i ≤ i_1 steps, we use neighbourhood exploration arguments to analyze the second exposure round described above. As usual, we start with a random vertex v ∈ V_S ∪ V_L, and iteratively explore its neighbourhoods. Suppose for the moment that v ∈ V_L. Recalling the construction of G_i from the partial graph H_i, any vertex w ∈ V_L has neighbours in V_L and V_S, which arise (a) via random V_L–V_L edges and (b) via V_S–V_L edges with random V_L–endpoints. Furthermore, each of the adjacent V_S–components found in (b) potentially yields further V_L–neighbours via V_S–V_L edges. Repeating this exploration iteratively, we eventually uncover the entire component of G_i which contains the initial vertex v.

Treating (a) and (b) together as a single step, each time we `explore' a vertex in V_L we reach a random number of new vertices in V_L, picking up a random number of vertices in V_S along the way. As long as we have not used up too many vertices, the sequence of pairs (Ŷ_j, Ẑ_j) giving the numbers of V_L and V_S vertices found in the jth step will be close to a sequence of independent copies of some distribution (Y_t, Z_t) that depends on the `time' t = i/n. We thus expect the neighbourhood exploration process to closely resemble a two-type branching process with offspring distribution (Y_t, Z_t), corresponding to V_L and V_S vertices. In this branching process, vertices in V_S have no children (they are counted `in the middle' of a step). Of course, we need to modify the start of the process to account for the possibility that the initial vertex is in V_S. Writing X_t for the (final, modified) branching process, it should seem plausible that the expected numbers of vertices in components of size k and in components of size at least k approximately satisfy
N_k(tn) ≈ ℙ(|X_t| = k) n and N_{≥k}(tn) ≈ ℙ(|X_t| ≥ k) n,
ignoring technicalities (see Sections <ref>–<ref> and <ref> for the details).
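Since (Y_t, Z_t) is only implicitly determined by the process, the following Monte Carlo sketch (ours; the independent Poisson offspring is an arbitrary stand-in for the actual distribution, and all parameter values are placeholders) merely illustrates how ℙ(|X_t| = k) could be estimated for a given offspring law, with V_S-type particles childless:

```python
import math
import random

def total_progeny(mu, nu, cap, rng):
    """Total population |X| of a two-type branching process: each V_L-type
    particle has Poisson(mu) V_L-type and Poisson(nu) V_S-type children,
    V_S-type particles have none; sizes >= cap are truncated to cap."""
    def poisson(lam):  # Knuth's method, fine for small lam
        threshold, k, p = math.exp(-lam), 0, rng.random()
        while p > threshold:
            k, p = k + 1, p * rng.random()
        return k
    active, size = 1, 1  # start from a single V_L-type particle
    while active > 0 and size < cap:
        active -= 1
        y, z = poisson(mu), poisson(nu)
        active, size = active + y, size + y + z
    return min(size, cap)

rng = random.Random(1)
trials, cap = 50_000, 10_000
counts = {}
for _ in range(trials):
    s = total_progeny(mu=0.95, nu=0.5, cap=cap, rng=rng)  # slightly subcritical
    counts[s] = counts.get(s, 0) + 1
for k in (1, 2, 4, 8, 16, 32, 64):
    print(f"P(|X| = {k}) ~ {counts.get(k, 0) / trials:.5f}")
```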
In view of (<ref>), we need to understand the behaviour of the branching process X_t. Here one difficulty is that we only have very limited explicit knowledge about the offspring distribution (Y_t, Z_t). To partially remedy this, we prove that several key variables determined by the first exposure round have exponential tails (see, e.g., inequalities (<ref>)–(<ref>) and (<ref>)–(<ref>) of Theorems <ref> and <ref>). Combining calculus with ODE and PDE techniques (the Cauchy–Kovalevskaya Theorem; see Appendix <ref>), this allows us to eventually show that the probability generating function
(t, α, β) ↦ 𝔼(α^{Y_t} β^{Z_t})
is extremely well-behaved, i.e., (real) analytic in a neighbourhood of (t_c, 1, 1), say (see Sections <ref>–<ref> and <ref>). In a companion paper <cit.> written with Svante Janson (see also Section <ref> and Appendix <ref>), we show that the probability of X_t generating k particles is roughly of the form
ℙ(|X_t| = k) ≈ A k^{-3/2} e^{-ψ(t)k} with ψ(t_c ± ε) ≈ aε^2.
Turning to the survival probability ℙ(|X_t| = ∞), for this the V_S–vertices counted by Z_t are irrelevant (since these do not have children, the only possible exception being the first vertex). Combining a detailed analysis of Y_t with standard methods for single-type branching processes, we eventually show that 𝔼Y_{t_c} = 1, and (in <cit.>) that the survival probability of X_t is roughly of the form
ℙ(|X_t| = ∞) ≈ 0 if t ≤ t_c, and ℙ(|X_t| = ∞) ≈ cε if t = t_c + ε,
for small ε (see Sections <ref>–<ref>, Appendix <ref> and <cit.> for the details).

In the above discussion we have ignored a number of technical issues. For example, in certain parts of the analysis we need to incorporate various approximation errors: simple coupling arguments would, e.g., break down for large component sizes. (Such errors are not an artifact of our analysis. For example, the number of isolated vertices changes with probability Θ(1) in each step, so after Θ(n) steps we indeed expect random fluctuations of order Θ(√n).) To deal with such errors we shall use (somewhat involved) domination arguments, exploiting that the exploration process usually finds `typical' subsets of the underlying graph (see Section <ref>). Perhaps surprisingly, this allows us to employ dominating distributions (Y^±_t, Z^±_t) that have probability generating functions which are extremely close to the `ideal' one in (<ref>): the dominating branching processes are effectively indistinguishable from the actual exploration process. In this context one of our main technical contributions is that we are able to carry out (with uniform error bounds) the point probability analysis (<ref>) and the survival probability analysis (<ref>) despite having only some `approximate information' about the underlying (family of) offspring distributions. This is key for determining the asymptotic size of the largest component in the entire subcritical and supercritical phases.

§.§ Outline proofs of the main results

Using the setup (and technical preparation) outlined above, we prove our main results for L_1(i), N_k(i) and S_r(i) by adapting approaches that work for the classical Erdős–Rényi random graph. Of course, in this more complicated setup many technical details become more involved.
In this subsection we briefly outline the main high-level ideas that are spread across Sections <ref>–<ref> (the actual arguments are complicated, for example, by the fact that parts of the branching process analysis rely on Poissonized variants of G_i).

We start with the number N_k(i) of vertices in components of size k. After conditioning on the outcome of the first exposure round, we first use McDiarmid's bounded differences inequality <cit.> to show that whp N_k(i) is close to its expected value (here we exploit that the second exposure round consists of many independent random choices), and then approximate N_k(i) via the branching process results (<ref>) and (<ref>). The full details of this approach are given in Sections <ref> and <ref>, and here we just mention one technical point: conditioning allows us to bring concentration inequalities into play, but we must then show that (except for unlikely `atypical' outcomes) conditioning on the first exposure round does not substantially shift the expected value of N_k(i).

Next we turn to the size L_1(i) of the largest component in the subcritical and supercritical phases, i.e., where the step i = t_c n ± εn satisfies ε^3 n → ∞. Intuitively, our arguments hinge on the fact that the expected component size distribution has an exponential cutoff after size ε^{-2} = Θ(ψ(t_c ± ε)^{-1}), see (<ref>) and (<ref>). Indeed, (<ref>) and ∫_k^∞ e^{-ax} dx = Θ(a^{-1} e^{-ak}) suggest that for k ≫ ε^{-2} we roughly have
ℙ(k ≤ |X_{t_c ± ε}| < ∞) = ∑_{j≥k} ℙ(|X_{t_c ± ε}| = j) ≈ A ∑_{j≥k} j^{-3/2} e^{-ψ(t_c ± ε)j} = Θ(ε^{-2} k^{-3/2}) e^{-ψ(t_c ± ε)k}.

In the subcritical phase we have ℙ(|X_{t_c - ε}| = ∞) = 0 by (<ref>). Using (<ref>) we thus expect that for k ≫ ε^{-2} we have
N_{≥k}(t_c n - εn) ≈ ℙ(|X_{t_c - ε}| ≥ k) n = ℙ(k ≤ |X_{t_c - ε}| < ∞) n ≈ Θ(ε^{-2} k^{-3/2}) e^{-ψ(t_c - ε)k} n.
By considering which sizes k satisfy N_{≥k}(t_c n - εn) = Θ(k), this suggests that whp L_1(t_c n - εn) ≈ ψ(t_c - ε)^{-1} log(ε^3 n). We make this rigorous via the first- and second-moment methods, using a van den Berg–Kesten (BK) inequality-like argument for estimating the variance (see Sections <ref>, <ref> and <ref> for the details).

Turning to the more interesting supercritical phase, where i = t_c n + εn, note that the right hand side of (<ref>) is o(ε) for k ≫ ε^{-2} = Θ(ψ(t_c + ε)^{-1}), and that ℙ(|X_{t_c + ε}| = ∞) ≈ cε by (<ref>). Using (<ref>) we thus expect that for k ≫ ε^{-2} we have
N_{≥k}(t_c n + εn) ≈ ℙ(|X_{t_c + ε}| ≥ k) n ≈ ℙ(|X_{t_c + ε}| = ∞) n ≈ cεn.
Applying the first- and second-moment methods we then show that whp N_{≥Λ}(i) ≈ 𝔼N_{≥Λ}(i) for suitable ε^{-2} ≪ Λ ≪ εn, adapting a `typical exploration' argument of Bollobás and the first author <cit.> for bounding the variance (see Sections <ref>, <ref> and <ref> for the details). Mimicking the Erdős–Rényi sprinkling argument from <cit.>, we then show that whp most of these size ≥ Λ components quickly join, i.e., form one big component in o(εn) steps (see Sections <ref> and <ref>). Using continuity of ℙ(|X_{t_c + ε}| = ∞), this heuristically suggests that whp
L_1(t_c n + εn) ≈ ℙ(|X_{t_c + ε}| = ∞) n ≈ cεn,
ignoring technicalities (see Section <ref> for the details).

For the subcritical susceptibility S_r(t_c n - εn) with ε^3 n → ∞ we proceed similarly. Indeed, substituting the estimates (<ref>) and (<ref>) into the definition (<ref>) of S_r(i), since ψ(t_c - ε) = Θ(ε^2) we expect that for r ≥ 2 we have
S_r(t_c n - εn) ≈ A ∑_{k≥1} k^{r-5/2} e^{-ψ(t_c - ε)k} = Θ((ψ(t_c - ε)^{-1})^{r-3/2}) = Θ(ε^{-2r+3}).
In fact, comparing the sum with an integral, we eventually find that S_r(t_c n - εn) ≈ B_r ε^{-2r+3} for small ε (see Lemma <ref>).
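For the reader's convenience, the following display sketches this sum–integral comparison, taking A = θ(t_c) as suggested by (<ref>), ignoring the restriction to k ∈ 𝒮 (which is responsible for the factor 1/p in B_r) and all error terms; it is only a heuristic rendering of Lemma <ref>.

```latex
% With ψ = ψ(t_c - ε), compare the sum with a Gamma integral:
\sum_{k \ge 1} k^{\,r-5/2} e^{-\psi k}
  \approx \int_0^\infty x^{\,r-5/2} e^{-\psi x}\,dx
  = \Gamma\!\bigl(r-\tfrac{3}{2}\bigr)\,\psi^{-r+3/2},
\qquad
\Gamma\!\bigl(r-\tfrac{3}{2}\bigr) = \frac{(2r-5)!!\,\sqrt{\pi}}{2^{\,r-2}} .
% Inserting ψ(t_c - ε) ≈ ψ''(t_c)\,ε^2/2 then gives
\theta(t_c)\,\Gamma\!\bigl(r-\tfrac{3}{2}\bigr)
  \Bigl(\tfrac{\psi''(t_c)}{2}\Bigr)^{-r+3/2} ε^{-2r+3}
  = (2r-5)!!\,\sqrt{2\pi}\,\theta(t_c)\,\psi''(t_c)^{-r+3/2}\,ε^{-2r+3},
% which is exactly B_r ε^{-2r+3} in the case p = 1.
```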
Applying the second-moment method we then show that whp S_r(i) ≈ 𝔼S_r(i), using a BK-inequality-like argument for bounding the variance (see Sections <ref>, <ref> and <ref> for the details).

Finally, one non-standard feature of our arguments is that we can prove concentration of the size of the largest component in every step outside of the critical window (cf. Theorem <ref>). The idea is to fix a sequence (m_j) of not-too-many steps that are close enough together that we expect L_1(m_j) ≈ L_1(m_{j+1}). Since there are not too many steps in the sequence, we can show that whp L_1(m_j) is close to its expected value for every step m_j in the sequence. By monotonicity, in all intermediate steps m_j ≤ i ≤ m_{j+1} we have L_1(m_j) ≤ L_1(i) ≤ L_1(m_{j+1}), which together with (<ref>) establishes the desired concentration (related arguments are sometimes implicitly used in the context of the differential equation method). As we shall see in Section <ref>, the choice of the step sizes m_{j+1} - m_j requires some care, since we need to take a union bound over all auxiliary steps, but this idea can be made to work by sharpening the second-order error terms of various intermediate estimates. A similar proof strategy applies to the susceptibility S_r(i), which is also monotone (see Section <ref> for the details).

§ PREPARATION AND SETUP

In this section we formally introduce the proof setup, together with some preparatory results. Throughout we fix a bounded-size ℓ-vertex rule ℛ with cut-off K, and study the graph G_i = G^ℛ_{n,i} after i steps, where i = tn (or rather ⌊tn⌋) with t ≈ t_c. We refer to t (or in general i/n) as `time'. As discussed in Section <ref>, we stop the process after the first i_0 ≈ (t_c - σ)n steps, where σ > 0 is a small constant, and then analyze the evolution of the component structure from step i_0 to step tn via a two-round exposure argument. The main goals of this section are to formally introduce the two-round exposure, and to relate the second round of the exposure to a random graph model which is easier to analyze.

Turning to the details, for concreteness let
σ := min{1/(2ℓ^2(K+1)), t_c/3}.
Set
t_0 := t_c - σ and t_1 := t_c + σ, and i_0 := t_0 n and i_1 := t_1 n,
ignoring from now on the irrelevant rounding to integers. After i_0 steps we partition the vertex set into V_S and V_L, where V_S contains all vertices in components of G_{i_0} having size at most K. Here the labels S and L correspond to `small' and `large' component sizes. This partition is defined at step i_0, and does not change as our graph evolves.

In Section <ref> we explain our two-round exposure argument in detail. Then, in Section <ref>, we use the differential equation method to track the number of vertices in small components, as well as parts of the evolution of the graphs induced by V_S and V_L. Next, in Section <ref> we use branching process techniques to track the evolution of the V_S–graph in more detail, which also yields exponential tail bounds for certain key quantities. In Section <ref> we then use PDE theory to show that an associated generating function is analytic. In Section <ref> we introduce a convenient form of the Erdős–Rényi sprinkling argument. Finally, in Section <ref> we define and study the set 𝒮 of component sizes that the ℓ-vertex rule ℛ can produce, and the `period' p of the rule; for `edge-based' rules such as Achlioptas processes these technicalities are not needed.

§.§ Two-round exposure and conditioning

Recall that we first condition on G_{i_0}.
Our aim now is to analyze the steps i with i_0 < i ≤ i_1. Recall that 𝐯_i = (v_{i,1}, …, v_{i,ℓ}) denotes the uniformly random ℓ-tuple of vertices offered to the rule in step i. Given G_{i_0} we expose the information about steps i_0 < i ≤ i_1 in two rounds. In the first exposure round ℰ_1(i_0, i_1), for every step i_0 < i ≤ i_1 we (i) reveal which vertices of 𝐯_i = (v_{i,1}, …, v_{i,ℓ}) are in V_S and which in V_L, and (ii) for those vertices v_{i,j} in V_S, we also reveal precisely which vertex v_{i,j} is. In the second exposure round ℰ_2(i_0, i_1), for every step i_0 < i ≤ i_1 we reveal the choices of all so-far unrevealed vertices in V_L.

The `added edges', i.e., edges of G_i ∖ G_{i_0}, are of three types: V_S–V_S edges (where both endvertices are in V_S), V_L–V_L edges (where both endvertices are in V_L, but still unrevealed after the first exposure round) and V_S–V_L edges (where the endvertex in V_L is still unrevealed). To be pedantic, we formally mean pairs of vertices, allowing for loops and multiple edges; the term `edge' allows for a more natural and intuitive discussion of the arguments. The following lemma encapsulates the key properties of the two-round exposure discussed informally in Section <ref>.

Given G_{i_0}, the information revealed by the first exposure round ℰ_1(i_0, i_1) is enough to make all decisions of ℛ, i.e., to determine for every i_0 < i ≤ i_1 the indices j_1 = j_1(i) and j_2 = j_2(i) such that v_{i,j_1} and v_{i,j_2} are joined by the rule ℛ. Furthermore, conditional on G_{i_0} and on the first exposure round, all vertices revealed in the second exposure round ℰ_2(i_0, i_1) are chosen independently and uniformly at random from V_L.

The claim concerning the second exposure round is immediate, since in each step the vertices 𝐯_i = (v_{i,1}, …, v_{i,ℓ}) are chosen independently and uniformly at random. Turning to the first exposure round, we now make the heuristic arguments of Section <ref> rigorous. For i_0 ≤ i ≤ i_1 let E_i be the set of edges of G_i ∖ G_{i_0} with both ends in V_S (the edges added inside V_S), and let V_i be the (multi-)set of vertices of V_S in at least one V_S–V_L edge in G_i ∖ G_{i_0} (the set of V_S–endvertices of the added V_S–V_L edges). We claim that the information revealed in the first exposure round determines E_i and V_i for each i_0 ≤ i ≤ i_1. The proof is by induction on i; of course, E_{i_0} = V_{i_0} = ∅. Suppose then that i_0 < i ≤ i_1 and that the claim holds for i-1. The information revealed in the first exposure round determines which of the vertices v_{i,1}, …, v_{i,ℓ} are in V_S as opposed to V_L, and precisely which vertices those in V_S are. Let 𝐜_i = (c_{i,1}, …, c_{i,ℓ}) ∈ {1, …, K, ω}^ℓ list the sizes of the components of G_{i-1} containing v_{i,1}, …, v_{i,ℓ}, with all sizes larger than K replaced by ω. We shall show that 𝐜_i is determined by the information revealed in the first exposure round. By the definition of a bounded-size rule, 𝐜_i determines the choice made by the rule ℛ, i.e., the indices j_1 = j_1(𝐜_i) and j_2 = j_2(𝐜_i) such that v_{i,j_1} and v_{i,j_2} are joined by ℛ in step i, which is then enough to determine E_i ∖ E_{i-1} and V_i ∖ V_{i-1}, completing the proof by induction.

If v_{i,j} ∈ V_L, then v_{i,j} is in a component of G_{i-1} ⊇ G_{i_0} of size at least K+1, so we know that c_{i,j} = ω, even without knowing the particular choice of v_{i,j} ∈ V_L. Suppose then that v_{i,j} ∈ V_S. Since we know G_{i_0} and E_{i-1}, we know the entire graph G_{i-1}[V_S]. Furthermore, we know exactly which components of G_{i-1}[V_S] are connected to V_L in G_{i-1}, namely those containing one or more vertices of V_{i-1}.
Let C be the component of G_{i-1}[V_S] containing v_{i,j}. If C is not connected to V_L in G_{i-1}, then C is also a component of G_{i-1}, whose size we know. If C is connected to V_L, then in G_{i-1} the component containing C has size at least K+1, so c_{i,j} = ω. This shows that c_{i,j} is indeed known in all cases, completing the proof.

Intuitively speaking, after the first exposure round ℰ_1(i_0, i_1), for i_0 ≤ i ≤ i_1 we are left with a `marked' auxiliary graph H_i, as described in Figure <ref>. More precisely, for i_0 ≤ i ≤ i_1 let H_i be the `marked graph' obtained as follows. Starting from G_{i_0}, (i) insert all V_S–V_S edges added in steps i_0 < j ≤ i, and (ii) for each V_S–V_L edge added in steps i_0 < j ≤ i, add a `stub' or `half-edge' to its endvertex in V_S. Thus, in the (temporary) notation of the proof above, H_i is formed from G_{i_0} by adding the edges in E_i and stubs corresponding to the multiset V_i. Each mark or stub represents an edge to a so-far unrevealed vertex in V_L, and a V_S–vertex can be incident to multiple stubs.

For i_0 ≤ i ≤ i_1 let Q_{0,2}(i) denote the number of V_L–V_L edges (including loops and repeated edges) added in total in steps i_0 < j ≤ i, so by definition Q_{0,2}(i_0) = 0. By Lemma <ref> the information revealed in the first exposure round ℰ_1(i_0, i_1) determines the graphs (H_i)_{i_0 ≤ i ≤ i_1} and the sequence (Q_{0,2}(i))_{i_0 ≤ i ≤ i_1}. Furthermore, in the second exposure round we may generate G_i from H_i by replacing each stub associated to a vertex v ∈ V_S by an edge vw to a vertex w chosen independently and uniformly at random from V_L, and adding Q_{0,2}(i) random V_L–V_L edges to H_i, where the endvertices are chosen independently and uniformly at random from V_L.

Since our focus is on the component sizes of G_i, the internal structure of the components of G_{i_0} and H_i is irrelevant; all we need to know is the size of each component, and how many stubs it contains. Any component C of H_i is either contained in V_L (in which case |C| > K) or in V_S. If C ⊆ V_S then we say that C has type (k,r) if |C| = k and C contains r stubs, i.e., is incident to r V_S–V_L edges in G_i ∖ H_i, cf. Figure <ref>. As usual, for i ≥ 0 and k ≥ 1, we write N_k(i) for the number of vertices of G_i which are in components of size exactly k. For i ≥ i_0, k ≥ 1 and r ≥ 0, we write Q_{k,r}(i) for the number of components of H_i of type (k,r).[Note that N_k counts vertices, and Q_{k,r} counts components; the different normalizations are convenient in different contexts.] Thus
Q_{k,r}(i_0) = 1[r=0, 1 ≤ k ≤ K] N_k(i_0)/k.
We may think of an added V_L–V_L edge as a component of type (0,2): it contains no vertices, but has two stubs associated to it. Hence the notation Q_{0,2}(i) above; we let Q_{0,r} := 0 for r ≠ 2. For i_0 ≤ i ≤ i_1, let
𝒫_i := ((N_k(i_0))_{k > K}, (Q_{k,r}(i))_{k,r ≥ 0}).
This parameter list contains the essential information about H_i. Given (a possible value of) 𝒫_i, treating 𝒫_i as deterministic we construct a random graph J_i = J(𝒫_i) as follows: start with a graph Ĥ_i = Ĥ(𝒫_i) consisting of Q_{k,r}(i) type-(k,r) components for all k ≥ 1 and r ≥ 0, and N_k(i_0)/k components of size k for all k > K. Let V_S be the set of vertices in components of the first type, and V_L the set in components of the second type. Given Ĥ_i, we then (i) connect each stub of Ĥ_i to an independent random vertex in V_L, and (ii) add Q_{0,2}(i) random V_L–V_L edges to Ĥ_i. By construction and Lemma <ref> we have the following result.

Given 𝒫_i, the random graph J_i = J(𝒫_i) has the same component size distribution as G_i conditioned on the parameter list 𝒫_i.
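The following minimal sketch (ours, with a made-up toy parameter list) shows how one sample of J(𝒫_i) can be generated at the level of component sizes, using union–find on whole components; choosing a uniform vertex of V_L amounts to choosing a V_L–component with probability proportional to its size.

```python
import random

def sample_J(S_comps, L_sizes, q02, seed=0):
    """One sample of J(P): S_comps lists the V_S-components as pairs (k, r)
    (k vertices, r stubs), L_sizes lists the V_L-component sizes, and q02 is
    the number of random V_L-V_L edges; only component sizes are tracked."""
    rng = random.Random(seed)
    m = len(S_comps) + len(L_sizes)
    parent, weight = list(range(m)), [k for k, _ in S_comps] + list(L_sizes)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[ry] = rx
            weight[rx] += weight[ry]

    off = len(S_comps)
    # A uniform vertex of V_L = a V_L-component chosen proportionally to size:
    pick = [off + j for j, s in enumerate(L_sizes) for _ in range(s)]
    for c, (_, r) in enumerate(S_comps):  # (i) attach each stub independently
        for _ in range(r):
            union(c, rng.choice(pick))
    for _ in range(q02):                  # (ii) add the random V_L-V_L edges
        union(rng.choice(pick), rng.choice(pick))
    return sorted((weight[r] for r in range(m) if find(r) == r), reverse=True)

sizes = sample_J(S_comps=[(2, 1)] * 50 + [(1, 0)] * 100, L_sizes=[5] * 40, q02=30)
print(sizes[:5])
```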
Our strategy for analyzing the component size distribution of G_i will be as follows. In Sections <ref>–<ref> we will show that the random parameter list 𝒫_i, which is revealed in the first exposure round, is concentrated, i.e., nearly deterministic. Then, in the second round (so having conditioned on 𝒫_i) we use the random model J(𝒫_i) to construct G_i. The advantage is that J(𝒫_i) is very well suited to branching process approximation, since it is defined by a number of independent random choices. Note for later that, by definition of the discrete variables, for i ≥ i_0 we have
|V_L| = ∑_{k>K} N_k(i_0), |V_S| = ∑_{k≥1, r≥0} k Q_{k,r}(i) and n = |V_L| + |V_S|.

§.§ Differential equation approximation

In this subsection we study the (random) parameter list 𝒫_i defined in (<ref>). We shall track the evolution of several associated random variables, using Wormald's differential equation method <cit.> to show that their trajectories stay close to the solution of a certain associated system of ODEs (after suitable rescaling). In fact, we rely on a variant of this method due to the second author <cit.> in order to obtain sufficiently small approximation errors.

§.§.§ Small components

We start by tracking the number of vertices of G_i which are in components of size k ∈ 𝒞 = {1, …, K, ω}, which we denote by N_k(i). Here, as usual, `size ω' means size at least K+1. The following result intuitively shows N_k(i) ≈ ρ_k(i/n)n, where ρ_k is a smooth (infinitely differentiable) function. Later we shall show that the ρ_k, and the related functions appearing in the next few lemmas, are in fact analytic.

With probability at least 1 - n^{-ω(1)} we have
max_{0 ≤ i ≤ i_1} max_{k ∈ 𝒞} |N_k(i) - ρ_k(i/n)n| ≤ (log n) n^{1/2},
where the functions (ρ_k)_{k ∈ 𝒞} from [0, t_1] to [0,1] are smooth. They satisfy ∑_{k ∈ 𝒞} ρ_k(t) = 1 and ρ'_ω(t) ≥ 0, and are given by the unique solution to
ρ_k(0) = 1[k=1] and ρ'_k(t) = ∑_{𝐜 ∈ 𝒞^ℓ} Δ_ρ^{(k,𝐜)} ∏_{j ∈ [ℓ]} ρ_{c_j}(t),
for certain coefficients Δ_ρ^{(k,𝐜)} ∈ ℤ with |Δ_ρ^{(k,𝐜)}| ≤ 2K.

This follows from a nowadays standard (see, e.g., <cit.>) application of the differential equation method <cit.>, where the (log n) n^{1/2} error term is due to a variant of the second author <cit.>. Let us briefly sketch the details. Noting that edges connecting two vertices in components of size ω leave all N_k with k ∈ 𝒞 unchanged, it is easy to check from the definition of a bounded-size rule that if, in step i, all vertices v_{i,j} with c_j = c_{i,j} ≠ ω lie in different components, then for each k ∈ 𝒞 the number of vertices in components of size k changes by Δ_ρ^{(k,𝐜)}, a deterministic quantity with |Δ_ρ^{(k,𝐜)}| ≤ 2K. Furthermore, in step i, the probability that at least two of the ℓ randomly chosen vertices lie in the same component of size at most K is at most ℓ^2 K/n. So, if (ℱ_i)_{i≥0} denotes the natural filtration associated to our random graph process, using |N_k(i+1) - N_k(i)| ≤ 2K it follows as in <cit.> that
𝔼(N_k(i+1) - N_k(i) | ℱ_i) = ∑_{𝐜 ∈ 𝒞^ℓ} Δ_ρ^{(k,𝐜)} ∏_{j ∈ [ℓ]} N_{c_j}(i)/n ± 4ℓ^2 K^2/n.
Since |𝒞^ℓ| = (K+1)^ℓ = O(1), |Δ_ρ^{(k,𝐜)}| ≤ 2K and N_k(0) = 1[k=1]n, a routine application of the differential equation method <cit.> (see, e.g., <cit.>) implies that (<ref>) holds with probability at least 1 - n^{-ω(1)}, where the (ρ_k(t))_{k ∈ 𝒞} are the unique solution to (<ref>).

Now we turn to properties of the functions (ρ_k)_{k ∈ 𝒞}. By induction on j we see that the jth derivatives ρ^{(j)}_k(t) exist for all k ∈ 𝒞 and j ≥ 0, i.e., that the (ρ_k)_{k ∈ 𝒞} are smooth. Since N_k(i) ∈ [0,n] and ∑_{k ∈ 𝒞} N_k(i) = n, it follows from (<ref>) that ρ_k(t) ∈ [0,1] and ∑_{k ∈ 𝒞} ρ_k(t) = 1.
(This also follows directly from the differential equations, similar to Theorem 2.1 in <cit.>.) Finally, Δ_ρ^{(ω,𝐜)} ≥ 0 and ρ_k(t) ≥ 0 imply ρ'_ω(t) ≥ 0.

For later reference we now extend the results of Lemma <ref> to any fixed component size k'. One way to do this is to note that any bounded-size rule with cut-off K can be interpreted as a bounded-size rule with cut-off max{k', K}, and apply Lemma <ref> to this rule. This has the drawback that as k' varies, our formula for ρ_{k'} changes. In the next lemma we take a different approach which avoids this. The key point is that the functions (ρ_k)_{k≥1} in (<ref>)–(<ref>) below are the unique solution of a system of ODEs. Recall that N_{≥k}(i) = ∑_{k'≥k} N_{k'}(i).

Given k' ≥ 1, with probability at least 1 - n^{-ω(1)} we have
max_{0 ≤ i ≤ i_1} max_{1 ≤ k ≤ k'} |N_k(i) - ρ_k(i/n)n| ≤ (log n) n^{1/2},
max_{0 ≤ i ≤ i_1} max_{1 ≤ k ≤ k'} |N_{≥k}(i) - ρ_{≥k}(i/n)n| ≤ (log n) n^{1/2},
where the functions ρ_k : [0, t_1] → [0,1] are given by the unique solution to the system of differential equations (<ref>) for 1 ≤ k ≤ K and (<ref>) below for k > K, and we write
ρ_{≥k}(t) = 1 - ∑_{1 ≤ j < k} ρ_j(t)
and interpret ρ_ω as ρ_{≥K+1}. Furthermore, the functions (ρ_k)_{k≥1} are smooth on [0, t_1], with ρ_k(t), ρ_{≥k}(t) ∈ [0,1].

The proof is a minor generalization of that of (<ref>), so let us omit the details and only outline how the differential equations are obtained. For 1 ≤ k ≤ K the equation (<ref>) remains valid; here we may either interpret ρ_ω as ρ_{≥K+1} = 1 - ∑_{k≤K} ρ_k, or include an equation for ρ_ω itself; this makes no difference. For k > K, arguing as for (<ref>) but now with `size ≥ k+1' playing the role of size ω we have
ρ_k(0) = 0 and ρ'_k(t) = ∑_{𝐜 ∈ {1, …, k, ≥k+1}^ℓ} Δ_ρ^{(k,𝐜)} ∏_{j ∈ [ℓ]} ρ_{c_j}(t),
where the Δ_ρ^{(k,𝐜)} are constants with |Δ_ρ^{(k,𝐜)}| ≤ 2k. Recalling (<ref>) and (<ref>), the key observation is that each ρ'_k depends only on ρ_j with 1 ≤ j ≤ max{k, K}. Hence standard results imply that the infinite system of differential equations (<ref>) and (<ref>)–(<ref>) has a unique solution on [0, t_1]. Mimicking the proof of Lemma <ref>, it then follows that the functions (ρ_k)_{k≥1} are smooth, with ρ_k(t) ∈ [0,1] and ∑_{1 ≤ j < k} ρ_j(t) ≤ 1.

Recall that after i_0 steps we partition the set of vertices into V_S ∪ V_L, where V_S contains all vertices in components of size at most K. Our later arguments require that whp |V_S|, |V_L| = Θ(n); in the light of Lemma <ref>, to show this it is enough to show that min{ρ_1(t_0), ρ_ω(t_0)} > 0. This is straightforward for ρ_1(t_0); for ρ_ω(t_0) the key observation is that a new component of size 2r is certainly formed in any step i where all vertices v_{i,1}, …, v_{i,ℓ} lie in distinct components of size r. Hence, via successive doublings, by time t_0 we create many components of size 2^j > K; Lemma <ref> makes this idea rigorous.

Define the functions (ρ_k)_{k ∈ 𝒞} as in Lemma <ref>. For all t ∈ (0, t_1] we have min{ρ_1(t), ρ_ω(t)} > 0.

As noted above, if k is even then Δ_ρ^{(k,(k/2, …, k/2))} = k ≥ 1. Furthermore, Δ_ρ^{(k,𝐜)} ≥ 0 if 𝐜 does not contain k. Since ρ_j(t) ≥ 0, |Δ_ρ^{(k,𝐜)}| ≤ 2k and ∑_j ρ_j(t) = 1, by the form of ρ'_k in (<ref>) and (<ref>) it readily follows for any integer k ≥ 1 that
ρ'_k(t) ≥ 1[2 | k] k (ρ_{k/2}(t))^ℓ - 2k · ℓρ_k(t) ≥ -2ℓk ρ_k(t).
We claim that, for every j ∈ ℕ and t ∈ (0, t_1], we have ρ_{2^j}(t) > 0; the proof is by induction on j. For the base case j = 0, from (<ref>) we have (ρ_1(t)e^{2ℓt})' = (ρ'_1(t) + 2ℓρ_1(t))e^{2ℓt} ≥ 0. Hence ρ_1(t) ≥ ρ_1(0)e^{-2ℓt} = e^{-2ℓt}. For the induction step j ≥ 1, we write k = 2^j to avoid clutter. It follows from (<ref>) that for t' ≥ t/2 we have ρ_{k/2}(t') ≥ ρ_{k/2}(t/2)e^{-ℓk(t' - t/2)}.
Since ρ_{k/2}(t/2) > 0 by induction, we deduce that there is a δ = δ(k,t) > 0 such that ρ_{k/2}(t') ≥ δ for all t' ∈ [t/2, t]. The first inequality in (<ref>) implies that (ρ_k(t)e^{2ℓkt})' ≥ δ^ℓ in [t/2, t], which readily implies ρ_k(t) ≥ e^{-2ℓkt} · δ^ℓ t/2 > 0 for k = 2^j. This completes the proof by noting that ρ_ω(t) ≥ ρ_k(t) whenever k > K.

As we shall discuss in Section <ref>, for ℓ-vertex rules it is, in general, not true that min_{k ∈ 𝒞} ρ_k(t) > 0 for t > 0 (in contrast to the usual `edge-based' Achlioptas processes considered in <cit.>).

§.§.§ Random V_L–V_L edges

Next we focus on the evolution of Q_{0,2}(i), which counts the number of V_L–V_L edges added in steps i_0 < j ≤ i.

With probability at least 1 - n^{-ω(1)} we have
max_{i_0 ≤ i ≤ i_1} |Q_{0,2}(i) - q_{0,2}(i/n)n| ≤ (log n)^2 n^{1/2},
where the function q_{0,2} : [t_0, t_1] → [0,1] is smooth, with q_{0,2}(t_0) = 0 and q'_{0,2}(t) > 0. It is given by the unique solution to the differential equation (<ref>).

This follows again by a routine application of the differential equation method <cit.>, so we only outline the argument. Formally, we use Lemma <ref> to obtain bounds at step i_0, and then track Q_{0,2}(i) and (N_k(i))_{k ∈ 𝒞} from step i_0 onwards. Analogous to (<ref>) we consider 𝔼(Q_{0,2}(i+1) - Q_{0,2}(i) | ℱ_i), i.e., the conditional one-step expected change in Q_{0,2}(i). This time we need to consider vertices in components of size ω that are in V_S separately from those in V_L. Let
ϑ_L(t) := ρ_ω(t_0),
which corresponds to the idealized rescaled number of vertices in V_L. Noting that for i ≥ i_0 there are N_ω(i) - |V_L| = N_ω(i) - N_ω(i_0) vertices in V_S that are in components of G_i of size ω (i.e., size at least K+1), let
ϑ_k(t) := ρ_k(t) if 1 ≤ k ≤ K, and ϑ_ω(t) := ρ_ω(t) - ρ_ω(t_0),
corresponding to the idealized rescaled number of vertices in V_S which are in components of size k ∈ 𝒞 = {1, …, K, ω}. Since ℛ is a bounded-size rule, Q_{0,2}(i+1) - Q_{0,2}(i) is determined by the following information: the sizes c_{i,j} ∈ 𝒞 = {1, …, K, ω} of the components containing the vertices v_{i,1}, …, v_{i,ℓ} and, where c_{i,j} = ω, the information whether v_{i,j} is in V_L or not. (It does not matter whether any of these vertices lie in the same component or not.) So, with |V_L| = N_ω(i_0) and (<ref>) in mind, it is straightforward to see that q_{0,2}(t) is given by the unique solution to
q_{0,2}(t_0) = 0 and q'_{0,2}(t) = ∑_{𝐜 = (c_1, …, c_ℓ) ∈ (𝒞 ∪ {L})^ℓ} Δ^{(𝐜)} ∏_{j ∈ [ℓ]} ϑ_{c_j}(t),
where Δ^{(𝐜)} = 1 if we have c_{j_1} = c_{j_2} = L for the indices {j_1, j_2} = ℛ(𝐜) selected by the rule, and Δ^{(𝐜)} = 0 otherwise.

Now we turn to properties of q_{0,2}(t). By Lemma <ref>, all ϑ_k(t) are smooth, so q_{0,2}(t) is smooth by (<ref>). Similarly, recalling ρ'_ω(t) ≥ 0 and ϑ_ω(t_0) = 0, we see that ϑ_k(t) ∈ [0,1] and ∑_{k ∈ 𝒞 ∪ {L}} ϑ_k(t) = 1. Now, if ℓ distinct vertices from V_L are chosen, then a V_L–V_L edge is added. Hence Δ^{((L, …, L))} = 1, which implies q'_{0,2}(t) ≥ (ρ_ω(t_0))^ℓ > 0 for all t ∈ [t_0, t_1], see Lemma <ref>.
Finally, using Δ^{(𝐜)} ≤ 1 and ∑_{k ∈ 𝒞 ∪ {L}} ϑ_k(t) = 1 we deduce that q'_{0,2}(t) ≤ 1, so by (<ref>) we have q_{0,2}(t) ≤ q_{0,2}(t_0) + t - t_0 ≤ 2σ ≤ 1 for all t ∈ [t_0, t_1].

§.§.§ Components in V_S

We now study the `marked graph' H_i defined in Section <ref>, see also Figure <ref>. For k ≥ 1 and r ≥ 0, recall that Q_{k,r}(i) counts the number of type-(k,r) components in H_i, i.e., components of H_i which contain k vertices from V_S and have r stubs (and so are incident to r V_S–V_L edges in G_i ∖ H_i). As usual, we expect that Q_{k,r}(tn)/n can be approximated by a smooth function q_{k,r}(t), and our next goal is to derive a system of differential equations that these q_{k,r} must satisfy. Note that (<ref>) below only implies Q_{k,r}(i)/n ≈ q_{k,r}(i/n) for fixed k and r (see Section <ref> for an extension to all k ≥ 1 and r ≥ 0).

The system of differential equations (<ref>) and (<ref>) below has a unique solution (q_{k,r})_{k≥1, r≥0} on [t_0, t_1], with each q_{k,r} : [t_0, t_1] → [0,1] a smooth function. Given k' ≥ 1 and r' ≥ 0, with probability at least 1 - n^{-ω(1)} we have
max_{i_0 ≤ i ≤ i_1} max_{1 ≤ k ≤ k', 0 ≤ r ≤ r'} |Q_{k,r}(i) - q_{k,r}(i/n)n| ≤ (log n)^2 n^{1/2}.

As in the proof of Lemma <ref> we only sketch the differential equation method <cit.> argument. Again, we use Lemma <ref> to obtain bounds at step i_0, and then track (N_k(i))_{k ∈ 𝒞} and (Q_{k,r}(i))_{1 ≤ k ≤ k', 0 ≤ r ≤ r'} from step i_0 onwards; here, as usual, 𝒞 = {1, …, K, ω}. Since |Q_{k,r}(i+1) - Q_{k,r}(i)| ≤ 2, the `exceptional event' that two of the ℓ random vertices lie in the same (k,r)–component of H_i with k ≤ k' contributes at most, say, 4ℓ^2 k'/n = O(1/n) to 𝔼(Q_{k,r}(i+1) - Q_{k,r}(i) | ℱ_i). Hence, recalling the definition of ρ_k(t), by considering the expected one-step changes of Q_{k,r}(i), it is not difficult to see that q'_{k,r}(t) is a polynomial function of ρ_ω(t_0), the ρ_k̃(t) with k̃ ∈ 𝒞, and the q_{k̃,r̃}(t) with 1 ≤ k̃ ≤ k and 0 ≤ r̃ ≤ r (edges connecting two vertices from V_L or two vertices in (k̃,r̃)–components with k̃ > k or r̃ > r leave Q_{k,r}(i) unchanged). For later reference, we now spell out these differential equations explicitly. By (<ref>), the initial conditions are
q_{k,r}(t_0) = 1[r=0, 1 ≤ k ≤ K] ρ_k(t_0)/k.
Turning to q'_{k,r}, set
s(k,r) := ω if k ≥ K+1 or r ≥ 1, and s(k,r) := k otherwise.
From the relationship between H_i and G_i established in Section <ref> (see Figure <ref>), a vertex v ∈ V_S in a type-(k,r) component of H_i is in a component of G_i with size s(k,r), where size ω means size ≥ K+1. Recall that in step i the rule ℛ connects v_{i,j_1} with v_{i,j_2}, where {j_1, j_2} = ℛ(𝐜_i). In the following formulae we sum over all possibilities 𝐜 ∈ 𝒞^ℓ for 𝐜_i, and always tacitly define {j_1, j_2} = ℛ(𝐜). Bearing in mind that |V_L| = N_ω(i_0), similar arguments to those leading to (<ref>) and (<ref>) show that
q'_{k,r}(t) = ∑_{𝐜 ∈ 𝒞^ℓ} [∏_{j ∈ [ℓ] ∖ {j_1,j_2}} ρ_{c_j}(t)] · [∑_{1 ≤ h ≤ 3} F_h(k,r,𝐜)],
where
F_1(k,r,𝐜) := ∑_{k_1+k_2=k: k_1,k_2 ≥ 1; r_1+r_2=r: r_1,r_2 ≥ 0} k_1 q_{k_1,r_1}(t) k_2 q_{k_2,r_2}(t) 1[c_{j_1} = s(k_1,r_1), c_{j_2} = s(k_2,r_2)],
corresponding to creating a new (k,r)–component by adding an edge within V_S,
F_2(k,r,𝐜) := 1[r ≥ 1] k q_{k,r-1}(t) ρ_ω(t_0) [1[c_{j_1} = s(k,r-1), c_{j_2} = ω] + 1[c_{j_1} = ω, c_{j_2} = s(k,r-1)]],
corresponding to adding a V_S–V_L edge to a (k,r-1)–component, and
F_3(k,r,𝐜) := -k q_{k,r}(t) [1[c_{j_1} = s(k,r)] ρ_{c_{j_2}}(t) + ρ_{c_{j_1}}(t) 1[c_{j_2} = s(k,r)]],
corresponding to destroying a (k,r)–component by connecting one of its vertices in V_S to something else. (The normalization is different for q_{k,r} and ρ_k since Q_{k,r} counts components, whereas N_k counts vertices.)
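To make the structure of (<ref>)–(<ref>) concrete, here is a direct (and entirely illustrative) transcription into one explicit-Euler step for a truncated version of the system; the placeholder rule, ρ-values and initial data below are ours and carry no meaning beyond showing how F_1, F_2 and F_3 enter.

```python
import itertools

def step_q(q, t, dt, rho, rho_omega_t0, rule, K, ell):
    """One explicit-Euler step for the truncated system (q_{k,r})_{(k,r) in q}:
    `rho(c, t)` returns rho_c(t) for c in {1,...,K,'omega'}, and `rule(c)`
    returns the pair (j1, j2) that the bounded-size rule joins on seeing c."""
    C = list(range(1, K + 1)) + ["omega"]

    def s(k, r):  # size of a (k,r)-component of H_i as seen inside G_i
        return "omega" if (k > K or r >= 1) else k

    dq = dict.fromkeys(q, 0.0)
    for c in itertools.product(C, repeat=ell):
        j1, j2 = rule(c)
        pref = 1.0
        for j, cj in enumerate(c):
            if j != j1 and j != j2:
                pref *= rho(cj, t)
        for (k, r) in q:
            F = 0.0
            for k1 in range(1, k):          # F_1: V_S-V_S edge creates (k,r)
                for r1 in range(0, r + 1):
                    if c[j1] == s(k1, r1) and c[j2] == s(k - k1, r - r1):
                        F += k1 * q[(k1, r1)] * (k - k1) * q[(k - k1, r - r1)]
            if r >= 1:                      # F_2: new stub on a (k,r-1)-component
                F += k * q[(k, r - 1)] * rho_omega_t0 * (
                    (c[j1] == s(k, r - 1) and c[j2] == "omega")
                    + (c[j1] == "omega" and c[j2] == s(k, r - 1)))
            F -= k * q[(k, r)] * (          # F_3: the component is destroyed
                (c[j1] == s(k, r)) * rho(c[j2], t)
                + rho(c[j1], t) * (c[j2] == s(k, r)))
            dq[(k, r)] += pref * F
    return {kr: q[kr] + dt * dq[kr] for kr in q}

# Toy demo (K=1, l=4): join the first pair if both offered vertices are
# isolated, otherwise the second pair; all numerical values are placeholders.
rule = lambda c: (0, 1) if c[0] == c[1] == 1 else (2, 3)
rho = lambda c, t: 0.4 if c == 1 else 0.6
q = {(k, r): (0.4 if (k, r) == (1, 0) else 0.0) for k in (1, 2, 3) for r in (0, 1)}
q = step_q(q, t=0.6, dt=1e-3, rho=rho, rho_omega_t0=0.3, rule=rule, K=1, ell=4)
print(q[(2, 0)], q[(1, 1)])
```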
Turning to properties of the q_{k,r}(t), recall that the ρ_k(t) are smooth on [0, t_1]. The key observation is that q'_{k,r} depends only on (q_{k̃,r̃})_{1 ≤ k̃ ≤ k, 0 ≤ r̃ ≤ r} and (ρ_j)_{j ∈ 𝒞}, see (<ref>)–(<ref>). So, using |𝒞^ℓ| = (K+1)^ℓ = O(1), standard results imply that the infinite system of differential equations (<ref>)–(<ref>) has a unique solution on [t_0, t_1]. Furthermore, by induction on j ≥ 0 (and k+r ≥ 1) we see that all the q_{k,r}(t) are j times differentiable; thus the (q_{k,r})_{k≥1, r≥0} are smooth on [t_0, t_1]. Finally, since 0 ≤ Q_{k,r}(i) ≤ n/k, standard comparison arguments yield q_{k,r}(t) ∈ [0,1], say.

§.§ Exploration tree approximation

In this subsection we continue studying the (random) parameter list 𝒫_i defined in (<ref>) of Section <ref>. More concretely, we shall track the evolution of several associated random variables using the exploration tree method developed in <cit.>, which intuitively shows that these variables (i) are concentrated, and (ii) have exponential tails. This proof method is based on branching process approximation techniques, and it usually works in situations where the quantities in question can be determined by a subcritical neighbourhood exploration process.

§.§.§ Component size distribution

We first revisit the number N_k(i) of vertices of G_i in components of size k in the subcritical phase, which we studied in <cit.> for size rules (for bounded-size rules the relevant critical quantity appearing in Theorem 1 of <cit.> is equal to t_c by Theorem 15 of <cit.>). Since t_0 < t_c, Theorem 1 of <cit.> implies the following result, which applies to all component sizes k ≥ 1 and steps i ≤ i_0 = t_0 n, showing concentration and exponential tail bounds.

Let (ρ_k)_{k≥1} be the functions defined in Lemma <ref>. There are constants a, A, D_N, n_0 > 0 with A ≥ 1 such that for n ≥ n_0, with probability at least 1 - n^{-99}, the following holds for all k ≥ 1:
max_{0 ≤ i ≤ i_0} |N_k(i) - ρ_k(i/n)n| ≤ (log n)^{D_N} n^{1/2},
max_{0 ≤ i ≤ i_0} N_{≥k}(i) ≤ A e^{-ak} n,
sup_{t ∈ [0,t_0]} ρ_k(t) ≤ A e^{-ak}.
Furthermore, we have ∑_{k≥1} ρ_k(t) = 1 for all t ∈ [0, t_0].

To be pedantic, the functions (ρ_k)_{k≥1} of Theorem 1 of <cit.> could potentially differ from those considered in Lemma <ref>. However, since both are defined without reference to n, by (<ref>) and (<ref>) these must be equal. This justifies (with hindsight) our slight abuse of notation. Furthermore, from Lemmas <ref> and <ref> and the fact that ∑_{k≥1} ρ_k(t_0) = 1, we see that
ρ_ω(t_0) = 1 - ∑_{1 ≤ k ≤ K} ρ_k(t_0) = ∑_{k > K} ρ_k(t_0) > 0.

Let us briefly outline the high-level proof strategy from Section 2 of <cit.>, which we will adapt to H_i (as defined in Section <ref>) in a moment. The basic idea is to generalize slightly, and establish concentration starting from an initial graph F. Using induction, it then suffices to prove concentration during an interval consisting of a small (linear) number of steps. For this purpose we use a two-phase[We use the word phase rather than round to avoid any confusion with the main two-round exposure argument described in Section <ref>.] exposure argument: we first reveal which ℓ-tuples appear in the entire interval, and then expose their order (in which they are presented to the rule ℛ). Given a vertex v, via the first exposure phase we can severely restrict the set of components (of F) and tuples (that appear in the interval) which can influence the size of the component containing v under the evolution of any size rule. Indeed, the only components/tuples which can possibly be relevant are those which can be reached from v after adding all ℓ(ℓ-1)/2 edges in each ℓ-tuple
appearing in the first exposure phase. Of course, all these tuples and components can be determined by a neighbourhood exploration process. If the interval has length δn = Θ(n), then (since there are at most ℓn^{ℓ-1} many ℓ-tuples containing any given vertex, each containing at most ℓ-1 new vertices) it seems plausible that the expected size of the associated offspring distribution is at most roughly
δ · ℓ(ℓ-1) · ∑_{k≥1} k N_k(F)/n = δℓ(ℓ-1) S_2(F),
where, as usual, S_2(F) denotes the susceptibility of the graph F, i.e., the expected size of the component containing a random vertex. In <cit.> our inductive argument hinges on the fact that the associated branching process remains subcritical (i.e., quickly dies out) as long as δℓ(ℓ-1)S_2(F) < 1. In the first exposure phase this allows us to couple the neighbourhood exploration process with an `idealized' branching process that is defined without reference to n. In particular, this gives rise to a so-called exploration tree 𝒯_{v,δ} (see page 187 in <cit.>), which itself contains enough information to reconstruct all relevant tuples and components. In the second exposure phase we then reveal the order of the relevant tuples, using the rule ℛ to construct the component containing v (see Section 2.4.3 in <cit.>). We can eventually establish tight concentration since, by the subcritical first phase, 𝒯_{v,δ} typically contains rather few components and tuples (see Lemma 14 in <cit.>). Finally, the above discussion also explains why the inductive argument breaks down around t_c, since then a giant component emerges (in which case S_2(F) = ω(1), so for any δ = Θ(1) the branching process just described will be supercritical).

§.§.§ Distribution of the V_S–components

We now turn, for k ≥ 1 and r ≥ 0, to the number Q_{k,r}(i) of components of H_i of type (k,r). We shall prove that, starting from F = G_{i_0}, these random variables remain tightly concentrated for all i_0 < i ≤ i_1 (with exponential tails). The basic idea is to apply the argument outlined in Section <ref> for one interval of length δn = i_1 - i_0, see (<ref>)–(<ref>), using a minor twist to ensure that the corresponding exploration process remains subcritical even beyond t_c, exploiting the fact that we are restricting to bounded-size rules. Recall that in the definition of Q_{k,r}(i) we do not care about the endpoints in V_L of the incident V_S–V_L edges, see also Figure <ref>. With this in mind, the key observation is that the evolution of the components in V_L is irrelevant for the evolution of Q_{k,r}(i): it suffices to know that these have size ω (i.e., size > K).
So, starting with a vertex v ∈ V_S, in the exploration process associated with the first exposure phase (which finds all relevant tuples and components) we do not further test reached vertices w ∈ V_L (since we already know that these vertices are in components of size ω). To keep the differences from <cit.> minimal, we shall simply pretend that all vertices in V_L lie in distinct `dummy' components of size K+1, say (it would be more elegant to mark reached V_L–vertices, by introducing a new vertex type in Section 2.4.2 of <cit.>). Since δ = t_1 - t_0 = 2σ ≤ [ℓ^2(K+1)]^{-1}, see (<ref>)–(<ref>), the branching-out rate of (<ref>) thus changes to at most
δ · ℓ(ℓ-1) · (∑_{1 ≤ k ≤ K} k N_k(F)/n + (K+1) N_ω(F)/n) ≤ δℓ(ℓ-1)(K+1) < 1,
suggesting that the exploration process indeed remains subcritical. This makes it plausible that, by a minor variant of the proof used in <cit.>, we can track the evolution of the (Q_{k,r}(i))_{k≥1, r≥0} for all i_0 ≤ i ≤ i_1, i.e., show that they are tightly concentrated around deterministic trajectories, see (<ref>) below. Furthermore, since the associated exploration process is subcritical, we also expect that these have exponential tails, see (<ref>) and (<ref>) below.

Let (q_{k,r})_{k≥1, r≥0} be the functions defined in Lemma <ref>. There are constants b, B, D_Q, n_0 > 0 with B ≥ 1 such that for n ≥ n_0, with probability at least 1 - n^{-99}, the following hold for all k ≥ 1 and r ≥ 0:
max_{i_0 ≤ i ≤ i_1} |Q_{k,r}(i) - q_{k,r}(i/n)n| ≤ (log n)^{D_Q} n^{1/2},
max_{i_0 ≤ i ≤ i_1} Q_{≥k,≥r}(i) ≤ B e^{-b(k+r)} n,
sup_{t ∈ [t_0,t_1]} q_{k,r}(t) ≤ B e^{-b(k+r)},
where Q_{≥k,≥r}(i) := ∑_{k'≥k, r'≥r} Q_{k',r'}(i).

Recall that, by definition of the discrete variables, for i ≥ i_0 we have ∑_{k≥1, r≥0} k Q_{k,r}(i) = |V_S| = n - |V_L| = ∑_{1 ≤ k ≤ K} N_k(i_0); see (<ref>). Hence a standard comparison argument (using Lemma <ref> and exponential tails) shows that for t ∈ [t_0, t_1] we have
∑_{k≥1, r≥0} k q_{k,r}(t) = ∑_{1 ≤ k ≤ K} ρ_k(t_0).

Let δ_t = t - t_0. Starting with F = G_{i_0} satisfying the conclusions of Lemma <ref>, the core argument is a minor modification of the proof of Theorem 3 in <cit.>, with Q = K+1. As discussed, the main idea is to pretend that all vertices of V_L are in distinct (dummy) components of size K+1. In particular, when constructing the exploration tree 𝒯_{v,δ_t}, in the case |C_{u_i}(F)| > K at the top of page 188 in <cit.>, we simply add K+1 new `dummy' vertex nodes as the children of u_i (from the tree structure of 𝒯_{v,δ_t} it then is clear that u_i lies in a component of size > K, which is all we need in Section 2.4.3 of <cit.> to make the decisions of ℛ). Of course, later on in the exploration argument these dummy vertices need not be further tested for neighbours, but in the domination arguments of Lemmas 7 and 9 of <cit.> we shall pretend that they are tested (this only generates more fictitious vertices, which is safe for upper bounds). Let (k',r') be the `worst case' type of the component containing v, which results after adding all ℓ(ℓ-1)/2 edges in each ℓ-tuple appearing (the actual type (k,r) satisfies k ≤ k' and r ≤ r'). One important observation is that the number of vertex nodes of 𝒯_{v,δ_t} dominates k' + (K+1)r' ≥ k' + r'. With this in mind, the corresponding variants of Lemmas 7 and 9 of <cit.> yield exponential tails in k+r. All other parts of the proof of Theorem 3 in <cit.> carry over with only obvious minor changes; we leave the details to the interested reader.

We close this section by noting a simple bound on the derivatives q'_{k,r}(t). Define b > 0 as in Theorem <ref>.
There is a constant B' such that for all k ≥ 1 and r ≥ 0 we have
sup_{t ∈ [t_0,t_1]} |q'_{k,r}(t)| ≤ B' k^3(r+1) e^{-b(k+r)}.

By (<ref>), each of the quantities F_h appearing in (<ref>) and defined in (<ref>)–(<ref>) is (crudely) at most a constant times k^3(r+1)e^{-b(k+r)}. Recalling that |ρ_k(t)| ≤ 1 for all k ∈ 𝒞 = {1, 2, …, K, ω}, the claimed bound follows from (<ref>).

§.§ Analyticity

In this subsection we use PDE theory to establish analytic properties of an idealized version of the parameter list 𝒫_i, which will later be important for our branching process analysis (see Section <ref> and Appendix <ref>). In fact, we believe that our fairly general approach for establishing analyticity may be of independent interest. Before turning to the main result, we give a simple preparatory lemma. For the definition of (real) analytic, see Remark <ref>.

The functions (ρ_k)_{k ∈ 𝒞}, q_{0,2} and (q_{k,r})_{k≥1, r≥0} defined in Lemmas <ref>, <ref> and <ref> are analytic on (t_0, t_1).

For t ∈ [0, t_1] the functions (ρ_k)_{k ∈ 𝒞} are the unique solution to a finite system (<ref>) of ODEs. The Cauchy–Kovalevskaya Theorem for ODEs (see Theorem <ref> in Appendix <ref>) thus implies that the (ρ_k)_{k ∈ 𝒞} are analytic on (0, t_1). Recalling (<ref>)–(<ref>), equation (<ref>) shows that for t ∈ [t_0, t_1) the derivative q'_{0,2} is a polynomial function of known analytic functions. It follows that q_{0,2} is analytic on (t_0, t_1). Finally, for t ∈ [t_0, t_1), k ≥ 1 and r ≥ 0, the derivative q'_{k,r} is a polynomial function of (q_{k̃,r̃})_{1 ≤ k̃ ≤ k, 0 ≤ r̃ ≤ r} and known analytic functions, see (<ref>)–(<ref>). By induction on k+r, the Cauchy–Kovalevskaya Theorem (see Theorem <ref>) thus implies that each q_{k,r} is analytic on (t_0, t_1), completing the proof.

With the functions (q_{k,r})_{k≥1, r≥0} and q_{0,2} as defined in Lemmas <ref> and <ref>, our main aim in this section is to study the generating function
P(t,x,y) := ∑_{k,r ≥ 0} x^k y^r q_{k,r}(t),
where (for notational convenience) we set q_{0,r}(t) :≡ 0 for all r ≠ 2. Recall that t_0 = t_c - σ and t_1 = t_c + σ. With b as in the exponential tail bound (<ref>), let
𝒟 := (t_0, t_1) × (-e^{b/3}, e^{b/3})^2 ⊂ ℝ^3.
From (<ref>) it is easy to see that P = P(t,x,y) converges absolutely for (t,x,y) ∈ 𝒟. We shall now show that in fact P is (real) analytic in this domain.[Substituting the exponential tails (<ref>) into the equation (<ref>) for q'_{k,r}, by analyzing the combinatorial structure of the derivatives q^{(j)}_{k,r} it is possible to prove directly that sup_{t ∈ [t_0,t_1]} |q^{(j)}_{k,r}(t)| ≤ B_j e^{-b(k+r)/2}, say. It is then not difficult to show smoothness (infinite differentiability) of P = P(t,x,y). What we prove here is stronger.] In our approach the complementary conclusions of the differential equation method (the equations for q'_{k,r}) and the exploration tree approach (the exponential decay of q_{k,r}) work hand-in-hand with PDE theory (the Cauchy–Kovalevskaya Theorem).

The function P(t,x,y) is analytic in the domain 𝒟 defined in (<ref>). More precisely, for each t^* ∈ (t_0, t_1) there is a δ > 0 such that the function P(t,x,y) has an analytic extension to the complex domain
𝒟_δ(t^*) := {(t,x,y) ∈ ℂ^3 : |t - t^*| < δ and |x|, |y| < e^{b/3}}.
Our proof strategy is roughly as follows. Using the `nice' form of the differential equations q'_{k,r} (given by Lemma <ref>), we show that a minor modification of P satisfies a first-order PDE of the form P_t = F(t,x,y,P_x). A general result from the theory of partial differential equations (the Cauchy–Kovalevskaya Theorem, see Appendix <ref>) then allows us to deduce that this PDE has an analytic local solution, say P̂ = P̂(t,x,y). Here the exponential tail of the q_{k,r} (given by Theorem <ref>) will be a crucial input, ensuring that the boundary data of the corresponding PDE are analytic. To show that P̂ and P coincide (first-order PDEs can, in general, have additional non-analytic solutions), we substitute the Taylor series P̂(t,x,y) = ∑_{k,r,s} c_{k,r,s} x^k y^r (t-t^*)^s back into both sides of the PDE, and essentially show that the functions q̂_{k,r}(t) = ∑_s c_{k,r,s}(t-t^*)^s satisfy the same system of ODEs as the functions q_{k,r}(t). Exploiting that `nice' systems of ODEs have unique solutions we obtain q̂_{k,r}(t) = q_{k,r}(t), which by (<ref>) establishes P̂(t,x,y) = ∑_{k,r} x^k y^r q̂_{k,r}(t) = P(t,x,y), as desired.

With the definition (<ref>) of P and the convention (<ref>) in mind, let
R(t,x,y) := ∑_{k≥1, r≥0} x^k y^r q_{k,r}(t) = P(t,x,y) - y^2 q_{0,2}(t).
We shall show that, for each real t^* ∈ (t_0, t_1), the function R = R(t,x,y) has an analytic extension to a complex domain 𝒟_δ(t^*) as in the statement of the theorem. By Lemma <ref> it follows that P = P(t,x,y) has such an extension (after decreasing δ > 0, if necessary), and in particular that P is analytic in 𝒟.

We start with some basic properties of R in the slightly larger domain
𝒟^+ := {(t,x,y) ∈ ℝ × ℂ^2 : t ∈ (t_0, t_1) and |x|, |y| < e^{b/2}},
where b is as in (<ref>) and (<ref>). Since |x^k y^r| ≤ e^{b(k+r)/2} and |q_{k,r}(t)| ≤ B e^{-b(k+r)}, the sum in (<ref>) converges uniformly in 𝒟^+. For (t,x,y) ∈ 𝒟^+ we claim that the partial derivative R_t satisfies
R_t := ∂/∂t R(t,x,y) = ∑_{k≥1, r≥0} x^k y^r q'_{k,r}(t).
By basic analysis, to see this it suffices to show uniform convergence of the sum on the right of (<ref>). But this follows from the bound |q'_{k,r}(t)| ≤ B' k^3(r+1)e^{-b(k+r)} from Lemma <ref>. For (t,x,y) ∈ 𝒟^+ we similarly see that
R_x := ∂/∂x R(t,x,y) = ∑_{k≥1, r≥0} k x^{k-1} y^r q_{k,r}(t).

The plan now is to substitute our formulae for the derivatives q'_{k,r} into (<ref>) and then rewrite the resulting expression in terms of known expressions and functions (in order to eventually obtain a PDE for R). Turning to the details, for 𝐜 ∈ 𝒞^ℓ with 𝒞 = {1, …, K, ω} we define for brevity
Ψ_𝐜 := ∏_{j ∈ [ℓ] ∖ {j_1,j_2}} ρ_{c_j}(t),
where, as usual in this section, {j_1, j_2} = ℛ(𝐜). Thus we may write (<ref>) as
q'_{k,r}(t) = ∑_{𝐜 ∈ 𝒞^ℓ} Ψ_𝐜 ∑_{1 ≤ h ≤ 3} F_h(k,r,𝐜),
with the F_h defined in (<ref>)–(<ref>). Substituting the formula for q'_{k,r} given by (<ref>) into (<ref>), we see that
R_t = ∑_{𝐜 ∈ 𝒞^ℓ} Ψ_𝐜 ∑_{k≥1, r≥0} x^k y^r ∑_{1 ≤ h ≤ 3} F_h(k,r,𝐜).

Recalling that components with more than K vertices are formally assigned size ω, we expect that in (<ref>) almost all terms in the multiple sum come from the case (c_{j_1}, c_{j_2}) = (ω, ω). With this in mind, let F̂_1–F̂_3 be modified versions of F_1–F_3 where all conditions c_{j_x} = s(·,·) in (<ref>)–(<ref>) are replaced by c_{j_x} = ω. Thus
F̂_1(k,r,𝐜) = ∑_{k_1+k_2=k: k_1,k_2≥1; r_1+r_2=r: r_1,r_2≥0} k_1 q_{k_1,r_1} k_2 q_{k_2,r_2} 1[c_{j_1} = ω, c_{j_2} = ω],
recalling that {j_1, j_2} = ℛ(𝐜).
Since the indicator function can be moved outside the sum, using x^{k-2} y^r k_1 k_2 = (k_1 x^{k_1-1} y^{r_1})(k_2 x^{k_2-1} y^{r_2}) and then (<ref>) we see that
∑_{k≥1, r≥0} x^k y^r F̂_1(k,r,𝐜) = 1[c_{j_1} = c_{j_2} = ω] x^2 ∑_{k≥1, r≥0} ∑_{k_1+k_2=k: k_1,k_2≥1; r_1+r_2=r: r_1,r_2≥0} x^{k-2} y^r k_1 q_{k_1,r_1} k_2 q_{k_2,r_2} = 1[c_{j_1} = c_{j_2} = ω] x^2 (R_x)^2.
Proceeding analogously, since
F̂_2(k,r,𝐜) = 2 · 1[r ≥ 1] k q_{k,r-1} ρ_ω(t_0) 1[c_{j_1} = ω, c_{j_2} = ω],
using k x^k y^r = xy · k x^{k-1} y^{r-1} and (<ref>) we deduce
∑_{k≥1, r≥0} x^k y^r F̂_2(k,r,𝐜) = 2ρ_ω(t_0) 1[c_{j_1} = c_{j_2} = ω] xy ∑_{k,r≥1} k x^{k-1} y^{r-1} q_{k,r-1} = 2ρ_ω(t_0) 1[c_{j_1} = c_{j_2} = ω] xy R_x.
Similarly, but more simply,
∑_{k≥1, r≥0} x^k y^r F̂_3(k,r,𝐜) = -x R_x [1[c_{j_1} = ω] ρ_{c_{j_2}}(t) + ρ_{c_{j_1}}(t) 1[c_{j_2} = ω]].
Since Ψ_𝐜, defined in (<ref>), is a polynomial in the functions (ρ_k)_{k ∈ 𝒞}, multiplying (<ref>)–(<ref>) by Ψ_𝐜 and summing over the finite set 𝒞^ℓ, we see that
∑_{𝐜 ∈ 𝒞^ℓ} Ψ_𝐜 ∑_{k≥1, r≥0} x^k y^r ∑_{1 ≤ h ≤ 3} F̂_h(k,r,𝐜) = x^2 (R_x)^2 f_1(t) + xy R_x f_2(t) + x R_x f_3(t),
where each f_h(t) is a polynomial in the functions (ρ_j)_{j ∈ 𝒞}.

We now turn to the differences F_h - F̂_h for h ∈ {2,3}. Recalling (<ref>), for F_2 defined in (<ref>) we note that s(k,r-1) ≠ ω if and only if k ≤ K and r = 1. Hence F_2(k,r,𝐜) = F̂_2(k,r,𝐜) whenever k > K or r ≠ 1. For F_3 defined in (<ref>) we note that s(k,r) ≠ ω if and only if k ≤ K and r = 0. Hence F_3(k,r,𝐜) = F̂_3(k,r,𝐜) whenever k > K or r > 0. Considering the finite number of cases with F_h ≠ F̂_h, it follows that
∑_{𝐜 ∈ 𝒞^ℓ} Ψ_𝐜 ∑_{k≥1, r≥0} x^k y^r ∑_{h ∈ {2,3}} [F_h(k,r,𝐜) - F̂_h(k,r,𝐜)] = ∑_{1 ≤ k ≤ K} ∑_{0 ≤ r ≤ 1} x^k y^r f_{k,r}(t),
where each f_{k,r}(t) is a polynomial in the functions (ρ_j)_{j ∈ 𝒞} and (q_{j,0})_{1 ≤ j ≤ K}.

For the difference F_1 - F̂_1 more care is needed. Subtracting (<ref>) from (<ref>) we have
F_1(k,r,𝐜) - F̂_1(k,r,𝐜) = ∑_{k_1+k_2=k: k_1,k_2≥1; r_1+r_2=r: r_1,r_2≥0} k_1 q_{k_1,r_1} k_2 q_{k_2,r_2} [1[c_{j_1} = s(k_1,r_1), c_{j_2} = s(k_2,r_2)] - 1[c_{j_1} = c_{j_2} = ω]],
where {j_1, j_2} = ℛ(𝐜), as usual. Since s(k_h,r_h) ≠ ω implies k_h ≤ K and r_h = 0, for (k,r) = (k_1+k_2, r_1+r_2) with k > 2K or r ≥ 1 at most one of the two events s(k_1,r_1) ≠ ω and s(k_2,r_2) ≠ ω can occur. Thus, considering these events in turn, when (k,r) satisfies k > 2K or r ≥ 1 we have
F_1(k,r,𝐜) - F̂_1(k,r,𝐜) = Δ_1(k,r,𝐜) + Δ_2(k,r,𝐜),
where
Δ_1(k,r,𝐜) := ∑_{1 ≤ k_1 ≤ K} 1[k > k_1] k_1 q_{k_1,0} (k-k_1) q_{k-k_1,r} [1[c_{j_1} = k_1] - 1[c_{j_1} = ω]] 1[c_{j_2} = ω],
and Δ_2 = Δ_2(k,r,𝐜) is defined similarly, swapping the roles of (k_1,r_1,j_1) and (k_2,r_2,j_2). Relabeling k_2 in the sum in Δ_2 as k_1, and changing the order of the product, we can write Δ_2 in the same form as Δ_1 but with different indicator functions. It follows that when (k,r) satisfies k > 2K or r ≥ 1, then
F_1(k,r,𝐜) - F̂_1(k,r,𝐜) = ∑_{1 ≤ k_1 ≤ K} 1[k > k_1] k_1 q_{k_1,0} (k-k_1) q_{k-k_1,r} I(𝐜,k_1)
for some coefficients I(𝐜,k_1) ∈ {-2,-1,0,1,2}.
Returning to the difference F_1-_1: writing x^ky^r(k-k_1)=x^k_1+1 (k-k_1)x^k-k_1-1y^r, we have ∑_k ≥ 1, r≥ 0 x^ky^r k > k_1 k_1 q_k_1,0 (k-k_1) q_k-k_1,r I(,k_1)=I(,k_1) x^k_1+1 k_1q_k_1,0 R_x. Now the formula (<ref>) does not apply to (k,r) with k≤ 2K and r=0. However, there are only finitely many such terms in (<ref>). Using (<ref>)–(<ref>) it thus follows that ∑_∈^ℓΨ_∑_k ≥ 1,r ≥ 0 x^ky^r[F_1(k,r,)-_1(k,r,)]= ∑_1 ≤ k_1 ≤ K x^k_1+1 R_x g_k_1(t)+ ∑_1 ≤ k ≤ 2K x^k g_k,0(t), where each g_k_1(t) and g_k,0(t) is a polynomial in the functions (ρ_j)_j ∈ and (q_j,0)_1 ≤ j ≤ 2K, say. To sum up, writing F_h = _h + (F_h-_h) and substituting (<ref>), (<ref>) and (<ref>) into (<ref>), for (t,x,y) ∈^+ we arrive at a first-order PDE of the form R_t = ∑_1 ≤ k ≤ 2K∑_0 ≤ r ≤ 1∑_0 ≤ s ≤ 2 f_k,r,s(t)x^k y^r (R_x)^s, where each function f_k,r,s(t) is a polynomial in the functions (ρ_j)_j ∈ and (q_j,0)_1 ≤ j ≤ 2K. Since finite sums and products of analytic functions are analytic, by Lemma <ref> it follows that the functions f_k,r,s(t) are analytic in (t_0,t_1). Given _0∈ (t_0,t_1), we shall formally define the initial data of the PDE via R(_0,x,y)=R^_0(x,y) , where R^_0(x,y):=R(_0,x,y) is a two-variable function. For each t ∈ (t_0,t_1) we claim that R^t(x,y):=R(t,x,y) is analytic in the complex domain = (b/2) := {(x,y) ∈^2 : |x|,|y|<e^b/2} . This is routine: with t ∈ (t_0,t_1) fixed, (<ref>) defines R^t(x,y) as a power series which, as noted earlier, converges absolutely if |x|,|y| ≤ e^b/2. By standard results, R^t is thus analytic in . The plan now is to fix _0∈ (t_0,t_1) and construct an analytic local solution =(t,x,y) to the PDE (<ref>) with initial data (<ref>). Then we shall show that for real t near _0 and complex x,y with |x|,|y| ≤ e^b/3, this solution coincides with R=R(t,x,y) as defined in (<ref>). This shows that  is the required analytic extension of R. Turning to the details, fix _0 ∈ (t_0,t_1). Since the functions (ρ_j)_j ∈ and (q_j,0)_1 ≤ j ≤ 2K are real analytic, each has a complex analytic extension to a neighbourhood of _0. Thus we may extend these functions simultaneously to a complex domain of the form = () : = {t ∈: |t-_0| < } , where =(_0)>0. Recalling the definition of the functions f_k,r,s(t) in (<ref>), there is thus a (complex) analytic function F:××→ such that for (t,x,y)∈× the first-order PDE (<ref>)–(<ref>) may be written as R_t = F(t,x,y,R_x), with analytic initial data (<ref>) for (x,y) ∈. Applying a convenient version of the Cauchy–Kovalevskaya Theorem for first-order PDEs (see Theorem <ref> in Appendix <ref>), there exists 0 < δ < such that with = _δ(_0) := (δ) ×(b/3)⊂^3 the following holds: there is a function (t,x,y) = ∑_k,r,s ≥ 0 c_k,r,s x^ky^r(t-_0)^s which is analytic in the complex domain , and which satisfies (<ref>)–(<ref>) for all (t,x,y) ∈ (with R replaced by ). Define _k,r(t) := ∑_s ≥ 0 c_k,r,s(t-_0)^s , so that  can be written as (t,x,y) = ∑_k,r ≥ 0 x^ky^r _k,r(t) . Now (_0,x,y) and R^_0(x,y)=R(_0,x,y) are both analytic for (x,y) ∈(b/3), and they agree on this domain.
By the uniqueness of Taylor series it follows that _k,r(_0) = q_k,r(_0) if k ≥ 1 and r ≥ 0, and _k,r(_0) = 0 otherwise. Note that all terms on the right hand side of the partial time-derivative (<ref>) contain some power x^j with j ≥ 1. Since =(t,x,y) satisfies (<ref>) (with R replaced by ), using ∂/∂ t(t,x,y) = ∑_k,r ≥ 0 x^ky^r '_k,r(t) we readily infer '_0,r(t)=0, which by (<ref>) implies _0,r(t) ≡ 0. Note that  from (<ref>) can thus be simplified to (t,x,y) = ∑_k ≥ 1, r ≥ 0 x^ky^r _k,r(t) , which closely mimics the form of R defined in (<ref>). For brevity, we shall henceforth always tacitly assume k ≥ 1 and r ≥ 0. Substituting =(t,x,y) as written in (<ref>) into both sides of (<ref>) (with R replaced by ), we now compare the coefficients of the x^ky^r terms on both sides. By tracing back how we arrived at (<ref>)–(<ref>) and (<ref>)–(<ref>), it is not difficult to see (by effectively doing all our calculations `in reverse') that we obtain differential equations of the form [note that '_k,r in (<ref>) contains some q_j,0 terms, which arise due to the functions f_k,r,s in (<ref>)] '_k,r(t) = J_k,r((ρ_j(t))_j ∈,(_i,j(t))_1 ≤ i ≤ k 0 ≤ j ≤ r,(q_j,0(t))_1 ≤ j ≤ 2K) , where the structure of the polynomial functions J_k,r coincides with the derivatives of the q_k,r in the sense that these satisfy q'_k,r(t) = J_k,r((ρ_j(t))_j ∈,(q_i,j(t))_1 ≤ i ≤ k 0 ≤ j ≤ r,(q_j,0(t))_1 ≤ j ≤ 2K) . Analogous to Lemma <ref>, the form of the infinite system of real differential equations (<ref>) and (<ref>) ensures that it has a unique solution (_k,r)_k ≥ 1,r≥ 0 in (_0-δ,_0+δ). From (<ref>), the functions q_k,r(t) satisfy this system of differential equations (the boundary condition holds trivially), so q_k,r(t)=_k,r(t) in this (real) interval, and hence R(t,x,y)=(t,x,y) for all (t,x,y) ∈=_δ(_0) with real t ∈. Since  is analytic in _δ(_0), this shows that R has the required analytic extension, completing the proof. Note that if the PDE (<ref>)–(<ref>) had a unique solution, then the last part of the above proof would be redundant (where we show that analytic local solutions extend R). For the interested reader we mention that S=xR_x satisfies a quasi-linear PDE, where S(t,1,1)=∑_k ≥ 1, r≥ 0 kq_k,r(t) is constant by Remark <ref>. Keeping the notational convention (<ref>), we now derive some basic properties of u(t) := ∑_k,r ≥ 0 r(r-1) q_k,r(t) = P_yy(t,1,1) . The function u(t) is (real) analytic for t ∈ (t_0,t_1), with u(t)≥ u(t_0)=0. Furthermore, u'(t) > 0 for all t ∈ (t_0,t_1). That u'(t) > 0 is fairly intuitive by (i) noting that we have q'_0,2(t) > 0 by Lemma <ref>, and (ii) observing that the discrete random variable W(i)=∑_k ≥ 1, r ≥ 0 r(r-1)Q_k,r(i) is non-decreasing. Observing that u(t)=P_yy(t,1,1), Theorem <ref> immediately implies that u(t) is analytic for t ∈ (t_0,t_1). Furthermore, by Lemmas <ref> and <ref> we have q_k,r(t) ≥ 0, which implies u(t) ≥ 0. By Lemma <ref> and (<ref>) of Lemma <ref> we also have q_k,r(t_0)=0 for all r ≥ 1, which readily yields u(t_0) = 0. It remains to establish that u'(t)>0. Recalling the convention (<ref>) we have u(t) = 2 q_0,2(t) + w(t) where w(t) = ∑_k ≥ 1, r ≥ 2 r(r-1) q_k,r(t). Since q'_0,2(t) > 0 by Lemma <ref>, it suffices to prove w'(t) ≥ 0 for t ∈ (t_0,t_1). The basic strategy is to compare w(t) with W(i)=∑_k ≥ 1, r ≥ 2 r(r-1)Q_k,r(i). Combining the inequalities (<ref>)–(<ref>) of Theorem <ref> (which imply Q_k,r(i) = 0 if k+r ≥ (log n)^2, say), it follows that whp max_i_0 ≤ i ≤ i_1 |W(i) - w(i/n) n| ≤ (log n)^D_Q + 9 n^1/2.
To prove w'(t) ≥ 0 for t ∈ (t_0,t_1), it suffices to show w(τ_2) ≥ w(τ_1) for all t_0 < τ_1 ≤τ_2 < t_1. Here the key observation is that W(i) cannot decrease in any step (after adding a V_S–V_L or V_S–V_S edge we have W(i+1) ≥ W(i), and V_L–V_L edges are irrelevant). Indeed, recalling i_j = t_j n, using (<ref>) it follows for all t_0 < τ_1 ≤τ_2 < t_1 that w(τ_2)-w(τ_1) ≥ - 2(log n)^D_Q + 9 n^-1/2 . Since w(t) does not depend on n, we thus have w(τ_2) -w(τ_1) ≥ 0, completing the proof. §.§ Sprinkling In this subsection we introduce a dynamic variant of the classical Erdős–Rényi sprinkling argument from <cit.> (see Lemmas 4 and 6 of <cit.> for related arguments), which will later be key for studying the size of the largest component of G_i (see Section <ref>). Intuitively, sprinkling quantifies the following idea: if there are many vertices in large components, then most of these components should quickly merge to form a `giant' component as the process evolves. Later we shall apply Lemma <ref> below with Λ=ω(^-2), x=Θ( n) and ξ= o(1) chosen such that Δ_Λ,x,ξ = o( n) and x/Λ = ω(1). For any bounded size ℓ-vertex rule with cut-off K there are constants λ,η>0 such that the following holds. Let _i,Λ,x,ξ denote the event that N_≥Λ(i) ≥ x implies L_1(i+Δ_Λ,x,ξ) ≥ (1-ξ) N_≥Λ(i), where Δ_Λ,x,ξ = λ n^2/(ξΛ x). Then for all i ≥ i_0, x ≥Λ > K and ξ >0 we have (_i,Λ,x,ξ) ≤exp(-η x/Λ) + n^-ω(1). We may assume ξ∈ (0,1), since the claim is trivial otherwise. As N_ω(i) is monotone increasing in i, by (<ref>) and Lemma <ref> there is a constant α>0 such that, with probability 1-n^-ω(1), we have min_i ≥ i_0 N_ω(i) ≥ N_ω(i_0) ≥α n . Let λ = 4/α^ℓ-2 and η=1/9. Note that, conditional on G_i satisfying N_ω(i) ≥α n and N_≥Λ(i) ≥ x, it suffices to show that _i,Λ,x,ξ fails with (conditional) probability at most exp(-η x/Λ). Turning to the details, let W denote the union of all components of G_i with size at least Λ. Clearly, the number of components of G_j meeting W is (a) at most |W|/Λ in step j=i, and (b) monotone decreasing as j increases. Moreover, until there is a component containing at least (1-ξ)|W| vertices, in each step we have probability at least min_i' ≥ i (N_ω(i')/n)^ℓ-2 · |W|/n ·ξ|W|/n ≥α^ℓ-2ξ(|W|/n)^2 =: q of joining two vertices from W that are in distinct components (equation (<ref>) exploits that the bounded size ℓ-vertex rule has cut-off K < Λ), in which case the number of components meeting W reduces by one; for later reference we call such steps joining. Recalling |W|= N_≥Λ(i) ≥ x and x ≥Λ, define M := 2/q·|W|/Λ≤4 n^2/α^ℓ-2ξΛ |W|≤4 n^2/α^ℓ-2ξΛ x = Δ_Λ,x,ξ, and note that qM ≥ 2x/Λ. Starting from G_i, using standard Chernoff bounds (and stochastic domination) it follows that, with probability at least 1-exp(- η x/Λ), say, after at most M additional steps either (i) at least qM/2 ≥ |W|/Λ joining steps occurred, which is impossible, or (ii) there is a component containing at least (1-ξ)|W| vertices. In case (ii) we have L_1(i+M) ≥ (1-ξ)|W| = (1-ξ)N_≥Λ(i), which completes the proof. §.§ Periodicity and reachable component sizes In this subsection we study the component sizes that can appear in the random graph process (G^_n,i)_i ≥ 0. In a standard Achlioptas process (i.e., an `edge rule'), all component sizes are possible, since if the rule is presented with two potential edges each of which would join an isolated vertex to a component of size k, then it must form a component of size k+1.
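Before developing the periodicity machinery, we note that the sprinkling mechanism of the lemma above is easy to simulate. The toy union-find sketch below is ours: it replaces the ℓ-vertex rule by a uniformly random edge (a simplification, not the process from the lemma) and starts from n/Λ components of size exactly Λ, so that x = N_≥Λ(i) = n; the observed number of steps until a component of size (1-ξ)n forms is then of the order n/(ξΛ), in line with Δ_Λ,x,ξ up to the constant λ.

```python
import random

def find(root, v):
    while root[v] != v:
        root[v] = root[root[v]]      # path halving
        v = root[v]
    return v

def sprinkle(n=10**4, Lam=20, xi=0.05, seed=0):
    rng = random.Random(seed)
    root = [(v // Lam) * Lam for v in range(n)]    # n/Lam blocks of size Lam
    size = {r: Lam for r in range(0, n, Lam)}
    biggest, steps = Lam, 0
    while biggest < (1 - xi) * n:
        steps += 1
        ru, rv = find(root, rng.randrange(n)), find(root, rng.randrange(n))
        if ru != rv:                               # a 'joining' step
            root[ru] = rv
            size[rv] += size.pop(ru)
            biggest = max(biggest, size[rv])
    return steps

print(sprinkle())   # of order n/(xi*Lam), matching Delta_{Lam,x,xi} with x = n
```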
Indeed, the observation that an edge rule can always grow a component by a single vertex easily leads to an inductive lower bound on the rate of formation of components of size k as a function of k and t (see below). For ℓ-vertex rules, this need not be the case. For example, there are 3-vertex rules that never join components of size 1 and 2, and so never form a component of size 3. Indeed, the rule may always choose a larger component to join to something else, if available, and if presented with three components of sizes 1 and 2, may join two of the same size. This phenomenon affects not only small component sizes: there are bounded-size rules which never join an isolated vertex to any component other than another isolated vertex, for example, and so only create components (other than isolated vertices) of even size. This effect needs to be taken into account when considering the large-k asymptotics of ρ_k(t) and N_k(i). Fortunately, for the asymptotics, there is only one relevant parameter, the `period' of the rule, defined below. In an Achlioptas process we have =1; handling the general case requires no major new ideas, but is a little fiddly. The reader may thus wish to skip the rest of this section, and to focus on the (most important) case =1 throughout the paper. Recall that an ℓ-vertex size rule  is defined by a function :(^+)^ℓ→[ℓ]2 giving the (distinct) indices of the vertices that the rule will join when presented with vertices in components of size (c_1,…,c_ℓ). For each such size-vector , let s() denote the size of the resulting component, in the case that the edge added does join two components. (If not, no new component size results.) Thus s()=c_j_1+c_j_2, where ()={j_1,j_2}. Define the set = of reachable component sizes to be the smallest subset of the positive integers such that 1 ∈ and ∈^ℓ s()∈. In other words, =⋃_r≥ 0_r, where _0={1} and _r+1=_r ∪{s():∈_r^ℓ}. If the rule is an Achlioptas process, then we have =^+. Indeed, an Achlioptas process is a 4-vertex rule in which {j_1,j_2} is always either {1,2} or {3,4}. For such a rule s(k,1,k,1)=k+1, and it follows by induction that k∈ for all k≥ 1. For bounded-size ℓ-vertex rules we now record that, beyond the cut-off K, all elements of  are multiples of , and that beyond some perhaps larger integer, all multiples of  are in . Let  be a bounded-size ℓ-vertex rule with cut-off K. Then there are integers ,≥ 1, with  a power of two, such that for k≥ we have k∈ if and only if k is a multiple of . Furthermore, if k∈ and k>K, then k is a multiple of . We shall write  for  to avoid clutter. Let a be the smallest integer such that 2^a > K, and define ^+={i∈: i≥ 2^a}. For any rule, we have s(i,i,…,i)=2i, so i∈ implies 2i∈ and  contains all powers of two (as used earlier in the proof of Lemma <ref>). If a size-vector  has all c_i≥ 2^a, then the bounded-size rule `sees' only large components (size >K), and so makes some fixed choice, say (relabeling if needed) {j_1,j_2}={1,2}. It follows that if i∈^+ then i+2^a∈^+, since s(i,2^a,…,2^a)=i+2^a. Let  be the set of residue classes modulo 2^a that appear in ^+. Within each residue class in , if k is the smallest element of the class included in ^+, then we have k,k+2^a,k+2· 2^a,…∈^+. It follows that beyond some constant  (which may be significantly larger than 2^a) we have i∈^+ if and only if i is in one of the classes in . Considering the case when one component has size i and the others have size j, we see (as above) that i,j∈^+ implies i+j∈^+. Hence  is closed under addition, and is thus a subgroup of /2^a. Hence there is some number , a divisor of 2^a, such that  consists of all multiples of .
Hence, for i≥ := max{,2^a}, we have i∈ (which is equivalent to i∈^+) if and only if i is a multiple of . For the final statement, if k > K, then we have k+m2^a∈ for all m≥ 0, by induction on m (using s(k+i2^a,2^a,…,2^a)=k+(i+1)2^a, as above). Since k+m2^a≥ for m large enough, we have that  divides k+m2^a. Since  is a divisor of 2^a, we deduce that  divides k. We call , which is uniquely defined by the properties given in Lemma <ref>, the period of the rule. The constant  is not uniquely defined; for definiteness, we may take it to be the minimal integer with the given property. For Achlioptas processes we have =1 and =1 since =^+ (as noted above). We state this observation as a lemma for ease of reference. If  corresponds to a bounded-size Achlioptas process, then =^+, and ==1. The component sizes in  are all those that can possibly appear in G^_n,i; we next note that for any k∈ we will see many components of this size – for i=Θ(n) and n large enough, on average a constant fraction of the vertices will be in components of size k. Recall that N_k(tn) ≈ρ_k(t) n (see Lemma <ref> and Theorem <ref>). Let  be a bounded-size rule. Let (ρ_k)_k ≥ 1 be the functions defined in Lemma <ref>. If t ∈ (0,t_1] and k ≥ 1, then ρ_k(t)>0 if and only if k ∈. From Lemma <ref> it follows that, for each fixed k ≥ 1, whp we have max_0 ≤ i ≤ t_1 n |N_k(i)/n - ρ_k(i/n)| ≤ (log n) n^-1/2 = o(1) . In the case k ∉, by construction we have N_k(i)=0 for all i ≥ 0 (with probability one), so ρ_k(t)=0 for all t ∈ [0,t_1]. We now turn to the case k ∈. Instead of adapting the differential inequality (<ref>)-based proof of Lemma <ref>, we here give a perhaps more intuitive alternative argument, which extends more easily to the functions q_k,r. If 0 ≤ i'<i and C is a component of G^_n,i' with k vertices, then C is also a component of G^_n,i with (conditional) probability at least (1-k/n)^ℓ(i-i'), simply by considering the event that none of the ℓ random vertices in any of steps i'+1,…,i falls in C (see Lemma 5 in <cit.> for similar reasoning). For k constant and i-i'=O(n), this probability is exp(-kℓ (i-i')/n +o(1)), so (N_k(i) | G^_n,i') ≥ N_k(i') ·exp(-kℓ (i-i')/n +o(1)). Together with (<ref>) it follows easily that for 0 ≤ t' ≤ t we have ρ_k(t)≥ρ_k(t')e^-kℓ(t-t'). Define _r as in (<ref>). We now show that for each r≥ 0, for every k∈_r and t∈ (0,t_1] we have ρ_k(t)>0. We prove this by induction on r. The base case r=0 is immediate, since ρ_1(0)=1 and so ρ_1(t)≥ e^-ℓ t>0 by (<ref>). For the induction step, let k∈_r+1 and t∈ (0,t_1]. Then there are c_1,…,c_ℓ∈_r with s()=k. By induction we have ρ_c_j(t/2)>0 for j=1,…,ℓ. Hence, using (<ref>), there is some δ=δ(k,t)>0 such that ρ_c_j(t')≥δ for all t'∈ [t/2,t]. For n large enough, in each step i with tn/2≤ i≤ tn, by (<ref>) we thus have probability at least, say, δ^ℓ/2 of selecting vertices in distinct components of sizes c_1,…,c_ℓ, and thus forming a component with k vertices. Such a component, once formed, has probability at least e^-k ℓ t/2+o(1) of surviving to step tn, as above. It follows that N_k(tn) ≥ e^-kℓ t/2+o(1)δ^ℓ/2 · t n/2=Ω(n). By (<ref>) this implies ρ_k(t) >0. As well as the possible component sizes, we need to consider the possible `sizes' of (k,r)-components that can appear in the marked graph H_i defined in Section <ref> (see Figure <ref>). Recall that Q_k,r(tn) ≈ q_k,r(t) n (see Lemmas <ref>–<ref> and Theorem <ref>). Let  be a bounded-size rule with cut-off K. Define (k,r)-components as in Section <ref>, and let (q_k,r)_k≥ 1, r≥ 0 be the functions defined in Lemma <ref>.
Then there is a set ^*_⊂^2 with the following properties: * the marked graph H_i can only contain (k,r)-components with (k,r)∈, * (0,r)∈^*_ if and only if r=2, * for t∈ (t_0,t_1], k≥ 1 and r≥ 0 we have q_k,r(t)>0 if and only if (k,r)∈^*_, * (k,0)∈ if and only if k∈, * if k>K, r≥ 0 and k∈, then (k,r)∈, and * if (k,r)∈ and r≥ 1, then k is a multiple of . We define ^*_ to be the set of all pairs (k,r) such that it is possible (for some n and i) for a (k,r)-component to appear in the marked graph H_i, together with the exceptional pair (0,2), whose inclusion is convenient later. Thus properties <ref> and <ref> hold by definition; pairs (0,r) play no role in the other properties. By Lemma <ref> we have q_1,0(t_0)=ρ_1(t_0)/1>0, and whp max_i_0 ≤ i ≤ i_1 |Q_k,r(i)/n-q_k,r(i/n)|=o(1) for fixed k ≥ 1 and r ≥ 0. It is clearly possible to give a recursive description of ∖{(0,2)} similar to that for , starting with {(1,0)}. Property <ref> thus follows by an argument analogous to the proof of Lemma <ref>; we omit the details. For property <ref>, note that a (k,0) component in H_i is a k-vertex component in G_i, so (k,0)∈ implies k∈. In the reverse direction, if k∈ then there is a finite sequence of steps (corresponding to the recursive description of ) by which a k-vertex component may be `built' from isolated vertices. This sequence is also possible starting after step i_0, using isolated vertices in V_S. This shows that (k,0)∈. A slight extension of the previous argument proves <ref>. Indeed, if k∈ and k>K, then it is possible for a (k,0) component C to form in V_S. Since k>K (so this component is large) it is possible in a later step for an edge to be added joining C to V_L; this can happen any number of times. Finally, for <ref>, note that V_L may contain components of any size k∈ with k>K. This includes all large enough multiples of . Since a (k,r)-component may join to r such components, which may happen not to be joined to any other (k',r')-components, we see that if (k,r)∈, then k+rm∈ for all large enough m. By Lemma <ref>, it follows that k is itself a multiple of . § COMPONENT SIZE DISTRIBUTION: COUPLING ARGUMENTS In this section we study a variant of the random graph J_i=J(_i) introduced in Section <ref>. Our goal is to relate the component size distribution of J_i to a `well-behaved' branching process _i/n; our analysis hinges on a step-by-step neighbourhood exploration process. The general idea of comparing such exploration processes with a branching process is nowadays standard, although the details are more involved than usual. In this section we take a `static' viewpoint, considering a single value of i=i(n), which we throughout assume to lie in the range i_0≤ i≤ i_1 with i_0 and i_1 defined as in (<ref>); the associated `time' t=i/n satisfies t ∈ [t_0,t_1]. Although in the end we wish to analyze J(_i), we will also consider random graphs constructed from other parameter lists , motivating the following definition. A parameter list is an ordered pair =((N_k)_k > K,(Q_k,r)_k,r ≥ 0) where each N_k is an integer multiple of k and each Q_k,r is a non-negative real number. We always assume, without further comment, that only finitely many of the N_k and Q_k,r are non-zero. An important example is the random parameter list _i defined in (<ref>), arising from the random graph process G^_n,i after i steps. In this case, each Q_k,r∈, but it will be useful to allow non-integer values for the Q_k,r later. Given a parameter list as above, we denote the individual parameters in  by N_k=N_k() and Q_k,r=Q_k,r().
Let =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) be a parameter list with N_k,Q_k,r∈. We define the initial graph H=H() as follows. For each k ≥ 1 and r≥ 0 take Q_k,r type-(k,r) components (i.e., components with k vertices and r `stubs'); the union of their vertex sets is V_S=V_S(). In addition, take N_k/k components of size k for each k > K; the union of their vertex sets is V_L=V_L(). The order of  is || :=|V_S| + |V_L| = ∑_k≥ 1, r≥ 0 k Q_k,r + ∑_k>K N_k. We construct the random graph J=J() by (i) connecting each stub of each (k,r)-component in H to an independent random vertex in V_L, and (ii) for each r ≥2 adding Q_0,r random hyperedges (x_1, …, x_r) ∈ (V_L)^r to H, where the vertices x_j are all chosen independently and uniformly at random from V_L. When Q_0,r=0 for r ≠ 2, as is the case for the random parameter list _i defined in (<ref>), the construction above is exactly the construction described in Section <ref> (see also Figure <ref>); the slightly more general form here avoids unnecessary case distinctions later. Hence, we may restate Lemma <ref> as follows. Let i=i(n) satisfy i_0≤ i≤ i_1, and let _i be the random parameter list generated by the random graph process G^_n,i. Then, conditional on _i, the random graphs J(_i) and G_i=G^_n,i have the same component size distribution. For technical reasons it will often be convenient to work with a `Poissonized' version of J(). Let =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) be a parameter list with N_k ∈ and Q_k,r∈ [0,∞). The Poissonized random graph =() is defined exactly as in Definition <ref>, except that the numbers of type-(k,r) components are now independent Poisson random variables with mean Q_k,r. It will be useful to think of the (k,r)–components (together with their r adjacent edges) as distinct r-uniform hyperedges with weight k, i.e., with k attached V_S–vertices. Indeed, this point of view unifies (i) and (ii) from our construction: both then correspond to hyperedges g=(x_1, …, x_r) ∈ (V_L)^r of weight k, with independent x_j ∈ V_L. Using standard splitting properties of Poisson processes, it is now easy to see that, for all k ≥ 0 and r ≥ 0, hyperedges g=(x_1, …, x_r) ∈ (V_L)^r of weight k, henceforth referred to as (k,r)–hyperedges, appear in () according to independent Poisson processes with rate _k,r = _k,r() := Q_k,r/|V_L|^r = Q_k,r()/|V_L()|^r , where Q_k,r and |V_L|=∑_k > K N_k are determined by the parameter list . This is essentially a version of the inhomogeneous random hypergraph model of Bollobás, Janson and Riordan <cit.>, though with the extra feature of weights on the hyperedges, and built on top of the (deterministic, when  is given) initial graph H_L = H_L() := H[V_L] = H()[V_L()] , i.e., the graph on V_L consisting of N_k/k components, each of size k>K, see also Figure <ref>. It is this model that we shall work with much of the time. In this section our main focus is on parameter lists which satisfy the typical properties of those arising from (G^_i)_i_0 ≤ i ≤ i_1 derived in Section <ref>, which we now formalize. For t∈ [t_0,t_1] we say that a parameter list =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) is t-nice if it satisfies N_k,Q_k,r∈ and the following conditions, where the constants D_N, A, a and D_Q, B, b are as in Theorems <ref> and <ref>: |N_k-ρ_k(t_0)n|≤ (log n)^D_N n^1/2 ∀k>K, |Q_k,r-q_k,r(t)n|≤ (log n)^D_Q n^1/2 ∀k,r≥ 0, N_≥ k ≤ A e^-akn ∀k>K, Q_k,r ≤ B e^-b(k+r)n ∀k,r≥ 0, ||= n, N_k>0 k∈ and Q_k,r>0 (k,r)∈. Note that the definition involves n, so formally we should write (n,t)-nice. However, the value of n should always be clear from context.
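As a concrete illustration of the constructions above, the following sketch builds the component size distribution of J() for a toy parameter list (the data layout, names and numbers are ours, not from the text). Only component sizes are tracked, via union-find over V_L; this suffices because stubs and hyperedges only ever merge components and attach V_S-weight.

```python
import random

def build_J(N, Q, rng):
    # initial components of H_L: N_k / k components of size k, for each k > K
    VL, root, size = [], {}, {}
    v = 0
    for k, Nk in N.items():
        for _ in range(Nk // k):
            comp = list(range(v, v + k))
            v += k
            for u in comp:
                root[u] = comp[0]
            size[comp[0]] = k
            VL.extend(comp)

    def find(u):
        while root[u] != u:
            root[u] = root[root[u]]
            u = root[u]
        return u

    def union(vertices, extra):
        roots = {find(u) for u in vertices}
        r0 = roots.pop()
        for r in roots:
            root[r] = r0
            size[r0] += size.pop(r)
        size[r0] += extra          # V_S-vertices carried by this component

    sizes_S = []                   # (k,0)-components stay isolated in V_S
    for (k, r), m in Q.items():
        for _ in range(m):
            if r == 0:
                sizes_S.append(k)
            else:                  # r stubs -> r independent uniform V_L vertices
                union([rng.choice(VL) for _ in range(r)], extra=k)
    return sorted(list(size.values()) + sizes_S, reverse=True)

N = {4: 8, 5: 10}                  # toy N_k (multiples of k), cut-off K = 3
Q = {(1, 0): 6, (2, 1): 3, (1, 2): 4, (0, 2): 2}
print(build_J(N, Q, random.Random(2)))
```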
When the value of t is not so important, we sometimes write nice rather than t-nice. The final condition above is technical, and simply expresses that the component sizes/types in a t-nice parameter list are ones that could conceivably arise in our random graph process (see Section <ref>). Let _i denote the event that the random parameter list _i=(G^_n,i) is t-nice, where t=i/n, and let = ⋂_i_0 ≤ i ≤ i_1_i. Then () = O(n^-99) . By Theorem <ref>, relations (<ref>) and (<ref>) hold with the required probability. Similarly, Theorem <ref> gives (<ref>) and (<ref>) for k≥ 1. For k=0, we have Q_k,r(i)=0 unless r=2. When k=0 and r=2, the bound (<ref>) holds with the required probability by Lemma <ref> (increasing D_Q if necessary), and (<ref>) holds trivially if B≥ e^2b, which we may assume. Finally, (<ref>) follows from (<ref>), and (<ref>) from the definitions of  and  in Section <ref>. For later reference, we collect some convenient properties of nice parameter lists. First, since there exists k^*∈∖ [K] with ρ_k^*(t)>0 (see Lemmas <ref> and <ref>), using (<ref>) and (<ref>) we deduce that there is an absolute constant ζ > 0 such that for n large enough, every t-nice parameter list satisfies ζ n ≤ |V_L| ≤ n . Second, letting B_0=2/b where b is the constant in (<ref>), since Q_k,r is an integer, for n large enough (<ref>) implies Q_k,r =0 whenever k+r≥ B_0log n, say. Arguing similarly for N_k, and stating only a crude bound to avoid dealing with the constants, for n ≥ n_0(a,A,b,B,ζ) any t-nice parameter list satisfies max_k ≥Ψ N_k=0, max_k+r ≥Ψ Q_k,r=0 and max_k,r Q_k,r≤Ψ |V_L|, where Ψ := (log n)^2; for the final bound we simply use Q_k,r≤ Bn from (<ref>) and (<ref>). Throughout this section we shall always, without further comment, assume that n is large enough such that every t-nice parameter list satisfies (<ref>)–(<ref>). The remainder of this section is organized as follows; throughout we consider a parameter list  which is t-nice, for some t∈ [t_0,t_1]. First, in Section <ref> we introduce a neighbourhood exploration process for (), which we couple with an `idealized' branching process _t in Section <ref>. Next, in Section <ref> we show that J() can be `sandwiched' between two instances of (·), say (^-_t) ⊆ J() ⊆(^+_t), which we are able to study via associated `dominating' branching processes ^±_t. Later, in Section <ref>, we will use Lemmas <ref> and <ref> to transfer properties of J() back to the original random graph process. §.§ Neighbourhood exploration process In this subsection we introduce a neighbourhood exploration process which initially may be coupled exactly with a certain branching process (defined in Section <ref>). With an eye on our later arguments we shall consider an arbitrary parameter list =((N_k)_k > K, (Q_k,r)_k,r ≥ 0). We intuitively start the exploration of =() with a random vertex from V_S ∪ V_L, but in the Poissonized model this requires some care, since V_S is a random set. For this reason we first discuss the exploration starting from a set W of vertices from V_L, where W is a union of components of H_L=H_L(), deferring the details of the initial generation to Section <ref>. §.§.§ Exploring from an initial set Writing, as usual, C_v(G) for (the vertex set of) the component of the graph G containing a given vertex v, let C_W() := ⋃_v∈ W C_v().
The basic idea is that each vertex v ∈ V_L has neighbours in V_S and in V_L. Indeed, via each (k,r)–hyperedge (x_1, …, x_r) containing v we reach k new V_S–vertices and find up to r-1 new neighbours {x_1, …, x_r}∖{v} in V_L (there could be fewer if there are clashes). Repeating this exploration iteratively, we eventually find C_W(), see Figure <ref> (see also Figure <ref> for the related graph H_i). Turning to the details of the exploration process, we shall maintain sets of active and explored vertices in V_L=V_L(), as well as the number of reached vertices from V_S=V_S(). After step j of the exploration we denote these by _j, _j and S_j, respectively; note that _j∪_j, the set of `reached' vertices in V_L, will always be a union of components of H_L. Initially, given a union W ⊆ V_L of components of H_L, we start with the active set _0=W, the explored set _0=∅, and some initial number S_0 ∈ (it will later be convenient to allow S_0>0). In step j ≥ 1, we pick an active vertex v_j ∈_j-1. For each k ≥ 0 and r ≥ 1 we then proceed as follows, see also Figure <ref>. We sequentially test the presence and multiplicity of each so-far untested (k,r)-hyperedge g∈ (V_L)^r of the form (v_j,w_1, …, w_r-1), …, (w_1, …, w_r-1,v_j), and denote the resulting multiset of `newly found' hyperedges by _j,k,r. Now, for each hyperedge g ∈_j,k,r we increase the number of V_S–vertices reached by k, and mark all `newly found' vertices, i.e., all vertices in ⋃_1 ≤ h ≤ r-1 C_w_h(H_L) ∖ (_j-1∪_j-1), as active. Finally, we move the vertex v_j from the active set to the explored set. As usual, we stop the above exploration process if |_j|=0, in which case _j = C_W() ∩ V_L and S_j = |C_W() ∩ V_S|+S_0 . It will be convenient to extend the definitions of _j, _j and S_j to all j ≥ 0. Namely, if |_j|=0 then we set X_j'=X_j for all j' > j and X ∈{,,S}. Note that, by construction, the following properties hold for all j ≥ 0: _j ∪_j⊆_j+1∪_j+1⊆ V_L, S_j ≤ S_j+1, |_j ∪_j| + S_j ≤ |C_W()| + S_0 . Furthermore, since the vertex v_j is moved from the active to the explored set in step j ≥ 1 whenever |_j-1| ≥ 1, it is not difficult to see that for all j ≥ 0 we have |_j| = max{|_j ∪_j|-j,0}. Note that, in view of <ref>, to study the size of C_W() it is enough to track the evolution of (|_j ∪_j|,S_j)_j ≥ 0. For this reason we shall study M_j := |_j ∪_j|, the number of vertices of V_L reached by the exploration process after j steps. In Section <ref> we shall specify the initial distribution of S_0 and W, which then in turn defines the process = () := (M_j,S_j)_j ≥ 0 . Intuitively,  corresponds to a random walk which counts the number of V_S and V_L vertices reached by the exploration process. For convenience we also define || := |C_W()|+S_0 = |C_W() ∩ V_L| + (|C_W() ∩ V_S|+S_0), the total number of reached vertices, including those that we started with (see Section <ref>). Note that  determines ||, cf. (<ref>)–(<ref>) above. One main goal of this section is to show that || determines the expected component size distribution of =(), see (<ref>)–(<ref>) below. §.§.§ Initial generation and first moment formulae We now turn to the `initial generation', which yields the input W and S_0 for our exploration process. Recall that the vertex set of the Poissonized model =() is not deterministic (in contrast to that of J()).
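Before turning to the initial generation in detail, here is a compact runnable version of the exploration loop just described (the data layout and the toy instance are ours). It returns || = |C_W() ∩ V_L| + |C_W() ∩ V_S| + S_0 together with the walk (M_j,S_j)_j ≥ 1.

```python
from collections import deque

def explore(components, comp_of, hyperedges, hyperedges_at, W, S0=0):
    active = deque(W)                     # A_0 = W (a union of H_L-components)
    reached_L = set(W)                    # A_j ∪ E_j
    tested = set()                        # hyperedges inspected so far
    S = S0
    walk = []                             # the process (M_j, S_j)
    while active:
        v = active.popleft()              # move v from active to explored
        for e in hyperedges_at[v]:
            if e in tested:
                continue
            tested.add(e)
            k, endpoints = hyperedges[e]
            S += k                        # k new V_S-vertices reached
            for w in endpoints:           # newly found V_L-vertices, together
                for u in components[comp_of[w]]:  # with their H_L-components
                    if u not in reached_L:
                        reached_L.add(u)
                        active.append(u)
        walk.append((len(reached_L), S))  # (M_j, S_j)
    return len(reached_L) + S, walk

# Toy instance: H_L-components {0,1} and {2}; one (3,2)-hyperedge on (0, 2).
components = [[0, 1], [2]]
comp_of = {0: 0, 1: 0, 2: 1}
hyperedges = {0: (3, (0, 2))}
hyperedges_at = {0: [0], 1: [], 2: [0]}
print(explore(components, comp_of, hyperedges, hyperedges_at, W=[0, 1]))
# -> (6, [(3, 3), (3, 3), (3, 3)]): all of V_L reached, plus 3 V_S-vertices
```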
We are eventually interested in N_j() with j ≥ 1, which denotes the number of vertices in components of size j in . Writing _k,r for the (random) set of all (k,r)–hyperedges in , note that N_j() =∑_v ∈ V_L |C_v()|=j + ∑_k ≥ 1,r ≥ 0∑_g ∈_k,r k |C_g()|=j , where C_() denotes the component of  which contains ∈{v,g}. Using standard results from the theory of point processes (see, e.g., Lemma <ref> in Appendix <ref>) we have ( ∑_∈_k,r |C_()|=j) = Q_k,r·(|C(_k,r)|=j) , where C(_k,r) is defined as follows. We add an `extra' V_S–component  with ||=k, and connect  to =() via r random vertices in V_L, i.e., we add an extra (k,r)–hyperedge g. We then write C(_k,r) for the component of the resulting graph which contains . It follows that N_j() = ∑_k > K∑_v ∈ V_L:|C_v(H_L)|=k (|C_v()|=j) + ∑_k ≥ 1, r ≥ 0 k Q_k,r (|C(_k,r)|=j) . By definition, ∑_k >K N_k(H_L) + ∑_k ≥ 1,r ≥ 0 k Q_k,r = |V_L| + |V_S| = || . Thus (<ref>) implicitly defines the desired initial distribution of =(M_j,S_j)_j ≥ 0. Namely, for any v ∈ V_L, with probability 1/|| we start the exploration process with S_0:=0 and W:= C_v(H_L) , and for any k ≥ 1, r ≥ 0, with probability k Q_k,r/|| we select w_1, …, w_r ∈ V_L independently and uniformly at random, and then start the exploration process with S_0:=k and W:= ⋃_1 ≤ h ≤ r C_w_h(H_L) . In the first case, from (<ref>) we have ||=|C_W()|+S_0=|C_v()|, while in the second case (<ref>) and the construction of C(_k,r) yield ||=|C_W()|+S_0 = |C_W()|+k=|C(_k,r)|. Combining this discussion with (<ref>)–(<ref>), we thus obtain N_j() = [∑_k > K∑_v ∈ V_L:|C_v(H_L)|=k (1/||·(|C_v()|=j)) + ∑_k ≥ 1, r ≥ 0 (k Q_k,r/||·(|C(_k,r)|=j))] · || = (|| = j) || . Finally, defining N_≥ j() in the obvious way, for future reference we similarly have N_≥ j() = (|| ≥ j) || . §.§.§ Variance estimates One application of the exploration process described above is the following bound on the variance of the number of vertices in a range of component sizes. It will turn out later that, for the parameters Λ_j we are interested in, in the subcritical case the upper bound proved below is small compared to ( X)^2, allowing us to apply Chebyshev's inequality to establish concentration (see Sections <ref> and <ref>). Let  be an arbitrary parameter list, and define =() as in Definition <ref>. For all 0 ≤Λ_1 ≤Λ_2, setting X=N_≥Λ_1()-N_≥Λ_2() we have X ≤ X ( N_≥Λ_2() + Λ_2) . The key step in the proof is the following van den Berg–Kesten-type estimate: we claim that for all R_1,R_2 ⊆ V_L and _1,_2⊆ with _2⊆ [Λ,∞) we have (|C_R_1()| ∈_1, |C_R_2()| ∈_2, C_R_1() ∩ C_R_2() =∅) ≤(|C_R_1()| ∈_1) (|C_R_2()| ≥Λ) . To prove this claim, given U ⊆ V_L we define  as the subgraph of  obtained by deleting all vertices and hyperedges involving vertices from U. Clearly, if C_R_1() ∩ V_L = U and C_R_1() and C_R_2() are disjoint, then C_R_2() = C_R_2(). Exploring as in Section <ref>, starting from W_1=C_R_1(H_L), we can determine C_R_1()=C_W_1() while only revealing information about hyperedges involving vertices from U=C_R_1() ∩ V_L. By construction, the remaining weighted hyperedges g=(x_1, …, x_r) ∈ (V_L ∖ U)^r have not yet been tested, i.e., still appear according to independent Poisson processes. So, since  does not depend on the status of the revealed hyperedges (which each involve at least one vertex from U), it follows (by conditioning on all possible sets U) that the left hand side of (<ref>) is at most |C_R_1()| ∈_1 ·max_R_1 ⊆ U ⊆ V_L ∖ R_2 |C_R_2()| ∈_2. Since ⊆ and _2 ⊆ [Λ, ∞), this implies (<ref>).
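This van den Berg–Kesten-type estimate can also be probed numerically; the Monte Carlo sketch below does so in the simpler setting of G(n,p) with two fixed roots, where the same conditioning argument applies verbatim (all parameter values are our choices, and the estimates are only approximate).

```python
import random
from collections import deque

def component(adj, s):
    seen, queue = {s}, deque([s])
    while queue:
        w = queue.popleft()
        for u in adj[w]:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return seen

def gnp(n, p, rng):
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

# claim analogue: P(|C_u| in I1, |C_v| in I2, C_u != C_v)
#                 <= P(|C_u| in I1) * P(|C_v| >= Lam) for I2 within [Lam, inf)
n, p, Lam, trials = 60, 1.5 / 60, 5, 4000
I1, I2 = range(3, 10), range(Lam, 15)
rng = random.Random(1)
lhs = a = b = 0
for _ in range(trials):
    adj = gnp(n, p, rng)
    Cu, Cv = component(adj, 0), component(adj, 1)
    a += len(Cu) in I1
    b += len(Cv) >= Lam
    lhs += len(Cu) in I1 and len(Cv) in I2 and 1 not in Cu  # disjointness test
print(lhs / trials, '<=', (a / trials) * (b / trials))
```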
Having established the claim, we turn to the bound on X; the main complication involves dealing with the first step of our exploration, i.e., with the possibility of starting in the random set V_S. Analogous to (<ref>) we write X=N_≥Λ_1()-N_≥Λ_2() as a sum of terms of the form |C_v()| ∈ or k|C_g()| ∈, where  :=[Λ_1,Λ_2). Note that X^2 involves pairs (C_1(),C_2()) of components of the form C_j() ∈{C_v(),C_g()}. Let X^2=Y+Z, where Y contains all summands with pairs of equal components, i.e., C_1()=C_2(), and Z all summands with pairs of distinct components, i.e., C_1() ≠ C_2(). Since each relevant component contains at most Λ_2 vertices, we have Y ≤ X ·Λ_2, so Y ≤ X ·Λ_2. For Z we proceed similarly as for (<ref>). In particular, using standard results from the theory of point processes (see, e.g., Lemmas <ref>–<ref> in Appendix <ref>), we have ( ∑_f∈_k_1,r_1∑_g∈_k_2,r_2 k_1 k_2 |C_f()| ∈, |C_g()|∈ , C_f() ≠ C_g()) = k_1Q_k_1,r_1 k_2Q_k_2,r_2 (|C_1(_++)| ∈, |C_2(_++)| ∈ , C_1(_++) ≠ C_2(_++) ) , where C_1(_++) and C_2(_++) arise analogous to Section <ref>, by adding an extra (k_j,r_j)-hyperedge for j=1,2. (Here it is important that C_f() ≠ C_g() implies f ≠ g, so we add two distinct extra hyperedges.) More precisely, we form _++ by adding, for each j=1,2, an `extra' component _j with k_j vertices, joined to a random set R_j={w_j,1, …, w_j,r_j} of vertices of , where the w_j,h are chosen independently and uniformly from V_L. Then C_j(_++) is the component of _++ containing _j. (Since the definition of _++ depends on (k_1,r_1) and (k_2,r_2), the notation C_j(_k_1,r_1,k_2,r_2) analogous to that used in Section <ref> would also be appropriate; we avoid this as being too cumbersome.) In the case we are interested in, the components C_j(_++) in the augmented graph _++ are distinct and thus disjoint. Thus, for each j, C_j(_++) consists of the k_j vertices in _j together with all vertices in C_R_j(). Hence, in this case, |C_j(_++)| ∈ if and only if |C_R_j()| ∈_j = [Λ_1-k_j,Λ_2-k_j). Using (<ref>) it follows that, conditional on R_1 and R_2, we have (|C_1(_++)| ∈, |C_2(_++)| ∈, C_1(_++) ≠ C_2(_++)) ≤(|C_R_1()| ∈_1) (|C_R_2()| ≥Λ_1-k_2) . Define C(_k_j,r_j) as in the previous subsection, adding only one extra component with k_j vertices joined to a set R_j consisting of r_j random vertices from V_L, so |C(_k_j,r_j)|=|C_R_j()|+k_j. Then, taking the expectation over the independent random sets R_j and applying (<ref>) twice, we deduce that the right hand side of (<ref>) is at most k_1 Q_k_1,r_1 (|C(_k_1,r_1)| ∈) · k_2Q_k_2,r_2 (|C(_k_2,r_2)| ≥Λ_1 ) . The estimates for the other terms of Z involving |C_v()|,|C_w()| and |C_v()|,|C_g()| are similar, but much simpler, and we conclude that Z ≤ X · N_≥Λ_1() = ( X)^2 + X · N_≥Λ_2(). Hence X^2 = Z + Y ≤ ( X)^2 + X N_≥Λ_2() + Λ_2 , completing the proof. Using related arguments, we next prove an upper bound on the (rth order) susceptibility of (_i). Since =(_i) has approximately (rather than exactly) n vertices, for any graph G it will be convenient to define the `modified' susceptibility S_r,n(G) := ∑_C|C|^r/n = ∑_k ≥ 1 k^r-1 N_k(G)/n . Note that we divide by n, rather than by the actual number of vertices of G. For later reference we collect the following basic properties of this parameter (to establish monotonicity it suffices to check the case where F and H differ by a single edge or isolated vertex). Let r ≥ 1. For any n-vertex graph G we have S_r(G)=S_r,n(G). For any two graphs F ⊆ H we have S_r,n(F) ≤ S_r,n(H).
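Both claims of the lemma are easy to check on examples; a minimal sketch (the toy component sizes are ours):

```python
# modified susceptibility S_{r,n}(G) = sum over components C of |C|^r / n,
# computed directly from a list of component sizes
def S(comp_sizes, r, n):
    return sum(c**r for c in comp_sizes) / n

n = 10
before = [3, 3, 2, 1, 1]     # component sizes of F (an n-vertex graph)
after = [6, 2, 1, 1]         # F plus one edge joining the two 3-components
for r in (2, 3):
    assert S(before, r, n) <= S(after, r, n)   # monotonicity F ⊆ H
print(S(before, 2, n), S(after, 2, n))
```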
In the subcritical case it will turn out that the bound (<ref>) below is small enough to establish concentration via Chebyshev's inequality (see Sections <ref> and <ref>). Let  be an arbitrary parameter list, and define =() as in Definition <ref>. For r ≥ 2 we have S_r,n() ≤ n^-1 S_2r,n(). We shall mimic the basic proof strategy of Lemma <ref>, but treat pairs of equal components with more care. Turning to the details, analogous to (<ref>) we first claim that for all R_1,R_2 ⊆ V_L we have (|C_R_1()|^r-1 |C_R_2()|^r-1 C_R_1() ∩ C_R_2() = ∅) ≤|C_R_1()|^r-1 |C_R_2()|^r-1. Indeed, using the conditioning argument leading to (<ref>) we see that the left hand side of (<ref>) is at most |C_R_1()|^r-1·max_R_1 ⊆ U ⊆ V_L ∖ R_2 |C_R_2()|^r-1 , where  is the subgraph of  obtained by deleting all vertices and hyperedges involving vertices from U. Since ⊆ and so |C_R_2()| ≤ |C_R_2()|, this establishes inequality (<ref>). Next we focus on S_r,n(). Inspired by (<ref>), using |C|^r=∑_v ∈ C|C|^r-1 we rewrite nS_r,n() as X: = nS_r,n() = ∑_v ∈ V_L |C_v()|^r-1 + ∑_k ≥ 1,r ≥ 0∑_g ∈_k,r k |C_g()|^r-1 , where C_() denotes the component of  which contains ∈{v,g}. Since X^2 involves pairs (C_1(),C_2()) of components, analogous to (<ref>) we may write X^2=Y+Z, where Y contains all summands with pairs of equal components, i.e., C_1()=C_2(), and Z all summands with pairs of distinct components, i.e., C_1() ≠ C_2(). Now, in any graph G, the sum corresponding to Y is ∑_v,w∈ V(G) |C_v|^r-1 |C_w|^r-1 C_v=C_w, which counts |C|^2r for each component of G. Thus, specializing to G=, we have Y=nS_2r,n(). For Z we proceed analogously to (<ref>)–(<ref>) in the proof of Lemma <ref>. Indeed, combining standard results from the theory of point processes with inequality (<ref>), as in that proof we see that Z ≤ X · X = ( X)^2. From the bounds above we have X^2= Y+ Z ≤ ( X)^2+n(S_2r,n()). Hence X≤ n(S_2r,n()). Since X=n S_r,n(), this completes the proof. §.§.§ Some technical properties To facilitate the coupling arguments to come in Sections <ref> and <ref>, we next derive some technical properties of the random walk =(M_j,S_j)_j ≥ 0 associated to the exploration process. We expect that the neighbourhoods in =() are initially `tree-like', which suggests that in the exploration process we can initially replace the sets _j,k,r of reached hyperedges by multisets which do not depend on the so-far tested tuples. The technical lemma below formalizes this intuition via the random multisets _k,r. Recalling (<ref>), note that M_j-1≥ j if and only if |_j-1| ≥ 1, i.e., if the exploration has not yet finished after step j-1. Let =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) be a parameter list. Define () = (M_j,S_j)_j ≥ 0 as in (<ref>). Independently for each k,r, let _k,r=_k,r() be a random multiset where tuples g=(w_1, …, w_r-1) ∈ (V_L)^r-1 appear according to independent Poisson processes with rate r_k,r, where _k,r=_k,r() is defined as in (<ref>). Given j ≥ 1, condition on (M_i,S_i)_0 ≤ i ≤ j-1. Then there is a coupling of (the conditional distribution of) (M_j,S_j) with the _k,r such that, with probability one, we have M_j-M_j-1 ≤∑_k ≥ 0,r ≥ 2∑_(w_1, …, w_r-1) ∈_k,r∑_1 ≤ h ≤ r-1 |C_w_h(H_L())|, S_j-S_j-1 ≤∑_k,r ≥ 1 k|_k,r| . If, in addition,  satisfies (<ref>) and M_j-1≥ j holds, then we have equality in (<ref>) and (<ref>) with probability at least 1-O((log n)^22 M_j-1/|V_L|), where the implicit constant is absolute.
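The multisets _k,r of the lemma admit a tractable two-stage generation, made precise in the remark directly below: first a Poisson number of tuples, then independent uniform endpoints. As a quick illustration (the Poisson sampler, names and toy values are ours):

```python
import math
import random

def poisson(lam, rng):
    # inversion sampling; adequate for the small means used here
    u, k, p = rng.random(), 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

def sample_E_hat(Q, VL_size, rng):
    E = {}
    for (k, r), q in Q.items():
        if r == 0:
            continue
        m = poisson(r * q / VL_size, rng)   # stage 1: |E_{k,r}| ~ Po(r Q_{k,r}/|V_L|)
        E[k, r] = [tuple(rng.randrange(VL_size) for _ in range(r - 1))
                   for _ in range(m)]       # stage 2: uniform endpoints in V_L
    return E

Q = {(1, 2): 40.0, (2, 3): 10.0}            # toy Q_{k,r} values (ours)
print(sample_E_hat(Q, VL_size=200, rng=random.Random(7)))
```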
For later reference we remark that, using standard splitting properties of Poisson processes, _k,r may be generated by a more tractable two-stage process. Namely, we first determine |_k,r| ∼(_k,r) with _k,r = _k,r() :=|V_L|^r-1· r_k,r = rQ_k,r/|V_L|. Then, given |_k,r|=y_k,r, we set _k,r=_k,r():={(w_k,r,1,1, …, w_k,r,1,r-1), …, (w_k,r,y_k,r,1, …, w_k,r,y_k,r,r-1)}, where all the vertices w_k,r,y,h∈ V_L are chosen independently and uniformly at random. Throughout we condition on the `history' (M_i,S_i)_0 ≤ i ≤ j-1. If M_j-1 < j then (by (<ref>)) we have |_j-1| =0 and thus (M_j,S_j)=(M_j-1,S_j-1), so all claimed bounds hold trivially. We may thus assume M_j-1≥ j. We start with the upper bounds (<ref>)–(<ref>). The plan is to embed the found hyperedges _j,k,r into the potentially larger multiset _k,r. To this end we map each tuple g ∈_j,k,r of the form (v_j,w_1, …, w_r-1), …, (w_1, …, w_r-1,v_j) to (w_1, …, w_r-1); this can be done in a unique way by deleting the first coordinate which is equal to v_j, say. Let _j,k,r denote the resulting multiset of tuples (w_1, …, w_r-1) ∈ (V_L)^r-1. Since _k,r uses rate r ·_k,r, by standard superposition properties of Poisson processes there is a natural coupling such that _j,k,r⊆_k,r . By definition of the exploration process we have M_j-M_j-1= |(⋃_k ≥ 0,r ≥ 2 ⋃_(w_1, …, w_r-1) ∈_j,k,r ⋃_1 ≤ h ≤ r-1 C_w_h(H_L) ) ∖(_j-1∪_j-1)| , S_j-S_j-1= ∑_k,r ≥ 1 k|_j,k,r| . Together <ref> imply (<ref>)–(<ref>). Turning to the question of equality in (<ref>)–(<ref>), assume from now on that (<ref>) holds. Recall that in each step j' ≤ j the exploration process inspects all so-far untested tuples containing v_j'. Hence, in the natural coupling, equality holds in (<ref>) whenever _k,r contains no vertices from _j={v_1, …, v_j}⊆_j-1∪_j-1. Set X: = ∑_k ≥ 0,r ≥ 2 |_k,r| and X': = ∑_k ≥ 0,r ≥ 2 (r-1) |_k,r|. Let  be the `good' event that, as w_τ runs over the X' vertices appearing in the random multiset ⋃_k,r_k,r (cf. (<ref>)), we have (i) each C_w_τ(H_L) is disjoint from _j-1∪_j-1 (which is itself a union of components of H_L), and (ii) the X' components C_w_τ(H_L) of H_L are pairwise distinct (and so pairwise disjoint). As noted above, property (i) of  implies _j,k,r = _k,r. If  holds, then the union in (<ref>) is disjoint, and it follows that we have equality in (<ref>) and (<ref>). Let Ψ=(log n)^2. To estimate the probability that  fails to hold, we use the two-stage construction of _k,r discussed around (<ref>)–(<ref>): we first reveal all the sizes |_k,r|, and then sequentially reveal the X' ≤ (Ψ-1) X≤Ψ X random vertices w_τ∈ V_L appearing in all of the sets _k,r.
Since (<ref>) implies that all components of H_L have size at most Ψ, for each random vertex w_τ∈ V_L it is then enough to consider the event that (i) w_τ equals one of the M_j-1 vertices in _j-1∪_j-1, or (ii) w_τ equals one of the at most (τ-1)Ψ≤ X' Ψ≤Ψ^2 X so far `discovered' vertices in ⋃_1 ≤ x < τ C_w_x(H_L). Using conditional expectations it follows that () ≤( ( Ψ X ·(M_j-1/|V_L| + Ψ^2 X/|V_L|)| (|_k,r|)_k,r ≥ 0)) ≤Ψ^3 (M_j-1 X +X^2) / |V_L| . Noting that X is a Poisson random variable (by standard superposition properties), using |_k,r| = r Q_k,r/|V_L| ≤Ψ^2 we see that X =X ≤ (Ψ+1)^2 ·Ψ^2 = O(Ψ^4) and X^2 =X + ( X)^2 = O(Ψ^8). Recalling M_j-1≥ j ≥ 1 we infer () = O(Ψ^11 M_j-1/|V_L|), completing the proof. We now turn to properties of the random initial values (M_0,S_0). Since the exploration process starts with _0 = W and _0=∅, by (<ref>) we have M_0 = |W|. The next lemma is an immediate consequence of the constructions <ref> in Section <ref>; for the final estimate we use (<ref>) to bound both the number of random vertices and the component sizes by Ψ=(log n)^2. Let =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) be any parameter list. Define () = (M_j,S_j)_j ≥ 0 as in (<ref>), and let (Y_0,,Z^0_,R_) be the probability distribution on ^3 given by ((Y_0,,Z^0_,R_)=(y,z,r)) = N_y() y>K, z=0, r=0 + z Q_z,r() y=0, z ≥ 1 /||. Then there is a coupling such that with probability one we have M_0≤ Y_0, + ∑_1 ≤ h ≤ R_ |C_w_h(H_L())| , S_0 = Z^0_, where the vertices w_h ∈ V_L=V_L() are chosen independently and uniformly at random. Furthermore, if  satisfies (<ref>), then we have equality in (<ref>) with probability at least 1-Ψ^3/|V_L|. §.§ Idealized process Let  be a t-nice parameter list. In this subsection we compare the random walk =()=(M_j,S_j)_j ≥ 0 of the exploration process with a closely related `idealized' branching process _t that is defined without reference to , or indeed to n. The precise definitions (given below) are rather involved. However, given how our exploration process treats vertices in V_S and V_L, and that the first step of the exploration process is special, it is not surprising that _t will be a special case of the following general class of branching processes. Let (Y,Z) and (Y^0,Z^0) be probability distributions on ^2. We write ^1=^1_Y,Z for the Galton–Watson branching process started with a single particle of type L, in which each particle of type L has Y children of type L and Z of type S. Particles of type S have no children, and the children of different particles are independent. We write =_Y,Z,Y^0,Z^0 for the branching process defined as follows: start in generation one with Y^0 particles of type L and Z^0 of type S. Those of type L have children according to ^1_Y,Z, independently of each other and of the first generation. Those of type S have no children. We write || (respectively |^1|) for the total number of particles in  (respectively ^1).
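The two-type process _Y,Z,Y^0,Z^0 is straightforward to simulate; in the sketch below the offspring samplers are toy placeholders (in our application the laws (Y_t,Z_t) and (Y_t^0,Z_t^0) defined next would be plugged in), and a cap stands in for the event || = ∞.

```python
import random

def total_size(sample_YZ0, sample_YZ, rng, cap=10**6):
    # generation one: Y^0 type-L and Z^0 type-S particles
    y, z = sample_YZ0(rng)
    total, alive_L = y + z, y
    # each type-L particle begets (Y, Z) children; type-S particles are sterile
    while alive_L and total < cap:
        alive_L -= 1
        y, z = sample_YZ(rng)
        total += y + z
        alive_L += y
    return total if alive_L == 0 else float('inf')

rng = random.Random(3)
toy0 = lambda r: (r.randint(1, 2), r.randint(0, 1))   # placeholder (Y^0, Z^0)
toy = lambda r: (r.randint(0, 1), r.randint(0, 2))    # placeholder (Y, Z), E[Y] < 1
print(sorted(total_size(toy0, toy, rng) for _ in range(10)))
```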
Recall also (from Remark <ref>) that ρ_ω(t_0)=∑_k>Kρ_k(t_0).Let N be the probability distribution on  with(N=k) = k > Kρ_k(t_0)/ρ_ω(t_0) .Intuitively, this corresponds to an idealized version of the distribution of |C_w(H_L(_tn))|,where _tn=(G^_n,tn) is the random parameter list defined in (<ref>) and the vertex w is chosen uniformly at random from V_L (see (<ref>) for the definition of the `initial graph' H_L). We henceforth write N_h and N_k,r,y,h for independent copies of N. We also defineH_k,r,t∼(λ_k,r(t))with λ_k,r(t) := rq_k,r(t)/ρ_ω(t_0) ,which corresponds to an idealized version of |_k,r|, see (<ref>).Of course, we take the random variables H_k,r,t to be independent.We define (Y_t,Z_t) := (∑_k ≥ 0, r ≥ 2 ∑_1 ≤ j ≤ H_k,r,t ∑_1 ≤ h ≤ r-1 N_k,r,j,h,∑_k,r ≥ 1 k H_k,r,t) ,in analogy with the quantities appearing on the right-hand side in (<ref>)–(<ref>). Turning to (Y^0_t,Z^0_t), we define (Y_0,t,Z^0_t,R_t) as the probability distribution on ^3 with ((Y_0,t,Z^0_t,R_t)=(y,z,r)) = ρ_y(t_0) y>K, z=0, r=0+ z q_z,r(t) y=0, z ≥ 1 ,and setY^0_t := Y_0,t + ∑_1 ≤ h ≤ R_t N_h ,in analogy with (<ref>)–(<ref>). That (<ref>) indeed defines a probability distribution follows from Remark <ref> and ∑_k ≥ 1ρ_k(t_0)=1 of Theorem <ref>.For t∈ [t_0,t_1] we can now formally define the `idealized' branching process _t: it is simply_t := _Y_t,Z_t,Y_t^0,Z_t^0.Of course, we define _t^1=^1_Y_t,Z_t also.Our main goal in the rest of this subsection is to prove the following result, showing that we canapproximate the expected number of vertices in small components of J=J() via _t. Let t∈ [t_0,t_1], and let  be a t-nice parameter list.Set D_:=max{D_N,D_Q}+25, where D_N, D_Q>0 are as in Definition <ref>.Define J=J() as in Definition <ref>. Then| N_j(J) - (|_t|=j) n|= O(j (log n)^D_n^1/2) and | N_≥ j(J) - (|_t| ≥ j) n|= O(j (log n)^D_n^1/2),uniformly over all j≥ 1, t∈ [t_0,t_1] and all t-nice parameter lists .Before embarking on the proof, we establish the `parity constraints' that the distributions of (Y_t,Z_t) and (Y_t^0,Z_t^0) satisfy.Recall (see Lemma <ref>) thatdenotes the set of all component sizes that the rulecan possibly produce, andthe period of the rule.Below, the precise form of the finite `exceptional set' {0}× [K] is irrelevant for our later argument; it arises only due to the generality of ℓ-vertex rules – for Achlioptas processes =1 by Lemma <ref> and so the next lemma holds trivially. For any t ∈ (t_0,t_1], the following hold always: (Y_t,Z_t)∈ ()^2and(Y_t^0,Z_t^0)∈ ()^2∪({0}× [K]). We first consider the distribution N defined in (<ref>). If k∉ then ρ_k(t)=0 and so (N=k)=0. By Lemma <ref>, if k∈ and k>K, then k is a multiple of . It follows that N can only take values in . The set of values (k,r) for which q_k,r(t)>0 is described in Lemma <ref>. In (<ref>), we can have a contribution from a particular pair of values (k,r) only if r≥ 1 and λ_k,r(t)>0, which implies q_k,r(t)>0, i.e., (k,r)∈. By Lemma <ref> this implies that k is a multiple of . Since each random variable N is always a multiple of(this follows from Lemma <ref> since N>K always holds by the definition of N),it follows that (Y_t,Z_t)∈ ()^2 holds always.We now turn to (Y_t^0,Z_t^0). Using again that for r≥ 1 we can only have q_k,r(t)>0 if k is a multiple of , we see from (<ref>) that both Y_0,t (and hence, from (<ref>), Y_t^0) and Z_t^0 will be multiples of  unless (Y_0,t,Z_t^0,R_t)=(0,z,0) for some z with q_z,0(t)>0. But in this case, by Lemma <ref> we have (z,0)∈ andso z∈. 
And since R_t=0, we have Y^0_t=0. To sum up, (Y_t^0,Z_t^0)∈ ()^2∪({0}×) holds always. This completes the proof since Lemma <ref> implies ∖⊆ [K]. §.§.§ Coupling In this section we prove the key coupling result relating our exploration process to _t. We start with a technical lemma. The constants b, B, D_N and D_Q here (and throughout the section) are those in Definition <ref>. Recall that H_k,r,t is defined in (<ref>) and _k,r in Lemma <ref>. Let t∈ [t_0,t_1], and let  be a t-nice parameter list. Define a probability distribution N' on  by N' ∼ |C_w(H_L())|, where w ∈ V_L is chosen uniformly at random. Then, writing Ψ = (log n)^2, for n ≥ n_0(b,B) we have NN' = O((log n)^D_N+2n^-1/2), ∑_k,r ≥ 0 H_k,r,t|_k,r| = O((log n)^D_Q+4n^-1/2) , (Y_0,t,Z^0_t,R_t)(Y_0,,Z^0_,R_) = O((log n)^D_Q+4n^-1/2) , ∑_k,r ≥ 0:k+r ≤Ψ(H_k,r,t≥Ψ) ≤ n^-ω(1), |_k,r|=0 whenever k+r≥Ψ. For k>K, by the definition (<ref>) of N we have (N=k)=ρ_k(t_0)/ρ_ω(t_0), while by definition (N'=k)=N_k()/|V_L()|. Note also that ∑_k>Kρ_k(t_0)=ρ_ω(t_0)>0 by Remark <ref>. By the condition (<ref>) of  being t-nice, each N_k() is within (log n)^D_N n^1/2 of ρ_k(t_0)n. For k>Ψ we have N_k()=0 by (<ref>), while ∑_k>Ψρ_k(t_0)=n^-ω(1) by (<ref>). Inequality (<ref>) follows easily from these bounds. In preparation for inequality (<ref>), note that (x)(y)≤ |x-y|. Since |_k,r| ∼(_k,r) for _k,r=rQ_k,r/|V_L| as in (<ref>), now (<ref>) follows by combining (<ref>), (<ref>), (<ref>) and (<ref>). The proof of (<ref>) is analogous. Since H_k,r,t = r q_k,r(t)/ρ_ω(t_0) = O(r e^-b(k+r)) = O(1) by (<ref>), the bound (<ref>) follows from standard Chernoff bounds (for Poisson random variables). The final inequality (<ref>) is a simple consequence of |_k,r| = rQ_k,r/|V_L| = 0, cf. (<ref>) and (<ref>). In preparation for the proof of Theorem <ref>, we now show that the number || of vertices reached by the exploration process defined in (<ref>) is comparable to the number of particles |_t|, unless both are fairly big. In the light of (<ref>)–(<ref>), this will be key for understanding the number of vertices in components of a given size in =(). Let t∈ [t_0,t_1], and let  be a t-nice parameter list. Set D_:=max{D_N,D_Q}+25, where D_N, D_Q>0 are as in Definition <ref>. Then there is a coupling of =() and _t such that for every Λ=Λ(n) ∈, with probability at least 1-O(Λ(log n)^D_n^-1/2) we have ||=|_t| or min{||,|_t|}>Λ. Here the implicit constant is uniform over the choice of t,  and Λ. The proof is based on a standard inductive coupling argument, exploiting that a natural one-by-one breadth-first search exploration of _t induces a random walk (with respect to the number of reached vertices). The idea is to show that the numbers of vertices from V_S and V_L found by the exploration process equal the numbers of type S and L particles generated by _t. More formally, starting with (M_0,S_0) and (Y^0_t,Z^0_t), the plan is to step-by-step couple (M_j-M_j-1,S_j-S_j-1) with a new independent copy of (Y_t,Z_t) at each step.
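The step-by-step coupling sketched above repeatedly couples two nearby discrete laws so that they disagree with probability exactly their total variation distance. The following maximal-coupling sketch for finitely supported distributions is standard (the toy p and q are ours):

```python
import random

def pick(dist, rng):
    u, acc = rng.random(), 0.0
    for x, w in dist.items():
        acc += w
        if u <= acc:
            return x
    return x  # numerical fallback

def maximal_coupling(p, q, rng):
    overlap = {x: min(p.get(x, 0), q.get(x, 0)) for x in set(p) | set(q)}
    m = sum(overlap.values())                 # = 1 - d_TV(p, q)
    if rng.random() < m:                      # agree: sample from the overlap
        x = pick({k: v / m for k, v in overlap.items()}, rng)
        return x, x
    # disagree: the normalized residuals of p and q have disjoint supports
    rp = {x: (p.get(x, 0) - overlap[x]) / (1 - m) for x in overlap}
    rq = {x: (q.get(x, 0) - overlap[x]) / (1 - m) for x in overlap}
    return pick(rp, rng), pick(rq, rng)

p = {1: 0.5, 2: 0.5}
q = {1: 0.45, 2: 0.45, 3: 0.10}               # d_TV(p, q) = 0.10
rng = random.Random(0)
agree = sum(a == b for a, b in (maximal_coupling(p, q, rng) for _ in range(10**5)))
print(agree / 10**5)                          # ~ 0.9 = 1 - d_TV
```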
With the proof of Lemma <ref> in mind, the basic line of reasoning is roughly as follows: we can construct each (M_j-M_j-1,S_j-S_j-1) by sampling at most (log n)^O(1) random vertices ∈ V_L, and by (<ref>) the corresponding component sizes |C_(H_L)| can each be coupled with N up to (log n)^O(1)n^-1/2 errors; a similar remark applies to the other variables, see Lemma <ref>. So, we expect that the coupling fails during the first O(Λ) steps with probability at most O(Λ (log n)^O(1)n^-1/2). As the statement is trivial for Λ = 0 or Λ = Ω(n^1/2), we assume throughout that 1 ≤Λ=Λ(n) = O(n^1/2). The basic idea is to construct the coupling inductively, revealing _t and =()=(M_j,S_j)_j ≥ 0 step-by-step. We shall in fact couple the numbers of type L and type S particles of _t with the number of vertices from V_L and V_S found by the exploration process. We shall consider only steps 0 ≤ j ≤Λ. This suffices since, if the coupling succeeds this far, then after Λ steps, either the exploration process has stopped, and we have ||=|_t|, or it has not, in which case ||>Λ and |_t|>Λ. For the base case j=0 we claim that there is a coupling such that, with probability 1-O((log n)^D_n^-1/2), we have M_0=Y^0_t and S_0 = Z^0_t. Writing Ψ=(log n)^2 for brevity as usual, by the final statement of Lemma <ref> we see that, with probability 1-O(Ψ^3/n), equality holds in (<ref>) and (<ref>). Now the desired coupling of (M_0,S_0) with (Y^0_t,Z^0_t) is straightforward. Indeed, we first couple (Y_0,,Z^0_,R_) with (Y_0,t,Z^0_t,R_t), and then for 1 ≤ h ≤ R_≤Ψ we sequentially couple |C_w_h(H_L)| with independent copies of N. Using Lemma <ref> it follows that the described coupling fails with probability at most O(Ψ^3/n) + (Y_0,t,Z^0_t,R_t)(Y_0,,Z^0_,R_) + Ψ·NN' = O((log n)^D_n^-1/2) . Turning to the inductive step, consider 0<j≤Λ. We may assume that M_j-1≥ j, since otherwise the exploration has stopped already, and that M_j-1≤Λ, since otherwise ||,|_t|>Λ. It suffices to show that, conditioning on the first j-1 steps of the exploration, there is a coupling such that, with probability 1-O((log n)^D_ n^-1/2), we have (M_j-M_j-1,S_j-S_j-1)=(Y_t,Z_t). By Lemma <ref>, with probability 1-O(Ψ^11 M_j-1/n) = 1- O(Ψ^11n^-1/2), we have equality in (<ref>) and (<ref>). Recalling the two-stage process generating _k,r described around (<ref>)–(<ref>), the desired coupling is then straightforward in view of Lemma <ref>. Indeed, we first couple each |_k,r| with H_k,r,t. Of course, we abandon our coupling whenever max_k+r ≤Ψ H_k,r,t≥Ψ, which by Lemma <ref> occurs with probability at most n^-ω(1). After this first step, in order to couple (M_j-M_j-1,S_j-S_j-1) with (Y_t,Z_t), by the two-stage definition of _k,r it remains to sequentially couple at most (Ψ+1)^2 ·Ψ≤Ψ^4 independent copies of N' ∼ |C_w(H_L)| with independent copies of N. To sum up, using Lemma <ref> it follows that the described coupling fails with probability at most O(Ψ^11n^-1/2) + ∑_k ≥ 0, r ≥ 0 H_k,r,t|_k,r| + n^-ω(1) + Ψ^4 ·NN' = O((log n)^D_n^-1/2) . Since we only consider Λ+1=O(Λ) steps j in total, this completes the proof of Theorem <ref>. §.§.§ Proof, and consequences, of Theorem <ref> The proof of Theorem <ref> hinges on the basic observation that, for any graph, adding or deleting an edge changes the number of vertices in components of size j (at least j) by at most 2j. Since =() is a Poissonized version of J=J() we thus expect that N_k(J) ≈ N_k(). The approximation N_k() ≈(|_t|=k)n then follows from (<ref>) and the coupling of Theorem <ref>. Let =() and =().
First, using (<ref>) and Theorem <ref> we have | N_j() - (|_t|=j)n | = |(||=j) - (|_t|=j)| · n = O(j (log n)^D_n^1/2). Next, we relate N_j(J) and N_j() by de-Poissonization, defining Y_k,r∼(Q_k,r()) for convenience (as usual, all these random variables are independent). From the definition of J=J() in Section <ref>, we see that the effect of increasing Q_k,r by 1 on the random graph J=J() may be thought of as follows: we first add a new component of size k (which changes any N_j(J) by at most k), and then we add r edges. From the definition of =(), using a natural coupling and our basic Lipschitz observation it follows that |N_j(J) - N_j()|≤∑_k,r ≥ 0(2jr+k) · |Y_k,r-Q_k,r()| . For any random variable Z ∼(μ) we have E|Z-μ| ≤√(Var Z) = √(μ) by Jensen's inequality. Since Y_k,r = Q_k,r() ≤ B e^-b(k+r)n by (<ref>), using (<ref>) it follows that | N_j(J) -N_j()| = O(jn^1/2), which together with (<ref>) and D_≥ 1 completes the proof of inequality (<ref>) for N_j(J). Turning to N_≥ j(J), from (<ref>) and Theorem <ref> we have | N_≥ j() - (|_t| ≥ j)n| = |(|| ≥ j) -(|_t| ≥ j)| · n = O(j (log n)^D_n^1/2). Now, changing N_j(·) to N_≥ j(·), the rest of the argument for (<ref>) carries over to prove (<ref>).

We next prove two corollaries to Theorem <ref>, relating _t to (i) the solutions to certain differential equations from Section <ref> and (ii) the functions ρ and s_r from Sections <ref> and <ref>. Let (ρ_k)_k ≥ 1 and (ρ_≥ k)_k ≥ 1 be the functions defined in Lemma <ref>. Then ρ_k(t) = (|_t|=k) and ρ_≥ k(t) = (|_t| ≥ k) for all t ∈ [t_0,t_1] and k ≥ 1. Fix t ∈ [t_0,t_1] and k ≥ 1. Using the fact, proved in Lemma <ref>, that the random parameter list _tn=(G^_n,tn) is almost always nice, and the conditioning lemma, Lemma <ref>, it is easy to deduce from Theorem <ref> that | N_k(tn) - (|_t|=k) n| = O(k (log n)^D_n^1/2). Also, from Lemma <ref> it is easy to see that | N_k(tn) - ρ_k(t) n| = O((log n)^2n^1/2). As n →∞, it follows that |(|_t|=k) - ρ_k(t)| = O(k (log n)^D_n^-1/2+(log n)^2n^-1/2) = o(1) . Since _t and ρ_k(t) are both defined without reference to n, it follows that (|_t|=k)=ρ_k(t). The same argument (with obvious notational changes) gives ρ_≥ k(t) = (|_t| ≥ k).

Let the functions ρ and (s_r)_r ≥ 2 be as in (<ref>) and (<ref>). Then ρ(t) = (|_t|=∞) for t ∈ [t_0,t_1], and s_r(t) =|_t|^r-1∈ [1,∞) for all t ∈ [t_0,) and r ≥ 2. For t ∈ [0,∞), Theorem 3 and Section 5 of <cit.> imply ρ(t)=1-∑_k ≥ 1ρ_k(t). For t ∈ [t_0,t_1], by Corollary <ref> we conclude ρ(t)=1-∑_k ≥ 1ρ_k(t) = 1-∑_k ≥ 1(|_t|=k)= (|_t|=∞) . For t ∈ [0,), the main result of <cit.> implies ∑_k ≥ 1ρ_k(t) = 1 and S_r(tn) ∑_k ≥ 1k^r-1ρ_k(t) ∈ [1,∞) for r ≥ 2. For t ∈ [t_0,), using (<ref>) and Corollary <ref> we infer (|_t|=∞) = 0 and ∑_k ≥ 1k^r-1ρ_k(t)= |_t|^r-1, so that (<ref>) implies s_r(t) =|_t|^r-1∈ [1,∞).

§.§ Dominating processes In this subsection we relate the random graph J=J() more directly to the Poissonized model =(). Loosely speaking, for any `time' t ∈ [t_0,t_1] the plan is to slightly adjust (decrease or increase) the parameters of , see (<ref>), and `sandwich' J=J() between =(^±_t) such that typically ⊆ J ⊆. Using stochastic domination this will allow us to avoid one major drawback of the coupling arguments from Section <ref>, namely, that the `coupling errors' deteriorate for moderately large component sizes (see Theorems <ref> and <ref>).
The basic idea is that the number of vertices found by the exploration process is bounded from above and below by perturbed variants ^±_t of the idealized branching process _t, formalized in Definition <ref> below.

§.§.§ Sandwiching Given parameter lists and ', we write ≼' if =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) and '=((N_k)_k > K, (Q_k,r')_k,r ≥ 0) with Q_k,r≤ Q'_k,r for all k,r≥ 0. From the definition (Definition <ref>) of the random graph J(), if ≼' then there is a coupling such that J() ⊆ J('). Similarly, if ≼' then (see Definition <ref>) there is a coupling such that () ⊆(').

Our next lemma states that, for any t-nice , we can whp sandwich J() between the more tractable Poissonized random graphs = (^±_t), for parameter lists ^±_t which we now define. Recall that in this case Q_k,r=Q_k,r() ≈ q_k,r(t) n, with q_k,r(t)=0 when (k,r)∉ (see (<ref>) and Lemma <ref>). Set B_0=2/b and b_0=b/400, where b is as in Definition <ref>. Given t∈ [t_0,t_1] and a t-nice parameter list , let q^±_k,r,n(t) := k,r ≥ 0, k+r ≤ B_0 log n(k,r)∈max{q_k,r(t) ± e^-b_0(k+r)n^-0.49,0} , Q^±_k,r(t) := q^±_k,r,n(t)n, and define the parameter lists ^+_t and ^-_t by ^±_t := ((N_k)_k > K,(Q^±_k,r(t))_k,r ≥ 0) .

Note that the definition of ^±_t depends not only on (via the N_k), but also on t and on n. The parameters Q^±_k,r(t) defined in (<ref>) do not depend on . Of course, in (<ref>) the precise numerical value 0.49 is irrelevant for our later arguments (any γ∈ (1/3,1/2) suffices). The second indicator function in (<ref>) simply restricts the types of (k,r)-components to ones that can possibly appear in G^_n,i; see Lemma <ref>.

Let t∈ [t_0,t_1], and let =((N_k)_k > K, (Q_k,r)_k,r ≥ 0) be a t-nice parameter list. Define ^±_t as in Definition <ref>. Then there is a coupling such that we have (^-_t) ⊆ J() ⊆(^+_t) with probability 1-n^-ω(1).

We can construct (_t^±) by first exposing the associated `Poissonized' parameters Y_k,r∼(Q_k,r^±), and then setting (_t^±) = J(^*) with ^*=((N_k)_k > K,(Y_k,r)_k, r ≥ 0). By (<ref>) it thus suffices to show that the associated random parameters used in typical realizations of (^±_t) sandwich the parameter list from above and below. We shall prove this using standard Chernoff bounds, which imply that any Poisson random variable Y with mean μ = O(n) satisfies (|Y-μ| ≥ (log n)^2n^1/2) ≤ n^-ω(1), say. Turning to the details, let b,B>0 be as in (<ref>), and B_0=2/b, b_0=b/400 as in Definition <ref>. Define Y^±_k,r∼(Q^±_k,r(t)). We henceforth consider only (k,r)∈, since otherwise Y_k,r^±=0 by definition and Q_k,r=0 by (<ref>). Note that, with D_Q>0 as in (<ref>), for n ≥ n_0(D_Q) we have min_k+r ≤ B_0 log n e^-b_0(k+r)n^-0.49≥ n^-0.495≥ 4 (log n)^max{D_Q,2}n^-1/2. So, using the discussed Chernoff bounds, with probability 1-n^-ω(1) we thus obtain Y^+_k,r≥ Q^+_k,r(t) -(log n)^2n^1/2≥ q_k,r(t)n + 2 (log n)^D_Qn^1/2 , Y^-_k,r≤Q^-_k,r(t) > 0(Q^-_k,r(t) +(log n)^2n^1/2) ≤Q^-_k,r(t) > 0(q_k,r(t) n -2 (log n)^D_Qn^1/2) simultaneously for all k,r ≥ 0 with k+r ≤ B_0 log n. Comparing (<ref>)–(<ref>) with (<ref>), using Q_k,r≥ 0 it follows that with probability at least 1-n^-ω(1) we have Y^-_k,r≤ Q_k,r≤ Y^+_k,r for all k,r with k+r≤ B_0log n. Turning to k+r> B_0log n, if n is large enough (n ≥ n_0(b_0,B_0,B)), then we have Q_k,r=0 by (<ref>), while q_k,r,n^±=0 by definition. Hence the inequalities (<ref>) hold trivially, completing the proof.

In view of (<ref>) and (<ref>), setting B_1 = B + 1 and b_0 = b/400, for n ≥ 1 we have sup_t ∈ [t_0,t_1] q^±_k,r,n(t) ≤ B_1 e^-b_0(k+r).
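As a numerical sanity check of the sandwiching mechanism, one can verify that Poisson variables whose means are shifted by an n^0.51-type margin do sandwich the unperturbed parameter with overwhelming probability, exactly as the Chernoff-bound step asserts. The sketch below is illustrative only: the values of n and Q and the slack are placeholders standing in for Q^±_k,r(t), not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
Q = 0.02 * n            # placeholder for some Q_{k,r} = Theta(n)
slack = n**0.51         # margin of type e^{-b_0(k+r)} n^{0.51} (constants dropped)

trials = 10**4
y_minus = rng.poisson(max(Q - slack, 0.0), size=trials)   # Y^-_{k,r}
y_plus = rng.poisson(Q + slack, size=trials)              # Y^+_{k,r}
# the sandwich Y^- <= Q <= Y^+ should fail with probability n^{-omega(1)};
# here the slack is roughly 8 standard deviations of a Poisson with mean Q
fail = np.mean((y_minus > Q) | (y_plus < Q))
print("empirical sandwich failure rate:", fail)
```

With these parameters the empirical failure rate is essentially zero, reflecting the e^{-Ω(slack²/Q)} decay behind the n^{-ω(1)} bound.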
§.§.§ Stochastic domination Let  be a t-nice parameter list, and define ^±_t as in Definition <ref>. Aiming to study the component size distribution of =(^±_t), we shall next define branching processes _t^1,± and _t^±. Since ^±_t depends on  and on n, these branching processes will also depend on  and n, in contrast to the idealized processes ^1_t and _t defined in Section <ref>.

To define our branching processes, we need to define the corresponding offspring distributions. For _t^- this is a little involved, since we need to deal with `clashes' in the exploration process. The main idea is that the exploration process finds subsets of the vertices which resemble `typical' subgraphs of . For example, in view of (<ref>) we expect that only an e^-Ω(k)-fraction of the discovered vertices originate from size-k components of V_L, which intuitively explains the definition of N^-=N^-() given in (<ref>) below, bearing in mind that we will only consider up to O(n^2/3) exploration steps.

First, let us fix some constants. Given a,A>0 as in (<ref>), let D_0 := 2/a and a_0:=min{a/2,1/(6 D_0)}. Given ζ>0 as in (<ref>), let A_0 := 16/ζ·∑_k ≥ 1kA e^-a_0 k and A_1 := A_0/ζ .

Given t∈ [t_0,t_1] and a t-nice parameter list , define a probability distribution N^+ by (N^+=k) = k > KN_k()/|V_L()|, and set λ^+_k,r(t) := r Q^+_k,r(t)/|V_L()| , where Q^±_z,r(t) is defined in Definition <ref>. Similarly, define a probability distribution N^- by (N^-=k) = k > Kmax{N_k()-A_0 e^-a_0 k n^2/3,0}/|V_L()|,  if k ≥ 1, 1-(N^-≥ 1) ,  if k = 0, and set λ^-_k,r(t) := max{(1- A_1 r n^-1/3)rQ^-_k,r(t)/|V_L()|,0}. Define two probability distributions (Y^±_0,t,Z^0,±_t,R^±_t) on ^3 by ((Y^±_0,t,Z^0,±_t,R^±_t)=(y,z,r)) = N_y()y > K, z=0, r=0+ z Q^±_z,r(t)y=0, z ≥ 1/|^±_t| , where Q^±_z,r(t) and ^±_t are defined in Definition <ref>. Finally, define (Y_t^±,Z_t^±) and (Y_t^0,±,Z_t^0,±) analogously to (Y_t,Z_t) and (Y_t^0,Z_t^0) in (<ref>)–(<ref>) and (<ref>)–(<ref>), but with N, λ_k,r(t) and (Y_0,t,Z^0_t,R_t) replaced by N^±, λ^±_k,r(t) and (Y^±_0,t,Z^0,±_t,R^±_t).

Given t∈ [t_0,t_1] and a t-nice parameter list , we define _t^±=_t^±() as _Y_t^±,Z_t^±,Y^0,±_t,Z^0,±_t, where the general branching process definition is given in Definition <ref>, and the offspring distributions (Y_t^±,Z_t^±) and (Y_t^0,±,Z_t^0,±) are defined as above. Similarly, we define _t^1,±=_t^1,±() as ^1_Y_t^±,Z_t^±.

One of our main goals is to show that we can approximate the expected number of vertices in components of at least a given (large) size in =(^±_t) via the branching processes _t^±; see Theorem <ref> below. To do this, we need to relate, via stochastic domination, the exploration processes to the branching processes _t^±. Before turning to the domination arguments, we first record two simple observations. Firstly, |^±_t| ≈ n for t-nice . More precisely, using (<ref>), (<ref>), (<ref>)–(<ref>), (<ref>), with B_0 = 2/b we have ||^±_t|-n| = ||^±_t|-|||= | |V_S(^±_t)| - |V_S()| | ≤∑_k ≥ 1,r ≥ 0k |Q^±_k,r(t)-Q_k,r()| ≤∑_k ≥ 1, r ≥ 0: k+r ≤ B_0 log n k [e^-b_0(k+r)n^0.51 + (log n)^D_Q n^1/2] = O(n^0.51) .

The next observation concerns parity constraints. For any t-nice parameter list with t ∈ (t_0,t_1], the random variables defined above satisfy (Y_t^±,Z_t^±) ∈ ()^2 and (Y_t^0,±,Z_t^0,±) ∈ ()^2∪({0}× [K]) with probability 1. For k > 0, note that (N^±=k)>0 implies k>K and N_k()>0, so k∈∖ [K] by (<ref>). By Lemma <ref> it follows that N^± can only take values in . Furthermore, λ_k,r^±(t)>0 implies Q_k,r^±(t)>0, and hence, by (<ref>), that (k,r)∈.
Now the remaining argument of Lemma <ref> carries over.

The following theorem states that the number of vertices found by the exploration process ^+=(_t^+) is dominated from above by the total size of the branching process ^+_t. Let t∈ [t_0,t_1], and let be a t-nice parameter list. Define ^+_t as in Definition <ref>, and the branching process ^+_t=_t^+() as in Definition <ref>. Define the exploration process ^+=(^+_t) as in (<ref>)–(<ref>) of Section <ref>. Then there is a coupling such that |^+| ≤ |^+_t|.

By definition of ^+ and ^+_t it suffices to show that there is a coupling satisfying the following properties with probability one: (i) for j=0 the initial values satisfy M_0 ≤ Y^0,+_t and S_0 ≤ Z^0,+_t, and (ii) for every j ≥ 1 the step-wise differences satisfy M_j-M_j-1≤ Y^+_t and S_j-S_j-1≤ Z^+_t. This claim is an immediate consequence of Lemmas <ref> and <ref>. Indeed, the case j=0 follows from Lemma <ref> (noting that (Y^+_0,t,Z_t^0,+,R_t^+)=(Y_0,^+_t,Z^0_^+_t,R_^+_t) and N_k(^+_t) = N_k() hold), and the case j ≥ 1 follows by applying Lemma <ref> inductively.

The next theorem states that the exploration process ^- is dominated from below by the branching process ^-_t until both have found many vertices, namely at least n^2/3 (this cutoff aims at simplicity rather than the best bounds). Let t∈ [t_0,t_1], and let be a t-nice parameter list. Define ^-_t as in Definition <ref>, and the branching process ^-_t=_t^-() as in Definition <ref>. Define the exploration process ^-=(^-_t) as in (<ref>)–(<ref>) of Section <ref>. Then there is a coupling such that, with probability 1-n^-ω(1), we have |^-| ≥ |^-_t| or min{|^-|,|^-_t|} > n^2/3.

We follow the approach of Theorem <ref>, and think of the exploration process as a sequence of random vertex sampling steps. Intuitively, to achieve domination we (i) only sample from a subset of all vertices, and (ii) give up as soon as certain unlikely events occur (corresponding to `atypical' explorations). Turning to the details, for brevity, define Λ := n^2/3 . Given ζ>0 as in (<ref>), note that for n ≥ n_0(ζ,A_0) large enough we have |V_L|-A_0Λ≥ζ n - A_0n^2/3≥ζ/2 · n . Note that, in view of (<ref>) and the definition (<ref>) of a_0,D_0, for n ≥ n_0(a,A) we have N_k() = 0 for all k ≥ D_0 log n , min_k ≤ D_0 log n e^-2a_0kΛ≥ n^1/3 .

We now introduce ^*=^*(^-_t), which is a slight modification of the exploration process for =(^-_t) described in Section <ref>. The initial set W ⊆ V_L and initial value S_0 are chosen exactly as for (^-_t). Given these, ^* finds a subset C^-_W() ⊆ C_W() of the set C_W() of vertices reachable from W. The only difference from (^-_t) is that in each step with j ≥ 1, when we explore v_j ∈_j-1, we only test for new V_L–vertices from V_L,j := V_L∖(_j-1∪{v_j}), i.e., only consider a subset of the hyperedges tested by the original process. More precisely, we consider (test for their presence in =(^-_t)) all k-weighted hyperedges g∈ (V_L)^r of the form (v_j,w_1, …, w_r-1), …, (w_1, …, w_r-1,v_j) with all w_h∈ V_L,j. A key point is that (since v_j is added to the `explored' set after step j) a given hyperedge is tested at most once in this process. Comparing the numbers of vertices found by ^-=(^-) and ^*, see (<ref>) and (<ref>), we infer that |^-| = |C_W()| + S_0 ≥ |C^-_W()| + S_0 = |^*| .

Analogous to the proof of Theorem <ref>, in view of (<ref>) the basic idea is to inductively couple ^*=(M_j,S_j)_j ≥ 0 with ^-_t, such that M_j and S_j dominate (from above) the corresponding numbers of type L and S particles found in the exploration of ^-_t.
Within each step j of this coupling we shall perform a (random) number of vertex sampling steps. Recalling that _j∪_j is the set of vertices reached after j steps, in each vertex sampling step we shall (i) reveal a random vertex and add either zero or one component of H_L to the set of reached vertices, and (ii) reveal a new independent random variable with distribution N^- (for the details see below). We stop constructing the coupling as soon as, after any vertex sampling step, either of the following properties holds:

(P1) we can already witness that |_t^-| > Λ, or

(P2) the set of reached vertices contains, for some k ≥ 1, more than A_0 e^-a_0 kΛ vertices from V_L–components of size at least k.

Of course, we also stop constructing the coupling if after the end of some step j we have _j=0, i.e., the exploration has finished. Note that (P1) says that, in our coupled exploration of the branching process _t^-, we have already reached more than Λ particles. If we either complete the coupling or stop due to (P1), then we say the coupling succeeds; if we stop due to (P2), the coupling fails. From the way the coupling is defined (below), if we reach the end of the exploration we have |^*|≥ |_t^-|, while if we stop due to (P1) then, because the coupling succeeded up to this point (so we have reached at least as many vertices in ^* as particles in _t^-), we have |^*|,|_t^-|>Λ. We shall show that the probability of failure is n^-ω(1).

We start with step j=0, considering the initial set W arising in the definition of ^*, chosen exactly as in (_t^-). Recall from (<ref>)–(<ref>) that W is the union of the components of H_L containing a certain number (either 1 or r in the two cases) of vertices w_h chosen uniformly at random from V_L. Let C^*_w_h := C_w_h(H_L) ∖(⋃_1 ≤ s < h C_w_s(H_L) ) . Since components of H_L are disjoint, this definition simply says that C^*_w_h=C_w_h(H_L) if this is a `new' component, and C^*_w_h=∅ if it is a `repeated' component. In particular, |⋃_1 ≤ h ≤ r C_w_h(H_L)|=∑_1 ≤ h ≤ r |C^*_w_h|. As long as (P2) does not hold, for all k≥ 1 we have |C^*_w_h| =k | w_1, …, w_h-1≥max{N_k()-A_0 e^-a_0 kΛ,0}/|V_L| = (N^-=k) , so that each random variable |C^*_w_h| stochastically dominates N^-. Consequently, there is a coupling such that either (P2) occurs, or M_0 =|W|≥ Y^-_0,t + ∑_1 ≤ h ≤ R^-_t N^-_h=Y^0,-_t and S_0 ≥ Z^0,-_t both hold, establishing the base case.

Next we turn to step j ≥ 1. We may assume that the exploration has not finished (i.e., j ≤ M_j-1), and that (P2) does not hold; otherwise the coupling has already either succeeded or failed. Since (P2) does not hold, taking k=1, we have reached at most A_0Λ vertices in V_L, and hence j≤ A_0Λ. As in Section <ref>, we analyze the modified exploration process using auxiliary variables that may be constructed via a two-stage process. Turning to the details, let ^-_k,r,j denote the multi-set of (r-1)-tuples (w_1,…,w_r-1) of vertices in V_L,j corresponding to k-weighted hyperedges (v_j,w_1, …, w_r-1), …, (w_1, …, w_r-1,v_j) found in step j of our exploration. By standard properties of Poisson random variables, we have |^-_k,r,j| ∼(^-_k,r,j) with ^-_k,r,j := |V_L,j|^r-1rQ_k,r^-(t)/|V_L|^r = (|V_L|-j)^r-1rQ_k,r^-(t)/|V_L|^r≥λ^-_k,r(t), where we use the bound j/|V_L| ≤ A_0 Λ/|V_L| ≤ A_0/ζ· n^-1/3= A_1 n^-1/3 (see (<ref>) and (<ref>)) to establish the final inequality.
Moreover, conditional on |^-_k,r,j|=y_k,r, we have ^-_k,r,j={(w_k,r,1,1, …, w_k,r,1,r-1), …, (w_k,r,y_k,r,1, …, w_k,r,y_k,r,r-1)}, where each w_k,r,y,h∈ V_L,j is chosen independently and uniformly at random. Recalling the definition of the modified exploration process, it is not difficult to see that M_j-M_j-1 = |(⋃_k ≥ 0, r ≥ 2 ⋃_1 ≤ y ≤ |^-_k,r,j| ⋃_1 ≤ h ≤ r-1 C_w_k,r,y,h(H_L) ) ∖(_j-1∪_j-1)| , S_j-S_j-1 = ∑_k,r ≥ 1 k |^-_k,r,j| . To bound M_j-M_j-1 from below, we write the right-hand side of (<ref>) as a disjoint union of sets C^*_w_τ, similar to (<ref>). Indeed, _j-1∪_j-1 is a union of components of H_L, so each C_w_τ(H_L) is either a `new' component (disjoint from _j-1∪_j-1 and from those appearing before), or a `repeat'. In the latter case we set C^*_w_τ=∅. Since |V_L,j| ≤ |V_L|, arguing as for (<ref>) we see that, as long as (P2) does not occur, each |C^*_w_τ| stochastically dominates N^-. There is thus a coupling such that either (P1) or (P2) occurs (causing us to stop partway through step j), or M_j-M_j-1≥ Y^-_t and S_j-S_j-1≥ Z^-_t both hold, completing the induction step.

If the coupling above succeeds (i.e., stops due to finishing, or due to (P1)), then it has the required properties. Thus it only remains to show that it is unlikely to fail, i.e., to stop due to (P2) occurring. Let _1 be the `bad' event that we stop due to (P2) after more than 2Λ vertex sampling steps, and let _2 be the event that we stop due to (P2) after at most 2Λ vertex sampling steps; to complete the proof we must show that (_i)=n^-ω(1) for i=1,2; as we shall now see, this is straightforward.

From the definition (<ref>) of N^- and the fact that |V_L()|=∑_k>K N_k(), we have (N^-=0) ≤∑_k>K A_0 e^-a_0 k n^2/3/|V_L| = O(n^2/3)/|V_L| = o(1), recalling (<ref>). In particular, if n is large enough then (N^-≥ 1)≥ 3/4. In each vertex sampling step we `reach' N^- new vertices in _t^-; if _1 holds then we do not stop (for any reason) within 2Λ vertex sampling steps, and in particular (considering (P1)) we carry out 2Λ such steps reaching at most Λ particles in _t^-. Hence (_1) ≤(Bin(2Λ,3/4) ≤Λ), which, by a standard Chernoff bound, is e^-Ω(Λ) = n^-ω(1).

Turning to _2, let π_k := 2A e^-2 a_0k/ζ. By the choice (<ref>) of A_0, for any k' we have ∑_k≥ k' 8 k π_k≤∑_k≥ k'16k A e^-a_0k/ζ· e^-a_0 k'≤ A_0 e^-a_0 k'. Hence, if we stop due to (P2), there is some k such that we have reached more than 8kπ_kΛ vertices in components of size exactly k, and in particular, have reached a new component of size exactly k more than 8π_kΛ times. As noted above, if (P2) does not hold, then j≤ A_0Λ. Since a ≥ 2 a_0 and N_k()≤ N_≥ k()≤ A e^-akn (from (<ref>)), as long as (P2) does not hold, then (for n large enough) in each vertex sampling step we have (|C_w̃(H_L)| = k |⋯) ≤N_k()/|V_L,j|≤Ae^-akn/|V_L| -A_0Λ≤2A e^-2 a_0k/ζ = π_k , recalling (<ref>). By (<ref>), for n large enough (which we always assume), all components of H_L have size at most D_0log n. From the discussion above, if _2 holds then, considering when (P2) first holds, there is some k≤ D_0log n such that within the first at most 2Λ vertex sampling steps there are at least 8π_kΛ steps in which we choose a component of size k, with (P2) not holding at the start of any of these steps. The probability of this (for a given k) is at most (Bin(2Λ,π_k) ≥ 8Λπ_k) = e^-Ω(Λπ_k), using a standard Chernoff bound. From (<ref>) we have Λπ_k = Ω(n^1/3), so this probability is n^-ω(1), and summing over k≤ D_0log n we conclude that (_2)=n^-ω(1), completing the proof.
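To make the structure of the truncated exploration concrete, here is a heavily simplified single-type sketch in Python. It is an Erdős–Rényi-style stand-in with illustrative parameters: there are no hyperedges, no V_S-vertices, and only the analogue of stopping rule (P1) is implemented.

```python
import numpy as np
from collections import deque

def truncated_exploration(adj, start, cap):
    """Breadth-first exploration from `start`, stopped once more than
    `cap` vertices are reached (the analogue of stopping rule (P1))."""
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
                if len(seen) > cap:
                    return len(seen), True    # stopped early: component > cap
    return len(seen), False                   # full component revealed

# toy sparse random graph standing in for the weighted hypergraph
rng = np.random.default_rng(2)
n, c = 20_000, 0.8                            # subcritical mean degree c < 1
adj = [[] for _ in range(n)]
for _ in range(rng.poisson(c * n / 2)):
    u, v = rng.integers(n, size=2)
    if u != v:
        adj[u].append(v)
        adj[v].append(u)

sizes = [truncated_exploration(adj, rng.integers(n), cap=50)[0]
         for _ in range(2_000)]
print("mean number of vertices found:", np.mean(sizes))
```

In the subcritical toy case the truncation is rarely triggered, mirroring the fact that the coupling above fails only with probability n^-ω(1).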
We are now ready to bound the expected number of vertices in components of at least a certain size. The lower bound in (<ref>) below is only non-trivial for k ≤ n^2/3, but this suffices for our purposes. Let t∈ [t_0,t_1], and let be a t-nice parameter list. Define _t^± as in Definition <ref>, =(^±_t) as in Definition <ref>, and _t^±=_t^±() as in Definition <ref>. Then for all k ≥ 1 we have k ≤ n^2/3((|^-_t| ≥ k) |^-_t|- n^-ω(1)) ≤ N_≥ k() ≤ N_≥ k() ≤(|^+_t| ≥ k) |^+_t|. Since ^-_t ≼^+_t, from (<ref>) we have N_≥ k() ≤ N_≥ k(). Furthermore, we have N_≥ k((^±_t)) = (|(^±_t)| ≥ k) |^±_t| by (<ref>). Noting that |^±_t| = O(n), the result follows from Theorems <ref>–<ref>.

§.§.§ Second moment estimate In this subsection we use domination arguments (as in the previous section) to prove an upper bound on the second moment of the number N_≥Λ() of vertices in large components, where =(_t^+) with a t-nice parameter list. This will later be key for analyzing the size of the giant component in the supercritical case (see Sections <ref> and <ref>).

Before turning to the formal details, we shall outline the argument considering the simpler quantity X_L, the number of vertices in V_L=V_L() that, in , are in components of size at least Λ. Note that we may write the second moment of X_L as X_L^2 = ∑_v_1∈ V_L∑_v_2∈ V_L|C_v_1|≥Λ,|C_v_2|≥Λ. In turn, we can express this as |V_L|^2 times p := |C_v_1|≥Λ,|C_v_2|≥Λ, where the `starting vertices' v_1 and v_2 are chosen independently and uniformly from V_L. To estimate p, the basic plan is to explore outwards from the vertices v_1 and v_2 as usual, comparing each exploration to a (dominating) branching process _t,i^+, i=1,2. Here, as in previous sections, we view the graph as a (weighted) hypergraph on H_L, consisting of the original edges inside H_L plus some (k,r)-hyperedges, each of which connects r random vertices of V_L and contributes k extra vertices (in V_S) of its own.

The main difficulty we encounter is dependence between the two explorations: we seek a moment bound applicable in the supercritical case, when the main contribution is from v_1 and v_2 lying in the same component, the giant component. So, unchecked, the explorations are very likely to interfere with each other. To deal with this, we first explore the graph from v_1, stopping the exploration early if we reach Λ vertices (so far, this is how we estimated X_L, or rather N_≥Λ). Let U_1 be the set of vertices reached by the first exploration, and A_1⊆ U_1 the `boundary', i.e., those vertices reached but not yet fully explored (tested for new neighbours). If the exploration `succeeds' (reaches Λ vertices), then we start exploring from v_2. It may be that v_2∈ U_1, in which case |C_v_2|=|C_v_1|≥Λ. Otherwise, to avoid dependence, we explore from v_2 but only within V_L∖ U_1. The tricky case is when the second exploration stops, revealing the component C' of v_2 in the subgraph on V_L∖ U_1, and it turns out that |C'|<Λ. In this case it might still be true that |C_v_2|≥Λ; this happens if and only if there is a hyperedge joining A_1 to C'. (We have not yet tested these hyperedges.) We can bound the probability of this by an estimate proportional to |A_1||C'|/n. It turns out that this can be too large, so we use an idea of Bollobás and the first author <cit.>: we introduce a second early stopping condition for the first exploration, triggered if the boundary at any point becomes too large (see also Figure <ref>).
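The outlined two-round scheme is easy to prototype. The following Python sketch is again a single-type Erdős–Rényi stand-in with illustrative parameters; it implements the capped first exploration and the second exploration restricted to the complement of U_1, but omits the boundary-size stopping rule and the hyperedge correction term.

```python
import numpy as np
from collections import deque

def capped_bfs(adj, start, cap, forbidden=frozenset()):
    """BFS from `start` avoiding `forbidden`, stopped early at `cap` vertices."""
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen and w not in forbidden:
                seen.add(w)
                queue.append(w)
                if len(seen) >= cap:
                    return seen, True          # `succeeds': component >= cap
    return seen, False

rng = np.random.default_rng(3)
n, c = 20_000, 1.2                             # supercritical mean degree
adj = [[] for _ in range(n)]
for _ in range(rng.poisson(c * n / 2)):
    u, v = rng.integers(n, size=2)
    if u != v:
        adj[u].append(v)
        adj[v].append(u)

cap, trials, hits = 200, 500, 0
for _ in range(trials):
    v1, v2 = rng.integers(n, size=2)
    U1, big1 = capped_bfs(adj, v1, cap)
    if not big1:
        continue                               # first component small: no contribution
    if v2 in U1:
        hits += 1                              # v2 found by the first exploration
        continue
    # explore from v2 only outside U1; a complete argument must additionally
    # account for the untested edges meeting the boundary A_1 of U1
    C2, big2 = capped_bfs(adj, v2, cap, forbidden=frozenset(U1))
    hits += big2
print("empirical estimate of p:", hits / trials)
```

In the supercritical regime the empirical estimate of p is close to the square of the giant-component density, which is exactly the behaviour the second-moment bound needs to capture.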
As usual, we couple our explorations step-by-step with a dominating branching process, ^+_t, using bounds on the probability that ^+_t is large to bound the probability we are looking for. We end up with an overestimate, but since we only claim an upper bound (and the estimate turns out to be tight enough), this is no problem. Let us now turn to the details. We present here the combinatorial part of the argument, leading to the rather complicated bound (<ref>) below. In the supercritical case (t=+ with ^3n →∞) we shall later use results about the branching processes _t^+ and _t^1,+ to derive Var X = E X^2 - (E X)^2 = o( (E X)^2 ) for Λ = ω(^-2), see Lemma <ref>. In particular, the quantities τ, ν and ρ_1 appearing below (which depend on ) will then satisfy τ∼|_t^+|≥Λ∼|^+_t|=∞ =Θ(), ν=O(^-1), and ρ_1 = Θ(). The assumption Λ≥^-2 is not optimal here, but will be natural in the later probabilistic arguments.

Let t ∈ [t_0,t_1] with =t->0, and let be any t-nice parameter list. Define _t^+ as in Definition <ref>, =(^+_t) as in Definition <ref>, and _t^+=_t^+() as in Definition <ref>. Then, setting X=N_≥Λ(), for all ^-2≤Λ≤ n^2/3 we have X^2≤(log n)^2 X +|_t^+|^2( τ|_t^+|≥Λ + O(τν n^-1/3 + ν n^-1 + n^-1/3(|_t^+|=∞))), where τ is defined in (<ref>) below, and ν := (|^+_t| |^+_t|<∞) . Furthermore, defining ^1,+_t=^1,+() as in Definition <ref>, if ρ_1 =(|^1,+_t| = ∞) satisfies ρ_1 > 0, then τ≤1-e^-ρ_1Λ^-1·|_t^+| ≥Λ .

We follow the strategy outlined above the statement of the theorem, with the main additional complication being that we must also consider vertices in V_S, which is a random set; as in previous sections this leads to exploration and branching processes with a more complicated start. We write V_L=V_L(^+_t), N_k = N_k(_t^+)=N_k() and Q_k,r=Q_k,r^+(t) to avoid clutter in the notation. Let X=N_≥Λ(); our goal is to approximate the second moment X^2. Analogous to (<ref>) we write X=∑_v ∈ V_L|C_v()|≥Λ + ∑_k ≥ 1,r ≥ 0 ∑_g ∈_k,r k |C_g()|≥Λ, where _k,r is the (random) set of (k,r)-hyperedges in . We shall expand X^2, grouping all terms that arise from the same g∈_k,r taken twice into the sum Y below. Thus we write X^2 = Y + Z where Z=Z_1 + Z_2 + Z_3 +Z_4, with Y:= ∑_k,r∑_g ∈_k,r k^2 |C_g()|≥Λ, Z_1:=∑_v_1 ∈ V_L ∑_v_2 ∈ V_L|C_v_1()|≥Λ|C_v_2()|≥Λ, Z_2:=∑_v_1 ∈ V_L ∑_k_2,r_2∑_g_2 ∈_k_2,r_2 k_2 |C_v_1()|≥Λ|C_g_2()|≥Λ, Z_3:=∑_k_1,r_1∑_g_1 ∈_k_1,r_1 ∑_v_2 ∈ V_L k_1 |C_g_1()|≥Λ|C_v_2()|≥Λ , Z_4:=∑_k_1,r_1∑_g_1 ∈_k_1,r_1 ∑_k_2,r_2∑_g_2 ∈_k_2,r_2 k_1k_2 |C_g_1()|≥Λ|C_g_2()|≥Λg_1 ≠ g_2.

Note that the decomposition above is very different from that used in (<ref>) and (<ref>), where Y contained all terms with the two g_i (or v_i) in the same component; here we only put the g_1, g_2 term into Y if g_1=g_2. The reason for using a different decomposition here is that the previous decomposition is only useful when we have an upper bound on the size of the relevant components. The term Y will be easy to handle (see below); we first discuss how to evaluate the expectations of the Z_i. For Z_1 we of course have Z_1 = |V_L|^2|C_v_1()| ≥Λ, |C_v_2()| ≥Λ, where v_1 and v_2 are independently chosen uniformly at random from V_L. For the remaining terms we have the complication that the sets _k,r being summed over are random sets, with Poisson sizes.
For Z_2, arguing exactly as for (<ref>), by standard results on point processes we have Z_2 = |V_L| ∑_k_2,r_2 k_2 Q_k_2,r_2 |C_v_1(_2)|≥Λ,|C_g_2(_2)|≥Λ, where as before v_1 is a random vertex from V_L, but now _2 is formed from by adding an `extra' (k_2,r_2)-hyperedge g_2, joined (as always) to r_2 vertices of V_L chosen independently at random. There is of course a similar formula for Z_3. Finally, arguing as for (<ref>), we have Z_4 =∑_k_1,r_1∑_k_2,r_2 k_1 Q_k_1,r_1 k_2 Q_k_2,r_2 |C_g_1(_12)|≥Λ,|C_g_2(_12)|≥Λ, where _12 is formed from by adding an extra (k_1,r_1)-hyperedge g_1 and a distinct extra (k_2,r_2)-hyperedge g_2, both joined to in the usual way. (Here we used the condition g_1 ≠ g_2 in the definition of Z_4.)

At this point it seems that we might have several cases to consider; fortunately, we can group them back together, using the argument for (<ref>) in Section <ref>, but considering the (random) starting points x_1, x_2 for two explorations; each x_i will play the role of either v_i or g_i. To define x_1, for any v∈ V_L, with probability 1/|_t^+| we set x_1=v_1=v, while for each k_1≥ 1 and r_1≥ 0, with probability k_1Q_k_1,r_1/|_t^+| we set x_1=g_1 with g_1 of type (k_1,r_1). Since _t^+ is a parameter list, this defines a probability distribution. Define x_2 in the same way, but independently from x_1. Then it is straightforward to see from the formulae for Z_i above that Z = |_t^+|^2|C_x_1(_*)|≥Λ,|C_x_2(_*)|≥Λ, where _* is obtained from as follows: if x_1=g_1 is of type (k_1,r_1), we add g_1 as an extra (k_1,r_1) hyperedge, and similarly if x_2=g_2. If both hold, the added extra hyperedges are distinct (and independent).

As before, we deal with starting our exploration from an extra hyperedge by passing to the set W of vertices reachable in one step. More precisely, for each i, if x_i=v_i we set S_0,i:=0 and W_i := C_v_i(H_L), while if x_i is a (k_i,r_i)-hyperedge g_i, we set S_0,i:=k_i and W_i := ⋃_1≤ h≤ r_i C_w_h,i(H_L), with all w_h,i chosen independently and uniformly from V_L. For each i, this is exactly the starting rule used in (<ref>)–(<ref>), with the two starts independent. As in Section <ref>, we shall explore within =(^+_t), from each starting set W_i. Let us write _i for this exploration, defined exactly as =(_t^+) is defined in Section <ref>, with (W_i,S_0,i) as the initial values. Of course, the two explorations are far from independent, since they explore the same random hypergraph . There is also another problem: in the exploration _1, say, the initial values account for the possibility of an added (k_1,r_1)-hyperedge g_1 – if we do add such a hyperedge in forming _*, then in _1 we already count its k_1 extra vertices from the start, and account for the fact that g_1 connects the vertices in W_1 by marking these as `reached' right from the start. Unfortunately, _1 does not account for any connections formed by possibly adding g_2. Still, if the exploration _1 does not reach any vertex in W_2, then we do have |_1|=|C_x_1(_*)|. To formalize this, let := { W_1 and W_2 are connected in } be the event that there is a path from some vertex in W_1 to some vertex in W_2 within the hypergraph . This is exactly the event that the complete exploration _1 meets W_2, or vice versa. When does not hold, we have |_i| = |C_x_i(_*)|. Hence, setting := { |_1|≥Λ and |_2|≥Λ} we have { |C_x_1(_*)|≥Λ and |C_x_2(_*)|≥Λ} ⊆ ∪, and so X^2 =Y+Z ≤ Y + |_t^+|^2 ·(∪).

The easiest term to bound on the right-hand side above is Y.
As in previous sections, set Ψ := (log n)^2, which is simply a convenient `cut-off' for the various distributions involved. Indeed, recalling our conventions N_k=N_k() and Q_k,r=Q^+_k,r(t), by (<ref>), (<ref>) and (<ref>) we see that max_k ≥Ψ N_k=0 and max_k+r ≥Ψ Q_k,r=0. In other words, all components of our initial marked graph H have size at most Ψ, where in a (k,r)-hyperedge we count the k vertices and the r stubs in determining its size. It follows immediately that Y ≤Ψ· X, so Y ≤Ψ· X .

It remains to bound (∪), which we do by considering the explorations _1 and _2 defined within (not _*); our aim is to compare the explorations with two independent copies ^+_t,i, i=1,2, of the branching process ^+_t=^+_t() given by Definition <ref>. In order to retain sufficient independence in the analysis, we will need to consider restricted explorations _i^-⊆_i, defined in different ways for i=1 and i=2.

Since each exploration _i has the same distribution as the exploration (_t^+) defined in previous sections, by Theorem <ref> we may couple _1 with _t,1^+ so that the latter dominates the former. As outlined above, we may wish to abandon the exploration/coupling part way through; in fact, it will be convenient to construct _1^-⊆_1, and couple it with _t,1^+, in an `edge-by-edge' way, to allow for stopping part way through step j. Thus, in step j ≥ 1 (where we process v_j ∈_j-1) we sequentially consider, for all k ≥ 0 and r ≥ 1 with k+r ≤Ψ, each so-far untested k-weighted hyperedge g∈ (V_L)^r of the form (v_j,w_1, …, w_r-1), …, (w_1, …, w_r-1,v_j). For each such hyperedge g we test for its presence and multiplicity m_g; if m_g ≥ 1 we (i) mark the vertices ⋃_1 ≤ h ≤ r-1C_w_h(H_L) ∖ (_j-1∪_j-1) as active, and (ii) increase the number of found V_S–vertices by k m_g. Finally, at the end of step j we move v_j from the set of active vertices to the set of explored vertices. We stop the exploration/coupling if either (i) at the beginning (when the reached and active sets are both equal to W_1), or (ii) after completely processing any particular hyperedge g, one of the following two conditions holds:

(P1) the exploration has reached at least Λ vertices in V_L, or

(P2) there are currently at least 2Λ active vertices (vertices in V_L that have been reached but not yet fully explored).

For later reference we define the event := {we stop _1 due to (P1) or (P2)} . If does not hold, then we complete the coupling, exploring the entirety of _1. Thus implies _1^-=_1. If |_1|≥Λ then holds: at the latest we stop when we have reached Λ vertices in the exploration of _1. In other words, |_1| ≥Λ implies . Define |_1^-| to be the total number of vertices reached by the possibly truncated exploration _1^-, including the |W_1|+S_0,1 initial vertices. From the domination in the coupling we certainly have |_t,1|≥ |_1^-|.

Let w(^+_t,1) denote the maximum (supremum) of the number of L–particles in any generation of the branching process ^+_t,1. Since we constructed our coupling in breadth-first search order, if (P2) holds then the at least 2Λ active vertices at this point correspond to particles in ^+_t,1 that are contained in two consecutive generations of ^+_t,1. Allowing for the fact that we might stop due to (P2) partway through the exploration of some vertex v_j, it follows that (P2) implies w(^+_t,1) ≥2Λ-1/2≥Λ . By the domination in the coupling, (P1) implies |^+_t,1|≥Λ. Thus if holds, either |^+_t,1|≥Λ or w(^+_t,1) ≥Λ.
Since |^+_t,1| < Λ implies |^+_t,1| < ∞, we thus obtain () ≤(|^+_t| ≥Λ) + |^+_t| < ∞, w(^+_t) ≥Λ =: τ, where we think of (|^+_t| ≥Λ) as the `main term'. Here we have dropped the subscripted 1's, since _t,1^+ has the same distribution as _t^+.

Let U_1⊆ V_L denote the set of V_L-vertices reached by the first exploration _1^-, the possibly truncated version of _1. Also, let A_1⊆ U_1 denote the set of active (not yet fully explored) vertices at the end of _1^-, so A_1 ≠∅ only if we stopped the exploration early. Since we only stop after completely processing any hyperedge, we can `overshoot' our stopping criteria somewhat, but, from (<ref>), only by at most (Ψ-1) ·Ψ≤Ψ^2, say. It follows that |U_1|≤Λ + Ψ^2≤ 2 n^2/3, |A_1|≤ 2Λ + Ψ^2 ≤ 5n^2/3, where we tacitly used max{Λ,1,Ψ^2}≤ n^2/3 and ^-1·Ψ^2 ≤√(Λ)·Ψ^2 ≤ n^2/3.

After the first exploration, by definition all potential hyperedges meeting U_1∖ A_1 have already been tested for their presence in . Furthermore, no such hyperedges containing any vertices outside U_1 are present in . (Such vertices would have been reached by the exploration.) The remaining untested potential hyperedges are of two types: those entirely outside U_1, and those meeting A_1. Let  denote the set of potential hyperedges of the latter type.

Turning to the second exploration _2, we define _2^-⊆_2 as follows: we explore as usual starting from the initial set W_2 and initial value S_0,2, but only testing hyperedges outside U_1. More precisely, if W_2 meets U_1 we shall not explore at all (defining _2^- to be empty and have size 0, say); otherwise, we run our second exploration in the subgraph (U_1^) obtained by deleting all vertices in U_1 and all incident hyperedges, as in the proof of Lemma <ref>. We write |_2^-| for the size of _2^-, i.e., the number of vertices reached, counting vertices in V_S, and including the initial |W_2|+S_0,2 vertices. Since (U_1^) is an induced subgraph of , there is a coupling such that _2^- is dominated by _t,2^+; we construct this coupling exactly as for _1, except that some tests (of edges meeting A_1) are simply omitted. This coupling gives |_2^-| ≤ |^+_t,2| , where ^+_t,2 has the distribution of ^+_t and is independent of ^+_t,1. Indeed, we obtain independence of the coupled branching processes for the same reason that we can couple with a branching process in each case: in each step our arguments show that the conditional distribution of the number of new vertices we reach in the hypergraph exploration is dominated by the distribution arising in the branching process. For _2^- the conditioning here is on the entire exploration _1^- as well as all earlier steps of _2^-.

Let U_2 be the set of vertices in V_L reached by the (restricted) exploration _2^-. Let _2 := { W_2∩ U_1 ≠∅}, and let _2 := {at least one hyperedge in meeting U_2 is present in }. We claim that = _2∪_2. To see this, suppose holds but not _2. Then the set U_2 of vertices in V_L found by _2^- consists of all vertices in U_1^ connected, in [U_1^], to W_2. Since there is a path from W_2 to W_1 in , there must be a hyperedge in containing some vertex in U_2 and some vertex in U_1. But such a hyperedge must be in . Hence ⊆_2∪_2. The reverse containment is immediate.

If =_2∪_2 does not hold, then _2^-=_2. Hence, if ∖ holds, we have |_2^-|=|_2|≥Λ. Since |^+_t,2|≥ |_2^-| by (<ref>), using (<ref>) we see that ∖ implies ∩{|^+_t,2|≥Λ}. If does not hold, then the exploration _1^- runs to completion, and in particular A_1=∅ and hence =∅, so _2 cannot hold.
Thus _2 = ∩_2. Recalling that =_2∪_2, we conclude that ∪⊆∩{|^+_t,2|≥Λ}∪ (∩_2) ∪_2. Hence (∪) ≤, |^+_t,2|≥Λ+ , _2, |^+_t,2|<Λ+ (_2).

The first (main) term in (<ref>) is easy to bound: the branching process ^+_t,2 has the distribution of _t^+ and is independent of our first exploration and hence of . Thus, |^+_t,2|≥Λ = () |_t^+|≥Λ.

We now turn to , _2, |^+_t,2|<Λ. We will evaluate this by conditioning on the result of the two explorations _1^- and _2^- as well as the coupled branching process _t,2^+. (More formally, we condition on all information revealed during these explorations.) The first key observation is that, for any two distinct vertices x_1,x_2 ∈ V_L, the probability π that they are connected by some so-far untested hyperedge satisfies π=O(n^-1). To see this, note that there are at most r^2 |V_L|^r-2 = O(r^2 n^r-2) hyperedges containing x_1,x_2, and each so-far untested hyperedge appears independently according to a Poisson process with rate ^+_k,r= _k,r(^+_t) =Q_k,r^+(t)/|V_L|^r = O(e^-b_1(k+r)n^1-r), see (<ref>), (<ref>) and (<ref>). Using a union bound argument, it follows that π≤∑_k≥ 0,r ≥ 0[r^2 |V_L|^r-2·^+_k,r] = O(∑_k≥ 0,r ≥ 0 r^2 e^-b_1(k+r)/n) = O(n^-1) .

Recall that U_2 is the set of vertices in V_L reached by _2^-. Since |U_2|≤ |_2^-|≤ |_t,2^+|, recalling (<ref>) the total number of pairs of vertices (x_1,x_2) ∈ A_1 × U_2 is at most |A_1 × U_2|= |A_1| · |U_2| ≤ |A_1|· |^+_t,2| ≤ 5n^2/3 |_t,2^+| . If _2 holds, at least one of these pairs is connected by some so-far untested hyperedge. Hence, by a union bound argument, using π=O(1/n) we infer _2 |_1^-,_2^-,_t,2^+ ≤ |A_1 × U_2| ·π = O( n^-1/3) · |^+_t,2| , and so _2, |^+_t,2|<Λ|_1^-,_2^-,^+_t,2 = |^+_t,2|<Λ·_2 |_1^-,_2^-,^+_t,2= O( n^-1/3) · |^+_t,2||^+_t,2|<Λ. Taking the expectation over _2^- and ^+_t,2 (which are coupled with each other), since ^+_t,2 is independent of _1^-, we conclude that _2, |^+_t,2|<Λ|_1^-= O( n^-1/3) ·(|^+_t,2| |^+_t,2|<Λ) . Since _t,2^+ has the distribution of _t^+, we have (|^+_t,2| |^+_t,2|<Λ) = (|^+_t| |^+_t|<Λ) ≤(|^+_t| |^+_t|<∞) =:ν, so _2, |^+_t,2|<Λ|_1^-= O(ν n^-1/3). Since this holds whatever the outcome of _1^-, and this outcome determines whether holds, we conclude that , _2, |^+_t,2|<Λ= O(ν n^-1/3) ·().

Next we bound the probability of the event _2 that the random starting set W_2 of our second exploration intersects the set U_1 of V_L-vertices reached by the first, truncated exploration. Now W_2=C_R_2(H_L) is the union of the components of the initial graph H_L containing the random vertices in R_2, where R_2 consists either of a single random vertex of V_L, or of r_2 independent random vertices of V_L. Since U_1 is a union of components of H_L, the event _2 holds if and only if R_2 contains at least one vertex from U_1. Since R_2 is independent of U_1, using conditional expectations we thus infer (_2) ≤(|R_2∩ U_1| ≥ 1) ≤( ( |R_2| ·|U_1|/|V_L| | |R_2|, U_1 )) =|R_2|· |U_1| /|V_L|.

Recall that |_t^+|= Θ(n) by (<ref>), and that V_L=V_L(^+_t) satisfies |V_L|=Θ(n) by (<ref>) and the definition of _t^+, see (<ref>). Since the variables Q_k,r=Q_k,r^+(t) have exponential tails, see (<ref>) and (<ref>), we have |R_2| =∑_v_2 ∈ V_L1/|_t^+| +∑_k_2,r_2 r_2 k_2Q_k_2,r_2/|_t^+| = O(1)·n + ∑_k,r kr e^-b_1(k+r)n/n= O(1). Since |V_L|=Θ(n), we thus obtain (_2) = O(n^-1) · |U_1|= O(n^-1) ·( min{|_t,1^+|,2n^2/3} ) = O(n^-1) ·( min{|_t^+|,2n^2/3} ), where in the second step we used |U_1|≤ |_1^-| ≤ |_1| ≤ |_t,1^+| and the bound (<ref>), and in the final step we used that _t,1^+ has the distribution of _t^+.
Now ( min{|_t^+|,2n^2/3} ) ≤( |_t^+||_t^+|<∞ ) + ( 2n^2/3|_t^+|=∞ )= ν + 2n^2/3(|_t^+|=∞). Hence (_2) ≤ O(ν n^-1 + n^-1/3(|_t^+|=∞)).

Combining <ref> yields (∪) ≤() ·(|_t^+|≥Λ) + O(ν n^-1/3)+ O(ν n^-1+n^-1/3(|_t^+| = ∞)). Together with <ref>, this completes the proof of inequality (<ref>).

It remains to prove the claimed upper bound (<ref>) for τ defined in (<ref>). Recall that the width w() of a branching process is defined as the supremum of the number of particles in any generation. We first estimate the probability of the event involving w(^+_t) ≥Λ in (<ref>). Analogous to Section 2 of <cit.>, by sequentially exploring _t^+ generation-by-generation, we can stop at the first generation with at least Λ particles of type L. The children of each of these particles form independent copies of the branching process ^1,+_t=^1,+() defined in Definition <ref> (note that this process differs from _t^+), so the conditional probability of dying out is at most (1-ρ_1)^Λ for ρ_1 =(|^1,+_t| = ∞). Since ρ_1>0, it follows that |^+_t| < ∞| w(^+_t) ≥Λ≤ (1-ρ_1)^Λ≤ e^-ρ_1 Λ < 1 . Note that, for any two events , with (|)>0, we have (,) = () ·(|)= ( ,)/(|)·(|) ≤() ·(|)/1-(|) . Since x/(1-x) is monotone increasing for x < 1, we thus obtain |^+_t| < ∞, w(^+_t) ≥Λ≤(|^+_t| = ∞) ·e^-ρ_1 Λ/(1-e^-ρ_1 Λ) , which together with (|^+_t| = ∞) ≤(|^+_t| ≥Λ) and 1+x/(1-x)=1/(1-x) completes the proof of inequality (<ref>).

§ COMPONENT SIZE DISTRIBUTION: QUALITATIVE BEHAVIOUR In this section we study the Poissonized random graphs =(^±_t) introduced in Section <ref>. Our goal is to use properties of the closely related branching processes _t and ^±_t, together with results from the previous section, to estimate various moments of the component size distribution of . In Section <ref> we establish several technical properties of the offspring distributions of _t and ^±_t. In Section <ref> we state results for the survival and point probabilities of these branching processes, which in Section <ref> are then used to estimate the first moment and variance of (a) the number of vertices of =(^±_t) in components of at least certain sizes and (b) the rth order susceptibility of . As a by-product, we also establish several results (Theorems <ref>, <ref> and <ref>) describing the qualitative behaviour of various limiting functions appearing in Section <ref>. Finally, as mentioned earlier, in Section <ref> we will use Lemmas <ref>, <ref> and <ref> to transfer properties of (^±_t) back to the original random graph process G^_n,tn.

§.§ Properties of the offspring distributions In this subsection we revisit the branching processes _t,_t^1 and _t^±,_t^1,± defined in Sections <ref> and <ref>, and derive properties of their offspring distributions. We start with the `idealized' offspring distributions (Y_t,Z_t) and (Y^0_t,Z^0_t) defined in Section <ref>, studying the probability generating functions (t,α,β) := ( α^Y_tβ^Z_t) and ^0(t,α,β) := ( α^Y^0_tβ^Z^0_t). These expectations (infinite sums) make sense for complex α and β whenever the corresponding sum converges absolutely. A priori, they make sense only for real t; however, we shall show that both probability generating functions extend to analytic functions in a certain complex domain. There exist δ>0 and R>1 such that the functions (t,α,β) and ^0(t,α,β) are defined for all real t with |t-|<δ and complex α,β with |α|,|β|<R. Furthermore, each of these functions has an analytic extension to the complex domain _δ,R:={(t,α,β) ∈^3: |t-|<δ and |α|,|β|<R}.
The proof hinges on the following two facts: (i) that the probability generating function of the distribution N defined in (<ref>) is analytic due to the exponential tails of Theorem <ref>, and (ii) that the generating function P(t,x,y) defined in (<ref>) is analytic by Theorem <ref>. Turning to the details, we first study Φ(α):=α^N = ∑_k > Kα^k ρ_k(t_0)/ρ_ω(t_0). Let β_0=e^b/3>1, where b>0 is the constant in (<ref>). Recalling the exponential tail bound |ρ_k(t_0)| ≤ A e^-ak of (<ref>), standard results for power series yield that Φ(α)=α^N is analytic for all α∈ with |α| < e^a. Since Φ(1)=1, we may pick α_0 ∈ (1,e^a) such that Φ(α_0) < β_0. Since Φ is a power series with non-negative coefficients, it follows that |Φ(α)| < β_0 for all α∈ with |α|≤α_0. We shall prove the result with R:=min{α_0,β_0}>1.

Recalling the definition of (Y_t,Z_t), see (<ref>), by independence and using that H_k,r,t∼(λ_k,r(t)) it follows that (t,α,β)= ∏_k ≥ 0,r ≥ 1([(α^N)^r-1β^k]^H_k,r,t)= exp{∑_k ≥ 0,r ≥ 1λ_k,r(t)((Φ(α))^r-1β^k-1)} . Recalling λ_k,r(t)= r q_k,r(t)/ρ_ω(t_0) and the definition of P(t,x,y), see (<ref>), we see that (t,α,β) = exp{(P_y(t,β,Φ(α))-P_y(t,1,1)) / ρ_ω(t_0)}. By Theorem <ref> there is some δ>0 such that P(t,x,y) has an analytic extension to the complex domain _δ,β_0. Replacing P by this extension in the formula above gives the required analytic extension of , since derivatives, compositions and products of analytic functions are analytic.

Finally, we consider ^0(t,α,β)=( α^Y^0_tβ^Z^0_t), which from (<ref>)–(<ref>) satisfies ^0(t,α,β)= ∑_k > Kρ_k(t_0) α^k + ∑_z ≥ 1,r ≥ 0 z q_z,r(t) (α^N)^rβ^z = ρ_ω(t_0)Φ(α) + β P_x(t,β,Φ(α)). Using again that derivatives, products and compositions of analytic functions are analytic, we see that ^0(t,α,β) also has an analytic extension of the claimed form.

Since _α(t,1,1)= Y_t and _αα(t,1,1)= Y_t(Y_t-1), Theorem <ref> implies that Y_t, Y_t^2 and thus Y_t are analytic for t ∈ (-,+). A similar argument applies to Z_t, Y^0_t and Z^0_t. Intuitively, we now show that  is the `critical point' of the branching process _t= _Y_t,Z_t,Y_t^0,Z_t^0 defined in Section <ref> (as expected, since a linear-size giant component appears after time  in the random graph process). We have Y_=1. Furthermore, for all t ∈ (t_0,t_1) we have Y^0_t >0 and Y_t > 0 .

Fix t ∈ (t_0,t_1). Recalling the definition of Y_t and u(t), see (<ref>) and (<ref>), using independence, the fact that H_k,r,t∼(λ_k,r(t)) and that H_k,r,t = λ_k,r(t)= r q_k,r(t)/ρ_ω(t_0), we see that Y_t = ∑_k ≥ 0, r ≥ 2[H_k,r,t· (r-1) · N]= [∑_k,r ≥ 0r(r-1) q_k,r(t) ] · N/ ρ_ω(t_0) = u(t)N/ ρ_ω(t_0) . Since N > 0, Lemma <ref> thus entails Y_t = u'(t) · N/ ρ_ω(t_0) > 0. By (<ref>)–(<ref>) we similarly have Y^0_t ≥ Y_0,t =N ·ρ_ω(t_0)> 0.

We next prove Y_=1. By Corollary <ref> we have (|_t|=∞)=ρ(t) for t ∈ [t_0,t_1], so the discussion below (<ref>) implies (|_t|=∞)=0 for t ∈ [t_0,] and (|_t|=∞)>0 for t ∈ (,t_1]. Recall that the branching process _t has (except for the initial generation) a two-type offspring distribution (Y_t,Z_t), which corresponds to particles of type L and S, respectively. Since only type L particles (which are counted by Y_t) have children, by (<ref>) standard branching process results imply Y_t≤ 1 for t ∈ [t_0,) and Y_t≥ 1 for t ∈ (,t_1]. Now Y_ = 1 follows since Y_t is analytic and thus continuous at t=.

Intuitively speaking, we next show that no linear relation of the form aY_t + b Z_t=c holds. Define as in Lemma <ref>. There exists k_0 > K such that, for all t ∈ (t_0,t_1), min{(Y_t=k_0,Z_t=k_0),(Y_t=k_0+,Z_t=k_0),(Y_t=k_0,Z_t=k_0+)} > 0 .
Fix t ∈ (t_0,t_1). By Lemma <ref> there exists k_0 ∈ with k_0 ≥max{K+1,} and k_0+∈. By Lemma <ref>, ρ_k_0(t_0) and ρ_k_0+(t_0) are positive. Furthermore, since k_0>K, by Lemma <ref><ref> we have (k_0,2)∈ and (k_0+,2)∈, and hence q_k_0,2(t) and q_k_0+,2(t) are positive. We consider the cases ∑_k,r ≥ 0H_k,r,t∈{H_k_0,2,t,H_k_0+,2,t} in the definition (<ref>) of (Y_t,Z_t). For k^* ∈{k_0,k_0+} we then focus on the event H_k^*,2,t=1, and consider the cases N_k^*,1,1,1∈{k_0,k_0+} in the definition (<ref>) of (Y_t,Z_t). Recalling that ∑_k,r ≥ 0λ_k,r(t) ∈ (0,∞), it follows that (<ref>) holds.

We now turn to the `perturbed' offspring distributions (Y^±_t,Z^±_t) and (Y^0,±_t,Z^0,±_t) defined in Section <ref>. Note that these distributions depend not only on t, but also on the (t-nice) parameter list , see Definition <ref>. The next result intuitively states that all such probability generating functions ^± and ^0,±, defined in (<ref>) below, are almost indistinguishable from the corresponding `idealized' and ^0 defined in (<ref>).

There exist C,n_0>0 and R>1 such that the following holds for all n≥ n_0, all t ∈ [t_0,t_1] and all t-nice parameter lists . Define (Y^±_t,Z^±_t) and (Y^0,±_t,Z^0,±_t) as in Definition <ref>, and set ^±(t,α,β) := ( α^Y^±_tβ^Z^±_t) and ^0,±(t,α,β) := ( α^Y^0,±_tβ^Z^0,±_t) . Then, writing := {x ∈: |x| ≤ R}, we have sup_α,β∈max{|(t,α,β)|,|^±(t,α,β)|,|^0(t,α,β)| ,|^0,±(t,α,β)|}≤ C , sup_α,β∈|(t,α,β) -^±(t,α,β) | ≤ C n^-1/3, sup_α,β∈|^0(t,α,β) - ^0,±(t,α,β)| ≤ C n^-1/3 .

We start by showing that N^± and λ^±_k,r(t) are very good approximations to N and λ_k,r(t). Here and throughout the proof, no constant depends on t∈ [t_0,t_1] or on the choice of . Combining the definitions of N and N^± (see (<ref>) and Definition <ref>) with the exponential tails of ρ_k(t_0) and N_≥ k (see (<ref>) and (<ref>)), we see that there are absolute constants d,D,n_0>0 such that, for n ≥ n_0, max{(N=k),(N^±=k)} = O(e^-ak) ≤ D e^-d k. Recall that (N^±=k) approximates N_k/|V_L|=N_k()/|V_L()|, and that N_k approximates ρ_k(t_0)n (see Definition <ref> and (<ref>)). After decreasing d and increasing D,n_0 (if necessary), using ρ_k(t) ≤ A e^-ak and a ≥ a_0 (see (<ref>) and (<ref>)) together with the upper bound (<ref>) and (<ref>), it is routine (but slightly messy) to see that, for n ≥ n_0, |(N=k)-(N^±=k)|= O(min{(log n)^D_n^-1/2+e^-a_0kn^-1/3, e^-a k}) ≤ D e^-d k n^-1/3 . For λ_k,r(t) and λ^±_k,r(t) as defined in (<ref>) and Definition <ref>, similar reasoning shows that (again after decreasing d and increasing D,n_0, if necessary), for n ≥ n_0, max{|λ_k,r(t)|, |λ^±_k,r(t)|} ≤ D e^-d(k+r), |λ_k,r(t) - λ^±_k,r(t)|≤ D e^-d (k+r) n^-1/3 .

With the above estimates in hand, the proof boils down to routine calculations (analogous to those from Theorem <ref>). Turning to the details, let Φ(α):=α^N and Φ^±(α):=α^N^± . Using (<ref>) we write (t,α,β) = exp{∑_k ≥ 0,r ≥ 1λ_k,r(t)((Φ(α))^r-1β^k-1)} =: exp{Γ(t,α,β)}. Recalling the definition of (Y^0,±_t,Z^0,±_t), see Definition <ref>, arguing as for (<ref>) we obtain ^±(t,α,β) = exp{∑_k ≥ 0,r ≥ 1λ^±_k,r(t)((Φ^±(α))^r-1β^k-1)} =: exp{Γ^±(t,α,β)}. Now Φ(1)=Φ^±(1)=1. Using the (uniform) exponential tail bound (<ref>) to bound the derivatives of Φ and of Φ^±, we may find a constant 1<R<e^d/2 such that Φ(R),Φ^±(R)<e^d/2.
Writing := {x ∈: |x| ≤ R} as in the statement of the theorem, since Φ and Φ^± are power series with non-negative coefficients it follows that sup_α∈max{|Φ(α)|,|Φ^±(α)|}≤ e^d/2 . Together with the exponential tail bound (<ref>) and R ≤ e^d/2, it follows that there is a C_1 ≥ 1 such that sup_α, β∈max{|Γ(t,α,β)|,|Γ^±(t,α,β)| }≤∑_k ≥ 0,r ≥ 1 De^-d (k+r)(e^d(k+r)/2+1) ≤ C_1 . Furthermore, using R ≤ e^d/2 and the exponential difference estimate (<ref>), there is a C_2 >0 such that sup_α∈|Φ(α)-Φ^±(α)| ≤∑_k ≥ 0e^dk/2· De^-d kn^-1/3≤ C_2 n^-1/3. Note that (as easily seen by induction), for all I ∈ we have |∏_h ∈ [I] y_h-∏_h ∈ [I] z_h| ≤∑_j ∈ [I] |y_j-z_j| ·∏_1 ≤ h < j |y_h|∏_j < h ≤ I |z_h|. Together with the bound (<ref>) and the difference estimate (<ref>), it now follows for r ≥ 1 that sup_α∈|(Φ(α))^r-1-(Φ^±(α))^r-1|≤ r · C_2 n^-1/3· (e^d/2)^max{r-2,0}≤ C_2 r e^dr/2 n^-1/3. Together with the difference estimates (<ref>), the upper bound (<ref>) and R ≤ e^d/2, using (<ref>) we also infer that there is a C_3 ≥ 1 such that, say, sup_α, β∈|Γ(t,α,β) - Γ^±(t,α,β) | ≤∑_k ≥ 0,r ≥ 1 D (C_2r+2) e^-d(k+r)/2 n^-1/3≤ C_3 n^-1/3 . Together with (<ref>) and (<ref>)–(<ref>), setting C := 2 C_3 e^C_1, say, for n ≥ n_0(C) large enough this readily establishes (<ref>) and the upper bounds for  and ^± in (<ref>). Finally, we omit the analogous arguments for ^0(t,α,β) and ^0,±(t,α,β).

§.§ Branching process results In this subsection we state a number of results concerning the branching processes _t and _t^±, which we shall prove in a companion paper <cit.> written with Svante Janson (modulo a reduction given in Appendix <ref>). As we shall see, their survival and point probabilities are qualitatively similar to those of the standard Galton–Watson branching processes arising in the context of classical Erdős–Rényi random graphs. In particular, for t=+ the survival probabilities grow linearly in , and for t=± the size-k point probabilities decay exponentially in Θ(^2k).

We start with our results for the `idealized' branching process _t= _Y_t,Z_t,Y_t^0,Z_t^0 defined in Section <ref>. There exists _0>0 such that the survival probability ρ(t)=(|_t|=∞) is zero for -_0≤ t≤, is positive for < t ≤ + _0, and is analytic on [,+_0]. In particular, there are constants a_i with a_1>0 such that for all ∈ [0,_0] we have ρ(+) = ∑_i=1^∞ a_i^i . Moreover, an analogous statement holds for ρ_1(t)=(|^1_t|=∞), where _t^1=^1_Y_t,Z_t is defined as in Section <ref>. Note that this result and Corollary <ref>, which gives ρ(t) = (|_t|=∞) for t ∈ [t_0,t_1], immediately imply Theorem <ref>.

Recall from Section <ref> that is the set of component sizes which can be produced by the rule , and that for t>0, ρ_k(t)>0 if and only if k∈ (see Lemma <ref>). There exists _0>0 such that (|_t|=k) = (1+O(1/k)) k∈k^-3/2θ(t) e^-ψ(t) k uniformly over all k≥ 1 and t∈ I = [-_0,+_0], where the functions θ and ψ are analytic on I with θ(t)>0, ψ(t)≥ 0, ψ()=ψ'()=0, and ψ”()>0. Note that the last condition implies in particular that ψ(±)=a^2+O(^3) where a>0. Throughout the paper, ψ(t) and θ(t) refer to the functions ψ and θ appearing in the result above. Note that Theorem <ref> and Corollary <ref>, which gives ρ_k(t) = (|_t|=k) for t ∈ [t_0,t_1], immediately imply Theorem <ref>.

Next, we state our results for the `perturbed' branching processes ^±_t=^±_t() defined in Section <ref>. Note that each is actually a family of branching processes, one for each t-nice parameter list .
In the following results the conditions ensure that n is at least some constant, which may be made large by choosing T large and _0 small. In other words, particular small values of n play no role. There exist _0,C,T>0 such that, writing I_n = {t ∈: T n^-1/3≤ |-t| ≤_0}, for any n≥ 1, any t ∈ I_n and any t-nice parameter list  the following holds for ^±_t=^±_t() as in Definition <ref>. The survival probabilities (|^±_t|=∞) are zero if t ≤, and if t > they are positive and satisfy |(|^±_t|=∞) - ρ(t)|≤ Cn^-1/3 , where the function ρ is as in Theorem <ref>. Moreover, an analogous statement holds for (|^1,±_t|=∞), where ^1,±_t=^1,±_t() is as in Definition <ref>.

Recall that is the period of the rule , defined in Section <ref>. As usual, K is simply the cut-off size of the bounded-size rule . There exist _0,C,T>0 such that, writing I_n = {t ∈: T n^-1/3≤ |-t| ≤_0}, for ^±_t=^±_t() as in Definition <ref> we have (|^±_t|=k) = (1+O(1/k)+O(n^-1/3)) k≡ 0 mod k^-3/2θ(t) e^-ξ() k uniformly over all n≥ 1, k>K, t∈ I_n and t-nice parameter lists , where the functions θ and ψ are as in Theorem <ref>, and |ξ()- ψ(t)| ≤ C n^-1/3|t-| .

We shall later apply these results with =(n) satisfying ^3n→∞, in which case (|^±_t+|=∞) ∼ρ(t+) = Θ() and (|^±_t-|=∞) = ρ(t-)=0. Furthermore, for t= ± and ^3n→∞ we also have ξ()∼ψ(t)=Θ(^2).

The indicator functions k∈ and k≡ 0 mod in Theorems <ref> and <ref>, and condition k>K in the latter, may seem somewhat mysterious, so let us comment briefly. Firstly, without the indicator function, for any fixed k, the conclusion (<ref>) holds trivially. Indeed, the function f_k(t) := k^-3/2θ(t) e^-ψ(t) k is positive at t= and is continuous, so reducing _0 if necessary, it is bounded and bounded away from zero. Since probabilities lie in [0,1], by simply taking the implicit constant in the O(1/k) term large enough, for a fixed k we can thus ensure that (<ref>) holds without the indicator function. A similar comment applies to Theorem <ref>. It might thus appear that neither result says anything for small (fixed) k, but this is not quite true. When the relevant indicator function is 0, the result asserts that the corresponding probability is 0. In the context of Theorem <ref>, for k ∉ and t ∈ [t_0,t_1] we know that (|_t|=k)=ρ_k(t)=0 by Corollary <ref> and Lemma <ref>. We could perhaps define the processes ^±_t so that their sizes (when finite) always lie in , but we have not done so. Hence the slightly different condition in Theorem <ref>. In any case, the interest is only in k large, and in this case, from Lemma <ref>, k∈ if and only if k is a multiple of .

In Theorems <ref>–<ref> we may take the same constants _0,C,T in all cases (by choosing the minimum and maximum, respectively). Furthermore, by increasing T, in Theorems <ref>–<ref> we may assume that ρ^±(t) ≥ρ(t)/2, ρ^±_1(t) ≥ρ_1(t)/2 and ξ() ≥ψ(t)/2 hold for t ∈ I_n. The proofs of the results above are deferred to Appendix <ref> and the companion paper <cit.>. They rely on various technical properties of _t and _t^± established in Section <ref> (and some basic properties from Sections <ref>–<ref>), but are otherwise independent of, and rather different from, the arguments in the present paper.
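For orientation, both phenomena — survival probability growing linearly above criticality, and point probabilities of the form k^-3/2θ e^-ψ k with ψ = Θ(^2) — already appear for the simplest single-type analogue, a Galton–Watson process with Poisson(1±) offspring. The Python sketch below checks this toy case numerically; the explicit constants a_1=2 and a=1/2 it exhibits are particular to the Poisson toy model, not to the two-type processes _t above.

```python
from math import exp, log, lgamma

def survival_probability(mean, tol=1e-12):
    """Survival probability of a Poisson(mean) Galton-Watson process:
    the positive root of rho = 1 - exp(-mean*rho), found by bisection."""
    if mean <= 1.0:
        return 0.0
    g = lambda r: 1.0 - exp(-mean * r) - r    # g > 0 below the root, < 0 above
    lo, hi = tol, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

for eps in (0.01, 0.02, 0.05, 0.10):
    print(f"eps={eps:5.2f}  rho/eps={survival_probability(1 + eps) / eps:.3f}")
# rho/eps -> 2: linear behaviour rho(mean = 1+eps) = 2*eps + O(eps^2)

def borel_log_pmf(k, mean):
    """log P(total progeny = k) for a Poisson(mean) Galton-Watson process
    (the Borel distribution): P(k) = (mean*k)^(k-1) e^(-mean*k) / k!."""
    return (k - 1) * log(mean * k) - mean * k - lgamma(k + 1)

mean = 0.9                                    # subcritical, eps = 0.1
psi = mean - 1 - log(mean)                    # exact decay rate; ~ eps^2/2
k1, k2 = 2_000, 4_000
slope = (borel_log_pmf(k1, mean) - borel_log_pmf(k2, mean)) / (k2 - k1)
print(f"psi={psi:.5f}  empirical decay rate={slope:.5f}")
```

The empirical decay rate agrees with ψ up to the k^-3/2 polynomial correction, matching the shape of the point probability estimates stated above.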
§.§ Moment estimates In this subsection we estimate various moments of the component size distribution of the Poissonized random graphs = (^±_t). Firstly, in Section <ref> we estimate the expected number of vertices in `large' components of , and show that the variance is small. Then, in Section <ref> we establish analogous statements for the expectation and variance of the modified susceptibility S_r,n() defined in (<ref>). Our proofs combine the domination arguments from Section <ref> with the branching process estimates from Section <ref>. To apply both, we often need to make additional assumptions on the component sizes k we study. In particular, due to the lower bound in the domination result Theorem <ref> we often restrict our attention to k ≤ n^2/3. Similarly, for t=± we often assume k ≤ n^1/3/ since this implies k ξ()= kψ(t) + O(1) in the branching process estimates of Theorem <ref>, see (<ref>). Furthermore, to take advantage of the fact that the tails decay exponentially in kψ(t) = Θ(^2k) for t=±, we typically also assume k ≥^-2. These constraints will not severely affect our later applications. For example, in Section <ref> we exploit that when ^3n →∞ we can choose suitable k = ω(^-2log(^3n)) with ^-2≪ k ≪min{n^2/3,n^1/3/}, i.e., which satisfies all the constraints (with room to spare). For later reference we note the following simple summation result, which will be convenient in a number of technical estimates (see Lemma <ref> for a further refinement). For all u ∈ with u ≠ -1 there exists C_u > 0 such that for all δ > 0 and j_0 ≥ 1 we have ∑_j ≥ j_0j^u e^-δ j≤ C_u (1 + δ^-(u+1)) e^-δ j_0/2. For all u ∈ with u >0 there exists D_u > 0 such that for all δ>0 and j_0 >0 we have ∑_j ≥ j_0j^-u e^-δ j≤ D_u δ^-1 j_0^-u e^-δ j_0 . Inequality (<ref>) is immediate for u < -1, taking C_u=∑_j≥ 1 j^u<∞. For u>-1 it suffices to show that the sum of the terms j^u e^-δ j/2 with j ≥δ^-1 is at most a constant times the sum of these terms with 1 ≤ j ≤δ^-1. This follows easily from the bounds ∫_0^δ^-1x^u = Θ(δ^-(u+1)), x^u e^-δ x/4 = O(δ^-u) and ∫_δ^-1^∞e^-δ x/4 = Θ(δ^-1). Similarly, inequality (<ref>) follows readily from j^-u≤ j_0^-u and ∫_z^∞e^-δ x = O(δ^-1e^-δ z). §.§.§ Number of vertices in large components Our goal is to estimate the expectation and variance of the number N_≥Λ of vertices in `large' components of . We start with the subcritical case i=( -) n. Since the expectation N_≥Λ drops exponentially with rate ψ(t) Λ =Θ(^2 Λ), where t=-, in (<ref>) below the leading constant is irrelevant for our purposes (with a little care, we can also obtain the precise asymptotics when ^-2≪Λ≪min{n^2/3,n^1/3/}). There exist constants _0,d,D,T > 0 such that the following holds for all t ∈ [t_0,t_1] with =-t ∈ [T n^-1/3,_0], and all t-nice parameter lists . Define ψ:[-_0,+_0] → [0,∞) as in Theorem <ref>, _t^± as in Definition <ref>, =(^±_t) as in Definition <ref>, and _t^±=_t^±() as in Definition <ref>. If max{^-2,K} < Λ≤min{n^2/3, n^1/3/}, then d ^-2Λ^-3/2e^-ψ(t)Λn - n^-ω(1)≤ N_≥Λ( ) ≤ D ^-2Λ^-3/2e^-ψ(t)Λn . Since t <, Theorem <ref> implies (|^±_t| = ∞ )=0. So, since Λ≤ n^2/3, Theorem <ref> gives (Λ≤ |^-_t| < ∞ ) |^-_t| - n^-ω(1)≤ N_≥Λ()≤(Λ≤ |^+_t| < ∞ ) |^+_t| . Since |^±_t| = Θ(n) by (<ref>), it remains to estimate (Λ≤ |^±_t| < ∞ ).
By Theorem <ref> and inequality (<ref>), we have (Λ≤ |^+_t| < ∞ ) = ∑_k ≥Λ(|^+_t| =k)= O(∑_k≥Λk^-3/2 e^-ξ(_t^+)k)= O(ξ(_t^+)^-1Λ^-3/2 e^-ξ(_t^+)Λ)= O(ψ(t)^-1Λ^-3/2 e^-ψ(t)Λ), using for the last step ξ(_t^+)≥ψ(t)/2 (see Remark <ref>) and Λ|ξ(_t^+)-ψ(t)|=O(1), which follows from (<ref>) and Λ≤ n^1/3/. Since ψ(t) = Θ(^2), this establishes the upper bound in (<ref>). For the lower bound, we pick Λ≤Λ' < Λ+ such that Λ'. Applying Theorem <ref> similarly to (<ref>), using Λ' ≥Λ > K, Λ'|ξ(_t^-)-ψ(t)|=O(1), ∫_y^ze^-a x = a^-1e^-a y(1-e^-a(z-y)) and Λ' ψ(t) =Θ(Λ^2)=Ω(1) it follows that (Λ≤ |^-_t| < ∞ )≥∑_Λ ' ≤ k ≤ 2Λ'(|^-_t| =k) = Ω(∑_Λ ' ≤ k ≤ 2Λ'k k^-3/2 e^-ξ(_t^-)k)= Ω(Λ^-3/2∑_Λ'/≤ j ≤ 2Λ'e^-ψ(t) j ) = Ω(ψ(t)^-1Λ^-3/2 e^-ψ(t)Λ'). This establishes the lower bound in (<ref>) since ψ(t) = Θ(^2) and |Λ'-Λ| = O(1). We now turn to the more interesting supercritical case i=( +) n (here our estimates are tailored for our goal of proving concentration in every step, see Section <ref>; otherwise simpler bounds would suffice). Recall from Theorem <ref> that (|_t|=∞)=Θ(). Assuming Λ = ω(^-2) and ^3n→∞, the right hand side of (<ref>) is o( n), so the result below implies N_≥Λ() ∼(|_t|=∞)n = Θ( n). Under the same assumptions we also have small variance, since then N_≥Λ() = o(( n)^2) by (<ref>). There exist constants _0,d_1,D,T > 0 such that the following holds for all t ∈ [t_0,t_1] with =t-∈ (T n^-1/3,_0], and all t-nice parameter lists . Define _t^± as in Definition <ref>, =(^±_t) as in Definition <ref>, and _t as in Section <ref>. If max{^-2,K} < Λ≤min{n^2/3, n^1/3/}, then | N_≥Λ()-(|_t|=∞)n|≤ D n(e^-d_1^2 Λ + (^3 n)^-1/3) ,N_≥Λ()≤ D ( n)^2(e^-d_1^2 Λ+ (^3n)^-1/3) . By Theorems <ref> and <ref> and Remark <ref> there is a constant d_1>0 such that (|^1,±_t()|=∞) ≥(|^1_t|=∞)/2 ≥ d_1 and ξ(_t^±) ≥ψ(t)/2 ≥ d_1 ^2. We first focus on N_≥Λ(). Analogous to the proof of Lemma <ref>, using Theorem <ref> we readily obtain (|^-_t| = ∞)|^-_t|-n^-ω(1)≤ N_≥Λ() ≤(|^+_t| ≥Λ)|^+_t| . Proceeding similarly to (<ref>), using inequality (<ref>) together with ξ(_t^+) ≥ d_1 ^2 and Λ^-3/2≤^3, it follows that (Λ≤ |^+_t| < ∞)= O(∑_k ≥Λk^-3/2 e^-ξ(_t^+) k) = O(^-2Λ^-3/2 e^-d_1^2 Λ)= O( e^-d_1^2 Λ) . Note that |^±_t| = n(1+o(n^-1/3)) by (<ref>). By Theorem <ref> we have |(|^±_t|=∞) - (|_t|=∞)|≤ Cn^-1/3; this and (<ref>)–(<ref>) imply (<ref>) for suitable D>0. We now turn to the variance of X:= N_≥Λ(). Here the second moment estimate of Lemma <ref>, which involves the two auxiliary parameters ν and τ, will be key. Analogous to (<ref>), using inequality (<ref>) together with ξ(_t^+) ≥ d_1 ^2, we obtain ν = ( |^+_t| |^+_t| < ∞) = ∑_k≥ 1 k (|^+_t|=k) = O(∑_k ≥ 1 k^-1/2 e^-ξ(_t^+) k) = O(^-1) . Using inequality (<ref>) for τ, noting (|^1,±_t()|=∞) ≥ d_1 and ^2 Λ≥ 1, we infer τ≤ (1+O(e^-d_1 ^2 Λ)) ·(|^+_t| ≥Λ) . We now estimate X^2 by bounding each term on the right hand side of (<ref>) from Lemma <ref>.
Using (|^+_t| ≥Λ)| = Θ() and ( n)^-1·^-2 = (^3 n)^-1 we readily see that ν n^-1 = O(( n)^-1) = O(( n)^-1) ·^-2(|^+_t| ≥Λ)^2 = O((^3n)^-1) ·(|^+_t| ≥Λ)^2 . Noting n^-1/3·^-1 = (^3 n)^-1/3, we similarly see that τν n^-1/3 + n^-1/3(|^+_t| = ∞) = O(n^-1/3) ·(|^+_t| ≥Λ) = O((^3 n)^-1/3) ·(|^+_t| ≥Λ)^2. Using (<ref>) for the first step and then (|^+_t| ≥Λ)|^+_t| = Θ( n), we also obtain (log n)^2· X ≤ n^2/3·(|^+_t| ≥Λ)|^+_t| = O((^3n)^-1/3) ·[(|^+_t| ≥Λ)|^+_t|]^2. Substituting the above estimates into (<ref>), using ^3 n ≥ 1 (which follows from the assumption ^-2 < Λ≤ n^1/3/) we obtain X^2≤[(1+O(e^-d_1 ^2 Λ + (^3n)^-1/3)) ·(|^+_t| ≥Λ) |^+_t| ]^2 . Recall that |(|^+_t|=∞) - (|_t|=∞)| = O(n^-1/3) and |^±_t| = n(1+o(n^-1/3)). Using (<ref>) and n^-1/3 = O((^3n)^-1/3) it follows that X^2≤[(1+O(e^-d_1 ^2 Λ + (^3n)^-1/3)) ·(|_t| =∞)n ]^2 . Estimating X =N_≥Λ() by (<ref>) above, using (|_t| =∞) = Θ() and X =X^2 - ( X)^2, inequality (<ref>) now follows for =(^+_t), increasing the constant D if necessary. It remains to bound the variance of X̃ := N_≥Λ(). Noting ^-_t ≼^+_t, using (<ref>) we infer X̃^2 ≤ X^2 and thus X̃≤ X^2-(X̃)^2. So, since (<ref>) yields the same qualitative estimates for X̃ and X, inequality (<ref>) for =(^-_t) follows analogously to our above estimates for X. §.§.§ Susceptibility We now turn to the susceptibility in the subcritical case i=(-)n. Our goal is to approximate the expectation and variance of the (modified rth order) susceptibility S_r,n()=∑_k ≥ 1k^r-1N_k()/n defined in (<ref>), exploiting that we have good control over N_k() ≈(|^±_t| = k). Similar to (<ref>) and (<ref>), in view of (<ref>) and ψ(-)=Θ(^2) we expect for r ≥ 2 that S_r,n() ≈∑_k ≥ 1k^r-1(|^±_t| = k) =|^±_t|^r-1≈∑_k ≥ 1Θ(k^r-5/2) e^-ψ(-) k≈Θ(^-2r+3) . We shall obtain a sharper estimate by comparing the above sum with an integral. To avoid clutter, in (<ref>) below we use the convention that x!!=∏_0 ≤ j < x/2(x-2j) is equal to 1 when x=-1. For all r ∈ with r ≥ 2 there exists C_r > 0 such that for all δ>0 we have |∑_j ≥ 1j^r-5/2 e^-δ j - (2r-5)!! √(2π)/(2δ)^r-3/2| ≤ C_r (1+δ^-(r-5/2)) . The basic idea is to compare the sum in (<ref>) with the integral f(r) := ∫_0^∞ x^r-5/2e^-δ x . Let g(x) = x^r-5/2e^-δ x. For r = 2 the function g(x) is monotone decreasing, and for r ≥ 3 there is x_δ = Θ(δ^-1) such that g(x) is increasing for x ≤ x_δ and decreasing for x ≥ x_δ. It follows that |∑_j ≥ 1j^r-5/2 e^-δ j - f(r)| ≤ O(1) + r ≥ 3 O(δ^-(r-5/2)) . It remains to evaluate the integral f(r); this is basic calculus. For r=2 the substitution y^2=δ x allows us to determine f(2) via the Gauss error function: f(2) = ∫_0^∞ x^-1/2e^-δ x = √(π/δ)·2/√(π)∫_0^∞ e^-y^2 = √(π/δ) . For r ≥ 3 we use integration by parts to infer f(r) = -x^r-5/2e^-δ x/δ|_0^∞ + (r-5/2)/δ·∫_0^∞ x^r-7/2 e^-δ x =(2r-5) f(r-1)/2δ . Solving the above recurrence for r ≥ 2 completes the proof. As a step towards making (<ref>) rigorous, we now estimate the `idealized' moments |_-|^r-1. For θ(t) and ψ(t) as defined in Theorem <ref>, let B_r := (2r-5)!! √(2π)θ()/ [ψ”()]^r-3/2 . Then B_r > 0 for r ≥ 2. Furthermore, there exists _0>0 such that, for all r ≥ 2 and ∈ (0,_0), |_-|^r-1 = (1+O())B_r^-2r+3. For brevity, let t=-. Theorem <ref> gives (|_t| = ∞)=0, so Theorem <ref> and Lemma <ref> imply |_t|^r-1 = ∑_k ≥ 1k^r-1(|_t| = k) = ∑_k ≥k(1+O(1/k))k^r-5/2θ(t) e^-ψ(t)k + O(1), where the implicit constants are independent of  and k.
Taking the divisibility constraint into account, it follows that |_t|^r-1 = ∑_j ≥ 1 ( j)^r-5/2θ(t) e^-ψ(t) j + O(∑_j ≥ 1 j^r-7/2 e^-ψ(t) j) + O(1). Estimating the first sum by (<ref>), and the second sum by (<ref>), it follows that |_t|^r-1 = ^r-5/2θ(t) ·(2r-5)!! √(2π)/[2 ψ(t)]^r-3/2 + O(ψ(t)^-(r-5/2)) + O(1). Recalling t=-, note that ψ(-) = ψ”()^2/2 + O(^3) and θ(-) = θ()+O(). Since ψ”(),θ()>0, it follows that θ(t) = (1+O()) θ(), [2 ψ(t)]^r-3/2 = (1+O())[^2 ψ”()]^r-3/2 and ψ(t)=Θ(^2). This completes the proof of (<ref>) since O((^2)^-(r-5/2)) + O(1) = O() ·^-2r+3. Note that, combined with Corollary <ref>, which gives s_r(t) =|_t|^r-1 for t ∈ [t_0,), Lemma <ref> implies Theorem <ref>. Mimicking the above calculations and using Lemma <ref>, we now approximate the expectation and variance of S_r,n(). Note that (<ref>) below yields S_r,n() ∼ B_r^-2r+3 whenever → 0 and ^3 n →∞. Furthermore, (<ref>) shows that we have small variance whenever ^3 n →∞. There exist positive constants c,T,_0 > 0 and (a_r,b_r)_r ≥ 2 such that the following holds for all t ∈ [t_0,t_1] with =-t ∈ [Tn^-1/3,_0], and all t-nice parameter lists . Define _t^± as in Definition <ref>, and =(^±_t) as in Definition <ref>. If r ≥ 2 and ^3 n ≥ c, then | S_r,n()-B_r^-2r+3|≤ a_r ( + (^3 n)^-1/3) ^-2r+3 ,S_r,n()≤ b_r (^3 n)^-1( S_r,n())^2 , where B_r > 0 is defined as in (<ref>). Let Λ = ^-2(log^3 n)^2, which satisfies 2max{^-2,K} < Λ≤ n^2/3 for ^3 n large enough. Aiming at (monotone) coupling arguments, note that S_r,n(G) = ∑_k ≥ 1 k^r-1 N_k(G)/n = ∑_k ≥ 1[k^r-1 - (k-1)^r-1] N_≥ k(G)/n . Recall that |^±_t|=(1+o(n^-1/3)) n by (<ref>). Estimating N_≥ k() via Theorem <ref>, and noting that (|^±_t| = ∞)=0 by Theorem <ref>, using Λ≤ n^2/3 and |^±_t|^r-1 = ∑_k ≥ 1 k^r-1(|^±_t| = k) it follows that S_r,n() =(1 +o(n^-1/3))|^±_t|^r-1+ O(∑_k ≥Λ k^r-1(|^-_t| = k)+ n^-ω(1)). Proceeding analogously to (<ref>)–(<ref>), replacing the (|_t|=k) estimate of Theorem <ref> with the (|^±_t|=k) estimate of Theorem <ref>, it follows that |^±_t|^r-1 = (1+O(n^-1/3)) ·^r-5/2θ(t) (2r-5)!! √(2π)/[2 ξ(^±_t)]^r-3/2 + O(ξ(^±_t)^-(r-5/2)) + O(1) . Since |ξ(^±_t)-ψ(t)| = O(n^-1/3) and ψ(t) = Θ(^2), see Theorem <ref> and Remark <ref>, we have ξ(^±_t) = (1+O((^3n)^-1/3)) ·ψ(t). So, by proceeding analogously to the deduction of (<ref>) from (<ref>), using = Ω(n^-1/3) it follows that |^±_t|^r-1 = (1+O() + O((^3n)^-1/3) ) · B_r^-2r+3 . Estimating (|^-_t| = k) via Theorem <ref> and recalling that ξ(^-_t) ≥ψ(t)/2 = Θ(^2) by Remark <ref>, using ξ(^-_t)Λ≥ 2log(^3 n) (for ^3 n large enough) and inequality (<ref>), it follows that ∑_k ≥Λ k^r-1(|^-_t| = k) = O(∑_k ≥Λ k^r-5/2 e^-ξ(^-_t) k) = O(^-2r+3) · e^-ξ(^-_t) Λ/2= O((^3 n)^-1) ·^-2r+3. Together with (<ref>) and (<ref>) this completes the proof of (<ref>). A much simpler variant of the above calculations (using S_r,n() ≥ S_r,n()= Ω(∑_1 ≤ k ≤Λk^r-1(|^-_t| = k)) and ∑_1 ≤ k ≤^-2 k^r-5/2 = Ω(^-2r+3), say) yields the crude lower bound S_r,n() = Ω(^-2r+3). (This also follows directly from (<ref>) if we allow ourselves to impose an upper bound on that depends on r.) This lower bound, and the upper bound in (<ref>) (applied with 2r in place of r) imply that S_2r,n() ≤ b_r ^-3( S_r,n())^2.
By Lemma <ref> we have S_r,n() ≤ n^-1 S_2r,n(), so (<ref>) follows. § PROOFS OF THE MAIN RESULTS In this section we prove our main results for the size of the largest component, the number of vertices in small components, and the susceptibility. As discussed in the proof outline of Section <ref>, we shall establish these by adapting Erdős–Rényi proof strategies to the Achlioptas process setting, exploiting the setup and technical work of Sections <ref>–<ref> (which are, of course, the meat of the proof). One non-standard detail is that we study G_i=G_n,i^ via the auxiliary random graphs J_i and _i (see Lemmas <ref>, <ref> and <ref>), using the following two key facts: (i) that J_i=J(_i) has the same component size distribution as G_i conditioned on the parameter list _i defined in (<ref>), and (ii) that we can whp sandwich J_i between two `Poissonized' random graphs _i, i.e., _i ⊆ J_i ⊆_i with _i=(^±_t), where t=i/n and ^±_t is as in Definition <ref>. For technical reasons our arguments require that the parameter list _i is t-nice in the sense of Definition <ref>, which by Lemma <ref> fails with probability at most () = O(n^-99). In Section <ref> we focus on the number of vertices in small components, and prove Theorem <ref>. In Section <ref> we turn to the size of the largest component, and prove Theorems <ref> and <ref>. Finally, in Section <ref> we consider the susceptibility, and prove Theorem <ref>. Note that Theorems <ref>, <ref> and <ref> have already been proved in Sections <ref>–<ref>; indeed, as noted there, in the light of Corollaries <ref>–<ref>, these results are immediate from Theorems <ref>–<ref> and Lemma <ref>, respectively. §.§ Small components In this subsection we prove Theorem <ref>, i.e., estimate the number N_k(i) of vertices in components of size k after i steps. Our arguments use the following three ideas: (i) that the random variable N_k(i) is typically close to its expected value, (ii) that we can approximate N_k(i) using the `idealized' branching process _t, t=i/n, and (iii) that we have detailed results for the point probabilities of _t. We start with a conditional concentration result. The key observation is that, for any graph, adding or deleting an edge changes the number of vertices in components of size k (at least k) by at most 2k. Let t∈ [t_0,t_1], let  be a t-nice parameter list, and define J=J() as in Definition <ref>. Then, with probability at least 1-n^-ω(1), we have |N_k(J)- N_k(J)| ≤ k (log n)n^1/2 and |N_≥ k(J) -N_≥ k(J)| ≤ k (log n)n^1/2 for all 1 ≤ k ≤ n. By definition of J=J() and (<ref>), the probability space Ω=Ω() on which the random graph J() is defined consists of M:=∑_k,r ≥ 0 rQ_k,r()≤∑_k,r ≥ 0 r Be^-b(k+r)n =O(n) independent random variables, each corresponding to the uniform choice of a random vertex from V_L.
Furthermore, changing the outcome of one variable can be understood as (i) first removing one edge and (ii) then adding one edge. From the observation before the lemma, it follows that |N_k(J)(ω_1)-N_k(J)(ω_2)| ≤ 4k whenever ω_1,ω_2 ∈Ω differ in the outcome of a single random variable. An analogous remark applies to N_≥ k(J). By McDiarmid's bounded-differences inequality <cit.> we thus have |N_k(J)- N_k(J)| ≥ k (log n)n^1/2≤ 2 exp(-2(k(log n)n^1/2)^2/M (4k)^2) = n^-ω(1) . An analogous bound holds for N_≥ k(J), and a union bound over all 1 ≤ k ≤ n completes the proof. Set D_ := D_+2, where D_>0 is as in Theorem <ref>, and define _t as in Section <ref>. Then, with probability at least 1-O(n^-99), the following hold for all i_0≤ i≤ i_1 and all k ≥ 1: |N_k(i)-(|_i/n|=k) n| ≤ k (log n)^D_n^1/2, |N_≥ k(i)-(|_i/n| ≥ k) n| ≤ k (log n)^D_n^1/2. We have not tried to optimize the k(log n)^D_ n^1/2 error term in (<ref>)–(<ref>), which suffices for our purposes. For k>n the statement is trivial since 0 ≤ N_k(i),N_≥ k(i) ≤ n. For i_0≤ i≤ i_1, let _i be the event that (<ref>)–(<ref>) hold for all 1 ≤ k ≤ n, so our goal is to estimate the probability that = ⋂_i_0 ≤ i ≤ i_1_i fails. Given a parameter list , let _i() be the event that (<ref>)–(<ref>) hold for all 1 ≤ k ≤ n, with N_k(i) and N_≥ k(i) replaced by N_k(J()) and N_≥ k(J()). Let _i be the random parameter list defined in (<ref>), and let _i be the event that _i is i/n-nice. By Lemma <ref> we have (_i and _i)≤max_i/n-nice (_i()) . Since D_ > max{D_,1}, Lemma <ref> and Theorem <ref> give (_i()) ≤ n^-ω(1) when is i/n-nice. This, Lemma <ref>, and a union bound over the O(n) values of i complete the proof. By combining the concentration result Theorem <ref> with the branching process results from Section <ref> (see Theorems <ref>–<ref>) and sprinkling (see Section <ref>), we can easily prove that whp L_1( n+ n) ∼(|_+|=∞)n for n^-1/6+o(1)≤≤_0, say. In Section <ref> we shall use a more involved second moment argument to relax this assumption to the optimal condition n^-1/3≪≤_0 (exploiting the sandwiching and domination arguments of Sections <ref> and <ref>). We now combine the concentration result above with the branching process results from Section <ref>. Let β:=1/10 and τ := n^-β/3. By Theorem <ref> and Corollary <ref>, we have ρ_k(t) =(|_t|=k) = (1+O(1/k)) k ∈ k^-3/2θ(t) e^-ψ(t) k uniformly over all k≥ 1 and t∈ [-_0,+_0], where the functions θ and ψ are analytic. As discussed in Section <ref>, if k ∉, then N_k(i)=0 holds with probability one for all i ≥ 0. Due to the indicator in the above bound on ρ_k(t), it follows that (<ref>) holds trivially for k ∉. We now focus on the main case k ∈. Aiming at comparing the additive errors in Theorem <ref> with the above bound on ρ_k(t), note that by Theorem <ref> we have a=ψ”()>0, ψ()=ψ'()=0 and b=θ()>0. Hence, after decreasing _0 if necessary, we crudely have ψ(t) ≤ a^2 and θ(t) ≥ b/2>0 for t∈ [-_0,+_0]. When 1 ≤ k ≤ n^β and a^2k≤βlog n, we thus have k n^1/2/k^-5/2e^-ψ(t)kn≤ k^7/2e^a^2 kn^-1/2≤ n^9β/2-1/2 = o(n^-β/3-1/1000) , say. Let K_0 be a constant such that for all k≥ K_0 the O(1/k) error term in (<ref>) is at most 0.1 in magnitude. Since θ(t) ≥ b/2>0, it follows that k (log n)^D_n^1/2 = o(τ/k) ·ρ_k(t)n for k ∈∖ [K_0] and t∈ [-_0,+_0]. Using Theorem <ref>, this establishes (<ref>) for k ∈∖ [K_0].
We now turn to the remaining case k ∈ := ∩ [K_0] of (<ref>). By Lemma <ref>, for each k∈ we have ρ_k()>0. Since each ρ_k(t) is continuous, after decreasing _0 if necessary, there is some constant c>0 such that ρ_k(t)≥ c for all k∈ and t∈[-_0,+_0]. Hence k (log n)^D_n^1/2 = o(τ/k) ·ρ_k(t)n for k ∈ and t∈ [-_0,+_0], which by Theorem <ref> completes the proof of (<ref>). Finally, we omit the similar (but simpler) proof of (<ref>). §.§ Size of the largest component In this subsection we prove Theorems <ref> and <ref>, i.e., estimate the size L_1(i) of the largest component after i= n ± n steps. Our arguments use the following three ideas: (i) that we can typically sandwich G_i between two Poissonized random graphs _i from Section <ref>, (ii) that we have L_1(_i) ≤ L_1(G_i) ≤ L_1(_i) by sandwiching and monotonicity, and (iii) that we can estimate the typical size of L_1(_i) by first and second moment arguments combined with `sprinkling', exploiting that the component size distribution has an exponential cutoff after size ^-2. §.§.§ The subcritical case (Theorems <ref> and <ref>) In this subsection we estimate the size of the r-th largest component in the subcritical case i= n- n (for constant r). Before giving the technical details, let us first sketch the high-level proof structure of Theorem <ref>, ignoring the difference between G_i and _i for simplicity. Note that for any r ≥ 1 we have L_r(i) ∉ (Λ^-,Λ^+) ≤L_1(i) ≥Λ^+ + L_r(i) ≤Λ^- and L_1(i) < Λ^+≤N_≥Λ^+(i) ≥Λ^+ + N_≥Λ^-(i) ≤ rΛ^+ . With the exponential decay of Lemma <ref> in mind, the basic idea is now to pick Λ^- ≈Λ^+ such that, roughly speaking, N_≥Λ^-(i) ≫Λ^+ ≫ N_≥Λ^+(i). By Markov's inequality this will give N_≥Λ^+(i) ≥Λ^+≤ N_≥Λ^+(i)/Λ^+ = o(1) . Furthermore, X=N_≥Λ^-(i)-N_≥Λ^+(i) will satisfy X ≈ N_≥Λ^-(i) ≫Λ^+ ≫ N_≥Λ^+(i). From Chebychev's inequality and the variance estimate of Lemma <ref>, we will then obtain N_≥Λ^-(i) ≤ rΛ^+≤X ≤ r Λ^+≤X ≤ X/2≤X · O(Λ^+)/( X)^2 = O(Λ^+)/ N_≥Λ^-(i) = o(1) . The proofs below make the outlined argument precise. For x=x(n) satisfying 1 ≤ x ≤logloglog(^3 n), say, set Λ^± := ψ(-)^-1(log (^3 n) - (5/2)loglog(^3 n) ± x) . Since ψ(-)=Θ(^2) and ^3n →∞, routine calculations yield ^-2≪Λ^-∼Λ^+ ≪min{n^2/3,n^1/3/} (recall that a_n ≪ b_n means a_n = o(b_n), cf. Remark <ref>). Furthermore, by the choice of Λ^± we have e^-ψ(-)Λ^± = Θ((log(^3n))^5/2e^∓ x/^3n)= Θ( ^2n^-1(Λ^±)^5/2 e^∓ x). Similar to the argument for (<ref>), using Lemma <ref> and writing t=i/n we obtain L_r(i) ∉ (Λ^-,Λ^+) and _i≤max_t-nice L_r(J()) ∉ (Λ^-,Λ^+) . Combining the sandwiching of Lemma <ref> with the idea of (<ref>), and writing =(^±_t) for brevity, using monotonicity we arrive at L_r(i) ∉ (Λ^-,Λ^+) and _i≤ n^-ω(1) + max_t-nice [N_≥Λ^+() ≥Λ^+ + N_≥Λ^-() ≤ rΛ^+]. By Lemma <ref> and the fact that Λ^-∼Λ^+, for small enough we have N_≥Λ^±() = Θ(1) ·e^-ψ(-)Λ^±n/^2(Λ^+)^5/2·Λ^+ ± n^-ω(1) = Θ(e^∓ xΛ^+) . By Markov's inequality, it follows that N_≥Λ^+() ≥Λ^+≤ N_≥Λ^+()/Λ^+=O(e^-x). Moreover, from (<ref>), for x sufficiently large (depending on the constant r) we have N_≥Λ^+() ≤Λ^+ and N_≥Λ^-() ≥ 4rΛ^+. Let X=N_≥Λ^-() - N_≥Λ^+(). Then X ≥ N_≥Λ^-()/2 ≥ 2rΛ^+. By Lemma <ref> we have X≤ XN_≥Λ^+() + Λ^+ ≤ 2 Λ^+X. Proceeding analogously to (<ref>), using Chebychev's inequality, the variance bound above, and (<ref>), it follows that N_≥Λ^-() ≤ rΛ^+≤X ≤ X/2≤O(Λ^+)/ N_≥Λ^-() =O(e^-x). The result follows from (<ref>), the bounds (<ref>) and (<ref>), and Lemma <ref>.
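To see concretely how the window choice (<ref>) of Λ^± balances the two moment bounds in the proof just given, here is a small numerical illustration. It is purely arithmetic: we take ψ(-)=^2, drop all Θ(1) constants, and choose n large enough that Λ^- stays positive, so only the qualitative e^∓ x behaviour of the two ratios is meaningful.

```python
import math

def Lam(eps, n, x, sign):
    # Lambda^{+-}, with psi(t_c - eps) = eps^2 assumed for illustration
    s = math.log(eps ** 3 * n) - 2.5 * math.log(math.log(eps ** 3 * n))
    return (s + sign * x) / eps ** 2

def ratio(eps, n, x, sign):
    # (expected number of vertices in components of size >= Lam) / Lam^+,
    # using the subcritical estimate E N_{>=Lam} ~ eps^-2 Lam^-3/2 e^{-psi Lam} n
    lam = Lam(eps, n, x, sign)
    return n * eps ** -2 * lam ** -1.5 * math.exp(-eps ** 2 * lam) / Lam(eps, n, x, +1)

n = 10 ** 15
for eps in (1e-3, 2e-3):
    for x in (1, 3, 5):
        print(f"eps={eps:g} x={x}: +:{ratio(eps, n, x, +1):.3g}  -:{ratio(eps, n, x, -1):.3g}")
```

The `+` column decays roughly like e^-x (so Markov kills L_1 ≥Λ^+), while the `-` column grows roughly like e^x (so Chebychev forces L_r ≥Λ^-), matching the two failure probabilities O(e^-x) in the proof.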
Next we estimate the sizes of the r largest components in every subcritical step by a similar (but more involved) argument. For Theorem <ref> the idea is to consider the graphs G_m_j at a decreasing sequence of intermediate steps m_j=(-_j)n, where _1^3n = ω; we index from j=1 since _0 plays a different role – as usual it is a (small) constant upper bound on values of =|i/n-| that we consider. We shall show that typically L_1(m_j) ≤Λ^+_j and N_≥Λ^-_j+1(m_j+1) ≥ r Λ^+_j for all j ≥ 1 with _j ≤_0. For steps m_j+1≤ i ≤ m_j we then argue similarly to (<ref>): by monotonicity we have L_1(i) ≤ L_1(m_j) ≤Λ^+_j, which together with N_≥Λ^-_j+1(i) ≥ N_≥Λ^-_j+1(m_j+1) ≥ r Λ^+_j then in turn implies L_r(i) ≥Λ^-_j+1. The next proof implements this strategy, using parameters _j ≈_j+1 and Λ^+_j ≈Λ^-_j+1 that make the corresponding error probabilities summable. For concreteness, let ξ= ξ(n) := (logω)^-2/3, so that ξ→ 0 as n →∞. To ensure _j ≤_0, we define j_0=j_0(n,ω,ξ,_0) as the smallest j ∈ such that ω^1/3n^-1/3(1+ξ)^j-1≥_0. For all j ≥ 1 we set _j := ω^1/3n^-1/3(1+ξ)^j-1 ,  if j < j_0,_0,  if j ≥ j_0, Λ^±_j := (1 ±ξ) ψ(-_j)^-1log (_j^3 n) , m_j :=(-_j)n . Since _j^3n≥_1^3n=ω→∞, as in the proof of Theorem <ref> we have _j^-2≪Λ^±_j ≪min{n^2/3,n^1/3/_j}. Moreover, since ψ(-)=Θ(^2), by choice of Λ_j^± we have _j^-2(Λ_j^±)^-5/2e^-ψ(-_j)Λ_j^± n = Θ_j^-2(_j^-2log(_j^3n))^-5/2 (_j^3n)^-(1±ξ)n = Θ (log(_j^3n))^-5/2 (_j^3n)^∓ξ. Since ψ()=ψ'()=0 and ψ”()>0, the Mean Value Theorem implies that for all ,' ∈ [_j, _j+1] with j ≥ 1 we have |ψ(-)-ψ(-')| ≤ |_j+1-_j| · O() = O(ξ) · O()= O(ξ) ·ψ(-) , where the implicit constant does not depend on j. It follows that there exists a universal constant d>0 such that for all _j≤≤_j+1 with j ≥ 1 we have Λ^+_j ≤ (1 +d ξ) ·ψ(-)^-1log (^3 n) , Λ^-_j+1 ≥ (1 -d ξ) ·ψ(-)^-1log (^3 n) . In view of these bounds and the proof strategy outlined above, it thus suffices to prove that whp the following event  holds: L_1(m_j) ≤Λ^+_j and N_≥Λ^-_j+1(m_j+1) ≥ r Λ^+_j for all 1 ≤ j < j_0. Indeed, arguing as for (<ref>)–(<ref>) above, if holds then Λ_j+1^-≤ L_r(i)≤ L_1(i)≤Λ_j^+ for all m_j+1≤ i≤ m_j with 1 ≤ j < j_0, which, in view of (<ref>)–(<ref>) implies (<ref>) with τ=(logω)^-1/2≫ dξ, say. As we shall see, the proof of ()=o(1) is similar to the proof of Theorem <ref>, but here we have more elbow room. By Lemma <ref> and the estimate (<ref>), recalling that _j^3n≥_1^3n=ω→∞ and ω^ξ/3→∞, there is a constant D>0 such that if n is large enough, then for all 1≤ j≤ j_0 and all (m_j/n)-nice parameter lists , setting _m_j=(^±_m_j/n) we have N_≥Λ^+_j(_m_j)/Λ^+_j≤D/(log(_j^3n))^5/2 (_j^3n)^ξ≤1/(_j^3n)^ξ/3→ 0 and N_≥Λ^-_j(_m_j)/Λ^-_j≥(_j^3n)^ξ/D (log(_j^3n))^5/2≥ (_j^3n)^ξ/3→∞. Recalling that L_1(_m_j) ≥Λ^+_j implies N_≥Λ^+_j(_m_j) ≥Λ^+_j, by (<ref>) and Markov's inequality we have max_m_j/n-nice (L_1(_m_j) ≥Λ^+_j)≤1/(_j^3n)^ξ/3. Arguing as for (<ref>), given a parameter list which is (m_j/n)-nice, let _m_j=(^-_m_j/n), and let X_j := N_≥Λ_j^-(_m_j) - N_≥Λ_j^+(_m_j). Then by Lemma <ref> and the crude final estimate in (<ref>), for n large enough we have X_j≤ X_jN_≥Λ_j^+(_m_j) + Λ_j^+ ≤ 2 Λ_j^+X_j. Since Λ_j-1^+∼Λ_j^+∼Λ_j^-, the estimate (<ref>) easily implies X_j ≥ N_≥Λ^-_j(_m_j)/2 ≥ 2rΛ_j-1^+ (for n large). Hence by Chebychev's inequality we have (N_≥Λ^-_j(_m_j) ≤ rΛ^+_j-1) ≤X_j≤ X_j/2≤4 X_j/( X_j)^2≤8Λ_j^+/ X_j≤16Λ_j^+/ N_≥Λ_j^-(_m_j)≤17/(_j^3n)^ξ/3, say, using (<ref>) in the last step.
Arguing as for (<ref>) (using sandwiching and the idea of (<ref>)), writing =⋂_i_0≤ i≤ i_1_i for the event that every _i is (i/n)-nice, from (<ref>) and (<ref>) we conclude that (∩) ≤∑_1 ≤ j ≤ j_0[n^-ω(1) + 18/(_j^3n)^ξ/3]≤∑_1 ≤ j ≤ j_0[n^-ω(1) + 18/ω^ξ/3 (1+ξ)^(j-1)ξ], recalling the definition of _j in the last step. The main term is a geometric progression with ratio (1+ξ)^-ξ=1-Θ(ξ^2), so the sum is O(ω^-ξ/3ξ^-2)=o(1) by choice of ξ. This completes the proof since ()=o(1) by Lemma <ref>. Note that in the above proof we can allow for r →∞ at some slow rate (e.g., r=ω^ξ/4 works readily). §.§.§ The supercritical case (Theorem <ref>) In this subsection we estimate the size of the largest component in the supercritical phase. We first outline the proof structure of Theorem <ref> for step i= n +n, ignoring several technicalities. Given ξ=o(1), let i^*= n + (1-ξ) n. From the variance estimate of Lemma <ref> and the assumption ^3n→∞, we can eventually pick ξ =o(1) and (ξ)^-2≪Λ≪min{n^2/3, n^1/3/} such that for m ∈{i,i^*} Chebychev's inequality yields |N_≥Λ(m)- N_≥Λ(m)| ≥ξ n≤ N_≥Λ(m)/(ξ n)^2≤O(e^-d_1 ^2Λ + (^3n)^-1/3)/ξ^2 = o(1) . Lemma <ref> and continuity of (|_t|=∞) thus suggest that for m ∈{i,i^*} whp we have N_≥Λ(m) ≈ N_≥Λ(m) ≈(|_m/n|=∞) n ≈(|_+|=∞) n = Θ( n). Now the upper bound for L_1(i) is immediate by the standard observation that L_1(G)+L_2(G) ≤ N_≥Λ(G)+2Λ for any graph G and any Λ≥ 1. Indeed, using Λ = o( n) we should whp have L_1(i) ≤ L_1(i) + L_2(i) ≤ N_≥Λ(i) + 2 Λ≈(|_+|=∞) n . For the lower bound we use `sprinkling', exploiting that N_≥Λ(i^*) ≥ x= Θ( n) by (<ref>). Applying Lemma <ref>, using Δ_Λ,x,ξ = O(n^2/(ξΛ x)) = o(ξ n), x/Λ = Θ( n/Λ) = ω(1) and i^*+Δ_ξ,Λ,x≤ i we expect that whp L_1(i) ≥ L_1(i^*+Δ_ξ,Λ,x) ≥ (1-ξ)N_≥Λ(i^*) ≈(|_+|=∞) n , which together with (<ref>) also suggests L_2(i) = o(L_1(i)). Similar to the subcritical proof, for concentration in every supercritical step, we shall use (a rigorous version of) the above line of reasoning for a carefully chosen increasing sequence of intermediate steps m_j=(+_j)n, relating N_≥Λ(m_j-1) and L_1(m_j) via sprinkling. For concreteness and brevity, let ξ = ξ(n) := (logω)^-1 , φ(x) :=(|_+x|=∞) . Define j_0=j_0(n,ω,ξ,_0) as the smallest j ∈ such that ω^1/6n^-1/3(1+ξ)^j-1≥_0. For all j ≥ 1 we set _j := ω^1/6n^-1/3(1+ξ)^j-1 ,  if j < j_0,_0,  if j ≥ j_0, Λ_j := _j^-2(log(_j^3n))^3, m_j :=(+_j)n . Since _j^3n ≥_1^3n =ω^1/2→∞, routine calculations yield _j^-2≪Λ_j ≪min{n^2/3,n^1/3/_j}. By Theorem <ref> there is a constant c>0 such that we have φ() ≥ c for all 0 ≤≤_0. Furthermore, φ' is bounded on [0,_0], so for all _j-1≤≤_j+1 with j ≥ 2 the Mean Value Theorem yields |φ(_j± 1)-φ()| ≤ |_j+1-_j-1| · O(1) = O(ξ) = O(ξ) ·φ() , using (<ref>) in the last step. Here the implicit constant does not depend on j. It follows that there exists a universal constant d>0 such that for all _j-1≤≤_j+1 with j ≥ 2 we have (1-d ξ) ·φ() ≤φ(_j±1) ≤ (1 +d ξ) ·φ() . Note for later that, since _j^3n≥_1^3n=ω^1/2, for all 1≤ j≤ j_0 we have Λ_j/φ(_j) n = Θ( Λ_j/_j n) = Θ( (log(_j^3n))^3/_j^3n)= O( (log(ω^1/2))^3/ω^1/2) ≤ω^-1/4, if n is large enough. Let  be the event that (1-ξ) ·φ(_j)n ≤ N_≥Λ_j(m_j) ≤ (1+ξ) ·φ(_j)n for all 1 ≤ j ≤ j_0. To later use `sprinkling', for c as in (<ref>), we define x_j := c _j n/2 . Recalling Lemma <ref>, we now define _j := _m_j,Λ_j,x_j,ξ and := ⋂_1 ≤ j ≤ j_0_j. If ∩ holds, then so do (<ref>)–(<ref>). To establish the claim, suppose that and hold, and let i be such that =i/n- satisfies ^3n≥ω and ≤_0.
Then from the definition of j_0 and the fact that _2^3n=(1+ξ)^3ω^1/2≤ω, there is some 2≤ j< j_0 such that m_j≤ i≤ m_j+1. From (<ref>) we have Λ_j+1≤ω^-1/4φ(_j+1)n = o(ξ) ·φ(_j+1)n, say. So, using (<ref>), monotonicity, (<ref>) and (<ref>), we have L_1(i)≤ L_1(i) + L_2(i)≤ N_≥Λ_j+1(i) + 2Λ_j+1≤ N_≥Λ_j+1(m_j+1) + 2Λ_j+1≤ (1+2ξ) ·φ(_j+1)n ≤(1+(d+3)ξ) ·φ()n. From the lower bound in (<ref>) and (<ref>), we have N_≥Λ_j-1(m_j-1) ≥ c _j-1 n/2 = x_j-1. Since _j-1^3 n ≥ω^1/2→∞, using (<ref>) it is not difficult to check that the parameter Δ_Λ,x,ξ=Θ(n^2/(ξΛ x)) appearing in Lemma <ref> satisfies Δ_Λ_j-1,x_j-1,ξ= Θ(n^2)/ξΛ_j-1 x_j-1 = O(_j-1 n)/ξ (logω)^3 = o(ξ_j-1 n) < (_j-_j-1) n = m_j - m_j-1 . Since the `sprinkling event' _j-1=_m_j-1,Λ_j-1,x_j-1,ξ holds, from (<ref>) and (<ref>) we thus deduce that L_1(i) ≥ L_1(m_j)≥L_1(m_j-1+Δ_Λ_j-1,x_j-1,ξ) ≥ (1-ξ) · N_≥Λ_j-1(m_j-1)≥ (1-2ξ) ·φ(_j-1)n ≥(1-(d+3)ξ) ·φ()n. Combining (<ref>) and (<ref>), we have L_2(i) ≤ 2(d+3)ξ·φ()n ≤ 4 (d+3)ξ· L_1(i) . Together, (<ref>)–(<ref>) readily establish (<ref>)–(<ref>) with τ=(logω)^-1/2≫ 4 (d+3)ξ, say, completing the proof of the claim. Having proved the claim, it remains only to show that ()=o(1) and ()=o(1). As in previous subsections, by a simple application of our conditioning and sandwiching results (Lemmas <ref> and <ref>), writing _m_j=(^±_m_j/n) it follows that (∩) ≤∑_1 ≤ j ≤ j_0[n^-ω(1) + 2 max_m_j/n-nicemax__m_j∈{_m_j,_m_j}(|N_≥Λ_j(_m_j)-φ(_j)n| > ξφ(_j)n) ] . To avoid clutter, we henceforth tacitly assume that  is m_j/n-nice. Since _j^2 Λ_j = (log(^3_j n))^3 and _j^3 n ≥ω^1/2→∞, it is not difficult to check that e^-d_1_j^2 Λ_j + (_j^3 n)^-1/3≤2/(_j^3 n)^1/3 = o(ξ) , where the constant d_1>0 is as in Lemma <ref>. Recalling φ(_j)=(|_+_j|=∞) ≥ c _j, see (<ref>) and (<ref>), using (<ref>) and Lemma <ref> we infer that, say, (|N_≥Λ_j(_m_j)-φ(_j)n| > ξφ(_j)n) ≤(|N_≥Λ_j(_m_j)- N_≥Λ_j(_m_j)| ≥ cξ_j n/2) . Similar to (<ref>), using Chebychev's inequality, (<ref>), and the variance estimate of Lemma <ref> it now follows that there is a constant C>0 such that (|N_≥Λ_j(_m_j)-φ(_j)n| ≥ξφ(_j)n) ≤ N_≥Λ_j(_m_j)/(cξ_j n/2)^2≤C/ξ^2(_j^3n)^1/3. Substituting (<ref>) and _j^3n = ω^1/2(1+ξ)^3(j-1) into (<ref>), using ∑_ℓ≥ 0 (1+ξ)^-ℓ=O(ξ^-1) we obtain (∩) ≤ n^-ω(1) + ∑_j ≥ 12C/ξ^2ω^1/6(1+ξ)^j-1≤ n^-ω(1) + O(1)/ω^1/6ξ^3 = o(1) . Since ()=o(1) by Lemma <ref>, this establishes ()=o(1). It remains to prove ()=o(1). Since _j^3 n ≥ω^1/2→∞, using (<ref>) and (<ref>) it is routine to see that, say, x_j/Λ_j = Θ(^3_j n)/(log(_j^3n))^3≥(log(_j^3n))^2 . Using Lemma <ref> and _j^3 n = ω^1/2(1+ξ)^3j, for some constant η>0 we thus obtain () ≤∑_1 ≤ j ≤ j_0(_m_j,Λ_j,x_j,ξ) ≤∑_1 ≤ j ≤ j_0[ exp(-η x_j/Λ_j) + n^-ω(1)] ≤∑_1 ≤ j ≤ j_01/_j^3 n + n^-ω(1) = o(1), arguing as for (<ref>) in the final step. This was all that remained to complete the proof of Theorem <ref>. §.§ Susceptibility In this subsection we prove Theorem <ref>, i.e., estimate the (rth order) susceptibility S_r(i)=S_r,n(G_i) in the subcritical case i= n- n (see (<ref>) for the definition of the modified parameter S_r,n). Similar to Section <ref>, our arguments use the following two ideas: (i) that we typically have S_r,n(_i) ≤ S_r,n(G_i) ≤ S_r,n(_i) by sandwiching and monotonicity, and (ii) that we can estimate the typical value of S_r,n(_i) by a second moment argument. Ignoring the difference between G_i and _i (and some other technical details), for Theorem <ref> the basic line of reasoning is as follows.
Using the variance estimate of Lemma <ref> and the assumption ^3n →∞, the idea is that we can eventually pick ξ = o(1) such that Chebychev's inequality intuitively gives |S_r,n(i)- S_r,n(i)| ≥ξ S_r,n(i)≤ S_r,n(i)/(ξ S_r,n(i))^2≤O(1)/ξ^2 ^3 n = o(1) . To prove bounds in every subcritical step, analogous to Section <ref> we consider a decreasing sequence of intermediate steps m_j=(-_j) n. Using (a rigorous version of) the above reasoning we show that typically ^-_r,j≤ S_r,n(m_j) ≤^+_r,j for suitable ^±_r,j, which by monotonicity (see Remark <ref>) then translates into bounds for every step. Fix r≥ 2. For concreteness, let ξ = ξ(n) := ω^-1/4 , so that ξ→ 0 as n →∞. To ensure _j ≤_0, we define j_0=j_0(n,ω,ξ,_0) as the smallest j ∈ such that ω^1/3n^-1/3(1+ξ)^j-1≥_0. For all j ≥ 1 we set _j := ω^1/3n^-1/3(1+ξ)^j-1 ,  if j < j_0,_0,  if j ≥ j_0, m_j :=(-_j)n , ^±_r,j:= (1 ± (_j^3n)^-1/4)(B_r ± a_r(_j + (_j^3n)^-1/3))_j^-2r+3 , where B_r>0 is the constant defined in (<ref>) and a_r>0 is the constant in Lemma <ref>. It is routine to see that there is a constant A_r>0 such that for all 1≤ j≤ j_0 and all _j ≤≤_j+1 we have max{|^+_r,j-B_r ^-2r+3|,|^-_r,j+1-B_r ^-2r+3|}≤ A_r ( + (^3 n)^-1/4) B_r ^-2r+3 . Let  be the event that ^-_r,j≤ S_r,n(m_j) ≤^+_r,j for all 1 ≤ j ≤ j_0. Since S_r(i)=S_r,n(G_i) is monotone (see Remark <ref>), the event implies that for all steps m_j+1≤ i ≤ m_j with 1 ≤ j < j_0 we have ^-_r,j+1≤ S_r(i) ≤^+_r,j. Recalling the definition of the steps m_j, in view of (<ref>) it is thus immediate that  implies (<ref>). Since ()=o(1) by Lemma <ref>, it remains only to show that (∩)=o(1). As usual (for example, as for (<ref>)), by conditioning (Lemma <ref>), sandwiching (Lemma <ref>) and monotonicity (Remark <ref>), writing _m_j=(^±_m_j/n), we conclude that (∩) ≤∑_1 ≤ j ≤ j_0[n^-ω(1) + max_m_j/n-nice [(S_r,n(_m_j) < ^-_r,j) + (S_r,n(_m_j) > ^+_r,j)]] . Since _j^3n≥ω→∞, if n is large enough the assumptions of Lemma <ref> are satisfied. In particular, by the expectation bound (<ref>) and the choice of _r,j^± we infer that ^+_r,j≥(1+(_j^3n)^-1/4)S_r,n(_m_j) and ^-_r,j≤(1-(_j^3n)^-1/4)S_r,n(_m_j) . The variance bound (<ref>) of Lemma <ref> states that S_r,n(_m_j) / S_r,n(_m_j)^2≤b_r/_j^3n. By Chebychev's inequality and (<ref>), it follows for large n that (S_r,n(_m_j) < ^-_r,j) +(S_r,n(_m_j) >^+_r,j) ≤2b_r/(_j^3 n)^1/2≤1/(_j^3 n)^1/3 . Substituting this bound into (<ref>) and using _j^3n = ω (1+ξ)^3(j-1), it follows that (∩) ≤ n^-ω(1) + ∑_j ≥ 11/ω^1/3 (1+ξ)^j-1≤ n^-ω(1) + O(1)/ω^1/3ξ = o(1), completing the proof. § OPEN PROBLEMS AND EXTENSIONS Our proof methods exploited that, via a two-round exposure argument, we could construct the random graphs we study using many independent (uniform) random vertex choices. This allowed us to bring branching process comparison arguments into play.
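A minimal sketch of the kind of two-round construction just mentioned (illustrative only: the sets, sizes and edge counts below are simplified stand-ins, not the actual construction J() of Section <ref>). Small components are first attached to uniformly random vertices of V_L; then uniformly random edges are added inside V_L.

```python
import random

def largest_component(nL, small_comps, m_edges, rng):
    """Union-find over V_L = {0,...,nL-1}; small components attach as weight."""
    parent = list(range(nL))
    weight = [1] * nL                       # component weights incl. attachments

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[rx] = ry
            weight[ry] += weight[rx]

    # round 1: each small component joins a uniformly random vertex of V_L
    for s in small_comps:
        weight[find(rng.randrange(nL))] += s
    # round 2: independent uniformly random edges inside V_L
    for _ in range(m_edges):
        union(rng.randrange(nL), rng.randrange(nL))
    return max(weight[find(v)] for v in range(nL))

rng = random.Random(3)
nL = 100_000
small = [rng.randint(1, 5) for _ in range(20_000)]   # toy 'V_S' component sizes
for frac in (0.4, 0.5, 0.6):                         # edges per V_L vertex
    print(frac, largest_component(nL, small, int(frac * nL), rng))
```

The attached components contribute mass but no connectivity among V_L vertices – mirroring the sterile type-S particles of the branching processes – while the random edges inside V_L produce the familiar Erdős–Rényi-type phase transition around half an edge per vertex.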
Although we have not checked the details, we believe that these methods adapt without problems to, for example, the vertex immigration random graph model introduced by Aldous and Pittel <cit.>, and its generalization proposed by Bhamidi, Budhiraja and Wang <cit.>, where in each time-step either (i) components of bounded size immigrate into the vertex set, or (ii) an edge connecting two randomly chosen vertices is added. Indeed, for these models the key observation is that, similar to the present paper, near the critical point we can again partition the vertex set into V_S ∪ V_L in such a way that we can construct the random graph by (a) joining components from V_S to a certain number of uniformly random vertices from V_L and (b) adding uniformly random edges to V_L. (To account for the fact that the vertex set grows over time, here V_L contains all components at some suitable time t_0 ∈ (0,), and V_S contains all new vertices and components which arrive by time t ∈ (t_0,t_1), where t_1> and t_1-t_0 is small enough that the graph induced by V_S stays `subcritical'.) This makes it plausible that the methods of this paper can again be used to analyze the phase transition in these models. In the light of the above discussion, the following open problems might be more interesting for further work than simply adapting the methods used here to other models. In (1)–(3) below, we consider only bounded-size rules. (1) Show that, for =(n) satisfying → 0 and ^3 n →∞ as n →∞, the size of the second largest `supercritical' component whp satisfies L_2( n +n) ∼ a^-2log(^3n), where the constant a = ψ”()>0 is as in Theorem <ref>. Since Theorem <ref> implies that the largest `subcritical' component whp satisfies L_1( n -n) ∼ a^-2log(^3n), this would establish the `symmetry rule' (also called `discrete duality') that is well-known for Erdős–Rényi random graphs (see, e.g., Section 3 in <cit.> or Section 5.6 in <cit.>); it would also be consistent with the small component size distribution (<ref>) established in this paper. (2) Show that the asymptotic form (<ref>) of the function ρ_k(t) appearing in (<ref>) remains valid for any bounded time interval (excluding 0), not just close to the critical time . One can also ask similar questions about the function ρ(t) appearing in (<ref>). For example, Janson and Spencer <cit.> were interested (for the Bohman–Frieze rule) in whether ρ(t) is analytic (or, as they asked it, smooth) for any t ∈ [,∞), not just for time t ∈ [,+_0) as shown in this paper. (3) Show that, for =(n) satisfying = O(1) and ^3 n →∞ as n →∞, the size of the largest `supercritical' component L_1( n +n) satisfies a central limit theorem (CLT). This is well-known for Erdős–Rényi random graphs (see, e.g., <cit.> and the references therein); it would also complement the law of large numbers established in this paper. (4) Analyze, for fixed >0 or =(n) → 0, the qualitative behaviour of the rescaled size of the largest `supercritical' component L_1( n +n)/n for `explosive' (unbounded) size rules such as the product rule, the sum rule, or the dCDGM rule (defined in <cit.>).
As discussed in the introduction, see also Figure <ref>, these rules seem to have an extremely steep growth, which most likely differs from the linear growth of bounded-size rules established in this paper (we believe that the corresponding scaling limits ρ(t) have an infinite right-hand derivative at the critical time , see also Section <ref>). For the duality problem (1), similar to <cit.>, we expect that taking out the giant component we obtain an instance of the random hypergraph model J() that is close enough to a natural dual `subcritical' version, which can be coupled to the supercritical branching process conditioned on not surviving. Then it ought to be possible to prove results for the small component sizes that are similar to what we have below , though the technical challenges seem formidable. In work in preparation <cit.>, we use a combinatorial multi-round exposure argument to prove a weaker result: that whp L_2( n +n) = O(min{^-2,1}log n) for n^-1/3(log n)^1/3≪ = O(1). For the time interval problem (2), we speculate that variants of the methods of this paper might extend by some kind of step-by-step argument, but we did not investigate this closely as the present paper was already long enough, and the near-critical behaviour in any case seems the most interesting. In <cit.> we exploit the PDE approach of Section <ref> (among other ideas) to prove, for any t ∈ (,∞), that ρ_k(t) decays exponentially in k and that ρ(t) is analytic. For the CLT problem (3), we speculate that for fixed ∈ (0,_0) it might be possible to adapt the differential equation method based approach of Seierstad <cit.> (together with ideas of this paper and <cit.>) to establish asymptotic normality after suitable rescaling, but we have not investigated this closely as our main focus is the more challenging =(n) → 0 case. Indeed, it seems that a CLT for =(n) → 0 with ^3 n →∞ requires new ideas that go beyond <cit.> and the recent random walk based CLT approach <cit.>. The `unbounded' size rules problem (4) is conceptually perhaps the most important one, and it will most likely further stimulate the development of new tools and techniques in the area. Based on the partial results from <cit.>, we believe that it would be key to understand the effect of the edges which are added close to the `critical point' where the susceptibility diverges (e.g., if they have a similar effect to the addition of random edges). An alternative approach might be to analyze the behaviour of the infinite system of differential equations derived in <cit.>, which however is not known to have a unique solution (for this one perhaps needs to augment the system by further typical properties of the associated random graph process; see also Section 3 in <cit.>).
§ REFERENCES
DRS D. Achlioptas, R.M. D'Souza, and J. Spencer. Explosive percolation in random networks. Science 323 (2009), 1453–1455.
Aldous1997 D. Aldous. Brownian excursions, critical random graphs and the multiplicative coalescent. Ann. Probab. 25 (1997), 812–854.
AldousPittel2000 D. Aldous and B. Pittel. On a random graph with immigrating vertices: emergence of the giant component. Random Struct. Alg. 17 (2000), 79–102.
BA1999 A.-L. Barabási and R. Albert. Emergence of scaling in random networks. Science 286 (1999), 509–512.
BBW12b S. Bhamidi, A. Budhiraja, and X. Wang. The augmented multiplicative coalescent, bounded size rules and critical dynamics of random graphs. Probab. Theory Related Fields 160 (2014), 733–796.
BBW12a S. Bhamidi, A. Budhiraja, and X. Wang.
Bounded-size rules: The barely subcritical regime. Combin. Probab. Comput. 23 (2014), 505–538.
BBW11 S. Bhamidi, A. Budhiraja, and X. Wang. Aggregation models with limited choice and the multiplicative coalescent. Random Struct. Alg. 46 (2015), 55–116.
ODEGG G. Birkhoff and G.-C. Rota. Ordinary differential equations. 4th ed., John Wiley & Sons (1989).
BF T. Bohman and A. Frieze. Avoiding a giant component. Random Struct. Alg. 19 (2001), 75–85.
BK T. Bohman and D. Kravitz. Creating a giant component. Combin. Probab. Comput. 15 (2006), 489–511.
Bollobas1984 B. Bollobás. The evolution of random graphs. Trans. Amer. Math. Soc. 286 (1984), 257–274.
BB B. Bollobás. Random Graphs. 2nd ed., Cambridge University Press (2001).
2SAT B. Bollobás, C. Borgs, J.T. Chayes, J.H. Kim, and D.B. Wilson. The scaling window of the 2-SAT transition. Random Struct. Alg. 18 (2001), 201–256.
BJR B. Bollobás, S. Janson, and O. Riordan. The phase transition in inhomogeneous random graphs. Random Struct. Alg. 31 (2007), 3–122.
BR2009 B. Bollobás and O. Riordan. Random graphs and branching processes. In Handbook of large-scale random networks, Bolyai Soc. Math. Stud. 18 (2009), pp. 15–115.
BR2012RW B. Bollobás and O. Riordan. Asymptotic normality of the size of the giant component via a random walk. J. Combin. Theory Ser. B 102 (2012), 53–61.
BR2012 B. Bollobás and O. Riordan. A simple branching process approach to the phase transition in G_n,p. Electron. J. Combin. 19 (2012), Paper 21.
BCvdHSS2005 C. Borgs, J.T. Chayes, R. van der Hofstad, G. Slade and J. Spencer. Random subgraphs of finite graphs. I. The scaling window under the triangle condition. Random Struct. Alg. 27 (2005), 137–184.
BS C. Borgs and J. Spencer. Personal communication, EURANDOM workshop Probability and Graphs (2014).
dCDGM R.A. da Costa, S.N. Dorogovtsev, A.V. Goltsev, and J.F.F. Mendes. Explosive percolation transition is actually continuous. Phys. Rev. Lett. 105 (2010), 255701.
DKP M. Drmota, M. Kang, and K. Panagiotou. Pursuing the Giant in Random Graph Processes. Preprint (2013).
RGD R. Durrett. Random graph dynamics. Cambridge University Press (2010).
ER1960 P. Erdős and A. Rényi. On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17–61.
vdHN2012 R. van der Hofstad and A. Nachmias. Hypercube percolation. J. Eur. Math. Soc. 19 (2017), 725–814.
J2010 S. Janson. Susceptibility of random graphs with given vertex degrees. J. Comb. 1 (2010), 357–387.
J S. Janson. Networking – Smoothly does it. Science 333 (2011), 298–299.
JL S. Janson and M.J. Luczak. Susceptibility in subcritical random graphs. J. Math. Phys. 49 (2008), 125207.
JLR S. Janson, T. Łuczak and A. Ruciński. Random Graphs. Wiley-Interscience (2000).
JR2011 S. Janson and O. Riordan. Duality in inhomogeneous random graphs, and the cut metric. Random Struct. Alg. 39 (2011), 399–411.
JR2012 S. Janson and O. Riordan. Susceptibility in inhomogeneous random graphs. Electron. J. Combin. 19 (2012), Paper 31.
BPpaper S. Janson, O. Riordan, and L. Warnke. Sesqui-type branching processes. In preparation.
JS S. Janson and J. Spencer. Phase transitions for modified Erdős–Rényi processes. Ark. Mat. 50 (2012), 305–329.
JW2016 S. Janson and L. Warnke. On the critical probability in percolation. Electron. J. Probab., to appear.
KPSPC M. Kang, W. Perkins, and J. Spencer. Personal communication (2012).
KPS M. Kang, W. Perkins, and J. Spencer. The Bohman–Frieze process near criticality. Random Struct. Alg. 43 (2013), 221–250.
KPSE M. Kang, W. Perkins, and J. Spencer.
Erratum to “The Bohman–Frieze process near criticality”. Random Struct. Alg. 46 (2015), 801.
Karp1990 R.M. Karp. The transitive closure of a random digraph. Random Struct. Alg. 1 (1991), 73–93.
Luczak1990 T. Łuczak. Component behavior near the critical point of the random graph process. Random Struct. Alg. 1 (1990), 287–310.
McDiarmid1989 C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics (Norwich, 1989), London Math. Soc. Lecture Note Ser., vol. 141, pp. 148–188. Cambridge Univ. Press, Cambridge (1989).
NP2007 A. Nachmias and Y. Peres. Component sizes of the random graph outside the scaling window. ALEA Lat. Am. J. Probab. Math. Stat. 3 (2007), 133–142.
NachmiasPeres2010 A. Nachmias and Y. Peres. Critical percolation on random regular graphs. Random Struct. Alg. 36 (2010), 111–148.
NP2010 A. Nachmias and Y. Peres. The critical random graph, with martingales. Israel J. Math. 176 (2010), 29–41.
Petrovsky I.G. Petrovsky. Lectures on partial differential equations. Dover Publications (1991).
PittelWormald B. Pittel and N.C. Wormald. Counting connected graphs inside-out. J. Combin. Theory Ser. B 93 (2005), 127–172.
Range R.M. Range. Holomorphic functions and integral representations in several complex variables. Vol. 108 of Graduate Texts in Mathematics, Springer-Verlag (1986).
OR2012 O. Riordan. The phase transition in the configuration model. Combin. Probab. Comput. 21 (2012), 265–299.
RW O. Riordan and L. Warnke. Explosive percolation is continuous. Science 333 (2011), 322–324.
RWapcont O. Riordan and L. Warnke. Achlioptas process phase transitions are continuous. Ann. Appl. Probab. 22 (2012), 1450–1464.
RWPRE O. Riordan and L. Warnke. Achlioptas processes are not always self-averaging. Physical Review E 86 (2012), 011129.
RWapsubcr O. Riordan and L. Warnke. The evolution of subcritical Achlioptas processes. Random Struct. Alg. 47 (2015), 174–203.
RWapunique O. Riordan and L. Warnke. Convergence of Achlioptas processes via differential equations with unique solutions. Combin. Probab. Comput. 25 (2016), 154–171.
RWapip O. Riordan and L. Warnke. In preparation.
Seierstad T.G. Seierstad. On the normality of giant components. Random Struct. Alg. 43 (2013), 452–485.
Sen S. Sen. On the largest component in the subcritical regime of the Bohman–Frieze process. Electron. Commun. Probab. 21 (2016), Paper 64.
JSP J. Spencer. Potpourri. J. Comb. 1 (2010), 237–264.
SW J. Spencer and N.C. Wormald. Birth control for giants. Combinatorica 27 (2007), 587–628.
DEMLW L. Warnke. On Wormald's differential equation method. Manuscript (2013).
DEM N.C. Wormald. Differential equations for random processes and random graphs. Ann. Appl. Probab. 5 (1995), 1217–1235.
DEM99 N.C. Wormald. The differential equation method for random graph processes and greedy algorithms. In Lectures on approximation and randomized algorithms, pages 73–155. PWN, Warsaw (1999).
§ APPENDIX §.§ Transferring results from 4-vertex rules to Achlioptas processes In this appendix we briefly present one possible way of transferring results from 4-vertex processes to the original Achlioptas process (where in each step the two edges e_1,e_2 are chosen independently and uniformly at random from all edges not yet present, say).
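Before the formal estimate below, a quick numerical preview of the inequality it establishes (a sketch: `log_ratio` is a hypothetical helper computing the logarithm of the probability ratio that the argument below bounds by the crude constant 400).

```python
import math

def log_ratio(n):
    # log of [prod_{0 <= i < 9n} (binom(n,2) - i)^{-2}] / [(4/n^4)^{9n}],
    # i.e. the (log-)factor by which Achlioptas-process sequence probabilities
    # can exceed 4-vertex-process sequence probabilities
    lhs = -2 * sum(math.log(math.comb(n, 2) - i) for i in range(9 * n))
    rhs = 9 * n * math.log(4 / n ** 4)
    return lhs - rhs

for n in (100, 500, 2000):
    print(n, round(log_ratio(n), 2))   # stabilizes well below the bound 400
```

The exact log-ratio settles near 180–190, comfortably below the worst-case constant 400 used in the computation that follows.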
Fixing some rule , the Achlioptas process (G^_n,i)_0 ≤ i ≤ 9n is uniquely determined by the sequence of potential edges = (e_1,i,e_2,i)_1 ≤ i ≤ 9n offered during the first 9n steps. In the Achlioptas process any valid sequence occurs with probability at most ∏_0 ≤ i < 9n1/(n2-i)^2 = ∏_0 ≤ i < 9n4/n^4(1-1/n-2i/n^2)^2≤(4/n^4)^9n/(1-19/n)^18n≤ e^400(4/n^4)^9n for n ≥ n_0. Mapping _i=(v_1, …, v_4) to the pairs e_1,i={v_i,1,v_i,2} and e_2,i={v_i,3,v_i,4}, in the 4-vertex process any edge sequence = (e_1,i,e_2,i)_1 ≤ i ≤ 9n occurs with probability exactly (4/n^4)^9n. It follows that if an event  fails with probability at most π in the 4-vertex process, then  fails with probability at most e^400π= O(π) in the Achlioptas process (tacitly assuming that the event  does not depend on any graphs G^_n,i with i > 9n, which of course holds in this paper). Since our main results only concern events that fail with negligible probability π→ 0, this formally justifies the fact that we may treat the original Achlioptas process as a 4-vertex process. (Similar reasoning applies to other variations.) §.§ Cauchy–Kovalevskaya ODE and PDE theorems In this appendix we present two `easy-to-apply' versions of the Cauchy–Kovalevskaya theorem, which are optimized for the (combinatorial) applications in this paper. These show that, under suitable regularity conditions, certain systems of ODEs or PDEs have analytic solutions. We first consider first-order PDEs, with =(x_1, …, x_n)∈^n. Our starting point is the following standard version of the Cauchy–Kovalevskaya Theorem, taken from pages 15–16 in <cit.>. This states that a first order PDE has an analytic local solution provided (i) the time-derivative of the function u to be solved for is given by an analytic function of u and its space-derivatives as in (<ref>) below, and (ii) the initial data (<ref>) is analytic. Similar statements hold for more general PDEs, but we shall not need this. Let n ≥ 1, let t_0∈ and let _0 ∈^n. Suppose that the function f:^n → is analytic in some neighbourhood of _0, and that F:^2n+2→ is analytic in some neighbourhood of (t_0, _0, f(_0), ∂ f/∂ x_1(_0), …, ∂ f/∂ x_n(_0)). Then there exists a neighbourhood of (t_0,_0) in ^n+1 and an analytic function u: → which satisfies ∂/∂ t u(t,)= F(t,,u(t,),∂/∂ x_1u(t,),…,∂/∂ x_nu(t,)) and u(t_0,) = f() . Standard results also give uniqueness in this case (among analytic solutions). For our application, local existence as above is not quite enough; we would like existence in a neighbourhood of a certain compact (`space') domain rather than just of a point. Fortunately, this follows by a compactness argument. Given =(y_1,…,y_n) ∈^n and =(r_1,…,r_n) with all r_i>0, we write (,) := {∈^n: |x_i-y_i|< r_i, 1≤ i≤ n } for the polycylinder (or polydisc) in ^n with centre  and polyradius . With t_0∈ fixed, for r > 0 we write (r) := { t∈: |t-t_0| < r }. Suppose that n≥ 1, t_0∈, >0, and 0<a_i<b_i for i=1,…,n. Let :=(), _0 :=((0,…,0),), and _1 :=((0,…,0),). Suppose that the functions f:_1 → and F:×_1 ×^n+1→ are analytic. Then there is a δ>0 and an analytic function u: _0 ×_0 → which satisfies (<ref>)–(<ref>), where _0 := (δ). Furthermore, the Taylor series of u around (t_0,0, …, 0) converges (to u) in the domain _0 ×_0. Let _0⊂_1 be the closure of _0, i.e., the set {∈^n: |x_i|≤ a_i, i=1,…,n}.
For any point ∈_0, by Lemma <ref> there is an r_>0 such that, defining _:=(r_), _:=(,(r_, …, r_)) and _ := _×_, the following holds: (i) we have _⊆ and _⊆_1, and (ii) there exists an analytic function u_: _→ which satisfies (<ref>)–(<ref>) for all (t,) ∈_ (with u replaced by u_). Suppose that _ and _ intersect; we claim that then u_ and u_ agree on _∩_. To see this, first note that _∩_ is of the form (r)× for some r>0 and some open ⊂^n. Suppose that (t,)∈_∩_. Since is open, some open polycylinder :=(,) is contained in , so (t,) ∈(r)×⊂_∩_. Since u_ and u_ are analytic in the polycylinder (r)×, by the complex version of the Taylor series expansion (see, e.g., Theorem 1.18 in <cit.>) they both have Taylor series around (t_0,) which converge in this domain. By construction, u_ and u_ satisfy the initial condition (<ref>) and the time-derivative equation (<ref>) for all (t,) ∈(r)×. These properties together uniquely determine all partial derivatives of u_ and u_ at the point (t_0,) (this observation also forms the basis of the Cauchy–Kovalevskaya theorem). Thus u_ and u_ have the same Taylor expansion around (t_0,) and hence agree in (r)× and in particular at (t,). The collection {_} of polycylinders forms an open cover of _0. By compactness, there is a finite subcover: _0 ⊂⋃_∈ P_ with P finite. Let δ := min_∈ P r_>0, and set _0:=(δ). Let := _0×_0. Then ⊆⋃_∈ P (_0 ×_) ⊆⋃_∈ P (_×_) =⋃_∈ P_. Define u:→ by u(t,):=u_(t,) for any ∈ P such that (t,)∈_. This definition makes sense by the claim above. Then u is analytic: for any (t,)∈, we have (t,)∈_ for some ∈ P, and since _ is open and u_ agrees with u in _, u is analytic at . Similarly, u satisfies (<ref>)–(<ref>) since the u_ do. This completes the proof of the first statement. The second statement follows: since u is analytic in the polycylinder centered at (t_0,0,…,0), by e.g., Theorem 1.18 in <cit.> its Taylor series about (t_0,0,…,0) converges in . Turning to the ODE case, the following folklore theorem (see, e.g., Corollary 2 in Section 6.11 of <cit.>) states that functions u_1, …, u_s which satisfy a finite system of ODEs are real-analytic if their derivatives (<ref>) are based on real-analytic equations; the technical condition (<ref>) ensures that (<ref>) makes sense. Let s ≥ 1. Suppose that ⊆ is an open interval, that ⊆^s is an open set, and that F_j: ×→ is real-analytic for 1 ≤ j ≤ s. Suppose that the functions u_1,…,u_s from to satisfy (u_1(t), …, u_s(t)) ∈ and d/d t u_j(t) = F_j(t, u_1(t), …, u_s(t)) for all t∈. Then u_1,…,u_s are real-analytic in . §.§ Palm theory for the Poisson process In this appendix we present two elementary instances of Palm theory for the Poisson process, which provide methods for calculating the mean of certain random sums. In Lemma <ref> below we write, as usual, [N]={1,2,…,N} and [0]=∅. The symmetry assumption (<ref>) holds for functions that are invariant under relabellings. In the right hand side of (<ref>), we intuitively think of N+1, …, N+s either (i) as `extra' elements that are added to the random set [N], or (ii) as special elements of the `enlarged' random set [N+s]. Let N ∼(λ) with λ∈ [0,∞). Given s ≥ 1, let f be a measurable random function, independent of N, defined on the product of ()^s and finite subsets of . Assume that, for all m ≥ s and x_1, …, x_s ∈ [m], we have (f(x_1, …, x_s,[m]∖{x_1, …, x_s})) = (f(m-s+1, …, m, [m-s])).
Then (^*∑_(x_1, …, x_s) ∈ [N]^s f(x_1, …, x_s,[N]∖{x_1, …, x_s})) = λ^s (f(N+1, …, N+s, [N])) , where ∑^* means that we are summing over s-tuples with distinct x_i. The argument is elementary: after conditioning on N ≥ s it suffices to rewrite terms, exploiting symmetry of f and the identity (N=m) mss! = λ^s(N=m-s). More precisely, by the assumed independence, we see that the left hand side of (<ref>) may be written as ∑_m ≥ s(N=m) ^*∑_(x_1, …, x_s) ∈ [m]^s(f(x_1, …, x_s,[m]∖{x_1, …, x_s}))= ∑_m ≥ s(N=m) mss! (f(m-s+1, …, m, [m-s])) , which by our above discussion equals λ^s (f(N+1, …, N+s, [N])). We shall also use the following simple variant (again thinking of f as being symmetric w.r.t. the labels); the proof is very similar to Lemma <ref> and thus omitted. For i ∈ [2], let N_i ∼(λ_i) be independent random variables. Let f be a measurable random function, independent of N_1 and N_2, defined on the product of ()^2 and finite subsets of ×. Assume that, for all m_1,m_2 ≥ 1, x ∈ [m_1] and y ∈ [m_2], we have (f(x, y,[m_1]∖{x}, [m_2] ∖{y})) = (f(m_1, m_2,[m_1-1],[m_2-1])). Then (∑_x ∈ [N_1], y ∈ [N_2] f(x, y,[N_1]∖{x}, [N_2] ∖{y})) = λ_1λ_2 (f(N_1+1, N_2+1, [N_1], [N_2])) . §.§ Branching processes The branching process results stated in Section <ref>, namely Theorems <ref>–<ref>, will, in essence, be proved in a separate paper <cit.> with Svante Janson. The reason for the split is that the proofs use very different methods from those used in the present (already fairly long) paper; they are pure branching-process theory, with no random graph theory involved. As formulated in Section <ref>, however, these results involve rather complicated definitions from Section <ref>. Although our only aim is to analyze the specific branching processes _t and _t^± defined in Section <ref>, to avoid the need to repeat the full definitions in <cit.>, in Section <ref> we review, and somewhat generalize, them. More precisely, we gather together the properties of these processes (or rather, the offspring distributions defining them) needed for the analysis into a formal definition, which is of course tailored to our context. Then, in Section <ref>, we state two results that, as we show, imply Theorems <ref>–<ref>. The statements of these results are complicated by the parameter ; in Section <ref> we show that the general case may be deduced from the special case =1 proved in <cit.>. §.§.§ Setup and assumptions Throughout this appendix, we consider branching processes of the following general form, formally defined in Definition <ref>. Each generation consists of some number of particles of type L and some number of type S. Particles of type S never have children. Given a probability distribution (Y,Z) on ^2, ^1_Y,Z is the Galton–Watson process starting with a single particle of type L (in generation 0) in which each particle of type L has Y children of type L and Z of type S, independent of other particles in its generation and of the history. Given a second probability distribution (Y^0,Z^0) on ^2, _Y,Z,Y^0,Z^0 is the branching process defined in the same way, except that the first generation consists of Y^0 particles of type L and Z^0 of type S. A branching process family (_t)_t∈ (t_0,t_1) = (_Y_t,Z_t,Y^0_t,Z^0_t)_t∈ (t_0,t_1) is simply a family of branching processes as above, one for each real number t in some interval (t_0,t_1). Note that the branching process family (_Y_t,Z_t,Y^0_t,Z^0_t)_t∈ (t_0,t_1) is fully specified by the interval (t_0,t_1) and the distributions of (Y_t,Z_t) and of (Y^0_t,Z^0_t) for each t.
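A minimal sampler for branching processes of exactly this shape. The independent-Poisson offspring laws and the escape threshold `big` below are illustrative assumptions, not the distributions of Section <ref>; whether the generation-0 ancestor is counted in the total size is a normalization choice, flagged in the comments.

```python
import math, random

def poisson(lam, rng):
    # inverse-transform sampling of Po(lam)
    u, k, p = rng.random(), 0, math.exp(-lam)
    F = p
    while u > F:
        k += 1
        p *= lam / k
        F += p
    return k

def total_size(muY, muZ, muY0, muZ0, rng, big=500):
    """Total progeny of X_{Y,Z,Y0,Z0}: generation 1 has Po(muY0) L- and
    Po(muZ0) S-particles; afterwards each L-particle has Po(muY) L- and
    Po(muZ) S-children, and S-particles are sterile.  We count the type-L
    ancestor (a normalization choice) and return None if the process looks
    like escaping to infinity."""
    L, S = poisson(muY0, rng), poisson(muZ0, rng)
    size = 1 + L + S
    while 0 < L <= big:
        # sums of independent Poissons over the L active particles
        L, newS = poisson(muY * L, rng), poisson(muZ * L, rng)
        size += L + newS
    return None if L > big else size

rng = random.Random(1)
runs = [total_size(0.9, 0.5, 1.0, 0.2, rng) for _ in range(200_000)]
dist = [runs.count(k) / len(runs) for k in (1, 2, 3, 4)]
print(sum(r is None for r in runs), [round(p, 4) for p in dist])
```

With the subcritical mean 0.9 for L-children the process dies out almost surely, and the empirical point probabilities printed here are the quantities whose k^-3/2 e^-ψ k asymptotics Theorem <ref> describes.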
We shall often describe properties of these distributions via their probability generating functions. The next definition encapsulates those properties of the `idealized' branching process _t defined in (<ref>) that we shall need.

Let t_0 < < t_1 be real numbers, and let  and K be non-negative integers. The branching process family (_Y_t,Z_t,Y^0_t,Z^0_t)_t∈ (t_0,t_1) is -critical with period  and offset K if the following hold:

* There exist δ>0 and R > 1 with (-δ,+δ) ⊆ (t_0,t_1) such that the functions (t,α,β) := (α^Y_t β^Z_t) and ^0(t,α,β) := (α^Y_t^0 β^Z_t^0) are defined for all real t with |t-| < δ and all complex α, β with |α|,|β| < R. Furthermore, these functions have analytic extensions to the complex domain _δ,R := {(t,α,β) ∈^3 : |t-|<δ and |α|,|β|<R}.

* For each t∈ (t_0,t_1), with probability 1 we have (Y_t,Z_t) ∈ ()^2 and (Y_t^0,Z_t^0) ∈ ()^2 ∪ ({0}× [K]).

* We have Y_ = 1, Y^0_ > 0, and d/dt Y_t|_t= > 0.

* There exists some k_0∈ such that min{Y_=k_0, Z_=k_0, Y_=k_0+, Z_=k_0, Y_=k_0, Z_=k_0+} > 0.

As we shall show in a moment, the results in Sections <ref>–<ref> show that the branching process family (_t)_t∈ (t_0,t_1) defined in (<ref>) is -critical with period  and offset K, where  is the period of the rule , defined in Section <ref> (see Lemma <ref>), and K is the cut-off of .

We also consider `perturbed' distributions that differ slightly from these `idealized' ones.

Let (_t)_t∈ (t_0,t_1) be a -critical branching process family with period  and offset K, and let δ, R and k_0 be as in Definition <ref>. Given t,η≥ 0 with |t-| < δ, we say that the branching process _Y,Z,Y^0,Z^0 is of type (t,η) (with respect to (_t), δ, R, and k_0) if the following hold:

* Writing  := {(α,β) ∈^2: |α|,|β| < R}, the expectations (α,β):=(α^Y β^Z) and ^0(α,β) := (α^Y^0 β^Z^0) are defined (i.e., the sums converge absolutely) for all (α, β) ∈.

* With probability 1 we have (Y,Z) ∈ ()^2 and (Y^0,Z^0) ∈ ()^2 ∪ ({0}× [K]).

* For all (α,β) ∈ we have |(α,β)-(t,α,β)| ≤η and |^0(α,β)-^0(t,α,β)| ≤η.

Note that when =1 (the main case we are interested in), the offset K plays no role in Definitions <ref> and <ref>, so we may take K=0. Definition <ref> says that, in some precise sense, the distributions of (Y,Z) and of (Y^0,Z^0) are `η-close' to those of (Y_t,Z_t) and (Y_t^0,Z_t^0), respectively. We shall only consider cases where 0≤η≤ |t-|. Note that our definition of `type (t,η)' is with reference to a branching process family (_t), as well as some additional constants. This branching process family will always be clear from context, so we shall often omit referring to it explicitly; we shall always omit reference to the additional constants.

Let (_t)_t∈ (t_0,t_1) be the branching process family defined in (<ref>). Then (_t) is -critical with period  and offset K, where  is defined in Lemma <ref>, and K is the cut-off size in the bounded-size rule . Furthermore, there exist constants δ,C>0 such that for any t∈ (t_0,t_1) with Cn^-1/3 ≤ |t-| ≤δ and any t-nice parameter list , the branching processes _t^±=_t^±() defined in Definition <ref> are of type (t,Cn^-1/3) with respect to (_t)_t∈ (t_0,t_1).

Let δ be as in Theorem <ref>, and let R be the smaller of the radii R appearing in Theorems <ref> and <ref>. Considering first (_t)_t∈ (t_0,t_1), the analyticity condition <ref> in Definition <ref> is satisfied by Theorem <ref>. Condition <ref> holds by Lemma <ref>, the criticality condition <ref> holds by Lemma <ref>, and the non-degeneracy condition <ref> holds by Lemma <ref>. We now turn to _t^±() as defined in Definition <ref>.
Condition <ref> in Definition <ref> is an immediate consequence of the uniform upper bound (<ref>) from Theorem <ref>. Condition <ref> on the support holds by Lemma <ref>, and the `η-close' condition <ref> holds by (<ref>)–(<ref>) of Theorem <ref>, provided C is chosen large enough.

§.§.§ Results

In this subsection we state two results, Theorems <ref> and <ref> below, which imply the results in Section <ref>. The proofs are deferred to Section <ref> and <cit.>. We start with the tail asymptotics of the branching process (which simplify when =1: then (<ref>) holds for all k ≥ 1 without the indicator).

Let (_t)_t∈ (t_0,t_1) be a -critical branching process family with period  and offset K. Then there exist constants _0,c>0 and analytic functions θ and ψ on the interval I=[-_0,+_0] such that

(||=k) = (1+O(1/k)+O(η)) · 1_{k ≡ 0 mod } · k^-3/2 θ(t) e^-ξ_Y,Z k

uniformly over all k>K, t∈ I, 0≤η≤ c|t-| and all branching processes =_Y,Z,Y^0,Z^0 of type (t,η) (with respect to (_t)), where the constant ξ_Y,Z, which depends on the distribution of (Y,Z), satisfies

ξ_Y,Z = ψ(t) + O(η|t-|).

Moreover, θ>0, ψ≥ 0, ψ()=ψ'()=0, and ψ''()>0.

The condition k>K in Theorem <ref> is needed only to account for the possibility that for some small k which are not multiples of  we may have (||=k)>0; this can only happen if (Y_t^0,Z_t^0)=(0,z) for some z ∈ [K]. As discussed in Section <ref>, in the most important case with period =1 we may take K=0.

We next turn to the survival probability of our branching process.

Let (_t)_t∈ (t_0,t_1) be a -critical branching process family. Then there exist constants _0,c>0 with the following properties. Firstly, the survival probability ρ(t)=(|_t|=∞) is zero for -_0≤ t≤, and is positive for <t≤+_0. Secondly, ρ(t) is analytic on [,+_0]; in particular, there are constants a_i with a_1>0 such that for all ∈ [0,_0] we have

ρ(+) = ∑_i=1^∞ a_i ^i.

Thirdly, for any t and η with |t-|≤_0 and η≤ c|t-|, and any branching process =_Y,Z,Y^0,Z^0 of type (t,η) (with respect to (_t)), the survival probability  of  is zero if t≤, and is positive and satisfies  = ρ(t)+O(η) if t>. Moreover, analogous statements hold for the survival probabilities ρ_1(t) and _1 of the branching processes ^1_t and ^1_Y,Z.

In the light of Lemma <ref>, Theorems <ref> and <ref> follow immediately from Theorem <ref>, and Theorems <ref> and <ref> from Theorem <ref> and the discussion in Remark <ref>.

§.§.§ Reduction to the special case =1 and K=0

The proofs of Theorems <ref> and <ref> in the key case =1 and K=0 will be given in a companion paper <cit.> written with Svante Janson. In this subsection we outline how both theorems follow from these key special cases; the argument is purely technical and requires no new ideas.

Turning to the details for Theorem <ref>, suppose that we are given a -critical branching process family (_t)_t∈ (t_0,t_1) = (_Y_t,Z_t,Y_t^0,Z_t^0)_t∈ (t_0,t_1) with period > 1 and offset K ≥ 0, and a branching process =_Y,Z,Y^0,Z^0 of type (t,η) with respect to this family. (As discussed in Sections <ref>–<ref>, for period = 1 we may take offset K=0, and so there is nothing to show.) We shall modify these branching processes in two steps into ones corresponding to the case =1, K=0. Of course, in each step we need to check that our branching processes satisfy Definitions <ref> and <ref> (so we can apply Theorem <ref> to them), and that the conclusion of Theorem <ref> for the new distributions implies the conclusion of Theorem <ref> for the old distributions (possibly after decreasing the corresponding constant c>0).
We start with a simple auxiliary claim for the distributions (Y_t^0,Z_t^0) and (Y^0,Z^0) fixed above.

For each k≥ 1 the function f_k(t) := (Y^0_t=0, Z^0_t=k) is defined for real t with |t-| < δ, and satisfies |(Y^0=0, Z^0=k) - (Y_t^0=0, Z_t^0=k)| ≤η. Furthermore, f_k has an analytic extension to the complex domain _δ := {t ∈: |t-| < δ}.

Since ^0(t,α,β) has an analytic extension to _δ,R by Definition <ref>, and f_k(t) = ^0_β^k(t,0,0)/k!, it follows that f_k(t) has an analytic extension to _δ. Furthermore, for any real t with |t-| < δ, using standard Cauchy estimates (with center a=(0,0) and multiradius r=(1,1); see, e.g., Theorem 1.6 in <cit.>) we obtain

|(Y^0=0, Z^0=k) - (Y_t^0=0, Z_t^0=k)| = |^0_β^k(0,0) - ^0_β^k(t,0,0)|/k! ≤ sup_α,β∈: |α|,|β| ≤ 1 |^0(α,β) - ^0(t,α,β)| ≤η,

where we used (<ref>) from Definition <ref> for the last inequality (recall that R > 1).

For Theorem <ref> we first deduce the case ≥ 1, K > 0 from the case ≥ 1, K=0. Recall that Z_t^0, and also Z^0, need not always be a multiple of . However, by condition (<ref>) of Definition <ref> (and its analogue in Definition <ref>), the only possible exceptions are values (Y_t^0,Z_t^0)=(0,k) with k ∈ [K]={1, …, K}, and similarly for Z^0. We modify the distribution of (Y_t^0,Z_t^0) by simply setting this random variable to be equal to (0,), say, whenever it takes a value (0,k) with k ∈ [K]. We modify (Y^0,Z^0) in an analogous way. It is easy to see that the resulting branching processes satisfy the conditions in Definitions <ref> and <ref>. Indeed, the key assumption is the analytic extension of the probability generating function ^0, but ^0 has changed only by the addition of the finite sum ∑_k ∈ [K] (β^-β^k)(Y^0_t=0,Z^0_t=k), which has an analytic extension to _δ,R by Claim <ref>. Since the distribution of Y_t^0 has not changed, the new distribution still satisfies the criticality condition (<ref>).

We next check that the new (Y^0,Z^0) is of type (t,Cη) with respect to the new (Y_t^0,Z_t^0) for some constant C ≥ 1. Considering how ^0(α,β)-^0(t,α,β) changes when we modify the distributions, and using Claim <ref> to compare the relevant point probabilities, this is easily seen to follow from the fact that the original (Y^0,Z^0) is of type (t,η) with respect to the original (Y_t^0,Z_t^0). In terms of the conclusion of Theorem <ref>, since Y^0=0 implies that the process stops immediately, and so ||=Z^0, we have only affected the value of (||=k) for k ∈ [K], which does not alter the conclusion of Theorem <ref>. To sum up, since Cη ≤ c|t-| is equivalent to η≤ (c/C)·|t-|, the conclusion of Theorem <ref> (with constant c) for the modified distributions with K=0 implies the conclusion of Theorem <ref> (with c replaced by the constant c/C) for the original distributions with K>0, as claimed.

After this first change, for Theorem <ref> it remains to deduce the case > 1, K = 0 from the case =1, K=0. If the distributions (Y_t^0,Z_t^0) and (Y^0,Z^0) as well as (Y_t,Z_t) and (Y,Z) are all supported on ()^2, then, in the branching process, individuals are born in groups of size  (both in the first generation and later on). Thus we may describe the same random tree differently as a branching process, by treating each such group as an individual.
The new branching process ' deterministically satisfies ||= |'|. To check that it satisfies the conditions in Definitions <ref> and <ref>, we now relate the initial generation and later offspring distributions of  to those of '.

For the initial generation, we simply divide Y_t^0, Z_t^0, Y^0 and Z^0 by , which preserves all relevant conditions in Definitions <ref> and <ref>. Indeed, the only condition that requires some argument is the analytic extension condition for ^0: the key point is that the original ^0 has an analytic extension to the polydisk _δ,R. By standard results for complex analytic functions, this extension is given by a single power series around (,0,0) which converges in the entire polydisk (see, e.g., Theorem 1.18 in <cit.>). Since, for every t, Y_t^0 and Z_t^0 are both supported on , all powers of α and β appearing in this power series are multiples of . Substituting α^1/ and β^1/ thus gives a corresponding power series for the new distributions, converging in _δ,R^.

For the later offspring distributions (Y_t,Z_t) and (Y,Z), the operation is to take the sum of  independent copies of the distribution divided by . It is not hard to check that this preserves the assumptions, after increasing η by a constant factor C ≥ 1. Firstly, the mean of Y_t is unaffected and the mean of Y_t^0 is simply divided by > 1, so the criticality condition (<ref>) still holds. Secondly, the new `idealized' probability generating functions  and ^0 satisfy

(t,α,β) = (t,α^1/,β^1/)^ and ^0(t,α,β) = ^0(t,α^1/,β^1/),

so, arguing as above, they extend analytically to _δ,R^. Thirdly, the new `perturbed' probability generating functions , ^0 satisfy

(α,β) = (α^1/,β^1/)^ and ^0(α,β) = ^0(α^1/,β^1/),

so they are defined and (complex) analytic in  := {(α,β) ∈^2: |α|,|β| < R^}. Now, since the `η-close' condition (<ref>) holds for the original distributions, using the form of (<ref>)–(<ref>) it is easy to see that, after replacing η with Cη ≥η, (<ref>) again holds for the new distributions. Furthermore, if the original distribution satisfies the non-degeneracy condition (<ref>) with > 1 and k_0 ∈, then it is not difficult to check that the new distribution satisfies (<ref>) with =1 and the same constant k_0 ∈ (when we sum the > 1 independent copies of the modified distribution, we just take the value (k_0,k_0)/ for all of the first -1 copies, and then consider the values (k_0,k_0)/, (k_0+,k_0)/, and (k_0,k_0+)/ for the last copy).

To sum up, the new distributions associated to ' satisfy Definitions <ref> and <ref>, and are of type (t,Cη) for some C ≥ 1. Recalling (<ref>), the conclusion of Theorem <ref> with constant c for the modified distributions with =1 and K=0 easily implies the conclusion of Theorem <ref> (with c replaced by the constant c/C) for the original distributions with >1 and K = 0, as claimed.

Finally, the same arguments allow us to deduce Theorem <ref> from the special case =1, K=0. Indeed, we modify the branching process in two steps, as above, which preserves the assumptions of the theorem (as we have just shown). Since we have only altered outcomes with finite size, conclusions about the survival probability carry over from the modified branching processes to the original ones.
§ GLOSSARY OF NOTATION

,   natural numbers with and without 0
L_j(G)   size of the jth largest component in the graph G
N_k(G), N_≥ k(G)   number of vertices in components with exactly/at least k vertices in the graph G
S_r(G)   rth order susceptibility of the graph G; see (<ref>)
S_r,n(G)   modified rth order susceptibility of the graph G; see (<ref>)
C_v(G)   (vertex set of) the component of a graph G containing a vertex v
C_W(G)   the union of C_v(G) over v ∈ W

For bounded-size rules:
   decision rule; see Section <ref>
() = {j_1,j_2}   indices of the vertices joined by  when presented with vertices v_1,…,v_ℓ in components of size c_1,…,c_ℓ; see Sections <ref> and <ref>
K   cut-off in the bounded-size rule 
 = _K   set {1,2,…,K,ω} of `observable' component sizes, where ω means size > K
G_i = G^_n,i   random graph after i steps of the process (often with i=tn)
t   time parameter (often corresponding to i/n)
   critical time; see (<ref>)
   the set of possible component sizes; see Section <ref>
   period of the rule; see Section <ref>

For the graph G_i = G^_n,i after i=tn steps:
_i   σ-algebra corresponding to the information revealed by step i; see Section <ref>
L_j(i)   size L_j(i)=L_j(G_i) of the jth largest component after i steps of the process
N_k(i), N_≥ k(i)   numbers N_k(i)=N_k(G_i) and N_≥ k(i)=N_≥ k(G_i) of vertices in components with exactly/at least k vertices after i steps of the process
S_r(i)   rth order susceptibility S_r(i)=S_r(G_i) after i steps of the process; see (<ref>)
ρ(t)   scaling limit of L_1, i.e., limit of L_1(tn)/n; see (<ref>)
ρ_k(t)   scaling limit of N_k, i.e., limit of N_k(tn)/n; see (<ref>)
s_r(t)   scaling limit of S_r, i.e., limit of S_r(tn)/n; see (<ref>)
ψ(t)   rate function in the decay of the component size distribution at time t (step tn); see Theorems <ref> and <ref>

For the two-round exposure near :
t_0, t_1   times with t_0 < < t_1: the main focus of this paper is on times t ∈ [t_0,t_1]; see (<ref>)–(<ref>)
i_0, i_1   steps i_0=t_0 n and i_1=t_1 n: we reveal information about the steps i_0 < i ≤ i_1 via a two-round exposure argument; see Section <ref> and (<ref>)
V_S, V_L   sets of vertices in Small and Large (size > K) components at step i_0; see Section <ref>
H_i   the `marked graph' after i steps; see Section <ref>
Q_k,r(i)   number of (k,r)-components after i steps of the process; see Section <ref> (note that the definition for k ≥ 1 and k=0 differs slightly, see also Sections <ref>–<ref>)
q_k,r(t)   scaling limit of Q_k,r, i.e., limit of Q_k,r(tn)/n; see Sections <ref> and <ref>
_i   random `parameter list' generated by the random graph G^_n,i after i steps; see (<ref>)
J_i=J(_i)   random graph constructed using the random `parameter list' _i; see Section <ref>

For the branching process comparison arguments:
   generally used for |t-|
Ψ=(log n)^2   a convenient cut-off size
   an arbitrary `parameter list'; see Definition <ref>
J=J()   random graph constructed using ; see Definition <ref>
=()   Poissonized random graph constructed using ; see Definition <ref>
_k,r   the set of (k,r)-components/hyperedges in ; see Section <ref>
_t   idealized branching process that approximates the neighbourhoods of the random graphs G^_n,tn and (_tn); see Section <ref>
^±_t   perturbed variants of a given t-nice parameter list  (see Definition <ref> for the definition of t-nice) which typically satisfy J(^-_t) ⊆ J() ⊆ J(^+_t); see Definition <ref> and Lemma <ref>
_t^±=_t^±()   dominating branching processes that approximate (from above and below) the neighbourhoods of the random graphs (^±_t); see Section <ref>
 n^2/3   convenient cut-off size at which we abandon certain domination arguments; see Theorems <ref>–<ref> and Lemma <ref>
http://arxiv.org/abs/1704.08714v1
{ "authors": [ "Oliver Riordan", "Lutz Warnke" ], "categories": [ "math.PR", "cond-mat.stat-mech", "math.CO", "05C80, 60C05, 90B15" ], "primary_category": "math.PR", "published": "20170427184903", "title": "The phase transition in bounded-size Achlioptas processes" }
Cartan ribbonization and a topological inspection

DTU Compute, Technical University of Denmark, 2800 Kongens Lyngby, Denmark. [email protected]
DTU Nanotech, Technical University of Denmark, 2800 Kongens Lyngby, Denmark. [email protected]
DTU Compute, Technical University of Denmark, 2800 Kongens Lyngby, Denmark. [email protected]

We develop the concept of Cartan ribbons together with a rolling-based method to ribbonize and approximate any given surface in space by intrinsically flat ribbons. The rolling requires that the geodesic curvature along the contact curve on the surface agrees with the geodesic curvature of the corresponding Cartan development curve. Essentially, this follows from the orientational alignment of the two co-moving Darboux frames during rolling. Using closed contact center curves we obtain closed approximating Cartan ribbons that contribute zero to the total curvature integral of the ribbonization. This paves the way for a particularly simple topological inspection – it is reduced to the question of how the ribbons organise their edges relative to each other. The Gauss–Bonnet theorem leads to this topological inspection of the vertices. Finally, we display two examples of ribbonizations of surfaces, namely of a torus using two ribbons, and of an ellipsoid using closed curvature lines as center curves for the ribbons.

Steen Markvorsen
September 30, 2018

§ INTRODUCTION

The approximation of surfaces by patch-works of planar parts has a long history of use in fundamental and applied mathematics. Foremost comes to mind the multifaceted applications of triangulations <cit.>. In the present work we develop a scheme for approximating a surface by the use of multiple developable surfaces. Part of the beauty of this approach is the relatively small number of developable stretches – ribbons – needed to approximate a given surface. Not to mention that the study of shapes and structures of developable surfaces is itself a classical subject that has intrigued mathematicians for centuries and has found numerous artistic applications in architecture and design, see <cit.>.

In the seventies K. Nomizu pointed out that the concept of (extrinsic) rolling can be understood as a kinematic interpretation of the (intrinsic) Levi-Civita connection and of the Cartan development of curves, see <cit.> and <cit.>. One derives simple expressions for the components of the corresponding relative angular velocity vector of the rolling, i.e. the geodesic torsion, the normal curvature, and the geodesic curvature of the given curve and its development, see <cit.>. For example, in conjunction with a plane, the rolling must propagate along a planar curve which has the same geodesic curvature as the given curve, see examples in <cit.>.

In recent years rolling has received a renewed wave of interest – in part because of its importance for robotic manipulation of objects <cit.>. For example, there has been an interest in understanding rolling from symmetry arguments <cit.> as well as purely geometrical considerations <cit.>. Also, the shapes known as D-forms are examples of surface structures that are formed by assembling several developable surfaces <cit.>.

The paper is organized as follows: In sections <ref> and <ref> we apply the notion of rolling as an alternative entrance to the construction of developable surface approximations.
We show how the method of rolling a surface along the planar Cartan development of a given curve on the surface produces a planar ribbon which – after isometric bending along the lines of the instantaneous rotation axes – will reproduce the surface approximation along the said curve. In other words, the rolling induces a local isometry between the flat approximation along the curve and the plane. Further, in section <ref> we discuss a specific measure of the local goodness of a given ribbon approximation. In section <ref> we then initiate the corresponding study of such approximations by establishing a precise calculation of the Euler characteristic of the surfaces via an inspection of the family of approximating ribbons. Finally, in sections <ref> and <ref>, we illustrate the approximation method by two concrete examples which show the ensuing Cartan ribbon approximations of a torus (along two trigonometric center curves) and of an ellipsoid (along six lines of curvature), respectively.

§ THE INITIAL SETTING

We consider two surfaces S and S̃ in ℝ^3. Let γ be a smooth, regular curve on S, γ: J=[0,α]→ S, such that γ(0)=(0,0,0). We equip γ with its Darboux frame field ℱ = {e, h, N}, defined as follows: for each t ∈ J we let N(t) denote a unit normal vector to S at γ(t), we let e(t) = γ'(t)/‖γ'(t)‖ denote the unit tangent vector of γ, and we let h(t) = N(t)× e(t). The frame ℱ then satisfies the following equations – see for example <cit.>:

[ e'(t); h'(t); N'(t) ] = ‖γ'(t)‖·[ 0 κ_g(t) κ_n(t); -κ_g(t) 0 τ_g(t); -κ_n(t) -τ_g(t) 0 ][ e(t); h(t); N(t) ],

where τ_g(t), κ_n(t), and κ_g(t) are the geodesic torsion, the normal curvature, and the geodesic curvature, respectively, of γ at γ(t). Since we are so far only considering local geometric entities, the surfaces S and S̃ need not be orientable, i.e. the frame ℱ and its properties – such as the signs appearing in (<ref>) – depend on the local choice of normal vector field N. In the final sections we will note a few consequences concerning the rolling and the corresponding ribbonization of non-orientable surfaces.

§.§ Moving S on S̃

Given a curve γ on S as above, we now consider smooth and regular curves γ̃ on the other surface S̃ such that the following initial compatibility and contact conditions are satisfied:

γ̃(0) = γ(0) = (0,0,0), γ̃'(0) = γ'(0), ‖γ̃'‖ = ‖γ'‖,

so that γ̃ has the same initial point and direction as γ and so that γ̃ has the same speed as γ for all t ∈ J. A framed motion of (S, γ) on S̃ is then defined as follows:

Let E^+(3) be the group of direct isometries of ℝ^3. A (1-parameter) framed motion g_t of (S, γ) on S̃ along γ̃ is a differentiable map J → E^+(3) such that for each t the map g_t is the isometry that maps γ(t) to γ̃(t), γ(t) + e(t) to γ̃(t) + ẽ(t), and γ(t) + N(t) to γ̃(t) + Ñ(t), where ẽ and Ñ are two of the members of the Darboux frame ℱ̃ = {ẽ, h̃ = Ñ×ẽ, Ñ} along γ̃ on S̃, defined in the same way as the frame ℱ along γ on S. The point γ̃(t) is called the contact point at instant t, and γ̃(J) is called the contact curve of the framed motion g_t of (S, γ) on S̃.

Since g_t is in particular an instantaneous isometry it is represented by x ↦ R_t x + c_t, where R_t∈ SO(3) is a rotation matrix and c_t a translation vector. The instantaneous framed motion is then given by the vector field V_t: x ↦ Ω_t(x - c_t) + c'_t, with Ω_t = R'_t R_t^𝖳, see <cit.>.
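The frame equations above constitute a linear ODE system for the frame vectors, so the Darboux frame can be recovered numerically once the three curvature functions and the speed are known. A minimal sketch using SciPy's initial value solver; the constant curvature data in the example are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp

def darboux_frames(kappa_g, kappa_n, tau_g, speed, F0, t_grid):
    # Integrate e' = v(kappa_g*h + kappa_n*N), h' = v(-kappa_g*e + tau_g*N),
    # N' = v(-kappa_n*e - tau_g*h); the rows of F are (e, h, N).
    def rhs(t, y):
        F = y.reshape(3, 3)
        A = speed(t) * np.array([[0.0,         kappa_g(t), kappa_n(t)],
                                 [-kappa_g(t), 0.0,        tau_g(t)],
                                 [-kappa_n(t), -tau_g(t),  0.0]])
        return (A @ F).ravel()
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), F0.ravel(),
                    t_eval=t_grid, rtol=1e-10, atol=1e-12)
    return sol.y.T.reshape(-1, 3, 3)    # frame (e, h, N) at each grid point

# Illustrative constant data (unit speed, kappa_g = 1, kappa_n = -1, tau_g = 0):
frames = darboux_frames(lambda t: 1.0, lambda t: -1.0, lambda t: 0.0,
                        lambda t: 1.0, np.eye(3),
                        np.linspace(0.0, 2 * np.pi, 100))
```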
As g_t is a framed motion, we have:

Let D_t be the matrix having e(t), h(t) and N(t) as coordinate column vectors (with respect to a fixed coordinate system in ℝ^3) and, similarly, let D̃_t be the matrix having ẽ(t), h̃(t) and Ñ(t) as coordinate column vectors (with respect to the same fixed coordinate system in ℝ^3). Then

R_t = D̃_t D_t^𝖳, c_t = γ̃(t) - R_t γ(t),

so that

g_t(x) = D̃_t D_t^𝖳(x - γ(t)) + γ̃(t).

The rotation D̃_t D_t^𝖳 maps the vector e(t) to ẽ(t), and N(t) to Ñ(t). The representation g_t(x) = R_t x + c_t is therefore given by (<ref>).

§.§ Rolling S on S̃

A framed motion g_t of (S, γ) on S̃ along γ̃ is said to be rotational if, for all t ∈ J, Ω_t is different from the zero matrix. At each time instant we can then find a unique vector ω_t≠ 0, the angular velocity vector, such that ω_t× x = Ω_t x for all x ∈ℝ^3. Based on the orientation of the angular velocity vector relative to the common tangent plane of g_t(S) and S̃, we introduce the following terminology for the instantaneous motion – which extends directly to the entire motion.

The instantaneous rotational framed motion g_t is a pure spinning if the angular velocity vector ω_t is orthogonal to the tangent plane T_γ̃(t)S̃, and a pure twisting if ω_t is proportional to the tangent vector ẽ(t). Finally, the motion g_t will be called a standard rolling if ω_t does not contain a spinning component and is not a pure twisting, i.e. a standard rolling of S on S̃ is characterized by the condition that there exist smooth functions a and b such that ω_t decomposes as follows for all t:

ω_t = a(t)·ẽ(t) + b(t)·h̃(t) + 0·Ñ(t), b(t) ≠ 0.

It turns out that a standard rolling of a given surface S on a plane gives a kinematic approach towards the construction of approximating developable ribbons, as presented below in section <ref>. To begin with, we observe the following result for the more general situation of rolling S on a general surface S̃:

With the setting introduced above, a framed motion g_t of (S, γ) on S̃ along γ̃ is a standard rolling if and only if the following conditions are satisfied for all t ∈ J:

κ_g(t) = κ̃_g(t), κ_n(t) ≠ κ̃_n(t),

where κ̃_g and κ̃_n denote the geodesic curvature and the normal curvature of γ̃, respectively.

As in proposition <ref>, R_t = D̃_t D_t^𝖳 and c_t = γ̃(t) - R_t γ(t). Then g_t(x) = R_t x + c_t, and so we can find the instantaneous motion V_t by computing Ω_t(x - c_t) + c'_t. Since c'_t = γ̃'(t) - R'_t γ(t) - R_t γ'(t) = -R'_t γ(t), for R_t maps γ'(t) to γ̃'(t), we obtain

V_t(x) = Ω_t(x - γ̃(t) + R_t γ(t)) - R'_t γ(t) = Ω_t x - Ω_t γ̃(t) + R'_t γ(t) - R'_t γ(t) = Ω_t(x - γ̃(t)),

where Ω_t = R'_t R_t^𝖳 = D̃'_t D̃_t^𝖳 + D̃_t D_t^'𝖳 D_t D̃_t^𝖳. If now we let

_t = ‖γ'(t)‖ [ 0 κ_g(t) κ_n(t); -κ_g(t) 0 τ_g(t); -κ_n(t) -τ_g(t) 0 ],

we have – from (<ref>) – that D̃'_t = D̃_t _t^𝖳 = -D̃_t _t (_t is skew symmetric) as well as D_t^'𝖳 D_t = _t. Hence, if _t = _t - _t, that is

_t = [ 0 _t^1,2 _t^1,3; -_t^1,2 0 _t^2,3; -_t^1,3 -_t^2,3 0 ],

where

_t^1,2 = ‖γ'(t)‖·(κ_g(t) - κ̃_g(t)), _t^1,3 = ‖γ'(t)‖·(κ_n(t) - κ̃_n(t)), _t^2,3 = ‖γ'(t)‖·(τ_g(t) - τ̃_g(t)),

the expression for Ω_t reduces to

Ω_t = D̃_t _t D̃_t^𝖳,

and the resulting angular velocity vector of the rolling is thence – with respect to the Darboux frame ℱ̃(t) = {ẽ(t), h̃(t), Ñ(t)} along γ̃ in S̃:

ω_t = (-_t^2,3, _t^1,3, -_t^1,2)_ℱ̃(t) = ‖γ'(t)‖·(-τ_g(t) + τ̃_g(t), κ_n(t) - κ̃_n(t), -κ_g(t) + κ̃_g(t))_ℱ̃(t).

By comparing (<ref>) with (<ref>) we see that the conditions (<ref>) are necessary and sufficient for g_t to be a standard rolling.
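The proposition translates directly into computation: given the contact points and the two Darboux frames (as column matrices), the rigid motion g_t is assembled in two lines. A minimal sketch; the function name is ours:

```python
import numpy as np

def framed_motion(gamma, D, gamma_tilde, D_tilde):
    # Proposition: R_t = D~_t D_t^T and c_t = gamma~(t) - R_t gamma(t), where
    # the columns of D_t and D~_t are the Darboux frames (e, h, N) and
    # (e~, h~, N~) at the contact point.
    R = D_tilde @ D.T                 # sends e -> e~, h -> h~, N -> N~
    c = gamma_tilde - R @ gamma
    return lambda x: R @ x + c        # the isometry g_t

# Usage: g = framed_motion(p, D, p_tilde, D_tilde); then g(p) equals p_tilde.
```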
In passing we note – for later use – that (<ref>) and proposition <ref> immediately give the coordinates of the pulled-back angular rotation vector ω̂_t = R_t^𝖳 ω_t with respect to the frame ℱ(t) for a standard rolling:

ω̂_t = ‖γ'(t)‖·(-τ_g(t) + τ̃_g(t), κ_n(t) - κ̃_n(t), 0)_ℱ(t).

The important special case in which S̃ is a plane is covered by the following corollary:

If S̃ is a plane, then the motion g_t is a standard rolling if and only if

κ_g(t) = κ̃_g(t), κ_n(t) ≠ 0.

The instantaneous angular rotation vector ω_t and its pull-back ω̂_t are correspondingly – in ℱ̃(t) and ℱ(t), respectively:

ω_t = ‖γ'(t)‖·(-τ_g(t), κ_n(t), 0)_ℱ̃(t), ω̂_t = ‖γ'(t)‖·(-τ_g(t), κ_n(t), 0)_ℱ(t),

where now ℱ̃(t) = {ẽ(t), ẽ_3×ẽ(t), ẽ_3} is the co-moving frame in the plane with constant normal vector field ẽ_3 along γ̃.

§ DEVELOPABLE CARTAN SURFACE RIBBONS

In this section we show that the rolling discussed above serves as a tool for obtaining a flat developable approximation of the surface S along γ. This is an alternative to constructing developable approximations via envelopes of tangent planes along γ, see <cit.>. In the recent work <cit.> osculating developable surfaces and their singularities have been studied, see also <cit.>. It will follow from the condition (<ref>) that the approximating surface is free of singularities in a neighbourhood of γ, see theorem <ref> below.

We first consider the notion of ruled surfaces, since developable surfaces constitute a special subcategory of those:

Let w_- and w_+ denote two positive functions on the given t-interval J, let I = [-w_-(t), w_+(t)], and let V denote the corresponding parameter domain in ℝ^2. A parametrized ruled surface (with boundary) r: V →ℝ^3 based on the center curve γ is determined by a non-vanishing vector field β along γ:

r(t,u) = γ(t) + u·β(t), t ∈ J, u ∈ I.

We will assume that β is a unit vector field along γ and that the surface r is regular, i.e. its partial derivatives are linearly independent for all u in the interval [-w_-(t), w_+(t)], t ∈ J. Regularity implies in particular that β(t) ≠ ± e(t) for all t ∈ J. Moreover, the surface r(V) is flat (with Gaussian curvature zero at all points, i.e. developable) precisely when the following condition is satisfied – see <cit.>:

β' · (β× e) = 0.

If r(V) is eventually to be constructed so that it becomes a flat approximation of S along γ, we need to find a regular parametrization r such that r(V) is developable and has the same normal field N as S along γ. It means that we need to determine the vector function β so that it fulfills (<ref>), (<ref>), and β· N = 0. The desired vector function β is precisely (modulo length and sign) the previously encountered pulled-back angular velocity vector ω̂_t along γ associated with the rolling of S along γ̃ on a plane, see <cit.>:

Let γ denote a smooth curve on a surface S and let ℱ = {e, h, N} be the corresponding Darboux frame field along γ. Suppose that the normal curvature function κ_n for γ on S never vanishes. Then there exists a unique developable surface which contains γ and which has everywhere the same tangent plane as S along γ. It is parametrized as follows:

r(t,u) = γ(t) + u·ω̂_t/‖ω̂_t‖, u ∈ [-w_-(t), w_+(t)], t ∈ J,

where ω̂_t denotes the pulled-back angular velocity vector:

ω̂_t = κ_n(t)· h(t) - τ_g(t)· e(t), ‖ω̂_t‖ = √(κ_n^2(t) + τ_g^2(t)).

Write β in terms of its coordinate functions β· e and β· h, substitute into equation (<ref>) and apply equation (<ref>) to express the derivatives of e and h. Then

β· e/β· h = -τ_g/κ_n,

and the result follows upon normalization of the solution β.
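Numerically, the theorem yields an immediate recipe for sampling the Cartan surface ribbon from discrete data along the center curve. A sketch, assuming the frame vectors and curvature functions have been sampled on a common parameter grid:

```python
import numpy as np

def cartan_ribbon(gamma, e, h, kappa_n, tau_g, u):
    # r(t,u) = gamma(t) + u * omega_hat/|omega_hat|, omega_hat = kappa_n*h - tau_g*e.
    # gamma, e, h: (m, 3) samples along the center curve; kappa_n, tau_g: (m,);
    # u: (k,) ruling parameters.  Returns ribbon points of shape (m, k, 3).
    w = kappa_n[:, None] * h - tau_g[:, None] * e       # ruling directions
    w /= np.linalg.norm(w, axis=1, keepdims=True)       # normalize omega_hat
    return gamma[:, None, :] + u[None, :, None] * w[:, None, :]
```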
The ruling directions of the developable surface are thus given by the instantaneous angular velocity vector of the rolling. The developable surface, which is parametrized by (<ref>) – and which is therefore approximating the surface S – will be called the Cartan surface ribbon along γ on S. As is already suggested by the name, the Cartan surface ribbon can be developed isometrically into a planar ribbon: the associated Cartan planar ribbon for γ on S – which is defined along γ̃ in the plane – is now determined via (<ref>) in the proposition below, which also establishes the isometry between the two Cartan ribbons.

An isometry from the Cartan surface ribbon onto the associated Cartan planar ribbon is realized along the development curve γ̃ in the following way, which is in precise accordance with the previously found rolling of S along γ on the plane with contact curve γ̃. We simply map the point r(t,u) to the point

r̃(t,u) = γ̃(t) + u·ω_t/‖ω_t‖ = γ̃(t) + u·ω_t/√(κ_n^2(t) + τ_g^2(t)).

We let β(t) = ω_t/‖ω_t‖ and β̂(t) = ω̂_t/‖ω̂_t‖. Since κ_g(t) = κ̃_g(t), all the scalar products between two vectors chosen from {γ'(t), β̂(t), β̂'(t)} are the same as the scalar products between the corresponding two vectors chosen from {γ̃'(t), β(t), β'(t)}. It follows that the two first fundamental forms for r(t,u) and r̃(t,u), respectively, have identical coordinate functions. The two ribbons r and r̃ are therefore isometric.

In all of the above constructions we have assumed that the center curves in question have nowhere vanishing normal curvature. For a number of cases the normal curvature does vanish, such as on planar faces of polyhedra and through lines of inflection on generalized cylindrical faces. The method of approximation by ribbons can be extended to these surfaces by cut and paste along the singular rulings under the condition that the geodesic torsion also vanishes together with the normal curvature. For example, for surfaces containing planar domains, the ribbonization can be continued over any edge of the planar domain if the ruling of the ribbon agrees with the given edge. For polyhedral surfaces this is always possible. A ribbon with planar patches will also be denoted a Cartan ribbon, see the later section on Euler's polyhedral formula.

§.§ Curvature and parallel transport

In view of our observations concerning the rolling of S on the plane, it now makes sense to say that the Cartan surface ribbon can be rolled isometrically onto the associated Cartan planar ribbon. This is induced in the way just described by the rolling of S on the plane, which itself is represented by the pulled-back angular velocity vector field ω̂ along γ in S and by ω along γ̃ in the plane. Accordingly, once the center curve γ̃ in the plane has been constructed using κ̃_g(t) = κ_g(t), the approximating Cartan surface ribbon can be obtained via the inverse rolling of the Cartan planar ribbon backwards into contact with the surface S along γ. An early hint of this connection is presented in <cit.>.

The key object for the actual construction of the approximating Cartan surface ribbon along a given curve γ on S is thence the planar curve γ̃, which may itself be constructed either by rolling, or – simpler – by integrating the curvature function κ_g of γ, but in the plane, in the well known way, see <cit.>: Suppose γ̃ has (signed) curvature κ_g and speed ‖γ̃'‖ = v. Then, modulo rotation and translation in the plane, we have:

γ̃(t) = ∫_0^t v(t̂)·(cos(φ(t̂)), sin(φ(t̂))) dt̂, where φ(t̂) = ∫_0^t̂ v(û)·κ_g(û) dû.
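In practice the development curve can be evaluated by cumulative quadrature of the sampled speed and geodesic curvature. A sketch using the trapezoidal rule; the closing example is a sanity check only:

```python
import numpy as np

def planar_development(t, v, kappa_g):
    # phi(t) = integral of v*kappa_g; gamma~(t) = integral of v*(cos phi, sin phi).
    # All integrals via the trapezoidal rule on the grid t; returns (m, 2) points.
    dt = np.diff(t)
    phi = np.concatenate(([0.0],
          np.cumsum(0.5 * dt * (v[1:] * kappa_g[1:] + v[:-1] * kappa_g[:-1]))))
    g = v[:, None] * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    return np.concatenate((np.zeros((1, 2)),
          np.cumsum(0.5 * dt[:, None] * (g[1:] + g[:-1]), axis=0)))

# Sanity check: constant kappa_g = 1 at unit speed develops to a unit circle.
t = np.linspace(0.0, 2 * np.pi, 400)
circle = planar_development(t, np.ones_like(t), np.ones_like(t))
```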
The curve γ̃ appears as a special – and simple – example of a Cartan development, as already alluded to via the reference to Nomizu's initial work, see <cit.>. This is why the ensuing developable ribbons are called Cartan surface ribbons. To be a bit more specific concerning our simple 2-dimensional setting, we recall in particular the important geodesic curvature equivalence used above:

We let the tangent space T_γ(0)S at γ(0) represent the plane S̃ into which we want to construct the Cartan development curve corresponding to the given curve γ in S. For each t we consider the parallel transport of the tangent vector γ'(t) along γ from the point γ(t) to the point γ(0), see <cit.>:

X(t) = Π_γ^γ(t), γ(0)(γ'(t)).

The Cartan development γ̃ of γ in T_γ(0)S is then:

γ̃(t) = ∫_0^t X(u) du.

From this construction it follows in particular that any tangent vector γ̃'(t_1) = X(t_1) is itself parallelly transported (in the usual Euclidean sense) along γ̃ in the tangent space T_γ(0)S (which may be canonically identified with T_γ̃(0)S̃) from (0,0) to γ̃(t_1), and the (geodesic) curvature function of the planar curve γ̃ is equal to the geodesic curvature function of the original curve γ in S:

κ̃_g(t) = κ_g(t) for all t.

Suppose Y is any parallel vector field along the curve γ on the surface S; then the angle θ(t) = ∠(Y(t), γ'(t)) gives the geodesic curvature of γ via θ'(t) = κ_g(t). Since the same holds true by construction along the development curve γ̃ in the tangent plane, we get θ̃(t) = θ(t), so that κ̃_g = κ_g.

§.§ A measure of local goodness of Cartan ribbon approximations

A measure of the goodness of a single ribbon approximation along a given center curve γ can be obtained from the following construction. Close to γ the surface S can be parametrized as a graph surface 'over' the Cartan ribbon in the direction of the normal field N of the ribbon as follows:

S_ε: σ(t,u) = γ(t) + u·(κ_n(t) h(t) - τ_g(t) e(t)/√(κ_n^2(t) + τ_g^2(t))) + f(t,u)· N(t), t ∈ J, u ∈ [-ε, ε],

where f denotes the corresponding 'height' function and ε is everywhere smaller than each of the width functions w_- and w_+ for all t ∈ J along γ. (Both width functions have positive minima since they are positive and J is closed.) The function f clearly has f(t,0) = f'(t,0) = 0 for all t ∈ J, so that f(t,u) = 1/2 f''(t,0)· u^2 + O(u^3) for each t ∈ J and for all u ∈ [-ε, ε].

The domain in space that is enclosed 'between' the surface S_ε and the Cartan ribbon is thence parametrized as follows:

𝒟_ε: R(t,u,w) = γ(t) + u·(κ_n(t) h(t) - τ_g(t) e(t)/√(κ_n^2(t) + τ_g^2(t))) + w· f(t,u)· N(t),

where t∈ J, u ∈ [-ε, ε], w ∈ [0, 1]. We consider the volume of the domain 𝒟_ε as a natural local measure of goodness ℳ(γ, ε) of our approximation of the surface S, i.e. of the approximation by the single Cartan ribbon to the tubular neighborhood 𝒮_ε of width 2ε along the center curve γ:

ℳ(γ, ε) = Vol(𝒟_ε) = ∫_J ∫_-ε^ε ∫_0^1 |(R'_t× R'_u)· R'_w| dt du dw.

We then have the following evaluation of ℳ(γ, ε).

The goodness ℳ(γ, ε) of the single ribbon approximation along a unit speed center curve γ can be expressed in terms of the curvature functions H(t), K(t), κ_n(t) and τ_g(t) along γ as follows:

ℳ(γ, ε) = 1/3 ε^3·∫_J F(H(t), K(t), κ_n(t), τ_g(t)) dt + O(ε^4),

where

F(H, K, κ_n, τ_g) = κ_n^2/(κ_n^2 + τ_g^2)^3/2·|(τ_g^2 - κ_n^2 + 2Hκ_n - 2τ_g√(2Hκ_n - K - κ_n^2))|.
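Before turning to the proof we note that, once the curvature data have been sampled along γ, the leading term of ℳ(γ, ε) is a one-dimensional quadrature. A minimal sketch; the clamping of the radicand only guards against round-off, since 2Hκ_n - K - κ_n^2 equals the square of the geodesic torsion and is therefore non-negative:

```python
import numpy as np

def goodness_leading_term(t, H, K, kappa_n, tau_g, eps):
    # (eps^3 / 3) * integral over J of F(H, K, kappa_n, tau_g) dt, with F as in
    # the theorem; all arrays are sampled on the common parameter grid t.
    root = np.sqrt(np.maximum(2 * H * kappa_n - K - kappa_n**2, 0.0))
    F = (kappa_n**2 / (kappa_n**2 + tau_g**2)**1.5
         * np.abs(tau_g**2 - kappa_n**2 + 2 * H * kappa_n - 2 * tau_g * root))
    return (eps**3 / 3.0) * np.trapz(F, t)
```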
To prove the theorem, we use the parametrization of 𝒟_ε and the derivatives of the Darboux frame in (<ref>) to find that the volume element |(R'_t× R'_u)· R'_w| has the following leading term:

|(R'_t× R'_u)· R'_w| = |1/2 f''_uu(t,0)· u^2·κ_n(t)/√(κ_n^2(t) + τ_g^2(t))| + O(u^3).

The second derivative f''_uu(t,0) is precisely the normal curvature of the surface S in the direction of the ruling line of the Cartan ribbon at γ(t). It can thence be expressed by the curvature function values H(t), K(t), κ_n(t) and τ_g(t) at γ(t) along γ:

f''_uu(t,0) = κ_n/(κ_n^2 + τ_g^2) (τ_g^2 - κ_n^2 + 2Hκ_n - 2τ_g√(2Hκ_n - K - κ_n^2)).

Insertion into (<ref>) then gives:

ℳ(γ, ε) = ∫_J ∫_-ε^ε ∫_0^1 |(R'_t× R'_u)· R'_w| dt du dw = ∫_J ∫_-ε^ε |1/2· u^2·κ_n^2/(κ_n^2 + τ_g^2)^3/2 (τ_g^2 - κ_n^2 + 2Hκ_n - 2τ_g√(2Hκ_n - K - κ_n^2))| + O(u^3) dt du = 1/3 ε^3·∫_J |κ_n^2/(κ_n^2 + τ_g^2)^3/2 (τ_g^2 - κ_n^2 + 2Hκ_n - 2τ_g√(2Hκ_n - K - κ_n^2))| dt + O(ε^4) = 1/3 ε^3·∫_J F(H(t), K(t), κ_n(t), τ_g(t)) dt + O(ε^4).

Suppose that the center curve γ is a line of curvature on the surface S – as is the case for all the chosen center curves on the ellipsoid considered in section <ref> below. Then the geodesic torsion of γ vanishes identically and the corresponding local measure of goodness of the Cartan ribbon along γ reduces to:

ℳ(γ, ε) = 1/3 ε^3·∫_J |κ_n(h(t))| dt + O(ε^4),

where κ_n(h(t)) denotes the normal curvature of S at γ(t) in the direction of h(t), which is orthogonal to γ'(t). This follows directly from equation (<ref>) and the fact that in this case we have f''_uu(t,0) = κ_n(h(t)).

Another consequence of theorem <ref> is the following result, which is not surprising, since we are approximating the surface S with flat Cartan ribbons:

Suppose that the Gaussian curvature K of S vanishes identically along γ. Then ℳ(γ, ε) = O(ε^4).

This follows readily by inserting the following ingredients into the formula (<ref>):

K(t) = 0, H(t) = κ_1(t)/2, κ_2(t) = 0, τ_g(t) = κ_1(t)cos(θ(t))sin(θ(t)), κ_n(t) = κ_1(t)cos^2(θ(t)),

where θ(t) denotes the angle between γ'(t) and the principal direction of curvature for S at γ(t) corresponding to the principal curvature κ_1(t).

Although theorem <ref> is but an initial step towards a global measure of goodness for the total number of individual Cartan ribbons (that are in use for the overall approximation of a given full surface), it may still be possible and reasonable to apply the formula (<ref>) – or a proper refinement of it – for each ribbon and then simply sum the values of goodness over the number of ribbons. Naturally, the u-domain of integration should then not just be [-ε, ε] but rather the full width-interval [-w_-(t), w_+(t)] along the respective ribbons. Moreover, good single ribbon approximations (and their higher dimensional analogues) represent an interesting alternative basis and tool for principal geodesic analysis, and for polynomial regression in general, on surfaces and in Riemannian manifolds, see <cit.> and <cit.>. In particular, in that setting the notion of Riemannian polynomials has also been studied via rolling maps, see <cit.> and <cit.> – much in the same vein as we have employed the concept of rolling in the present work.

§.§ The local cut-off procedure for neighboring ribbons

We consider two neighboring center curves γ^1 and γ^2 for two neighboring Cartan ribbons and prove the existence of their intersection curve, which eventually constitutes the wedge (or cut-off) curve in space 'between' the two center curves, see the examples in sections <ref> and <ref>.
The wedge thereby defines the actual width functions w^2_- and w^1_+ that are used for the final ribbonization of the surface S. In this setting w^1_+ is to be thought of as the cut-off function for γ^1 in the direction towards γ^2, and w^2_- is the cut-off function for γ^2 in the (opposite) direction from γ^2 towards γ^1.

The wedges are well-defined for each pair of neighboring Cartan ribbons, i.e. the cut-off functions exist, provided the corresponding center curves are pairwise sufficiently close to each other.

We sketch the proof as follows. Suppose that r_1 is the ruling line at some point p on γ^1. We must show that (for close-by neighboring center curves) there is a corresponding ruling line r_2 at some point of γ^2 so that the two rulings intersect in a (cut-off) point, i.e. so that w^1_+ and w^2_- exist. Obviously, this does not necessarily work for center curves that are far apart from each other, so we need that the center curves are sufficiently close.

We may assume that the two center curves are neighboring coordinate curves in a special local parametrization of a tubular neighborhood around γ^1. Specifically, without lack of generality, we parametrize the neighborhood by a smooth vector function ρ with parameters t and v such that the following properties are satisfied: ρ(t,0) = γ^1(t); ρ(t,ε) = γ^2(t); every t-coordinate curve has nonvanishing normal curvature, κ_n(ρ'_t(t,v)) ≠ 0; and ρ'_v(t_0, v) is in the direction of the ruling line of the Cartan ribbon along the curve ρ(t, v) at the point ρ(t_0, v) for all v ∈ [0, ε]. This latter condition means that the curve q_t_0(v) = ρ(t_0, v), v ∈ [0, ε], has tangent lines that are ruling lines of the respective Cartan ribbons along the center curves ρ(t, v) for each v in the said interval.

If the curve q_t_0 has nonzero curvature at v = 0 (and possibly also nonzero torsion there), then an intersection argument in the ambient space shows that there exists a ruling line of the Cartan ribbon at some point ρ(t_2, ε) along the center curve ρ(t, ε) close to ρ(t_0, ε), i.e. for t_2 close to t_0, which intersects the ruling line r_1 based at p = ρ(t_0, 0) – provided ε is sufficiently small. If the torsion of the curve q_t_0 vanishes in the interval v ∈ [0, ε], so that it is planar in that interval, then t_2 = t_0 and the intersection takes place in that plane. The same argument holds if q_t_0 has zero curvature at v = 0 but, say, has positive curvature for v ∈ ]δ, ε]. Moreover, if q_t_0 has zero curvature in an interval v ∈ [0, ε[, then q_t_0 is a straight line in that interval and every point on the ruler from p is also a point on a ruler for the ribbon with center curve ρ(t, v_0) for any v_0 in that interval, and the corresponding cut-off value for w^1_+ can be chosen to be any value in ]0, ε[.

§ GAUSS–BONNET INSPECTION

We consider a finite (piecewise smooth) ribbonization ℛ = ∪_i=1^R ℛ_i, R = #ℛ, of S, all of whose Cartan surface ribbons ℛ_i, i = 1, …, R, are closed in the sense that they are based on closed smooth center curves on S, as in figures <ref> and <ref> below. Let 𝒲 = ∪_i 𝒲_i denote the system of (piecewise smooth) wedge curves stemming from the ribbonization ℛ, and let 𝒲̃ denote the corresponding planar wedge curve system of the Cartan planar ribbons ℛ̃.
The end (cut-)curves of the planar ribbons – that are typically needed in order to obtain the planar representation of the ribbons – are not considered part of 𝒲̃.

We now apply the Gauss–Bonnet theorem to surfaces which are ribbonized by such circular ribbons. The system of wedge curves 𝒲 consists of curves with possible branch-points, where three or more ribbons come together, and with possible end-points, where one ribbon is locally bent around the wedge (and is thus in contact with itself), as in the top and bottom ribbon on the ellipsoid in Fig. <ref> below. We may assume without lack of generality that the branch points and end points are all isolated and regular in the sense that the wedge curves in a neighbourhood of such points can be mapped diffeomorphically to a corresponding star configuration in ℝ^3 with a number of straight line segments issuing from a common vertex. The branch-points and end-points are called vertices of the ribbonization ℛ. The vertex set is denoted by 𝒫 and the number of vertices by P = #𝒫. The number of segments issuing from a given vertex p_k in the vertex set 𝒫 is called the degree, d_k = d(p_k), of the vertex. If a ribbon has an isolated cone point then this is also a vertex, and – in accordance with the above definition – we count its degree as 0.

The Euler characteristic χ(ℛ) of a ribbonization ℛ is

χ(ℛ) = 1/2 ∑_k=1^P (2 - d_k).

The total curvature contributions for the Gauss–Bonnet theorem can be divided into three parts:

a) Surface contributions: the surface integral of the Gauss curvature K,

C_ℛ∖𝒲∪𝒫 = ∫_ℛ∖𝒲∪𝒫 K dμ = 0.

b) Wedge contributions: the integral of the geodesic curvature along the edges of the Cartan ribbons excluding the vertex points,

C_𝒲∖𝒫 = ∑_q=1^R ∫_𝒲_q∖𝒫_q κ^𝒲_q(s) ds.

c) Vertex contributions: the sum of the angular deficits (angular defects) at the vertices, i.e. 2π minus the sum of the inner angles β(j,k) at the vertices. The inner angles are replaced by the corresponding outer angles α(j,k) via α = π - β, where α∈ [-π, π] and β∈ [0,2π]:

C_𝒫 = ∑_k=1^P (2π - ∑_j=1^d_k β(j,k)) = ∑_k=1^P (2π - ∑_j=1^d_k (π - α(j,k))) = ∑_k=1^P (2π - π d_k) + ∑_k=1^P ∑_j=1^d_k α(j,k).

Summarising: adding these contributions together we find:

2π·χ(ℛ) = ∑_q=1^R ∫_𝒲_q∖𝒫_q κ^𝒲_q(s) ds + ∑_k=1^P ∑_j=1^d_k α(j,k) + ∑_k=1^P (2π - π d_k).

By a permutation of the outer angles in the second term one can group them according to the ribbon wedge curves they appear on. This is possible because each of the kinks on the ribbons is encountered precisely once in the summation. Further, as the ribbons are closed, it follows that their wedge integral and the corresponding sum of outer angles together cancel to zero. Hence one is left with the equality:

χ = 1/2 ∑_k=1^P (2 - d_k).

As mentioned, the set of vertices, 𝒫, is a feature of the three-dimensional mesh of wedge curves. Wedge curves from two, most commonly distinct, ribbons follow each other until a vertex point, where, e.g., three ribbons come together. We summarise the different vertex characters in Table <ref>.

The ribbon formula in Theorem <ref> is valid for orientable as well as non-orientable surfaces. To see this we only need to show that the formula does not change whether the ribbons are regular closed ribbons or Möbius strip-ribbons. This follows as a consequence of lemma <ref> below.

A conventional cylindrical closed ribbon (without vertices) and a Möbius strip-ribbon both contribute zero to the total curvature integral.

It follows simply by cutting the ribbons along a ruler.
In this case, the ribbons can be fully flattened and have a total curvature contribution of 2π, which is equal to the sum of the four artificial angles introduced by the cutting along the ruler. The difference between a ribbon that is orientable and one that is not consists of a simple permutation of the four inner angles of the cut.

For the explicit extension of theorem <ref> to Cartan ribbonizations that include ribbons with open center curves, it is sufficient to count the number N_O of such ribbons, and equation (<ref>) becomes:

χ = 1/2 ∑_k=1^P (2 - d_k) + N_O.

A necessary and sufficient criterion for the correct representation of the topology of the surface S by a given ribbonization is the following: for each ribbon there exists a homeomorphism which maps the ribbon to a domain on the surface such that

* the contact structure (edges and vertices) between the individual ribbons is preserved;

* the full surface S is covered precisely once by the images of the ribbons.

For ribbonizations with sufficiently narrow ribbons, i.e. with small cut-off functions w_- and w_+, such homeomorphisms can for example be obtained via normal projection (along the orthogonal lines to S) of the ribbons into the surface.

§.§ From ribbon inspection to the Euler polyhedral formula

We consider a polyhedron Q and apply the conventional notation, i.e. F, E and V denote the number of faces, of edges, and of vertices, respectively, of the polyhedron. To apply the ribbon formula (<ref>) we need to cover the polyhedron with closed ribbons. One can cover each one of the F faces by a closed ribbon with a (flat) vertex covering the intrinsic part of the face polygon. With this choice there are then F new such virtual vertices, all with degree zero. We therefore have the total number of ribbon vertices P = V + F and ∑_k=1^P d_k = 2·E. Hence we recover the well known polyhedron formula from the ribbon formula:

χ(Q) = 1/2 ∑_k=1^P (2 - d_k) = (2(V+F) - 2E)/2 = V - E + F.

§ AN UNKNOT-BASED CARTAN RIBBONIZED TORUS

This example is concerned with the ribbonization of the torus

𝒯^2: σ(u,v) = ((2+cos(u))·cos(v), (2+cos(u))·sin(v), sin(u)), (u, v) ∈ℝ^2,

using the following two closed curves as center curves (see Fig. <ref>):

γ_1(t) = σ(3t, t), t ∈ [-π, π],
γ_2(t) = σ(3t, t + π/3), t ∈ [-π, π].

The corresponding two Cartan surface ribbons are then constructed (with constant and equal width functions) along the two curves, using the parametrization recipe in (<ref>). They are displayed on the right in Fig. <ref>. The ribbons are then widened in ℝ^3 in the direction of ±ω̂ until intersection with their respective neighbour ribbons. In the present example the planar ribbons are constructed via the planar center curves γ̃ from (<ref>), using the geodesic curvature function of the curves (<ref>) on the torus, see figure <ref>.

The intersection width functions are obtained numerically by solving the intersection equation for each value of t along the center curves, see Fig. <ref>. Once the cut-off widths w_± of the Cartan surface ribbons have been determined, the corresponding Cartan planar ribbons (with the same width-functions w_±(t)) are finally constructed from the planar center curve with the same geodesic curvature as the original center curve on the surface. In this particular case both Cartan planar ribbons are identical – one of them is displayed in Fig. <ref>.
§.§ Inspection of the ribbonized torus

The number of vertices of the above ribbonization is 0, and hence, according to equation (<ref>), we immediately get the Euler characteristic χ = 0 for the torus.

§ CURVATURE LINE BASED RIBBONIZATIONS OF AN ELLIPSOID

A curvature line parametrization of the ellipsoid with half axes √(a) > √(b) > √(c) > 0 is obtained as follows, see <cit.> and <cit.>:

σ(u,v) = (±√(a(a-u)(a-v)/((a-b)(a-c))), ±√(b(b-u)(b-v)/((b-a)(b-c))), ±√(c(c-u)(c-v)/((c-a)(c-b)))),

where u ∈ (b, a) and v ∈ (c, b). This particular parametrization of the ellipsoid is shown in the leftmost display in Fig. <ref>. As shown on the display, the coordinate (curvature) lines of this parametrization extend smoothly from one octant to a neighbouring octant except at the 4 umbilical points on the ellipsoid corresponding to parameter values u → b and v → b.

Such curvature line ribbonizations are interesting, partly because they give nontrivial illustrations of the simple measure of goodness established in corollary <ref>, and partly because they also clearly highlight the significant umbilical points. The umbilics on the ellipsoid considered here correspond to the four endpoints of the wedge segments that appear on the top cap and on the bottom cap – both visible in the second display from the left in figure <ref>.

§.§ Inspection of the ellipsoid

The ellipsoid has 4 vertices – corresponding to the 4 umbilical points – each of degree one, d_k = 1, and each therefore contributing one-half to the Euler characteristic, see equation (<ref>).

§ COMPARISON WITH CLASSICAL TOPOLOGICAL INSPECTIONS

As illustrated above, the topology of the surface can be read off from a ribbonization – in fact often in an easier way than from a triangulation. In this section we will briefly compare the above inspection with the methods of Morse and Poincaré–Hopf, based on inspections of Morse height-functions and their corresponding vector fields, respectively.

Consider a Morse height-function f on a surface S and choose center curves for a ribbonization among level curves of f. Since the saddle points of f are isolated, the center curves can be chosen to be arbitrarily close and yet with tangents avoiding asymptotic directions, so that such ribbonizations exist and have the same topology as the surface. Moreover, as a third perspective, the gradient of f on S represents a vector field whose indices also count its topology.

Based on a Morse height-function, these three topological inspections are all based on countings of minima, maxima, and saddle points. Clearly, the final summations give the same result when applying Table <ref> below. In the case of a torus with its classical Morse height function, see <cit.>, the corresponding ribbonizations all have one minimum, one maximum (both with degree d_k = 0) and two saddle points (with degrees d_k = 4), so that the sum is 1/2 ∑_k=1^P (2-d_k) = 0, as expected. An interesting Morse height function for the non-orientable Boy's model of ℝP^2 in ℝ^3, that may likewise be used for center curves of a ribbonization, is presented by U. Pinkall in <cit.>. This particular ribbonization has 4 vertices of degree d_k = 0 and 3 vertices of degree d_k = 4, so that χ = 1.
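The bookkeeping in all of these inspections is easy to mechanize. A minimal sketch of the ribbon formula, checked against the vertex data quoted in this and the previous sections:

```python
def euler_characteristic(vertex_degrees, n_open_ribbons=0):
    # Ribbon formula: chi = (1/2) * sum_k (2 - d_k) + N_O.
    return sum(2 - d for d in vertex_degrees) / 2 + n_open_ribbons

# The examples discussed in the text:
assert euler_characteristic([]) == 0                  # torus, two closed ribbons
assert euler_characteristic([1, 1, 1, 1]) == 2        # ellipsoid, four umbilics
assert euler_characteristic([0, 0, 4, 4]) == 0        # torus via a Morse function
assert euler_characteristic([0] * 4 + [4] * 3) == 1   # Boy's model of RP^2
```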
This follows from defining the standard rollings as rigid motions in ℝ^3 that are conditioned partly via their instantaneous rotation vectors and partly via the obvious condition of contact between the mentioned track curves on the respective surfaces, i.e. common speed of the contact point along the tracks and common tangent planes at the instantaneous point of contact.Surfaces are then approximated by a mesh of ribbons. Rolling a surface on a plane and using the Cartan developments of curves allow us to construct developable ribbons that have common tangent planes everywhere along the curve of contact on the surface. In this way we may approximate the surface not just by one such developable surface but by a full set of ribbons. In short, the surface is ribbonized by flat ribbons which have center-curve contact with the surface. This is a clear difference in comparison with the much used method of triangulations, which typically only give discrete point-contact with the surface. In the same way as for triangulations, defining a measure of “goodness” of a Cartan ribbonization is dependent on the actual application. Different methods for designing surfaces by developable patches within a desired global error bound have been developed in <cit.>. For Cartan ribbonizations, this is an interesting problem, which we have addressed by introducing a local measure of goodness for the approximation of the surface along a single ribbon.Concerning the global structure of the approximations, we present a particularly simple topological inspection of the ribbonized surfaces, which gives the Euler characteristic of the ribbonization – and thence also of the surface, if the ribbonization is fine enough. The ensuing topological formula for the Gauss–Bonnet theorem involves only the vertices of the ribbonization and their degrees. This complements the classical inspections of topology stemming from Morse theory and from the Poincaré-Hopf formula, which also amount to summing over critical point indices. If we organise the ribbonization of a given surface according to level curves of a Morse height function, then we obtain the direct correspondence shown in Table <ref>.The intriguing relations between the kinematics of rolling and the geometry of developable surfaces clearly carries many more assets for future work than what we cover in the present paper. As indicated above, already the study of ribbonizations could well pave new ways for refined analyses of physical, geometrical, and topological properties of surfaces. Not to mention the potentials of their higher dimensional analogues. Possible practical applications are manifold and appear in such diverse fields as robotics, architecture, design, shape analysis, and modern engineering. See for example the followingworks on rolling spherical robots <cit.>, roof panelling <cit.>, statistical geometric regression analysis <cit.>, and the manufacturing of clothes <cit.>. 99 schumaker1993 Schumaker LL.1993 Triangulations in CAGD.IEEE Computer Graphics and Applications 13, 47-52. lawrence2011 Lawrence S.2011 Developable surfaces: Their history and application.Nexus Network Journal 13, 701-714.nomizu1978 Nomizu K.1978 Kinematics and differential geometry of submanifolds.Tôhoku Math. Journ. 30, 623-637.Kobayashi1963 Kobayashi S, Nomizu K.1963 Foundations of Differential Geometry I,Interscience Publishers, New York. 
Tunçer Y, Sağel MK, Yayli Y. 2007 Homothetic motion of submanifolds on the plane in 𝔼^3. Journal of Dynamical Systems & Geometric Theories 5, 57-64.
Cui L, Dai JS. 2010 A Darboux-frame-based formulation of spin-rolling motion of rigid objects with point contact. IEEE Transactions on Robotics 26, 383-388.
Molina MG, Grong E. 2014 Geometric conditions for the existence of a rolling without twisting or slipping. Communications on Pure and Applied Analysis 13, 435-452.
Izumiya S, Otani S. 2015 Flat approximations of surfaces along curves. Demonstratio Mathematica 48, 217-241.
Hananoi S, Izumiya S. 2017 Normal developable surfaces of surfaces along curves. Proceedings of the Royal Society of Edinburgh Section A: Mathematics, pp. 1-27.
Raffaelli M, Bohr J, Markvorsen S. 2016 Sculpturing surfaces with Cartan ribbons. In Proceedings of Bridges 2016: Mathematics, Music, Art, Architecture, Education, Culture, pp. 457-460. Tessellations Publishing (Bridges Conference Proceedings).
Cui L, Dai JS. 2015 From sliding-rolling loci to instantaneous kinematics: An adjoint approach. Mechanism and Machine Theory 85, 161-171.
Chitour Y, Molina MG, Kokkonen P. 2015 Symmetries of the rolling model. Mathematische Zeitschrift 281, 783-805.
Tunçer Y, Ekmekci N. 2010 A study on ruled surfaces in Euclidean 3-space. Journal of Dynamical Systems & Geometric Theories 8, 49-57.
Chitour Y, Molina MG, Kokkonen P. 2014 The rolling problem: overview and challenges. In Geometric Control Theory and Sub-Riemannian Geometry (eds Stefani G, Gauthier JP, Sigalotti M, Boscain U, Saryehev A), pp. 103-122. Springer International Publishing.
Krakowski KA, Leite FS. 2016 Geometry of the rolling ellipsoid. Kybernetika 52, 209-223.
Sharp J. 2005 D-forms and developable surfaces. In Bridges 2005: Mathematical Connections between Art, Music and Science, pp. 503-510.
Wills T. 2006 D-Forms: 3D forms from two 2D sheets. In Bridges 2006: Mathematical Connections in Art, Music, and Science, pp. 503-510.
Orduño RR, Winard N, Bierwagen S, Shell D, Kalantar N, Borhani A, Akleman EA. 2016 A mathematical approach to obtain isoperimetric shapes for D-form construction. In Proceedings of Bridges 2016: Mathematics, Music, Art, Architecture, Education, Culture, pp. 277-284. Tessellations Publishing.
Gray A, Abbena E, Salomon S. 2006 Modern Differential Geometry of Curves and Surfaces with Mathematica, third edition. CRC Press, Boca Raton, Florida. ISBN: 978-0-58488-488-4.
do Carmo MP. 1976 Differential Geometry of Curves and Surfaces. Prentice Hall, Englewood Cliffs, New Jersey. ISBN: 978-0132125895.
Izumiya S, Saji K, Takeuchi N. 2017 Flat surfaces along cuspidal edges. Journal of Singularities 16, 73-100.
Singer IM, Thorpe JA. 1967 Lecture Notes on Elementary Topology and Geometry. Springer, New York, NY. ISBN: 0-387-90202-3.
Fletcher PT, Lu CL, Pizer SA, Joshi S. 2004 Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Transactions on Medical Imaging 23, 995-1005.
Hinkle J, Muralidharan P, Fletcher PT, Joshi S. 2012 Polynomial regression on Riemannian manifolds. Lecture Notes in Computer Science 7574, 1-14. Springer, Berlin. ISBN: 9783642337116.
Jupp PE, Kent JT. 1987 Fitting smooth paths to spherical data. Journal of the Royal Statistical Society Series C (Applied Statistics) 36, 43-46. ISSN: 14679876, 00359254.
Leite FS, Krakowski KA. 2015 Covariant differentiation under rolling maps. Preprint Number 08-22, Universidade de Coimbra.
Sotomayor J, Garcia R. 2008 Lines of curvature on surfaces, historical comments and recent developments. São Paulo Journal of Mathematical Sciences 2, 99-143.
Ni X, Garland M, Hart JC. 2004 Fair Morse functions for extracting the topological structure of a surface mesh. ACM Transactions on Graphics (TOG) 23, 613-622.
Milnor JW. 1965 Topology from the Differentiable Viewpoint. The University Press of Virginia, Charlottesville. ISBN: 0-691-04833-9.
Milnor JW. 1963 Morse Theory. Annals of Mathematics Studies 51. Princeton University Press.
Barth W, et al. 1986 Mathematical Models. Friedr. Vieweg & Sohn, Braunschweig. ISBN: 3-528-08991-1.
Liu Y-J, Lai Y-K, Hu S-M. 2007 Developable strip approximation of parametric surfaces with global error bounds. In Pacific Graphics 2007: 15th Pacific Conference on Computer Graphics and Applications, pp. 441-444.
Liu Y-J, Lai Y-K, Hu S. 2009 Stripification of free-form surfaces with global error bounds for developable approximation. IEEE Transactions on Automation Science and Engineering 6, 700-709.
Tang C, Bo P, Wallner J, Pottmann H. 2016 Interactive design of developable surfaces. ACM Transactions on Graphics 35, 1-12.
Rabinovich M, Hoffmann T, Sorkine-Hornung O. 2018 Discrete geodesic nets for modeling developable surfaces. ACM Transactions on Graphics 37, 16:1-16:17.
Bai Y, Svinin M, Yamamoto M. 2015 Motion planning for a pendulum-driven rolling robot tracing spherical contact curves. In IEEE International Conference on Intelligent Robots and Systems, pp. 4053-4058. IEEE. ISBN: 9781479999941, 9781479999934.
Mehrtens P, Schneider M. 2012 Bahn frei für die Architektur – Approximation von Freiform-Flächen durch abwickelbare Streifen. Stahlbau 81, Heft 12, 931-934.
Rose K, Sheffer A, Wither J, Cani MP, Thibert B. 2007 Developable surfaces from arbitrary sketched boundaries. In Eurographics Symposium on Geometry Processing (2007). The Eurographics Association.
{ "authors": [ "Matteo Raffaelli", "Jakob Bohr", "Steen Markvorsen" ], "categories": [ "math.DG", "70E18 (Primary) 53A17 (Secondary)" ], "primary_category": "math.DG", "published": "20170426114022", "title": "Cartan ribbonization and a topological inspection" }
§ INTRODUCTION

Plants produce an amazing variety of metabolites. Only a few of these are involved in “primary” metabolic pathways, thus common to all organisms; the rest, termed “secondary” metabolites, are characteristic of different plant groups<cit.>. In fact, “secondary” metabolites, despite a name originally meant to underline that they are inessential for primary plant processes<cit.>, are the result of the responses of different plants, through the course of evolution, to specific needs. Among such metabolites, volatile organic compounds (VOCs) play a dominant role<cit.>. Released by virtually any kind of tissue<cit.> and type of vegetation (trees, shrubs, grass, etc.) as green leaf volatiles, nitrogen-containing compounds and aromatic compounds, plant VOCs can be emitted constitutively<cit.> or in response to a variety of stimuli. They are in fact involved in a wide class of ecological functions, as a consequence of the interactions of plants with biotic and abiotic factors<cit.>. Plants use VOCs to perform indirect plant defence against insects<cit.>, to attract pollinators<cit.>, for plant-to-plant communication<cit.>, for thermo-tolerance and environmental stress adaptation (see more references in <cit.>), and to defend against predators<cit.>. According to their biosynthetic origin and chemical structure, plant volatiles can be grouped into isoprenoids or terpenoids, as well as oxygenated VOCs (OVOCs), such as methanol (CH_4O), acetone (C_3H_6O), acetaldehyde (C_2H_4O), methyl-ethyl-ketone (MEK, C_4H_8O) and methyl-vinyl-ketone (MVK, C_4H_6O)<cit.>; in a few cases, sulfur compounds (e.g. in Brassicales) and furanocoumarins and their derivatives (e.g. in Apiales, Asterales, Fabales, Rosales) are found<cit.>. Interestingly, VOC emissions strongly depend on the species (see <cit.> for references). Indeed, different plant lineages often adopt different chemical solutions to face the same problem; this is the case, for example, of the different odorous volatiles emitted by different flowers to solve the common problem of attracting the same type of pollinator, which usually visits a large number of plant species<cit.>.

In this paper we apply both complex network analysis<cit.> and community detection<cit.> to identify a possible hierarchy among the available species, on the basis of their similarities in terms of VOC emissions. Complex Network Theory<cit.> has already been successfully used in ecology to determine, for example, the stability and robustness of food webs<cit.> with respect to the removal of one or more individuals from the network, and in biology to study the structure of protein interactions in the cell through the so-called protein interaction networks (PINs)<cit.>. Moreover, metabolic networks are used to study the biochemical reactions which take place inside living cells<cit.>. Biological networks have also found important applications in medicine<cit.>, for instance in the analysis of human disease comorbidity<cit.>, or in the study of the structural and functional aspects of the human brain, by defining the reciprocal interactions of the cerebral areas<cit.>. Nevertheless, applications of Complex Network Theory in botany are still scarce, except for some attempts at comparing different ecosystems in search of steady (i.e. “universal”) behaviours<cit.>.
Recent applications of graph theory in botany deal with assessing plant species similarities on the basis of both their diaspore morphological properties and their fruit-typology ecological traits<cit.>. Following the same approach, in this paper we perform a network analysis with the goal of identifying communities of “similar” species, starting from the volatiles they emit or, more generally, from the shared ways in which the species react to external wounding stimuli. For this purpose, data are represented by means of bipartite graphs, which are particularly suitable to study the relations between two different classes of objects and to group individuals according to the properties they share. More in detail, the vertices of a bipartite graph can be subdivided into two disjoint sets, such that every vertex of one set is connected only to vertices of the other set; no links are present between vertices belonging to the same set. In our case, the plant species and the volatiles they emit define the two independent sets of vertices of the bipartite graph built from the botanical data. In practice, two different graphs can be analysed: the first is made up of all plant species (as vertices), connected on the basis of the common properties they share, i.e., in this case, the VOCs emitted; the second is made up of VOCs, connected according to the plants that share the same emissions. Once the graphs are suitably deduced from the experimental data, community detection is a powerful method to classify the different species in a quantitative way, creating a taxonomic tree<cit.>.

The paper is organized as follows: in Section “Results and Discussion” we present the main outputs of the analysis conducted on the dataset, discuss their main implications and propose possible further developments of the present study. We refer the reader interested in more details about both data and methodology to Section “Methods”.

§ RESULTS AND DISCUSSION

The present research work focuses on a group of 75 volatiles emitted by 109 different plant species in basal conditions, in order to understand whether taxonomically related plants emit a similar VOC composition. To ensure that the analysis is robust and consistent, we measured the volatiles emitted by each plant species in three replicate experiments. We refer to the Section “Methods” for a detailed description of the dataset preparation. Complex network analysis is applied to the VOC dataset, represented as a bipartite network, in order to define metrics and uncover hidden statistical properties able to discriminate and classify plant taxonomy based on VOC patterns.

§.§ Data preprocessing

The 109 plant species analysed are representative of 56 families, and the dataset is quite homogeneous in terms of family percentages. The most copious families are: Asteraceae (8.26%), Solanaceae (6.42%), Rosaceae (6.42%), Fabaceae (5.5%), Brassicaceae (4.59%), and Polygonaceae (3.67%). All the other families are present at lower percentages. To evaluate the statistical structure of the data, we plotted for each protonated mass the emission recorded for all 109 plant species. Figure <ref> (empty blue bullets) shows the emission of the protonated masses PM149 (panel A) and PM205 (panel B), as two examples of the behaviour of the VOC records. Protonated masses are expressed as mass-to-charge (m/z) ratios.
From the chemical composition point of view, PM149 and PM205 belong to the terpene/sesquiterpene fragments (Tp/STp-f) and sesquiterpenes (STp) classes, respectively. The VOC series turn out to be characterized by the superposition of an irregular, abruptly changing pulsatile component and a slowly changing one. More in detail, zero values indicate the lack of emission of that specific VOC for the corresponding plants, and the flat and uniform plateau indicates a small emission of the same VOC. Finally, spike-like pulses, clearly emerging from the background, correspond to a very large emission of that VOC for a given plant. Figure <ref> suggests that both the protonated masses PM149 and PM205 are emitted in large quantity by just a few species. This behaviour turned out to be representative of the whole dataset (not shown). From a statistical point of view, the same result was confirmed by the presence of outliers inside each record, which can be easily visualized by the boxplot methodology<cit.> (see Section “Methods” for more details). Outliers are shown in Fig. <ref> (panels A, red dots), and they correspond to observations far from the sample mean. Since the behaviour was coherent for all the VOCs, we excluded that the outliers were merely a consequence of experimental errors. Rather, the protonated mass records are characterized by heavy-tailed distributions, as Fig. <ref> (panels B) shows: a few values lie in the tails of the absolute-frequency histograms of the sample data. Standardized values were employed in order to ensure the comparability of the results. Notwithstanding the clearly dominating behaviour of some species' emissions with respect to the other plants for a given VOC, the statistical procedure of taking into account just the highest recorded values (extreme values) turned out to be too restrictive. In fact, even a small emission of a protonated mass cannot be neglected from an experimental point of view: a low emission is still a signal from a wounded leaf, and it has to be taken carefully into account when comparing the behaviour of the several species with respect to an external wounding perturbation.

§.§ Basic network analysis

We considered two different ways of building the plants network, depending on the statistical measure used to represent the highly non-Gaussian behaviour of the series. In the first case, we set a fixed threshold for the signal intensity (1 normalized count per second, ncps) and considered significant all the emissions larger than it (graph G_1(V,E)). In the second case, we applied a more severe criterion and decided to take into account just the emissions above the third quartile of the corresponding statistical distribution, i.e. Q_3/4 (graph G_2(V,E)). Figure <ref> shows both approaches applied to PM149.1 (panel A) and PM205.1 (panel B). Red dots in both panels highlight values larger than 1, while cyan bullets represent the values exceeding Q_3/4. In both cases, a bipartite network was built, made up of V = 184 vertices subdivided into two layers: the first made up of the V_P = 109 plant species and the second composed of the V_PM = 75 emitted VOCs. By definition of a bipartite graph, connections are possible only between vertices belonging to the two different layers; no links are present among plants, nor among VOCs. Plant species networks are subsequently defined by considering as vertices the plant species in the database, i.e., as bipartite projections of both G_1(V,E) and G_2(V,E). Two vertices are connected if they share at least one common property, in other words, if both emit a significant amount of the same VOC (a minimal sketch of this construction is given below).
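As a minimal sketch of the construction just described, the following Python fragment builds both binary biadjacency matrices and the species-side projection. The emissions matrix here is a synthetic stand-in (the real measurements are not reproduced) and all variable names are ours; only the two significance criteria, the fixed 1 ncps threshold and the per-VOC third quartile, follow the text.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the measurements: a species-by-VOC matrix of
# normalized counts per second (ncps); the real dataset is 109 x 75.
emissions = rng.gamma(shape=0.3, scale=2.0, size=(109, 75))

# G_1: an emission is significant if it exceeds the fixed 1 ncps threshold.
A1 = (emissions > 1.0).astype(int)

# G_2: an emission is significant if it exceeds the third quartile Q_3/4
# of the distribution of that VOC over all species.
q75 = np.quantile(emissions, 0.75, axis=0)
A2 = (emissions > q75).astype(int)

# Species-side bipartite projection: P[i, j] counts the VOCs that species
# i and j both emit at a significant level (the link weight w_ij).
P = A2 @ A2.T
np.fill_diagonal(P, 0)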
For every network, we considered the size (number of edges), the order (number of vertices), the degree (average and distribution), the density (the ratio of the actual edges to the possible ones), the clustering and finally the community structure.

§.§.§ Threshold-based graph

The plants graph corresponding to the first method was created as a bipartite projection of graph G_1(V,E). In the resulting graph G^P_1(V_1,E_1), plants are interconnected on the basis of the common VOCs they emit. G^P_1(V_1,E_1) is made up of V_1 = V_P = 109 vertices (plant species) and E_1 = 5,886 edges. Species i and species j are linked if they share at least one common emitted protonated mass. The weight w_ij of each link e_ij is given by the total number of shared VOCs between species i and species j. G^P_1(V_1,E_1) is a fully connected graph, its density D = 2E_1/V_1(V_1-1) is equal to 1, and the degree of each node is equal to 108, which is also equal to the mean degree of the nodes (k = 1/V_1∑_i=1^V_1 k_i = 2E_1/V_1). Each vertex is connected to all the other vertices or, equivalently, each species emits at least one VOC in common with every other species. Such a network structure is poorly suited to extracting information about the dominant behaviour of one species with respect to the others in terms of their emissions. Concerning the link weights distribution, the maximum number of protonated masses shared by two species is 66, and on average species are connected by links of weight w_ij = 24, in agreement with the dense structure of the network.

§.§.§ Third-quartile-based graph

The plants graph corresponding to the second test was analogously constructed as the species-vs-species bipartite projection of graph G_2(V,E). Again, the commonly emitted VOCs determine the presence or absence of a (weighted) link between two nodes. G^P_2(V_2,E_2) is made up of V_2 = V_P = 109 vertices and E_2 = 2,343 edges. The links are fewer by construction: in this case, for each VOC only the emissions larger than Q_3/4 were considered significant, so the network construction entails a more severe pruning. The graph density reduces to 0.39, consistent with the fact that the graph is not fully connected. Rather, isolated vertices emerge, indicating the presence of plants which do not emit any of the measured VOCs at a high level. By removing them, the graph density increases to 0.73. In this case the majority of species share few emitted VOCs (the mean of the edge weights is around 5); on the contrary, some vertices are connected by heavy links (the maximum weight value is 67, similar to the previous case). Figure <ref> (panel A, black crosses) shows the network degree distribution P(k), representing the fraction of vertices with degree K > k; a log-line plot is chosen to display the degree complementary cumulative distribution function (CCDF). The graph strength distribution is also shown in Fig. <ref> (panel B, black crosses) in log-line scale. The strength s of a vertex corresponds practically to its weighted degree: it takes into account the total weight of the vertex connections, and it allows one to identify regions of high and low edge concentration inside an undirected graph. These basic metrics can be computed directly from the projection, as sketched below.
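Continuing the sketch above, the quantities quoted in this subsection (density, mean degree, vertex strength) can be recovered with networkx; node labels are row indices here, whereas in the real dataset they would be species names.

import networkx as nx

# Weighted plants graph from the projection P of the previous sketch;
# species that share no significant VOC remain as isolated nodes.
G = nx.from_numpy_array(P)

n, m = G.number_of_nodes(), G.number_of_edges()
density = 2 * m / (n * (n - 1))          # D = 2E / [V(V-1)]
mean_degree = 2 * m / n                  # <k> = 2E / V

# Vertex strength: the weighted degree s_i = sum_j a_ij w_ij.
strength = dict(G.degree(weight="weight"))
hub = max(strength, key=strength.get)
print(f"D = {density:.2f}, <k> = {mean_degree:.1f}, "
      f"strongest node = {hub} (s = {strength[hub]})")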
The maximum strength value is s_max = 1,624 and corresponds to the Lavandula spica L. (Lavender) species; the minimum is s_min = 27 and is common to the Humulus lupulus L. (Wild hop), Actinidia arguta (Siebold & Zucc.) (Hardy kiwi), Ficus benjamina L. (Weeping fig), Magnolia liliiflora (Desr.) (Japanese magnolia), and Diospyros lotus L. (Date-plum) species. Finally, Fig. <ref> (panel C) shows the local clustering coefficient, defined as the tendency of two vertices to be connected if they share a mutual neighbour. Taken as a whole, Fig. <ref> suggests that the plants network is not dominated by a few central nodes with a huge number of connections linking them to all the other minor vertices. Nevertheless, some species emit a large quantity of VOCs, and community detection algorithms are applied to identify them and the respective aggregating VOCs. The isolated nodes of graph G^P_2(V_2,E_2) were removed before performing this basic metrics analysis, for visual reasons: the degree and strength of an isolated node are equal to 0 by definition, and its clustering coefficient is not defined.

§.§.§ Selected-VOCs graph

A third test was performed on a reduced version of the original database. Certain VOCs which could be more strictly associated with the mechanical wounding performed during the sample measurements than with plant species-specific emissions were excluded. Indeed, certain compounds such as methanol, acetaldehyde, some C6 compounds, etc.<cit.> are produced by almost all plant species, but there is no common behaviour in terms of quality and quantity of the VOCs involved<cit.>; their inclusion in the database could lead to misinterpretation. Furthermore, other compounds that turned out to be less powerful in the aggregation features, as highlighted by the analyses described above, were removed from the dataset. As a result, a selection of 30 protonated masses was taken into account. In order to compensate for the filter introduced by the manual choice of the relevant VOCs, a threshold equal to 0 was used to distinguish between relevant and negligible emissions of each specific VOC. The corresponding bipartite network G_3(V,E) is made up of V = 139 vertices subdivided into two sets: V_P = 109, analogously to the previous graphs, and V_PM = 30. In order to study the plants network, the bipartite projection G^P_3(V_3,E_3) was analysed. The vertices are still V_3 = V_P = 109, while the edges are E_3 = 2,522, similar to the third-quartile-based graph. The graph density is 0.43 due to the presence of 28 isolated nodes, while it rises to 0.78 if they are removed. Concerning the basic graph metrics, Fig. <ref> (panel A, red crosses) shows the complementary cumulative degree distribution P(k) of G^P_3(V_3,E_3), while Fig. <ref> (panel B, red crosses) depicts the graph strength distribution; both figures are in log-line scale. The maximum network strength decreases to s_max = 746, but it still corresponds to the Lavandula spica L. (Lavender) species, which again emerges as the most connected node. On the other end, the minimum strength value is s_min = 23 for the Cyperus papyrus L. (Papyrus), Salicornia europaea L. (Glasswort), and Solanum quitoense Lam. (Naranjilla) species. Further, G^P_3(V_3,E_3) is characterized by a smaller range of strength values with respect to G^P_2(V_2,E_2), and a more restricted set of nodes seems to dominate the network behaviour. Nevertheless, the graph degree and strength distributions do not suggest the presence of a scale-free structure behind our data. Finally, Fig. <ref> (panel C, red crosses) shows the clustering coefficient of G^P_3(V_3,E_3); the behaviour is similar to the one observed for the G^P_2(V_2,E_2) graph. As for the G^P_2(V_2,E_2) graph, the isolated nodes were removed before performing this basic metrics analysis; analogously, the minimum strength is evaluated after excluding the isolated nodes, since the degree k, and thus the strength s, of an isolated node are equal to 0 by definition.
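The empirical complementary cumulative distributions shown in the figures can be tabulated in a few lines; this fragment reuses the graph G of the previous sketch.

import numpy as np

def degree_ccdf(degrees):
    # Empirical complementary cumulative distribution P(K > k).
    degrees = np.asarray(list(degrees))
    ks = np.unique(degrees)
    ccdf = np.array([(degrees > k).mean() for k in ks])
    return ks, ccdf

ks, ccdf = degree_ccdf(d for _, d in G.degree())
# A log-linear plot of (ks, ccdf), e.g. with matplotlib's semilogy, gives
# the same representation used in Fig. <ref> for degree and strength.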
§.§ Community detection analysis

§.§.§ Threshold-based and third-quartile-based graphs

A first attempt to group plants on the basis of the emitted VOCs was performed by applying community detection to both the dense graph G^P_1(V_1,E_1) and the third-quartile-based graph G^P_2(V_2,E_2). For both of them, subgraphs were obtained by filtering out a growing number of links, from the lowest to the highest weighted ones. A unit-based normalization was applied to the edge weights to limit their values to the [0, 1] range (the w^resc_ij parameter in Tab. <ref>). Four community detection algorithms were applied: (i) the Louvain or Blondel modularity optimization algorithm (BL), (ii) the fast greedy hierarchical agglomeration algorithm (FG), (iii) the walktrap community finding algorithm (WT), and (iv) the label propagation community detection method (LP). We refer to the Section “Methods” for a detailed description of the community detection methods. Notwithstanding some discrepancies in the results, depending on the algorithm optimization after pruning the network, two big communities emerge from the G^P_1(V_1,E_1) analysis; these turned out to be robust to algorithm changes and to the edge-weight filtering procedure (see Tab. <ref>), except for severe filters (rescaled weight parameter w_ij > 0.5 in Tab. <ref>). In that case, almost half of the graph nodes were filtered out, reducing the reliability of the related results as a consequence of a huge loss of information. On the contrary, when the heaviest links were pruned from the graph, the results were statistically comparable, meaning that the plants network is not dominated by a few big vertices acting as hubs of the whole system. The two uncovered communities contain 61.47% and 38.53% of the total number of species in the database, respectively. The situation improves when analysing the communities of the G^P_2(V_2,E_2) graph. Figure <ref> is a representation of the G^P_2(V_2,E_2) plants network: the size of each node is proportional to its weighted degree, the thickness of each link connecting two nodes i and j is proportional to the link weight w_ij, and node colours refer to cluster membership. In this case, two big clusters emerge from a basic community detection. They contain 44 and 31 species, i.e. 40.4% and 28.4% of the species present in the dataset, respectively (yellow and aqua clusters in Fig. <ref>). The Brassicaceae family starts to be clearly grouped in a third, small cluster (only 6 species, accounting for 5.5% of the species dataset; violet cluster of Fig. <ref>), except for the Brassica oleracea L. var. botrytis species (Cauliflower), which belongs to another community (yellow cluster in Fig. <ref>). By construction, 28 isolated nodes emerged (not shown in Fig. <ref>), corresponding to species which did not share any of the measured VOCs with the other plants; isolated nodes account for 25.7% of the total number of species. Again, the results follow from the simultaneous application of more than one methodology: the findings proved to be independent of the applied methodology and were considered robust and reliable from a statistical point of view. A sketch of such a multi-algorithm comparison is given below.
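The following fragment, using python-igraph (which implements all four methods), illustrates the comparison; it reuses the weighted projection matrix P from the earlier sketches, and the variable names are ours. Isolated species are removed first, as done in the text.

import igraph as ig

g = ig.Graph.Weighted_Adjacency(P.tolist(), mode="undirected")
g.delete_vertices(g.vs.select(_degree=0))     # drop the isolated species

partitions = {
    "BL": g.community_multilevel(weights="weight"),              # Louvain
    "FG": g.community_fastgreedy(weights="weight").as_clustering(),
    "WT": g.community_walktrap(weights="weight").as_clustering(),
    "LP": g.community_label_propagation(weights="weight"),
}
for name, part in partitions.items():
    print(f"{name}: {len(part)} communities, Q = {part.modularity:.3f}")

Only the partitions on which the different algorithms agree would then be retained, in the spirit of the robustness check described above.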
Hereafter, the composition of every cluster is summarised, together with the protonated masses that the species share at the graph community level:

* cluster 1: 31 species (28.4% of the total species in the database) grouped in 21 families; prevailing families: Rosaceae, Asteraceae, Fabaceae, Ebenaceae, Plantaginaceae, and Solanaceae. Two VOCs in particular are responsible for this partitioning: PM27 (hydrocarbons, Hyd) and PM73 (acids, A) (20 species), followed by PM55 (aldehyde fragments, Ald-f), PM89 (esters, E), PM115 (acids, A) (19 species), and PM53 (fragments, f), PM81 (aldehyde fragments, Ald-f) (18 species). In general, the most informative VOCs for this cluster are compounds belonging to several chemical classes. Notice that from m/z = 123 (PM123) to m/z = 205 (PM205), where peaks deriving from terpenes, sesquiterpenes and their fragments are found, the emissions are null for all the species. One species can emit more than one VOC, so each species can be counted more than once when assessing how many species share the same protonated mass emission. The Gossypium herbaceum L. (Cotton), Plantago lanceolata L. (Plantain), and Inula viscosa L. (Inula) species are among the highest weighted-degree nodes in Fig. <ref>.

* cluster 2: the biggest community, made up of 44 species (40.4% of the total number of species) grouped into 27 families; dominant families: Asteraceae, Apiaceae, Cannabaceae, Lamiaceae. The species belonging to this cluster emit, taken as a whole, a large amount of VOCs. They share in particular the emission of VOCs which are, or refer to, terpene compounds, which are among the principal odour-related molecules emitted by plant flowers and leaves. In detail, 28 species share PM123 and PM135, both terpene or sesquiterpene fragments (Tp/STp-f); 27 species share PM93 (Tp-f), PM95 (STp-f), PM105 (heterocyclic aromatic compounds, HeArC), PM109 (Tp-f), PM119 (Tp-f), PM121 (Tp-f), PM137 (Tp/STp-f), PM143 (ketones and aldehydes, K/Ald), PM149 (Tp/STp-f), PM163 (STp-f), PM205 (STp); 26 species share PM91 (hydrocarbons, Hyd), PM107 (HeArC), PM111 (aldehydes, Ald), PM153 (Tp-f). Accordingly, this community includes plant species characterized by an intense flavour, such as Lavandula spica L. (Lavender, a well known plant used for its flavour), Foeniculum vulgare Mill. (Fennel, an anise-flavoured spice), Crithmum maritimum L. (Samphire, a strongly flavoured sea fennel), and Liquidambar styraciflua L. (Sweetgum, commonly used as a flavour and fragrance agent). A more detailed description of cluster 2 is supplied hereafter.

* cluster 3: only 6 species (5.5% of the total species) from 3 families: Brassicaceae (the dominating family, with 4 species), Actinidiaceae, and Fabaceae. Interestingly, the Brassicaceae Cauliflower belongs to the previous community (i.e., to cluster 2, where species characterized by more intense odours and by the presence of terpene compounds are clustered). Indeed, Cauliflower is, among the Brassicaceae species included in the present study, one of the richest in VOCs and terpenes<cit.>. This is the most homogeneous community in terms of family composition. PM63, a typical sulfur compound (SC), is the most emitted VOC, being released by 5 species (4 of them belonging to the Brassicaceae family), followed by another sulfur compound, PM49, and by PM83 (alcohol fragments, Alc-f) (3 species) and PM87 (Ald/Alc). In particular, Brassica rapa L.
(Chinese cabbage) also emits PM85 (Alc-f), PM103 (esters, E), PM117 (Alc), PM129 (Alc), and PM143 (ketones and aldehydes, K/Ald). The latter protonated mass, tentatively identified as 2-Nonanone<cit.>, has already been reported in Chinese cabbage<cit.>. The emission of all the other VOCs is null for the whole species set.

* cluster 4: 28 isolated species (25.7% of the total species) belonging to 20 different families, dominated by Polygonaceae, Rosaceae, Solanaceae, Araceae, Fabaceae. They do not share any emitted VOC with other plants, since they do not release any protonated mass at all. This result has to be interpreted taking into account the G^P_2(V_2,E_2) construction procedure: only the emissions exceeding the Q_3/4 of the corresponding protonated mass distribution were considered relevant. In this sense those nodes are isolated from the rest of the graph and do not emit VOCs.

The previous results are summarized in Tab. <ref>, which shows the dominant families in each cluster and how many species belong to those families. The list of species present in each cluster is reported in Tab. <ref>. Cluster 2, besides being the biggest one, is made up of the species corresponding to the highest weighted-degree vertices in G^P_2(V_2,E_2). Those species act as highly connected nodes and share several VOCs with the neighbouring nodes; they correspond to the biggest yellow nodes in Fig. <ref>. Here we list the principal ones: Lavandula spica L. (Lavender), Foeniculum vulgare Mill. (Fennel), Crithmum maritimum L. (Samphire), Liquidambar styraciflua L. (Sweetgum), Chrysanthemum indicum L. (Chrysanth), Santolina chamaecyparissus L. (Cotton lavender), Curcuma longa L. (Turmeric), Cupressus sempervirens L. (Mediterranean cypress), Ocimum basilicum L. (Basil), Citrus x Aurantium L. (Bitter orange), Tetradenia riparia (Hochst.) Codd. (Ginger bush), Juniperus communis L. (Juniper), Artemisia vulgaris L. (Mugwort), Citrus x Limon L. (Lemon), Stevia rebaudiana (Stevia), Eucalyptus globulus L. (Eucalyptus), Quercus ilex L. (Holm oak), Hedera helix L. (Ivy). Other species with a similarly large emission of VOCs are present in cluster 1: the Gossypium herbaceum L. (Cotton), Plantago lanceolata L. (Plantain), and Inula viscosa L. (Inula) species are the most connected aqua nodes in Fig. <ref>. Cluster 3 (violet vertices in Fig. <ref>) turns out to be the most homogeneous in terms of family composition, since it groups species belonging mainly to the Brassicaceae family, characterized by the predominant emission of sulphur compounds.

§.§.§ Selected-VOCs graph

Community detection algorithms were applied to G^P_3(V_3,E_3) following the same procedure described for the G^P_2(V_2,E_2) graph. The VOC reduction resulted in a clearer picture of the reciprocal behaviour of the species in terms of emitted protonated masses. Besides the set of 28 isolated nodes, three big communities were detected. Figure <ref> shows the partitioning of the G^P_3(V_3,E_3) graph; the nodes are coloured according to their community membership. As for the G^P_2(V_2,E_2) bipartite projection graph, the biggest nodes correspond to the species which share several VOCs with the neighbouring species; analogously, edge weights are proportional to the number of VOCs shared by each couple of adjacent vertices. Again, cluster 2 (yellow nodes, Fig. <ref>) is made up of the highest weighted-degree nodes. In other terms, the species corresponding to yellow nodes are the most interconnected ones: Lavandula spica L. (Lavender), Foeniculum vulgare Mill.
(Fennel), Santolina chamaecyparissus L. (Cotton lavender), Crithmum maritimum L. (Samphire), Cupressus sempervirens L. (Mediterranean cypress), Ocimum basilicum L. (Basil), Liquidambar styraciflua L. (Sweetgum), Eucalyptus globulus L. (Eucalyptus), Juniperus communis L. (Juniper), Curcuma longa L. (Turmeric), Hedera helix L. (Ivy), Dahlia pinnata Cav. (Dahlia), Brassica oleracea L. var. botrytis (Cauliflower), Picea abies L. (Norway spruce), Tetradenia riparia (Hochst.) Codd. (Ginger bush), Apium graveolens L. (Celery), Stevia rebaudiana (Stevia), Artemisia dracunculus L. (Tarragon), Artemisia vulgaris L. (Mugwort), Quercus ilex L. (Holm oak). This result is fully in agreement with the previous one. Some highly connected nodes are also present in cluster 1 (aqua nodes, Fig. <ref>), such as, for example: Citrus x Aurantium L. (Bitter orange), Cannabis sativa L. (Hemp), Citrus x Limon L. (Lemon), Humulus lupulus L. var. Cascade (Common hop), Ruta graveolens L. (Rue), Calycanthus floridus L. (Carolina allspice) and Psidium guajava L. (Guava). Cluster 3 is still homogeneously made up of Brassicaceae species (violet vertices in Fig. <ref>). Hereafter the four communities are described in terms of dominating families and clustering protonated masses.

* cluster 1: the biggest community, made up of 37 species (33.9% of the total number of species) grouped into 23 families; dominant families: Cannabaceae, Polygonaceae, Sapindaceae, Asteraceae, Lauraceae, Magnoliaceae, Malvaceae, Martyniaceae, Rosaceae, and Solanaceae. This community is characterized by a high heterogeneity in terms of family composition. The species belonging to this cluster release in particular PM93 (Tp-f, 22 species), PM109 (Tp-f) and PM137 (Tp/STp-f) (26 species), and PM95 (STp-f), PM121 (Tp-f), PM123 (Tp/STp-f), PM149 (Tp/STp-f), PM205 (STp) (more than 20 species). The m/z listed above probably refer to terpene compounds, and almost all of them are found in plants belonging to cluster 2 of the previous analysis. Indeed, the present cluster 1 shares with the previous cluster 2 more than 51% of the plant species (Tab. <ref> and Tab. <ref>), including Citrus spp. In this community the species that release sulfur compounds (PM49 and PM63) are also found, such as Ruta graveolens L. (Rue), Inula viscosa L. (Inula), Psidium guajava L. (Guava), Gossypium herbaceum L. (Cotton), and Citrus x Aurantium L. (Bitter orange), which, together with Cannabis sativa L. (Hemp) and Citrus x Limon L. (Lemon), are among the most emitting species. Interestingly, species from the Brassicaceae family, typically rich in sulfur compounds<cit.>, are not included in this cluster.

* cluster 2: 25 species (22.9% of the total species in the database) grouped in 16 families; prevailing families: Asteraceae (5 species), Apiaceae, Lamiaceae, and Cupressaceae. This community is made up of the species which are the most active in terms of VOC emission, in agreement with the species gathered in cluster 2 of the previous analysis; see the yellow nodes in Fig. <ref> and Tab. <ref>. As an example, we just list the most interconnected nodes: Lavandula spica L. (Lavender), Foeniculum vulgare Mill. (Fennel), Santolina chamaecyparissus L. (Cotton lavender, known for its smell), Crithmum maritimum L. (Samphire), all found in cluster 2 of the previous analysis. Cauliflower is also found here. Again, a high heterogeneity characterizes the family distribution.
Accordingly, the species belonging to this cluster release some volatiles already highlighted for the previous cluster 2; in fact, the most released VOC is PM153 (Tp), emitted by 24 species, followed by PM93 (Tp-f), PM95 (STp-f), PM121 (Tp-f), PM123 (Tp/STp-f), PM149 (Tp/STp-f) (released by 23 species), and by PM109 (Tp-f), PM119 (Tp-f), PM133 (Tp), PM137 (Tp/STp-f), PM143 (K/Ald), PM151 (Tp/Tp-f), PM205 (STp) (emitted by more than 20 species). Except for the ketone PM143, they are all terpene compounds.

* cluster 3: 19 species (17.5% of the total species) from 13 families; prevailing families: Brassicaceae, Actinidiaceae, and Fabaceae. All these species emit in particular the sulphur compounds PM49 (SC) and PM63 (SC) (13 and 12 species, respectively), while just a few of them also release PM93, PM95, and PM153 (Tp-f, STp-f and Tp, respectively). The Brassica rapa L. (Chinese cabbage) species again stands out, being the only one which emits PM143 (K/Ald). This cluster is the most stable one and corresponds to cluster 3 of the previous analysis. It shows a homogeneous family composition, since it groups all the Brassicaceae species, except for the Brassica oleracea L. var. botrytis (Cauliflower) species, in agreement with the previous analysis.

* cluster 4: 28 isolated species (25.7% of the total species) belonging to 23 different families, dominated by Solanaceae, Araceae, Fabaceae, Rosaceae. As in the previous analysis on graph G^P_2(V_2,E_2), the isolated nodes correspond to species which do not emit any VOCs.

A detailed description of the plant families and of the species composition of each cluster of the G^P_3(V_3,E_3) graph is provided in Tab. <ref> and Tab. <ref>, respectively.

§.§.§ Features graph, G_3^PM

The second bipartite projection of graph G_3(V,E), i.e. the graph of VOCs G^PM_3(V_3b,E_3b), is shown in Fig. <ref>. The graph is made up of V_3b = 30 vertices (each corresponding to one protonated mass) and E_3b = 435 edges. Usually a bipartite graph is based on the representation of different individuals according to the common properties they share; here the emitted VOCs are the analogue of features, since the more two plants emit the same volatiles, the more similar they are. We chose to show only the results coming from the second bipartite projection of graph G_3(V,E), since we obtained similar results for G_2(V,E); graph G_1(V,E) is not considered since, from the previous analyses, it turned out to be less suitable to describe the data as a network. Colours here help the reader distinguish between the most and least interconnected VOCs. As for the species-based bipartite projection graph, some protonated masses are highly connected with their neighbourhoods. The highest value of the weighted degree is recorded for PM95 (s_max = 679), followed by PM93, PM109, PM121, PM149, PM135, PM123, PM137, PM205 (light blue vertices in Fig. <ref>). All these VOCs are shared by a large number of species, and they are terpene compounds; accordingly, they are responsible for the species grouping in the first two communities of graph G^P_3(V_3,E_3) (aqua and yellow clusters in Fig. <ref>), made up of species rich in such compounds. Indeed, terpenes are the largest and most assorted group of plant natural products, including hemiterpenes (C_5), monoterpenes (C_10), sesquiterpenes (C_15), homoterpenes (C_11 and C_16), and some diterpenes (C_20) and triterpenes (C_30), that are easily released into the atmosphere. The heaviest links of this projection can be extracted directly from the co-occurrence matrix, as sketched below.
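This fragment reuses the binary biadjacency matrix A2 of the earlier sketch; the VOC labels printed here are simple column indices, not the actual m/z values.

import numpy as np

# Features-side projection, F = A^T A: F[f, g] counts the species that
# emit both VOC f and VOC g at a significant level.
F = A2.T @ A2
np.fill_diagonal(F, 0)

# The ten heaviest links of the features graph.
iu = np.triu_indices_from(F, k=1)
order = np.argsort(F[iu])[::-1][:10]
for i, j, w in zip(iu[0][order], iu[1][order], F[iu][order]):
    print(f"VOC {i} -- VOC {j}: shared by {w} species")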
The largest numbers of species shared between two VOCs are observed for the following couples: PM93–PM95 and PM93–PM109 (48 and 46 species, respectively), followed by PM95–PM109, PM109–PM137, PM93–PM121, PM95–PM121, PM95–PM135, PM95–PM137, PM95–PM149, PM109–PM121, and PM93–PM123. The corresponding links are the thickest ones (highest link weights) in Fig. <ref>. In most cases, plants share two compounds belonging to the same chemical class; for example, PM95–PM109 is a couple of sesquiterpenes and/or sesquiterpene fragments, while PM93–PM123 is a couple of terpenes and/or terpene fragments. It is worth noting that sesquiterpenes have a biochemical pathway distinct from that of other hemiterpenes<cit.>, thus it is more likely that a plant species emits, simultaneously, two or more VOCs of the same class rather than a combination of VOCs of different classes. However, terpene biosynthesis is very complex<cit.> and uses many separate pathways, and cases of plants producing isoprene (the terpene building unit) but not other monoterpenes (and vice versa) have been frequently reported<cit.>. On the contrary, the two sulphur compounds PM49 and PM63, which largely determine the assembly of the violet cluster in Fig. <ref>, are small nodes, since the species they share are homogeneous in terms of family composition, but few. Among volatile organic sulfur compounds, dimethylsulfide (DMS, PM63) and methanethiol (MT, PM49) are two of the most frequent products of plant metabolism; their biosynthetic pathways share the role of a common lyase enzyme acting on dimethylsulfoniopropionate (DMSP), which is not widely distributed in terrestrial plants<cit.>. Finally, PM201 (Tp), PM169 (aldehydes, Ald, a product of monoterpene oxidation), and PM159 (acids/esters, Ac/Es) are some of the least interconnected VOCs.

§ CONCLUSIONS

Volatile organic compounds (VOCs), which represent a crucial component of a plant's phenotype<cit.>, have been analysed by means of a bipartite network methodology in order to classify plant species. In particular, several quantitative measures coming from Complex Network Theory<cit.> have been applied to uncover possible similarities between the species in terms of their VOC emissions. To ensure the reliability and robustness of the results, different classical and advanced community detection algorithms have been applied, and only the comparable results were retained. Moreover, the data have been pre-processed by means of both descriptive and quantitative statistical methods, to better focus on their behaviour. The VOC series, obtained by recording the emission content for each available species, show spike-like pulses (corresponding to a few species) emerging from a rather flat background signal: each VOC turns out to be emitted in very large quantity by only a few species, with respect to all the other species' emissions of the same protonated mass. After a preliminary test performed on the whole dataset, some VOCs were excluded. In fact, some volatiles, especially C6 compounds and acetaldehyde, can occur in response to external stress, including wounding; this should be taken into account when using these compounds for community detection analysis. Using the reduced dataset, community detection suggested the presence of 4 clusters. Two communities are made up of highly VOC-emitting species. We recall here the most interconnected nodes: Lavandula spica L. (Lavender), Foeniculum vulgare Mill. (Fennel), Santolina chamaecyparissus L.
(Cotton lavender), Crithmum maritimum L. (Samphire), Cupressus sempervirens L. (Mediterranean cypress), Ocimum basilicum L. (Basil) (for cluster 2); Citrus x Aurantium L. (Bitter orange), Cannabis sativa L. (Hemp), Citrus x Limon L. (Lemon), Humulus lupulus L. var. Cascade (Common hop), Ruta graveolens L. (Rue), Calycanthus floridus L. (Carolina allspice) and Psidium guajava L. (Guava) (for cluster 1). A third community clearly groups species belonging to the Brassicaceae family, turning out to be quite homogeneous in terms of family composition. Finally, a fourth community highlights all those species which, by network construction, do not share any VOC emission with the other species. See the previous Section “Community detection analysis” for more details. The second bipartite projection confirmed terpene compounds and sulphur compounds to be the two chemical classes most responsible for the species classification. Indeed, the chemistry of volatiles has been shown to be species-specific<cit.>; for example, species characterized by terpenes and nitrogen-containing compounds as floral volatiles are different from species releasing sulphur-containing volatiles<cit.>. Moreover, the terpene compounds emitted by plant species (the so-called “terpenome”<cit.>) are the major constituents of plant essential oils<cit.> and can be used to distinguish different species; in this study, although the exact chemical identification of the compounds involved is beyond our purpose, community detection highlighted two well-defined groups (clusters 1 and 2) of species that emit different terpene compounds. In conclusion, complex network analysis proves to be an advantageous methodology to uncover plant relationships, including those related to the way plants react to the environment in which they live. This result strengthens previous findings obtained by applying Complex Network Theory to plant morphological features<cit.>. A similar approach can be extended to different fields in the botanical framework, such as plant ecology, psychophysiology and plant communication.

§ METHODS

§.§ Data

PTR-ToF-MS was used in this study as the detector for the organic compounds emitted by the leaf samples. A full description of this tool, with its advantages and disadvantages, can be found elsewhere<cit.>. The compounds emitted by the different leaves were transported by the air stream and collided with the H_3O^+ reagent ions inside the drift tube. The analysis was carried out as follows: each leaf sample was placed into a 3/4 L glass jar (Bormioli, Italy) provided with a glass stopper fitted with two Teflon tubes connected, respectively, to the PTR-ToF-MS (8000, Ionicon Analytik GmbH, Innsbruck, Austria) and to the zero-air generator (Peak Scientific Instruments, USA). Each sample was obtained by cutting pieces of representative mature and healthy leaves from three different plant exemplars (5 g total weight). For each plant species, three replicates (three different jars) were evaluated. An overview of the plants used is shown in Tab. <ref> and Tab. <ref>, for a total of 109 species belonging to 56 plant families. Before each leaf sample analysis, the glass jar was exposed to 1 minute of purified air flux (100 sccm) to remove all the VOCs accumulated in the headspace since sample preparation; then, a blank air sample was taken and subsequently used for background correction.
All measurements were conducted in an air-conditioned room, with temperature and humidity set at 20 ± 3 °C and 65%, respectively<cit.>, and using the same PTR-ToF-MS instrumental parameters: drift pressure = 2.30 mbar, drift temperature = 60 °C, inlet temperature = 40 °C, drift voltage = 600 V, extraction voltage at the end of the tube (Udx) = 35 V, which resulted in an E/N ratio of 140 Td (1 Td = 10^-17 V cm^2). This setup allowed a good balance between excessive water cluster formation and product ion fragmentation<cit.>. Moreover, the inlet flux was set to 100 sccm. The internal calibration of the ToF spectra was based on m/z = 29.997 (NO^+), m/z = 59.049 (C_3H_7O^+) and m/z = 137.132 (C_10H_17^+) and was performed off-line after dead time correction; for peak quantification, the resulting data were corrected according to the duty cycle. Data were recorded with the TOF-DAQ software (Tofwerk AG, Switzerland); the sampling time for each channel of ToF acquisition was 0.1 ns, acquiring 1 spectrum per second, for a mass spectrum range between m/z 20 and m/z 220. The raw data were normalized to the primary ion signal, from counts per second (cps) to normalized counts per second (ncps), as described by Herbig et al.<cit.>. Data were filtered following the procedure used by Taiti et al.<cit.> and used for the statistical analysis. In this manner, a dataset composed of the mean mass spectra of each analysed sample was compiled. Finally, the tentative identification of the peaks was performed on the basis of the high mass resolution, allowing rapid identification of compounds with a high level of confidence<cit.>. Further characterization of VOCs belonging to certain chemical classes, such as terpenes, which are prone to fragmentation, was attempted using literature data on the fragmentation of standards during PTR-ToF-MS analysis<cit.>. A similar approach was adopted for the other identified compounds, e.g. following Papurello et al.<cit.> and Liu et al.<cit.> for sulfur compounds, Loreto et al.<cit.>, Brilli et al.<cit.>, Degen et al.<cit.>, and Wu et al.<cit.> for wounding-related VOCs, and Schwartz et al.<cit.> and Soukoulis et al.<cit.> for aldehydes, ketones and alcohols.

§.§ Descriptive statistics: boxplots

Boxplots are an intuitive graphical non-parametric method, first proposed by Tukey<cit.>, particularly suitable to visualize the distribution of continuous univariate data; no a-priori assumption is made on the underlying statistical distribution. Boxplots show information about data location and spread, starting from the estimation of the second quartile (or median, Q_2) and of the interquartile range (IQR), where IQR = Q_3 - Q_1, and Q_3 and Q_1 are the third and first quartiles, respectively. Boxplots are also known as box-and-whisker plots. The rectangular box is related to the data quartiles; more in detail, the left and right sides of the rectangle correspond to Q_1 and Q_3, respectively. The whiskers are lines extending from the box up to the lowest and highest observations not classified as outliers. It follows that the boxplot width visually shows the sample IQR, the band drawn inside the box represents the median, and as a whole the box is a measure of the data dispersion and skewness. On the contrary, there is no common definition for the ends of the boxplot whiskers. In the present work we adopt the following formalism: outliers are defined as those data points lying outside the range (Q_1 - 1.5 × IQR; Q_3 + 1.5 × IQR); extreme events are defined as those data points exceeding the range (Q_1 - 3 × IQR; Q_3 + 3 × IQR).
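A minimal, self-contained implementation of the fences just defined (Tukey's rule with k = 1.5 for outliers and k = 3 for extreme events); the input record is synthetic.

import numpy as np

def tukey_fences(x, k=1.5):
    # Fences (Q1 - k*IQR, Q3 + k*IQR): k = 1.5 flags outliers,
    # k = 3 flags the extreme events defined above.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

rng = np.random.default_rng(1)
x = rng.gamma(shape=0.3, scale=2.0, size=109)   # one synthetic VOC record
lo, hi = tukey_fences(x)
outliers = np.flatnonzero((x < lo) | (x > hi))
lo3, hi3 = tukey_fences(x, k=3)
extremes = np.flatnonzero((x < lo3) | (x > hi3))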
Several graphical solutions for boxplots are available nowadays, and generalized versions allow one to apply them also to skewed distributions, by ensuring a robust measure of the skewness in the determination of the whiskers<cit.>. We recall here that the quartiles are also called quantiles of order 1/4, 1/2, 3/4, or Q_1/4, Q_1/2, and Q_3/4, respectively; this second notation is used throughout the paper.

§.§ Building the graph: projection in the space of plants/VOCs

Data are represented as an undirected bipartite graph G(N,E), where every plant species p is connected to its features, i.e., in this case, the VOCs it emits. No connection is present within either of the two sets of nodes, i.e. among the plant species or among the recorded VOCs. A bipartite graph can also be described by a binary matrix A(p,f) whose element a_pf is 1 only if plant p shows feature f. The most immediate way to measure the correlation between species is to count how many VOCs the plant species share in terms of significant emissions, and similarly how many plants emit the same VOCs; we refer to the Basic network analysis subsection for a proper description of the methodology. In formulas, this corresponds to considering the matrix of species P(p,p) = AA^T and the matrix of volatile organic compounds F(f,f) = A^TA, i.e. the two bipartite projections of G(N,E). In the present work we focused first on the graph having the different plants as nodes, i.e. on the Plants graph G^P(N,E), whose edge weights are proportional to the number of VOCs commonly emitted by two plants. Second, in order to capture the predominant similarities in terms of VOC emissions, we analysed the second bipartite projection, i.e. the Features graph G^F(N,E), whose nodes represent the emitted VOCs; in this case edge weights are proportional to the number of plants sharing the same emitted compound.

§.§ Basic network analysis

As regards the network analysis, we computed some global and local basic metrics, described hereafter.

* Graph density (D) is defined as the ratio between the number of existing edges and the possible number of edges. Given an N-order network, the graph density is computed as D = 2E/N(N-1). Strictly connected to D is the graph average degree k = 1/N∑_i=1^N k_i = 2E/N, where k_i is the degree of vertex i, i.e. the number of edges incident to it.

* The network clustering coefficient (c) is the overall measure of clustering in an undirected graph, in terms of the probability that the adjacent vertices of a vertex are connected. More intuitively, the global clustering coefficient is simply the ratio between the number of triangles and the number of connected triples in the graph. The corresponding local metric is the local clustering coefficient, i.e. the tendency of two vertices to be connected if they share a mutual neighbour. In this analysis we used the local vertex-level quantity<cit.> defined in Eq. (<ref>): c_i^w = 1/s_i(k_i-1)∑_jh(w_ij+w_ih)/2 a_ij a_ih a_jh. The normalization factor 1/s_i(k_i-1) accounts for the weight of each edge times the maximum possible number of triplets in which it may participate, and it ensures that 0 ≤ c_i^w ≤ 1. This metric combines the topological information with the weight distribution of the network, and it is a measure of the local cohesiveness, grounded on the importance of the clustered structure, evaluated on the basis of the amount of interaction intensity actually found on the local triplets<cit.>; a direct implementation is sketched below.
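A direct, if naive O(n·k²), implementation of Eq. (<ref>) on a weighted adjacency matrix; note that the sum runs over ordered pairs of neighbours, so an unweighted triangle correctly yields c_i = 1. This is our own sketch, not code from the paper.

import numpy as np

def weighted_clustering(W):
    # Local weighted clustering of Barrat et al.:
    # c_i^w = 1/(s_i (k_i-1)) * sum_{j,h} (w_ij + w_ih)/2 * a_ij a_ih a_jh,
    # with the sum over ordered pairs (j, h) of neighbours of i.
    W = np.asarray(W, dtype=float)
    A = (W > 0).astype(float)
    k = A.sum(axis=1)                      # degree k_i
    s = W.sum(axis=1)                      # strength s_i
    c = np.zeros(len(W))
    for i in range(len(W)):
        nbrs = np.flatnonzero(A[i])
        acc = sum((W[i, j] + W[i, h]) / 2.0
                  for j in nbrs for h in nbrs if h != j and A[j, h])
        if k[i] > 1:
            c[i] = acc / (s[i] * (k[i] - 1))
    return c

# Sanity check: an unweighted triangle has c_i = 1 for every vertex.
print(weighted_clustering([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))  # [1. 1. 1.]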
* Network strength (s) is obtained, for each vertex, by summing up the weights of its adjacent edges<cit.>. This metric is a more significant measure of the network properties in terms of the actual weights, and it is obtained by extending the definition of the vertex degree k_i = ∑_j a_ij, with a_ij the elements of the adjacency matrix A of the network. In formulas, s_i = ∑_j=1^N a_ij w_ij.

§.§ Grouping plants from graphs: community detection analysis

Community detection essentially aims at determining a finite set of categories (clusters or communities) able to describe a data set according to the similarities among its objects<cit.>. More generally, hierarchy is a central organising principle of complex networks, able to offer insight into many complex network phenomena<cit.>. In the present work we adopted the following methods belonging to the complex networks framework (a direct computation of the modularity defined below is sketched after this list):

* The fast greedy (FG) hierarchical agglomeration algorithm<cit.> is a faster version of the previous greedy optimisation of modularity<cit.>. FG gives identical results in terms of the communities found; however, by exploiting some shortcuts in the optimisation problem and using more sophisticated data structures, it runs far more quickly, in time O(md log n), where d is the depth of the “dendrogram” describing the network community structure.

* The walktrap community finding algorithm (WT) finds densely connected subgraphs from an undirected, locally dense graph via random walks. The basic idea is that short random walks tend to stay in the same community<cit.>. Starting from this idea, WT builds a measure of similarity between vertices based on random walks, which captures the community structure of a network well and works at various scales. The computation is efficient, and the measure can be used in an agglomerative algorithm to compute the community structure of a network efficiently.

* The Louvain or Blondel method (BL)<cit.> is used to uncover modular communities in large networks requiring a coarse-grained description. The Louvain method is a heuristic approach based on the optimisation of the modularity parameter (Q) to infer hierarchical organization. Modularity (Eq. (<ref>)) measures the strength of the division of a network into modules<cit.>, as follows: Q = 1/2m∑_vw[A_vw - k_vk_w/(2m)]δ(c_v, c_w) = ∑_i=1^c(e_ii - a_i^2), where e_ii is the fraction of edges which connect vertices both lying in the same community i, and a_i is the fraction of ends of edges attached to vertices in community i; in formulas, e_ii = 1/2m∑_vw A_vwδ(c_v, c_w) and a_i = k_i/2m = ∑_j e_ij. Here A is the adjacency matrix of the network, c the number of communities, k_v = ∑_w A_vw the degree of vertex v, and n and m = 1/2∑_vw A_vw the number of graph vertices and edges, respectively. The delta function δ(i,j) is 1 if i = j, and 0 otherwise.

* The label propagation (LP) community detection method is a fast, nearly linear time algorithm for detecting community structure in networks<cit.>. Vertices are initialised with a unique label and, at every step, each node adopts the label that most of its neighbours currently have, through a process similar to an `updating by majority voting' in the neighbourhood of the vertex. Moreover, LP uses the network structure alone to run, requiring neither the optimisation of a predefined objective function nor a-priori information about the communities, thus overcoming the usual limitation of having communities which are implicitly defined by the specific algorithm adopted, without an explicit definition. In this iterative process, densely connected groups of nodes form a consensus on a unique label, and so form communities.
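For completeness, Q can be recomputed from Eq. (<ref>) independently of any library, as a consistency check; this sketch reuses the matrix P and the partitions dictionary from the earlier fragments, and the result should agree with the library's value up to its weight-handling convention.

import numpy as np

def modularity(W, membership):
    # Q = (1/2m) sum_vw [A_vw - k_v k_w / 2m] delta(c_v, c_w),
    # evaluated on a symmetric weight matrix W with zero diagonal.
    W = np.asarray(W, dtype=float)
    two_m = W.sum()                        # 2m: each edge counted twice
    k = W.sum(axis=1)                      # (weighted) degrees k_v
    same = np.equal.outer(membership, membership)
    return ((W - np.outer(k, k) / two_m) * same).sum() / two_m

keep = np.flatnonzero(P.sum(axis=1) > 0)   # the vertices kept in g above
W = P[np.ix_(keep, keep)]
Q = modularity(W, partitions["BL"].membership)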
Besides the complex-network community detection methodologies, a classic cluster analysis<cit.> based on dimensionality-reduction methods was also performed to ensure the robustness and reliability of the results, by rejecting those solutions that are not independent of the statistical methodology applied.

§ ACKNOWLEDGMENTS
The authors acknowledge support from EU FET Open Project PLEASED nr. 296582. GV and GC also acknowledge EU FET Integrated Project MULTIPLEX nr. 317532.

§ AUTHOR CONTRIBUTIONS STATEMENT
GV, EM, CT, GC and SM contributed equally to the analysis of the dataset and to the interpretation of the results of this analysis, both from the point of view of network theory and in terms of biological implications. They also contributed equally to the writing and reviewing of the manuscript.

§ ADDITIONAL INFORMATION
Competing financial interests. The authors declare no competing financial interests.

theis2003evolution: Theis, N. & Lerdau, M. The evolution of function in plant secondary metabolites. International Journal of Plant Sciences 164, S93–S102 (2003).
pichersky2000genetics: Pichersky, E. & Gang, D. R. Genetics and biochemistry of secondary metabolites in plants: an evolutionary perspective. Trends in Plant Science 5, 439–445 (2000).
dicke2010induced: Dicke, M. & Loreto, F. Induced plant volatiles: from genes to climate change. Trends in Plant Science 15, 115 (2010).
dudareva2006plant: Dudareva, N., Negre, F., Nagegowda, D. A. & Orlova, I. Plant volatiles: recent advances and future perspectives. Critical Reviews in Plant Sciences 25, 417–440 (2006).
penuelas2001complexity: Peñuelas, J. & Llusia, J. The complexity of factors driving volatile organic compound emissions by plants. Biologia Plantarum 44, 481–487 (2001).
holopainen2010leaf: Holopainen, J. K., Heijari, J., Oksanen, E. & Alessio, G. A. Leaf volatile emissions of Betula pendula during autumn coloration and leaf fall. Journal of Chemical Ecology 36, 1068–1075 (2010).
holopainen2010multiple: Holopainen, J. K. & Gershenzon, J. Multiple stress factors and the emission of plant VOCs. Trends in Plant Science 15, 176–184 (2010).
spinelli2011emission: Spinelli, F., Cellini, A., Piovene, C., Nagesh, K. M. & Marchetti, L. Emission and Function of Volatile Organic Compounds in Response to Abiotic Stress (INTECH Open Access Publisher, 2011).
mumm2003chemical: Mumm, R., Schrank, K., Wegener, R., Schulz, S. & Hilker, M. Chemical analysis of volatiles emitted by Pinus sylvestris after induction by insect oviposition. Journal of Chemical Ecology 29, 1235–1252 (2003).
dudareva2000biochemical: Dudareva, N. & Pichersky, E. Biochemical and molecular genetic aspects of floral scents. Plant Physiology 122, 627–634 (2000).
baldwin2006volatile: Baldwin, I. T., Halitschke, R., Paschold, A., Von Dahl, C. C. & Preston, C. A. Volatile signaling in plant-plant interactions: "talking trees" in the genomics era. Science 311, 812–815 (2006).
heil2010explaining: Heil, M. & Karban, R. Explaining evolution of plant communication by airborne signals. Trends in Ecology & Evolution 25, 137–144 (2010).
war2012mechanisms: War, A. R. et al. Mechanisms of plant defense against insect herbivores. Plant Signaling & Behavior 7, 1306–1320 (2012).
ruuskanen2009measurements: Ruuskanen, T. et al. Measurements of Volatile Organic Compounds - from Biogenic Emissions to Concentrations in Ambient Air. Ph.D. thesis, University of Helsinki, Faculty of Science, Department of Physics, Division of Atmospheric Sciences and Geophysics (2009).
agrawal2011current: Agrawal, A. A. Current trends in the evolutionary ecology of plant defence. Functional Ecology 25, 420–432 (2011).
berenbaum2008facing: Berenbaum, M. R. & Zangerl, A. R. Facing the future of plant-insect interaction research: le retour à la "raison d'être". Plant Physiology 146, 804–811 (2008).
llusia2002seasonal: Llusia, J., Penuelas, J. & Gimeno, B. Seasonal and species-specific response of VOC emissions by Mediterranean woody plant to elevated ozone concentrations. Atmospheric Environment 36, 3931–3938 (2002).
caldarelli2007scale: Caldarelli, G. Scale-Free Networks: Complex Webs in Nature and Technology. OUP Catalogue (2007).
Boccaletti2006175: Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D. U. Complex networks: Structure and dynamics. Physics Reports 424, 175–308 (2006). <http://www.sciencedirect.com/science/article/pii/S037015730500462X>
barrat2004architecture: Barrat, A., Barthelemy, M., Pastor-Satorras, R. & Vespignani, A. The architecture of complex weighted networks. Proceedings of the National Academy of Sciences of the United States of America 101, 3747–3752 (2004).
raghavan2007near: Raghavan, U. N., Albert, R. & Kumara, S. Near linear time algorithm to detect community structures in large-scale networks. Physical Review E 76, 036106 (2007).
newman2004finding: Newman, M. E. & Girvan, M. Finding and evaluating community structure in networks. Physical Review E 69, 026113 (2004).
ma2016wiener: Ma, J., Shi, Y., Wang, Z. & Yue, J. On Wiener polarity index of bicyclic networks. Scientific Reports 6 (2016).
li2013note: Li, X., Li, Y., Shi, Y. & Gutman, I. Note on the HOMO-LUMO index of graphs. MATCH Commun. Math. Comput. Chem. 70, 85–96 (2013).
cao2014extremality: Cao, S., Dehmer, M. & Shi, Y. Extremality of degree-based graph entropies. Information Sciences 278, 22–33 (2014).
Dunne01102002: Dunne, J. A., Williams, R. J. & Martinez, N. D. Food-web structure and network theory: The role of connectance and size. Proceedings of the National Academy of Sciences 99, 12917–12922 (2002). <http://www.pnas.org/content/99/20/12917.abstract> http://www.pnas.org/content/99/20/12917.full.pdf
Stelzl2005957: Stelzl, U. et al. A human protein-protein interaction network: A resource for annotating the proteome. Cell 122, 957–968 (2005). <http://www.sciencedirect.com/science/article/pii/S0092867405008664>
proulx2005network: Proulx, S. R., Promislow, D. E. & Phillips, P. C. Network thinking in ecology and evolution. Trends in Ecology & Evolution 20, 345–353 (2005).
barabasi2011network: Barabási, A.-L., Gulbahce, N. & Loscalzo, J. Network medicine: a network-based approach to human disease. Nature Reviews Genetics 12, 56–68 (2011).
leecomorbidity: Lee, D.-S. et al. The implications of human metabolic network topology for disease comorbidity. Proceedings of the National Academy of Sciences of the United States of America 105, 9880–9885 (2008).
stephan2000computational: Stephan, K. E. et al. Computational analysis of functional connectivity between areas of primate cerebral cortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences 355, 111–126 (2000).
caretta2008: Caretta Cartozo, C., Garlaschelli, D., Ricotta, C., M., B. & G., C. Quantifying the universal taxonomic diversity in real species assemblage. Journal of Physics A 41, 224012 (2008).
vivaldo2016networks: Vivaldo, G., Masi, E., Pandolfi, C., Mancuso, S. & Caldarelli, G. Networks of plants: how to measure similarity in vegetable species. arXiv preprint arXiv:1602.05887 (2016).
tukey1977exploratory: Tukey, J. Exploratory Data Analysis (Addison-Wesley, Reading, Mass., 1977).
vandervieren2004adjusted: Vandervieren, E. & Hubert, M. An adjusted boxplot for skewed distributions. COMPSTAT 2004, Proceedings in Computational Statistics (Springer, Heidelberg), 1933–1940 (2004).
loreto2006induction: Loreto, F., Barta, C., Brilli, F. & Nogues, I. On the induction of volatile organic compound emissions by plants as consequence of wounding or fluctuations of light and temperature. Plant, Cell & Environment 29, 1820–1828 (2006).
brilli2011detection: Brilli, F. et al. Detection of plant volatiles after leaf wounding and darkening by proton transfer reaction 'time-of-flight' mass spectrometry (PTR-TOF). PLoS One 6, e20419 (2011).
degen2004high: Degen, T., Dillmann, C., Marion-Poll, F. & Turlings, T. C. High genetic variability of herbivore-induced volatile emission within a broad range of maize inbred lines. Plant Physiology 135, 1928–1938 (2004).
wu2008comparison: Wu, J., Hettenhausen, C., Schuman, M. C. & Baldwin, I. T. A comparison of two Nicotiana attenuata accessions reveals large differences in signaling induced by oral secretions of the specialist herbivore Manduca sexta. Plant Physiology 146, 927–939 (2008).
van1991identification: Van Langenhove, H. J., Cornelis, C. P. & Schamp, N. M. Identification of volatiles emitted during the blanching process of Brussels sprouts and cauliflower. Journal of the Science of Food and Agriculture 55, 483–487 (1991).
geervliet1997comparative: Geervliet, J. B., Posthumus, M. A., Vet, L. E. & Dicke, M. Comparative analysis of headspace volatiles from different caterpillar-infested or uninfested food plants of Pieris species. Journal of Chemical Ecology 23, 2935–2954 (1997).
buhr2002analysis: Buhr, K., van Ruth, S. & Delahunty, C. Analysis of volatile flavour compounds by proton transfer reaction-mass spectrometry: fragmentation patterns and discrimination between isobaric and isomeric compounds. International Journal of Mass Spectrometry 221, 1–7 (2002).
pierre2011differences: Pierre, P. S. et al. Differences in volatile profiles of turnip plants subjected to single and dual herbivory above- and belowground. Journal of Chemical Ecology 37, 368–377 (2011).
dudareva2013biosynthesis: Dudareva, N., Klempien, A., Muhlemann, J. K. & Kaplan, I. Biosynthesis, function and metabolic engineering of plant volatile organic compounds. New Phytologist 198, 16–32 (2013).
sun2016my: Sun, P., Schuurink, R. C., Caissard, J.-C., Hugueney, P. & Baudino, S. My way: Noncanonical biosynthesis pathways for plant volatiles. Trends in Plant Science (2016).
lindfors2000biogenic: Lindfors, V. & Laurila, T. Biogenic volatile organic compound (VOC) emissions from forests in Finland. Boreal Environment Research 5, 95–113 (2000).
bentley2004environmental: Bentley, R. & Chasteen, T. G. Environmental VOSCs: formation and degradation of dimethyl sulfide, methanethiol and related materials. Chemosphere 55, 291–317 (2004).
dobson2006relationship: Dobson, H. E. Relationship between floral fragrance composition and type of pollinator. Biology of Floral Scent, 147–198 (2006).
kumari2014essoildb: Kumari, S. et al. EssOilDB: a database of essential oils reflecting terpene composition and variability in the plant kingdom. Database 2014, bau120 (2014).
edris2007pharmaceutical: Edris, A. E. Pharmaceutical and therapeutic potentials of essential oils and their individual volatile constituents: a review. Phytotherapy Research 21, 308–323 (2007).
lindinger1998proton: Lindinger, W. & Jordan, A. Proton-transfer-reaction mass spectrometry (PTR-MS): on-line monitoring of volatile organic compounds at pptv levels. Chemical Society Reviews 27, 347–375 (1998).
jordan2009high: Jordan, A. et al. A high resolution and high sensitivity proton-transfer-reaction time-of-flight mass spectrometer (PTR-TOF-MS). International Journal of Mass Spectrometry 286, 122–128 (2009).
taiti2016assessing: Taiti, C. et al. Assessing VOC emission by different wood cores using the PTR-TOF-MS technology. Wood Science and Technology, 1–23.
mancuso2015soil: Mancuso, S. et al. Soil volatile analysis by proton transfer reaction-time of flight mass spectrometry (PTR-TOF-MS). Applied Soil Ecology 86, 182–191 (2015).
pang2015biogenic: Pang, X. Biogenic volatile organic compound analyses by PTR-TOF-MS: Calibration, humidity effect and reduced electric field dependency. Journal of Environmental Sciences 32, 196–206 (2015).
herbig2009line: Herbig, J. et al. On-line breath analysis with PTR-TOF. Journal of Breath Research 3, 027004 (2009).
taiti2016sometimes: Taiti, C. et al. Sometimes a little mango goes a long way: A rapid approach to assess how different shipping systems affect fruit commercial quality. Food Analytical Methods 9, 691–698 (2016).
lanza2015selective: Lanza, M. et al. Selective reagent ionisation-time of flight-mass spectrometry: a rapid technology for the novel analysis of blends of new psychoactive substances. Journal of Mass Spectrometry 50, 427–431 (2015).
maleknia2007ptr: Maleknia, S. D., Bell, T. L. & Adams, M. A. PTR-MS analysis of reference and plant-emitted volatile organic compounds. International Journal of Mass Spectrometry 262, 203–210 (2007).
kim2009measurement: Kim, S. et al. Measurement of atmospheric sesquiterpenes by proton transfer reaction-mass spectrometry (PTR-MS). Atmospheric Measurement Techniques 2 (2009).
demarcke2009laboratory: Demarcke, M. et al. Laboratory studies in support of the detection of sesquiterpenes by proton-transfer-reaction-mass-spectrometry. International Journal of Mass Spectrometry 279, 156–162 (2009).
papurello2012monitoring: Papurello, D. et al. Monitoring of volatile compound emissions during dry anaerobic digestion of the organic fraction of municipal solid waste by proton transfer reaction time-of-flight mass spectrometry. Bioresource Technology 126, 254–265 (2012).
liu2013experimental: Liu, D., Andreasen, R. R., Poulsen, T. G. & Feilberg, A. Experimental determination of mass transfer coefficients of volatile sulfur odorants in biofilter media measured by proton-transfer-reaction mass spectrometry (PTR-MS). Chemical Engineering Journal 219, 335–345 (2013).
schwarz2009determining: Schwarz, K., Filipiak, W. & Amann, A. Determining concentration patterns of volatile compounds in exhaled breath by PTR-MS. Journal of Breath Research 3, 027002 (2009).
soukoulis2013ptr: Soukoulis, C. et al. PTR-TOF-MS, a novel, rapid, high sensitivity and non-invasive tool to monitor volatile compound release during fruit post-harvest storage: the case study of apple ripening. Food and Bioprocess Technology 6, 2831–2843 (2013).
campello2007fuzzy: Campello, R. A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment. Pattern Recognition Letters 28 (2007).
clauset2008hierarchical: Clauset, A., Moore, C. & Newman, M. Hierarchical structure and the prediction of missing links in networks. Nature 453, 98–101 (2008). <http://dx.doi.org/10.1038/nature06830>
clauset2004finding: Clauset, A., Newman, M. E. & Moore, C. Finding community structure in very large networks. Physical Review E 70, 066111 (2004).
pons2005computing: Pons, P. & Latapy, M. Computing communities in large networks using random walks. In Computer and Information Sciences - ISCIS 2005, 284–293 (Springer, 2005).
blondel2008fast: Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Lefebvre, E. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment 2008, P10008 (2008). <http://stacks.iop.org/1742-5468/2008/i=10/a=P10008>
newman2004fast: Newman, M. E. Fast algorithm for detecting community structure in networks. Physical Review E 69, 066133 (2004).
jolliffe2002principal: Jolliffe, I. Principal Component Analysis (Wiley Online Library, 2002).
macqueen1967some: MacQueen, J. et al. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, 281–297 (Oakland, CA, USA, 1967).
http://arxiv.org/abs/1704.08062v1
{ "authors": [ "Gianna Vivaldo", "Elisa Masi", "Cosimo Taiti", "Guido Caldarelli", "Stefano Mancuso" ], "categories": [ "q-bio.QM", "physics.data-an" ], "primary_category": "q-bio.QM", "published": "20170426112530", "title": "Beyond the network of plants volatile organic compounds" }
[email protected] School of Physics and Astronomy and Institute of Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, United KingdomSchool of Physics and Astronomy and Institute of Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, United KingdomRhodes College, Memphis, TN 38112, USA School of Physics and Astronomy and Institute of Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, United KingdomUniversità degli Studi di Urbino "Carlo Bo", I-61029 Urbino, [email protected] School of Physics and Astronomy and Institute of Gravitational Wave Astronomy, University of Birmingham, Birmingham B15 2TT, United Kingdom Proposed near-future upgrades of the current advanced interferometric gravitational wave detectors include the usage of frequency dependent squeezed light to reduce the current sensitivity-limiting quantum noise.We quantify and describe the degradation effects that spatial mode-mismatches between optical resonators have on the squeezed field. These mode-mismatches can to first order be described by scattering of light into second-order Gaussian modes.As a demonstration of principle, we also show that squeezing the second-order Hermite-Gaussian modes HG_02 and HG_20, in addition to the fundamental mode, has the potential to increase the robustness to spatial mode-mismatches.This scheme, however, requires independently optimized squeeze angles for each squeezed spatial mode, which would be challenging to realise in practise. Multi-spatial-mode effects in squeezed-light-enhanced interferometric gravitational wave detectors Andreas Freise December 30, 2023 ================================================================================================== § INTRODUCTION The current advanced gravitational-wave detectors, e.g., the Advanced LIGO <cit.> detectors, are dual-recycled Michelson interferometers with arm cavities, as shown in Fig. <ref>.One of the limiting noise sources is quantum noise which arises from quantum fluctuations of light.To reduce the quantum noise over a broad-frequency band, one approach is to inject frequency dependent squeezed vacuum states into the dark port of the interferometer <cit.>.These states are produced by the combination of a squeezer and a filter cavity, where the filter cavity generates the frequency dependency <cit.>, such that the phase quadrature is squeezed for high frequencies and the amplitude quadrature is squeezed for low frequencies.This technology can be fitted into the current infrastructure <cit.>, and is planned to be implemented in the next upgrade of the current observatories. There are several practical imperfections that can influence the performance of this scheme, such as spatial mode-mismatches, optical losses, and phase noise <cit.>. This paper focuses on spatial mode-mismatches. Their effects on the squeezing can be categorized into two types. The first type is when a part of the squeezed states in the fundamental mode irreversibly scatters to higher-order modes, which has an effect similar to an optical loss. The second type is when the quantum states are allowed to coherently couple back and forth between the fundamental and higher-order modes. This type requires multiple interfaces where mode-mismatch induced scatterings occur. 
In particular, there are two important such interfaces, located between the three components of interest in this work (the squeezer, the filter cavity, and the interferometer), each of which, to a good approximation, has its own well-defined spatial mode basis. Kwee et al. <cit.> studied the combined effect of these two types by considering mode-mismatches at the above mentioned interfaces. In this study, to better understand these two effects individually, we isolate them as much as possible by mode-mismatching one of the three components at a time, i.e., two components are always kept perfectly mode matched to each other. In contrast to Ref. <cit.>, and to what would be done in practice, the filter cavity is intentionally made resonant for higher-order modes within the frequency band of interest. On the one hand, this allows us to further study the interesting coherent scattering effect. On the other hand, it might also be relevant in reality for long filter cavities. Additionally, we have looked into whether injecting multi-spatial-mode squeezing, where two higher-order spatial modes are squeezed in addition to the fundamental mode, can provide robustness to mode-mismatches. The interesting spatial aspects of squeezed states have generated the relatively new field of quantum imaging <cit.>, which has experimentally demonstrated the ability both to generate squeezed higher-order Gaussian modes <cit.> and to combine different squeezed transverse modes <cit.>. These are, in principle, the tools needed to produce the multi-spatial-mode squeezing considered in this paper.

The key results of this paper are summarized as follows. In Fig. <ref> we show the quantum noise limited sensitivity for various levels of mode-mismatch between the interferometer and the filter cavity, while keeping the squeezer mode matched to the filter cavity. This mode-mismatch has the same effect as a lossy element between the filter cavity and the interferometer. The exact same effect is seen when mode-mismatching the squeezer to a mode matched filter cavity and interferometer. These results are consistent with the result obtained by Kwee et al. <cit.> in the high-frequency part of the spectrum. Figure <ref> shows the result when the squeezer is kept mode matched to the interferometer instead of to the filter cavity. In this case, there are scattering points (spatial basis changes) before and after the filter cavity, which allow the squeezed states to coherently scatter to higher-order modes and then back to the fundamental mode. If a higher-order mode involved in this process picks up a different phase than the fundamental mode when reflected off the filter cavity, this mode-mismatch enables potentially antisqueezed states to mix in with the squeezed states, which would be worse than just a loss. This coherent scattering effect can be seen in Fig. <ref> at low frequencies, where the fundamental mode is near-resonant while the higher-order modes are off resonance, and at the two local peaks where the second- and fourth-order modes are resonant while the fundamental mode is off resonance. These results are consistent with the low-frequency part of the spectrum obtained by Kwee et al. <cit.>. Figure <ref> shows the results obtained when letting the field emitted by the squeezer have squeezed states in the three Hermite-Gaussian modes HG_00, HG_02, and HG_20, in contrast to the above, where only the HG_00 mode was squeezed.
Just as when generating Fig. <ref>, the filter cavity is mode-mismatched to the interferometer while the squeezer is kept mode matched to the filter cavity. The filter cavity is redesigned so that the second-order modes have the same resonance condition as the fundamental mode, which is necessary to correctly rotate all the squeezed states. In addition, the squeeze angles of the second-order modes have been independently optimized to maximize the broad-frequency-band sensitivity. Figure <ref> shows that, in principle, the injection of a multi-spatial-mode-squeezed field could provide resilience to the type of mode-mismatch considered here. A practical implementation would, however, require a more detailed study and an experimental demonstration.

The outline of this paper is as follows. In Sec. <ref>, we go into the details of the model used to study the impact of spatial mode-mismatches, and we thoroughly analyze the results presented in Figs. <ref> and <ref> by using analytical expressions. In Sec. <ref> we elaborate on the model used to study whether the injection of squeezed states in multiple spatial modes could provide robustness to mode-mismatches, and the results presented above in Fig. <ref> are further analysed.

§ THE EFFECT OF SPATIAL MODE-MISMATCHES
We now go into the details behind the modeling of how mode-mismatches affect the quantum-noise-limited sensitivity of a squeezed-light-enhanced interferometric gravitational wave detector. Specifically, we start with the description of the optical setup in subsection <ref>, and then, in subsection <ref>, we describe the general framework used to analyze the results. Finesse <cit.>, the numerical software that was used to produce the results, uses an equivalent method <cit.>. A similar framework can also be found in Ref. <cit.>. In the later subsections <ref>, <ref>, and <ref>, we look into mode-mismatches between the three components: the squeezer, the filter cavity, and the interferometer.

§.§ The optical setup
The optical setup used here is visualized in Fig. <ref>, and is a simplified and idealized model of an Advanced LIGO detector <cit.> with frequency dependent squeezed light injected through the dark port. The key parameters of the interferometer are listed in Table <ref>. The frequency dependent squeezing is realized by reflecting the squeezed field off a detuned over-coupled Fabry-Perot cavity. This cavity is frequently referred to as a filter cavity <cit.>. The filter cavity considered in this work is a linear, overcoupled, 16 m long confocal optical cavity, based on the one proposed in <cit.> for a near-term upgrade of Advanced LIGO. In this work, the input mirror is lossless, the end mirror is perfectly reflective, and we have assumed that the mirrors are much larger than the beam sizes, so that clipping losses are negligible. The values used for the cavity detuning and the input mirror transmission were obtained by maximizing the broadband sensitivity between 10 Hz and 3 kHz. The radius of curvature of the two mirrors is chosen to make the higher-order modes resonant within the frequency band of interest, for the reason mentioned in the introduction. All the filter cavity parameters used are shown in Table <ref>. We have three components to mode-mismatch to each other: the interferometer, the filter cavity, and the squeezer. The mode-mismatch between the interferometer and the filter cavity is generated by displacing a mode matching lens along the optical axis.
For the squeezer component, Finesse allows us to freely specify the complex beam parameter of the emitted field, and we used this feature to control the mode matching of the squeezer.

§.§ The mathematical framework
The spatial distribution of the field within the interferometer can be expanded in one common interferometer eigenbasis U_n^IFO(x, y, z). Specifically, the sideband field at ω_0 ± Ω (ω_0 being the carrier frequency of the laser) reads

\hat{E}(ω_0 ± Ω, x, y, z) = \sum_{n=0}^{N} c_n\, \hat{a}_{ω_0±Ω,n}\, U_n^{IFO}(x, y, z).

Here \hat{a}_{ω_0±Ω,n} are the annihilation operators for the upper and lower sidebands of the nth mode, c_n is the relative weight of the nth mode, satisfying \sum_{n=0}^{∞} c_n^2 = 1, N denotes the number of modes included in the model, z is the coordinate along the optical axis, and x and y are the transverse coordinates. Similarly, the eigenbases of the filter cavity and the squeezer are denoted by U_n^FC and U_n^SQZ, respectively. These are the three eigenbases used to describe the spatial distribution of the field within the optical setup. Which eigenbasis is used where is indicated by the background colors in Fig. <ref>, and the red dots indicate where the basis changes take place. Scattering between modes labeled by different numbers n occurs when changing basis from U_n^SQZ to U_n^FC and when changing basis from U_n^FC to U_n^IFO, if the complex beam parameters of the bases are different.

In this paper, we use the two-photon formalism <cit.> to model the quantum noise. In this formalism, the key quantities are (i) the amplitude and phase quadrature operators, defined as

\hat{a}_1(Ω) = \frac{\hat{a}_{ω_0+Ω} + \hat{a}_{ω_0-Ω}^†}{\sqrt{2}}, \qquad \hat{a}_2(Ω) = \frac{\hat{a}_{ω_0+Ω} - \hat{a}_{ω_0-Ω}^†}{\sqrt{2}\, i},

and (ii) the transfer matrix relating the quadrature operators of the fields at different locations. In our case, we care about higher-order modes, so the quadrature operators are represented in terms of a column vector of length 2N:

𝐚 = ⊕_{n=0}^{N} 𝐚_n(Ω),

with each pair of quadrature operators for mode n defined as

𝐚_n(Ω) = [\, \hat{a}_{1,n}(Ω) \;\; \hat{a}_{2,n}(Ω) \,]^T.

The field that enters the interferometer can be related to the field entering the squeezer through

𝐚_IFO = 𝒦_2\, 𝒯\, 𝒦_1\, 𝒮\, 𝐚_SQZ.

Here, 𝒮 is the squeezing matrix, 𝒯 is the filter cavity transfer matrix, 𝒦_1 describes the basis change from U_n^SQZ to U_n^FC, and 𝒦_2 describes the basis change from U_n^FC to U_n^IFO. These matrices are described as follows. The joint squeezing matrix 𝒮 is given by the direct sum of the individual squeezing matrices for every spatial mode in the field:

𝒮 = ⊕_{n=0}^{N} 𝒮_n.

The squeezing matrix 𝒮_n for spatial mode n is given by

𝒮_n = \begin{pmatrix} \cosh r_n + \sinh r_n \cos 2φ_n & \sinh r_n \sin 2φ_n \\ \sinh r_n \sin 2φ_n & \cosh r_n - \sinh r_n \cos 2φ_n \end{pmatrix},

where r_n and φ_n are the squeeze factor and angle, respectively. In the later subsections, the states in the fundamental mode are squeezed by 10 dB, while all higher-order modes contain pure vacuum states. That is, r_0 = (2\log_{10} e)^{-1} and r_n = 0 for all n > 0. The angle φ_0 is optimized such that the high-frequency shot noise is maximally reduced. The filter cavity then takes care of correctly rotating the squeezed states for the rest of the frequency components.
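As a quick consistency check on the quadrature-space conventions above, the following minimal numpy sketch (not part of the original analysis) builds the single-mode squeezing matrix 𝒮_n and verifies that φ_n = 0 with r_0 = (2 log_10 e)^{-1} gives the familiar diag(e^r, e^{-r}) with 10 dB of noise reduction in the phase quadrature:

```python
import numpy as np

def S_n(r, phi):
    """Single-mode squeezing matrix in the (a_1, a_2) quadrature basis."""
    ch, sh = np.cosh(r), np.sinh(r)
    return np.array([[ch + sh * np.cos(2 * phi), sh * np.sin(2 * phi)],
                     [sh * np.sin(2 * phi), ch - sh * np.cos(2 * phi)]])

r0 = 1.0 / (2.0 * np.log10(np.e))   # 10 dB of squeezing
S = S_n(r0, 0.0)
print(S)                            # ~diag(e^r0, e^-r0)
print(10 * np.log10(S[1, 1] ** 2))  # ~ -10 dB noise variance in a_2
```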
The matrix 𝒦 describing a basis change between two spatial mode bases is given by

𝒦 = \begin{pmatrix} K_{0,0} & \cdots & K_{0,k} & \cdots & K_{0,N} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ K_{n,0} & \cdots & K_{n,k} & \cdots & K_{n,N} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ K_{N,0} & \cdots & K_{N,k} & \cdots & K_{N,N} \end{pmatrix},

where each entry K_{n,k} is a 2×2 matrix given by

K_{n,k} ≡ κ_{nk} \begin{pmatrix} \cos β_{nk} & -\sin β_{nk} \\ \sin β_{nk} & \cos β_{nk} \end{pmatrix}.

Here, κ_{nk} is the coupling magnitude from mode number k in the old basis to mode number n in the new basis, and β_{nk} is the corresponding coupling phase. Expressed in the spatial basis U_n^FC, the reflection off the filter cavity is given by

𝒯 = ⊕_{n=0}^{N} 𝒯_n(Ω),

where the spatial mode n undergoes a phase change specified by

𝒯_n(Ω) = 𝒜_2 \begin{pmatrix} r_n(Ω) & 0 \\ 0 & r_n^*(-Ω) \end{pmatrix} 𝒜_2^{-1}.

The transfer function for a sideband in spatial mode n is given by

r_n(Ω) = \frac{e^{-iϕ_n(Ω)} - \sqrt{R_{in}}}{\sqrt{R_{in}}\, e^{-iϕ_n(Ω)} - 1},

where

ϕ_n(Ω) = \frac{2L}{c}(Ω + Δ) - q_n ψ_{rt},

and R_in is the input mirror power reflectivity, Δ is the cavity detuning, L is the macroscopic cavity length, c is the speed of light, ψ_rt is the round-trip Gouy phase, and q_n is the order of mode n. The matrix

𝒜_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ -i & i \end{pmatrix}

is used to transform the transfer function for the sidebands to that for the quadratures.

§.§ Mode-mismatched interferometer
In this scenario, the interferometer is mode-mismatched to both the squeezer and the filter cavity, while the squeezer and the filter cavity are kept mode matched to each other. To generate this mode-mismatch, one of the lenses used to mode match the filter cavity to the interferometer is displaced along the optical axis. The resulting quantum-noise-limited sensitivity is shown in Fig. <ref>, while Fig. <ref> shows the same data expressed in terms of the improvement over the nonsqueezed case. The dip in improvement around 70 Hz demonstrates that one cannot achieve a perfect broad-frequency-band noise reduction by using only one filter cavity <cit.>. However, when operating with a tuned signal recycling cavity, as done here, one filter cavity still performs very well <cit.>. Since we are using realistic mirror losses inside the interferometer, the sensitivity improvement does not reach exactly 10 dB even when all three components are perfectly mode matched.

The reason for the broad-frequency-band squeezing degradation is best explained by using the analytics developed above. Since the squeezer and the filter cavity are mode matched, and assuming that the self-coupling phases in equation <ref> are β_{kk} = 0, the basis change matrix 𝒦_1 in equation <ref> becomes the identity matrix. This assumption does not reduce the generality, as any self-coupling phase could be compensated for by adjusting the initial squeeze angle. Equation <ref>, describing the quantum field injected into the interferometer, then reduces to

𝐚_IFO = 𝒦\, 𝒯\, 𝒮\, 𝐚_SQZ,

which is visualized in Fig. <ref>. The only frequency dependent process that the field undergoes is the interaction with the filter cavity, which is described by equation <ref>. When this process takes place, all the squeezed states are in the fundamental mode and therefore undergo the correct rotation 𝒯_0(Ω). The phase changes 𝒯_n(Ω) of the pure vacuum states in the higher-order modes are unimportant, as these just rotate circularly symmetric probability distributions around their symmetry axes. The mode-mismatch-induced basis change 𝒦 makes the fundamental mode exchange some squeezed states for pure vacuum states with the higher-order modes.
This makes the fundamental mode of the interferometer eigenbasis less squeezed at all frequencies, and has the same effect as an optical loss. That is, for small coupling coefficients κ_0, where

κ_0^2 = \sum_{n=1}^{N} κ_{0n}^2

is the total power coupling magnitude for scattering away from the fundamental mode, the quantum noise in the interferometer scales as

(1 - κ_0^2) e^{-2r_0} + κ_0^2.

§.§ Mode-mismatched filter cavity
Just as above, the filter cavity is spatially mode-mismatched to the interferometer, but here the squeezer is kept mode matched to the interferometer instead of to the filter cavity. In this case, there are nontrivial spatial basis changes before and after the filter cavity that give rise to couplings between different spatial modes. Since the squeezer and the interferometer are mode matched to each other, the second basis change is the inverse of the first; thus, equation <ref> becomes

𝐚_IFO = 𝒦^{-1}\, 𝒯\, 𝒦\, 𝒮\, 𝐚_SQZ.

This process is visualized in Fig. <ref>. Due to the mode-mismatch 𝒦 between the squeezer and the filter cavity, the field incident on the filter cavity input mirror has a part of its squeezed states located in higher-order modes. If these higher-order modes experience phase shifts different from that of the fundamental mode when reflected off the filter cavity (i.e., if 𝒯_n(Ω) ≠ 𝒯_0(Ω)), then the mode-mismatch between the filter cavity and the interferometer, 𝒦^{-1}, enables these now wrongly rotated squeezed states to mix back in with the squeezed states in the fundamental mode. If the wrongly rotated states are antisqueezed, this coherent scattering process is worse than an optical loss. In Figs. <ref> and <ref>, this coherent scattering effect can be seen in two different regions: at low frequencies, where the fundamental mode is nearly resonant while the higher-order modes are off resonance, and at about 300 Hz and 700 Hz, where the second-order and fourth-order modes are resonant while the fundamental mode is not. The reason that the second-order and fourth-order modes show up is that the mode-mismatch was generated by offsetting the waist size and displacing the waist position of the beam, which only generates nonzero couplings between modes with even mode-order spacing. Since the couplings decrease with increasing mode-order spacing, we only included modes up to order four in our simulations.

For a small mode-mismatch, and for the worst-case higher-order-mode rotations, the quantum noise in the interferometer scales as

e^{-2r} + 4(1 - e^{-2r}) κ_0^2.

See Appendix <ref> for a derivation of this formula. For large squeeze magnitudes, this is a factor of 2 worse than the effect of a corresponding optical loss. It should be mentioned that the filter cavity was deliberately designed to have this small mode spacing so that we could see the effect of higher-order mode resonances. If this 16 m filter cavity were implemented in LIGO, it would be designed such that the higher-order modes are resonant well outside the frequency range of interest. However, this might not be possible for much longer filter cavities, e.g., as proposed for the Einstein Telescope <cit.>. At high frequencies, neither the fundamental mode nor the higher-order modes are resonant, thus 𝒯_n(Ω) = 𝒯_0(Ω), and the squeezed field is consequently unaffected by this mode-mismatch.

§.§ Mode-mismatched squeezer
Here we consider the case where the squeezer is mode-mismatched to both the filter cavity and the interferometer, while the latter two are kept mode matched to each other.
This means that the basis change between the squeezer and the filter cavity generally has nonzero couplings between different spatial modes, while the matrix performing the basis change between the filter cavity and the interferometer becomes the identity matrix. Thus, equation <ref> becomes

𝐚_IFO = 𝒯\, 𝒦\, 𝒮\, 𝐚_SQZ,

which is visualized in Fig. <ref>. The effect is the same as in Sec. <ref>, thus the result can be seen in Figs. <ref> and <ref>. In contrast to the case in Sec. <ref>, there are indeed squeezed states in the higher-order modes that have incorrect rotations due to the filter cavity. But since these are not allowed to couple back to the fundamental mode again, this does not contribute any extra quantum noise.

§ ROBUSTNESS TO MODE-MISMATCHES THROUGH SQUEEZED HIGHER-ORDER MODES
In this section, we show that the injection of squeezed states in multiple spatial modes can potentially provide robustness to mode-mismatches. This requires that the initial orientation of the squeezing ellipses can be independently optimized for each spatial mode, which would be challenging to achieve in practice due to the degenerate resonance conditions of the second-order modes. Further, the fields from three different squeezers would have to be superimposed into one by using mode-selecting cavities. In subsection <ref> the mode-mismatched interferometer is revisited (see Sec. <ref>), but this time three spatial modes are squeezed instead of just the fundamental mode. Subsection <ref> provides a simple analytic test of the principle of using multiple squeezed modes; the principle was not rejected.

§.§ Mode-mismatched interferometer
The same mode-mismatch is considered as in Sec. <ref>; that is, the interferometer is mode-mismatched to the filter cavity and the squeezer, while the filter cavity and the squeezer are kept mode matched to each other. Therefore, equation <ref> applies here as well, but with some alterations to the squeezing matrix 𝒮 and to the filter cavity transfer matrix 𝒯, as described below. We squeezed the Hermite-Gaussian modes HG_02 and HG_20, in addition to the fundamental mode, as these two second-order modes have the strongest couplings to the fundamental mode, as mentioned in Sec. <ref>. All three states are squeezed by 10 dB. The two extra modes are labeled n=1 and n=2; thus, the squeeze magnitudes in the squeezing matrix 𝒮 (equation <ref>) become r_n = (2\log_{10} e)^{-1} for n ∈ {0,1,2}, and r_n = 0 for n > 2. Further, for each level of mode-mismatch the initial squeeze angles φ_n for n ∈ {0,1,2} are independently optimized to maximize the sensitivity (or, equivalently, to minimize the quantum noise). This optimization is needed to correctly compensate for the phases β_{0k}, k ∈ {1,2}, that are picked up when the squeezed higher-order modes couple into the fundamental mode due to the mode-mismatch-induced basis change 𝒦 (equation <ref>). To acquire the optimal frequency dependent rotation for the squeezed states in all three spatial modes, the filter cavity was made critical by changing the radius of curvature of the two filter cavity mirrors to 16 m. This gives a round-trip Gouy phase of π; hence, the second-order modes have the same resonance condition as the fundamental mode, and therefore pick up the same phase shift modulo 2π when subjected to the filter cavity transfer matrix 𝒯. This can be seen by setting ψ_rt = π, q(0) = 0 and q(1) = q(2) = 2 in equation <ref>. The results for two different levels of mode-mismatch are shown in Fig.
<ref>, and are presented in terms of the sensitivity improvement over the no-squeezing case. The figure also includes the corresponding traces from subsection <ref> for comparison. One can see that for 5 % mode-mismatch the sensitivity is increased by about 1.5 dB compared to the case when only the fundamental mode is squeezed, and that most of the mode-mismatch-induced squeezing degradation is recovered by squeezing the two extra spatial modes. There are two reasons for this: (i) in the previous section, pure vacuum states from the second-order modes mixed in with the squeezed states in the fundamental mode due to the mode-mismatch, whereas now correctly rotated squeezed states mix in instead; (ii) the couplings between the fundamental mode and the higher-order modes that carry pure vacuum states are small for this level of mode-mismatch. For the larger mode-mismatch of 15 %, the sensitivity gain is also larger, about 3 dB. This is because the coupling magnitudes between the fundamental mode and the second-order modes have increased. However, the sensitivity does not rise to the level of the mode matched case, as the fundamental mode has significant couplings to pure-vacuum-state-carrying higher-order modes. The results show that squeezing the two extra spatial modes provides robustness to this particular mode-mismatch in our model.

§.§ Test of principle
In this subsection we provide a test of principle for multi-spatial-mode squeezing by injecting two squeezed quantum fields into a Mach-Zehnder interferometer. The test originated from the idea of checking whether the benefits of squeezing higher-order modes could be downgraded, or even rejected, if we allow propagations and scatterings that are more general in nature than the ones studied in the previous subsection. The optical setup is shown in Fig. <ref> and consists of two squeezers, one for each incoming field, and two mixing points with a generic propagation in between. The test was performed as follows:

(i) Various parameters of the system are independently assigned randomized values within realistic and physically valid intervals. These parameters are: the beam splitters' reflection coefficients and microscopical offsets along their surface normals; the macroscopical and microscopical propagation phases; and the readout quadrature. Here, microscopical refers to distances smaller than the carrier wavelength, and macroscopical refers to distances of any magnitude, but of integer multiples of the carrier wavelength.

(ii) The upper input field is squeezed by 10 dB and the lower input field remains pure vacuum, as seen in the left part of Fig. <ref>. The initial squeeze angle is optimized to yield maximum squeezing in the upper output path in the readout quadrature.

(iii) The second squeezer is switched on, so that both fields are squeezed by 10 dB, as seen in the right part of Fig. <ref>. The initial squeeze angle for the lower field is then also optimized to yield maximum squeezing in the upper output path in the readout quadrature.

(iv) Repeat 10,000 times.

The result is shown in Fig. <ref>. The blue distribution is obtained with one squeezed field in step (ii), and the red bar is the result obtained in step (iii), when both fields are squeezed. Thus, for any set of random parameter values, we can always obtain 10 dB of squeezing as long as we can independently optimize the two initial squeeze angles. The rest of this subsection describes the model that was used in more detail. The system can be described by the framework from Sec.
<ref>, with N = 1, as there are only two fields in this setup. The upper (lower) field, and the operations acting on the upper (lower) field, are everywhere in the setup labeled by n = 0 (n = 1). The relation between the output fields and the input vacuum fields is given by equation <ref>; however, the transfer matrices 𝒦_1, 𝒦_2 and 𝒯 are modified as follows. Each lossless beam splitter can be represented by

𝒦_i = \begin{pmatrix} r_i \cos β_i & -r_i \sin β_i & t_i & 0 \\ r_i \sin β_i & r_i \cos β_i & 0 & t_i \\ t_i & 0 & -r_i \cos β_i & -r_i \sin β_i \\ 0 & t_i & r_i \sin β_i & -r_i \cos β_i \end{pmatrix},

where r_i ∈ [0.7, 1] is the reflection coefficient, t_i is the transmission coefficient satisfying t_i^2 = 1 - r_i^2, and β_i ∈ [-π, π] is the phase shift due to the displacement of the beam splitter along its surface normal. The propagation 𝒯 consists of two independent paths of lengths D_n = L_n + δL_n, where |δL_n| < λ_0 and L_n = k_n λ_0 with k_n ∈ ℕ. Thus, the transfer matrices for paths n = 0, 1 are given by

𝒯_n(Ω) = e^{-iθ_n} \begin{pmatrix} \cos ϕ_n & \sin ϕ_n \\ -\sin ϕ_n & \cos ϕ_n \end{pmatrix}.

Here,

θ_n = \frac{Ω L_n}{c} ∈ [0, π]

is the phase picked up due to the macroscopical length L_n, and

ϕ_n = \frac{ω_0 δL_n}{c} ∈ [-π, π]

is the phase shift induced by the microscopical length δL_n.

§ A MORE REALISTIC ADVANCED LIGO MODEL
To get a hint of how mode-mismatches inside the interferometer affect the multi-spatial-mode squeezed field, we here consider a Finesse model of an Advanced LIGO detector that includes small mode-mismatches between the cavities inside the interferometer. There are two important differences compared to the model described in Sec. <ref>. The first one is that the asymmetries between the two transverse spatial directions are included in the model, which gives rise to mode-mismatches that are small, but not negligible. These asymmetries show up because of nonzero angles of incidence in combination with spherical mirrors. The second important difference is that an Advanced LIGO output mode cleaner has been added to the model. The reason for this is that some fraction of the coherent laser power is in higher-order modes due to the internal mode-mismatches. Without the output mode cleaner, higher-order modes of the quantum field are allowed to beat with the higher-order modes of the coherent carrier field, creating noise that would not be present with the output mode cleaner included.

The experiment was performed by mode-mismatching the filter cavity to the output mode cleaner by varying the position of a mode matching lens along the optical axis. This mode matching lens is located between the filter cavity and the injection point for the squeezed field. The squeezer was kept mode matched to the filter cavity. We computed the quantum-noise-limited sensitivity in the frequency band of interest for two levels of mode-mismatch, both for a squeezer that emits one squeezed spatial mode and for one that emits three. The resulting improvements over the no-squeezing case are shown in Fig. <ref>. The behavior at low frequencies is identical to the result obtained with the simpler model considered in Sec. <ref>. At high frequencies, the squeezed field experiences a slightly larger degradation, which seems to be mainly due to the output mode cleaner; however, further investigation is needed to confirm this. Moreover, we can conclude that the internal mode-mismatches included in this model are too small to give rise to any large effects.
Future work aims at systematically studying the impact of internal mode-mismatches due to, e.g., thermal lensing.

§ CONCLUSIONS
In this paper, we have quantified and described how squeezed-light-enhanced interferometric gravitational-wave detectors are affected by spatial mode-mismatches between the interferometer, the filter cavity, and the squeezer. We have shown that spatial mode-mismatches can potentially cause significantly larger squeezing degradations than a pure optical loss, if multiple mode-mismatches allow squeezed states to coherently scatter back and forth between the fundamental mode and higher-order modes. We can conclude that, even with relatively large mode-mismatches, the injection of frequency dependent squeezed light is beneficial in our model. Further, we have shown that the injection of a field with squeezed states, not only in the fundamental mode, but also in the second-order Hermite-Gaussian modes HG_02 and HG_20, can potentially provide resilience to spatial mode-mismatches. This scheme requires independent optimization of the squeeze angles for all three involved spatial modes, which poses a big challenge for any potential real-world implementation. Further studies of how combinations of external and intra-interferometer spatial mode-mismatches affect the performance of squeezed light are needed to better understand how squeezed light would perform in gravitational wave detectors.

§ ACKNOWLEDGEMENTS
The authors would like to thank Matthew Evans and Rana Adhikari for useful discussions. This work was supported by the Science and Technology Facilities Council Consolidated Grant (number ST/N000633/1). D. Töyrä was supported by the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ (PEOPLE-2013-ITN) under REA grant agreement n [606176]. H. Miao was supported by UK STFC Ernest Rutherford Fellowship (Grant No. ST/M005844/11). M. Davis was funded by the United States National Science Foundation via grant number PHY-1460803 to the University of Florida Gravitational Physics IREU program.

§ NOISE SCALING OF THE COHERENT SCATTERING EFFECT
In this section we derive how the noise due to the coherent scattering effect scales with the coupling coefficient. We use a simplified version of the system considered in Sec. <ref>, where the filter cavity is mode-mismatched to the interferometer and the squeezer, while the squeezer and the interferometer are kept mode matched. Here, we only use two fields, i.e., N = 1 in the mathematical framework of Sec. <ref>. The relation between the output field and the input field is given by equation <ref>, but with simplified matrices. Only one of the two fields is squeezed; thus, the squeezing matrix can be written as

𝒮 = \begin{pmatrix} e^{r} & 0 & 0 & 0 \\ 0 & e^{-r} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

The scattering matrix is given by

𝒦 = \begin{pmatrix} \cos κ & 0 & -\sin κ & 0 \\ 0 & \cos κ & 0 & -\sin κ \\ \sin κ & 0 & \cos κ & 0 \\ 0 & \sin κ & 0 & \cos κ \end{pmatrix},

where sin κ is the coupling between the two fields. For the propagation, only the relative phase shift between the two fields is of importance; hence, it can be represented by the matrix

𝒯 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos ϕ & -\sin ϕ \\ 0 & 0 & \sin ϕ & \cos ϕ \end{pmatrix},

where ϕ is the relative phase shift.
Assuming we are squeezing the readout quadrature, the noise is proportional to the element ℳ(2,2), where

ℳ = 𝒦^{-1} 𝒯 𝒦 𝒮 \left( 𝒦^{-1} 𝒯 𝒦 𝒮 \right)^T = 𝒦^{-1} 𝒯 𝒦\, 𝒮^2\, 𝒦^{-1} 𝒯^T 𝒦.

Assuming the coupling magnitude sin κ is small,

ℳ(2,2) = e^{-2r} - 2κ^2 e^{-2r} (e^{2r} - 1)(\cos ϕ - 1) + 𝒪(κ^3).

Thus, the worst-case scenario is if the propagation gives rise to a relative phase shift between the two fields of ϕ = π, in which case the noise arising due to the coherent scattering effect scales as

e^{-2r} + 4κ^2 (1 - e^{-2r}) + 𝒪(κ^3).

For large squeeze magnitudes, this is a factor of two worse than if these two scattering points had been exchanged for two optics with small losses κ.
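This scaling can be cross-checked numerically in a few lines; the following sketch (not part of the original analysis; the parameter values are illustrative only) builds the three matrices defined above and compares the exact element ℳ(2,2) with the small-κ expansion at the worst-case phase ϕ = π:

```python
import numpy as np

def S(r):       # field 0 squeezed in the readout quadrature, field 1 vacuum
    return np.diag([np.exp(r), np.exp(-r), 1.0, 1.0])

def K(kappa):   # two-field scattering (basis change), coupling sin(kappa)
    c, s = np.cos(kappa), np.sin(kappa)
    return np.array([[c, 0, -s, 0],
                     [0, c, 0, -s],
                     [s, 0, c, 0],
                     [0, s, 0, c]])

def T(phi):     # relative propagation phase between the two fields
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, c, -s],
                     [0, 0, s, c]])

r, kappa, phi = 1.15, 0.05, np.pi      # ~10 dB squeezing, worst-case phase
A = np.linalg.inv(K(kappa)) @ T(phi) @ K(kappa) @ S(r)
M = A @ A.T
exact = M[1, 1]                        # element M(2,2)
approx = np.exp(-2 * r) + 4 * kappa**2 * (1 - np.exp(-2 * r))
print(exact, approx)                   # agree up to O(kappa^3) corrections
```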
http://arxiv.org/abs/1704.08237v2
{ "authors": [ "Daniel Töyrä", "Daniel D. Brown", "McKenna Davis", "Shicong Song", "Alex Wormald", "Jan Harms", "Haixing Miao", "Andreas Freise" ], "categories": [ "physics.optics", "quant-ph" ], "primary_category": "physics.optics", "published": "20170426174514", "title": "Multi-spatial-mode effects in squeezed-light-enhanced interferometric gravitational wave detectors" }
Institute of Science and Technology Austria (IST Austria), Klosterneuburg, Austria
Soft and Living Matter Lab, Institute of Nanotechnology (CNR-NANOTEC), Consiglio Nazionale delle Ricerche, Rome, Italy
Human Genetics Foundation, Turin, Italy
We consider the problem of inferring the probability distribution of flux configurations in metabolic network models from empirical flux data. For the simple case in which experimental averages are to be retrieved, data are described by a Boltzmann-like distribution (∝ e^{F/T}), where F is a linear combination of fluxes and the `temperature' parameter T ≥ 0 allows for fluctuations. The zero-temperature limit corresponds to a Flux Balance Analysis scenario, where an objective function (F) is maximized. As a test, we have inverse modeled, by means of Boltzmann learning, the catabolic core of Escherichia coli in glucose-limited aerobic stationary growth conditions. Empirical means are best reproduced when F is a simple combination of biomass production and glucose uptake and the temperature is finite, implying the presence of fluctuations. Since a QSS island generates stronger blueshifts than an L–T island, the scheme presented here has the potential to deliver new quantitative insight into cellular metabolism. Our implementation is, however, computationally intensive, and highlights the major role that effective algorithms to sample the high-dimensional solution space of metabolic networks can play in this field.
Andrea De Martino
– Dedicated to John Butcher, on the occasion of his 84-th birthday –
========================================================================
Building a system-level understanding of metabolism, the highly conserved set of chemical processes devoted to energy transduction, growth and maintenance in living cells, is a major challenge for systems biology. Constraint-based in silico methods like Flux Balance Analysis (FBA) and its refinements play the central role in this endeavour <cit.>. Such schemes provide a coherent theoretical framework to analyze the capabilities of large (genome-scale) metabolic networks starting from minimal genomic input and physico-chemical constraints. Optimal flux configurations are usually hypothesized to optimize a function of the fluxes, like biomass production, leading to results that can be quantitatively tested against experimental data obtained in controlled conditions <cit.>. On the other hand, the choice of an objective function is in many situations not straightforward, and living cells often appear to be multi-objective (Pareto) optimal <cit.>. In addition, while bulk properties can usually be well described by optimal flux patterns, experiments performed at single-cell resolution display a considerable degree of cell-to-cell variability <cit.>, which has been linked to the inevitable presence of randomness in metabolic processes <cit.>. At the simplest level, this suggests that flux patterns observed empirically can be thought of as sampled from a stationary probability distribution defined over the space of feasible metabolic states. This idea is validated by the fact that empirical growth rate distributions are reproduced by a Maximum Entropy principle at fixed average growth rate, according to which flux configurations 𝐯 occur with a Boltzmann-like probability distribution

P(𝐯) ∝ e^{βλ(𝐯)},

with λ(𝐯) the growth rate and where β > 0 controls the magnitude of fluctuations <cit.>. While more comprehensive studies will hopefully refine this picture, recent work has provided further support to, and insight into, the MaxEnt scenario <cit.>.
A tightly related question is whether one can infer the probability distribution of metabolic states from empirical data rather than postulating it. Inverse problems and related techniques have a long history in applied fields <cit.> and have attracted much attention from more theoretical areas in recent years as they directly connect machine learning, artificial neural networks and statistical mechanics <cit.>. In the biological context, they have led to new insights in domains as far apart as protein science <cit.>, neural dynamics <cit.> and immunology <cit.>. In this short note we discuss a method to obtain the probability distribution of fluxes in a metabolic network from the experimental characterization of a subset of fluxes, and present a proof-of-concept validation based on flux data for Escherichia coli steady state growth. Taking experimental means as the key features to be captured, we look for a flux distribution of the form P(𝐯)∝ e^F(𝐯)/T  , where F is a function of fluxes (to be inferred) and T≥ 0 is an adjustable parameter. In order to learn F, we combine a Boltzmann learning scheme with a Monte Carlo sampling method. The F thus obtained is found to involve a linear combination of the biomass output and of the glucose intake rate, and data are best described by setting T to a finite (non-zero) value, suggesting, in agreement with previous studies, that empirical fluctuations reflect at least in part some unavoidable (and possibly functionally relevant) noise in the organization of flux configurations. We shall denote by K the number of different samples in which a subset 𝒳 of fluxes has been experimentally quantified, and by v_j^(k) the value of flux v_j in the k-th sample. The empirical mean of flux v_j is given by ⟨v_j⟩_emp = (1/K)∑_k=1^K v_j^(k)     (j∈𝒳). To define the space of a priori feasible flux configurations for the entire metabolic network, we shall follow the standard route of assuming that viable flux vectors 𝐯 are non-equilibrium steady states of the underlying system of reactions. The feasible space ℱ then corresponds to the solutions of 𝐒𝐯=0, where 𝐒 stands for the M× N stoichiometric matrix (M denoting the number of chemical species, and N that of reactions) and where each flux v_i is constrained to lie within an interval [v_i^min,v_i^max], whose bounds encode the physiologically relevant regulatory, kinetic and thermodynamic constraints. Geometrically, such an ℱ is a convex polytope. In principle, every point 𝐯∈ℱ is a feasible flux configuration and configurations are assumed to be a priori equiprobable. However, the empirical means (<ref>) represent information that can refine this assumption. In particular, one may expect that flux vectors should occur with a probability distribution P(𝐯) such that ∫_ℱ v_j P(𝐯)d𝐯 = ⟨v_j⟩_emp     (j∈𝒳)  . Following the Maximum Entropy idea, we focus on the least constrained distribution satisfying (<ref>), which maximizes the entropy S[P]=-∫_ℱ P(𝐯)log P(𝐯)d𝐯 subject to (<ref>). This is given by P(𝐯)≡ P(𝐯|𝐜)=e^∑_j∈𝒳 c_j v_j/Z(𝐜)     (𝐯∈ℱ)  , where 𝐜={c_j}_j∈𝒳 denotes the vector of Lagrange multipliers enforcing (<ref>) for each v_j, while Z(𝐜)=∫_ℱ e^∑_j∈𝒳 c_j v_j d𝐯 ensures proper normalization. A key question at this point concerns the values of the constants c_j. More precisely, can we set them so as to reproduce empirical means most accurately via (<ref>)?
By straightforwardly maximizing the log-likelihood of the parameters given the empirical data, i.e. ℒ(𝐜|data) = (1/K)∑_k=1^K log P(𝐯^(k)|𝐜)  , one sees that ∂ℒ/∂ c_j = ⟨v_j⟩_emp - ⟨v_j⟩_𝐜  , where ⟨v_j⟩_𝐜 = ∫_ℱ v_j P(𝐯|𝐜)d𝐯  . This suggests that the optimal vector 𝐜 can be found by an updating dynamics driven by the difference between the empirical mean and the mean computed using the current vector 𝐜, i.e. via a Boltzmann learning rule such as c_j(τ+δτ)-c_j(τ)=[⟨v_j⟩_emp-⟨v_j⟩_𝐜(τ)]δτ  . Ideally, the vector 𝐜^⋆ obtained as the asymptotic fixed point of (<ref>) ensures the best agreement between empirical and theoretical means (i.e. between (<ref>) and (<ref>)), while P(𝐯|𝐜^⋆) provides our best guess for the (stationary) probability distribution, compatible with empirical means, that has generated our dataset. In this scenario, the quantity E=∑_j∈𝒳 c_j^⋆ v_j would represent the key physical parameter regulating the probability of occurrence of feasible flux vectors. Note that such an E plays the role of F/T in (<ref>). We have implemented the above scheme using data retrieved from <cit.>, where E. coli's central carbon metabolism is characterized in minimal glucose-limited aerobic conditions, at growth/dilution rates below 0.5/h. We collected K=35 control experiments and the corresponding values for |𝒳|=24 reaction fluxes (see <cit.> for more details). We have then studied the inference problem on the feasible space ℱ defined by the E. coli core metabolic network <cit.>. The dimension of ℱ when a minimal aerobic glucose-limited medium is used to constrain the exchanges between the cell and its surroundings is dim(ℱ)=23. To compute the optimal values of the coefficients c_j from (<ref>) we have used the following procedure: * initialize c_j(0)=0 for all j∈𝒳 * at each time step τ: compute ⟨v_j⟩_𝐜(τ) from (<ref>) by sampling the distribution P(𝐯|𝐜(τ)) via Hit-and-Run Monte Carlo <cit.>; then * find the j∈𝒳 for which the difference ⟨v_j⟩_emp-⟨v_j⟩_𝐜(τ) is largest, update its value according to (<ref>), and iterate (a minimal code sketch of this loop is given at the end of this note). Finally, we set δτ =10^-3. By studying numerically the dynamics of the c_j's one finds that, at sufficiently long times τ, coefficients generically behave as c_j(τ)≃ L_j τ     (j∈𝒳)  , where L_j are flux-specific constants. For most fluxes, though, L_j=0, i.e. the corresponding c_j's converge in time to finite (small) values, while L_j≠ 0 for a small number of fluxes (see Fig. <ref>). This in turn implies that the function E≃τ∑_j∈𝒳 L_j v_j is asymptotically dominated by the terms with L_j≠ 0. Exploiting the linear dependencies between variables, one can express E in terms of biologically significant fluxes other than those in 𝒳. Quite remarkably, one finds that the dominant contribution to E has the form E≃τ (L_λλ+L_u u)  , where λ and u denote, respectively, the biomass output rate and the glucose intake flux, while L_λ and L_u are numerical coefficients given approximately by L_λ≃ 2/3 and L_u ≃ 1/3. The potentially high-dimensional function (<ref>) therefore reduces, after Boltzmann learning, to a simple form that combines two variables of the highest biological significance. In particular, the solution of the inverse problem suggests a scenario very similar to that derived in <cit.>. In addition, though, the network's state also appears to be sensitive to the glucose import rate u. We have validated these results by comparing empirical averages against the means obtained from the inferred distribution P_inf(𝐯)∝ e^τ (L_λλ+L_u u)  , as well as against means computed from the simpler form (<ref>), i.e.
P_0(𝐯)∝ e^τλ  , which was obtained by focusing on growth rate distributions (rather than individual fluxes). In both cases, we have performed an asymptotic extrapolation by studying how χ^2 and the mean squared error (MSE) between data and models change upon varying τ, which – comparing (<ref>) and (<ref>) – thus plays the role of 1/T (while L_λλ+L_u u plays the role of F). Results are shown in Fig. <ref>. One sees that both probability distributions generate clear minima in χ^2 and MSE as functions of τ, where data are optimally reproduced. However, the inferred distribution (<ref>) outperforms (<ref>) (deeper minimum, albeit slightly) both in terms of χ^2 and in terms of MSE. Interestingly, the inferred function provides better results at a larger value of τ compared to P_0, which suggests that metabolic configurations are closer to optima of F=L_λλ+L_u u than to optima of λ. In order to get a more precise idea of the improvement obtained via inference, we have displayed in Fig. <ref> a detailed flux-by-flux comparison between inferred means (with errors) and experimental means, also showing for the sake of completeness the predictions obtained from standard biomass-maximizing FBA. It is important to note that the Boltzmann learning dynamics does not converge in general to a vector 𝐜 such that theoretical means match empirical ones perfectly as τ→∞. A possible reason is that empirical means lie outside the feasible space ℱ. Flux values are in fact tightly constrained by the mass balance equations 𝐒𝐯=0, and small numerical errors that may arise e.g. from experimental noise can lead to violations of these conditions. While not surprising per se, this represents a further complication for the inference problem, which for metabolic networks relies on the definition of ℱ, and further investigations are needed. To summarize, previous work has shown that maximum entropy distributions at fixed average growth rate in the space of feasible metabolic states reproduce empirical growth rate distributions measured in exponentially growing populations at single-cell resolution <cit.> and outperform standard FBA (retrieved in the limit β→∞) in reproducing experimental data on fluxes <cit.>. The approach discussed here generalizes the above results by extending the input dataset to a subset of metabolic fluxes. Following standard inference schemes, we have computed the function F(𝐯) that best describes the flux dataset as being generated from a Boltzmann-like distribution ∝ e^F(𝐯)/T. Quite remarkably, the biomass output and the glucose intake emerge from the inference process as the key fluxes that govern the statistics of fluxes. The type of inverse modeling discussed here represents a novel and potentially powerful tool to analyze metabolic networks and characterize their large-scale organization in a way that is fully data-driven. `Energy' functions F obtained in this way may provide a new perspective on metabolic functions and objectives in complex settings where biomass growth optimization alone does not suffice to explain observations. However, the predictive capacity of inverse methods is intrinsically limited by the quality of available data. In addition, the scheme employed here has a high computational cost. In fact, as a Monte Carlo sampling of the feasible space ℱ is required at every Boltzmann learning step, performing the same analysis on larger (genome-scale) networks, with a feasible space ℱ whose dimension can be an order of magnitude larger than that addressed here, is very difficult.
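For concreteness, here is a minimal sketch of the coordinate-wise Boltzmann learning loop described earlier, with the Hit-and-Run sampler treated as a black box. It is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function hit_and_run, the index array idx and the iteration counts are hypothetical placeholders. Note that every iteration calls for a fresh Monte Carlo sampling of ℱ, which is precisely the bottleneck just described.

    import numpy as np

    def boltzmann_learning(v_emp, idx, hit_and_run, n_iters=10_000,
                           n_samples=5_000, dt=1e-3):
        """Sketch of the Boltzmann learning dynamics.
        v_emp       : empirical means of the measured fluxes (length |X|)
        idx         : column indices of those fluxes within a flux vector
        hit_and_run : assumed black-box sampler; hit_and_run(c, n) returns an
                      (n, N) array of flux vectors drawn from
                      P(v|c) ~ exp(sum_j c_j v_j) on the feasible polytope F
        """
        c = np.zeros(len(idx))
        for _ in range(n_iters):
            samples = hit_and_run(c, n_samples)
            v_model = samples[:, idx].mean(axis=0)   # model means <v_j>_c
            gap = v_emp - v_model
            j = int(np.argmax(np.abs(gap)))          # largest discrepancy
            c[j] += gap[j] * dt                      # update only that coordinate
        return c

The asymptotic behaviour c_j(τ) ≃ L_j τ would then be read off from the trajectory of c; even with an efficient sampler, the inner sampling loop dominates the cost at genome scale.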
This problem may however be effectively solved by the use of approximate representations of the feasible space and/or of more efficient computational heuristics <cit.>. In this sense, the present note represents merely a proof of concept and more work is needed to make these ideas viable at genome resolution. We thank A. Braunstein, A. P. Muntoni and A. Pagnani for a critical revision of these ideas and for important comments and suggestions. We acknowledge the support of the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no. [291734] (D.D.M).
Bordbar A, et al. Constraint-based models predict metabolic and associated cellular functions. Nature Rev Genet (2014) 15:107-120
Schuetz R, Kuepfer L, Sauer U. Systematic evaluation of objective functions for predicting intracellular fluxes in Escherichia coli. Molec Sys Biol (2007) 3:119
Knorr AL, Jain R, Srivastava R. Bayesian-based selection of metabolic objective functions. Bioinformatics (2007) 23:351-357
Feist A and Palsson B. The biomass objective function. Curr Opin Microbiol (2010) 13:344-349
Schuetz R, et al. Multidimensional optimality of microbial metabolism. Science (2012) 336:601
Hart Y, et al. Inferring biological tasks using Pareto analysis of high-dimensional data. Nature Methods (2015) 12:233
Mori M, Marinari E, De Martino A. A yield-cost tradeoff governs Escherichia coli's decision between fermentation and respiration in carbon-limited growth. BioRxiv preprint, https://doi.org/10.1101/113183 (2017)
Wallden M, et al. The synchronization of replication and division cycles in individual E. coli cells. Cell (2016) 166:729-739
Kiviet DJ, et al. Stochasticity of metabolism and growth at the single-cell level. Nature (2014) 514:376-379
De Martino D, Capuani F and De Martino A. Growth against entropy in bacterial metabolism: the phenotypic trade-off behind empirical growth rate distributions in E. coli. Phys Biol (2016) 13:036005
De Martino D, Andersson A, Bergmiller T, Guet C, Tkacik G. Statistical mechanics for metabolic networks during steady-state growth. ArXiv preprint, https://arxiv.org/abs/1703.01818 (2017)
Tarantola A. Inverse problem theory and methods for model parameter estimation (Society for Industrial and Applied Mathematics, 2005)
Chau Nguyen H, Zecchina R, Berg J. Inverse statistical problems: from the inverse Ising problem to data science. ArXiv preprint, https://arxiv.org/abs/1702.01522 (2017)
Cocco S, et al. Inverse statistical physics of protein sequences: a key issues review. ArXiv preprint, https://arxiv.org/abs/1703.01222 (2017)
Roudi Y, Aurell E, Hertz J. Statistical physics of pairwise probability models. Front Comput Neurosci (2009) 3:22
Kaplinsky J and Arnaout R. Robust estimates of overall immune-repertoire diversity from high-throughput measurements on samples. Nature Comm (2016) 7:11881
Zhang Z et al. CeCaFDB: a curated database for the documentation, visualization and comparative analysis of central carbon metabolic flux distributions explored by ^13C-fluxomics. Nucl Acids Res (2015) 43.D1:D549-D557
Orth JD, Fleming RMT and Palsson B. Reconstruction and use of microbial metabolic networks: the core Escherichia coli metabolic model as an educational guide. EcoSal Plus (2013) doi:10.1128/ecosalplus.10.2.1
De Martino D, Mori M, Parisi V.
Uniform sampling of steady states in metabolic networks: heterogeneous scales and rounding. PLOS ONE (2015) 10:e0122670
Braunstein A, Muntoni AP and Pagnani A. An analytic approximation of the feasible space of metabolic networks. Nature Comm (2017) 8:14915
Fernandez-de-Cossio-Diaz J and Mulet R. Fast inference of ill-posed problems within a convex space. J Stat Mech (2016) 2016:073207
http://arxiv.org/abs/1704.08087v1
{ "authors": [ "Daniele De Martino", "Andrea De Martino" ], "categories": [ "q-bio.MN", "cond-mat.dis-nn", "cond-mat.stat-mech", "physics.bio-ph" ], "primary_category": "q-bio.MN", "published": "20170426130258", "title": "Constraint-based inverse modeling of metabolic networks: a proof of concept" }
Privacy Assessment of De-identified Opal Data: A report for Transport for NSW Chris Culnane, Benjamin I. P. Rubinstein, Vanessa Teague ===========================================================================
We consider the privacy implications of public release of a de-identified dataset of Opal card transactions. The data was recently published at https://opendata.transport.nsw.gov.au/dataset/opal-tap-on-and-tap-off. It consists of tap-on and tap-off counts for NSW's four modes of public transport, collected over two separate week-long periods. The data has been further treated to improve privacy by removing small counts, aggregating some stops and routes, and perturbing the counts. This is a summary of our findings.
§.§ About the De-identified Data * This version of the dataset can reasonably be released publicly. * Many of the paired tap-on/tap-off events have been decoupled in the proposed release datasets. Removing the links between related tap-ons and tap-offs, such as those for the same trip, journey, or passenger, is critical to successful privacy protection. We recommend further decoupling in future, e.g., Manly events. * The other significant privacy protection comes from the aggregation or suppression of small counts and sparsely patronised routes. * It is still possible to detect the presence of a suspected individual or small group, with a small probability, in some unusual circumstances. Note that this detects that someone was present, without providing information about who it was. These are probably not a matter of serious concern because of the very small probabilities involved—we estimate less than one in a thousand for small groups, and even less for individuals (about 2 in a million). However, it is important to understand that this is possible and to make a risk assessment based on the likelihood in realistic cases. We provide some specific examples and probability estimates in the main report. * We would not recommend reusing this approach on more sensitive data releases such as trip or trajectory microdata. * It may be reasonable to release more weeks of data, with a careful assessment of how the risks increase as more data is available.
§.§ About the De-identification Techniques Differential privacy is the gold standard for rigorous privacy protection—we support efforts to make open data provably differentially private. However, it is important to understand that differential privacy does not imply perfect privacy protection, but rather the opportunity to quantify privacy loss. We suggest the following improvements to the specific treatment of DP in this dataset: * The DP techniques and parameters should be made public, including the parameters for perturbing the totals. This is good for utility because it allows those analysing the data to understand with what confidence their conclusions hold. It is also good for privacy because it allows a rigorous risk assessment based on the degree of privacy protection, which is never perfect. * The perturbation parameters can be inferred from the data itself anyway, furthering the argument that they might as well be public. Our estimates are in the full report. * The DP treatment achieves at best (ϵ, δ)-differential privacy for a small, but non-zero constant δ. This represents a weaker (less private) form of privacy than the ϵ-differential privacy variant.
The main reason for this is the decision not to perturb zero counts—we recommend that perturbation be extended to zeros in future. * One dataset includes counts with both times and locations, another has only temporal data, and a third has only locations. These three seem to have been derived independently from the same raw data. Presenting three differently-treated versions of the same data may have unexpected implications for privacy. It would be better to derive the temporal data and the spatial data from a differentially-private version of the combined time and location data. The main report describes the specific inferences that can be made. First we explain how it was possible to recover the parameters of the perturbations used for differential privacy. This is not a problem, just a further reason to make those parameters public anyway. The differential privacy framework does not rely on secrecy of these parameters. Second, we use those parameters to quantify the likelihood of an attacker detecting the presence of individuals or small groups in the published data. The probabilities are small, but these sorts of risks should be considered and recomputed before many more weeks of data are released. Third, we discuss how some suppressed values could be recovered with reasonable accuracy based on calculating differences. This demonstrates that independently treating the data in three different ways might inadvertently expose information.
§ OVERVIEW OF THE DATA AND WHY IT PROTECTS PRIVACY The main risk from transport data is that an attacker could use partial information about someone's travel patterns to re-identify their record and learn other information about their trip or journey. For example, seeing where someone gets onto a train might, when linked with the raw data, expose where they get off. If the raw data links events for the same trip, journey or Opal card, then that re-identification might expose other trips or journeys by the same person. The released dataset lists tap-on and tap-off counts from two non-contiguous weeks of data from the Opal public transport ticketing system. Effectively all trip information has been removed—there is (almost) no way to link different ends of the same trip, or different trips by the same person. This means that partial information about a person's travel cannot be linked with the Opal data to extract more information such as the other end of their trip or the locations of their other journeys. The removal of these links is critical to good privacy protection, though of course it has a corresponding effect on utility because it prevents analysis of trips and journeys. Another risk is that an attacker could use public data to detect the presence or absence of a suspected traveller at a particular place and time. Although this risk is mostly mitigated in the dataset, it is not entirely eliminated.
This report gives some specific examples with estimates of the (small) probability of detection, then some suggestions for improving the treatment to reduce the risk in future releases. The dataset includes records for the four different modes of NSW public transport (train, bus, ferry and light rail), with times binned into 15 minute intervals. There are three different datasets with slightly different aggregation techniques—one contains time and location data, another only times and a third only locations. Each count is perturbed by a random value chosen from the Laplace distribution parameterised by a privacy parameter p. We take this to mean that the count in the dataset is c = c_raw + L for some true value c_raw and noise random variable L ∼ Lap(0, p). The probability density of a perturbation of size x is f_L(x) = (1/2p)·exp(-|x|/p). Counts less than 18 after perturbation have been removed. In some datasets, an even higher threshold has been applied. Further details about the data treatment are in a report prepared by Data61.
§ RECOVERING PERTURBATION PARAMETERS BY ANALYSIS OF DIFFERENCES We can't immediately tell how much a number has been perturbed by. However, we can get an estimate because in some instances we observe two different perturbed values which both started from the same raw number. Notably, there are two ferry services which operate point-to-point, i.e. they have a single start and end point. Those services are the Manly Ferry, between Circular Quay Wharf 3 and Manly Wharf, and the Newcastle Ferry between Stockton Wharf and Queens Wharf. The Manly ferry has a duration of 30 minutes and is extremely popular. This results in many trips being distinguishable within the dataset, in which a tap on and a tap off can be paired. An added benefit of the Manly ferry route is that it is the only route that operates an automatic tap-off function, to improve speed of disembarking. This is important from an analysis perspective because it means everyone who tapped on will definitely tap off, whereas for other routes there may be a small number who forget to tap off. Using this property it is possible to look at paired tap ons and tap offs. Two examples are given in Table <ref>. In the raw data the number of tap ons and the equivalent tap offs must be exactly the same. In the released dataset we see small differences between these numbers because an independent randomly-chosen perturbation has been applied to each. If we plot the frequency distribution of those differences we get the plot in Figure <ref>. This is exactly the distribution we would expect from the difference between two Laplace distributions—the mathematical details are in Appendix <ref>. This is a strong indicator that the differences we are seeing are a result of the differentially-private algorithm. This allows us to estimate the perturbation parameter p as approximately 1.4. Whilst we don't learn much directly from recovering the parameter, it does shed some light onto other areas of the dataset and raises a number of questions around the differential-privacy assumptions. The most important of these is the assumption of independence of tuples within the dataset. When this assumption fails it can result in the privacy guarantees being lower than expected <cit.>, <cit.>.
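As a sanity check, the estimate p ≈ 1.4 can also be reproduced numerically. The sketch below (our illustration, not part of the original analysis) fits the scale parameter by maximum likelihood, using the density of the difference of two independent Laplace variables derived in Appendix <ref>; the diffs input stands for the observed tap-on minus tap-off differences, which are not reproduced here.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def diff_laplace_pdf(u, b):
        """Density of the difference of two independent Lap(0, b) variables:
        (|u| + b) * exp(-|u| / b) / (4 * b**2)."""
        return (np.abs(u) + b) * np.exp(-np.abs(u) / b) / (4 * b**2)

    def estimate_p(diffs):
        """Maximum-likelihood estimate of the Laplace scale p from observed
        differences between paired tap-on and tap-off counts."""
        diffs = np.asarray(diffs, dtype=float)
        nll = lambda b: -np.sum(np.log(diff_laplace_pdf(diffs, b)))
        return minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded").x

    # Sanity check on synthetic data generated with p = 1.4:
    rng = np.random.default_rng(0)
    sim = rng.laplace(0, 1.4, 10_000) - rng.laplace(0, 1.4, 10_000)
    print(estimate_p(sim))   # close to 1.4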
When the independence assumption fails in this way, the privacy budget could be exhausted more quickly, or possibly even exceeded. In this example, the two different records of the number of ferry passengers allow a more accurate guess of the raw value than one record alone would give. Recovering this parameter should not have an impact on privacy, and it would be perfectly reasonable to publish these sorts of algorithmic details. Indeed this demonstrates that nothing is gained by hiding them, because they can often be recovered. Publishing the parameters has a real benefit to utility, too, because it allows data analysts to assess the confidence of their conclusions.
§ RECOMMENDATIONS TO ACHIEVE STRONG DIFFERENTIAL PRIVACY In this section we show how an attacker could, with very small probability, detect the presence of a suspected individual or small group. These are probably not a matter of serious practical concern because of the very small probabilities involved—about 2 in a million for individuals. However, it does mean that the dataset does not technically meet the precise definition of strong Differential Privacy. We explain why, and show how this can be easily corrected for future releases. The released dataset consists of a number of records of certain types (such as tap ons at a particular time and location). Data61's report addendum describes the perturbation algorithm (labelled as Algorithm 1 in Section 2, Page 4) for perturbing the counts of each type of record. The addendum asserts that the algorithm preserves differential privacy, according to convention implying the strong form of ϵ-differential privacy. However, strictly speaking it doesn't meet this definition. We explain why it doesn't and how it can be easily corrected. The algorithm follows standard techniques but has a subtle error: when a count is zero, it is not perturbed. This means that if an adversary observes a non-zero perturbed value, it has certainly been derived from a non-zero raw value. Put plainly, an output above the algorithm's suppression threshold can be used to rule out some datasets—those with a zero count at that point. This is easy to correct—simply include zero counts in the perturbations applied to all the other counts. In other words, remove from Algorithm 1 the special case in line 3, “if c is zero, continue.” The effect of the error is greatly reduced by the suppression of small values—a small nonzero value would have to be perturbed up to 18 in order to be detected. This is the main reason that the error doesn't represent a serious issue for privacy—a large enough perturbation for detection would be very improbable. However, if the same algorithm is used for datasets that do not suppress small values, the correction should be made. A concrete example would be a very infrequently used stop, for example in an outer urban area at 5am. Suppose the attacker knows that there is only one person who ever uses that stop at that time, and wants to know whether she used it on a particular date. If the dataset contains a zero count, the attacker doesn't learn anything: it might have been a true zero, or it might have been a one perturbed down to zero. However, if the attacker observes an 18, it could not have originated as a zero. It must have been (at least) a one in the raw data, so the person must have been there. This is rather a far-fetched example, and the probabilities are extremely small, so in practice this is not a serious problem for this data.
However, being able to detect with certainty the presence of at least one person (even with a small probability) implies that the algorithm does not achieve the strong form of differential privacy. We prove this formally in Appendix <ref>. Moreover, while the algorithm may achieve the weaker (ϵ,δ)-differential privacy, we prove a lower bound on the δ possibly achievable in terms of whatever perturbation parameter p the algorithm is run with. Usually (ϵ,δ)-differential privacy requires δ to be negligible in the size of the database. That is not the case here—δ is a (very small) constant value. Irrespective of the chosen threshold, the algorithm does not preserve ϵ-differential privacy for any ϵ. It cannot preserve (ϵ,δ)-differential privacy for δ<exp(-17/p)/(2p). We can now use the estimate from Section <ref> that p ≈ 1.4. This then implies that δ must be at least exp(-17/1.4)/(2.8) ≈ 0.000002. This is the probability of detecting, for certain, an individual at a stop that otherwise had a count of zero. The probabilities for small groups are slightly larger—the same estimation gives a probability of about 0.00004 for detecting a group of 5 and 0.005 for a group of 12.[These numbers depend on our estimate of p. If a precise version of p is known then it can be used to calculate more accurate bounds.] The field of security and privacy is full of examples of subtle errors with serious consequences. This issue was hard to detect, but is easy to fix. This shows the value of careful review—making the algorithm and its analysis public would increase the likelihood that other errors and weaknesses could be found and corrected.
§ ESTIMATING SUPPRESSED VALUES FROM MULTIPLE QUERIES The raw data is presented in three different datasets for each mode of transport: one dataset includes both times and locations, a second aggregates all the times for a given location, and a third aggregates locations and lists only times. When a stop with a small count is suppressed from the time-and-location dataset, it is sometimes possible to estimate the missing value because it forms part of a total in the time-only dataset and has not been suppressed there. This is equivalent to having run multiple queries on the same dataset. When we use the term “query” we regard aggregating mode, tap offs and time as being a distinct query from aggregating mode, location, tap offs and time. When the level of privatizing perturbation is not high (or suppression reduces the effective level of differential privacy), it can be possible to recover some suppressed information using the differences between the datasets that report the same information. Although we don't recover the exact value, we recover a range of possible values within a confidence interval. In future it would be better to generate a differentially-private time-and-location dataset, perform suppression, and then derive the time-only and location-only datasets from it, not directly from the raw data.
§.§ Spotting Secret Ferries Late At Night Table <ref> contains an extract from two different datasets that report the same trips. The first two rows are from the time-and-location dataset, whilst the third row is from the time dataset. In the case of the former, they are the only records related to Ferry journeys made at that time on that date. As we can see, the total number of people getting off at that time in the time-and-location dataset is 132. That value has been perturbed, so the raw value could be slightly larger or smaller than that.
The total number of tap offs across the network at that time was 150 according to the time dataset. This is a sufficiently large difference to indicate that the time dataset contains (nearly 18) additional records that have been suppressed from the time-and-location dataset. By referencing the timetables for the various ferry routes we can see that there are only a limited number of ferries that are timetabled to arrive in the 00:00 to 00:14 time window. Table <ref> shows an extract from the ferry timetables. Only the F1 and F6 ferries are running at that time. We have already accounted for the F1 ferry in the time-and-location dataset. We can therefore be fairly certain that the remaining value belongs to the F6 ferry—unless the perturbations were so large as to account for the observed difference. This leaves two possibilities, either Cremorne Point or South Mosman; the most likely is Cremorne Point, since for those tap offs to be in South Mosman it would require the ferry to arrive early, and for all those disembarking to move up the ramp to the Opal reader and tap off in an extremely short period of time. Therefore, with a reasonable degree of certainty, we can assume that those passengers in fact got off at Cremorne Point. Although this does not give us a precise count of Cremorne Point tap offs, it does allow us to estimate it. A large value (nearly 18) is much more likely to have produced this observation than a small value (less than 5). Details of the estimates are in Appendix <ref>.
§.§ Timetable Extremes A very similar issue arises for sparsely patronised bus routes, which are also suppressed differently in different versions of the released data. The process of binning, or combining, multiple points is a valid approach for protecting privacy. In the released dataset bus stops have been binned to their postcode. This is very effective during peak hours, when there are many buses and many bus stops active in any one postcode during any 15 minute time interval. However, when operating at the extremes of the timetable the frequency of buses and active bus stops dramatically reduces, leading to situations whereby the exact location of the bus stop, and even the bus the passengers boarded, is identifiable. This only has limited impact on privacy since low counts are still removed, but it does reveal some additional information. For an example of this we can look at the N70 night bus route on 2016/07/26. It leaves Penrith Interchange at 04:16. This is in the postcode of 2750. We don't see any time and location entries corresponding to that, so must assume that either no one got on the bus there or only a number below the threshold got on. Table <ref> shows two rows for postcodes that contain the Mount Druitt Station and Blacktown Station bus stops respectively. By comparing with the bus timetables covering the area we can be certain that these two rows refer to the N70 that left Penrith at 04:16 and is headed to City Town Hall. In this instance we do not learn much more, due to the time overlapping with the start of the regular services shortly after 05:00, which acts to hide the trip.
§ COUNTING REMOVED ROWS The Data61 report provides a table of dropped trips. It shows that for LightRail, no trips were dropped for tap ons, while 0.0005% of trips were dropped for tap offs. Looking at the data, there are 23 possible stops on the LightRail network.
Additionally, there is an UNKNOWN tap off location; we assume this is where someone fails to tap off and is charged the maximum rate, or where a failure in the system occurs. As such, we have 23 tap on locations and 24 tap off locations, giving a total of 47. If we multiply that by the number of days, 14, we get a maximum of 658 possible rows related to LightRail. The first issue is that 0.0005% is not a whole number of rows; it equates to 0.00329 rows. Furthermore, when we look at the data we have a total of 658 observations. Either we misunderstand the computation or the correct value should be 0.
§ CONCLUSION: EVALUATING UTILITY AND PRIVACY TRADEOFFS The dataset released here does the right thing by completely breaking the links between different tap ons and tap offs by the same person. This is critical for protecting privacy, though obviously it impacts utility by preventing any queries about trips or journeys from being answered. It is probably inevitable that successful techniques for protecting transport customers from the leakage of information about their trips and journeys will also limit the scientific analysis of trips and journeys. In general, sensitive unit-record level data about individuals cannot be securely de-identified without substantially reducing utility. Records should be broken up so that successful re-identification of one component doesn't reveal other information about the person. The utility of a dataset depends on what queries it was intended to answer. There will never be a zero privacy risk, so it is important that the provided utility exceeds the privacy risk. The method of evaluating the utility in Data61's report is to consider all possible queries in the output set and calculate the error in comparison to running the query on the original data. This is an effective way of determining the impact of the perturbation on the utility of the data, but does not capture the impact of aggregation or suppression. Queries about trips and journeys cannot be answered in the released dataset. Since both aggregation and suppression play a significant part in the privacy of the released dataset, they too should be considered in the utility calculations. Clearly, whenever a privacy-preserving methodology is applied to a dataset a significant amount of utility may be lost. The important question is whether that utility was sought or needed. Prior to planning a data release, the purpose of the release, and the types of queries that it should serve, should be determined. Then the utility calculation could be based on the queries the dataset was intended to deliver, as opposed to those that it actually does deliver. If a dataset is being released without a specific target or set of queries to answer, further thought should be given to whether to release the dataset at all.
There are finite limits on how much data can be safely released—that budget is best spent on carefully targeted releases that answer useful questions. The authors would like to thank Transport for NSW for the opportunity to work on this project, and for agreeing to make this report public. Openness about data privacy is crucial for engineering good privacy protections and earning public trust. The minor problems and unexpected inferences we were able to identify here might help improve the privacy of future releases, by Data61 and others. The more detail that can be published about methods for protecting privacy, the greater the likelihood that errors will be found and fixed before they are repeated. We encourage all open data authorities to describe to the public what they do with data, how they treat or link it, and how it is protected.
§ DIFFERENTIAL PRIVACY BACKGROUND Recall the definitions of differential privacy and its weaker variant. Two datasets D, D' are said to be neighbours if they differ on exactly one row. A randomised mechanism M mapping some input dataset D to a vector of d real numbers is said to be ϵ-differentially private for privacy budget ϵ>0 if, for any neighbouring datasets D, D' and any T⊆ℝ^d, it holds that Pr[M(D)∈ T] ≤ exp(ϵ)·Pr[M(D')∈ T]. If, for an additional privacy parameter δ∈(0,1), a mechanism satisfies, for any neighbouring datasets D,D' and T⊆ℝ^d, Pr[M(D)∈ T] ≤ exp(ϵ)·Pr[M(D')∈ T] + δ, then M is said to preserve the weaker form of (ϵ,δ)-differential privacy. One of the first, and still most common, approaches to privatising a non-private function is the Laplace mechanism. First, we need a way to measure the sensitivity of a non-private function to input perturbation. The global sensitivity of a (non-private) function f mapping dataset D to ℝ^d is a bound Δ(f) ≥ max_{D,D'} ‖f(D) - f(D')‖_1, where ‖x‖_1 = ∑_{i=1}^d |x_i| and the maximum is taken over neighbouring pairs of datasets D, D'. The more sensitive a target, non-private function, the more noise the Laplace mechanism adds so as to smooth out this sensitivity. A mechanism whose output is probably insensitive can preserve differential privacy. For any (non-private) function f mapping dataset D to ℝ^d with known global sensitivity Δ(f) and parameter ϵ>0, the Laplace mechanism[The Laplace distribution Lap(a,b) on ℝ^d with mean parameter a∈ℝ^d and scale parameter b>0 has probability density function exp(-‖a-x‖_1/b)/(2b).] M(D)=f(D)+Lap(0,Δ(f)/ϵ) is ϵ-differentially private.
§ SECOND DATA SET ALGORITHM Data61's report addendum describes the perturbation algorithm (found as Algorithm 1 in Section 2, Page 4). The algorithm operates on two objects of interest: * A dataset D of rows of the form ⟨ d_1,…,d_k⟩ over k columns, each representing common attributes such as a time, location or mode of transport. Each row represents a recorded individual trip event such as a tap on/off at a particular location and time. * A set of attribute combinations we'll label Q, each member of the form ⟨ q_1,…, q_k⟩ with q_i being a value taken from the domain of column i of D. For example, one tuple in Q might represent a type of event: tap on a Manly ferry at Manly Wharf, at 06:45 on a Monday. The goal of the algorithm is to release a sanitised version of the contingency table that results from counting up, for each combination q∈ Q, the number of matching rows in D. In other words, from input D the algorithm outputs a new table with a row per q∈ Q and a single column containing the (approximate) corresponding count.
The algorithm achieves this goal by releasing an approximate count per q∈ Q, as follows: * Compute the exact count c_q(D) of records in D matching the given q; * If c_q(D) = 0: Release 0. * Else: * Compute d_q(D) = c_q(D) + Lap(0, p). * Release d_q(D) if d_q(D)>t, a given threshold, or 0 otherwise. * (Post release, the response is rounded to an integer value.) A code sketch of this release rule is given at the end of this appendix. Threshold t=18 is chosen according to some reasoning about group privacy not fully explained in the addendum. Lap(a,b) refers to a Laplace distribution with mean parameter a and scale parameter b. Figure <ref> illustrates possible response distributions, corresponding to c_q(D)=0, c_q(D)∈(0,t] and c_q(D)>t. These are shown pre-rounding since rounding does not affect differential privacy, by a well-known post-processing lemma. Data61's report addendum doesn't go into any detail about the type of differential privacy preserved or the level of privacy achieved (though an earlier report on a previous version of the dataset does have more detail). In the absence of a qualification, a claim of differential privacy would usually be taken to mean the stronger ϵ-differential privacy (smaller ϵ>0 being more private).
§ PROOF OF THEOREM <REF> The result follows immediately from the fact that on some datasets D, the algorithm cannot output some values that can be output on a neighbouring D'. In full detail, we need only show that there exist neighbouring datasets for which the differential privacy bound cannot hold. (Note that we are analysing the differential privacy of the algorithm pre-rounding, since rounding does not affect the level of privacy achieved.) Consider any q∈ Q and dataset D of arbitrary size n∈ℕ containing one record matching q, and a neighbouring dataset D' with the one matching record changed to non-matching. Then for any ϵ>0, focusing on the marginal response distribution M_q(·) on the (arbitrarily) chosen q, Pr[M_q(D)=t+1] = (1/2p)·exp(-|t+1-c_q(D)|/p) = (1/2p)·exp(-t/p) > exp(ϵ) · 0 = exp(ϵ)·Pr[M_q(D')=t+1], where the first equality follows by substitution of the PDF of the zero-mean Laplace with scale p; the second follows by noting c_q(D)=1 by design; the last equality follows from the fact that c_q(D')=0 and so M_q(D') can never output t+1. Note that this holds for any t≥ 0. The discrepancy arises from the non-perturbation of zero counts. For general outputs of counts over a set Q, the same result occurs. The joint response distribution is Pr[M(D)=𝐭] = ∏_{q∈ Q} Pr[M_q(D)=t_q]. Our construction demonstrates that for any Q we can select datasets D,D' such that one term in the product becomes zero, setting the whole product zero—on D'—while on D the product will be a positive value. The second part of the theorem follows from the same example: to achieve level ϵ on the chosen pair D,D', we need δ at least the left-hand side, since then Pr[M_q(D)=t+1] = (1/2p)·exp(-t/p) ≤ δ = exp(ϵ)·Pr[M_q(D')=t+1] + δ. The estimates for δ in Section <ref> are simply calculations of the likelihood that the Laplace distribution produces a perturbation large enough to take the small values above the threshold of 18, assuming the parameter p estimated below.
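The promised sketch of the release rule follows. It is our reconstruction of the algorithm as described above, not Data61's code; the parameter values are the estimates used in this report, and the perturb_zeros flag implements the correction recommended in the main text.

    import numpy as np

    rng = np.random.default_rng()

    def release(count, p=1.4, t=18, perturb_zeros=False):
        """Release rule following Algorithm 1 above. With perturb_zeros=False
        a raw zero is returned unperturbed, so any released value above the
        threshold certifies a non-zero raw count -- the flaw used in the proof.
        Setting perturb_zeros=True applies the recommended correction."""
        if count == 0 and not perturb_zeros:
            return 0
        d = count + rng.laplace(0.0, p)
        return round(d) if d > t else 0

    # Lower bound on delta: the Laplace density at the distance a raw count
    # must be perturbed to land just above the threshold.
    p = 1.4
    for raw in (1, 5, 12):
        delta = np.exp(-(18 - raw) / p) / (2 * p)
        print(raw, delta)   # ~2e-6, ~3e-5, ~5e-3: the same order as the
                            # estimates quoted in the main text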
§ DISTRIBUTION OF A DIFFERENCE OF LAPLACE RANDOM VARIABLES This section calculates the distribution of the difference of two independent random variables each distributed as Lap(0,b). (What we call b here is p in the observed dataset.) Let L(x) = (1/2b)·exp(-|x|/b) denote the PDF of the Lap(0,b) distribution. For the case u ≥ 0, the PDF of the difference between the two random variables is given by the convolution ∫_{-∞}^{∞} L(x)·L(u-x) dx = (1/4b²)·∫_{-∞}^{∞} exp(-|x|/b)·exp(-|u-x|/b) dx = (1/4b²)·( ∫_{-∞}^{0} exp(x/b)·exp((x-u)/b) dx + ∫_{0}^{u} exp(-x/b)·exp((x-u)/b) dx + ∫_{u}^{∞} exp(-x/b)·exp((u-x)/b) dx ) = (1/4b²)·( (b/2)·exp(-u/b)·[exp(2x/b)]_{x=-∞}^{0} + u·exp(-u/b) - (b/2)·exp(u/b)·[exp(-2x/b)]_{x=u}^{x=∞} ) = (1/4b²)·( (b/2)·exp(-u/b) + u·exp(-u/b) + (b/2)·exp(-u/b) ) = (1/4b²)·(u+b)·exp(-u/b). The u < 0 case must be symmetric. Figure <ref> shows the simulation of this distribution with p ≈ 1.4, plotted beside the observed differences from Section <ref>. The match is fairly close but not perfect, partly because the suppression of numbers less than 18 produces some large values in the observed data. Nevertheless this gives us a reasonable estimate of the likelihood that these perturbations would produce large values.
§ QUANTIFYING CONFIDENCE FOR SECRET FERRIES LATE AT NIGHT This section leverages two indirect, privatised observations of the same trips, so as to calculate probabilities for the raw trip counts. The methodology is indicative of how differences can lead to uncovering raw passenger numbers. From one set of observations we have * A privatised count of 91 passengers at MFV, observing random variable X=x+L_1 where x is the raw count and L_1∼ Lap(0,p); * A privatised count of 41 passengers at CQ, observing random variable Y=y+L_2 where y is the raw count and L_2∼ Lap(0,p); * A privatised count of 0 passengers at CR, based on a raw count z ≤ 18; and * A privatised count of 150 at MFV, CQ, CR summed, observing random variable S=x+y+z+L_3 where L_3∼ Lap(0,p). Here the L_i are independent, identically distributed. (Note that we have omitted the effect of rounding, which does not materially change the nature of the results, while it does complicate their exposition.) We have assumed that the MB ferry arrived on time and was not counted in the fourth release (this information would be easy to collect, and factored into the analysis if it arrived early). We have also assumed the same level of privacy ϵ,δ and sensitivity for the final release as compared to the first three—based on the first report this is a reasonable assumption, as it appears that only the threshold is adapted to changes in aggregation/query (the analysis should be extensible to the case where a different p is used, although it complicates the derivation). Now observe that the difference between the fourth release and the first three produces a familiar quantity: a sum of independent, identically distributed random variables (all Laplacian): S - X - Y = (x + y + z + L_3) - (x + L_1) - (y + L_2) = z + L_1 + L_2 + L_3 = (1/3)∑_{i=1}^{3} (z + 3 L_i) = (1/3)∑_{i=1}^{3} Z_i, where the first equality follows from substituting the definitions of the random variables, the second equality follows from cancelling like terms and noting that the L_i variables are symmetrically distributed about zero, the third equality shares z across the three equal terms of the sum, and the final equality follows from defining independent and identically distributed random variables Z_i∼ Lap(z, 3p).
We denote this final average Z. We note some interesting properties of Z: * Z is an unbiased estimator of z; that is, the expectation of Z is z. Concretely, if we were to observe Z many times we could average those observations and reconstruct z. In our situation we have only one observation of it, so the goodness of this “guess” depends on the variance; * Z has variance 6p². By independence of the L_i, the variance of their sum is the sum of their variances, which are all 2p² (well-known for a Laplace distribution of scale p). Adding the constant z does not change this variance; * Z is symmetrically distributed about its expectation. This fact isn't interesting by itself, but is used below in deriving the confidence interval. It follows from the fact that it is the average of symmetric random variables (to see this, consider that each can be replaced by its negative without affecting the distribution of the average). Note that, assuming p is chosen rather small so as not to destroy utility, p² may be rather small, implying just the single observation of Z is a reasonable estimate of z.
§.§ A Confidence Interval We can also apply the standard argument for estimating confidence intervals to this case of Z estimating z, included here for completeness. We deviate slightly from the typical approach, by using bounds on the distribution of the sample. The point of this exercise is to quantify the uncertainty of the estimate Z of z. We seek a symmetric interval about Z, [Z-a, Z+a], that is likely to capture z, say with probability at least 1-α for some small α close to zero. For example, α chosen as 5% would correspond to a 95% confidence interval. The interpretation of such a confidence interval is that out of 100 observations of the (random) interval we expect the interval to successfully cover z 95 times of the 100. This is the standard interpretation of (frequentist statistical) confidence intervals, commonly used to quantify uncertainty of estimates. 1 - α ≤ Pr[Z-a ≤ z ≤ Z+a] = Pr[-a ≤ Z - z ≤ a] = 1 - 2·Pr[Z - z ≥ a], where the inequality restates the definition of the confidence interval we are seeking; the first equality follows from manipulating the inequalities/restating the condition sought; the second equality follows from the fact that the distribution of Z is symmetric about z. Rearranging further, we are seeking a such that Pr[Z ≥ z + a] ≤ α/2. We now have to invert the distribution function on the left-hand side of this inequality, in order to solve for a. Inverting the CDF, for example, would yield the quantile function. In this case we don't have on hand the distribution of Z (although we could compute it, using characteristic functions). Instead we will bound the probability in terms of a probability that is known and easily invertible. Using a bound does not invalidate the correctness of the confidence interval derived, but rather will produce a wider interval than necessary; as we shall see, the interval is sufficiently tight for our purposes. A natural way to bound tails of i.i.d. sample sums/averages is to use concentration inequalities such as Hoeffding's inequality.
We don't bother pursuing such approaches since we have a sample of such a small number of random variables (three): a simpler/looser route works well. Note the following relationship between events: ⋂_{i=1}^{3} {Z_i ≥ z + a} ⊆ {Z ≥ z + a}, which implies a bound on our probability of interest in (<ref>): α/2 ≥ Pr[Z ≥ z + a] ≥ Pr[∀ i, Z_i ≥ z + a] = ∏_{i=1}^{3} Pr[Z_i ≥ z + a] = ∏_{i=1}^{3} (1/2)·exp(-a/(3p)) = (1/8)·exp(-a/p), where we have substituted in the known cumulative distribution function for Z_i∼ Lap(z, 3p), leveraged the independence of these variables, and simplified. Solving for a yields a ≥ p log(1/(4α)). Summarising, we have proven the following result. A confidence interval for the actual count z of passengers at CR, of confidence level 1-α for any 0<α<1, is given by S-X-Y ± p log(1/(4α)). Given the observed X,Y,S, if p=1.4 for example as estimated above, then we estimate that z is in the range 17.02 to 18.98 with confidence 95% (capturing only one integer, 18). A 99% confidence interval is 16.04 to 19.96 (capturing the integers 17, 18, 19).
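For completeness, a small sketch evaluating the Proposition numerically (our addition; the function name is arbitrary). The log is taken base-10 here, since that is the reading which reproduces the intervals quoted above for p = 1.4.

    import numpy as np

    def confidence_interval(S, X, Y, p=1.4, alpha=0.05):
        """Interval for the suppressed count z, per the Proposition above:
        S - X - Y +/- p * log(1 / (4 * alpha)), with the log taken base-10."""
        centre = S - X - Y
        a = p * np.log10(1.0 / (4.0 * alpha))
        return centre - a, centre + a

    print(confidence_interval(150, 91, 41, alpha=0.05))  # ~(17.02, 18.98)
    print(confidence_interval(150, 91, 41, alpha=0.01))  # ~(16.04, 19.96)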
http://arxiv.org/abs/1704.08547v1
{ "authors": [ "Chris Culnane", "Benjamin I. P. Rubinstein", "Vanessa Teague" ], "categories": [ "cs.CR" ], "primary_category": "cs.CR", "published": "20170427131229", "title": "Privacy Assessment of De-identified Opal Data: A report for Transport for NSW" }
Tight binaries of helium white dwarfs (He WDs) orbiting millisecond pulsars (MSPs) will eventually “merge” due to gravitational damping of the orbit. The outcome has been predicted to be the production of long-lived ultra-compact X-ray binaries (UCXBs), in which the WD transfers material to the accreting neutron star (NS). Here we present complete numerical computations, for the first time, of such stable mass transfer from a He WD to a NS. We have calculated a number of complete binary stellar evolution tracks, starting from pre-LMXB systems, and evolved these to detached MSP+WD systems and further on to UCXBs. The minimum orbital period is found to be as short as 5.6 minutes. We followed the subsequent widening of the systems until the donor stars become planets with a mass of ∼ 0.005 M_⊙ after roughly a Hubble time. Our models are able to explain the properties of observed UCXBs with high helium abundances and we can identify these sources on the ascending or descending branch in a diagram displaying mass-transfer rate vs. orbital period. binaries: close — X-rays: binaries — stars: mass-loss — stars: neutron — white dwarfs — pulsars: general
§ INTRODUCTION The detection of radio millisecond pulsars (MSPs) in close orbits with helium white dwarfs (He WDs) raises interesting questions about their future destiny. One example is PSR J0348+0432 <cit.> which has an orbital period of 2.46 hr. Due to continuous emission of gravitational waves, this system will “merge” in about 400 Myr. However, rather than resulting in a catastrophic event, once the WD fills its Roche lobe, the outcome is expected to be a long-lived ultra-compact X-ray binary <cit.>. These sources are tight low-mass X-ray binaries (LMXBs) observed with an accreting neutron star (NS) and a typical orbital period of less than 60 min. Because of the compactness of UCXBs, the donor stars are constrained to be either a WD, a semi-degenerate dwarf or a helium star <cit.>. Depending on the mass-transfer rate, the UCXBs are classified in two categories: persistent and transient sources. Until now, only 14 UCXBs have been confirmed (9 persistent, 5 transient), and an additional 14 candidates are known. Therefore, we can infer that UCXBs are difficult to detect or represent a rare population. Earlier studies <cit.> have suggested the need for extreme fine tuning of initial parameters (stellar mass and orbital period of the LMXB progenitor systems) in order to produce a UCXB from an LMXB system. Analytical investigations by <cit.> and <cit.> on the evolution of UCXBs with NS or black hole accretors reveal that these systems can evolve to orbital periods of 100-110 min, thereby explaining the existence of the so-called diamond planet pulsar <cit.>. UCXBs are detected with different chemical compositions in the spectra of their accretion discs <cit.>. To explain this diversity requires donor stars which have evolved to different levels of nuclear burning and interior degeneracy, and therefore to different scenarios for the formation of UCXBs.
Since a large fraction of the UCXBs are found in globular clusters, some of these UCXB systems could also have formed via stellar exchange interactions <cit.>. For a 1.4 M_⊙ NS accretor, only CO WDs with a mass of ≲ 0.4 M_⊙ lead to stable UCXB configurations <cit.>, although recent hydrodynamical simulations suggest that this critical WD mass limit could be lower <cit.>. Here we focus on numerical computations covering, for the first time, complete evolution of NS–main sequence star binaries which evolve into LMXBs and later produce UCXBs with a He WD donor star. The evolution is terminated when the donor reaches a mass of ∼0.005 M_⊙ (about 5 Jupiter masses) after several Gyr, with a radius close to the maximum radius of a cold planet. In Section <ref>, we present the applied stellar evolution code, as well as key assumptions on binary interactions and a summary of our applied model. The results of our calculations are presented in Section <ref>. A comparison with the observed UCXB systems is given in Section <ref>, and finally we further discuss and summarise our results in Section <ref>.
§ NUMERICAL CODE AND INITIAL SETUP We applied the MESA code <cit.> for calculating the evolution of NS–main sequence star binaries. Our initial binary system consists of a 1.3 M_⊙ NS (treated as a point mass) and a zero-age main-sequence (ZAMS) donor star of mass M_2=1.4 M_⊙ with solar chemical composition (X=0.70, Y=0.28 and Z=0.02). We investigated a range of initial orbital periods of P_orb,i≃ 2-5 days, with a total of 40 models. These orbits were assumed to be circular and synchronized. We assumed standard loss of orbital angular momentum due to magnetic braking (only significant during the LMXB phase), gravitational wave (GW) radiation and mass loss <cit.>. We modelled the latter via Roche-lobe overflow (RLO) according to the isotropic re-emission model <cit.>, in which matter flows from the donor star to the accreting NS in a conservative manner and a fraction <cit.> is ejected from the vicinity of the NS, ensuring at all times sub-Eddington mass-accretion rates (|Ṁ_2|<Ṁ_Edd≃ 3.0× 10^-8 M_⊙ yr^-1). For the initial phase of the mass transfer in the UCXB stage (once the He WD remnant initiates RLO to the NS), the mass-transfer rate is super-Eddington, and thus the NS accretion rate is limited to Ṁ_Edd. For simplicity, and to avoid too many free parameters, we do not include the possibility of a circumbinary (CB) disc <cit.>, nor do we consider irradiation of the donor star via pulsar winds or photons.
§ RESULTS §.§ LMXB/pre-UCXB evolution Fig. <ref> shows the orbital period evolution as a function of age for several LMXBs with a range of P_orb,i=2.2-5.0 days and magnetic braking index γ=5 <cit.>. For these close-orbit binaries, magnetic braking efficiently shrinks the orbits such that the companion star is forced to initiate RLO within 1-3 Gyr. Based on the classification of <cit.> and <cit.>, we divide the orbital evolution of the resulting LMXBs into diverging, intermediate and converging systems. Notably, we find a narrow range of P_orb,i for which LMXB systems detach and produce a He WD in a tight orbit <cit.>. Such systems are observable as radio MSP binaries with typical P_orb≃ 2-9 hr <cit.>. These MSP binaries will shrink their orbits further by GW radiation; most of them to the extent that their He WDs are forced to fill their Roche lobe within a total age of less than a Hubble time. At this stage the systems become UCXBs, typically when P_orb≃ 10-50 min, cf. Fig.
§ RESULTS §.§ LMXB/pre-UCXB evolution Fig. <ref> shows the orbital period evolution as a function of age for several LMXBs with a range of P_ orb,i=2.2-5.0 days and magnetic braking index γ=5 <cit.>. For these close-orbit binaries, magnetic braking efficiently shrinks the orbits such that the companion star is forced to initiate RLO within 1–3 Gyr. Based on the classification of <cit.> and <cit.>, we divide the orbital evolution of the resulting LMXBs into diverging, intermediate and converging systems. Notably, we find a narrow range of P_ orb,i for which LMXB systems detach and produce a He WD in a tight orbit <cit.>. Such systems are observable as radio MSP binaries with typical P_ orb≃ 2-9 hr <cit.>. These MSP binaries will shrink their orbits further by GW radiation; most of them to the extent that their He WDs are forced to fill their Roche lobe within a total age of less than a Hubble time. At this stage the systems become UCXBs, typically when P_ orb≃ 10-50 min, cf. Fig. <ref>. As a result of the (M_ WD,P_ orb)–relation for He WDs <cit.>, all our UCXBs initially have M_ WD≃ 0.15-0.17 M_⊙. §.§ UCXB evolutionary tracks In Fig. <ref>, we plot evolutionary tracks for the UCXB phase (i.e. post-LMXBs) of six systems with values of P_ orb,i between 3.25 and 3.52 days, cf. coloured tracks in Fig. <ref>. Several features of UCXB formation can be seen from Fig. <ref>, which we now discuss in more detail. Firstly, it is evident that LMXBs with P_ orb,i below a certain threshold value (depending on initial values of M_2, M_ NS, chemical composition and treatment of magnetic braking; here ∼ 3.45 days) will never detach from RLO to produce a He WD. Their donor stars still possess a significant hydrogen content – even in their cores – and due to their very small nuclear burning rates they still have a mixture of hydrogen and helium when they finally become degenerate near the orbital period minimum, P_ orb^ min≃ 10-85 min <cit.>. These converging systems become hydrogen-rich UCXBs and are most likely the progenitor systems of the so-called black widow MSPs <cit.>. For our two hydrogen-rich systems with P_ orb,i=3.25 days (red track) and P_ orb,i=3.44 days (orange track), we find P_ orb^ min≃ 35 min and 8 min, respectively. Secondly, for UCXBs with He WD donors, the larger the value of P_ orb,i, the smaller is P_ orb at the onset of the UCXB phase. The reason is that in wider binaries He WDs have larger masses <cit.>; and, more importantly, since in wider systems it takes a longer time for GWs to cause the He WDs to fill their Roche lobe, they will be less bloated <cit.>, i.e. more compact (and colder) by the time they reach the onset of the UCXB phase. Thirdly, we identify a unique pattern in the tracks of these UCXBs (see black arrows). They begin on the vertical tracks, when the He WD initiates RLO, and continue decreasing P_ orb due to GWs while climbing up the ascending branch until the tip of the track at P_ orb^ min. For our He WD UCXBs we find typically P_ orb^ min≃ 5-7 min. Following P_ orb^ min, which coincides with a maximum value of |Ṁ_2|≃ 10 Ṁ_ Edd (see also Fig. <ref>), all systems settle on the common declining branch while P_ orb steadily increases on a Gyr timescale, with the relation: log |Ṁ_2/M_⊙ yr^-1|=-5.15·log (P_ orb/ min)-2.62. The shape of the UCXB tracks can be understood from the ongoing competition between GW radiation and orbital expansion caused by mass transfer/loss. The reason that the maximum value of |Ṁ_2| coincides with P_ orb^ min is partly that the He WD donor stars are fully degenerate, which means that their mass–radius exponent is negative (Fig. 4c), whereby they expand in response to mass loss <cit.>. The onset of RLO leads not only to very high mass-transfer rates (Fig. <ref>) but also to an outward acceleration of the orbital size, as a result of the small mass ratio (q≃ 0.1) between the two stars, such that at some point the rate of orbital expansion dominates over that of the shrinking due to GW radiation. As the orbits widen further, the value of |Ṁ_2| decreases, the strength of GW radiation levels off due to its steep dependence on orbital separation, and the systems settle on the common declining branch while the orbit expands at a continuously slower pace. An analogy can be drawn between our UCXB models and RLO in double WD systems <cit.>. Our final M–R tracks (Fig. 4c), terminating at 5 Jupiter masses (∼0.005 M_⊙), are in good agreement with (within 5% of) the adiabatic helium models of <cit.> and the cold helium models of <cit.>. For a comparison to M–R tracks of cold planets, see Fig. 4d. The He WD donors never crystallise but remain Coulomb liquids with Γ≤ 35.
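The fitted declining-branch relation given above can be evaluated directly; a minimal sketch:

    import numpy as np

    def mdot_declining_branch(p_orb_min):
        """|Mdot_2| in M_sun/yr on the common declining branch, from the
        relation log|Mdot_2| = -5.15 log(P_orb/min) - 2.62 quoted above."""
        return 10.0 ** (-5.15 * np.log10(p_orb_min) - 2.62)

    for p in (10.0, 30.0, 70.0):          # orbital periods in minutes
        print(p, f"{mdot_declining_branch(p):.1e}")

At P_ orb = 70-80 min this gives |Ṁ_2| of order a few times 10^-13 M_⊙ yr^-1, illustrating how slowly the widest systems evolve along this branch.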
In all our models, the final mass of the (post) UCXB NS is 1.70±0.01 M_⊙, reflecting the assumed NS birth mass (here 1.3 M_⊙) and the accretion efficiency. § COMPARISON TO OBSERVATIONS At first sight in Fig. <ref>, we notice that our UCXB tracks can explain the location of the observed systems quite well. The data were taken from <cit.> and the error bars on |Ṁ_2| reflect the bolometric correction uncertainty of the observed X-ray luminosity. The seven persistent and short-orbital-period UCXBs (P_ orb≃ 10-25 min) are located on either the ascending or the descending branch. A simple way to distinguish between the two options is the sign of Ṗ_ orb. Unfortunately, the intrinsic value is difficult to derive in practice since many of the UCXBs are located in dense globular clusters, thereby suffering from acceleration in the cluster potential or gravitational perturbations from a nearby object. Moreover, a CB disc may cause Ṗ_ orb<0 on the declining branch <cit.>. Statistically, however, UCXBs are much more likely to be on the declining branch since the temporal evolution is much slower along this branch <cit.>. The three persistent sources with P_ orb≃ 40-50 min are best described by an LMXB system which evolves continuously into a UCXB without forming a detached WD (cf. red track for P_ orb,i=3.25 days). Alternatively, for a WD donor, a CB disc model can also account for these sources as a result of a significant increase in |Ṁ_2| <cit.>. All five transient systems likely populate the declining branch of the evolutionary tracks. Due to their wider orbits, the radius of their accretion disc is larger and its temperature lower, which causes thermal viscous instabilities and thus a transient behaviour <cit.>. <cit.> suggested that this might be the reason why relatively few UCXBs are seen in wide orbits with P_ orb>30 min (keeping in mind that UCXBs should accumulate in wide orbits over time). The transient behaviour allows radio MSPs to turn on, whereby the “radio ejection mechanism" <cit.> can prevent further accretion. However, pulsar-wind irradiation of the donor may operate, possibly until M_2 < 0.004 M_⊙ (if beaming is favourable), at which point the star is likely to undergo tidal disruption <cit.> <cit.>, potentially leaving behind pulsar planets <cit.>. Our evolutionary tracks are terminated just before this point, when M_2≃ 0.005 M_⊙ and P_ orb=70-80 min. Modelling even wider UCXBs with He WD donors will probably require irradiation effects to be included. Whereas these effects apparently have little effect on the UCXB evolution in an (M_2, P_ orb)–diagram <cit.>, they do accelerate the evolution of these systems <cit.>, which is needed to understand some of the observed relatively wide-orbit (post-UCXB) MSP binaries with very small companion star masses. So far, we have not discussed the chemical composition of the observed spectra of the UCXB systems, which holds the key to understanding their origin <cit.>. Although the nature of the donor stars has only been established firmly in some cases, it seems already quite clear that a variety of progenitor models is needed to explain their origin. Our modelling presented here can account for the UCXBs with helium (or hydrogen) lines in their spectra. § SUMMARY We have used MESA to calculate the complete evolution of close binary systems leading to the formation and evolution of UCXBs.
This includes numerical calculations (to our knowledge, for the first time) of stable mass transfer from a WD to an accreting NS. In this work, we have concentrated on an initial binary with a relatively massive ZAMS donor star of 1.4 M_⊙. This allows for producing UCXBs at P_ orb^ min within 10 Gyr. In another recent work, even more massive donor stars have been suggested to produce UCXBs <cit.>. We also performed additional modelling using M_2=1.2 M_⊙ and β=0.3. The evolutionary tracks of these systems closely resemble those plotted in Fig. <ref>, although in this case no UCXBs are produced before a total age of 11.6 Gyr. A particular uncertainty in the first part of our calculations is the modelling of magnetic braking. It is evident that Nature produces tight LMXBs, CVs and NS–WD binaries via this channel, but the calibration of the effect, as well as e.g. the required depth of the convective envelope, remains uncertain <cit.>. To explain the full population of UCXBs, one needs to perform similar computations leading to donor stars evolved to different degrees of nuclear burning and which therefore have different chemical compositions (i.e. hydrogen-rich dwarf stars, naked helium stars, He WDs or CO WDs). Whether such computations (with or without a common-envelope phase) are also possible for CO WDs (or hybrid-CO WDs) remains to be shown. § ACKNOWLEDGEMENTS RS thanks AIfA, University of Bonn, for funding during this MSc project and Pablo Marchant for help with MESA. We thank Craig Heinke, John Antoniadis and the referee, Lennart van Haaften, for very useful comments.
http://arxiv.org/abs/1704.08260v2
{ "authors": [ "Rahul Sengar", "Thomas M. Tauris", "Norbert Langer", "Alina G. Istrate" ], "categories": [ "astro-ph.SR", "astro-ph.HE" ], "primary_category": "astro-ph.SR", "published": "20170426180004", "title": "Novel modelling of ultra-compact X-ray binary evolution - stable mass transfer from white dwarfs to neutron stars" }
The Private University College of Education of the Diocese of Linz, Salesianumweg 3, A-4020 Linz, Austria, [email protected] No, This is not a Circle! Zoltán Kovács========================= A popular curve shown in introductory maths textbooks seems like a circle. But it is actually a different curve. This paper discusses some elementary approaches to identify the geometric object, including novel technological means by using GeoGebra. We demonstrate two ways to refute the false impression, two suggestions to find a correct conjecture, and four ways to confirm the result by proving it rigorously. All of the discussed approaches can be introduced in classrooms at various levels from middle school to high school. § BUT IT LOOKS LIKE A CIRCLE One possible anti-boredom activity is to simulate string art in a chequered notebook, as seen below the title of this paper. This kind of activity is easy enough to do very early, even as a child during the early school years. The resulting curve, the contour of the “strings”, or more precisely, a curve whose tangents are the strings, is called an envelope. According to Wikipedia <cit.>, an envelope of a family of curves in the plane is a curve that is tangent to each member of the family at some point.[This definition is however polysemic: the Wikipedia page lists other non-equivalent ways to introduce the notion of envelopes. See <cit.> for a more detailed analysis of the various definitions.] Let us assume that the investigated envelope—below the title of this paper—which is defined similarly to the learner activity in Fig. <ref>, is a circle. In the investigated envelope it will be assumed that a combination of 4 simple constructions is used, the axes are perpendicular, and the sums of the joined numbers are 8. To be more general, these sums may be changed to different (but fixed) numbers. These sums will be denoted by d, recalling the distance between the origin and the furthermost point of the exterior strings. Under the circle assumption, in our case the family of the strings must be equally far from the center of the circle. This needs to be true because the circle is the only curve whose tangents are equally far from the center. Due to the symmetry of the 4 parts of the figure, the only possible center for the circle is the midpoint of the figure. Let us consider the top-left part of the investigated figure (Fig. <ref>). On the left and the top the strings AB and BC are at distance d=OA=OC from the center O. On the other hand, the diagonal string DE is at distance OF=3/4· d·√(2) from the assumed center, according to the Pythagorean theorem. This latter distance is approximately 1.06· d, that is, more than d. Consequently, the curve cannot be an exact circle. That is, it is indeed not a circle. In schools the Pythagorean theorem is usually introduced much later than the point at which students are able to simply measure the lengths of OA and OF with a ruler. The students need to draw, however, a large enough figure, because the difference between OA and OF is just about 6%. Actually, both methods obviously prove that the curve is different from a circle, and the latter one can already be discussed at the beginning of middle school. § OK, IT IS NOT A CIRCLE—BUT WHAT IS IT THEN? Let us continue with a possible classroom solution of the problem. Since the strings are easier to observe than the envelope, it seems logical to collect more information about the strings.
Extending the definition of the investigated envelope by continuing the strings in both directions, we learn how the slope of the strings changes as the extension continues further and further (Fig. <ref>). Here we remark that the top-left part of the investigated envelope is now mirrored about the first axis (cf. Fig. <ref>); therefore not the sums but the differences will be constant, namely d=10. The strings in the extension support the idea that the tangents of the curve, when |n| is large enough, are almost parallel to the line y=-x. This observation may refute the opinion that the curve is perhaps a hyperbola (which has two asymptotes, but they are never parallel). On the other hand, by changing the segments in Fig. <ref> to lines, an obvious conjecture can be claimed, namely that the curve is a parabola (Fig. <ref>). Thus the observed curve must be a union of 4 parabolic arcs. § WE HAVE A CONJECTURE—CAN WE VERIFY THAT? A GeoGebra applet in Fig. <ref> can explicitly compute the equation of the envelope and plot it accurately. (See <cit.> for a detailed survey on the currently available software tools to visualize envelopes dynamically.) For technical reasons a slider cannot be used in this case—instead a purely Euclidean construction is required as shown in the figure. Free points A and B are defined to set the initial parameters of the applet, and finally segment g=CC” describes the family of strings. The Envelope command will then produce an implicit curve, which is in this concrete case x^2+2xy-20x+y^2+20y=-100. GeoGebra uses heavy symbolic computations in the background to find this curve <cit.>. Since these computations are performed efficiently, the user may even drag points A and B to different positions and investigate the equation of the implicit curve. The curves are recomputed quickly enough to give an overview of the resulting curve in general—they are clearly quadratic algebraic curves in the variables x and y. Without any deeper knowledge of the classification of algebraic curves, of course, young learners cannot really decide whether the resulting curve is indeed a parabola. Advanced learners and maths teachers may, however, know that all real quadratic curves are either circles, ellipses, hyperbolas, parabolas, unions of two lines or points in the plane. As above, we can argue that the position of the strings as tangents supports only the case of parabolas here. On the other hand, for young learners we can still find better positions for A and B. It seems quite obvious that the curve remains essentially the same (up to similarity), so we are free to choose the positions of A and B. By keeping A at the origin and putting B on the line y=-x we can observe that the parabola is of the form y=ax^2+bx+c, which is the usual way a parabola is introduced in the classroom. (In our case actually b=0.) For example, when B=(10,-10), the implicit curve is x^2+20y=-100, and this can be easily converted to y=-1/20x^2-5 (Fig. <ref>). This result is computed by using precise algebraic steps in GeoGebra. One can check these steps by examining the internal log—in this case 16 variables and 11 equations will be used, including computing a Jacobian determinant and a Gröbner basis when eliminating all but two variables from the equation system. That is, GeoGebra actually provides a proof, albeit its steps remain hidden from the user.
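For readers who want to see such an elimination explicitly, the following minimal sketch applies the classical envelope conditions F=0 and ∂F/∂e=0 (rather than GeoGebra's actual Gröbner-basis machinery) to the configuration A=(0,0), B=(10,-10), where the string with parameter e joins C=(e,-e) and C”=(e-10,e-10), as used in the proofs of the next section:

    import sympy as sp

    x, y, e = sp.symbols('x y e')

    # the string through C = (e, -e) and C'' = (e - 10, e - 10); its slope is 1 - e/5
    F = (1 - e/5) * (x - e) - y - e

    # envelope conditions: F = 0 and dF/de = 0; eliminate the parameter e
    e_sol = sp.solve(sp.diff(F, e), e)[0]
    envelope = sp.simplify(F.subs(e, e_sol))
    print(sp.expand(-20 * envelope))   # -> x**2 + 20*y + 100, i.e. y = -x^2/20 - 5

Only the elimination technique differs from GeoGebra's internal computation; the resulting quadratic is the same.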
(Showing the detailed steps when manipulating an equation system with so many variables makes no real sense from the educational point of view: the steps are rather mechanical and may fill hundreds of pages.) As a conclusion, it is actually proven that the curve is a parabola. Of course, learners may want to understand why it is that curve. § PROOF IN THE CLASSROOM Here we provide two simple proofs of the fact that the envelope in Fig. <ref> is a parabola. The first method follows <cit.>. We need to prove that the segment CC” is always a tangent of the function y=-1/20x^2-5. First we compute the equation of the line CC” to find the intersection point T of CC” and the parabola. We recognize that if the point C=(e,-e), then the point C”=(e-10,e-10). Now we have two possible approaches to continue. * Since the line CC” has an equation of the form y=ax+b, we can set up equations for the points C and C” as follows: -e = a· e+b and e-10 = a·(e-10)+b. Now (<ref>)-(<ref>) results in a=1-1/5e and thus, by using (<ref>) again, we get b=-2e+1/5e^2. Second, to obtain the intersection point T we consider the equation ax+b=-1/20x^2-5, which can be reformulated as searching for the roots of the quadratic function 1/20x^2+ax+b+5. The string CC” is a tangent if and only if the discriminant of this quadratic expression is zero. Indeed, the discriminant is a^2-4·1/20·(b+5)=a^2-b/5-1, which is, after inserting a and b, obviously zero. * Another method to show that CC” is a tangent of the parabola is to use elementary calculus. School curricula usually include computing tangents of polynomials of the second degree. Let T=(t,-1/20t^2-5). Now the slope of the tangent of the parabola at T is (-1/20t^2-5)'=-1/10t. This means that the equation of the tangent is y=-1/10tx+b, where b can be computed by using x=t and y=-1/20t^2-5, that is, b=1/20t^2-5. The equation of the tangent is consequently y = -1/10tx+1/20t^2-5. Let us assume now that C and C” are the intersections of the tangent with the lines y=-x and y=x, respectively. The x-coordinate of C can be found by putting y=-x in (<ref>); it is x_C=(1/20t^2-5)/(1/10t-1). On the other hand, the x-coordinate of C” can be found by putting y=x in (<ref>); it is x_C”=(1/20t^2-5)/(1/10t+1). By using some basic algebra it can be confirmed that x_C-10=x_C”, that is, CC” is indeed a string. The second proof is technically longer than the first one but still achievable in many classrooms. Both approaches are purely analytical proofs without any knowledge of the synthetic definition of a parabola. The fact is that in many classrooms, unfortunately, the synthetic definition is not introduced or even mentioned. § A SYNTHETIC APPROACH In the schools where the synthetic definition of a parabola is also introduced, the most common definition is that it is the locus of points in the plane that are equidistant from both the directrix line ℓ and the focus point F.
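This definition can already be checked symbolically for the concrete parabola y=-1/20x^2-5 obtained above. In the minimal sketch below, the candidate focus F=(0,-10) and directrix y=0 are our assumptions; they are consistent with the construction described in the next subsection, where F is the midpoint of BB' and ℓ passes through A:

    import sympy as sp

    x = sp.symbols('x', real=True)
    y = -sp.Rational(1, 20) * x**2 - 5    # the parabola found above

    # squared distance to the candidate focus F = (0, -10) minus the
    # squared distance to the candidate directrix y = 0 (the x-axis)
    diff = (x**2 + (y + 10)**2) - y**2
    print(sp.expand(diff))                # -> 0: every point is equidistant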
§.§ An automated answer Without any further considerations it is possible to check (actually, prove) that the investigated curve is a parabola also in the symbolic sense. To achieve this result one can invoke GeoGebra's Relation Tool <cit.> after constructing the parabola synthetically as seen in Fig. <ref>. Here the focus point F is the midpoint of BB', and the directrix line ℓ is a parallel line to BB' through A. This piece of information should probably be kept secret by the teacher—the learners could find it on their own. Now, to check whether the string g is indeed a tangent of the parabola, the tangent point T has been created as an intersection of the string and the parabola. Also a tangent line j has been drawn, and finally a line g', which is the extension of the segment g to a full line. At this point it is possible to compare g' and j by using the Relation Tool. The Relation Tool first compares the two objects numerically and reports that they are equal. By clicking the “More…” button the user obtains the symbolic result of the synthetic statement (Fig. <ref>). We recall that although the construction was performed synthetically, the symbolic computations were done after translating the construction to an algebraic setup. Thus GeoGebra's internal proof is again based on algebraic equations and still hidden from the user. But in this case we indeed have a general proof for each possible construction setup, not only for one particular case as for the Envelope command. §.§ A classical proof Finally we give a classical proof to answer the original question. Here every detail uses only synthetic considerations. The first part of the proof is a well-known remark on the bisection property of the tangent. That is, by reflecting the focus point about any tangent of the parabola, the mirror image is a point of the directrix line. (See e.g. <cit.>, Sect. 3.1 for a short proof.) Clearly, it is sufficient to show that the strings have this kind of bisection property: this will result in confirming the statement. In Fig. <ref> the tangent to the parabola is denoted by j. Let F' be the mirror image of F about j. We will prove that F'∈ℓ. Let G denote the intersection of j and FF'. Clearly ∠ CGF and ∠ C”GF are right angles because of the reflection. By construction, the triangles FBC and FAC” are congruent. Thus FC=FC”. Moreover, the triangle CFC” is isosceles and FG is its bisector at F; in addition, CG=C”G. Let n denote a parallel line to AB through C”. Let E be the intersection of ℓ and n. Also let H be the intersection of j and ℓ, and let I be the intersection of j and the line BB'. Since ℓ and BB' are parallel, moreover AB and n are also parallel, and EC”=CB (because E is actually the rotation of A around C” by 90 degrees), we conclude that the triangles C”EH and CBI are congruent. This means that IC=C”H. That is, using also CG=C”G, G must be the midpoint of HI, thus G lies on the mid-parallel of ℓ and BB'. As a consequence, reflecting F about G, the resulting point F' is surely a point of the line ℓ. § CONCLUSION An analysis of the string art envelope was presented at different levels of mathematical knowledge, by refuting a false conjecture, finding a true statement and then proving it with various means. Discussion of a non-trivial question by using different means can give a better understanding of the problem. What is more, reasoning by visual “evidence” can be misleading, and only rigorous (or rigorous but computer-based) proofs can be satisfactory. It should be noted that the parabola property of the string art envelope is well known in the literature on Bézier curves, but usually not discussed in maths teacher training. The de Casteljau algorithm for a Bézier curve of degree 2 is itself a proof that the curve is a parabola. (See <cit.> for more details.) Also among maths professionals this property seems to be rarely known. A recent example of a tweet of excitement is from February 2017 (Fig. <ref>).
<ref>).On the other hand, our approach highlighted the classroom introduction of the string art parabola, and suggested some very recent methods by utilizing computers in the middle and high school to improve the teacher's work and the learners' skills.Lastly, we remark that the definition of the string art envelope looks similar to the envelope of other family of lines. For example, the envelope of the sliding ladder results in a different curve, the astroid <cit.>, a real algebraic curve of degree 6. While “physically” that is easier to construct (one just needs a ladder-like object, e. g. a pen), the geometric analysis of that is more complicated and usually involves partial derivatives. (See also <cit.> on a proof for identifying the string art parabola by using partial derivatives.)§ ACKNOWLEDGMENTSThe author thanks Tomás Recio and Noël Lambert for comments that greatly improved the manuscript.The introductory figure was drawn by Benedek Kovács (12), first grade middle school student.
http://arxiv.org/abs/1704.08483v3
{ "authors": [ "Zoltán Kovács" ], "categories": [ "math.HO", "cs.AI" ], "primary_category": "math.HO", "published": "20170427090410", "title": "No, This is not a Circle" }
Speech is the most common communication method between humans and involves the perception of both auditory and visual channels. Automatic speech recognition focuses on interpreting the audio signals, but it has been demonstrated that video can provide information that is complementary to the audio. Thus, the study of automatic lip-reading is important and is still an open problem. One of the key challenges is the definition of the visual elementary units (the visemes) and their vocabulary. Many researchers have analyzed the importance of the phoneme to viseme mapping and have proposed viseme vocabularies with lengths between 11 and 15 visemes. These viseme vocabularies have usually been manually defined by their linguistic properties and in some cases using decision trees or clustering techniques. In this work, we focus on the automatic construction of an optimal viseme vocabulary based on the association of phonemes with similar appearance. To this end, we construct an automatic system that uses local appearance descriptors to extract the main characteristics of the mouth region and HMMs to model the statistical relations of both viseme and phoneme sequences. To compare the performance of the system, different descriptors (PCA, DCT and SIFT) are analyzed. We test our system on a Spanish corpus of continuous speech. Our results indicate that we are able to recognize approximately 58% of the visemes, 47% of the phonemes and 23% of the words in a continuous speech scenario and that the optimal viseme vocabulary for Spanish is composed of 20 visemes. Automatic Viseme Vocabulary Construction to Enhance Continuous Lip-reading Adriana Fernandez-Lopez and Federico M. Sukno, Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain {adriana.fernandez, federico.sukno}@upf.edu ==============================================================================================================================================================================================================§ INTRODUCTION Speech is the most used communication method between humans, and it is considered a multi-sensory process that involves perception of both acoustic and visual cues, ever since McGurk demonstrated the influence of vision in speech perception <cit.>. Many authors have subsequently demonstrated that the incorporation of visual information into speech recognition systems improves robustness <cit.>. Much of the research in automatic speech recognition (ASR) systems has focused on audio speech recognition, or on the combination of both modalities using audiovisual speech recognition (AV-ASR) systems to improve the recognition rates, but visual automatic speech recognition (VASR) systems are rarely analyzed alone <cit.>, <cit.>, <cit.>, <cit.>. Even though the audio is in general much more informative than the video signal, human speech perception relies on visual information to help decode spoken words when auditory conditions are degraded <cit.>, <cit.>, <cit.>, <cit.>. In addition, visual information provides complementary cues such as speaker localization, the articulation place, and the visibility of the tongue, the teeth and the lips.
Furthermore, for people with hearing impairments, the visual channel is the only source of information if there is no sign language interpreter <cit.>, <cit.>, <cit.>. The performance of audio-only ASR systems is very high if there is not much noise to degrade the signal. However, in noisy environments AV-ASR systems improve the recognition performance when compared to their audio-only equivalents <cit.>, <cit.>. On the contrary, in visual-only ASR systems the recognition rates are rather low. It is true that access to speech recognition through the visual channel is subject to a series of limitations. One of the key limitations lies in the ambiguities that arise when trying to map visual information into the basic phonetic unit (the phonemes), i.e. not all the phonemes that are heard can be distinguished by observing the lips. There are two types of ambiguities: i) there are phonemes that are easily confused because they are perceived as visually similar to others. For example, the phones /p/ and /b/ are visually indistinguishable because voicing occurs at the glottis, which is not visible. ii) there are phonemes whose visual appearance can change (or even disappear) depending on the context (co-articulated consonants). This is the case of the velars, consonants articulated with the back part of the tongue against the soft palate (e.g. /k/ or /g/), because they change their position in the palate depending on the previous or following phoneme <cit.>. As a consequence of these limitations, there is no one-to-one mapping between the phonetic transcription of an utterance and the corresponding visual transcription <cit.>. From the technical point of view, lip-reading depends on the distance between the speakers, on the illumination conditions and on the visibility of the mouth <cit.>, <cit.>, <cit.>. The objective of ASR systems is to recognize words. Words can be represented as strings of phonemes, which can then be mapped to acoustic observations using pronunciation dictionaries that establish the mapping between words and phonemes. In analogy to audio speech systems, where there is consensus that the phoneme is the standard minimal unit for speech recognition, when adding visual information we aim at defining visemes, namely the minimal distinguishable speech units in the video domain <cit.>. As explained above, the mapping from phonemes to visemes cannot be one-to-one, but apart from this fact there is not much consensus on their definition or on their number. When designing VASR systems, one of the most important challenges is the definition of the viseme vocabulary. There are discrepancies on whether there is more information in the position of the lips or in their movement <cit.>, <cit.>, <cit.>, and whether visemes are better defined in terms of articulatory gestures (such as the lips closing together, jaw movement, teeth exposure) or derived from the grouping of phonemes having the same visual appearance <cit.>, <cit.>. From a modeling viewpoint, the use of viseme units is essentially a form of model clustering that allows visually similar phonetic events to share a group model <cit.>. Consequently, several different viseme vocabularies have been proposed in the literature, typically with lengths between 11 and 15 visemes <cit.>, <cit.>, <cit.>, <cit.>. For instance, Goldschen et al. <cit.> trained an initial set of 56 phone models and clustered them into 35 visemes using the Average Linkage hierarchical clustering algorithm. Jeffers et al.
<cit.> defined a phoneme to viseme mapping from 50 phonemes to 11 visemes for the English language (11 visemes plus Silence). Neti et al. <cit.> investigated the design of context questions based on decision trees to reveal similar linguistic context behaviour between phonemes that belong to the same viseme. For their study, based on linguistic properties, they determined seven consonant visemes (bilabial, labio-dental, dental, palato-velar, palatal, velar, and two alveolar), four vowel visemes, an alveolar-semivowel viseme and one silence viseme (13 visemes in total). Bozkurt et al. <cit.> proposed a phoneme to viseme mapping from 46 American English phones to 16 visemes to achieve natural-looking lip animation. They mapped phonetic sequences to viseme sequences before animating the lips of 3D head models. Ezzat et al. <cit.> presented a text-to-audiovisual speech synthesizer which converts input text into an audiovisual speech stream. They started by grouping those phonemes which looked similar, by visually comparing the viseme images. To obtain a photo-realistic talking face they proposed a viseme vocabulary with 6 visemes that represent 24 consonant phonemes, 7 visemes that represent the 12 vowel phonemes, 2 diphthong visemes and one viseme corresponding to the silence. §.§ Contributions In this work we investigate the automatic construction of a viseme vocabulary from the association of visually similar phonemes. In contrast to the related literature, where visemes have been mainly defined manually (based on linguistic properties) or semi-automatically (e.g. by trees or clustering) <cit.>, we explore the fully automatic construction of an optimal viseme vocabulary based on simple merging rules and the minimization of pair-wise confusion. We focus on constructing a VASR system for the Spanish language and explore the use of SIFT and DCT as descriptors for the mouth region, encoding both the spatial and temporal domains. We evaluated our system on a Spanish corpus (AV@CAR) with continuous speech from 20 speakers. Our results indicate that we are able to recognize more than 47% of the phonemes and 23% of the words corresponding to continuous speech and that the optimal viseme vocabulary for the Spanish language is composed of 20 visemes. § VASR SYSTEM VASR systems typically aim at interpreting the video signal in terms of visemes, and usually consist of 3 major steps: 1) lips localization, 2) extraction of visual features, 3) classification into viseme sequences. In this section we start with a brief review of the related work and then provide a detailed explanation of our method. §.§ Related Work Much of the research on VASR has focused on digit recognition, isolated words and sentences, and only more recently on continuous speech. Seymour et al. <cit.> centred their experiments on comparing different image transforms (DCT, DWT, FDCT) to achieve speaker-independent digit recognition. Sui et al. <cit.> presented a novel feature learning method using Deep Boltzmann Machines that recognizes simple sequences of isolated words and digit utterances. Their method used both acoustic and visual information to learn features, except for the test stage, where only the visual information was used. Lan et al. <cit.> used AAM features to quantify the effect of shape and appearance in lip reading and tried to recognize short sentences using a constrained vocabulary for 15 speakers. Zhao et al. <cit.> proposed a spatiotemporal version of LBP features and used an SVM classifier to recognize isolated phrase sequences. Zhou et al.
<cit.> used a latent variable model that identifies two different sources of variation in images, those related to the appearance of the speaker and those caused by the pronunciation, and tried to separate them to recognize short utterances (e.g. Excuse me, Thank you, ...). Pei et al. <cit.> presented a random forest manifold alignment method (RFMA) and applied it to lip-reading in color and depth videos. The lip-reading task was realized by motion pattern matching based on the manifold alignment. Potamianos et al. <cit.> applied the fast DCT to the region of interest (ROI) and retained 100 coefficients. To reduce the dimensionality they used an intraframe linear discriminant analysis and maximum likelihood linear transform (LDA and MLLT), resulting in a 30-dimensional feature vector. To capture dynamic speech information, 15 consecutive feature vectors were concatenated, followed by an interframe LDA/MLLT for dimensionality reduction, to obtain dynamic visual features of length 41. They tested their system using the IBM ViaVoice database and reported a 17.49% recognition rate in continuous speech recognition. Thangthai et al. <cit.> explored the use of Deep Neural Networks (DNNs) in combination with HiLDA features (LDA and MLLT). They reported very high accuracy (≈ 85%) in recognizing continuous speech, although the tests were on a corpus with a single speaker. Cappelletta et al. <cit.> used a database with short balanced utterances and tried to define a viseme vocabulary able to recognize continuous speech. They based their feature extraction on techniques such as PCA or optical flow, taking into account both the movement and the appearance of the lips. Although some attempts to compare methods have been made, this is a quite difficult task in visual-only ASR. Firstly, the recognition rates cannot be compared directly among recognition tasks: it is easier to recognize isolated digits trained with a higher number of repetitions and speakers, or to recognize short sentences trained on restricted vocabularies, than to recognize continuous speech. Additionally, even when dealing with the same recognition tasks, the use of substantially different databases makes the comparison between methods difficult. Concretely, results are often not comparable because they are usually reported on different databases, with variable numbers of speakers, vocabularies, languages and so on. Keeping in mind these limitations, some studies have shown that most methods recognize automatically between 25% and 64% of short utterances <cit.>. As mentioned before, we are interested in continuous speech recognition because it is the task that is closest to actual lip-reading as done by humans. Continuous speech recognition has been explored only recently, and there is limited literature about it. The complexity of the task and the few databases directly related to it have slowed its development, achieving rather low recognition rates. Because the choice of technique for each block of the pipeline is still an open problem, we decided to construct our own visual-only ASR system based on intensity descriptors and on HMMs to model the dynamics of the speech. §.§ Our System In this section each step of our VASR system is explained (Figure <ref>). We start by detecting the face and extracting a region of interest (ROI) that comprises the mouth and its surrounding area.
Appearance features are then extracted and used to estimate visemes, which are finally mapped into phonemes with the help of HMMs. §.§.§ Lips Localization The location of the face is obtained using the invariant optimal features ASM (IOF-ASM) <cit.>, which provides an accurate segmentation of the face in frontal views. The face is tracked at every frame and the detected landmarks are used to fix a bounding box around the lips (ROI) (Figure <ref>). At this stage the ROI can have a different size in each frame. Thus, the ROIs are normalized to a fixed size of 48 × 64 pixels to achieve a uniform representation. §.§.§ Feature Extraction After the ROI is detected, a feature extraction stage is performed. Nowadays, there is no universal feature for visual speech representation, in contrast to the Mel-frequency cepstral coefficients (MFCC) for acoustic speech. We look for an informative feature invariant to common video issues, such as noise or illumination changes. We analyze three different appearance-based techniques: * SIFT: SIFT was selected as a high-level descriptor to extract the features in both the spatial and temporal domains because it is highly distinctive and invariant to image scaling and rotation, and partially invariant to illumination changes and 3D camera viewpoint <cit.>. In the spatial domain, the SIFT descriptor was applied directly to the ROI, while in the temporal domain it was applied to the centred gradient. SIFT keypoints are distributed uniformly around the ROI (Figure <ref>). The distance between keypoints was fixed to half of the neighbourhood covered by the descriptor to gain robustness (by overlapping). As the dimension of the final descriptor for both spatial and temporal domains is very high, PCA was applied to reduce the dimensionality of the features. Only statistically significant components (determined by means of Parallel Analysis <cit.>) were retained. * DCT: The 2D DCT is one of the most popular techniques for feature extraction in visual speech <cit.>, <cit.>. Its ability to compress the relevant information into a few coefficients results in a descriptor with small dimensionality. The 2D DCT was applied directly to the ROI. To fix the number of coefficients, the image error between the original ROI and the reconstructed one was used. Based on preliminary experiments, we found that 121 coefficients (corresponding to 1% reconstruction error) for both the spatial and temporal domains produced satisfactory performance. * PCA: Another popular technique is PCA, also known as eigenlips <cit.>, <cit.>, <cit.>. PCA, like the 2D DCT, is applied directly to the ROI. To decide the optimal number of dimensions, the system was trained and tested taking different percentages of the total variance. A low number of components would lead to a low-quality reconstruction, but an excessive number of components would be more affected by noise. In the end, 90% of the variance was found to be a good compromise and was used in both the spatial and temporal descriptors. The early fusion of DCT-SIFT and PCA-SIFT has also been explored to obtain a more robust descriptor (see results in Section <ref>).
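As an illustration of the DCT descriptor described above, the following minimal sketch computes the 2D DCT of a normalized ROI and keeps 121 low-frequency coefficients; selecting them as the top-left 11 × 11 block is our assumption, since the text only fixes their number:

    import numpy as np
    from scipy.fftpack import dct

    def dct_features(roi, k=11):
        """2D DCT of a normalized 48x64 mouth ROI, keeping k*k low-frequency
        coefficients (k = 11 gives the 121 coefficients used in the text;
        taking them as the top-left square block is our assumption)."""
        coeffs = dct(dct(roi, axis=0, norm='ortho'), axis=1, norm='ortho')
        return coeffs[:k, :k].ravel()

    roi = np.random.rand(48, 64)      # stand-in for a grey-level mouth ROI
    print(dct_features(roi).shape)    # (121,); the temporal descriptor applies
                                      # the same transform to a frame difference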
§.§.§ Feature Classification and Interpretation The final goal of this block is to convert the extracted features into phonemes or, if that is not possible, at least into visemes. To this end we need: 1) classifiers that will map features to (a first estimate of) visemes; 2) a mapping between phonemes and visemes; 3) a model that imposes temporal coherency on the estimated sequences. * Classifiers: the classification of visemes is a challenging task, as it has to deal with issues such as class imbalance and label noise. Several methods have been proposed to deal with these problems, the most common solutions being Bagging and Boosting algorithms <cit.>, <cit.>, <cit.>, <cit.>. Of these, Bagging has been reported to perform better in the presence of training noise and thus it was selected for our experiments. Multi-class LDA was evaluated using cross-validation. To add robustness to the system, we trained the classifiers to produce not just a class label but also an estimate of the class probability for each input sample. For each bagging split, we train a multi-class LDA classifier and use the Mahalanobis distance d to obtain a normalized projection of the data onto each class c: d_c(x) = √((x-x̄_c)^T ·Σ_c^-1· (x-x̄_c)). Then, for each class, we compute two cumulative distributions based on these projections: one for in-class samples, Φ((d_c(x)-μ_c)/σ_c), x ∈ c, and another one for out-of-class samples, Φ((d_c(x)-μ_c̄)/σ_c̄), x ∉ c, which we assume Gaussian with means μ_c, μ_c̄ and variances σ_c, σ_c̄, respectively. An indicative example is provided in Figure <ref>. Notice that these means and variances correspond to the projections in (<ref>) and are different from x̄_c and Σ_c. We compute a class likelihood as the ratio between the in-class and the out-of-class distributions, as in (<ref>), and normalize the results so that the summation over all classes is 1, as in (<ref>). When classifying a new sample, we use the cumulative distributions to estimate the probability that the unknown sample belongs to each of the viseme classes (<ref>). We assign the class with the highest normalized likelihood L_c: F(c | x) = (1-Φ((d_c(x)-μ_c)/σ_c))/Φ((d_c(x)-μ_c̄)/σ_c̄)); L_c(x) = F(c | x)/∑_c'=1^C F(c' | x). Once the classifiers are trained, we could theoretically try to classify features directly into phonemes, but as explained in Section <ref>, there are phonemes that share the same visual appearance and are therefore unlikely to be distinguishable by a visual-only system. Thus, such phonemes should be grouped into the same class (visemes). In the next subsection we will present a mapping from phonemes to visemes based on grouping phonemes that are visually similar. * Phoneme to Viseme Mapping: to construct our phoneme to viseme mapping we analyse the confusion matrix resulting from comparing the ground truth labels of the training set with the automatic classification obtained from the previous section. We use an iterative process, starting with the same number of visemes as phonemes and merging at each step the visemes that show the highest mutual ambiguity (a minimal sketch of this loop is given below). The method takes into account that vowels cannot be grouped with consonants, because it has been demonstrated that their aggregation produces worse results <cit.>, <cit.>. The algorithm iterates until the desired vocabulary length is achieved. However, there is no accepted standard to fix this value beforehand. Indeed, several different viseme vocabularies have been proposed in the literature, typically with lengths between 11 and 15 visemes. Hence, in Section <ref> we will analyse the effect of the vocabulary size on the recognition accuracy. Once the vocabulary construction is concluded, all classifiers are retrained based on the resulting viseme classes.
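The sketch of the merging loop referenced above follows. The exact confusion score and its treatment after each merge are our assumptions; the text only specifies merging the most mutually confused classes while keeping vowels and consonants apart:

    import numpy as np

    def build_viseme_vocabulary(conf, is_vowel, target_size):
        """Greedy sketch of the vocabulary construction: start with one class
        per phoneme and repeatedly merge the pair with the largest mutual
        confusion, never mixing vowels with consonants.  `conf` is the
        phoneme confusion matrix on the training set; scoring a merge by the
        summed pairwise confusions is our assumption."""
        conf = conf + conf.T                     # symmetrize the confusions
        groups = [[i] for i in range(conf.shape[0])]
        while len(groups) > target_size:
            best, pair = -1.0, None
            for a in range(len(groups)):
                for b in range(a + 1, len(groups)):
                    if is_vowel[groups[a][0]] != is_vowel[groups[b][0]]:
                        continue                 # vowels never join consonants
                    score = sum(conf[i, j] for i in groups[a] for j in groups[b])
                    if score > best:
                        best, pair = score, (a, b)
            a, b = pair
            groups[a] += groups.pop(b)           # merge the most confused pair
        return groups                            # lists of phoneme indices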
* HMM and Viterbi Algorithm: to improve the performance obtained after feature classification, HMMs of one state per class are used to map: 1) visemes to visemes; 2) visemes to phonemes. An HMM λ = (A, B, π) is formed by N states and M observations. The matrix A represents the state transition probabilities, the matrix B the emission probabilities, and the vector π the initial state probabilities. Given a sequence of observations O and the model λ, our aim is to find the maximum probability state path Q = q_1, q_2, ..., q_T. This can be done recursively using the Viterbi algorithm <cit.>, <cit.>. Let δ_i(t) be the probability of the most probable state path ending in state i at time t (<ref>). Then δ_j(t) can be computed recursively using (<ref>), with initialization (<ref>) and termination (<ref>): δ_i(t) = max_q_1, …, q_t-1 P(q_1, ..., q_t-1, q_t = i, O_1, ..., O_t | λ); δ_j(t) = max_1 ≤ i ≤ N [δ_i(t-1) · a_i,j] · b_j(O_t); δ_i(1) = π_i · b_i(O_1), 1 ≤ i ≤ N; P = max_1 ≤ i ≤ N [δ_i(T)]. A shortcoming of the above is that it only considers a single observation for each instant t. In our case the observations are the outputs of the classifiers and contain uncertainty. We have found that it is useful to consider multiple possible observations for each time step. We do this by adding to the Viterbi algorithm the likelihoods obtained by the classifiers for all classes (e.g. from equation (<ref>)). As a result, (<ref>) is modified into (<ref>), where the maximization is done across both the N states (as in (<ref>)) and also the M possible observations, each weighted with its likelihood estimated by the classifiers: δ_j(t) = max_1 ≤ O_t ≤ M max_1 ≤ i ≤ N [δ_i(t-1) · a_i,j] · b̂_j(O_t), where b̂_j(O_t) = b_j(O_t) · L(O_t) and the short form L(O_t) refers to the likelihood L_O_t(x) as defined in (<ref>). The Viterbi algorithm modified as indicated in (<ref>) is used to obtain the final viseme sequence, providing at the same time temporal consistency and tolerance to classification uncertainties. Once this has been achieved, visemes are mapped into phonemes using the traditional Viterbi algorithm (<ref>).
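A minimal numpy sketch of this modified recursion follows (assumed shapes: A is N×N, B is N×M, π has length N, and L is T×M holding the per-frame class likelihoods; backtracking of the state path and the handling of the first frame are our assumptions, as the text does not detail them):

    import numpy as np

    def viterbi_with_likelihoods(A, B, pi, L):
        """Modified Viterbi sketch: at every frame the maximization also
        runs over the M candidate observations, each weighted by the
        classifier likelihood L[t, o].  Returns the best path probability;
        backtracking of the state sequence is omitted for brevity."""
        T = L.shape[0]
        delta = pi * np.max(B * L[0], axis=1)       # likelihood-weighted init
        for t in range(1, T):
            b_hat = np.max(B * L[t], axis=1)        # max over weighted observations
            delta = np.max(delta[:, None] * A, axis=0) * b_hat   # recursion
        return delta.max()                          # termination

    # toy example: 3 states, 2 candidate observations per frame, 4 frames
    rng = np.random.default_rng(0)
    A = np.full((3, 3), 1 / 3); pi = np.full(3, 1 / 3)
    B = rng.random((3, 2)); B /= B.sum(axis=1, keepdims=True)
    print(viterbi_with_likelihoods(A, B, pi, rng.random((4, 2))))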
§ EXPERIMENTS §.§ Database Ortega et al. <cit.> introduced AV@CAR as a free multichannel multi-modal database for automatic audio-visual speech recognition in Spanish, including both studio and in-car recordings. The Audio-Visual-Lab data set of AV@CAR contains sequences of 20 people recorded under controlled conditions while repeating predefined phrases or sentences. There are 197 sequences for each person, recorded in AVI format. The video data has a spatial resolution of 768x576 pixels, 24-bit pixel depth and 25 fps, and is compressed at an approximate rate of 50:1. The sequences are divided into 9 sessions and were captured in a frontal view under different illumination conditions and speech tasks. Session 2 is composed of 25 videos per user with phonetically balanced phrases. We have used session 2, splitting the data set into 380 sentences (19 users × 20 sentences/user) for training and 95 sentences (19 users × 5 sentences/user) to test the system. Table <ref> shows the sentences of the first speaker. §.§ Phonetic Vocabulary SAMPA is a phonetic alphabet developed in 1989 by an international group of phoneticians, and was applied to European languages such as Dutch, English, French, Italian, Spanish, etc. We based our phonetic vocabulary on SAMPA because it is the most used standard in phonetic transcription <cit.>, <cit.>. For the Spanish language, the vocabulary is composed of the following 29 phonemes: /p/, /b/, /t/, /d/, /k/, /g/, /tS/, /jj/, /f/, /B/, /T/, /D/, /s/, /x/, /G/, /m/, /n/, /J/, /l/, /L/, /r/, /rr/, /j/, /w/, /a/, /e/, /i/, /o/, /u/. The phonemes /jj/ and /G/ were removed from our experiments because our database did not contain enough samples to consider them. §.§ Results In this section we show the results of our experiments. In particular, we show the comparison of the performances between the different vocabularies and the different features, and the improvement obtained by adding the observation probabilities into the Viterbi algorithm. §.§.§ Experimental Setup We constructed an automatic system that uses local appearance features based on the early fusion of DCT and SIFT descriptors (this combination produced the best results in our tests, see below) to extract the main characteristics of the mouth region in both the spatial and temporal domains. The classification of the extracted features into phonemes is done in two steps. Firstly, 100 LDA classifiers are trained using bagging sequences to be robust under label noise. Then, the classifier outputs are used to compute the global normalized likelihood, as the summation of the normalized likelihoods computed by each classifier divided by the number of classifiers (as explained in Section <ref>). Secondly, at the final step, one-state-per-class HMMs are used to model the dynamic relations of the estimated visemes and produce the final phoneme sequences. §.§.§ Comparison of Different Vocabularies As we explained before, one of the main challenges of VASR systems is the definition of the phoneme-to-viseme mapping. While our system aims to estimate phoneme sequences, we know that there is no one-to-one mapping between phonemes and visemes. Hence, we try to find the one-to-many mapping that will allow us to maximize the recognition of phonemes in the spoken message. To evaluate the influence of the different mappings, we have analysed the performance of the system in terms of viseme, phoneme and word recognition rates using viseme vocabularies of different lengths. Our first observation, from Figure <ref>, is that the viseme accuracy tends to grow as we reduce the vocabulary length. This is explained by two factors: 1) the reduction in the number of classes, which makes the classification problem a simpler one to solve; 2) the fact that visually indistinguishable units are combined into one. The latter helps explain the behaviour of the other metric in the figure: phoneme accuracy. As we reduce the vocabulary length, phoneme accuracy firstly increases, because we eliminate some of the ambiguities by merging visually similar units. But if we continue to reduce the vocabulary, too many phonemes (even unrelated ones) are mixed together and their accuracy decreases because, even if these visemes are better recognized, their mapping into phonemes is more uncertain. Thus, the optimal performance is obtained for intermediate vocabulary lengths, because there is an optimum compromise between the visemes and the phonemes that can be recognized. The same effect can also be seen in Figure <ref> in terms of word recognition rates.
We can observe how the one-to-one phoneme to viseme mapping (using the 28 phoneme classes) obtained the lowest word recognition rates and how the highest word recognition rates were obtained for the intermediate vocabulary lengths, supporting the view that a one-to-many mapping from phonemes to visemes is necessary to optimize the performance of visual speech systems. In the experiments presented in this paper, a vocabulary of 20 visemes (summarized in Table <ref>) produced the best performance. §.§.§ Feature Comparison To analyse the performance of the different features, we have fixed the viseme vocabulary as shown in Table <ref> and performed a 4-fold cross-validation on the training set. We used 100 LDA classifiers per fold, generated by means of a bagging strategy. Figure <ref> displays the results. Considering the features independently, DCT and SIFT give the best performances. The fusion of both features produced an accuracy of 0.58 for visemes and 0.47 for phonemes. §.§.§ Improvement by Adding Classification Likelihoods Figure <ref> shows how the accuracy varies when considering the classifier likelihoods in the Viterbi algorithm. The horizontal axis indicates the number of classes that are considered (the rank), in decreasing order of likelihood. The performance of the algorithm without likelihoods (<ref>) is also provided as a baseline (rank 0). We see that the improvement obtained by the inclusion of class likelihoods is up to 20%. Finally, it is interesting to analyse how the system performs for each of the resulting phonemes. Figure <ref> (a) shows the frequency of appearance of each phoneme. In Figure <ref> (b) we show the number of phonemes that are wrongly detected. It can be seen that the input data is highly unbalanced, biasing the system toward the phonemes that appear more often. For example, silence appears 4 times more often than the vowels a, e, i, and there are some phonemes with very few samples, such as rr, f or b. This also has an impact in terms of precision and recall, as can be observed in Figure <ref> (c). In the precision and recall figure we can observe the effects of the many-to-one phoneme to viseme mapping. For example, there are phonemes with low precision and recall because they have been confused with one of the phonemes of their viseme group (e.g. the vowel i has been confused with a and e). Considering the three plots at once, there is a big impact of the silence on the overall performance of the system. In particular, the recognition of silences shows a very high precision, but its recall is only about 70%. By inspecting our data, we found that this is easily explained by the fact that, normally, people start moving their lips before speaking, in preparation for the upcoming utterance. Combination with audio would easily resolve this issue. § CONCLUSIONS We investigate the automatic construction of optimal viseme vocabularies by iteratively combining phonemes with similar visual appearance into visemes. We perform tests on the Spanish database AV@CAR using a VASR system based on the combination of DCT and SIFT descriptors in the spatial and temporal domains and HMMs to model both viseme and phoneme dynamics. Using 19 different speakers, we reach 58% recognition accuracy in terms of viseme units, 47% in terms of phoneme units and 23% in terms of word units, which is remarkable for a multi-speaker dataset of continuous speech such as the one used in our experiments.
Comparing the performance obtained by our viseme vocabulary with the performance of other vocabularies, such as those analysed by Cappelletta et al. in <cit.>, we observe that the 4 vocabularies they propose have lengths of 11, 12, 14 and 15 visemes and their maximum accuracy is between 41% and 60% (in terms of viseme recognition). Interestingly, while our results support the advantage of combining multiple phonemes into visemes to improve performance, the number of visemes that we obtain is comparatively high with respect to previous efforts. In our case, the optimal vocabulary length for Spanish reduced from 28 phonemes to 20 visemes (including Silence), i.e. a reduction rate of about 3:2. In contrast, previous efforts reported for English started from 40 to 50 phonemes and merged them into just 11 to 15 visemes <cit.>, with reduction rates from 3:1 to 5:1. It is not clear, however, whether the higher compression of the vocabularies is due to a difference inherent to the language or to other technical aspects, such as the ways of defining the phoneme to viseme mapping. Indeed, language differences make it difficult to make a fair comparison of our results with respect to previous work. Firstly, it could be argued that our viseme accuracy is comparable to values reported by Cappelletta et al. <cit.>; however, they used at most 15 visemes while we use 20 visemes and, as shown in Figure <ref>, when the number of visemes decreases, viseme recognition accuracy increases but phoneme accuracy might be reduced, hence making it more difficult to recover the spoken message. Unfortunately, Cappelletta et al. <cit.> did not report their phoneme recognition rates. Another option for comparison is word recognition rates, which are frequently reported in automatic speech recognition systems. However, in many cases recognition rates are reported only for audio-visual systems without indicating the visual-only performance <cit.>, <cit.>. Within systems reporting visual-only performance, comparison is also difficult given that they are often centered on tasks such as digit or sentence recognition <cit.>, <cit.>, <cit.>, <cit.>, which are considerably simpler than the recognition of continuous speech, as addressed here. Focusing on continuous systems for visual speech, Cappelletta et al. <cit.> did not report word recognition rates and Thangthai et al. <cit.> reported tests on just a single user. Finally, Potamianos et al. <cit.> implemented a system comparable to ours based on appearance features and tested it using the multi-speaker IBM ViaVoice database, achieving a 17.49% word recognition rate in continuous speech, which is not far from the 23% word recognition rate achieved by our system. § ACKNOWLEDGEMENTS This work is partly supported by the Spanish Ministry of Economy and Competitiveness under the Ramon y Cajal fellowships and the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), and the Kristina project funded by the European Union Horizon 2020 research and innovation programme under grant agreement No 645012. [Antonakos et al., 2015]antonakos2015survey Antonakos, E., Roussos, A., and Zafeiriou, S. (2015). A survey on mouth modeling and analysis for sign language recognition. In Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, volume 1, pages 1–7. IEEE. [Bear et al., 2014]bear2014phoneme Bear, H. L., Harvey, R. W., Theobald, B.-J., and Lan, Y. (2014). Which phoneme-to-viseme maps best improve visual-only computer lip-reading?
In International Symposium on Visual Computing, pages 230–239. Springer.[Bozkurt et al., 2007]bozkurt2007comparison Bozkurt, E., Erdem, C. E., Erzin, E., Erdem, T., and Ozkan, M. (2007). Comparison of phoneme and viseme based acoustic units for speech driven realistic lip animation. Proc. of Signal Proc. and Communications Applications, pages 1–4.[Buchan et al., 2007]buchan2007spatial Buchan, J. N., Paré, M., and Munhall, K. G. (2007). Spatial statistics of gaze fixations during dynamic face processing. Social Neuroscience, 2(1):1–13.[Cappelletta and Harte, 2011]cappelletta2011viseme Cappelletta, L. and Harte, N. (2011). Viseme definitions comparison for visual-only speech recognition. In Signal Processing Conference, 2011 19th European, pages 2109–2113. IEEE.[Chiţu and Rothkrantz, 2012]chictu12012automatic Chiţu, A. and Rothkrantz, L. J. (2012). Automatic visual speech recognition. Speech enhancement, Modeling and Recognition–Algorithms and Applications, page 95.[Cooke et al., 2006]cooke2006audio Cooke, M., Barker, J., Cunningham, S., and Shao, X. (2006). An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, 120(5):2421–2424.[Dupont and Luettin, 2000]dupont2000audio Dupont, S. and Luettin, J. (2000). Audio-visual speech modeling for continuous speech recognition. IEEE transactions on multimedia, 2(3):141–151.[Erber, 1975]erber1975auditory Erber, N. P. (1975). Auditory-visual perception of speech. Journal of Speech and Hearing Disorders, 40(4):481–492.[Ezzat and Poggio, 1998]ezzat1998miketalk Ezzat, T. and Poggio, T. (1998). Miketalk: A talking facial display based on morphing visemes. In Computer Animation 98. Proceedings, pages 96–102. IEEE.[Fisher, 1968]fisher1968confusions Fisher, C. G. (1968). Confusions among visually perceived consonants. Journal of Speech, Language, and Hearing Research, 11(4):796–804.[Franklin et al., 1995]franklin1995parallel Franklin, S. B., Gibson, D. J., Robertson, P. A., Pohlmann, J. T., and Fralish, J. S. (1995). Parallel analysis: a method for determining significant principal components. Journal of Vegetation Science, 6(1):99–106.[Frénay and Verleysen, 2014]frenay2014classification Frénay, B. and Verleysen, M. (2014). Classification in the presence of label noise: a survey. IEEE transactions on neural networks and learning systems, 25(5):845–869.[Goldschen et al., 1994]goldschen1994continuous Goldschen, A. J., Garcia, O. N., and Petajan, E. (1994). Continuous optical automatic speech recognition by lipreading. In Signals, Systems and Computers, 1994. 1994 Conference Record of the Twenty-Eighth Asilomar Conference on, volume 1, pages 572–577. IEEE.[Hazen et al., 2004]hazen2004segment Hazen, T. J., Saenko, K., La, C.-H., and Glass, J. R. (2004). A segment-based audio-visual speech recognizer: Data collection, development, and initial experiments. In Proceedings of the 6th international conference on Multimodal interfaces, pages 235–242. ACM.[Hilder et al., 2009]hilder2009comparison Hilder, S., Harvey, R., and Theobald, B.-J. (2009). Comparison of human and machine-based lip-reading. In AVSP, pages 86–89.[Jeffers and Barley, 1980]jeffers1980speechreading Jeffers, J. and Barley, M. (1980). Speechreading (lipreading). Charles C. Thomas Publisher.[Khoshgoftaar et al., 2011]khoshgoftaar2011comparing Khoshgoftaar, T. M., Van Hulse, J., and Napolitano, A. (2011). Comparing boosting and bagging techniques with noisy and imbalanced data. 
IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 41(3):552–568.[Lan et al., 2009]lan2009comparing Lan, Y., Harvey, R., Theobald, B., Ong, E.-J., and Bowden, R. (2009). Comparing visual features for lipreading. In International Conference on Auditory-Visual Speech Processing 2009, pages 102–106.[Llisterri and Mariño, 1993]llisterri1993spanish Llisterri, J. and Mariño, J. B. (1993). Spanish adaptation of sampa and automatic phonetic transcription. Reporte técnico del ESPRIT PROJECT, 6819.[Lowe, 2004]lowe2004distinctive Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International journal of computer vision, 60(2):91–110.[Luettin et al., 1996]luettin1996visual Luettin, J., Thacker, N. A., and Beet, S. W. (1996). Visual speech recognition using active shape models and hidden markov models. In Acoustics, Speech, and Signal Processing, 1996. ICASSP-96. Conference Proceedings., 1996 IEEE International Conference on, volume 2, pages 817–820. IEEE.[McGurk and MacDonald, 1976]mcgurk1976hearing McGurk, H. and MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264:746–748.[Moll and Daniloff, 1971]moll1971investigation Moll, K. L. and Daniloff, R. G. (1971). Investigation of the timing of velar movements during speech. The Journal of the Acoustical Society of America, 50(2B):678–684.[Nefian et al., 2002]nefian2002coupled Nefian, A. V., Liang, L., Pi, X., Xiaoxiang, L., Mao, C., and Murphy, K. (2002). A coupled hmm for audio-visual speech recognition. In Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, volume 2, pages II–2013. IEEE.[Neti et al., 2000]neti2000audio Neti, C., Potamianos, G., Luettin, J., Matthews, I., Glotin, H., Vergyri, D., Sison, J., and Mashari, A. (2000). Audio visual speech recognition. Technical report, IDIAP.[Nettleton et al., 2010]nettleton2010study Nettleton, D. F., Orriols-Puig, A., and Fornells, A. (2010). A study of the effect of different types of noise on the precision of supervised learning techniques. Artificial intelligence review, 33(4):275–306.[Ortega et al., 2004]ortega2004av Ortega, A., Sukno, F., Lleida, E., Frangi, A. F., Miguel, A., Buera, L., and Zacur, E. (2004). Av@ car: A spanish multichannel multimodal corpus for in-vehicle automatic audio-visual speech recognition. In LREC.[Ortiz, 2008]ortiz2008lipreading Ortiz, I. d. l. R. R. (2008). Lipreading in the prelingually deaf: what makes a skilled speechreader? The Spanish journal of psychology, 11(02):488–502.[Pei et al., 2013]pei2013unsupervised Pei, Y., Kim, T.-K., and Zha, H. (2013). Unsupervised random forest manifold alignment for lipreading. In Proceedings of the IEEE International Conference on Computer Vision, pages 129–136.[Petrushin, 2000]petrushin2000hidden Petrushin, V. A. (2000). Hidden markov models: Fundamentals and applications. In Online Symposium for Electronics Engineer.[Potamianos et al., 2003]potamianos2003recent Potamianos, G., Neti, C., Gravier, G., Garg, A., and Senior, A. W. (2003). Recent advances in the automatic recognition of audiovisual speech. Proceedings of the IEEE, 91(9):1306–1326.[Rabiner, 1989]rabiner1989tutorial Rabiner, L. R. (1989). A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.[Ronquest et al., 2010]ronquest2010language Ronquest, R. E., Levi, S. V., and Pisoni, D. B. (2010). Language identification from visual-only speech signals. 
Attention, Perception, & Psychophysics, 72(6):1601–1613.[Saenko et al., 2005]saenko2005visual Saenko, K., Livescu, K., Siracusa, M., Wilson, K., Glass, J., and Darrell, T. (2005). Visual speech recognition with loosely synchronized feature streams. In Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1, volume 2, pages 1424–1431.[Sahu and Sharma, 2013]sahu2013result Sahu, V. and Sharma, M. (2013). Result based analysis of various lip tracking systems. In Green High Performance Computing (ICGHPC), 2013 IEEE International Conference on, pages 1–7. IEEE.[Seymour et al., 2008]seymour2008comparison Seymour, R., Stewart, D., and Ming, J. (2008). Comparison of image transform-based features for visual speech recognition in clean and corrupted videos. Journal on Image and Video Processing, 2008:14.[Sui et al., 2015]sui2015listening Sui, C., Bennamoun, M., and Togneri, R. (2015). Listening with your eyes: Towards a practical visual speech recognition system using deep boltzmann machines. In Proceedings of the IEEE International Conference on Computer Vision, pages 154–162.[Sukno et al., 2007]sukno2007active Sukno, F. M., Ordas, S., Butakoff, C., Cruz, S., and Frangi, A. F. (2007). Active shape models with invariant optimal features: application to facial analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(7):1105–1117.[Sumby and Pollack, 1954]sumby1954visual Sumby, W. H. and Pollack, I. (1954). Visual contribution to speech intelligibility in noise. The journal of the acoustical society of america, 26(2):212–215.[Thangthai et al., 2015]thangthai2015improving Thangthai, K., Harvey, R., Cox, S., and Theobald, B.-J. (2015). Improving lip-reading performance for robust audiovisual speech recognition using dnns. In Proc. FAAVSP, 1St Joint Conference on Facial Analysis, Animation and Audio–Visual Speech Processing.[Verbaeten and Van Assche, 2003]verbaeten2003ensemble Verbaeten, S. and Van Assche, A. (2003). Ensemble methods for noise elimination in classification problems. In International Workshop on Multiple Classifier Systems, pages 317–325. Springer.[Wells et al., 1997]wells1997sampa Wells, J. C. et al. (1997). Sampa computer readable phonetic alphabet. Handbook of standards and resources for spoken language systems, 4.[Yau et al., 2007]yau2007visual Yau, W. C., Kumar, D. K., and Weghorn, H. (2007). Visual speech recognition using motion features and hidden markov models. In International Conference on Computer Analysis of Images and Patterns, pages 832–839. Springer.[Zhao et al., 2009]zhao2009lipreading Zhao, G., Barnard, M., and Pietikainen, M. (2009). Lipreading with local spatiotemporal descriptors. IEEE Transactions on Multimedia, 11(7):1254–1265.[Zhou et al., 2014a]zhou2014compact Zhou, Z., Hong, X., Zhao, G., and Pietikäinen, M. (2014a). A compact representation of visual speech data using latent variables. IEEE transactions on pattern analysis and machine intelligence, 36(1):1–1.[Zhou et al., 2014b]zhou2014review Zhou, Z., Zhao, G., Hong, X., and Pietikäinen, M. (2014b). A review of recent advances in visual speech decoding. Image and vision computing, 32(9):590–605.
http://arxiv.org/abs/1704.08035v1
{ "authors": [ "Adriana Fernandez-Lopez", "Federico M. Sukno" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170426093459", "title": "Automatic Viseme Vocabulary Construction to Enhance Continuous Lip-reading" }
http://arxiv.org/abs/1704.08510v1
{ "authors": [ "M. Grasso", "D. Lacroix", "C. J. Yang" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170427112302", "title": "A Lee-Yang--inspired functional with a density--dependent neutron-neutron scattering length" }
M. Carrillo (1), J. A. González (1), S. Hernández (2), C. E. López (1), A. Raya (1)

(1) Laboratorio de Inteligencia Artificial y Supercómputo, Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, 58040, México
(2) Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Apartado Postal 70-543, Ciudad de México, 04510, México

Within an artificial neural network (ANN) approach, we classify simulated signals corresponding to the semi-classical description of Bloch oscillations on a two-dimensional square lattice. After the ANN is properly trained, we consider the inverse problem of Bloch oscillations (BO), in which a new signal is classified according to the lattice spacing and the external electric field strength oriented along a particular direction of the lattice, with an accuracy of 96%. This accuracy can be improved depending on the time spent in training the network and the computational power available. This work is one of the first efforts to analyze BO with ANNs in two-dimensional crystals.

Keywords: Bloch oscillations, artificial neural networks, square lattice

§ INTRODUCTION
Flat two-dimensional crystals are unstable against thermal fluctuations according to the Mermin-Wagner theorem <cit.>. Therefore, the early study of these crystals was regarded as a matter of purely academic interest. Nevertheless, it is now known that some interesting phenomena occur effectively in two dimensions, such as the quantum Hall effect <cit.> and high-T_c superconductivity in cuprates <cit.>. Soon after the first isolation of graphene flakes <cit.>, a new era of materials science emerged <cit.>, with a huge variety of two-dimensional (2D) systems discovered in the recent past <cit.>.

2D materials are nowadays a cornerstone of solid state physics and materials science because of their potential technological applicability and their impact in fundamental research. Many of these 2D crystals have the crystal structure of the square lattice, which, due to its high symmetry, allows the study of a number of interesting phenomena, like Bloch oscillations (BO) <cit.>. It is well known that BO are not observed directly in crystals because of intraband tunneling and ultrafast electron scattering; BO are directly observed in high-purity superlattices under different experimental setups <cit.>. The equations of motion of BO are also relevant for a number of optical systems <cit.>. With this motivation, in a previous work <cit.>, some of us posed the inverse problem of BO for the linear chain within an artificial neural network (ANN) approach <cit.>. The idea is to use simulated signals for BO in a semiclassical approximation to train the ANN and then classify a new signal according to the lattice spacing and electric field strength with high accuracy. In this paper we extend these ideas to the 2D square lattice. We develop a framework in which the ANN is trained using the simulated signals corresponding to the semiclassical description of BO for a 2D square lattice, considering only the nearest-neighbor influence. We then predict the strength of the electric field along a particular direction of the lattice and the lattice spacing that produce such trajectories. We achieve up to 96% accuracy in our classification scheme, which can be improved depending on the computational time and computer power available.
For the presentation of ideas, we have organized the remainder of this paper as follows: In Section <ref> we give a description of the BO phenomenology in the semiclassical approach. In Section <ref>, we describe how the signals were generated and the ANN configuration. In Section <ref> the results for all the analyzed cases are discussed and, finally, in Section <ref>, the conclusions are presented.

§ BLOCH OSCILLATIONS: SEMICLASSICAL APPROACH
We start our discussion from the tight-binding Hamiltonian of a monoatomic 2D square lattice of spacing a. Considering the nearest-neighbors approximation, we have

H ψ_n,m(𝐤) = -t ψ_n+1,m(𝐤) - t ψ_n-1,m(𝐤) - t ψ_n,m+1(𝐤) - t ψ_n,m-1(𝐤) + ϵ_0 ψ_n,m(𝐤) ≡ ℰ^(n,m)(𝐤) ψ_n,m(𝐤),

where t is the hopping parameter and 𝐤 = k_1 ê_x + k_2 ê_y is the crystal momentum of electrons in 2D. From Bloch's theorem, it is straightforward to find that the energy-momentum dispersion relation is:

ℰ^(n,m)(k_1,k_2) = ϵ_0 - ϵ^(n,m)(k_1,k_2),

where ϵ^(n,m)(k_1,k_2) = w (1 - cos(k_1 a) - cos(k_2 a)), ϵ_0 is the on-site energy and w = 2t. Next, we recall the semiclassical equations of motion for an electron moving in an external electric field 𝐄 oriented parallel to one direction of the square lattice,

d𝐤/dt = -e𝐄, d𝐫/dt = (1/ħ) ∂ϵ^(n,m)(k_1,k_2)/∂𝐤.

We can straightforwardly integrate the equations of motion and obtain the velocities and trajectories for a given external field strength. Considering the lattice oriented along the x-y plane and a uniform electric field 𝐄 = E_1 ê_x + E_2 ê_y, we integrate Eq. (<ref>) assuming the initial condition k_j(0) = 0 with j = 1,2. Thus k_j(t) = -(eE_j/ħ) t. Rewriting Eq. (<ref>), the electron velocity is given by:

v_j^(n,m)(k_j(t)) = (wa/ħ) sin(k_j(t) a) = -(wa/ħ) sin((eE_j a/ħ) t),

and the electric current is simply j_i = -e v_i. Integrating Eqs. (<ref>) we get the profile of BO, obtaining the position of the electrons as a function of time:

x_j^(n,m)(t) = (w/eE_j) cos((eE_j a/ħ) t) = (w/eE_j) cos(ω_E_j t),

with ω_E_j = eE_j a/ħ. Eqs. (<ref>) describe the trajectories, which are in fairly good agreement with the experimental observations of BO. In the next Section we describe how the oscillations described by Eq. (<ref>) are simulated and how the ANN processes them in order to give an accurate result.

§ SIGNALS CREATION AND FEATURE PROCESSING
For fixed lattice parameters a and t, the trajectories described by Eqs. (<ref>) and (<ref>) are functions of the electric field strength along each spatial direction, which becomes the only free parameter that characterizes a given trajectory in our considerations. We have trained an ANN that associates the electric currents of the electrons with their corresponding electric fields. In other words, the ANN learns through some examples the relationship between the electric current signals in the 2D square lattice and the electric fields that generate those currents. First, let us describe how the training signals were generated; then we explain the classification process. For simplicity and without loss of generality, all signals were created following these considerations:

* The parameters of Eqs. (<ref>) and (<ref>) were fixed to dimensionless units e = ħ = 1, w = a = 0.5.
* The signals were generated for a time lapse τ = 200.
* We integrate the signals considering the possibility of negative and positive electric fields for both E_1 and E_2 on three different ranges defined by E_min and E_max. These cases will be described more thoroughly in Section <ref>.
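To make the signal generation concrete, here is a small illustrative Python sketch (our own; it is not the authors' code) that evaluates the semiclassical velocity and position from the equations above, in the dimensionless units e = ħ = 1, w = a = 0.5, over the time lapse τ = 200; the example field values are the ones quoted in the next section.

import numpy as np

# Dimensionless units fixed in the text: e = hbar = 1, w = a = 0.5, tau = 200.
w, a, tau = 0.5, 0.5, 200.0

def bo_velocity(E_j, t):
    """v_j(t) = -(w a / hbar) sin(e E_j a t / hbar), with e = hbar = 1."""
    return -w * a * np.sin(E_j * a * t)

def bo_position(E_j, t):
    """x_j(t) = (w / (e E_j)) cos(omega t), with omega = e E_j a / hbar and E_j != 0."""
    return (w / E_j) * np.cos(E_j * a * t)

t = np.linspace(0.0, tau, 2001)
v1, v2 = bo_velocity(-0.22, t), bo_velocity(0.14, t)   # example field (E_1, E_2)
x1, x2 = bo_position(-0.22, t), bo_position(0.14, t)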
Once the signals were produced, we selected as inputs of the ANN the values of each component of the velocity (v_1 and v_2) at one hundred different times defined by t_i = i Δt, with Δt = τ/100 = 2 and i = 0,1,…,99. This means that the ANN will analyze a signal V consisting of two hundred values: V = {v_1(t_0), v_2(t_0), …, v_1(t_99), v_2(t_99)}. In Figure <ref> we show an example of BO velocities and the corresponding sampling points at which the trajectories were evaluated, with E_1 = -0.22 and E_2 = 0.14, generated using Eq. (<ref>). As the goal is to classify the electric field in 2D, we impose that the feedforward ANN has two outputs, Ẽ_1 and Ẽ_2. Notice the difference between Ẽ_i, the predicted value, and E_i, the physical value. Considering a single hidden layer with 27 neurons, the predicted value given an input signal V is defined by:

Ẽ_j = F(∑_h=1^27 σ̃_hj F(∑_i=1^200 σ_ih V_i)),

where j = 1,2. F is the activation function for the hidden and output layers; in this case the standard logistic sigmoid function was used. σ_ih and σ̃_hj are the weights between the input and hidden layers and between the hidden and output layers, respectively. The ANN structure is illustrated in Figure <ref>.

§.§ Electric field scenarios
The accuracy of the ANN depends on the frequency of the signals, the electric fields and the sampling points. In this Section, we analyze how the performance of the ANN behaves in three different scenarios, using 625 signals with all parameters kept fixed except for the electric field, which ranges over:

* Between [E_min = -0.5, E_max = 0.46], separated in steps of ΔE = 0.04.
* Between [E_min = -1, E_max = 0.92] with ΔE_j = 0.08.
* Between [E_min = -0.25, E_max = 0.23] with ΔE_j = 0.02.

Considering that the activation function F used in Eq. (<ref>) is a sigmoid function, the output of the network will be within the range [0,1]. The ANN's outputs can be divided into classes that represent the target intervals for E_1 and E_2. This means that the more classes an output has, the more precision is required for a correct classification. For this case, we have decided to divide each output into 5 classes. For clarity, let us develop case (<ref>), where ΔE_j = 1/25, E_min = -0.5 and E_max = 0.46. Therefore, for each E_j, every class covers the range:

E_min + 5ζ ΔE ≤ E_ζ < E_min + 5(ζ + 1) ΔE, 0 ≤ ζ ≤ 4,

where E_ζ indexes each class for either field component E_j. A schematic representation of the class division is presented in Figure <ref>. However, because the ANN's output is defined between (0,1), we need to map the electric field classes into this range. For that, we define the center of each one of the five classes, Ê_ζ, in the output neuron as:

E_ζ ≡ Ê_ζ = 0.1 + 0.2ζ.

Moreover, the center of each class will be used as the target value (Ê_ζ) in the training phase. For example, if the signal is created with any of the first five values of E_1 (ζ = 0) and the last five values of E_2 (ζ = 4), then the ANN has correctly classified this signal if:

Ê_0 - 0.1 ≤ Ẽ_1 < Ê_0 + 0.1, Ê_4 - 0.1 ≤ Ẽ_2 < Ê_4 + 0.1.

In the following section we discuss the training procedure used to minimize the error of the predictions.

§.§ ANN's training considerations
Since the ANN's weights are adjusted by supervised training, each signal must be associated with the electric field values used during its generation.
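As an illustration of the network and class construction just described, the following Python sketch (ours; all names are hypothetical) implements the two-layer forward pass, the class centers Ê_ζ = 0.1 + 0.2ζ for the first scenario (E_min = -0.5, ΔE = 0.04), and the min-max input normalization that is introduced below.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(V, W_in, W_out):
    """Two sigmoid layers with no biases: 200 inputs -> 27 hidden neurons -> 2 outputs,
    matching the expression for the predicted values given in the text."""
    return sigmoid(sigmoid(V @ W_in) @ W_out)

def class_center(E, E_min=-0.5, dE=0.04):
    """Target value 0.1 + 0.2*zeta of the class containing E (5 classes of 5 field values)."""
    zeta = min(int((E - E_min) // (5.0 * dE)), 4)
    return 0.1 + 0.2 * zeta

def minmax_normalize(signals):
    """Per-input min-max normalization computed over all training signals (rows)."""
    mean = signals.mean(axis=0)
    span = signals.max(axis=0) - signals.min(axis=0)
    span[span == 0.0] = 1.0            # keep only the mean shift when max equals min
    return (signals - mean) / span

rng = np.random.default_rng(0)
W_in = rng.uniform(-1.0, 1.0, size=(200, 27))      # weights initialized in [-1, 1]
W_out = rng.uniform(-1.0, 1.0, size=(27, 2))
targets = np.array([class_center(-0.22), class_center(0.14)])   # -> [0.3, 0.7]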
Using all previous considerations, the ANN was trained with an offline backpropagation algorithm by minimizing the cost function:

C(σ⃗) = (1/2P) ∑_p=1^P c^p = (1/2P) ∑_p=1^P ∑_j=1^2 (Ẽ_j^p - Ê_j^p)^2,

where p = 1,2,...,P, with P the number of training patterns. This backpropagation algorithm is a gradient descent method that adjusts the weights σ⃗ after an epoch or iteration s by following the relationship:

σ_lm(s+1) = σ_lm(s) - (γ/P) ∑_p=1^P δ_lm^p(s),

where γ is the learning rate, the indexes l and m indicate the connection between the l-th and the m-th neuron, and δ_lm^p is defined by

δ_lm^p(s) = ∂c^p(s,σ⃗)/∂σ_lm(s).

The number of steps S (1 ≤ s ≤ S) can be selected by reaching a target error, by a maximum number of iterations, or by cross-validation with an unknown set of signals. In this case, from the 625 signals generated, we randomly chose seventy percent of them as the training set (P = 438), and the remaining 187 signals were used as the validation set to check convergence and avoid overfitting to the proposed targets. Moreover, another set of 187 signals was generated to test the accuracy of the ANN on completely unknown signals, namely the test set. During the training phase a maximum of ten thousand iterations was considered, all ANN weights were initialized randomly in [-1,1], and a learning rate γ = 0.005 was used. Moreover, to help the network converge faster to a minimum of the cost function, it is convenient that all the inputs have the same order of magnitude. Therefore, a min-max normalization of the velocities is performed on every input:

Ṽ^p_i = (V^p_i - <V_i>)/(V_i^max - V_i^min) if V_i^max ≠ V_i^min; Ṽ^p_i = V^p_i - <V_i> if V_i^max = V_i^min,

for 1 ≤ i ≤ 200, where <V_i> is the average, and V_i^max and V_i^min are, respectively, the maximum and minimum values of the i-th input over all the signals.

§ RESULTS
In all the results presented below, we have trained five different networks using different initial weights, and we report only the best one for each of the cases described in the previous section. For the first case (<ref>), the ANN classified the training set with a perfect score, while on the test set it achieved 82% and 91% efficiency on Ẽ_1 and Ẽ_2, respectively, as observed in Table <ref>. In this case, the extreme ranges for E are wider and, according to Eq. (<ref>), the maximum frequency of the signals is twice that of case (<ref>). Therefore, the signals have a higher frequency, so in principle we would need more points, or a shorter time interval, to properly characterize these signals before they are introduced into the ANN. We attribute the lower accuracy of the network to this fact. The results for case (<ref>), where the extreme values of E_j were in the range [-1, 0.92], are less accurate: the outputs only achieved accuracies of 42% (Ẽ_1) and 40% (Ẽ_2) on the test set, as shown in Table <ref>. Finally, for case (<ref>), with the interval [E_min = -0.25, E_max = 0.23] and ΔE = 1/50, the generated curves have a lower frequency than in case (<ref>); thus the sampled points carry more information about the signal, allowing the ANN to outperform the previous cases with efficiencies of 96% and 93% for E_1 and E_2, respectively, as shown in Table <ref>. An example of the prediction using the BO signals with an electric field composed of E_1 = -0.2046 and E_2 = 0.1969 is shown in Figure <ref>.
In this case, the ANN estimates that the signals were generated with values of E_1 and E_2 belonging to the classes ζ = 0 and ζ = 4, respectively, which is a correct classification.

§ FINAL REMARKS
We have developed a method employing an ANN approach to analyze Bloch oscillations on a 2D square lattice of atoms within a tight-binding approximation, considering the nearest-neighbors influence. The ANN uses the velocity (electric current) oscillation signals as inputs and estimates the corresponding electric field projection along each spatial direction. For the purposes of this work, three different scenarios were considered, in which the maximum and minimum electric fields are restricted. The extreme ranges of the electric fields determine the electron velocity frequencies and thus the number of points sampled per cycle, which impacts the ANN's performance. The ANNs were trained and cross-validated with 625 signals within these ranges, while they were tested on signals with random electric fields in those same intervals. In the best-case scenario, for low frequency, the ANN reaches at least 93% accuracy on each output of the test set. As mentioned before, this is because for the lower interval of the electric field the generated curves oscillate less and are therefore better described by the sampled points. Meanwhile, for the larger interval of the electric field the predictions are less accurate, because the curves require more points to describe them. From our previous work <cit.> and the results presented here, it is straightforward to see that this approach has good potential, which encourages us to explore more complex systems.

§ ACKNOWLEDGMENTS
We acknowledge support from CONACyT grant 256494 and CIC-UMSNH (México) under grants 4.22 and 4.23.

§ REFERENCES
Mermin Mermin, N.D., Wagner, H.: Absence of Ferromagnetism or Antiferromagnetism in One- or Two-Dimensional Isotropic Heisenberg Models, Phys. Rev. Lett. 17, 1133–1136 (1966), doi:10.1103/PhysRevLett.17.1133
qhe1 Klitzing, K. v., Dorda, G., Pepper, M.: New Method for High-Accuracy Determination of the Fine-Structure Constant Based on Quantized Hall Resistance, Phys. Rev. Lett. 45, 494 (1980).
qhe2 Tsui, D.C., Stormer, H.L., Gossard, A.C.: Two-Dimensional Magnetotransport in the Extreme Quantum Limit, Phys. Rev. Lett. 49, 1559 (1982).
HTc Shen, K.M., Davis, J.C.S.: Cuprate high-T_c superconductors, Mater. Today 11(9), 14–21 (2008).
graphene1 Novoselov, K. S., et al.: Two-dimensional atomic crystals, Proc. Natl Acad. Sci. USA 102, 10451–10453 (2005).
graphene2 Zhang, Y., et al.: Experimental observation of the quantum Hall effect and Berry's phase in graphene, Nature 438, 201 (2005).
graphene3 Geim, A. K. and Novoselov, K. S.: The rise of graphene, Nature Materials 6, 183–191 (2007).
Weyl Vafek, O. and Vishwanath, A.: Dirac Fermions in Solids - from High T_c cuprates and Graphene to Topological Insulators and Weyl Semimetals, Ann. Rev. Cond. Mat. Phys. 5, 83–112 (2014).
Bloch Esaki, L. and Tsu, R.: Superlattice and Negative Differential Conductivity in Semiconductors, IBM J. Res. Dev. 14, 61 (1970).
exp1 Feldmann J., Leo K., Shah J., Miller D. A. B., Cunningham J. E., Meier T., von Plessen G., Schulze A., Thomas P., and Schmitt-Rink S.: Optical investigation of Bloch oscillations in a semiconductor superlattice, Phys. Rev. B 46, 7252 (1992).
exp2 von Plessen G. and Thomas P.: Method for observing Bloch oscillations in the time domain, Phys. Rev. B 45, 9185 (1992).
exp21 Leo K., Bolivar P.
H., Brüggemann F., Schwedler R., and Köhler K.: Observation of Bloch oscillations in a semiconductor superlattice, Solid State Commun. 84, 943 (1992).
exp22 Leisching P., Haring Bolivar P., Beck W., Dhaibi Y., Brüggemann F., Schwedler R., Kurz H., Leo K., and Köhler K.: Bloch oscillations of excitonic wave packets in semiconductor superlattices, Phys. Rev. B 50, 14389 (1994).
exp3 Dekorsy T., Leisching P., Köhler K., and Kurz H.: Electro-optic detection of Bloch oscillations, Phys. Rev. B 50, 8106 (1994).
exp31 Dekorsy T., Ott R., Kurz H., and Köhler K.: Bloch oscillations at room temperature, Phys. Rev. B 51, 17275 (1995).
exp4 Waschke C., Roskos H. G., Schwedler R., Leo K., Kurz H., and Köhler K.: Coherent submillimeter-wave emission from Bloch oscillations in a semiconductor superlattice, Phys. Rev. Lett. 70, 3319 (1993).
exp41 Roskos H. G., Waschke C., Schwedler R., Leisching P., Dhaibi Y., Kurz H., and Köhler K.: Bloch oscillations in GaAs/AlGaAs superlattices after excitation well above the bandgap, Superlattices and Microstructures 15, 281 (1994).
exp5 Kolovsky A. R. and Korsch H. J.: Bloch oscillations in cold atoms in two-dimensional optical lattices, Phys. Rev. A 67, 063601 (2003).
exp51 Witthaut D., Keck F., Korsch H. J. and Mossmann S.: Bloch oscillations in two-dimensional lattices, New J. Phys. 6, 41 (2004).
Array1 Breid, B. M., Witthaut, D. and Korsch, H. J.: Bloch–Zener oscillations, New J. Phys. 8, 110 (2006).
Array2 Turker, Z., Yuce, C.: Super Bloch oscillation in a PT-symmetric system, Phys. Lett. A 380, 2260 (2016).
IBP González J. A., Hernández-Ortiz S., López C. E. and Raya A.: Bloch oscillations: Inverse problem, Plasmonics (2016), doi:10.1007/s11468-016-0477-x
ANN Rojas R.: Neural Networks. A Systematic Introduction, Springer-Verlag (1996).
dsp Proakis J. G. and Manolakis D. G.: Digital Signal Processing, Prentice Hall (2006).
http://arxiv.org/abs/1704.08346v1
{ "authors": [ "M Carrillo", "J A González", "S Hernández", "C López", "A Raya" ], "categories": [ "cond-mat.dis-nn" ], "primary_category": "cond-mat.dis-nn", "published": "20170426204602", "title": "Bloch oscillations in two-dimensional crystals: Inverse problem" }
Age-Minimal Transmission in Energy Harvesting Two-hop Networks

This work was supported by NSF Grants CNS 13-14733, CCF 14-22111, CCF 14-22129, and CNS 15-26608.

Ahmed Arafa and Sennur Ulukus
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD
[email protected] [email protected]

April 27, 2017
==================================================================================================================================================================

We consider an energy harvesting two-hop network where a source is communicating to a destination through a relay. During a given communication session time, the source collects measurement updates from a physical phenomenon and sends them to the relay, which then forwards them to the destination. The objective is to send these updates to the destination as timely as possible; namely, such that the total age of information is minimized by the end of the communication session, subject to energy causality constraints at the source and the relay, and data causality constraints at the relay. Both the source and the relay use fixed, yet possibly different, transmission rates. Hence, each update packet incurs fixed non-zero transmission delays. We first solve the single-hop version of this problem, and then show that the two-hop problem is solved by treating the source and relay nodes as one combined node, with some parameter transformations, and solving a single-hop problem between that combined node and the destination.

§ INTRODUCTION
A source node is collecting measurements from a physical phenomenon and sends updates to a destination through the help of a relay, see Fig. <ref>. Both the source and the relay depend on energy harvested from nature to communicate. Updates need to be sent in a timely manner; namely, such that the total age of information is minimized by a given deadline. The age of information is defined as the time elapsed since the freshest update has reached the destination.

Power scheduling in energy harvesting communication systems has been extensively studied in the recent literature. Earlier works <cit.> consider the single-user setting under different battery capacity assumptions, with and without fading. References <cit.> extend this to multiuser settings: broadcast, multiple access, and interference channels; and <cit.> consider two-hop, relay, and two-way channels.

Minimizing the age of information metric has been studied mostly in a queuing-theoretic framework; <cit.> studies a source-destination link under random and deterministic service times. This is extended to multiple sources in <cit.>. References <cit.> consider variations of the single-source system, such as randomly arriving updates, update management and control, and nonlinear age metrics. <cit.> introduces penalty functions to assess age dissatisfaction; and <cit.> shows that last-come-first-serve policies are optimal in multi-hop networks.

Our work is most closely related to <cit.>, where age minimization in single-user energy harvesting systems is considered; the difference of these works from the energy harvesting literature in <cit.> is that the objective is the age of information as opposed to throughput or transmission completion time, and their difference from the age minimization literature in <cit.> is that sending updates incurs energy expenditure, where energy becomes available intermittently.
<cit.> considers a random service time (the time for the update to take effect) and <cit.> considers zero service time; in our work here, we consider a fixed but non-zero service time.

We consider an energy harvesting two-hop network where a source is sending information updates to a destination through a half-duplex relay, see Fig. <ref>. The source and the relay use fixed communication rates. Thus, different from <cit.>, they both incur fixed non-zero transmission delays to deliver their data. Our setting is offline, and the objective is to minimize the total age of information received by the destination within a given communication session time, subject to energy causality constraints at the source and relay nodes, and data causality constraints at the relay node.

We first solve the single-hop version of this problem, where the source communicates directly with the destination, with non-zero update transmission delays, extending the offline results in <cit.>; we observe that introducing non-zero update transmission delays is equivalent to having minimum inter-update time constraints. We then solve the two-hop problem; we first show that it is not optimal for the source to send a new update before the relay finishes forwarding the previous ones, i.e., the relay's data buffer should not contain more than one update packet waiting for service, otherwise earlier arriving packets become stale. Then, we show that the optimal source transmission times are just in time for the relay to forward the updates, i.e., it is not optimal to let an update wait in the relay's data buffer after being received; it must be directly forwarded. This contrasts with the results in <cit.> that study throughput maximization in energy harvesting relay networks. There, throughput-optimal policies are separable in the sense that the source transmits the maximum amount of data to the relay regardless of the relay's energy harvesting profile. In our case, the age-optimal policy is not separable; it treats the source and the relay nodes as one combined node that is communicating to the destination. Hence, our single-hop results serve as a building block to find the solution of the two-hop problem.

§ SYSTEM MODEL AND PROBLEM FORMULATION
A source node acquires measurement updates from some physical phenomenon and sends them to a destination, through the help of a relay, during a communication session of duration T time units. Updates need to be sent as timely as possible; namely, such that the total age of information is minimized by time T. The age of information metric is defined as

a(t) ≜ t - U(t), ∀ t,

where U(t) is the time stamp of the latest received update packet at the destination, i.e., the time at which it was acquired at the source. Without loss of generality, we assume a(0) = 0. The objective is to minimize the following quantity

A_T ≜ ∫_0^T a(t) dt.

Both the source and the relay depend on energy harvested from nature to transmit their data, and are equipped with infinite-sized batteries to save their incoming energy. Energy arrives in packets of amounts E and E̅ at the source and the relay, respectively. Update packets are of equal length, and are transmitted using fixed rates at the source and the relay. We assume that one update transmission consumes one energy packet at a given node, and hence the number of updates is equal to the minimum of the number of energy arrivals at the source and the relay.
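Since everything that follows builds on the definitions a(t) ≜ t - U(t) and A_T ≜ ∫_0^T a(t) dt, a short numerical sketch may help fix ideas. It is our own illustration, and the schedule values below are hypothetical rather than taken from the paper.

import numpy as np

def age_integral(gen_times, recv_times, T, n_grid=100001):
    """Numerically integrate a(t) = t - U(t) over [0, T], where U(t) is the acquisition
    time stamp of the freshest update received by time t, and a(0) = 0."""
    ts = np.linspace(0.0, T, n_grid)
    U = np.zeros_like(ts)
    for g, r in zip(gen_times, recv_times):   # updates assumed sorted in time
        U[ts >= r] = g
    return np.trapz(ts - U, ts)

# Hypothetical two-hop schedule: an update generated (sent) at t_i reaches the
# destination at t_i + d + d_bar once the relay forwards it.
t = np.array([3.0, 7.0, 11.0])
A_T = age_integral(gen_times=t, recv_times=t + 1.0 + 2.0, T=19.0)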
Under a fixed rate policy, each update takes d and d̅ amounts of time to get through the source-relay channel and the relay-destination channel, respectively [d can be considered, for instance, equal to B/r, where B is the update packet length in bits and r = g(E) is the transmission rate in bits/time units, where g is some increasing function representing the rate-energy relationship.]. Source energy packets arrive at times {s_1, s_2, …, s_N} ≜ s, and relay energy packets arrive at times {s̅_1, s̅_2, …, s̅_N} ≜ s̅, where without loss of generality we assume that both the source and the relay receive N energy packets, since each update consumes one energy packet in transmission from either node, and hence any extra energy arrivals at either the source or the relay cannot be used. Let t_i and t̅_i denote the transmission time of the ith update at the source and the relay, respectively. We first impose the following constraints

t_i ≥ s_i, t̅_i ≥ s̅_i, 1 ≤ i ≤ N,

representing the energy causality constraints <cit.> at the source and the relay, which mean that no energy packet can be used before being harvested. Next, we must have

t_i + d ≤ t̅_i, 1 ≤ i ≤ N,

to ensure that the relay does not forward an update before receiving it from the source, which represents the data causality constraints <cit.>. We also have the service time constraints

t_i + d ≤ t_i+1, t̅_i + d̅ ≤ t̅_i+1, 1 ≤ i ≤ N-1,

which ensure that there can only be one transmission at a time at the source and the relay. Hence, d and d̅ represent the service (busy) times of the source and relay servers, respectively.

Transmission times at the source and the relay should also be related according to the half-duplex nature of the relay operation. For that, we must have the half-duplex constraints

(t_i, t_i+d) ∩ (t̅_j, t̅_j+d̅) = ∅, ∀ i,j,

where ∅ denotes the empty set, since the relay cannot receive and transmit simultaneously. These constraints enforce that either the source transmits a new update after the relay finishes forwarding the prior one, i.e., t_i+1 ≥ t̅_i + d̅ for some i; or that the source delivers a new update before the relay starts transmitting the prior one, i.e., t_i+k + d ≤ t̅_i for some i and k. The latter case means that there are k+1 update packets waiting in the relay's data buffer just before time t̅_i. We prove that this case is not age-optimal. To see this, consider the example of having k+1 = 2 update packets in the relay's data buffer waiting for service. The relay in this case has two choices at its upcoming transmission time: 1) forward the first update followed by the second one sometime later, or 2) forward the second update only and ignore the first one. These two choices yield different age evolution curves. We observe, geometrically, that A_T under choice 2 is strictly less than that under choice 1. Since the source under choice 2 consumes an extra energy packet to send the first update unnecessarily, it should instead save this energy packet to send a new update after the first one is forwarded by the relay. Therefore, it is optimal to replace the half-duplex constraints in (<ref>) by the following reduced ones

t̅_i + d̅ ≤ t_i+1, 1 ≤ i ≤ N-1.

Next, observe that (<ref>) can be removed from the constraints since it is implied by (<ref>) and (<ref>). In conclusion, the constraints are now those in (<ref>), (<ref>), and (<ref>). Finally, we add the following constraint to ensure reception of all updates by time T

t̅_N + d̅ ≤ T.

In Fig. <ref>, we present an example of the age of information in a system with 3 updates.
The area under the curve representing A_T is given by the sum of the areas of the trapezoids Q_1, Q_2, and Q_3, in addition to the area of the triangle L. The area of Q_2, for instance, is given by 1/2(t̅_2+d̅-t_1)^2 - 1/2(t̅_2+d̅-t_2)^2. The objective is to choose feasible transmission times for the source and the relay such that A_T is minimized. Computing the area under the age curve for general N arrivals, we formulate the problem as follows

min_{t, t̅} ∑_i=1^N (t̅_i+d̅-t_i-1)^2 - (t̅_i+d̅-t_i)^2 + (T-t_N)^2
subject to t_i ≥ s_i, t̅_i ≥ s̅_i, 1 ≤ i ≤ N; t_i + d ≤ t̅_i, 1 ≤ i ≤ N; t̅_i + d̅ ≤ t_i+1, 1 ≤ i ≤ N,

with t_0 ≜ 0 and t_N+1 ≜ T.

We note that the energy arrival times s and s̅, the transmission delays d and d̅, the session time T, and the number of energy arrivals N, are such that problem (<ref>) has a feasible solution. This is true only if

T ≥ s̅_i + (N-i+1)d̅, ∀ i,
T ≥ s_i + (N-i+1)(d+d̅), ∀ i,

where (<ref>) (resp. (<ref>)) ensures that the ith energy arrival time at the relay (resp. source) is small enough to allow the reception of the remaining N-i+1 updates within time T.

§ SOLUTION BUILDING BLOCK: THE SINGLE-USER CHANNEL
In this section, we solve the single-user version of problem (<ref>); namely, when the source is communicating directly with the destination. We use the solution to the single-user problem in this section as a building block to solve problem (<ref>) in the next section. In Fig. <ref>, we show an example of the age evolution in a single-user setting. The area of Q_2 is now given by 1/2(t_2+d-t_1)^2 - 1/2 d^2. We compute the area under the age curve for general N arrivals and formulate the single-user problem as follows

min_t ∑_i=1^N (t_i+d-t_i-1)^2 + (T-t_N)^2
subject to t_i ≥ s_i, 1 ≤ i ≤ N; t_i + d ≤ t_i+1, 1 ≤ i ≤ N,

where the second set of constraints are the service time constraints.

We note that reference <cit.> considered problem (<ref>) when the transmission delay d = 0. We extend their results to a positive delay (and hence a finite transmission rate) in this section. We first introduce the following change of variables: x_1 ≜ t_1 + d; x_i ≜ t_i - t_i-1 + d, 2 ≤ i ≤ N; and x_N+1 ≜ T - t_N. These variables must satisfy ∑_i=1^N+1 x_i = T + Nd, which reflects the dependent relationship between the new variables {x_i}. This can also be seen from Fig. <ref>. Substituting {x_i} in problem (<ref>), we get the following equivalent problem

min_x ∑_i=1^N+1 x_i^2
subject to ∑_i=1^k x_i ≥ s_k + kd, 1 ≤ k ≤ N; x_i ≥ 2d, 2 ≤ i ≤ N; x_N+1 ≥ d; ∑_i=1^N+1 x_i = T + Nd.

Observe that problem (<ref>) is a convex problem that can be solved by standard techniques <cit.>. For instance, we introduce the following Lagrangian

ℒ = ∑_i=1^N+1 x_i^2 - ∑_k=1^N λ_k(∑_i=1^k x_i - s_k - kd) - ∑_i=2^N η_i(x_i - 2d) - η_N+1(x_N+1 - d) + ν(∑_i=1^N+1 x_i - T - Nd),

where {λ_1,…,λ_N, η_2,…,η_N+1, ν} are Lagrange multipliers, with λ_i, η_i ≥ 0 and ν ∈ ℝ. Differentiating with respect to x_i and equating to 0, we get the following KKT conditions

x_1 = ∑_k=1^N λ_k - ν,
x_i = ∑_k=i^N λ_k + η_i - ν, 2 ≤ i ≤ N,
x_N+1 = η_N+1 - ν,

along with the complementary slackness conditions

λ_k(∑_i=1^k x_i - s_k - kd) = 0, 1 ≤ k ≤ N,
η_i(x_i - 2d) = 0, 2 ≤ i ≤ N,
η_N+1(x_N+1 - d) = 0.

We now have the following lemmas characterizing the optimal solution of problem (<ref>), {x_i^*}. Lemmas <ref> and <ref> show that the sequence {x_i^*}_i=2^N+1 is non-increasing, and derive necessary conditions for it to strictly decrease. On the other hand, Lemma <ref> shows that x_1^* can be smaller or larger than x_2^*, and derives necessary conditions for the two cases.

For 2 ≤ i ≤ N-1, x_i^* ≥ x_i+1^*. Furthermore, x_i^* > x_i+1^* only if ∑_j=1^i x_j^* = s_i + id.

We show this by contradiction. Assume that for some i ∈ {2,…,N-1} we have x_i^* < x_i+1^*.
By (<ref>), this is equivalent to having λ_i + η_i < η_i+1, i.e., η_i+1 > 0, which implies by complementary slackness in (<ref>) that x_i+1^* = 2d. This means that x_i^* < 2d, i.e., infeasible. Therefore x_i^* ≥ x_i+1^* holds. This proves the first part of the lemma. To show the second part, observe that since x_i^* > x_i+1^* holds if and only if λ_i + η_i > η_i+1, then either λ_i > 0 or η_i > 0. If η_i > 0, then by (<ref>) we must have x_i^* = 2d, which renders x_i+1^* < 2d, i.e., infeasible. Therefore, η_i cannot be positive and we must have λ_i > 0. By complementary slackness in (<ref>), this implies that ∑_j=1^i x_j^* = s_i + id.

x_1^* > x_2^* only if x_1^* = s_1 + d; while x_1^* < x_2^* only if x_i^* = 2d, for 2 ≤ i ≤ N.

The necessary condition for x_1^* to be larger than x_2^* can be shown using the same arguments as in the proof of the second part of Lemma <ref>, and is omitted for brevity. Let us now assume that x_1^* is smaller than x_2^*. By (<ref>) and (<ref>), this occurs if and only if η_2 > λ_1, which implies that x_2^* = 2d by complementary slackness in (<ref>). Finally, by Lemma <ref>, we know that {x_i^*}_i=2^N is non-increasing; since these are all bounded below by 2d, and x_2^* = 2d, they must all be equal to 2d.

x_N^* ≥ x_N+1^*. Furthermore, x_N^* > x_N+1^* only if at least one of the following holds: 1) ∑_i=1^N x_i^* = s_N + Nd, or 2) x_N^* = 2d.

The proof of Lemma <ref> is along the same lines as the proofs of the previous two lemmas and is omitted for brevity. We will use the results of Lemmas <ref>, <ref>, and <ref> to derive the optimal solution of problem (<ref>). To do so, one has to consider the relationship between the parameters of the problem: T, d, and N. For instance, one expects that if the session time T is much larger than the minimum inter-update time d, then the energy causality constraints will be binding while the constraints enforcing one update at a time will not be, and vice versa. We formalize this idea by considering two different cases as follows.

§.§ Nd ≤ T < (N+1)d
We first note that Nd is the least value that T can have for problem (<ref>) to admit a feasible solution. In this case, the following theorem shows that the optimal solution is achieved by sending all updates back to back with the minimal inter-update time possible to allow the reception of all of them by the end of the relatively small session time T.

Let Nd ≤ T < (N+1)d. Then, the optimal solution of problem (<ref>) is given by

x_1^* = max{(T-(N-2)d)/2, max_1≤k≤N {s_k-(k-2)d}},
x_i^* = 2d, 2 ≤ i ≤ N,
x_N+1^* = T - (N-2)d - x_1^*.

We first argue that if x_1^* ≥ x_2^* (≥ 2d), then ∑_i=1^N+1 x_i^* ≥ (2N+1)d. The last constraint in problem (<ref>) then implies that T ≥ (N+1)d, which is infeasible in this case. Therefore, we must have x_1^* < x_2^*. By Lemma <ref>, this occurs only if x_i^* = 2d for 2 ≤ i ≤ N. Hence, we set x_N+1 = T - (N-2)d - x_1, and observe that problem (<ref>) in this case reduces to a problem in only one variable x_1 as follows

min_{x_1} x_1^2 + (T-(N-2)d-x_1)^2
subject to max_1≤k≤N {s_k-(k-2)d} ≤ x_1 ≤ T - (N-1)d,

whose solution is given by projecting the critical point of the objective function onto the feasible interval, since the problem is convex <cit.>. This directly gives (<ref>).

§.§ T ≥ (N+1)d
In this case, we propose an algorithmic solution that is based on the necessary optimality conditions in Lemmas <ref>, <ref>, and <ref>. We first solve problem (<ref>) without considering the service time constraints, i.e., assuming that the set of constraints {x_i ≥ 2d, 2 ≤ i ≤ N; x_N+1 ≥ d} is not active.
We then check if any of these abandoned constraints is not satisfied, and optimally alter the solution to make it feasible. Let us denote by (P^e) problem (<ref>) without the set of constraints {x_i ≥ 2d, 2 ≤ i ≤ N; x_N+1 ≥ d}, i.e., considering only the energy causality constraints. We then introduce the following algorithm to solve problem (P^e).

Start by computing the index

i_1 ≜ arg max{s_1, s_2/2, …, s_N/N, (T-d)/(N+1)},

where the set is indexed as {1,…,N+1}, and then set

x_1^* = … = x_i_1^* = max{s_1, s_2/2, …, s_N/N, (T-d)/(N+1)} + d.

If i_1 = N+1, stop; else compute

i_2 ≜ arg max{s_i_1+1 - s_i_1, (s_i_1+2 - s_i_1)/2, …, (s_N - s_i_1)/(N-i_1), (T-d-s_i_1)/(N+1-i_1)},

where the set is indexed as {i_1+1,…,N+1}, and then set

x_i_1+1^* = … = x_i_2^* = max{s_i_1+1 - s_i_1, (s_i_1+2 - s_i_1)/2, …, (s_N - s_i_1)/(N-i_1), (T-d-s_i_1)/(N+1-i_1)} + d.

If i_2 = N+1, stop; else continue with computing i_3 as above.

The algorithm is guaranteed to stop since it will at most compute i_N+1, which is equal to N+1 by construction. Note that while computing i_k, if the max is not unique, we pick the largest maximizer. Observe that the algorithm equalizes the x_i's as much as allowed by the energy causality constraints. Let {x̅_i}_i=1^N be the output of the Inter-Update Balancing algorithm and let {x_i^e}_i=1^N denote the optimal solution of problem (P^e). We now have the following results.

{x̅_i}_i=1^N is a non-increasing sequence, and x̅_j > x̅_j+1 only if ∑_i=1^j x̅_i = s_j + jd.

x_i^e = x̅_i, 1 ≤ i ≤ N.

Lemma <ref> can be shown using contradiction arguments and the definition of i_k. Lemma <ref> is similar to <cit.>. In fact, the Inter-Update Balancing algorithm reduces to the optimal offline algorithm proposed in <cit.> when d = 0. When d > 0, a change of parameters can still show the equivalence. The complete proofs of the two lemmas are omitted due to space limits. The next corollary now follows.

Consider problem (P^e) with the additional constraint that ∑_i=1^j x_i = s_j + jd holds for some j ≤ N. Then, the optimal solution of the problem, under this condition, for time indices not larger than j is given by {x_i^e}_i=1^j.

The following theorem shows that the optimal solution of problem (<ref>), {x_i^*}, is found by equalizing the inter-update times as much as allowed by the energy causality constraints. If such equalization does not satisfy the minimal inter-update time constraints, we force them to be exactly equal to this minimum and adjust the last variable x_N+1 accordingly.

Let T ≥ (N+1)d. If x_i^e ≥ 2d, 2 ≤ i ≤ N, and x_N+1^e ≥ d, then x_i^* = x_i^e, ∀ i. Else, let n_0 be the first time index at which {x_i^e} is not feasible in problem (<ref>). Then, we have n_0 ≤ N. If n_0 > 2, we have

x_i^* = x_i^e, 1 ≤ i ≤ n_0-1,
x_i^* = 2d, n_0 ≤ i ≤ N,
x_N+1^* = T + Nd - ∑_i=1^N x_i^*.

Otherwise, for n_0 = 2, {x_i^*} is given by the above if x_1^e = s_1 + d; else {x_i^*} is given by (<ref>)-(<ref>).

The first part of the theorem follows directly since the solution of the less constrained problem (P^e) is optimal if feasible in problem (<ref>). Next, we prove the second part. We first show that n_0 ≤ N by contradiction. Assume that n_0 = N+1, i.e., x_N+1^e < d and x_N^e ≥ 2d > x_N+1^e. By Lemma <ref>, this means that ∑_i=1^N x_i^e = s_N + Nd. Hence, x_N+1^e = T + Nd - s_N - Nd = T - s_N, which cannot be less than d by the feasibility assumption in (<ref>). Thus, n_0 ≤ N. Now let n_0 > 2 and observe that x_n_0^e < 2d ≤ x_n_0-1^e. Thus, by Lemma <ref>, we must have ∑_i=1^n_0-1 x_i^e = s_n_0-1 + (n_0-1)d. Now let us show that the proposed policy is feasible; we only need to check whether x_N+1^* ≥ d.
Towards that, we have

x_N+1^* = T + Nd - ∑_i=1^n_0-1 x_i^* - (N-n_0+1)2d = T - s_n_0-1 - (N-n_0+1)d ≥ d,

where the last inequality follows by the feasibility assumption in (<ref>). Therefore, the proposed policy is feasible. We now show that it is optimal as follows. Assume that there exists another policy {x̃_i} that achieves a lower age than {x_i^*}. We now have two cases. First, assume that ∑_i=1^n_0-1 x̃_i = s_n_0-1 + (n_0-1)d. Then by Corollary <ref> we must have x̃_i = x_i^* for 1 ≤ i ≤ n_0-1. Now for n_0 ≤ i ≤ N, if x̃_i > x_i^* = 2d, this means that x̃_N+1 < x_N+1^* to satisfy the last constraint in (<ref>). Since ∑_i=n_0^N+1 x̃_i = ∑_i=n_0^N+1 x_i^*, then by convexity of the square function, ∑_i=n_0^N+1 (x̃_i)^2 > ∑_i=n_0^N+1 (x_i^*)^2 <cit.>, and hence {x̃_i} cannot be optimal. Second, assume that ∑_i=1^n_0-1 x̃_i > s_n_0-1 + (n_0-1)d = ∑_i=1^n_0-1 x_i^*. Since x̃_i ≥ x_i^* = 2d for n_0 ≤ i ≤ N, and ∑_i=1^N+1 x̃_i = ∑_i=1^N+1 x_i^*, then we must have x̃_N+1 < x_N+1^*. Thus, ∑_i=1^N+1 (x̃_i)^2 > ∑_i=1^N+1 (x_i^*)^2 by convexity of the square function <cit.>, and {x̃_i} cannot be optimal.

Finally, let n_0 = 2. If x_1^e = s_1 + d, then the proof follows by the arguments for the n_0 > 2 case. Else, if x_1^e > s_1 + d, then x_1^e = x_2^e ≥ x_N+1^e by Lemma <ref>. Since {x_i^e}_i=2^N have to increase to at least 2d, x_1^e + x_N+1^e has to decrease to satisfy the last constraint in (<ref>). However, one cannot increase x_1^e to 2d or more and compensate for that by decreasing x_N+1^e, by convexity of the square function. Thus, x_1^* < x_2^*, and Lemma <ref> shows that the results of Theorem <ref> follow to give (<ref>)-(<ref>).

§ NUMERICAL RESULTS
We now present some numerical examples to further illustrate our results. A two-hop network has energy arriving at times s = [2,6,7,11,13] at the source, and s̅ = [1,4,9,10,15] at the relay.
A source transmission takes d = 1 time unit to reach the relay; a relay transmission takes d̅ = 2 time units to reach the destination. The session time is T = 19. We apply the change of parameters in Theorem <ref> to get new energy arrival times s = [3,7,9,12,15], a new transmission delay d = 3, and a new session time T = 20. Then, we solve problem (<ref>) to get the optimal inter-update times, using the new parameters. Note that T ≥ (N+1)d = 18, whence the optimal solution is given by Theorem <ref>. We apply the Inter-Update Balancing algorithm to get x^e = [6.5, 6.5, 5.67, 5.67, 5.67, 5]. Hence, the first infeasible inter-update time occurs at n_0 = 3 (x_3^e < 2d = 6). Thus, we set: x_1^* = x_1^e and x_2^* = x_2^e; x_3^* = x_4^* = x_5^* = 2d; and x_6^* = T + Nd - ∑_i=1^5 x_i^*. We see that x^* = [6.5, 6.5, 6, 6, 6, 4] satisfies the conditions stated in Lemmas <ref>, <ref>, and <ref>.

We consider another example where energy arrives at times s = [0,4,4,9,13] and s̅ = [1,3,6,10,12], with T = 16. Applying the change of parameters in Theorem <ref> we get T = 17 < (N+1)d = 18, and hence we use the results of Theorem <ref> to get x^* = [5,6,6,6,6,3]. We then increase T to 18. This is effectively 19 according to Theorem <ref>, and therefore we apply the results of Theorem <ref>. The Inter-Update Balancing algorithm gives x^e = [5.8, 5.8, 5.8, 5.8, 5.8, 5], and hence n_0 = 2. Since x_1^e > s_1 + d = 4, the optimal solution is given by (<ref>)-(<ref>) as x^* = [5,6,6,6,6,5].

§ CONCLUSIONS
We proposed age-minimal policies in energy harvesting two-hop networks with fixed transmission delays. The optimal policy is such that the relay's data buffer should not contain any packets waiting for service; the source should send an update to the relay just in time as the relay is ready to forward. This lets us treat the source and relay nodes as one combined node communicating with the destination node, and reduce the two-hop problem to a single-hop one. We solved the single-hop problem by balancing inter-update times to the extent allowed by energy arrival times and transmission delays.

jingP2P J. Yang and S. Ulukus. Optimal packet scheduling in an energy harvesting communication system. IEEE Trans. Comm., 60(1):220–230, January 2012.
kayaEmax K. Tutuncuoglu and A. Yener. Optimum transmission policies for battery limited energy harvesting nodes. IEEE Trans. Wireless Comm., 11(3):1180–1189, March 2012.
omurFade O. Ozel, K. Tutuncuoglu, J. Yang, S. Ulukus, and A. Yener. Transmission with energy harvesting nodes in fading wireless channels: Optimal policies. IEEE JSAC, 29(8):1732–1743, September 2011.
ruiZhangEH C. K. Ho and R. Zhang. Optimal energy allocation for wireless communications with energy harvesting constraints. IEEE Trans. Signal Proc., 60(9):4808–4818, September 2012.
jingBC J. Yang, O. Ozel, and S. Ulukus. Broadcasting with an energy harvesting rechargeable transmitter. IEEE Trans. Wireless Comm., 11(2):571–583, February 2012.
omurBC O. Ozel, J. Yang, and S. Ulukus. Optimal broadcast scheduling for an energy harvesting rechargeable transmitter with a finite capacity battery. IEEE Trans. Wireless Comm., 11(6):2193–2203, June 2012.
elifBC M. A. Antepli, E. Uysal-Biyikoglu, and H. Erkal. Optimal packet scheduling on an energy harvesting broadcast link. IEEE JSAC, 29(8):1721–1731, September 2011.
jingMAC J. Yang and S. Ulukus. Optimal packet scheduling in a multiple access channel with energy harvesting transmitters. Journal of Comm. and Networks, 14(2):140–150, April 2012.
kaya-interference K. Tutuncuoglu and A. Yener.
Sum-rate optimal power policies for energy harvesting transmitters in an interference channel. Journal of Comm. and Networks, 14(2):151–161, April 2012.
ruiZhangRelay C. Huang, R. Zhang, and S. Cui. Throughput maximization for the Gaussian relay channel with energy harvesting constraints. IEEE JSAC, 31(8):1469–1479, August 2013.
gunduz2hop D. Gunduz and B. Devillers. Two-hop communication with energy harvesting. In IEEE CAMSAP, December 2011.
berkDiamond-jour B. Gurakan and S. Ulukus. Cooperative diamond channel with energy harvesting nodes. IEEE JSAC, 34(5):1604–1617, May 2016.
varan_twc_jour B. Varan and A. Yener. Delay constrained energy harvesting networks with limited energy and data storage. IEEE JSAC, 34(5):1550–1564, May 2016.
arafa_baknina_twc_dec_proc A. Arafa, A. Baknina, and S. Ulukus. Energy harvesting two-way channels with decoding and processing costs. IEEE Trans. Green Comm. and Networking, 1(1):3–16, March 2017.
yates_age_1 S. Kaul, R. Yates, and M. Gruteser. Real-time status: How often should one update? In IEEE Infocom, March 2012.
yates_age_mac R. Yates and S. Kaul. Real-time status updating: Multiple sources. In IEEE ISIT, July 2012.
ephremides_age_random C. Kam, S. Kompella, and A. Ephremides. Age of information under random updates. In IEEE ISIT, July 2013.
ephremides_age_management M. Costa, M. Codreanu, and A. Ephremides. On the age of information in status update systems with packet management. IEEE Trans. Info. Theory, 62(4):1897–1910, April 2016.
ephremides_age_non_linear A. Kosta, N. Pappas, A. Ephremides, and V. Angelakis. Age and value of information: Non-linear age case. In IEEE ISIT, June 2017.
shroff_age_mdp Y. Sun, E. Uysal-Biyikoglu, R. Yates, C. E. Koksal, and N. B. Shroff. Update or wait: How to keep your data fresh. In IEEE Infocom, April 2016.
shroff_age_multi_hop A. M. Bedewy, Y. Sun, and N. B. Shroff. Age-optimal information updates in multihop networks. In IEEE ISIT, June 2017.
yates_age_eh R. D. Yates. Lazy is timely: Status updates by an energy harvesting source. In IEEE ISIT, June 2015.
elif_age_eh B. T. Bacinoglu, E. T. Ceran, and E. Uysal-Biyikoglu. Age of information under energy replenishment constraints. In UCSD ITA, February 2015.
boyd S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
http://arxiv.org/abs/1704.08679v2
{ "authors": [ "Ahmed Arafa", "Sennur Ulukus" ], "categories": [ "cs.IT", "cs.NI", "math.IT" ], "primary_category": "cs.IT", "published": "20170427175005", "title": "Age-Minimal Transmission in Energy Harvesting Two-hop Networks" }
Hybrid safe-strong rules for efficient optimization in lasso-type problems
Yaohui Zeng, Tianbao Yang, Patrick Breheny
========================================

The lasso model has been widely used for model selection in data mining, machine learning, and high-dimensional statistical analysis. However, with the ultrahigh-dimensional, large-scale data sets now collected in many real-world applications, it is important to develop algorithms to solve the lasso that efficiently scale up to problems of this size. Discarding features from certain steps of the algorithm is a powerful technique for increasing efficiency and addressing the Big Data challenge. In this paper, we propose a family of hybrid safe-strong rules (HSSR) which incorporate safe screening rules into the sequential strong rule (SSR) to remove unnecessary computational burden. In particular, we present two instances of HSSR, namely SSR-Dome and SSR-BEDPP, for the standard lasso problem. We further extend SSR-BEDPP to the elastic net and group lasso problems to demonstrate the generalizability of the hybrid screening idea. Extensive numerical experiments with synthetic and real data sets are conducted for both the standard lasso and the group lasso problems. Results show that our proposed hybrid rules can substantially outperform existing state-of-the-art rules.

§ INTRODUCTION

The lasso model <cit.> is widely used in data mining, machine learning, and high-dimensional statistics. The model is defined as the following optimization problem:

β̂(λ) = argmin_β∈ℝ^p 1/2n ‖y - Xβ‖_2^2 + λ‖β‖_1,

where y is the n × 1 response vector, X = (x_1, …, x_p) is the n × p feature matrix, β∈ℝ^p is the coefficient vector, and λ ≥ 0 is a regularization parameter. ‖·‖_2 and ‖·‖_1 respectively denote the Euclidean (ℓ_2) norm and the ℓ_1 norm.

Due to its property of automatic feature selection, the lasso model has attracted extensive study, with a wide range of successful applications to many areas, such as signal processing <cit.>, gene expression data analysis <cit.>, face recognition <cit.>, text mining <cit.>, and so on. Efficiently solving the lasso model is therefore of great significance to statistical and machine learning practice. Over the past years, a number of efficient algorithms have been developed for solving the lasso <cit.>. Among them, the pathwise coordinate descent algorithm <cit.> is simple, fast, and able to make use of the sparsity structure of the lasso and the "warm start" strategy, making it very suitable and efficient to scale up to high-dimensional lasso problems <cit.>. With the evolving era of Big Data, however, it is increasingly common to encounter large-scale, ultrahigh-dimensional data sets. The increased number of features and observations in these data sets presents added challenges to solving the lasso efficiently.

One idea for reducing computation time is to drop certain features from the analysis prior to fitting the lasso. As a result, the dimensionality of the feature matrix – and hence the computational burden of the optimization – will be substantially reduced. This idea, known as feature screening, has been around for a long time, but was first studied formally by <cit.>, who studied the asymptotic properties of screening out features that have weak correlations with the response variable.
However, feature screening, which is usually based on the marginal relationship between a feature and the outcome, can incorrectly screen out important features and does not, therefore, solve the original optimization problem (<ref>). To avoid this problem, other researchers sought to develop safe rules that are guaranteed not to discard any active features. These rules are usually based on exploiting geometric properties of the dual formulation of the lasso problem. Their main idea is to bound the dual optimal solution θ̂(λ) of the lasso (formally defined in Section <ref>) within a compact region Θ. Then, given a feature x_j, its coefficient estimate β̂_j is guaranteed to be 0 if sup_θ∈Θ |x_j^T θ| < λ. This assertion is implied by the KKT condition: |x_j^T θ̂(λ)| < λ ⇒ β̂_j = 0 <cit.>. The pioneering work in this direction is the SAFE rule developed by El Ghaoui et al. <cit.>. The smaller the region Θ, the more features will be discarded and the more efficiency gained; this has motivated other more powerful rules such as the EDPP rules <cit.>, the Dome test <cit.>, and the Sphere tests <cit.>, which shrink Θ according to different strategies.
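As a concrete illustration of this mechanism (our sketch, not any specific published rule): when Θ is a ball B(c, ρ), the shape underlying the Sphere tests, the supremum has the closed form sup_{θ∈Θ} |x_j^T θ| = |x_j^T c| + ρ‖x_j‖, so the safe check costs one inner product and one norm per feature.

```python
import numpy as np

def sphere_safe_discard(X, center, radius, lam):
    """Boolean mask of features certified inactive by a ball-shaped safe
    region Theta = B(center, radius):
        sup_{theta in Theta} |x_j' theta| = |x_j' center| + radius*||x_j|| < lam.
    A generic illustration of safe screening; the center and radius depend
    on the particular rule being used."""
    scores = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
    return scores < lam   # True = safe to discard feature j
```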
A separate line of research has sought to develop "strong" rules that are more powerful at discarding features than safe rules and for which violations are unlikely, but possible. This idea was initially proposed by <cit.>, who developed sequential strong rules (SSR) based upon the Karush-Kuhn-Tucker (KKT) conditions for the lasso problem along with the assumption of a "unit-slope" bound. The main idea is that we are still solving the original optimization problem, but we can skip certain calculations that are likely to be unnecessary, thereby reducing computational burden. However, because it is possible for these rules to incorrectly discard active features, a post-convergence KKT checking step is required in order to guarantee the correctness of the solution.

In this paper, we propose combining safe and strong rules, yielding hybrid safe-strong rules (HSSR) for discarding features in lasso-type problems. The key of HSSR is to incorporate simple yet safe rules into SSR so as to remove a large amount of unnecessary post-convergence KKT checking on features that can be eliminated by safe rules. As a result, this paper will demonstrate that the total computing time for solving the lasso using these hybrid rules is substantially reduced compared to using either safe or strong rules alone. Furthermore, the idea of HSSR provides a rather general feature screening framework since (i) in principle any safe rule can be combined with SSR, resulting in a more powerful rule; and (ii) HSSR can be easily extended to other lasso-type problems, either with different loss functions or different regularization terms. In this paper we focus on three types of lasso problems with quadratic loss, namely, the standard lasso, the group lasso, and the elastic net.

Although this idea is relatively simple, we consider it to be novel for two primary reasons. First, the existing literature is firmly divided and for the most part published in entirely different types of journals: most of the research on safe rules has appeared in machine learning and computer science journals, while the research on strong rules has appeared in statistics journals. Most of what has been written gives the impression that these are two irreconcilable and mutually exclusive approaches to improving efficiency. We show here that this is not the case – the two types of rules can be combined in a relatively straightforward manner. Second, the degree of efficiency gained by combining these rules is rather surprising, at least to us. In many cases, the hybrid rules are more than the sum of their parts, providing much greater gains in efficiency when combined than using either type of rule alone.

The main contributions of this research include:
* We propose a novel optimization framework for lasso screening that combines SSR with simple safe rules, resulting in a family of hybrid safe-strong rules (HSSR) that are more efficient and scalable to large-scale data sets.
* We develop two instances of HSSR, namely SSR-Dome and SSR-BEDPP, for feature screening in solving the lasso.
* We extend SSR-BEDPP to two other lasso-type problems, the elastic net <cit.> and group lasso <cit.>, to demonstrate the generalizability of the hybrid screening idea.
* We evaluate the performance of the newly proposed screening rules by extensive numerical experiments on both synthetic and real data sets, and show that our rules substantially outperform state-of-the-art ones.
* We implement all screening rules in this paper in two publicly accessible R packages. Specifically, the rules for the standard lasso and elastic net are implemented in the R package [<https://CRAN.R-project.org/package=biglasso>] <cit.>, which aims to extend lasso model fitting to big data in R. The package [<https://CRAN.R-project.org/package=grpreg>] <cit.> implements screening rules for the group lasso. The underlying optimization algorithm and screening rules in the R packages are implemented in C/C++ for fast computation.

In this paper we assume without loss of generality that the response vector y is centered so that the intercept term is dropped from the lasso model. We further assume the feature vectors {x_j}_j=1^p are centered and standardized to have unit variance:

∑_i=1^n y_i = 0, ∑_i=1^n x_ij = 0, 1/n ∑_i=1^n x_ij^2 = 1, for j = 1, …, p.

Standardization is a typical preprocessing step in fitting lasso models since: (1) it ensures that the penalty is applied uniformly across features with different scales of measurement; (2) it often contributes to faster convergence of the optimization algorithm; (3) as we will see in the following sections, it simplifies feature screening rules and thus reduces computational complexity.

The rest of the paper is organized as follows. Section <ref> reviews the two categories, strong rules and safe rules, upon which our work is built.
We propose our new hybrid screening strategy in Section <ref> and describe two powerful rules, SSR-BEDPP and SSR-Dome, based on this strategy, along with a pathwise coordinate descent algorithm to take advantage of them. In addition, this section analyzes the computational complexity of the HSSR rules and compares them to SSR and EDPP. In Section <ref>, we extend SSR-BEDPP to the elastic net and group lasso problems. Section <ref> compares the performance of our rules with existing ones via extensive numerical experiments on synthetic and real data sets for both the standard lasso and the group lasso problems, and we conclude with some final remarks in Section <ref>. Proofs of theorems are given in the Appendix.

§ EXISTING LASSO SCREENING RULES

§.§ Sequential strong rules

SSR <cit.> is a heuristic screening rule for discarding features when solving the lasso over a grid of decreasing regularization parameter values λ_1 > λ_2 > … > λ_K. Specifically, after solving for β̂(λ_k) at λ_k, SSR discards the jth feature from the optimization at λ_k+1 if

|x_j^T r(λ_k) / n| < 2λ_k+1 - λ_k,

where r(λ_k) = y - Xβ̂(λ_k) is the residual vector at λ_k. To see the rationale of SSR, we start by noting that β̂(λ) satisfies the following KKT conditions for the lasso problem (<ref>):

x_j^T r(λ) / n = λ sign(β̂_j), if β̂_j ≠ 0,
|x_j^T r(λ) / n| ≤ λ, if β̂_j = 0.

Let c_j(λ) = x_j^T r(λ) / n. The key idea behind SSR is to assume c_j(λ) is non-expansive in λ (the "unit-slope" bound):

|c_j(λ) - c_j(λ̃)| ≤ |λ - λ̃|, for any λ, λ̃ ∈ (0, λ_max].

Now, given β̂(λ_k), λ_k, λ_k+1 (λ_k ≥ λ_k+1), if conditions (<ref>) and (<ref>) are satisfied, we have

|c_j(λ_k+1)| ≤ |c_j(λ_k+1) - c_j(λ_k)| + |c_j(λ_k)| < λ_k - λ_k+1 + (2λ_k+1 - λ_k) = λ_k+1,

and thus β̂_j(λ_k+1) = 0, implied by the KKT conditions (<ref>). SSR is simple and able to screen out a large number of inactive features (i.e., those whose coefficients equal zero). However, since assumption (<ref>) may be violated, SSR requires checking KKT conditions (<ref>) for all p coefficients after convergence has been reached at each value of λ to ensure that the calculated β̂(λ_k+1) is a solution to the original optimization problem. This process is time-consuming when p is large, and even more so if any violations occur, as this involves re-solving the lasso problem with the erroneously discarded features now included. Fortunately, empirical studies show that violations are rare, although certainly possible; see Section 3 of <cit.> for a thorough analysis.
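In code, one SSR pass amounts to a single matrix–vector product against the stored residuals; a minimal sketch (variable names are ours) that also returns the inner products z_j, which are exactly the quantities reused by the post-convergence KKT check:

```python
import numpy as np

def ssr_screen(X, resid_prev, lam_prev, lam_next):
    """Sequential strong rule: flag feature j for discarding at lam_next when
    |x_j' r(lam_prev) / n| < 2 * lam_next - lam_prev."""
    n = X.shape[0]
    z = X.T @ resid_prev / n              # z_j = x_j' r(lam_prev) / n
    discard = np.abs(z) < 2 * lam_next - lam_prev
    return discard, z                     # z is reused for KKT checking
```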
§.§ Safe rules

As noted in the introduction, there are a number of safe rules in the literature; we focus primarily on EDPP rules, as they appear to be the most powerful safe rules developed thus far. EDPP rules are constructed by projecting the scaled response vector onto a nonempty closed and convex polytope. Here we derive simplified versions of the basic EDPP rule (BEDPP) and the sequential EDPP rule (SEDPP) under the standardization condition (<ref>). Compared to the original rules, the simplified ones reveal a clearer picture of the computational complexity and reduce the computational burden somewhat. We refer readers to <cit.> for the original EDPP rules and additional technical details. The EDPP rules are based on the dual formulation of Problem (<ref>):

θ̂(λ) = argmax_θ∈ℝ^n 1/2n ‖y‖^2 - nλ^2/2 ‖θ - y/(nλ)‖^2 subject to |x_j^T θ| ≤ 1, ∀ j = 1, ⋯, p,

where θ̂(λ) is the dual optimal solution of Problem (<ref>) under the constraints (<ref>). The dual and primal solutions are related via:

θ̂(λ) = (y - Xβ̂(λ)) / (nλ).

The original EDPP rules are developed by exploiting the geometric properties of the dual solutions. The simplified BEDPP and SEDPP rules are stated as the following theorems.

For the lasso problem (<ref>), let λ_m := λ_max = max_j |x_j^T y / n| and x_* = argmax_{x_j} |x_j^T y|. For any λ ∈ (0, λ_m], under condition (<ref>) we have β̂_j(λ) = 0 if

|(λ_m + λ) x_j^T y - (λ_m - λ) sign(x_*^T y) λ_m x_j^T x_*| < 2nλλ_m - (λ_m - λ) √(n‖y‖^2 - n^2 λ_m^2).

For the lasso problem (<ref>), let λ_m := λ_max = max_j |x_j^T y / n|. Suppose we are given a sequence of λ values λ_m = λ_0 > λ_1 > … > λ_K. Then, under condition (<ref>):

* For any 0 < k < K, we have β̂_j(λ_k+1) = 0 if β̂(λ_k) is known and the following holds:

|x_j^T (y - Xβ̂(λ_k)) / λ_k + c/2 (x_j^T y - a x_j^T Xβ̂(λ_k) / ‖Xβ̂(λ_k)‖^2)| < n - c/2 √(n‖y‖^2 - n a^2 / ‖Xβ̂(λ_k)‖^2),

where c = (λ_k - λ_k+1) / (λ_k λ_k+1) and a = y^T Xβ̂(λ_k) are two scalars.

* For k = 0, i.e., λ_k = λ_m, the SEDPP rule reduces to the BEDPP rule. That is, we have β̂_j(λ_k+1) = 0 if rule (<ref>) holds, in which (λ_m, λ) is replaced by (λ_0, λ_1).

Compared to SEDPP, the BEDPP rule is non-sequential in that screening at λ_k+1 via BEDPP does not require the lasso solution at λ_k. As a result, BEDPP is much simpler to compute but less powerful in discarding inactive features, as shall be seen in Section <ref>. An alternative safe rule, the Dome test, is similar to BEDPP in that it is non-sequential and requires only a small computational burden; due to space constraints, we omit the details of the Dome test from this paper and refer interested readers to <cit.> and <cit.>. Supplementary material containing the details of the simplified Dome test can be found on the GitHub page [<https://github.com/YaohuiZeng/HSSR_paper_supplementary/blob/master/HSSR_supplementary_for_Dome.pdf>].

§ HYBRID SAFE-STRONG RULES

In this section, we define our newly proposed hybrid safe-strong rules (HSSR) and compare their computational complexity to the rules discussed in Section <ref>. In addition, we present a re-designed pathwise coordinate descent algorithm that takes advantage of these rules to increase the efficiency of solving the lasso.

§.§ Definition

The motivation of HSSR is to remove a large amount of unnecessary post-convergence KKT checking, required by SSR, on features that could have been discarded by a safe screening rule. In principle, any safe rule can be combined with SSR, resulting in a family of rules which we call hybrid safe-strong rules and define as follows. For solving the lasso problem (<ref>) over a sequence of λ values λ_1 > λ_2 > … > λ_K, suppose that there exists a safe rule and that β̂(λ_k) is known. Let 𝒮_k+1 denote the safe set, i.e., the set of features not discarded by the safe rule at λ_k+1. Then a corresponding hybrid safe-strong rule (HSSR) can be formulated by combining the safe rule with SSR. Specifically, HSSR discards the jth feature from the lasso optimization at λ_k+1 if

j ∈ 𝒮_k+1^c ∪ {j ∈ 𝒮_k+1 : |x_j^T r(λ_k)| / n < 2λ_k+1 - λ_k},

where r(λ_k) = y - Xβ̂(λ_k). HSSR builds upon SSR and thus enjoys all of its advantages: it is simple, sequential, and powerful enough to discard a large portion of features. As a drawback, it also requires post-convergence KKT checking. However, HSSR only needs to perform KKT checking over a subset of features since all features in the set 𝒮_k+1^c are discarded by the safe rule. Provided that the safe rule is simple to calculate, by which we mean that its time complexity is O(np), HSSR will be more efficient computationally than SSR.
The amount of efficiency gained depends on the safe rule, with more powerful rules providing greater increases in speed. In this paper, two instances of HSSR, namely SSR-BEDPP and SSR-Dome, are studied. These two rules respectively use BEDPP and the Dome test as the safe rule. An essential property of HSSR is that for any problem with a unique global optimum and an algorithm that converges to that solution, incorporating HSSR into the algorithm will yield the same solution, as stated in the following theorem.

Suppose the lasso problem (<ref>) at a given λ is strictly convex such that the sequence of solutions produced by an iterative algorithm a(·) (such as coordinate descent) converges to the unique global optimum, β̂(λ). Then that algorithm with HSSR screening converges to the same solution β̂(λ).

Let X_𝒮 denote the submatrix of X consisting only of the features in 𝒮(λ). By the definition of a safe rule, the global optimum β̂(λ) can be decomposed as β̂(λ) = (0^T, β̂_𝒮(λ)^T)^T, where β̂_𝒮(λ) is the solution to the following optimization problem:

β̂_𝒮(λ) = argmin_{β_𝒮 ∈ ℝ^|𝒮(λ)|} 1/2n ‖y - X_𝒮 β_𝒮‖^2 + λ‖β_𝒮‖_1.

Furthermore, it is easy to verify that the algorithm a(·) with SSR screening for solving (<ref>) converges to the global optimum β̂_𝒮(λ). This is because the KKT checking procedure required by SSR guarantees that the final solution satisfies the KKT optimality conditions and hence is the global optimum. Therefore, the algorithm with HSSR screening converges to β̂(λ).

§.§ Performance analysis

Intuitively, the computational savings achieved by feature screening will be negated if the screening rule itself is too complicated to execute. Therefore, an efficient rule needs to balance the trade-off between its computational complexity and its rejection power (i.e., how many features can be discarded). That is, an ideal screening rule should be powerful enough to discard a large portion of features and also relatively simple to compute. To show the advantages of HSSR, we compare the aforementioned screening rules in terms of the rejection power and the computational complexity of the rules themselves.

§.§.§ Screening power

Here we present an empirical comparison of different rules in terms of the power to discard features. Figure <ref> depicts the results based on the GENE data (see details in Section <ref>). First, it is important to note that HSSR, by construction, discards at least as many features as SSR does. Second, HSSR, SSR and SEDPP discard far more features than the non-sequential rules BEDPP and Dome. In particular, the screening power of BEDPP and Dome decreases rapidly as λ decreases. For example, BEDPP cannot discard any features when λ / λ_max is smaller than 0.45 in this case, whereas Dome is the least powerful and discards virtually no features when λ / λ_max is less than 0.6.

§.§.§ Computational complexity

Table <ref> presents the complexity of computing these rules for the entire path of K values of λ. For SSR (<ref>), it is important to observe that the quantities needed to check the KKT conditions (<ref>), x_j^T r(λ_k), can be re-used for executing SSR at λ_k+1 for that feature. Therefore, SSR requires O(np) operations, as the dominant computation is calculating X^T r(λ_k). However, since r(λ_k) changes as a function of λ_k, the total complexity of SSR is O(npK) over the entire solution path. HSSR, on the other hand, only needs to perform KKT checking over the features not discarded by the safe screening step. Thus, x_j^T r(λ_k-1) must be calculated only for features in the safe set 𝒮_k, yielding O(n ∑_k=1^K |𝒮_k|) operations.
When the safe rule is effective (e.g., when λ is relatively large, as shown in Figure <ref>), HSSR avoids a large amount of unnecessary KKT checking and hence is much more efficient than SSR.

The complexity of SEDPP (<ref>) is more involved. During coordinate descent, the residuals r(λ_k) are continually updated and stored. Thus, Xβ̂(λ_k) can be obtained at a cost of O(n) operations since Xβ̂(λ_k) = y - r(λ_k). Furthermore, only O(n) calculations are needed to update Xβ̂(λ_k) and a, while quantities like x_j^T y and ‖y‖ can be pre-computed to avoid duplicated calculations. The more demanding parts are on the left hand side of (<ref>), specifically, the two terms x_j^T r(λ_k) and x_j^T Xβ̂(λ_k). Since these must be calculated for all features, this essentially involves calculating X^T r(λ_k) and X^T Xβ̂(λ_k), both of which require O(np) calculations. Thus, similar to that of SSR, the total complexity of SEDPP is O(npK) for obtaining the entire solution path.

Finally, the complexity of executing BEDPP (<ref>) over the solution path is only O(np), as its dominant calculations are X^T y and X^T x_*, which only need to be calculated once. After these initial calculations, only O(p) operations are needed to compute the rule, resulting in a complexity of O(pK) over the entire path. Hence the total complexity is O(np) provided that n is larger than K. The Dome test also has complexity of O(np), and can be analyzed in the same fashion based on results in <cit.>.

§.§.§ Advantages of HSSR

The advantages of HSSR can be summarized as follows:
* Computational efficiency: Solving the lasso with HSSR screening, as compared to other rules, involves the least computational burden. As we will see in Section <ref>, the result is that HSSR is the fastest of the approaches considered here.
* Memory efficiency: Both SSR and SEDPP have to fully scan the feature matrix K times, while HSSR only needs to do so for the portion of the lasso path where the safe rule is not able to discard any features. HSSR is therefore more memory-efficient, a particularly appealing advantage in out-of-core computing, where fully scanning the feature matrix requires disk access and therefore becomes the computational bottleneck.
* Generalizability: HSSR is a rather general lasso screening framework, and can be easily extended to other lasso-type problems such as the elastic net and the group lasso.

§.§ Pathwise coordinate descent with HSSR

The pathwise coordinate descent (PCD) algorithm <cit.> solves for the lasso solution path along a grid of decreasing parameter values λ_1 > λ_2 > … > λ_K. When solving for β̂(λ_k), PCD utilizes the previous solution β̂(λ_k-1) as a warm start. This "warm start" strategy makes the algorithm very efficient. In this section, we re-design the PCD algorithm by incorporating HSSR, as described in Algorithm <ref>. The algorithm starts by initializing the safe sets 𝒮 and 𝒮_prev, the latter of which saves the safe set from the previous iteration. Another set ℋ, called the strong set, is also initialized to store the features in the safe set not discarded by SSR screening. An indicator variable records whether the safe rule screening should be turned off or not. The rationale of this design is to stop using the safe rule once it is no longer capable of discarding any features (see Figure <ref>). Note also that the algorithm only needs to update z_j for the "newly-entered" features in the safe set (line 4) before conducting SSR screening. This is because all z_j's associated with features in 𝒮 must have already been computed during post-convergence KKT checking at the previous λ (line 15).
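The control flow just described can be summarized by the following Python skeleton. This is our sketch, not biglasso's internals: `safe_rule` and `coordinate_descent` are placeholders, the indicator variable is named `use_safe` here, and bookkeeping for active features is simplified.

```python
import numpy as np

def pcd_with_hssr(X, y, lambdas, safe_rule, coordinate_descent):
    """Skeleton of pathwise coordinate descent with hybrid safe-strong
    screening; helpers are placeholders supplied by the caller."""
    n, p = X.shape
    beta, resid = np.zeros(p), y.copy()
    z = X.T @ resid / n                     # z_j = x_j' r / n
    safe = np.ones(p, dtype=bool)
    use_safe = True                         # turn off once nothing is discarded
    path = []
    for k in range(1, len(lambdas)):
        lam_prev, lam = lambdas[k - 1], lambdas[k]
        if use_safe:
            safe_prev, safe = safe, safe_rule(lam)
            if safe.all():
                use_safe = False            # safe rule has lost its power
            newly_entered = safe & ~safe_prev
            z[newly_entered] = X[:, newly_entered].T @ resid / n
        # strong set: SSR applied within the safe set, plus active features
        strong = (safe & (np.abs(z) >= 2 * lam - lam_prev)) | (beta != 0)
        while True:
            beta, resid = coordinate_descent(X, y, beta, lam, active=strong)
            check = safe & ~strong          # KKT check only inside the safe set
            z[check] = X[:, check].T @ resid / n
            violations = check & (np.abs(z) > lam)
            if not violations.any():
                break
            strong |= violations            # add violators and re-solve
        path.append(beta.copy())
    return path
```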
http://arxiv.org/abs/1704.08742v3
{ "authors": [ "Yaohui Zeng", "Tianbao Yang", "Patrick Breheny" ], "categories": [ "stat.ML", "stat.CO" ], "primary_category": "stat.ML", "published": "20170427205316", "title": "Hybrid safe-strong rules for efficient optimization in lasso-type problems" }
(Preprint of paper accepted for the Proc. of the 21st International Conference on Evaluation and Assessment in Software Engineering, 2017)

On Using Active Learning and Self-Training when Mining Performance Discussions on Stack Overflow

Markus Borg (Software and Systems Engineering Lab., RISE SICS AB, Lund, Sweden; [email protected])
Iben Lennerstad (Dept. of Computer Science, Lund University, Lund, Sweden; [email protected])
Rasmus Ros and Elizabeth Bjarnason (Dept. of Computer Science, Lund University, Lund, Sweden; [email protected])
=================================================================================================================

Abundant data is the key to successful machine learning. However, supervised learning requires annotated data that are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to examples that bring the most value for a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples for which a classifier is the most certain. We report our experiences on using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing performance of software components. We show that the training examples deemed as the most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training.

text mining, classification, active learning, self-training, human annotation.

§ INTRODUCTION

Large datasets are key to successful machine learning and text mining. For example, applying natural language related machine learning to text at web scale <cit.> has enabled many of the advances in the last decade. It is well known that an algorithm that works well on small datasets might be beaten by simpler alternatives as more data are used for training <cit.>. However, while the web contains huge amounts of text, supervised learning requires annotated data – data that are hard to obtain.

A common solution to acquire enough annotated data is crowdsourcing using services such as Amazon Mechanical Turk. The possibility to employ a massive, distributed, anonymous crowd of individuals to perform general human-intelligence micro-tasks for micro-payments has radically changed the way many researchers work <cit.>. However, when annotation requires more than general human intelligence, i.e., for non-trivial micro-tasks, such crowdsourcing solutions might not work.
Annotation of developers' posts on Stack Overflow is an example of non-trivial classification for which successful crowdsourcing cannot be expected.Active Learning (AL) is a semi-automated approach to establish a training set.The idea is to reduce the overall human effort by focusing on annotating examples that maximize the gained learning, i.e., the examples for which the classifier is the most uncertain.AL has been used for software fault prediction, successfully reducing the need for human intervention <cit.>.AL has also been used in several other fields of research, e.g., for creating large training sets for speech recognition and information extraction <cit.>. Several studies show that AL can successfully be combined with self-training, which is a method to extend the training set by automatic labeling of a trained classifier <cit.>, but the techniques have not previously been used for text mining Stack Overflow.In this study, our target training set is Stack Overflow discussions on performance of software components. Our work is part of the ORION project, in which we aim at developing a decision-support system for software component selection <cit.>.One aspect under study is how to collect and store experiences from previous decisions <cit.>. The ORION project proposes collecting experiences from both internal and external sources, i.e., both from the company and from other organizations. In this paper, we address using machine learning to extract external experiences from the software engineering community by text mining Stack Overflow, the leading technical Q&A platform for software developers <cit.>.We report our experiences from using AL and an SVM classifier in a systematic way consisting of 16 iterations. Our findings show that not only the classifier is uncertain regarding the borderline cases – also the human annotators display limited agreement. Consequently, we stress that annotation criteria must continuously evolve during AL. Moreover, we suggest that AL with multiple annotators should be designed with partly overlapping iterations to enable detection of different interpretations. Finally, we demonstrate that self-training has the potential to improve classification accuracy.The rest of the paper is organized as follows: Section <ref> introduces background and related work, Section <ref> presents the design of our study, and Section <ref> discusses our findings.Finally, we summarize our implications for future mining operations in Section <ref>.§ BACKGROUND AND RELATED WORK Stack Overflow is the dominant technical Q&A platform for software developers, with 101 million monthly unique visitors (March 2017). The information available on Stack Overflow has been studied extensively in the software engineering community, mostly through text mining, but also through qualitative analysis. Fig. <ref> shows an example of a Stack Overflow question with an answer, in which we highlight text chunks related to performance.Treude et al. investigated the type of questions asked and the quality of the answers and found that the information is particularly useful for code reviews and conceptual questions, and for novice developers  <cit.>.Soliman et al. 
found that Stack Overflow contains information relevant to and useful for decisions within software architectural design, and have identified a list of words that may be used to automatically classify such information <cit.>. Topic modelling has been used to identify what topics are discussed and the relationships between these. In this way, Barua et al. identify a number of current trends within software development, e.g., that mobile app development is increasing faster than web development <cit.>. It is suggested that knowledge mined from Stack Overflow can be used to provide context-relevant hints in IDEs <cit.> and for filtering out off-topic posts, e.g., in chat channels <cit.>.

AL is a semi-supervised machine learning approach in which a learning algorithm interactively queries the human to obtain labels for specific examples, typically the most difficult ones. The method for selecting examples to query should be optimized to maximize the gained learning. Uncertainty sampling is a simple technique that selects examples where the classifier is least certain on which label to apply <cit.>. This has the effect of separating the examples into two distinct groups and thus removing borderline cases, see the horizontal histograms in Fig. <ref>. AL enables a shift of focus from momentary data analysis to a process with a feedback loop <cit.>. When mining from crowdsourced data there are usually too many unlabelled examples to annotate them all manually. Semi-supervised learning refers to methods that also use the remaining unlabelled examples to improve the classifier. Self-training (or bootstrap learning <cit.>) is one such method that extends the training set with the unlabelled examples classified with the highest degree of certainty. This complements AL with uncertainty sampling well, since it maximizes the available confident labels <cit.>. To the best of our knowledge, we present the first application of both AL and self-training for Stack Overflow mining.

§ METHOD

We designed a study to evaluate AL when mining Stack Overflow. Fig. <ref> shows an overview of the research design, which consisted of a preparation step and two iterative training steps. In the preparation step, we downloaded the dataset used for the MSR Mining Challenge in 2015, containing 43,336,603 posts <cit.>. We extracted all posts that were tagged with `performance' and at least one of the following tags: `apache', `nginx' or `rails' – an attempt to get an initial dataset related to components we know well, resulting in 2,304 posts in total.

Preparation To assist the manual annotation task, we developed a prototype tool integrating an SVM classifier from scikit-learn <cit.>, i.e., the classifier finds the optimal hyperplane separating two categories of examples <cit.>. In our application, we trained an SVM classifier with n-grams as features (n=1-5) to separate Stack Overflow posts related to performance discussions of software components from other posts. We refer to the two categories as positive and negative examples, respectively. During the tool development, the first and second authors alternated annotating posts and evolving initial annotation criteria – note that this initial step was done without AL. In total, we annotated 970 posts (25.4% positive) and the criteria evolved into "a positive post discusses the performance of a software component, rather than programming languages, the development environment, or measurement tools". While manually annotating the initial posts, we identified 67 additional component names that also had explicit Stack Overflow tags. We used this to extend our dataset, i.e., we complemented `apache', `nginx' or `rails' with 67 new tags to obtain a larger dataset of Stack Overflow posts. In total we collected 15,287 Stack Overflow posts potentially related to performance of software components[Replication package: URL].

Active learning After the preparation, the first and second authors alternated manual annotation of the next 100 posts[A reasonable annotation task that requires roughly 90 min.] closest to the SVM hyperplane – we refer to each such annotation batch <cit.> as an AL iteration. For each iteration, we measured the classification accuracy complemented by precision, recall, and F_1-score using 5-fold cross-validation. Furthermore, we calculated the distance from each post, both labelled and unlabelled, to the SVM hyperplane. We visualize the distribution of posts at different distances from the SVM hyperplane using histograms and beanplots.
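The uncertainty-sampling step of each AL iteration can be sketched as follows. This is an illustration only; the exact vectorizer settings and SVM parameters of our prototype are not reproduced here, so treat them as assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def next_al_batch(posts, labeled_idx, labels, batch_size=100):
    """Select the next annotation batch: the unlabeled posts closest to
    the SVM hyperplane (uncertainty sampling with n-grams, n = 1..5)."""
    vec = CountVectorizer(ngram_range=(1, 5), min_df=2)  # settings assumed
    X = vec.fit_transform(posts)
    clf = LinearSVC()
    clf.fit(X[labeled_idx], labels)
    dist = clf.decision_function(X)       # signed distance to the hyperplane
    unlabeled = np.setdiff1d(np.arange(len(posts)), labeled_idx)
    order = unlabeled[np.argsort(np.abs(dist[unlabeled]))]
    return order[:batch_size]             # indices to hand to the annotator
```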
Self-training We investigated self-training based on the second author's annotation activity (cf. `Self-train.' in Fig. <ref>) by adding unlabelled examples as if they were manually annotated. We explored extending the training set with different percentages of unlabelled data, corresponding to different distances to the SVM hyperplane. Our ambition was to identify a successful application of self-training, useful as a proof-of-concept, rather than finding the optimal parameter settings for this particular case.

Human annotation To measure the uncertainty in classifying Stack Overflow posts close to the SVM hyperplane, we evaluated the inter-rater reliability of human annotators. The first and second authors discussed experiences after each completed iteration, and the annotation criteria evolved. After 8 iterations, halfway into the study, we considered the criteria mature enough for evaluation. The criteria were then:

"A positive post (both questions and answers) addresses the performance of a specific software component (incl. frameworks, platforms, and libraries) that could be used to evolve a software-intensive system. Examples: database management systems (MySQL, Oracle, ..), content management systems (Drupal, Joomla, ..), web servers. A post is negative if it discusses performance of/from:
* programming languages (e.g., Java, PHP)
* operational environments (e.g., Windows, Linux)
* development tools (e.g., compilers, IDEs, build systems)
* alternative detailed implementations (e.g., formulation of SQL queries, parsing of XML/JSON structures)
* tweaking of components
or if the post discusses components used to measure performance (e.g., JMeter, SQLTest). The exclusion criteria apply, unless such a discussion clearly originates in poor performance of a specific component."

We designed a hands-on annotation exercise during a research workshop with 12 senior software engineering researchers (cf. `Group annotation' in Fig. <ref>). First, we introduced the exercise, showed some examples, and provided the above criteria. Second, everyone independently annotated 11 posts, printed on paper, during a 20-minute session.
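The inter-rater reliability figures reported below are Krippendorff's α for nominal data. For reference, a self-contained sketch of the computation for two raters per unit (ours; any standard implementation is equivalent):

```python
from collections import Counter

def krippendorff_alpha_nominal(pairs):
    """Krippendorff's alpha for nominal data with two raters per unit.
    `pairs` is a list of (label_a, label_b) tuples, one per annotated unit."""
    # coincidence counts: each unit contributes both ordered label pairs
    o = Counter()
    for a, b in pairs:
        o[(a, b)] += 1
        o[(b, a)] += 1
    n_c = Counter()                       # marginal counts per label
    for (a, b), cnt in o.items():
        n_c[a] += cnt
    n = sum(n_c.values())                 # total number of values (2 * units)
    d_o = sum(cnt for (a, b), cnt in o.items() if a != b) / n
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e                # 1 = perfect, ~0 = chance-level
```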
In total, 66 posts were distributed using pairwise assignment: two annotators per post, and each possible human pair represented once. Finally, we calculated Krippendorff's α to assess inter-rater reliability, as recommended for difficult nominal tasks <cit.>.After the group annotation, we discussed the outcome to better understand our differences.We continued the annotation activity, following the same process and expecting a growing shared understanding, until iteration 16. Once finished, we had 2,567 annotated posts (32.6% positive). To check our hypothesis of improving agreement, we randomly selected 50 posts among the already annotated (cf. `Pair annotation' in Fig. <ref>). Again we calculated Krippendorff's α, both 1) between the first and second authors (referred to as A and B), and 2) between the new labels and the previous labels. For each post annotated differently, we quantified the certainty of the set label (1-5) and we provided a rationale.§ RESULTS AND LESSONS LEARNEDHuman annotation We begin this section by reporting on the inter-rater reliability. The results from our group annotation exercise after 8 iterations confirmed the challenge of annotating posts close to the SVM hyperplane. Despite annotation criteria that evolved during 8 AL-iterations, the 12 annotators obtained a Krippendorff's α of 0.126 (37/66 shared labels, 56%) – a poor agreement. The first and second authors analyzed the discrepancies, along with posts for which there were agreement, without identifying any concrete patterns. The presence of borderline cases is obvious, but we hypothesized that the alignment between the first and second authors was stronger than within the whole group, and that it would continue improving during the remaining iterations.After 16 AL-iterations, we calculated the inter-rater reliability betweenA and B for a random sample of 50 previously annotated posts. The exercise yielded a Krippendorff's α of 0.028 (29/50 shared labels, 58%), considerably lower than from the group exercise. We also calculated the inter-rater reliability against our previous annotations of the 50 posts, obtaining a Krippendorff's α of 0.768 (18/20 shared labels, 90%), and 0.577 (24/30 shared labels, 80%) for A and B, respectively. Our results show that while our individual annotation remained stable over time, our shared view still differed after 16 iterations. In most cases at least one of us was very uncertain, expressing a certainty level of 1 or 2, which means the post was more or less randomly labelled. More alarming, however, was that in several cases both annotators felt certain but used different labels. An analysis of the latter cases revealed that A was more inclusive regarding posts that related to implementation details and component tweaking, whereas B was more inclusive concerning quality attributes not necessarily related to performance. Furthermore, B did not include posts that could be interpreted as anecdotal experiences. We conclude that AL for text classification is difficult, even after annotating 2,674 posts with several intermediate discussions, our inter-rater reliability was low.Active learning Since our annotation criteria did not properly align our annotation activity, we hesitated to pool our training data. Instead, we trained three separate SVM classifiers using: 1) A data, 2) B data, and 3) A+B data – we refer to these as SVM A, SVM B, and SVM A+B, respectively. 
Note that we also split the training data from iteration 0 into either A or B, resulting in differently large initial training sets.Fig. <ref> shows the mean value from five runs of 5-fold cross-validation for each iteration. The solid lines with markers show accuracy and F_1-score for SVM A, the dashed lines with markers represent SVM B, and the solid lines without markers illustrate SVM A+B. Regarding accuracy, all three classifiers show similar behavior:The accuracy decreases as additional iterations are added, but the differences are minor. The curves do not resemble typical learning curves, instead they appear to stabilize between 0.7 and 0.8.We explain this by the posts annotated for iteration 0, i.e., clearly positive and negative examples were selected to span the document space, followed by nothing but borderline cases selected using AL.Looking at F_1-score, SVM B and SVM A+B remain fairly stable around 0.5. On the other hand, SVM A improves considerably as more iterations are added.This is likely due to the distribution of examples in the small A iteration 0 training set, containing only 373 examples and a recall of only 0.18 – even adding borderline cases was useful in this case.Fig. <ref> depicts distances between annotated posts and the SVM hyperplanes (SVM A and SVM B) after the preparation step and after the final iterations. The vertical histograms show frequency distributions of posts with distances from the hyperplane on the y-axis, where the sign denotes positive and negative classifications, respectively. Moreover, the figure displays the number of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). We notice that as more posts are annotated, the distribution around the hyperplane increases, which is particularly evident for the true negatives. This shows that, from the perspective of the SVM classifiers, 16 AL iterations did not reduce the number of borderline posts. Fig. <ref> presents an analogous view for the unlabelled posts, also separating SVM A and SVM B. However, in this figure we show beanplots, i.e., the frequencies are mirrored on the y-axis. We also report the number of unlabelled posts on both sides of the hyperplanes (cf. |p|). SVM A suggests that there are 716 positive posts remaining in the set of set 13,745 posts, whereas SVM B gives 259 remaining positive posts – these figures reflect A's more inclusive interpretation of the annotation criteria. The goal of AL is to focus annotation efforts on borderline cases to create two clearly separated clusters of examples (cf. Fig. <ref>). This phenomenon is not obvious in the Fig. <ref>, although we observe that SVM B indeed has fewer negative examples close to the hyperplane, i.e., the beanplot close to 0 is thinner after iteration 16. The pattern for SVM A is less clear, and we aim at investigating this in future work by conducting additional iterations.Self-training The rightmost part of Fig. <ref> also illustrates how we evaluated self-training using data annotated by B. As depicted by the dashed horizontal line, we explored adding different fractions of the most confidently classified examples (cf. the white bars) to the training set, annotated with the label predicted by the classifier. As a proof-of-concept, we report our results from adding the following unlabelled examples: 1) 5% positive examples, 2) 50% negative examples, and 3) 5% positive examples, and 50% negative examples. 
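A sketch of how such a training set extension can be implemented (ours; the fractions mirror settings 1)-3) above, and the classifier is assumed to expose scikit-learn's decision_function):

```python
import numpy as np

def self_train_extend(clf, X_unlabeled, pos_frac=0.05, neg_frac=0.50):
    """Extend the training set with the most confidently classified
    unlabeled posts, i.e., those farthest from the SVM hyperplane."""
    d = clf.decision_function(X_unlabeled)
    pos = np.where(d > 0)[0]
    neg = np.where(d <= 0)[0]
    # keep the given fractions farthest from the hyperplane on each side
    pos_keep = pos[np.argsort(-d[pos])][: int(pos_frac * len(pos))]
    neg_keep = neg[np.argsort(d[neg])][: int(neg_frac * len(neg))]
    idx = np.concatenate([pos_keep, neg_keep])
    pseudo_labels = (d[idx] > 0).astype(int)
    return idx, pseudo_labels             # append these to the training set
```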
These additions represent adding unlabelled examples farther away from the hyperplane than 1.76 on the positive side, and 0.88 on the negative side. Table <ref> shows our results, compared to the baseline provided by iteration 16 without any self-training. Our results show that active learning combined with self-training can be used to improve an SVM classifier for Stack Overflow posts. Adding either positive or negative examples from the unlabelled data can improve classification accuracy. We obtained the best results when adding both types of data, resulting in improvements over the baseline corresponding to +4.3% accuracy, +10.3% precision, +6.7% recall, and +7.9% F_1-score.

Limitations Finally, we briefly discuss two aspects of threats to validity. First, we stress that we have populated Table <ref> by cherry-picking results from successful self-training runs. Most of our trial runs with self-training generated similar or worse results. We semi-exhaustively evaluated different self-training settings, in total running about 50 experiments; Table <ref> shows the best results we obtained. However, our work is not a case of publication bias as we aim only to exhibit the existence of a phenomenon <cit.> – a beneficial application of self-training when text mining software repositories. Most self-training settings might deteriorate the accuracy, and a more systematic approach to parameter tuning <cit.> would probably identify even better settings.

Second, the external validity <cit.> of our work is limited. AL might be better suited for other software engineering text annotation tasks with less human interpretation. It is probable that another set of annotators, guided by other annotation guidelines, would result in a different inter-rater reliability. As highlighted by Settles <cit.>, while evolving annotation criteria is often a practical reality when applying AL, changing them is a violation of the basic stability assumption. We also cannot claim that self-training is beneficial to all types of text mining tasks in software engineering. What we can say, however, is that for our particular task of classifying Stack Overflow posts related to performance of software components, self-training yielded improvements – and that is enough to recommend further research.

§ CONCLUSION AND IMPLICATIONS FOR FUTURE TEXT MINING

We explored using AL and an SVM classifier for Stack Overflow posts with two alternating annotators. The primary lesson learned is that AL and text mining appear to be a difficult combination, at least for short texts such as Stack Overflow posts. In contrast to image classification tasks[Please refer to Karen Zack's viral tweets, e.g., "chihuahua or muffin?": http://ow.ly/zpF1308F7kK], human Stack Overflow annotators must interpret incomplete information presented with limited context – differences in annotations are inevitable. However, we argue that awareness of this intrinsic challenge of AL can be used to complement a traditional annotation process, i.e., AL can be used to identify the borderline cases that are worthwhile to discuss. Based on our experiences, we present two recommendations when using AL for text mining software repositories. First, the annotation criteria must continuously evolve, in parallel to the annotators' interpretation of them, in line with coding guidelines for qualitative research <cit.>.
It is not enough to simply count the number of differing labels; instead, qualitative analysis is needed to identify any potential systematic differences – before it is too late. Second, we suggest that AL settings with multiple annotators should be designed with partly overlapping iterations to enable early detection of discrepancies. The size of the labelled training set would increase at a slower rate with overlapping iterations; thus, this must be balanced against the value of better annotator alignment. In future attempts with AL, we plan to initially design iterations with 25% overlap, and then gradually decrease it to 5% as consensus increases.

Based on the second author's AL process, we evaluated complementing the training set using self-training. Our results are promising: we show that adding both positive and negative examples to the training set can increase the classification accuracy. In a semi-structured approach, we achieved improvements of 4.3% accuracy and 7.9% F_1-score. We stress that our findings do not suggest that self-training generally is a good idea; rather, our results constitute a proof-of-concept that self-training can be successfully combined with AL. Furthermore, we expect that further improvements from self-training would be possible, and plan to conduct systematic parameter optimization as the next step <cit.>.

§ ACKNOWLEDGMENT

The work is partially supported by a research grant for the ORION project (reference number 20140218) from The Knowledge Foundation in Sweden, the Wallenberg Autonomous Systems and Software Program (WASP), and the Industrial Excellence Center EASE - Embedded Applications Software Engineering[http://ease.cs.lth.se].
http://arxiv.org/abs/1705.02395v1
{ "authors": [ "Markus Borg", "Iben Lennerstad", "Rasmus Ros", "Elizabeth Bjarnason" ], "categories": [ "cs.CL", "cs.HC", "cs.LG", "cs.SE" ], "primary_category": "cs.CL", "published": "20170426204736", "title": "On Using Active Learning and Self-Training when Mining Performance Discussions on Stack Overflow" }
We introduce Schur multiple zeta functions which interpolate both the multiple zeta and multiple zeta-star functions of the Euler-Zagier type combinatorially. We first study their basic properties, including a region of absolute convergence and the case where all variables are the same. Then, under an assumption on the variables, some determinant formulas coming from the theory of Schur functions, such as the Jacobi-Trudi, Giambelli and dual Cauchy formulas, are established with the help of Macdonald's ninth variation of Schur functions. Moreover, we investigate the quasi-symmetric functions corresponding to the Schur multiple zeta functions. We obtain similar results as above for them and, furthermore, describe their images under the antipode of the Hopf algebra of quasi-symmetric functions explicitly. Finally, we establish iterated integral representations of the Schur multiple zeta values of ribbon type, which yield a duality for them in some cases.

11M41, 05E05. Multiple zeta functions, Schur functions, Jacobi-Trudi formula, quasi-symmetric functions

§ INTRODUCTION

The multiple zeta function and the multiple zeta-star function (MZF and MZSF for short) of the Euler-Zagier type are respectively defined by the series

ζ(s) = ∑_m_1<⋯<m_n 1/(m_1^s_1 ⋯ m_n^s_n), ζ^⋆(s) = ∑_m_1≤⋯≤m_n 1/(m_1^s_1 ⋯ m_n^s_n),

where s=(s_1,…,s_n)∈ℂ^n. These series converge absolutely for Re(s_1),…,Re(s_n-1) ≥ 1 and Re(s_n) > 1 (see, e.g., <cit.> for a more precise description of the region of absolute convergence). One easily sees that a MZSF can be expressed as a linear combination of MZFs, and vice versa. For instance,

ζ^⋆(s_1,s_2) = ζ(s_1,s_2) + ζ(s_1+s_2),
ζ(s_1,s_2) = ζ^⋆(s_1,s_2) - ζ^⋆(s_1+s_2),
ζ^⋆(s_1,s_2,s_3) = ζ(s_1,s_2,s_3) + ζ(s_1+s_2,s_3) + ζ(s_1,s_2+s_3) + ζ(s_1+s_2+s_3),
ζ(s_1,s_2,s_3) = ζ^⋆(s_1,s_2,s_3) - ζ^⋆(s_1+s_2,s_3) - ζ^⋆(s_1,s_2+s_3) + ζ^⋆(s_1+s_2+s_3),

where ζ(s) = ζ^⋆(s) is the Riemann zeta function. More generally, we have

ζ^⋆(s) = ∑_t≼s ζ(t), ζ(s) = ∑_t≼s (-1)^(n-ℓ(t)) ζ^⋆(t),

where, for t=(t_1,…,t_m)∈ℂ^m, ℓ(t)=m, and t≼s means that t is obtained from s by combining some of its adjacent parts. The special values of ζ(s_1,…,s_n) and ζ^⋆(s_1,…,s_n) at positive integers were first introduced by Euler <cit.> for n=2, and by Hoffman <cit.> and Zagier <cit.> for general n, independently. Many different types of relations among such values have been studied in references such as <cit.>.

The purpose of the present paper is to introduce a generalization of both MZF and MZSF, which we call a Schur multiple zeta function, from the viewpoint of n-ple zeta functions. Indeed, it is defined similarly to the tableau expression of the Schur function as follows.
For a partition λ of a positive integer n, let T(λ,X) be the set of all Young tableaux of shape λ over a set X and, in particular, SSYT(λ) ⊂ T(λ,ℕ) the set of all semi-standard Young tableaux of shape λ (see Section <ref> for precise definitions). Recall that M=(m_ij) ∈ T(λ,ℕ) is called semi-standard if m_i1 ≤ m_i2 ≤ ⋯ for all i and m_1j < m_2j < ⋯ for all j. For s=(s_ij) ∈ T(λ,ℂ), the Schur multiple zeta function (SMZF for short) associated with λ is defined by the series

ζ_λ(s) = ∑_M∈SSYT(λ) 1/M^s,

where M^s = ∏_(i,j)∈D(λ) m_ij^s_ij for M=(m_ij) ∈ SSYT(λ), with D(λ) being the Young diagram of λ. It is shown in Lemma <ref> that the above series converges absolutely whenever s ∈ W_λ, where

W_λ = { s=(s_ij) ∈ T(λ,ℂ) | Re(s_ij) ≥ 1 for all (i,j) ∈ D(λ)∖C(λ), and Re(s_ij) > 1 for all (i,j) ∈ C(λ) },

with C(λ) being the set of all corners of λ. If (1^n) and (n) denote the one-column and one-row partitions of n, respectively, then it is clear that ζ_(1^n)(s) (s ∈ T((1^n),ℂ)) and ζ_(n)(s) (s ∈ T((n),ℂ)) are nothing but MZF and MZSF, respectively. This shows that SMZFs actually interpolate both MZFs and MZSFs combinatorially. We remark that such interpolating multiple zeta functions were first mentioned in <cit.> from the study of the multiple Dirichlet L-values.

In this paper, we study fundamental properties of SMZFs and establish some relations among them, which can be regarded as analogues of those for Schur functions. Indeed, we obtain the following Jacobi-Trudi formulas for SMZFs, which are among the main results of our paper. To describe the result, we need the set W^diag_λ = W_λ ∩ T^diag(λ,ℂ), where, for a set X, T^diag(λ,X) = { T=(t_ij) ∈ T(λ,X) | t_ij = t_kl if j-i = l-k }. For a tableau s=(s_ij) ∈ W^diag_λ, we always write a_k = s_i,i+k for k ∈ ℤ (and for any i ∈ ℕ). For example, when λ=(4,3,3,2), s=(s_ij) ∈ W^diag_(4,3,3,2) implies that s is of the form

s = [ s_11 s_12 s_13 s_14 ]   [ a_0  a_1  a_2  a_3 ]
    [ s_21 s_22 s_23      ] = [ a_-1 a_0  a_1      ]
    [ s_31 s_32 s_33      ]   [ a_-2 a_-1 a_0      ]
    [ s_41 s_42           ]   [ a_-3 a_-2          ]

Let λ=(λ_1,…,λ_r) be a partition and λ'=(λ'_1,…,λ'_s) the conjugate of λ. Assume that s=(s_ij) ∈ W^diag_λ.

(1) Assume further that Re(s_i,λ_i) > 1 for all 1 ≤ i ≤ r. Then, we have

ζ_λ(s) = det[ ζ^⋆(a_-j+1, a_-j+2, …, a_-j+(λ_i-i+j)) ]_1≤i,j≤r.

Here, we understand that ζ^⋆(⋯) = 1 if λ_i-i+j = 0 and 0 if λ_i-i+j < 0.

(2) Assume further that Re(s_λ'_i,i) > 1 for all 1 ≤ i ≤ s. Then, we have

ζ_λ(s) = det[ ζ(a_j-1, a_j-2, …, a_j-(λ'_i-i+j)) ]_1≤i,j≤s.

Here, we understand that ζ(⋯) = 1 if λ'_i-i+j = 0 and 0 if λ'_i-i+j < 0.

As in the case of Schur functions, we call (<ref>) and (<ref>) of H-type and E-type, respectively. From these formulas, as corollaries, one can obtain many algebraic relations given by determinants among MZFs and MZSFs. For example, considering the cases λ=(1^n) and λ=(n), we have the following identities. For s_1,…,s_n ∈ ℂ with Re(s_1),…,Re(s_n) > 1, we have

ζ(s_1,…,s_n) = | [ ζ^⋆(s_1) ζ^⋆(s_2,s_1) ⋯ ⋯ ζ^⋆(s_n,…,s_2,s_1); 1 ζ^⋆(s_2) ⋯ ⋯ ζ^⋆(s_n,…,s_2); 1 ⋱ ⋮; ⋱ 1 ζ^⋆(s_n-1) ζ^⋆(s_n,s_n-1); 0 ⋯ 0 1 ζ^⋆(s_n) ] |,

ζ^⋆(s_1,…,s_n) = | [ ζ(s_1) ζ(s_2,s_1) ⋯ ⋯ ζ(s_n,…,s_2,s_1); 1 ζ(s_2) ⋯ ⋯ ζ(s_n,…,s_2); 1 ⋱ ⋮; ⋱ 1 ζ(s_n-1) ζ(s_n,s_n-1); 0 ⋯ 0 1 ζ(s_n) ] |,

where, in both determinants, all entries below the subdiagonal of 1's are 0.
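Identities of this kind can be checked numerically by truncating all series at a common bound M. The following sketch (ours) verifies the H-type formula for λ=(2,2), where s has a_0 on the main diagonal and a_1, a_-1 on the adjacent diagonals; the two sides agree up to the truncation error of the tails.

```python
from itertools import product

def zeta_star_trunc(exps, M):
    """Truncated zeta*(s_1,...,s_k): sum over 1 <= m_1 <= ... <= m_k <= M."""
    total = 0.0
    def rec(pos, start, prod):
        nonlocal total
        if pos == len(exps):
            total += prod
            return
        for m in range(start, M + 1):
            rec(pos + 1, m, prod / m ** exps[pos])
    rec(0, 1, 1.0)
    return total

def schur_zeta_22_trunc(a0, a1, am1, M):
    """Truncated zeta_lambda for lambda = (2,2): rows weakly increasing,
    columns strictly increasing, entries bounded by M."""
    total = 0.0
    for m11, m12, m21, m22 in product(range(1, M + 1), repeat=4):
        if m11 <= m12 and m21 <= m22 and m11 < m21 and m12 < m22:
            total += 1.0 / (m11**a0 * m12**a1 * m21**am1 * m22**a0)
    return total

a0 = a1 = am1 = 3
M = 40
lhs = schur_zeta_22_trunc(a0, a1, am1, M)
rhs = (zeta_star_trunc([a0, a1], M) * zeta_star_trunc([am1, a0], M)
       - zeta_star_trunc([am1, a0, a1], M) * zeta_star_trunc([a0], M))
# lhs and rhs agree up to the truncation error of the tails
```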
Moreover,just combining (<ref>) and (<ref>),we obtain a family of relations among MZFs and MZSFs.For example, considering the cases λ=(2,2,1) and its conjugate λ'=(3,2),we have boxsize=normal,aligntableaux=centerζ_λ( a bc ad  )=| [ ζ^⋆(a,b) ζ^⋆(c,a,b) ζ^⋆(d,c,a,b); ζ^⋆(a) ζ^⋆(c,a) ζ^⋆(d,c,a);01 ζ^⋆(d) ]| =| [ ζ(a,c,d) ζ(b,a,c,d); ζ(a) ζ(b,a) ]| , ζ_λ'( a c db a  )=| [ ζ^⋆(a,c,d) ζ^⋆(b,a,c,d); ζ^⋆(a) ζ^⋆(b,a) ]| = | [ ζ(a,b) ζ(c,a,b) ζ(d,c,a,b); ζ(a) ζ(c,a) ζ(d,c,a);01 ζ(d) ]| , where a,b,c,d∈ℂ with (a),(b),(d)>1 and (c)≥ 1.As you can see in the above examples and Corollary <ref>,these kind of relations hold even if we replace ζ with ζ^⋆ and vice versa.It is also worth mentioning that both (<ref>) and (<ref>) give meromorphic continuations of ζ_λ( s)to T^diag(λ,ℂ) (=ℂ^s+r-1 where s=λ_1 and r=λ'_1) as a function of a_k for 1-r≤ k≤ 1+sbecause both MZFs and MZSFs admit meromorphic continuations to the whole complex space (see, e.g., <cit.>).The assumption on variables on the same diagonal lines is crucial.Actually, in Section <ref>,we find out that our SMZF, which can be easily generalized to the skew type,with the assumption is realizedas (the limit of) a specialization of Macdonald's ninth variation of Schur function studied by Nakagawa, Noumi, Shirakawa and Yamada <cit.>.Based on this fact, we present some results such as the Jacobi-Trudi formula of skew type,the Giambelli formula and the dual Cauchy formula for SMZFs.Notice that if we work for such formulas without the assumption,then we encounter extra terms (see Remark <ref>),which will be clarified in a forthcoming work. Furthermore, in Section <ref>,we study SMZFs in a more general framework, that is,in the Hopf algebra QSym of quasi-symmetric functions studied by Gessel <cit.>.For a skew Young diagram ν,we define a special type of quasi-symmetric function S_ν(α),which we call a Schur type quasi-symmetric function, similarly to SMZFs.(Note that there is a different type of quasi-symmetric functions,called quasi-symmetric Schur functions defined by Haglund, Mason, Luoto and Willigenburg <cit.>, as a basis of QSym,which arise from the combinatorics of Macdonald polynomials and actually refine Schur functions in a natural way.)Then, we also prove the Jacobi-Trudi formulas of both H-type and E-type for such quasi-symmetric functions under the same assumption as above. Notice that the former corresponds to (<ref>) with entries in the essential quasi-symmetric functionsand the latter to (<ref>) with in the monomial quasi-symmetric functions. 
Remark that when ν is the one column and one row partitions, the corresponding formulas can be also respectively obtainedby calculating the images of the essential and monomial quasi-symmetric functionsby the antipode S of QSym in two different ways,as shown by Hoffman (<cit.>).More generally, for any skew Young diagram ν,we calculate the image of S_ν(α) by S and see that it is essentially equal to the Schur type quasi-symmetric function again associated with ν^#,the anti-diagonal transpose of ν.In the final section,we give iterated integral representations of Schur multiple zeta values of ribbon typeby following the similar discussion performed in <cit.>.As is the case of the multiple zeta values,one can obtain a duality for Schur multiple zeta values by just making a change of variables in the integral representation if the dual of the value is again of ribbon type.§ SCHUR MULTIPLE ZETA FUNCTIONS §.§ Combinatorial settingsWe first set up some notions of partitions.A partition λ=(λ_1,…,λ_r) of a positive integer nis a non-increasing sequence of positive integers such that |λ|=∑^r_i=1λ_i=n.We call |λ| and ℓ(λ)=r the weight and length of λ, respectively.If |λ|=n, then we write λ⊢ n.We sometimes express λ⊢ n as λ=(n^m_n(λ)⋯ 2^m_2(λ)1^m_1(λ))where m_i(λ) is the multiplicity of i in λ.We identify λ⊢ n with its Young diagram D(λ)={(i,j)∈ℤ^2 | 1≤ i≤ r, 1≤ j≤λ_i},depicted as a collection of n square boxes with λ_i boxes in the ith row.We say that (i,j)∈ D(λ) is a corner of λ if (i+1, j)∉ D(λ) and (i, j+1)∉ D(λ)and denote by C(λ) ⊂ D(λ) the set of all corners of λ.For example, C((4,3,3,2))={(1,4),(3,3),(4,2)}.The conjugate λ'=(λ'_1,…,λ'_s) of λ is defined by λ'_i=#{j | λ_j≥ i}.Namely, λ' is the partition whose Young diagram is the transpose of that of λ.For example, (4,3,3,2)'=(4,4,3,1). Let X be a set.For a partition λ, a Young tableau T=(t_ij) of shape λ over X is a filling of D(λ) obtained by putting t_ij∈ X into (i,j) box of D(λ).Similarly to the above,the conjugate tableau of T is defined by T'=(t_ji) whose shape is λ'.We denote by T(λ,X) the set of all Young tableaux of shape λ over X,which is sometimes identified with X^|λ|.Moreover, we putT^diag(λ,X) ={.(t_ij)∈ T(λ,X) | t_ij=t_kl if j-i=l-k}, which is identified with X^λ_1+ℓ(λ)-1. By a semi-standard Young tableau,we mean a Young tableau over the set of positive integers ℕsuch that the entries in each row are weakly increasing from left to right and those in each column are strictly increasing from top to bottom.We denote by SSYT(λ) the set of all semi-standard Young tableaux of shape λ. §.§ Definition of Schur multiple zeta functionsFor s=(s_ij)∈ T(λ,ℂ), defineζ_λ( s) =∑_M∈SSYT(λ)1/M^ s, where M^ s=∏_(i,j)∈ D(λ)m_ij^s_ij for M=(m_ij)∈SSYT(λ).We also define ζ_λ=1 for the empty partition λ=∅.We call ζ_λ( s) a Schur multiple zeta function (SMZF for short) associated with λand sometimes write it shortly as s if there is no confusion.Clearly, this is an extension of both MZFs and MZSFs.Actually, one sees that boxsize=normal,aligntableaux=centerζ(s_1,…,s_n) =ζ_(1^n)( s_12pt⋮ s_n ) =s_12pt⋮ s_n , ζ^⋆(s_1,…,s_n) =ζ_(n)( s_1⋯s_n )=s_1⋯s_n . We first discuss a region where the series (<ref>) is absolutely convergent. LetW_λ = { s=(s_ij)∈ T(λ,ℂ) | [ (s_ij)≥ 1 for all (i,j)∈ D(λ) ∖ C(λ); (s_ij)>1 for all (i,j)∈ C(λ) ]. }. Then, the series (<ref>) converges absolutely if s∈ W_λ. 
Write C(λ)={(i_1,j_1),…,(i_k,j_k)} where i_1<⋯<i_k and j_1>⋯>j_k.Then, it can be written as λ=(j_1^i'_1 j_2^i'_2⋯ j_k^i'_k)where i'_l=i_l-i_l-1 with i_0=0.Since (s_ij)≥ 1 for (i,j) ∈ D(λ) ∖ C(λ),we have ∑_M ∈SSYT(λ)|1/M^ s|≤∏^k_l=1∑_(m_ij)∈SSYT(j_l^i'_l)∏^i_l_i=1∏^j_l_j=11/m^(s_ij)_ij≤∏^k_l=1∑^∞_N_l=1C_i'_l,j_l(N_l)/N^(s_i_l,j_l)_l, where C_a,b(N) is a finite sum defined by C_a,b(N) =∑_(m_ij)∈SSYT(b^a)m_a,b=N(i,j) (a,b)∏^a_i=1∏^b_j=11/m_ij.It is well known that, for any ε>0,there exists a constant C_ε>0, which is not dependent on N,such that ∑^N_m=11/m<C_εN^ε.Hence|C_a,b(N)| ≤(i,j) (a,b)∏^a_i=1∏^b_j=1∑^N_m_ij=11/m_ij < C_ε^ab-1N^ε(ab-1)and therefore∑_M ∈SSYT(λ)|1/M^ s|≤∏^k_l=1∑^∞_N_l=1C_ε^i'_lj_l-1N_l^ε(i'_lj_l-1)/N^(s_i_l,j_l)_l=∏^k_l=1 C_ε^i'_lj_l-1ζ((s_i_l,j_l)-ε(i'_lj_l-1)). This ends the proofbecause (s_i_l,j_l)>1 for 1≤ l≤ k and ε can be taken sufficiently small. The condition s∈ W_λ is a sufficient condition that the series (<ref>) converges absolutely.It seems to be interesting to determine the region of absolute convergence of (<ref>) with full description.See e.g., <cit.> for the cases of λ=(1^n) and (n), that is, the cases of MZFs and MZSFs.It should be noted that a SMZF can be also written as a linear combination of MZFs or MZSFs.In fact, for λ⊢ n,let ℱ(λ) be the set of all bijections f:D(λ)→{1,2,…,n}satisfying the following two conditions: (i)for all i, f((i,j))<f((i,j')) if and only if j<j', (ii)for all j, f((i,j))<f((i',j)) if and only if i<i'.Moreover, for T=(t_ij)∈ T(λ,X), putV(T)= {. (t_f^-1(1),t_f^-1(2),…,t_f^-1(n))∈ X^n |f∈ℱ(λ) }.Furthermore, when X has an addition +,we write w≼ T for w=(w_1,w_2,…,w_m)∈ X^mif there exists (v_1,v_2,…,v_n)∈ V(T) satisfying the following:for all 1≤ k≤ m, there exist 1≤ h_k≤ m and l_k≥ 0 such that (i)w_k=v_h_k+v_h_k+1+⋯ +v_h_k+l_k,(ii)there are no i and i' such that i i' and t_ij,t_i'j∈{v_h_k,v_h_k+1,… ,v_h_k+l_k} for some j,(iii)^m_k=1{h_k,h_k+1,…,h_k+l_k}={1,2,…,n}. Then, by the definition, we have ζ_λ( s) =∑_ t ≼sζ( t).This clearly includes the first equation in (<ref>) as the case λ=(n).Moreover, by an inclusion-exclusion argument,one can also obtain its "dual" expression ζ_λ( s) =∑_ t ≼s'(-1)^n-ℓ( t)ζ^⋆( t),which does the second one in (<ref>) as the case λ=(1^n). (1)For s=(s_ij)∈ T((3,1),ℂ), we have V( s)={ (s_11,s_12,s_13,s_21), (s_11,s_12,s_21,s_13), (s_11,s_21,s_12,s_13) }. One sees that t≼ s if and only if t is one of the following:(s_11,s_12,s_13,s_21),(s_11+s_12,s_13,s_21),(s_11,s_12+s_13,s_21),(s_11,s_12,s_13+s_21), (s_11+s_12+s_13,s_21),(s_11+s_12,s_13+s_21),(s_11,s_12+s_13+s_21),(s_11,s_12,s_21,s_13), (s_11+s_12,s_21,s_13),(s_11,s_12+s_21,s_13),(s_11,s_21,s_12,s_13),(s_11,s_21,s_12+s_13).This shows that when s∈ W_(3,1)boxsize=normal,aligntableaux=centers_11s_12s_13,s_21=ζ(s_11,s_12,s_13,s_21)+ζ(s_11+s_12,s_13,s_21)+ζ(s_11,s_12+s_13,s_21) +ζ(s_11,s_12,s_13+s_21)+ζ(s_11+s_12+s_13,s_21)+ζ(s_11+s_12,s_13+s_21) +ζ(s_11,s_12+s_13+s_21)+ζ(s_11,s_12,s_21,s_13)+ζ(s_11+s_12,s_21,s_13) +ζ(s_11,s_12+s_21,s_13)+ζ(s_11,s_21,s_12,s_13)+ζ(s_11,s_21,s_12+s_13)=ζ^⋆(s_11,s_21,s_12,s_13)-ζ^⋆(s_11+s_21,s_12,s_13)-ζ^⋆(s_11,s_21+s_12,s_13),+ζ^⋆(s_11,s_12,s_21,s_13)-ζ^⋆(s_11,s_12,s_21+s_13)+ζ^⋆(s_11,s_12,s_13,s_21).Notice that the second equality follows from the discussion in (2).(2)For s=(s_ij)∈ T((2,1,1),ℂ), we have V( s)={ (s_11,s_12,s_21,s_31), (s_11,s_21,s_12,s_31), (s_11,s_21,s_31,s_12) }. 
One sees that t≼ s if and only if t is one of the followings:(s_11,s_12,s_21,s_31),(s_11+s_12,s_21,s_31),(s_11,s_12+s_21,s_31),(s_11,s_21,s_12,s_31),(s_11,s_21,s_12+s_31),(s_11,s_21,s_31,s_12).This shows that when s∈ W_(2,1,1)boxsize=normal,aligntableaux=centers_11s_12,s_21,s_31=ζ(s_11,s_12,s_21,s_31)+ζ(s_11+s_12,s_21,s_31)+ζ(s_11,s_12+s_21,s_31),+ζ(s_11,s_21,s_12,s_31)+ζ(s_11,s_21,s_12+s_31)+ζ(s_11,s_21,s_31,s_12)=ζ^⋆(s_11,s_21,s_31,s_12)-ζ^⋆(s_11+s_21,s_31,s_12)-ζ^⋆(s_11,s_21+s_31,s_12) -ζ^⋆(s_11,s_21,s_31+s_12)+ζ^⋆(s_11+s_21+s_31,s_12)+ζ^⋆(s_11+s_21,s_31+s_12) +ζ^⋆(s_11,s_21+s_31+s_12)+ζ^⋆(s_11,s_21,s_12,s_31)-ζ^⋆(s_11+s_21,s_12,s_31) -ζ^⋆(s_11,s_21+s_12,s_31)+ζ^⋆(s_11,s_12,s_21,s_31)-ζ^⋆(s_11,s_12,s_21+s_31).Notice that the second equality follows from the discussion in (1). By the definitions,it is clear that if t=(t_1,t_2,…,t_m) ≼ s∈ T(λ,ℂ),then t_m is expressed as a sum of s_ij where at least one of (i,j) is in C(λ).This together with the expression (<ref>) or (<ref>) also leadsto Lemma <ref>. §.§ A special case We now consider a special case of variables; s={s}^λ (s∈ℂ)where {s}^λ=(s_ij)∈ T(λ,ℂ) is the tableau given by s_ij=s for all (i,j)∈ D(λ).In this case, one sees that our SMZF is realized as a specialization of the Schur function.Actually, for variables x=(x_1,x_2,…), lets_λ =s_λ( x) =∑_(m_ij)∈SSYT(λ)∏_(i,j)∈ D(λ)x_m_ij be the Schur function associated with λ.Then, for s∈ℂ with (s)>1, we haveζ_λ({s}^λ) =e^(s) s_λ =s_λ(1^-s,2^-s,…), where e^(s) is the function sending x_i to 1/i^s. This means that ζ_λ({s}^λ) can be written as a polynomial in ζ(s),ζ(2s),….Let λ⊢ n. Then, for s∈ℂ with (s)>1, we haveζ_λ({s}^λ) =∑_μ⊢ nχ^λ(μ)/z_μ∏^ℓ(μ)_i=1ζ(μ_i s). Here, z_μ=∏_i≥ 1i^m_i(μ)m_i(μ)! and χ^λ(μ)∈ℤis the value of the character χ^λ attached to the irreducible representation of the symmetric group S_n of degree ncorresponding to λ on the conjugacy class of S_n of the cycle type μ⊢ n.For a partition μ,let p_μ=p_μ( x) be the power-sum symmetric function defined byp_μ=∏^ℓ(μ)_i=1p_μ_i where p_r=p_r( x)=∑^∞_i=1x_i^r.We know that the Schur function can be written asa linear combination of power-sum symmetric functions (see <cit.>) as s_λ=∑_μ⊢ nχ^λ(μ)/z_μp_μ. Hence, one obtains the desired expression by noticing e^(s)p_r=p_r(1^-s,2^-s,…)=ζ(r s). For variables x=(x_1,x_2,…),let e_n=e_n( x) and h_n=h_n( x) bethe elementary and complete symmetric functions of degree n,which are respectively defined by e_n=∑_i_1<⋯<i_nx_i_1⋯ x_i_n,h_n=∑_i_1≤⋯≤ i_nx_i_1⋯ x_i_n.Since s_(1^n)=e_n and s_(n)=h_n with χ^(1^n)(μ)=|μ|-ℓ(μ) and χ^(n)(μ)=1, we have from (<ref>)ζ(s,…,s)=e^(s)e_n =e_n(1^-s,2^-s,…) =∑_μ⊢ n(-1)^n-ℓ(μ)/z_μ∏^ℓ(μ)_i=1ζ(μ_i s),ζ^⋆(s,…,s)=e^(s)h_n =h_n(1^-s,2^-s,…) =∑_μ⊢ n1/z_μ∏^ℓ(μ)_i=1ζ(μ_i s). These expression are respectively implied from Theorem 2.2 and 2.1 in <cit.>.It is shown in e.g., <cit.> that ζ(2k,…,2k),ζ^⋆(2k,…,2k)∈ℚπ^2kn.These can be generalized to the Schur multiple zeta values" as follows. It holds that ζ_λ({2k}^λ)∈ℚπ^2k|λ| for k∈ℕ.This is a direct consequence of the expression (<ref>)together with the fact ζ(2k)∈ℚπ^2k obtained by Euler(and hence the rational part can be explicitly written in terms of the Bernoulli numbers).When n=3, we have boxsize=normal,aligntableaux=centersss=1/6ζ(s)^3+1/2ζ(2s)ζ(s)+1/3ζ(3s)=ζ^⋆(s,s,s), ss,s=2/6ζ(s)^3+0/2ζ(2s)ζ(s)+-1/3ζ(3s), s,s,s=1/6ζ(s)^3+-1/2ζ(2s)ζ(s)+1/3ζ(3s)=ζ(s,s,s). 
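Each of these n = 3 expansions is a polynomial identity in ordinary zeta values and is easy to test; here is a minimal mpmath sketch (variable names are ours), evaluated at s = 2, where it reproduces the k = 1 entries of the table that follows.

```python
# mpmath check (a sketch; names ours) of the n = 3 power-sum expansions
# at s = 2, against the closed forms in pi^6.
from mpmath import mp, zeta, pi

mp.dps = 30
s = 2
row  = zeta(s)**3 / 6 + zeta(2*s)*zeta(s) / 2 + zeta(3*s) / 3   # lambda = (3)
hook = zeta(s)**3 / 3 - zeta(3*s) / 3                           # lambda = (2,1)
col  = zeta(s)**3 / 6 - zeta(2*s)*zeta(s) / 2 + zeta(3*s) / 3   # lambda = (1,1,1)

print(row  - 31 * pi**6 / 15120)   # ~ 0,  zeta*(2,2,2)
print(hook - pi**6 / 840)          # ~ 0,  zeta_{(2,1)}({2})
print(col  - pi**6 / 5040)         # ~ 0,  zeta(2,2,2)
```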
Special values of ζ_λ({2k}^λ) for λ⊢ 3 with small k are given as follows: boxsize=11pt,aligntableaux=center ζ_λ({2k}^λ) k=1 k=2 k=3 k=42k 2k 2k31π^6/151204009π^12/3405402000223199π^18/1948964774006252278383389π^24/19384278908520626100002k 2k2kπ^6/840493π^12/510810300086π^18/4331032831125116120483π^24/24230348635650782625000 2k2k2k π^6/5040π^12/6810804002π^18/6496549246687538081π^24/48460697271301565250000When n=4, we have boxsize=normal,aligntableaux=centerssss=1/24ζ(s)^4+1/4ζ(2s)ζ(s)^2+1/8ζ(2s)^2+1/3ζ(3s)ζ(s)+1/4ζ(4s)=ζ^⋆(s,s,s,s), sss,s=3/24ζ(s)^4+1/4ζ(2s)ζ(s)^2+-1/8ζ(2s)^2+0/3ζ(3s)ζ(s)-1/4ζ(4s), ss,ss=2/24ζ(s)^4+0/4ζ(2s)ζ(s)^2+2/8ζ(2s)^2+-1/3ζ(3s)ζ(s)+0/4ζ(4s), ss,s,s=3/24ζ(s)^4+-1/4ζ(2s)ζ(s)^2+-1/8ζ(2s)^2+0/3ζ(3s)ζ(s)+1/4ζ(4s), s,s,s,s=1/24ζ(s)^4+-1/4ζ(2s)ζ(s)^2+1/8ζ(2s)^2+1/3ζ(3s)ζ(s)+-1/4ζ(4s)=ζ(s,s,s,s). Special values of ζ_λ({2k}^λ) for λ⊢ 4 with small k are given as follows: boxsize=11pt,aligntableaux=center ζ_λ({2k}^λ) k=1 k=2 k=3 k=42k 2k 2k 2k 127π^8/60480013739π^16/1136785104000 1202645051π^24/10095978598187826093753467913415992313π^32/279956188158180088608553500000002k 2k 2k2k 239π^8/181440062191π^16/6252318072000062572402π^24/30287935794563478281252019988202341π^32/39993741165454298372650500000002k 2k2k 2k 11π^8/302400113π^16/183891708000014074π^24/4389555912255576562530650383π^32/155704220332691929148250000002k 2k2k2k 11π^8/36288029π^16/178637659200098642π^24/3028793579456347828125332561213π^32/39993741165454298372650500000002k2k2k 2k π^8/362880π^16/125046361440004π^24/43268479706519254687513067π^32/9331872938606002953618450000000§ JACOBI-TRUDI FORMULASThe aim of this section is to give a proof of Theorem <ref>.To do that,we need some basic concepts in combinatorial method.Namely, we try to understand SMZF as a sum of weights of patterns on the ℤ^2 lattice,similarly to Schur functions (more precisely, see, e.g., <cit.>).Now, we do not work on SMZFs themselves, but with a truncated version of those,which may correspond to the Schur polynomial in theory of Schur functions.For N∈ℕ, let SSYT_N(λ) be the set of all (m_ij)∈SSYT(λ) such that m_ij≤ N for all i,j.Defineζ^N_λ( s) =∑_M∈SSYT_N(λ)1/M^ s. In particular, putboxsize=normal,aligntableaux=centerζ^N(s_1,…,s_n) =ζ^N_(1^n)( s_12pt⋮ s_n ) , ζ^N⋆(s_1,…,s_n) =ζ^N_(n)( s_1⋯s_n ). Notice thatlim_N →∞ζ^N_λ( s)=ζ_λ( s) when s∈ W_λ.Similarly to (<ref>),we have the expressionsζ^N⋆( s) =∑_ t ≼sζ^N( t), ζ^N( s) =∑_ t ≼s(-1)^n-ℓ( t)ζ^N⋆( t).§.§ A proof of the Jacobi-Trudi formula of H-type §.§.§ Rim decomposition of partitionA skew partition is a pair of partitions (λ,μ)satisfying μ⊂λ, that is μ_i≤λ_i for all i.The resulting skew shape is denoted by λ/μ andthe corresponding Young diagram is by D(λ/μ).We often identify λ/μ with D(λ/μ).A skew Young diagram θ is called a ribbonif θ is connected and contains no 2× 2 block of boxes. Let λ be a partition.A sequence Θ=(θ_1,…,θ_t) of ribbons is called arim decomposition of λif D(θ_k)⊂ D(λ) for 1≤ k≤ tand θ_1⊔⋯⊔θ_k,the gluing of θ_1,…,θ_k,is (the Young diagram of) a partition λ^(k) for 1≤ k≤ tsatisfying λ^(t)=λ. One can naturally identify a rim decomposition Θ=(θ_1,…,θ_t) of λwith the Young tableau T=(t_ij)∈ T(λ,{1,…,t})defined by t_ij=k if (i,j)∈ D(θ_k). 
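The two conditions in this definition can be checked mechanically. The following Python sketch (helper names are ours) encodes a filling t_ij = k of D(λ) and verifies that every θ_k is a ribbon and that every partial union θ_1⊔⋯⊔θ_k is a partition shape; it is preloaded with the filling of the example that follows.

```python
# A sketch that mechanically checks the defining conditions of a rim
# decomposition for a filling t_ij = k of D(lambda); here lambda = (4,3,3,2)
# and FILL encodes the tableau  1 1 3 3 / 2 3 3 / 2 3 4 / 3 3  displayed below.
FILL = {
    (1, 1): 1, (1, 2): 1, (1, 3): 3, (1, 4): 3,
    (2, 1): 2, (2, 2): 3, (2, 3): 3,
    (3, 1): 2, (3, 2): 3, (3, 3): 4,
    (4, 1): 3, (4, 2): 3,
}

def is_partition_shape(cells):
    """cells form a Young diagram, i.e. are left- and top-justified."""
    return all(i == 1 or (i - 1, j) in cells for (i, j) in cells) and \
           all(j == 1 or (i, j - 1) in cells for (i, j) in cells)

def is_ribbon(cells):
    """Connected set of boxes containing no 2x2 block (skewness follows
    from the nesting check below)."""
    if not cells:
        return True   # we permit theta_i to be empty
    if any({(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)} <= cells
           for (i, j) in cells):
        return False
    seen, todo = set(), [min(cells)]
    while todo:   # flood fill along edge-adjacent boxes
        i, j = todo.pop()
        if (i, j) in seen or (i, j) not in cells:
            continue
        seen.add((i, j))
        todo += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return seen == cells

t = max(FILL.values())
ribbons = all(is_ribbon({c for c, k in FILL.items() if k == r})
              for r in range(1, t + 1))
nested = all(is_partition_shape({c for c, k in FILL.items() if k <= r})
             for r in range(1, t + 1))
print(ribbons and nested)   # True: this filling is a rim decomposition
```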
The following Θ=(θ_1,θ_2,θ_3,θ_4) is a rim decomposition of λ=(4,3,3,2);boxsize=normal,aligntableaux=centerΘ = 1 1 3 32 3 32 3 43 3 , which means thatmathmode,boxsize=10pt,aligntableaux=centerθ_1=2 ,θ_2=0,1,1 ,θ_3=2+2,1+2,1+1,2andθ_4=1 .Write λ=(λ_1,…,λ_r).We call a rim decomposition Θ=(θ_1,…,θ_r) of λ an H-rim decomposition if each θ_i starts from (i,1) for all 1≤ i≤ r.Here, we permit θ_i=∅. We denote by Rim^λ_H the set of all H-rim decompositions of λ. The following Θ=(θ_1,θ_2,θ_3,θ_4) is an H-rim decomposition of λ=(4,3,3,2); boxsize=normal,aligntableaux=centerΘ = 1 1 3 33 3 33 4 44 4 , which means thatmathmode,boxsize=10pt,aligntableaux=center θ_1=2 ,θ_2=∅ , θ_3=2+2,3,1and θ_4=1+2,2 .Note that the rim decomposition appearing in Example <ref> is not an H-rim decomposition. The H-rim decompositions also appeared in <cit.>,where they are called the flat special rim-hooks.They are used to compute the coefficients of the linear expansion of a given symmetric function via Schur functions. §.§.§ Patterns on the ℤ^2 latticeFix N∈ℕ.For a partition λ=(λ_1,…,λ_r),let A_i and B_i be lattice points in ℤ^2respectively given by A_i=(r+1-i,1) and B_i=(r+1-i+λ_i,N) for 1≤ i≤ r.Put A=(A_1,…,A_r) and B=(B_1,…,B_r).An H-pattern corresponding to λ is a tuple L=(l_1,…,l_r) of directed paths on ℤ^2,whose directions are allowed only to go one to the right or one up,such that l_i starts from A_i and ends to B_σ(i) for some σ∈ S_r.We call such σ∈ S_r the type of L and denote it by σ=type(L).Note that the type of an H-pattern does not depend on Nin the sense that the number of horizontal edges of each directed path of the H-pattern is independent of N.The number of horizontal edges appearing in the path l_i is called the horizontal distance of l_i and is denoted by hd(l_i).When type(L)=σ,we simply write L:A→ B^σ where B^σ=(B_σ(1),…,B_σ(r)) and l_i:A_i→ B_σ(i). It is easy to see that hd(l_i)=λ_σ(i)-σ(i)+i and ∑^r_i=1hd(l_i)=|λ|.Let ℋ^N_λ be the set of all H-patterns corresponding to λ andS^λ_H={type(L)∈ S_r | L∈ℋ^N_λ}.The following is a key lemma of our study, which is easily verified.For Θ=(θ_1,…,θ_r) ∈Rim^λ_H,there exists L=(l_1,…,l_r)∈ℋ^N_λ such that hd(l_i)=|θ_i| for all 1≤ i≤ r.Moreover, the map τ_H:Rim^λ_H→ S^λ_H given by τ_H(Θ)=type(L) is a bijection. Let λ=(4,3,3,2).Then, we have τ_H(Θ)=(1243)∈ S_4 where Θ is the H-rim decomposition of λ appearing in Example <ref>. §.§.§ Weight of patternsFix s=(s_ij)∈ T(λ,ℂ).We next assign a weight to L=(l_1,…,l_r) ∈ℋ^N_λ via a H-rim decomposition of λ as follows.Take Θ=(θ_1,…,θ_r)∈Rim^λ_H such that τ_H(Θ)=type(L).Then, when the kth horizontal edge of l_i is on the jth row,we weight it with 1/j^s_pq where (p,q)∈ D(λ) is the kth component of θ_i.Now, the weight w^N_ s(l_i) of the path l_i is defined to be the product of weights of all horizontal edges along l_i.Here, we understand that w^N_ s(l_i)=1 if θ_i=∅.Moreover, we define the weight w^N_ s(L) of L∈ℋ^N_λ by w^N_ s(L)=∏^r_i=1w^N_ s(l_i). Let λ=(4,3,3,2).Consider the following L=(l_1,l_2,l_3,l_4)∈ℋ^4_(4,3,3,2);Since type(L)=(1243),the corresponding H-rim decomposition of λ is nothing but the oneappearing in Example <ref>.Let boxsize=18pt,aligntableaux=center s= a b c de f gh i jk l∈ T((4,3,3,2),ℂ). Then, the weight of l_i are given byw^4_ s(l_1) =1/1^a2^b, w^4_ s(l_2) =1, w^4_ s(l_3) =1/3^h3^e3^f3^g3^c4^d, w^4_ s(l_4) =1/2^k2^l2^i2^j. 
In particular, whenboxsize=18pt,aligntableaux=center s= a_0 a_1 a_2 a_3a_-1a_0a_1a_-2a_-1a_0a_-3a_-2∈ T^diag((4,3,3,2),ℂ),these are equal to w^4_ s(l_1) =1/1^a_02^a_1,w^4_ s(l_2) =1,w^4_ s(l_3) =1/3^a_-23^a_-13^a_03^a_13^a_24^a_3,w^4_ s(l_4) =1/2^a_-32^a_-22^a_-12^a_0. Notice that, in this case, from the definition of the weight,the tuple of indices of the exponent of the denominator of w^4_ s(l_i) along l_i should be equal to(a_1-i,a_1-i+1,a_1-i+2,…) for all i.§.§.§ ProofA proof of (<ref>) is given by calculating the sum X^N_λ( s) =∑_L∈ℋ^N_λε_type(L)w^N_ s(L) =∑_σ∈ S^λ_Hε_σ∑_L:A→ B^σw^N_ s(L), where ε_σ is the signature of σ∈ S_r.First, the inner sum can be calculated as follows. For σ∈ S^λ_H,let Θ^σ=(θ^σ_1,…,θ^σ_r)∈Rim^λ_H be the H-rim decompositionsuch that τ_H(Θ^σ)=σ.Then, we have∑_L:A→ B^σw^N_ s(L) =∏^r_i=1ζ^N⋆(θ^σ_i( s)). Here, for Θ=(θ_1,…,θ_r)∈Rim^λ_H, θ_i( s)∈ℂ^|θ_i| is the tupleobtained by reading contents of the shape restriction of s to θ_i from the bottom left to the top right.Let L=(l_1,…,l_r)∈ℋ^N_λ be an H-pattern of type σ.Then l_i is a path from A_i to B_σ(i) with hd(l_i)=λ_σ(i)-σ(i)+i=|θ^σ_i|.For simplicity, write k_i=λ_σ(i)-σ(i)+i and θ^σ_i( s)=(s_i,1,…,s_i,k_i).Suppose that l_i has n_j steps on the jth row for 1≤ j≤ N.Then, from the definition of the weight, we havew^N_ s(l_i) =1/1^s_i,1⋯1/1^s_i,n_1_n_1 terms1/2^s_i,n_1+1⋯1/2^s_i,n_1+n_2_n_2 terms⋯1/N^s_i,n_1+⋯+n_N-1+1⋯1/N^s_i,n_1+⋯+n_N_n_N termswith n_1+⋯+n_N=k_i.This shows that∑_L:A→ B^σw^N_ s(L)=∏^r_i=1∑_l_i:A_i→ B_σ(i)w^N_ s(l_i)=∏^r_i=1∑_1≤ m_1≤⋯≤ m_k_i≤ N1/m_1^s_i,1⋯ m_k_i^s_i,k_i=∏^r_i=1ζ^N⋆(s_i,1,…,s_i,k_i). From Lemma <ref>, we have X^N_λ( s) =∑_σ∈ S^λ_Hε_σ∏^r_i=1ζ^N⋆(θ^σ_i( s)). Let ℋ^N_λ,0 be the set of all L=(l_1,…,l_r)∈ℋ^N_λ such that any distinct pair of l_i and l_j has no intersection.Define X^N_λ,0( s) =∑_L∈ℋ^N_λ,0ε_type(L)w^N_ s(L), X^N_λ,1( s) =∑_L∈ℋ^N_λ∖ℋ^N_λ,0ε_type(L)w^N_ s(L). Clearly we have X^N_λ( s)=X^N_λ,0( s)+X^N_λ,1( s).Moreover, since type(L)=id for all L∈ℋ^N_λ,0where id is the identity element of S_r andid corresponds to the trivial H-rim decomposition(θ_1,…,θ_r)=((λ_1),…,(λ_r)),employing the well-known identification between non-intersecting lattice paths and semi-standard Young tableaux(see, e.g., <cit.>),we have X^N_λ,0( s) =∑_L∈ℋ^N_λ,0w^N_ s(L) =ζ^N_λ( s). Therefore, from (<ref>),we reach the expressionζ^N_λ( s)=∑_σ∈ S^λε_σ∏^r_i=1ζ^N⋆(θ^σ_i( s)) -X^N_λ,1( s).Now, one can obtain (<ref>) by taking the limit N→∞ of (<ref>)under suitable assumptions on s described in Theorem <ref> together with the following proposition. Assume that s=(s_ij)∈ T^diag(λ,ℂ).Write a_k=s_i,i+k for k∈ℤ. (1) We have X^N_λ( s) =[ζ^N⋆(a_-j+1,a_-j+2,…,a_-j+(λ_i-i+j)) ]_1≤ i,j≤ r. (2)It holds thatX^N_λ,1( s)=0.We first notice that,if s=(s_ij)∈ T^diag(λ,ℂ),then we haveθ^σ_i( s) =(a_1-i,a_1-i+1,…,a_1-i+(λ_σ(i)-σ(i)+i)-1) for all 1≤ i≤ r.Therefore, understanding that ζ^N_(k)=0 for k<0,from (<ref>), we haveX^N_λ( s)= ∑_σ∈ S^λε_σ∏^r_i=1ζ^N⋆(θ^σ_i( s))=∑_σ∈ S_rε_σ∏^r_i=1ζ^N⋆(a_1-i,a_1-i+1,…,a_1-i+(λ_σ(i)-σ(i)+i)-1)=[ζ^N⋆(a_1-i,a_1-i+1,…,a_1-i+(λ_j-j+i)-1) ]_1≤ i,j≤ r=[ζ^N⋆(a_-i+1,a_-j+2,…,a_-j+(λ_i-i+j)) ]_1≤ i,j≤ r. 
Hence, we obtain (<ref>).We next show the second assertion.To do that,we employ the well-known involution L↦L on ℋ^N_λ∖ℋ^N_λ,0 defined as follows.For L=(l_1,…,l_r)∈ℋ^N_λ∖ℋ^N_λ,0 of type σ, consider the first (rightmost) intersection point appearing in L,at which two paths say l_i and l_j cross.Let L be an H-pattern that contains every paths in L except for l_i and l_j andtwo more paths l_i and l_j.Here, l_i (resp. l_j) follows l_i (resp. l_j)until it meets the first intersection point and after that follows l_j (resp. l_i) to the end. Notice that, if s=(s_ij)∈ T^diag(λ,ℂ),then we have w^N_ s(L)=w^N_ s(L)since there is no change of horizontal edges between L and L.Moreover, we have type(L)=-type(L)because the end points of L and L are just switched.These imply that X^N_λ,1( s)=∑_L∈ℋ^N_λ∖ℋ^N_λ,0ε_type(L)w^N_ s(L)=-∑_L∈ℋ^N_λ∖ℋ^N_λ,0ε_type(L)w^N_ s(L)=-X^N_λ,1( s)and therefore lead to (<ref>). When s∈ T^diag(λ,ℂ),(<ref>) can be also written in terms of the H-rim decomposition as follows;ζ^N_λ( s) =∑_Θ=(θ_1,θ_2,…,θ_r)∈Rim^λ_Hε_H(Θ) ζ^N⋆(θ_1( s))ζ^N⋆(θ_2( s))⋯ζ^N⋆(θ_r( s)), where ε_H(Θ)=ε_τ_H(Θ).Note that ε(Θ)=(-1)^n-#{i | θ_i∅} when λ=(1^n). In some cases, X^N_λ( s) actually has a determinant expression without the assumption on variables;boxsize=normal,aligntableaux=centerX^N_(2,2)( a bc d  ) =| [ ζ^N⋆(a,b) ζ^N⋆(c,d,b); ζ^N⋆(a) ζ^N⋆(c,d) ]|, X^N_(2,2,1)( a bc de  ) =| [ ζ^N⋆(a,b) ζ^N⋆(c,d,b) ζ^N⋆(e,c,d,b); ζ^N⋆(a) ζ^N⋆(c,d) ζ^N⋆(e,c,d); 0 1 ζ^N⋆(e) ]|.However, in general,X^N_λ( s) can not be written as a determinant.For example, we have X^N_(2,2,2)( a bc de f  ) =ζ^N⋆(a,b)ζ^N⋆(c,d)ζ^N⋆(e,f) -ζ^N⋆(a,b)ζ^N⋆(c)ζ^N⋆(e,f,d) -ζ^N⋆(c,a)ζ^N⋆(e,f,d,b) -ζ^N⋆(a)ζ^N⋆(c,d,b)ζ^N⋆(e,f) +ζ^N⋆(c,a,b)ζ^N⋆(e,f,d) +ζ^N⋆(a)ζ^N⋆(c)ζ^N⋆(e,f,d,b) and see that the righthand side does not seem to be expressed as a determinant(but is close to the determinant). Similarly, X^N_λ,1( s) does not vanish in general.For example,X^2_(2,2),1( a bc d  )=(1/1^a1^b1^c1^d+1/1^a1^b1^c2^d +1/1^a2^b1^c1^d+1/1^a2^b1^c2^d..1/1^a2^b2^c2^d+1/2^a2^b1^c1^d+1/2^a2^b1^c2^d+1/2^a2^b2^c2^d) -(1/1^a1^b1^c1^d+1/1^a2^b1^c1^d+1/1^a2^b1^c2^d+1/1^a2^b2^c2^d..+1/2^a1^b1^c1^d+1/2^a2^b1^c1^d+1/2^a2^b1^c2^d+1/2^a2^b2^c2^d)=1/1^a1^b1^c2^d-1/2^a1^b1^c1^d, which actually vanishes when a=d.§.§ A proof of the Jacobi-Trudi formula of E-typeTo prove (<ref>),we need to consider another type of patterns on the ℤ^2 lattice.Because the discussion is essentially the same as in the previous subsection,we omit all proofs of the results in this subsection.Let λ=(λ_1,…,λ_r) be a partition and λ'=(λ'_1,…,λ'_s) the conjugate of λ.A rim decomposition Θ=(θ_1,…,θ_s) of λ is calledan E-rim decomposition if each θ_i starts from (1,i) for all 1 ≤ i ≤ s. Here, we again permit θ_i=∅.We denote by Rim^λ_E the set of all E-rim decompositions of λ. 
The following Θ=(θ_1,θ_2,θ_3,θ_4) is an E-rim decomposition of λ=(4,3,3,2); boxsize=normal,aligntableaux=centerΘ = 1 2 3 41 2 31 3 33 3 , which means thatmathmode,boxsize=10pt,aligntableaux=center θ_1=1,1,1 ,θ_2=1,1 , θ_3=2+1,2+1,1+2,2and θ_4=1 .Fix N∈ℕ.Let C_i and D_i be lattice points in ℤ^2respectively given by C_i=(s+1-i,1) and D_i=(s+1-i+λ'_i,N+1) for 1≤ i≤ s.Put C=(C_1,…,C_s) and D=(D_1,…,D_s).An E-pattern corresponding to λ is a tuple L=(l_1,…,l_s) of directed paths on ℤ^2,whose directions are allowed only to go one to the northeast or one up,such that l_i starts from C_i and ends to D_σ(i) for some σ∈ S_s.We also call such σ∈ S_s the type of L and denote it by σ=type(L).The number of northeast edges appearing in the path l_i is called the northeast distance of l_i and is denoted by ned(l_i).When type(L)=σ,we simply write L:C→ D^σ where D^σ=(D_σ(1),…,D_σ(s)) and l_i:C_i→ D_σ(i). It is easy to see that ned(l_i)=λ'_σ(i)-σ(i)+i and ∑^s_i=1ned(l_i)=|λ|.Let ℰ^N_λ be the set of all E-patterns corresponding to λand S^λ_E={type(L)∈ S_s | L∈ℰ^N_λ}. For Θ=(θ_1,…,θ_s)∈Rim^λ_E,there exists L=(l_1,…,l_s)∈ℰ^N_λ such that ned(l_i)=|θ_i| for all 1≤ i≤ s.Moreover, the map τ_E:Rim^λ_E→ S^λ_E given by τ_E(Θ)=type(L) is a bijection.Fix s=(s_ij)∈ T(λ,ℂ).A weight on L=(l_1,…,l_s) ∈ℰ^N_λ is similarly defined via the E-rim decomposition of λ as follows.Take Θ=(θ_1,…,θ_s)∈Rim^λ_E such that τ_E(Θ)=type(L).Then, when the kth northeast edge of l_i lies from the jth row to (j+1)th row,we weight it with 1/j^s_pq where (p,q)∈ D(λ) is the kth component of θ_i.Now, the weight w^N_ s(l_i) of the path l_i is defined to be the product of weights of all northeast edges along l_i.Here, we understand that w^N_ s(l_i)=1 if θ_i=∅.Moreover, we define the weight w^N_ s(L) of L∈ℰ^N_λ by w^N_ s(L)=∏^s_i=1w^N_ s(l_i). Let λ=(4,3,3,2).Consider the E-rim decomposition Θ∈Rim^λ_E of λ appeared in Example <ref>.It is easy to see that τ_E(Θ)=(123)∈ S_4 via the following L=(l_1,l_2,l_3,l_4)∈ℰ^6_(4,3,3,2);Let boxsize=18pt,aligntableaux=center s= a b c de f gh i jk l∈ T((4,3,3,2),ℂ). Then, the weight of l_i are given byw^4_ s(l_1) =1/1^a5^e6^h, w^4_ s(l_2) =1/3^b5^f, w^4_ s(l_3) =1/1^c2^g3^j4^i5^l6^k, w^4_ s(l_4) =1/3^d. In particular, whenboxsize=18pt,aligntableaux=center s= a_0 a_1 a_2 a_3a_-1a_0a_1a_-2a_-1a_0a_-3a_-2∈ T^diag((4,3,3,2),ℂ),these are equal to w^4_ s(l_1) =1/1^a_05^a_-16^a_-2, w^4_ s(l_2) =1/3^a_15^a_0, w^4_ s(l_3) =1/1^a_22^a_13^a_04^a_-15^a_-26^a_-3, w^4_ s(l_4) =1/3^a_3. Notice that, in this case, from the definition of the weight,the tuple of indexes of the exponent of the denominator of w^4_ s(l_i) along l_i should be equal to(a_-1+i,a_-1+i-1,a_-1+i-2,…) for all i. We similarly give a proof of (<ref>) by calculating the sum Y^N_λ( s) =∑_L∈ℰ^N_λε_type(L)w^N_ s(L) =∑_σ∈ S^λ_Eε_σ∑_L:C→ D^σ w^N_ s(L), For σ∈ S^λ_E,let Θ^σ=(θ^σ_1,…,θ^σ_s)∈Rim^λ_E be the E-rim decompositionsuch that τ_E(Θ^σ)=σ.Then, we have∑_L:C→ D^σw^N_ s(L) =∏^s_i=1ζ^N(θ^σ_i( s)). Here, for Θ=(θ_1,…,θ_s)∈Rim^λ_E, θ_i( s)∈ℂ^|θ_i| is the tupleobtained by reading contents of the shape restriction of s to θ_i from the top right to the bottom left.From Lemma <ref>, we have Y^N_λ( s) =∑_σ∈ S^λ_Eε_σ∏^s_i=1ζ^N(θ^σ_i( s)). Define ℰ^N_λ,0 similarly to ℋ^N_λ,0and also Y^N_λ,0( s) and Y^N_λ,1( s).It holds that Y^N_λ,0( s) =∑_L∈ℰ^N_λ,0w^N_ s(L) =ζ^N_λ( s). 
Hence, from (<ref>),we reach the expressionζ^N_λ( s)=∑_σ∈ S^λ_Eε_σ∏^s_i=1ζ^N(θ^σ_i( s)) -Y^N_λ,1( s).Now, (<ref>) is obtained by taking the limit N→∞ of (<ref>)under suitable assumptions on s described in Theorem <ref> together with the following proposition. Assume that s=(s_ij)∈ T^diag(λ,ℂ).Write a_k=s_i,i+k for k∈ℤ. (1) We have Y^N_λ( s) =[ζ^N(a_j-1,a_j-2,…,a_j-(λ'_i-i+j)) ]_1≤ i,j≤ s. (2)It holds thatY^N_λ,1( s)=0. When s∈ T^diag(λ,ℂ),(<ref>) can be also written in terms of the E-rim decomposition as follows;ζ^N_λ( s) =∑_Θ=(θ_1,θ_2,…,θ_s)∈Rim^λ_Eε_E(Θ) ζ^N(θ_1( s))ζ^N(θ_2( s))⋯ζ^N(θ_s( s)), where ε_E(Θ)=ε_τ_E(Θ).Note that ε_E(Θ)=(-1)^n-#{i | θ_i∅} when λ=(n). § SCHUR MULTIPLE ZETA FUNCTIONS AS VARIATIONS OF SCHUR FUNCTIONS §.§ Schur multiple zeta functions of skew typeOur SMZFs are naturally extended to those of skew type as follows.Let λ and μ be partitions satisfying μ⊂λ.We use the same notations T(λ/μ,X),T^diag(λ/μ,X) for a set X,SSYT(λ/μ) and SSYT_N(λ/μ) for a positive integer N∈ℕas the previous sections. Let s=(s_ij)∈ T(λ/μ,ℂ).We define a skew SMZF associated with λ/μ byζ_λ/μ( s) =∑_M∈SSYT(λ/μ)1/M^ s and its truncated sum ζ^N_λ/μ( s) =∑_M∈SSYT_N(λ/μ)1/M^ s, where M^ s=∏_(i,j)∈ D(λ/μ)m_ij^s_ij for M=(m_ij)∈SSYT(λ/μ).As we have seen in Lemma <ref>, the series (<ref>) converges absolutely if s∈ W_λ/μ where W_λ/μ is also similarly defined as W_λ(note that C(λ/μ)⊂ C(λ)).We have againζ_λ/μ( s) =∑_ t ≼sζ( t), ζ_λ/μ( s) =∑_ t ≼s'(-1)^|λ/μ|-ℓ( t)ζ^⋆( t),where ≼ is naturally generalized to the skew types.(1)For s=(s_ij)∈ W_(2,2,2)/(1,1), we have boxsize=normal,aligntableaux=centers_12 s_22 s_31s_32=ζ(s_31,s_12,s_22,s_32)+ζ(s_31+s_12,s_22,s_32)+ζ(s_12,s_31+s_22,s_32),+ζ(s_12,s_31,s_22,s_32)+ζ(s_12,s_22,s_31+s_32)+ζ(s_12,s_22,s_31,s_32)=ζ^⋆(s_31,s_12,s_22,s_32)-ζ^⋆(s_31+s_12,s_22,s_32)-ζ^⋆(s_31,s_12+s_22,s_32) -ζ^⋆(s_31,s_12,s_22+s_32)+ζ^⋆(s_31+s_12+s_22,s_32)+ζ^⋆(s_31+s_12,s_22+s_32) +ζ^⋆(s_31,s_12+s_22+s_32)+ζ^⋆(s_12,s_31,s_22,s_32)-ζ^⋆(s_12,s_31,s_22+s_32) +ζ^⋆(s_12,s_22,s_31,s_32)-ζ^⋆(s_12+s_22,s_31,s_32)-ζ^⋆(s_12,s_22+s_31,s_32).(2)For s=(s_ij)∈ W_(3,3)/(2), we have boxsize=normal,aligntableaux=center s_13 s_21s_22s_23=ζ(s_13,s_21,s_22,s_23)+ζ(s_13+s_21,s_22,s_23)+ζ(s_13,s_21+s_22,s_23) +ζ(s_13,s_21,s_22+s_23)+ζ(s_13+s_21+s_22,s_23)+ζ(s_13+s_21,s_22+s_23) +ζ(s_13,s_21+s_22+s_23)+ζ(s_21,s_13,s_22,s_23)+ζ(s_21,s_13+s_22,s_23) +ζ(s_21,s_13,s_22+s_23)+ζ(s_21,s_22,s_13,s_23)+ζ(s_21+s_22,s_13,s_23)=ζ^⋆(s_13,s_21,s_22,s_23)-ζ^⋆(s_13+s_21,s_22,s_23)-ζ^⋆(s_21,s_13+s_22,s_23),+ζ^⋆(s_21,s_13,s_22,s_23)-ζ^⋆(s_21,s_22,s_23+s_13)+ζ^⋆(s_21,s_22,s_13,s_23). As in Section <ref>,one sees that ζ_λ/μ({s}^λ/μ)=e^(s)s_λ/μ=s_λ/μ(1^-s,2^-s,…) for s∈ℂ with (s)>1where s_λ/μ is the skew Schur function associated with λ/μ (see <cit.>).In particular, since s_λ/μ is a symmetric function and hence can be expressed as a linear combination of the power-sum symmetric functions,we have ζ_λ/μ({2k}^λ/μ)∈ℚπ^2k(|λ|-|μ|) for k∈ℕ.Notice that it is shown in <cit.> thatζ_λ/μ({2k}^λ/μ) for a special choice of λ/μ with k=1,2,3is involved with f^λ/μ, the number of standard Young tableaux of shape λ/μ. §.§ Macdonald's ninth variation of Schur functionsLet W^diag_λ/μ=W_λ/μ∩ T^diag(λ/μ,ℂ).We now show that, when s∈ W^diag_λ/μ,the skew SMZF ζ_λ/μ( s) is realizedas (the limit of) a specialization of the ninth variation of skew Schur functionsstudied by Nakagawa, Noumi, Shirakawa and Yamada <cit.>.As in the previous discussion,we write a_k=s_i,i+k for k∈ℤ (and for any i∈ℕ)for s=(s_ij)∈ W^diag_λ/μ. 
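Before setting up the ninth variation, we note that expansions such as Example (1) above can be tested numerically. In the sketch below (N is a truncation level and all names are ours), both sides are truncated at the same level, so they agree up to rounding.

```python
# Numerical check (a sketch; N is a truncation level, names ours) of
# Example (1): the skew SMZF of shape (2,2,2)/(1,1) as a sum of six MZVs.
N = 60
s12, s22, s31, s32 = 1.0, 2.0, 1.0, 2.0   # only the corner s32 needs Re > 1

def zN(*s):
    """Truncated MZF: sum over 1 <= m_1 < ... < m_n <= N."""
    total = 0.0
    def rec(start, i, prod):
        nonlocal total
        if i == len(s):
            total += prod
            return
        for m in range(start, N + 1):
            rec(m + 1, i + 1, prod / m ** s[i])
    rec(1, 0, 1.0)
    return total

# Left side: column 2 is strict (m12 < m22 < m32), row 3 is weak (m31 <= m32).
lhs = sum(1.0 / (m12**s12 * m22**s22 * m31**s31 * m32**s32)
          for m12 in range(1, N + 1)
          for m22 in range(m12 + 1, N + 1)
          for m32 in range(m22 + 1, N + 1)
          for m31 in range(1, m32 + 1))

# Right side: the six terms t with t preceq s listed in Example (1).
rhs = (zN(s31, s12, s22, s32) + zN(s31 + s12, s22, s32) + zN(s12, s31 + s22, s32)
     + zN(s12, s31, s22, s32) + zN(s12, s22, s31 + s32) + zN(s12, s22, s31, s32))
print(lhs, rhs)   # equal up to floating-point rounding
```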
Let r and s be positive integers.Put η=r+s. Let λ=(λ_1,…,λ_r) and μ=(μ_1,…,μ_r)be partitions satisfying μ⊂λ⊂ (s^r) (we here allow λ_i=0 or μ_i=0) andJ={j_1,j_2,…,j_r} with j_a=λ_r+1-a+a andI={i_1,i_2,…,i_r} with i_b=μ_r+1-b+bthe corresponding Maya diagrams, respectively.Notice that I and J are subsets of {1,2,…,η}satisfying j_1<j_2<⋯<j_r and i_1<i_2<⋯<i_r. Then, Macdonald's ninth variation of skew Schur function S^(r)_λ/μ(X) associated with a general matrix X=[x_ij]_1≤ i,j≤η of size η is defined byS^(r)_λ/μ(X)=ξ^I_J(X_+). Here, we have used the Gauss decomposition X=X_-X_0X_+ of Xwhere X_-,X_0 and X_+ are lower unitriangular, diagonal and upper unitriangular matrices, respectively,which are determined uniquely as matrices with entries in the field of rational functionsin the variables x_ij for 1≤ i,j≤η.Moreover,ξ^I_J(X_+) is the minor determinant of X_+ corresponding to I and J.Put e^(r)_n(X)=S^(r)_(1^n)(X)=ξ^1,…,r_1,…,r-1,r+n(X_+), h^(r)_n(X)=S^(r)_(n)(X)=ξ^1,…,r_1,…,r-n+1,r+1(X_+), which are variations of the elementary and complete symmetric polynomials, respectively.Here r-n+1 means that we ignore r-n+1.For convenience, we put e^(r)_0(X)=h^(r)_0(X)=1 and e^(r)_n(X)=h^(r)_n(X)=0 for n<0.For N∈ℕ,let U=U^(N) be an upper unitriangular matrix of size η defined by U=U_1U_2⋯ U_NwhereU_k =(I_η+u^(1)_k E_12)(I_η+u^(2)_k E_23) ⋯(I_η+u^(η-1)_k E_η-1,η). Here, u^(i)_k are variables for 1≤ k≤ N and 1≤ i≤η-1 andI_η and E_ij are the identity and unit matrix of size η, respectively. The following is crucial in this section. Let s=(s_ij)∈ T^diag(λ/μ,ℂ).Write a_k=s_i,i+k for k∈ℤ.If u^(i)_k=k^-a_i-r, then we have ζ^N_λ/μ( s)=S^(r)_λ/μ(U). It is shown in <cit.> that S^(r)_λ/μ(U) has a tableau representationS^(r)_λ/μ(U) =∑_(m_ij)∈SSYT_N(λ/μ)∏_(i,j)∈ D(λ/μ)u^(r-i+j)_m_ij. Hence the claim immediately follows because u^(r-i+j)_m_ij=m^-a_j-i_ij=m^-s_ij_ijif u^(i)_k=k^-a_i-r.As corollaries of the results in <cit.>,we obtain the following formulas for skew SMZFs. §.§ Jacobi-Trudi formulasIt is shown in <cit.> that S^(r)_λ/μ(X) satisfies the Jacobi-Trudi formulasS^(r)_λ/μ(X)=[h^(μ_j+r-j+1)_λ_i-μ_j-i+j(X)]_1≤ i,j≤ r, S^(r)_λ/μ(X)=[e^(r-1-μ'_j+j)_λ'_i-μ'_j-i+j(X)]_1≤ i,j≤ s, where λ'=(λ'_1,…,λ'_s) and μ'=(μ'_1,…,μ'_s) are the conjugates of λ and μ, respectively(we again allow λ'_i=0 or μ'_i=0).Retain the above notations.Assume that s=(s_ij)∈ W^diag_λ/μ.(1) Assume further that (s_i, λ_i)>1 for all 1≤ i ≤ r.Then, we haveζ_λ/μ( s) =[ζ^⋆(a_μ_j-j+1,a_μ_j-j+2,…,a_μ_j-j+(λ_i-μ_j-i+j))]_1≤ i,j≤ r . Here, we understand that ζ^⋆( ⋯)=1 if λ_i-μ_j-i+j=0 and 0 if λ_i-μ_j-i+j<0.(2)Assume further that (s_λ'_i, i)>1 for all 1≤ i ≤ s.Then, we haveζ_λ/μ( s) =[ζ(a_-μ'_j+j-1,a_-μ'_j+j-2,…,a_-μ'_j+j-(λ'_i-μ'_j-i+j))]_1≤ i,j≤ s. Here, we understand that ζ( ⋯)=1 if λ'_i-μ'_j-i+j=0 and 0 if λ'_i-μ'_j-i+j<0. From (<ref>), we havee^(r)_n(U)=∑_m_1<m_2<⋯ < m_n≤ Nu^(r)_m_1u^(r-1)_m_2⋯ u^(r-n+1)_m_n, h^(r)_n(U)=∑_m_1≤ m_2≤⋯≤ m_n≤ Nu^(r)_m_1u^(r+1)_m_2⋯ u^(r+n-1)_m_n. Now, write r'=μ_j+r-j+1 and k'=λ_i-μ_j-i+j for simplicity. Then, we have u^(r'+i-1)_m_i=m_i^-a_μ_j-j+i if u^(i)_k=k^-a_i-rand hence h^(μ_j+r-j+1)_λ_i-μ_j-i+j(U)=∑_m_1≤ m_2≤⋯≤ m_k'≤ Nu^(r')_m_1u^(r'+1)_m_2⋯ u^(r'+k'-1)_m_k'=∑_m_1≤ m_2≤⋯≤ m_k'≤ Nm_1^-a_μ_j-j+1m_2^-a_μ_j-j+2⋯ m_k'^-a_μ_j-j+k'=ζ^N⋆(a_μ_j-j+1,a_μ_j-j+2,…,a_μ_j-j+(λ_i-μ_j-i+j)). 
This shows that (<ref>) follows from (<ref>) by letting N→∞.Similarly, (<ref>) is obtained from (<ref>) via the expression e^(r-1-μ'_j+j)_λ'_i-μ'_j-i+j(U)=ζ^N(a_-μ'_j+j-1,a_-μ'_j+j-2,…,a_-μ'_j+j-(λ'_i-μ'_j-i+j)).When λ/μ=(4,3,2)/(2,1), we have boxsize=18pt,aligntableaux=center a_2 a_3a_0 a_1a_-2a_-1=| [ ζ^⋆(a_2,a_3) ζ^⋆(a_0,a_1,a_2,a_3) ζ^⋆(a_-2,a_-1,a_0,a_1,a_2,a_3);1 ζ^⋆(a_0,a_1) ζ^⋆(a_-2,a_-1,a_0,a_1);01 ζ^⋆(a_-2,a_-1) ]|,a_2 a_3a_0 a_1a_-2a_-1= | [ζ(a_-2) ζ(a_0,a_-1,a_-2) ζ(a_2,a_1,a_0,a_-1,a_-2) ζ(a_3,a_2,a_1,a_0,a_-1,a_-2);1ζ(a_0,a_-1)ζ(a_2,a_1,a_0,a_-1)ζ(a_3,a_2,a_1,a_0,a_-1);01 ζ(a_2,a_1) ζ(a_3,a_2,a_1);001 ζ(a_3) ]|. §.§ Giambelli formula For a partition λ, we define two sequences of indices p_1,…,p_t and q_1,…,q_t by p_i=λ_i-i+1 and q_i=λ'_i-i for 1≤ i≤ twhere t is the number of diagonal entries of λ.Notice that p_1>p_2>⋯ >p_t>0 and q_1>q_2>⋯ >q_t≥ 0and λ=(p_1-1,…,p_t-1 | q_1,…,q_t) is the Frobenius notation of λ.It is shown in <cit.> that S^(r)_λ(X) satisfies the Giambelli formulaS^(r)_λ(X) =[S^(r)_(p_i,1^q_j)(X)]_1≤ i,j ≤ t.Retain the above notations.Assume that s=(s_ij)∈ W^diag_λ.Moreover, assume further that(s_i,λ_i)=(a_p_i-1)>1 and (s_λ'_i,i)=(a_-q_i)>1 for 1≤ i ≤ t. Then, we haveζ_λ( s)=[ζ_(p_i, 1^q_j)( s_i,j)]_1 ≤ i,j ≤ t, wheres_i,j= boxsize=25pt,aligntableaux=centera_0 a_1 a_2⋯a_p_i-1 a_-1 ⋮ a_-q_j∈ W_(p_i, 1^q_j).Putting u^(i)_k=k^-a_i-r, from (<ref>) and (<ref>), we haveζ^N_λ( s) =S^(r)_λ(U) =[S^(r)_(p_i,1^q_j)(U)]_1≤ i,j ≤ t=[ζ^N_(p_i,1^q_j)( s_i,j)]_1≤ i,j ≤ t. This leads the desired equation by letting N→∞. When λ=(4,3,3,2)=(3,1,0 | 3,2,0), we have boxsize=18pt,aligntableaux=center a_0a_1 a_2 a_3a_-1 a_0 a_1a_-2a_-1a_0a_-3a_-2=| [ a_0 a_1 a_2 a_3a_-1 a_-2 a_-3a_0 a_1 a_2 a_3a_-1 a_-2 a_0 a_1 a_2 a_3; a_0 a_1a_-1 a_-2 a_-3a_0 a_1a_-1 a_-2 a_0 a_1; a_0a_-1 a_-2 a_-3 a_0 a_-1 a_-2 a_0 ] |. §.§ Dual Cauchy formulaIt is shown in <cit.> (see also <cit.>) that the dual Cauchy formula∑_λ⊂ (s^r)(-1)^|λ|S^(r)_λ(X)S^(s)_λ^∗(Y) =Ψ^(r,s)(X,Y) holds for X=[x_ij]_1≤ i,j≤η and Y=[y_ij]_1≤ i,j≤η.Here, for a partition λ=(λ_1,…,λ_r)⊂ (s^r), λ^∗=(r-λ'_s,…,r-λ'_1).Moreover, Ψ^(r,s)(X,Y) is the dual Cauchy kernel defined byΨ^(r,s)(X,Y) =ξ^1,…,r+s_1,…,r+s(Z)/ξ^1,…,r_1,…,r(X)ξ^1,…,s_1,…,s(Y), Z= [ [ x_11 x_12⋯ x_1η;⋮⋮ ⋮; x_r1 x_r2⋯ x_rη; y_11 y_12⋯ y_1η;⋮⋮ ⋮; y_s1 y_s2⋯ y_sη ]]. Remark that when both X and Y are unitriangular,we have Ψ^(r,s)(X,Y)=(Z). We now show an analogue of (<ref>) for SMZFs.To do that, we first simplify the formula (<ref>) in the case where X=U and Y=V.Here, for M∈ℕ,V=V^(M) is an upper unitriangular matrix of size η similarly defined as U, that is,V=V_1V_2⋯ V_M where V_k =(I_η+v^(1)_k E_12)(I_η+v^(2)_k E_23) ⋯(I_η+v^(η-1)_k E_η-1,η) with v^(i)_k beingvariables for 1≤ k≤ M and 1≤ i≤η-1.Write U=[u_ij]_1≤ i,j≤η and V=[v_ij]_1≤ i,j≤η.We first show thatu_ij = h^(i)_j-i(U) i≤ j,0 i>j,v_ij = h^(i)_j-i(V) i≤ j,0 i>j. Since these are clearly equivalent, let us show only the former.Because U is an upper unitiangular matrix,we have u_ij=0 unless i≤ j. When i≤ j, we have u_ij =∑^η_l_1,…,l_N-1=1(U_1)_i,l_1(U_2)_l_1,l_2⋯ (U_N)_l_N-1,j.Here, for a matrix A, we denote by (A)_i,j the (i,j) entry of A.Since (U_k)_a,b = ∏^b-a-1_h=0u^(a+h)_k a≤ b,0 a>b,we have u_ij =∑_i≤ l_1≤⋯≤ l_N-1≤ j(∏^l_1-i-1_h_1=0u^(i+h_1)_1) (∏^l_2-l_1-1_h_2=0u^(l_1+h_2)_2)⋯(∏^j-l_N-1-1_h_N=0u^(l_N-1+h_N)_N). 
Furthermore, writing j=i+p, we have u_i,i+p =∑_i≤ l_1≤⋯≤ l_N-1≤ i+p(∏^l_1-i-1_h_1=0u^(i+h_1)_1) (∏^l_2-l_1-1_h_2=0u^(l_1+h_2)_2)⋯(∏^i+p-l_N-1-1_h_N=0u^(l_N-1+h_N)_N)=∑_1≤ m_1≤⋯≤ m_p≤ Nu^(i)_m_1u^(i+1)_m_2⋯ u^(i+p-1)_m_p=h^(i)_p(U),whence we obtain the claim.When X=U and Y=V,from (<ref>), (<ref>) can be written as follows. It holds that ∑_λ⊂ (s^r)(-1)^|λ|S^(r)_λ(U)S^(s)_λ^∗(V) =[ [1 h^(1)_1(U) h^(1)_2(U)⋯ h^(1)_r(U)⋯ h^(1)_η-1(U);01 h^(2)_1(U)⋯ h^(2)_r-1(U)⋯ h^(2)_η-2(U);⋮⋱⋱⋱⋮ ⋮;0⋯01 h^(r)_1(U)⋯ h^(r)_η-r(U);1 h^(1)_1(V) h^(1)_2(V)⋯ h^(1)_s(V)⋯ h^(1)_η-1(V);01 h^(2)_1(V)⋯ h^(2)_s-1(V)⋯ h^(2)_η-2(V);⋮⋱⋱⋱⋮ ⋮;0⋯01 h^(s)_1(V)⋯ h^(s)_η-s(V) ]]. Assume that s=(s_ij)∈ W^diag_(s^r) and t=(t_ij)∈ W^diag_(r^s)with a_k=s_i,i+k and b_k=t_i,i+k for k∈ℤ.Moreover, assume that (s_ij)>1 for all 1≤ i≤ r, 1≤ j≤ s and (t_ij)>1 for all 1≤ i≤ s, 1≤ j≤ r.Then, we have∑_λ⊂ (s^r) (-1)^|λ|ζ_λ( s|_λ)ζ_λ^∗( t|_λ^∗)=[ [1 ζ^⋆(a_1-r) ζ^⋆(a_1-r,a_2-r)⋯ ζ^⋆(a_1-r,…,a_0)⋯ ζ^⋆(a_1-r,…,a_η-1-r);01 ζ^⋆(a_2-r)⋯ ζ^⋆(a_2-r,…,a_0)⋯ ζ^⋆(a_2-r,…,a_η-1-r);⋮⋱⋱⋱⋮ ⋮;0⋯01 ζ^⋆(a_0)⋯ ζ^⋆(a_0, …, a_η-1-r);1 ζ^⋆(b_1-s) ζ^⋆(b_1-s,b_2-s)⋯ ζ^⋆(b_1-s,…,b_0)⋯ ζ^⋆(b_1-s,…,b_η-1-s);01 ζ^⋆(b_2-s)⋯ ζ^⋆(b_2-s,…,b_0)⋯ ζ^⋆(b_2-s,…,b_η-1-s);⋮⋱⋱⋱⋮ ⋮;0⋯01 ζ^⋆(b_0)⋯ ζ^⋆(b_0, …, b_η-1-s) ]]. Here, s|_λ∈ W^diag_λ and t|_λ^∗∈ W^diag_λ^∗are the shape restriction of s and t to λ and λ^∗, respectively.Putting u^(i)_k=k^-a_i-r and v^(i)_k=k^-b_i-r, we have h^(i)_k(U)=∑_m_1≤⋯≤ m_k≤ Nu^(i)_m_1u^(i+1)_m_2⋯ u^(i+k-1)_m_k=∑_m_1≤⋯≤ m_k≤ Nm_1^-a_i-rm_2^-a_i+1-r⋯ m_k^-a_i+k-1-r=ζ^N⋆(a_i-r,a_i+1-r,…,a_i+k-1-r) and similarlyh^(i)_k(V)=ζ^M⋆(b_i-s,b_i+1-s,…,b_i+k-1-s). Therefore, (<ref>) immediately yields (<ref>) by letting N,M→∞.When r=2 and s=3, we haveboxsize=18pt,aligntableaux=center(LHS of (<ref>))=-a_0 a_1 a_2a_-1a_0 a_1 · 1 -a_0 a_1 a_2a_-1a_0· b_0+a_0 a_1 a_2a_-1 ·b_0b_-1 - a_0 a_1 a_2 ·b_0b_-1 b_-2 +a_0 a_1a_-1a_0· b_0 b_1 -a_0 a_1a_-1 ·b_0 b_1b_-1+ a_0 a_1·b_0 b_1b_-1 b_-2 +a_0a_-1 ·b_0 b_1b_-1b_0 - a_0·b_0 b_1b_-1b_0b_-2+1·b_0 b_1b_-1b_0b_-2b_-1 .On the other hand, we have (RHS of (<ref>))=[ [1ζ^⋆(a_-1)ζ^⋆(a_-1,a_0)ζ^⋆(a_-1,a_0,a_1)ζ^⋆(a_-1,a_0,a_1,a_2);01 ζ^⋆(a_0) ζ^⋆(a_0,a_1) ζ^⋆(a_0,a_1,a_2);1ζ^⋆(b_-2) ζ^⋆(b_-2,b_-1) ζ^⋆(b_-2,b_-1,b_0) ζ^⋆(b_-2,b_-1,b_0,b_1);01ζ^⋆(b_-1)ζ^⋆(b_-1,b_0)ζ^⋆(b_-1,b_0,b_1);001 ζ^⋆(b_0) ζ^⋆(b_0,b_1) ]] . § SCHUR TYPE QUASI-SYMMETRIC FUNCTIONS We here investigate SMZFsfrom the view point of the quasi-symmetric functions introduced by Gessel <cit.>. §.§ Quasi-symmetric functions Let t=(t_1,t_2,…) be variables and𝔓 a subalgebra of ℤ[[t_1,t_2,… ]] consisting of all formal power series with integer coefficients of bounded degree.We call p=p( t)∈𝔓 a quasi-symmetric functionif the coefficient of t^γ_1_k_1t^γ_2_k_2⋯ t^γ_n_k_n of p is the same asthat of t^γ_1_h_1t^γ_2_h_2⋯ t^γ_l_h_n of p whenever k_1<k_2<⋯ <k_n and h_1<h_2<⋯ <h_n.The algebra of all quasi-symmetric functions is denoted by Qsym. For a composition γ=(γ_1,γ_2,…,γ_n) of a positive integer,define the monomial quasi-symmetric function M_γ and the essential quasi-symmetric function E_γ respectively byM_γ =∑_m_1<m_2<⋯<m_nt_m_1^γ_1t_m_2^γ_2⋯ t^γ_n_m_n, E_γ =∑_m_1≤ m_2≤⋯≤ m_nt_m_1^γ_1t_m_2^γ_2⋯ t^γ_n_m_n. 
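In finitely many variables these two families are ordinary polynomials, and the basic relations between them can be verified symbolically. The following sympy sketch (helper names are ours) builds M_γ and E_γ in four variables and checks the coarsening identity E_γ = ∑_{δ≼γ} M_δ recorded just below.

```python
# A sympy sketch (helper names ours) of M_gamma and E_gamma in four
# variables t1..t4, checking E_gamma = sum of M_delta over all delta
# obtained from gamma by combining adjacent parts.
from itertools import combinations, combinations_with_replacement
import sympy as sp

k = 4
t = sp.symbols(f"t1:{k + 1}")

def M(gamma):
    """Monomial quasi-symmetric polynomial: strictly increasing indices."""
    return sp.expand(sum(sp.prod(t[i] ** g for i, g in zip(idx, gamma))
                         for idx in combinations(range(k), len(gamma))))

def E(gamma):
    """Essential quasi-symmetric polynomial: weakly increasing indices."""
    return sp.expand(sum(sp.prod(t[i] ** g for i, g in zip(idx, gamma))
                         for idx in combinations_with_replacement(range(k), len(gamma))))

def coarsenings(gamma):
    """All delta preceq gamma: combine some adjacent parts of gamma."""
    n = len(gamma)
    for mask in range(2 ** (n - 1)):
        delta, part = [], gamma[0]
        for i in range(1, n):
            if mask >> (i - 1) & 1:   # keep the boundary before part i
                delta.append(part)
                part = gamma[i]
            else:                     # merge part i into the previous one
                part += gamma[i]
        delta.append(part)
        yield tuple(delta)

gamma = (2, 1, 3)
print(sp.expand(E(gamma) - sum(M(d) for d in coarsenings(gamma))))   # 0
```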
We know that these respectively form integral basis of Qsym.Notice that E_γ =∑_δ ≼ γM_δ,M_γ =∑_δ ≼ γ(-1)^n-ℓ(δ)E_δ.§.§ Relation between quasi-symmetric functions and multiple zeta values A relation between the multiple zeta values and the quasi-symmetric functions is studied by Hoffman <cit.>(remark that the notations of MZF and MZSF in <cit.> are different from ours;they are ζ(s_n,s_n-1,…,s_1) and ζ^⋆(s_n,s_n-1,…,s_1), respectively, in our notations).Let ℌ=ℤ⟨ x,y⟩ be the noncommutative polynomial algebra over ℤ.We can define a commutative and associative multiplication ∗, called a ∗-product, on ℌ.We call (ℌ,∗) the (integral) harmonic algebra. Let ℌ^1=ℤ1+yℌ, which is a subalgebra of ℌ.Notice that every w∈ℌ^1 can be written as an integral linear combination of z_γ_1z_γ_2⋯ z_γ_n where z_γ=yx^γ-1 for γ∈ℕ.For each N∈ℕ,define the homomorphism ϕ_N:ℌ^1→ℤ[t_1,t_2,…,t_N] by ϕ_N(1)=1 and ϕ_N(z_γ_1z_γ_2⋯ z_γ_n) = ∑_m_1<m_2<⋯<m_n≤ Nt_m_1^γ_1t_m_2^γ_2⋯ t^γ_n_m_nn≤ N,0otherwise, and extend it additively to ℌ^1.There is a unique homomorphism ϕ:ℌ^1→𝔓 such that π_Nϕ=ϕ_Nwhere π_N is the natural projection from 𝔓 to ℤ[t_1,t_2,…,t_N].We have ϕ(z_γ_1z_γ_2⋯ z_γ_n)=M_(γ_1,γ_2,…,γ_n).Moreover, as is described in <cit.>, ϕ is an isomorphism between ℌ^1 and Qsym. Let e be the function sending t_i to 1/i. Moreover, define ρ_N:ℌ^1→ℝ by ρ_N=eϕ_N.For a composition γ, we have ρ_Nϕ^-1(M_γ)=ζ^N(γ), ρ_Nϕ^-1(E_γ)=ζ^N⋆(γ). Here, the second formula follows from the first equations of (<ref>) and (<ref>). Define the map ρ:ℌ^1→ℝ^ℕ by ρ(w)=(ρ_N(w))_N∈ℕ for w∈ℌ^1. Notice that if w∈ℌ^0=ℤ1+yℌx, which is a subalgebra of ℌ^1,then we may understand that ρ(w)=lim_N→∞ρ_N(w)∈ℝ.In particular, for a composition γ=(γ_1,γ_2,…,γ_n) with γ_n≥ 2,we have ρϕ^-1(M_γ) =ζ(γ), ρϕ^-1(E_γ) =ζ^⋆(γ).§.§ Schur type quasi-symmetric functionsNow, one easily reaches the definition of the following Schur type quasi-symmetric functions (of skew type). For partitions λ,μ satisfying μ⊂λ⊂ (s^r) and γ=(γ_ij)∈ T(λ/μ,ℕ), defineS_λ/μ(γ) =∑_(m_ij)∈SSYT(λ)∏_(i,j)∈ D(λ/μ)t^γ_ij_m_ij, which is actually in Qsym.Clearly we have boxsize=normal,aligntableaux=centerS_(1^n)( γ_12pt⋮ γ_n ) =M_(γ_1,…,γ_n),S_(n)( γ_1⋯ γ_n )=E_(γ_1,…,γ_n).Hence S_λ/μ(γ) interpolates both the monomial and essential quasi-symmetric functions.Moreover, one sees that this is the quasi-symmetric functioncorresponding to the Schur multiple zeta value in the sense of (<ref>).Let I_λ/μ = {γ=(γ_ij)∈ T(λ/μ,ℕ) | γ_ij≥ 2 for all (i,j)∈ C(λ/μ). }. Then, for γ∈ I_λ/μ,we have ρϕ^-1(S_λ/μ(γ)) =ζ_λ/μ(γ). This follows from one of the following expressions S_λ/μ(γ) =∑_ u ≼ γM_ u, S_λ/μ(γ) =∑_ u ≼ γ'(-1)^|λ/μ|-ℓ( u)E_ u, similarly obtained as (<ref>),together with (<ref>) and (<ref>). There is another important class of quasi-symmetric functions called the fundamental or ribbon quasi-symmetric function defined by F_γ=∑_δ ≽ γM_δ for a composition γ.We remark that they are not in the class of Schur type quasi-symmetric functions.We again concentrate on the case γ=(γ_ij)∈ T^diag(λ/μ,ℕ).Write c_k=γ_i,i+k for k∈ℤ (and for any i∈ℕ).Then, from the tableau expression (<ref>) of the ninth variation of the Schur function S^(r)_λ/μ(U),if we put u_k^(i)=t^c_i-r_k, then we have u^(r-i+j)_m_ij=t^γ_ij_m_ij and hence S^(r)_λ/μ(U)=∑_(m_ij)∈SSYT_N(λ/μ)∏_(i,j)∈ D(λ/μ)u^(r-i+j)_m_ij=∑_(m_ij)∈SSYT_N(λ/μ)∏_(i,j)∈ D(λ/μ)t^γ_ij_m_ij=ϕ_Nϕ^-1(S_λ/μ(γ)). 
This shows that, when γ∈ T^diag(λ/μ,ℕ),the Schur type quasi-symmetric function S_λ/μ(γ)is also realized as (the limit of) a specialization of the ninth variation of the Schur functions,whence we can similarly obtain the Jacobi-Trudi, Giambelli and dual Cauchy formulasfor such quasi-symmetric functions.Notice that the following formulas actually hold in the algebra of formal power series,which means that we do not need any further assumptions on variables such as appeared in the corresponding resultsin the previous section for SMZFs.Assume that γ=(γ_ij)∈ T^diag(λ/μ,ℕ)and write c_k=γ_i,i+k for k∈ℤ.(1) We haveS_λ/μ(γ) =[E_(c_μ_j-j+1,c_μ_j-j+2,…,c_μ_j-j+(λ_i-μ_j-i+j))]_1≤ i,j≤ r . Here, we understand that E_( ⋯)=1 if λ_i-μ_j-i+j=0 and 0 if λ_i-μ_j-i+j<0.(2)We haveS_λ/μ(γ) =[M_(c_-μ'_j+j-1,c_-μ'_j+j-2,…,c_-μ'_j+j-(λ'_i-μ'_j-i+j))]_1≤ i,j≤ s. Here, we understand that M_( ⋯)=1 if λ'_i-μ'_j-i+j=0 and 0 if λ'_i-μ'_j-i+j<0.Let λ=(p_1-1,…,p_t-1 | q_1,…,q_t) be a partition writtenin the Frobenius notation (see Section <ref>).Assume that γ=(γ_ij)∈ T^diag(λ,ℕ)and write c_k=γ_i,i+k for k∈ℤ.Then, we haveS_λ(γ)=[S_(p_i,1^q_j)(γ_i,j)]_1 ≤ i,j ≤ t, whereγ_i,j= boxsize=25pt,aligntableaux=centerc_0 c_1 c_2⋯c_p_i-1 c_-1 ⋮ c_-q_j∈ T((p_i,1^q_j),ℕ). Assume that γ=(γ_ij)∈ T^diag((s^r),ℕ) andδ=(δ_ij)∈ T^diag((r^s),ℕ)with c_k=γ_i,i+k and d_k=δ_i,i+k for k∈ℤ.Write η=r+s.Then, we have ∑_λ⊂ (s^r) (-1)^|λ| S_λ(γ|_λ)S_λ^∗(δ|_λ^∗)=[ [ 1 E_(c_1-r) E_(c_1-r,c_2-r) ⋯ E_(c_1-r,…,c_0) ⋯ E_(c_1-r,…,c_η-1-r); 0 1 E_(c_2-r) ⋯ E_(c_2-r,…,c_0) ⋯ E_(c_2-r,…,c_η-1-r); ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 ⋯ 0 1 E_(c_0) ⋯ E_(c_0,…,c_η-1-r); 1 E_(d_1-s) E_(d_1-s,d_2-s) ⋯ E_(d_1-s,…,d_0) ⋯ E_(d_1-s,…,d_η-1-s); 0 1 E_(d_2-s) ⋯ E_(d_2-s,…,d_0) ⋯ E_(d_2-s,…,d_η-1-s); ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 ⋯ 0 1 E_(d_0) ⋯ E_(d_0,…,d_η-1-s) ]]. Here, γ|_λ∈ T^diag(λ,ℕ) and δ|_λ^∗∈ T^diag(λ^∗,ℕ)are the shape restriction of γ and δ to λ and λ^∗, respectively. In <cit.>,a more general type of quasi-symmetric function is defined by a set of equality and inequality conditions. One can see that this includes both the Schur type quasi-symmetric functions and the fundamental quasi-symmetric functions as special casesand actually leads a generalized multiple zeta function via ρϕ^-1.However, because it is too complicated in general,it seems to be difficult to expect that such generalized quasi-symmetric and multiple zeta functionssatisfy the similar kind of determinant formulas as above.We know that Qsym has a commutative Hopf algebra structure (see <cit.>).The antipode S, which is an automorphism of Qsym satisfying S^2=id,is explicitly given as follows.For a composition γ=(γ_1,γ_2,…,γ_n), we have(1)S(M_γ) =∑_γ_1⊔ γ_2⊔ ⋯ ⊔ γ_m = γ(-1)^m M_γ_1M_γ_2⋯ M_γ_m.(2)S(M_γ) =(-1)^nE_γ. Here, γ_1⊔γ_2⊔⋯⊔γ_m is just the juxtaposition of non-empty compositions γ_1,γ_2,…,γ_m and γ=(γ_n,γ_n-1,…,γ_1).Combining these formulas, we reach the expressionsM_γ =∑_γ_1⊔ γ_2⊔ ⋯ ⊔ γ_m = γ(-1)^n-mE_γ_1E_γ_2⋯ E_γ_m, E_γ =∑_γ_1⊔ γ_2⊔ ⋯ ⊔ γ_m = γ(-1)^n-m M_γ_1M_γ_2⋯ M_γ_m. 
One sees by induction on n that(<ref>) and (<ref>) are respectively equivalent to the formulas M_(γ_1,…,γ_n) = | [ E_(γ_1) E_(γ_2,γ_1) ⋯ ⋯ E_(γ_n,…,γ_2,γ_1); 1 E_(γ_2) ⋯ ⋯ E_(γ_n,…,γ_2); 1 ⋱ ⋮; ⋱ 1 E_(γ_n-1) E_(γ_n,γ_n-1);20 1 E_(γ_n) ]|, E_(γ_1,…,γ_n) = | [ M_(γ_1) M_(γ_2,γ_1) ⋯ ⋯ M_(γ_n,…,γ_2,γ_1); 1 M_(γ_2) ⋯ ⋯ M_(γ_n,…,γ_2); 1 ⋱ ⋮; ⋱ 1 M_(γ_n-1) M_(γ_n,γ_n-1);20 1 M_(γ_n) ]|, which are obtained from the Jacobi-Trudi formulas (<ref>) and (<ref>), respectively.When n=3, we have M_(γ_1,γ_2,γ_3) =E_(γ_3,γ_2,γ_1) -E_(γ_3,γ_2)E_(γ_1)-E_(γ_3)E_(γ_2,γ_1) +E_(γ_3)E_(γ_2)E_(γ_1)= | [ E_(γ_1) E_(γ_2,γ_1) E_(γ_3,γ_2,γ_1); 1 E_(γ_2) E_(γ_3,γ_2); 0 1 E_(γ_3) ]|, E_(γ_1,γ_2,γ_3) =M_(γ_3,γ_2,γ_1) -M_(γ_3,γ_2)M_(γ_1)-M_(γ_3)M_(γ_2,γ_1) +M_(γ_3)M_(γ_2)M_(γ_1)= | [ M_(γ_1) M_(γ_2,γ_1) M_(γ_3,γ_2,γ_1); 1 M_(γ_2) M_(γ_3,γ_2); 0 1 M_(γ_3) ]|.For a skew Young diagram ν, we denote by ν^# the transpose of ν with respect to the anti-diagonal.Similarly, the anti-diagonal transpose of a skew Young tableaux T∈ T(ν,X) is denoted by T^#∈ T(ν^#,X). In the following discussion,we also encounter (T^#)'∈ T((ν^#)',X), the conjugate of T^#. For example,boxsize=normal,aligntableaux=centerγ_11 γ_12 γ_13 γ_21^ # = γ_13γ_12 γ_21 γ_11 ,( γ_11 γ_12 γ_13 γ_21^ # )' =γ_21 γ_13 γ_12 γ_11 Namely, (T^#)' is just the rotation of T by π around the center of ν.Now, the image of the Schur type quasi-symmetric functions by the antipode S is explicitly calculated as follows.For a skew Young diagram ν,we have S(S_ν(γ)) =(-1)^|ν|S_ν^#(γ^#). Moreover, when γ∈ T^diag(ν,ℕ), we have S(S_ν(γ))=(-1)^|ν|∑_Θ=(θ_1,θ_2,…,θ_r)∈Rim^ν^#_Hε_H(Θ)E_θ_1(γ^#)E_θ_2(γ^#)⋯ E_θ_r(γ^#), S(S_ν(γ))=(-1)^|ν|∑_Θ=(θ_1,θ_2,…,θ_s)∈Rim^ν^#_Eε_E(Θ)M_θ_1(γ^#)M_θ_2(γ^#)⋯ M_θ_s(γ^#). From (<ref>) and Theorem <ref> (2),we have S(S_ν(γ))=∑_ u ≼ γS(M_ u)=∑_ u ≼ γ(-1)^ℓ( u)E_ u=(-1)^|ν|∑_ u ≼(γ^#)'(-1)^|ν|-ℓ( u)E_ u=(-1)^|ν|S_ν^#(γ^#). Notice that, in the third equality,we have used the fact that u≼γ if and only if u≼ (γ^#)',which can be verified directly.This shows (<ref>).Now, the rest of assertions are immediately obtained fromS_ν(γ)=∑_Θ=(θ_1,θ_2,…,θ_r)∈Rim^ν_Hε_H(Θ)E_θ_1(γ)E_θ_2(γ)⋯ E_θ_r(γ), S_ν(γ)=∑_Θ=(θ_1,θ_2,…,θ_s)∈Rim^ν_Eε_E(Θ)M_θ_1(γ)M_θ_2(γ)⋯ M_θ_s(γ), which are similarly obtained as (<ref>) and (<ref>)(hence we need the assumption γ∈ T^diag(ν,ℕ)) and lead the Jacobi-Trudi formulas (<ref>) and (<ref>)for the Schur type quasi-symmetric functions.This completes the proof. The formula (<ref>) with ν=(1^n) is nothing but the one in Theorem <ref> (1). When ν=(3,1), we have from (<ref>) boxsize=normal,aligntableaux=centerS (S_(3,1)( γ_11 γ_12 γ_13 γ_21 ) )=S_(2,2,2)/(1,1)(  γ_13γ_12 γ_21 γ_11 ) =E_(γ_21,γ_13,γ_12,γ_11)-E_(γ_21+γ_13,γ_12,γ_11)-E_(γ_21,γ_13+γ_12,γ_11) -E_(γ_21,γ_13,γ_12+γ_11)+E_(γ_21+γ_13+γ_12,γ_11)+E_(γ_21+γ_13,γ_12+γ_11) +E_(γ_21,γ_13+γ_12+γ_11)+E_(γ_13,γ_21,γ_12,γ_11)-E_(γ_13,γ_21,γ_12+γ_11) +E_(γ_13,γ_12,γ_21,γ_11)-E_(γ_13+γ_12,γ_21,γ_11)-E_(γ_13,γ_12+γ_21,γ_11)=M_(γ_21,γ_13,γ_12,γ_11)+M_(γ_21+γ_13,γ_12,γ_11)+M_(γ_13,γ_21+γ_12,γ_11) +M_(γ_13,γ_21,γ_12,γ_11)+M_(γ_13,γ_12,γ_21+γ_11)+M_(γ_13,γ_21+γ_12,γ_11).Here, the second and third equations are similarly obtained as in Example <ref>. 
On the other hand, we have from (<ref>) boxsize=normal,aligntableaux=centerS (S_(3,1)( γ_11 γ_12 γ_13 γ_21 ) )=E_(γ_13)E_(γ_12)E_(γ_21,γ_11) -E_(γ_12,γ_13)E_(γ_21,γ_11) -E_(γ_13)E_(γ_21,γ_11,γ_12) +E_(γ_21,γ_11,γ_12,γ_13) where each term corresponds to the H-rim decomposition mathmode,boxsize=10pt,aligntableaux=center 123 3 ,223 3 , 133 3 and333 3of (3,1)^#=(2,2,2)/(1,1), respectively,and from (<ref>) boxsize=normal,aligntableaux=centerS (S_(3,1)( γ_11 γ_12 γ_13 γ_21 ) )=M_(γ_21)M_(γ_13,γ_12,γ_11)-M_(γ_13,γ_12,γ_11,γ_21), where each term to the E-rim decomposition mathmode,boxsize=10pt,aligntableaux=center 221 2 and222 2 , respectively. The equation (<ref>) is essentially obtained byMalvenuto and Reutenauer <cit.> for their quasi-symmetric functions.Notice that ν^# is called the conjugate of ν in their notion.If Jacobi-Trudi formulas are obtained for such quasi-symmetric functions, then one may also establish the similar kind of expressions like (<ref>) and (<ref>) for them. Using Theorem <ref>,one automatically gets another relation from a given relation among quasi-symmetric functions by mapping it by the antipode S.For instance, from (<ref>), we obtain the following equation. Assume that γ=(γ_ij)∈ T^diag((s^r),ℕ) andδ=(δ_ij)∈ T^diag((r^s),ℕ)with c_k=γ_i,i+k and d_k=δ_i,i+k for k∈ℤ.Write η=r+s.Then, we have ∑_λ⊂ (r^s)(-1)^|λ| S_(r^s)/λ((γ|_λ^∗)^#)S_(s^r)/λ^∗((δ|_λ)^#)= [ [ 1-M_(c_1-r) M_(c_2-r,c_1-r) ⋯ (-1)^rM_(c_0,…,c_1-r) ⋯ (-1)^η-1M_(c_η-1-r,…,c_1-r); 0 1-M_(c_2-r) ⋯ (-1)^r-1M_(c_0,…,c_2-r) ⋯ (-1)^η-2M_(c_η-1-r,…,c_2-r); ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 ⋯ 0 1-M_(c_0) ⋯ (-1)^η-rM_(c_η-1-r,…,c_0); 1-M_(d_1-s) M_(d_2-s,d_1-s) ⋯ (-1)^sM_(d_0,…,d_1-s) ⋯ (-1)^η-1M_(d_η-1-s,…,d_1-s); 0 1-M_(d_2-s) ⋯ (-1)^s-1M_(d_0,…,d_2-s) ⋯ (-1)^η-2M_(d_η-1-s,…,d_2-s); ⋮ ⋱ ⋱ ⋱ ⋮ ⋮; 0 ⋯ 0 1-M_(d_0) ⋯ (-1)^η-sM_(d_η-1-s,…,d_0) ]] . Here, γ|_λ^∗∈ T^diag(λ^∗,ℕ) and δ|_λ∈ T^diag(λ,ℕ)are the shape restriction of γ and δ to λ^∗ and λ, respectively.We remark that mapping this equation by ρϕ^-1 under suitable convergence assumptions,one obtains the corresponding relation among the Schur multiple zeta values. § INTEGRAL REPRESENTATIONS OF SCHUR MULTIPLE ZETA VALUES We finally show that the Schur multiple zeta value (SMZV for short) has an iterated integral representationwhen it is of ribbon type. §.§ Integral representations For nonnegative integers p_1,q_1,…,p_r,q_r,we denote byrib(h_1,…,h_r) =rib(p_1,q_1 | ⋯ | p_r,q_r) the ribbonobtained by connecting hooks h_1=(p_1,1^q_1),…,h_r=(p_r,1^q_r) from the top right to the bottom left.For example, mathmode,boxsize=10pt,aligntableaux=centerrib(3,3) =3,1,1,1 , rib(0,3 | 4,0) =3+1,3+1,3+1,4 , rib(4,1 | 2,3) =1+4,1+1,2,1,1,1 . Notice thatrib(p,0)=(p),rib(0,q)=(1^q),rib(p,q)=(p,1^q) is a hook and rib(0,q | p,0)=(p^q+1)/((p-1)^q) is an anti-hook. To guarantee the uniqueness of such expressions,we choose the minimum r in the above expression. 
For example,one cannot write (p)=rib(p-1,0 | 1,0) or (1^q)=rib(0,q-1 | 1,0), and so on.We remark that p_i 0,q_i=0 and p_i+1 0 may occur for some i, however,q_i 0,p_i+1=0 and q_i+1 0 never does for any i.For a ribbon ν, letI_ν ={γ=(γ_ij)∈ T(ν,ℕ) | γ_ij≥ 2 for all (i,j)∈ C(ν).}, which also appeared in the previous section.For γ=(γ_ij)∈ I_ν, we put |γ|=∑_(i,j)∈ D(ν)γ_ij.It is well known that ζ_(1^q)(β), that is,the multiple zeta value (MZV for short), has the following iterated integral representation (see, e.g., <cit.>);boxsize=normal,aligntableaux=centerβ_12pt⋮ β_q=∫_Δ(β)∏^q_l=1(dy_β_1+⋯+β_l-1+1/1-y_β_1+⋯+β_l-1+1∏^β_1+⋯+β_l-1+β_l_j=β_1+⋯+β_l-1+2dy_j/y_j), where β∈ I_(1^q) is the Young tableau in the left hand side of (<ref>) and Δ(β) ={ y=(y_1,…,y_|β|)∈[0,1]^|β| | Q_β( y).} with the condition Q_β( y) : y_1<⋯<y_|β|. Notice that the empty sum and product should be taken to be 0 and 1, respectively.Moreover, it is shown in <cit.> that ζ_(p)(α), that is, the multiple zeta-star value (MZSV for short), also has a similar integral expression asboxsize=normal,aligntableaux=centerα_1⋯ α_p=∫_Δ(α)∏^p_k=1(dx_α_p+⋯+α_p+2-k+1/1-x_α_p+⋯+α_p+2-k+1∏^α_p+⋯+α_p+2-k+α_p+1-k_j=α_p+⋯+α_p+2-k+2dx_j/x_j), where α∈ I_(p) is the Young tableau in the left hand side of (<ref>) andΔ(α) ={ x=(x_1,…,x_|α|)∈[0,1]^|α| | P_α( x).} with the condition P_α( x) : {[ x_j<x_j+1 if j∉{α_p,α_p+α_p-1,…,α_p+⋯+α_2},; x_j>x_j+1 if j∈{α_p,α_p+α_p-1,…,α_p+⋯+α_2}. ].Furthermore,in <cit.>,the following integral expression for the SMZV of anti-hook type,which is denoted by ζ(μ( k, l)) in <cit.>, is obtained;boxsize=normal,aligntableaux=centerβ_12pt⋮ β_qα_1⋯ α_p=∫_Δ(δ)∏^q_l=1(dy_β_1+⋯+β_l-1+1/1-y_β_1+⋯+β_l-1+1∏^β_1+⋯+β_l-1+β_l_j=β_1+⋯+β_l-1+2dy_j/y_j) ×∏^p_k=1(dx_α_p+⋯+α_p+2-k+1/1-x_α_p+⋯+α_p+2-k+1∏^α_p+⋯+α_p+2-k+α_p+1-k_j=α_p+⋯+α_p+2-k+2dx_j/x_j), where δ=δ(β,α)∈ I_rib(0,q | p,0) is the Young tableau in the left hand side of (<ref>),α,β are the previous tableaux with a relaxed condition β_q≥ 1 if p 0 andΔ(δ) ={( y, x)=(y_1,…,y_|β|,x_1,…,x_|α|)∈[0,1]^|β|+|α| | Q_β( y), P_α( x) and y_|β|<x_1.}. By the same idea,one obtains the formula for that of hook type;boxsize=normal,aligntableaux=centerα_1⋯ α_p β_12pt⋮ β_q =∫_Δ(γ)∏^p_k=1(dx_α_p+⋯+α_p+2-k+1/1-x_α_p+⋯+α_p+2-k+1∏^α_p+⋯+α_p+2-k+α_p+1-k_j=α_p+⋯+α_p+2-k+2dx_j/x_j) ×∏^q_l=1(dy_β_1+⋯+β_l-1+1/1-y_β_1+⋯+β_l-1+1∏^β_1+⋯+β_l-1+β_l_j=β_1+⋯+β_l-1+2dy_j/y_j), where γ=γ(α,β)∈ I_rib(p,q) is the Young tableau in the left hand side of (<ref>)andΔ(γ) ={( x, y)=(x_1,…,x_|α|,y_1,…,y_|β|)∈[0,1]^|α|+|β| | P_α( x), Q_β( y) and x_|α|<y_1.}.We haveboxsize=normal,aligntableaux=center213 =∫_y_1<y_2<y_3<y_4<y_5<y_6dy_1/1-y_1dy_2/y_2dy_3/1-y_3dy_4/1-y_4dy_5/y_5dy_6/y_6,3 1 2=∫_x_1<x_2>x_3>x_4<x_5<x_6dx_1/1-x_1dx_2/x_2dx_3/1-x_3dx_4/1-x_4dx_5/x_5dx_6/x_6,312 2 =∫_y_1<y_2<y_3<y_4<x_1<x_2>x_3<x_4dy_1/1-y_1dy_2/y_2dy_3/y_3dy_4/1-y_4dx_1/1-x_1dx_2/x_2dx_3/1-x_3dx_4/x_4, 3 212 =∫_x_1<x_2>x_3<x_4<x_5<y_1<y_2<y_3dx_1/1-x_1dx_2/x_2dx_3/1-x_3dx_4/x_4dx_5/x_5dy_1/1-y_1dy_2/1-y_2dy_3/y_3.Notice that we omit the condition 0<x_i,y_i<1 from the notations. Now, one can easily generalize these results to SMZVs of ribbon type as follows. 
Let p_1,q_1,…,p_r,q_r be nonnegative integers. For 1≤i≤r, let h_i=(p_i,1^{q_i}) be a hook and let

α_i = (α^(i)_1,…,α^(i)_{p_i}) ∈ I_{(p_i)},  β_i = (β^(i)_1,…,β^(i)_{q_i})^t ∈ I_{(1^{q_i})},  γ_i = γ(α_i,β_i) ∈ I_{h_i}

be the corresponding row, column and hook tableaux, with relaxed conditions β^(i)_{q_i}≥1 for 1≤i≤r−1. Define γ=γ_1⊔⋯⊔γ_r∈ I_rib(h_1,…,h_r) by connecting γ_1,…,γ_r from the top right to the bottom left. Then, it holds that

ζ_rib(h_1,…,h_r)(γ) = ∫_Δ(γ) ∏_{i=1}^{r} [ ∏_{k=1}^{p_i} ( dx^(i)_{α^(i)_{p_i}+⋯+α^(i)_{p_i+2−k}+1}/(1−x^(i)_{α^(i)_{p_i}+⋯+α^(i)_{p_i+2−k}+1}) ∏_{j=α^(i)_{p_i}+⋯+α^(i)_{p_i+2−k}+2}^{α^(i)_{p_i}+⋯+α^(i)_{p_i+2−k}+α^(i)_{p_i+1−k}} dx^(i)_j/x^(i)_j ) × ∏_{l=1}^{q_i} ( dy^(i)_{β^(i)_1+⋯+β^(i)_{l−1}+1}/(1−y^(i)_{β^(i)_1+⋯+β^(i)_{l−1}+1}) ∏_{j=β^(i)_1+⋯+β^(i)_{l−1}+2}^{β^(i)_1+⋯+β^(i)_{l−1}+β^(i)_l} dy^(i)_j/y^(i)_j ) ],

where

Δ(γ) = { (x_1,y_1,…,x_r,y_r)∈[0,1]^{∑_{i=1}^{r}(|α_i|+|β_i|)} | P_{α_i}(x_i), Q_{β_i}(y_i) (1≤i≤r); x^(i)_{|α_i|}<y^(i)_1 if q_i≠0 and x^(i)_{|α_i|}<x^(i+1)_1 if q_i=0 (1≤i≤r); y^(i)_{|β_i|}<x^(i+1)_1 (1≤i≤r−1) }.

Here, we write x_i=(x^(i)_1,…,x^(i)_{|α_i|}) and y_i=(y^(i)_1,…,y^(i)_{|β_i|}) for 1≤i≤r.

This is direct. One can understand the general case by the following example: for the ribbon tableau γ of shape rib(2,1 | 2,0), whose h_1-part has row entries (1,2) and column entry (1) and whose h_2-part has row entries (2,2),

ζ_rib(2,1 | 2,0)(γ) = ∫_{x_1<x_2>x_3<y_1<z_1<z_2>z_3<z_4} dx_1/(1−x_1) dx_2/x_2 dx_3/(1−x_3) dy_1/(1−y_1) dz_1/(1−z_1) dz_2/z_2 dz_3/(1−z_3) dz_4/z_4,

which is the case of h_1=(2,1) and h_2=(2). Actually, we have

(RHS of (<ref>)) = ∑_{l=1}^{∞} 1/l ∫_{x_2>x_3<y_1<z_1<z_2>z_3<z_4} x_2^{l−1} dx_2 dx_3/(1−x_3) dy_1/(1−y_1) dz_1/(1−z_1) dz_2/z_2 dz_3/(1−z_3) dz_4/z_4
= ∑_{l=1}^{∞} 1/l² ∫_{x_3<y_1<z_1<z_2>z_3<z_4} (1−x_3^l)/(1−x_3) dx_3 dy_1/(1−y_1) dz_1/(1−z_1) dz_2/z_2 dz_3/(1−z_3) dz_4/z_4
= ∑_{l=1}^{∞} ∑_{k=1}^{l} 1/l² · 1/k ∫_{y_1<z_1<z_2>z_3<z_4} y_1^k/(1−y_1) dy_1 dz_1/(1−z_1) dz_2/z_2 dz_3/(1−z_3) dz_4/z_4
= ∑_{l=1}^{∞} ∑_{k=1}^{l} ∑_{m=1}^{∞} 1/l² · 1/k · 1/(k+m) ∫_{z_1<z_2>z_3<z_4} z_1^{k+m}/(1−z_1) dz_1 dz_2/z_2 dz_3/(1−z_3) dz_4/z_4
= ∑_{l=1}^{∞} ∑_{k=1}^{l} ∑_{m=1}^{∞} ∑_{n=1}^{∞} 1/l² · 1/k · 1/(k+m) · 1/(k+m+n) ∫_{z_2>z_3<z_4} z_2^{k+m+n−1} dz_2 dz_3/(1−z_3) dz_4/z_4
= ∑_{l=1}^{∞} ∑_{k=1}^{l} ∑_{m=1}^{∞} ∑_{n=1}^{∞} 1/l² · 1/k · 1/(k+m) · 1/(k+m+n)² ∫_{z_3<z_4} (1−z_3^{k+m+n})/(1−z_3) dz_3 dz_4/z_4
= ∑_{l=1}^{∞} ∑_{k=1}^{l} ∑_{m=1}^{∞} ∑_{n=1}^{∞} ∑_{j=1}^{k+m+n} 1/l² · 1/k · 1/(k+m) · 1/(k+m+n)² · 1/j ∫_0^1 z_4^{j−1} dz_4
= ∑_{l=1}^{∞} ∑_{k=1}^{l} ∑_{m=1}^{∞} ∑_{n=1}^{∞} ∑_{j=1}^{k+m+n} 1/l² · 1/k · 1/(k+m) · 1/(k+m+n)² · 1/j²
= ∑_{a≤b, a<c<e, d≤e} 1/b² · 1/a · 1/c · 1/e² · 1/d² = (LHS of (<ref>)).

It seems to be difficult to express a general (that is, non-ribbon type) SMZV as a single iterated integral as above. Notice that, since we have the expressions (<ref>), every SMZV can be written as a sum of such integrals.
§.§ A duality

From (<ref>), we have

ζ_rib(2,1 | 2,0)(γ) = ∫_{t_1<t_2>t_3<t_4<t_5<t_6>t_7<t_8} dt_1/(1−t_1) dt_2/t_2 dt_3/(1−t_3) dt_4/(1−t_4) dt_5/(1−t_5) dt_6/t_6 dt_7/(1−t_7) dt_8/t_8
= ∫_{t'_1<t'_2<t'_3>t'_4<t'_5<t'_6>t'_7<t'_8} dt'_1/(1−t'_1) dt'_2/t'_2 dt'_3/(1−t'_3) dt'_4/t'_4 dt'_5/t'_5 dt'_6/t'_6 dt'_7/(1−t'_7) dt'_8/t'_8,

the right hand side being the integral representation of the ribbon SMZV with entries (2,4,2). Here, in the second equality, we have made the change of variables t'_i=1−t_{9−i} for 1≤i≤8. Such a kind of relation is called a duality. The duality for MZVs is well known (see <cit.>). On the other hand, the duality for MZSVs has not been obtained yet (see <cit.> for another kind of duality for MZSVs). Theorem <ref> immediately implies that there exists a duality for SMZVs of ribbon type; however, one should remark that in general the dual (in the above sense) of an SMZV of ribbon type is not of ribbon type again. In fact, one sees that, if γ=(γ_ij)∈ I_rib(h_1,…,h_r) has an entry γ_ij=1 where (i,j)∈ D(rib(h_1,…,h_r)) is a horizontal and non-vertical entry or the entry where the ribbon ends, then the dual of ζ_rib(h_1,…,h_r)(γ) is not of ribbon type.

One easily sees dualities of this kind, for example between the ribbon tableaux with entry sequences (2,1,2,2,2) and (2,3,2,2), and between those with entry sequences (3,2,1,1,2,2,1,2,2) and (3,2,3,3,1,2,2). On the other hand, we have

∫_{t_1<t_2<t_3>t_4>t_5<t_6} dt_1/(1−t_1) dt_2/(1−t_2) dt_3/t_3 dt_4/(1−t_4) dt_5/(1−t_5) dt_6/t_6 = ∫_{t'_1<t'_2>t'_3>t'_4<t'_5<t'_6} dt'_1/(1−t'_1) dt'_2/t'_2 dt'_3/t'_3 dt'_4/(1−t'_4) dt'_5/t'_5 dt'_6/t'_6

and see that the rightmost hand side above cannot be realized as a single SMZV of ribbon type because of the (2,2) entry 1 of the Young tableau in the left hand side. Notice that there are "self-dual" SMZVs; for example, the dual of the ribbon tableau with all four entries equal to 2 is itself.

When an MZSV has no entries equal to 1, its dual can be written explicitly as follows. Let α_1,…,α_p≥2 be positive integers. Then, it holds that

ζ_(p)(α_1,…,α_p) = ζ(γ̃),

where γ̃ is the ribbon tableau with p columns, the jth column consisting of α_{p+1−j}−1 boxes for 1≤j≤p, filled with 1's except for a 2 in the bottom box of each column.

If one obtains a duality for the SMZVs, then, from (<ref>), one may be able to get a linear relation for MZVs and MZSVs. For example, we can check the duality between the hook tableau with entries (3,2) and the anti-hook tableau with entries (1,2,2) and, from (<ref>), obtain the relation

ζ(3,2)+ζ(5) = ζ(1,4)+ζ(1,2,2)+ζ(3,2)+ζ(2,1,2).

On the other hand, from (<ref>) and (<ref>), we have the duality between the anti-hook with column (3,1) and row (2,2) and the hook with row (3,2) and column (1,2); however, (<ref>) does not yield any relations.

It is proved in <cit.> that the anti-hook SMZV whose entries are all 1 except for a single final 2 satisfies

ζ_{((q+1)^p)/(q^{p−1})}(1,…,1,2) = (p+q choose p) ζ(p+q+1),

where the shape of the anti-hook in the left-hand side of (<ref>) is ((q+1)^p)/(q^{p−1}), that is, p−1 and q are the numbers of 1's in the horizontal and vertical entries of the anti-hook, respectively. We notice that in <cit.> the left-hand side of (<ref>) is expressed as

∑_{1≤n_1<⋯<n_p} P_q(H^(1)_{n_p},…,H^(q)_{n_p}) / (n_1⋯n_{p−1} n_p²),

where P_q(x_1,…,x_q) is the modified Bell polynomial and H^(k)_n=∑_{m=1}^{n} 1/m^k is the generalized harmonic number. One can also prove (<ref>) via the integral representations and their duality in the above sense. Actually, from (<ref>), we have

(LHS of (<ref>)) = ∫_{y_1<⋯<y_p<z>x_q>⋯>x_1} dy_1/(1−y_1)⋯dy_p/(1−y_p) dz/z dx_q/(1−x_q)⋯dx_1/(1−x_1)
= ∫_{x'_1>⋯>x'_q>z'<y'_p<⋯<y'_1} dx'_1/x'_1⋯dx'_q/x'_q dz'/(1−z') dy'_p/y'_p⋯dy'_1/y'_1
= (p+q choose p) ∫_{z'<w_1<⋯<w_{p+q}} dz'/(1−z') dw_1/w_1⋯dw_{p+q}/w_{p+q}
= (p+q choose p) ζ(p+q+1) = (RHS of (<ref>)).
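The simplest numerical illustration of such dualities (ours, for orientation only) is Euler's identity ζ(2,1)=ζ(3), here written in the convention ζ(2,1)=∑_{m>n≥1} 1/(m²n); it is the depth-one prototype of the tableau dualities above. The sketch below checks it by truncated summation.

# Euler's identity zeta(2,1) = zeta(3), the prototype of MZV duality.
N = 100000
harmonic = 0.0   # running value of H_{m-1} = sum_{n<m} 1/n
z21 = 0.0
for m in range(1, N + 1):
    z21 += harmonic / m**2
    harmonic += 1.0 / m
z3 = sum(1.0 / m**3 for m in range(1, N + 1))
print(z21, z3)   # both ~ 1.2020569..., agreeing to about four decimal places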
§ ACKNOWLEDGEMENT

We would like to express our appreciation to all those who gave us valuable advice for this article: Prof. Masatoshi Noumi, who provided expertise that greatly helped us to prove the results on Schur multiple zeta functions in Section <ref>; Prof. Masanobu Kaneko, who gave guidance on quasi-symmetric functions and inspired us to establish the generalized result for such functions; Prof. Takeshi Ikeda, who gave meaningful suggestions for our work; and Prof. Michael E. Hoffman, who pointed out a mistake in Example <ref>. We also would like to thank Prof. Hiroshi Naruse, Prof. Takashi Nakamura, Prof. Soichi Okada and Prof. Yasuo Ohno for their useful comments in many aspects. Moreover, the third-named author is very grateful to the Max-Planck-Institut für Mathematik in Bonn for the hospitality and support during his research stay at the institute. Finally, the authors thank the referees for many comments which improved the paper.

[AET] S. Akiyama, S. Egami and Y. Tanigawa, Analytic continuation of multiple zeta functions and their values at non-positive integers, Acta Arith., 98 (2001), no. 2, 107–116.
[C] K.-W. Chen, Generalized harmonic numbers and Euler sums, Int. J. Number Theory, 13 (2017), 513–528.
[ELW] E. Egge, N. Loehr and G. Warrington, From quasisymmetric expansions to Schur expansions via a modified inverse Kostka matrix, European J. Combin., 31 (2010), no. 8, 2014–2027.
[E] L. Euler, Meditationes circa singulare serierum genus, Novi Comm. Acad. Sci. Petropol., 20 (1775), 140–186; Reprinted in: Opera Omnia, Ser. I, vol. 15, B.G. Teubner, Berlin, 1927, pp. 217–267.
[G] I. M. Gessel, Multipartite P-functions and inner products of skew Schur functions, Combinatorics and algebra, Contemp. Math., 34 (1984), 289–301.
[HLMW] J. Haglund, K. Luoto, S. Mason and S. van Willigenburg, Quasisymmetric Schur functions, J. Combin. Theory Ser. A, 118 (2011), no. 2, 463–490.
[HG] A.M. Hamel and I.P. Goulden, Planar decompositions of tableaux and Schur function determinants, European J. Combin., 16 (1995), no. 5, 461–477.
[H1] M. E. Hoffman, Multiple harmonic series, Pacific J. Math., 152 (1992), no. 2, 275–290.
[H2] M. E. Hoffman, Quasi-symmetric functions and mod p multiple harmonic sums, Kyushu J. Math., 69 (2015), no. 2, 345–366.
[IKOO] K. Ihara, J. Kajikawa, Y. Ohno and J. Okuda, Multiple zeta values vs. multiple zeta-star values, J. Algebra, 332 (2011), 187–208.
[KO] M. Kaneko and Y. Ohno, On a kind of duality of multiple zeta-star values, Int. J. Number Theory, 6 (2010), no. 8, 1927–1932.
[K] C. Kassel, Quantum groups, Graduate Texts in Mathematics, 155, Springer-Verlag, New York, 1995.
[KY] M. Kaneko and S. Yamamoto, A new integral-series identity of multiple zeta values and regularizations, preprint, 2016. arXiv:1605.03117.
[LP] A. Lascoux and P. Pragacz, Ribbon Schur functions, European J. Combin., 9 (1988), no. 6, 561–574.
[L] Z. Li, On a conjecture of Kaneko and Ohno, Pacific J. Math., 257 (2012), no. 2, 419–430.
[Mac] I. G. Macdonald, Schur functions: theme and variations, Séminaire Lotharingien de Combinatoire (Saint-Nabor, 1992), pp. 5–39, Publ. Inst. Rech. Math. Av., 498, Univ. Louis Pasteur, Strasbourg, 1992.
[MR] C. Malvenuto and C. Reutenauer, Plethysm and conjugation of quasi-symmetric functions, Selected papers in honor of Adriano Garsia (Taormina, 1994), Discrete Math., 193 (1998), no. 1-3, 225–233.
[Mat] K. Matsumoto, On the analytic continuation of various multiple-zeta functions, Number Theory for the Millennium (Urbana, 2000), Vol. II, M.A. Bennett et al. (eds.), A. K. Peters, Natick, MA, 2002, pp. 417–440.
[MM] J. W. Milnor and J. C. Moore, On the structure of Hopf algebras, Ann. of Math., 81 (1965), no. 2, 211–264.
[Mu] S. Muneta, On some explicit evaluations of multiple zeta-star values, J. Number Theory, 128 (2008), no. 9, 2538–2548.
[NNSY] J. Nakagawa, M. Noumi, M. Shirakawa and Y. Yamada, Tableau representation for Macdonald's ninth variation of Schur functions, Physics and combinatorics (Nagoya, 2000), pp. 180–195, World Sci. Publ., River Edge, NJ, 2001.
[N1] M. Noumi, Remarks on elliptic Schur functions, talk at the international workshop "Analysis, Geometry and Group Representations for Homogeneous Spaces", November 22–26, 2010, Lorentz Center, Leiden, The Netherlands. (http://www.lorentzcenter.nl/lc/web/2010/423/presentations/Noumi.pdf)
[N2] M. Noumi, Painlevé Equations through Symmetry, Translations of Mathematical Monographs, 223 (2004).
[OZ] Y. Ohno and W. Zudilin, Zeta stars, Commun. Number Theory Phys., 2 (2008), no. 2, 325–347.
[Sa] B. E. Sagan, The symmetric group. Representations, combinatorial algorithms, and symmetric functions. Second edition, Graduate Texts in Mathematics, 203, Springer-Verlag, New York, 2001.
[Sta] R. Stanley, Two remarks on skew tableaux, Electron. J. Combin., 18 (2011), no. 2, Paper 16, 8 pp.
[Ste] J. Stembridge, Nonintersecting paths, Pfaffians, and plane partitions, Adv. Math., 83 (1990), no. 1, 96–131.
[Sw] M. E. Sweedler, Hopf algebras, Mathematics Lecture Note Series, W. A. Benjamin, Inc., New York, 1969.
[Yamam] S. Yamamoto, Multiple zeta-star values and multiple integrals, to appear in RIMS Kôkyûroku Bessatsu, arXiv:1405.6499.
[Yamas] Y. Yamasaki, Evaluations of multiple Dirichlet L-values via symmetric functions, J. Number Theory, 129 (2009), no. 10, 2369–2386.
[Za] D. Zagier, Values of zeta functions and their applications, First European Congress of Mathematics, Vol. II (Paris, 1992), 497–512, Progr. Math., 120, Birkhäuser, Basel, 1994.
[Zi] P. Zinn-Justin, Six-vertex, Loop and Tiling models: Integrability and Combinatorics, arXiv:0901.0665.
[Zl] S. A. Zlobin, Relations for multiple zeta values, Mat. Zametki, 84 (2008), no. 6, 825–837; translation in Math. Notes, 84 (2008), no. 5-6, 771–782.

Maki Nakasuji, Department of Information and Communication Science, Faculty of Science, Sophia University, Tokyo, Japan

Ouamporn Phuksuwan, Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok, Thailand

Yoshinori Yamasaki, Graduate School of Science and Engineering, Ehime University, Ehime, Japan
http://arxiv.org/abs/1704.08511v3
{ "authors": [ "Maki Nakasuji", "Ouamporn Phuksuwan", "Yoshinori Yamasaki" ], "categories": [ "math.NT", "math.CO", "11M41, 05E05" ], "primary_category": "math.NT", "published": "20170427112411", "title": "On Schur multiple zeta functions: A combinatoric generalization of multiple zeta functions" }
IPMU17-0068  CERN-TH-2017-094

Anomaly-free local horizontal symmetry and anomaly-full rare B-decays

Rodrigo Alonso ([email protected]), CERN, Theoretical Physics Department, CH-1211 Geneva 23, Switzerland
Peter Cox ([email protected]), Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, Chiba 277-8583, Japan
Chengcheng Han ([email protected]), Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, Chiba 277-8583, Japan
Tsutomu T. Yanagida ([email protected]), Hamamatsu Professor, Kavli IPMU (WPI), UTIAS, University of Tokyo, Kashiwa, Chiba 277-8583, Japan

The largest global symmetry that can be made local in the Standard Model + 3ν_R while being compatible with Pati-Salam unification is SU(3)_H× U(1)_B-L. The gauge bosons of this theory would induce flavour effects involving both quarks and leptons, and are a potential candidate to explain the recent reports of lepton universality violation in rare B meson decays. In this letter we characterise this type of model and show how it can accommodate the data and naturally be within reach of direct searches.

§ INTRODUCTION

Lepton flavour universality (LFU) violation in rare B meson decays provides a tantalising hint for new physics whose significance has recently increased <cit.>. A consistent picture may be beginning to emerge, with LHCb measurements <cit.> of the theoretically clean ratios <cit.>

ℛ_K^(*) = Γ(B→ K^(*)μ^+μ^-) / Γ(B→ K^(*)e^+e^-),

in a combined tension of order 4σ <cit.> with the Standard Model (SM). Several phenomenologically motivated models have been proposed to explain this discrepancy (see <cit.> for a review), one such possibility being a new U(1) gauge symmetry <cit.>. In this letter, we propose a complete model which gives rise to a type of U(1) symmetry that can accommodate the observed low-energy phenomenology.

The characteristics of the new physics that might be responsible for the observed discrepancy with the Standard Model follow quite simply from the particles involved in the decay: a new interaction that (i) involves both quarks and leptons and (ii) has a non-trivial structure in flavour space. This profile is fit by well-motivated theories that unify quarks and leptons and have a gauged horizontal <cit.> –i.e. flavour– symmetry to address points (i) and (ii) respectively. Let us address first the latter point, that is, horizontal symmetries.
Given the representations of the five SM fermion fields –q_L, u_R, d_R, ℓ_L, e_R– under the non-abelian part (SU(3)_c× SU(2)_L) of the gauge group, for one family of fermions there is only a single abelian charge assignment possible for a gauge symmetry. This is precisely U(1)_Y, hence the Standard Model local symmetry, 𝒢_SM=SU(3)_c× SU(2)_L× U(1)_Y. On the other hand, a global U(1)_B-L has only a gravitational anomaly; promoting B-L to be gravity-anomaly free and a local symmetry can be done in one stroke by introducing right-handed (RH) neutrinos, otherwise welcome to account for neutrino masses <cit.> and baryogenesis through leptogenesis <cit.>. The 'horizontal' direction of flavour has, on the other hand, three replicas of each field, and the largest symmetry in this sector is then SU(3)^6. Anomaly cancellation without introducing any more fermion fields nevertheless restricts the symmetry which can be made local to SU(3)_Q× SU(3)_L. It is worth pausing to underline this result; the largest anomaly-free local symmetry extension that the SM+3ν_R admits is SU(3)_Q× SU(3)_L× U(1)_B-L.

However, now turning to point (i), one realises that the horizontal symmetries above do not connect quarks and leptons in flavour space. Although it is relatively easy to break the two non-abelian groups to the diagonal to satisfy (i), the desired structure can arise automatically from a unified theory; one is then naturally led to a Pati-Salam <cit.> model SU(4)× SU(2)_L× SU(2)_R× SU(3)_H, which also solves the Landau pole problem of U(1)_B-L and U(1)_Y. Explicitly:

𝒢 = SU(4)× SU(2)_L× SU(2)_R× SU(3)_H,

ψ_L = ([ u_L d_L; ν_L e_L ]),   ψ_R = ([ u_R d_R; e_R ν_R ]),

where ψ_L∼(4,2,1) and ψ_R∼(4,1,2) under Pati-Salam, and both are in a fundamental representation of SU(3)_H. The breaking of the Pati-Salam group, however, occurs differently from the usual SU(4)× SU(2)^2→𝒢_SM; instead we require SU(4)× SU(2)^2→𝒢_SM× U(1)_B-L. This can be done by breaking separately SU(4)→ SU(3)_c× U(1)_B-L and SU(2)_R→ U(1)_3, with U(1)_3 being right-handed isospin –we recall here that hypercharge is Q_Y=Q_B-L/2+σ_3^R. Two scalar fields in each sector would be required to trigger this breaking; the detailed discussion of this mechanism is nevertheless beyond the scope of this work and will not impact the low energy effective theory.

§ THE MODEL

Having discussed the Pati-Salam motivation for our horizontal symmetry, we shall now walk the steps down to the low energy effective theory and the connection with the SM. At energies below unification yet far above the SM scale we have the local symmetry:

𝒢 = 𝒢_SM× SU(3)_H× U(1)_B-L.

The breaking SU(3)_H× U(1)_B-L→ U(1)_h occurs as one goes down in scale, with the current of the unbroken symmetry being:

J_μ^h = ψ̅γ_μ( g_H c_θ T^H_CS + g_B-L s_θ Q_B-L )ψ ≡ g_h ψ̅γ_μ T^h_ψ ψ,   T^h_ψ = T^H_CS + t_ω Q_B-L,

where T^H_CS is an element of the Cartan sub-algebra of SU(3), i.e. the largest commuting set of generators (which we can take to be the diagonal ones), ψ is the Dirac fermion ψ_L+ψ_R with the chiral fields given in Eq.
(<ref>), and θ is an angle given by the representation(s) used to break the symmetry. Before proceeding any further, it is useful to give explicitly the basis-invariant relations that the generators of this U(1)_h satisfy:

tr_fl( T^h T^h ) = 1/2 + 3 t_ω² Q_B-L²,   tr_fl( T^h ) = 3 t_ω Q_B-L,

where the trace is only over flavour indices, there is a generator T^h for each fermion species including RH neutrinos, and the sign of the traceless piece of T^h is the same for all fermion representations.

The one condition we impose on the flavour breaking SU(3)_H× U(1)_B-L→ U(1)_h is that the unbroken U(1)_h allows for a Majorana mass term for RH neutrinos, such that they are heavy and can give rise to leptogenesis and small active neutrino masses via the seesaw formula. A high breaking scale is further motivated by the need to suppress FCNC mediated by the SU(3)_H gauge bosons. The desired breaking pattern can be achieved by introducing fundamental SU(3)_H scalar fields, which at the same time generate the Majorana mass term. Let us briefly sketch this: we introduce two scalars[These scalars can each be embedded in a (4,1,2) multiplet under the SU(4)× SU(2)_L× SU(2)_R Pati-Salam group.] ϕ_1, ϕ_2 in (3,-1) of SU(3)_H× U(1)_B-L, so that we can write:

ν̅^c_R λ_ij ϕ_i^* ϕ_j^† ν_R + h.c.

This implies two generations of RH neutrinos have a large Majorana mass (∼10^10 GeV), which is the minimum required for leptogenesis <cit.> and to produce two mass differences for the light neutrinos ν_L –one active neutrino could be massless as allowed by data. The third RH neutrino requires an extra scalar field charged under U(1)_h to get a mass; depending on the charge of the scalar field this might be a non-renormalisable term, making the RH neutrino light and potentially a dark matter candidate.

The second role of these scalar fields is symmetry breaking; in this sense two fundamentals of a U(3) symmetry can at most break it to U(1), and this makes our U(1)_h come out by default. To be more explicit, in all generality one has ⟨ϕ_1⟩=(v_H,0,0), ⟨ϕ_2⟩=v'_H(c_α,s_α,0), and then for s_α≠0 there is just one unbroken U(1) whose gauge boson Z_h is the linear combination that satisfies:

D_μ⟨ϕ_1,2⟩ = ( g_H T A^H_μ − g_B-L A^B-L_μ )⟨ϕ_1,2⟩ = 0.

Given the v.e.v. alignment, the solution involves T_8 in SU(3)_H, and via the rotation A^H,8=c_θ Z_h − s_θ A', A^B-L=s_θ Z_h + c_θ A', where A' is the massive gauge boson, we find that the solution to Eq. (<ref>) is:

t_θ = (1/(2√3)) g_H/g_B-L,   t_ω = t_θ g_B-L/g_H = 1/(2√3),

with g_h=g_H c_θ, in close analogy with SM electroweak symmetry breaking (EWSB). This solution implies, for leptons,

T^h_L = T_8^H − t_ω 1 = (1/(2√3)) diag(0, 0, −3),

whereas for quarks

T^h_Q = T^H_8 + (1/3) t_ω 1 = (1/(2√3)) diag(4/3, 4/3, −5/3).

At this level the current that the U(1)_h couples to is different for quarks (T^h_Q) and leptons (T^h_L) but vectorial for each of them. On the other hand, most previous Z' explanations for the LFU anomalies have considered phenomenologically motivated chiral U(1) symmetries. Of course, the above charge assignment is one of several possibilities that can be obtained from a bottom-up approach[Additional assumptions on the rotation matrices in <cit.> lead to different mass-basis couplings from those we consider.] <cit.>; however, as we have shown, this particular flavour structure is well-motivated by the underlying UV theory.
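As a cross-check of the charge assignment (our own numerical sketch, not taken from the paper), one can verify that the explicit generators written above satisfy the basis-invariant trace relations with t_ω=1/(2√3), using Q_B-L=−1 for leptons and Q_B-L=1/3 for quarks:

import numpy as np

t_w = 1.0 / (2.0 * np.sqrt(3.0))
generators = {
    -1.0:      np.diag([0.0, 0.0, -3.0]) / (2.0 * np.sqrt(3.0)),  # T^h_L, leptons
    1.0 / 3.0: np.diag([4/3, 4/3, -5/3]) / (2.0 * np.sqrt(3.0)),  # T^h_Q, quarks
}
for Q, T in generators.items():
    # tr(T^h T^h) = 1/2 + 3 t_w^2 Q^2  and  tr(T^h) = 3 t_w Q
    assert np.isclose(np.trace(T @ T), 0.5 + 3.0 * t_w**2 * Q**2)
    assert np.isclose(np.trace(T), 3.0 * t_w * Q)
print("trace relations hold for T^h_L and T^h_Q")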
The last step to specify the low energy theory is to rotate to the mass basis of all fermions. In this regard some comments are in order about the explicit generation of masses and mixings in this model. Charged fermion masses would require the introduction of scalar fields charged under both the electroweak and the horizontal group.[Alternatively, the effective Yukawa couplings can be generated by assuming a horizontal-singlet Higgs doublet at the electroweak scale and introducing two pairs of Dirac fermions for each of the six fermion fields, q_L, u_R, d_R, l_L, e_R and ν_R, at the SU(3)_H breaking scale, and one pair of these fermions at the U(1)_h breaking scale. The extra fermions are all SU(3)_H singlets. See <cit.> for a similar mechanism.] At scales above the U(1)_h breaking the fields can be categorised according to their U(1)_h charge[Ultimately, these three Higgs belong to H(2,1/2,8) and H(2,1/2,1) under SU(2)_L× U(1)_Y× SU(3)_H. To realise mass matrices for the quarks and leptons requires three H(2,1/2,8) and one H(2,1/2,1) at the scale of G_SM× SU(3)_H× U(1)_B-L.]; one would need at least a charge 3, a charge -3 and a neutral –in units of g_h/(2√3)– 'Higgs' transforming as (2,1/2) under SU(2)× U(1)_Y; a linear combination of these three much lighter than the rest would emerge as the SM Higgs doublet. An additional SM-singlet scalar is also required to break U(1)_h and should simultaneously generate a Majorana mass for the third RH neutrino. If this scalar has U(1)_h charge 3, such a term is non-renormalisable and, if suppressed by a unification-like scale, yields a keV mass, which is interestingly in a range where this fermion could be dark matter <cit.>. Alternatively, a charge 6 scalar would generate a mass of order a few TeV.

The main focus of this work is, however, the effect of the gauge boson associated with the U(1)_h. In this sense, however generated, the change to the mass basis implies a chiral unitary rotation. This will change the vectorial nature of the current to give a priori eight different generators T^h_f, one for each of the eight chiral fermion species after EWSB: f=u_R, u_L, d_L, d_R, ν_R, ν_L, e_L, e_R. However, before performing the chiral rotations, it is good to recall that the vectorial character of the interaction is encoded in the basis-invariant relations:

tr_fl( T^h_f T^h_f ) = 1/2 + (1/4) Q_B-L²,   tr_fl( T^h_f ) = (√3/2) Q_B-L,

which apply to both chiralities of each fermion field f.

As mentioned before, a priori all fields rotate when going to the mass basis, f=U_f f'; however, we only have input on the mixing matrices that appear in the charged currents: V_CKM=U^†_{u_L} U_{d_L} and U_PMNS=U_{e_L}^† U_{ν_L}, which involve only LH fields. Hence, for simplicity, we assume that RH fields are in their mass bases and need not be rotated. The CKM matrix is close to the identity, whereas the lepton sector possesses nearly maximal angles; following this lead we assume the angles in U_{u_L}, U_{d_L} are small so that there are no large cancellations in U_{u_L}^† U_{d_L}, whereas U_{e_L} and U_{ν_L} have large angles. Phenomenologically, however, not all angles can be large in U_{e_L}, since they would induce potentially fatal μ-e flavour transitions. Hence we restrict U_{e_L} to rotate only in the 2-3 sector, which could therefore contribute the corresponding factor in the PMNS as suggested in <cit.>. In the quark sector we assume for simplicity that all mixing arises from U_{d_L}. To make our assumptions explicit:

U_{e_L} = R^23(−θ_l),  U_{ν_L} = R^23(θ_23−θ_l) R^13(θ_13) R^12(θ_12),  U_{u_L} = 1,  U_{d_L} = V_CKM,

where R^ij(θ_ab) is a rotation matrix in the ij sector with angle θ_ab. Hence,

T^h'_{f_L} = U^†_{f_L} T^h_f U_{f_L},   T^h'_{f_R} = T^h_{f_R},

and the current reads:

J^h_μ = g_h ∑_f ( f̅ γ_μ T'^h_{f_L} f_L + f̅ γ_μ T'^h_{f_R} f_R ).
We have now made all specifications to describe the interactions of Z_h; all in all, only two free parameters, θ_l and g_h, control the couplings to all fermion species. For those processes well below the Z_h mass (∼ TeV), the effects are given at tree level by integrating the Z_h out:

S = ∫ d⁴x { (1/2) Z_h^μ(∂²+M²)Z_{h,μ} − g_h Z_h^μ J^h_μ },

which for on-shell Z_h gives

S = ∫ d⁴x ( −(1/2) g_h² J_h²/M² + 𝒪(∂²/M²) ),

with J^h_μ as given in (<ref>, <ref>, <ref>-<ref>), so that the effective action depends on θ_l and M/g_h.

§ LOW ENERGY PHENOMENOLOGY

The most sensitive probes of Z_h effects come from flavour observables, in particular the FCNC produced in the down sector. An important consequence of the rotation matrices in Eq. (<ref>) is that these FCNC have a minimal flavour violation (MFV) <cit.> structure: d̅^i γ_μ V_ti^* V_tj d_j. Additionally, there can be charged lepton flavour violation (LFV) involving the τ-μ transition. Even after allowing for these constraints, the Z_h could also potentially be accessible at the LHC. Effects on other potentially relevant observables, including the muon g-2, Z-pole measurements at LEP, and neutrino trident production, are sufficiently suppressed in our model. Below we discuss the relevant phenomenology in detail.

§.§ Semi-leptonic B decays

The relevant Lagrangian for semi-leptonic B_s decays is

ℒ_B_s = −(3/4) (g_h²/M²) ( V_tb V_ts^* s̅ γ_μ b_L ) ( J^μ_{l_L} + J^μ_{l_R} + J^μ_{ν_L} ) + h.c.,

where for simplicity we have assumed all three RH neutrinos are not accessible in B decays, and we have

J^ρ_{l_L} = s_θ_l² μ̅γ^ρ μ_L + c_θ_l² τ̅γ^ρ τ_L + s_θ_l c_θ_l μ̅γ^ρ τ_L + h.c.,
J^ρ_{l_R} = τ̅_R γ^ρ τ_R,
J^ρ_{ν_L} = ν̅^i γ^ρ (U_ν_L^*)_{3i} (U_ν_L)_{3j} ν_L^j.

In recent times, a number of measurements of b→sμμ processes have shown discrepancies from their SM predictions, most notably in the theoretically clean LFU-violating ratios R_K and R_K^*. Global fits to LFU-violating data suggest that the observed discrepancies can be explained via a new physics contribution to the Wilson coefficients C_9,10^l, with the preference over the SM around 4σ <cit.>. The effective Hamiltonian is defined as

ℋ_eff = −(4G_F/√2) V_tb V_ts^* ( C_9^l 𝒪_9^l + C_10^l 𝒪_10^l + C_ν 𝒪_ν ),

where

𝒪_9^l = (α/4π)( s̅γ_μ b_L )( l̅γ^μ l ),  𝒪_10^l = (α/4π)( s̅γ_μ b_L )( l̅γ^μ γ^5 l ),  𝒪_ν^ij = (α/2π)( s̅γ_μ b_L )( ν̅^i γ^μ ν_L^j ).

In our model, separating the Wilson coefficients into the SM contribution (C_SM) and the Z_h piece (δC), we have, for muons:

δC_9^μ = −δC_10^μ = −(π/(α√2 G_F)) (3/4) (g_h²/M²) s_θ_l².

In fitting the observed anomalies we use the results of Ref. <cit.>, which for the relevant scenario δC_9^μ=−δC_10^μ gives δC_9^μ∈[−0.81, −0.48] ([−1.00, −0.34]) at 1(2)σ. The fully leptonic decay B_s→μμ provides an additional constraint on δC_10^μ; the current experimental value <cit.> is consistent with the above best-fit region.

There is also a contribution to decays involving neutrinos, B→K^(*)νν̅, where we now have:

δC_ν^ij = δC_ν (U_ν_L^*)_{3i} (U_ν_L)_{3j},   δC_ν = −(π/(α√2 G_F)) (3/4) (g_h²/M²),

so that the ratio to the SM expectation reads:

R_νν̅ ≡ Γ/Γ_SM = 1 + (2/3)(δC_ν/C^ν_SM) + (1/3)(δC_ν/C^ν_SM)²,

where C^ν_SM ≈ −6.35 <cit.>. Notice that this is independent of the mixing in the lepton sector, and the rate is always enhanced. The current experimental bound on this ratio is R_νν̅ < 4.3 at 90% CL <cit.>.
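To get a feel for the numbers, the sketch below evaluates δC_9^μ and R_νν̅ from the expressions above. The benchmark point (g_h=0.85, M=4 TeV, θ_l=0.45π) is our illustrative choice, and α is taken at a typical b-physics scale (an assumption on our part):

import math

G_F = 1.1663787e-5        # Fermi constant, GeV^-2
alpha = 1.0 / 127.9       # electromagnetic coupling (assumed scale)
C_nu_SM = -6.35

def delta_C9_mu(g_h, M_GeV, theta_l):
    pref = -math.pi / (alpha * math.sqrt(2.0) * G_F)
    return pref * 0.75 * g_h**2 / M_GeV**2 * math.sin(theta_l)**2

def R_nunu(g_h, M_GeV):
    dC_nu = -math.pi / (alpha * math.sqrt(2.0) * G_F) * 0.75 * g_h**2 / M_GeV**2
    r = dC_nu / C_nu_SM
    return 1.0 + (2.0 / 3.0) * r + (1.0 / 3.0) * r**2

print(delta_C9_mu(0.85, 4000.0, 0.45 * math.pi))  # ~ -0.80, at the 1-sigma edge
print(R_nunu(0.85, 4000.0))                       # ~ 1.09, well below the bound 4.3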
Depending on the mixing angle in the lepton sector, the SM-background-free LFV decay B→K^(*)τμ can also be significantly enhanced, whereas there is an irreducible contribution to B→K^(*)ττ from the RH currents in Eq. (<ref>); both of these contributions nevertheless lie well below the current experimental bounds <cit.>. Finally, one might also expect similar contributions in b→d and s→d transitions, the latter leading to effects in K decays. However, given our assumptions on the mixing matrices, the MFV structure in the down quark couplings means that these contributions are sufficiently suppressed. In particular, the otherwise stringent bound from K→πνν̅ <cit.> is found to be comparable, yet still sub-dominant, to that from B→Kνν̅.

§.§ B̅–B Mixing

The Z_h gives a tree-level contribution to B̅_s–B_s and B̅_d–B_d mixing, which provide some of the most stringent constraints on the model. The relevant Lagrangian is

ℒ_ΔB=2 = −(3/8) (g_h²/M²) ( V_tb V_ti^* d̅_i γ_μ b_L )².

This leads to a correction to Δm_B given by

C_B ≡ Δm_B/Δm_B^SM = 1 + ( 4π² / ( G_F² m_W² η̂_B S(m_t²/m_W²) ) ) (3/8) (g_h²/M²) c(M),

where S(m_t²/m_W²)≈2.30 is the Inami-Lim function <cit.>, η̂_B≃0.84 accounts for NLO QCD corrections <cit.>, and c(M)≈0.8 includes the running from M down to m_B using the NLO anomalous dimension calculated in Refs. <cit.>. This observable is tightly constrained, yielding 0.899<C_{B_s}<1.252 and 0.81<C_{B_d}<1.28 at 95% CL <cit.>. Once again, the MFV structure of the couplings ensures that effects in K̅–K mixing are well below current bounds. In this case the SM prediction for Δm_K also suffers from theoretical uncertainties.

§.§ Lepton Flavour Violation in τ→μ

There is a contribution to the cLFV decay τ→3μ:

ℒ_LFV = −(3/4) (g_h²/M²) s_θ_l³ c_θ_l ( τ̅γ^ρ μ_L )( μ̅γ_ρ μ_L ),

resulting in a branching ratio

BR(τ→3μ) = ( m_τ⁵ / (1536π³ Γ_τ) ) (g_h⁴/M⁴) (9/8) s_θ_l⁶ c_θ_l².

The current experimental bound is BR(τ→3μ)<2.1×10^{-8} at 90% CL <cit.>. This restricts the allowed values of the mixing angle θ_l.

§.§ Collider Searches

Depending on its mass, the Z_h may be directly produced at the LHC. The large U(1)_h charge in the lepton sector results in a potentially sizeable branching ratio into muons: BR(Z_h→μμ)≃0.08 s_θ_l⁴. The strongest bounds on a spin-1 di-muon resonance are from the ATLAS search at √s=13 TeV with 36 fb^{-1} <cit.>. Furthermore, even for very large masses, M≳6 TeV, non-resonant production will continue to provide bounds; these can become important in the future <cit.>. Di-jet searches also provide a complementary strategy, although the constraints are weaker.

§.§ Perturbativity

The one-loop beta function for U(1)_h is

β(g_h) = (269/36) g_h³/(4π)²,

where we have assumed the U(1)_h breaking scalar has charge 3. The gauge coupling g_h then encounters a Landau pole at the scale

Λ = exp( 288π² / (269 g_h(M)²) ) M.

This scale should at least be larger than the SU(3)_H× U(1)_B-L→ U(1)_h breaking scale. Assuming that the breaking occurs at 10^10 GeV –so that the RH neutrinos obtain a sufficiently large mass for viable leptogenesis– leads to the bound g_h(10 TeV)≲0.9. Also note that, depending on the specific UV mechanism for generating the fermion mass matrices, SU(3)_H may not remain asymptotically free, in which case there can be additional constraints from perturbativity.
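The same benchmark point can be run through the perturbativity and τ→3μ formulas above; the following sketch (ours) uses standard values for the τ mass and width:

import math

def landau_pole(g_h, M_GeV):
    # Lambda = M exp(288 pi^2 / (269 g_h(M)^2)) from the one-loop beta function
    return M_GeV * math.exp(288.0 * math.pi**2 / (269.0 * g_h**2))

def br_tau_3mu(g_h, M_GeV, theta_l):
    m_tau = 1.77686           # GeV
    gamma_tau = 2.267e-12     # GeV, total tau width
    s, c = math.sin(theta_l), math.cos(theta_l)
    return (m_tau**5 / (1536.0 * math.pi**3 * gamma_tau)
            * g_h**4 / M_GeV**4 * (9.0 / 8.0) * s**6 * c**2)

print(landau_pole(0.85, 4000.0))                 # ~ 9e9 GeV, near the 10^10 GeV scale
print(br_tau_3mu(0.85, 4000.0, 0.45 * math.pi))  # ~ 8.6e-9 < 2.1e-8 (90% CL bound)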
§ DISCUSSION

In Fig. <ref> we combine the above constraints and show the region of parameter space which can explain the observed LFU anomalies. It is clear that this scenario is already tightly constrained by the existing measurements, in particular B̅–B mixing and LHC searches. Requiring perturbativity up to the scale of the right-handed neutrinos (≳10^10 GeV) provides an additional upper bound on the gauge coupling, leaving a small region of parameter space consistent with the best-fit value of C_9^μ at 1σ. The 2σ region for C_9^μ –still a significant improvement over the SM– opens up substantially more viable parameter space.

The dependence on the mixing angle in the lepton sector is shown in Fig. <ref>. Consistency with the 2σ best-fit region for the anomalies and the bounds from B̅–B mixing requires θ_l≳π/4. There is also a potentially important additional constraint from τ→3μ. In the M-g_h plane, the situation remains similar to Fig. <ref>; however, the best-fit regions for the anomaly move towards smaller masses as θ_l is reduced.

Let us also comment briefly on the mixing in the quark sector. For simplicity, in Eq. (<ref>) we made the assumption U_{d_L}=V_CKM. Allowing instead for an arbitrary angle, one obtains the upper bound θ_23≲0.08; this is qualitatively similar to the case we have considered (|V_ts|≃0.04). For θ_23 below this value, B̅–B mixing can be alleviated, but the bounds from LHC searches and perturbativity become more severe.

One consequence of the relatively strong experimental constraints is that this model can be readily tested in the relatively near future. Improved precision for Δm_B would either confirm or rule out this model as a potential explanation for the LFU anomalies. On the other hand, improvements in the LHC limit, when combined with the perturbativity bounds, would force one to consider lower SU(3)_H× U(1)_B-L→ U(1)_h breaking scales. In addition, the LFV decay τ→3μ provides an important complementary probe of the mixing angle in the lepton sector. Similarly, the decay B→K^(*)τμ can be significantly enhanced and could be observable in the future. In this sense it is good to note that the vectorial character of the U(1)_h reveals itself in the sum rules

∑_l δC_10^ll = 0,  ∑_l δC_9^ll = 2∑_i δC_ν^ii,  ∑_{ll'} ( |δC_9^ll'|² + |δC_10^ll'|² ) = 4∑_{ij} |δC_ν^ij|²,

which is basically a manifestation of Eq. (<ref>).

Finally, we have focused on the specific case of a G_SM× SU(3)_H× U(1)_B-L symmetry; however, there exist other related scenarios which provide equally interesting possibilities. For example, if one instead assumes G_SM× SU(3)_Q× SU(3)_L× U(1)_B-L, it is possible to obtain T^h_L∼ diag(0,0,-3) and T^h_Q∼ diag(0,0,1). This is nothing other than a U(1)_B-L under which only the third generation is charged. The LHC bounds would be significantly weakened in such a scenario; g_h could then remain perturbative up to the Planck scale. Another possible symmetry is G_SM× SU(3)_Q× SU(3)_L if a bifundamental Higgs (3, 3^*) condenses at low energies, since it mixes two U(1) gauge bosons. A merit of this model is that one can give heavy Majorana masses to all right-handed neutrinos by taking the unbroken U(1)_h as diag(0,1,-1) for leptons <cit.>, and diag(1,1,-2) for quarks. The low energy phenomenology of a U(1) with similar flavour structure was previously considered in <cit.>, the latter based on another non-abelian flavour symmetry <cit.>. We leave the detailed investigation of such related scenarios for future work, but application of our analysis is straightforward.
§ CONCLUSION If confirmed, the violation of lepton flavour universality would constitute clear evidence for new physics.In this letter, we have proposed a complete, self-consistent model in which the observed anomalies are explained by the presence of a new U(1)_h gauge symmetry linking quarks and leptons.We have shown how such a symmetry can naturally arise from the breaking of an SU(3)_H× U(1)_B-L horizontal symmetry.Furthermore, within the SM+3ν_R, this is the largest anomaly-free symmetry extension that is consistent with Pati-Salam unification.The model is readily testable in the near future through direct searches at the LHC, improved measurements of B̅–B mixing and charged LFV decays. Acknowledgements R.A. thanks IPMU for hospitality during the completion of this work. This work is supported by Grants-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan, No. 26104009 (T.T.Y.), No. 16H02176 (T.T.Y.) and No. 17H02878 (T.T.Y.), and by the World Premier International Research Center Initiative (WPI), MEXT, Japan (P.C., C.H. and T.T.Y.).This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 690575.
http://arxiv.org/abs/1704.08158v2
{ "authors": [ "Rodrigo Alonso", "Peter Cox", "Chengcheng Han", "Tsutomu T. Yanagida" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170426151427", "title": "Anomaly-free local horizontal symmetry and anomaly-full rare B-decays" }
Full-Page Text Recognition: Learning Where to Start and When to Stop

Bastien Moysset^{1,4}, Christopher Kermorvant^{2}, Christian Wolf^{3,4}
^1 A2iA SA, Paris, France
^2 Teklia SAS, Paris, France
^3 Université de Lyon, CNRS, France
^4 INSA-Lyon, LIRIS, UMR5205, F-69621

Text line detection and localization is a crucial step for full page document analysis, but still suffers from the heterogeneity of real-life documents. In this paper, we present a new approach for full page text recognition. Localization of the text lines is based on regressions with Fully Convolutional Neural Networks and Multidimensional Long Short-Term Memory as contextual layers. In order to increase the efficiency of this localization method, only the positions of the left sides of the text lines are predicted. The text recognizer is then in charge of predicting the end of the text to recognize. This method has shown good results for full page text recognition on the highly heterogeneous Maurdor dataset.

§ INTRODUCTION

Most applications in document analysis require text recognition at page level, where only the raw image is available and no preliminary hand-made annotation can be used. Traditionally, this problem has mainly been addressed by separating the process into two distinct steps; namely the text line detection task, which is frequently preceded by additional paragraph and word detection steps, and the text recognition task. In this work we propose a method which couples these two steps more tightly by unloading some of the burden of the difficult localization step onto the recognition task. In particular, the localization step detects the starts of the text lines only. The problem of finding where to stop the recognition is solved by the recognizer itself.

§.§ Related work

Numerous algorithms have been proposed for text line localization. Some are used in a bottom-up approach by grouping sub-components like connected components or black pixels into lines. RLSA <cit.> uses morphological opening on black pixels to merge the components that belong to the same text line. Similarly, Shi et al. <cit.> resort to horizontal ellipsoidal steerable filters to blur the image and merge the components of the text line. In <cit.>, gradients are accumulated and filtered. Louloudis et al. <cit.> employ a Hough algorithm on the connected component centers, while Ryu et al. <cit.> cluster parts of the connected components according to heuristic-based successive splits and merges.

Other methods follow a top-down approach and split the pages into smaller parts. The XY-cut algorithm <cit.> looks for vertical and horizontal white spaces to successively split the pages into paragraphs, lines and words. Similarly, projection profile algorithms like Ouwayed et al. <cit.> are aimed at finding the horizontal whiter parts of a paragraph. This technique is extended to non-horizontal texts by methods like Nicolaou et al. <cit.>, which dynamically finds a path between the text lines, or by Tseng et al. <cit.>, which use a Viterbi algorithm to minimize this path.

Techniques like the ones proposed by Mehri et al. <cit.> or Chen et al.
<cit.> classify pixels into text or non-text, but need post-processing techniques to constitute text lines. These techniques usually work well on the homogeneous datasets they have been tuned for, but need heavy engineering to perform well on heterogeneous datasets like the Maurdor dataset <cit.>. For this reason, machine learning has proven to be efficient, in particular deep convolutional networks. Early work from Delakis et al. <cit.> classifies scene text image parts as text and non-text with a Convolutional Neural Network on a sliding window. In <cit.>, paragraph images are split vertically using a recurrent neural network and CTC alignment. More recently, methods inspired by image object detection techniques like MultiBox <cit.>, YOLO <cit.> or Single-Shot Detector (SSD) <cit.> have arisen. Moysset et al. <cit.> proposed a MultiBox-based approach for direct text line bounding box detection. Similarly, Gupta et al. <cit.> and Liao et al. <cit.> use respectively YOLO-based and SSD-based approaches for scene text detection. Moysset et al. <cit.> also propose the separate detection of the bottom-left and top-right corners of line bounding boxes.

The text recognition part is usually done with variations of Hidden Markov Models <cit.> or 2D Long Short-Term Memory (2D-LSTM) <cit.> neural networks. Finally, Bluche et al. <cit.> use a hard attention mechanism to directly perform full page text recognition without prior localization. The iterative algorithm finds the next attention point based on the sequence of seen glimpses, modeled through the hidden state of a recurrent network.

§.§ Method overview

In this work, we address full page text recognition in two steps. First, a neural network detects where to start to recognize a text line, and a second network performs the text recognition and decides when to stop the process. More precisely, the former network detects the left sides of the text lines by predicting the values of the object position coordinates as a regression problem. This detection neural network system is detailed in Part <ref> and the left-side strategy is explained in Part <ref>. The latter network recognizes the text and predicts the end of the text of the line, as described in Part <ref>. The experimental setup is described in Part <ref> and results are shown and analyzed in Part <ref>.

§ OBJECT LOCALIZATION WITH DEEP NETWORKS

§.§ Network description

Along the lines of <cit.>, we employ a neural network as a regressor to predict the positions of objects in images. The network predicts a given number N of object candidates. Each of these object candidates is indexed by a linear index n and defined by K coordinates l_n={l^k_n}, k=1,…,K, corresponding to the position of the object in the document, and a confidence score c_n. As the number of objects in an image is variable, at test time only the objects with a confidence score over a threshold are kept.

In order to cope with the small amount of training data available for document analysis tasks and to detect a large number of objects corresponding to our text lines, we adopted the method described in <cit.>. We do not use a fully connected layer at the end of the network that has as inputs features conveying information about the whole page image and, as outputs, all the object candidates of the page. Instead, our method is fully convolutional, which allows the network to share parameters over the different regressors.
More precisely, we use a 1×1 convolution to predict the objects locally and, consequently, to highly reduce the number of parameters in the network. Layers constituted of Two-Dimensional Long Short-Term Memory cells (2D-LSTM) <cit.> are interleaved between the convolutional layers in order to recover the context information lost by the local nature of the detection. The architecture is similar to the one in <cit.>. It is described in Table <ref> and illustrated in Figure <ref>.

§.§ Training

We used the same training process as the one described in <cit.>. The cost function is a weighted sum between a confidence cost and the Euclidean distance between the two object positions (predicted and ground-truth):

Cost = ∑_{n=0}^{N} ∑_{m=0}^{M} X_nm ( α ‖l_n − t_m‖² − log(c_n) ) − (1 − X_nm) log(1 − c_n),

where the N object candidates have position coordinates l_n and confidence c_n, while the M reference objects have position coordinates t_m. α is a parameter weighting the localisation and confidence costs.

As the output of the network (as well as the ground-truth information) is structured, a matching between the two of them is necessary in order to calculate the loss in equation <ref>. This matching is modelled through the variable X={X_nm}, a binary matrix. In particular, X_nm=1 if network output n has been matched to ground truth object m in the given image. Equation <ref> needs to be minimized under constraints enforcing one-to-one matches, which can be solved efficiently through the Hungarian algorithm <cit.>.

We could not confirm the claims reported in <cit.>, who apply this matching process for object detection in natural images. In particular, no improvement was found when using anchor positions associated to objects which were mined through k-means clustering. On the other hand, we found it useful to employ different weights α for the two different uses of equation <ref>. A higher value for α was used during matching (solving for X) than for backpropagation (learning of network parameters). This favours the use of all outputs during training; details are given in section <ref>.
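A minimal sketch of the matching step is given below (our own illustration, not the authors' code): all X-dependent terms of the cost in equation <ref> are collected into a pairwise cost matrix, and the optimal one-to-one assignment is found with the Hungarian algorithm, here via scipy's linear_sum_assignment:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match(pred_pos, pred_conf, gt_pos, alpha=1000.0):
    # pred_pos: (N, K) predicted coordinates; pred_conf: (N,) confidences
    # gt_pos:   (M, K) ground-truth coordinates, typically M <= N
    dist = ((pred_pos[:, None, :] - gt_pos[None, :, :]) ** 2).sum(-1)
    # X-dependent part of the cost: alpha*||l_n - t_m||^2 - log c_n + log(1 - c_n)
    cost = alpha * dist - np.log(pred_conf / (1.0 - pred_conf))[:, None]
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return list(zip(rows, cols))               # pairs (n, m) with X_nm = 1

preds = np.array([[0.10, 0.20, 0.05], [0.60, 0.70, 0.04], [0.90, 0.10, 0.06]])
confs = np.array([0.9, 0.2, 0.8])
gts = np.array([[0.12, 0.21, 0.05], [0.88, 0.11, 0.06]])
print(match(preds, confs, gts))   # -> [(0, 0), (2, 1)]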
§ LOCALIZATION AND RECOGNITION

§.§ Line detection with left-side triplets

The first step detects the left side of each line through the network described in Section <ref>. The model predicts three position values, i.e. K=3: 2 coordinates for the lower left corner plus the text height. Additionally, a prediction confidence score is output. We also compare this method with two competing strategies: i) point localization <cit.>, K=2, where only the x and y coordinates of the lower left points are detected, and ii) full box localization <cit.>, where K=4 and the x and y coordinates of the bottom-left corners of the text line bounding boxes are predicted together with the width and the height of the text lines. We also found that expanding the text box by a 10 pixel margin improves the full-page text recognition rates.

§.§ End-of-line detection integrated with recognition

Detecting only the left side of the text lines and extending it toward the right part of the image, as illustrated in Figure <ref> b), means that for documents with complex layouts, some text from other text lines can be present in the image to be recognized. For this reason, a 2D-LSTM based recognizer similar to the one described in <cit.> is trained with the Connectionist Temporal Classification <cit.> (CTC) alignment procedure to recognize the text present in, and only in, the designated text line. We found that the results were slightly improved by adding an End-of-line (EOL) label at the end of the text labels. This means that the network will learn, through the CTC training, to align the text labels with the frames corresponding to these image characters, as usual. But it will also learn to predict when the line is over, mark it with the EOL label, and learn not to predict anything else on the right side of the image. The context conveyed by the LSTM recurrent layers is responsible for this learning ability. Two different recognition networks are trained, respectively for French and English. They are trained to recognize both printed and handwritten text simultaneously.
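The EOL trick itself only changes the training targets. Below is a tiny illustration (ours; the character set and helper name are hypothetical) of how the CTC label sequence of each training line would be built:

# CTC targets: the transcript plus a terminal EOL class, so the network
# learns to emit EOL at the line end and nothing on the remaining frames.
charset = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz '-")}
EOL = len(charset)   # extra label class; the CTC blank is handled by the loss

def ctc_target(transcript):
    return [charset[c] for c in transcript.lower() if c in charset] + [EOL]

print(ctc_target("When to Stop"))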
§ EXPERIMENTAL SETUP

§.§ Datasets

We evaluate our method on the Maurdor dataset <cit.>, which is composed of 8773 heterogeneous documents in French, English or Arabic, both printed and handwritten (Train: 6592, Validation: 1110, Evaluation: 1071). Because the annotation is given at paragraph level, we used the technique described in <cit.> to check the quality of line candidates with a constrained text recognition in order to obtain annotation at line level. All these lines are used to train the text recognizers described in Section <ref> and, on the test set, for the recognition experiments in Table <ref>.

For training the text line detection systems, only the 5308 pages for which we are confident enough that all the lines are detected are kept (Train: 3995, Validation: 697, Evaluation: 616). This subset is also used for the precision experiments shown in Tables <ref> and <ref>. Finally, for the end-to-end evaluation results shown in Table <ref>, we kept all the original documents of the test set in only one language, in order to avoid the language identification problem. We obtain 507 pages in French and 265 pages in English.

§.§ Metrics

Three metrics were used for evaluation:

* The F-measure metric is used in Tables <ref> and <ref> in order to measure the precision of detected objects being in the neighbourhood of reference objects. A detected object l is considered as correct if it is the closest hypothesis to the reference object t and if ||l^k - t^k|| < T for all k ∈ [0,K], where K is the number of coordinates per object, set to 2 for Table <ref> (points) and to 3 in Table <ref> (triplets), and T is the size of the acceptance zone, given as a proportion of the page width.

* The Word Error Rate (WER) metric is the word-level Levenshtein distance <cit.> between recognized and reference sequences of text.

* The Bag of Words (BOW) metric is given at page level as an F-measure of words recognized or not in the page. As explained in <cit.>, it is a proper metric to compute recognition rate at page level because it does not need any alignment or ordering of the text lines, which can be ambiguous for unconstrained documents.

§.§ Hyper-parameters

We trained with the RMSProp optimizer <cit.> with an initial learning rate of 10^{-3} and dropout after each convolutional layer. The α parameter is set to α=1000 for matching (solving for X) and to α=100 for gradient computation.

§ EXPERIMENTAL RESULTS

§.§ Precision of the object localizations

Similarly to what is described in <cit.>, we observed some instability in the position of the predicted objects when trying to detect boxes. Our intuition is that precisely detecting objects whose ends lie outside the convolutional receptive field of the outputs is difficult. Characters may have a size of 1 or 2 mm in standard printed pages, corresponding to 0.005 and 0.01 as a proportion of the page width. Interlines may have similar sizes. Therefore, it is important that the position prediction is precise enough in order not to harm the text recognition process.

The method described in <cit.> dealt with this problem by separately detecting the bottom-left and top-right points and subsequently pairing them. We observed that the precision was not harmed by the detection of triplets of coordinates (left, top, bottom). In Table <ref>, we show the F-measure of the detection of left-bottom points for several acceptance zone sizes. The results emphasize that detecting full text boxes reduces precision. Meanwhile, the precision of bottom-left point prediction is equivalent when the network is trained to detect triplets and not points. Table <ref> shows the same experiment with a 3D acceptance zone defined on the triplet positions, showing the same improved results for the triplet detection for small acceptance zones.

§.§ Detection of the line end with the text recognizer

In Table <ref> we compare two text line recognizers trained respectively on the reference text line images and on text line images defined only by the left-side coordinates of the text line and extended toward the right end of the page. These two recognizers are evaluated in both cases with the WER metric. While the network trained on reference boxes is obviously not working well on extended test images, we see that the network trained on extended lines works on both tasks nearly as well as the network trained on reference boxes. This confirms that we can rely on the text recognizer to ignore the part of the line that does not belong to the text line.

§.§ Full page text recognition

Finally, we compared our method with baselines and concurrent approaches for full page recognition. The evaluation was carried out using the BOW metric and is shown in Table <ref>. We show that the proposed methods yield good results on both the French and English subsets, consistently outperforming the document analysis baselines based on image processing, the object localisation baseline, and the concurrent box detection and paired point detection systems. Some illustrations of the left-side triplet detection, alongside the final full-page text recognition, are given in Figure 3 and emphasize the ability of the system to give good results on various types of documents.

§ CONCLUSION

We described a full page recognition system for heterogeneous unconstrained documents that is able to detect and recognize text in different languages. The use of a neural network localisation process helps achieve robustness to intra-dataset variations. In order to simplify the process and to gain in precision, we focus on predicting the starting point (left) of the text line bounding boxes and leave the prediction of the end point (right) to a 2D-LSTM based text recognizer. We report excellent results on the Maurdor dataset and show that our method outperforms both image-based and concurrent learning-based methods.
http://arxiv.org/abs/1704.08628v1
{ "authors": [ "Bastien Moysset", "Christopher Kermorvant", "Christian Wolf" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170427155037", "title": "Full-Page Text Recognition: Learning Where to Start and When to Stop" }
Maximal violation of Bell inequalities under local filtering

Ming Li^†, Huihui Qin^‡, Jing Wang^†, Shao-Ming Fei^{§,♯} and Chang-Pu Sun^♭

^† College of the Science, China University of Petroleum, Qingdao 266580, P. R. China
^‡ Department of Mathematics, School of Science, South China University of Technology, Guangzhou 510640, P. R. China
^§ Max-Planck-Institute for Mathematics in the Sciences, Leipzig 04103, Germany
^♯ School of Mathematical Sciences, Capital Normal University, Beijing 100048, P. R. China
^♭ Beijing Computational Science Research Center, Beijing 100048, P. R. China

^∗ Correspondence to [email protected]

We investigate the behavior of the maximal violations of the CHSH inequality and Vértesi's inequality under local filtering operations. An analytical method has been presented for general two-qubit systems to compute the maximal violation of the CHSH inequality and the lower bound of the maximal violation of Vértesi's inequality over the local filtering operations. We show by examples that there exist quantum states whose non-locality can be revealed after local filtering operation by Vértesi's inequality instead of the CHSH inequality.

Quantum mechanics is inherently nonlocal. After performing local measurements on a composite quantum system, non-locality, which is incompatible with local hidden variable theory <cit.>, can be revealed by Bell inequalities. The non-locality is of great importance both in understanding the conceptual foundations of quantum theory and in investigating quantum entanglement. It is also closely related to certain tasks in quantum information processing, such as building quantum protocols to decrease communication complexity <cit.> and providing secure quantum communication <cit.>. We refer to <cit.> for more details.

To determine whether a quantum state has non-locality, it is sufficient to construct a Bell inequality <cit.> which can be violated by the quantum state. For two-qubit systems, Clauser, Horne, Shimony and Holt have presented the famous CHSH inequality <cit.>. Let ℬ_CHSH denote the Bell operator for the CHSH inequality,

ℬ_CHSH = A_1⊗B_1 + A_1⊗B_2 + A_2⊗B_1 − A_2⊗B_2,

with A_i and B_j being observables of the form A_i=∑_{k=1}^{3} a_{ik}σ_k and B_j=∑_{l=1}^{3} b_{jl}σ_l respectively, i,j=1,2, where

σ_1 = ([ -1 0; 0 1 ]),   σ_2 = ([ 0 1; 1 0 ]),   σ_3 = ([ 0 i; -i 0 ])

are the Pauli matrices.
For any two-qubit quantum state ρ, the maximal violation of the CHSH inequality (MVCI) is given by <cit.>

max_{ℬ_CHSH} |⟨ℬ_CHSH⟩_ρ| = 2√(τ_1+τ_2),

where τ_1 and τ_2 are the two largest eigenvalues of the matrix T^†T, T being the matrix with entries T_αβ = tr[ρ σ_α⊗σ_β], α,β=1,2,3. For a state admitting a local hidden variable (LHV) model, one has max_{ℬ_CHSH} |⟨ℬ_CHSH⟩_LHV| ≤ 2.

Another effective Bell inequality for two-qubit systems is given by the Bell operator of Vértesi <cit.>,

ℬ_𝒱 = (1/n²) [ ∑_{i,j=1}^{n} A_i⊗B_j + ∑_{1≤i<j≤n} C_ij⊗(B_i−B_j) + ∑_{1≤i<j≤n} (A_i−A_j)⊗D_ij ],

where A_i, B_j, C_ij and D_ij are observables of the form ∑_{α=1}^{3} x_α σ_α with x⃗=(x_1,x_2,x_3) unit vectors. The maximal violation of Vértesi's inequality (MVVI) is lower bounded by the following inequality <cit.>. For an arbitrary two-qubit quantum state ρ, we have

max_{ℬ_𝒱} |⟨ℬ_𝒱⟩_ρ| ≥ max_{a,b,c,d} [ (1/(s_ab s_cd)) |∫_{Ω_a^b×Ω_c^d} ⟨x⃗, T y⃗⟩ dμ(x⃗)dμ(y⃗)| + (1/(2s²_cd)) ∫_{Ω_c^d×Ω_c^d} |T(x⃗−y⃗)| dμ(x⃗)dμ(y⃗) + (1/(2s²_ab)) ∫_{Ω_a^b×Ω_a^b} |T^†(x⃗−y⃗)| dμ(x⃗)dμ(y⃗) ],

where s_αβ = ∫_{Ω_α^β} dμ(x⃗). The maximum on the right side of the inequality goes over all the integral areas Ω_a^b×Ω_c^d with 0≤a<b≤π/2 and 0≤c<d≤π/2. Here the maximal value max_{ℬ_𝒱} |⟨ℬ_𝒱⟩_ρ| of a state ρ admitting an LHV model is upper bounded by 1.

The maximal violation of a Bell inequality above is derived by optimizing the observables for a given quantum state. With the formulas (<ref>) and (<ref>) one can directly check if a two-qubit quantum state violates the CHSH or the Vértesi inequality. It has been shown that the maximal violation of a Bell inequality is closely related to the fidelity of quantum teleportation <cit.> and the device-independent security of quantum cryptography <cit.>.

The maximal violation of a Bell inequality can be enhanced by local filtering operations <cit.>. In <cit.>, the authors present a class of two-qubit entangled states admitting local hidden variable models, and show that the states after local filtering violate a Bell inequality. Hence, there exist entangled states the non-locality of which can be revealed by using a sequence of measurements.

In this manuscript, we investigate the behavior of the maximal violations of the CHSH inequality and Vértesi's inequality under local filtering operations. An analytical method is presented for any two-qubit system to compute the maximal violation of the CHSH inequality and the lower bound of the maximal violation of Vértesi's inequality under local filtering operations. The corresponding optimal local filtering operation is derived. We show by examples that there exist quantum states whose nonlocality can be revealed after local filtering operation by Vértesi's inequality instead of the CHSH inequality.
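For orientation, the MVCI formula above is easy to evaluate numerically. The sketch below (ours, not from the paper) builds the correlation matrix T and returns 2√(τ_1+τ_2); it uses the standard Pauli basis, which for this check is equivalent to the ordering used above since the singular values of T are unchanged under the corresponding signed permutation:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def mvci(rho):
    T = np.array([[np.trace(rho @ np.kron(a, b)).real for b in paulis] for a in paulis])
    tau = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2.0 * np.sqrt(tau[-1] + tau[-2])

psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # singlet state
p = 0.9
rho_w = p * np.outer(psi, psi.conj()) + (1 - p) * np.eye(4) / 4
print(mvci(rho_w), 2 * np.sqrt(2) * p)   # Werner state: both ~ 2.546 > 2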
Returning to the analytic treatment, define δ_k = Σ_A σ_k Σ_A and η_l = Σ_B σ_l Σ_B, and let X be the matrix with entries

x_{kl} = tr[ϱ δ_k⊗η_l],    k,l = 1,2,3,

where ϱ is locally unitarily equivalent to ρ. We then have the following theorem.

Theorem 1: The maximal quantum bound of a two-qubit quantum state ρ' = (1/N)(F_A⊗F_B)ρ(F_A⊗F_B)^† is given by

max_{ℬ_CHSH}|⟨ℬ_CHSH⟩_ρ'| = max_ϱ 2√(τ'_1+τ'_2),

where τ'_1 and τ'_2 are the two largest eigenvalues of the matrix X^†X/N^2 with X given by (<ref>). The maximum on the left is taken over all ℬ_CHSH operators, while the maximum on the right is taken over all ϱ that are locally unitarily equivalent to ρ. See Methods for the proof of Theorem 1.

Now we investigate the behavior of Vértesi's inequality under local filtering operations. In <cit.> we found an effective lower bound for the MVVI by considering infinitely many measurement settings, n→∞. The discrete summation in (<ref>) then becomes an integral in spherical coordinates over the sphere S^2 ⊂ R^3. We denote the spherical coordinates on S^2 by (ϕ_1,ϕ_2); a unit vector x⃗ = (x_1,x_2,x_3) is parameterized by x_1 = sinϕ_1 sinϕ_2, x_2 = sinϕ_1 cosϕ_2, x_3 = cosϕ_1. For any 0 ≤ a ≤ b ≤ π/2, we denote Ω_a^b = {x ∈ S^2 : a ≤ ϕ_1(x) ≤ b}.

Theorem 2: For the two-qubit quantum state ρ' given by (<ref>), we have

max_{ℬ_V}|⟨ℬ_V⟩_ρ'| ≥ max_{a,b,c,d} (1/N) [ (1/(s_ab s_cd)) |∫_{Ω_a^b×Ω_c^d} ⟨x⃗, X y⃗⟩ dμ(x⃗)dμ(y⃗)| + (1/(2s_cd^2)) ∫_{Ω_c^d×Ω_c^d} |X(x⃗−y⃗)| dμ(x⃗)dμ(y⃗) + (1/(2s_ab^2)) ∫_{Ω_a^b×Ω_a^b} |X^t(x⃗−y⃗)| dμ(x⃗)dμ(y⃗) ],

where X is defined by (<ref>), X^t stands for the transpose of X, and s_αβ = ∫_{Ω_α^β} dμ(x⃗). The maximization on the right-hand side runs over all integration domains Ω_a^b×Ω_c^d with 0 ≤ a < b ≤ π/2 and 0 ≤ c < d ≤ π/2. See Methods for the proof of Theorem 2.

Remark: The right-hand sides of (<ref>) and (<ref>) depend only on the state ϱ, which is locally unitarily equivalent to ρ. Thus, to compare the maximal violation for ρ with that for ρ', it is sufficient to consider the difference between ϱ and ρ'.

Without loss of generality, we set

Σ_A = ( [ x 0; 0 1 ] ),   Σ_B = ( [ y 0; 0 1 ] )

with x, y ≥ 0. According to the definition of δ_k and η_l in (<ref>), one computes

δ_1 = ( [ -x^2 0; 0 1 ] ),  δ_2 = ( [ 0 x; x 0 ] ),  δ_3 = ( [ 0 ix; -ix 0 ] );
η_1 = ( [ -y^2 0; 0 1 ] ),  η_2 = ( [ 0 y; y 0 ] ),  η_3 = ( [ 0 iy; -iy 0 ] ).

Let σ_0 = ( [ 1 0; 0 1 ] ). Set δ⃗ = (δ_1,δ_2,δ_3), η⃗ = (η_1,η_2,η_3), and σ⃗ = (σ_0,σ_1,σ_2,σ_3). We have δ⃗ = Cσ⃗ and η⃗ = Dσ⃗, where

C = ( [ (1−x^2)/2 (1+x^2)/2 0 0; 0 0 x 0; 0 0 0 x ] )

and

D = ( [ (1−y^2)/2 (1+y^2)/2 0 0; 0 0 y 0; 0 0 0 y ] ),

respectively. Then one has X = CWD^†, where W is the 4×4 matrix with entries w_αβ = tr[ϱ σ_α⊗σ_β]. Let Õ_A = ( [ 1 0; 0 O_A ] ) and Õ_B = ( [ 1 0; 0 O_B ] ), where O_A and O_B are 3×3 orthogonal operators. Let r⃗ and s⃗ be three-dimensional vectors with entries r_i = tr[ρ σ_0⊗σ_i] and s_j = tr[ρ σ_j⊗σ_0], respectively, and let

T̃ = ( [ 1 r⃗; s⃗ T ] ).

One can further show that X = CWD^† = C Õ_A T̃ Õ_B^† D^†, and N = x_+y_+ + 4x_−y_+ (O_A s⃗)_1 + 4x_+y_− (O_B r⃗)_1 + 4x_−y_− (O_A T O_B^t)_{11}, where x_± = (1 ± x^2)/2 and y_± = (1 ± y^2)/2. Numerically, one can parameterize O_A and O_B and then search for the maximization in Theorem 1; for the lower bound in Theorem 2, we refer to <cit.>.
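As a concrete stand-in for that numerical search, the sketch below reuses the mvci() and local_filter() helpers from the earlier sketch and simply scans the diagonal filter strengths x and y on a grid. It does not optimize over the local unitaries O_A and O_B, so it only lower-bounds the filtered maximum; the grid itself is an illustrative assumption.

import numpy as np
from itertools import product

# Illustration only: scan diagonal filters diag(x, 1) and diag(y, 1) and keep
# the largest filtered MVCI; reuses mvci() and local_filter() defined above.
def best_filtered_mvci(rho, grid=np.linspace(0.05, 1.0, 40)):
    best = 0.0
    for x, y in product(grid, grid):
        FA = np.diag([x, 1.0]).astype(complex)
        FB = np.diag([y, 1.0]).astype(complex)
        best = max(best, mvci(local_filter(rho, FA, FB)))
    return best

For a state already in its filter normal form, such as the Werner state, the scan should simply return the unfiltered value within grid resolution.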
Corollary: For the two-qubit Werner state <cit.> ρ_w = p|ψ^-⟩⟨ψ^-| + (1−p) I/4, with |ψ^-⟩ = (|01⟩ − |10⟩)/√2, one computes T = ( [ -p 0 0; 0 -p 0; 0 0 -p ] ). Then, using the symmetric property of the state, (<ref>) and (<ref>), together with Theorem 1, we have max_{ℬ_CHSH}|⟨ℬ_CHSH⟩_ρ'| = 2√(τ'_1+τ'_2), where τ'_1 and τ'_2 are the two largest eigenvalues of the matrix X^†X/N^2 with X given by x_{kl} = tr[ρ_w δ_k⊗η_l], k,l = 1,2,3.

Applications

In the following we discuss applications of local filtering. First we show that a state which violates neither the CHSH nor Vértesi's inequality can violate these inequalities after local filtering. Consider the following two-qubit density matrix:

ϱ_1 = (1/4)(I⊗I + rσ_1⊗I − p∑_{i=1}^3 σ_i⊗σ_i),

where −0.3104 ≤ p ≤ 0.7 ensures the positivity of ϱ_1. By the positive partial transposition criterion, ϱ_1 is separable for −0.3104 ≤ p ≤ 0.3104.

Case 1: Set r = 0.3. It is straightforward to verify that both the CHSH and Vértesi inequalities fail to detect non-locality over the whole region −0.3104 ≤ p ≤ 0.7. After filtering, non-locality is detected for 0.6291 ≤ p ≤ 0.7 (by Theorem 2) and 0.6164 ≤ p ≤ 0.7 (by Theorem 1), respectively; see Fig. 1.

Case 2: Set p = 0.7050 and r = 0.0400. The MVCI of ϱ_1 is 1.994 without local filtering and 1.9988 after local filtering, so the CHSH inequality is satisfied both before and after local filtering. The lower bound (<ref>) for ϱ_1 is computed to be less than one, implying that non-locality cannot be detected by the lower bound for the MVVI derived in <cit.> without local filtering. However, taking x = y = 1.1, a = c = 0.1671, b = d = 1.1096, Theorem 2 gives the value 1.0005, which is larger than one. Therefore, after local filtering the state's non-locality is detected.

Next we give an example of a state admitting a local hidden variable (LHV) model that violates a Bell inequality under local filtering. Consider two-qubit quantum states with density matrices of the form

ϱ_2 = (1/4)(I⊗I + pσ_1⊗I + p∑_{i=1}^3 σ_i⊗σ_i).

Positivity of the density matrix requires −0.5 ≤ p ≤ 0.3090. By the positive partial transposition criterion <cit.>, ϱ_2 is entangled for −0.5 ≤ p ≤ −0.3090. The state satisfies the CHSH inequality over the whole parameter region.

We first show that the state ϱ_2 admits an LHV model for −0.5 ≤ p ≤ −0.3090. We rewrite ϱ_2 as a convex combination of the singlet and separable states,

ϱ_2 = q|ψ_-⟩⟨ψ_-| + (1−q)[(1/2)(I − (q/(1−q))σ_1) ⊗ I/2],

where |ψ_-⟩⟨ψ_-| = (1/4)(I⊗I − ∑_{i=1}^3 σ_i⊗σ_i) and q = −p.

According to <cit.>, with a visibility of q = 1/2, the correlations of measurement outcomes produced by measuring the observables A = a⃗·σ⃗ and B = b⃗·σ⃗ on the singlet state can be simulated by an LHV model in which the hidden variable λ_s ∈ S^2 follows the biased probability density ρ(λ_s|a⃗) = |a⃗·λ_s|/(2π). With probability 0 < q ≤ 1/2, Alice and Bob share the biased-distributed variable resource and output a = −sgn(a⃗·λ_s) and b = sgn(b⃗·λ_s), respectively. With probability 1−q, Alice outputs a = ±1 with probability p(a|a⃗) = tr[(1/2)(I − (q/(1−q))σ_z)(I + a a⃗·σ⃗)/2], and Bob outputs ±1 with probability p(b|b⃗) = 1/2. Then the correlations produced by measuring the observables A and B on ϱ_2,

p(a,b|a⃗,b⃗,ϱ_2) = tr[ ((I + a a⃗·σ⃗)/2) ⊗ ((I + b b⃗·σ⃗)/2) ϱ_2 ] = (1 − q a b a⃗·b⃗ − q a a_3)/4,

can be reproduced by the following LHV model:

p(a,b|a⃗,b⃗,ϱ_2) = q∫_{S^2} p(a|a⃗,λ_s) p(b|b⃗·λ_s) ρ(λ_s) dλ_s + (1−q) p(a|a⃗) p(b|b⃗) = q∫_{Ω_{a,b}} (|a⃗·λ_s|/(2π)) dλ_s + (1−q) p(a|a⃗) p(b|b⃗),

where Ω_{a,b} = {λ_s | −sgn(a⃗·λ_s) = a} ∩ {λ_s | b = sgn(b⃗·λ_s)}.
Explicitly,

p(1,1|a⃗,b⃗,λ_s) = q∫_{Ω_{1,1}} (|a⃗·λ_s|/(2π)) dλ_s + ((1−q)/2) tr[(1/2)(I − (q/(1−q))σ_z)(I + a⃗·σ⃗)/2],
p(1,−1|a⃗,b⃗,λ_s) = q∫_{Ω_{1,−1}} (|a⃗·λ_s|/(2π)) dλ_s + ((1−q)/2) tr[(1/2)(I − (q/(1−q))σ_z)(I + a⃗·σ⃗)/2],
p(−1,1|a⃗,b⃗,λ_s) = q∫_{Ω_{−1,1}} (|a⃗·λ_s|/(2π)) dλ_s + ((1−q)/2) tr[(1/2)(I − (q/(1−q))σ_z)(I − a⃗·σ⃗)/2],
p(−1,−1|a⃗,b⃗,λ_s) = q∫_{Ω_{−1,−1}} (|a⃗·λ_s|/(2π)) dλ_s + ((1−q)/2) tr[(1/2)(I − (q/(1−q))σ_z)(I − a⃗·σ⃗)/2],

where Ω_{1,1} = {λ_s | a⃗·λ_s < 0} ∩ {λ_s | b⃗·λ_s ≥ 0}, Ω_{1,−1} = {λ_s | a⃗·λ_s < 0} ∩ {λ_s | b⃗·λ_s < 0}, Ω_{−1,1} = {λ_s | a⃗·λ_s ≥ 0} ∩ {λ_s | b⃗·λ_s ≥ 0}, and Ω_{−1,−1} = {λ_s | a⃗·λ_s ≥ 0} ∩ {λ_s | b⃗·λ_s < 0}.

Therefore the state ϱ_2 admits an LHV model for −0.5 ≤ p ≤ −0.3090. However, after local filtering, non-locality (violation of the CHSH inequality) is detected for −0.5 ≤ p ≤ −0.4859; see Fig. 2.
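The LHV construction above is procedural enough to check by simulation. The following rough Monte Carlo sketch is written under the assumption of the standard Pauli labeling, so that the marginal polarization of ϱ_2 lies along z; the rejection sampler, round count, and test directions are illustrative choices, not part of the original work.

import numpy as np

rng = np.random.default_rng(0)

def sample_biased_lambda(a, rng):
    """Draw lambda on S^2 with density |a.lambda|/(2*pi) by rejection sampling."""
    while True:
        lam = rng.normal(size=3)
        lam /= np.linalg.norm(lam)
        if rng.random() < abs(a @ lam):    # acceptance probability |a.lambda| <= 1
            return lam

def lhv_round(a, b, q, rng):
    """One round of the LHV protocol described above; outputs are in {-1,+1}."""
    if rng.random() < q:                             # shared biased resource
        lam = sample_biased_lambda(a, rng)
        return -np.sign(a @ lam), np.sign(b @ lam)
    pa_plus = 0.5 * (1.0 - q / (1.0 - q) * a[2])     # Alice's local response
    alice = 1 if rng.random() < pa_plus else -1
    return alice, 1 if rng.random() < 0.5 else -1    # Bob outputs uniformly

# Compare the simulated correlator with the quantum value -q a.b for one pair
q = 0.5
a = np.array([0.0, 0.0, 1.0]); b = np.array([np.sin(1.0), 0.0, np.cos(1.0)])
E = np.mean([np.prod(lhv_round(a, b, q, rng)) for _ in range(200_000)])
print(E, -q * a @ b)   # should agree within sampling error

The estimated correlator converges to the quantum value −q a⃗·b⃗ within sampling error, illustrating that the non-locality hidden in ϱ_2 is invisible at the level of these unfiltered correlations.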
Remark: In <cit.> the Horodeckis presented the connection between the maximal violation of the CHSH inequality and the optimal quantum teleportation fidelity, ℱ_max ≥ (1/2)(1 + (1/12) max_{ℬ_CHSH}|⟨ℬ_CHSH⟩_ρ|), which means that any two-qubit quantum state violating the CHSH inequality is useful for teleportation, and vice versa. Acín et al. derived the relation between the maximal violation of the CHSH inequality and the Holevo quantity between Eve and Bob in device-independent quantum key distribution (QKD) <cit.>: χ(B_1:E) ≤ h((1 + √((max_{ℬ_CHSH}|⟨ℬ_CHSH⟩_ρ|/2)^2 − 1))/2), where h is the binary entropy. From our theorem, max_{ℬ_CHSH}|⟨ℬ_CHSH⟩_ρ| can be raised by a proper local filtering operation from below 2 to above 2, which turns a state useless for teleportation into a useful one, or improves the achievable teleportation fidelity. The proper (optimal) local filtering operation can be selected by the optimization process in (<ref>) together with the double-cover relationship between SU(2) and SO(3). For the QKD application, Eve can enhance the upper bound of the Holevo quantity by local filtering operations, which opens a possibility of attacking the protocol.

Discussions

It is a fundamental problem in quantum theory to recognize and explore the non-locality of a quantum system. Bell inequalities and their maximal violations supply a powerful means to detect and quantify non-locality. Furthermore, the construction and computation of the maximal violation of a Bell inequality are closely related to quantum games, minimal Hilbert-space dimension and dimension witnesses, and quantum communication tasks such as communication complexity, quantum cryptography, and device-independent quantum key distribution <cit.>. A proper local filtering operation can generate and enhance non-locality. We have investigated the behavior of the maximal violations of the CHSH inequality and Vértesi's inequality under local filtering. We have presented an analytical method for any two-qubit system to compute the maximal violation of the CHSH inequality and the lower bound of the maximal violation of Vértesi's inequality under local filtering. We have shown by examples that there exist quantum states whose nonlocality can be revealed under local filtering operations by Vértesi's inequality but not by the CHSH inequality.

Methods

Proof of Theorem 1 and Theorem 2. The normalization factor N has the form

N = tr[UΣ_A^2U^†⊗VΣ_B^2V^† ρ] = tr[Σ_A^2⊗Σ_B^2 (U^†⊗V^†)ρ(U⊗V)] = tr[Σ_A^2⊗Σ_B^2 ϱ],

where ϱ = (U^†⊗V^†)ρ(U⊗V). Since ρ and ϱ are locally unitarily equivalent, they must have the same maximal violation of the CHSH inequality. We have

t'_{ij} = tr[ρ' σ_i⊗σ_j]
= (1/N) tr[(F_A⊗F_B)ρ(F_A⊗F_B)^† σ_i⊗σ_j]
= (1/N) tr[ρ UΣ_A U^†σ_i UΣ_A U^† ⊗ VΣ_B V^†σ_j VΣ_B V^†]
= (1/N) ∑_{kl} tr[(U^†⊗V^†)ρ(U⊗V) Σ_A O^A_{ik}σ_k Σ_A ⊗ Σ_B O^B_{jl}σ_l Σ_B]
= (1/N) ∑_{kl} O^A_{ik} O^B_{jl} tr[ϱ δ_k⊗η_l]
= (1/N) ∑_{kl} O^A_{ik} x_{kl} O^B_{jl}
= (1/N)(O_A X O_B^T)_{ij}.

In deriving the fourth equality in (<ref>) we have used the double-cover relation between the special unitary group SU(2) and the special orthogonal group SO(3): for any given unitary operator U, Uσ_iU^† = ∑_{j=1}^3 O_{ij}σ_j, where the matrix O with entries O_{ij} belongs to SO(3) <cit.>.

Finally, one has T' = (1/N) O_A X O_B^†, and (T')^†T' = (1/N^2) O_B X^†O_A^†O_A X O_B^† = (1/N^2) O_B X^†X O_B^†. Since O_B is orthogonal, the eigenvalues of (T')^†T' and of X^†X/N^2 coincide, which proves Theorem 1. Theorem 2 follows by substituting (<ref>) into (<ref>).

References

bell Bell, J. S., On the Einstein Podolsky Rosen Paradox, Physics 1, 195-200 (1964).
dcc Brukner, Č., Żukowski, M. & Zeilinger, A., Quantum Communication Complexity Protocol with Two Entangled Qutrits, Phys. Rev. Lett. 89, 197901 (2002).
dcc1 Buhrman, H., Cleve, R., Massar, S. & de Wolf, R., Nonlocality and communication complexity, Rev. Mod. Phys. 82, 665 (2010).
scc1 Scarani, V. & Gisin, N., Quantum Communication between N Partners and Bell's Inequalities, Phys. Rev. Lett. 87, 117901 (2001).
scc2 Ekert, A. K., Quantum cryptography based on Bell's theorem, Phys. Rev. Lett. 67, 661 (1991); Barrett, J., Hardy, L. & Kent, A., No Signaling and Quantum Key Distribution, Phys. Rev. Lett. 95, 010503 (2005).
brunner Brunner, N., Cavalcanti, D., Pironio, S., Scarani, V. & Wehner, S., Bell nonlocality, Rev. Mod. Phys. 86, 419 (2014).
chsh Clauser, J. F., Horne, M. A., Shimony, A. & Holt, R. A., Proposed Experiment to Test Local Hidden-Variable Theories, Phys. Rev. Lett. 23, 880 (1969).
vbell1 Gisin, N., Bell's inequality holds for all non-product states, Phys. Lett. A 154, 201-202 (1991).
vbell2 Gisin, N. & Peres, A., Maximal violation of Bell's inequality for arbitrarily large spin, Phys. Lett. A 162, 15-17 (1992).
vbell3 Popescu, S. & Rohrlich, D., Generic quantum nonlocality, Phys. Lett. A 166, 293-297 (1992).
chenjingling Chen, J. L., Wu, C. F., Kwek, L. C. & Oh, C. H., Gisin's Theorem for Three Qubits, Phys. Rev. Lett. 93, 140407 (2004).
liprl Li, M. & Fei, S. M., Gisin's Theorem for Arbitrary Dimensional Multipartite States, Phys. Rev. Lett. 104, 240502 (2010).
yu Yu, S. X., Chen, Q., Zhang, C. J., Lai, C. H. & Oh, C. H., All entangled pure states violate a single Bell's inequality, Phys. Rev. Lett. 109, 120402 (2012).
ho340 Horodecki, R., Horodecki, P. & Horodecki, M., Violating Bell inequality by mixed spin-1/2 states: necessary and sufficient condition, Phys. Lett. A 200, 340 (1995).
vertesi Vértesi, T., More efficient Bell inequalities for Werner states, Phys. Rev. A 78, 032112 (2008).
Julien Degorre, J., Laplante, S. & Roland, J., Simulating quantum correlations as a distributed sampling problem, Phys. Rev. A 72, 062314 (2005).
pla222 Horodecki, R., Horodecki, M. & Horodecki, P., Teleportation, Bell's inequalities and inseparability, Phys. Lett. A 222, 21 (1996).
prl230501 Acín, A., Brunner, N., Gisin, N., Massar, S., Pironio, S. & Scarani, V., Device-Independent Security of Quantum Cryptography against Collective Attacks, Phys. Rev. Lett. 98, 230501 (2007).
verstraete Verstraete, F., Dehaene, J. & De Moor, B., Normal forms and entanglement measures for multipartite quantum states, Phys. Rev. A 68, 012103 (2003).
srep Li, M., Zhang, T. G., Hua, B., Fei, S. M. & Li-Jost, X. Q., Quantum Nonlocality of Arbitrary Dimensional Bipartite States, Scientific Reports 5, 13358 (2015).
prl170401 Verstraete, F. & Wolf, M. M., Entanglement versus Bell Violations and Their Behavior under Local Filtering Operations, Phys. Rev. Lett. 89, 170401 (2002).
prl160402 Hirsch, F., Quintino, M. T., Bowles, J. & Brunner, N., Genuine Hidden Quantum Nonlocality, Phys. Rev. Lett. 111, 160402 (2013).
gisinpla210 Gisin, N., Hidden quantum nonlocality revealed by local filters, Phys. Lett. A 210, 151 (1996).
ppt Peres, A., Separability Criterion for Density Matrices, Phys. Rev. Lett. 77, 1413 (1996).
4396 Schlienz, J. & Mahler, G., Description of entanglement, Phys. Rev. A 52, 4396 (1995).
lilu Li, M., Zhang, T. G., Fei, S. M., Li-Jost, X. Q. & Jing, N. H., Local Unitary Equivalence of Multi-qubit Mixed Quantum States, Phys. Rev. A 89, 062325 (2014).
werner Werner, R. F., Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model, Phys. Rev. A 40, 4277 (1989).

Acknowledgements

This work was completed at the Beijing Computational Science Research Center and is supported by the NSFC Grants No. 11275131 and No. 11675113; the Shandong Provincial Natural Science Foundation No. ZR2016AQ06; the Fundamental Research Funds for the Central Universities Grants No. 15CX08011A and No. 16CX02049A; the Qingdao applied basic research program No. 15-9-1-103-jch; and a project sponsored by SRF for ROCS, SEM.

Author contributions

M. Li and H. H. Qin wrote the main manuscript text. All authors reviewed the manuscript.

Additional Information

Competing Financial Interests: The authors declare no competing financial interests.
{ "authors": [ "Ming Li", "Huihui Qin", "Jing Wang", "Shao-Ming Fei", "Chang-Pu Sun" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170426143831", "title": "Maximal violation of Bell inequalities under local filtering" }
Multiwavelength Studies of Young OB Associations

Eric D. Feigelson
Department of Astronomy & Astrophysics, Pennsylvania State University, University Park PA 16802, USA
[email protected]

We discuss how contemporary multiwavelength observations of young OB-dominated clusters address long-standing astrophysical questions: Do clusters form rapidly or slowly with an age spread? When do clusters expand and disperse to constitute the field star population? Do rich clusters form by amalgamation of smaller subclusters? What is the pattern and duration of cluster formation in massive star forming regions (MSFRs)? Past observational difficulties in obtaining good stellar censuses of MSFRs have been alleviated in recent studies that combine X-ray and infrared surveys to obtain rich, though still incomplete, censuses of young stars in MSFRs. We describe here one of these efforts, the MYStIX project, which produced a catalog of 31,784 probable members of 20 MSFRs. We find that age spreads within clusters are real in the sense that the stars in the core formed after the cluster halo. Cluster expansion is seen in the ensemble of (sub)clusters, and older dispersing populations are found across MSFRs. Direct evidence for subcluster merging is still unconvincing. Long-lived, asynchronous star formation is pervasive across MSFRs.

§ HISTORICAL DISCUSSIONS OF STAR CLUSTER FORMATION

Galactic Plane star clusters, well known to classical astronomers like the 2^nd century Claudius Ptolemy and the 10^th century Abd al-Rahman al-Sufi, were catalogued in the 18-19^th centuries by Charles Messier and William and John Herschel. As astrophysical explanations for astronomical phenomena rose to prominence around the turn of the 20^th century, it was natural that the processes giving rise to clusters were investigated. We address here several astrophysical themes of long-standing importance where, even today, theory is not well constrained by observation.

The historically oldest issue is the argument that most stars are born in clusters that expand and disperse to comprise the field star population. In a 1917 discussion of Kapteyn's `systems of stars which travel together in parallel paths', Charlier <cit.>, director of Lund Observatory in Sweden, argues "that the stars which now belong to such a system are only the insignificant remnant of a large cluster which at one time constituted a compact system in space."

Such questions could be investigated computationally, both by integrating difficult differential equations and by Monte Carlo N-body calculations, in the 1970s. In an important study, Tutukov <cit.> wrote: "It is generally believed that ... stars [form] in small groups which dissolve comparatively quickly during very early stages of evolution, practically at the moment of their formation. ... It is natural to suppose that the gas not utilized for star formation was blown away by hot stars, probably due to the ionizing radiation and stellar wind. If the mass of gas is higher than the mass of stars and the kinetic energy of the gas exceeds the binding energy of the cluster, then the disruption of a young cluster seems inevitable."

The issue of stellar dispersal arose again when early-type stars were discovered far from their natal clouds away from the Galactic Plane. Greenstein & Sargent <cit.> noted: "The kinematical behavior of these stars is, however, quite strange ...
The stars are not kinematically relaxed; they are apparently observed soon after formation and ejection. ... [This reveals] a fundamental problem that far too many hot, high-velocity, apparently normal stars exist."

Some of these stars are clearly runaway stars ejected at high velocities from hard binary interactions, but others, some dispersed up to ∼ 200 pc from the Plane, could not easily be traced to rich clusters <cit.>. In a catalogue of stellar members in OB associations within 3 kpc, Garmany & Stencel <cit.> found that massive OB stars are commonly spread over large (∼ 200 pc) regions; these did not appear to be high-velocity runaways.

Another long-standing issue concerns the mechanism by which rich star clusters form. Aarseth & Hills <cit.> sought to evaluate two alternative views: simultaneous formation of a monolithic rich cluster, and its possible later construction from pre-existing subclusters. They wrote: "The density distribution of stars in a stellar cluster usually gives every appearance of being smoothly varying and non-clumpy. On the face of it, this is a bit surprising since elementary considerations from [Jeans gravitational collapse] star-formation theory suggest that a cluster should initially be subdivided into a hierarchy of subclusters. ... The subdivision process terminates when the cloud becomes opaque enough for the collapse time-scale to catch up with the cooling time-scale ... [so] that a cluster is initially composed of a hierarchy of subclusters."

Stellar subgroups were empirically found in a number of nearby rich OB associations by Blaauw <cit.>. But it was unclear whether the primary process is fragmentation of an initially homogeneous cluster, or incomplete consolidation of smaller subclusters into a unified structure. The latter view came to the fore when molecular clouds were discovered to be highly inhomogeneous due to supersonic turbulence <cit.>. Far-infrared maps obtained with the Herschel satellite show that even the coldest and densest cloud structures mostly have clumpy and filamentary structure <cit.>.

A third contentious issue is the duration of star formation in molecular clouds. Various researchers argue, on both physical and observational grounds, that cluster formation is rapid, although a small number of stars may form over an extended period before the principal starburst <cit.>. Others suggest that regulation of star formation by magnetically induced turbulence in molecular clouds and feedback from nascent stars prevents large-scale free-fall gravitational collapse and rapid cluster formation <cit.>. The evidence outlined above for widely distributed early-type stars suggests that star formation in massive star forming regions is long-lived, so that earlier generations of massive stars have time to drift outward from still-active star forming regions.

§ THE OBSERVATIONAL CHALLENGES

It is now clear that most stars form in rich clusters.
The cluster luminosity function in the Milky Way Galaxy and nearby galaxies demonstrates that the majority of stars form in clusters with 10^2-10^4 stars <cit.> and, during galactic starburst episodes, superclusters of 10^5 stars may dominate. But even the fundamental physical properties, processes and timescales of cluster formation and early evolution are observationally poorly established. Cogent arguments have been made that clusters form quickly <cit.> and slowly <cit.>, that they form as a unified structure or are assembled from merging subclusters <cit.>, and that they form in spherical cloud cores or in filamentary cloud structures <cit.>. Timescales for cluster formation and early dynamical evolution are poorly constrained by observation. Attempts to measure the ages of constituent stars of nearby clusters by fitting their locations in Hertzsprung-Russell diagrams (HRDs) to theoretical evolutionary tracks are beset with observational difficulties, so that it is unclear whether the observed spreads in HRDs represent true age spreads <cit.>.

The reasons for the failure to test competing astrophysical models of cluster formation can arguably be traced to practical observational difficulties in defining their member stars. Much progress has been made in studying the progenitor molecular clouds through, for example, maps of coolant molecular lines with millimeter array telescopes and far-infrared imaging of continuum dust emission with the Herschel satellite. The environmental effects of the hot OB stars can also be traced across the Galactic Plane: ionized gas is easily mapped at radio wavelengths, and heated dust produces PAH band emission mapped with infrared space telescopes. But the actual stellar populations of star clusters beyond distances ∼ 1 kpc are poorly known. Indeed, hardly any members have been identified in most of the massive Galactic star forming regions that would be called `extragalactic giant H II regions' were they to be present in nearby galaxies <cit.>.

Acquiring a reliable census of members of star clusters beyond d ∼ 1 kpc faces several challenges. The most devastating is contamination by uninteresting older Galactic field stars along the line of sight. At Galactic latitude b ∼ 0^∘ and longitudes in the inner quadrants, field stars have 10-100 times higher surface density than the cluster members over most of the cluster extent at near-infrared magnitudes around the peak of the Initial Mass Function. Interstellar absorption can reach A_V ∼ 30 mag along the line of sight to the cluster, and can vary by tens of magnitudes within the star forming region due to the local molecular cloud. Detection of faint infrared stars is difficult amid the nebular H II region emission from heated dust.
As a result of these problems, the census of young star cluster members has often been restricted to nearby lower-mass clusters or to special subpopulations of massive clusters: the inner cluster core where the surface density rises above the field stars; OB stars that are brighter and bluer than ambient stars and easily confirmed with optical spectroscopy; and pre-main sequence stars with photometric infrared excesses (IRE) from dusty protoplanetary disks. The IRE criterion is often used to define the population of `young stellar objects' (YSOs) but it is restricted to disk-bearing pre-main sequence stars (Class I-II). In many clusters, the bulk of the stars have lost their disks and are thus photometrically indistinguishable from contaminant field stars in the infrared bands. Inferences regarding star formation histories may therefore be flawed due to the IRE sample bias towards younger systems with hot inner accretion disks.

However, a technique has emerged in recent years that overcomes, to some degree, these observational difficulties and biases. Sensitive and high-resolution imaging of star forming regions with NASA's Chandra X-ray Observatory, sensitive in the 0.5-8 keV (25-1.5 Å) X-ray band, can detect reasonable fractions of young cluster populations out to distances of several kiloparsecs with reasonable exposure times. A typical 100 ks exposure with Chandra's Advanced CCD Imaging Spectrometer of a rich cluster at d ∼ 2-3 kpc will reveal 1000 or more cluster members, perhaps 5-20% of the full Initial Mass Function (IMF). Most importantly, the X-ray image captures only a minute fraction of the Galactic field stars that contaminate the infrared images so badly. The main contaminants of X-ray images are quasars seen through the Galactic Plane, and these are readily removed because they lack infrared counterparts. X-ray emission in pre-main sequence stars arises from magnetic flaring activity, similar to that of the Sun but with much more powerful and frequent flares <cit.>. The flaring X-ray emission has a sufficiently `hard' X-ray spectrum that these stars can be detected through high column densities of intervening interstellar material, equivalent to A_V ∼ 100 mag in some cases. Finally, X-ray selection is complementary to IRE selection because it most efficiently captures disk-free (Class III) stars.

The remainder of this chapter discusses a particular effort called MYStIX (Massive Young Stellar complexes study in Infrared and X-rays) that combines Chandra X-ray, UKIRT near-infrared, and Spitzer Space Telescope mid-infrared surveys of 20 OB-dominated star forming regions at distances 0.4 < d < 4 kpc <cit.>. After complicated data analysis with statistical procedures designed to reduce contaminants, a sample of ∼31,000 MYStIX Probable Complex Members (MPCMs) is generated. While far from a complete stellar census, the samples are typically much larger than previously available, and appear to be reasonably free from contaminating field stars. After a brief description of the MYStIX observational effort (<ref>) and a new stellar chronometer based on X-ray/infrared photometry (<ref>), we summarize some of the characteristics of these star clusters (<ref>). A variety of results are then outlined (<ref>): the morphology of stellar clustering and maps of stellar surface density, histories of star formation in MSFRs, and direct measurement of cluster expansion. MYStIX is only one of several similar X-ray/infrared surveys, which include the Chandra Carina Complex Project <cit.>, the Chandra Cyg OB2 Legacy Survey
<cit.>,Star Formation in Nearby Clouds <cit.>, NGC 6611 <cit.>,Eagle Nebula <cit.>, NGC 1893 <cit.>, DR 15 <cit.>, NGC 6231 <cit.>, NGC 7538 <cit.>, and others.§ THE MYSTIX PROJECTThe MYStIX effortseeks to construct an improved census of stars in rich clusters and their environs in 20 MSFRs near the Sun.Populations that are not dominated by an O or early-B star are omitted; thus MYStIX omits nearby small star forming regions like the Taurus-Auriga, ρ Ophiuchi and Chamaeleon complexes.Table 1 lists the MYStIX star forming regions with approximate distance from the Sun and spectral type of the dominant star.The accompanying Figure 1 shows the location of the MYStIX regions on a diagram of the Milky Way Galaxy with the Sun at the middle. The MYStIX targets do not constitute a complete sample in any way, but rather were selected by practical considerations: they must have sufficiently deep coverage by the Chandra and Spitzer satellite imagers. Simply stated, the MPCM samples are the sum of probable complex members extracted from X-ray sources in the Chandra X-ray Observatory images, IRE sources from UKIRT near-infrared observations (often part of the UKIDSS Galactic Plane Survey) and the Spitzer Space Telescope mid-infrared observations, and published OB stars confirmed by published optical spectroscopy.But the actual procedure for constructing the MPCM samples is complicated by the need to reduce the often-overwhelming contamination of Galactic field stars combined with spatially variable cloud absorption and nebular emission.Challenges overcome include: X-ray source listswere obtained using the ACIS Extract package and associated software developed for the Chandra ACIS instrument at Penn State <cit.>.This allows detection of sources with as few as 3-5 photons on-axis, even in the presence of crowding and diffuse X-ray emission.Contamination from extragalactic X-ray sources and field X-ray stars was reduced by a naive Bayes classifier based on various properties of the sources and their infrared counterparts <cit.>. The reliability of these sources is validated by the high fraction associated with stars exhibiting other pre-main sequence properties <cit.>.Near-infrared source listswere obtained with the UKIDSS pipeline software modified to accommodate very crowded Galactic plane fields with nebulosity <cit.>. Mid-infrared source listswere obtained with the Spitzer IRAC team software modified to accommodate crowding and nebulosity <cit.>. X-ray/infrared counterpart identificationswere based on a probabilistic calculation of proximate sources that accounts for the magnitude distribution expected for true complex members, in order to reduce false associations with fainter field stars <cit.>. Infrared excess starswere extracted based on a complicated decision tree of criteria designed to reduce the often-heavy contamination by field red giants and false sources associated with nebular knots <cit.>.The classified X-ray sources, IRE stars and published OB stars were then combined into the MPCM catalog of 31,784 stars in the 20 regions of Table 1 <cit.>.The MYStIX papers, and their electronic tables of intermediate and final samples, are collected at the Web site http://astro.psu.edu/mystix.The MPCM sample is far from a complete census.The X-ray samples are generally limited to stars with masses above ∼ 0.5 M_⊙, and thus miss the peak of the IMF of low-mass members.Various biases are present in the sample as well (see Appendix B of Feigelson et al. 
Nonetheless, the MPCM samples are the largest available for most of the star forming regions under consideration. Tests of the sample reliability were made using the well-studied NGC 2264 population; ∼ 80% of previously identified Hα and optically variable stars were recovered, and dozens of new members are proposed <cit.>.

Figures <ref>-<ref> illustrate the MPCM samples for four MYStIX star forming regions. The regions have complex structures, though with some similar behaviors.

Lagoon Nebula (M 8): In this MSFR we see two major clusters: the poorly characterized NGC 6523 cluster to the east with the famous massive star Herschel 36; and the well characterized NGC 6530 cluster in a large cavity to the west. As one proceeds westward, the fraction of IRE stars (red circles in Figure <ref>) decreases; it is not immediately clear whether this is an age gradient or a selection effect due to the difficulty of finding IRE stars in the bright PAH nebulosity of the western region. A clump of stars is also seen to the far southeast associated with a bright-rimmed cloud; it includes the luminous embedded star M 8E.

NGC 6334: This is a large MSFR elongated along the Galactic Plane with both heavy absorption and complex bright nebular emission that precluded generation of a reliable stellar census in the past. The 1,667-member MPCM sample shows several distinct clusters, some dominated by young IRE stars and others by older X-ray selected stars <cit.>. The morphology might represent a star formation wave from the southwest to the northeast, but older clusters are sometimes superposed on younger clusters and a distributed young star component is also present. A selection of likely protostars, based on MYStIX sources with ascending infrared spectral slopes or ultra-hard X-ray spectra, shows a distribution of very young stars tracing the curved molecular filament to the northeast <cit.>.

NGC 6357: This region has 2,235 MPCMs, very few of which had previously been identified by optical or infrared surveys even though this is a very active star forming region in the Carina spiral arm. Three very rich clusters are seen; Pismis 24 to the northwest has several ∼ 100 M_⊙ O3 stars. In each cluster, we can see spatial displacements between the infrared- and X-ray-selected subsamples. The IRE selection method is ineffective around the brightest nebular emission of the northwest H II region. Two dozen new absorbed (4 < A_V < 24 mag) candidate OB stars are identified in the MYStIX catalog in this region <cit.>.

Eagle Nebula (M 16): Here the southwestern rich cluster is dominated by disk-free X-ray selected members, while the sparser subclusters to the north and west are dominated by disk-bearing IRE members. As in most MYStIX regions, the X-ray selected stars outnumber the IRE stars, implying that the star formation has endured for many millions of years beyond the typical longevity of infrared-emitting disks.

§ A NEW STELLAR CHRONOMETER

To reveal the spatio-temporal history of star formation in MYStIX regions, it would be very desirable to obtain reliable ages of different (sub)clusters of MPCM stars. Two pre-main sequence chronometers are traditionally used: a star's location in the HRD compared to theoretical evolutionary tracks; and the presence of a star's infrared-emitting circumstellar disk <cit.>. But neither is very effective for MSFRs. HRD locations are not available because the stars are often too reddened to readily obtain optical spectra, and in any case several extraneous problems render HRD-derived ages uncertain <cit.>. Disk fractions or classifications (Class 0-I-II-III) derived from infrared photometry are inaccurate and difficult to calibrate. For example, IRE populations are reduced by local H II region contamination, differences in infrared-to-X-ray sensitivities can systematically bias disk fraction comparisons between MYStIX regions, and individual disk dissipation timescales range over 0.5-5 Myr or more. A potentially accurate chronometer based on oscillations of intermediate-mass stars has been proposed <cit.>, but it can be applied only to a handful of bright, well-studied stars, not to thousands of faint MSFR stars.

In the MYStIX context, Getman and colleagues have developed a new, surprisingly simple chronometer for pre-main sequence stars that can be applied to a reasonable fraction of MPCM stars <cit.>. It is based on the long-standing empirical correlation between X-ray luminosity L_x, produced by magnetic reconnection flares, and stellar mass M in pre-main sequence stars. This L_x-M relation is best calibrated in the Taurus-Auriga population <cit.>. The astrophysical cause of this correlation is poorly understood (presumably related to magnetic dynamos in fully convective stellar interiors), but it accounts for much of the 10^4 range of L_x in young stellar populations. MYStIX L_x measurements, after correction for soft X-ray absorption from intervening interstellar gas, thus give mass estimates for each star. MYStIX also measures photospheric luminosities L_bol; Getman et al. use dereddened J band magnitudes M_J as a proxy for L_bol. M values inferred from L_x, combined with measured M_J values and standard theoretical evolutionary tracks, give stellar age estimates for each star, nicknamed Age_JX. Each Age_JX value is quite inaccurate, but obtaining the median Age_JX for a spatially defined subsample of young stars appears to be effective for elucidating histories of star formation within and between clusters. A toy sketch of this bookkeeping is given below.
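The L_x-mass relation and the isochrone grid in the following sketch are invented stand-ins for the published Taurus-Auriga calibration and evolutionary tracks, so only the structure of the calculation, mass from L_x and then age from (mass, M_J), should be taken seriously.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy stand-ins (NOT the published calibrations): a power-law Lx-mass
# relation and a small isochrone grid M_J(age, mass), illustration only.
def mass_from_lx(log_lx):
    return 10 ** ((log_lx - 30.0) / 1.7)          # hypothetical slope/offset

ages = np.array([0.5, 1.0, 2.0, 4.0])             # Myr (toy grid)
masses = np.array([0.3, 0.5, 1.0, 2.0])           # Msun
mj_grid = np.array([[4.8, 4.0, 3.0, 2.0],         # hypothetical M_J values:
                    [5.2, 4.4, 3.4, 2.4],         # fainter with age,
                    [5.7, 4.9, 3.9, 2.9],         # brighter with mass
                    [6.2, 5.4, 4.4, 3.4]])
mj_of = RegularGridInterpolator((ages, masses), mj_grid)

def age_jx(log_lx, m_j):
    """Pick the grid age whose predicted M_J at the Lx-implied mass is closest."""
    m = np.clip(mass_from_lx(log_lx), masses[0], masses[-1])
    resid = [abs(mj_of((a, m)) - m_j) for a in ages]
    return ages[int(np.argmin(resid))]

# The median Age_JX over a (sub)cluster's members is the robust quantity used.
sample = [(30.3, 4.6), (30.8, 3.8), (29.9, 5.1)]  # toy (log Lx, dereddened M_J)
print(np.median([age_jx(lx, mj) for lx, mj in sample]))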
§ IDENTIFYING (SUB)CLUSTERS

The MYStIX fields are mostly centered on rich OB associations with optically bright H II regions, often with names like `Rosette Nebula' and `Lagoon Nebula' that date to the 19^th century. But examination of the MPCM spatial distributions shows considerable diversity in clustering behavior; a simple dichotomy between rich clusters and distributed star formation is clearly inadequate. Global statistics of spatial point processes, such as Ripley's K function and the related two-point correlation function <cit.>, are not directly useful, as they are strongly affected by the richest clusters and do not reflect the diversity of patterns within a single field. Defining stellar `clusters' or `groups' by surface density enhancements <cit.> also has the disadvantage of requiring an arbitrary threshold. We therefore proceeded to locate `clusters' using a parametric statistical regression approach known as `mixture models' <cit.>. Here we require that cluster structure have a specific mathematical form corresponding to an isothermal sphere or ellipsoid <cit.>. A likelihood function gives the probability that the observed celestial locations of MPCM stars correspond to a specified mixture of isothermal ellipsoids. When a flat `distributed' stellar population is added, a model with k clusters has 6k+1 parameters. The best-fit model is obtained by maximum likelihood estimation for a range of k, and the optimal number of clusters is obtained by maximizing the Akaike Information Criterion, a well-accepted penalized likelihood measure for model selection; a schematic of this kind of fit is sketched below.
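The sketch is not the MYStIX code: Gaussian components stand in for the isothermal ellipsoids, there is no explicit uniform `distributed' component, and it follows the scikit-learn convention in which the preferred model minimizes the AIC.

import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal mixture-model cluster identification with AIC model selection.
def find_subclusters(xy, k_max=15):
    """xy: (N, 2) array of stellar sky positions (e.g., in parsecs)."""
    fits = [GaussianMixture(n_components=k, covariance_type="full",
                            n_init=5, random_state=0).fit(xy)
            for k in range(1, k_max + 1)]
    aics = [m.aic(xy) for m in fits]
    best = fits[int(np.argmin(aics))]      # lowest AIC wins
    return best, best.predict(xy)          # model + star-to-cluster labels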
Note that the method permits hierarchical structures, with one ellipsoid lying within or overlapping another ellipsoid. Model fits are generally excellent, with no strong features in the residual spatial maps. The resulting spatial decompositions for the NGC 6357 and Eagle Nebula MYStIX fields are shown in Figure <ref>. The result of this analysis is the assignment of each of the ∼31,000 MPCM stars to one of 142 (sub)clusters or to a distributed population <cit.>. Since each subcluster has an assumed isothermal ellipsoid internal structure, parameters such as core radii and ellipticities can be calculated. Two measures of absorption are available for each (sub)cluster: the median J-H color index and the sample median of the individual median energies of the X-rays from the constituent stars. For example, the Eagle Nebula has 12 statistically significant subclusters (Fig. <ref>) with sample populations ranging from 7 to 451 MPCM stars, core radii from 0.07 pc to 1.0 pc, ellipticities from 7% to 64%, and absorptions from A_V ∼ 5 to 16 mag. Note that the sample populations are not unbiased measures of the true stellar populations, as they depend on the circumstantial exposure times of the Chandra and Spitzer observations, region distance and absorption.

Two additional critical properties of subclusters can be derived. First, the age of each subcluster can be estimated from the median Age_JX values of the constituent stars (Sec. <ref>). Ages for the Eagle (sub)clusters range from 0.8 to 2.4 Myr. Second, the total stellar population can be inferred by scaling the sample X-ray luminosity function (truncated at different limiting X-ray sensitivities) to the fully sampled X-ray luminosity function of the Orion Nebula Cluster <cit.>. The total populations inferred from X-ray luminosity functions agree well with a parallel analysis based on dereddened J band magnitudes scaled to a standard Initial Mass Function. Combining the estimated total population with the (sub)cluster structural parameters like core radius, unbiased estimates can be made of important quantities such as total stellar mass (in M_⊙), central surface densities (in stars/pc^2), central volume densities (in stars/pc^3), and characteristic crossing and relaxation times (in Myr) <cit.>.

§ SPATIAL DISTRIBUTION OF STARS ACROSS STAR FORMING REGIONS

Comparisons of MPCM stellar spatial distributions in maps like Figs. <ref>-<ref> can be misleading due to inhomogeneity in sensitivity. This particularly affects the X-ray measurements. First, within each Chandra ACIS field the sensitivity is highest at the field center and degrades by a factor of ∼ 3 as one approaches the field edges due to the coma of the high-resolution X-ray mirrors. Second, Chandra fields are often mosaics of overlapping exposures; due to the low background of the ACIS detector, sensitivity scales linearly with exposure time. Third, the Chandra exposure times are not scaled with the square of the MYStIX region distance, so the X-ray luminosity function (and, through the empirical L_x-mass relationship, the mass function) is truncated at different levels. However, as outlined in Sec. <ref>, these problems can be overcome <cit.>.
<ref>, these problems can be overcome <cit.>.We first `flatten' the intra-ACIS sensitivity variation by omitting the faint sources near the field center.The stellar surface densities are then normalized to the full IMF assuming all regions have the same intrinsic X-ray luminosity function.Although the lower mass stars missed by Chandra cannot be individually identified, the surface densities can be scaled upward to compensate for the different truncation levels.Note it is more difficult to corrected the maps for variations in the surface densities of IRE sources, which are deficient in the brightest H II nebular regions.The result is Fig. <ref> a remarkable new view of the stellar distributions in massive star forming clouds<cit.>. The densities correspond to the full intrinsic stellar populations down to the M∼ 0.08 M_⊙ limitshown on a uniform physical scale (see the 5 pc scale bar) and a uniform color scale in stars/pc^2 (see color calibration bar).We find, for example, that the both the embedded clusters and the revealed massive cluster of the Rosette Nebula region have low surface densities of 10^1 stars/pc^2. But the RCW 38, Orion Nebula Cluster, and M 17 clusters have extremely high central surface densities around 10^4 stars/pc^2.Diversity, rather than consistency, is the premier result from these surface density maps.The main Rosette Nebula cluster NGC 2244 must be in a completely different dynamical state than the RCW 38 or W 40 clusters; and indeed this may be related to the complete absence of mass segregation in NGC 2244 <cit.>. Until these maps were compared, it was not realized that RCW 38 (which is badly contaminated in the IR bands due to nebulosity) has the densest collection of stars of any cluster in the nearby Galaxy.It thus provides an excellent laboratory to study dynamical effects of close stellar encounters <cit.>.The MYStIX maps showing of a wide range of central surface densities, <10^1 to ∼ 3 × 10^4 stars/pc^2 (Fig. <ref>), stands in conflict with the findings of Bressert and colleagues who report that young stellar clusters exhibit a characteristic central surface density distribution with mean around 20 stars/pc^2 <cit.>.Their study is limited to nearby molecular clouds where clusters are generally small and, most importantly, their sample is limited to IRE stars and thus miss thedisk-free X-ray selected stars that dominate many star forming regions.The MYStIX findings on stellar surface densities, although still subject to limitations and biases, are probably more reliable than the more constrained IRE-only results.§ OBSERVATIONAL CONSTRAINTS ON ASTROPHYSICAL QUESTIONSWe now discuss how the MYStIX project – specifically the MPCM sample of 31,747 young stars in 142 (sub)clusters associated with 20 MSFRs – addresses the astrophysical questions outlined in <ref> that concern the origin and early evolution of star clusters. The questions are pursued by searching for spatial and statistical patterns among the various physical quantities measured or inferred for the (sub)clusters.One must recognize that the MPCM sample is constructed in complicated ways with unavoidable incompleteness and biases <cit.>; however, each MYStIX region is analyzed in the same fashion and corrections to alleviate sensitivity and contamination effects can be applied in consistent ways. 
§.§ Cluster expansion and dispersal

The MYStIX dataset shows many cases of the expected range of cluster structures: compact clusters embedded in their molecular cores, larger clusters following molecular gas ejection, and older stars dispersing into the field population. Direct evidence for cluster expansion is shown in Fig. <ref> <cit.>. The first panel shows that MYStIX (sub)cluster core radii systematically increase as clusters range from heavily absorbed to lightly absorbed. The X-ray median energy range is roughly equivalent to 0 < A_V < 40; the same result is seen using J-H as an absorption measure. The other panels show the relationships between core radii or central density and median Age_JX values for the subclusters. Here we see roughly a factor of 10 increase in radius, and a factor of 1000 decrease in central core density, as (sub)clusters age from ∼ 0.5 to 4 Myr. This is roughly consistent with dynamical calculations of cluster expansion following gas expulsion, although some models assume initial conditions that predict more rapid expansion at earlier times <cit.>. Evidence of this expansion has been presented by Pfalzner and colleagues <cit.> using samples of Galactic and extragalactic young clusters obtained from the literature. They report a `universal sequence' relating cluster size, central density and age, indicative of cluster expansion from a uniform compact state. Their `loose clusters', similar to MYStIX clusters, expand ∼ 10-fold from 2-20 Myr.

Pre-MYStIX studies had reported that X-ray selected stars, including early-type OB stars, are often dispersed from the molecular cores that actively form stars today <cit.>. In the Carina complex, half of the X-ray stars lie outside the regions dominated by the Trumpler 14-15-16 clusters and the South Pillars clouds <cit.>. This pattern is seen in most MYStIX regions. Dispersed stellar surface densities range from near-zero to tens of stars/pc^2 in the different regions <cit.>. Age_JX analysis shows that, in nearly all cases, the dispersed stars are older (typically 3 to >5 Myr) than the MYStIX (sub)clusters <cit.>. These findings give confidence in the long-standing argument <cit.> that young clusters often quickly dissipate to constitute the field star population. However, the MYStIX photometric observations cannot distinguish the physical process: do individual stars slowly drift away, are individual stars ejected at high velocity by stellar interactions in the cluster core, or do clusters release all of their stars simultaneously as they become gravitationally unbound?
<ref> is a diagram of the ellipsoidal structures in 15 MYStIX regions placed into a heuristic classification of simple, linear chain, core-halo, and complex clumpy classes <cit.>.As in <ref>, we see a wide diversity of clustering morphologies produced by massive molecular clouds.It is tempting to interpret the morphological classes as an evolutionary sequence where star formation begins as linear chains in filamentary clouds, passes through a clumpy stage as subclusters merge, and ends with core-halo and simple structures that may be in dynamical equilibrium.However, when Age_JX values are examined for these morphological classes, no evidence for an evolutionary sequence is found <cit.>. Perhaps linearmorphologies (like DR 21 and NGC 2264) disperse rather than merge into simpler spherical morphologies (like W 40 and the three clusters of NGC 6357).However, it seems physically reasonable to suggest that the dense but clumpy configuration of M 17 will equilibrate into a unified rich cluster.A second failure to detect (sub)cluster merging is from a scatter plot of total stellar population vs. Age_JX for MYStIX (sub)clusters.No indication of cluster population growth is seen <cit.>.It is possible that the statistical decomposition of stellar clustering into 142 isothermal ellipsoids masquerades a growth effect.A third test, however, gives a hint of cluster growth.A strong anti-correlation between (sub)cluster central star densities and core radiinaturally appears in ensembles of young clusters. A relationship ρ∝ r_c^-3 is expected from a collection of clusters of uniform and constant mass seen at different phases of expansion.The MYStIX sample shows ρ∝ r_c^-2.6 ± 0.1 over the range 0.03 ≤ r_c ≤ 1 pc and 1.5 ≤logρ≤ 5 stars/pc^3 <cit.>.This relationship appears shallower than a -3 powerlaw index, indicatingthat larger clusters have somewhat higher masses than smaller clusters.This suggests that MYStIX subclusters undergo growth from mergers or continued star fomation as they expand.Note this this stands in contrast to Pfalzner's `leaky clussters' that lose mass as they expand <cit.>.A fourth consideration gives a hint that merging may be needed to form the richest young clusters.With the exception of W 3 Main <cit.>, there is no obvious case in the nearby Galaxy of a very rich (thousands of stars) with a dynamically relaxed appearance that is still embedded in its cloud.The typical embedded cluster found in the MYStIX study has is not very rich (tens to hundreds of stars) and often with a clumpy morphology.If rich clusters formed rapidly and monolithically as proposed by in some theoretical studies <cit.>, then perhaps more should be found in embedded environments.But a model where rich clusters form by the merging of smaller structures <cit.> is consistent with the paucity of very rich embedded clusters. §.§ Duration of star formation The MYStIX and related studies give unequivocal evidence that long-lived star formation is pervasive, both across MSFRs and within rich clusters.The acquisition of Age_JX estimates for dozens ofspatially well-defined (sub)clusters allows us to study the history of star formation across MYStIX star forming regions.Getman and colleagues find a clear and consistent pattern: more heavily absorbed clusters have younger ages than lightly absorbed clusters <cit.>. This is shown for two MYStIX regions in Fig. <ref>, RCW 36 with a `simple' structure and Rosette Nebula with a `complex' structure. 
§.§ Duration of star formation

The MYStIX and related studies give unequivocal evidence that long-lived star formation is pervasive, both across MSFRs and within rich clusters. The acquisition of Age_JX estimates for dozens of spatially well-defined (sub)clusters allows us to study the history of star formation across MYStIX star forming regions. Getman and colleagues find a clear and consistent pattern: more heavily absorbed clusters have younger ages than lightly absorbed clusters <cit.>. This is shown for two MYStIX regions in Fig. <ref>: RCW 36 with a `simple' structure and the Rosette Nebula with a `complex' structure. In RCW 36 the ages range from 0.9 to 1.9 Myr, while in Rosette they range from 1 to 4 Myr. Ages are also available for stars that are not assigned to clusters; these distributed stars always show older ages than absorbed clusters. These results confirm the widespread belief that clusters are formed inside dusty molecular cores (high J-H color environments) and later expel their molecular material (low J-H environments). But there were few quantitative measures of this expectation prior to the MYStIX analysis. Previous demonstrations of age gradients were based on spatial correlations between Class I-II-III (disk-bearing to disk-free) populations and absorption in the W 40 and Rosette Nebula regions <cit.>. Neither of these quantities is calibrated to age in Myr, and the situation is often not so simple; in the Orion L1541 cloud, for example, two clusters dominated by older disk-free stars are lightly obscured while one is heavily obscured <cit.>.

A more surprising result is the age spread, and spatial age gradient, found by Getman and colleagues within two nearby rich clusters, in addition to the gradients found earlier between (sub)clusters <cit.>. The cluster cores are much younger than the cluster outer regions (Fig. <ref>). In the Flame Nebula cluster, stars within 0.2 pc of the center are 0.2 Myr old while stars 1 pc from the center are 1.6 Myr old. In the Orion Nebula Cluster, the age ranges from 1.2 Myr to 2.0 Myr. This measurement is based entirely on analysis of solar-type stars, and thus does not conflate age and mass segregation. The result is startling because naive models for cluster formation (based on Jeans gravitational collapse in an isothermal cloud core) expect that stars will form first in the dense center, which would thus later appear to have the oldest, not the youngest, stars. Other models tend to homogenize the younger and older stars during a subcluster merging process <cit.>. More complex cluster formation scenarios might explain the observed phenomenon; for example, the older stars may have kinematically dispersed from the core, and/or the core may have been supplied with infalling molecular gas to allow star formation after the gas was depleted in the halo <cit.>.

But the MYStIX intracluster age gradient also resolves a long-standing controversy concerning apparent stellar age spreads in HRDs <cit.>. The age spread appears to be real, at least in part, because it represents a spatial segregation of older and younger stars. Thus models based on rapid cluster formation in a single collapse time <cit.> are not consistent with the findings, at least for the rich clusters in the Orion cloud complex.
<ref>) points to the importance of studying star formation in multiple environments.The observational strategy of MYStIX can easily be extended to more star forming regions in the nearby (roughly distances <3 kpc) Galaxy. Results are now emerging from Chandra X-ray Observatory observations of ∼ 20 regions with distances ≤ 1 kpc dominated by intermediate-mass BA stars <cit.>, and both Chandra and XMM-Newton missions have observed the nearest star forming regions around 0.14-0.3 kpc.It is more difficult to extend such study to the lowest mass stars that dominate the IMF (0 .1 < M < 0.5 M_⊙), and to the richest star forming regions of the Galaxy lying ∼ 5-12 kpc from the Sun.Million-second Chandra exposures are needed to acquire sufficient X-ray sensitivity, and infrared followup requires both high resolution and high sensitivity.Fortunately, the Chandra satellite is in good health since launch in 1999 and is likely to last for a considerable time into the future.Infrared technologies are continuously improving:the VISTAVia Lactea project gives wide-field, multi-epoch photometry of large portions of the the Galactic Plane <cit.>; the KMOS and MOSFIRE multi-object spectrographs offer efficient spectroscopic capabilities on 8-meter class telescopes; and the James Webb Space Telescope will greatly advance infrared imaging and spectroscopy in a few years.These observational capabilities give confidence that fruitful interactions between theory and observations can become the norm in the study of clustered star formation. This review rests on the labor and talents of the MYStIX team, particularly Patrick Broos, Konstantin Getman, Michael Kuhn, Tim Naylor, Matthew Povich, and Leisa Townsley.Many of the astrophysical results appear in the dissertation of Michael Kuhn and work led by Kostantin Getman; the author is especially grateful for their collaborative energy and thoughtful analysis.The MYStIX Project was principally supported at Penn State by NASA grant NNX09AC74G, NSF grant AST-0908038, and SAO/CXC grant AR7-18002X and ACIS Team contract SV-74018.00Aarseth72 Aarseth, S. J., & Hills, J. G. 1972, Astro. Astrophys, 21, 255Adams04 Adams, F. C., Hollenbach, D., Laughlin, G., & Gorti, U. 2004, Astrophs. J., 611, 360Andre10 André, P., Men'shchikov, A., Bontemps, S., et al. 2010, Astron. Astrophys., 518, LL102Banerjee13 Banerjee, S., & Kroupa, P. 2013, Astrphys. J., 764, 29Bate09 Bate, M. R. 2009, Mon. Not. Royal Astro. Soc., 392, 590Blaauw64 Blaauw, A. 1964, Ann. Rev. Astro. Astrophys., 2, 213Bressert10 Bressert, E., Bastian, N., Gutermuth, R., et al. 2010, Mon. Not. Royal Astro. Soc., 409, L54Broos13 Broos, P. S., Getman, K. V., Povich, M. S., et al. 2013, Astrophys. J. Suppl., 209, 32Charlier17 Charlier, C. V. L. 1917, The Observatory, 40, 387 Damiani16 Damiani, F., Micela, G., Sciortino, S. 2016, Astron. Astrophys., 596, #A82 deWit05 de Wit, W. J., Testi, L., Palla, F., & Zinnecker, H. 2005, Astro. Astrophys., 437, 247Elmegreen00 Elmegreen, B. G. 2000, Astrophys. J., 530, 277 Feigelson99 Feigelson, E. D., & Montmerle, T. 1999, Ann. Rev. Astron. Rev., 37, 363 Feigelson08 Feigelson, E. D., & Townsley, L. K. 2008, Astrophys. J., 673, 354 Feigelson09 Feigelson, E. D., Martin, A. L., McNeill, C. J., et al. 2009, Astron. J., 138, 227Feigelson11 Feigelson, E. D., Getman, K. V., Townsley, L. K., et al. 2011, Astrophys. J. Suppl., 194, 9Feigelson13 Feigelson, E. D., Townsley, L. K., et al. 2013, Astrophys. J. Suppl., 209, 26Figer08 Figer, D. F. 2008, IAU Symposium, 250, 247 Garmany92 Garmany, C. 
Garmany92 Garmany, C. D., & Stencel, R. E. 1992, Astron. Astrophys. Suppl., 94, 211
Getman05 Getman, K. V., Flaccomio, E., Broos, P. S., et al. 2005, Astrophys. J. Suppl., 160, 319
Getman14a Getman, K. V., Feigelson, E. D., Kuhn, M. A., et al. 2014, Astrophys. J., 787, 108
Getman14b Getman, K. V., Feigelson, E. D., & Kuhn, M. A. 2014, Astrophys. J., 787, 109
Getman17 Getman, K. V., Broos, P. S., Kuhn, M. A., et al. 2017, Astrophys. J. Suppl., 229, 28
Greenstein74 Greenstein, J. L., & Sargent, A. I. 1974, Astrophys. J. Suppl., 28, 157
Guarcello07 Guarcello, M. G., Prisinzano, L., Micela, G., et al. 2007, Astron. Astrophys., 462, 245
Guarcello10 Guarcello, M. G., Micela, G., Peres, G., et al. 2010, Astron. Astrophys., 521, A61
Haisch01 Haisch, K. E., Jr., Lada, E. A., & Lada, C. J. 2001, Astrophys. J. Lett., 553, L153
Hartmann12 Hartmann, L., Ballesteros-Paredes, J., & Heitsch, F. 2012, Mon. Not. Royal Astron. Soc., 420, 1457
Illian08 Illian, J., Penttinen, A., Stoyan, H., & Stoyan, D. 2008, Statistical Analysis and Modelling of Spatial Point Patterns, Wiley
King13 King, R. R., Naylor, T., Broos, P. S., et al. 2013, Astrophys. J. Suppl., 209, 28
Krumholz07 Krumholz, M. R., & Tan, J. C. 2007, Astrophys. J., 654, 304
Krumholz12 Krumholz, M. R., Klein, R. I., & McKee, C. F. 2012, Astrophys. J., 754, 71
Kuhn10 Kuhn, M. A., Getman, K. V., Feigelson, E. D., et al. 2010, Astrophys. J., 725, 2485
Kuhn13a Kuhn, M. A., Getman, K. V., Broos, P. S., et al. 2013, Astrophys. J. Suppl., 209, 27
Kuhn13b Kuhn, M. A., Povich, M. S., Luhman, K. L., et al. 2013, Astrophys. J. Suppl., 209, 29
Kuhn14a Kuhn, M. A., Feigelson, E. D., Getman, K. V., et al. 2014, Astrophys. J., 787, 107
Kuhn15a Kuhn, M. A., Feigelson, E. D., & Getman, K. V. 2015, Astrophys. J., 802, 60
Kuhn15b Kuhn, M. A., Feigelson, E. D., Getman, K. V., Sills, A., et al. 2015, Astrophys. J., 812, 131
Kuhn17 Kuhn, M. A., Medina, N., Getman, K. V., et al. 2017, Astrophys. J., submitted
Lada03 Lada, C. J., & Lada, E. A. 2003, Ann. Rev. Astron. Astrophys., 41, 57
MacLow04 Mac Low, M.-M., & Klessen, R. S. 2004, Rev. Mod. Phys., 76, 125
McLachlan00 McLachlan, G., & Peel, D. 2000, Finite Mixture Models, Wiley
McMillan07 McMillan, S. L. W., Vesperini, E., et al. 2007, Astrophys. J. Lett., 655, L45
Minniti10 Minniti, D., Lucas, P. W., Emerson, J. P., et al. 2010, New Astron., 15, 433
Naylor13 Naylor, T., Broos, P. S., & Feigelson, E. D. 2013, Astrophys. J. Suppl., 209, 30
Odell98 O'Dell, C. R. 1998, Astron. J., 115, 263
Palla00 Palla, F., & Stahler, S. W. 2000, Astrophys. J., 540, 255
Pilliteri13 Pillitteri, I., Wolk, S. J., Megeath, S. T., et al. 2013, Astrophys. J., 768, 99
Pfalzner09 Pfalzner, S. 2009, Astron. Astrophys., 498, L37
Pfalzner13 Pfalzner, S., & Kaczmarek, T. 2013, Astron. Astrophys., 559, A38
Pflamm06 Pflamm-Altenburg, J., & Kroupa, P. 2006, Mon. Not. Royal Astron. Soc., 373, 295
Povich13 Povich, M. S., Kuhn, M. A., Getman, K. V., et al. 2013, Astrophys. J. Suppl., 209, 31
Povich17 Povich, M. S., Busk, H. A., Feigelson, E. D., et al. 2017, Astrophys. J., 838, 61
Preibisch12 Preibisch, T. 2012, Res. Astron. Astrophys., 12, 1
Prisinzano11 Prisinzano, L., Sanz-Forcada, J., Micela, G., et al. 2011, Astron. Astrophys., 527, A77
Rathborne06 Rathborne, J. M., Jackson, J. M., & Simon, R. 2006, Astrophys. J., 641, 389
Richert17 Richert, A. J., Getman, K. V., Feigelson, E. D., et al., Astrophys. J., submitted
Rivera15 Rivera-Gálvez, S., Román-Zúñiga, C. G., Jiménez-Bailón, E., et al. 2015, Astron. J., 150, 191
Romine16 Romine, G., Feigelson, E. D., Getman, K. V., et al. 2016, Astrophys. J., 833, 193
Sharma17 Sharma, S., Pandey, A. K., Ojha, D. K., et al. 2017, Mon. Not. Royal Astron. Soc., 467, 2943
Tan06 Tan, J. C., Krumholz, M. R., & McKee, C. F. 2006, Astrophys. J. Lett., 641, L121
Telleschi07 Telleschi, A., Güdel, M., Briggs, K. R., et al. 2007, Astron. Astrophys., 468, 425
Townsley11 Townsley, L. K., Broos, P. S., Corcoran, M. F., et al. 2011, Astrophys. J. Suppl., 194, 1
Townsley14 Townsley, L. K., Broos, P. S., Garmire, G. P., et al. 2014, Astrophys. J. Suppl., 213, 1
Tutukov78 Tutukov, A. V. 1978, Astron. Astrophys., 70, 57
Wang08 Wang, J., Townsley, L. K., Feigelson, E. D., et al. 2008, Astrophys. J., 675, 464
Wright14 Wright, N. J., Parker, R. J., et al. 2014, Mon. Not. Royal Astron. Soc., 438, 639
Wright14b Wright, N. J., Drake, J. J., Guarcello, M. G., et al., arXiv:1408.6579
Ybarra13 Ybarra, J. E., Lada, E. A., Román-Zúñiga, C. G., et al. 2013, Astrophys. J., 769, 140
Zwintz14 Zwintz, K., Fossati, L., Ryabchikova, T., et al. 2014, Science, 345, 550
http://arxiv.org/abs/1704.08115v1
{ "authors": [ "Eric D. Feigelson" ], "categories": [ "astro-ph.SR", "astro-ph.GA" ], "primary_category": "astro-ph.SR", "published": "20170426134611", "title": "Multiwavelength Studies of Young OB Associations" }
[email protected] Instituto de Física, Universidad de Antioquia, Calle 70 No. 52-21, Medellín, Colombia INFN, Laboratori Nazionali di Frascati, C.P. 13, 100044 Frascati, Italy [email protected] Universidade Estadual Paulista (Unesp), Instituto de Física Teórica (IFT), São Paulo. R. Dr. Bento Teobaldo Ferraz 271, Barra Funda, São Paulo - SP, 01140-070, Brasil [email protected] Instituto de Física, Universidad de Antioquia, Calle 70 No. 52-21, Medellín, Colombia

We present a framework linking axionlike particles (ALPs) to neutrino masses through the minimal inverse seesaw (ISS) mechanism in order to explain the dark matter (DM) puzzle. Specifically, we explore three minimal ISS cases where the mass scales are generated through gravity-induced operators involving a scalar field hosting ALPs. In all of these cases, we find gravity-stable models providing the observed DM relic density and, simultaneously, consistent with the phenomenology of neutrinos and ALPs. Remarkably, in one of the ISS cases, the DM can be made of ALPs and sterile neutrinos. Furthermore, other considered ISS cases have ALPs with parameters inside regions to be explored by proposed ALPs experiments.

Linking axionlike dark matter to neutrino masses
C. D. R. Carvajal, B. L. Sánchez-Vega, O. Zapata
December 30, 2023
==================================================

§ INTRODUCTION

The discovery of neutrino oscillations <cit.> and the fact that baryonic matter yields only a few percent contribution to the energy density of the Universe <cit.> are two pieces of experimental evidence calling for physics beyond the standard model (SM). On the theoretical side, the apparent absence of CP violation in the QCD sector is also a strong motivation for going beyond the SM, since it can be dynamically explained by the Peccei-Quinn mechanism <cit.>, which requires extending the SM gauge group with a global symmetry and implies the existence of a pseudo-Nambu-Goldstone boson, the axion <cit.>. Besides elegantly solving the strong CP problem <cit.>, the Peccei-Quinn mechanism may also be related to the solution of the DM and neutrino puzzles by offering a candidate for cold DM, the axion itself <cit.>, and a connection to the neutrino mass generation <cit.>. In the same vein, ALPs, arising from the spontaneous breaking of approximate global symmetries, are also theoretically well motivated, since they appear in a variety of ultraviolet extensions of the SM <cit.> and, as in QCD axion models, they can make up all of the DM of the Universe <cit.> or serve as a portal connecting the DM particle to the SM sector <cit.>. Moreover, there are some astrophysical phenomena, such as the cosmic γ-ray transparency <cit.>, the x-ray excess from the Coma cluster <cit.> and the x-ray line at 3.55 keV <cit.>, that suggest the presence of ALPs. These hints have led to a plethora of search strategies involving astrophysical observations, as well as production and detection in laboratory experiments <cit.>, with the aim of establishing the ALPs properties. In the context of ALPs models, the approximate continuous symmetry is typically assumed to be a remnant of an exact discrete gauge symmetry, since gravity presumably breaks global symmetries through Planck-scale suppressed operators.
In other words, since the global symmetry is highly unstable, it is usually stabilized by imposing a discrete gauge symmetry <cit.> such as a Z_N symmetry <cit.> (see Refs. <cit.> for Z_N realizations in QCD axion models). This discrete gauge symmetry protects the ALPs mass against large gravity-induced corrections, and it can also be used to stabilize other mass scales present in the theory. In particular, with the aim of generating neutrino masses, the authors in Refs. <cit.> used these types of discrete gauge symmetries to protect the associated lepton-number-breaking scale. In this work, we go further by building a self-consistent framework of ALPs DM[Note that in <cit.> the ALPs are used to explain some astrophysical anomalies and not to account for the entire DM abundance.] and neutrino masses via the ISS mechanism <cit.>. For this purpose we make use of appropriate Z_N discrete gauge symmetries to protect the ALPs mass suitable for reproducing the correct DM relic abundance, as well as to stabilize the mass scales present in the ISS mechanism. It turns out that the ISS mass terms are determined, up to some factor, by v^n_σ/M^n-1_Pl, where v_σ is the vacuum expectation value (VEV) of the scalar field σ that spontaneously breaks the global symmetry U(1)_A and hosts the ALPs, a. The integer n is determined by the invariance of such terms under the symmetries of the model and by some phenomenological constraints. In order to implement the ISS mechanism we extend the SM matter content by introducing n_N_R (n_S_R) generations of SM-singlet fermions N_R (S_R), as is usual. In this work, we consider the minimal numbers of singlet fermionic fields that allow fitting all the experimental neutrino physics: the (n_N_R, n_S_R)=(2,2), (2,3) and (3,3) cases <cit.>. In each case, the ALPs plays the role of the DM candidate. Moreover, for the (2,3) ISS case there is the possibility of having a second DM candidate: the sterile neutrino (the unpaired singlet fermion) <cit.>. Motivated by that, we also build a multicomponent DM framework where the DM of the Universe is composed of ALPs and sterile neutrinos, with the latter being generated through the active-sterile neutrino mixing <cit.> and accounting for a fraction of the DM relic density. As far as phenomenological issues are concerned, since in each framework the approximate continuous symmetry is anomalous with respect to the electromagnetic gauge group (through an exotic vectorlike fermion), instead of being anomalous with respect to QCD, it is possible to build an effective interaction term involving the axion field and the electromagnetic field strength and its dual <cit.>. This in turn implies that the ALPs may be detected in current and/or proposed experiments that use the ALPs-photon coupling as their main interaction channel to search for ALPs <cit.>. Moreover, considering this particle as the dark matter candidate, it can be part of the Milky Way DM halo and could resonantly convert into a monochromatic microwave signal in a microwave cavity permeated by a strong magnetic field <cit.>. On the other hand, a large portion of the ALPs parameter space (e.g. low masses and couplings) is relatively unconstrained by experiment, since conventional experiments - helioscopes, haloscopes and others <cit.> - are only sensitive to axion particles whose Compton wavelength is comparable to the size of the resonant cavity; it is therefore important to look for new search strategies in order to cover other regions of the parameter space.
Reaching smaller values of the ALPs mass and of the ALPs-photon coupling requires a different experimental approach, like the one associated with the ABRACADABRA proposal <cit.>, which suggests a new set of experiments based on either broadband or resonant detection of an oscillating magnetic flux, designed for axion detection in the range m_a∈[10^-14,10^-6] eV. It is precisely these kinds of searches that can be used to probe the benchmark regions that we study within the (2,2) and (3,3) ISS cases.

The rest of the paper is organized as follows: in Sec. <ref> we discuss phenomenological and theoretical conditions that lead to a successful protection of the ALPs mass and the ISS texture against gravity effects. In Sec. <ref> we search for viable models simultaneously compatible with DM phenomenology, neutrino oscillation observables and lepton-flavor-violating processes. Finally, we present our discussion and conclusions in Sec. <ref>.

§ FRAMEWORK

The goal of this section is to present the main ingredients of a SM extension that links ALPs to neutrino mass generation and, at the same time, offers an explanation for the current DM relic density reported by the Planck Collaboration <cit.>. In order to achieve that, the SM matter content must be extended with some extra fields. Besides the scalar σ and the fermionic S_Rα and N_Rβ fields, an extra electrically charged fermion E is also added to the SM to make possible the coupling of ALPs to photons, g_aγ. This is necessary because g_aγ is anomaly induced, and there is no U(1)_A symmetry that is anomalous under the electromagnetic group with the SM charged fermions alone. The main role of the anomalous U(1)_A symmetry is to induce an ALPs coupling to two photons. As a consequence, the ALPs can, in principle, be found in current and/or proposed experiments that make use of the ALPs-photon coupling. We will show target regions of some experiments searching for ALPs in Fig. <ref>. Also, it will be found that the ALPs in some ISS cases discussed in this paper lie inside the regions of planned experiments <cit.>. Another key point of the framework is the existence of a Z_N discrete gauge symmetry. In order to understand its role, note first that imposing an anomalous U(1)_A symmetry on the Lagrangian does not seem sensible, in the sense that, in the absence of further constraints on very high energy physics, we should expect all relevant and marginally relevant operators that are forbidden only by this symmetry to appear in the effective Lagrangian with coefficients of order one. However, if this symmetry follows from some other anomaly-free symmetry, in our case from a Z_N discrete gauge symmetry, all terms which violate it are irrelevant in the renormalization group sense. Second, the Z_N symmetry also protects both the ALPs mass and the ISS texture against gravity effects, as we will explain in more detail later on. For these reasons, the effective Lagrangian will be invariant under a Z_N discrete gauge symmetry. Because the ALPs mass is very low and protected only by the U(1)_A symmetry, which is explicitly broken by gravity effects, the Z_N symmetry will have a high order. This also happens in models with QCD axions and is shared by all models with this type of stabilization mechanism <cit.>.
§.§ Lagrangian

The effective Lagrangian that we consider to relate the ISS mechanism to ALPs DM reads ℒ⊃ℒ_SM^Yuk+ℒ_σ+ℒ_ISS+ℒ_E, where ℒ_SM^Yuk is nothing more than the Yukawa Lagrangian of the SM, ℒ_SM^Yuk=Y_ij^(u)Q_LiHu_Rj+Y_ij^(d)Q_LiHd_Rj+Y_ij^(l)L_iHl_Rj+H.c., with the usual Q_Li,u_Ri,d_Ri and L_i,l_Ri fields denoting the quarks and leptons of the SM, respectively. H is the Higgs SU(2)_L doublet with H=iτ_2H^* (τ_2 is the second Pauli matrix). The term in ℒ_σ (the Lagrangian involving the σ field) which is relevant in our discussion is the following non-renormalizable operator: ℒ_σ⊃ gσ^D/M_Pl^D-4+H.c., with g=e^iδ|g| and D an integer. The σ field is parametrized as σ(x)=1/√(2)[v_σ+ρ(x)]e^ia(x)/v_σ, with a(x) being the ALPs field and ρ(x) the radial part, which gains a mass of the order of the vacuum expectation value <cit.>, 10^9≲√(2)⟨σ⟩≡ v_σ≲10^14 GeV. With the operator in Eq. (<ref>) and the σ(x) parametrization, the ALPs mass term is written as follows <cit.>: m_a=|g|^1/2DM_Plλ^D/2-1, where 10^-10≲λ≡v_σ/√(2)M_Pl≲10^-5 and M_Pl=2.44×10^18 GeV is the reduced Planck scale. Now, we turn our attention to the coupling of ALPs to photons, which is determined by the interaction term g_aγ/4 a F_μνF^μν, where F_μν and F^μν are the electromagnetic field strength and its dual, respectively. This term is anomaly induced and given by [Higher-order corrections to the g_aγ coupling are possible; for an extensive study of them see <cit.>. However, for the ALPs masses suitable to explain the observed DM relic density, all of them can be safely neglected.] g_aγ=α/2πC_aγ/v_σ, with α≈1/137. Here, the electromagnetic anomaly coefficient C_aγ reads <cit.>: C_aγ=2∑_ψ(X_ψ_L-X_ψ_R)(C_em^(ψ))^2, where C_em^(ψ) is the electric charge of the fermion ψ, and X_ψ_L,R is its charge under the U(1)_A symmetry. This anomaly coefficient is of order one (more specifically, 1 or 2) in our models, and it directly determines the width of the red band in Figure <ref>, where ALPs are DM candidates. Also, it is important to note that a non-null anomaly coefficient guarantees that g_aγ≠0. This is the reason why the total Lagrangian in Eq. (<ref>) is required to be invariant under an anomalous U(1)_A global symmetry. Nevertheless, with only the SM fermions and the neutral S_Rα and N_Rβ fermions, it is not possible to have a U(1)_A symmetry anomalous in the electromagnetic group. Therefore, we need to include the SU(2)_L singlet fermion, E, with one unit of electric charge. On the other hand, the dimension D of the gravity-induced mass operator in Eq. (<ref>) must be, in general, larger than 4 because of the astrophysical and cosmological constraints on the properties of ALPs. To be more specific, we show in Figure <ref> some regions of the ALPs parameter space - g_aγ vs m_a - where ALPs give an explanation for some astrophysical anomalies, together with other, forbidden regions <cit.>. Regarding the neutrino mass generation, once the N_Rβ and S_Rα fields are introduced, the ℒ_ISS Lagrangian reads: ℒ_ISS= y_iβL_iHN_Rβ+ζ_αβσ^p/M_Pl^p-1S_Rα(N_Rβ)^C+η_αα'σ^q/2M_Pl^q-1S_Rα(S_Rα')^C +θ_ββ'σ^r/2M_Pl^r-1N_Rβ(N_Rβ')^C+ H.c., where the y_iβ, ζ_αβ, η_αα', θ_ββ' coupling constants, with i,j=1,2,3, α,α'=1,2,(or 3) and β,β'=1,2,(or 3), are generically assumed to be of order one. The exponents p,q,r are integers chosen to satisfy some phenomenological constraints discussed below. Negative values for these exponents mean that the corresponding term is ∼σ^* n instead of ∼σ^n. Note that, without loss of generality, the exponent p can be assumed to be positive.
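As a quick numerical illustration of the two expressions above, the following sketch evaluates m_a and g_aγ. The inputs D=17 and v_σ=3.08×10^13 GeV anticipate one of the benchmark points discussed in Sec. <ref>, while |g|=1 and C_aγ=2 are illustrative assumptions:

```python
import math

M_PL = 2.44e18        # reduced Planck scale in GeV
ALPHA = 1.0 / 137.0   # fine-structure constant

def alp_mass_eV(D, v_sigma, g=1.0):
    # m_a = |g|^(1/2) D M_Pl lambda^(D/2 - 1), with lambda = v_sigma / (sqrt(2) M_Pl)
    lam = v_sigma / (math.sqrt(2.0) * M_PL)
    return math.sqrt(abs(g)) * D * M_PL * lam ** (D / 2.0 - 1.0) * 1.0e9  # GeV -> eV

def alp_photon_coupling(v_sigma, C_agamma=2.0):
    # g_agamma = alpha C_agamma / (2 pi v_sigma), in GeV^-1
    return ALPHA * C_agamma / (2.0 * math.pi * v_sigma)

print(alp_mass_eV(17, 3.08e13))       # ~5.6e-10 eV
print(alp_photon_coupling(3.08e13))   # ~7.5e-17 GeV^-1
```

The steep λ^(D/2-1) dependence makes clear why modest changes in D move m_a by many orders of magnitude.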
We will only consider the minimal numbers of neutral fermionic fields, S_Rα and N_Rβ, that allow fitting all the experimental neutrino physics <cit.>. Specifically, we study the (2,2), (2,3) and (3,3) cases. As the σ field gets a VEV, the gravity-induced terms in Eq. (<ref>) give the mass matrix for light (active) and heavy neutrinos <cit.>. Specifically, we can write the mass matrix in the (ν_L,N_R^C,S_R^C) basis as M_ν=[[ 0 M_D^⊺; M_D M_R ]], with M_D≡[[ m_D; 0 ]] and M_R≡[[ μ_N M^⊺; M μ_S ]], where m_D, M, μ_N and μ_S are matrices with dimensions n_N_R×3, n_N_R× n_S_R, n_N_R× n_N_R and n_S_R× n_S_R, respectively. The energy scales of the entries in these matrices are determined essentially by √(2)⟨ H⟩≡ v_SM≃246 GeV, λ (or v_σ) and M_Pl as follows: m_D iβ=y_iβv_SM/√(2), M_αβ=ζ_αβM_Plλ^p, μ_S αα'=η_αα'M_Plλ^|q|, μ_N ββ'=θ_ββ'M_Plλ^|r|. The mass matrix in Eq. (<ref>) allows light active neutrino masses of order sub-eV without resorting to very large energy scales, in contrast to the type I seesaw mechanism <cit.>. In more detail, assuming the hierarchy μ_N≲μ_S≪ m_D<M (note that making μ_S and μ_N small is technically natural) and taking a matrix expansion in powers of M^-1, the light active neutrino masses, at leading order, are approximately given by the eigenvalues of the matrix <cit.> m_νlight ≃m_D^⊺M^-1μ_S(M^⊺)^-1m_D. On the other hand, the heavy neutrino masses are given by the eigenvalues of m_νheavy≃ M_R. Note from Eq. (<ref>) that μ_N does not contribute to the light active neutrino masses at leading order <cit.>. Actually, the presence of the μ_N term gives a subleading contribution to m_νlight of order m_D^⊺M^-1μ_S(M^⊺)^-1μ_NM^-1μ_S(M^⊺)^-1m_D, which is a factor μ_Sμ_N/M^2 smaller than the leading contribution <cit.>. Well-motivated scales for M and μ_S, μ_N are the TeV and keV scales, respectively. These scales allow getting active neutrino masses at the sub-eV scale without requiring very small Yukawa couplings and, in some scenarios, such as the (2,3) ISS case, the existence of a keV sterile neutrino as a warm dark matter (WDM) candidate <cit.>. In addition, M has to satisfy M≳√(10μ_S/keV) TeV, because the light active neutrino masses are at the sub-eV scale and m_D is of order v_SM. Another constraint on the M scale comes from the fact that the mixing matrix that relates the three left-handed neutrinos to the three lightest mass-eigenstate neutrinos is no longer unitary. This implies that deviations in some SM observables may be expected, such as additional contributions to the ℓν W vertex and to lepton-flavor and CP-violating processes, and non-standard effects in neutrino propagation <cit.>. For example, in the inverse seesaw model, the violation of unitarity is of order ϵ^2, with ϵ≡ m_DM^-1 being approximately the mixing between light active and heavy neutrinos <cit.>. Roughly speaking, ϵ^2 at the percent level is not excluded experimentally <cit.>. Taking into account the previous considerations, the ranges chosen for M and μ_S are 1≤ M≤25 TeV, 0.1≤μ_S≤50 keV. Once the scales of the mass matrices are established, and using Eqs. (<ref>) and (<ref>) (following a procedure similar to that in Ref. <cit.>), the integers p and q in Eq. (<ref>) can only take the values (p,|q|) = (2,3) for 6×10^10≲ v_σ≲1×10^11 GeV, (p,|q|) = (3,5) for 2×10^13≲ v_σ≲8×10^13 GeV. This happens because the same VEV simultaneously provides the M and μ_S scales.
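These two windows can be recovered by a direct scan: one asks for which v_σ the combinations M ≈ M_Pl λ^p and μ_S ≈ M_Pl λ^|q| fall inside the ranges quoted above. A minimal sketch, assuming all couplings set to one (so the windows come out slightly narrower than the quoted ranges, which also allow order-one coupling variations):

```python
import numpy as np

M_PL = 2.44e18  # GeV

for p, q_abs in [(2, 3), (3, 5)]:
    vs = np.logspace(10.0, 14.5, 5000)            # candidate v_sigma in GeV
    lam = vs / (np.sqrt(2.0) * M_PL)
    M, mu_S = M_PL * lam**p, M_PL * lam**q_abs    # order-one couplings assumed
    ok = (M > 1e3) & (M < 25e3) & (mu_S > 1e-7) & (mu_S < 5e-5)  # 1-25 TeV, 0.1-50 keV
    print((p, q_abs), f"{vs[ok].min():.1e} - {vs[ok].max():.1e} GeV")
```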
Note that for both possibilities in Eqs. (<ref>) and (<ref>), the light active neutrino mass matrix in Eq. (<ref>) simplifies to m_νlight=[y^⊺ζ^-1η(ζ^⊺)^-1y]v_SM^2/√(2)v_σ. Moreover, the exponent r of the term that generates μ_N in Eq. (<ref>) is also constrained to be r≥ |q|, because μ_N must be ≲μ_S. Finally, we have that ℒ_E, the Lagrangian involving the charged fermion E, is written as ℒ_E ⊃ ϑ_iσ^s/M_Pl^sL_iHE_R+κσ^t/M_Pl^t-1E_LE_R+H.c., where ϑ_i and κ are Yukawa couplings, in principle assumed to be of order one. These two terms are also subject to phenomenological and theoretical constraints, as follows. The term ∼σ^tE_LE_R must give a mass large enough for the E fermion to satisfy its experimental constraints - for a stable charged heavy lepton, m_E>102.6 GeV at 95% C.L. <cit.>, or for a charged long-lived heavy lepton, m_E>574 GeV at 95% C.L. assuming a mean life above 7× 10^-10-3× 10^-8 s <cit.> - so t must be less than or equal to 3. It must be different from zero because the electromagnetic anomaly must be present. On the other hand, s can take the values 1 or 2 because ∼σ^sLHE_R determines the interaction of the E fermion with the SM leptons, and if s were larger than 2, the charged E fermion would become stable enough to bring cosmological problems, unless its mass were ≲ TeV. Another constraint comes from searches for long-lived particles in pp collisions <cit.>. Now an important discussion about the stability of both the ISS mechanism and the ALPs mass is in order. In general, the gravitational effects must be controlled to give a suitable ALPs mass. With this aim, we introduced a discrete gauge Z_N symmetry, assumed to be a remnant of a gauge symmetry valid at very high energies <cit.>. Thus, to truly protect the ALPs mass against those effects, Z_N must at least be anomaly free <cit.>, i.e., A_2(Z_N)=A_3(Z_N)=A_grav(Z_N)=0 Mod N/2, where A_2, A_3 and A_grav are the [SU(2)_L]^2× Z_N, [SU(3)_C]^2× Z_N and [gravitational]^2× Z_N anomalies, respectively. Other anomalies, such as Z_N^3, do not give useful low-energy constraints because they depend on some arbitrary choices concerning the full theory. Gravitational effects can also generate terms such as σ^n/M_Pl^n-1S_RS_R^C, σ^n/M_Pl^n-1S_RN_R^C, σ^n/M_Pl^n-1N_R^CN_R or σ^n/M_Pl^nLHS_R (with n smaller than those in the Lagrangian (<ref>)) that jeopardize both the matrix structure - Eqs. (<ref>) and (<ref>) - and the scales of the ISS mechanism. Thus, Z_N will be chosen such that it also prevents these undesirable terms from appearing. In general, the Z_N symmetry can be written as a linear combination of the continuous symmetries in the model: the hypercharge Y, the baryon number B and the generalized lepton number 𝕃. The charge assignments for the B and 𝕃 symmetries are shown in Table <ref>, whereas the assignment for the Y symmetry is the canonical one. Nevertheless, since the hypercharge is anomaly free by construction, the Z_N charges (Z) of the fields can be written as Z=c_1B+c_2𝕃, where c_1,2 are rational numbers chosen so as to make the Z_N charges integers <cit.>. Now, substituting the charges in Table <ref> into the general form of the Z_N symmetry (see Refs. <cit.>) we can obtain the anomaly coefficients. Doing so, we find that A_3(Z_N)=0 and A_2(Z_N) =3/2[c_1+c_2], A_grav(Z_N) = c_2[3-n_N_R-n_S_R ×[qd/2]+sd]. Note that A_2(Z_N) and A_grav(Z_N) are not, in general, 0 Mod N/2, which implies strong constraints on the choice of the Z_N discrete symmetry.
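Before turning to the DM candidates, it is easy to check numerically how well the leading-order seesaw formula m_νlight ≃ m_D^⊺M^-1μ_S(M^⊺)^-1m_D approximates the exact light spectrum. The sketch below builds the full 7×7 mass matrix of a (2,2) ISS toy point (the numerical scales m_D ~ 100 GeV, M ~ 2 TeV, μ_S ~ 0.1 keV are illustrative assumptions within the ranges above) and compares its three lightest eigenvalues with those of the approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
sym = lambda a: 0.5 * (a + a.T)   # mu_S and mu_N must be symmetric

# (2,2) ISS toy point, all entries in GeV
m_D  = 1.0e2 * rng.uniform(0.5, 1.5, (2, 3))          # ~ y v_SM / sqrt(2)
M    = 2.0e3 * rng.uniform(0.5, 1.5, (2, 2))          # ~ TeV scale
mu_S = sym(1.0e-7 * rng.uniform(0.5, 1.5, (2, 2)))    # ~ 0.1 keV
mu_N = sym(1.0e-8 * rng.uniform(0.5, 1.5, (2, 2)))

# full mass matrix in the (nu_L, N_R^C, S_R^C) basis
Z = np.zeros
M_nu = np.block([[Z((3, 3)), m_D.T, Z((3, 2))],
                 [m_D,       mu_N,  M.T      ],
                 [Z((2, 3)), M,     mu_S     ]])

full = np.sort(np.abs(np.linalg.eigvalsh(M_nu)))[:3]  # three lightest states
Minv = np.linalg.inv(M)
approx = np.sort(np.abs(np.linalg.eigvalsh(m_D.T @ Minv @ mu_S @ Minv.T @ m_D)))

print("full  [eV]:", full * 1e9)     # one ~0 eigenvalue: m_D has rank 2 in the (2,2) case
print("approx[eV]:", approx * 1e9)   # closely agrees, corrections are O(eps^2)
```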
§.§ ALPs and sterile neutrino dark matter

Since the ALPs are very weakly interacting slim particles and cosmologically stable, they can be considered as DM candidates <cit.>. In fact, ALPs may be nonthermally produced via the misalignment mechanism in the early Universe and survive as a cold dark matter population until today. Specifically, the ALPs relic density is determined from the following equation <cit.>: Ω_a,DMh^2≈0.16 [Θ_i/π]^2×[m_a/eV]^1/2[v_σ/10^11 GeV]^2, where Θ_i is the initial misalignment angle, which is taken as π/√(3) because we are assuming a post-inflationary symmetry-breaking scenario, favorable for models with v_σ≲10^14 GeV <cit.>. On the other hand, the fraction of the DM abundance in the form of sterile neutrinos depends on the sterile neutrino mass, m_ν_S, and its mixing angle with the light active neutrinos, θ. Specifically, ν_S as a WDM candidate can be generated through the well-known Dodelson-Widrow (DW) mechanism <cit.>, which operates as long as the active-sterile mixing is not zero <cit.>. In the (2,3) ISS case, the sterile neutrino produced through the DW mechanism can account for at most ≈ 43% of the observed relic density without conflicting with observational constraints <cit.>. This DM amount can be slightly increased to ≈48% when including the effect of the entropy injection of the pseudo-Dirac neutrinos, provided the lightest pseudo-Dirac neutrino has a mass of 1-10 GeV <cit.>. We are not going to consider these effects here. For m_ν_S>0.1 keV, the relic density produced in the usual DW mechanism is given by <cit.> Ω_ν_S,DMh^2= 1.1×10^7∑_αC_α(m_S)|U_α S|^2[m_ν_S/keV]^2; α=e,μ,τ, where the C_α(m_S) are active-flavor-dependent coefficients which are calculated by solving the Boltzmann equations numerically (an appropriate value in this case is C_α(m_S)≃0.8 <cit.>). We also have that the sum over the |U_α S|^2, the elements of the leptonic mixing matrix, gives the active-sterile mixing, i.e., ∑_α|U_α S|^2∼sin^2(2θ). For the case m_ν_S<0.1 keV there is a simpler expression, written as follows <cit.>: Ω_ν_S,DMh^2=0.3[sin^22θ/10^-10][m_ν_S/100 keV]^2. After imposing bounds coming from stability, structure formation and indirect detection, in addition to the constraints arising from the neutrino oscillation experiments, it was found that the sterile neutrino as WDM in the (2,3) ISS provides a sizable contribution to the DM relic density for 2≲ m_ν_S≲50 keV and active-sterile mixing angles 10^-11≲sin^2(2θ)≲10^-8 <cit.>, where the maximal fraction of DM made of ν_S is achieved when m_ν_S≃7 keV <cit.>. Once the DM candidates and the parameters that determine the relic density in each case are established, we are going to search for models satisfying all the conditions mentioned in Section <ref>, as well as Ω_DM^Planckh^2=Ω_ν_S,DMh^2+Ω_a,DMh^2, where Ω_DM^Planckh^2=0.1197±0.0066 (at 3σ) is the current relic density as reported by the Planck Collaboration <cit.>.
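Both relic-density formulas are straightforward to evaluate. A sketch, where the benchmark-like inputs (m_a, v_σ, m_ν_S and the DM fractions) are assumptions used only for illustration:

```python
import math

def omega_alp_h2(m_a_eV, v_sigma, theta_i=math.pi / math.sqrt(3.0)):
    # misalignment relic density of the ALPs
    return 0.16 * (theta_i / math.pi) ** 2 * math.sqrt(m_a_eV) * (v_sigma / 1e11) ** 2

def omega_sterile_h2(m_keV, sin2_2theta, C_alpha=0.8):
    # Dodelson-Widrow relic density, valid for m_nuS > 0.1 keV
    return 1.1e7 * C_alpha * sin2_2theta * m_keV ** 2

print(omega_alp_h2(5.59e-10, 3.08e13))       # ~0.12: ALPs supplying all of the DM

# active-sterile mixing needed for nu_S to supply ~43% of 0.1197 at 7.1 keV
target = 0.43 * 0.1197
print(target / (1.1e7 * 0.8 * 7.1 ** 2))     # ~1.2e-10, inside the allowed window
```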
§ MODELS

In the previous section, we introduced the general and minimal constraints that models have to satisfy. Now, we proceed to find specific models that give an explanation of the dark matter observed in the Universe. In particular, the (2,2), (3,3) and (2,3) cases of the ISS mechanism are studied in detail. For each model we check the compatibility (at 3σ) with the experimental neutrino physics <cit.> for the normal mass ordering and vanishing CP phases, by varying the free Yukawa couplings y_iβ, ζ_αβ, η_αα', θ_ββ' in the range ∼(0.1,3.5). Additionally, we also analyze lepton-flavor-violating processes such as ℓ_β→ℓ_α+γ, which are induced at one loop by the W boson and the heavy neutrinos. The corresponding branching ratios read <cit.> Br(ℓ_β→ℓ_αγ)= α_W^3s_W^2/256π^2 m_ℓ_β^5/m_W^4Γ_ℓ_β ×|∑_iU_β i^*U_α iG(m_N_i^2/m_W^2)|^2, where G(x)=x(1-6x+3x^2+2x^3-6x^2log(x))/[4(1-x)^4], Γ_ℓ_β is the total decay width of ℓ_β and U represents the lepton mixing matrix. We verify that each ISS model is compatible with the current experimental limits Br(μ→ eγ)<5.7×10^-13 <cit.>, Br(τ→ eγ)<3.3×10^-8 and Br(τ→μγ)<4.4×10^-8 <cit.>.

§.§ (2,2) ISS case

Among the minimal configurations of the ISS mechanism consistent with the experimental neutrino physics and lepton-flavor-violating (LFV) processes <cit.> (for a recent review see Ref. <cit.>), we first study the (2,2) ISS case, because it is the minimal configuration that satisfies all the constraints coming from experimental neutrino physics. For this case, the neutrino mass spectrum contains two heavy pseudo-Dirac neutrinos with masses ∼ M and three light active neutrinos with masses of order sub-eV coming from the mass matrix in Eq. (<ref>) <cit.>. Because in this case n_N_R=n_S_R=2 (and similarly for the (3,3) ISS case), there is no light sterile neutrino ν_S in the mass spectrum. Therefore, all the current DM abundance must be constituted by ALPs, i.e. Ω_DM^Planckh^2=Ω_a,DMh^2. In order to find the main features of the model, it is useful to rewrite Ω_a,DMh^2 in terms of D - the exponent of the mass operator for σ, Eq. (<ref>) - and m_a. Thus, substituting Eqs. (<ref>) and (<ref>) in Eq. (<ref>), we find that Ω_a,DMh^2≃ 0.49 |g|^1/4 √(D)exp[-D/4ln√(2)M_Pl/1 GeV] ×[v_σ/1 GeV]^D+6/4, where the coupling is assumed to lie in the range 10^-3≤|g|≤2. Thus, we can see that Ω_a,DMh^2 only depends on (|g|, D, v_σ). In Table <ref> we show (D, v_σ) values for the cases where Ω_a,DMh^2=Ω_DM^Planckh^2 and Ω_a,DMh^2=0.57×Ω_DM^Planckh^2. The latter case applies only to the (2,3) ISS case and will be discussed in Section <ref>.
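For given D and |g|, the expression above can be inverted in closed form for the v_σ that yields a target Ω_a,DMh^2, which is how entries like those of Table <ref> can be reproduced. A sketch, assuming |g|=1 (the |g| range widens the resulting v_σ intervals):

```python
import math

M_PL = 2.44e18  # GeV

def v_sigma_for_relic(D, g=1.0, omega=0.1197):
    # invert Omega h^2 = 0.49 |g|^(1/4) sqrt(D) exp[-(D/4) ln(sqrt(2) M_Pl)] v^((D+6)/4)
    ln_v = (math.log(omega / (0.49 * abs(g) ** 0.25 * math.sqrt(D)))
            + (D / 4.0) * math.log(math.sqrt(2.0) * M_PL)) / ((D + 6.0) / 4.0)
    return math.exp(ln_v)

for D in (9, 10, 15, 17, 19):
    print(D, f"{v_sigma_for_relic(D):.2e} GeV")   # D = 17 gives ~3.1e13 GeV (cf. M22a)
```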
In order to obtain the Lagrangian in this scenario, we search for discrete symmetries for the two possibilities shown in Eqs. (<ref>-<ref>) and for different values of r,s,t, according to their respective constraints, as follows. Considering Eqs. (<ref>-<ref>) and Table <ref>, we can see that, for the range of v_σ established in Section <ref>, only the values D=9,10,16-19 are allowed to reproduce the correct relic density for ALPs. Thus we searched for discrete symmetries Z_N that allow the mass operators with those dimensions D, with the following results. The Z_9,10 symmetries allow terms such as σ^*/M_PlL̅HS_R, σ^*2/M_PlN̅_RN_R^C, L̅HS_R, and since H and σ get VEVs, these terms do not give the appropriate zero texture of the ISS mechanism shown in Eqs. (<ref>) and (<ref>). We searched for all the possible combinations of r,s,t values in the Lagrangian of Eq. (<ref>) without any success. On the other hand, the Z_16,18 symmetries are not free of the gravitational anomaly. In fact, the Z_N≤20 discrete symmetries that satisfy all the anomaly constraints and stabilize the ISS mechanism are Z_17,19. In the case of the Z_17 symmetry, the Lagrangian, ℒ_Z_17, is given by Eq. (<ref>) with the parameter D=17 and (p,q,r,s,t)=(3,-5,-6,2,2) in Eqs. (<ref>), (<ref>) and (<ref>), respectively. An assignment of the Z_17 (with Z_17=6B+11U(1)_𝕃) charges and the anomalous U(1)_A symmetry for this case is shown in Table <ref>. Note that, for this model, the term ∼σ^*6N_Rβ(N_Rβ')^C in Eq. (<ref>) gives a negligible contribution to the light active neutrino masses. The corresponding g_aγ and m_a for this model are given by g_aγ≅ 7.54×10^-17[3.08×10^13 GeV/v_σ] GeV^-1, m_a≅5.59×10^-10|g|^1/2[v_σ/3.08×10^13 GeV]^15/2 eV. The benchmark region for this case is denoted as M22a in Figure <ref>, where we have considered 10^-3≤|g|≤2 and 2.9×10^13≲ v_σ≲4.2×10^13 GeV. These values of g_aγ and m_a allow the ALPs to explain 100% of the DM relic density. Sharp predictions for the neutrino masses are not possible with just the knowledge of the p,q,r,s,t values and v_σ. However, the order of magnitude of the mass matrices can be estimated from Eqs. (<ref>) and (<ref>) to be (using v_σ≅3.08×10^13 GeV) M≅ζ×1.73 TeV, μ_S≅η×0.13 keV, m_νlight≃[y^⊺ζ^-1η(ζ^⊺)^-1y]×1.38 eV, which is appropriate to satisfy the constraints coming from experimental neutrino physics and unitarity without resorting to fine-tuning of the couplings. Nevertheless, we have to admit that some care must be taken in order to generate the benchmark region M22a in agreement with bounds coming from LFV processes such as μ→ e+γ. Specifically, because m_N_i∼ M≫ m_W, the loop function tends to G(x)→ 1/2 and the mixing terms are generically given by U∼ m_D/M. This leads to a branching ratio for μ→ e+γ of order Br(μ→ eγ)∼ 1.1×10^-13(m_D/10 GeV)^4(3 TeV/M)^4, which implies that small couplings y∼0.1 are required. For the case with Z_19, the effective Lagrangian is characterized by (p,q,r,s,t)=(3,-5,-8,-2,1), and the results, roughly speaking, are quite similar to those of the model with Z_17, in the sense that, as the p,q,|s| values are equal for both models, the neutrino spectrum is similar in both cases. Nevertheless, since the D,t values are not equal, the ALPs mass, the mass term for the exotic fermion E and the ALPs-photon coupling, g_aγ, are different. Specifically, from Table <ref> and Eq. (<ref>), in the Z_19 model the ALPs parameters are g_aγ≅ 1.12×10^-17[1.0×10^14 GeV/v_σ] GeV^-1, m_a≅1.87×10^-10|g|^1/2[v_σ/1.04×10^14 GeV]^17/2 eV. The benchmark region corresponding to this case in Figure <ref> is denoted as M22b. We also show the values for the neutrino mass spectrum in Table <ref>. Concerning the upper bound on μ→ e+γ, it is easily fulfilled due to the larger suppression coming from M∼ 50 TeV.
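The order-of-magnitude μ→eγ estimate above can be cross-checked by evaluating the branching-ratio formula of Eq. (<ref>) directly. A sketch, treating the heavy sector as a single effective state with mixing U ≈ m_D/M; the electroweak inputs are standard assumed values:

```python
import math

ALPHA, SW2 = 1.0 / 137.0, 0.231                # assumed electroweak inputs
M_W, M_MU, GAMMA_MU = 80.4, 0.1057, 3.0e-19    # GeV

def G(x):
    # loop function of Eq. (<ref>); G(x) -> 1/2 for x >> 1
    return x * (1 - 6*x + 3*x**2 + 2*x**3 - 6*x**2 * math.log(x)) / (4 * (1 - x)**4)

def br_mu_e_gamma(m_D, M):
    # single effective heavy state, U ~ m_D/M: an order-of-magnitude estimate only
    alpha_w = ALPHA / SW2
    pref = alpha_w**3 * SW2 / (256 * math.pi**2) * M_MU**5 / (M_W**4 * GAMMA_MU)
    delta = (m_D / M) ** 2 * G((M / M_W) ** 2)
    return pref * delta**2

print(br_mu_e_gamma(10.0, 3000.0))   # ~1e-13, cf. the estimate above and the MEG bound 5.7e-13
```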
§.§ (3,3) ISS case

Regarding the neutrino mass spectrum, the (3,3) ISS case is quite similar to the previous one in the sense that there is no light sterile neutrino in the mass spectrum, because n_N_R=n_S_R=3. Therefore, all the DM abundance in this model has to be made of ALPs. Proceeding in a manner similar to the (2,2) ISS case, and taking into account that A_grav(Z_N) is now different (see Eq. (<ref>)), we searched for all anomaly-free Z_N discrete symmetries with N≤20 and with (p, q, r, s, t) values established according to the constraints in Section <ref>. Doing that, we found the following results: the Z_9 symmetry is not free of gravitational anomalies, while the Z_10 symmetry allows dangerous terms such as L̅HS_R, σ^*/M_PlL̅HS_R, σN̅_RN_R^C, and others that jeopardize the matrix structure in Eqs. (<ref>) and (<ref>); therefore the possibility of building a model for the solution in Eq. (<ref>) is not realized. On the other hand, the Z_16,18 symmetries corresponding to the solution in Eq. (<ref>) are not free of gravitational anomalies, so these are not suitable symmetries either. However, the Z_17,19 symmetries forbid the dangerous terms and allow an effective Lagrangian. In the case of the Z_17 symmetry, the Lagrangian in Eq. (<ref>) is characterized by the parameters (p, q, r, s, t)=(3, -5, 7, 2, 1) and D=17. Note that this model has a Lagrangian very similar to the (2,2) ISS Lagrangian. However, in this case, the mass term for the exotic fermion E has exponent equal to one, and the term associated with μ_N is not allowed with dimension less than seven. Because the parameters (p, |q|) are equal in both cases, the neutrino spectrum is the same as in the M22a model (see Eqs. (<ref>)). Moreover, note that in this case the term ∼σ^7N_Rβ(N_Rβ')^C gives a negligible contribution to the light active neutrino masses. On the other hand, the fact that the mass term for the exotic fermion differs from that of the (2,2) ISS model implies that the anomaly coefficient C_aγ is different (see the charges in Table <ref> and Eq. (<ref>)), so that the ALPs-photon coupling also takes a different value. Possible assignments for the Z_17 and U(1)_A symmetries are shown in Table <ref>, with Z_17=9B+11U(1)_𝕃. The corresponding v_σ value is the same as in the (2,2) ISS case, shown in Table <ref> for D=17, implying also that m_a is equal to that given in Eq. (<ref>). Nevertheless, g_aγ turns out to be g_aγ ≅3.77×10^-17[3.08×10^13 GeV/v_σ] GeV^-1, because the anomaly coefficient now has a different value. A benchmark region for this case is denoted as M33a in Figure <ref>; these values of g_aγ and m_a allow the ALPs to explain 100% of the DM relic density. For the Z_19 case, we find that the model is determined by the parameters (p, q, r, s, t)=(3, -5, -8, 2, 2), which leads to conclusions similar to those of the M22b model, with some differences coming from the anomaly coefficient C_aγ. Specifically, the coupling is g_aγ≅ 2.24×10^-17[1.0×10^14 GeV/v_σ] GeV^-1. The other parameters, associated with the neutrino spectrum and m_a, are similar to those of the M22b case, and are shown in Table <ref>. The benchmark region for this case is denoted as M33b in Figure <ref>. On the other hand, the constraints and prospects regarding lepton-flavor-violating processes are similar to the ones in the (2,2) case, since the mass scales M of the benchmark regions M33a and M33b are the same as those of the benchmark regions M22a and M22b, respectively. We remark that a similar effective Lagrangian for the (3,3) ISS case was considered in Ref. <cit.> with the aim of explaining some astrophysical phenomena. However, in that case, the DM abundance via ALPs was not considered.

§.§ (2,3) ISS case

For this case, because there are n_N_R=2 and n_S_R=3 neutral fermions, the neutrino mass spectrum contains two heavy pseudo-Dirac neutrinos with masses ∼ M and three light active neutrinos with masses of order sub-eV. In addition, there is a sterile neutrino, ν_S, with mass of order ∼μ_S. Thus, for this model, the presence of both ν_S and the ALPs, a, brings the possibility of having two DM candidates in the (2,3) scenario <cit.>. First, let us consider the case Ω_DM^Planckh^2=Ω_a,DMh^2, i.e., when the DM abundance is totally constituted by ALPs. Now, from Eqs. (<ref>) and (<ref>) and Table <ref>, we can see that (D, v_σ)=(9, (0.6-1.1)×10^11 GeV) and (D, v_σ)=(10, (1.9-3.2)×10^11 GeV) correspond to the (p,|q|)=(2,3) solution in Eq. (<ref>) (note that the v_σ corresponding to D=10 is slightly outside the allowed range in Eq. (<ref>)). Moreover, the values D=9,10 restrict the symmetry to be Z_9,10. For these discrete symmetries we find solutions for anomaly-free Z_9 and Z_10 charges, i.e., solutions to Eqs. (<ref>) and (<ref>) with (p,|q|)=(2,3).
Nevertheless, all the solutions for the Z_9 and Z_10 charges allow terms such as ∼σN_Rβ(N_Rβ')^C, ∼σ^*2/M_PlN_Rβ(N_Rβ')^C, ∼σ^*/M_PlL_iHS_Rα and other terms in the Lagrangian that do not give the correct texture to the mass matrix of the ISS mechanism. We also searched for all the possible combinations of r,s,t values in the Lagrangian (<ref>) with (p,|q|)=(2,3), without any success. Therefore, the (p,|q|)=(2,3) case cannot offer a realization of an effective model providing all the observed DM abundance via ALPs when all the constraints in Section <ref> are considered. However, from Table <ref> we see that for D=15,…,19, with a larger value of v_σ, the second solution, (p,|q|)=(3,5), cf. Eq. <ref>, can in principle offer a model (note that, strictly speaking, the v_σ value corresponding to D=15 is slightly outside the allowed range in Eq. (<ref>)). Moreover, the cases of Z_17 and Z_19 are excluded because the condition for the gravitational anomaly is never satisfied, while in the Z_16,18 cases terms such as ∼L_iHS_Rα and ∼σN_Rβ(N_Rβ')^C give an incorrect texture for the ISS mass matrix. In fact, after imposing all the constraints, we find that the only symmetry that provides a solution is Z_15. In more detail, we find that the discrete symmetry can be written as Z_15=9B+11U(1)_𝕃 (other combinations for Z_15 are possible). This model has the effective Lagrangian, ℒ_Z_15, given by Eqs. (<ref>), (<ref>) and (<ref>) with (p,q,r,s,t)=(3,-5,-4, 2, 2). Note that the term ∼σ^*4N_Rβ(N_Rβ')^C gives a negligible contribution to the light active neutrino masses. ℒ_Z_15 is also invariant under a U(1)_A symmetry which is anomalous in the electromagnetic group, as it must be to generate a non-null coupling between photons and ALPs, g_aγ (see Sec. <ref>). Specifically, for this case the ALPs parameters are given by g_aγ≃ 2.25×10^-16[1.03×10^13 GeV/v_σ] GeV^-1, m_a≃4.47×10^-8|g|^1/2[v_σ/1.03×10^13 GeV]^13/2 eV. We also check that the neutrino mass spectrum for this model is M≃ζ×6.5×10^-2 TeV; μ_S≃η×5.8×10^-4 keV; m_νlight≃[y^⊺ζ^-1η(ζ^⊺)^-1y]×4.15 eV, where we have used the particular value v_σ≃1.03×10^13 GeV, which is one of the suitable values given in Table <ref> for D=15, giving 100% of the current DM abundance. For this case, the sterile neutrino as a DM candidate gives a negligible contribution, because the small scales in Eq. (<ref>) imply that the mixing angle between the active and sterile neutrinos is strongly suppressed. Moreover, the mass scale of the sterile neutrino, μ_S, is too small to give a considerable contribution to the DM. Now, from the values of M, μ_S, m_νlight in Eq. (<ref>), we note that in this scenario there is some tension in satisfying the unitarity constraint. In more detail, |y/ζ|<M/v_SM×10^-1=2.6×10^-2, where we have been conservative in choosing an ϵ^2 value of 1% (recall ϵ≡ m_DM^-1). However, this upper bound on |y/ζ| implies a lower bound η>|y/ζ|^-2m_νlight/4.15≈|y/ζ|^-2√(Δ m_atm^2)/4.15≈17.17 (with Δ m_atm^2=2.32×10^-3 eV^2 being the atmospheric squared-mass difference), which is not a perturbative value for η. This happens because the value of v_σ corresponding to D=15 is smaller than the values allowed in the range in Eq. (<ref>). Similar conclusions are found if we consider the case Ω_a,DMh^2<Ω_DM^Planckh^2. Therefore, the effective Lagrangian ℒ_Z_15 cannot provide a natural framework for DM and the neutrino masses in the (2,3) ISS case. For this reason we do not show the benchmark region for this model in Figure <ref>.
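The arithmetic behind this tension is compact enough to spell out. A sketch following the estimates just quoted (the 1% non-unitarity budget and order-one ζ are assumptions):

```python
import math

M_GeV, v_SM = 65.0, 246.0            # Z_15 model scale M ~ 0.065 TeV
y_over_zeta_max = 0.1 * M_GeV / v_SM # from eps^2 = (m_D / M)^2 < 1%
dm2_atm = 2.32e-3                    # atmospheric squared-mass difference in eV^2
eta_min = (math.sqrt(dm2_atm) / 4.15) / y_over_zeta_max**2

print(y_over_zeta_max)  # ~2.6e-2
print(eta_min)          # ~17: non-perturbative, confirming the tension
```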
However, models explaining the DM relic density via ALPs and/or sterile neutrinos in the (2,3) ISS case can be found provided we slightly relax some of the constraints mentioned in Section <ref>. Actually, if an extra Z_N symmetry is allowed, we find that, for example, the solution (p,|q|)=(2,3) in Eq. (<ref>) makes possible a model with D=10 and (p,q,r,s,t)=(2,-3,-3,2,2) in Eqs. (<ref>), (<ref>) and (<ref>), where the discrete gauge symmetry Z_10× Z_4, with the corresponding charges given in Table <ref>, must be considered with the aim of getting the correct DM relic density when using D=10 to calculate the ALPs mass. It is straightforward to check that, for this model, ALPs provide 100% of the DM abundance provided v_σ≅2.03×10^11 GeV with g of order one. In more detail, for this benchmark point we have g_aγ≃ 1.14×10^-14[2.03×10^11 GeV/v_σ] GeV^-1, m_a≃0.29|g|^1/2[v_σ/2.03×10^11 GeV]^4 eV, with the neutrino mass spectrum given by M≃ζ×8.4 TeV; μ_S≃[η/10^-2]×4.96 keV; m_νlight≃[y^⊺ζ^-1(η/10^-2)(ζ^⊺)^-1y]× 2.11 eV. We note that for η≤10^-2 and the other coupling constants of order one, a suitable neutrino mass spectrum is achieved. In this case, we have also checked that the sterile neutrino gives a negligible contribution to the DM relic density, because the mixing angle between the active and sterile neutrinos is smaller than the limits established for considering ν_S as a DM candidate (10^-11≲sin^2(2θ)≲10^-8; see Ref. <cit.> for more details). For this model, we show in Figure <ref> a benchmark region denoted as M23a, where ALPs provide 100% of the DM abundance. For the case in which the DM abundance is made of ALPs and sterile neutrinos, the scenario changes slightly. We have chosen the case where the DM is made of ≈43% sterile neutrinos and ≈57% ALPs as an illustrative example. However, these fractions can take other values provided the DM abundance made of sterile neutrinos is ⪅50%, consistently with the constraints on its parameter space <cit.>. Following a procedure similar to that of the previous cases, we obtain g_aγ≃ 1.02×10^-14[2.28×10^11 GeV/v_σ] GeV^-1, m_a≃0.46|g|^1/2[v_σ/2.28×10^11 GeV]^4 eV, and M≃ζ×10.6 TeV; μ_S≃[η/10^-2]×7.1 keV; m_νlight≃[y^⊺ζ^-1(η/10^-2)(ζ^⊺)^-1y]×1.9 eV. In this case, for η≈10^-2 the sterile neutrino has m_ν_S≈7.1 keV. In particular, this mass for the sterile neutrino may explain the recently indicated emission lines at 3.5 keV from galaxy clusters and the Andromeda galaxy <cit.>. The benchmark region for this model is denoted as M23b in Figure <ref>. It is worth mentioning that, for both benchmark regions in these models, the constraints and prospects regarding lepton-flavor-violating processes are also similar to the ones in the (2,2) case. This happens because the contribution of the sterile neutrino to Br(ℓ_β→ℓ_αγ) is negligible, since G(m_ν_S^2/m_W^2)→0 for m_ν_S≪ m_W. Finally, for clarity, we show in Table <ref> an overview of the main results of all the considered models. Specifically, we show the energy scales for the neutrino masses and the ALPs parameter space for each ISS case.
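As a consistency check of the M23a benchmark numbers quoted above, the D=10 mass relation, the misalignment formula and the photon coupling can be combined at v_σ=2.03×10^11 GeV; the choices |g|=1 and C_aγ=2 are assumptions of this sketch:

```python
import math

M_PL, v_sigma, D = 2.44e18, 2.03e11, 10
lam = v_sigma / (math.sqrt(2.0) * M_PL)

m_a = D * M_PL * lam ** (D / 2 - 1) * 1e9                    # eV, with |g| = 1
omega_a = 0.16 * (1.0 / 3.0) * math.sqrt(m_a) * (v_sigma / 1e11) ** 2  # (Theta_i/pi)^2 = 1/3
g_agamma = (1.0 / 137.0) * 2.0 / (2.0 * math.pi * v_sigma)   # C_agamma = 2

print(m_a, omega_a, g_agamma)   # ~0.29 eV, ~0.12, ~1.1e-14 GeV^-1
```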
§ DISCUSSION AND SUMMARY

We have connected two interesting motivations for going beyond the standard model: neutrino masses and ALPs as dark matter. A natural scenario for achieving that is the ISS mechanism. In particular, we have considered the minimal versions of the ISS mechanism in agreement with all the neutrino constraints. In the considered framework, the mass scales of the ISS mechanism are generated by gravity-induced non-renormalizable operators when the scalar field containing the ALPs gets a vacuum expectation value, v_σ. Naturalness of these scales imposes strong constraints on these operators and, when combining them with the range of v_σ acceptable for ALPs, only two solutions are possible: (p,|q|) = (2,3) for 6×10^10≲ v_σ≲1×10^11 GeV and (p,|q|)= (3,5) for 2×10^13≲ v_σ≲8×10^13 GeV. This implies that the operators giving the M and μ_S scales can only belong to these two categories. Then, a simultaneous application of the constraints coming from the texture of the ISS mass matrix, the violation of unitarity, the mass of exotic charged leptons, the stability of the effective Lagrangian against gravitational effects and the ALPs parameter space (m_a and g_aγ) suitable for providing the total DM density almost fixes the rest of the terms in the Lagrangian, leaving only a few possibilities for all of the ISS cases. These constraints ultimately lead to a concrete prediction for the viable ALPs masses and ALPs-photon couplings, and also for the mass scale of the heavy neutrinos necessary to explain the neutrino oscillation data. In other words, both sectors are deeply connected, and the observation of a hypothetical signal of the ALPs existence within the proper regions would automatically point to the existence of heavy neutrino states at the TeV and multi-TeV scales. In the same way, the nonobservation of an ALP within such regions, or the observation of heavy neutrinos below the TeV scale, would disfavour the possible linkage between ALPs DM and neutrino masses suggested in this work. Among the minimal ISS mechanisms, the (2,2) and (3,3) ISS cases are quite similar. This is due to the fact that in both of them n_N_R=n_S_R, implying that the neutrino mass spectrum is characterized by only two mass scales, M and m_νlight. Thus, the results obtained are almost identical, although there is a slight difference in the value of g_aγ due to the presence of more fermions in the (3,3) ISS case. In both cases, we find two effective models, denoted as M22a,b and M33a,b in Table <ref>. Since there is no sterile neutrino in these cases, the total DM density is made of ALPs. We also remark that, although the ALPs in these models can decay to two photons and, in the (2,2) ISS mechanism, to two massless active neutrinos, they are cosmologically stable because those decays are strongly suppressed by factors of 1/M_Pl^2 and/or 1/v_σ^2. On the other hand, the (2,3) ISS case is phenomenologically more interesting due to the presence of a sterile neutrino in the mass spectrum, which implies that the DM density can be made of ALPs and ν_S. We have found a model satisfying all of the previously mentioned constraints and, at the same time, providing the total DM. Because sterile neutrinos in the (2,3) ISS mechanism can give, roughly speaking, at most ≈43% of the DM density, it is necessary that the remaining ≈57% of the DM be made of ALPs. It is also possible that ALPs give the total DM density; this occurs when the mixing angle between active and sterile neutrinos is suppressed enough to make the Dodelson-Widrow mechanism inefficient.
Both cases were studied in detail and denoted as M23a and M23b, respectively. Regarding the search for ALPs, the benchmark regions in Figure <ref> are out of reach of current and future experimental searches for axions/ALPs such as ALPS II, IAXO and CAST <cit.>, since these do not currently have enough sensitivity to probe the ALPs/axion-photon couplings and masses that are motivated in models with scales v_σ≳ 10^13 GeV. Nevertheless, for the (2,2) and (3,3) ISS cases the benchmark regions are remarkably within the target regions of proposed experiments based on LC circuits <cit.>, which are designed to search for QCD axions and ALPs and to cover many orders of magnitude in the parameter space of these particles, beyond the current astrophysical and laboratory limits <cit.>. Specifically, the ABRACADABRA experiment <cit.> may explore ALPs masses as low as ∼10^-10 eV for a coupling to photons of order ∼10^-18 GeV^-1, values which lie well below our benchmark regions (Figure <ref>). Finally, despite the fact that the neutrino mass spectrum is not completely predicted in the models found, the matrix scales in the ISS mechanism are estimated to be in agreement with the neutrino constraints [It is worth mentioning that for all the models the normal spectrum is the preferred neutrino mass spectrum <cit.>, which in turn implies that our scan results are also compatible with the cosmological upper bound on the neutrino mass sum <cit.>]. Moreover, we have numerically checked, in all models, that there are solutions with coupling constants of order one that also satisfy the LFV bounds and the unitarity condition. The LFV bounds can easily be satisfied without fine-tuning in the models discussed in this paper. Specifically, we have found that BR(μ→ eγ) in all cases is as small as ∼10^-20-10^-15, which is consistent with the current experimental limit BR(μ→ eγ) < 5.7×10^-13 <cit.> and with future sensitivities around 6×10^-14 <cit.>.

B. L. S. V. would like to thank Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), Brazil, for financial support. C. D. R. C. acknowledges the financial support given by the Departamento Administrativo de Ciencia, Tecnología e Innovación - COLCIENCIAS (doctoral scholarship 727-2015), Colombia, and the hospitality of the Laboratori Nazionali di Frascati, Italy, in the final stage of this work. O. Z. has been partly supported by UdeA/CODI grant IN650CE and by COLCIENCIAS through Grant No. 111-565-84269.
http://arxiv.org/abs/1704.08340v2
{ "authors": [ "C. D. R. Carvajal", "B. L. Sánchez-Vega", "O. Zapata" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170426202532", "title": "Linking axionlike dark matter to neutrino masses" }